From patchwork Fri Jan 11 05:12:51 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Pingfan Liu
X-Patchwork-Id: 10757357
From: Pingfan Liu
To: linux-kernel@vger.kernel.org
Cc: Pingfan Liu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 "H. Peter Anvin", Dave Hansen, Andy Lutomirski, Peter Zijlstra,
 "Rafael J. Wysocki", Len Brown, Yinghai Lu, Tejun Heo, Chao Fan,
 Baoquan He, Juergen Gross, Andrew Morton, Mike Rapoport,
 Vlastimil Babka, Michal Hocko, x86@kernel.org,
 linux-acpi@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCHv2 1/7] x86/mm: concentrate the code to memblock allocator enabled
Date: Fri, 11 Jan 2019 13:12:51 +0800
Message-Id: <1547183577-20309-2-git-send-email-kernelfans@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1547183577-20309-1-git-send-email-kernelfans@gmail.com>
References: <1547183577-20309-1-git-send-email-kernelfans@gmail.com>

This patch identifies the point where memblock allocation starts. It has
no functional change.

Signed-off-by: Pingfan Liu
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H. Peter Anvin"
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Peter Zijlstra
Cc: "Rafael J. Wysocki"
Cc: Len Brown
Cc: Yinghai Lu
Cc: Tejun Heo
Cc: Chao Fan
Cc: Baoquan He
Cc: Juergen Gross
Cc: Andrew Morton
Cc: Mike Rapoport
Cc: Vlastimil Babka
Cc: Michal Hocko
Cc: x86@kernel.org
Cc: linux-acpi@vger.kernel.org
Cc: linux-mm@kvack.org
---
 arch/x86/kernel/setup.c | 54 ++++++++++++++++++++++++-------------------------
 1 file changed, 26 insertions(+), 28 deletions(-)

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index d494b9b..ac432ae 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -962,29 +962,6 @@ void __init setup_arch(char **cmdline_p)
 	if (efi_enabled(EFI_BOOT))
 		efi_memblock_x86_reserve_range();
 
-#ifdef CONFIG_MEMORY_HOTPLUG
-	/*
-	 * Memory used by the kernel cannot be hot-removed because Linux
-	 * cannot migrate the kernel pages. When memory hotplug is
-	 * enabled, we should prevent memblock from allocating memory
-	 * for the kernel.
-	 *
-	 * ACPI SRAT records all hotpluggable memory ranges. But before
-	 * SRAT is parsed, we don't know about it.
-	 *
-	 * The kernel image is loaded into memory at very early time. We
-	 * cannot prevent this anyway. So on NUMA system, we set any
-	 * node the kernel resides in as un-hotpluggable.
-	 *
-	 * Since on modern servers, one node could have double-digit
-	 * gigabytes memory, we can assume the memory around the kernel
-	 * image is also un-hotpluggable. So before SRAT is parsed, just
-	 * allocate memory near the kernel image to try the best to keep
-	 * the kernel away from hotpluggable memory.
-	 */
-	if (movable_node_is_enabled())
-		memblock_set_bottom_up(true);
-#endif
 
 	x86_report_nx();
 
@@ -1096,9 +1073,6 @@ void __init setup_arch(char **cmdline_p)
 
 	cleanup_highmap();
 
-	memblock_set_current_limit(ISA_END_ADDRESS);
-	e820__memblock_setup();
-
 	reserve_bios_regions();
 
 	if (efi_enabled(EFI_MEMMAP)) {
@@ -1113,6 +1087,8 @@ void __init setup_arch(char **cmdline_p)
 		efi_reserve_boot_services();
 	}
 
+	memblock_set_current_limit(0, ISA_END_ADDRESS, false);
+	e820__memblock_setup();
 	/* preallocate 4k for mptable mpc */
 	e820__memblock_alloc_reserved_mpc_new();
 
@@ -1130,7 +1106,31 @@ void __init setup_arch(char **cmdline_p)
 	trim_platform_memory_ranges();
 	trim_low_memory_range();
 
+#ifdef CONFIG_MEMORY_HOTPLUG
+	/*
+	 * Memory used by the kernel cannot be hot-removed because Linux
+	 * cannot migrate the kernel pages. When memory hotplug is
+	 * enabled, we should prevent memblock from allocating memory
+	 * for the kernel.
+	 *
+	 * ACPI SRAT records all hotpluggable memory ranges. But before
+	 * SRAT is parsed, we don't know about it.
+	 *
+	 * The kernel image is loaded into memory at very early time. We
+	 * cannot prevent this anyway. So on NUMA system, we set any
+	 * node the kernel resides in as un-hotpluggable.
+	 *
+	 * Since on modern servers, one node could have double-digit
+	 * gigabytes memory, we can assume the memory around the kernel
+	 * image is also un-hotpluggable. So before SRAT is parsed, just
+	 * allocate memory near the kernel image to try the best to keep
+	 * the kernel away from hotpluggable memory.
+	 */
+	if (movable_node_is_enabled())
+		memblock_set_bottom_up(true);
+#endif
 	init_mem_mapping();
+	memblock_set_current_limit(get_max_mapped());
 
 	idt_setup_early_pf();
 
@@ -1145,8 +1145,6 @@ void __init setup_arch(char **cmdline_p)
 	 */
 	mmu_cr4_features = __read_cr4() & ~X86_CR4_PCIDE;
 
-	memblock_set_current_limit(get_max_mapped());
-
 	/*
 	 * NOTE: On x86-32, only from this point on, fixmaps are ready for use.
 	 */

From patchwork Fri Jan 11 05:12:52 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Pingfan Liu
X-Patchwork-Id: 10757361
From: Pingfan Liu
To: linux-kernel@vger.kernel.org
Cc: Pingfan Liu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 "H. Peter Anvin", Dave Hansen, Andy Lutomirski, Peter Zijlstra,
 "Rafael J. Wysocki", Len Brown, Yinghai Lu, Tejun Heo, Chao Fan,
 Baoquan He, Juergen Gross, Andrew Morton, Mike Rapoport,
 Vlastimil Babka, Michal Hocko, x86@kernel.org,
 linux-acpi@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCHv2 2/7] acpi: change the topo of acpi_table_upgrade()
Date: Fri, 11 Jan 2019 13:12:52 +0800
Message-Id: <1547183577-20309-3-git-send-email-kernelfans@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1547183577-20309-1-git-send-email-kernelfans@gmail.com>
References: <1547183577-20309-1-git-send-email-kernelfans@gmail.com>

The current acpi_table_upgrade() relies on initrd_start, but this variable
is only valid after relocate_initrd(). There is a requirement to extract
the ACPI info from the initrd before the memblock allocator can work (see
[2/4]), hence acpi_table_upgrade() needs to accept the input parameters
directly.

Signed-off-by: Pingfan Liu
Acked-by: "Rafael J. Wysocki"
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H. Peter Anvin"
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Peter Zijlstra
Cc: "Rafael J. Wysocki"
Cc: Len Brown
Cc: Yinghai Lu
Cc: Tejun Heo
Cc: Chao Fan
Cc: Baoquan He
Cc: Juergen Gross
Cc: Andrew Morton
Cc: Mike Rapoport
Cc: Vlastimil Babka
Cc: Michal Hocko
Cc: x86@kernel.org
Cc: linux-acpi@vger.kernel.org
Cc: linux-mm@kvack.org
---
 arch/arm64/kernel/setup.c | 2 +-
 arch/x86/kernel/setup.c   | 2 +-
 drivers/acpi/tables.c     | 4 +---
 include/linux/acpi.h      | 4 ++--
 4 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index f4fc1e0..bc4b47d 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -315,7 +315,7 @@ void __init setup_arch(char **cmdline_p)
 	paging_init();
 	efi_apply_persistent_mem_reservations();
 
-	acpi_table_upgrade();
+	acpi_table_upgrade((void *)initrd_start, initrd_end - initrd_start);
 
 	/* Parse the ACPI tables for possible boot-time configuration */
 	acpi_boot_table_init();

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index ac432ae..dc8fc5d 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1172,8 +1172,8 @@ void __init setup_arch(char **cmdline_p)
 
 	reserve_initrd();
 
-	acpi_table_upgrade();
+	acpi_table_upgrade((void *)initrd_start, initrd_end - initrd_start);
 
 	vsmp_init();
 
 	io_delay_init();

diff --git a/drivers/acpi/tables.c b/drivers/acpi/tables.c
index 61203ee..84e0a79 100644
--- a/drivers/acpi/tables.c
+++ b/drivers/acpi/tables.c
@@ -471,10 +471,8 @@ static DECLARE_BITMAP(acpi_initrd_installed, NR_ACPI_INITRD_TABLES);
 
 #define MAP_CHUNK_SIZE   (NR_FIX_BTMAPS << PAGE_SHIFT)
 
-void __init acpi_table_upgrade(void)
+void __init acpi_table_upgrade(void *data, size_t size)
 {
-	void *data = (void *)initrd_start;
-	size_t size = initrd_end - initrd_start;
 	int sig, no, table_nr = 0, total_offset = 0;
 	long offset = 0;
 	struct acpi_table_header *table;

diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index ed80f14..0b6e0b6 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -1254,9 +1254,9 @@ acpi_graph_get_remote_endpoint(const struct fwnode_handle *fwnode,
 #endif
 
 #ifdef CONFIG_ACPI_TABLE_UPGRADE
-void acpi_table_upgrade(void);
+void acpi_table_upgrade(void *data, size_t size);
 #else
-static inline void acpi_table_upgrade(void) { }
+static inline void acpi_table_upgrade(void *data, size_t size) { }
 #endif
 
 #if defined(CONFIG_ACPI) && defined(CONFIG_ACPI_WATCHDOG)

From patchwork Fri Jan 11 05:12:53 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Pingfan Liu
X-Patchwork-Id: 10757365
From: Pingfan Liu
To: linux-kernel@vger.kernel.org
Cc: Pingfan Liu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 "H. Peter Anvin", Dave Hansen, Andy Lutomirski, Peter Zijlstra,
 "Rafael J. Wysocki", Len Brown, Yinghai Lu, Tejun Heo, Chao Fan,
 Baoquan He, Juergen Gross, Andrew Morton, Mike Rapoport,
 Vlastimil Babka, Michal Hocko, x86@kernel.org,
 linux-acpi@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCHv2 3/7] mm/memblock: introduce allocation boundary for tracing purpose
Date: Fri, 11 Jan 2019 13:12:53 +0800
Message-Id: <1547183577-20309-4-git-send-email-kernelfans@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1547183577-20309-1-git-send-email-kernelfans@gmail.com>
References: <1547183577-20309-1-git-send-email-kernelfans@gmail.com>

During boot, there is a requirement to tell whether a series of function
calls will consume memory or not. For some reason, a temporary memory
resource can be loaned to those functions through the memblock allocator,
but at a checkpoint, all of the loaned memory should have been returned.

A typical usage style:
  1. Find a usable range with memblock_find_in_range(), say [A, B].
  2. Before calling the series of functions, call
     memblock_set_current_limit(A, B, true).
  3. Call the functions.
  4. Call memblock_find_in_range(A, B, B-A, 1); if it fails, some memory
     has not been returned.
  5. Reset the original limit.

E.g. in the case of movable memory, some ACPI routines should be called,
and they are not allowed to own any movable memory. Although these
functions do not consume memory at present, they may do so later if
changed without awareness. With the above method, the allocation can be
detected, and pr_warn() can ask people to resolve it.

Signed-off-by: Pingfan Liu
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H. Peter Anvin"
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Peter Zijlstra
Cc: "Rafael J. Wysocki"
Cc: Len Brown
Cc: Yinghai Lu
Cc: Tejun Heo
Cc: Chao Fan
Cc: Baoquan He
Cc: Juergen Gross
Cc: Andrew Morton
Cc: Mike Rapoport
Cc: Vlastimil Babka
Cc: Michal Hocko
Cc: x86@kernel.org
Cc: linux-acpi@vger.kernel.org
Cc: linux-mm@kvack.org
---
 arch/arm/mm/init.c              |  3 ++-
 arch/arm/mm/mmu.c               |  4 ++--
 arch/arm/mm/nommu.c             |  2 +-
 arch/csky/kernel/setup.c        |  2 +-
 arch/microblaze/mm/init.c       |  2 +-
 arch/mips/kernel/setup.c        |  2 +-
 arch/powerpc/mm/40x_mmu.c       |  6 ++++--
 arch/powerpc/mm/44x_mmu.c       |  2 +-
 arch/powerpc/mm/8xx_mmu.c       |  2 +-
 arch/powerpc/mm/fsl_booke_mmu.c |  5 +++--
 arch/powerpc/mm/hash_utils_64.c |  4 ++--
 arch/powerpc/mm/init_32.c       |  2 +-
 arch/powerpc/mm/pgtable-radix.c |  2 +-
 arch/powerpc/mm/ppc_mmu_32.c    |  8 ++++++--
 arch/powerpc/mm/tlb_nohash.c    |  6 ++++--
 arch/unicore32/mm/mmu.c         |  2 +-
 arch/x86/kernel/setup.c         |  2 +-
 arch/xtensa/mm/init.c           |  2 +-
 include/linux/memblock.h        | 10 +++++++---
 mm/memblock.c                   | 23 ++++++++++++++++++-----
 20 files changed, 59 insertions(+), 32 deletions(-)

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 32e4845..58a4342 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -93,7 +93,8 @@ __tagtable(ATAG_INITRD2, parse_tag_initrd2);
 static void __init find_limits(unsigned long *min, unsigned long *max_low,
 			       unsigned long *max_high)
 {
-	*max_low = PFN_DOWN(memblock_get_current_limit());
+	memblock_get_current_limit(NULL, max_low);
+	*max_low = PFN_DOWN(*max_low);
 	*min = PFN_UP(memblock_start_of_DRAM());
 	*max_high = PFN_DOWN(memblock_end_of_DRAM());
 }

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index f5cc1cc..9025418 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1240,7 +1240,7 @@ void __init adjust_lowmem_bounds(void)
 		}
 	}
 
-	memblock_set_current_limit(memblock_limit);
+	memblock_set_current_limit(0, memblock_limit, false);
 }
 
 static inline void prepare_page_table(void)
@@ -1625,7 +1625,7 @@ void __init paging_init(const struct machine_desc *mdesc)
 	prepare_page_table();
 	map_lowmem();
-	memblock_set_current_limit(arm_lowmem_limit);
+	memblock_set_current_limit(0, arm_lowmem_limit, false);
 	dma_contiguous_remap();
 	early_fixmap_shutdown();
 	devicemaps_init(mdesc);

diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c
index 7d67c70..721535c 100644
--- a/arch/arm/mm/nommu.c
+++ b/arch/arm/mm/nommu.c
@@ -138,7 +138,7 @@ void __init adjust_lowmem_bounds(void)
 	adjust_lowmem_bounds_mpu();
 	end = memblock_end_of_DRAM();
 	high_memory = __va(end - 1) + 1;
-	memblock_set_current_limit(end);
+	memblock_set_current_limit(0, end, false);
 }
 
 /*

diff --git a/arch/csky/kernel/setup.c b/arch/csky/kernel/setup.c
index dff8b89..e6f88bf 100644
--- a/arch/csky/kernel/setup.c
+++ b/arch/csky/kernel/setup.c
@@ -100,7 +100,7 @@ static void __init csky_memblock_init(void)
 	highend_pfn = max_pfn;
 #endif
 
-	memblock_set_current_limit(PFN_PHYS(max_low_pfn));
+	memblock_set_current_limit(0, PFN_PHYS(max_low_pfn), false);
 
 	dma_contiguous_reserve(0);

diff --git a/arch/microblaze/mm/init.c b/arch/microblaze/mm/init.c
index b17fd8a..cee99da 100644
--- a/arch/microblaze/mm/init.c
+++ b/arch/microblaze/mm/init.c
@@ -353,7 +353,7 @@ asmlinkage void __init mmu_init(void)
 	/* Shortly after that, the entire linear mapping will be available */
 	/* This will also cause that unflatten device tree will be allocated
 	 * inside 768MB limit */
-	memblock_set_current_limit(memory_start + lowmem_size - 1);
+	memblock_set_current_limit(0, memory_start + lowmem_size - 1, false);
 }
 
 /* This is only called until mem_init is done. */

diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
index 8c6c48ed..62dabe1 100644
--- a/arch/mips/kernel/setup.c
+++ b/arch/mips/kernel/setup.c
@@ -862,7 +862,7 @@ static void __init arch_mem_init(char **cmdline_p)
 	 * with memblock_reserve; memblock_alloc* can be used
 	 * only after this point
 	 */
-	memblock_set_current_limit(PFN_PHYS(max_low_pfn));
+	memblock_set_current_limit(0, PFN_PHYS(max_low_pfn), false);
 
 #ifdef CONFIG_PROC_VMCORE
 	if (setup_elfcorehdr && setup_elfcorehdr_size) {

diff --git a/arch/powerpc/mm/40x_mmu.c b/arch/powerpc/mm/40x_mmu.c
index 61ac468..427bb56 100644
--- a/arch/powerpc/mm/40x_mmu.c
+++ b/arch/powerpc/mm/40x_mmu.c
@@ -141,7 +141,7 @@ unsigned long __init mmu_mapin_ram(unsigned long top)
 	 * coverage with normal-sized pages (or other reasons) do not
 	 * attempt to allocate outside the allowed range.
 	 */
-	memblock_set_current_limit(mapped);
+	memblock_set_current_limit(0, mapped, false);
 
 	return mapped;
 }
@@ -155,5 +155,7 @@ void setup_initial_memory_limit(phys_addr_t first_memblock_base,
 	BUG_ON(first_memblock_base != 0);
 
 	/* 40x can only access 16MB at the moment (see head_40x.S) */
-	memblock_set_current_limit(min_t(u64, first_memblock_size, 0x00800000));
+	memblock_set_current_limit(0,
+			min_t(u64, first_memblock_size, 0x00800000),
+			false);
 }

diff --git a/arch/powerpc/mm/44x_mmu.c b/arch/powerpc/mm/44x_mmu.c
index 12d9251..3cf127d 100644
--- a/arch/powerpc/mm/44x_mmu.c
+++ b/arch/powerpc/mm/44x_mmu.c
@@ -225,7 +225,7 @@ void setup_initial_memory_limit(phys_addr_t first_memblock_base,
 	/* 44x has a 256M TLB entry pinned at boot */
 	size = (min_t(u64, first_memblock_size, PPC_PIN_SIZE));
-	memblock_set_current_limit(first_memblock_base + size);
+	memblock_set_current_limit(0, first_memblock_base + size, false);
 }
 
 #ifdef CONFIG_SMP

diff --git a/arch/powerpc/mm/8xx_mmu.c b/arch/powerpc/mm/8xx_mmu.c
index 01b7f51..c75bca6 100644
--- a/arch/powerpc/mm/8xx_mmu.c
+++ b/arch/powerpc/mm/8xx_mmu.c
@@ -135,7 +135,7 @@ unsigned long __init mmu_mapin_ram(unsigned long top)
 	 * attempt to allocate outside the allowed range.
 	 */
 	if (mapped)
-		memblock_set_current_limit(mapped);
+		memblock_set_current_limit(0, mapped, false);
 
 	block_mapped_ram = mapped;

diff --git a/arch/powerpc/mm/fsl_booke_mmu.c b/arch/powerpc/mm/fsl_booke_mmu.c
index 080d49b..3be24b8 100644
--- a/arch/powerpc/mm/fsl_booke_mmu.c
+++ b/arch/powerpc/mm/fsl_booke_mmu.c
@@ -252,7 +252,8 @@ void __init adjust_total_lowmem(void)
 	pr_cont("%lu Mb, residual: %dMb\n", tlbcam_sz(tlbcam_index - 1) >> 20,
 		(unsigned int)((total_lowmem - __max_low_memory) >> 20));
 
-	memblock_set_current_limit(memstart_addr + __max_low_memory);
+	memblock_set_current_limit(0,
+			memstart_addr + __max_low_memory, false);
 }
 
 void setup_initial_memory_limit(phys_addr_t first_memblock_base,
@@ -261,7 +262,7 @@ void setup_initial_memory_limit(phys_addr_t first_memblock_base,
 	phys_addr_t limit = first_memblock_base + first_memblock_size;
 
 	/* 64M mapped initially according to head_fsl_booke.S */
-	memblock_set_current_limit(min_t(u64, limit, 0x04000000));
+	memblock_set_current_limit(0, min_t(u64, limit, 0x04000000), false);
 }
 
 #ifdef CONFIG_RELOCATABLE

diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 0cc7fbc..30fba80 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -925,7 +925,7 @@ static void __init htab_initialize(void)
 		BUG_ON(htab_bolt_mapping(base, base + size, __pa(base),
 				prot, mmu_linear_psize, mmu_kernel_ssize));
 	}
-	memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);
+	memblock_set_current_limit(0, MEMBLOCK_ALLOC_ANYWHERE, false);
 
 	/*
 	 * If we have a memory_limit and we've allocated TCEs then we need to
@@ -1867,7 +1867,7 @@ void hash__setup_initial_memory_limit(phys_addr_t first_memblock_base,
 		ppc64_rma_size = min_t(u64, ppc64_rma_size, 0x40000000);
 
 		/* Finally limit subsequent allocations */
-		memblock_set_current_limit(ppc64_rma_size);
+		memblock_set_current_limit(0, ppc64_rma_size, false);
 	} else {
ppc64_rma_size = ULONG_MAX; } diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c index 3e59e5d..863d710 100644 --- a/arch/powerpc/mm/init_32.c +++ b/arch/powerpc/mm/init_32.c @@ -183,5 +183,5 @@ void __init MMU_init(void) #endif /* Shortly after that, the entire linear mapping will be available */ - memblock_set_current_limit(lowmem_end_addr); + memblock_set_current_limit(0, lowmem_end_addr, false); } diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c index 9311560..8cd5f2d 100644 --- a/arch/powerpc/mm/pgtable-radix.c +++ b/arch/powerpc/mm/pgtable-radix.c @@ -603,7 +603,7 @@ void __init radix__early_init_mmu(void) radix_init_pseries(); } - memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE); + memblock_set_current_limit(0, MEMBLOCK_ALLOC_ANYWHERE, false); radix_init_iamr(); radix_init_pgtable(); diff --git a/arch/powerpc/mm/ppc_mmu_32.c b/arch/powerpc/mm/ppc_mmu_32.c index f6f575b..80927ad 100644 --- a/arch/powerpc/mm/ppc_mmu_32.c +++ b/arch/powerpc/mm/ppc_mmu_32.c @@ -283,7 +283,11 @@ void setup_initial_memory_limit(phys_addr_t first_memblock_base, /* 601 can only access 16MB at the moment */ if (PVR_VER(mfspr(SPRN_PVR)) == 1) - memblock_set_current_limit(min_t(u64, first_memblock_size, 0x01000000)); + memblock_set_current_limit(0, + min_t(u64, first_memblock_size, 0x01000000), + false); else /* Anything else has 256M mapped */ - memblock_set_current_limit(min_t(u64, first_memblock_size, 0x10000000)); + memblock_set_current_limit(0, + min_t(u64, first_memblock_size, 0x10000000), + false); } diff --git a/arch/powerpc/mm/tlb_nohash.c b/arch/powerpc/mm/tlb_nohash.c index ae5d568..d074362 100644 --- a/arch/powerpc/mm/tlb_nohash.c +++ b/arch/powerpc/mm/tlb_nohash.c @@ -735,7 +735,7 @@ static void __init early_mmu_set_memory_limit(void) * reduces the memory available to Linux. We need to * do this because highmem is not supported on 64-bit. 
*/ - memblock_enforce_memory_limit(linear_map_top); + memblock_enforce_memory_limit(0, linear_map_top, false); } #endif @@ -792,7 +792,9 @@ void setup_initial_memory_limit(phys_addr_t first_memblock_base, ppc64_rma_size = min_t(u64, first_memblock_size, 0x40000000); /* Finally limit subsequent allocations */ - memblock_set_current_limit(first_memblock_base + ppc64_rma_size); + memblock_set_current_limit(0, + first_memblock_base + ppc64_rma_size, + false); } #else /* ! CONFIG_PPC64 */ void __init early_init_mmu(void) diff --git a/arch/unicore32/mm/mmu.c b/arch/unicore32/mm/mmu.c index 040a8c2..6d62529 100644 --- a/arch/unicore32/mm/mmu.c +++ b/arch/unicore32/mm/mmu.c @@ -286,7 +286,7 @@ static void __init sanity_check_meminfo(void) int i, j; lowmem_limit = __pa(vmalloc_min - 1) + 1; - memblock_set_current_limit(lowmem_limit); + memblock_set_current_limit(0, lowmem_limit, false); for (i = 0, j = 0; i < meminfo.nr_banks; i++) { struct membank *bank = &meminfo.bank[j]; diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c index dc8fc5d..a0122cd 100644 --- a/arch/x86/kernel/setup.c +++ b/arch/x86/kernel/setup.c @@ -1130,7 +1130,7 @@ void __init setup_arch(char **cmdline_p) memblock_set_bottom_up(true); #endif init_mem_mapping(); - memblock_set_current_limit(get_max_mapped()); + memblock_set_current_limit(0, get_max_mapped(), false); idt_setup_early_pf(); diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c index 30a48bb..b924387 100644 --- a/arch/xtensa/mm/init.c +++ b/arch/xtensa/mm/init.c @@ -60,7 +60,7 @@ void __init bootmem_init(void) max_pfn = PFN_DOWN(memblock_end_of_DRAM()); max_low_pfn = min(max_pfn, MAX_LOW_PFN); - memblock_set_current_limit(PFN_PHYS(max_low_pfn)); + memblock_set_current_limit(0, PFN_PHYS(max_low_pfn), false); dma_contiguous_reserve(PFN_PHYS(max_low_pfn)); memblock_dump_all(); diff --git a/include/linux/memblock.h b/include/linux/memblock.h index aee299a..49676f0 100644 --- a/include/linux/memblock.h +++ 
b/include/linux/memblock.h @@ -88,6 +88,8 @@ struct memblock_type { */ struct memblock { bool bottom_up; /* is bottom up direction? */ + bool enforce_checking; + phys_addr_t start_limit; phys_addr_t current_limit; struct memblock_type memory; struct memblock_type reserved; @@ -482,12 +484,14 @@ static inline void memblock_dump_all(void) * memblock_set_current_limit - Set the current allocation limit to allow * limiting allocations to what is currently * accessible during boot - * @limit: New limit value (physical address) + * [start_limit, end_limit]: New limit value (physical address) + * enforcing: whether check against the limit boundary or not */ -void memblock_set_current_limit(phys_addr_t limit); +void memblock_set_current_limit(phys_addr_t start_limit, + phys_addr_t end_limit, bool enforcing); -phys_addr_t memblock_get_current_limit(void); +bool memblock_get_current_limit(phys_addr_t *start, phys_addr_t *end); /* * pfn conversion functions diff --git a/mm/memblock.c b/mm/memblock.c index 81ae63c..b792be0 100644 --- a/mm/memblock.c +++ b/mm/memblock.c @@ -116,6 +116,8 @@ struct memblock memblock __initdata_memblock = { #endif .bottom_up = false, + .enforce_checking = false, + .start_limit = 0, .current_limit = MEMBLOCK_ALLOC_ANYWHERE, }; @@ -261,8 +263,11 @@ phys_addr_t __init_memblock memblock_find_in_range_node(phys_addr_t size, { phys_addr_t kernel_end, ret; + if (unlikely(memblock.enforce_checking)) { + start = memblock.start_limit; + end = memblock.current_limit; /* pump up @end */ - if (end == MEMBLOCK_ALLOC_ACCESSIBLE) + } else if (end == MEMBLOCK_ALLOC_ACCESSIBLE) end = memblock.current_limit; /* avoid allocating the first page */ @@ -1826,14 +1831,22 @@ void __init_memblock memblock_trim_memory(phys_addr_t align) } } -void __init_memblock memblock_set_current_limit(phys_addr_t limit) +void __init_memblock memblock_set_current_limit(phys_addr_t start, + phys_addr_t end, bool enforcing) { - memblock.current_limit = limit; + memblock.start_limit = 
start; + memblock.current_limit = end; + memblock.enforce_checking = enforcing; } -phys_addr_t __init_memblock memblock_get_current_limit(void) +bool __init_memblock memblock_get_current_limit(phys_addr_t *start, + phys_addr_t *end) { - return memblock.current_limit; + if (start) + *start = memblock.start_limit; + if (end) + *end = memblock.current_limit; + return memblock.enforce_checking; } static void __init_memblock memblock_dump(struct memblock_type *type) From patchwork Fri Jan 11 05:12:54 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pingfan Liu X-Patchwork-Id: 10757367 From: Pingfan Liu To: linux-kernel@vger.kernel.org Cc: Pingfan Liu , Thomas Gleixner , Ingo Molnar , Borislav Petkov , "H. Peter Anvin" , Dave Hansen , Andy Lutomirski , Peter Zijlstra , "Rafael J.
Wysocki" , Len Brown , Yinghai Lu , Tejun Heo , Chao Fan , Baoquan He , Juergen Gross , Andrew Morton , Mike Rapoport , Vlastimil Babka , Michal Hocko , x86@kernel.org, linux-acpi@vger.kernel.org, linux-mm@kvack.org Subject: [PATCHv2 4/7] x86/setup: parse acpi to get hotplug info before init_mem_mapping() Date: Fri, 11 Jan 2019 13:12:54 +0800 Message-Id: <1547183577-20309-5-git-send-email-kernelfans@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1547183577-20309-1-git-send-email-kernelfans@gmail.com> References: <1547183577-20309-1-git-send-email-kernelfans@gmail.com> At present, memblock bottom-up allocation keeps allocations off the movable node only with high probability, not reliably. Once the hotplug info has been parsed, however, the memblock allocator can step around the movable node by itself. This patch moves the parsing step forward, to just before the point where the memblock allocator starts working. For how the memblock allocator steps around the movable node, see the check on memblock_is_hotpluggable() in __next_mem_range(). Later in this series, the bottom-up allocation style can be removed on x86_64. Signed-off-by: Pingfan Liu Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: "H. Peter Anvin" Cc: Dave Hansen Cc: Andy Lutomirski Cc: Peter Zijlstra Cc: "Rafael J.
Wysocki" Cc: Len Brown Cc: Yinghai Lu Cc: Tejun Heo Cc: Chao Fan Cc: Baoquan He Cc: Juergen Gross Cc: Andrew Morton Cc: Mike Rapoport Cc: Vlastimil Babka Cc: Michal Hocko Cc: x86@kernel.org Cc: linux-acpi@vger.kernel.org Cc: linux-mm@kvack.org --- arch/x86/kernel/setup.c | 39 ++++++++++++++++++++++++++++++--------- include/linux/acpi.h | 1 + 2 files changed, 31 insertions(+), 9 deletions(-) diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c index a0122cd..9b57e01 100644 --- a/arch/x86/kernel/setup.c +++ b/arch/x86/kernel/setup.c @@ -804,6 +804,35 @@ dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p) return 0; } +static void early_acpi_parse(void) +{ + phys_addr_t start, end, orig_start, orig_end; + bool enforcing; + + enforcing = memblock_get_current_limit(&orig_start, &orig_end); + /* find a 16MB slot for temporary usage by the following routines. */ + start = memblock_find_in_range(ISA_END_ADDRESS, + max_pfn, 1 << 24, 1); + end = start + 1 + (1 << 24); + memblock_set_current_limit(start, end, true); +#ifdef CONFIG_BLK_DEV_INITRD + if (get_ramdisk_size()) + acpi_table_upgrade(__va(get_ramdisk_image()), + get_ramdisk_size()); +#endif + /* + * Parse the ACPI tables for possible boot-time SMP configuration. + */ + acpi_boot_table_init(); + early_acpi_boot_init(); + initmem_init(); + /* check whether memory is returned or not */ + start = memblock_find_in_range(start, end, 1<<24, 1); + if (!start) + pr_warn("the above acpi routines change and consume memory\n"); + memblock_set_current_limit(orig_start, orig_end, enforcing); +} + /* * Determine if we were loaded by an EFI loader. 
If so, then we have also been * passed the efi memmap, systab, etc., so we should use these data structures @@ -1129,6 +1158,7 @@ void __init setup_arch(char **cmdline_p) if (movable_node_is_enabled()) memblock_set_bottom_up(true); #endif + early_acpi_parse(); init_mem_mapping(); memblock_set_current_limit(0, get_max_mapped(), false); @@ -1173,21 +1203,12 @@ void __init setup_arch(char **cmdline_p) reserve_initrd(); - acpi_table_upgrade((void *)initrd_start, initrd_end - initrd_start); vsmp_init(); io_delay_init(); early_platform_quirks(); - /* - * Parse the ACPI tables for possible boot-time SMP configuration. - */ - acpi_boot_table_init(); - - early_acpi_boot_init(); - - initmem_init(); dma_contiguous_reserve(max_pfn_mapped << PAGE_SHIFT); /* diff --git a/include/linux/acpi.h b/include/linux/acpi.h index 0b6e0b6..4f6b391 100644 --- a/include/linux/acpi.h +++ b/include/linux/acpi.h @@ -235,6 +235,7 @@ int acpi_mps_check (void); int acpi_numa_init (void); int acpi_table_init (void); +void acpi_tb_terminate(void); int acpi_table_parse(char *id, acpi_tbl_table_handler handler); int __init acpi_table_parse_entries(char *id, unsigned long table_size, int entry_id, From patchwork Fri Jan 11 05:12:55 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pingfan Liu X-Patchwork-Id: 10757373 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7C01C14E5 for ; Fri, 11 Jan 2019 05:13:57 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 6DA75298E5 for ; Fri, 11 Jan 2019 05:13:57 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 61AE8299BE; Fri, 11 Jan 2019 05:13:57 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: 
X-Spam-Status: No, score=-3.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FROM,MAILING_LIST_MULTI,RCVD_IN_DNSWL_NONE autolearn=unavailable version=3.3.1 Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id D5735298E5 for ; Fri, 11 Jan 2019 05:13:56 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id C92108E0008; Fri, 11 Jan 2019 00:13:55 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id C41788E0001; Fri, 11 Jan 2019 00:13:55 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B59528E0008; Fri, 11 Jan 2019 00:13:55 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from mail-pf1-f198.google.com (mail-pf1-f198.google.com [209.85.210.198]) by kanga.kvack.org (Postfix) with ESMTP id 763AB8E0001 for ; Fri, 11 Jan 2019 00:13:55 -0500 (EST) Received: by mail-pf1-f198.google.com with SMTP id i3so9496944pfj.4 for ; Thu, 10 Jan 2019 21:13:55 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:dkim-signature:from:to:cc:subject:date :message-id:in-reply-to:references; bh=HyITMNdELPclxNoKcR1zVio3u+9JmLHOG25CjPlIULI=; b=U10JcYYK0s8QDIe+e1GOm4EAF326v/XF+zdiiNWcsvyMtQ5XQz2bzCwoMVEdapbUfl gPjJxMrTO8C+HIt1aT6Shdd1ElFZxd53/eMcE4J+SS3GiimAlU+SBKDA4q2bUc+4Cbdp ds0LoXhV5rJXgfXRxjg9NmCK6EdrsnYDTUy5Zw7jdDPkKyubkcsk5S1/mRgrYWPu7jc9 5ZpVkF5zsDXftIQH6dJJQW3juh1s5nL/LEnhZB3MBOFYPJ3VTFEdFZ4zHGH/FXYGu4fy HbjYcda30526NLVRtOsKdyLgXRPHsPK/ioz0FIbblF5pM7L+NwmsHUpH8rP7LqUhXY+I 9fQg== X-Gm-Message-State: AJcUukeE9g0Dj5hy0223rRnPlklH3B0EEq9EA14V5qvEFI8skGzXXLbG bDYWKIuSVPoXNTVF/pJmSjKkIxDrrldv5aX4R7Wz1ToDJeB+oGlSLd0Lk9tqqhqWM09U3gHv2Kg LmpFKhZS7/yiDmrH8JZ1D2erSfUcLfW5ENyhAgwuMXQEyOc+c59FH24JKCl7xG0C9Z0smMwPZgv 
kLGi/ibEP6f/9mchVxKbzR/Z5t3tlnN7Mpp1ZwwLvq2HJg9D8Vt71rXlNtNFL3jUHWh3EcG6zKK yq+kigmX/HN0z9G+muZ40syuhLo64VWEt1pTHo1hAFvRnQK57Ud5t/Hk/bk6s3WeVHsdMFBOB+m bh651fPHCBkLH9qWsUOhWqZcaLyrHLotvFmH7vtpo6O2G7ml64z1FUs8+I4BgWfNmXpRpE68j6K g X-Received: by 2002:a65:5c4b:: with SMTP id v11mr12021496pgr.333.1547183635130; Thu, 10 Jan 2019 21:13:55 -0800 (PST) X-Received: by 2002:a65:5c4b:: with SMTP id v11mr12021463pgr.333.1547183634198; Thu, 10 Jan 2019 21:13:54 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1547183634; cv=none; d=google.com; s=arc-20160816; b=PqDrDeDIiU29BILaig7dZdQprhmOj9QWl1Qbf4uz21rnv6R5mSGeRL9q5nuxn59AJ+ OhwSb6mkoKrGi78m15NV2yTBLVpfWPmo5miveW0i3dyL+bv2xG2o2N2sdNlM/NiTUv1B nDqmxVQOOZxx/RCPRTwoA0C7VvsQAPkcd6IuL1hcjb6bxyysfhdI1C82oMc9nRhFmVDt +C/kOkLWRYSpoFmeOv48Dtd4C6Zam9njYJmfZpsZNls8qGbejLnUP2Jel+Ww376CL3Hc 5wF9d+W7x3TM/2FvpEKFJK+gu7uE21JTkTwuyX+Vh0To8mi/8OT9EKqIYO13CZOFSW0X p8hg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=HyITMNdELPclxNoKcR1zVio3u+9JmLHOG25CjPlIULI=; b=dRnmhNDFPI238b4aC8FuUGgItOJ39U1Me4Yf60n0plvi+h8XEjOgW5XYVpTTRWytPT g1CBbr5Gsoc7LZ6zkgdggquPnbH4E+PTIZrhK+W5TZUgfVYkGAjYqEuoQOYGeQeMGyfu 0Ul6/TTWok+LctCN+ObkVO2pQHP16NmmhCO7+CzbTXIAlx8aimNLniiEudbxCVTLFIYx mp13DJer83cTpkHy0n4RyOmSrtPf/DwCkw0pSkRPGedfZEFOWqM5wlYJSS4L7QvhT6fT 9b+nES0mecj7cgb7S5+f1HMTMlmR8SgQlaZk0TnnAbV72xdXm+Xxh99JnDIfcEdqvVKY OTJA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@gmail.com header.s=20161025 header.b=YzFAotY+; spf=pass (google.com: domain of kernelfans@gmail.com designates 209.85.220.65 as permitted sender) smtp.mailfrom=kernelfans@gmail.com; dmarc=pass (p=NONE sp=QUARANTINE dis=NONE) header.from=gmail.com Received: from mail-sor-f65.google.com (mail-sor-f65.google.com. 
[209.85.220.65]) by mx.google.com with SMTPS id w11sor1629716ply.14.2019.01.10.21.13.54 for (Google Transport Security); Thu, 10 Jan 2019 21:13:54 -0800 (PST) Received-SPF: pass (google.com: domain of kernelfans@gmail.com designates 209.85.220.65 as permitted sender) client-ip=209.85.220.65; Authentication-Results: mx.google.com; dkim=pass header.i=@gmail.com header.s=20161025 header.b=YzFAotY+; spf=pass (google.com: domain of kernelfans@gmail.com designates 209.85.220.65 as permitted sender) smtp.mailfrom=kernelfans@gmail.com; dmarc=pass (p=NONE sp=QUARANTINE dis=NONE) header.from=gmail.com DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=HyITMNdELPclxNoKcR1zVio3u+9JmLHOG25CjPlIULI=; b=YzFAotY+NwCnyRx28zQm2g9r3fmR7pCjWvfroJkraJswxUAdRRZ3wswDAug/7D59rL OcWULBgD5QWfA8kg2RcGMOk5TS5W5AJVPaKtuV20MwIueNXJIuGF9XBCaS9rx1Onb4Lq FYIxRx4oe7IL0rqgK7eAkqukFsispBSCEpJEXtTWIlMipIpMcB2CJ9fFoOVZvf61AqWR LI++l2sYxQL3hRHseTnz33T4U9gutvkdkmFCI3vcd7Im7W8FIJhnN5wkPoGKEXjeuHDr kBLmbgH4mBdsPxXKacs3HRy8u2xHlzPJLM8oQQ/qaud6Q/BL65/zInLq8YApNIs/eJbA ub0g== X-Google-Smtp-Source: ALg8bN7Dh/zgZ+YLM8araolSDXBBTDY8kyLtFO6bcLu5ojQxeeEy88HPlwMpDDXIALPqfSiTGOIk3w== X-Received: by 2002:a17:902:82c2:: with SMTP id u2mr13234631plz.110.1547183633911; Thu, 10 Jan 2019 21:13:53 -0800 (PST) Received: from mylaptop.redhat.com ([209.132.188.80]) by smtp.gmail.com with ESMTPSA id q7sm93490471pgp.40.2019.01.10.21.13.47 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Thu, 10 Jan 2019 21:13:53 -0800 (PST) From: Pingfan Liu To: linux-kernel@vger.kernel.org Cc: Pingfan Liu , Thomas Gleixner , Ingo Molnar , Borislav Petkov , "H. Peter Anvin" , Dave Hansen , Andy Lutomirski , Peter Zijlstra , "Rafael J. 
Wysocki" , Len Brown , Yinghai Lu , Tejun Heo , Chao Fan , Baoquan He , Juergen Gross , Andrew Morton , Mike Rapoport , Vlastimil Babka , Michal Hocko , x86@kernel.org, linux-acpi@vger.kernel.org, linux-mm@kvack.org Subject: [PATCHv2 5/7] x86/mm: set allowed range for memblock allocator Date: Fri, 11 Jan 2019 13:12:55 +0800 Message-Id: <1547183577-20309-6-git-send-email-kernelfans@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1547183577-20309-1-git-send-email-kernelfans@gmail.com> References: <1547183577-20309-1-git-send-email-kernelfans@gmail.com> Due to the incoming divergence of x86_32 and x86_64, the allowed allocation range needs to be set at the early boot stage. This patch also includes a minor change to remove a redundant condition check: as memblock_find_in_range_node() shows, memblock_find_in_range() already protects itself from the case start > end. Signed-off-by: Pingfan Liu Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: "H. Peter Anvin" Cc: Dave Hansen Cc: Andy Lutomirski Cc: Peter Zijlstra Cc: "Rafael J.
Wysocki" Cc: Len Brown Cc: Yinghai Lu Cc: Tejun Heo Cc: Chao Fan Cc: Baoquan He Cc: Juergen Gross Cc: Andrew Morton Cc: Mike Rapoport Cc: Vlastimil Babka Cc: Michal Hocko Cc: x86@kernel.org Cc: linux-acpi@vger.kernel.org Cc: linux-mm@kvack.org --- arch/x86/mm/init.c | 24 +++++++++++++++++------- 1 file changed, 17 insertions(+), 7 deletions(-) diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c index ef99f38..385b9cd 100644 --- a/arch/x86/mm/init.c +++ b/arch/x86/mm/init.c @@ -76,6 +76,14 @@ static unsigned long min_pfn_mapped; static bool __initdata can_use_brk_pgt = true; +static unsigned long min_pfn_allowed; +static unsigned long max_pfn_allowed; +void set_alloc_range(unsigned long low, unsigned long high) +{ + min_pfn_allowed = low; + max_pfn_allowed = high; +} + /* * Pages returned are already directly mapped. * @@ -100,12 +108,10 @@ __ref void *alloc_low_pages(unsigned int num) if ((pgt_buf_end + num) > pgt_buf_top || !can_use_brk_pgt) { unsigned long ret = 0; - if (min_pfn_mapped < max_pfn_mapped) { - ret = memblock_find_in_range( - min_pfn_mapped << PAGE_SHIFT, - max_pfn_mapped << PAGE_SHIFT, - PAGE_SIZE * num , PAGE_SIZE); - } + ret = memblock_find_in_range( + min_pfn_allowed << PAGE_SHIFT, + max_pfn_allowed << PAGE_SHIFT, + PAGE_SIZE * num, PAGE_SIZE); if (ret) memblock_reserve(ret, PAGE_SIZE * num); else if (can_use_brk_pgt) @@ -588,14 +594,17 @@ static void __init memory_map_top_down(unsigned long map_start, start = map_start; mapped_ram_size += init_range_memory_mapping(start, last_start); + set_alloc_range(min_pfn_mapped, max_pfn_mapped); last_start = start; min_pfn_mapped = last_start >> PAGE_SHIFT; if (mapped_ram_size >= step_size) step_size = get_new_step_size(step_size); } - if (real_end < map_end) + if (real_end < map_end) { init_range_memory_mapping(real_end, map_end); + set_alloc_range(min_pfn_mapped, max_pfn_mapped); + } } /** @@ -636,6 +645,7 @@ static void __init memory_map_bottom_up(unsigned long map_start, } mapped_ram_size += 
init_range_memory_mapping(start, next); + set_alloc_range(min_pfn_mapped, max_pfn_mapped); start = next; if (mapped_ram_size >= step_size)

From patchwork Fri Jan 11 05:12:56 2019
X-Patchwork-Submitter: Pingfan Liu
X-Patchwork-Id: 10757377
From: Pingfan Liu
To: linux-kernel@vger.kernel.org
Cc: Pingfan Liu , Thomas Gleixner , Ingo Molnar , Borislav Petkov , "H. Peter Anvin" , Dave Hansen , Andy Lutomirski , Peter Zijlstra , "Rafael J. Wysocki" , Len Brown , Yinghai Lu , Tejun Heo , Chao Fan , Baoquan He , Juergen Gross , Andrew Morton , Mike Rapoport , Vlastimil Babka , Michal Hocko , x86@kernel.org, linux-acpi@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCHv2 6/7] x86/mm: remove bottom-up allocation style for x86_64
Date: Fri, 11 Jan 2019 13:12:56 +0800
Message-Id: <1547183577-20309-7-git-send-email-kernelfans@gmail.com>
In-Reply-To: <1547183577-20309-1-git-send-email-kernelfans@gmail.com>
References: <1547183577-20309-1-git-send-email-kernelfans@gmail.com>

Although the KASLR kernel can avoid staining a movable node [1], the page table can still stain one. The problem is probabilistic, and the probability is low, but it exists. This patch makes the outcome certain by allocating the page table on an unmovable node instead of placing it right after the kernel end. The patch achieves two things:

-1st. Keep the page-table subtree away from movable nodes. With the previous patch, by the time init_mem_mapping() runs, the memblock allocator already knows the ACPI memory-hotplug info and avoids staining a movable node.
As a result, memory_map_bottom_up() is not needed any more. The following figure shows the defect of the current bottom-up style:

[startA, endA][startB, "kaslr kernel very close to" endB][startC, endC]

If nodeA and nodeB are unmovable while nodeC is movable, then init_mem_mapping() can generate the page table on nodeC, which stains the movable node. For a longer explanation, please refer to the Background section.

-2nd. Simplify the logic of memory_map_top_down(). Thanks to early_make_pgtable(), x86_64 can set up the page-table subtree directly at any address, hence the careful iteration in memory_map_top_down() can be discarded.

*Background section*

After [1], the KASLR kernel is guaranteed to sit inside an unmovable node. But if the kernel lands near the end of that unmovable node, the bottom-up allocator may create a page table that crosses the boundary into the adjacent movable node. It is a probability issue, governed by two factors: -1. how big the gap is between the kernel end and the unmovable node's end; -2. how much memory the system owns. An alternative fix is to enlarge that gap in boot/compressed/kaslr*. But in the scenario of PB-level memory, the page table takes several MB even with 1GB pages, and differing page attributes and fragmentation make things worse, so it is hard to decide how much the gap should grow.

[1]: https://lore.kernel.org/patchwork/patch/1029376/

Signed-off-by: Pingfan Liu Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: "H. Peter Anvin" Cc: Dave Hansen Cc: Andy Lutomirski Cc: Peter Zijlstra Cc: "Rafael J.
Wysocki" Cc: Len Brown Cc: Yinghai Lu Cc: Tejun Heo Cc: Chao Fan Cc: Baoquan He Cc: Juergen Gross Cc: Andrew Morton Cc: Mike Rapoport Cc: Vlastimil Babka Cc: Michal Hocko Cc: x86@kernel.org Cc: linux-acpi@vger.kernel.org Cc: linux-mm@kvack.org --- arch/x86/kernel/setup.c | 4 ++-- arch/x86/mm/init.c | 56 ++++++++++++++++++++++++++++++------------------- 2 files changed, 36 insertions(+), 24 deletions(-) diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c index 9b57e01..00a1b84 100644 --- a/arch/x86/kernel/setup.c +++ b/arch/x86/kernel/setup.c @@ -827,7 +827,7 @@ static void early_acpi_parse(void) early_acpi_boot_init(); initmem_init(); /* check whether memory is returned or not */ - start = memblock_find_in_range(start, end, 1<<24, 1); + start = memblock_find_in_range(start, end, 1 << 24, 1); if (!start) pr_warn("the above acpi routines change and consume memory\n"); memblock_set_current_limit(orig_start, orig_end, enforcing); @@ -1135,7 +1135,7 @@ void __init setup_arch(char **cmdline_p) trim_platform_memory_ranges(); trim_low_memory_range(); -#ifdef CONFIG_MEMORY_HOTPLUG +#if defined(CONFIG_MEMORY_HOTPLUG) && defined(CONFIG_X86_32) /* * Memory used by the kernel cannot be hot-removed because Linux * cannot migrate the kernel pages. 
When memory hotplug is diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c index 385b9cd..003ad77 100644 --- a/arch/x86/mm/init.c +++ b/arch/x86/mm/init.c @@ -72,8 +72,6 @@ static unsigned long __initdata pgt_buf_start; static unsigned long __initdata pgt_buf_end; static unsigned long __initdata pgt_buf_top; -static unsigned long min_pfn_mapped; - static bool __initdata can_use_brk_pgt = true; static unsigned long min_pfn_allowed; @@ -532,6 +530,10 @@ static unsigned long __init init_range_memory_mapping( return mapped_ram_size; } +#ifdef CONFIG_X86_32 + +static unsigned long min_pfn_mapped; + static unsigned long __init get_new_step_size(unsigned long step_size) { /* @@ -653,6 +655,32 @@ static void __init memory_map_bottom_up(unsigned long map_start, } } +static unsigned long __init init_range_memory_mapping32( + unsigned long r_start, unsigned long r_end) +{ + /* + * If the allocation is in bottom-up direction, we setup direct mapping + * in bottom-up, otherwise we setup direct mapping in top-down. + */ + if (memblock_bottom_up()) { + unsigned long kernel_end = __pa_symbol(_end); + + /* + * we need two separate calls here. This is because we want to + * allocate page tables above the kernel. So we first map + * [kernel_end, end) to make memory above the kernel be mapped + * as soon as possible. And then use page tables allocated above + * the kernel to map [ISA_END_ADDRESS, kernel_end). 
+ */ + memory_map_bottom_up(kernel_end, r_end); + memory_map_bottom_up(r_start, kernel_end); + } else { + memory_map_top_down(r_start, r_end); + } +} + +#endif + void __init init_mem_mapping(void) { unsigned long end; @@ -663,6 +691,8 @@ void __init init_mem_mapping(void) #ifdef CONFIG_X86_64 end = max_pfn << PAGE_SHIFT; + /* allow alloc_low_pages() to allocate from memblock */ + set_alloc_range(ISA_END_ADDRESS, end); #else end = max_low_pfn << PAGE_SHIFT; #endif @@ -673,32 +703,14 @@ void __init init_mem_mapping(void) /* Init the trampoline, possibly with KASLR memory offset */ init_trampoline(); - /* - * If the allocation is in bottom-up direction, we setup direct mapping - * in bottom-up, otherwise we setup direct mapping in top-down. - */ - if (memblock_bottom_up()) { - unsigned long kernel_end = __pa_symbol(_end); - - /* - * we need two separate calls here. This is because we want to - * allocate page tables above the kernel. So we first map - * [kernel_end, end) to make memory above the kernel be mapped - * as soon as possible. And then use page tables allocated above - * the kernel to map [ISA_END_ADDRESS, kernel_end). 
- */ - memory_map_bottom_up(kernel_end, end); - memory_map_bottom_up(ISA_END_ADDRESS, kernel_end); - } else { - memory_map_top_down(ISA_END_ADDRESS, end); - } - #ifdef CONFIG_X86_64 + init_range_memory_mapping(ISA_END_ADDRESS, end); if (max_pfn > max_low_pfn) { /* can we preseve max_low_pfn ?*/ max_low_pfn = max_pfn; } #else + init_range_memory_mapping32(ISA_END_ADDRESS, end); early_ioremap_page_table_range_init(); #endif

From patchwork Fri Jan 11 05:12:57 2019
X-Patchwork-Submitter: Pingfan Liu
X-Patchwork-Id: 10757381
From: Pingfan Liu
To: linux-kernel@vger.kernel.org
Cc: Pingfan Liu , Thomas Gleixner , Ingo Molnar , Borislav Petkov , "H. Peter Anvin" , Dave Hansen , Andy Lutomirski , Peter Zijlstra , "Rafael J.
Wysocki" , Len Brown , Yinghai Lu , Tejun Heo , Chao Fan , Baoquan He , Juergen Gross , Andrew Morton , Mike Rapoport , Vlastimil Babka , Michal Hocko , x86@kernel.org, linux-acpi@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCHv2 7/7] x86/mm: isolate the bottom-up style to init_32.c
Date: Fri, 11 Jan 2019 13:12:57 +0800
Message-Id: <1547183577-20309-8-git-send-email-kernelfans@gmail.com>
In-Reply-To: <1547183577-20309-1-git-send-email-kernelfans@gmail.com>
References: <1547183577-20309-1-git-send-email-kernelfans@gmail.com>

The bottom-up style is of no use on x86_64 any longer, so isolate it. Later, it may be removed from x86 completely.

Signed-off-by: Pingfan Liu Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: "H. Peter Anvin" Cc: Dave Hansen Cc: Andy Lutomirski Cc: Peter Zijlstra Cc: "Rafael J. Wysocki" Cc: Len Brown Cc: Yinghai Lu Cc: Tejun Heo Cc: Chao Fan Cc: Baoquan He Cc: Juergen Gross Cc: Andrew Morton Cc: Mike Rapoport Cc: Vlastimil Babka Cc: Michal Hocko Cc: x86@kernel.org Cc: linux-acpi@vger.kernel.org Cc: linux-mm@kvack.org
--- arch/x86/mm/init.c | 153 +--------------------------------------------- arch/x86/mm/init_32.c | 147 ++++++++++++++++++++++++++++++++++++++++++++ arch/x86/mm/mm_internal.h | 8 ++- 3 files changed, 155 insertions(+), 153 deletions(-) diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c index 003ad77..6a853e4 100644 --- a/arch/x86/mm/init.c +++ b/arch/x86/mm/init.c @@ -502,7 +502,7 @@ unsigned long __ref init_memory_mapping(unsigned long start, * That range would have hole in the middle or ends, and only ram parts * will be mapped in init_range_memory_mapping().
*/ -static unsigned long __init init_range_memory_mapping( +unsigned long __init init_range_memory_mapping( unsigned long r_start, unsigned long r_end) { @@ -530,157 +530,6 @@ static unsigned long __init init_range_memory_mapping( return mapped_ram_size; } -#ifdef CONFIG_X86_32 - -static unsigned long min_pfn_mapped; - -static unsigned long __init get_new_step_size(unsigned long step_size) -{ - /* - * Initial mapped size is PMD_SIZE (2M). - * We can not set step_size to be PUD_SIZE (1G) yet. - * In worse case, when we cross the 1G boundary, and - * PG_LEVEL_2M is not set, we will need 1+1+512 pages (2M + 8k) - * to map 1G range with PTE. Hence we use one less than the - * difference of page table level shifts. - * - * Don't need to worry about overflow in the top-down case, on 32bit, - * when step_size is 0, round_down() returns 0 for start, and that - * turns it into 0x100000000ULL. - * In the bottom-up case, round_up(x, 0) returns 0 though too, which - * needs to be taken into consideration by the code below. - */ - return step_size << (PMD_SHIFT - PAGE_SHIFT - 1); -} - -/** - * memory_map_top_down - Map [map_start, map_end) top down - * @map_start: start address of the target memory range - * @map_end: end address of the target memory range - * - * This function will setup direct mapping for memory range - * [map_start, map_end) in top-down. That said, the page tables - * will be allocated at the end of the memory, and we map the - * memory in top-down. 
- */ -static void __init memory_map_top_down(unsigned long map_start, - unsigned long map_end) -{ - unsigned long real_end, start, last_start; - unsigned long step_size; - unsigned long addr; - unsigned long mapped_ram_size = 0; - - /* xen has big range in reserved near end of ram, skip it at first.*/ - addr = memblock_find_in_range(map_start, map_end, PMD_SIZE, PMD_SIZE); - real_end = addr + PMD_SIZE; - - /* step_size need to be small so pgt_buf from BRK could cover it */ - step_size = PMD_SIZE; - max_pfn_mapped = 0; /* will get exact value next */ - min_pfn_mapped = real_end >> PAGE_SHIFT; - last_start = start = real_end; - - /* - * We start from the top (end of memory) and go to the bottom. - * The memblock_find_in_range() gets us a block of RAM from the - * end of RAM in [min_pfn_mapped, max_pfn_mapped) used as new pages - * for page table. - */ - while (last_start > map_start) { - if (last_start > step_size) { - start = round_down(last_start - 1, step_size); - if (start < map_start) - start = map_start; - } else - start = map_start; - mapped_ram_size += init_range_memory_mapping(start, - last_start); - set_alloc_range(min_pfn_mapped, max_pfn_mapped); - last_start = start; - min_pfn_mapped = last_start >> PAGE_SHIFT; - if (mapped_ram_size >= step_size) - step_size = get_new_step_size(step_size); - } - - if (real_end < map_end) { - init_range_memory_mapping(real_end, map_end); - set_alloc_range(min_pfn_mapped, max_pfn_mapped); - } -} - -/** - * memory_map_bottom_up - Map [map_start, map_end) bottom up - * @map_start: start address of the target memory range - * @map_end: end address of the target memory range - * - * This function will setup direct mapping for memory range - * [map_start, map_end) in bottom-up. Since we have limited the - * bottom-up allocation above the kernel, the page tables will - * be allocated just above the kernel and we map the memory - * in [map_start, map_end) in bottom-up. 
- */ -static void __init memory_map_bottom_up(unsigned long map_start, - unsigned long map_end) -{ - unsigned long next, start; - unsigned long mapped_ram_size = 0; - /* step_size need to be small so pgt_buf from BRK could cover it */ - unsigned long step_size = PMD_SIZE; - - start = map_start; - min_pfn_mapped = start >> PAGE_SHIFT; - - /* - * We start from the bottom (@map_start) and go to the top (@map_end). - * The memblock_find_in_range() gets us a block of RAM from the - * end of RAM in [min_pfn_mapped, max_pfn_mapped) used as new pages - * for page table. - */ - while (start < map_end) { - if (step_size && map_end - start > step_size) { - next = round_up(start + 1, step_size); - if (next > map_end) - next = map_end; - } else { - next = map_end; - } - - mapped_ram_size += init_range_memory_mapping(start, next); - set_alloc_range(min_pfn_mapped, max_pfn_mapped); - start = next; - - if (mapped_ram_size >= step_size) - step_size = get_new_step_size(step_size); - } -} - -static unsigned long __init init_range_memory_mapping32( - unsigned long r_start, unsigned long r_end) -{ - /* - * If the allocation is in bottom-up direction, we setup direct mapping - * in bottom-up, otherwise we setup direct mapping in top-down. - */ - if (memblock_bottom_up()) { - unsigned long kernel_end = __pa_symbol(_end); - - /* - * we need two separate calls here. This is because we want to - * allocate page tables above the kernel. So we first map - * [kernel_end, end) to make memory above the kernel be mapped - * as soon as possible. And then use page tables allocated above - * the kernel to map [ISA_END_ADDRESS, kernel_end). 
- */ - memory_map_bottom_up(kernel_end, r_end); - memory_map_bottom_up(r_start, kernel_end); - } else { - memory_map_top_down(r_start, r_end); - } -} - -#endif - void __init init_mem_mapping(void) { unsigned long end; diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c index 49ecf5e..f802678 100644 --- a/arch/x86/mm/init_32.c +++ b/arch/x86/mm/init_32.c @@ -550,6 +550,153 @@ void __init early_ioremap_page_table_range_init(void) early_ioremap_reset(); } +static unsigned long min_pfn_mapped; + +static unsigned long __init get_new_step_size(unsigned long step_size) +{ + /* + * Initial mapped size is PMD_SIZE (2M). + * We can not set step_size to be PUD_SIZE (1G) yet. + * In worse case, when we cross the 1G boundary, and + * PG_LEVEL_2M is not set, we will need 1+1+512 pages (2M + 8k) + * to map 1G range with PTE. Hence we use one less than the + * difference of page table level shifts. + * + * Don't need to worry about overflow in the top-down case, on 32bit, + * when step_size is 0, round_down() returns 0 for start, and that + * turns it into 0x100000000ULL. + * In the bottom-up case, round_up(x, 0) returns 0 though too, which + * needs to be taken into consideration by the code below. + */ + return step_size << (PMD_SHIFT - PAGE_SHIFT - 1); +} + +/** + * memory_map_top_down - Map [map_start, map_end) top down + * @map_start: start address of the target memory range + * @map_end: end address of the target memory range + * + * This function will setup direct mapping for memory range + * [map_start, map_end) in top-down. That said, the page tables + * will be allocated at the end of the memory, and we map the + * memory in top-down. 
+ */ +static void __init memory_map_top_down(unsigned long map_start, + unsigned long map_end) +{ + unsigned long real_end, start, last_start; + unsigned long step_size; + unsigned long addr; + unsigned long mapped_ram_size = 0; + + /* xen has big range in reserved near end of ram, skip it at first.*/ + addr = memblock_find_in_range(map_start, map_end, PMD_SIZE, PMD_SIZE); + real_end = addr + PMD_SIZE; + + /* step_size need to be small so pgt_buf from BRK could cover it */ + step_size = PMD_SIZE; + max_pfn_mapped = 0; /* will get exact value next */ + min_pfn_mapped = real_end >> PAGE_SHIFT; + last_start = start = real_end; + + /* + * We start from the top (end of memory) and go to the bottom. + * The memblock_find_in_range() gets us a block of RAM from the + * end of RAM in [min_pfn_mapped, max_pfn_mapped) used as new pages + * for page table. + */ + while (last_start > map_start) { + if (last_start > step_size) { + start = round_down(last_start - 1, step_size); + if (start < map_start) + start = map_start; + } else + start = map_start; + mapped_ram_size += init_range_memory_mapping(start, + last_start); + set_alloc_range(min_pfn_mapped, max_pfn_mapped); + last_start = start; + min_pfn_mapped = last_start >> PAGE_SHIFT; + if (mapped_ram_size >= step_size) + step_size = get_new_step_size(step_size); + } + + if (real_end < map_end) { + init_range_memory_mapping(real_end, map_end); + set_alloc_range(min_pfn_mapped, max_pfn_mapped); + } +} + +/** + * memory_map_bottom_up - Map [map_start, map_end) bottom up + * @map_start: start address of the target memory range + * @map_end: end address of the target memory range + * + * This function will setup direct mapping for memory range + * [map_start, map_end) in bottom-up. Since we have limited the + * bottom-up allocation above the kernel, the page tables will + * be allocated just above the kernel and we map the memory + * in [map_start, map_end) in bottom-up. 
+ */ +static void __init memory_map_bottom_up(unsigned long map_start, + unsigned long map_end) +{ + unsigned long next, start; + unsigned long mapped_ram_size = 0; + /* step_size need to be small so pgt_buf from BRK could cover it */ + unsigned long step_size = PMD_SIZE; + + start = map_start; + min_pfn_mapped = start >> PAGE_SHIFT; + + /* + * We start from the bottom (@map_start) and go to the top (@map_end). + * The memblock_find_in_range() gets us a block of RAM from the + * end of RAM in [min_pfn_mapped, max_pfn_mapped) used as new pages + * for page table. + */ + while (start < map_end) { + if (step_size && map_end - start > step_size) { + next = round_up(start + 1, step_size); + if (next > map_end) + next = map_end; + } else { + next = map_end; + } + + mapped_ram_size += init_range_memory_mapping(start, next); + set_alloc_range(min_pfn_mapped, max_pfn_mapped); + start = next; + + if (mapped_ram_size >= step_size) + step_size = get_new_step_size(step_size); + } +} + +void __init init_range_memory_mapping32( + unsigned long r_start, unsigned long r_end) +{ + /* + * If the allocation is in bottom-up direction, we setup direct mapping + * in bottom-up, otherwise we setup direct mapping in top-down. + */ + if (memblock_bottom_up()) { + unsigned long kernel_end = __pa_symbol(_end); + + /* + * we need two separate calls here. This is because we want to + * allocate page tables above the kernel. So we first map + * [kernel_end, end) to make memory above the kernel be mapped + * as soon as possible. And then use page tables allocated above + * the kernel to map [ISA_END_ADDRESS, kernel_end). 
+ */ + memory_map_bottom_up(kernel_end, r_end); + memory_map_bottom_up(r_start, kernel_end); + } else { + memory_map_top_down(r_start, r_end); + } +} + static void __init pagetable_init(void) { pgd_t *pgd_base = swapper_pg_dir; diff --git a/arch/x86/mm/mm_internal.h b/arch/x86/mm/mm_internal.h index 4e1f6e1..5ab133c 100644 --- a/arch/x86/mm/mm_internal.h +++ b/arch/x86/mm/mm_internal.h @@ -9,7 +9,13 @@ static inline void *alloc_low_page(void) } void early_ioremap_page_table_range_init(void); - +void init_range_memory_mapping32( + unsigned long r_start, + unsigned long r_end); +void set_alloc_range(unsigned long low, unsigned long high); +unsigned long __init init_range_memory_mapping( + unsigned long r_start, + unsigned long r_end); unsigned long kernel_physical_mapping_init(unsigned long start, unsigned long end, unsigned long page_size_mask);