From patchwork Fri Jan 3 10:51:56 2025
X-Patchwork-Submitter: Weikang Guo
X-Patchwork-Id: 13925483
From: Guo Weikang <guoweikang.kernel@gmail.com>
To: Mike Rapoport, Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Guo Weikang
Subject: [PATCH 1/3] mm/memblock: Modify the default failure behavior of memblock_alloc to panic
Date: Fri, 3 Jan 2025 18:51:56 +0800
Message-Id: <20250103105158.1350689-1-guoweikang.kernel@gmail.com>
X-Mailer: git-send-email 2.25.1
After analyzing the usage of memblock_alloc, it was found that approximately
4/5 (120/155) of the calls expect a panic behavior on allocation failure.
To reflect this common usage pattern, the default failure behavior of
memblock_alloc is now modified to trigger a panic when allocation fails.
Additionally, a new interface, memblock_alloc_no_panic, has been introduced
to handle cases where panic behavior is not desired.

Signed-off-by: Guo Weikang <guoweikang.kernel@gmail.com>
---
 arch/alpha/kernel/core_cia.c            |  2 +-
 arch/alpha/kernel/core_marvel.c         |  6 +++---
 arch/alpha/kernel/pci.c                 |  4 ++--
 arch/alpha/kernel/pci_iommu.c           |  4 ++--
 arch/alpha/kernel/setup.c               |  2 +-
 arch/arm/kernel/setup.c                 |  4 ++--
 arch/arm/mach-omap2/omap_hwmod.c        | 16 +++++++++++-----
 arch/arm/mm/mmu.c                       |  6 +++---
 arch/arm/mm/nommu.c                     |  2 +-
 arch/arm64/kernel/setup.c               |  2 +-
 arch/loongarch/include/asm/dmi.h        |  2 +-
 arch/loongarch/kernel/setup.c           |  2 +-
 arch/loongarch/mm/init.c                |  6 +++---
 arch/m68k/mm/init.c                     |  2 +-
 arch/m68k/mm/mcfmmu.c                   |  4 ++--
 arch/m68k/mm/motorola.c                 |  2 +-
 arch/m68k/mm/sun3mmu.c                  |  4 ++--
 arch/m68k/sun3/sun3dvma.c               |  2 +-
 arch/mips/kernel/setup.c                |  2 +-
 arch/openrisc/mm/ioremap.c              |  2 +-
 arch/parisc/mm/init.c                   | 10 +++++-----
 arch/powerpc/kernel/dt_cpu_ftrs.c       |  2 +-
 arch/powerpc/kernel/pci_32.c            |  2 +-
 arch/powerpc/kernel/setup-common.c      |  2 +-
 arch/powerpc/kernel/setup_32.c          |  2 +-
 arch/powerpc/mm/book3s32/mmu.c          |  2 +-
 arch/powerpc/mm/book3s64/pgtable.c      |  2 +-
 arch/powerpc/mm/kasan/8xx.c             |  5 ++---
 arch/powerpc/mm/kasan/init_32.c         |  7 +++----
 arch/powerpc/mm/kasan/init_book3e_64.c  |  8 ++++----
 arch/powerpc/mm/kasan/init_book3s_64.c  |  2 +-
 arch/powerpc/mm/nohash/mmu_context.c    |  6 +++---
 arch/powerpc/mm/pgtable_32.c            |  2 +-
 arch/powerpc/platforms/powermac/nvram.c |  2 +-
 arch/powerpc/platforms/powernv/opal.c   |  2 +-
 arch/powerpc/platforms/ps3/setup.c      |  2 +-
 arch/powerpc/sysdev/msi_bitmap.c        |  2 +-
 arch/riscv/kernel/setup.c               |  2 +-
 arch/riscv/mm/kasan_init.c              | 14 +++++++-------
 arch/s390/kernel/crash_dump.c           |  2 +-
 arch/s390/kernel/numa.c                 |  2 +-
 arch/s390/kernel/setup.c                |  8 ++++----
 arch/s390/kernel/smp.c                  |  4 ++--
 arch/s390/kernel/topology.c             |  4 ++--
 arch/s390/mm/vmem.c                     |  4 ++--
 arch/sh/mm/init.c                       |  4 ++--
 arch/sparc/kernel/prom_32.c             |  2 +-
 arch/sparc/kernel/prom_64.c             |  2 +-
 arch/sparc/mm/init_32.c                 |  2 +-
 arch/sparc/mm/srmmu.c                   |  6 +++---
 arch/um/drivers/net_kern.c              |  2 +-
 arch/um/drivers/vector_kern.c           |  2 +-
 arch/um/kernel/load_file.c              |  2 +-
 arch/x86/coco/sev/core.c                |  2 +-
 arch/x86/kernel/acpi/boot.c             |  2 +-
 arch/x86/kernel/acpi/madt_wakeup.c      |  2 +-
 arch/x86/kernel/apic/io_apic.c          |  4 ++--
 arch/x86/kernel/e820.c                  |  2 +-
 arch/x86/platform/olpc/olpc_dt.c        |  2 +-
 arch/x86/xen/p2m.c                      |  2 +-
 arch/xtensa/mm/kasan_init.c             |  2 +-
 arch/xtensa/platforms/iss/network.c     |  2 +-
 drivers/clk/ti/clk.c                    |  2 +-
 drivers/firmware/memmap.c               |  2 +-
 drivers/macintosh/smu.c                 |  2 +-
 drivers/of/fdt.c                        |  2 +-
 drivers/of/of_reserved_mem.c            |  2 +-
 drivers/of/unittest.c                   |  2 +-
 drivers/usb/early/xhci-dbc.c            |  2 +-
 include/linux/memblock.h                | 10 +++++-----
 init/main.c                             | 12 ++++++------
 kernel/dma/swiotlb.c                    |  6 +++---
 kernel/power/snapshot.c                 |  2 +-
 kernel/printk/printk.c                  |  6 +++---
 lib/cpumask.c                           |  2 +-
 mm/kasan/tags.c                         |  2 +-
 mm/kfence/core.c                        |  4 ++--
 mm/kmsan/shadow.c                       |  4 ++--
 mm/memblock.c                           | 18 +++++++++---------
 mm/numa.c                               |  2 +-
 mm/numa_emulation.c                     |  2 +-
 mm/numa_memblks.c                       |  2 +-
 mm/percpu.c                             | 32 ++++++++++++++++----------------
 mm/sparse.c                             |  2 +-
 84 files changed, 173 insertions(+), 165 deletions(-)

diff --git a/arch/alpha/kernel/core_cia.c b/arch/alpha/kernel/core_cia.c
index 6e577228e175..05f80b4bbf12 100644
--- a/arch/alpha/kernel/core_cia.c
+++ b/arch/alpha/kernel/core_cia.c
@@ -331,7 +331,7 @@ cia_prepare_tbia_workaround(int window)
 	long i;
 
 	/* Use minimal 1K map. */
-	ppte = memblock_alloc_or_panic(CIA_BROKEN_TBIA_SIZE, 32768);
+	ppte = memblock_alloc(CIA_BROKEN_TBIA_SIZE, 32768);
 	pte = (virt_to_phys(ppte) >> (PAGE_SHIFT - 1)) | 1;
 
 	for (i = 0; i < CIA_BROKEN_TBIA_SIZE / sizeof(unsigned long); ++i)
diff --git a/arch/alpha/kernel/core_marvel.c b/arch/alpha/kernel/core_marvel.c
index b1bfbd11980d..716ed3197f72 100644
--- a/arch/alpha/kernel/core_marvel.c
+++ b/arch/alpha/kernel/core_marvel.c
@@ -79,9 +79,9 @@ mk_resource_name(int pe, int port, char *str)
 {
 	char tmp[80];
 	char *name;
-
+
 	sprintf(tmp, "PCI %s PE %d PORT %d", str, pe, port);
-	name = memblock_alloc_or_panic(strlen(tmp) + 1, SMP_CACHE_BYTES);
+	name = memblock_alloc(strlen(tmp) + 1, SMP_CACHE_BYTES);
 	strcpy(name, tmp);
 
 	return name;
@@ -116,7 +116,7 @@ alloc_io7(unsigned int pe)
 		return NULL;
 	}
 
-	io7 = memblock_alloc_or_panic(sizeof(*io7), SMP_CACHE_BYTES);
+	io7 = memblock_alloc(sizeof(*io7), SMP_CACHE_BYTES);
 	io7->pe = pe;
 	raw_spin_lock_init(&io7->irq_lock);
diff --git a/arch/alpha/kernel/pci.c b/arch/alpha/kernel/pci.c
index 8e9b4ac86b7e..d359ebaf6de7 100644
--- a/arch/alpha/kernel/pci.c
+++ b/arch/alpha/kernel/pci.c
@@ -391,7 +391,7 @@ alloc_pci_controller(void)
 {
 	struct pci_controller *hose;
 
-	hose = memblock_alloc_or_panic(sizeof(*hose), SMP_CACHE_BYTES);
+	hose = memblock_alloc(sizeof(*hose), SMP_CACHE_BYTES);
 
 	*hose_tail = hose;
 	hose_tail = &hose->next;
@@ -402,7 +402,7 @@ alloc_pci_controller(void)
 struct resource * __init
 alloc_resource(void)
 {
-	return memblock_alloc_or_panic(sizeof(struct resource), SMP_CACHE_BYTES);
+	return memblock_alloc(sizeof(struct resource), SMP_CACHE_BYTES);
 }
diff --git a/arch/alpha/kernel/pci_iommu.c b/arch/alpha/kernel/pci_iommu.c
index 681f56089d9c..7a465c207684 100644
--- a/arch/alpha/kernel/pci_iommu.c
+++ b/arch/alpha/kernel/pci_iommu.c
@@ -71,8 +71,8 @@ iommu_arena_new_node(int nid, struct pci_controller *hose, dma_addr_t base,
 	if (align < mem_size)
 		align = mem_size;
 
-	arena = memblock_alloc_or_panic(sizeof(*arena), SMP_CACHE_BYTES);
-	arena->ptes = memblock_alloc_or_panic(mem_size, align);
+	arena = memblock_alloc(sizeof(*arena), SMP_CACHE_BYTES);
+	arena->ptes = memblock_alloc(mem_size, align);
 
 	spin_lock_init(&arena->lock);
 	arena->hose = hose;
diff --git a/arch/alpha/kernel/setup.c b/arch/alpha/kernel/setup.c
index bebdffafaee8..6de866a62bd9 100644
--- a/arch/alpha/kernel/setup.c
+++ b/arch/alpha/kernel/setup.c
@@ -269,7 +269,7 @@ move_initrd(unsigned long mem_limit)
 	unsigned long size;
 
 	size = initrd_end - initrd_start;
-	start = memblock_alloc(PAGE_ALIGN(size), PAGE_SIZE);
+	start = memblock_alloc_no_panic(PAGE_ALIGN(size), PAGE_SIZE);
 	if (!start || __pa(start) + size > mem_limit) {
 		initrd_start = initrd_end = 0;
 		return NULL;
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index a41c93988d2c..b36498c0bedd 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -880,7 +880,7 @@ static void __init request_standard_resources(const struct machine_desc *mdesc)
 		 */
 		boot_alias_start = phys_to_idmap(start);
 		if (arm_has_idmap_alias() && boot_alias_start != IDMAP_INVALID_ADDR) {
-			res = memblock_alloc_or_panic(sizeof(*res), SMP_CACHE_BYTES);
+			res = memblock_alloc(sizeof(*res), SMP_CACHE_BYTES);
 			res->name = "System RAM (boot alias)";
 			res->start = boot_alias_start;
 			res->end = phys_to_idmap(res_end);
@@ -888,7 +888,7 @@ static void __init request_standard_resources(const struct machine_desc *mdesc)
 			request_resource(&iomem_resource, res);
 		}
 
-		res = memblock_alloc_or_panic(sizeof(*res), SMP_CACHE_BYTES);
+		res = memblock_alloc(sizeof(*res), SMP_CACHE_BYTES);
 		res->name = "System RAM";
 		res->start = start;
 		res->end = res_end;
diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c
index 111677878d9c..30e4f1279cdb 100644
--- a/arch/arm/mach-omap2/omap_hwmod.c
+++ b/arch/arm/mach-omap2/omap_hwmod.c
@@ -709,7 +709,7 @@ static int __init _setup_clkctrl_provider(struct device_node *np)
 	struct clkctrl_provider *provider;
 	int i;
 
-	provider = memblock_alloc(sizeof(*provider), SMP_CACHE_BYTES);
+	provider = memblock_alloc_no_panic(sizeof(*provider), SMP_CACHE_BYTES);
 	if (!provider)
 		return -ENOMEM;
 
@@ -718,16 +718,16 @@ static int __init _setup_clkctrl_provider(struct device_node *np)
 	provider->num_addrs = of_address_count(np);
 
 	provider->addr =
-		memblock_alloc(sizeof(void *) * provider->num_addrs,
+		memblock_alloc_no_panic(sizeof(void *) * provider->num_addrs,
 			       SMP_CACHE_BYTES);
 	if (!provider->addr)
-		return -ENOMEM;
+		goto err_free_provider;
 
 	provider->size =
-		memblock_alloc(sizeof(u32) * provider->num_addrs,
+		memblock_alloc_no_panic(sizeof(u32) * provider->num_addrs,
 			       SMP_CACHE_BYTES);
 	if (!provider->size)
-		return -ENOMEM;
+		goto err_free_addr;
 
 	for (i = 0; i < provider->num_addrs; i++) {
 		struct resource res;
@@ -740,6 +740,12 @@ static int __init _setup_clkctrl_provider(struct device_node *np)
 	list_add(&provider->link, &clkctrl_providers);
 
 	return 0;
+
+err_free_addr:
+	memblock_free(provider->addr, sizeof(void *));
+err_free_provider:
+	memblock_free(provider, sizeof(*provider));
+	return -ENOMEM;
 }
 
 static int __init _init_clkctrl_providers(void)
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index f02f872ea8a9..3d788304839e 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -726,7 +726,7 @@ EXPORT_SYMBOL(phys_mem_access_prot);
 
 static void __init *early_alloc(unsigned long sz)
 {
-	return memblock_alloc_or_panic(sz, sz);
+	return memblock_alloc(sz, sz);
 }
 
@@ -1022,7 +1022,7 @@ void __init iotable_init(struct map_desc *io_desc, int nr)
 	if (!nr)
 		return;
 
-	svm = memblock_alloc_or_panic(sizeof(*svm) * nr, __alignof__(*svm));
+	svm = memblock_alloc(sizeof(*svm) * nr, __alignof__(*svm));
 
 	for (md = io_desc; nr; md++, nr--) {
 		create_mapping(md);
@@ -1044,7 +1044,7 @@ void __init vm_reserve_area_early(unsigned long addr, unsigned long size,
 	struct vm_struct *vm;
 	struct static_vm *svm;
 
-	svm = memblock_alloc_or_panic(sizeof(*svm), __alignof__(*svm));
+	svm = memblock_alloc(sizeof(*svm), __alignof__(*svm));
 
 	vm = &svm->vm;
 	vm->addr = (void *)addr;
diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c
index 1a8f6914ee59..079b4d4acd29 100644
--- a/arch/arm/mm/nommu.c
+++ b/arch/arm/mm/nommu.c
@@ -162,7 +162,7 @@ void __init paging_init(const struct machine_desc *mdesc)
 	mpu_setup();
 
 	/* allocate the zero page. */
-	zero_page = (void *)memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+	zero_page = (void *)memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 
 	bootmem_init();
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 85104587f849..3012cf9b0f9b 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -223,7 +223,7 @@ static void __init request_standard_resources(void)
 	num_standard_resources = memblock.memory.cnt;
 	res_size = num_standard_resources * sizeof(*standard_resources);
-	standard_resources = memblock_alloc_or_panic(res_size, SMP_CACHE_BYTES);
+	standard_resources = memblock_alloc(res_size, SMP_CACHE_BYTES);
 
 	for_each_mem_region(region) {
 		res = &standard_resources[i++];
diff --git a/arch/loongarch/include/asm/dmi.h b/arch/loongarch/include/asm/dmi.h
index 605493417753..6305bc3ba15b 100644
--- a/arch/loongarch/include/asm/dmi.h
+++ b/arch/loongarch/include/asm/dmi.h
@@ -10,7 +10,7 @@
 #define dmi_early_remap(x, l)	dmi_remap(x, l)
 #define dmi_early_unmap(x, l)	dmi_unmap(x)
-#define dmi_alloc(l)		memblock_alloc(l, PAGE_SIZE)
+#define dmi_alloc(l)		memblock_alloc_no_panic(l, PAGE_SIZE)
 
 static inline void *dmi_remap(u64 phys_addr, unsigned long size)
 {
diff --git a/arch/loongarch/kernel/setup.c b/arch/loongarch/kernel/setup.c
index edcfdfcad7d2..56934fe58170 100644
--- a/arch/loongarch/kernel/setup.c
+++ b/arch/loongarch/kernel/setup.c
@@ -431,7 +431,7 @@ static void __init resource_init(void)
 	num_standard_resources = memblock.memory.cnt;
 	res_size = num_standard_resources * sizeof(*standard_resources);
-	standard_resources = memblock_alloc_or_panic(res_size, SMP_CACHE_BYTES);
+	standard_resources = memblock_alloc(res_size, SMP_CACHE_BYTES);
 
 	for_each_mem_region(region) {
 		res = &standard_resources[i++];
diff --git a/arch/loongarch/mm/init.c b/arch/loongarch/mm/init.c
index ca5aa5f46a9f..99b4d5cf3e9c 100644
--- a/arch/loongarch/mm/init.c
+++ b/arch/loongarch/mm/init.c
@@ -174,7 +174,7 @@ pte_t * __init populate_kernel_pte(unsigned long addr)
 	pmd_t *pmd;
 
 	if (p4d_none(p4dp_get(p4d))) {
-		pud = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+		pud = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 		p4d_populate(&init_mm, p4d, pud);
 #ifndef __PAGETABLE_PUD_FOLDED
 		pud_init(pud);
@@ -183,7 +183,7 @@ pte_t * __init populate_kernel_pte(unsigned long addr)
 	pud = pud_offset(p4d, addr);
 	if (pud_none(pudp_get(pud))) {
-		pmd = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+		pmd = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 		pud_populate(&init_mm, pud, pmd);
 #ifndef __PAGETABLE_PMD_FOLDED
 		pmd_init(pmd);
@@ -194,7 +194,7 @@ pte_t * __init populate_kernel_pte(unsigned long addr)
 	if (!pmd_present(pmdp_get(pmd))) {
 		pte_t *pte;
 
-		pte = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+		pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 		pmd_populate_kernel(&init_mm, pmd, pte);
 		kernel_pte_init(pte);
 	}
diff --git a/arch/m68k/mm/init.c b/arch/m68k/mm/init.c
index 8b11d0d545aa..1ccc238f33d9 100644
--- a/arch/m68k/mm/init.c
+++ b/arch/m68k/mm/init.c
@@ -68,7 +68,7 @@ void __init paging_init(void)
 
 	high_memory = (void *) end_mem;
 
-	empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+	empty_zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 	max_zone_pfn[ZONE_DMA] = end_mem >> PAGE_SHIFT;
 	free_area_init(max_zone_pfn);
 }
diff --git a/arch/m68k/mm/mcfmmu.c b/arch/m68k/mm/mcfmmu.c
index 19a75029036c..26bac0984964 100644
--- a/arch/m68k/mm/mcfmmu.c
+++ b/arch/m68k/mm/mcfmmu.c
@@ -42,14 +42,14 @@ void __init paging_init(void)
 	unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
 	int i;
 
-	empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+	empty_zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 
 	pg_dir = swapper_pg_dir;
 	memset(swapper_pg_dir, 0, sizeof(swapper_pg_dir));
 
 	size = num_pages * sizeof(pte_t);
 	size = (size + PAGE_SIZE) & ~(PAGE_SIZE-1);
-	next_pgtable = (unsigned long) memblock_alloc_or_panic(size, PAGE_SIZE);
+	next_pgtable = (unsigned long) memblock_alloc(size, PAGE_SIZE);
 
 	pg_dir += PAGE_OFFSET >> PGDIR_SHIFT;
diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index eab50dda14ee..ce016ae8c972 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -491,7 +491,7 @@ void __init paging_init(void)
 	 * initialize the bad page table and bad page to point
 	 * to a couple of allocated pages
 	 */
-	empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+	empty_zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 
 	/*
 	 * Set up SFC/DFC registers
diff --git a/arch/m68k/mm/sun3mmu.c b/arch/m68k/mm/sun3mmu.c
index 1ecf6bdd08bf..748645ac8cda 100644
--- a/arch/m68k/mm/sun3mmu.c
+++ b/arch/m68k/mm/sun3mmu.c
@@ -44,7 +44,7 @@ void __init paging_init(void)
 	unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, };
 	unsigned long size;
 
-	empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+	empty_zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 
 	address = PAGE_OFFSET;
 	pg_dir = swapper_pg_dir;
@@ -54,7 +54,7 @@ void __init paging_init(void)
 
 	size = num_pages * sizeof(pte_t);
 	size = (size + PAGE_SIZE) & ~(PAGE_SIZE-1);
-	next_pgtable = (unsigned long)memblock_alloc_or_panic(size, PAGE_SIZE);
+	next_pgtable = (unsigned long)memblock_alloc(size, PAGE_SIZE);
 	bootmem_end = (next_pgtable + size + PAGE_SIZE) & PAGE_MASK;
 
 	/* Map whole memory from PAGE_OFFSET (0x0E000000) */
diff --git a/arch/m68k/sun3/sun3dvma.c b/arch/m68k/sun3/sun3dvma.c
index 225fc735e466..681fcf83caa2 100644
--- a/arch/m68k/sun3/sun3dvma.c
+++ b/arch/m68k/sun3/sun3dvma.c
@@ -252,7 +252,7 @@ void __init dvma_init(void)
 
 	list_add(&(hole->list), &hole_list);
 
-	iommu_use = memblock_alloc_or_panic(IOMMU_TOTAL_ENTRIES * sizeof(unsigned long),
+	iommu_use = memblock_alloc(IOMMU_TOTAL_ENTRIES * sizeof(unsigned long),
 				   SMP_CACHE_BYTES);
 
 	dvma_unmap_iommu(DVMA_START, DVMA_SIZE);
diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
index fbfe0771317e..fcccff55dc77 100644
--- a/arch/mips/kernel/setup.c
+++ b/arch/mips/kernel/setup.c
@@ -704,7 +704,7 @@ static void __init resource_init(void)
 	for_each_mem_range(i, &start, &end) {
 		struct resource *res;
 
-		res = memblock_alloc_or_panic(sizeof(struct resource), SMP_CACHE_BYTES);
+		res = memblock_alloc(sizeof(struct resource), SMP_CACHE_BYTES);
 
 		res->start = start;
 		/*
diff --git a/arch/openrisc/mm/ioremap.c b/arch/openrisc/mm/ioremap.c
index 8e63e86251ca..e0f58f40c0ab 100644
--- a/arch/openrisc/mm/ioremap.c
+++ b/arch/openrisc/mm/ioremap.c
@@ -38,7 +38,7 @@ pte_t __ref *pte_alloc_one_kernel(struct mm_struct *mm)
 	if (likely(mem_init_done)) {
 		pte = (pte_t *)get_zeroed_page(GFP_KERNEL);
 	} else {
-		pte = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+		pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 	}
 
 	return pte;
diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
index 61c0a2477072..d587a7cf7fdb 100644
--- a/arch/parisc/mm/init.c
+++ b/arch/parisc/mm/init.c
@@ -377,7 +377,7 @@ static void __ref map_pages(unsigned long start_vaddr,
 #if CONFIG_PGTABLE_LEVELS == 3
 		if (pud_none(*pud)) {
-			pmd = memblock_alloc_or_panic(PAGE_SIZE << PMD_TABLE_ORDER,
+			pmd = memblock_alloc(PAGE_SIZE << PMD_TABLE_ORDER,
 					     PAGE_SIZE << PMD_TABLE_ORDER);
 			pud_populate(NULL, pud, pmd);
 		}
@@ -386,7 +386,7 @@ static void __ref map_pages(unsigned long start_vaddr,
 		pmd = pmd_offset(pud, vaddr);
 		for (tmp1 = start_pmd; tmp1 < PTRS_PER_PMD; tmp1++, pmd++) {
 			if (pmd_none(*pmd)) {
-				pg_table = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+				pg_table = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 				pmd_populate_kernel(NULL, pmd, pg_table);
 			}
@@ -644,7 +644,7 @@ static void __init pagetable_init(void)
 	}
 #endif
 
-	empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+	empty_zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 }
 
@@ -681,7 +681,7 @@ static void __init fixmap_init(void)
 #if CONFIG_PGTABLE_LEVELS == 3
 	if (pud_none(*pud)) {
-		pmd = memblock_alloc_or_panic(PAGE_SIZE << PMD_TABLE_ORDER,
+		pmd = memblock_alloc(PAGE_SIZE << PMD_TABLE_ORDER,
 				     PAGE_SIZE << PMD_TABLE_ORDER);
 		pud_populate(NULL, pud, pmd);
 	}
@@ -689,7 +689,7 @@ static void __init fixmap_init(void)
 	pmd = pmd_offset(pud, addr);
 
 	do {
-		pte_t *pte = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+		pte_t *pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 
 		pmd_populate_kernel(&init_mm, pmd, pte);
diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
index 3af6c06af02f..f00a3b607e06 100644
--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
+++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
@@ -1088,7 +1088,7 @@ static int __init dt_cpu_ftrs_scan_callback(unsigned long node, const char
 	of_scan_flat_dt_subnodes(node, count_cpufeatures_subnodes,
 				 &nr_dt_cpu_features);
 	dt_cpu_features =
-		memblock_alloc_or_panic(
+		memblock_alloc(
 			sizeof(struct dt_cpu_feature) * nr_dt_cpu_features,
 			PAGE_SIZE);
diff --git a/arch/powerpc/kernel/pci_32.c b/arch/powerpc/kernel/pci_32.c
index f8a3bd8cfae4..b56c853fc8be 100644
--- a/arch/powerpc/kernel/pci_32.c
+++ b/arch/powerpc/kernel/pci_32.c
@@ -213,7 +213,7 @@ pci_create_OF_bus_map(void)
 	struct property* of_prop;
 	struct device_node *dn;
 
-	of_prop = memblock_alloc_or_panic(sizeof(struct property) + 256,
+	of_prop = memblock_alloc(sizeof(struct property) + 256,
 				 SMP_CACHE_BYTES);
 	dn = of_find_node_by_path("/");
 	if (dn) {
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index f3ea1329c566..9c8bf12fdf3a 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -458,7 +458,7 @@ void __init smp_setup_cpu_maps(void)
 
 	DBG("smp_setup_cpu_maps()\n");
 
-	cpu_to_phys_id = memblock_alloc_or_panic(nr_cpu_ids * sizeof(u32),
+	cpu_to_phys_id = memblock_alloc(nr_cpu_ids * sizeof(u32),
 					__alignof__(u32));
 
 	for_each_node_by_type(dn, "cpu") {
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index 5a1bf501fbe1..ec440aa52fde 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -140,7 +140,7 @@ arch_initcall(ppc_init);
 
 static void *__init alloc_stack(void)
 {
-	return memblock_alloc_or_panic(THREAD_SIZE, THREAD_ALIGN);
+	return memblock_alloc(THREAD_SIZE, THREAD_ALIGN);
 }
 
 void __init irqstack_early_init(void)
diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
index be9c4106e22f..f18d2a1e0df6 100644
--- a/arch/powerpc/mm/book3s32/mmu.c
+++ b/arch/powerpc/mm/book3s32/mmu.c
@@ -377,7 +377,7 @@ void __init MMU_init_hw(void)
 	 * Find some memory for the hash table.
 	 */
 	if ( ppc_md.progress ) ppc_md.progress("hash:find piece", 0x322);
-	Hash = memblock_alloc_or_panic(Hash_size, Hash_size);
+	Hash = memblock_alloc(Hash_size, Hash_size);
 	_SDR1 = __pa(Hash) | SDR1_LOW_BITS;
 
 	pr_info("Total memory = %lldMB; using %ldkB for hash table\n",
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index ce64abea9e3e..21bf84a134c3 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -330,7 +330,7 @@ void __init mmu_partition_table_init(void)
 	unsigned long ptcr;
 
 	/* Initialize the Partition Table with no entries */
-	partition_tb = memblock_alloc_or_panic(patb_size, patb_size);
+	partition_tb = memblock_alloc(patb_size, patb_size);
 	ptcr = __pa(partition_tb) | (PATB_SIZE_SHIFT - 12);
 	set_ptcr_when_no_uv(ptcr);
 	powernv_set_nmmu_ptcr(ptcr);
diff --git a/arch/powerpc/mm/kasan/8xx.c b/arch/powerpc/mm/kasan/8xx.c
index 989d6cdf4141..c43b1b3bcaac 100644
--- a/arch/powerpc/mm/kasan/8xx.c
+++ b/arch/powerpc/mm/kasan/8xx.c
@@ -22,10 +22,9 @@ kasan_init_shadow_8M(unsigned long k_start, unsigned long k_end, void *block)
 		if ((void *)pmd_page_vaddr(*pmd) != kasan_early_shadow_pte)
 			continue;
 
-		ptep = memblock_alloc(PTE_FRAG_SIZE, PTE_FRAG_SIZE);
+		ptep = memblock_alloc_no_panic(PTE_FRAG_SIZE, PTE_FRAG_SIZE);
 		if (!ptep)
 			return -ENOMEM;
-
 		for (i = 0; i < PTRS_PER_PTE; i++) {
 			pte_t pte = pte_mkhuge(pfn_pte(PHYS_PFN(__pa(block + i * PAGE_SIZE)), PAGE_KERNEL));
@@ -45,7 +44,7 @@ int __init kasan_init_region(void *start, size_t size)
 	int ret;
 	void *block;
 
-	block = memblock_alloc(k_end - k_start, SZ_8M);
+	block = memblock_alloc_no_panic(k_end - k_start, SZ_8M);
 	if (!block)
 		return -ENOMEM;
diff --git a/arch/powerpc/mm/kasan/init_32.c b/arch/powerpc/mm/kasan/init_32.c
index 03666d790a53..226b9bfbb784 100644
--- a/arch/powerpc/mm/kasan/init_32.c
+++ b/arch/powerpc/mm/kasan/init_32.c
@@ -42,10 +42,10 @@ int __init kasan_init_shadow_page_tables(unsigned long k_start, unsigned long k_
 		if ((void *)pmd_page_vaddr(*pmd) != kasan_early_shadow_pte)
 			continue;
 
-		new = memblock_alloc(PTE_FRAG_SIZE, PTE_FRAG_SIZE);
-
+		new = memblock_alloc_no_panic(PTE_FRAG_SIZE, PTE_FRAG_SIZE);
 		if (!new)
 			return -ENOMEM;
+
 		kasan_populate_pte(new, PAGE_KERNEL);
 		pmd_populate_kernel(&init_mm, pmd, new);
 	}
@@ -65,7 +65,7 @@ int __init __weak kasan_init_region(void *start, size_t size)
 		return ret;
 
 	k_start = k_start & PAGE_MASK;
-	block = memblock_alloc(k_end - k_start, PAGE_SIZE);
+	block = memblock_alloc_no_panic(k_end - k_start, PAGE_SIZE);
 	if (!block)
 		return -ENOMEM;
 
@@ -129,7 +129,6 @@ void __init kasan_mmu_init(void)
 	if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE)) {
 		ret = kasan_init_shadow_page_tables(KASAN_SHADOW_START, KASAN_SHADOW_END);
-
 		if (ret)
 			panic("kasan: kasan_init_shadow_page_tables() failed");
 	}
diff --git a/arch/powerpc/mm/kasan/init_book3e_64.c b/arch/powerpc/mm/kasan/init_book3e_64.c
index 60c78aac0f63..43c03b84ff32 100644
--- a/arch/powerpc/mm/kasan/init_book3e_64.c
+++ b/arch/powerpc/mm/kasan/init_book3e_64.c
@@ -40,19 +40,19 @@ static int __init kasan_map_kernel_page(unsigned long ea, unsigned long pa, pgpr
 	pgdp = pgd_offset_k(ea);
 	p4dp = p4d_offset(pgdp, ea);
 	if (kasan_pud_table(*p4dp)) {
-		pudp = memblock_alloc_or_panic(PUD_TABLE_SIZE, PUD_TABLE_SIZE);
+		pudp = memblock_alloc(PUD_TABLE_SIZE, PUD_TABLE_SIZE);
 		memcpy(pudp, kasan_early_shadow_pud, PUD_TABLE_SIZE);
 		p4d_populate(&init_mm, p4dp, pudp);
 	}
 	pudp = pud_offset(p4dp, ea);
 	if (kasan_pmd_table(*pudp)) {
-		pmdp = memblock_alloc_or_panic(PMD_TABLE_SIZE, PMD_TABLE_SIZE);
+		pmdp = memblock_alloc(PMD_TABLE_SIZE, PMD_TABLE_SIZE);
 		memcpy(pmdp, kasan_early_shadow_pmd, PMD_TABLE_SIZE);
 		pud_populate(&init_mm, pudp, pmdp);
 	}
 	pmdp = pmd_offset(pudp, ea);
 	if (kasan_pte_table(*pmdp)) {
-		ptep = memblock_alloc_or_panic(PTE_TABLE_SIZE, PTE_TABLE_SIZE);
+		ptep = memblock_alloc(PTE_TABLE_SIZE, PTE_TABLE_SIZE);
 		memcpy(ptep, kasan_early_shadow_pte, PTE_TABLE_SIZE);
 		pmd_populate_kernel(&init_mm, pmdp, ptep);
 	}
@@ -74,7 +74,7 @@ static void __init kasan_init_phys_region(void *start, void *end)
 	k_start = ALIGN_DOWN((unsigned long)kasan_mem_to_shadow(start), PAGE_SIZE);
 	k_end = ALIGN((unsigned long)kasan_mem_to_shadow(end), PAGE_SIZE);
 
-	va = memblock_alloc_or_panic(k_end - k_start, PAGE_SIZE);
+	va = memblock_alloc(k_end - k_start, PAGE_SIZE);
 	for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE, va += PAGE_SIZE)
 		kasan_map_kernel_page(k_cur, __pa(va), PAGE_KERNEL);
 }
diff --git a/arch/powerpc/mm/kasan/init_book3s_64.c b/arch/powerpc/mm/kasan/init_book3s_64.c
index 7d959544c077..3fb5ce4f48f4 100644
--- a/arch/powerpc/mm/kasan/init_book3s_64.c
+++ b/arch/powerpc/mm/kasan/init_book3s_64.c
@@ -32,7 +32,7 @@ static void __init kasan_init_phys_region(void *start, void *end)
 	k_start = ALIGN_DOWN((unsigned long)kasan_mem_to_shadow(start), PAGE_SIZE);
 	k_end = ALIGN((unsigned long)kasan_mem_to_shadow(end), PAGE_SIZE);
 
-	va = memblock_alloc_or_panic(k_end - k_start, PAGE_SIZE);
+	va = memblock_alloc(k_end - k_start, PAGE_SIZE);
 	for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE, va += PAGE_SIZE)
 		map_kernel_page(k_cur, __pa(va), PAGE_KERNEL);
 }
diff --git a/arch/powerpc/mm/nohash/mmu_context.c b/arch/powerpc/mm/nohash/mmu_context.c
index a1a4e697251a..eb9ea3e88a10 100644
--- a/arch/powerpc/mm/nohash/mmu_context.c
+++ b/arch/powerpc/mm/nohash/mmu_context.c
@@ -385,11 +385,11 @@ void __init mmu_context_init(void)
 	/*
 	 * Allocate the maps used by context management
 	 */
-	context_map = memblock_alloc_or_panic(CTX_MAP_SIZE, SMP_CACHE_BYTES);
-	context_mm = memblock_alloc_or_panic(sizeof(void *) * (LAST_CONTEXT + 1),
+	context_map = memblock_alloc(CTX_MAP_SIZE, SMP_CACHE_BYTES);
+	context_mm = memblock_alloc(sizeof(void *) * (LAST_CONTEXT + 1),
 				    SMP_CACHE_BYTES);
 	if (IS_ENABLED(CONFIG_SMP)) {
-		stale_map[boot_cpuid] = memblock_alloc_or_panic(CTX_MAP_SIZE, SMP_CACHE_BYTES);
+		stale_map[boot_cpuid] = memblock_alloc(CTX_MAP_SIZE, SMP_CACHE_BYTES);
 		cpuhp_setup_state_nocalls(CPUHP_POWERPC_MMU_CTX_PREPARE,
 					  "powerpc/mmu/ctx:prepare",
 					  mmu_ctx_cpu_prepare, mmu_ctx_cpu_dead);
diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index 15276068f657..8a523d91512f 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -50,7 +50,7 @@ notrace void __init early_ioremap_init(void)
 
 void __init *early_alloc_pgtable(unsigned long size)
 {
-	return memblock_alloc_or_panic(size, size);
+	return memblock_alloc(size, size);
 }
diff --git a/arch/powerpc/platforms/powermac/nvram.c b/arch/powerpc/platforms/powermac/nvram.c
index a112d26185a0..e4fec71444cf 100644
--- a/arch/powerpc/platforms/powermac/nvram.c
+++ b/arch/powerpc/platforms/powermac/nvram.c
@@ -514,7 +514,7 @@ static int __init core99_nvram_setup(struct device_node *dp, unsigned long addr)
 		printk(KERN_ERR "nvram: no address\n");
 		return -EINVAL;
 	}
-	nvram_image = memblock_alloc_or_panic(NVRAM_SIZE, SMP_CACHE_BYTES);
+	nvram_image = memblock_alloc(NVRAM_SIZE, SMP_CACHE_BYTES);
 	nvram_data = ioremap(addr, NVRAM_SIZE*2);
 	nvram_naddrs = 1; /* Make sure we get the correct case */
diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
index 09bd93464b4f..5763f6e6eb1c 100644
--- a/arch/powerpc/platforms/powernv/opal.c
+++ b/arch/powerpc/platforms/powernv/opal.c
@@ -180,7 +180,7 @@ int __init early_init_dt_scan_recoverable_ranges(unsigned long node,
 	/*
 	 * Allocate a buffer to hold the MC recoverable ranges.
 	 */
-	mc_recoverable_range = memblock_alloc_or_panic(size, __alignof__(u64));
+	mc_recoverable_range = memblock_alloc(size, __alignof__(u64));
 
 	for (i = 0; i < mc_recoverable_range_len; i++) {
 		mc_recoverable_range[i].start_addr =
diff --git a/arch/powerpc/platforms/ps3/setup.c b/arch/powerpc/platforms/ps3/setup.c
index 150c09b58ae8..082935871b6d 100644
--- a/arch/powerpc/platforms/ps3/setup.c
+++ b/arch/powerpc/platforms/ps3/setup.c
@@ -115,7 +115,7 @@ static void __init prealloc(struct ps3_prealloc *p)
 	if (!p->size)
 		return;
 
-	p->address = memblock_alloc_or_panic(p->size, p->align);
+	p->address = memblock_alloc(p->size, p->align);
 
 	printk(KERN_INFO "%s: %lu bytes at %p\n", p->name, p->size, p->address);
diff --git a/arch/powerpc/sysdev/msi_bitmap.c b/arch/powerpc/sysdev/msi_bitmap.c
index 456a4f64ae0a..87ec0dc8db3b 100644
--- a/arch/powerpc/sysdev/msi_bitmap.c
+++ b/arch/powerpc/sysdev/msi_bitmap.c
@@ -124,7 +124,7 @@ int __ref msi_bitmap_alloc(struct msi_bitmap *bmp, unsigned int irq_count,
 	if (bmp->bitmap_from_slab)
 		bmp->bitmap = kzalloc(size, GFP_KERNEL);
 	else {
-		bmp->bitmap = memblock_alloc_or_panic(size, SMP_CACHE_BYTES);
+		bmp->bitmap = memblock_alloc(size, SMP_CACHE_BYTES);
 		/* the bitmap won't be freed from memblock allocator */
 		kmemleak_not_leak(bmp->bitmap);
 	}
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
index f1793630fc51..3087810c29ca 100644
--- a/arch/riscv/kernel/setup.c
+++ b/arch/riscv/kernel/setup.c
@@ -147,7 +147,7 @@ static void __init init_resources(void)
 	res_idx = num_resources - 1;
 
 	mem_res_sz = num_resources * sizeof(*mem_res);
-	mem_res = memblock_alloc_or_panic(mem_res_sz, SMP_CACHE_BYTES);
+	mem_res = memblock_alloc(mem_res_sz, SMP_CACHE_BYTES);
 
 	/*
 	 * Start by adding the reserved regions, if they overlap
diff --git 
a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c index 41c635d6aca4..c301c8d291d2 100644 --- a/arch/riscv/mm/kasan_init.c +++ b/arch/riscv/mm/kasan_init.c @@ -32,7 +32,7 @@ static void __init kasan_populate_pte(pmd_t *pmd, unsigned long vaddr, unsigned pte_t *ptep, *p; if (pmd_none(pmdp_get(pmd))) { - p = memblock_alloc_or_panic(PTRS_PER_PTE * sizeof(pte_t), PAGE_SIZE); + p = memblock_alloc(PTRS_PER_PTE * sizeof(pte_t), PAGE_SIZE); set_pmd(pmd, pfn_pmd(PFN_DOWN(__pa(p)), PAGE_TABLE)); } @@ -54,7 +54,7 @@ static void __init kasan_populate_pmd(pud_t *pud, unsigned long vaddr, unsigned unsigned long next; if (pud_none(pudp_get(pud))) { - p = memblock_alloc_or_panic(PTRS_PER_PMD * sizeof(pmd_t), PAGE_SIZE); + p = memblock_alloc(PTRS_PER_PMD * sizeof(pmd_t), PAGE_SIZE); set_pud(pud, pfn_pud(PFN_DOWN(__pa(p)), PAGE_TABLE)); } @@ -85,7 +85,7 @@ static void __init kasan_populate_pud(p4d_t *p4d, unsigned long next; if (p4d_none(p4dp_get(p4d))) { - p = memblock_alloc_or_panic(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE); + p = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE); set_p4d(p4d, pfn_p4d(PFN_DOWN(__pa(p)), PAGE_TABLE)); } @@ -116,7 +116,7 @@ static void __init kasan_populate_p4d(pgd_t *pgd, unsigned long next; if (pgd_none(pgdp_get(pgd))) { - p = memblock_alloc_or_panic(PTRS_PER_P4D * sizeof(p4d_t), PAGE_SIZE); + p = memblock_alloc(PTRS_PER_P4D * sizeof(p4d_t), PAGE_SIZE); set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(p)), PAGE_TABLE)); } @@ -385,7 +385,7 @@ static void __init kasan_shallow_populate_pud(p4d_t *p4d, next = pud_addr_end(vaddr, end); if (pud_none(pudp_get(pud_k))) { - p = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE); + p = memblock_alloc(PAGE_SIZE, PAGE_SIZE); set_pud(pud_k, pfn_pud(PFN_DOWN(__pa(p)), PAGE_TABLE)); continue; } @@ -405,7 +405,7 @@ static void __init kasan_shallow_populate_p4d(pgd_t *pgd, next = p4d_addr_end(vaddr, end); if (p4d_none(p4dp_get(p4d_k))) { - p = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE); + p = 
memblock_alloc(PAGE_SIZE, PAGE_SIZE); set_p4d(p4d_k, pfn_p4d(PFN_DOWN(__pa(p)), PAGE_TABLE)); continue; } @@ -424,7 +424,7 @@ static void __init kasan_shallow_populate_pgd(unsigned long vaddr, unsigned long next = pgd_addr_end(vaddr, end); if (pgd_none(pgdp_get(pgd_k))) { - p = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE); + p = memblock_alloc(PAGE_SIZE, PAGE_SIZE); set_pgd(pgd_k, pfn_pgd(PFN_DOWN(__pa(p)), PAGE_TABLE)); continue; } diff --git a/arch/s390/kernel/crash_dump.c b/arch/s390/kernel/crash_dump.c index f699df2a2b11..53cdfec9398c 100644 --- a/arch/s390/kernel/crash_dump.c +++ b/arch/s390/kernel/crash_dump.c @@ -63,7 +63,7 @@ struct save_area * __init save_area_alloc(bool is_boot_cpu) { struct save_area *sa; - sa = memblock_alloc(sizeof(*sa), 8); + sa = memblock_alloc_no_panic(sizeof(*sa), 8); if (!sa) return NULL; diff --git a/arch/s390/kernel/numa.c b/arch/s390/kernel/numa.c index a33e20f73330..1b589d575567 100644 --- a/arch/s390/kernel/numa.c +++ b/arch/s390/kernel/numa.c @@ -22,7 +22,7 @@ void __init numa_setup(void) node_set(0, node_possible_map); node_set_online(0); for (nid = 0; nid < MAX_NUMNODES; nid++) { - NODE_DATA(nid) = memblock_alloc_or_panic(sizeof(pg_data_t), 8); + NODE_DATA(nid) = memblock_alloc(sizeof(pg_data_t), 8); } NODE_DATA(0)->node_spanned_pages = memblock_end_of_DRAM() >> PAGE_SHIFT; NODE_DATA(0)->node_id = 0; diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c index f873535eddd2..e51426113f26 100644 --- a/arch/s390/kernel/setup.c +++ b/arch/s390/kernel/setup.c @@ -384,7 +384,7 @@ static unsigned long __init stack_alloc_early(void) { unsigned long stack; - stack = (unsigned long)memblock_alloc_or_panic(THREAD_SIZE, THREAD_SIZE); + stack = (unsigned long)memblock_alloc(THREAD_SIZE, THREAD_SIZE); return stack; } @@ -508,7 +508,7 @@ static void __init setup_resources(void) bss_resource.end = __pa_symbol(__bss_stop) - 1; for_each_mem_range(i, &start, &end) { - res = memblock_alloc_or_panic(sizeof(*res), 8); + res = 
memblock_alloc(sizeof(*res), 8); res->flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM; res->name = "System RAM"; @@ -527,7 +527,7 @@ static void __init setup_resources(void) std_res->start > res->end) continue; if (std_res->end > res->end) { - sub_res = memblock_alloc_or_panic(sizeof(*sub_res), 8); + sub_res = memblock_alloc(sizeof(*sub_res), 8); *sub_res = *std_res; sub_res->end = res->end; std_res->start = res->end + 1; @@ -814,7 +814,7 @@ static void __init setup_randomness(void) { struct sysinfo_3_2_2 *vmms; - vmms = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE); + vmms = memblock_alloc(PAGE_SIZE, PAGE_SIZE); if (stsi(vmms, 3, 2, 2) == 0 && vmms->count) add_device_randomness(&vmms->vm, sizeof(vmms->vm[0]) * vmms->count); memblock_free(vmms, PAGE_SIZE); diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c index d77aaefb59bd..9eb4508b4ca4 100644 --- a/arch/s390/kernel/smp.c +++ b/arch/s390/kernel/smp.c @@ -613,7 +613,7 @@ void __init smp_save_dump_ipl_cpu(void) sa = save_area_alloc(true); if (!sa) panic("could not allocate memory for boot CPU save area\n"); - regs = memblock_alloc_or_panic(512, 8); + regs = memblock_alloc(512, 8); copy_oldmem_kernel(regs, __LC_FPREGS_SAVE_AREA, 512); save_area_add_regs(sa, regs); memblock_free(regs, 512); @@ -792,7 +792,7 @@ void __init smp_detect_cpus(void) u16 address; /* Get CPU information */ - info = memblock_alloc_or_panic(sizeof(*info), 8); + info = memblock_alloc(sizeof(*info), 8); smp_get_core_info(info, 1); /* Find boot CPU type */ if (sclp.has_core_type) { diff --git a/arch/s390/kernel/topology.c b/arch/s390/kernel/topology.c index cf5ee6032c0b..fef1c7b4951d 100644 --- a/arch/s390/kernel/topology.c +++ b/arch/s390/kernel/topology.c @@ -548,7 +548,7 @@ static void __init alloc_masks(struct sysinfo_15_1_x *info, nr_masks *= info->mag[TOPOLOGY_NR_MAG - offset - 1 - i]; nr_masks = max(nr_masks, 1); for (i = 0; i < nr_masks; i++) { - mask->next = memblock_alloc_or_panic(sizeof(*mask->next), 8); + mask->next = 
memblock_alloc(sizeof(*mask->next), 8); mask = mask->next; } } @@ -566,7 +566,7 @@ void __init topology_init_early(void) } if (!MACHINE_HAS_TOPOLOGY) goto out; - tl_info = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE); + tl_info = memblock_alloc(PAGE_SIZE, PAGE_SIZE); info = tl_info; store_topology(info); pr_info("The CPU configuration topology of the machine is: %d %d %d %d %d %d / %d\n", diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c index 665b8228afeb..df43575564a3 100644 --- a/arch/s390/mm/vmem.c +++ b/arch/s390/mm/vmem.c @@ -33,7 +33,7 @@ static void __ref *vmem_alloc_pages(unsigned int order) if (slab_is_available()) return (void *)__get_free_pages(GFP_KERNEL, order); - return memblock_alloc(size, size); + return memblock_alloc_no_panic(size, size); } static void vmem_free_pages(unsigned long addr, int order, struct vmem_altmap *altmap) @@ -69,7 +69,7 @@ pte_t __ref *vmem_pte_alloc(void) if (slab_is_available()) pte = (pte_t *) page_table_alloc(&init_mm); else - pte = (pte_t *) memblock_alloc(size, size); + pte = (pte_t *) memblock_alloc_no_panic(size, size); if (!pte) return NULL; memset64((u64 *)pte, _PAGE_INVALID, PTRS_PER_PTE); diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c index 289a2fecebef..d64c4b54e289 100644 --- a/arch/sh/mm/init.c +++ b/arch/sh/mm/init.c @@ -137,7 +137,7 @@ static pmd_t * __init one_md_table_init(pud_t *pud) if (pud_none(*pud)) { pmd_t *pmd; - pmd = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE); + pmd = memblock_alloc(PAGE_SIZE, PAGE_SIZE); pud_populate(&init_mm, pud, pmd); BUG_ON(pmd != pmd_offset(pud, 0)); } @@ -150,7 +150,7 @@ static pte_t * __init one_page_table_init(pmd_t *pmd) if (pmd_none(*pmd)) { pte_t *pte; - pte = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE); + pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE); pmd_populate_kernel(&init_mm, pmd, pte); BUG_ON(pte != pte_offset_kernel(pmd, 0)); } diff --git a/arch/sparc/kernel/prom_32.c b/arch/sparc/kernel/prom_32.c index a67dd67f10c8..e6dfa3895bb5 100644 --- 
a/arch/sparc/kernel/prom_32.c +++ b/arch/sparc/kernel/prom_32.c @@ -28,7 +28,7 @@ void * __init prom_early_alloc(unsigned long size) { void *ret; - ret = memblock_alloc_or_panic(size, SMP_CACHE_BYTES); + ret = memblock_alloc(size, SMP_CACHE_BYTES); prom_early_allocated += size; diff --git a/arch/sparc/kernel/prom_64.c b/arch/sparc/kernel/prom_64.c index ba82884cb92a..197771fdf8cc 100644 --- a/arch/sparc/kernel/prom_64.c +++ b/arch/sparc/kernel/prom_64.c @@ -30,7 +30,7 @@ void * __init prom_early_alloc(unsigned long size) { - void *ret = memblock_alloc(size, SMP_CACHE_BYTES); + void *ret = memblock_alloc_no_panic(size, SMP_CACHE_BYTES); if (!ret) { prom_printf("prom_early_alloc(%lu) failed\n", size); diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c index d96a14ffceeb..65a4d8ec3972 100644 --- a/arch/sparc/mm/init_32.c +++ b/arch/sparc/mm/init_32.c @@ -265,7 +265,7 @@ void __init mem_init(void) i = last_valid_pfn >> ((20 - PAGE_SHIFT) + 5); i += 1; sparc_valid_addr_bitmap = (unsigned long *) - memblock_alloc(i << 2, SMP_CACHE_BYTES); + memblock_alloc_no_panic(i << 2, SMP_CACHE_BYTES); if (sparc_valid_addr_bitmap == NULL) { prom_printf("mem_init: Cannot alloc valid_addr_bitmap.\n"); diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c index dd32711022f5..4a7d558ed0c9 100644 --- a/arch/sparc/mm/srmmu.c +++ b/arch/sparc/mm/srmmu.c @@ -277,12 +277,12 @@ static void __init srmmu_nocache_init(void) bitmap_bits = srmmu_nocache_size >> SRMMU_NOCACHE_BITMAP_SHIFT; - srmmu_nocache_pool = memblock_alloc_or_panic(srmmu_nocache_size, + srmmu_nocache_pool = memblock_alloc(srmmu_nocache_size, SRMMU_NOCACHE_ALIGN_MAX); memset(srmmu_nocache_pool, 0, srmmu_nocache_size); srmmu_nocache_bitmap = - memblock_alloc_or_panic(BITS_TO_LONGS(bitmap_bits) * sizeof(long), + memblock_alloc(BITS_TO_LONGS(bitmap_bits) * sizeof(long), SMP_CACHE_BYTES); bit_map_init(&srmmu_nocache_map, srmmu_nocache_bitmap, bitmap_bits); @@ -446,7 +446,7 @@ static void __init sparc_context_init(int 
numctx) unsigned long size; size = numctx * sizeof(struct ctx_list); - ctx_list_pool = memblock_alloc_or_panic(size, SMP_CACHE_BYTES); + ctx_list_pool = memblock_alloc(size, SMP_CACHE_BYTES); for (ctx = 0; ctx < numctx; ctx++) { struct ctx_list *clist; diff --git a/arch/um/drivers/net_kern.c b/arch/um/drivers/net_kern.c index d5a9c5aabaec..cf3a20440293 100644 --- a/arch/um/drivers/net_kern.c +++ b/arch/um/drivers/net_kern.c @@ -636,7 +636,7 @@ static int __init eth_setup(char *str) return 1; } - new = memblock_alloc_or_panic(sizeof(*new), SMP_CACHE_BYTES); + new = memblock_alloc(sizeof(*new), SMP_CACHE_BYTES); INIT_LIST_HEAD(&new->list); new->index = n; diff --git a/arch/um/drivers/vector_kern.c b/arch/um/drivers/vector_kern.c index 85b129e2b70b..096fefb73e09 100644 --- a/arch/um/drivers/vector_kern.c +++ b/arch/um/drivers/vector_kern.c @@ -1694,7 +1694,7 @@ static int __init vector_setup(char *str) str, error); return 1; } - new = memblock_alloc_or_panic(sizeof(*new), SMP_CACHE_BYTES); + new = memblock_alloc(sizeof(*new), SMP_CACHE_BYTES); INIT_LIST_HEAD(&new->list); new->unit = n; new->arguments = str; diff --git a/arch/um/kernel/load_file.c b/arch/um/kernel/load_file.c index cb9d178ab7d8..00e0b789e5ab 100644 --- a/arch/um/kernel/load_file.c +++ b/arch/um/kernel/load_file.c @@ -48,7 +48,7 @@ void *uml_load_file(const char *filename, unsigned long long *size) return NULL; } - area = memblock_alloc_or_panic(*size, SMP_CACHE_BYTES); + area = memblock_alloc(*size, SMP_CACHE_BYTES); if (__uml_load_file(filename, area, *size)) { memblock_free(area, *size); diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c index a3c9b7c67640..6dde4ebc7b8e 100644 --- a/arch/x86/coco/sev/core.c +++ b/arch/x86/coco/sev/core.c @@ -1572,7 +1572,7 @@ static void __init alloc_runtime_data(int cpu) struct svsm_ca *caa; /* Allocate the SVSM CA page if an SVSM is present */ - caa = memblock_alloc_or_panic(sizeof(*caa), PAGE_SIZE); + caa = memblock_alloc(sizeof(*caa), PAGE_SIZE); 
per_cpu(svsm_caa, cpu) = caa; per_cpu(svsm_caa_pa, cpu) = __pa(caa); diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c index 7c15d6e83c37..a54cdd5be071 100644 --- a/arch/x86/kernel/acpi/boot.c +++ b/arch/x86/kernel/acpi/boot.c @@ -911,7 +911,7 @@ static int __init acpi_parse_hpet(struct acpi_table_header *table) * the resource tree during the lateinit timeframe. */ #define HPET_RESOURCE_NAME_SIZE 9 - hpet_res = memblock_alloc_or_panic(sizeof(*hpet_res) + HPET_RESOURCE_NAME_SIZE, + hpet_res = memblock_alloc(sizeof(*hpet_res) + HPET_RESOURCE_NAME_SIZE, SMP_CACHE_BYTES); hpet_res->name = (void *)&hpet_res[1]; diff --git a/arch/x86/kernel/acpi/madt_wakeup.c b/arch/x86/kernel/acpi/madt_wakeup.c index d5ef6215583b..bae8b3452834 100644 --- a/arch/x86/kernel/acpi/madt_wakeup.c +++ b/arch/x86/kernel/acpi/madt_wakeup.c @@ -62,7 +62,7 @@ static void acpi_mp_cpu_die(unsigned int cpu) /* The argument is required to match type of x86_mapping_info::alloc_pgt_page */ static void __init *alloc_pgt_page(void *dummy) { - return memblock_alloc(PAGE_SIZE, PAGE_SIZE); + return memblock_alloc_no_panic(PAGE_SIZE, PAGE_SIZE); } static void __init free_pgt_page(void *pgt, void *dummy) diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c index a57d3fa7c6b6..ebb00747f135 100644 --- a/arch/x86/kernel/apic/io_apic.c +++ b/arch/x86/kernel/apic/io_apic.c @@ -2503,7 +2503,7 @@ static struct resource * __init ioapic_setup_resources(void) n = IOAPIC_RESOURCE_NAME_SIZE + sizeof(struct resource); n *= nr_ioapics; - mem = memblock_alloc_or_panic(n, SMP_CACHE_BYTES); + mem = memblock_alloc(n, SMP_CACHE_BYTES); res = (void *)mem; mem += sizeof(struct resource) * nr_ioapics; @@ -2562,7 +2562,7 @@ void __init io_apic_init_mappings(void) #ifdef CONFIG_X86_32 fake_ioapic_page: #endif - ioapic_phys = (unsigned long)memblock_alloc_or_panic(PAGE_SIZE, + ioapic_phys = (unsigned long)memblock_alloc(PAGE_SIZE, PAGE_SIZE); ioapic_phys = __pa(ioapic_phys); } diff --git 
a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c index 82b96ed9890a..7c9b25c5f209 100644 --- a/arch/x86/kernel/e820.c +++ b/arch/x86/kernel/e820.c @@ -1146,7 +1146,7 @@ void __init e820__reserve_resources(void) struct resource *res; u64 end; - res = memblock_alloc_or_panic(sizeof(*res) * e820_table->nr_entries, + res = memblock_alloc(sizeof(*res) * e820_table->nr_entries, SMP_CACHE_BYTES); e820_res = res; diff --git a/arch/x86/platform/olpc/olpc_dt.c b/arch/x86/platform/olpc/olpc_dt.c index cf5dca2dbb91..90be2eef3910 100644 --- a/arch/x86/platform/olpc/olpc_dt.c +++ b/arch/x86/platform/olpc/olpc_dt.c @@ -136,7 +136,7 @@ void * __init prom_early_alloc(unsigned long size) * fast enough on the platforms we care about while minimizing * wasted bootmem) and hand off chunks of it to callers. */ - res = memblock_alloc_or_panic(chunk_size, SMP_CACHE_BYTES); + res = memblock_alloc(chunk_size, SMP_CACHE_BYTES); prom_early_allocated += chunk_size; memset(res, 0, chunk_size); free_mem = chunk_size; diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c index 56914e21e303..468cfdcf9147 100644 --- a/arch/x86/xen/p2m.c +++ b/arch/x86/xen/p2m.c @@ -178,7 +178,7 @@ static void p2m_init_identity(unsigned long *p2m, unsigned long pfn) static void * __ref alloc_p2m_page(void) { if (unlikely(!slab_is_available())) { - return memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE); + return memblock_alloc(PAGE_SIZE, PAGE_SIZE); } return (void *)__get_free_page(GFP_KERNEL); diff --git a/arch/xtensa/mm/kasan_init.c b/arch/xtensa/mm/kasan_init.c index f39c4d83173a..50ee88d9f2cc 100644 --- a/arch/xtensa/mm/kasan_init.c +++ b/arch/xtensa/mm/kasan_init.c @@ -39,7 +39,7 @@ static void __init populate(void *start, void *end) unsigned long i, j; unsigned long vaddr = (unsigned long)start; pmd_t *pmd = pmd_off_k(vaddr); - pte_t *pte = memblock_alloc_or_panic(n_pages * sizeof(pte_t), PAGE_SIZE); + pte_t *pte = memblock_alloc(n_pages * sizeof(pte_t), PAGE_SIZE); pr_debug("%s: %p - %p\n", __func__, start, 
end); diff --git a/arch/xtensa/platforms/iss/network.c b/arch/xtensa/platforms/iss/network.c index e89f27f2bb18..74e2a149f35f 100644 --- a/arch/xtensa/platforms/iss/network.c +++ b/arch/xtensa/platforms/iss/network.c @@ -604,7 +604,7 @@ static int __init iss_net_setup(char *str) return 1; } - new = memblock_alloc(sizeof(*new), SMP_CACHE_BYTES); + new = memblock_alloc_no_panic(sizeof(*new), SMP_CACHE_BYTES); if (new == NULL) { pr_err("Alloc_bootmem failed\n"); return 1; diff --git a/drivers/clk/ti/clk.c b/drivers/clk/ti/clk.c index 9c75dcc9a534..7ef6f6e1d063 100644 --- a/drivers/clk/ti/clk.c +++ b/drivers/clk/ti/clk.c @@ -449,7 +449,7 @@ void __init omap2_clk_legacy_provider_init(int index, void __iomem *mem) { struct clk_iomap *io; - io = memblock_alloc_or_panic(sizeof(*io), SMP_CACHE_BYTES); + io = memblock_alloc(sizeof(*io), SMP_CACHE_BYTES); io->mem = mem; diff --git a/drivers/firmware/memmap.c b/drivers/firmware/memmap.c index 55b9cfad8a04..4cef459855c2 100644 --- a/drivers/firmware/memmap.c +++ b/drivers/firmware/memmap.c @@ -325,7 +325,7 @@ int __init firmware_map_add_early(u64 start, u64 end, const char *type) { struct firmware_map_entry *entry; - entry = memblock_alloc(sizeof(struct firmware_map_entry), + entry = memblock_alloc_no_panic(sizeof(struct firmware_map_entry), SMP_CACHE_BYTES); if (WARN_ON(!entry)) return -ENOMEM; diff --git a/drivers/macintosh/smu.c b/drivers/macintosh/smu.c index a1534cc6c641..e93fbe71ed90 100644 --- a/drivers/macintosh/smu.c +++ b/drivers/macintosh/smu.c @@ -492,7 +492,7 @@ int __init smu_init (void) goto fail_np; } - smu = memblock_alloc_or_panic(sizeof(struct smu_device), SMP_CACHE_BYTES); + smu = memblock_alloc(sizeof(struct smu_device), SMP_CACHE_BYTES); spin_lock_init(&smu->lock); INIT_LIST_HEAD(&smu->cmd_list); INIT_LIST_HEAD(&smu->cmd_i2c_list); diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c index 2eb718fbeffd..d1f33510db67 100644 --- a/drivers/of/fdt.c +++ b/drivers/of/fdt.c @@ -1126,7 +1126,7 @@ void __init __weak 
early_init_dt_add_memory_arch(u64 base, u64 size) static void * __init early_init_dt_alloc_memory_arch(u64 size, u64 align) { - return memblock_alloc_or_panic(size, align); + return memblock_alloc(size, align); } bool __init early_init_dt_verify(void *dt_virt, phys_addr_t dt_phys) diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c index 45517b9e57b1..f46c8639b535 100644 --- a/drivers/of/of_reserved_mem.c +++ b/drivers/of/of_reserved_mem.c @@ -79,7 +79,7 @@ static void __init alloc_reserved_mem_array(void) return; } - new_array = memblock_alloc(alloc_size, SMP_CACHE_BYTES); + new_array = memblock_alloc_no_panic(alloc_size, SMP_CACHE_BYTES); if (!new_array) { pr_err("Failed to allocate memory for reserved_mem array with err: %d", -ENOMEM); return; diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c index 6e8561dba537..e8b0c8d430c2 100644 --- a/drivers/of/unittest.c +++ b/drivers/of/unittest.c @@ -3666,7 +3666,7 @@ static struct device_node *overlay_base_root; static void * __init dt_alloc_memory(u64 size, u64 align) { - return memblock_alloc_or_panic(size, align); + return memblock_alloc(size, align); } /* diff --git a/drivers/usb/early/xhci-dbc.c b/drivers/usb/early/xhci-dbc.c index 341408410ed9..2f4172c98eaf 100644 --- a/drivers/usb/early/xhci-dbc.c +++ b/drivers/usb/early/xhci-dbc.c @@ -94,7 +94,7 @@ static void * __init xdbc_get_page(dma_addr_t *dma_addr) { void *virt; - virt = memblock_alloc(PAGE_SIZE, PAGE_SIZE); + virt = memblock_alloc_no_panic(PAGE_SIZE, PAGE_SIZE); if (!virt) return NULL; diff --git a/include/linux/memblock.h b/include/linux/memblock.h index dee628350cd1..6b21a3834225 100644 --- a/include/linux/memblock.h +++ b/include/linux/memblock.h @@ -417,11 +417,13 @@ static __always_inline void *memblock_alloc(phys_addr_t size, phys_addr_t align) MEMBLOCK_ALLOC_ACCESSIBLE, NUMA_NO_NODE); } -void *__memblock_alloc_or_panic(phys_addr_t size, phys_addr_t align, - const char *func); +void *__memblock_alloc_panic(phys_addr_t 
size, phys_addr_t align, + const char *func, bool should_panic); -#define memblock_alloc_or_panic(size, align) \ - __memblock_alloc_or_panic(size, align, __func__) +#define memblock_alloc(size, align) \ + __memblock_alloc_panic(size, align, __func__, true) +#define memblock_alloc_no_panic(size, align) \ + __memblock_alloc_panic(size, align, __func__, false) static inline void *memblock_alloc_raw(phys_addr_t size, phys_addr_t align) diff --git a/init/main.c b/init/main.c index 4bae539ebc05..302f85078e2b 100644 --- a/init/main.c +++ b/init/main.c @@ -379,7 +379,7 @@ static char * __init xbc_make_cmdline(const char *key) if (len <= 0) return NULL; - new_cmdline = memblock_alloc(len + 1, SMP_CACHE_BYTES); + new_cmdline = memblock_alloc_no_panic(len + 1, SMP_CACHE_BYTES); if (!new_cmdline) { pr_err("Failed to allocate memory for extra kernel cmdline.\n"); return NULL; @@ -640,11 +640,11 @@ static void __init setup_command_line(char *command_line) len = xlen + strlen(boot_command_line) + ilen + 1; - saved_command_line = memblock_alloc_or_panic(len, SMP_CACHE_BYTES); + saved_command_line = memblock_alloc(len, SMP_CACHE_BYTES); len = xlen + strlen(command_line) + 1; - static_command_line = memblock_alloc_or_panic(len, SMP_CACHE_BYTES); + static_command_line = memblock_alloc(len, SMP_CACHE_BYTES); if (xlen) { /* @@ -860,7 +860,7 @@ static void __init print_unknown_bootoptions(void) len += strlen(*p); } - unknown_options = memblock_alloc(len, SMP_CACHE_BYTES); + unknown_options = memblock_alloc_no_panic(len, SMP_CACHE_BYTES); if (!unknown_options) { pr_err("%s: Failed to allocate %zu bytes\n", __func__, len); @@ -1141,9 +1141,9 @@ static int __init initcall_blacklist(char *str) str_entry = strsep(&str, ","); if (str_entry) { pr_debug("blacklisting initcall %s\n", str_entry); - entry = memblock_alloc_or_panic(sizeof(*entry), + entry = memblock_alloc(sizeof(*entry), SMP_CACHE_BYTES); - entry->buf = memblock_alloc_or_panic(strlen(str_entry) + 1, + entry->buf = 
memblock_alloc(strlen(str_entry) + 1, SMP_CACHE_BYTES); strcpy(entry->buf, str_entry); list_add(&entry->next, &blacklisted_initcalls); diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index abcf3fa63a56..85381f2b8ab3 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -328,7 +328,7 @@ static void __init *swiotlb_memblock_alloc(unsigned long nslabs, * memory encryption. */ if (flags & SWIOTLB_ANY) - tlb = memblock_alloc(bytes, PAGE_SIZE); + tlb = memblock_alloc_no_panic(bytes, PAGE_SIZE); else tlb = memblock_alloc_low(bytes, PAGE_SIZE); @@ -396,14 +396,14 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags, } alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs)); - mem->slots = memblock_alloc(alloc_size, PAGE_SIZE); + mem->slots = memblock_alloc_no_panic(alloc_size, PAGE_SIZE); if (!mem->slots) { pr_warn("%s: Failed to allocate %zu bytes align=0x%lx\n", __func__, alloc_size, PAGE_SIZE); return; } - mem->areas = memblock_alloc(array_size(sizeof(struct io_tlb_area), + mem->areas = memblock_alloc_no_panic(array_size(sizeof(struct io_tlb_area), nareas), SMP_CACHE_BYTES); if (!mem->areas) { pr_warn("%s: Failed to allocate mem->areas.\n", __func__); diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c index c9fb559a6399..18604fc4103d 100644 --- a/kernel/power/snapshot.c +++ b/kernel/power/snapshot.c @@ -1011,7 +1011,7 @@ void __init register_nosave_region(unsigned long start_pfn, unsigned long end_pf } } /* This allocation cannot fail */ - region = memblock_alloc_or_panic(sizeof(struct nosave_region), + region = memblock_alloc(sizeof(struct nosave_region), SMP_CACHE_BYTES); region->start_pfn = start_pfn; region->end_pfn = end_pfn; diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c index 80910bc3470c..6a7801b1d283 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -1211,7 +1211,7 @@ void __init setup_log_buf(int early) goto out; } - new_log_buf = memblock_alloc(new_log_buf_len, 
LOG_ALIGN); + new_log_buf = memblock_alloc_no_panic(new_log_buf_len, LOG_ALIGN); if (unlikely(!new_log_buf)) { pr_err("log_buf_len: %lu text bytes not available\n", new_log_buf_len); @@ -1219,7 +1219,7 @@ void __init setup_log_buf(int early) } new_descs_size = new_descs_count * sizeof(struct prb_desc); - new_descs = memblock_alloc(new_descs_size, LOG_ALIGN); + new_descs = memblock_alloc_no_panic(new_descs_size, LOG_ALIGN); if (unlikely(!new_descs)) { pr_err("log_buf_len: %zu desc bytes not available\n", new_descs_size); @@ -1227,7 +1227,7 @@ void __init setup_log_buf(int early) } new_infos_size = new_descs_count * sizeof(struct printk_info); - new_infos = memblock_alloc(new_infos_size, LOG_ALIGN); + new_infos = memblock_alloc_no_panic(new_infos_size, LOG_ALIGN); if (unlikely(!new_infos)) { pr_err("log_buf_len: %zu info bytes not available\n", new_infos_size); diff --git a/lib/cpumask.c b/lib/cpumask.c index 57274ba8b6d9..d638587f97df 100644 --- a/lib/cpumask.c +++ b/lib/cpumask.c @@ -83,7 +83,7 @@ EXPORT_SYMBOL(alloc_cpumask_var_node); */ void __init alloc_bootmem_cpumask_var(cpumask_var_t *mask) { - *mask = memblock_alloc_or_panic(cpumask_size(), SMP_CACHE_BYTES); + *mask = memblock_alloc(cpumask_size(), SMP_CACHE_BYTES); } /** diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c index d65d48b85f90..129acb1d2fe1 100644 --- a/mm/kasan/tags.c +++ b/mm/kasan/tags.c @@ -86,7 +86,7 @@ void __init kasan_init_tags(void) if (kasan_stack_collection_enabled()) { if (!stack_ring.size) stack_ring.size = KASAN_STACK_RING_SIZE_DEFAULT; - stack_ring.entries = memblock_alloc( + stack_ring.entries = memblock_alloc_no_panic( sizeof(stack_ring.entries[0]) * stack_ring.size, SMP_CACHE_BYTES); if (WARN_ON(!stack_ring.entries)) diff --git a/mm/kfence/core.c b/mm/kfence/core.c index 67fc321db79b..4676a5557e60 100644 --- a/mm/kfence/core.c +++ b/mm/kfence/core.c @@ -869,7 +869,7 @@ void __init kfence_alloc_pool_and_metadata(void) * re-allocate the memory pool. 
*/ if (!__kfence_pool) - __kfence_pool = memblock_alloc(KFENCE_POOL_SIZE, PAGE_SIZE); + __kfence_pool = memblock_alloc_no_panic(KFENCE_POOL_SIZE, PAGE_SIZE); if (!__kfence_pool) { pr_err("failed to allocate pool\n"); @@ -877,7 +877,7 @@ void __init kfence_alloc_pool_and_metadata(void) } /* The memory allocated by memblock has been zeroed out. */ - kfence_metadata_init = memblock_alloc(KFENCE_METADATA_SIZE, PAGE_SIZE); + kfence_metadata_init = memblock_alloc_no_panic(KFENCE_METADATA_SIZE, PAGE_SIZE); if (!kfence_metadata_init) { pr_err("failed to allocate metadata\n"); memblock_free(__kfence_pool, KFENCE_POOL_SIZE); diff --git a/mm/kmsan/shadow.c b/mm/kmsan/shadow.c index 1bb505a08415..938ca5eb6df7 100644 --- a/mm/kmsan/shadow.c +++ b/mm/kmsan/shadow.c @@ -280,8 +280,8 @@ void __init kmsan_init_alloc_meta_for_range(void *start, void *end) start = (void *)PAGE_ALIGN_DOWN((u64)start); size = PAGE_ALIGN((u64)end - (u64)start); - shadow = memblock_alloc_or_panic(size, PAGE_SIZE); - origin = memblock_alloc_or_panic(size, PAGE_SIZE); + shadow = memblock_alloc(size, PAGE_SIZE); + origin = memblock_alloc(size, PAGE_SIZE); for (u64 addr = 0; addr < size; addr += PAGE_SIZE) { page = virt_to_page_or_null((char *)start + addr); diff --git a/mm/memblock.c b/mm/memblock.c index 95af35fd1389..901da45ecf8b 100644 --- a/mm/memblock.c +++ b/mm/memblock.c @@ -1692,21 +1692,23 @@ void * __init memblock_alloc_try_nid( } /** - * __memblock_alloc_or_panic - Try to allocate memory and panic on failure + * __memblock_alloc_panic - Try to allocate memory, optionally panicking on failure * @size: size of memory block to be allocated in bytes * @align: alignment of the region and block's size * @func: caller func name + * @should_panic: whether to panic if the allocation fails + * - * This function attempts to allocate memory using memblock_alloc, - * and in case of failure, it calls panic with the formatted message. - * This function should not be used directly, please use the macro memblock_alloc_or_panic.
+ * If @should_panic is true, allocation failure triggers a panic with a formatted message. + * This function should not be used directly; please use the macros + * memblock_alloc and memblock_alloc_no_panic. */ -void *__init __memblock_alloc_or_panic(phys_addr_t size, phys_addr_t align, - const char *func) +void *__init __memblock_alloc_panic(phys_addr_t size, phys_addr_t align, + const char *func, bool should_panic) { - void *addr = memblock_alloc(size, align); + void *addr = memblock_alloc_try_nid(size, align, MEMBLOCK_LOW_LIMIT, + MEMBLOCK_ALLOC_ACCESSIBLE, NUMA_NO_NODE); - if (unlikely(!addr)) + if (unlikely(!addr && should_panic)) panic("%s: Failed to allocate %pap bytes\n", func, &size); return addr; } diff --git a/mm/numa.c b/mm/numa.c index f1787d7713a6..9442448dc74f 100644 --- a/mm/numa.c +++ b/mm/numa.c @@ -37,7 +37,7 @@ void __init alloc_node_data(int nid) void __init alloc_offline_node_data(int nid) { pg_data_t *pgdat; - node_data[nid] = memblock_alloc_or_panic(sizeof(*pgdat), SMP_CACHE_BYTES); + node_data[nid] = memblock_alloc(sizeof(*pgdat), SMP_CACHE_BYTES); } /* Stub functions: */ diff --git a/mm/numa_emulation.c b/mm/numa_emulation.c index 031fb9961bf7..958dc5a1715c 100644 --- a/mm/numa_emulation.c +++ b/mm/numa_emulation.c @@ -447,7 +447,7 @@ void __init numa_emulation(struct numa_meminfo *numa_meminfo, int numa_dist_cnt) /* copy the physical distance table */ if (numa_dist_cnt) { - phys_dist = memblock_alloc(phys_size, PAGE_SIZE); + phys_dist = memblock_alloc_no_panic(phys_size, PAGE_SIZE); if (!phys_dist) { pr_warn("NUMA: Warning: can't allocate copy of distance table, disabling emulation\n"); goto no_emu; diff --git a/mm/numa_memblks.c b/mm/numa_memblks.c index a3877e9bc878..549a6d6607c6 100644 --- a/mm/numa_memblks.c +++ b/mm/numa_memblks.c @@ -61,7 +61,7 @@ static int __init numa_alloc_distance(void) cnt++; size = cnt * cnt * sizeof(numa_distance[0]); - numa_distance = memblock_alloc(size, PAGE_SIZE); + numa_distance = memblock_alloc_no_panic(size, PAGE_SIZE); if
(!numa_distance) { pr_warn("Warning: can't allocate distance table!\n"); /* don't retry until explicitly reset */ diff --git a/mm/percpu.c b/mm/percpu.c index ac61e3fc5f15..a381d626ed32 100644 --- a/mm/percpu.c +++ b/mm/percpu.c @@ -1359,7 +1359,7 @@ static struct pcpu_chunk * __init pcpu_alloc_first_chunk(unsigned long tmp_addr, /* allocate chunk */ alloc_size = struct_size(chunk, populated, BITS_TO_LONGS(region_size >> PAGE_SHIFT)); - chunk = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES); + chunk = memblock_alloc(alloc_size, SMP_CACHE_BYTES); INIT_LIST_HEAD(&chunk->list); @@ -1371,14 +1371,14 @@ static struct pcpu_chunk * __init pcpu_alloc_first_chunk(unsigned long tmp_addr, region_bits = pcpu_chunk_map_bits(chunk); alloc_size = BITS_TO_LONGS(region_bits) * sizeof(chunk->alloc_map[0]); - chunk->alloc_map = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES); + chunk->alloc_map = memblock_alloc(alloc_size, SMP_CACHE_BYTES); alloc_size = BITS_TO_LONGS(region_bits + 1) * sizeof(chunk->bound_map[0]); - chunk->bound_map = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES); + chunk->bound_map = memblock_alloc(alloc_size, SMP_CACHE_BYTES); alloc_size = pcpu_chunk_nr_blocks(chunk) * sizeof(chunk->md_blocks[0]); - chunk->md_blocks = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES); + chunk->md_blocks = memblock_alloc(alloc_size, SMP_CACHE_BYTES); #ifdef NEED_PCPUOBJ_EXT /* first chunk is free to use */ chunk->obj_exts = NULL; @@ -2399,7 +2399,7 @@ struct pcpu_alloc_info * __init pcpu_alloc_alloc_info(int nr_groups, __alignof__(ai->groups[0].cpu_map[0])); ai_size = base_size + nr_units * sizeof(ai->groups[0].cpu_map[0]); - ptr = memblock_alloc(PFN_ALIGN(ai_size), PAGE_SIZE); + ptr = memblock_alloc_no_panic(PFN_ALIGN(ai_size), PAGE_SIZE); if (!ptr) return NULL; ai = ptr; @@ -2582,16 +2582,16 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai, /* process group information and build config tables accordingly */ alloc_size = ai->nr_groups * 
sizeof(group_offsets[0]); - group_offsets = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES); + group_offsets = memblock_alloc(alloc_size, SMP_CACHE_BYTES); alloc_size = ai->nr_groups * sizeof(group_sizes[0]); - group_sizes = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES); + group_sizes = memblock_alloc(alloc_size, SMP_CACHE_BYTES); alloc_size = nr_cpu_ids * sizeof(unit_map[0]); - unit_map = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES); + unit_map = memblock_alloc(alloc_size, SMP_CACHE_BYTES); alloc_size = nr_cpu_ids * sizeof(unit_off[0]); - unit_off = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES); + unit_off = memblock_alloc(alloc_size, SMP_CACHE_BYTES); for (cpu = 0; cpu < nr_cpu_ids; cpu++) unit_map[cpu] = UINT_MAX; @@ -2660,7 +2660,7 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai, pcpu_free_slot = pcpu_sidelined_slot + 1; pcpu_to_depopulate_slot = pcpu_free_slot + 1; pcpu_nr_slots = pcpu_to_depopulate_slot + 1; - pcpu_chunk_lists = memblock_alloc_or_panic(pcpu_nr_slots * + pcpu_chunk_lists = memblock_alloc(pcpu_nr_slots * sizeof(pcpu_chunk_lists[0]), SMP_CACHE_BYTES); @@ -3010,7 +3010,7 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size, size_sum = ai->static_size + ai->reserved_size + ai->dyn_size; areas_size = PFN_ALIGN(ai->nr_groups * sizeof(void *)); - areas = memblock_alloc(areas_size, SMP_CACHE_BYTES); + areas = memblock_alloc_no_panic(areas_size, SMP_CACHE_BYTES); if (!areas) { rc = -ENOMEM; goto out_free; @@ -3127,19 +3127,19 @@ void __init __weak pcpu_populate_pte(unsigned long addr) pmd_t *pmd; if (pgd_none(*pgd)) { - p4d = memblock_alloc_or_panic(P4D_TABLE_SIZE, P4D_TABLE_SIZE); + p4d = memblock_alloc(P4D_TABLE_SIZE, P4D_TABLE_SIZE); pgd_populate(&init_mm, pgd, p4d); } p4d = p4d_offset(pgd, addr); if (p4d_none(*p4d)) { - pud = memblock_alloc_or_panic(PUD_TABLE_SIZE, PUD_TABLE_SIZE); + pud = memblock_alloc(PUD_TABLE_SIZE, PUD_TABLE_SIZE); p4d_populate(&init_mm, p4d, pud); } pud = 
pud_offset(p4d, addr); if (pud_none(*pud)) { - pmd = memblock_alloc_or_panic(PMD_TABLE_SIZE, PMD_TABLE_SIZE); + pmd = memblock_alloc(PMD_TABLE_SIZE, PMD_TABLE_SIZE); pud_populate(&init_mm, pud, pmd); } @@ -3147,7 +3147,7 @@ void __init __weak pcpu_populate_pte(unsigned long addr) if (!pmd_present(*pmd)) { pte_t *new; - new = memblock_alloc_or_panic(PTE_TABLE_SIZE, PTE_TABLE_SIZE); + new = memblock_alloc(PTE_TABLE_SIZE, PTE_TABLE_SIZE); pmd_populate_kernel(&init_mm, pmd, new); } @@ -3198,7 +3198,7 @@ int __init pcpu_page_first_chunk(size_t reserved_size, pcpu_fc_cpu_to_node_fn_t /* unaligned allocations can't be freed, round up to page size */ pages_size = PFN_ALIGN(unit_pages * num_possible_cpus() * sizeof(pages[0])); - pages = memblock_alloc_or_panic(pages_size, SMP_CACHE_BYTES); + pages = memblock_alloc(pages_size, SMP_CACHE_BYTES); /* allocate pages */ j = 0; diff --git a/mm/sparse.c b/mm/sparse.c index 133b033d0cba..56191a32e6c5 100644 --- a/mm/sparse.c +++ b/mm/sparse.c @@ -257,7 +257,7 @@ static void __init memblocks_present(void) size = sizeof(struct mem_section *) * NR_SECTION_ROOTS; align = 1 << (INTERNODE_CACHE_SHIFT); - mem_section = memblock_alloc_or_panic(size, align); + mem_section = memblock_alloc(size, align); } #endif From patchwork Fri Jan 3 10:51:57 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Weikang Guo X-Patchwork-Id: 13925484 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id CBAFDE77198 for ; Fri, 3 Jan 2025 10:52:13 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 541826B0082; Fri, 3 Jan 2025 05:52:13 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 4F23E6B0083; Fri, 3 Jan 2025 05:52:13 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by 
kanga.kvack.org (Postfix, from userid 63042) id 393756B0085; Fri, 3 Jan 2025 05:52:13 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 21A7A6B0082 for ; Fri, 3 Jan 2025 05:52:13 -0500 (EST) Received: from smtpin21.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id C34541404E4 for ; Fri, 3 Jan 2025 10:52:12 +0000 (UTC) X-FDA: 82965824424.21.EFC514C Received: from mail-pl1-f169.google.com (mail-pl1-f169.google.com [209.85.214.169]) by imf02.hostedemail.com (Postfix) with ESMTP id 1990D8000E for ; Fri, 3 Jan 2025 10:50:31 +0000 (UTC) Authentication-Results: imf02.hostedemail.com; dkim=pass header.d=gmail.com header.s=20230601 header.b=fvMqkuOe; spf=pass (imf02.hostedemail.com: domain of guoweikang.kernel@gmail.com designates 209.85.214.169 as permitted sender) smtp.mailfrom=guoweikang.kernel@gmail.com; dmarc=pass (policy=none) header.from=gmail.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1735901496; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=Cy+yX6wEbnIcQp94PEsxSyfI8yNEpKG8JNMSnjXRqAI=; b=xfYh5CLxt8EIUZZfKlNMXecxRUeCyXzxZyR4J3Yq1OntbicJuMQBEy3cm9DmnE9Twn9LpT xqnbPf54970VAfYQDCXqdQUYRY7Tj9rNUniRfxPeBoe22DjFKh0ugZSDdjeyTB6CGFfrd4 d7D3VhQt8ZrjZZBNjYBKQ6RW/iKMdS0= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1735901496; a=rsa-sha256; cv=none; b=JwDNgluX+UDmSBKfLHGlhkKw7fSbf7Qo+TaWo4vxwQuDBaO8P2EXaR+FbO1Y7/t8j50dXQ ipgN/XNOYhZj+KGC7+gzuAE4BMx6S9AicX8XQxEA0m9Kb7glcVdwpIyQydmqT/MgGCO73E WDpgkx1GECVDgXbtyb+BhBpvoo3ycck= ARC-Authentication-Results: i=1; imf02.hostedemail.com; dkim=pass header.d=gmail.com header.s=20230601 
header.b=fvMqkuOe; spf=pass (imf02.hostedemail.com: domain of guoweikang.kernel@gmail.com designates 209.85.214.169 as permitted sender) smtp.mailfrom=guoweikang.kernel@gmail.com; dmarc=pass (policy=none) header.from=gmail.com Received: by mail-pl1-f169.google.com with SMTP id d9443c01a7336-21631789fcdso125084025ad.1 for ; Fri, 03 Jan 2025 02:52:10 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1735901530; x=1736506330; darn=kvack.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Cy+yX6wEbnIcQp94PEsxSyfI8yNEpKG8JNMSnjXRqAI=; b=fvMqkuOeDNdRgFSc33zeMTtDoFkn+sUkrr4kK2S/zr4PTtHzlyUAXFPPNhsw4+jBz6 NhS7WjLrmkQGcygx+I5cSil1yAcvhBR7qXwdzJSqJ/V10O5MR+Dgjb+c+64ylb9dgkuK Kcbe9tXZOF94rM3zSvKm2EW36Sj6O5v5e8nMf7vK8FIMpQrnIwJcSQY3EBZ7K8P4yU5e 02X5VO2stI5yQEhM6455G8pBexdXv7jCCJVGgpFByTj18LTqnT+7r3VLRCsBJj3lPqSW c9WPKiBXKbOUHUiwycrfbjwcDwebAevzPosEFB4TK62TvMBcDJovGoeohVemGuvXGsTV RFrw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1735901530; x=1736506330; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Cy+yX6wEbnIcQp94PEsxSyfI8yNEpKG8JNMSnjXRqAI=; b=aZgsD21zGeu2BrYg8b0Gd+E77PspSSd0ly4U836vlItOUkMJJVeKEI5Sw3OmOgIM3j WySkPuchsBa1igZLOF2/OxFoVKqIcwvY2xA7oTH4dZXERwIsfe+ETfQNrbLPI4OMtbtW lcp+IBlltko2Xqx+dH//44iGceGYJ4Ihhx6mG6Z6Dpp9ZYTKtGTjZUTvwkoF3WUpGQ25 LuZrPsFCSoNyCJfJwiOwvx3WXj5xBP21xoyyqcRIAQnolhlWKpfG5YUqdO7fhMLPQvNB +L0thX+aIOeVFHfmAOFd7HQ+7hEHMorj8HFeztAb9MY0XFgvTlr7SaXKbBtRMni/mkdW wUJw== X-Gm-Message-State: AOJu0YybWGdTiVcgyW+K/D/jjuU78BRzP/wLVFq+/UwbTeB5Rz6koSG2 Jg6Co3Qt6DkkPNq49zkacp8R46qc23ErHa1FIqJQIz2j3okqdq6M X-Gm-Gg: ASbGncuEskDSnXZUxtpPxA5nS0zd8282LfaHeoG8ny8MlARZ/DjCFzlHqfGpN7APtBf 5Hb6vEYhD+Rt/WaI0rsSgOwnIYz4wDJTe5akl1Iudg2OwhvNgzr4TOPTOA5wjGKmMvFSSMIZ1Yz 
3gcR9DJRmnQFlxMi4tc38tLPWe/uBrA+asDt/cwxWMJgpkX7RupHGR7qKq8SyKdvradT1cto9ZK lK1xBtJT68wKAwOAWHOvNxWTZUtA44UwLc0YxoeFmG/wBJmxrxqqJBLP+YP0TMiXWIDRC641rDY 3QNW X-Google-Smtp-Source: AGHT+IEFyETklwSj2j6erSwDIdU9K6u5rhSzXwT6aCJIlf3JqQYNREYAUxQsw/AQtYTFrVfR+9hOOQ== X-Received: by 2002:a05:6a20:43ac:b0:1e1:e2d8:fd4a with SMTP id adf61e73a8af0-1e5e1e04618mr69199866637.5.1735901529701; Fri, 03 Jan 2025 02:52:09 -0800 (PST) Received: from localhost.localdomain ([36.110.106.149]) by smtp.gmail.com with ESMTPSA id d2e1a72fcca58-72aad8157edsm26802019b3a.20.2025.01.03.02.52.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 03 Jan 2025 02:52:09 -0800 (PST) From: Guo Weikang To: Mike Rapoport , Andrew Morton Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Guo Weikang Subject: [PATCH 2/3] mm/memblock: Modify the default failure behavior of memblock_alloc_raw to panic Date: Fri, 3 Jan 2025 18:51:57 +0800 Message-Id: <20250103105158.1350689-2-guoweikang.kernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20250103105158.1350689-1-guoweikang.kernel@gmail.com> References: <20250103105158.1350689-1-guoweikang.kernel@gmail.com> MIME-Version: 1.0 X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: 1990D8000E X-Stat-Signature: 4k8n5jgdrirpbg7qks91n5564hfshiqj X-Rspam-User: X-HE-Tag: 1735901431-343603 X-HE-Meta: 
U2FsdGVkX19HCuBm8uH4PYX+MLD8SOor8WFTXMJEzC3vjWUQoiQ1a4OHnnwkWkoMVW3364YBub51l25uD/+6X/ManpBbb7HHDOTr9KYqY1EtJ3ZcMXz6+HFARrH5726XTX25fzMURabZYpEIOCwzynd2y1MMeqAOa4vauxCIz8dG/9vOUaWgzSINpQzcVvW7yLKYg/NwLNXhirS6mDJ49LoLehHjkAYEB70p7/Z+c9UW42u3+VQZ3JrduTARDeyqbUTckZZ9OkCAZYZpe/0UijrZBtueCSBG3zVnxqBP7YDp2xPVnPWiIOpYSiYQgXYNylGY+AMkZD4keK0ijzjcYx7nMN5bovjCtJCZ9QCbD/BuWfOX53rnZiDiEMsg7VrvZPLRk2yfmHLMPpDILRfqIG9SaOHUwG4Wz+11kknX7zZ6M9RicOphGH/A9NW25dovZvOzA2TOPZrW/MWZuvbYuhvBu9sHzPWOMIao1NSV1syn3pj4UEFCvLcMGbuBADn0BPU6wOZUQu9BJTxxhYIYm7qmTmTl3WTLoMBmoc1DthzTRGIiYtIAreQepYlUz9tjJN3G76xBcNrVGep2hi70Vqfhkwezg74G0C2/VUkJQXS8Kehylp1GJ9FEu/gwKld7sFFfRifwnRtnj+lkiISizRPWgo4OT5/hQ5lD/bZVaVT9cQ5EMonHqR4+YgXy0yVQnE3ac7aeQEMgYc86Y+jqi2ZVh+7mpX41YVmbc3YuRtx7Ycr/+fa2qtq2ns/rtVkhrJTP2+lsrCQ7m1rLP77gFmKiTDYJiAXaI5haJfiHzFHO0Byh17ThRyEajh4Dx749mKYO2NyPaxHKzi0NHbc2qUiQNmfgFqzcL1JJuXly5Rdv3NNDyVuf4S6qzJtzOrjRlm/myQyZoAppvB5K/sOf1Io0faJgCzDafXaRW9BfjeDD5NsVTtmLxrMcXNv1eUaGYO2xTOkqhnpOqR08AJe xvTk0Wig frPbRtLocRGC0t6WVdMvSaKLYy55Sjd+/u71enrdxEDV52mJbUeaW4IyMBC5a26CmxXyqibbye/Io2cB6rbe7m8LvzQYX2xbmEnxmgdn07fUJoqu/jgWFl5iwgFdnGpwEUjlJ6Q2So54r1IvFw8n2AN/VPlbcQ8W7SWaasP/O2aJy/fyRtwUBp2Q2uz4WpOurnoEC1VmNW7E8Qz4a0VNpu5/EYuRzAuACLHHTjRDO6tnEa9QdrrPBCf41B3fK8niQT6pHbBTM94ShhP+p7n7UJXqCzqlBO7+X6FQkP21qeNV+YDka08DHtLQaNHoRiSSB127iOzSY/FKoIDL676QQDpk83UmN1s5UUV4f0v0w3hbpHpXCSAT2+b1y1WT2eidD7c2RYJdu4f9Rm5QxSuUpUDi3NwLvheGfWLTHs8OmHquY+7l9tTCZLkYI4/KH/HWvQ0E+qRbm0n4q80UmB8RqTfBQHk/DllqLuP2aRs82Wl8KKtswW1fjhbI1YvOzmB3lJBQ5HMqhICbiRpk= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Just like memblock_alloc, the default failure behavior of memblock_alloc_raw is now modified to trigger a panic when allocation fails. memblock_alloc_no_panic has been introduced to handle cases where panic behavior is not desired. 
Signed-off-by: Guo Weikang
---
 arch/openrisc/mm/init.c                |  3 ---
 arch/powerpc/kernel/paca.c             |  4 ----
 arch/powerpc/kernel/prom.c             |  3 ---
 arch/powerpc/platforms/pseries/plpks.c |  2 +-
 include/linux/memblock.h               | 17 +++++++----------
 mm/memblock.c                          | 13 +++++++++++--
 6 files changed, 19 insertions(+), 23 deletions(-)

diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c
index d0cb1a0126f9..9e0047764f54 100644
--- a/arch/openrisc/mm/init.c
+++ b/arch/openrisc/mm/init.c
@@ -96,9 +96,6 @@ static void __init map_ram(void)

			/* Alloc one page for holding PTE's... */
			pte = memblock_alloc_raw(PAGE_SIZE, PAGE_SIZE);
-			if (!pte)
-				panic("%s: Failed to allocate page for PTEs\n",
-				      __func__);
			set_pmd(pme, __pmd(_KERNPG_TABLE + __pa(pte)));

			/* Fill the newly allocated page with PTE'S */
diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
index 7502066c3c53..9d15799e97d4 100644
--- a/arch/powerpc/kernel/paca.c
+++ b/arch/powerpc/kernel/paca.c
@@ -246,10 +246,6 @@ void __init allocate_paca_ptrs(void)
	paca_ptrs_size = sizeof(struct paca_struct *) * nr_cpu_ids;

	paca_ptrs = memblock_alloc_raw(paca_ptrs_size, SMP_CACHE_BYTES);
-	if (!paca_ptrs)
-		panic("Failed to allocate %d bytes for paca pointers\n",
-		      paca_ptrs_size);
-
	memset(paca_ptrs, 0x88, paca_ptrs_size);
 }

diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index e0059842a1c6..3aba66ddd2c8 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -128,9 +128,6 @@ static void __init move_device_tree(void)
	    !memblock_is_memory(start + size - 1) ||
	    overlaps_crashkernel(start, size) || overlaps_initrd(start, size)) {
		p = memblock_alloc_raw(size, PAGE_SIZE);
-		if (!p)
-			panic("Failed to allocate %lu bytes to move device tree\n",
-			      size);
		memcpy(p, initial_boot_params, size);
		initial_boot_params = p;
		DBG("Moved device tree to 0x%px\n", p);
diff --git a/arch/powerpc/platforms/pseries/plpks.c b/arch/powerpc/platforms/pseries/plpks.c
index b1667ed05f98..1bcbed41ce44 100644
--- a/arch/powerpc/platforms/pseries/plpks.c
+++ b/arch/powerpc/platforms/pseries/plpks.c
@@ -671,7 +671,7 @@ void __init plpks_early_init_devtree(void)
		return;
	}

-	ospassword = memblock_alloc_raw(len, SMP_CACHE_BYTES);
+	ospassword = memblock_alloc_raw_no_panic(len, SMP_CACHE_BYTES);
	if (!ospassword) {
		pr_err("Error allocating memory for password.\n");
		goto out;
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 6b21a3834225..b68c141ebc44 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -418,20 +418,17 @@ static __always_inline void *memblock_alloc(phys_addr_t size, phys_addr_t align)
 }

 void *__memblock_alloc_panic(phys_addr_t size, phys_addr_t align,
-			     const char *func, bool should_panic);
+			     const char *func, bool should_panic, bool raw);

 #define memblock_alloc(size, align) \
-	__memblock_alloc_panic(size, align, __func__, true)
+	__memblock_alloc_panic(size, align, __func__, true, false)
 #define memblock_alloc_no_panic(size, align) \
-	__memblock_alloc_panic(size, align, __func__, false)
+	__memblock_alloc_panic(size, align, __func__, false, false)

-static inline void *memblock_alloc_raw(phys_addr_t size,
-				       phys_addr_t align)
-{
-	return memblock_alloc_try_nid_raw(size, align, MEMBLOCK_LOW_LIMIT,
-					  MEMBLOCK_ALLOC_ACCESSIBLE,
-					  NUMA_NO_NODE);
-}
+#define memblock_alloc_raw(size, align) \
+	__memblock_alloc_panic(size, align, __func__, true, true)
+#define memblock_alloc_raw_no_panic(size, align) \
+	__memblock_alloc_panic(size, align, __func__, false, true)

 static inline void *memblock_alloc_from(phys_addr_t size,
					phys_addr_t align,
diff --git a/mm/memblock.c b/mm/memblock.c
index 901da45ecf8b..4974ae2ee5ec 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1697,15 +1697,24 @@ void * __init memblock_alloc_try_nid(
  * @align: alignment of the region and block's size
  * @func: caller func name
  * @should_panic: whether to panic on allocation failure
+ * @raw: whether to skip zeroing of the allocated memory
  *
  * In case of failure, it calls panic with the formatted message.
  * This function should not be used directly; please use the macros
  * memblock_alloc and memblock_alloc_no_panic,
+ * memblock_alloc_raw and memblock_alloc_raw_no_panic.
  */
 void *__init __memblock_alloc_panic(phys_addr_t size, phys_addr_t align,
-				    const char *func, bool should_panic)
+				    const char *func, bool should_panic,
+				    bool raw)
 {
-	void *addr = memblock_alloc_try_nid(size, align, MEMBLOCK_LOW_LIMIT,
+	void *addr;
+
+	if (unlikely(raw))
+		addr = memblock_alloc_try_nid_raw(size, align, MEMBLOCK_LOW_LIMIT,
+						  MEMBLOCK_ALLOC_ACCESSIBLE, NUMA_NO_NODE);
+	else
+		addr = memblock_alloc_try_nid(size, align, MEMBLOCK_LOW_LIMIT,
					MEMBLOCK_ALLOC_ACCESSIBLE, NUMA_NO_NODE);

	if (unlikely(!addr && should_panic))

From patchwork Fri Jan 3 10:51:58 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Weikang Guo
X-Patchwork-Id: 13925485
From: Guo Weikang
To: Mike Rapoport , Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Guo Weikang
Subject: [PATCH 3/3] mm/memblock: Modify the default failure behavior of memblock_alloc_low(from)
Date: Fri, 3 Jan 2025 18:51:58 +0800
Message-Id: <20250103105158.1350689-3-guoweikang.kernel@gmail.com>
In-Reply-To: <20250103105158.1350689-1-guoweikang.kernel@gmail.com>
References: <20250103105158.1350689-1-guoweikang.kernel@gmail.com>
MIME-Version: 1.0

Just like memblock_alloc, the default failure behavior of
memblock_alloc_low and memblock_alloc_from is now modified to trigger a
panic when allocation fails.

Signed-off-by: Guo Weikang
---
 arch/arc/mm/highmem.c       |  4 ----
 arch/csky/mm/init.c         |  5 ----
 arch/m68k/atari/stram.c     |  4 ----
 arch/m68k/mm/motorola.c     |  9 -------
 arch/mips/include/asm/dmi.h |  2 +-
 arch/mips/mm/init.c         |  5 ----
 arch/s390/kernel/setup.c    |  4 ----
 arch/s390/kernel/smp.c      |  3 ---
 arch/sparc/mm/init_64.c     | 13 ----------
 arch/um/kernel/mem.c        | 20 ----------------
 arch/xtensa/mm/mmu.c        |  4 ----
 include/linux/memblock.h    | 30 ++++++++++++-----------
 mm/memblock.c               | 47 +++++++++++++++++++++++++++++++++++++
 mm/percpu.c                 |  6 ++---
 14 files changed, 67 insertions(+), 89 deletions(-)

diff --git a/arch/arc/mm/highmem.c b/arch/arc/mm/highmem.c
index c79912a6b196..4ed597b19388 100644
--- a/arch/arc/mm/highmem.c
+++ b/arch/arc/mm/highmem.c
@@ -53,10 +53,6 @@ static noinline pte_t * __init alloc_kmap_pgtable(unsigned long kvaddr)
	pte_t *pte_k;

	pte_k = (pte_t *)memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
-	if (!pte_k)
-		panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
-		      __func__, PAGE_SIZE, PAGE_SIZE);
-
	pmd_populate_kernel(&init_mm, pmd_k, pte_k);
	return pte_k;
 }
diff --git a/arch/csky/mm/init.c b/arch/csky/mm/init.c
index bde7cabd23df..04de02a83564 100644
--- a/arch/csky/mm/init.c
+++ b/arch/csky/mm/init.c
@@ -174,11 +174,6 @@ void __init fixrange_init(unsigned long start, unsigned long end,
		for (; (k < PTRS_PER_PMD) && (vaddr != end); pmd++, k++) {
			if (pmd_none(*pmd)) {
				pte = (pte_t *) memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
-				if (!pte)
-					panic("%s: Failed to allocate %lu bytes align=%lx\n",
-					      __func__, PAGE_SIZE,
-					      PAGE_SIZE);
-
				set_pmd(pmd, __pmd(__pa(pte)));
				BUG_ON(pte != pte_offset_kernel(pmd, 0));
			}
diff --git a/arch/m68k/atari/stram.c b/arch/m68k/atari/stram.c
index 922e53bcb853..14f761330b29 100644
--- a/arch/m68k/atari/stram.c
+++ b/arch/m68k/atari/stram.c
@@ -96,10 +96,6 @@ void __init atari_stram_reserve_pages(void *start_mem)
		pr_debug("atari_stram pool: kernel in ST-RAM, using alloc_bootmem!\n");
		stram_pool.start = (resource_size_t)memblock_alloc_low(pool_size, PAGE_SIZE);
-		if (!stram_pool.start)
-			panic("%s: Failed to allocate %lu bytes align=%lx\n",
-			      __func__, pool_size, PAGE_SIZE);
-
		stram_pool.end = stram_pool.start + pool_size - 1;
		request_resource(&iomem_resource, &stram_pool);
		stram_virt_offset = 0;
diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index ce016ae8c972..83bbada15be2 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -227,11 +227,6 @@ static pte_t * __init kernel_page_table(void)
	if (PAGE_ALIGNED(last_pte_table)) {
		pte_table = memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
-		if (!pte_table) {
-			panic("%s: Failed to allocate %lu bytes align=%lx\n",
-			      __func__, PAGE_SIZE, PAGE_SIZE);
-		}
-
		clear_page(pte_table);
		mmu_page_ctor(pte_table);

@@ -275,10 +270,6 @@ static pmd_t * __init kernel_ptr_table(void)
	last_pmd_table += PTRS_PER_PMD;
	if (PAGE_ALIGNED(last_pmd_table)) {
		last_pmd_table = memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
-		if (!last_pmd_table)
-			panic("%s: Failed to allocate %lu bytes align=%lx\n",
-			      __func__, PAGE_SIZE, PAGE_SIZE);
-
		clear_page(last_pmd_table);
		mmu_page_ctor(last_pmd_table);
	}
diff --git a/arch/mips/include/asm/dmi.h b/arch/mips/include/asm/dmi.h
index dc397f630c66..9698d072cc4d 100644
--- a/arch/mips/include/asm/dmi.h
+++ b/arch/mips/include/asm/dmi.h
@@ -11,7 +11,7 @@
 #define dmi_unmap(x)			iounmap(x)

 /* MIPS initialize DMI scan before SLAB is ready, so we use memblock here */
-#define dmi_alloc(l)			memblock_alloc_low(l, PAGE_SIZE)
+#define dmi_alloc(l)			memblock_alloc_low_no_panic(l, PAGE_SIZE)

 #if defined(CONFIG_MACH_LOONGSON64)
 #define SMBIOS_ENTRY_POINT_SCAN_START	0xFFFE000
diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index 4583d1a2a73e..cca62f23769f 100644
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -257,11 +257,6 @@ void __init fixrange_init(unsigned long start, unsigned long end,
			if (pmd_none(*pmd)) {
				pte = (pte_t *) memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
-				if (!pte)
-					panic("%s: Failed to allocate %lu bytes align=%lx\n",
-					      __func__, PAGE_SIZE,
-					      PAGE_SIZE);
-
				set_pmd(pmd, __pmd((unsigned long)pte));
				BUG_ON(pte != pte_offset_kernel(pmd, 0));
			}
diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
index e51426113f26..854d3744dacf 100644
--- a/arch/s390/kernel/setup.c
+++ b/arch/s390/kernel/setup.c
@@ -397,10 +397,6 @@ static void __init setup_lowcore(void)
	 */
	BUILD_BUG_ON(sizeof(struct lowcore) != LC_PAGES * PAGE_SIZE);
	lc = memblock_alloc_low(sizeof(*lc), sizeof(*lc));
-	if (!lc)
-		panic("%s: Failed to allocate %zu bytes align=%zx\n",
-		      __func__, sizeof(*lc), sizeof(*lc));
-
	lc->pcpu = (unsigned long)per_cpu_ptr(&pcpu_devices, 0);
	lc->restart_psw.mask = PSW_KERNEL_BITS & ~PSW_MASK_DAT;
	lc->restart_psw.addr = __pa(restart_int_handler);
diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
index 9eb4508b4ca4..467d4f390837 100644
--- a/arch/s390/kernel/smp.c
+++ b/arch/s390/kernel/smp.c
@@ -631,9 +631,6 @@ void __init smp_save_dump_secondary_cpus(void)
		return;
	/* Allocate a page as dumping area for the store status sigps */
	page = memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
-	if (!page)
-		panic("ERROR: Failed to allocate %lx bytes below %lx\n",
-		      PAGE_SIZE, 1UL << 31);
	/* Set multi-threading state to the previous system. */
	pcpu_set_smt(sclp.mtid_prev);
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 05882bca5b73..8c813c755eb8 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -1789,8 +1789,6 @@ static unsigned long __ref kernel_map_range(unsigned long pstart,

			new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE,
						  PAGE_SIZE);
-			if (!new)
-				goto err_alloc;
			alloc_bytes += PAGE_SIZE;
			pgd_populate(&init_mm, pgd, new);
		}
@@ -1801,8 +1799,6 @@ static unsigned long __ref kernel_map_range(unsigned long pstart,

			new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE,
						  PAGE_SIZE);
-			if (!new)
-				goto err_alloc;
			alloc_bytes += PAGE_SIZE;
			p4d_populate(&init_mm, p4d, new);
		}
@@ -1817,8 +1813,6 @@ static unsigned long __ref kernel_map_range(unsigned long pstart,
			}
			new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE,
						  PAGE_SIZE);
-			if (!new)
-				goto err_alloc;
			alloc_bytes += PAGE_SIZE;
			pud_populate(&init_mm, pud, new);
		}
@@ -1833,8 +1827,6 @@ static unsigned long __ref kernel_map_range(unsigned long pstart,
			}
			new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE,
						  PAGE_SIZE);
-			if (!new)
-				goto err_alloc;
			alloc_bytes += PAGE_SIZE;
			pmd_populate_kernel(&init_mm, pmd, new);
		}
@@ -1854,11 +1846,6 @@ static unsigned long __ref kernel_map_range(unsigned long pstart,
	}

	return alloc_bytes;
-
-err_alloc:
-	panic("%s: Failed to allocate %lu bytes align=%lx from=%lx\n",
-	      __func__, PAGE_SIZE, PAGE_SIZE, PAGE_SIZE);
-	return -ENOMEM;
 }

 static void __init flush_all_kernel_tsbs(void)
diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c
index 53248ed04771..9c161fb4ed3a 100644
--- a/arch/um/kernel/mem.c
+++ b/arch/um/kernel/mem.c
@@ -83,10 +83,6 @@ static void __init one_page_table_init(pmd_t *pmd)
	if (pmd_none(*pmd)) {
		pte_t *pte = (pte_t *) memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
-		if (!pte)
-			panic("%s: Failed to allocate %lu bytes align=%lx\n",
-			      __func__, PAGE_SIZE, PAGE_SIZE);
-
		set_pmd(pmd, __pmd(_KERNPG_TABLE +
					   (unsigned long) __pa(pte)));
		BUG_ON(pte != pte_offset_kernel(pmd, 0));
@@ -97,10 +93,6 @@ static void __init one_md_table_init(pud_t *pud)
 {
 #if CONFIG_PGTABLE_LEVELS > 2
	pmd_t *pmd_table = (pmd_t *) memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
-	if (!pmd_table)
-		panic("%s: Failed to allocate %lu bytes align=%lx\n",
-		      __func__, PAGE_SIZE, PAGE_SIZE);
-
	set_pud(pud, __pud(_KERNPG_TABLE + (unsigned long) __pa(pmd_table)));
	BUG_ON(pmd_table != pmd_offset(pud, 0));
 #endif
@@ -110,10 +102,6 @@ static void __init one_ud_table_init(p4d_t *p4d)
 {
 #if CONFIG_PGTABLE_LEVELS > 3
	pud_t *pud_table = (pud_t *) memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
-	if (!pud_table)
-		panic("%s: Failed to allocate %lu bytes align=%lx\n",
-		      __func__, PAGE_SIZE, PAGE_SIZE);
-
	set_p4d(p4d, __p4d(_KERNPG_TABLE + (unsigned long) __pa(pud_table)));
	BUG_ON(pud_table != pud_offset(p4d, 0));
 #endif
@@ -163,10 +151,6 @@ static void __init fixaddr_user_init( void)
	fixrange_init( FIXADDR_USER_START, FIXADDR_USER_END, swapper_pg_dir);
	v = (unsigned long) memblock_alloc_low(size, PAGE_SIZE);
-	if (!v)
-		panic("%s: Failed to allocate %lu bytes align=%lx\n",
-		      __func__, size, PAGE_SIZE);
-
	memcpy((void *) v , (void *) FIXADDR_USER_START, size);
	p = __pa(v);
	for ( ; size > 0; size -= PAGE_SIZE, vaddr += PAGE_SIZE,
@@ -184,10 +168,6 @@ void __init paging_init(void)
	empty_zero_page = (unsigned long *) memblock_alloc_low(PAGE_SIZE,
							       PAGE_SIZE);
-	if (!empty_zero_page)
-		panic("%s: Failed to allocate %lu bytes align=%lx\n",
-		      __func__, PAGE_SIZE, PAGE_SIZE);
-
	max_zone_pfn[ZONE_NORMAL] = end_iomem >> PAGE_SHIFT;
	free_area_init(max_zone_pfn);
diff --git a/arch/xtensa/mm/mmu.c b/arch/xtensa/mm/mmu.c
index 92e158c69c10..aee020c986a3 100644
--- a/arch/xtensa/mm/mmu.c
+++ b/arch/xtensa/mm/mmu.c
@@ -33,10 +33,6 @@ static void * __init init_pmd(unsigned long vaddr, unsigned long n_pages)
		 __func__, vaddr, n_pages);

	pte = memblock_alloc_low(n_pages * sizeof(pte_t), PAGE_SIZE);
-	if (!pte)
-		panic("%s: Failed to allocate %lu bytes align=%lx\n",
-		      __func__, n_pages * sizeof(pte_t), PAGE_SIZE);
-
	for (i = 0; i < n_pages; ++i)
		pte_clear(NULL, 0, pte + i);
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index b68c141ebc44..3f940bf628a9 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -430,20 +430,22 @@ void *__memblock_alloc_panic(phys_addr_t size, phys_addr_t align,
 #define memblock_alloc_raw_no_panic(size, align) \
	__memblock_alloc_panic(size, align, __func__, false, true)

-static inline void *memblock_alloc_from(phys_addr_t size,
-					phys_addr_t align,
-					phys_addr_t min_addr)
-{
-	return memblock_alloc_try_nid(size, align, min_addr,
-				      MEMBLOCK_ALLOC_ACCESSIBLE, NUMA_NO_NODE);
-}
-
-static inline void *memblock_alloc_low(phys_addr_t size,
-				       phys_addr_t align)
-{
-	return memblock_alloc_try_nid(size, align, MEMBLOCK_LOW_LIMIT,
-				      ARCH_LOW_ADDRESS_LIMIT, NUMA_NO_NODE);
-}
+void *__memblock_alloc_from_panic(phys_addr_t size, phys_addr_t align,
+				  phys_addr_t min_addr, const char *func,
+				  bool should_panic);
+
+#define memblock_alloc_from(size, align, min_addr) \
+	__memblock_alloc_from_panic(size, align, min_addr, __func__, true)
+#define memblock_alloc_from_no_panic(size, align, min_addr) \
+	__memblock_alloc_from_panic(size, align, min_addr, __func__, false)
+
+void *__memblock_alloc_low_panic(phys_addr_t size, phys_addr_t align,
+				 const char *func, bool should_panic);
+
+#define memblock_alloc_low(size, align) \
+	__memblock_alloc_low_panic(size, align, __func__, true)
+#define memblock_alloc_low_no_panic(size, align) \
+	__memblock_alloc_low_panic(size, align, __func__, false)

 static inline void *memblock_alloc_node(phys_addr_t size,
					phys_addr_t align, int nid)
diff --git a/mm/memblock.c b/mm/memblock.c
index 4974ae2ee5ec..22922c81ff77 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1722,6 +1722,53 @@ void *__init
__memblock_alloc_panic(phys_addr_t size, phys_addr_t align, return addr; } +/** + * __memblock_alloc_from_panic - Try to allocate memory and panic on failure + * @size: size of memory block to be allocated in bytes + * @align: alignment of the region and block's size + * @min_addr: the lower bound of the memory region from where the allocation + * is preferred (phys address) + * @func: caller func name + * @should_panic: whether failed panic + * + * In case of failure, it calls panic with the formatted message. + * This function should not be used directly, please use the macro + * memblock_alloc_from and memblock_alloc_from_no_panic. + */ +void *__init __memblock_alloc_from_panic(phys_addr_t size, phys_addr_t align, + phys_addr_t min_addr, const char *func, + bool should_panic) +{ + void *addr = memblock_alloc_try_nid(size, align, min_addr, + MEMBLOCK_ALLOC_ACCESSIBLE, NUMA_NO_NODE); + + if (unlikely(!addr && should_panic)) + panic("%s: Failed to allocate %pap bytes\n", func, &size); + return addr; +} + +/** + * __memblock_alloc_low_panic - Try to allocate memory and panic on failure + * @size: size of memory block to be allocated in bytes + * @align: alignment of the region and block's size + * @func: caller func name + * @should_panic: whether failed panic + * + * In case of failure, it calls panic with the formatted message. + * This function should not be used directly, please use the macro + * memblock_alloc_low and memblock_alloc_low_no_panic. 
+ */ +void *__init __memblock_alloc_low_panic(phys_addr_t size, phys_addr_t align, + const char *func,bool should_panic) +{ + void *addr = memblock_alloc_try_nid(size, align, MEMBLOCK_LOW_LIMIT, + ARCH_LOW_ADDRESS_LIMIT, NUMA_NO_NODE); + + if (unlikely(!addr && should_panic)) + panic("%s: Failed to allocate %pap bytes\n", func, &size); + return addr; +} + /** * memblock_free_late - free pages directly to buddy allocator * @base: phys starting address of the boot memory block diff --git a/mm/percpu.c b/mm/percpu.c index a381d626ed32..980fba4292be 100644 --- a/mm/percpu.c +++ b/mm/percpu.c @@ -2933,7 +2933,7 @@ static void * __init pcpu_fc_alloc(unsigned int cpu, size_t size, size_t align, node = cpu_to_nd_fn(cpu); if (node == NUMA_NO_NODE || !node_online(node) || !NODE_DATA(node)) { - ptr = memblock_alloc_from(size, align, goal); + ptr = memblock_alloc_from_no_panic(size, align, goal); pr_info("cpu %d has no node %d or node-local memory\n", cpu, node); pr_debug("per cpu data for cpu%d %zu bytes at 0x%llx\n", @@ -2948,7 +2948,7 @@ static void * __init pcpu_fc_alloc(unsigned int cpu, size_t size, size_t align, } return ptr; #else - return memblock_alloc_from(size, align, goal); + return memblock_alloc_from_no_panic(size, align, goal); #endif } @@ -3318,7 +3318,7 @@ void __init setup_per_cpu_areas(void) ai = pcpu_alloc_alloc_info(1, 1); fc = memblock_alloc_from(unit_size, PAGE_SIZE, __pa(MAX_DMA_ADDRESS)); - if (!ai || !fc) + if (!ai) panic("Failed to allocate memory for percpu areas."); /* kmemleak tracks the percpu allocations separately */ kmemleak_ignore_phys(__pa(fc));
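
Taken together, the patch replaces open-coded `if (!ptr) panic(...)` checks with one out-of-line helper that receives the caller's `__func__` and a `should_panic` flag, selected per call site by thin macros. The pattern can be sketched in plain userspace C (hypothetical `demo_*` names; `aligned_alloc()` and `abort()` stand in for the memblock allocator and `panic()`):

```c
#include <stdio.h>
#include <stdlib.h>

/*
 * One shared helper carries the failure handling; the should_panic
 * flag decides whether allocation failure is fatal or returned as
 * NULL for the caller to handle.
 */
static void *demo_alloc_panic(size_t size, size_t align,
			      const char *func, int should_panic)
{
	/* aligned_alloc(alignment, size): size must be a multiple of align */
	void *addr = aligned_alloc(align, size);

	if (!addr && should_panic) {
		fprintf(stderr, "%s: Failed to allocate %zu bytes\n",
			func, size);
		abort();	/* stands in for the kernel's panic() */
	}
	return addr;
}

/* Thin macros capture __func__ at each call site, as the patch does */
#define demo_alloc(size, align) \
	demo_alloc_panic(size, align, __func__, 1)
#define demo_alloc_no_panic(size, align) \
	demo_alloc_panic(size, align, __func__, 0)
```

Call sites that already have a fallback path, such as `pcpu_fc_alloc()` above, switch to the `_no_panic` variant and handle NULL themselves; all other call sites keep the old panic-on-failure semantics without repeating the format string at each site.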