From patchwork Thu Feb  6 13:27:53 2025
From: Mike Rapoport <rppt@kernel.org>
X-Patchwork-Id: 13963092
To: linux-kernel@vger.kernel.org
Cc: Alexander Graf, Andrew Morton, Andy Lutomirski, Anthony Yznaga,
    Arnd Bergmann, Ashish Kalra, Benjamin Herrenschmidt, Borislav Petkov,
    Catalin Marinas, Dave Hansen, David Woodhouse, Eric Biederman,
    Ingo Molnar, James Gowans, Jonathan Corbet, Krzysztof Kozlowski,
    Mark Rutland, Mike Rapoport, Paolo Bonzini, Pasha Tatashin,
    "H. Peter Anvin", Peter Zijlstra, Pratyush Yadav, Rob Herring,
    Saravana Kannan, Stanislav Kinsburskii, Steven Rostedt,
    Thomas Gleixner, Tom Lendacky, Usama Arif, Will Deacon,
    devicetree@vger.kernel.org, kexec@lists.infradead.org,
    linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, x86@kernel.org
Subject: [PATCH v4 13/14] memblock: Add KHO support for reserve_mem
Date: Thu, 6 Feb 2025 15:27:53 +0200
Message-ID: <20250206132754.2596694-14-rppt@kernel.org>
X-Mailer: git-send-email 2.47.2
In-Reply-To: <20250206132754.2596694-1-rppt@kernel.org>
References: <20250206132754.2596694-1-rppt@kernel.org>
MIME-Version: 1.0
From: Alexander Graf

Linux has recently gained support for "reserve_mem": a mechanism to
allocate a region of memory early enough in boot that we can cross our
fingers and hope it stays at the same location during most boots, so we
can store for example ftrace buffers into it.

Thanks to KASLR, we can never be really sure that "reserve_mem"
allocations are static across kexec. Let's teach it KHO awareness so
that it serializes its reservations on kexec exit and deserializes them
again on boot, preserving the exact same mapping across kexec.

This is an example user for KHO in the KHO patch set to ensure we have
at least one (not very controversial) user in the tree before extending
KHO's use to more subsystems.
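[Editor's sketch, not part of the patch: assuming a single reservation,
the KHO FDT subtree emitted by the dump notifier in the patch below
would look roughly like the following devicetree fragment. The node
name "trace" and the address/size values are made up for illustration;
note that the "mem" property is really a raw struct kho_mem blob, shown
here as cells only for readability.]

```dts
/* Hypothetical serialized layout: a "reserve_mem" node with compatible
 * "reserve_mem-v1", one child per reservation, each carrying compatible
 * "reserve_mem_map-v1" and a "mem" property holding { addr, size }. */
reserve_mem {
	compatible = "reserve_mem-v1";

	trace {
		compatible = "reserve_mem_map-v1";
		/* illustrative addr = 0x40000000, size = 2 MiB */
		mem = <0x0 0x40000000 0x0 0x200000>;
	};
};
```

[With e.g. reserve_mem=2M:4096:trace on the next kernel's command line,
the revive path below would match the "trace" child node by name and
reuse the recorded address instead of allocating a fresh one.]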
Signed-off-by: Alexander Graf
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport (Microsoft)
---
 mm/memblock.c | 131 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 131 insertions(+)

diff --git a/mm/memblock.c b/mm/memblock.c
index 84df96efca62..fdb08b60efc1 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -16,6 +16,9 @@
 #include <...>
 #include <...>
 #include <...>
+#include <...>
+#include <...>
+#include <...>
 #include <...>
 #include <...>
@@ -2423,6 +2426,70 @@ int reserve_mem_find_by_name(const char *name, phys_addr_t *start, phys_addr_t *
 }
 EXPORT_SYMBOL_GPL(reserve_mem_find_by_name);
 
+static bool __init reserve_mem_kho_revive(const char *name, phys_addr_t size,
+					  phys_addr_t align)
+{
+	const void *fdt = kho_get_fdt();
+	const char *path = "/reserve_mem";
+	int node, child, err;
+
+	if (!IS_ENABLED(CONFIG_KEXEC_HANDOVER))
+		return false;
+
+	if (!fdt)
+		return false;
+
+	node = fdt_path_offset(fdt, "/reserve_mem");
+	if (node < 0)
+		return false;
+
+	err = fdt_node_check_compatible(fdt, node, "reserve_mem-v1");
+	if (err) {
+		pr_warn("Node '%s' has unknown compatible", path);
+		return false;
+	}
+
+	fdt_for_each_subnode(child, fdt, node) {
+		const struct kho_mem *mem;
+		const char *child_name;
+		int len;
+
+		/* Search for old kernel's reserved_mem with the same name */
+		child_name = fdt_get_name(fdt, child, NULL);
+		if (strcmp(name, child_name))
+			continue;
+
+		err = fdt_node_check_compatible(fdt, child, "reserve_mem_map-v1");
+		if (err) {
+			pr_warn("Node '%s/%s' has unknown compatible", path, name);
+			continue;
+		}
+
+		mem = fdt_getprop(fdt, child, "mem", &len);
+		if (!mem || len != sizeof(*mem))
+			continue;
+
+		if (mem->addr & (align - 1)) {
+			pr_warn("KHO reserved_mem '%s' has wrong alignment (0x%lx, 0x%lx)",
+				name, (long)align, (long)mem->addr);
+			continue;
+		}
+
+		if (mem->size != size) {
+			pr_warn("KHO reserved_mem '%s' has wrong size (0x%lx != 0x%lx)",
+				name, (long)mem->size, (long)size);
+			continue;
+		}
+
+		reserved_mem_add(mem->addr, mem->size, name);
+		pr_info("Revived memory reservation '%s' from KHO", name);
+
+		return true;
+	}
+
+	return false;
+}
+
 /*
  * Parse reserve_mem=nn:align:name
  */
@@ -2478,6 +2545,11 @@ static int __init reserve_mem(char *p)
 	if (reserve_mem_find_by_name(name, &start, &tmp))
 		return -EBUSY;
 
+	/* Pick previous allocations up from KHO if available */
+	if (reserve_mem_kho_revive(name, size, align))
+		return 1;
+
+	/* TODO: Allocation must be outside of scratch region */
 	start = memblock_phys_alloc(size, align);
 	if (!start)
 		return -ENOMEM;
@@ -2488,6 +2560,65 @@
 }
 __setup("reserve_mem=", reserve_mem);
 
+static int reserve_mem_kho_write_map(void *fdt, struct reserve_mem_table *map)
+{
+	int err = 0;
+	const char compatible[] = "reserve_mem_map-v1";
+	struct kho_mem mem = {
+		.addr = map->start,
+		.size = map->size,
+	};
+
+	err |= fdt_begin_node(fdt, map->name);
+	err |= fdt_property(fdt, "compatible", compatible, sizeof(compatible));
+	err |= fdt_property(fdt, "mem", &mem, sizeof(mem));
+	err |= fdt_end_node(fdt);
+
+	return err;
+}
+
+static int reserve_mem_kho_notifier(struct notifier_block *self,
+				    unsigned long cmd, void *v)
+{
+	const char compatible[] = "reserve_mem-v1";
+	void *fdt = v;
+	int err = 0;
+	int i;
+
+	switch (cmd) {
+	case KEXEC_KHO_ABORT:
+		return NOTIFY_DONE;
+	case KEXEC_KHO_DUMP:
+		/* Handled below */
+		break;
+	default:
+		return NOTIFY_BAD;
+	}
+
+	if (!reserved_mem_count)
+		return NOTIFY_DONE;
+
+	err |= fdt_begin_node(fdt, "reserve_mem");
+	err |= fdt_property(fdt, "compatible", compatible, sizeof(compatible));
+	for (i = 0; i < reserved_mem_count; i++)
+		err |= reserve_mem_kho_write_map(fdt, &reserved_mem_table[i]);
+	err |= fdt_end_node(fdt);
+
+	return err ? NOTIFY_BAD : NOTIFY_DONE;
+}
+
+static struct notifier_block reserve_mem_kho_nb = {
+	.notifier_call = reserve_mem_kho_notifier,
+};
+
+static int __init reserve_mem_init(void)
+{
+	register_kho_notifier(&reserve_mem_kho_nb);
+
+	return 0;
+}
+core_initcall(reserve_mem_init);
+
 #if defined(CONFIG_DEBUG_FS) && defined(CONFIG_ARCH_KEEP_MEMBLOCK)
 static const char * const flagname[] = {
 	[ilog2(MEMBLOCK_HOTPLUG)] = "HOTPLUG",