From patchwork Thu Mar 20 01:55:36 2025
X-Patchwork-Submitter: Changyuan Lyu
X-Patchwork-Id: 14023321
Date: Wed, 19 Mar 2025 18:55:36 -0700
In-Reply-To: <20250320015551.2157511-1-changyuanl@google.com>
References: <20250320015551.2157511-1-changyuanl@google.com>
Message-ID: <20250320015551.2157511-2-changyuanl@google.com>
Subject: [PATCH v5 01/16] kexec: define functions to map and unmap segments
From: Changyuan Lyu
To: linux-kernel@vger.kernel.org
Cc: graf@amazon.com, akpm@linux-foundation.org, luto@kernel.org, anthony.yznaga@oracle.com, arnd@arndb.de, ashish.kalra@amd.com, benh@kernel.crashing.org, bp@alien8.de, catalin.marinas@arm.com, dave.hansen@linux.intel.com, dwmw2@infradead.org, ebiederm@xmission.com, mingo@redhat.com, jgowans@amazon.com, corbet@lwn.net, krzk@kernel.org, rppt@kernel.org, mark.rutland@arm.com, pbonzini@redhat.com, pasha.tatashin@soleen.com, hpa@zytor.com, peterz@infradead.org, ptyadav@amazon.de, robh+dt@kernel.org, robh@kernel.org, saravanak@google.com, skinsburskii@linux.microsoft.com, rostedt@goodmis.org, tglx@linutronix.de, thomas.lendacky@amd.com, usama.arif@bytedance.com, will@kernel.org, devicetree@vger.kernel.org, kexec@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org, steven chen, Tushar Sugandhi, Changyuan Lyu

From: steven chen

Currently, the mechanism to map and unmap segments to the kimage structure is not available to subsystems outside of kexec.
This functionality is needed when IMA is allocating the memory segments during the kexec 'load' operation.

Implement functions to map and unmap segments to kimage.

Implement kimage_map_segment() to enable mapping of IMA buffer source pages to the kimage structure post kexec 'load'. This function, accepting a kimage pointer, an address, and a size, will gather the source pages within the specified address range, create an array of page pointers, and map these to a contiguous virtual address range. The function returns the start of this range if successful, or NULL if unsuccessful.

Implement kimage_unmap_segment() for unmapping segments using vunmap().

Signed-off-by: Tushar Sugandhi
Signed-off-by: steven chen
Co-developed-by: Changyuan Lyu
Signed-off-by: Changyuan Lyu
---
 include/linux/kexec.h |  5 ++++
 kernel/kexec_core.c   | 54 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 59 insertions(+)

diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index f0e9f8eda7a3..fad04f3bcf1d 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -467,6 +467,8 @@ extern bool kexec_file_dbg_print;
 #define kexec_dprintk(fmt, arg...) \
 	do { if (kexec_file_dbg_print) pr_info(fmt, ##arg); } while (0)
 
+void *kimage_map_segment(struct kimage *image, unsigned long addr, unsigned long size);
+void kimage_unmap_segment(void *buffer);
 #else /* !CONFIG_KEXEC_CORE */
 struct pt_regs;
 struct task_struct;
@@ -474,6 +476,9 @@ static inline void __crash_kexec(struct pt_regs *regs) { }
 static inline void crash_kexec(struct pt_regs *regs) { }
 static inline int kexec_should_crash(struct task_struct *p) { return 0; }
 static inline int kexec_crash_loaded(void) { return 0; }
+static inline void *kimage_map_segment(struct kimage *image, unsigned long addr, unsigned long size)
+{ return NULL; }
+static inline void kimage_unmap_segment(void *buffer) { }
 #define kexec_in_progress false
 #endif /* CONFIG_KEXEC_CORE */
 
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index c0bdc1686154..640d252306ea 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -867,6 +867,60 @@ int kimage_load_segment(struct kimage *image,
 	return result;
 }
 
+void *kimage_map_segment(struct kimage *image,
+			 unsigned long addr, unsigned long size)
+{
+	unsigned long eaddr = addr + size;
+	unsigned long src_page_addr, dest_page_addr;
+	unsigned int npages;
+	struct page **src_pages;
+	int i;
+	kimage_entry_t *ptr, entry;
+	void *vaddr = NULL;
+
+	/*
+	 * Collect the source pages and map them in a contiguous VA range.
+	 */
+	npages = PFN_UP(eaddr) - PFN_DOWN(addr);
+	src_pages = kvmalloc_array(npages, sizeof(*src_pages), GFP_KERNEL);
+	if (!src_pages) {
+		pr_err("Could not allocate source pages array for destination %lx.\n", addr);
+		return NULL;
+	}
+
+	i = 0;
+	for_each_kimage_entry(image, ptr, entry) {
+		if (entry & IND_DESTINATION) {
+			dest_page_addr = entry & PAGE_MASK;
+		} else if (entry & IND_SOURCE) {
+			if (dest_page_addr >= addr && dest_page_addr < eaddr) {
+				src_page_addr = entry & PAGE_MASK;
+				src_pages[i++] =
+					virt_to_page(__va(src_page_addr));
+				if (i == npages)
+					break;
+				dest_page_addr += PAGE_SIZE;
+			}
+		}
+	}
+
+	/* Sanity check. */
+	WARN_ON(i < npages);
+
+	vaddr = vmap(src_pages, npages, VM_MAP, PAGE_KERNEL);
+	kvfree(src_pages);
+
+	if (!vaddr)
+		pr_err("Could not map segment source pages for destination %lx.\n", addr);
+
+	return vaddr;
+}
+
+void kimage_unmap_segment(void *segment_buffer)
+{
+	vunmap(segment_buffer);
+}
+
 struct kexec_load_limit {
 	/* Mutex protects the limit count.
 	 */
 	struct mutex mutex;

From patchwork Thu Mar 20 01:55:37 2025
X-Patchwork-Submitter: Changyuan Lyu
X-Patchwork-Id: 14023322
Date: Wed, 19 Mar 2025 18:55:37 -0700
In-Reply-To: <20250320015551.2157511-1-changyuanl@google.com>
References: <20250320015551.2157511-1-changyuanl@google.com>
Message-ID: <20250320015551.2157511-3-changyuanl@google.com>
Subject: [PATCH v5 02/16] mm/mm_init: rename init_reserved_page to init_deferred_page
From: Changyuan Lyu
To: linux-kernel@vger.kernel.org

From: "Mike Rapoport (Microsoft)"

When CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, the init_reserved_page() function performs initialization of a struct page that would have been
deferred normally. Rename it to init_deferred_page() to better reflect what the function does.

Signed-off-by: Mike Rapoport (Microsoft)
Signed-off-by: Changyuan Lyu
---
 mm/mm_init.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/mm_init.c b/mm/mm_init.c
index 2630cc30147e..c4b425125bad 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -705,7 +705,7 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 	return false;
 }
 
-static void __meminit init_reserved_page(unsigned long pfn, int nid)
+static void __meminit init_deferred_page(unsigned long pfn, int nid)
 {
 	pg_data_t *pgdat;
 	int zid;
@@ -739,7 +739,7 @@ static inline bool defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 	return false;
 }
 
-static inline void init_reserved_page(unsigned long pfn, int nid)
+static inline void init_deferred_page(unsigned long pfn, int nid)
 {
 }
 #endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */
@@ -760,7 +760,7 @@ void __meminit reserve_bootmem_region(phys_addr_t start,
 	if (pfn_valid(start_pfn)) {
 		struct page *page = pfn_to_page(start_pfn);
 
-		init_reserved_page(start_pfn, nid);
+		init_deferred_page(start_pfn, nid);
 
 		/*
 		 * no need for atomic set_bit because the struct

From patchwork Thu Mar 20 01:55:38 2025
X-Patchwork-Submitter: Changyuan Lyu
X-Patchwork-Id: 14023323
Date: Wed, 19 Mar 2025 18:55:38 -0700
In-Reply-To: <20250320015551.2157511-1-changyuanl@google.com>
References: <20250320015551.2157511-1-changyuanl@google.com>
Message-ID: <20250320015551.2157511-4-changyuanl@google.com>
Subject: [PATCH v5 03/16] memblock: add MEMBLOCK_RSRV_KERN flag
From: Changyuan Lyu
To: linux-kernel@vger.kernel.org
Cc: graf@amazon.com, akpm@linux-foundation.org, luto@kernel.org, anthony.yznaga@oracle.com, arnd@arndb.de, ashish.kalra@amd.com, benh@kernel.crashing.org, bp@alien8.de, catalin.marinas@arm.com, dave.hansen@linux.intel.com, dwmw2@infradead.org, ebiederm@xmission.com, mingo@redhat.com, jgowans@amazon.com,
    corbet@lwn.net, krzk@kernel.org, rppt@kernel.org, mark.rutland@arm.com, pbonzini@redhat.com, pasha.tatashin@soleen.com, hpa@zytor.com, peterz@infradead.org, ptyadav@amazon.de, robh+dt@kernel.org, robh@kernel.org, saravanak@google.com, skinsburskii@linux.microsoft.com, rostedt@goodmis.org, tglx@linutronix.de, thomas.lendacky@amd.com, usama.arif@bytedance.com, will@kernel.org, devicetree@vger.kernel.org, kexec@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org, Changyuan Lyu

From: "Mike Rapoport (Microsoft)"

Introduce the MEMBLOCK_RSRV_KERN flag to denote areas that were reserved for kernel use either directly with memblock_reserve_kern() or via memblock allocations.

Signed-off-by: Mike Rapoport (Microsoft)
Co-developed-by: Changyuan Lyu
Signed-off-by: Changyuan Lyu
---
 include/linux/memblock.h                      | 19 ++++++++-
 mm/memblock.c                                 | 40 +++++++++++++++----
 tools/testing/memblock/tests/alloc_api.c      | 22 +++++-----
 .../memblock/tests/alloc_helpers_api.c        |  4 +-
 tools/testing/memblock/tests/alloc_nid_api.c  | 20 +++++-----
 5 files changed, 73 insertions(+), 32 deletions(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index e79eb6ac516f..1037fd7aabf4 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -42,6 +42,9 @@ extern unsigned long long max_possible_pfn;
  * kernel resource tree.
  * @MEMBLOCK_RSRV_NOINIT: memory region for which struct pages are
  * not initialized (only for reserved regions).
+ * @MEMBLOCK_RSRV_KERN: memory region that is reserved for kernel use,
+ * either explicitly with memblock_reserve_kern() or via memblock
+ * allocation APIs. All memblock allocations set this flag.
  */
 enum memblock_flags {
 	MEMBLOCK_NONE		= 0x0,	/* No special request */
@@ -50,6 +53,7 @@ enum memblock_flags {
 	MEMBLOCK_NOMAP		= 0x4,	/* don't add to kernel direct mapping */
 	MEMBLOCK_DRIVER_MANAGED = 0x8,	/* always detected via a driver */
 	MEMBLOCK_RSRV_NOINIT	= 0x10,	/* don't initialize struct pages */
+	MEMBLOCK_RSRV_KERN	= 0x20,	/* memory reserved for kernel use */
 };
 
 /**
@@ -116,7 +120,19 @@ int memblock_add_node(phys_addr_t base, phys_addr_t size, int nid,
 int memblock_add(phys_addr_t base, phys_addr_t size);
 int memblock_remove(phys_addr_t base, phys_addr_t size);
 int memblock_phys_free(phys_addr_t base, phys_addr_t size);
-int memblock_reserve(phys_addr_t base, phys_addr_t size);
+int __memblock_reserve(phys_addr_t base, phys_addr_t size, int nid,
+		       enum memblock_flags flags);
+
+static __always_inline int memblock_reserve(phys_addr_t base, phys_addr_t size)
+{
+	return __memblock_reserve(base, size, NUMA_NO_NODE, 0);
+}
+
+static __always_inline int memblock_reserve_kern(phys_addr_t base, phys_addr_t size)
+{
+	return __memblock_reserve(base, size, NUMA_NO_NODE, MEMBLOCK_RSRV_KERN);
+}
+
 #ifdef CONFIG_HAVE_MEMBLOCK_PHYS_MAP
 int memblock_physmem_add(phys_addr_t base, phys_addr_t size);
 #endif
@@ -477,6 +493,7 @@ static inline __init_memblock bool memblock_bottom_up(void)
 
 phys_addr_t memblock_phys_mem_size(void);
 phys_addr_t memblock_reserved_size(void);
+phys_addr_t memblock_reserved_kern_size(phys_addr_t limit, int nid);
 unsigned long memblock_estimated_nr_free_pages(void);
 phys_addr_t memblock_start_of_DRAM(void);
 phys_addr_t memblock_end_of_DRAM(void);
diff --git a/mm/memblock.c b/mm/memblock.c
index 95af35fd1389..e704e3270b32 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -491,7 +491,7 @@ static int __init_memblock memblock_double_array(struct
memblock_type *type,
 	 * needn't do it
 	 */
 	if (!use_slab)
-		BUG_ON(memblock_reserve(addr, new_alloc_size));
+		BUG_ON(memblock_reserve_kern(addr, new_alloc_size));
 
 	/* Update slab flag */
 	*in_slab = use_slab;
@@ -641,7 +641,7 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
 #ifdef CONFIG_NUMA
 			WARN_ON(nid != memblock_get_region_node(rgn));
 #endif
-			WARN_ON(flags != rgn->flags);
+			WARN_ON(flags != MEMBLOCK_NONE && flags != rgn->flags);
 			nr_new++;
 			if (insert) {
 				if (start_rgn == -1)
@@ -901,14 +901,15 @@ int __init_memblock memblock_phys_free(phys_addr_t base, phys_addr_t size)
 	return memblock_remove_range(&memblock.reserved, base, size);
 }
 
-int __init_memblock memblock_reserve(phys_addr_t base, phys_addr_t size)
+int __init_memblock __memblock_reserve(phys_addr_t base, phys_addr_t size,
+				       int nid, enum memblock_flags flags)
 {
 	phys_addr_t end = base + size - 1;
 
-	memblock_dbg("%s: [%pa-%pa] %pS\n", __func__,
-		     &base, &end, (void *)_RET_IP_);
+	memblock_dbg("%s: [%pa-%pa] nid=%d flags=%x %pS\n", __func__,
+		     &base, &end, nid, flags, (void *)_RET_IP_);
 
-	return memblock_add_range(&memblock.reserved, base, size, MAX_NUMNODES, 0);
+	return memblock_add_range(&memblock.reserved, base, size, nid, flags);
 }
 
 #ifdef CONFIG_HAVE_MEMBLOCK_PHYS_MAP
@@ -1459,14 +1460,14 @@ phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
 again:
 	found = memblock_find_in_range_node(size, align, start, end, nid,
 					    flags);
-	if (found && !memblock_reserve(found, size))
+	if (found && !__memblock_reserve(found, size, nid, MEMBLOCK_RSRV_KERN))
 		goto done;
 
 	if (numa_valid_node(nid) && !exact_nid) {
 		found = memblock_find_in_range_node(size, align, start,
 						    end, NUMA_NO_NODE,
 						    flags);
-		if (found && !memblock_reserve(found, size))
+		if (found && !memblock_reserve_kern(found, size))
 			goto done;
 	}
 
@@ -1751,6 +1752,28 @@ phys_addr_t __init_memblock memblock_reserved_size(void)
 	return memblock.reserved.total_size;
 }
 
+phys_addr_t __init_memblock memblock_reserved_kern_size(phys_addr_t
limit, int nid)
+{
+	struct memblock_region *r;
+	phys_addr_t total = 0;
+
+	for_each_reserved_mem_region(r) {
+		phys_addr_t size = r->size;
+
+		if (r->base > limit)
+			break;
+
+		if (r->base + r->size > limit)
+			size = limit - r->base;
+
+		if (nid == memblock_get_region_node(r) || !numa_valid_node(nid))
+			if (r->flags & MEMBLOCK_RSRV_KERN)
+				total += size;
+	}
+
+	return total;
+}
+
 /**
  * memblock_estimated_nr_free_pages - return estimated number of free pages
  * from memblock point of view
@@ -2397,6 +2420,7 @@ static const char * const flagname[] = {
 	[ilog2(MEMBLOCK_NOMAP)] = "NOMAP",
 	[ilog2(MEMBLOCK_DRIVER_MANAGED)] = "DRV_MNG",
 	[ilog2(MEMBLOCK_RSRV_NOINIT)] = "RSV_NIT",
+	[ilog2(MEMBLOCK_RSRV_KERN)] = "RSV_KERN",
 };
 
 static int memblock_debug_show(struct seq_file *m, void *private)
diff --git a/tools/testing/memblock/tests/alloc_api.c b/tools/testing/memblock/tests/alloc_api.c
index 68f1a75cd72c..c55f67dd367d 100644
--- a/tools/testing/memblock/tests/alloc_api.c
+++ b/tools/testing/memblock/tests/alloc_api.c
@@ -134,7 +134,7 @@ static int alloc_top_down_before_check(void)
 	PREFIX_PUSH();
 	setup_memblock();
 
-	memblock_reserve(memblock_end_of_DRAM() - total_size, r1_size);
+	memblock_reserve_kern(memblock_end_of_DRAM() - total_size, r1_size);
 
 	allocated_ptr = run_memblock_alloc(r2_size, SMP_CACHE_BYTES);
 
@@ -182,7 +182,7 @@ static int alloc_top_down_after_check(void)
 
 	total_size = r1.size + r2_size;
 
-	memblock_reserve(r1.base, r1.size);
+	memblock_reserve_kern(r1.base, r1.size);
 
 	allocated_ptr = run_memblock_alloc(r2_size, SMP_CACHE_BYTES);
 
@@ -231,8 +231,8 @@ static int alloc_top_down_second_fit_check(void)
 
 	total_size = r1.size + r2.size + r3_size;
 
-	memblock_reserve(r1.base, r1.size);
-	memblock_reserve(r2.base, r2.size);
+	memblock_reserve_kern(r1.base, r1.size);
+	memblock_reserve_kern(r2.base, r2.size);
 
 	allocated_ptr = run_memblock_alloc(r3_size, SMP_CACHE_BYTES);
 
@@ -285,8 +285,8 @@ static int alloc_in_between_generic_check(void)
 
 	total_size = r1.size + r2.size +
		     r3_size;
 
-	memblock_reserve(r1.base, r1.size);
-	memblock_reserve(r2.base, r2.size);
+	memblock_reserve_kern(r1.base, r1.size);
+	memblock_reserve_kern(r2.base, r2.size);
 
 	allocated_ptr = run_memblock_alloc(r3_size, SMP_CACHE_BYTES);
 
@@ -422,7 +422,7 @@ static int alloc_limited_space_generic_check(void)
 	setup_memblock();
 
 	/* Simulate almost-full memory */
-	memblock_reserve(memblock_start_of_DRAM(), reserved_size);
+	memblock_reserve_kern(memblock_start_of_DRAM(), reserved_size);
 
 	allocated_ptr = run_memblock_alloc(available_size, SMP_CACHE_BYTES);
 
@@ -608,7 +608,7 @@ static int alloc_bottom_up_before_check(void)
 	PREFIX_PUSH();
 	setup_memblock();
 
-	memblock_reserve(memblock_start_of_DRAM() + r1_size, r2_size);
+	memblock_reserve_kern(memblock_start_of_DRAM() + r1_size, r2_size);
 
 	allocated_ptr = run_memblock_alloc(r1_size, SMP_CACHE_BYTES);
 
@@ -655,7 +655,7 @@ static int alloc_bottom_up_after_check(void)
 
 	total_size = r1.size + r2_size;
 
-	memblock_reserve(r1.base, r1.size);
+	memblock_reserve_kern(r1.base, r1.size);
 
 	allocated_ptr = run_memblock_alloc(r2_size, SMP_CACHE_BYTES);
 
@@ -705,8 +705,8 @@ static int alloc_bottom_up_second_fit_check(void)
 
 	total_size = r1.size + r2.size + r3_size;
 
-	memblock_reserve(r1.base, r1.size);
-	memblock_reserve(r2.base, r2.size);
+	memblock_reserve_kern(r1.base, r1.size);
+	memblock_reserve_kern(r2.base, r2.size);
 
 	allocated_ptr = run_memblock_alloc(r3_size, SMP_CACHE_BYTES);
 
diff --git a/tools/testing/memblock/tests/alloc_helpers_api.c b/tools/testing/memblock/tests/alloc_helpers_api.c
index 3ef9486da8a0..e5362cfd2ff3 100644
--- a/tools/testing/memblock/tests/alloc_helpers_api.c
+++ b/tools/testing/memblock/tests/alloc_helpers_api.c
@@ -163,7 +163,7 @@ static int alloc_from_top_down_no_space_above_check(void)
 	min_addr = memblock_end_of_DRAM() - SMP_CACHE_BYTES * 2;
 
 	/* No space above this address */
-	memblock_reserve(min_addr, r2_size);
+	memblock_reserve_kern(min_addr, r2_size);
 
 	allocated_ptr = memblock_alloc_from(r1_size,
					    SMP_CACHE_BYTES, min_addr);
 
@@ -199,7 +199,7 @@ static int alloc_from_top_down_min_addr_cap_check(void)
 	start_addr = (phys_addr_t)memblock_start_of_DRAM();
 	min_addr = start_addr - SMP_CACHE_BYTES * 3;
 
-	memblock_reserve(start_addr + r1_size, MEM_SIZE - r1_size);
+	memblock_reserve_kern(start_addr + r1_size, MEM_SIZE - r1_size);
 
 	allocated_ptr = memblock_alloc_from(r1_size, SMP_CACHE_BYTES,
 					    min_addr);
 
diff --git a/tools/testing/memblock/tests/alloc_nid_api.c b/tools/testing/memblock/tests/alloc_nid_api.c
index 49bb416d34ff..562e4701b0e0 100644
--- a/tools/testing/memblock/tests/alloc_nid_api.c
+++ b/tools/testing/memblock/tests/alloc_nid_api.c
@@ -324,7 +324,7 @@ static int alloc_nid_min_reserved_generic_check(void)
 	min_addr = max_addr - r2_size;
 	reserved_base = min_addr - r1_size;
 
-	memblock_reserve(reserved_base, r1_size);
+	memblock_reserve_kern(reserved_base, r1_size);
 
 	allocated_ptr = run_memblock_alloc_nid(r2_size, SMP_CACHE_BYTES,
 					       min_addr, max_addr,
@@ -374,7 +374,7 @@ static int alloc_nid_max_reserved_generic_check(void)
 	max_addr = memblock_end_of_DRAM() - r1_size;
 	min_addr = max_addr - r2_size;
 
-	memblock_reserve(max_addr, r1_size);
+	memblock_reserve_kern(max_addr, r1_size);
 
 	allocated_ptr = run_memblock_alloc_nid(r2_size, SMP_CACHE_BYTES,
 					       min_addr, max_addr,
@@ -436,8 +436,8 @@ static int alloc_nid_top_down_reserved_with_space_check(void)
 	min_addr = r2.base + r2.size;
 	max_addr = r1.base;
 
-	memblock_reserve(r1.base, r1.size);
-	memblock_reserve(r2.base, r2.size);
+	memblock_reserve_kern(r1.base, r1.size);
+	memblock_reserve_kern(r2.base, r2.size);
 
 	allocated_ptr = run_memblock_alloc_nid(r3_size, SMP_CACHE_BYTES,
 					       min_addr, max_addr,
@@ -499,8 +499,8 @@ static int alloc_nid_reserved_full_merge_generic_check(void)
 	min_addr = r2.base + r2.size;
 	max_addr = r1.base;
 
-	memblock_reserve(r1.base, r1.size);
-	memblock_reserve(r2.base, r2.size);
+	memblock_reserve_kern(r1.base, r1.size);
+	memblock_reserve_kern(r2.base, r2.size);
 
 	allocated_ptr =
 		run_memblock_alloc_nid(r3_size, SMP_CACHE_BYTES,
 				       min_addr, max_addr,

@@ -563,8 +563,8 @@ static int alloc_nid_top_down_reserved_no_space_check(void)
 	min_addr = r2.base + r2.size;
 	max_addr = r1.base;

-	memblock_reserve(r1.base, r1.size);
-	memblock_reserve(r2.base, r2.size);
+	memblock_reserve_kern(r1.base, r1.size);
+	memblock_reserve_kern(r2.base, r2.size);

 	allocated_ptr = run_memblock_alloc_nid(r3_size, SMP_CACHE_BYTES,
 					       min_addr, max_addr,

@@ -909,8 +909,8 @@ static int alloc_nid_bottom_up_reserved_with_space_check(void)
 	min_addr = r2.base + r2.size;
 	max_addr = r1.base;

-	memblock_reserve(r1.base, r1.size);
-	memblock_reserve(r2.base, r2.size);
+	memblock_reserve_kern(r1.base, r1.size);
+	memblock_reserve_kern(r2.base, r2.size);

 	allocated_ptr = run_memblock_alloc_nid(r3_size, SMP_CACHE_BYTES,
 					       min_addr, max_addr,

From patchwork Thu Mar 20 01:55:39 2025
X-Patchwork-Submitter: Changyuan Lyu
X-Patchwork-Id: 14023324
Date: Wed, 19 Mar 2025 18:55:39 -0700
In-Reply-To: <20250320015551.2157511-1-changyuanl@google.com>
References: <20250320015551.2157511-1-changyuanl@google.com>
Message-ID: <20250320015551.2157511-5-changyuanl@google.com>
Subject: [PATCH v5 04/16] memblock: Add support for scratch memory
From: Changyuan Lyu
To: linux-kernel@vger.kernel.org
Cc: graf@amazon.com, akpm@linux-foundation.org, luto@kernel.org, anthony.yznaga@oracle.com, arnd@arndb.de, ashish.kalra@amd.com, benh@kernel.crashing.org, bp@alien8.de, catalin.marinas@arm.com, dave.hansen@linux.intel.com, dwmw2@infradead.org, ebiederm@xmission.com, mingo@redhat.com, jgowans@amazon.com, corbet@lwn.net, krzk@kernel.org, rppt@kernel.org, mark.rutland@arm.com, pbonzini@redhat.com, pasha.tatashin@soleen.com, hpa@zytor.com, peterz@infradead.org, ptyadav@amazon.de, robh+dt@kernel.org, robh@kernel.org, saravanak@google.com, skinsburskii@linux.microsoft.com, rostedt@goodmis.org, tglx@linutronix.de, thomas.lendacky@amd.com,
usama.arif@bytedance.com, will@kernel.org, devicetree@vger.kernel.org, kexec@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org

From: Alexander Graf

With KHO (Kexec HandOver), we need a way to ensure that the new kernel does not allocate memory on top of any memory regions that the previous kernel was handing over. To know where those regions are, they must be included in the memblock.reserved array, which may not be big enough to hold all the ranges that need to be persisted across kexec. Resizing the array requires allocating memory, which puts us in a catch-22.

The solution is to limit memblock allocations to scratch regions: regions that are safe to allocate from while other memory must remain intact across kexec. KHO provides several such "scratch regions" as part of its metadata. These scratch regions are contiguous memory blocks that are known not to contain any data that must be persisted across kexec, and they should be large enough to accommodate all memblock allocations done by the kexec'ed kernel.

We introduce a new memblock_set_kho_scratch_only() function that allows KHO to indicate that any memblock allocation must happen from the scratch regions.

Later, we may want to perform another KHO kexec. For that, we reuse the same scratch regions.
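The scratch-only switch effectively narrows the allocator's candidate set to regions carrying the scratch flag. The following self-contained sketch models that filtering in plain userspace C; the mock_* names and the simplified first-fit search are illustrative assumptions, not the kernel implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical, simplified model of the flag filter this patch adds:
 * while scratch-only mode is on, every region that is not marked as
 * scratch is skipped by the allocator. */
enum mock_flags {
	MOCK_NONE        = 0x0,
	MOCK_KHO_SCRATCH = 0x40,	/* mirrors MEMBLOCK_KHO_SCRATCH */
};

struct mock_region {
	uint64_t base;
	uint64_t size;
	enum mock_flags flags;
};

static bool mock_scratch_only;	/* mirrors the kho_scratch_only toggle */

/* mirrors choose_memblock_flags(): in scratch-only mode, constrain the
 * search to scratch regions */
static enum mock_flags mock_choose_flags(void)
{
	return mock_scratch_only ? MOCK_KHO_SCRATCH : MOCK_NONE;
}

/* mirrors the new check in should_skip_region() */
static bool mock_should_skip(const struct mock_region *r, enum mock_flags want)
{
	return (want & MOCK_KHO_SCRATCH) && !(r->flags & MOCK_KHO_SCRATCH);
}

/* First-fit walk over the region table, honouring the flag filter */
static const struct mock_region *mock_alloc(const struct mock_region *regions,
					    size_t n, uint64_t size)
{
	enum mock_flags want = mock_choose_flags();

	for (size_t i = 0; i < n; i++) {
		if (mock_should_skip(&regions[i], want))
			continue;
		if (regions[i].size >= size)
			return &regions[i];
	}
	return NULL;
}
```

Flipping mock_scratch_only mirrors what memblock_set_kho_scratch_only() and memblock_clear_kho_scratch_only() do for real memblock allocations.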
To ensure that data to be handed over is never allocated inside a scratch region, we later flip the semantics of the scratch regions with memblock_clear_kho_scratch_only(): after that call, no allocations may happen from scratch memblock regions. We will lift that restriction in the next patch.

Signed-off-by: Alexander Graf
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport (Microsoft)
---
 include/linux/memblock.h | 20 +++++++++++++
 mm/Kconfig               |  4 +++
 mm/memblock.c            | 61 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 85 insertions(+)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 1037fd7aabf4..a83738b7218b 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -45,6 +45,11 @@ extern unsigned long long max_possible_pfn;
  * @MEMBLOCK_RSRV_KERN: memory region that is reserved for kernel use,
  * either explicitly with memblock_reserve_kern() or via memblock
  * allocation APIs. All memblock allocations set this flag.
+ * @MEMBLOCK_KHO_SCRATCH: memory region that kexec can pass to the next
+ * kernel in handover mode. During early boot, we do not know about all
+ * memory reservations yet, so we get scratch memory from the previous
+ * kernel that we know is good to use. It is the only memory that
+ * allocations may happen from in this phase.
  */
 enum memblock_flags {
 	MEMBLOCK_NONE = 0x0,	/* No special request */
@@ -54,6 +59,7 @@ enum memblock_flags {
 	MEMBLOCK_DRIVER_MANAGED = 0x8,	/* always detected via a driver */
 	MEMBLOCK_RSRV_NOINIT = 0x10,	/* don't initialize struct pages */
 	MEMBLOCK_RSRV_KERN = 0x20,	/* memory reserved for kernel use */
+	MEMBLOCK_KHO_SCRATCH = 0x40,	/* scratch memory for kexec handover */
 };

 /**
@@ -148,6 +154,8 @@ int memblock_mark_mirror(phys_addr_t base, phys_addr_t size);
 int memblock_mark_nomap(phys_addr_t base, phys_addr_t size);
 int memblock_clear_nomap(phys_addr_t base, phys_addr_t size);
 int memblock_reserved_mark_noinit(phys_addr_t base, phys_addr_t size);
+int memblock_mark_kho_scratch(phys_addr_t base, phys_addr_t size);
+int memblock_clear_kho_scratch(phys_addr_t base, phys_addr_t size);

 void memblock_free_all(void);
 void memblock_free(void *ptr, size_t size);
@@ -292,6 +300,11 @@ static inline bool memblock_is_driver_managed(struct memblock_region *m)
 	return m->flags & MEMBLOCK_DRIVER_MANAGED;
 }

+static inline bool memblock_is_kho_scratch(struct memblock_region *m)
+{
+	return m->flags & MEMBLOCK_KHO_SCRATCH;
+}
+
 int memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn,
 			    unsigned long *end_pfn);
 void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
@@ -620,5 +633,12 @@ static inline void early_memtest(phys_addr_t start, phys_addr_t end) { }
 static inline void memtest_report_meminfo(struct seq_file *m) { }
 #endif

+#ifdef CONFIG_MEMBLOCK_KHO_SCRATCH
+void memblock_set_kho_scratch_only(void);
+void memblock_clear_kho_scratch_only(void);
+#else
+static inline void memblock_set_kho_scratch_only(void) { }
+static inline void memblock_clear_kho_scratch_only(void) { }
+#endif
+
 #endif /* _LINUX_MEMBLOCK_H */
diff --git a/mm/Kconfig b/mm/Kconfig
index 1b501db06417..550bbafe5c0b 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -506,6 +506,10 @@ config HAVE_GUP_FAST
 	depends on MMU
 	bool

+# Enable memblock support for scratch memory which is needed for kexec handover
+config MEMBLOCK_KHO_SCRATCH
+	bool
+
 # Don't discard allocated memory used to track "memory" and "reserved" memblocks
 # after early boot, so it can still be used to test for validity of memory.
 # Also, memblocks are updated with memory hot(un)plug.
diff --git a/mm/memblock.c b/mm/memblock.c
index e704e3270b32..c0f7da7dff47 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -106,6 +106,13 @@ unsigned long min_low_pfn;
 unsigned long max_pfn;
 unsigned long long max_possible_pfn;

+#ifdef CONFIG_MEMBLOCK_KHO_SCRATCH
+/* When set to true, only allocate from MEMBLOCK_KHO_SCRATCH ranges */
+static bool kho_scratch_only;
+#else
+#define kho_scratch_only	false
+#endif
+
 static struct memblock_region memblock_memory_init_regions[INIT_MEMBLOCK_MEMORY_REGIONS] __initdata_memblock;
 static struct memblock_region memblock_reserved_init_regions[INIT_MEMBLOCK_RESERVED_REGIONS] __initdata_memblock;
 #ifdef CONFIG_HAVE_MEMBLOCK_PHYS_MAP
@@ -165,6 +172,10 @@ bool __init_memblock memblock_has_mirror(void)

 static enum memblock_flags __init_memblock choose_memblock_flags(void)
 {
+	/* skip non-scratch memory for kho early boot allocations */
+	if (kho_scratch_only)
+		return MEMBLOCK_KHO_SCRATCH;
+
 	return system_has_some_mirror ? MEMBLOCK_MIRROR : MEMBLOCK_NONE;
 }

@@ -924,6 +935,18 @@ int __init_memblock memblock_physmem_add(phys_addr_t base, phys_addr_t size)
 }
 #endif

+#ifdef CONFIG_MEMBLOCK_KHO_SCRATCH
+__init_memblock void memblock_set_kho_scratch_only(void)
+{
+	kho_scratch_only = true;
+}
+
+__init_memblock void memblock_clear_kho_scratch_only(void)
+{
+	kho_scratch_only = false;
+}
+#endif
+
 /**
  * memblock_setclr_flag - set or clear flag for a memory region
  * @type: memblock type to set/clear flag for
@@ -1049,6 +1072,36 @@ int __init_memblock memblock_reserved_mark_noinit(phys_addr_t base, phys_addr_t
 			       MEMBLOCK_RSRV_NOINIT);
 }

+/**
+ * memblock_mark_kho_scratch - Mark a memory region as MEMBLOCK_KHO_SCRATCH.
+ * @base: the base phys addr of the region
+ * @size: the size of the region
+ *
+ * Only memory regions marked with %MEMBLOCK_KHO_SCRATCH will be considered
+ * for allocations during early boot with kexec handover.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+int __init_memblock memblock_mark_kho_scratch(phys_addr_t base, phys_addr_t size)
+{
+	return memblock_setclr_flag(&memblock.memory, base, size, 1,
+				    MEMBLOCK_KHO_SCRATCH);
+}
+
+/**
+ * memblock_clear_kho_scratch - Clear MEMBLOCK_KHO_SCRATCH flag for a
+ * specified region.
+ * @base: the base phys addr of the region
+ * @size: the size of the region
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+int __init_memblock memblock_clear_kho_scratch(phys_addr_t base, phys_addr_t size)
+{
+	return memblock_setclr_flag(&memblock.memory, base, size, 0,
+				    MEMBLOCK_KHO_SCRATCH);
+}
+
 static bool should_skip_region(struct memblock_type *type,
 			       struct memblock_region *m,
 			       int nid, int flags)
@@ -1080,6 +1133,13 @@ static bool should_skip_region(struct memblock_type *type,
 	if (!(flags & MEMBLOCK_DRIVER_MANAGED) && memblock_is_driver_managed(m))
 		return true;

+	/*
+	 * In early alloc during kexec handover, we can only consider
+	 * MEMBLOCK_KHO_SCRATCH regions for the allocations
+	 */
+	if ((flags & MEMBLOCK_KHO_SCRATCH) && !memblock_is_kho_scratch(m))
+		return true;
+
 	return false;
 }

@@ -2421,6 +2481,7 @@ static const char * const flagname[] = {
 	[ilog2(MEMBLOCK_DRIVER_MANAGED)] = "DRV_MNG",
 	[ilog2(MEMBLOCK_RSRV_NOINIT)] = "RSV_NIT",
 	[ilog2(MEMBLOCK_RSRV_KERN)] = "RSV_KERN",
+	[ilog2(MEMBLOCK_KHO_SCRATCH)] = "KHO_SCRATCH",
 };

 static int memblock_debug_show(struct seq_file *m, void *private)

From patchwork Thu Mar 20 01:55:40 2025
X-Patchwork-Submitter: Changyuan Lyu
X-Patchwork-Id: 14023325
Date: Wed, 19 Mar 2025 18:55:40 -0700
In-Reply-To: <20250320015551.2157511-1-changyuanl@google.com>
References: <20250320015551.2157511-1-changyuanl@google.com>
Message-ID:
<20250320015551.2157511-6-changyuanl@google.com>
Subject: [PATCH v5 05/16] memblock: introduce memmap_init_kho_scratch()
From: Changyuan Lyu
To: linux-kernel@vger.kernel.org

From: "Mike Rapoport (Microsoft)"

With deferred initialization of struct page it will be necessary to initialize the memory map for KHO scratch regions early. Add a memmap_init_kho_scratch() method that will allow such initialization in upcoming patches.
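The new helper only initializes pages that lie wholly inside a scratch range, walking PFNs from PFN_UP(start) up to (but excluding) PFN_DOWN(end). A minimal userspace sketch of that rounding, assuming 4 KiB pages (the mock_* helpers are hypothetical stand-ins for the kernel's PFN_UP()/PFN_DOWN()):

```c
#include <assert.h>
#include <stdint.h>

#define MOCK_PAGE_SHIFT 12	/* assume 4 KiB pages for this sketch */

/* mirrors PFN_UP(): first page frame starting at or after addr */
static uint64_t mock_pfn_up(uint64_t addr)
{
	return (addr + (1ULL << MOCK_PAGE_SHIFT) - 1) >> MOCK_PAGE_SHIFT;
}

/* mirrors PFN_DOWN(): page frame containing addr, rounded down */
static uint64_t mock_pfn_down(uint64_t addr)
{
	return addr >> MOCK_PAGE_SHIFT;
}

/* Number of struct pages the PFN_UP/PFN_DOWN loop would initialize for
 * one scratch range: only pages fully contained in [start, end). */
static uint64_t mock_pages_to_init(uint64_t start, uint64_t end)
{
	uint64_t first = mock_pfn_up(start);
	uint64_t last = mock_pfn_down(end);

	return last > first ? last - first : 0;
}
```

Partial pages at either edge of a range are deliberately excluded by this rounding; a range smaller than one page therefore initializes nothing.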
Signed-off-by: Mike Rapoport (Microsoft)
---
 include/linux/memblock.h |  2 ++
 mm/internal.h            |  2 ++
 mm/memblock.c            | 22 ++++++++++++++++++++++
 mm/mm_init.c             | 11 ++++++++---
 4 files changed, 34 insertions(+), 3 deletions(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index a83738b7218b..497e2c1364a6 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -636,9 +636,11 @@ static inline void memtest_report_meminfo(struct seq_file *m) { }
 #ifdef CONFIG_MEMBLOCK_KHO_SCRATCH
 void memblock_set_kho_scratch_only(void);
 void memblock_clear_kho_scratch_only(void);
+void memmap_init_kho_scratch_pages(void);
 #else
 static inline void memblock_set_kho_scratch_only(void) { }
 static inline void memblock_clear_kho_scratch_only(void) { }
+static inline void memmap_init_kho_scratch_pages(void) {}
 #endif

 #endif /* _LINUX_MEMBLOCK_H */
diff --git a/mm/internal.h b/mm/internal.h
index 20b3535935a3..8e45a2ae961a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1053,6 +1053,8 @@ DECLARE_STATIC_KEY_TRUE(deferred_pages);
 bool __init deferred_grow_zone(struct zone *zone, unsigned int order);
 #endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */

+void init_deferred_page(unsigned long pfn, int nid);
+
 enum mminit_level {
 	MMINIT_WARNING,
 	MMINIT_VERIFY,
diff --git a/mm/memblock.c b/mm/memblock.c
index c0f7da7dff47..d5d406a5160a 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -945,6 +945,28 @@ __init_memblock void memblock_clear_kho_scratch_only(void)
 {
 	kho_scratch_only = false;
 }
+
+void __init_memblock memmap_init_kho_scratch_pages(void)
+{
+	phys_addr_t start, end;
+	unsigned long pfn;
+	int nid;
+	u64 i;
+
+	if (!IS_ENABLED(CONFIG_DEFERRED_STRUCT_PAGE_INIT))
+		return;
+
+	/*
+	 * Initialize struct pages for free scratch memory.
+	 * The struct pages for reserved scratch memory will be set up in
+	 * reserve_bootmem_region()
+	 */
+	__for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
+			     MEMBLOCK_KHO_SCRATCH, &start, &end, &nid) {
+		for (pfn = PFN_UP(start); pfn < PFN_DOWN(end); pfn++)
+			init_deferred_page(pfn, nid);
+	}
+}
 #endif

 /**
diff --git a/mm/mm_init.c b/mm/mm_init.c
index c4b425125bad..04441c258b05 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -705,7 +705,7 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 	return false;
 }

-static void __meminit init_deferred_page(unsigned long pfn, int nid)
+static void __meminit __init_deferred_page(unsigned long pfn, int nid)
 {
 	pg_data_t *pgdat;
 	int zid;
@@ -739,11 +739,16 @@ static inline bool defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 	return false;
 }

-static inline void init_deferred_page(unsigned long pfn, int nid)
+static inline void __init_deferred_page(unsigned long pfn, int nid)
 {
 }
 #endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */

+void __meminit init_deferred_page(unsigned long pfn, int nid)
+{
+	__init_deferred_page(pfn, nid);
+}
+
 /*
  * Initialised pages do not have PageReserved set. This function is
  * called for each range allocated by the bootmem allocator and
@@ -760,7 +765,7 @@ void __meminit reserve_bootmem_region(phys_addr_t start,
 		if (pfn_valid(start_pfn)) {
 			struct page *page = pfn_to_page(start_pfn);

-			init_deferred_page(start_pfn, nid);
+			__init_deferred_page(start_pfn, nid);

 			/*
 			 * no need for atomic set_bit because the struct

From patchwork Thu Mar 20 01:55:41 2025
X-Patchwork-Submitter: Changyuan Lyu
X-Patchwork-Id: 14023330
Date: Wed, 19 Mar 2025 18:55:41 -0700
In-Reply-To: <20250320015551.2157511-1-changyuanl@google.com>
References: <20250320015551.2157511-1-changyuanl@google.com>
Message-ID: <20250320015551.2157511-7-changyuanl@google.com>
Subject: [PATCH v5 06/16] hashtable: add macro HASHTABLE_INIT
From: Changyuan Lyu
To: linux-kernel@vger.kernel.org
Similar to HLIST_HEAD_INIT, HASHTABLE_INIT allows a hashtable embedded in another structure to be initialized at compile time. For example:

    struct tree_node {
        DECLARE_HASHTABLE(properties, 4);
        DECLARE_HASHTABLE(sub_nodes, 4);
    };

    static struct tree_node root_node = {
        .properties = HASHTABLE_INIT(4),
        .sub_nodes = HASHTABLE_INIT(4),
    };

Signed-off-by: Changyuan Lyu
---
 include/linux/hashtable.h | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/include/linux/hashtable.h b/include/linux/hashtable.h
index f6c666730b8c..27e07a436e2a 100644
--- a/include/linux/hashtable.h
+++ b/include/linux/hashtable.h
@@ -13,13 +13,14 @@
 #include
 #include

+#define HASHTABLE_INIT(bits) { [0 ... ((1 << (bits)) - 1)] = HLIST_HEAD_INIT }
+
 #define DEFINE_HASHTABLE(name, bits)					\
-	struct hlist_head name[1 << (bits)] =				\
-			{ [0 ... ((1 << (bits)) - 1)] = HLIST_HEAD_INIT }
+	struct hlist_head name[1 << (bits)] = HASHTABLE_INIT(bits)

 #define DEFINE_READ_MOSTLY_HASHTABLE(name, bits)			\
 	struct hlist_head name[1 << (bits)] __read_mostly =		\
-			{ [0 ... ((1 << (bits)) - 1)] = HLIST_HEAD_INIT }
+			HASHTABLE_INIT(bits)

 #define DECLARE_HASHTABLE(name, bits)					\
 	struct hlist_head name[1 << (bits)]

From patchwork Thu Mar 20 01:55:42 2025
X-Patchwork-Submitter: Changyuan Lyu
X-Patchwork-Id: 14023331
Date: Wed, 19 Mar 2025 18:55:42 -0700
In-Reply-To: <20250320015551.2157511-1-changyuanl@google.com>
References: <20250320015551.2157511-1-changyuanl@google.com>
Message-ID: <20250320015551.2157511-8-changyuanl@google.com>
Subject: [PATCH v5 07/16] kexec: add Kexec HandOver (KHO) generation helpers
From: Changyuan Lyu
To: linux-kernel@vger.kernel.org
Cc: graf@amazon.com, akpm@linux-foundation.org, luto@kernel.org,
    anthony.yznaga@oracle.com, arnd@arndb.de, ashish.kalra@amd.com,
    benh@kernel.crashing.org, bp@alien8.de, catalin.marinas@arm.com,
    dave.hansen@linux.intel.com, dwmw2@infradead.org, ebiederm@xmission.com,
    mingo@redhat.com, jgowans@amazon.com, corbet@lwn.net, krzk@kernel.org,
    rppt@kernel.org, mark.rutland@arm.com, pbonzini@redhat.com,
    pasha.tatashin@soleen.com, hpa@zytor.com, peterz@infradead.org,
    ptyadav@amazon.de, robh+dt@kernel.org, robh@kernel.org,
    saravanak@google.com, skinsburskii@linux.microsoft.com,
    rostedt@goodmis.org, tglx@linutronix.de, thomas.lendacky@amd.com,
    usama.arif@bytedance.com, will@kernel.org, devicetree@vger.kernel.org,
    kexec@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
    linux-doc@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org,
    Changyuan Lyu

From: Alexander Graf

Add the core infrastructure to generate Kexec HandOver
metadata.

Kexec HandOver is a mechanism that allows Linux to preserve state -
arbitrary properties as well as memory locations - across kexec.

It does so using 2 concepts:

1) State Tree - Every KHO kexec carries a state tree that describes the
   state of the system. The state tree is represented as hash-tables.
   Device drivers can add/remove their data into/from the state tree at
   system runtime. On kexec, the tree is converted to FDT (flattened
   device tree).

2) Scratch Regions - CMA regions that we allocate in the first kernel.
   CMA gives us the guarantee that no handover pages land in those
   regions, because handover pages must be at a static physical memory
   location. We use these regions as the place to load future kexec
   images so that they won't collide with any handover data.

Signed-off-by: Alexander Graf
Co-developed-by: Pratyush Yadav
Signed-off-by: Pratyush Yadav
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport (Microsoft)
Co-developed-by: Changyuan Lyu
Signed-off-by: Changyuan Lyu
---
 MAINTAINERS                    |   2 +-
 include/linux/kexec_handover.h | 109 +++++
 kernel/Makefile                |   1 +
 kernel/kexec_handover.c        | 865 +++++++++++++++++++++++++++++++++
 mm/mm_init.c                   |   8 +
 5 files changed, 984 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/kexec_handover.h
 create mode 100644 kernel/kexec_handover.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 12852355bd66..a000a277ccf7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -12828,7 +12828,7 @@ F:	include/linux/kernfs.h
 KEXEC
 L:	kexec@lists.infradead.org
 W:	http://kernel.org/pub/linux/utils/kernel/kexec/
-F:	include/linux/kexec.h
+F:	include/linux/kexec*.h
 F:	include/uapi/linux/kexec.h
 F:	kernel/kexec*

diff --git a/include/linux/kexec_handover.h b/include/linux/kexec_handover.h
new file mode 100644
index 000000000000..9cd9ad31e2d1
--- /dev/null
+++ b/include/linux/kexec_handover.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef LINUX_KEXEC_HANDOVER_H
+#define LINUX_KEXEC_HANDOVER_H
+
+#include
+#include
+#include
+
+struct kho_scratch {
+	phys_addr_t addr;
+	phys_addr_t size;
+};
+
+/* KHO Notifier index */
+enum kho_event {
+	KEXEC_KHO_FINALIZE = 0,
+	KEXEC_KHO_UNFREEZE = 1,
+};
+
+#define KHO_HASHTABLE_BITS 3
+#define KHO_NODE_INIT						\
+	{							\
+		.props = HASHTABLE_INIT(KHO_HASHTABLE_BITS),	\
+		.nodes = HASHTABLE_INIT(KHO_HASHTABLE_BITS),	\
+	}
+
+struct kho_node {
+	struct hlist_node hlist;
+
+	const char *name;
+	DECLARE_HASHTABLE(props, KHO_HASHTABLE_BITS);
+	DECLARE_HASHTABLE(nodes, KHO_HASHTABLE_BITS);
+
+	struct list_head list;
+	bool visited;
+};
+
+#ifdef CONFIG_KEXEC_HANDOVER
+bool kho_is_enabled(void);
+void kho_init_node(struct kho_node *node);
+int kho_add_node(struct kho_node *parent, const char *name,
+		 struct kho_node *child);
+struct kho_node *kho_remove_node(struct kho_node *parent, const char *name);
+int kho_add_prop(struct kho_node *node, const char *key, const void *val,
+		 u32 size);
+void *kho_remove_prop(struct kho_node *node, const char *key, u32 *size);
+int kho_add_string_prop(struct kho_node *node, const char *key,
+			const char *val);
+
+int register_kho_notifier(struct notifier_block *nb);
+int unregister_kho_notifier(struct notifier_block *nb);
+
+void kho_memory_init(void);
+#else
+static inline bool kho_is_enabled(void)
+{
+	return false;
+}
+
+static inline void kho_init_node(struct kho_node *node)
+{
+}
+
+static inline int kho_add_node(struct kho_node *parent, const char *name,
+			       struct kho_node *child)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline struct kho_node *kho_remove_node(struct kho_node *parent,
+					       const char *name)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+
+static inline int kho_add_prop(struct kho_node *node, const char *key,
+			       const void *val, u32 size)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline void *kho_remove_prop(struct kho_node *node, const char *key,
+				    u32 *size)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+
+static inline int kho_add_string_prop(struct kho_node *node, const char *key,
+				      const char *val)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int register_kho_notifier(struct notifier_block *nb)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int unregister_kho_notifier(struct notifier_block *nb)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline void kho_memory_init(void)
+{
+}
+#endif /* CONFIG_KEXEC_HANDOVER */
+
+#endif /* LINUX_KEXEC_HANDOVER_H */

diff --git a/kernel/Makefile b/kernel/Makefile
index 87866b037fbe..cef5377c25cd 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -75,6 +75,7 @@ obj-$(CONFIG_CRASH_DUMP) += crash_core.o
 obj-$(CONFIG_KEXEC) += kexec.o
 obj-$(CONFIG_KEXEC_FILE) += kexec_file.o
 obj-$(CONFIG_KEXEC_ELF) += kexec_elf.o
+obj-$(CONFIG_KEXEC_HANDOVER) += kexec_handover.o
 obj-$(CONFIG_BACKTRACE_SELF_TEST) += backtracetest.o
 obj-$(CONFIG_COMPAT) += compat.o
 obj-$(CONFIG_CGROUPS) += cgroup/

diff --git a/kernel/kexec_handover.c b/kernel/kexec_handover.c
new file mode 100644
index 000000000000..df0d9debbb64
--- /dev/null
+++ b/kernel/kexec_handover.c
@@ -0,0 +1,865 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * kexec_handover.c - kexec handover metadata processing
+ * Copyright (C) 2023 Alexander Graf
+ * Copyright (C) 2025 Microsoft Corporation, Mike Rapoport
+ * Copyright (C) 2024 Google LLC
+ */
+
+#define pr_fmt(fmt) "KHO: " fmt
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+/*
+ * KHO is tightly coupled with mm init and needs access to some of mm
+ * internal APIs.
+ */
+#include "../mm/internal.h"
+#include "kexec_internal.h"
+
+static bool kho_enable __ro_after_init;
+
+bool kho_is_enabled(void)
+{
+	return kho_enable;
+}
+EXPORT_SYMBOL_GPL(kho_is_enabled);
+
+static int __init kho_parse_enable(char *p)
+{
+	return kstrtobool(p, &kho_enable);
+}
+early_param("kho", kho_parse_enable);
+
+/*
+ * With KHO enabled, memory can become fragmented because KHO regions may
+ * be anywhere in physical address space. The scratch regions give us
+ * safe zones that we will never see KHO allocations from. This is where we
+ * can later safely load our new kexec images into and then use the scratch
+ * area for early allocations that happen before page allocator is
+ * initialized.
+ */
+static struct kho_scratch *kho_scratch;
+static unsigned int kho_scratch_cnt;
+
+static struct dentry *debugfs_root;
+
+struct kho_out {
+	struct blocking_notifier_head chain_head;
+
+	struct debugfs_blob_wrapper fdt_wrapper;
+	struct dentry *fdt_file;
+	struct dentry *dir;
+
+	struct rw_semaphore tree_lock;
+	struct kho_node root;
+
+	void *fdt;
+	u64 fdt_max;
+};
+
+static struct kho_out kho_out = {
+	.chain_head = BLOCKING_NOTIFIER_INIT(kho_out.chain_head),
+	.tree_lock = __RWSEM_INITIALIZER(kho_out.tree_lock),
+	.root = KHO_NODE_INIT,
+	.fdt_max = 10 * SZ_1M,
+};
+
+int register_kho_notifier(struct notifier_block *nb)
+{
+	return blocking_notifier_chain_register(&kho_out.chain_head, nb);
+}
+EXPORT_SYMBOL_GPL(register_kho_notifier);
+
+int unregister_kho_notifier(struct notifier_block *nb)
+{
+	return blocking_notifier_chain_unregister(&kho_out.chain_head, nb);
+}
+EXPORT_SYMBOL_GPL(unregister_kho_notifier);
+
+/* Helper functions for KHO state tree */
+
+struct kho_prop {
+	struct hlist_node hlist;
+
+	const char *key;
+	const void *val;
+	u32 size;
+};
+
+static unsigned long strhash(const char *s)
+{
+	return xxhash(s, strlen(s), 1120);
+}
+
+void kho_init_node(struct kho_node *node)
+{
+	hash_init(node->props);
+	hash_init(node->nodes);
+}
+EXPORT_SYMBOL_GPL(kho_init_node);
+
+/**
+ * kho_add_node - add a child node to a parent node.
+ * @parent: parent node to add to.
+ * @name: name of the child node.
+ * @child: child node to be added to @parent with @name.
+ *
+ * If @parent is NULL, @child is added to the KHO state tree root node.
+ *
+ * @child must be a valid pointer through KHO FDT finalization.
+ * @name is duplicated and thus can have a short lifetime.
+ *
+ * Callers must use their own locking if there are concurrent accesses to
+ * @parent or @child.
+ *
+ * Return: 0 on success, 1 if @child is already in @parent with @name, or
+ * - -EOPNOTSUPP: KHO is not enabled in the kernel command line,
+ * - -ENOMEM: failed to duplicate @name,
+ * - -EBUSY: KHO state tree has been converted to FDT,
+ * - -EEXIST: another node of the same name has been added to the parent.
+ */
+int kho_add_node(struct kho_node *parent, const char *name,
+		 struct kho_node *child)
+{
+	unsigned long name_hash;
+	int ret = 0;
+	struct kho_node *node;
+	char *child_name;
+
+	if (!kho_enable)
+		return -EOPNOTSUPP;
+
+	if (!parent)
+		parent = &kho_out.root;
+
+	child_name = kstrdup(name, GFP_KERNEL);
+	if (!child_name)
+		return -ENOMEM;
+
+	name_hash = strhash(child_name);
+
+	if (parent == &kho_out.root)
+		down_write(&kho_out.tree_lock);
+	else
+		down_read(&kho_out.tree_lock);
+
+	if (kho_out.fdt) {
+		ret = -EBUSY;
+		goto out;
+	}
+
+	hash_for_each_possible(parent->nodes, node, hlist, name_hash) {
+		if (!strcmp(node->name, child_name)) {
+			ret = node == child ? 1 : -EEXIST;
+			break;
+		}
+	}
+
+	if (ret == 0) {
+		child->name = child_name;
+		hash_add(parent->nodes, &child->hlist, name_hash);
+	}
+
+out:
+	if (parent == &kho_out.root)
+		up_write(&kho_out.tree_lock);
+	else
+		up_read(&kho_out.tree_lock);
+
+	if (ret)
+		kfree(child_name);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kho_add_node);
+
+/**
+ * kho_remove_node - remove a child node from a parent node.
+ * @parent: parent node to look up @name in.
+ * @name: name of the child node.
+ *
+ * If @parent is NULL, the KHO state tree root node is looked up.
+ *
+ * Callers must use their own locking if there are concurrent accesses to
+ * @parent or @child.
+ *
+ * Return: the pointer to the child node on success, or an error pointer,
+ * - -EOPNOTSUPP: KHO is not enabled in the kernel command line,
+ * - -ENOENT: no node named @name is found,
+ * - -EBUSY: KHO state tree has been converted to FDT.
+ */
+struct kho_node *kho_remove_node(struct kho_node *parent, const char *name)
+{
+	struct kho_node *child, *ret = ERR_PTR(-ENOENT);
+	unsigned long name_hash;
+
+	if (!kho_enable)
+		return ERR_PTR(-EOPNOTSUPP);
+
+	if (!parent)
+		parent = &kho_out.root;
+
+	name_hash = strhash(name);
+
+	if (parent == &kho_out.root)
+		down_write(&kho_out.tree_lock);
+	else
+		down_read(&kho_out.tree_lock);
+
+	if (kho_out.fdt) {
+		ret = ERR_PTR(-EBUSY);
+		goto out;
+	}
+
+	hash_for_each_possible(parent->nodes, child, hlist, name_hash) {
+		if (!strcmp(child->name, name)) {
+			ret = child;
+			break;
+		}
+	}
+
+	if (!IS_ERR(ret)) {
+		hash_del(&ret->hlist);
+		kfree(ret->name);
+		ret->name = NULL;
+	}
+
+out:
+	if (parent == &kho_out.root)
+		up_write(&kho_out.tree_lock);
+	else
+		up_read(&kho_out.tree_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kho_remove_node);
+
+/**
+ * kho_add_prop - add a property to a node.
+ * @node: KHO node to add the property to.
+ * @key: key of the property.
+ * @val: pointer to the property value.
+ * @size: size of the property value in bytes.
+ *
+ * @val and @key must be valid pointers through KHO FDT finalization.
+ * Generally @key is a string literal with static lifetime.
+ *
+ * Callers must use their own locking if there are concurrent accesses to @node.
+ *
+ * Return: 0 on success, 1 if the value is already added with @key, or
+ * - -ENOMEM: failed to allocate memory,
+ * - -EBUSY: KHO state tree has been converted to FDT,
+ * - -EEXIST: another property of the same key exists.
+ */
+int kho_add_prop(struct kho_node *node, const char *key, const void *val,
+		 u32 size)
+{
+	unsigned long key_hash;
+	int ret = 0;
+	struct kho_prop *prop, *p;
+
+	key_hash = strhash(key);
+	prop = kmalloc(sizeof(*prop), GFP_KERNEL);
+	if (!prop)
+		return -ENOMEM;
+
+	prop->key = key;
+	prop->val = val;
+	prop->size = size;
+
+	down_read(&kho_out.tree_lock);
+	if (kho_out.fdt) {
+		ret = -EBUSY;
+		goto out;
+	}
+
+	hash_for_each_possible(node->props, p, hlist, key_hash) {
+		if (!strcmp(p->key, key)) {
+			ret = (p->val == val && p->size == size) ? 1 : -EEXIST;
+			break;
+		}
+	}
+
+	if (!ret)
+		hash_add(node->props, &prop->hlist, key_hash);
+
+out:
+	up_read(&kho_out.tree_lock);
+
+	if (ret)
+		kfree(prop);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kho_add_prop);
+
+/**
+ * kho_add_string_prop - add a string property to a node.
+ *
+ * See kho_add_prop() for details.
+ */
+int kho_add_string_prop(struct kho_node *node, const char *key, const char *val)
+{
+	return kho_add_prop(node, key, val, strlen(val) + 1);
+}
+EXPORT_SYMBOL_GPL(kho_add_string_prop);
+
+/**
+ * kho_remove_prop - remove a property from a node.
+ * @node: KHO node to remove the property from.
+ * @key: key of the property.
+ * @size: if non-NULL, the property size is stored in it on success.
+ *
+ * Callers must use their own locking if there are concurrent accesses to @node.
+ *
+ * Return: the pointer to the property value, or
+ * - -EBUSY: KHO state tree has been converted to FDT,
+ * - -ENOENT: no property with @key is found.
+ */
+void *kho_remove_prop(struct kho_node *node, const char *key, u32 *size)
+{
+	struct kho_prop *p, *prop = NULL;
+	unsigned long key_hash;
+	void *ret = ERR_PTR(-ENOENT);
+
+	key_hash = strhash(key);
+
+	down_read(&kho_out.tree_lock);
+
+	if (kho_out.fdt) {
+		ret = ERR_PTR(-EBUSY);
+		goto out;
+	}
+
+	hash_for_each_possible(node->props, p, hlist, key_hash) {
+		if (!strcmp(p->key, key)) {
+			prop = p;
+			break;
+		}
+	}
+
+	if (prop) {
+		ret = (void *)prop->val;
+		if (size)
+			*size = prop->size;
+		hash_del(&prop->hlist);
+		kfree(prop);
+	}
+
+out:
+	up_read(&kho_out.tree_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kho_remove_prop);
+
+static int kho_out_update_debugfs_fdt(void)
+{
+	int err = 0;
+
+	if (kho_out.fdt) {
+		kho_out.fdt_wrapper.data = kho_out.fdt;
+		kho_out.fdt_wrapper.size = fdt_totalsize(kho_out.fdt);
+		kho_out.fdt_file = debugfs_create_blob("fdt", 0400, kho_out.dir,
+						       &kho_out.fdt_wrapper);
+		if (IS_ERR(kho_out.fdt_file))
+			err = -ENOENT;
+	} else {
+		debugfs_remove(kho_out.fdt_file);
+	}
+
+	return err;
+}
+
+static int kho_unfreeze(void)
+{
+	int err;
+	void *fdt;
+
+	down_write(&kho_out.tree_lock);
+	fdt = kho_out.fdt;
+	kho_out.fdt = NULL;
+	up_write(&kho_out.tree_lock);
+
+	if (fdt)
+		kvfree(fdt);
+
+	err = blocking_notifier_call_chain(&kho_out.chain_head,
+					   KEXEC_KHO_UNFREEZE, NULL);
+
+	return notifier_to_errno(err);
+}
+
+static int kho_flatten_tree(void *fdt)
+{
+	int iter, err = 0;
+	struct kho_node *node, *sub_node;
+	struct list_head *ele;
+	struct kho_prop *prop;
+	LIST_HEAD(stack);
+
+	kho_out.root.visited = false;
+	list_add(&kho_out.root.list, &stack);
+
+	for (ele = stack.next; !list_is_head(ele, &stack); ele = stack.next) {
+		node = list_entry(ele, struct kho_node, list);
+
+		if (node->visited) {
+			err = fdt_end_node(fdt);
+			if (err)
+				return err;
+			list_del_init(ele);
+			continue;
+		}
+
+		err = fdt_begin_node(fdt, node->name);
+		if (err)
+			return err;
+
+		hash_for_each(node->props, iter, prop, hlist) {
+			err = fdt_property(fdt, prop->key, prop->val,
+					   prop->size);
+			if (err)
+				return err;
+		}
+
+		hash_for_each(node->nodes, iter, sub_node, hlist) {
+			sub_node->visited = false;
+			list_add(&sub_node->list, &stack);
+		}
+
+		node->visited = true;
+	}
+
+	return 0;
+}
+
+static int kho_convert_tree(void *buffer, int size)
+{
+	void *fdt = buffer;
+	int err = 0;
+
+	err = fdt_create(fdt, size);
+	if (err)
+		goto out;
+
+	err = fdt_finish_reservemap(fdt);
+	if (err)
+		goto out;
+
+	err = kho_flatten_tree(fdt);
+	if (err)
+		goto out;
+
+	err = fdt_finish(fdt);
+	if (err)
+		goto out;
+
+	err = fdt_check_header(fdt);
+	if (err)
+		goto out;
+
+out:
+	if (err) {
+		pr_err("failed to flatten state tree: %d\n", err);
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static int kho_finalize(void)
+{
+	int err = 0;
+	void *fdt;
+
+	fdt = kvmalloc(kho_out.fdt_max, GFP_KERNEL);
+	if (!fdt)
+		return -ENOMEM;
+
+	err = blocking_notifier_call_chain(&kho_out.chain_head,
+					   KEXEC_KHO_FINALIZE, NULL);
+	err = notifier_to_errno(err);
+	if (err)
+		goto unfreeze;
+
+	down_write(&kho_out.tree_lock);
+	kho_out.fdt = fdt;
+	up_write(&kho_out.tree_lock);
+
+	err = kho_convert_tree(fdt, kho_out.fdt_max);
+
+unfreeze:
+	if (err) {
+		int abort_err;
+
+		pr_err("Failed to convert KHO state tree: %d\n", err);
+
+		abort_err = kho_unfreeze();
+		if (abort_err)
+			pr_err("Failed to abort KHO state tree: %d\n",
+			       abort_err);
+	}
+
+	return err;
+}
+
+/* Handling for debug/kho/out */
+static int kho_out_finalize_get(void *data, u64 *val)
+{
+	*val = !!kho_out.fdt;
+
+	return 0;
+}
+
+static int kho_out_finalize_set(void *data, u64 _val)
+{
+	int ret = 0;
+	bool val = !!_val;
+
+	if (!kexec_trylock())
+		return -EBUSY;
+
+	if (val == !!kho_out.fdt) {
+		if (kho_out.fdt)
+			ret = -EEXIST;
+		else
+			ret = -ENOENT;
+		goto unlock;
+	}
+
+	if (val)
+		ret = kho_finalize();
+	else
+		ret = kho_unfreeze();
+
+	if (ret)
+		goto unlock;
+
+	ret = kho_out_update_debugfs_fdt();
+
+unlock:
+	kexec_unlock();
+	return ret;
+}
+
+DEFINE_DEBUGFS_ATTRIBUTE(fops_kho_out_finalize, kho_out_finalize_get,
+			 kho_out_finalize_set, "%llu\n");
+
+static int kho_out_fdt_max_get(void *data, u64 *val)
+{
+	*val = kho_out.fdt_max;
+
+	return 0;
+}
+
+static int kho_out_fdt_max_set(void *data, u64 val)
+{
+	int ret = 0;
+
+	if (!kexec_trylock())
+		return -EBUSY;
+
+	/* FDT already exists, it's too late to change fdt_max */
+	if (kho_out.fdt) {
+		ret = -EBUSY;
+		goto unlock;
+	}
+
+	kho_out.fdt_max = val;
+
+unlock:
+	kexec_unlock();
+	return ret;
+}
+
+DEFINE_DEBUGFS_ATTRIBUTE(fops_kho_out_fdt_max, kho_out_fdt_max_get,
+			 kho_out_fdt_max_set, "%llu\n");
+
+static int scratch_phys_show(struct seq_file *m, void *v)
+{
+	for (int i = 0; i < kho_scratch_cnt; i++)
+		seq_printf(m, "0x%llx\n", kho_scratch[i].addr);
+
+	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(scratch_phys);
+
+static int scratch_len_show(struct seq_file *m, void *v)
+{
+	for (int i = 0; i < kho_scratch_cnt; i++)
+		seq_printf(m, "0x%llx\n", kho_scratch[i].size);
+
+	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(scratch_len);
+
+static __init int kho_out_debugfs_init(void)
+{
+	struct dentry *dir, *f;
+
+	dir = debugfs_create_dir("out", debugfs_root);
+	if (IS_ERR(dir))
+		return -ENOMEM;
+
+	f = debugfs_create_file("scratch_phys", 0400, dir, NULL,
+				&scratch_phys_fops);
+	if (IS_ERR(f))
+		goto err_rmdir;
+
+	f = debugfs_create_file("scratch_len", 0400, dir, NULL,
+				&scratch_len_fops);
+	if (IS_ERR(f))
+		goto err_rmdir;
+
+	f = debugfs_create_file("fdt_max", 0600, dir, NULL,
+				&fops_kho_out_fdt_max);
+	if (IS_ERR(f))
+		goto err_rmdir;
+
+	f = debugfs_create_file("finalize", 0600, dir, NULL,
+				&fops_kho_out_finalize);
+	if (IS_ERR(f))
+		goto err_rmdir;
+
+	kho_out.dir = dir;
+	return 0;
+
+err_rmdir:
+	debugfs_remove_recursive(dir);
+	return -ENOENT;
+}
+
+static __init int kho_init(void)
+{
+	int err;
+
+	if (!kho_enable)
+		return 0;
+
+	kho_out.root.name = "";
+	err = kho_add_string_prop(&kho_out.root, "compatible", "kho-v1");
+	if (err)
+		goto err_free_scratch;
+
+	debugfs_root = debugfs_create_dir("kho", NULL);
+	if (IS_ERR(debugfs_root)) {
+		err = -ENOENT;
+		goto err_free_scratch;
+	}
+
+	err = kho_out_debugfs_init();
+	if (err)
+		goto err_free_scratch;
+
+	for (int i = 0; i < kho_scratch_cnt; i++) {
+		unsigned long base_pfn = PHYS_PFN(kho_scratch[i].addr);
+		unsigned long count = kho_scratch[i].size >> PAGE_SHIFT;
+		unsigned long pfn;
+
+		for (pfn = base_pfn; pfn < base_pfn + count;
+		     pfn += pageblock_nr_pages)
+			init_cma_reserved_pageblock(pfn_to_page(pfn));
+	}
+
+	return 0;
+
+err_free_scratch:
+	for (int i = 0; i < kho_scratch_cnt; i++) {
+		void *start = __va(kho_scratch[i].addr);
+		void *end = start + kho_scratch[i].size;
+
+		free_reserved_area(start, end, -1, "");
+	}
+	kho_enable = false;
+	return err;
+}
+late_initcall(kho_init);
+
+/*
+ * The scratch areas are scaled by default as percent of memory allocated from
+ * memblock. A user can override the scale with command line parameter:
+ *
+ *	kho_scratch=N%
+ *
+ * It is also possible to explicitly define size for a lowmem, a global and
+ * per-node scratch areas:
+ *
+ *	kho_scratch=l[KMG],n[KMG],m[KMG]
+ *
+ * The explicit size definition takes precedence over scale definition.
+ */
+static unsigned int scratch_scale __initdata = 200;
+static phys_addr_t scratch_size_global __initdata;
+static phys_addr_t scratch_size_pernode __initdata;
+static phys_addr_t scratch_size_lowmem __initdata;
+
+static int __init kho_parse_scratch_size(char *p)
+{
+	unsigned long size, size_pernode, size_global;
+	char *endptr, *oldp = p;
+
+	if (!p)
+		return -EINVAL;
+
+	size = simple_strtoul(p, &endptr, 0);
+	if (*endptr == '%') {
+		scratch_scale = size;
+		pr_notice("scratch scale is %d percent\n", scratch_scale);
+	} else {
+		size = memparse(p, &p);
+		if (!size || p == oldp)
+			return -EINVAL;
+
+		if (*p != ',')
+			return -EINVAL;
+
+		oldp = p;
+		size_global = memparse(p + 1, &p);
+		if (!size_global || p == oldp)
+			return -EINVAL;
+
+		if (*p != ',')
+			return -EINVAL;
+
+		size_pernode = memparse(p + 1, &p);
+		if (!size_pernode)
+			return -EINVAL;
+
+		scratch_size_lowmem = size;
+		scratch_size_global = size_global;
+		scratch_size_pernode = size_pernode;
+		scratch_scale = 0;
+
+		pr_notice("scratch areas: lowmem: %lluMB global: %lluMB pernode: %lluMB\n",
+			  (u64)(scratch_size_lowmem >> 20),
+			  (u64)(scratch_size_global >> 20),
+			  (u64)(scratch_size_pernode >> 20));
+	}
+
+	return 0;
+}
+early_param("kho_scratch", kho_parse_scratch_size);
+
+static void __init scratch_size_update(void)
+{
+	phys_addr_t size;
+
+	if (!scratch_scale)
+		return;
+
+	size = memblock_reserved_kern_size(ARCH_LOW_ADDRESS_LIMIT,
+					   NUMA_NO_NODE);
+	size = size * scratch_scale / 100;
+	scratch_size_lowmem = round_up(size, CMA_MIN_ALIGNMENT_BYTES);
+
+	size = memblock_reserved_kern_size(MEMBLOCK_ALLOC_ANYWHERE,
+					   NUMA_NO_NODE);
+	size = size * scratch_scale / 100 - scratch_size_lowmem;
+	scratch_size_global = round_up(size, CMA_MIN_ALIGNMENT_BYTES);
+}
+
+static phys_addr_t __init scratch_size_node(int nid)
+{
+	phys_addr_t size;
+
+	if (scratch_scale) {
+		size = memblock_reserved_kern_size(MEMBLOCK_ALLOC_ANYWHERE,
+						   nid);
+		size = size * scratch_scale / 100;
+	} else {
+		size = scratch_size_pernode;
+	}
+
+	return round_up(size, CMA_MIN_ALIGNMENT_BYTES);
+}
+
+/**
+ * kho_reserve_scratch - Reserve a contiguous chunk of memory for kexec
+ *
+ * With KHO we can preserve arbitrary pages in the system. To ensure we still
+ * have a large contiguous region of memory when we search the physical address
+ * space for target memory, let's make sure we always have a large CMA region
+ * active. This CMA region will only be used for movable pages which are not a
+ * problem for us during KHO because we can just move them somewhere else.
+ */
+static void __init kho_reserve_scratch(void)
+{
+	phys_addr_t addr, size;
+	int nid, i = 0;
+
+	if (!kho_enable)
+		return;
+
+	scratch_size_update();
+
+	/* FIXME: deal with node hot-plug/remove */
+	kho_scratch_cnt = num_online_nodes() + 2;
+	size = kho_scratch_cnt * sizeof(*kho_scratch);
+	kho_scratch = memblock_alloc(size, PAGE_SIZE);
+	if (!kho_scratch)
+		goto err_disable_kho;
+
+	/*
+	 * reserve scratch area in low memory for lowmem allocations in the
+	 * next kernel
+	 */
+	size = scratch_size_lowmem;
+	addr = memblock_phys_alloc_range(size, CMA_MIN_ALIGNMENT_BYTES, 0,
+					 ARCH_LOW_ADDRESS_LIMIT);
+	if (!addr)
+		goto err_free_scratch_desc;
+
+	kho_scratch[i].addr = addr;
+	kho_scratch[i].size = size;
+	i++;
+
+	/* reserve large contiguous area for allocations without nid */
+	size = scratch_size_global;
+	addr = memblock_phys_alloc(size, CMA_MIN_ALIGNMENT_BYTES);
+	if (!addr)
+		goto err_free_scratch_areas;
+
+	kho_scratch[i].addr = addr;
+	kho_scratch[i].size = size;
+	i++;
+
+	for_each_online_node(nid) {
+		size = scratch_size_node(nid);
+		addr = memblock_alloc_range_nid(size, CMA_MIN_ALIGNMENT_BYTES,
+						0, MEMBLOCK_ALLOC_ACCESSIBLE,
+						nid, true);
+		if (!addr)
+			goto err_free_scratch_areas;
+
+		kho_scratch[i].addr = addr;
+		kho_scratch[i].size = size;
+		i++;
+	}
+
+	return;
+
+err_free_scratch_areas:
+	for (i--; i >= 0; i--)
+		memblock_phys_free(kho_scratch[i].addr, kho_scratch[i].size);
+err_free_scratch_desc:
+	memblock_free(kho_scratch, kho_scratch_cnt * sizeof(*kho_scratch));
+err_disable_kho:
+	kho_enable = false;
+}
+
+void __init kho_memory_init(void)
+{
+	kho_reserve_scratch();
+}

diff --git a/mm/mm_init.c b/mm/mm_init.c
index 04441c258b05..757659b7a26b 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -30,6 +30,7 @@
 #include
 #include
 #include
+#include
 #include "internal.h"
 #include "slab.h"
 #include "shuffle.h"
@@ -2661,6 +2662,13 @@ void __init mm_core_init(void)
 	report_meminit();
 	kmsan_init_shadow();
 	stack_depot_early_init();
+
+	/*
+	 * KHO memory setup must happen while memblock is still active, but
+	 * as close as possible to buddy initialization
+	 */
+	kho_memory_init();
+
 	mem_init();
 	kmem_cache_init();
 	/*

From patchwork Thu Mar 20 01:55:43 2025
X-Patchwork-Submitter: Changyuan Lyu
X-Patchwork-Id: 14023332
Date: Wed, 19 Mar 2025 18:55:43 -0700
In-Reply-To: <20250320015551.2157511-1-changyuanl@google.com>
References: <20250320015551.2157511-1-changyuanl@google.com>
Message-ID: <20250320015551.2157511-9-changyuanl@google.com>
Subject: [PATCH v5 08/16] kexec: add KHO parsing support
From: Changyuan Lyu
To: linux-kernel@vger.kernel.org

From: Alexander Graf

When we have a KHO kexec, we get an FDT blob and scratch region to
populate the state of the system. Provide helper functions that allow
architecture code to easily handle memory reservations based on them
and give device drivers visibility into the KHO FDT and memory
reservations so they can recover their own state.

Signed-off-by: Alexander Graf
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport (Microsoft)
Co-developed-by: Changyuan Lyu
Signed-off-by: Changyuan Lyu
---
 include/linux/kexec_handover.h |  48 ++++++
 kernel/kexec_handover.c        | 302 ++++++++++++++++++++++++++++++++-
 mm/memblock.c                  |   1 +
 3 files changed, 350 insertions(+), 1 deletion(-)

diff --git a/include/linux/kexec_handover.h b/include/linux/kexec_handover.h
index 9cd9ad31e2d1..c665ff6cd728 100644
--- a/include/linux/kexec_handover.h
+++ b/include/linux/kexec_handover.h
@@ -35,6 +35,10 @@ struct kho_node {
 	bool visited;
 };
 
+struct kho_in_node {
+	int offset;
+};
+
 #ifdef CONFIG_KEXEC_HANDOVER
 bool kho_is_enabled(void);
 void kho_init_node(struct kho_node *node);
@@ -51,6 +55,19 @@ int register_kho_notifier(struct notifier_block *nb);
 int unregister_kho_notifier(struct notifier_block *nb);
 
 void kho_memory_init(void);
+
+void kho_populate(phys_addr_t handover_fdt_phys, phys_addr_t scratch_phys,
+		  u64 scratch_len);
+
+int kho_get_node(const struct kho_in_node *parent, const char *name,
+		 struct kho_in_node *child);
+int kho_get_nodes(const struct kho_in_node *parent,
+		  int (*func)(const char *, const struct kho_in_node *, void *),
+		  void *data);
+const void *kho_get_prop(const struct kho_in_node *node, const char *key,
+			 u32 *size);
+int kho_node_check_compatible(const struct kho_in_node *node,
+			      const char *compatible);
 #else
 static inline bool kho_is_enabled(void)
 {
@@ -104,6 +121,37 @@ static inline int unregister_kho_notifier(struct notifier_block *nb)
 static inline void kho_memory_init(void)
 {
 }
+
+static inline void kho_populate(phys_addr_t handover_fdt_phys,
+				phys_addr_t scratch_phys, u64 scratch_len)
+{
+}
+
+static inline int kho_get_node(const struct kho_in_node *parent,
+			       const char *name, struct kho_in_node *child)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int kho_get_nodes(const struct kho_in_node *parent,
+				int (*func)(const char *,
+					    const struct kho_in_node *, void *),
+				void *data)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline const void *kho_get_prop(const struct kho_in_node *node,
+				       const char *key, u32 *size)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+
+static inline int kho_node_check_compatible(const struct kho_in_node *node,
+					    const char *compatible)
+{
+	return -EOPNOTSUPP;
+}
 #endif /* CONFIG_KEXEC_HANDOVER */
 
 #endif /* LINUX_KEXEC_HANDOVER_H */

diff --git a/kernel/kexec_handover.c b/kernel/kexec_handover.c
index df0d9debbb64..6ebad2f023f9 100644
--- a/kernel/kexec_handover.c
+++ b/kernel/kexec_handover.c
@@ -73,6 +73,20 @@ static struct kho_out kho_out = {
 	.fdt_max = 10 * SZ_1M,
 };
 
+struct kho_in {
+	struct debugfs_blob_wrapper fdt_wrapper;
+	struct dentry *dir;
+	phys_addr_t kho_scratch_phys;
+	phys_addr_t fdt_phys;
+};
+
+static struct kho_in kho_in;
+
+static const void *kho_get_fdt(void)
+{
+	return kho_in.fdt_phys ?
phys_to_virt(kho_in.fdt_phys) : NULL; +} + int register_kho_notifier(struct notifier_block *nb) { return blocking_notifier_chain_register(&kho_out.chain_head, nb); @@ -85,6 +99,144 @@ int unregister_kho_notifier(struct notifier_block *nb) } EXPORT_SYMBOL_GPL(unregister_kho_notifier); +/** + * kho_get_node - retrieve a node saved in the KHO FDT. + * @parent: the parent node to look under. + * @name: the name of the node to look for. + * @child: if a node named @name is found under @parent, it is stored in @child. + * + * If @parent is NULL, this function looks up @name under the KHO root node. + * + * Return: 0 on success, with @child populated; error code on failure. + */ +int kho_get_node(const struct kho_in_node *parent, const char *name, + struct kho_in_node *child) +{ + int parent_offset = 0; + int offset = 0; + const void *fdt = kho_get_fdt(); + + if (!fdt) + return -ENOENT; + + if (!child) + return -EINVAL; + + if (parent) + parent_offset = parent->offset; + + offset = fdt_subnode_offset(fdt, parent_offset, name); + if (offset < 0) + return -ENOENT; + + child->offset = offset; + return 0; +} +EXPORT_SYMBOL_GPL(kho_get_node); + +/** + * kho_get_nodes - iterate over all direct child nodes. + * @parent: the parent node whose child nodes are iterated over. + * @func: a function pointer to be called on each child node. + * @data: auxiliary data to be passed to @func. + * + * For every direct child node of @parent, @func is called with the child node + * name, the child node (a struct kho_in_node *), and @data. + * + * If @parent is NULL, this function iterates over the child nodes of the KHO + * root node. + * + * Return: 0 on success, error code on failure.
+ */ +int kho_get_nodes(const struct kho_in_node *parent, + int (*func)(const char *, const struct kho_in_node *, void *), + void *data) +{ + int parent_offset = 0; + struct kho_in_node child; + const char *name; + int ret = 0; + const void *fdt = kho_get_fdt(); + + if (!fdt) + return -ENOENT; + + if (parent) + parent_offset = parent->offset; + + fdt_for_each_subnode(child.offset, fdt, parent_offset) { + if (child.offset < 0) + return -EINVAL; + + name = fdt_get_name(fdt, child.offset, NULL); + + if (!name) + return -EINVAL; + + ret = func(name, &child, data); + + if (ret < 0) + break; + } + + return ret; +} +EXPORT_SYMBOL_GPL(kho_get_nodes); + +/** + * kho_get_prop - retrieve the property data stored in the KHO tree. + * @node: the node containing the property. + * @key: the key of the property. + * @size: a pointer to store the size of the data in bytes. + * + * Return: pointer to the property data, with its size in bytes stored in + * @size, or NULL on failure. + */ +const void *kho_get_prop(const struct kho_in_node *node, const char *key, + u32 *size) +{ + int offset = 0; + u32 s; + const void *fdt = kho_get_fdt(); + + if (!fdt) + return NULL; + + if (node) + offset = node->offset; + + if (!size) + size = &s; + + return fdt_getprop(fdt, offset, key, size); +} +EXPORT_SYMBOL_GPL(kho_get_prop); + +/** + * kho_node_check_compatible - check a node's compatible property. + * @node: the node to check. + * @compatible: the compatible string. + * + * Wrapper of fdt_node_check_compatible(). + * + * Return: 0 if @compatible is in the node's "compatible" list, or + * error code on failure. + */ +int kho_node_check_compatible(const struct kho_in_node *node, + const char *compatible) +{ + int result = 0; + const void *fdt = kho_get_fdt(); + + if (!fdt) + return -ENOENT; + + result = fdt_node_check_compatible(fdt, node->offset, compatible); + + return result ?
-EINVAL : 0; +} +EXPORT_SYMBOL_GPL(kho_node_check_compatible); + /* Helper functions for KHO state tree */ struct kho_prop { @@ -605,6 +757,32 @@ static int scratch_len_show(struct seq_file *m, void *v) } DEFINE_SHOW_ATTRIBUTE(scratch_len); +/* Handling for debugfs/kho/in */ +static __init int kho_in_debugfs_init(const void *fdt) +{ + struct dentry *file; + int err; + + kho_in.dir = debugfs_create_dir("in", debugfs_root); + if (IS_ERR(kho_in.dir)) + return PTR_ERR(kho_in.dir); + + kho_in.fdt_wrapper.size = fdt_totalsize(fdt); + kho_in.fdt_wrapper.data = (void *)fdt; + file = debugfs_create_blob("fdt", 0400, kho_in.dir, + &kho_in.fdt_wrapper); + if (IS_ERR(file)) { + err = PTR_ERR(file); + goto err_rmdir; + } + + return 0; + +err_rmdir: + debugfs_remove(kho_in.dir); + return err; +} + static __init int kho_out_debugfs_init(void) { struct dentry *dir, *f; @@ -644,6 +822,7 @@ static __init int kho_out_debugfs_init(void) static __init int kho_init(void) { int err; + const void *fdt = kho_get_fdt(); if (!kho_enable) return 0; @@ -663,6 +842,21 @@ static __init int kho_init(void) if (err) goto err_free_scratch; + if (fdt) { + err = kho_in_debugfs_init(fdt); + /* + * Failure to create /sys/kernel/debug/kho/in does not prevent + * reviving state from KHO and setting up KHO for the next + * kexec. + */ + if (err) + pr_err("failed exposing handover FDT in debugfs\n"); + + kho_scratch = __va(kho_in.kho_scratch_phys); + + return 0; + } + for (int i = 0; i < kho_scratch_cnt; i++) { unsigned long base_pfn = PHYS_PFN(kho_scratch[i].addr); unsigned long count = kho_scratch[i].size >> PAGE_SHIFT; @@ -859,7 +1053,113 @@ static void __init kho_reserve_scratch(void) kho_enable = false; } +static void __init kho_release_scratch(void) +{ + phys_addr_t start, end; + u64 i; + + memmap_init_kho_scratch_pages(); + + /* + * Mark scratch mem as CMA before we return it. That way we + * ensure that no kernel allocations happen on it. That means + * we can reuse it as scratch memory again later. 
+ */ + __for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE, + MEMBLOCK_KHO_SCRATCH, &start, &end, NULL) { + ulong start_pfn = pageblock_start_pfn(PFN_DOWN(start)); + ulong end_pfn = pageblock_align(PFN_UP(end)); + ulong pfn; + + for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) + set_pageblock_migratetype(pfn_to_page(pfn), + MIGRATE_CMA); + } +} + void __init kho_memory_init(void) { - kho_reserve_scratch(); + if (!kho_get_fdt()) + kho_reserve_scratch(); + else + kho_release_scratch(); +} + +void __init kho_populate(phys_addr_t handover_fdt_phys, + phys_addr_t scratch_phys, u64 scratch_len) +{ + void *handover_fdt; + struct kho_scratch *scratch; + u32 fdt_size = 0; + + /* Determine the real size of the FDT */ + handover_fdt = + early_memremap(handover_fdt_phys, sizeof(struct fdt_header)); + if (!handover_fdt) { + pr_warn("setup: failed to memremap kexec FDT (0x%llx)\n", + handover_fdt_phys); + return; + } + + if (fdt_check_header(handover_fdt)) { + pr_warn("setup: kexec handover FDT is invalid (0x%llx)\n", + handover_fdt_phys); + early_memunmap(handover_fdt, sizeof(struct fdt_header)); + return; + } + + fdt_size = fdt_totalsize(handover_fdt); + kho_in.fdt_phys = handover_fdt_phys; + + early_memunmap(handover_fdt, sizeof(struct fdt_header)); + + /* Reserve the DT so we can still access it in late boot */ + memblock_reserve(handover_fdt_phys, fdt_size); + + kho_in.kho_scratch_phys = scratch_phys; + kho_scratch_cnt = scratch_len / sizeof(*kho_scratch); + scratch = early_memremap(scratch_phys, scratch_len); + if (!scratch) { + pr_warn("setup: failed to memremap kexec scratch (0x%llx)\n", + scratch_phys); + return; + } + + /* + * We pass safe, contiguous blocks of memory to use for early boot + * purposes from the previous kernel so that we can resize the + * memblock array as needed.
+ */ + for (int i = 0; i < kho_scratch_cnt; i++) { + struct kho_scratch *area = &scratch[i]; + u64 size = area->size; + + memblock_add(area->addr, size); + + if (WARN_ON(memblock_mark_kho_scratch(area->addr, size))) { + pr_err("Kexec failed to mark the scratch region. Disabling KHO revival."); + kho_in.fdt_phys = 0; + scratch = NULL; + break; + } + pr_debug("Marked 0x%pa+0x%pa as scratch", &area->addr, &size); + } + + early_memunmap(scratch, scratch_len); + + if (!scratch) + return; + + memblock_reserve(scratch_phys, scratch_len); + + /* + * Now that we have a viable region of scratch memory, let's tell + * the memblock allocator to only use that for any allocations. + * That way we ensure that nothing scribbles over in-use data while + * we initialize the page tables which we will need to ingest all + * memory reservations from the previous kernel. + */ + memblock_set_kho_scratch_only(); + + pr_info("setup: Found kexec handover data. Will skip init for some devices\n"); } diff --git a/mm/memblock.c b/mm/memblock.c index d5d406a5160a..d28abf3def1c 100644 --- a/mm/memblock.c +++ b/mm/memblock.c @@ -2374,6 +2374,7 @@ void __init memblock_free_all(void) free_unused_memmap(); reset_all_zones_managed_pages(); + memblock_clear_kho_scratch_only(); pages = free_low_memory_core_early(); totalram_pages_add(pages); }
From patchwork Thu Mar 20 01:55:44 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Changyuan Lyu X-Patchwork-Id: 14023334 Date: Wed, 19 Mar 2025 18:55:44 -0700 In-Reply-To: <20250320015551.2157511-1-changyuanl@google.com> Mime-Version: 1.0 References: <20250320015551.2157511-1-changyuanl@google.com> X-Mailer: git-send-email 2.49.0.rc1.451.g8f38331e32-goog Message-ID: <20250320015551.2157511-10-changyuanl@google.com> Subject: [PATCH v5 09/16] kexec: enable KHO support for memory preservation From: Changyuan Lyu To: linux-kernel@vger.kernel.org Cc: graf@amazon.com, akpm@linux-foundation.org, luto@kernel.org, anthony.yznaga@oracle.com, arnd@arndb.de, ashish.kalra@amd.com, benh@kernel.crashing.org, bp@alien8.de, catalin.marinas@arm.com,
dave.hansen@linux.intel.com, dwmw2@infradead.org, ebiederm@xmission.com, mingo@redhat.com, jgowans@amazon.com, corbet@lwn.net, krzk@kernel.org, rppt@kernel.org, mark.rutland@arm.com, pbonzini@redhat.com, pasha.tatashin@soleen.com, hpa@zytor.com, peterz@infradead.org, ptyadav@amazon.de, robh+dt@kernel.org, robh@kernel.org, saravanak@google.com, skinsburskii@linux.microsoft.com, rostedt@goodmis.org, tglx@linutronix.de, thomas.lendacky@amd.com, usama.arif@bytedance.com, will@kernel.org, devicetree@vger.kernel.org, kexec@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org, Jason Gunthorpe , Changyuan Lyu From: "Mike Rapoport (Microsoft)" Introduce APIs allowing KHO users to preserve memory across kexec and get access to that memory after boot of the kexec'ed kernel:
kho_preserve_folio() - record a folio to be preserved over kexec
kho_restore_folio() - recreate the folio from the preserved memory
kho_preserve_phys() - record a physically contiguous range to be preserved over kexec
kho_restore_phys() - recreate order-0 pages corresponding to the preserved physical range
The memory preservations are tracked by two levels of xarrays to manage chunks of per-order 512 byte bitmaps. For instance, the entire 1G order of a 1TB x86 system would fit inside a single 512 byte bitmap. For order 0 allocations, each bitmap will cover 16M of address space. Thus, for 16G of memory at most 512K of bitmap memory will be needed for order 0.
At serialization time all bitmaps are recorded in a linked list of pages for the next kernel to process, and the physical address of the list is recorded in the KHO FDT. The next kernel then processes that list, reserves the memory ranges, and later, when a user requests a folio or a physical range, KHO restores the corresponding memory map entries. Suggested-by: Jason Gunthorpe Signed-off-by: Mike Rapoport (Microsoft) Co-developed-by: Changyuan Lyu Signed-off-by: Changyuan Lyu Signed-off-by: Pratyush Yadav --- include/linux/kexec_handover.h | 38 +++ kernel/kexec_handover.c | 486 ++++++++++++++++++++++++++++++++- 2 files changed, 522 insertions(+), 2 deletions(-) diff --git a/include/linux/kexec_handover.h b/include/linux/kexec_handover.h index c665ff6cd728..d52a7b500f4c 100644 --- a/include/linux/kexec_handover.h +++ b/include/linux/kexec_handover.h @@ -5,6 +5,7 @@ #include #include #include +#include struct kho_scratch { phys_addr_t addr; @@ -54,6 +55,13 @@ int kho_add_string_prop(struct kho_node *node, const char *key, int register_kho_notifier(struct notifier_block *nb); int unregister_kho_notifier(struct notifier_block *nb); +int kho_preserve_folio(struct folio *folio); +int kho_unpreserve_folio(struct folio *folio); +int kho_preserve_phys(phys_addr_t phys, size_t size); +int kho_unpreserve_phys(phys_addr_t phys, size_t size); +struct folio *kho_restore_folio(phys_addr_t phys); +void *kho_restore_phys(phys_addr_t phys, size_t size); + void kho_memory_init(void); void kho_populate(phys_addr_t handover_fdt_phys, phys_addr_t scratch_phys, @@ -118,6 +126,36 @@ static inline int unregister_kho_notifier(struct notifier_block *nb) return -EOPNOTSUPP; } +static inline int kho_preserve_folio(struct folio *folio) +{ + return -EOPNOTSUPP; +} + +static inline int kho_unpreserve_folio(struct folio *folio) +{ + return -EOPNOTSUPP; +} + +static inline int kho_preserve_phys(phys_addr_t phys, size_t size) +{ + return
-EOPNOTSUPP; +} + +static inline int kho_unpreserve_phys(phys_addr_t phys, size_t size) +{ + return -EOPNOTSUPP; +} + +static inline struct folio *kho_restore_folio(phys_addr_t phys) +{ + return NULL; +} + +static inline void *kho_restore_phys(phys_addr_t phys, size_t size) +{ + return NULL; +} + static inline void kho_memory_init(void) { } diff --git a/kernel/kexec_handover.c b/kernel/kexec_handover.c index 6ebad2f023f9..592563c21369 100644 --- a/kernel/kexec_handover.c +++ b/kernel/kexec_handover.c @@ -62,6 +62,13 @@ struct kho_out { struct rw_semaphore tree_lock; struct kho_node root; + /** + * Physical address of the first struct khoser_mem_chunk containing + * serialized data from struct kho_mem_track. + */ + phys_addr_t first_chunk_phys; + struct kho_node preserved_memory; + void *fdt; u64 fdt_max; }; @@ -70,6 +77,7 @@ static struct kho_out kho_out = { .chain_head = BLOCKING_NOTIFIER_INIT(kho_out.chain_head), .tree_lock = __RWSEM_INITIALIZER(kho_out.tree_lock), .root = KHO_NODE_INIT, + .preserved_memory = KHO_NODE_INIT, .fdt_max = 10 * SZ_1M, }; @@ -237,6 +245,461 @@ int kho_node_check_compatible(const struct kho_in_node *node, } EXPORT_SYMBOL_GPL(kho_node_check_compatible); +/* + * Keep track of memory that is to be preserved across KHO. + * + * The serializing side uses two levels of xarrays to manage chunks of per-order + * 512 byte bitmaps. For instance, the entire 1G order of a 1TB system would fit + * inside a single 512 byte bitmap. For order 0 allocations, each bitmap will + * cover 16M of address space. Thus, for 16G of memory at most 512K + * of bitmap memory will be needed for order 0. + * + * This approach is fully incremental; as the serialization progresses, folios + * can continue to be aggregated to the tracker. The final step, immediately + * prior to kexec, would serialize the xarray information into a linked list + * for the successor kernel to parse.
+ */ + +#define PRESERVE_BITS (512 * 8) + +struct kho_mem_phys_bits { + DECLARE_BITMAP(preserve, PRESERVE_BITS); +}; + +struct kho_mem_phys { + /* + * Points to kho_mem_phys_bits, a sparse bitmap array. Each bit is sized + * to order. + */ + struct xarray phys_bits; +}; + +struct kho_mem_track { + /* Points to kho_mem_phys, each order gets its own bitmap tree */ + struct xarray orders; +}; + +static struct kho_mem_track kho_mem_track; + +static void *xa_load_or_alloc(struct xarray *xa, unsigned long index, size_t sz) +{ + void *elm, *res; + + elm = xa_load(xa, index); + if (elm) + return elm; + + elm = kzalloc(sz, GFP_KERNEL); + if (!elm) + return ERR_PTR(-ENOMEM); + + res = xa_cmpxchg(xa, index, NULL, elm, GFP_KERNEL); + if (xa_is_err(res)) + res = ERR_PTR(xa_err(res)); + + if (res) { + kfree(elm); + return res; + } + + return elm; +} + +static void __kho_unpreserve(struct kho_mem_track *tracker, unsigned long pfn, + unsigned int order) +{ + struct kho_mem_phys_bits *bits; + struct kho_mem_phys *physxa; + unsigned long pfn_hi = pfn >> order; + + physxa = xa_load(&tracker->orders, order); + if (!physxa) + return; + + bits = xa_load(&physxa->phys_bits, pfn_hi / PRESERVE_BITS); + if (!bits) + return; + + clear_bit(pfn_hi % PRESERVE_BITS, bits->preserve); +} + +static int __kho_preserve(struct kho_mem_track *tracker, unsigned long pfn, + unsigned int order) +{ + struct kho_mem_phys_bits *bits; + struct kho_mem_phys *physxa; + unsigned long pfn_hi = pfn >> order; + + might_sleep(); + + physxa = xa_load_or_alloc(&tracker->orders, order, sizeof(*physxa)); + if (IS_ERR(physxa)) + return PTR_ERR(physxa); + + bits = xa_load_or_alloc(&physxa->phys_bits, pfn_hi / PRESERVE_BITS, + sizeof(*bits)); + if (IS_ERR(bits)) + return PTR_ERR(bits); + + set_bit(pfn_hi % PRESERVE_BITS, bits->preserve); + + return 0; +} + +/** + * kho_preserve_folio - preserve a folio across KHO. + * @folio: folio to preserve + * + * Records that the entire folio is preserved across KHO. 
The order + * will be preserved as well. + * + * Return: 0 on success, error code on failure + */ +int kho_preserve_folio(struct folio *folio) +{ + unsigned long pfn = folio_pfn(folio); + unsigned int order = folio_order(folio); + int err; + + if (!kho_enable) + return -EOPNOTSUPP; + + down_read(&kho_out.tree_lock); + if (kho_out.fdt) { + err = -EBUSY; + goto unlock; + } + + err = __kho_preserve(&kho_mem_track, pfn, order); + +unlock: + up_read(&kho_out.tree_lock); + + return err; +} +EXPORT_SYMBOL_GPL(kho_preserve_folio); + +/** + * kho_unpreserve_folio - unpreserve a folio + * @folio: folio to unpreserve + * + * Remove the record of a folio previously preserved by kho_preserve_folio(). + * + * Return: 0 on success, error code on failure + */ +int kho_unpreserve_folio(struct folio *folio) +{ + unsigned long pfn = folio_pfn(folio); + unsigned int order = folio_order(folio); + int err = 0; + + down_read(&kho_out.tree_lock); + if (kho_out.fdt) { + err = -EBUSY; + goto unlock; + } + + __kho_unpreserve(&kho_mem_track, pfn, order); + +unlock: + up_read(&kho_out.tree_lock); + + return err; +} +EXPORT_SYMBOL_GPL(kho_unpreserve_folio); + +/** + * kho_preserve_phys - preserve a physically contiguous range across KHO. + * @phys: physical address of the range + * @size: size of the range + * + * Records that the entire range from @phys to @phys + @size is preserved + * across KHO. 
+ * + * Return: 0 on success, error code on failure + */ +int kho_preserve_phys(phys_addr_t phys, size_t size) +{ + unsigned long pfn = PHYS_PFN(phys), end_pfn = PHYS_PFN(phys + size); + unsigned int order = ilog2(end_pfn - pfn); + unsigned long failed_pfn; + int err = 0; + + if (!kho_enable) + return -EOPNOTSUPP; + + down_read(&kho_out.tree_lock); + if (kho_out.fdt) { + err = -EBUSY; + goto unlock; + } + + for (; pfn < end_pfn; + pfn += (1 << order), order = ilog2(end_pfn - pfn)) { + err = __kho_preserve(&kho_mem_track, pfn, order); + if (err) { + failed_pfn = pfn; + break; + } + } + + if (err) + for (pfn = PHYS_PFN(phys); pfn < failed_pfn; + pfn += (1 << order), order = ilog2(end_pfn - pfn)) + __kho_unpreserve(&kho_mem_track, pfn, order); + +unlock: + up_read(&kho_out.tree_lock); + + return err; +} +EXPORT_SYMBOL_GPL(kho_preserve_phys); + +/** + * kho_unpreserve_phys - unpreserve a physically contiguous range + * @phys: physical address of the range + * @size: size of the range + * + * Remove the record of a range previously preserved by kho_preserve_phys(). 
+ * + * Return: 0 on success, error code on failure + */ +int kho_unpreserve_phys(phys_addr_t phys, size_t size) +{ + unsigned long pfn = PHYS_PFN(phys), end_pfn = PHYS_PFN(phys + size); + unsigned int order = ilog2(end_pfn - pfn); + int err = 0; + + down_read(&kho_out.tree_lock); + if (kho_out.fdt) { + err = -EBUSY; + goto unlock; + } + + for (; pfn < end_pfn; pfn += (1 << order), order = ilog2(end_pfn - pfn)) + __kho_unpreserve(&kho_mem_track, pfn, order); + +unlock: + up_read(&kho_out.tree_lock); + + return err; +} +EXPORT_SYMBOL_GPL(kho_unpreserve_phys); + +/* almost like free_reserved_page(), just don't free the page */ +static void kho_restore_page(struct page *page) +{ + ClearPageReserved(page); + init_page_count(page); + adjust_managed_page_count(page, 1); +} + +struct folio *kho_restore_folio(phys_addr_t phys) +{ + struct page *page = pfn_to_online_page(PHYS_PFN(phys)); + unsigned long order; + + if (!page) + return NULL; + + order = page->private; + if (order) + prep_compound_page(page, order); + else + kho_restore_page(page); + + return page_folio(page); +} +EXPORT_SYMBOL_GPL(kho_restore_folio); + +void *kho_restore_phys(phys_addr_t phys, size_t size) +{ + unsigned long start_pfn, end_pfn, pfn; + void *va = __va(phys); + + start_pfn = PFN_DOWN(phys); + end_pfn = PFN_UP(phys + size); + + for (pfn = start_pfn; pfn < end_pfn; pfn++) { + struct page *page = pfn_to_online_page(pfn); + + if (!page) + return NULL; + kho_restore_page(page); + } + + return va; +} +EXPORT_SYMBOL_GPL(kho_restore_phys); + +#define KHOSER_PTR(type) \ + union { \ + phys_addr_t phys; \ + type ptr; \ + } +#define KHOSER_STORE_PTR(dest, val) \ + ({ \ + (dest).phys = virt_to_phys(val); \ + typecheck(typeof((dest).ptr), val); \ + }) +#define KHOSER_LOAD_PTR(src) \ + ((src).phys ?
(typeof((src).ptr))(phys_to_virt((src).phys)) : NULL)
+
+struct khoser_mem_bitmap_ptr {
+	phys_addr_t phys_start;
+	KHOSER_PTR(struct kho_mem_phys_bits *) bitmap;
+};
+
+struct khoser_mem_chunk;
+
+struct khoser_mem_chunk_hdr {
+	KHOSER_PTR(struct khoser_mem_chunk *) next;
+	unsigned int order;
+	unsigned int num_elms;
+};
+
+#define KHOSER_BITMAP_SIZE \
+	((PAGE_SIZE - sizeof(struct khoser_mem_chunk_hdr)) / \
+	 sizeof(struct khoser_mem_bitmap_ptr))
+
+struct khoser_mem_chunk {
+	struct khoser_mem_chunk_hdr hdr;
+	struct khoser_mem_bitmap_ptr bitmaps[KHOSER_BITMAP_SIZE];
+};
+static_assert(sizeof(struct khoser_mem_chunk) == PAGE_SIZE);
+
+static struct khoser_mem_chunk *new_chunk(struct khoser_mem_chunk *cur_chunk,
+					  unsigned long order)
+{
+	struct khoser_mem_chunk *chunk;
+
+	chunk = (struct khoser_mem_chunk *)get_zeroed_page(GFP_KERNEL);
+	if (!chunk)
+		return NULL;
+	chunk->hdr.order = order;
+	if (cur_chunk)
+		KHOSER_STORE_PTR(cur_chunk->hdr.next, chunk);
+	return chunk;
+}
+
+static void kho_mem_ser_free(struct khoser_mem_chunk *first_chunk)
+{
+	struct khoser_mem_chunk *chunk = first_chunk;
+
+	while (chunk) {
+		unsigned long chunk_page = (unsigned long)chunk;
+
+		chunk = KHOSER_LOAD_PTR(chunk->hdr.next);
+		free_page(chunk_page);
+	}
+}
+
+/*
+ * Record all the bitmaps in a linked list of pages for the next kernel to
+ * process. Each chunk holds bitmaps of the same order, and each block of
+ * bitmaps starts at a given physical address. This allows the bitmaps to be
+ * sparse. The xarray is used to store them in a tree while building up the
+ * data structure, but the KHO successor kernel only needs to process them
+ * once in order.
+ *
+ * All of this memory is normal kmalloc() memory and is not marked for
+ * preservation. The successor kernel will remain isolated to the scratch
+ * space until it completes processing this list. Once processed, all the
+ * memory storing these ranges will be marked as free.
+ */
+static struct khoser_mem_chunk *kho_mem_serialize(void)
+{
+	struct kho_mem_track *tracker = &kho_mem_track;
+	struct khoser_mem_chunk *first_chunk = NULL;
+	struct khoser_mem_chunk *chunk = NULL;
+	struct kho_mem_phys *physxa;
+	unsigned long order;
+
+	xa_for_each(&tracker->orders, order, physxa) {
+		struct kho_mem_phys_bits *bits;
+		unsigned long phys;
+
+		chunk = new_chunk(chunk, order);
+		if (!chunk)
+			goto err_free;
+
+		if (!first_chunk)
+			first_chunk = chunk;
+
+		xa_for_each(&physxa->phys_bits, phys, bits) {
+			struct khoser_mem_bitmap_ptr *elm;
+
+			if (chunk->hdr.num_elms == ARRAY_SIZE(chunk->bitmaps)) {
+				chunk = new_chunk(chunk, order);
+				if (!chunk)
+					goto err_free;
+			}
+
+			elm = &chunk->bitmaps[chunk->hdr.num_elms];
+			chunk->hdr.num_elms++;
+			elm->phys_start = (phys * PRESERVE_BITS)
+					  << (order + PAGE_SHIFT);
+			KHOSER_STORE_PTR(elm->bitmap, bits);
+		}
+	}
+
+	return first_chunk;
+
+err_free:
+	kho_mem_ser_free(first_chunk);
+	return ERR_PTR(-ENOMEM);
+}
+
+static void deserialize_bitmap(unsigned int order,
+			       struct khoser_mem_bitmap_ptr *elm)
+{
+	struct kho_mem_phys_bits *bitmap = KHOSER_LOAD_PTR(elm->bitmap);
+	unsigned long bit;
+
+	for_each_set_bit(bit, bitmap->preserve, PRESERVE_BITS) {
+		int sz = 1 << (order + PAGE_SHIFT);
+		phys_addr_t phys =
+			elm->phys_start + (bit << (order + PAGE_SHIFT));
+		struct page *page = phys_to_page(phys);
+
+		memblock_reserve(phys, sz);
+		memblock_reserved_mark_noinit(phys, sz);
+		page->private = order;
+	}
+}
+
+static void __init kho_mem_deserialize(void)
+{
+	struct khoser_mem_chunk *chunk;
+	struct kho_in_node preserved_mem;
+	const phys_addr_t *mem;
+	int err;
+	u32 len;
+
+	err = kho_get_node(NULL, "preserved-memory", &preserved_mem);
+	if (err) {
+		pr_err("no preserved-memory node: %d\n", err);
+		return;
+	}
+
+	mem = kho_get_prop(&preserved_mem, "metadata", &len);
+	if (!mem || len != sizeof(*mem)) {
+		pr_err("failed to get preserved memory bitmaps\n");
+		return;
+	}
+
+	chunk = *mem ? phys_to_virt(*mem) : NULL;
+	while (chunk) {
+		unsigned int i;
+
+		memblock_reserve(virt_to_phys(chunk), sizeof(*chunk));
+
+		for (i = 0; i != chunk->hdr.num_elms; i++)
+			deserialize_bitmap(chunk->hdr.order,
+					   &chunk->bitmaps[i]);
+		chunk = KHOSER_LOAD_PTR(chunk->hdr.next);
+	}
+}
+
 /* Helper functions for KHO state tree */
 
 struct kho_prop {
@@ -545,6 +1008,11 @@ static int kho_unfreeze(void)
 	if (fdt)
 		kvfree(fdt);
 
+	if (kho_out.first_chunk_phys) {
+		kho_mem_ser_free(phys_to_virt(kho_out.first_chunk_phys));
+		kho_out.first_chunk_phys = 0;
+	}
+
 	err = blocking_notifier_call_chain(&kho_out.chain_head,
 					   KEXEC_KHO_UNFREEZE, NULL);
 	err = notifier_to_errno(err);
@@ -633,6 +1101,7 @@ static int kho_finalize(void)
 {
 	int err = 0;
 	void *fdt;
+	struct khoser_mem_chunk *first_chunk;
 
 	fdt = kvmalloc(kho_out.fdt_max, GFP_KERNEL);
 	if (!fdt)
@@ -648,6 +1117,13 @@ static int kho_finalize(void)
 	kho_out.fdt = fdt;
 	up_write(&kho_out.tree_lock);
 
+	first_chunk = kho_mem_serialize();
+	if (IS_ERR(first_chunk)) {
+		err = PTR_ERR(first_chunk);
+		goto unfreeze;
+	}
+	kho_out.first_chunk_phys = first_chunk ?
virt_to_phys(first_chunk) : 0;
+
 	err = kho_convert_tree(fdt, kho_out.fdt_max);
 
 unfreeze:
@@ -829,6 +1305,10 @@ static __init int kho_init(void)
 	kho_out.root.name = "";
 	err = kho_add_string_prop(&kho_out.root, "compatible", "kho-v1");
+	err |= kho_add_prop(&kho_out.preserved_memory, "metadata",
+			    &kho_out.first_chunk_phys, sizeof(phys_addr_t));
+	err |= kho_add_node(&kho_out.root, "preserved-memory",
+			    &kho_out.preserved_memory);
 	if (err)
 		goto err_free_scratch;
@@ -1079,10 +1559,12 @@ static void __init kho_release_scratch(void)
 
 void __init kho_memory_init(void)
 {
-	if (!kho_get_fdt())
+	if (!kho_get_fdt()) {
 		kho_reserve_scratch();
-	else
+	} else {
+		kho_mem_deserialize();
 		kho_release_scratch();
+	}
 }
 
 void __init kho_populate(phys_addr_t handover_fdt_phys,

From patchwork Thu Mar 20 01:55:45 2025
X-Patchwork-Submitter: Changyuan Lyu
X-Patchwork-Id: 14023335
Date: Wed, 19 Mar 2025 18:55:45 -0700
In-Reply-To: <20250320015551.2157511-1-changyuanl@google.com>
References: <20250320015551.2157511-1-changyuanl@google.com>
Message-ID: <20250320015551.2157511-11-changyuanl@google.com>
Subject: [PATCH v5 10/16] kexec: add KHO support to kexec file loads
From: Changyuan Lyu
To: linux-kernel@vger.kernel.org
Cc: graf@amazon.com, akpm@linux-foundation.org, luto@kernel.org,
    anthony.yznaga@oracle.com, arnd@arndb.de, ashish.kalra@amd.com,
    benh@kernel.crashing.org, bp@alien8.de, catalin.marinas@arm.com,
    dave.hansen@linux.intel.com, dwmw2@infradead.org, ebiederm@xmission.com,
    mingo@redhat.com, jgowans@amazon.com, corbet@lwn.net, krzk@kernel.org,
    rppt@kernel.org, mark.rutland@arm.com, pbonzini@redhat.com,
    pasha.tatashin@soleen.com, hpa@zytor.com, peterz@infradead.org,
    ptyadav@amazon.de, robh+dt@kernel.org, robh@kernel.org,
    saravanak@google.com, skinsburskii@linux.microsoft.com,
    rostedt@goodmis.org, tglx@linutronix.de, thomas.lendacky@amd.com,
    usama.arif@bytedance.com, will@kernel.org, devicetree@vger.kernel.org,
    kexec@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
    linux-doc@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org,
    Changyuan Lyu

From: Alexander Graf

Kexec has two modes: a user-space-driven mode and a kernel-driven mode.
In the kernel-driven mode, kernel code determines the physical addresses
of all target buffers that the payload gets copied into.

With KHO, we can only safely copy payloads into the "scratch area".
Teach the kexec file loader about it, so it only allocates from that
area. In addition, enlighten it with support to ask the KHO subsystem
for its respective payloads to copy into target memory. Also teach the
KHO subsystem how to fill the images for file loads.
Signed-off-by: Alexander Graf
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport (Microsoft)
Co-developed-by: Changyuan Lyu
Signed-off-by: Changyuan Lyu
---
 include/linux/kexec.h   |   7 +++
 kernel/kexec_core.c     |   4 ++
 kernel/kexec_file.c     |  19 +++++++
 kernel/kexec_handover.c | 108 ++++++++++++++++++++++++++++++++++++++++
 kernel/kexec_internal.h |  18 +++++++
 5 files changed, 156 insertions(+)

diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index fad04f3bcf1d..d59eee60e36e 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -364,6 +364,13 @@ struct kimage {
 	size_t ima_buffer_size;
 #endif
 
+#ifdef CONFIG_KEXEC_HANDOVER
+	struct {
+		struct kexec_segment *scratch;
+		struct kexec_segment *fdt;
+	} kho;
+#endif
+
 	/* Core ELF header buffer */
 	void *elf_headers;
 	unsigned long elf_headers_sz;
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 640d252306ea..67fb9c0b3714 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -1053,6 +1053,10 @@ int kernel_kexec(void)
 		goto Unlock;
 	}
 
+	error = kho_copy_fdt(kexec_image);
+	if (error)
+		goto Unlock;
+
 #ifdef CONFIG_KEXEC_JUMP
 	if (kexec_image->preserve_context) {
 		/*
diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
index 3eedb8c226ad..070ef206f573 100644
--- a/kernel/kexec_file.c
+++ b/kernel/kexec_file.c
@@ -253,6 +253,11 @@ kimage_file_prepare_segments(struct kimage *image, int kernel_fd, int initrd_fd,
 	/* IMA needs to pass the measurement list to the next kernel. */
 	ima_add_kexec_buffer(image);
 
+	/* If KHO is active, add its images to the list */
+	ret = kho_fill_kimage(image);
+	if (ret)
+		goto out;
+
 	/* Call image load handler */
 	ldata = kexec_image_load_default(image);
 
@@ -636,6 +641,14 @@ int kexec_locate_mem_hole(struct kexec_buf *kbuf)
 	if (kbuf->mem != KEXEC_BUF_MEM_UNKNOWN)
 		return 0;
 
+	/*
+	 * If KHO is active, only use KHO scratch memory. All other memory
+	 * could potentially be handed over.
+	 */
+	ret = kho_locate_mem_hole(kbuf, locate_mem_hole_callback);
+	if (ret <= 0)
+		return ret;
+
 	if (!IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK))
 		ret = kexec_walk_resources(kbuf, locate_mem_hole_callback);
 	else
@@ -764,6 +777,12 @@ static int kexec_calculate_store_digests(struct kimage *image)
 		if (ksegment->kbuf == pi->purgatory_buf)
 			continue;
 
+#ifdef CONFIG_KEXEC_HANDOVER
+		/* Skip the KHO FDT as its contents are copied in kernel_kexec(). */
+		if (ksegment == image->kho.fdt)
+			continue;
+#endif
+
 		ret = crypto_shash_update(desc, ksegment->kbuf,
 					  ksegment->bufsz);
 		if (ret)
diff --git a/kernel/kexec_handover.c b/kernel/kexec_handover.c
index 592563c21369..5108e2cc1a22 100644
--- a/kernel/kexec_handover.c
+++ b/kernel/kexec_handover.c
@@ -245,6 +245,85 @@ int kho_node_check_compatible(const struct kho_in_node *node,
 }
 EXPORT_SYMBOL_GPL(kho_node_check_compatible);
 
+int kho_fill_kimage(struct kimage *image)
+{
+	ssize_t scratch_size;
+	int err = 0;
+
+	if (!kho_enable)
+		return 0;
+
+	/* Allocate target memory for KHO FDT */
+	struct kexec_buf fdt = {
+		.image = image,
+		.buffer = NULL,
+		.bufsz = 0,
+		.mem = KEXEC_BUF_MEM_UNKNOWN,
+		.memsz = kho_out.fdt_max,
+		.buf_align = SZ_64K, /* Makes it easier to map */
+		.buf_max = ULONG_MAX,
+		.top_down = true,
+	};
+	err = kexec_add_buffer(&fdt);
+	if (err) {
+		pr_err("failed to reserve a segment for the KHO FDT: %d\n", err);
+		return err;
+	}
+	image->kho.fdt = &image->segment[image->nr_segments - 1];
+
+	scratch_size = sizeof(*kho_scratch) * kho_scratch_cnt;
+	struct kexec_buf scratch = {
+		.image = image,
+		.buffer = kho_scratch,
+		.bufsz = scratch_size,
+		.mem = KEXEC_BUF_MEM_UNKNOWN,
+		.memsz = scratch_size,
+		.buf_align = SZ_64K, /* Makes it easier to map */
+		.buf_max = ULONG_MAX,
+		.top_down = true,
+	};
+	err = kexec_add_buffer(&scratch);
+	if (err)
+		return err;
+	image->kho.scratch = &image->segment[image->nr_segments - 1];
+
+	return 0;
+}
+
+static int kho_walk_scratch(struct kexec_buf *kbuf,
+			    int (*func)(struct resource *,
void *))
+{
+	int ret = 0;
+	int i;
+
+	for (i = 0; i < kho_scratch_cnt; i++) {
+		struct resource res = {
+			.start = kho_scratch[i].addr,
+			.end = kho_scratch[i].addr + kho_scratch[i].size - 1,
+		};
+
+		/* Try to fit the kimage into our KHO scratch region */
+		ret = func(&res, kbuf);
+		if (ret)
+			break;
+	}
+
+	return ret;
+}
+
+int kho_locate_mem_hole(struct kexec_buf *kbuf,
+			int (*func)(struct resource *, void *))
+{
+	int ret;
+
+	if (!kho_enable || kbuf->image->type == KEXEC_TYPE_CRASH)
+		return 1;
+
+	ret = kho_walk_scratch(kbuf, func);
+
+	return ret == 1 ? 0 : -EADDRNOTAVAIL;
+}
+
 /*
  * Keep track of memory that is to be preserved across KHO.
  *
@@ -1141,6 +1220,35 @@ static int kho_finalize(void)
 	return err;
 }
 
+int kho_copy_fdt(struct kimage *image)
+{
+	int err = 0;
+	void *fdt;
+
+	if (!kho_enable || !image->file_mode)
+		return 0;
+
+	if (!kho_out.fdt) {
+		err = kho_finalize();
+		kho_out_update_debugfs_fdt();
+		if (err)
+			return err;
+	}
+
+	fdt = kimage_map_segment(image, image->kho.fdt->mem,
+				 PAGE_ALIGN(kho_out.fdt_max));
+	if (!fdt) {
+		pr_err("failed to vmap fdt ksegment in kimage\n");
+		return -ENOMEM;
+	}
+
+	memcpy(fdt, kho_out.fdt, fdt_totalsize(kho_out.fdt));
+
+	kimage_unmap_segment(fdt);
+
+	return 0;
+}
+
 /* Handling for debug/kho/out */
 
 static int kho_out_finalize_get(void *data, u64 *val)
 {
diff --git a/kernel/kexec_internal.h b/kernel/kexec_internal.h
index d35d9792402d..ec9555a4751d 100644
--- a/kernel/kexec_internal.h
+++ b/kernel/kexec_internal.h
@@ -39,4 +39,22 @@ extern size_t kexec_purgatory_size;
 #else /* CONFIG_KEXEC_FILE */
 static inline void kimage_file_post_load_cleanup(struct kimage *image) { }
 #endif /* CONFIG_KEXEC_FILE */
+
+struct kexec_buf;
+
+#ifdef CONFIG_KEXEC_HANDOVER
+int kho_locate_mem_hole(struct kexec_buf *kbuf,
+			int (*func)(struct resource *, void *));
+int kho_fill_kimage(struct kimage *image);
+int kho_copy_fdt(struct kimage *image);
+#else
+static inline int kho_locate_mem_hole(struct kexec_buf *kbuf,
+				      int
(*func)(struct resource *, void *))
+{
+	return 1;
+}
+
+static inline int kho_fill_kimage(struct kimage *image) { return 0; }
+static inline int kho_copy_fdt(struct kimage *image) { return 0; }
+#endif /* CONFIG_KEXEC_HANDOVER */
 #endif /* LINUX_KEXEC_INTERNAL_H */

From patchwork Thu Mar 20 01:55:46 2025
X-Patchwork-Submitter: Changyuan Lyu
X-Patchwork-Id: 14023338
Date: Wed, 19 Mar 2025 18:55:46 -0700
In-Reply-To: <20250320015551.2157511-1-changyuanl@google.com>
References: <20250320015551.2157511-1-changyuanl@google.com>
Message-ID: <20250320015551.2157511-12-changyuanl@google.com>
Subject: [PATCH v5 11/16] kexec: add config option for KHO
From: Changyuan Lyu
To: linux-kernel@vger.kernel.org

From: Alexander Graf

We now have all the generic code in place to support kexec with KHO.
Add a config option that depends on architecture support to enable KHO.
Signed-off-by: Alexander Graf
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport (Microsoft)
Co-developed-by: Changyuan Lyu
Signed-off-by: Changyuan Lyu
---
 kernel/Kconfig.kexec | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/kernel/Kconfig.kexec b/kernel/Kconfig.kexec
index 4d111f871951..57db99e758a8 100644
--- a/kernel/Kconfig.kexec
+++ b/kernel/Kconfig.kexec
@@ -95,6 +95,21 @@ config KEXEC_JUMP
 	  Jump between original kernel and kexeced kernel and invoke
 	  code in physical address mode via KEXEC
 
+config KEXEC_HANDOVER
+	bool "kexec handover"
+	depends on ARCH_SUPPORTS_KEXEC_HANDOVER && ARCH_SUPPORTS_KEXEC_FILE
+	select MEMBLOCK_KHO_SCRATCH
+	select KEXEC_FILE
+	select DEBUG_FS
+	select LIBFDT
+	select CMA
+	select XXHASH
+	help
+	  Allow kexec to hand over state across kernels by generating and
+	  passing additional metadata to the target kernel. This is useful
+	  to keep data or state alive across the kexec. For this to work,
+	  both source and target kernels need to have this option enabled.
+
 config CRASH_DUMP
 	bool "kernel crash dumps"
 	default ARCH_DEFAULT_CRASH_DUMP

From patchwork Thu Mar 20 01:55:47 2025
X-Patchwork-Submitter: Changyuan Lyu
X-Patchwork-Id: 14023336
Date: Wed, 19 Mar 2025 18:55:47 -0700
In-Reply-To: <20250320015551.2157511-1-changyuanl@google.com>
References: <20250320015551.2157511-1-changyuanl@google.com>
Message-ID: <20250320015551.2157511-13-changyuanl@google.com>
Subject: [PATCH v5 12/16] arm64: add KHO support
From: Changyuan Lyu
To: linux-kernel@vger.kernel.org

From: Alexander Graf

We now have all bits in place to support KHO kexecs.
Add awareness of KHO to the kexec file load and boot paths for arm64,
and add the respective Kconfig option to the architecture so that it
can use KHO successfully.

Signed-off-by: Alexander Graf
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport (Microsoft)
Co-developed-by: Changyuan Lyu
Signed-off-by: Changyuan Lyu
---
 arch/arm64/Kconfig |  3 +++
 drivers/of/fdt.c   | 33 +++++++++++++++++++++++++++++++++
 drivers/of/kexec.c | 37 +++++++++++++++++++++++++++++++++++++
 3 files changed, 73 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 940343beb3d4..c997b27b7da1 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1589,6 +1589,9 @@ config ARCH_SUPPORTS_KEXEC_IMAGE_VERIFY_SIG
 config ARCH_DEFAULT_KEXEC_IMAGE_VERIFY_SIG
 	def_bool y
 
+config ARCH_SUPPORTS_KEXEC_HANDOVER
+	def_bool y
+
 config ARCH_SUPPORTS_CRASH_DUMP
 	def_bool y
 
diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index aedd0e2dcd89..73f80e3f7188 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include
 #include  /* for COMMAND_LINE_SIZE */
 #include
@@ -875,6 +876,35 @@ void __init early_init_dt_check_for_usable_mem_range(void)
 		memblock_add(rgn[i].base, rgn[i].size);
 }
 
+/**
+ * early_init_dt_check_kho - Decode info required for kexec handover from DT
+ */
+static void __init early_init_dt_check_kho(void)
+{
+	unsigned long node = chosen_node_offset;
+	u64 kho_start, scratch_start, scratch_size;
+	const __be32 *p;
+	int l;
+
+	if (!IS_ENABLED(CONFIG_KEXEC_HANDOVER) || (long)node < 0)
+		return;
+
+	p = of_get_flat_dt_prop(node, "linux,kho-fdt", &l);
+	if (l != (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32))
+		return;
+
+	kho_start = dt_mem_next_cell(dt_root_addr_cells, &p);
+
+	p = of_get_flat_dt_prop(node, "linux,kho-scratch", &l);
+	if (l != (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32))
+		return;
+
+	scratch_start = dt_mem_next_cell(dt_root_addr_cells, &p);
+	scratch_size =
dt_mem_next_cell(dt_root_size_cells, &p);
+
+	kho_populate(kho_start, scratch_start, scratch_size);
+}
+
 #ifdef CONFIG_SERIAL_EARLYCON
 
 int __init early_init_dt_scan_chosen_stdout(void)
@@ -1169,6 +1199,9 @@ void __init early_init_dt_scan_nodes(void)
 
 	/* Handle linux,usable-memory-range property */
 	early_init_dt_check_for_usable_mem_range();
+
+	/* Handle kexec handover */
+	early_init_dt_check_kho();
 }
 
 bool __init early_init_dt_scan(void *dt_virt, phys_addr_t dt_phys)
diff --git a/drivers/of/kexec.c b/drivers/of/kexec.c
index 5b924597a4de..db7d7014d8b4 100644
--- a/drivers/of/kexec.c
+++ b/drivers/of/kexec.c
@@ -264,6 +264,38 @@ static inline int setup_ima_buffer(const struct kimage *image, void *fdt,
 }
 #endif /* CONFIG_IMA_KEXEC */
 
+static int kho_add_chosen(const struct kimage *image, void *fdt, int chosen_node)
+{
+	int ret = 0;
+#ifdef CONFIG_KEXEC_HANDOVER
+	phys_addr_t dt_mem = 0;
+	phys_addr_t dt_len = 0;
+	phys_addr_t scratch_mem = 0;
+	phys_addr_t scratch_len = 0;
+
+	if (!image->kho.fdt || !image->kho.scratch)
+		return 0;
+
+	dt_mem = image->kho.fdt->mem;
+	dt_len = image->kho.fdt->memsz;
+
+	scratch_mem = image->kho.scratch->mem;
+	scratch_len = image->kho.scratch->bufsz;
+
+	pr_debug("Adding kho metadata to DT\n");
+
+	ret = fdt_appendprop_addrrange(fdt, 0, chosen_node, "linux,kho-fdt",
+				       dt_mem, dt_len);
+	if (ret)
+		return ret;
+
+	ret = fdt_appendprop_addrrange(fdt, 0, chosen_node, "linux,kho-scratch",
+				       scratch_mem, scratch_len);
+
+#endif /* CONFIG_KEXEC_HANDOVER */
+	return ret;
+}
+
 /*
  * of_kexec_alloc_and_setup_fdt - Alloc and setup a new Flattened Device Tree
  *
@@ -414,6 +446,11 @@ void *of_kexec_alloc_and_setup_fdt(const struct kimage *image,
 #endif
 	}
 
+	/* Add kho metadata if this is a KHO image */
+	ret = kho_add_chosen(image, fdt, chosen_node);
+	if (ret)
+		goto out;
+
 	/* add bootargs */
 	if (cmdline) {
 		ret = fdt_setprop_string(fdt, chosen_node, "bootargs", cmdline);

From patchwork Thu Mar 20 01:55:48 2025
Content-Type: text/plain;
charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Changyuan Lyu
X-Patchwork-Id: 14023337
Date: Wed, 19 Mar 2025 18:55:48 -0700
In-Reply-To: <20250320015551.2157511-1-changyuanl@google.com>
References: <20250320015551.2157511-1-changyuanl@google.com>
X-Mailer: git-send-email 2.49.0.rc1.451.g8f38331e32-goog
Message-ID: <20250320015551.2157511-14-changyuanl@google.com>
Subject: [PATCH v5 13/16] x86/setup: use memblock_reserve_kern for memory used by kernel
From: Changyuan Lyu
To: linux-kernel@vger.kernel.org
Cc: graf@amazon.com, akpm@linux-foundation.org, luto@kernel.org, anthony.yznaga@oracle.com, arnd@arndb.de, ashish.kalra@amd.com, benh@kernel.crashing.org, bp@alien8.de, catalin.marinas@arm.com, dave.hansen@linux.intel.com, dwmw2@infradead.org, ebiederm@xmission.com, mingo@redhat.com, jgowans@amazon.com, corbet@lwn.net, krzk@kernel.org, rppt@kernel.org, mark.rutland@arm.com, pbonzini@redhat.com, pasha.tatashin@soleen.com, hpa@zytor.com, peterz@infradead.org, ptyadav@amazon.de, robh+dt@kernel.org, robh@kernel.org, saravanak@google.com, skinsburskii@linux.microsoft.com, rostedt@goodmis.org, tglx@linutronix.de,
thomas.lendacky@amd.com, usama.arif@bytedance.com, will@kernel.org, devicetree@vger.kernel.org, kexec@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org
From: "Mike Rapoport (Microsoft)"

memblock_reserve() does not distinguish memory used by firmware from memory used by the kernel. The distinction is nice to have for accounting of early memory allocations and reservations, but it is essential for kexec handover (KHO) to know how much memory the kernel consumes during boot.

Use memblock_reserve_kern() to reserve kernel memory, such as the kernel image, initrd, and setup data.
Signed-off-by: Mike Rapoport (Microsoft) --- arch/x86/kernel/setup.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c index cebee310e200..ead370570eb2 100644 --- a/arch/x86/kernel/setup.c +++ b/arch/x86/kernel/setup.c @@ -220,8 +220,8 @@ static void __init cleanup_highmap(void) static void __init reserve_brk(void) { if (_brk_end > _brk_start) - memblock_reserve(__pa_symbol(_brk_start), - _brk_end - _brk_start); + memblock_reserve_kern(__pa_symbol(_brk_start), + _brk_end - _brk_start); /* Mark brk area as locked down and no longer taking any new allocations */ @@ -294,7 +294,7 @@ static void __init early_reserve_initrd(void) !ramdisk_image || !ramdisk_size) return; /* No initrd provided by bootloader */ - memblock_reserve(ramdisk_image, ramdisk_end - ramdisk_image); + memblock_reserve_kern(ramdisk_image, ramdisk_end - ramdisk_image); } static void __init reserve_initrd(void) @@ -347,7 +347,7 @@ static void __init add_early_ima_buffer(u64 phys_addr) } if (data->size) { - memblock_reserve(data->addr, data->size); + memblock_reserve_kern(data->addr, data->size); ima_kexec_buffer_phys = data->addr; ima_kexec_buffer_size = data->size; } @@ -447,7 +447,7 @@ static void __init memblock_x86_reserve_range_setup_data(void) len = sizeof(*data); pa_next = data->next; - memblock_reserve(pa_data, sizeof(*data) + data->len); + memblock_reserve_kern(pa_data, sizeof(*data) + data->len); if (data->type == SETUP_INDIRECT) { len += data->len; @@ -461,7 +461,7 @@ static void __init memblock_x86_reserve_range_setup_data(void) indirect = (struct setup_indirect *)data->data; if (indirect->type != SETUP_INDIRECT) - memblock_reserve(indirect->addr, indirect->len); + memblock_reserve_kern(indirect->addr, indirect->len); } pa_data = pa_next; @@ -649,8 +649,8 @@ static void __init early_reserve_memory(void) * __end_of_kernel_reserve symbol must be explicitly reserved with a * separate memblock_reserve() or they 
will be discarded. */ - memblock_reserve(__pa_symbol(_text), - (unsigned long)__end_of_kernel_reserve - (unsigned long)_text); + memblock_reserve_kern(__pa_symbol(_text), + (unsigned long)__end_of_kernel_reserve - (unsigned long)_text); /* * The first 4Kb of memory is a BIOS owned area, but generally it is
From patchwork Thu Mar 20 01:55:49 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Changyuan Lyu
X-Patchwork-Id: 14023346
Date: Wed, 19 Mar 2025 18:55:49 -0700
In-Reply-To: <20250320015551.2157511-1-changyuanl@google.com>
References: <20250320015551.2157511-1-changyuanl@google.com>
X-Mailer: git-send-email 2.49.0.rc1.451.g8f38331e32-goog
Message-ID: <20250320015551.2157511-15-changyuanl@google.com>
Subject: [PATCH v5 14/16] x86: add KHO support
From: Changyuan Lyu
To: linux-kernel@vger.kernel.org
Cc: graf@amazon.com, akpm@linux-foundation.org, luto@kernel.org, anthony.yznaga@oracle.com, arnd@arndb.de, ashish.kalra@amd.com, benh@kernel.crashing.org, bp@alien8.de, catalin.marinas@arm.com, dave.hansen@linux.intel.com, dwmw2@infradead.org, ebiederm@xmission.com, mingo@redhat.com, jgowans@amazon.com, corbet@lwn.net,
krzk@kernel.org, rppt@kernel.org, mark.rutland@arm.com, pbonzini@redhat.com, pasha.tatashin@soleen.com, hpa@zytor.com, peterz@infradead.org, ptyadav@amazon.de, robh+dt@kernel.org, robh@kernel.org, saravanak@google.com, skinsburskii@linux.microsoft.com, rostedt@goodmis.org, tglx@linutronix.de, thomas.lendacky@amd.com, usama.arif@bytedance.com, will@kernel.org, devicetree@vger.kernel.org, kexec@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org, Changyuan Lyu
From: Alexander Graf

We now have all bits in place to support KHO kexecs. This patch adds awareness of KHO to the kexec file-load path as well as the boot path for x86, and adds the respective Kconfig option to the architecture so that it can use KHO successfully.

In addition, it enlightens the decompression code with KHO so that its KASLR location finder only considers memory regions that are not already occupied by KHO memory.
Signed-off-by: Alexander Graf Co-developed-by: Mike Rapoport (Microsoft) Signed-off-by: Mike Rapoport (Microsoft) Co-developed-by: Changyuan Lyu Signed-off-by: Changyuan Lyu --- arch/x86/Kconfig | 3 ++ arch/x86/boot/compressed/kaslr.c | 52 +++++++++++++++++++++++++- arch/x86/include/asm/setup.h | 4 ++ arch/x86/include/uapi/asm/setup_data.h | 13 ++++++- arch/x86/kernel/e820.c | 18 +++++++++ arch/x86/kernel/kexec-bzimage64.c | 36 ++++++++++++++++++ arch/x86/kernel/setup.c | 25 +++++++++++++ arch/x86/realmode/init.c | 2 + include/linux/kexec_handover.h | 13 +++++-- 9 files changed, 161 insertions(+), 5 deletions(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 0e27ebd7e36a..acd180e3002f 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -2091,6 +2091,9 @@ config ARCH_SUPPORTS_KEXEC_BZIMAGE_VERIFY_SIG config ARCH_SUPPORTS_KEXEC_JUMP def_bool y +config ARCH_SUPPORTS_KEXEC_HANDOVER + def_bool y + config ARCH_SUPPORTS_CRASH_DUMP def_bool X86_64 || (X86_32 && HIGHMEM) diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c index f03d59ea6e40..ff1168881016 100644 --- a/arch/x86/boot/compressed/kaslr.c +++ b/arch/x86/boot/compressed/kaslr.c @@ -760,6 +760,55 @@ static void process_e820_entries(unsigned long minimum, } } +/* + * If KHO is active, only process its scratch areas to ensure we are not + * stepping onto preserved memory. 
+ */ +#ifdef CONFIG_KEXEC_HANDOVER +static bool process_kho_entries(unsigned long minimum, unsigned long image_size) +{ + struct kho_scratch *kho_scratch; + struct setup_data *ptr; + int i, nr_areas = 0; + + ptr = (struct setup_data *)(unsigned long)boot_params_ptr->hdr.setup_data; + while (ptr) { + if (ptr->type == SETUP_KEXEC_KHO) { + struct kho_data *kho = (struct kho_data *)ptr->data; + + kho_scratch = (void *)kho->scratch_addr; + nr_areas = kho->scratch_size / sizeof(*kho_scratch); + + break; + } + + ptr = (struct setup_data *)(unsigned long)ptr->next; + } + + if (!nr_areas) + return false; + + for (i = 0; i < nr_areas; i++) { + struct kho_scratch *area = &kho_scratch[i]; + struct mem_vector region = { + .start = area->addr, + .size = area->size, + }; + + if (process_mem_region(®ion, minimum, image_size)) + break; + } + + return true; +} +#else +static inline bool process_kho_entries(unsigned long minimum, + unsigned long image_size) +{ + return false; +} +#endif + static unsigned long find_random_phys_addr(unsigned long minimum, unsigned long image_size) { @@ -775,7 +824,8 @@ static unsigned long find_random_phys_addr(unsigned long minimum, return 0; } - if (!process_efi_entries(minimum, image_size)) + if (!process_kho_entries(minimum, image_size) && + !process_efi_entries(minimum, image_size)) process_e820_entries(minimum, image_size); phys_addr = slots_fetch_random(); diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h index 85f4fde3515c..70e045321d4b 100644 --- a/arch/x86/include/asm/setup.h +++ b/arch/x86/include/asm/setup.h @@ -66,6 +66,10 @@ extern void x86_ce4100_early_setup(void); static inline void x86_ce4100_early_setup(void) { } #endif +#ifdef CONFIG_KEXEC_HANDOVER +#include +#endif + #ifndef _SETUP #include diff --git a/arch/x86/include/uapi/asm/setup_data.h b/arch/x86/include/uapi/asm/setup_data.h index b111b0c18544..c258c37768ee 100644 --- a/arch/x86/include/uapi/asm/setup_data.h +++ b/arch/x86/include/uapi/asm/setup_data.h 
@@ -13,7 +13,8 @@ #define SETUP_CC_BLOB 7 #define SETUP_IMA 8 #define SETUP_RNG_SEED 9 -#define SETUP_ENUM_MAX SETUP_RNG_SEED +#define SETUP_KEXEC_KHO 10 +#define SETUP_ENUM_MAX SETUP_KEXEC_KHO #define SETUP_INDIRECT (1<<31) #define SETUP_TYPE_MAX (SETUP_ENUM_MAX | SETUP_INDIRECT) @@ -78,6 +79,16 @@ struct ima_setup_data { __u64 size; } __attribute__((packed)); +/* + * Locations of kexec handover metadata + */ +struct kho_data { + __u64 dt_addr; + __u64 dt_size; + __u64 scratch_addr; + __u64 scratch_size; +} __attribute__((packed)); + #endif /* __ASSEMBLY__ */ #endif /* _UAPI_ASM_X86_SETUP_DATA_H */ diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c index 82b96ed9890a..0b81cd70b02a 100644 --- a/arch/x86/kernel/e820.c +++ b/arch/x86/kernel/e820.c @@ -1329,6 +1329,24 @@ void __init e820__memblock_setup(void) memblock_add(entry->addr, entry->size); } + /* + * At this point with KHO we only allocate from scratch memory. + * At the same time, we configure memblock to only allow + * allocations from memory below ISA_END_ADDRESS which is not + * a natural scratch region, because Linux ignores memory below + * ISA_END_ADDRESS at runtime. Besides very few (if any) early + * allocations, we must allocate the real-mode trampoline below + * ISA_END_ADDRESS. + * + * To make sure that we can actually perform allocations during + * this phase, let's mark memory below ISA_END_ADDRESS as scratch + * so we can allocate from there in a scratch-only world.
+ * + * After real mode trampoline is allocated, we clear scratch + * marking from the memory below ISA_END_ADDRESS + */ + memblock_mark_kho_scratch(0, ISA_END_ADDRESS); + /* Throw away partial pages: */ memblock_trim_memory(PAGE_SIZE); diff --git a/arch/x86/kernel/kexec-bzimage64.c b/arch/x86/kernel/kexec-bzimage64.c index 68530fad05f7..09d6a068b14c 100644 --- a/arch/x86/kernel/kexec-bzimage64.c +++ b/arch/x86/kernel/kexec-bzimage64.c @@ -233,6 +233,31 @@ setup_ima_state(const struct kimage *image, struct boot_params *params, #endif /* CONFIG_IMA_KEXEC */ } +static void setup_kho(const struct kimage *image, struct boot_params *params, + unsigned long params_load_addr, + unsigned int setup_data_offset) +{ +#ifdef CONFIG_KEXEC_HANDOVER + struct setup_data *sd = (void *)params + setup_data_offset; + struct kho_data *kho = (void *)sd + sizeof(*sd); + + sd->type = SETUP_KEXEC_KHO; + sd->len = sizeof(struct kho_data); + + /* Only add if we have all KHO images in place */ + if (!image->kho.fdt || !image->kho.scratch) + return; + + /* Add setup data */ + kho->dt_addr = image->kho.fdt->mem; + kho->dt_size = image->kho.fdt->memsz; + kho->scratch_addr = image->kho.scratch->mem; + kho->scratch_size = image->kho.scratch->bufsz; + sd->next = params->hdr.setup_data; + params->hdr.setup_data = params_load_addr + setup_data_offset; +#endif /* CONFIG_KEXEC_HANDOVER */ +} + static int setup_boot_parameters(struct kimage *image, struct boot_params *params, unsigned long params_load_addr, @@ -312,6 +337,13 @@ setup_boot_parameters(struct kimage *image, struct boot_params *params, sizeof(struct ima_setup_data); } + if (IS_ENABLED(CONFIG_KEXEC_HANDOVER)) { + /* Setup space to store preservation metadata */ + setup_kho(image, params, params_load_addr, setup_data_offset); + setup_data_offset += sizeof(struct setup_data) + + sizeof(struct kho_data); + } + /* Setup RNG seed */ setup_rng_seed(params, params_load_addr, setup_data_offset); @@ -479,6 +511,10 @@ static void 
*bzImage64_load(struct kimage *image, char *kernel, kbuf.bufsz += sizeof(struct setup_data) + sizeof(struct ima_setup_data); + if (IS_ENABLED(CONFIG_KEXEC_HANDOVER)) + kbuf.bufsz += sizeof(struct setup_data) + + sizeof(struct kho_data); + params = kzalloc(kbuf.bufsz, GFP_KERNEL); if (!params) return ERR_PTR(-ENOMEM); diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c index ead370570eb2..e2c54181405b 100644 --- a/arch/x86/kernel/setup.c +++ b/arch/x86/kernel/setup.c @@ -385,6 +385,28 @@ int __init ima_get_kexec_buffer(void **addr, size_t *size) } #endif +static void __init add_kho(u64 phys_addr, u32 data_len) +{ +#ifdef CONFIG_KEXEC_HANDOVER + struct kho_data *kho; + u64 addr = phys_addr + sizeof(struct setup_data); + u64 size = data_len - sizeof(struct setup_data); + + kho = early_memremap(addr, size); + if (!kho) { + pr_warn("setup: failed to memremap kho data (0x%llx, 0x%llx)\n", + addr, size); + return; + } + + kho_populate(kho->dt_addr, kho->scratch_addr, kho->scratch_size); + + early_memunmap(kho, size); +#else + pr_warn("Passed KHO data, but CONFIG_KEXEC_HANDOVER not set. Ignoring.\n"); +#endif +} + static void __init parse_setup_data(void) { struct setup_data *data; @@ -413,6 +435,9 @@ static void __init parse_setup_data(void) case SETUP_IMA: add_early_ima_buffer(pa_data); break; + case SETUP_KEXEC_KHO: + add_kho(pa_data, data_len); + break; case SETUP_RNG_SEED: data = early_memremap(pa_data, data_len); add_bootloader_randomness(data->data, data->len); diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c index f9bc444a3064..9b9f4534086d 100644 --- a/arch/x86/realmode/init.c +++ b/arch/x86/realmode/init.c @@ -65,6 +65,8 @@ void __init reserve_real_mode(void) * setup_arch(). 
*/ memblock_reserve(0, SZ_1M); + + memblock_clear_kho_scratch(0, SZ_1M); } static void __init sme_sev_setup_real_mode(struct trampoline_header *th) diff --git a/include/linux/kexec_handover.h b/include/linux/kexec_handover.h index d52a7b500f4c..2dd51a77d56c 100644 --- a/include/linux/kexec_handover.h +++ b/include/linux/kexec_handover.h @@ -3,9 +3,6 @@ #define LINUX_KEXEC_HANDOVER_H #include -#include -#include -#include struct kho_scratch { phys_addr_t addr; @@ -18,6 +15,15 @@ enum kho_event { KEXEC_KHO_UNFREEZE = 1, }; +#ifdef _SETUP +struct notifier_block; +struct kho_node; +struct folio; +#else +#include +#include +#include + #define KHO_HASHTABLE_BITS 3 #define KHO_NODE_INIT \ { \ @@ -35,6 +41,7 @@ struct kho_node { struct list_head list; bool visited; }; +#endif /* _SETUP */ struct kho_in_node { int offset;
From patchwork Thu Mar 20 01:55:50 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Changyuan Lyu
X-Patchwork-Id: 14023347
Date: Wed, 19 Mar 2025 18:55:50 -0700
In-Reply-To: <20250320015551.2157511-1-changyuanl@google.com>
Mime-Version: 1.0
References: <20250320015551.2157511-1-changyuanl@google.com>
X-Mailer: git-send-email 2.49.0.rc1.451.g8f38331e32-goog
Message-ID: <20250320015551.2157511-16-changyuanl@google.com>
Subject: [PATCH v5 15/16] memblock: add KHO support for reserve_mem
From: Changyuan Lyu
To: linux-kernel@vger.kernel.org
Cc: graf@amazon.com, akpm@linux-foundation.org, luto@kernel.org, anthony.yznaga@oracle.com, arnd@arndb.de, ashish.kalra@amd.com, benh@kernel.crashing.org, bp@alien8.de, catalin.marinas@arm.com, dave.hansen@linux.intel.com, dwmw2@infradead.org, ebiederm@xmission.com, mingo@redhat.com, jgowans@amazon.com, corbet@lwn.net, krzk@kernel.org, rppt@kernel.org, mark.rutland@arm.com, pbonzini@redhat.com, pasha.tatashin@soleen.com, hpa@zytor.com, peterz@infradead.org, ptyadav@amazon.de, robh+dt@kernel.org, robh@kernel.org, saravanak@google.com, skinsburskii@linux.microsoft.com, rostedt@goodmis.org, tglx@linutronix.de, thomas.lendacky@amd.com, usama.arif@bytedance.com, will@kernel.org, devicetree@vger.kernel.org, kexec@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org, Changyuan Lyu
From: Alexander Graf

Linux has recently gained support for "reserve_mem": a mechanism to allocate a region of memory early enough in boot that we can cross our fingers and hope it stays at the same location during most boots, so we can store, for example, ftrace buffers in it. Thanks to KASLR, we can never be really sure that "reserve_mem" allocations are static across kexec.
Let's teach it KHO awareness so that it serializes its reservations on
kexec exit and deserializes them again on boot, preserving the exact
same mapping across kexec.

This is an example user for KHO in the KHO patch set to ensure we have
at least one (not very controversial) user in the tree before extending
KHO's use to more subsystems.

Signed-off-by: Alexander Graf
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport (Microsoft)
Co-developed-by: Changyuan Lyu
Signed-off-by: Changyuan Lyu
---
 mm/memblock.c | 179 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 179 insertions(+)

diff --git a/mm/memblock.c b/mm/memblock.c
index d28abf3def1c..dd698c55b87e 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -17,6 +17,10 @@
 #include
 #include

+#ifdef CONFIG_KEXEC_HANDOVER
+#include
+#endif /* CONFIG_KEXEC_HANDOVER */
+
 #include
 #include

@@ -2431,6 +2435,176 @@ int reserve_mem_find_by_name(const char *name, phys_addr_t *start, phys_addr_t *
 }
 EXPORT_SYMBOL_GPL(reserve_mem_find_by_name);

+#ifdef CONFIG_KEXEC_HANDOVER
+#define MEMBLOCK_KHO_NODE		"memblock"
+#define MEMBLOCK_KHO_NODE_COMPATIBLE	"memblock-v1"
+#define RESERVE_MEM_KHO_NODE_COMPATIBLE	"reserve-mem-v1"
+
+static struct kho_node memblock_kho_node = KHO_NODE_INIT;
+
+static void reserve_mem_kho_reset(void)
+{
+	int i;
+	struct kho_node *node;
+
+	kho_remove_node(NULL, MEMBLOCK_KHO_NODE);
+	kho_remove_prop(&memblock_kho_node, "compatible", NULL);
+
+	for (i = 0; i < reserved_mem_count; i++) {
+		struct reserve_mem_table *map = &reserved_mem_table[i];
+
+		node = kho_remove_node(&memblock_kho_node, map->name);
+		if (IS_ERR(node))
+			continue;
+
+		kho_unpreserve_phys(map->start, map->size);
+
+		kho_remove_prop(node, "compatible", NULL);
+		kho_remove_prop(node, "start", NULL);
+		kho_remove_prop(node, "size", NULL);
+
+		kfree(node);
+	}
+}
+
+static int reserve_mem_kho_finalize(void)
+{
+	int i, err = 0;
+	struct kho_node *node;
+
+	if (!reserved_mem_count)
+		return NOTIFY_DONE;
+
+	err = kho_add_node(NULL, MEMBLOCK_KHO_NODE, &memblock_kho_node);
+	if (err == 1)
+		return NOTIFY_DONE;
+
+	err |= kho_add_string_prop(&memblock_kho_node, "compatible",
+				   MEMBLOCK_KHO_NODE_COMPATIBLE);
+
+	for (i = 0; i < reserved_mem_count; i++) {
+		struct reserve_mem_table *map = &reserved_mem_table[i];
+
+		node = kmalloc(sizeof(*node), GFP_KERNEL);
+		if (!node) {
+			err = -ENOMEM;
+			break;
+		}
+
+		err |= kho_preserve_phys(map->start, map->size);
+
+		kho_init_node(node);
+		err |= kho_add_string_prop(node, "compatible",
+					   RESERVE_MEM_KHO_NODE_COMPATIBLE);
+		err |= kho_add_prop(node, "start", &map->start,
+				    sizeof(map->start));
+		err |= kho_add_prop(node, "size", &map->size,
+				    sizeof(map->size));
+		err |= kho_add_node(&memblock_kho_node, map->name, node);
+
+		if (err)
+			break;
+	}
+
+	if (err) {
+		pr_err("failed to save reserve_mem to KHO: %d\n", err);
+		reserve_mem_kho_reset();
+		return NOTIFY_STOP;
+	}
+
+	return NOTIFY_DONE;
+}
+
+static int reserve_mem_kho_notifier(struct notifier_block *self,
+				    unsigned long cmd, void *v)
+{
+	switch (cmd) {
+	case KEXEC_KHO_FINALIZE:
+		return reserve_mem_kho_finalize();
+	case KEXEC_KHO_UNFREEZE:
+		return NOTIFY_DONE;
+	default:
+		return NOTIFY_BAD;
+	}
+}
+
+static struct notifier_block reserve_mem_kho_nb = {
+	.notifier_call = reserve_mem_kho_notifier,
+};
+
+static int __init reserve_mem_init(void)
+{
+	if (!kho_is_enabled())
+		return 0;
+
+	return register_kho_notifier(&reserve_mem_kho_nb);
+}
+core_initcall(reserve_mem_init);
+
+static bool __init reserve_mem_kho_revive(const char *name, phys_addr_t size,
+					  phys_addr_t align)
+{
+	int err, len_start, len_size;
+	struct kho_in_node node, child;
+	const phys_addr_t *p_start, *p_size;
+
+	err = kho_get_node(NULL, MEMBLOCK_KHO_NODE, &node);
+	if (err)
+		return false;
+
+	err = kho_node_check_compatible(&node, MEMBLOCK_KHO_NODE_COMPATIBLE);
+	if (err) {
+		pr_warn("Node '%s' is incompatible with %s: %d\n",
+			MEMBLOCK_KHO_NODE, MEMBLOCK_KHO_NODE_COMPATIBLE, err);
+		return false;
+	}
+
+	err = kho_get_node(&node, name, &child);
+	if (err) {
+		pr_warn("Node '%s' has no child '%s': %d\n",
+			MEMBLOCK_KHO_NODE, name, err);
+		return false;
+	}
+	err = kho_node_check_compatible(&child, RESERVE_MEM_KHO_NODE_COMPATIBLE);
+	if (err) {
+		pr_warn("Node '%s/%s' is incompatible with %s: %d\n",
+			MEMBLOCK_KHO_NODE, name,
+			RESERVE_MEM_KHO_NODE_COMPATIBLE, err);
+		return false;
+	}
+
+	p_start = kho_get_prop(&child, "start", &len_start);
+	p_size = kho_get_prop(&child, "size", &len_size);
+	if (!p_start || len_start != sizeof(*p_start) || !p_size ||
+	    len_size != sizeof(*p_size)) {
+		return false;
+	}
+
+	if (*p_start & (align - 1)) {
+		pr_warn("KHO reserve-mem '%s' has wrong alignment (0x%lx, 0x%lx)\n",
+			name, (long)align, (long)*p_start);
+		return false;
+	}
+
+	if (*p_size != size) {
+		pr_warn("KHO reserve-mem '%s' has wrong size (0x%lx != 0x%lx)\n",
+			name, (long)*p_size, (long)size);
+		return false;
+	}
+
+	reserved_mem_add(*p_start, size, name);
+	pr_info("Revived memory reservation '%s' from KHO\n", name);
+
+	return true;
+}
+#else
+static bool __init reserve_mem_kho_revive(const char *name, phys_addr_t size,
+					  phys_addr_t align)
+{
+	return false;
+}
+#endif /* CONFIG_KEXEC_HANDOVER */
+
 /*
  * Parse reserve_mem=nn:align:name
  */
@@ -2486,6 +2660,11 @@ static int __init reserve_mem(char *p)
 	if (reserve_mem_find_by_name(name, &start, &tmp))
 		return -EBUSY;

+	/* Pick previous allocations up from KHO if available */
+	if (reserve_mem_kho_revive(name, size, align))
+		return 1;
+
+	/* TODO: Allocation must be outside of scratch region */
 	start = memblock_phys_alloc(size, align);
 	if (!start)
 		return -ENOMEM;

From patchwork Thu Mar 20 01:55:51 2025
Date: Wed, 19 Mar 2025 18:55:51 -0700
In-Reply-To: <20250320015551.2157511-1-changyuanl@google.com>
MIME-Version: 1.0
References: <20250320015551.2157511-1-changyuanl@google.com>
Message-ID: <20250320015551.2157511-17-changyuanl@google.com>
Subject: [PATCH v5 16/16] Documentation: add documentation for KHO
From: Changyuan Lyu
To: linux-kernel@vger.kernel.org
From: Alexander Graf

With KHO in place, let's add documentation that describes what it is
and how to use it.

Signed-off-by: Alexander Graf
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport (Microsoft)
Co-developed-by: Changyuan Lyu
Signed-off-by: Changyuan Lyu
---
 .../admin-guide/kernel-parameters.txt |  25 ++++
 Documentation/kho/concepts.rst        |  70 +++++++++++
 Documentation/kho/fdt.rst             |  62 +++++++++
 Documentation/kho/index.rst           |  14 +++
 Documentation/kho/usage.rst           | 118 ++++++++++++++++++
 Documentation/subsystem-apis.rst      |   1 +
 MAINTAINERS                           |   1 +
 7 files changed, 291 insertions(+)
 create mode 100644 Documentation/kho/concepts.rst
 create mode 100644 Documentation/kho/fdt.rst
 create mode 100644 Documentation/kho/index.rst
 create mode 100644 Documentation/kho/usage.rst

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index fb8752b42ec8..d715c6d9dbb3 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2698,6 +2698,31 @@
 	kgdbwait	[KGDB,EARLY] Stop kernel execution and enter the
 			kernel debugger at the earliest opportunity.

+	kho=		[KEXEC,EARLY]
+			Format: { "0" | "1" | "off" | "on" | "y" | "n" }
+			Enables or disables Kexec HandOver.
+			"0" | "off" | "n" - kexec handover is disabled
+			"1" | "on" | "y"  - kexec handover is enabled
+
+	kho_scratch=	[KEXEC,EARLY]
+			Format: ll[KMG],mm[KMG],nn[KMG] | nn%
+			Defines the size of the KHO scratch region.
+			The KHO scratch regions are physically contiguous
+			memory ranges that can only be used for non-kernel
+			allocations. That way, even when memory is heavily
+			fragmented with handed over memory, the kexeced
+			kernel will always have enough contiguous ranges to
+			bootstrap itself.
+
+			It is possible to specify the exact amount of
+			memory in the form of "ll[KMG],mm[KMG],nn[KMG]",
+			where the first parameter defines the size of a low
+			memory scratch area, the second parameter defines
+			the size of a global scratch area, and the third
+			parameter defines the size of additional per-node
+			scratch areas. The form "nn%" defines a scale
+			factor (in percent) of the memory that was used
+			during boot.
+
 	kmac=		[MIPS] Korina ethernet MAC address.
 			Configure the RouterBoard 532 series on-chip
 			Ethernet adapter MAC address.
diff --git a/Documentation/kho/concepts.rst b/Documentation/kho/concepts.rst
new file mode 100644
index 000000000000..174e23404ebc
--- /dev/null
+++ b/Documentation/kho/concepts.rst
@@ -0,0 +1,70 @@
+.. SPDX-License-Identifier: GPL-2.0-or-later
+.. _concepts:
+
+=======================
+Kexec Handover Concepts
+=======================
+
+Kexec HandOver (KHO) is a mechanism that allows Linux to preserve state -
+arbitrary properties as well as memory locations - across kexec.
+
+It introduces multiple concepts:
+
+KHO State tree
+==============
+
+Every KHO kexec carries a state tree, in the format of a flattened device
+tree (FDT), that describes the state of the system. Device drivers can
+register with KHO to serialize their state before kexec. After KHO, device
+drivers can read the FDT and extract their previous state.
+
+KHO only uses the FDT container format and the libfdt library, but does
+not adhere to the same property semantics that normal device trees do:
+properties are passed in native endianness, and standardized properties
+like ``regs`` and ``ranges`` do not exist, hence there are no
+``#...-cells`` properties.
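[Editor's note: the native-endianness point in the concepts text above can be illustrated with a small sketch. This is an editorial Python illustration, not part of the patch: a classic FDT cell is always stored big-endian, while a KHO property is written in the byte order of the CPU that produced it (little-endian on x86-64 and arm64).]

```python
import struct

value = 0x12345678

# Classic device tree: 32-bit cells are always stored big-endian.
fdt_cell = struct.pack(">I", value)

# KHO property: native byte order; on a little-endian machine the
# bytes come out reversed relative to a standard FDT cell.
kho_prop_le = struct.pack("<I", value)

print(fdt_cell.hex())     # 12345678
print(kho_prop_le.hex())  # 78563412
```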
+
+Scratch Regions
+===============
+
+To boot into kexec, we need to have a physically contiguous memory range
+that contains no handed over memory. Kexec then places the target kernel
+and initrd into that region. The new kernel exclusively uses this region
+for memory allocations during boot, up to the initialization of the page
+allocator.
+
+We guarantee that we always have such regions through the scratch regions:
+on first boot, KHO allocates several physically contiguous memory regions.
+Since after kexec these regions will be used by early memory allocations,
+there is a scratch region per NUMA node plus a scratch region to satisfy
+allocation requests that do not require a particular NUMA node assignment.
+By default, the size of the scratch regions is calculated based on the
+amount of memory allocated during boot. The ``kho_scratch`` kernel command
+line option may be used to explicitly define the size of the scratch
+regions.
+The scratch regions are declared as CMA when the page allocator is
+initialized so that their memory can be used during the system's lifetime.
+CMA gives us the guarantee that no handover pages land in those regions,
+because handover pages must be at a static physical memory location and
+CMA enforces that only movable pages can be located inside.
+
+After a KHO kexec, we ignore the ``kho_scratch`` kernel command line
+option and instead reuse the exact same regions that were originally
+allocated. This allows us to recursively execute any number of KHO kexecs.
+Because we used this region for boot memory allocations and as target
+memory for kexec blobs, some parts of that memory region may be reserved.
+These reservations are irrelevant for the next KHO, because kexec can
+overwrite even the original kernel.
+
+.. _finalization_phase:
+
+KHO finalization phase
+======================
+
+To enable a user space based kexec file loader, the kernel needs to be
+able to provide the FDT that describes the previous kernel's state before
+performing the actual kexec. The process of generating that FDT is called
+serialization. When the FDT is generated, some properties of the system
+may become immutable because they are already written down in the FDT.
+That state is called the KHO finalization phase.
+
+With the in-kernel kexec file loader, i.e., using the syscall
+``kexec_file_load``, the KHO FDT is not created until the actual kexec.
+Thus the finalization phase is much shorter. User space can optionally
+choose to generate the FDT early using the debugfs interface.
diff --git a/Documentation/kho/fdt.rst b/Documentation/kho/fdt.rst
new file mode 100644
index 000000000000..70b508533b77
--- /dev/null
+++ b/Documentation/kho/fdt.rst
@@ -0,0 +1,62 @@
+.. SPDX-License-Identifier: GPL-2.0-or-later
+
+=======
+KHO FDT
+=======
+
+KHO uses the flattened device tree (FDT) container format and the libfdt
+library to create and parse the data that is passed between the kernels.
+The properties in the KHO FDT are stored in native format and can include
+any data KHO users need to preserve. Parsing of FDT subnodes is the
+responsibility of KHO users, except for nodes and properties defined by
+KHO itself.
+
+KHO nodes and properties
+========================
+
+Node ``preserved-memory``
+-------------------------
+
+KHO saves a special node named ``preserved-memory`` under the root node.
+This node contains the metadata for KHO to preserve pages across kexec.
+
+Property ``compatible``
+-----------------------
+
+The ``compatible`` property determines compatibility between the kernel
+that created the KHO FDT and the kernel that attempts to load it.
+If the kernel that loads the KHO FDT is not compatible with it, the
+entire KHO process will be bypassed.
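[Editor's note: the compatibility gate described in the ``compatible`` section above amounts to a version-string comparison. The following is a hedged editorial sketch of those semantics in Python, not the kernel's actual implementation; the real check is performed by the KHO code against the FDT's ``compatible`` property.]

```python
SUPPORTED_COMPATIBLE = "kho-v1"  # version a hypothetical kernel understands

def kho_fdt_usable(root_compatible: str) -> bool:
    """Return True if the incoming KHO FDT may be consumed.

    On a mismatch the entire KHO process is bypassed and the kernel
    boots as if no handover data were present.
    """
    return root_compatible == SUPPORTED_COMPATIBLE

print(kho_fdt_usable("kho-v1"))  # compatible: KHO data is consumed
print(kho_fdt_usable("kho-v2"))  # incompatible: KHO is bypassed
```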
+
+Examples
+========
+
+The following example demonstrates a KHO FDT that preserves two memory
+regions created with the ``reserve_mem`` kernel command line parameter::
+
+  /dts-v1/;
+
+  / {
+	compatible = "kho-v1";
+
+	memblock {
+		compatible = "memblock-v1";
+
+		region1 {
+			compatible = "reserve-mem-v1";
+			start = <0xc07a 0x4000000>;
+			size = <0x01 0x00>;
+		};
+
+		region2 {
+			compatible = "reserve-mem-v1";
+			start = <0xc07b 0x4000000>;
+			size = <0x8000 0x00>;
+		};
+	};
+
+	preserved-memory {
+		metadata = <0x00 0x00>;
+	};
+  };
diff --git a/Documentation/kho/index.rst b/Documentation/kho/index.rst
new file mode 100644
index 000000000000..d108c3f8d15c
--- /dev/null
+++ b/Documentation/kho/index.rst
@@ -0,0 +1,14 @@
+.. SPDX-License-Identifier: GPL-2.0-or-later
+
+========================
+Kexec Handover Subsystem
+========================
+
+.. toctree::
+   :maxdepth: 1
+
+   concepts
+   usage
+   fdt
+
+.. only:: subproject and html
diff --git a/Documentation/kho/usage.rst b/Documentation/kho/usage.rst
new file mode 100644
index 000000000000..b45dc58e8d3f
--- /dev/null
+++ b/Documentation/kho/usage.rst
@@ -0,0 +1,118 @@
+.. SPDX-License-Identifier: GPL-2.0-or-later
+
+====================
+Kexec Handover Usage
+====================
+
+Kexec HandOver (KHO) is a mechanism that allows Linux to preserve state -
+arbitrary properties as well as memory locations - across kexec.
+
+This document expects that you are familiar with the base KHO
+:ref:`concepts <concepts>`. If you have not read them yet, please do so
+now.
+
+Prerequisites
+=============
+
+KHO is available when the ``CONFIG_KEXEC_HANDOVER`` config option is set
+to y at compile time. Every KHO producer may have its own config option
+that you need to enable if you would like to preserve its respective
+state across kexec.
+
+To use KHO, please boot the kernel with the ``kho=on`` command line
+parameter. You may use the ``kho_scratch`` parameter to define the size
+of the scratch regions.
+For example, ``kho_scratch=16M,512M,256M`` will reserve a 16 MiB low
+memory scratch area, a 512 MiB global scratch region, and 256 MiB
+per-NUMA-node scratch regions on boot.
+
+Perform a KHO kexec
+===================
+
+First, before you perform a KHO kexec, you can optionally move the
+system into the :ref:`KHO finalization phase <finalization_phase>` ::
+
+  $ echo 1 > /sys/kernel/debug/kho/out/finalize
+
+After this command, the KHO FDT is available in
+``/sys/kernel/debug/kho/out/fdt``.
+
+Next, load the target payload and kexec into it. It is important that
+you use the ``-s`` parameter to use the in-kernel kexec file loader, as
+user space kexec tooling currently has no support for KHO with the user
+space based file loader ::
+
+  # kexec -l Image --initrd=initrd -s
+  # kexec -e
+
+If you skipped finalization in the first step, ``kexec -e`` triggers
+FDT finalization automatically. The new kernel will boot up and contain
+some of the previous kernel's state.
+
+For example, if you used the ``reserve_mem`` command line parameter to
+create an early memory reservation, the new kernel will have that memory
+at the same physical address as the old kernel.
+
+Unfreeze KHO FDT data
+=====================
+
+You can move the system out of the KHO finalization phase by calling ::
+
+  $ echo 0 > /sys/kernel/debug/kho/out/finalize
+
+After this command, the KHO FDT is no longer available in
+``/sys/kernel/debug/kho/out/fdt``, and the state kept in KHO can be
+modified by other kernel subsystems again.
+
+debugfs Interfaces
+==================
+
+Currently KHO creates the following debugfs interfaces. Notice that
+these interfaces may change in the future. They will be moved to sysfs
+once KHO is stabilized.
+
+``/sys/kernel/debug/kho/out/finalize``
+    Kexec HandOver (KHO) allows Linux to transition the state of
+    compatible drivers into the next kexec'ed kernel. To do so,
+    device drivers will serialize their current state into an FDT.
+    While the state is serialized, drivers are unable to perform
+    any modifications to state that was serialized, such as
+    handed over memory allocations.
+
+    When this file contains "1", the system is in the transition
+    state. When it contains "0", it is not. To switch between the
+    two states, echo the respective number into this file.
+
+``/sys/kernel/debug/kho/out/fdt_max``
+    KHO needs to allocate a buffer for the FDT that gets
+    generated before it knows the final size. By default, it
+    will allocate 10 MiB for it. You can write to this file
+    to modify the size of that allocation.
+
+``/sys/kernel/debug/kho/out/fdt``
+    When the KHO state tree is finalized, the kernel exposes the
+    flattened device tree blob that carries its current KHO
+    state in this file. Kexec user space tooling can use this
+    as the input file for the KHO payload image.
+
+``/sys/kernel/debug/kho/out/scratch_len``
+    To support continuous KHO kexecs, we need to reserve
+    physically contiguous memory regions that will always stay
+    available for future kexec allocations. This file describes
+    the length of these memory regions. Kexec user space tooling
+    can use this to determine where it should place its payload
+    images.
+
+``/sys/kernel/debug/kho/out/scratch_phys``
+    To support continuous KHO kexecs, we need to reserve
+    physically contiguous memory regions that will always stay
+    available for future kexec allocations. This file describes
+    the physical location of these memory regions. Kexec user
+    space tooling can use this to determine where it should
+    place its payload images.
+
+``/sys/kernel/debug/kho/in/fdt``
+    When the kernel was booted with Kexec HandOver (KHO), the
+    state tree that carries metadata about the previous kernel's
+    state is in this file, in the format of a flattened device
+    tree. This file may disappear when all consumers of it have
+    finished interpreting their metadata.
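[Editor's note: the ``ll[KMG]`` size notation used by ``kho_scratch`` in the usage document above (e.g. ``kho_scratch=16M,512M,256M``) follows the kernel's usual size-suffix arithmetic. The following is an illustrative Python sketch of that arithmetic only; the kernel itself parses these strings with its own ``memparse`` helper.]

```python
def parse_size(text: str) -> int:
    """Expand a '16M'-style size string into bytes (K/M/G suffixes)."""
    units = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}
    suffix = text[-1].upper()
    if suffix in units:
        return int(text[:-1]) * units[suffix]
    return int(text)

# kho_scratch=16M,512M,256M -> low, global, and per-node scratch sizes
low, glob, per_node = (parse_size(s) for s in "16M,512M,256M".split(","))
print(low, glob, per_node)  # 16777216 536870912 268435456
```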
diff --git a/Documentation/subsystem-apis.rst b/Documentation/subsystem-apis.rst
index b52ad5b969d4..5fc69d6ff9f0 100644
--- a/Documentation/subsystem-apis.rst
+++ b/Documentation/subsystem-apis.rst
@@ -90,3 +90,4 @@ Other subsystems
    peci/index
    wmi/index
    tee/index
+   kho/index
diff --git a/MAINTAINERS b/MAINTAINERS
index a000a277ccf7..d0df0b380e34 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -12828,6 +12828,7 @@ F:	include/linux/kernfs.h
 KEXEC
 L:	kexec@lists.infradead.org
 W:	http://kernel.org/pub/linux/utils/kernel/kexec/
+F:	Documentation/kho/
 F:	include/linux/kexec*.h
 F:	include/uapi/linux/kexec.h
 F:	kernel/kexec*