From patchwork Wed Dec 13 00:04:38 2023
X-Patchwork-Submitter: Alexander Graf
X-Patchwork-Id: 13490136
d="scan'208";a="600238157" Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO email-inbound-relay-pdx-2a-m6i4x-1cca8d67.us-west-2.amazon.com) ([10.43.8.6]) by smtp-border-fw-52002.iad7.amazon.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Dec 2023 00:05:05 +0000 Received: from smtpout.prod.us-west-2.prod.farcaster.email.amazon.dev (pdx2-ws-svc-p26-lb5-vlan3.pdx.amazon.com [10.39.38.70]) by email-inbound-relay-pdx-2a-m6i4x-1cca8d67.us-west-2.amazon.com (Postfix) with ESMTPS id D22C680658; Wed, 13 Dec 2023 00:05:02 +0000 (UTC) Received: from EX19MTAUWB001.ant.amazon.com [10.0.21.151:56515] by smtpin.naws.us-west-2.prod.farcaster.email.amazon.dev [10.0.2.31:2525] with esmtp (Farcaster) id f13a4a39-aa9b-4426-8e9f-ccc718085517; Wed, 13 Dec 2023 00:05:02 +0000 (UTC) X-Farcaster-Flow-ID: f13a4a39-aa9b-4426-8e9f-ccc718085517 Received: from EX19D020UWC004.ant.amazon.com (10.13.138.149) by EX19MTAUWB001.ant.amazon.com (10.250.64.248) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Wed, 13 Dec 2023 00:05:02 +0000 Received: from dev-dsk-graf-1a-5ce218e4.eu-west-1.amazon.com (10.253.83.51) by EX19D020UWC004.ant.amazon.com (10.13.138.149) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.40; Wed, 13 Dec 2023 00:04:58 +0000 From: Alexander Graf To: CC: , , , , , , , Eric Biederman , "H. Peter Anvin" , Andy Lutomirski , Peter Zijlstra , "Rob Herring" , Steven Rostedt , "Andrew Morton" , Mark Rutland , "Tom Lendacky" , Ashish Kalra , James Gowans , Stanislav Kinsburskii , , , , Anthony Yznaga , Usama Arif , David Woodhouse , Benjamin Herrenschmidt Subject: [PATCH 01/15] mm,memblock: Add support for scratch memory Date: Wed, 13 Dec 2023 00:04:38 +0000 Message-ID: <20231213000452.88295-2-graf@amazon.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20231213000452.88295-1-graf@amazon.com> References: <20231213000452.88295-1-graf@amazon.com> MIME-Version: 1.0 X-Originating-IP: [10.253.83.51] X-ClientProxiedBy: EX19D031UWC001.ant.amazon.com (10.13.139.241) To EX19D020UWC004.ant.amazon.com (10.13.138.149) X-Rspamd-Queue-Id: E612D1C002A X-Rspam-User: X-Stat-Signature: zeq76atarhw6eee8ejcrxz71681tjezx X-Rspamd-Server: rspam03 X-HE-Tag: 1702425909-121338 X-HE-Meta: U2FsdGVkX1/Ywp+8nxrrsKTmNgow5maj2zFLsJtvMItyj8QJy0ynqvkmbzBq//wQPff7F2/rkJQCblOLohbvThXKu2D6tuZJvGx7gBnVk+MMkXiZ4xQTiePqV6qMMP9ZEcvD1DEbhmO7T31+hc7fz7IHFVoLC1Mz9KGu4P4SWoPKP/9F8FrAGe/w4ucvioYnC7V6TG1pGBaQbX5BWmJ+1m1/NhZXhaOYBPTjVKS/ln4yx6I/ByizrIlzkmdVk1tBHWjkKwL+jVvSLjUEivLqVBczaIPLGYHTs7RbR/VoiENCfpPO1zfTL40xe35KZTq3CoeziC3Q81V6bL+YS4/lBa+mCy6eLGm/BMrWnQq4t/ZbvLl7q7VMa4oKGp0oEfMnBb+VEfiztsN+XYrLg1mHW87qTaSjaGqzGxoIWWzgY4L+tOqH7fkq06Yh+see3Dyl+8c2jYKSQRPFyia88LPo4h7rAOJU8YwcrRbBXALUXViXvV3d6xnmlOgFF8NyuWZ4WWaAaOFiDGc2RWBDtm8lt0hOvDRFbu5bfXOsFCr1Xz8tSRgYFNBW4Qm7GhE9Yxjk2owXKNDj3nrDzKyKJ+aVBfuOU9wHR7rG8YZHTqlizWPkn11sTTP8YyWe9qt2pqKIHNXSTL6DSTMZT7OclAaPlrL720qOAy/87POF8/I0R/csKMIaj5sRb8MerNKILnilvkjU/KPm7QQoVKeDpnpUcFtCV9ICaX8ATGi4OZ2bKGt6XQaNZQUbkrNXlko6XARW26ugQ0ApjjFAxRFZuzEJePVp9BxrV04b4LRl0QA6b1a2fvs5Yt2Py3g5/qfP/x9hyE6OSf/h6UvYjBnwEm4kuJ+7RRYzm3x3IYWHBjXqn/xIj+4V/exIycSK6fa/IBZfhldc0TnACAUrePnXNjeGcu9u/3H4/6TGTpjxF61tIB4cdxv1FvmjjaDYUEk4uhEboHbtqDblmtgxfPcFgOz 7c+a7RZL 

With KHO (Kexec HandOver), we need a way to ensure that the new kernel
does not allocate memory on top of any memory regions that the previous
kernel is handing over. To know where those regions are, we have to
include them in the reserved memblocks array, which may not be big
enough to hold them all. Resizing the array in turn requires allocating
memory, which leaves us in a catch-22.

The way out is a safe region to operate in: KHO provides a "scratch
region" as part of its metadata. This scratch region is a single,
contiguous memory block that we know contains no KHO allocations. We
can allocate exclusively from it until kernel initialization has
progressed far enough that the kernel knows about all KHO memory
reservations. This patch introduces memblock_set_scratch_only(), which
lets KHO declare that every memblock allocation must come from the
scratch region.

Later, we may want to perform another KHO kexec and reuse the same
scratch region. To ensure that no data destined for handover ever gets
allocated inside the scratch region, we flip its semantics with
memblock_clear_scratch_only(): after that call, no allocations may come
from scratch memblock regions. We will lift that restriction in the
next patch.
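
The intended early-boot flow looks roughly like the sketch below. This
is illustrative only: kho_early_init_sketch() is a hypothetical call
site (the real caller arrives later in this series); only the
memblock_* scratch helpers are added by this patch.

	/* Hypothetical KHO early-boot call site, for illustration only */
	static void __init kho_early_init_sketch(phys_addr_t scratch_base,
						 phys_addr_t scratch_size)
	{
		/* Register the inherited scratch range and mark it as such */
		memblock_add(scratch_base, scratch_size);
		memblock_mark_scratch(scratch_base, scratch_size);

		/* From here on, all memblock allocations come from scratch */
		memblock_set_scratch_only();

		/* ... memblock_reserve() every handed-over range ... */

		/*
		 * All reservations are known; flip the semantics so that
		 * scratch regions are avoided from now on, keeping them
		 * clean for the next handover kexec.
		 */
		memblock_clear_scratch_only();
	}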

Signed-off-by: Alexander Graf <graf@amazon.com>
---
 include/linux/memblock.h | 19 +++++++++++++
 mm/Kconfig               |  4 +++
 mm/memblock.c            | 61 +++++++++++++++++++++++++++++++++++++++-
 3 files changed, 83 insertions(+), 1 deletion(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index ae3bde302f70..14043f5b696f 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -42,6 +42,10 @@ extern unsigned long long max_possible_pfn;
  * kernel resource tree.
  * @MEMBLOCK_RSRV_NOINIT: memory region for which struct pages are
  * not initialized (only for reserved regions).
+ * @MEMBLOCK_SCRATCH: memory region that kexec can pass to the next kernel in
+ * handover mode. During early boot, we do not know about all memory reservations
+ * yet, so we get scratch memory from the previous kernel that we know is good
+ * to use. It is the only memory that allocations may happen from in this phase.
  */
 enum memblock_flags {
 	MEMBLOCK_NONE		= 0x0,	/* No special request */
@@ -50,6 +54,7 @@ enum memblock_flags {
 	MEMBLOCK_NOMAP		= 0x4,	/* don't add to kernel direct mapping */
 	MEMBLOCK_DRIVER_MANAGED = 0x8,	/* always detected via a driver */
 	MEMBLOCK_RSRV_NOINIT	= 0x10,	/* don't initialize struct pages */
+	MEMBLOCK_SCRATCH	= 0x20,	/* scratch memory for kexec handover */
 };
 
 /**
@@ -129,6 +134,8 @@ int memblock_mark_mirror(phys_addr_t base, phys_addr_t size);
 int memblock_mark_nomap(phys_addr_t base, phys_addr_t size);
 int memblock_clear_nomap(phys_addr_t base, phys_addr_t size);
 int memblock_reserved_mark_noinit(phys_addr_t base, phys_addr_t size);
+int memblock_mark_scratch(phys_addr_t base, phys_addr_t size);
+int memblock_clear_scratch(phys_addr_t base, phys_addr_t size);
 
 void memblock_free_all(void);
 void memblock_free(void *ptr, size_t size);
@@ -273,6 +280,11 @@ static inline bool memblock_is_driver_managed(struct memblock_region *m)
 	return m->flags & MEMBLOCK_DRIVER_MANAGED;
 }
 
+static inline bool memblock_is_scratch(struct memblock_region *m)
+{
+	return m->flags & MEMBLOCK_SCRATCH;
+}
+
 int memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn,
 			    unsigned long *end_pfn);
 void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
@@ -610,5 +622,12 @@ static inline void early_memtest(phys_addr_t start, phys_addr_t end) { }
 static inline void memtest_report_meminfo(struct seq_file *m) { }
 #endif
 
+#ifdef CONFIG_MEMBLOCK_SCRATCH
+void memblock_set_scratch_only(void);
+void memblock_clear_scratch_only(void);
+#else
+static inline void memblock_set_scratch_only(void) { }
+static inline void memblock_clear_scratch_only(void) { }
+#endif
 
 #endif /* _LINUX_MEMBLOCK_H */
diff --git a/mm/Kconfig b/mm/Kconfig
index 89971a894b60..36f5e7d95195 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -513,6 +513,10 @@ config ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP
 config HAVE_MEMBLOCK_PHYS_MAP
 	bool
 
+# Enable memblock support for scratch memory which is needed for KHO
+config MEMBLOCK_SCRATCH
+	bool
+
 config HAVE_FAST_GUP
 	depends on MMU
 	bool
diff --git a/mm/memblock.c b/mm/memblock.c
index 5a88d6d24d79..e89e6c8f9d75 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -106,6 +106,13 @@ unsigned long min_low_pfn;
 unsigned long max_pfn;
 unsigned long long max_possible_pfn;
 
+#ifdef CONFIG_MEMBLOCK_SCRATCH
+/* When set to true, only allocate from MEMBLOCK_SCRATCH ranges */
+static bool scratch_only;
+#else
+#define scratch_only false
+#endif
+
 static struct memblock_region memblock_memory_init_regions[INIT_MEMBLOCK_MEMORY_REGIONS] __initdata_memblock;
 static struct memblock_region memblock_reserved_init_regions[INIT_MEMBLOCK_RESERVED_REGIONS] __initdata_memblock;
 #ifdef CONFIG_HAVE_MEMBLOCK_PHYS_MAP
@@ -168,6 +175,10 @@ bool __init_memblock memblock_has_mirror(void)
 
 static enum memblock_flags __init_memblock choose_memblock_flags(void)
 {
+	/* skip non-scratch memory for kho early boot allocations */
+	if (scratch_only)
+		return MEMBLOCK_SCRATCH;
+
 	return system_has_some_mirror ?
 		MEMBLOCK_MIRROR : MEMBLOCK_NONE;
 }
 
@@ -643,7 +654,7 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
 #ifdef CONFIG_NUMA
 			WARN_ON(nid != memblock_get_region_node(rgn));
 #endif
-			WARN_ON(flags != rgn->flags);
+			WARN_ON(flags != (rgn->flags & ~MEMBLOCK_SCRATCH));
 			nr_new++;
 			if (insert) {
 				if (start_rgn == -1)
@@ -890,6 +901,18 @@ int __init_memblock memblock_physmem_add(phys_addr_t base, phys_addr_t size)
 }
 #endif
 
+#ifdef CONFIG_MEMBLOCK_SCRATCH
+__init_memblock void memblock_set_scratch_only(void)
+{
+	scratch_only = true;
+}
+
+__init_memblock void memblock_clear_scratch_only(void)
+{
+	scratch_only = false;
+}
+#endif
+
 /**
  * memblock_setclr_flag - set or clear flag for a memory region
  * @type: memblock type to set/clear flag for
@@ -1015,6 +1038,33 @@ int __init_memblock memblock_reserved_mark_noinit(phys_addr_t base, phys_addr_t
 			    MEMBLOCK_RSRV_NOINIT);
 }
 
+/**
+ * memblock_mark_scratch - Mark a memory region with flag MEMBLOCK_SCRATCH.
+ * @base: the base phys addr of the region
+ * @size: the size of the region
+ *
+ * Only memory regions marked with %MEMBLOCK_SCRATCH will be considered for
+ * allocations during early boot with kexec handover.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+int __init_memblock memblock_mark_scratch(phys_addr_t base, phys_addr_t size)
+{
+	return memblock_setclr_flag(&memblock.memory, base, size, 1, MEMBLOCK_SCRATCH);
+}
+
+/**
+ * memblock_clear_scratch - Clear flag MEMBLOCK_SCRATCH for a specified region.
+ * @base: the base phys addr of the region
+ * @size: the size of the region
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+int __init_memblock memblock_clear_scratch(phys_addr_t base, phys_addr_t size)
+{
+	return memblock_setclr_flag(&memblock.memory, base, size, 0, MEMBLOCK_SCRATCH);
+}
+
 static bool should_skip_region(struct memblock_type *type,
 			       struct memblock_region *m,
 			       int nid, int flags)
@@ -1046,6 +1096,14 @@ static bool should_skip_region(struct memblock_type *type,
 	if (!(flags & MEMBLOCK_DRIVER_MANAGED) && memblock_is_driver_managed(m))
 		return true;
 
+	/* In early alloc during kho, we can only consider scratch allocations */
+	if ((flags & MEMBLOCK_SCRATCH) && !memblock_is_scratch(m))
+		return true;
+
+	/* Leave scratch memory alone after scratch-only phase */
+	if (!(flags & MEMBLOCK_SCRATCH) && memblock_is_scratch(m))
+		return true;
+
 	return false;
 }
 
@@ -2211,6 +2269,7 @@ static const char * const flagname[] = {
 	[ilog2(MEMBLOCK_MIRROR)] = "MIRROR",
 	[ilog2(MEMBLOCK_NOMAP)] = "NOMAP",
 	[ilog2(MEMBLOCK_DRIVER_MANAGED)] = "DRV_MNG",
+	[ilog2(MEMBLOCK_SCRATCH)] = "SCRATCH",
 };
 
 static int memblock_debug_show(struct seq_file *m, void *private)
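
For reviewers, a sketch (not part of the patch) of the allocation
semantics that the scratch-only toggle and the two should_skip_region()
checks above produce. The demo function name is hypothetical;
memblock_phys_alloc() is the existing memblock allocator, which reaches
should_skip_region() via choose_memblock_flags():

	static void __init scratch_only_demo(void)
	{
		phys_addr_t addr;

		memblock_set_scratch_only();
		/* choose_memblock_flags() now returns MEMBLOCK_SCRATCH, so
		 * this can only be served from a MEMBLOCK_SCRATCH range */
		addr = memblock_phys_alloc(SZ_4K, SZ_4K);

		memblock_clear_scratch_only();
		/* default flags again: scratch ranges are now skipped, so
		 * this allocation stays out of them */
		addr = memblock_phys_alloc(SZ_4K, SZ_4K);
	}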