From patchwork Fri Apr 11 05:37:36 2025
X-Patchwork-Submitter: Changyuan Lyu
X-Patchwork-Id: 14047565
Date: Thu, 10 Apr 2025 22:37:36 -0700
In-Reply-To: <20250411053745.1817356-1-changyuanl@google.com>
Mime-Version: 1.0
References: <20250411053745.1817356-1-changyuanl@google.com>
X-Mailer: git-send-email 2.49.0.604.gff1f9ca942-goog
Message-ID: <20250411053745.1817356-6-changyuanl@google.com>
Subject: [PATCH v6 05/14] kexec: add KHO parsing support
From: Changyuan Lyu
To: linux-kernel@vger.kernel.org
Cc: akpm@linux-foundation.org, anthony.yznaga@oracle.com, arnd@arndb.de,
    ashish.kalra@amd.com, benh@kernel.crashing.org, bp@alien8.de,
    catalin.marinas@arm.com, corbet@lwn.net, dave.hansen@linux.intel.com,
    devicetree@vger.kernel.org, dwmw2@infradead.org, ebiederm@xmission.com,
    graf@amazon.com, hpa@zytor.com, jgowans@amazon.com,
    kexec@lists.infradead.org, krzk@kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, luto@kernel.org, mark.rutland@arm.com,
    mingo@redhat.com, pasha.tatashin@soleen.com, pbonzini@redhat.com,
    peterz@infradead.org, ptyadav@amazon.de, robh@kernel.org,
    rostedt@goodmis.org, rppt@kernel.org, saravanak@google.com,
    skinsburskii@linux.microsoft.com, tglx@linutronix.de,
    thomas.lendacky@amd.com, will@kernel.org, x86@kernel.org, Changyuan Lyu
From: Alexander Graf

When we have a KHO kexec, we get an FDT blob and scratch region to
populate the state of the system. Provide helper functions that allow
architecture code to easily handle memory reservations based on them
and give device drivers visibility into the KHO FDT and memory
reservations so they can recover their own state.
Signed-off-by: Alexander Graf
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport (Microsoft)
Co-developed-by: Changyuan Lyu
Signed-off-by: Changyuan Lyu
---
 include/linux/kexec_handover.h |  14 ++
 kernel/kexec_handover.c        | 230 ++++++++++++++++++++++++++++++++-
 mm/memblock.c                  |   1 +
 3 files changed, 244 insertions(+), 1 deletion(-)

diff --git a/include/linux/kexec_handover.h b/include/linux/kexec_handover.h
index 593ea842a5b61..2b77d4b9fbc3c 100644
--- a/include/linux/kexec_handover.h
+++ b/include/linux/kexec_handover.h
@@ -23,11 +23,15 @@ struct kho_serialization;
 bool kho_is_enabled(void);
 
 int kho_add_subtree(struct kho_serialization *ser, const char *name, void *fdt);
+int kho_retrieve_subtree(const char *name, phys_addr_t *phys);
 
 int register_kho_notifier(struct notifier_block *nb);
 int unregister_kho_notifier(struct notifier_block *nb);
 
 void kho_memory_init(void);
+
+void kho_populate(phys_addr_t fdt_phys, u64 fdt_len, phys_addr_t scratch_phys,
+		  u64 scratch_len);
 #else
 static inline bool kho_is_enabled(void)
 {
@@ -40,6 +44,11 @@ static inline int kho_add_subtree(struct kho_serialization *ser,
 	return -EOPNOTSUPP;
 }
 
+static inline int kho_retrieve_subtree(const char *name, phys_addr_t *phys)
+{
+	return -EOPNOTSUPP;
+}
+
 static inline int register_kho_notifier(struct notifier_block *nb)
 {
 	return -EOPNOTSUPP;
@@ -53,6 +62,11 @@ static inline int unregister_kho_notifier(struct notifier_block *nb)
 static inline void kho_memory_init(void)
 {
 }
+
+static inline void kho_populate(phys_addr_t fdt_phys, u64 fdt_len,
+				phys_addr_t scratch_phys, u64 scratch_len)
+{
+}
 #endif /* CONFIG_KEXEC_HANDOVER */
 
 #endif /* LINUX_KEXEC_HANDOVER_H */
diff --git a/kernel/kexec_handover.c b/kernel/kexec_handover.c
index e541d3d5003d1..a1e1cd0330143 100644
--- a/kernel/kexec_handover.c
+++ b/kernel/kexec_handover.c
@@ -501,9 +501,112 @@ static __init int kho_out_debugfs_init(void)
 	return -ENOENT;
 }
 
+struct kho_in {
+	struct dentry *dir;
+	phys_addr_t fdt_phys;
+	phys_addr_t scratch_phys;
+	struct list_head fdt_list;
+};
+
+static struct kho_in kho_in = {
+	.fdt_list = LIST_HEAD_INIT(kho_in.fdt_list),
+};
+
+static const void *kho_get_fdt(void)
+{
+	return kho_in.fdt_phys ? phys_to_virt(kho_in.fdt_phys) : NULL;
+}
+
+/**
+ * kho_retrieve_subtree - retrieve a preserved sub FDT by its name.
+ * @name: the name of the sub FDT passed to kho_add_subtree().
+ * @phys: if found, the physical address of the sub FDT is stored in @phys.
+ *
+ * Retrieve a preserved sub FDT named @name and store its physical
+ * address in @phys.
+ *
+ * Return: 0 on success, error code on failure
+ */
+int kho_retrieve_subtree(const char *name, phys_addr_t *phys)
+{
+	const void *fdt = kho_get_fdt();
+	const u64 *val;
+	int offset, len;
+
+	if (!fdt)
+		return -ENOENT;
+
+	if (!phys)
+		return -EINVAL;
+
+	offset = fdt_subnode_offset(fdt, 0, name);
+	if (offset < 0)
+		return -ENOENT;
+
+	val = fdt_getprop(fdt, offset, PROP_SUB_FDT, &len);
+	if (!val || len != sizeof(*val))
+		return -EINVAL;
+
+	*phys = (phys_addr_t)*val;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(kho_retrieve_subtree);
+
+/* Handling for debugfs/kho/in */
+
+static __init int kho_in_debugfs_init(const void *fdt)
+{
+	struct dentry *sub_fdt_dir;
+	int err, child;
+
+	kho_in.dir = debugfs_create_dir("in", debugfs_root);
+	if (IS_ERR(kho_in.dir))
+		return PTR_ERR(kho_in.dir);
+
+	sub_fdt_dir = debugfs_create_dir("sub_fdts", kho_in.dir);
+	if (IS_ERR(sub_fdt_dir)) {
+		err = PTR_ERR(sub_fdt_dir);
+		goto err_rmdir;
+	}
+
+	err = kho_debugfs_fdt_add(&kho_in.fdt_list, kho_in.dir, "fdt", fdt);
+	if (err)
+		goto err_rmdir;
+
+	fdt_for_each_subnode(child, fdt, 0) {
+		int len = 0;
+		const char *name = fdt_get_name(fdt, child, NULL);
+		const u64 *fdt_phys;
+
+		fdt_phys = fdt_getprop(fdt, child, "fdt", &len);
+		if (!fdt_phys)
+			continue;
+		if (len != sizeof(*fdt_phys)) {
+			pr_warn("node `%s`'s prop `fdt` has invalid length: %d\n",
+				name, len);
+			continue;
+		}
+		err = kho_debugfs_fdt_add(&kho_in.fdt_list, sub_fdt_dir, name,
+					  phys_to_virt(*fdt_phys));
+		if (err) {
+			pr_warn("failed to add fdt `%s` to debugfs: %d\n", name,
+				err);
+			continue;
+		}
+	}
+
+	return 0;
+
+err_rmdir:
+	debugfs_remove_recursive(kho_in.dir);
+	return err;
+}
+
 static __init int kho_init(void)
 {
 	int err = 0;
+	const void *fdt = kho_get_fdt();
 
 	if (!kho_enable)
 		return 0;
@@ -524,6 +627,20 @@ static __init int kho_init(void)
 	if (err)
 		goto err_free_fdt;
 
+	if (fdt) {
+		err = kho_in_debugfs_init(fdt);
+		/*
+		 * Failure to create /sys/kernel/debug/kho/in does not prevent
+		 * reviving state from KHO and setting up KHO for the next
+		 * kexec.
+		 */
+		if (err)
+			pr_err("failed exposing handover FDT in debugfs: %d\n",
+			       err);
+
+		return 0;
+	}
+
 	for (int i = 0; i < kho_scratch_cnt; i++) {
 		unsigned long base_pfn = PHYS_PFN(kho_scratch[i].addr);
 		unsigned long count = kho_scratch[i].size >> PAGE_SHIFT;
@@ -551,7 +668,118 @@ static __init int kho_init(void)
 }
 late_initcall(kho_init);
 
+static void __init kho_release_scratch(void)
+{
+	phys_addr_t start, end;
+	u64 i;
+
+	memmap_init_kho_scratch_pages();
+
+	/*
+	 * Mark scratch mem as CMA before we return it. That way we
+	 * ensure that no kernel allocations happen on it. That means
+	 * we can reuse it as scratch memory again later.
+	 */
+	__for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
+			     MEMBLOCK_KHO_SCRATCH, &start, &end, NULL) {
+		ulong start_pfn = pageblock_start_pfn(PFN_DOWN(start));
+		ulong end_pfn = pageblock_align(PFN_UP(end));
+		ulong pfn;
+
+		for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages)
+			set_pageblock_migratetype(pfn_to_page(pfn),
+						  MIGRATE_CMA);
+	}
+}
+
 void __init kho_memory_init(void)
 {
-	kho_reserve_scratch();
+	if (kho_in.scratch_phys) {
+		kho_scratch = phys_to_virt(kho_in.scratch_phys);
+		kho_release_scratch();
+	} else {
+		kho_reserve_scratch();
+	}
+}
+
+void __init kho_populate(phys_addr_t fdt_phys, u64 fdt_len,
+			 phys_addr_t scratch_phys, u64 scratch_len)
+{
+	void *fdt = NULL;
+	struct kho_scratch *scratch = NULL;
+	int err = 0;
+	unsigned int scratch_cnt = scratch_len / sizeof(*kho_scratch);
+
+	/* Validate the input FDT */
+	fdt = early_memremap(fdt_phys, fdt_len);
+	if (!fdt) {
+		pr_warn("setup: failed to memremap FDT (0x%llx)\n", fdt_phys);
+		err = -EFAULT;
+		goto out;
+	}
+	err = fdt_check_header(fdt);
+	if (err) {
+		pr_warn("setup: handover FDT (0x%llx) is invalid: %d\n",
+			fdt_phys, err);
+		err = -EINVAL;
+		goto out;
+	}
+	err = fdt_node_check_compatible(fdt, 0, KHO_FDT_COMPATIBLE);
+	if (err) {
+		pr_warn("setup: handover FDT (0x%llx) is incompatible with '%s': %d\n",
+			fdt_phys, KHO_FDT_COMPATIBLE, err);
+		err = -EINVAL;
+		goto out;
+	}
+
+	scratch = early_memremap(scratch_phys, scratch_len);
+	if (!scratch) {
+		pr_warn("setup: failed to memremap scratch (phys=0x%llx, len=%lld)\n",
+			scratch_phys, scratch_len);
+		err = -EFAULT;
+		goto out;
+	}
+
+	/*
+	 * We pass safe contiguous blocks of memory to use for early boot
+	 * purposes from the previous kernel so that we can resize the
+	 * memblock array as needed.
+	 */
+	for (int i = 0; i < scratch_cnt; i++) {
+		struct kho_scratch *area = &scratch[i];
+		u64 size = area->size;
+
+		memblock_add(area->addr, size);
+		err = memblock_mark_kho_scratch(area->addr, size);
+		if (WARN_ON(err)) {
+			pr_warn("failed to mark the scratch region 0x%pa+0x%pa: %d",
+				&area->addr, &size, err);
+			goto out;
+		}
+		pr_debug("Marked 0x%pa+0x%pa as scratch", &area->addr, &size);
+	}
+
+	memblock_reserve(scratch_phys, scratch_len);
+
+	/*
+	 * Now that we have a viable region of scratch memory, let's tell
+	 * the memblock allocator to only use that for any allocations.
+	 * That way we ensure that nothing scribbles over in-use data while
+	 * we initialize the page tables, which we will need to ingest all
+	 * memory reservations from the previous kernel.
+	 */
+	memblock_set_kho_scratch_only();
+
+	kho_in.fdt_phys = fdt_phys;
+	kho_in.scratch_phys = scratch_phys;
+	kho_scratch_cnt = scratch_cnt;
+	pr_info("found kexec handover data. Will skip init for some devices\n");
+
+out:
+	if (fdt)
+		early_memunmap(fdt, fdt_len);
+	if (scratch)
+		early_memunmap(scratch, scratch_len);
+	if (err)
+		pr_warn("disabling KHO revival: %d\n", err);
 }
diff --git a/mm/memblock.c b/mm/memblock.c
index c2633003ed8ea..456689cb73e20 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -2377,6 +2377,7 @@ void __init memblock_free_all(void)
 	free_unused_memmap();
 	reset_all_zones_managed_pages();
 
+	memblock_clear_kho_scratch_only();
 	pages = free_low_memory_core_early();
 	totalram_pages_add(pages);
 }