From patchwork Fri Oct 25 15:11:29 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13850940
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-s390@vger.kernel.org, virtualization@lists.linux.dev, kvm@vger.kernel.org, linux-fsdevel@vger.kernel.org, kexec@lists.infradead.org, David Hildenbrand, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Christian Borntraeger, Sven Schnelle, "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Eugenio Pérez, Baoquan He, Vivek Goyal, Dave Young, Thomas Huth, Cornelia Huck, Janosch Frank, Claudio Imbrenda, Eric Farman, Andrew Morton
Subject: [PATCH v1 07/11] fs/proc/vmcore: introduce PROC_VMCORE_DEVICE_RAM to detect device RAM ranges in 2nd kernel
Date: Fri, 25 Oct 2024 17:11:29 +0200
Message-ID: <20241025151134.1275575-8-david@redhat.com>
In-Reply-To: <20241025151134.1275575-1-david@redhat.com>
References: <20241025151134.1275575-1-david@redhat.com>
s390 allocates+prepares the elfcore hdr in the dump (2nd) kernel, not in
the crashed kernel. RAM provided by memory devices such as virtio-mem
can only be detected using the device driver; when vmcore_init() is
called, these device drivers are usually not loaded yet, or the devices
did not get probed yet.
Consequently, on s390 these RAM ranges will not be included in the crash
dump, leaving the dump partially corrupt, which is unfortunate.

Instead of deferring the vmcore_init() call to an (unclear?) later
point, let's reuse the vmcore_cb infrastructure to obtain device RAM
ranges as the device drivers probe the device and get access to this
information. Then, we'll add these ranges to the vmcore, adding more
PT_LOAD entries and updating the offsets+vmcore size.

Use Kconfig tricks to include this code automatically only if (a) there
is a device driver compiled that implements the callback
(PROVIDE_PROC_VMCORE_DEVICE_RAM), and (b) the architecture actually
needs this information (NEED_PROC_VMCORE_DEVICE_RAM).

The current target use case is s390, which only creates an elf64
elfcore, so focusing on elf64 is sufficient.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 fs/proc/Kconfig            |  25 ++++++
 fs/proc/vmcore.c           | 156 +++++++++++++++++++++++++++++++++++++
 include/linux/crash_dump.h |   9 +++
 3 files changed, 190 insertions(+)

diff --git a/fs/proc/Kconfig b/fs/proc/Kconfig
index d80a1431ef7b..1e11de5f9380 100644
--- a/fs/proc/Kconfig
+++ b/fs/proc/Kconfig
@@ -61,6 +61,31 @@ config PROC_VMCORE_DEVICE_DUMP
	  as ELF notes to /proc/vmcore. You can still disable device
	  dump using the kernel command line option 'novmcoredd'.

+config PROVIDE_PROC_VMCORE_DEVICE_RAM
+	def_bool n
+
+config NEED_PROC_VMCORE_DEVICE_RAM
+	def_bool n
+
+config PROC_VMCORE_DEVICE_RAM
+	def_bool y
+	depends on PROC_VMCORE
+	depends on NEED_PROC_VMCORE_DEVICE_RAM
+	depends on PROVIDE_PROC_VMCORE_DEVICE_RAM
+	help
+	  If the elfcore hdr is allocated and prepared by the dump kernel
+	  ("2nd kernel") instead of the crashed kernel, RAM provided by memory
+	  devices such as virtio-mem will not be included in the dump
+	  image, because only the device driver can properly detect them.
+
+	  With this config enabled, these RAM ranges will be queried from the
+	  device drivers once the device gets probed, so they can be included
+	  in the crash dump.
+
+	  Relevant architectures should select NEED_PROC_VMCORE_DEVICE_RAM
+	  and relevant device drivers should select
+	  PROVIDE_PROC_VMCORE_DEVICE_RAM.
+
 config PROC_SYSCTL
 	bool "Sysctl support (/proc/sys)" if EXPERT
 	depends on PROC_FS
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index 3e90416ee54e..c332a9a4920b 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -69,6 +69,8 @@ static LIST_HEAD(vmcore_cb_list);
 /* Whether the vmcore has been opened once. */
 static bool vmcore_opened;

+static void vmcore_process_device_ram(struct vmcore_cb *cb);
+
 void register_vmcore_cb(struct vmcore_cb *cb)
 {
 	INIT_LIST_HEAD(&cb->next);
@@ -80,6 +82,8 @@ void register_vmcore_cb(struct vmcore_cb *cb)
 	 */
 	if (vmcore_opened)
 		pr_warn_once("Unexpected vmcore callback registration\n");
+	else if (cb->get_device_ram)
+		vmcore_process_device_ram(cb);
 	mutex_unlock(&vmcore_mutex);
 }
 EXPORT_SYMBOL_GPL(register_vmcore_cb);
@@ -1511,6 +1515,158 @@ int vmcore_add_device_dump(struct vmcoredd_data *data)
 EXPORT_SYMBOL(vmcore_add_device_dump);
 #endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */

+#ifdef CONFIG_PROC_VMCORE_DEVICE_RAM
+static int vmcore_realloc_elfcore_buffer_elf64(size_t new_size)
+{
+	char *elfcorebuf_new;
+
+	if (WARN_ON_ONCE(new_size < elfcorebuf_sz))
+		return -EINVAL;
+	if (get_order(elfcorebuf_sz_orig) == get_order(new_size)) {
+		elfcorebuf_sz_orig = new_size;
+		return 0;
+	}
+
+	elfcorebuf_new = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+						  get_order(new_size));
+	if (!elfcorebuf_new)
+		return -ENOMEM;
+	memcpy(elfcorebuf_new, elfcorebuf, elfcorebuf_sz);
+	free_pages((unsigned long)elfcorebuf, get_order(elfcorebuf_sz_orig));
+	elfcorebuf = elfcorebuf_new;
+	elfcorebuf_sz_orig = new_size;
+	return 0;
+}
+
+static void vmcore_reset_offsets_elf64(void)
+{
+	Elf64_Phdr *phdr_start = (Elf64_Phdr *)(elfcorebuf + sizeof(Elf64_Ehdr));
+	loff_t vmcore_off = elfcorebuf_sz + elfnotes_sz;
+	Elf64_Ehdr *ehdr = (Elf64_Ehdr *)elfcorebuf;
+	Elf64_Phdr *phdr;
+	int i;
+
+	for (i = 0, phdr = phdr_start; i < ehdr->e_phnum; i++, phdr++) {
+		u64 start, end;
+
+		/*
+		 * After merge_note_headers_elf64() we should only have a single
+		 * PT_NOTE entry that starts immediately after elfcorebuf_sz.
+		 */
+		if (phdr->p_type == PT_NOTE) {
+			phdr->p_offset = elfcorebuf_sz;
+			continue;
+		}
+
+		start = rounddown(phdr->p_offset, PAGE_SIZE);
+		end = roundup(phdr->p_offset + phdr->p_memsz, PAGE_SIZE);
+		phdr->p_offset = vmcore_off + (phdr->p_offset - start);
+		vmcore_off = vmcore_off + end - start;
+	}
+	set_vmcore_list_offsets(elfcorebuf_sz, elfnotes_sz, &vmcore_list);
+}
+
+static int vmcore_add_device_ram_elf64(struct list_head *list, size_t count)
+{
+	Elf64_Phdr *phdr_start = (Elf64_Phdr *)(elfcorebuf + sizeof(Elf64_Ehdr));
+	Elf64_Ehdr *ehdr = (Elf64_Ehdr *)elfcorebuf;
+	struct vmcore_mem_node *cur;
+	Elf64_Phdr *phdr;
+	size_t new_size;
+	int rc;
+
+	if ((Elf32_Half)(ehdr->e_phnum + count) != ehdr->e_phnum + count) {
+		pr_err("Kdump: too many device ram ranges\n");
+		return -ENOSPC;
+	}
+
+	/* elfcorebuf_sz must always cover full pages. */
+	new_size = sizeof(Elf64_Ehdr) +
+		   (ehdr->e_phnum + count) * sizeof(Elf64_Phdr);
+	new_size = roundup(new_size, PAGE_SIZE);
+
+	/*
+	 * Make sure we have sufficient space to include the new PT_LOAD
+	 * entries.
+	 */
+	rc = vmcore_realloc_elfcore_buffer_elf64(new_size);
+	if (rc) {
+		pr_err("Kdump: resizing elfcore failed\n");
+		return rc;
+	}
+
+	/* Modify our used elfcore buffer size to cover the new entries. */
+	elfcorebuf_sz = new_size;
+
+	/* Fill the added PT_LOAD entries. */
+	phdr = phdr_start + ehdr->e_phnum;
+	list_for_each_entry(cur, list, list) {
+		WARN_ON_ONCE(!IS_ALIGNED(cur->paddr | cur->size, PAGE_SIZE));
+		elfcorehdr_fill_device_ram_ptload_elf64(phdr, cur->paddr, cur->size);
+
+		/* p_offset will be adjusted later. */
+		phdr++;
+		ehdr->e_phnum++;
+	}
+	list_splice_tail(list, &vmcore_list);
+
+	/* We changed elfcorebuf_sz and added new entries; reset all offsets. */
+	vmcore_reset_offsets_elf64();
+
+	/* Finally, recalculate the total vmcore size. */
+	vmcore_size = get_vmcore_size(elfcorebuf_sz, elfnotes_sz,
+				      &vmcore_list);
+	proc_vmcore->size = vmcore_size;
+	return 0;
+}
+
+static void vmcore_process_device_ram(struct vmcore_cb *cb)
+{
+	unsigned char *e_ident = (unsigned char *)elfcorebuf;
+	struct vmcore_mem_node *first, *m;
+	LIST_HEAD(list);
+	int count;
+
+	if (cb->get_device_ram(cb, &list)) {
+		pr_err("Kdump: obtaining device ram ranges failed\n");
+		return;
+	}
+	count = list_count_nodes(&list);
+	if (!count)
+		return;
+
+	/* We only support Elf64 dumps for now. */
+	if (WARN_ON_ONCE(e_ident[EI_CLASS] != ELFCLASS64)) {
+		pr_err("Kdump: device ram ranges only support Elf64\n");
+		goto out_free;
+	}
+
+	/*
+	 * These ranges might already be known, for example after unusual
+	 * register->unregister->register sequences; we'll simply sanity
+	 * check using the first range.
+	 */
+	first = list_first_entry(&list, struct vmcore_mem_node, list);
+	list_for_each_entry(m, &vmcore_list, list) {
+		unsigned long long m_end = m->paddr + m->size;
+		unsigned long long first_end = first->paddr + first->size;
+
+		if (first->paddr < m_end && m->paddr < first_end)
+			goto out_free;
+	}
+
+	/* If adding the mem nodes succeeds, they must not be freed. */
+	if (!vmcore_add_device_ram_elf64(&list, count))
+		return;
+out_free:
+	vmcore_free_mem_nodes(&list);
+}
+#else /* !CONFIG_PROC_VMCORE_DEVICE_RAM */
+static void vmcore_process_device_ram(struct vmcore_cb *cb)
+{
+}
+#endif /* CONFIG_PROC_VMCORE_DEVICE_RAM */
+
 /* Free all dumps in vmcore device dump list */
 static void vmcore_free_device_dumps(void)
 {
diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
index 722dbcff7371..8e581a053d7f 100644
--- a/include/linux/crash_dump.h
+++ b/include/linux/crash_dump.h
@@ -20,6 +20,8 @@ extern int elfcorehdr_alloc(unsigned long long *addr, unsigned long long *size);
 extern void elfcorehdr_free(unsigned long long addr);
 extern ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos);
 extern ssize_t elfcorehdr_read_notes(char *buf, size_t count, u64 *ppos);
+void elfcorehdr_fill_device_ram_ptload_elf64(Elf64_Phdr *phdr,
+		unsigned long long paddr, unsigned long long size);
 extern int remap_oldmem_pfn_range(struct vm_area_struct *vma,
 				  unsigned long from, unsigned long pfn,
 				  unsigned long size, pgprot_t prot);
@@ -99,6 +101,12 @@ static inline void vmcore_unusable(void)
 *	indicated in the vmcore instead. For example, a ballooned page
 *	contains no data and reading from such a page will cause high
 *	load in the hypervisor.
+ * @get_device_ram: query RAM ranges that can only be detected by device
+ *	drivers, such as the virtio-mem driver, so they can be included in
+ *	the crash dump on architectures that allocate the elfcore hdr in the
+ *	dump ("2nd") kernel. Indicated RAM ranges may contain holes to reduce
+ *	the total number of ranges; such holes can be detected using the
+ *	pfn_is_ram callback just like for other RAM.
 * @next: List head to manage registered callbacks internally; initialized by
 *	register_vmcore_cb().
* @@ -109,6 +117,7 @@ static inline void vmcore_unusable(void) */ struct vmcore_cb { bool (*pfn_is_ram)(struct vmcore_cb *cb, unsigned long pfn); + int (*get_device_ram)(struct vmcore_cb *cb, struct list_head *list); struct list_head next; }; extern void register_vmcore_cb(struct vmcore_cb *cb);