From patchwork Fri Oct 25 15:11:32 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13850943
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-s390@vger.kernel.org, virtualization@lists.linux.dev,
    kvm@vger.kernel.org, linux-fsdevel@vger.kernel.org, kexec@lists.infradead.org,
    David Hildenbrand, Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
    Christian Borntraeger, Sven Schnelle, "Michael S. Tsirkin", Jason Wang,
    Xuan Zhuo, Eugenio Pérez, Baoquan He, Vivek Goyal, Dave Young, Thomas Huth,
    Cornelia Huck, Janosch Frank, Claudio Imbrenda, Eric Farman, Andrew Morton
Subject: [PATCH v1 10/11] virtio-mem: support CONFIG_PROC_VMCORE_DEVICE_RAM
Date: Fri, 25 Oct 2024 17:11:32 +0200
Message-ID: <20241025151134.1275575-11-david@redhat.com>
In-Reply-To: <20241025151134.1275575-1-david@redhat.com>
References: <20241025151134.1275575-1-david@redhat.com>

Let's implement the get_device_ram() vmcore callback, so architectures that
select NEED_PROC_VMCORE_DEVICE_RAM, as s390 soon will, can include that
memory in a crash dump. Merge ranges, and process chunks that might contain
a mixture of plugged and unplugged memory, to reduce the total number of
ranges.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 drivers/virtio/Kconfig      |  1 +
 drivers/virtio/virtio_mem.c | 88 +++++++++++++++++++++++++++++++++++++
 2 files changed, 89 insertions(+)

diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
index 2eb747311bfd..60fdaf2c2c49 100644
--- a/drivers/virtio/Kconfig
+++ b/drivers/virtio/Kconfig
@@ -128,6 +128,7 @@ config VIRTIO_MEM
 	depends on MEMORY_HOTREMOVE
 	depends on CONTIG_ALLOC
 	depends on EXCLUSIVE_SYSTEM_RAM
+	select PROVIDE_PROC_VMCORE_DEVICE_RAM if PROC_VMCORE
 	help
 	  This driver provides access to virtio-mem paravirtualized memory
 	  devices, allowing to hotplug and hotunplug memory.

diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index 73477d5b79cf..1ae1199a7617 100644
--- a/drivers/virtio/virtio_mem.c
+++ b/drivers/virtio/virtio_mem.c
@@ -2728,6 +2728,91 @@ static bool virtio_mem_vmcore_pfn_is_ram(struct vmcore_cb *cb,
 	mutex_unlock(&vm->hotplug_mutex);
 	return is_ram;
 }
+
+#ifdef CONFIG_PROC_VMCORE_DEVICE_RAM
+static int virtio_mem_vmcore_add_device_ram(struct virtio_mem *vm,
+		struct list_head *list, uint64_t start, uint64_t end)
+{
+	int rc;
+
+	rc = vmcore_alloc_add_mem_node(list, start, end - start);
+	if (rc)
+		dev_err(&vm->vdev->dev,
+			"Error adding device RAM range: %d\n", rc);
+	return rc;
+}
+
+static int virtio_mem_vmcore_get_device_ram(struct vmcore_cb *cb,
+		struct list_head *list)
+{
+	struct virtio_mem *vm = container_of(cb, struct virtio_mem,
+					     vmcore_cb);
+	const uint64_t device_start = vm->addr;
+	const uint64_t device_end = vm->addr + vm->usable_region_size;
+	uint64_t chunk_size, cur_start, cur_end, plugged_range_start = 0;
+	LIST_HEAD(tmp_list);
+	int rc;
+
+	if (!vm->plugged_size)
+		return 0;
+
+	/* Process memory sections, unless the device block size is bigger. */
+	chunk_size = max_t(uint64_t, PFN_PHYS(PAGES_PER_SECTION),
+			   vm->device_block_size);
+
+	mutex_lock(&vm->hotplug_mutex);
+
+	/*
+	 * We process larger chunks and indicate the complete chunk if any
+	 * block in there is plugged. This reduces the number of pfn_is_ram()
+	 * callbacks and mimics what is effectively being done when the old
+	 * kernel would add complete memory sections/blocks to the elfcore hdr.
+	 */
+	cur_start = device_start;
+	for (cur_start = device_start; cur_start < device_end; cur_start = cur_end) {
+		cur_end = ALIGN_DOWN(cur_start + chunk_size, chunk_size);
+		cur_end = min_t(uint64_t, cur_end, device_end);
+
+		rc = virtio_mem_send_state_request(vm, cur_start,
+						   cur_end - cur_start);
+
+		if (rc < 0) {
+			dev_err(&vm->vdev->dev,
+				"Error querying block states: %d\n", rc);
+			goto out;
+		} else if (rc != VIRTIO_MEM_STATE_UNPLUGGED) {
+			/* Merge ranges with plugged memory. */
+			if (!plugged_range_start)
+				plugged_range_start = cur_start;
+			continue;
+		}
+
+		/* Flush any plugged range. */
+		if (plugged_range_start) {
+			rc = virtio_mem_vmcore_add_device_ram(vm, &tmp_list,
+							      plugged_range_start,
+							      cur_start);
+			if (rc)
+				goto out;
+			plugged_range_start = 0;
+		}
+	}
+
+	/* Flush any plugged range. */
+	if (plugged_range_start)
+		rc = virtio_mem_vmcore_add_device_ram(vm, &tmp_list,
+						      plugged_range_start,
+						      cur_start);
+out:
+	mutex_unlock(&vm->hotplug_mutex);
+	if (rc < 0) {
+		vmcore_free_mem_nodes(&tmp_list);
+		return rc;
+	}
+	list_splice_tail(&tmp_list, list);
+	return 0;
+}
+#endif /* CONFIG_PROC_VMCORE_DEVICE_RAM */
 #endif /* CONFIG_PROC_VMCORE */
 
 static int virtio_mem_init_kdump(struct virtio_mem *vm)
@@ -2737,6 +2822,9 @@ static int virtio_mem_init_kdump(struct virtio_mem *vm)
 #ifdef CONFIG_PROC_VMCORE
 	dev_info(&vm->vdev->dev, "memory hot(un)plug disabled in kdump kernel\n");
 	vm->vmcore_cb.pfn_is_ram = virtio_mem_vmcore_pfn_is_ram;
+#ifdef CONFIG_PROC_VMCORE_DEVICE_RAM
+	vm->vmcore_cb.get_device_ram = virtio_mem_vmcore_get_device_ram;
+#endif /* CONFIG_PROC_VMCORE_DEVICE_RAM */
 	register_vmcore_cb(&vm->vmcore_cb);
 	return 0;
 #else /* CONFIG_PROC_VMCORE */
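
To illustrate the merge strategy used by virtio_mem_vmcore_get_device_ram()
above, here is a minimal standalone userspace sketch (not part of the patch).
The device base, chunk size, the chunk_plugged[] state array and the
add_range() helper are made up for the example; they stand in for querying
block states via virtio_mem_send_state_request() and for adding ranges with
vmcore_alloc_add_mem_node(). Consecutive plugged chunks are merged into a
single range, and a range is flushed once an unplugged chunk (or the device
end) is reached.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define CHUNK_SIZE	0x8000000ULL	/* 128 MiB, e.g. one memory section */
#define NUM_CHUNKS	8

/* Toy device state: 1 = at least one block in the chunk is plugged. */
static const int chunk_plugged[NUM_CHUNKS] = { 1, 1, 0, 1, 0, 0, 1, 1 };

/* Stand-in for vmcore_alloc_add_mem_node(): just print the merged range. */
static void add_range(uint64_t start, uint64_t end)
{
	printf("range: [0x%" PRIx64 ", 0x%" PRIx64 ")\n", start, end);
}

int main(void)
{
	const uint64_t device_start = 0x100000000ULL;	/* arbitrary non-zero base */
	const uint64_t device_end = device_start + NUM_CHUNKS * CHUNK_SIZE;
	uint64_t cur_start, cur_end, plugged_range_start = 0;

	for (cur_start = device_start; cur_start < device_end; cur_start = cur_end) {
		cur_end = cur_start + CHUNK_SIZE;
		if (cur_end > device_end)
			cur_end = device_end;

		/* Stand-in for the plugged/unplugged state request. */
		if (chunk_plugged[(cur_start - device_start) / CHUNK_SIZE]) {
			/* Open a new merged range or keep extending the current one. */
			if (!plugged_range_start)
				plugged_range_start = cur_start;
			continue;
		}

		/* Unplugged chunk: flush any merged range built so far. */
		if (plugged_range_start) {
			add_range(plugged_range_start, cur_start);
			plugged_range_start = 0;
		}
	}

	/* Flush a range that extends up to the end of the device. */
	if (plugged_range_start)
		add_range(plugged_range_start, device_end);

	return 0;
}

With the toy state array above, the five plugged chunks collapse into three
reported ranges, which is the point of processing chunk-sized pieces and
merging neighbours instead of reporting every plugged device block on its own.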