From patchwork Fri Oct 25 15:11:23 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13850924
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-s390@vger.kernel.org, virtualization@lists.linux.dev, kvm@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, kexec@lists.infradead.org, David Hildenbrand,
 Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Christian Borntraeger,
 Sven Schnelle, "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Eugenio Pérez,
 Baoquan He, Vivek Goyal, Dave Young, Thomas Huth, Cornelia Huck,
 Janosch Frank, Claudio Imbrenda, Eric Farman, Andrew Morton
Subject: [PATCH v1 01/11] fs/proc/vmcore: convert vmcore_cb_lock into vmcore_mutex
Date: Fri, 25 Oct 2024 17:11:23 +0200
Message-ID: <20241025151134.1275575-2-david@redhat.com>
In-Reply-To: <20241025151134.1275575-1-david@redhat.com>
References: <20241025151134.1275575-1-david@redhat.com>

We want to protect vmcore modifications from concurrent opening of the
vmcore, and also serialize vmcore modifications.

Let's convert the spinlock into a mutex, because some of the operations
we'll be protecting might sleep (e.g., memory allocations) and might take
a bit longer.

Signed-off-by: David Hildenbrand
---
 fs/proc/vmcore.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index b52d85f8ad59..110ce193d20f 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -62,7 +62,8 @@ core_param(novmcoredd, vmcoredd_disabled, bool, 0);
 /* Device Dump Size */
 static size_t vmcoredd_orig_sz;
 
-static DEFINE_SPINLOCK(vmcore_cb_lock);
+static DEFINE_MUTEX(vmcore_mutex);
+
 DEFINE_STATIC_SRCU(vmcore_cb_srcu);
 /* List of registered vmcore callbacks. */
 static LIST_HEAD(vmcore_cb_list);
@@ -72,7 +73,7 @@ static bool vmcore_opened;
 void register_vmcore_cb(struct vmcore_cb *cb)
 {
 	INIT_LIST_HEAD(&cb->next);
-	spin_lock(&vmcore_cb_lock);
+	mutex_lock(&vmcore_mutex);
 	list_add_tail(&cb->next, &vmcore_cb_list);
 	/*
 	 * Registering a vmcore callback after the vmcore was opened is
@@ -80,13 +81,13 @@ void register_vmcore_cb(struct vmcore_cb *cb)
 	 */
 	if (vmcore_opened)
 		pr_warn_once("Unexpected vmcore callback registration\n");
-	spin_unlock(&vmcore_cb_lock);
+	mutex_unlock(&vmcore_mutex);
 }
 EXPORT_SYMBOL_GPL(register_vmcore_cb);
 
 void unregister_vmcore_cb(struct vmcore_cb *cb)
 {
-	spin_lock(&vmcore_cb_lock);
+	mutex_lock(&vmcore_mutex);
 	list_del_rcu(&cb->next);
 	/*
 	 * Unregistering a vmcore callback after the vmcore was opened is
@@ -95,7 +96,7 @@ void unregister_vmcore_cb(struct vmcore_cb *cb)
 	 */
 	if (vmcore_opened)
 		pr_warn_once("Unexpected vmcore callback unregistration\n");
-	spin_unlock(&vmcore_cb_lock);
+	mutex_unlock(&vmcore_mutex);
 	synchronize_srcu(&vmcore_cb_srcu);
 }
 
@@ -120,9 +121,9 @@ static bool pfn_is_ram(unsigned long pfn)
 
 static int open_vmcore(struct inode *inode, struct file *file)
 {
-	spin_lock(&vmcore_cb_lock);
+	mutex_lock(&vmcore_mutex);
 	vmcore_opened = true;
-	spin_unlock(&vmcore_cb_lock);
+	mutex_unlock(&vmcore_mutex);
 
 	return 0;
 }
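For context on why the lock type matters here: a minimal, self-contained sketch
(not taken from this patch; all example_* names are illustrative) of the pattern
the series moves to. A mutex may be held across operations that can sleep, such
as GFP_KERNEL allocations, whereas a spinlock may not:

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/slab.h>

static DEFINE_MUTEX(example_mutex);
static LIST_HEAD(example_list);

struct example_node {
	struct list_head list;
	unsigned long data;
};

static int example_add(unsigned long data)
{
	struct example_node *n;

	mutex_lock(&example_mutex);
	/*
	 * May sleep: fine under a mutex, forbidden under a spinlock
	 * (a spinlock would force a GFP_ATOMIC allocation here).
	 */
	n = kzalloc(sizeof(*n), GFP_KERNEL);
	if (!n) {
		mutex_unlock(&example_mutex);
		return -ENOMEM;
	}
	n->data = data;
	list_add_tail(&n->list, &example_list);
	mutex_unlock(&example_mutex);
	return 0;
}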
From patchwork Fri Oct 25 15:11:24 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13850925
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-s390@vger.kernel.org, virtualization@lists.linux.dev, kvm@vger.kernel.org, linux-fsdevel@vger.kernel.org, kexec@lists.infradead.org, David Hildenbrand , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , "Michael S.
Tsirkin" , Jason Wang , Xuan Zhuo , =?utf-8?q?Eugenio_P=C3=A9rez?= , Baoquan He , Vivek Goyal , Dave Young , Thomas Huth , Cornelia Huck , Janosch Frank , Claudio Imbrenda , Eric Farman , Andrew Morton Subject: [PATCH v1 02/11] fs/proc/vmcore: replace vmcoredd_mutex by vmcore_mutex Date: Fri, 25 Oct 2024 17:11:24 +0200 Message-ID: <20241025151134.1275575-3-david@redhat.com> In-Reply-To: <20241025151134.1275575-1-david@redhat.com> References: <20241025151134.1275575-1-david@redhat.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.4 Let's use our new mutex instead. Signed-off-by: David Hildenbrand --- fs/proc/vmcore.c | 17 ++++++++--------- 1 file changed, 8 insertions(+), 9 deletions(-) diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c index 110ce193d20f..b91c304463c9 100644 --- a/fs/proc/vmcore.c +++ b/fs/proc/vmcore.c @@ -53,7 +53,6 @@ static struct proc_dir_entry *proc_vmcore; #ifdef CONFIG_PROC_VMCORE_DEVICE_DUMP /* Device Dump list and mutex to synchronize access to list */ static LIST_HEAD(vmcoredd_list); -static DEFINE_MUTEX(vmcoredd_mutex); static bool vmcoredd_disabled; core_param(novmcoredd, vmcoredd_disabled, bool, 0); @@ -248,7 +247,7 @@ static int vmcoredd_copy_dumps(struct iov_iter *iter, u64 start, size_t size) size_t tsz; char *buf; - mutex_lock(&vmcoredd_mutex); + mutex_lock(&vmcore_mutex); list_for_each_entry(dump, &vmcoredd_list, list) { if (start < offset + dump->size) { tsz = min(offset + (u64)dump->size - start, (u64)size); @@ -269,7 +268,7 @@ static int vmcoredd_copy_dumps(struct iov_iter *iter, u64 start, size_t size) } out_unlock: - mutex_unlock(&vmcoredd_mutex); + mutex_unlock(&vmcore_mutex); return ret; } @@ -283,7 +282,7 @@ static int vmcoredd_mmap_dumps(struct vm_area_struct *vma, unsigned long dst, size_t tsz; char *buf; - mutex_lock(&vmcoredd_mutex); + mutex_lock(&vmcore_mutex); list_for_each_entry(dump, &vmcoredd_list, list) { if (start < offset + dump->size) { tsz = min(offset + (u64)dump->size - start, (u64)size); @@ -306,7 +305,7 @@ static int vmcoredd_mmap_dumps(struct vm_area_struct *vma, unsigned long dst, } out_unlock: - mutex_unlock(&vmcoredd_mutex); + mutex_unlock(&vmcore_mutex); return ret; } #endif /* CONFIG_MMU */ @@ -1517,9 +1516,9 @@ int vmcore_add_device_dump(struct vmcoredd_data *data) dump->size = data_size; /* Add the dump to driver sysfs list */ - mutex_lock(&vmcoredd_mutex); + mutex_lock(&vmcore_mutex); list_add_tail(&dump->list, &vmcoredd_list); - mutex_unlock(&vmcoredd_mutex); + mutex_unlock(&vmcore_mutex); vmcoredd_update_size(data_size); return 0; @@ -1537,7 +1536,7 @@ EXPORT_SYMBOL(vmcore_add_device_dump); static void vmcore_free_device_dumps(void) { #ifdef CONFIG_PROC_VMCORE_DEVICE_DUMP - mutex_lock(&vmcoredd_mutex); + mutex_lock(&vmcore_mutex); while (!list_empty(&vmcoredd_list)) { struct vmcoredd_node *dump; @@ -1547,7 +1546,7 @@ static void vmcore_free_device_dumps(void) vfree(dump->buf); vfree(dump); } - mutex_unlock(&vmcoredd_mutex); + mutex_unlock(&vmcore_mutex); #endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */ } From patchwork Fri Oct 25 15:11:25 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13850926 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by 
smtp.subspace.kernel.org (Postfix) with ESMTPS id F3CFB165F1A for ; Fri, 25 Oct 2024 15:12:17 +0000 (UTC)
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-s390@vger.kernel.org, virtualization@lists.linux.dev, kvm@vger.kernel.org, linux-fsdevel@vger.kernel.org, kexec@lists.infradead.org, David Hildenbrand , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , "Michael S.
Tsirkin" , Jason Wang , Xuan Zhuo , =?utf-8?q?Eugenio_P=C3=A9rez?= , Baoquan He , Vivek Goyal , Dave Young , Thomas Huth , Cornelia Huck , Janosch Frank , Claudio Imbrenda , Eric Farman , Andrew Morton Subject: [PATCH v1 03/11] fs/proc/vmcore: disallow vmcore modifications after the vmcore was opened Date: Fri, 25 Oct 2024 17:11:25 +0200 Message-ID: <20241025151134.1275575-4-david@redhat.com> In-Reply-To: <20241025151134.1275575-1-david@redhat.com> References: <20241025151134.1275575-1-david@redhat.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.4 Let's protect all vmcore modifications by the vmcore_mutex and disallow vmcore modifications after the vmcore was opened: modifications would no longer be safe. Properly synchronize against concurrent opening of the vmcore. As a nice side-effect, we now properly protect concurrent vmcore modifications. No need to grab the mutex during mmap()/read(): after we opened the vmcore, modifications are impossible. Signed-off-by: David Hildenbrand --- fs/proc/vmcore.c | 42 +++++++++++++++++++----------------------- 1 file changed, 19 insertions(+), 23 deletions(-) diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c index b91c304463c9..6371dbaa21be 100644 --- a/fs/proc/vmcore.c +++ b/fs/proc/vmcore.c @@ -243,33 +243,27 @@ static int vmcoredd_copy_dumps(struct iov_iter *iter, u64 start, size_t size) { struct vmcoredd_node *dump; u64 offset = 0; - int ret = 0; size_t tsz; char *buf; - mutex_lock(&vmcore_mutex); list_for_each_entry(dump, &vmcoredd_list, list) { if (start < offset + dump->size) { tsz = min(offset + (u64)dump->size - start, (u64)size); buf = dump->buf + start - offset; - if (copy_to_iter(buf, tsz, iter) < tsz) { - ret = -EFAULT; - goto out_unlock; - } + if (copy_to_iter(buf, tsz, iter) < tsz) + return -EFAULT; size -= tsz; start += tsz; /* Leave now if buffer filled already */ if (!size) - goto out_unlock; + return 0; } offset += dump->size; } -out_unlock: - mutex_unlock(&vmcore_mutex); - return ret; + return 0; } #ifdef CONFIG_MMU @@ -278,20 +272,16 @@ static int vmcoredd_mmap_dumps(struct vm_area_struct *vma, unsigned long dst, { struct vmcoredd_node *dump; u64 offset = 0; - int ret = 0; size_t tsz; char *buf; - mutex_lock(&vmcore_mutex); list_for_each_entry(dump, &vmcoredd_list, list) { if (start < offset + dump->size) { tsz = min(offset + (u64)dump->size - start, (u64)size); buf = dump->buf + start - offset; if (remap_vmalloc_range_partial(vma, dst, buf, 0, - tsz)) { - ret = -EFAULT; - goto out_unlock; - } + tsz)) + return -EFAULT; size -= tsz; start += tsz; @@ -299,14 +289,12 @@ static int vmcoredd_mmap_dumps(struct vm_area_struct *vma, unsigned long dst, /* Leave now if buffer filled already */ if (!size) - goto out_unlock; + return 0; } offset += dump->size; } -out_unlock: - mutex_unlock(&vmcore_mutex); - return ret; + return 0; } #endif /* CONFIG_MMU */ #endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */ @@ -1482,6 +1470,10 @@ int vmcore_add_device_dump(struct vmcoredd_data *data) return -EINVAL; } + /* We'll recheck under lock later. 
*/ + if (data_race(vmcore_opened)) + return -EBUSY; + if (!data || !strlen(data->dump_name) || !data->vmcoredd_callback || !data->size) return -EINVAL; @@ -1515,12 +1507,16 @@ int vmcore_add_device_dump(struct vmcoredd_data *data) dump->buf = buf; dump->size = data_size; - /* Add the dump to driver sysfs list */ + /* Add the dump to driver sysfs list and update the elfcore hdr */ mutex_lock(&vmcore_mutex); - list_add_tail(&dump->list, &vmcoredd_list); - mutex_unlock(&vmcore_mutex); + if (vmcore_opened) { + ret = -EBUSY; + goto out_err; + } + list_add_tail(&dump->list, &vmcoredd_list); vmcoredd_update_size(data_size); + mutex_unlock(&vmcore_mutex); return 0; out_err: From patchwork Fri Oct 25 15:11:26 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13850927 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 11AF216A94A for ; Fri, 25 Oct 2024 15:12:25 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729869148; cv=none; b=RYii5rJO1yN3B6UU0Q23t36+YuUOe/O6un4Apeej3lc1b0k6ZIgdT5B4EWj0RHNssFOBjG0WqK23bzNXYGtv9vzNzrUVmDuufQ0wniqSfaLj4vQsFW7altERi5zzeB4rTvUinmRZtp+mfk1VRQYT2UQp+xCd4YH+yw15rhEpsIU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729869148; c=relaxed/simple; bh=OKuCXCkQYhPRRPyz3j97T+8ZrBgA9TgHLuBXidwiR8I=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=OLwXvkLDMCtOOLEZTScBPnLSyX+8DIRhdVnIOhf2JjUadp5F8TxZkEBaQRqUpXJM3brug4Zo/ZwiKh4wjOCIHqF9oxlVHUebWQuXG+pt+x63rczw3rH7Kdl29KVGgfrgS+Bn8LhEKP01ABZv9DP8fQRmEEqhBCr34mHqHVqis9o= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=gQva7K9X; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="gQva7K9X" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1729869145; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=D6QGok1SHO7tWBONbTK+qNFzpgcjUP8FVLCibVkMqhw=; b=gQva7K9XGz+oR8s9WP+3kkPAwKcD4XlfTEeduDYEGgS7WxwtbiZtpo+9ZWHmr5f9LZRZ2O Qa8ldBJwRvo2ztWpvAJjq2pw2zvFyMmWasQhuWI+AckRUJkGMxBBKRwQZEbC6cRB2rpBmy WERYLwkTb5GR5d7dBHZA9t8UOfOGj+o= Received: from mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-64-m7d72Gv7PTyqbx3Zkb-x3g-1; Fri, 25 Oct 2024 11:12:21 -0400 X-MC-Unique: m7d72Gv7PTyqbx3Zkb-x3g-1 Received: from mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com 
[10.30.177.4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 65E1F1956077; Fri, 25 Oct 2024 15:12:19 +0000 (UTC) Received: from t14s.redhat.com (unknown [10.22.65.27]) by mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id E93DA300018D; Fri, 25 Oct 2024 15:12:10 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org, linux-s390@vger.kernel.org, virtualization@lists.linux.dev, kvm@vger.kernel.org, linux-fsdevel@vger.kernel.org, kexec@lists.infradead.org, David Hildenbrand , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , =?utf-8?q?Eugenio_P=C3=A9rez?= , Baoquan He , Vivek Goyal , Dave Young , Thomas Huth , Cornelia Huck , Janosch Frank , Claudio Imbrenda , Eric Farman , Andrew Morton Subject: [PATCH v1 04/11] fs/proc/vmcore: move vmcore definitions from kcore.h to crash_dump.h Date: Fri, 25 Oct 2024 17:11:26 +0200 Message-ID: <20241025151134.1275575-5-david@redhat.com> In-Reply-To: <20241025151134.1275575-1-david@redhat.com> References: <20241025151134.1275575-1-david@redhat.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.4 These defines are not related to /proc/kcore, move them to crash_dump.h instead. While at it, rename "struct vmcore" to "struct vmcore_mem_node", which is a more fitting name. Signed-off-by: David Hildenbrand --- fs/proc/vmcore.c | 20 ++++++++++---------- include/linux/crash_dump.h | 13 +++++++++++++ include/linux/kcore.h | 13 ------------- 3 files changed, 23 insertions(+), 23 deletions(-) diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c index 6371dbaa21be..47652df95202 100644 --- a/fs/proc/vmcore.c +++ b/fs/proc/vmcore.c @@ -304,10 +304,10 @@ static int vmcoredd_mmap_dumps(struct vm_area_struct *vma, unsigned long dst, */ static ssize_t __read_vmcore(struct iov_iter *iter, loff_t *fpos) { + struct vmcore_mem_node *m = NULL; ssize_t acc = 0, tmp; size_t tsz; u64 start; - struct vmcore *m = NULL; if (!iov_iter_count(iter) || *fpos >= vmcore_size) return 0; @@ -560,8 +560,8 @@ static int vmcore_remap_oldmem_pfn(struct vm_area_struct *vma, static int mmap_vmcore(struct file *file, struct vm_area_struct *vma) { size_t size = vma->vm_end - vma->vm_start; + struct vmcore_mem_node *m; u64 start, end, len, tsz; - struct vmcore *m; start = (u64)vma->vm_pgoff << PAGE_SHIFT; end = start + size; @@ -683,16 +683,16 @@ static const struct proc_ops vmcore_proc_ops = { .proc_mmap = mmap_vmcore, }; -static struct vmcore* __init get_new_element(void) +static struct vmcore_mem_node * __init get_new_element(void) { - return kzalloc(sizeof(struct vmcore), GFP_KERNEL); + return kzalloc(sizeof(struct vmcore_mem_node), GFP_KERNEL); } static u64 get_vmcore_size(size_t elfsz, size_t elfnotesegsz, struct list_head *vc_list) { + struct vmcore_mem_node *m; u64 size; - struct vmcore *m; size = elfsz + elfnotesegsz; list_for_each_entry(m, vc_list, list) { @@ -1090,11 +1090,11 @@ static int __init process_ptload_program_headers_elf64(char *elfptr, size_t elfnotes_sz, struct list_head *vc_list) { + struct vmcore_mem_node *new; int i; Elf64_Ehdr *ehdr_ptr; Elf64_Phdr *phdr_ptr; loff_t vmcore_off; - struct vmcore *new; ehdr_ptr = 
(Elf64_Ehdr *)elfptr; phdr_ptr = (Elf64_Phdr*)(elfptr + sizeof(Elf64_Ehdr)); /* PT_NOTE hdr */ @@ -1133,11 +1133,11 @@ static int __init process_ptload_program_headers_elf32(char *elfptr, size_t elfnotes_sz, struct list_head *vc_list) { + struct vmcore_mem_node *new; int i; Elf32_Ehdr *ehdr_ptr; Elf32_Phdr *phdr_ptr; loff_t vmcore_off; - struct vmcore *new; ehdr_ptr = (Elf32_Ehdr *)elfptr; phdr_ptr = (Elf32_Phdr*)(elfptr + sizeof(Elf32_Ehdr)); /* PT_NOTE hdr */ @@ -1175,8 +1175,8 @@ static int __init process_ptload_program_headers_elf32(char *elfptr, static void set_vmcore_list_offsets(size_t elfsz, size_t elfnotes_sz, struct list_head *vc_list) { + struct vmcore_mem_node *m; loff_t vmcore_off; - struct vmcore *m; /* Skip ELF header, program headers and ELF note segment. */ vmcore_off = elfsz + elfnotes_sz; @@ -1587,9 +1587,9 @@ void vmcore_cleanup(void) /* clear the vmcore list. */ while (!list_empty(&vmcore_list)) { - struct vmcore *m; + struct vmcore_mem_node *m; - m = list_first_entry(&vmcore_list, struct vmcore, list); + m = list_first_entry(&vmcore_list, struct vmcore_mem_node, list); list_del(&m->list); kfree(m); } diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h index acc55626afdc..5e48ab12c12b 100644 --- a/include/linux/crash_dump.h +++ b/include/linux/crash_dump.h @@ -114,10 +114,23 @@ struct vmcore_cb { extern void register_vmcore_cb(struct vmcore_cb *cb); extern void unregister_vmcore_cb(struct vmcore_cb *cb); +struct vmcore_mem_node { + struct list_head list; + unsigned long long paddr; + unsigned long long size; + loff_t offset; +}; + #else /* !CONFIG_CRASH_DUMP */ static inline bool is_kdump_kernel(void) { return false; } #endif /* CONFIG_CRASH_DUMP */ +struct vmcoredd_node { + struct list_head list; /* List of dumps */ + void *buf; /* Buffer containing device's dump */ + unsigned int size; /* Size of the buffer */ +}; + /* Device Dump information to be filled by drivers */ struct vmcoredd_data { char dump_name[VMCOREDD_MAX_NAME_BYTES]; /* Unique name of the dump */ diff --git a/include/linux/kcore.h b/include/linux/kcore.h index 86c0f1d18998..9a2fa013c91d 100644 --- a/include/linux/kcore.h +++ b/include/linux/kcore.h @@ -20,19 +20,6 @@ struct kcore_list { int type; }; -struct vmcore { - struct list_head list; - unsigned long long paddr; - unsigned long long size; - loff_t offset; -}; - -struct vmcoredd_node { - struct list_head list; /* List of dumps */ - void *buf; /* Buffer containing device's dump */ - unsigned int size; /* Size of the buffer */ -}; - #ifdef CONFIG_PROC_KCORE void __init kclist_add(struct kcore_list *, void *, size_t, int type); From patchwork Fri Oct 25 15:11:27 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13850928 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4495A216210 for ; Fri, 25 Oct 2024 15:12:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729869157; cv=none; b=J6vJjXeecUHYXPIKVyGA3E3cS9deS6hnnnEvG6VBXfkSsmb+tf8em6Rrp3nAblftQckOjWg44wRiS/c0brRk1/elo2gHJnVl1imXE6lRudLACxsCDAXrowvX7A45oO9D593JuFYMOWBu/dajpCCwBonu1CJNjSbHK4hFwmlalP8= ARC-Message-Signature: i=1; a=rsa-sha256; 
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-s390@vger.kernel.org, virtualization@lists.linux.dev, kvm@vger.kernel.org, linux-fsdevel@vger.kernel.org, kexec@lists.infradead.org, David Hildenbrand , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , "Michael S.
Tsirkin" , Jason Wang , Xuan Zhuo , =?utf-8?q?Eugenio_P=C3=A9rez?= , Baoquan He , Vivek Goyal , Dave Young , Thomas Huth , Cornelia Huck , Janosch Frank , Claudio Imbrenda , Eric Farman , Andrew Morton Subject: [PATCH v1 05/11] fs/proc/vmcore: factor out allocating a vmcore memory node Date: Fri, 25 Oct 2024 17:11:27 +0200 Message-ID: <20241025151134.1275575-6-david@redhat.com> In-Reply-To: <20241025151134.1275575-1-david@redhat.com> References: <20241025151134.1275575-1-david@redhat.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.4 Let's factor it out into include/linux/crash_dump.h, from where we can use it also outside of vmcore.c later. Signed-off-by: David Hildenbrand --- fs/proc/vmcore.c | 21 ++------------------- include/linux/crash_dump.h | 14 ++++++++++++++ 2 files changed, 16 insertions(+), 19 deletions(-) diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c index 47652df95202..76fdc3fb8c0e 100644 --- a/fs/proc/vmcore.c +++ b/fs/proc/vmcore.c @@ -683,11 +683,6 @@ static const struct proc_ops vmcore_proc_ops = { .proc_mmap = mmap_vmcore, }; -static struct vmcore_mem_node * __init get_new_element(void) -{ - return kzalloc(sizeof(struct vmcore_mem_node), GFP_KERNEL); -} - static u64 get_vmcore_size(size_t elfsz, size_t elfnotesegsz, struct list_head *vc_list) { @@ -1090,7 +1085,6 @@ static int __init process_ptload_program_headers_elf64(char *elfptr, size_t elfnotes_sz, struct list_head *vc_list) { - struct vmcore_mem_node *new; int i; Elf64_Ehdr *ehdr_ptr; Elf64_Phdr *phdr_ptr; @@ -1113,13 +1107,8 @@ static int __init process_ptload_program_headers_elf64(char *elfptr, end = roundup(paddr + phdr_ptr->p_memsz, PAGE_SIZE); size = end - start; - /* Add this contiguous chunk of memory to vmcore list.*/ - new = get_new_element(); - if (!new) + if (vmcore_alloc_add_mem_node(vc_list, start, size)) return -ENOMEM; - new->paddr = start; - new->size = size; - list_add_tail(&new->list, vc_list); /* Update the program header offset. */ phdr_ptr->p_offset = vmcore_off + (paddr - start); @@ -1133,7 +1122,6 @@ static int __init process_ptload_program_headers_elf32(char *elfptr, size_t elfnotes_sz, struct list_head *vc_list) { - struct vmcore_mem_node *new; int i; Elf32_Ehdr *ehdr_ptr; Elf32_Phdr *phdr_ptr; @@ -1156,13 +1144,8 @@ static int __init process_ptload_program_headers_elf32(char *elfptr, end = roundup(paddr + phdr_ptr->p_memsz, PAGE_SIZE); size = end - start; - /* Add this contiguous chunk of memory to vmcore list.*/ - new = get_new_element(); - if (!new) + if (vmcore_alloc_add_mem_node(vc_list, start, size)) return -ENOMEM; - new->paddr = start; - new->size = size; - list_add_tail(&new->list, vc_list); /* Update the program header offset */ phdr_ptr->p_offset = vmcore_off + (paddr - start); diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h index 5e48ab12c12b..ae77049fc023 100644 --- a/include/linux/crash_dump.h +++ b/include/linux/crash_dump.h @@ -121,6 +121,20 @@ struct vmcore_mem_node { loff_t offset; }; +/* Allocate a vmcore memory node and add it to the list. 
*/ +static inline int vmcore_alloc_add_mem_node(struct list_head *list, + unsigned long long paddr, unsigned long long size) +{ + struct vmcore_mem_node *m = kzalloc(sizeof(*m), GFP_KERNEL); + + if (!m) + return -ENOMEM; + m->paddr = paddr; + m->size = size; + list_add_tail(&m->list, list); + return 0; +} + #else /* !CONFIG_CRASH_DUMP */ static inline bool is_kdump_kernel(void) { return false; } #endif /* CONFIG_CRASH_DUMP */ From patchwork Fri Oct 25 15:11:28 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13850929 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 733EF216216 for ; Fri, 25 Oct 2024 15:12:44 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729869166; cv=none; b=XE9w6e9wMHFn+mww3PXl7KmZgSczhZM07mSTAnWJg+PwBEeL/v9koLAGz14f7PbDC3clNZAVRG29BlO1IJpjfsJrEfnjC6LmiYaUiW1dfvnB5C3ZAy3zaUdjAj3Hw3Q+wMvYAodTMmJ/OuKftZvM8Idc77JR3BqALzVgD2FKNbc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729869166; c=relaxed/simple; bh=XgE5h5JSsM45B+HHXH3XDv6/wR/fgvzuwnpVaJQjZ6c=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=QsmAn2h5OI/tKcd4gyORTc6YGmr9giVQdLqvI8g0AusyRQkEndJ+mx2JkYWHOkvl1Ga/+ryaT4rXk/N1scu5aq4omqojfp28lHTZHVuRReLbmYE++s0ZfcamYT0va8stiPaPPJJ7qpux4eGYuFdZWPryM8e+j4JsVxjLtjCs0ck= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=EGacU649; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="EGacU649" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1729869163; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=gIFNVFgavkgVE76rylQBcYDoA1k/9BPXcVZSMfOlsww=; b=EGacU649e30qFoOH91WVI/3VKCftYanFhWzIfUpM1Sew0jANo/VDAIuut52AZlMsWl6A5j GizvCviIOxmIbCipkTy+/WC1zhJ583nTL2QSas4imzj+oYs4ZDLcF7sjxc3sZ3+OBL1Yet S+IvkmRtCukNE7dHGg97nNfOI9+kATY= Received: from mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-271-KbkSjjO5MweDNtywRObLgg-1; Fri, 25 Oct 2024 11:12:38 -0400 X-MC-Unique: KbkSjjO5MweDNtywRObLgg-1 Received: from mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by 
mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id B1FBE1955F25; Fri, 25 Oct 2024 15:12:36 +0000 (UTC) Received: from t14s.redhat.com (unknown [10.22.65.27]) by mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 2F05A300018D; Fri, 25 Oct 2024 15:12:27 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org, linux-s390@vger.kernel.org, virtualization@lists.linux.dev, kvm@vger.kernel.org, linux-fsdevel@vger.kernel.org, kexec@lists.infradead.org, David Hildenbrand , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , =?utf-8?q?Eugenio_P=C3=A9rez?= , Baoquan He , Vivek Goyal , Dave Young , Thomas Huth , Cornelia Huck , Janosch Frank , Claudio Imbrenda , Eric Farman , Andrew Morton Subject: [PATCH v1 06/11] fs/proc/vmcore: factor out freeing a list of vmcore ranges Date: Fri, 25 Oct 2024 17:11:28 +0200 Message-ID: <20241025151134.1275575-7-david@redhat.com> In-Reply-To: <20241025151134.1275575-1-david@redhat.com> References: <20241025151134.1275575-1-david@redhat.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.4 Let's factor it out into include/linux/crash_dump.h, from where we can use it also outside of vmcore.c later. Signed-off-by: David Hildenbrand --- fs/proc/vmcore.c | 9 +-------- include/linux/crash_dump.h | 11 +++++++++++ 2 files changed, 12 insertions(+), 8 deletions(-) diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c index 76fdc3fb8c0e..3e90416ee54e 100644 --- a/fs/proc/vmcore.c +++ b/fs/proc/vmcore.c @@ -1568,14 +1568,7 @@ void vmcore_cleanup(void) proc_vmcore = NULL; } - /* clear the vmcore list. */ - while (!list_empty(&vmcore_list)) { - struct vmcore_mem_node *m; - - m = list_first_entry(&vmcore_list, struct vmcore_mem_node, list); - list_del(&m->list); - kfree(m); - } + vmcore_free_mem_nodes(&vmcore_list); free_elfcorebuf(); /* clear vmcore device dump list */ diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h index ae77049fc023..722dbcff7371 100644 --- a/include/linux/crash_dump.h +++ b/include/linux/crash_dump.h @@ -135,6 +135,17 @@ static inline int vmcore_alloc_add_mem_node(struct list_head *list, return 0; } +/* Free a list of vmcore memory nodes. 
*/ +static inline void vmcore_free_mem_nodes(struct list_head *list) +{ + struct vmcore_mem_node *m, *tmp; + + list_for_each_entry_safe(m, tmp, list, list) { + list_del(&m->list); + kfree(m); + } +} + #else /* !CONFIG_CRASH_DUMP */ static inline bool is_kdump_kernel(void) { return false; } #endif /* CONFIG_CRASH_DUMP */ From patchwork Fri Oct 25 15:11:29 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13850930 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 405F1185955 for ; Fri, 25 Oct 2024 15:12:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729869173; cv=none; b=nED6G2+KqmKqfl86dJF+3Q5Bhd65FHZfh5GSUpAYfgnmoCzfaTY5CzKYTf2t/TbDSYdW7gtGdGrBA+Ahy2SSg1MSQ8lKmQkYdj1ed0wmaDryNGSq5Bc5nY46ly2iomStGzMt1PbC9E1CIW6EaO/MExazeAgSHJP2NCNwj9BUHVI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729869173; c=relaxed/simple; bh=m2ZIKnqe2ELYMThIR4Bxvtg8dbrJ+q1dEdqLEYQoRTQ=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=RVqatq0lQJvtmHXdLKZyUqa+VFoGPl+mckI4bhMKbdop/UaktfaHr9KjsoMYYXsrnM2lo7X7ozzfl1vXrJGWLH802+qwQajL59V3+myNuT4c59KxghZ02zZNUJkrXwwPHRulyeYvFxmJHdo0ZWGgOHgRkSiZxSgtvVTAnCqFy3U= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=Xwoo6wgl; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="Xwoo6wgl" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1729869170; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=6hl9c5Yb2sx5GRvpqgpVj1LJUfxGGqj6HUI+5nw39Sc=; b=Xwoo6wgl0R73yt3hv1zPufI5dTf4aKvnO+YCOgEI+pSqSfBv0mCn7sCrMLZ+3PsJQQQNP2 IySbyzRzzFkQFaX9yyGODVlhXxiwH+v8V7U+z1u1feYrF0wggo46JY5lUmIKYs80N6vAcr KUEgO2bvIVbWN/zKOjcE3b4mDBEm+zg= Received: from mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-692-4fSCEJXAOeah5Ecoqje4TQ-1; Fri, 25 Oct 2024 11:12:47 -0400 X-MC-Unique: 4fSCEJXAOeah5Ecoqje4TQ-1 Received: from mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 3839D1955F54; Fri, 25 Oct 2024 15:12:45 +0000 (UTC) Received: from 
t14s.redhat.com (unknown [10.22.65.27]) by mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 4B62C30001A9; Fri, 25 Oct 2024 15:12:37 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org, linux-s390@vger.kernel.org, virtualization@lists.linux.dev, kvm@vger.kernel.org, linux-fsdevel@vger.kernel.org, kexec@lists.infradead.org, David Hildenbrand , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , =?utf-8?q?Eugenio_P=C3=A9rez?= , Baoquan He , Vivek Goyal , Dave Young , Thomas Huth , Cornelia Huck , Janosch Frank , Claudio Imbrenda , Eric Farman , Andrew Morton Subject: [PATCH v1 07/11] fs/proc/vmcore: introduce PROC_VMCORE_DEVICE_RAM to detect device RAM ranges in 2nd kernel Date: Fri, 25 Oct 2024 17:11:29 +0200 Message-ID: <20241025151134.1275575-8-david@redhat.com> In-Reply-To: <20241025151134.1275575-1-david@redhat.com> References: <20241025151134.1275575-1-david@redhat.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.4 s390 allocates+prepares the elfcore hdr in the dump (2nd) kernel, not in the crashed kernel. RAM provided by memory devices such as virtio-mem can only be detected using the device driver; when vmcore_init() is called, these device drivers are usually not loaded yet, or the devices did not get probed yet. Consequently, on s390 these RAM ranges will not be included in the crash dump, which makes the dump partially corrupt and is unfortunate. Instead of deferring the vmcore_init() call, to an (unclear?) later point, let's reuse the vmcore_cb infrastructure to obtain device RAM ranges as the device drivers probe the device and get access to this information. Then, we'll add these ranges to the vmcore, adding more PT_LOAD entries and updating the offsets+vmcore size. Use Kconfig tricks to include this code automatically only if (a) there is a device driver compiled that implements the callback (PROVIDE_PROC_VMCORE_DEVICE_RAM) and; (b) the architecture actually needs this information (NEED_PROC_VMCORE_DEVICE_RAM). The current target use case is s390, which only creates an elf64 elfcore, so focusing on elf64 is sufficient. Signed-off-by: David Hildenbrand --- fs/proc/Kconfig | 25 ++++++ fs/proc/vmcore.c | 156 +++++++++++++++++++++++++++++++++++++ include/linux/crash_dump.h | 9 +++ 3 files changed, 190 insertions(+) diff --git a/fs/proc/Kconfig b/fs/proc/Kconfig index d80a1431ef7b..1e11de5f9380 100644 --- a/fs/proc/Kconfig +++ b/fs/proc/Kconfig @@ -61,6 +61,31 @@ config PROC_VMCORE_DEVICE_DUMP as ELF notes to /proc/vmcore. You can still disable device dump using the kernel command line option 'novmcoredd'. +config PROVIDE_PROC_VMCORE_DEVICE_RAM + def_bool n + +config NEED_PROC_VMCORE_DEVICE_RAM + def_bool n + +config PROC_VMCORE_DEVICE_RAM + def_bool y + depends on PROC_VMCORE + depends on NEED_PROC_VMCORE_DEVICE_RAM + depends on PROVIDE_PROC_VMCORE_DEVICE_RAM + help + If the elfcore hdr is allocated and prepared by the dump kernel + ("2nd kernel") instead of the crashed kernel, RAM provided by memory + devices such as virtio-mem will not be included in the dump + image, because only the device driver can properly detect them. + + With this config enabled, these RAM ranges will be queried from the + device drivers once the device gets probed, so they can be included + in the crash dump. 
+ + Relevant architectures should select NEED_PROC_VMCORE_DEVICE_RAM + and relevant device drivers should select + PROVIDE_PROC_VMCORE_DEVICE_RAM. + config PROC_SYSCTL bool "Sysctl support (/proc/sys)" if EXPERT depends on PROC_FS diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c index 3e90416ee54e..c332a9a4920b 100644 --- a/fs/proc/vmcore.c +++ b/fs/proc/vmcore.c @@ -69,6 +69,8 @@ static LIST_HEAD(vmcore_cb_list); /* Whether the vmcore has been opened once. */ static bool vmcore_opened; +static void vmcore_process_device_ram(struct vmcore_cb *cb); + void register_vmcore_cb(struct vmcore_cb *cb) { INIT_LIST_HEAD(&cb->next); @@ -80,6 +82,8 @@ void register_vmcore_cb(struct vmcore_cb *cb) */ if (vmcore_opened) pr_warn_once("Unexpected vmcore callback registration\n"); + else if (cb->get_device_ram) + vmcore_process_device_ram(cb); mutex_unlock(&vmcore_mutex); } EXPORT_SYMBOL_GPL(register_vmcore_cb); @@ -1511,6 +1515,158 @@ int vmcore_add_device_dump(struct vmcoredd_data *data) EXPORT_SYMBOL(vmcore_add_device_dump); #endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */ +#ifdef CONFIG_PROC_VMCORE_DEVICE_RAM +static int vmcore_realloc_elfcore_buffer_elf64(size_t new_size) +{ + char *elfcorebuf_new; + + if (WARN_ON_ONCE(new_size < elfcorebuf_sz)) + return -EINVAL; + if (get_order(elfcorebuf_sz_orig) == get_order(new_size)) { + elfcorebuf_sz_orig = new_size; + return 0; + } + + elfcorebuf_new = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, + get_order(new_size)); + if (!elfcorebuf_new) + return -ENOMEM; + memcpy(elfcorebuf_new, elfcorebuf, elfcorebuf_sz); + free_pages((unsigned long)elfcorebuf, get_order(elfcorebuf_sz_orig)); + elfcorebuf = elfcorebuf_new; + elfcorebuf_sz_orig = new_size; + return 0; +} + +static void vmcore_reset_offsets_elf64(void) +{ + Elf64_Phdr *phdr_start = (Elf64_Phdr *)(elfcorebuf + sizeof(Elf64_Ehdr)); + loff_t vmcore_off = elfcorebuf_sz + elfnotes_sz; + Elf64_Ehdr *ehdr = (Elf64_Ehdr *)elfcorebuf; + Elf64_Phdr *phdr; + int i; + + for (i = 0, phdr = phdr_start; i < ehdr->e_phnum; i++, phdr++) { + u64 start, end; + + /* + * After merge_note_headers_elf64() we should only have a single + * PT_NOTE entry that starts immediately after elfcorebuf_sz. + */ + if (phdr->p_type == PT_NOTE) { + phdr->p_offset = elfcorebuf_sz; + continue; + } + + start = rounddown(phdr->p_offset, PAGE_SIZE); + end = roundup(phdr->p_offset + phdr->p_memsz, PAGE_SIZE); + phdr->p_offset = vmcore_off + (phdr->p_offset - start); + vmcore_off = vmcore_off + end - start; + } + set_vmcore_list_offsets(elfcorebuf_sz, elfnotes_sz, &vmcore_list); +} + +static int vmcore_add_device_ram_elf64(struct list_head *list, size_t count) +{ + Elf64_Phdr *phdr_start = (Elf64_Phdr *)(elfcorebuf + sizeof(Elf64_Ehdr)); + Elf64_Ehdr *ehdr = (Elf64_Ehdr *)elfcorebuf; + struct vmcore_mem_node *cur; + Elf64_Phdr *phdr; + size_t new_size; + int rc; + + if ((Elf32_Half)(ehdr->e_phnum + count) != ehdr->e_phnum + count) { + pr_err("Kdump: too many device ram ranges\n"); + return -ENOSPC; + } + + /* elfcorebuf_sz must always cover full pages. */ + new_size = sizeof(Elf64_Ehdr) + + (ehdr->e_phnum + count) * sizeof(Elf64_Phdr); + new_size = roundup(new_size, PAGE_SIZE); + + /* + * Make sure we have sufficient space to include the new PT_LOAD + * entries. + */ + rc = vmcore_realloc_elfcore_buffer_elf64(new_size); + if (rc) { + pr_err("Kdump: resizing elfcore failed\n"); + return rc; + } + + /* Modify our used elfcore buffer size to cover the new entries. */ + elfcorebuf_sz = new_size; + + /* Fill the added PT_LOAD entries. 
*/ + phdr = phdr_start + ehdr->e_phnum; + list_for_each_entry(cur, list, list) { + WARN_ON_ONCE(!IS_ALIGNED(cur->paddr | cur->size, PAGE_SIZE)); + elfcorehdr_fill_device_ram_ptload_elf64(phdr, cur->paddr, cur->size); + + /* p_offset will be adjusted later. */ + phdr++; + ehdr->e_phnum++; + } + list_splice_tail(list, &vmcore_list); + + /* We changed elfcorebuf_sz and added new entries; reset all offsets. */ + vmcore_reset_offsets_elf64(); + + /* Finally, recalculated the total vmcore size. */ + vmcore_size = get_vmcore_size(elfcorebuf_sz, elfnotes_sz, + &vmcore_list); + proc_vmcore->size = vmcore_size; + return 0; +} + +static void vmcore_process_device_ram(struct vmcore_cb *cb) +{ + unsigned char *e_ident = (unsigned char *)elfcorebuf; + struct vmcore_mem_node *first, *m; + LIST_HEAD(list); + int count; + + if (cb->get_device_ram(cb, &list)) { + pr_err("Kdump: obtaining device ram ranges failed\n"); + return; + } + count = list_count_nodes(&list); + if (!count) + return; + + /* We only support Elf64 dumps for now. */ + if (WARN_ON_ONCE(e_ident[EI_CLASS] != ELFCLASS64)) { + pr_err("Kdump: device ram ranges only support Elf64\n"); + goto out_free; + } + + /* + * For some reason these ranges are already know? Might happen + * with unusual register->unregister->register sequences; we'll simply + * sanity check using the first range. + */ + first = list_first_entry(&list, struct vmcore_mem_node, list); + list_for_each_entry(m, &vmcore_list, list) { + unsigned long long m_end = m->paddr + m->size; + unsigned long long first_end = first->paddr + first->size; + + if (first->paddr < m_end && m->paddr < first_end) + goto out_free; + } + + /* If adding the mem nodes succeeds, they must not be freed. */ + if (!vmcore_add_device_ram_elf64(&list, count)) + return; +out_free: + vmcore_free_mem_nodes(&list); +} +#else /* !CONFIG_PROC_VMCORE_DEVICE_RAM */ +static void vmcore_process_device_ram(struct vmcore_cb *cb) +{ +} +#endif /* CONFIG_PROC_VMCORE_DEVICE_RAM */ + /* Free all dumps in vmcore device dump list */ static void vmcore_free_device_dumps(void) { diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h index 722dbcff7371..8e581a053d7f 100644 --- a/include/linux/crash_dump.h +++ b/include/linux/crash_dump.h @@ -20,6 +20,8 @@ extern int elfcorehdr_alloc(unsigned long long *addr, unsigned long long *size); extern void elfcorehdr_free(unsigned long long addr); extern ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos); extern ssize_t elfcorehdr_read_notes(char *buf, size_t count, u64 *ppos); +void elfcorehdr_fill_device_ram_ptload_elf64(Elf64_Phdr *phdr, + unsigned long long paddr, unsigned long long size); extern int remap_oldmem_pfn_range(struct vm_area_struct *vma, unsigned long from, unsigned long pfn, unsigned long size, pgprot_t prot); @@ -99,6 +101,12 @@ static inline void vmcore_unusable(void) * indicated in the vmcore instead. For example, a ballooned page * contains no data and reading from such a page will cause high * load in the hypervisor. + * @get_device_ram: query RAM ranges that can only be detected by device + * drivers, such as the virtio-mem driver, so they can be included in + * the crash dump on architectures that allocate the elfcore hdr in the dump + * ("2nd") kernel. Indicated RAM ranges may contain holes to reduce the + * total number of ranges; such holes can be detected using the pfn_is_ram + * callback just like for other RAM. * @next: List head to manage registered callbacks internally; initialized by * register_vmcore_cb(). 
* @@ -109,6 +117,7 @@ static inline void vmcore_unusable(void) */ struct vmcore_cb { bool (*pfn_is_ram)(struct vmcore_cb *cb, unsigned long pfn); + int (*get_device_ram)(struct vmcore_cb *cb, struct list_head *list); struct list_head next; }; extern void register_vmcore_cb(struct vmcore_cb *cb); From patchwork Fri Oct 25 15:11:30 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13850952 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CF20621A4D1 for ; Fri, 25 Oct 2024 15:14:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729869288; cv=none; b=QeWV6OuMCLN/o3+qaj8e1qgcJlbwwohTU4ht1flSAyxjIYbUj9gNsB0O9ns7BwLqjb484O16wrhqZmcjsGHCTqy7CQourEaOPKy96f1RWpPafULID50rIHvI2PHVDwzPYXzGx0MetcOh1C8HQ4elml9IcC3aVCGfEI9QAFQVAPg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729869288; c=relaxed/simple; bh=Lbj/6ZDYkPFZxCNambrPjgDOlmbT2PYTi20WVpcLMzQ=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Qe0Grxoq6g19LEkGAxJlJZGkshB4uTRL/HGdXG6wofTO52oCl9EUH//77STpO8RL6y17xbNwM/dFnvP6dNRiAtKaBSK/jgefJHQGivgwTVP5gtKvjcP96YbzdaUakaUiERs4gFpQAhl/Pswvm5rJ6NUj/kJhUBTlFwaCJ3R/MYI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=AFgtw71n; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="AFgtw71n" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1729869285; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ACGdF21cKkvnMHoeXpjOJK7M5S4ojlG1Lcv7sY6UCa8=; b=AFgtw71nPiM9DTCs2WzBEGHUojiPIA+/1CrckALFoVmNCNE69+YKM77B8+ldk9gKAkMibx TWh6o+YKknz5g7kg3IDp0tSB+rkSs/e2ZVEawzDEIQcmSQNPun1K4Ji8iCGp2i8ZtnOKmg JinW6n3dMKayabFiZC0BfZvBgAanMbc= Received: from mx-prod-mc-04.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-640-JyKffLwLP4WOg7IrO6F-yg-1; Fri, 25 Oct 2024 11:12:58 -0400 X-MC-Unique: JyKffLwLP4WOg7IrO6F-yg-1 Received: from mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-04.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id CC5061955F35; Fri, 25 Oct 2024 15:12:53 +0000 (UTC) Received: from t14s.redhat.com (unknown [10.22.65.27]) 
by mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 93CD030001A7; Fri, 25 Oct 2024 15:12:45 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org, linux-s390@vger.kernel.org, virtualization@lists.linux.dev, kvm@vger.kernel.org, linux-fsdevel@vger.kernel.org, kexec@lists.infradead.org, David Hildenbrand , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , =?utf-8?q?Eugenio_P=C3=A9rez?= , Baoquan He , Vivek Goyal , Dave Young , Thomas Huth , Cornelia Huck , Janosch Frank , Claudio Imbrenda , Eric Farman , Andrew Morton Subject: [PATCH v1 08/11] virtio-mem: mark device ready before registering callbacks in kdump mode Date: Fri, 25 Oct 2024 17:11:30 +0200 Message-ID: <20241025151134.1275575-9-david@redhat.com> In-Reply-To: <20241025151134.1275575-1-david@redhat.com> References: <20241025151134.1275575-1-david@redhat.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.4 After the callbacks are registered we may immediately get a callback. So mark the device ready before registering the callbacks. Signed-off-by: David Hildenbrand --- drivers/virtio/virtio_mem.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c index b0b871441578..126f1d669bb0 100644 --- a/drivers/virtio/virtio_mem.c +++ b/drivers/virtio/virtio_mem.c @@ -2648,6 +2648,7 @@ static int virtio_mem_init_hotplug(struct virtio_mem *vm) if (rc) goto out_unreg_pm; + virtio_device_ready(vm->vdev); return 0; out_unreg_pm: unregister_pm_notifier(&vm->pm_notifier); @@ -2729,6 +2730,8 @@ static bool virtio_mem_vmcore_pfn_is_ram(struct vmcore_cb *cb, static int virtio_mem_init_kdump(struct virtio_mem *vm) { + /* We must be prepared to receive a callback immediately. 
*/ + virtio_device_ready(vm->vdev); #ifdef CONFIG_PROC_VMCORE dev_info(&vm->vdev->dev, "memory hot(un)plug disabled in kdump kernel\n"); vm->vmcore_cb.pfn_is_ram = virtio_mem_vmcore_pfn_is_ram; @@ -2870,8 +2873,6 @@ static int virtio_mem_probe(struct virtio_device *vdev) if (rc) goto out_del_vq; - virtio_device_ready(vdev); - /* trigger a config update to start processing the requested_size */ if (!vm->in_kdump) { atomic_set(&vm->config_changed, 1); From patchwork Fri Oct 25 15:11:31 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13850949 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B4F2421B85A for ; Fri, 25 Oct 2024 15:13:07 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729869189; cv=none; b=bR4YEAN2Nr1+EEYGyzUvKU/ncbw10zjT4xRmJmIryy/iZQ4Mtcy6MgcCb56raxsplfk9O4i93g/9tkBioFZgsTUyOS6wGAdZfmnvpaUlpHViLeUPj/jgGQtlS22Kw0XDY2XzB1Uy8TYJGJbwJYl3b6eQPrqLwas36G9QBNevyvs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729869189; c=relaxed/simple; bh=fQJBnTtQ1k3tZbAezsg2RYRu1rYxlaYdIOHefLMEcds=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=LuLwrmlxuYI+0OTYRrwbiQE07xcdGBEa0J0L5VY0mw1GwSZCrBfqEjtaoh6Ql7UoIMkzHfxgq2rPy+qU0GMcw39qmZJmtlYK5/1JIJrgfSg6Ijvx6ppCmnlLwe90TVCfp9FxGIBRCuL6LDwyzcCudqfgL6BeP1C0eN9HjJ0eaik= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=NdNFlVCY; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="NdNFlVCY" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1729869186; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=PdMqUIxpls1gNH4eQdrzo/q29zzRB98r4tpBih1lnzQ=; b=NdNFlVCYVTBUe4Hw1bI7QDxS9w4gJ2sWwT+9BsgWwhkwRAGEXqTKcbPFLNVpVMtjFIaBWG 37vW/GjlmqRJnhhjkkmNQGzCkk63GyF/+HIL5h3/T05z1pFrOdJzVTgXlWe4SUFPwcsfVx ffZjA9FcP8GNirvC7tRW9tk3mWlJdEA= Received: from mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-642-o3a0P1v0Nd-PreOhqdKZvg-1; Fri, 25 Oct 2024 11:13:03 -0400 X-MC-Unique: o3a0P1v0Nd-PreOhqdKZvg-1 Received: from mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by 
mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 2135819560A2; Fri, 25 Oct 2024 15:13:01 +0000 (UTC) Received: from t14s.redhat.com (unknown [10.22.65.27]) by mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 5282C300018D; Fri, 25 Oct 2024 15:12:54 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org, linux-s390@vger.kernel.org, virtualization@lists.linux.dev, kvm@vger.kernel.org, linux-fsdevel@vger.kernel.org, kexec@lists.infradead.org, David Hildenbrand , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , =?utf-8?q?Eugenio_P=C3=A9rez?= , Baoquan He , Vivek Goyal , Dave Young , Thomas Huth , Cornelia Huck , Janosch Frank , Claudio Imbrenda , Eric Farman , Andrew Morton Subject: [PATCH v1 09/11] virtio-mem: remember usable region size Date: Fri, 25 Oct 2024 17:11:31 +0200 Message-ID: <20241025151134.1275575-10-david@redhat.com> In-Reply-To: <20241025151134.1275575-1-david@redhat.com> References: <20241025151134.1275575-1-david@redhat.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.4 Let's remember the usable region size, which will be helpful in kdump mode next. Signed-off-by: David Hildenbrand --- drivers/virtio/virtio_mem.c | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c index 126f1d669bb0..73477d5b79cf 100644 --- a/drivers/virtio/virtio_mem.c +++ b/drivers/virtio/virtio_mem.c @@ -133,6 +133,8 @@ struct virtio_mem { uint64_t addr; /* Maximum region size in bytes. */ uint64_t region_size; + /* Usable region size in bytes. */ + uint64_t usable_region_size; /* The parent resource for all memory added via this device. */ struct resource *parent_resource; @@ -2368,7 +2370,7 @@ static int virtio_mem_cleanup_pending_mb(struct virtio_mem *vm) static void virtio_mem_refresh_config(struct virtio_mem *vm) { const struct range pluggable_range = mhp_get_pluggable_range(true); - uint64_t new_plugged_size, usable_region_size, end_addr; + uint64_t new_plugged_size, end_addr; /* the plugged_size is just a reflection of what _we_ did previously */ virtio_cread_le(vm->vdev, struct virtio_mem_config, plugged_size, @@ -2378,8 +2380,8 @@ static void virtio_mem_refresh_config(struct virtio_mem *vm) /* calculate the last usable memory block id */ virtio_cread_le(vm->vdev, struct virtio_mem_config, - usable_region_size, &usable_region_size); - end_addr = min(vm->addr + usable_region_size - 1, + usable_region_size, &vm->usable_region_size); + end_addr = min(vm->addr + vm->usable_region_size - 1, pluggable_range.end); if (vm->in_sbm) { @@ -2763,6 +2765,8 @@ static int virtio_mem_init(struct virtio_mem *vm) virtio_cread_le(vm->vdev, struct virtio_mem_config, addr, &vm->addr); virtio_cread_le(vm->vdev, struct virtio_mem_config, region_size, &vm->region_size); + virtio_cread_le(vm->vdev, struct virtio_mem_config, usable_region_size, + &vm->usable_region_size); /* Determine the nid for the device based on the lowest address. 
*/ if (vm->nid == NUMA_NO_NODE) From patchwork Fri Oct 25 15:11:32 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13850950 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 10B2321E608 for ; Fri, 25 Oct 2024 15:13:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729869197; cv=none; b=V84J0olODio1+ZUkQyVESJl5vJat1CbuIs88dl2MQJbbvcvxB/oght1zF/xi6MD2lLLvnRvlOrpiwdjidkH9wWQAjSlPcH65RiRPHqTKjTPZRhu+IxQ8K9ZkPVqSsEmeGN1RWlBfzmzeuHyflEbFqbYer+BioDYzOjZg8btt5FY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729869197; c=relaxed/simple; bh=dRgtVgI4Sd1wwZ3rW+JZU7d5KVuXtHp4HRtbhjg+8kw=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=qOU1m3NIRXqbc5kIhmF+ILShjiiZ4gXNiqGzSy/c/mkdrXtpQ684DsJDkFcIFzIYQImKgOxQrbgQQ0u0DGL+QZf/0AMEeeNyc0BAOczcir8zTolWNE7CKuPt8VWAEmZHCYGJK3P73NYrzjvO3R0Cj9Mz9hokwInLGwwZLsvA39g= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=jShXYWPt; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="jShXYWPt" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1729869194; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=mE6UZtvB4UrwqxDxBvXj02Eqawd4qPQUjTyB3c978sA=; b=jShXYWPtNueJxnQRao9sOto0TZM9uybO9ME4q04+FsmPBh0o70ILCrDJZLOWdeFoGlDM7A 1ROz+woSCMRNprdLg8xEgBopWOA3/W+3PKlU868FPQaZ6SHr1BcMemCOt21vPkJJ9OgZAY wt9YkQreZQ2jqAlBZl4mc3bV+/56ohg= Received: from mx-prod-mc-02.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-373-3cSc-QklMMuKHb2DIsw0Pg-1; Fri, 25 Oct 2024 11:13:10 -0400 X-MC-Unique: 3cSc-QklMMuKHb2DIsw0Pg-1 Received: from mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id A10BC1955D4A; Fri, 25 Oct 2024 15:13:08 +0000 (UTC) Received: from t14s.redhat.com (unknown [10.22.65.27]) by mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 86CEF300018D; Fri, 25 Oct 2024 15:13:01 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org, linux-s390@vger.kernel.org, 
virtualization@lists.linux.dev, kvm@vger.kernel.org, linux-fsdevel@vger.kernel.org, kexec@lists.infradead.org, David Hildenbrand , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , =?utf-8?q?Eugenio_P=C3=A9rez?= , Baoquan He , Vivek Goyal , Dave Young , Thomas Huth , Cornelia Huck , Janosch Frank , Claudio Imbrenda , Eric Farman , Andrew Morton Subject: [PATCH v1 10/11] virtio-mem: support CONFIG_PROC_VMCORE_DEVICE_RAM Date: Fri, 25 Oct 2024 17:11:32 +0200 Message-ID: <20241025151134.1275575-11-david@redhat.com> In-Reply-To: <20241025151134.1275575-1-david@redhat.com> References: <20241025151134.1275575-1-david@redhat.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.4 Let's implement the get_device_ram() vmcore callback, so architectures that select NEED_PROC_VMCORE_DEVICE_RAM, like s390 soon, can include that memory in a crash dump. Merge ranges and process ranges that might contain a mixture of plugged and unplugged memory, to reduce the total number of ranges. Signed-off-by: David Hildenbrand --- drivers/virtio/Kconfig | 1 + drivers/virtio/virtio_mem.c | 88 +++++++++++++++++++++++++++++++++++++ 2 files changed, 89 insertions(+) diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig index 2eb747311bfd..60fdaf2c2c49 100644 --- a/drivers/virtio/Kconfig +++ b/drivers/virtio/Kconfig @@ -128,6 +128,7 @@ config VIRTIO_MEM depends on MEMORY_HOTREMOVE depends on CONTIG_ALLOC depends on EXCLUSIVE_SYSTEM_RAM + select PROVIDE_PROC_VMCORE_DEVICE_RAM if PROC_VMCORE help This driver provides access to virtio-mem paravirtualized memory devices, allowing to hotplug and hotunplug memory. diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c index 73477d5b79cf..1ae1199a7617 100644 --- a/drivers/virtio/virtio_mem.c +++ b/drivers/virtio/virtio_mem.c @@ -2728,6 +2728,91 @@ static bool virtio_mem_vmcore_pfn_is_ram(struct vmcore_cb *cb, mutex_unlock(&vm->hotplug_mutex); return is_ram; } + +#ifdef CONFIG_PROC_VMCORE_DEVICE_RAM +static int virtio_mem_vmcore_add_device_ram(struct virtio_mem *vm, + struct list_head *list, uint64_t start, uint64_t end) +{ + int rc; + + rc = vmcore_alloc_add_mem_node(list, start, end - start); + if (rc) + dev_err(&vm->vdev->dev, + "Error adding device RAM range: %d\n", rc); + return rc; +} + +static int virtio_mem_vmcore_get_device_ram(struct vmcore_cb *cb, + struct list_head *list) +{ + struct virtio_mem *vm = container_of(cb, struct virtio_mem, + vmcore_cb); + const uint64_t device_start = vm->addr; + const uint64_t device_end = vm->addr + vm->usable_region_size; + uint64_t chunk_size, cur_start, cur_end, plugged_range_start = 0; + LIST_HEAD(tmp_list); + int rc; + + if (!vm->plugged_size) + return 0; + + /* Process memory sections, unless the device block size is bigger. */ + chunk_size = max_t(uint64_t, PFN_PHYS(PAGES_PER_SECTION), + vm->device_block_size); + + mutex_lock(&vm->hotplug_mutex); + + /* + * We process larger chunks and indicate the complete chunk if any + * block in there is plugged. This reduces the number of pfn_is_ram() + * callbacks and mimics what is effectively being done when the old + * kernel would add complete memory sections/blocks to the elfcore hdr.
+ */ + cur_start = device_start; + for (cur_start = device_start; cur_start < device_end; cur_start = cur_end) { + cur_end = ALIGN_DOWN(cur_start + chunk_size, chunk_size); + cur_end = min_t(uint64_t, cur_end, device_end); + + rc = virtio_mem_send_state_request(vm, cur_start, + cur_end - cur_start); + + if (rc < 0) { + dev_err(&vm->vdev->dev, + "Error querying block states: %d\n", rc); + goto out; + } else if (rc != VIRTIO_MEM_STATE_UNPLUGGED) { + /* Merge ranges with plugged memory. */ + if (!plugged_range_start) + plugged_range_start = cur_start; + continue; + } + + /* Flush any plugged range. */ + if (plugged_range_start) { + rc = virtio_mem_vmcore_add_device_ram(vm, &tmp_list, + plugged_range_start, + cur_start); + if (rc) + goto out; + plugged_range_start = 0; + } + } + + /* Flush any plugged range. */ + if (plugged_range_start) + rc = virtio_mem_vmcore_add_device_ram(vm, &tmp_list, + plugged_range_start, + cur_start); +out: + mutex_unlock(&vm->hotplug_mutex); + if (rc < 0) { + vmcore_free_mem_nodes(&tmp_list); + return rc; + } + list_splice_tail(&tmp_list, list); + return 0; +} +#endif /* CONFIG_PROC_VMCORE_DEVICE_RAM */ #endif /* CONFIG_PROC_VMCORE */ static int virtio_mem_init_kdump(struct virtio_mem *vm) @@ -2737,6 +2822,9 @@ static int virtio_mem_init_kdump(struct virtio_mem *vm) #ifdef CONFIG_PROC_VMCORE dev_info(&vm->vdev->dev, "memory hot(un)plug disabled in kdump kernel\n"); vm->vmcore_cb.pfn_is_ram = virtio_mem_vmcore_pfn_is_ram; +#ifdef CONFIG_PROC_VMCORE_DEVICE_RAM + vm->vmcore_cb.get_device_ram = virtio_mem_vmcore_get_device_ram; +#endif /* CONFIG_PROC_VMCORE_DEVICE_RAM */ register_vmcore_cb(&vm->vmcore_cb); return 0; #else /* CONFIG_PROC_VMCORE */ From patchwork Fri Oct 25 15:11:33 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13850951 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2FB1E2064E6 for ; Fri, 25 Oct 2024 15:13:24 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729869207; cv=none; b=dnwd/b0nG2vu5lFm0yVnSeoOERxdI2z/hndRG2SBrxKjfLPNxnYGzBFxQasCRSFsFxDIXGja4uA8C5w2zwmmWW2lQEwNPjLBHZWlU8lsZTc9IHjjmiLWcI2o8337t5iBEQQpRmdGeK8TiQg4fk05f2DE50T972u9sjm42zMiyEY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729869207; c=relaxed/simple; bh=l8Vke+v1qJcY3Kdt2qKC4wrsbTNgJAV880fY1X1hrJ8=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=cqFQE4PRmdS3oTAF48hT4X1yqygG0LoMckAe4SUyZ2CucNhXWyjxjHHqKQyXq97D0W02DVB2f9jSYafxX60mPFqq+ZWlpEg0t5GnD034A3qnmBD43akcHgU1G6MJPutcNqDhHWlAUNU0gXoxifQAojgLv6wprciimH5IeEdwt5g= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=Py6SXWnR; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com 
header.b="Py6SXWnR" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1729869204; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=duV64aDOzPCg70Erai+k0MR4CZFFfLJs/Tny3PQhGqo=; b=Py6SXWnRRaYj5YfdQ32q1Yz4tx1KceUdfrsMSo9rXb1hCRYtJuxPMeU8ELBtJ32Aby/wK/ mhlU3WLXKkd5ykcpAKH66baagvhntNHGedC+8wnLT5i+WVsKHJCChIoKylnhv0UHfOpJHn o6afDZvZNqmL17RURTtHktZCkTVDDUw= Received: from mx-prod-mc-02.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-44-_q3EKHFQNEmGt7K_9EZzAQ-1; Fri, 25 Oct 2024 11:13:19 -0400 X-MC-Unique: _q3EKHFQNEmGt7K_9EZzAQ-1 Received: from mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 750F61956096; Fri, 25 Oct 2024 15:13:17 +0000 (UTC) Received: from t14s.redhat.com (unknown [10.22.65.27]) by mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 27021300018D; Fri, 25 Oct 2024 15:13:08 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org, linux-s390@vger.kernel.org, virtualization@lists.linux.dev, kvm@vger.kernel.org, linux-fsdevel@vger.kernel.org, kexec@lists.infradead.org, David Hildenbrand , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , "Michael S. Tsirkin" , Jason Wang , Xuan Zhuo , =?utf-8?q?Eugenio_P=C3=A9rez?= , Baoquan He , Vivek Goyal , Dave Young , Thomas Huth , Cornelia Huck , Janosch Frank , Claudio Imbrenda , Eric Farman , Andrew Morton Subject: [PATCH v1 11/11] s390/kdump: virtio-mem kdump support (CONFIG_PROC_VMCORE_DEVICE_RAM) Date: Fri, 25 Oct 2024 17:11:33 +0200 Message-ID: <20241025151134.1275575-12-david@redhat.com> In-Reply-To: <20241025151134.1275575-1-david@redhat.com> References: <20241025151134.1275575-1-david@redhat.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.4 Let's add support for including virtio-mem device RAM in the crash dump, setting NEED_PROC_VMCORE_DEVICE_RAM, and implementing elfcorehdr_fill_device_ram_ptload_elf64(). To avoid code duplication, factor out the code to fill a PT_LOAD entry. 
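As a rough illustration of what another architecture selecting NEED_PROC_VMCORE_DEVICE_RAM would have to provide, a minimal sketch of the hook is shown below. It assumes a hypothetical architecture where the old kernel's physical addresses can be reused as virtual addresses as-is; s390 instead has to add the old identity base, see the crash_dump.c hunk below. p_offset only has to carry the physical address at this point, because the generic vmcore code recomputes the file offsets of the added PT_LOAD entries via vmcore_reset_offsets_elf64().

#include <linux/crash_dump.h>
#include <linux/elf.h>

/* Sketch only: hypothetical architecture with a plain 1:1 identity mapping. */
void elfcorehdr_fill_device_ram_ptload_elf64(Elf64_Phdr *phdr,
		unsigned long long paddr, unsigned long long size)
{
	phdr->p_type = PT_LOAD;
	phdr->p_vaddr = paddr;		/* 1:1 mapping assumed */
	phdr->p_offset = paddr;		/* recomputed later by the vmcore code */
	phdr->p_paddr = paddr;
	phdr->p_filesz = size;
	phdr->p_memsz = size;
	phdr->p_flags = PF_R | PF_W | PF_X;
	phdr->p_align = PAGE_SIZE;
}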
Signed-off-by: David Hildenbrand --- arch/s390/Kconfig | 1 + arch/s390/kernel/crash_dump.c | 39 ++++++++++++++++++++++++++++------- 2 files changed, 32 insertions(+), 8 deletions(-) diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig index d339fe4fdedf..d80450d957a9 100644 --- a/arch/s390/Kconfig +++ b/arch/s390/Kconfig @@ -230,6 +230,7 @@ config S390 select MODULES_USE_ELF_RELA select NEED_DMA_MAP_STATE if PCI select NEED_PER_CPU_EMBED_FIRST_CHUNK + select NEED_PROC_VMCORE_DEVICE_RAM if PROC_VMCORE select NEED_SG_DMA_LENGTH if PCI select OLD_SIGACTION select OLD_SIGSUSPEND3 diff --git a/arch/s390/kernel/crash_dump.c b/arch/s390/kernel/crash_dump.c index edae13416196..97b9e71b734d 100644 --- a/arch/s390/kernel/crash_dump.c +++ b/arch/s390/kernel/crash_dump.c @@ -497,6 +497,19 @@ static int get_mem_chunk_cnt(void) return cnt; } +static void fill_ptload(Elf64_Phdr *phdr, unsigned long paddr, + unsigned long vaddr, unsigned long size) +{ + phdr->p_type = PT_LOAD; + phdr->p_vaddr = vaddr; + phdr->p_offset = paddr; + phdr->p_paddr = paddr; + phdr->p_filesz = size; + phdr->p_memsz = size; + phdr->p_flags = PF_R | PF_W | PF_X; + phdr->p_align = PAGE_SIZE; +} + /* * Initialize ELF loads (new kernel) */ @@ -509,14 +522,8 @@ static void loads_init(Elf64_Phdr *phdr, bool os_info_has_vm) if (os_info_has_vm) old_identity_base = os_info_old_value(OS_INFO_IDENTITY_BASE); for_each_physmem_range(idx, &oldmem_type, &start, &end) { - phdr->p_type = PT_LOAD; - phdr->p_vaddr = old_identity_base + start; - phdr->p_offset = start; - phdr->p_paddr = start; - phdr->p_filesz = end - start; - phdr->p_memsz = end - start; - phdr->p_flags = PF_R | PF_W | PF_X; - phdr->p_align = PAGE_SIZE; + fill_ptload(phdr, start, old_identity_base + start, + end - start); phdr++; } } @@ -526,6 +533,22 @@ static bool os_info_has_vm(void) return os_info_old_value(OS_INFO_KASLR_OFFSET); } +#ifdef CONFIG_PROC_VMCORE_DEVICE_RAM +/* + * Fill PT_LOAD for a physical memory range owned by a device and detected by + * its device driver. + */ +void elfcorehdr_fill_device_ram_ptload_elf64(Elf64_Phdr *phdr, + unsigned long long paddr, unsigned long long size) +{ + unsigned long old_identity_base = 0; + + if (os_info_has_vm()) + old_identity_base = os_info_old_value(OS_INFO_IDENTITY_BASE); + fill_ptload(phdr, paddr, old_identity_base + paddr, size); +} +#endif + /* * Prepare PT_LOAD type program header for kernel image region */
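For reference, the provider side of the new interface could look roughly like the sketch below. It is illustrative only: the "foo" driver and all of its names are hypothetical, the CONFIG_PROC_VMCORE / CONFIG_PROC_VMCORE_DEVICE_RAM ifdefs are omitted, and virtio-mem's real implementation is in patch 10/11. A provider selects PROVIDE_PROC_VMCORE_DEVICE_RAM in Kconfig, reports its plugged ranges via vmcore_alloc_add_mem_node() from its get_device_ram() callback, and registers the callback when running in the kdump kernel.

#include <linux/crash_dump.h>
#include <linux/list.h>

/* Hypothetical device state; "foo" is a placeholder, not an existing driver. */
struct foo_dev {
	unsigned long long base;	/* start of device-managed RAM */
	unsigned long long size;	/* size of the currently plugged range */
	struct vmcore_cb vmcore_cb;
};

static bool foo_vmcore_pfn_is_ram(struct vmcore_cb *cb, unsigned long pfn)
{
	/* For this sketch, assume every reported page actually holds data. */
	return true;
}

static int foo_vmcore_get_device_ram(struct vmcore_cb *cb,
				     struct list_head *list)
{
	struct foo_dev *foo = container_of(cb, struct foo_dev, vmcore_cb);

	/* Report a single range; vmcore turns it into a PT_LOAD entry. */
	return vmcore_alloc_add_mem_node(list, foo->base, foo->size);
}

static void foo_kdump_init(struct foo_dev *foo)
{
	foo->vmcore_cb.pfn_is_ram = foo_vmcore_pfn_is_ram;
	foo->vmcore_cb.get_device_ram = foo_vmcore_get_device_ram;
	register_vmcore_cb(&foo->vmcore_cb);
}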