
vhost: release memory objects in error path

Message ID 20230522181021.403585-1-ppandit@redhat.com (mailing list archive)
State New, archived
Series: vhost: release memory objects in error path

Commit Message

Prasad Pandit May 22, 2023, 6:10 p.m. UTC
From: Prasad Pandit <pjp@fedoraproject.org>

vhost_dev_start function does not release memory objects in case
of an error. This may crash the guest with:

  stack trace of thread 125653:
  Program terminated with signal SIGSEGV, Segmentation fault
  #0  memory_listener_register (qemu-kvm + 0x6cda0f)
  #1  vhost_dev_start (qemu-kvm + 0x699301)
  #2  vhost_net_start (qemu-kvm + 0x45b03f)
  #3  virtio_net_set_status (qemu-kvm + 0x665672)
  #4  qmp_set_link (qemu-kvm + 0x548fd5)
  #5  net_vhost_user_event (qemu-kvm + 0x552c45)
  #6  tcp_chr_connect (qemu-kvm + 0x88d473)
  #7  tcp_chr_new_client (qemu-kvm + 0x88cf83)
  #8  tcp_chr_accept (qemu-kvm + 0x88b429)
  #9  qio_net_listener_channel_func (qemu-kvm + 0x7ac07c)
  #10 g_main_context_dispatch (libglib-2.0.so.0 + 0x54e2f)
  ===

Release memory_listener and virtqueue objects in the error path.

Signed-off-by: Prasad Pandit <pjp@fedoraproject.org>
---
 hw/virtio/vhost.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
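
For context, the error-unwind structure of vhost_dev_start() that the two hunks
below modify looks roughly like this. This is a condensed sketch reconstructed
from the hunks and the discussion, not the verbatim QEMU source; the
config-notifier, vring and dirty-log setup is elided:

    int vhost_dev_start(struct vhost_dev *hdev, VirtIODevice *vdev, bool vrings)
    {
        int i, r;

        hdev->started = true;
        vdev->vhost_started = true;

        r = vhost_dev_set_features(hdev, hdev->log_enabled);
        if (r < 0) {
            goto fail_features;
        }

        if (vhost_dev_has_iommu(hdev)) {
            /* registered early, before anything below can fail */
            memory_listener_register(&hdev->iommu_listener, vdev->dma_as);
        }

        r = hdev->vhost_ops->vhost_set_mem_table(hdev, hdev->mem);
        if (r < 0) {
            VHOST_OPS_DEBUG(r, "vhost_set_mem_table failed");
            goto fail_mem;
        }

        for (i = 0; i < hdev->nvqs; ++i) {
            r = vhost_virtqueue_start(hdev, vdev, hdev->vqs + i, hdev->vq_index + i);
            if (r < 0) {
                goto fail_vq;
            }
        }

        r = event_notifier_init(
            &hdev->vqs[VHOST_QUEUE_NUM_CONFIG_INR].masked_config_notifier, 0);
        if (r < 0) {
            goto fail_vq;          /* first hunk: previously "return r" */
        }

        /* ... config notifier, vrings, dirty-log setup elided ... */
        return 0;

    fail_vq:
        while (--i >= 0) {
            vhost_virtqueue_stop(hdev, vdev, hdev->vqs + i, hdev->vq_index + i);
        }
    fail_mem:
        /* second hunk: undo the registration done above; otherwise a later
         * start attempt can presumably trip over the stale listener in
         * memory_listener_register() (cf. the backtrace above) */
        if (vhost_dev_has_iommu(hdev)) {
            memory_listener_unregister(&hdev->iommu_listener);
        }
    fail_features:
        vdev->vhost_started = false;
        hdev->started = false;
        return r;
    }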

Comments

Peter Xu May 25, 2023, 1:02 p.m. UTC | #1
Hi, Prasad,

On Mon, May 22, 2023 at 11:40:21PM +0530, P J P wrote:
> From: Prasad Pandit <pjp@fedoraproject.org>
> 
> vhost_dev_start function does not release memory objects in case
> of an error. This may crash the guest with:
> 
>   stack trace of thread 125653:
>   Program terminated with signal SIGSEGV, Segmentation fault
>   #0  memory_listener_register (qemu-kvm + 0x6cda0f)
>   #1  vhost_dev_start (qemu-kvm + 0x699301)
>   #2  vhost_net_start (qemu-kvm + 0x45b03f)
>   #3  virtio_net_set_status (qemu-kvm + 0x665672)
>   #4  qmp_set_link (qemu-kvm + 0x548fd5)
>   #5  net_vhost_user_event (qemu-kvm + 0x552c45)
>   #6  tcp_chr_connect (qemu-kvm + 0x88d473)
>   #7  tcp_chr_new_client (qemu-kvm + 0x88cf83)
>   #8  tcp_chr_accept (qemu-kvm + 0x88b429)
>   #9  qio_net_listener_channel_func (qemu-kvm + 0x7ac07c)
>   #10 g_main_context_dispatch (libglib-2.0.so.0 + 0x54e2f)
>   ===
> 
> Release memory_listener and virtqueue objects in the error path.
> 
> Signed-off-by: Prasad Pandit <pjp@fedoraproject.org>

From the patch change itself it looks all right here.

IIRC this bug used to only reproduce on rt kernels, is it still the case?

Here besides doing correct unregister, does it also mean that even if
event_notifier_init() failed there's totally no error message anywhere?
Should we dump something when it fails?  And did you check why that failed?

Thanks,

> ---
>  hw/virtio/vhost.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> index 23da579ce2..e261ae7c49 100644
> --- a/hw/virtio/vhost.c
> +++ b/hw/virtio/vhost.c
> @@ -1942,7 +1942,7 @@ int vhost_dev_start(struct vhost_dev *hdev, VirtIODevice *vdev, bool vrings)
>      r = event_notifier_init(
>          &hdev->vqs[VHOST_QUEUE_NUM_CONFIG_INR].masked_config_notifier, 0);
>      if (r < 0) {
> -        return r;
> +        goto fail_vq;
>      }
>      event_notifier_test_and_clear(
>          &hdev->vqs[VHOST_QUEUE_NUM_CONFIG_INR].masked_config_notifier);
> @@ -2004,6 +2004,9 @@ fail_vq:
>      }
>  
>  fail_mem:
> +    if (vhost_dev_has_iommu(hdev)) {
> +        memory_listener_unregister(&hdev->iommu_listener);
> +    }
>  fail_features:
>      vdev->vhost_started = false;
>      hdev->started = false;
> -- 
> 2.40.1
> 
>
Prasad Pandit May 26, 2023, 6:54 a.m. UTC | #2
Hello Peter, all

On Thu, 25 May 2023 at 18:33, Peter Xu <peterx@redhat.com> wrote:

> IIRC this bug used to only reproduce on rt kernels, is it still the case?
>

* Yes, it's the same crash.


> Here besides doing correct unregister, does it also mean that even if
> event_notifier_init() failed there's totally no error message anywhere?
> Should we dump something when it fails?  And did you check why that failed?
>

* In the QEMU logs we see the following error:

        VHOST_OPS_DEBUG(r, "vhost_set_mem_table failed");
        goto fail_mem;

After this, execution likely did not reach the event_notifier_init() call at
all, because of the goto fail_mem (see the condensed view below).

* But in case it does fail, no error message gets logged. Do we want to add
one in this same patch?
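
To make that ordering concrete, a condensed view of the relevant part of
vhost_dev_start() (illustrative, reconstructed from the patch context, not the
verbatim source):

    r = hdev->vhost_ops->vhost_set_mem_table(hdev, hdev->mem);
    if (r < 0) {
        /* the error seen in the logs; the goto skips everything below */
        VHOST_OPS_DEBUG(r, "vhost_set_mem_table failed");
        goto fail_mem;
    }

    /* ... vhost_virtqueue_start() loop ... */

    /* only reached once vhost_set_mem_table() and the vq setup have succeeded */
    r = event_notifier_init(
        &hdev->vqs[VHOST_QUEUE_NUM_CONFIG_INR].masked_config_notifier, 0);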

Thank you.
---
  - Prasad
Peter Xu May 26, 2023, 2:35 p.m. UTC | #3
On Fri, May 26, 2023 at 12:24:07PM +0530, Prasad Pandit wrote:
> Hello Peter, all
> 
> On Thu, 25 May 2023 at 18:33, Peter Xu <peterx@redhat.com> wrote:
> 
> > IIRC this bug used to only reproduce on rt kernels, is it still the case?
> >
> 
> * Yes, it's the same crash.
> 
> 
> > Here besides doing correct unregister, does it also mean that even if
> > event_notifier_init() failed there's totally no error message anywhere?
> > Should we dump something when it fails?  And did you check why that failed?
> >
> 
> * In the QEMU logs we see the following error:
> 
>         VHOST_OPS_DEBUG(r, "vhost_set_mem_table failed");
>         goto fail_mem;
> 
> After this, execution likely did not reach the event_notifier_init() call at
> all, because of the goto fail_mem.

Ah so it's not about eventfd, okay!

> 
> * But in case it does fail, no error message gets logged. Do we want to add
> one in this same patch?

I see you already sent one, will read.

Thanks!

Patch

diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 23da579ce2..e261ae7c49 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -1942,7 +1942,7 @@ int vhost_dev_start(struct vhost_dev *hdev, VirtIODevice *vdev, bool vrings)
     r = event_notifier_init(
         &hdev->vqs[VHOST_QUEUE_NUM_CONFIG_INR].masked_config_notifier, 0);
     if (r < 0) {
-        return r;
+        goto fail_vq;
     }
     event_notifier_test_and_clear(
         &hdev->vqs[VHOST_QUEUE_NUM_CONFIG_INR].masked_config_notifier);
@@ -2004,6 +2004,9 @@ fail_vq:
     }
 
 fail_mem:
+    if (vhost_dev_has_iommu(hdev)) {
+        memory_listener_unregister(&hdev->iommu_listener);
+    }
 fail_features:
     vdev->vhost_started = false;
     hdev->started = false;