From patchwork Tue May 28 10:56:20 2019
X-Patchwork-Submitter: Stefano Garzarella
X-Patchwork-Id: 10964725
From: Stefano Garzarella
To: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
    kvm@vger.kernel.org, Stefan Hajnoczi, "David S. Miller",
    "Michael S. Tsirkin"
Subject: [PATCH 1/4] vsock/virtio: fix locking around 'the_virtio_vsock'
Date: Tue, 28 May 2019 12:56:20 +0200
Message-Id: <20190528105623.27983-2-sgarzare@redhat.com>
In-Reply-To: <20190528105623.27983-1-sgarzare@redhat.com>
References: <20190528105623.27983-1-sgarzare@redhat.com>

This patch protects reads of 'the_virtio_vsock' by taking the same
mutex used when it is set. We also move the 'the_virtio_vsock'
assignment to the end of .probe(), after all initialization has
finished, and to the beginning of .remove(), before any resources are
released, holding the lock until the end of the function.
Signed-off-by: Stefano Garzarella
---
 net/vmw_vsock/virtio_transport.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index 96ab344f17bb..d3ba7747aa73 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -68,7 +68,13 @@ struct virtio_vsock {
 
 static struct virtio_vsock *virtio_vsock_get(void)
 {
-	return the_virtio_vsock;
+	struct virtio_vsock *vsock;
+
+	mutex_lock(&the_virtio_vsock_mutex);
+	vsock = the_virtio_vsock;
+	mutex_unlock(&the_virtio_vsock_mutex);
+
+	return vsock;
 }
 
 static u32 virtio_transport_get_local_cid(void)
@@ -592,7 +598,6 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
 	atomic_set(&vsock->queued_replies, 0);
 
 	vdev->priv = vsock;
-	the_virtio_vsock = vsock;
 	mutex_init(&vsock->tx_lock);
 	mutex_init(&vsock->rx_lock);
 	mutex_init(&vsock->event_lock);
@@ -614,6 +619,8 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
 	virtio_vsock_event_fill(vsock);
 	mutex_unlock(&vsock->event_lock);
 
+	the_virtio_vsock = vsock;
+
 	mutex_unlock(&the_virtio_vsock_mutex);
 	return 0;
 
@@ -628,6 +635,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
 	struct virtio_vsock *vsock = vdev->priv;
 	struct virtio_vsock_pkt *pkt;
 
+	mutex_lock(&the_virtio_vsock_mutex);
+	the_virtio_vsock = NULL;
+
 	flush_work(&vsock->loopback_work);
 	flush_work(&vsock->rx_work);
 	flush_work(&vsock->tx_work);
@@ -667,13 +677,10 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
 	}
 	spin_unlock_bh(&vsock->loopback_list_lock);
 
-	mutex_lock(&the_virtio_vsock_mutex);
-	the_virtio_vsock = NULL;
-	mutex_unlock(&the_virtio_vsock_mutex);
-
 	vdev->config->del_vqs(vdev);
 
 	kfree(vsock);
+	mutex_unlock(&the_virtio_vsock_mutex);
 }
 
 static struct virtio_device_id id_table[] = {
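[Editor's note: the pattern in the patch above — publish a global pointer
only after full initialization, unpublish it before any teardown, and read
it under the same mutex — condensed into a minimal userspace sketch using
pthreads. All names here (device_state, the_state, probe, remove_dev) are
illustrative, not from the kernel source.]

	#include <pthread.h>
	#include <stdlib.h>

	struct device_state { int ready; };

	static struct device_state *the_state;	/* plays the role of 'the_virtio_vsock' */
	static pthread_mutex_t the_state_mutex = PTHREAD_MUTEX_INITIALIZER;

	/* Readers take the same mutex used by probe/remove, so they can never
	 * observe a half-initialized or already-freed object. */
	static struct device_state *state_get(void)
	{
		struct device_state *st;

		pthread_mutex_lock(&the_state_mutex);
		st = the_state;
		pthread_mutex_unlock(&the_state_mutex);
		return st;
	}

	static int probe(void)
	{
		struct device_state *st = calloc(1, sizeof(*st));

		if (!st)
			return -1;
		st->ready = 1;		/* ...complete all initialization... */

		pthread_mutex_lock(&the_state_mutex);
		the_state = st;		/* publish only after init is done */
		pthread_mutex_unlock(&the_state_mutex);
		return 0;
	}

	static void remove_dev(void)
	{
		struct device_state *st;

		pthread_mutex_lock(&the_state_mutex);
		st = the_state;
		the_state = NULL;	/* unpublish before releasing resources */
		free(st);		/* teardown with the lock still held */
		pthread_mutex_unlock(&the_state_mutex);
	}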
From patchwork Tue May 28 10:56:21 2019
X-Patchwork-Submitter: Stefano Garzarella
X-Patchwork-Id: 10964731
From: Stefano Garzarella
To: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
    kvm@vger.kernel.org, Stefan Hajnoczi, "David S. Miller",
    "Michael S. Tsirkin"
Subject: [PATCH 2/4] vsock/virtio: stop workers during the .remove()
Date: Tue, 28 May 2019 12:56:21 +0200
Message-Id: <20190528105623.27983-3-sgarzare@redhat.com>
In-Reply-To: <20190528105623.27983-1-sgarzare@redhat.com>
References: <20190528105623.27983-1-sgarzare@redhat.com>

Before calling vdev->config->reset(vdev) we need to be sure that no one
is accessing the device. For this reason, we add new variables to
struct virtio_vsock to stop the workers during the .remove().

This patch also adds a few comments before vdev->config->reset(vdev)
and vdev->config->del_vqs(vdev).

Suggested-by: Stefan Hajnoczi
Suggested-by: Michael S. Tsirkin
Signed-off-by: Stefano Garzarella
---
 net/vmw_vsock/virtio_transport.c | 49 +++++++++++++++++++++++++++++++-
 1 file changed, 48 insertions(+), 1 deletion(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index d3ba7747aa73..e694df10ab61 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -39,6 +39,7 @@ struct virtio_vsock {
 	 * must be accessed with tx_lock held.
 	 */
 	struct mutex tx_lock;
+	bool tx_run;
 
 	struct work_struct send_pkt_work;
 	spinlock_t send_pkt_list_lock;
@@ -54,6 +55,7 @@ struct virtio_vsock {
 	 * must be accessed with rx_lock held.
 	 */
 	struct mutex rx_lock;
+	bool rx_run;
 	int rx_buf_nr;
 	int rx_buf_max_nr;
 
@@ -61,6 +63,7 @@ struct virtio_vsock {
 	 * vqs[VSOCK_VQ_EVENT] must be accessed with event_lock held.
 	 */
 	struct mutex event_lock;
+	bool event_run;
 	struct virtio_vsock_event event_list[8];
 
 	u32 guest_cid;
@@ -98,6 +101,10 @@ static void virtio_transport_loopback_work(struct work_struct *work)
 	spin_unlock_bh(&vsock->loopback_list_lock);
 
 	mutex_lock(&vsock->rx_lock);
+
+	if (!vsock->rx_run)
+		goto out;
+
 	while (!list_empty(&pkts)) {
 		struct virtio_vsock_pkt *pkt;
 
@@ -106,6 +113,7 @@ static void virtio_transport_loopback_work(struct work_struct *work)
 
 		virtio_transport_recv_pkt(pkt);
 	}
+out:
 	mutex_unlock(&vsock->rx_lock);
 }
 
@@ -134,6 +142,9 @@ virtio_transport_send_pkt_work(struct work_struct *work)
 
 	mutex_lock(&vsock->tx_lock);
 
+	if (!vsock->tx_run)
+		goto out;
+
 	vq = vsock->vqs[VSOCK_VQ_TX];
 
 	for (;;) {
@@ -192,6 +203,7 @@ virtio_transport_send_pkt_work(struct work_struct *work)
 	if (added)
 		virtqueue_kick(vq);
 
+out:
 	mutex_unlock(&vsock->tx_lock);
 
 	if (restart_rx)
@@ -314,6 +326,10 @@ static void virtio_transport_tx_work(struct work_struct *work)
 
 	vq = vsock->vqs[VSOCK_VQ_TX];
 	mutex_lock(&vsock->tx_lock);
+
+	if (!vsock->tx_run)
+		goto out;
+
 	do {
 		struct virtio_vsock_pkt *pkt;
 		unsigned int len;
@@ -324,6 +340,8 @@ static void virtio_transport_tx_work(struct work_struct *work)
 			added = true;
 		}
 	} while (!virtqueue_enable_cb(vq));
+
+out:
 	mutex_unlock(&vsock->tx_lock);
 
 	if (added)
@@ -352,6 +370,9 @@ static void virtio_transport_rx_work(struct work_struct *work)
 
 	mutex_lock(&vsock->rx_lock);
 
+	if (!vsock->rx_run)
+		goto out;
+
 	do {
 		virtqueue_disable_cb(vq);
 		for (;;) {
@@ -461,6 +482,9 @@ static void virtio_transport_event_work(struct work_struct *work)
 
 	mutex_lock(&vsock->event_lock);
 
+	if (!vsock->event_run)
+		goto out;
+
 	do {
 		struct virtio_vsock_event *event;
 		unsigned int len;
@@ -475,7 +499,7 @@ static void virtio_transport_event_work(struct work_struct *work)
 	} while (!virtqueue_enable_cb(vq));
 
 	virtqueue_kick(vsock->vqs[VSOCK_VQ_EVENT]);
-
+out:
 	mutex_unlock(&vsock->event_lock);
 }
 
@@ -611,12 +635,18 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
 	INIT_WORK(&vsock->send_pkt_work, virtio_transport_send_pkt_work);
 	INIT_WORK(&vsock->loopback_work, virtio_transport_loopback_work);
 
+	mutex_lock(&vsock->tx_lock);
+	vsock->tx_run = true;
+	mutex_unlock(&vsock->tx_lock);
+
 	mutex_lock(&vsock->rx_lock);
 	virtio_vsock_rx_fill(vsock);
+	vsock->rx_run = true;
 	mutex_unlock(&vsock->rx_lock);
 
 	mutex_lock(&vsock->event_lock);
 	virtio_vsock_event_fill(vsock);
+	vsock->event_run = true;
 	mutex_unlock(&vsock->event_lock);
 
 	the_virtio_vsock = vsock;
@@ -647,6 +677,22 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
 	/* Reset all connected sockets when the device disappear */
 	vsock_for_each_connected_socket(virtio_vsock_reset_sock);
 
+	/* Stop all work handlers to make sure no one is accessing the device */
+	mutex_lock(&vsock->rx_lock);
+	vsock->rx_run = false;
+	mutex_unlock(&vsock->rx_lock);
+
+	mutex_lock(&vsock->tx_lock);
+	vsock->tx_run = false;
+	mutex_unlock(&vsock->tx_lock);
+
+	mutex_lock(&vsock->event_lock);
+	vsock->event_run = false;
+	mutex_unlock(&vsock->event_lock);
+
+	/* Flush all device writes and interrupts, device will not use any
+	 * more buffers.
+	 */
 	vdev->config->reset(vdev);
 
 	mutex_lock(&vsock->rx_lock);
@@ -677,6 +723,7 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
 	}
 	spin_unlock_bh(&vsock->loopback_list_lock);
 
+	/* Delete virtqueues and flush outstanding callbacks if any */
 	vdev->config->del_vqs(vdev);
 
 	kfree(vsock);
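[Editor's note: the tx_run/rx_run/event_run idiom introduced above, reduced
to a userspace pthread sketch. worker_ctx, work_handler and stop_worker are
illustrative names; the point is that the flag is only ever read and written
under the same lock the handler holds while touching the device.]

	#include <pthread.h>
	#include <stdbool.h>

	struct worker_ctx {
		pthread_mutex_t lock;
		bool run;		/* like tx_run / rx_run / event_run */
	};

	/* Body of a queued work item: it bails out immediately if teardown
	 * already cleared the flag, so it can never touch the device again. */
	static void work_handler(struct worker_ctx *ctx)
	{
		pthread_mutex_lock(&ctx->lock);
		if (!ctx->run)
			goto out;

		/* ... process queued buffers, touch the device ... */
	out:
		pthread_mutex_unlock(&ctx->lock);
	}

	/* Teardown flips the flag under the same lock. Any handler that runs
	 * afterwards sees run == false and returns without device access. */
	static void stop_worker(struct worker_ctx *ctx)
	{
		pthread_mutex_lock(&ctx->lock);
		ctx->run = false;
		pthread_mutex_unlock(&ctx->lock);
	}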
From patchwork Tue May 28 10:56:22 2019
X-Patchwork-Submitter: Stefano Garzarella
X-Patchwork-Id: 10964727
From: Stefano Garzarella
To: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
    kvm@vger.kernel.org, Stefan Hajnoczi, "David S. Miller",
    "Michael S. Tsirkin"
Subject: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()
Date: Tue, 28 May 2019 12:56:22 +0200
Message-Id: <20190528105623.27983-4-sgarzare@redhat.com>
In-Reply-To: <20190528105623.27983-1-sgarzare@redhat.com>
References: <20190528105623.27983-1-sgarzare@redhat.com>

We flush all pending works before calling vdev->config->reset(vdev),
but other works can be queued before vdev->config->del_vqs(vdev), so
we add another flush after it to avoid a use-after-free.

Suggested-by: Michael S. Tsirkin
Signed-off-by: Stefano Garzarella
---
 net/vmw_vsock/virtio_transport.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index e694df10ab61..ad093ce96693 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -660,6 +660,15 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
 	return ret;
 }
 
+static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
+{
+	flush_work(&vsock->loopback_work);
+	flush_work(&vsock->rx_work);
+	flush_work(&vsock->tx_work);
+	flush_work(&vsock->event_work);
+	flush_work(&vsock->send_pkt_work);
+}
+
 static void virtio_vsock_remove(struct virtio_device *vdev)
 {
 	struct virtio_vsock *vsock = vdev->priv;
@@ -668,12 +677,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
 	mutex_lock(&the_virtio_vsock_mutex);
 	the_virtio_vsock = NULL;
 
-	flush_work(&vsock->loopback_work);
-	flush_work(&vsock->rx_work);
-	flush_work(&vsock->tx_work);
-	flush_work(&vsock->event_work);
-	flush_work(&vsock->send_pkt_work);
-
 	/* Reset all connected sockets when the device disappear */
 	vsock_for_each_connected_socket(virtio_vsock_reset_sock);
 
@@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
 	vsock->event_run = false;
 	mutex_unlock(&vsock->event_lock);
 
+	/* Flush all pending works */
+	virtio_vsock_flush_works(vsock);
+
 	/* Flush all device writes and interrupts, device will not use any
 	 * more buffers.
 	 */
@@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
 	/* Delete virtqueues and flush outstanding callbacks if any */
 	vdev->config->del_vqs(vdev);
 
+	/* Other works can be queued before 'config->del_vqs()', so we flush
+	 * all works before to free the vsock object to avoid use after free.
+	 */
+	virtio_vsock_flush_works(vsock);
+
 	kfree(vsock);
 
 	mutex_unlock(&the_virtio_vsock_mutex);
 }
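[Editor's note: the teardown ordering established by this patch, condensed
into a compilable C outline. The stub helpers (stop_all_workers,
flush_all_works, and so on) are illustrative stand-ins for the real driver
calls — flush_all_works() represents the five flush_work() calls in
virtio_vsock_flush_works().]

	#include <stdlib.h>

	struct dev { int dummy; };

	/* Stubs standing in for the real driver operations. */
	static void stop_all_workers(struct dev *d)  { (void)d; }
	static void flush_all_works(struct dev *d)   { (void)d; }
	static void reset_device(struct dev *d)      { (void)d; }
	static void drain_queues(struct dev *d)      { (void)d; }
	static void delete_virtqueues(struct dev *d) { (void)d; }

	/* The .remove() ordering after this patch: flush once before touching
	 * the device, and once more after del_vqs(), because deleting the
	 * virtqueues can fire callbacks that queue new work. */
	static void teardown(struct dev *d)
	{
		stop_all_workers(d);	/* run flags -> false (patch 2/4) */
		flush_all_works(d);	/* wait out handlers already running */

		reset_device(d);	/* device stops using buffers */
		drain_queues(d);

		delete_virtqueues(d);	/* may queue work via callbacks */
		flush_all_works(d);	/* second flush: nothing left in flight */

		free(d);		/* now safe: no work can reference d */
	}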
From patchwork Tue May 28 10:56:23 2019
X-Patchwork-Submitter: Stefano Garzarella
X-Patchwork-Id: 10964729
From: Stefano Garzarella
To: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
    kvm@vger.kernel.org, Stefan Hajnoczi, "David S. Miller",
    "Michael S. Tsirkin"
Subject: [PATCH 4/4] vsock/virtio: free used buffers during the .remove()
Date: Tue, 28 May 2019 12:56:23 +0200
Message-Id: <20190528105623.27983-5-sgarzare@redhat.com>
In-Reply-To: <20190528105623.27983-1-sgarzare@redhat.com>
References: <20190528105623.27983-1-sgarzare@redhat.com>

Before this patch, we only freed unused buffers, but there may still
be used buffers to be freed.
Signed-off-by: Stefano Garzarella
---
 net/vmw_vsock/virtio_transport.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index ad093ce96693..6a2afb989562 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -669,6 +669,18 @@ static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
 	flush_work(&vsock->send_pkt_work);
 }
 
+static void virtio_vsock_free_buf(struct virtqueue *vq)
+{
+	struct virtio_vsock_pkt *pkt;
+	unsigned int len;
+
+	while ((pkt = virtqueue_detach_unused_buf(vq)))
+		virtio_transport_free_pkt(pkt);
+
+	while ((pkt = virtqueue_get_buf(vq, &len)))
+		virtio_transport_free_pkt(pkt);
+}
+
 static void virtio_vsock_remove(struct virtio_device *vdev)
 {
 	struct virtio_vsock *vsock = vdev->priv;
@@ -702,13 +714,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
 	vdev->config->reset(vdev);
 
 	mutex_lock(&vsock->rx_lock);
-	while ((pkt = virtqueue_detach_unused_buf(vsock->vqs[VSOCK_VQ_RX])))
-		virtio_transport_free_pkt(pkt);
+	virtio_vsock_free_buf(vsock->vqs[VSOCK_VQ_RX]);
 	mutex_unlock(&vsock->rx_lock);
 
 	mutex_lock(&vsock->tx_lock);
-	while ((pkt = virtqueue_detach_unused_buf(vsock->vqs[VSOCK_VQ_TX])))
-		virtio_transport_free_pkt(pkt);
+	virtio_vsock_free_buf(vsock->vqs[VSOCK_VQ_TX]);
 	mutex_unlock(&vsock->tx_lock);
 
 	spin_lock_bh(&vsock->send_pkt_list_lock);
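[Editor's note: the archived diff above is truncated mid-hunk. The two-pass
drain in virtio_vsock_free_buf() is re-expressed below over a toy queue, to
make explicit the distinction between 'unused' buffers (posted but never
consumed by the device) and 'used' buffers (consumed but not yet reaped).
The types and helpers here are illustrative, not the virtio API.]

	#include <stdlib.h>

	struct pkt { struct pkt *next; };

	struct toy_vq {
		struct pkt *unused;	/* buffers the device never touched */
		struct pkt *used;	/* buffers the device completed */
	};

	static struct pkt *pop(struct pkt **list)
	{
		struct pkt *p = *list;

		if (p)
			*list = p->next;
		return p;
	}

	/* Mirror of the patch: freeing only the unused list leaks anything
	 * sitting on the used list, so both must be drained before kfree(). */
	static void drain_queue(struct toy_vq *vq)
	{
		struct pkt *pkt;

		while ((pkt = pop(&vq->unused)))	/* virtqueue_detach_unused_buf() */
			free(pkt);

		while ((pkt = pop(&vq->used)))		/* virtqueue_get_buf() */
			free(pkt);
	}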