
[72/78] 9p: Lock directory streams with a CoMutex

Message ID 20200616141547.24664-73-mdroth@linux.vnet.ibm.com (mailing list archive)
State New, archived
Series Patch Round-up for stable 4.2.1, freeze on 2020-06-22

Commit Message

Michael Roth June 16, 2020, 2:15 p.m. UTC
From: Greg Kurz <groug@kaod.org>

Locking was introduced in QEMU 2.7 to address the deprecation of
readdir_r(3) in glibc 2.24. It turns out that the frontend code is
the worst place to handle a critical section with a pthread mutex:
the code runs in a coroutine on behalf of the QEMU main loop and then
yields control, waiting for the fsdev backend to process the request
in a worker thread. If the client sends another readdir request for
the same fid before the previous one has unlocked the mutex, we
deadlock.

This never bit us because the Linux client serializes readdir requests
for the same fid, but it is quite easy to demonstrate with a custom
client.

A better solution would be to narrow the critical section to the worker
thread code and to return a copy of the dirent to the frontend, but
this would require quite a few changes in both 9p.c and codir.c. So,
instead of that, in order for people to easily backport the fix to
older QEMU versions, let's simply use a CoMutex, since all the users
of this lock sit in coroutines.

Fixes: 7cde47d4a89d ("9p: add locking to V9fsDir")
Reviewed-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
Message-Id: <158981894794.109297.3530035833368944254.stgit@bahia.lan>
Signed-off-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit ed463454efd0ac3042ff772bfe1b1d846dc281a5)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
 hw/9pfs/9p.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
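
To make the failure mode concrete, here is a simplified sketch of the
pattern the commit message describes, loosely modeled on the readdir path
in hw/9pfs/9p.c and hw/9pfs/codir.c. The function below is illustrative,
not the verbatim QEMU code; v9fs_readdir_lock()/v9fs_readdir_unlock() and
v9fs_co_readdir() are the real helpers touched by this patch.

    /* Illustrative sketch only -- loosely modeled on hw/9pfs/9p.c. */
    static int coroutine_fn do_readdir_locked(V9fsPDU *pdu, V9fsFidState *fidp)
    {
        struct dirent *dent;
        int err;

        v9fs_readdir_lock(&fidp->fs.dir);   /* QemuMutex before this patch */

        /*
         * v9fs_co_readdir() suspends this coroutine and hands the actual
         * readdir(3) work to a worker thread. Before this patch, a second
         * readdir request for the same fid arriving here would block inside
         * qemu_mutex_lock() -- on the main loop thread itself -- so the
         * first coroutine could never be resumed to unlock the mutex:
         * deadlock. With a CoMutex, the second coroutine merely yields
         * until the lock is released, and the main loop keeps running.
         */
        err = v9fs_co_readdir(pdu, fidp, &dent);

        v9fs_readdir_unlock(&fidp->fs.dir);
        return err;
    }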

Comments

Greg Kurz June 16, 2020, 3:14 p.m. UTC | #1
Cc'ing co-maintainer Christian Schoenebeck.

Christian,

If there are more commits you think are worth being cherry-picked
for QEMU 4.2.1, please inform Michael before the freeze on 2020-06-22.

Cheers,

--
Greg

On Tue, 16 Jun 2020 09:15:41 -0500
Michael Roth <mdroth@linux.vnet.ibm.com> wrote:

> From: Greg Kurz <groug@kaod.org>
> 
> Locking was introduced in QEMU 2.7 to address the deprecation of
> readdir_r(3) in glibc 2.24. It turns out that the frontend code is
> the worst place to handle a critical section with a pthread mutex:
> the code runs in a coroutine on behalf of the QEMU main loop and then
> yields control, waiting for the fsdev backend to process the request
> in a worker thread. If the client sends another readdir request for
> the same fid before the previous one has unlocked the mutex, we
> deadlock.
> 
> This never bit us because the Linux client serializes readdir requests
> for the same fid, but it is quite easy to demonstrate with a custom
> client.
> 
> A better solution would be to narrow the critical section to the worker
> thread code and to return a copy of the dirent to the frontend, but
> this would require quite a few changes in both 9p.c and codir.c. So,
> instead of that, in order for people to easily backport the fix to
> older QEMU versions, let's simply use a CoMutex, since all the users
> of this lock sit in coroutines.
> 
> Fixes: 7cde47d4a89d ("9p: add locking to V9fsDir")
> Reviewed-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
> Message-Id: <158981894794.109297.3530035833368944254.stgit@bahia.lan>
> Signed-off-by: Greg Kurz <groug@kaod.org>
> (cherry picked from commit ed463454efd0ac3042ff772bfe1b1d846dc281a5)
> Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
> ---
>  hw/9pfs/9p.h | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/9pfs/9p.h b/hw/9pfs/9p.h
> index 3904f82901..069c86333f 100644
> --- a/hw/9pfs/9p.h
> +++ b/hw/9pfs/9p.h
> @@ -186,22 +186,22 @@ typedef struct V9fsXattr
>  
>  typedef struct V9fsDir {
>      DIR *stream;
> -    QemuMutex readdir_mutex;
> +    CoMutex readdir_mutex;
>  } V9fsDir;
>  
>  static inline void v9fs_readdir_lock(V9fsDir *dir)
>  {
> -    qemu_mutex_lock(&dir->readdir_mutex);
> +    qemu_co_mutex_lock(&dir->readdir_mutex);
>  }
>  
>  static inline void v9fs_readdir_unlock(V9fsDir *dir)
>  {
> -    qemu_mutex_unlock(&dir->readdir_mutex);
> +    qemu_co_mutex_unlock(&dir->readdir_mutex);
>  }
>  
>  static inline void v9fs_readdir_init(V9fsDir *dir)
>  {
> -    qemu_mutex_init(&dir->readdir_mutex);
> +    qemu_co_mutex_init(&dir->readdir_mutex);
>  }
>  
>  /*
Christian Schoenebeck June 16, 2020, 4:09 p.m. UTC | #2
On Tuesday, 16 June 2020 17:14:40 CEST Greg Kurz wrote:
> Cc'ing co-maintainer Christian Schoenebeck.
> 
> Christian,
> 
> If there are more commits you think are worth being cherry-picked
> for QEMU 4.2.1, please inform Michael before the freeze on 2020-06-22.

Indeed, for that particular stable branch I would see the following 9p fixes
as additional candidates (chronologically top down):

841b8d099c [trivial] 9pfs: local: Fix possible memory leak in local_link()
846cf408a4 [maybe] 9p: local: always return -1 on error in local_unlinkat_common
9580d60e66 [maybe] virtio-9p-device: fix memleak in virtio_9p_device_unrealize
659f195328 [trivial] 9p/proxy: Fix export_flags
a5804fcf7b [maybe] 9pfs: local: ignore O_NOATIME if we don't have permissions
03556ea920 [trivial] 9pfs: include linux/limits.h for XATTR_SIZE_MAX
a4c4d46272 [recommended] xen/9pfs: yield when there isn't enough room on the ring

What do you think Greg?

What's the recommended way for me to keep track of imminent stable picks/
freezes in future?

Best regards,
Christian Schoenebeck
Greg Kurz June 16, 2020, 4:41 p.m. UTC | #3
On Tue, 16 Jun 2020 18:09:04 +0200
Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:

> On Tuesday, 16 June 2020 17:14:40 CEST Greg Kurz wrote:
> > Cc'ing co-maintainer Christian Schoenebeck.
> > 
> > Christian,
> > 
> > If there are more commits you think are worth being cherry-picked
> > for QEMU 4.2.1, please inform Michael before the freeze on 2020-06-22.
> 
> Indeed, for that particular stable branch I would see the following 9p fixes
> as additional candidates (chronologically top down):
> 
> 841b8d099c [trivial] 9pfs: local: Fix possible memory leak in local_link()
> 846cf408a4 [maybe] 9p: local: always return -1 on error in local_unlinkat_common
> 9580d60e66 [maybe] virtio-9p-device: fix memleak in virtio_9p_device_unrealize
> 659f195328 [trivial] 9p/proxy: Fix export_flags
> a5804fcf7b [maybe] 9pfs: local: ignore O_NOATIME if we don't have permissions
> 03556ea920 [trivial] 9pfs: include linux/limits.h for XATTR_SIZE_MAX
> a4c4d46272 [recommended] xen/9pfs: yield when there isn't enough room on the ring
> 
> What do you think Greg?
> 

AFAIK, only regressions and fixes to severe bugs (QEMU crashes, hangs, CVEs) go
to stable QEMU releases. That doesn't seem to be the case for any of the commits
listed above, but I only had a quick look.

> What's the recommended way for me to keep track of imminent stable picks/
> freezes in future?
> 

Hmm, good question. I'm usually notified when Michael posts the patch round-up
and a 9p patch is already in the list, as with the present patch. Other than
that, I watch qemu-stable from time to time, or the planning pages in the wiki.

Michael, anything better to suggest to Christian?

> Best regards,
> Christian Schoenebeck
> 
>
Michael Roth June 16, 2020, 10:46 p.m. UTC | #4
Quoting Greg Kurz (2020-06-16 11:41:36)
> On Tue, 16 Jun 2020 18:09:04 +0200
> Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:
> 
> > On Tuesday, 16 June 2020 17:14:40 CEST Greg Kurz wrote:
> > > Cc'ing co-maintainer Christian Schoenebeck.
> > > 
> > > Christian,
> > > 
> > > If there are more commits you think are worth being cherry-picked
> > > for QEMU 4.2.1, please inform Michael before the freeze on 2020-06-22.
> > 
> > Indeed, for that particular stable branch I would see the following 9p fixes
> > as additional candidates (chronologically top down):
> > 
> > 841b8d099c [trivial] 9pfs: local: Fix possible memory leak in local_link()
> > 846cf408a4 [maybe] 9p: local: always return -1 on error in local_unlinkat_common
> > 9580d60e66 [maybe] virtio-9p-device: fix memleak in virtio_9p_device_unrealize
> > 659f195328 [trivial] 9p/proxy: Fix export_flags
> > a5804fcf7b [maybe] 9pfs: local: ignore O_NOATIME if we don't have permissions
> > 03556ea920 [trivial] 9pfs: include linux/limits.h for XATTR_SIZE_MAX
> > a4c4d46272 [recommended] xen/9pfs: yield when there isn't enough room on the ring
> > 
> > What do you think Greg?
> > 
> 
> AFAIK, only regressions and fixes to severe bugs (QEMU crashes, hangs, CVEs) go
> to stable QEMU releases. That doesn't seem to be the case for any of the commits
> listed above, but I only had a quick look.

That's the main focus, but if memory leaks and other minor fixes get tagged
for stable, I'll generally pull those in as well when the backport is fairly
straightforward. Since that was the case with the patches above, I went
ahead and pulled them in.

> 
> > What's the recommended way for me to keep track of imminent stable picks/
> > freezes in future?
> > 
> 
> Hmm, good question. I'm usually notified when Michael posts the patch round-up
> and a 9p patch is already in the list, as with the present patch. Other than
> that, I watch qemu-stable from time to time, or the planning pages in the wiki.
> 
> Michael, anything better to suggest to Christian?

I think that about covers it. You can also subscribe to the planning
pages, e.g. https://wiki.qemu.org/Planning/5.0 (by clicking the
star/"add to watchlist" icon), then you'll get notifications when
additional release/freeze dates are added. Generally it will be updated
shortly before the patch round-up gets posted to qemu-stable.

> 
> > Best regards,
> > Christian Schoenebeck
> > 
> > 
>
Christian Schoenebeck June 18, 2020, 1:47 p.m. UTC | #5
On Wednesday, 17 June 2020 00:46:26 CEST Michael Roth wrote:
> > > Indeed, for that particular stable branch I would see the following 9p
> > > fixes as additional candidates (chronologically top down):
> > > 
> > > 841b8d099c [trivial] 9pfs: local: Fix possible memory leak in local_link()
> > > 846cf408a4 [maybe] 9p: local: always return -1 on error in local_unlinkat_common
> > > 9580d60e66 [maybe] virtio-9p-device: fix memleak in virtio_9p_device_unrealize
> > > 659f195328 [trivial] 9p/proxy: Fix export_flags
> > > a5804fcf7b [maybe] 9pfs: local: ignore O_NOATIME if we don't have permissions
> > > 03556ea920 [trivial] 9pfs: include linux/limits.h for XATTR_SIZE_MAX
> > > a4c4d46272 [recommended] xen/9pfs: yield when there isn't enough room on the ring
> > > 
> > > What do you think Greg?
> > 
> > AFAIK, only regressions and fixes to severe bugs (QEMU crashes, hangs,
> > CVEs) go to stable QEMU releases. That doesn't seem to be the case for any
> > of the commits listed above, but I only had a quick look.
> 
> That's the main focus, but if memory leaks and other minor fixes get tagged
> for stable, I'll generally pull those in as well when the backport is fairly
> straightforward. Since that was the case with the patches above, I went
> ahead and pulled them in.
> 
> > > What's the recommended way for me to keep track of imminent stable
> > > picks/freezes in future?
> > 
> > Hmm, good question. I'm usually notified when Michael posts the patch
> > round-up and a 9p patch is already in the list, as with the present
> > patch. Other than that, I watch qemu-stable from time to time, or the
> > planning pages in the wiki.
> > 
> > Michael, anything better to suggest to Christian?
> 
> I think that about covers it. You can also subscribe to the planning
> pages, e.g. https://wiki.qemu.org/Planning/5.0 (by clicking the
> star/"add to watchlist" icon), then you'll get notifications when
> additional release/freeze dates are added. Generally it will be updated
> shortly before the patch round-up gets posted to qemu-stable.

Good idea! Will do that.

Thanks Michael!

Best regards,
Christian Schoenebeck

Patch

diff --git a/hw/9pfs/9p.h b/hw/9pfs/9p.h
index 3904f82901..069c86333f 100644
--- a/hw/9pfs/9p.h
+++ b/hw/9pfs/9p.h
@@ -186,22 +186,22 @@ typedef struct V9fsXattr
 
 typedef struct V9fsDir {
     DIR *stream;
-    QemuMutex readdir_mutex;
+    CoMutex readdir_mutex;
 } V9fsDir;
 
 static inline void v9fs_readdir_lock(V9fsDir *dir)
 {
-    qemu_mutex_lock(&dir->readdir_mutex);
+    qemu_co_mutex_lock(&dir->readdir_mutex);
 }
 
 static inline void v9fs_readdir_unlock(V9fsDir *dir)
 {
-    qemu_mutex_unlock(&dir->readdir_mutex);
+    qemu_co_mutex_unlock(&dir->readdir_mutex);
 }
 
 static inline void v9fs_readdir_init(V9fsDir *dir)
 {
-    qemu_mutex_init(&dir->readdir_mutex);
+    qemu_co_mutex_init(&dir->readdir_mutex);
 }
 
 /*