[v3,00/16] aio: first part of aio_context_acquire/release pushdown

Message ID 1455018374-4706-1-git-send-email-pbonzini@redhat.com (mailing list archive)
State New, archived

Commit Message

Paolo Bonzini Feb. 9, 2016, 11:45 a.m. UTC
This is the infrastructure part of the aio_context_acquire/release pushdown,
which in turn is the first step towards a real multiqueue block layer in
QEMU.  The next step is to touch all the drivers and move calls to the
aio_context_acquire/release functions from aio-*.c to the drivers.  This
will be done in a separate patch series, which I plan to post before soft
freeze.

While the number of inserted lines is large, more than half of them are in
documentation and formal models of the code, as well as in the implementation
of the new "lockcnt" synchronization primitive.  The code is also very heavily
commented.
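(To illustrate the idea behind a lock+count primitive: readers walking a list
take only a counter, while a writer takes the lock and may free nodes only
once the counter has dropped to zero.  The sketch below is hypothetical and
mutex-based, purely for illustration; the names and the actual QemuLockCnt
implementation, which uses atomics and Linux futexes, differ.)

```c
#include <pthread.h>

/* Hypothetical sketch of a lock+count primitive.  Readers bump "count"
 * while walking a shared list; a writer holds "mutex" to modify the
 * list and may only free removed nodes once count has reached zero. */
typedef struct {
    pthread_mutex_t mutex;  /* held while modifying the list */
    int count;              /* threads currently walking the list */
} LockCnt;

static void lockcnt_init(LockCnt *lc)
{
    pthread_mutex_init(&lc->mutex, NULL);
    lc->count = 0;
}

static void lockcnt_inc(LockCnt *lc)
{
    pthread_mutex_lock(&lc->mutex);
    lc->count++;
    pthread_mutex_unlock(&lc->mutex);
}

static void lockcnt_dec(LockCnt *lc)
{
    pthread_mutex_lock(&lc->mutex);
    lc->count--;
    pthread_mutex_unlock(&lc->mutex);
}

/* A writer checks this before freeing nodes removed from the list. */
static int lockcnt_count(LockCnt *lc)
{
    pthread_mutex_lock(&lc->mutex);
    int v = lc->count;
    pthread_mutex_unlock(&lc->mutex);
    return v;
}
```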

The first four patches are new; the issue they fix was found after the
previous version of the series was posted.  Everything else is more or less
the same as before.

Paolo

v1->v2: Update documentation [Stefan]
        Remove g_usleep from testcase [Stefan]

v2->v3: Fix broken sentence [Eric]
        Use osdep.h [Eric]
        (v2->v3 diff after diffstat)

Paolo Bonzini (16):
  aio: introduce aio_context_in_iothread
  aio: do not really acquire/release the main AIO context
  aio: introduce aio_poll_internal
  aio: only call aio_poll_internal from iothread
  iothread: release AioContext around aio_poll
  qemu-thread: introduce QemuRecMutex
  aio: convert from RFifoLock to QemuRecMutex
  aio: rename bh_lock to list_lock
  qemu-thread: introduce QemuLockCnt
  aio: make ctx->list_lock a QemuLockCnt, subsuming ctx->walking_bh
  qemu-thread: optimize QemuLockCnt with futexes on Linux
  aio: tweak walking in dispatch phase
  aio-posix: remove walking_handlers, protecting AioHandler list with
    list_lock
  aio-win32: remove walking_handlers, protecting AioHandler list with
    list_lock
  aio: document locking
  aio: push aio_context_acquire/release down to dispatching

 aio-posix.c                     |  86 +++++----
 aio-win32.c                     | 106 ++++++-----
 async.c                         | 278 ++++++++++++++++++++++++----
 block/io.c                      |  14 +-
 docs/aio_poll_drain.promela     | 210 +++++++++++++++++++++
 docs/aio_poll_drain_bug.promela | 158 ++++++++++++++++
 docs/aio_poll_sync_io.promela   |  88 +++++++++
 docs/lockcnt.txt                | 342 ++++++++++++++++++++++++++++++++++
 docs/multiple-iothreads.txt     |  63 ++++---
 include/block/aio.h             |  69 ++++---
 include/qemu/futex.h            |  36 ++++
 include/qemu/rfifolock.h        |  54 ------
 include/qemu/thread-posix.h     |   6 +
 include/qemu/thread-win32.h     |  10 +
 include/qemu/thread.h           |  23 +++
 iothread.c                      |  20 +-
 stubs/iothread-lock.c           |   5 +
 tests/.gitignore                |   1 -
 tests/Makefile                  |   2 -
 tests/test-aio.c                |  22 ++-
 tests/test-rfifolock.c          |  91 ---------
 trace-events                    |  10 +
 util/Makefile.objs              |   2 +-
 util/lockcnt.c                  | 395 ++++++++++++++++++++++++++++++++++++++++
 util/qemu-thread-posix.c        |  38 ++--
 util/qemu-thread-win32.c        |  25 +++
 util/rfifolock.c                |  78 --------
 27 files changed, 1782 insertions(+), 450 deletions(-)
 create mode 100644 docs/aio_poll_drain.promela
 create mode 100644 docs/aio_poll_drain_bug.promela
 create mode 100644 docs/aio_poll_sync_io.promela
 create mode 100644 docs/lockcnt.txt
 create mode 100644 include/qemu/futex.h
 delete mode 100644 include/qemu/rfifolock.h
 delete mode 100644 tests/test-rfifolock.c
 create mode 100644 util/lockcnt.c
 delete mode 100644 util/rfifolock.c

Comments

Stefan Hajnoczi Feb. 9, 2016, 2:01 p.m. UTC | #1
On Tue, Feb 09, 2016 at 12:45:58PM +0100, Paolo Bonzini wrote:
> This is the infrastructure part of the aio_context_acquire/release pushdown,
> which in turn is the first step towards a real multiqueue block layer in
> QEMU.
[...]

I'm getting the following with mingw:

util/qemu-thread-win32.c:87:6: error: conflicting types for 'qemu_rec_mutex_destroy'
 void qemu_rec_mutex_destroy(QemuRecMutex *mutex)
      ^
In file included from /home/stefanha/qemu/include/qemu/thread.h:16:0,
                 from util/qemu-thread-win32.c:15:
/home/stefanha/qemu/include/qemu/thread-win32.h:15:6: note: previous declaration of 'qemu_rec_mutex_destroy' was here
 void qemu_rec_mutex_destroy(QemuMutex *mutex);
      ^
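(The error is a mismatch between the declaration in
include/qemu/thread-win32.h, which takes a QemuMutex *, and the definition in
util/qemu-thread-win32.c, which takes a QemuRecMutex *.  The sketch below
reproduces the shape of the likely fix with stand-in types; the struct
contents and the destroy body are hypothetical, and the real fix would simply
change the prototype in the header to match the definition.)

```c
/* Stand-in types for illustration only; not QEMU's definitions. */
typedef struct { int locked; } QemuMutex;
typedef struct { QemuMutex base; int depth; } QemuRecMutex;

/* thread-win32.h declared:
 *     void qemu_rec_mutex_destroy(QemuMutex *mutex);
 * while qemu-thread-win32.c defines the function with QemuRecMutex *,
 * hence "conflicting types".  Making the declaration match the
 * definition resolves the conflict: */
void qemu_rec_mutex_destroy(QemuRecMutex *mutex);

void qemu_rec_mutex_destroy(QemuRecMutex *mutex)
{
    /* placeholder cleanup for this sketch */
    mutex->depth = 0;
    mutex->base.locked = 0;
}
```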
Patch

diff --git a/async.c b/async.c
index 9eab833..03a8e69 100644
--- a/async.c
+++ b/async.c
@@ -322,11 +322,10 @@  void aio_notify_accept(AioContext *ctx)
  * only, this only works when the calling thread holds the big QEMU lock.
  *
  * Because aio_poll is used in a loop, spurious wakeups are okay.
- * Therefore, the I/O thread calls qemu_event_set very liberally
- * (it helps that qemu_event_set is cheap on an already-set event).
- * generally used in a loop, it's okay to have spurious wakeups.
- * Similarly it is okay to return true when no progress was made
- * (as long as this doesn't happen forever, or you get livelock).
+ * Therefore, the I/O thread calls qemu_event_set very liberally;
+ * it helps that qemu_event_set is cheap on an already-set event.
+ * Similarly it is okay to return true when no progress was made,
+ * as long as this doesn't happen forever (or you get livelock).
  *
  * The important thing is that you need to report progress from
  * aio_poll(ctx, false) correctly.  This is complicated and the
diff --git a/util/lockcnt.c b/util/lockcnt.c
index 56eb29e..71e8f8f 100644
--- a/util/lockcnt.c
+++ b/util/lockcnt.c
@@ -6,16 +6,7 @@ 
  * Author:
  *   Paolo Bonzini <pbonzini@redhat.com>
  */
-#include <stdlib.h>
-#include <stdio.h>
-#include <errno.h>
-#include <time.h>
-#include <signal.h>
-#include <stdint.h>
-#include <string.h>
-#include <limits.h>
-#include <unistd.h>
-#include <sys/time.h>
+#include "qemu/osdep.h"
 #include "qemu/thread.h"
 #include "qemu/atomic.h"
 #include "trace.h"