
[1/1] util: adjust coroutine pool size to virtio block queue

Message ID 20220111091950.840-2-hnarukaw@yahoo-corp.jp (mailing list archive)
State New, archived
Series Patch to adjust coroutine pool size adaptively

Commit Message

成川 弘樹 Jan. 11, 2022, 9:19 a.m. UTC
Coroutine pool size has been 64 since long ago; the rationale is laid out in the commit message of c740ad92.

At that time, virtio-blk queue-size and num-queues were not configurable, and the equivalent values were 128 and 1.

A coroutine pool size of 64 was fine then.

Later, queue-size and num-queues became configurable, and the default values were increased.

With the new sizes, a coroutine pool of 64 is frequently exhausted under random disk IO, and performance degrades.

This commit adjusts the coroutine pool size adaptively to the new values.

It still adds 64 by default, but coroutines are no longer used only for block devices, and that extra 64 is not much of a burden compared with the new default pool size of 128 * vCPUs.
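
Assuming current QEMU defaults of queue-size 256 and num-queues equal to the vCPU count (an assumption here, not stated in this patch), the per-device adjustment works out as:

    num_queues * queue_size / 2 = vCPUs * 256 / 2 = 128 * vCPUs

which is where the 128 * vCPUs figure above comes from.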

Signed-off-by: Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
---
 hw/block/virtio-blk.c    |  3 +++
 include/qemu/coroutine.h |  5 +++++
 util/qemu-coroutine.c    | 15 +++++++++++----
 3 files changed, 19 insertions(+), 4 deletions(-)

Comments

Stefan Hajnoczi Jan. 27, 2022, 3:47 p.m. UTC | #1
On Tue, Jan 11, 2022 at 06:19:50PM +0900, Hiroki Narukawa wrote:
> Coroutine pool size has been 64 since long ago; the rationale is laid out in the commit message of c740ad92.
> 
> At that time, virtio-blk queue-size and num-queues were not configurable, and the equivalent values were 128 and 1.
> 
> A coroutine pool size of 64 was fine then.
> 
> Later, queue-size and num-queues became configurable, and the default values were increased.
> 
> With the new sizes, a coroutine pool of 64 is frequently exhausted under random disk IO, and performance degrades.
> 
> This commit adjusts the coroutine pool size adaptively to the new values.
> 
> It still adds 64 by default, but coroutines are no longer used only for block devices, and that extra 64 is not much of a burden compared with the new default pool size of 128 * vCPUs.
> 
> Signed-off-by: Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
> ---
>  hw/block/virtio-blk.c    |  3 +++
>  include/qemu/coroutine.h |  5 +++++
>  util/qemu-coroutine.c    | 15 +++++++++++----
>  3 files changed, 19 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
> index f139cd7cc9..726dbe14de 100644
> --- a/hw/block/virtio-blk.c
> +++ b/hw/block/virtio-blk.c
> @@ -32,6 +32,7 @@
>  #include "hw/virtio/virtio-bus.h"
>  #include "migration/qemu-file-types.h"
>  #include "hw/virtio/virtio-access.h"
> +#include "qemu/coroutine.h"
>  
>  /* Config size before the discard support (hide associated config fields) */
>  #define VIRTIO_BLK_CFG_SIZE offsetof(struct virtio_blk_config, \
> @@ -1222,6 +1223,8 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
>      for (i = 0; i < conf->num_queues; i++) {
>          virtio_add_queue(vdev, conf->queue_size, virtio_blk_handle_output);
>      }
> +    qemu_coroutine_increase_pool_batch_size(conf->num_queues * conf->queue_size
> +                                            / 2);

Why "/ 2"?

>      virtio_blk_data_plane_create(vdev, conf, &s->dataplane, &err);
>      if (err != NULL) {
>          error_propagate(errp, err);

Please handle hot unplug (->unrealize()) so the coroutine pool shrinks
down again when virtio-blk devices are removed.

My main concern is memory footprint. A burst of I/O can create many
coroutines and they might never be used again. But I think we can deal
with that using a timer in a separate future patch (if necessary).

Thanks,
Stefan
成川 弘樹 Jan. 28, 2022, 8:50 a.m. UTC | #2
> >  /* Config size before the discard support (hide associated config fields) */
> >  #define VIRTIO_BLK_CFG_SIZE offsetof(struct virtio_blk_config, \
> > @@ -1222,6 +1223,8 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
> >      for (i = 0; i < conf->num_queues; i++) {
> >          virtio_add_queue(vdev, conf->queue_size, virtio_blk_handle_output);
> >      }
> > +    qemu_coroutine_increase_pool_batch_size(conf->num_queues * conf->queue_size
> > +                                            / 2);
> 
> Why "/ 2"?

In my understanding, a request on virtio-blk consumes two queue entries, one each for rx and tx,
so at most num_queues * queue_size / 2 requests can be running at the same time.

The starting point was that the coroutine pool size has been 64 since long ago, while the
equivalent queue_size was 128 and the equivalent num_queues was 1, and that seemed to work well.
The new value, num_queues * queue_size / 2, also seems to work well as a more generously overprovisioned value.
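
For example, a hypothetical guest with num_queues = 4 and queue_size = 256 would add
4 * 256 / 2 = 512 to the pool batch size, one pooled coroutine per possible in-flight request.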


> 
> >      virtio_blk_data_plane_create(vdev, conf, &s->dataplane, &err);
> >      if (err != NULL) {
> >          error_propagate(errp, err);
> 
> Please handle hot unplug (->unrealize()) so the coroutine pool shrinks down
> again when virtio-blk devices are removed.
> 

I added it in v3 and resent the series.
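
v3 itself is not shown in this thread; a minimal sketch of the unrealize-side counterpart,
assuming a mirror helper (the name qemu_coroutine_decrease_pool_batch_size is an assumption here):

    /* Hypothetical mirror of the increase helper, called from
     * virtio_blk_device_unrealize() with the same
     * conf->num_queues * conf->queue_size / 2 value that realize added.
     */
    void qemu_coroutine_decrease_pool_batch_size(unsigned int removal_pool_size)
    {
        qatomic_sub(&pool_batch_size, removal_pool_size);
    }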


> My main concern is memory footprint. A burst of I/O can create many coroutines
> and they might never be used again. But I think we can deal with that using a timer
> in a separate future patch (if necessary).

In my understanding, the coroutine pool size does not limit peak memory consumption,
so even if coroutines were released temporarily, this is headroom that qemu needs in order to
keep running while serving disk IO, which I think users do not expect to be a memory-consuming task.

A timer to release unused memory would be nice, but how serious is the problem?
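
For reference, a minimal sketch of what such a timer-based trim could look like
(illustrative only, not part of this series; pool_trim_timer and pool_trim_cb are
hypothetical names, and the timer would be armed once with
timer_new_ms(QEMU_CLOCK_REALTIME, pool_trim_cb, NULL)):

    static QEMUTimer *pool_trim_timer;

    /* Hypothetical: periodically free coroutines parked in the shared
     * release pool. A real version would need care with concurrent
     * updates to release_pool_size.
     */
    static void pool_trim_cb(void *opaque)
    {
        QSLIST_HEAD(, Coroutine) tmp = QSLIST_HEAD_INITIALIZER(tmp);
        Coroutine *co;

        /* Atomically steal the shared release pool, then free its entries */
        QSLIST_MOVE_ATOMIC(&tmp, &release_pool);
        qatomic_set(&release_pool_size, 0);
        while ((co = QSLIST_FIRST(&tmp))) {
            QSLIST_REMOVE_HEAD(&tmp, pool_next);
            qemu_coroutine_delete(co);
        }

        /* Re-arm, e.g. every 10 seconds */
        timer_mod(pool_trim_timer,
                  qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + 10 * 1000);
    }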

Patch

diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index f139cd7cc9..726dbe14de 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -32,6 +32,7 @@ 
 #include "hw/virtio/virtio-bus.h"
 #include "migration/qemu-file-types.h"
 #include "hw/virtio/virtio-access.h"
+#include "qemu/coroutine.h"
 
 /* Config size before the discard support (hide associated config fields) */
 #define VIRTIO_BLK_CFG_SIZE offsetof(struct virtio_blk_config, \
@@ -1222,6 +1223,8 @@  static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
     for (i = 0; i < conf->num_queues; i++) {
         virtio_add_queue(vdev, conf->queue_size, virtio_blk_handle_output);
     }
+    qemu_coroutine_increase_pool_batch_size(conf->num_queues * conf->queue_size
+                                            / 2);
     virtio_blk_data_plane_create(vdev, conf, &s->dataplane, &err);
     if (err != NULL) {
         error_propagate(errp, err);
diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h
index 4829ff373d..e52ed76ab2 100644
--- a/include/qemu/coroutine.h
+++ b/include/qemu/coroutine.h
@@ -331,6 +331,11 @@  void qemu_co_sleep_wake(QemuCoSleep *w);
  */
 void coroutine_fn yield_until_fd_readable(int fd);
 
+/**
+ * Increase coroutine pool size
+ */
+void qemu_coroutine_increase_pool_batch_size(unsigned int additional_pool_size);
+
 #include "qemu/lockable.h"
 
 #endif /* QEMU_COROUTINE_H */
diff --git a/util/qemu-coroutine.c b/util/qemu-coroutine.c
index 38fb6d3084..d5bd9d468f 100644
--- a/util/qemu-coroutine.c
+++ b/util/qemu-coroutine.c
@@ -20,12 +20,14 @@ 
 #include "qemu/coroutine_int.h"
 #include "block/aio.h"
 
+/** Initial batch size is 64, and is increased on demand */
 enum {
-    POOL_BATCH_SIZE = 64,
+    POOL_INITIAL_BATCH_SIZE = 64,
 };
 
 /** Free list to speed up creation */
 static QSLIST_HEAD(, Coroutine) release_pool = QSLIST_HEAD_INITIALIZER(pool);
+static unsigned int pool_batch_size = POOL_INITIAL_BATCH_SIZE;
 static unsigned int release_pool_size;
 static __thread QSLIST_HEAD(, Coroutine) alloc_pool = QSLIST_HEAD_INITIALIZER(pool);
 static __thread unsigned int alloc_pool_size;
@@ -49,7 +51,7 @@  Coroutine *qemu_coroutine_create(CoroutineEntry *entry, void *opaque)
     if (CONFIG_COROUTINE_POOL) {
         co = QSLIST_FIRST(&alloc_pool);
         if (!co) {
-            if (release_pool_size > POOL_BATCH_SIZE) {
+            if (release_pool_size > qatomic_read(&pool_batch_size)) {
                 /* Slow path; a good place to register the destructor, too.  */
                 if (!coroutine_pool_cleanup_notifier.notify) {
                     coroutine_pool_cleanup_notifier.notify = coroutine_pool_cleanup;
@@ -86,12 +88,12 @@  static void coroutine_delete(Coroutine *co)
     co->caller = NULL;
 
     if (CONFIG_COROUTINE_POOL) {
-        if (release_pool_size < POOL_BATCH_SIZE * 2) {
+        if (release_pool_size < qatomic_read(&pool_batch_size) * 2) {
             QSLIST_INSERT_HEAD_ATOMIC(&release_pool, co, pool_next);
             qatomic_inc(&release_pool_size);
             return;
         }
-        if (alloc_pool_size < POOL_BATCH_SIZE) {
+        if (alloc_pool_size < qatomic_read(&pool_batch_size)) {
             QSLIST_INSERT_HEAD(&alloc_pool, co, pool_next);
             alloc_pool_size++;
             return;
@@ -202,3 +204,8 @@  AioContext *coroutine_fn qemu_coroutine_get_aio_context(Coroutine *co)
 {
     return co->ctx;
 }
+
+void qemu_coroutine_increase_pool_batch_size(unsigned int additional_pool_size)
+{
+    qatomic_add(&pool_batch_size, additional_pool_size);
+}