block/aio_task: allow start/wait task from any coroutine

Message ID 20200611073631.10817-1-vsementsov@virtuozzo.com (mailing list archive)
State New, archived
Series block/aio_task: allow start/wait task from any coroutine

Commit Message

Vladimir Sementsov-Ogievskiy June 11, 2020, 7:36 a.m. UTC
Currently, the aio task pool assumes that there is a main coroutine, which
creates tasks and waits for them. Let's remove that restriction by using a
CoQueue. The code becomes clearer and the interface more obvious.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---

Hi! Here is my counter-proposal for
"[PATCH 1/2] aio: allow to wait for coroutine pool from different coroutine"
by Denis. I'm sure that if we are going to change something here, it is
better to make the interface work from any coroutine, without restricting
the wait to a single waiter at a time.

(Note that it is still not thread-safe.)


 block/aio_task.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)
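
For context, a rough caller-side sketch of how the aio_task_pool interface is
used (based on the declarations in block/aio_task.h; MyTask, my_task_func()
and run_chunks() are illustrative names, not code from this series):

    typedef struct MyTask {
        AioTask task;            /* must be the first field */
        int64_t offset;
        int64_t bytes;
    } MyTask;

    static int coroutine_fn my_task_func(AioTask *task)
    {
        MyTask *t = container_of(task, MyTask, task);

        /* do one chunk of work for [t->offset, t->offset + t->bytes) */
        return 0;
    }

    static int coroutine_fn run_chunks(int64_t total, int64_t chunk)
    {
        AioTaskPool *pool = aio_task_pool_new(16 /* max_busy_tasks */);
        int64_t offset;
        int ret;

        for (offset = 0; offset < total; offset += chunk) {
            MyTask *t = g_new0(MyTask, 1);

            t->task.func = my_task_func;
            t->offset = offset;
            t->bytes = MIN(chunk, total - offset);

            /* Yields while all max_busy_tasks slots are busy; the pool
             * frees the task (g_free() in aio_task_co()) once it is done. */
            aio_task_pool_start_task(pool, &t->task);
        }

        aio_task_pool_wait_all(pool);
        ret = aio_task_pool_status(pool);
        aio_task_pool_free(pool);
        return ret;
    }

With this patch, the start/wait calls above may be issued from any coroutine,
not only from the coroutine that created the pool.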

Comments

Denis V. Lunev June 11, 2020, 12:31 p.m. UTC | #1
On 6/11/20 10:36 AM, Vladimir Sementsov-Ogievskiy wrote:
> Currently, the aio task pool assumes that there is a main coroutine, which
> creates tasks and waits for them. Let's remove that restriction by using a
> CoQueue. The code becomes clearer and the interface more obvious.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>
> Hi! Here is my counter-proposal for
> "[PATCH 1/2] aio: allow to wait for coroutine pool from different coroutine"
> by Denis. I'm sure that if we are going to change something here, it is
> better to make the interface work from any coroutine, without restricting
> the wait to a single waiter at a time.
>
> (Note that it is still not thread-safe.)
>
>
>  block/aio_task.c | 15 ++++-----------
>  1 file changed, 4 insertions(+), 11 deletions(-)
>
> diff --git a/block/aio_task.c b/block/aio_task.c
> index 88989fa248..d48b29ff83 100644
> --- a/block/aio_task.c
> +++ b/block/aio_task.c
> @@ -27,11 +27,10 @@
>  #include "block/aio_task.h"
>  
>  struct AioTaskPool {
> -    Coroutine *main_co;
>      int status;
>      int max_busy_tasks;
>      int busy_tasks;
> -    bool waiting;
> +    CoQueue waiters;
>  };
>  
>  static void coroutine_fn aio_task_co(void *opaque)
> @@ -52,21 +51,15 @@ static void coroutine_fn aio_task_co(void *opaque)
>  
>      g_free(task);
>  
> -    if (pool->waiting) {
> -        pool->waiting = false;
> -        aio_co_wake(pool->main_co);
> -    }
> +    qemu_co_queue_next(&pool->waiters);
Nope, this will wake up only a single waiter.
The code will deadlock if there are two waiters for the last
entry.

You need something like qemu_co_queue_restart_all() here
at least.

Den
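
For reference, a minimal sketch of the alternative Denis suggests, applied to
the tail of aio_task_co() (the elided body is unchanged from the patch below):

    static void coroutine_fn aio_task_co(void *opaque)
    {
        AioTask *task = opaque;
        AioTaskPool *pool = task->pool;

        /* ... run task->func(task) and update busy_tasks/status as before ... */

        g_free(task);

        /*
         * qemu_co_queue_next() resumes at most one coroutine queued on
         * pool->waiters; qemu_co_queue_restart_all() resumes all of them,
         * so two coroutines waiting on the last busy task both wake up.
         */
        qemu_co_queue_restart_all(&pool->waiters);
    }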
Vladimir Sementsov-Ogievskiy June 11, 2020, 12:52 p.m. UTC | #2
11.06.2020 15:31, Denis V. Lunev wrote:
> On 6/11/20 10:36 AM, Vladimir Sementsov-Ogievskiy wrote:
>> Currently, the aio task pool assumes that there is a main coroutine, which
>> creates tasks and waits for them. Let's remove that restriction by using a
>> CoQueue. The code becomes clearer and the interface more obvious.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>
>> Hi! Here is my counter-proposal for
>> "[PATCH 1/2] aio: allow to wait for coroutine pool from different coroutine"
>> by Denis. I'm sure that if we are going to change something here, it is
>> better to make the interface work from any coroutine, without restricting
>> the wait to a single waiter at a time.
>>
>> (Note that it is still not thread-safe.)
>>
>>
>>   block/aio_task.c | 15 ++++-----------
>>   1 file changed, 4 insertions(+), 11 deletions(-)
>>
>> diff --git a/block/aio_task.c b/block/aio_task.c
>> index 88989fa248..d48b29ff83 100644
>> --- a/block/aio_task.c
>> +++ b/block/aio_task.c
>> @@ -27,11 +27,10 @@
>>   #include "block/aio_task.h"
>>   
>>   struct AioTaskPool {
>> -    Coroutine *main_co;
>>       int status;
>>       int max_busy_tasks;
>>       int busy_tasks;
>> -    bool waiting;
>> +    CoQueue waiters;
>>   };
>>   
>>   static void coroutine_fn aio_task_co(void *opaque)
>> @@ -52,21 +51,15 @@ static void coroutine_fn aio_task_co(void *opaque)
>>   
>>       g_free(task);
>>   
>> -    if (pool->waiting) {
>> -        pool->waiting = false;
>> -        aio_co_wake(pool->main_co);
>> -    }
>> +    qemu_co_queue_next(&pool->waiters);
> Nope, this will wake up only a single waiter.
> The code will deadlock if there are two waiters for the last
> entry.
> 

Hmm, right.
The problem is that the original design combines two different things into one:

   1. wait for all tasks to complete
   2. wait for a free slot, to start a new task

2. should work as is, with qemu_co_queue_next() and the co-queue, and for 1. we
should have a separate yield point, to be woken up when busy_tasks becomes 0.

I'll resend.
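
One possible shape of the split described above (an illustrative sketch only,
not the actual resend; the all_done queue is a hypothetical field name):

    struct AioTaskPool {
        int status;
        int max_busy_tasks;
        int busy_tasks;
        CoQueue waiters;    /* 2. coroutines waiting for a free slot */
        CoQueue all_done;   /* 1. coroutines waiting for busy_tasks == 0 */
    };

    static void coroutine_fn aio_task_co(void *opaque)
    {
        AioTask *task = opaque;
        AioTaskPool *pool = task->pool;

        /* ... run the task and update pool->busy_tasks / pool->status ... */

        g_free(task);

        /* Exactly one slot was freed, so waking one slot-waiter is enough. */
        qemu_co_queue_next(&pool->waiters);

        /* When the last task finishes, wake everyone waiting for completion. */
        if (pool->busy_tasks == 0) {
            qemu_co_queue_restart_all(&pool->all_done);
        }
    }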

Patch

diff --git a/block/aio_task.c b/block/aio_task.c
index 88989fa248..d48b29ff83 100644
--- a/block/aio_task.c
+++ b/block/aio_task.c
@@ -27,11 +27,10 @@ 
 #include "block/aio_task.h"
 
 struct AioTaskPool {
-    Coroutine *main_co;
     int status;
     int max_busy_tasks;
     int busy_tasks;
-    bool waiting;
+    CoQueue waiters;
 };
 
 static void coroutine_fn aio_task_co(void *opaque)
@@ -52,21 +51,15 @@  static void coroutine_fn aio_task_co(void *opaque)
 
     g_free(task);
 
-    if (pool->waiting) {
-        pool->waiting = false;
-        aio_co_wake(pool->main_co);
-    }
+    qemu_co_queue_next(&pool->waiters);
 }
 
 void coroutine_fn aio_task_pool_wait_one(AioTaskPool *pool)
 {
     assert(pool->busy_tasks > 0);
-    assert(qemu_coroutine_self() == pool->main_co);
 
-    pool->waiting = true;
-    qemu_coroutine_yield();
+    qemu_co_queue_wait(&pool->waiters, NULL);
 
-    assert(!pool->waiting);
     assert(pool->busy_tasks < pool->max_busy_tasks);
 }
 
@@ -98,8 +91,8 @@  AioTaskPool *coroutine_fn aio_task_pool_new(int max_busy_tasks)
 {
     AioTaskPool *pool = g_new0(AioTaskPool, 1);
 
-    pool->main_co = qemu_coroutine_self();
     pool->max_busy_tasks = max_busy_tasks;
+    qemu_co_queue_init(&pool->waiters);
 
     return pool;
 }
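
A side note on the new wait call: qemu_co_queue_wait() takes an optional lock
that is released while the coroutine sleeps and re-acquired on wake-up;
passing NULL here is consistent with the "still not thread-safe" remark above,
since nothing protects busy_tasks or the queue. A hypothetical sketch of what
a thread-safe slot wait could look like (pool->lock is an imaginary CoMutex,
not part of this patch):

    qemu_co_mutex_lock(&pool->lock);
    while (pool->busy_tasks == pool->max_busy_tasks) {
        /* The lock is dropped while waiting and re-taken before returning. */
        qemu_co_queue_wait(&pool->waiters, &pool->lock);
    }
    qemu_co_mutex_unlock(&pool->lock);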