Message ID: 20200406191320.13371-5-pbonzini@redhat.com (mailing list archive)
State:      New, archived
Series:     async: fix hangs on weakly-ordered architectures
On Mon, Apr 06, 2020 at 03:13:20PM -0400, Paolo Bonzini wrote:
> When using C11 atomics, non-seqcst reads and writes do not participate
> in the total order of seqcst operations. In util/async.c and util/aio-posix.c,
> in particular, the pattern that we use
>
>    write ctx->notify_me                 write bh->scheduled
>    read bh->scheduled                   read ctx->notify_me
>    if !bh->scheduled, sleep             if ctx->notify_me, notify
>
> needs to use seqcst operations for both the write and the read. In
> general this is something that we do not want, because there can be
> many sources that are polled in addition to bottom halves. The
> alternative is to place a seqcst memory barrier between the write
> and the read. This also comes with a disadvantage, in that the
> memory barrier is implicit on strongly-ordered architectures and
> it wastes a few dozen clock cycles.
>
> Fortunately, ctx->notify_me is never written concurrently by two
> threads, so we can instead relax the writes to ctx->notify_me.
> [This part of the commit message is still to be written more
> in detail and is what I am not sure about.]
>
> Analyzed-by: Ying Fang <fangying1@huawei.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  util/aio-posix.c |  9 ++++++++-
>  util/aio-win32.c |  8 +++++++-
>  util/async.c     | 12 ++++++++++--
>  3 files changed, 25 insertions(+), 4 deletions(-)
>
> diff --git a/util/aio-posix.c b/util/aio-posix.c
> index cd6cf0a4a9..37afec726f 100644
> --- a/util/aio-posix.c
> +++ b/util/aio-posix.c
> @@ -569,7 +569,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
>       * so disable the optimization now.
>       */
>      if (blocking) {
> -        atomic_add(&ctx->notify_me, 2);
> +        atomic_set(&ctx->notify_me, atomic_read(&ctx->notify_me) + 2);

Non-atomic "atomic" code looks suspicious and warrants a comment
mentioning that this is only executed from one thread.  This applies to
the other instances in this patch too.
On 07/04/20 11:09, Stefan Hajnoczi wrote:
>>      if (blocking) {
>> -        atomic_add(&ctx->notify_me, 2);
>> +        atomic_set(&ctx->notify_me, atomic_read(&ctx->notify_me) + 2);
> Non-atomic "atomic" code looks suspicious and warrants a comment
> mentioning that this is only executed from one thread.  This applies to
> the other instances in this patch too.

Yes, that's the patch that is missing from this series, which
strengthens the assertion to ensure that we're in the home thread.

Paolo
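[The construct under discussion can be sketched in portable C11 atomics.
This is a hypothetical standalone example, not QEMU code: the update is a
separate load and store rather than an atomic read-modify-write, which is
only correct because exactly one thread -- the AioContext's home thread,
per Paolo's reply -- ever writes the field. Concurrent writers would lose
updates.]

```c
#include <stdatomic.h>

static atomic_int notify_me;   /* stands in for ctx->notify_me */

/*
 * Safe only when called from the single writer thread: the load and
 * the store are individually atomic, but the increment as a whole is
 * not.  A second concurrent writer could overwrite our update.
 */
static void notify_me_update(int delta)
{
    int old = atomic_load_explicit(&notify_me, memory_order_relaxed);
    atomic_store_explicit(&notify_me, old + delta, memory_order_relaxed);
}

/* Readers in other threads still see a coherent value. */
static int notify_me_read(void)
{
    return atomic_load_explicit(&notify_me, memory_order_relaxed);
}
```

The relaxed ordering is deliberate: the ordering against the subsequent
read of bottom-half flags is provided by a separate full barrier, as the
patch's added smp_mb() shows.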
diff --git a/util/aio-posix.c b/util/aio-posix.c
index cd6cf0a4a9..37afec726f 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -569,7 +569,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
      * so disable the optimization now.
      */
     if (blocking) {
-        atomic_add(&ctx->notify_me, 2);
+        atomic_set(&ctx->notify_me, atomic_read(&ctx->notify_me) + 2);
+        /*
+         * Write ctx->notify_me before computing the timeout
+         * (reading bottom half flags, etc.).  Pairs with
+         * atomic_xchg in aio_notify().
+         */
+        smp_mb();
     }
 
     qemu_lockcnt_inc(&ctx->list_lock);
@@ -590,6 +596,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
     }
 
     if (blocking) {
+        /* Finish the poll before clearing the flag.  */
         atomic_sub(&ctx->notify_me, 2);
         aio_notify_accept(ctx);
     }
diff --git a/util/aio-win32.c b/util/aio-win32.c
index a23b9c364d..2cccdb35c1 100644
--- a/util/aio-win32.c
+++ b/util/aio-win32.c
@@ -331,7 +331,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
      * so disable the optimization now.
      */
     if (blocking) {
-        atomic_add(&ctx->notify_me, 2);
+        atomic_set(&ctx->notify_me, atomic_read(&ctx->notify_me) + 2);
+        /*
+         * Write ctx->notify_me before computing the timeout
+         * (reading bottom half flags, etc.).  Pairs with
+         * atomic_xchg in aio_notify().
+         */
+        smp_mb();
     }
 
     qemu_lockcnt_inc(&ctx->list_lock);
diff --git a/util/async.c b/util/async.c
index b94518b948..ee1bc87d2b 100644
--- a/util/async.c
+++ b/util/async.c
@@ -249,7 +249,14 @@ aio_ctx_prepare(GSource *source, gint *timeout)
 {
     AioContext *ctx = (AioContext *) source;
 
-    atomic_or(&ctx->notify_me, 1);
+    atomic_set(&ctx->notify_me, atomic_read(&ctx->notify_me) | 1);
+
+    /*
+     * Write ctx->notify_me before computing the timeout
+     * (reading bottom half flags, etc.).  Pairs with
+     * atomic_xchg in aio_notify().
+     */
+    smp_mb();
 
     /* We assume there is no timeout already supplied */
     *timeout = qemu_timeout_ns_to_ms(aio_compute_timeout(ctx));
@@ -414,7 +422,7 @@ void aio_notify(AioContext *ctx)
      * with atomic_or in aio_ctx_prepare or atomic_add in aio_poll.
      */
     smp_mb();
-    if (ctx->notify_me) {
+    if (atomic_read(&ctx->notify_me)) {
         event_notifier_set(&ctx->notifier);
         atomic_mb_set(&ctx->notified, true);
     }
When using C11 atomics, non-seqcst reads and writes do not participate
in the total order of seqcst operations. In util/async.c and util/aio-posix.c,
in particular, the pattern that we use

   write ctx->notify_me                 write bh->scheduled
   read bh->scheduled                   read ctx->notify_me
   if !bh->scheduled, sleep             if ctx->notify_me, notify

needs to use seqcst operations for both the write and the read. In
general this is something that we do not want, because there can be
many sources that are polled in addition to bottom halves. The
alternative is to place a seqcst memory barrier between the write
and the read. This also comes with a disadvantage, in that the
memory barrier is implicit on strongly-ordered architectures and
it wastes a few dozen clock cycles.

Fortunately, ctx->notify_me is never written concurrently by two
threads, so we can instead relax the writes to ctx->notify_me.
[This part of the commit message is still to be written more
in detail and is what I am not sure about.]

Analyzed-by: Ying Fang <fangying1@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 util/aio-posix.c |  9 ++++++++-
 util/aio-win32.c |  8 +++++++-
 util/async.c     | 12 ++++++++++--
 3 files changed, 25 insertions(+), 4 deletions(-)
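[The two-column diagram above is the classic store-buffering pattern:
each side writes its own flag and then reads the other's. Without a full
barrier between the write and the read on both sides, both can observe
the stale value in the same run -- the poller sleeps and the notifier
skips the wakeup. A standalone C11 sketch of the fixed pattern, using
hypothetical names rather than QEMU code, where atomic_thread_fence
stands in for smp_mb():]

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stddef.h>

static atomic_int notify_me;   /* stands in for ctx->notify_me */
static atomic_int scheduled;   /* stands in for bh->scheduled  */
static int poller_saw_bh;
static int notifier_saw_poller;

static void *poller(void *arg)
{
    (void)arg;
    /* Single-writer update, deliberately relaxed (see the patch). */
    atomic_store_explicit(&notify_me,
                          atomic_load_explicit(&notify_me,
                                               memory_order_relaxed) + 2,
                          memory_order_relaxed);
    atomic_thread_fence(memory_order_seq_cst);   /* the added smp_mb() */
    poller_saw_bh = atomic_load_explicit(&scheduled, memory_order_relaxed);
    return NULL;
}

static void *notifier(void *arg)
{
    (void)arg;
    atomic_store_explicit(&scheduled, 1, memory_order_relaxed);
    atomic_thread_fence(memory_order_seq_cst);   /* smp_mb() in aio_notify */
    notifier_saw_poller = atomic_load_explicit(&notify_me,
                                               memory_order_relaxed);
    return NULL;
}

/*
 * Returns nonzero iff at least one side observed the other's write.
 * With both seq_cst fences in place this holds on every run; dropping
 * either fence reintroduces the lost-wakeup hang on weakly-ordered
 * machines.
 */
static int run_once(void)
{
    pthread_t a, b;
    atomic_store(&notify_me, 0);
    atomic_store(&scheduled, 0);
    poller_saw_bh = notifier_saw_poller = 0;
    pthread_create(&a, NULL, poller, NULL);
    pthread_create(&b, NULL, notifier, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return poller_saw_bh || notifier_saw_poller;
}
```

Note that the fences, not the ordering of the individual loads and
stores, carry the guarantee: all four memory accesses can stay relaxed,
which is what lets the patch drop the seq_cst read-modify-write
operations.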