[RFC,0/6] Removal of AioContext lock and usage of subtree drains in aborted transactions

Message ID: 20211213104014.69858-1-eesposit@redhat.com

Message

Emanuele Giuseppe Esposito Dec. 13, 2021, 10:40 a.m. UTC
Hello everyone,

As you know already, my current goal is to try to remove the AioContext lock from the QEMU block layer.
Currently the AioContext lock is used pretty much throughout the whole block layer; it is a little confusing to understand what exactly it protects, and I am starting to think that in some places it is taken just because of block API assumptions.
For example, some functions like AIO_WAIT_WHILE() release the lock with the assumption that it is always held, so all callers must take it just to allow the function to release it.
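To make that concrete, here is a minimal sketch of the usual calling pattern (the bs variable and the done flag are made up for illustration):

    AioContext *ctx = bdrv_get_aio_context(bs);
    bool done = false;   /* hypothetical flag set by a completion callback */

    aio_context_acquire(ctx);    /* taken largely so the macro can release it */
    AIO_WAIT_WHILE(ctx, !done);  /* drops ctx around the wait, then retakes it */
    aio_context_release(ctx);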

Removing the AioContext lock is not a straightforward task: the first step is to understand which functions run in the main loop, and thus under the BQL (Big QEMU Lock), and which are used by the iothreads. We call the former category global state (GS) and the latter I/O.

The patch series "block layer: split block APIs in global state and I/O" aims to do that. Once we can at least (roughly) distinguish what is called by iothreads and what from the main loop, we can start analyzing what needs protection and what doesn't. This series is particularly helpful because by splitting the API we know where each function runs, so it helps us identifying the cases where both the main loop and iothreads read/write the same value/field (and thus need protection) and cases where the same function is used only by the main loop for example, so it shouldn't need protection.
For example, if some BlockDriverState field is read by I/O threads but modified in a GS function, this has to be protected in some way.
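To illustrate (the field and functions below are invented for the example; they are not actual QEMU code):

    /* GS: runs in the main loop, under the BQL */
    void bdrv_example_set_limit(BlockDriverState *bs, uint64_t limit)
    {
        bs->example_limit = limit;          /* write from the main loop */
    }

    /* I/O: may run in an iothread, concurrently with the write above */
    bool bdrv_example_over_limit(BlockDriverState *bs, uint64_t bytes)
    {
        return bytes > bs->example_limit;   /* unprotected read: a race */
    }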

Another series I posted, "job: replace AioContext lock with job_mutex", provides a good example of how the AioContext lock can be removed and simply replaced by a fine-grained lock.

Another way to achieve thread safety is to rely on the fact that, in some cases, writes to a field are always done in the main loop *and* under drains. In this way, we know that no requests are in flight in the I/O threads, so we can safely modify the fields.

This is exactly what assert_bdrv_graph_writable(), introduced in the block API split-up (patch 9 in v5), is there for, even though it currently only checks that we are in the main loop, not that drains are in effect.
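A sketch of where this could go (the qemu_in_main_thread() check is what the split series asserts today; the quiesce_counter check is my assumption about the drain-aware version, not what is currently merged):

    void assert_bdrv_graph_writable(BlockDriverState *bs)
    {
        /* checked today: we are under the BQL */
        assert(qemu_in_main_thread());
        /* assumed future check: the node is drained */
        assert(bs->quiesce_counter > 0);
    }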

We could then use this assertion to effectively prove that some writes to a field/list are safe, and completely get rid of the AioContext lock.
However, this is not an easy task: for example, if we look at the ->children and ->parents lists in BlockDriverState, we can see that they are modified in BQL functions but also read in I/O functions.
We therefore ideally need to add some drains (because in the current state, assert_bdrv_graph_writable() with a drain check would fail).

The main function that modifies the ->children and ->parents lists is bdrv_replace_child_noperm.
So ideally we would like to drain both old_bs and new_bs (the function moves a BdrvChild from one bs to another, modifying the respective lists).

A couple of questions to answer:

- which drain to use? My answer would be the bdrv_subtree_drain_* class of functions, because it takes care of draining the whole subtree of the node, while bdrv_drained_* does not cover the children of the given node.
This theoretically simplifies the draining requirements, as we can just invoke subtree_drain_* on the two bs involved in bdrv_replace_child_noperm, and we should be guaranteed that the write is safe.

- where to add these drains? Inside the function, or delegated to the caller?
According to d736f119da (and my unit tests), it is safe to modify the graph even inside a bdrv_subtree_drained_begin/end() section.
Therefore, wrapping each call of bdrv_replace_child_noperm with a subtree_drain_begin/end pair is (or seems) perfectly valid, as sketched below.
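Concretely, the wrapping could look something like this (the wrapper name is hypothetical; the _unlocked variants are the ones introduced later in this series):

    /* Hypothetical wrapper: drain both subtrees involved in the edge move */
    static void bdrv_replace_child_drained(BdrvChild *child,
                                           BlockDriverState *new_bs)
    {
        BlockDriverState *old_bs = child->bs;

        bdrv_subtree_drained_begin_unlocked(old_bs);
        bdrv_subtree_drained_begin_unlocked(new_bs);

        bdrv_replace_child_noperm(child, new_bs);  /* graph write, now safe */

        bdrv_subtree_drained_end_unlocked(new_bs);
        bdrv_subtree_drained_end_unlocked(old_bs);
    }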

Problems met so far (mostly solved):

1) consider that the drains use BDRV_POLL_WHILE, which in turn releases the AioContext lock. This can create problems because not all callers take the lock. It can be fixed easily by introducing BDRV_POLL_WHILE_UNLOCKED and bdrv_subtree_drained_begin/end_unlocked functions, but when running the unit tests it is easy to find cases where the AioContext lock is not consistently held. For example, in test_blockjob_common_drain_node (tests/unit/test-bdrv-drain.c):

    blk_insert_bs(blk_target, target, &error_abort);
    [...]
    aio_context_acquire(ctx);
    tjob = block_job_create("job0", &test_job_driver, NULL, src,
                            0, BLK_PERM_ALL,
                            0, 0, NULL, NULL, &error_abort);

Both functions eventually call bdrv_replace_child_noperm, but one does so with the AioContext lock held and the other without.
In this case the solution is easy and helpful for our goal, since it reduces the area that the AioContext lock covers.

2) Some tests, like tests/unit/test-bdrv-drain.c, do not expect additional drains. Therefore we can have cases where a specific drain callback (in this case used for testing) is invoked much earlier than expected, because of the additional subtree drains.
Here, too, we can simply modify the test so that the specific callback is used only when we actually need it. The test I am referring to is test_detach_by_driver_cb().

3) Transactions. I am currently struggling a lot with these, and need a little help figuring out what is happening.
Basically, the test test_update_perm_tree() in tests/unit/test-bdrv-graph-mod.c tests permissions, but it also indirectly exercises the abort() procedure of transactions.

The test performs the following (ignoring the permissions):
1. create a BlockBackend blk
2. create BlockDriverStates "node" and "filter"
3. create BdrvChild edge "root" that represents blk -> node
4. create BdrvChild edge "child" that represents filter -> node

Current graph:
blk ------ root -------v
                      node
filter --- child ------^

5a. bdrv_append: modify the "root" child to point blk -> filter
5b. bdrv_append: create BdrvChild edge "backing" that represents filter -> node (a redundant edge)
5c. bdrv_append: refresh permissions and, as expected, make bdrv_append fail.

Current graph:
blk ------- root --------v
                       filter
node <---- child --------+
 ^-------- backing ------+

At this point, the transaction machinery takes over to undo everything: first it restores the BdrvChild "root" to point to node again, and then it deletes "backing".
The problem here is that, despite d736f119da, moving an edge under subtree_drain* in bdrv_replace_child_abort() has side effects, leaving the quiesce_counter, recursive_quiesce_counter and parent_quiesce_counter of the various bs in the graph at non-zero values. This is obviously due to the edge movement between subtree_drained_begin and end, but I am not sure why the drain_saldo mechanism implemented in bdrv_replace_child_noperm is not effective in this case.
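My reading of the imbalance, as an illustrative sketch only (not a confirmed diagnosis):

    bdrv_subtree_drained_begin(filter);  /* increments counters on filter
                                          * and on its current child, node */

    /* ... the abort path moves "root" back to node and deletes "backing",
     * so node is no longer reachable from filter ... */

    bdrv_subtree_drained_end(filter);    /* walks filter's *new* set of
                                          * children, so node's counters are
                                          * never decremented back to zero */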

The failure actually occurs at the next step of the aborted transaction, bdrv_attach_child_common_abort(), but the root cause is the non-zero counters left behind by bdrv_replace_child_abort().

Error message:
test-bdrv-graph-mod: ../block/io.c:63: bdrv_parent_drained_end_single_no_poll: Assertion `c->parent_quiesce_counter > 0' failed.

It is also worth mentioning that I know a way to fix this case:
simply do not call
bdrv_subtree_drained_begin/end_unlocked(s->child->bs);
where s->child->bs is the filter bs in bdrv_replace_child_abort().
In this specific case it would work correctly, leaving all counters
at zero once the drain ends, but I think it is not correct when/if
the BdrvChild points into another, separate graph, because we would
need to drain that one as well.

I even tried to reproduce this case with a unit test, but adding subtree_drain_begin/end around bdrv_append does not reproduce the issue.

So the questions in this RFC are:
- is this the right approach to remove the AioContext lock? I think so
- are there better options?
- most importantly, any idea or suggestion on why this happens,
  and why the quiesce counters are not properly restored in abort() when drains are added?

This series is based on "job: replace AioContext lock with job_mutex".

Based-on: <20211104145334.1346363-1-eesposit@redhat.com>

Thank you in advance,
Emanuele

Emanuele Giuseppe Esposito (6):
  tests/unit/test-bdrv-drain.c: graph setup functions can't run in
    coroutines
  introduce BDRV_POLL_WHILE_UNLOCKED
  block/io.c: introduce bdrv_subtree_drained_{begin/end}_unlocked
  block.c: add subtree_drains where needed
  test-bdrv-drain.c: adapt test to the new subtree drains
  block/io.c: enable assert_bdrv_graph_writable

 block.c                            |  24 +++++
 block/io.c                         |  36 ++++++--
 include/block/block-global-state.h |   5 ++
 include/block/block-io.h           |   2 +
 tests/unit/test-bdrv-drain.c       | 136 ++++++++++++++++++-----------
 5 files changed, 145 insertions(+), 58 deletions(-)

Comments

Stefan Hajnoczi Dec. 13, 2021, 2:52 p.m. UTC | #1
On Mon, Dec 13, 2021 at 05:40:08AM -0500, Emanuele Giuseppe Esposito wrote:
> Hello everyone,
> 
> As you know already, my current goal is to try to remove the AioContext lock from the QEMU block layer.
> Currently the AioContext lock is used pretty much throughout the whole block layer; it is a little confusing to understand what exactly it protects, and I am starting to think that in some places it is taken just because of block API assumptions.
> For example, some functions like AIO_WAIT_WHILE() release the lock with the assumption that it is always held, so all callers must take it just to allow the function to release it.
> 
> Removing the AioContext lock is not a straightforward task: the first step is to understand which functions run in the main loop, and thus under the BQL (Big QEMU Lock), and which are used by the iothreads. We call the former category global state (GS) and the latter I/O.
> 
> The patch series "block layer: split block APIs in global state and I/O" aims to do that. Once we can at least roughly distinguish what is called by iothreads and what by the main loop, we can start analyzing what needs protection and what doesn't. Splitting the API tells us where each function runs, so it helps us identify both the cases where the main loop and iothreads read/write the same field (which therefore needs protection) and the cases where a function is used only by the main loop, for example, and so needs no protection.
> For example, if some BlockDriverState field is read by I/O threads but modified in a GS function, this has to be protected in some way.
> 
> Another series I posted, "job: replace AioContext lock with job_mutex", provides a good example of how the AioContext lock can be removed and simply replaced by a fine-grained lock.
> 
> Another way to achieve thread safety is to rely on the fact that, in some cases, writes to a field are always done in the main loop *and* under drains. In this way, we know that no requests are in flight in the I/O threads, so we can safely modify the fields.
> 
> This is exactly what assert_bdrv_graph_writable(), introduced in the block API split-up (patch 9 in v5), is there for, even though it currently only checks that we are in the main loop, not that drains are in effect.
> 
> We could then use this assertion to effectively prove that some writes to a field/list are safe, and completely get rid of the AioContext lock.
> However, this is not an easy task: for example, if we look at the ->children and ->parents lists in BlockDriverState we can see that they are modified in BQL functions, but also read in I/O.
> We therefore ideally need to add some drains (because in the current state assert_bdrv_graph_writable() with drains would fail).
> 
> The main function that modifies the ->children and ->parents lists is bdrv_replace_child_noperm.
> So ideally we would like to drain both the old_bs and new_bs (the function moves a BdrvChild from one bs to another, modifying the respective lists).
> 
> A couple of questions to answer:
> 
> - which drain to use? My answer would be the bdrv_subtree_drain_* class of functions, because it takes care of draining the whole subtree of the node, while bdrv_drained_* does not cover the children of the given node.
> This theoretically simplifies the draining requirements, as we can just invoke subtree_drain_* on the two bs that are involved in bdrv_replace_child_noperm, and we should guarantee that the write is safe.

Off-topic: I don't understand the difference between the effects of
bdrv_drained_begin() and bdrv_subtree_drained_begin(). Both call
aio_disable_external(aio_context) and aio_poll(). bdrv_drained_begin()
only polls parents and itself, while bdrv_subtree_drained_begin() also
polls children. But why does that distinction matter? I wouldn't know
when to use one over the other.

On-topic: aio_disable_external() does not notify the AioContext. We
probably get away with it since the AioContext lock is currently held,
but it will be necessary to notify the AioContext so it disables
external handlers when the lock is not held:

  static inline void aio_disable_external(AioContext *ctx)
  {
      qatomic_inc(&ctx->external_disable_cnt);
      /* <-- missing aio_notify() here, since the AioContext needs to
       * re-evaluate its handlers */
  }

> - where to add these drains? Inside the function or delegate to the caller?
> According to d736f119da (and my unit tests), it is safe to modify the graph even inside a bdrv_subtree_drained_begin/end() section.
> Therefore, wrapping each call of bdrv_replace_child_noperm with a subtree_drain_begin/end is (or seems) perfectly valid.
> 
> Problems met so far (mostly solved):
> 
> 1) consider that the drains use BDRV_POLL_WHILE, which in turn releases the AioContext lock. This can create problems because not all callers take the lock. It can be fixed easily by introducing BDRV_POLL_WHILE_UNLOCKED and bdrv_subtree_drained_begin/end_unlocked functions, but when running the unit tests it is easy to find cases where the AioContext lock is not consistently held. For example, in test_blockjob_common_drain_node (tests/unit/test-bdrv-drain.c):
> 
>     blk_insert_bs(blk_target, target, &error_abort);
>     [...]
>     aio_context_acquire(ctx);
>     tjob = block_job_create("job0", &test_job_driver, NULL, src,
>                             0, BLK_PERM_ALL,
>                             0, 0, NULL, NULL, &error_abort);
> 
> Both functions eventually call bdrv_replace_child_noperm, but one does so with the AioContext lock held and the other without.
> In this case the solution is easy and helpful for our goal, since we are reducing the area that the aiocontext lock covers.
> 
> 2) Some tests like tests/unit/test-bdrv-drain.c do not expect additional drains. Therefore we might have cases where a specific drain callback (in this case used for testing) is called way before it is expected to do so, because of the additional subtree drains.
> Again also here we can simply modify the test to use the specific callback only when we actually need to use it. The test I am referring to is test_detach_by_driver_cb().

I'm not sure what this means but some tests make assumptions about
internals. They are fragile. Modifying the test sounds reasonable.

> 
> 3) Transactions. I am currently struggling a lot with this, and need a little bit of help trying to figure out what is happening.
> Basically, the test test_update_perm_tree() in tests/unit/test-bdrv-graph-mod.c tests permissions, but it also indirectly exercises the abort() procedure of transactions.
> 
> The test performs the following (ignoring the permissions):
> 1. create a BlockBackend blk
> 2. create BlockDriverStates "node" and "filter"
> 3. create BdrvChild edge "root" that represents blk -> node
> 4. create BdrvChild edge "child" that represents filter -> node
> 
> Current graph:
> blk ------ root -------v
>                       node
> filter --- child ------^
> 
> 5a. bdrv_append: modify "root" child to point blk -> filter
> 5b. bdrv_append: create BdrvChild edge "backing" that represents filter -> node (redundant edge)
> 5c. bdrv_append: refresh permissions, and as expected make bdrv_append fail.
> 
> Current graph:
> blk ------- root --------v
>                        filter
> node <---- child --------+
>  ^-------- backing ------+
> 
> At this point, the transaction procedure takes place to undo everything, and firstly it restores the BdrvChild "root" to point again to node, and then deletes "backing".
> The problem here is that, despite d736f119da, moving an edge under subtree_drain* in bdrv_replace_child_abort() has side effects, leaving the quiesce_counter, recursive_quiesce_counter and parent_quiesce_counter of the various bs in the graph at non-zero values. This is obviously due to the edge movement between subtree_drained_begin and end, but I am not sure why the drain_saldo mechanism implemented in bdrv_replace_child_noperm is not effective in this case.
> 
> The failure is actually on the next step of the aborted transaction, bdrv_attach_child_common_abort(), but the root cause
> is due to the non-zero counters left by bdrv_replace_child_abort().
> 
> Error message:
> test-bdrv-graph-mod: ../block/io.c:63: bdrv_parent_drained_end_single_no_poll: Assertion `c->parent_quiesce_counter > 0' failed.
> 
> It is worth mentioning also that I know a way to fix this case,
> and it is simply to not call
> bdrv_subtree_drained_begin/end_unlocked(s->child->bs);
> where s->child->bs is the filter bs in bdrv_replace_child_abort().
> In this specific case, it would work correctly, leaving all counters
> to zero once the drain ends, but I think it is not correct when/if
> the BdrvChild is pointing into another separated graph, because we
> would need to drain also that.
> 
> I even tried to reproduce this case with a unit test, but adding subtree_drain_begin/end around bdrv_append does not reproduce the issue.
> 
> So the questions in this RFC are:
> - is this the right approach to remove the AioContext lock? I think so

Yes, I think using drained sections to quiesce I/O is the right choice.

> - are there better options?

I/O needs to quiesce, at least in some cases, so I don't think we can
avoid drained sections. It may be possible to implement other solutions
on a case-by-case basis, but it would be more complex and still wouldn't
get rid of some of the drained sections that are definitely needed.

> - most importantly, any idea or suggestion on why this happens,
>   and why when adding drains the quiesce counters are not properly restored in abort()?

Maybe Kevin has an idea here. He wrote bdrv_subtree_drained_begin().

Stefan
Stefan Hajnoczi Dec. 14, 2021, 4:47 p.m. UTC | #2
Ignore what I said about a missing aio_notify() call in
aio_disable_external(). Skipping the call is an optimization and it's
safe. We only need to call aio_notify() in aio_enable_external().

Stefan
Emanuele Giuseppe Esposito Dec. 14, 2021, 6:10 p.m. UTC | #3
On 13/12/2021 15:52, Stefan Hajnoczi wrote:
> Off-topic: I don't understand the difference between the effects of
> bdrv_drained_begin() and bdrv_subtree_drained_begin(). Both call
> aio_disable_external(aio_context) and aio_poll(). bdrv_drained_begin()
> only polls parents and itself, while bdrv_subtree_drained_begin() also
> polls children. But why does that distinction matter? I wouldn't know
> when to use one over the other.

Good point. Now I am wondering the same, so it would be great if anyone 
could clarify it.

Emanuele
Hanna Czenczek Dec. 15, 2021, 12:34 p.m. UTC | #4
On 14.12.21 19:10, Emanuele Giuseppe Esposito wrote:
>
>
> On 13/12/2021 15:52, Stefan Hajnoczi wrote:
>> Off-topic: I don't understand the difference between the effects of
>> bdrv_drained_begin() and bdrv_subtree_drained_begin(). Both call
>> aio_disable_external(aio_context) and aio_poll(). bdrv_drained_begin()
>> only polls parents and itself, while bdrv_subtree_drained_begin() also
>> polls children. But why does that distinction matter? I wouldn't know
>> when to use one over the other.
>
> Good point. Now I am wondering the same, so it would be great if 
> anyone could clarify it.

As far as I understand, bdrv_drained_begin() is used to drain and stop 
requests on a single BDS, whereas bdrv_subtree_drained_begin() drains 
the BDS and all of its children.  So when you don’t care about lingering 
requests in child nodes, then bdrv_drained_begin() suffices.

Hanna
Kevin Wolf Dec. 16, 2021, 10:37 a.m. UTC | #5
On 15.12.2021 at 13:34, Hanna Reitz wrote:
> On 14.12.21 19:10, Emanuele Giuseppe Esposito wrote:
> > 
> > 
> > On 13/12/2021 15:52, Stefan Hajnoczi wrote:
> > > Off-topic: I don't understand the difference between the effects of
> > > bdrv_drained_begin() and bdrv_subtree_drained_begin(). Both call
> > > aio_disable_external(aio_context) and aio_poll(). bdrv_drained_begin()
> > > only polls parents and itself, while bdrv_subtree_drained_begin() also
> > > polls children. But why does that distinction matter? I wouldn't know
> > > when to use one over the other.
> > 
> > Good point. Now I am wondering the same, so it would be great if anyone
> > could clarify it.
> 
> As far as I understand, bdrv_drained_begin() is used to drain and stop
> requests on a single BDS, whereas bdrv_subtree_drained_begin() drains the
> BDS and all of its children.  So when you don’t care about lingering
> requests in child nodes, then bdrv_drained_begin() suffices.

Right. This is different in practice when a child node has multiple
parents. Usually, when you want to quiesce one parent, the other parent
can keep using the child without being in the way.

For example, two qcow2 overlays based on a single template:

    vda             vdb
     |               |
     v               v
   qcow2           qcow2
(vda.qcow2)     (vdb.qcow2)
     |               |
     +-----+   +-----+
           |   |
           v   v
           qcow2
      (template.qcow2)

If you drain vdb.qcow2 because you want to safely modify something in
its BlockDriverState, there is nothing that should stop vda.qcow2 from
processing requests.
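
In code, the difference looks like this (node variable names made up to
match the diagram above):

    bdrv_drained_begin(vdb_bs);          /* quiesces vdb.qcow2 and its
                                          * parents' access to it; vda.qcow2
                                          * keeps sending requests through
                                          * template.qcow2 */
    bdrv_drained_end(vdb_bs);

    bdrv_subtree_drained_begin(vdb_bs);  /* also drains template.qcow2,
                                          * which quiesces all of *its*
                                          * parents -- including vda.qcow2 */
    bdrv_subtree_drained_end(vdb_bs);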

If you're not sure which one to use, bdrv_drained_begin() is what you
want. If you want bdrv_subtree_drained_begin(), you'll know. (It's
currently only used by reopen and by drop_intermediates, which both
operate on more than one node.)

Kevin