diff mbox series

[v4,20/25] block_int-common.h: assertion in the callers of BlockDriver function pointers

Message ID 20211025101735.2060852-21-eesposit@redhat.com (mailing list archive)
State New, archived
Series block layer: split block APIs in global state and I/O

Commit Message

Emanuele Giuseppe Esposito Oct. 25, 2021, 10:17 a.m. UTC
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

Comments

Hanna Czenczek Nov. 15, 2021, 12:48 p.m. UTC | #1
On 25.10.21 12:17, Emanuele Giuseppe Esposito wrote:
> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   block.c | 17 +++++++++++++++++
>   1 file changed, 17 insertions(+)
>
> diff --git a/block.c b/block.c
> index 94bff5c757..40c4729b8d 100644
> --- a/block.c
> +++ b/block.c

[...]

> @@ -2148,6 +2152,7 @@ static void bdrv_child_perm(BlockDriverState *bs, BlockDriverState *child_bs,
>                               uint64_t *nperm, uint64_t *nshared)
>   {
>       assert(bs->drv && bs->drv->bdrv_child_perm);
> +    assert(qemu_in_main_thread());
>       bs->drv->bdrv_child_perm(bs, c, role, reopen_queue,
>                                parent_perm, parent_shared,
>                                nperm, nshared);

(Should’ve noticed earlier, but only did now...)

First, this function is indirectly called by bdrv_refresh_perms(). I 
understand that all perm-related functions are classified as GS.

However, bdrv_co_invalidate_cache() invokes bdrv_refresh_perms. Being 
declared in block/coroutines.h, it’s an I/O function, so it mustn’t call 
such a GS function. BlockDriver.bdrv_co_invalidate_cache(), 
bdrv_invalidate_cache(), and blk_invalidate_cache() are also classified 
as I/O functions. Perhaps all of these functions should be classified as 
GS functions?  I believe their callers and their purpose would allow for 
this.


Second, it’s called by bdrv_child_refresh_perms(), which is called by 
block_crypto_amend_options_generic_luks().  This function is called by 
block_crypto_co_amend_luks(), which is a BlockDriver.bdrv_co_amend 
implementation, which is classified as an I/O function.

Honestly, I don’t know how to fix that mess.  The best would be if we 
could make the perm functions thread-safe and classify them as I/O, but 
it seems to me like that’s impossible (I sure hope I’m wrong).  On the 
other hand, .bdrv_co_amend very much strikes me like a GS function, but 
it isn’t.  I’m afraid it must work on nodes that are not in the main 
context, and it launches a job, so AFAIU we absolutely cannot run it 
under the BQL.

It almost seems to me like we’d need a thread-safe variant of the perm 
functions that’s allowed to fail when it cannot guarantee thread safety 
or something.  Or perhaps I’m wrong and the perm functions can actually 
be classified as thread-safe and I/O, that’d be great…
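
[Editorial illustration of that last idea — not QEMU code; every 
identifier below is invented.  A perm-update variant that is "allowed 
to fail when it cannot guarantee thread safety" could simply refuse 
whenever it cannot take the graph lock:]

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>

/*
 * Toy model only -- not QEMU code; all names are invented.  Instead of
 * blocking, the update refuses with -EBUSY when the lock standing in
 * for "graph/permission state protection" is contended.
 */

pthread_mutex_t toy_graph_lock = PTHREAD_MUTEX_INITIALIZER;
int toy_perm_state;

int toy_try_update_perm(int new_perm)
{
    if (pthread_mutex_trylock(&toy_graph_lock) != 0) {
        /* Cannot guarantee safety right now; let the caller cope. */
        return -EBUSY;
    }
    toy_perm_state = new_perm;
    pthread_mutex_unlock(&toy_graph_lock);
    return 0;
}
```

A real variant would of course have to cover the whole transactional 
update rather than a single integer, but the calling convention would 
be the same: the I/O-side caller must be prepared for -EBUSY.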

Hanna
Hanna Czenczek Nov. 15, 2021, 2:15 p.m. UTC | #2
On 15.11.21 13:48, Hanna Reitz wrote:
> On 25.10.21 12:17, Emanuele Giuseppe Esposito wrote:
>> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
>> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
>> ---
>>   block.c | 17 +++++++++++++++++
>>   1 file changed, 17 insertions(+)
>>
>> diff --git a/block.c b/block.c
>> index 94bff5c757..40c4729b8d 100644
>> --- a/block.c
>> +++ b/block.c
>
> [...]
>
>> @@ -2148,6 +2152,7 @@ static void bdrv_child_perm(BlockDriverState 
>> *bs, BlockDriverState *child_bs,
>>                               uint64_t *nperm, uint64_t *nshared)
>>   {
>>       assert(bs->drv && bs->drv->bdrv_child_perm);
>> +    assert(qemu_in_main_thread());
>>       bs->drv->bdrv_child_perm(bs, c, role, reopen_queue,
>>                                parent_perm, parent_shared,
>>                                nperm, nshared);
>
> (Should’ve noticed earlier, but only did now...)
>
> First, this function is indirectly called by bdrv_refresh_perms(). I 
> understand that all perm-related functions are classified as GS.
>
> However, bdrv_co_invalidate_cache() invokes bdrv_refresh_perms. Being 
> declared in block/coroutines.h, it’s an I/O function, so it mustn’t 
> call such a GS function. BlockDriver.bdrv_co_invalidate_cache(), 
> bdrv_invalidate_cache(), and blk_invalidate_cache() are also 
> classified as I/O functions. Perhaps all of these functions should be 
> classified as GS functions?  I believe their callers and their purpose 
> would allow for this.
>
>
> Second, it’s called by bdrv_child_refresh_perms(), which is called by 
> block_crypto_amend_options_generic_luks().  This function is called by 
> block_crypto_co_amend_luks(), which is a BlockDriver.bdrv_co_amend 
> implementation, which is classified as an I/O function.
>
> Honestly, I don’t know how to fix that mess.  The best would be if we 
> could make the perm functions thread-safe and classify them as I/O, 
> but it seems to me like that’s impossible (I sure hope I’m wrong).  On 
> the other hand, .bdrv_co_amend very much strikes me like a GS 
> function, but it isn’t.  I’m afraid it must work on nodes that are not 
> in the main context, and it launches a job, so AFAIU we absolutely 
> cannot run it under the BQL.
>
> It almost seems to me like we’d need a thread-safe variant of the perm 
> functions that’s allowed to fail when it cannot guarantee thread 
> safety or something.  Or perhaps I’m wrong and the perm functions can 
> actually be classified as thread-safe and I/O, that’d be great…

Hm.  Can we perhaps let block_crypto_amend_options_generic_luks() take 
the BQL just for the permission adjustment (i.e. the 
bdrv_child_refresh_perms() call)?  Would that work? :/
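
[Editorial sketch of the suggestion above — not QEMU code; all names 
are invented.  The shape would be: an I/O-side "amend" function that 
acquires the global lock (standing in for the BQL) only around the 
permission adjustment, running the rest of its work without it:]

```c
#include <assert.h>
#include <pthread.h>

/* Toy model only -- invented names, not the real QEMU API. */

pthread_mutex_t toy_bql = PTHREAD_MUTEX_INITIALIZER;
int toy_child_perm;

/* GS-style helper: must only run with toy_bql held. */
static void toy_child_refresh_perms(int perm)
{
    toy_child_perm = perm;
}

int toy_amend_options(int perm)
{
    /* ... long-running I/O work, done without the global lock ... */

    pthread_mutex_lock(&toy_bql);     /* lock only for the perm update */
    toy_child_refresh_perms(perm);
    pthread_mutex_unlock(&toy_bql);

    /* ... more I/O work without the global lock ... */
    return 0;
}
```

A real version would additionally have to consider callers that 
already hold the BQL, and the ordering against concurrent graph 
updates.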

Hanna
Emanuele Giuseppe Esposito Nov. 17, 2021, 11:33 a.m. UTC | #3
On 15/11/2021 13:48, Hanna Reitz wrote:
> On 25.10.21 12:17, Emanuele Giuseppe Esposito wrote:
>> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
>> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
>> ---
>>   block.c | 17 +++++++++++++++++
>>   1 file changed, 17 insertions(+)
>>
>> diff --git a/block.c b/block.c
>> index 94bff5c757..40c4729b8d 100644
>> --- a/block.c
>> +++ b/block.c
> 
> [...]
> 
>> @@ -2148,6 +2152,7 @@ static void bdrv_child_perm(BlockDriverState 
>> *bs, BlockDriverState *child_bs,
>>                               uint64_t *nperm, uint64_t *nshared)
>>   {
>>       assert(bs->drv && bs->drv->bdrv_child_perm);
>> +    assert(qemu_in_main_thread());
>>       bs->drv->bdrv_child_perm(bs, c, role, reopen_queue,
>>                                parent_perm, parent_shared,
>>                                nperm, nshared);
> 
> (Should’ve noticed earlier, but only did now...)
> 
> First, this function is indirectly called by bdrv_refresh_perms(). I 
> understand that all perm-related functions are classified as GS.
> 
> However, bdrv_co_invalidate_cache() invokes bdrv_refresh_perms. Being 
> declared in block/coroutines.h, it’s an I/O function, so it mustn’t call 
> such a GS function. BlockDriver.bdrv_co_invalidate_cache(), 
> bdrv_invalidate_cache(), and blk_invalidate_cache() are also classified 
> as I/O functions. Perhaps all of these functions should be classified as 
> GS functions?  I believe their callers and their purpose would allow for 
> this.

I think that the *_invalidate_cache functions are I/O.
First of all, test-block-iothread.c calls bdrv_invalidate_cache in 
test_sync_op_invalidate_cache, which is purposefully called in an 
iothread. So that hints that we want it as I/O.
(Small mistake I just noticed: blk_invalidate_cache has the BQL 
assertion even though it is rightly put in block-backend-io.h.)

> 
> Second, it’s called by bdrv_child_refresh_perms(), which is called by 
> block_crypto_amend_options_generic_luks().  This function is called by 
> block_crypto_co_amend_luks(), which is a BlockDriver.bdrv_co_amend 
> implementation, which is classified as an I/O function.
> 
> Honestly, I don’t know how to fix that mess.  The best would be if we 
> could make the perm functions thread-safe and classify them as I/O, but 
> it seems to me like that’s impossible (I sure hope I’m wrong).  On the 
> other hand, .bdrv_co_amend very much strikes me like a GS function, but 
> it isn’t.  I’m afraid it must work on nodes that are not in the main 
> context, and it launches a job, so AFAIU we absolutely cannot run it 
> under the BQL.
> 
> It almost seems to me like we’d need a thread-safe variant of the perm 
> functions that’s allowed to fail when it cannot guarantee thread safety 
> or something.  Or perhaps I’m wrong and the perm functions can actually 
> be classified as thread-safe and I/O, that’d be great…

I think that since we are currently only splitting and not taking care 
of the actual I/O thread safety, we can move the _perm functions to 
I/O and add a nice TODO to double-check their thread safety.

I mean, if they are not thread-safe after the split, it means they are 
not thread-safe right now either.

Emanuele
Hanna Czenczek Nov. 17, 2021, 12:51 p.m. UTC | #4
On 17.11.21 12:33, Emanuele Giuseppe Esposito wrote:
>
>
> On 15/11/2021 13:48, Hanna Reitz wrote:
>> On 25.10.21 12:17, Emanuele Giuseppe Esposito wrote:
>>> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
>>> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
>>> ---
>>>   block.c | 17 +++++++++++++++++
>>>   1 file changed, 17 insertions(+)
>>>
>>> diff --git a/block.c b/block.c
>>> index 94bff5c757..40c4729b8d 100644
>>> --- a/block.c
>>> +++ b/block.c
>>
>> [...]
>>
>>> @@ -2148,6 +2152,7 @@ static void bdrv_child_perm(BlockDriverState 
>>> *bs, BlockDriverState *child_bs,
>>>                               uint64_t *nperm, uint64_t *nshared)
>>>   {
>>>       assert(bs->drv && bs->drv->bdrv_child_perm);
>>> +    assert(qemu_in_main_thread());
>>>       bs->drv->bdrv_child_perm(bs, c, role, reopen_queue,
>>>                                parent_perm, parent_shared,
>>>                                nperm, nshared);
>>
>> (Should’ve noticed earlier, but only did now...)
>>
>> First, this function is indirectly called by bdrv_refresh_perms(). I 
>> understand that all perm-related functions are classified as GS.
>>
>> However, bdrv_co_invalidate_cache() invokes bdrv_refresh_perms. Being 
>> declared in block/coroutines.h, it’s an I/O function, so it mustn’t 
>> call such a GS function. BlockDriver.bdrv_co_invalidate_cache(), 
>> bdrv_invalidate_cache(), and blk_invalidate_cache() are also 
>> classified as I/O functions. Perhaps all of these functions should be 
>> classified as GS functions?  I believe their callers and their 
>> purpose would allow for this.
>
> I think that the *_invalidate_cache functions are I/O.
> First of all, test-block-iothread.c calls bdrv_invalidate_cache in 
> test_sync_op_invalidate_cache, which is purposefully called in an 
> iothread. So that hints that we want it as I/O.

Hm, OK, but bdrv_co_invalidate_cache() calls bdrv_refresh_perms(), which 
is a GS function, so that shouldn’t work, right?

> (Small mistake I just noticed: blk_invalidate_cache has the BQL 
> assertion even though it is rightly put in block-backend-io.h.)
>
>>
>> Second, it’s called by bdrv_child_refresh_perms(), which is called by 
>> block_crypto_amend_options_generic_luks().  This function is called 
>> by block_crypto_co_amend_luks(), which is a BlockDriver.bdrv_co_amend 
>> implementation, which is classified as an I/O function.
>>
>> Honestly, I don’t know how to fix that mess.  The best would be if we 
>> could make the perm functions thread-safe and classify them as I/O, 
>> but it seems to me like that’s impossible (I sure hope I’m wrong).  
>> On the other hand, .bdrv_co_amend very much strikes me like a GS 
>> function, but it isn’t.  I’m afraid it must work on nodes that are 
>> not in the main context, and it launches a job, so AFAIU we 
>> absolutely cannot run it under the BQL.
>>
>> It almost seems to me like we’d need a thread-safe variant of the 
>> perm functions that’s allowed to fail when it cannot guarantee thread 
>> safety or something.  Or perhaps I’m wrong and the perm functions can 
>> actually be classified as thread-safe and I/O, that’d be great…
>
> I think that since we are currently only splitting and not taking care 
> of the actual I/O thread safety, we can move the _perm functions to 
> I/O and add a nice TODO to double-check their thread safety.

:/

I would really, really like to avoid that unless it’s clear that we can 
make them thread-safe, or that there’s a way to take the BQL in I/O 
functions to call GS functions.  But the latter still wouldn’t make the 
perm functions I/O functions.  At most, I’d sort them under common 
functions.

> I mean, if they are not thread-safe after the split, it means they are 
> not thread-safe right now either.

Yes, sorry I wasn’t clear, I think there’s a pre-existing problem that 
your series only unveils.  I don’t know whether it has implications in 
practice yet.

Hanna
Emanuele Giuseppe Esposito Nov. 17, 2021, 1:09 p.m. UTC | #5
On 17/11/2021 13:51, Hanna Reitz wrote:
> On 17.11.21 12:33, Emanuele Giuseppe Esposito wrote:
>>
>>
>> On 15/11/2021 13:48, Hanna Reitz wrote:
>>> On 25.10.21 12:17, Emanuele Giuseppe Esposito wrote:
>>>> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
>>>> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
>>>> ---
>>>>   block.c | 17 +++++++++++++++++
>>>>   1 file changed, 17 insertions(+)
>>>>
>>>> diff --git a/block.c b/block.c
>>>> index 94bff5c757..40c4729b8d 100644
>>>> --- a/block.c
>>>> +++ b/block.c
>>>
>>> [...]
>>>
>>>> @@ -2148,6 +2152,7 @@ static void bdrv_child_perm(BlockDriverState 
>>>> *bs, BlockDriverState *child_bs,
>>>>                               uint64_t *nperm, uint64_t *nshared)
>>>>   {
>>>>       assert(bs->drv && bs->drv->bdrv_child_perm);
>>>> +    assert(qemu_in_main_thread());
>>>>       bs->drv->bdrv_child_perm(bs, c, role, reopen_queue,
>>>>                                parent_perm, parent_shared,
>>>>                                nperm, nshared);
>>>
>>> (Should’ve noticed earlier, but only did now...)
>>>
>>> First, this function is indirectly called by bdrv_refresh_perms(). I 
>>> understand that all perm-related functions are classified as GS.
>>>
>>> However, bdrv_co_invalidate_cache() invokes bdrv_refresh_perms. Being 
>>> declared in block/coroutines.h, it’s an I/O function, so it mustn’t 
>>> call such a GS function. BlockDriver.bdrv_co_invalidate_cache(), 
>>> bdrv_invalidate_cache(), and blk_invalidate_cache() are also 
>>> classified as I/O functions. Perhaps all of these functions should be 
>>> classified as GS functions?  I believe their callers and their 
>>> purpose would allow for this.
>>
>> I think that the *_invalidate_cache functions are I/O.
>> First of all, test-block-iothread.c calls bdrv_invalidate_cache in 
>> test_sync_op_invalidate_cache, which is purposefully called in an 
>> iothread. So that hints that we want it as I/O.
> 
> Hm, OK, but bdrv_co_invalidate_cache() calls bdrv_refresh_perms(), which 
> is a GS function, so that shouldn’t work, right?

Ok, let's take a step back for a moment: can you tell me why the perm 
functions should be GS?

On one side, I see they are also used by I/O, as we can see above. On 
the other side, I have a feeling that permissions should only be 
modified under the BQL, but I don't have a solid argument to support 
that. So I wonder if you have a specific, stronger reason for 
classifying them as GS.

Maybe clarifying this will help us find a clean solution to this problem.

> 
>> (Small mistake I just noticed: blk_invalidate_cache has the BQL 
>> assertion even though it is rightly put in block-backend-io.h.)
>>
>>>
>>> Second, it’s called by bdrv_child_refresh_perms(), which is called by 
>>> block_crypto_amend_options_generic_luks().  This function is called 
>>> by block_crypto_co_amend_luks(), which is a BlockDriver.bdrv_co_amend 
>>> implementation, which is classified as an I/O function.
>>>
>>> Honestly, I don’t know how to fix that mess.  The best would be if we 
>>> could make the perm functions thread-safe and classify them as I/O, 
>>> but it seems to me like that’s impossible (I sure hope I’m wrong). On 
>>> the other hand, .bdrv_co_amend very much strikes me like a GS 
>>> function, but it isn’t.  I’m afraid it must work on nodes that are 
>>> not in the main context, and it launches a job, so AFAIU we 
>>> absolutely cannot run it under the BQL.
>>>
>>> It almost seems to me like we’d need a thread-safe variant of the 
>>> perm functions that’s allowed to fail when it cannot guarantee thread 
>>> safety or something.  Or perhaps I’m wrong and the perm functions can 
>>> actually be classified as thread-safe and I/O, that’d be great…
>>
>> I think that since we are currently only splitting and not taking care 
>> of the actual I/O thread safety, we can move the _perm functions to 
>> I/O and add a nice TODO to double-check their thread safety.
> 
> :/
> 
> I would really, really like to avoid that unless it’s clear that we can 
> make them thread-safe, or that there’s a way to take the BQL in I/O 
> functions to call GS functions.  But the latter still wouldn’t make the 
> perm functions I/O functions.  At most, I’d sort them under common 
> functions.
> 
>> I mean, if they are not thread-safe after the split, it means they are 
>> not thread-safe right now either.
> 
> Yes, sorry I wasn’t clear, I think there’s a pre-existing problem that 
> your series only unveils.  I don’t know whether it has implications in 
> practice yet.
> 
> Hanna
>
Hanna Czenczek Nov. 17, 2021, 1:34 p.m. UTC | #6
On 17.11.21 14:09, Emanuele Giuseppe Esposito wrote:
>
>
> On 17/11/2021 13:51, Hanna Reitz wrote:
>> On 17.11.21 12:33, Emanuele Giuseppe Esposito wrote:
>>>
>>>
>>> On 15/11/2021 13:48, Hanna Reitz wrote:
>>>> On 25.10.21 12:17, Emanuele Giuseppe Esposito wrote:
>>>>> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
>>>>> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
>>>>> ---
>>>>>   block.c | 17 +++++++++++++++++
>>>>>   1 file changed, 17 insertions(+)
>>>>>
>>>>> diff --git a/block.c b/block.c
>>>>> index 94bff5c757..40c4729b8d 100644
>>>>> --- a/block.c
>>>>> +++ b/block.c
>>>>
>>>> [...]
>>>>
>>>>> @@ -2148,6 +2152,7 @@ static void bdrv_child_perm(BlockDriverState 
>>>>> *bs, BlockDriverState *child_bs,
>>>>>                               uint64_t *nperm, uint64_t *nshared)
>>>>>   {
>>>>>       assert(bs->drv && bs->drv->bdrv_child_perm);
>>>>> +    assert(qemu_in_main_thread());
>>>>>       bs->drv->bdrv_child_perm(bs, c, role, reopen_queue,
>>>>>                                parent_perm, parent_shared,
>>>>>                                nperm, nshared);
>>>>
>>>> (Should’ve noticed earlier, but only did now...)
>>>>
>>>> First, this function is indirectly called by bdrv_refresh_perms(). 
>>>> I understand that all perm-related functions are classified as GS.
>>>>
>>>> However, bdrv_co_invalidate_cache() invokes bdrv_refresh_perms. 
>>>> Being declared in block/coroutines.h, it’s an I/O function, so it 
>>>> mustn’t call such a GS function. 
>>>> BlockDriver.bdrv_co_invalidate_cache(), bdrv_invalidate_cache(), 
>>>> and blk_invalidate_cache() are also classified as I/O functions. 
>>>> Perhaps all of these functions should be classified as GS 
>>>> functions?  I believe their callers and their purpose would allow 
>>>> for this.
>>>
>>> I think that the *_invalidate_cache functions are I/O.
>>> First of all, test-block-iothread.c calls bdrv_invalidate_cache in 
>>> test_sync_op_invalidate_cache, which is purposefully called in an 
>>> iothread. So that hints that we want it as I/O.
>>
>> Hm, OK, but bdrv_co_invalidate_cache() calls bdrv_refresh_perms(), 
>> which is a GS function, so that shouldn’t work, right?
>
> Ok, let's take a step back for a moment: can you tell me why the perm 
> functions should be GS?
>
> On one side, I see they are also used by I/O, as we can see above. On 
> the other side, I have a feeling that permissions should only be 
> modified under the BQL, but I don't have a solid argument to support 
> that. So I wonder if you have a specific, stronger reason for 
> classifying them as GS.

First, I believe permissions to be part of the block graph state, and so 
global state.  But, well, that could be dismissed as just a hunch.

Second, permissions have a transaction mechanism: you try to update them 
on every node; if one fails, all are aborted, otherwise all are 
committed.  So this is by no means an atomic operation but quite a 
drawn-out one.
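
[Editorial illustration: the try-on-every-node, abort-or-commit scheme 
described above can be sketched as a self-contained toy.  This is not 
QEMU code; every identifier here is invented for the example.]

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of a transactional permission update (invented names, not
 * QEMU code): try the update on every node; if any node rejects it,
 * abort the nodes already prepared so that nothing changes at all.
 */

typedef struct {
    int perm;        /* committed permission */
    int staged;      /* pending permission, valid while prepared */
    int max_perm;    /* this node rejects anything above max_perm */
    bool prepared;
} ToyNode;

static bool toy_perm_prepare(ToyNode *n, int perm)
{
    if (perm > n->max_perm) {
        return false;            /* this node refuses the new perm */
    }
    n->staged = perm;
    n->prepared = true;
    return true;
}

static void toy_perm_commit(ToyNode *n)
{
    n->perm = n->staged;
    n->prepared = false;
}

static void toy_perm_abort(ToyNode *n)
{
    n->prepared = false;         /* staged value is simply discarded */
}

/* Either commits on all nodes, or leaves every node's perm untouched. */
bool toy_update_perms(ToyNode *nodes, int count, int perm)
{
    for (int i = 0; i < count; i++) {
        if (!toy_perm_prepare(&nodes[i], perm)) {
            for (int j = 0; j < i; j++) {
                toy_perm_abort(&nodes[j]);
            }
            return false;
        }
    }
    for (int i = 0; i < count; i++) {
        toy_perm_commit(&nodes[i]);
    }
    return true;
}
```

The drawn-out nature is visible in the sketch: between the first 
prepare and the last commit there is a window in which another thread 
could observe (or race with) half-staged state, which is exactly why 
this wants the BQL.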

The problem with this is that I/O operations rely on permissions, e.g. 
you’ll get assertion failures when trying to write without the WRITE 
permission.  So it definitely doesn’t seem to me like something that 
can be thread-safe in the sense of cooperating nicely with other I/O 
threads.
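
[Editorial illustration of that reliance, as a toy — invented names, 
not QEMU code: an I/O-path write helper asserting the WRITE permission 
bit, mirroring the assertion failures mentioned above.]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy model (not QEMU code; all names invented): an I/O helper that
 * depends on the granted permissions.  If a permission update running
 * concurrently dropped WRITE between the check and the access, this
 * assertion is exactly the kind that would fire.
 */

#define TOY_PERM_WRITE (1u << 0)

typedef struct {
    uint32_t granted_perms;   /* permissions currently granted */
    uint8_t data;
} ToyChild;

void toy_write_byte(ToyChild *c, uint8_t byte)
{
    /* An in-flight perm update dropping WRITE here would trip this. */
    assert(c->granted_perms & TOY_PERM_WRITE);
    c->data = byte;
}
```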

Perhaps it’d be fine to do permission updates while the relevant 
subgraph is drained (i.e. blocking all other I/O threads), but I kind of 
feel like the same could be said for all (other) GS operations.  Like, 
you could probably do all kinds of graph changes while all involved 
subgraphs are drained.

Hanna
Patch

diff --git a/block.c b/block.c
index 94bff5c757..40c4729b8d 100644
--- a/block.c
+++ b/block.c
@@ -1074,6 +1074,7 @@  int refresh_total_sectors(BlockDriverState *bs, int64_t hint)
 static void bdrv_join_options(BlockDriverState *bs, QDict *options,
                               QDict *old_options)
 {
+    assert(qemu_in_main_thread());
     if (bs->drv && bs->drv->bdrv_join_options) {
         bs->drv->bdrv_join_options(options, old_options);
     } else {
@@ -1566,6 +1567,7 @@  static int bdrv_open_driver(BlockDriverState *bs, BlockDriver *drv,
 {
     Error *local_err = NULL;
     int i, ret;
+    assert(qemu_in_main_thread());
 
     bdrv_assign_node_name(bs, node_name, &local_err);
     if (local_err) {
@@ -1955,6 +1957,8 @@  static int bdrv_fill_options(QDict **options, const char *filename,
     BlockDriver *drv = NULL;
     Error *local_err = NULL;
 
+    assert(qemu_in_main_thread());
+
     /*
      * Caution: while qdict_get_try_str() is fine, getting non-string
      * types would require more care.  When @options come from
@@ -2148,6 +2152,7 @@  static void bdrv_child_perm(BlockDriverState *bs, BlockDriverState *child_bs,
                             uint64_t *nperm, uint64_t *nshared)
 {
     assert(bs->drv && bs->drv->bdrv_child_perm);
+    assert(qemu_in_main_thread());
     bs->drv->bdrv_child_perm(bs, c, role, reopen_queue,
                              parent_perm, parent_shared,
                              nperm, nshared);
@@ -2231,6 +2236,7 @@  static void bdrv_drv_set_perm_commit(void *opaque)
 {
     BlockDriverState *bs = opaque;
     uint64_t cumulative_perms, cumulative_shared_perms;
+    assert(qemu_in_main_thread());
 
     if (bs->drv->bdrv_set_perm) {
         bdrv_get_cumulative_perm(bs, &cumulative_perms,
@@ -2242,6 +2248,7 @@  static void bdrv_drv_set_perm_commit(void *opaque)
 static void bdrv_drv_set_perm_abort(void *opaque)
 {
     BlockDriverState *bs = opaque;
+    assert(qemu_in_main_thread());
 
     if (bs->drv->bdrv_abort_perm_update) {
         bs->drv->bdrv_abort_perm_update(bs);
@@ -2257,6 +2264,7 @@  static int bdrv_drv_set_perm(BlockDriverState *bs, uint64_t perm,
                              uint64_t shared_perm, Transaction *tran,
                              Error **errp)
 {
+    assert(qemu_in_main_thread());
     if (!bs->drv) {
         return 0;
     }
@@ -4221,6 +4229,7 @@  int bdrv_reopen_multiple(BlockReopenQueue *bs_queue, Error **errp)
 
     assert(qemu_get_current_aio_context() == qemu_get_aio_context());
     assert(bs_queue != NULL);
+    assert(qemu_in_main_thread());
 
     QTAILQ_FOREACH(bs_entry, bs_queue, entry) {
         ctx = bdrv_get_aio_context(bs_entry->state.bs);
@@ -4484,6 +4493,7 @@  static int bdrv_reopen_prepare(BDRVReopenState *reopen_state,
 
     assert(reopen_state != NULL);
     assert(reopen_state->bs->drv != NULL);
+    assert(qemu_in_main_thread());
     drv = reopen_state->bs->drv;
 
     /* This function and each driver's bdrv_reopen_prepare() remove
@@ -4694,6 +4704,7 @@  static void bdrv_reopen_commit(BDRVReopenState *reopen_state)
     bs = reopen_state->bs;
     drv = bs->drv;
     assert(drv != NULL);
+    assert(qemu_in_main_thread());
 
     /* If there are any driver level actions to take */
     if (drv->bdrv_reopen_commit) {
@@ -4735,6 +4746,7 @@  static void bdrv_reopen_abort(BDRVReopenState *reopen_state)
     assert(reopen_state != NULL);
     drv = reopen_state->bs->drv;
     assert(drv != NULL);
+    assert(qemu_in_main_thread());
 
     if (drv->bdrv_reopen_abort) {
         drv->bdrv_reopen_abort(reopen_state);
@@ -4748,6 +4760,7 @@  static void bdrv_close(BlockDriverState *bs)
     BdrvChild *child, *next;
 
     assert(!bs->refcnt);
+    assert(qemu_in_main_thread());
 
     bdrv_drained_begin(bs); /* complete I/O */
     bdrv_flush(bs);
@@ -6499,6 +6512,8 @@  static int bdrv_inactivate_recurse(BlockDriverState *bs)
     int ret;
     uint64_t cumulative_perms, cumulative_shared_perms;
 
+    assert(qemu_in_main_thread());
+
     if (!bs->drv) {
         return -ENOMEDIUM;
     }
@@ -7007,6 +7022,7 @@  static void bdrv_detach_aio_context(BlockDriverState *bs)
     BdrvAioNotifier *baf, *baf_tmp;
 
     assert(!bs->walking_aio_notifiers);
+    assert(qemu_in_main_thread());
     bs->walking_aio_notifiers = true;
     QLIST_FOREACH_SAFE(baf, &bs->aio_notifiers, list, baf_tmp) {
         if (baf->deleted) {
@@ -7034,6 +7050,7 @@  static void bdrv_attach_aio_context(BlockDriverState *bs,
                                     AioContext *new_context)
 {
     BdrvAioNotifier *ban, *ban_tmp;
+    assert(qemu_in_main_thread());
 
     if (bs->quiesce_counter) {
         aio_disable_external(new_context);