
[v5,2/5] xen: add bitmap to indicate per-domain state changes

Message ID 20241217142218.24129-3-jgross@suse.com (mailing list archive)
State New
Series remove libxenctrl usage from xenstored

Commit Message

Jürgen Groß Dec. 17, 2024, 2:22 p.m. UTC
Add a bitmap with one bit per possible domid indicating the respective
domain has changed its state (created, deleted, dying, crashed,
shutdown).

Registering the VIRQ_DOM_EXC event will result in setting the bits for
all existing domains and resetting all other bits.

As the usage of this bitmap is tightly coupled with the VIRQ_DOM_EXC
event, it is meant to be used only by a single consumer in the system,
just like the VIRQ_DOM_EXC event.

Resetting a bit will be done in a future patch.

This information is needed for Xenstore to keep track of all domains.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- use DOMID_FIRST_RESERVED instead of DOMID_MASK + 1 (Jan Beulich)
- use const (Jan Beulich)
- move call of domain_reset_states() into evtchn_bind_virq() (Jan Beulich)
- dynamically allocate dom_state_changed bitmap (Jan Beulich)
V3:
- use xvzalloc_array() (Jan Beulich)
- don't rename existing label (Jan Beulich)
V4:
- add __read_mostly (Jan Beulich)
- use __set_bit() (Jan Beulich)
- fix error handling in evtchn_bind_virq() (Jan Beulich)
V5:
- domain_init_states() may be called only if evtchn_bind_virq() has been
  called validly (Jan Beulich)
---
 xen/common/domain.c        | 60 ++++++++++++++++++++++++++++++++++++++
 xen/common/event_channel.c | 16 ++++++++++
 xen/include/xen/sched.h    |  3 ++
 3 files changed, 79 insertions(+)

Comments

Jan Beulich Dec. 17, 2024, 3:19 p.m. UTC | #1
On 17.12.2024 15:22, Juergen Gross wrote:
> Add a bitmap with one bit per possible domid indicating the respective
> domain has changed its state (created, deleted, dying, crashed,
> shutdown).
> 
> Registering the VIRQ_DOM_EXC event will result in setting the bits for
> all existing domains and resetting all other bits.
> 
> As the usage of this bitmap is tightly coupled with the VIRQ_DOM_EXC
> event, it is meant to be used only by a single consumer in the system,
> just like the VIRQ_DOM_EXC event.

I'm sorry, but I need to come back to this. I thought I had got convinced
that only a single entity in the system can bind this vIRQ. Yet upon
checking I can't seem to find what would guarantee this. In particular
binding a vIRQ doesn't involve any XSM check. Hence an unprivileged entity
could, on the assumption that the interested privileged entity (xenstore)
is already up and running, bind and unbind this vIRQ, just to have the
global map freed. What am I overlooking (which would likely want stating
here)?

> V5:
> - domain_init_states() may be called only if evtchn_bind_virq() has been
>   called validly (Jan Beulich)

I now recall why I had first suggested the placement later in the handling:
You're now doing the allocation with yet another lock held. It's likely not
the end of the world, but ...

> @@ -138,6 +139,60 @@ bool __read_mostly vmtrace_available;
>  
>  bool __read_mostly vpmu_is_available;
>  
> +static DEFINE_SPINLOCK(dom_state_changed_lock);
> +static unsigned long *__read_mostly dom_state_changed;
> +
> +int domain_init_states(void)
> +{
> +    const struct domain *d;
> +    int rc = -ENOMEM;
> +
> +    spin_lock(&dom_state_changed_lock);
> +
> +    if ( dom_state_changed )
> +        bitmap_zero(dom_state_changed, DOMID_FIRST_RESERVED);
> +    else
> +    {
> +        dom_state_changed = xvzalloc_array(unsigned long,
> +                                           BITS_TO_LONGS(DOMID_FIRST_RESERVED));

... already this alone wasn't nice, and could be avoided (by doing the
allocation prior to acquiring the lock, which of course complicates the
logic some).

What's perhaps less desirable is that ...

> @@ -494,6 +495,15 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
>          goto out;
>      }
>  
> +    if ( virq == VIRQ_DOM_EXC )
> +    {
> +        rc = domain_init_states();
> +        if ( rc )
> +            goto out;
> +
> +        deinit_if_err = true;
> +    }
> +
>      port = rc = evtchn_get_port(d, port);
>      if ( rc < 0 )
>      {

... the placement here additionally introduces lock nesting when really
the two locks shouldn't have any relationship.

How about giving domain_init_states() a boolean parameter, such that it
can be called twice, with the first invocation moved back up where it
was, and the second one put ...

> @@ -527,6 +537,9 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
>   out:
>      write_unlock(&d->event_lock);
>  
> +    if ( rc && deinit_if_err )
> +        domain_deinit_states();
> +
>      return rc;
>  }

... down here, not doing any allocation at all (only the clearing), and
hence eliminating the need to deal with its failure? (Alternatively
there could of course be a split into an init and a reset function.)

There of course is the chance of races with such an approach. I'd like
to note though that with the placement of the call in the hunk above
there's a minor race, too (against ...

> @@ -730,6 +743,9 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
>          struct vcpu *v;
>          unsigned long flags;
>  
> +        if ( chn1->u.virq == VIRQ_DOM_EXC )
> +            domain_deinit_states();

... this and the same remote vCPU then immediately binding the vIRQ
again). Hence yet another alternative would appear to be to drop the
new global lock and use d->event_lock for synchronization instead
(provided - see above - that only a single entity can actually set up
all of this). That would pretty much want to have the allocation kept
with the lock already held (which isn't nice, but as said is perhaps
tolerable), but would at least eliminate the undesirable lock nesting.

Re-use of the domain's event lock is at least somewhat justified by
the bit array being tied to VIRQ_DOM_EXC.
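
For illustration, setting a bit on behalf of another domain would then look
roughly like the below (untested, and with dom_exc_handler merely standing
in for however the binding domain would be tracked):

    /* Hypothetical tracking of the domain having VIRQ_DOM_EXC bound. */
    static struct domain *__read_mostly dom_exc_handler;

    static void domain_changed_state(const struct domain *d)
    {
        struct domain *handler = dom_exc_handler;

        /* Lifetime/reference issues of "handler" left aside in this sketch. */
        if ( !handler )
            return;

        write_lock(&handler->event_lock);

        if ( dom_state_changed )
            __set_bit(d->domain_id, dom_state_changed);

        write_unlock(&handler->event_lock);
    }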

Thoughts?

Jan
Jürgen Groß Dec. 17, 2024, 3:55 p.m. UTC | #2
On 17.12.24 16:19, Jan Beulich wrote:
> On 17.12.2024 15:22, Juergen Gross wrote:
>> Add a bitmap with one bit per possible domid indicating the respective
>> domain has changed its state (created, deleted, dying, crashed,
>> shutdown).
>>
>> Registering the VIRQ_DOM_EXC event will result in setting the bits for
>> all existing domains and resetting all other bits.
>>
>> As the usage of this bitmap is tightly coupled with the VIRQ_DOM_EXC
>> event, it is meant to be used only by a single consumer in the system,
>> just like the VIRQ_DOM_EXC event.
> 
> I'm sorry, but I need to come back to this. I thought I had got convinced
> that only a single entity in the system can bind this vIRQ. Yet upon
> checking I can't seem to find what would guarantee this. In particular
> binding a vIRQ doesn't involve any XSM check. Hence an unprivileged entity
> could, on the assumption that the interested privileged entity (xenstore)
> is already up and running, bind and unbind this vIRQ, just to have the
> global map freed. What am I overlooking (which would likely want stating
> here)?

I think you are not overlooking anything.

I guess this can easily be handled by checking that the VIRQ_DOM_EXC handling
domain is the calling one in domain_[de]init_states(). Note that global virqs
are only ever sent to vcpu[0] of the handling domain, so rebinding the event
to another vcpu is possible, but doesn't make sense.
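
A minimal sketch of such a check (untested; assuming some accessor for the
handling domain exists, here a hypothetical get_global_virq_handler()):

    /* Both domain_init_states() and domain_deinit_states() would bail early
     * unless the caller is the VIRQ_DOM_EXC handling domain. */
    static int dom_exc_handler_check(void)
    {
        return current->domain == get_global_virq_handler(VIRQ_DOM_EXC)
               ? 0 : -EPERM;
    }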

> 
>> V5:
>> - domain_init_states() may be called only if evtchn_bind_virq() has been
>>    called validly (Jan Beulich)
> 
> I now recall why I had first suggested the placement later in the handling:
> You're now doing the allocation with yet another lock held. It's likely not
> the end of the world, but ...
> 
>> @@ -138,6 +139,60 @@ bool __read_mostly vmtrace_available;
>>   
>>   bool __read_mostly vpmu_is_available;
>>   
>> +static DEFINE_SPINLOCK(dom_state_changed_lock);
>> +static unsigned long *__read_mostly dom_state_changed;
>> +
>> +int domain_init_states(void)
>> +{
>> +    const struct domain *d;
>> +    int rc = -ENOMEM;
>> +
>> +    spin_lock(&dom_state_changed_lock);
>> +
>> +    if ( dom_state_changed )
>> +        bitmap_zero(dom_state_changed, DOMID_FIRST_RESERVED);
>> +    else
>> +    {
>> +        dom_state_changed = xvzalloc_array(unsigned long,
>> +                                           BITS_TO_LONGS(DOMID_FIRST_RESERVED));
> 
> ... already this alone wasn't nice, and could be avoided (by doing the
> allocation prior to acquiring the lock, which of course complicates the
> logic some).
> 
> What's perhaps less desirable is that ...
> 
>> @@ -494,6 +495,15 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
>>           goto out;
>>       }
>>   
>> +    if ( virq == VIRQ_DOM_EXC )
>> +    {
>> +        rc = domain_init_states();
>> +        if ( rc )
>> +            goto out;
>> +
>> +        deinit_if_err = true;
>> +    }
>> +
>>       port = rc = evtchn_get_port(d, port);
>>       if ( rc < 0 )
>>       {
> 
> ... the placement here additionally introduces lock nesting when really
> the two locks shouldn't have any relationship.
> 
> How about giving domain_init_states() a boolean parameter, such that it
> can be called twice, with the first invocation moved back up where it
> was, and the second one put ...
> 
>> @@ -527,6 +537,9 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
>>    out:
>>       write_unlock(&d->event_lock);
>>   
>> +    if ( rc && deinit_if_err )
>> +        domain_deinit_states();
>> +
>>       return rc;
>>   }
> 
> ... down here, not doing any allocation at all (only the clearing), and
> hence eliminating the need to deal with its failure? (Alternatively
> there could of course be a split into an init and a reset function.)
> 
> There of course is the chance of races with such an approach. I'd like
> to note though that with the placement of the call in the hunk above
> there's a minor race, too (against ...
> 
>> @@ -730,6 +743,9 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
>>           struct vcpu *v;
>>           unsigned long flags;
>>   
>> +        if ( chn1->u.virq == VIRQ_DOM_EXC )
>> +            domain_deinit_states();
> 
> ... this and the same remote vCPU then immediately binding the vIRQ
> again). Hence yet another alternative would appear to be to drop the
> new global lock and use d->event_lock for synchronization instead
> (provided - see above - that only a single entity can actually set up
> all of this). That would pretty much want to have the allocation kept
> with the lock already held (which isn't nice, but as said is perhaps
> tolerable), but would at least eliminate the undesirable lock nesting.
> 
> Re-use of the domain's event lock is at least somewhat justified by
> the bit array being tied to VIRQ_DOM_EXC.
> 
> Thoughts?

With my suggestion above I think there is no race, as only the domain handling
VIRQ_DOM_EXC could alloc/dealloc dom_state_changed.

Using d->event_lock for synchronization is not a nice option IMO, as it would
require taking the event_lock of the domain handling VIRQ_DOM_EXC when trying
to set a bit for another domain changing state.


Juergen
Jan Beulich Dec. 17, 2024, 4:12 p.m. UTC | #3
On 17.12.2024 16:55, Jürgen Groß wrote:
> On 17.12.24 16:19, Jan Beulich wrote:
>> On 17.12.2024 15:22, Juergen Gross wrote:
>>> Add a bitmap with one bit per possible domid indicating the respective
>>> domain has changed its state (created, deleted, dying, crashed,
>>> shutdown).
>>>
>>> Registering the VIRQ_DOM_EXC event will result in setting the bits for
>>> all existing domains and resetting all other bits.
>>>
>>> As the usage of this bitmap is tightly coupled with the VIRQ_DOM_EXC
>>> event, it is meant to be used only by a single consumer in the system,
>>> just like the VIRQ_DOM_EXC event.
>>
>> I'm sorry, but I need to come back to this. I thought I had got convinced
>> that only a single entity in the system can bind this vIRQ. Yet upon
>> checking I can't seem to find what would guarantee this. In particular
>> binding a vIRQ doesn't involve any XSM check. Hence an unprivileged entity
>> could, on the assumption that the interested privileged entity (xenstore)
>> is already up and running, bind and unbind this vIRQ, just to have the
>> global map freed. What am I overlooking (which would likely want stating
>> here)?
> 
> I think you are not overlooking anything.
> 
> I guess this can easily be handled by checking that the VIRQ_DOM_EXC handling
> domain is the calling one in domain_[de]init_states(). Note that global virqs
> are only ever sent to vcpu[0] of the handling domain, so rebinding the event
> to another vcpu is possible, but doesn't make sense.

No, that's precluded by

    if ( virq_is_global(virq) && (vcpu != 0) )
        return -EINVAL;

afaict. That doesn't, however, preclude multiple vCPU-s from trying to bind
the vIRQ to vCPU 0.

>>> V5:
>>> - domain_init_states() may be called only if evtchn_bind_virq() has been
>>>    called validly (Jan Beulich)
>>
>> I now recall why I had first suggested the placement later in the handling:
>> You're now doing the allocation with yet another lock held. It's likely not
>> the end of the world, but ...
>>
>>> @@ -138,6 +139,60 @@ bool __read_mostly vmtrace_available;
>>>   
>>>   bool __read_mostly vpmu_is_available;
>>>   
>>> +static DEFINE_SPINLOCK(dom_state_changed_lock);
>>> +static unsigned long *__read_mostly dom_state_changed;
>>> +
>>> +int domain_init_states(void)
>>> +{
>>> +    const struct domain *d;
>>> +    int rc = -ENOMEM;
>>> +
>>> +    spin_lock(&dom_state_changed_lock);
>>> +
>>> +    if ( dom_state_changed )
>>> +        bitmap_zero(dom_state_changed, DOMID_FIRST_RESERVED);
>>> +    else
>>> +    {
>>> +        dom_state_changed = xvzalloc_array(unsigned long,
>>> +                                           BITS_TO_LONGS(DOMID_FIRST_RESERVED));
>>
>> ... already this alone wasn't nice, and could be avoided (by doing the
>> allocation prior to acquiring the lock, which of course complicates the
>> logic some).
>>
>> What's perhaps less desirable is that ...
>>
>>> @@ -494,6 +495,15 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
>>>           goto out;
>>>       }
>>>   
>>> +    if ( virq == VIRQ_DOM_EXC )
>>> +    {
>>> +        rc = domain_init_states();
>>> +        if ( rc )
>>> +            goto out;
>>> +
>>> +        deinit_if_err = true;
>>> +    }
>>> +
>>>       port = rc = evtchn_get_port(d, port);
>>>       if ( rc < 0 )
>>>       {
>>
>> ... the placement here additionally introduces lock nesting when really
>> the two locks shouldn't have any relationship.
>>
>> How about giving domain_init_states() a boolean parameter, such that it
>> can be called twice, with the first invocation moved back up where it
>> was, and the second one put ...
>>
>>> @@ -527,6 +537,9 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
>>>    out:
>>>       write_unlock(&d->event_lock);
>>>   
>>> +    if ( rc && deinit_if_err )
>>> +        domain_deinit_states();
>>> +
>>>       return rc;
>>>   }
>>
>> ... down here, not doing any allocation at all (only the clearing), and
>> hence eliminating the need to deal with its failure? (Alternatively
>> there could of course be a split into an init and a reset function.)
>>
>> There of course is the chance of races with such an approach. I'd like
>> to note though that with the placement of the call in the hunk above
>> there's a minor race, too (against ...
>>
>>> @@ -730,6 +743,9 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
>>>           struct vcpu *v;
>>>           unsigned long flags;
>>>   
>>> +        if ( chn1->u.virq == VIRQ_DOM_EXC )
>>> +            domain_deinit_states();
>>
>> ... this and the same remote vCPU then immediately binding the vIRQ
>> again). Hence yet another alternative would appear to be to drop the
>> new global lock and use d->event_lock for synchronization instead
>> (provided - see above - that only a single entity can actually set up
>> all of this). That would pretty much want to have the allocation kept
>> with the lock already held (which isn't nice, but as said is perhaps
>> tolerable), but would at least eliminate the undesirable lock nesting.
>>
>> Re-use of the domain's event lock is at least somewhat justified by
>> the bit array being tied to VIRQ_DOM_EXC.
>>
>> Thoughts?
> 
> With my suggestion above I think there is no race, as only the domain handling
> VIRQ_DOM_EXC could alloc/dealloc dom_state_changed.

Yet still multiple vCPU-s therein could try to do so in parallel.

> Using d->event_lock for synchronization is not a nice option IMO, as it would
> require taking the event_lock of the domain handling VIRQ_DOM_EXC when trying
> to set a bit for another domain changing state.

Well, yes, it's that domain's data that's to be modified, after all.

Jan
Jürgen Groß Dec. 18, 2024, 7:15 a.m. UTC | #4
On 17.12.24 17:12, Jan Beulich wrote:
> On 17.12.2024 16:55, Jürgen Groß wrote:
>> On 17.12.24 16:19, Jan Beulich wrote:
>>> On 17.12.2024 15:22, Juergen Gross wrote:
>>>> Add a bitmap with one bit per possible domid indicating the respective
>>>> domain has changed its state (created, deleted, dying, crashed,
>>>> shutdown).
>>>>
>>>> Registering the VIRQ_DOM_EXC event will result in setting the bits for
>>>> all existing domains and resetting all other bits.
>>>>
>>>> As the usage of this bitmap is tightly coupled with the VIRQ_DOM_EXC
>>>> event, it is meant to be used only by a single consumer in the system,
>>>> just like the VIRQ_DOM_EXC event.
>>>
>>> I'm sorry, but I need to come back to this. I thought I had got convinced
>>> that only a single entity in the system can bind this vIRQ. Yet upon
>>> checking I can't seem to find what would guarantee this. In particular
>>> binding a vIRQ doesn't involve any XSM check. Hence an unprivileged entity
>>> could, on the assumption that the interested privileged entity (xenstore)
>>> is already up and running, bind and unbind this vIRQ, just to have the
>>> global map freed. What am I overlooking (which would likely want stating
>>> here)?
>>
>> I think you are not overlooking anything.
>>
>> I guess this can easily be handled by checking that the VIRQ_DOM_EXC handling
>> domain is the calling one in domain_[de]init_states(). Note that global virqs
>> are only ever sent to vcpu[0] of the handling domain, so rebinding the event
>> to another vcpu is possible, but doesn't make sense.
> 
> No, that's precluded by
> 
>      if ( virq_is_global(virq) && (vcpu != 0) )
>          return -EINVAL;
> 
> afaict. That doesn't, however, preclude multiple vCPU-s from trying to bind
> the vIRQ to vCPU 0.

I let myself be fooled by the ability to use EVTCHNOP_bind_vcpu for a global
virq. While it is possible to send the event to another vcpu, it is still
vcpu[0] which is used for the bookkeeping.

> 
>>>> V5:
>>>> - domain_init_states() may be called only if evtchn_bind_virq() has been
>>>>     called validly (Jan Beulich)
>>>
>>> I now recall why I had first suggested the placement later in the handling:
>>> You're now doing the allocation with yet another lock held. It's likely not
>>> the end of the world, but ...
>>>
>>>> @@ -138,6 +139,60 @@ bool __read_mostly vmtrace_available;
>>>>    
>>>>    bool __read_mostly vpmu_is_available;
>>>>    
>>>> +static DEFINE_SPINLOCK(dom_state_changed_lock);
>>>> +static unsigned long *__read_mostly dom_state_changed;
>>>> +
>>>> +int domain_init_states(void)
>>>> +{
>>>> +    const struct domain *d;
>>>> +    int rc = -ENOMEM;
>>>> +
>>>> +    spin_lock(&dom_state_changed_lock);
>>>> +
>>>> +    if ( dom_state_changed )
>>>> +        bitmap_zero(dom_state_changed, DOMID_FIRST_RESERVED);
>>>> +    else
>>>> +    {
>>>> +        dom_state_changed = xvzalloc_array(unsigned long,
>>>> +                                           BITS_TO_LONGS(DOMID_FIRST_RESERVED));
>>>
>>> ... already this alone wasn't nice, and could be avoided (by doing the
>>> allocation prior to acquiring the lock, which of course complicates the
>>> logic some).
>>>
>>> What's perhaps less desirable is that ...
>>>
>>>> @@ -494,6 +495,15 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
>>>>            goto out;
>>>>        }
>>>>    
>>>> +    if ( virq == VIRQ_DOM_EXC )
>>>> +    {
>>>> +        rc = domain_init_states();
>>>> +        if ( rc )
>>>> +            goto out;
>>>> +
>>>> +        deinit_if_err = true;
>>>> +    }
>>>> +
>>>>        port = rc = evtchn_get_port(d, port);
>>>>        if ( rc < 0 )
>>>>        {
>>>
>>> ... the placement here additionally introduces lock nesting when really
>>> the two locks shouldn't have any relationship.
>>>
>>> How about giving domain_init_states() a boolean parameter, such that it
>>> can be called twice, with the first invocation moved back up where it
>>> was, and the second one put ...
>>>
>>>> @@ -527,6 +537,9 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
>>>>     out:
>>>>        write_unlock(&d->event_lock);
>>>>    
>>>> +    if ( rc && deinit_if_err )
>>>> +        domain_deinit_states();
>>>> +
>>>>        return rc;
>>>>    }
>>>
>>> ... down here, not doing any allocation at all (only the clearing), and
>>> hence eliminating the need to deal with its failure? (Alternatively
>>> there could of course be a split into an init and a reset function.)
>>>
>>> There of course is the chance of races with such an approach. I'd like
>>> to note though that with the placement of the call in the hunk above
>>> there's a minor race, too (against ...
>>>
>>>> @@ -730,6 +743,9 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
>>>>            struct vcpu *v;
>>>>            unsigned long flags;
>>>>    
>>>> +        if ( chn1->u.virq == VIRQ_DOM_EXC )
>>>> +            domain_deinit_states();
>>>
>>> ... this and the same remote vCPU then immediately binding the vIRQ
>>> again). Hence yet another alternative would appear to be to drop the
>>> new global lock and use d->event_lock for synchronization instead
>>> (provided - see above - that only a single entity can actually set up
>>> all of this). That would pretty much want to have the allocation kept
>>> with the lock already held (which isn't nice, but as said is perhaps
>>> tolerable), but would at least eliminate the undesirable lock nesting.
>>>
>>> Re-use of the domain's event lock is at least somewhat justified by
>>> the bit array being tied to VIRQ_DOM_EXC.
>>>
>>> Thoughts?
>>
>> With my suggestion above I think there is no race, as only the domain handling
>> VIRQ_DOM_EXC could alloc/dealloc dom_state_changed.
> 
> Yet still multiple vCPU-s therein could try to do so in parallel.

But isn't this again about the need to trust other processes within the
domain which have the right to consume the virq?

In the end it doesn't matter whether there is such a race or not. Some
process in that domain having the power to do event channel operations could
simply close the event channel. So it IS really about trust.

> 
>> Using d->event_lock for synchronization is not a nice option IMO, as it would
>> require taking the event_lock of the domain handling VIRQ_DOM_EXC when trying
>> to set a bit for another domain changing state.
> 
> Well, yes, it's that domain's data that's to be modified, after all.

True, but using d->event_lock would probably increase lock contention, as this
lock is used much more often than the new lock introduced by my patch.


Juergen
Jan Beulich Dec. 19, 2024, 8:01 a.m. UTC | #5
On 18.12.2024 08:15, Jürgen Groß wrote:
> On 17.12.24 17:12, Jan Beulich wrote:
>> On 17.12.2024 16:55, Jürgen Groß wrote:
>>> On 17.12.24 16:19, Jan Beulich wrote:
>>>> On 17.12.2024 15:22, Juergen Gross wrote:
>>>>> Add a bitmap with one bit per possible domid indicating the respective
>>>>> domain has changed its state (created, deleted, dying, crashed,
>>>>> shutdown).
>>>>>
>>>>> Registering the VIRQ_DOM_EXC event will result in setting the bits for
>>>>> all existing domains and resetting all other bits.
>>>>>
>>>>> As the usage of this bitmap is tightly coupled with the VIRQ_DOM_EXC
>>>>> event, it is meant to be used only by a single consumer in the system,
>>>>> just like the VIRQ_DOM_EXC event.
>>>>
>>>> I'm sorry, but I need to come back to this. I thought I had got convinced
>>>> that only a single entity in the system can bind this vIRQ. Yet upon
>>>> checking I can't seem to find what would guarantee this. In particular
>>>> binding a vIRQ doesn't involve any XSM check. Hence an unprivileged entity
>>>> could, on the assumption that the interested privileged entity (xenstore)
>>>> is already up and running, bind and unbind this vIRQ, just to have the
>>>> global map freed. What am I overlooking (which would likely want stating
>>>> here)?
>>>
>>> I think you are not overlooking anything.
>>>
>>> I guess this can easily be handled by checking that the VIRQ_DOM_EXC handling
>>> domain is the calling one in domain_[de]init_states(). Note that global virqs
>>> are only ever sent to vcpu[0] of the handling domain, so rebinding the event
>>> to another vcpu is possible, but doesn't make sense.
>>
>> No, that's precluded by
>>
>>      if ( virq_is_global(virq) && (vcpu != 0) )
>>          return -EINVAL;
>>
>> afaict. That doesn't, however, preclude multiple vCPU-s from trying to bind
>> the vIRQ to vCPU 0.
> 
> I let myself be fooled by the ability to use EVTCHNOP_bind_vcpu for a global
> virq. While it is possible to send the event to another vcpu, it is still
> vcpu[0] which is used for the bookkeeping.
> 
>>
>>>>> V5:
>>>>> - domain_init_states() may be called only if evtchn_bind_virq() has been
>>>>>     called validly (Jan Beulich)
>>>>
>>>> I now recall why I had first suggested the placement later in the handling:
>>>> You're now doing the allocation with yet another lock held. It's likely not
>>>> the end of the world, but ...
>>>>
>>>>> @@ -138,6 +139,60 @@ bool __read_mostly vmtrace_available;
>>>>>    
>>>>>    bool __read_mostly vpmu_is_available;
>>>>>    
>>>>> +static DEFINE_SPINLOCK(dom_state_changed_lock);
>>>>> +static unsigned long *__read_mostly dom_state_changed;
>>>>> +
>>>>> +int domain_init_states(void)
>>>>> +{
>>>>> +    const struct domain *d;
>>>>> +    int rc = -ENOMEM;
>>>>> +
>>>>> +    spin_lock(&dom_state_changed_lock);
>>>>> +
>>>>> +    if ( dom_state_changed )
>>>>> +        bitmap_zero(dom_state_changed, DOMID_FIRST_RESERVED);
>>>>> +    else
>>>>> +    {
>>>>> +        dom_state_changed = xvzalloc_array(unsigned long,
>>>>> +                                           BITS_TO_LONGS(DOMID_FIRST_RESERVED));
>>>>
>>>> ... already this alone wasn't nice, and could be avoided (by doing the
>>>> allocation prior to acquiring the lock, which of course complicates the
>>>> logic some).
>>>>
>>>> What's perhaps less desirable is that ...
>>>>
>>>>> @@ -494,6 +495,15 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
>>>>>            goto out;
>>>>>        }
>>>>>    
>>>>> +    if ( virq == VIRQ_DOM_EXC )
>>>>> +    {
>>>>> +        rc = domain_init_states();
>>>>> +        if ( rc )
>>>>> +            goto out;
>>>>> +
>>>>> +        deinit_if_err = true;
>>>>> +    }
>>>>> +
>>>>>        port = rc = evtchn_get_port(d, port);
>>>>>        if ( rc < 0 )
>>>>>        {
>>>>
>>>> ... the placement here additionally introduces lock nesting when really
>>>> the two locks shouldn't have any relationship.
>>>>
>>>> How about giving domain_init_states() a boolean parameter, such that it
>>>> can be called twice, with the first invocation moved back up where it
>>>> was, and the second one put ...
>>>>
>>>>> @@ -527,6 +537,9 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
>>>>>     out:
>>>>>        write_unlock(&d->event_lock);
>>>>>    
>>>>> +    if ( rc && deinit_if_err )
>>>>> +        domain_deinit_states();
>>>>> +
>>>>>        return rc;
>>>>>    }
>>>>
>>>> ... down here, not doing any allocation at all (only the clearing), and
>>>> hence eliminating the need to deal with its failure? (Alternatively
>>>> there could of course be a split into an init and a reset function.)
>>>>
>>>> There of course is the chance of races with such an approach. I'd like
>>>> to note though that with the placement of the call in the hunk above
>>>> there's a minor race, too (against ...
>>>>
>>>>> @@ -730,6 +743,9 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
>>>>>            struct vcpu *v;
>>>>>            unsigned long flags;
>>>>>    
>>>>> +        if ( chn1->u.virq == VIRQ_DOM_EXC )
>>>>> +            domain_deinit_states();
>>>>
>>>> ... this and the same remote vCPU then immediately binding the vIRQ
>>>> again). Hence yet another alternative would appear to be to drop the
>>>> new global lock and use d->event_lock for synchronization instead
>>>> (provided - see above - that only a single entity can actually set up
>>>> all of this). That would pretty much want to have the allocation kept
>>>> with the lock already held (which isn't nice, but as said is perhaps
>>>> tolerable), but would at least eliminate the undesirable lock nesting.
>>>>
>>>> Re-use of the domain's event lock is at least somewhat justified by
>>>> the bit array being tied to VIRQ_DOM_EXC.
>>>>
>>>> Thoughts?
>>>
>>> With my suggestion above I think there is no race, as only the domain handling
>>> VIRQ_DOM_EXC could alloc/dealloc dom_state_changed.
>>
>> Yet still multiple vCPU-s therein could try to do so in parallel.
> 
> But isn't this again about the need to trust other processes within the
> domain which have the right to consume the virq?
> 
> In the end it doesn't matter whether there is such a race or not. Some
> process in that domain having the power to do event channel operations could
> simply close the event channel. So it IS really about trust.

Well. What we do in Xen should be correct without regard to what a guest might
be doing. And it should be safe against any not-fully-privileged entity in the
system. Hence why I think such a race needs dealing with correctly, no matter
that it's not a sane thing to do for a guest.

>>> Using d->event_lock for synchronization is not a nice option IMO, as it would
>>> require taking the event_lock of the domain handling VIRQ_DOM_EXC when trying
>>> to set a bit for another domain changing state.
>>
>> Well, yes, it's that domain's data that's to be modified, after all.
> 
> True, but using d->event_lock would probably increase lock contention, as this
> lock is used much more often than the new lock introduced by my patch.

On a system with extremely heavy domain creation / teardown activity there may
be an increase in contention, yes. Whether that's a price worth paying to avoid
introducing an otherwise unnecessary relationship between two locks is
precisely what we're trying to determine in this discussion. If the two of us
are taking opposite positions here, we'll simply need a 3rd view.

Jan

Patch

diff --git a/xen/common/domain.c b/xen/common/domain.c
index e33a0a5a21..87633b1f7b 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -34,6 +34,7 @@ 
 #include <xen/xenoprof.h>
 #include <xen/irq.h>
 #include <xen/argo.h>
+#include <xen/xvmalloc.h>
 #include <asm/p2m.h>
 #include <asm/processor.h>
 #include <public/sched.h>
@@ -138,6 +139,60 @@  bool __read_mostly vmtrace_available;
 
 bool __read_mostly vpmu_is_available;
 
+static DEFINE_SPINLOCK(dom_state_changed_lock);
+static unsigned long *__read_mostly dom_state_changed;
+
+int domain_init_states(void)
+{
+    const struct domain *d;
+    int rc = -ENOMEM;
+
+    spin_lock(&dom_state_changed_lock);
+
+    if ( dom_state_changed )
+        bitmap_zero(dom_state_changed, DOMID_FIRST_RESERVED);
+    else
+    {
+        dom_state_changed = xvzalloc_array(unsigned long,
+                                           BITS_TO_LONGS(DOMID_FIRST_RESERVED));
+        if ( !dom_state_changed )
+            goto unlock;
+    }
+
+    rcu_read_lock(&domlist_read_lock);
+
+    for_each_domain ( d )
+        __set_bit(d->domain_id, dom_state_changed);
+
+    rcu_read_unlock(&domlist_read_lock);
+
+    rc = 0;
+
+ unlock:
+    spin_unlock(&dom_state_changed_lock);
+
+    return rc;
+}
+
+void domain_deinit_states(void)
+{
+    spin_lock(&dom_state_changed_lock);
+
+    XVFREE(dom_state_changed);
+
+    spin_unlock(&dom_state_changed_lock);
+}
+
+static void domain_changed_state(const struct domain *d)
+{
+    spin_lock(&dom_state_changed_lock);
+
+    if ( dom_state_changed )
+        __set_bit(d->domain_id, dom_state_changed);
+
+    spin_unlock(&dom_state_changed_lock);
+}
+
 static void __domain_finalise_shutdown(struct domain *d)
 {
     struct vcpu *v;
@@ -152,6 +207,7 @@  static void __domain_finalise_shutdown(struct domain *d)
             return;
 
     d->is_shut_down = 1;
+    domain_changed_state(d);
     if ( (d->shutdown_code == SHUTDOWN_suspend) && d->suspend_evtchn )
         evtchn_send(d, d->suspend_evtchn);
     else
@@ -839,6 +895,7 @@  struct domain *domain_create(domid_t domid,
      */
     domlist_insert(d);
 
+    domain_changed_state(d);
     memcpy(d->handle, config->handle, sizeof(d->handle));
 
     return d;
@@ -1104,6 +1161,7 @@  int domain_kill(struct domain *d)
         /* Mem event cleanup has to go here because the rings 
          * have to be put before we call put_domain. */
         vm_event_cleanup(d);
+        domain_changed_state(d);
         put_domain(d);
         send_global_virq(VIRQ_DOM_EXC);
         /* fallthrough */
@@ -1293,6 +1351,8 @@  static void cf_check complete_domain_destroy(struct rcu_head *head)
 
     xfree(d->vcpu);
 
+    domain_changed_state(d);
+
     _domain_destroy(d);
 
     send_global_virq(VIRQ_DOM_EXC);
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 8db2ca4ba2..aa947efba7 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -469,6 +469,7 @@  int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
     struct domain *d = current->domain;
     int            virq = bind->virq, vcpu = bind->vcpu;
     int            rc = 0;
+    bool           deinit_if_err = false;
 
     if ( (virq < 0) || (virq >= ARRAY_SIZE(v->virq_to_evtchn)) )
         return -EINVAL;
@@ -494,6 +495,15 @@  int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
         goto out;
     }
 
+    if ( virq == VIRQ_DOM_EXC )
+    {
+        rc = domain_init_states();
+        if ( rc )
+            goto out;
+
+        deinit_if_err = true;
+    }
+
     port = rc = evtchn_get_port(d, port);
     if ( rc < 0 )
     {
@@ -527,6 +537,9 @@  int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
  out:
     write_unlock(&d->event_lock);
 
+    if ( rc && deinit_if_err )
+        domain_deinit_states();
+
     return rc;
 }
 
@@ -730,6 +743,9 @@  int evtchn_close(struct domain *d1, int port1, bool guest)
         struct vcpu *v;
         unsigned long flags;
 
+        if ( chn1->u.virq == VIRQ_DOM_EXC )
+            domain_deinit_states();
+
         v = d1->vcpu[virq_is_global(chn1->u.virq) ? 0 : chn1->notify_vcpu_id];
 
         write_lock_irqsave(&v->virq_lock, flags);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 711668e028..16684bbaf9 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -800,6 +800,9 @@  void domain_resume(struct domain *d);
 
 int domain_soft_reset(struct domain *d, bool resuming);
 
+int domain_init_states(void);
+void domain_deinit_states(void);
+
 int vcpu_start_shutdown_deferral(struct vcpu *v);
 void vcpu_end_shutdown_deferral(struct vcpu *v);