Message ID | 20190723092056.15045-2-jgross@suse.com (mailing list archive)
---|---
State | New, archived
Series | xen: fix/enhance temporary vcpu pinning
On Tue, 2019-07-23 at 11:20 +0200, Juergen Gross wrote:
> Commit 0763cd2687897b55e7 ("xen/sched: don't disable scheduler on
> cpus during suspend") removed a lock in restore_vcpu_affinity() which
> needs to stay: cpumask_scratch_cpu() must be protected by the
> scheduler lock.
>
And indeed I recall that the thing looked weird, and bringing it up...
but then I also managed to convince myself that it wasn't a problem,
which apparently it is. :-(

Sorry for having overlooked that.

> restore_vcpu_affinity() is being called by thaw_domains(), so
> with multiple domains in the system another domain might already be
> running and the scheduler might make use of cpumask_scratch_cpu()
> already.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Thanks and Regards
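The race acknowledged above is the classic shared-scratch-buffer pattern: cpumask_scratch_cpu() hands every scheduler path on a given CPU the same per-CPU mask, so the compute-then-consume sequence must stay under the scheduler lock. A minimal standalone analogue in plain C with pthreads (illustrative only, not Xen code; pick_cpu() and the bit operations merely stand in for cpumask_and()/cpumask_any()):

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t sched_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for the shared per-CPU cpumask_scratch_cpu() buffer. */
static unsigned long scratch_mask;

/* Compute into the shared scratch mask and consume it under the lock,
 * mirroring the cpumask_and()/cpumask_any() sequence in the patch. */
static unsigned long pick_cpu(unsigned long hard_affinity,
                              unsigned long pool_mask)
{
    unsigned long cpu_bit;

    pthread_mutex_lock(&sched_lock);
    scratch_mask = hard_affinity & pool_mask;   /* cpumask_and()  */
    cpu_bit = scratch_mask & -scratch_mask;     /* "any" set bit  */
    pthread_mutex_unlock(&sched_lock);

    return cpu_bit;
}

/* Each thread plays the role of one domain being thawed. */
static void *thaw_domain(void *arg)
{
    printf("picked cpu bit %#lx\n", pick_cpu((unsigned long)arg, 0xful));
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, thaw_domain, (void *)0x6);
    pthread_create(&t2, NULL, thaw_domain, (void *)0xc);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```

Without the lock, the second thread can overwrite scratch_mask between the "and" and the pick, which is exactly the overlap restore_vcpu_affinity() was exposed to while another domain was already running.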
On 23.07.2019 11:20, Juergen Gross wrote:
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -708,6 +708,8 @@ void restore_vcpu_affinity(struct domain *d)
>           * set v->processor of each of their vCPUs to something that will
>           * make sense for the scheduler of the cpupool in which they are in.
>           */
> +        lock = vcpu_schedule_lock_irq(v);
> +
>          cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
>                      cpupool_domain_cpumask(d));
>          if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
> @@ -731,6 +733,9 @@ void restore_vcpu_affinity(struct domain *d)
>
>          v->processor = cpumask_any(cpumask_scratch_cpu(cpu));
>
> +        spin_unlock_irq(lock);
> +
> +	/* v->processor might have changed, so reacquire the lock. */

Whoever commits this (perhaps me, later today) will want to replace
the hard tab here.

Jan
On 23.07.19 14:08, Jan Beulich wrote:
> On 23.07.2019 11:20, Juergen Gross wrote:
>> --- a/xen/common/schedule.c
>> +++ b/xen/common/schedule.c
>> @@ -708,6 +708,8 @@ void restore_vcpu_affinity(struct domain *d)
>>           * set v->processor of each of their vCPUs to something that will
>>           * make sense for the scheduler of the cpupool in which they are in.
>>           */
>> +        lock = vcpu_schedule_lock_irq(v);
>> +
>>          cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
>>                      cpupool_domain_cpumask(d));
>>          if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
>> @@ -731,6 +733,9 @@ void restore_vcpu_affinity(struct domain *d)
>>
>>          v->processor = cpumask_any(cpumask_scratch_cpu(cpu));
>>
>> +        spin_unlock_irq(lock);
>> +
>> +	/* v->processor might have changed, so reacquire the lock. */
>
> Whoever commits this (perhaps me, later today) will want to replace
> the hard tab here.

Oh, I'm sorry for that!

We really want checkpatch in Xen!

Juergen
On 23/07/2019 13:08, Jan Beulich wrote:
> On 23.07.2019 11:20, Juergen Gross wrote:
>> --- a/xen/common/schedule.c
>> +++ b/xen/common/schedule.c
>> @@ -708,6 +708,8 @@ void restore_vcpu_affinity(struct domain *d)
>>           * set v->processor of each of their vCPUs to something that will
>>           * make sense for the scheduler of the cpupool in which they are in.
>>           */
>> +        lock = vcpu_schedule_lock_irq(v);
>> +
>>          cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
>>                      cpupool_domain_cpumask(d));
>>          if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
>> @@ -731,6 +733,9 @@ void restore_vcpu_affinity(struct domain *d)
>>
>>          v->processor = cpumask_any(cpumask_scratch_cpu(cpu));
>>
>> +        spin_unlock_irq(lock);
>> +
>> +	/* v->processor might have changed, so reacquire the lock. */
>
> Whoever commits this (perhaps me, later today) will want to replace
> the hard tab here.

I've already committed this, and did fix up the tab.

~Andrew
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 25f6ab388d..89bc259ae4 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -708,6 +708,8 @@ void restore_vcpu_affinity(struct domain *d)
          * set v->processor of each of their vCPUs to something that will
          * make sense for the scheduler of the cpupool in which they are in.
          */
+        lock = vcpu_schedule_lock_irq(v);
+
         cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
                     cpupool_domain_cpumask(d));
         if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
@@ -731,6 +733,9 @@ void restore_vcpu_affinity(struct domain *d)
 
         v->processor = cpumask_any(cpumask_scratch_cpu(cpu));
 
+        spin_unlock_irq(lock);
+
+	/* v->processor might have changed, so reacquire the lock. */
         lock = vcpu_schedule_lock_irq(v);
         v->processor = sched_pick_cpu(vcpu_scheduler(v), v);
         spin_unlock_irq(lock);
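The comment retained at the end of the second hunk points at the subtlety that makes the unlock/relock dance necessary: the scheduler lock is per-runqueue and is looked up through v->processor, so once v->processor is reassigned, the lock taken earlier no longer protects the vcpu. A rough standalone analogue of this lock-follows-the-field pattern in plain C with pthreads (illustrative only, not Xen code; vcpu_lock() mimics the retry-loop idiom behind Xen's vcpu_schedule_lock helpers):

```c
#include <pthread.h>
#include <stdio.h>

#define NR_CPUS 4

/* One lock per "cpu", like the per-runqueue schedule locks. */
static pthread_mutex_t cpu_lock[NR_CPUS] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
};

struct vcpu { int processor; };

/* Take the lock belonging to v's current processor, retrying if the
 * vcpu moved between reading v->processor and acquiring the lock. */
static pthread_mutex_t *vcpu_lock(struct vcpu *v)
{
    for ( ;; )
    {
        pthread_mutex_t *lock = &cpu_lock[v->processor];

        pthread_mutex_lock(lock);
        if ( lock == &cpu_lock[v->processor] )
            return lock;            /* still the right lock */
        pthread_mutex_unlock(lock); /* vcpu migrated: try again */
    }
}

int main(void)
{
    struct vcpu v = { .processor = 0 };
    pthread_mutex_t *lock;

    lock = vcpu_lock(&v);
    v.processor = 2;             /* like v->processor = cpumask_any(...) */
    pthread_mutex_unlock(lock);  /* this lock no longer covers v */

    /* Reacquire so the next critical section holds cpu_lock[2]. */
    lock = vcpu_lock(&v);
    puts("reacquired the lock for the new processor");
    pthread_mutex_unlock(lock);
    return 0;
}
```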
Commit 0763cd2687897b55e7 ("xen/sched: don't disable scheduler on cpus
during suspend") removed a lock in restore_vcpu_affinity() which needs
to stay: cpumask_scratch_cpu() must be protected by the scheduler lock.

restore_vcpu_affinity() is being called by thaw_domains(), so with
multiple domains in the system another domain might already be running
and the scheduler might make use of cpumask_scratch_cpu() already.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/schedule.c | 5 +++++
 1 file changed, 5 insertions(+)