Message ID | 156412235104.2385.3911161728130674771.stgit@Palanthas |
---|---|
State | New, archived |
Series | xen: sched: support vcpu hotplug/hotunplug in the 'null scheduler' |
On 26.07.2019 08:25, Dario Faggioli wrote:
> vcpu_deassign() is called only once (in _vcpu_remove()).
>
> Let's consolidate the two functions into one.
>
> No functional change intended.
>
> Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
> Acked-by: George Dunlap <george.dunlap@citrix.com>

I'm puzzled by commit a397471278, in two ways:

1) The commit is empty, presumably because I did apply the patch a few
days ago already.

2) The committer is recorded as "Patchew Importer <importer@patchew.org>".
Do we really want to hide the fact who has been committing a patch?
While it's mostly mechanical, there's still the human decision of "this
is ready to go in" involved, which I don't think a bot can reliably take
in all cases.

Jan
On 8/5/19 4:58 PM, Jan Beulich wrote:
> On 26.07.2019 08:25, Dario Faggioli wrote:
>> vcpu_deassign() is called only once (in _vcpu_remove()).
>>
>> Let's consolidate the two functions into one.
>>
>> No functional change intended.
>>
>> Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
>> Acked-by: George Dunlap <george.dunlap@citrix.com>
>
> I'm puzzled by commit a397471278, in two ways:
>
> 1) The commit is empty, presumably because I did apply the patch a few
> days ago already.
>
> 2) The committer is recorded as "Patchew Importer <importer@patchew.org>".
> Do we really want to hide the fact who has been committing a patch?
> While it's mostly mechanical, there's still the human decision of "this
> is ready to go in" involved, which I don't think a bot can reliably take
> in all cases.

Both of these are mistakes, and due to the fact that I `git fetch`ed
patchew's commit rather than doing `git am` of the mbox provided by
patchew.  (And I used patchew's commit because somehow 4/4 didn't reach
my inbox.)

Re #1, I re-reviewed v2 from the commit itself; but then rebased before
pushing, and didn't notice that the commit ended up being empty.

Re #2, I guess that means I shouldn't really be pushing from patchew's
commit anyway.

Either way, sorry about the mistake; I'll try to remember to avoid using
patchew's commit in the future.

 -George
On Mon, 2019-08-05 at 17:04 +0100, George Dunlap wrote:
> On 8/5/19 4:58 PM, Jan Beulich wrote:
> > On 26.07.2019 08:25, Dario Faggioli wrote:
> > >
> > 1) The commit is empty, presumably because I did apply the patch a
> > few days ago already.
> >
> > 2) The committer is recorded as "Patchew Importer <importer@patchew.org>".
> > Do we really want to hide the fact who has been committing a patch?
> > While it's mostly mechanical, there's still the human decision of "this
> > is ready to go in" involved, which I don't think a bot can reliably
> > take in all cases.
>
> Both of these are mistakes, and due to the fact that I `git fetch`ed
> patchew's commit rather than doing `git am` of the mbox provided by
> patchew.  (And I used patchew's commit because somehow 4/4 didn't
> reach my inbox.)
>
And this last part is apparently my fault, as your email address is
actually wrong, in the Cc line of that patch.

So I guess I had a part in this as well... Sorry about that! :-)

Regards
> On Aug 5, 2019, at 6:30 PM, Dario Faggioli <dfaggioli@suse.com> wrote:
>
> On Mon, 2019-08-05 at 17:04 +0100, George Dunlap wrote:
>> On 8/5/19 4:58 PM, Jan Beulich wrote:
>>> On 26.07.2019 08:25, Dario Faggioli wrote:
>>>>
>>> 1) The commit is empty, presumably because I did apply the patch a
>>> few days ago already.
>>>
>>> 2) The committer is recorded as "Patchew Importer <importer@patchew.org>".
>>> Do we really want to hide the fact who has been committing a patch?
>>> While it's mostly mechanical, there's still the human decision of "this
>>> is ready to go in" involved, which I don't think a bot can reliably
>>> take in all cases.
>>
>> Both of these are mistakes, and due to the fact that I `git fetch`ed
>> patchew's commit rather than doing `git am` of the mbox provided by
>> patchew.  (And I used patchew's commit because somehow 4/4 didn't
>> reach my inbox.)
>>
> And this last part is apparently my fault, as your email address is
> actually wrong, in the Cc line of that patch.

That would explain it. :-)

But that’s by no means your “fault”: It’s very common for me to only be
CC’d on a subset of patches in a series (e.g., because I’m the maintainer
for some patches but not others), so “receiving only a subset of patches”
is something my workflow should handle effectively but didn’t.

 -George
```diff
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index c02c1b9c1f..c47c1b5aae 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -358,9 +358,14 @@ static void vcpu_assign(struct null_private *prv, struct vcpu *v,
     }
 }
 
-static void vcpu_deassign(struct null_private *prv, struct vcpu *v,
-                          unsigned int cpu)
+static void vcpu_deassign(struct null_private *prv, struct vcpu *v)
 {
+    unsigned int bs;
+    unsigned int cpu = v->processor;
+    struct null_vcpu *wvc;
+
+    ASSERT(list_empty(&null_vcpu(v)->waitq_elem));
+
     per_cpu(npc, cpu).vcpu = NULL;
     cpumask_set_cpu(cpu, &prv->cpus_free);
 
@@ -377,6 +382,32 @@ static void vcpu_deassign(struct null_private *prv, struct vcpu *v,
         d.cpu = cpu;
         __trace_var(TRC_SNULL_VCPU_DEASSIGN, 1, sizeof(d), &d);
     }
+
+    spin_lock(&prv->waitq_lock);
+
+    /*
+     * If v is assigned to a pCPU, let's see if there is someone waiting,
+     * suitable to be assigned to it (prioritizing vcpus that have
+     * soft-affinity with cpu).
+     */
+    for_each_affinity_balance_step( bs )
+    {
+        list_for_each_entry( wvc, &prv->waitq, waitq_elem )
+        {
+            if ( bs == BALANCE_SOFT_AFFINITY && !has_soft_affinity(wvc->vcpu) )
+                continue;
+
+            if ( vcpu_check_affinity(wvc->vcpu, cpu, bs) )
+            {
+                list_del_init(&wvc->waitq_elem);
+                vcpu_assign(prv, wvc->vcpu, cpu);
+                cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
+                spin_unlock(&prv->waitq_lock);
+                return;
+            }
+        }
+    }
+    spin_unlock(&prv->waitq_lock);
 }
 
 /* Change the scheduler of cpu to us (null). */
@@ -459,43 +490,6 @@ static void null_vcpu_insert(const struct scheduler *ops, struct vcpu *v)
     SCHED_STAT_CRANK(vcpu_insert);
 }
 
-static void _vcpu_remove(struct null_private *prv, struct vcpu *v)
-{
-    unsigned int bs;
-    unsigned int cpu = v->processor;
-    struct null_vcpu *wvc;
-
-    ASSERT(list_empty(&null_vcpu(v)->waitq_elem));
-
-    vcpu_deassign(prv, v, cpu);
-
-    spin_lock(&prv->waitq_lock);
-
-    /*
-     * If v is assigned to a pCPU, let's see if there is someone waiting,
-     * suitable to be assigned to it (prioritizing vcpus that have
-     * soft-affinity with cpu).
-     */
-    for_each_affinity_balance_step( bs )
-    {
-        list_for_each_entry( wvc, &prv->waitq, waitq_elem )
-        {
-            if ( bs == BALANCE_SOFT_AFFINITY && !has_soft_affinity(wvc->vcpu) )
-                continue;
-
-            if ( vcpu_check_affinity(wvc->vcpu, cpu, bs) )
-            {
-                list_del_init(&wvc->waitq_elem);
-                vcpu_assign(prv, wvc->vcpu, cpu);
-                cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
-                spin_unlock(&prv->waitq_lock);
-                return;
-            }
-        }
-    }
-    spin_unlock(&prv->waitq_lock);
-}
-
 static void null_vcpu_remove(const struct scheduler *ops, struct vcpu *v)
 {
     struct null_private *prv = null_priv(ops);
@@ -519,7 +513,7 @@ static void null_vcpu_remove(const struct scheduler *ops, struct vcpu *v)
     ASSERT(per_cpu(npc, v->processor).vcpu == v);
     ASSERT(!cpumask_test_cpu(v->processor, &prv->cpus_free));
 
-    _vcpu_remove(prv, v);
+    vcpu_deassign(prv, v);
 
  out:
     vcpu_schedule_unlock_irq(lock, v);
@@ -605,7 +599,7 @@ static void null_vcpu_migrate(const struct scheduler *ops, struct vcpu *v,
      */
     if ( likely(list_empty(&nvc->waitq_elem)) )
     {
-        _vcpu_remove(prv, v);
+        vcpu_deassign(prv, v);
         SCHED_STAT_CRANK(migrate_running);
     }
     else
```
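For readers skimming the consolidation above, the behaviour being folded into vcpu_deassign() is the two-pass waitqueue scan: when a vCPU releases its pCPU, the scheduler first looks for a waiter whose soft affinity includes that pCPU, and only then falls back to hard affinity. Below is a minimal, self-contained sketch of that selection logic; it is not Xen code, and `struct waiter`, `check_affinity()`, `pick_waiter()` and the `main()` driver are invented stand-ins for the real `struct null_vcpu`, `vcpu_check_affinity()` and waitqueue handling.

```c
/*
 * Simplified model of the waitqueue scan in the consolidated
 * vcpu_deassign(): not Xen code -- types and helpers are invented
 * stand-ins for struct null_vcpu, has_soft_affinity() and
 * vcpu_check_affinity().
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_WAITERS 3

struct waiter {
    int id;
    bool has_soft_affinity;   /* does this vCPU have a soft-affinity mask? */
    bool soft_affine_to_cpu;  /* soft affinity includes the freed pCPU     */
    bool hard_affine_to_cpu;  /* hard affinity includes the freed pCPU     */
};

enum balance_step { BALANCE_SOFT_AFFINITY, BALANCE_HARD_AFFINITY, BALANCE_STEPS };

/* Stand-in for vcpu_check_affinity(wvc->vcpu, cpu, bs). */
static bool check_affinity(const struct waiter *w, enum balance_step bs)
{
    return bs == BALANCE_SOFT_AFFINITY ? w->soft_affine_to_cpu
                                       : w->hard_affine_to_cpu;
}

/*
 * Two-pass scan, mirroring for_each_affinity_balance_step() in the
 * patch: soft affinity is tried on the first pass, hard affinity on
 * the second, so a soft-affine waiter wins even if it is queued later.
 */
static const struct waiter *pick_waiter(const struct waiter *q, int n)
{
    for ( enum balance_step bs = 0; bs < BALANCE_STEPS; bs++ )
        for ( int i = 0; i < n; i++ )
        {
            /* Skip the soft-affinity pass for vCPUs without one. */
            if ( bs == BALANCE_SOFT_AFFINITY && !q[i].has_soft_affinity )
                continue;
            if ( check_affinity(&q[i], bs) )
                return &q[i];
        }
    return NULL;  /* nobody suitable: the pCPU stays free */
}

int main(void)
{
    /* Waiter 0 is only hard-affine; waiter 2 is soft-affine and wins. */
    const struct waiter waitq[NR_WAITERS] = {
        { .id = 0, .has_soft_affinity = false, .hard_affine_to_cpu = true  },
        { .id = 1, .has_soft_affinity = true,  .hard_affine_to_cpu = false },
        { .id = 2, .has_soft_affinity = true,
          .soft_affine_to_cpu = true, .hard_affine_to_cpu = true },
    };
    const struct waiter *w = pick_waiter(waitq, NR_WAITERS);

    printf("assigned waiter: %d\n", w ? w->id : -1);  /* prints 2 */
    return 0;
}
```

The two-pass structure is what lets a later-queued but soft-affine waiter (waiter 2 in the sketch) win over an earlier, merely hard-affine one (waiter 0), which is the prioritization the comment in the patch describes.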