Message ID: 1499098656-1608-1-git-send-email-naroahlee@gmail.com (mailing list archive)
State: New, archived
On Mon, 2017-07-03 at 11:17 -0500, Haoran Li wrote:
> From: naroahlee <naroahlee@gmail.com>
>
> When more than one idle VCPUs that have the same PCPU as their previous
> running core invoke runq_tickle(), they will tickle the same PCPU. The
> tickled PCPU will only pick at most one VCPU, i.e., the highest-priority
> one, to execute. The other VCPUs will not be scheduled for a period,
> even when there is an idle core, making these VCPUs unnecessarily starve
> for one period. Therefore, always make sure that we only tickle PCPUs
> that have not been tickled already.
>
> Signed-off-by: Haoran Li <naroahlee@gmail.com>
> Reviewed-by: Meng Xu <mengxu@cis.upenn.edu>

So, from what I can see from the 'From' tag, and from the pieces of
emails that appear below the patch, this is some kind of
resubmission/new version of a patch sent a while back.

However, the subject seems to have changed... Or in any case, the
current subject is no good.

It's also a bit unusual, and definitely not comfortable for people
managing the patch, to have a quoted email conversation below the patch
itself (or so I think). So, please, remove it.

Finally, in that quoted email conversation, I asked for some changes,
and said that, with them done, my Reviewed-by: would stand.

Have you made those changes? If yes, please mention this somewhere
(ideally between the S-o-b/R-b tags and the patch itself, after a
'---' mark).

Regards,
Dario
Hi Haoran,

On Mon, Jul 3, 2017 at 12:17 PM, Haoran Li <naroahlee@gmail.com> wrote:
> From: naroahlee <naroahlee@gmail.com>
>
> When more than one idle VCPUs that have the same PCPU as their previous
> running core invoke runq_tickle(), they will tickle the same PCPU. The
> tickled PCPU will only pick at most one VCPU, i.e., the highest-priority
> one, to execute. The other VCPUs will not be scheduled for a period,
> even when there is an idle core, making these VCPUs unnecessarily starve
> for one period. Therefore, always make sure that we only tickle PCPUs
> that have not been tickled already.
>
> Signed-off-by: Haoran Li <naroahlee@gmail.com>
> Reviewed-by: Meng Xu <mengxu@cis.upenn.edu>

As Dario mentioned in his email, the title should be changed, and the
email should be a new thread instead of a forwarded email. A reference
for the format of sending a newer version of a patch can be found at:
https://www.mail-archive.com/xen-devel@lists.xen.org/msg60115.html

In the commit message, you can add a "--- Changes to v1 ---" section to
state the changes made since the previous version. You can also refer to
the previous discussion with a link in that section. This makes the
reviewers' life easier. The change log won't be committed.

Could you please send another version after resolving the concerns
raised by Dario and me? Don't hesitate to ping me if you have any
questions.

Thanks,
Meng
>>> On 03.07.17 at 19:09, <dario.faggioli@citrix.com> wrote:
> On Mon, 2017-07-03 at 11:17 -0500, Haoran Li wrote:
>> From: naroahlee <naroahlee@gmail.com>
>>
>> When more than one idle VCPUs that have the same PCPU as their
>> previous running core invoke runq_tickle(), they will tickle the same
>> PCPU. The tickled PCPU will only pick at most one VCPU, i.e., the
>> highest-priority one, to execute. The other VCPUs will not be
>> scheduled for a period, even when there is an idle core, making these
>> VCPUs unnecessarily starve for one period. Therefore, always make sure
>> that we only tickle PCPUs that have not been tickled already.
>>
>> Signed-off-by: Haoran Li <naroahlee@gmail.com>
>> Reviewed-by: Meng Xu <mengxu@cis.upenn.edu>
>>
> So, from what I can see from the 'From' tag, and from the pieces of
> emails, that appear below the patch, this is some kind of
> resubmission/new version, of a patch sent a while back.
>
> However, the subject seems to have changed... Or in any case, the
> current subject is no good.
>
> It's also a bit unusual, and definitely not comfortable for people
> managing the patch, to have a quoted email conversation below the patch
> itself (or so I think). So, please, remove it.
>
> Finally, in that quoted email conversation, I asked for some changes,
> and said that, with them done, my Reviewed-by: would stand.
>
> Have you made those changes? If yes, please, mention this somewhere
> (Ideally, between the S-o-b, R-b tags and the patch itself, after a
> '---' mark).

Additionally, you would almost never submit patches for other than the
unstable staging branch. The only exception being if there's a change
that absolutely has to go into an older branch, but which isn't
applicable at all anymore to current staging.

If you do your development on an older version, so be it. But for
submission it is you who is responsible for doing (and testing!) the
forward port.

Jan
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 1b30014..b3d55d8 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -1144,12 +1144,11 @@ rt_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
  * Called by wake() and context_saved()
  * We have a running candidate here, the kick logic is:
  * Among all the cpus that are within the cpu affinity
- * 1) if the new->cpu is idle, kick it. This could benefit cache hit
- * 2) if there are any idle vcpu, kick it.
- * 3) now all pcpus are busy;
+ * 1) if there are any idle vcpu, kick it.
+ *    For cache benefit, we first search new->cpu.
+ * 2) now all pcpus are busy;
  *    among all the running vcpus, pick lowest priority one
  *    if snext has higher priority, kick it.
- *
  * TODO:
  * 1) what if these two vcpus belongs to the same domain?
  *    replace a vcpu belonging to the same domain introduces more overhead
@@ -1174,17 +1173,11 @@ runq_tickle(const struct scheduler *ops, struct rt_vcpu *new)
     cpumask_and(&not_tickled, online, new->vcpu->cpu_hard_affinity);
     cpumask_andnot(&not_tickled, &not_tickled, &prv->tickled);
 
-    /* 1) if new's previous cpu is idle, kick it for cache benefit */
-    if ( is_idle_vcpu(curr_on_cpu(new->vcpu->processor)) )
-    {
-        SCHED_STAT_CRANK(tickled_idle_cpu);
-        cpu_to_tickle = new->vcpu->processor;
-        goto out;
-    }
-
-    /* 2) if there are any idle pcpu, kick it */
+    /* 1) if there are any idle pcpu, kick it */
     /* The same loop also find the one with lowest priority */
-    for_each_cpu(cpu, &not_tickled)
+    /* For cache benefit, we search new->cpu first */
+    cpu = cpumask_test_or_cycle(new->vcpu->processor, &not_tickled);
+    while ( cpu != nr_cpu_ids )
     {
         iter_vc = curr_on_cpu(cpu);
         if ( is_idle_vcpu(iter_vc) )
@@ -1197,9 +1190,12 @@ runq_tickle(const struct scheduler *ops, struct rt_vcpu *new)
         if ( latest_deadline_vcpu == NULL ||
              iter_svc->cur_deadline > latest_deadline_vcpu->cur_deadline )
             latest_deadline_vcpu = iter_svc;
+
+        cpumask_clear_cpu(cpu, &not_tickled);
+        cpu = cpumask_cycle(cpu, &not_tickled);
     }
 
-    /* 3) candicate has higher priority, kick out lowest priority vcpu */
+    /* 2) candicate has higher priority, kick out lowest priority vcpu */
     if ( latest_deadline_vcpu != NULL &&
          new->cur_deadline < latest_deadline_vcpu->cur_deadline )
     {
@@ -1207,7 +1203,6 @@ runq_tickle(const struct scheduler *ops, struct rt_vcpu *new)
         cpu_to_tickle = latest_deadline_vcpu->vcpu->processor;
         goto out;
     }
-
     /* didn't tickle any cpu */
     SCHED_STAT_CRANK(tickled_no_cpu);
     return;