From patchwork Thu Aug 3 02:13:52 2017
X-Patchwork-Submitter: Meng Xu
X-Patchwork-Id: 9878015
From: Meng Xu <mengxu@cis.upenn.edu>
To: xen-devel@lists.xen.org
Date: Wed, 2 Aug 2017 22:13:52 -0400
Message-Id: <1501726432-13142-1-git-send-email-mengxu@cis.upenn.edu>
X-Mailer: git-send-email 1.9.1
Cc: george.dunlap@eu.citrix.com, dario.faggioli@citrix.com,
    xumengpanda@gmail.com, Meng Xu <mengxu@cis.upenn.edu>, Haoran Li
Subject: [Xen-devel] [PATCH v5] xen: rtds: only tickle non-already tickled CPUs

When more than one idle VCPU sharing the same PCPU as their previous
running core invoke runq_tickle(), they all tickle that same PCPU. The
tickled PCPU picks at most one VCPU to execute, i.e., the
highest-priority one; the other VCPUs are not scheduled for a full
period, even when an idle core is available, and therefore starve
unnecessarily for one period.

Therefore, always make sure that we only tickle PCPUs that have not
been tickled already.

Signed-off-by: Haoran Li
Signed-off-by: Meng Xu <mengxu@cis.upenn.edu>
Reviewed-by: Dario Faggioli <dario.faggioli@citrix.com>
---
The initial discussion of this patch can be found at
https://lists.xenproject.org/archives/html/xen-devel/2017-02/msg02857.html

Changes in v5:
  Revised the comments as Dario suggested.

Changes in v4:
  1) Took Dario's suggestion: search new->cpu first for the CPU to
     tickle, which gets rid of the if statement in previous versions.
  2) Reworded the comments and commit message.
  3) Rebased onto the staging branch.

Issues in v2 and v3:
  Did not rebase on the latest staging branch and did not address the
  comments/issues raised in v1. Please ignore v2 and v3.
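To make the new control flow easier to follow in isolation, here is a
minimal, self-contained C sketch of the search loop the patch below
introduces. It is illustrative only, not Xen code: a uint64_t stands in
for a cpumask (assuming at most 64 CPUs), next_cpu() and test_or_cycle()
loosely model cpumask_cycle() and cpumask_test_or_cycle(), and is_idle[]
is a hypothetical stand-in for the curr_on_cpu()/is_idle_vcpu() check.

/* Illustrative sketch only, NOT Xen code; all names are stand-ins. */
#include <stdint.h>

#define NR_CPUS 8

/* Roughly cpumask_cycle(): next set bit after 'start', wrapping around;
 * returns NR_CPUS (playing the role of nr_cpu_ids) if the mask is empty. */
static int next_cpu(uint64_t mask, int start)
{
    for ( int i = 1; i <= NR_CPUS; i++ )
    {
        int cpu = (start + i) % NR_CPUS;
        if ( mask & (1ULL << cpu) )
            return cpu;
    }
    return NR_CPUS;
}

/* Roughly cpumask_test_or_cycle(): 'cpu' itself if set, else cycle on. */
static int test_or_cycle(uint64_t mask, int cpu)
{
    return (mask & (1ULL << cpu)) ? cpu : next_cpu(mask, cpu);
}

/* Pick a CPU to tickle: search new_cpu first (cache benefit), then the
 * remaining not-yet-tickled CPUs; return the first idle one, or -1 if
 * all are busy (the caller would then consider preempting the
 * lowest-priority running vCPU instead). */
static int pick_cpu_to_tickle(uint64_t not_tickled, int new_cpu,
                              const int is_idle[NR_CPUS])
{
    int cpu = test_or_cycle(not_tickled, new_cpu);

    while ( cpu != NR_CPUS )
    {
        if ( is_idle[cpu] )
            return cpu;
        not_tickled &= ~(1ULL << cpu); /* done with this CPU */
        cpu = next_cpu(not_tickled, cpu);
    }
    return -1;
}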
---
 xen/common/sched_rt.c | 29 ++++++++++++++---------------
 1 file changed, 14 insertions(+), 15 deletions(-)

diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 39f6bee..0ac5816 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -1147,9 +1147,9 @@ rt_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
  * Called by wake() and context_saved()
  * We have a running candidate here, the kick logic is:
  * Among all the cpus that are within the cpu affinity
- * 1) if the new->cpu is idle, kick it. This could benefit cache hit
- * 2) if there are any idle vcpu, kick it.
- * 3) now all pcpus are busy;
+ * 1) if there are any idle CPUs, kick one.
+ *    For cache benefit, we check new->cpu first.
+ * 2) now all pcpus are busy;
  *    among all the running vcpus, pick lowest priority one
  *    if snext has higher priority, kick it.
  *
@@ -1177,17 +1177,13 @@ runq_tickle(const struct scheduler *ops, struct rt_vcpu *new)
     cpumask_and(&not_tickled, online, new->vcpu->cpu_hard_affinity);
     cpumask_andnot(&not_tickled, &not_tickled, &prv->tickled);
 
-    /* 1) if new's previous cpu is idle, kick it for cache benefit */
-    if ( is_idle_vcpu(curr_on_cpu(new->vcpu->processor)) )
-    {
-        SCHED_STAT_CRANK(tickled_idle_cpu);
-        cpu_to_tickle = new->vcpu->processor;
-        goto out;
-    }
-
-    /* 2) if there are any idle pcpu, kick it */
-    /* The same loop also find the one with lowest priority */
-    for_each_cpu(cpu, &not_tickled)
+    /*
+     * 1) If there are any idle CPUs, kick one.
+     *    For cache benefit, we first search new->cpu.
+     *    The same loop also finds the one with the lowest priority.
+     */
+    cpu = cpumask_test_or_cycle(new->vcpu->processor, &not_tickled);
+    while ( cpu != nr_cpu_ids )
     {
         iter_vc = curr_on_cpu(cpu);
         if ( is_idle_vcpu(iter_vc) )
@@ -1200,9 +1196,12 @@ runq_tickle(const struct scheduler *ops, struct rt_vcpu *new)
         if ( latest_deadline_vcpu == NULL ||
              iter_svc->cur_deadline > latest_deadline_vcpu->cur_deadline )
             latest_deadline_vcpu = iter_svc;
+
+        cpumask_clear_cpu(cpu, &not_tickled);
+        cpu = cpumask_cycle(cpu, &not_tickled);
     }
 
-    /* 3) candicate has higher priority, kick out lowest priority vcpu */
+    /* 2) candidate has higher priority, kick out the lowest-priority vcpu */
     if ( latest_deadline_vcpu != NULL &&
          new->cur_deadline < latest_deadline_vcpu->cur_deadline )
     {
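As a quick sanity check of the sketch above (again, purely
illustrative, appended to the same file): with CPUs 0 and 1 idle but
CPU 0 already tickled, a second waker whose new->cpu is 0 now lands on
CPU 1 instead of piling onto CPU 0, which is exactly the starvation
case this patch addresses.

#include <stdio.h>

int main(void)
{
    /* CPUs 0 and 1 are idle; the rest are busy. */
    int is_idle[NR_CPUS] = { 1, 1, 0, 0, 0, 0, 0, 0 };
    /* CPUs 1-7 are not yet tickled; CPU 0 was tickled by a prior waker. */
    uint64_t not_tickled = 0xfeULL;

    /* new->cpu is 0, but CPU 0 is excluded, so idle CPU 1 is picked. */
    printf("tickle CPU %d\n", pick_cpu_to_tickle(not_tickled, 0, is_idle));
    return 0;
}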