From patchwork Tue Aug 1 19:24:36 2017
X-Patchwork-Submitter: Meng Xu <mengxu@cis.upenn.edu>
X-Patchwork-Id: 9875359
From: Meng Xu <mengxu@cis.upenn.edu>
To: xen-devel@lists.xen.org
Date: Tue, 1 Aug 2017 15:24:36 -0400
Message-Id: <1501615476-3059-1-git-send-email-mengxu@cis.upenn.edu>
X-Mailer: git-send-email 1.9.1
Cc: george.dunlap@eu.citrix.com, dario.faggioli@citrix.com,
    xumengpanda@gmail.com, Meng Xu <mengxu@cis.upenn.edu>, Haoran Li
Subject: [Xen-devel] [PATCH v4] xen: rtds: only tickle non-already tickled CPUs

When several idle VCPUs whose previous running core is the same PCPU
invoke runq_tickle(), they all tickle that same PCPU. The tickled PCPU
will pick at most one of them to execute, namely the highest-priority
one; the other VCPUs will not be scheduled for a period, even when an
idle core is available, and therefore starve unnecessarily for one
period.

Therefore, always make sure that we only tickle PCPUs that have not
been tickled already.

Signed-off-by: Haoran Li
Signed-off-by: Meng Xu <mengxu@cis.upenn.edu>
Reviewed-by: Dario Faggioli <dario.faggioli@citrix.com>
---
The initial discussion of this patch can be found at
https://lists.xenproject.org/archives/html/xen-devel/2017-02/msg02857.html

Changes in v4:
  1) Take Dario's suggestion: search new->cpu first for the CPU to
     tickle; see the standalone sketch after these notes. This gets rid
     of the if statement used in previous versions.
  2) Reword the comments and commit message.
  3) Rebase on the staging branch.

Issues in v2 and v3:
  Did not rebase on the latest staging branch.
  Did not address the comments/issues raised in v1.
  Please ignore v2 and v3.
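For illustration only (this is not part of the patch, and the names in
it are made up for the example): a minimal standalone model of the idea,
using a plain bitmask in place of Xen's cpumask API and a toy idle map
in place of the real scheduler state.

/* Standalone sketch: a "tickled" bitmask records CPUs that already have
 * a pending tickle; each wakeup searches only CPUs outside that mask,
 * starting from the VCPU's previous CPU (for cache warmth) and wrapping
 * around once. Compiles with any C99 compiler. */
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 8

static uint32_t tickled;                              /* pending tickles */
static const int idle[NR_CPUS] = { 0, 0, 1, 0, 1, 0, 0, 0 }; /* toy idle map */

/* Pick a CPU to tickle for a waking VCPU whose previous CPU is prev_cpu;
 * return -1 if every idle CPU has already been tickled. */
static int pick_cpu_to_tickle(int prev_cpu)
{
    for ( int i = 0; i < NR_CPUS; i++ )
    {
        int cpu = (prev_cpu + i) % NR_CPUS;    /* cycle, starting at prev */

        if ( tickled & (1u << cpu) )           /* already tickled: skip */
            continue;
        if ( idle[cpu] )                       /* first non-tickled idle CPU */
        {
            tickled |= 1u << cpu;              /* record it before returning */
            return cpu;
        }
    }
    /* No idle, non-tickled CPU left; the real code would now compare
     * deadlines and possibly preempt the latest-deadline running VCPU. */
    return -1;
}

int main(void)
{
    printf("first  wakeup -> CPU %d\n", pick_cpu_to_tickle(2));
    printf("second wakeup -> CPU %d\n", pick_cpu_to_tickle(2));
    return 0;
}

With the toy idle map above, the first wakeup tickles CPU 2, and the
second one, finding CPU 2 already in the tickled mask, moves on to
CPU 4 instead of re-tickling CPU 2 -- which is exactly the collision
the patch removes.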
---
 xen/common/sched_rt.c | 29 ++++++++++++++---------------
 1 file changed, 14 insertions(+), 15 deletions(-)

diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 39f6bee..5fec95f 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -1147,9 +1147,9 @@ rt_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
  * Called by wake() and context_saved()
  * We have a running candidate here, the kick logic is:
  * Among all the cpus that are within the cpu affinity
- * 1) if the new->cpu is idle, kick it. This could benefit cache hit
- * 2) if there are any idle vcpu, kick it.
- * 3) now all pcpus are busy;
+ * 1) if there is any idle vcpu, kick it.
+ *    For cache benefit, we first search new->cpu.
+ * 2) now all pcpus are busy;
  *    among all the running vcpus, pick lowest priority one
  *    if snext has higher priority, kick it.
  *
@@ -1177,17 +1177,13 @@ runq_tickle(const struct scheduler *ops, struct rt_vcpu *new)
     cpumask_and(&not_tickled, online, new->vcpu->cpu_hard_affinity);
     cpumask_andnot(&not_tickled, &not_tickled, &prv->tickled);
 
-    /* 1) if new's previous cpu is idle, kick it for cache benefit */
-    if ( is_idle_vcpu(curr_on_cpu(new->vcpu->processor)) )
-    {
-        SCHED_STAT_CRANK(tickled_idle_cpu);
-        cpu_to_tickle = new->vcpu->processor;
-        goto out;
-    }
-
-    /* 2) if there are any idle pcpu, kick it */
-    /* The same loop also find the one with lowest priority */
-    for_each_cpu(cpu, &not_tickled)
+    /*
+     * 1) If there is any idle vcpu, kick it.
+     *    For cache benefit, we first search new->cpu.
+     *    The same loop also finds the one with the lowest priority.
+     */
+    cpu = cpumask_test_or_cycle(new->vcpu->processor, &not_tickled);
+    while ( cpu != nr_cpu_ids )
     {
         iter_vc = curr_on_cpu(cpu);
         if ( is_idle_vcpu(iter_vc) )
@@ -1200,9 +1196,12 @@ runq_tickle(const struct scheduler *ops, struct rt_vcpu *new)
         if ( latest_deadline_vcpu == NULL ||
              iter_svc->cur_deadline > latest_deadline_vcpu->cur_deadline )
             latest_deadline_vcpu = iter_svc;
+
+        cpumask_clear_cpu(cpu, &not_tickled);
+        cpu = cpumask_cycle(cpu, &not_tickled);
     }
 
-    /* 3) candicate has higher priority, kick out lowest priority vcpu */
+    /* 2) candidate has higher priority, kick out lowest priority vcpu */
     if ( latest_deadline_vcpu != NULL &&
          new->cur_deadline < latest_deadline_vcpu->cur_deadline )
     {