From patchwork Fri Feb 24 21:54:37 2017
X-Patchwork-Submitter: Haoran Li
X-Patchwork-Id: 9591277
From: Haoran Li
To: xen-devel@lists.xenproject.org
Date: Fri, 24 Feb 2017 15:54:37 -0600
Message-Id: <1487973277-20854-1-git-send-email-naroahlee@gmail.com>
Cc: dario.faggioli@citrix.com, mengxu@cis.upenn.edu, naroahlee
Subject: [Xen-devel] [RTDS Patch v2 for Xen4.8] xen: rtds: only tickle non-already tickled CPUs
List-Id: Xen developer discussion

From: naroahlee

Bug Analysis:

When more than one idle VCPU that has the same PCPU as its previous
running core invokes runq_tickle(), they all tickle that same PCPU.
The tickled PCPU will pick at most one VCPU to execute, i.e., the
highest-priority one. The other VCPUs will not be scheduled for a
period, even when an idle core is available, making them starve
unnecessarily for one period.
Therefore, always make sure that we only tickle PCPUs that have not
been tickled already.

Reviewed-by: Meng Xu
Reviewed-by: Dario Faggioli
---
 xen/common/sched_rt.c | 26 ++++++++++++--------------
 1 file changed, 12 insertions(+), 14 deletions(-)

diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 1b30014..012975c 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -1144,9 +1144,10 @@ rt_vcpu_sleep(const struct scheduler *ops, struct vcpu *vc)
 * Called by wake() and context_saved()
 * We have a running candidate here, the kick logic is:
 * Among all the cpus that are within the cpu affinity
- * 1) if the new->cpu is idle, kick it. This could benefit cache hit
- * 2) if there are any idle vcpu, kick it.
- * 3) now all pcpus are busy;
+ * 1) if there are any idle vcpu, kick it.
+ *    For cache benefit, we first search new->cpu.
+ *
+ * 2) now all pcpus are busy;
 *    among all the running vcpus, pick lowest priority one
 *    if snext has higher priority, kick it.
 *
@@ -1174,17 +1175,11 @@ runq_tickle(const struct scheduler *ops, struct rt_vcpu *new)
     cpumask_and(&not_tickled, online, new->vcpu->cpu_hard_affinity);
     cpumask_andnot(&not_tickled, &not_tickled, &prv->tickled);
 
-    /* 1) if new's previous cpu is idle, kick it for cache benefit */
-    if ( is_idle_vcpu(curr_on_cpu(new->vcpu->processor)) )
-    {
-        SCHED_STAT_CRANK(tickled_idle_cpu);
-        cpu_to_tickle = new->vcpu->processor;
-        goto out;
-    }
-
-    /* 2) if there are any idle pcpu, kick it */
+    /* 1) if there are any idle pcpu, kick it */
     /* The same loop also find the one with lowest priority */
-    for_each_cpu(cpu, &not_tickled)
+    /* For cache benefit, we search new->cpu first */
+    cpu = cpumask_test_or_cycle(new->vcpu->processor, &not_tickled);
+    while ( cpu != nr_cpu_ids )
     {
         iter_vc = curr_on_cpu(cpu);
         if ( is_idle_vcpu(iter_vc) )
@@ -1197,9 +1192,12 @@ runq_tickle(const struct scheduler *ops, struct rt_vcpu *new)
         if ( latest_deadline_vcpu == NULL ||
              iter_svc->cur_deadline > latest_deadline_vcpu->cur_deadline )
             latest_deadline_vcpu = iter_svc;
+
+        cpumask_clear_cpu(cpu, &not_tickled);
+        cpu = cpumask_cycle(cpu, &not_tickled);
     }
 
-    /* 3) candicate has higher priority, kick out lowest priority vcpu */
+    /* 2) candidate has higher priority, kick out lowest priority vcpu */
     if ( latest_deadline_vcpu != NULL &&
          new->cur_deadline < latest_deadline_vcpu->cur_deadline )
     {