From patchwork Fri Apr 8 01:24:13 2016
X-Patchwork-Submitter: Dario Faggioli
X-Patchwork-Id: 8779561
From: Dario Faggioli
To: xen-devel@lists.xenproject.org
Cc: Justin Weaver, George Dunlap
Date: Fri, 08 Apr 2016 03:24:13 +0200
Message-ID: <20160408012411.10762.37733.stgit@Solace.fritz.box>
In-Reply-To: <20160408011204.10762.14241.stgit@Solace.fritz.box>
References: <20160408011204.10762.14241.stgit@Solace.fritz.box>
User-Agent: StGit/0.17.1-dirty
Subject: [Xen-devel] [PATCH v3 07/11] xen: sched: fix per-socket runqueue creation in credit2
List-Id: Xen developer discussion

The credit2 scheduler tries to set up runqueues in such a way that there
is one of them per socket. However, that does not work.
The issue is described in bug #36 "credit2 only uses one runqueue
instead of one runq per socket" (http://bugs.xenproject.org/xen/bug/36),
and a solution has been attempted by an old patch series:

 http://lists.xen.org/archives/html/xen-devel/2014-08/msg02168.html

Here, we take advantage of the fact that initialization now happens
(for all schedulers) during CPU_STARTING, so we have all the topology
information available when necessary.

This is true for all the pCPUs _except_ the boot CPU. That is not an
issue, though: no runqueue exists yet when the boot CPU is initialized,
so we can just create one and put the boot CPU in it.

Signed-off-by: Dario Faggioli
Reviewed-by: George Dunlap
---
Cc: Justin Weaver
---
Changes from v1:
 * fixed a typo in a comment.
---
 xen/common/sched_credit2.c | 59 ++++++++++++++++++++++++++++++++------------
 1 file changed, 43 insertions(+), 16 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index b207d84..a61a45a 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -53,7 +53,6 @@
  * http://wiki.xen.org/wiki/Credit2_Scheduler_Development
  * TODO:
  * + Multiple sockets
- *  - Detect cpu layout and make runqueue map, one per L2 (make_runq_map())
  *  - Simple load balancer / runqueue assignment
  *  - Runqueue load measurement
  *  - Load-based load balancer
@@ -1975,6 +1974,48 @@ static void deactivate_runqueue(struct csched2_private *prv, int rqi)
     cpumask_clear_cpu(rqi, &prv->active_queues);
 }
 
+static unsigned int
+cpu_to_runqueue(struct csched2_private *prv, unsigned int cpu)
+{
+    struct csched2_runqueue_data *rqd;
+    unsigned int rqi;
+
+    for ( rqi = 0; rqi < nr_cpu_ids; rqi++ )
+    {
+        unsigned int peer_cpu;
+
+        /*
+         * As soon as we come across an uninitialized runqueue, use it.
+         * In fact, either:
+         *  - we are initializing the first cpu, and we assign it to
+         *    runqueue 0. This is handy, especially if we are dealing
+         *    with the boot cpu (if credit2 is the default scheduler),
+         *    as we would not be able to use cpu_to_socket() and similar
+         *    helpers anyway (the result of which is not reliable yet);
+         *  - we have gone through all the active runqueues, and have not
+         *    found anyone whose cpus' topology matches the one we are
+         *    dealing with, so activating a new runqueue is what we want.
+         */
+        if ( prv->rqd[rqi].id == -1 )
+            break;
+
+        rqd = prv->rqd + rqi;
+        BUG_ON(cpumask_empty(&rqd->active));
+
+        peer_cpu = cpumask_first(&rqd->active);
+        BUG_ON(cpu_to_socket(cpu) == XEN_INVALID_SOCKET_ID ||
+               cpu_to_socket(peer_cpu) == XEN_INVALID_SOCKET_ID);
+
+        if ( cpu_to_socket(cpumask_first(&rqd->active)) == cpu_to_socket(cpu) )
+            break;
+    }
+
+    /* We really expect to be able to assign each cpu to a runqueue. */
+    BUG_ON(rqi >= nr_cpu_ids);
+
+    return rqi;
+}
+
 /* Returns the ID of the runqueue the cpu is assigned to. */
 static unsigned
 init_pdata(struct csched2_private *prv, unsigned int cpu)
@@ -1986,21 +2027,7 @@ init_pdata(struct csched2_private *prv, unsigned int cpu)
     ASSERT(!cpumask_test_cpu(cpu, &prv->initialized));
 
     /* Figure out which runqueue to put it in */
-    rqi = 0;
-
-    /* Figure out which runqueue to put it in */
-    /* NB: cpu 0 doesn't get a STARTING callback, so we hard-code it to runqueue 0. */
-    if ( cpu == 0 )
-        rqi = 0;
-    else
-        rqi = cpu_to_socket(cpu);
-
-    if ( rqi == XEN_INVALID_SOCKET_ID )
-    {
-        printk("%s: cpu_to_socket(%d) returned %d!\n",
-               __func__, cpu, rqi);
-        BUG();
-    }
+    rqi = cpu_to_runqueue(prv, cpu);
 
     rqd = prv->rqd + rqi;