From patchwork Wed Apr 6 17:23:35 2016
X-Patchwork-Submitter: Dario Faggioli
X-Patchwork-Id: 8763971
From: Dario Faggioli
To: xen-devel@lists.xenproject.org
Date: Wed, 06 Apr 2016 19:23:35 +0200
Message-ID: <20160406172335.25877.9772.stgit@Solace.fritz.box>
In-Reply-To: <20160406170023.25877.15622.stgit@Solace.fritz.box>
References: <20160406170023.25877.15622.stgit@Solace.fritz.box>
User-Agent: StGit/0.17.1-dirty
Cc: Justin Weaver, George Dunlap
Subject: [Xen-devel] [PATCH v2 07/11] xen: sched: fix per-socket runqueue creation in credit2
List-Id: Xen developer discussion

The credit2 scheduler tries to set up runqueues so that there is one of them per socket. However, that does not actually work.
The issue is described in bug #36 "credit2 only uses one runqueue instead of one runq per socket" (http://bugs.xenproject.org/xen/bug/36), and a solution was attempted in an older patch series:
http://lists.xen.org/archives/html/xen-devel/2014-08/msg02168.html

Here, we take advantage of the fact that initialization now happens (for all schedulers) during CPU_STARTING, so we have all the topology information available when necessary.

This is true for all the pCPUs _except_ the boot CPU. That is not an issue, though: no runqueue exists yet when the boot CPU is initialized, so we can just create one and put the boot CPU in there.

Signed-off-by: Dario Faggioli
Reviewed-by: George Dunlap
---
Cc: Justin Weaver
---
Changes from v1:
 * fixed a typo in a comment.
---
 xen/common/sched_credit2.c |   59 ++++++++++++++++++++++++++++++++------------
 1 file changed, 43 insertions(+), 16 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index b207d84..a61a45a 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -53,7 +53,6 @@
  * http://wiki.xen.org/wiki/Credit2_Scheduler_Development
  * TODO:
  * + Multiple sockets
- *  - Detect cpu layout and make runqueue map, one per L2 (make_runq_map())
  *  - Simple load balancer / runqueue assignment
  *  - Runqueue load measurement
  *  - Load-based load balancer
@@ -1975,6 +1974,48 @@ static void deactivate_runqueue(struct csched2_private *prv, int rqi)
     cpumask_clear_cpu(rqi, &prv->active_queues);
 }
 
+static unsigned int
+cpu_to_runqueue(struct csched2_private *prv, unsigned int cpu)
+{
+    struct csched2_runqueue_data *rqd;
+    unsigned int rqi;
+
+    for ( rqi = 0; rqi < nr_cpu_ids; rqi++ )
+    {
+        unsigned int peer_cpu;
+
+        /*
+         * As soon as we come across an uninitialized runqueue, use it.
+         * In fact, either:
+         *  - we are initializing the first cpu, and we assign it to
+         *    runqueue 0. This is handy, especially if we are dealing
+         *    with the boot cpu (if credit2 is the default scheduler),
+         *    as we would not be able to use cpu_to_socket() and similar
+         *    helpers anyway (their result is not reliable yet);
+         *  - we have gone through all the active runqueues, and have not
+         *    found anyone whose cpus' topology matches the one we are
+         *    dealing with, so activating a new runqueue is what we want.
+         */
+        if ( prv->rqd[rqi].id == -1 )
+            break;
+
+        rqd = prv->rqd + rqi;
+        BUG_ON(cpumask_empty(&rqd->active));
+
+        peer_cpu = cpumask_first(&rqd->active);
+        BUG_ON(cpu_to_socket(cpu) == XEN_INVALID_SOCKET_ID ||
+               cpu_to_socket(peer_cpu) == XEN_INVALID_SOCKET_ID);
+
+        if ( cpu_to_socket(cpumask_first(&rqd->active)) == cpu_to_socket(cpu) )
+            break;
+    }
+
+    /* We really expect to be able to assign each cpu to a runqueue. */
+    BUG_ON(rqi >= nr_cpu_ids);
+
+    return rqi;
+}
+
 /* Returns the ID of the runqueue the cpu is assigned to. */
 static unsigned
 init_pdata(struct csched2_private *prv, unsigned int cpu)
@@ -1986,21 +2027,7 @@ init_pdata(struct csched2_private *prv, unsigned int cpu)
     ASSERT(!cpumask_test_cpu(cpu, &prv->initialized));
 
     /* Figure out which runqueue to put it in */
-    rqi = 0;
-
-    /* Figure out which runqueue to put it in */
-    /* NB: cpu 0 doesn't get a STARTING callback, so we hard-code it to runqueue 0. */
-    if ( cpu == 0 )
-        rqi = 0;
-    else
-        rqi = cpu_to_socket(cpu);
-
-    if ( rqi == XEN_INVALID_SOCKET_ID )
-    {
-        printk("%s: cpu_to_socket(%d) returned %d!\n",
-               __func__, cpu, rqi);
-        BUG();
-    }
+    rqi = cpu_to_runqueue(prv, cpu);
 
     rqd = prv->rqd + rqi;