From patchwork Thu Feb 13 12:54:44 2020
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11380307
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Meng Xu, Dario Faggioli
Date: Thu, 13 Feb 2020 13:54:44 +0100
Message-Id: <20200213125449.14226-4-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20200213125449.14226-1-jgross@suse.com>
References: <20200213125449.14226-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 3/8] xen/sched: don't use irqsave locks in dumping functions

All dumping functions invoked by the "runq" keyhandler are called with
interrupts disabled, so there is no need to use the irqsave variants of
any locks in those functions.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Dario Faggioli
---
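Note: as a minimal sketch of the pattern being removed (the lock and
function names here are made up for illustration and are not taken from
the schedulers touched below), the irqsave variants differ from the
plain ones only in that they save the current interrupt state into
'flags' and disable interrupts before acquiring the lock. When the
caller already runs with interrupts disabled, as the keyhandler path
does here, that extra work is redundant:

    /* Illustrative sketch only, not part of the patch; assumes
     * xen/spinlock.h (locks) and xen/lib.h (printk). */
    static DEFINE_SPINLOCK(example_lock);

    static void example_dump(void)
    {
        unsigned long flags;

        /* Old pattern: saves the (already disabled) interrupt state
         * and disables interrupts again before taking the lock. */
        spin_lock_irqsave(&example_lock, flags);
        printk("example state\n");
        spin_unlock_irqrestore(&example_lock, flags);

        /* New pattern: a plain lock suffices, since with interrupts
         * off no interrupt handler can race for the same lock. */
        spin_lock(&example_lock);
        printk("example state\n");
        spin_unlock(&example_lock);
    }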
 xen/common/sched/credit.c  | 10 ++++------
 xen/common/sched/credit2.c |  5 ++---
 xen/common/sched/null.c    | 10 ++++------
 xen/common/sched/rt.c      | 10 ++++------
 4 files changed, 14 insertions(+), 21 deletions(-)

diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c
index 05946eea6e..dee87e7fe2 100644
--- a/xen/common/sched/credit.c
+++ b/xen/common/sched/credit.c
@@ -2048,7 +2048,6 @@ csched_dump_pcpu(const struct scheduler *ops, int cpu)
     const struct csched_pcpu *spc;
     const struct csched_unit *svc;
     spinlock_t *lock;
-    unsigned long flags;
     int loop;
 
     /*
@@ -2058,7 +2057,7 @@ csched_dump_pcpu(const struct scheduler *ops, int cpu)
      * - we scan through the runqueue, so we need the proper runqueue
      *   lock (the one of the runqueue of this cpu).
      */
-    spin_lock_irqsave(&prv->lock, flags);
+    spin_lock(&prv->lock);
     lock = pcpu_schedule_lock(cpu);
 
     spc = CSCHED_PCPU(cpu);
@@ -2089,7 +2088,7 @@ csched_dump_pcpu(const struct scheduler *ops, int cpu)
     }
 
     pcpu_schedule_unlock(lock, cpu);
-    spin_unlock_irqrestore(&prv->lock, flags);
+    spin_unlock(&prv->lock);
 }
 
 static void
@@ -2098,9 +2097,8 @@ csched_dump(const struct scheduler *ops)
     struct list_head *iter_sdom, *iter_svc;
     struct csched_private *prv = CSCHED_PRIV(ops);
     int loop;
-    unsigned long flags;
 
-    spin_lock_irqsave(&prv->lock, flags);
+    spin_lock(&prv->lock);
 
     printk("info:\n"
            "\tncpus = %u\n"
@@ -2153,7 +2151,7 @@ csched_dump(const struct scheduler *ops)
         }
     }
 
-    spin_unlock_irqrestore(&prv->lock, flags);
+    spin_unlock(&prv->lock);
 }
 
 static int __init
diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index f2752f27e2..e76d2ed543 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -3649,14 +3649,13 @@ csched2_dump(const struct scheduler *ops)
 {
     struct list_head *iter_sdom;
     struct csched2_private *prv = csched2_priv(ops);
-    unsigned long flags;
     unsigned int i, j, loop;
 
     /*
      * We need the private scheduler lock as we access global
      * scheduler data and (below) the list of active domains.
      */
-    read_lock_irqsave(&prv->lock, flags);
+    read_lock(&prv->lock);
 
     printk("Active queues: %d\n"
            "\tdefault-weight = %d\n",
@@ -3749,7 +3748,7 @@ csched2_dump(const struct scheduler *ops)
         spin_unlock(&rqd->lock);
     }
 
-    read_unlock_irqrestore(&prv->lock, flags);
+    read_unlock(&prv->lock);
 }
 
 static void *
diff --git a/xen/common/sched/null.c b/xen/common/sched/null.c
index 8c3101649d..3b31703d7e 100644
--- a/xen/common/sched/null.c
+++ b/xen/common/sched/null.c
@@ -954,9 +954,8 @@ static void null_dump_pcpu(const struct scheduler *ops, int cpu)
     const struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
     const struct null_unit *nvc;
     spinlock_t *lock;
-    unsigned long flags;
 
-    lock = pcpu_schedule_lock_irqsave(cpu, &flags);
+    lock = pcpu_schedule_lock(cpu);
 
     printk("CPU[%02d] sibling={%*pbl}, core={%*pbl}",
            cpu, CPUMASK_PR(per_cpu(cpu_sibling_mask, cpu)),
@@ -974,17 +973,16 @@ static void null_dump_pcpu(const struct scheduler *ops, int cpu)
         printk("\n");
     }
 
-    pcpu_schedule_unlock_irqrestore(lock, flags, cpu);
+    pcpu_schedule_unlock(lock, cpu);
 }
 
 static void null_dump(const struct scheduler *ops)
 {
     struct null_private *prv = null_priv(ops);
     struct list_head *iter;
-    unsigned long flags;
     unsigned int loop;
 
-    spin_lock_irqsave(&prv->lock, flags);
+    spin_lock(&prv->lock);
 
     printk("\tcpus_free = %*pbl\n", CPUMASK_PR(&prv->cpus_free));
 
@@ -1029,7 +1027,7 @@ static void null_dump(const struct scheduler *ops)
     printk("\n");
     spin_unlock(&prv->waitq_lock);
 
-    spin_unlock_irqrestore(&prv->lock, flags);
+    spin_unlock(&prv->lock);
 }
 
 static const struct scheduler sched_null_def = {
diff --git a/xen/common/sched/rt.c b/xen/common/sched/rt.c
index 66585ed50a..16379cb2d2 100644
--- a/xen/common/sched/rt.c
+++ b/xen/common/sched/rt.c
@@ -353,9 +353,8 @@ rt_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct rt_private *prv = rt_priv(ops);
     const struct rt_unit *svc;
-    unsigned long flags;
 
-    spin_lock_irqsave(&prv->lock, flags);
+    spin_lock(&prv->lock);
     printk("CPU[%02d]\n", cpu);
     /* current UNIT (nothing to say if that's the idle unit). */
     svc = rt_unit(curr_on_cpu(cpu));
@@ -363,7 +362,7 @@ rt_dump_pcpu(const struct scheduler *ops, int cpu)
     {
         rt_dump_unit(ops, svc);
     }
-    spin_unlock_irqrestore(&prv->lock, flags);
+    spin_unlock(&prv->lock);
 }
 
 static void
@@ -373,9 +372,8 @@ rt_dump(const struct scheduler *ops)
     struct rt_private *prv = rt_priv(ops);
    const struct rt_unit *svc;
     const struct rt_dom *sdom;
-    unsigned long flags;
 
-    spin_lock_irqsave(&prv->lock, flags);
+    spin_lock(&prv->lock);
 
     if ( list_empty(&prv->sdom) )
         goto out;
@@ -421,7 +419,7 @@ rt_dump(const struct scheduler *ops)
     }
 
  out:
-    spin_unlock_irqrestore(&prv->lock, flags);
+    spin_unlock(&prv->lock);
 }
 
 /*
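
Note: the dump functions changed above are typically exercised via the
'r' ("dump run queues") debug key, e.g. by running "xl debug-keys r" in
dom0; per the commit message, that path invokes them with interrupts
already disabled, which is what makes the plain lock variants safe.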