From patchwork Thu Feb 13 12:54:42 2020
X-Patchwork-Id: 11380303
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Thu, 13 Feb 2020 13:54:42 +0100
Message-Id: <20200213125449.14226-2-jgross@suse.com>
In-Reply-To: <20200213125449.14226-1-jgross@suse.com>
References: <20200213125449.14226-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 1/8] xen: make rangeset_printk() static
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
 Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
 Jan Beulich

rangeset_printk() is only used locally, so it can be made static.

Signed-off-by: Juergen Gross
Acked-by: Jan Beulich
---
 xen/common/rangeset.c      | 3 +--
 xen/include/xen/rangeset.h | 2 --
 2 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/xen/common/rangeset.c b/xen/common/rangeset.c
index f34cafdc7e..4ebba30ba3 100644
--- a/xen/common/rangeset.c
+++ b/xen/common/rangeset.c
@@ -541,8 +541,7 @@ static void print_limit(struct rangeset *r, unsigned long s)
     printk((r->flags & RANGESETF_prettyprint_hex) ? "%lx" : "%lu", s);
 }
 
-void rangeset_printk(
-    struct rangeset *r)
+static void rangeset_printk(struct rangeset *r)
 {
     int nr_printed = 0;
     struct range *x;
diff --git a/xen/include/xen/rangeset.h b/xen/include/xen/rangeset.h
index 0c05c2fd4e..5f62a97971 100644
--- a/xen/include/xen/rangeset.h
+++ b/xen/include/xen/rangeset.h
@@ -95,8 +95,6 @@ bool_t __must_check rangeset_contains_singleton(
 void rangeset_swap(struct rangeset *a, struct rangeset *b);
 
 /* Rangeset pretty printing. */
-void rangeset_printk(
-    struct rangeset *r);
 void rangeset_domain_printk(
     struct domain *d);
"%lx" : "%lu", s); } -void rangeset_printk( - struct rangeset *r) +static void rangeset_printk(struct rangeset *r) { int nr_printed = 0; struct range *x; diff --git a/xen/include/xen/rangeset.h b/xen/include/xen/rangeset.h index 0c05c2fd4e..5f62a97971 100644 --- a/xen/include/xen/rangeset.h +++ b/xen/include/xen/rangeset.h @@ -95,8 +95,6 @@ bool_t __must_check rangeset_contains_singleton( void rangeset_swap(struct rangeset *a, struct rangeset *b); /* Rangeset pretty printing. */ -void rangeset_printk( - struct rangeset *r); void rangeset_domain_printk( struct domain *d); From patchwork Thu Feb 13 12:54:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?b?SsO8cmdlbiBHcm/Dnw==?= X-Patchwork-Id: 11380301 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B8CD31580 for ; Thu, 13 Feb 2020 12:55:43 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 9FD082168B for ; Thu, 13 Feb 2020 12:55:43 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9FD082168B Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=suse.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1j2E1E-0005zU-SB; Thu, 13 Feb 2020 12:55:00 +0000 Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1j2E1D-0005z4-Hi for xen-devel@lists.xenproject.org; Thu, 13 Feb 2020 12:54:59 +0000 X-Inumbo-ID: 073bcc06-4e60-11ea-b898-12813bfff9fa Received: from mx2.suse.de (unknown [195.135.220.15]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS id 073bcc06-4e60-11ea-b898-12813bfff9fa; Thu, 13 Feb 2020 12:54:53 +0000 (UTC) X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx2.suse.de (Postfix) with ESMTP id D6DC6ABF6; Thu, 13 Feb 2020 12:54:52 +0000 (UTC) From: Juergen Gross To: xen-devel@lists.xenproject.org Date: Thu, 13 Feb 2020 13:54:43 +0100 Message-Id: <20200213125449.14226-3-jgross@suse.com> X-Mailer: git-send-email 2.16.4 In-Reply-To: <20200213125449.14226-1-jgross@suse.com> References: <20200213125449.14226-1-jgross@suse.com> Subject: [Xen-devel] [PATCH 2/8] xen: add using domlist_read_lock in keyhandlers X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Juergen Gross , Kevin Tian , Stefano Stabellini , Julien Grall , Jun Nakajima , Wei Liu , Konrad Rzeszutek Wilk , George Dunlap , Andrew Cooper , Ian Jackson , Jan Beulich , =?utf-8?q?Roger_Pau_Monn=C3=A9?= MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" Using for_each_domain() with out holding the domlist_read_lock is fragile, so add the lock in the keyhandlers it is missing. 
Signed-off-by: Juergen Gross
Acked-by: Jan Beulich
Acked-by: George Dunlap
Reviewed-by: Kevin Tian
---
 xen/arch/x86/mm/p2m-ept.c       | 4 ++++
 xen/arch/x86/time.c             | 5 +++++
 xen/common/grant_table.c        | 7 +++++++
 xen/drivers/passthrough/iommu.c | 5 +++++
 4 files changed, 21 insertions(+)

diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index d4defa01c2..eb0f0edfef 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -1297,6 +1297,8 @@ static void ept_dump_p2m_table(unsigned char key)
     struct p2m_domain *p2m;
     struct ept_data *ept;
 
+    rcu_read_lock(&domlist_read_lock);
+
     for_each_domain(d)
     {
         if ( !hap_enabled(d) )
@@ -1347,6 +1349,8 @@ static void ept_dump_p2m_table(unsigned char key)
             unmap_domain_page(table);
         }
     }
+
+    rcu_read_unlock(&domlist_read_lock);
 }
 
 void setup_ept_dump(void)
diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
index cf3e51fb5e..509679235d 100644
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -2401,6 +2401,9 @@ static void dump_softtsc(unsigned char key)
     }
     else
         printk("TSC not marked as either constant or reliable, "
               "warp=%lu (count=%lu)\n", tsc_max_warp, tsc_check_count);
+
+    rcu_read_lock(&domlist_read_lock);
+
     for_each_domain ( d )
     {
         if ( is_hardware_domain(d) && d->arch.tsc_mode == TSC_MODE_DEFAULT )
@@ -2417,6 +2420,8 @@ static void dump_softtsc(unsigned char key)
             domcnt++;
     }
 
+    rcu_read_unlock(&domlist_read_lock);
+
     if ( !domcnt )
         printk("No domains have emulated TSC\n");
 }
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 2ecf38dfbe..c793927cd6 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -4104,9 +4104,16 @@ static void gnttab_usage_print(struct domain *rd)
 static void gnttab_usage_print_all(unsigned char key)
 {
     struct domain *d;
+
     printk("%s [ key '%c' pressed\n", __func__, key);
+
+    rcu_read_lock(&domlist_read_lock);
+
     for_each_domain ( d )
         gnttab_usage_print(d);
+
+    rcu_read_unlock(&domlist_read_lock);
+
     printk("%s ] done\n", __func__);
 }
 
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 9d421e06de..cab7a068aa 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -591,6 +591,9 @@ static void iommu_dump_p2m_table(unsigned char key)
     }
 
     ops = iommu_get_ops();
+
+    rcu_read_lock(&domlist_read_lock);
+
     for_each_domain(d)
     {
         if ( is_hardware_domain(d) || !is_iommu_enabled(d) )
@@ -605,6 +608,8 @@ static void iommu_dump_p2m_table(unsigned char key)
         printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
         ops->dump_p2m_table(d);
     }
+
+    rcu_read_unlock(&domlist_read_lock);
 }
 
 /*
From patchwork Thu Feb 13 12:54:44 2020
X-Patchwork-Id: 11380307
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Thu, 13 Feb 2020 13:54:44 +0100
Message-Id: <20200213125449.14226-4-jgross@suse.com>
In-Reply-To: <20200213125449.14226-1-jgross@suse.com>
References: <20200213125449.14226-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 3/8] xen/sched: don't use irqsave locks in dumping functions
Cc: Juergen Gross, George Dunlap, Meng Xu, Dario Faggioli

All dumping functions invoked by the "runq" keyhandler are called with
interrupts disabled, so there is no need to use the irqsave variants of
any locks in those functions.
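Sketched on a hypothetical dump function (prv->lock stands in for any
of the scheduler-private locks converted below), the change is:

    /*
     * Keyhandlers run with interrupts already disabled, so there is
     * no IRQ state worth saving and restoring around the lock.
     */
    static void example_dump(struct example_private *prv)
    {
        spin_lock(&prv->lock);   /* was: spin_lock_irqsave(&prv->lock, flags) */

        /* ... dump the state protected by prv->lock ... */

        spin_unlock(&prv->lock); /* was: spin_unlock_irqrestore(&prv->lock, flags) */
    }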
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
 xen/common/sched/credit.c  | 10 ++++------
 xen/common/sched/credit2.c |  5 ++---
 xen/common/sched/null.c    | 10 ++++------
 xen/common/sched/rt.c      | 10 ++++------
 4 files changed, 14 insertions(+), 21 deletions(-)

diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c
index 05946eea6e..dee87e7fe2 100644
--- a/xen/common/sched/credit.c
+++ b/xen/common/sched/credit.c
@@ -2048,7 +2048,6 @@ csched_dump_pcpu(const struct scheduler *ops, int cpu)
     const struct csched_pcpu *spc;
     const struct csched_unit *svc;
     spinlock_t *lock;
-    unsigned long flags;
     int loop;
 
     /*
@@ -2058,7 +2057,7 @@ csched_dump_pcpu(const struct scheduler *ops, int cpu)
      * - we scan through the runqueue, so we need the proper runqueue
      *   lock (the one of the runqueue of this cpu).
      */
-    spin_lock_irqsave(&prv->lock, flags);
+    spin_lock(&prv->lock);
     lock = pcpu_schedule_lock(cpu);
 
     spc = CSCHED_PCPU(cpu);
@@ -2089,7 +2088,7 @@ csched_dump_pcpu(const struct scheduler *ops, int cpu)
     }
 
     pcpu_schedule_unlock(lock, cpu);
-    spin_unlock_irqrestore(&prv->lock, flags);
+    spin_unlock(&prv->lock);
 }
 
 static void
@@ -2098,9 +2097,8 @@ csched_dump(const struct scheduler *ops)
     struct list_head *iter_sdom, *iter_svc;
     struct csched_private *prv = CSCHED_PRIV(ops);
     int loop;
-    unsigned long flags;
 
-    spin_lock_irqsave(&prv->lock, flags);
+    spin_lock(&prv->lock);
 
     printk("info:\n"
            "\tncpus = %u\n"
@@ -2153,7 +2151,7 @@ csched_dump(const struct scheduler *ops)
         }
     }
 
-    spin_unlock_irqrestore(&prv->lock, flags);
+    spin_unlock(&prv->lock);
 }
 
 static int __init
diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index f2752f27e2..e76d2ed543 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -3649,14 +3649,13 @@ csched2_dump(const struct scheduler *ops)
 {
     struct list_head *iter_sdom;
     struct csched2_private *prv = csched2_priv(ops);
-    unsigned long flags;
     unsigned int i, j, loop;
 
     /*
      * We need the private scheduler lock as we access global
      * scheduler data and (below) the list of active domains.
      */
-    read_lock_irqsave(&prv->lock, flags);
+    read_lock(&prv->lock);
 
     printk("Active queues: %d\n"
            "\tdefault-weight = %d\n",
@@ -3749,7 +3748,7 @@ csched2_dump(const struct scheduler *ops)
         spin_unlock(&rqd->lock);
     }
 
-    read_unlock_irqrestore(&prv->lock, flags);
+    read_unlock(&prv->lock);
 }
 
 static void *
diff --git a/xen/common/sched/null.c b/xen/common/sched/null.c
index 8c3101649d..3b31703d7e 100644
--- a/xen/common/sched/null.c
+++ b/xen/common/sched/null.c
@@ -954,9 +954,8 @@ static void null_dump_pcpu(const struct scheduler *ops, int cpu)
     const struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
     const struct null_unit *nvc;
     spinlock_t *lock;
-    unsigned long flags;
 
-    lock = pcpu_schedule_lock_irqsave(cpu, &flags);
+    lock = pcpu_schedule_lock(cpu);
 
     printk("CPU[%02d] sibling={%*pbl}, core={%*pbl}",
            cpu, CPUMASK_PR(per_cpu(cpu_sibling_mask, cpu)),
@@ -974,17 +973,16 @@ static void null_dump_pcpu(const struct scheduler *ops, int cpu)
         printk("\n");
     }
 
-    pcpu_schedule_unlock_irqrestore(lock, flags, cpu);
+    pcpu_schedule_unlock(lock, cpu);
 }
 
 static void null_dump(const struct scheduler *ops)
 {
     struct null_private *prv = null_priv(ops);
     struct list_head *iter;
-    unsigned long flags;
     unsigned int loop;
 
-    spin_lock_irqsave(&prv->lock, flags);
+    spin_lock(&prv->lock);
 
     printk("\tcpus_free = %*pbl\n", CPUMASK_PR(&prv->cpus_free));
 
@@ -1029,7 +1027,7 @@ static void null_dump(const struct scheduler *ops)
     printk("\n");
     spin_unlock(&prv->waitq_lock);
 
-    spin_unlock_irqrestore(&prv->lock, flags);
+    spin_unlock(&prv->lock);
 }
 
 static const struct scheduler sched_null_def = {
diff --git a/xen/common/sched/rt.c b/xen/common/sched/rt.c
index 66585ed50a..16379cb2d2 100644
--- a/xen/common/sched/rt.c
+++ b/xen/common/sched/rt.c
@@ -353,9 +353,8 @@ rt_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct rt_private *prv = rt_priv(ops);
     const struct rt_unit *svc;
-    unsigned long flags;
 
-    spin_lock_irqsave(&prv->lock, flags);
+    spin_lock(&prv->lock);
     printk("CPU[%02d]\n", cpu);
     /* current UNIT (nothing to say if that's the idle unit). */
     svc = rt_unit(curr_on_cpu(cpu));
@@ -363,7 +362,7 @@
     {
         rt_dump_unit(ops, svc);
     }
-    spin_unlock_irqrestore(&prv->lock, flags);
+    spin_unlock(&prv->lock);
 }
 
 static void
@@ -373,9 +372,8 @@ rt_dump(const struct scheduler *ops)
 {
     struct list_head *iter_sdom, *iter_svc;
     struct rt_private *prv = rt_priv(ops);
     const struct rt_unit *svc;
     const struct rt_dom *sdom;
-    unsigned long flags;
 
-    spin_lock_irqsave(&prv->lock, flags);
+    spin_lock(&prv->lock);
 
     if ( list_empty(&prv->sdom) )
         goto out;
@@ -421,7 +419,7 @@ rt_dump(const struct scheduler *ops)
     }
 
  out:
-    spin_unlock_irqrestore(&prv->lock, flags);
+    spin_unlock(&prv->lock);
 }
 
 /*

From patchwork Thu Feb 13 12:54:45 2020
X-Patchwork-Id: 11380313
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Thu, 13 Feb 2020 13:54:45 +0100
Message-Id: <20200213125449.14226-5-jgross@suse.com>
In-Reply-To: <20200213125449.14226-1-jgross@suse.com>
References: <20200213125449.14226-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 4/8] xen: add locks with timeouts for keyhandlers
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
 Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
 Jan Beulich, Roger Pau Monné

Most keyhandlers are used to dump hypervisor data to the console,
mostly for debugging purposes. In those cases it can happen that some
data structures are locked, blocking the handler from accessing the
data.

In order to still get some information, don't use plain locking
functions in the keyhandlers, but a trylock variant with a timeout
value. This allows waiting for some time and giving up in case the
lock could not be obtained.

Add the main infrastructure for this feature, including a new runtime
parameter for specifying the timeout value in milliseconds. Use the
new locking scheme in the handlers defined in keyhandler.c.
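A converted keyhandler then follows this shape (a sketch with a
made-up lock and handler name; the real conversions are in the hunks
below):

    static DEFINE_SPINLOCK(example_lock);

    static void example_dump(unsigned char key)
    {
        /* Retries the trylock until the configured timeout expires. */
        if ( !keyhandler_spin_lock(&example_lock, "could not get example lock") )
            return;     /* give up instead of hanging the console */

        /* ... dump the data protected by example_lock ... */

        spin_unlock(&example_lock);  /* unlocking is unchanged */
    }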
Signed-off-by: Juergen Gross
---
 docs/misc/xen-command-line.pandoc |  9 +++++++++
 xen/arch/x86/domain.c             |  9 +++++++--
 xen/common/keyhandler.c           | 29 ++++++++++++++++++++++++++++-
 xen/common/rangeset.c             |  7 +++++--
 xen/include/xen/keyhandler.h      | 26 ++++++++++++++++++++++++++
 5 files changed, 75 insertions(+), 5 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 5051583a5d..ee3d031771 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1384,6 +1384,15 @@ Force the use of `[<seg>:]<bus>:<dev>.<func>` as device ID of IO-APIC
 `<ioapic>` instead of the one specified by the IVHD sub-tables of the IVRS
 ACPI table.
 
+### keyhandler-lock-timeout
+> `= <integer>`
+
+> Default: `1`
+
+> Can be modified at runtime
+
+Specify the lock timeout of keyhandlers in milliseconds.
+
 ### lapic (x86)
 > `= <boolean>`
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index f53ae5ff86..1d09911dc0 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include <xen/keyhandler.h>
 #include
 #include
 #include
@@ -222,7 +223,8 @@ void dump_pageframe_info(struct domain *d)
     {
         printk("    DomPage list too long to display\n");
     }
-    else
+    else if ( keyhandler_spin_lock(&d->page_alloc_lock,
+                                   "could not read page_list") )
     {
         unsigned long total[MASK_EXTR(PGT_type_mask, PGT_type_mask) + 1] = {};
 
@@ -251,7 +253,10 @@ void dump_pageframe_info(struct domain *d)
     if ( is_hvm_domain(d) )
         p2m_pod_dump_data(d);
 
-    spin_lock(&d->page_alloc_lock);
+    if ( !keyhandler_spin_lock(&d->page_alloc_lock,
+                               "could not read page_list") )
+        return;
+
     page_list_for_each ( page, &d->xenpage_list )
     {
         printk("    XenPage %p: caf=%08lx, taf=%" PRtype_info "\n",
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index f50490d0f3..c393d83b70 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -14,8 +14,10 @@
 #include
 #include
 #include
+#include <xen/param.h>
 #include
 #include
+#include <xen/rwlock.h>
 #include
 #include
 #include
@@ -71,6 +73,30 @@ static struct keyhandler {
 #undef KEYHANDLER
 };
 
+static unsigned int lock_timeout = 1;
+integer_runtime_param("keyhandler-lock-timeout", lock_timeout);
+
+s_time_t keyhandler_lock_timeout(void)
+{
+    return NOW() + MILLISECS(lock_timeout);
+}
+
+bool keyhandler_spin_lock(spinlock_t *lock, const char *msg)
+{
+    keyhandler_lock_body(bool, spin_trylock(lock), "%s\n", msg);
+}
+
+bool keyhandler_spin_lock_irqsave(spinlock_t *lock, unsigned long *flags,
+                                  const char *msg)
+{
+    keyhandler_lock_body(bool, spin_trylock_irqsave(lock, *flags), "%s\n", msg);
+}
+
+bool keyhandler_read_lock(rwlock_t *lock, const char *msg)
+{
+    keyhandler_lock_body(bool, read_trylock(lock), "%s\n", msg);
+}
+
 static void keypress_action(void *unused)
 {
     handle_keypress(keypress_key, NULL);
@@ -378,7 +404,8 @@ static void read_clocks(unsigned char key)
     static u32 count = 0;
     static DEFINE_SPINLOCK(lock);
 
-    spin_lock(&lock);
+    if ( !keyhandler_spin_lock(&lock, "could not read clock stats") )
+        return;
 
     smp_call_function(read_clocks_slave, NULL, 0);
diff --git a/xen/common/rangeset.c b/xen/common/rangeset.c
index 4ebba30ba3..97104abb45 100644
--- a/xen/common/rangeset.c
+++ b/xen/common/rangeset.c
@@ -9,6 +9,7 @@
 #include
 #include
+#include <xen/keyhandler.h>
 #include
 #include
 
@@ -546,7 +547,8 @@ static void rangeset_printk(struct rangeset *r)
     int nr_printed = 0;
     struct range *x;
 
-    read_lock(&r->lock);
+    if ( !keyhandler_read_lock(&r->lock, "could not read rangeset") )
+        return;
 
     printk("%-10s {", r->name);
 
@@ -575,7 +577,8 @@ void rangeset_domain_printk(
 
     printk("Rangesets belonging to domain %u:\n", d->domain_id);
 
-    spin_lock(&d->rangesets_lock);
+    if ( !keyhandler_spin_lock(&d->rangesets_lock, "could not get rangesets") )
+        return;
 
     if ( list_empty(&d->rangesets) )
         printk("    None\n");
diff --git a/xen/include/xen/keyhandler.h b/xen/include/xen/keyhandler.h
index 5131e86cbc..cc8e0b18f5 100644
--- a/xen/include/xen/keyhandler.h
+++ b/xen/include/xen/keyhandler.h
@@ -10,6 +10,9 @@
 #ifndef __XEN_KEYHANDLER_H__
 #define __XEN_KEYHANDLER_H__
 
+#include <xen/rwlock.h>
+#include <xen/spinlock.h>
+#include <xen/time.h>
 #include <xen/types.h>
 
 /*
@@ -48,4 +51,27 @@ void register_irq_keyhandler(unsigned char key,
 /* Inject a keypress into the key-handling subsystem. */
 extern void handle_keypress(unsigned char key, struct cpu_user_regs *regs);
 
+/* Locking primitives for inside keyhandlers (like trylock). */
+bool keyhandler_spin_lock(spinlock_t *lock, const char *msg);
+bool keyhandler_spin_lock_irqsave(spinlock_t *lock, unsigned long *flags,
+                                  const char *msg);
+bool keyhandler_read_lock(rwlock_t *lock, const char *msg);
+
+/* Primitives for custom keyhandler lock functions. */
+s_time_t keyhandler_lock_timeout(void);
+#define keyhandler_lock_body(type, lockfunc, arg...)    \
+    s_time_t end = keyhandler_lock_timeout();           \
+    type ret;                                           \
+                                                        \
+    do {                                                \
+        ret = lockfunc;                                 \
+        if ( ret )                                      \
+            return ret;                                 \
+        cpu_relax();                                    \
+    } while ( NOW() < end );                            \
+                                                        \
+    printk("-->lock conflict: " arg);                   \
+                                                        \
+    return ret
+
 #endif /* __XEN_KEYHANDLER_H__ */
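For reference, expanding keyhandler_lock_body() by hand shows what the
keyhandler_spin_lock() wrapper above compiles to (an illustrative
expansion, not additional code in the patch):

    bool keyhandler_spin_lock(spinlock_t *lock, const char *msg)
    {
        s_time_t end = keyhandler_lock_timeout();  /* NOW() + configured ms */
        bool ret;

        do {
            ret = spin_trylock(lock);
            if ( ret )
                return ret;     /* lock acquired within the timeout */
            cpu_relax();        /* be polite to the lock holder, then retry */
        } while ( NOW() < end );

        printk("-->lock conflict: %s\n", msg);  /* timed out: report it */

        return ret;             /* false: the caller skips its dump */
    }

As the parameter is tagged as runtime-modifiable, the timeout should
also be adjustable on a running system (presumably via
`xl set-parameters keyhandler-lock-timeout=<ms>`) rather than only on
the boot command line.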
From patchwork Thu Feb 13 12:54:46 2020
X-Patchwork-Id: 11380317
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Thu, 13 Feb 2020 13:54:46 +0100
Message-Id: <20200213125449.14226-6-jgross@suse.com>
In-Reply-To: <20200213125449.14226-1-jgross@suse.com>
References: <20200213125449.14226-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 5/8] xen/sched: use keyhandler locks when dumping data to console
Cc: Juergen Gross, George Dunlap, Meng Xu, Dario Faggioli

Instead of using the normal locks, use the keyhandler-provided trylocks
with timeouts. This requires a special primitive for the scheduler
lock.
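The private lock and the per-CPU schedule lock nest, so a dump path now
bails out cleanly at either level; condensed from the
csched_dump_pcpu() conversion below (dump body elided):

    if ( !keyhandler_spin_lock(&prv->lock, "could not get credit data") )
        return;

    lock = keyhandler_pcpu_lock(cpu);
    if ( !lock )
    {
        spin_unlock(&prv->lock);   /* drop the outer lock before giving up */
        return;
    }

    /* ... dump per-CPU state ... */

    pcpu_schedule_unlock(lock, cpu);
    spin_unlock(&prv->lock);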
Signed-off-by: Juergen Gross
---
 xen/common/sched/core.c    |  7 +++++++
 xen/common/sched/cpupool.c |  4 +++-
 xen/common/sched/credit.c  | 25 ++++++++++++++++++-------
 xen/common/sched/credit2.c | 17 +++++++++++------
 xen/common/sched/null.c    | 42 +++++++++++++++++++++++++-----------------
 xen/common/sched/private.h |  1 +
 xen/common/sched/rt.c      |  7 +++++--
 7 files changed, 70 insertions(+), 33 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index d4e8944e0e..7b8b0fe80e 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include <xen/keyhandler.h>
 #include
 #include
 #include
@@ -3302,6 +3303,12 @@ void __init sched_setup_dom0_vcpus(struct domain *d)
 }
 #endif
 
+spinlock_t *keyhandler_pcpu_lock(unsigned int cpu)
+{
+    keyhandler_lock_body(spinlock_t *, pcpu_schedule_trylock(cpu),
+                         "could not get pcpu lock, cpu=%u\n", cpu);
+}
+
 #ifdef CONFIG_COMPAT
 #include "compat.c"
 #endif
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 476916c6ea..5c181e9772 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -893,7 +893,9 @@ void dump_runq(unsigned char key)
     s_time_t now = NOW();
     struct cpupool **c;
 
-    spin_lock(&cpupool_lock);
+    if ( !keyhandler_spin_lock(&cpupool_lock, "could not get cpupools") )
+        return;
+
     local_irq_save(flags);
 
     printk("sched_smt_power_savings: %s\n",
diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c
index dee87e7fe2..165ff26bb8 100644
--- a/xen/common/sched/credit.c
+++ b/xen/common/sched/credit.c
@@ -2057,8 +2057,15 @@ csched_dump_pcpu(const struct scheduler *ops, int cpu)
      * - we scan through the runqueue, so we need the proper runqueue
      *   lock (the one of the runqueue of this cpu).
      */
-    spin_lock(&prv->lock);
-    lock = pcpu_schedule_lock(cpu);
+    if ( !keyhandler_spin_lock(&prv->lock, "could not get credit data") )
+        return;
+
+    lock = keyhandler_pcpu_lock(cpu);
+    if ( !lock )
+    {
+        spin_unlock(&prv->lock);
+        return;
+    }
 
     spc = CSCHED_PCPU(cpu);
     runq = &spc->runq;
@@ -2098,7 +2105,8 @@ csched_dump(const struct scheduler *ops)
     struct csched_private *prv = CSCHED_PRIV(ops);
     int loop;
 
-    spin_lock(&prv->lock);
+    if ( !keyhandler_spin_lock(&prv->lock, "could not get credit data") )
+        return;
 
     printk("info:\n"
            "\tncpus = %u\n"
@@ -2142,12 +2150,15 @@ csched_dump(const struct scheduler *ops)
             spinlock_t *lock;
 
             svc = list_entry(iter_svc, struct csched_unit, active_unit_elem);
-            lock = unit_schedule_lock(svc->unit);
+            lock = keyhandler_pcpu_lock(svc->unit->res->master_cpu);
 
-            printk("\t%3d: ", ++loop);
-            csched_dump_unit(svc);
+            if ( lock )
+            {
+                printk("\t%3d: ", ++loop);
+                csched_dump_unit(svc);
 
-            unit_schedule_unlock(lock, svc->unit);
+                pcpu_schedule_unlock(lock, svc->unit->res->master_cpu);
+            }
         }
     }
diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index e76d2ed543..28b03fe744 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -3655,7 +3655,8 @@ csched2_dump(const struct scheduler *ops)
      * We need the private scheduler lock as we access global
      * scheduler data and (below) the list of active domains.
      */
-    read_lock(&prv->lock);
+    if ( !keyhandler_read_lock(&prv->lock, "could not get credit2 data") )
+        return;
 
     printk("Active queues: %d\n"
            "\tdefault-weight = %d\n",
@@ -3711,12 +3712,15 @@ csched2_dump(const struct scheduler *ops)
             struct csched2_unit * const svc = csched2_unit(unit);
             spinlock_t *lock;
 
-            lock = unit_schedule_lock(unit);
+            lock = keyhandler_pcpu_lock(unit->res->master_cpu);
 
-            printk("\t%3d: ", ++loop);
-            csched2_dump_unit(prv, svc);
+            if ( lock )
+            {
+                printk("\t%3d: ", ++loop);
+                csched2_dump_unit(prv, svc);
 
-            unit_schedule_unlock(lock, unit);
+                pcpu_schedule_unlock(lock, unit->res->master_cpu);
+            }
         }
     }
 
@@ -3727,7 +3731,8 @@ csched2_dump(const struct scheduler *ops)
         int loop = 0;
 
         /* We need the lock to scan the runqueue. */
-        spin_lock(&rqd->lock);
+        if ( !keyhandler_spin_lock(&rqd->lock, "could not get runq") )
+            continue;
 
         printk("Runqueue %d:\n", i);
diff --git a/xen/common/sched/null.c b/xen/common/sched/null.c
index 3b31703d7e..fe59ce17fe 100644
--- a/xen/common/sched/null.c
+++ b/xen/common/sched/null.c
@@ -28,6 +28,7 @@
  * if the scheduler is used inside a cpupool.
  */
 
+#include <xen/keyhandler.h>
 #include
 #include
 #include
@@ -982,7 +983,8 @@ static void null_dump(const struct scheduler *ops)
     struct list_head *iter;
     unsigned int loop;
 
-    spin_lock(&prv->lock);
+    if ( !keyhandler_spin_lock(&prv->lock, "could not get null data") )
+        return;
 
     printk("\tcpus_free = %*pbl\n", CPUMASK_PR(&prv->cpus_free));
 
@@ -1001,31 +1003,37 @@ static void null_dump(const struct scheduler *ops)
             struct null_unit * const nvc = null_unit(unit);
             spinlock_t *lock;
 
-            lock = unit_schedule_lock(unit);
+            lock = keyhandler_pcpu_lock(unit->res->master_cpu);
 
-            printk("\t%3d: ", ++loop);
-            dump_unit(prv, nvc);
-            printk("\n");
+            if ( lock )
+            {
+                printk("\t%3d: ", ++loop);
+                dump_unit(prv, nvc);
+                printk("\n");
 
-            unit_schedule_unlock(lock, unit);
+                pcpu_schedule_unlock(lock, unit->res->master_cpu);
+            }
         }
     }
 
     printk("Waitqueue: ");
     loop = 0;
-    spin_lock(&prv->waitq_lock);
-    list_for_each( iter, &prv->waitq )
+    if ( keyhandler_spin_lock(&prv->waitq_lock, "could not get waitq") )
     {
-        struct null_unit *nvc = list_entry(iter, struct null_unit, waitq_elem);
-
-        if ( loop++ != 0 )
-            printk(", ");
-        if ( loop % 24 == 0 )
-            printk("\n\t");
-        printk("%pdv%d", nvc->unit->domain, nvc->unit->unit_id);
+        list_for_each( iter, &prv->waitq )
+        {
+            struct null_unit *nvc = list_entry(iter, struct null_unit,
+                                               waitq_elem);
+
+            if ( loop++ != 0 )
+                printk(", ");
+            if ( loop % 24 == 0 )
+                printk("\n\t");
+            printk("%pdv%d", nvc->unit->domain, nvc->unit->unit_id);
+        }
+        printk("\n");
+        spin_unlock(&prv->waitq_lock);
     }
-    printk("\n");
-    spin_unlock(&prv->waitq_lock);
 
     spin_unlock(&prv->lock);
 }
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index 2a94179baa..6723f74d28 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -631,5 +631,6 @@ struct cpupool *cpupool_get_by_id(int poolid);
 void cpupool_put(struct cpupool *pool);
 int cpupool_add_domain(struct domain *d, int poolid);
 void cpupool_rm_domain(struct domain *d);
+spinlock_t *keyhandler_pcpu_lock(unsigned int cpu);
 
 #endif /* __XEN_SCHED_IF_H__ */
diff --git a/xen/common/sched/rt.c b/xen/common/sched/rt.c
index 16379cb2d2..d4b17e0f8b 100644
--- a/xen/common/sched/rt.c
+++ b/xen/common/sched/rt.c
@@ -354,7 +354,9 @@ rt_dump_pcpu(const struct scheduler *ops, int cpu)
     struct rt_private *prv = rt_priv(ops);
     const struct rt_unit *svc;
 
-    spin_lock(&prv->lock);
+    if ( !keyhandler_spin_lock(&prv->lock, "could not get rt data") )
+        return;
+
     printk("CPU[%02d]\n", cpu);
     /* current UNIT (nothing to say if that's the idle unit). */
     svc = rt_unit(curr_on_cpu(cpu));
@@ -373,7 +375,8 @@ rt_dump(const struct scheduler *ops)
     const struct rt_unit *svc;
     const struct rt_dom *sdom;
 
-    spin_lock(&prv->lock);
+    if ( !keyhandler_spin_lock(&prv->lock, "could not get rt data") )
+        return;
 
     if ( list_empty(&prv->sdom) )
         goto out;

From patchwork Thu Feb 13 12:54:47 2020
X-Patchwork-Id: 11380311
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Thu, 13 Feb 2020 13:54:47 +0100
Message-Id: <20200213125449.14226-7-jgross@suse.com>
In-Reply-To: <20200213125449.14226-1-jgross@suse.com>
References: <20200213125449.14226-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 6/8] xen/common: use keyhandler locks when dumping data to console
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
 Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
 Ross Lagerwall, Jan Beulich

Instead of using the normal locks, use the keyhandler-provided trylocks
with timeouts. This requires adding a percpu read_trylock and a special
primitive for the grant lock.
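Read-side usage of the new percpu trylock mirrors the blocking variant;
a condensed sketch of what keyhandler_grant_read_lock() below boils
down to for a single attempt:

    if ( !percpu_read_trylock(grant_rwlock, &gt->lock) )
        return;     /* writer active, or fairness fallback failed */

    /* ... walk the grant table read-side ... */

    percpu_read_unlock(grant_rwlock, &gt->lock);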
Signed-off-by: Juergen Gross
---
 xen/common/event_channel.c |  3 ++-
 xen/common/grant_table.c   | 32 +++++++++++++++++++++++++++++---
 xen/common/livepatch.c     | 11 +++--------
 xen/common/spinlock.c      | 18 +++++++++++++++---
 xen/common/timer.c         | 15 +++++++++------
 xen/include/xen/rwlock.h   | 37 +++++++++++++++++++++++++++++++++++++
 6 files changed, 95 insertions(+), 21 deletions(-)

diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index e86e2bfab0..a8fd481cb8 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -1387,7 +1387,8 @@ static void domain_dump_evtchn_info(struct domain *d)
            "Polling vCPUs: {%*pbl}\n"
            "    port [p/m/s]\n", d->domain_id, d->max_vcpus, d->poll_mask);
 
-    spin_lock(&d->event_lock);
+    if ( !keyhandler_spin_lock(&d->event_lock, "could not get event lock") )
+        return;
 
     for ( port = 1; port < d->max_evtchns; ++port )
     {
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index c793927cd6..14d01950ab 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -335,6 +335,11 @@ static inline void grant_read_lock(struct grant_table *gt)
     percpu_read_lock(grant_rwlock, &gt->lock);
 }
 
+static inline int grant_read_trylock(struct grant_table *gt)
+{
+    return percpu_read_trylock(grant_rwlock, &gt->lock);
+}
+
 static inline void grant_read_unlock(struct grant_table *gt)
 {
     percpu_read_unlock(grant_rwlock, &gt->lock);
@@ -4040,6 +4045,24 @@ int gnttab_get_status_frame(struct domain *d, unsigned long idx,
     return rc;
 }
 
+static int keyhandler_grant_read_lock(struct domain *d)
+{
+    keyhandler_lock_body(int, grant_read_trylock(d->grant_table),
+                         "could not get grant lock for %pd\n", d);
+}
+
+static inline struct active_grant_entry *
+keyhandler_active_entry_acquire(struct grant_table *t, grant_ref_t e)
+{
+    struct active_grant_entry *act;
+
+    act = &_active_entry(t, e);
+    if ( !keyhandler_spin_lock(&act->lock, "could not acquire active entry") )
+        return NULL;
+
+    return act;
+}
+
 static void gnttab_usage_print(struct domain *rd)
 {
     int first = 1;
@@ -4047,11 +4070,12 @@ static void gnttab_usage_print(struct domain *rd)
     struct grant_table *gt = rd->grant_table;
     unsigned int nr_ents;
 
+    if ( !keyhandler_grant_read_lock(rd) )
+        return;
+
     printk("      -------- active --------       -------- shared --------\n");
     printk("[ref] localdom mfn      pin          localdom gmfn     flags\n");
 
-    grant_read_lock(gt);
-
     printk("grant-table for remote d%d (v%u)\n"
            "  %u frames (%u max), %u maptrack frames (%u max)\n",
            rd->domain_id, gt->gt_version,
@@ -4066,7 +4090,9 @@ static void gnttab_usage_print(struct domain *rd)
         uint16_t status;
         uint64_t frame;
 
-        act = active_entry_acquire(gt, ref);
+        act = keyhandler_active_entry_acquire(gt, ref);
+        if ( !act )
+            continue;
         if ( !act->pin )
         {
             active_entry_release(act);
diff --git a/xen/common/livepatch.c b/xen/common/livepatch.c
index 5e09dc990b..0f0a877704 100644
--- a/xen/common/livepatch.c
+++ b/xen/common/livepatch.c
@@ -2072,11 +2072,8 @@ static void livepatch_printall(unsigned char key)
     if ( !xen_build_id(&binary_id, &len) )
         printk("build-id: %*phN\n", len, binary_id);
 
-    if ( !spin_trylock(&payload_lock) )
-    {
-        printk("Lock held. Try again.\n");
+    if ( !keyhandler_spin_lock(&payload_lock, "could not get payload lock") )
         return;
-    }
 
     list_for_each_entry ( data, &payload_list, list )
     {
@@ -2096,11 +2093,9 @@ static void livepatch_printall(unsigned char key)
                 {
                     spin_unlock(&payload_lock);
                     process_pending_softirqs();
-                    if ( !spin_trylock(&payload_lock) )
-                    {
-                        printk("Couldn't reacquire lock. Try again.\n");
+                    if ( !keyhandler_spin_lock(&payload_lock,
+                                               "could not reacquire payload lock") )
                         return;
-                    }
                 }
             }
             if ( data->id.len )
diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index 344981c54a..3204d24dfa 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -349,17 +349,23 @@ static struct lock_profile_anc lock_profile_ancs[LOCKPROF_TYPE_N];
 static struct lock_profile_qhead lock_profile_glb_q;
 static spinlock_t lock_profile_lock = SPIN_LOCK_UNLOCKED;
 
-static void spinlock_profile_iterate(lock_profile_subfunc *sub, void *par)
+static void spinlock_profile_iterate_locked(lock_profile_subfunc *sub,
+                                            void *par)
 {
     int i;
     struct lock_profile_qhead *hq;
     struct lock_profile *eq;
 
-    spin_lock(&lock_profile_lock);
     for ( i = 0; i < LOCKPROF_TYPE_N; i++ )
         for ( hq = lock_profile_ancs[i].head_q; hq; hq = hq->head_q )
             for ( eq = hq->elem_q; eq; eq = eq->next )
                 sub(eq, i, hq->idx, par);
+}
+
+static void spinlock_profile_iterate(lock_profile_subfunc *sub, void *par)
+{
+    spin_lock(&lock_profile_lock);
+    spinlock_profile_iterate_locked(sub, par);
     spin_unlock(&lock_profile_lock);
 }
 
@@ -389,7 +395,13 @@ void spinlock_profile_printall(unsigned char key)
     diff = now - lock_profile_start;
     printk("Xen lock profile info SHOW  (now = %"PRI_stime" total = "
            "%"PRI_stime")\n", now, diff);
-    spinlock_profile_iterate(spinlock_profile_print_elem, NULL);
+
+    if ( !keyhandler_spin_lock(&lock_profile_lock, "could not get lock") )
+        return;
+
+    spinlock_profile_iterate_locked(spinlock_profile_print_elem, NULL);
+
+    spin_unlock(&lock_profile_lock);
 }
 
 static void spinlock_profile_reset_elem(struct lock_profile *data,
diff --git a/xen/common/timer.c b/xen/common/timer.c
index 1bb265ceea..0a00857e2d 100644
--- a/xen/common/timer.c
+++ b/xen/common/timer.c
@@ -561,12 +561,15 @@ static void dump_timerq(unsigned char key)
         ts = &per_cpu(timers, i);
 
         printk("CPU%02d:\n", i);
-        spin_lock_irqsave(&ts->lock, flags);
-        for ( j = 1; j <= heap_metadata(ts->heap)->size; j++ )
-            dump_timer(ts->heap[j], now);
-        for ( t = ts->list; t != NULL; t = t->list_next )
-            dump_timer(t, now);
-        spin_unlock_irqrestore(&ts->lock, flags);
+        if ( keyhandler_spin_lock_irqsave(&ts->lock, &flags,
+                                          "could not get lock") )
+        {
+            for ( j = 1; j <= heap_metadata(ts->heap)->size; j++ )
+                dump_timer(ts->heap[j], now);
+            for ( t = ts->list; t != NULL; t = t->list_next )
+                dump_timer(t, now);
+            spin_unlock_irqrestore(&ts->lock, flags);
+        }
     }
 }
diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index 3dfea1ac2a..add8577429 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -278,6 +278,41 @@ static inline void _percpu_read_lock(percpu_rwlock_t **per_cpudata,
     }
 }
 
+static inline int _percpu_read_trylock(percpu_rwlock_t **per_cpudata,
+                                       percpu_rwlock_t *percpu_rwlock)
+{
+    /* Validate the correct per_cpudata variable has been provided. */
+    _percpu_rwlock_owner_check(per_cpudata, percpu_rwlock);
+
+    /* We cannot support recursion on the same lock. */
+    ASSERT(this_cpu_ptr(per_cpudata) != percpu_rwlock);
+    /*
+     * Detect using a second percpu_rwlock_t simultaneously and fallback
+     * to standard read_trylock.
+     */
+    if ( unlikely(this_cpu_ptr(per_cpudata) != NULL ) )
+        return read_trylock(&percpu_rwlock->rwlock);
+
+    /* Indicate this cpu is reading. */
+    this_cpu_ptr(per_cpudata) = percpu_rwlock;
+    smp_mb();
+    /* Check if a writer is waiting. */
+    if ( unlikely(percpu_rwlock->writer_activating) )
+    {
+        /* Let the waiting writer know we aren't holding the lock. */
+        this_cpu_ptr(per_cpudata) = NULL;
+        /* Try using the read lock to keep the lock fair. */
+        if ( !read_trylock(&percpu_rwlock->rwlock) )
+            return 0;
+        /* Set the per CPU data again and continue. */
+        this_cpu_ptr(per_cpudata) = percpu_rwlock;
+        /* Drop the read lock because we don't need it anymore. */
+        read_unlock(&percpu_rwlock->rwlock);
+    }
+
+    return 1;
+}
+
 static inline void _percpu_read_unlock(percpu_rwlock_t **per_cpudata,
                                        percpu_rwlock_t *percpu_rwlock)
 {
@@ -318,6 +353,8 @@ static inline void _percpu_write_unlock(percpu_rwlock_t **per_cpudata,
 
 #define percpu_read_lock(percpu, lock) \
     _percpu_read_lock(&get_per_cpu_var(percpu), lock)
+#define percpu_read_trylock(percpu, lock) \
+    _percpu_read_trylock(&get_per_cpu_var(percpu), lock)
 #define percpu_read_unlock(percpu, lock) \
     _percpu_read_unlock(&get_per_cpu_var(percpu), lock)
 #define percpu_write_lock(percpu, lock) \
From patchwork Thu Feb 13 12:54:48 2020
X-Patchwork-Id: 11380315
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Thu, 13 Feb 2020 13:54:48 +0100
Message-Id: <20200213125449.14226-8-jgross@suse.com>
In-Reply-To: <20200213125449.14226-1-jgross@suse.com>
References: <20200213125449.14226-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 7/8] xen/drivers: use keyhandler locks when dumping data to console
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
 Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
 Jan Beulich

Instead of using the normal locks, use the keyhandler-provided trylocks
with timeouts. This requires adding a special primitive for the pcidev
lock.
Signed-off-by: Juergen Gross
---
 xen/drivers/passthrough/amd/iommu_intr.c | 14 ++++++++++----
 xen/drivers/passthrough/pci.c            | 14 +++++++++++---
 xen/drivers/vpci/msi.c                   |  5 ++++-
 3 files changed, 25 insertions(+), 8 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_intr.c b/xen/drivers/passthrough/amd/iommu_intr.c
index e1cc13b873..753aaf3679 100644
--- a/xen/drivers/passthrough/amd/iommu_intr.c
+++ b/xen/drivers/passthrough/amd/iommu_intr.c
@@ -16,6 +16,7 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/keyhandler.h>
 #include
 #include
 
@@ -886,9 +887,12 @@ static int dump_intremap_mapping(const struct amd_iommu *iommu,
     if ( !ivrs_mapping )
         return 0;
 
-    spin_lock_irqsave(&(ivrs_mapping->intremap_lock), flags);
-    dump_intremap_table(iommu, ivrs_mapping->intremap_table, ivrs_mapping);
-    spin_unlock_irqrestore(&(ivrs_mapping->intremap_lock), flags);
+    if ( keyhandler_spin_lock_irqsave(&(ivrs_mapping->intremap_lock), &flags,
+                                      "could not get intremap lock") )
+    {
+        dump_intremap_table(iommu, ivrs_mapping->intremap_table, ivrs_mapping);
+        spin_unlock_irqrestore(&(ivrs_mapping->intremap_lock), flags);
+    }
 
     process_pending_softirqs();
 
@@ -909,7 +913,9 @@ void amd_iommu_dump_intremap_tables(unsigned char key)
 
     printk("--- Dumping Shared IOMMU Interrupt Remapping Table ---\n");
 
-    spin_lock_irqsave(&shared_intremap_lock, flags);
+    if ( !keyhandler_spin_lock_irqsave(&shared_intremap_lock, &flags,
+                                       "could not get lock") )
+        return;
     dump_intremap_table(list_first_entry(&amd_iommu_head, struct amd_iommu,
                                          list),
                         shared_intremap_table, NULL);
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 5660f7e1c2..1fd998af3a 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -1356,12 +1356,20 @@ static int _dump_pci_devices(struct pci_seg *pseg, void *arg)
     return 0;
 }
 
+static bool keyhandler_pcidevs_lock(void)
+{
+    keyhandler_lock_body(bool_t, pcidevs_trylock(),
+                         "could not get pcidevs lock\n");
+}
+
 static void dump_pci_devices(unsigned char ch)
 {
     printk("==== PCI devices ====\n");
-    pcidevs_lock();
-    pci_segments_iterate(_dump_pci_devices, NULL);
-    pcidevs_unlock();
+
+    if ( keyhandler_pcidevs_lock() )
+    {
+        pci_segments_iterate(_dump_pci_devices, NULL);
+        pcidevs_unlock();
+    }
 }
 
 static int __init setup_dump_pcidevs(void)
diff --git a/xen/drivers/vpci/msi.c b/xen/drivers/vpci/msi.c
index 75010762ed..31ea99b62e 100644
--- a/xen/drivers/vpci/msi.c
+++ b/xen/drivers/vpci/msi.c
@@ -16,6 +16,7 @@
  * License along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/keyhandler.h>
 #include
 #include
 #include
@@ -283,7 +284,9 @@ void vpci_dump_msi(void)
         const struct vpci_msi *msi;
         const struct vpci_msix *msix;
 
-        if ( !pdev->vpci || !spin_trylock(&pdev->vpci->lock) )
+        if ( !pdev->vpci ||
+             !keyhandler_spin_lock(&pdev->vpci->lock,
+                                   "could not get vpci lock") )
             continue;
 
         msi = pdev->vpci->msi;

From patchwork Thu Feb 13 12:54:49 2020
X-Patchwork-Id: 11380309
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Date: Thu, 13 Feb 2020 13:54:49 +0100
Message-Id: <20200213125449.14226-9-jgross@suse.com>
In-Reply-To: <20200213125449.14226-1-jgross@suse.com>
References: <20200213125449.14226-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 8/8] xen/x86: use keyhandler locks when dumping data to console
Cc: Juergen Gross, Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Instead of using the normal locks, use the keyhandler-provided trylocks
with timeouts.
Signed-off-by: Juergen Gross
---
 xen/arch/x86/io_apic.c | 53 +++++++++++++++++++++++++++++++++++++-------------
 xen/arch/x86/irq.c     |  5 ++++-
 xen/arch/x86/msi.c     |  4 +++-
 xen/arch/x86/numa.c    | 16 +++++++++------
 4 files changed, 57 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
index e98e08e9c8..4acdc566b9 100644
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -1098,6 +1098,18 @@ static inline void UNEXPECTED_IO_APIC(void)
 {
 }
 
+static bool get_ioapic_lock(unsigned long *flags, bool boot)
+{
+    if ( boot )
+    {
+        spin_lock_irqsave(&ioapic_lock, *flags);
+        return true;
+    }
+
+    return keyhandler_spin_lock_irqsave(&ioapic_lock, flags,
+                                        "could not get ioapic lock");
+}
+
 static void /*__init*/ __print_IO_APIC(bool boot)
 {
     int apic, i;
@@ -1125,13 +1137,16 @@ static void /*__init*/ __print_IO_APIC(bool boot)
         if (!nr_ioapic_entries[apic])
             continue;
 
-        spin_lock_irqsave(&ioapic_lock, flags);
+        if ( !get_ioapic_lock(&flags, boot) )
+            continue;
+
         reg_00.raw = io_apic_read(apic, 0);
         reg_01.raw = io_apic_read(apic, 1);
         if (reg_01.bits.version >= 0x10)
             reg_02.raw = io_apic_read(apic, 2);
         if (reg_01.bits.version >= 0x20)
             reg_03.raw = io_apic_read(apic, 3);
+
         spin_unlock_irqrestore(&ioapic_lock, flags);
 
         printk(KERN_DEBUG "IO APIC #%d......\n", mp_ioapics[apic].mpc_apicid);
@@ -1201,7 +1216,12 @@ static void /*__init*/ __print_IO_APIC(bool boot)
         for (i = 0; i <= reg_01.bits.entries; i++) {
             struct IO_APIC_route_entry entry;
 
-            entry = ioapic_read_entry(apic, i, 0);
+            if ( !get_ioapic_lock(&flags, boot) )
+                continue;
+
+            entry = __ioapic_read_entry(apic, i, 0);
+
+            spin_unlock_irqrestore(&ioapic_lock, flags);
 
             if ( x2apic_enabled && iommu_intremap )
                 printk(KERN_DEBUG " %02x %08x", i, entry.dest.dest32);
@@ -2495,21 +2515,28 @@ void dump_ioapic_irq_info(void)
 
         for ( ; ; )
         {
+            unsigned long flags;
+
             pin = entry->pin;
 
            printk("      Apic 0x%02x, Pin %2d: ", entry->apic, pin);
 
-            rte = ioapic_read_entry(entry->apic, pin, 0);
-
-            printk("vec=%02x delivery=%-5s dest=%c status=%d "
-                   "polarity=%d irr=%d trig=%c mask=%d dest_id:%0*x\n",
-                   rte.vector, delivery_mode_2_str(rte.delivery_mode),
-                   rte.dest_mode ? 'L' : 'P',
-                   rte.delivery_status, rte.polarity, rte.irr,
-                   rte.trigger ? 'L' : 'E', rte.mask,
-                   (x2apic_enabled && iommu_intremap) ? 8 : 2,
-                   (x2apic_enabled && iommu_intremap) ?
-                       rte.dest.dest32 : rte.dest.logical.logical_dest);
+            if ( keyhandler_spin_lock_irqsave(&ioapic_lock, &flags,
+                                              "could not get ioapic lock") )
+            {
+                rte = __ioapic_read_entry(entry->apic, pin, 0);
+                spin_unlock_irqrestore(&ioapic_lock, flags);
+
+                printk("vec=%02x delivery=%-5s dest=%c status=%d "
+                       "polarity=%d irr=%d trig=%c mask=%d dest_id:%0*x\n",
+                       rte.vector, delivery_mode_2_str(rte.delivery_mode),
+                       rte.dest_mode ? 'L' : 'P',
+                       rte.delivery_status, rte.polarity, rte.irr,
+                       rte.trigger ? 'L' : 'E', rte.mask,
+                       (x2apic_enabled && iommu_intremap) ? 8 : 2,
+                       (x2apic_enabled && iommu_intremap) ?
+                           rte.dest.dest32 : rte.dest.logical.logical_dest);
+            }
 
             if ( entry->next == 0 )
                 break;
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index cc2eb8e925..f3d931b121 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2470,7 +2470,9 @@ static void dump_irqs(unsigned char key)
 
         ssid = in_irq() ? NULL : xsm_show_irq_sid(irq);
 
-        spin_lock_irqsave(&desc->lock, flags);
+        if ( !keyhandler_spin_lock_irqsave(&desc->lock, &flags,
+                                           "could not get irq lock") )
+            goto free_ssid;
 
         printk("   IRQ:%4d vec:%02x %-15s status=%03x aff:{%*pbl}/{%*pbl} ",
                irq, desc->arch.vector, desc->handler->typename, desc->status,
@@ -2506,6 +2508,7 @@ static void dump_irqs(unsigned char key)
 
         spin_unlock_irqrestore(&desc->lock, flags);
 
+    free_ssid:
         xfree(ssid);
     }
diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
index c85cf9f85a..d10b856179 100644
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -1470,7 +1470,9 @@ static void dump_msi(unsigned char key)
         if ( !irq_desc_initialized(desc) )
             continue;
 
-        spin_lock_irqsave(&desc->lock, flags);
+        if ( !keyhandler_spin_lock_irqsave(&desc->lock, &flags,
+                                           "could not get irq lock") )
+            continue;
 
         entry = desc->msi_desc;
         if ( !entry )
diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index 6ef15b34d5..d21ed8737f 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -425,18 +425,22 @@ static void dump_numa(unsigned char key)
         for_each_online_node ( i )
             page_num_node[i] = 0;
 
-        spin_lock(&d->page_alloc_lock);
-        page_list_for_each(page, &d->page_list)
+        if ( keyhandler_spin_lock(&d->page_alloc_lock,
+                                  "could not get page_alloc lock") )
         {
-            i = phys_to_nid(page_to_maddr(page));
-            page_num_node[i]++;
+            page_list_for_each(page, &d->page_list)
+            {
+                i = phys_to_nid(page_to_maddr(page));
+                page_num_node[i]++;
+            }
+            spin_unlock(&d->page_alloc_lock);
         }
-        spin_unlock(&d->page_alloc_lock);
 
         for_each_online_node ( i )
            printk("    Node %u: %u\n", i, page_num_node[i]);
 
-        if ( !read_trylock(&d->vnuma_rwlock) )
+        if ( !keyhandler_read_lock(&d->vnuma_rwlock,
+                                   "could not get vnuma lock") )
            continue;
 
         if ( !d->vnuma )