From patchwork Tue Jan 5 13:09:00 2021
Subject: [PATCH v4 01/10] evtchn: use per-channel lock where possible
From: Jan Beulich
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall, Wei Liu, Stefano Stabellini
Date: Tue, 5 Jan 2021 14:09:00 +0100

Neither evtchn_status() nor domain_dump_evtchn_info() nor
flask_get_peer_sid() needs to hold the per-domain lock - they all only
read a single channel's state (one channel at a time, in the dump case).

Signed-off-by: Jan Beulich
---
v4: New.

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -968,15 +968,16 @@ int evtchn_status(evtchn_status_t *statu
     if ( d == NULL )
         return -ESRCH;
 
-    spin_lock(&d->event_lock);
-
     if ( !port_is_valid(d, port) )
     {
-        rc = -EINVAL;
-        goto out;
+        rcu_unlock_domain(d);
+        return -EINVAL;
     }
 
     chn = evtchn_from_port(d, port);
+
+    evtchn_read_lock(chn);
+
     if ( consumer_is_xen(chn) )
     {
         rc = -EACCES;
@@ -1021,7 +1022,7 @@ int evtchn_status(evtchn_status_t *statu
     status->vcpu = chn->notify_vcpu_id;
 
  out:
-    spin_unlock(&d->event_lock);
+    evtchn_read_unlock(chn);
     rcu_unlock_domain(d);
 
     return rc;
@@ -1576,22 +1577,32 @@ void evtchn_move_pirqs(struct vcpu *v)
 static void domain_dump_evtchn_info(struct domain *d)
 {
     unsigned int port;
-    int irq;
 
     printk("Event channel information for domain %d:\n"
            "Polling vCPUs: {%*pbl}\n"
            "    port [p/m/s]\n",
            d->domain_id, d->max_vcpus, d->poll_mask);
 
-    spin_lock(&d->event_lock);
-
     for ( port = 1; port_is_valid(d, port); ++port )
     {
-        const struct evtchn *chn;
+        struct evtchn *chn;
         char *ssid;
 
+        if ( !(port & 0x3f) )
+            process_pending_softirqs();
+
         chn = evtchn_from_port(d, port);
+
+        if ( !evtchn_read_trylock(chn) )
+        {
+            printk("    %4u in flux\n", port);
+            continue;
+        }
+
         if ( chn->state == ECS_FREE )
+        {
+            evtchn_read_unlock(chn);
             continue;
+        }
 
         printk("    %4u [%d/%d/",
                port,
@@ -1601,26 +1612,49 @@ static void domain_dump_evtchn_info(stru
         printk("]: s=%d n=%d x=%d",
                chn->state, chn->notify_vcpu_id, chn->xen_consumer);
 
+        ssid = xsm_show_security_evtchn(d, chn);
+
         switch ( chn->state )
         {
         case ECS_UNBOUND:
             printk(" d=%d", chn->u.unbound.remote_domid);
             break;
+
         case ECS_INTERDOMAIN:
             printk(" d=%d p=%d",
                    chn->u.interdomain.remote_dom->domain_id,
                    chn->u.interdomain.remote_port);
             break;
-        case ECS_PIRQ:
-            irq = domain_pirq_to_irq(d, chn->u.pirq.irq);
-            printk(" p=%d i=%d", chn->u.pirq.irq, irq);
+
+        case ECS_PIRQ: {
+            unsigned int pirq = chn->u.pirq.irq;
+
+            /*
+             * The per-channel locks nest inside the per-domain one, so we
+             * can't acquire the latter without first letting go of the former.
+             */
+            evtchn_read_unlock(chn);
+            chn = NULL;
+
+            if ( spin_trylock(&d->event_lock) )
+            {
+                int irq = domain_pirq_to_irq(d, pirq);
+
+                spin_unlock(&d->event_lock);
+                printk(" p=%u i=%d", pirq, irq);
+            }
+            else
+                printk(" p=%u i=?", pirq);
             break;
+        }
+
         case ECS_VIRQ:
             printk(" v=%d", chn->u.virq);
             break;
         }
 
-        ssid = xsm_show_security_evtchn(d, chn);
+        if ( chn )
+            evtchn_read_unlock(chn);
+
         if (ssid)
         {
             printk(" Z=%s\n", ssid);
             xfree(ssid);
@@ -1628,8 +1662,6 @@ static void domain_dump_evtchn_info(stru
             printk("\n");
         }
     }
-
-    spin_unlock(&d->event_lock);
 }
 
 static void dump_evtchn_info(unsigned char key)

--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -555,12 +555,13 @@ static int flask_get_peer_sid(struct xen
     struct evtchn *chn;
     struct domain_security_struct *dsec;
 
-    spin_lock(&d->event_lock);
-
     if ( !port_is_valid(d, arg->evtchn) )
-        goto out;
+        return -EINVAL;
 
     chn = evtchn_from_port(d, arg->evtchn);
+
+    evtchn_read_lock(chn);
+
     if ( chn->state != ECS_INTERDOMAIN )
         goto out;
@@ -573,7 +574,7 @@ static int flask_get_peer_sid(struct xen
     rv = 0;
 
  out:
-    spin_unlock(&d->event_lock);
+    evtchn_read_unlock(chn);
     return rv;
 }
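
The read-side pattern all three functions now follow can be sketched as
below. The evtchn_read_lock()/evtchn_read_trylock()/evtchn_read_unlock()
helpers are the ones appearing in the hunks above; the wrapper function
itself is illustrative only and not part of the series.

static void example_query_channel(struct domain *d, evtchn_port_t port)
{
    struct evtchn *chn = evtchn_from_port(d, port); /* port assumed valid */

    if ( !evtchn_read_trylock(chn) )
    {
        /* The channel is being updated; a dump path must not block on it. */
        printk("    %4u in flux\n", port);
        return;
    }

    /* A single channel's state can be read under just its own lock. */
    if ( chn->state != ECS_FREE )
        printk("    %4u s=%d n=%d\n", port, chn->state, chn->notify_vcpu_id);

    evtchn_read_unlock(chn);
}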

From patchwork Tue Jan 5 13:09:30 2021
Subject: [PATCH v4 02/10] evtchn: bind-interdomain doesn't need to hold both domains' event locks
From: Jan Beulich
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall, Wei Liu, Stefano Stabellini
Message-ID: <8b21ff13-d6ea-3fa5-8d87-c05157112e4b@suse.com>
Date: Tue, 5 Jan 2021 14:09:30 +0100

The local domain's lock is needed for the port allocation, but for the
remote side the per-channel lock is sufficient. The per-channel locks
then need to be acquired slightly earlier, though.

Signed-off-by: Jan Beulich
---
v4: New.

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -355,18 +355,7 @@ static long evtchn_bind_interdomain(evtc
     if ( (rd = rcu_lock_domain_by_id(rdom)) == NULL )
         return -ESRCH;
 
-    /* Avoid deadlock by first acquiring lock of domain with smaller id. */
-    if ( ld < rd )
-    {
-        spin_lock(&ld->event_lock);
-        spin_lock(&rd->event_lock);
-    }
-    else
-    {
-        if ( ld != rd )
-            spin_lock(&rd->event_lock);
-        spin_lock(&ld->event_lock);
-    }
+    spin_lock(&ld->event_lock);
 
     if ( (lport = get_free_port(ld)) < 0 )
         ERROR_EXIT(lport);
@@ -375,15 +364,19 @@ static long evtchn_bind_interdomain(evtc
     if ( !port_is_valid(rd, rport) )
         ERROR_EXIT_DOM(-EINVAL, rd);
     rchn = evtchn_from_port(rd, rport);
+
+    double_evtchn_lock(lchn, rchn);
+
     if ( (rchn->state != ECS_UNBOUND) ||
          (rchn->u.unbound.remote_domid != ld->domain_id) )
         ERROR_EXIT_DOM(-EINVAL, rd);
 
     rc = xsm_evtchn_interdomain(XSM_HOOK, ld, lchn, rd, rchn);
     if ( rc )
+    {
+        double_evtchn_unlock(lchn, rchn);
         goto out;
-
-    double_evtchn_lock(lchn, rchn);
+    }
 
     lchn->u.interdomain.remote_dom  = rd;
     lchn->u.interdomain.remote_port = rport;
@@ -407,8 +400,6 @@ static long evtchn_bind_interdomain(evtc
  out:
     check_free_port(ld, lport);
     spin_unlock(&ld->event_lock);
-    if ( ld != rd )
-        spin_unlock(&rd->event_lock);
 
     rcu_unlock_domain(rd);
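
What makes dropping the remote domain's event lock safe is that
double_evtchn_lock() acquires the two per-channel locks in a stable
global order, so two CPUs binding the same pair of channels from
opposite ends can't deadlock. A sketch of the idea follows; the real
helper lives in event_channel.c and may differ in detail (in
particular, the name of the write-side lock helper is an assumption
here):

static void sketch_double_evtchn_lock(struct evtchn *lchn,
                                      struct evtchn *rchn)
{
    ASSERT(lchn != rchn);

    /* Pick one global order - e.g. by address - and always lock that way. */
    if ( lchn < rchn )
    {
        evtchn_write_lock(lchn);
        evtchn_write_lock(rchn);
    }
    else
    {
        evtchn_write_lock(rchn);
        evtchn_write_lock(lchn);
    }
}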

From patchwork Tue Jan 5 13:10:03 2021
Subject: [PATCH v4 03/10] evtchn: convert domain event lock to an r/w one
From: Jan Beulich
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall, Wei Liu, Stefano Stabellini
Message-ID: <4e3c110d-a55b-e91d-919e-2e91edb1eecb@suse.com>
Date: Tue, 5 Jan 2021 14:10:03 +0100

Especially for the use in evtchn_move_pirqs() (called when moving a
vCPU across pCPU-s) and the uses in EOI handling in PCI pass-through
code, serializing perhaps an entire domain isn't helpful when no state
(beyond what is e.g. further protected by the per-channel lock)
changes.

Unfortunately this implies dropping lock profiling for this lock, until
such functionality may get enabled for r/w locks.

While ->notify_vcpu_id is now meant to be consistently updated with the
per-channel lock held, an extension applies to ECS_PIRQ: The field is
also guaranteed not to change with the per-domain event lock held for
writing. Therefore the link_pirq_port() call from evtchn_bind_pirq()
could in principle be moved out of the per-channel locked regions, but
this further code churn didn't seem worth it.

Signed-off-by: Jan Beulich
---
v4: Re-base, in particular over new earlier patches. Acquire both
    per-domain locks for writing in evtchn_close(). Adjust
    spin_barrier() related comments.
v3: Re-base.
v2: Consistently lock for writing in evtchn_reset(). Fix error path in
    pci_clean_dpci_irqs(). Lock for writing in pt_irq_time_out(),
    hvm_dirq_assist(), hvm_dpci_eoi(), and hvm_dpci_isairq_eoi(). Move
    rw_barrier() introduction here. Re-base over changes earlier in the
    series.
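
Before the (large) diff, the effect of the conversion in a nutshell:
paths which only traverse state may now run concurrently with one
another, while mutating paths retain exclusive access. A minimal sketch
of the two disciplines (an illustrative fragment, not code from the
patch):

static void reader(struct domain *d)
{
    read_lock(&d->event_lock);   /* multiple readers may hold this at once */
    /* ... inspect event-channel / pIRQ state without modifying it ... */
    read_unlock(&d->event_lock);
}

static void writer(struct domain *d)
{
    write_lock(&d->event_lock);  /* exclusive: allocation, (un)binding etc. */
    /* ... modify event-channel / pIRQ mappings ... */
    write_unlock(&d->event_lock);
}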

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -903,7 +903,7 @@ int arch_domain_soft_reset(struct domain
     if ( !is_hvm_domain(d) )
         return -EINVAL;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     for ( i = 0; i < d->nr_pirqs ; i++ )
     {
         if ( domain_pirq_to_emuirq(d, i) != IRQ_UNBOUND )
@@ -913,7 +913,7 @@ int arch_domain_soft_reset(struct domain
             break;
         }
     }
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     if ( ret )
         return ret;
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -528,9 +528,9 @@ void hvm_migrate_pirqs(struct vcpu *v)
     if ( !is_iommu_enabled(d) || !hvm_domain_irq(d)->dpci )
        return;
 
-    spin_lock(&d->event_lock);
+    read_lock(&d->event_lock);
     pt_pirq_iterate(d, migrate_pirq, v);
-    spin_unlock(&d->event_lock);
+    read_unlock(&d->event_lock);
 }
 
 static bool hvm_get_pending_event(struct vcpu *v, struct x86_event *info)
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -404,9 +404,9 @@ int hvm_inject_msi(struct domain *d, uin
         {
             int rc;
 
-            spin_lock(&d->event_lock);
+            write_lock(&d->event_lock);
             rc = map_domain_emuirq_pirq(d, pirq, IRQ_MSI_EMU);
-            spin_unlock(&d->event_lock);
+            write_unlock(&d->event_lock);
             if ( rc )
                 return rc;
             info = pirq_info(d, pirq);
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -203,9 +203,9 @@ static int vioapic_hwdom_map_gsi(unsigne
     {
         gprintk(XENLOG_WARNING, "vioapic: error binding GSI %u: %d\n",
                 gsi, ret);
-        spin_lock(&currd->event_lock);
+        write_lock(&currd->event_lock);
         unmap_domain_pirq(currd, pirq);
-        spin_unlock(&currd->event_lock);
+        write_unlock(&currd->event_lock);
     }
     pcidevs_unlock();
--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -465,7 +465,7 @@ int msixtbl_pt_register(struct domain *d
     int r = -EINVAL;
 
     ASSERT(pcidevs_locked());
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     if ( !msixtbl_initialised(d) )
         return -ENODEV;
@@ -535,7 +535,7 @@ void msixtbl_pt_unregister(struct domain
     struct msixtbl_entry *entry;
 
     ASSERT(pcidevs_locked());
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     if ( !msixtbl_initialised(d) )
         return;
@@ -589,13 +589,13 @@ void msixtbl_pt_cleanup(struct domain *d
     if ( !msixtbl_initialised(d) )
         return;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     list_for_each_entry_safe( entry, temp,
                               &d->arch.hvm.msixtbl_list, list )
         del_msixtbl_entry(entry);
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 }
 
 void msix_write_completion(struct vcpu *v)
@@ -726,9 +726,9 @@ int vpci_msi_arch_update(struct vpci_msi
                          msi->arch.pirq, msi->mask);
     if ( rc )
     {
-        spin_lock(&pdev->domain->event_lock);
+        write_lock(&pdev->domain->event_lock);
         unmap_domain_pirq(pdev->domain, msi->arch.pirq);
-        spin_unlock(&pdev->domain->event_lock);
+        write_unlock(&pdev->domain->event_lock);
         pcidevs_unlock();
         msi->arch.pirq = INVALID_PIRQ;
         return rc;
@@ -767,9 +767,9 @@ static int vpci_msi_enable(const struct
     rc = vpci_msi_update(pdev, data, address, vectors, pirq, mask);
     if ( rc )
     {
-        spin_lock(&pdev->domain->event_lock);
+        write_lock(&pdev->domain->event_lock);
         unmap_domain_pirq(pdev->domain, pirq);
-        spin_unlock(&pdev->domain->event_lock);
+        write_unlock(&pdev->domain->event_lock);
         pcidevs_unlock();
         return rc;
     }
@@ -814,9 +814,9 @@ static void vpci_msi_disable(const struc
         ASSERT(!rc);
     }
 
-    spin_lock(&pdev->domain->event_lock);
+    write_lock(&pdev->domain->event_lock);
     unmap_domain_pirq(pdev->domain, pirq);
-    spin_unlock(&pdev->domain->event_lock);
+    write_unlock(&pdev->domain->event_lock);
     pcidevs_unlock();
 }
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -2413,10 +2413,10 @@ int ioapic_guest_write(unsigned long phy
     }
     if ( pirq >= 0 )
     {
-        spin_lock(&hardware_domain->event_lock);
+        write_lock(&hardware_domain->event_lock);
         ret = map_domain_pirq(hardware_domain, pirq, irq,
                               MAP_PIRQ_TYPE_GSI, NULL);
-        spin_unlock(&hardware_domain->event_lock);
+        write_unlock(&hardware_domain->event_lock);
         if ( ret < 0 )
             return ret;
     }
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1547,7 +1547,7 @@ int pirq_guest_bind(struct vcpu *v, stru
     unsigned int max_nr_guests = will_share ? irq_max_guests : 1;
     int rc = 0;
 
-    WARN_ON(!spin_is_locked(&v->domain->event_lock));
+    WARN_ON(!rw_is_write_locked(&v->domain->event_lock));
     BUG_ON(!local_irq_is_enabled());
 
 retry:
@@ -1761,7 +1761,7 @@ void pirq_guest_unbind(struct domain *d,
     struct irq_desc *desc;
     int irq = 0;
 
-    WARN_ON(!spin_is_locked(&d->event_lock));
+    WARN_ON(!rw_is_write_locked(&d->event_lock));
     BUG_ON(!local_irq_is_enabled());
 
     desc = pirq_spin_lock_irq_desc(pirq, NULL);
@@ -1798,7 +1798,7 @@ static bool pirq_guest_force_unbind(stru
     unsigned int i;
     bool bound = false;
 
-    WARN_ON(!spin_is_locked(&d->event_lock));
+    WARN_ON(!rw_is_write_locked(&d->event_lock));
     BUG_ON(!local_irq_is_enabled());
 
     desc = pirq_spin_lock_irq_desc(pirq, NULL);
@@ -2040,7 +2040,7 @@ int get_free_pirq(struct domain *d, int
 {
     int i;
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     if ( type == MAP_PIRQ_TYPE_GSI )
     {
@@ -2065,7 +2065,7 @@ int get_free_pirqs(struct domain *d, uns
 {
     unsigned int i, found = 0;
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     for ( i = d->nr_pirqs - 1; i >= nr_irqs_gsi; --i )
         if ( is_free_pirq(d, pirq_info(d, i)) )
@@ -2093,7 +2093,7 @@ int map_domain_pirq(
     DECLARE_BITMAP(prepared, MAX_MSI_IRQS) = {};
     DECLARE_BITMAP(granted, MAX_MSI_IRQS) = {};
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     if ( !irq_access_permitted(current->domain, irq))
         return -EPERM;
@@ -2312,7 +2312,7 @@ int unmap_domain_pirq(struct domain *d,
         return -EINVAL;
 
     ASSERT(pcidevs_locked());
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     info = pirq_info(d, pirq);
     if ( !info || (irq = info->arch.irq) <= 0 )
@@ -2439,13 +2439,13 @@ void free_domain_pirqs(struct domain *d)
     int i;
 
     pcidevs_lock();
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     for ( i = 0; i < d->nr_pirqs; i++ )
         if ( domain_pirq_to_irq(d, i) > 0 )
             unmap_domain_pirq(d, i);
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
     pcidevs_unlock();
 }
@@ -2688,7 +2688,7 @@ int map_domain_emuirq_pirq(struct domain
     int old_emuirq = IRQ_UNBOUND, old_pirq = IRQ_UNBOUND;
     struct pirq *info;
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     if ( !is_hvm_domain(d) )
         return -EINVAL;
@@ -2754,7 +2754,7 @@ int unmap_domain_pirq_emuirq(struct doma
     if ( (pirq < 0) || (pirq >= d->nr_pirqs) )
         return -EINVAL;
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     emuirq = domain_pirq_to_emuirq(d, pirq);
     if ( emuirq == IRQ_UNBOUND )
@@ -2802,7 +2802,7 @@ static int allocate_pirq(struct domain *
 {
     int current_pirq;
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
     current_pirq = domain_irq_to_pirq(d, irq);
     if ( pirq < 0 )
     {
@@ -2874,7 +2874,7 @@ int allocate_and_map_gsi_pirq(struct dom
     }
 
     /* Verify or get pirq. */
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     pirq = allocate_pirq(d, index, *pirq_p, irq, MAP_PIRQ_TYPE_GSI, NULL);
     if ( pirq < 0 )
     {
@@ -2887,7 +2887,7 @@ int allocate_and_map_gsi_pirq(struct dom
         *pirq_p = pirq;
 
  done:
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return ret;
 }
@@ -2928,7 +2928,7 @@ int allocate_and_map_msi_pirq(struct dom
     pcidevs_lock();
     /* Verify or get pirq. */
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     pirq = allocate_pirq(d, index, *pirq_p, irq, type, &msi->entry_nr);
     if ( pirq < 0 )
     {
@@ -2941,7 +2941,7 @@ int allocate_and_map_msi_pirq(struct dom
         *pirq_p = pirq;
 
  done:
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
     pcidevs_unlock();
     if ( ret )
     {
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -34,7 +34,7 @@ static int physdev_hvm_map_pirq(
 
     ASSERT(!is_hardware_domain(d));
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     switch ( type )
     {
     case MAP_PIRQ_TYPE_GSI: {
@@ -84,7 +84,7 @@ static int physdev_hvm_map_pirq(
         break;
     }
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return ret;
 }
@@ -154,18 +154,18 @@ int physdev_unmap_pirq(domid_t domid, in
 
     if ( is_hvm_domain(d) && has_pirq(d) )
     {
-        spin_lock(&d->event_lock);
+        write_lock(&d->event_lock);
         if ( domain_pirq_to_emuirq(d, pirq) != IRQ_UNBOUND )
             ret = unmap_domain_pirq_emuirq(d, pirq);
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
         if ( domid == DOMID_SELF || ret )
             goto free_domain;
     }
 
     pcidevs_lock();
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     ret = unmap_domain_pirq(d, pirq);
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
     pcidevs_unlock();
 
 free_domain:
@@ -192,10 +192,10 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         ret = -EINVAL;
         if ( eoi.irq >= currd->nr_pirqs )
             break;
-        spin_lock(&currd->event_lock);
+        read_lock(&currd->event_lock);
         pirq = pirq_info(currd, eoi.irq);
         if ( !pirq ) {
-            spin_unlock(&currd->event_lock);
+            read_unlock(&currd->event_lock);
             break;
         }
         if ( currd->arch.auto_unmask )
@@ -214,7 +214,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
                  && hvm_irq->gsi_assert_count[gsi] )
                 send_guest_pirq(currd, pirq);
         }
-        spin_unlock(&currd->event_lock);
+        read_unlock(&currd->event_lock);
         ret = 0;
         break;
     }
@@ -626,7 +626,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         if ( copy_from_guest(&out, arg, 1) != 0 )
             break;
 
-        spin_lock(&currd->event_lock);
+        write_lock(&currd->event_lock);
 
         ret = get_free_pirq(currd, out.type);
         if ( ret >= 0 )
@@ -639,7 +639,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
                 ret = -ENOMEM;
         }
 
-        spin_unlock(&currd->event_lock);
+        write_unlock(&currd->event_lock);
 
         if ( ret >= 0 )
         {
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -448,7 +448,7 @@ static long pv_shim_event_channel_op(int
         if ( rc )                                                           \
             break;                                                          \
                                                                             \
-        spin_lock(&d->event_lock);                                          \
+        write_lock(&d->event_lock);                                         \
         rc = evtchn_allocate_port(d, op.port_field);                        \
         if ( rc )                                                           \
         {                                                                   \
@@ -457,7 +457,7 @@ static long pv_shim_event_channel_op(int
         }                                                                   \
         else                                                                \
             evtchn_reserve(d, op.port_field);                               \
-        spin_unlock(&d->event_lock);                                        \
+        write_unlock(&d->event_lock);                                       \
                                                                             \
         if ( !rc && __copy_to_guest(arg, &op, 1) )                          \
             rc = -EFAULT;                                                   \
@@ -585,11 +585,11 @@ static long pv_shim_event_channel_op(int
         if ( rc )
             break;
 
-        spin_lock(&d->event_lock);
+        write_lock(&d->event_lock);
         rc = evtchn_allocate_port(d, ipi.port);
         if ( rc )
         {
-            spin_unlock(&d->event_lock);
+            write_unlock(&d->event_lock);
             close.port = ipi.port;
             BUG_ON(xen_hypercall_event_channel_op(EVTCHNOP_close, &close));
 
@@ -598,7 +598,7 @@ static long pv_shim_event_channel_op(int
         evtchn_assign_vcpu(d, ipi.port, ipi.vcpu);
         evtchn_reserve(d, ipi.port);
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
 
         if ( __copy_to_guest(arg, &ipi, 1) )
             rc = -EFAULT;
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -294,7 +294,7 @@ static long evtchn_alloc_unbound(evtchn_
     if ( d == NULL )
         return -ESRCH;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     if ( (port = get_free_port(d)) < 0 )
         ERROR_EXIT_DOM(port, d);
@@ -317,7 +317,7 @@ static long evtchn_alloc_unbound(evtchn_
  out:
     check_free_port(d, port);
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
     rcu_unlock_domain(d);
 
     return rc;
@@ -355,7 +355,7 @@ static long evtchn_bind_interdomain(evtc
     if ( (rd = rcu_lock_domain_by_id(rdom)) == NULL )
         return -ESRCH;
 
-    spin_lock(&ld->event_lock);
+    write_lock(&ld->event_lock);
 
     if ( (lport = get_free_port(ld)) < 0 )
         ERROR_EXIT(lport);
@@ -399,7 +399,7 @@ static long evtchn_bind_interdomain(evtc
  out:
     check_free_port(ld, lport);
-    spin_unlock(&ld->event_lock);
+    write_unlock(&ld->event_lock);
 
     rcu_unlock_domain(rd);
@@ -430,7 +430,7 @@ int evtchn_bind_virq(evtchn_bind_virq_t
     if ( (v = domain_vcpu(d, vcpu)) == NULL )
         return -ENOENT;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     if ( read_atomic(&v->virq_to_evtchn[virq]) )
         ERROR_EXIT(-EEXIST);
@@ -471,7 +471,7 @@ int evtchn_bind_virq(evtchn_bind_virq_t
     write_atomic(&v->virq_to_evtchn[virq], port);
 
  out:
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return rc;
 }
@@ -487,7 +487,7 @@ static long evtchn_bind_ipi(evtchn_bind_
     if ( domain_vcpu(d, vcpu) == NULL )
         return -ENOENT;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     if ( (port = get_free_port(d)) < 0 )
         ERROR_EXIT(port);
@@ -505,7 +505,7 @@ static long evtchn_bind_ipi(evtchn_bind_
     bind->port = port;
 
  out:
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return rc;
 }
@@ -551,7 +551,7 @@ static long evtchn_bind_pirq(evtchn_bind
     if ( !is_hvm_domain(d) && !pirq_access_permitted(d, pirq) )
         return -EPERM;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     if ( pirq_to_evtchn(d, pirq) != 0 )
         ERROR_EXIT(-EEXIST);
@@ -591,7 +591,7 @@ static long evtchn_bind_pirq(evtchn_bind
  out:
     check_free_port(d, port);
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return rc;
 }
@@ -606,7 +606,7 @@ int evtchn_close(struct domain *d1, int
     long rc = 0;
 
  again:
-    spin_lock(&d1->event_lock);
+    write_lock(&d1->event_lock);
 
     if ( !port_is_valid(d1, port1) )
     {
@@ -676,13 +676,11 @@ int evtchn_close(struct domain *d1, int
                 BUG();
 
             if ( d1 < d2 )
-            {
-                spin_lock(&d2->event_lock);
-            }
+                write_lock(&d2->event_lock);
             else if ( d1 != d2 )
             {
-                spin_unlock(&d1->event_lock);
-                spin_lock(&d2->event_lock);
+                write_unlock(&d1->event_lock);
+                write_lock(&d2->event_lock);
                 goto again;
             }
         }
@@ -729,11 +727,11 @@ int evtchn_close(struct domain *d1, int
     if ( d2 != NULL )
     {
         if ( d1 != d2 )
-            spin_unlock(&d2->event_lock);
+            write_unlock(&d2->event_lock);
         put_domain(d2);
     }
 
-    spin_unlock(&d1->event_lock);
+    write_unlock(&d1->event_lock);
 
     return rc;
 }
@@ -1031,7 +1029,7 @@ long evtchn_bind_vcpu(unsigned int port,
     if ( (v = domain_vcpu(d, vcpu_id)) == NULL )
         return -ENOENT;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     if ( !port_is_valid(d, port) )
     {
@@ -1075,7 +1073,7 @@ long evtchn_bind_vcpu(unsigned int port,
     }
 
  out:
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return rc;
 }
@@ -1121,7 +1119,7 @@ int evtchn_reset(struct domain *d, bool
     if ( d != current->domain && !d->controller_pause_count )
         return -EINVAL;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     /*
      * If we are resuming, then start where we stopped. Otherwise, check
@@ -1132,7 +1130,7 @@ int evtchn_reset(struct domain *d, bool
     if ( i > d->next_evtchn )
         d->next_evtchn = i;
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     if ( !i )
         return -EBUSY;
@@ -1144,14 +1142,14 @@ int evtchn_reset(struct domain *d, bool
         /* NB: Choice of frequency is arbitrary. */
         if ( !(i & 0x3f) && hypercall_preempt_check() )
         {
-            spin_lock(&d->event_lock);
+            write_lock(&d->event_lock);
             d->next_evtchn = i;
-            spin_unlock(&d->event_lock);
+            write_unlock(&d->event_lock);
             return -ERESTART;
         }
     }
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     d->next_evtchn = 0;
 
@@ -1164,7 +1162,7 @@ int evtchn_reset(struct domain *d, bool
         evtchn_2l_init(d);
     }
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return rc;
 }
@@ -1354,7 +1352,7 @@ int alloc_unbound_xen_event_channel(
     struct evtchn *chn;
     int port, rc;
 
-    spin_lock(&ld->event_lock);
+    write_lock(&ld->event_lock);
 
     port = rc = get_free_port(ld);
     if ( rc < 0 )
@@ -1382,7 +1380,7 @@ int alloc_unbound_xen_event_channel(
 
  out:
     check_free_port(ld, port);
-    spin_unlock(&ld->event_lock);
+    write_unlock(&ld->event_lock);
 
     return rc < 0 ? rc : port;
 }
@@ -1393,7 +1391,7 @@ void free_xen_event_channel(struct domai
     {
         /*
          * Make sure ->is_dying is read /after/ ->valid_evtchns, pairing
-         * with the spin_barrier() and BUG_ON() in evtchn_destroy().
+         * with the kind-of-barrier and BUG_ON() in evtchn_destroy().
         */
         smp_rmb();
         BUG_ON(!d->is_dying);
@@ -1413,7 +1411,7 @@ void notify_via_xen_event_channel(struct
     {
         /*
         * Make sure ->is_dying is read /after/ ->valid_evtchns, pairing
-        * with the spin_barrier() and BUG_ON() in evtchn_destroy().
+        * with the kind-of-barrier and BUG_ON() in evtchn_destroy().
        */
        smp_rmb();
        ASSERT(ld->is_dying);
@@ -1470,7 +1468,8 @@ int evtchn_init(struct domain *d, unsign
        return -ENOMEM;
     d->valid_evtchns = EVTCHNS_PER_BUCKET;
 
-    spin_lock_init_prof(d, event_lock);
+    rwlock_init(&d->event_lock);
+
     if ( get_free_port(d) != 0 )
     {
        free_evtchn_bucket(d, d->evtchn);
@@ -1495,9 +1494,10 @@ int evtchn_destroy(struct domain *d)
 {
     unsigned int i;
 
-    /* After this barrier no new event-channel allocations can occur. */
+    /* After this kind-of-barrier no new event-channel allocations can occur. */
     BUG_ON(!d->is_dying);
-    spin_barrier(&d->event_lock);
+    read_lock(&d->event_lock);
+    read_unlock(&d->event_lock);
 
     /* Close all existing event channels. */
     for ( i = d->valid_evtchns; --i; )
@@ -1555,13 +1555,13 @@ void evtchn_move_pirqs(struct vcpu *v)
     unsigned int port;
     struct evtchn *chn;
 
-    spin_lock(&d->event_lock);
+    read_lock(&d->event_lock);
     for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
     {
         chn = evtchn_from_port(d, port);
         pirq_set_affinity(d, chn->u.pirq.irq, mask);
     }
-    spin_unlock(&d->event_lock);
+    read_unlock(&d->event_lock);
 }
@@ -1626,11 +1626,11 @@ static void domain_dump_evtchn_info(stru
              */
            evtchn_read_unlock(chn);
            chn = NULL;
-            if ( spin_trylock(&d->event_lock) )
+            if ( read_trylock(&d->event_lock) )
            {
                int irq = domain_pirq_to_irq(d, pirq);
 
-                spin_unlock(&d->event_lock);
+                read_unlock(&d->event_lock);
                printk(" p=%u i=%d", pirq, irq);
            }
            else
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -600,7 +600,7 @@ int evtchn_fifo_init_control(struct evtc
     if ( offset & (8 - 1) )
         return -EINVAL;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     /*
      * If this is the first control block, setup an empty event array
@@ -636,13 +636,13 @@ int evtchn_fifo_init_control(struct evtc
     else
         rc = map_control_block(v, gfn, offset);
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return rc;
 
 error:
     evtchn_fifo_destroy(d);
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
     return rc;
 }
@@ -695,9 +695,9 @@ int evtchn_fifo_expand_array(const struc
     if ( !d->evtchn_fifo )
         return -EOPNOTSUPP;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     rc = add_page_to_event_array(d, expand_array->array_gfn);
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return rc;
 }
--- a/xen/drivers/passthrough/vtd/x86/hvm.c
+++ b/xen/drivers/passthrough/vtd/x86/hvm.c
@@ -54,7 +54,7 @@ void hvm_dpci_isairq_eoi(struct domain *
     if ( !is_iommu_enabled(d) )
         return;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     dpci = domain_get_irq_dpci(d);
 
@@ -63,5 +63,5 @@ void hvm_dpci_isairq_eoi(struct domain *
         /* Multiple mirq may be mapped to one isa irq */
         pt_pirq_iterate(d, _hvm_dpci_isairq_eoi, (void *)(long)isairq);
     }
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 }
--- a/xen/drivers/passthrough/x86/hvm.c
+++ b/xen/drivers/passthrough/x86/hvm.c
@@ -105,7 +105,7 @@ static void pt_pirq_softirq_reset(struct
 {
     struct domain *d = pirq_dpci->dom;
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     switch ( cmpxchg(&pirq_dpci->state, 1 << STATE_SCHED, 0) )
     {
@@ -162,7 +162,7 @@ static void pt_irq_time_out(void *data)
     const struct hvm_irq_dpci *dpci;
     const struct dev_intx_gsi_link *digl;
 
-    spin_lock(&irq_map->dom->event_lock);
+    write_lock(&irq_map->dom->event_lock);
 
     if ( irq_map->flags & HVM_IRQ_DPCI_IDENTITY_GSI )
     {
@@ -177,7 +177,7 @@ static void pt_irq_time_out(void *data)
         hvm_gsi_deassert(irq_map->dom, dpci_pirq(irq_map)->pirq);
         irq_map->flags |= HVM_IRQ_DPCI_EOI_LATCH;
         pt_irq_guest_eoi(irq_map->dom, irq_map, NULL);
-        spin_unlock(&irq_map->dom->event_lock);
+        write_unlock(&irq_map->dom->event_lock);
         return;
     }
 
@@ -185,7 +185,7 @@ static void pt_irq_time_out(void *data)
     if ( unlikely(!dpci) )
     {
         ASSERT_UNREACHABLE();
-        spin_unlock(&irq_map->dom->event_lock);
+        write_unlock(&irq_map->dom->event_lock);
         return;
     }
     list_for_each_entry ( digl, &irq_map->digl_list, list )
@@ -204,7 +204,7 @@ static void pt_irq_time_out(void *data)
 
     pt_pirq_iterate(irq_map->dom, pt_irq_guest_eoi, NULL);
 
-    spin_unlock(&irq_map->dom->event_lock);
+    write_unlock(&irq_map->dom->event_lock);
 }
 
 struct hvm_irq_dpci *domain_get_irq_dpci(const struct domain *d)
@@ -288,7 +288,7 @@ int pt_irq_create_bind(
         return -EINVAL;
 
 restart:
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     hvm_irq_dpci = domain_get_irq_dpci(d);
 
     if ( !hvm_irq_dpci && !is_hardware_domain(d) )
@@ -304,7 +304,7 @@ int pt_irq_create_bind(
         hvm_irq_dpci = xzalloc(struct hvm_irq_dpci);
         if ( hvm_irq_dpci == NULL )
         {
-            spin_unlock(&d->event_lock);
+            write_unlock(&d->event_lock);
             return -ENOMEM;
         }
         for ( i = 0; i < NR_HVM_DOMU_IRQS; i++ )
@@ -316,7 +316,7 @@ int pt_irq_create_bind(
     info = pirq_get_info(d, pirq);
     if ( !info )
     {
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
         return -ENOMEM;
     }
     pirq_dpci = pirq_dpci(info);
@@ -331,7 +331,7 @@ int pt_irq_create_bind(
      */
     if ( pt_pirq_softirq_active(pirq_dpci) )
     {
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
         cpu_relax();
         goto restart;
     }
@@ -389,7 +389,7 @@ int pt_irq_create_bind(
                 pirq_dpci->dom = NULL;
                 pirq_dpci->flags = 0;
                 pirq_cleanup_check(info, d);
-                spin_unlock(&d->event_lock);
+                write_unlock(&d->event_lock);
                 return rc;
             }
         }
@@ -399,7 +399,7 @@ int pt_irq_create_bind(
 
             if ( (pirq_dpci->flags & mask) != mask )
             {
-                spin_unlock(&d->event_lock);
+                write_unlock(&d->event_lock);
                 return -EBUSY;
             }
@@ -423,7 +423,7 @@ int pt_irq_create_bind(
         dest_vcpu_id = hvm_girq_dest_2_vcpu_id(d, dest, dest_mode);
         pirq_dpci->gmsi.dest_vcpu_id = dest_vcpu_id;
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
 
         pirq_dpci->gmsi.posted = false;
         vcpu = (dest_vcpu_id >= 0) ? d->vcpu[dest_vcpu_id] : NULL;
@@ -483,7 +483,7 @@ int pt_irq_create_bind(
 
             if ( !digl || !girq )
             {
-                spin_unlock(&d->event_lock);
+                write_unlock(&d->event_lock);
                 xfree(girq);
                 xfree(digl);
                 return -ENOMEM;
@@ -510,7 +510,7 @@ int pt_irq_create_bind(
             if ( pt_irq_bind->irq_type != PT_IRQ_TYPE_PCI ||
                  pirq >= hvm_domain_irq(d)->nr_gsis )
             {
-                spin_unlock(&d->event_lock);
+                write_unlock(&d->event_lock);
 
                 return -EINVAL;
             }
@@ -546,7 +546,7 @@ int pt_irq_create_bind(
 
             if ( mask < 0 || trigger_mode < 0 )
             {
-                spin_unlock(&d->event_lock);
+                write_unlock(&d->event_lock);
 
                 ASSERT_UNREACHABLE();
                 return -EINVAL;
@@ -594,14 +594,14 @@ int pt_irq_create_bind(
             }
             pirq_dpci->flags = 0;
             pirq_cleanup_check(info, d);
-            spin_unlock(&d->event_lock);
+            write_unlock(&d->event_lock);
             xfree(girq);
             xfree(digl);
             return rc;
         }
     }
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     if ( iommu_verbose )
     {
@@ -619,7 +619,7 @@ int pt_irq_create_bind(
     }
 
     default:
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
         return -EOPNOTSUPP;
     }
@@ -672,13 +672,13 @@ int pt_irq_destroy_bind(
         return -EOPNOTSUPP;
     }
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     hvm_irq_dpci = domain_get_irq_dpci(d);
 
     if ( !hvm_irq_dpci && !is_hardware_domain(d) )
     {
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
         return -EINVAL;
     }
@@ -711,7 +711,7 @@ int pt_irq_destroy_bind(
 
         if ( girq )
         {
-            spin_unlock(&d->event_lock);
+            write_unlock(&d->event_lock);
             return -EINVAL;
         }
@@ -755,7 +755,7 @@ int pt_irq_destroy_bind(
         pirq_cleanup_check(pirq, d);
     }
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     if ( what && iommu_verbose )
     {
@@ -799,7 +799,7 @@ int pt_pirq_iterate(struct domain *d,
     unsigned int pirq = 0, n, i;
     struct pirq *pirqs[8];
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_locked(&d->event_lock));
 
     do {
         n = radix_tree_gang_lookup(&d->pirq_tree, (void **)pirqs, pirq,
@@ -880,9 +880,9 @@ void hvm_dpci_msi_eoi(struct domain *d,
          (!hvm_domain_irq(d)->dpci && !is_hardware_domain(d)) )
         return;
 
-    spin_lock(&d->event_lock);
+    read_lock(&d->event_lock);
     pt_pirq_iterate(d, _hvm_dpci_msi_eoi, (void *)(long)vector);
-    spin_unlock(&d->event_lock);
+    read_unlock(&d->event_lock);
 }
 
 static void hvm_dirq_assist(struct domain *d, struct hvm_pirq_dpci *pirq_dpci)
@@ -893,7 +893,7 @@ static void hvm_dirq_assist(struct domai
         return;
     }
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     if ( test_and_clear_bool(pirq_dpci->masked) )
     {
         struct pirq *pirq = dpci_pirq(pirq_dpci);
@@ -947,7 +947,7 @@ static void hvm_dirq_assist(struct domai
     }
 
 out:
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 }
 
 static void hvm_pirq_eoi(struct pirq *pirq,
@@ -1012,7 +1012,7 @@ void hvm_dpci_eoi(struct domain *d, unsi
 
     if ( is_hardware_domain(d) )
     {
-        spin_lock(&d->event_lock);
+        write_lock(&d->event_lock);
         hvm_gsi_eoi(d, guest_gsi, ent);
         goto unlock;
     }
@@ -1023,7 +1023,7 @@ void hvm_dpci_eoi(struct domain *d, unsi
         return;
     }
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     hvm_irq_dpci = domain_get_irq_dpci(d);
 
     if ( !hvm_irq_dpci )
@@ -1033,7 +1033,7 @@ void hvm_dpci_eoi(struct domain *d, unsi
         __hvm_dpci_eoi(d, girq, ent);
 
 unlock:
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 }
 
 static int pci_clean_dpci_irq(struct domain *d,
@@ -1072,7 +1072,7 @@ int arch_pci_clean_pirqs(struct domain *
     if ( !is_hvm_domain(d) )
         return 0;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     hvm_irq_dpci = domain_get_irq_dpci(d);
     if ( hvm_irq_dpci != NULL )
     {
@@ -1090,14 +1090,14 @@ int arch_pci_clean_pirqs(struct domain *
             ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
             if ( ret )
             {
-                spin_unlock(&d->event_lock);
+                write_unlock(&d->event_lock);
                 return ret;
             }
 
         hvm_domain_irq(d)->dpci = NULL;
         free_hvm_irq_dpci(hvm_irq_dpci);
     }
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return 0;
 }
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -377,7 +377,7 @@ struct domain
     unsigned int     xen_evtchns;
     /* Port to resume from in evtchn_reset(), when in a continuation. */
     unsigned int     next_evtchn;
-    spinlock_t       event_lock;
+    rwlock_t         event_lock;
     const struct evtchn_port_ops *evtchn_port_ops;
     struct evtchn_fifo_domain *evtchn_fifo;
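
One subtlety worth spelling out (the reasoning below is editorial, not
from the patch): rwlocks have no spin_barrier() equivalent, hence the
lock/unlock pair in evtchn_destroy() above. Since every allocation path
takes the lock for writing, the read_lock() can only be obtained once
any in-flight allocator has finished, and allocators starting afterwards
observe d->is_dying:

    BUG_ON(!d->is_dying);        /* set before the "kind-of-barrier"      */
    read_lock(&d->event_lock);   /* waits out any in-flight writer ...    */
    read_unlock(&d->event_lock); /* ... so no new allocation can sneak in */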

From patchwork Tue Jan 5 13:10:29 2021
Subject: [PATCH v4 04/10] evtchn: don't call Xen consumer callback with per-channel lock held
From: Jan Beulich
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall, Wei Liu, Stefano Stabellini
Date: Tue, 5 Jan 2021 14:10:29 +0100

While there don't look to be any problems with this right now, the lock
order implications from holding the lock can be very difficult to
follow (and may be easy to violate unknowingly). The present callbacks
don't (and no such callback should) have any need for the lock to be
held.

Signed-off-by: Jan Beulich
Acked-by: Julien Grall
---
v4: Go back to v2.
v3: Drain callbacks before proceeding with closing. Re-base.

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -767,9 +767,18 @@ int evtchn_send(struct domain *ld, unsig
         rport = lchn->u.interdomain.remote_port;
         rchn  = evtchn_from_port(rd, rport);
         if ( consumer_is_xen(rchn) )
-            xen_notification_fn(rchn)(rd->vcpu[rchn->notify_vcpu_id], rport);
-        else
-            evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
+        {
+            /* Don't keep holding the lock for the call below. */
+            xen_event_channel_notification_t fn = xen_notification_fn(rchn);
+            struct vcpu *rv = rd->vcpu[rchn->notify_vcpu_id];
+
+            rcu_lock_domain(rd);
+            evtchn_read_unlock(lchn);
+            fn(rv, rport);
+            rcu_unlock_domain(rd);
+            return 0;
+        }
+        evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
         break;
     case ECS_IPI:
         evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);
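
What this buys consumers: a Xen-attached callback now runs with no
event-channel lock held, so it is free to take other locks or raise
further events without creating a nesting constraint against
evtchn_send(). A hypothetical callback matching the
xen_event_channel_notification_t signature (name and body are
illustrative only):

static void example_notification_fn(struct vcpu *v, unsigned int port)
{
    /*
     * Nothing here may rely on the per-channel lock being held any
     * more; conversely, locks taken here can no longer nest inside it.
     */
    printk("event %u for d%dv%d\n", port, v->domain->domain_id, v->vcpu_id);
}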

From patchwork Tue Jan 5 13:10:56 2021
Subject: [PATCH v4 05/10] evtchn: closing of vIRQ-s doesn't require looping over all vCPU-s
From: Jan Beulich
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall, Wei Liu, Stefano Stabellini
Date: Tue, 5 Jan 2021 14:10:56 +0100

Global vIRQ-s have their event channel association tracked on vCPU 0.
Per-vCPU vIRQ-s can't have their notify_vcpu_id changed. Hence it is
well-known which vCPU's virq_to_evtchn[] needs updating.

Signed-off-by: Jan Beulich
Reviewed-by: Julien Grall
---
v4: New.

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -600,7 +600,6 @@ static long evtchn_bind_pirq(evtchn_bind
 int evtchn_close(struct domain *d1, int port1, bool guest)
 {
     struct domain *d2 = NULL;
-    struct vcpu   *v;
     struct evtchn *chn1, *chn2;
     int            port2;
     long           rc = 0;
@@ -651,17 +650,19 @@ int evtchn_close(struct domain *d1, int
         break;
     }
 
-    case ECS_VIRQ:
-        for_each_vcpu ( d1, v )
-        {
-            unsigned long flags;
+    case ECS_VIRQ: {
+        struct vcpu *v;
+        unsigned long flags;
+
+        v = d1->vcpu[virq_is_global(chn1->u.virq) ? 0 : chn1->notify_vcpu_id];
+
+        write_lock_irqsave(&v->virq_lock, flags);
+        ASSERT(read_atomic(&v->virq_to_evtchn[chn1->u.virq]) == port1);
+        write_atomic(&v->virq_to_evtchn[chn1->u.virq], 0);
+        write_unlock_irqrestore(&v->virq_lock, flags);
 
-            write_lock_irqsave(&v->virq_lock, flags);
-            if ( read_atomic(&v->virq_to_evtchn[chn1->u.virq]) == port1 )
-                write_atomic(&v->virq_to_evtchn[chn1->u.virq], 0);
-            write_unlock_irqrestore(&v->virq_lock, flags);
-        }
         break;
+    }
 
     case ECS_IPI:
         break;
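
The invariant the patch exploits can be captured in a single helper
(hypothetical, for illustration; virq_is_global() is the real predicate,
visible in patch 07's context below):

static struct vcpu *virq_tracking_vcpu(struct domain *d,
                                       const struct evtchn *chn)
{
    /*
     * Global vIRQ-s are always tracked on vCPU 0; per-vCPU ones can't
     * have notify_vcpu_id changed, so that field names the tracking vCPU.
     */
    return d->vcpu[virq_is_global(chn->u.virq) ? 0 : chn->notify_vcpu_id];
}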

From patchwork Tue Jan 5 13:11:39 2021
Subject: [PATCH v4 06/10] evtchn: slightly defer lock acquire where possible
From: Jan Beulich
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall, Wei Liu, Stefano Stabellini
Message-ID: <50d76d71-7e76-8c4d-0546-bf690085036b@suse.com>
Date: Tue, 5 Jan 2021 14:11:39 +0100

port_is_valid() and evtchn_from_port() are fine to use without holding
any locks. Accordingly acquire the per-domain lock slightly later in
evtchn_close() and evtchn_bind_vcpu().

Signed-off-by: Jan Beulich
---
v4: New.

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -604,17 +604,14 @@ int evtchn_close(struct domain *d1, int
     int            port2;
     long           rc = 0;
 
- again:
-    write_lock(&d1->event_lock);
-
     if ( !port_is_valid(d1, port1) )
-    {
-        rc = -EINVAL;
-        goto out;
-    }
+        return -EINVAL;
 
     chn1 = evtchn_from_port(d1, port1);
 
+ again:
+    write_lock(&d1->event_lock);
+
     /* Guest cannot close a Xen-attached event channel. */
     if ( unlikely(consumer_is_xen(chn1)) && guest )
     {
@@ -1039,16 +1036,13 @@ long evtchn_bind_vcpu(unsigned int port,
     if ( (v = domain_vcpu(d, vcpu_id)) == NULL )
         return -ENOENT;
 
-    write_lock(&d->event_lock);
-
     if ( !port_is_valid(d, port) )
-    {
-        rc = -EINVAL;
-        goto out;
-    }
+        return -EINVAL;
 
     chn = evtchn_from_port(d, port);
 
+    write_lock(&d->event_lock);
+
     /* Guest cannot re-bind a Xen-attached event channel. */
From patchwork Tue Jan 5 13:12:15 2021
Subject: [PATCH v4 07/10] evtchn: add helper for port_is_valid() +
 evtchn_from_port()
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall, Wei Liu,
 Stefano Stabellini
Date: Tue, 5 Jan 2021 14:12:15 +0100
The combination is pretty common, so adding a simple local helper seems
worthwhile. Make it const- and type-correct, in turn requiring the two
called functions to also be const-correct (and on this occasion also
make them type-correct).

Signed-off-by: Jan Beulich
Acked-by: Julien Grall
---
v4: New.

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -147,6 +147,11 @@ static bool virq_is_global(unsigned int
     return true;
 }

+static struct evtchn *_evtchn_from_port(const struct domain *d,
+                                        evtchn_port_t port)
+{
+    return port_is_valid(d, port) ? evtchn_from_port(d, port) : NULL;
+}

 static struct evtchn *alloc_evtchn_bucket(struct domain *d, unsigned int port)
 {
@@ -361,9 +366,9 @@ static long evtchn_bind_interdomain(evtc
         ERROR_EXIT(lport);
     lchn = evtchn_from_port(ld, lport);

-    if ( !port_is_valid(rd, rport) )
+    rchn = _evtchn_from_port(rd, rport);
+    if ( !rchn )
         ERROR_EXIT_DOM(-EINVAL, rd);
-    rchn = evtchn_from_port(rd, rport);

     double_evtchn_lock(lchn, rchn);

@@ -600,15 +605,12 @@ static long evtchn_bind_pirq(evtchn_bind
 int evtchn_close(struct domain *d1, int port1, bool guest)
 {
     struct domain *d2 = NULL;
-    struct evtchn *chn1, *chn2;
-    int port2;
+    struct evtchn *chn1 = _evtchn_from_port(d1, port1), *chn2;
     long rc = 0;

-    if ( !port_is_valid(d1, port1) )
+    if ( !chn1 )
         return -EINVAL;

-    chn1 = evtchn_from_port(d1, port1);
-
  again:
     write_lock(&d1->event_lock);

@@ -695,10 +697,8 @@ int evtchn_close(struct domain *d1, int
             goto out;
         }

-        port2 = chn1->u.interdomain.remote_port;
-        BUG_ON(!port_is_valid(d2, port2));
-
-        chn2 = evtchn_from_port(d2, port2);
+        chn2 = _evtchn_from_port(d2, chn1->u.interdomain.remote_port);
+        BUG_ON(!chn2);
         BUG_ON(chn2->state != ECS_INTERDOMAIN);
         BUG_ON(chn2->u.interdomain.remote_dom != d1);

@@ -736,15 +736,13 @@ int evtchn_close(struct domain *d1, int

 int evtchn_send(struct domain *ld, unsigned int lport)
 {
-    struct evtchn *lchn, *rchn;
+    struct evtchn *lchn = _evtchn_from_port(ld, lport), *rchn;
     struct domain *rd;
     int rport, ret = 0;

-    if ( !port_is_valid(ld, lport) )
+    if ( !lchn )
         return -EINVAL;

-    lchn = evtchn_from_port(ld, lport);
-
     evtchn_read_lock(lchn);

     /* Guest cannot send via a Xen-attached event channel. */
@@ -956,7 +954,6 @@ int evtchn_status(evtchn_status_t *statu
 {
     struct domain *d;
     domid_t dom = status->dom;
-    int port = status->port;
     struct evtchn *chn;
     long rc = 0;

@@ -964,14 +961,13 @@ int evtchn_status(evtchn_status_t *statu
     if ( d == NULL )
         return -ESRCH;

-    if ( !port_is_valid(d, port) )
+    chn = _evtchn_from_port(d, status->port);
+    if ( !chn )
     {
         rcu_unlock_domain(d);
         return -EINVAL;
     }

-    chn = evtchn_from_port(d, port);
-
     evtchn_read_lock(chn);

     if ( consumer_is_xen(chn) )
@@ -1036,11 +1032,10 @@ long evtchn_bind_vcpu(unsigned int port,
     if ( (v = domain_vcpu(d, vcpu_id)) == NULL )
         return -ENOENT;

-    if ( !port_is_valid(d, port) )
+    chn = _evtchn_from_port(d, port);
+    if ( !chn )
         return -EINVAL;

-    chn = evtchn_from_port(d, port);
-
     write_lock(&d->event_lock);

     /* Guest cannot re-bind a Xen-attached event channel. */
@@ -1086,13 +1081,11 @@ long evtchn_bind_vcpu(unsigned int port,
 int evtchn_unmask(unsigned int port)
 {
     struct domain *d = current->domain;
-    struct evtchn *evtchn;
+    struct evtchn *evtchn = _evtchn_from_port(d, port);

-    if ( unlikely(!port_is_valid(d, port)) )
+    if ( unlikely(!evtchn) )
         return -EINVAL;

-    evtchn = evtchn_from_port(d, port);
-
     evtchn_read_lock(evtchn);

     evtchn_port_unmask(d, evtchn);
@@ -1175,14 +1168,12 @@ static long evtchn_set_priority(const st
 {
     struct domain *d = current->domain;
     unsigned int port = set_priority->port;
-    struct evtchn *chn;
+    struct evtchn *chn = _evtchn_from_port(d, port);
     long ret;

-    if ( !port_is_valid(d, port) )
+    if ( !chn )
         return -EINVAL;

-    chn = evtchn_from_port(d, port);
-
     evtchn_read_lock(chn);

     ret = evtchn_port_set_priority(d, chn, set_priority->priority);
@@ -1408,10 +1399,10 @@ void free_xen_event_channel(struct domai

 void notify_via_xen_event_channel(struct domain *ld, int lport)
 {
-    struct evtchn *lchn, *rchn;
+    struct evtchn *lchn = _evtchn_from_port(ld, lport), *rchn;
     struct domain *rd;

-    if ( !port_is_valid(ld, lport) )
+    if ( !lchn )
     {
         /*
          * Make sure ->is_dying is read /after/ ->valid_evtchns, pairing
@@ -1422,8 +1413,6 @@ void notify_via_xen_event_channel(struct
         return;
     }

-    lchn = evtchn_from_port(ld, lport);
-
     if ( !evtchn_read_trylock(lchn) )
         return;
@@ -1577,16 +1566,17 @@ static void domain_dump_evtchn_info(stru
            "Polling vCPUs: {%*pbl}\n"
            "    port [p/m/s]\n", d->domain_id, d->max_vcpus, d->poll_mask);

-    for ( port = 1; port_is_valid(d, port); ++port )
+    for ( port = 1; ; ++port )
     {
-        struct evtchn *chn;
+        struct evtchn *chn = _evtchn_from_port(d, port);
         char *ssid;

+        if ( !chn )
+            break;
+
         if ( !(port & 0x3f) )
             process_pending_softirqs();

-        chn = evtchn_from_port(d, port);
-
         if ( !evtchn_read_trylock(chn) )
         {
             printk("    %4u in flux\n", port);
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -120,7 +120,7 @@ static inline void evtchn_read_unlock(st
     read_unlock(&evtchn->lock);
 }

-static inline bool_t port_is_valid(struct domain *d, unsigned int p)
+static inline bool port_is_valid(const struct domain *d, evtchn_port_t p)
 {
     if ( p >= read_atomic(&d->valid_evtchns) )
         return false;
@@ -135,7 +135,8 @@ static inline bool_t port_is_valid(struc
     return true;
 }

-static inline struct evtchn *evtchn_from_port(struct domain *d, unsigned int p)
+static inline struct evtchn *evtchn_from_port(const struct domain *d,
+                                              evtchn_port_t p)
 {
     if ( p < EVTCHNS_PER_BUCKET )
         return &d->evtchn[array_index_nospec(p, EVTCHNS_PER_BUCKET)];
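For illustration (not part of the patch): besides shortening callers,
the NULL return doubles as a loop terminator, as the
domain_dump_evtchn_info() hunk above shows. Condensed to the bare idiom
(hypothetical loop body):

    evtchn_port_t port;
    struct evtchn *chn;

    /* Walk all currently valid ports; a failed lookup ends the walk. */
    for ( port = 1; (chn = _evtchn_from_port(d, port)) != NULL; ++port )
    {
        /* ... per-channel processing ... */
    }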
From patchwork Tue Jan 5 13:12:45 2021
Subject: [PATCH v4 08/10] evtchn: closing of ports doesn't need to hold both
 domains' event locks
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall, Wei Liu,
 Stefano Stabellini
Message-ID: <495496e6-710d-bec0-cbd7-46c78f20fcf0@suse.com>
Date: Tue, 5 Jan 2021 14:12:45 +0100

The local domain's lock is needed for the port freeing, but for the
remote side the per-channel lock is sufficient. Other logic then needs
rearranging, though, including the early dropping of both locks (and
the remote domain ref) in the ECS_PIRQ and ECS_VIRQ cases.

Note in particular that there is no real race with evtchn_bind_vcpu():
ECS_INTERDOMAIN and ECS_UNBOUND get treated identically there, and
evtchn_close() doesn't care about the notification vCPU ID.

Note also that we can't use double_evtchn_unlock() or
evtchn_write_unlock() when releasing locks to cover for possible races.
See the respective code comment.

Signed-off-by: Jan Beulich
---
v4: New.
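For reference (not part of the patch): the retry logic in the diff
below leans on double_evtchn_lock() taking the two per-channel locks in
a fixed global order. Its shape, sketched here on the assumption that
the ordering is by channel address (the real helper lives earlier in
event_channel.c and isn't shown in this series):

    /* Sketch: take two channel write locks in a deadlock-free order. */
    static void double_evtchn_lock_sketch(struct evtchn *lchn,
                                          struct evtchn *rchn)
    {
        ASSERT(lchn != rchn);

        if ( lchn < rchn )
        {
            evtchn_write_lock(lchn);
            evtchn_write_lock(rchn);
        }
        else
        {
            evtchn_write_lock(rchn);
            evtchn_write_lock(lchn);
        }
    }

With such an order, the "chn1 > chn2" test below identifies exactly the
case where chn1, already locked, would have to be taken second - hence
the drop-relock-retry dance.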
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -605,15 +605,15 @@ static long evtchn_bind_pirq(evtchn_bind
 int evtchn_close(struct domain *d1, int port1, bool guest)
 {
     struct domain *d2 = NULL;
-    struct evtchn *chn1 = _evtchn_from_port(d1, port1), *chn2;
+    struct evtchn *chn1 = _evtchn_from_port(d1, port1), *chn2 = NULL;
     long rc = 0;

     if ( !chn1 )
         return -EINVAL;

- again:
     write_lock(&d1->event_lock);

+ again:
     /* Guest cannot close a Xen-attached event channel. */
     if ( unlikely(consumer_is_xen(chn1)) && guest )
     {
@@ -634,6 +634,22 @@ int evtchn_close(struct domain *d1, int
     case ECS_PIRQ: {
         struct pirq *pirq = pirq_info(d1, chn1->u.pirq.irq);

+        /*
+         * We don't require the per-channel lock here, so in case a race
+         * happened on the interdomain path below better release both.
+         */
+        if ( unlikely(chn2) )
+        {
+            /*
+             * See evtchn_write_unlock() vs plain write_unlock() comment in
+             * ECS_INTERDOMAIN handling below.
+             */
+            write_unlock(&chn1->lock);
+            write_unlock(&chn2->lock);
+            put_domain(d2);
+            chn2 = NULL;
+        }
+
         if ( pirq )
         {
             if ( !is_hvm_domain(d1) )
@@ -653,6 +669,22 @@ int evtchn_close(struct domain *d1, int
         struct vcpu *v;
         unsigned long flags;

+        /*
+         * The per-channel locks nest inside the vIRQ ones, so we must release
+         * them if a race happened on the interdomain path below.
+         */
+        if ( unlikely(chn2) )
+        {
+            /*
+             * See evtchn_write_unlock() vs plain write_unlock() comment in
+             * ECS_INTERDOMAIN handling below.
+             */
+            write_unlock(&chn1->lock);
+            write_unlock(&chn2->lock);
+            put_domain(d2);
+            chn2 = NULL;
+        }
+
         v = d1->vcpu[virq_is_global(chn1->u.virq) ? 0 : chn1->notify_vcpu_id];

         write_lock_irqsave(&v->virq_lock, flags);
@@ -669,63 +701,87 @@ int evtchn_close(struct domain *d1, int
     case ECS_INTERDOMAIN:
         if ( d2 == NULL )
         {
-            d2 = chn1->u.interdomain.remote_dom;
+            evtchn_write_lock(chn1);

-            /* If we unlock d1 then we could lose d2. Must get a reference. */
-            if ( unlikely(!get_domain(d2)) )
-                BUG();
-
-            if ( d1 < d2 )
-                write_lock(&d2->event_lock);
-            else if ( d1 != d2 )
-            {
-                write_unlock(&d1->event_lock);
-                write_lock(&d2->event_lock);
-                goto again;
-            }
+            if ( chn1->state == ECS_INTERDOMAIN )
+                d2 = chn1->u.interdomain.remote_dom;
+            else
+                /* See comment further down. */
+                write_unlock(&chn1->lock);
         }
-        else if ( d2 != chn1->u.interdomain.remote_dom )
+
+        if ( d2 != chn1->u.interdomain.remote_dom )
         {
             /*
-             * We can only get here if the port was closed and re-bound after
-             * unlocking d1 but before locking d2 above. We could retry but
-             * it is easier to return the same error as if we had seen the
-             * port in ECS_FREE. It must have passed through that state for
-             * us to end up here, so it's a valid error to return.
+             * We can only get here if the port was closed and re-bound
+             * - before locking chn1 or
+             * - after unlocking chn1 but before locking both channels
+             * above. We could retry but it is easier to return the same error
+             * as if we had seen the port in ECS_FREE. It must have passed
+             * through that state for us to end up here, so it's a valid error
+             * to return.
             */
+            if ( d2 && !chn2 )
+                write_unlock(&chn1->lock);
             rc = -EINVAL;
             goto out;
         }

-        chn2 = _evtchn_from_port(d2, chn1->u.interdomain.remote_port);
-        BUG_ON(!chn2);
-        BUG_ON(chn2->state != ECS_INTERDOMAIN);
-        BUG_ON(chn2->u.interdomain.remote_dom != d1);
+        if ( !chn2 )
+        {
+            /* If we unlock chn1 then we could lose d2. Must get a reference. */
+            if ( unlikely(!get_domain(d2)) )
+                BUG();

-        double_evtchn_lock(chn1, chn2);
+            chn2 = _evtchn_from_port(d2, chn1->u.interdomain.remote_port);
+            BUG_ON(!chn2);

-        evtchn_free(d1, chn1);
+            if ( chn1 > chn2 )
+            {
+                /*
+                 * Cannot use evtchn_write_unlock() here, as its assertion
+                 * likely won't hold. However, a race - as per the comment
+                 * below - indicates a transition through ECS_FREE must
+                 * have occurred, so the assumptions by users of
+                 * evtchn_read_trylock() still hold (i.e. they're similarly
+                 * fine to bail).
+                 */
+                write_unlock(&chn1->lock);
+                double_evtchn_lock(chn2, chn1);
+                goto again;
+            }
+
+            evtchn_write_lock(chn2);
+        }
+
+        BUG_ON(chn2->state != ECS_INTERDOMAIN);
+        BUG_ON(chn2->u.interdomain.remote_dom != d1);

         chn2->state = ECS_UNBOUND;
         chn2->u.unbound.remote_domid = d1->domain_id;

-        double_evtchn_unlock(chn1, chn2);
-
-        goto out;
+        break;

     default:
         BUG();
     }

-    evtchn_write_lock(chn1);
+    if ( !chn2 )
+        evtchn_write_lock(chn1);

     evtchn_free(d1, chn1);

-    evtchn_write_unlock(chn1);
+    if ( !chn2 )
+        evtchn_write_unlock(chn1);

  out:
-    if ( d2 != NULL )
+    if ( chn2 )
     {
-        if ( d1 != d2 )
-            write_unlock(&d2->event_lock);
+        /*
+         * See evtchn_write_unlock() vs plain write_unlock() comment in
+         * ECS_INTERDOMAIN handling above. In principle we could use
+         * double_evtchn_unlock() on the ECS_INTERDOMAIN success path.
+         */
+        write_unlock(&chn1->lock);
+        write_unlock(&chn2->lock);
         put_domain(d2);
     }
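Condensing the interdomain path above to its locking skeleton (a
heavily simplified sketch; the state revalidation performed after the
goto, and all error handling, are omitted):

    write_lock(&d1->event_lock);
 again:
    evtchn_write_lock(chn1);              /* local channel first */
    /* resolve d2/chn2 from chn1, taking a domain reference */
    if ( chn1 > chn2 )                    /* wrong global lock order? */
    {
        write_unlock(&chn1->lock);        /* bare unlock; see comment above */
        double_evtchn_lock(chn2, chn1);   /* retake both, in order */
        goto again;                       /* recheck state under both locks */
    }
    evtchn_write_lock(chn2);              /* already in order: just add chn2 */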
From patchwork Tue Jan 5 13:13:07 2021
Subject: [PATCH v4 09/10] evtchn: type adjustments
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall, Wei Liu,
 Stefano Stabellini
Message-ID: <646f25c5-36a1-34b5-8bed-6776068bd52b@suse.com>
Date: Tue, 5 Jan 2021 14:13:07 +0100

First of all avoid "long" when "int" suffices, i.e. in particular when
merely conveying error codes. 32-bit values are slightly cheaper to
deal with on x86, and their processing is at least no more expensive on
Arm. Where possible use evtchn_port_t for port numbers and unsigned int
for other unsigned quantities in adjacent code. In
evtchn_set_priority() eliminate a local variable altogether instead of
changing its type.

Signed-off-by: Jan Beulich
---
v4: New.
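For illustration (not part of the patch): the evtchn_bind_pirq() hunk
below is the clearest instance of the pattern - making the quantity
unsigned lets a single comparison subsume the old negative check. In
isolation:

    unsigned int pirq = bind->pirq;

    /*
     * Previously: if ( (pirq < 0) || (pirq >= d->nr_pirqs) ) with a
     * signed pirq. Unsigned, a negative input wraps to a huge value,
     * so the one range check rejects it just the same.
     */
    if ( pirq >= d->nr_pirqs )
        return -EINVAL;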
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -287,13 +287,12 @@ void evtchn_free(struct domain *d, struc
     xsm_evtchn_close_post(chn);
 }

-static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
+static int evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
 {
     struct evtchn *chn;
     struct domain *d;
-    int port;
+    int port, rc;
     domid_t dom = alloc->dom;
-    long rc;

     d = rcu_lock_domain_by_any_id(dom);
     if ( d == NULL )
@@ -346,13 +345,13 @@ static void double_evtchn_unlock(struct
     evtchn_write_unlock(rchn);
 }

-static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
+static int evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
 {
     struct evtchn *lchn, *rchn;
     struct domain *ld = current->domain, *rd;
-    int lport, rport = bind->remote_port;
+    int lport, rc;
+    evtchn_port_t rport = bind->remote_port;
     domid_t rdom = bind->remote_dom;
-    long rc;

     if ( rdom == DOMID_SELF )
         rdom = current->domain->domain_id;
@@ -482,12 +481,12 @@
 }

-static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
+static int evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
 {
     struct evtchn *chn;
     struct domain *d = current->domain;
-    int port, vcpu = bind->vcpu;
-    long rc = 0;
+    int port, rc = 0;
+    unsigned int vcpu = bind->vcpu;

     if ( domain_vcpu(d, vcpu) == NULL )
         return -ENOENT;
@@ -541,16 +540,16 @@ static void unlink_pirq_port(struct evtc
 }

-static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
+static int evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
 {
     struct evtchn *chn;
     struct domain *d = current->domain;
     struct vcpu *v = d->vcpu[0];
     struct pirq *info;
-    int port = 0, pirq = bind->pirq;
-    long rc;
+    int port = 0, rc;
+    unsigned int pirq = bind->pirq;

-    if ( (pirq < 0) || (pirq >= d->nr_pirqs) )
+    if ( pirq >= d->nr_pirqs )
         return -EINVAL;

     if ( !is_hvm_domain(d) && !pirq_access_permitted(d, pirq) )
@@ -606,7 +605,7 @@ int evtchn_close(struct domain *d1, int
 {
     struct domain *d2 = NULL;
     struct evtchn *chn1 = _evtchn_from_port(d1, port1), *chn2 = NULL;
-    long rc = 0;
+    int rc = 0;

     if ( !chn1 )
         return -EINVAL;
@@ -1011,7 +1010,7 @@ int evtchn_status(evtchn_status_t *statu
     struct domain *d;
     domid_t dom = status->dom;
     struct evtchn *chn;
-    long rc = 0;
+    int rc = 0;

     d = rcu_lock_domain_by_any_id(dom);
     if ( d == NULL )
@@ -1077,11 +1076,11 @@
 }

-long evtchn_bind_vcpu(unsigned int port, unsigned int vcpu_id)
+int evtchn_bind_vcpu(evtchn_port_t port, unsigned int vcpu_id)
 {
     struct domain *d = current->domain;
     struct evtchn *chn;
-    long rc = 0;
+    int rc = 0;
     struct vcpu *v;

     /* Use the vcpu info to prevent speculative out-of-bound accesses */
@@ -1220,12 +1219,11 @@ int evtchn_reset(struct domain *d, bool
     return rc;
 }

-static long evtchn_set_priority(const struct evtchn_set_priority *set_priority)
+static int evtchn_set_priority(const struct evtchn_set_priority *set_priority)
 {
     struct domain *d = current->domain;
-    unsigned int port = set_priority->port;
-    struct evtchn *chn = _evtchn_from_port(d, port);
-    long ret;
+    struct evtchn *chn = _evtchn_from_port(d, set_priority->port);
+    int ret;

     if ( !chn )
         return -EINVAL;
@@ -1241,7 +1239,7 @@

 long do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
-    long rc;
+    int rc;

     switch ( cmd )
     {
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -54,7 +54,7 @@ void send_guest_pirq(struct domain *, co
 int evtchn_send(struct domain *d, unsigned int lport);

 /* Bind a local event-channel port to the specified VCPU. */
-long evtchn_bind_vcpu(unsigned int port, unsigned int vcpu_id);
+int evtchn_bind_vcpu(evtchn_port_t port, unsigned int vcpu_id);

 /* Bind a VIRQ. */
 int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port);

From patchwork Tue Jan 5 13:13:38 2021
Subject: [PATCH v4 10/10] evtchn: drop acquiring of per-channel lock from
 send_guest_{global,vcpu}_virq()
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall, Wei Liu,
 Stefano Stabellini
Date: Tue, 5 Jan 2021 14:13:38 +0100

The per-vCPU virq_lock, which is being held anyway, together with there
not being any call to evtchn_port_set_pending() when
v->virq_to_evtchn[] is zero, provides sufficient guarantees. Undo the
lock addition done for XSA-343 (commit e045199c7c9c "evtchn: address
races with evtchn_reset()"). Update the description next to struct
evtchn_port_ops accordingly.

Signed-off-by: Jan Beulich
---
v4: Move to end of series, for being the most controversial change.
v3: Re-base.
v2: New.

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -863,7 +863,6 @@ void send_guest_vcpu_virq(struct vcpu *v
     unsigned long flags;
     int port;
     struct domain *d;
-    struct evtchn *chn;

     ASSERT(!virq_is_global(virq));

@@ -874,12 +873,7 @@ void send_guest_vcpu_virq(struct vcpu *v
         goto out;

     d = v->domain;
-    chn = evtchn_from_port(d, port);
-    if ( evtchn_read_trylock(chn) )
-    {
-        evtchn_port_set_pending(d, v->vcpu_id, chn);
-        evtchn_read_unlock(chn);
-    }
+    evtchn_port_set_pending(d, v->vcpu_id, evtchn_from_port(d, port));

  out:
     read_unlock_irqrestore(&v->virq_lock, flags);
@@ -908,11 +902,7 @@ void send_guest_global_virq(struct domai
         goto out;

     chn = evtchn_from_port(d, port);
-    if ( evtchn_read_trylock(chn) )
-    {
-        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-        evtchn_read_unlock(chn);
-    }
+    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);

  out:
     read_unlock_irqrestore(&v->virq_lock, flags);
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -193,9 +193,16 @@ int evtchn_reset(struct domain *d, bool
  * Low-level event channel port ops.
  *
  * All hooks have to be called with a lock held which prevents the channel
- * from changing state. This may be the domain event lock, the per-channel
- * lock, or in the case of sending interdomain events also the other side's
- * per-channel lock. Exceptions apply in certain cases for the PV shim.
+ * from changing state. This may be
+ * - the domain event lock,
+ * - the per-channel lock,
+ * - in the case of sending interdomain events the other side's per-channel
+ *   lock,
+ * - in the case of sending non-global vIRQ-s the per-vCPU virq_lock (in
+ *   combination with the ordering enforced through how the vCPU's
+ *   virq_to_evtchn[] gets updated),
+ * - in the case of sending global vIRQ-s vCPU 0's virq_lock.
+ * Exceptions apply in certain cases for the PV shim.
  */
 struct evtchn_port_ops {
     void (*init)(struct domain *d, struct evtchn *evtchn);
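To summarize the guarantee being relied upon (a simplified pairing, not
verbatim code; compare the send_guest_vcpu_virq() hunk above and patch
08's ECS_VIRQ close path):

    /* Sender: lookup and delivery both under the virq_lock read side. */
    read_lock_irqsave(&v->virq_lock, flags);
    port = read_atomic(&v->virq_to_evtchn[virq]);
    if ( port )   /* zero <=> nothing bound, so no set_pending() call */
        evtchn_port_set_pending(d, v->vcpu_id, evtchn_from_port(d, port));
    read_unlock_irqrestore(&v->virq_lock, flags);

    /*
     * Closer: zeroes v->virq_to_evtchn[virq] under the write side of the
     * same lock, so it cannot proceed while a sender holds the read lock,
     * and a sender can never look up a channel that close already freed.
     */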