From patchwork Mon Jun 3 12:25:26 2019
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 10972833
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Petre Pircalabu, Andrew Cooper, Tamas K Lengyel, Razvan Cojocaru
Date: Mon, 3 Jun 2019 13:25:26 +0100
Message-ID: <1559564728-17167-4-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1559564728-17167-1-git-send-email-andrew.cooper3@citrix.com>
References: <1559564728-17167-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.1.4
Subject: [Xen-devel] [PATCH 3/5] xen/vm-event: Remove unnecessary
 vm_event_domain indirection

The use of (*ved)-> leads to poor code generation: the compiler can't
assume the pointer is unchanged across function calls, so must keep
re-reading it, and the resulting code is hard to follow.

For both vm_event_{en,dis}able(), rename the ved parameter to p_ved, and
work primarily with a local ved pointer.

This has a key advantage in vm_event_enable(): the partially constructed
vm_event_domain only becomes globally visible once it is fully
constructed.  As a consequence, the spinlock doesn't need holding.

Furthermore, rearrange the order of operations to be more sensible.
Check for repeated enables and a bad HVM_PARAM before allocating memory,
and gather the trivial setup into one place, dropping the redundant
zeroing.

No practical change that callers will notice.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Razvan Cojocaru
---
CC: Razvan Cojocaru
CC: Tamas K Lengyel
CC: Petre Pircalabu
---
 xen/common/vm_event.c | 90 +++++++++++++++++++++++----------------------------
 1 file changed, 40 insertions(+), 50 deletions(-)
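For reference, the code-generation point is easy to demonstrate with a
standalone sketch (hypothetical names; not Xen code).  Through a double
pointer, every (*p)->field access forces the compiler to re-read *p,
because any intervening call may have modified it; a local pointer is
known not to change:

    struct obj { int a, b; };

    void external_call(void);            /* opaque to the optimiser */

    void via_double_ptr(struct obj **p)
    {
        (*p)->a = 1;                     /* reads *p */
        external_call();                 /* may modify *p */
        (*p)->b = 2;                     /* forces a re-read of *p */
    }

    void via_local_ptr(struct obj **p)
    {
        struct obj *o = *p;              /* single read; 'o' itself cannot change */

        o->a = 1;
        external_call();
        o->b = 2;                        /* no re-read needed */
    }

Compiling the two functions at -O2 shows the extra load in
via_double_ptr().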
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index db975e9..dcba98c 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -38,74 +38,63 @@ static int vm_event_enable(
     struct domain *d,
     struct xen_domctl_vm_event_op *vec,
-    struct vm_event_domain **ved,
+    struct vm_event_domain **p_ved,
     int pause_flag,
     int param,
     xen_event_channel_notification_t notification_fn)
 {
     int rc;
     unsigned long ring_gfn = d->arch.hvm.params[param];
+    struct vm_event_domain *ved;

-    if ( !*ved )
-        *ved = xzalloc(struct vm_event_domain);
-    if ( !*ved )
-        return -ENOMEM;
-
-    /* Only one helper at a time. If the helper crashed,
-     * the ring is in an undefined state and so is the guest.
+    /*
+     * Only one connected agent at a time.  If the helper crashed, the ring is
+     * in an undefined state, and the guest is most likely unrecoverable.
      */
-    if ( (*ved)->ring_page )
-        return -EBUSY;;
+    if ( *p_ved != NULL )
+        return -EBUSY;

-    /* The parameter defaults to zero, and it should be
-     * set to something */
+    /* No chosen ring GFN?  Nothing we can do. */
     if ( ring_gfn == 0 )
         return -EOPNOTSUPP;

-    spin_lock_init(&(*ved)->lock);
-    spin_lock(&(*ved)->lock);
+    ved = xzalloc(struct vm_event_domain);
+    if ( !ved )
+        return -ENOMEM;

-    rc = vm_event_init_domain(d);
+    /* Trivial setup. */
+    spin_lock_init(&ved->lock);
+    init_waitqueue_head(&ved->wq);
+    ved->pause_flag = pause_flag;

+    rc = vm_event_init_domain(d);
     if ( rc < 0 )
         goto err;

-    rc = prepare_ring_for_helper(d, ring_gfn, &(*ved)->ring_pg_struct,
-                                 &(*ved)->ring_page);
+    rc = prepare_ring_for_helper(d, ring_gfn, &ved->ring_pg_struct,
+                                 &ved->ring_page);
     if ( rc < 0 )
         goto err;

-    /* Set the number of currently blocked vCPUs to 0. */
-    (*ved)->blocked = 0;
+    FRONT_RING_INIT(&ved->front_ring,
+                    (vm_event_sring_t *)ved->ring_page,
+                    PAGE_SIZE);

-    /* Allocate event channel */
     rc = alloc_unbound_xen_event_channel(d, 0, current->domain->domain_id,
                                          notification_fn);
     if ( rc < 0 )
         goto err;

-    (*ved)->xen_port = vec->u.enable.port = rc;
+    ved->xen_port = vec->u.enable.port = rc;

-    /* Prepare ring buffer */
-    FRONT_RING_INIT(&(*ved)->front_ring,
-                    (vm_event_sring_t *)(*ved)->ring_page,
-                    PAGE_SIZE);
-
-    /* Save the pause flag for this particular ring. */
-    (*ved)->pause_flag = pause_flag;
-
-    /* Initialize the last-chance wait queue. */
-    init_waitqueue_head(&(*ved)->wq);
+    /* Success.  Fill in the domain's appropriate ved. */
+    *p_ved = ved;

-    spin_unlock(&(*ved)->lock);
     return 0;

  err:
-    destroy_ring_for_helper(&(*ved)->ring_page,
-                            (*ved)->ring_pg_struct);
-    spin_unlock(&(*ved)->lock);
-    xfree(*ved);
-    *ved = NULL;
+    destroy_ring_for_helper(&ved->ring_page, ved->ring_pg_struct);
+    xfree(ved);

     return rc;
 }
@@ -190,43 +179,44 @@ void vm_event_wake(struct domain *d, struct vm_event_domain *ved)
         vm_event_wake_blocked(d, ved);
 }

-static int vm_event_disable(struct domain *d, struct vm_event_domain **ved)
+static int vm_event_disable(struct domain *d, struct vm_event_domain **p_ved)
 {
-    if ( vm_event_check_ring(*ved) )
+    struct vm_event_domain *ved = *p_ved;
+
+    if ( vm_event_check_ring(ved) )
     {
         struct vcpu *v;

-        spin_lock(&(*ved)->lock);
+        spin_lock(&ved->lock);

-        if ( !list_empty(&(*ved)->wq.list) )
+        if ( !list_empty(&ved->wq.list) )
         {
-            spin_unlock(&(*ved)->lock);
+            spin_unlock(&ved->lock);
             return -EBUSY;
         }

         /* Free domU's event channel and leave the other one unbound */
-        free_xen_event_channel(d, (*ved)->xen_port);
+        free_xen_event_channel(d, ved->xen_port);

         /* Unblock all vCPUs */
         for_each_vcpu ( d, v )
         {
-            if ( test_and_clear_bit((*ved)->pause_flag, &v->pause_flags) )
+            if ( test_and_clear_bit(ved->pause_flag, &v->pause_flags) )
             {
                 vcpu_unpause(v);
-                (*ved)->blocked--;
+                ved->blocked--;
             }
         }

-        destroy_ring_for_helper(&(*ved)->ring_page,
-                                (*ved)->ring_pg_struct);
+        destroy_ring_for_helper(&ved->ring_page, ved->ring_pg_struct);

         vm_event_cleanup_domain(d);

-        spin_unlock(&(*ved)->lock);
+        spin_unlock(&ved->lock);
     }

-    xfree(*ved);
-    *ved = NULL;
+    xfree(ved);
+    *p_ved = NULL;

     return 0;
 }
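For reference, the lock-free construction in vm_event_enable() is the
usual construct-then-publish pattern: all initialisation happens through
a pointer which no other CPU can see, and the object only becomes
reachable with the final assignment.  A minimal standalone sketch
(hypothetical names; assumes, as the patch relies on, that concurrent
enable/disable calls are already serialised against each other):

    #include <errno.h>
    #include <stdlib.h>

    struct thing { int setting; /* ... */ };

    int thing_enable(struct thing **p_thing, int setting)
    {
        struct thing *t;

        if ( *p_thing != NULL )         /* repeated enable? */
            return -EBUSY;

        t = calloc(1, sizeof(*t));      /* private: nothing else sees 't' yet */
        if ( t == NULL )
            return -ENOMEM;

        t->setting = setting;           /* complete all setup while private */

        *p_thing = t;                   /* publish only when fully formed */

        return 0;
    }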