From patchwork Wed Aug 28 20:39:56 2019
X-Patchwork-Submitter: Igor Druzhinin
X-Patchwork-Id: 11119907
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Wed, 28 Aug 2019 21:39:56 +0100
Message-ID: <1567024796-13463-1-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4
Subject: [Xen-devel] [PATCH] x86/domain: don't destroy IOREQ servers on soft reset
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>, wl@xen.org,
    andrew.cooper3@citrix.com, paul.durrant@citrix.com,
    jbeulich@suse.com, roger.pau@citrix.com

Performing a soft reset should not opportunistically kill the IOREQ servers
of device emulators that might currently be running for a domain. Every
emulator is supposed to clean up its own IOREQ servers on exit. This allows
a toolstack to elect whether or not a particular device model should be
restarted.

Since commit ba7fdd64b ("xen: cleanup IOREQ server on exit") QEMU destroys
its IOREQ server on exit, as every other device emulator is supposed to do.
It's now safe to remove this code from the soft reset path - existing
systems with an old QEMU keep working, because even if IOREQ servers are
left behind, a new QEMU instance will override their ranges anyway.

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---
 xen/arch/x86/domain.c         | 2 --
 xen/arch/x86/hvm/hvm.c        | 5 -----
 xen/include/asm-x86/hvm/hvm.h | 1 -
 3 files changed, 8 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 2df3123..957f059 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -713,8 +713,6 @@ int arch_domain_soft_reset(struct domain *d)
     if ( !is_hvm_domain(d) )
         return -EINVAL;
 
-    hvm_domain_soft_reset(d);
-
     spin_lock(&d->event_lock);
     for ( i = 0; i < d->nr_pirqs ; i++ )
     {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 029eea3..2b81899 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5045,11 +5045,6 @@ void hvm_toggle_singlestep(struct vcpu *v)
     v->arch.hvm.single_step = !v->arch.hvm.single_step;
 }
 
-void hvm_domain_soft_reset(struct domain *d)
-{
-    hvm_destroy_all_ioreq_servers(d);
-}
-
 /*
  * Segment caches in VMCB/VMCS are inconsistent about which bits are checked,
  * important, and preserved across vmentry/exit. Cook the values to make them
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index b327bd2..4e72d07 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -240,7 +240,6 @@ extern const struct hvm_function_table *start_vmx(void);
 int hvm_domain_initialise(struct domain *d);
 void hvm_domain_relinquish_resources(struct domain *d);
 void hvm_domain_destroy(struct domain *d);
-void hvm_domain_soft_reset(struct domain *d);
 
 int hvm_vcpu_initialise(struct vcpu *v);
 void hvm_vcpu_destroy(struct vcpu *v);
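
For context, the emulator-side cleanup the commit message relies on looks
roughly like the sketch below: a device model registers an IOREQ server
through the public libxendevicemodel API at startup and destroys it itself
on exit, so the hypervisor no longer needs to do so on soft reset. This is
an illustrative sketch only, not part of the patch: the domid/servid
plumbing and the error handling are assumptions, not code from QEMU or Xen.

/* Minimal sketch of an emulator's IOREQ server lifecycle (assumed wiring). */
#include <stdlib.h>
#include <xendevicemodel.h>

static xendevicemodel_handle *dmod;
static domid_t domid;      /* domain being emulated; assumed set by caller */
static ioservid_t servid;  /* IOREQ server id returned at creation time */

/* On startup: register an IOREQ server for the target domain. */
static int emulator_setup(void)
{
    dmod = xendevicemodel_open(NULL, 0);
    if ( !dmod )
        return -1;

    return xendevicemodel_create_ioreq_server(dmod, domid,
                                              HVM_IOREQSRV_BUFIOREQ_ATOMIC,
                                              &servid);
}

/*
 * On exit: tear the IOREQ server down ourselves, as QEMU does since
 * commit ba7fdd64b, instead of relying on the hypervisor to destroy
 * it opportunistically during soft reset.
 */
static void emulator_teardown(void)
{
    xendevicemodel_destroy_ioreq_server(dmod, domid, servid);
    xendevicemodel_close(dmod);
}

With cleanup living in the emulator, a soft reset leaves any still-running
device models untouched, and the toolstack decides per device model whether
to restart it.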