From patchwork Fri Aug 30 11:56:46 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Igor Druzhinin <igor.druzhinin@citrix.com>
X-Patchwork-Id: 11123817
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Fri, 30 Aug 2019 12:56:46 +0100
Message-ID: <1567166206-13872-1-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4
Subject: [Xen-devel] [PATCH] x86/domain: don't destroy IOREQ servers on soft reset
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>, wl@xen.org,
    andrew.cooper3@citrix.com, paul.durrant@citrix.com, jbeulich@suse.com,
    roger.pau@citrix.com

Performing a soft reset should not opportunistically kill IOREQ servers
belonging to device emulators that might currently be running for a domain.
Every emulator is supposed to clean up its own IOREQ servers on exit. This
allows the toolstack to elect whether or not a particular device model
should be restarted.

The original code was introduced in 3235cbfe ("arch-specific hooks for
domain_soft_reset()"), most likely because Xen at the time still had the
'default' IOREQ server, which was used by QEMU and had no API call to
destroy it. Since the removal of the 'default' IOREQ server from Xen, that
reason has gone away. Moreover, since commit ba7fdd64b ("xen: cleanup IOREQ
server on exit") QEMU destroys its IOREQ server itself on exit, as every
other device emulator is supposed to do.

It is now safe to remove this code from the soft reset path. Existing
systems with an old QEMU will keep working: even if IOREQ servers are left
behind, a new QEMU instance will override their ranges anyway.

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/domain.c         | 2 --
 xen/arch/x86/hvm/hvm.c        | 5 -----
 xen/include/asm-x86/hvm/hvm.h | 1 -
 3 files changed, 8 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 2df3123..957f059 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -713,8 +713,6 @@ int arch_domain_soft_reset(struct domain *d)
     if ( !is_hvm_domain(d) )
         return -EINVAL;
 
-    hvm_domain_soft_reset(d);
-
     spin_lock(&d->event_lock);
     for ( i = 0; i < d->nr_pirqs ; i++ )
     {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 029eea3..2b81899 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5045,11 +5045,6 @@ void hvm_toggle_singlestep(struct vcpu *v)
     v->arch.hvm.single_step = !v->arch.hvm.single_step;
 }
 
-void hvm_domain_soft_reset(struct domain *d)
-{
-    hvm_destroy_all_ioreq_servers(d);
-}
-
 /*
  * Segment caches in VMCB/VMCS are inconsistent about which bits are checked,
  * important, and preserved across vmentry/exit.  Cook the values to make them
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index b327bd2..4e72d07 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -240,7 +240,6 @@ extern const struct hvm_function_table *start_vmx(void);
 int hvm_domain_initialise(struct domain *d);
 void hvm_domain_relinquish_resources(struct domain *d);
 void hvm_domain_destroy(struct domain *d);
-void hvm_domain_soft_reset(struct domain *d);
 int hvm_vcpu_initialise(struct vcpu *v);
 void hvm_vcpu_destroy(struct vcpu *v);
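
For context, a minimal sketch of the emulator-side lifecycle the commit
message relies on, assuming libxendevicemodel's IOREQ server calls. The
emulator_start/emulator_exit wrappers are hypothetical illustration only
(not part of this patch), and real code would add error handling:

#include <stddef.h>
#include <xendevicemodel.h>

static xendevicemodel_handle *dmod;
static ioservid_t ioservid;

/* On startup: open a devicemodel handle and register a per-emulator
 * IOREQ server -- there is no 'default' IOREQ server to fall back on
 * any more. */
int emulator_start(domid_t domid)
{
    dmod = xendevicemodel_open(NULL, 0);
    if ( !dmod )
        return -1;

    return xendevicemodel_create_ioreq_server(dmod, domid,
                                              HVM_IOREQSRV_BUFIOREQ_ATOMIC,
                                              &ioservid);
}

/* On exit: destroy the IOREQ server -- the cleanup that, with this
 * patch, Xen no longer performs on the emulator's behalf during soft
 * reset. */
void emulator_exit(domid_t domid)
{
    xendevicemodel_destroy_ioreq_server(dmod, domid, ioservid);
    xendevicemodel_close(dmod);
}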