From patchwork Thu Dec 5 22:30:05 2019
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 11275459
From: Andrew Cooper
To: Xen-devel
Date: Thu, 5 Dec 2019 22:30:05 +0000
Message-ID: <20191205223008.8623-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20191205223008.8623-1-andrew.cooper3@citrix.com>
References: <20191205223008.8623-1-andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH 3/6] xen/domctl: Consolidate hypercall continuation handling at the top level
Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper, Jan Beulich,
    Volodymyr Babchuk, Roger Pau Monné

More paths are going to start using hypercall continuations.  We could add
extra calls to hypercall_create_continuation(), but it is much easier to
handle -ERESTART once at the top level.

One complication is XEN_DOMCTL_shadow_op which, to retain ABI compatibility
in the XSA-97 security fix, turns a DOMCTL continuation into
__HYPERVISOR_arch_1.  This remains as it was, gaining a comment explaining
what is going on.

With -ERESTART handling in place, the !domctl_lock_acquire() path can use
the normal exit path, instead of open-coding a subset of it locally.

Signed-off-by: Andrew Cooper
---
CC: Jan Beulich
CC: Wei Liu
CC: Roger Pau Monné
CC: Stefano Stabellini
CC: Julien Grall
CC: Volodymyr Babchuk
---
 xen/arch/x86/domctl.c           |  5 ++++-
 xen/arch/x86/mm/hap/hap.c       |  3 +--
 xen/arch/x86/mm/shadow/common.c |  3 +--
 xen/common/domctl.c             | 19 +++++--------------
 4 files changed, 11 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index b461aadbd6..2fa0e7dda5 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -326,9 +326,12 @@ long arch_do_domctl(
 
     switch ( domctl->cmd )
     {
-
     case XEN_DOMCTL_shadow_op:
         ret = paging_domctl(d, &domctl->u.shadow_op, u_domctl, 0);
+        /*
+         * Continuations from paging_domctl() switch index to arch_1, and
+         * can't use the common domctl continuation path.
+         */
         if ( ret == -ERESTART )
             return hypercall_create_continuation(__HYPERVISOR_arch_1,
                                                  "h", u_domctl);
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 3d93f3451c..3996e17b7e 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -600,8 +600,7 @@ int hap_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
         paging_unlock(d);
         if ( preempted )
             /* Not finished.  Set up to re-run the call. */
-            rc = hypercall_create_continuation(__HYPERVISOR_domctl, "h",
-                                               u_domctl);
+            rc = -ERESTART;
         else
             /* Finished.  Return the new allocation */
             sc->mb = hap_get_allocation(d);
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 6212ec2c4a..17ca21104f 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3400,8 +3400,7 @@ int shadow_domctl(struct domain *d,
         paging_unlock(d);
         if ( preempted )
             /* Not finished.  Set up to re-run the call. */
-            rc = hypercall_create_continuation(
-                __HYPERVISOR_domctl, "h", u_domctl);
+            rc = -ERESTART;
         else
             /* Finished.  Return the new allocation */
             sc->mb = shadow_get_allocation(d);
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 03d0226039..cb0295085d 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -415,10 +415,8 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
     if ( !domctl_lock_acquire() )
     {
-        if ( d && d != dom_io )
-            rcu_unlock_domain(d);
-        return hypercall_create_continuation(
-            __HYPERVISOR_domctl, "h", u_domctl);
+        ret = -ERESTART;
+        goto domctl_out_unlock_domonly;
     }
 
     switch ( op->cmd )
@@ -438,9 +436,6 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         if ( guest_handle_is_null(op->u.vcpucontext.ctxt) )
         {
             ret = vcpu_reset(v);
-            if ( ret == -ERESTART )
-                ret = hypercall_create_continuation(
-                    __HYPERVISOR_domctl, "h", u_domctl);
 
             break;
         }
@@ -469,10 +464,6 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             domain_pause(d);
             ret = arch_set_info_guest(v, c);
             domain_unpause(d);
-
-            if ( ret == -ERESTART )
-                ret = hypercall_create_continuation(
-                    __HYPERVISOR_domctl, "h", u_domctl);
         }
 
         free_vcpu_guest_context(c.nat);
@@ -585,9 +576,6 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         domain_lock(d);
         ret = domain_kill(d);
         domain_unlock(d);
-        if ( ret == -ERESTART )
-            ret = hypercall_create_continuation(
-                __HYPERVISOR_domctl, "h", u_domctl);
         goto domctl_out_unlock_domonly;
 
     case XEN_DOMCTL_setnodeaffinity:
@@ -1080,6 +1068,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
     if ( copyback && __copy_to_guest(u_domctl, op, 1) )
         ret = -EFAULT;
 
+    if ( ret == -ERESTART )
+        ret = hypercall_create_continuation(__HYPERVISOR_domctl,
+                                            "h", u_domctl);
     return ret;
 }
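
For readers unfamiliar with the pattern the diff converges on, the standalone
sketch below illustrates it: sub-operations report preemption by returning
-ERESTART and unwinding, and only the top-level hypercall entry point turns
that into a continuation.  This is not Xen code; ERESTART, sub_op(),
top_level_op() and create_continuation() are illustrative stand-ins for the
real -ERESTART and hypercall_create_continuation(__HYPERVISOR_domctl, "h",
u_domctl).

#include <stdio.h>

#define ERESTART 85   /* stand-in for Xen's -ERESTART error code */

/*
 * Stand-in for hypercall_create_continuation(): arrange for the guest to
 * re-issue the same hypercall so the work can resume where it left off.
 */
static long create_continuation(void)
{
    printf("continuation created; hypercall will be re-issued\n");
    return 0;
}

/*
 * A sub-operation deep in the call tree: on preemption it simply returns
 * -ERESTART and unwinds, rather than creating the continuation itself.
 */
static long sub_op(int preempted)
{
    return preempted ? -ERESTART : 0;
}

/*
 * The top-level entry point: the single place where -ERESTART becomes a
 * continuation, mirroring the new tail of do_domctl() in this patch.
 */
static long top_level_op(int preempted)
{
    long ret = sub_op(preempted);

    if ( ret == -ERESTART )
        ret = create_continuation();

    return ret;
}

int main(void)
{
    printf("preempted run: ret = %ld\n", top_level_op(1));
    printf("completed run: ret = %ld\n", top_level_op(0));
    return 0;
}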