From patchwork Tue Nov 5 19:49:09 2019
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 11228473
From: Andrew Cooper
To: Xen-devel
Cc: Juergen Gross, Andrew Cooper, Ross Lagerwall, Konrad Rzeszutek Wilk
Date: Tue, 5 Nov 2019 19:49:09 +0000
Message-ID: <20191105194909.32234-1-andrew.cooper3@citrix.com>
In-Reply-To: <20191105194317.16232-3-andrew.cooper3@citrix.com>
References: <20191105194317.16232-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
Subject: [Xen-devel] [PATCH v1.5] x86/livepatch: Prevent patching with active waitqueues

The safety of livepatching depends on every stack having been unwound, but
there is one corner case where this is not true.  The Sharing/Paging/Monitor
infrastructure may use waitqueues, which copy the stack frame sideways and
longjmp() to a different vcpu.

This case is rare, and can be worked around by pausing the offending
domain(s), waiting for their rings to drain, then performing a livepatch.

In the case that there is an active waitqueue, fail the livepatch attempt
with -EBUSY, which is preferable to the fireworks which occur from trying
to unwind the old stack frame at a later point.

Signed-off-by: Andrew Cooper
---
CC: Konrad Rzeszutek Wilk
CC: Ross Lagerwall
CC: Juergen Gross

This fix wants backporting, and is long overdue for posting upstream.

v1.5:
 * Send out a non-stale patch this time.
---
 xen/arch/arm/livepatch.c    |  5 +++++
 xen/arch/x86/livepatch.c    | 40 ++++++++++++++++++++++++++++++++++++++++
 xen/common/livepatch.c      |  8 ++++++++
 xen/include/xen/livepatch.h |  1 +
 4 files changed, 54 insertions(+)

diff --git a/xen/arch/arm/livepatch.c b/xen/arch/arm/livepatch.c
index 00c5e2bc45..915e9d926a 100644
--- a/xen/arch/arm/livepatch.c
+++ b/xen/arch/arm/livepatch.c
@@ -18,6 +18,11 @@
 
 void *vmap_of_xen_text;
 
+int arch_livepatch_safety_check(void)
+{
+    return 0;
+}
+
 int arch_livepatch_quiesce(void)
 {
     mfn_t text_mfn;
diff --git a/xen/arch/x86/livepatch.c b/xen/arch/x86/livepatch.c
index c82cf53b9e..2749cbc5cf 100644
--- a/xen/arch/x86/livepatch.c
+++ b/xen/arch/x86/livepatch.c
@@ -10,10 +10,50 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
+static bool has_active_waitqueue(const struct vm_event_domain *ved)
+{
+    /* ved may be xzalloc()'d without INIT_LIST_HEAD() yet. */
+    return (ved && !list_head_is_null(&ved->wq.list) &&
+            !list_empty(&ved->wq.list));
+}
+
+/*
+ * x86's implementation of waitqueue violates the livepatching safety
+ * principle of having unwound every CPU's stack before modifying live
+ * content.
+ *
+ * Search through every domain and check that no vCPUs have an active
+ * waitqueue.
+ */
+int arch_livepatch_safety_check(void)
+{
+    struct domain *d;
+
+    for_each_domain ( d )
+    {
+#ifdef CONFIG_MEM_SHARING
+        if ( has_active_waitqueue(d->vm_event_share) )
+            goto fail;
+#endif
+#ifdef CONFIG_MEM_PAGING
+        if ( has_active_waitqueue(d->vm_event_paging) )
+            goto fail;
+#endif
+        if ( has_active_waitqueue(d->vm_event_monitor) )
+            goto fail;
+    }
+
+    return 0;
+
+ fail:
+    printk(XENLOG_ERR LIVEPATCH "%pd found with active waitqueue\n", d);
+    return -EBUSY;
+}
+
 int arch_livepatch_quiesce(void)
 {
     /* Disable WP to allow changes to read-only pages.
      */
diff --git a/xen/common/livepatch.c b/xen/common/livepatch.c
index 962647616a..8386e611f2 100644
--- a/xen/common/livepatch.c
+++ b/xen/common/livepatch.c
@@ -1060,6 +1060,14 @@ static int apply_payload(struct payload *data)
     unsigned int i;
     int rc;
 
+    rc = arch_livepatch_safety_check();
+    if ( rc )
+    {
+        printk(XENLOG_ERR LIVEPATCH "%s: Safety checks failed: %d\n",
+               data->name, rc);
+        return rc;
+    }
+
     printk(XENLOG_INFO LIVEPATCH "%s: Applying %u functions\n",
            data->name, data->nfuncs);
diff --git a/xen/include/xen/livepatch.h b/xen/include/xen/livepatch.h
index 1b1817ca0d..69ede75d20 100644
--- a/xen/include/xen/livepatch.h
+++ b/xen/include/xen/livepatch.h
@@ -104,6 +104,7 @@ static inline int livepatch_verify_distance(const struct livepatch_func *func)
  * These functions are called around the critical region patching live code,
  * for an architecture to make appropriate global state adjustments.
  */
+int arch_livepatch_safety_check(void);
 int arch_livepatch_quiesce(void);
 void arch_livepatch_revive(void);