From patchwork Mon Jul 29 15:41:12 2019
X-Patchwork-Submitter: Ross Lagerwall
X-Patchwork-Id: 11064085
From: Ross Lagerwall <ross.lagerwall@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Mon, 29 Jul 2019 16:41:12 +0100
Message-ID: <20190729154112.7644-1-ross.lagerwall@citrix.com>
X-Mailer: git-send-email 2.17.2
Subject: [Xen-devel] [PATCH] xen: Avoid calling device suspend/resume callbacks
Cc: Juergen Gross, Ross Lagerwall, Boris Ostrovsky, Stefano Stabellini

When suspending/resuming or migrating under Xen, there is little need to
suspend and resume all of the attached devices, since Xen/QEMU should
correctly maintain the hardware state. Drop these calls and replace them
with more specific calls that ensure Xen frontend devices are properly
reconnected.

This change is needed to make NVIDIA vGPU migration work under Xen, since
suspending the vGPU device interferes with the migration working correctly.
It also has the added benefit of reducing migration downtime, by
approximately 500ms with an HVM guest in my environment.

Tested by putting an HVM guest through 1000 migration cycles. I also tested
PV guest migration, though less rigorously.

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
---
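Note for reviewers (not part of the commit message): since the change to
do_suspend() is split across three hunks, below is a condensed sketch of the
resulting suspend/resume sequence with this patch applied. The names are taken
from drivers/xen/manage.c and the hunks that follow; process freezing, the
resume notifier chain, the shutdown watch plumbing and error handling are
omitted, so this is only an illustration, not compilable code.

static void do_suspend_sketch(void)
{
	struct suspend_info si;

	xenbus_suspend_frontends();	/* replaces dpm_suspend_start(PMSG_FREEZE) */

	xs_suspend();			/* quiesce xenstore traffic */

	suspend_device_irqs();		/* replaces dpm_suspend_end(PMSG_FREEZE) */

	xen_arch_suspend();

	si.cancelled = 1;
	stop_machine(xen_suspend, &si, cpumask_of(0));

	if (!si.cancelled)
		xen_console_resume();	/* resume console as early as possible */

	resume_device_irqs();		/* replaces dpm_resume_start(...) */

	xen_arch_resume();

	if (!si.cancelled) {
		xs_resume();
		xenbus_resume_frontends();	/* reconnect frontend devices */
	} else {
		xs_suspend_cancel();
	}
}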
 drivers/xen/manage.c                       | 24 +++++++-----------------
 drivers/xen/xenbus/xenbus_probe_frontend.c | 22 ++++++++++++++++++++++
 include/xen/xenbus.h                       |  3 +++
 3 files changed, 32 insertions(+), 17 deletions(-)

diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
index cd046684e0d1..53768e0e2560 100644
--- a/drivers/xen/manage.c
+++ b/drivers/xen/manage.c
@@ -113,21 +113,12 @@ static void do_suspend(void)
 		goto out_thaw;
 	}
 
-	err = dpm_suspend_start(PMSG_FREEZE);
-	if (err) {
-		pr_err("%s: dpm_suspend_start %d\n", __func__, err);
-		goto out_thaw;
-	}
+	xenbus_suspend_frontends();
 
 	printk(KERN_DEBUG "suspending xenstore...\n");
 	xs_suspend();
 
-	err = dpm_suspend_end(PMSG_FREEZE);
-	if (err) {
-		pr_err("dpm_suspend_end failed: %d\n", err);
-		si.cancelled = 0;
-		goto out_resume;
-	}
+	suspend_device_irqs();
 
 	xen_arch_suspend();
 
@@ -141,7 +132,7 @@ static void do_suspend(void)
 
 	raw_notifier_call_chain(&xen_resume_notifier, 0, NULL);
 
-	dpm_resume_start(si.cancelled ? PMSG_THAW : PMSG_RESTORE);
+	resume_device_irqs();
 
 	if (err) {
 		pr_err("failed to start xen_suspend: %d\n", err);
@@ -150,13 +141,12 @@ static void do_suspend(void)
 
 	xen_arch_resume();
 
-out_resume:
-	if (!si.cancelled)
+	if (!si.cancelled) {
 		xs_resume();
-	else
+		xenbus_resume_frontends();
+	} else {
 		xs_suspend_cancel();
-
-	dpm_resume_end(si.cancelled ? PMSG_THAW : PMSG_RESTORE);
+	}
 
 out_thaw:
 	thaw_processes();
diff --git a/drivers/xen/xenbus/xenbus_probe_frontend.c b/drivers/xen/xenbus/xenbus_probe_frontend.c
index a7d90a719cea..8cd836c402e1 100644
--- a/drivers/xen/xenbus/xenbus_probe_frontend.c
+++ b/drivers/xen/xenbus/xenbus_probe_frontend.c
@@ -153,6 +153,28 @@ static struct xen_bus_type xenbus_frontend = {
 	},
 };
 
+static int xenbus_suspend_one(struct device *dev, void *data)
+{
+	xenbus_dev_suspend(dev);
+	return 0;
+}
+
+void xenbus_suspend_frontends(void)
+{
+	bus_for_each_dev(&xenbus_frontend.bus, NULL, NULL, xenbus_suspend_one);
+}
+
+static int xenbus_resume_one(struct device *dev, void *data)
+{
+	xenbus_frontend_dev_resume(dev);
+	return 0;
+}
+
+void xenbus_resume_frontends(void)
+{
+	bus_for_each_dev(&xenbus_frontend.bus, NULL, NULL, xenbus_resume_one);
+}
+
 static void frontend_changed(struct xenbus_watch *watch,
 			     const char *path, const char *token)
 {
diff --git a/include/xen/xenbus.h b/include/xen/xenbus.h
index 869c816d5f8c..71eeb442c375 100644
--- a/include/xen/xenbus.h
+++ b/include/xen/xenbus.h
@@ -233,4 +233,7 @@ extern const struct file_operations xen_xenbus_fops;
 extern struct xenstore_domain_interface *xen_store_interface;
 extern int xen_store_evtchn;
 
+void xenbus_suspend_frontends(void);
+void xenbus_resume_frontends(void);
+
 #endif /* _XEN_XENBUS_H */