From patchwork Wed Nov 22 12:39:16 2017
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 10070421
Message-Id: <5A157E040200007800190FAE@prv-mh.provo.novell.com>
Date: Wed, 22 Nov 2017 05:39:16 -0700
From: "Jan Beulich" <JBeulich@suse.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Cc: Sergey Dyasli, Stefano Stabellini, Wei Liu, Igor Druzhinin,
    George Dunlap, Andrew Cooper,
    Konrad Rzeszutek Wilk, Ian Jackson, Tim Deegan, Dario Faggioli
Subject: [Xen-devel] [PATCH v2] sync CPU state upon final domain destruction

See the code comment being added for why we need this.

The call is placed here to balance the desire to prevent similar
future issues (the risk of which would grow if it sat further down the
call stack, e.g. in vmx_vcpu_destroy()) against the intention to limit
the performance impact (otherwise it could also go into
rcu_do_batch(), paralleling the use in do_tasklet_work()).

Reported-by: Igor Druzhinin
Signed-off-by: Jan Beulich
Acked-by: Andrew Cooper
---
v2: Move from vmx_vcpu_destroy() to complete_domain_destroy().

--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -794,6 +794,14 @@ static void complete_domain_destroy(stru
     struct vcpu *v;
     int i;
 
+    /*
+     * Flush all state for the vCPU previously having run on the current CPU.
+     * This is in particular relevant for x86 HVM ones on VMX, so that this
+     * flushing of state won't happen from the TLB flush IPI handler behind
+     * the back of a vmx_vmcs_enter() / vmx_vmcs_exit() section.
+     */
+    sync_local_execstate();
+
     for ( i = d->max_vcpus - 1; i >= 0; i-- )
     {
         if ( (v = d->vcpu[i]) == NULL )
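
To make the hazard described by the new comment concrete:
complete_domain_destroy() runs as an RCU callback, and on x86 a switch
to the idle vCPU is lazy, so the state of the vCPU that last ran on the
current pCPU may still be loaded there. Whoever next calls
sync_local_execstate() flushes it, and one such caller is the TLB flush
IPI handler, which can interrupt the destruction path in the middle of
a vmx_vmcs_enter() / vmx_vmcs_exit() section. The standalone C sketch
below models only that ordering; it is illustrative, not Xen code.
Every *_model symbol and both booleans are invented for this sketch;
only sync_local_execstate(), vmx_vmcs_enter()/vmx_vmcs_exit() and
complete_domain_destroy() name real Xen functions.

/*
 * Illustrative model only, not Xen code. The real primitives live in
 * the Xen tree; everything here is a simplified stand-in.
 */
#include <stdbool.h>
#include <stdio.h>

/* Set while a vCPU's state is still (lazily) loaded on this pCPU. */
static bool lazy_state_loaded = true;

/* Set between vmx_vmcs_enter() and vmx_vmcs_exit() in the real code. */
static bool in_vmcs_section;

/*
 * Stand-in for sync_local_execstate(): completes the lazy context
 * switch, flushing the previous vCPU's state off this pCPU.
 */
static void sync_local_execstate_model(void)
{
    if ( lazy_state_loaded )
    {
        /*
         * In the real handler this touches per-CPU VMX state; doing so
         * inside a VMCS section corrupts the current VMCS pointer.
         */
        if ( in_vmcs_section )
            printf("BUG: state flushed behind vmx_vmcs_enter()'s back\n");
        lazy_state_loaded = false;
    }
}

/* Stand-in for the TLB flush IPI handler, which can run at almost any
 * point and performs the same lazy-state flush. */
static void tlb_flush_ipi_model(void)
{
    sync_local_execstate_model();
}

/* Stand-in for complete_domain_destroy(), with and without the fix. */
static void complete_domain_destroy_model(bool apply_fix)
{
    if ( apply_fix )
        sync_local_execstate_model(); /* the patch's addition */

    in_vmcs_section = true;           /* vmx_vmcs_enter() */
    tlb_flush_ipi_model();            /* IPI arrives at the worst time */
    in_vmcs_section = false;          /* vmx_vmcs_exit() */
}

int main(void)
{
    complete_domain_destroy_model(true);  /* fixed: silent */
    lazy_state_loaded = true;
    complete_domain_destroy_model(false); /* unfixed: prints BUG line */
    return 0;
}

Doing the flush at the top of the RCU callback, as the patch does,
leaves nothing for a later IPI to flush during any of the per-arch
teardown below it, which is the "prevent similar future issues" half
of the trade-off described above.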