From patchwork Thu Jan 5 02:55:23 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Xuquan (Euler)"
X-Patchwork-Id: 9498285
X-Env-Sender: xuquan8@huawei.com
From: "Xuquan (Quan Xu)"
To: "xen-devel@lists.xen.org"
Thread-Topic: [PATCH v6] x86/apicv: fix RTC periodic timer and apicv issue
Date: Thu, 5 Jan 2017 02:55:23 +0000
Accept-Language: en-US
Content-Language: zh-CN
Cc: "yang.zhang.wz@gmail.com", Lan Tianyu, Kevin Tian, Jan Beulich,
 Andrew Cooper, George Dunlap, Jun Nakajima, Chao Gao
Subject: [Xen-devel] [PATCH v6] x86/apicv: fix RTC periodic timer and apicv issue
List-Id: Xen developer discussion
Sender: "Xen-devel"

From 7c0091cdce951f707bd8dff906aabdf5d645a85f Mon Sep 17 00:00:00 2001
From: Quan Xu
Date: Thu, 5 Jan 2017 10:38:39 +0800
Subject: [PATCH v6] x86/apicv: fix RTC periodic timer and apicv issue

When Xen apicv is enabled, wall clock time runs fast on a Windows7-32
guest under high load (with 2 vCPUs; captured from xentrace, under high
load the count of IPI interrupts between these vCPUs increases rapidly).
If an IPI interrupt (vector 0xe1) and a periodic timer interrupt (vector
0xd1) are both pending (their bits set in vIRR), the IPI interrupt
unfortunately has higher priority than the periodic timer interrupt. Xen
updates the IPI interrupt bit set in vIRR to the guest interrupt status
(RVI) as the higher priority, and apicv (Virtual-Interrupt Delivery)
delivers the IPI interrupt within VMX non-root operation without a
VM exit. Within VMX non-root operation, once the periodic timer
interrupt bit is set in vIRR and becomes the highest, apicv delivers the
periodic timer interrupt within VMX non-root operation as well.
But in the current code, if Xen does not update the periodic timer
interrupt bit set in vIRR to the guest interrupt status (RVI) directly,
Xen is not aware of this case and does not decrease the count
(pending_intr_nr) of pending periodic timer interrupts, so Xen will
deliver a periodic timer interrupt again.

Also, since we update the periodic timer interrupt on every VM entry,
there is a chance that an already-injected instance (before the
EOI-induced exit happens) incurs another pending IRR setting if a VM
exit happens between virtual interrupt injection (vIRR->0, vISR->1) and
the EOI-induced exit (vISR->0), because pt_intr_post has not been
invoked yet; the guest then receives extra periodic timer interrupts.

So we set eoi_exit_bitmap for intack.vector, giving pending periodic
timer interrupts a chance to be posted once they become the
highest-priority pending interrupt.

Signed-off-by: Quan Xu
Acked-by: Kevin Tian
Tested-by: Chao Gao
---
 xen/arch/x86/hvm/vmx/intr.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
index 639a705..24e4505 100644
--- a/xen/arch/x86/hvm/vmx/intr.c
+++ b/xen/arch/x86/hvm/vmx/intr.c
@@ -312,12 +312,15 @@ void vmx_intr_assist(void)
         unsigned int i, n;
 
         /*
-         * Set eoi_exit_bitmap for periodic timer interrup to cause EOI-induced VM
-         * exit, then pending periodic time interrups have the chance to be injected
-         * for compensation
+         * intack.vector is the highest priority vector. So we set eoi_exit_bitmap
+         * for intack.vector - give a chance to post periodic time interrupts when
+         * periodic time interrupts become the highest one
          */
-        if (pt_vector != -1)
-            vmx_set_eoi_exit_bitmap(v, pt_vector);
+        if ( pt_vector != -1 )
+        {
+            ASSERT(intack.vector >= pt_vector);
+            vmx_set_eoi_exit_bitmap(v, intack.vector);
+        }
 
         /* we need update the RVI field */
         __vmread(GUEST_INTR_STATUS, &status);
-- 
1.7.12.4