From patchwork Thu Nov 8 09:51:29 2018
From: Liran Alon
To: pbonzini@redhat.com, rkrcmar@redhat.com, kvm@vger.kernel.org
Cc: idan.brown@oracle.com, Leonid Shatz
Subject: [PATCH v2 1/2] KVM: nVMX/nSVM: Fix bug which sets vcpu->arch.tsc_offset to L1 tsc_offset
Date: Thu, 8 Nov 2018 11:51:29 +0200
Message-Id: <20181108095130.64904-1-liran.alon@oracle.com>
From: Leonid Shatz

Since commit e79f245ddec1 ("X86/KVM: Properly update 'tsc_offset' to
represent the running guest"), the meaning of vcpu->arch.tsc_offset changed:
it now always reflects the tsc_offset value set in the active VMCS,
regardless of whether the vCPU is currently running L1 or L2.

However, the above-mentioned commit failed to also change
kvm_vcpu_write_tsc_offset() to set vcpu->arch.tsc_offset correctly:
vmx_write_tsc_offset() may set the tsc_offset value in the active VMCS to
the given offset parameter *plus vmcs12->tsc_offset*, while
kvm_vcpu_write_tsc_offset() sets vcpu->arch.tsc_offset to the given offset
parameter alone, without taking the possible addition of vmcs12->tsc_offset
into account. (The same is true for the SVM case.)

Fix this issue by changing kvm_x86_ops->write_tsc_offset() to return the
tsc_offset actually set in the active VMCS, and by modifying
kvm_vcpu_write_tsc_offset() to store the returned value in
vcpu->arch.tsc_offset. In addition, rename the write_tsc_offset() argument
to clearly indicate that it specifies an L1 TSC offset.

Fixes: e79f245ddec1 ("X86/KVM: Properly update 'tsc_offset' to represent the running guest")
Reviewed-by: Liran Alon
Reviewed-by: Mihai Carabas
Reviewed-by: Krish Sadhukhan
Signed-off-by: Leonid Shatz
---
 arch/x86/include/asm/kvm_host.h |  6 +++++-
 arch/x86/kvm/svm.c              |  9 +++++----
 arch/x86/kvm/vmx.c              | 21 +++++++++------------
 arch/x86/kvm/x86.c              |  8 ++++----
 4 files changed, 23 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 55e51ff7e421..cc9df79b1016 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1094,7 +1094,11 @@ struct kvm_x86_ops {
 	bool (*has_wbinvd_exit)(void);
 
 	u64 (*read_l1_tsc_offset)(struct kvm_vcpu *vcpu);
-	void (*write_tsc_offset)(struct kvm_vcpu *vcpu, u64 offset);
+	/*
+	 * Updates active guest TSC offset based on given L1 TSC offset
+	 * and returns actual TSC offset set for the active guest
+	 */
+	u64 (*write_tsc_offset)(struct kvm_vcpu *vcpu, u64 l1_offset);
 
 	void (*get_exit_info)(struct kvm_vcpu *vcpu, u64 *info1, u64 *info2);
 
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 0e21ccc46792..842d4bb053d1 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1446,7 +1446,7 @@ static u64 svm_read_l1_tsc_offset(struct kvm_vcpu *vcpu)
 	return vcpu->arch.tsc_offset;
 }
 
-static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
+static u64 svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 l1_offset)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	u64 g_tsc_offset = 0;
@@ -1455,15 +1455,16 @@ static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
 		/* Write L1's TSC offset. */
 		g_tsc_offset = svm->vmcb->control.tsc_offset -
 			       svm->nested.hsave->control.tsc_offset;
-		svm->nested.hsave->control.tsc_offset = offset;
+		svm->nested.hsave->control.tsc_offset = l1_offset;
 	} else
 		trace_kvm_write_tsc_offset(vcpu->vcpu_id,
 					   svm->vmcb->control.tsc_offset,
-					   offset);
+					   l1_offset);
 
-	svm->vmcb->control.tsc_offset = offset + g_tsc_offset;
+	svm->vmcb->control.tsc_offset = l1_offset + g_tsc_offset;
 
 	mark_dirty(svm->vmcb, VMCB_INTERCEPTS);
+	return svm->vmcb->control.tsc_offset;
 }
 
 static void avic_init_vmcb(struct vcpu_svm *svm)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 4555077d69ce..20d90ebd6cc8 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3452,11 +3452,9 @@ static u64 vmx_read_l1_tsc_offset(struct kvm_vcpu *vcpu)
 	return vcpu->arch.tsc_offset;
 }
 
-/*
- * writes 'offset' into guest's timestamp counter offset register
- */
-static void vmx_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
+static u64 vmx_write_tsc_offset(struct kvm_vcpu *vcpu, u64 l1_offset)
 {
+	u64 active_offset = l1_offset;
 	if (is_guest_mode(vcpu)) {
 		/*
 		 * We're here if L1 chose not to trap WRMSR to TSC. According
@@ -3464,17 +3462,16 @@ static void vmx_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
 		 * set for L2 remains unchanged, and still needs to be added
 		 * to the newly set TSC to get L2's TSC.
 		 */
-		struct vmcs12 *vmcs12;
-		/* recalculate vmcs02.TSC_OFFSET: */
-		vmcs12 = get_vmcs12(vcpu);
-		vmcs_write64(TSC_OFFSET, offset +
-			(nested_cpu_has(vmcs12, CPU_BASED_USE_TSC_OFFSETING) ?
-			 vmcs12->tsc_offset : 0));
+		struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+		if (nested_cpu_has(vmcs12, CPU_BASED_USE_TSC_OFFSETING))
+			active_offset += vmcs12->tsc_offset;
 	} else {
 		trace_kvm_write_tsc_offset(vcpu->vcpu_id,
-					   vmcs_read64(TSC_OFFSET), offset);
-		vmcs_write64(TSC_OFFSET, offset);
+					   vmcs_read64(TSC_OFFSET), l1_offset);
 	}
+
+	vmcs_write64(TSC_OFFSET, active_offset);
+	return active_offset;
 }
 
 /*
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8525dc75a79a..c4e80af1a54e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1766,10 +1766,9 @@ u64 kvm_read_l1_tsc(struct kvm_vcpu *vcpu, u64 host_tsc)
 }
 EXPORT_SYMBOL_GPL(kvm_read_l1_tsc);
 
-static void kvm_vcpu_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
+static void kvm_vcpu_write_tsc_offset(struct kvm_vcpu *vcpu, u64 l1_offset)
 {
-	kvm_x86_ops->write_tsc_offset(vcpu, offset);
-	vcpu->arch.tsc_offset = offset;
+	vcpu->arch.tsc_offset = kvm_x86_ops->write_tsc_offset(vcpu, l1_offset);
 }
 
 static inline bool kvm_check_tsc_unstable(void)
@@ -1897,7 +1896,8 @@ EXPORT_SYMBOL_GPL(kvm_write_tsc);
 static inline void adjust_tsc_offset_guest(struct kvm_vcpu *vcpu,
 					   s64 adjustment)
 {
-	kvm_vcpu_write_tsc_offset(vcpu, vcpu->arch.tsc_offset + adjustment);
+	u64 l1_tsc_offset = kvm_x86_ops->read_l1_tsc_offset(vcpu);
+	kvm_vcpu_write_tsc_offset(vcpu, l1_tsc_offset + adjustment);
 }
 
 static inline void adjust_tsc_offset_host(struct kvm_vcpu *vcpu, s64 adjustment)
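
As an aside for readers following the commit message above, here is a minimal
stand-alone sketch of the invariant this patch establishes: vcpu->arch.tsc_offset
must always equal the offset actually programmed into the active VMCS/VMCB,
which in guest-mode is the L1 offset plus vmcs12->tsc_offset. The type and
helper names below (fake_vcpu, write_tsc_offset) are invented for illustration
only; this is not kernel code.

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustration only: a toy stand-in for the vCPU state involved. */
struct fake_vcpu {
	bool     guest_mode;         /* vCPU currently running L2? */
	uint64_t vmcs12_tsc_offset;  /* TSC offset L1 configured for L2 */
	uint64_t active_tsc_offset;  /* what the active VMCS/VMCB holds */
	uint64_t arch_tsc_offset;    /* models vcpu->arch.tsc_offset */
};

/*
 * Models the fixed kvm_x86_ops->write_tsc_offset(): program the active
 * offset and return what was actually set, so the caller can mirror it.
 */
static uint64_t write_tsc_offset(struct fake_vcpu *vcpu, uint64_t l1_offset)
{
	uint64_t active = l1_offset;

	if (vcpu->guest_mode)
		active += vcpu->vmcs12_tsc_offset;
	vcpu->active_tsc_offset = active;
	return active;
}

int main(void)
{
	struct fake_vcpu vcpu = {
		.guest_mode = true,
		.vmcs12_tsc_offset = 1000,
	};

	/* Models the fixed kvm_vcpu_write_tsc_offset(). */
	vcpu.arch_tsc_offset = write_tsc_offset(&vcpu, 42);

	/*
	 * Invariant restored by the patch: before it, arch_tsc_offset would
	 * have held only 42 while the active offset is 1042.
	 */
	assert(vcpu.arch_tsc_offset == vcpu.active_tsc_offset);
	return 0;
}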
From patchwork Thu Nov 8 09:51:30 2018
From: Liran Alon
To: pbonzini@redhat.com, rkrcmar@redhat.com, kvm@vger.kernel.org
Cc: idan.brown@oracle.com, Liran Alon
Subject: [PATCH v2 2/2] KVM: x86: Trace changes to active TSC offset regardless if vCPU in guest-mode
Date: Thu, 8 Nov 2018 11:51:30 +0200
Message-Id: <20181108095130.64904-2-liran.alon@oracle.com>
In-Reply-To: <20181108095130.64904-1-liran.alon@oracle.com>
References: <20181108095130.64904-1-liran.alon@oracle.com>

For some unknown reason, kvm_x86_ops->write_tsc_offset() skipped tracing
changes to the active TSC offset in case the vCPU is in guest-mode.
This patch changes write_tsc_offset() to trace any change to the active
TSC offset, to aid debugging.

Reviewed-by: Mihai Carabas
Signed-off-by: Liran Alon
---
 arch/x86/kvm/svm.c | 8 ++++----
 arch/x86/kvm/vmx.c | 5 ++---
 2 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 842d4bb053d1..bcf1433f0d85 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1456,11 +1456,11 @@ static u64 svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 l1_offset)
 		g_tsc_offset = svm->vmcb->control.tsc_offset -
 			       svm->nested.hsave->control.tsc_offset;
 		svm->nested.hsave->control.tsc_offset = l1_offset;
-	} else
-		trace_kvm_write_tsc_offset(vcpu->vcpu_id,
-					   svm->vmcb->control.tsc_offset,
-					   l1_offset);
+	}
+	trace_kvm_write_tsc_offset(vcpu->vcpu_id,
+				   svm->vmcb->control.tsc_offset,
+				   l1_offset + g_tsc_offset);
 
 	svm->vmcb->control.tsc_offset = l1_offset + g_tsc_offset;
 
 	mark_dirty(svm->vmcb, VMCB_INTERCEPTS);
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 20d90ebd6cc8..36456314bc0f 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3465,11 +3465,10 @@ static u64 vmx_write_tsc_offset(struct kvm_vcpu *vcpu, u64 l1_offset)
 		struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
 		if (nested_cpu_has(vmcs12, CPU_BASED_USE_TSC_OFFSETING))
 			active_offset += vmcs12->tsc_offset;
-	} else {
-		trace_kvm_write_tsc_offset(vcpu->vcpu_id,
-					   vmcs_read64(TSC_OFFSET), l1_offset);
 	}
 
+	trace_kvm_write_tsc_offset(vcpu->vcpu_id,
+				   vmcs_read64(TSC_OFFSET), active_offset);
 	vmcs_write64(TSC_OFFSET, active_offset);
 	return active_offset;
 }
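
As a rough companion sketch to the change above (illustration only, not kernel
code; the helper names below are invented), this shows the control-flow change:
tracing now happens on every write, after the active offset has been computed,
and it reports the combined offset rather than only the L1 value on the
non-nested path.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for trace_kvm_write_tsc_offset(): previous vs. next offset. */
static void trace_write_tsc_offset(uint64_t previous, uint64_t next)
{
	printf("tsc_offset: %llu -> %llu\n",
	       (unsigned long long)previous, (unsigned long long)next);
}

/*
 * Models the post-patch flow: compute the active offset first, then trace
 * it unconditionally, whether or not the vCPU is in guest-mode.
 */
static uint64_t write_tsc_offset(uint64_t current_active, bool guest_mode,
				 uint64_t l1_offset, uint64_t l2_offset)
{
	uint64_t active = l1_offset;

	if (guest_mode)
		active += l2_offset;

	/* Before the patch, this call was skipped when guest_mode is true. */
	trace_write_tsc_offset(current_active, active);
	return active;
}

int main(void)
{
	uint64_t active = 0;

	active = write_tsc_offset(active, false, 100, 0); /* traced before and after */
	active = write_tsc_offset(active, true, 100, 50); /* traced only after the patch */
	return 0;
}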