From patchwork Thu Nov 7 22:49:37 2019
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 11233859
Date: Thu, 7 Nov 2019 14:49:37 -0800
In-Reply-To: <20191107224941.60336-1-aaronlewis@google.com>
Message-Id: <20191107224941.60336-2-aaronlewis@google.com>
References: <20191107224941.60336-1-aaronlewis@google.com>
Subject: [PATCH v3 1/5] kvm: nested: Introduce read_and_check_msr_entry()
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: Jim Mattson, Paolo Bonzini, Aaron Lewis, Liran Alon

Add the function read_and_check_msr_entry() which just pulls some code
out of nested_vmx_store_msr().  This will be useful as reusable code in
upcoming patches.
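
For reference, the new helper reads only the first 2 * sizeof(u32) bytes of
each guest list entry -- the index and reserved fields of the architectural
MSR-entry layout (declared in arch/x86/include/asm/vmx.h) -- which is all
the validity check needs; the 64-bit value field is read or written
separately:

        struct vmx_msr_entry {
                u32 index;
                u32 reserved;
                u64 value;
        } __aligned(16);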
Reviewed-by: Liran Alon
Reviewed-by: Jim Mattson
Signed-off-by: Aaron Lewis
---
 arch/x86/kvm/vmx/nested.c | 35 ++++++++++++++++++++++-------------
 1 file changed, 22 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index e76eb4f07f6c..7b058d7b9fcc 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -929,6 +929,26 @@ static u32 nested_vmx_load_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
 	return i + 1;
 }
 
+static bool read_and_check_msr_entry(struct kvm_vcpu *vcpu, u64 gpa, int i,
+				     struct vmx_msr_entry *e)
+{
+	if (kvm_vcpu_read_guest(vcpu,
+				gpa + i * sizeof(*e),
+				e, 2 * sizeof(u32))) {
+		pr_debug_ratelimited(
+			"%s cannot read MSR entry (%u, 0x%08llx)\n",
+			__func__, i, gpa + i * sizeof(*e));
+		return false;
+	}
+	if (nested_vmx_store_msr_check(vcpu, e)) {
+		pr_debug_ratelimited(
+			"%s check failed (%u, 0x%x, 0x%x)\n",
+			__func__, i, e->index, e->reserved);
+		return false;
+	}
+	return true;
+}
+
 static int nested_vmx_store_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
 {
 	u64 data;
@@ -940,20 +960,9 @@ static int nested_vmx_store_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
 		if (unlikely(i >= max_msr_list_size))
 			return -EINVAL;
 
-		if (kvm_vcpu_read_guest(vcpu,
-					gpa + i * sizeof(e),
-					&e, 2 * sizeof(u32))) {
-			pr_debug_ratelimited(
-				"%s cannot read MSR entry (%u, 0x%08llx)\n",
-				__func__, i, gpa + i * sizeof(e));
+		if (!read_and_check_msr_entry(vcpu, gpa, i, &e))
 			return -EINVAL;
-		}
-		if (nested_vmx_store_msr_check(vcpu, &e)) {
-			pr_debug_ratelimited(
-				"%s check failed (%u, 0x%x, 0x%x)\n",
-				__func__, i, e.index, e.reserved);
-			return -EINVAL;
-		}
+
 		if (kvm_get_msr(vcpu, e.index, &data)) {
 			pr_debug_ratelimited(
 				"%s cannot read MSR (%u, 0x%x)\n",

From patchwork Thu Nov 7 22:49:38 2019
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 11233861
Date: Thu, 7 Nov 2019 14:49:38 -0800
In-Reply-To: <20191107224941.60336-1-aaronlewis@google.com>
Message-Id: <20191107224941.60336-3-aaronlewis@google.com>
References: <20191107224941.60336-1-aaronlewis@google.com>
Subject: [PATCH v3 2/5] kvm: vmx: Rename NR_AUTOLOAD_MSRS to NR_LOADSTORE_MSRS
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: Jim Mattson, Paolo Bonzini, Aaron Lewis

Rename NR_AUTOLOAD_MSRS to NR_LOADSTORE_MSRS.  This is needed because a
future patch adds an MSR-autostore area, after which the name AUTOLOAD
will no longer make sense.

Reviewed-by: Jim Mattson
Signed-off-by: Aaron Lewis
---
 arch/x86/kvm/vmx/vmx.c | 4 ++--
 arch/x86/kvm/vmx/vmx.h | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e7970a2e8eae..d9afafc0e399 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -940,8 +940,8 @@ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
 	if (!entry_only)
 		j = find_msr(&m->host, msr);
 
-	if ((i < 0 && m->guest.nr == NR_AUTOLOAD_MSRS) ||
-		(j < 0 && m->host.nr == NR_AUTOLOAD_MSRS)) {
+	if ((i < 0 && m->guest.nr == NR_LOADSTORE_MSRS) ||
+		(j < 0 && m->host.nr == NR_LOADSTORE_MSRS)) {
 		printk_once(KERN_WARNING "Not enough msr switch entries. "
" "Can't add msr %x\n", msr); return; diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index bee16687dc0b..1dad8e5c8f86 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -22,11 +22,11 @@ extern u32 get_umwait_control_msr(void); #define X2APIC_MSR(r) (APIC_BASE_MSR + ((r) >> 4)) -#define NR_AUTOLOAD_MSRS 8 +#define NR_LOADSTORE_MSRS 8 struct vmx_msrs { unsigned int nr; - struct vmx_msr_entry val[NR_AUTOLOAD_MSRS]; + struct vmx_msr_entry val[NR_LOADSTORE_MSRS]; }; struct shared_msr_entry { From patchwork Thu Nov 7 22:49:39 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Aaron Lewis X-Patchwork-Id: 11233863 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 695541575 for ; Thu, 7 Nov 2019 22:49:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 45928222C2 for ; Thu, 7 Nov 2019 22:49:59 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="EkxKZ7h2" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728097AbfKGWt6 (ORCPT ); Thu, 7 Nov 2019 17:49:58 -0500 Received: from mail-pg1-f201.google.com ([209.85.215.201]:37759 "EHLO mail-pg1-f201.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728053AbfKGWt6 (ORCPT ); Thu, 7 Nov 2019 17:49:58 -0500 Received: by mail-pg1-f201.google.com with SMTP id u20so3077029pga.4 for ; Thu, 07 Nov 2019 14:49:56 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=t9ntozjX65YwM0irxGvMCIbTKOnw1B1qw4N9v+V/Q1U=; b=EkxKZ7h25cwZs9Kwye/dqUn9XreSPPLXv11cpTf6EIPDqp18W+o3iFyzmPUbvakr7v JsimfRrn9VrNBjFbPMe0pnpURYJSDBw0ezOwYTzFZWNVgCPnHu+u/8xgssH8blQnqstO RLXLrJczRY8E32RcrhiZ1xv7KfxiWoGMl8z+kRc+7/Mxg1KIY22tPgdfm4q6TJs5rm2i mcMI6WHrBllwUVgTOy8++TL42e9AO6AZ4zR24+yVuDYlzgZ80eMBspaiwa1Y+mP7Entd vDoMKQ1eQG0v2hqZGVSNZHyheJ7Gh7eyrIzhZxAYFPM3JuTM/PUzsL+rIHgRuyhu/Foe IlMQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=t9ntozjX65YwM0irxGvMCIbTKOnw1B1qw4N9v+V/Q1U=; b=WUqQPr+uW/JaX6xUfIC/dfq5LRdvV/f2UWzhxIjWKj9MS3tw5hQHVuY6UYdRGj0+ib /RALFg7vSXA4ffseH7JpInXMjFgyP07i7NkqOtd2mjWxf9uPiy+1x6wKTLL8P+zQk1Mi A6V6nf/4TQdubLPb42yc7cLDNSi++U0/FxZy9KZBcOup7K9Ys/h4la6Op42Tou6LupYH Fv6xv7jSb68wNYKI0gf6EhaQVOtiB6t+fVlVSRAsxkGCut3REiy36Sr0SzGBy0f13TCt C8KToIt2caN8IyrCoW5T0egKFUL9YlnbO163+yOPHptkc9r6GWCPlQNUkaGLPBDYK1An yZhA== X-Gm-Message-State: APjAAAXEbevmctuZNx86Hdt2Qqj/xId4LC1y+bIzAP4UBit21ijGHZbl Mn3IqUWYCxandX8m43kOo9d+bP9LrFXQvpC0b0NmF//LkF2SNkPt/txND35k2xDcNP3KOIUsO33 79dCTnFTqnsYvxEmoHux/vij504VkoRimvbExlRQ2G7kr3evFKFWMCdjiLI5sW4JKOcCx X-Google-Smtp-Source: APXvYqzAdykP3rbj4+dZiV4yQvY4cBJPQmYUwW3/ycn86pIrfn4qMO7Ppz937kPw2VnvDU7SVIRNXnGsxaGDiVgK X-Received: by 2002:a63:6585:: with SMTP id z127mr7824956pgb.330.1573166995308; Thu, 07 Nov 2019 14:49:55 -0800 (PST) Date: Thu, 7 Nov 2019 14:49:39 -0800 In-Reply-To: <20191107224941.60336-1-aaronlewis@google.com> Message-Id: <20191107224941.60336-4-aaronlewis@google.com> Mime-Version: 1.0 References: <20191107224941.60336-1-aaronlewis@google.com> X-Mailer: git-send-email 
Subject: [PATCH v3 3/5] kvm: vmx: Rename function find_msr() to vmx_find_msr_index()
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: Jim Mattson, Paolo Bonzini, Aaron Lewis

Rename function find_msr() to vmx_find_msr_index() in preparation for an
upcoming patch where we export it and use it in nested.c.

Reviewed-by: Jim Mattson
Signed-off-by: Aaron Lewis
---
 arch/x86/kvm/vmx/vmx.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d9afafc0e399..3fa836a5569a 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -835,7 +835,7 @@ static void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx,
 	vm_exit_controls_clearbit(vmx, exit);
 }
 
-static int find_msr(struct vmx_msrs *m, unsigned int msr)
+static int vmx_find_msr_index(struct vmx_msrs *m, u32 msr)
 {
 	unsigned int i;
 
@@ -869,7 +869,7 @@ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
 		}
 		break;
 	}
-	i = find_msr(&m->guest, msr);
+	i = vmx_find_msr_index(&m->guest, msr);
 	if (i < 0)
 		goto skip_guest;
 	--m->guest.nr;
@@ -877,7 +877,7 @@ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
 	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->guest.nr);
 
 skip_guest:
-	i = find_msr(&m->host, msr);
+	i = vmx_find_msr_index(&m->host, msr);
 	if (i < 0)
 		return;
 
@@ -936,9 +936,9 @@ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
 			wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
 		}
 
-	i = find_msr(&m->guest, msr);
+	i = vmx_find_msr_index(&m->guest, msr);
 	if (!entry_only)
-		j = find_msr(&m->host, msr);
+		j = vmx_find_msr_index(&m->host, msr);
 
 	if ((i < 0 && m->guest.nr == NR_LOADSTORE_MSRS) ||
 	    (j < 0 && m->host.nr == NR_LOADSTORE_MSRS)) {

From patchwork Thu Nov 7 22:49:40 2019
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 11233865
Date: Thu, 7 Nov 2019 14:49:40 -0800
In-Reply-To: <20191107224941.60336-1-aaronlewis@google.com>
Message-Id: <20191107224941.60336-5-aaronlewis@google.com>
References: <20191107224941.60336-1-aaronlewis@google.com>
Subject: [PATCH v3 4/5] KVM: nVMX: Prepare MSR-store area
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: Jim Mattson, Paolo Bonzini, Aaron Lewis

Prepare the MSR-store area to be used in a follow-up patch.

Signed-off-by: Aaron Lewis
---
 arch/x86/kvm/vmx/nested.c | 17 ++++++++++++++++-
 arch/x86/kvm/vmx/vmx.h    |  4 ++++
 2 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 7b058d7b9fcc..c249be43fff2 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -982,6 +982,14 @@ static int nested_vmx_store_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
 	return 0;
 }
 
+static void prepare_vmx_msr_autostore_list(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	struct vmx_msrs *autostore = &vmx->msr_autostore.guest;
+
+	autostore->nr = 0;
+}
+
 static bool nested_cr3_valid(struct kvm_vcpu *vcpu, unsigned long val)
 {
 	unsigned long invalid_mask;
@@ -2027,7 +2035,7 @@ static void prepare_vmcs02_constant_state(struct vcpu_vmx *vmx)
 	 * addresses are constant (for vmcs02), the counts can change based
 	 * on L2's behavior, e.g. switching to/from long mode.
 	 */
-	vmcs_write32(VM_EXIT_MSR_STORE_COUNT, 0);
+	vmcs_write64(VM_EXIT_MSR_STORE_ADDR, __pa(vmx->msr_autostore.guest.val));
 	vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host.val));
 	vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest.val));
 
@@ -2294,6 +2302,13 @@ static void prepare_vmcs02_rare(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
 		vmcs_write64(EOI_EXIT_BITMAP3, vmcs12->eoi_exit_bitmap3);
 	}
 
+	/*
+	 * Make sure the msr_autostore list is up to date before we set the
+	 * count in the vmcs02.
+	 */
+	prepare_vmx_msr_autostore_list(&vmx->vcpu, MSR_IA32_TSC);
+
+	vmcs_write32(VM_EXIT_MSR_STORE_COUNT, vmx->msr_autostore.guest.nr);
 	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
 	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr);
 
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 1dad8e5c8f86..2616f639cf50 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -230,6 +230,10 @@ struct vcpu_vmx {
 		struct vmx_msrs host;
 	} msr_autoload;
 
+	struct msr_autostore {
+		struct vmx_msrs guest;
+	} msr_autostore;
+
 	struct {
 		int vm86_active;
 		ulong save_rflags;

From patchwork Thu Nov 7 22:49:41 2019
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 11233867
Date: Thu, 7 Nov 2019 14:49:41 -0800
In-Reply-To: <20191107224941.60336-1-aaronlewis@google.com>
Message-Id: <20191107224941.60336-6-aaronlewis@google.com>
References:
 <20191107224941.60336-1-aaronlewis@google.com>
Subject: [PATCH v3 5/5] KVM: nVMX: Add support for capturing highest observable L2 TSC
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: Jim Mattson, Paolo Bonzini, Aaron Lewis, Liran Alon

The L1 hypervisor may include the IA32_TIME_STAMP_COUNTER MSR in the
vmcs12 VM-exit MSR-store area as a way of determining the highest TSC
value that might have been observed by L2 prior to VM-exit.  The current
implementation does not capture a very tight bound on this value.  To
tighten the bound, add the IA32_TIME_STAMP_COUNTER MSR to the vmcs02
VM-exit MSR-store area whenever it appears in the vmcs12 VM-exit
MSR-store area.

When L0 processes the vmcs12 VM-exit MSR-store area during the emulation
of an L2->L1 VM-exit, special-case the IA32_TIME_STAMP_COUNTER MSR, using
the value stored in the vmcs02 VM-exit MSR-store area to derive the value
to be stored in the vmcs12 VM-exit MSR-store area.

Reviewed-by: Liran Alon
Reviewed-by: Jim Mattson
Signed-off-by: Aaron Lewis
---
 arch/x86/kvm/vmx/nested.c | 88 +++++++++++++++++++++++++++++++++++----
 arch/x86/kvm/vmx/vmx.c    |  2 +-
 arch/x86/kvm/vmx/vmx.h    |  1 +
 3 files changed, 83 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index c249be43fff2..2055f15cb007 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -929,6 +929,37 @@ static u32 nested_vmx_load_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
 	return i + 1;
 }
 
+static bool nested_vmx_get_vmexit_msr_value(struct kvm_vcpu *vcpu,
+					    u32 msr_index,
+					    u64 *data)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+	/*
+	 * If the L0 hypervisor stored a more accurate value for the TSC that
+	 * does not include the time taken for emulation of the L2->L1
+	 * VM-exit in L0, use the more accurate value.
+	 */
+	if (msr_index == MSR_IA32_TSC) {
+		int index = vmx_find_msr_index(&vmx->msr_autostore.guest,
+					       MSR_IA32_TSC);
+
+		if (index >= 0) {
+			u64 val = vmx->msr_autostore.guest.val[index].value;
+
+			*data = kvm_read_l1_tsc(vcpu, val);
+			return true;
+		}
+	}
+
+	if (kvm_get_msr(vcpu, msr_index, data)) {
+		pr_debug_ratelimited("%s cannot read MSR (0x%x)\n", __func__,
+				     msr_index);
+		return false;
+	}
+	return true;
+}
+
 static bool read_and_check_msr_entry(struct kvm_vcpu *vcpu, u64 gpa, int i,
 				     struct vmx_msr_entry *e)
 {
@@ -963,12 +994,9 @@ static int nested_vmx_store_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
 		if (!read_and_check_msr_entry(vcpu, gpa, i, &e))
 			return -EINVAL;
 
-		if (kvm_get_msr(vcpu, e.index, &data)) {
-			pr_debug_ratelimited(
-				"%s cannot read MSR (%u, 0x%x)\n",
-				__func__, i, e.index);
+		if (!nested_vmx_get_vmexit_msr_value(vcpu, e.index, &data))
 			return -EINVAL;
-		}
+
 		if (kvm_vcpu_write_guest(vcpu,
 					 gpa + i * sizeof(e) +
 						offsetof(struct vmx_msr_entry, value),
@@ -982,12 +1010,58 @@ static int nested_vmx_store_msr(struct kvm_vcpu *vcpu, u64 gpa, u32 count)
 	return 0;
 }
 
-static void prepare_vmx_msr_autostore_list(struct kvm_vcpu *vcpu)
+static bool nested_msr_store_list_has_msr(struct kvm_vcpu *vcpu, u32 msr_index)
+{
+	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+	u32 count = vmcs12->vm_exit_msr_store_count;
+	u64 gpa = vmcs12->vm_exit_msr_store_addr;
+	struct vmx_msr_entry e;
+	u32 i;
+
+	for (i = 0; i < count; i++) {
+		if (!read_and_check_msr_entry(vcpu, gpa, i, &e))
+			return false;
+
+		if (e.index == msr_index)
+			return true;
+	}
+	return false;
+}
+
+static void prepare_vmx_msr_autostore_list(struct kvm_vcpu *vcpu,
+					   u32 msr_index)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	struct vmx_msrs *autostore = &vmx->msr_autostore.guest;
+	bool in_vmcs12_store_list;
+	int msr_autostore_index;
+	bool in_autostore_list;
+	int last;
 
-	autostore->nr = 0;
+	msr_autostore_index = vmx_find_msr_index(autostore, msr_index);
+	in_autostore_list = msr_autostore_index >= 0;
+	in_vmcs12_store_list = nested_msr_store_list_has_msr(vcpu, msr_index);
+
+	if (in_vmcs12_store_list && !in_autostore_list) {
+		if (autostore->nr == NR_LOADSTORE_MSRS) {
+			/*
+			 * Emulated VMEntry does not fail here.  Instead a less
+			 * accurate value will be returned by
+			 * nested_vmx_get_vmexit_msr_value() using kvm_get_msr()
+			 * instead of reading the value from the vmcs02 VMExit
+			 * MSR-store area.
+			 */
+			pr_warn_ratelimited(
+				"Not enough msr entries in msr_autostore.  Can't add msr %x\n",
+				msr_index);
+			return;
+		}
+		last = autostore->nr++;
+		autostore->val[last].index = msr_index;
+	} else if (!in_vmcs12_store_list && in_autostore_list) {
+		last = --autostore->nr;
+		autostore->val[msr_autostore_index] = autostore->val[last];
+	}
 }
 
 static bool nested_cr3_valid(struct kvm_vcpu *vcpu, unsigned long val)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 3fa836a5569a..a7fd70db0b1e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -835,7 +835,7 @@ static void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx,
 	vm_exit_controls_clearbit(vmx, exit);
 }
 
-static int vmx_find_msr_index(struct vmx_msrs *m, u32 msr)
+int vmx_find_msr_index(struct vmx_msrs *m, u32 msr)
 {
 	unsigned int i;
 
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 2616f639cf50..673330e528e8 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -338,6 +338,7 @@ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
 struct shared_msr_entry *find_msr_entry(struct vcpu_vmx *vmx, u32 msr);
 void pt_update_intercept_for_msr(struct vcpu_vmx *vmx);
 void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp);
+int vmx_find_msr_index(struct vmx_msrs *m, u32 msr);
 
 #define POSTED_INTR_ON  0
 #define POSTED_INTR_SN  1
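
To illustrate how an L1 hypervisor consumes this feature (a minimal sketch,
not part of the series; l1_setup_tsc_capture(), l1_highest_l2_tsc(),
vmwrite() and __pa() are placeholder names standing in for whatever
VMCS-access and physical-address primitives the L1 hypervisor actually has):

        /* One-entry VM-exit MSR-store list owned by the L1 hypervisor. */
        static struct vmx_msr_entry exit_store[1];

        /* Run before VMLAUNCH/VMRESUME: request TSC capture on VM-exit. */
        static void l1_setup_tsc_capture(void)
        {
                exit_store[0].index = MSR_IA32_TSC;
                exit_store[0].reserved = 0;
                vmwrite(VM_EXIT_MSR_STORE_COUNT, 1);
                vmwrite(VM_EXIT_MSR_STORE_ADDR, __pa(exit_store));
        }

        /* After a VM-exit: an upper bound on the TSC value L2 observed. */
        static u64 l1_highest_l2_tsc(void)
        {
                return exit_store[0].value;
        }

With this series applied, L0 notices MSR_IA32_TSC in the vmcs12 list,
captures the TSC in the vmcs02 VM-exit MSR-store area, and converts it with
kvm_read_l1_tsc() before writing it back, so the bound no longer includes
L0's exit-emulation time.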