From patchwork Fri Dec  6 23:46:35 2019
X-Patchwork-Submitter: Jim Mattson
X-Patchwork-Id: 11277147
Date: Fri, 6 Dec 2019 15:46:35 -0800
Message-Id: <20191206234637.237698-1-jmattson@google.com>
Subject: [PATCH v3 1/3] kvm: nVMX: VMWRITE checks VMCS-link pointer before
 VMCS field
From: Jim Mattson
To: kvm@vger.kernel.org
Cc: Jim Mattson, Liran Alon, Paolo Bonzini, Vitaly Kuznetsov, Peter Shier,
 Oliver Upton, Jon Cargille
List-ID: X-Mailing-List: kvm@vger.kernel.org

According to the SDM, a VMWRITE in VMX non-root operation with an
invalid VMCS-link pointer results in VMfailInvalid before the validity
of the VMCS field in the secondary source operand is checked. For
consistency, modify both handle_vmwrite and handle_vmread, even though
there was no problem with the latter.
Fixes: 6d894f498f5d1 ("KVM: nVMX: vmread/vmwrite: Use shadow vmcs12 if running L2")
Signed-off-by: Jim Mattson
Cc: Liran Alon
Cc: Paolo Bonzini
Cc: Vitaly Kuznetsov
Reviewed-by: Peter Shier
Reviewed-by: Oliver Upton
Reviewed-by: Jon Cargille
---
v1 -> v2:
* Split the invalid VMCS-link pointer check from the conditional call
  to copy_vmcs02_to_vmcs12_rare.
* Hoisted the assignment of vmcs12 to its declaration.
* Modified handle_vmread to maintain consistency with handle_vmwrite.

v2 -> v3:
* No changes.

 arch/x86/kvm/vmx/nested.c | 59 +++++++++++++++++----------------------
 1 file changed, 25 insertions(+), 34 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 4aea7d304beb..ee1bf9710e86 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4753,32 +4753,28 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
 {
 	unsigned long field;
 	u64 field_value;
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
 	u32 vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
 	int len;
 	gva_t gva = 0;
-	struct vmcs12 *vmcs12;
+	struct vmcs12 *vmcs12 = is_guest_mode(vcpu) ? get_shadow_vmcs12(vcpu)
+						    : get_vmcs12(vcpu);
 	struct x86_exception e;
 	short offset;
 
 	if (!nested_vmx_check_permission(vcpu))
 		return 1;
 
-	if (to_vmx(vcpu)->nested.current_vmptr == -1ull)
+	/*
+	 * In VMX non-root operation, when the VMCS-link pointer is -1ull,
+	 * any VMREAD sets the ALU flags for VMfailInvalid.
+	 */
+	if (vmx->nested.current_vmptr == -1ull ||
+	    (is_guest_mode(vcpu) &&
+	     get_vmcs12(vcpu)->vmcs_link_pointer == -1ull))
 		return nested_vmx_failInvalid(vcpu);
 
-	if (!is_guest_mode(vcpu))
-		vmcs12 = get_vmcs12(vcpu);
-	else {
-		/*
-		 * When vmcs->vmcs_link_pointer is -1ull, any VMREAD
-		 * to shadowed-field sets the ALU flags for VMfailInvalid.
-		 */
-		if (get_vmcs12(vcpu)->vmcs_link_pointer == -1ull)
-			return nested_vmx_failInvalid(vcpu);
-		vmcs12 = get_shadow_vmcs12(vcpu);
-	}
-
 	/* Decode instruction info and find the field to read */
 	field = kvm_register_readl(vcpu, (((vmx_instruction_info) >> 28) & 0xf));
 
@@ -4855,13 +4851,20 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
 	 */
 	u64 field_value = 0;
 	struct x86_exception e;
-	struct vmcs12 *vmcs12;
+	struct vmcs12 *vmcs12 = is_guest_mode(vcpu) ? get_shadow_vmcs12(vcpu)
+						    : get_vmcs12(vcpu);
 	short offset;
 
 	if (!nested_vmx_check_permission(vcpu))
 		return 1;
 
-	if (vmx->nested.current_vmptr == -1ull)
+	/*
+	 * In VMX non-root operation, when the VMCS-link pointer is -1ull,
+	 * any VMWRITE sets the ALU flags for VMfailInvalid.
+	 */
+	if (vmx->nested.current_vmptr == -1ull ||
+	    (is_guest_mode(vcpu) &&
+	     get_vmcs12(vcpu)->vmcs_link_pointer == -1ull))
 		return nested_vmx_failInvalid(vcpu);
 
 	if (vmx_instruction_info & (1u << 10))
@@ -4889,24 +4892,12 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
 		return nested_vmx_failValid(vcpu,
 			VMXERR_VMWRITE_READ_ONLY_VMCS_COMPONENT);
 
-	if (!is_guest_mode(vcpu)) {
-		vmcs12 = get_vmcs12(vcpu);
-
-		/*
-		 * Ensure vmcs12 is up-to-date before any VMWRITE that dirties
-		 * vmcs12, else we may crush a field or consume a stale value.
-		 */
-		if (!is_shadow_field_rw(field))
-			copy_vmcs02_to_vmcs12_rare(vcpu, vmcs12);
-	} else {
-		/*
-		 * When vmcs->vmcs_link_pointer is -1ull, any VMWRITE
-		 * to shadowed-field sets the ALU flags for VMfailInvalid.
-		 */
-		if (get_vmcs12(vcpu)->vmcs_link_pointer == -1ull)
-			return nested_vmx_failInvalid(vcpu);
-		vmcs12 = get_shadow_vmcs12(vcpu);
-	}
+	/*
+	 * Ensure vmcs12 is up-to-date before any VMWRITE that dirties
+	 * vmcs12, else we may crush a field or consume a stale value.
+	 */
+	if (!is_guest_mode(vcpu) && !is_shadow_field_rw(field))
+		copy_vmcs02_to_vmcs12_rare(vcpu, vmcs12);
 
 	offset = vmcs_field_to_offset(field);
 	if (offset < 0)
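[Note: the net effect of this patch is that handle_vmread and
handle_vmwrite now share one early VMfailInvalid check. A minimal
sketch of the combined condition, factored into a hypothetical helper
for illustration only (the patch itself open-codes this in each
handler; the helper name is invented):

	/*
	 * Illustrative only: per the SDM, an invalid current-VMCS
	 * pointer or, in VMX non-root operation, a VMCS-link pointer
	 * of -1ull yields VMfailInvalid before the VMCS field operand
	 * is even decoded.
	 */
	static int nested_vmx_check_vmcs_ptrs(struct kvm_vcpu *vcpu)
	{
		struct vcpu_vmx *vmx = to_vmx(vcpu);

		if (vmx->nested.current_vmptr == -1ull ||
		    (is_guest_mode(vcpu) &&
		     get_vmcs12(vcpu)->vmcs_link_pointer == -1ull))
			return nested_vmx_failInvalid(vcpu);

		return 0;	/* safe to decode and validate the field */
	}
]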
From patchwork Fri Dec  6 23:46:36 2019
X-Patchwork-Submitter: Jim Mattson
X-Patchwork-Id: 11277149
Date: Fri, 6 Dec 2019 15:46:36 -0800
Message-Id: <20191206234637.237698-2-jmattson@google.com>
In-Reply-To: <20191206234637.237698-1-jmattson@google.com>
References: <20191206234637.237698-1-jmattson@google.com>
Subject: [PATCH v3 2/3] kvm: nVMX: VMWRITE checks unsupported field before
 read-only field
From: Jim Mattson
To: kvm@vger.kernel.org
Cc: Jim Mattson, Paolo Bonzini, Peter Shier, Oliver Upton, Jon Cargille
List-ID: X-Mailing-List: kvm@vger.kernel.org

According to the SDM, VMWRITE checks to see if the secondary source
operand corresponds to an unsupported VMCS field before it checks to
see if the secondary source operand corresponds to a VM-exit
information field and the processor does not support writing to
VM-exit information fields.

Fixes: 49f705c5324aa ("KVM: nVMX: Implement VMREAD and VMWRITE")
Signed-off-by: Jim Mattson
Cc: Paolo Bonzini
Reviewed-by: Peter Shier
Reviewed-by: Oliver Upton
Reviewed-by: Jon Cargille
---
v1 -> v2:
* New commit in v2.

v2 -> v3:
* No changes.

 arch/x86/kvm/vmx/nested.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index ee1bf9710e86..94ec089d6d1a 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4883,6 +4883,12 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
 	field = kvm_register_readl(vcpu, (((vmx_instruction_info) >> 28) & 0xf));
+
+	offset = vmcs_field_to_offset(field);
+	if (offset < 0)
+		return nested_vmx_failValid(vcpu,
+			VMXERR_UNSUPPORTED_VMCS_COMPONENT);
+
 	/*
 	 * If the vCPU supports "VMWRITE to any supported field in the
 	 * VMCS," then the "read-only" fields are actually read/write.
@@ -4899,11 +4905,6 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
 	if (!is_guest_mode(vcpu) && !is_shadow_field_rw(field))
 		copy_vmcs02_to_vmcs12_rare(vcpu, vmcs12);
 
-	offset = vmcs_field_to_offset(field);
-	if (offset < 0)
-		return nested_vmx_failValid(vcpu,
-			VMXERR_UNSUPPORTED_VMCS_COMPONENT);
-
 	/*
 	 * Some Intel CPUs intentionally drop the reserved bits of the AR byte
 	 * fields on VMWRITE. Emulate this behavior to ensure consistent KVM
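[Note: after this change, handle_vmwrite validates its operands in the
order the SDM specifies. A condensed sketch of the resulting sequence,
where vmcs_field_readonly() and nested_cpu_has_vmwrite_any_field() are
assumed predicates not shown in this excerpt:

	field = kvm_register_readl(vcpu, (vmx_instruction_info >> 28) & 0xf);

	/* 1. Unsupported VMCS component? Fail first, per the SDM. */
	offset = vmcs_field_to_offset(field);
	if (offset < 0)
		return nested_vmx_failValid(vcpu,
			VMXERR_UNSUPPORTED_VMCS_COMPONENT);

	/* 2. Only then reject writes to read-only fields. */
	if (vmcs_field_readonly(field) &&
	    !nested_cpu_has_vmwrite_any_field(vcpu))
		return nested_vmx_failValid(vcpu,
			VMXERR_VMWRITE_READ_ONLY_VMCS_COMPONENT);
]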
 * behavior.

From patchwork Fri Dec  6 23:46:37 2019
X-Patchwork-Submitter: Jim Mattson
X-Patchwork-Id: 11277151
Date: Fri, 6 Dec 2019 15:46:37 -0800
Message-Id: <20191206234637.237698-3-jmattson@google.com>
In-Reply-To: <20191206234637.237698-1-jmattson@google.com>
References: <20191206234637.237698-1-jmattson@google.com>
Subject: [PATCH v3 3/3] kvm: nVMX: Aesthetic cleanup of handle_vmread and
 handle_vmwrite
From: Jim Mattson
To: kvm@vger.kernel.org
Cc: Jim Mattson, Paolo Bonzini, Sean Christopherson, Peter Shier,
 Oliver Upton, Jon Cargille
List-ID: X-Mailing-List: kvm@vger.kernel.org

Apply reverse fir tree declaration order, shorten some variable names
to avoid line wrap, reformat a block comment, delete an extra blank
line, and use BIT(10) instead of (1u << 10).

Signed-off-by: Jim Mattson
Cc: Paolo Bonzini
Cc: Sean Christopherson
Reviewed-by: Peter Shier
Reviewed-by: Oliver Upton
Reviewed-by: Jon Cargille
---
v1 -> v2:
* New commit in v2.

v2 -> v3:
* Shortened some variable names instead of wrapping long lines.
* Changed BIT_ULL to BIT.

 arch/x86/kvm/vmx/nested.c | 70 +++++++++++++++++++--------------------
 1 file changed, 34 insertions(+), 36 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 94ec089d6d1a..336fe366a25f 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4751,17 +4751,17 @@ static int handle_vmresume(struct kvm_vcpu *vcpu)
 
 static int handle_vmread(struct kvm_vcpu *vcpu)
 {
-	unsigned long field;
-	u64 field_value;
-	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
-	u32 vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
-	int len;
-	gva_t gva = 0;
 	struct vmcs12 *vmcs12 = is_guest_mode(vcpu) ? get_shadow_vmcs12(vcpu)
 						    : get_vmcs12(vcpu);
+	unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
+	u32 instr_info = vmcs_read32(VMX_INSTRUCTION_INFO);
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	struct x86_exception e;
+	unsigned long field;
+	u64 value;
+	gva_t gva = 0;
 	short offset;
+	int len;
 
 	if (!nested_vmx_check_permission(vcpu))
 		return 1;
@@ -4776,7 +4776,7 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
 		return nested_vmx_failInvalid(vcpu);
 
 	/* Decode instruction info and find the field to read */
-	field = kvm_register_readl(vcpu, (((vmx_instruction_info) >> 28) & 0xf));
+	field = kvm_register_readl(vcpu, (((instr_info) >> 28) & 0xf));
 
 	offset = vmcs_field_to_offset(field);
 	if (offset < 0)
@@ -4786,24 +4786,23 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
 	if (!is_guest_mode(vcpu) && is_vmcs12_ext_field(field))
 		copy_vmcs02_to_vmcs12_rare(vcpu, vmcs12);
 
-	/* Read the field, zero-extended to a u64 field_value */
-	field_value = vmcs12_read_any(vmcs12, field, offset);
+	/* Read the field, zero-extended to a u64 value */
+	value = vmcs12_read_any(vmcs12, field, offset);
 
 	/*
 	 * Now copy part of this value to register or memory, as requested.
 	 * Note that the number of bits actually copied is 32 or 64 depending
 	 * on the guest's mode (32 or 64 bit), not on the given field's length.
 	 */
-	if (vmx_instruction_info & (1u << 10)) {
-		kvm_register_writel(vcpu, (((vmx_instruction_info) >> 3) & 0xf),
-				    field_value);
+	if (instr_info & BIT(10)) {
+		kvm_register_writel(vcpu, (((instr_info) >> 3) & 0xf), value);
 	} else {
 		len = is_64_bit_mode(vcpu) ? 8 : 4;
 		if (get_vmx_mem_address(vcpu, exit_qualification,
-					vmx_instruction_info, true, len, &gva))
+					instr_info, true, len, &gva))
 			return 1;
 		/* _system ok, nested_vmx_check_permission has verified cpl=0 */
-		if (kvm_write_guest_virt_system(vcpu, gva, &field_value, len, &e))
+		if (kvm_write_guest_virt_system(vcpu, gva, &value, len, &e))
 			kvm_inject_page_fault(vcpu, &e);
 	}
 
@@ -4836,24 +4835,25 @@ static bool is_shadow_field_ro(unsigned long field)
 
 static int handle_vmwrite(struct kvm_vcpu *vcpu)
 {
+	struct vmcs12 *vmcs12 = is_guest_mode(vcpu) ? get_shadow_vmcs12(vcpu)
+						    : get_vmcs12(vcpu);
+	unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
+	u32 instr_info = vmcs_read32(VMX_INSTRUCTION_INFO);
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	struct x86_exception e;
 	unsigned long field;
-	int len;
+	short offset;
 	gva_t gva;
-	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
-	u32 vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
+	int len;
 
-	/* The value to write might be 32 or 64 bits, depending on L1's long
+	/*
+	 * The value to write might be 32 or 64 bits, depending on L1's long
 	 * mode, and eventually we need to write that into a field of several
 	 * possible lengths. The code below first zero-extends the value to 64
-	 * bit (field_value), and then copies only the appropriate number of
+	 * bit (value), and then copies only the appropriate number of
 	 * bits into the vmcs12 field.
 	 */
-	u64 field_value = 0;
-	struct x86_exception e;
-	struct vmcs12 *vmcs12 = is_guest_mode(vcpu) ? get_shadow_vmcs12(vcpu)
-						    : get_vmcs12(vcpu);
-	short offset;
+	u64 value = 0;
 
 	if (!nested_vmx_check_permission(vcpu))
 		return 1;
@@ -4867,22 +4867,20 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
 	    get_vmcs12(vcpu)->vmcs_link_pointer == -1ull))
 		return nested_vmx_failInvalid(vcpu);
 
-	if (vmx_instruction_info & (1u << 10))
-		field_value = kvm_register_readl(vcpu,
-			(((vmx_instruction_info) >> 3) & 0xf));
+	if (instr_info & BIT(10))
+		value = kvm_register_readl(vcpu, (((instr_info) >> 3) & 0xf));
 	else {
 		len = is_64_bit_mode(vcpu) ? 8 : 4;
 		if (get_vmx_mem_address(vcpu, exit_qualification,
-					vmx_instruction_info, false, len, &gva))
+					instr_info, false, len, &gva))
 			return 1;
-		if (kvm_read_guest_virt(vcpu, gva, &field_value, len, &e)) {
+		if (kvm_read_guest_virt(vcpu, gva, &value, len, &e)) {
 			kvm_inject_page_fault(vcpu, &e);
 			return 1;
 		}
 	}
 
-
-	field = kvm_register_readl(vcpu, (((vmx_instruction_info) >> 28) & 0xf));
+	field = kvm_register_readl(vcpu, (((instr_info) >> 28) & 0xf));
 
 	offset = vmcs_field_to_offset(field);
 	if (offset < 0)
@@ -4914,9 +4912,9 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
 	 * the stripped down value, L2 sees the full value as stored by KVM).
 	 */
 	if (field >= GUEST_ES_AR_BYTES && field <= GUEST_TR_AR_BYTES)
-		field_value &= 0x1f0ff;
+		value &= 0x1f0ff;
 
-	vmcs12_write_any(vmcs12, field, offset, field_value);
+	vmcs12_write_any(vmcs12, field, offset, value);
 
 	/*
 	 * Do not track vmcs12 dirty-state if in guest-mode as we actually
@@ -4933,7 +4931,7 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
 		preempt_disable();
 		vmcs_load(vmx->vmcs01.shadow_vmcs);
 
-		__vmcs_writel(field, field_value);
+		__vmcs_writel(field, value);
 
 		vmcs_clear(vmx->vmcs01.shadow_vmcs);
 		vmcs_load(vmx->loaded_vmcs->vmcs);
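[Note: two conventions applied above, for reference. "Reverse fir
tree" sorts local declarations from the longest line down to the
shortest, and BIT(n), from include/linux/bits.h, expands to an
unsigned long 1 shifted left by n, equivalent to (1u << 10) in this
context. A contrived sketch, not taken from the patch:

	#include <linux/bits.h>

	/* Hypothetical helper, illustration only. */
	static bool vmx_instr_has_register_operand(struct kvm_vcpu *vcpu)
	{
		/* Reverse fir tree: longest declaration line first. */
		unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
		u32 instr_info = vmcs_read32(VMX_INSTRUCTION_INFO);

		/*
		 * Bit 10 of the VMX instruction-information field
		 * distinguishes a register operand from a memory operand.
		 */
		return instr_info & BIT(10);
	}
]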