From patchwork Fri Dec 6 19:31:42 2019
X-Patchwork-Submitter: Jim Mattson
X-Patchwork-Id: 11276997
Subject: [PATCH v2 1/3] kvm: nVMX: VMWRITE checks VMCS-link pointer before VMCS field
From: Jim Mattson
To: kvm@vger.kernel.org
Cc: Jim Mattson, Peter Shier, Oliver Upton, Jon Cargille, Liran Alon,
 Paolo Bonzini, Vitaly Kuznetsov
Date: Fri, 6 Dec 2019 11:31:42 -0800
Message-Id: <20191206193144.33209-1-jmattson@google.com>

According to the SDM, a VMWRITE in VMX non-root operation with an
invalid VMCS-link pointer results in VMfailInvalid before the validity
of the VMCS field in the secondary source operand is checked. For
consistency, modify both handle_vmwrite and handle_vmread, even though
there was no problem with the latter.
Fixes: 6d894f498f5d1 ("KVM: nVMX: vmread/vmwrite: Use shadow vmcs12 if running L2")
Signed-off-by: Jim Mattson
Reviewed-by: Peter Shier
Reviewed-by: Oliver Upton
Reviewed-by: Jon Cargille
Cc: Liran Alon
Cc: Paolo Bonzini
Cc: Vitaly Kuznetsov
---
v1 -> v2:
* Split the invalid VMCS-link pointer check from the conditional call
  to copy_vmcs02_to_vmcs12_rare.
* Hoisted the assignment of vmcs12 to its declaration.
* Modified handle_vmread to maintain consistency with handle_vmwrite.

 arch/x86/kvm/vmx/nested.c | 59 +++++++++++++++++----------------------
 1 file changed, 25 insertions(+), 34 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 4aea7d304beb..ee1bf9710e86 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4753,32 +4753,28 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
 {
        unsigned long field;
        u64 field_value;
+       struct vcpu_vmx *vmx = to_vmx(vcpu);
        unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
        u32 vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
        int len;
        gva_t gva = 0;
-       struct vmcs12 *vmcs12;
+       struct vmcs12 *vmcs12 = is_guest_mode(vcpu) ? get_shadow_vmcs12(vcpu)
+                                                   : get_vmcs12(vcpu);
        struct x86_exception e;
        short offset;
 
        if (!nested_vmx_check_permission(vcpu))
                return 1;
 
-       if (to_vmx(vcpu)->nested.current_vmptr == -1ull)
+       /*
+        * In VMX non-root operation, when the VMCS-link pointer is -1ull,
+        * any VMREAD sets the ALU flags for VMfailInvalid.
+        */
+       if (vmx->nested.current_vmptr == -1ull ||
+           (is_guest_mode(vcpu) &&
+            get_vmcs12(vcpu)->vmcs_link_pointer == -1ull))
                return nested_vmx_failInvalid(vcpu);
 
-       if (!is_guest_mode(vcpu))
-               vmcs12 = get_vmcs12(vcpu);
-       else {
-               /*
-                * When vmcs->vmcs_link_pointer is -1ull, any VMREAD
-                * to shadowed-field sets the ALU flags for VMfailInvalid.
-                */
-               if (get_vmcs12(vcpu)->vmcs_link_pointer == -1ull)
-                       return nested_vmx_failInvalid(vcpu);
-               vmcs12 = get_shadow_vmcs12(vcpu);
-       }
-
        /* Decode instruction info and find the field to read */
        field = kvm_register_readl(vcpu, (((vmx_instruction_info) >> 28) & 0xf));
 
@@ -4855,13 +4851,20 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
         */
        u64 field_value = 0;
        struct x86_exception e;
-       struct vmcs12 *vmcs12;
+       struct vmcs12 *vmcs12 = is_guest_mode(vcpu) ? get_shadow_vmcs12(vcpu)
+                                                   : get_vmcs12(vcpu);
        short offset;
 
        if (!nested_vmx_check_permission(vcpu))
                return 1;
 
-       if (vmx->nested.current_vmptr == -1ull)
+       /*
+        * In VMX non-root operation, when the VMCS-link pointer is -1ull,
+        * any VMWRITE sets the ALU flags for VMfailInvalid.
+        */
+       if (vmx->nested.current_vmptr == -1ull ||
+           (is_guest_mode(vcpu) &&
+            get_vmcs12(vcpu)->vmcs_link_pointer == -1ull))
                return nested_vmx_failInvalid(vcpu);
 
        if (vmx_instruction_info & (1u << 10))
@@ -4889,24 +4892,12 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
                return nested_vmx_failValid(vcpu,
                        VMXERR_VMWRITE_READ_ONLY_VMCS_COMPONENT);
 
-       if (!is_guest_mode(vcpu)) {
-               vmcs12 = get_vmcs12(vcpu);
-
-               /*
-                * Ensure vmcs12 is up-to-date before any VMWRITE that dirties
-                * vmcs12, else we may crush a field or consume a stale value.
-                */
-               if (!is_shadow_field_rw(field))
-                       copy_vmcs02_to_vmcs12_rare(vcpu, vmcs12);
-       } else {
-               /*
-                * When vmcs->vmcs_link_pointer is -1ull, any VMWRITE
-                * to shadowed-field sets the ALU flags for VMfailInvalid.
-                */
-               if (get_vmcs12(vcpu)->vmcs_link_pointer == -1ull)
-                       return nested_vmx_failInvalid(vcpu);
-               vmcs12 = get_shadow_vmcs12(vcpu);
-       }
+       /*
+        * Ensure vmcs12 is up-to-date before any VMWRITE that dirties
+        * vmcs12, else we may crush a field or consume a stale value.
+        */
+       if (!is_guest_mode(vcpu) && !is_shadow_field_rw(field))
+               copy_vmcs02_to_vmcs12_rare(vcpu, vmcs12);
 
        offset = vmcs_field_to_offset(field);
        if (offset < 0)
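As an aside for readers following the ordering argument without the SDM
at hand: the standalone toy program below models the check order this
patch enforces, namely that VMfailInvalid for a missing current VMCS or
a -1ull VMCS-link pointer is reported before the field operand is
validated at all. None of it is KVM code; the struct, helpers, and
values are invented stand-ins for the real state in
arch/x86/kvm/vmx/nested.c.

#include <stdbool.h>
#include <stdio.h>

/*
 * Toy model of the emulated VMWRITE/VMREAD check order. All names
 * here are hypothetical; only the ordering mirrors the patch.
 */
enum toy_result { TOY_SUCCEED, TOY_FAIL_INVALID, TOY_FAIL_VALID };

struct toy_vcpu {
        bool guest_mode;                  /* in VMX non-root operation? */
        unsigned long long current_vmptr; /* -1ull: no current VMCS */
        unsigned long long vmcs_link_pointer;
        bool field_supported;             /* is the VMCS field valid? */
};

static enum toy_result toy_vmwrite(const struct toy_vcpu *v)
{
        /*
         * First check: VMfailInvalid when there is no current VMCS or,
         * in VMX non-root operation, when the VMCS-link pointer is
         * -1ull -- before the field operand is looked at.
         */
        if (v->current_vmptr == -1ull ||
            (v->guest_mode && v->vmcs_link_pointer == -1ull))
                return TOY_FAIL_INVALID;

        /* Only now is the VMCS field in the source operand validated. */
        if (!v->field_supported)
                return TOY_FAIL_VALID;

        return TOY_SUCCEED;
}

int main(void)
{
        /* L2 guest, invalid link pointer AND bogus field: invalid wins. */
        struct toy_vcpu v = {
                .guest_mode = true,
                .current_vmptr = 0x12340000ull,
                .vmcs_link_pointer = -1ull,
                .field_supported = false,
        };

        printf("%s\n", toy_vmwrite(&v) == TOY_FAIL_INVALID
               ? "VMfailInvalid (as the SDM requires)" : "unexpected");
        return 0;
}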
From patchwork Fri Dec 6 19:31:43 2019
X-Patchwork-Submitter: Jim Mattson
X-Patchwork-Id: 11276999
Subject: [PATCH v2 2/3] kvm: nVMX: VMWRITE checks unsupported field before read-only field
From: Jim Mattson
To: kvm@vger.kernel.org
Cc: Jim Mattson, Peter Shier, Oliver Upton, Jon Cargille, Paolo Bonzini
Date: Fri, 6 Dec 2019 11:31:43 -0800
Message-Id: <20191206193144.33209-2-jmattson@google.com>
In-Reply-To: <20191206193144.33209-1-jmattson@google.com>
References: <20191206193144.33209-1-jmattson@google.com>

According to the SDM, VMWRITE checks whether the secondary source
operand corresponds to an unsupported VMCS field before it checks
whether that operand corresponds to a VM-exit information field on a
processor that does not support writing to VM-exit information fields.

Fixes: 49f705c5324aa ("KVM: nVMX: Implement VMREAD and VMWRITE")
Signed-off-by: Jim Mattson
Reviewed-by: Peter Shier
Reviewed-by: Oliver Upton
Reviewed-by: Jon Cargille
Cc: Paolo Bonzini
---
 arch/x86/kvm/vmx/nested.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index ee1bf9710e86..94ec089d6d1a 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4883,6 +4883,12 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
        field = kvm_register_readl(vcpu, (((vmx_instruction_info) >> 28) & 0xf));
+
+       offset = vmcs_field_to_offset(field);
+       if (offset < 0)
+               return nested_vmx_failValid(vcpu,
+                       VMXERR_UNSUPPORTED_VMCS_COMPONENT);
+
        /*
         * If the vCPU supports "VMWRITE to any supported field in the
         * VMCS," then the "read-only" fields are actually read/write.
@@ -4899,11 +4905,6 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
        if (!is_guest_mode(vcpu) && !is_shadow_field_rw(field))
                copy_vmcs02_to_vmcs12_rare(vcpu, vmcs12);
 
-       offset = vmcs_field_to_offset(field);
-       if (offset < 0)
-               return nested_vmx_failValid(vcpu,
-                       VMXERR_UNSUPPORTED_VMCS_COMPONENT);
-
        /*
         * Some Intel CPUs intentionally drop the reserved bits of the AR byte
         * fields on VMWRITE. Emulate this behavior to ensure consistent KVM
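Again as an illustrative aside, a toy model of the field-check ordering
this patch enforces in handle_vmwrite. Not KVM code: the two predicates
and the field numbers below are invented; only the error names, their
numeric values, and the ordering are meant to mirror the SDM.

#include <stdbool.h>
#include <stdio.h>

enum toy_vmxerr {
        TOY_OK = 0,
        TOY_ERR_UNSUPPORTED_VMCS_COMPONENT = 12,
        TOY_ERR_VMWRITE_READ_ONLY_VMCS_COMPONENT = 13,
};

/* Hypothetical stand-ins for vmcs_field_to_offset(field) < 0 and for
 * the read-only field test. */
static bool toy_field_supported(unsigned long field) { return field < 0x100; }
static bool toy_field_read_only(unsigned long field) { return field & 1; }

static enum toy_vmxerr toy_vmwrite_checks(unsigned long field,
                                          bool vmwrite_any_field)
{
        /* The unsupported-field check now comes first... */
        if (!toy_field_supported(field))
                return TOY_ERR_UNSUPPORTED_VMCS_COMPONENT;

        /*
         * ...and only then the read-only check, which is skipped when
         * the vCPU supports "VMWRITE to any supported field in the
         * VMCS."
         */
        if (!vmwrite_any_field && toy_field_read_only(field))
                return TOY_ERR_VMWRITE_READ_ONLY_VMCS_COMPONENT;

        return TOY_OK;
}

int main(void)
{
        /*
         * A field that is both unsupported and "read-only" under the
         * toy predicates must report the unsupported error, never the
         * read-only one.
         */
        printf("err=%d (expect 12)\n", toy_vmwrite_checks(0x201, false));
        return 0;
}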
From patchwork Fri Dec 6 19:31:44 2019
X-Patchwork-Submitter: Jim Mattson
X-Patchwork-Id: 11277001
Subject: [PATCH v2 3/3] kvm: nVMX: Aesthetic cleanup of handle_vmread and handle_vmwrite
From: Jim Mattson
To: kvm@vger.kernel.org
Cc: Jim Mattson, Peter Shier, Oliver Upton, Jon Cargille, Paolo Bonzini
Date: Fri, 6 Dec 2019 11:31:44 -0800
Message-Id: <20191206193144.33209-3-jmattson@google.com>
In-Reply-To: <20191206193144.33209-1-jmattson@google.com>
References: <20191206193144.33209-1-jmattson@google.com>

Apply reverse fir tree declaration order, wrap long lines, reformat a
block comment, delete an extra blank line, and use BIT_ULL(10) instead
of (1u << 10).

Signed-off-by: Jim Mattson
Reviewed-by: Peter Shier
Reviewed-by: Oliver Upton
Reviewed-by: Jon Cargille
Cc: Paolo Bonzini
---
 arch/x86/kvm/vmx/nested.c | 47 +++++++++++++++++++++------------------
 1 file changed, 25 insertions(+), 22 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 94ec089d6d1a..aff163192369 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4751,17 +4751,17 @@ static int handle_vmresume(struct kvm_vcpu *vcpu)
 
 static int handle_vmread(struct kvm_vcpu *vcpu)
 {
-       unsigned long field;
-       u64 field_value;
-       struct vcpu_vmx *vmx = to_vmx(vcpu);
-       unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
-       u32 vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
-       int len;
-       gva_t gva = 0;
        struct vmcs12 *vmcs12 = is_guest_mode(vcpu) ? get_shadow_vmcs12(vcpu)
                                                    : get_vmcs12(vcpu);
+       unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
+       u32 vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
+       struct vcpu_vmx *vmx = to_vmx(vcpu);
        struct x86_exception e;
+       unsigned long field;
+       u64 field_value;
+       gva_t gva = 0;
        short offset;
+       int len;
 
        if (!nested_vmx_check_permission(vcpu))
                return 1;
@@ -4776,7 +4776,8 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
                return nested_vmx_failInvalid(vcpu);
 
        /* Decode instruction info and find the field to read */
-       field = kvm_register_readl(vcpu, (((vmx_instruction_info) >> 28) & 0xf));
+       field = kvm_register_readl(vcpu,
+                                  (((vmx_instruction_info) >> 28) & 0xf));
 
        offset = vmcs_field_to_offset(field);
        if (offset < 0)
@@ -4794,7 +4795,7 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
         * Note that the number of bits actually copied is 32 or 64 depending
         * on the guest's mode (32 or 64 bit), not on the given field's length.
         */
-       if (vmx_instruction_info & (1u << 10)) {
+       if (vmx_instruction_info & BIT_ULL(10)) {
                kvm_register_writel(vcpu, (((vmx_instruction_info) >> 3) & 0xf),
                                    field_value);
        } else {
@@ -4803,7 +4804,8 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
                                vmx_instruction_info, true, len, &gva))
                        return 1;
                /* _system ok, nested_vmx_check_permission has verified cpl=0 */
-               if (kvm_write_guest_virt_system(vcpu, gva, &field_value, len, &e))
+               if (kvm_write_guest_virt_system(vcpu, gva, &field_value,
+                                               len, &e))
                        kvm_inject_page_fault(vcpu, &e);
        }
 
@@ -4836,24 +4838,25 @@ static bool is_shadow_field_ro(unsigned long field)
 
 static int handle_vmwrite(struct kvm_vcpu *vcpu)
 {
-       unsigned long field;
-       int len;
-       gva_t gva;
-       struct vcpu_vmx *vmx = to_vmx(vcpu);
+       struct vmcs12 *vmcs12 = is_guest_mode(vcpu) ? get_shadow_vmcs12(vcpu)
+                                                   : get_vmcs12(vcpu);
        unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
        u32 vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
+       struct vcpu_vmx *vmx = to_vmx(vcpu);
+       struct x86_exception e;
+       unsigned long field;
+       short offset;
+       gva_t gva;
+       int len;
 
-       /* The value to write might be 32 or 64 bits, depending on L1's long
+       /*
+        * The value to write might be 32 or 64 bits, depending on L1's long
         * mode, and eventually we need to write that into a field of several
         * possible lengths. The code below first zero-extends the value to 64
         * bit (field_value), and then copies only the appropriate number of
         * bits into the vmcs12 field.
         */
        u64 field_value = 0;
-       struct x86_exception e;
-       struct vmcs12 *vmcs12 = is_guest_mode(vcpu) ? get_shadow_vmcs12(vcpu)
-                                                   : get_vmcs12(vcpu);
-       short offset;
 
        if (!nested_vmx_check_permission(vcpu))
                return 1;
@@ -4867,7 +4870,7 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
             get_vmcs12(vcpu)->vmcs_link_pointer == -1ull))
                return nested_vmx_failInvalid(vcpu);
 
-       if (vmx_instruction_info & (1u << 10))
+       if (vmx_instruction_info & BIT_ULL(10))
                field_value = kvm_register_readl(vcpu,
                        (((vmx_instruction_info) >> 3) & 0xf));
        else {
@@ -4881,8 +4884,8 @@ static int handle_vmwrite(struct kvm_vcpu *vcpu)
                }
        }
 
-
-       field = kvm_register_readl(vcpu, (((vmx_instruction_info) >> 28) & 0xf));
+       field = kvm_register_readl(vcpu,
+                                  (((vmx_instruction_info) >> 28) & 0xf));
 
        offset = vmcs_field_to_offset(field);
        if (offset < 0)
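For anyone unfamiliar with the convention named in the commit message,
"reverse fir tree" (also called reverse Christmas tree) order sorts a
function's local declarations from the longest line to the shortest.
A compilable illustration follows; the function and its locals are made
up purely to show the shape.

#include <stdio.h>

static int toy_reverse_fir_tree(void)
{
        /* Longest declaration line first, shortest last. */
        unsigned long long a_deliberately_long_name = 3;
        unsigned long medium_name = 2;
        unsigned int name = 1;
        short off = 0;
        int len = 4;

        return (int)(a_deliberately_long_name + medium_name + name) +
               off + len;
}

int main(void)
{
        printf("%d\n", toy_reverse_fir_tree());  /* prints 10 */
        return 0;
}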