From patchwork Wed May 31 07:23:00 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 9756031
Message-Id: <592E8B74020000780015DFCC@prv-mh.provo.novell.com>
Date: Wed, 31 May 2017 01:23:00 -0600
From: "Jan Beulich"
To: "xen-devel"
References: <592E8925020000780015DF7A@prv-mh.provo.novell.com>
In-Reply-To: <592E8925020000780015DF7A@prv-mh.provo.novell.com>
Cc: Boris Ostrovsky, Suravee Suthikulpanit
Subject: [Xen-devel] [PATCH 4/4] SVM: clean up svm_vmcb_isvalid()

- correct CR3 and CR4 checks
- add vcpu parameter (to include in log messages) and constify vmcb one
- use bool/true/false
- use accessors
- adjust formatting

Signed-off-by: Jan Beulich

--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -658,13 +658,13 @@ static int nsvm_vmcb_prepare4vmrun(struc
     /* Cleanbits */
     n2vmcb->cleanbits.bytes = 0;

-    rc = svm_vmcb_isvalid(__func__, ns_vmcb, 1);
+    rc = svm_vmcb_isvalid(__func__, ns_vmcb, v, true);
     if (rc) {
         gdprintk(XENLOG_ERR, "virtual vmcb invalid\n");
         return NSVM_ERROR_VVMCB;
     }

-    rc = svm_vmcb_isvalid(__func__, n2vmcb, 1);
+    rc = svm_vmcb_isvalid(__func__, n2vmcb, v, true);
     if (rc) {
         gdprintk(XENLOG_ERR, "n2vmcb invalid\n");
         return NSVM_ERROR_VMENTRY;
--- a/xen/arch/x86/hvm/svm/svmdebug.c
+++ b/xen/arch/x86/hvm/svm/svmdebug.c
@@ -16,6 +16,7 @@
  *
  */

+#include
 #include
 #include
 #include
@@ -87,93 +88,76 @@ void svm_vmcb_dump(const char *from, con
     svm_dump_sel(" TR", &vmcb->tr);
 }

-bool_t
-svm_vmcb_isvalid(const char *from, struct vmcb_struct *vmcb,
-                 bool_t verbose)
+bool svm_vmcb_isvalid(const char *from, const struct vmcb_struct *vmcb,
+                      const struct vcpu *v, bool verbose)
 {
-    bool_t ret = 0; /* ok */
+    bool ret = false; /* ok */

-#define PRINTF(...) \
-    if (verbose) { ret = 1; printk("%s: ", from); printk(__VA_ARGS__); \
-    } else return 1;
+#define PRINTF(fmt, args...) do { \
+    if ( !verbose ) return true; \
+    ret = true; \
+    printk(XENLOG_GUEST "%pv[%s]: " fmt, v, from, ## args); \
+} while (0)

-    if ((vmcb->_efer & EFER_SVME) == 0) {
-        PRINTF("EFER: SVME bit not set (%#"PRIx64")\n", vmcb->_efer);
-    }
+    if ( !(vmcb_get_efer(vmcb) & EFER_SVME) )
+        PRINTF("EFER: SVME bit not set (%#"PRIx64")\n", vmcb_get_efer(vmcb));

-    if ((vmcb->_cr0 & X86_CR0_CD) == 0 && (vmcb->_cr0 & X86_CR0_NW) != 0) {
+    if ( !(vmcb_get_cr0(vmcb) & X86_CR0_CD) && (vmcb_get_cr0(vmcb) & X86_CR0_NW) )
         PRINTF("CR0: CD bit is zero and NW bit set (%#"PRIx64")\n",
-                vmcb->_cr0);
-    }
+               vmcb_get_cr0(vmcb));

-    if ((vmcb->_cr0 >> 32U) != 0) {
+    if ( vmcb_get_cr0(vmcb) >> 32 )
         PRINTF("CR0: bits [63:32] are not zero (%#"PRIx64")\n",
-                vmcb->_cr0);
-    }
+               vmcb_get_cr0(vmcb));

-    if ((vmcb->_cr3 & 0x7) != 0) {
-        PRINTF("CR3: MBZ bits are set (%#"PRIx64")\n", vmcb->_cr3);
-    }
-    if ((vmcb->_efer & EFER_LMA) && (vmcb->_cr3 & 0xfe) != 0) {
-        PRINTF("CR3: MBZ bits are set (%#"PRIx64")\n", vmcb->_cr3);
-    }
+    if ( (vmcb_get_cr3(vmcb) & 0x7) ||
+         ((!(vmcb_get_cr4(vmcb) & X86_CR4_PAE) ||
+           (vmcb_get_efer(vmcb) & EFER_LMA)) &&
+          (vmcb_get_cr3(vmcb) & 0xfe0)) ||
+         ((vmcb_get_efer(vmcb) & EFER_LMA) &&
+          (vmcb_get_cr3(vmcb) >> v->domain->arch.cpuid->extd.maxphysaddr)) )
+        PRINTF("CR3: MBZ bits are set (%#"PRIx64")\n", vmcb_get_cr3(vmcb));

-    if ((vmcb->_cr4 >> 19U) != 0) {
-        PRINTF("CR4: bits [63:19] are not zero (%#"PRIx64")\n",
-                vmcb->_cr4);
-    }
-
-    if (((vmcb->_cr4 >> 11U) & 0x7fU) != 0) {
-        PRINTF("CR4: bits [17:11] are not zero (%#"PRIx64")\n",
-                vmcb->_cr4);
-    }
+    if ( vmcb_get_cr4(vmcb) & ~hvm_cr4_guest_valid_bits(v, false) )
+        PRINTF("CR4: MBZ bits are set (%#"PRIx64")\n", vmcb_get_cr4(vmcb));

-    if ((vmcb->_dr6 >> 32U) != 0) {
+    if ( vmcb_get_dr6(vmcb) >> 32 )
         PRINTF("DR6: bits [63:32] are not zero (%#"PRIx64")\n",
-                vmcb->_dr6);
-    }
+               vmcb_get_dr6(vmcb));

-    if ((vmcb->_dr7 >> 32U) != 0) {
+    if ( vmcb_get_dr7(vmcb) >> 32 )
         PRINTF("DR7: bits [63:32] are not zero (%#"PRIx64")\n",
-                vmcb->_dr7);
-    }
+               vmcb_get_dr7(vmcb));

-    if ((vmcb->_efer >> 15U) != 0) {
+    if ( vmcb_get_efer(vmcb) >> 15U )
         PRINTF("EFER: bits [63:15] are not zero (%#"PRIx64")\n",
-                vmcb->_efer);
-    }
-
-    if ((vmcb->_efer & EFER_LME) != 0 && ((vmcb->_cr0 & X86_CR0_PG) != 0)) {
-        if ((vmcb->_cr4 & X86_CR4_PAE) == 0) {
-            PRINTF("EFER_LME and CR0.PG are both set and CR4.PAE is zero.\n");
-        }
-        if ((vmcb->_cr0 & X86_CR0_PE) == 0) {
-            PRINTF("EFER_LME and CR0.PG are both set and CR0.PE is zero.\n");
-        }
-    }
+               vmcb_get_efer(vmcb));

-    if ((vmcb->_efer & EFER_LME) != 0
-        && (vmcb->_cr0 & X86_CR0_PG) != 0
-        && (vmcb->_cr4 & X86_CR4_PAE) != 0
-        && (vmcb->cs.attr.fields.l != 0)
-        && (vmcb->cs.attr.fields.db != 0))
+    if ( (vmcb_get_efer(vmcb) & EFER_LME) && (vmcb_get_cr0(vmcb) & X86_CR0_PG) )
     {
-        PRINTF("EFER_LME, CR0.PG, CR4.PAE, CS.L and CS.D are all non-zero.\n");
+        if ( !(vmcb_get_cr4(vmcb) & X86_CR4_PAE) )
+            PRINTF("EFER_LME and CR0.PG are both set and CR4.PAE is zero\n");
+        if ( !(vmcb_get_cr0(vmcb) & X86_CR0_PE) )
+            PRINTF("EFER_LME and CR0.PG are both set and CR0.PE is zero\n");
     }

-    if ((vmcb->_general2_intercepts & GENERAL2_INTERCEPT_VMRUN) == 0) {
+    if ( (vmcb_get_efer(vmcb) & EFER_LME) &&
+         (vmcb_get_cr0(vmcb) & X86_CR0_PG) &&
+         (vmcb_get_cr4(vmcb) & X86_CR4_PAE) &&
+         vmcb->cs.attr.fields.l &&
+         vmcb->cs.attr.fields.db )
+        PRINTF("EFER_LME, CR0.PG, CR4.PAE, CS.L and CS.D are all non-zero\n");
+
+    if ( !(vmcb_get_general2_intercepts(vmcb) & GENERAL2_INTERCEPT_VMRUN) )
         PRINTF("GENERAL2_INTERCEPT: VMRUN intercept bit is clear (%#"PRIx32")\n",
-               vmcb->_general2_intercepts);
-    }
+               vmcb_get_general2_intercepts(vmcb));

-    if (vmcb->eventinj.fields.resvd1 != 0) {
+    if ( vmcb->eventinj.fields.resvd1 )
         PRINTF("eventinj: MBZ bits are set (%#"PRIx64")\n",
                vmcb->eventinj.bytes);
-    }

-    if (vmcb->_np_enable && vmcb->_h_cr3 == 0) {
+    if ( vmcb_get_np_enable(vmcb) && !vmcb_get_h_cr3(vmcb) )
         PRINTF("nested paging enabled but host cr3 is 0\n");
-    }

 #undef PRINTF

     return ret;
--- a/xen/include/asm-x86/hvm/svm/svmdebug.h
+++ b/xen/include/asm-x86/hvm/svm/svmdebug.h
@@ -23,7 +23,7 @@
 #include

 void svm_vmcb_dump(const char *from, const struct vmcb_struct *vmcb);
-bool_t svm_vmcb_isvalid(const char *from, struct vmcb_struct *vmcb,
-                        bool_t verbose);
+bool svm_vmcb_isvalid(const char *from, const struct vmcb_struct *vmcb,
+                      const struct vcpu *v, bool verbose);

 #endif /* __ASM_X86_HVM_SVM_SVMDEBUG_H__ */
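[Editorial note, not part of the patch.] For readers following the first bullet
("correct CR3 and CR4 checks"), the sketch below restates the reworked CR3
MBZ-bit test as a standalone C snippet. The helper name cr3_has_mbz_bits and
its plain-integer interface are made up for illustration, and "maxphysaddr"
stands in for v->domain->arch.cpuid->extd.maxphysaddr used in the patch.

    /* Illustrative sketch only; not part of the patch. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define X86_CR4_PAE (1ULL << 5)
    #define EFER_LMA    (1ULL << 10)

    static bool cr3_has_mbz_bits(uint64_t cr3, uint64_t cr4, uint64_t efer,
                                 unsigned int maxphysaddr)
    {
        /* Bits 2:0 are treated as must-be-zero in every paging mode. */
        if ( cr3 & 0x7 )
            return true;

        /*
         * Bits 11:5 (mask 0xfe0) are must-be-zero unless the guest is in
         * legacy PAE mode (CR4.PAE set, EFER.LMA clear), where they carry
         * part of the 32-byte aligned PDPT base address.
         */
        if ( (!(cr4 & X86_CR4_PAE) || (efer & EFER_LMA)) && (cr3 & 0xfe0) )
            return true;

        /* In long mode, bits at or above the physical address width are MBZ. */
        if ( (efer & EFER_LMA) && (cr3 >> maxphysaddr) )
            return true;

        return false;
    }

    int main(void)
    {
        /* Long mode, 40-bit physical addresses: bit 45 set in CR3 -> invalid (1). */
        printf("%d\n", cr3_has_mbz_bits(1ULL << 45, X86_CR4_PAE, EFER_LMA, 40));
        /* Legacy PAE: bits 11:5 hold PDPT address bits, so 0xfe0 is accepted (0). */
        printf("%d\n", cr3_has_mbz_bits(0xfe0, X86_CR4_PAE, 0, 40));
        return 0;
    }

The extra CR4.PAE / EFER.LMA qualification on the 0xfe0 mask keeps legacy-PAE
guests, whose CR3 bits 11:5 carry part of the PDPT base, from tripping the
check, which is the correction the old "& 0xfe" test was missing.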