From patchwork Thu Aug 15 11:45:07 2013
X-Patchwork-Submitter: Arthur Chunqi Li
X-Patchwork-Id: 2845125
From: Arthur Chunqi Li
To: kvm@vger.kernel.org
Cc: jan.kiszka@web.de, gleb@redhat.com, pbonzini@redhat.com, Arthur Chunqi Li
Subject: [PATCH v2 2/4] kvm-unit-tests: VMX: Add test cases for CR0/4 shadowing
Date: Thu, 15 Aug 2013 19:45:07 +0800
Message-Id: <1376567109-20834-3-git-send-email-yzt356@gmail.com>
In-Reply-To: <1376567109-20834-1-git-send-email-yzt356@gmail.com>
References: <1376567109-20834-1-git-send-email-yzt356@gmail.com>
X-Mailing-List: kvm@vger.kernel.org

Add tests for CR0/CR4 shadowing. Two types of CR0/CR4 flags are
exercised: flags owned by L1 and flags shadowed by L1, which KVM
handles differently. We test one flag of each type in CR0 (TS and MP)
and CR4 (DE and TSD), covering read through, read shadow, write
through, and write shadow (with values both equal to and different
from the shadowed value).
Signed-off-by: Arthur Chunqi Li
---
 lib/x86/vm.h    |    4 +
 x86/vmx_tests.c |  218 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 222 insertions(+)

diff --git a/lib/x86/vm.h b/lib/x86/vm.h
index eff6f72..6e0ce2b 100644
--- a/lib/x86/vm.h
+++ b/lib/x86/vm.h
@@ -17,9 +17,13 @@
 #define PTE_ADDR	(0xffffffffff000ull)
 
 #define X86_CR0_PE	0x00000001
+#define X86_CR0_MP	0x00000002
+#define X86_CR0_TS	0x00000008
 #define X86_CR0_WP	0x00010000
 #define X86_CR0_PG	0x80000000
 #define X86_CR4_VMXE	0x00000001
+#define X86_CR4_TSD	0x00000004
+#define X86_CR4_DE	0x00000008
 #define X86_CR4_PSE	0x00000010
 #define X86_CR4_PAE	0x00000020
 #define X86_CR4_PCIDE	0x00020000
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 61b0cef..a5cc353 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -5,12 +5,20 @@
 
 u64 ia32_pat;
 u64 ia32_efer;
+volatile u32 stage;
 
 static inline void vmcall()
 {
 	asm volatile("vmcall");
 }
 
+static inline void set_stage(u32 s)
+{
+	barrier();
+	stage = s;
+	barrier();
+}
+
 void basic_init()
 {
 }
@@ -257,6 +265,214 @@ static int test_ctrl_efer_exit_handler()
 	return VMX_TEST_VMEXIT;
 }
 
+u32 guest_cr0, guest_cr4;
+
+static void cr_shadowing_main()
+{
+	u32 cr0, cr4, tmp;
+
+	// Test read through
+	set_stage(0);
+	guest_cr0 = read_cr0();
+	if (stage == 1)
+		report("Read through CR0", 0);
+	else
+		vmcall();
+	set_stage(1);
+	guest_cr4 = read_cr4();
+	if (stage == 2)
+		report("Read through CR4", 0);
+	else
+		vmcall();
+	// Test write through
+	guest_cr0 = guest_cr0 ^ (X86_CR0_TS | X86_CR0_MP);
+	guest_cr4 = guest_cr4 ^ (X86_CR4_TSD | X86_CR4_DE);
+	set_stage(2);
+	write_cr0(guest_cr0);
+	if (stage == 3)
+		report("Write through CR0", 0);
+	else
+		vmcall();
+	set_stage(3);
+	write_cr4(guest_cr4);
+	if (stage == 4)
+		report("Write through CR4", 0);
+	else
+		vmcall();
+	// Test read shadow
+	set_stage(4);
+	vmcall();
+	cr0 = read_cr0();
+	if (stage != 5) {
+		if (cr0 == guest_cr0)
+			report("Read shadowing CR0", 1);
+		else
+			report("Read shadowing CR0", 0);
+	}
+	set_stage(5);
+	cr4 = read_cr4();
+	if (stage != 6) {
+		if (cr4 == guest_cr4)
+			report("Read shadowing CR4", 1);
+		else
+			report("Read shadowing CR4", 0);
+	}
+	// Test write shadow (same value with shadow)
+	set_stage(6);
+	write_cr0(guest_cr0);
+	if (stage == 7)
+		report("Write shadowing CR0 (same value with shadow)", 0);
+	else
+		vmcall();
+	set_stage(7);
+	write_cr4(guest_cr4);
+	if (stage == 8)
+		report("Write shadowing CR4 (same value with shadow)", 0);
+	else
+		vmcall();
+	// Test write shadow (different value)
+	set_stage(8);
+	tmp = guest_cr0 ^ X86_CR0_TS;
+	asm volatile("mov %0, %%rsi\n\t"
+		"mov %%rsi, %%cr0\n\t"
+		::"m"(tmp)
+		:"rsi", "memory", "cc");
+	if (stage != 9)
+		report("Write shadowing different X86_CR0_TS", 0);
+	else
+		report("Write shadowing different X86_CR0_TS", 1);
+	set_stage(9);
+	tmp = guest_cr0 ^ X86_CR0_MP;
+	asm volatile("mov %0, %%rsi\n\t"
+		"mov %%rsi, %%cr0\n\t"
+		::"m"(tmp)
+		:"rsi", "memory", "cc");
+	if (stage != 10)
+		report("Write shadowing different X86_CR0_MP", 0);
+	else
+		report("Write shadowing different X86_CR0_MP", 1);
+	set_stage(10);
+	tmp = guest_cr4 ^ X86_CR4_TSD;
+	asm volatile("mov %0, %%rsi\n\t"
+		"mov %%rsi, %%cr4\n\t"
+		::"m"(tmp)
+		:"rsi", "memory", "cc");
+	if (stage != 11)
+		report("Write shadowing different X86_CR4_TSD", 0);
+	else
+		report("Write shadowing different X86_CR4_TSD", 1);
+	set_stage(11);
+	tmp = guest_cr4 ^ X86_CR4_DE;
+	asm volatile("mov %0, %%rsi\n\t"
+		"mov %%rsi, %%cr4\n\t"
+		::"m"(tmp)
+		:"rsi", "memory", "cc");
+	if (stage != 12)
+		report("Write shadowing different X86_CR4_DE", 0);
+	else
+		report("Write shadowing different X86_CR4_DE", 1);
+}
+
+static int cr_shadowing_exit_handler()
+{
+	u64 guest_rip;
+	ulong reason;
+	u32 insn_len;
+	u32 exit_qual;
+
+	guest_rip = vmcs_read(GUEST_RIP);
+	reason = vmcs_read(EXI_REASON) & 0xff;
+	insn_len = vmcs_read(EXI_INST_LEN);
+	exit_qual = vmcs_read(EXI_QUALIFICATION);
+	switch (reason) {
+	case VMX_VMCALL:
+		switch (stage) {
+		case 0:
+			if (guest_cr0 == vmcs_read(GUEST_CR0))
+				report("Read through CR0", 1);
+			else
+				report("Read through CR0", 0);
+			break;
+		case 1:
+			if (guest_cr4 == vmcs_read(GUEST_CR4))
+				report("Read through CR4", 1);
+			else
+				report("Read through CR4", 0);
+			break;
+		case 2:
+			if (guest_cr0 == vmcs_read(GUEST_CR0))
+				report("Write through CR0", 1);
+			else
+				report("Write through CR0", 0);
+			break;
+		case 3:
+			if (guest_cr4 == vmcs_read(GUEST_CR4))
+				report("Write through CR4", 1);
+			else
+				report("Write through CR4", 0);
+			break;
+		case 4:
+			guest_cr0 = vmcs_read(GUEST_CR0) ^ (X86_CR0_TS | X86_CR0_MP);
+			guest_cr4 = vmcs_read(GUEST_CR4) ^ (X86_CR4_TSD | X86_CR4_DE);
+			vmcs_write(CR0_MASK, X86_CR0_TS | X86_CR0_MP);
+			vmcs_write(CR0_READ_SHADOW, guest_cr0 & (X86_CR0_TS | X86_CR0_MP));
+			vmcs_write(CR4_MASK, X86_CR4_TSD | X86_CR4_DE);
+			vmcs_write(CR4_READ_SHADOW, guest_cr4 & (X86_CR4_TSD | X86_CR4_DE));
+			break;
+		case 6:
+			if (guest_cr0 == (vmcs_read(GUEST_CR0) ^ (X86_CR0_TS | X86_CR0_MP)))
+				report("Write shadowing CR0 (same value)", 1);
+			else
+				report("Write shadowing CR0 (same value)", 0);
+			break;
+		case 7:
+			if (guest_cr4 == (vmcs_read(GUEST_CR4) ^ (X86_CR4_TSD | X86_CR4_DE)))
+				report("Write shadowing CR4 (same value)", 1);
+			else
+				report("Write shadowing CR4 (same value)", 0);
+			break;
+		}
+		vmcs_write(GUEST_RIP, guest_rip + insn_len);
+		return VMX_TEST_RESUME;
+	case VMX_CR:
+		switch (stage) {
+		case 4:
+			report("Read shadowing CR0", 0);
+			set_stage(stage + 1);
+			break;
+		case 5:
+			report("Read shadowing CR4", 0);
+			set_stage(stage + 1);
+			break;
+		case 6:
+			report("Write shadowing CR0 (same value)", 0);
+			set_stage(stage + 1);
+			break;
+		case 7:
+			report("Write shadowing CR4 (same value)", 0);
+			set_stage(stage + 1);
+			break;
+		case 8:
+		case 9:
+			// 0x600 encodes "mov %esi, %cr0"
+			if (exit_qual == 0x600)
+				set_stage(stage + 1);
+			break;
+		case 10:
+		case 11:
+			// 0x604 encodes "mov %esi, %cr4"
+			if (exit_qual == 0x604)
+				set_stage(stage + 1);
+		}
+		vmcs_write(GUEST_RIP, guest_rip + insn_len);
+		return VMX_TEST_RESUME;
+	default:
+		printf("Unknown exit reason, %ld\n", reason);
+		print_vmexit_info();
+	}
+	return VMX_TEST_VMEXIT;
+}
+
 /* name/init/guest_main/exit_handler/syscall_handler/guest_regs
    basic_* just implement some basic functions */
 struct vmx_test vmx_tests[] = {
@@ -268,5 +484,7 @@ struct vmx_test vmx_tests[] = {
 		test_ctrl_pat_exit_handler, basic_syscall_handler, {0} },
 	{ "control field EFER", test_ctrl_efer_init, test_ctrl_efer_main,
 		test_ctrl_efer_exit_handler, basic_syscall_handler, {0} },
+	{ "CR shadowing", basic_init, cr_shadowing_main,
+		cr_shadowing_exit_handler, basic_syscall_handler, {0} },
 	{ NULL, NULL, NULL, NULL, NULL, {0} },
 };