From patchwork Sun Feb 10 20:42:29 2013
X-Patchwork-Submitter: Jan Kiszka
X-Patchwork-Id: 2122951
Message-ID: <51180635.3060003@web.de>
Date: Sun, 10 Feb 2013 21:42:29 +0100
From: Jan Kiszka
To: Gleb Natapov, Marcelo Tosatti
Cc: kvm, Nadav Har'El, Orit Wasserman
Subject: [PATCH] KVM: nVMX: Improve I/O exit handling
X-Mailing-List: kvm@vger.kernel.org

From: Jan Kiszka

This prevents trapping L2 I/O exits if L1 has neither unconditional nor
bitmap-based exiting enabled. Furthermore, it implements basic I/O bitmap
handling. Repeated string accesses are still reported to L1 unconditionally
for now.

Signed-off-by: Jan Kiszka
---
If someone tells me how to figure out the effective I/O access range of
rep ins/outs in all possible CPU modes in three lines, I'll complete this
patch. For now I had no use for it and was too lazy.
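For reference (not part of the patch): the lookup the new helper performs
follows the VMX I/O bitmap layout, where bitmap A covers ports 0x0000-0x7fff,
bitmap B covers ports 0x8000-0xffff, one bit per port, and a multi-byte
access has to exit as soon as the bit of any touched port is set. Below is a
minimal stand-alone user-space sketch of that logic; io_access_should_exit
and the plain arrays are made up for illustration and stand in for the
guest-physical bitmap pages read via kvm_read_guest() in the patch.

/*
 * Stand-alone sketch (illustration only, not part of the patch) of the
 * per-port I/O bitmap lookup done by nested_vmx_exit_handled_io().
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint8_t io_bitmap_a[0x1000];	/* covers ports 0x0000-0x7fff */
static uint8_t io_bitmap_b[0x1000];	/* covers ports 0x8000-0xffff */

static bool io_access_should_exit(uint16_t port, int size)
{
	/* Exit if any byte of the accessed port range has its bit set. */
	while (size > 0) {
		const uint8_t *bitmap = port < 0x8000 ? io_bitmap_a
						      : io_bitmap_b;
		unsigned int bit = port & 0x7fff;

		if (bitmap[bit / 8] & (1 << (bit % 8)))
			return true;
		port++;
		size--;
	}
	return false;
}

int main(void)
{
	/* Mark port 0x3f8 (COM1 data register) as intercepted in bitmap A. */
	io_bitmap_a[0x3f8 / 8] |= 1 << (0x3f8 % 8);

	printf("1-byte access to 0x3f8 exits: %d\n",
	       io_access_should_exit(0x3f8, 1));
	printf("2-byte access to 0x3f7 exits: %d\n",
	       io_access_should_exit(0x3f7, 2));
	printf("1-byte access to 0x0080 exits: %d\n",
	       io_access_should_exit(0x80, 1));
	return 0;
}

The 2-byte access at 0x3f7 exits because its second byte lands on the
intercepted port 0x3f8; this is the same range walk the while loop in the
new helper performs.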
 arch/x86/kvm/vmx.c | 54 ++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 52 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index fe9a9cf..056bd95 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -5913,6 +5913,57 @@ static int (*const kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = {
 static const int kvm_vmx_max_exit_handlers =
 	ARRAY_SIZE(kvm_vmx_exit_handlers);
 
+static bool nested_vmx_exit_handled_io(struct kvm_vcpu *vcpu,
+				       struct vmcs12 *vmcs12)
+{
+	unsigned long exit_qualification;
+	gpa_t bitmap, last_bitmap;
+	bool string, rep;
+	u16 port;
+	int size;
+	u8 b;
+
+	if (nested_cpu_has(get_vmcs12(vcpu), CPU_BASED_UNCOND_IO_EXITING))
+		return 1;
+
+	if (!nested_cpu_has(get_vmcs12(vcpu), CPU_BASED_USE_IO_BITMAPS))
+		return 0;
+
+	exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
+
+	string = exit_qualification & 16;
+	rep = exit_qualification & 32;
+
+	/* TODO: interpret instruction and check range against bitmap */
+	if (string && rep)
+		return 1;
+
+	port = exit_qualification >> 16;
+	size = (exit_qualification & 7) + 1;
+
+	last_bitmap = (gpa_t)-1;
+	b = -1;
+
+	while (size > 0) {
+		if (port < 0x8000)
+			bitmap = vmcs12->io_bitmap_a;
+		else
+			bitmap = vmcs12->io_bitmap_b;
+		bitmap += (port & 0x7fff) / 8;
+
+		if (last_bitmap != bitmap)
+			kvm_read_guest(vcpu->kvm, bitmap, &b, 1);
+		if (b & (1 << (port & 7)))
+			return 1;
+
+		port++;
+		size--;
+		last_bitmap = bitmap;
+	}
+
+	return 0;
+}
+
 /*
  * Return 1 if we should exit from L2 to L1 to handle an MSR access access,
  * rather than handle it ourselves in L0. I.e., check whether L1 expressed
@@ -6102,8 +6153,7 @@ static bool nested_vmx_exit_handled(struct kvm_vcpu *vcpu)
 	case EXIT_REASON_DR_ACCESS:
 		return nested_cpu_has(vmcs12, CPU_BASED_MOV_DR_EXITING);
 	case EXIT_REASON_IO_INSTRUCTION:
-		/* TODO: support IO bitmaps */
-		return 1;
+		return nested_vmx_exit_handled_io(vcpu, vmcs12);
 	case EXIT_REASON_MSR_READ:
 	case EXIT_REASON_MSR_WRITE:
 		return nested_vmx_exit_handled_msr(vcpu, vmcs12, exit_reason);