From patchwork Tue Apr 27 10:38:27 2010
X-Patchwork-Submitter: Joerg Roedel
X-Patchwork-Id: 95343
From: Joerg Roedel
To: Avi Kivity, Marcelo Tosatti
CC: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Joerg Roedel
Subject: [PATCH 17/22] KVM: MMU: Propagate the right fault back to the guest after gva_to_gpa
Date: Tue, 27 Apr 2010 12:38:27 +0200
Message-ID: <1272364712-17425-18-git-send-email-joerg.roedel@amd.com>
X-Mailer: git-send-email 1.7.0.4
In-Reply-To: <1272364712-17425-1-git-send-email-joerg.roedel@amd.com>
References: <1272364712-17425-1-git-send-email-joerg.roedel@amd.com>
X-Mailing-List: kvm@vger.kernel.org
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 8426870..d024e27 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -645,6 +645,7 @@ void kvm_requeue_exception(struct kvm_vcpu *vcpu, unsigned nr);
 void kvm_requeue_exception_e(struct kvm_vcpu *vcpu, unsigned nr, u32 error_code);
 void kvm_inject_page_fault(struct kvm_vcpu *vcpu, unsigned long cr2,
 			   u32 error_code);
+void kvm_propagate_fault(struct kvm_vcpu *vcpu);
 int kvm_read_guest_page_tdp(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 			    gfn_t gfn, void *data, int offset, int len,
 			    u32 *error);
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 5ac0bb4..171e1c7 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -1336,7 +1336,7 @@ static int read_segment_descriptor(struct x86_emulate_ctxt *ctxt,
 	addr = dt.address + index * 8;
 	ret = ops->read_std(addr, desc, sizeof *desc, ctxt->vcpu, &err);
 	if (ret == X86EMUL_PROPAGATE_FAULT)
-		kvm_inject_page_fault(ctxt->vcpu, addr, err);
+		kvm_propagate_fault(ctxt->vcpu);
 
 	return ret;
 }
@@ -1362,7 +1362,7 @@ static int write_segment_descriptor(struct x86_emulate_ctxt *ctxt,
 	addr = dt.address + index * 8;
 	ret = ops->write_std(addr, desc, sizeof *desc, ctxt->vcpu, &err);
 	if (ret == X86EMUL_PROPAGATE_FAULT)
-		kvm_inject_page_fault(ctxt->vcpu, addr, err);
+		kvm_propagate_fault(ctxt->vcpu);
 
 	return ret;
 }
@@ -2165,7 +2165,7 @@ static int task_switch_16(struct x86_emulate_ctxt *ctxt,
 			    &err);
 	if (ret == X86EMUL_PROPAGATE_FAULT) {
 		/* FIXME: need to provide precise fault address */
-		kvm_inject_page_fault(ctxt->vcpu, old_tss_base, err);
+		kvm_propagate_fault(ctxt->vcpu);
 		return ret;
 	}
 
@@ -2175,7 +2175,7 @@ static int task_switch_16(struct x86_emulate_ctxt *ctxt,
 			    &err);
 	if (ret == X86EMUL_PROPAGATE_FAULT) {
 		/* FIXME: need to provide precise fault address */
-		kvm_inject_page_fault(ctxt->vcpu, old_tss_base, err);
+		kvm_propagate_fault(ctxt->vcpu);
 		return ret;
 	}
 
@@ -2183,7 +2183,7 @@ static int task_switch_16(struct x86_emulate_ctxt *ctxt,
 			    &err);
 	if (ret == X86EMUL_PROPAGATE_FAULT) {
 		/* FIXME: need to provide precise fault address */
-		kvm_inject_page_fault(ctxt->vcpu, new_tss_base, err);
+		kvm_propagate_fault(ctxt->vcpu);
 		return ret;
 	}
 
@@ -2196,7 +2196,7 @@ static int task_switch_16(struct x86_emulate_ctxt *ctxt,
 				     ctxt->vcpu, &err);
 		if (ret == X86EMUL_PROPAGATE_FAULT) {
 			/* FIXME: need to provide precise fault address */
-			kvm_inject_page_fault(ctxt->vcpu, new_tss_base, err);
+			kvm_propagate_fault(ctxt->vcpu);
 			return ret;
 		}
 	}
@@ -2304,7 +2304,7 @@ static int task_switch_32(struct x86_emulate_ctxt *ctxt,
 			    &err);
 	if (ret == X86EMUL_PROPAGATE_FAULT) {
 		/* FIXME: need to provide precise fault address */
-		kvm_inject_page_fault(ctxt->vcpu, old_tss_base, err);
+		kvm_propagate_fault(ctxt->vcpu);
 		return ret;
 	}
 
@@ -2314,7 +2314,7 @@ static int task_switch_32(struct x86_emulate_ctxt *ctxt,
 			    &err);
 	if (ret == X86EMUL_PROPAGATE_FAULT) {
 		/* FIXME: need to provide precise fault address */
-		kvm_inject_page_fault(ctxt->vcpu, old_tss_base, err);
+		kvm_propagate_fault(ctxt->vcpu);
 		return ret;
 	}
 
@@ -2322,7 +2322,7 @@ static int task_switch_32(struct x86_emulate_ctxt *ctxt,
 			    &err);
 	if (ret == X86EMUL_PROPAGATE_FAULT) {
 		/* FIXME: need to provide precise fault address */
-		kvm_inject_page_fault(ctxt->vcpu, new_tss_base, err);
+		kvm_propagate_fault(ctxt->vcpu);
 		return ret;
 	}
 
@@ -2335,7 +2335,7 @@ static int task_switch_32(struct x86_emulate_ctxt *ctxt,
 				     ctxt->vcpu, &err);
 		if (ret == X86EMUL_PROPAGATE_FAULT) {
 			/* FIXME: need to provide precise fault address */
-			kvm_inject_page_fault(ctxt->vcpu, new_tss_base, err);
+			kvm_propagate_fault(ctxt->vcpu);
 			return ret;
 		}
 	}
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 64f619b..b42b27e 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -47,6 +47,7 @@
 #define PFERR_USER_MASK (1U << 2)
 #define PFERR_RSVD_MASK (1U << 3)
 #define PFERR_FETCH_MASK (1U << 4)
+#define PFERR_NESTED_MASK (1U << 31)
 
 int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64 sptes[4]);
 int kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *context);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 317ad26..4d3a698 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -324,6 +324,22 @@ void kvm_inject_page_fault(struct kvm_vcpu *vcpu, unsigned long addr,
 	kvm_queue_exception_e(vcpu, PF_VECTOR, error_code);
 }
 
+void kvm_propagate_fault(struct kvm_vcpu *vcpu)
+{
+	unsigned long address;
+	u32 nested, error;
+
+	address = vcpu->arch.fault_address;
+	error   = vcpu->arch.fault_error_code;
+	nested  = error &  PFERR_NESTED_MASK;
+	error   = error & ~PFERR_NESTED_MASK;
+
+	if (vcpu->arch.mmu.nested && !nested)
+		vcpu->arch.nested_mmu.inject_page_fault(vcpu, address, error);
+	else
+		vcpu->arch.mmu.inject_page_fault(vcpu, address, error);
+}
+
 void kvm_inject_nmi(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.nmi_pending = 1;
@@ -3338,7 +3354,7 @@ static int emulator_read_emulated(unsigned long addr,
 	gpa = kvm_mmu_gva_to_gpa_read(vcpu, addr, &error_code);
 
 	if (gpa == UNMAPPED_GVA) {
-		kvm_inject_page_fault(vcpu, addr, error_code);
+		kvm_propagate_fault(vcpu);
 		return X86EMUL_PROPAGATE_FAULT;
 	}
@@ -3392,7 +3408,7 @@ static int emulator_write_emulated_onepage(unsigned long addr,
 	gpa = kvm_mmu_gva_to_gpa_write(vcpu, addr, &error_code);
 
 	if (gpa == UNMAPPED_GVA) {
-		kvm_inject_page_fault(vcpu, addr, error_code);
+		kvm_propagate_fault(vcpu);
 		return X86EMUL_PROPAGATE_FAULT;
 	}
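
[Editor's note, not part of the patch: below is a minimal, self-contained C sketch of the routing decision kvm_propagate_fault() implements. All toy_* names, inject_l1/inject_l2 and main() are invented stand-ins for the real KVM structures, and it assumes (the setter is not shown in this patch) that a fault raised while walking L1's nested page tables arrives with PFERR_NESTED_MASK set in the error code, while an ordinary guest #PF does not.]

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define PFERR_NESTED_MASK (1U << 31)

struct toy_vcpu;

struct toy_mmu {
	bool nested;	/* true while the vcpu runs an L2 guest */
	void (*inject_page_fault)(struct toy_vcpu *vcpu,
				  unsigned long addr, uint32_t error);
};

struct toy_vcpu {
	unsigned long fault_address;	/* stand-in for arch.fault_address */
	uint32_t fault_error_code;	/* stand-in for arch.fault_error_code */
	struct toy_mmu mmu;		/* stand-in for arch.mmu */
	struct toy_mmu nested_mmu;	/* stand-in for arch.nested_mmu */
};

/* Fault in L1's nested page tables: surfaces to L1 as a nested page fault. */
static void inject_l1(struct toy_vcpu *vcpu, unsigned long addr, uint32_t error)
{
	(void)vcpu;
	printf("to L1 (nested page fault): addr=%#lx error=%#x\n",
	       addr, (unsigned)error);
}

/* Ordinary #PF belonging to the L2 guest: injected back into L2. */
static void inject_l2(struct toy_vcpu *vcpu, unsigned long addr, uint32_t error)
{
	(void)vcpu;
	printf("to L2 (#PF): addr=%#lx error=%#x\n", addr, (unsigned)error);
}

/* Mirrors the decision tree of kvm_propagate_fault() above. */
static void toy_propagate_fault(struct toy_vcpu *vcpu)
{
	unsigned long address = vcpu->fault_address;
	uint32_t error  = vcpu->fault_error_code;
	uint32_t nested = error & PFERR_NESTED_MASK;

	/* The nested bit is software-only; strip it before delivery. */
	error &= ~PFERR_NESTED_MASK;

	if (vcpu->mmu.nested && !nested)
		vcpu->nested_mmu.inject_page_fault(vcpu, address, error);
	else
		vcpu->mmu.inject_page_fault(vcpu, address, error);
}

int main(void)
{
	struct toy_vcpu vcpu = {
		.mmu        = { .nested = true, .inject_page_fault = inject_l1 },
		.nested_mmu = { .inject_page_fault = inject_l2 },
	};

	/* Plain write fault hit by L2: routed into L2. */
	vcpu.fault_address = 0x1000;
	vcpu.fault_error_code = 0x2;
	toy_propagate_fault(&vcpu);

	/* Same fault flagged as nested: routed to L1, flag bit stripped. */
	vcpu.fault_address = 0x2000;
	vcpu.fault_error_code = 0x2 | PFERR_NESTED_MASK;
	toy_propagate_fault(&vcpu);

	return 0;
}

[Before this patch, all of the converted call sites injected a plain #PF via kvm_inject_page_fault(), which with nested paging could deliver the fault at the wrong level. The sketch shows only the routing; which handler each function pointer resolves to in real KVM depends on MMU setup done elsewhere in the series.]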