From patchwork Mon Dec 18 19:06:28 2017
X-Patchwork-Submitter: Adalbert Lazăr
X-Patchwork-Id: 10121701
From: Adalbert Lazăr
To: kvm@vger.kernel.org
Cc: linux-mm@kvack.org, Paolo Bonzini, Radim Krčmář, Xiao Guangrong, Mihai Donțu, Adalbert Lazar
Subject: [RFC PATCH v4 04/18] kvm: x86: add kvm_mmu_nested_guest_page_fault() and kvmi_mmu_fault_gla()
Date: Mon, 18 Dec 2017 21:06:28 +0200
Message-Id: <20171218190642.7790-5-alazar@bitdefender.com>
X-Mailer: git-send-email 2.14.2
In-Reply-To: <20171218190642.7790-1-alazar@bitdefender.com>
References: <20171218190642.7790-1-alazar@bitdefender.com>

From: Adalbert Lazar

These are helper functions used by the VM introspection subsystem on the #PF call path.

Signed-off-by: Mihai Donțu
---
 arch/x86/include/asm/kvm_host.h |  7 +++++++
 arch/x86/include/asm/vmx.h      |  2 ++
 arch/x86/kvm/mmu.c              | 10 ++++++++++
 arch/x86/kvm/svm.c              |  8 ++++++++
 arch/x86/kvm/vmx.c              |  9 +++++++++
 5 files changed, 36 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 8842d8e1e4ee..239eb628f8fb 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -692,6 +692,9 @@ struct kvm_vcpu_arch {
 	/* set at EPT violation at this point */
 	unsigned long exit_qualification;
 
+	/* #PF translated error code from EPT/NPT exit reason */
+	u64 error_code;
+
 	/* pv related host specific info */
 	struct {
 		bool pv_unhalted;
@@ -1081,6 +1084,7 @@ struct kvm_x86_ops {
 	int (*enable_smi_window)(struct kvm_vcpu *vcpu);
 	void (*msr_intercept)(struct kvm_vcpu *vcpu, unsigned int msr,
 			      bool enable);
+	u64 (*fault_gla)(struct kvm_vcpu *vcpu);
 };
 
 struct kvm_arch_async_pf {
@@ -1455,4 +1459,7 @@ void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
 void kvm_arch_msr_intercept(struct kvm_vcpu *vcpu, unsigned int msr,
 			    bool enable);
 
+u64 kvm_mmu_fault_gla(struct kvm_vcpu *vcpu);
+bool kvm_mmu_nested_guest_page_fault(struct kvm_vcpu *vcpu);
+
 #endif /* _ASM_X86_KVM_HOST_H */
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 8b6780751132..7036125349dd 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -530,6 +530,7 @@ struct vmx_msr_entry {
 #define EPT_VIOLATION_READABLE_BIT	3
 #define EPT_VIOLATION_WRITABLE_BIT	4
 #define EPT_VIOLATION_EXECUTABLE_BIT	5
+#define EPT_VIOLATION_GLA_VALID_BIT	7
 #define EPT_VIOLATION_GVA_TRANSLATED_BIT 8
 #define EPT_VIOLATION_ACC_READ		(1 << EPT_VIOLATION_ACC_READ_BIT)
 #define EPT_VIOLATION_ACC_WRITE		(1 << EPT_VIOLATION_ACC_WRITE_BIT)
@@ -537,6 +538,7 @@ struct vmx_msr_entry {
 #define EPT_VIOLATION_READABLE		(1 << EPT_VIOLATION_READABLE_BIT)
 #define EPT_VIOLATION_WRITABLE		(1 << EPT_VIOLATION_WRITABLE_BIT)
 #define EPT_VIOLATION_EXECUTABLE	(1 << EPT_VIOLATION_EXECUTABLE_BIT)
+#define EPT_VIOLATION_GLA_VALID		(1 << EPT_VIOLATION_GLA_VALID_BIT)
 #define EPT_VIOLATION_GVA_TRANSLATED	(1 << EPT_VIOLATION_GVA_TRANSLATED_BIT)
 
 /*
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c4deb1f34faa..55fcb0292724 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -5530,3 +5530,13 @@ void kvm_mmu_module_exit(void)
 	unregister_shrinker(&mmu_shrinker);
 	mmu_audit_disable();
 }
+
+u64 kvm_mmu_fault_gla(struct kvm_vcpu *vcpu)
+{
+	return kvm_x86_ops->fault_gla(vcpu);
+}
+
+bool kvm_mmu_nested_guest_page_fault(struct kvm_vcpu *vcpu)
+{
+	return !!(vcpu->arch.error_code & PFERR_GUEST_PAGE_MASK);
+}
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 5f7482851223..f41e4d7008d7 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2145,6 +2145,8 @@ static int pf_interception(struct vcpu_svm *svm)
 	u64 fault_address = svm->vmcb->control.exit_info_2;
 	u64 error_code = svm->vmcb->control.exit_info_1;
 
+	svm->vcpu.arch.error_code = error_code;
+
 	return kvm_handle_page_fault(&svm->vcpu, error_code, fault_address,
 			svm->vmcb->control.insn_bytes,
 			svm->vmcb->control.insn_len);
@@ -5514,6 +5516,11 @@ static void svm_msr_intercept(struct kvm_vcpu *vcpu, unsigned int msr,
 	set_msr_interception(msrpm, msr, enable, enable);
 }
 
+static u64 svm_fault_gla(struct kvm_vcpu *vcpu)
+{
+	return ~0ull;
+}
+
 static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 	.cpu_has_kvm_support = has_svm,
 	.disabled_by_bios = is_disabled,
@@ -5631,6 +5638,7 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 	.enable_smi_window = enable_smi_window,
 
 	.msr_intercept = svm_msr_intercept,
+	.fault_gla = svm_fault_gla
 };
 
 static int __init svm_init(void)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 9c984bbe263e..5487e0242030 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -6541,6 +6541,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
 		      PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
 
 	vcpu->arch.exit_qualification = exit_qualification;
+	vcpu->arch.error_code = error_code;
 
 	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
 }
@@ -12120,6 +12121,13 @@ static void vmx_msr_intercept(struct kvm_vcpu *vcpu, unsigned int msr,
 	}
 }
 
+static u64 vmx_fault_gla(struct kvm_vcpu *vcpu)
+{
+	if (vcpu->arch.exit_qualification & EPT_VIOLATION_GLA_VALID)
+		return vmcs_readl(GUEST_LINEAR_ADDRESS);
+	return ~0ul;
+}
+
 static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 	.cpu_has_kvm_support = cpu_has_kvm_support,
 	.disabled_by_bios = vmx_disabled_by_bios,
@@ -12252,6 +12260,7 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 	.enable_smi_window = enable_smi_window,
 
 	.msr_intercept = vmx_msr_intercept,
+	.fault_gla = vmx_fault_gla
 };
 
 static int __init vmx_init(void)
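
For context, a rough sketch of how an introspection consumer might use the two helpers once the EPT/NPT exit handlers have filled in vcpu->arch.error_code. This is illustrative only and not part of the patch; struct kvmi_pf_event and kvmi_fill_pf_event() are made-up names used here for the example.

/*
 * Illustrative sketch only (not part of this patch).
 */
#include <linux/kvm_host.h>

struct kvmi_pf_event {
	u64 gva;	/* guest linear address, ~0 when not available */
	u64 gpa;	/* faulting guest physical address */
	bool nested;	/* fault taken while walking the guest's page tables */
};

static void kvmi_fill_pf_event(struct kvm_vcpu *vcpu, u64 gpa,
			       struct kvmi_pf_event *ev)
{
	ev->gpa = gpa;
	/* svm_fault_gla() has no linear address to report and returns ~0 */
	ev->gva = kvm_mmu_fault_gla(vcpu);
	/* true when PFERR_GUEST_PAGE_MASK is set in the saved error code */
	ev->nested = kvm_mmu_nested_guest_page_fault(vcpu);
}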