
[RFC,10/21] KVM: x86: Export kvm_mmu_gva_to_gpa_{read,write}() for VMX/SGX

Message ID 20190727055214.9282-11-sean.j.christopherson@intel.com (mailing list archive)
State New, archived
Series x86/sgx: KVM: Add SGX virtualization

Commit Message

Sean Christopherson July 27, 2019, 5:52 a.m. UTC
Support for SGX Launch Control requires KVM to trap and execute
ENCLS[ECREATE] and ENCLS[EINIT] on behalf of the guest, which requires
obtaining the GPA of a Secure Enclave Control Structure (SECS) in order
to get its corresponding HVA.

Because the SECS must reside in the Enclave Page Cache (EPC), copying
the SECS's data to a host-controlled buffer via existing exported
helpers is not a viable option as the EPC is not readable or writable
by the kernel.

Translating GVA->HVA for non-EPC pages is also desirable, as passing
user pointers directly to ECREATE and EINIT avoids having to copy pages
worth of data into the kernel.
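
As a rough sketch of how the exported helpers could be used by the
VMX/SGX code (illustrative only; sgx_gva_to_hva() and its exact
placement are hypothetical, while kvm_mmu_gva_to_gpa_{read,write}(),
kvm_vcpu_gfn_to_hva(), gpa_to_gfn() and kvm_inject_page_fault() are
existing KVM helpers):

#include <linux/kvm_host.h>

/*
 * Illustrative sketch, not part of this patch: translate a guest
 * virtual address to a host virtual address so that ECREATE/EINIT can
 * be executed on the guest's behalf without copying page-sized buffers
 * into the kernel.
 */
static int sgx_gva_to_hva(struct kvm_vcpu *vcpu, gva_t gva, bool write,
			  unsigned long *hva)
{
	struct x86_exception ex;
	gpa_t gpa;

	/* GVA -> GPA via the guest's page tables, honoring guest CPL. */
	if (write)
		gpa = kvm_mmu_gva_to_gpa_write(vcpu, gva, &ex);
	else
		gpa = kvm_mmu_gva_to_gpa_read(vcpu, gva, &ex);

	if (gpa == UNMAPPED_GVA) {
		/* Reflect the page fault back into the guest. */
		kvm_inject_page_fault(vcpu, &ex);
		return -EFAULT;
	}

	/* GPA -> HVA via the memslots; no data is copied. */
	*hva = kvm_vcpu_gfn_to_hva(vcpu, gpa_to_gfn(gpa));
	if (kvm_is_error_hva(*hva))
		return -EFAULT;

	/* Restore the offset within the page. */
	*hva |= gpa & ~PAGE_MASK;
	return 0;
}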

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/x86.c | 2 ++
 1 file changed, 2 insertions(+)

Patch

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index afcc01a59421..2b64bb854571 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5089,6 +5089,7 @@  gpa_t kvm_mmu_gva_to_gpa_read(struct kvm_vcpu *vcpu, gva_t gva,
 	u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;
 	return vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access, exception);
 }
+EXPORT_SYMBOL_GPL(kvm_mmu_gva_to_gpa_read);
 
  gpa_t kvm_mmu_gva_to_gpa_fetch(struct kvm_vcpu *vcpu, gva_t gva,
 				struct x86_exception *exception)
@@ -5105,6 +5106,7 @@  gpa_t kvm_mmu_gva_to_gpa_write(struct kvm_vcpu *vcpu, gva_t gva,
 	access |= PFERR_WRITE_MASK;
 	return vcpu->arch.walk_mmu->gva_to_gpa(vcpu, gva, access, exception);
 }
+EXPORT_SYMBOL_GPL(kvm_mmu_gva_to_gpa_write);
 
 /* uses this to access any guest's mapped memory without checking CPL */
 gpa_t kvm_mmu_gva_to_gpa_system(struct kvm_vcpu *vcpu, gva_t gva,