[RFC,17/21] KVM: VMX: Add handler for ENCLS[EINIT] to support SGX Launch Control

Message ID 20190727055214.9282-18-sean.j.christopherson@intel.com
State: New
Series: x86/sgx: KVM: Add SGX virtualization

Commit Message

Sean Christopherson July 27, 2019, 5:52 a.m. UTC
SGX Launch Control (LC) modifies the behavior of ENCLS[EINIT] to query
a set of user-controllable MSRs (Launch Enclave, a.k.a. LE, Hash MSRs)
when verifying the key used to sign an enclave.  On CPUs without LC
support, the public key hash of allowed LEs is hardwired into the CPU to
an Intel controlled key (the Intel key is also the reset value of the LE
hash MSRs).

When LC is enabled in the host, EINIT must be intercepted and executed
in the host using the guest's LE hash MSR values, even if the guest's
values are fixed to the hardware defaults.  The MSRs are not switched
on VM-Enter/VM-Exit as writing them is extraordinarily expensive, e.g.
each WRMSR to a hash MSR is ~4x slower than a typical WRMSR and on par
with a full VM-Enter -> VM-Exit transition.  Furthermore, the MSRs
aren't allowed in the hardware-supported VM-Enter/VM-Exit MSR load
lists, i.e. they would need to be manually read and written around
every transition.  On the other hand, EINIT takes tens of thousands of
cycles to execute (it's so slow that it's interruptible), i.e. the ~1k
cycles of overhead to trap-and-execute EINIT is unlikely to be noticed
by the guest, let alone impact the overall performance of SGX.

Actual usage of the handler will be added in a future patch, i.e. when
SGX virtualization is fully enabled.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/sgx.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)


diff --git a/arch/x86/kvm/vmx/sgx.c b/arch/x86/kvm/vmx/sgx.c
index 5b08e7dcc3a3..2bcfa3b6c75e 100644
--- a/arch/x86/kvm/vmx/sgx.c
+++ b/arch/x86/kvm/vmx/sgx.c
@@ -221,3 +221,27 @@ int handle_encls_ecreate(struct kvm_vcpu *vcpu)
 
 	return sgx_encls_postamble(vcpu, ret, trapnr, secs_gva);
 }
+
+int handle_encls_einit(struct kvm_vcpu *vcpu)
+{
+	unsigned long sig_hva, secs_hva, token_hva;
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	gva_t sig_gva, secs_gva, token_gva;
+	int ret, trapnr;
+
+	if (sgx_get_encls_gva(vcpu, kvm_rbx_read(vcpu), 1808, 4096, &sig_gva) ||
+	    sgx_get_encls_gva(vcpu, kvm_rcx_read(vcpu), 4096, 4096, &secs_gva) ||
+	    sgx_get_encls_gva(vcpu, kvm_rdx_read(vcpu), 304, 512, &token_gva))
+		return 1;
+
+	if (sgx_gva_to_hva(vcpu, sig_gva, false, &sig_hva) ||
+	    sgx_gva_to_hva(vcpu, secs_gva, true, &secs_hva) ||
+	    sgx_gva_to_hva(vcpu, token_gva, false, &token_hva))
+		return 1;
+
+	ret = sgx_einit((void __user *)sig_hva, (void __user *)token_hva,
+			(void __user *)secs_hva, vmx->msr_ia32_sgxlepubkeyhash,
+			&trapnr);
+
+	return sgx_encls_postamble(vcpu, ret, trapnr, secs_hva);
+}