[RFC,00/23] KVM SGX virtualization support

Message ID: cover.1609890536.git.kai.huang@intel.com

Huang, Kai Jan. 6, 2021, 1:55 a.m. UTC
--- Disclaimer ---

These patches were originally written by Sean Christopherson while at Intel.
Now that Sean has left Intel, I (Kai) have taken over getting them upstream.
This series needs more review before it can be merged.  It is being posted
publicly and under RFC so Sean and others can review it. Maintainers are safe
ignoring it for now.

------------------

Hi all,

This series adds KVM SGX virtualization support. The first 12 patches, those
prefixed with x86/sgx or x86/cpu, are the changes to x86 and the SGX
core/driver needed to support KVM SGX virtualization, while the rest are
patches to the KVM subsystem.

Please help review this series. I'd also like to hear what the proper way to
merge this series is, since it contains changes to both the x86/SGX and KVM
subsystems. Any feedback is highly appreciated. And please let me know if I
forgot to CC anyone, or if anyone wants to be removed from CC. Thanks in
advance!

This series is based on the latest tip tree's x86/sgx branch. You can also get
the code from the tip branch of the kvm-sgx repo on github:

        https://github.com/intel/kvm-sgx.git tip

It also requires Qemu changes to create a VM with SGX support. You can find
the Qemu repo here:

	https://github.com/intel/qemu-sgx.git next

Please refer to the README.md of the qemu-sgx repo above for details on how
to create a guest with SGX support. In the meantime, for quick reference you
can use the command below to create an SGX guest:

	#qemu-system-x86_64 -smp 4 -m 2G -drive file=<your_vm_image>,if=virtio \
		-cpu host,+sgx_provisionkey \
		-sgx-epc id=epc1,memdev=mem1 \
		-object memory-backend-epc,id=mem1,size=64M,prealloc

Please note that the SGX-relevant part is:

		-cpu host,+sgx_provisionkey \
		-sgx-epc id=epc1,memdev=mem1 \
		-object memory-backend-epc,id=mem1,size=64M,prealloc

You can change the other parameters of your qemu command based on your needs.

=========
KVM SGX Virtualization Overview

- Virtual EPC

"Virtual EPC" is the EPC section exposed by KVM to guest so SGX software in
guest can discover it and use it to create SGX enclaves. KVM exposes SGX to 
guest via CPUID, and exposes one or more "virtual EPC" sections for guest.
The size of "virtual EPC" is passed as Qemu parameter when creating the
guest, and the base address is calcualted internally according to guest's
configuration.

To support virtual EPC, a new misc device, /dev/sgx_virt_epc, is added to the
SGX core/driver to allow userspace (Qemu) to allocate "raw" EPC and use it as
"virtual EPC" for the guest. Unlike EPC allocated by the host SGX driver,
virtual EPC allocated via /dev/sgx_virt_epc has no enclave associated with
it, and how the virtual EPC is used is completely controlled by the guest's
SGX software.
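
As an illustration, below is a minimal sketch of how userspace might allocate
virtual EPC through this device, assuming the mmap()-based interface
described above (the helper name is made up for the example, and error
handling is trimmed):

        #include <fcntl.h>
        #include <stddef.h>
        #include <sys/mman.h>

        /* Back one virtual EPC section of the guest with "raw" EPC. */
        static void *alloc_virt_epc(size_t size)
        {
                int fd = open("/dev/sgx_virt_epc", O_RDWR);
                void *p;

                if (fd < 0)
                        return NULL;
                /* No enclave is associated with this mapping; guest SGX
                 * software fully controls how the EPC pages are used. */
                p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
                return p == MAP_FAILED ? NULL : p;
        }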

Implement the "raw" EPC allocation in the x86 core-SGX subsystem via
/dev/sgx_virt_epc rather than in KVM. Doing so has two major advantages:

  - Does not require changes to KVM's uAPI, e.g. EPC gets handled as
    just another memory backend for guests.

  - EPC management is wholly contained in the SGX subsystem, e.g. SGX
    does not have to export any symbols, changes to reclaim flows don't
    need to be routed through KVM, SGX's dirty laundry doesn't have to
    get aired out for the world to see, and so on and so forth.

The virtual EPC allocated to guests is currently not reclaimable, because
reclaiming EPC from KVM guests is not currently supported. Due to the
complications of handling reclaim conflicts between guest and host, KVM EPC
oversubscription, which would allow the total virtual EPC size to be greater
than physical EPC by reclaiming guests' EPC, is significantly more complex
than basic support for SGX virtualization.

- Support SGX virtualization without SGX Launch Control unlocked mode

Although the host SGX driver requires SGX Launch Control unlocked mode to
work, SGX virtualization doesn't, since how enclaves are created is
completely controlled by the guest's SGX software, which is not necessarily
Linux. Therefore, this series allows KVM to expose SGX to guests even if SGX
Launch Control is in locked mode, or is not present at all. The reason is
that the goal of SGX virtualization, and of virtualization in general, is to
expose a hardware feature to the guest without making assumptions about how
the guest will use it. Therefore, KVM should support SGX guests as long as
the hardware is able to, to have a chance of supporting more potential use
cases in cloud environments.
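
For background, the Launch Control state consists of the four
IA32_SGXLEPUBKEYHASHn MSRs (0x8C-0x8F), which hold the SHA256 hash of the
public key allowed to sign launch enclaves. When those MSRs are unlocked,
software can rewrite them. A hypothetical sketch of an update helper follows
(the function name is illustrative, not the series' actual API; wrmsrl() is
the kernel's MSR write primitive):

        #include <linux/types.h>
        #include <asm/msr.h>

        /* IA32_SGXLEPUBKEYHASH0..3 occupy MSRs 0x8C-0x8F. */
        #define MSR_IA32_SGXLEPUBKEYHASH0       0x0000008c

        /* Program the launch enclave public key hash. Only legal when
         * the feature control MSR leaves SGX LC unlocked. */
        static void sgx_write_lepubkeyhash(const u64 hash[4])
        {
                int i;

                for (i = 0; i < 4; i++)
                        wrmsrl(MSR_IA32_SGXLEPUBKEYHASH0 + i, hash[i]);
        }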

- Support exposing SGX2

For the same reason as above, SGX2 feature detection is added to the core SGX
code to allow KVM to expose SGX2 to guests even though the SGX driver doesn't
currently support SGX2, because SGX2 can work just fine in a guest without
any interaction with the host SGX driver.
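
For reference, the SGX1 and SGX2 sub-features are enumerated in CPUID leaf
0x12. A small userspace detection sketch, using the architectural bit
positions rather than the kernel's internal feature helpers:

        #include <cpuid.h>
        #include <stdbool.h>

        /* CPUID.(EAX=0x12,ECX=0):EAX bit 0 enumerates SGX1, bit 1
         * enumerates SGX2 (EAUG, EMODPR, EMODT, etc.). */
        static bool cpu_has_sgx2(void)
        {
                unsigned int eax, ebx, ecx, edx;

                if (!__get_cpuid_count(0x12, 0, &eax, &ebx, &ecx, &edx))
                        return false;
                return eax & (1u << 1);
        }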

- Restrict SGX guest access to the provisioning key

To be able to fully use SGX, a guest needs to be able to create a
provisioning enclave. However, the provisioning key is sensitive, and access
to it is restricted by /dev/sgx_provision in the host SGX driver. KVM SGX
virtualization follows the same rule: a new KVM_CAP_SGX_ATTRIBUTE is added to
the KVM uAPI, and a guest can access the provisioning bit only if the
userspace hypervisor (Qemu) passes the file descriptor of /dev/sgx_provision
to that CAP when creating the guest. This is enforced by making KVM trap the
ECREATE instruction from the guest and check the provisioning bit in
ECREATE's attributes.
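
A sketch of how the userspace hypervisor might grant its VM access to the
provisioning attribute, assuming the KVM_CAP_SGX_ATTRIBUTE interface proposed
in this series (the helper name is made up; error handling is minimal):

        #include <fcntl.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <linux/kvm.h>

        /* Prove access to /dev/sgx_provision, mirroring the host rule,
         * so the guest may set ATTRIBUTES.PROVISIONKEY in ECREATE. */
        static int grant_sgx_provisioning(int vm_fd)
        {
                struct kvm_enable_cap cap;
                int fd = open("/dev/sgx_provision", O_RDWR);

                if (fd < 0)
                        return fd;
                memset(&cap, 0, sizeof(cap));
                cap.cap = KVM_CAP_SGX_ATTRIBUTE;
                cap.args[0] = fd;
                return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
        }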


Kai Huang (1):
  x86/sgx: Add helper to update SGX_LEPUBKEYHASHn MSRs

Sean Christopherson (22):
  x86/sgx: Split out adding EPC page to free list to separate helper
  x86/sgx: Add enum for SGX_CHILD_PRESENT error code
  x86/sgx: Introduce virtual EPC for use by KVM guests
  x86/cpufeatures: Add SGX1 and SGX2 sub-features
  x86/cpu/intel: Allow SGX virtualization without Launch Control support
  x86/sgx: Expose SGX architectural definitions to the kernel
  x86/sgx: Move ENCLS leaf definitions to sgx_arch.h
  x86/sgx: Add SGX2 ENCLS leaf definitions (EAUG, EMODPR and EMODT)
  x86/sgx: Add encls_faulted() helper
  x86/sgx: Add helpers to expose ECREATE and EINIT to KVM
  x86/sgx: Move provisioning device creation out of SGX driver
  KVM: VMX: Convert vcpu_vmx.exit_reason to a union
  KVM: x86: Export kvm_mmu_gva_to_gpa_{read,write}() for SGX (VMX)
  KVM: x86: Define new #PF SGX error code bit
  KVM: x86: Add SGX feature leaf to reverse CPUID lookup
  KVM: VMX: Add basic handling of VM-Exit from SGX enclave
  KVM: VMX: Frame in ENCLS handler for SGX virtualization
  KVM: VMX: Add SGX ENCLS[ECREATE] handler to enforce CPUID restrictions
  KVM: VMX: Add emulation of SGX Launch Control LE hash MSRs
  KVM: VMX: Add ENCLS[EINIT] handler to support SGX Launch Control (LC)
  KVM: VMX: Enable SGX virtualization for SGX1, SGX2 and LC
  KVM: x86: Add capability to grant VM access to privileged SGX
    attribute

 Documentation/virt/kvm/api.rst                |  23 +
 arch/x86/Kconfig                              |  12 +
 arch/x86/include/asm/cpufeature.h             |   5 +-
 arch/x86/include/asm/cpufeatures.h            |   6 +-
 arch/x86/include/asm/disabled-features.h      |   7 +-
 arch/x86/include/asm/kvm_host.h               |   5 +
 arch/x86/include/asm/required-features.h      |   2 +-
 arch/x86/include/asm/sgx.h                    |  19 +
 .../cpu/sgx/arch.h => include/asm/sgx_arch.h} |  20 +
 arch/x86/include/asm/vmx.h                    |   1 +
 arch/x86/include/uapi/asm/vmx.h               |   1 +
 arch/x86/kernel/cpu/common.c                  |   4 +
 arch/x86/kernel/cpu/feat_ctl.c                |  50 +-
 arch/x86/kernel/cpu/sgx/Makefile              |   1 +
 arch/x86/kernel/cpu/sgx/driver.c              |  17 -
 arch/x86/kernel/cpu/sgx/encl.c                |   2 +-
 arch/x86/kernel/cpu/sgx/encls.h               |  29 +-
 arch/x86/kernel/cpu/sgx/ioctl.c               |  23 +-
 arch/x86/kernel/cpu/sgx/main.c                |  79 ++-
 arch/x86/kernel/cpu/sgx/sgx.h                 |   5 +-
 arch/x86/kernel/cpu/sgx/virt.c                | 318 ++++++++++++
 arch/x86/kernel/cpu/sgx/virt.h                |  14 +
 arch/x86/kvm/Makefile                         |   2 +
 arch/x86/kvm/cpuid.c                          |  58 ++-
 arch/x86/kvm/cpuid.h                          |   1 +
 arch/x86/kvm/vmx/nested.c                     |  70 ++-
 arch/x86/kvm/vmx/nested.h                     |   5 +
 arch/x86/kvm/vmx/sgx.c                        | 462 ++++++++++++++++++
 arch/x86/kvm/vmx/sgx.h                        |  34 ++
 arch/x86/kvm/vmx/vmcs12.c                     |   1 +
 arch/x86/kvm/vmx/vmcs12.h                     |   4 +-
 arch/x86/kvm/vmx/vmx.c                        | 171 +++++--
 arch/x86/kvm/vmx/vmx.h                        |  27 +-
 arch/x86/kvm/x86.c                            |  24 +
 include/uapi/linux/kvm.h                      |   1 +
 tools/testing/selftests/sgx/defines.h         |   2 +-
 36 files changed, 1366 insertions(+), 139 deletions(-)
 create mode 100644 arch/x86/include/asm/sgx.h
 rename arch/x86/{kernel/cpu/sgx/arch.h => include/asm/sgx_arch.h} (96%)
 create mode 100644 arch/x86/kernel/cpu/sgx/virt.c
 create mode 100644 arch/x86/kernel/cpu/sgx/virt.h
 create mode 100644 arch/x86/kvm/vmx/sgx.c
 create mode 100644 arch/x86/kvm/vmx/sgx.h

Comments

Huang, Kai Jan. 6, 2021, 2:22 a.m. UTC | #1
Sorry, I made a mistake when copying and pasting Jim's email address :(
Removing the wrong email address (mattson@google.com) and adding the correct
one (jmattson@gmail.com). I really apologize for the noise.

Thanks,
-Kai

On Wed, 2021-01-06 at 14:55 +1300, Kai Huang wrote:
[...]
Dave Hansen Jan. 6, 2021, 5:07 p.m. UTC | #2
On 1/5/21 5:55 PM, Kai Huang wrote:
> - Virtual EPC
> 
> "Virtual EPC" is the EPC section exposed by KVM to guest so SGX software in
> guest can discover it and use it to create SGX enclaves. KVM exposes SGX to 
> guest via CPUID, and exposes one or more "virtual EPC" sections for guest.
> The size of "virtual EPC" is passed as Qemu parameter when creating the
> guest, and the base address is calcualted internally according to guest's

				^ calculated

> configuration.

This is not a great first paragraph to introduce me to this feature.

Please remind us what EPC *is*, then you can go and talk about why we
have to virtualize it, and how "virtual EPC" is different from normal
EPC.  For instance:

SGX enclave memory is special and is reserved specifically for enclave
use.  In bare-metal SGX enclaves, the kernel allocates enclave pages,
copies data into the pages with privileged instructions, then allows the
enclave to start.  In this scenario, only initialized pages already
assigned to an enclave are mapped to userspace.

In virtualized environments, the hypervisor still needs to do the
physical enclave page allocation.  The guest kernel is responsible for
the data copying (among other things).  This means that the job of
starting an enclave is now split between hypervisor and guest.

This series introduces a new misc device: /dev/sgx_virt_epc.  This
device allows the host to map *uninitialized* enclave memory into
userspace, which can then be passed into a guest.

While it might be *possible* to start a host-side enclave with
/dev/sgx_enclave and pass its memory into a guest, it would be wasteful
and convoluted.

> core/driver to allow userspace (Qemu) to allocate "raw" EPC, and use it as
> "virtual EPC" for guest. Obviously, unlike EPC allocated for host SGX driver,
> virtual EPC allocated via /dev/sgx_virt_epc doesn't have enclave associated,
> and how virtual EPC is used by guest is compeletely controlled by guest's SGX

					   ^ completely

Please run a spell checker on this thing.

> software.
> 
> Implement the "raw" EPC allocation in the x86 core-SGX subsystem via
> /dev/sgx_virt_epc rather than in KVM. Doing so has two major advantages:
> 
>   - Does not require changes to KVM's uAPI, e.g. EPC gets handled as
>     just another memory backend for guests.
> 
>   - EPC management is wholly contained in the SGX subsystem, e.g. SGX
>     does not have to export any symbols, changes to reclaim flows don't
>     need to be routed through KVM, SGX's dirty laundry doesn't have to
>     get aired out for the world to see, and so on and so forth.
> 
> The virtual EPC allocated to guests is currently not reclaimable, due to
> reclaiming EPC from KVM guests is not currently supported. Due to the
> complications of handling reclaim conflicts between guest and host, KVM
> EPC oversubscription, which allows total virtual EPC size greater than
> physical EPC by being able to reclaiming guests' EPC, is significantly more
> complex than basic support for SGX virtualization.

It would also help here to remind the reader that enclave pages have a
special reclaim mechanism separate from normal page reclaim, and that
mechanism is disabled for these pages.

Does the *ABI* here preclude doing oversubscription in the future?

> - Support SGX virtualization without SGX Launch Control unlocked mode
> 
> Although SGX driver requires SGX Launch Control unlocked mode to work, SGX

Although the bare-metal SGX driver requires...

Also, didn't we call this "Flexible Launch Control"?

> virtualization doesn't, since how enclave is created is completely controlled
> by guest SGX software, which is not necessarily linux. Therefore, this series
> allows KVM to expose SGX to guest even SGX Launch Control is in locked mode,

... "expose SGX to guests even if" ...

> or is not present at all. The reason is the goal of SGX virtualization, or
> virtualization in general, is to expose hardware feature to guest, but not to
> make assumption how guest will use it. Therefore, KVM should support SGX guest
> as long as hardware is able to, to have chance to support more potential use
> cases in cloud environment.

This is kinda long-winded and misses a lot of important context.  How about:

SGX hardware supports two "launch control" modes to limit which enclaves
can run.  In the "locked" mode, the hardware prevents enclaves from
running unless they are blessed by a third party.  In the unlocked mode,
the kernel is in full control of which enclaves can run.  The bare-metal
SGX code refuses to launch enclaves unless it is in the unlocked mode.

This sgx_virt_epc driver does not have such a restriction.  This allows
guests which are OK with the locked mode to use SGX, even if the host
kernel refuses to.

> - Support exposing SGX2
> 
> Due to the same reason above, SGX2 feature detection is added to core SGX code
> to allow KVM to expose SGX2 to guest, even currently SGX driver doesn't support
> SGX2, because SGX2 can work just fine in guest w/o any interaction to host SGX
> driver.
> 
> - Restricit SGX guest access to provisioning key
> 
> To grant guest being able to fully use SGX, guest needs to be able to create
> provisioning enclave.

"enclave" or "enclaves"?

> However provisioning key is sensitive and is restricted by

	^ the

> /dev/sgx_provision in host SGX driver, therefore KVM SGX virtualization follows
> the same role: a new KVM_CAP_SGX_ATTRIBUTE is added to KVM uAPI, and only file
> descriptor of /dev/sgx_provision is passed to that CAP by usersppace hypervisor
> (Qemu) when creating the guest, it can access provisioning bit. This is done by
> making KVM trape ECREATE instruction from guest, and check the provisioning bit

		^ trap

> in ECREATE's attribute.

The grammar in that paragraph is really off to me.  Can you give it
another go?
Huang, Kai Jan. 7, 2021, 12:34 a.m. UTC | #3
On Wed, 6 Jan 2021 09:07:13 -0800 Dave Hansen wrote:
> On 1/5/21 5:55 PM, Kai Huang wrote:
> > - Virtual EPC
> > 
> > "Virtual EPC" is the EPC section exposed by KVM to guest so SGX software in
> > guest can discover it and use it to create SGX enclaves. KVM exposes SGX to 
> > guest via CPUID, and exposes one or more "virtual EPC" sections for guest.
> > The size of "virtual EPC" is passed as Qemu parameter when creating the
> > guest, and the base address is calcualted internally according to guest's
> 
> 				^ calculated
> 
> > configuration.
> 
> This is not a great first paragraph to introduce me to this feature.
> 
> Please remind us what EPC *is*, then you can go and talk about why we
> have to virtualize it, and how "virtual EPC" is different from normal
> EPC.  For instance:
> 
> SGX enclave memory is special and is reserved specifically for enclave
> use.  In bare-metal SGX enclaves, the kernel allocates enclave pages,
> copies data into the pages with privileged instructions, then allows the
> enclave to start.  In this scenario, only initialized pages already
> assigned to an enclave are mapped to userspace.
> 
> In virtualized environments, the hypervisor still needs to do the
> physical enclave page allocation.  The guest kernel is responsible for
> the data copying (among other things).  This means that the job of
> starting an enclave is now split between hypervisor and guest.
> 
> This series introduces a new misc device: /dev/sgx_virt_epc.  This
> device allows the host to map *uninitialized* enclave memory into
> userspace, which can then be passed into a guest.
> 
> While it might be *possible* to start a host-side enclave with
> /dev/sgx_enclave and pass its memory into a guest, it would be wasteful
> and convoluted.

Thanks. I'll add this.

> 
> > core/driver to allow userspace (Qemu) to allocate "raw" EPC, and use it as
> > "virtual EPC" for guest. Obviously, unlike EPC allocated for host SGX driver,
> > virtual EPC allocated via /dev/sgx_virt_epc doesn't have enclave associated,
> > and how virtual EPC is used by guest is compeletely controlled by guest's SGX
> 
> 					   ^ completely
> 
> Please run a spell checker on this thing.

Yeah, will do. Thanks for the good suggestion.

> 
> > software.
> > 
> > Implement the "raw" EPC allocation in the x86 core-SGX subsystem via
> > /dev/sgx_virt_epc rather than in KVM. Doing so has two major advantages:
> > 
> >   - Does not require changes to KVM's uAPI, e.g. EPC gets handled as
> >     just another memory backend for guests.
> > 
> >   - EPC management is wholly contained in the SGX subsystem, e.g. SGX
> >     does not have to export any symbols, changes to reclaim flows don't
> >     need to be routed through KVM, SGX's dirty laundry doesn't have to
> >     get aired out for the world to see, and so on and so forth.
> > 
> > The virtual EPC allocated to guests is currently not reclaimable, due to
> > reclaiming EPC from KVM guests is not currently supported. Due to the
> > complications of handling reclaim conflicts between guest and host, KVM
> > EPC oversubscription, which allows total virtual EPC size greater than
> > physical EPC by being able to reclaiming guests' EPC, is significantly more
> > complex than basic support for SGX virtualization.
> 
> It would also help here to remind the reader that enclave pages have a
> special reclaim mechanism separtae from normal page reclaim, and that
> mechanism is disabled for these pages.

OK.

> 
> Does the *ABI* here preclude doing oversubscription in the future?

I am sorry, what *ABI* do you mean?

> 
> > - Support SGX virtualization without SGX Launch Control unlocked mode
> > 
> > Although SGX driver requires SGX Launch Control unlocked mode to work, SGX
> 
> Although the bare-metal SGX driver requires...

OK.

> 
> Also, didn't we call this "Flexible Launch Control"?

I am actually a little bit confused about all those terms here. I don't
think there's such a thing as "Flexible Launch Control" from the spec's
perspective, but I think everyone knows what it means. I am just not sure
whether it is commonly used by the community.

I think using FLC is fine if we only want to mention unlocked mode. But if
you want to mention both, IMHO it would be better to specifically use LC
locked mode and unlocked mode, since technically there's a third case where
LC is not present at all.

> 
> > virtualization doesn't, since how enclave is created is completely controlled
> > by guest SGX software, which is not necessarily linux. Therefore, this series
> > allows KVM to expose SGX to guest even SGX Launch Control is in locked mode,
> 
> ... "expose SGX to guests even if" ...

Thanks.

> 
> > or is not present at all. The reason is the goal of SGX virtualization, or
> > virtualization in general, is to expose hardware feature to guest, but not to
> > make assumption how guest will use it. Therefore, KVM should support SGX guest
> > as long as hardware is able to, to have chance to support more potential use
> > cases in cloud environment.
> 
> This is kinda long-winded and misses a lot of important context.  How about:
> 
> SGX hardware supports two "launch control" modes to limit which enclaves
> can run.  In the "locked" mode, the hardware prevents enclaves from
> running unless they are blessed by a third party. 

or "by Intel".

 In the unlocked mode,
> the kernel is in full control of which enclaves can run.  The bare-metal
> SGX code refuses to launch enclaves unless it is in the unlocked mode.
> 
> This sgx_virt_epc driver does not have such a restriction.  This allows
> guests which are OK with the locked mode to use SGX, even if the host
> kernel refuses to.

Indeed better. Thanks a lot.

> 
> > - Support exposing SGX2
> > 
> > Due to the same reason above, SGX2 feature detection is added to core SGX code
> > to allow KVM to expose SGX2 to guest, even currently SGX driver doesn't support
> > SGX2, because SGX2 can work just fine in guest w/o any interaction to host SGX
> > driver.
> > 
> > - Restricit SGX guest access to provisioning key
> > 
> > To grant guest being able to fully use SGX, guest needs to be able to create
> > provisioning enclave.
> 
> "enclave" or "enclaves"?

I think it should be "enclave"; inside one VM, there should be only one
provisioning enclave.

> 
> > However provisioning key is sensitive and is restricted by
> 
> 	^ the

Thanks.

> 
> > /dev/sgx_provision in host SGX driver, therefore KVM SGX virtualization follows
> > the same role: a new KVM_CAP_SGX_ATTRIBUTE is added to KVM uAPI, and only file
> > descriptor of /dev/sgx_provision is passed to that CAP by usersppace hypervisor
> > (Qemu) when creating the guest, it can access provisioning bit. This is done by
> > making KVM trape ECREATE instruction from guest, and check the provisioning bit
> 
> 		^ trap
> 
> > in ECREATE's attribute.
> 
> The grammar in that paragraph is really off to me.  Can you give it
> another go?

I'll refine it. Thanks a lot for input.
Dave Hansen Jan. 7, 2021, 12:48 a.m. UTC | #4
On 1/6/21 4:34 PM, Kai Huang wrote:
> On Wed, 6 Jan 2021 09:07:13 -0800 Dave Hansen wrote:
>> Does the *ABI* here preclude doing oversubscription in the future?
> 
> I am Sorry what *ABI* do you mean?

Oh boy.

https://en.wikipedia.org/wiki/Application_binary_interface

In your patch set that you are posting, /dev/sgx_virt_epc is a new
interface: a new ABI.  If we accept your contribution, programs will be
built around it and will expect Linux to support this ABI.  An ABI is a contract
between software written to use it and the kernel.  The kernel tries
*really* hard to never break its contracts with applications.

OK, now that we have that out of the way, I'll ask my question in
another way:

Your series adds some new interfaces, including /dev/sgx_virt_epc.  If
the kernel wants to add oversubscription in the future, will old binary
application users of /dev/sgx_virt_epc be able to support
oversubscription?  Or, would users of /dev/sgx_virt_epc need to change
to support oversubscription?

>> Also, didn't we call this "Flexible Launch Control"?
> 
> I am actually a little bit confused about all those terms here. I don't think
> from spec's perspective, there's such thing "Flexible Launch Control", but I
> think everyone knows what does it mean. But I am not sure whether it is
> commonly used by community. 
> 
> I think using FLC is fine if we only want to mention unlocked mode. But if you
> want to mention both, IMHO it would be better to specifically use LC locked
> mode and unlocked mode, since technically there's third case that LC is not
> present at all.

Could you go over the changelogs from Jarkko's patches and at least make
these consistent with those?


>>> or is not present at all. The reason is the goal of SGX virtualization, or
>>> virtualization in general, is to expose hardware feature to guest, but not to
>>> make assumption how guest will use it. Therefore, KVM should support SGX guest
>>> as long as hardware is able to, to have chance to support more potential use
>>> cases in cloud environment.
>>
>> This is kinda long-winded and misses a lot of important context.  How about:
>>
>> SGX hardware supports two "launch control" modes to limit which enclaves
>> can run.  In the "locked" mode, the hardware prevents enclaves from
>> running unless they are blessed by a third party. 
> 
> or "by Intel".

From what I understand, Intel had to bless the enclaves but the
architecture itself doesn't say "Intel must bless them".  But, yeah, in
practice, it had to be Intel.

>>> - Support exposing SGX2
>>>
>>> Due to the same reason above, SGX2 feature detection is added to core SGX code
>>> to allow KVM to expose SGX2 to guest, even currently SGX driver doesn't support
>>> SGX2, because SGX2 can work just fine in guest w/o any interaction to host SGX
>>> driver.
>>>
>>> - Restricit SGX guest access to provisioning key
>>>
>>> To grant guest being able to fully use SGX, guest needs to be able to create
>>> provisioning enclave.
>>
>> "enclave" or "enclaves"?
> 
> I think should be "enclave", inside one VM, there should only be one
> provisioning enclave.

This is where the language becomes important.  Is the provisioning
enclave a one-shot deal?  You create one per guest and can never create
another?  Or, can you restart it?  Can you architecturally have more
than one active at once?  Or, can you only create one once the first one
dies?

You'll write that sentence differently based on the answers.
Huang, Kai Jan. 7, 2021, 1:50 a.m. UTC | #5
On Wed, 6 Jan 2021 16:48:58 -0800 Dave Hansen wrote:
> On 1/6/21 4:34 PM, Kai Huang wrote:
> > On Wed, 6 Jan 2021 09:07:13 -0800 Dave Hansen wrote:
> >> Does the *ABI* here preclude doing oversubscription in the future?
> > 
> > I am Sorry what *ABI* do you mean?
> 
> Oh boy.
> 
> https://en.wikipedia.org/wiki/Application_binary_interface
> 
> In your patch set that you are posting, /dev/sgx_virt_epc is a new
> interface: a new ABI.  If we accept your contribution, programs will be
> build around and expect Linux to support this ABI.  An ABI is a contract
> between software written to use it and the kernel.  The kernel tries
> *really* hard to never break its contracts with applications.

Thanks.

> 
> OK, now that we have that out of the way, I'll ask my question in
> another way:
> 
> Your series adds some new interfaces, including /dev/sgx_virt_epc.  If
> the kernel wants to add oversubscription in the future, will old binary
> application users of /dev/sgx_virt_epc be able to support
> oversubscription?  Or, would users of /dev/sgx_virt_epc need to change
> to support oversubscription?

Oversubscription will be completely done in kernel/kvm, and will be
transparent to userspace, so it will not impact ABI.

> 
> >> Also, didn't we call this "Flexible Launch Control"?
> > 
> > I am actually a little bit confused about all those terms here. I don't think
> > from spec's perspective, there's such thing "Flexible Launch Control", but I
> > think everyone knows what does it mean. But I am not sure whether it is
> > commonly used by community. 
> > 
> > I think using FLC is fine if we only want to mention unlocked mode. But if you
> > want to mention both, IMHO it would be better to specifically use LC locked
> > mode and unlocked mode, since technically there's third case that LC is not
> > present at all.
> 
> Could you go over the changelogs from Jarkko's patches and at least make
> these consistent with those?

I'll dig into them.

[...]

> >>> - Restricit SGX guest access to provisioning key
> >>>
> >>> To grant guest being able to fully use SGX, guest needs to be able to create
> >>> provisioning enclave.
> >>
> >> "enclave" or "enclaves"?
> > 
> > I think should be "enclave", inside one VM, there should only be one
> > provisioning enclave.
> 
> This is where the language becomes important.  Is the provisioning
> enclave a one-shot deal?  You create one per guest and can never create
> another?  Or, can you restart it?  Can you architecturally have more
> than one active at once?  Or, can you only create one once the first one
> dies?
> 
> You'll write that sentence differently based on the answers.
> 

I think I can just change to "guest needs to be able to access provisioning
key". :)
Sean Christopherson Jan. 7, 2021, 4:14 p.m. UTC | #6
On Thu, Jan 07, 2021, Kai Huang wrote:
> On Wed, 6 Jan 2021 16:48:58 -0800 Dave Hansen wrote:
> > Your series adds some new interfaces, including /dev/sgx_virt_epc.  If
> > the kernel wants to add oversubscription in the future, will old binary
> > application users of /dev/sgx_virt_epc be able to support
> > oversubscription?  Or, would users of /dev/sgx_virt_epc need to change
> > to support oversubscription?
> 
> Oversubscription will be completely done in kernel/kvm, and will be
> transparent to userspace, so it will not impact ABI.

It's not transparent to userspace; odds are very good that userspace would
want to opt in/out of EPC reclaim for its VMs, e.g. for cases where it would
be preferable to fail to launch a VM than to degrade performance.

That being said, there are no anticipated /dev/sgx_virt_epc ABI changes to
support reclaim, as the ABI changes will be in KVM.  In the KVM oversubscription
POC, I added a KVM ioctl to allow enabling EPC reclaim/oversubscription.  That
ioctl took a fd for a /dev/sgx_virt_epc instance.

The reason for routing through KVM was to solve two dependency issues:

  - KVM needs a reference to the virt_epc instance to handle SGX_CONFLICT VM-Exits

  - The SGX subsystem needs to be able to translate GPAs to HVAs to retrieve the
    SECS for a page it is reclaiming.  That requires a KVM instance and helper
    function.

Routing the ioctl through KVM allows KVM to hand over a pointer of itself along
with a GPA->HVA helper, and the SGX subsystem in turn can hand back the virt_epc
instance resolved from the fd.

It would be possible to invert the flow, e.g. pass in a KVM fd to a new
/dev/sgx_virt_epc ioctl, but I suspect that would be kludgier, and in any case
it would be a new ioctl and so would not break existing users.
Huang, Kai Jan. 8, 2021, 2:16 a.m. UTC | #7
On Thu, 7 Jan 2021 08:14:44 -0800 Sean Christopherson wrote:
> On Thu, Jan 07, 2021, Kai Huang wrote:
> > On Wed, 6 Jan 2021 16:48:58 -0800 Dave Hansen wrote:
> > > Your series adds some new interfaces, including /dev/sgx_virt_epc.  If
> > > the kernel wants to add oversubscription in the future, will old binary
> > > application users of /dev/sgx_virt_epc be able to support
> > > oversubscription?  Or, would users of /dev/sgx_virt_epc need to change
> > > to support oversubscription?
> > 
> > Oversubscription will be completely done in kernel/kvm, and will be
> > transparent to userspace, so it will not impact ABI.
> 
> It's not transparent to userpsace, odds are very good that userspace would want
> to opt in/out of EPC reclaim for its VMs.  E.g. for cases where it would be
> preferable to fail to launch a VM than degrade performance.

It seems a reasonable use case, but I don't have an immediate picture of how
it requires new virtualization-related ABI. For instance, the SGX driver
could expose a sysfs entry saying how frequent EPC swapping is (with KVM
oversubscription, the host SGX code should provide such info for the host as
a whole, I think), and the cloud admin can determine whether to launch a new
VM.

Another argument is that, theoretically, the cloud admin may not know how
EPC will be used in the guest, so potentially the guest will use only very
little EPC and creating a new VM won't hurt much. So I am not sure whether
we need to consider such a case, if we want to upstream KVM oversubscription
one day.
 
> 
> That being said, there are no anticipated /dev/sgx_virt_epc ABI changes to
> support reclaim, as the ABI changes will be in KVM.  In the KVM oversubscription
> POC, I added a KVM ioctl to allow enabling EPC reclaim/oversubscription.  That
> ioctl took a fd for a /dev/sgx_virt_epc instance.

Adding an ioctl to enable/disable oversubscription for a particular VM seems
use-case dependent, and I am not sure whether we need to support that if we
want to upstream oversubscription one day. To me, it makes sense to upstream
*basic* oversubscription (which just supports reclaiming EPC from a VM)
first, and then extend it as needed according to use cases.

Anyway, oversubscription won't break existing ABI as you mentioned. 

> 
> The reason for routing through KVM was to solve two dependencies issues:
> 
>   - KVM needs a reference to the virt_epc instance to handle SGX_CONFLICT VM-Exits
> 
>   - The SGX subsystem needs to be able to translate GPAs to HVAs to retrieve the
>     SECS for a page it is reclaiming.  That requires a KVM instance and helper
>     function.
> 
> Routing the ioctl through KVM allows KVM to hand over a pointer of itself along
> with a GPA->HVA helper, and the SGX subsystem in turn can hand back the virt_epc
> instance resolved from the fd.
> 
> It would be possible to invert the flow, e.g. pass in a KVM fd to a new
> /dev/sgx_virt_epc ioctl, but I suspect that would be kludgier, and in any case
> it would be a new ioctl and so would not break existing users.
Jarkko Sakkinen Jan. 11, 2021, 5:20 p.m. UTC | #8
On Wed, 2021-01-06 at 14:55 +1300, Kai Huang wrote:
[...]
> This series is based against latest tip tree's x86/sgx branch. You can also get
> the code from tip branch of kvm-sgx repo on github:
> 
>         https://github.com/intel/kvm-sgx.git tip
> 
> It also requires Qemu changes to create VM with SGX support. You can find Qemu
> repo here:
> 
>         https://github.com/intel/qemu-sgx.git next
> 
> Please refer to README.md of above qemu-sgx repo for detail on how to create
> guest with SGX support. At meantime, for your quick reference you can use below
> command to create SGX guest:
> 
>         #qemu-system-x86_64 -smp 4 -m 2G -drive file=<your_vm_image>,if=virtio \
>                 -cpu host,+sgx_provisionkey \
>                 -sgx-epc id=epc1,memdev=mem1 \
>                 -object memory-backend-epc,id=mem1,size=64M,prealloc
> 
> Please note that the SGX relevant part is:
> 
>                 -cpu host,+sgx_provisionkey \
>                 -sgx-epc id=epc1,memdev=mem1 \
>                 -object memory-backend-epc,id=mem1,size=64M,prealloc
> 
> And you can change other parameters of your qemu command based on your needs.

Thanks a lot for documenting these snippets in the cover letter. I'll dig
these up from lore once my environment is working.

I'm setting up an Arch-based test environment with an eye on this patch set
and the generic Linux keyring patches:

https://git.kernel.org/pub/scm/linux/kernel/git/jarkko/arch.git/

Still have some minor bits to adjust before I can start deploying it for SGX
testing. For this patch set I'll use two instances of it.

> =========
> KVM SGX virtualization Overview
> 
> - Virtual EPC
> 
> "Virtual EPC" is the EPC section exposed by KVM to guest so SGX software in
> guest can discover it and use it to create SGX enclaves. KVM exposes SGX to 

Virtual EPC is a representation of an EPC section. And there is no "the".

> guest via CPUID, and exposes one or more "virtual EPC" sections for guest.
> The size of "virtual EPC" is passed as Qemu parameter when creating the
> guest, and the base address is calcualted internally according to guest's
> configuration.
> 
> To support virtual EPC, add a new misc device /dev/sgx_virt_epc to SGX
> core/driver to allow userspace (Qemu) to allocate "raw" EPC, and use it as
> "virtual EPC" for guest. Obviously, unlike EPC allocated for host SGX driver,
> virtual EPC allocated via /dev/sgx_virt_epc doesn't have enclave associated,
> and how virtual EPC is used by guest is compeletely controlled by guest's SGX
> software.

I think that /dev/sgx_vepc would be a clear enough name for the device. This
text now has somewhat confusing "terminology" related to this.

In some places virtual EPC is in quotes, and in other places it is not. I
think that you could consistently use the abbreviation vEPC (without
quotations):

"
vEPC
====

Virtual EPC, shortened as vEPC, is a representation of ...
"

> Implement the "raw" EPC allocation in the x86 core-SGX subsystem via
> /dev/sgx_virt_epc rather than in KVM. Doing so has two major advantages:

Maybe you could remove "raw" from the text.

>   - Does not require changes to KVM's uAPI, e.g. EPC gets handled as
>     just another memory backend for guests.

Why is this an advantage? No objection, just a question.

>   - EPC management is wholly contained in the SGX subsystem, e.g. SGX
>     does not have to export any symbols, changes to reclaim flows don't
>     need to be routed through KVM, SGX's dirty laundry doesn't have to
>     get aired out for the world to see, and so on and so forth.

No comments to this before understanding code changes better.

> The virtual EPC allocated to guests is currently not reclaimable, due to
> reclaiming EPC from KVM guests is not currently supported. Due to the
> complications of handling reclaim conflicts between guest and host, KVM
> EPC oversubscription, which allows total virtual EPC size greater than
> physical EPC by being able to reclaiming guests' EPC, is significantly more
> complex than basic support for SGX virtualization.

I think it should be really in the center of the patch set description that
this patch set implements segmentation of EPC, not oversubscription. It should
be clear immediately. It's a core part of knowing "what I'm looking at".

> - Support SGX virtualization without SGX Launch Control unlocked mode
> 
> Although SGX driver requires SGX Launch Control unlocked mode to work, SGX
> virtualization doesn't, since how enclave is created is completely controlled
> by guest SGX software, which is not necessarily linux. Therefore, this series
> allows KVM to expose SGX to guest even SGX Launch Control is in locked mode,
> or is not present at all. The reason is the goal of SGX virtualization, or
> virtualization in general, is to expose hardware feature to guest, but not to
> make assumption how guest will use it. Therefore, KVM should support SGX guest
> as long as hardware is able to, to have chance to support more potential use
> cases in cloud environment.

AFAIK the convergence point with FLC was, and is, that Linux never enables
SGX with locked MSRs.

And I don't understand: if it is not fine to allow locked SGX for a
*process*, why is it fine for a *virtual machine*? They have a lot in common.

I cannot remember off the top of my head whether the Intel SHA256 can be
read when booted with unlocked MSRs. If that is the case, then you can still
support guests with that configuration.

Context-dependent guidelines also tend to trash code big time. Also, for the
sake of a sane kernel code base, I would consider only supporting unlocked
MSRs.

/Jarkko
Sean Christopherson Jan. 11, 2021, 6:37 p.m. UTC | #9
On Mon, Jan 11, 2021, Jarkko Sakkinen wrote:
> On Wed, 2021-01-06 at 14:55 +1300, Kai Huang wrote:
> >   - Does not require changes to KVM's uAPI, e.g. EPC gets handled as
> >     just another memory backend for guests.
> 
> Why this an advantage? No objection, just a question.

There are zero KVM changes required to support exposing EPC to a guest.  KVM's
MMU is completely ignorant of what physical backing is used for any given host
virtual address.  KVM has to be aware of various VM_* flags, e.g. VM_PFNMAP and
VM_IO, but that code is arch agnostic and is quite isolated.

> >   - EPC management is wholly contained in the SGX subsystem, e.g. SGX
> >     does not have to export any symbols, changes to reclaim flows don't
> >     need to be routed through KVM, SGX's dirty laundry doesn't have to
> >     get aired out for the world to see, and so on and so forth.
> 
> No comments to this before understanding code changes better.
> 
> > The virtual EPC allocated to guests is currently not reclaimable, due to
> > reclaiming EPC from KVM guests is not currently supported. Due to the
> > complications of handling reclaim conflicts between guest and host, KVM
> > EPC oversubscription, which allows total virtual EPC size greater than
> > physical EPC by being able to reclaiming guests' EPC, is significantly more
> > complex than basic support for SGX virtualization.
> 
> I think it should be really in the center of the patch set description that
> this patch set implements segmentation of EPC, not oversubscription. It should
> be clear immediately. It's a core part of knowing "what I'm looking at".

Technically, it doesn't implement segmentation of EPC.  It implements
non-reclaimable EPC allocation.  Even that is somewhat untrue, as the EPC can
be forcefully reclaimed, but doing so will destroy the guest contents.

Userspace can oversubscribe the EPC to KVM guests, but it would need to kill,
migrate, or pause one or more VMs if the pool of physical EPC were exhausted.

> > - Support SGX virtualization without SGX Launch Control unlocked mode
> > 
> > Although SGX driver requires SGX Launch Control unlocked mode to work, SGX
> > virtualization doesn't, since how enclave is created is completely controlled
> > by guest SGX software, which is not necessarily linux. Therefore, this series
> > allows KVM to expose SGX to guest even SGX Launch Control is in locked mode,
> > or is not present at all. The reason is the goal of SGX virtualization, or
> > virtualization in general, is to expose hardware feature to guest, but not to
> > make assumption how guest will use it. Therefore, KVM should support SGX guest
> > as long as hardware is able to, to have chance to support more potential use
> > cases in cloud environment.
> 
> AFAIK the convergence point with the FLC was, and is that Linux never enables
> SGX with locked MSRs.
> 
> And I don't understand, if it is not fine to allow locked SGX for a *process*,
> why is it fine for a *virtual machine*? They have a lot same.

Because it's a completely different OS/kernel.  If the user has a kernel that
supports locked SGX, then so be it.  There's no novel circumvention of the
kernel policy, e.g. the user could simply boot the non-upstream kernel directly,
and running an upstream kernel in the guest will not cause the kernel to support
SGX.

There are any number of things that are allowed in a KVM guest that are not
allowed in a bare metal process.

> I cannot remember out of top of my head, could the Intel SHA256 be read when
> booted with unlocked MSRs. If that is the case, then you can still support
> guests with that configuration.

No, it's not guaranteed to be readable as firmware could have already changed
the values in the MSRs.

> Context-dependent guidelines tend to also trash code big time. Also, for the
> sake of a sane kernel code base, I would consider only supporting unlocked
> MSRs.

It's one line of a code to teach the kernel driver not to load if the MSRs are
locked.  And IMO, that one line of code is a net positive as it makes it clear
in the driver itself that it chooses not support locked MSRs, even if SGX itself
is fully enabled.
Huang, Kai Jan. 12, 2021, 1:14 a.m. UTC | #10
On Mon, 11 Jan 2021 19:20:48 +0200 Jarkko Sakkinen wrote:
> On Wed, 2021-01-06 at 14:55 +1300, Kai Huang wrote:
[...]
> 
> Thanks a lot documenting these snippets to the cover letter. I dig these
> up from lore once my environment is working.
> 
> I'm setting up Arch based test environment with the eye on this patch set
> and generic Linux keyring patches:
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/jarkko/arch.git/
> 
> Still have some minor bits to adjust before I can start deploying it for SGX
> testing. For this patch set I'll use two instances of it.

Thanks. Please let me know if you need anything more.

> 
> > =========
> > KVM SGX virtualization Overview
> > 
> > - Virtual EPC
> > 
> > "Virtual EPC" is the EPC section exposed by KVM to guest so SGX software in
> > guest can discover it and use it to create SGX enclaves. KVM exposes SGX to 
> 
> Virtual EPC is a representation of an EPC section. And there is no "the".
> 
> > guest via CPUID, and exposes one or more "virtual EPC" sections for guest.
> > The size of "virtual EPC" is passed as Qemu parameter when creating the
> > guest, and the base address is calcualted internally according to guest's
> > configuration.
> > 
> > To support virtual EPC, add a new misc device /dev/sgx_virt_epc to SGX
> > core/driver to allow userspace (Qemu) to allocate "raw" EPC, and use it as
> > "virtual EPC" for guest. Obviously, unlike EPC allocated for host SGX driver,
> > virtual EPC allocated via /dev/sgx_virt_epc doesn't have enclave associated,
> > and how virtual EPC is used by guest is compeletely controlled by guest's SGX
> > software.
> 
> I think that /dev/sgx_vepc would be a clear enough name for the device. This
> text now has a bit of confusing "terminology" around it.

/dev/sgx_virt_epc may be clearer from userspace's perspective; for instance,
if people see /dev/sgx_vepc, they may have to think about what it is,
while with /dev/sgx_virt_epc they may not.

But I don't have a strong objection here. Does anyone have anything to say here?
Jarkko Sakkinen Jan. 12, 2021, 1:58 a.m. UTC | #11
On Mon, Jan 11, 2021 at 10:37:05AM -0800, Sean Christopherson wrote:
> On Mon, Jan 11, 2021, Jarkko Sakkinen wrote:
> > On Wed, 2021-01-06 at 14:55 +1300, Kai Huang wrote:
> > >   - Does not require changes to KVM's uAPI, e.g. EPC gets handled as
> > >     just another memory backend for guests.
> > 
> > Why this an advantage? No objection, just a question.
> 
> There are zero KVM changes required to support exposing EPC to a guest.  KVM's
> MMU is completely ignorant of what physical backing is used for any given host
> virtual address.  KVM has to be aware of various VM_* flags, e.g. VM_PFNMAP and
> VM_IO, but that code is arch agnostic and is quite isolated.

Right, thanks for the explanation.
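
For illustration (a sketch, not code from the series; the size, slot number,
and guest base address below are made up), that flow is just the standard KVM
memslot uAPI pointed at an mmap() of the virtual EPC device:

        #include <fcntl.h>
        #include <sys/ioctl.h>
        #include <sys/mman.h>
        #include <linux/kvm.h>

        #define EPC_SIZE       (64UL << 20)     /* hypothetical: 64M */
        #define EPC_SLOT       10               /* hypothetical slot number */
        #define GUEST_EPC_BASE 0x100000000UL    /* hypothetical guest base */

        /* Map raw EPC and hand it to KVM like any other memory backend.
         * Error handling omitted for brevity. */
        static int map_virt_epc(int vm_fd)
        {
                int epc_fd = open("/dev/sgx_virt_epc", O_RDWR);
                void *epc = mmap(NULL, EPC_SIZE,
                                 PROT_READ | PROT_WRITE | PROT_EXEC,
                                 MAP_SHARED, epc_fd, 0);
                struct kvm_userspace_memory_region region = {
                        .slot            = EPC_SLOT,
                        .guest_phys_addr = GUEST_EPC_BASE,
                        .memory_size     = EPC_SIZE,
                        .userspace_addr  = (__u64)(unsigned long)epc,
                };

                /* KVM's MMU only ever sees a host virtual address range. */
                return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
        }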

> > >   - EPC management is wholly contained in the SGX subsystem, e.g. SGX
> > >     does not have to export any symbols, changes to reclaim flows don't
> > >     need to be routed through KVM, SGX's dirty laundry doesn't have to
> > >     get aired out for the world to see, and so on and so forth.
> > 
> > No comments on this before understanding the code changes better.
> > 
> > > The virtual EPC allocated to guests is currently not reclaimable, due to
> > > reclaiming EPC from KVM guests is not currently supported. Due to the
> > > complications of handling reclaim conflicts between guest and host, KVM
> > > EPC oversubscription, which allows the total virtual EPC size to be greater
> > > than physical EPC by being able to reclaim guests' EPC, is significantly
> > > more complex than basic support for SGX virtualization.
> > 
> > I think it should really be front and center in the patch set description that
> > this patch set implements segmentation of EPC, not oversubscription. It should
> > be clear immediately. It's a core part of knowing "what I'm looking at".
> 
> Technically, it doesn't implement segmentation of EPC.  It implements
> non-reclaimable EPC allocation.  Even that is somewhat untrue, as the EPC can be
> forcefully reclaimed, but doing so will destroy the guest contents.

In the SGX case, that isn't actually as bad a policy in high-stress
situations as it would be for "normal" applications.  Runtimes must expect
the disappearance of the enclave at any point in time anyway...

> Userspace can oversubscribe the EPC to KVM guests, but it would need to kill,
> migrate, or pause one or more VMs if the pool of physical EPC were exhausted.

Right.

> 
> > > - Support SGX virtualization without SGX Launch Control unlocked mode
> > > 
> > > Although the SGX driver requires SGX Launch Control unlocked mode to work, SGX
> > > virtualization doesn't, since how enclaves are created is completely controlled
> > > by guest SGX software, which is not necessarily Linux. Therefore, this series
> > > allows KVM to expose SGX to the guest even when SGX Launch Control is in locked
> > > mode, or is not present at all. The reason is that the goal of SGX virtualization,
> > > or of virtualization in general, is to expose hardware features to the guest, not
> > > to make assumptions about how the guest will use them. Therefore, KVM should
> > > support SGX guests as long as the hardware is able to, to have a chance to
> > > support more potential use cases in cloud environments.
> > 
> > AFAIK the convergence point with the FLC was, and is, that Linux never enables
> > SGX with locked MSRs.
> > 
> > And I don't understand: if it is not fine to allow locked SGX for a *process*,
> > why is it fine for a *virtual machine*? They have a lot in common.
> 
> Because it's a completely different OS/kernel.  If the user has a kernel that
> supports locked SGX, then so be it.  There's no novel circumvention of the
> kernel policy, e.g. the user could simply boot the non-upstream kernel directly,
> and running an upstream kernel in the guest will not cause the kernel to support
> SGX.
> 
> There are any number of things that are allowed in a KVM guest that are not
> allowed in a bare metal process.

I buy this.

> > I cannot remember off the top of my head: can the Intel SHA256 be read when
> > booted with unlocked MSRs? If that is the case, then you can still support
> > guests with that configuration.
> 
> No, it's not guaranteed to be readable as firmware could have already changed
> the values in the MSRs.

Right.

> > Context-dependent guidelines tend to also trash code big time. Also, for the
> > sake of a sane kernel code base, I would consider only supporting unlocked
> > MSRs.
> 
> It's one line of code to teach the kernel driver not to load if the MSRs are
> locked.  And IMO, that one line of code is a net positive as it makes it clear
> in the driver itself that it chooses not to support locked MSRs, even if SGX
> itself is fully enabled.

Yup, I think this clears up my concerns, thank you.

/Jarkko
Jarkko Sakkinen Jan. 12, 2021, 2:02 a.m. UTC | #12
On Tue, Jan 12, 2021 at 02:14:28PM +1300, Kai Huang wrote:
> On Mon, 11 Jan 2021 19:20:48 +0200 Jarkko Sakkinen wrote:
> > On Wed, 2021-01-06 at 14:55 +1300, Kai Huang wrote:
> > > --- Disclaimer ---
> > > 
> > > These patches were originally written by Sean Christopherson while at Intel.
> > > Now that Sean has left Intel, I (Kai) have taken over getting them upstream.
> > > This series needs more review before it can be merged.  It is being posted
> > > publicly and under RFC so Sean and others can review it. Maintainers are safe
> > > ignoring it for now.
> > > 
> > > ------------------
> > > 
> > > Hi all,
> > > 
> > > This series adds KVM SGX virtualization support. The first 12 patches starting
> > > with x86/sgx or x86/cpu.. are necessary changes to x86 and SGX core/driver to
> > > support KVM SGX virtualization, while the rest are patches to KVM subsystem.
> > > 
> > > Please help to review this series. Also I'd like to hear what is the proper
> > > way to merge this series, since it contains change to both x86/SGX and KVM
> > > subsystem. Any feedback is highly appreciated. And please let me know if I
> > > forgot to CC anyone, or anyone wants to be removed from CC. Thanks in advance!
> > > 
> > > This series is based against latest tip tree's x86/sgx branch. You can also get
> > > the code from tip branch of kvm-sgx repo on github:
> > > 
> > >         https://github.com/intel/kvm-sgx.git tip
> > > 
> > > It also requires Qemu changes to create VM with SGX support. You can find Qemu
> > > repo here:
> > > 
> > >         https://github.com/intel/qemu-sgx.git next
> > > 
> > > Please refer to README.md of above qemu-sgx repo for detail on how to create
> > > guest with SGX support. At meantime, for your quick reference you can use below
> > > command to create SGX guest:
> > > 
> > >         #qemu-system-x86_64 -smp 4 -m 2G -drive file=<your_vm_image>,if=virtio \
> > >                 -cpu host,+sgx_provisionkey \
> > >                 -sgx-epc id=epc1,memdev=mem1 \
> > >                 -object memory-backend-epc,id=mem1,size=64M,prealloc
> > > 
> > > Please note that the SGX relevant part is:
> > > 
> > >                 -cpu host,+sgx_provisionkey \
> > >                 -sgx-epc id=epc1,memdev=mem1 \
> > >                 -object memory-backend-epc,id=mem1,size=64M,prealloc
> > > 
> > > And you can change other parameters of your qemu command based on your needs.
> > 
> > Thanks a lot for documenting these snippets in the cover letter. I'll dig these
> > up from lore once my environment is working.
> > 
> > I'm setting up an Arch-based test environment with an eye on this patch set
> > and the generic Linux keyring patches:
> > 
> > https://git.kernel.org/pub/scm/linux/kernel/git/jarkko/arch.git/
> > 
> > Still have some minor bits to adjust before I can start deploying it for SGX
> > testing. For this patch set I'll use two instances of it.
> 
> Thanks. Please let me know if you need anything more.
> 
> > 
> > > =========
> > > KVM SGX virtualization Overview
> > > 
> > > - Virtual EPC
> > > 
> > > "Virtual EPC" is the EPC section exposed by KVM to guest so SGX software in
> > > guest can discover it and use it to create SGX enclaves. KVM exposes SGX to 
> > 
> > Virtual EPC is a representation of an EPC section. And there is no "the".
> > 
> > > guest via CPUID, and exposes one or more "virtual EPC" sections for guest.
> > > The size of "virtual EPC" is passed as Qemu parameter when creating the
> > > guest, and the base address is calcualted internally according to guest's
> > > configuration.
> > > 
> > > To support virtual EPC, add a new misc device /dev/sgx_virt_epc to SGX
> > > core/driver to allow userspace (Qemu) to allocate "raw" EPC, and use it as
> > > "virtual EPC" for guest. Obviously, unlike EPC allocated for host SGX driver,
> > > virtual EPC allocated via /dev/sgx_virt_epc doesn't have enclave associated,
> > > and how virtual EPC is used by guest is compeletely controlled by guest's SGX
> > > software.
> > 
> > I think that /dev/sgx_vepc would be a clear enough name for the device. This
> > text now has a bit of confusing "terminology" around it.
> 
> /dev/sgx_virt_epc may be clearer from userspace's perspective; for instance,
> if people see /dev/sgx_vepc, they may have to think about what it is,
> while with /dev/sgx_virt_epc they may not.
> 
> But I don't have a strong objection here. Does anyone have anything to say here?

It's already an abbreviation to start with, so why leave it halfway?

Especially when the three remaining words have been shrunk to single
characters ('E', 'P' and 'C').

/Jarkko
Huang, Kai Jan. 12, 2021, 2:07 a.m. UTC | #13
> > > > 
> > > > To support virtual EPC, add a new misc device /dev/sgx_virt_epc to SGX
> > > > core/driver to allow userspace (Qemu) to allocate "raw" EPC, and use it as
> > > > "virtual EPC" for guest. Obviously, unlike EPC allocated for host SGX driver,
> > > > virtual EPC allocated via /dev/sgx_virt_epc doesn't have enclave associated,
> > > > and how virtual EPC is used by guest is compeletely controlled by guest's SGX
> > > > software.
> > > 
> > > I think that /dev/sgx_vepc would be a clear enough name for the device. This
> > > text now has a bit of confusing "terminology" around it.
> > 
> > /dev/sgx_virt_epc may be clearer from userspace's perspective; for instance,
> > if people see /dev/sgx_vepc, they may have to think about what it is,
> > while with /dev/sgx_virt_epc they may not.
> > 
> > But I don't have a strong objection here. Does anyone have anything to say here?
> 
> It's already an abbreviation to start with, so why leave it halfway?
> 
> Especially when the three remaining words have been shrunk to single
> characters ('E', 'P' and 'C').
> 

I have expressed my opinion above. And as I said, I don't have a strong objection
here. I'll change to /dev/sgx_vepc if no one opposes.
Huang, Kai Jan. 15, 2021, 2:43 p.m. UTC | #14
On Tue, 2021-01-12 at 15:07 +1300, Kai Huang wrote:
> > > > > 
> > > > > To support virtual EPC, add a new misc device /dev/sgx_virt_epc to SGX
> > > > > core/driver to allow userspace (Qemu) to allocate "raw" EPC, and use it as
> > > > > "virtual EPC" for guest. Obviously, unlike EPC allocated for host SGX
> > > > > driver,
> > > > > virtual EPC allocated via /dev/sgx_virt_epc doesn't have enclave
> > > > > associated,
> > > > > and how virtual EPC is used by guest is compeletely controlled by guest's
> > > > > SGX
> > > > > software.
> > > > 
> > > > I think that /dev/sgx_vepc would be a clear enough name for the device. This
> > > > text now has a bit of confusing "terminology" around it.
> > > 
> > > /dev/sgx_virt_epc may be clearer from userspace's perspective; for instance,
> > > if people see /dev/sgx_vepc, they may have to think about what it is,
> > > while with /dev/sgx_virt_epc they may not.
> > > 
> > > But I don't have a strong objection here. Does anyone have anything to say here?
> > 
> > It's already an abbreviation to start with, so why leave it halfway?
> > 
> > Especially when the three remaining words have been shrunk to single
> > characters ('E', 'P' and 'C').
> > 
> 
> I have expressed my opinion above. And as I said, I don't have a strong objection
> here. I'll change to /dev/sgx_vepc if no one opposes.

Hi Jarkko,

I am reluctant to change to /dev/sgx_vepc now, because there are lots of
'sgx_virt_epc' instances in the code.  For instance, there is 'struct sgx_virt_epc',
and the function names in sgx/virt.c are all sgx_virt_epc_xxx(), with 'sgx_virt_epc'
as the prefix. I feel changing only the device node to /dev/sgx_vepc would be
incomplete, but I really don't want to change so many 'sgx_virt_epc' to 'sgx_vepc'.

(Plus I still think 'virt_epc' is more obvious than 'vepc' from userspace's
perspective.)

Does it make sense?
Jarkko Sakkinen Jan. 16, 2021, 9:31 a.m. UTC | #15
On Sat, Jan 16, 2021 at 03:43:18AM +1300, Kai Huang wrote:
> On Tue, 2021-01-12 at 15:07 +1300, Kai Huang wrote:
> > > > > > 
> > > > > > To support virtual EPC, add a new misc device /dev/sgx_virt_epc to SGX
> > > > > > core/driver to allow userspace (Qemu) to allocate "raw" EPC, and use it as
> > > > > > "virtual EPC" for guest. Obviously, unlike EPC allocated for host SGX
> > > > > > driver,
> > > > > > virtual EPC allocated via /dev/sgx_virt_epc doesn't have enclave
> > > > > > associated,
> > > > > > and how virtual EPC is used by guest is compeletely controlled by guest's
> > > > > > SGX
> > > > > > software.
> > > > > 
> > > > > I think that /dev/sgx_vepc would be a clear enough name for the device. This
> > > > > text now has a bit of confusing "terminology" around it.
> > > > 
> > > > /dev/sgx_virt_epc may be clearer from userspace's perspective; for instance,
> > > > if people see /dev/sgx_vepc, they may have to think about what it is,
> > > > while with /dev/sgx_virt_epc they may not.
> > > > 
> > > > But I don't have a strong objection here. Does anyone have anything to say here?
> > > 
> > > It's already an abbreviation to start with, so why leave it halfway?
> > > 
> > > Especially when the three remaining words have been shrunk to single
> > > characters ('E', 'P' and 'C').
> > > 
> > 
> > I have expressed my opinion above. And as I said, I don't have a strong objection
> > here. I'll change to /dev/sgx_vepc if no one opposes.
> 
> Hi Jarkko,
> 
> I am reluctant to change to /dev/sgx_vepc now, because there are lots of
> 'sgx_virt_epc' instances in the code.  For instance, there is 'struct sgx_virt_epc',
> and the function names in sgx/virt.c are all sgx_virt_epc_xxx(), with 'sgx_virt_epc'
> as the prefix. I feel changing only the device node to /dev/sgx_vepc would be
> incomplete, but I really don't want to change so many 'sgx_virt_epc' to 'sgx_vepc'.
> 
> (Plus I still think 'virt_epc' is more obvious than 'vepc' from userspace's
> perspective.)
> 
> Does it make sense?

We can reconsider the naming later on for sure, and maybe it's better to do
so. It's probably too early to define the final name.

As far as naming goes, I'm actually wondering: is this usable outside of
KVM by any means? If not, then probably the best name for this device
would be sgx_kvm_epc. Better to always be as explicit as possible.

/Jarkko
Jarkko Sakkinen Jan. 16, 2021, 9:50 a.m. UTC | #16
On Sat, Jan 16, 2021 at 11:31:49AM +0200, Jarkko Sakkinen wrote:
> On Sat, Jan 16, 2021 at 03:43:18AM +1300, Kai Huang wrote:
> > On Tue, 2021-01-12 at 15:07 +1300, Kai Huang wrote:
> > > > > > > 
> > > > > > > To support virtual EPC, add a new misc device /dev/sgx_virt_epc to SGX
> > > > > > > core/driver to allow userspace (Qemu) to allocate "raw" EPC, and use it as
> > > > > > > "virtual EPC" for guest. Obviously, unlike EPC allocated for host SGX
> > > > > > > driver,
> > > > > > > virtual EPC allocated via /dev/sgx_virt_epc doesn't have enclave
> > > > > > > associated,
> > > > > > > and how virtual EPC is used by guest is compeletely controlled by guest's
> > > > > > > SGX
> > > > > > > software.
> > > > > > 
> > > > > > I think that /dev/sgx_vepc would be a clear enough name for the device. This
> > > > > > text now has a bit of confusing "terminology" around it.
> > > > > 
> > > > > /dev/sgx_virt_epc may be clearer from userspace's perspective; for instance,
> > > > > if people see /dev/sgx_vepc, they may have to think about what it is,
> > > > > while with /dev/sgx_virt_epc they may not.
> > > > > 
> > > > > But I don't have a strong objection here. Does anyone have anything to say here?
> > > > 
> > > > It's already an abbreviation to start with, so why leave it halfway?
> > > > 
> > > > Especially when the three remaining words have been shrunk to single
> > > > characters ('E', 'P' and 'C').
> > > > 
> > > 
> > > I have expressed my opinion above. And as I said, I don't have a strong objection
> > > here. I'll change to /dev/sgx_vepc if no one opposes.
> > 
> > Hi Jarkko,
> > 
> > I am reluctant to change to /dev/sgx_vepc now, because there are lots of
> > 'sgx_virt_epc' instances in the code.  For instance, there is 'struct sgx_virt_epc',
> > and the function names in sgx/virt.c are all sgx_virt_epc_xxx(), with 'sgx_virt_epc'
> > as the prefix. I feel changing only the device node to /dev/sgx_vepc would be
> > incomplete, but I really don't want to change so many 'sgx_virt_epc' to 'sgx_vepc'.
> > 
> > (Plus I still think 'virt_epc' is more obvious than 'vepc' from userspace's
> > perspective.)
> > 
> > Does it make sense?
> 
> We can reconsider the naming later on for sure, and maybe it's better to do
> so. It's probably too early to define the final name.
> 
> As far as naming goes, I'm actually wondering: is this usable outside of
> KVM by any means? If not, then probably the best name for this device
> would be sgx_kvm_epc. Better to always be as explicit as possible.

You can easily do such renames with git filter-branch over a patch set:

https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History
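
For instance (a hedged sketch, assuming the series sits on a branch based
on a 'tip' base branch and that GNU grep/sed/xargs are available):

        git filter-branch --tree-filter \
                "grep -rl sgx_virt_epc . | xargs -r sed -i 's/sgx_virt_epc/sgx_vepc/g'" \
                tip..HEAD

This rewrites every commit in tip..HEAD with the identifier renamed in one
pass, so the number of occurrences in the code doesn't really matter.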

Having to rename something in too many places is not an argument.
Considering it too early is.

/Jarkko