Message ID | 20210514201508.27967-1-chang.seok.bae@intel.com
---|---
Series | x86: Support Intel Key Locker
On 5/14/21 1:14 PM, Chang S. Bae wrote:
> Key Locker [1][2] is a new security feature available in new Intel CPUs to protect data encryption keys for the Advanced Encryption Standard algorithm. The protection limits the amount of time an AES key is exposed in memory by sealing a key and referencing it with new AES instructions.
>
> The new AES instruction set is a successor of Intel's AES-NI (AES New Instructions). Users may switch to the Key Locker version from crypto libraries. This series includes a new AES implementation for the Crypto API, which was validated through the crypto unit tests. The performance in the test cases was measured and found comparable to the AES-NI version.
>
> Key Locker introduces a (CPU-)internal key to encode AES keys. The kernel needs to load it and ensure it is unchanged as long as CPUs are operational.

I have high-level questions:

What is the expected use case? My personal hypothesis, based on various public Intel slides, is that the actual intended use case was internal to the ME, and that KL was ported to end-user CPUs more or less verbatim.

I certainly understand how KL is valuable in a context where a verified boot process installs some KL keys that are not subsequently accessible outside the KL ISA, but Linux does not really work like this. I'm wondering what people will use it for.

On a related note, does Intel plan to extend KL with ways to securely load keys? (E.g. the ability to, in effect, LOADIWKEY from inside an enclave? Key wrapping/unwrapping operations?) In other words, should we look at KL the way we look at MKTME, i.e. the foundation of something neat but not necessarily very useful as is, or should we expect that KL is in its more or less final form?

What is the expected interaction between a KL-using VM guest and the host VMM? Will there be performance impacts (to context switching, for example) if a guest enables KL, even if the guest does not subsequently do anything with it? Should Linux actually enable KL if it detects that it's a VM guest? Should Linux use a specific keying method as a guest?

--Andy
On May 15, 2021, at 11:01, Andy Lutomirski <luto@kernel.org> wrote:
> On 5/14/21 1:14 PM, Chang S. Bae wrote:
>> Key Locker [1][2] is a new security feature available in new Intel CPUs to protect data encryption keys for the Advanced Encryption Standard algorithm. The protection limits the amount of time an AES key is exposed in memory by sealing a key and referencing it with new AES instructions.
>>
>> The new AES instruction set is a successor of Intel's AES-NI (AES New Instructions). Users may switch to the Key Locker version from crypto libraries. This series includes a new AES implementation for the Crypto API, which was validated through the crypto unit tests. The performance in the test cases was measured and found comparable to the AES-NI version.
>>
>> Key Locker introduces a (CPU-)internal key to encode AES keys. The kernel needs to load it and ensure it is unchanged as long as CPUs are operational.
>
> I have high-level questions:
>
> What is the expected use case?

The wrapping key here is only used for the new AES instructions.

I'm aware of their potential use cases for encrypting file systems or disks.

> My personal hypothesis, based on various public Intel slides, is that the actual intended use case was internal to the ME, and that KL was ported to end-user CPUs more or less verbatim.

No, this is a separate feature. It has nothing to do with the firmware, except that in some situations the firmware merely helps to back up the key state.

> I certainly understand how KL is valuable in a context where a verified boot process installs some KL keys that are not subsequently accessible outside the KL ISA, but Linux does not really work like this.

Do you mind elaborating on the concern? I'm trying to understand whether there is any issue with PATCH3 [1], specifically.

> I'm wondering what people will use it for.

Mentioned above.

> On a related note, does Intel plan to extend KL with ways to securely load keys? (E.g. the ability to, in effect, LOADIWKEY from inside an enclave? Key wrapping/unwrapping operations?) In other words, should we look at KL the way we look at MKTME, i.e. the foundation of something neat but not necessarily very useful as is, or should we expect that KL is in its more or less final form?

All I have is pretty much in the spec, so I think the latter is the case. I don't see anything about LOADIWKEY inside an enclave in the spec. (A relevant section is A.6.1, Key Locker Usage with TEE.)

> What is the expected interaction between a KL-using VM guest and the host VMM? Will there be performance impacts (to context switching, for example) if a guest enables KL, even if the guest does not subsequently do anything with it? Should Linux actually enable KL if it detects that it's a VM guest? Should Linux use a specific keying method as a guest?

First of all, there is an RFC series for KVM [2].

Each CPU has one internal key state, so it needs to be reloaded between guest and host if both are enabled. The proposed approach enables it exclusively: expose it to guests only when it is disabled in the host. Then, I guess, a guest may enable it.

Thanks,
Chang

[1] https://lore.kernel.org/lkml/20210514201508.27967-4-chang.seok.bae@intel.com/
[2] https://lore.kernel.org/kvm/1611565580-47718-1-git-send-email-robert.hu@linux.intel.com/
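For readers who have not seen the new instructions, the sketch below shows the basic encode-then-use flow in user space with the Key Locker intrinsics from <immintrin.h>, as documented in the Intel intrinsics guide. It is only an illustration of the ISA model that the Crypto API glue in this series wraps, not code from the series: it assumes a compiler with -mkl support and a CPU/kernel with Key Locker enabled, and error handling is minimal.

```c
#include <immintrin.h>	/* Key Locker intrinsics; build with gcc/clang -mkl */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	uint8_t raw_key[16] = { 0 };	/* application AES-128 key material */
	__m128i handle[3];		/* 384-bit opaque key handle */
	__m128i key = _mm_loadu_si128((const __m128i *)raw_key);
	__m128i block = _mm_setzero_si128();

	/*
	 * ENCODEKEY128 wraps the raw key with the CPU-internal wrapping key
	 * and returns an opaque handle; handle type 0 applies no usage
	 * restrictions.
	 */
	_mm_encodekey128_u32(0, key, handle);

	/* The raw key can now be wiped; only the handle is kept around. */
	memset(raw_key, 0, sizeof(raw_key));

	/*
	 * AESENC128KL performs a full AES-128 encryption of one block using
	 * the key recovered from the handle.  It returns nonzero if the
	 * handle is no longer valid, e.g. the wrapping key has changed.
	 */
	if (_mm_aesenc128kl_u8(&block, block, handle))
		fprintf(stderr, "key handle rejected\n");

	return 0;
}
```

Once the handle is created, the plaintext key is no longer needed for encryption or decryption, which is the exposure-window reduction the cover letter describes.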
On Mon, May 17, 2021 at 11:21 AM Bae, Chang Seok <chang.seok.bae@intel.com> wrote:
> On May 15, 2021, at 11:01, Andy Lutomirski <luto@kernel.org> wrote:
>> On 5/14/21 1:14 PM, Chang S. Bae wrote:
>>> Key Locker [1][2] is a new security feature available in new Intel CPUs to protect data encryption keys for the Advanced Encryption Standard algorithm. The protection limits the amount of time an AES key is exposed in memory by sealing a key and referencing it with new AES instructions.
>>>
>>> The new AES instruction set is a successor of Intel's AES-NI (AES New Instructions). Users may switch to the Key Locker version from crypto libraries. This series includes a new AES implementation for the Crypto API, which was validated through the crypto unit tests. The performance in the test cases was measured and found comparable to the AES-NI version.
>>>
>>> Key Locker introduces a (CPU-)internal key to encode AES keys. The kernel needs to load it and ensure it is unchanged as long as CPUs are operational.
>>
>> I have high-level questions:
>>
>> What is the expected use case?
>
> The wrapping key here is only used for the new AES instructions.
>
> I'm aware of their potential use cases for encrypting file systems or disks.
>
>> My personal hypothesis, based on various public Intel slides, is that the actual intended use case was internal to the ME, and that KL was ported to end-user CPUs more or less verbatim.
>
> No, this is a separate feature. It has nothing to do with the firmware, except that in some situations the firmware merely helps to back up the key state.
>
>> I certainly understand how KL is valuable in a context where a verified boot process installs some KL keys that are not subsequently accessible outside the KL ISA, but Linux does not really work like this.
>
> Do you mind elaborating on the concern? I'm trying to understand whether there is any issue with PATCH3 [1], specifically.

If I understand Andy's concern, it is the observation that the weakest link in this facility is the initial key load. Yes, KL reduces exposure after that event, but the key-loading process is still vulnerable. This question is similar to the distinction between the Linux "encrypted-keys" and "trusted-keys" interfaces. The trusted-keys interface still has an attack window where the key is unwrapped in kernel space to decrypt the sub-keys, but that exposure need not cross the user-kernel boundary and can be time-limited to a given PCR state. The encrypted-keys interface maintains the private-key material outside the kernel, where it has increased exposure. KL is effectively "encrypted-keys", and Andy is questioning whether this makes KL similar to the MKTME vs SGX / TDX situation.

>> I'm wondering what people will use it for.
>
> Mentioned above.

I don't think this answers Andy's question. There is a distinction between what it can be used for and what people will deploy with it in practice, given the "encrypted-keys"-like exposure. Clarify the end-user benefit that motivates the kernel to carry this support.
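For readers unfamiliar with the two keyring interfaces Dan references, this is roughly how they are instantiated from user space through the keyutils add_key(2) wrapper, with payload syntax as documented in Documentation/security/keys/trusted-encrypted.rst. The key names are arbitrary examples, and a TPM plus trusted-keys support is assumed for the first call; the sketch is only meant to illustrate where the master-key material lives in each scheme.

```c
#include <keyutils.h>	/* add_key(2) wrapper; link with -lkeyutils */
#include <stdio.h>
#include <string.h>

int main(void)
{
	/*
	 * "trusted" master key: random key generated in-kernel and sealed
	 * by the TPM; user space only ever sees the sealed blob.
	 */
	key_serial_t kmk = add_key("trusted", "kmk", "new 32",
				   strlen("new 32"), KEY_SPEC_USER_KEYRING);

	/*
	 * "encrypted" key: random key generated in-kernel and exported
	 * only as a blob encrypted with the trusted master key above.
	 */
	key_serial_t evm = add_key("encrypted", "evm-key",
				   "new trusted:kmk 32",
				   strlen("new trusted:kmk 32"),
				   KEY_SPEC_USER_KEYRING);

	if (kmk < 0 || evm < 0)
		perror("add_key");
	return 0;
}
```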
On Mon, May 17, 2021, Bae, Chang Seok wrote:
> On May 15, 2021, at 11:01, Andy Lutomirski <luto@kernel.org> wrote:
>> What is the expected interaction between a KL-using VM guest and the host VMM?

Messy. :-)

>> Will there be performance impacts (to context switching, for example) if a guest enables KL, even if the guest does not subsequently do anything with it?

Short answer, yes. But the proposed solution is to disallow KL in KVM guests if KL is in use by the host.

The problem is that, by design, the host can't restore its key via LOADIWKEY because the whole point is to throw away the real key. To restore its value, the host would need to use the platform backup/restore mechanism, which is comically slow (tens of thousands of cycles).

If KL virtualization is mutually exclusive with use in the host, then IIRC the context-switching penalty is only paid by vCPUs that have executed LOADIWKEY, as other tasks can safely run with a stale/bogus key.

>> Should Linux actually enable KL if it detects that it's a VM guest?

Probably not by default. It shouldn't even be considered unless the VMM is trusted, as a malicious VMM can completely subvert KL. Even if the host is trusted, it's not clear that the tradeoffs are a net win.

Practically speaking, VMMs have to either (a) save the real key in host memory or (b) provide a single VM exclusive access to the underlying hardware.

For (a), that rules out using an ephemeral, random key, as using a truly random key prevents the VMM from saving/restoring the real key. That means the guest has to generate its own key, and the host has to also store the key in memory. There are also potential performance and live migration implications. The only benefit to using KL in the guest is that the real key is not stored in _guest_ accessible memory. So it probably reduces the attack surface, but on the other hand the VMM may store the guest's master key in a known location, which might make cross-VM attacks easier in some ways.

(b) is a fairly unlikely scenario, and certainly can't be assumed to be the default scenario for a guest.

>> Should Linux use a specific keying method as a guest?

Could you rephrase this question? I didn't follow.

> First of all, there is an RFC series for KVM [2].

That series also fails to address the use-case question.

[*] https://lore.kernel.org/kvm/YGs07I%2FmKhDy3pxD@google.com/
On May 17, 2021, at 11:45, Dan Williams <dan.j.williams@intel.com> wrote:
> On Mon, May 17, 2021 at 11:21 AM Bae, Chang Seok <chang.seok.bae@intel.com> wrote:
>> On May 15, 2021, at 11:01, Andy Lutomirski <luto@kernel.org> wrote:
>>>
>>> I certainly understand how KL is valuable in a context where a verified boot process installs some KL keys that are not subsequently accessible outside the KL ISA, but Linux does not really work like this.
>>
>> Do you mind elaborating on the concern? I'm trying to understand whether there is any issue with PATCH3 [1], specifically.
>
> If I understand Andy's concern, it is the observation that the weakest link in this facility is the initial key load. Yes, KL reduces exposure after that event, but the key-loading process is still vulnerable. This question is similar to the distinction between the Linux "encrypted-keys" and "trusted-keys" interfaces. The trusted-keys interface still has an attack window where the key is unwrapped in kernel space to decrypt the sub-keys, but that exposure need not cross the user-kernel boundary and can be time-limited to a given PCR state. The encrypted-keys interface maintains the private-key material outside the kernel, where it has increased exposure. KL is effectively "encrypted-keys", and Andy is questioning whether this makes KL similar to the MKTME vs SGX / TDX situation.

I don't fully grasp the MKTME vs SGX/TDX background, but LOADIWKEY provides a hardware randomization option for the initial load; with it, the internal key is unknown to software. Nonetheless, if one does not trust this randomization and decides not to use it, then having the key in memory at some point during boot is perhaps unavoidable.

I think Dan just gave an example here, but FWIW, "encrypted-keys" and "trusted-keys" are part of the kernel keyring service. I wish to clarify that the keyring service itself is not the intended usage here. Instead, this series is focused on the kernel Crypto API, as this technology protects AES keys during data transformation.

>>> I'm wondering what people will use it for.
>>
>> Mentioned above.
>
> I don't think this answers Andy's question. There is a distinction between what it can be used for and what people will deploy with it in practice, given the "encrypted-keys"-like exposure. Clarify the end-user benefit that motivates the kernel to carry this support.

The end user of this series benefits from key protection during data transformation, with performance comparable to AES-NI.

Thanks,
Chang
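To make the "hardware randomization option" concrete: LOADIWKEY takes a control word in EAX whose KeySource field can request a CPU-generated random wrapping key instead of the register-supplied one. Below is a kernel-context (CPL 0) sketch using the compiler intrinsic for readability rather than the asm the actual series uses; the helper name is invented, and the bit definitions follow my reading of the Key Locker spec, so treat them as assumptions to verify against the SDM.

```c
#include <immintrin.h>	/* _mm_loadiwkey(); requires -mkl */

#define IWKEY_NOBACKUP	(1u << 0)	/* EAX[0]: forbid backup to platform storage */
#define IWKEY_RANDOM	(1u << 1)	/* EAX[4:1] = 1: hardware-generated random key */

/*
 * Hypothetical helper, CPL 0 only: ask the CPU to generate the wrapping
 * key itself so that no copy of it ever exists in memory.  The register
 * operands are zeroed; with the random KeySource the CPU supplies the key.
 * A real implementation must also check ZF afterwards (set if the hardware
 * randomness was not ready) and retry, which the intrinsic cannot express.
 */
static void load_random_iwkey(void)
{
	__m128i zero = _mm_setzero_si128();

	_mm_loadiwkey(IWKEY_RANDOM, zero, zero, zero);
}
```

Distributing the same randomized key to every CPU would then go through the platform backup/restore machinery (the copy-to-platform / copy-from-platform MSRs) rather than through memory.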
On 5/17/21 11:21 AM, Bae, Chang Seok wrote:
> On May 15, 2021, at 11:01, Andy Lutomirski <luto@kernel.org> wrote:
>> On 5/14/21 1:14 PM, Chang S. Bae wrote:
>>> Key Locker [1][2] is a new security feature available in new Intel CPUs to protect data encryption keys for the Advanced Encryption Standard algorithm. The protection limits the amount of time an AES key is exposed in memory by sealing a key and referencing it with new AES instructions.
>>>
>>> The new AES instruction set is a successor of Intel's AES-NI (AES New Instructions). Users may switch to the Key Locker version from crypto libraries. This series includes a new AES implementation for the Crypto API, which was validated through the crypto unit tests. The performance in the test cases was measured and found comparable to the AES-NI version.
>>>
>>> Key Locker introduces a (CPU-)internal key to encode AES keys. The kernel needs to load it and ensure it is unchanged as long as CPUs are operational.
>>
>> I have high-level questions:
>>
>> What is the expected use case?
>
> The wrapping key here is only used for the new AES instructions.
>
> I'm aware of their potential use cases for encrypting file systems or disks.

I would like to understand what people are actually going to do with this. Give me a user story or two, please. If it turns out to be useless, I would rather not merge it.

>> I certainly understand how KL is valuable in a context where a verified boot process installs some KL keys that are not subsequently accessible outside the KL ISA, but Linux does not really work like this.
>
> Do you mind elaborating on the concern? I'm trying to understand whether there is any issue with PATCH3 [1], specifically.

My concern has nothing to do with your patches per se. I want to understand the entire workflow that makes Key Locker safer than not using Key Locker. Something like:

Step 1: Computer is powered on.
Step 2: Boot loader loads Linux.
Step 3: Linux does such-and-such.
Step 4: Attacker compromises the computer in the following way.

An explanation of why this is realistic and how Key Locker helps would be nice.

>> What is the expected interaction between a KL-using VM guest and the host VMM? Will there be performance impacts (to context switching, for example) if a guest enables KL, even if the guest does not subsequently do anything with it? Should Linux actually enable KL if it detects that it's a VM guest? Should Linux use a specific keying method as a guest?
>
> First of all, there is an RFC series for KVM [2].
>
> Each CPU has one internal key state, so it needs to be reloaded between guest and host if both are enabled. The proposed approach enables it exclusively: expose it to guests only when it is disabled in the host. Then, I guess, a guest may enable it.

I read that series. This is not a good solution.

I can think of at least a few reasonable ways that a host and a guest can cooperate to, potentially, make KL useful.

a) Host knows that the guest will never migrate, and guest delegates IWKEY management to the host. The host generates a random key and does not permit the guest to use LOADIWKEY. The guest shares the random key with the host. Of course, this means that a host key handle that leaks to a guest can be used within the guest.

b) Host may migrate the guest. Guest delegates IWKEY management to the host, and the host generates and remembers a key for the guest. On migration, the host forwards the key to the new host. The host can still internally use any type of key, but context switches may be quite slow.

c) Guest wants to manage its own non-random key. Host lets it and context switches it.

d) Guest does not need KL and leaves CR4.KL clear. Host does whatever it wants with no overhead.

All of these have tradeoffs.

My current thought is that, if Linux is going to support Key Locker, then this all needs to be explicitly controlled. On initial boot, Linux should not initialize Key Locker. Upon explicit administrator request (via sysfs?), Linux will initialize Key Locker in the mode requested by the administrator. Modes could include:

native_random_key: Use a random key per the ISA.

native_kernel_key_remember: Use a random key but load it as a non-random key. Remember the key in kernel memory and use it for S3 resume, etc.

native_kernel_key_backup: Use a random key, put it in the backup storage, and forget it. Use the backup for resume, etc.

native_kernel_key_norestore: Use a random key. The key is lost on any power transition that forgets the key. Backup is not used.

paravirt_any: Ask the hypervisor to handle keying. Any mechanism is acceptable.

paravirt_random: Ask the hypervisor for a random key. Only succeeds if we get an actual random key.

Does this make sense?
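To make the proposal above concrete, here is a purely hypothetical sketch of what such a one-shot, administrator-driven sysfs knob could look like. Only the mode names come from the email; the attribute name, helpers, and semantics are invented for illustration and are not part of the posted series.

```c
#include <linux/kobject.h>
#include <linux/string.h>
#include <linux/sysfs.h>

/* Mode names taken from the proposal above; everything else is invented. */
static const char * const iwkey_modes[] = {
	"native_random_key",
	"native_kernel_key_remember",
	"native_kernel_key_backup",
	"native_kernel_key_norestore",
	"paravirt_any",
	"paravirt_random",
};

static int iwkey_mode = -1;	/* Key Locker left uninitialized at boot */

static ssize_t iwkey_mode_store(struct kobject *kobj,
				struct kobj_attribute *attr,
				const char *buf, size_t count)
{
	int mode = sysfs_match_string(iwkey_modes, buf);

	if (mode < 0)
		return mode;
	if (iwkey_mode >= 0)
		return -EBUSY;	/* the mode can only be chosen once */

	/* ... load the wrapping key on every CPU per the chosen mode ... */
	iwkey_mode = mode;
	return count;
}
```

The elided body is where the chosen mode would drive LOADIWKEY, the backup MSRs, or a paravirt call, per the mode descriptions in the email.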
On Tue, May 18, 2021, Andy Lutomirski wrote:
> On 5/17/21 11:21 AM, Bae, Chang Seok wrote:
>> First of all, there is an RFC series for KVM [2].
>>
>> Each CPU has one internal key state, so it needs to be reloaded between guest and host if both are enabled. The proposed approach enables it exclusively: expose it to guests only when it is disabled in the host. Then, I guess, a guest may enable it.
>
> I read that series. This is not a good solution.
>
> I can think of at least a few reasonable ways that a host and a guest can cooperate to, potentially, make KL useful.
>
> a) Host knows that the guest will never migrate, and guest delegates IWKEY management to the host. The host generates a random key and does not permit the guest to use LOADIWKEY. The guest shares the random key with the host. Of course, this means that a host key handle that leaks to a guest can be used within the guest.

If the guest and host share a random key, then they also share the key handle. And that handle+key would also need to be shared across all guests. I doubt this option is acceptable on the security front.

Using multiple random keys is a non-starter because they can't be restored via LOADIWKEY.

Using multiple software-defined keys will have moderate overhead because of the possibility of using KL from soft IRQ context, i.e. KVM would have to do LOADIWKEY on every VM-Enter _and_ VM-Exit. It sounds like LOADIWKEY has latency similar to WRMSR, so it's not a deal-breaker, but the added latency on top of the restrictions on how the host can use KL certainly lessens the appeal.

> b) Host may migrate the guest. Guest delegates IWKEY management to the host, and the host generates and remembers a key for the guest. On migration, the host forwards the key to the new host. The host can still internally use any type of key, but context switches may be quite slow.

Migrating is sketchy because the IWKEY has to be exposed to host userspace. But I think the migration aspect is a secondary discussion.

> c) Guest wants to manage its own non-random key. Host lets it and context switches it.

This is essentially a variant of (b). In both cases, the host has full control over the guest's key.

> d) Guest does not need KL and leaves CR4.KL clear. Host does whatever it wants with no overhead.
>
> All of these have tradeoffs.
>
> My current thought is that, if Linux is going to support Key Locker, then this all needs to be explicitly controlled. On initial boot, Linux should not initialize Key Locker. Upon explicit administrator request (via sysfs?), Linux will initialize Key Locker in the mode requested by the administrator.

Deferring KL usage to post-boot can work, but KVM shouldn't be allowed to expose KL to a guest until KL has been explicitly configured in the host. If KVM can spawn KL guests before the host is configured, the sysfs knob would have to deal with the case where the desired configuration is incompatible with exposing KL to a guest.

> Modes could include:
>
> native_random_key: Use a random key per the ISA.
>
> native_kernel_key_remember: Use a random key but load it as a non-random key. Remember the key in kernel memory and use it for S3 resume, etc.

What would be the motivation for this mode? It largely defeats the value proposition of KL, no?

> native_kernel_key_backup: Use a random key, put it in the backup storage, and forget it. Use the backup for resume, etc.
>
> native_kernel_key_norestore: Use a random key. The key is lost on any power transition that forgets the key. Backup is not used.
>
> paravirt_any: Ask the hypervisor to handle keying. Any mechanism is acceptable.
>
> paravirt_random: Ask the hypervisor for a random key. Only succeeds if we get an actual random key.

AFAIK, there's no way for the guest to verify that it got a truly random key. Hell, the guest can't even easily verify that KL is even supported. The host can lie about CPUID and CR4.KL, and intercept all KL instructions via #UD by running the guest with CR4.KL=0.

I also don't see any reason to define a paravirt interface for a truly random key. Using a random key all but requires a single guest to have exclusive access to KL, and in that case the host can simply expose KL to only that guest.

> Does this make sense?

I really want to see concrete guest use cases before we start adding paravirt interfaces.
On 5/18/21 10:52 AM, Sean Christopherson wrote:
> On Tue, May 18, 2021, Andy Lutomirski wrote:
>> On 5/17/21 11:21 AM, Bae, Chang Seok wrote:
>>> First of all, there is an RFC series for KVM [2].
>>>
>>> Each CPU has one internal key state, so it needs to be reloaded between guest and host if both are enabled. The proposed approach enables it exclusively: expose it to guests only when it is disabled in the host. Then, I guess, a guest may enable it.
>>
>> I read that series. This is not a good solution.
>>
>> I can think of at least a few reasonable ways that a host and a guest can cooperate to, potentially, make KL useful.
>>
>> a) Host knows that the guest will never migrate, and guest delegates IWKEY management to the host. The host generates a random key and does not permit the guest to use LOADIWKEY. The guest shares the random key with the host. Of course, this means that a host key handle that leaks to a guest can be used within the guest.
>
> If the guest and host share a random key, then they also share the key handle. And that handle+key would also need to be shared across all guests. I doubt this option is acceptable on the security front.

Indeed. Oddly, SGX has the exact same problem for any scenario in which SGX is used for HSM-like functionality, and people still use SGX.

However, I suspect that there will be use cases in which exactly one VM is permitted to use KL. Qubes might want that (any Qubes people around?)

> Using multiple random keys is a non-starter because they can't be restored via LOADIWKEY.
>
> Using multiple software-defined keys will have moderate overhead because of the possibility of using KL from soft IRQ context, i.e. KVM would have to do LOADIWKEY on every VM-Enter _and_ VM-Exit. It sounds like LOADIWKEY has latency similar to WRMSR, so it's not a deal-breaker, but the added latency on top of the restrictions on how the host can use KL certainly lessens the appeal.

Indeed. This stinks.

>> b) Host may migrate the guest. Guest delegates IWKEY management to the host, and the host generates and remembers a key for the guest. On migration, the host forwards the key to the new host. The host can still internally use any type of key, but context switches may be quite slow.
>
> Migrating is sketchy because the IWKEY has to be exposed to host userspace. But I think the migration aspect is a secondary discussion.
>
>> c) Guest wants to manage its own non-random key. Host lets it and context switches it.
>
> This is essentially a variant of (b). In both cases, the host has full control over the guest's key.
>
>> d) Guest does not need KL and leaves CR4.KL clear. Host does whatever it wants with no overhead.
>>
>> All of these have tradeoffs.
>>
>> My current thought is that, if Linux is going to support Key Locker, then this all needs to be explicitly controlled. On initial boot, Linux should not initialize Key Locker. Upon explicit administrator request (via sysfs?), Linux will initialize Key Locker in the mode requested by the administrator.
>
> Deferring KL usage to post-boot can work, but KVM shouldn't be allowed to expose KL to a guest until KL has been explicitly configured in the host. If KVM can spawn KL guests before the host is configured, the sysfs knob would have to deal with the case where the desired configuration is incompatible with exposing KL to a guest.

There could be a host configuration "guest_only", perhaps.

>> Modes could include:
>>
>> native_random_key: Use a random key per the ISA.
>>
>> native_kernel_key_remember: Use a random key but load it as a non-random key. Remember the key in kernel memory and use it for S3 resume, etc.
>
> What would be the motivation for this mode? It largely defeats the value proposition of KL, no?

It lets userspace use KL with some degree of security.

>> native_kernel_key_backup: Use a random key, put it in the backup storage, and forget it. Use the backup for resume, etc.
>>
>> native_kernel_key_norestore: Use a random key. The key is lost on any power transition that forgets the key. Backup is not used.
>>
>> paravirt_any: Ask the hypervisor to handle keying. Any mechanism is acceptable.
>>
>> paravirt_random: Ask the hypervisor for a random key. Only succeeds if we get an actual random key.
>
> AFAIK, there's no way for the guest to verify that it got a truly random key. Hell, the guest can't even easily verify that KL is even supported. The host can lie about CPUID and CR4.KL, and intercept all KL instructions via #UD by running the guest with CR4.KL=0.

The guest can use TDX. Oh wait, TDX doesn't support KL.

That being said, a host attack on the guest of this sort would be quite slow.

> I also don't see any reason to define a paravirt interface for a truly random key. Using a random key all but requires a single guest to have exclusive access to KL, and in that case the host can simply expose KL to only that guest.
>
>> Does this make sense?
>
> I really want to see concrete guest use cases before we start adding paravirt interfaces.

I want to see concrete guest use cases before we start adding *any* guest support. And this cuts both ways: I think that, until the guest use cases are at least somewhat worked out, Linux should certainly not initialize KL by default on boot if the CPUID hypervisor bit is set.
On Wed, May 19, 2021, Andy Lutomirski wrote:
> On 5/18/21 10:52 AM, Sean Christopherson wrote:
>> On Tue, May 18, 2021, Andy Lutomirski wrote:
>>> On 5/17/21 11:21 AM, Bae, Chang Seok wrote:
>>>> First of all, there is an RFC series for KVM [2].
>>>>
>>>> Each CPU has one internal key state, so it needs to be reloaded between guest and host if both are enabled. The proposed approach enables it exclusively: expose it to guests only when it is disabled in the host. Then, I guess, a guest may enable it.
>>>
>>> I read that series. This is not a good solution.
>>>
>>> I can think of at least a few reasonable ways that a host and a guest can cooperate to, potentially, make KL useful.
>>>
>>> a) Host knows that the guest will never migrate, and guest delegates IWKEY management to the host. The host generates a random key and does not permit the guest to use LOADIWKEY. The guest shares the random key with the host. Of course, this means that a host key handle that leaks to a guest can be used within the guest.
>>
>> If the guest and host share a random key, then they also share the key handle. And that handle+key would also need to be shared across all guests. I doubt this option is acceptable on the security front.
>
> Indeed. Oddly, SGX has the exact same problem for any scenario in which SGX is used for HSM-like functionality, and people still use SGX.

The entire PRM/EPC shares a single key, but SGX doesn't rely on encryption to isolate enclaves from other software, including other enclaves. E.g. Intel could ship a CPU with the EPC backed entirely by on-die cache and avoid hardware encryption entirely.

> However, I suspect that there will be use cases in which exactly one VM is permitted to use KL. Qubes might want that (any Qubes people around?)
On Wed, May 19, 2021, Sean Christopherson wrote:
> On Wed, May 19, 2021, Andy Lutomirski wrote:
>> On 5/18/21 10:52 AM, Sean Christopherson wrote:
>>> On Tue, May 18, 2021, Andy Lutomirski wrote:
>>>> On 5/17/21 11:21 AM, Bae, Chang Seok wrote:
>>>>> First of all, there is an RFC series for KVM [2].
>>>>>
>>>>> Each CPU has one internal key state, so it needs to be reloaded between guest and host if both are enabled. The proposed approach enables it exclusively: expose it to guests only when it is disabled in the host. Then, I guess, a guest may enable it.
>>>>
>>>> I read that series. This is not a good solution.
>>>>
>>>> I can think of at least a few reasonable ways that a host and a guest can cooperate to, potentially, make KL useful.
>>>>
>>>> a) Host knows that the guest will never migrate, and guest delegates IWKEY management to the host. The host generates a random key and does not permit the guest to use LOADIWKEY. The guest shares the random key with the host. Of course, this means that a host key handle that leaks to a guest can be used within the guest.
>>>
>>> If the guest and host share a random key, then they also share the key handle. And that handle+key would also need to be shared across all guests. I doubt this option is acceptable on the security front.
>>
>> Indeed. Oddly, SGX has the exact same problem for any scenario in which SGX is used for HSM-like functionality, and people still use SGX.
>
> The entire PRM/EPC shares a single key, but SGX doesn't rely on encryption to isolate enclaves from other software, including other enclaves. E.g. Intel could ship a CPU with the EPC backed entirely by on-die cache and avoid hardware encryption entirely.

Ha! I belatedly see your point: in the end, virtualized KL would also rely on a trusted entity to isolate its sensitive data via paging-like mechanisms.

The difference in my mind is that encryption is a means to an end for SGX, whereas hiding the key is the entire point of KL. E.g. the guest is already relying on the VMM to isolate its code and data, so adding KL doesn't change that. Sharing an IWKEY across multiple guests would add intra-VM protection, at the cost of making cross-VM attacks easier to some degree.
On May 18, 2021, at 10:10, Andy Lutomirski <luto@kernel.org> wrote:
> On 5/17/21 11:21 AM, Bae, Chang Seok wrote:
>> On May 15, 2021, at 11:01, Andy Lutomirski <luto@kernel.org> wrote:
>>>
>>> I have high-level questions:
>>>
>>> What is the expected use case?
>>
>> The wrapping key here is only used for the new AES instructions.
>>
>> I'm aware of their potential use cases for encrypting file systems or disks.
>
> I would like to understand what people are actually going to do with this. Give me a user story or two, please. If it turns out to be useless, I would rather not merge it.

Hi Andy,

V3 was posted here, with both the cover letter and the code changes updated to address this:
https://lore.kernel.org/lkml/20211124200700.15888-1-chang.seok.bae@intel.com/

I would appreciate it if you could comment on the use case at least.

Thanks,
Chang