Message ID: 20220519153713.819591-1-chao.p.peng@linux.intel.com
Series: KVM: mm: fd-based approach for supporting KVM guest private memory
> Private memory map/unmap and conversion
> ---------------------------------------
> Userspace's map/unmap operations are done by fallocate() ioctl on the
> backing store fd.
>   - map: default fallocate() with mode=0.
>   - unmap: fallocate() with FALLOC_FL_PUNCH_HOLE.
> The map/unmap will trigger the above memfile_notifier_ops to let KVM
> map/unmap secondary MMU page tables.
> ....
> QEMU: https://github.com/chao-p/qemu/tree/privmem-v6
>
> An example QEMU command line for TDX test:
> -object tdx-guest,id=tdx \
> -object memory-backend-memfd-private,id=ram1,size=2G \
> -machine q35,kvm-type=tdx,pic=no,kernel_irqchip=split,memory-encryption=tdx,memory-backend=ram1

There should be more discussion around double allocation scenarios when
using the private fd approach. A malicious guest or a buggy userspace
VMM can cause physical memory to be allocated for both the shared fd
(memory accessible from the host) and the private fd backing the same
guest memory range.
The userspace VMM will need to unback the shared guest memory while
handling the conversion from shared to private in order to prevent
double allocation, even with malicious guests or bugs in the userspace
VMM.

Options to unback shared guest memory seem to be:
1) madvise(..., MADV_DONTNEED/MADV_REMOVE) - This option won't stop the
   kernel from backing the shared memory again on subsequent write
   accesses.
2) fallocate(..., FALLOC_FL_PUNCH_HOLE, ...) - For file-backed shared
   guest memory, this option is still similar to madvise() since it
   would still allow the shared memory to get backed again on write
   accesses.
3) munmap() - This would give away the contiguous virtual memory region
   reservation, leaving holes in the guest backing memory, which might
   make guest memory management difficult.
4) mprotect(..., PROT_NONE) - This would keep the virtual address range
   backing the guest memory preserved.

(A rough sketch of these options follows this mail.)

ram_block_discard_range_fd from the reference implementation
(https://github.com/chao-p/qemu/tree/privmem-v6) seems to be relying on
fallocate()/madvise().

Any thoughts/suggestions around better ways to unback the shared memory
in order to avoid double allocation scenarios?

Regards,
Vishal
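For illustration, here is a minimal sketch of the four unbacking options
above as one hypothetical conversion helper. The names shared_fd and
shared_hva are made up (the shared backing fd and its host mapping), and
error handling is omitted; the caveats in the comments restate the ones
in the list above.

#define _GNU_SOURCE
#include <fcntl.h>      /* fallocate(), FALLOC_FL_* */
#include <stddef.h>
#include <sys/mman.h>   /* madvise(), munmap(), mprotect() */

enum unback_how {
        UNBACK_MADVISE,         /* option 1 */
        UNBACK_PUNCH_HOLE,      /* option 2 */
        UNBACK_MUNMAP,          /* option 3 */
        UNBACK_PROT_NONE,       /* option 4 */
};

/* Unback one page-aligned range of the shared backing during a
 * shared->private conversion; returns 0 on success. */
static int unback_shared(enum unback_how how, int shared_fd,
                         void *shared_hva, off_t offset, size_t len)
{
        switch (how) {
        case UNBACK_MADVISE:
                /* Frees the pages now, but a later write fault through
                 * any mapping silently re-allocates them. */
                return madvise(shared_hva, len, MADV_REMOVE);
        case UNBACK_PUNCH_HOLE:
                /* Same caveat: the hole is re-filled on the next write
                 * through any existing mapping. */
                return fallocate(shared_fd,
                                 FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                                 offset, len);
        case UNBACK_MUNMAP:
                /* Frees memory and blocks access, but gives up the
                 * contiguous VA reservation and fragments VMAs. */
                return munmap(shared_hva, len);
        case UNBACK_PROT_NONE:
                /* Keeps the VA reservation and blocks access, but does
                 * not by itself free the already-allocated pages. */
                return mprotect(shared_hva, len, PROT_NONE);
        }
        return -1;
}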
On Mon, Jun 06, 2022 at 01:09:50PM -0700, Vishal Annapurve wrote:
> > Private memory map/unmap and conversion
> > ---------------------------------------
> > Userspace's map/unmap operations are done by fallocate() ioctl on the
> > backing store fd.
> >   - map: default fallocate() with mode=0.
> >   - unmap: fallocate() with FALLOC_FL_PUNCH_HOLE.
> > The map/unmap will trigger the above memfile_notifier_ops to let KVM
> > map/unmap secondary MMU page tables.
> > ....
> > QEMU: https://github.com/chao-p/qemu/tree/privmem-v6
> >
> > An example QEMU command line for TDX test:
> > -object tdx-guest,id=tdx \
> > -object memory-backend-memfd-private,id=ram1,size=2G \
> > -machine q35,kvm-type=tdx,pic=no,kernel_irqchip=split,memory-encryption=tdx,memory-backend=ram1
>
> There should be more discussion around double allocation scenarios
> when using the private fd approach. A malicious guest or a buggy
> userspace VMM can cause physical memory to be allocated for both the
> shared fd (memory accessible from the host) and the private fd backing
> the same guest memory range.
> The userspace VMM will need to unback the shared guest memory while
> handling the conversion from shared to private in order to prevent
> double allocation, even with malicious guests or bugs in the userspace
> VMM.

I don't know how a malicious guest can cause that. The initial design of
this series is to put the private/shared memory into two different
address spaces and give the userspace VMM the flexibility to convert
between the two. It can choose to respect the guest conversion request
or not.

It's possible for a userspace VMM to cause double allocation if it fails
to call the unback operation during the conversion; this may be a bug or
not. Double allocation may not be a wrong thing, even conceptually. At
least TDX allows you to use half shared, half private in a guest,
meaning both shared and private can be effective. Unbacking the memory
is just the current QEMU implementation choice.

Chao

> Options to unback shared guest memory seem to be:
> 1) madvise(..., MADV_DONTNEED/MADV_REMOVE) - This option won't stop
>    the kernel from backing the shared memory again on subsequent write
>    accesses.
> 2) fallocate(..., FALLOC_FL_PUNCH_HOLE, ...) - For file-backed shared
>    guest memory, this option is still similar to madvise() since it
>    would still allow the shared memory to get backed again on write
>    accesses.
> 3) munmap() - This would give away the contiguous virtual memory
>    region reservation, leaving holes in the guest backing memory,
>    which might make guest memory management difficult.
> 4) mprotect(..., PROT_NONE) - This would keep the virtual address
>    range backing the guest memory preserved.
>
> ram_block_discard_range_fd from the reference implementation
> (https://github.com/chao-p/qemu/tree/privmem-v6) seems to be relying
> on fallocate()/madvise().
>
> Any thoughts/suggestions around better ways to unback the shared
> memory in order to avoid double allocation scenarios?
>
> Regards,
> Vishal
On Tue, Jun 7, 2022 at 12:01 AM Chao Peng <chao.p.peng@linux.intel.com> wrote:
>
> On Mon, Jun 06, 2022 at 01:09:50PM -0700, Vishal Annapurve wrote:
> > > Private memory map/unmap and conversion
> > > ---------------------------------------
> > > Userspace's map/unmap operations are done by fallocate() ioctl on
> > > the backing store fd.
> > >   - map: default fallocate() with mode=0.
> > >   - unmap: fallocate() with FALLOC_FL_PUNCH_HOLE.
> > > The map/unmap will trigger the above memfile_notifier_ops to let
> > > KVM map/unmap secondary MMU page tables.
> > > ....
> > > QEMU: https://github.com/chao-p/qemu/tree/privmem-v6
> > >
> > > An example QEMU command line for TDX test:
> > > -object tdx-guest,id=tdx \
> > > -object memory-backend-memfd-private,id=ram1,size=2G \
> > > -machine q35,kvm-type=tdx,pic=no,kernel_irqchip=split,memory-encryption=tdx,memory-backend=ram1
> >
> > There should be more discussion around double allocation scenarios
> > when using the private fd approach. A malicious guest or a buggy
> > userspace VMM can cause physical memory to be allocated for both the
> > shared fd (memory accessible from the host) and the private fd
> > backing the same guest memory range.
> > The userspace VMM will need to unback the shared guest memory while
> > handling the conversion from shared to private in order to prevent
> > double allocation, even with malicious guests or bugs in the
> > userspace VMM.
>
> I don't know how a malicious guest can cause that. The initial design
> of this series is to put the private/shared memory into two different
> address spaces and give the userspace VMM the flexibility to convert
> between the two. It can choose to respect the guest conversion request
> or not.

For example, the guest could maliciously give a device driver a private
page so that a host-side virtual device will blindly write the private
page.

> It's possible for a userspace VMM to cause double allocation if it
> fails to call the unback operation during the conversion; this may be
> a bug or not. Double allocation may not be a wrong thing, even
> conceptually. At least TDX allows you to use half shared, half private
> in a guest, meaning both shared and private can be effective.
> Unbacking the memory is just the current QEMU implementation choice.

Right. But the idea is that this patch series should accommodate all of
the CVM architectures. Or at least that's what I know was envisioned
last time we discussed this topic for SNP [*].

Regardless, it's important to ensure that the VM respects its memory
budget. For example, within Google, we run VMs inside of containers. So
if we double allocate we're going to OOM. This seems acceptable for an
early version of CVMs. But ultimately, I think we need a more robust
way to ensure that the VM operates within its memory container.
Otherwise, the OOM is going to be hard to diagnose and distinguish from
a real OOM.

[*] https://lore.kernel.org/all/20210820155918.7518-1-brijesh.singh@amd.com/

> Chao
>
> > Options to unback shared guest memory seem to be:
> > 1) madvise(..., MADV_DONTNEED/MADV_REMOVE) - This option won't stop
> >    the kernel from backing the shared memory again on subsequent
> >    write accesses.
> > 2) fallocate(..., FALLOC_FL_PUNCH_HOLE, ...) - For file-backed
> >    shared guest memory, this option is still similar to madvise()
> >    since it would still allow the shared memory to get backed again
> >    on write accesses.
> > 3) munmap() - This would give away the contiguous virtual memory
> >    region reservation, leaving holes in the guest backing memory,
> >    which might make guest memory management difficult.
> > 4) mprotect(..., PROT_NONE) - This would keep the virtual address
> >    range backing the guest memory preserved.
> >
> > ram_block_discard_range_fd from the reference implementation
> > (https://github.com/chao-p/qemu/tree/privmem-v6) seems to be relying
> > on fallocate()/madvise().
> >
> > Any thoughts/suggestions around better ways to unback the shared
> > memory in order to avoid double allocation scenarios?

I agree with Vishal. I think this patch set is making great progress.
But the double allocation scenario seems like a high-level design issue
that warrants more discussion.
On Tue, Jun 07, 2022 at 05:55:46PM -0700, Marc Orr wrote:
> On Tue, Jun 7, 2022 at 12:01 AM Chao Peng <chao.p.peng@linux.intel.com> wrote:
> >
> > On Mon, Jun 06, 2022 at 01:09:50PM -0700, Vishal Annapurve wrote:
> > > > Private memory map/unmap and conversion
> > > > ---------------------------------------
> > > > Userspace's map/unmap operations are done by fallocate() ioctl
> > > > on the backing store fd.
> > > >   - map: default fallocate() with mode=0.
> > > >   - unmap: fallocate() with FALLOC_FL_PUNCH_HOLE.
> > > > The map/unmap will trigger the above memfile_notifier_ops to let
> > > > KVM map/unmap secondary MMU page tables.
> > > > ....
> > > > QEMU: https://github.com/chao-p/qemu/tree/privmem-v6
> > > >
> > > > An example QEMU command line for TDX test:
> > > > -object tdx-guest,id=tdx \
> > > > -object memory-backend-memfd-private,id=ram1,size=2G \
> > > > -machine q35,kvm-type=tdx,pic=no,kernel_irqchip=split,memory-encryption=tdx,memory-backend=ram1
> > >
> > > There should be more discussion around double allocation scenarios
> > > when using the private fd approach. A malicious guest or a buggy
> > > userspace VMM can cause physical memory to be allocated for both
> > > the shared fd (memory accessible from the host) and the private fd
> > > backing the same guest memory range.
> > > The userspace VMM will need to unback the shared guest memory
> > > while handling the conversion from shared to private in order to
> > > prevent double allocation, even with malicious guests or bugs in
> > > the userspace VMM.
> >
> > I don't know how a malicious guest can cause that. The initial
> > design of this series is to put the private/shared memory into two
> > different address spaces and give the userspace VMM the flexibility
> > to convert between the two. It can choose to respect the guest
> > conversion request or not.
>
> For example, the guest could maliciously give a device driver a
> private page so that a host-side virtual device will blindly write the
> private page.

With this patch series, it's actually not even possible for the
userspace VMM to allocate a private page by a direct write; it's
basically unmapped from there. If it really wants to, it should do
something special, by intention, and that's basically the conversion,
which we should allow.

> > It's possible for a userspace VMM to cause double allocation if it
> > fails to call the unback operation during the conversion; this may
> > be a bug or not. Double allocation may not be a wrong thing, even
> > conceptually. At least TDX allows you to use half shared, half
> > private in a guest, meaning both shared and private can be
> > effective. Unbacking the memory is just the current QEMU
> > implementation choice.
>
> Right. But the idea is that this patch series should accommodate all
> of the CVM architectures. Or at least that's what I know was
> envisioned last time we discussed this topic for SNP [*].

AFAICS, this series should work for both TDX and SNP, and other CVM
architectures. I don't see where TDX can work but SNP cannot, or have I
missed something here?

> Regardless, it's important to ensure that the VM respects its memory
> budget. For example, within Google, we run VMs inside of containers.
> So if we double allocate we're going to OOM. This seems acceptable for
> an early version of CVMs. But ultimately, I think we need a more
> robust way to ensure that the VM operates within its memory container.
> Otherwise, the OOM is going to be hard to diagnose and distinguish
> from a real OOM.

Thanks for bringing this up. But in my mind I still think the userspace
VMM can do this, and it's its responsibility to guarantee that, if that
is a hard requirement. By design, the userspace VMM is the
decision-maker for page conversion and has all the necessary
information to know which page is shared/private. It also has the
necessary knobs to allocate/free the physical pages for guest memory.
Definitely, we should make the userspace VMM more robust.

Chao

> [*] https://lore.kernel.org/all/20210820155918.7518-1-brijesh.singh@amd.com/
>
> > Chao
> >
> > > Options to unback shared guest memory seem to be:
> > > 1) madvise(..., MADV_DONTNEED/MADV_REMOVE) - This option won't
> > >    stop the kernel from backing the shared memory again on
> > >    subsequent write accesses.
> > > 2) fallocate(..., FALLOC_FL_PUNCH_HOLE, ...) - For file-backed
> > >    shared guest memory, this option is still similar to madvise()
> > >    since it would still allow the shared memory to get backed
> > >    again on write accesses.
> > > 3) munmap() - This would give away the contiguous virtual memory
> > >    region reservation, leaving holes in the guest backing memory,
> > >    which might make guest memory management difficult.
> > > 4) mprotect(..., PROT_NONE) - This would keep the virtual address
> > >    range backing the guest memory preserved.
> > >
> > > ram_block_discard_range_fd from the reference implementation
> > > (https://github.com/chao-p/qemu/tree/privmem-v6) seems to be
> > > relying on fallocate()/madvise().
> > >
> > > Any thoughts/suggestions around better ways to unback the shared
> > > memory in order to avoid double allocation scenarios?
>
> I agree with Vishal. I think this patch set is making great progress.
> But the double allocation scenario seems like a high-level design
> issue that warrants more discussion.
...
> With this patch series, it's actually not even possible for the
> userspace VMM to allocate a private page by a direct write; it's
> basically unmapped from there. If it really wants to, it should do
> something special, by intention, and that's basically the conversion,
> which we should allow.

A VM can pass a GPA backed by private pages to the userspace VMM, and
when the userspace VMM accesses the backing hva there will be pages
allocated to back the shared fd, causing two sets of pages backing the
same guest memory range.

> Thanks for bringing this up. But in my mind I still think the
> userspace VMM can do this, and it's its responsibility to guarantee
> that, if that is a hard requirement. By design, the userspace VMM is
> the decision-maker for page conversion and has all the necessary
> information to know which page is shared/private. It also has the
> necessary knobs to allocate/free the physical pages for guest memory.
> Definitely, we should make the userspace VMM more robust.

Making the userspace VMM more robust to avoid double allocation can get
complex: it will have to keep track of all in-use (by the userspace
VMM) shared fd memory to disallow conversion from shared to private,
and will have to ensure that all guest-supplied addresses belong to
shared GPA ranges.

A coarser but simpler alternative could be to always allow
shared-to-private conversion, unbacking the memory from the shared fd,
and exit if the VMM runs into double allocation scenarios. In either
case, unbacking shared fd memory ideally should prevent memory
allocation on subsequent write accesses to ensure double allocation
scenarios are caught early.

Regards,
Vishal
On Wed, Jun 08, 2022, Vishal Annapurve wrote:
> ...
> > With this patch series, it's actually not even possible for the
> > userspace VMM to allocate a private page by a direct write; it's
> > basically unmapped from there. If it really wants to, it should do
> > something special, by intention, and that's basically the
> > conversion, which we should allow.
>
> A VM can pass a GPA backed by private pages to the userspace VMM, and
> when the userspace VMM accesses the backing hva there will be pages
> allocated to back the shared fd, causing two sets of pages backing the
> same guest memory range.
>
> > Thanks for bringing this up. But in my mind I still think the
> > userspace VMM can do this, and it's its responsibility to guarantee
> > that, if that is a hard requirement.

That was my initial reaction too, but there are unfortunate side
effects to punting this to userspace.

> > By design, the userspace VMM is the decision-maker for page
> > conversion and has all the necessary information to know which page
> > is shared/private. It also has the necessary knobs to allocate/free
> > the physical pages for guest memory. Definitely, we should make the
> > userspace VMM more robust.
>
> Making the userspace VMM more robust to avoid double allocation can
> get complex: it will have to keep track of all in-use (by the
> userspace VMM) shared fd memory to disallow conversion from shared to
> private, and will have to ensure that all guest-supplied addresses
> belong to shared GPA ranges.

IMO, the complexity argument isn't sufficient justification for
introducing new kernel functionality. If multiple processes are
accessing guest memory then there already needs to be some amount of
coordination, i.e. it can't be _that_ complex.

My concern with forcing userspace to fully handle unmapping shared
memory is that it may lead to additional performance overhead and/or
noisy neighbor issues, even if all guests are well-behaved.

Unmapping arbitrary ranges will fragment the virtual address space and
consume more memory for all the resulting VMAs. The extra memory
consumption isn't that big of a deal, and it will be self-healing to
some extent as VMAs will get merged when the holes are filled back in
(if the guest converts back to shared), but it's still less than
desirable.

More concerning is having to take mmap_lock for write for every
conversion, which is very problematic for configurations where a single
userspace process maps memory belonging to multiple VMs. Unmapping and
remapping on every conversion will create a bottleneck, especially if a
VM has sub-optimal behavior and is converting pages at a high rate.

One argument is that userspace can simply rely on cgroups to detect
misbehaving guests, but (a) those types of OOMs will be a nightmare to
debug and (b) an OOM kill from the host is typically considered a
_host_ issue and will be treated as a missed SLO.

An idea for handling this in the kernel without too much complexity
would be to add F_SEAL_FAULT_ALLOCATIONS (terrible name) that would
prevent page faults from allocating pages, i.e. holes can only be
filled by an explicit fallocate(). Minor faults, e.g. due to NUMA
balancing stupidity, and major faults due to swap would still work, but
writes to previously unreserved/unallocated memory would get a SIGSEGV
even though the memory is mapped. That would allow the userspace VMM to
prevent unintentional allocations without having to coordinate
unmapping/remapping across multiple processes.
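To make the idea concrete, here is a minimal sketch of how a VMM might
use such a seal, assuming it existed. F_SEAL_FAULT_ALLOCATIONS and its
flag value are entirely hypothetical (this seal is only a proposal in
this thread); memfd_create() and F_ADD_SEALS are real, and error
handling is omitted.

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef F_SEAL_FAULT_ALLOCATIONS
#define F_SEAL_FAULT_ALLOCATIONS 0x0040 /* made-up value, not in the kernel */
#endif

static int create_shared_backing(size_t size)
{
        int fd = memfd_create("guest-shared", MFD_ALLOW_SEALING);

        ftruncate(fd, size);

        /* With the hypothetical seal applied, faults never allocate:
         * holes can only be filled by an explicit fallocate(), and a
         * stray write into a hole signals the writer instead of
         * silently allocating a page. */
        fcntl(fd, F_ADD_SEALS, F_SEAL_FAULT_ALLOCATIONS);

        /* Explicitly back the ranges the guest currently has shared. */
        fallocate(fd, 0, 0, size);
        return fd;
}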
On Tue, Jun 7, 2022 at 7:22 PM Chao Peng <chao.p.peng@linux.intel.com> wrote:
>
> On Tue, Jun 07, 2022 at 05:55:46PM -0700, Marc Orr wrote:
> > On Tue, Jun 7, 2022 at 12:01 AM Chao Peng <chao.p.peng@linux.intel.com> wrote:
> > >
> > > On Mon, Jun 06, 2022 at 01:09:50PM -0700, Vishal Annapurve wrote:
> > > > > Private memory map/unmap and conversion
> > > > > ---------------------------------------
> > > > > Userspace's map/unmap operations are done by fallocate() ioctl
> > > > > on the backing store fd.
> > > > >   - map: default fallocate() with mode=0.
> > > > >   - unmap: fallocate() with FALLOC_FL_PUNCH_HOLE.
> > > > > The map/unmap will trigger the above memfile_notifier_ops to
> > > > > let KVM map/unmap secondary MMU page tables.
> > > > > ....
> > > > > QEMU: https://github.com/chao-p/qemu/tree/privmem-v6
> > > > >
> > > > > An example QEMU command line for TDX test:
> > > > > -object tdx-guest,id=tdx \
> > > > > -object memory-backend-memfd-private,id=ram1,size=2G \
> > > > > -machine q35,kvm-type=tdx,pic=no,kernel_irqchip=split,memory-encryption=tdx,memory-backend=ram1
> > > >
> > > > There should be more discussion around double allocation
> > > > scenarios when using the private fd approach. A malicious guest
> > > > or a buggy userspace VMM can cause physical memory to be
> > > > allocated for both the shared fd (memory accessible from the
> > > > host) and the private fd backing the same guest memory range.
> > > > The userspace VMM will need to unback the shared guest memory
> > > > while handling the conversion from shared to private in order to
> > > > prevent double allocation, even with malicious guests or bugs in
> > > > the userspace VMM.
> > >
> > > I don't know how a malicious guest can cause that. The initial
> > > design of this series is to put the private/shared memory into two
> > > different address spaces and give the userspace VMM the
> > > flexibility to convert between the two. It can choose to respect
> > > the guest conversion request or not.
> >
> > For example, the guest could maliciously give a device driver a
> > private page so that a host-side virtual device will blindly write
> > the private page.
>
> With this patch series, it's actually not even possible for the
> userspace VMM to allocate a private page by a direct write; it's
> basically unmapped from there. If it really wants to, it should do
> something special, by intention, and that's basically the conversion,
> which we should allow.

I think Vishal did a better job to explain this scenario in his last
reply than I did.

> > > It's possible for a userspace VMM to cause double allocation if it
> > > fails to call the unback operation during the conversion; this may
> > > be a bug or not. Double allocation may not be a wrong thing, even
> > > conceptually. At least TDX allows you to use half shared, half
> > > private in a guest, meaning both shared and private can be
> > > effective. Unbacking the memory is just the current QEMU
> > > implementation choice.
> >
> > Right. But the idea is that this patch series should accommodate all
> > of the CVM architectures. Or at least that's what I know was
> > envisioned last time we discussed this topic for SNP [*].
>
> AFAICS, this series should work for both TDX and SNP, and other CVM
> architectures. I don't see where TDX can work but SNP cannot, or have
> I missed something here?

Agreed. I was just responding to the "At least TDX..." bit. Sorry for
any confusion.

> > Regardless, it's important to ensure that the VM respects its memory
> > budget. For example, within Google, we run VMs inside of containers.
> > So if we double allocate we're going to OOM. This seems acceptable
> > for an early version of CVMs. But ultimately, I think we need a more
> > robust way to ensure that the VM operates within its memory
> > container. Otherwise, the OOM is going to be hard to diagnose and
> > distinguish from a real OOM.
>
> Thanks for bringing this up. But in my mind I still think the
> userspace VMM can do this, and it's its responsibility to guarantee
> that, if that is a hard requirement. By design, the userspace VMM is
> the decision-maker for page conversion and has all the necessary
> information to know which page is shared/private. It also has the
> necessary knobs to allocate/free the physical pages for guest memory.
> Definitely, we should make the userspace VMM more robust.

Vishal and Sean did a better job to articulate the concern in their
most recent replies.
On Thu, Jun 09, 2022 at 08:29:06PM +0000, Sean Christopherson wrote:
> On Wed, Jun 08, 2022, Vishal Annapurve wrote:
> > ...
> > > With this patch series, it's actually not even possible for the
> > > userspace VMM to allocate a private page by a direct write; it's
> > > basically unmapped from there. If it really wants to, it should do
> > > something special, by intention, and that's basically the
> > > conversion, which we should allow.
> >
> > A VM can pass a GPA backed by private pages to the userspace VMM,
> > and when the userspace VMM accesses the backing hva there will be
> > pages allocated to back the shared fd, causing two sets of pages
> > backing the same guest memory range.
> >
> > > Thanks for bringing this up. But in my mind I still think the
> > > userspace VMM can do this, and it's its responsibility to
> > > guarantee that, if that is a hard requirement.
>
> That was my initial reaction too, but there are unfortunate side
> effects to punting this to userspace.
>
> > > By design, the userspace VMM is the decision-maker for page
> > > conversion and has all the necessary information to know which
> > > page is shared/private. It also has the necessary knobs to
> > > allocate/free the physical pages for guest memory. Definitely, we
> > > should make the userspace VMM more robust.
> >
> > Making the userspace VMM more robust to avoid double allocation can
> > get complex: it will have to keep track of all in-use (by the
> > userspace VMM) shared fd memory to disallow conversion from shared
> > to private, and will have to ensure that all guest-supplied
> > addresses belong to shared GPA ranges.
>
> IMO, the complexity argument isn't sufficient justification for
> introducing new kernel functionality. If multiple processes are
> accessing guest memory then there already needs to be some amount of
> coordination, i.e. it can't be _that_ complex.
>
> My concern with forcing userspace to fully handle unmapping shared
> memory is that it may lead to additional performance overhead and/or
> noisy neighbor issues, even if all guests are well-behaved.
>
> Unmapping arbitrary ranges will fragment the virtual address space and
> consume more memory for all the resulting VMAs. The extra memory
> consumption isn't that big of a deal, and it will be self-healing to
> some extent as VMAs will get merged when the holes are filled back in
> (if the guest converts back to shared), but it's still less than
> desirable.
>
> More concerning is having to take mmap_lock for write for every
> conversion, which is very problematic for configurations where a
> single userspace process maps memory belonging to multiple VMs.
> Unmapping and remapping on every conversion will create a bottleneck,
> especially if a VM has sub-optimal behavior and is converting pages at
> a high rate.
>
> One argument is that userspace can simply rely on cgroups to detect
> misbehaving guests, but (a) those types of OOMs will be a nightmare to
> debug and (b) an OOM kill from the host is typically considered a
> _host_ issue and will be treated as a missed SLO.
>
> An idea for handling this in the kernel without too much complexity
> would be to add F_SEAL_FAULT_ALLOCATIONS (terrible name) that would
> prevent page faults from allocating pages, i.e. holes can only be
> filled by an explicit fallocate(). Minor faults, e.g. due to NUMA
> balancing stupidity, and major faults due to swap would still work,
> but writes to previously unreserved/unallocated memory would get a
> SIGSEGV even though the memory is mapped. That would allow the
> userspace VMM to prevent unintentional allocations without having to
> coordinate unmapping/remapping across multiple processes.

Since this is mainly for shared memory and the motivation is catching
misbehaved access, can we use mprotect(PROT_NONE) for this? We can mark
those ranges backed by the private fd as PROT_NONE during the
conversion, so subsequent misbehaved accesses will be blocked instead
of causing double allocation silently.

Chao
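A sketch of that conversion-time toggle, for illustration only;
shared_hva is a hypothetical name for the host mapping of the range
being converted, and error handling is omitted:

#include <stddef.h>
#include <sys/mman.h>

/* Flip host access to the shared mapping as the range converts between
 * shared and private. Note the downside raised in Sean's reply below:
 * like munmap(), every mprotect() call splits VMAs and takes mmap_lock
 * for write. */
static int set_range_shared(void *shared_hva, size_t len, int shared)
{
        return mprotect(shared_hva, len,
                        shared ? (PROT_READ | PROT_WRITE) : PROT_NONE);
}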
On Tue, Jun 14, 2022 at 12:32 AM Chao Peng <chao.p.peng@linux.intel.com> wrote:
>
> On Thu, Jun 09, 2022 at 08:29:06PM +0000, Sean Christopherson wrote:
> > On Wed, Jun 08, 2022, Vishal Annapurve wrote:
> >
> > One argument is that userspace can simply rely on cgroups to detect
> > misbehaving guests, but (a) those types of OOMs will be a nightmare
> > to debug and (b) an OOM kill from the host is typically considered a
> > _host_ issue and will be treated as a missed SLO.
> >
> > An idea for handling this in the kernel without too much complexity
> > would be to add F_SEAL_FAULT_ALLOCATIONS (terrible name) that would
> > prevent page faults from allocating pages, i.e. holes can only be
> > filled by an explicit fallocate(). Minor faults, e.g. due to NUMA
> > balancing stupidity, and major faults due to swap would still work,
> > but writes to previously unreserved/unallocated memory would get a
> > SIGSEGV even though the memory is mapped. That would allow the
> > userspace VMM to prevent unintentional allocations without having to
> > coordinate unmapping/remapping across multiple processes.
>
> Since this is mainly for shared memory and the motivation is catching
> misbehaved access, can we use mprotect(PROT_NONE) for this? We can
> mark those ranges backed by the private fd as PROT_NONE during the
> conversion, so subsequent misbehaved accesses will be blocked instead
> of causing double allocation silently.

This patch series is fairly close to implementing a rather more
efficient solution. I'm not familiar enough with hypervisor userspace
to really know if this would work, but:

What if shared guest memory could also be file-backed, either in the
same fd or with a second fd covering the shared portion of a memslot?
This would allow changes to the backing store (punching holes, etc.) to
be done without mmap_lock or host-userspace TLB flushes. Depending on
what the guest is doing with its shared memory, userspace might need
the memory mapped or it might not.

--Andy
On Tue, Jun 14, 2022, Andy Lutomirski wrote:
> On Tue, Jun 14, 2022 at 12:32 AM Chao Peng <chao.p.peng@linux.intel.com> wrote:
> >
> > On Thu, Jun 09, 2022 at 08:29:06PM +0000, Sean Christopherson wrote:
> > > On Wed, Jun 08, 2022, Vishal Annapurve wrote:
> > >
> > > One argument is that userspace can simply rely on cgroups to
> > > detect misbehaving guests, but (a) those types of OOMs will be a
> > > nightmare to debug and (b) an OOM kill from the host is typically
> > > considered a _host_ issue and will be treated as a missed SLO.
> > >
> > > An idea for handling this in the kernel without too much
> > > complexity would be to add F_SEAL_FAULT_ALLOCATIONS (terrible
> > > name) that would prevent page faults from allocating pages, i.e.
> > > holes can only be filled by an explicit fallocate(). Minor faults,
> > > e.g. due to NUMA balancing stupidity, and major faults due to swap
> > > would still work, but writes to previously unreserved/unallocated
> > > memory would get a SIGSEGV even though the memory is mapped. That
> > > would allow the userspace VMM to prevent unintentional allocations
> > > without having to coordinate unmapping/remapping across multiple
> > > processes.
> >
> > Since this is mainly for shared memory and the motivation is
> > catching misbehaved access, can we use mprotect(PROT_NONE) for this?
> > We can mark those ranges backed by the private fd as PROT_NONE
> > during the conversion, so subsequent misbehaved accesses will be
> > blocked instead of causing double allocation silently.

PROT_NONE, a.k.a. mprotect(), has the same VMA downsides as munmap().

> This patch series is fairly close to implementing a rather more
> efficient solution. I'm not familiar enough with hypervisor userspace
> to really know if this would work, but:
>
> What if shared guest memory could also be file-backed, either in the
> same fd or with a second fd covering the shared portion of a memslot?
> This would allow changes to the backing store (punching holes, etc.)
> to be done without mmap_lock or host-userspace TLB flushes. Depending
> on what the guest is doing with its shared memory, userspace might
> need the memory mapped or it might not.

That's what I'm angling for with the F_SEAL_FAULT_ALLOCATIONS idea. The
issue, unless I'm misreading the code, is that punching a hole in the
shared memory backing store doesn't prevent reallocating that hole on
fault, i.e. a helper process that keeps a valid mapping of guest shared
memory can silently fill the hole.

What we're hoping to achieve is a way to prevent allocating memory
without a very explicit action from userspace, e.g. fallocate().
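The reallocation-on-fault behavior described here can be seen with a
plain memfd and no KVM involved; a small self-contained demonstration
(error handling omitted):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        size_t len = 4096;              /* one page */
        unsigned char vec;
        int fd = memfd_create("shared", 0);

        ftruncate(fd, len);
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
                       fd, 0);

        p[0] = 1;                       /* write fault allocates the page */
        fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, len);
        mincore(p, len, &vec);
        printf("resident after punch: %d\n", vec & 1);  /* 0: hole */

        p[0] = 1;                       /* fault silently refills the hole */
        mincore(p, len, &vec);
        printf("resident after write: %d\n", vec & 1);  /* 1: back again */
        return 0;
}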
On Tue, Jun 14, 2022 at 12:09 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Tue, Jun 14, 2022, Andy Lutomirski wrote:
> > On Tue, Jun 14, 2022 at 12:32 AM Chao Peng <chao.p.peng@linux.intel.com> wrote:
> > >
> > > On Thu, Jun 09, 2022 at 08:29:06PM +0000, Sean Christopherson wrote:
> > > > On Wed, Jun 08, 2022, Vishal Annapurve wrote:
> > > >
> > > > One argument is that userspace can simply rely on cgroups to
> > > > detect misbehaving guests, but (a) those types of OOMs will be a
> > > > nightmare to debug and (b) an OOM kill from the host is
> > > > typically considered a _host_ issue and will be treated as a
> > > > missed SLO.
> > > >
> > > > An idea for handling this in the kernel without too much
> > > > complexity would be to add F_SEAL_FAULT_ALLOCATIONS (terrible
> > > > name) that would prevent page faults from allocating pages, i.e.
> > > > holes can only be filled by an explicit fallocate(). Minor
> > > > faults, e.g. due to NUMA balancing stupidity, and major faults
> > > > due to swap would still work, but writes to previously
> > > > unreserved/unallocated memory would get a SIGSEGV even though
> > > > the memory is mapped. That would allow the userspace VMM to
> > > > prevent unintentional allocations without having to coordinate
> > > > unmapping/remapping across multiple processes.
> > >
> > > Since this is mainly for shared memory and the motivation is
> > > catching misbehaved access, can we use mprotect(PROT_NONE) for
> > > this? We can mark those ranges backed by the private fd as
> > > PROT_NONE during the conversion, so subsequent misbehaved accesses
> > > will be blocked instead of causing double allocation silently.
>
> PROT_NONE, a.k.a. mprotect(), has the same VMA downsides as munmap().
>
> > This patch series is fairly close to implementing a rather more
> > efficient solution. I'm not familiar enough with hypervisor
> > userspace to really know if this would work, but:
> >
> > What if shared guest memory could also be file-backed, either in the
> > same fd or with a second fd covering the shared portion of a
> > memslot? This would allow changes to the backing store (punching
> > holes, etc.) to be done without mmap_lock or host-userspace TLB
> > flushes. Depending on what the guest is doing with its shared
> > memory, userspace might need the memory mapped or it might not.
>
> That's what I'm angling for with the F_SEAL_FAULT_ALLOCATIONS idea.
> The issue, unless I'm misreading the code, is that punching a hole in
> the shared memory backing store doesn't prevent reallocating that hole
> on fault, i.e. a helper process that keeps a valid mapping of guest
> shared memory can silently fill the hole.
>
> What we're hoping to achieve is a way to prevent allocating memory
> without a very explicit action from userspace, e.g. fallocate().

Ah, I misunderstood. I thought your goal was to mmap it and prevent
page faults from allocating.

It is indeed the case (and has been since before quite a few of us were
born) that a hole in a sparse file is logically just a bunch of zeros.
A way to make a file for which a hole is an actual hole seems like it
would solve this problem nicely. It could also be solved more
specifically for KVM by making sure that the private/shared mode that
userspace programs is strict enough to prevent accidental allocations
-- if a GPA is definitively private, shared, neither, or (potentially,
on TDX only) both, then a page that *isn't* shared will never be
accidentally allocated by KVM. If the shared backing is not mmapped, it
also won't be accidentally allocated by host userspace on a stray or
careless write.

--Andy
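To illustrate the unmapped-shared-backing point: if the host only ever
touches guest shared memory through the fd, every allocation is an
explicit, auditable syscall rather than a side effect of a stray store.
A sketch under that assumption (shared_fd is a hypothetical name; note
an fd write into a hole still allocates, but only intentionally):

#include <sys/types.h>
#include <unistd.h>

/* Copy data into guest shared memory without any host-side mapping.
 * A careless store through a pointer cannot fault in backing pages
 * here because no mapping exists; only this explicit pwrite() can
 * allocate. */
static ssize_t copy_to_guest_shared(int shared_fd, off_t gpa_off,
                                    const void *buf, size_t len)
{
        return pwrite(shared_fd, buf, len, gpa_off);
}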
On Tue, Jun 14, 2022 at 01:59:41PM -0700, Andy Lutomirski wrote:
> On Tue, Jun 14, 2022 at 12:09 PM Sean Christopherson <seanjc@google.com> wrote:
> >
> > On Tue, Jun 14, 2022, Andy Lutomirski wrote:
> > > On Tue, Jun 14, 2022 at 12:32 AM Chao Peng <chao.p.peng@linux.intel.com> wrote:
> > > >
> > > > On Thu, Jun 09, 2022 at 08:29:06PM +0000, Sean Christopherson wrote:
> > > > > On Wed, Jun 08, 2022, Vishal Annapurve wrote:
> > > > >
> > > > > One argument is that userspace can simply rely on cgroups to
> > > > > detect misbehaving guests, but (a) those types of OOMs will be
> > > > > a nightmare to debug and (b) an OOM kill from the host is
> > > > > typically considered a _host_ issue and will be treated as a
> > > > > missed SLO.
> > > > >
> > > > > An idea for handling this in the kernel without too much
> > > > > complexity would be to add F_SEAL_FAULT_ALLOCATIONS (terrible
> > > > > name) that would prevent page faults from allocating pages,
> > > > > i.e. holes can only be filled by an explicit fallocate().
> > > > > Minor faults, e.g. due to NUMA balancing stupidity, and major
> > > > > faults due to swap would still work, but writes to previously
> > > > > unreserved/unallocated memory would get a SIGSEGV even though
> > > > > the memory is mapped. That would allow the userspace VMM to
> > > > > prevent unintentional allocations without having to coordinate
> > > > > unmapping/remapping across multiple processes.
> > > >
> > > > Since this is mainly for shared memory and the motivation is
> > > > catching misbehaved access, can we use mprotect(PROT_NONE) for
> > > > this? We can mark those ranges backed by the private fd as
> > > > PROT_NONE during the conversion, so subsequent misbehaved
> > > > accesses will be blocked instead of causing double allocation
> > > > silently.
> >
> > PROT_NONE, a.k.a. mprotect(), has the same VMA downsides as munmap().

Yes, right.

> > > This patch series is fairly close to implementing a rather more
> > > efficient solution. I'm not familiar enough with hypervisor
> > > userspace to really know if this would work, but:
> > >
> > > What if shared guest memory could also be file-backed, either in
> > > the same fd or with a second fd covering the shared portion of a
> > > memslot? This would allow changes to the backing store (punching
> > > holes, etc.) to be done without mmap_lock or host-userspace TLB
> > > flushes. Depending on what the guest is doing with its shared
> > > memory, userspace might need the memory mapped or it might not.
> >
> > That's what I'm angling for with the F_SEAL_FAULT_ALLOCATIONS idea.
> > The issue, unless I'm misreading the code, is that punching a hole
> > in the shared memory backing store doesn't prevent reallocating that
> > hole on fault, i.e. a helper process that keeps a valid mapping of
> > guest shared memory can silently fill the hole.
> >
> > What we're hoping to achieve is a way to prevent allocating memory
> > without a very explicit action from userspace, e.g. fallocate().
>
> Ah, I misunderstood. I thought your goal was to mmap it and prevent
> page faults from allocating.

I think we still need the mmap, but we want to prevent allocating when
userspace touches a previously mmapped area where the page has never
been filled. I don't have a clear answer on whether other operations
like read/write should also be prevented (probably yes). Only after an
explicit fallocate() to allocate the page would these operations act
normally.

> It is indeed the case (and has been since before quite a few of us
> were born) that a hole in a sparse file is logically just a bunch of
> zeros. A way to make a file for which a hole is an actual hole seems
> like it would solve this problem nicely. It could also be solved more
> specifically for KVM by making sure that the private/shared mode that
> userspace programs is strict enough to prevent accidental allocations
> -- if a GPA is definitively private, shared, neither, or (potentially,
> on TDX only) both, then a page that *isn't* shared will never be
> accidentally allocated by KVM.

KVM is clever enough to not allocate since it knows whether a GPA is
shared or not. In this case it's the host userspace that can cause the
allocation, and it is too complex to check on every access from the
guest.

> If the shared backing is not mmapped, it also won't be accidentally
> allocated by host userspace on a stray or careless write.

As said above, mmap is still preferred; otherwise too many changes are
needed for the userspace VMM.

Thanks,
Chao

> --Andy
On Wed, Jun 15, 2022, Chao Peng wrote:
> On Tue, Jun 14, 2022 at 01:59:41PM -0700, Andy Lutomirski wrote:
> > On Tue, Jun 14, 2022 at 12:09 PM Sean Christopherson <seanjc@google.com> wrote:
> > >
> > > On Tue, Jun 14, 2022, Andy Lutomirski wrote:
> > > > This patch series is fairly close to implementing a rather more
> > > > efficient solution. I'm not familiar enough with hypervisor
> > > > userspace to really know if this would work, but:
> > > >
> > > > What if shared guest memory could also be file-backed, either in
> > > > the same fd or with a second fd covering the shared portion of a
> > > > memslot? This would allow changes to the backing store (punching
> > > > holes, etc.) to be done without mmap_lock or host-userspace TLB
> > > > flushes. Depending on what the guest is doing with its shared
> > > > memory, userspace might need the memory mapped or it might not.
> > >
> > > That's what I'm angling for with the F_SEAL_FAULT_ALLOCATIONS
> > > idea. The issue, unless I'm misreading the code, is that punching
> > > a hole in the shared memory backing store doesn't prevent
> > > reallocating that hole on fault, i.e. a helper process that keeps
> > > a valid mapping of guest shared memory can silently fill the hole.
> > >
> > > What we're hoping to achieve is a way to prevent allocating memory
> > > without a very explicit action from userspace, e.g. fallocate().
> >
> > Ah, I misunderstood. I thought your goal was to mmap it and prevent
> > page faults from allocating.

I don't think you misunderstood, that's also one of the goals. The use
case is that multiple processes in the host mmap() guest memory, and
we'd like to be able to punch a hole without having to rendezvous with
all processes, and also to prevent an unintentional re-allocation.

> I think we still need the mmap, but we want to prevent allocating when
> userspace touches a previously mmapped area where the page has never
> been filled.

Yes, or if a chunk was filled at some point but then was removed via
PUNCH_HOLE.

> I don't have a clear answer on whether other operations like
> read/write should also be prevented (probably yes). Only after an
> explicit fallocate() to allocate the page would these operations act
> normally.

I always forget about read/write. I believe reads should be ok; the
semantics of holes are that they return zeros, i.e. the kernel can use
ZERO_PAGE() and not allocate a new backing page. Not sure what to do
about writes, though. Allocating on direct writes might be ok for our
use case, but that could also result in a rather weird API.

> > It is indeed the case (and has been since before quite a few of us
> > were born) that a hole in a sparse file is logically just a bunch of
> > zeros. A way to make a file for which a hole is an actual hole seems
> > like it would solve this problem nicely. It could also be solved
> > more specifically for KVM by making sure that the private/shared
> > mode that userspace programs is strict enough to prevent accidental
> > allocations -- if a GPA is definitively private, shared, neither, or
> > (potentially, on TDX only) both, then a page that *isn't* shared
> > will never be accidentally allocated by KVM.
>
> KVM is clever enough to not allocate since it knows whether a GPA is
> shared or not. In this case it's the host userspace that can cause the
> allocation, and it is too complex to check on every access from the
> guest.

Yes, KVM is not in the picture at all. KVM won't trigger allocation,
but KVM also is not in a position to prevent userspace from touching
memory.

> > If the shared backing is not mmapped, it also won't be accidentally
> > allocated by host userspace on a stray or careless write.
>
> As said above, mmap is still preferred; otherwise too many changes are
> needed for the userspace VMM.

Forcing userspace to change doesn't bother me too much; the biggest
concern is having to take mmap_lock for write in a per-host process.