Message ID: 20211119134739.20218-14-chao.p.peng@linux.intel.com (mailing list archive)
State: New, archived
Series: KVM: mm: fd-based approach for supporting KVM guest private memory
On Fri, Nov 19, 2021 at 09:47:39PM +0800, Chao Peng wrote:
> Since the memory backing store does not get notified when VM is
> destroyed so need check if VM is still live in these callbacks.
>
> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
> Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
> ---
>  virt/kvm/memfd.c | 22 ++++++++++++++++++++++
>  1 file changed, 22 insertions(+)
>
> diff --git a/virt/kvm/memfd.c b/virt/kvm/memfd.c
> index bd930dcb455f..bcfdc685ce22 100644
> --- a/virt/kvm/memfd.c
> +++ b/virt/kvm/memfd.c
> @@ -12,16 +12,38 @@
>  #include <linux/memfd.h>
>  const static struct guest_mem_ops *memfd_ops;
>
> +static bool vm_is_dead(struct kvm *vm)
> +{
> +	struct kvm *kvm;
> +
> +	list_for_each_entry(kvm, &vm_list, vm_list) {
> +		if (kvm == vm)
> +			return false;
> +	}

I don't think this is enough. The struct kvm can be freed and re-allocated
from the slab, and this function will give a false negative.

Maybe the kvm has to be tagged with a sequential id that is incremented on
every allocation. This id can be checked here.

> +
> +	return true;
> +}
On Mon, Nov 22, 2021 at 05:16:47PM +0300, Kirill A. Shutemov wrote:
> On Fri, Nov 19, 2021 at 09:47:39PM +0800, Chao Peng wrote:
> > Since the memory backing store does not get notified when VM is
> > destroyed so need check if VM is still live in these callbacks.
> >
> > Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
> > Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
> > ---
> >  virt/kvm/memfd.c | 22 ++++++++++++++++++++++
> >  1 file changed, 22 insertions(+)
> >
> > diff --git a/virt/kvm/memfd.c b/virt/kvm/memfd.c
> > index bd930dcb455f..bcfdc685ce22 100644
> > --- a/virt/kvm/memfd.c
> > +++ b/virt/kvm/memfd.c
> > @@ -12,16 +12,38 @@
> >  #include <linux/memfd.h>
> >  const static struct guest_mem_ops *memfd_ops;
> >
> > +static bool vm_is_dead(struct kvm *vm)
> > +{
> > +	struct kvm *kvm;
> > +
> > +	list_for_each_entry(kvm, &vm_list, vm_list) {
> > +		if (kvm == vm)
> > +			return false;
> > +	}
>
> I don't think this is enough. The struct kvm can be freed and re-allocated
> from the slab, and this function will give a false negative.

Right.

> Maybe the kvm has to be tagged with a sequential id that is incremented on
> every allocation. This id can be checked here.

Sounds like a sequential id will be needed; no existing field in struct
kvm can work for this.

> > +
> > +	return true;
> > +}

> --
> Kirill A. Shutemov
On 11/19/21 14:47, Chao Peng wrote:
> +	list_for_each_entry(kvm, &vm_list, vm_list) {
> +		if (kvm == vm)
> +			return false;
> +	}
> +
> +	return true;

This would have to take the kvm_lock, but see my reply to patch 1.

Paolo
On 11/23/21 02:06, Chao Peng wrote:
>> Maybe the kvm has to be tagged with a sequential id that is incremented on
>> every allocation. This id can be checked here.
> Sounds like a sequential id will be needed; no existing field in struct
> kvm can work for this.

There's no need for new concepts when there's a perfectly usable reference
count. :)

Paolo
On Tue, Nov 23, 2021 at 10:09:28AM +0100, Paolo Bonzini wrote:
> On 11/23/21 02:06, Chao Peng wrote:
> > > Maybe the kvm has to be tagged with a sequential id that is incremented on
> > > every allocation. This id can be checked here.
> > Sounds like a sequential id will be needed; no existing field in struct
> > kvm can work for this.
>
> There's no need for new concepts when there's a perfectly usable reference
> count. :)

Indeed, thanks.

> Paolo
diff --git a/virt/kvm/memfd.c b/virt/kvm/memfd.c
index bd930dcb455f..bcfdc685ce22 100644
--- a/virt/kvm/memfd.c
+++ b/virt/kvm/memfd.c
@@ -12,16 +12,38 @@
 #include <linux/memfd.h>
 const static struct guest_mem_ops *memfd_ops;
 
+static bool vm_is_dead(struct kvm *vm)
+{
+	struct kvm *kvm;
+
+	list_for_each_entry(kvm, &vm_list, vm_list) {
+		if (kvm == vm)
+			return false;
+	}
+
+	return true;
+}
+
 static void memfd_invalidate_page_range(struct inode *inode, void *owner,
 					pgoff_t start, pgoff_t end)
 {
 	//!!!We can get here after the owner no longer exists
+	if (vm_is_dead(owner))
+		return;
+
+	kvm_memfd_invalidate_range(owner, inode, start >> PAGE_SHIFT,
+				   end >> PAGE_SHIFT);
 }
 
 static void memfd_fallocate(struct inode *inode, void *owner,
 			    pgoff_t start, pgoff_t end)
 {
 	//!!!We can get here after the owner no longer exists
+	if (vm_is_dead(owner))
+		return;
+
+	kvm_memfd_fallocate_range(owner, inode, start >> PAGE_SHIFT,
+				  end >> PAGE_SHIFT);
 }
 
 static const struct guest_ops memfd_notifier = {