Message ID | 20240910163038.1298452-7-roypat@amazon.co.uk (mailing list archive) |
---|---|
State | New, archived |
Series | Unmapping guest_memfd from Direct Map |
On Tue, 10 Sep 2024 17:30:32 +0100 Patrick Roy <roypat@amazon.co.uk> wrote:

> Add tracepoints for calls to kvm_gmem_get_folio that cause the returned
> folio to be considered "shared" (e.g. accessible by host KVM), and
> tracepoint for when KVM is done accessing a gmem pfn
> (kvm_gmem_put_shared_pfn).
>
> The above operations can cause folios to be insert/removed into/from the
> direct map. We want to be able to make sure that only those gmem folios
> that we expect KVM to access are ever reinserted into the direct map,
> and that all folios that are temporarily reinserted are also removed
> again at a later point. Processing ftrace output is one way to verify
> this.
>
> Signed-off-by: Patrick Roy <roypat@amazon.co.uk>
> ---
>  include/trace/events/kvm.h | 43 ++++++++++++++++++++++++++++++++++++++
>  virt/kvm/guest_memfd.c     |  7 ++++++-
>  2 files changed, 49 insertions(+), 1 deletion(-)
>
> diff --git a/include/trace/events/kvm.h b/include/trace/events/kvm.h
> index 74e40d5d4af42..4a40fd4c22f91 100644
> --- a/include/trace/events/kvm.h
> +++ b/include/trace/events/kvm.h
> @@ -489,6 +489,49 @@ TRACE_EVENT(kvm_test_age_hva,
>  	TP_printk("mmu notifier test age hva: %#016lx", __entry->hva)
>  );
>
> +#ifdef CONFIG_KVM_PRIVATE_MEM
> +TRACE_EVENT(kvm_gmem_share,
> +	TP_PROTO(struct folio *folio, pgoff_t index),
> +	TP_ARGS(folio, index),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, sharing_count)
> +		__field(kvm_pfn_t, pfn)
> +		__field(pgoff_t, index)
> +		__field(unsigned long, npages)

Looking at the TP_printk() format below, the pfn is 8 bytes and
sharing_count is 4. This will likely create a hole between the two fields
for alignment reasons. Should put the sharing_count at the end.

> +	),
> +
> +	TP_fast_assign(
> +		__entry->sharing_count = refcount_read(folio_get_private(folio));
> +		__entry->pfn = folio_pfn(folio);
> +		__entry->index = index;
> +		__entry->npages = folio_nr_pages(folio);
> +	),
> +
> +	TP_printk("pfn=0x%llx index=%lu pages=%lu (refcount now %d)",
> +		__entry->pfn, __entry->index, __entry->npages, __entry->sharing_count - 1)
> +);
> +
> +TRACE_EVENT(kvm_gmem_unshare,
> +	TP_PROTO(kvm_pfn_t pfn),
> +	TP_ARGS(pfn),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned int, sharing_count)
> +		__field(kvm_pfn_t, pfn)

Same here. It should swap the two fields.

Note, if you already added this, it will not break backward compatibility
swapping them, as tooling should use the format files that state where
these fields are located in the raw data.

-- Steve

> +	),
> +
> +	TP_fast_assign(
> +		__entry->sharing_count = refcount_read(folio_get_private(pfn_folio(pfn)));
> +		__entry->pfn = pfn;
> +	),
> +
> +	TP_printk("pfn=0x%llx (refcount now %d)",
> +		__entry->pfn, __entry->sharing_count - 1)
> +)
> +
> +#endif
> +
>  #endif /* _TRACE_KVM_MAIN_H */
>
>  /* This part must be outside protection */
> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index 6772253497e4d..742eba36d2371 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -7,6 +7,7 @@
>  #include <linux/set_memory.h>
>
>  #include "kvm_mm.h"
> +#include "trace/events/kvm.h"
>
>  struct kvm_gmem {
>  	struct kvm *kvm;
> @@ -204,8 +205,10 @@ static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index, unsi
>  	if (r)
>  		goto out_err;
>
> -	if (share)
> +	if (share) {
>  		refcount_inc(folio_get_private(folio));
> +		trace_kvm_gmem_share(folio, index);
> +	}
>
>  out:
>  	/*
> @@ -759,6 +762,8 @@ int kvm_gmem_put_shared_pfn(kvm_pfn_t pfn) {
>  	if (refcount_read(sharing_count) == 1)
>  		r = kvm_gmem_folio_set_private(folio);
>
> +	trace_kvm_gmem_unshare(pfn);
> +
>  	return r;
>  }
>  EXPORT_SYMBOL_GPL(kvm_gmem_put_shared_pfn);
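A minimal sketch of the reordering being suggested, shown for the smaller
kvm_gmem_unshare event (kvm_gmem_share would analogously move sharing_count
to the end, after npages). This is a sketch of the review feedback, not code
from the posted series; the assignments and output format are unchanged:

/*
 * Sketch of the suggested layout: the 8-byte pfn leads and the 4-byte
 * sharing_count trails it, so no padding hole is needed between fields.
 */
TRACE_EVENT(kvm_gmem_unshare,
	TP_PROTO(kvm_pfn_t pfn),
	TP_ARGS(pfn),

	TP_STRUCT__entry(
		__field(kvm_pfn_t, pfn)
		__field(unsigned int, sharing_count)
	),

	TP_fast_assign(
		__entry->pfn = pfn;
		__entry->sharing_count = refcount_read(folio_get_private(pfn_folio(pfn)));
	),

	TP_printk("pfn=0x%llx (refcount now %d)",
		__entry->pfn, __entry->sharing_count - 1)
);

As noted in the reply, trace tooling is expected to read field offsets from
the event's format file (presumably
/sys/kernel/tracing/events/kvm/kvm_gmem_unshare/format for this event) rather
than assume a fixed layout, so swapping the fields in a later revision should
not break existing consumers.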
diff --git a/include/trace/events/kvm.h b/include/trace/events/kvm.h
index 74e40d5d4af42..4a40fd4c22f91 100644
--- a/include/trace/events/kvm.h
+++ b/include/trace/events/kvm.h
@@ -489,6 +489,49 @@ TRACE_EVENT(kvm_test_age_hva,
 	TP_printk("mmu notifier test age hva: %#016lx", __entry->hva)
 );
 
+#ifdef CONFIG_KVM_PRIVATE_MEM
+TRACE_EVENT(kvm_gmem_share,
+	TP_PROTO(struct folio *folio, pgoff_t index),
+	TP_ARGS(folio, index),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, sharing_count)
+		__field(kvm_pfn_t, pfn)
+		__field(pgoff_t, index)
+		__field(unsigned long, npages)
+	),
+
+	TP_fast_assign(
+		__entry->sharing_count = refcount_read(folio_get_private(folio));
+		__entry->pfn = folio_pfn(folio);
+		__entry->index = index;
+		__entry->npages = folio_nr_pages(folio);
+	),
+
+	TP_printk("pfn=0x%llx index=%lu pages=%lu (refcount now %d)",
+		__entry->pfn, __entry->index, __entry->npages, __entry->sharing_count - 1)
+);
+
+TRACE_EVENT(kvm_gmem_unshare,
+	TP_PROTO(kvm_pfn_t pfn),
+	TP_ARGS(pfn),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, sharing_count)
+		__field(kvm_pfn_t, pfn)
+	),
+
+	TP_fast_assign(
+		__entry->sharing_count = refcount_read(folio_get_private(pfn_folio(pfn)));
+		__entry->pfn = pfn;
+	),
+
+	TP_printk("pfn=0x%llx (refcount now %d)",
+		__entry->pfn, __entry->sharing_count - 1)
+)
+
+#endif
+
 #endif /* _TRACE_KVM_MAIN_H */
 
 /* This part must be outside protection */
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 6772253497e4d..742eba36d2371 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -7,6 +7,7 @@
 #include <linux/set_memory.h>
 
 #include "kvm_mm.h"
+#include "trace/events/kvm.h"
 
 struct kvm_gmem {
 	struct kvm *kvm;
@@ -204,8 +205,10 @@ static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index, unsi
 	if (r)
 		goto out_err;
 
-	if (share)
+	if (share) {
 		refcount_inc(folio_get_private(folio));
+		trace_kvm_gmem_share(folio, index);
+	}
 
 out:
 	/*
@@ -759,6 +762,8 @@ int kvm_gmem_put_shared_pfn(kvm_pfn_t pfn) {
 	if (refcount_read(sharing_count) == 1)
 		r = kvm_gmem_folio_set_private(folio);
 
+	trace_kvm_gmem_unshare(pfn);
+
 	return r;
 }
 EXPORT_SYMBOL_GPL(kvm_gmem_put_shared_pfn);
Add tracepoints for calls to kvm_gmem_get_folio that cause the returned
folio to be considered "shared" (e.g. accessible by host KVM), and a
tracepoint for when KVM is done accessing a gmem pfn
(kvm_gmem_put_shared_pfn).

The above operations can cause folios to be inserted into or removed from
the direct map. We want to be able to make sure that only those gmem folios
that we expect KVM to access are ever reinserted into the direct map, and
that all folios that are temporarily reinserted are also removed again at a
later point. Processing ftrace output is one way to verify this.

Signed-off-by: Patrick Roy <roypat@amazon.co.uk>
---
 include/trace/events/kvm.h | 43 ++++++++++++++++++++++++++++++++++++++
 virt/kvm/guest_memfd.c     |  7 ++++++-
 2 files changed, 49 insertions(+), 1 deletion(-)
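Since the commit message relies on processing ftrace output to check that
share and unshare events pair up, one possible shape for such a check is
sketched below. This is a hypothetical userspace helper, not part of the
series; it only assumes the pfn=0x%llx portion of the TP_printk() formats
above and the event-name prefix that the ftrace text format prints for each
record.

/* check_gmem_trace.c: hypothetical sketch, not part of this series.
 *
 * Reads ftrace text output on stdin and reports pfns that saw more
 * kvm_gmem_share events than kvm_gmem_unshare events (or vice versa).
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_PFNS 4096

static unsigned long long pfns[MAX_PFNS];
static int balance[MAX_PFNS];
static int nr_pfns;

static int find_or_add(unsigned long long pfn)
{
	int i;

	for (i = 0; i < nr_pfns; i++)
		if (pfns[i] == pfn)
			return i;
	if (nr_pfns == MAX_PFNS) {
		fprintf(stderr, "too many distinct pfns\n");
		exit(1);
	}
	pfns[nr_pfns] = pfn;
	return nr_pfns++;
}

int main(void)
{
	char line[512];
	unsigned long long pfn;
	char *p;
	int i;

	while (fgets(line, sizeof(line), stdin)) {
		/* Only look at lines that carry a pfn=0x... field. */
		p = strstr(line, "pfn=0x");
		if (!p || sscanf(p, "pfn=0x%llx", &pfn) != 1)
			continue;
		if (strstr(line, "kvm_gmem_unshare:"))
			balance[find_or_add(pfn)]--;
		else if (strstr(line, "kvm_gmem_share:"))
			balance[find_or_add(pfn)]++;
	}

	for (i = 0; i < nr_pfns; i++)
		if (balance[i])
			printf("pfn 0x%llx: share/unshare imbalance of %d\n",
			       pfns[i], balance[i]);
	return 0;
}

Fed with the contents of the trace buffer (e.g. copied out of
/sys/kernel/tracing/trace with only these two events enabled), any pfn that
ends up with a non-zero balance is one whose share and unshare events did not
pair up, which is the property the commit message wants to verify.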