Message ID | 1675796609-235681-1-git-send-email-steven.sistare@oracle.com (mailing list archive) |
---|---|
State | New, archived |
Series | [V2] memory: flat section iterator |
On Tue, Feb 07, 2023 at 11:03:29AM -0800, Steve Sistare wrote:
> Add an iterator over the sections of a flattened address space.
> This will be needed by cpr to issue vfio ioctl's on the same memory
> ranges that are already programmed.

Should this better be proposed with the context of using it?  Or I don't
know how to justify why this new interface is needed.

For example, any explanations on why memory region listeners cannot work?

Thanks,
On 2/7/2023 3:10 PM, Peter Xu wrote:
> On Tue, Feb 07, 2023 at 11:03:29AM -0800, Steve Sistare wrote:
>> Add an iterator over the sections of a flattened address space.
>> This will be needed by cpr to issue vfio ioctl's on the same memory
>> ranges that are already programmed.
>
> Should this better be proposed with the context of using it?  Or I don't
> know how to justify why this new interface is needed.
>
> For example, any explanations on why memory region listeners cannot work?

For context, the new interface is used in the patch
  "vfio-pci: recover from unmap-all-vaddr failure"
in the original live update series:
  https://lore.kernel.org/qemu-devel/1658851843-236870-1-git-send-email-steven.sistare@oracle.com/

More succinctly, the memory region listeners already ran, and the vfio
callbacks created vfio memory regions.  Now we want to perform live update,
and are in steady state, so no listeners are being called.  We need the
flat section iterator to reproduce the same addresses and extents that were
produced by the listeners, to make a state change on each distinct vfio
memory region.

- Steve
On Tue, Feb 07, 2023 at 04:28:49PM -0500, Steven Sistare wrote:
> On 2/7/2023 3:10 PM, Peter Xu wrote:
>> On Tue, Feb 07, 2023 at 11:03:29AM -0800, Steve Sistare wrote:
>>> Add an iterator over the sections of a flattened address space.
>>> This will be needed by cpr to issue vfio ioctl's on the same memory
>>> ranges that are already programmed.
>>
>> Should this better be proposed with the context of using it?  Or I don't
>> know how to justify why this new interface is needed.
>>
>> For example, any explanations on why memory region listeners cannot work?
>
> For context, the new interface is used in the patch
>   "vfio-pci: recover from unmap-all-vaddr failure"
> in the original live update series:
>   https://lore.kernel.org/qemu-devel/1658851843-236870-1-git-send-email-steven.sistare@oracle.com/
>
> More succinctly, the memory region listeners already ran, and the vfio
> callbacks created vfio memory regions.  Now we want to perform live update,
> and are in steady state, so no listeners are being called.  We need the
> flat section iterator to reproduce the same addresses and extents that were
> produced by the listeners, to make a state change on each distinct vfio
> memory region.

Okay it makes sense, thanks.

I think it'll be the same as one memory_listener_register() followed by an
unregister with region_add() hooked only, but maybe we should avoid
fiddling with the global list of listeners when possible.

If you plan to keep posting it separately, would you add some information
into the commit message?  With that enhanced, please feel free to add:

Acked-by: Peter Xu <peterx@redhat.com>
On 07.02.23 22:28, Steven Sistare wrote:
> On 2/7/2023 3:10 PM, Peter Xu wrote:
>> On Tue, Feb 07, 2023 at 11:03:29AM -0800, Steve Sistare wrote:
>>> Add an iterator over the sections of a flattened address space.
>>> This will be needed by cpr to issue vfio ioctl's on the same memory
>>> ranges that are already programmed.
>>
>> Should this better be proposed with the context of using it?  Or I don't
>> know how to justify why this new interface is needed.
>>
>> For example, any explanations on why memory region listeners cannot work?
>
> For context, the new interface is used in the patch
>   "vfio-pci: recover from unmap-all-vaddr failure"
> in the original live update series:
>   https://lore.kernel.org/qemu-devel/1658851843-236870-1-git-send-email-steven.sistare@oracle.com/
>
> More succinctly, the memory region listeners already ran, and the vfio
> callbacks created vfio memory regions.  Now we want to perform live update,
> and are in steady state, so no listeners are being called.  We need the
> flat section iterator to reproduce the same addresses and extents that were
> produced by the listeners, to make a state change on each distinct vfio
> memory region.

Would a "replay" functionality on a registered memory notifier perhaps be
cleaner?
On 2/7/2023 4:47 PM, Peter Xu wrote:
> On Tue, Feb 07, 2023 at 04:28:49PM -0500, Steven Sistare wrote:
>> On 2/7/2023 3:10 PM, Peter Xu wrote:
>>> On Tue, Feb 07, 2023 at 11:03:29AM -0800, Steve Sistare wrote:
>>>> Add an iterator over the sections of a flattened address space.
>>>> This will be needed by cpr to issue vfio ioctl's on the same memory
>>>> ranges that are already programmed.
>>>
>>> Should this better be proposed with the context of using it?  Or I don't
>>> know how to justify why this new interface is needed.
>>>
>>> For example, any explanations on why memory region listeners cannot work?
>>
>> For context, the new interface is used in the patch
>>   "vfio-pci: recover from unmap-all-vaddr failure"
>> in the original live update series:
>>   https://lore.kernel.org/qemu-devel/1658851843-236870-1-git-send-email-steven.sistare@oracle.com/
>>
>> More succinctly, the memory region listeners already ran, and the vfio
>> callbacks created vfio memory regions.  Now we want to perform live update,
>> and are in steady state, so no listeners are being called.  We need the
>> flat section iterator to reproduce the same addresses and extents that were
>> produced by the listeners, to make a state change on each distinct vfio
>> memory region.
>
> Okay it makes sense, thanks.
>
> I think it'll be the same as one memory_listener_register() followed by an
> unregister with region_add() hooked only, but maybe we should avoid
> fiddling with the global list of listeners when possible.

Indeed it is the same, thanks for the suggestion.  And this occurs
infrequently, so modifying the listener list has no impact.

I withdraw this patch request.  Thank you all for reviewing it.

- Steve

> If you plan to keep posting it separately, would you add some information
> into the commit message?  With that enhanced, please feel free to add:
>
> Acked-by: Peter Xu <peterx@redhat.com>
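For reference, the alternative that Peter describes and Steve adopted (a short-lived MemoryListener with only region_add() hooked) might look roughly like the sketch below against QEMU's existing listener API. The function names vfio_cpr_region_add() and vfio_cpr_walk_sections() are hypothetical and do not appear in the series.

#include "qemu/osdep.h"
#include "exec/memory.h"

/*
 * Hypothetical per-section hook.  Registering a listener replays
 * region_add for every section already present in the address space's
 * flat view, which reproduces the same ranges the vfio listener saw
 * when it was first registered.
 */
static void vfio_cpr_region_add(MemoryListener *listener,
                                MemoryRegionSection *section)
{
    /* act on one flat section, e.g. re-issue a vfio ioctl covering
     * section->offset_within_address_space and section->size */
}

/*
 * Walk the current flat sections of @as by registering a temporary
 * listener with only region_add hooked, then unregistering it.
 */
static void vfio_cpr_walk_sections(AddressSpace *as)
{
    MemoryListener tmp = {
        .name = "vfio-cpr-walk",
        .region_add = vfio_cpr_region_add,
    };

    memory_listener_register(&tmp, as);
    memory_listener_unregister(&tmp);
}

Registering the listener triggers region_add for each existing flat section, which is also the kind of replay behaviour David asks about above.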
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 3224e09..b6ca3f5 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -2376,6 +2376,37 @@ void memory_region_set_ram_discard_manager(MemoryRegion *mr,
                                            RamDiscardManager *rdm);
 
 /**
+ * memory_region_section_cb: callback for address_space_flat_for_each_section()
+ *
+ * @mrs: MemoryRegionSection of the range
+ * @opaque: data pointer passed to address_space_flat_for_each_section()
+ * @errp: error message, returned to the address_space_flat_for_each_section
+ *        caller.
+ *
+ * Returns: non-zero to stop the iteration, and 0 to continue.  The same
+ * non-zero value is returned to the address_space_flat_for_each_section caller.
+ */
+
+typedef int (*memory_region_section_cb)(MemoryRegionSection *mrs,
+                                        void *opaque,
+                                        Error **errp);
+
+/**
+ * address_space_flat_for_each_section: walk the ranges in the address space
+ * flat view and call @func for each.  Return 0 on success, else return non-zero
+ * with a message in @errp.
+ *
+ * @as: target address space
+ * @func: callback function
+ * @opaque: passed to @func
+ * @errp: passed to @func
+ */
+int address_space_flat_for_each_section(AddressSpace *as,
+                                        memory_region_section_cb func,
+                                        void *opaque,
+                                        Error **errp);
+
+/**
  * memory_region_find: translate an address/size relative to a
  * MemoryRegion into a #MemoryRegionSection.
  *
diff --git a/softmmu/memory.c b/softmmu/memory.c
index 9d64efc..57c3131 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -2748,6 +2748,27 @@ bool memory_region_is_mapped(MemoryRegion *mr)
     return !!mr->container || mr->mapped_via_alias;
 }
 
+int address_space_flat_for_each_section(AddressSpace *as,
+                                        memory_region_section_cb func,
+                                        void *opaque,
+                                        Error **errp)
+{
+    FlatView *view = address_space_get_flatview(as);
+    FlatRange *fr;
+    int ret;
+
+    FOR_EACH_FLAT_RANGE(fr, view) {
+        MemoryRegionSection mrs = section_from_flat_range(fr, view);
+        ret = func(&mrs, opaque, errp);
+        if (ret) {
+            return ret;
+        }
+    }
+
+    flatview_unref(view);
+    return 0;
+}
+
 /* Same as memory_region_find, but it does not add a reference to the
  * returned region. It must be called from an RCU critical section.
  */
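For completeness, a caller of the iterator proposed in this (later withdrawn) patch would have looked roughly like the sketch below; the names vfio_cpr_section_cb() and vfio_cpr_walk() and their bodies are illustrative assumptions, not code from the series.

#include "qemu/osdep.h"
#include "exec/memory.h"
#include "qapi/error.h"

/* Hypothetical callback, invoked once per flat section.  Returning
 * non-zero (with *errp set) stops the walk, and the value is propagated
 * to the address_space_flat_for_each_section() caller. */
static int vfio_cpr_section_cb(MemoryRegionSection *mrs, void *opaque,
                               Error **errp)
{
    /* inspect mrs->offset_within_address_space and mrs->size here */
    return 0;
}

/* Walk every flat section of @as, returning 0 or the callback's error. */
static int vfio_cpr_walk(AddressSpace *as, Error **errp)
{
    return address_space_flat_for_each_section(as, vfio_cpr_section_cb,
                                               NULL, errp);
}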