Message ID: 1579779525-20065-75-git-send-email-imammedo@redhat.com
State: New, archived
Series: refactor main RAM allocation to use hostmem backend
On Thu, 23 Jan 2020 12:38:39 +0100 Igor Mammedov <imammedo@redhat.com> wrote:

> Since all RAM is backed by hostmem backends, drop
> global -mem-path invariant and simplify code.

Looks like the code removed here originated in PPC,
could PPC folks review this please?

> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> ---
> CC: pbonzini@redhat.com
> CC: rth@twiddle.net
> ---
>  exec.c | 51 +++++----------------------------------------------
>  1 file changed, 5 insertions(+), 46 deletions(-)
> 
> diff --git a/exec.c b/exec.c
> index 67e520d..809987c 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -1667,60 +1667,19 @@ static int find_max_backend_pagesize(Object *obj, void *opaque)
>   */
>  long qemu_minrampagesize(void)
>  {
> -    long hpsize = LONG_MAX;
> -    long mainrampagesize;
> -    Object *memdev_root;
> -    MachineState *ms = MACHINE(qdev_get_machine());
> -
> -    mainrampagesize = qemu_mempath_getpagesize(mem_path);
> -
> -    /* it's possible we have memory-backend objects with
> -     * hugepage-backed RAM. these may get mapped into system
> -     * address space via -numa parameters or memory hotplug
> -     * hooks. we want to take these into account, but we
> -     * also want to make sure these supported hugepage
> -     * sizes are applicable across the entire range of memory
> -     * we may boot from, so we take the min across all
> -     * backends, and assume normal pages in cases where a
> -     * backend isn't backed by hugepages.
> -     */
> -    memdev_root = object_resolve_path("/objects", NULL);
> -    if (memdev_root) {
> -        object_child_foreach(memdev_root, find_min_backend_pagesize, &hpsize);
> -    }
> -    if (hpsize == LONG_MAX) {
> -        /* No additional memory regions found ==> Report main RAM page size */
> -        return mainrampagesize;
> -    }
> -
> -    /* If NUMA is disabled or the NUMA nodes are not backed with a
> -     * memory-backend, then there is at least one node using "normal" RAM,
> -     * so if its page size is smaller we have got to report that size instead.
> -     */
> -    if (hpsize > mainrampagesize &&
> -        (ms->numa_state == NULL ||
> -         ms->numa_state->num_nodes == 0 ||
> -         ms->numa_state->nodes[0].node_memdev == NULL)) {
> -        static bool warned;
> -        if (!warned) {
> -            error_report("Huge page support disabled (n/a for main memory).");
> -            warned = true;
> -        }
> -        return mainrampagesize;
> -    }
> +    long hpsize;
> +    Object *memdev_root = object_resolve_path("/objects", NULL);
> 
> +    object_child_foreach(memdev_root, find_min_backend_pagesize, &hpsize);
>      return hpsize;
>  }
> 
>  long qemu_maxrampagesize(void)
>  {
> -    long pagesize = qemu_mempath_getpagesize(mem_path);
> +    long pagesize;
>      Object *memdev_root = object_resolve_path("/objects", NULL);
> 
> -    if (memdev_root) {
> -        object_child_foreach(memdev_root, find_max_backend_pagesize,
> -                             &pagesize);
> -    }
> +    object_child_foreach(memdev_root, find_max_backend_pagesize, &pagesize);
>      return pagesize;
>  }
>  #else
On Fri, Jan 24, 2020 at 11:25:11AM +0100, Igor Mammedov wrote:
> On Thu, 23 Jan 2020 12:38:39 +0100
> Igor Mammedov <imammedo@redhat.com> wrote:
> 
> > Since all RAM is backed by hostmem backends, drop
> > global -mem-path invariant and simplify code.
> 
> Looks like the code removed here originated in PPC,
> could PPC folks review this please?

Oh, sure.  I don't think I was CCed initially - I generally don't
have the bandwidth to scan qemu-devel.

I haven't looked at this series as a whole, only the bits which were
CCed to me (presumably because they touched ppc code).  But assuming
the statement above is correct, that everything is now backed by
hostmem backends, then the idea of this change should be fine.  But...

> > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> > ---
> > CC: pbonzini@redhat.com
> > CC: rth@twiddle.net
> > ---
> >  exec.c | 51 +++++----------------------------------------------
> >  1 file changed, 5 insertions(+), 46 deletions(-)
> > 
> > diff --git a/exec.c b/exec.c
> > index 67e520d..809987c 100644
> > --- a/exec.c
> > +++ b/exec.c
> > @@ -1667,60 +1667,19 @@ static int find_max_backend_pagesize(Object *obj, void *opaque)
> >   */
> >  long qemu_minrampagesize(void)
> >  {
> > -    long hpsize = LONG_MAX;
> > -    long mainrampagesize;
> > -    Object *memdev_root;
> > -    MachineState *ms = MACHINE(qdev_get_machine());
> > -
> > -    mainrampagesize = qemu_mempath_getpagesize(mem_path);
> > -
> > -    /* it's possible we have memory-backend objects with
> > -     * hugepage-backed RAM. these may get mapped into system
> > -     * address space via -numa parameters or memory hotplug
> > -     * hooks. we want to take these into account, but we
> > -     * also want to make sure these supported hugepage
> > -     * sizes are applicable across the entire range of memory
> > -     * we may boot from, so we take the min across all
> > -     * backends, and assume normal pages in cases where a
> > -     * backend isn't backed by hugepages.
> > -     */
> > -    memdev_root = object_resolve_path("/objects", NULL);
> > -    if (memdev_root) {
> > -        object_child_foreach(memdev_root, find_min_backend_pagesize, &hpsize);
> > -    }
> > -    if (hpsize == LONG_MAX) {
> > -        /* No additional memory regions found ==> Report main RAM page size */
> > -        return mainrampagesize;
> > -    }
> > -
> > -    /* If NUMA is disabled or the NUMA nodes are not backed with a
> > -     * memory-backend, then there is at least one node using "normal" RAM,
> > -     * so if its page size is smaller we have got to report that size instead.
> > -     */
> > -    if (hpsize > mainrampagesize &&
> > -        (ms->numa_state == NULL ||
> > -         ms->numa_state->num_nodes == 0 ||
> > -         ms->numa_state->nodes[0].node_memdev == NULL)) {
> > -        static bool warned;
> > -        if (!warned) {
> > -            error_report("Huge page support disabled (n/a for main memory).");
> > -            warned = true;
> > -        }
> > -        return mainrampagesize;
> > -    }
> > +    long hpsize;

hpsize absolutely has to be initialized.
find_min_backend_pagesize() reads it as well as writing it, so if
it's uninitialized you'll have UB.

> > +    Object *memdev_root = object_resolve_path("/objects", NULL);
> > 
> > +    object_child_foreach(memdev_root, find_min_backend_pagesize, &hpsize);
> >      return hpsize;
> >  }
> > 
> >  long qemu_maxrampagesize(void)
> >  {
> > -    long pagesize = qemu_mempath_getpagesize(mem_path);
> > +    long pagesize;

Same here.

> >      Object *memdev_root = object_resolve_path("/objects", NULL);
> > 
> > -    if (memdev_root) {
> > -        object_child_foreach(memdev_root, find_max_backend_pagesize,
> > -                             &pagesize);
> > -    }
> > +    object_child_foreach(memdev_root, find_max_backend_pagesize, &pagesize);
> >      return pagesize;
> >  }
> >  #else
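For readers less familiar with the pattern being discussed, below is a minimal
standalone sketch in plain C. It is not the actual QEMU code: fold_min() and
backend_pagesizes[] are hypothetical stand-ins for find_min_backend_pagesize()
and the hostmem backends. It shows why the accumulator must be seeded (with
LONG_MAX for a min fold) before the iteration, since the callback reads it
before writing it.

#include <limits.h>
#include <stdio.h>

/* Hypothetical stand-in for the page sizes reported by hostmem backends
 * (a 2 MiB hugepage backend and a 4 KiB normal-page backend). */
static const long backend_pagesizes[] = { 2 * 1024 * 1024, 4096 };

/* Same shape as the callback under discussion: it READS *opaque before
 * writing it, so the caller must have initialized the accumulator. */
static int fold_min(long pagesize, void *opaque)
{
    long *hpsize_min = opaque;

    if (pagesize < *hpsize_min) {
        *hpsize_min = pagesize;
    }
    return 0;
}

int main(void)
{
    /* Seed with LONG_MAX; leaving this uninitialized would make the
     * comparison in fold_min() read an indeterminate value (UB). */
    long hpsize = LONG_MAX;

    for (size_t i = 0; i < sizeof(backend_pagesizes) / sizeof(backend_pagesizes[0]); i++) {
        fold_min(backend_pagesizes[i], &hpsize);
    }

    printf("minimum backend page size: %ld\n", hpsize);
    return 0;
}

Compiled and run, this prints 4096, the smallest page size among the stand-in
backends; with hpsize left uninitialized, the first comparison would read
whatever happened to be on the stack.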
diff --git a/exec.c b/exec.c
index 67e520d..809987c 100644
--- a/exec.c
+++ b/exec.c
@@ -1667,60 +1667,19 @@ static int find_max_backend_pagesize(Object *obj, void *opaque)
  */
 long qemu_minrampagesize(void)
 {
-    long hpsize = LONG_MAX;
-    long mainrampagesize;
-    Object *memdev_root;
-    MachineState *ms = MACHINE(qdev_get_machine());
-
-    mainrampagesize = qemu_mempath_getpagesize(mem_path);
-
-    /* it's possible we have memory-backend objects with
-     * hugepage-backed RAM. these may get mapped into system
-     * address space via -numa parameters or memory hotplug
-     * hooks. we want to take these into account, but we
-     * also want to make sure these supported hugepage
-     * sizes are applicable across the entire range of memory
-     * we may boot from, so we take the min across all
-     * backends, and assume normal pages in cases where a
-     * backend isn't backed by hugepages.
-     */
-    memdev_root = object_resolve_path("/objects", NULL);
-    if (memdev_root) {
-        object_child_foreach(memdev_root, find_min_backend_pagesize, &hpsize);
-    }
-    if (hpsize == LONG_MAX) {
-        /* No additional memory regions found ==> Report main RAM page size */
-        return mainrampagesize;
-    }
-
-    /* If NUMA is disabled or the NUMA nodes are not backed with a
-     * memory-backend, then there is at least one node using "normal" RAM,
-     * so if its page size is smaller we have got to report that size instead.
-     */
-    if (hpsize > mainrampagesize &&
-        (ms->numa_state == NULL ||
-         ms->numa_state->num_nodes == 0 ||
-         ms->numa_state->nodes[0].node_memdev == NULL)) {
-        static bool warned;
-        if (!warned) {
-            error_report("Huge page support disabled (n/a for main memory).");
-            warned = true;
-        }
-        return mainrampagesize;
-    }
+    long hpsize;
+    Object *memdev_root = object_resolve_path("/objects", NULL);
 
+    object_child_foreach(memdev_root, find_min_backend_pagesize, &hpsize);
     return hpsize;
 }
 
 long qemu_maxrampagesize(void)
 {
-    long pagesize = qemu_mempath_getpagesize(mem_path);
+    long pagesize;
     Object *memdev_root = object_resolve_path("/objects", NULL);
 
-    if (memdev_root) {
-        object_child_foreach(memdev_root, find_max_backend_pagesize,
-                             &pagesize);
-    }
+    object_child_foreach(memdev_root, find_max_backend_pagesize, &pagesize);
     return pagesize;
 }
 #else
Since all RAM is backed by hostmem backends, drop
global -mem-path invariant and simplify code.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
CC: pbonzini@redhat.com
CC: rth@twiddle.net
---
 exec.c | 51 +++++----------------------------------------------
 1 file changed, 5 insertions(+), 46 deletions(-)
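Taking the review comments above into account, one possible shape for the two
simplified functions is sketched below. This is only a sketch in the context of
exec.c as shown in the diff, not necessarily how the patch was respun: the
LONG_MAX seed mirrors the deleted code, and using 0 as the floor for the max
fold is an assumption.

long qemu_minrampagesize(void)
{
    /* Seed the min fold so find_min_backend_pagesize() never reads an
     * indeterminate value. */
    long hpsize = LONG_MAX;
    Object *memdev_root = object_resolve_path("/objects", NULL);

    object_child_foreach(memdev_root, find_min_backend_pagesize, &hpsize);
    return hpsize;
}

long qemu_maxrampagesize(void)
{
    /* Assumed floor for the max fold; any backend page size is larger. */
    long pagesize = 0;
    Object *memdev_root = object_resolve_path("/objects", NULL);

    object_child_foreach(memdev_root, find_max_backend_pagesize, &pagesize);
    return pagesize;
}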