Message ID | 20171121110606.22809-1-jgross@suse.com (mailing list archive) |
---|---|
State | New, archived |
>>> On 21.11.17 at 12:06, <jgross@suse.com> wrote:
> The "special pages" for PVH guests include the frames for console and
> Xenstore ring buffers. Those have to be marked as "Reserved" in the
> guest's E820 map, as otherwise conflicts might arise later e.g. when
> hotplugging memory into the guest.

Afaict this differs from v1 only in no longer adding the extra entry
for HVM. How does this address the concerns raised on v1 wrt spec
compliance? v1 having caused problems with hvmloader should not
have resulted in simply excluding HVM here. That's even more so
because we mean HVM and PVH to converge in the long run - I'd
expect that to mean that no clear type distinction would exist
anymore on libxl.

If you want to reserve Xenstore ring buffer and console page,
why don't you reserve just the two (provided of course they
live outside of any [fake] PCI device BAR), which then ought to
also be compatible with plain HVM?

Jan
On 21/11/17 12:27, Jan Beulich wrote:
>>>> On 21.11.17 at 12:06, <jgross@suse.com> wrote:
>> The "special pages" for PVH guests include the frames for console and
>> Xenstore ring buffers. Those have to be marked as "Reserved" in the
>> guest's E820 map, as otherwise conflicts might arise later e.g. when
>> hotplugging memory into the guest.
>
> Afaict this differs from v1 only in no longer adding the extra entry
> for HVM. How does this address the concerns raised on v1 wrt spec
> compliance? v1 having caused problems with hvmloader should not
> have resulted in simply excluding HVM here. That's even more so
> because we mean HVM and PVH to converge in the long run - I'd
> expect that to mean that no clear type distinction would exist
> anymore on libxl.

The difference is that for HVM hvmloader is creating the additional
"Reserved" entry.

> If you want to reserve Xenstore ring buffer and console page,
> why don't you reserve just the two (provided of course they
> live outside of any [fake] PCI device BAR), which then ought to
> also be compatible with plain HVM?

For PVH the "mmio" area is starting with the LAPIC and extends up
to 4GB. And all of this area conflicts with hvmloader:

(d11) HVM Loader
(d11) Detected Xen v4.10-unstable
(d11) Fail to setup memory map due to conflict on dynamic reserved memory range.
(d11) *** HVMLoader bug at e820.c:52
(d11) *** HVMLoader crashed.

I guess this should be fixed, but I wanted the patch to be as small
as possible to minimize the risk for 4.10.

Juergen
>>> On 21.11.17 at 12:48, <jgross@suse.com> wrote:
> On 21/11/17 12:27, Jan Beulich wrote:
>>>>> On 21.11.17 at 12:06, <jgross@suse.com> wrote:
>>> The "special pages" for PVH guests include the frames for console and
>>> Xenstore ring buffers. Those have to be marked as "Reserved" in the
>>> guest's E820 map, as otherwise conflicts might arise later e.g. when
>>> hotplugging memory into the guest.
>>
>> Afaict this differs from v1 only in no longer adding the extra entry
>> for HVM. How does this address the concerns raised on v1 wrt spec
>> compliance? v1 having caused problems with hvmloader should not
>> have resulted in simply excluding HVM here. That's even more so
>> because we mean HVM and PVH to converge in the long run - I'd
>> expect that to mean that no clear type distinction would exist
>> anymore on libxl.
>
> The difference is for HVM the HVMloader is creating the additional
> "Reserved" entry.
>
>> If you want to reserve Xenstore ring buffer and console page,
>> why don't you reserve just the two (provided of course they
>> live outside of any [fake] PCI device BAR), which then ought to
>> also be compatible with plain HVM?
>
> For PVH the "mmio" area is starting with the LAPIC and extends up
> to 4GB.

Oh, I see - that's probably okay then (or at least as okay as libxl
having knowledge of the LAPIC base address in the first place).

Jan
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index 5f91fe4f92..d82013f6ed 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -530,6 +530,9 @@ int libxl__arch_domain_construct_memmap(libxl__gc *gc,
         if (d_config->rdms[i].policy != LIBXL_RDM_RESERVE_POLICY_INVALID)
             e820_entries++;

+    /* Add mmio entry for PVH. */
+    if (dom->mmio_size && d_config->b_info.type == LIBXL_DOMAIN_TYPE_PVH)
+        e820_entries++;

     /* If we should have a highmem range. */
     if (highmem_size)
@@ -564,6 +567,14 @@ int libxl__arch_domain_construct_memmap(libxl__gc *gc,
         nr++;
     }

+    /* mmio area */
+    if (dom->mmio_size && d_config->b_info.type == LIBXL_DOMAIN_TYPE_PVH) {
+        e820[nr].addr = dom->mmio_start;
+        e820[nr].size = dom->mmio_size;
+        e820[nr].type = E820_RESERVED;
+        nr++;
+    }
+
     for (i = 0; i < MAX_ACPI_MODULES; i++) {
         if (dom->acpi_modules[i].length) {
             e820[nr].addr = dom->acpi_modules[i].guest_addr_out & ~(page_size - 1);
The "special pages" for PVH guests include the frames for console and
Xenstore ring buffers. Those have to be marked as "Reserved" in the
guest's E820 map, as otherwise conflicts might arise later e.g. when
hotplugging memory into the guest.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
This is a bugfix for PVH guests. Please consider for 4.10.
---
 tools/libxl/libxl_x86.c | 11 +++++++++++
 1 file changed, 11 insertions(+)