| Message ID | 20230312092244.451465-13-ray.huang@amd.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | Add VirtIO GPU and Passthrough GPU support on Xen |
On Sun, 12 Mar 2023, Huang Rui wrote:
> The xen_map_cache function wants to pass the offset and size of this
> memory block as input parameters to map the host virtual address.
> However, block->offset is as large as 0x100000000 (4G); if we assign the
> size as block->max_length (0x110000000), the mapped host address will go
> beyond block->max_length and can easily overflow.

Hi Ray,

Is this patch still required after all the other fixes? If it is
required, where is the overflow that it is trying to prevent? Is it a
failure in the hypercall mapping the memory to QEMU
(hw/i386/xen/xen-mapcache.c:xen_remap_bucket)?

> We have to assign the size as (block->max_length - block->offset), which
> ensures the address stays within the legal range inside max_length.
>
> {rcu = {next = 0x0, func = 0x0}, mr = 0x55555681b620, host = 0x0,
>  colo_cache = 0x0, offset = 0x100000000, used_length = 0x110000000,
>  max_length = 0x110000000, resized = 0x0, flags = 0x10, idstr = {0x78,
>  0x65, 0x6e, 0x2e, 0x72, 0x61, 0x6d, 0x0 <repeats 249 times>}, next = {
>  le_next = 0x5555568c61b0, le_prev = 0x55555681c640},
>  ramblock_notifiers = {lh_first = 0x0}, fd = 0xffffffff, page_size =
>  0x1000, bmap = 0x0, receivedmap = 0x0, clear_bmap = 0x0,
>  clear_bmap_shift = 0x0, postcopy_length = 0x0}
>
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> ---
>  softmmu/physmem.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
> index 1b606a3002..1b0bb35da9 100644
> --- a/softmmu/physmem.c
> +++ b/softmmu/physmem.c
> @@ -2304,7 +2304,7 @@ void *qemu_map_ram_ptr(RAMBlock *ram_block, ram_addr_t addr)
>              return xen_map_cache(addr, 0, 0, false);
>          }
>
> -        block->host = xen_map_cache(block->offset, block->max_length, 1, false);
> +        block->host = xen_map_cache(block->offset, block->max_length, 1, false);

Coding style: indentation is 4 spaces. In any case, this looks like a
spurious change?

>      }
>      return ramblock_ptr(block, addr);
>  }
> @@ -2337,7 +2337,8 @@ static void *qemu_ram_ptr_length(RAMBlock *ram_block, ram_addr_t addr,
>              return xen_map_cache(addr, *size, lock, lock);
>          }
>
> -        block->host = xen_map_cache(block->offset, block->max_length, 1, lock);
> +        block->host = xen_map_cache(block->offset,
> +                                    block->max_length - block->offset, 1, lock);
>      }
>      return ramblock_ptr(block, addr);

block->offset is the address of the beginning of the block, and
block->max_length is the size.

Here the behavior is theoretically correct: if block->host is not set
(the block is not mapped in QEMU yet), then call xen_map_cache to map
the entire block from beginning to end, setting block->host to a
pointer to the beginning of the mapped area in QEMU. From that point
onward, ramblock_ptr() will behave correctly.

Of course, if xen_map_cache fails to map the entire region at once,
because it is too large or because of some other error, then we have a
big problem. But I think in that case this patch would still cause
issues.

In this example, offset (the start of the ramblock) is 0x100000000 and
max_length (the size of the ramblock) is 0x110000000. So with this
change we are mapping 0x110000000 - 0x100000000 = 0x10000000, which is
only the first 256MB of a region that is larger than 4GB.

What happens the next time qemu_ram_ptr_length is called for an address
above the first 256MB? It will break: because block->host != NULL, the
function will behave as if the entire ramblock were mapped in QEMU,
while only the first 256MB are. ramblock_ptr will return block->host +
something-more-than-256MB, which is actually invalid.

I think we would need something more along these lines, where we fall
back to temporary mappings of a smaller region if we can't map it all
at once. MAX_SIZE would be the maximum size for which a single mapping
still succeeds, maybe 4GB?

    if (block->offset == 0 || block->max_length > MAX_SIZE) {
        return xen_map_cache(addr, *size, lock, lock);
    }

Otherwise, maybe the error could be due to max_length being incorrect
to begin with?
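To make the 256MB truncation concrete, the following stand-alone C sketch
reproduces the arithmetic above, using the offset and max_length values
from the quoted gdb dump of the xen.ram block. FakeRAMBlock is a
stripped-down stand-in for QEMU's RAMBlock, and ramblock_ptr() is modelled
here simply as block->host plus the offset within the block; both are
illustrative assumptions rather than QEMU's actual definitions.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Stripped-down stand-in for QEMU's RAMBlock (illustrative only). */
    typedef struct {
        uint64_t offset;      /* start of the block in ram_addr_t space */
        uint64_t max_length;  /* size of the block */
    } FakeRAMBlock;

    int main(void)
    {
        /* Values taken from the quoted gdb dump of the xen.ram block. */
        FakeRAMBlock block = {
            .offset     = 0x100000000ULL,   /* 4GB    */
            .max_length = 0x110000000ULL,   /* 4.25GB */
        };

        /* Size requested by the patched call: max_length - offset. */
        uint64_t mapped = block.max_length - block.offset;

        /* A valid offset within the block, but above the first 256MB. */
        uint64_t addr = 0x20000000ULL;      /* 512MB into the block */

        printf("size mapped by the patched call: 0x%" PRIx64 " (%" PRIu64 "MB)\n",
               mapped, mapped >> 20);
        printf("offset being accessed:           0x%" PRIx64 " (%" PRIu64 "MB)\n",
               addr, addr >> 20);

        /*
         * ramblock_ptr(block, addr) is modelled as block->host + addr
         * (offset within the block), so any addr >= mapped lands outside
         * the mapped area even though it is still inside max_length.
         */
        if (addr >= mapped && addr < block.max_length) {
            printf("block->host + 0x%" PRIx64 " is %" PRIu64 "MB past the end "
                   "of the mapping -> invalid pointer\n",
                   addr, (addr - mapped) >> 20);
        }
        return 0;
    }

Built with any C compiler, this reports that only 0x10000000 bytes (256MB)
get mapped, so an access 512MB into the block computes a pointer 256MB past
the end of the mapping. That is the breakage described above, and why the
email suggests either falling back to temporary xen_map_cache mappings (the
MAX_SIZE check) or double-checking max_length instead.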