virtio-gpu: handle partial maps properly

Message ID 20210506091001.1301250-1-kraxel@redhat.com (mailing list archive)
State New, archived
Series virtio-gpu: handle partial maps properly

Commit Message

Gerd Hoffmann May 6, 2021, 9:10 a.m. UTC
dma_memory_map() may map only a part of the request.  This happens if the
request can't be mapped in one go, for example due to an IOMMU creating
a linear dma mapping for scattered physical pages.  Should that be the
case, virtio-gpu must call dma_memory_map() again with the remaining
range instead of simply throwing an error.

Note that this change implies the number of iov entries may differ from
the number of mapping entries sent by the guest.  Therefore the iov_len
bookkeeping needs some updates too; we now have to explicitly pass around
the iov length.

Reported-by: Auger Eric <eric.auger@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 include/hw/virtio/virtio-gpu.h |  3 +-
 hw/display/virtio-gpu-3d.c     |  7 ++--
 hw/display/virtio-gpu.c        | 75 ++++++++++++++++++++--------------
 3 files changed, 51 insertions(+), 34 deletions(-)
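
The core of the change is a retry loop around dma_memory_map(): when the
mapping comes back shorter than requested, the remainder is mapped in the
next iteration and one iov entry is recorded per mapped piece.  A condensed
sketch of that pattern (error cleanup and the chunked array growth are
trimmed here; variable names match the patch below):

    /* map the guest range [a, a+l) in as many pieces as needed */
    while (l > 0) {
        hwaddr len = l;   /* in: requested length, out: length actually mapped */
        void *map = dma_memory_map(VIRTIO_DEVICE(g)->dma_as, a, &len,
                                   DMA_DIRECTION_TO_DEVICE);
        if (!map) {
            return -1;    /* the real code first unmaps the pieces mapped so far */
        }
        (*iov)[v].iov_base = map;   /* one iov entry per mapped piece */
        (*iov)[v].iov_len = len;
        v++;
        a += len;
        l -= len;
    }
    *niov = v;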

Comments

no-reply@patchew.org May 6, 2021, 9:30 a.m. UTC | #1
Patchew URL: https://patchew.org/QEMU/20210506091001.1301250-1-kraxel@redhat.com/



Hi,

This series seems to have some coding style problems. See output below for
more information:

Type: series
Message-id: 20210506091001.1301250-1-kraxel@redhat.com
Subject: [PATCH] virtio-gpu: handle partial maps properly

=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
 * [new tag]         patchew/20210506091001.1301250-1-kraxel@redhat.com -> patchew/20210506091001.1301250-1-kraxel@redhat.com
Switched to a new branch 'test'
455d3f4 virtio-gpu: handle partial maps properly

=== OUTPUT BEGIN ===
WARNING: line over 80 characters
#42: FILE: hw/display/virtio-gpu-3d.c:292:
+    ret = virtio_gpu_create_mapping_iov(g, &att_rb, cmd, NULL, &res_iovs, &res_niov);

WARNING: line over 80 characters
#114: FILE: hw/display/virtio-gpu.c:652:
+                qemu_log_mask(LOG_GUEST_ERROR, "%s: failed to map MMIO memory for"

ERROR: line over 90 characters
#161: FILE: hw/display/virtio-gpu.c:737:
+    ret = virtio_gpu_create_mapping_iov(g, &ab, cmd, &res->addrs, &res->iov, &res->iov_cnt);

total: 1 errors, 2 warnings, 141 lines checked

Commit 455d3f4a94d5 (virtio-gpu: handle partial maps properly) has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
=== OUTPUT END ===

Test command exited with code: 1


The full log is available at
http://patchew.org/logs/20210506091001.1301250-1-kraxel@redhat.com/testing.checkpatch/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com

Eric Auger May 6, 2021, 11:54 a.m. UTC | #2
Hi Gerd,

On 5/6/21 11:10 AM, Gerd Hoffmann wrote:
> dma_memory_map() may map only a part of the request.  This happens if the
> request can't be mapped in one go, for example due to an IOMMU creating
> a linear dma mapping for scattered physical pages.  Should that be the
> case, virtio-gpu must call dma_memory_map() again with the remaining
> range instead of simply throwing an error.
> 
> Note that this change implies the number of iov entries may differ from
> the number of mapping entries sent by the guest.  Therefore the iov_len
> bookkeeping needs some updates too; we now have to explicitly pass around
> the iov length.
> 
> Reported-by: Auger Eric <eric.auger@redhat.com>
> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> ---
>  include/hw/virtio/virtio-gpu.h |  3 +-
>  hw/display/virtio-gpu-3d.c     |  7 ++--
>  hw/display/virtio-gpu.c        | 75 ++++++++++++++++++++--------------
>  3 files changed, 51 insertions(+), 34 deletions(-)
> 
> diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
> index fae149235c58..0d15af41d96d 100644
> --- a/include/hw/virtio/virtio-gpu.h
> +++ b/include/hw/virtio/virtio-gpu.h
> @@ -209,7 +209,8 @@ void virtio_gpu_get_edid(VirtIOGPU *g,
>  int virtio_gpu_create_mapping_iov(VirtIOGPU *g,
>                                    struct virtio_gpu_resource_attach_backing *ab,
>                                    struct virtio_gpu_ctrl_command *cmd,
> -                                  uint64_t **addr, struct iovec **iov);
> +                                  uint64_t **addr, struct iovec **iov,
> +                                  uint32_t *niov);
>  void virtio_gpu_cleanup_mapping_iov(VirtIOGPU *g,
>                                      struct iovec *iov, uint32_t count);
>  void virtio_gpu_process_cmdq(VirtIOGPU *g);
> diff --git a/hw/display/virtio-gpu-3d.c b/hw/display/virtio-gpu-3d.c
> index d98964858e13..72c14d91324b 100644
> --- a/hw/display/virtio-gpu-3d.c
> +++ b/hw/display/virtio-gpu-3d.c
> @@ -283,22 +283,23 @@ static void virgl_resource_attach_backing(VirtIOGPU *g,
>  {
>      struct virtio_gpu_resource_attach_backing att_rb;
>      struct iovec *res_iovs;
> +    uint32_t res_niov;
>      int ret;
>  
>      VIRTIO_GPU_FILL_CMD(att_rb);
>      trace_virtio_gpu_cmd_res_back_attach(att_rb.resource_id);
>  
> -    ret = virtio_gpu_create_mapping_iov(g, &att_rb, cmd, NULL, &res_iovs);
> +    ret = virtio_gpu_create_mapping_iov(g, &att_rb, cmd, NULL, &res_iovs, &res_niov);
>      if (ret != 0) {
>          cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>          return;
>      }
>  
>      ret = virgl_renderer_resource_attach_iov(att_rb.resource_id,
> -                                             res_iovs, att_rb.nr_entries);
> +                                             res_iovs, res_niov);
>  
>      if (ret != 0)
> -        virtio_gpu_cleanup_mapping_iov(g, res_iovs, att_rb.nr_entries);
> +        virtio_gpu_cleanup_mapping_iov(g, res_iovs, res_niov);
>  }
>  
>  static void virgl_resource_detach_backing(VirtIOGPU *g,
> diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
> index c9f5e36fd076..1dd3648f32a3 100644
> --- a/hw/display/virtio-gpu.c
> +++ b/hw/display/virtio-gpu.c
> @@ -608,11 +608,12 @@ static void virtio_gpu_set_scanout(VirtIOGPU *g,
>  int virtio_gpu_create_mapping_iov(VirtIOGPU *g,
>                                    struct virtio_gpu_resource_attach_backing *ab,
>                                    struct virtio_gpu_ctrl_command *cmd,
> -                                  uint64_t **addr, struct iovec **iov)
> +                                  uint64_t **addr, struct iovec **iov,
> +                                  uint32_t *niov)
>  {
>      struct virtio_gpu_mem_entry *ents;
>      size_t esize, s;
> -    int i;
> +    int e, v;
>  
>      if (ab->nr_entries > 16384) {
>          qemu_log_mask(LOG_GUEST_ERROR,
> @@ -633,37 +634,53 @@ int virtio_gpu_create_mapping_iov(VirtIOGPU *g,
>          return -1;
>      }
>  
> -    *iov = g_malloc0(sizeof(struct iovec) * ab->nr_entries);
> +    *iov = NULL;
>      if (addr) {
> -        *addr = g_malloc0(sizeof(uint64_t) * ab->nr_entries);
> +        *addr = NULL;
>      }
> -    for (i = 0; i < ab->nr_entries; i++) {
> -        uint64_t a = le64_to_cpu(ents[i].addr);
> -        uint32_t l = le32_to_cpu(ents[i].length);
> -        hwaddr len = l;
> -        (*iov)[i].iov_base = dma_memory_map(VIRTIO_DEVICE(g)->dma_as,
> -                                            a, &len, DMA_DIRECTION_TO_DEVICE);
> -        (*iov)[i].iov_len = len;
> -        if (addr) {
> -            (*addr)[i] = a;
> -        }
> -        if (!(*iov)[i].iov_base || len != l) {
> -            qemu_log_mask(LOG_GUEST_ERROR, "%s: failed to map MMIO memory for"
> -                          " resource %d element %d\n",
> -                          __func__, ab->resource_id, i);
> -            if ((*iov)[i].iov_base) {
> -                i++; /* cleanup the 'i'th map */
> +    for (e = 0, v = 0; e < ab->nr_entries; e++) {
> +        uint64_t a = le64_to_cpu(ents[e].addr);
> +        uint32_t l = le32_to_cpu(ents[e].length);
> +        hwaddr len;
> +        void *map;
> +
> +        do {
> +            len = l;
> +            map = dma_memory_map(VIRTIO_DEVICE(g)->dma_as,
> +                                 a, &len, DMA_DIRECTION_TO_DEVICE);
> +            if (!map) {
> +                qemu_log_mask(LOG_GUEST_ERROR, "%s: failed to map MMIO memory for"
> +                              " resource %d element %d\n",
> +                              __func__, ab->resource_id, e);
> +                virtio_gpu_cleanup_mapping_iov(g, *iov, v);
> +                g_free(ents);
> +                *iov = NULL;
> +                if (addr) {
> +                    g_free(*addr);
> +                    *addr = NULL;
> +                }
> +                return -1;
>              }
> -            virtio_gpu_cleanup_mapping_iov(g, *iov, i);
> -            g_free(ents);
> -            *iov = NULL;
> +
> +            if (!(v % 16)) {
> +                *iov = g_realloc(*iov, sizeof(struct iovec) * (v + 16));
> +                if (addr) {
> +                    *addr = g_realloc(*addr, sizeof(uint64_t) * (v + 16));
nit: just wondering why you chose to do the allocation in slices of 16
instead of doing the usual allocation at the beginning and re-allocating
the iov when l != len. Do you think the perf will be better with this
implementation? It looks like the guest does concatenation quite often,
so most probably your implementation is best.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>

Thanks!

Eric
> +                }
> +            }
> +            (*iov)[v].iov_base = map;
> +            (*iov)[v].iov_len = len;
>              if (addr) {
> -                g_free(*addr);
> -                *addr = NULL;
> +                (*addr)[v] = a;
>              }
> -            return -1;
> -        }
> +
> +            a += len;
> +            l -= len;
> +            v += 1;
> +        } while (l > 0);
>      }
> +    *niov = v;
> +
>      g_free(ents);
>      return 0;
>  }
> @@ -717,13 +734,11 @@ virtio_gpu_resource_attach_backing(VirtIOGPU *g,
>          return;
>      }
>  
> -    ret = virtio_gpu_create_mapping_iov(g, &ab, cmd, &res->addrs, &res->iov);
> +    ret = virtio_gpu_create_mapping_iov(g, &ab, cmd, &res->addrs, &res->iov, &res->iov_cnt);
>      if (ret != 0) {
>          cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>          return;
>      }
> -
> -    res->iov_cnt = ab.nr_entries;
>  }
>  
>  static void
>

Gerd Hoffmann May 6, 2021, 12:21 p.m. UTC | #3
Hi,

> > +            if (!(v % 16)) {
> > +                *iov = g_realloc(*iov, sizeof(struct iovec) * (v + 16));
> > +                if (addr) {
> > +                    *addr = g_realloc(*addr, sizeof(uint64_t) * (v + 16));
> nit: just wondering why you chose to do the allocation in slices of 16
> instead of doing the usual allocation at the beginning and re-allocating
> the iov when l != len.

It's unknown in advance how many iov entries I'll need, so I allocate
them on demand.  To avoid one (or two) realloc calls on every single
loop iteration, I allocate in chunks.

The chunk size is 16 entries; it would also work with smaller or larger
chunks.  It's a tradeoff between realloc overhead (smaller chunks) and
wasted memory (larger chunks).
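
Roughly, the growth step looks like this (same logic as the patch; the
IOV_CHUNK constant is introduced here just for illustration, the patch
uses a literal 16):

    enum { IOV_CHUNK = 16 };

    /* grow the arrays only when v crosses a chunk boundary */
    if (!(v % IOV_CHUNK)) {
        *iov = g_realloc(*iov, sizeof(struct iovec) * (v + IOV_CHUNK));
        if (addr) {
            *addr = g_realloc(*addr, sizeof(uint64_t) * (v + IOV_CHUNK));
        }
    }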

take care,
  Gerd

Patch

diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index fae149235c58..0d15af41d96d 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -209,7 +209,8 @@  void virtio_gpu_get_edid(VirtIOGPU *g,
 int virtio_gpu_create_mapping_iov(VirtIOGPU *g,
                                   struct virtio_gpu_resource_attach_backing *ab,
                                   struct virtio_gpu_ctrl_command *cmd,
-                                  uint64_t **addr, struct iovec **iov);
+                                  uint64_t **addr, struct iovec **iov,
+                                  uint32_t *niov);
 void virtio_gpu_cleanup_mapping_iov(VirtIOGPU *g,
                                     struct iovec *iov, uint32_t count);
 void virtio_gpu_process_cmdq(VirtIOGPU *g);
diff --git a/hw/display/virtio-gpu-3d.c b/hw/display/virtio-gpu-3d.c
index d98964858e13..72c14d91324b 100644
--- a/hw/display/virtio-gpu-3d.c
+++ b/hw/display/virtio-gpu-3d.c
@@ -283,22 +283,23 @@  static void virgl_resource_attach_backing(VirtIOGPU *g,
 {
     struct virtio_gpu_resource_attach_backing att_rb;
     struct iovec *res_iovs;
+    uint32_t res_niov;
     int ret;
 
     VIRTIO_GPU_FILL_CMD(att_rb);
     trace_virtio_gpu_cmd_res_back_attach(att_rb.resource_id);
 
-    ret = virtio_gpu_create_mapping_iov(g, &att_rb, cmd, NULL, &res_iovs);
+    ret = virtio_gpu_create_mapping_iov(g, &att_rb, cmd, NULL, &res_iovs, &res_niov);
     if (ret != 0) {
         cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
         return;
     }
 
     ret = virgl_renderer_resource_attach_iov(att_rb.resource_id,
-                                             res_iovs, att_rb.nr_entries);
+                                             res_iovs, res_niov);
 
     if (ret != 0)
-        virtio_gpu_cleanup_mapping_iov(g, res_iovs, att_rb.nr_entries);
+        virtio_gpu_cleanup_mapping_iov(g, res_iovs, res_niov);
 }
 
 static void virgl_resource_detach_backing(VirtIOGPU *g,
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index c9f5e36fd076..1dd3648f32a3 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -608,11 +608,12 @@  static void virtio_gpu_set_scanout(VirtIOGPU *g,
 int virtio_gpu_create_mapping_iov(VirtIOGPU *g,
                                   struct virtio_gpu_resource_attach_backing *ab,
                                   struct virtio_gpu_ctrl_command *cmd,
-                                  uint64_t **addr, struct iovec **iov)
+                                  uint64_t **addr, struct iovec **iov,
+                                  uint32_t *niov)
 {
     struct virtio_gpu_mem_entry *ents;
     size_t esize, s;
-    int i;
+    int e, v;
 
     if (ab->nr_entries > 16384) {
         qemu_log_mask(LOG_GUEST_ERROR,
@@ -633,37 +634,53 @@  int virtio_gpu_create_mapping_iov(VirtIOGPU *g,
         return -1;
     }
 
-    *iov = g_malloc0(sizeof(struct iovec) * ab->nr_entries);
+    *iov = NULL;
     if (addr) {
-        *addr = g_malloc0(sizeof(uint64_t) * ab->nr_entries);
+        *addr = NULL;
     }
-    for (i = 0; i < ab->nr_entries; i++) {
-        uint64_t a = le64_to_cpu(ents[i].addr);
-        uint32_t l = le32_to_cpu(ents[i].length);
-        hwaddr len = l;
-        (*iov)[i].iov_base = dma_memory_map(VIRTIO_DEVICE(g)->dma_as,
-                                            a, &len, DMA_DIRECTION_TO_DEVICE);
-        (*iov)[i].iov_len = len;
-        if (addr) {
-            (*addr)[i] = a;
-        }
-        if (!(*iov)[i].iov_base || len != l) {
-            qemu_log_mask(LOG_GUEST_ERROR, "%s: failed to map MMIO memory for"
-                          " resource %d element %d\n",
-                          __func__, ab->resource_id, i);
-            if ((*iov)[i].iov_base) {
-                i++; /* cleanup the 'i'th map */
+    for (e = 0, v = 0; e < ab->nr_entries; e++) {
+        uint64_t a = le64_to_cpu(ents[e].addr);
+        uint32_t l = le32_to_cpu(ents[e].length);
+        hwaddr len;
+        void *map;
+
+        do {
+            len = l;
+            map = dma_memory_map(VIRTIO_DEVICE(g)->dma_as,
+                                 a, &len, DMA_DIRECTION_TO_DEVICE);
+            if (!map) {
+                qemu_log_mask(LOG_GUEST_ERROR, "%s: failed to map MMIO memory for"
+                              " resource %d element %d\n",
+                              __func__, ab->resource_id, e);
+                virtio_gpu_cleanup_mapping_iov(g, *iov, v);
+                g_free(ents);
+                *iov = NULL;
+                if (addr) {
+                    g_free(*addr);
+                    *addr = NULL;
+                }
+                return -1;
             }
-            virtio_gpu_cleanup_mapping_iov(g, *iov, i);
-            g_free(ents);
-            *iov = NULL;
+
+            if (!(v % 16)) {
+                *iov = g_realloc(*iov, sizeof(struct iovec) * (v + 16));
+                if (addr) {
+                    *addr = g_realloc(*addr, sizeof(uint64_t) * (v + 16));
+                }
+            }
+            (*iov)[v].iov_base = map;
+            (*iov)[v].iov_len = len;
             if (addr) {
-                g_free(*addr);
-                *addr = NULL;
+                (*addr)[v] = a;
             }
-            return -1;
-        }
+
+            a += len;
+            l -= len;
+            v += 1;
+        } while (l > 0);
     }
+    *niov = v;
+
     g_free(ents);
     return 0;
 }
@@ -717,13 +734,11 @@  virtio_gpu_resource_attach_backing(VirtIOGPU *g,
         return;
     }
 
-    ret = virtio_gpu_create_mapping_iov(g, &ab, cmd, &res->addrs, &res->iov);
+    ret = virtio_gpu_create_mapping_iov(g, &ab, cmd, &res->addrs, &res->iov, &res->iov_cnt);
     if (ret != 0) {
         cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
         return;
     }
-
-    res->iov_cnt = ab.nr_entries;
 }
 
 static void