
hw/pvrdma: Protect against buggy or malicious guest driver

Message ID 20230227133511.5913-1-yuval.shaia.ml@gmail.com (mailing list archive)
State New, archived
Series hw/pvrdma: Protect against buggy or malicious guest driver

Commit Message

Yuval Shaia Feb. 27, 2023, 1:35 p.m. UTC
The guest driver allocates and initializes page tables to be used as a
ring of descriptors for CQ and async events.
The page table that represents the ring, along with the number of pages
in the page table, is passed to the device.
Currently our device supports only one page table per ring.

Let's make sure that the number of page table entries the driver
reports does not exceed the capacity of a single page table.

Signed-off-by: Yuval Shaia <yuval.shaia.ml@gmail.com>
---
 hw/rdma/vmw/pvrdma_main.c | 8 ++++++++
 1 file changed, 8 insertions(+)

Patch

diff --git a/hw/rdma/vmw/pvrdma_main.c b/hw/rdma/vmw/pvrdma_main.c
index 4fc6712025..e84d68a81f 100644
--- a/hw/rdma/vmw/pvrdma_main.c
+++ b/hw/rdma/vmw/pvrdma_main.c
@@ -98,12 +98,20 @@  static int init_dev_ring(PvrdmaRing *ring, PvrdmaRingState **ring_state,
         return -EINVAL;
     }
 
+    if (num_pages > TARGET_PAGE_SIZE / sizeof(dma_addr_t)) {
+        rdma_error_report("Maximum pages on a single directory must not exceed %zu",
+                          TARGET_PAGE_SIZE / sizeof(dma_addr_t));
+        return -EINVAL;
+    }
+
+
     dir = rdma_pci_dma_map(pci_dev, dir_addr, TARGET_PAGE_SIZE);
     if (!dir) {
         rdma_error_report("Failed to map to page directory (ring %s)", name);
         rc = -ENOMEM;
         goto out;
     }
+
+    /* We support only one page table for a ring */
     tbl = rdma_pci_dma_map(pci_dev, dir[0], TARGET_PAGE_SIZE);
     if (!tbl) {
         rdma_error_report("Failed to map to page table (ring %s)", name);