[PULL,08/69] hbitmap: handle set/reset with zero length

Message ID 20191028121501.15279-9-mreitz@redhat.com (mailing list archive)
State New, archived
Series [PULL,01/69] iotests: Prefer null-co over null-aio

Commit Message

Max Reitz Oct. 28, 2019, 12:14 p.m. UTC
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Passing a zero length to these functions leads to unpredictable results.
A zero-length set/reset may occur in active mirror on a zero-length
write (which is unlikely, but not guaranteed never to happen).

Let's just do nothing on a zero-length request.
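
For illustration only (not part of the patch): a minimal standalone sketch,
using nothing but the C standard library, of the arithmetic that goes wrong.
Both functions compute last = start + count - 1, so a zero count wraps
around and produces a nonsensical range.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Same computation as hbitmap_set()/hbitmap_reset() before the
     * early return: with count == 0, "last" wraps to start - 1
     * (UINT64_MAX when start == 0), i.e. a bogus [start, last] range. */
    uint64_t start = 0, count = 0;
    uint64_t last = start + count - 1;

    printf("last = 0x%016" PRIx64 "\n", last);
    return 0;
}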

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20191011090711.19940-2-vsementsov@virtuozzo.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 util/hbitmap.c | 8 ++++++++
 1 file changed, 8 insertions(+)
Patch

diff --git a/util/hbitmap.c b/util/hbitmap.c
index 66db87c6ff..242c6e519c 100644
--- a/util/hbitmap.c
+++ b/util/hbitmap.c
@@ -387,6 +387,10 @@ void hbitmap_set(HBitmap *hb, uint64_t start, uint64_t count)
     uint64_t first, n;
     uint64_t last = start + count - 1;
 
+    if (count == 0) {
+        return;
+    }
+
     trace_hbitmap_set(hb, start, count,
                       start >> hb->granularity, last >> hb->granularity);
 
@@ -478,6 +482,10 @@ void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count)
     uint64_t last = start + count - 1;
     uint64_t gran = 1ULL << hb->granularity;
 
+    if (count == 0) {
+        return;
+    }
+
     assert(QEMU_IS_ALIGNED(start, gran));
     assert(QEMU_IS_ALIGNED(count, gran) || (start + count == hb->orig_size));
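
A hedged usage sketch of the new behavior, assuming the in-tree hbitmap API
(hbitmap_alloc(), hbitmap_set(), hbitmap_reset(), hbitmap_count(),
hbitmap_free()) and a QEMU build environment; it is not part of this patch
or of the existing test suite, and would be called from a test harness.

#include "qemu/osdep.h"
#include "qemu/hbitmap.h"
#include <assert.h>

/* With this patch, a zero-length set/reset leaves the bitmap untouched
 * instead of operating on a wrapped-around range. */
static void check_zero_length_noop(void)
{
    HBitmap *hb = hbitmap_alloc(1024, 0);   /* 1024 bits, granularity 0 */

    hbitmap_set(hb, 0, 0);                  /* no-op */
    assert(hbitmap_count(hb) == 0);

    hbitmap_set(hb, 8, 8);                  /* set bits 8..15 */
    hbitmap_reset(hb, 8, 0);                /* no-op */
    assert(hbitmap_count(hb) == 8);

    hbitmap_free(hb);
}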