[PULL,1/3] xen/disk: don't leak stack data via response ring

Message ID 1498601083-11799-1-git-send-email-sstabellini@kernel.org (mailing list archive)
State New, archived

Commit Message

Stefano Stabellini June 27, 2017, 10:04 p.m. UTC
Rather than constructing a local structure instance on the stack, fill
the fields directly on the shared ring, just like other (Linux)
backends do. Build on the fact that all response structure flavors are
actually identical (aside from alignment and padding at the end).

This is XSA-216.

Reported by: Anthony Perard <anthony.perard@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Anthony PERARD <anthony.perard@citrix.com>
---
 hw/block/xen_disk.c | 25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)
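
To illustrate the class of bug being fixed: the pre-patch code built a blkif_response_t on the stack and memcpy()'d the whole structure, padding included, onto the guest-visible ring, so whatever stack bytes happened to sit in the padding were disclosed to the guest. Below is a minimal, self-contained sketch of the two patterns; the demo struct is a simplified stand-in, since the real layouts come from the Xen blkif protocol headers and, as the commit message notes, differ per flavor only in alignment and trailing padding.

#include <stdint.h>
#include <string.h>

/* Simplified stand-in for blkif_response_t; the real definitions live in
 * the Xen blkif protocol headers. */
typedef struct {
    uint64_t id;
    uint8_t  operation;
    int16_t  status;     /* the compiler inserts padding around these */
} demo_response_t;

/* Pre-patch pattern: 'resp' lives on the stack, its padding bytes are
 * never initialized, and memcpy() publishes them to the shared ring. */
static void respond_leaky(demo_response_t *ring_slot,
                          uint64_t id, uint8_t op, int16_t status)
{
    demo_response_t resp;
    resp.id        = id;
    resp.operation = op;
    resp.status    = status;
    memcpy(ring_slot, &resp, sizeof(resp));   /* copies the padding too */
}

/* Post-patch pattern: only the named fields are written in place, so no
 * uninitialized stack bytes ever reach the guest-visible page. */
static void respond_in_place(demo_response_t *ring_slot,
                             uint64_t id, uint8_t op, int16_t status)
{
    ring_slot->id        = id;
    ring_slot->operation = op;
    ring_slot->status    = status;
}

The patch below applies the second pattern to all three protocol flavors, writing id, operation and status straight into the slot returned by RING_GET_RESPONSE().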

Comments

Michael Tokarev Sept. 23, 2017, 4:05 p.m. UTC | #1
28.06.2017 01:04, Stefano Stabellini wrote:
> Rather than constructing a local structure instance on the stack, fill
> the fields directly on the shared ring, just like other (Linux)
> backends do. Build on the fact that all response structure flavors are
> actually identical (aside from alignment and padding at the end).
> 
> This is XSA-216.
> 
> Reported by: Anthony Perard <anthony.perard@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
> Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Reportedly, after this patch, HVM DomUs running with qemu-system-i386
(note i386, not x86_64) are leaking memory, and the host is running out of
memory rather fast.  See for example https://bugs.debian.org/871702

I've asked for details, let's see...

For one, I've no idea how Xen HVM works, or whether (and how) the
-i386 version can be chosen in the config.

Thanks,

/mjt
Michael Tokarev Sept. 24, 2017, 9:18 a.m. UTC | #2
23.09.2017 19:05, Michael Tokarev wrote:
> 28.06.2017 01:04, Stefano Stabellini wrote:
>> Rather than constructing a local structure instance on the stack, fill
>> the fields directly on the shared ring, just like other (Linux)
>> backends do. Build on the fact that all response structure flavors are
>> actually identical (aside from alignment and padding at the end).
>>
>> This is XSA-216.
>>
>> Reported by: Anthony Perard <anthony.perard@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
>> Acked-by: Anthony PERARD <anthony.perard@citrix.com>
> 
> Reportedly, after this patch, HVM DomUs running with qemu-system-i386
> (note i386, not x86_64) are leaking memory, and the host is running out of
> memory rather fast.  See for example https://bugs.debian.org/871702

Looks like this is a false alarm, the problem actually is with
04bf2526ce87f21b32c9acba1c5518708c243ad0 (exec: use qemu_ram_ptr_length
to access guest ram) without f5aa69bdc3418773f26747ca282c291519626ece
(exec: Add lock parameter to qemu_ram_ptr_length).

I applied only 04bf2526ce87f to 2.8, without realizing that we also
need f5aa69bdc3418.

Now when I try to backport f5aa69bdc3418 to 2.8 (on top of 04bf2526ce87f),
I run into some interesting logic: without also applying 1ff7c5986a515d2d936eba0
(xen/mapcache: store dma information in revmapcache entries for debugging),
the arguments to xen_map_cache() in qemu_ram_ptr_length() in these two patches
are quite fun... :)

Thanks,

/mjt
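
Context for the backport issue above: a rough sketch of the signature mismatch. The prototypes here are assumptions recalled from the upstream commits, not taken from this thread, and should be verified against the actual trees when backporting.

/* Types stubbed out so the sketch stands alone; the real declarations are
 * in QEMU's xen-mapcache header. */
#include <stdint.h>

typedef uint64_t hwaddr;   /* stand-in for QEMU's hwaddr typedef */

/* Roughly what a plain 2.8 tree declares (before 1ff7c5986a51): */
uint8_t *xen_map_cache(hwaddr phys_addr, hwaddr size, uint8_t lock);

/*
 * 1ff7c5986a51 ("xen/mapcache: store dma information in revmapcache
 * entries for debugging") adds an extra flag, roughly:
 *
 *     uint8_t *xen_map_cache(hwaddr phys_addr, hwaddr size, uint8_t lock,
 *                            bool dma);
 *
 * f5aa69bdc3418 ("exec: Add lock parameter to qemu_ram_ptr_length") was
 * written on top of that, so the xen_map_cache() calls in its
 * qemu_ram_ptr_length() hunk pass four arguments.  Backported onto plain
 * 2.8 without 1ff7c5986a51, those calls have one argument too many,
 * which is the mismatch Michael describes above.
 */
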
Stefano Stabellini Sept. 25, 2017, 10:48 p.m. UTC | #3
On Sun, 24 Sep 2017, Michael Tokarev wrote:
> 23.09.2017 19:05, Michael Tokarev wrote:
> > 28.06.2017 01:04, Stefano Stabellini wrote:
> >> Rather than constructing a local structure instance on the stack, fill
> >> the fields directly on the shared ring, just like other (Linux)
> >> backends do. Build on the fact that all response structure flavors are
> >> actually identical (aside from alignment and padding at the end).
> >>
> >> This is XSA-216.
> >>
> >> Reported by: Anthony Perard <anthony.perard@citrix.com>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >> Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
> >> Acked-by: Anthony PERARD <anthony.perard@citrix.com>
> > 
> > Reportedly, after this patch, HVM DomUs running with qemu-system-i386
> > (note i386, not x86_64) are leaking memory, and the host is running out of
> > memory rather fast.  See for example https://bugs.debian.org/871702
> 
> Looks like this is a false alarm, the problem actually is with
> 04bf2526ce87f21b32c9acba1c5518708c243ad0 (exec: use qemu_ram_ptr_length
> to access guest ram) without f5aa69bdc3418773f26747ca282c291519626ece
> (exec: Add lock parameter to qemu_ram_ptr_length).
> 
> I applied only 04bf2526ce87f to 2.8, without realizing that we also
> need f5aa69bdc3418.
> 
> Now when I try to backport f5aa69bdc3418 to 2.8 (on top of 04bf2526ce87f),
> I run into some interesting logic: without also applying 1ff7c5986a515d2d936eba0
> (xen/mapcache: store dma information in revmapcache entries for debugging),
> the arguments to xen_map_cache() in qemu_ram_ptr_length() in these two patches
> are quite fun... :)

Sorry about that. Let me know if you need any help with those backports.
Thank you!

- Stefano

Patch

diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
index 3a22805..9200511 100644
--- a/hw/block/xen_disk.c
+++ b/hw/block/xen_disk.c
@@ -769,31 +769,30 @@  static int blk_send_response_one(struct ioreq *ioreq)
     struct XenBlkDev  *blkdev = ioreq->blkdev;
     int               send_notify   = 0;
     int               have_requests = 0;
-    blkif_response_t  resp;
-    void              *dst;
-
-    resp.id        = ioreq->req.id;
-    resp.operation = ioreq->req.operation;
-    resp.status    = ioreq->status;
+    blkif_response_t  *resp;
 
     /* Place on the response ring for the relevant domain. */
     switch (blkdev->protocol) {
     case BLKIF_PROTOCOL_NATIVE:
-        dst = RING_GET_RESPONSE(&blkdev->rings.native, blkdev->rings.native.rsp_prod_pvt);
+        resp = (blkif_response_t *) RING_GET_RESPONSE(&blkdev->rings.native,
+                                 blkdev->rings.native.rsp_prod_pvt);
         break;
     case BLKIF_PROTOCOL_X86_32:
-        dst = RING_GET_RESPONSE(&blkdev->rings.x86_32_part,
-                                blkdev->rings.x86_32_part.rsp_prod_pvt);
+        resp = (blkif_response_t *) RING_GET_RESPONSE(&blkdev->rings.x86_32_part,
+                                 blkdev->rings.x86_32_part.rsp_prod_pvt);
         break;
     case BLKIF_PROTOCOL_X86_64:
-        dst = RING_GET_RESPONSE(&blkdev->rings.x86_64_part,
-                                blkdev->rings.x86_64_part.rsp_prod_pvt);
+        resp = (blkif_response_t *) RING_GET_RESPONSE(&blkdev->rings.x86_64_part,
+                                 blkdev->rings.x86_64_part.rsp_prod_pvt);
         break;
     default:
-        dst = NULL;
         return 0;
     }
-    memcpy(dst, &resp, sizeof(resp));
+
+    resp->id        = ioreq->req.id;
+    resp->operation = ioreq->req.operation;
+    resp->status    = ioreq->status;
+
     blkdev->rings.common.rsp_prod_pvt++;
 
     RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&blkdev->rings.common, send_notify);