From patchwork Thu Mar 24 17:56:50 2016
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Fam Zheng, Stefan Hajnoczi, "Michael S. Tsirkin"
Date: Thu, 24 Mar 2016 17:56:50 +0000
Message-Id: <1458842214-11450-4-git-send-email-stefanha@redhat.com>
In-Reply-To: <1458842214-11450-1-git-send-email-stefanha@redhat.com>
References: <1458842214-11450-1-git-send-email-stefanha@redhat.com>
Subject: [Qemu-devel] [RFC 3/7] virtio: handle virtqueue_map_desc() errors

Errors can occur during virtqueue_pop(), especially in
virtqueue_map_desc().  To handle this, we must unmap iov[] before
returning NULL.
The caller will consider the virtqueue empty and the virtio_error()
call will have marked the device broken.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/virtio/virtio.c | 62 ++++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 49 insertions(+), 13 deletions(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 8fac47c..86352c8 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -457,10 +457,12 @@ int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
     return in_bytes <= in_total && out_bytes <= out_total;
 }
 
-static void virtqueue_map_desc(unsigned int *p_num_sg, hwaddr *addr, struct iovec *iov,
+static bool virtqueue_map_desc(VirtIODevice *vdev, unsigned int *p_num_sg,
+                               hwaddr *addr, struct iovec *iov,
                                unsigned int max_num_sg, bool is_write,
                                hwaddr pa, size_t sz)
 {
+    bool ok = false;
     unsigned num_sg = *p_num_sg;
     assert(num_sg <= max_num_sg);
 
@@ -468,8 +470,9 @@ static void virtqueue_map_desc(unsigned int *p_num_sg, hwaddr *addr, struct iove
         hwaddr len = sz;
 
         if (num_sg == max_num_sg) {
-            error_report("virtio: too many write descriptors in indirect table");
-            exit(1);
+            virtio_error(vdev, "virtio: too many write descriptors in "
+                               "indirect table");
+            goto out;
         }
 
         iov[num_sg].iov_base = cpu_physical_memory_map(pa, &len, is_write);
@@ -480,7 +483,28 @@ static void virtqueue_map_desc(unsigned int *p_num_sg, hwaddr *addr, struct iove
         pa += len;
         num_sg++;
     }
+    ok = true;
+
+out:
     *p_num_sg = num_sg;
+    return ok;
+}
+
+/* Only used by error code paths before we have a VirtQueueElement (therefore
+ * virtqueue_unmap_sg() can't be used).  Assumes buffers weren't written to
+ * yet.
+ */
+static void virtqueue_undo_map_desc(unsigned out_num, unsigned in_num,
+                                    struct iovec *iov)
+{
+    unsigned i;
+
+    for (i = 0; i < out_num + in_num; i++) {
+        int is_write = i >= out_num;
+
+        cpu_physical_memory_unmap(iov->iov_base, iov->iov_len, is_write, 0);
+        iov++;
+    }
 }
 
 static void virtqueue_map_iovec(struct iovec *sg, hwaddr *addr,
@@ -579,8 +603,8 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
         vring_desc_read(vdev, &desc, desc_pa, i);
         if (desc.flags & VRING_DESC_F_INDIRECT) {
             if (desc.len % sizeof(VRingDesc)) {
-                error_report("Invalid size for indirect buffer table");
-                exit(1);
+                virtio_error(vdev, "Invalid size for indirect buffer table");
+                return NULL;
             }
 
             /* loop over the indirect descriptor table */
@@ -592,22 +616,30 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
 
     /* Collect all the descriptors */
     do {
+        bool map_ok;
+
         if (desc.flags & VRING_DESC_F_WRITE) {
-            virtqueue_map_desc(&in_num, addr + out_num, iov + out_num,
-                               VIRTQUEUE_MAX_SIZE - out_num, true, desc.addr, desc.len);
+            map_ok = virtqueue_map_desc(vdev, &in_num, addr + out_num,
+                                        iov + out_num,
+                                        VIRTQUEUE_MAX_SIZE - out_num, true,
+                                        desc.addr, desc.len);
         } else {
             if (in_num) {
-                error_report("Incorrect order for descriptors");
-                exit(1);
+                virtio_error(vdev, "Incorrect order for descriptors");
+                goto err_undo_map;
             }
-            virtqueue_map_desc(&out_num, addr, iov,
-                               VIRTQUEUE_MAX_SIZE, false, desc.addr, desc.len);
+            map_ok = virtqueue_map_desc(vdev, &out_num, addr, iov,
+                                        VIRTQUEUE_MAX_SIZE, false,
+                                        desc.addr, desc.len);
+        }
+        if (!map_ok) {
+            goto err_undo_map;
         }
 
         /* If we've got too many, that implies a descriptor loop. */
         if ((in_num + out_num) > max) {
-            error_report("Looped descriptor");
-            exit(1);
+            virtio_error(vdev, "Looped descriptor");
+            goto err_undo_map;
         }
     } while ((i = virtqueue_read_next_desc(vdev, &desc, desc_pa, max)) != max);
 
@@ -627,6 +659,10 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
 
     trace_virtqueue_pop(vq, elem, elem->in_num, elem->out_num);
     return elem;
+
+err_undo_map:
+    virtqueue_undo_map_desc(out_num, in_num, iov);
+    return NULL;
 }
 
 /* Reading and writing a structure directly to QEMUFile is *awful*, but
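
For illustration only (not part of this patch): a minimal sketch of what
"the caller will consider the virtqueue empty" means for a device.  The
request type, handler and processing function below (MyRequest,
my_device_handle_output(), my_process_request()) are hypothetical names
invented for this sketch; only virtqueue_pop(), virtqueue_push(),
virtio_notify() and g_free() are existing QEMU APIs.

#include "hw/virtio/virtio.h"

/* Hypothetical request struct: virtqueue_pop(vq, sz) allocates sz bytes
 * and requires that the struct begins with a VirtQueueElement. */
typedef struct MyRequest {
    VirtQueueElement elem;
    /* ... hypothetical per-request state ... */
} MyRequest;

/* Hypothetical handle_output callback.  Because virtqueue_pop() returns
 * NULL both when the ring is empty and when virtio_error() has marked the
 * device broken, the loop needs no separate error path: it simply stops
 * popping in either case. */
static void my_device_handle_output(VirtIODevice *vdev, VirtQueue *vq)
{
    MyRequest *req;

    while ((req = virtqueue_pop(vq, sizeof(MyRequest)))) {
        unsigned int len = my_process_request(req);  /* hypothetical */

        virtqueue_push(vq, &req->elem, len);  /* return buffers to the guest */
        virtio_notify(vdev, vq);
        g_free(req);                          /* element was allocated by pop */
    }
}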