From patchwork Tue Mar 29 16:12:56 2016
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 8688791
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Cornelia Huck, Fam Zheng, Stefan Hajnoczi, "Michael S. Tsirkin"
Date: Tue, 29 Mar 2016 17:12:56 +0100
Message-Id: <1459267981-23408-5-git-send-email-stefanha@redhat.com>
In-Reply-To: <1459267981-23408-1-git-send-email-stefanha@redhat.com>
References: <1459267981-23408-1-git-send-email-stefanha@redhat.com>
Subject: [Qemu-devel] [RFC v2 4/9] virtio: handle virtqueue_map_desc() errors

Errors can occur during virtqueue_pop(), especially in virtqueue_map_desc().
In order to handle this, we must unmap iov[] before returning NULL.  The
caller will consider the virtqueue empty and the virtio_error() call will have
marked the device broken.
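For illustration only (not part of this patch): with this contract, device code
needs no new error handling, because a NULL return from virtqueue_pop() looks
exactly like an empty ring. A minimal sketch of a handler loop in that style,
using hypothetical names (MyDeviceReq, my_device_handle_vq,
my_device_process_req) rather than anything from QEMU:

#include "qemu/osdep.h"
#include "hw/virtio/virtio.h"

/* Hypothetical per-request struct: virtqueue_pop(vq, sizeof(MyDeviceReq))
 * allocates this much and fills in the embedded element, so VirtQueueElement
 * must be the first field (the layout QEMU virtio devices already follow).
 */
typedef struct MyDeviceReq {
    VirtQueueElement elem;
    /* device-specific request state would follow here */
} MyDeviceReq;

/* Hypothetical per-request processing; it would eventually virtqueue_push()
 * the element and g_free() the request.
 */
static void my_device_process_req(VirtIODevice *vdev, MyDeviceReq *req);

static void my_device_handle_vq(VirtIODevice *vdev, VirtQueue *vq)
{
    MyDeviceReq *req;

    /* NULL means "no more requests": either the ring is genuinely empty or
     * virtqueue_pop() hit an error, called virtio_error() and marked the
     * device broken.  Either way the handler simply stops popping.
     */
    while ((req = virtqueue_pop(vq, sizeof(*req))) != NULL) {
        my_device_process_req(vdev, req);
    }
}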
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/virtio/virtio.c | 62 ++++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 49 insertions(+), 13 deletions(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index ff2c736..c07b451 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -457,10 +457,12 @@ int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
     return in_bytes <= in_total && out_bytes <= out_total;
 }
 
-static void virtqueue_map_desc(unsigned int *p_num_sg, hwaddr *addr, struct iovec *iov,
+static bool virtqueue_map_desc(VirtIODevice *vdev, unsigned int *p_num_sg,
+                               hwaddr *addr, struct iovec *iov,
                                unsigned int max_num_sg, bool is_write,
                                hwaddr pa, size_t sz)
 {
+    bool ok = false;
     unsigned num_sg = *p_num_sg;
     assert(num_sg <= max_num_sg);
 
@@ -468,8 +470,9 @@ static void virtqueue_map_desc(unsigned int *p_num_sg, hwaddr *addr, struct iove
         hwaddr len = sz;
 
         if (num_sg == max_num_sg) {
-            error_report("virtio: too many write descriptors in indirect table");
-            exit(1);
+            virtio_error(vdev, "virtio: too many write descriptors in "
+                               "indirect table");
+            goto out;
         }
 
         iov[num_sg].iov_base = cpu_physical_memory_map(pa, &len, is_write);
@@ -480,7 +483,28 @@ static void virtqueue_map_desc(unsigned int *p_num_sg, hwaddr *addr, struct iove
         pa += len;
         num_sg++;
     }
+    ok = true;
+
+out:
     *p_num_sg = num_sg;
+    return ok;
+}
+
+/* Only used by error code paths before we have a VirtQueueElement (therefore
+ * virtqueue_unmap_sg() can't be used).  Assumes buffers weren't written to
+ * yet.
+ */
+static void virtqueue_undo_map_desc(unsigned out_num, unsigned in_num,
+                                    struct iovec *iov)
+{
+    unsigned i;
+
+    for (i = 0; i < out_num + in_num; i++) {
+        int is_write = i >= out_num;
+
+        cpu_physical_memory_unmap(iov->iov_base, iov->iov_len, is_write, 0);
+        iov++;
+    }
 }
 
 static void virtqueue_map_iovec(struct iovec *sg, hwaddr *addr,
@@ -579,8 +603,8 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
     vring_desc_read(vdev, &desc, desc_pa, i);
     if (desc.flags & VRING_DESC_F_INDIRECT) {
         if (desc.len % sizeof(VRingDesc)) {
-            error_report("Invalid size for indirect buffer table");
-            exit(1);
+            virtio_error(vdev, "Invalid size for indirect buffer table");
+            return NULL;
         }
 
         /* loop over the indirect descriptor table */
@@ -592,22 +616,30 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
 
     /* Collect all the descriptors */
     do {
+        bool map_ok;
+
         if (desc.flags & VRING_DESC_F_WRITE) {
-            virtqueue_map_desc(&in_num, addr + out_num, iov + out_num,
-                               VIRTQUEUE_MAX_SIZE - out_num, true, desc.addr, desc.len);
+            map_ok = virtqueue_map_desc(vdev, &in_num, addr + out_num,
+                                        iov + out_num,
+                                        VIRTQUEUE_MAX_SIZE - out_num, true,
+                                        desc.addr, desc.len);
         } else {
             if (in_num) {
-                error_report("Incorrect order for descriptors");
-                exit(1);
+                virtio_error(vdev, "Incorrect order for descriptors");
+                goto err_undo_map;
             }
-            virtqueue_map_desc(&out_num, addr, iov,
-                               VIRTQUEUE_MAX_SIZE, false, desc.addr, desc.len);
+            map_ok = virtqueue_map_desc(vdev, &out_num, addr, iov,
+                                        VIRTQUEUE_MAX_SIZE, false,
+                                        desc.addr, desc.len);
+        }
+        if (!map_ok) {
+            goto err_undo_map;
         }
 
         /* If we've got too many, that implies a descriptor loop. */
         if ((in_num + out_num) > max) {
-            error_report("Looped descriptor");
-            exit(1);
+            virtio_error(vdev, "Looped descriptor");
+            goto err_undo_map;
         }
     } while ((i = virtqueue_read_next_desc(vdev, &desc, desc_pa, max)) != max);
 
@@ -627,6 +659,10 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
 
     trace_virtqueue_pop(vq, elem, elem->in_num, elem->out_num);
     return elem;
+
+err_undo_map:
+    virtqueue_undo_map_desc(out_num, in_num, iov);
+    return NULL;
 }
 
 /* Reading and writing a structure directly to QEMUFile is *awful*, but
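For context: the err_undo_map path above only unmaps the buffers; stopping
further queue processing is the job of virtio_error(), which an earlier patch
in this series introduces. The sketch below illustrates the contract the
commit message relies on (report the problem, mark the device broken so later
virtqueue_pop() calls return NULL). The function name, the "broken" field and
the NEEDS_RESET handling are assumptions based on that description, not the
series' actual implementation.

#include "qemu/osdep.h"
#include "qemu/error-report.h"
#include "hw/virtio/virtio.h"
#include "standard-headers/linux/virtio_config.h"

/* Sketch only: report the error and mark the device broken so virtqueue
 * processing stops.  The real helper comes from an earlier patch in this
 * series.
 */
static void virtio_error_sketch(VirtIODevice *vdev, const char *msg)
{
    error_report("%s", msg);

    if (virtio_vdev_has_feature(vdev, VIRTIO_F_VERSION_1)) {
        /* A virtio 1.0 guest can be told that the device needs a reset. */
        vdev->status |= VIRTIO_CONFIG_S_NEEDS_RESET;
        virtio_notify_config(vdev);
    }

    vdev->broken = true; /* assumed flag that makes virtqueue_pop() bail out early */
}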