From patchwork Mon Jul 17 08:11:46 2017
X-Patchwork-Submitter: Ladi Prosek
X-Patchwork-Id: 9844163
From: Ladi Prosek
To: qemu-devel@nongnu.org
Date: Mon, 17 Jul 2017 10:11:46 +0200
Message-Id: <20170717081152.17153-4-lprosek@redhat.com>
In-Reply-To: <20170717081152.17153-1-lprosek@redhat.com>
References: <20170717081152.17153-1-lprosek@redhat.com>
Subject: [Qemu-devel] [PATCH v3 3/9] virtio: use virtqueue_error for errors with queue context
Cc: casasfernando@hotmail.com, mst@redhat.com, jasowang@redhat.com, cohuck@redhat.com, armbru@redhat.com, groug@kaod.org, arei.gonglei@huawei.com, aneesh.kumar@linux.vnet.ibm.com

virtqueue_error includes the queue index in the error output and is
preferred for errors that pertain to a virtqueue rather than to the
device as a whole.

Signed-off-by: Ladi Prosek
Reviewed-by: Cornelia Huck
Reviewed-by: Stefan Hajnoczi
---
 hw/virtio/virtio.c | 57 +++++++++++++++++++++++++++---------------------------
 1 file changed, 28 insertions(+), 29 deletions(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 935a5e3..de4dd32 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -148,7 +148,7 @@ static void virtio_init_region_cache(VirtIODevice *vdev, int n)
     len = address_space_cache_init(&new->desc, vdev->dma_as,
                                    addr, size, false);
     if (len < size) {
-        virtio_error(vdev, "Cannot map desc");
+        virtqueue_error(vq, "Cannot map desc");
         goto err_desc;
     }
 
@@ -156,7 +156,7 @@ static void virtio_init_region_cache(VirtIODevice *vdev, int n)
     len = address_space_cache_init(&new->used, vdev->dma_as,
                                    vq->vring.used, size, true);
     if (len < size) {
-        virtio_error(vdev, "Cannot map used");
+        virtqueue_error(vq, "Cannot map used");
         goto err_used;
     }
 
@@ -164,7 +164,7 @@ static void virtio_init_region_cache(VirtIODevice *vdev, int n)
     len = address_space_cache_init(&new->avail, vdev->dma_as,
                                    vq->vring.avail, size, false);
     if (len < size) {
-        virtio_error(vdev, "Cannot map avail");
+        virtqueue_error(vq, "Cannot map avail");
        goto err_avail;
     }
 
@@ -522,7 +522,7 @@ static int virtqueue_num_heads(VirtQueue *vq, unsigned int idx)
 
     /* Check it isn't doing very strange things with descriptor numbers. */
     if (num_heads > vq->vring.num) {
-        virtio_error(vq->vdev, "Guest moved used index from %u to %u",
+        virtqueue_error(vq, "Guest moved used index from %u to %u",
                      idx, vq->shadow_avail_idx);
         return -EINVAL;
     }
@@ -545,7 +545,7 @@ static bool virtqueue_get_head(VirtQueue *vq, unsigned int idx,
 
     /* If their number is silly, that's a fatal mistake. */
     if (*head >= vq->vring.num) {
-        virtio_error(vq->vdev, "Guest says index %u is available", *head);
+        virtqueue_error(vq, "Guest says index %u is available", *head);
         return false;
     }
 
@@ -558,7 +558,7 @@ enum {
     VIRTQUEUE_READ_DESC_MORE = 1,   /* more buffers in chain */
 };
 
-static int virtqueue_read_next_desc(VirtIODevice *vdev, VRingDesc *desc,
+static int virtqueue_read_next_desc(VirtQueue *vq, VRingDesc *desc,
                                     MemoryRegionCache *desc_cache, unsigned int max,
                                     unsigned int *next)
 {
@@ -573,11 +573,11 @@ static int virtqueue_read_next_desc(VirtIODevice *vdev, VRingDesc *desc,
     smp_wmb();
 
     if (*next >= max) {
-        virtio_error(vdev, "Desc next is %u", *next);
+        virtqueue_error(vq, "Desc next is %u", *next);
         return VIRTQUEUE_READ_DESC_ERROR;
     }
 
-    vring_desc_read(vdev, desc, desc_cache, *next);
+    vring_desc_read(vq->vdev, desc, desc_cache, *next);
     return VIRTQUEUE_READ_DESC_MORE;
 }
 
@@ -610,7 +610,7 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
     max = vq->vring.num;
     caches = vring_get_region_caches(vq);
     if (caches->desc.len < max * sizeof(VRingDesc)) {
-        virtio_error(vdev, "Cannot map descriptor ring");
+        virtqueue_error(vq, "Cannot map descriptor ring");
         goto err;
     }
 
@@ -630,13 +630,13 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
 
         if (desc.flags & VRING_DESC_F_INDIRECT) {
             if (desc.len % sizeof(VRingDesc)) {
-                virtio_error(vdev, "Invalid size for indirect buffer table");
+                virtqueue_error(vq, "Invalid size for indirect buffer table");
                 goto err;
             }
 
             /* If we've got too many, that implies a descriptor loop. */
             if (num_bufs >= max) {
-                virtio_error(vdev, "Looped descriptor");
+                virtqueue_error(vq, "Looped descriptor");
                 goto err;
             }
 
@@ -646,7 +646,7 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
                                            desc.addr, desc.len, false);
             desc_cache = &indirect_desc_cache;
             if (len < desc.len) {
-                virtio_error(vdev, "Cannot map indirect buffer");
+                virtqueue_error(vq, "Cannot map indirect buffer");
                 goto err;
             }
 
@@ -658,7 +658,7 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
         do {
             /* If we've got too many, that implies a descriptor loop. */
             if (++num_bufs > max) {
-                virtio_error(vdev, "Looped descriptor");
+                virtqueue_error(vq, "Looped descriptor");
                 goto err;
             }
 
@@ -671,7 +671,7 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
                 goto done;
             }
 
-            rc = virtqueue_read_next_desc(vdev, &desc, desc_cache, max, &i);
+            rc = virtqueue_read_next_desc(vq, &desc, desc_cache, max, &i);
         } while (rc == VIRTQUEUE_READ_DESC_MORE);
 
         if (rc == VIRTQUEUE_READ_DESC_ERROR) {
@@ -715,7 +715,7 @@ int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
     return in_bytes <= in_total && out_bytes <= out_total;
 }
 
-static bool virtqueue_map_desc(VirtIODevice *vdev, unsigned int *p_num_sg,
+static bool virtqueue_map_desc(VirtQueue *vq, unsigned int *p_num_sg,
                                hwaddr *addr, struct iovec *iov,
                                unsigned int max_num_sg, bool is_write, hwaddr pa,
                                size_t sz)
@@ -725,7 +725,7 @@ static bool virtqueue_map_desc(VirtIODevice *vdev, unsigned int *p_num_sg,
     assert(num_sg <= max_num_sg);
 
     if (!sz) {
-        virtio_error(vdev, "virtio: zero sized buffers are not allowed");
+        virtqueue_error(vq, "Zero sized buffers are not allowed");
         goto out;
     }
 
@@ -733,17 +733,16 @@ static bool virtqueue_map_desc(VirtIODevice *vdev, unsigned int *p_num_sg,
         hwaddr len = sz;
 
         if (num_sg == max_num_sg) {
-            virtio_error(vdev, "virtio: too many write descriptors in "
-                               "indirect table");
+            virtqueue_error(vq, "Too many write descriptors in indirect table");
             goto out;
         }
 
-        iov[num_sg].iov_base = dma_memory_map(vdev->dma_as, pa, &len,
+        iov[num_sg].iov_base = dma_memory_map(vq->vdev->dma_as, pa, &len,
                                               is_write ?
                                               DMA_DIRECTION_FROM_DEVICE :
                                               DMA_DIRECTION_TO_DEVICE);
         if (!iov[num_sg].iov_base) {
-            virtio_error(vdev, "virtio: bogus descriptor or out of resources");
+            virtqueue_error(vq, "Bogus descriptor or out of resources");
             goto out;
         }
 
@@ -862,7 +861,7 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
     max = vq->vring.num;
 
     if (vq->inuse >= vq->vring.num) {
-        virtio_error(vdev, "Virtqueue size exceeded");
+        virtqueue_error(vq, "Virtqueue size exceeded");
         goto done;
     }
 
@@ -878,7 +877,7 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
 
     caches = vring_get_region_caches(vq);
     if (caches->desc.len < max * sizeof(VRingDesc)) {
-        virtio_error(vdev, "Cannot map descriptor ring");
+        virtqueue_error(vq, "Cannot map descriptor ring");
         goto done;
     }
 
@@ -886,7 +885,7 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
     vring_desc_read(vdev, &desc, desc_cache, i);
     if (desc.flags & VRING_DESC_F_INDIRECT) {
         if (desc.len % sizeof(VRingDesc)) {
-            virtio_error(vdev, "Invalid size for indirect buffer table");
+            virtqueue_error(vq, "Invalid size for indirect buffer table");
             goto done;
         }
 
@@ -895,7 +894,7 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
                                        desc.addr, desc.len, false);
         desc_cache = &indirect_desc_cache;
         if (len < desc.len) {
-            virtio_error(vdev, "Cannot map indirect buffer");
+            virtqueue_error(vq, "Cannot map indirect buffer");
             goto done;
         }
 
@@ -909,16 +908,16 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
         bool map_ok;
 
         if (desc.flags & VRING_DESC_F_WRITE) {
-            map_ok = virtqueue_map_desc(vdev, &in_num, addr + out_num,
+            map_ok = virtqueue_map_desc(vq, &in_num, addr + out_num,
                                         iov + out_num,
                                         VIRTQUEUE_MAX_SIZE - out_num, true,
                                         desc.addr, desc.len);
         } else {
             if (in_num) {
-                virtio_error(vdev, "Incorrect order for descriptors");
+                virtqueue_error(vq, "Incorrect order for descriptors");
                 goto err_undo_map;
             }
-            map_ok = virtqueue_map_desc(vdev, &out_num, addr, iov,
+            map_ok = virtqueue_map_desc(vq, &out_num, addr, iov,
                                         VIRTQUEUE_MAX_SIZE, false,
                                         desc.addr, desc.len);
         }
@@ -928,11 +927,11 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
 
         /* If we've got too many, that implies a descriptor loop. */
         if ((in_num + out_num) > max) {
-            virtio_error(vdev, "Looped descriptor");
+            virtqueue_error(vq, "Looped descriptor");
             goto err_undo_map;
         }
 
-        rc = virtqueue_read_next_desc(vdev, &desc, desc_cache, max, &i);
+        rc = virtqueue_read_next_desc(vq, &desc, desc_cache, max, &i);
     } while (rc == VIRTQUEUE_READ_DESC_MORE);
 
     if (rc == VIRTQUEUE_READ_DESC_ERROR) {
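
Note for readers picking up this patch in isolation: virtqueue_error() is the
queue-aware counterpart of virtio_error() and is presumably introduced by an
earlier patch in this series, since it is used but not defined in this diff.
The snippet below is only an illustrative sketch of what such a wrapper could
look like, not the series' actual implementation; it assumes the existing
virtio_error() and virtio_get_queue_index() helpers in hw/virtio/virtio.c and
simply prefixes the formatted message with the queue index.

    /* Illustrative sketch only, not the helper added elsewhere in this
     * series: format the caller's message, prefix it with the virtqueue
     * index, and fail the device through the existing virtio_error() path. */
    static void GCC_FMT_ATTR(2, 3) virtqueue_error(VirtQueue *vq,
                                                   const char *fmt, ...)
    {
        va_list ap;
        char *msg;

        va_start(ap, fmt);
        msg = g_strdup_vprintf(fmt, ap);   /* glib: format into a new string */
        va_end(ap);

        /* virtio_get_queue_index() and virtio_error() already exist in
         * hw/virtio/virtio.c */
        virtio_error(vq->vdev, "vq %d: %s", virtio_get_queue_index(vq), msg);
        g_free(msg);
    }

With a wrapper along these lines, a call such as virtqueue_error(vq, "Cannot
map desc") would be reported as "vq 0: Cannot map desc" (for queue 0) while
still marking the device broken via virtio_error(), which is the extra queue
context the commit message refers to.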