From patchwork Sat Jun 4 21:33:21 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Marc-André Lureau
X-Patchwork-Id: 9154993
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: Marc-André Lureau, kraxel@redhat.com
Date: Sat, 4 Jun 2016 23:33:21 +0200
Message-Id: <1465076003-26291-13-git-send-email-marcandre.lureau@redhat.com>
In-Reply-To: <1465076003-26291-1-git-send-email-marcandre.lureau@redhat.com>
References: <1465076003-26291-1-git-send-email-marcandre.lureau@redhat.com>
X-Mailer: git-send-email 2.7.4
MIME-Version: 1.0
Subject: [Qemu-devel] [RFC 12/14] contrib: add vhost-user-gpu

From: Marc-André Lureau

Add a vhost-user GPU backend example, based on the virtio-gpu/3d device.
It is meant to be associated with a vhost-user-backend object, e.g.:

  -object vhost-user-backend,id=vug,cmd="vhost-user-gpu"

Signed-off-by: Marc-André Lureau
---
 Makefile                             |    3 +
 Makefile.objs                        |    1 +
 configure                            |    4 +
 contrib/vhost-user-gpu/Makefile.objs |    7 +
 contrib/vhost-user-gpu/main.c        | 1012 ++++++++++++++++++++++++++++++++++
 contrib/vhost-user-gpu/virgl.c       |  545 ++++++++++++++++++
 contrib/vhost-user-gpu/virgl.h       |   24 +
 contrib/vhost-user-gpu/vugpu.h       |  155 ++++++
 8 files changed, 1751 insertions(+)
 create mode 100644 contrib/vhost-user-gpu/Makefile.objs
 create mode 100644 contrib/vhost-user-gpu/main.c
 create mode 100644 contrib/vhost-user-gpu/virgl.c
 create mode 100644 contrib/vhost-user-gpu/virgl.h
 create mode 100644 contrib/vhost-user-gpu/vugpu.h

diff --git a/Makefile b/Makefile
index ae054de..a97aa94 100644
--- a/Makefile
+++ b/Makefile
@@ -153,6 +153,7 @@ dummy := $(call unnest-vars,, \
                 ivshmem-server-obj-y \
                 libvhost-user-obj-y \
                 vhost-user-input-obj-y \
+                vhost-user-gpu-obj-y \
                 qga-vss-dll-obj-y \
                 block-obj-y \
                 block-obj-m \
@@ -335,6 +336,8 @@ ivshmem-server$(EXESUF): $(ivshmem-server-obj-y) libqemuutil.a libqemustub.a
 	$(call LINK, $^)
 vhost-user-input$(EXESUF): $(vhost-user-input-obj-y) $(libvhost-user-obj-y) libqemuutil.a libqemustub.a
 	$(call LINK, $^)
+vhost-user-gpu$(EXESUF): $(vhost-user-gpu-obj-y) $(libvhost-user-obj-y) libqemuutil.a libqemustub.a
+	$(call LINK, $^)
 
 clean:
 # avoid old build problems by removing potentially incorrect old files
diff --git a/Makefile.objs b/Makefile.objs
index cdd48ca..c8a949c 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -117,3 +117,4 @@ ivshmem-client-obj-y = contrib/ivshmem-client/
 ivshmem-server-obj-y = contrib/ivshmem-server/
 libvhost-user-obj-y = contrib/libvhost-user/
 vhost-user-input-obj-y = contrib/vhost-user-input/
+vhost-user-gpu-obj-y = contrib/vhost-user-gpu/
diff --git a/configure b/configure
index b02c0f4..bf120f0 100755
--- a/configure
+++ b/configure
@@ -4603,6 +4603,7 @@ if test "$want_tools" = "yes" ; then
     tools="qemu-nbd\$(EXESUF) $tools"
     tools="ivshmem-client\$(EXESUF) ivshmem-server\$(EXESUF) $tools"
     tools="vhost-user-input\$(EXESUF) $tools"
+    tools="vhost-user-gpu\$(EXESUF) $tools"
   fi
 fi
 if test "$softmmu" = yes ; then
@@ -5924,6 +5925,9 @@ if [ "$pixman" = "internal" ]; then
   echo "config-host.h: subdir-pixman" >> $config_host_mak
 fi
 
+echo "PIXMAN_CFLAGS=$pixman_cflags" >> $config_host_mak
+echo "PIXMAN_LIBS=$pixman_libs" >> $config_host_mak
+
 if [ "$dtc_internal" = "yes" ]; then
   echo "config-host.h: subdir-dtc" >> $config_host_mak
 fi
diff --git a/contrib/vhost-user-gpu/Makefile.objs b/contrib/vhost-user-gpu/Makefile.objs
new file mode 100644
index 0000000..da7c4b8
--- /dev/null
+++ b/contrib/vhost-user-gpu/Makefile.objs
@@ -0,0 +1,7 @@
+vhost-user-gpu-obj-y = main.o virgl.o
+
+main.o-cflags := $(PIXMAN_CFLAGS)
+main.o-libs := $(PIXMAN_LIBS)
+
+virgl.o-cflags := $(VIRGL_CFLAGS)
+virgl.o-libs := $(VIRGL_LIBS)
diff --git a/contrib/vhost-user-gpu/main.c b/contrib/vhost-user-gpu/main.c
new file mode 100644
index 0000000..047fb45
--- /dev/null
+++ b/contrib/vhost-user-gpu/main.c
@@ -0,0 +1,1012 @@
+/*
+ * Virtio vhost-user GPU Device
+ *
+ * Copyright Red Hat, Inc. 2013-2016
+ *
+ * Authors:
+ *     Dave Airlie
+ *     Gerd Hoffmann
+ *     Marc-André Lureau
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+#include
+#include
+
+#include "vugpu.h"
+#include "virgl.h"
+
+struct virtio_gpu_simple_resource {
+    uint32_t resource_id;
+    uint32_t width;
+    uint32_t height;
+    uint32_t format;
+    struct iovec *iov;
+    unsigned int iov_cnt;
+    uint32_t scanout_bitmask;
+    pixman_image_t *image;
+    QTAILQ_ENTRY(virtio_gpu_simple_resource) next;
+};
+
+#define VG_DEBUG 0
+
+#define DPRINT(...)                             \
+    do {                                        \
+        if (VG_DEBUG) {                         \
+            fprintf(stderr, __VA_ARGS__);       \
+        }                                       \
+    } while (0)
+
+static const char *
+vg_cmd_to_string(int cmd)
+{
+#define CMD(cmd) [cmd] = #cmd
+    static const char *vg_cmd_str[] = {
+        CMD(VIRTIO_GPU_UNDEFINED),
+
+        /* 2d commands */
+        CMD(VIRTIO_GPU_CMD_GET_DISPLAY_INFO),
+        CMD(VIRTIO_GPU_CMD_RESOURCE_CREATE_2D),
+        CMD(VIRTIO_GPU_CMD_RESOURCE_UNREF),
+        CMD(VIRTIO_GPU_CMD_SET_SCANOUT),
+        CMD(VIRTIO_GPU_CMD_RESOURCE_FLUSH),
+        CMD(VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D),
+        CMD(VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING),
+        CMD(VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING),
+        CMD(VIRTIO_GPU_CMD_GET_CAPSET_INFO),
+        CMD(VIRTIO_GPU_CMD_GET_CAPSET),
+
+        /* 3d commands */
+        CMD(VIRTIO_GPU_CMD_CTX_CREATE),
+        CMD(VIRTIO_GPU_CMD_CTX_DESTROY),
+        CMD(VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE),
+        CMD(VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE),
+        CMD(VIRTIO_GPU_CMD_RESOURCE_CREATE_3D),
+        CMD(VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D),
+        CMD(VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D),
+        CMD(VIRTIO_GPU_CMD_SUBMIT_3D),
+
+        /* cursor commands */
+        CMD(VIRTIO_GPU_CMD_UPDATE_CURSOR),
+        CMD(VIRTIO_GPU_CMD_MOVE_CURSOR),
+    };
+#undef CMD
+
+    if (cmd < G_N_ELEMENTS(vg_cmd_str)) {
+        return vg_cmd_str[cmd];
+    } else {
+        return "unknown";
+    }
+}
+
+ssize_t
+vg_sock_fd_write(int sock, void *buf, ssize_t buflen, int fd)
+{
+    ssize_t size;
+    struct msghdr msg;
+    struct iovec iov;
+    union {
+        struct cmsghdr cmsghdr;
+        char control[CMSG_SPACE(sizeof(int))];
+    } cmsgu;
+    struct cmsghdr *cmsg;
+
+    iov.iov_base = buf;
+    iov.iov_len = buflen;
+
+    msg.msg_name = NULL;
+    msg.msg_namelen = 0;
+    msg.msg_iov = &iov;
+    msg.msg_iovlen = 1;
+
+    if (fd != -1) {
+        msg.msg_control = cmsgu.control;
+        msg.msg_controllen = sizeof(cmsgu.control);
+
+        cmsg = CMSG_FIRSTHDR(&msg);
+        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
+        cmsg->cmsg_level = SOL_SOCKET;
+        cmsg->cmsg_type = SCM_RIGHTS;
+
+        *((int *)CMSG_DATA(cmsg)) = fd;
+    } else {
+        msg.msg_control = NULL;
+        msg.msg_controllen = 0;
+    }
+
+    size = sendmsg(sock, &msg, 0);
+
+    return size;
+}
+
+static struct virtio_gpu_simple_resource *
+virtio_gpu_find_resource(VuGpu *g, uint32_t resource_id) +{ + struct virtio_gpu_simple_resource *res; + + QTAILQ_FOREACH(res, &g->reslist, next) { + if (res->resource_id == resource_id) { + return res; + } + } + return NULL; +} + +void +vg_ctrl_response(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd, + struct virtio_gpu_ctrl_hdr *resp, + size_t resp_len) +{ + size_t s; + + if (cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE) { + resp->flags |= VIRTIO_GPU_FLAG_FENCE; + resp->fence_id = cmd->cmd_hdr.fence_id; + resp->ctx_id = cmd->cmd_hdr.ctx_id; + } + /* qemu_hexdump(resp, stderr, "re:", resp_len); */ + s = iov_from_buf(cmd->elem.in_sg, cmd->elem.in_num, 0, resp, resp_len); + if (s != resp_len) { + g_critical("%s: response size incorrect %zu vs %zu", + __func__, s, resp_len); + } + vu_queue_push(&g->dev, cmd->vq, &cmd->elem, s); + vu_queue_notify(&g->dev, cmd->vq); + cmd->finished = true; +} + +void +vg_ctrl_response_nodata(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd, + enum virtio_gpu_ctrl_type type) +{ + struct virtio_gpu_ctrl_hdr resp = { + .type = type, + }; + + vg_ctrl_response(g, cmd, &resp, sizeof(resp)); +} + +void +vg_get_display_info(VuGpu *vg, struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_resp_display_info dpy_info = { 0 ,}; + int i; + + dpy_info.hdr.type = VIRTIO_GPU_RESP_OK_DISPLAY_INFO; + for (i = 0; i < 1 /* g->conf.max_outputs */; i++) { + /* if (g->enabled_output_bitmask & (1 << i)) { */ + dpy_info.pmodes[i].enabled = 1; + dpy_info.pmodes[i].r.width = 1024; + dpy_info.pmodes[i].r.height = 768; + /* } */ + } + + vg_ctrl_response(vg, cmd, &dpy_info.hdr, sizeof(dpy_info)); +} + +static pixman_format_code_t +get_pixman_format(uint32_t virtio_gpu_format) +{ + switch (virtio_gpu_format) { +#ifdef HOST_WORDS_BIGENDIAN + case VIRTIO_GPU_FORMAT_B8G8R8X8_UNORM: + return PIXMAN_b8g8r8x8; + case VIRTIO_GPU_FORMAT_B8G8R8A8_UNORM: + return PIXMAN_b8g8r8a8; + case VIRTIO_GPU_FORMAT_X8R8G8B8_UNORM: + return PIXMAN_x8r8g8b8; + case VIRTIO_GPU_FORMAT_A8R8G8B8_UNORM: + return PIXMAN_a8r8g8b8; + case VIRTIO_GPU_FORMAT_R8G8B8X8_UNORM: + return PIXMAN_r8g8b8x8; + case VIRTIO_GPU_FORMAT_R8G8B8A8_UNORM: + return PIXMAN_r8g8b8a8; + case VIRTIO_GPU_FORMAT_X8B8G8R8_UNORM: + return PIXMAN_x8b8g8r8; + case VIRTIO_GPU_FORMAT_A8B8G8R8_UNORM: + return PIXMAN_a8b8g8r8; +#else + case VIRTIO_GPU_FORMAT_B8G8R8X8_UNORM: + return PIXMAN_x8r8g8b8; + case VIRTIO_GPU_FORMAT_B8G8R8A8_UNORM: + return PIXMAN_a8r8g8b8; + case VIRTIO_GPU_FORMAT_X8R8G8B8_UNORM: + return PIXMAN_b8g8r8x8; + case VIRTIO_GPU_FORMAT_A8R8G8B8_UNORM: + return PIXMAN_b8g8r8a8; + case VIRTIO_GPU_FORMAT_R8G8B8X8_UNORM: + return PIXMAN_x8b8g8r8; + case VIRTIO_GPU_FORMAT_R8G8B8A8_UNORM: + return PIXMAN_a8b8g8r8; + case VIRTIO_GPU_FORMAT_X8B8G8R8_UNORM: + return PIXMAN_r8g8b8x8; + case VIRTIO_GPU_FORMAT_A8B8G8R8_UNORM: + return PIXMAN_r8g8b8a8; +#endif + default: + return 0; + } +} + +static void +vg_resource_create_2d(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + pixman_format_code_t pformat; + struct virtio_gpu_simple_resource *res; + struct virtio_gpu_resource_create_2d c2d; + + VUGPU_FILL_CMD(c2d); + + if (c2d.resource_id == 0) { + g_critical("%s: resource id 0 is not allowed", __func__); + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID; + return; + } + + res = virtio_gpu_find_resource(g, c2d.resource_id); + if (res) { + g_critical("%s: resource already exists %d", __func__, c2d.resource_id); + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID; + return; + } + + res = g_new0(struct virtio_gpu_simple_resource, 1); 
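+    /* XXX: if the get_pixman_format() check just below fails, res is
+       leaked; only the pixman_image_create_bits() failure path frees it. */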
+ res->width = c2d.width; + res->height = c2d.height; + res->format = c2d.format; + res->resource_id = c2d.resource_id; + + pformat = get_pixman_format(c2d.format); + if (!pformat) { + g_critical("%s: host couldn't handle guest format %d", + __func__, c2d.format); + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER; + return; + } + res->image = pixman_image_create_bits(pformat, + c2d.width, + c2d.height, + NULL, 0); + if (!res->image) { + g_critical("%s: resource creation failed %d %d %d", + __func__, c2d.resource_id, c2d.width, c2d.height); + g_free(res); + cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY; + return; + } + + QTAILQ_INSERT_HEAD(&g->reslist, res, next); +} + +static void +vg_resource_destroy(VuGpu *g, + struct virtio_gpu_simple_resource *res) +{ + pixman_image_unref(res->image); + QTAILQ_REMOVE(&g->reslist, res, next); + g_free(res); +} + +static void +vg_resource_unref(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_simple_resource *res; + struct virtio_gpu_resource_unref unref; + + VUGPU_FILL_CMD(unref); + + res = virtio_gpu_find_resource(g, unref.resource_id); + if (!res) { + g_critical("%s: illegal resource specified %d", + __func__, unref.resource_id); + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID; + return; + } + vg_resource_destroy(g, res); +} + +int +vg_create_mapping_iov(VuGpu *g, + struct virtio_gpu_resource_attach_backing *ab, + struct virtio_gpu_ctrl_command *cmd, + struct iovec **iov) +{ + struct virtio_gpu_mem_entry *ents; + size_t esize, s; + int i; + + if (ab->nr_entries > 16384) { + g_critical("%s: nr_entries is too big (%d > 16384)", + __func__, ab->nr_entries); + return -1; + } + + esize = sizeof(*ents) * ab->nr_entries; + ents = g_malloc(esize); + s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num, + sizeof(*ab), ents, esize); + if (s != esize) { + g_critical("%s: command data size incorrect %zu vs %zu", + __func__, s, esize); + g_free(ents); + return -1; + } + + *iov = g_malloc0(sizeof(struct iovec) * ab->nr_entries); + for (i = 0; i < ab->nr_entries; i++) { + uint32_t len = ents[i].length; + (*iov)[i].iov_len = ents[i].length; + (*iov)[i].iov_base = vu_gpa_to_va(&g->dev, ents[i].addr); + if (!(*iov)[i].iov_base || len != ents[i].length) { + g_critical("%s: resource %d element %d", + __func__, ab->resource_id, i); + g_free(*iov); + g_free(ents); + *iov = NULL; + return -1; + } + } + g_free(ents); + return 0; +} + +static void +vg_resource_attach_backing(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_simple_resource *res; + struct virtio_gpu_resource_attach_backing ab; + int ret; + + VUGPU_FILL_CMD(ab); + + res = virtio_gpu_find_resource(g, ab.resource_id); + if (!res) { + g_critical("%s: illegal resource specified %d", + __func__, ab.resource_id); + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID; + return; + } + + ret = vg_create_mapping_iov(g, &ab, cmd, &res->iov); + if (ret != 0) { + cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC; + return; + } + + res->iov_cnt = ab.nr_entries; +} + +static void +vg_resource_detach_backing(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_simple_resource *res; + struct virtio_gpu_resource_detach_backing detach; + + VUGPU_FILL_CMD(detach); + + res = virtio_gpu_find_resource(g, detach.resource_id); + if (!res || !res->iov) { + g_critical("%s: illegal resource specified %d", + __func__, detach.resource_id); + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID; + return; + } + + g_free(res->iov); + res->iov = NULL; + res->iov_cnt = 0; +} + 
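For reference, vg_sock_fd_write() near the top of this file implements only
the sending half of the SCM_RIGHTS file-descriptor passing used on the
vhost-user-gpu socket. A minimal sketch of the matching receive side,
assuming the same at-most-one-fd convention and <sys/socket.h>/<string.h>;
vg_sock_fd_read is a hypothetical name, not part of this patch:

ssize_t
vg_sock_fd_read(int sock, void *buf, ssize_t buflen, int *fd)
{
    ssize_t size;
    struct msghdr msg;
    struct iovec iov;
    union {
        struct cmsghdr cmsghdr;
        char control[CMSG_SPACE(sizeof(int))];
    } cmsgu;
    struct cmsghdr *cmsg;

    iov.iov_base = buf;
    iov.iov_len = buflen;

    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cmsgu.control;
    msg.msg_controllen = sizeof(cmsgu.control);

    *fd = -1;
    size = recvmsg(sock, &msg, 0);
    if (size > 0) {
        /* accept at most one passed descriptor per message */
        cmsg = CMSG_FIRSTHDR(&msg);
        if (cmsg && cmsg->cmsg_len == CMSG_LEN(sizeof(int)) &&
            cmsg->cmsg_level == SOL_SOCKET &&
            cmsg->cmsg_type == SCM_RIGHTS) {
            memcpy(fd, CMSG_DATA(cmsg), sizeof(int));
        }
    }
    return size;
}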
+static void +vg_transfer_to_host_2d(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_simple_resource *res; + int h; + uint32_t src_offset, dst_offset, stride; + int bpp; + pixman_format_code_t format; + struct virtio_gpu_transfer_to_host_2d t2d; + + VUGPU_FILL_CMD(t2d); + + res = virtio_gpu_find_resource(g, t2d.resource_id); + if (!res || !res->iov) { + g_critical("%s: illegal resource specified %d", + __func__, t2d.resource_id); + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID; + return; + } + + if (t2d.r.x > res->width || + t2d.r.y > res->height || + t2d.r.width > res->width || + t2d.r.height > res->height || + t2d.r.x + t2d.r.width > res->width || + t2d.r.y + t2d.r.height > res->height) { + g_critical("%s: transfer bounds outside resource" + " bounds for resource %d: %d %d %d %d vs %d %d", + __func__, t2d.resource_id, t2d.r.x, t2d.r.y, + t2d.r.width, t2d.r.height, res->width, res->height); + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER; + return; + } + + format = pixman_image_get_format(res->image); + bpp = (PIXMAN_FORMAT_BPP(format) + 7) / 8; + stride = pixman_image_get_stride(res->image); + + if (t2d.offset || t2d.r.x || t2d.r.y || + t2d.r.width != pixman_image_get_width(res->image)) { + void *img_data = pixman_image_get_data(res->image); + for (h = 0; h < t2d.r.height; h++) { + src_offset = t2d.offset + stride * h; + dst_offset = (t2d.r.y + h) * stride + (t2d.r.x * bpp); + + iov_to_buf(res->iov, res->iov_cnt, src_offset, + (uint8_t *)img_data + + dst_offset, t2d.r.width * bpp); + } + } else { + iov_to_buf(res->iov, res->iov_cnt, 0, + pixman_image_get_data(res->image), + pixman_image_get_stride(res->image) + * pixman_image_get_height(res->image)); + } +} + +static void +vg_set_scanout(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_simple_resource *res; + struct virtio_gpu_scanout *scanout; + struct virtio_gpu_set_scanout ss; + + VUGPU_FILL_CMD(ss); + g_critical("set scanout %d:%d", ss.scanout_id, ss.resource_id); + + if (ss.scanout_id >= VIRTIO_GPU_MAX_SCANOUTS) { + g_critical("%s: illegal scanout id specified %d", + __func__, ss.scanout_id); + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_SCANOUT_ID; + return; + } + + if (ss.resource_id == 0) { + scanout = &g->scanout[ss.scanout_id]; + if (scanout->resource_id) { + res = virtio_gpu_find_resource(g, scanout->resource_id); + if (res) { + res->scanout_bitmask &= ~(1 << ss.scanout_id); + } + } + if (ss.scanout_id == 0) { + g_critical("%s: illegal scanout id specified %d", + __func__, ss.scanout_id); + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_SCANOUT_ID; + return; + } + /* dpy_gfx_replace_surface(g->scanout[ss.scanout_id].con, NULL); */ + scanout->width = 0; + scanout->height = 0; + return; + } + + /* create a surface for this scanout */ + res = virtio_gpu_find_resource(g, ss.resource_id); + if (!res) { + g_critical("%s: illegal resource specified %d", + __func__, ss.resource_id); + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID; + return; + } + + if (ss.r.x > res->width || + ss.r.y > res->height || + ss.r.width > res->width || + ss.r.height > res->height || + ss.r.x + ss.r.width > res->width || + ss.r.y + ss.r.height > res->height) { + g_critical("%s: illegal scanout %d bounds for" + " resource %d, (%d,%d)+%d,%d vs %d %d", + __func__, ss.scanout_id, ss.resource_id, ss.r.x, ss.r.y, + ss.r.width, ss.r.height, res->width, res->height); + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER; + return; + } + + scanout = &g->scanout[ss.scanout_id]; + + res->scanout_bitmask |= (1 << 
ss.scanout_id); + scanout->resource_id = ss.resource_id; + scanout->x = ss.r.x; + scanout->y = ss.r.y; + scanout->width = ss.r.width; + scanout->height = ss.r.height; + + VhostGpuMsg msg = { + .request = VHOST_GPU_SCANOUT, + .size = sizeof(VhostGpuScanout), + .payload.scanout.scanout_id = ss.scanout_id, + .payload.scanout.width = scanout->width, + .payload.scanout.height = scanout->height + }; + vg_sock_fd_write(g->sock_fd, &msg, VHOST_GPU_HDR_SIZE + msg.size, -1); +} + +static void +vg_resource_flush(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_simple_resource *res; + struct virtio_gpu_resource_flush rf; + pixman_region16_t flush_region; + int i; + + VUGPU_FILL_CMD(rf); + + res = virtio_gpu_find_resource(g, rf.resource_id); + if (!res) { + g_critical("%s: illegal resource specified %d\n", + __func__, rf.resource_id); + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID; + return; + } + + if (rf.r.x > res->width || + rf.r.y > res->height || + rf.r.width > res->width || + rf.r.height > res->height || + rf.r.x + rf.r.width > res->width || + rf.r.y + rf.r.height > res->height) { + g_critical("%s: flush bounds outside resource" + " bounds for resource %d: %d %d %d %d vs %d %d\n", + __func__, rf.resource_id, rf.r.x, rf.r.y, + rf.r.width, rf.r.height, res->width, res->height); + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER; + return; + } + + pixman_region_init_rect(&flush_region, + rf.r.x, rf.r.y, rf.r.width, rf.r.height); + for (i = 0; i < VIRTIO_GPU_MAX_SCANOUTS; i++) { + struct virtio_gpu_scanout *scanout; + pixman_region16_t region, finalregion; + pixman_box16_t *extents; + + if (!(res->scanout_bitmask & (1 << i))) { + continue; + } + scanout = &g->scanout[i]; + + pixman_region_init(&finalregion); + pixman_region_init_rect(®ion, scanout->x, scanout->y, + scanout->width, scanout->height); + + pixman_region_intersect(&finalregion, &flush_region, ®ion); + + extents = pixman_region_extents(&finalregion); + size_t width = extents->x2 - extents->x1; + size_t height = extents->y2 - extents->y1; + size_t bpp = PIXMAN_FORMAT_BPP(pixman_image_get_format(res->image)) / 8; + size_t size = width * height * bpp; + + VhostGpuMsg *msg = g_malloc(VHOST_GPU_HDR_SIZE + + sizeof(VhostGpuUpdate) + size); + msg->request = VHOST_GPU_UPDATE; + msg->size = sizeof(VhostGpuUpdate) + size; + msg->payload.update.scanout_id = i; + msg->payload.update.x = extents->x1; + msg->payload.update.y = extents->y1; + msg->payload.update.width = width; + msg->payload.update.height = height; + pixman_image_t *i = + pixman_image_create_bits(pixman_image_get_format(res->image), + msg->payload.update.width, + msg->payload.update.height, + (uint32_t *)msg->payload.update.data, + width * bpp); + pixman_image_composite(PIXMAN_OP_SRC, + res->image, NULL, i, + extents->x1, extents->y1, + 0, 0, 0, 0, + width, height); + pixman_image_unref(i); + vg_sock_fd_write(g->sock_fd, msg, VHOST_GPU_HDR_SIZE + msg->size, -1); + g_free(msg); + + pixman_region_fini(®ion); + pixman_region_fini(&finalregion); + } + pixman_region_fini(&flush_region); +} + +static void +vg_process_cmd(VuGpu *vg, struct virtio_gpu_ctrl_command *cmd) +{ + switch (cmd->cmd_hdr.type) { + case VIRTIO_GPU_CMD_GET_DISPLAY_INFO: + vg_get_display_info(vg, cmd); + break; + case VIRTIO_GPU_CMD_RESOURCE_CREATE_2D: + vg_resource_create_2d(vg, cmd); + break; + case VIRTIO_GPU_CMD_RESOURCE_UNREF: + vg_resource_unref(vg, cmd); + break; + case VIRTIO_GPU_CMD_RESOURCE_FLUSH: + vg_resource_flush(vg, cmd); + break; + case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D: + 
vg_transfer_to_host_2d(vg, cmd); + break; + case VIRTIO_GPU_CMD_SET_SCANOUT: + vg_set_scanout(vg, cmd); + break; + case VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING: + vg_resource_attach_backing(vg, cmd); + break; + case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING: + vg_resource_detach_backing(vg, cmd); + break; + default: + DPRINT("TODO handle ctrl %x\n", cmd->cmd_hdr.type); + cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC; + break; + } + if (!cmd->finished) { + vg_ctrl_response_nodata(vg, cmd, cmd->error ? cmd->error : + VIRTIO_GPU_RESP_OK_NODATA); + } +} + +static void +vg_handle_ctrl(VuDev *dev, int qidx) +{ + VuGpu *vg = container_of(dev, VuGpu, dev); + VuVirtq *vq = vu_get_queue(dev, qidx); + struct virtio_gpu_ctrl_command *cmd = NULL; + size_t len; + + DPRINT("%s\n", __func__); + + for (;;) { + cmd = vu_queue_pop(dev, vq, sizeof(struct virtio_gpu_ctrl_command)); + if (!cmd) { + break; + } + cmd->vq = vq; + cmd->error = 0; + cmd->finished = false; + + len = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num, + 0, &cmd->cmd_hdr, sizeof(cmd->cmd_hdr)); + if (len != sizeof(cmd->cmd_hdr)) { + g_warning("%s: command size incorrect %zu vs %zu\n", + __func__, len, sizeof(cmd->cmd_hdr)); + } + + DPRINT("%d %s\n", cmd->cmd_hdr.type, + vg_cmd_to_string(cmd->cmd_hdr.type)); + + if (vg->virgl) { + vg_virgl_process_cmd(vg, cmd); + } else { + vg_process_cmd(vg, cmd); + } + + if (!cmd->finished) { + QTAILQ_INSERT_TAIL(&vg->fenceq, cmd, next); + vg->inflight++; + } else { + g_free(cmd); + } + } +} + +static void +update_cursor_data_simple(VuGpu *g, uint32_t resource_id, gpointer data) +{ + struct virtio_gpu_simple_resource *res; + + res = virtio_gpu_find_resource(g, resource_id); + g_return_if_fail(res != NULL); + g_return_if_fail(pixman_image_get_width(res->image) == 64); + g_return_if_fail(pixman_image_get_height(res->image) == 64); + g_return_if_fail( + PIXMAN_FORMAT_BPP(pixman_image_get_format(res->image)) == 32); + + memcpy(data, pixman_image_get_data(res->image), 64 * 64 * sizeof(uint32_t)); +} + +static void +vg_handle_cursor(VuDev *dev, int qidx) +{ + VuGpu *g = container_of(dev, VuGpu, dev); + VuVirtq *vq = vu_get_queue(dev, qidx); + VuVirtqElement *elem; + size_t len; + struct virtio_gpu_update_cursor cursor; + + for (;;) { + elem = vu_queue_pop(dev, vq, sizeof(VuVirtqElement)); + if (!elem) { + break; + } + DPRINT("cursor out:%d in:%d\n", elem->out_num, elem->in_num); + + len = iov_to_buf(elem->out_sg, elem->out_num, + 0, &cursor, sizeof(cursor)); + if (len != sizeof(cursor)) { + g_warning("%s: cursor size incorrect %zu vs %zu\n", + __func__, len, sizeof(cursor)); + } + bool move = cursor.hdr.type != VIRTIO_GPU_CMD_MOVE_CURSOR; + DPRINT("%s move:%d\n", __func__, move); + + if (move) { + VhostGpuMsg msg = { + .request = cursor.resource_id ? 
+ VHOST_GPU_CURSOR_POS : VHOST_GPU_CURSOR_POS_HIDE, + .size = sizeof(VhostGpuCursorPos), + .payload.cursor_pos = { + .scanout_id = cursor.pos.scanout_id, + .x = cursor.pos.x, + .y = cursor.pos.y, + } + }; + vg_sock_fd_write(g->sock_fd, &msg, + VHOST_GPU_HDR_SIZE + msg.size, -1); + } else { + VhostGpuMsg msg = { + .request = VHOST_GPU_CURSOR_UPDATE, + .size = sizeof(VhostGpuCursorUpdate), + .payload.cursor_update = { + .pos = { + .scanout_id = cursor.pos.scanout_id, + .x = cursor.pos.x, + .y = cursor.pos.y, + }, + .hot_x = cursor.hot_x, + .hot_y = cursor.hot_y, + } + }; + if (g->virgl) { + vg_virgl_update_cursor_data(g, cursor.resource_id, + msg.payload.cursor_update.data); + } else { + update_cursor_data_simple(g, cursor.resource_id, + msg.payload.cursor_update.data); + } + vg_sock_fd_write(g->sock_fd, &msg, + VHOST_GPU_HDR_SIZE + msg.size, -1); + } + + vu_queue_push(dev, vq, elem, 0); + vu_queue_notify(&g->dev, vq); + g_free(elem); + } +} + +static void +vg_panic(VuDev *dev, const char *msg) +{ + g_critical("%s\n", msg); + exit(1); +} + +typedef struct Watch { + GSource source; + GIOCondition condition; + gpointer tag; + VuDev *dev; + guint id; +} Watch; + +static GIOCondition +vu_to_gio_condition(int condition) +{ + return (condition & VU_WATCH_IN ? G_IO_IN : 0) | + (condition & VU_WATCH_OUT ? G_IO_OUT : 0) | + (condition & VU_WATCH_PRI ? G_IO_PRI : 0) | + (condition & VU_WATCH_ERR ? G_IO_ERR : 0) | + (condition & VU_WATCH_HUP ? G_IO_HUP : 0); +} + +static GIOCondition +vu_from_gio_condition(int condition) +{ + return (condition & G_IO_IN ? VU_WATCH_IN : 0) | + (condition & G_IO_OUT ? VU_WATCH_OUT : 0) | + (condition & G_IO_PRI ? VU_WATCH_PRI : 0) | + (condition & G_IO_ERR ? VU_WATCH_ERR : 0) | + (condition & G_IO_HUP ? VU_WATCH_HUP : 0); +} + +static gboolean +watch_check(GSource *source) +{ + Watch *watch = (Watch *)source; + GIOCondition poll_condition = g_source_query_unix_fd(source, watch->tag); + + return poll_condition & watch->condition; +} + +static gboolean +watch_dispatch(GSource *source, + GSourceFunc callback, + gpointer user_data) + +{ + vu_watch_cb func = (vu_watch_cb)callback; + Watch *watch = (Watch *)source; + GIOCondition poll_condition = g_source_query_unix_fd(source, watch->tag); + int cond = vu_from_gio_condition(poll_condition & watch->condition); + + (*func) (watch->dev, cond, user_data); + + return G_SOURCE_CONTINUE; +} + +static GSourceFuncs watch_funcs = { + .check = watch_check, + .dispatch = watch_dispatch, +}; + +void +vg_set_fd_handler(VuDev *dev, int fd, GIOCondition condition, + vu_watch_cb cb, void *data) +{ + VuGpu *vg = container_of(dev, VuGpu, dev); + Watch *watch; + GSource *s; + + g_assert_cmpint(fd, <, G_N_ELEMENTS(vg->watches)); + + s = vg->watches[fd]; + if (cb) { + if (!s) { + s = g_source_new(&watch_funcs, sizeof(Watch)); + watch = (Watch *)s; + watch->dev = dev; + watch->condition = condition; + watch->tag = + g_source_add_unix_fd(s, fd, condition); + watch->id = g_source_attach(s, NULL); + vg->watches[fd] = s; + } else { + watch = (Watch *)s; + g_source_modify_unix_fd(s, watch->tag, condition); + } + + g_source_set_callback(s, (GSourceFunc)cb, data, NULL); + } else if (s) { + watch = (Watch *)s; + g_source_remove_unix_fd(s, watch->tag); + g_source_unref(s); + g_source_remove(watch->id); + vg->watches[fd] = NULL; + } +} + +static void +vg_add_watch(VuDev *dev, int fd, int condition, + vu_watch_cb cb, void *data) +{ + vg_set_fd_handler(dev, fd, vu_to_gio_condition(condition), cb, data); +} + +static void +vg_remove_watch(VuDev *dev, int fd) +{ 
+ vg_set_fd_handler(dev, fd, 0, NULL, NULL); +} + +static void +vg_queue_set_started(VuDev *dev, int qidx, bool started) +{ + VuVirtq *vq = vu_get_queue(dev, qidx); + + DPRINT("queue started %d:%d\n", qidx, started); + + switch (qidx) { + case 0: + vu_set_queue_handler(dev, vq, started ? vg_handle_ctrl : NULL); + break; + case 1: + vu_set_queue_handler(dev, vq, started ? vg_handle_cursor : NULL); + break; + default: + break; + } +} + +static int +vg_process_msg(VuDev *dev, VhostUserMsg *msg, int *do_reply) +{ + VuGpu *g = container_of(dev, VuGpu, dev); + + switch (msg->request) { + case VHOST_USER_GPU_SET_SOCKET: + g_return_val_if_fail(msg->fd_num == 1, 1); + g_return_val_if_fail(g->sock_fd == -1, 1); + g->sock_fd = msg->fds[0]; + return 1; + default: + return 0; + } + + return 0; +} + +static void +vg_set_features(VuDev *dev, uint64_t features) +{ + VuGpu *g = container_of(dev, VuGpu, dev); + bool virgl = features & (1 << VIRTIO_GPU_F_VIRGL); + + if (virgl && !g->virgl_inited) { + vg_virgl_init(g); + g->virgl_inited = true; + } + + g->virgl = virgl; +} + +static const VuDevIface vuiface = { + .set_features = vg_set_features, + .queue_set_started = vg_queue_set_started, + .process_msg = vg_process_msg, +}; + +static void +vg_vhost_watch(VuDev *dev, int condition, void *data) +{ + vu_dispatch(dev); +} + +static void +vg_reset(VuGpu *g) +{ + struct virtio_gpu_simple_resource *res, *tmp; + + vu_deinit(&g->dev); + + if (g->sock_fd != -1) { + close(g->sock_fd); + g->sock_fd = -1; + } + + QTAILQ_FOREACH_SAFE(res, &g->reslist, next, tmp) { + vg_resource_destroy(g, res); + } +} + +int +main(int argc, char *argv[]) +{ + GMainLoop *loop = NULL; + VuGpu g = { .sock_fd = -1 }; + + QTAILQ_INIT(&g.reslist); + QTAILQ_INIT(&g.fenceq); + + vu_init(&g.dev, 3, vg_panic, vg_add_watch, vg_remove_watch, &vuiface); + vg_set_fd_handler(&g.dev, 3, G_IO_IN | G_IO_HUP, vg_vhost_watch, NULL); + + loop = g_main_loop_new(NULL, FALSE); + g_main_loop_run(loop); + g_main_loop_unref(loop); + + vg_reset(&g); + + return 0; +} diff --git a/contrib/vhost-user-gpu/virgl.c b/contrib/vhost-user-gpu/virgl.c new file mode 100644 index 0000000..a047531 --- /dev/null +++ b/contrib/vhost-user-gpu/virgl.c @@ -0,0 +1,545 @@ +/* + * Virtio vhost-user GPU Device + * + * Copyright Red Hat, Inc. 2013-2016 + * + * Authors: + * Dave Airlie + * Gerd Hoffmann + * Marc-André Lureau + * + * This work is licensed under the terms of the GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. + */ + +#include +#include "virgl.h" + +#define VG_DEBUG 0 + +#define DPRINT(...) 
\ + do { \ + if (VG_DEBUG) { \ + fprintf(stderr, __VA_ARGS__); \ + } \ + } while (0) + +void +vg_virgl_update_cursor_data(VuGpu *g, uint32_t resource_id, + gpointer data) +{ + uint32_t width, height; + uint32_t *cursor; + + cursor = virgl_renderer_get_cursor_data(resource_id, &width, &height); + g_return_if_fail(cursor != NULL); + g_return_if_fail(width == 64); + g_return_if_fail(height == 64); + + memcpy(data, cursor, 64 * 64 * sizeof(uint32_t)); + free(cursor); +} + +static void +virgl_cmd_context_create(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_ctx_create cc; + + VUGPU_FILL_CMD(cc); + + virgl_renderer_context_create(cc.hdr.ctx_id, cc.nlen, + cc.debug_name); +} + +static void +virgl_cmd_context_destroy(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_ctx_destroy cd; + + VUGPU_FILL_CMD(cd); + + virgl_renderer_context_destroy(cd.hdr.ctx_id); +} + +static void +virgl_cmd_create_resource_2d(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_resource_create_2d c2d; + struct virgl_renderer_resource_create_args args; + + VUGPU_FILL_CMD(c2d); + + args.handle = c2d.resource_id; + args.target = 2; + args.format = c2d.format; + args.bind = (1 << 1); + args.width = c2d.width; + args.height = c2d.height; + args.depth = 1; + args.array_size = 1; + args.last_level = 0; + args.nr_samples = 0; + args.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP; + virgl_renderer_resource_create(&args, NULL, 0); +} + +static void +virgl_cmd_create_resource_3d(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_resource_create_3d c3d; + struct virgl_renderer_resource_create_args args; + + VUGPU_FILL_CMD(c3d); + + args.handle = c3d.resource_id; + args.target = c3d.target; + args.format = c3d.format; + args.bind = c3d.bind; + args.width = c3d.width; + args.height = c3d.height; + args.depth = c3d.depth; + args.array_size = c3d.array_size; + args.last_level = c3d.last_level; + args.nr_samples = c3d.nr_samples; + args.flags = c3d.flags; + virgl_renderer_resource_create(&args, NULL, 0); +} + +static void +virgl_cmd_resource_unref(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_resource_unref unref; + + VUGPU_FILL_CMD(unref); + + virgl_renderer_resource_unref(unref.resource_id); +} + +static void +virgl_cmd_get_capset_info(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_get_capset_info info; + struct virtio_gpu_resp_capset_info resp; + + DPRINT("%s\n", __func__); + VUGPU_FILL_CMD(info); + + if (info.capset_index == 0) { + resp.capset_id = VIRTIO_GPU_CAPSET_VIRGL; + virgl_renderer_get_cap_set(resp.capset_id, + &resp.capset_max_version, + &resp.capset_max_size); + } else { + resp.capset_max_version = 0; + resp.capset_max_size = 0; + } + resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO; + vg_ctrl_response(g, cmd, &resp.hdr, sizeof(resp)); +} + +static void +virgl_cmd_get_capset(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_get_capset gc; + struct virtio_gpu_resp_capset *resp; + uint32_t max_ver, max_size; + + VUGPU_FILL_CMD(gc); + + virgl_renderer_get_cap_set(gc.capset_id, &max_ver, + &max_size); + resp = g_malloc0(sizeof(*resp) + max_size); + + resp->hdr.type = VIRTIO_GPU_RESP_OK_CAPSET; + virgl_renderer_fill_caps(gc.capset_id, + gc.capset_version, + (void *)resp->capset_data); + vg_ctrl_response(g, cmd, &resp->hdr, sizeof(*resp) + max_size); + g_free(resp); +} + +static void +virgl_cmd_submit_3d(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct 
virtio_gpu_cmd_submit cs; + void *buf; + size_t s; + + VUGPU_FILL_CMD(cs); + + buf = g_malloc(cs.size); + s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num, + sizeof(cs), buf, cs.size); + if (s != cs.size) { + g_critical("%s: size mismatch (%zd/%d)", __func__, s, cs.size); + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER; + goto out; + } + + virgl_renderer_submit_cmd(buf, cs.hdr.ctx_id, cs.size / 4); + +out: + g_free(buf); +} + +static void +virgl_cmd_transfer_to_host_2d(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_transfer_to_host_2d t2d; + struct virtio_gpu_box box; + + VUGPU_FILL_CMD(t2d); + + box.x = t2d.r.x; + box.y = t2d.r.y; + box.z = 0; + box.w = t2d.r.width; + box.h = t2d.r.height; + box.d = 1; + + virgl_renderer_transfer_write_iov(t2d.resource_id, + 0, + 0, + 0, + 0, + (struct virgl_box *)&box, + t2d.offset, NULL, 0); +} + +static void +virgl_cmd_transfer_to_host_3d(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_transfer_host_3d t3d; + + VUGPU_FILL_CMD(t3d); + + virgl_renderer_transfer_write_iov(t3d.resource_id, + t3d.hdr.ctx_id, + t3d.level, + t3d.stride, + t3d.layer_stride, + (struct virgl_box *)&t3d.box, + t3d.offset, NULL, 0); +} + +static void +virgl_cmd_transfer_from_host_3d(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_transfer_host_3d tf3d; + + VUGPU_FILL_CMD(tf3d); + + virgl_renderer_transfer_read_iov(tf3d.resource_id, + tf3d.hdr.ctx_id, + tf3d.level, + tf3d.stride, + tf3d.layer_stride, + (struct virgl_box *)&tf3d.box, + tf3d.offset, NULL, 0); +} + +static void +virgl_resource_attach_backing(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_resource_attach_backing att_rb; + struct iovec *res_iovs; + int ret; + + VUGPU_FILL_CMD(att_rb); + + ret = vg_create_mapping_iov(g, &att_rb, cmd, &res_iovs); + if (ret != 0) { + cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC; + return; + } + + virgl_renderer_resource_attach_iov(att_rb.resource_id, + res_iovs, att_rb.nr_entries); +} + +static void +virgl_resource_detach_backing(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_resource_detach_backing detach_rb; + struct iovec *res_iovs = NULL; + int num_iovs = 0; + + VUGPU_FILL_CMD(detach_rb); + + virgl_renderer_resource_detach_iov(detach_rb.resource_id, + &res_iovs, + &num_iovs); + if (res_iovs == NULL || num_iovs == 0) { + return; + } + g_free(res_iovs); +} + +static void +virgl_cmd_set_scanout(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_set_scanout ss; + struct virgl_renderer_resource_info info; + int ret; + + VUGPU_FILL_CMD(ss); + + if (ss.scanout_id >= VIRTIO_GPU_MAX_SCANOUTS) { + g_critical("%s: illegal scanout id specified %d", + __func__, ss.scanout_id); + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_SCANOUT_ID; + return; + } + + memset(&info, 0, sizeof(info)); + + if (ss.resource_id && ss.r.width && ss.r.height) { + ret = virgl_renderer_resource_get_info(ss.resource_id, &info); + if (ret == -1) { + g_critical("%s: illegal resource specified %d\n", + __func__, ss.resource_id); + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID; + return; + } + + int fd = -1; + if (virgl_renderer_get_fd_for_texture(info.tex_id, &fd) < 0) { + g_critical("%s: failed to get fd for texture\n", __func__); + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID; + return; + } + + VhostGpuMsg msg = { + .request = VHOST_GPU_GL_SCANOUT, + .size = sizeof(VhostGpuGlScanout), + .payload.gl_scanout.scanout_id = ss.scanout_id, + .payload.gl_scanout.x = ss.r.x, + 
.payload.gl_scanout.y = ss.r.y, + .payload.gl_scanout.width = ss.r.width, + .payload.gl_scanout.height = ss.r.height, + .payload.gl_scanout.fd_width = info.width, + .payload.gl_scanout.fd_height = info.height, + .payload.gl_scanout.fd_stride = info.stride, + .payload.gl_scanout.fd_flags = info.flags, + .payload.gl_scanout.fd_drm_fourcc = info.drm_fourcc + }; + vg_sock_fd_write(g->sock_fd, &msg, VHOST_GPU_HDR_SIZE + msg.size, fd); + close(fd); + } else { + VhostGpuMsg msg = { + .request = VHOST_GPU_GL_SCANOUT, + .size = sizeof(VhostGpuGlScanout), + .payload.gl_scanout.scanout_id = ss.scanout_id, + }; + vg_sock_fd_write(g->sock_fd, &msg, VHOST_GPU_HDR_SIZE + msg.size, -1); + } + g->scanout[ss.scanout_id].resource_id = ss.resource_id; +} + +static void +virgl_cmd_resource_flush(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_resource_flush rf; + int i, ret; + uint32_t ok; + + VUGPU_FILL_CMD(rf); + + for (i = 0; i < VIRTIO_GPU_MAX_SCANOUTS; i++) { + if (g->scanout[i].resource_id != rf.resource_id) { + continue; + } + VhostGpuMsg msg = { + .request = VHOST_GPU_GL_UPDATE, + .size = sizeof(VhostGpuUpdate), + .payload.update.scanout_id = i, + .payload.update.x = rf.r.x, + .payload.update.y = rf.r.y, + .payload.update.width = rf.r.width, + .payload.update.height = rf.r.height + }; + ret = vg_sock_fd_write(g->sock_fd, &msg, + VHOST_GPU_HDR_SIZE + msg.size, -1); + g_return_if_fail(ret == VHOST_GPU_HDR_SIZE + msg.size); + ret = read(g->sock_fd, &ok, sizeof(ok)); + g_return_if_fail(ret == sizeof(ok)); + } +} + +static void +virgl_cmd_ctx_attach_resource(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_ctx_resource att_res; + + VUGPU_FILL_CMD(att_res); + + virgl_renderer_ctx_attach_resource(att_res.hdr.ctx_id, att_res.resource_id); +} + +static void +virgl_cmd_ctx_detach_resource(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd) +{ + struct virtio_gpu_ctx_resource det_res; + + VUGPU_FILL_CMD(det_res); + + virgl_renderer_ctx_detach_resource(det_res.hdr.ctx_id, det_res.resource_id); +} + +void vg_virgl_process_cmd(VuGpu *g, struct virtio_gpu_ctrl_command *cmd) +{ + virgl_renderer_force_ctx_0(); + switch (cmd->cmd_hdr.type) { + case VIRTIO_GPU_CMD_CTX_CREATE: + virgl_cmd_context_create(g, cmd); + break; + case VIRTIO_GPU_CMD_CTX_DESTROY: + virgl_cmd_context_destroy(g, cmd); + break; + case VIRTIO_GPU_CMD_RESOURCE_CREATE_2D: + virgl_cmd_create_resource_2d(g, cmd); + break; + case VIRTIO_GPU_CMD_RESOURCE_CREATE_3D: + virgl_cmd_create_resource_3d(g, cmd); + break; + case VIRTIO_GPU_CMD_SUBMIT_3D: + virgl_cmd_submit_3d(g, cmd); + break; + case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D: + virgl_cmd_transfer_to_host_2d(g, cmd); + break; + case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D: + virgl_cmd_transfer_to_host_3d(g, cmd); + break; + case VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D: + virgl_cmd_transfer_from_host_3d(g, cmd); + break; + case VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING: + virgl_resource_attach_backing(g, cmd); + break; + case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING: + virgl_resource_detach_backing(g, cmd); + break; + case VIRTIO_GPU_CMD_SET_SCANOUT: + virgl_cmd_set_scanout(g, cmd); + break; + case VIRTIO_GPU_CMD_RESOURCE_FLUSH: + virgl_cmd_resource_flush(g, cmd); + break; + case VIRTIO_GPU_CMD_RESOURCE_UNREF: + virgl_cmd_resource_unref(g, cmd); + break; + case VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE: + /* TODO add security */ + virgl_cmd_ctx_attach_resource(g, cmd); + break; + case VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE: + /* TODO add security */ + virgl_cmd_ctx_detach_resource(g, 
cmd); + break; + case VIRTIO_GPU_CMD_GET_CAPSET_INFO: + virgl_cmd_get_capset_info(g, cmd); + break; + case VIRTIO_GPU_CMD_GET_CAPSET: + virgl_cmd_get_capset(g, cmd); + break; + case VIRTIO_GPU_CMD_GET_DISPLAY_INFO: + vg_get_display_info(g, cmd); + break; + default: + DPRINT("TODO handle ctrl %x\n", cmd->cmd_hdr.type); + cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC; + break; + } + + if (cmd->finished) { + return; + } + + if (cmd->error) { + g_warning("%s: ctrl 0x%x, error 0x%x\n", __func__, + cmd->cmd_hdr.type, cmd->error); + vg_ctrl_response_nodata(g, cmd, cmd->error); + return; + } + + if (!(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) { + vg_ctrl_response_nodata(g, cmd, VIRTIO_GPU_RESP_OK_NODATA); + return; + } + + DPRINT("Creating fence id:%" PRId64 " type:%d", + cmd->cmd_hdr.fence_id, cmd->cmd_hdr.type); + virgl_renderer_create_fence(cmd->cmd_hdr.fence_id, cmd->cmd_hdr.type); +} + +static void +virgl_write_fence(void *opaque, uint32_t fence) +{ + VuGpu *g = opaque; + struct virtio_gpu_ctrl_command *cmd, *tmp; + + QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) { + /* + * the guest can end up emitting fences out of order + * so we should check all fenced cmds not just the first one. + */ + if (cmd->cmd_hdr.fence_id > fence) { + continue; + } + DPRINT("FENCE %" PRIu64, cmd->cmd_hdr.fence_id); + vg_ctrl_response_nodata(g, cmd, VIRTIO_GPU_RESP_OK_NODATA); + QTAILQ_REMOVE(&g->fenceq, cmd, next); + g_free(cmd); + g->inflight--; + } +} + +static struct virgl_renderer_callbacks virgl_cbs = { + .version = 1, + .write_fence = virgl_write_fence, +}; + +static void +vg_virgl_poll(VuDev *dev, int condition, void *data) +{ + virgl_renderer_poll(); +} + +int +vg_virgl_init(VuGpu *g) +{ + int ret; + + ret = virgl_renderer_init(g, + VIRGL_RENDERER_USE_EGL | + VIRGL_RENDERER_THREAD_SYNC, + &virgl_cbs); + if (ret != 0) { + return ret; + } + + ret = virgl_renderer_get_poll_fd(); + if (ret != -1) { + vg_set_fd_handler(&g->dev, ret, G_IO_IN, vg_virgl_poll, g); + } + + return ret; +} diff --git a/contrib/vhost-user-gpu/virgl.h b/contrib/vhost-user-gpu/virgl.h new file mode 100644 index 0000000..76402d4 --- /dev/null +++ b/contrib/vhost-user-gpu/virgl.h @@ -0,0 +1,24 @@ +/* + * Virtio vhost-user GPU Device + * + * Copyright Red Hat, Inc. 2013-2016 + * + * Authors: + * Dave Airlie + * Gerd Hoffmann + * Marc-André Lureau + * + * This work is licensed under the terms of the GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. + */ +#ifndef VUGPU_VIRGL_H_ +#define VUGPU_VIRGL_H_ + +#include "vugpu.h" + +int vg_virgl_init(VuGpu *g); +void vg_virgl_process_cmd(VuGpu *vg, struct virtio_gpu_ctrl_command *cmd); +void vg_virgl_update_cursor_data(VuGpu *g, uint32_t resource_id, + gpointer data); + +#endif diff --git a/contrib/vhost-user-gpu/vugpu.h b/contrib/vhost-user-gpu/vugpu.h new file mode 100644 index 0000000..5d718fd --- /dev/null +++ b/contrib/vhost-user-gpu/vugpu.h @@ -0,0 +1,155 @@ +/* + * Virtio vhost-user GPU Device + * + * Copyright Red Hat, Inc. 2013-2016 + * + * Authors: + * Dave Airlie + * Gerd Hoffmann + * Marc-André Lureau + * + * This work is licensed under the terms of the GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. 
+ */ +#ifndef VUGPU_H_ +#define VUGPU_H_ + +#include "contrib/libvhost-user/libvhost-user.h" +#include "standard-headers/linux/virtio_gpu.h" + +#include "qemu/osdep.h" +#include "qemu/queue.h" +#include "qemu/iov.h" +#include "qemu/bswap.h" + +typedef enum VhostGpuRequest { + VHOST_GPU_NONE = 0, + VHOST_GPU_CURSOR_POS, + VHOST_GPU_CURSOR_POS_HIDE, + VHOST_GPU_CURSOR_UPDATE, + VHOST_GPU_SCANOUT, + VHOST_GPU_UPDATE, + VHOST_GPU_GL_SCANOUT, + VHOST_GPU_GL_UPDATE, +} VhostGpuRequest; + +typedef struct VhostGpuCursorPos { + uint32_t scanout_id; + uint32_t x; + uint32_t y; +} VhostGpuCursorPos; + +typedef struct VhostGpuCursorUpdate { + VhostGpuCursorPos pos; + uint32_t hot_x; + uint32_t hot_y; + uint32_t data[64 * 64]; +} VhostGpuCursorUpdate; + +typedef struct VhostGpuScanout { + uint32_t scanout_id; + uint32_t width; + uint32_t height; +} VhostGpuScanout; + +typedef struct VhostGpuUpdate { + uint32_t scanout_id; + uint32_t x; + uint32_t y; + uint32_t width; + uint32_t height; + uint8_t data[]; +} VhostGpuUpdate; + +typedef struct VhostGpuGlScanout { + uint32_t scanout_id; + uint32_t x; + uint32_t y; + uint32_t width; + uint32_t height; + uint32_t fd_width; + uint32_t fd_height; + uint32_t fd_stride; + uint32_t fd_flags; + int fd_drm_fourcc; +} VhostGpuGlScanout; + +typedef struct VhostGpuMsg { + VhostGpuRequest request; + uint32_t size; /* the following payload size */ + union { + VhostGpuCursorPos cursor_pos; + VhostGpuCursorUpdate cursor_update; + VhostGpuScanout scanout; + VhostGpuUpdate update; + VhostGpuGlScanout gl_scanout; + } payload; +} QEMU_PACKED VhostGpuMsg; + +static VhostGpuMsg m __attribute__ ((unused)); +#define VHOST_GPU_HDR_SIZE (sizeof(m.request) + sizeof(m.size)) + +struct virtio_gpu_scanout { + uint32_t width, height; + int x, y; + int invalidate; + uint32_t resource_id; +}; + +typedef struct VuGpu { + VuDev dev; + int sock_fd; + GSource *watches[16]; + + bool virgl; + bool virgl_inited; + uint32_t inflight; + + struct virtio_gpu_scanout scanout[VIRTIO_GPU_MAX_SCANOUTS]; + QTAILQ_HEAD(, virtio_gpu_simple_resource) reslist; + QTAILQ_HEAD(, virtio_gpu_ctrl_command) fenceq; +} VuGpu; + +struct virtio_gpu_ctrl_command { + VuVirtqElement elem; + VuVirtq *vq; + struct virtio_gpu_ctrl_hdr cmd_hdr; + uint32_t error; + bool finished; + QTAILQ_ENTRY(virtio_gpu_ctrl_command) next; +}; + +#define VUGPU_FILL_CMD(out) do { \ + size_t s; \ + s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num, 0, \ + &out, sizeof(out)); \ + if (s != sizeof(out)) { \ + g_critical("%s: command size incorrect %zu vs %zu", \ + __func__, s, sizeof(out)); \ + return; \ + } \ + } while (0) + + +void vg_ctrl_response(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd, + struct virtio_gpu_ctrl_hdr *resp, + size_t resp_len); + +void vg_ctrl_response_nodata(VuGpu *g, + struct virtio_gpu_ctrl_command *cmd, + enum virtio_gpu_ctrl_type type); + +int vg_create_mapping_iov(VuGpu *g, + struct virtio_gpu_resource_attach_backing *ab, + struct virtio_gpu_ctrl_command *cmd, + struct iovec **iov); + +void vg_get_display_info(VuGpu *vg, struct virtio_gpu_ctrl_command *cmd); + +void vg_set_fd_handler(VuDev *dev, int fd, GIOCondition condition, + vu_watch_cb cb, void *data); + +ssize_t vg_sock_fd_write(int sock, void *buf, ssize_t buflen, int fd); + +#endif
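The VhostGpuMsg wire format declared above is a fixed header (the request
and size fields, i.e. VHOST_GPU_HDR_SIZE bytes) followed by size bytes of
payload. A minimal sketch of how a frontend could read one message off the
socket, assuming blocking reads that return complete chunks (a robust
reader would loop on short reads) and ignoring any passed file descriptor;
read_gpu_msg is a hypothetical helper, not part of this series:

#include <unistd.h>

static int
read_gpu_msg(int sock, VhostGpuMsg *msg)
{
    /* read the fixed header first: request + payload size */
    ssize_t n = read(sock, msg, VHOST_GPU_HDR_SIZE);
    if (n != (ssize_t)VHOST_GPU_HDR_SIZE) {
        return -1;
    }
    /* VHOST_GPU_UPDATE carries trailing pixel data beyond the union;
     * a real reader would allocate msg->size bytes instead of rejecting. */
    if (msg->size > sizeof(msg->payload)) {
        return -1;
    }
    n = read(sock, &msg->payload, msg->size);
    return n == (ssize_t)msg->size ? 0 : -1;
}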