From patchwork Tue Nov 27 10:32:52 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Oleksandr Andrushchenko
X-Patchwork-Id: 10700089
From: Oleksandr Andrushchenko
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 dri-devel@lists.freedesktop.org, daniel.vetter@intel.com, jgross@suse.com,
 boris.ostrovsky@oracle.com
Subject: [PATCH] drm/xen-front: Make shmem backed display buffer coherent
Date: Tue, 27 Nov 2018 12:32:52 +0200
Message-Id: <20181127103252.20994-1-andr2000@gmail.com>
X-Mailer: git-send-email 2.19.1
List-Id: Direct Rendering Infrastructure - Development
Cc: andr2000@gmail.com, Oleksandr Andrushchenko

From: Oleksandr Andrushchenko

When GEM backing storage is allocated with drm_gem_get_pages, the
backing pages may be cached, so the backend may see only partial
content of the buffer, which can lead to screen artifacts. Make sure
that the frontend's memory is coherent so the backend always sees the
correct display buffer content.

Fixes: c575b7eeb89f ("drm/xen-front: Add support for Xen PV display frontend")

Signed-off-by: Oleksandr Andrushchenko
---
 drivers/gpu/drm/xen/xen_drm_front_gem.c | 62 +++++++++++++++++++------
 1 file changed, 48 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index 47ff019d3aef..c592735e49d2 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -33,8 +33,11 @@ struct xen_gem_object {
 	/* set for buffers allocated by the backend */
 	bool be_alloc;
 
-	/* this is for imported PRIME buffer */
-	struct sg_table *sgt_imported;
+	/*
+	 * this is for imported PRIME buffer or the one allocated via
+	 * drm_gem_get_pages.
+	 */
+	struct sg_table *sgt;
 };
 
 static inline struct xen_gem_object *
@@ -77,10 +80,21 @@ static struct xen_gem_object *gem_create_obj(struct drm_device *dev,
 	return xen_obj;
 }
 
+struct sg_table *xen_drm_front_gem_get_sg_table(struct drm_gem_object *gem_obj)
+{
+	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+
+	if (!xen_obj->pages)
+		return ERR_PTR(-ENOMEM);
+
+	return drm_prime_pages_to_sg(xen_obj->pages, xen_obj->num_pages);
+}
+
 static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
 {
 	struct xen_drm_front_drm_info *drm_info = dev->dev_private;
 	struct xen_gem_object *xen_obj;
+	struct address_space *mapping;
 	int ret;
 
 	size = round_up(size, PAGE_SIZE);
@@ -113,10 +127,14 @@ static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
 		xen_obj->be_alloc = true;
 		return xen_obj;
 	}
+
 	/*
 	 * need to allocate backing pages now, so we can share those
 	 * with the backend
 	 */
+	mapping = xen_obj->base.filp->f_mapping;
+	mapping_set_gfp_mask(mapping, GFP_USER | __GFP_DMA32);
+
 	xen_obj->num_pages = DIV_ROUND_UP(size, PAGE_SIZE);
 	xen_obj->pages = drm_gem_get_pages(&xen_obj->base);
 	if (IS_ERR_OR_NULL(xen_obj->pages)) {
@@ -125,8 +143,27 @@ static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
 		goto fail;
 	}
 
+	xen_obj->sgt = xen_drm_front_gem_get_sg_table(&xen_obj->base);
+	if (IS_ERR_OR_NULL(xen_obj->sgt)) {
+		ret = PTR_ERR(xen_obj->sgt);
+		xen_obj->sgt = NULL;
+		goto fail_put_pages;
+	}
+
+	if (!dma_map_sg(dev->dev, xen_obj->sgt->sgl, xen_obj->sgt->nents,
+			DMA_BIDIRECTIONAL)) {
+		ret = -EFAULT;
+		goto fail_free_sgt;
+	}
+
 	return xen_obj;
 
+fail_free_sgt:
+	sg_free_table(xen_obj->sgt);
+	xen_obj->sgt = NULL;
+fail_put_pages:
+	drm_gem_put_pages(&xen_obj->base, xen_obj->pages, true, false);
+	xen_obj->pages = NULL;
 fail:
 	DRM_ERROR("Failed to allocate buffer with size %zu\n", size);
 	return ERR_PTR(ret);
@@ -149,7 +186,7 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj)
 	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
 
 	if (xen_obj->base.import_attach) {
-		drm_prime_gem_destroy(&xen_obj->base, xen_obj->sgt_imported);
+		drm_prime_gem_destroy(&xen_obj->base, xen_obj->sgt);
 		gem_free_pages_array(xen_obj);
 	} else {
 		if (xen_obj->pages) {
@@ -158,6 +195,13 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj)
 					  xen_obj->pages);
 			gem_free_pages_array(xen_obj);
 		} else {
+			if (xen_obj->sgt) {
+				dma_unmap_sg(xen_obj->base.dev->dev,
+					     xen_obj->sgt->sgl,
+					     xen_obj->sgt->nents,
+					     DMA_BIDIRECTIONAL);
+				sg_free_table(xen_obj->sgt);
+			}
 			drm_gem_put_pages(&xen_obj->base,
 					  xen_obj->pages, true, false);
 		}
@@ -174,16 +218,6 @@ struct page **xen_drm_front_gem_get_pages(struct drm_gem_object *gem_obj)
 	return xen_obj->pages;
 }
 
-struct sg_table *xen_drm_front_gem_get_sg_table(struct drm_gem_object *gem_obj)
-{
-	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
-
-	if (!xen_obj->pages)
-		return ERR_PTR(-ENOMEM);
-
-	return drm_prime_pages_to_sg(xen_obj->pages, xen_obj->num_pages);
-}
-
 struct drm_gem_object *
 xen_drm_front_gem_import_sg_table(struct drm_device *dev,
 				  struct dma_buf_attachment *attach,
@@ -203,7 +237,7 @@ xen_drm_front_gem_import_sg_table(struct drm_device *dev,
 	if (ret < 0)
 		return ERR_PTR(ret);
 
-	xen_obj->sgt_imported = sgt;
+	xen_obj->sgt = sgt;
 
 	ret = drm_prime_sg_to_page_addr_arrays(sgt, xen_obj->pages, NULL,
 					       xen_obj->num_pages);