From patchwork Tue Dec 22 21:36:47 2015
X-Patchwork-Submitter: Tiago Vignatti
X-Patchwork-Id: 7907321
From: Tiago Vignatti
To: dri-devel@lists.freedesktop.org
Cc: daniel.thompson@linaro.org, marcheu@google.com, daniel.vetter@ffwll.ch, thellstrom@vmware.com, jglisse@redhat.com, reveman@google.com
Subject: [PATCH v7 4/5] drm/i915: Implement end_cpu_access
Date: Tue, 22 Dec 2015 19:36:47 -0200
Message-Id: <1450820214-12509-5-git-send-email-tiago.vignatti@intel.com>
In-Reply-To: <1450820214-12509-1-git-send-email-tiago.vignatti@intel.com>
References: <1450820214-12509-1-git-send-email-tiago.vignatti@intel.com>

This function is meant to be used with dma-buf mmap, when finishing CPU
access of the mapped pointer. (A userspace sketch of the mmap + sync
bracketing is appended after the diff.)

The error case should be rare: it requires the buffer to become active
during the sync period and the wait in end_cpu_access to be interrupted.
We therefore take struct_mutex uninterruptibly and disable
mm.interruptible, so that any such failure is actually reported.

v2: disable interruption to make sure errors are reported.
v3: update to the new end_cpu_access API.
v7: use .write = false since end_cpu_access does not need to know whether
    the access was a write.
Reviewed-by: Chris Wilson
Signed-off-by: Tiago Vignatti
---
 drivers/gpu/drm/i915/i915_gem_dmabuf.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
index 65ab2bd..8c9ed2a 100644
--- a/drivers/gpu/drm/i915/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
@@ -212,6 +212,27 @@ static int i915_gem_begin_cpu_access(struct dma_buf *dma_buf, enum dma_data_dire
 	return ret;
 }
 
+static void i915_gem_end_cpu_access(struct dma_buf *dma_buf, enum dma_data_direction direction)
+{
+	struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
+	struct drm_device *dev = obj->base.dev;
+	struct drm_i915_private *dev_priv = to_i915(dev);
+	bool was_interruptible;
+	int ret;
+
+	mutex_lock(&dev->struct_mutex);
+	was_interruptible = dev_priv->mm.interruptible;
+	dev_priv->mm.interruptible = false;
+
+	ret = i915_gem_object_set_to_gtt_domain(obj, false);
+
+	dev_priv->mm.interruptible = was_interruptible;
+	mutex_unlock(&dev->struct_mutex);
+
+	if (unlikely(ret))
+		DRM_ERROR("unable to flush buffer following CPU access; rendering may be corrupt\n");
+}
+
 static const struct dma_buf_ops i915_dmabuf_ops = {
 	.map_dma_buf = i915_gem_map_dma_buf,
 	.unmap_dma_buf = i915_gem_unmap_dma_buf,
@@ -224,6 +245,7 @@ static const struct dma_buf_ops i915_dmabuf_ops = {
 	.vmap = i915_gem_dmabuf_vmap,
 	.vunmap = i915_gem_dmabuf_vunmap,
 	.begin_cpu_access = i915_gem_begin_cpu_access,
+	.end_cpu_access = i915_gem_end_cpu_access,
 };
 
 struct dma_buf *i915_gem_prime_export(struct drm_device *dev,
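
For reference, below is a minimal userspace sketch of the CPU-access
bracketing that this callback serves, assuming the DMA_BUF_IOCTL_SYNC
uapi and struct dma_buf_sync flags proposed in patch 1/5 of this series.
cpu_fill, dmabuf_fd and len are hypothetical names for illustration, not
part of this patch.

/*
 * Sketch only: bracket plain CPU writes to an mmap'ed dma-buf with the
 * SYNC_START/SYNC_END ioctls; SYNC_END is what ends up calling
 * i915_gem_end_cpu_access on the exporter side.
 */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/dma-buf.h>

static int cpu_fill(int dmabuf_fd, size_t len)
{
	struct dma_buf_sync sync;
	void *ptr;

	ptr = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, dmabuf_fd, 0);
	if (ptr == MAP_FAILED)
		return -1;

	/* begin_cpu_access: make the buffer coherent for CPU access */
	memset(&sync, 0, sizeof(sync));
	sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW;
	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync))
		perror("DMA_BUF_SYNC_START");

	memset(ptr, 0xff, len);	/* plain CPU writes through the mapping */

	/* end_cpu_access: flush CPU caches back before GPU use */
	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync))
		perror("DMA_BUF_SYNC_END");

	munmap(ptr, len);
	return 0;
}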