From patchwork Fri Jun 10 08:53:01 2016
X-Patchwork-Submitter: ankitprasad.r.sharma@intel.com
X-Patchwork-Id: 9169107
From: ankitprasad.r.sharma@intel.com
To: intel-gfx@lists.freedesktop.org
Date: Fri, 10 Jun 2016 14:23:01 +0530
Message-Id: <1465548783-19712-3-git-send-email-ankitprasad.r.sharma@intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1465548783-19712-1-git-send-email-ankitprasad.r.sharma@intel.com>
References: <1465548783-19712-1-git-send-email-ankitprasad.r.sharma@intel.com>
Cc: akash.goel@intel.com, Ankitprasad Sharma
Subject: [Intel-gfx] [PATCH 3/5] drm/i915: Use insert_page for pwrite_fast

From: Ankitprasad Sharma

In pwrite_fast, map an object page by page if obj_ggtt_pin fails. First
we try a nonblocking pin for the whole object (since that is fastest if
the mapping is reused), and failing that we grab a single page in the
mappable aperture and map the object through it one page at a time. This
also allows us to handle objects larger than the mappable aperture (e.g.
if we need to pwrite with vGPU restricting the aperture to a measly 8MiB
or so).
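For reference, the per-page offset/length arithmetic the new copy loop
performs can be shown as a standalone user-space sketch (the buffer
names and the fixed 4096-byte PAGE_SIZE here are illustrative only, not
the kernel code itself):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Copy `size` bytes into `dst` at byte `offset`, one page-sized chunk
 * at a time -- the same chunking the patched pwrite loop does between
 * GGTT rebinds of its single aperture page. */
static void copy_chunked(char *dst, const char *src,
			 uint64_t offset, uint64_t size)
{
	uint64_t remain = size;

	while (remain) {
		unsigned page_offset = offset & (PAGE_SIZE - 1); /* offset_in_page() */
		unsigned page_length = PAGE_SIZE - page_offset;

		if (remain < page_length)
			page_length = remain;

		/* In the kernel loop, node.allocated == true means an
		 * insert_page() call rebinds the aperture page here. */
		memcpy(dst + offset, src, page_length);

		src += page_length;
		offset += page_length;
		remain -= page_length;
	}
}

int main(void)
{
	static char dst[3 * PAGE_SIZE];
	char src[6000];

	memset(src, 0xab, sizeof(src));
	copy_chunked(dst, src, 100, sizeof(src)); /* spans two pages */
	printf("copied %zu bytes at offset 100\n", sizeof(src));
	return 0;
}

The point of rebinding a single page rather than pinning the whole
object is that aperture pressure stays bounded at one page, whatever
the object size.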
v2: Pin pages before starting pwrite, Combined duplicate loops (Chris)
v3: Combined loops based on local patch by Chris (Chris)
v4: Added i915 wrapper function for drm_mm_insert_node_in_range (Chris)
v5: Renamed wrapper function for drm_mm_insert_node_in_range (Chris)
v5: Added wrapper for drm_mm_remove_node() (Chris)
v6: Added get_pages call before pinning the pages (Tvrtko)
    Added remove_mappable_node() wrapper for drm_mm_remove_node() (Chris)
v7: Added size argument for insert_mappable_node (Tvrtko)
v8: Do not put_pages after pwrite, do memset of node in the wrapper
    function (insert_mappable_node) (Chris)
v9: Rebase (Ankit)

Signed-off-by: Ankitprasad Sharma
Signed-off-by: Chris Wilson
Reviewed-by: Tvrtko Ursulin
---
 drivers/gpu/drm/i915/i915_gem.c | 90 +++++++++++++++++++++++++++++++----------
 1 file changed, 68 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index eae8d7a..452178c 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -60,6 +60,24 @@ static bool cpu_write_needs_clflush(struct drm_i915_gem_object *obj)
 	return obj->pin_display;
 }
 
+static int
+insert_mappable_node(struct drm_i915_private *i915,
+		     struct drm_mm_node *node, u32 size)
+{
+	memset(node, 0, sizeof(*node));
+	return drm_mm_insert_node_in_range_generic(&i915->ggtt.base.mm, node,
+						   size, 0, 0, 0,
+						   i915->ggtt.mappable_end,
+						   DRM_MM_SEARCH_DEFAULT,
+						   DRM_MM_CREATE_DEFAULT);
+}
+
+static void
+remove_mappable_node(struct drm_mm_node *node)
+{
+	drm_mm_remove_node(node);
+}
+
 /* some bookkeeping */
 static void i915_gem_info_add_obj(struct drm_i915_private *dev_priv,
 				  size_t size)
@@ -765,21 +783,34 @@ fast_user_write(struct io_mapping *mapping,
  * @file: drm file pointer
  */
 static int
-i915_gem_gtt_pwrite_fast(struct drm_device *dev,
+i915_gem_gtt_pwrite_fast(struct drm_i915_private *i915,
			 struct drm_i915_gem_object *obj,
			 struct drm_i915_gem_pwrite *args,
			 struct drm_file *file)
 {
-	struct drm_i915_private *dev_priv = to_i915(dev);
-	struct i915_ggtt *ggtt = &dev_priv->ggtt;
-	ssize_t remain;
-	loff_t offset, page_base;
+	struct i915_ggtt *ggtt = &i915->ggtt;
+	struct drm_mm_node node;
+	uint64_t remain, offset;
 	char __user *user_data;
-	int page_offset, page_length, ret;
+	int ret;
 
 	ret = i915_gem_obj_ggtt_pin(obj, 0, PIN_MAPPABLE | PIN_NONBLOCK);
-	if (ret)
-		goto out;
+	if (ret) {
+		ret = insert_mappable_node(i915, &node, PAGE_SIZE);
+		if (ret)
+			goto out;
+
+		ret = i915_gem_object_get_pages(obj);
+		if (ret) {
+			remove_mappable_node(&node);
+			goto out;
+		}
+
+		i915_gem_object_pin_pages(obj);
+	} else {
+		node.start = i915_gem_obj_ggtt_offset(obj);
+		node.allocated = false;
+	}
 
 	ret = i915_gem_object_set_to_gtt_domain(obj, true);
 	if (ret)
@@ -789,26 +820,32 @@ i915_gem_gtt_pwrite_fast(struct drm_device *dev,
 	if (ret)
 		goto out_unpin;
 
-	user_data = u64_to_user_ptr(args->data_ptr);
-	remain = args->size;
-
-	offset = i915_gem_obj_ggtt_offset(obj) + args->offset;
-
 	intel_fb_obj_invalidate(obj, ORIGIN_GTT);
+	obj->dirty = true;
 
-	while (remain > 0) {
+	user_data = u64_to_user_ptr(args->data_ptr);
+	offset = args->offset;
+	remain = args->size;
+	while (remain) {
 		/* Operation in this page
 		 *
 		 * page_base = page offset within aperture
 		 * page_offset = offset within page
 		 * page_length = bytes to copy for this page
 		 */
-		page_base = offset & PAGE_MASK;
-		page_offset = offset_in_page(offset);
-		page_length = remain;
-		if ((page_offset + remain) > PAGE_SIZE)
-			page_length = PAGE_SIZE - page_offset;
-
+		u32 page_base = node.start;
+		unsigned page_offset = offset_in_page(offset);
+		unsigned page_length = PAGE_SIZE - page_offset;
+		page_length = remain < page_length ? remain : page_length;
+		if (node.allocated) {
+			wmb(); /* flush the write before we modify the GGTT */
+			ggtt->base.insert_page(&ggtt->base,
+					       i915_gem_object_get_dma_address(obj, offset >> PAGE_SHIFT),
+					       node.start, I915_CACHE_NONE, 0);
+			wmb(); /* flush modifications to the GGTT (insert_page) */
+		} else {
+			page_base += offset & PAGE_MASK;
+		}
 		/* If we get a fault while copying data, then (presumably) our
 		 * source page isn't available. Return the error and we'll
 		 * retry in the slow path.
@@ -827,7 +864,16 @@ i915_gem_gtt_pwrite_fast(struct drm_device *dev,
 out_flush:
 	intel_fb_obj_flush(obj, false, ORIGIN_GTT);
 out_unpin:
-	i915_gem_object_ggtt_unpin(obj);
+	if (node.allocated) {
+		wmb();
+		ggtt->base.clear_range(&ggtt->base,
+				       node.start, node.size,
+				       true);
+		i915_gem_object_unpin_pages(obj);
+		remove_mappable_node(&node);
+	} else {
+		i915_gem_object_ggtt_unpin(obj);
+	}
 out:
 	return ret;
 }
@@ -1095,7 +1141,7 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
 	if (obj->tiling_mode == I915_TILING_NONE &&
 	    obj->base.write_domain != I915_GEM_DOMAIN_CPU &&
 	    cpu_write_needs_clflush(obj)) {
-		ret = i915_gem_gtt_pwrite_fast(dev, obj, args, file);
+		ret = i915_gem_gtt_pwrite_fast(dev_priv, obj, args, file);
 		/* Note that the gtt paths might fail with non-page-backed user
 		 * pointers (e.g. gtt mappings when moving data between
 		 * textures). Fallback to the shmem path in that case. */
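For anyone who wants to exercise this path from user space: an untiled
object written through the pwrite ioctl should take
i915_gem_gtt_pwrite_fast() even when the whole object cannot be pinned
in the mappable aperture. A rough sketch, assuming an i915 node at
/dev/dri/card0 and the drm uapi headers on the include path (cleanup
and most error handling trimmed):

#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <drm/i915_drm.h>

int main(void)
{
	int fd = open("/dev/dri/card0", O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Create an untiled 2 MiB GEM object. */
	struct drm_i915_gem_create create = { .size = 2 << 20 };
	if (ioctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create)) {
		perror("GEM_CREATE");
		return 1;
	}

	char data[8192];
	memset(data, 0x5a, sizeof(data));

	/* Untiled objects that don't need a clflush route through
	 * i915_gem_gtt_pwrite_fast() in i915_gem_pwrite_ioctl(). */
	struct drm_i915_gem_pwrite pwrite = {
		.handle = create.handle,
		.offset = 0,
		.size = sizeof(data),
		.data_ptr = (uintptr_t)data,
	};
	if (ioctl(fd, DRM_IOCTL_I915_GEM_PWRITE, &pwrite))
		perror("GEM_PWRITE");

	close(fd);
	return 0;
}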