From patchwork Wed Apr 13 14:47:58 2016
X-Patchwork-Submitter: Chris Wilson <chris@chris-wilson.co.uk>
X-Patchwork-Id: 8823651
From: Chris Wilson <chris@chris-wilson.co.uk>
To: intel-gfx@lists.freedesktop.org
Date: Wed, 13 Apr 2016 15:47:58 +0100
Message-Id: <1460558878-14613-1-git-send-email-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.8.0.rc3
In-Reply-To: <20160413124451.GF15577@nuc-i3427.alporthouse.com>
References: <20160413124451.GF15577@nuc-i3427.alporthouse.com>
Subject: [Intel-gfx] [PATCH] drm/i915: Move ioremap_wc tracking onto VMA
List-Id: Intel graphics driver community testing & development
By tracking the iomapping on the VMA itself, we can share that area
between multiple users. Also, by only revoking the iomapping upon
unbinding from the mappable portion of the GGTT, we can keep that iomap
across multiple invocations (e.g. execlists context pinning).

Note that by moving the iounmap tracking to the VMA, we actually end up
fixing a leak of the iomapping in intel_fbdev.

v1.5: Rebase prompted by Tvrtko
v2: Drop dev_priv parameter, we can recover the i915_ggtt from the vma.
v3: Move handling of ioremap space exhaustion to vmap_purge and also
allow vmallocs to recover old iomaps. Add Tvrtko's kerneldoc.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin
---
 drivers/gpu/drm/i915/i915_gem.c          |  2 ++
 drivers/gpu/drm/i915/i915_gem_gtt.c      | 25 +++++++++++++++++++++
 drivers/gpu/drm/i915/i915_gem_gtt.h      | 38 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_gem_shrinker.c | 26 +++++++++++++++++-----
 drivers/gpu/drm/i915/intel_fbdev.c       | 22 +++++++++---------
 5 files changed, 97 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index b37ffea8b458..6a485630595e 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -3393,6 +3393,8 @@ static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
 		ret = i915_gem_object_put_fence(obj);
 		if (ret)
 			return ret;
+
+		i915_vma_iounmap(vma);
 	}
 
 	trace_i915_vma_unbind(vma);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index c5cb04907525..53e55aead512 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -3626,3 +3626,28 @@ i915_ggtt_view_size(struct drm_i915_gem_object *obj,
 		return obj->base.size;
 	}
 }
+
+void *i915_vma_iomap(struct i915_vma *vma)
+{
+	if (WARN_ON(!vma->obj->map_and_fenceable))
+		return ERR_PTR(-ENODEV);
+
+	BUG_ON(!vma->is_ggtt);
+	BUG_ON((vma->bound & GLOBAL_BIND) == 0);
+
+	if (vma->iomap == NULL) {
+		struct i915_ggtt *ggtt =
+			container_of(vma->vm, struct i915_ggtt, base);
+		void *ptr;
+
+		ptr = io_mapping_map_wc(ggtt->mappable,
+					vma->node.start,
+					vma->node.size);
+		if (ptr == NULL)
+			return ERR_PTR(-ENOMEM);
+
+		vma->iomap = ptr;
+	}
+
+	return vma->iomap;
+}
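
[Aside for reviewers, not part of the patch: a minimal sketch of the
intended calling pattern, assuming a hypothetical caller with the VMA
already bound into the mappable GGTT and struct_mutex held. The iomap
is created on first use, cached on the VMA, and torn down for us by
i915_vma_iounmap() on unbind, so there is no explicit unmap on this
path.]

    /* Hypothetical caller, for illustration only: writes into a GGTT
     * VMA through the aperture using the cached mapping. */
    static int example_write_through_aperture(struct i915_vma *vma,
                                              const void *src, size_t len)
    {
            void *vaddr;

            /* Caller holds struct_mutex; vma is pinned in mappable GGTT */
            vaddr = i915_vma_iomap(vma);
            if (IS_ERR(vaddr))
                    return PTR_ERR(vaddr);

            /* Write-combined write through the GTT aperture */
            memcpy_toio((void __iomem *)vaddr, src, len);
            return 0;
    }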
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index d7dd3d8a8758..d95190ddf2d6 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -34,6 +34,8 @@
 #ifndef __I915_GEM_GTT_H__
 #define __I915_GEM_GTT_H__
 
+#include <linux/io-mapping.h>
+
 struct drm_i915_file_private;
 
 typedef uint32_t gen6_pte_t;
@@ -175,6 +177,7 @@ struct i915_vma {
 	struct drm_mm_node node;
 	struct drm_i915_gem_object *obj;
 	struct i915_address_space *vm;
+	void *iomap;
 
 	/** Flags and address space this VMA is bound to */
 #define GLOBAL_BIND	(1<<0)
@@ -559,4 +562,39 @@ size_t
 i915_ggtt_view_size(struct drm_i915_gem_object *obj,
 		    const struct i915_ggtt_view *view);
 
+/**
+ * i915_vma_iomap - calls ioremap_wc to map the GGTT VMA via the aperture
+ * @vma: VMA to iomap
+ *
+ * The passed-in VMA has to be pinned in the global GTT mappable region.
+ * In other words, callers are responsible for managing the VMA's pinned
+ * lifetime and ensuring it covers the use of the returned mapping.
+ *
+ * Callers must hold the struct_mutex.
+ *
+ * Returns a valid iomapped pointer or ERR_PTR.
+ */
+void *i915_vma_iomap(struct i915_vma *vma);
+
+/**
+ * i915_vma_iounmap - unmaps the mapping returned from i915_vma_iomap
+ * @vma: VMA to unmap
+ *
+ * Unmaps the previously iomapped VMA using io_mapping_unmap.
+ *
+ * Users of i915_vma_iomap should not manually unmap by calling this
+ * function if they want to take advantage of the mapping getting cached
+ * in the VMA.
+ *
+ * Callers must hold the struct_mutex.
+ */
+static inline void i915_vma_iounmap(struct i915_vma *vma)
+{
+	if (vma->iomap == NULL)
+		return;
+
+	io_mapping_unmap(vma->iomap);
+	vma->iomap = NULL;
+}
+
 #endif
diff --git a/drivers/gpu/drm/i915/i915_gem_shrinker.c b/drivers/gpu/drm/i915/i915_gem_shrinker.c
index d46388f25e04..908c083a39f1 100644
--- a/drivers/gpu/drm/i915/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
@@ -387,17 +387,31 @@ i915_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr)
 	struct drm_i915_private *dev_priv =
 		container_of(nb, struct drm_i915_private, mm.vmap_notifier);
 	struct shrinker_lock_uninterruptible slu;
-	unsigned long freed_pages;
+	struct i915_vma *vma, *next;
+	unsigned long freed_pages = 0;
+	int ret;
 
 	if (!i915_gem_shrinker_lock_uninterruptible(dev_priv, &slu, 5000))
 		return NOTIFY_DONE;
 
-	freed_pages = i915_gem_shrink(dev_priv, -1UL,
-				      I915_SHRINK_BOUND |
-				      I915_SHRINK_UNBOUND |
-				      I915_SHRINK_ACTIVE |
-				      I915_SHRINK_VMAPS);
+	/* Force everything onto the inactive lists */
+	ret = i915_gpu_idle(dev_priv->dev);
+	if (ret)
+		goto out;
+
+	freed_pages += i915_gem_shrink(dev_priv, -1UL,
+				       I915_SHRINK_BOUND |
+				       I915_SHRINK_UNBOUND |
+				       I915_SHRINK_ACTIVE |
+				       I915_SHRINK_VMAPS);
+
+	/* We also want to clear any cached iomaps as they wrap vmap */
+	list_for_each_entry_safe(vma, next,
+				 &dev_priv->ggtt.base.inactive_list, vm_link)
+		if (vma->iomap && i915_vma_unbind(vma) == 0)
+			freed_pages += vma->node.size >> PAGE_SHIFT;
+
+out:
 	i915_gem_shrinker_unlock_uninterruptible(dev_priv, &slu);
 
 	*(unsigned long *)ptr += freed_pages;
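
[Aside, not part of the patch: the cached iomaps are purged here because
io_mapping_map_wc() draws from the same vmalloc/ioremap address space
that the vmap purge notifier is trying to replenish; unbinding an
inactive VMA releases its mapping via i915_vma_iounmap() in
__i915_vma_unbind(). For reference, a sketch of how this callback is
hooked up, modelled on the existing shrinker init in this file:]

    /* Sketch: registering the vmap-space pressure callback (assumes
     * the existing mm.vmap_notifier member and the vmalloc purge
     * notifier API, register_vmap_purge_notifier()). */
    dev_priv->mm.vmap_notifier.notifier_call = i915_gem_shrinker_vmap;
    WARN_ON(register_vmap_purge_notifier(&dev_priv->mm.vmap_notifier));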
diff --git a/drivers/gpu/drm/i915/intel_fbdev.c b/drivers/gpu/drm/i915/intel_fbdev.c
index 79ac202f3870..3f3c97a30418 100644
--- a/drivers/gpu/drm/i915/intel_fbdev.c
+++ b/drivers/gpu/drm/i915/intel_fbdev.c
@@ -186,9 +186,11 @@ static int intelfb_create(struct drm_fb_helper *helper,
 	struct i915_ggtt *ggtt = &dev_priv->ggtt;
 	struct fb_info *info;
 	struct drm_framebuffer *fb;
+	struct i915_vma *vma;
 	struct drm_i915_gem_object *obj;
-	int size, ret;
 	bool prealloc = false;
+	void *vaddr;
+	int ret;
 
 	if (intel_fb &&
 	    (sizes->fb_width > intel_fb->base.width ||
@@ -214,7 +216,6 @@ static int intelfb_create(struct drm_fb_helper *helper,
 	}
 
 	obj = intel_fb->obj;
-	size = obj->base.size;
 
 	mutex_lock(&dev->struct_mutex);
 
@@ -244,22 +245,23 @@ static int intelfb_create(struct drm_fb_helper *helper,
 	info->flags = FBINFO_DEFAULT | FBINFO_CAN_FORCE_OUTPUT;
 	info->fbops = &intelfb_ops;
 
+	vma = i915_gem_obj_to_ggtt(obj);
+
 	/* setup aperture base/size for vesafb takeover */
 	info->apertures->ranges[0].base = dev->mode_config.fb_base;
 	info->apertures->ranges[0].size = ggtt->mappable_end;
 
-	info->fix.smem_start = dev->mode_config.fb_base + i915_gem_obj_ggtt_offset(obj);
-	info->fix.smem_len = size;
+	info->fix.smem_start = dev->mode_config.fb_base + vma->node.start;
+	info->fix.smem_len = vma->node.size;
 
-	info->screen_base =
-		ioremap_wc(ggtt->mappable_base + i915_gem_obj_ggtt_offset(obj),
-			   size);
-	if (!info->screen_base) {
+	vaddr = i915_vma_iomap(vma);
+	if (IS_ERR(vaddr)) {
 		DRM_ERROR("Failed to remap framebuffer into virtual memory\n");
-		ret = -ENOSPC;
+		ret = PTR_ERR(vaddr);
 		goto out_destroy_fbi;
 	}
-	info->screen_size = size;
+	info->screen_base = vaddr;
+	info->screen_size = vma->node.size;
 
 	/* This driver doesn't need a VT switch to restore the mode on resume */
 	info->skip_vt_switch = true;
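
[Aside, not part of the patch: this hunk is where the pre-existing leak
lived -- the old code ioremap_wc()'d the framebuffer and stored the
result only in info->screen_base, with no matching iounmap on teardown.
With the mapping cached on the VMA, repeated lookups are cheap and
cleanup is automatic; a sketch of the sharing property, using only the
functions introduced above:]

    /* Sketch: a second i915_vma_iomap() call on the same bound VMA
     * returns the cached pointer instead of creating a new mapping;
     * the mapping is only released when the VMA is unbound. */
    void *a = i915_vma_iomap(vma);
    void *b = i915_vma_iomap(vma);	/* returns cached vma->iomap */
    WARN_ON(!IS_ERR(a) && a != b);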