From patchwork Thu Aug 18 16:16:53 2016
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 9288465
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Thu, 18 Aug 2016 17:16:53 +0100
Message-Id: <20160818161718.27187-14-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20160818161718.27187-1-chris@chris-wilson.co.uk>
References: <20160818161718.27187-1-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [CI 14/39] drm/i915: Fallback to single page GTT mmappings for relocations
If we cannot pin the entire object into the mappable region of the GTT,
try to pin a single page instead. This is much more likely to succeed,
and prevents us falling back to the clflush slow path.

Signed-off-by: Chris Wilson
Reviewed-by: Joonas Lahtinen
---
 drivers/gpu/drm/i915/i915_gem_execbuffer.c | 62 ++++++++++++++++++++++++------
 1 file changed, 51 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 8d0df7d81d8b..c970aabfffa3 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -331,6 +331,7 @@ static void reloc_cache_init(struct reloc_cache *cache,
 	cache->vaddr = 0;
 	cache->i915 = i915;
 	cache->use_64bit_reloc = INTEL_GEN(cache->i915) >= 8;
+	cache->node.allocated = false;
 }
 
 static inline void *unmask_page(unsigned long p)
@@ -360,8 +361,19 @@ static void reloc_cache_fini(struct reloc_cache *cache)
 		kunmap_atomic(vaddr);
 		i915_gem_obj_finish_shmem_access((struct drm_i915_gem_object *)cache->node.mm);
 	} else {
+		wmb();
 		io_mapping_unmap_atomic((void __iomem *)vaddr);
-		i915_vma_unpin((struct i915_vma *)cache->node.mm);
+		if (cache->node.allocated) {
+			struct i915_ggtt *ggtt = &cache->i915->ggtt;
+
+			ggtt->base.clear_range(&ggtt->base,
+					       cache->node.start,
+					       cache->node.size,
+					       true);
+			drm_mm_remove_node(&cache->node);
+		} else {
+			i915_vma_unpin((struct i915_vma *)cache->node.mm);
+		}
 	}
 }
 
@@ -401,8 +413,19 @@ static void *reloc_iomap(struct drm_i915_gem_object *obj,
 			 struct reloc_cache *cache,
 			 int page)
 {
+	struct i915_ggtt *ggtt = &cache->i915->ggtt;
+	unsigned long offset;
 	void *vaddr;
 
+	if (cache->node.allocated) {
+		wmb();
+		ggtt->base.insert_page(&ggtt->base,
+				       i915_gem_object_get_dma_address(obj, page),
+				       cache->node.start, I915_CACHE_NONE, 0);
+		cache->page = page;
+		return unmask_page(cache->vaddr);
+	}
+
 	if (cache->vaddr) {
 		io_mapping_unmap_atomic(unmask_page(cache->vaddr));
 	} else {
@@ -418,21 +441,38 @@ static void *reloc_iomap(struct drm_i915_gem_object *obj,
 
 		vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0,
 					       PIN_MAPPABLE | PIN_NONBLOCK);
-		if (IS_ERR(vma))
-			return NULL;
+		if (IS_ERR(vma)) {
+			memset(&cache->node, 0, sizeof(cache->node));
+			ret = drm_mm_insert_node_in_range_generic
+				(&ggtt->base.mm, &cache->node,
+				 4096, 0, 0,
+				 0, ggtt->mappable_end,
+				 DRM_MM_SEARCH_DEFAULT,
+				 DRM_MM_CREATE_DEFAULT);
+			if (ret)
+				return ERR_PTR(ret);
+		} else {
+			ret = i915_gem_object_put_fence(obj);
+			if (ret) {
+				i915_vma_unpin(vma);
+				return ERR_PTR(ret);
+			}
 
-		ret = i915_gem_object_put_fence(obj);
-		if (ret) {
-			i915_vma_unpin(vma);
-			return ERR_PTR(ret);
+			cache->node.start = vma->node.start;
+			cache->node.mm = (void *)vma;
 		}
+	}
 
-		cache->node.start = vma->node.start;
-		cache->node.mm = (void *)vma;
+	offset = cache->node.start;
+	if (cache->node.allocated) {
+		ggtt->base.insert_page(&ggtt->base,
+				       i915_gem_object_get_dma_address(obj, page),
+				       offset, I915_CACHE_NONE, 0);
+	} else {
+		offset += page << PAGE_SHIFT;
 	}
 
-	vaddr = io_mapping_map_atomic_wc(cache->i915->ggtt.mappable,
-					 cache->node.start + (page << PAGE_SHIFT));
+	vaddr = io_mapping_map_atomic_wc(cache->i915->ggtt.mappable, offset);
 	cache->page = page;
 	cache->vaddr = (unsigned long)vaddr;