From patchwork Mon Jan 11 10:45:17 2016
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 8001691
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Mon, 11 Jan 2016 10:45:17 +0000
Message-Id: <1452509174-16671-47-git-send-email-chris@chris-wilson.co.uk>
In-Reply-To: <1452509174-16671-1-git-send-email-chris@chris-wilson.co.uk>
References: <1452503961-14837-1-git-send-email-chris@chris-wilson.co.uk> <1452509174-16671-1-git-send-email-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH 133/190] drm/i915: Convert known clflush paths over to clflush_cache_range()

A step towards removing redundant functions from the kernel: both drm and arch/x86 define a clflush(addr, range) operation. The difference is that drm_clflush_virt_range() provides a wbinvd() fallback for CPUs without clflush, whereas along these paths we only clflush when we know we can, so the fallback is unnecessary.
Signed-off-by: Chris Wilson
---
 drivers/gpu/drm/i915/i915_gem.c         | 13 +++++--------
 drivers/gpu/drm/i915/i915_gem_gtt.c     |  2 +-
 drivers/gpu/drm/i915/intel_ringbuffer.h |  4 ++--
 3 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index c3d43921bc98..d81821c6f9a1 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -614,8 +614,7 @@ shmem_pread_fast(struct page *page, int shmem_page_offset, int page_length,
 	vaddr = kmap_atomic(page);
 	if (needs_clflush)
-		drm_clflush_virt_range(vaddr + shmem_page_offset,
-				       page_length);
+		clflush_cache_range(vaddr + shmem_page_offset, page_length);
 	ret = __copy_to_user_inatomic(user_data,
 				      vaddr + shmem_page_offset,
 				      page_length);
@@ -639,9 +638,9 @@ shmem_clflush_swizzled_range(char *addr, unsigned long length,
 		start = round_down(start, 128);
 		end = round_up(end, 128);
 
-		drm_clflush_virt_range((void *)start, end - start);
+		clflush_cache_range((void *)start, end - start);
 	} else {
-		drm_clflush_virt_range(addr, length);
+		clflush_cache_range(addr, length);
 	}
 }
 
@@ -934,13 +933,11 @@ shmem_pwrite_fast(struct page *page, int shmem_page_offset, int page_length,
 	vaddr = kmap_atomic(page);
 	if (needs_clflush_before)
-		drm_clflush_virt_range(vaddr + shmem_page_offset,
-				       page_length);
+		clflush_cache_range(vaddr + shmem_page_offset, page_length);
 	ret = __copy_from_user_inatomic(vaddr + shmem_page_offset,
 					user_data, page_length);
 	if (needs_clflush_after)
-		drm_clflush_virt_range(vaddr + shmem_page_offset,
-				       page_length);
+		clflush_cache_range(vaddr + shmem_page_offset, page_length);
 	kunmap_atomic(vaddr);
 
 	return ret ? -EFAULT : 0;
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 0aadfaee2150..b8af904ad12c 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -357,7 +357,7 @@ static void kunmap_page_dma(struct drm_device *dev, void *vaddr)
 	 * And we are not sure about the latter so play safe for now.
 	 */
 	if (IS_CHERRYVIEW(dev) || IS_BROXTON(dev))
-		drm_clflush_virt_range(vaddr, PAGE_SIZE);
+		clflush_cache_range(vaddr, PAGE_SIZE);
 
 	kunmap_atomic(vaddr);
 }
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 894eb8089296..a66213b2450e 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -395,8 +395,8 @@ intel_engine_sync_index(struct intel_engine_cs *ring,
 static inline void
 intel_flush_status_page(struct intel_engine_cs *ring, int reg)
 {
-	drm_clflush_virt_range(&ring->status_page.page_addr[reg],
-			       sizeof(uint32_t));
+	clflush_cache_range(&ring->status_page.page_addr[reg],
+			    sizeof(uint32_t));
 }
 
 static inline u32