From patchwork Thu Oct 30 18:41:09 2014
From: John.C.Harrison@Intel.com
To: Intel-GFX@Lists.FreeDesktop.Org
Date: Thu, 30 Oct 2014 18:41:09 +0000
Message-Id: <1414694481-15724-18-git-send-email-John.C.Harrison@Intel.com>
In-Reply-To: <1414694481-15724-1-git-send-email-John.C.Harrison@Intel.com>
References: <1414694481-15724-1-git-send-email-John.C.Harrison@Intel.com>
Subject: [Intel-gfx] [PATCH 17/29] drm/i915: Convert trace functions from seqno to request

From: John Harrison

All the code above is now using requests rather than seqnos, so it is
possible to convert the trace functions across as well. Note that, rather
than getting into problematic reference counting issues, the trace code
only saves the seqno and ring values from the request structure, not the
structure pointer itself.
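
To illustrate the point: each trace entry snapshots plain integer values
from the request at the moment the event fires, so the trace buffer never
holds a pointer to the request and never needs to take a reference on it.
A minimal standalone sketch of that pattern (illustrative only, not driver
code; all names below are invented for the example):

#include <stdint.h>
#include <stdio.h>

struct example_engine {
        uint32_t id;
};

struct example_request {
        struct example_engine *ring;
        uint32_t seqno;
        /* reference count, lists, etc. omitted */
};

/* What ends up in the trace buffer: plain values, no request pointer. */
struct example_trace_entry {
        uint32_t ring;
        uint32_t seqno;
};

static void example_trace_request(const struct example_request *req,
                                  struct example_trace_entry *entry)
{
        /* Copy the values out; the request may be freed afterwards. */
        entry->ring = req->ring->id;
        entry->seqno = req->seqno;
}

int main(void)
{
        struct example_engine rcs = { .id = 0 };
        struct example_request req = { .ring = &rcs, .seqno = 42 };
        struct example_trace_entry entry;

        example_trace_request(&req, &entry);
        printf("ring=%u seqno=%u\n", (unsigned)entry.ring, (unsigned)entry.seqno);
        return 0;
}

In the real tracepoints below, the same copy happens inside
TP_fast_assign() via i915_gem_request_get_ring() and
i915_gem_request_get_seqno().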
For: VIZ-4377
Signed-off-by: John Harrison
---
 drivers/gpu/drm/i915/i915_gem.c            | 10 +++---
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |  2 +-
 drivers/gpu/drm/i915/i915_trace.h          | 47 ++++++++++++++++------------
 3 files changed, 33 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index e4cb253..774ab9f 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1184,7 +1184,7 @@ static int __wait_request(struct drm_i915_gem_request *req,
 		return -ENODEV;
 
 	/* Record current time in case interrupted by signal, or wedged */
-	trace_i915_gem_request_wait_begin(i915_gem_request_get_ring(req), i915_gem_request_get_seqno(req));
+	trace_i915_gem_request_wait_begin(req);
 	before = ktime_get_raw_ns();
 	for (;;) {
 		struct timer_list timer;
@@ -1235,7 +1235,7 @@ static int __wait_request(struct drm_i915_gem_request *req,
 		}
 	}
 	now = ktime_get_raw_ns();
-	trace_i915_gem_request_wait_end(i915_gem_request_get_ring(req), i915_gem_request_get_seqno(req));
+	trace_i915_gem_request_wait_end(req);
 
 	if (!irq_test_in_progress)
 		ring->irq_put(ring);
@@ -2416,7 +2416,7 @@ int __i915_add_request(struct intel_engine_cs *ring,
 		spin_unlock(&file_priv->mm.lock);
 	}
 
-	trace_i915_gem_request_add(ring, request->seqno);
+	trace_i915_gem_request_add(request);
 	ring->outstanding_lazy_request = NULL;
 
 	if (!dev_priv->ums.mm_suspended) {
@@ -2691,7 +2691,7 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
 		if (!i915_seqno_passed(seqno, request->seqno))
 			break;
 
-		trace_i915_gem_request_retire(ring, request->seqno);
+		trace_i915_gem_request_retire(request);
 
 		/* This is one of the few common intersection points
 		 * between legacy ringbuffer submission and execlists:
@@ -2927,7 +2927,7 @@ i915_gem_object_sync(struct drm_i915_gem_object *obj,
 	if (ret)
 		return ret;
 
-	trace_i915_gem_ring_sync_to(from, to, seqno);
+	trace_i915_gem_ring_sync_to(from, to, obj->last_read_req);
 	ret = to->semaphore.sync_to(to, from, seqno);
 	if (!ret)
 		/* We use last_read_req because sync_to()
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 4212365..532ca0d 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1167,7 +1167,7 @@ i915_gem_ringbuffer_submission(struct drm_device *dev, struct drm_file *file,
 			return ret;
 	}
 
-	trace_i915_gem_ring_dispatch(ring, i915_gem_request_get_seqno(intel_ring_get_request(ring)), flags);
+	trace_i915_gem_ring_dispatch(intel_ring_get_request(ring), flags);
 
 	i915_gem_execbuffer_move_to_active(vmas, ring);
 	i915_gem_execbuffer_retire_commands(dev, file, ring, batch_obj);
diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
index f5aa006..66616f7 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -328,8 +328,8 @@ TRACE_EVENT(i915_gem_evict_vm,
 TRACE_EVENT(i915_gem_ring_sync_to,
 	    TP_PROTO(struct intel_engine_cs *from,
 		     struct intel_engine_cs *to,
-		     u32 seqno),
-	    TP_ARGS(from, to, seqno),
+		     struct drm_i915_gem_request *req),
+	    TP_ARGS(from, to, req),
 
 	    TP_STRUCT__entry(
 			     __field(u32, dev)
@@ -342,7 +342,7 @@ TRACE_EVENT(i915_gem_ring_sync_to,
 			   __entry->dev = from->dev->primary->index;
 			   __entry->sync_from = from->id;
 			   __entry->sync_to = to->id;
-			   __entry->seqno = seqno;
+			   __entry->seqno = i915_gem_request_get_seqno(req);
 			   ),
 
 	    TP_printk("dev=%u, sync-from=%u, sync-to=%u, seqno=%u",
@@ -352,8 +352,8 @@
 );
 
 TRACE_EVENT(i915_gem_ring_dispatch,
-	    TP_PROTO(struct intel_engine_cs *ring, u32 seqno, u32 flags),
-	    TP_ARGS(ring, seqno, flags),
+	    TP_PROTO(struct drm_i915_gem_request *req, u32 flags),
+	    TP_ARGS(req, flags),
 
 	    TP_STRUCT__entry(
 			     __field(u32, dev)
@@ -363,11 +363,13 @@ TRACE_EVENT(i915_gem_ring_dispatch,
 			     ),
 
 	    TP_fast_assign(
+			   struct intel_engine_cs *ring =
+						i915_gem_request_get_ring(req);
 			   __entry->dev = ring->dev->primary->index;
 			   __entry->ring = ring->id;
-			   __entry->seqno = seqno;
+			   __entry->seqno = i915_gem_request_get_seqno(req);
 			   __entry->flags = flags;
-			   i915_trace_irq_get(ring, seqno);
+			   i915_trace_irq_get(ring, __entry->seqno);
 			   ),
 
 	    TP_printk("dev=%u, ring=%u, seqno=%u, flags=%x",
@@ -398,8 +400,8 @@ TRACE_EVENT(i915_gem_ring_flush,
 );
 
 DECLARE_EVENT_CLASS(i915_gem_request,
-	    TP_PROTO(struct intel_engine_cs *ring, u32 seqno),
-	    TP_ARGS(ring, seqno),
+	    TP_PROTO(struct drm_i915_gem_request *req),
+	    TP_ARGS(req),
 
 	    TP_STRUCT__entry(
 			     __field(u32, dev)
@@ -408,9 +410,11 @@ DECLARE_EVENT_CLASS(i915_gem_request,
 			     ),
 
 	    TP_fast_assign(
+			   struct intel_engine_cs *ring =
+						i915_gem_request_get_ring(req);
 			   __entry->dev = ring->dev->primary->index;
 			   __entry->ring = ring->id;
-			   __entry->seqno = seqno;
+			   __entry->seqno = i915_gem_request_get_seqno(req);
 			   ),
 
 	    TP_printk("dev=%u, ring=%u, seqno=%u",
@@ -418,8 +422,8 @@ DECLARE_EVENT_CLASS(i915_gem_request,
 );
 
 DEFINE_EVENT(i915_gem_request, i915_gem_request_add,
-	    TP_PROTO(struct intel_engine_cs *ring, u32 seqno),
-	    TP_ARGS(ring, seqno)
+	    TP_PROTO(struct drm_i915_gem_request *req),
+	    TP_ARGS(req)
 );
 
 TRACE_EVENT(i915_gem_request_complete,
@@ -443,13 +447,13 @@ TRACE_EVENT(i915_gem_request_complete,
 );
 
 DEFINE_EVENT(i915_gem_request, i915_gem_request_retire,
-	    TP_PROTO(struct intel_engine_cs *ring, u32 seqno),
-	    TP_ARGS(ring, seqno)
+	    TP_PROTO(struct drm_i915_gem_request *req),
+	    TP_ARGS(req)
 );
 
 TRACE_EVENT(i915_gem_request_wait_begin,
-	    TP_PROTO(struct intel_engine_cs *ring, u32 seqno),
-	    TP_ARGS(ring, seqno),
+	    TP_PROTO(struct drm_i915_gem_request *req),
+	    TP_ARGS(req),
 
 	    TP_STRUCT__entry(
 			     __field(u32, dev)
@@ -465,10 +469,13 @@ TRACE_EVENT(i915_gem_request_wait_begin,
 	     * less desirable.
 	     */
 	    TP_fast_assign(
+			   struct intel_engine_cs *ring =
+						i915_gem_request_get_ring(req);
 			   __entry->dev = ring->dev->primary->index;
 			   __entry->ring = ring->id;
-			   __entry->seqno = seqno;
-			   __entry->blocking = mutex_is_locked(&ring->dev->struct_mutex);
+			   __entry->seqno = i915_gem_request_get_seqno(req);
+			   __entry->blocking =
+				     mutex_is_locked(&ring->dev->struct_mutex);
 			   ),
 
 	    TP_printk("dev=%u, ring=%u, seqno=%u, blocking=%s",
@@ -477,8 +484,8 @@ TRACE_EVENT(i915_gem_request_wait_begin,
 );
 
 DEFINE_EVENT(i915_gem_request, i915_gem_request_wait_end,
-	    TP_PROTO(struct intel_engine_cs *ring, u32 seqno),
-	    TP_ARGS(ring, seqno)
+	    TP_PROTO(struct drm_i915_gem_request *req),
+	    TP_ARGS(req)
 );
 
 DECLARE_EVENT_CLASS(i915_ring,
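
For reference when reading the trace changes in isolation: the
i915_gem_request_get_seqno() and i915_gem_request_get_ring() helpers used
above were introduced by earlier patches in this series. They are simple
NULL-tolerant accessors, roughly of the following shape (simplified sketch
for context only; see the earlier patches for the exact definitions):

static inline u32
i915_gem_request_get_seqno(struct drm_i915_gem_request *req)
{
	return req ? req->seqno : 0;
}

static inline struct intel_engine_cs *
i915_gem_request_get_ring(struct drm_i915_gem_request *req)
{
	return req ? req->ring : NULL;
}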