From patchwork Mon Jun 22 09:55:06 2015
From: sourab.gupta@intel.com
To: intel-gfx@lists.freedesktop.org
Date: Mon, 22 Jun 2015 15:25:06 +0530
Message-Id: <1434966909-4113-5-git-send-email-sourab.gupta@intel.com>
In-Reply-To: <1434966909-4113-1-git-send-email-sourab.gupta@intel.com>
References: <1434966909-4113-1-git-send-email-sourab.gupta@intel.com>
Cc: Insoo Woo, Peter Zijlstra, Jabin Wu, Sourab Gupta
Subject: [Intel-gfx] [RFC 4/7] drm/i915: Add mechanism for forwarding the data samples to userspace through Gen PMU perf interface

From: Sourab Gupta

This patch adds the mechanism for forwarding data snapshots to userspace
through the Gen PMU perf event interface. In this particular case, the
timestamp data nodes introduced earlier are the type being forwarded
through the interface. The samples are forwarded from a workqueue, which
is scheduled when the hrtimer fires; the workqueue handler forwards each
collected data node as a separate perf sample.
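[Not part of the original mail: the flow described above — timer fires, work is scheduled, each collected node becomes one perf sample — can be sketched in plain userspace C. All names here (`ts_node`, `drain_queue`, `emit_sample`) are illustrative stand-ins, not the driver's actual symbols.]

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for one timestamp node in the Gen PMU buffer. */
struct ts_node {
	unsigned long long timestamp;
};

static int emitted; /* counts emissions, like perf_event_overflow() calls */

static void emit_sample(struct ts_node *node)
{
	(void)node;	/* a real handler would package node as a raw sample */
	emitted++;
}

/* Forward every queued node as a separate "sample" by invoking the
 * callback once per node, mirroring how the workqueue handler calls
 * forward_one_gen_pmu_sample() in a loop. Returns the number emitted. */
static int drain_queue(struct ts_node *nodes, int count,
		       void (*emit)(struct ts_node *))
{
	int i;

	for (i = 0; i < count; i++)
		emit(&nodes[i]);
	return count;
}
```

Each node yields exactly one sample; batching several nodes into one sample would lose the per-snapshot timestamps.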
Signed-off-by: Sourab Gupta
---
 drivers/gpu/drm/i915/i915_drv.h     |   1 +
 drivers/gpu/drm/i915/i915_oa_perf.c | 125 +++++++++++++++++++++++++++++++++++-
 2 files changed, 124 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index b6a897a..25c0938 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2021,6 +2021,7 @@ struct drm_i915_private {
 			u32 head;
 			u32 tail;
 		} buffer;
+		struct work_struct work_timer;
 	} gen_pmu;

 	struct list_head profile_cmd;
diff --git a/drivers/gpu/drm/i915/i915_oa_perf.c b/drivers/gpu/drm/i915/i915_oa_perf.c
index e2042b6..e3e867f 100644
--- a/drivers/gpu/drm/i915/i915_oa_perf.c
+++ b/drivers/gpu/drm/i915/i915_oa_perf.c
@@ -224,11 +224,121 @@ void forward_oa_async_snapshots_work(struct work_struct *__work)
 	mutex_unlock(&dev_priv->dev->struct_mutex);
 }

+static void init_gen_pmu_buf_queue(struct drm_i915_private *dev_priv)
+{
+	struct drm_i915_ts_queue_header *hdr =
+		(struct drm_i915_ts_queue_header *)
+		dev_priv->gen_pmu.buffer.addr;
+	void *data_ptr;
+
+	hdr->size_in_bytes = dev_priv->gen_pmu.buffer.obj->base.size;
+	/* 8 byte alignment for node address */
+	data_ptr = PTR_ALIGN((void *)(hdr + 1), 8);
+	hdr->data_offset = (__u64)(data_ptr - (void *)hdr);
+
+	hdr->node_count = 0;
+	hdr->wrap_count = 0;
+}
+
+static void forward_one_gen_pmu_sample(struct drm_i915_private *dev_priv,
+				       struct drm_i915_ts_node *node)
+{
+	struct perf_sample_data data;
+	struct perf_event *event = dev_priv->gen_pmu.exclusive_event;
+	int snapshot_size = sizeof(struct drm_i915_ts_usernode);
+	struct perf_raw_record raw;
+
+	perf_sample_data_init(&data, 0, event->hw.last_period);
+
+	/* Note: the combined u32 raw->size member + raw data itself must be 8
+	 * byte aligned. */
+	raw.size = snapshot_size + 4;
+	raw.data = node;
+
+	data.raw = &raw;
+
+	perf_event_overflow(event, &data, &dev_priv->gen_pmu.dummy_regs);
+}
+
+void i915_gen_pmu_wait_gpu(struct drm_i915_private *dev_priv)
+{
+	struct drm_i915_ts_queue_header *hdr =
+		(struct drm_i915_ts_queue_header *)
+		dev_priv->gen_pmu.buffer.addr;
+	struct drm_i915_ts_node *first_node, *node;
+	int head, tail, num_nodes, ret;
+	struct drm_i915_gem_request *req;
+
+	first_node = (struct drm_i915_ts_node *)
+			((char *)hdr + hdr->data_offset);
+	num_nodes = (hdr->size_in_bytes - hdr->data_offset) /
+			sizeof(*node);
+
+	tail = hdr->node_count;
+	head = dev_priv->gen_pmu.buffer.head;
+
+	/* wait for all requests to complete */
+	while ((head % num_nodes) != (tail % num_nodes)) {
+		node = &first_node[head % num_nodes];
+		req = node->node_info.req;
+		if (req) {
+			if (!i915_gem_request_completed(req, true)) {
+				ret = i915_wait_request(req);
+				if (ret)
+					DRM_DEBUG_DRIVER(
+						"gen pmu: failed to wait\n");
+			}
+			i915_gem_request_assign(&node->node_info.req, NULL);
+		}
+		head++;
+	}
+}
+
+void forward_gen_pmu_snapshots_work(struct work_struct *__work)
+{
+	struct drm_i915_private *dev_priv =
+		container_of(__work, typeof(*dev_priv),
+			     gen_pmu.work_timer);
+	struct drm_i915_ts_queue_header *hdr =
+		(struct drm_i915_ts_queue_header *)
+		dev_priv->gen_pmu.buffer.addr;
+	struct drm_i915_ts_node *first_node, *node;
+	int head, tail, num_nodes, ret;
+	struct drm_i915_gem_request *req;
+
+	first_node = (struct drm_i915_ts_node *)
+			((char *)hdr + hdr->data_offset);
+	num_nodes = (hdr->size_in_bytes - hdr->data_offset) /
+			sizeof(*node);
+
+	ret = i915_mutex_lock_interruptible(dev_priv->dev);
+	if (ret)
+		return;
+
+	tail = hdr->node_count;
+	head = dev_priv->gen_pmu.buffer.head;
+
+	while ((head % num_nodes) != (tail % num_nodes)) {
+		node = &first_node[head % num_nodes];
+		req = node->node_info.req;
+		if (req && i915_gem_request_completed(req, true)) {
+			forward_one_gen_pmu_sample(dev_priv, node);
+			i915_gem_request_assign(&node->node_info.req, NULL);
+			head++;
+		} else
+			break;
+	}
+
+	dev_priv->gen_pmu.buffer.tail = tail;
+	dev_priv->gen_pmu.buffer.head = head;
+
+	mutex_unlock(&dev_priv->dev->struct_mutex);
+}
+
 static void gen_pmu_flush_snapshots(struct drm_i915_private *dev_priv)
 {
 	WARN_ON(!dev_priv->gen_pmu.buffer.addr);

-	/* TODO: routine for forwarding snapshots to userspace */
+	schedule_work(&dev_priv->gen_pmu.work_timer);
 }

 static void forward_one_oa_snapshot_to_event(struct drm_i915_private *dev_priv,
@@ -652,6 +762,7 @@ static int init_gen_pmu_buffer(struct perf_event *event)

 	dev_priv->gen_pmu.buffer.obj = bo;
 	dev_priv->gen_pmu.buffer.addr = vmap_oa_buffer(bo);
+	init_gen_pmu_buf_queue(dev_priv);

 	DRM_DEBUG_DRIVER("Gen PMU Buffer initialized, vaddr = %p",
 			 dev_priv->gen_pmu.buffer.addr);
@@ -1327,6 +1438,13 @@ static void i915_gen_event_flush(struct perf_event *event)
 {
 	struct drm_i915_private *i915 =
 		container_of(event->pmu, typeof(*i915), gen_pmu.pmu);
+	int ret;
+
+	ret = i915_mutex_lock_interruptible(i915->dev);
+	if (ret)
+		return;
+	i915_gen_pmu_wait_gpu(i915);
+	mutex_unlock(&i915->dev->struct_mutex);

 	gen_pmu_flush_snapshots(i915);
 }
@@ -1476,6 +1594,7 @@ void i915_gen_pmu_register(struct drm_device *dev)

 	hrtimer_init(&i915->gen_pmu.timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
 	i915->gen_pmu.timer.function = hrtimer_sample_gen;
+	INIT_WORK(&i915->gen_pmu.work_timer, forward_gen_pmu_snapshots_work);

 	spin_lock_init(&i915->gen_pmu.lock);

 	i915->gen_pmu.pmu.capabilities = PERF_PMU_CAP_IS_DEVICE;
@@ -1505,6 +1624,8 @@ void i915_gen_pmu_unregister(struct drm_device *dev)
 	if (i915->gen_pmu.pmu.event_init == NULL)
 		return;

+	cancel_work_sync(&i915->gen_pmu.work_timer);
+
 	perf_pmu_unregister(&i915->gen_pmu.pmu);
 	i915->gen_pmu.pmu.event_init = NULL;
 }
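[Not part of the original mail: init_gen_pmu_buf_queue() aligns the first node to an 8-byte boundary with PTR_ALIGN, and forward_one_gen_pmu_sample() sizes the raw record as the payload plus perf's 4-byte u32 size field so the combined record can satisfy the 8-byte alignment rule noted in the patch. A minimal userspace sketch of that arithmetic; `align_up8` is a stand-in for the kernel's PTR_ALIGN macro.]

```c
#include <assert.h>
#include <stdint.h>

/* Round x up to the next multiple of 8, as PTR_ALIGN(ptr, 8) does for the
 * first node address placed after the queue header. */
static uintptr_t align_up8(uintptr_t x)
{
	return (x + 7) & ~(uintptr_t)7;
}

/* Total size of one raw perf record: the u32 size field (4 bytes) plus
 * the snapshot payload, matching raw.size = snapshot_size + 4 above. */
static uint32_t raw_record_size(uint32_t snapshot_size)
{
	return snapshot_size + 4;
}
```

With a 60-byte snapshot the combined record is 64 bytes, a multiple of 8, which is why the payload size is chosen so that size field plus data stay 8-byte aligned.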
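[Not part of the original mail: forward_gen_pmu_snapshots_work() walks the ring from head toward tail modulo the node count and stops at the first node whose request has not completed, so samples are always forwarded in submission order. A self-contained sketch of that traversal, with a `completed` flag array standing in for i915_gem_request_completed().]

```c
#include <assert.h>

/* Advance head toward tail over a ring of num_nodes entries, forwarding
 * only nodes whose request has completed; stop at the first pending one,
 * as the workqueue handler does. Head and tail are free-running counters
 * reduced modulo num_nodes on each access. Returns the new head. */
static int drain_completed(const int *completed, int num_nodes,
			   int head, int tail)
{
	while ((head % num_nodes) != (tail % num_nodes)) {
		if (!completed[head % num_nodes])
			break;	/* in-order: never skip a pending node */
		/* forward_one_gen_pmu_sample() would run here */
		head++;
	}
	return head;
}
```

Stopping at the first pending node, rather than skipping it, keeps the forwarded stream ordered and lets the next work invocation resume from the same head.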
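[Not part of the original mail: i915_gen_event_flush() differs from the periodic hrtimer path in that it first blocks in i915_gen_pmu_wait_gpu() until every outstanding request is done and only then schedules the forwarding work, so a flush drains the whole ring rather than just the completed prefix. A toy model of that contract, with completion flags standing in for GPU requests.]

```c
#include <assert.h>

/* Model of the flush path: first "wait" by marking every request done
 * (i915_gen_pmu_wait_gpu() blocks until this is true), then drain; the
 * returned head has caught up with tail, i.e. the ring is empty. */
static int flush_ring(int *completed, int num_nodes, int head, int tail)
{
	int i;

	for (i = 0; i < num_nodes; i++)
		completed[i] = 1;	/* every request has now completed */

	while ((head % num_nodes) != (tail % num_nodes))
		head++;			/* every node can now be forwarded */
	return head;
}
```

This is also why the unregister path calls cancel_work_sync() before perf_pmu_unregister(): no forwarding work may still be running once the PMU is gone.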