From patchwork Wed Jun 6 12:49:05 2018
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 10450185
From: Tvrtko Ursulin
To: igt-dev@lists.freedesktop.org
Date: Wed, 6 Jun 2018 13:49:05 +0100
Message-Id: <20180606124907.13139-5-tvrtko.ursulin@linux.intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20180606124907.13139-1-tvrtko.ursulin@linux.intel.com>
References: <20180606124907.13139-1-tvrtko.ursulin@linux.intel.com>
Subject: [Intel-gfx] [PATCH i-g-t 4/6] tests/perf_pmu: Add tests for engine queued/runnable/running stats
Cc: intel-gfx@lists.freedesktop.org

From: Tvrtko Ursulin

Simple tests to check reported queue depths are correct.

v2:
 * Improvements similar to ones from i915_query.c.

Signed-off-by: Tvrtko Ursulin
---
 tests/perf_pmu.c | 258 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 258 insertions(+)

diff --git a/tests/perf_pmu.c b/tests/perf_pmu.c
index 4570f926d7fe..788ea09cc6a0 100644
--- a/tests/perf_pmu.c
+++ b/tests/perf_pmu.c
@@ -169,6 +169,7 @@ static unsigned int e2ring(int gem_fd, const struct intel_execution_engine2 *e)
 #define TEST_RUNTIME_PM (8)
 #define FLAG_LONG (16)
 #define FLAG_HANG (32)
+#define TEST_CONTEXTS (64)
 
 static igt_spin_t * __spin_poll(int fd, uint32_t ctx, unsigned long flags)
 {
@@ -952,6 +953,223 @@ multi_client(int gem_fd, const struct intel_execution_engine2 *e)
 	assert_within_epsilon(val[1], perf_slept[1], tolerance);
 }
 
+static double calc_queued(uint64_t d_val, uint64_t d_ns)
+{
+	return (double)d_val * 1e9 / I915_SAMPLE_QUEUED_DIVISOR / d_ns;
+}
+
+static void
+queued(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags)
+{
+	const unsigned long engine = e2ring(gem_fd, e);
+	const uint32_t bbe = MI_BATCH_BUFFER_END;
+	const unsigned int max_rq = 10;
+	double queued[max_rq + 1];
+	unsigned int n, i;
+	uint64_t val[2];
+	uint64_t ts[2];
+	uint32_t bo;
+	int fd;
+
+	igt_require_sw_sync();
+	if (flags & TEST_CONTEXTS)
+		gem_require_contexts(gem_fd);
+
+	memset(queued, 0, sizeof(queued));
+
+	bo = gem_create(gem_fd, 4096);
+	gem_write(gem_fd, bo, 4092, &bbe, sizeof(bbe));
+
+	fd = open_pmu(I915_PMU_ENGINE_QUEUED(e->class, e->instance));
+
+	for (n = 0; n <= max_rq; n++) {
+		IGT_CORK_FENCE(cork);
+		int fence = -1;
+
+		gem_quiescent_gpu(gem_fd);
+
+		if (n)
+			fence = igt_cork_plug(&cork, -1);
+
+		for (i = 0; i < n; i++) {
+			struct drm_i915_gem_exec_object2 obj = { };
+			struct drm_i915_gem_execbuffer2 eb = { };
+
+			obj.handle = bo;
+
+			eb.buffer_count = 1;
+			eb.buffers_ptr = to_user_pointer(&obj);
+
+			eb.flags = engine | I915_EXEC_FENCE_IN;
+			if (flags & TEST_CONTEXTS)
+				eb.rsvd1 = gem_context_create(gem_fd);
+			eb.rsvd2 = fence;
+
+			gem_execbuf(gem_fd, &eb);
+
+			if (flags & TEST_CONTEXTS)
+				gem_context_destroy(gem_fd, eb.rsvd1);
+		}
+
+		val[0] = __pmu_read_single(fd, &ts[0]);
+		usleep(batch_duration_ns / 1000);
+		val[1] = __pmu_read_single(fd, &ts[1]);
+
+		queued[n] = calc_queued(val[1] - val[0], ts[1] - ts[0]);
+		igt_info("n=%u queued=%.2f\n", n, queued[n]);
+
+		if (fence >= 0)
+			igt_cork_unplug(&cork);
+
+		for (i = 0; i < n; i++)
+			gem_sync(gem_fd, bo);
+	}
+
+	close(fd);
+
+	gem_close(gem_fd, bo);
+
+	for (i = 0; i <= max_rq; i++)
+		assert_within_epsilon(queued[i], i, tolerance);
+}
+
+static unsigned long __query_wait(igt_spin_t *spin, unsigned int n)
+{
+	struct timespec ts = { };
+	unsigned long t;
+
+	igt_nsec_elapsed(&ts);
+
+	if (spin->running) {
+		igt_spin_busywait_until_running(spin);
+	} else {
+		igt_debug("__spin_wait - usleep mode\n");
+		usleep(500e3); /* Better than nothing! */
+	}
+
+	t = igt_nsec_elapsed(&ts);
+
+	return spin->running ? t : 500e6 / n;
+}
+
+static void
+runnable(int gem_fd, const struct intel_execution_engine2 *e)
+{
+	const unsigned long engine = e2ring(gem_fd, e);
+	bool contexts = gem_has_contexts(gem_fd);
+	const unsigned int max_rq = 10;
+	igt_spin_t *spin[max_rq + 1];
+	double runnable[max_rq + 1];
+	uint32_t ctx[max_rq];
+	unsigned int n, i;
+	uint64_t val[2];
+	uint64_t ts[2];
+	int fd;
+
+	memset(runnable, 0, sizeof(runnable));
+
+	if (contexts) {
+		for (i = 0; i < max_rq; i++)
+			ctx[i] = gem_context_create(gem_fd);
+	}
+
+	fd = open_pmu(I915_PMU_ENGINE_RUNNABLE(e->class, e->instance));
+
+	for (n = 0; n <= max_rq; n++) {
+		gem_quiescent_gpu(gem_fd);
+
+		for (i = 0; i < n; i++) {
+			uint32_t ctx_ = contexts ? ctx[i] : 0;
+
+			if (i == 0)
+				spin[i] = __spin_poll(gem_fd, ctx_, engine);
+			else
+				spin[i] = __igt_spin_batch_new(gem_fd, ctx_,
+							       engine, 0);
+		}
+
+		if (n)
+			usleep(__query_wait(spin[0], n) * n);
+
+		val[0] = __pmu_read_single(fd, &ts[0]);
+		usleep(batch_duration_ns / 1000);
+		val[1] = __pmu_read_single(fd, &ts[1]);
+
+		runnable[n] = calc_queued(val[1] - val[0], ts[1] - ts[0]);
+		igt_info("n=%u runnable=%.2f\n", n, runnable[n]);
+
+		for (i = 0; i < n; i++) {
+			end_spin(gem_fd, spin[i], FLAG_SYNC);
+			igt_spin_batch_free(gem_fd, spin[i]);
+		}
+	}
+
+	if (contexts) {
+		for (i = 0; i < max_rq; i++)
+			gem_context_destroy(gem_fd, ctx[i]);
+	}
+
+	close(fd);
+
+	assert_within_epsilon(runnable[0], 0, tolerance);
+	igt_assert(runnable[max_rq] > 0.0);
+
+	if (contexts)
+		assert_within_epsilon(runnable[max_rq] - runnable[max_rq - 1],
+				      1, tolerance);
+}
+
+static void
+running(int gem_fd, const struct intel_execution_engine2 *e)
+{
+	const unsigned long engine = e2ring(gem_fd, e);
+	const unsigned int max_rq = 10;
+	igt_spin_t *spin[max_rq + 1];
+	double running[max_rq + 1];
+	unsigned int n, i;
+	uint64_t val[2];
+	uint64_t ts[2];
+	int fd;
+
+	memset(running, 0, sizeof(running));
+	memset(spin, 0, sizeof(spin));
+
+	fd = open_pmu(I915_PMU_ENGINE_RUNNING(e->class, e->instance));
+
+	for (n = 0; n <= max_rq; n++) {
+		gem_quiescent_gpu(gem_fd);
+
+		for (i = 0; i < n; i++) {
+			if (i == 0)
+				spin[i] = __spin_poll(gem_fd, 0, engine);
+			else
+				spin[i] = __igt_spin_batch_new(gem_fd, 0,
							       engine, 0);
+		}
+
+		if (n)
+			usleep(__query_wait(spin[0], n) * n);
+
+		val[0] = __pmu_read_single(fd, &ts[0]);
+		usleep(batch_duration_ns / 1000);
+		val[1] = __pmu_read_single(fd, &ts[1]);
+
+		running[n] = calc_queued(val[1] - val[0], ts[1] - ts[0]);
+		igt_info("n=%u running=%.2f\n", n, running[n]);
+
+		for (i = 0; i < n; i++) {
+			end_spin(gem_fd, spin[i], FLAG_SYNC);
+			igt_spin_batch_free(gem_fd, spin[i]);
+		}
+	}
+
+	close(fd);
+
+	assert_within_epsilon(running[0], 0, tolerance);
+	for (i = 1; i <= max_rq; i++)
+		igt_assert(running[i] > 0);
+}
+
 /**
  * Tests that i915 PMU corectly errors out in invalid initialization.
  * i915 PMU is uncore PMU, thus:
@@ -1684,6 +1902,15 @@ igt_main
 		igt_subtest_f("init-sema-%s", e->name)
 			init(fd, e, I915_SAMPLE_SEMA);
 
+		igt_subtest_f("init-queued-%s", e->name)
+			init(fd, e, I915_SAMPLE_QUEUED);
+
+		igt_subtest_f("init-runnable-%s", e->name)
+			init(fd, e, I915_SAMPLE_RUNNABLE);
+
+		igt_subtest_f("init-running-%s", e->name)
+			init(fd, e, I915_SAMPLE_RUNNING);
+
 		igt_subtest_group {
 			igt_fixture {
 				gem_require_engine(fd, e->class, e->instance);
@@ -1789,6 +2016,27 @@ igt_main
 
 			igt_subtest_f("busy-hang-%s", e->name)
 				single(fd, e, TEST_BUSY | FLAG_HANG);
+
+			/**
+			 * Test that queued metric works.
+			 */
+			igt_subtest_f("queued-%s", e->name)
+				queued(fd, e, 0);
+
+			igt_subtest_f("queued-contexts-%s", e->name)
+				queued(fd, e, TEST_CONTEXTS);
+
+			/**
+			 * Test that runnable metric works.
+			 */
+			igt_subtest_f("runnable-%s", e->name)
+				runnable(fd, e);
+
+			/**
+			 * Test that running metric works.
+			 */
+			igt_subtest_f("running-%s", e->name)
+				running(fd, e);
 		}
 
 		/**
@@ -1881,6 +2129,16 @@ igt_main
 						  e->name)
 				single(render_fd, e, TEST_BUSY | TEST_TRAILING_IDLE);
+			igt_subtest_f("render-node-queued-%s", e->name)
+				queued(render_fd, e, 0);
+			igt_subtest_f("render-node-queued-contexts-%s",
+				      e->name)
+				queued(render_fd, e, TEST_CONTEXTS);
+			igt_subtest_f("render-node-runnable-%s",
+				      e->name)
+				runnable(render_fd, e);
+			igt_subtest_f("render-node-running-%s", e->name)
+				running(render_fd, e);
 		}
 	}