From patchwork Thu Apr 18 14:24:54 2019
X-Patchwork-Submitter: Mika Kuoppala
X-Patchwork-Id: 10907449
From: Mika Kuoppala
To: intel-gfx@lists.freedesktop.org
Date: Thu, 18 Apr 2019 17:24:54 +0300
Message-Id: <20190418142455.19763-1-mika.kuoppala@linux.intel.com>
Subject: [Intel-gfx] [PATCH i-g-t 1/2] lib/igt_dummyload: Introduce igt_spin_reset

Libify resetting a spin for reuse.

v2: use also in perf_pmu

Cc: Chris Wilson
Cc: Tvrtko Ursulin
Signed-off-by: Mika Kuoppala
---
 lib/igt_dummyload.c           | 20 ++++++++++++++++++++
 lib/igt_dummyload.h           |  2 ++
 tests/i915/gem_exec_latency.c | 19 ++++---------------
 tests/i915/gem_sync.c         | 34 ++++++++++++++--------------------
 tests/perf_pmu.c              | 10 +--------
 5 files changed, 41 insertions(+), 44 deletions(-)

diff --git a/lib/igt_dummyload.c b/lib/igt_dummyload.c
index 1d57a53c..cb466317 100644
--- a/lib/igt_dummyload.c
+++ b/lib/igt_dummyload.c
@@ -260,6 +260,8 @@ emit_recursive_batch(igt_spin_t *spin,
 	obj[SCRATCH].flags = EXEC_OBJECT_PINNED;
 	obj[BATCH].flags = EXEC_OBJECT_PINNED;
 
+	spin->cmd_spin = *spin->batch;
+
 	return fence_fd;
 }
 
@@ -366,6 +368,24 @@ void igt_spin_set_timeout(igt_spin_t *spin, int64_t ns)
 	spin->timer = timer;
 }
 
+/**
+ * igt_spin_reset:
+ * @spin: spin state from igt_spin_new()
+ *
+ * Reset the state of spin, allowing its reuse.
+ */
+void igt_spin_reset(igt_spin_t *spin)
+{
+	if (!spin)
+		return;
+
+	if (igt_spin_has_poll(spin))
+		spin->poll[SPIN_POLL_START_IDX] = 0;
+
+	*spin->batch = spin->cmd_spin;
+	__sync_synchronize();
+}
+
 /**
  * igt_spin_end:
  * @spin: spin state from igt_spin_new()
diff --git a/lib/igt_dummyload.h b/lib/igt_dummyload.h
index d6482089..d7b1be91 100644
--- a/lib/igt_dummyload.h
+++ b/lib/igt_dummyload.h
@@ -37,6 +37,7 @@ typedef struct igt_spin {
 	timer_t timer;
 	struct igt_list link;
 	uint32_t *batch;
+	uint32_t cmd_spin;
 	int out_fence;
 	struct drm_i915_gem_exec_object2 obj[2];
 	struct drm_i915_gem_execbuffer2 execbuf;
@@ -68,6 +69,7 @@ igt_spin_factory(int fd, const struct igt_spin_factory *opts);
 	igt_spin_factory(fd, &((struct igt_spin_factory){__VA_ARGS__}))
 
 void igt_spin_set_timeout(igt_spin_t *spin, int64_t ns);
+void igt_spin_reset(igt_spin_t *spin);
 void igt_spin_end(igt_spin_t *spin);
 void igt_spin_free(int fd, igt_spin_t *spin);
 
diff --git a/tests/i915/gem_exec_latency.c b/tests/i915/gem_exec_latency.c
index 6b7dfbc0..2cfb78bf 100644
--- a/tests/i915/gem_exec_latency.c
+++ b/tests/i915/gem_exec_latency.c
@@ -73,19 +73,17 @@ poll_ring(int fd, unsigned ring, const char *name)
 	unsigned long cycles;
 	igt_spin_t *spin[2];
 	uint64_t elapsed;
-	uint32_t cmd;
 
 	gem_require_ring(fd, ring);
 	igt_require(gem_can_store_dword(fd, ring));
 
 	spin[0] = __igt_spin_factory(fd, &opts);
 	igt_assert(igt_spin_has_poll(spin[0]));
-	cmd = *spin[0]->batch;
 
 	spin[1] = __igt_spin_factory(fd, &opts);
 	igt_assert(igt_spin_has_poll(spin[1]));
-	igt_assert(cmd == *spin[1]->batch);
+	igt_assert(*spin[0]->batch == *spin[1]->batch);
 
 	igt_spin_end(spin[0]);
 	igt_spin_busywait_until_started(spin[1]);
@@ -96,8 +94,8 @@ poll_ring(int fd, unsigned ring, const char *name)
 	while ((elapsed = igt_nsec_elapsed(&tv)) < 2ull << 30) {
 		const unsigned int idx = cycles++ & 1;
 
-		*spin[idx]->batch = cmd;
-		spin[idx]->poll[SPIN_POLL_START_IDX] = 0;
+		igt_spin_reset(spin[idx]);
+
 		gem_execbuf(fd, &spin[idx]->execbuf);
 
 		igt_spin_end(spin[!idx]);
 
@@ -414,15 +412,6 @@ static void latency_from_ring(int fd,
 	}
 }
 
-static void __rearm_spin(igt_spin_t *spin)
-{
-	const uint32_t mi_arb_chk = 0x5 << 23;
-
-	*spin->batch = mi_arb_chk;
-	spin->poll[SPIN_POLL_START_IDX] = 0;
-	__sync_synchronize();
-}
-
 static void __submit_spin(int fd, igt_spin_t *spin, unsigned int flags)
 {
@@ -557,7 +546,7 @@ rthog_latency_on_ring(int fd, unsigned int engine, const char *name, unsigned in
 			if (nengine > 1)
 				usleep(10*nengine);
 
-			__rearm_spin(spin);
+			igt_spin_reset(spin);
 
 			igt_nsec_elapsed(&ts);
 			__submit_spin(fd, spin, engine);
diff --git a/tests/i915/gem_sync.c b/tests/i915/gem_sync.c
index f17ecd0b..8c5aaa14 100644
--- a/tests/i915/gem_sync.c
+++ b/tests/i915/gem_sync.c
@@ -209,7 +209,6 @@ wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 	struct drm_i915_gem_execbuffer2 execbuf;
 	double end, this, elapsed, now, baseline;
 	unsigned long cycles;
-	uint32_t cmd;
 	igt_spin_t *spin;
 
 	memset(&object, 0, sizeof(object));
@@ -226,7 +225,6 @@ wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 			      .flags = (IGT_SPIN_POLL_RUN |
					IGT_SPIN_FAST));
 	igt_assert(igt_spin_has_poll(spin));
-	cmd = *spin->batch;
 
 	gem_execbuf(fd, &execbuf);
 
@@ -238,8 +236,8 @@ wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 	elapsed = 0;
 	cycles = 0;
 	do {
-		*spin->batch = cmd;
-		spin->poll[SPIN_POLL_START_IDX] = 0;
+		igt_spin_reset(spin);
+
 		gem_execbuf(fd, &spin->execbuf);
 
 		igt_spin_busywait_until_started(spin);
 
@@ -262,8 +260,8 @@ wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 	elapsed = 0;
 	cycles = 0;
 	do {
-		*spin->batch = cmd;
-		spin->poll[SPIN_POLL_START_IDX] = 0;
+		igt_spin_reset(spin);
+
 		gem_execbuf(fd, &spin->execbuf);
 
 		igt_spin_busywait_until_started(spin);
 
@@ -321,17 +319,14 @@ static void active_ring(int fd, unsigned ring, int timeout)
 	double start, end, elapsed;
 	unsigned long cycles;
 	igt_spin_t *spin[2];
-	uint32_t cmd;
 
 	spin[0] = __igt_spin_new(fd,
				 .engine = ring,
				 .flags = IGT_SPIN_FAST);
-	cmd = *spin[0]->batch;
 
 	spin[1] = __igt_spin_new(fd,
				 .engine = ring,
				 .flags = IGT_SPIN_FAST);
-	igt_assert(*spin[1]->batch == cmd);
 
 	start = gettime();
 	end = start + timeout;
@@ -343,7 +338,8 @@ static void active_ring(int fd, unsigned ring, int timeout)
 			igt_spin_end(s);
 			gem_sync(fd, s->handle);
 
-			*s->batch = cmd;
+			igt_spin_reset(s);
+
 			gem_execbuf(fd, &s->execbuf);
 		}
 		cycles += 1024;
@@ -393,7 +389,6 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 	double end, this, elapsed, now, baseline;
 	unsigned long cycles;
 	igt_spin_t *spin[2];
-	uint32_t cmd;
 
 	memset(&object, 0, sizeof(object));
 	object.handle = gem_create(fd, 4096);
@@ -409,7 +404,6 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
				 .flags = (IGT_SPIN_POLL_RUN |
					   IGT_SPIN_FAST));
 	igt_assert(igt_spin_has_poll(spin[0]));
-	cmd = *spin[0]->batch;
 
 	spin[1] = __igt_spin_new(fd,
				 .engine = execbuf.flags,
@@ -423,8 +417,8 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 	gem_sync(fd, object.handle);
 
 	for (int warmup = 0; warmup <= 1; warmup++) {
-		*spin[0]->batch = cmd;
-		spin[0]->poll[SPIN_POLL_START_IDX] = 0;
+		igt_spin_reset(spin[0]);
+
 		gem_execbuf(fd, &spin[0]->execbuf);
 
 		end = gettime() + timeout/10.;
 
@@ -433,8 +427,8 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 		do {
 			igt_spin_busywait_until_started(spin[0]);
 
-			*spin[1]->batch = cmd;
-			spin[1]->poll[SPIN_POLL_START_IDX] = 0;
+			igt_spin_reset(spin[1]);
+
 			gem_execbuf(fd, &spin[1]->execbuf);
 
 			this = gettime();
 
@@ -454,8 +448,8 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
			   names[child % num_engines] ?
" b" : "B", cycles, elapsed*1e6/cycles); - *spin[0]->batch = cmd; - spin[0]->poll[SPIN_POLL_START_IDX] = 0; + igt_spin_reset(spin[0]); + gem_execbuf(fd, &spin[0]->execbuf); end = gettime() + timeout; @@ -467,8 +461,8 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen) for (int n = 0; n < wlen; n++) gem_execbuf(fd, &execbuf); - *spin[1]->batch = cmd; - spin[1]->poll[SPIN_POLL_START_IDX] = 0; + igt_spin_reset(spin[1]); + gem_execbuf(fd, &spin[1]->execbuf); this = gettime(); diff --git a/tests/perf_pmu.c b/tests/perf_pmu.c index a8ad86ce..e719a292 100644 --- a/tests/perf_pmu.c +++ b/tests/perf_pmu.c @@ -1501,14 +1501,6 @@ test_enable_race(int gem_fd, const struct intel_execution_engine2 *e) gem_quiescent_gpu(gem_fd); } -static void __rearm_spin(igt_spin_t *spin) -{ - const uint32_t mi_arb_chk = 0x5 << 23; - - *spin->batch = mi_arb_chk; - __sync_synchronize(); -} - #define __assert_within(x, ref, tol_up, tol_down) \ igt_assert_f((double)(x) <= ((double)(ref) + (tol_up)) && \ (double)(x) >= ((double)(ref) - (tol_down)), \ @@ -1596,7 +1588,7 @@ accuracy(int gem_fd, const struct intel_execution_engine2 *e, nanosleep(&_ts, NULL); /* Restart the spinbatch. */ - __rearm_spin(spin); + igt_spin_reset(spin); __submit_spin(gem_fd, spin, e, 0); /* PWM busy sleep. 
		 */

From patchwork Thu Apr 18 14:24:55 2019
X-Patchwork-Submitter: Mika Kuoppala
X-Patchwork-Id: 10907451
From: Mika Kuoppala
To: intel-gfx@lists.freedesktop.org
Date: Thu, 18 Apr 2019 17:24:55 +0300
Message-Id: <20190418142455.19763-2-mika.kuoppala@linux.intel.com>
In-Reply-To: <20190418142455.19763-1-mika.kuoppala@linux.intel.com>
References: <20190418142455.19763-1-mika.kuoppala@linux.intel.com>
Subject: [Intel-gfx] [PATCH i-g-t 2/2] lib/igt_dummyload: Send batch as first

To simplify emitting the recursive batch, make the batch always the
first object on the execbuf list.
v2: set handles early, poll_ptr indecency (Chris)

Cc: Chris Wilson
Signed-off-by: Mika Kuoppala
---
 lib/igt_dummyload.c             | 129 ++++++++++++++++----------------
 lib/igt_dummyload.h             |  12 ++-
 tests/i915/gem_concurrent_all.c |   3 +-
 tests/i915/gem_exec_latency.c   |   2 +-
 tests/i915/gem_exec_schedule.c  |  14 ++--
 tests/i915/gem_softpin.c        |   2 +-
 tests/i915/gem_spin_batch.c     |  16 ++--
 tests/i915/i915_hangman.c       |   2 +-
 8 files changed, 93 insertions(+), 87 deletions(-)

diff --git a/lib/igt_dummyload.c b/lib/igt_dummyload.c
index cb466317..c012879d 100644
--- a/lib/igt_dummyload.c
+++ b/lib/igt_dummyload.c
@@ -62,6 +62,7 @@
 #define MI_ARB_CHK (0x5 << 23)
 
 static const int BATCH_SIZE = 4096;
+static const int POLL_SIZE = 4096;
 
 static IGT_LIST(spin_list);
 static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
@@ -69,16 +70,15 @@ static int
 emit_recursive_batch(igt_spin_t *spin,
		     int fd, const struct igt_spin_factory *opts)
 {
-#define SCRATCH 0
-#define BATCH 1
 	const int gen = intel_gen(intel_get_drm_devid(fd));
+	struct drm_i915_gem_exec_object2 * const batch = &spin->_obj[0];
+	struct drm_i915_gem_exec_object2 * const poll = &spin->_obj[1];
 	struct drm_i915_gem_relocation_entry relocs[2], *r;
 	struct drm_i915_gem_execbuffer2 *execbuf;
-	struct drm_i915_gem_exec_object2 *obj;
 	unsigned int engines[16];
 	unsigned int nengine;
 	int fence_fd = -1;
-	uint32_t *batch, *batch_start;
+	uint32_t *cs, *cs_start;
 	int i;
 
 	nengine = 0;
@@ -99,30 +99,31 @@ emit_recursive_batch(igt_spin_t *spin,
 	memset(&spin->execbuf, 0, sizeof(spin->execbuf));
 	execbuf = &spin->execbuf;
-	memset(spin->obj, 0, sizeof(spin->obj));
-	obj = spin->obj;
+	memset(spin->_obj, 0, sizeof(spin->_obj));
 	memset(relocs, 0, sizeof(relocs));
 
-	obj[BATCH].handle = gem_create(fd, BATCH_SIZE);
-	batch = __gem_mmap__wc(fd, obj[BATCH].handle,
-			       0, BATCH_SIZE, PROT_WRITE);
-	if (!batch)
-		batch = gem_mmap__gtt(fd, obj[BATCH].handle,
-				      BATCH_SIZE, PROT_WRITE);
-	gem_set_domain(fd, obj[BATCH].handle,
-		       I915_GEM_DOMAIN_GTT,
-		       I915_GEM_DOMAIN_GTT);
+	batch->handle = gem_create(fd, BATCH_SIZE);
+	spin->handle = batch->handle;
+
+	cs = __gem_mmap__wc(fd, batch->handle,
+			    0, BATCH_SIZE, PROT_WRITE);
+	if (!cs)
+		cs = gem_mmap__gtt(fd, batch->handle,
+				   BATCH_SIZE, PROT_WRITE);
+	gem_set_domain(fd, batch->handle,
+		       I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
 	execbuf->buffer_count++;
-	batch_start = batch;
+	cs_start = cs;
 
 	if (opts->dependency) {
 		igt_assert(!(opts->flags & IGT_SPIN_POLL_RUN));
 
-		r = &relocs[obj[BATCH].relocation_count++];
+		r = &relocs[batch->relocation_count++];
 
 		/* dummy write to dependency */
-		obj[SCRATCH].handle = opts->dependency;
+		poll->handle = opts->dependency;
 		r->presumed_offset = 0;
-		r->target_handle = obj[SCRATCH].handle;
+		r->target_handle = poll->handle;
 		r->offset = sizeof(uint32_t) * 1020;
 		r->delta = 0;
 		r->read_domains = I915_GEM_DOMAIN_RENDER;
@@ -130,7 +131,7 @@ emit_recursive_batch(igt_spin_t *spin,
 
 		execbuf->buffer_count++;
 	} else if (opts->flags & IGT_SPIN_POLL_RUN) {
-		r = &relocs[obj[BATCH].relocation_count++];
+		r = &relocs[batch->relocation_count++];
 
 		igt_assert(!opts->dependency);
 
@@ -139,52 +140,51 @@ emit_recursive_batch(igt_spin_t *spin,
 			igt_require(__igt_device_set_master(fd) == 0);
 		}
 
-		spin->poll_handle = gem_create(fd, 4096);
-		obj[SCRATCH].handle = spin->poll_handle;
+		poll->handle = gem_create(fd, POLL_SIZE);
+		spin->poll_handle = poll->handle;
 
-		if (__gem_set_caching(fd, spin->poll_handle,
+		if (__gem_set_caching(fd, poll->handle,
				      I915_CACHING_CACHED) == 0)
-			spin->poll = gem_mmap__cpu(fd, spin->poll_handle,
-						   0, 4096,
+			spin->poll = gem_mmap__cpu(fd, poll->handle,
+						   0, POLL_SIZE,
						   PROT_READ | PROT_WRITE);
 		else
-			spin->poll = gem_mmap__wc(fd, spin->poll_handle,
-						  0, 4096,
+			spin->poll = gem_mmap__wc(fd, poll->handle,
+						  0, POLL_SIZE,
						  PROT_READ | PROT_WRITE);
 
 		igt_assert_eq(spin->poll[SPIN_POLL_START_IDX], 0);
 
 		/* batch is first */
-		r->presumed_offset = 4096;
-		r->target_handle = obj[SCRATCH].handle;
+		r->presumed_offset = BATCH_SIZE;
+		r->target_handle = poll->handle;
 		r->offset = sizeof(uint32_t) * 1;
 		r->delta = sizeof(uint32_t) * SPIN_POLL_START_IDX;
 
-		*batch++ = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
+		*cs++ = MI_STORE_DWORD_IMM | (gen < 6 ? 1 << 22 : 0);
 
 		if (gen >= 8) {
-			*batch++ = r->presumed_offset + r->delta;
-			*batch++ = 0;
+			*cs++ = r->presumed_offset + r->delta;
+			*cs++ = 0;
 		} else if (gen >= 4) {
-			*batch++ = 0;
-			*batch++ = r->presumed_offset + r->delta;
+			*cs++ = 0;
+			*cs++ = r->presumed_offset + r->delta;
 			r->offset += sizeof(uint32_t);
 		} else {
-			batch[-1]--;
-			*batch++ = r->presumed_offset + r->delta;
+			cs[-1]--;
+			*cs++ = r->presumed_offset + r->delta;
 		}
 
-		*batch++ = 1;
+		*cs++ = 1;
 
 		execbuf->buffer_count++;
 	}
 
-	spin->batch = batch = batch_start + 64 / sizeof(*batch);
-	spin->handle = obj[BATCH].handle;
+	spin->cs = cs = cs_start + 64 / sizeof(*cs);
 
 	/* Allow ourselves to be preempted */
 	if (!(opts->flags & IGT_SPIN_NO_PREEMPTION))
-		*batch++ = MI_ARB_CHK;
+		*cs++ = MI_ARB_CHK;
 
 	/* Pad with a few nops so that we do not completely hog the system.
	 *
@@ -198,32 +198,33 @@ emit_recursive_batch(igt_spin_t *spin,
	 * trouble.
	 * See https://bugs.freedesktop.org/show_bug.cgi?id=102262
	 */
 	if (!(opts->flags & IGT_SPIN_FAST))
-		batch += 1000;
+		cs += 1000;
 
 	/* recurse */
-	r = &relocs[obj[BATCH].relocation_count++];
-	r->target_handle = obj[BATCH].handle;
-	r->offset = (batch + 1 - batch_start) * sizeof(*batch);
+	r = &relocs[batch->relocation_count++];
+	r->presumed_offset = 0;
+	r->target_handle = batch->handle;
+	r->offset = (cs + 1 - cs_start) * sizeof(*cs);
 	r->read_domains = I915_GEM_DOMAIN_COMMAND;
 	r->delta = 64;
 	if (gen >= 8) {
-		*batch++ = MI_BATCH_BUFFER_START | 1 << 8 | 1;
-		*batch++ = r->delta;
-		*batch++ = 0;
+		*cs++ = MI_BATCH_BUFFER_START | 1 << 8 | 1;
+		*cs++ = r->presumed_offset + r->delta;
+		*cs++ = 0;
 	} else if (gen >= 6) {
-		*batch++ = MI_BATCH_BUFFER_START | 1 << 8;
-		*batch++ = r->delta;
+		*cs++ = MI_BATCH_BUFFER_START | 1 << 8;
+		*cs++ = r->presumed_offset + r->delta;
 	} else {
-		*batch++ = MI_BATCH_BUFFER_START | 2 << 6;
+		*cs++ = MI_BATCH_BUFFER_START | 2 << 6;
 		if (gen < 4)
 			r->delta |= 1;
-		*batch = r->delta;
-		batch++;
+		*cs = r->presumed_offset + r->delta;
+		cs++;
 	}
-	obj[BATCH].relocs_ptr = to_user_pointer(relocs);
+	batch->relocs_ptr = to_user_pointer(relocs);
 
-	execbuf->buffers_ptr = to_user_pointer(obj +
-					       (2 - execbuf->buffer_count));
+	execbuf->buffers_ptr = to_user_pointer(spin->_obj);
+	execbuf->flags |= I915_EXEC_BATCH_FIRST;
 	execbuf->rsvd1 = opts->ctx;
 
 	if (opts->flags & IGT_SPIN_FENCE_OUT)
@@ -252,15 +253,13 @@ emit_recursive_batch(igt_spin_t *spin,
 		}
 	}
 
-	/* Make it easier for callers to resubmit.
-	 */
-
-	obj[BATCH].relocation_count = 0;
-	obj[BATCH].relocs_ptr = 0;
-
-	obj[SCRATCH].flags = EXEC_OBJECT_PINNED;
-	obj[BATCH].flags = EXEC_OBJECT_PINNED;
+	for (i = 0; i < execbuf->buffer_count; i++) {
+		spin->_obj[i].relocation_count = 0;
+		spin->_obj[i].relocs_ptr = 0;
+		spin->_obj[i].flags = EXEC_OBJECT_PINNED;
+	}
 
-	spin->cmd_spin = *spin->batch;
+	spin->cmd_spin = *spin->cs;
 
 	return fence_fd;
 }
@@ -382,7 +381,7 @@ void igt_spin_reset(igt_spin_t *spin)
 	if (igt_spin_has_poll(spin))
 		spin->poll[SPIN_POLL_START_IDX] = 0;
 
-	*spin->batch = spin->cmd_spin;
+	*spin->cs = spin->cmd_spin;
 	__sync_synchronize();
 }
 
@@ -397,7 +396,7 @@ void igt_spin_end(igt_spin_t *spin)
 	if (!spin)
 		return;
 
-	*spin->batch = MI_BATCH_BUFFER_END;
+	*spin->cs = MI_BATCH_BUFFER_END;
 	__sync_synchronize();
 }
 
@@ -422,7 +421,7 @@ void igt_spin_free(int fd, igt_spin_t *spin)
 		timer_delete(spin->timer);
 
 	igt_spin_end(spin);
-	gem_munmap((void *)((unsigned long)spin->batch & (~4095UL)),
+	gem_munmap((void *)((unsigned long)spin->cs & (~4095UL)),
		   BATCH_SIZE);
 
 	if (spin->poll) {
diff --git a/lib/igt_dummyload.h b/lib/igt_dummyload.h
index d7b1be91..d6599a0c 100644
--- a/lib/igt_dummyload.h
+++ b/lib/igt_dummyload.h
@@ -33,14 +33,20 @@
 #include "i915_drm.h"
 
 typedef struct igt_spin {
-	unsigned int handle;
+	uint32_t handle;
+
 	timer_t timer;
 	struct igt_list link;
-	uint32_t *batch;
+
+	uint32_t *cs;
 	uint32_t cmd_spin;
 	int out_fence;
-	struct drm_i915_gem_exec_object2 obj[2];
+
+	struct drm_i915_gem_exec_object2 _obj[2];
+#define SPIN_BATCH_IDX 0
+
 	struct drm_i915_gem_execbuffer2 execbuf;
+
 	uint32_t poll_handle;
 	uint32_t *poll;
 #define SPIN_POLL_START_IDX 0
diff --git a/tests/i915/gem_concurrent_all.c b/tests/i915/gem_concurrent_all.c
index 3ddaab82..b5377191 100644
--- a/tests/i915/gem_concurrent_all.c
+++ b/tests/i915/gem_concurrent_all.c
@@ -957,7 +957,8 @@ static igt_hang_t all_hang(void)
 		if (engine == I915_EXEC_RENDER)
 			continue;
 
-		eb.flags = engine;
+		eb.flags &= ~(I915_EXEC_RING_MASK |
+			      I915_EXEC_BSD_MASK);
+		eb.flags |= engine;
 		__gem_execbuf(fd, &eb);
 	}
 
diff --git a/tests/i915/gem_exec_latency.c b/tests/i915/gem_exec_latency.c
index 2cfb78bf..cb4f090b 100644
--- a/tests/i915/gem_exec_latency.c
+++ b/tests/i915/gem_exec_latency.c
@@ -83,7 +83,7 @@ poll_ring(int fd, unsigned ring, const char *name)
 	spin[1] = __igt_spin_factory(fd, &opts);
 	igt_assert(igt_spin_has_poll(spin[1]));
-	igt_assert(*spin[0]->batch == *spin[1]->batch);
+	igt_assert(*spin[0]->cs == *spin[1]->cs);
 
 	igt_spin_end(spin[0]);
 	igt_spin_busywait_until_started(spin[1]);
diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
index 9a079528..82fd23ed 100644
--- a/tests/i915/gem_exec_schedule.c
+++ b/tests/i915/gem_exec_schedule.c
@@ -223,11 +223,11 @@ static void independent(int fd, unsigned int engine)
 		if (spin == NULL) {
 			spin = __igt_spin_new(fd, .engine = other);
 		} else {
-			struct drm_i915_gem_execbuffer2 eb = {
-				.buffer_count = 1,
-				.buffers_ptr = to_user_pointer(&spin->obj[1]),
-				.flags = other,
-			};
+			struct drm_i915_gem_execbuffer2 eb = spin->execbuf;
+
+			eb.flags &= ~(I915_EXEC_RING_MASK | I915_EXEC_BSD_MASK);
+			eb.flags |= other;
+
 			gem_execbuf(fd, &eb);
 		}
 
@@ -619,8 +619,8 @@ static igt_spin_t *__noise(int fd, uint32_t ctx, int prio, igt_spin_t *spin)
					      .engine = other);
 		} else {
 			struct drm_i915_gem_execbuffer2 eb = {
-				.buffer_count = 1,
-				.buffers_ptr = to_user_pointer(&spin->obj[1]),
+				.buffer_count = spin->execbuf.buffer_count,
+				.buffers_ptr = to_user_pointer(&spin->_obj[SPIN_BATCH_IDX]),
				.rsvd1 = ctx,
				.flags = other,
			};
diff --git a/tests/i915/gem_softpin.c b/tests/i915/gem_softpin.c
index 336008b8..c269afdf 100644
--- a/tests/i915/gem_softpin.c
+++ b/tests/i915/gem_softpin.c
@@ -360,7 +360,7 @@ static void test_evict_hang(int fd)
 	execbuf.buffer_count = 1;
 
 	hang = igt_hang_ctx(fd, 0, 0, 0);
-	expected = hang.spin->obj[1].offset;
+	expected = hang.spin->_obj[SPIN_BATCH_IDX].offset;
 
 	/* Replace the hung batch with ourselves, forcing an eviction */
 	object.offset = expected;
diff --git a/tests/i915/gem_spin_batch.c b/tests/i915/gem_spin_batch.c
index a92672b8..a8af8d1e 100644
--- a/tests/i915/gem_spin_batch.c
+++ b/tests/i915/gem_spin_batch.c
@@ -77,28 +77,28 @@ static void spin_resubmit(int fd, unsigned int engine, unsigned int flags)
 	igt_spin_t *spin = __igt_spin_new(fd, .ctx = ctx0, .engine = engine);
 	unsigned int other;
-	struct drm_i915_gem_execbuffer2 eb = {
-		.buffer_count = 1,
-		.buffers_ptr = to_user_pointer(&spin->obj[1]),
-		.rsvd1 = ctx1,
-	};
+	struct drm_i915_gem_execbuffer2 eb = spin->execbuf;
+
+	eb.rsvd1 = ctx1;
 
 	if (flags & RESUBMIT_ALL_ENGINES) {
 		for_each_physical_engine(fd, other) {
 			if (other == engine)
 				continue;
 
-			eb.flags = other;
+			eb.flags &= ~(I915_EXEC_RING_MASK | I915_EXEC_BSD_MASK);
+			eb.flags |= other;
 			gem_execbuf(fd, &eb);
 		}
 	} else {
-		eb.flags = engine;
+		eb.flags &= ~(I915_EXEC_RING_MASK | I915_EXEC_BSD_MASK);
+		eb.flags |= engine;
 		gem_execbuf(fd, &eb);
 	}
 
 	igt_spin_end(spin);
 
-	gem_sync(fd, spin->obj[1].handle);
+	gem_sync(fd, spin->handle);
 
 	igt_spin_free(fd, spin);
diff --git a/tests/i915/i915_hangman.c b/tests/i915/i915_hangman.c
index 9a1d5889..b2203df3 100644
--- a/tests/i915/i915_hangman.c
+++ b/tests/i915/i915_hangman.c
@@ -209,7 +209,7 @@ static void test_error_state_capture(unsigned ring_id,
 	clear_error_state();
 
 	hang = igt_hang_ctx(device, 0, ring_id, HANG_ALLOW_CAPTURE);
-	offset = hang.spin->obj[1].offset;
+	offset = hang.spin->_obj[SPIN_BATCH_IDX].offset;
 
 	batch = gem_mmap__cpu(device, hang.spin->handle, 0, 4096, PROT_READ);
 	gem_set_domain(device, hang.spin->handle, I915_GEM_DOMAIN_CPU, 0);