From patchwork Wed Apr 17 15:28:32 2019
X-Patchwork-Submitter: Mika Kuoppala
X-Patchwork-Id: 10905537
From: Mika Kuoppala
To: intel-gfx@lists.freedesktop.org
Date: Wed, 17 Apr 2019 18:28:32 +0300
Message-Id: <20190417152834.12705-1-mika.kuoppala@linux.intel.com>
X-Mailer: git-send-email 2.17.1
Subject: [Intel-gfx] [PATCH i-g-t 1/3] lib/igt_dummyload: libify checks for spin batch activation

Instead of open-coding the poll into the spinner, use a helper to check
whether the spinner has started.

Cc: Chris Wilson
Signed-off-by: Mika Kuoppala
---
 lib/igt_dummyload.c            | 35 +++++++++++++++++++---------------
 lib/igt_dummyload.h            | 17 ++++++++++++++---
 tests/i915/gem_ctx_exec.c      |  4 +---
 tests/i915/gem_ctx_isolation.c |  4 ++--
 tests/i915/gem_eio.c           |  4 ++--
 tests/i915/gem_exec_latency.c  | 22 ++++++++++-----------
 tests/i915/gem_exec_schedule.c |  5 ++---
 tests/i915/gem_sync.c          | 28 ++++++++++++---------------
 tests/perf_pmu.c               |  4 ++--
 9 files changed, 66 insertions(+), 57 deletions(-)

diff --git a/lib/igt_dummyload.c b/lib/igt_dummyload.c
index 47f6b92b..49b69737 100644
--- a/lib/igt_dummyload.c
+++ b/lib/igt_dummyload.c
@@ -67,11 +67,13 @@ static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

 static void fill_reloc(struct drm_i915_gem_relocation_entry *reloc,
-		       uint32_t gem_handle, uint32_t offset,
+		       uint32_t gem_handle, uint32_t offset, uint32_t delta,
		       uint32_t read_domains, uint32_t write_domains)
 {
+	reloc->presumed_offset = -1;
	reloc->target_handle = gem_handle;
	reloc->offset = offset * sizeof(uint32_t);
+
reloc->delta = delta * sizeof(uint32_t); reloc->read_domains = read_domains; reloc->write_domain = write_domains; } @@ -131,11 +133,13 @@ emit_recursive_batch(igt_spin_t *spin, /* dummy write to dependency */ obj[SCRATCH].handle = opts->dependency; fill_reloc(&relocs[obj[BATCH].relocation_count++], - opts->dependency, 1020, + opts->dependency, 1020, 0, I915_GEM_DOMAIN_RENDER, I915_GEM_DOMAIN_RENDER); execbuf->buffer_count++; } else if (opts->flags & IGT_SPIN_POLL_RUN) { + const unsigned int start_idx_offset = + SPIN_POLL_START_IDX * sizeof(uint32_t); unsigned int offset; igt_assert(!opts->dependency); @@ -149,36 +153,37 @@ emit_recursive_batch(igt_spin_t *spin, if (__gem_set_caching(fd, spin->poll_handle, I915_CACHING_CACHED) == 0) - spin->running = gem_mmap__cpu(fd, spin->poll_handle, - 0, 4096, - PROT_READ | PROT_WRITE); + spin->poll = gem_mmap__cpu(fd, spin->poll_handle, + 0, 4096, + PROT_READ | PROT_WRITE); else - spin->running = gem_mmap__wc(fd, spin->poll_handle, - 0, 4096, - PROT_READ | PROT_WRITE); - igt_assert_eq(*spin->running, 0); + spin->poll = gem_mmap__wc(fd, spin->poll_handle, + 0, 4096, + PROT_READ | PROT_WRITE); + igt_assert_eq(spin->poll[SPIN_POLL_START_IDX], 0); *batch++ = MI_STORE_DWORD_IMM | (gen < 6 ? 
1 << 22 : 0); if (gen >= 8) { offset = 1; - *batch++ = 0; + *batch++ = start_idx_offset; *batch++ = 0; } else if (gen >= 4) { offset = 2; *batch++ = 0; - *batch++ = 0; + *batch++ = start_idx_offset; } else { offset = 1; batch[-1]--; - *batch++ = 0; + *batch++ = start_idx_offset; } *batch++ = 1; obj[SCRATCH].handle = spin->poll_handle; fill_reloc(&relocs[obj[BATCH].relocation_count++], - spin->poll_handle, offset, 0, 0); + spin->poll_handle, offset, + SPIN_POLL_START_IDX, 0, 0); execbuf->buffer_count++; } @@ -408,8 +413,8 @@ void igt_spin_batch_free(int fd, igt_spin_t *spin) gem_munmap((void *)((unsigned long)spin->batch & (~4095UL)), BATCH_SIZE); - if (spin->running) { - gem_munmap(spin->running, 4096); + if (spin->poll) { + gem_munmap(spin->poll, 4096); gem_close(fd, spin->poll_handle); } diff --git a/lib/igt_dummyload.h b/lib/igt_dummyload.h index 73bd035b..3793bf7f 100644 --- a/lib/igt_dummyload.h +++ b/lib/igt_dummyload.h @@ -41,7 +41,8 @@ typedef struct igt_spin { struct drm_i915_gem_exec_object2 obj[2]; struct drm_i915_gem_execbuffer2 execbuf; uint32_t poll_handle; - bool *running; + uint32_t *poll; +#define SPIN_POLL_START_IDX 0 } igt_spin_t; struct igt_spin_factory { @@ -70,9 +71,19 @@ void igt_spin_batch_set_timeout(igt_spin_t *spin, int64_t ns); void igt_spin_batch_end(igt_spin_t *spin); void igt_spin_batch_free(int fd, igt_spin_t *spin); -static inline void igt_spin_busywait_until_running(igt_spin_t *spin) +static inline bool igt_spin_has_poll(const igt_spin_t *spin) { - while (!READ_ONCE(*spin->running)) + return spin->poll; +} + +static inline bool igt_spin_has_started(igt_spin_t *spin) +{ + return READ_ONCE(spin->poll[SPIN_POLL_START_IDX]); +} + +static inline void igt_spin_busywait_until_started(igt_spin_t *spin) +{ + while (!igt_spin_has_started(spin)) ; } diff --git a/tests/i915/gem_ctx_exec.c b/tests/i915/gem_ctx_exec.c index d67d0ec2..f37e6f28 100644 --- a/tests/i915/gem_ctx_exec.c +++ b/tests/i915/gem_ctx_exec.c @@ -181,10 +181,8 @@ static void 
norecovery(int i915) spin = __igt_spin_batch_new(i915, .ctx = param.ctx_id, .flags = IGT_SPIN_POLL_RUN); - igt_assert(spin->running); + igt_spin_busywait_until_started(spin); - while (!READ_ONCE(*spin->running)) - ; igt_force_gpu_reset(i915); igt_spin_batch_end(spin); diff --git a/tests/i915/gem_ctx_isolation.c b/tests/i915/gem_ctx_isolation.c index f1000458..bed71c2b 100644 --- a/tests/i915/gem_ctx_isolation.c +++ b/tests/i915/gem_ctx_isolation.c @@ -704,8 +704,8 @@ static void inject_reset_context(int fd, unsigned int engine) spin = __igt_spin_batch_factory(fd, &opts); - if (spin->running) - igt_spin_busywait_until_running(spin); + if (igt_spin_has_poll(spin)) + igt_spin_busywait_until_started(spin); else usleep(1000); /* better than nothing */ diff --git a/tests/i915/gem_eio.c b/tests/i915/gem_eio.c index 29250852..07bbdeb1 100644 --- a/tests/i915/gem_eio.c +++ b/tests/i915/gem_eio.c @@ -186,8 +186,8 @@ static igt_spin_t * __spin_poll(int fd, uint32_t ctx, unsigned long flags) static void __spin_wait(int fd, igt_spin_t *spin) { - if (spin->running) { - igt_spin_busywait_until_running(spin); + if (igt_spin_has_poll(spin)) { + igt_spin_busywait_until_started(spin); } else { igt_debug("__spin_wait - usleep mode\n"); usleep(500e3); /* Better than nothing! 
*/ diff --git a/tests/i915/gem_exec_latency.c b/tests/i915/gem_exec_latency.c index 39f441d2..fc1040c3 100644 --- a/tests/i915/gem_exec_latency.c +++ b/tests/i915/gem_exec_latency.c @@ -79,29 +79,29 @@ poll_ring(int fd, unsigned ring, const char *name) igt_require(gem_can_store_dword(fd, ring)); spin[0] = __igt_spin_batch_factory(fd, &opts); - igt_assert(spin[0]->running); + igt_assert(igt_spin_has_poll(spin[0])); cmd = *spin[0]->batch; spin[1] = __igt_spin_batch_factory(fd, &opts); - igt_assert(spin[1]->running); + igt_assert(igt_spin_has_poll(spin[1])); + igt_assert(cmd == *spin[1]->batch); igt_spin_batch_end(spin[0]); - while (!READ_ONCE(*spin[1]->running)) - ; + igt_spin_busywait_until_started(spin[1]); + igt_assert(!gem_bo_busy(fd, spin[0]->handle)); cycles = 0; while ((elapsed = igt_nsec_elapsed(&tv)) < 2ull << 30) { - unsigned int idx = cycles++ & 1; + const unsigned int idx = cycles++ & 1; *spin[idx]->batch = cmd; - *spin[idx]->running = 0; + spin[idx]->poll[SPIN_POLL_START_IDX] = 0; gem_execbuf(fd, &spin[idx]->execbuf); igt_spin_batch_end(spin[!idx]); - while (!READ_ONCE(*spin[idx]->running)) - ; + igt_spin_busywait_until_started(spin[idx]); } igt_info("%s completed %ld cycles: %.3f us\n", @@ -419,7 +419,7 @@ static void __rearm_spin_batch(igt_spin_t *spin) const uint32_t mi_arb_chk = 0x5 << 23; *spin->batch = mi_arb_chk; - *spin->running = 0; + spin->poll[SPIN_POLL_START_IDX] = 0; __sync_synchronize(); } @@ -441,7 +441,7 @@ struct rt_pkt { static bool __spin_wait(int fd, igt_spin_t *spin) { - while (!READ_ONCE(*spin->running)) { + while (!igt_spin_has_started(spin)) { if (!gem_bo_busy(fd, spin->handle)) return false; } @@ -537,7 +537,7 @@ rthog_latency_on_ring(int fd, unsigned int engine, const char *name, unsigned in passname[pass]); break; } - igt_spin_busywait_until_running(spin); + igt_spin_busywait_until_started(spin); igt_until_timeout(pass > 0 ? 
5 : 2) { struct timespec ts = { }; diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c index 6f3f52d2..718a1935 100644 --- a/tests/i915/gem_exec_schedule.c +++ b/tests/i915/gem_exec_schedule.c @@ -436,7 +436,7 @@ static void semaphore_codependency(int i915) .ctx = ctx, .engine = engine, .flags = IGT_SPIN_POLL_RUN); - igt_spin_busywait_until_running(task[i].xcs); + igt_spin_busywait_until_started(task[i].xcs); /* Common rcs tasks will be queued in FIFO */ task[i].rcs = @@ -1361,8 +1361,7 @@ static void measure_semaphore_power(int i915) .engine = signaler, .flags = IGT_SPIN_POLL_RUN); gem_wait(i915, spin->handle, &jiffie); /* waitboost */ - igt_assert(spin->running); - igt_spin_busywait_until_running(spin); + igt_spin_busywait_until_started(spin); gpu_power_read(&power, &s_spin[0]); usleep(100*1000); diff --git a/tests/i915/gem_sync.c b/tests/i915/gem_sync.c index 3e4feff3..0a0ed2a1 100644 --- a/tests/i915/gem_sync.c +++ b/tests/i915/gem_sync.c @@ -225,7 +225,7 @@ wakeup_ring(int fd, unsigned ring, int timeout, int wlen) .engine = execbuf.flags, .flags = (IGT_SPIN_POLL_RUN | IGT_SPIN_FAST)); - igt_assert(spin->running); + igt_assert(igt_spin_has_poll(spin)); cmd = *spin->batch; gem_execbuf(fd, &execbuf); @@ -239,10 +239,9 @@ wakeup_ring(int fd, unsigned ring, int timeout, int wlen) cycles = 0; do { *spin->batch = cmd; - *spin->running = 0; + spin->poll[SPIN_POLL_START_IDX] = 0; gem_execbuf(fd, &spin->execbuf); - while (!READ_ONCE(*spin->running)) - ; + igt_spin_busywait_until_started(spin); this = gettime(); igt_spin_batch_end(spin); @@ -264,10 +263,9 @@ wakeup_ring(int fd, unsigned ring, int timeout, int wlen) cycles = 0; do { *spin->batch = cmd; - *spin->running = 0; + spin->poll[SPIN_POLL_START_IDX] = 0; gem_execbuf(fd, &spin->execbuf); - while (!READ_ONCE(*spin->running)) - ; + igt_spin_busywait_until_started(spin); for (int n = 0; n < wlen; n++) gem_execbuf(fd, &execbuf); @@ -410,7 +408,7 @@ active_wakeup_ring(int fd, unsigned ring, int 
timeout, int wlen) .engine = execbuf.flags, .flags = (IGT_SPIN_POLL_RUN | IGT_SPIN_FAST)); - igt_assert(spin[0]->running); + igt_assert(igt_spin_has_poll(spin[0])); cmd = *spin[0]->batch; spin[1] = __igt_spin_batch_new(fd, @@ -426,18 +424,17 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen) for (int warmup = 0; warmup <= 1; warmup++) { *spin[0]->batch = cmd; - *spin[0]->running = 0; + spin[0]->poll[SPIN_POLL_START_IDX] = 0; gem_execbuf(fd, &spin[0]->execbuf); end = gettime() + timeout/10.; elapsed = 0; cycles = 0; do { - while (!READ_ONCE(*spin[0]->running)) - ; + igt_spin_busywait_until_started(spin[0]); *spin[1]->batch = cmd; - *spin[1]->running = 0; + spin[1]->poll[SPIN_POLL_START_IDX] = 0; gem_execbuf(fd, &spin[1]->execbuf); this = gettime(); @@ -458,21 +455,20 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen) cycles, elapsed*1e6/cycles); *spin[0]->batch = cmd; - *spin[0]->running = 0; + spin[0]->poll[SPIN_POLL_START_IDX] = 0; gem_execbuf(fd, &spin[0]->execbuf); end = gettime() + timeout; elapsed = 0; cycles = 0; do { - while (!READ_ONCE(*spin[0]->running)) - ; + igt_spin_busywait_until_started(spin[0]); for (int n = 0; n < wlen; n++) gem_execbuf(fd, &execbuf); *spin[1]->batch = cmd; - *spin[1]->running = 0; + spin[1]->poll[SPIN_POLL_START_IDX] = 0; gem_execbuf(fd, &spin[1]->execbuf); this = gettime(); diff --git a/tests/perf_pmu.c b/tests/perf_pmu.c index 4f552bc2..28f235b1 100644 --- a/tests/perf_pmu.c +++ b/tests/perf_pmu.c @@ -189,10 +189,10 @@ static unsigned long __spin_wait(int fd, igt_spin_t *spin) igt_nsec_elapsed(&start); - if (spin->running) { + if (igt_spin_has_poll(spin)) { unsigned long timeout = 0; - while (!READ_ONCE(*spin->running)) { + while (!igt_spin_has_started(spin)) { unsigned long t = igt_nsec_elapsed(&start); if ((t - timeout) > 250e6) { From patchwork Wed Apr 17 15:28:33 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika 
Kuoppala
X-Patchwork-Id: 10905549
From: Mika Kuoppala
To: intel-gfx@lists.freedesktop.org
Date: Wed, 17 Apr 2019 18:28:33 +0300
Message-Id: <20190417152834.12705-2-mika.kuoppala@linux.intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190417152834.12705-1-mika.kuoppala@linux.intel.com>
References: <20190417152834.12705-1-mika.kuoppala@linux.intel.com>
Subject: [Intel-gfx] [PATCH i-g-t 2/3] lib/igt_dummyload: Get rid of 'batch' on spinner accessors

There is no guarantee that spinners are, or will remain, implemented using
batches. Since we already have igt_spin_t, manipulate it consistently through
the igt_spin_* functions and hide its batch nature.
Cc: Chris Wilson Signed-off-by: Mika Kuoppala Reviewed-by: Chris Wilson --- lib/drmtest.c | 4 +- lib/igt_core.c | 4 +- lib/igt_dummyload.c | 48 +++++++------- lib/igt_dummyload.h | 20 +++--- lib/igt_gt.c | 10 +-- tests/i915/gem_busy.c | 34 +++++----- tests/i915/gem_ctx_exec.c | 10 +-- tests/i915/gem_ctx_isolation.c | 20 +++--- tests/i915/gem_eio.c | 18 +++--- tests/i915/gem_exec_fence.c | 32 +++++----- tests/i915/gem_exec_latency.c | 38 +++++------ tests/i915/gem_exec_nop.c | 8 +-- tests/i915/gem_exec_reloc.c | 26 ++++---- tests/i915/gem_exec_schedule.c | 92 +++++++++++++-------------- tests/i915/gem_exec_suspend.c | 4 +- tests/i915/gem_fenced_exec_thrash.c | 4 +- tests/i915/gem_mmap.c | 4 +- tests/i915/gem_mmap_gtt.c | 4 +- tests/i915/gem_mmap_wc.c | 4 +- tests/i915/gem_shrink.c | 10 +-- tests/i915/gem_spin_batch.c | 20 +++--- tests/i915/gem_sync.c | 74 +++++++++++----------- tests/i915/gem_wait.c | 12 ++-- tests/i915/i915_pm_rps.c | 26 ++++---- tests/kms_busy.c | 26 ++++---- tests/kms_cursor_legacy.c | 12 ++-- tests/perf_pmu.c | 98 ++++++++++++++--------------- 27 files changed, 331 insertions(+), 331 deletions(-) diff --git a/lib/drmtest.c b/lib/drmtest.c index d31ade3f..4a92fb5c 100644 --- a/lib/drmtest.c +++ b/lib/drmtest.c @@ -176,7 +176,7 @@ static const char *forced_driver(void) */ void gem_quiescent_gpu(int fd) { - igt_terminate_spin_batches(); + igt_terminate_spins(); igt_drop_caches_set(fd, DROP_ACTIVE | DROP_RETIRE | DROP_IDLE | DROP_FREED); @@ -314,7 +314,7 @@ static int at_exit_drm_render_fd = -1; static void __cancel_work_at_exit(int fd) { - igt_terminate_spin_batches(); /* for older kernels */ + igt_terminate_spins(); /* for older kernels */ igt_sysfs_set_parameter(fd, "reset", "%x", -1u /* any method */); igt_drop_caches_set(fd, diff --git a/lib/igt_core.c b/lib/igt_core.c index ae03e909..3141d923 100644 --- a/lib/igt_core.c +++ b/lib/igt_core.c @@ -1007,7 +1007,7 @@ static void exit_subtest(const char *result) fprintf(stderr, "Subtest %s: %s 
(%.3fs)\n", in_subtest, result, igt_time_elapsed(&subtest_time, &now)); - igt_terminate_spin_batches(); + igt_terminate_spins(); in_subtest = NULL; siglongjmp(igt_subtest_jmpbuf, 1); @@ -1915,7 +1915,7 @@ static void call_exit_handlers(int sig) { int i; - igt_terminate_spin_batches(); + igt_terminate_spins(); if (!exit_handler_count) { return; diff --git a/lib/igt_dummyload.c b/lib/igt_dummyload.c index 49b69737..b9d54450 100644 --- a/lib/igt_dummyload.c +++ b/lib/igt_dummyload.c @@ -272,7 +272,7 @@ emit_recursive_batch(igt_spin_t *spin, } static igt_spin_t * -spin_batch_create(int fd, const struct igt_spin_factory *opts) +spin_create(int fd, const struct igt_spin_factory *opts) { igt_spin_t *spin; @@ -289,25 +289,25 @@ spin_batch_create(int fd, const struct igt_spin_factory *opts) } igt_spin_t * -__igt_spin_batch_factory(int fd, const struct igt_spin_factory *opts) +__igt_spin_factory(int fd, const struct igt_spin_factory *opts) { - return spin_batch_create(fd, opts); + return spin_create(fd, opts); } /** - * igt_spin_batch_factory: + * igt_spin_factory: * @fd: open i915 drm file descriptor * @opts: controlling options such as context, engine, dependencies etc * * Start a recursive batch on a ring. Immediately returns a #igt_spin_t that * contains the batch's handle that can be waited upon. The returned structure - * must be passed to igt_spin_batch_free() for post-processing. + * must be passed to igt_spin_free() for post-processing. * * Returns: - * Structure with helper internal state for igt_spin_batch_free(). + * Structure with helper internal state for igt_spin_free(). 
*/ igt_spin_t * -igt_spin_batch_factory(int fd, const struct igt_spin_factory *opts) +igt_spin_factory(int fd, const struct igt_spin_factory *opts) { igt_spin_t *spin; @@ -319,7 +319,7 @@ igt_spin_batch_factory(int fd, const struct igt_spin_factory *opts) igt_require(gem_can_store_dword(fd, opts->engine)); } - spin = spin_batch_create(fd, opts); + spin = spin_create(fd, opts); igt_assert(gem_bo_busy(fd, spin->handle)); if (opts->flags & IGT_SPIN_FENCE_OUT) { @@ -335,19 +335,19 @@ static void notify(union sigval arg) { igt_spin_t *spin = arg.sival_ptr; - igt_spin_batch_end(spin); + igt_spin_end(spin); } /** - * igt_spin_batch_set_timeout: - * @spin: spin batch state from igt_spin_batch_new() + * igt_spin_set_timeout: + * @spin: spin state from igt_spin_new() * @ns: amount of time in nanoseconds the batch continues to execute * before finishing. * * Specify a timeout. This ends the recursive batch associated with @spin after * the timeout has elapsed. */ -void igt_spin_batch_set_timeout(igt_spin_t *spin, int64_t ns) +void igt_spin_set_timeout(igt_spin_t *spin, int64_t ns) { timer_t timer; struct sigevent sev; @@ -375,12 +375,12 @@ void igt_spin_batch_set_timeout(igt_spin_t *spin, int64_t ns) } /** - * igt_spin_batch_end: - * @spin: spin batch state from igt_spin_batch_new() + * igt_spin_end: + * @spin: spin state from igt_spin_new() * - * End the recursive batch associated with @spin manually. + * End the spinner associated with @spin manually. */ -void igt_spin_batch_end(igt_spin_t *spin) +void igt_spin_end(igt_spin_t *spin) { if (!spin) return; @@ -390,14 +390,14 @@ void igt_spin_batch_end(igt_spin_t *spin) } /** - * igt_spin_batch_free: + * igt_spin_free: * @fd: open i915 drm file descriptor - * @spin: spin batch state from igt_spin_batch_new() + * @spin: spin state from igt_spin_new() * - * This function does the necessary post-processing after starting a recursive - * batch with igt_spin_batch_new(). 
+ * This function does the necessary post-processing after starting a + * spin with igt_spin_new() and then frees it. */ -void igt_spin_batch_free(int fd, igt_spin_t *spin) +void igt_spin_free(int fd, igt_spin_t *spin) { if (!spin) return; @@ -409,7 +409,7 @@ void igt_spin_batch_free(int fd, igt_spin_t *spin) if (spin->timer) timer_delete(spin->timer); - igt_spin_batch_end(spin); + igt_spin_end(spin); gem_munmap((void *)((unsigned long)spin->batch & (~4095UL)), BATCH_SIZE); @@ -426,13 +426,13 @@ void igt_spin_batch_free(int fd, igt_spin_t *spin) free(spin); } -void igt_terminate_spin_batches(void) +void igt_terminate_spins(void) { struct igt_spin *iter; pthread_mutex_lock(&list_lock); igt_list_for_each(iter, &spin_list, link) - igt_spin_batch_end(iter); + igt_spin_end(iter); pthread_mutex_unlock(&list_lock); } diff --git a/lib/igt_dummyload.h b/lib/igt_dummyload.h index 3793bf7f..d6482089 100644 --- a/lib/igt_dummyload.h +++ b/lib/igt_dummyload.h @@ -58,18 +58,18 @@ struct igt_spin_factory { #define IGT_SPIN_NO_PREEMPTION (1 << 3) igt_spin_t * -__igt_spin_batch_factory(int fd, const struct igt_spin_factory *opts); +__igt_spin_factory(int fd, const struct igt_spin_factory *opts); igt_spin_t * -igt_spin_batch_factory(int fd, const struct igt_spin_factory *opts); +igt_spin_factory(int fd, const struct igt_spin_factory *opts); -#define __igt_spin_batch_new(fd, ...) \ - __igt_spin_batch_factory(fd, &((struct igt_spin_factory){__VA_ARGS__})) -#define igt_spin_batch_new(fd, ...) \ - igt_spin_batch_factory(fd, &((struct igt_spin_factory){__VA_ARGS__})) +#define __igt_spin_new(fd, ...) \ + __igt_spin_factory(fd, &((struct igt_spin_factory){__VA_ARGS__})) +#define igt_spin_new(fd, ...) 
\ + igt_spin_factory(fd, &((struct igt_spin_factory){__VA_ARGS__})) -void igt_spin_batch_set_timeout(igt_spin_t *spin, int64_t ns); -void igt_spin_batch_end(igt_spin_t *spin); -void igt_spin_batch_free(int fd, igt_spin_t *spin); +void igt_spin_set_timeout(igt_spin_t *spin, int64_t ns); +void igt_spin_end(igt_spin_t *spin); +void igt_spin_free(int fd, igt_spin_t *spin); static inline bool igt_spin_has_poll(const igt_spin_t *spin) { @@ -87,7 +87,7 @@ static inline void igt_spin_busywait_until_started(igt_spin_t *spin) ; } -void igt_terminate_spin_batches(void); +void igt_terminate_spins(void); enum igt_cork_type { CORK_SYNC_FD = 1, diff --git a/lib/igt_gt.c b/lib/igt_gt.c index 59995243..a2eaadf5 100644 --- a/lib/igt_gt.c +++ b/lib/igt_gt.c @@ -294,10 +294,10 @@ igt_hang_t igt_hang_ctx(int fd, uint32_t ctx, int ring, unsigned flags) if ((flags & HANG_ALLOW_BAN) == 0) context_set_ban(fd, ctx, 0); - spin = __igt_spin_batch_new(fd, - .ctx = ctx, - .engine = ring, - .flags = IGT_SPIN_NO_PREEMPTION); + spin = __igt_spin_new(fd, + .ctx = ctx, + .engine = ring, + .flags = IGT_SPIN_NO_PREEMPTION); return (igt_hang_t){ spin, ctx, ban, flags }; } @@ -333,7 +333,7 @@ void igt_post_hang_ring(int fd, igt_hang_t arg) return; gem_sync(fd, arg.spin->handle); /* Wait until it hangs */ - igt_spin_batch_free(fd, arg.spin); + igt_spin_free(fd, arg.spin); context_set_ban(fd, arg.ctx, arg.ban); diff --git a/tests/i915/gem_busy.c b/tests/i915/gem_busy.c index ad853468..c120faf1 100644 --- a/tests/i915/gem_busy.c +++ b/tests/i915/gem_busy.c @@ -128,9 +128,9 @@ static void semaphore(int fd, unsigned ring, uint32_t flags) /* Create a long running batch which we can use to hog the GPU */ handle[BUSY] = gem_create(fd, 4096); - spin = igt_spin_batch_new(fd, - .engine = ring, - .dependency = handle[BUSY]); + spin = igt_spin_new(fd, + .engine = ring, + .dependency = handle[BUSY]); /* Queue a batch after the busy, it should block and remain "busy" */ igt_assert(exec_noop(fd, handle, ring | flags, 
false)); @@ -159,7 +159,7 @@ static void semaphore(int fd, unsigned ring, uint32_t flags) /* Check that our long batch was long enough */ igt_assert(still_busy(fd, handle[BUSY])); - igt_spin_batch_free(fd, spin); + igt_spin_free(fd, spin); /* And make sure it becomes idle again */ gem_sync(fd, handle[TEST]); @@ -379,16 +379,16 @@ static void close_race(int fd) igt_assert(sched_setscheduler(getpid(), SCHED_RR, &rt) == 0); for (i = 0; i < nhandles; i++) { - spin[i] = __igt_spin_batch_new(fd, - .engine = engines[rand() % nengine]); + spin[i] = __igt_spin_new(fd, + .engine = engines[rand() % nengine]); handles[i] = spin[i]->handle; } igt_until_timeout(20) { for (i = 0; i < nhandles; i++) { - igt_spin_batch_free(fd, spin[i]); - spin[i] = __igt_spin_batch_new(fd, - .engine = engines[rand() % nengine]); + igt_spin_free(fd, spin[i]); + spin[i] = __igt_spin_new(fd, + .engine = engines[rand() % nengine]); handles[i] = spin[i]->handle; __sync_synchronize(); } @@ -398,7 +398,7 @@ static void close_race(int fd) __sync_synchronize(); for (i = 0; i < nhandles; i++) - igt_spin_batch_free(fd, spin[i]); + igt_spin_free(fd, spin[i]); } igt_waitchildren(); @@ -430,11 +430,11 @@ static bool has_semaphores(int fd) static bool has_extended_busy_ioctl(int fd) { - igt_spin_t *spin = igt_spin_batch_new(fd, .engine = I915_EXEC_RENDER); + igt_spin_t *spin = igt_spin_new(fd, .engine = I915_EXEC_RENDER); uint32_t read, write; __gem_busy(fd, spin->handle, &read, &write); - igt_spin_batch_free(fd, spin); + igt_spin_free(fd, spin); return read != 0; } @@ -442,9 +442,9 @@ static bool has_extended_busy_ioctl(int fd) static void basic(int fd, unsigned ring, unsigned flags) { igt_spin_t *spin = - igt_spin_batch_new(fd, - .engine = ring, - .flags = IGT_SPIN_NO_PREEMPTION); + igt_spin_new(fd, + .engine = ring, + .flags = IGT_SPIN_NO_PREEMPTION); struct timespec tv; int timeout; bool busy; @@ -453,7 +453,7 @@ static void basic(int fd, unsigned ring, unsigned flags) timeout = 120; if ((flags & HANG) == 0) 
 {
-		igt_spin_batch_end(spin);
+		igt_spin_end(spin);
 		timeout = 1;
 	}
@@ -470,7 +470,7 @@ static void basic(int fd, unsigned ring, unsigned flags)
 		}
 	}
 
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 }
 
 igt_main
diff --git a/tests/i915/gem_ctx_exec.c b/tests/i915/gem_ctx_exec.c
index f37e6f28..b8e0e074 100644
--- a/tests/i915/gem_ctx_exec.c
+++ b/tests/i915/gem_ctx_exec.c
@@ -178,16 +178,16 @@ static void norecovery(int i915)
 		gem_context_get_param(i915, &param);
 		igt_assert_eq(param.value, pass);
 
-		spin = __igt_spin_batch_new(i915,
-					    .ctx = param.ctx_id,
-					    .flags = IGT_SPIN_POLL_RUN);
+		spin = __igt_spin_new(i915,
+				      .ctx = param.ctx_id,
+				      .flags = IGT_SPIN_POLL_RUN);
 		igt_spin_busywait_until_started(spin);
 
 		igt_force_gpu_reset(i915);
 
-		igt_spin_batch_end(spin);
+		igt_spin_end(spin);
 		igt_assert_eq(__gem_execbuf(i915, &spin->execbuf), expect);
-		igt_spin_batch_free(i915, spin);
+		igt_spin_free(i915, spin);
 
 		gem_context_destroy(i915, param.ctx_id);
 	}
diff --git a/tests/i915/gem_ctx_isolation.c b/tests/i915/gem_ctx_isolation.c
index bed71c2b..bcd0f481 100644
--- a/tests/i915/gem_ctx_isolation.c
+++ b/tests/i915/gem_ctx_isolation.c
@@ -578,7 +578,7 @@ static void nonpriv(int fd,
 		tmpl_regs(fd, ctx, e, tmpl, values[v]);
 
-		spin = igt_spin_batch_new(fd, .ctx = ctx, .engine = engine);
+		spin = igt_spin_new(fd, .ctx = ctx, .engine = engine);
 
 		igt_debug("%s[%d]: Setting all registers to 0x%08x\n",
 			  __func__, v, values[v]);
@@ -592,7 +592,7 @@ static void nonpriv(int fd,
 		 */
 		restore_regs(fd, ctx, e, flags, regs[0]);
 
-		igt_spin_batch_free(fd, spin);
+		igt_spin_free(fd, spin);
 
 		compare_regs(fd, tmpl, regs[1], "nonpriv read/writes");
 
@@ -631,7 +631,7 @@ static void isolation(int fd,
 	ctx[0] = gem_context_create(fd);
 	regs[0] = read_regs(fd, ctx[0], e, flags);
 
-	spin = igt_spin_batch_new(fd, .ctx = ctx[0], .engine = engine);
+	spin = igt_spin_new(fd, .ctx = ctx[0], .engine = engine);
 
 	if (flags & DIRTY1) {
 		igt_debug("%s[%d]: Setting all registers of ctx 0 to 0x%08x\n",
@@ -663,7 +663,7 @@ static void isolation(int fd,
 	tmp = read_regs(fd, ctx[0], e, flags);
 	restore_regs(fd, ctx[0], e, flags, regs[0]);
 
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 
 	if (!(flags & DIRTY1))
 		compare_regs(fd, regs[0], tmp, "two reads of the same ctx");
@@ -702,7 +702,7 @@ static void inject_reset_context(int fd, unsigned int engine)
 	if (gem_can_store_dword(fd, engine))
 		opts.flags |= IGT_SPIN_POLL_RUN;
 
-	spin = __igt_spin_batch_factory(fd, &opts);
+	spin = __igt_spin_factory(fd, &opts);
 
 	if (igt_spin_has_poll(spin))
 		igt_spin_busywait_until_started(spin);
@@ -711,7 +711,7 @@ static void inject_reset_context(int fd, unsigned int engine)
 
 	igt_force_gpu_reset(fd);
 
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 	gem_context_destroy(fd, opts.ctx);
 }
 
@@ -738,7 +738,7 @@ static void preservation(int fd,
 	gem_quiescent_gpu(fd);
 
 	ctx[num_values] = gem_context_create(fd);
-	spin = igt_spin_batch_new(fd, .ctx = ctx[num_values], .engine = engine);
+	spin = igt_spin_new(fd, .ctx = ctx[num_values], .engine = engine);
 	regs[num_values][0] = read_regs(fd, ctx[num_values], e, flags);
 	for (int v = 0; v < num_values; v++) {
 		ctx[v] = gem_context_create(fd);
@@ -748,7 +748,7 @@ static void preservation(int fd,
 	}
 
 	gem_close(fd, read_regs(fd, ctx[num_values], e, flags));
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 
 	if (flags & RESET)
 		inject_reset_context(fd, engine);
@@ -778,11 +778,11 @@ static void preservation(int fd,
 		break;
 	}
 
-	spin = igt_spin_batch_new(fd, .ctx = ctx[num_values], .engine = engine);
+	spin = igt_spin_new(fd, .ctx = ctx[num_values], .engine = engine);
 	for (int v = 0; v < num_values; v++)
 		regs[v][1] = read_regs(fd, ctx[v], e, flags);
 	regs[num_values][1] = read_regs(fd, ctx[num_values], e, flags);
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 
 	for (int v = 0; v < num_values; v++) {
 		char buf[80];
diff --git a/tests/i915/gem_eio.c b/tests/i915/gem_eio.c
index 07bbdeb1..5396a04e 100644
--- a/tests/i915/gem_eio.c
+++ b/tests/i915/gem_eio.c
@@ -181,7 +181,7 @@ static igt_spin_t * __spin_poll(int fd, uint32_t ctx, unsigned long flags)
 	if (gem_can_store_dword(fd, opts.engine))
 		opts.flags |= IGT_SPIN_POLL_RUN;
 
-	return __igt_spin_batch_factory(fd, &opts);
+	return __igt_spin_factory(fd, &opts);
 }
 
 static void __spin_wait(int fd, igt_spin_t *spin)
@@ -346,7 +346,7 @@ static void __test_banned(int fd)
 		/* Trigger a reset, making sure we are detected as guilty */
 		hang = spin_sync(fd, 0, 0);
 		trigger_reset(fd);
-		igt_spin_batch_free(fd, hang);
+		igt_spin_free(fd, hang);
 
 		count++;
 	}
@@ -386,7 +386,7 @@ static void test_wait(int fd, unsigned int flags, unsigned int wait)
 
 	check_wait(fd, hang->handle, wait, NULL);
 
-	igt_spin_batch_free(fd, hang);
+	igt_spin_free(fd, hang);
 
 	igt_require(i915_reset_control(true));
 
@@ -466,7 +466,7 @@ static void test_inflight(int fd, unsigned int wait)
 		close(fence[n]);
 	}
 
-	igt_spin_batch_free(fd, hang);
+	igt_spin_free(fd, hang);
 	igt_assert(i915_reset_control(true));
 	trigger_reset(fd);
 
@@ -522,7 +522,7 @@ static void test_inflight_suspend(int fd)
 		close(fence[n]);
 	}
 
-	igt_spin_batch_free(fd, hang);
+	igt_spin_free(fd, hang);
 	igt_assert(i915_reset_control(true));
 	trigger_reset(fd);
 	close(fd);
@@ -600,7 +600,7 @@ static void test_inflight_contexts(int fd, unsigned int wait)
 		close(fence[n]);
 	}
 
-	igt_spin_batch_free(fd, hang);
+	igt_spin_free(fd, hang);
 	gem_close(fd, obj[1].handle);
 	igt_assert(i915_reset_control(true));
 	trigger_reset(fd);
@@ -660,7 +660,7 @@ static void test_inflight_external(int fd)
 	igt_assert_eq(sync_fence_status(fence), -EIO);
 	close(fence);
 
-	igt_spin_batch_free(fd, hang);
+	igt_spin_free(fd, hang);
 	igt_assert(i915_reset_control(true));
 	trigger_reset(fd);
 	close(fd);
@@ -709,7 +709,7 @@ static void test_inflight_internal(int fd, unsigned int wait)
 		close(fences[nfence]);
 	}
 
-	igt_spin_batch_free(fd, hang);
+	igt_spin_free(fd, hang);
 	igt_assert(i915_reset_control(true));
 	trigger_reset(fd);
 	close(fd);
@@ -779,7 +779,7 @@ static void reset_stress(int fd,
 		gem_execbuf(fd, &execbuf);
 		gem_sync(fd, obj.handle);
 
-		igt_spin_batch_free(fd, hang);
+		igt_spin_free(fd, hang);
 		gem_context_destroy(fd, ctx);
 	}
 	check_wait_elapsed(fd, &stats);
diff --git a/tests/i915/gem_exec_fence.c b/tests/i915/gem_exec_fence.c
index ba46595d..8120f8b5 100644
--- a/tests/i915/gem_exec_fence.c
+++ b/tests/i915/gem_exec_fence.c
@@ -468,7 +468,7 @@ static void test_parallel(int fd, unsigned int master)
 	/* Fill the queue with many requests so that the next one has to
 	 * wait before it can be executed by the hardware.
 	 */
-	spin = igt_spin_batch_new(fd, .engine = master, .dependency = plug);
+	spin = igt_spin_new(fd, .engine = master, .dependency = plug);
 	resubmit(fd, spin->handle, master, 16);
 
 	/* Now queue the master request and its secondaries */
@@ -588,7 +588,7 @@ static void test_parallel(int fd, unsigned int master)
 	/* Unblock the master */
 	igt_cork_unplug(&c);
 	gem_close(fd, plug);
-	igt_spin_batch_end(spin);
+	igt_spin_end(spin);
 
 	/* Wait for all secondaries to complete. If we used a regular fence
 	 * then the secondaries would not start until the master was complete.
@@ -651,7 +651,7 @@ static void test_keep_in_fence(int fd, unsigned int engine, unsigned int flags)
 	igt_spin_t *spin;
 	int fence;
 
-	spin = igt_spin_batch_new(fd, .engine = engine);
+	spin = igt_spin_new(fd, .engine = engine);
 
 	gem_execbuf_wr(fd, &execbuf);
 	fence = upper_32_bits(execbuf.rsvd2);
@@ -698,7 +698,7 @@ static void test_keep_in_fence(int fd, unsigned int engine, unsigned int flags)
 	gem_close(fd, obj.handle);
 	close(fence);
 
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 	gem_quiescent_gpu(fd);
 }
 
@@ -1070,7 +1070,7 @@ static void test_syncobj_unused_fence(int fd)
 	struct local_gem_exec_fence fence = {
 		.handle = syncobj_create(fd),
 	};
-	igt_spin_t *spin = igt_spin_batch_new(fd);
+	igt_spin_t *spin = igt_spin_new(fd);
 
 	/* sanity check our syncobj_to_sync_file interface */
 	igt_assert_eq(__syncobj_to_sync_file(fd, 0), -ENOENT);
@@ -1095,7 +1095,7 @@ static void test_syncobj_unused_fence(int fd)
 	gem_close(fd, obj.handle);
 	syncobj_destroy(fd, fence.handle);
 
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 }
 
 static void test_syncobj_invalid_wait(int fd)
@@ -1162,7 +1162,7 @@ static void test_syncobj_signal(int fd)
 	struct local_gem_exec_fence fence = {
 		.handle = syncobj_create(fd),
 	};
-	igt_spin_t *spin = igt_spin_batch_new(fd);
+	igt_spin_t *spin = igt_spin_new(fd);
 
 	/* Check that the syncobj is signaled only when our request/fence is */
 
@@ -1183,7 +1183,7 @@ static void test_syncobj_signal(int fd)
 	igt_assert(gem_bo_busy(fd, obj.handle));
 	igt_assert(syncobj_busy(fd, fence.handle));
 
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 
 	gem_sync(fd, obj.handle);
 	igt_assert(!gem_bo_busy(fd, obj.handle));
@@ -1212,7 +1212,7 @@ static void test_syncobj_wait(int fd)
 
 	gem_quiescent_gpu(fd);
 
-	spin = igt_spin_batch_new(fd);
+	spin = igt_spin_new(fd);
 
 	memset(&execbuf, 0, sizeof(execbuf));
 	execbuf.buffers_ptr = to_user_pointer(&obj);
@@ -1265,7 +1265,7 @@ static void test_syncobj_wait(int fd)
 	for (int i = 0; i < n; i++)
 		igt_assert(gem_bo_busy(fd, handle[i]));
 
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 
 	for (int i = 0; i < n; i++) {
 		gem_sync(fd, handle[i]);
@@ -1282,7 +1282,7 @@ static void test_syncobj_export(int fd)
 		.handle = syncobj_create(fd),
 	};
 	int export[2];
-	igt_spin_t *spin = igt_spin_batch_new(fd);
+	igt_spin_t *spin = igt_spin_new(fd);
 
 	/* Check that if we export the syncobj prior to use it picks up
 	 * the later fence. This allows a syncobj to establish a channel
@@ -1315,7 +1315,7 @@ static void test_syncobj_export(int fd)
 		syncobj_destroy(fd, import);
 	}
 
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 
 	gem_sync(fd, obj.handle);
 	igt_assert(!gem_bo_busy(fd, obj.handle));
@@ -1340,7 +1340,7 @@ static void test_syncobj_repeat(int fd)
 	struct drm_i915_gem_execbuffer2 execbuf;
 	struct local_gem_exec_fence *fence;
 	int export;
-	igt_spin_t *spin = igt_spin_batch_new(fd);
+	igt_spin_t *spin = igt_spin_new(fd);
 
 	/* Check that we can wait on the same fence multiple times */
 	fence = calloc(nfences, sizeof(*fence));
@@ -1378,7 +1378,7 @@ static void test_syncobj_repeat(int fd)
 		igt_assert(syncobj_busy(fd, fence[i].handle));
 	igt_assert(gem_bo_busy(fd, obj.handle));
 
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 
 	gem_sync(fd, obj.handle);
 	gem_close(fd, obj.handle);
@@ -1395,7 +1395,7 @@ static void test_syncobj_import(int fd)
 	const uint32_t bbe = MI_BATCH_BUFFER_END;
 	struct drm_i915_gem_exec_object2 obj;
 	struct drm_i915_gem_execbuffer2 execbuf;
-	igt_spin_t *spin = igt_spin_batch_new(fd);
+	igt_spin_t *spin = igt_spin_new(fd);
 	uint32_t sync = syncobj_create(fd);
 	int fence;
 
@@ -1423,7 +1423,7 @@ static void test_syncobj_import(int fd)
 	igt_assert(gem_bo_busy(fd, obj.handle));
 	igt_assert(syncobj_busy(fd, sync));
 
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 
 	gem_sync(fd, obj.handle);
 	igt_assert(!gem_bo_busy(fd, obj.handle));
diff --git a/tests/i915/gem_exec_latency.c b/tests/i915/gem_exec_latency.c
index fc1040c3..6b7dfbc0 100644
--- a/tests/i915/gem_exec_latency.c
+++ b/tests/i915/gem_exec_latency.c
@@ -78,16 +78,16 @@ poll_ring(int fd, unsigned ring, const char *name)
 	gem_require_ring(fd, ring);
 	igt_require(gem_can_store_dword(fd, ring));
 
-	spin[0] = __igt_spin_batch_factory(fd, &opts);
+	spin[0] = __igt_spin_factory(fd, &opts);
 	igt_assert(igt_spin_has_poll(spin[0]));
 	cmd = *spin[0]->batch;
 
-	spin[1] = __igt_spin_batch_factory(fd, &opts);
+	spin[1] = __igt_spin_factory(fd, &opts);
 	igt_assert(igt_spin_has_poll(spin[1]));
 	igt_assert(cmd == *spin[1]->batch);
 
-	igt_spin_batch_end(spin[0]);
+	igt_spin_end(spin[0]);
 	igt_spin_busywait_until_started(spin[1]);
 
 	igt_assert(!gem_bo_busy(fd, spin[0]->handle));
@@ -100,15 +100,15 @@ poll_ring(int fd, unsigned ring, const char *name)
 		spin[idx]->poll[SPIN_POLL_START_IDX] = 0;
 		gem_execbuf(fd, &spin[idx]->execbuf);
 
-		igt_spin_batch_end(spin[!idx]);
+		igt_spin_end(spin[!idx]);
 		igt_spin_busywait_until_started(spin[idx]);
 	}
 
 	igt_info("%s completed %ld cycles: %.3f us\n",
 		 name, cycles, elapsed*1e-3/cycles);
 
-	igt_spin_batch_free(fd, spin[1]);
-	igt_spin_batch_free(fd, spin[0]);
+	igt_spin_free(fd, spin[1]);
+	igt_spin_free(fd, spin[0]);
 }
 
 #define RCS_TIMESTAMP (0x2000 + 0x358)
@@ -192,7 +192,7 @@ static void latency_on_ring(int fd,
 	}
 
 	if (flags & LIVE)
-		spin = igt_spin_batch_new(fd, .engine = ring);
+		spin = igt_spin_new(fd, .engine = ring);
 
 	start = *reg;
 	for (j = 0; j < repeats; j++) {
@@ -209,7 +209,7 @@ static void latency_on_ring(int fd,
 	end = *reg;
 	igt_assert(reloc.presumed_offset == obj[1].offset);
 
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 
 	if (flags & CORK)
 		igt_cork_unplug(&c);
@@ -324,9 +324,9 @@ static void latency_from_ring(int fd,
 		       I915_GEM_DOMAIN_GTT);
 
 	if (flags & PREEMPT)
-		spin = __igt_spin_batch_new(fd,
-					    .ctx = ctx[0],
-					    .engine = ring);
+		spin = __igt_spin_new(fd,
+				      .ctx = ctx[0],
+				      .engine = ring);
 
 	if (flags & CORK) {
 		obj[0].handle = igt_cork_plug(&c, fd);
@@ -393,7 +393,7 @@ static void latency_from_ring(int fd,
 		gem_set_domain(fd, obj[1].handle,
 			       I915_GEM_DOMAIN_GTT, I915_GEM_DOMAIN_GTT);
 
-		igt_spin_batch_free(fd, spin);
+		igt_spin_free(fd, spin);
 
 		igt_info("%s-%s delay: %.2fns\n",
 			 name, e__->name,
@@ -414,7 +414,7 @@ static void latency_from_ring(int fd,
 	}
 }
 
-static void __rearm_spin_batch(igt_spin_t *spin)
+static void __rearm_spin(igt_spin_t *spin)
 {
 	const uint32_t mi_arb_chk = 0x5 << 23;
 
@@ -424,7 +424,7 @@ static void __rearm_spin_batch(igt_spin_t *spin)
 }
 
 static void
-__submit_spin_batch(int fd, igt_spin_t *spin, unsigned int flags)
+__submit_spin(int fd, igt_spin_t *spin, unsigned int flags)
 {
 	struct drm_i915_gem_execbuffer2 eb = spin->execbuf;
 
@@ -531,7 +531,7 @@ rthog_latency_on_ring(int fd, unsigned int engine, const char *name, unsigned in
 			usleep(250);
 
-			spin = __igt_spin_batch_factory(fd, &opts);
+			spin = __igt_spin_factory(fd, &opts);
 			if (!spin) {
 				igt_warn("Failed to create spinner! (%s)\n",
 					 passname[pass]);
@@ -543,7 +543,7 @@ rthog_latency_on_ring(int fd, unsigned int engine, const char *name, unsigned in
 				struct timespec ts = { };
 				double t;
 
-				igt_spin_batch_end(spin);
+				igt_spin_end(spin);
 				gem_sync(fd, spin->handle);
 				if (flags & RTIDLE)
 					igt_drop_caches_set(fd, DROP_IDLE);
@@ -557,10 +557,10 @@ rthog_latency_on_ring(int fd, unsigned int engine, const char *name, unsigned in
 				if (nengine > 1)
 					usleep(10*nengine);
 
-				__rearm_spin_batch(spin);
+				__rearm_spin(spin);
 
 				igt_nsec_elapsed(&ts);
-				__submit_spin_batch(fd, spin, engine);
+				__submit_spin(fd, spin, engine);
 				if (!__spin_wait(fd, spin)) {
 					igt_warn("Wait timeout! (%s)\n",
 						 passname[pass]);
@@ -576,7 +576,7 @@ rthog_latency_on_ring(int fd, unsigned int engine, const char *name, unsigned in
 				igt_mean_add(&mean, t);
 			}
 
-			igt_spin_batch_free(fd, spin);
+			igt_spin_free(fd, spin);
 
 			igt_info("%8s %10s: mean=%.2fus stddev=%.3fus [%.2fus, %.2fus] (n=%lu)\n",
 				 names[child],
diff --git a/tests/i915/gem_exec_nop.c b/tests/i915/gem_exec_nop.c
index b91b4d0f..8922685a 100644
--- a/tests/i915/gem_exec_nop.c
+++ b/tests/i915/gem_exec_nop.c
@@ -823,14 +823,14 @@ static void preempt(int fd, uint32_t handle,
 	clock_gettime(CLOCK_MONOTONIC, &start);
 	do {
 		igt_spin_t *spin =
-			__igt_spin_batch_new(fd,
-					     .ctx = ctx[0],
-					     .engine = ring_id);
+			__igt_spin_new(fd,
+				       .ctx = ctx[0],
+				       .engine = ring_id);
 
 		for (int loop = 0; loop < 1024; loop++)
 			gem_execbuf(fd, &execbuf);
 
-		igt_spin_batch_free(fd, spin);
+		igt_spin_free(fd, spin);
 
 		count += 1024;
 		clock_gettime(CLOCK_MONOTONIC, &now);
diff --git a/tests/i915/gem_exec_reloc.c b/tests/i915/gem_exec_reloc.c
index 837f60a6..fdd9661d 100644
--- a/tests/i915/gem_exec_reloc.c
+++ b/tests/i915/gem_exec_reloc.c
@@ -388,11 +388,11 @@ static void basic_reloc(int fd, unsigned before, unsigned after, unsigned flags)
 	}
 
 	if (flags & ACTIVE) {
-		spin = igt_spin_batch_new(fd,
-					  .engine = I915_EXEC_DEFAULT,
-					  .dependency = obj.handle);
+		spin = igt_spin_new(fd,
+				    .engine = I915_EXEC_DEFAULT,
+				    .dependency = obj.handle);
 		if (!(flags & HANG))
-			igt_spin_batch_set_timeout(spin, NSEC_PER_SEC/100);
+			igt_spin_set_timeout(spin, NSEC_PER_SEC/100);
 		igt_assert(gem_bo_busy(fd, obj.handle));
 	}
 
@@ -424,7 +424,7 @@ static void basic_reloc(int fd, unsigned before, unsigned after, unsigned flags)
 	igt_assert_eq_u64(reloc.presumed_offset, offset);
 	igt_assert_eq_u64(obj.offset, offset);
 
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 
 	/* Simulate relocation */
 	if (flags & NORELOC) {
@@ -456,11 +456,11 @@ static void basic_reloc(int fd, unsigned before, unsigned after, unsigned flags)
 	}
 
 	if (flags & ACTIVE) {
-		spin = igt_spin_batch_new(fd,
-					  .engine = I915_EXEC_DEFAULT,
-					  .dependency = obj.handle);
+		spin = igt_spin_new(fd,
+				    .engine = I915_EXEC_DEFAULT,
+				    .dependency = obj.handle);
 		if (!(flags & HANG))
-			igt_spin_batch_set_timeout(spin, NSEC_PER_SEC/100);
+			igt_spin_set_timeout(spin, NSEC_PER_SEC/100);
 		igt_assert(gem_bo_busy(fd, obj.handle));
 	}
 
@@ -492,7 +492,7 @@ static void basic_reloc(int fd, unsigned before, unsigned after, unsigned flags)
 	igt_assert_eq_u64(reloc.presumed_offset, offset);
 	igt_assert_eq_u64(obj.offset, offset);
 
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 
 	if (trash)
 		gem_close(fd, trash);
 }
@@ -585,14 +585,14 @@ static void basic_range(int fd, unsigned flags)
 	execbuf.buffer_count = n + 1;
 
 	if (flags & ACTIVE) {
-		spin = igt_spin_batch_new(fd, .dependency = obj[n].handle);
+		spin = igt_spin_new(fd, .dependency = obj[n].handle);
 		if (!(flags & HANG))
-			igt_spin_batch_set_timeout(spin, NSEC_PER_SEC/100);
+			igt_spin_set_timeout(spin, NSEC_PER_SEC/100);
 		igt_assert(gem_bo_busy(fd, obj[n].handle));
 	}
 
 	gem_execbuf(fd, &execbuf);
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 
 	for (int i = 0; i < n; i++) {
 		uint64_t offset;
diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
index 718a1935..9a079528 100644
--- a/tests/i915/gem_exec_schedule.c
+++ b/tests/i915/gem_exec_schedule.c
@@ -161,7 +161,7 @@ static void unplug_show_queue(int fd, struct igt_cork *c, unsigned int engine)
 			.ctx = create_highest_priority(fd),
 			.engine = engine,
 		};
-		spin[n] = __igt_spin_batch_factory(fd, &opts);
+		spin[n] = __igt_spin_factory(fd, &opts);
 		gem_context_destroy(fd, opts.ctx);
 	}
 
@@ -169,7 +169,7 @@ static void unplug_show_queue(int fd, struct igt_cork *c, unsigned int engine)
 	igt_debugfs_dump(fd, "i915_engine_info");
 
 	for (int n = 0; n < ARRAY_SIZE(spin); n++)
-		igt_spin_batch_free(fd, spin[n]);
+		igt_spin_free(fd, spin[n]);
 }
 
@@ -221,7 +221,7 @@ static void independent(int fd, unsigned int engine)
 			continue;
 
 		if (spin == NULL) {
-			spin = __igt_spin_batch_new(fd, .engine = other);
+			spin = __igt_spin_new(fd, .engine = other);
 		} else {
 			struct drm_i915_gem_execbuffer2 eb = {
 				.buffer_count = 1,
@@ -250,7 +250,7 @@ static void independent(int fd, unsigned int engine)
 	igt_assert(gem_bo_busy(fd, scratch));
 	igt_assert_eq(ptr[0], engine);
 
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 	gem_quiescent_gpu(fd);
 
 	/* And we expect the others to have overwritten us, order unspecified */
@@ -358,9 +358,9 @@ static void semaphore_userlock(int i915)
 	scratch = gem_create(i915, 4096);
 	for_each_physical_engine(i915, engine) {
 		if (!spin) {
-			spin = igt_spin_batch_new(i915,
-						  .dependency = scratch,
-						  .engine = engine);
+			spin = igt_spin_new(i915,
+					    .dependency = scratch,
+					    .engine = engine);
 		} else {
 			uint64_t saved = spin->execbuf.flags;
 
@@ -398,7 +398,7 @@ static void semaphore_userlock(int i915)
 	gem_sync(i915, obj.handle); /* to hang unless we can preempt */
 	gem_close(i915, obj.handle);
 
-	igt_spin_batch_free(i915, spin);
+	igt_spin_free(i915, spin);
 }
 
 static void semaphore_codependency(int i915)
@@ -432,18 +432,18 @@ static void semaphore_codependency(int i915)
 		ctx = gem_context_create(i915);
 
 		task[i].xcs =
-			__igt_spin_batch_new(i915,
-					     .ctx = ctx,
-					     .engine = engine,
-					     .flags = IGT_SPIN_POLL_RUN);
+			__igt_spin_new(i915,
+				       .ctx = ctx,
+				       .engine = engine,
+				       .flags = IGT_SPIN_POLL_RUN);
 		igt_spin_busywait_until_started(task[i].xcs);
 
 		/* Common rcs tasks will be queued in FIFO */
 		task[i].rcs =
-			__igt_spin_batch_new(i915,
-					     .ctx = ctx,
-					     .engine = I915_EXEC_RENDER,
-					     .dependency = task[i].xcs->handle);
+			__igt_spin_new(i915,
+				       .ctx = ctx,
+				       .engine = I915_EXEC_RENDER,
+				       .dependency = task[i].xcs->handle);
 
 		gem_context_destroy(i915, ctx);
 
@@ -453,13 +453,13 @@ static void semaphore_codependency(int i915)
 	igt_require(i == ARRAY_SIZE(task));
 
 	/* Since task[0] was queued first, it will be first in queue for rcs */
-	igt_spin_batch_end(task[1].xcs);
-	igt_spin_batch_end(task[1].rcs);
+	igt_spin_end(task[1].xcs);
+	igt_spin_end(task[1].rcs);
 	gem_sync(i915, task[1].rcs->handle); /* to hang if task[0] hogs rcs */
 
 	for (i = 0; i < ARRAY_SIZE(task); i++) {
-		igt_spin_batch_free(i915, task[i].xcs);
-		igt_spin_batch_free(i915, task[i].rcs);
+		igt_spin_free(i915, task[i].xcs);
+		igt_spin_free(i915, task[i].rcs);
 	}
 }
 
@@ -579,9 +579,9 @@ static void preempt(int fd, unsigned ring, unsigned flags)
 			ctx[LO] = gem_context_create(fd);
 			gem_context_set_priority(fd, ctx[LO], MIN_PRIO);
 		}
-		spin[n] = __igt_spin_batch_new(fd,
-					       .ctx = ctx[LO],
-					       .engine = ring);
+		spin[n] = __igt_spin_new(fd,
+					 .ctx = ctx[LO],
+					 .engine = ring);
 		igt_debug("spin[%d].handle=%d\n", n, spin[n]->handle);
 
 		store_dword(fd, ctx[HI], ring, result, 0, n + 1, 0, I915_GEM_DOMAIN_RENDER);
@@ -592,7 +592,7 @@ static void preempt(int fd, unsigned ring, unsigned flags)
 	}
 
 	for (int n = 0; n < ARRAY_SIZE(spin); n++)
-		igt_spin_batch_free(fd, spin[n]);
+		igt_spin_free(fd, spin[n]);
 
 	if (flags & HANG_LP)
 		igt_post_hang_ring(fd, hang);
@@ -614,9 +614,9 @@ static igt_spin_t *__noise(int fd, uint32_t ctx, int prio, igt_spin_t *spin)
 	for_each_physical_engine(fd, other) {
 		if (spin == NULL) {
-			spin = __igt_spin_batch_new(fd,
-						    .ctx = ctx,
-						    .engine = other);
+			spin = __igt_spin_new(fd,
+					      .ctx = ctx,
+					      .engine = other);
 		} else {
 			struct drm_i915_gem_execbuffer2 eb = {
 				.buffer_count = 1,
@@ -703,7 +703,7 @@ static void preempt_other(int fd, unsigned ring, unsigned int flags)
 	}
 
 	igt_assert(gem_bo_busy(fd, spin->handle));
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 
 	gem_context_destroy(fd, ctx[LO]);
 	gem_context_destroy(fd, ctx[NOISE]);
@@ -768,7 +768,7 @@ static void __preempt_queue(int fd,
 	if (above) {
 		igt_assert(gem_bo_busy(fd, above->handle));
-		igt_spin_batch_free(fd, above);
+		igt_spin_free(fd, above);
 	}
 
 	gem_set_domain(fd, result, I915_GEM_DOMAIN_GTT, 0);
@@ -781,7 +781,7 @@ static void __preempt_queue(int fd,
 	if (below) {
 		igt_assert(gem_bo_busy(fd, below->handle));
-		igt_spin_batch_free(fd, below);
+		igt_spin_free(fd, below);
 	}
 
 	gem_context_destroy(fd, ctx[LO]);
@@ -825,9 +825,9 @@ static void preempt_self(int fd, unsigned ring)
 	n = 0;
 	gem_context_set_priority(fd, ctx[HI], MIN_PRIO);
 	for_each_physical_engine(fd, other) {
-		spin[n] = __igt_spin_batch_new(fd,
-					       .ctx = ctx[NOISE],
-					       .engine = other);
+		spin[n] = __igt_spin_new(fd,
+					 .ctx = ctx[NOISE],
+					 .engine = other);
 		store_dword(fd, ctx[HI], other,
 			    result, (n + 1)*sizeof(uint32_t),
 			    n + 1, 0, I915_GEM_DOMAIN_RENDER);
@@ -842,7 +842,7 @@ static void preempt_self(int fd, unsigned ring)
 	for (i = 0; i < n; i++) {
 		igt_assert(gem_bo_busy(fd, spin[i]->handle));
-		igt_spin_batch_free(fd, spin[i]);
+		igt_spin_free(fd, spin[i]);
 	}
 
 	__sync_read_u32_count(fd, result, result_read, sizeof(result_read));
@@ -870,9 +870,9 @@ static void preemptive_hang(int fd, unsigned ring)
 		ctx[LO] = gem_context_create(fd);
 		gem_context_set_priority(fd, ctx[LO], MIN_PRIO);
 
-		spin[n] = __igt_spin_batch_new(fd,
-					       .ctx = ctx[LO],
-					       .engine = ring);
+		spin[n] = __igt_spin_new(fd,
+					 .ctx = ctx[LO],
+					 .engine = ring);
 
 		gem_context_destroy(fd, ctx[LO]);
 	}
@@ -886,7 +886,7 @@ static void preemptive_hang(int fd, unsigned ring)
 		 * be updated to reflect such changes.
 		 */
 		igt_assert(gem_bo_busy(fd, spin[n]->handle));
-		igt_spin_batch_free(fd, spin[n]);
+		igt_spin_free(fd, spin[n]);
 	}
 
 	gem_context_destroy(fd, ctx[HI]);
@@ -1357,9 +1357,9 @@ static void measure_semaphore_power(int i915)
 		int64_t jiffie = 1;
 		igt_spin_t *spin;
 
-		spin = __igt_spin_batch_new(i915,
-					    .engine = signaler,
-					    .flags = IGT_SPIN_POLL_RUN);
+		spin = __igt_spin_new(i915,
+				      .engine = signaler,
+				      .flags = IGT_SPIN_POLL_RUN);
 		gem_wait(i915, spin->handle, &jiffie); /* waitboost */
 		igt_spin_busywait_until_started(spin);
 
@@ -1374,11 +1374,11 @@ static void measure_semaphore_power(int i915)
 			if (engine == signaler)
 				continue;
 
-			sema = __igt_spin_batch_new(i915,
-						    .engine = engine,
-						    .dependency = spin->handle);
+			sema = __igt_spin_new(i915,
+					      .engine = engine,
+					      .dependency = spin->handle);
 
-			igt_spin_batch_free(i915, sema);
+			igt_spin_free(i915, sema);
 		}
 		usleep(10); /* just give the tasklets a chance to run */
 
@@ -1386,7 +1386,7 @@ static void measure_semaphore_power(int i915)
 		usleep(100*1000);
 		gpu_power_read(&power, &s_sema[1]);
 
-		igt_spin_batch_free(i915, spin);
+		igt_spin_free(i915, spin);
 
 		baseline = gpu_power_W(&power, &s_spin[0], &s_spin[1]);
 		total = gpu_power_W(&power, &s_sema[0], &s_sema[1]);
diff --git a/tests/i915/gem_exec_suspend.c b/tests/i915/gem_exec_suspend.c
index 43c52d10..e43a16e9 100644
--- a/tests/i915/gem_exec_suspend.c
+++ b/tests/i915/gem_exec_suspend.c
@@ -189,7 +189,7 @@ static void run_test(int fd, unsigned engine, unsigned flags)
 	}
 
 	if (flags & HANG)
-		spin = igt_spin_batch_new(fd, .engine = engine);
+		spin = igt_spin_new(fd, .engine = engine);
 
 	switch (mode(flags)) {
 	case NOSLEEP:
@@ -216,7 +216,7 @@ static void run_test(int fd, unsigned engine, unsigned flags)
 		break;
 	}
 
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 
 	check_bo(fd, obj[0].handle);
 	gem_close(fd, obj[0].handle);
diff --git a/tests/i915/gem_fenced_exec_thrash.c b/tests/i915/gem_fenced_exec_thrash.c
index 7248d310..145b8bf8 100644
--- a/tests/i915/gem_fenced_exec_thrash.c
+++ b/tests/i915/gem_fenced_exec_thrash.c
@@ -132,14 +132,14 @@ static void run_test(int fd, int num_fences, int expected_errno,
 			igt_spin_t *spin = NULL;
 
 			if (flags & BUSY_LOAD)
-				spin = __igt_spin_batch_new(fd);
+				spin = __igt_spin_new(fd);
 
 			igt_while_interruptible(flags & INTERRUPTIBLE) {
 				igt_assert_eq(__gem_execbuf(fd, &execbuf[i]),
 					      -expected_errno);
 			}
 
-			igt_spin_batch_free(fd, spin);
+			igt_spin_free(fd, spin);
 			gem_quiescent_gpu(fd);
 		}
 		count++;
diff --git a/tests/i915/gem_mmap.c b/tests/i915/gem_mmap.c
index 1f5348d9..d1b10013 100644
--- a/tests/i915/gem_mmap.c
+++ b/tests/i915/gem_mmap.c
@@ -122,7 +122,7 @@ test_pf_nonblock(int i915)
 	igt_spin_t *spin;
 	uint32_t *ptr;
 
-	spin = igt_spin_batch_new(i915);
+	spin = igt_spin_new(i915);
 
 	igt_set_timeout(1, "initial pagefaulting did not complete within 1s");
 
@@ -132,7 +132,7 @@ test_pf_nonblock(int i915)
 
 	igt_reset_timeout();
 
-	igt_spin_batch_free(i915, spin);
+	igt_spin_free(i915, spin);
 }
 
 static int mmap_ioctl(int i915, struct drm_i915_gem_mmap *arg)
diff --git a/tests/i915/gem_mmap_gtt.c b/tests/i915/gem_mmap_gtt.c
index ab7d3f2d..9a670f03 100644
--- a/tests/i915/gem_mmap_gtt.c
+++ b/tests/i915/gem_mmap_gtt.c
@@ -309,7 +309,7 @@ test_pf_nonblock(int i915)
 	igt_require(mmap_gtt_version(i915) >= 3);
 
-	spin = igt_spin_batch_new(i915);
+	spin = igt_spin_new(i915);
 
 	igt_set_timeout(1, "initial pagefaulting did not complete within 1s");
 
@@ -319,7 +319,7 @@ test_pf_nonblock(int i915)
 
 	igt_reset_timeout();
 
-	igt_spin_batch_free(i915, spin);
+	igt_spin_free(i915, spin);
 }
 
 static void
diff --git a/tests/i915/gem_mmap_wc.c b/tests/i915/gem_mmap_wc.c
index e3ffc5ad..159eedbf 100644
--- a/tests/i915/gem_mmap_wc.c
+++ b/tests/i915/gem_mmap_wc.c
@@ -448,7 +448,7 @@ test_pf_nonblock(int i915)
 	igt_spin_t *spin;
 	uint32_t *ptr;
 
-	spin = igt_spin_batch_new(i915);
+	spin = igt_spin_new(i915);
 
 	igt_set_timeout(1, "initial pagefaulting did not complete within 1s");
 
@@ -458,7 +458,7 @@ test_pf_nonblock(int i915)
 
 	igt_reset_timeout();
 
-	igt_spin_batch_free(i915, spin);
+	igt_spin_free(i915, spin);
 }
 
 static void
diff --git a/tests/i915/gem_shrink.c b/tests/i915/gem_shrink.c
index 3e8b8f2d..037ff005 100644
--- a/tests/i915/gem_shrink.c
+++ b/tests/i915/gem_shrink.c
@@ -346,17 +346,17 @@ static void reclaim(unsigned engine, int timeout)
 		} while (!*shared);
 	}
 
-	spin = igt_spin_batch_new(fd, .engine = engine);
+	spin = igt_spin_new(fd, .engine = engine);
 	igt_until_timeout(timeout) {
-		igt_spin_t *next = __igt_spin_batch_new(fd, .engine = engine);
+		igt_spin_t *next = __igt_spin_new(fd, .engine = engine);
 
-		igt_spin_batch_set_timeout(spin, timeout_100ms);
+		igt_spin_set_timeout(spin, timeout_100ms);
 		gem_sync(fd, spin->handle);
 
-		igt_spin_batch_free(fd, spin);
+		igt_spin_free(fd, spin);
 		spin = next;
 	}
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 
 	*shared = 1;
 	igt_waitchildren();
diff --git a/tests/i915/gem_spin_batch.c b/tests/i915/gem_spin_batch.c
index 9afdbe09..a92672b8 100644
--- a/tests/i915/gem_spin_batch.c
+++ b/tests/i915/gem_spin_batch.c
@@ -41,12 +41,12 @@ static void spin(int fd, unsigned int engine, unsigned int timeout_sec)
 	struct timespec itv = { };
 	uint64_t elapsed;
 
-	spin = __igt_spin_batch_new(fd, .engine = engine);
+	spin = __igt_spin_new(fd, .engine = engine);
 	while ((elapsed = igt_nsec_elapsed(&tv)) >> 30 < timeout_sec) {
-		igt_spin_t *next = __igt_spin_batch_new(fd, .engine = engine);
+		igt_spin_t *next = __igt_spin_new(fd, .engine = engine);
 
-		igt_spin_batch_set_timeout(spin,
-					   timeout_100ms - igt_nsec_elapsed(&itv));
+		igt_spin_set_timeout(spin,
+				     timeout_100ms - igt_nsec_elapsed(&itv));
 		gem_sync(fd, spin->handle);
 		igt_debug("loop %lu: interval=%fms (target 100ms), elapsed %fms\n",
 			  loops,
@@ -54,11 +54,11 @@ static void spin(int fd, unsigned int engine, unsigned int timeout_sec)
 			  igt_nsec_elapsed(&tv) * 1e-6);
 		memset(&itv, 0, sizeof(itv));
 
-		igt_spin_batch_free(fd, spin);
+		igt_spin_free(fd, spin);
 		spin = next;
 		loops++;
 	}
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 
 	igt_info("Completed %ld loops in %lld ns, target %ld\n",
 		 loops, (long long)elapsed, (long)(elapsed / timeout_100ms));
@@ -74,7 +74,7 @@ static void spin_resubmit(int fd, unsigned int engine, unsigned int flags)
 	const uint32_t ctx0 = gem_context_create(fd);
 	const uint32_t ctx1 = (flags & RESUBMIT_NEW_CTX) ?
 		gem_context_create(fd) : ctx0;
-	igt_spin_t *spin = __igt_spin_batch_new(fd, .ctx = ctx0, .engine = engine);
+	igt_spin_t *spin = __igt_spin_new(fd, .ctx = ctx0, .engine = engine);
 	unsigned int other;
 
 	struct drm_i915_gem_execbuffer2 eb = {
@@ -96,11 +96,11 @@ static void spin_resubmit(int fd, unsigned int engine, unsigned int flags)
 		gem_execbuf(fd, &eb);
 	}
 
-	igt_spin_batch_end(spin);
+	igt_spin_end(spin);
 
 	gem_sync(fd, spin->obj[1].handle);
 
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 
 	if (ctx1 != ctx0)
 		gem_context_destroy(fd, ctx1);
@@ -110,7 +110,7 @@ static void spin_resubmit(int fd, unsigned int engine, unsigned int flags)
 
 static void spin_exit_handler(int sig)
 {
-	igt_terminate_spin_batches();
+	igt_terminate_spins();
 }
 
 static void spin_on_all_engines(int fd, unsigned int timeout_sec)
diff --git a/tests/i915/gem_sync.c b/tests/i915/gem_sync.c
index 0a0ed2a1..f17ecd0b 100644
--- a/tests/i915/gem_sync.c
+++ b/tests/i915/gem_sync.c
@@ -221,16 +221,16 @@ wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 		execbuf.buffer_count = 1;
 		execbuf.flags = engines[child % num_engines];
 
-		spin = __igt_spin_batch_new(fd,
-					    .engine = execbuf.flags,
-					    .flags = (IGT_SPIN_POLL_RUN |
-						      IGT_SPIN_FAST));
+		spin = __igt_spin_new(fd,
+				      .engine = execbuf.flags,
+				      .flags = (IGT_SPIN_POLL_RUN |
+						IGT_SPIN_FAST));
 		igt_assert(igt_spin_has_poll(spin));
 		cmd = *spin->batch;
 
 		gem_execbuf(fd, &execbuf);
 
-		igt_spin_batch_end(spin);
+		igt_spin_end(spin);
 		gem_sync(fd, object.handle);
 
 		for (int warmup = 0; warmup <= 1; warmup++) {
@@ -244,7 +244,7 @@ wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 				igt_spin_busywait_until_started(spin);
 
 				this = gettime();
-				igt_spin_batch_end(spin);
+				igt_spin_end(spin);
 				gem_sync(fd, spin->handle);
 				now = gettime();
 
@@ -271,7 +271,7 @@ wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 				gem_execbuf(fd, &execbuf);
 
 				this = gettime();
-				igt_spin_batch_end(spin);
+				igt_spin_end(spin);
 				gem_sync(fd, object.handle);
 				now = gettime();
 
@@ -285,7 +285,7 @@ wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 			 names[child % num_engines] ? " c" : "C",
 			 cycles, 1e6*baseline, elapsed*1e6/cycles);
 
-		igt_spin_batch_free(fd, spin);
+		igt_spin_free(fd, spin);
 		gem_close(fd, object.handle);
 	}
 	igt_waitchildren_timeout(2*timeout, NULL);
@@ -323,14 +323,14 @@ static void active_ring(int fd, unsigned ring, int timeout)
 		igt_spin_t *spin[2];
 		uint32_t cmd;
 
-		spin[0] = __igt_spin_batch_new(fd,
-					       .engine = ring,
-					       .flags = IGT_SPIN_FAST);
+		spin[0] = __igt_spin_new(fd,
+					 .engine = ring,
+					 .flags = IGT_SPIN_FAST);
 		cmd = *spin[0]->batch;
 
-		spin[1] = __igt_spin_batch_new(fd,
-					       .engine = ring,
-					       .flags = IGT_SPIN_FAST);
+		spin[1] = __igt_spin_new(fd,
+					 .engine = ring,
+					 .flags = IGT_SPIN_FAST);
 		igt_assert(*spin[1]->batch == cmd);
 
 		start = gettime();
@@ -340,7 +340,7 @@ static void active_ring(int fd, unsigned ring, int timeout)
 			for (int loop = 0; loop < 1024; loop++) {
 				igt_spin_t *s = spin[loop & 1];
 
-				igt_spin_batch_end(s);
+				igt_spin_end(s);
 				gem_sync(fd, s->handle);
 
 				*s->batch = cmd;
@@ -348,8 +348,8 @@ static void active_ring(int fd, unsigned ring, int timeout)
 			}
 			cycles += 1024;
 		} while ((elapsed = gettime()) < end);
-		igt_spin_batch_free(fd, spin[1]);
-		igt_spin_batch_free(fd, spin[0]);
+		igt_spin_free(fd, spin[1]);
+		igt_spin_free(fd, spin[0]);
 
 		igt_info("%s%sompleted %ld cycles: %.3f us\n",
 			 names[child % num_engines] ?: "",
@@ -404,22 +404,22 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 		execbuf.buffer_count = 1;
 		execbuf.flags = engines[child % num_engines];
 
-		spin[0] = __igt_spin_batch_new(fd,
-					       .engine = execbuf.flags,
-					       .flags = (IGT_SPIN_POLL_RUN |
-							 IGT_SPIN_FAST));
+		spin[0] = __igt_spin_new(fd,
+					 .engine = execbuf.flags,
+					 .flags = (IGT_SPIN_POLL_RUN |
+						   IGT_SPIN_FAST));
 		igt_assert(igt_spin_has_poll(spin[0]));
 		cmd = *spin[0]->batch;
 
-		spin[1] = __igt_spin_batch_new(fd,
-					       .engine = execbuf.flags,
-					       .flags = (IGT_SPIN_POLL_RUN |
-							 IGT_SPIN_FAST));
+		spin[1] = __igt_spin_new(fd,
+					 .engine = execbuf.flags,
+					 .flags = (IGT_SPIN_POLL_RUN |
+						   IGT_SPIN_FAST));
 
 		gem_execbuf(fd, &execbuf);
 
-		igt_spin_batch_end(spin[1]);
-		igt_spin_batch_end(spin[0]);
+		igt_spin_end(spin[1]);
+		igt_spin_end(spin[0]);
 		gem_sync(fd, object.handle);
 
 		for (int warmup = 0; warmup <= 1; warmup++) {
@@ -438,7 +438,7 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 				gem_execbuf(fd, &spin[1]->execbuf);
 
 				this = gettime();
-				igt_spin_batch_end(spin[0]);
+				igt_spin_end(spin[0]);
 				gem_sync(fd, spin[0]->handle);
 				now = gettime();
 
@@ -446,7 +446,7 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 				cycles++;
 				igt_swap(spin[0], spin[1]);
 			} while (now < end);
-			igt_spin_batch_end(spin[0]);
+			igt_spin_end(spin[0]);
 			baseline = elapsed / cycles;
 		}
 		igt_info("%s%saseline %ld cycles: %.3f us\n",
@@ -472,7 +472,7 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 			gem_execbuf(fd, &spin[1]->execbuf);
 
 			this = gettime();
-			igt_spin_batch_end(spin[0]);
+			igt_spin_end(spin[0]);
 			gem_sync(fd, object.handle);
 			now = gettime();
 
@@ -480,7 +480,7 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 			cycles++;
 			igt_swap(spin[0], spin[1]);
 		} while (now < end);
-		igt_spin_batch_end(spin[0]);
+		igt_spin_end(spin[0]);
 		elapsed -= cycles * baseline;
 
 		igt_info("%s%sompleted %ld cycles: %.3f + %.3f us\n",
@@ -488,8 +488,8 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen)
 			 names[child % num_engines] ? " c" : "C",
 			 cycles, 1e6*baseline, elapsed*1e6/cycles);
 
-		igt_spin_batch_free(fd, spin[1]);
-		igt_spin_batch_free(fd, spin[0]);
+		igt_spin_free(fd, spin[1]);
+		igt_spin_free(fd, spin[0]);
 		gem_close(fd, object.handle);
 	}
 	igt_waitchildren_timeout(2*timeout, NULL);
@@ -1189,16 +1189,16 @@ preempt(int fd, unsigned ring, int num_children, int timeout)
 		cycles = 0;
 		do {
 			igt_spin_t *spin =
-				__igt_spin_batch_new(fd,
-						     .ctx = ctx[0],
-						     .engine = execbuf.flags);
+				__igt_spin_new(fd,
+					       .ctx = ctx[0],
+					       .engine = execbuf.flags);
 
 			do {
 				gem_execbuf(fd, &execbuf);
 				gem_sync(fd, object.handle);
 			} while (++cycles & 1023);
 
-			igt_spin_batch_free(fd, spin);
+			igt_spin_free(fd, spin);
 		} while ((elapsed = gettime() - start) < timeout);
 		igt_info("%s%sompleted %ld cycles: %.3f us\n",
 			 names[child % num_engines] ?: "",
diff --git a/tests/i915/gem_wait.c b/tests/i915/gem_wait.c
index 7914c936..ee2ecfa0 100644
--- a/tests/i915/gem_wait.c
+++ b/tests/i915/gem_wait.c
@@ -74,9 +74,9 @@ static void basic(int fd, unsigned engine, unsigned flags)
 	IGT_CORK_HANDLE(cork);
 	uint32_t plug =
 		flags & (WRITE | AWAIT) ? igt_cork_plug(&cork, fd) : 0;
-	igt_spin_t *spin = igt_spin_batch_new(fd,
-					      .engine = engine,
-					      .dependency = plug);
+	igt_spin_t *spin = igt_spin_new(fd,
+					.engine = engine,
+					.dependency = plug);
 	struct drm_i915_gem_wait wait = {
 		flags & WRITE ? plug : spin->handle
 	};
@@ -89,7 +89,7 @@ static void basic(int fd, unsigned engine, unsigned flags)
 		timeout = 120;
 
 	if ((flags & HANG) == 0) {
-		igt_spin_batch_set_timeout(spin, NSEC_PER_SEC/2);
+		igt_spin_set_timeout(spin, NSEC_PER_SEC/2);
 		timeout = 1;
 	}
@@ -112,7 +112,7 @@ static void basic(int fd, unsigned engine, unsigned flags)
 		igt_assert_eq(__gem_wait(fd, &wait), -ETIME);
 
 	if ((flags & HANG) == 0) {
-		igt_spin_batch_set_timeout(spin, NSEC_PER_SEC/2);
+		igt_spin_set_timeout(spin, NSEC_PER_SEC/2);
 		wait.timeout_ns = NSEC_PER_SEC; /* 1.0s */
 		igt_assert_eq(__gem_wait(fd, &wait), 0);
 		igt_assert(wait.timeout_ns >= 0);
@@ -129,7 +129,7 @@ static void basic(int fd, unsigned engine, unsigned flags)
 	if (plug)
 		gem_close(fd, plug);
 
-	igt_spin_batch_free(fd, spin);
+	igt_spin_free(fd, spin);
 }
 
 igt_main
diff --git a/tests/i915/i915_pm_rps.c b/tests/i915/i915_pm_rps.c
index 91f46f10..478c7be7 100644
--- a/tests/i915/i915_pm_rps.c
+++ b/tests/i915/i915_pm_rps.c
@@ -254,29 +254,29 @@ static void load_helper_run(enum load load)
 		igt_debug("Applying %s load...\n", lh.load ? "high" : "low");
 
 		prev_load = lh.load == HIGH;
-		spin[0] = __igt_spin_batch_new(drm_fd);
+		spin[0] = __igt_spin_new(drm_fd);
 		if (prev_load)
-			spin[1] = __igt_spin_batch_new(drm_fd);
+			spin[1] = __igt_spin_new(drm_fd);
 		prev_load = !prev_load; /* send the initial signal */
 		while (!lh.exit) {
 			bool high_load;
 
 			handle = spin[0]->handle;
-			igt_spin_batch_end(spin[0]);
+			igt_spin_end(spin[0]);
 			while (gem_bo_busy(drm_fd, handle))
 				usleep(100);
 
-			igt_spin_batch_free(drm_fd, spin[0]);
+			igt_spin_free(drm_fd, spin[0]);
 			usleep(100);
 
 			high_load = lh.load == HIGH;
 			if (!high_load && spin[1]) {
-				igt_spin_batch_free(drm_fd, spin[1]);
+				igt_spin_free(drm_fd, spin[1]);
 				spin[1] = NULL;
 			} else {
 				spin[0] = spin[1];
 			}
-			spin[high_load] = __igt_spin_batch_new(drm_fd);
+			spin[high_load] = __igt_spin_new(drm_fd);
 
 			if (lh.signal && high_load != prev_load) {
 				write(lh.link, &lh.signal, sizeof(lh.signal));
@@ -286,11 +286,11 @@ static void load_helper_run(enum load load)
 		}
 
 		handle = spin[0]->handle;
-		igt_spin_batch_end(spin[0]);
+		igt_spin_end(spin[0]);
 
 		if (spin[1]) {
 			handle = spin[1]->handle;
-			igt_spin_batch_end(spin[1]);
+			igt_spin_end(spin[1]);
 		}
 
 		/* Wait for completion without boosting */
@@ -305,8 +305,8 @@ static void load_helper_run(enum load load)
 		 */
 		igt_drop_caches_set(drm_fd, DROP_RETIRE);
 
-		igt_spin_batch_free(drm_fd, spin[1]);
-		igt_spin_batch_free(drm_fd, spin[0]);
+		igt_spin_free(drm_fd, spin[1]);
+		igt_spin_free(drm_fd, spin[0]);
 	}
 
 	close(lh.link);
@@ -549,7 +549,7 @@ static void boost_freq(int fd, int *boost_freqs)
 	int64_t timeout = 1;
 	igt_spin_t *load;
 
-	load = igt_spin_batch_new(fd);
+	load = igt_spin_new(fd);
 	resubmit_batch(fd, load->handle, 16);
 
 	/* Waiting will grant us a boost to maximum */
@@ -559,9 +559,9 @@ static void boost_freq(int fd, int *boost_freqs)
 	dump(boost_freqs);
 
 	/* Avoid downlocking till boost request is pending */
-	igt_spin_batch_end(load);
+	igt_spin_end(load);
 	gem_sync(fd, load->handle);
-	igt_spin_batch_free(fd, load);
+	igt_spin_free(fd, load);
 }
 
 static void
waitboost(int fd, bool reset) diff --git a/tests/kms_busy.c b/tests/kms_busy.c index 321db820..66f26cd0 100644 --- a/tests/kms_busy.c +++ b/tests/kms_busy.c @@ -76,9 +76,9 @@ static void flip_to_fb(igt_display_t *dpy, int pipe, const int timeout = modeset ? 8500 : 100; struct drm_event_vblank ev; - igt_spin_t *t = igt_spin_batch_new(dpy->drm_fd, - .engine = ring, - .dependency = fb->gem_handle); + igt_spin_t *t = igt_spin_new(dpy->drm_fd, + .engine = ring, + .dependency = fb->gem_handle); if (modeset) { /* @@ -115,7 +115,7 @@ static void flip_to_fb(igt_display_t *dpy, int pipe, igt_waitchildren_timeout(5 * timeout, "flip blocked waiting for busy bo\n"); - igt_spin_batch_end(t); + igt_spin_end(t); igt_assert(read(dpy->drm_fd, &ev, sizeof(ev)) == sizeof(ev)); igt_assert(poll(&pfd, 1, 0) == 0); @@ -131,7 +131,7 @@ static void flip_to_fb(igt_display_t *dpy, int pipe, igt_display_commit2(dpy, COMMIT_ATOMIC); } - igt_spin_batch_free(dpy->drm_fd, t); + igt_spin_free(dpy->drm_fd, t); } static void test_flip(igt_display_t *dpy, unsigned ring, int pipe, bool modeset) @@ -180,9 +180,9 @@ static void test_flip(igt_display_t *dpy, unsigned ring, int pipe, bool modeset) static void test_atomic_commit_hang(igt_display_t *dpy, igt_plane_t *primary, struct igt_fb *busy_fb, unsigned ring) { - igt_spin_t *t = igt_spin_batch_new(dpy->drm_fd, - .engine = ring, - .dependency = busy_fb->gem_handle); + igt_spin_t *t = igt_spin_new(dpy->drm_fd, + .engine = ring, + .dependency = busy_fb->gem_handle); struct pollfd pfd = { .fd = dpy->drm_fd, .events = POLLIN }; unsigned flags = 0; struct drm_event_vblank ev; @@ -210,7 +210,7 @@ static void test_atomic_commit_hang(igt_display_t *dpy, igt_plane_t *primary, igt_assert(read(dpy->drm_fd, &ev, sizeof(ev)) == sizeof(ev)); - igt_spin_batch_end(t); + igt_spin_end(t); } static void test_hang(igt_display_t *dpy, unsigned ring, @@ -269,9 +269,9 @@ static void test_pageflip_modeset_hang(igt_display_t *dpy, igt_display_commit2(dpy, dpy->is_atomic ? 
COMMIT_ATOMIC : COMMIT_LEGACY); - t = igt_spin_batch_new(dpy->drm_fd, - .engine = ring, - .dependency = fb.gem_handle); + t = igt_spin_new(dpy->drm_fd, + .engine = ring, + .dependency = fb.gem_handle); do_or_die(drmModePageFlip(dpy->drm_fd, dpy->pipes[pipe].crtc_id, fb.fb_id, DRM_MODE_PAGE_FLIP_EVENT, &fb)); @@ -282,7 +282,7 @@ static void test_pageflip_modeset_hang(igt_display_t *dpy, igt_assert(read(dpy->drm_fd, &ev, sizeof(ev)) == sizeof(ev)); - igt_spin_batch_end(t); + igt_spin_end(t); igt_remove_fb(dpy->drm_fd, &fb); } diff --git a/tests/kms_cursor_legacy.c b/tests/kms_cursor_legacy.c index 9febf6e9..f8d5f631 100644 --- a/tests/kms_cursor_legacy.c +++ b/tests/kms_cursor_legacy.c @@ -534,8 +534,8 @@ static void basic_flip_cursor(igt_display_t *display, spin = NULL; if (flags & BASIC_BUSY) - spin = igt_spin_batch_new(display->drm_fd, - .dependency = fb_info.gem_handle); + spin = igt_spin_new(display->drm_fd, + .dependency = fb_info.gem_handle); /* Start with a synchronous query to align with the vblank */ vblank_start = get_vblank(display->drm_fd, pipe, DRM_VBLANK_NEXTONMISS); @@ -580,7 +580,7 @@ static void basic_flip_cursor(igt_display_t *display, if (spin) { struct pollfd pfd = { display->drm_fd, POLLIN }; igt_assert(poll(&pfd, 1, 0) == 0); - igt_spin_batch_free(display->drm_fd, spin); + igt_spin_free(display->drm_fd, spin); } if (miss) @@ -1321,8 +1321,8 @@ static void flip_vs_cursor_busy_crc(igt_display_t *display, bool atomic) for (int i = 1; i >= 0; i--) { igt_spin_t *spin; - spin = igt_spin_batch_new(display->drm_fd, - .dependency = fb_info[1].gem_handle); + spin = igt_spin_new(display->drm_fd, + .dependency = fb_info[1].gem_handle); vblank_start = get_vblank(display->drm_fd, pipe, DRM_VBLANK_NEXTONMISS); @@ -1333,7 +1333,7 @@ static void flip_vs_cursor_busy_crc(igt_display_t *display, bool atomic) igt_pipe_crc_get_current(display->drm_fd, pipe_crc, &test_crc); - igt_spin_batch_free(display->drm_fd, spin); + igt_spin_free(display->drm_fd, spin); 
igt_set_timeout(1, "Stuck page flip"); igt_ignore_warn(read(display->drm_fd, &vbl, sizeof(vbl))); diff --git a/tests/perf_pmu.c b/tests/perf_pmu.c index 28f235b1..a8ad86ce 100644 --- a/tests/perf_pmu.c +++ b/tests/perf_pmu.c @@ -180,7 +180,7 @@ static igt_spin_t * __spin_poll(int fd, uint32_t ctx, unsigned long flags) if (gem_can_store_dword(fd, flags)) opts.flags |= IGT_SPIN_POLL_RUN; - return __igt_spin_batch_factory(fd, &opts); + return __igt_spin_factory(fd, &opts); } static unsigned long __spin_wait(int fd, igt_spin_t *spin) @@ -230,7 +230,7 @@ static void end_spin(int fd, igt_spin_t *spin, unsigned int flags) if (!spin) return; - igt_spin_batch_end(spin); + igt_spin_end(spin); if (flags & FLAG_SYNC) gem_sync(fd, spin->handle); @@ -296,7 +296,7 @@ single(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags) assert_within_epsilon(val, 0, tolerance); } - igt_spin_batch_free(gem_fd, spin); + igt_spin_free(gem_fd, spin); close(fd); gem_quiescent_gpu(gem_fd); @@ -325,7 +325,7 @@ busy_start(int gem_fd, const struct intel_execution_engine2 *e) val = __pmu_read_single(fd, &ts[1]) - val; igt_debug("slept=%lu perf=%"PRIu64"\n", slept, ts[1] - ts[0]); - igt_spin_batch_free(gem_fd, spin); + igt_spin_free(gem_fd, spin); close(fd); assert_within_epsilon(val, ts[1] - ts[0], tolerance); @@ -361,9 +361,9 @@ busy_double_start(int gem_fd, const struct intel_execution_engine2 *e) */ spin[0] = __spin_sync(gem_fd, 0, e2ring(gem_fd, e)); usleep(500e3); - spin[1] = __igt_spin_batch_new(gem_fd, - .ctx = ctx, - .engine = e2ring(gem_fd, e)); + spin[1] = __igt_spin_new(gem_fd, + .ctx = ctx, + .engine = e2ring(gem_fd, e)); /* * Open PMU as fast as possible after the second spin batch in attempt @@ -376,8 +376,8 @@ busy_double_start(int gem_fd, const struct intel_execution_engine2 *e) val = __pmu_read_single(fd, &ts[1]) - val; igt_debug("slept=%lu perf=%"PRIu64"\n", slept, ts[1] - ts[0]); - igt_spin_batch_end(spin[0]); - igt_spin_batch_end(spin[1]); + 
igt_spin_end(spin[0]); + igt_spin_end(spin[1]); /* Wait for GPU idle to verify PMU reports idle. */ gem_quiescent_gpu(gem_fd); @@ -388,8 +388,8 @@ busy_double_start(int gem_fd, const struct intel_execution_engine2 *e) igt_info("busy=%"PRIu64" idle=%"PRIu64"\n", val, val2); - igt_spin_batch_free(gem_fd, spin[0]); - igt_spin_batch_free(gem_fd, spin[1]); + igt_spin_free(gem_fd, spin[0]); + igt_spin_free(gem_fd, spin[1]); close(fd); @@ -453,7 +453,7 @@ busy_check_all(int gem_fd, const struct intel_execution_engine2 *e, pmu_read_multi(fd[0], num_engines, tval[1]); end_spin(gem_fd, spin, FLAG_SYNC); - igt_spin_batch_free(gem_fd, spin); + igt_spin_free(gem_fd, spin); close(fd[0]); for (i = 0; i < num_engines; i++) @@ -471,9 +471,9 @@ busy_check_all(int gem_fd, const struct intel_execution_engine2 *e, } static void -__submit_spin_batch(int gem_fd, igt_spin_t *spin, - const struct intel_execution_engine2 *e, - int offset) +__submit_spin(int gem_fd, igt_spin_t *spin, + const struct intel_execution_engine2 *e, + int offset) { struct drm_i915_gem_execbuffer2 eb = spin->execbuf; @@ -501,7 +501,7 @@ most_busy_check_all(int gem_fd, const struct intel_execution_engine2 *e, if (e == e_) idle_idx = i; else if (spin) - __submit_spin_batch(gem_fd, spin, e_, 64); + __submit_spin(gem_fd, spin, e_, 64); else spin = __spin_poll(gem_fd, 0, e2ring(gem_fd, e_)); @@ -524,7 +524,7 @@ most_busy_check_all(int gem_fd, const struct intel_execution_engine2 *e, pmu_read_multi(fd[0], num_engines, tval[1]); end_spin(gem_fd, spin, FLAG_SYNC); - igt_spin_batch_free(gem_fd, spin); + igt_spin_free(gem_fd, spin); close(fd[0]); for (i = 0; i < num_engines; i++) @@ -556,7 +556,7 @@ all_busy_check_all(int gem_fd, const unsigned int num_engines, i = 0; for_each_engine_class_instance(gem_fd, e) { if (spin) - __submit_spin_batch(gem_fd, spin, e, 64); + __submit_spin(gem_fd, spin, e, 64); else spin = __spin_poll(gem_fd, 0, e2ring(gem_fd, e)); @@ -578,7 +578,7 @@ all_busy_check_all(int gem_fd, const unsigned int 
num_engines, pmu_read_multi(fd[0], num_engines, tval[1]); end_spin(gem_fd, spin, FLAG_SYNC); - igt_spin_batch_free(gem_fd, spin); + igt_spin_free(gem_fd, spin); close(fd[0]); for (i = 0; i < num_engines; i++) @@ -617,7 +617,7 @@ no_sema(int gem_fd, const struct intel_execution_engine2 *e, unsigned int flags) if (spin) { end_spin(gem_fd, spin, FLAG_SYNC); - igt_spin_batch_free(gem_fd, spin); + igt_spin_free(gem_fd, spin); } close(fd); @@ -950,9 +950,9 @@ multi_client(int gem_fd, const struct intel_execution_engine2 *e) perf_slept[0] = ts[1] - ts[0]; igt_debug("slept=%lu perf=%"PRIu64"\n", slept[0], perf_slept[0]); - igt_spin_batch_end(spin); + igt_spin_end(spin); gem_sync(gem_fd, spin->handle); - igt_spin_batch_free(gem_fd, spin); + igt_spin_free(gem_fd, spin); close(fd[0]); assert_within_epsilon(val[0], perf_slept[0], tolerance); @@ -1052,8 +1052,8 @@ static void cpu_hotplug(int gem_fd) * Create two spinners so test can ensure shorter gaps in engine * busyness as it is terminating one and re-starting the other. 
*/ - spin[0] = igt_spin_batch_new(gem_fd, .engine = I915_EXEC_RENDER); - spin[1] = __igt_spin_batch_new(gem_fd, .engine = I915_EXEC_RENDER); + spin[0] = igt_spin_new(gem_fd, .engine = I915_EXEC_RENDER); + spin[1] = __igt_spin_new(gem_fd, .engine = I915_EXEC_RENDER); val = __pmu_read_single(fd, &ts[0]); @@ -1135,9 +1135,9 @@ static void cpu_hotplug(int gem_fd) if ( ret == 1 || (ret < 0 && errno != EAGAIN)) break; - igt_spin_batch_free(gem_fd, spin[cur]); - spin[cur] = __igt_spin_batch_new(gem_fd, - .engine = I915_EXEC_RENDER); + igt_spin_free(gem_fd, spin[cur]); + spin[cur] = __igt_spin_new(gem_fd, + .engine = I915_EXEC_RENDER); cur ^= 1; } @@ -1145,8 +1145,8 @@ static void cpu_hotplug(int gem_fd) end_spin(gem_fd, spin[0], FLAG_SYNC); end_spin(gem_fd, spin[1], FLAG_SYNC); - igt_spin_batch_free(gem_fd, spin[0]); - igt_spin_batch_free(gem_fd, spin[1]); + igt_spin_free(gem_fd, spin[0]); + igt_spin_free(gem_fd, spin[1]); igt_waitchildren(); close(fd); close(link[0]); @@ -1174,9 +1174,9 @@ test_interrupts(int gem_fd) /* Queue spinning batches. */ for (int i = 0; i < target; i++) { - spin[i] = __igt_spin_batch_new(gem_fd, - .engine = I915_EXEC_RENDER, - .flags = IGT_SPIN_FENCE_OUT); + spin[i] = __igt_spin_new(gem_fd, + .engine = I915_EXEC_RENDER, + .flags = IGT_SPIN_FENCE_OUT); if (i == 0) { fence_fd = spin[i]->out_fence; } else { @@ -1200,9 +1200,9 @@ test_interrupts(int gem_fd) /* Arm batch expiration. */ for (int i = 0; i < target; i++) - igt_spin_batch_set_timeout(spin[i], - (i + 1) * test_duration_ms * 1e6 - / target); + igt_spin_set_timeout(spin[i], + (i + 1) * test_duration_ms * 1e6 + / target); /* Wait for last batch to finish. */ pfd.events = POLLIN; @@ -1212,7 +1212,7 @@ test_interrupts(int gem_fd) /* Free batches. */ for (int i = 0; i < target; i++) - igt_spin_batch_free(gem_fd, spin[i]); + igt_spin_free(gem_fd, spin[i]); /* Check at least as many interrupts has been generated. 
*/ busy = pmu_read_single(fd) - idle; @@ -1237,8 +1237,8 @@ test_interrupts_sync(int gem_fd) /* Queue spinning batches. */ for (int i = 0; i < target; i++) - spin[i] = __igt_spin_batch_new(gem_fd, - .flags = IGT_SPIN_FENCE_OUT); + spin[i] = __igt_spin_new(gem_fd, + .flags = IGT_SPIN_FENCE_OUT); /* Wait for idle state. */ idle = pmu_read_single(fd); @@ -1254,9 +1254,9 @@ test_interrupts_sync(int gem_fd) const unsigned int timeout_ms = test_duration_ms / target; pfd.fd = spin[i]->out_fence; - igt_spin_batch_set_timeout(spin[i], timeout_ms * 1e6); + igt_spin_set_timeout(spin[i], timeout_ms * 1e6); igt_assert_eq(poll(&pfd, 1, 2 * timeout_ms), 1); - igt_spin_batch_free(gem_fd, spin[i]); + igt_spin_free(gem_fd, spin[i]); } /* Check at least as many interrupts has been generated. */ @@ -1310,7 +1310,7 @@ test_frequency(int gem_fd) min[0] = 1e9*(val[0] - start[0]) / slept; min[1] = 1e9*(val[1] - start[1]) / slept; - igt_spin_batch_free(gem_fd, spin); + igt_spin_free(gem_fd, spin); gem_quiescent_gpu(gem_fd); /* Don't leak busy bo into the next phase */ usleep(1e6); @@ -1336,7 +1336,7 @@ test_frequency(int gem_fd) max[0] = 1e9*(val[0] - start[0]) / slept; max[1] = 1e9*(val[1] - start[1]) / slept; - igt_spin_batch_free(gem_fd, spin); + igt_spin_free(gem_fd, spin); gem_quiescent_gpu(gem_fd); /* @@ -1501,7 +1501,7 @@ test_enable_race(int gem_fd, const struct intel_execution_engine2 *e) gem_quiescent_gpu(gem_fd); } -static void __rearm_spin_batch(igt_spin_t *spin) +static void __rearm_spin(igt_spin_t *spin) { const uint32_t mi_arb_chk = 0x5 << 23; @@ -1570,8 +1570,8 @@ accuracy(int gem_fd, const struct intel_execution_engine2 *e, igt_spin_t *spin; /* Allocate our spin batch and idle it. */ - spin = igt_spin_batch_new(gem_fd, .engine = e2ring(gem_fd, e)); - igt_spin_batch_end(spin); + spin = igt_spin_new(gem_fd, .engine = e2ring(gem_fd, e)); + igt_spin_end(spin); gem_sync(gem_fd, spin->handle); /* 1st pass is calibration, second pass is the test. 
*/ @@ -1596,14 +1596,14 @@ accuracy(int gem_fd, const struct intel_execution_engine2 *e, nanosleep(&_ts, NULL); /* Restart the spinbatch. */ - __rearm_spin_batch(spin); - __submit_spin_batch(gem_fd, spin, e, 0); + __rearm_spin(spin); + __submit_spin(gem_fd, spin, e, 0); /* PWM busy sleep. */ loop_busy = igt_nsec_elapsed(&start); _ts.tv_nsec = busy_us * 1000; nanosleep(&_ts, NULL); - igt_spin_batch_end(spin); + igt_spin_end(spin); /* Time accounting. */ now = igt_nsec_elapsed(&start); @@ -1640,7 +1640,7 @@ accuracy(int gem_fd, const struct intel_execution_engine2 *e, write(link[1], &expected, sizeof(expected)); } - igt_spin_batch_free(gem_fd, spin); + igt_spin_free(gem_fd, spin); } fd = open_pmu(I915_PMU_ENGINE_BUSY(e->class, e->instance));

From patchwork Wed Apr 17 15:28:34 2019
X-Patchwork-Submitter: Mika Kuoppala
X-Patchwork-Id: 10905535
From: Mika Kuoppala
To: intel-gfx@lists.freedesktop.org
Date: Wed, 17 Apr 2019 18:28:34 +0300
Message-Id: <20190417152834.12705-3-mika.kuoppala@linux.intel.com>
In-Reply-To: <20190417152834.12705-1-mika.kuoppala@linux.intel.com>
References: <20190417152834.12705-1-mika.kuoppala@linux.intel.com>
Subject: [Intel-gfx] [PATCH i-g-t 3/3] lib/igt_dummyload: Introduce igt_spin_reset

Libify resetting a spin for reuse.
Cc: Chris Wilson Signed-off-by: Mika Kuoppala Reviewed-by: Chris Wilson --- lib/igt_dummyload.c | 20 ++++++++++++++++++++ lib/igt_dummyload.h | 2 ++ tests/i915/gem_exec_latency.c | 19 ++++--------------- tests/i915/gem_sync.c | 34 ++++++++++++++-------------------- 4 files changed, 40 insertions(+), 35 deletions(-) diff --git a/lib/igt_dummyload.c b/lib/igt_dummyload.c index b9d54450..90223828 100644 --- a/lib/igt_dummyload.c +++ b/lib/igt_dummyload.c @@ -268,6 +268,8 @@ emit_recursive_batch(igt_spin_t *spin, obj[SCRATCH].flags = EXEC_OBJECT_PINNED; obj[BATCH].flags = EXEC_OBJECT_PINNED; + spin->cmd_spin = *spin->batch; + return fence_fd; } @@ -374,6 +376,24 @@ void igt_spin_set_timeout(igt_spin_t *spin, int64_t ns) spin->timer = timer; } +/** + * igt_spin_reset: + * @spin: spin state from igt_spin_new() + * + * Reset the state of spin, allowing its reuse. + */ +void igt_spin_reset(igt_spin_t *spin) +{ + if (!spin) + return; + + if (igt_spin_has_poll(spin)) + spin->poll[SPIN_POLL_START_IDX] = 0; + + *spin->batch = spin->cmd_spin; + __sync_synchronize(); +} + /** * igt_spin_end: * @spin: spin state from igt_spin_new() diff --git a/lib/igt_dummyload.h b/lib/igt_dummyload.h index d6482089..d7b1be91 100644 --- a/lib/igt_dummyload.h +++ b/lib/igt_dummyload.h @@ -37,6 +37,7 @@ typedef struct igt_spin { timer_t timer; struct igt_list link; uint32_t *batch; + uint32_t cmd_spin; int out_fence; struct drm_i915_gem_exec_object2 obj[2]; struct drm_i915_gem_execbuffer2 execbuf; @@ -68,6 +69,7 @@ igt_spin_factory(int fd, const struct igt_spin_factory *opts); igt_spin_factory(fd, &((struct igt_spin_factory){__VA_ARGS__})) void igt_spin_set_timeout(igt_spin_t *spin, int64_t ns); +void igt_spin_reset(igt_spin_t *spin); void igt_spin_end(igt_spin_t *spin); void igt_spin_free(int fd, igt_spin_t *spin); diff --git a/tests/i915/gem_exec_latency.c b/tests/i915/gem_exec_latency.c index 6b7dfbc0..2cfb78bf 100644 --- a/tests/i915/gem_exec_latency.c +++ b/tests/i915/gem_exec_latency.c @@ 
-73,19 +73,17 @@ poll_ring(int fd, unsigned ring, const char *name) unsigned long cycles; igt_spin_t *spin[2]; uint64_t elapsed; - uint32_t cmd; gem_require_ring(fd, ring); igt_require(gem_can_store_dword(fd, ring)); spin[0] = __igt_spin_factory(fd, &opts); igt_assert(igt_spin_has_poll(spin[0])); - cmd = *spin[0]->batch; spin[1] = __igt_spin_factory(fd, &opts); igt_assert(igt_spin_has_poll(spin[1])); - igt_assert(cmd == *spin[1]->batch); + igt_assert(*spin[0]->batch == *spin[1]->batch); igt_spin_end(spin[0]); igt_spin_busywait_until_started(spin[1]); @@ -96,8 +94,8 @@ poll_ring(int fd, unsigned ring, const char *name) while ((elapsed = igt_nsec_elapsed(&tv)) < 2ull << 30) { const unsigned int idx = cycles++ & 1; - *spin[idx]->batch = cmd; - spin[idx]->poll[SPIN_POLL_START_IDX] = 0; + igt_spin_reset(spin[idx]); + gem_execbuf(fd, &spin[idx]->execbuf); igt_spin_end(spin[!idx]); @@ -414,15 +412,6 @@ static void latency_from_ring(int fd, } } -static void __rearm_spin(igt_spin_t *spin) -{ - const uint32_t mi_arb_chk = 0x5 << 23; - - *spin->batch = mi_arb_chk; - spin->poll[SPIN_POLL_START_IDX] = 0; - __sync_synchronize(); -} - static void __submit_spin(int fd, igt_spin_t *spin, unsigned int flags) { @@ -557,7 +546,7 @@ rthog_latency_on_ring(int fd, unsigned int engine, const char *name, unsigned in if (nengine > 1) usleep(10*nengine); - __rearm_spin(spin); + igt_spin_reset(spin); igt_nsec_elapsed(&ts); __submit_spin(fd, spin, engine); diff --git a/tests/i915/gem_sync.c b/tests/i915/gem_sync.c index f17ecd0b..8c5aaa14 100644 --- a/tests/i915/gem_sync.c +++ b/tests/i915/gem_sync.c @@ -209,7 +209,6 @@ wakeup_ring(int fd, unsigned ring, int timeout, int wlen) struct drm_i915_gem_execbuffer2 execbuf; double end, this, elapsed, now, baseline; unsigned long cycles; - uint32_t cmd; igt_spin_t *spin; memset(&object, 0, sizeof(object)); @@ -226,7 +225,6 @@ wakeup_ring(int fd, unsigned ring, int timeout, int wlen) .flags = (IGT_SPIN_POLL_RUN | IGT_SPIN_FAST)); 
igt_assert(igt_spin_has_poll(spin)); - cmd = *spin->batch; gem_execbuf(fd, &execbuf); @@ -238,8 +236,8 @@ wakeup_ring(int fd, unsigned ring, int timeout, int wlen) elapsed = 0; cycles = 0; do { - *spin->batch = cmd; - spin->poll[SPIN_POLL_START_IDX] = 0; + igt_spin_reset(spin); + gem_execbuf(fd, &spin->execbuf); igt_spin_busywait_until_started(spin); @@ -262,8 +260,8 @@ wakeup_ring(int fd, unsigned ring, int timeout, int wlen) elapsed = 0; cycles = 0; do { - *spin->batch = cmd; - spin->poll[SPIN_POLL_START_IDX] = 0; + igt_spin_reset(spin); + gem_execbuf(fd, &spin->execbuf); igt_spin_busywait_until_started(spin); @@ -321,17 +319,14 @@ static void active_ring(int fd, unsigned ring, int timeout) double start, end, elapsed; unsigned long cycles; igt_spin_t *spin[2]; - uint32_t cmd; spin[0] = __igt_spin_new(fd, .engine = ring, .flags = IGT_SPIN_FAST); - cmd = *spin[0]->batch; spin[1] = __igt_spin_new(fd, .engine = ring, .flags = IGT_SPIN_FAST); - igt_assert(*spin[1]->batch == cmd); start = gettime(); end = start + timeout; @@ -343,7 +338,8 @@ static void active_ring(int fd, unsigned ring, int timeout) igt_spin_end(s); gem_sync(fd, s->handle); - *s->batch = cmd; + igt_spin_reset(s); + gem_execbuf(fd, &s->execbuf); } cycles += 1024; @@ -393,7 +389,6 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen) double end, this, elapsed, now, baseline; unsigned long cycles; igt_spin_t *spin[2]; - uint32_t cmd; memset(&object, 0, sizeof(object)); object.handle = gem_create(fd, 4096); @@ -409,7 +404,6 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen) .flags = (IGT_SPIN_POLL_RUN | IGT_SPIN_FAST)); igt_assert(igt_spin_has_poll(spin[0])); - cmd = *spin[0]->batch; spin[1] = __igt_spin_new(fd, .engine = execbuf.flags, @@ -423,8 +417,8 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen) gem_sync(fd, object.handle); for (int warmup = 0; warmup <= 1; warmup++) { - *spin[0]->batch = cmd; - spin[0]->poll[SPIN_POLL_START_IDX] = 0; + 
igt_spin_reset(spin[0]); + gem_execbuf(fd, &spin[0]->execbuf); end = gettime() + timeout/10.; @@ -433,8 +427,8 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen) do { igt_spin_busywait_until_started(spin[0]); - *spin[1]->batch = cmd; - spin[1]->poll[SPIN_POLL_START_IDX] = 0; + igt_spin_reset(spin[1]); + gem_execbuf(fd, &spin[1]->execbuf); this = gettime(); @@ -454,8 +448,8 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen) names[child % num_engines] ? " b" : "B", cycles, elapsed*1e6/cycles); - *spin[0]->batch = cmd; - spin[0]->poll[SPIN_POLL_START_IDX] = 0; + igt_spin_reset(spin[0]); + gem_execbuf(fd, &spin[0]->execbuf); end = gettime() + timeout; @@ -467,8 +461,8 @@ active_wakeup_ring(int fd, unsigned ring, int timeout, int wlen) for (int n = 0; n < wlen; n++) gem_execbuf(fd, &execbuf); - *spin[1]->batch = cmd; - spin[1]->poll[SPIN_POLL_START_IDX] = 0; + igt_spin_reset(spin[1]); + gem_execbuf(fd, &spin[1]->execbuf); this = gettime();