From patchwork Thu Dec 22 08:36:12 2016
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 9484465
From: Chris Wilson
To: dri-devel@lists.freedesktop.org
Date: Thu, 22 Dec 2016 08:36:12 +0000
Message-Id: <20161222083641.2691-10-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20161222083641.2691-1-chris@chris-wilson.co.uk>
References: <20161222083641.2691-1-chris@chris-wilson.co.uk>
Cc: intel-gfx@lists.freedesktop.org
Subject: [Intel-gfx] [PATCH v4 09/38] drm: kselftest for drm_mm_reserve_node()

Exercise drm_mm_reserve_node(): check that we cannot reserve an already
occupied range and that the node lists are correct after
reserving/removing.

v2: Check for invalid node reservation.

Signed-off-by: Chris Wilson
Reviewed-by: Joonas Lahtinen
---
 drivers/gpu/drm/selftests/drm_mm_selftests.h |   1 +
 drivers/gpu/drm/selftests/test-drm_mm.c      | 275 +++++++++++++++++++++++++++
 2 files changed, 276 insertions(+)

diff --git a/drivers/gpu/drm/selftests/drm_mm_selftests.h b/drivers/gpu/drm/selftests/drm_mm_selftests.h
index 0265f09e92fa..693d85677e7f 100644
--- a/drivers/gpu/drm/selftests/drm_mm_selftests.h
+++ b/drivers/gpu/drm/selftests/drm_mm_selftests.h
@@ -8,3 +8,4 @@
 selftest(sanitycheck, igt_sanitycheck) /* keep first (selfcheck for igt) */
 selftest(init, igt_init)
 selftest(debug, igt_debug)
+selftest(reserve, igt_reserve)
diff --git a/drivers/gpu/drm/selftests/test-drm_mm.c b/drivers/gpu/drm/selftests/test-drm_mm.c
index 434320061d9e..789a810d66aa 100644
--- a/drivers/gpu/drm/selftests/test-drm_mm.c
+++ b/drivers/gpu/drm/selftests/test-drm_mm.c
@@ -80,6 +80,57 @@ static bool assert_one_hole(const struct drm_mm *mm, u64 start, u64 end)
 	return ok;
 }
 
+static bool assert_continuous(const struct drm_mm *mm, u64 size)
+{
+	struct drm_mm_node *node, *check, *found;
+	unsigned long n;
+	u64 addr;
+
+	if (!assert_no_holes(mm))
+		return false;
+
+	n = 0;
+	addr = 0;
+	drm_mm_for_each_node(node, mm) {
+		if (node->start != addr) {
+			pr_err("node[%ld] list out of order, expected %llx found %llx\n",
+			       n, addr, node->start);
+			return false;
+		}
+
+		if (node->size != size) {
+			pr_err("node[%ld].size incorrect, expected %llx, found %llx\n",
+			       n, size, node->size);
+			return false;
+		}
+
+		if (node->hole_follows) {
+			pr_err("node[%ld] is followed by a hole!\n", n);
+			return false;
+		}
+
+		found = NULL;
+		drm_mm_for_each_node_in_range(check, mm, addr, addr + size) {
+			if (node != check) {
+				pr_err("lookup returned wrong node, expected start %llx, found %llx\n",
+				       node->start, check->start);
+				return false;
+			}
+			found = check;
+		}
+		if (!found) {
+			pr_err("lookup failed for node %llx + %llx\n",
+			       addr, size);
+			return false;
+		}
+
+		addr += size;
+		n++;
+	}
+
+	return true;
+}
+
 static int igt_init(void *ignored)
 {
 	const unsigned int size = 4096;
@@ -176,6 +227,230 @@ static int igt_debug(void *ignored)
 	return 0;
 }
 
+static struct drm_mm_node *set_node(struct drm_mm_node *node,
+				    u64 start, u64 size)
+{
+	node->start = start;
+	node->size = size;
+	return node;
+}
+
+static bool expect_reserve_fail(struct drm_mm *mm, struct drm_mm_node *node)
+{
+	int err;
+
+	err = drm_mm_reserve_node(mm, node);
+	if (likely(err == -ENOSPC))
+		return true;
+
+	if (!err) {
+		pr_err("impossible reserve succeeded, node %llu + %llu\n",
+		       node->start, node->size);
+		drm_mm_remove_node(node);
+	} else {
+		pr_err("impossible reserve failed with wrong error %d [expected %d], node %llu + %llu\n",
+		       err, -ENOSPC, node->start, node->size);
+	}
+	return false;
+}
+
+static bool check_reserve_boundaries(struct drm_mm *mm,
+				     unsigned int count,
+				     u64 size)
+{
+	const struct boundary {
+		u64 start, size;
+		const char *name;
+	} boundaries[] = {
+#define B(st, sz) { (st), (sz), "{ " #st ", " #sz "}" }
+		B(0, 0),
+		B(-size, 0),
+		B(size, 0),
+		B(size * count, 0),
+		B(-size, size),
+		B(-size, -size),
+		B(-size, 2*size),
+		B(0, -size),
+		B(size, -size),
+		B(count*size, size),
+		B(count*size, -size),
+		B(count*size, count*size),
+		B(count*size, -count*size),
+		B(count*size, -(count+1)*size),
+		B((count+1)*size, size),
+		B((count+1)*size, -size),
+		B((count+1)*size, -2*size),
+#undef B
+	};
+	struct drm_mm_node tmp = {};
+	int n;
+
+	for (n = 0; n < ARRAY_SIZE(boundaries); n++) {
+		if (!expect_reserve_fail(mm,
+					 set_node(&tmp,
+						  boundaries[n].start,
+						  boundaries[n].size))) {
+			pr_err("boundary[%d:%s] failed, count=%u, size=%lld\n",
+			       n, boundaries[n].name, count, size);
+			return false;
+		}
+	}
+
+	return true;
+}
+
+static int __igt_reserve(unsigned int count, u64 size)
+{
+	DRM_RND_STATE(prng, random_seed);
+	struct drm_mm mm;
+	struct drm_mm_node tmp, *nodes, *node, *next;
+	unsigned int *order, n, m, o = 0;
+	int ret, err;
+
+	/* For exercising drm_mm_reserve_node(), we want to check that
+	 * reservations outside of the drm_mm range are rejected, as are
+	 * reservations overlapping already occupied ranges. Afterwards,
+	 * the tree and nodes should be intact.
+	 */
+
+	DRM_MM_BUG_ON(!count);
+	DRM_MM_BUG_ON(!size);
+
+	ret = -ENOMEM;
+	order = drm_random_order(count, &prng);
+	if (!order)
+		goto err;
+
+	nodes = vzalloc(sizeof(*nodes) * count);
+	if (!nodes)
+		goto err_order;
+
+	ret = -EINVAL;
+	drm_mm_init(&mm, 0, count * size);
+
+	if (!check_reserve_boundaries(&mm, count, size))
+		goto out;
+
+	for (n = 0; n < count; n++) {
+		nodes[n].start = order[n] * size;
+		nodes[n].size = size;
+
+		err = drm_mm_reserve_node(&mm, &nodes[n]);
+		if (err) {
+			pr_err("reserve failed, step %d, start %llu\n",
+			       n, nodes[n].start);
+			ret = err;
+			goto out;
+		}
+
+		if (!drm_mm_node_allocated(&nodes[n])) {
+			pr_err("reserved node not allocated! step %d, start %llu\n",
+			       n, nodes[n].start);
+			goto out;
+		}
+
+		if (!expect_reserve_fail(&mm, &nodes[n]))
+			goto out;
+	}
+
+	/* After random insertion the nodes should be in order */
+	if (!assert_continuous(&mm, size))
+		goto out;
+
+	/* Repeated use should then fail */
+	drm_random_reorder(order, count, &prng);
+	for (n = 0; n < count; n++) {
+		if (!expect_reserve_fail(&mm,
+					 set_node(&tmp, order[n] * size, 1)))
+			goto out;
+
+		/* Remove and reinsert should work */
+		drm_mm_remove_node(&nodes[order[n]]);
+		err = drm_mm_reserve_node(&mm, &nodes[order[n]]);
+		if (err) {
+			pr_err("reserve failed, step %d, start %llu\n",
+			       n, nodes[n].start);
+			ret = err;
+			goto out;
+		}
+	}
+
+	if (!assert_continuous(&mm, size))
+		goto out;
+
+	/* Overlapping use should then fail */
+	for (n = 0; n < count; n++) {
+		if (!expect_reserve_fail(&mm, set_node(&tmp, 0, size*count)))
+			goto out;
+	}
+	for (n = 0; n < count; n++) {
+		if (!expect_reserve_fail(&mm,
+					 set_node(&tmp,
+						  size * n,
+						  size * (count - n))))
+			goto out;
+	}
+
+	/* Remove several, reinsert, check full */
+	for_each_prime_number(n, min(max_prime, count)) {
+		for (m = 0; m < n; m++) {
+			node = &nodes[order[(o + m) % count]];
+			drm_mm_remove_node(node);
+		}
+
+		for (m = 0; m < n; m++) {
+			node = &nodes[order[(o + m) % count]];
+			err = drm_mm_reserve_node(&mm, node);
+			if (err) {
+				pr_err("reserve failed, step %d/%d, start %llu\n",
+				       m, n, node->start);
+				ret = err;
+				goto out;
+			}
+		}
+
+		o += n;
+
+		if (!assert_continuous(&mm, size))
+			goto out;
+	}
+
+	ret = 0;
+out:
+	drm_mm_for_each_node_safe(node, next, &mm)
+		drm_mm_remove_node(node);
+	drm_mm_takedown(&mm);
+	vfree(nodes);
+err_order:
+	kfree(order);
+err:
+	return ret;
+}
+
+static int igt_reserve(void *ignored)
+{
+	const unsigned int count = min_t(unsigned int, BIT(10), max_iterations);
+	int n, ret;
+
+	for_each_prime_number_from(n, 1, 54) {
+		u64 size = BIT_ULL(n);
+
+		ret = __igt_reserve(count, size - 1);
+		if (ret)
+			return ret;
+
+		ret = __igt_reserve(count, size);
+		if (ret)
+			return ret;
+
+		ret = __igt_reserve(count, size + 1);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
 #include "drm_selftest.c"
 
 static int __init test_drm_mm_init(void)
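
For readers who have not used the drm_mm range manager before, the pattern
that __igt_reserve() stresses at scale reduces to the sketch below. This is
illustrative only and not part of the patch: example_reserve() is a made-up
caller, while drm_mm_init(), drm_mm_reserve_node(), drm_mm_remove_node() and
drm_mm_takedown() are the same entry points used by the selftest above.

/*
 * Illustrative sketch only (not part of the patch): the reserve/remove
 * cycle that __igt_reserve() exercises. example_reserve() is hypothetical.
 */
#include <drm/drm_mm.h>

static int example_reserve(void)
{
	struct drm_mm mm;
	struct drm_mm_node node = {};
	int err;

	/* Manage a small address range: [0, 16 * 4096) */
	drm_mm_init(&mm, 0, 16 * 4096);

	/* Fill in the exact start/size we want, then ask drm_mm to track it */
	node.start = 4096;
	node.size = 4096;
	err = drm_mm_reserve_node(&mm, &node);
	if (err == 0) {
		/* ... the range [4096, 8192) now belongs to us ... */
		drm_mm_remove_node(&node);
	}
	/* err == -ENOSPC if the range was occupied or out of bounds */

	drm_mm_takedown(&mm);	/* the manager must be empty again here */
	return err;
}

drm_mm_reserve_node() returns -ENOSPC when the requested range overlaps an
existing node or lies outside the range handed to drm_mm_init(), which is
exactly the behaviour expect_reserve_fail() and check_reserve_boundaries()
treat as the only acceptable failure mode.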