From patchwork Fri Nov 28 14:46:24 2014
X-Patchwork-Submitter: tim.gore@intel.com
X-Patchwork-Id: 5404161
From: tim.gore@intel.com
To: intel-gfx@lists.freedesktop.org
Date: Fri, 28 Nov 2014 14:46:24 +0000
Message-Id: <1417185984-30815-1-git-send-email-tim.gore@intel.com>
X-Mailer: git-send-email 2.1.3
Subject: [Intel-gfx] [RFC] tests/gem_ring_sync_copy: reduce memory usage
List-Id: Intel graphics driver community testing &
development

From: Tim Gore <tim.gore@intel.com>

gem_ring_sync_copy uses a lot of memory and gets OOM-killed on smaller
systems (e.g. Android devices). Most of the allocation is for "busy work"
to keep the render rings busy, and for this we can simply re-use the same
few buffers over and over. This enables the test to be run on low-end
devices.

Signed-off-by: Tim Gore <tim.gore@intel.com>
---
 tests/gem_ring_sync_copy.c | 25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/tests/gem_ring_sync_copy.c b/tests/gem_ring_sync_copy.c
index 4a732d2..7257188 100644
--- a/tests/gem_ring_sync_copy.c
+++ b/tests/gem_ring_sync_copy.c
@@ -57,6 +57,7 @@
 
 #define WIDTH 512
 #define HEIGHT 512
+#define NUM_BUSY_BUFFERS 32
 
 typedef struct {
 	int drm_fd;
@@ -163,11 +164,13 @@ static void render_busy(data_t *data)
 	size_t array_size;
 	int i;
 
-	array_size = data->n_buffers_load * sizeof(struct igt_buf);
+	/* allocate 32 buffer objects and re-use them as needed */
+	array_size = NUM_BUSY_BUFFERS * sizeof(struct igt_buf);
+
 	data->render.srcs = malloc(array_size);
 	data->render.dsts = malloc(array_size);
 
-	for (i = 0; i < data->n_buffers_load; i++) {
+	for (i = 0; i < NUM_BUSY_BUFFERS; i++) {
 		scratch_buf_init(data, &data->render.srcs[i], WIDTH, HEIGHT,
 				 0xdeadbeef);
 		scratch_buf_init(data, &data->render.dsts[i], WIDTH, HEIGHT,
@@ -177,10 +180,10 @@ static void render_busy(data_t *data)
 	for (i = 0; i < data->n_buffers_load; i++) {
 		data->render.copy(data->batch,
 				  NULL,			/* context */
-				  &data->render.srcs[i],
+				  &data->render.srcs[i % NUM_BUSY_BUFFERS],
 				  0, 0,			/* src_x, src_y */
 				  WIDTH, HEIGHT,
-				  &data->render.dsts[i],
+				  &data->render.dsts[i % NUM_BUSY_BUFFERS],
 				  0, 0			/* dst_x, dst_y */);
 	}
 }
@@ -189,7 +192,7 @@ static void render_busy_fini(data_t *data)
 {
 	int i;
 
-	for (i = 0; i < data->n_buffers_load; i++) {
+	for (i = 0; i < NUM_BUSY_BUFFERS; i++) {
 		drm_intel_bo_unreference(data->render.srcs[i].bo);
 		drm_intel_bo_unreference(data->render.dsts[i].bo);
 	}
@@ -225,11 +228,13 @@ static void blitter_busy(data_t *data)
 	size_t array_size;
 	int i;
 
-	array_size = data->n_buffers_load * sizeof(drm_intel_bo *);
+	/* allocate 32 buffer objects and re-use them as needed */
+	array_size = NUM_BUSY_BUFFERS * sizeof(drm_intel_bo *);
+
 	data->blitter.srcs = malloc(array_size);
 	data->blitter.dsts = malloc(array_size);
 
-	for (i = 0; i < data->n_buffers_load; i++) {
+	for (i = 0; i < NUM_BUSY_BUFFERS; i++) {
 		data->blitter.srcs[i] = bo_create(data,
 						  WIDTH, HEIGHT,
 						  0xdeadbeef);
@@ -240,8 +245,8 @@ static void blitter_busy(data_t *data)
 
 	for (i = 0; i < data->n_buffers_load; i++) {
 		intel_copy_bo(data->batch,
-			      data->blitter.srcs[i],
-			      data->blitter.dsts[i],
+			      data->blitter.srcs[i % NUM_BUSY_BUFFERS],
+			      data->blitter.dsts[i % NUM_BUSY_BUFFERS],
 			      WIDTH*HEIGHT*4);
 	}
 }
@@ -250,7 +255,7 @@ static void blitter_busy_fini(data_t *data)
 {
 	int i;
 
-	for (i = 0; i < data->n_buffers_load; i++) {
+	for (i = 0; i < NUM_BUSY_BUFFERS; i++) {
 		drm_intel_bo_unreference(data->blitter.srcs[i]);
 		drm_intel_bo_unreference(data->blitter.dsts[i]);
 	}