From patchwork Fri Jul 31 22:22:21 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alex Deucher
X-Patchwork-Id: 6920771
From: Alex Deucher
To: dri-devel@lists.freedesktop.org
Subject: [PATCH 05/31] drm/amdgpu: add backend implementation of gpu scheduler (v2)
Date: Fri, 31 Jul 2015 18:22:21 -0400
Message-Id: <1438381367-24980-6-git-send-email-alexander.deucher@amd.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1438381367-24980-1-git-send-email-alexander.deucher@amd.com>
References: <1438381367-24980-1-git-send-email-alexander.deucher@amd.com>

From: Chunming Zhou

v2: fix rebase breakage

Signed-off-by: Chunming Zhou
Acked-by: Christian König
Reviewed-by: Jammy Zhou
---
 drivers/gpu/drm/amd/amdgpu/Makefile       |   3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu.h       |   8 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c |   3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c | 107 ++++++++++++++++++++++++++++++
 4 files changed, 119 insertions(+), 2 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index 8709026..41d526e 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -96,7 +96,8 @@ endif
 
 # GPU scheduler
 amdgpu-y += \
-	../scheduler/gpu_scheduler.o
+	../scheduler/gpu_scheduler.o \
+	amdgpu_sched.o
 
 amdgpu-$(CONFIG_COMPAT) += amdgpu_ioc32.o
 amdgpu-$(CONFIG_VGA_SWITCHEROO) += amdgpu_atpx_handler.o
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 12ac818..dd05335 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -417,6 +417,7 @@ struct amdgpu_user_fence {
 	struct amdgpu_bo	*bo;
 	/* write-back address offset to bo start */
 	uint32_t		offset;
+	uint64_t		sequence;
 };
 
 int amdgpu_fence_driver_init(struct amdgpu_device *adev);
@@ -858,6 +859,8 @@ enum amdgpu_ring_type {
 	AMDGPU_RING_TYPE_VCE
 };
 
+extern struct amd_sched_backend_ops amdgpu_sched_ops;
+
 struct amdgpu_ring {
 	struct amdgpu_device		*adev;
 	const struct amdgpu_ring_funcs	*funcs;
@@ -1228,6 +1231,11 @@ struct amdgpu_cs_parser {
 
 	/* user fence */
 	struct amdgpu_user_fence uf;
+
+	struct mutex job_lock;
+	struct work_struct job_work;
+	int (*prepare_job)(struct amdgpu_cs_parser *sched_job);
+	int (*run_job)(struct amdgpu_cs_parser *sched_job);
 };
 
 static inline u32 amdgpu_get_ib_value(struct amdgpu_cs_parser *p, uint32_t ib_idx, int idx)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 6cb3290..fdb3105 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -905,7 +905,8 @@ void amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring)
 	if (amdgpu_enable_scheduler) {
 		ring->scheduler = amd_sched_create((void *)ring->adev,
-						   NULL, ring->idx, 5, 0);
+						   &amdgpu_sched_ops,
+						   ring->idx, 5, 0);
 		if (!ring->scheduler)
 			DRM_ERROR("Failed to create scheduler on ring %d.\n",
 				  ring->idx);
 	}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
new file mode 100644
index 0000000..1f7bf31
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
@@ -0,0 +1,107 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ *
+ */
+#include <linux/kthread.h>
+#include <linux/wait.h>
+#include <linux/sched.h>
+#include <drm/drmP.h>
+#include "amdgpu.h"
+
+static int amdgpu_sched_prepare_job(struct amd_gpu_scheduler *sched,
+				    struct amd_context_entity *c_entity,
+				    void *job)
+{
+	int r = 0;
+	struct amdgpu_cs_parser *sched_job = (struct amdgpu_cs_parser *)job;
+	if (sched_job->prepare_job)
+		r = sched_job->prepare_job(sched_job);
+	if (r) {
+		DRM_ERROR("Prepare job error\n");
+		schedule_work(&sched_job->job_work);
+	}
+	return r;
+}
+
+static void amdgpu_sched_run_job(struct amd_gpu_scheduler *sched,
+				 struct amd_context_entity *c_entity,
+				 void *job)
+{
+	int r = 0;
+	struct amdgpu_cs_parser *sched_job = (struct amdgpu_cs_parser *)job;
+
+	mutex_lock(&sched_job->job_lock);
+	r = amdgpu_ib_schedule(sched_job->adev,
+			       sched_job->num_ibs,
+			       sched_job->ibs,
+			       sched_job->filp);
+	if (r)
+		goto err;
+
+	if (sched_job->run_job) {
+		r = sched_job->run_job(sched_job);
+		if (r)
+			goto err;
+	}
+	mutex_unlock(&sched_job->job_lock);
+	return;
+err:
+	DRM_ERROR("Run job error\n");
+	mutex_unlock(&sched_job->job_lock);
+	schedule_work(&sched_job->job_work);
+}
+
+static void amdgpu_sched_process_job(struct amd_gpu_scheduler *sched, void *job)
+{
+	struct amdgpu_cs_parser *sched_job = NULL;
+	struct amdgpu_fence *fence = NULL;
+	struct amdgpu_ring *ring = NULL;
+	struct amdgpu_device *adev = NULL;
+	struct amd_context_entity *c_entity = NULL;
+
+	if (!job)
+		return;
+	sched_job = (struct amdgpu_cs_parser *)job;
+	fence = sched_job->ibs[sched_job->num_ibs - 1].fence;
+	if (!fence)
+		return;
+	ring = fence->ring;
+	adev = ring->adev;
+
+	if (sched_job->ctx) {
+		c_entity = &sched_job->ctx->rings[ring->idx].c_entity;
+		atomic64_set(&c_entity->last_signaled_v_seq,
+			     sched_job->uf.sequence);
+	}
+
+	/* wake up users waiting for time stamp */
+	wake_up_all(&c_entity->wait_queue);
+
+	schedule_work(&sched_job->job_work);
+}
+
+struct amd_sched_backend_ops amdgpu_sched_ops = {
+	.prepare_job = amdgpu_sched_prepare_job,
+	.run_job = amdgpu_sched_run_job,
+	.process_job = amdgpu_sched_process_job
+};
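
The new file wires the amdgpu CS parser into the shared GPU scheduler through a
table of callbacks: prepare_job runs before submission, run_job pushes the
parsed IBs to the ring under job_lock, and process_job signals the context
entity's sequence and wakes waiters once the last fence completes. As a rough
illustration of that callback-table pattern only (not the kernel API: the
toy_job/toy_backend_ops structs and the single-threaded dispatch loop below
are hypothetical stand-ins, whereas the real scheduler runs a kthread per ring
and drives process_job from fence completion), a standalone C sketch:

/* toy_sched.c - illustrative sketch of a prepare/run/process callback table */
#include <stdio.h>

struct toy_job {
	int id;
	int fail_prepare;	/* force a prepare error for demonstration */
};

struct toy_backend_ops {
	int  (*prepare_job)(struct toy_job *job);
	void (*run_job)(struct toy_job *job);
	void (*process_job)(struct toy_job *job);
};

static int toy_prepare(struct toy_job *job)
{
	if (job->fail_prepare) {
		fprintf(stderr, "job %d: prepare error\n", job->id);
		return -1;
	}
	printf("job %d: prepared\n", job->id);
	return 0;
}

static void toy_run(struct toy_job *job)
{
	/* the real backend submits IBs to the ring at this point */
	printf("job %d: submitted\n", job->id);
}

static void toy_process(struct toy_job *job)
{
	/* the real backend records the signaled sequence and wakes waiters */
	printf("job %d: completed\n", job->id);
}

static const struct toy_backend_ops toy_ops = {
	.prepare_job = toy_prepare,
	.run_job     = toy_run,
	.process_job = toy_process,
};

/* minimal dispatch loop: prepare, run, then process each job in order */
static void toy_schedule(const struct toy_backend_ops *ops,
			 struct toy_job *jobs, int count)
{
	for (int i = 0; i < count; i++) {
		if (ops->prepare_job(&jobs[i]))
			continue;	/* skip jobs that fail preparation */
		ops->run_job(&jobs[i]);
		ops->process_job(&jobs[i]);
	}
}

int main(void)
{
	struct toy_job jobs[] = {
		{ .id = 1 }, { .id = 2, .fail_prepare = 1 }, { .id = 3 },
	};

	toy_schedule(&toy_ops, jobs, 3);
	return 0;
}

The point of the indirection is the same as in the patch: the core scheduler
loop never needs to know how a particular ring prepares, submits, or completes
work; it only calls through the ops table the backend registered at
amd_sched_create() time.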