From patchwork Fri Jul 13 09:51:14 2018
X-Patchwork-Submitter: Nayan Deshmukh
X-Patchwork-Id: 10522909
From: Nayan Deshmukh
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v2 2/2] drm/scheduler: modify args of drm_sched_entity_init
Date: Fri, 13 Jul 2018 15:21:14 +0530
Message-Id: <20180713095114.29211-2-nayan26deshmukh@gmail.com>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180713095114.29211-1-nayan26deshmukh@gmail.com>
References: <20180713095114.29211-1-nayan26deshmukh@gmail.com>
Cc: Nayan Deshmukh, amd-gfx@lists.freedesktop.org, christian.koenig@amd.com

Replace the single run queue with a list of run queues, and remove the
sched arg as that is part of the run queue itself.

Signed-off-by: Nayan Deshmukh
Reviewed-by: Christian König
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c   |  4 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c   |  3 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c   |  4 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c   |  3 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c    |  3 +--
 drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c     |  4 ++--
 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c     |  4 ++--
 drivers/gpu/drm/etnaviv/etnaviv_drv.c     |  8 ++++----
 drivers/gpu/drm/scheduler/gpu_scheduler.c | 20 ++++++++++++--------
 drivers/gpu/drm/v3d/v3d_drv.c             |  7 +++----
 include/drm/gpu_scheduler.h               |  6 +++---
 11 files changed, 33 insertions(+), 33 deletions(-)
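For reference, a minimal sketch (not part of the submitted patch) of the
conversion pattern the hunks below apply to each caller: a driver that
used to pass the scheduler plus a single run queue now passes a
one-element array of run-queue pointers. "sched" and "entity" are
placeholders for a driver's already-initialized objects.

#include <drm/gpu_scheduler.h>

/*
 * Illustration only: "sched" and "entity" stand in for a driver's
 * already-initialized drm_gpu_scheduler and drm_sched_entity storage.
 */
static int example_single_rq_init(struct drm_gpu_scheduler *sched,
				  struct drm_sched_entity *entity)
{
	struct drm_sched_rq *rq;

	/* Pick the run queue the old call used to pass directly. */
	rq = &sched->sched_rq[DRM_SCHED_PRIORITY_NORMAL];

	/* Old interface: drm_sched_entity_init(sched, entity, rq, NULL); */
	return drm_sched_entity_init(entity, &rq, 1, NULL);
}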
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index 0120b24fae1b..83e3b320a793 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -90,8 +90,8 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev,
 		if (ring == &adev->gfx.kiq.ring)
 			continue;
 
-		r = drm_sched_entity_init(&ring->sched, &ctx->rings[i].entity,
-					  rq, &ctx->guilty);
+		r = drm_sched_entity_init(&ctx->rings[i].entity,
+					  &rq, 1, &ctx->guilty);
 		if (r)
 			goto failed;
 	}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 0246cb87d9e4..360d6b82c3c7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -140,8 +140,7 @@ static int amdgpu_ttm_global_init(struct amdgpu_device *adev)
 
 	ring = adev->mman.buffer_funcs_ring;
 	rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_KERNEL];
-	r = drm_sched_entity_init(&ring->sched, &adev->mman.entity,
-				  rq, NULL);
+	r = drm_sched_entity_init(&adev->mman.entity, &rq, 1, NULL);
 	if (r) {
 		DRM_ERROR("Failed setting up TTM BO move run queue.\n");
 		goto error_entity;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
index 0b46ea1c6290..0d8c57fc3d09 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
@@ -266,8 +266,8 @@ int amdgpu_uvd_sw_init(struct amdgpu_device *adev)
 
 		ring = &adev->uvd.inst[j].ring;
 		rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
-		r = drm_sched_entity_init(&ring->sched, &adev->uvd.inst[j].entity,
-					  rq, NULL);
+		r = drm_sched_entity_init(&adev->uvd.inst[j].entity, &rq,
+					  1, NULL);
 		if (r != 0) {
 			DRM_ERROR("Failed setting up UVD(%d) run queue.\n", j);
 			return r;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
index b0dcdfd85f5b..c0790e4c5454 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
@@ -190,8 +190,7 @@ int amdgpu_vce_sw_init(struct amdgpu_device *adev, unsigned long size)
 
 	ring = &adev->vce.ring[0];
 	rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
-	r = drm_sched_entity_init(&ring->sched, &adev->vce.entity,
-				  rq, NULL);
+	r = drm_sched_entity_init(&adev->vce.entity, &rq, 1, NULL);
 	if (r != 0) {
 		DRM_ERROR("Failed setting up VCE run queue.\n");
 		return r;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 712af5c1a5d6..fb7da0740ad8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -2562,8 +2562,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 	ring_instance %= adev->vm_manager.vm_pte_num_rings;
 	ring = adev->vm_manager.vm_pte_rings[ring_instance];
 	rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_KERNEL];
-	r = drm_sched_entity_init(&ring->sched, &vm->entity,
-				  rq, NULL);
+	r = drm_sched_entity_init(&vm->entity, &rq, 1, NULL);
 	if (r)
 		return r;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
index 8ee1c2eaaa14..da38aa2f3d13 100644
--- a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
@@ -429,8 +429,8 @@ static int uvd_v6_0_sw_init(void *handle)
 		struct drm_sched_rq *rq;
 		ring = &adev->uvd.inst->ring_enc[0];
 		rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
-		r = drm_sched_entity_init(&ring->sched, &adev->uvd.inst->entity_enc,
-					  rq, NULL);
+		r = drm_sched_entity_init(&adev->uvd.inst->entity_enc,
+					  &rq, 1, NULL);
 		if (r) {
 			DRM_ERROR("Failed setting up UVD ENC run queue.\n");
 			return r;
diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
index ba244d3b74db..69221430aa38 100644
--- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
@@ -431,8 +431,8 @@ static int uvd_v7_0_sw_init(void *handle)
 	for (j = 0; j < adev->uvd.num_uvd_inst; j++) {
 		ring = &adev->uvd.inst[j].ring_enc[0];
 		rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
-		r = drm_sched_entity_init(&ring->sched, &adev->uvd.inst[j].entity_enc,
-					  rq, NULL);
+		r = drm_sched_entity_init(&adev->uvd.inst[j].entity_enc,
+					  &rq, 1, NULL);
 		if (r) {
 			DRM_ERROR("(%d)Failed setting up UVD ENC run queue.\n", j);
 			return r;
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
index 45bfdf4cc107..36414ba56b22 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
@@ -49,12 +49,12 @@ static int etnaviv_open(struct drm_device *dev, struct drm_file *file)
 
 	for (i = 0; i < ETNA_MAX_PIPES; i++) {
 		struct etnaviv_gpu *gpu = priv->gpu[i];
+		struct drm_sched_rq *rq;
 
 		if (gpu) {
-			drm_sched_entity_init(&gpu->sched,
-				&ctx->sched_entity[i],
-				&gpu->sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL],
-				NULL);
+			rq = &gpu->sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
+			drm_sched_entity_init(&ctx->sched_entity[i],
+					      &rq, 1, NULL);
 		}
 	}
 
diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
index 429b1328653a..16bf446aa6b3 100644
--- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
@@ -162,26 +162,30 @@ drm_sched_rq_select_entity(struct drm_sched_rq *rq)
  * drm_sched_entity_init - Init a context entity used by scheduler when
  * submit to HW ring.
  *
- * @sched: scheduler instance
  * @entity: scheduler entity to init
- * @rq: the run queue this entity belongs
+ * @rq_list: the list of run queues on which jobs from this
+ *           entity can be submitted
+ * @num_rq_list: number of run queues in rq_list
  * @guilty: atomic_t set to 1 when a job on this queue
  *          is found to be guilty causing a timeout
  *
+ * Note: the rq_list should have at least one element to schedule
+ *       the entity
+ *
  * Returns 0 on success or a negative error code on failure.
  */
-int drm_sched_entity_init(struct drm_gpu_scheduler *sched,
-			  struct drm_sched_entity *entity,
-			  struct drm_sched_rq *rq,
+int drm_sched_entity_init(struct drm_sched_entity *entity,
+			  struct drm_sched_rq **rq_list,
+			  unsigned int num_rq_list,
 			  atomic_t *guilty)
 {
-	if (!(sched && entity && rq))
+	if (!(entity && rq_list && num_rq_list > 0 && rq_list[0]))
 		return -EINVAL;
 
 	memset(entity, 0, sizeof(struct drm_sched_entity));
 	INIT_LIST_HEAD(&entity->list);
-	entity->rq = rq;
-	entity->sched = sched;
+	entity->rq = rq_list[0];
+	entity->sched = rq_list[0]->sched;
 	entity->guilty = guilty;
 	entity->last_scheduled = NULL;
 
diff --git a/drivers/gpu/drm/v3d/v3d_drv.c b/drivers/gpu/drm/v3d/v3d_drv.c
index 567f7d46d912..1dceba2b42fd 100644
--- a/drivers/gpu/drm/v3d/v3d_drv.c
+++ b/drivers/gpu/drm/v3d/v3d_drv.c
@@ -123,6 +123,7 @@ v3d_open(struct drm_device *dev, struct drm_file *file)
 {
 	struct v3d_dev *v3d = to_v3d_dev(dev);
 	struct v3d_file_priv *v3d_priv;
+	struct drm_sched_rq *rq;
 	int i;
 
 	v3d_priv = kzalloc(sizeof(*v3d_priv), GFP_KERNEL);
@@ -132,10 +133,8 @@ v3d_open(struct drm_device *dev, struct drm_file *file)
 	v3d_priv->v3d = v3d;
 
 	for (i = 0; i < V3D_MAX_QUEUES; i++) {
-		drm_sched_entity_init(&v3d->queue[i].sched,
-				      &v3d_priv->sched_entity[i],
-				      &v3d->queue[i].sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL],
-				      NULL);
+		rq = &v3d->queue[i].sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
+		drm_sched_entity_init(&v3d_priv->sched_entity[i], &rq, 1, NULL);
 	}
 
 	file->driver_priv = v3d_priv;
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 43e93d6077cf..2205e89722f6 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -282,9 +282,9 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
 		   const char *name);
 void drm_sched_fini(struct drm_gpu_scheduler *sched);
 
-int drm_sched_entity_init(struct drm_gpu_scheduler *sched,
-			  struct drm_sched_entity *entity,
-			  struct drm_sched_rq *rq,
+int drm_sched_entity_init(struct drm_sched_entity *entity,
+			  struct drm_sched_rq **rq_list,
+			  unsigned int num_rq_list,
 			  atomic_t *guilty);
 long drm_sched_entity_flush(struct drm_gpu_scheduler *sched,
 			    struct drm_sched_entity *entity, long timeout);
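A second sketch (again not part of the patch) of what the widened
interface permits: a caller may now hand over more than one run queue.
With this change drm_sched_entity_init() still binds the entity to
rq_list[0] and that run queue's scheduler, so the extra entries are not
yet used. "sched_a" and "sched_b" are hypothetical, already-initialized
scheduler instances.

#include <linux/kernel.h>
#include <drm/gpu_scheduler.h>

/*
 * Illustration only: two hypothetical schedulers of the same ring type.
 * As of this patch, only rq_list[0] is used to place the entity.
 */
static int example_multi_rq_init(struct drm_gpu_scheduler *sched_a,
				 struct drm_gpu_scheduler *sched_b,
				 struct drm_sched_entity *entity)
{
	struct drm_sched_rq *rq_list[] = {
		&sched_a->sched_rq[DRM_SCHED_PRIORITY_NORMAL],
		&sched_b->sched_rq[DRM_SCHED_PRIORITY_NORMAL],
	};

	return drm_sched_entity_init(entity, rq_list,
				     ARRAY_SIZE(rq_list), NULL);
}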