From patchwork Wed Jun  1 13:10:10 2016
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 9147415
From: Christian König
To: dri-devel@lists.freedesktop.org
Subject: [PATCH 09/11] drm/amdgpu: reuse VMIDs assigned to a VM only if there is also a free one
Date: Wed, 1 Jun 2016 15:10:10 +0200
Message-Id: <1464786612-5010-10-git-send-email-deathsimple@vodafone.de>
In-Reply-To: <1464786612-5010-1-git-send-email-deathsimple@vodafone.de>
References: <1464786612-5010-1-git-send-email-deathsimple@vodafone.de>
Cc: linux-kernel@vger.kernel.org
This fixes a fairness problem with the GPU scheduler: a VM with a lot
of jobs could previously starve VMs with fewer jobs.

Signed-off-by: Christian König
---
A minimal standalone sketch of the new allocation policy follows the
diff, for readers who want to see the idea in isolation.

 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 113 +++++++++++++++++----------------
 1 file changed, 59 insertions(+), 54 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index b6484a2..f206820 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -179,75 +179,80 @@ int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
 	uint64_t pd_addr = amdgpu_bo_gpu_offset(vm->page_directory);
 	struct amdgpu_device *adev = ring->adev;
 	struct fence *updates = sync->last_vm_update;
-	struct amdgpu_vm_id *id;
+	struct amdgpu_vm_id *id, *idle;
 	unsigned i = ring->idx;
 	int r;
 
 	mutex_lock(&adev->vm_manager.lock);
 
-	/* Check if we can use a VMID already assigned to this VM */
-	do {
-		struct fence *flushed;
-
-		id = vm->ids[i++];
-		if (i == AMDGPU_MAX_RINGS)
-			i = 0;
-
-		/* Check all the prerequisites to using this VMID */
-		if (!id)
-			continue;
-
-		if (atomic64_read(&id->owner) != vm->client_id)
-			continue;
-
-		if (pd_addr != id->pd_gpu_addr)
-			continue;
+	/* Check if we have an idle VMID */
+	list_for_each_entry(idle, &adev->vm_manager.ids_lru, list) {
+		if (amdgpu_sync_is_idle(&idle->active, ring))
+			break;
 
-		if (id->last_user != ring &&
-		    (!id->last_flush || !fence_is_signaled(id->last_flush)))
-			continue;
+	}
 
-		flushed = id->flushed_updates;
-		if (updates && (!flushed || fence_is_later(updates, flushed)))
-			continue;
+	/* If we can't find a idle VMID to use, just wait for the oldest */
+	if (&idle->list == &adev->vm_manager.ids_lru) {
+		id = list_first_entry(&adev->vm_manager.ids_lru,
+				      struct amdgpu_vm_id,
+				      list);
+	} else {
+		/* Check if we can use a VMID already assigned to this VM */
+		do {
+			struct fence *flushed;
+
+			id = vm->ids[i++];
+			if (i == AMDGPU_MAX_RINGS)
+				i = 0;
+
+			/* Check all the prerequisites to using this VMID */
+			if (!id)
+				continue;
+
+			if (atomic64_read(&id->owner) != vm->client_id)
+				continue;
+
+			if (pd_addr != id->pd_gpu_addr)
+				continue;
+
+			if (id->last_user != ring && (!id->last_flush ||
+			    !fence_is_signaled(id->last_flush)))
+				continue;
+
+			flushed = id->flushed_updates;
+			if (updates && (!flushed ||
+			    fence_is_later(updates, flushed)))
+				continue;
+
+			/* Good we can use this VMID */
+			if (id->last_user == ring) {
+				r = amdgpu_sync_fence(ring->adev, sync,
+						      id->first);
+				if (r)
+					goto error;
+			}
 
-		/* Good we can use this VMID */
-		if (id->last_user == ring) {
-			r = amdgpu_sync_fence(ring->adev, sync,
-					      id->first);
+			/* And remember this submission as user of the VMID */
+			r = amdgpu_sync_fence(ring->adev, &id->active, fence);
 			if (r)
 				goto error;
-		}
-
-		/* And remember this submission as user of the VMID */
-		r = amdgpu_sync_fence(ring->adev, &id->active, fence);
-		if (r)
-			goto error;
 
-		list_move_tail(&id->list, &adev->vm_manager.ids_lru);
-		vm->ids[ring->idx] = id;
+			list_move_tail(&id->list, &adev->vm_manager.ids_lru);
+			vm->ids[ring->idx] = id;
 
-		*vm_id = id - adev->vm_manager.ids;
-		*vm_pd_addr = AMDGPU_VM_NO_FLUSH;
-		trace_amdgpu_vm_grab_id(vm, ring->idx, *vm_id, *vm_pd_addr);
+			*vm_id = id - adev->vm_manager.ids;
+			*vm_pd_addr = AMDGPU_VM_NO_FLUSH;
+			trace_amdgpu_vm_grab_id(vm, ring->idx, *vm_id,
+						*vm_pd_addr);
 
-		mutex_unlock(&adev->vm_manager.lock);
-		return 0;
+			mutex_unlock(&adev->vm_manager.lock);
+			return 0;
 
-	} while (i != ring->idx);
+		} while (i != ring->idx);
 
-	/* Check if we have an idle VMID */
-	list_for_each_entry(id, &adev->vm_manager.ids_lru, list) {
-		if (amdgpu_sync_is_idle(&id->active, ring))
-			break;
-
-	}
-
-	/* If we can't find a idle VMID to use, just wait for the oldest */
-	if (&id->list == &adev->vm_manager.ids_lru) {
-		id = list_first_entry(&adev->vm_manager.ids_lru,
-				      struct amdgpu_vm_id,
-				      list);
+		/* Still no ID to use? Then use the idle one found earlier */
+		id = idle;
 	}
 
 	r = amdgpu_sync_cycle_fences(sync, &id->active, fence);
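As promised above, here is a minimal, self-contained sketch (plain C99,
not kernel code) of the allocation policy the patch introduces: look for
an idle VMID first, and only reuse a VMID already assigned to this VM
when that search succeeds, so a VM submitting many jobs can no longer
hold on to IDs that other VMs are waiting for. All names below (pool,
lru, grab_id, retire) are illustrative stand-ins, not amdgpu APIs, and
the kernel's fence and ring handling is reduced to a single busy flag.

#include <stdbool.h>
#include <stdio.h>

#define NUM_IDS 4

struct vm_id {
	int owner;  /* client id of the VM that last used this VMID, -1 if none */
	bool busy;  /* stand-in for "fences in id->active not yet signaled" */
};

static struct vm_id pool[NUM_IDS];
static int lru[NUM_IDS] = { 0, 1, 2, 3 };  /* lru[0] is the oldest VMID */

/* Move the VMID at LRU position pos to the most-recently-used tail. */
static void touch(int pos)
{
	int id = lru[pos];

	for (int i = pos; i < NUM_IDS - 1; i++)
		lru[i] = lru[i + 1];
	lru[NUM_IDS - 1] = id;
}

static int grab_id(int client)
{
	int pos = -1;

	/* First look for an idle VMID, oldest first. */
	for (int i = 0; i < NUM_IDS; i++) {
		if (!pool[lru[i]].busy) {
			pos = i;
			break;
		}
	}

	if (pos < 0) {
		/*
		 * Nothing idle: every VM now waits on the oldest VMID, so a
		 * VM with many jobs cannot starve the others.
		 */
		pos = 0;
	} else {
		/*
		 * An idle VMID exists, so reusing one of our own VMIDs (which
		 * skips the expensive VM flush) cannot take the last free
		 * slot away from another VM. The kernel additionally checks
		 * the page-directory address, flush state and last user here;
		 * the sketch elides those prerequisites.
		 */
		for (int i = 0; i < NUM_IDS; i++) {
			if (pool[lru[i]].owner == client) {
				pos = i;
				break;
			}
		}
	}

	int id = lru[pos];

	pool[id].owner = client;
	pool[id].busy = true;
	touch(pos);
	return id;
}

/* Stand-in for all fences of a VMID signaling. */
static void retire(int id)
{
	pool[id].busy = false;
}

int main(void)
{
	for (int i = 0; i < NUM_IDS; i++)
		pool[i].owner = -1;

	printf("client 7 -> VMID %d\n", grab_id(7)); /* grabs idle id 0 */
	printf("client 9 -> VMID %d\n", grab_id(9)); /* grabs idle id 1 */
	retire(0);
	printf("client 7 -> VMID %d\n", grab_id(7)); /* reuses its own id 0 */
	return 0;
}

Note how the reuse path only runs in the else branch: exactly the
restructuring the diff performs by moving the do/while loop under the
"idle VMID found" case.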