From patchwork Mon Aug 15 18:59:31 2022
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 12943975
From: Christian König
To: dri-devel@lists.freedesktop.org
Subject: [PATCH 01/10] drm/sched: move calling drm_sched_entity_select_rq
Date: Mon, 15 Aug 2022 20:59:31 +0200
Message-Id: <20220815185940.4744-2-christian.koenig@amd.com>
In-Reply-To: <20220815185940.4744-1-christian.koenig@amd.com>

We already discussed that the call to drm_sched_entity_select_rq() needs
to move to drm_sched_job_arm() so that a new scheduler list can be set
between _init() and _arm(). This was just never applied for some reason.

Signed-off-by: Christian König
Reviewed-by: Andrey Grodzovsky
---
 drivers/gpu/drm/scheduler/sched_main.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 68317d3a7a27..e0ab14e0fb6b 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -592,7 +592,6 @@ int drm_sched_job_init(struct drm_sched_job *job,
                        struct drm_sched_entity *entity,
                        void *owner)
 {
-        drm_sched_entity_select_rq(entity);
         if (!entity->rq)
                 return -ENOENT;
 
@@ -628,7 +627,7 @@ void drm_sched_job_arm(struct drm_sched_job *job)
         struct drm_sched_entity *entity = job->entity;
 
         BUG_ON(!entity);
-
+        drm_sched_entity_select_rq(entity);
         sched = entity->rq->sched;
 
         job->sched = sched;
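For context, not part of the patch: with run-queue selection deferred to
drm_sched_job_arm(), a driver can initialize a job and still change the
entity's scheduler list before arming it. Below is a minimal driver-side
sketch of that flow, assuming the scheduler API of this period
(drm_sched_job_init(), drm_sched_entity_modify_sched(), drm_sched_job_arm(),
drm_sched_entity_push_job()); the example_submit() helper is made up for
illustration and error handling around the entity is trimmed.

#include <drm/gpu_scheduler.h>

/* Hypothetical driver helper, illustrative only. */
static int example_submit(struct drm_sched_job *job,
                          struct drm_sched_entity *entity,
                          struct drm_gpu_scheduler **new_scheds,
                          unsigned int num_scheds, void *owner)
{
        int r;

        /* After this patch, _init() no longer selects the run queue. */
        r = drm_sched_job_init(job, entity, owner);
        if (r)
                return r;

        /* The scheduler list may still change between _init() and _arm(),
         * e.g. when userspace adjusts the context/queue priority. */
        drm_sched_entity_modify_sched(entity, new_scheds, num_scheds);

        /* drm_sched_entity_select_rq() now runs here and picks a run queue
         * from the updated list. */
        drm_sched_job_arm(job);
        drm_sched_entity_push_job(job);
        return 0;
}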
From patchwork Mon Aug 15 18:59:32 2022
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 12943974
From: Christian König
To: dri-devel@lists.freedesktop.org
Subject: [PATCH 02/10] drm/amdgpu: revert "partial revert "remove ctx->lock" v2"
Date: Mon, 15 Aug 2022 20:59:32 +0200
Message-Id: <20220815185940.4744-3-christian.koenig@amd.com>
In-Reply-To: <20220815185940.4744-1-christian.koenig@amd.com>

This reverts commit 94f4c4965e5513ba624488f4b601d6b385635aec.

We found that the bo_list was missing protection for its list entries.
Since that is now fixed, this workaround can be removed again.
Signed-off-by: Christian König
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c  | 21 ++++++---------------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c |  2 --
 drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h |  1 -
 3 files changed, 6 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index d8f1335bc68f..a3b8400c914e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -128,8 +128,6 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
                 goto free_chunk;
         }
 
-        mutex_lock(&p->ctx->lock);
-
         /* skip guilty context job */
         if (atomic_read(&p->ctx->guilty) == 1) {
                 ret = -ECANCELED;
@@ -708,7 +706,6 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error,
         dma_fence_put(parser->fence);
 
         if (parser->ctx) {
-                mutex_unlock(&parser->ctx->lock);
                 amdgpu_ctx_put(parser->ctx);
         }
         if (parser->bo_list)
@@ -1161,9 +1158,6 @@ static int amdgpu_cs_dependencies(struct amdgpu_device *adev,
 {
         int i, r;
 
-        /* TODO: Investigate why we still need the context lock */
-        mutex_unlock(&p->ctx->lock);
-
         for (i = 0; i < p->nchunks; ++i) {
                 struct amdgpu_cs_chunk *chunk;
 
@@ -1174,34 +1168,32 @@ static int amdgpu_cs_dependencies(struct amdgpu_device *adev,
                 case AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES:
                         r = amdgpu_cs_process_fence_dep(p, chunk);
                         if (r)
-                                goto out;
+                                return r;
                         break;
                 case AMDGPU_CHUNK_ID_SYNCOBJ_IN:
                         r = amdgpu_cs_process_syncobj_in_dep(p, chunk);
                         if (r)
-                                goto out;
+                                return r;
                         break;
                 case AMDGPU_CHUNK_ID_SYNCOBJ_OUT:
                         r = amdgpu_cs_process_syncobj_out_dep(p, chunk);
                         if (r)
-                                goto out;
+                                return r;
                         break;
                 case AMDGPU_CHUNK_ID_SYNCOBJ_TIMELINE_WAIT:
                         r = amdgpu_cs_process_syncobj_timeline_in_dep(p, chunk);
                         if (r)
-                                goto out;
+                                return r;
                         break;
                 case AMDGPU_CHUNK_ID_SYNCOBJ_TIMELINE_SIGNAL:
                         r = amdgpu_cs_process_syncobj_timeline_out_dep(p, chunk);
                         if (r)
-                                goto out;
+                                return r;
                         break;
                 }
         }
 
-out:
-        mutex_lock(&p->ctx->lock);
-        return r;
+        return 0;
 }
 
 static void amdgpu_cs_post_dependencies(struct amdgpu_cs_parser *p)
@@ -1363,7 +1355,6 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
                 goto out;
 
         r = amdgpu_cs_submit(&parser, cs);
-
 out:
         amdgpu_cs_parser_fini(&parser, r, reserved_buffers);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index 8ee4e8491f39..168337d8d4cf 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -315,7 +315,6 @@ static int amdgpu_ctx_init(struct amdgpu_ctx_mgr *mgr, int32_t priority,
         kref_init(&ctx->refcount);
         ctx->mgr = mgr;
         spin_lock_init(&ctx->ring_lock);
-        mutex_init(&ctx->lock);
 
         ctx->reset_counter = atomic_read(&mgr->adev->gpu_reset_counter);
         ctx->reset_counter_query = ctx->reset_counter;
@@ -407,7 +406,6 @@ static void amdgpu_ctx_fini(struct kref *ref)
                 drm_dev_exit(idx);
         }
 
-        mutex_destroy(&ctx->lock);
         kfree(ctx);
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
index cc7c8afff414..0fa0e56daf67 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.h
@@ -53,7 +53,6 @@ struct amdgpu_ctx {
         bool                    preamble_presented;
         int32_t                 init_priority;
         int32_t                 override_priority;
-        struct mutex            lock;
         atomic_t                guilty;
         unsigned long           ras_counter_ce;
         unsigned long           ras_counter_ue;
From patchwork Mon Aug 15 18:59:33 2022
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 12953837
From: Christian König
To: dri-devel@lists.freedesktop.org
Subject: [PATCH 03/10] drm/amdgpu: use DMA_RESV_USAGE_BOOKKEEP
Date: Mon, 15 Aug 2022 20:59:33 +0200
Message-Id: <20220815185940.4744-4-christian.koenig@amd.com>
In-Reply-To: <20220815185940.4744-1-christian.koenig@amd.com>

Use DMA_RESV_USAGE_BOOKKEEP for VM page table updates and KFD
preemption fence.

Signed-off-by: Christian König
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c      | 3 ++-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index cbd593f7d553..85eb68ec692e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -297,7 +297,7 @@ static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
          */
         replacement = dma_fence_get_stub();
         dma_resv_replace_fences(bo->tbo.base.resv, ef->base.context,
-                                replacement, DMA_RESV_USAGE_READ);
+                                replacement, DMA_RESV_USAGE_BOOKKEEP);
         dma_fence_put(replacement);
         return 0;
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
index 1fd3cbca20a2..03ec099d64e0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
@@ -112,7 +112,8 @@ static int amdgpu_vm_sdma_commit(struct amdgpu_vm_update_params *p,
                 swap(p->vm->last_unlocked, tmp);
                 dma_fence_put(tmp);
         } else {
-                amdgpu_bo_fence(p->vm->root.bo, f, true);
+                dma_resv_add_fence(p->vm->root.bo->tbo.base.resv, f,
+                                   DMA_RESV_USAGE_BOOKKEEP);
         }
 
         if (fence && !p->immediate)
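For background, not part of the patch: fences added with
DMA_RESV_USAGE_BOOKKEEP are tracked in the reservation object but are not
meant to participate in implicit synchronization, while DMA_RESV_USAGE_READ
fences are waited on by implicit-sync readers. A rough sketch of that
difference follows, assuming the dma_resv usage-level API
(dma_resv_add_fence(), dma_resv_for_each_fence()), a previously reserved
fence slot, and the reservation lock being held; the helper name is made up.

#include <linux/dma-resv.h>
#include <linux/dma-fence.h>

/* Hypothetical helper, illustrative only. */
static void example_bookkeep_vs_read(struct dma_resv *resv,
                                     struct dma_fence *pt_update)
{
        struct dma_resv_iter cursor;
        struct dma_fence *f;

        /* Producer side, as in this patch: page table updates and the KFD
         * eviction fence are pure bookkeeping and should not be picked up
         * by implicit synchronization. */
        dma_resv_add_fence(resv, pt_update, DMA_RESV_USAGE_BOOKKEEP);

        /* Consumer side: an implicit-sync reader only walks fences up to
         * DMA_RESV_USAGE_READ, so the BOOKKEEP fence added above is
         * skipped. */
        dma_resv_for_each_fence(&cursor, resv, DMA_RESV_USAGE_READ, f)
                dma_fence_wait(f, false);
}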
From patchwork Mon Aug 15 18:59:34 2022
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 12953838
From: Christian König
To: dri-devel@lists.freedesktop.org
Subject: [PATCH 04/10] drm/amdgpu: cleanup and reorder amdgpu_cs.c
Date: Mon, 15 Aug 2022 20:59:34 +0200
Message-Id: <20220815185940.4744-5-christian.koenig@amd.com>
In-Reply-To: <20220815185940.4744-1-christian.koenig@amd.com>

Sort the functions in the order they are called and clean up the coding
style and function names so that they reflect the data they process.
Also check the size of the IB chunk and initialize the resulting entity
and scheduler job much earlier.
Signed-off-by: Christian König --- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 1641 ++++++++++++------------ drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h | 2 +- 2 files changed, 817 insertions(+), 826 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index a3b8400c914e..b9de631a66a3 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -39,9 +39,61 @@ #include "amdgpu_gem.h" #include "amdgpu_ras.h" -static int amdgpu_cs_user_fence_chunk(struct amdgpu_cs_parser *p, - struct drm_amdgpu_cs_chunk_fence *data, - uint32_t *offset) +static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, + struct amdgpu_device *adev, + struct drm_file *filp, + union drm_amdgpu_cs *cs) +{ + struct amdgpu_fpriv *fpriv = filp->driver_priv; + + if (cs->in.num_chunks == 0) + return -EINVAL; + + memset(p, 0, sizeof(*p)); + p->adev = adev; + p->filp = filp; + + p->ctx = amdgpu_ctx_get(fpriv, cs->in.ctx_id); + if (!p->ctx) + return -EINVAL; + + if (atomic_read(&p->ctx->guilty)) { + amdgpu_ctx_put(p->ctx); + return -ECANCELED; + } + return 0; +} + +static int amdgpu_cs_p1_ib(struct amdgpu_cs_parser *p, + struct drm_amdgpu_cs_chunk_ib *chunk_ib, + unsigned int *num_ibs) +{ + struct drm_sched_entity *entity; + int r; + + r = amdgpu_ctx_get_entity(p->ctx, chunk_ib->ip_type, + chunk_ib->ip_instance, + chunk_ib->ring, &entity); + if (r) + return r; + + /* Abort if there is no run queue associated with this entity. + * Possibly because of disabled HW IP*/ + if (entity->rq == NULL) + return -EINVAL; + + /* Currently we don't support submitting to multiple entities */ + if (p->entity && p->entity != entity) + return -EINVAL; + + p->entity = entity; + ++(*num_ibs); + return 0; +} + +static int amdgpu_cs_p1_user_fence(struct amdgpu_cs_parser *p, + struct drm_amdgpu_cs_chunk_fence *data, + uint32_t *offset) { struct drm_gem_object *gobj; struct amdgpu_bo *bo; @@ -80,11 +132,11 @@ static int amdgpu_cs_user_fence_chunk(struct amdgpu_cs_parser *p, return r; } -static int amdgpu_cs_bo_handles_chunk(struct amdgpu_cs_parser *p, - struct drm_amdgpu_bo_list_in *data) +static int amdgpu_cs_p1_bo_handles(struct amdgpu_cs_parser *p, + struct drm_amdgpu_bo_list_in *data) { + struct drm_amdgpu_bo_list_entry *info; int r; - struct drm_amdgpu_bo_list_entry *info = NULL; r = amdgpu_bo_create_list_entry_array(data, &info); if (r) @@ -104,7 +156,9 @@ static int amdgpu_cs_bo_handles_chunk(struct amdgpu_cs_parser *p, return r; } -static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs *cs) +/* Copy the data from userspace and go over it the first time */ +static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p, + union drm_amdgpu_cs *cs) { struct amdgpu_fpriv *fpriv = p->filp->driver_priv; struct amdgpu_vm *vm = &fpriv->vm; @@ -112,28 +166,17 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs uint64_t *chunk_array; unsigned size, num_ibs = 0; uint32_t uf_offset = 0; - int i; int ret; + int i; if (cs->in.num_chunks == 0) return -EINVAL; - chunk_array = kvmalloc_array(cs->in.num_chunks, sizeof(uint64_t), GFP_KERNEL); + chunk_array = kvmalloc_array(cs->in.num_chunks, sizeof(uint64_t), + GFP_KERNEL); if (!chunk_array) return -ENOMEM; - p->ctx = amdgpu_ctx_get(fpriv, cs->in.ctx_id); - if (!p->ctx) { - ret = -EINVAL; - goto free_chunk; - } - - /* skip guilty context job */ - if (atomic_read(&p->ctx->guilty) == 1) { - ret = -ECANCELED; - goto free_chunk; - } - /* get chunks */ chunk_array_user = 
u64_to_user_ptr(cs->in.chunks); if (copy_from_user(chunk_array, chunk_array_user, @@ -168,7 +211,8 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs size = p->chunks[i].length_dw; cdata = u64_to_user_ptr(user_chunk.chunk_data); - p->chunks[i].kdata = kvmalloc_array(size, sizeof(uint32_t), GFP_KERNEL); + p->chunks[i].kdata = kvmalloc_array(size, sizeof(uint32_t), + GFP_KERNEL); if (p->chunks[i].kdata == NULL) { ret = -ENOMEM; i--; @@ -180,36 +224,35 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs goto free_partial_kdata; } + /* Assume the worst on the following checks */ + ret = -EINVAL; switch (p->chunks[i].chunk_id) { case AMDGPU_CHUNK_ID_IB: - ++num_ibs; + if (size < sizeof(struct drm_amdgpu_cs_chunk_ib)) + goto free_partial_kdata; + + ret = amdgpu_cs_p1_ib(p, p->chunks[i].kdata, &num_ibs); + if (ret) + goto free_partial_kdata; break; case AMDGPU_CHUNK_ID_FENCE: - size = sizeof(struct drm_amdgpu_cs_chunk_fence); - if (p->chunks[i].length_dw * sizeof(uint32_t) < size) { - ret = -EINVAL; + if (size < sizeof(struct drm_amdgpu_cs_chunk_fence)) goto free_partial_kdata; - } - ret = amdgpu_cs_user_fence_chunk(p, p->chunks[i].kdata, - &uf_offset); + ret = amdgpu_cs_p1_user_fence(p, p->chunks[i].kdata, + &uf_offset); if (ret) goto free_partial_kdata; - break; case AMDGPU_CHUNK_ID_BO_HANDLES: - size = sizeof(struct drm_amdgpu_bo_list_in); - if (p->chunks[i].length_dw * sizeof(uint32_t) < size) { - ret = -EINVAL; + if (size < sizeof(struct drm_amdgpu_bo_list_in)) goto free_partial_kdata; - } - ret = amdgpu_cs_bo_handles_chunk(p, p->chunks[i].kdata); + ret = amdgpu_cs_p1_bo_handles(p, p->chunks[i].kdata); if (ret) goto free_partial_kdata; - break; case AMDGPU_CHUNK_ID_DEPENDENCIES: @@ -221,7 +264,6 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs break; default: - ret = -EINVAL; goto free_partial_kdata; } } @@ -230,6 +272,10 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs if (ret) goto free_all_kdata; + ret = drm_sched_job_init(&p->job->base, p->entity, &fpriv->vm); + if (ret) + goto free_all_kdata; + if (p->ctx->vram_lost_counter != p->job->vram_lost_counter) { ret = -ECANCELED; goto free_all_kdata; @@ -258,941 +304,864 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs return ret; } -/* Convert microseconds to bytes. */ -static u64 us_to_bytes(struct amdgpu_device *adev, s64 us) +static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p, + struct amdgpu_cs_chunk *chunk, + unsigned int *num_ibs, + unsigned int *ce_preempt, + unsigned int *de_preempt) { - if (us <= 0 || !adev->mm_stats.log2_max_MBps) - return 0; + struct amdgpu_ring *ring = to_amdgpu_ring(p->job->base.sched); + struct drm_amdgpu_cs_chunk_ib *chunk_ib = chunk->kdata; + struct amdgpu_fpriv *fpriv = p->filp->driver_priv; + struct amdgpu_ib *ib = &p->job->ibs[*num_ibs]; + struct amdgpu_vm *vm = &fpriv->vm; + int r; - /* Since accum_us is incremented by a million per second, just - * multiply it by the number of MB/s to get the number of bytes. 
- */ - return us << adev->mm_stats.log2_max_MBps; -} -static s64 bytes_to_us(struct amdgpu_device *adev, u64 bytes) -{ - if (!adev->mm_stats.log2_max_MBps) - return 0; + /* MM engine doesn't support user fences */ + if (p->job->uf_addr && ring->funcs->no_user_fence) + return -EINVAL; - return bytes >> adev->mm_stats.log2_max_MBps; -} + if (chunk_ib->ip_type == AMDGPU_HW_IP_GFX && + chunk_ib->flags & AMDGPU_IB_FLAG_PREEMPT && + (amdgpu_mcbp || amdgpu_sriov_vf(p->adev))) { + if (chunk_ib->flags & AMDGPU_IB_FLAG_CE) + (*ce_preempt)++; + else + (*de_preempt)++; -/* Returns how many bytes TTM can move right now. If no bytes can be moved, - * it returns 0. If it returns non-zero, it's OK to move at least one buffer, - * which means it can go over the threshold once. If that happens, the driver - * will be in debt and no other buffer migrations can be done until that debt - * is repaid. - * - * This approach allows moving a buffer of any size (it's important to allow - * that). - * - * The currency is simply time in microseconds and it increases as the clock - * ticks. The accumulated microseconds (us) are converted to bytes and - * returned. - */ -static void amdgpu_cs_get_threshold_for_moves(struct amdgpu_device *adev, - u64 *max_bytes, - u64 *max_vis_bytes) -{ - s64 time_us, increment_us; - u64 free_vram, total_vram, used_vram; - /* Allow a maximum of 200 accumulated ms. This is basically per-IB - * throttling. - * - * It means that in order to get full max MBps, at least 5 IBs per - * second must be submitted and not more than 200ms apart from each - * other. - */ - const s64 us_upper_bound = 200000; + /* Each GFX command submit allows only 1 IB max + * preemptible for CE & DE */ + if (*ce_preempt > 1 || *de_preempt > 1) + return -EINVAL; + } - if (!adev->mm_stats.log2_max_MBps) { - *max_bytes = 0; - *max_vis_bytes = 0; - return; + if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE) + p->job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT; + + r = amdgpu_ib_get(p->adev, vm, ring->funcs->parse_cs ? + chunk_ib->ib_bytes : 0, + AMDGPU_IB_POOL_DELAYED, ib); + if (r) { + DRM_ERROR("Failed to get ib !\n"); + return r; } - total_vram = adev->gmc.real_vram_size - atomic64_read(&adev->vram_pin_size); - used_vram = ttm_resource_manager_usage(&adev->mman.vram_mgr.manager); - free_vram = used_vram >= total_vram ? 0 : total_vram - used_vram; + ib->gpu_addr = chunk_ib->va_start; + ib->length_dw = chunk_ib->ib_bytes / 4; + ib->flags = chunk_ib->flags; - spin_lock(&adev->mm_stats.lock); + (*num_ibs)++; + return 0; +} - /* Increase the amount of accumulated us. */ - time_us = ktime_to_us(ktime_get()); - increment_us = time_us - adev->mm_stats.last_update_us; - adev->mm_stats.last_update_us = time_us; - adev->mm_stats.accum_us = min(adev->mm_stats.accum_us + increment_us, - us_upper_bound); +static int amdgpu_cs_p2_dependencies(struct amdgpu_cs_parser *p, + struct amdgpu_cs_chunk *chunk) +{ + struct drm_amdgpu_cs_chunk_dep *deps = chunk->kdata; + struct amdgpu_fpriv *fpriv = p->filp->driver_priv; + unsigned num_deps; + int i, r; - /* This prevents the short period of low performance when the VRAM - * usage is low and the driver is in debt or doesn't have enough - * accumulated us to fill VRAM quickly. - * - * The situation can occur in these cases: - * - a lot of VRAM is freed by userspace - * - the presence of a big buffer causes a lot of evictions - * (solution: split buffers into smaller ones) - * - * If 128 MB or 1/8th of VRAM is free, start filling it now by setting - * accum_us to a positive number. 
- */ - if (free_vram >= 128 * 1024 * 1024 || free_vram >= total_vram / 8) { - s64 min_us; + num_deps = chunk->length_dw * 4 / + sizeof(struct drm_amdgpu_cs_chunk_dep); - /* Be more aggressive on dGPUs. Try to fill a portion of free - * VRAM now. - */ - if (!(adev->flags & AMD_IS_APU)) - min_us = bytes_to_us(adev, free_vram / 4); - else - min_us = 0; /* Reset accum_us on APUs. */ + for (i = 0; i < num_deps; ++i) { + struct amdgpu_ctx *ctx; + struct drm_sched_entity *entity; + struct dma_fence *fence; - adev->mm_stats.accum_us = max(min_us, adev->mm_stats.accum_us); - } + ctx = amdgpu_ctx_get(fpriv, deps[i].ctx_id); + if (ctx == NULL) + return -EINVAL; - /* This is set to 0 if the driver is in debt to disallow (optional) - * buffer moves. - */ - *max_bytes = us_to_bytes(adev, adev->mm_stats.accum_us); + r = amdgpu_ctx_get_entity(ctx, deps[i].ip_type, + deps[i].ip_instance, + deps[i].ring, &entity); + if (r) { + amdgpu_ctx_put(ctx); + return r; + } - /* Do the same for visible VRAM if half of it is free */ - if (!amdgpu_gmc_vram_full_visible(&adev->gmc)) { - u64 total_vis_vram = adev->gmc.visible_vram_size; - u64 used_vis_vram = - amdgpu_vram_mgr_vis_usage(&adev->mman.vram_mgr); + fence = amdgpu_ctx_get_fence(ctx, entity, deps[i].handle); + amdgpu_ctx_put(ctx); - if (used_vis_vram < total_vis_vram) { - u64 free_vis_vram = total_vis_vram - used_vis_vram; - adev->mm_stats.accum_us_vis = min(adev->mm_stats.accum_us_vis + - increment_us, us_upper_bound); + if (IS_ERR(fence)) + return PTR_ERR(fence); + else if (!fence) + continue; - if (free_vis_vram >= total_vis_vram / 2) - adev->mm_stats.accum_us_vis = - max(bytes_to_us(adev, free_vis_vram / 2), - adev->mm_stats.accum_us_vis); + if (chunk->chunk_id == AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES) { + struct drm_sched_fence *s_fence; + struct dma_fence *old = fence; + + s_fence = to_drm_sched_fence(fence); + fence = dma_fence_get(&s_fence->scheduled); + dma_fence_put(old); } - *max_vis_bytes = us_to_bytes(adev, adev->mm_stats.accum_us_vis); - } else { - *max_vis_bytes = 0; + r = amdgpu_sync_fence(&p->job->sync, fence); + dma_fence_put(fence); + if (r) + return r; } + return 0; +} - spin_unlock(&adev->mm_stats.lock); +static int amdgpu_syncobj_lookup_and_add(struct amdgpu_cs_parser *p, + uint32_t handle, u64 point, + u64 flags) +{ + struct dma_fence *fence; + int r; + + r = drm_syncobj_find_fence(p->filp, handle, point, flags, &fence); + if (r) { + DRM_ERROR("syncobj %u failed to find fence @ %llu (%d)!\n", + handle, point, r); + return r; + } + + r = amdgpu_sync_fence(&p->job->sync, fence); + dma_fence_put(fence); + + return r; } -/* Report how many bytes have really been moved for the last command - * submission. This can result in a debt that can stop buffer migrations - * temporarily. 
- */ -void amdgpu_cs_report_moved_bytes(struct amdgpu_device *adev, u64 num_bytes, - u64 num_vis_bytes) +static int amdgpu_cs_p2_syncobj_in(struct amdgpu_cs_parser *p, + struct amdgpu_cs_chunk *chunk) { - spin_lock(&adev->mm_stats.lock); - adev->mm_stats.accum_us -= bytes_to_us(adev, num_bytes); - adev->mm_stats.accum_us_vis -= bytes_to_us(adev, num_vis_bytes); - spin_unlock(&adev->mm_stats.lock); + struct drm_amdgpu_cs_chunk_sem *deps = chunk->kdata; + unsigned num_deps; + int i, r; + + num_deps = chunk->length_dw * 4 / + sizeof(struct drm_amdgpu_cs_chunk_sem); + for (i = 0; i < num_deps; ++i) { + r = amdgpu_syncobj_lookup_and_add(p, deps[i].handle, 0, 0); + if (r) + return r; + } + + return 0; } -static int amdgpu_cs_bo_validate(void *param, struct amdgpu_bo *bo) +static int amdgpu_cs_p2_syncobj_timeline_wait(struct amdgpu_cs_parser *p, + struct amdgpu_cs_chunk *chunk) { - struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev); - struct amdgpu_cs_parser *p = param; - struct ttm_operation_ctx ctx = { - .interruptible = true, - .no_wait_gpu = false, - .resv = bo->tbo.base.resv - }; - uint32_t domain; - int r; + struct drm_amdgpu_cs_chunk_syncobj *syncobj_deps = chunk->kdata; + unsigned num_deps; + int i, r; - if (bo->tbo.pin_count) - return 0; + num_deps = chunk->length_dw * 4 / + sizeof(struct drm_amdgpu_cs_chunk_syncobj); + for (i = 0; i < num_deps; ++i) { + r = amdgpu_syncobj_lookup_and_add(p, syncobj_deps[i].handle, + syncobj_deps[i].point, + syncobj_deps[i].flags); + if (r) + return r; + } - /* Don't move this buffer if we have depleted our allowance - * to move it. Don't move anything if the threshold is zero. - */ - if (p->bytes_moved < p->bytes_moved_threshold && - (!bo->tbo.base.dma_buf || - list_empty(&bo->tbo.base.dma_buf->attachments))) { - if (!amdgpu_gmc_vram_full_visible(&adev->gmc) && - (bo->flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED)) { - /* And don't move a CPU_ACCESS_REQUIRED BO to limited - * visible VRAM if we've depleted our allowance to do - * that. 
- */ - if (p->bytes_moved_vis < p->bytes_moved_vis_threshold) - domain = bo->preferred_domains; - else - domain = bo->allowed_domains; - } else { - domain = bo->preferred_domains; - } - } else { - domain = bo->allowed_domains; - } - -retry: - amdgpu_bo_placement_from_domain(bo, domain); - r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx); - - p->bytes_moved += ctx.bytes_moved; - if (!amdgpu_gmc_vram_full_visible(&adev->gmc) && - amdgpu_bo_in_cpu_visible_vram(bo)) - p->bytes_moved_vis += ctx.bytes_moved; - - if (unlikely(r == -ENOMEM) && domain != bo->allowed_domains) { - domain = bo->allowed_domains; - goto retry; - } - - return r; + return 0; } -static int amdgpu_cs_list_validate(struct amdgpu_cs_parser *p, - struct list_head *validated) +static int amdgpu_cs_p2_syncobj_out(struct amdgpu_cs_parser *p, + struct amdgpu_cs_chunk *chunk) { - struct ttm_operation_ctx ctx = { true, false }; - struct amdgpu_bo_list_entry *lobj; - int r; + struct drm_amdgpu_cs_chunk_sem *deps = chunk->kdata; + unsigned num_deps; + int i; - list_for_each_entry(lobj, validated, tv.head) { - struct amdgpu_bo *bo = ttm_to_amdgpu_bo(lobj->tv.bo); - struct mm_struct *usermm; + num_deps = chunk->length_dw * 4 / + sizeof(struct drm_amdgpu_cs_chunk_sem); - usermm = amdgpu_ttm_tt_get_usermm(bo->tbo.ttm); - if (usermm && usermm != current->mm) - return -EPERM; + if (p->post_deps) + return -EINVAL; - if (amdgpu_ttm_tt_is_userptr(bo->tbo.ttm) && - lobj->user_invalidated && lobj->user_pages) { - amdgpu_bo_placement_from_domain(bo, - AMDGPU_GEM_DOMAIN_CPU); - r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx); - if (r) - return r; + p->post_deps = kmalloc_array(num_deps, sizeof(*p->post_deps), + GFP_KERNEL); + p->num_post_deps = 0; - amdgpu_ttm_tt_set_user_pages(bo->tbo.ttm, - lobj->user_pages); - } + if (!p->post_deps) + return -ENOMEM; - r = amdgpu_cs_bo_validate(p, bo); - if (r) - return r; - kvfree(lobj->user_pages); - lobj->user_pages = NULL; + for (i = 0; i < num_deps; ++i) { + p->post_deps[i].syncobj = + drm_syncobj_find(p->filp, deps[i].handle); + if (!p->post_deps[i].syncobj) + return -EINVAL; + p->post_deps[i].chain = NULL; + p->post_deps[i].point = 0; + p->num_post_deps++; } + return 0; } -static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, - union drm_amdgpu_cs *cs) +static int amdgpu_cs_p2_syncobj_timeline_signal(struct amdgpu_cs_parser *p, + struct amdgpu_cs_chunk *chunk) { - struct amdgpu_fpriv *fpriv = p->filp->driver_priv; - struct amdgpu_vm *vm = &fpriv->vm; - struct amdgpu_bo_list_entry *e; - struct list_head duplicates; - struct amdgpu_bo *gds; - struct amdgpu_bo *gws; - struct amdgpu_bo *oa; - int r; - - INIT_LIST_HEAD(&p->validated); - - /* p->bo_list could already be assigned if AMDGPU_CHUNK_ID_BO_HANDLES is present */ - if (cs->in.bo_list_handle) { - if (p->bo_list) - return -EINVAL; - - r = amdgpu_bo_list_get(fpriv, cs->in.bo_list_handle, - &p->bo_list); - if (r) - return r; - } else if (!p->bo_list) { - /* Create a empty bo_list when no handle is provided */ - r = amdgpu_bo_list_create(p->adev, p->filp, NULL, 0, - &p->bo_list); - if (r) - return r; - } - - mutex_lock(&p->bo_list->bo_list_mutex); - - /* One for TTM and one for the CS job */ - amdgpu_bo_list_for_each_entry(e, p->bo_list) - e->tv.num_shared = 2; + struct drm_amdgpu_cs_chunk_syncobj *syncobj_deps = chunk->kdata; + unsigned num_deps; + int i; - amdgpu_bo_list_get_list(p->bo_list, &p->validated); + num_deps = chunk->length_dw * 4 / + sizeof(struct drm_amdgpu_cs_chunk_syncobj); - INIT_LIST_HEAD(&duplicates); - 
amdgpu_vm_get_pd_bo(&fpriv->vm, &p->validated, &p->vm_pd); + if (p->post_deps) + return -EINVAL; - if (p->uf_entry.tv.bo && !ttm_to_amdgpu_bo(p->uf_entry.tv.bo)->parent) - list_add(&p->uf_entry.tv.head, &p->validated); + p->post_deps = kmalloc_array(num_deps, sizeof(*p->post_deps), + GFP_KERNEL); + p->num_post_deps = 0; - /* Get userptr backing pages. If pages are updated after registered - * in amdgpu_gem_userptr_ioctl(), amdgpu_cs_list_validate() will do - * amdgpu_ttm_backend_bind() to flush and invalidate new pages - */ - amdgpu_bo_list_for_each_userptr_entry(e, p->bo_list) { - struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo); - bool userpage_invalidated = false; - int i; + if (!p->post_deps) + return -ENOMEM; - e->user_pages = kvmalloc_array(bo->tbo.ttm->num_pages, - sizeof(struct page *), - GFP_KERNEL | __GFP_ZERO); - if (!e->user_pages) { - DRM_ERROR("kvmalloc_array failure\n"); - r = -ENOMEM; - goto out_free_user_pages; - } + for (i = 0; i < num_deps; ++i) { + struct amdgpu_cs_post_dep *dep = &p->post_deps[i]; - r = amdgpu_ttm_tt_get_user_pages(bo, e->user_pages); - if (r) { - kvfree(e->user_pages); - e->user_pages = NULL; - goto out_free_user_pages; + dep->chain = NULL; + if (syncobj_deps[i].point) { + dep->chain = dma_fence_chain_alloc(); + if (!dep->chain) + return -ENOMEM; } - for (i = 0; i < bo->tbo.ttm->num_pages; i++) { - if (bo->tbo.ttm->pages[i] != e->user_pages[i]) { - userpage_invalidated = true; - break; - } + dep->syncobj = drm_syncobj_find(p->filp, + syncobj_deps[i].handle); + if (!dep->syncobj) { + dma_fence_chain_free(dep->chain); + return -EINVAL; } - e->user_invalidated = userpage_invalidated; - } - - r = ttm_eu_reserve_buffers(&p->ticket, &p->validated, true, - &duplicates); - if (unlikely(r != 0)) { - if (r != -ERESTARTSYS) - DRM_ERROR("ttm_eu_reserve_buffers failed.\n"); - goto out_free_user_pages; - } - - amdgpu_bo_list_for_each_entry(e, p->bo_list) { - struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo); - - e->bo_va = amdgpu_vm_bo_find(vm, bo); - } - - /* Move fence waiting after getting reservation lock of - * PD root. Then there is no need on a ctx mutex lock. 
- */ - r = amdgpu_ctx_wait_prev_fence(p->ctx, p->entity); - if (unlikely(r != 0)) { - if (r != -ERESTARTSYS) - DRM_ERROR("amdgpu_ctx_wait_prev_fence failed.\n"); - goto error_validate; - } - - amdgpu_cs_get_threshold_for_moves(p->adev, &p->bytes_moved_threshold, - &p->bytes_moved_vis_threshold); - p->bytes_moved = 0; - p->bytes_moved_vis = 0; - - r = amdgpu_vm_validate_pt_bos(p->adev, &fpriv->vm, - amdgpu_cs_bo_validate, p); - if (r) { - DRM_ERROR("amdgpu_vm_validate_pt_bos() failed.\n"); - goto error_validate; + dep->point = syncobj_deps[i].point; + p->num_post_deps++; } - r = amdgpu_cs_list_validate(p, &duplicates); - if (r) - goto error_validate; - - r = amdgpu_cs_list_validate(p, &p->validated); - if (r) - goto error_validate; - - amdgpu_cs_report_moved_bytes(p->adev, p->bytes_moved, - p->bytes_moved_vis); + return 0; +} - gds = p->bo_list->gds_obj; - gws = p->bo_list->gws_obj; - oa = p->bo_list->oa_obj; +static int amdgpu_cs_pass2(struct amdgpu_cs_parser *p) +{ + unsigned int num_ibs = 0, ce_preempt = 0, de_preempt = 0; + int i, r; - if (gds) { - p->job->gds_base = amdgpu_bo_gpu_offset(gds) >> PAGE_SHIFT; - p->job->gds_size = amdgpu_bo_size(gds) >> PAGE_SHIFT; - } - if (gws) { - p->job->gws_base = amdgpu_bo_gpu_offset(gws) >> PAGE_SHIFT; - p->job->gws_size = amdgpu_bo_size(gws) >> PAGE_SHIFT; - } - if (oa) { - p->job->oa_base = amdgpu_bo_gpu_offset(oa) >> PAGE_SHIFT; - p->job->oa_size = amdgpu_bo_size(oa) >> PAGE_SHIFT; - } + for (i = 0; i < p->nchunks; ++i) { + struct amdgpu_cs_chunk *chunk; - if (!r && p->uf_entry.tv.bo) { - struct amdgpu_bo *uf = ttm_to_amdgpu_bo(p->uf_entry.tv.bo); + chunk = &p->chunks[i]; - r = amdgpu_ttm_alloc_gart(&uf->tbo); - p->job->uf_addr += amdgpu_bo_gpu_offset(uf); + switch (chunk->chunk_id) { + case AMDGPU_CHUNK_ID_IB: + r = amdgpu_cs_p2_ib(p, chunk, &num_ibs, + &ce_preempt, &de_preempt); + if (r) + return r; + break; + case AMDGPU_CHUNK_ID_DEPENDENCIES: + case AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES: + r = amdgpu_cs_p2_dependencies(p, chunk); + if (r) + return r; + break; + case AMDGPU_CHUNK_ID_SYNCOBJ_IN: + r = amdgpu_cs_p2_syncobj_in(p, chunk); + if (r) + return r; + break; + case AMDGPU_CHUNK_ID_SYNCOBJ_OUT: + r = amdgpu_cs_p2_syncobj_out(p, chunk); + if (r) + return r; + break; + case AMDGPU_CHUNK_ID_SYNCOBJ_TIMELINE_WAIT: + r = amdgpu_cs_p2_syncobj_timeline_wait(p, chunk); + if (r) + return r; + break; + case AMDGPU_CHUNK_ID_SYNCOBJ_TIMELINE_SIGNAL: + r = amdgpu_cs_p2_syncobj_timeline_signal(p, chunk); + if (r) + return r; + break; + } } -error_validate: - if (r) - ttm_eu_backoff_reservation(&p->ticket, &p->validated); + return 0; +} -out_free_user_pages: - if (r) { - amdgpu_bo_list_for_each_userptr_entry(e, p->bo_list) { - struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo); +/* Convert microseconds to bytes. */ +static u64 us_to_bytes(struct amdgpu_device *adev, s64 us) +{ + if (us <= 0 || !adev->mm_stats.log2_max_MBps) + return 0; - if (!e->user_pages) - continue; - amdgpu_ttm_tt_get_user_pages_done(bo->tbo.ttm); - kvfree(e->user_pages); - e->user_pages = NULL; - } - mutex_unlock(&p->bo_list->bo_list_mutex); - } - return r; + /* Since accum_us is incremented by a million per second, just + * multiply it by the number of MB/s to get the number of bytes. 
+ */ + return us << adev->mm_stats.log2_max_MBps; } -static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p) +static s64 bytes_to_us(struct amdgpu_device *adev, u64 bytes) { - struct amdgpu_fpriv *fpriv = p->filp->driver_priv; - struct amdgpu_bo_list_entry *e; - int r; - - list_for_each_entry(e, &p->validated, tv.head) { - struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo); - struct dma_resv *resv = bo->tbo.base.resv; - enum amdgpu_sync_mode sync_mode; + if (!adev->mm_stats.log2_max_MBps) + return 0; - sync_mode = amdgpu_bo_explicit_sync(bo) ? - AMDGPU_SYNC_EXPLICIT : AMDGPU_SYNC_NE_OWNER; - r = amdgpu_sync_resv(p->adev, &p->job->sync, resv, sync_mode, - &fpriv->vm); - if (r) - return r; - } - return 0; + return bytes >> adev->mm_stats.log2_max_MBps; } -/** - * amdgpu_cs_parser_fini() - clean parser states - * @parser: parser structure holding parsing context. - * @error: error number - * @backoff: indicator to backoff the reservation +/* Returns how many bytes TTM can move right now. If no bytes can be moved, + * it returns 0. If it returns non-zero, it's OK to move at least one buffer, + * which means it can go over the threshold once. If that happens, the driver + * will be in debt and no other buffer migrations can be done until that debt + * is repaid. * - * If error is set then unvalidate buffer, otherwise just free memory - * used by parsing context. - **/ -static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error, - bool backoff) + * This approach allows moving a buffer of any size (it's important to allow + * that). + * + * The currency is simply time in microseconds and it increases as the clock + * ticks. The accumulated microseconds (us) are converted to bytes and + * returned. + */ +static void amdgpu_cs_get_threshold_for_moves(struct amdgpu_device *adev, + u64 *max_bytes, + u64 *max_vis_bytes) { - unsigned i; - - if (error && backoff) { - ttm_eu_backoff_reservation(&parser->ticket, - &parser->validated); - mutex_unlock(&parser->bo_list->bo_list_mutex); - } + s64 time_us, increment_us; + u64 free_vram, total_vram, used_vram; + /* Allow a maximum of 200 accumulated ms. This is basically per-IB + * throttling. + * + * It means that in order to get full max MBps, at least 5 IBs per + * second must be submitted and not more than 200ms apart from each + * other. + */ + const s64 us_upper_bound = 200000; - for (i = 0; i < parser->num_post_deps; i++) { - drm_syncobj_put(parser->post_deps[i].syncobj); - kfree(parser->post_deps[i].chain); + if (!adev->mm_stats.log2_max_MBps) { + *max_bytes = 0; + *max_vis_bytes = 0; + return; } - kfree(parser->post_deps); - - dma_fence_put(parser->fence); - if (parser->ctx) { - amdgpu_ctx_put(parser->ctx); - } - if (parser->bo_list) - amdgpu_bo_list_put(parser->bo_list); + total_vram = adev->gmc.real_vram_size - + atomic64_read(&adev->vram_pin_size); + used_vram = ttm_resource_manager_usage(&adev->mman.vram_mgr.manager); + free_vram = used_vram >= total_vram ? 0 : total_vram - used_vram; - for (i = 0; i < parser->nchunks; i++) - kvfree(parser->chunks[i].kdata); - kvfree(parser->chunks); - if (parser->job) - amdgpu_job_free(parser->job); - if (parser->uf_entry.tv.bo) { - struct amdgpu_bo *uf = ttm_to_amdgpu_bo(parser->uf_entry.tv.bo); + spin_lock(&adev->mm_stats.lock); - amdgpu_bo_unref(&uf); - } -} + /* Increase the amount of accumulated us. 
*/ + time_us = ktime_to_us(ktime_get()); + increment_us = time_us - adev->mm_stats.last_update_us; + adev->mm_stats.last_update_us = time_us; + adev->mm_stats.accum_us = min(adev->mm_stats.accum_us + increment_us, + us_upper_bound); -static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p) -{ - struct amdgpu_ring *ring = to_amdgpu_ring(p->entity->rq->sched); - struct amdgpu_fpriv *fpriv = p->filp->driver_priv; - struct amdgpu_device *adev = p->adev; - struct amdgpu_vm *vm = &fpriv->vm; - struct amdgpu_bo_list_entry *e; - struct amdgpu_bo_va *bo_va; - struct amdgpu_bo *bo; - int r; + /* This prevents the short period of low performance when the VRAM + * usage is low and the driver is in debt or doesn't have enough + * accumulated us to fill VRAM quickly. + * + * The situation can occur in these cases: + * - a lot of VRAM is freed by userspace + * - the presence of a big buffer causes a lot of evictions + * (solution: split buffers into smaller ones) + * + * If 128 MB or 1/8th of VRAM is free, start filling it now by setting + * accum_us to a positive number. + */ + if (free_vram >= 128 * 1024 * 1024 || free_vram >= total_vram / 8) { + s64 min_us; - /* Only for UVD/VCE VM emulation */ - if (ring->funcs->parse_cs || ring->funcs->patch_cs_in_place) { - unsigned i, j; - - for (i = 0, j = 0; i < p->nchunks && j < p->job->num_ibs; i++) { - struct drm_amdgpu_cs_chunk_ib *chunk_ib; - struct amdgpu_bo_va_mapping *m; - struct amdgpu_bo *aobj = NULL; - struct amdgpu_cs_chunk *chunk; - uint64_t offset, va_start; - struct amdgpu_ib *ib; - uint8_t *kptr; - - chunk = &p->chunks[i]; - ib = &p->job->ibs[j]; - chunk_ib = chunk->kdata; - - if (chunk->chunk_id != AMDGPU_CHUNK_ID_IB) - continue; + /* Be more aggresive on dGPUs. Try to fill a portion of free + * VRAM now. + */ + if (!(adev->flags & AMD_IS_APU)) + min_us = bytes_to_us(adev, free_vram / 4); + else + min_us = 0; /* Reset accum_us on APUs. */ - va_start = chunk_ib->va_start & AMDGPU_GMC_HOLE_MASK; - r = amdgpu_cs_find_mapping(p, va_start, &aobj, &m); - if (r) { - DRM_ERROR("IB va_start is invalid\n"); - return r; - } + adev->mm_stats.accum_us = max(min_us, adev->mm_stats.accum_us); + } - if ((va_start + chunk_ib->ib_bytes) > - (m->last + 1) * AMDGPU_GPU_PAGE_SIZE) { - DRM_ERROR("IB va_start+ib_bytes is invalid\n"); - return -EINVAL; - } + /* This is set to 0 if the driver is in debt to disallow (optional) + * buffer moves. 
+ */ + *max_bytes = us_to_bytes(adev, adev->mm_stats.accum_us); - /* the IB should be reserved at this point */ - r = amdgpu_bo_kmap(aobj, (void **)&kptr); - if (r) { - return r; - } + /* Do the same for visible VRAM if half of it is free */ + if (!amdgpu_gmc_vram_full_visible(&adev->gmc)) { + u64 total_vis_vram = adev->gmc.visible_vram_size; + u64 used_vis_vram = + amdgpu_vram_mgr_vis_usage(&adev->mman.vram_mgr); - offset = m->start * AMDGPU_GPU_PAGE_SIZE; - kptr += va_start - offset; - - if (ring->funcs->parse_cs) { - memcpy(ib->ptr, kptr, chunk_ib->ib_bytes); - amdgpu_bo_kunmap(aobj); - - r = amdgpu_ring_parse_cs(ring, p, p->job, ib); - if (r) - return r; - } else { - ib->ptr = (uint32_t *)kptr; - r = amdgpu_ring_patch_cs_in_place(ring, p, p->job, ib); - amdgpu_bo_kunmap(aobj); - if (r) - return r; - } + if (used_vis_vram < total_vis_vram) { + u64 free_vis_vram = total_vis_vram - used_vis_vram; + adev->mm_stats.accum_us_vis = + min(adev->mm_stats.accum_us_vis + + increment_us, us_upper_bound); - j++; + if (free_vis_vram >= total_vis_vram / 2) + adev->mm_stats.accum_us_vis = + max(bytes_to_us(adev, free_vis_vram / 2), + adev->mm_stats.accum_us_vis); } - } - - if (!p->job->vm) - return amdgpu_cs_sync_rings(p); + *max_vis_bytes = us_to_bytes(adev, adev->mm_stats.accum_us_vis); + } else { + *max_vis_bytes = 0; + } - r = amdgpu_vm_clear_freed(adev, vm, NULL); - if (r) - return r; - - r = amdgpu_vm_bo_update(adev, fpriv->prt_va, false); - if (r) - return r; + spin_unlock(&adev->mm_stats.lock); +} - r = amdgpu_sync_fence(&p->job->sync, fpriv->prt_va->last_pt_update); - if (r) - return r; +/* Report how many bytes have really been moved for the last command + * submission. This can result in a debt that can stop buffer migrations + * temporarily. + */ +void amdgpu_cs_report_moved_bytes(struct amdgpu_device *adev, u64 num_bytes, + u64 num_vis_bytes) +{ + spin_lock(&adev->mm_stats.lock); + adev->mm_stats.accum_us -= bytes_to_us(adev, num_bytes); + adev->mm_stats.accum_us_vis -= bytes_to_us(adev, num_vis_bytes); + spin_unlock(&adev->mm_stats.lock); +} - if (amdgpu_mcbp || amdgpu_sriov_vf(adev)) { - bo_va = fpriv->csa_va; - BUG_ON(!bo_va); - r = amdgpu_vm_bo_update(adev, bo_va, false); - if (r) - return r; +static int amdgpu_cs_bo_validate(void *param, struct amdgpu_bo *bo) +{ + struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev); + struct amdgpu_cs_parser *p = param; + struct ttm_operation_ctx ctx = { + .interruptible = true, + .no_wait_gpu = false, + .resv = bo->tbo.base.resv + }; + uint32_t domain; + int r; - r = amdgpu_sync_fence(&p->job->sync, bo_va->last_pt_update); - if (r) - return r; - } + if (bo->tbo.pin_count) + return 0; - amdgpu_bo_list_for_each_entry(e, p->bo_list) { - /* ignore duplicates */ - bo = ttm_to_amdgpu_bo(e->tv.bo); - if (!bo) - continue; + /* Don't move this buffer if we have depleted our allowance + * to move it. Don't move anything if the threshold is zero. + */ + if (p->bytes_moved < p->bytes_moved_threshold && + (!bo->tbo.base.dma_buf || + list_empty(&bo->tbo.base.dma_buf->attachments))) { + if (!amdgpu_gmc_vram_full_visible(&adev->gmc) && + (bo->flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED)) { + /* And don't move a CPU_ACCESS_REQUIRED BO to limited + * visible VRAM if we've depleted our allowance to do + * that. 
+ */ + if (p->bytes_moved_vis < p->bytes_moved_vis_threshold) + domain = bo->preferred_domains; + else + domain = bo->allowed_domains; + } else { + domain = bo->preferred_domains; + } + } else { + domain = bo->allowed_domains; + } - bo_va = e->bo_va; - if (bo_va == NULL) - continue; +retry: + amdgpu_bo_placement_from_domain(bo, domain); + r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx); - r = amdgpu_vm_bo_update(adev, bo_va, false); - if (r) { - mutex_unlock(&p->bo_list->bo_list_mutex); - return r; - } + p->bytes_moved += ctx.bytes_moved; + if (!amdgpu_gmc_vram_full_visible(&adev->gmc) && + amdgpu_bo_in_cpu_visible_vram(bo)) + p->bytes_moved_vis += ctx.bytes_moved; - r = amdgpu_sync_fence(&p->job->sync, bo_va->last_pt_update); - if (r) { - mutex_unlock(&p->bo_list->bo_list_mutex); - return r; - } + if (unlikely(r == -ENOMEM) && domain != bo->allowed_domains) { + domain = bo->allowed_domains; + goto retry; } - r = amdgpu_vm_handle_moved(adev, vm); - if (r) - return r; - - r = amdgpu_vm_update_pdes(adev, vm, false); - if (r) - return r; + return r; +} - r = amdgpu_sync_fence(&p->job->sync, vm->last_update); - if (r) - return r; +static int amdgpu_cs_list_validate(struct amdgpu_cs_parser *p, + struct list_head *validated) +{ + struct ttm_operation_ctx ctx = { true, false }; + struct amdgpu_bo_list_entry *lobj; + int r; - p->job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.bo); + list_for_each_entry(lobj, validated, tv.head) { + struct amdgpu_bo *bo = ttm_to_amdgpu_bo(lobj->tv.bo); + struct mm_struct *usermm; - if (amdgpu_vm_debug) { - /* Invalidate all BOs to test for userspace bugs */ - amdgpu_bo_list_for_each_entry(e, p->bo_list) { - struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo); + usermm = amdgpu_ttm_tt_get_usermm(bo->tbo.ttm); + if (usermm && usermm != current->mm) + return -EPERM; - /* ignore duplicates */ - if (!bo) - continue; + if (amdgpu_ttm_tt_is_userptr(bo->tbo.ttm) && + lobj->user_invalidated && lobj->user_pages) { + amdgpu_bo_placement_from_domain(bo, + AMDGPU_GEM_DOMAIN_CPU); + r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx); + if (r) + return r; - amdgpu_vm_bo_invalidate(adev, bo, false); + amdgpu_ttm_tt_set_user_pages(bo->tbo.ttm, + lobj->user_pages); } - } - return amdgpu_cs_sync_rings(p); + r = amdgpu_cs_bo_validate(p, bo); + if (r) + return r; + + kvfree(lobj->user_pages); + lobj->user_pages = NULL; + } + return 0; } -static int amdgpu_cs_ib_fill(struct amdgpu_device *adev, - struct amdgpu_cs_parser *parser) +static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, + union drm_amdgpu_cs *cs) { - struct amdgpu_fpriv *fpriv = parser->filp->driver_priv; + struct amdgpu_fpriv *fpriv = p->filp->driver_priv; struct amdgpu_vm *vm = &fpriv->vm; - int r, ce_preempt = 0, de_preempt = 0; - struct amdgpu_ring *ring; - int i, j; + struct amdgpu_bo_list_entry *e; + struct list_head duplicates; + struct amdgpu_bo *gds; + struct amdgpu_bo *gws; + struct amdgpu_bo *oa; + int r; - for (i = 0, j = 0; i < parser->nchunks && j < parser->job->num_ibs; i++) { - struct amdgpu_cs_chunk *chunk; - struct amdgpu_ib *ib; - struct drm_amdgpu_cs_chunk_ib *chunk_ib; - struct drm_sched_entity *entity; + INIT_LIST_HEAD(&p->validated); - chunk = &parser->chunks[i]; - ib = &parser->job->ibs[j]; - chunk_ib = (struct drm_amdgpu_cs_chunk_ib *)chunk->kdata; + /* p->bo_list could already be assigned if AMDGPU_CHUNK_ID_BO_HANDLES is present */ + if (cs->in.bo_list_handle) { + if (p->bo_list) + return -EINVAL; - if (chunk->chunk_id != AMDGPU_CHUNK_ID_IB) - continue; + r = amdgpu_bo_list_get(fpriv, 
cs->in.bo_list_handle, + &p->bo_list); + if (r) + return r; + } else if (!p->bo_list) { + /* Create a empty bo_list when no handle is provided */ + r = amdgpu_bo_list_create(p->adev, p->filp, NULL, 0, + &p->bo_list); + if (r) + return r; + } - if (chunk_ib->ip_type == AMDGPU_HW_IP_GFX && - (amdgpu_mcbp || amdgpu_sriov_vf(adev))) { - if (chunk_ib->flags & AMDGPU_IB_FLAG_PREEMPT) { - if (chunk_ib->flags & AMDGPU_IB_FLAG_CE) - ce_preempt++; - else - de_preempt++; - } + mutex_lock(&p->bo_list->bo_list_mutex); - /* each GFX command submit allows 0 or 1 IB preemptible for CE & DE */ - if (ce_preempt > 1 || de_preempt > 1) - return -EINVAL; - } + /* One for TTM and one for the CS job */ + amdgpu_bo_list_for_each_entry(e, p->bo_list) + e->tv.num_shared = 2; - r = amdgpu_ctx_get_entity(parser->ctx, chunk_ib->ip_type, - chunk_ib->ip_instance, chunk_ib->ring, - &entity); - if (r) - return r; + amdgpu_bo_list_get_list(p->bo_list, &p->validated); - if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE) - parser->job->preamble_status |= - AMDGPU_PREAMBLE_IB_PRESENT; + INIT_LIST_HEAD(&duplicates); + amdgpu_vm_get_pd_bo(&fpriv->vm, &p->validated, &p->vm_pd); - if (parser->entity && parser->entity != entity) - return -EINVAL; + if (p->uf_entry.tv.bo && !ttm_to_amdgpu_bo(p->uf_entry.tv.bo)->parent) + list_add(&p->uf_entry.tv.head, &p->validated); - /* Return if there is no run queue associated with this entity. - * Possibly because of disabled HW IP*/ - if (entity->rq == NULL) - return -EINVAL; + /* Get userptr backing pages. If pages are updated after registered + * in amdgpu_gem_userptr_ioctl(), amdgpu_cs_list_validate() will do + * amdgpu_ttm_backend_bind() to flush and invalidate new pages + */ + amdgpu_bo_list_for_each_userptr_entry(e, p->bo_list) { + struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo); + bool userpage_invalidated = false; + int i; - parser->entity = entity; + e->user_pages = kvmalloc_array(bo->tbo.ttm->num_pages, + sizeof(struct page *), + GFP_KERNEL | __GFP_ZERO); + if (!e->user_pages) { + DRM_ERROR("kvmalloc_array failure\n"); + r = -ENOMEM; + goto out_free_user_pages; + } - ring = to_amdgpu_ring(entity->rq->sched); - r = amdgpu_ib_get(adev, vm, ring->funcs->parse_cs ? 
- chunk_ib->ib_bytes : 0, - AMDGPU_IB_POOL_DELAYED, ib); + r = amdgpu_ttm_tt_get_user_pages(bo, e->user_pages); if (r) { - DRM_ERROR("Failed to get ib !\n"); - return r; + kvfree(e->user_pages); + e->user_pages = NULL; + goto out_free_user_pages; } - ib->gpu_addr = chunk_ib->va_start; - ib->length_dw = chunk_ib->ib_bytes / 4; - ib->flags = chunk_ib->flags; + for (i = 0; i < bo->tbo.ttm->num_pages; i++) { + if (bo->tbo.ttm->pages[i] != e->user_pages[i]) { + userpage_invalidated = true; + break; + } + } + e->user_invalidated = userpage_invalidated; + } - j++; + r = ttm_eu_reserve_buffers(&p->ticket, &p->validated, true, + &duplicates); + if (unlikely(r != 0)) { + if (r != -ERESTARTSYS) + DRM_ERROR("ttm_eu_reserve_buffers failed.\n"); + goto out_free_user_pages; } - /* MM engine doesn't support user fences */ - ring = to_amdgpu_ring(parser->entity->rq->sched); - if (parser->job->uf_addr && ring->funcs->no_user_fence) - return -EINVAL; + amdgpu_bo_list_for_each_entry(e, p->bo_list) { + struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo); - return 0; -} + e->bo_va = amdgpu_vm_bo_find(vm, bo); + } -static int amdgpu_cs_process_fence_dep(struct amdgpu_cs_parser *p, - struct amdgpu_cs_chunk *chunk) -{ - struct amdgpu_fpriv *fpriv = p->filp->driver_priv; - unsigned num_deps; - int i, r; - struct drm_amdgpu_cs_chunk_dep *deps; + /* Move fence waiting after getting reservation lock of + * PD root. Then there is no need on a ctx mutex lock. + */ + r = amdgpu_ctx_wait_prev_fence(p->ctx, p->entity); + if (unlikely(r != 0)) { + if (r != -ERESTARTSYS) + DRM_ERROR("amdgpu_ctx_wait_prev_fence failed.\n"); + goto error_validate; + } - deps = (struct drm_amdgpu_cs_chunk_dep *)chunk->kdata; - num_deps = chunk->length_dw * 4 / - sizeof(struct drm_amdgpu_cs_chunk_dep); + amdgpu_cs_get_threshold_for_moves(p->adev, &p->bytes_moved_threshold, + &p->bytes_moved_vis_threshold); + p->bytes_moved = 0; + p->bytes_moved_vis = 0; - for (i = 0; i < num_deps; ++i) { - struct amdgpu_ctx *ctx; - struct drm_sched_entity *entity; - struct dma_fence *fence; + r = amdgpu_vm_validate_pt_bos(p->adev, &fpriv->vm, + amdgpu_cs_bo_validate, p); + if (r) { + DRM_ERROR("amdgpu_vm_validate_pt_bos() failed.\n"); + goto error_validate; + } - ctx = amdgpu_ctx_get(fpriv, deps[i].ctx_id); - if (ctx == NULL) - return -EINVAL; + r = amdgpu_cs_list_validate(p, &duplicates); + if (r) + goto error_validate; - r = amdgpu_ctx_get_entity(ctx, deps[i].ip_type, - deps[i].ip_instance, - deps[i].ring, &entity); - if (r) { - amdgpu_ctx_put(ctx); - return r; - } + r = amdgpu_cs_list_validate(p, &p->validated); + if (r) + goto error_validate; - fence = amdgpu_ctx_get_fence(ctx, entity, deps[i].handle); - amdgpu_ctx_put(ctx); + amdgpu_cs_report_moved_bytes(p->adev, p->bytes_moved, + p->bytes_moved_vis); - if (IS_ERR(fence)) - return PTR_ERR(fence); - else if (!fence) - continue; + gds = p->bo_list->gds_obj; + gws = p->bo_list->gws_obj; + oa = p->bo_list->oa_obj; - if (chunk->chunk_id == AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES) { - struct drm_sched_fence *s_fence; - struct dma_fence *old = fence; + if (gds) { + p->job->gds_base = amdgpu_bo_gpu_offset(gds) >> PAGE_SHIFT; + p->job->gds_size = amdgpu_bo_size(gds) >> PAGE_SHIFT; + } + if (gws) { + p->job->gws_base = amdgpu_bo_gpu_offset(gws) >> PAGE_SHIFT; + p->job->gws_size = amdgpu_bo_size(gws) >> PAGE_SHIFT; + } + if (oa) { + p->job->oa_base = amdgpu_bo_gpu_offset(oa) >> PAGE_SHIFT; + p->job->oa_size = amdgpu_bo_size(oa) >> PAGE_SHIFT; + } - s_fence = to_drm_sched_fence(fence); - fence = 
dma_fence_get(&s_fence->scheduled); - dma_fence_put(old); - } + if (p->uf_entry.tv.bo) { + struct amdgpu_bo *uf = ttm_to_amdgpu_bo(p->uf_entry.tv.bo); - r = amdgpu_sync_fence(&p->job->sync, fence); - dma_fence_put(fence); + r = amdgpu_ttm_alloc_gart(&uf->tbo); if (r) - return r; - } - return 0; -} - -static int amdgpu_syncobj_lookup_and_add_to_sync(struct amdgpu_cs_parser *p, - uint32_t handle, u64 point, - u64 flags) -{ - struct dma_fence *fence; - int r; + goto error_validate; - r = drm_syncobj_find_fence(p->filp, handle, point, flags, &fence); - if (r) { - DRM_ERROR("syncobj %u failed to find fence @ %llu (%d)!\n", - handle, point, r); - return r; + p->job->uf_addr += amdgpu_bo_gpu_offset(uf); } + return 0; - r = amdgpu_sync_fence(&p->job->sync, fence); - dma_fence_put(fence); +error_validate: + ttm_eu_backoff_reservation(&p->ticket, &p->validated); + +out_free_user_pages: + amdgpu_bo_list_for_each_userptr_entry(e, p->bo_list) { + struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo); + if (!e->user_pages) + continue; + amdgpu_ttm_tt_get_user_pages_done(bo->tbo.ttm); + kvfree(e->user_pages); + e->user_pages = NULL; + } + mutex_unlock(&p->bo_list->bo_list_mutex); return r; } -static int amdgpu_cs_process_syncobj_in_dep(struct amdgpu_cs_parser *p, - struct amdgpu_cs_chunk *chunk) +static void trace_amdgpu_cs_ibs(struct amdgpu_cs_parser *parser) { - struct drm_amdgpu_cs_chunk_sem *deps; - unsigned num_deps; - int i, r; + int i; - deps = (struct drm_amdgpu_cs_chunk_sem *)chunk->kdata; - num_deps = chunk->length_dw * 4 / - sizeof(struct drm_amdgpu_cs_chunk_sem); - for (i = 0; i < num_deps; ++i) { - r = amdgpu_syncobj_lookup_and_add_to_sync(p, deps[i].handle, - 0, 0); - if (r) - return r; - } + if (!trace_amdgpu_cs_enabled()) + return; - return 0; + for (i = 0; i < parser->job->num_ibs; i++) + trace_amdgpu_cs(parser, i); } - -static int amdgpu_cs_process_syncobj_timeline_in_dep(struct amdgpu_cs_parser *p, - struct amdgpu_cs_chunk *chunk) +static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p) { - struct drm_amdgpu_cs_chunk_syncobj *syncobj_deps; - unsigned num_deps; - int i, r; + struct amdgpu_job *job = p->job; + struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched); + unsigned int i; + int r; - syncobj_deps = (struct drm_amdgpu_cs_chunk_syncobj *)chunk->kdata; - num_deps = chunk->length_dw * 4 / - sizeof(struct drm_amdgpu_cs_chunk_syncobj); - for (i = 0; i < num_deps; ++i) { - r = amdgpu_syncobj_lookup_and_add_to_sync(p, - syncobj_deps[i].handle, - syncobj_deps[i].point, - syncobj_deps[i].flags); - if (r) + /* Only for UVD/VCE VM emulation */ + if (!ring->funcs->parse_cs && !ring->funcs->patch_cs_in_place) + return 0; + + for (i = 0; i < job->num_ibs; ++i) { + struct amdgpu_ib *ib = &job->ibs[i]; + struct amdgpu_bo_va_mapping *m; + struct amdgpu_bo *aobj; + uint64_t va_start; + uint8_t *kptr; + + va_start = ib->gpu_addr; + r = amdgpu_cs_find_mapping(p, va_start, &aobj, &m); + if (r) { + DRM_ERROR("IB va_start is invalid\n"); return r; + } + + if ((va_start + ib->length_dw * 4) > + (m->last + 1) * AMDGPU_GPU_PAGE_SIZE) { + DRM_ERROR("IB va_start+ib_bytes is invalid\n"); + return -EINVAL; + } + + /* the IB should be reserved at this point */ + r = amdgpu_bo_kmap(aobj, (void **)&kptr); + if (r) { + return r; + } + + kptr += va_start - (m->start * AMDGPU_GPU_PAGE_SIZE); + + if (ring->funcs->parse_cs) { + memcpy(ib->ptr, kptr, ib->length_dw * 4); + amdgpu_bo_kunmap(aobj); + + r = amdgpu_ring_parse_cs(ring, p, p->job, ib); + if (r) + return r; + } else { + ib->ptr = (uint32_t *)kptr; + r = 
amdgpu_ring_patch_cs_in_place(ring, p, p->job, ib); + amdgpu_bo_kunmap(aobj); + if (r) + return r; + } } return 0; } -static int amdgpu_cs_process_syncobj_out_dep(struct amdgpu_cs_parser *p, - struct amdgpu_cs_chunk *chunk) +static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p) { - struct drm_amdgpu_cs_chunk_sem *deps; - unsigned num_deps; - int i; - - deps = (struct drm_amdgpu_cs_chunk_sem *)chunk->kdata; - num_deps = chunk->length_dw * 4 / - sizeof(struct drm_amdgpu_cs_chunk_sem); + struct amdgpu_fpriv *fpriv = p->filp->driver_priv; + struct amdgpu_device *adev = p->adev; + struct amdgpu_vm *vm = &fpriv->vm; + struct amdgpu_bo_list_entry *e; + struct amdgpu_bo_va *bo_va; + struct amdgpu_bo *bo; + int r; - if (p->post_deps) - return -EINVAL; + r = amdgpu_vm_clear_freed(adev, vm, NULL); + if (r) + return r; - p->post_deps = kmalloc_array(num_deps, sizeof(*p->post_deps), - GFP_KERNEL); - p->num_post_deps = 0; + r = amdgpu_vm_bo_update(adev, fpriv->prt_va, false); + if (r) + return r; - if (!p->post_deps) - return -ENOMEM; + r = amdgpu_sync_fence(&p->job->sync, fpriv->prt_va->last_pt_update); + if (r) + return r; + if (amdgpu_mcbp || amdgpu_sriov_vf(adev)) { + bo_va = fpriv->csa_va; + r = amdgpu_vm_bo_update(adev, bo_va, false); + if (r) + return r; - for (i = 0; i < num_deps; ++i) { - p->post_deps[i].syncobj = - drm_syncobj_find(p->filp, deps[i].handle); - if (!p->post_deps[i].syncobj) - return -EINVAL; - p->post_deps[i].chain = NULL; - p->post_deps[i].point = 0; - p->num_post_deps++; + r = amdgpu_sync_fence(&p->job->sync, bo_va->last_pt_update); + if (r) + return r; } - return 0; -} + amdgpu_bo_list_for_each_entry(e, p->bo_list) { + /* ignore duplicates */ + bo = ttm_to_amdgpu_bo(e->tv.bo); + if (!bo) + continue; + bo_va = e->bo_va; + if (bo_va == NULL) + continue; -static int amdgpu_cs_process_syncobj_timeline_out_dep(struct amdgpu_cs_parser *p, - struct amdgpu_cs_chunk *chunk) -{ - struct drm_amdgpu_cs_chunk_syncobj *syncobj_deps; - unsigned num_deps; - int i; + r = amdgpu_vm_bo_update(adev, bo_va, false); + if (r) + return r; - syncobj_deps = (struct drm_amdgpu_cs_chunk_syncobj *)chunk->kdata; - num_deps = chunk->length_dw * 4 / - sizeof(struct drm_amdgpu_cs_chunk_syncobj); + r = amdgpu_sync_fence(&p->job->sync, bo_va->last_pt_update); + if (r) + return r; + } - if (p->post_deps) - return -EINVAL; + r = amdgpu_vm_handle_moved(adev, vm); + if (r) + return r; - p->post_deps = kmalloc_array(num_deps, sizeof(*p->post_deps), - GFP_KERNEL); - p->num_post_deps = 0; + r = amdgpu_vm_update_pdes(adev, vm, false); + if (r) + return r; - if (!p->post_deps) - return -ENOMEM; + r = amdgpu_sync_fence(&p->job->sync, vm->last_update); + if (r) + return r; - for (i = 0; i < num_deps; ++i) { - struct amdgpu_cs_post_dep *dep = &p->post_deps[i]; + p->job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.bo); - dep->chain = NULL; - if (syncobj_deps[i].point) { - dep->chain = dma_fence_chain_alloc(); - if (!dep->chain) - return -ENOMEM; - } + if (amdgpu_vm_debug) { + /* Invalidate all BOs to test for userspace bugs */ + amdgpu_bo_list_for_each_entry(e, p->bo_list) { + struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo); - dep->syncobj = drm_syncobj_find(p->filp, - syncobj_deps[i].handle); - if (!dep->syncobj) { - dma_fence_chain_free(dep->chain); - return -EINVAL; + /* ignore duplicates */ + if (!bo) + continue; + + amdgpu_vm_bo_invalidate(adev, bo, false); } - dep->point = syncobj_deps[i].point; - p->num_post_deps++; } return 0; } -static int amdgpu_cs_dependencies(struct amdgpu_device *adev, - struct 
amdgpu_cs_parser *p) +static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p) { - int i, r; - - for (i = 0; i < p->nchunks; ++i) { - struct amdgpu_cs_chunk *chunk; + struct amdgpu_fpriv *fpriv = p->filp->driver_priv; + struct amdgpu_bo_list_entry *e; + int r; - chunk = &p->chunks[i]; + list_for_each_entry(e, &p->validated, tv.head) { + struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo); + struct dma_resv *resv = bo->tbo.base.resv; + enum amdgpu_sync_mode sync_mode; - switch (chunk->chunk_id) { - case AMDGPU_CHUNK_ID_DEPENDENCIES: - case AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES: - r = amdgpu_cs_process_fence_dep(p, chunk); - if (r) - return r; - break; - case AMDGPU_CHUNK_ID_SYNCOBJ_IN: - r = amdgpu_cs_process_syncobj_in_dep(p, chunk); - if (r) - return r; - break; - case AMDGPU_CHUNK_ID_SYNCOBJ_OUT: - r = amdgpu_cs_process_syncobj_out_dep(p, chunk); - if (r) - return r; - break; - case AMDGPU_CHUNK_ID_SYNCOBJ_TIMELINE_WAIT: - r = amdgpu_cs_process_syncobj_timeline_in_dep(p, chunk); - if (r) - return r; - break; - case AMDGPU_CHUNK_ID_SYNCOBJ_TIMELINE_SIGNAL: - r = amdgpu_cs_process_syncobj_timeline_out_dep(p, chunk); - if (r) - return r; - break; - } + sync_mode = amdgpu_bo_explicit_sync(bo) ? + AMDGPU_SYNC_EXPLICIT : AMDGPU_SYNC_NE_OWNER; + r = amdgpu_sync_resv(p->adev, &p->job->sync, resv, sync_mode, + &fpriv->vm); + if (r) + return r; } - return 0; } @@ -1226,10 +1195,6 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p, job = p->job; p->job = NULL; - r = drm_sched_job_init(&job->base, entity, &fpriv->vm); - if (r) - goto error_unlock; - drm_sched_job_arm(&job->base); /* No memory allocation is allowed while holding the notifier lock. @@ -1286,29 +1251,45 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p, error_abort: drm_sched_job_cleanup(&job->base); mutex_unlock(&p->adev->notifier_lock); - -error_unlock: amdgpu_job_free(job); return r; } -static void trace_amdgpu_cs_ibs(struct amdgpu_cs_parser *parser) +/* Cleanup the parser structure */ +static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser) { - int i; + unsigned i; - if (!trace_amdgpu_cs_enabled()) - return; + for (i = 0; i < parser->num_post_deps; i++) { + drm_syncobj_put(parser->post_deps[i].syncobj); + kfree(parser->post_deps[i].chain); + } + kfree(parser->post_deps); - for (i = 0; i < parser->job->num_ibs; i++) - trace_amdgpu_cs(parser, i); + dma_fence_put(parser->fence); + + if (parser->ctx) { + amdgpu_ctx_put(parser->ctx); + } + if (parser->bo_list) + amdgpu_bo_list_put(parser->bo_list); + + for (i = 0; i < parser->nchunks; i++) + kvfree(parser->chunks[i].kdata); + kvfree(parser->chunks); + if (parser->job) + amdgpu_job_free(parser->job); + if (parser->uf_entry.tv.bo) { + struct amdgpu_bo *uf = ttm_to_amdgpu_bo(parser->uf_entry.tv.bo); + + amdgpu_bo_unref(&uf); + } } int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) { struct amdgpu_device *adev = drm_to_adev(dev); - union drm_amdgpu_cs *cs = data; - struct amdgpu_cs_parser parser = {}; - bool reserved_buffers = false; + struct amdgpu_cs_parser parser; int r; if (amdgpu_ras_intr_triggered()) @@ -1317,25 +1298,20 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) if (!adev->accel_working) return -EBUSY; - parser.adev = adev; - parser.filp = filp; - - r = amdgpu_cs_parser_init(&parser, data); + r = amdgpu_cs_parser_init(&parser, adev, filp, data); if (r) { if (printk_ratelimit()) DRM_ERROR("Failed to initialize parser %d!\n", r); - goto out; + return r; } - r = amdgpu_cs_ib_fill(adev, 
&parser); + r = amdgpu_cs_pass1(&parser, data); if (r) - goto out; + goto error_fini; - r = amdgpu_cs_dependencies(adev, &parser); - if (r) { - DRM_ERROR("Failed in the dependencies handling %d!\n", r); - goto out; - } + r = amdgpu_cs_pass2(&parser); + if (r) + goto error_fini; r = amdgpu_cs_parser_bos(&parser, data); if (r) { @@ -1343,21 +1319,36 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) DRM_ERROR("Not enough memory for command submission!\n"); else if (r != -ERESTARTSYS && r != -EAGAIN) DRM_ERROR("Failed to process the buffer list %d!\n", r); - goto out; + goto error_fini; } - reserved_buffers = true; + r = amdgpu_cs_patch_ibs(&parser); + if (r) + goto error_backoff; + + r = amdgpu_cs_vm_handling(&parser); + if (r) + goto error_backoff; + + r = amdgpu_cs_sync_rings(&parser); + if (r) + goto error_backoff; trace_amdgpu_cs_ibs(&parser); - r = amdgpu_cs_vm_handling(&parser); + r = amdgpu_cs_submit(&parser, data); if (r) - goto out; + goto error_backoff; - r = amdgpu_cs_submit(&parser, cs); -out: - amdgpu_cs_parser_fini(&parser, r, reserved_buffers); + amdgpu_cs_parser_fini(&parser); + return 0; + +error_backoff: + ttm_eu_backoff_reservation(&parser.ticket, &parser.validated); + mutex_unlock(&parser.bo_list->bo_list_mutex); +error_fini: + amdgpu_cs_parser_fini(&parser); return r; } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h index 30ecc4917f81..652b5593499f 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h @@ -51,8 +51,8 @@ struct amdgpu_cs_parser { struct amdgpu_cs_chunk *chunks; /* scheduler job object */ - struct amdgpu_job *job; struct drm_sched_entity *entity; + struct amdgpu_job *job; /* buffer objects */ struct ww_acquire_ctx ticket; From patchwork Mon Aug 15 18:59:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Christian_K=C3=B6nig?= X-Patchwork-Id: 12943979 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id A5784C00140 for ; Mon, 15 Aug 2022 19:01:12 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 7DC9CD25E6; Mon, 15 Aug 2022 19:00:50 +0000 (UTC) Received: from mail-ej1-x630.google.com (mail-ej1-x630.google.com [IPv6:2a00:1450:4864:20::630]) by gabe.freedesktop.org (Postfix) with ESMTPS id B2B1DCAD02 for ; Mon, 15 Aug 2022 18:59:49 +0000 (UTC) Received: by mail-ej1-x630.google.com with SMTP id fy5so15059681ejc.3 for ; Mon, 15 Aug 2022 11:59:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc; bh=AdvFut0rh/uAXAtDtXUY+tZD5YgCj1fQLBVIkWg1+DY=; b=RgSVVDv9ex5/MCPn8FHgzf/MUryraeV0+Nf8f5TDGaf1eTv8FPrVYKCaKFmDfCn+Tb qFdvpaQ8uHmX/Dcn3kwvWGjwSItnp0FcP92yUdJJlQnS7VNqL5oPMmo+VtYP3Sh2V3UU drJpXOXbgQClLubPBmSo7Flxfz1L831BkN3rv0jKHky+yAQJXMLQMcIeoTlVBtAyJXSx IpNUF5aRVN20m5H1D9Jsymux7sHkTlYEdEwr0OlyW1SlJEh3IsZTDfuIvYZYOOk2/144 zIzz22GYe0O52ER0VslV2VuJtOiD5EOEawP1MSSDZopVlYsewHIdi/c3behTMJHlISyz kWzQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc; bh=AdvFut0rh/uAXAtDtXUY+tZD5YgCj1fQLBVIkWg1+DY=; b=yBMDajJEbcwmJh0WZ7JWNxXFTRuWnLQHraHM2QQ+bZdeGeR/stVQ4wv5jEc2dZzvLE v7rzWW07OIWNKZP6+VHIHodrrJyCwX6g2CHpv+LPizNWIrq97LXoXhuAcsqyIw+mJqAe 6GlRwykGBH+VdYWmYDC2pixRtcy/ZhbvlWoMru4wllbyE4sjZKczc6Be0gfldhDPwpaD EVpRXGsNI2urmMssgYTQnygTSuZfs3udKCBQ8Aefmitb6vadEQ0+HjZUoCiZjQhvWjlu 5UKjdV4qWfznok2r9eGbSAk8TBcEuBBcau2hW9ZN5fxdslCq1SmFG8oc5UfH0tq2+9fa bNbA== X-Gm-Message-State: ACgBeo3jXdc3BOjMPo2+N1A6WHLsXjSuxbtAr/HJUFtQ3SCmjSijPlBc VOZDHSq5ikzUFnPB8Dg1JknmdlPDV0k= X-Google-Smtp-Source: AA6agR4tAryyDv0dRp0xcsokaIrVd7cn5zYI7MEN1TKTmwu1PHmQpqLpuNauZH5NlL/soqxnWNvZFw== X-Received: by 2002:a17:907:7b95:b0:72f:9c64:4061 with SMTP id ne21-20020a1709077b9500b0072f9c644061mr11138809ejc.351.1660589988220; Mon, 15 Aug 2022 11:59:48 -0700 (PDT) Received: from able.fritz.box (p57b0bd9f.dip0.t-ipconnect.de. [87.176.189.159]) by smtp.gmail.com with ESMTPSA id d10-20020a170906304a00b00731745a7e62sm3553805ejd.28.2022.08.15.11.59.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 15 Aug 2022 11:59:47 -0700 (PDT) From: " =?utf-8?q?Christian_K=C3=B6nig?= " X-Google-Original-From: =?utf-8?q?Christian_K=C3=B6nig?= To: dri-devel@lists.freedesktop.org Subject: [PATCH 05/10] drm/amdgpu: remove SRIOV and MCBP dependencies from the CS Date: Mon, 15 Aug 2022 20:59:35 +0200 Message-Id: <20220815185940.4744-6-christian.koenig@amd.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220815185940.4744-1-christian.koenig@amd.com> References: <20220815185940.4744-1-christian.koenig@amd.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Christian_K=C3=B6nig?= Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" We should not have any different CS constrains based on the execution environment. 
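In other words, the GFX preemption flags are now evaluated for every submission and the CSA mapping is updated whenever the file actually has one, instead of keying both checks on amdgpu_mcbp or amdgpu_sriov_vf(). Condensed sketch of the resulting checks (illustrative paraphrase of the hunks below, not additional code; all identifiers are taken from the diff itself):

    if (chunk_ib->ip_type == AMDGPU_HW_IP_GFX &&
        chunk_ib->flags & AMDGPU_IB_FLAG_PREEMPT) {
        /* count CE/DE preemptible IBs regardless of MCBP/SRIOV */
    }

    if (fpriv->csa_va) {
        /* update the CSA mapping whenever this file has one */
    }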
Signed-off-by: Christian König --- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index b9de631a66a3..dfb7b4f46bc3 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -323,8 +323,7 @@ static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p, return -EINVAL; if (chunk_ib->ip_type == AMDGPU_HW_IP_GFX && - chunk_ib->flags & AMDGPU_IB_FLAG_PREEMPT && - (amdgpu_mcbp || amdgpu_sriov_vf(p->adev))) { + chunk_ib->flags & AMDGPU_IB_FLAG_PREEMPT) { if (chunk_ib->flags & AMDGPU_IB_FLAG_CE) (*ce_preempt)++; else @@ -1084,7 +1083,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p) if (r) return r; - if (amdgpu_mcbp || amdgpu_sriov_vf(adev)) { + if (fpriv->csa_va) { bo_va = fpriv->csa_va; r = amdgpu_vm_bo_update(adev, bo_va, false); if (r) From patchwork Mon Aug 15 18:59:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Christian_K=C3=B6nig?= X-Patchwork-Id: 12943980 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 3C368C00140 for ; Mon, 15 Aug 2022 19:01:20 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 00343D2624; Mon, 15 Aug 2022 19:00:54 +0000 (UTC) Received: from mail-ed1-x52d.google.com (mail-ed1-x52d.google.com [IPv6:2a00:1450:4864:20::52d]) by gabe.freedesktop.org (Postfix) with ESMTPS id D775AD2510 for ; Mon, 15 Aug 2022 18:59:50 +0000 (UTC) Received: by mail-ed1-x52d.google.com with SMTP id z20so10679220edb.9 for ; Mon, 15 Aug 2022 11:59:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc; bh=VBamc8Ms6NhZBwGvL/26Jel4jyXtrZlUdTUYjhml1xE=; b=Da74hFtBdDYS0Nbz+omBdKd2AKSCs9udhBEih7h4ddHPWopb8GrqZQEbWhfQLfxEEJ mSx7YASMGoiiveqmRg0iHQx+5N19YvkkdxZnGoC9EQ4wX0kJ3AigH++OVIWFkJ7KUJId oofck6j+WcwlLmqdoIiT8OBiZsTNujC1LI4lOSRWcj2yMJS/9UZ9YTEwTpn1xdrlpq1n tioHNlRCKk12VeuQbyHwp13Sy+9x6w1XiA9D8qrVFmN0+pNSRv5Zc89I3nrgJsUjMYlv Qsq9H9VzRv/Zmt8yWuVdaUGq09Xqjak1dNVjfl9BU1aQnnPN8n2qNPRXz9LroM0VwWVQ qSBA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc; bh=VBamc8Ms6NhZBwGvL/26Jel4jyXtrZlUdTUYjhml1xE=; b=pnjPEJo8YfNFnqenKVMYcnzv9qpUVwgjV342Wl1m6E8WM2GGx9jYfAcnTh0lR6OLIJ mZpn7HHI8Vp6VZG1YyUbB01k/8KYTe/GvU5yi5m3ZcfVS7dEN0J1VUVSIwlQpztDv/KN 0utPE/cu4r3uP9DbaHTmbHV73Mi0y+sQszpgArPMwWmasgN7dHEyb3g7lAauhbl9esKs TP2Pe053YQ5k1MPJGg7AMIquY3C8SEtpnGAH5NWBhTeYf1vFVyLA0TCf1miOGO22b4Nf hLiDRHXRLUWuGtKmVQjnaziWQ0BqQAS/PbnbabA/cQ8K8u2jc52XO66L+eh/VCxPX/SC 9cDA== X-Gm-Message-State: ACgBeo1vUKq07Ch6zgv+Jst54GqZtpnhHdIOiSA72Br5VWTntmXULrxy A3xKbUa7BjwkWlu6vHOkVnI4hwKzcAg= X-Google-Smtp-Source: AA6agR74GzGCSQchCU7lN5JPMzYqwE5UxE+NtIAsROB4ZlRmV1MM+wYa5CCiL0vGQ6W6YC5jR0qCLw== X-Received: by 2002:a05:6402:5188:b0:43e:7a7f:34f7 with SMTP id 
q8-20020a056402518800b0043e7a7f34f7mr15318828edd.406.1660589989192; Mon, 15 Aug 2022 11:59:49 -0700 (PDT) Received: from able.fritz.box (p57b0bd9f.dip0.t-ipconnect.de. [87.176.189.159]) by smtp.gmail.com with ESMTPSA id d10-20020a170906304a00b00731745a7e62sm3553805ejd.28.2022.08.15.11.59.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 15 Aug 2022 11:59:48 -0700 (PDT) From: " =?utf-8?q?Christian_K=C3=B6nig?= " X-Google-Original-From: =?utf-8?q?Christian_K=C3=B6nig?= To: dri-devel@lists.freedesktop.org Subject: [PATCH 06/10] drm/amdgpu: move setting the job resources Date: Mon, 15 Aug 2022 20:59:36 +0200 Message-Id: <20220815185940.4744-7-christian.koenig@amd.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220815185940.4744-1-christian.koenig@amd.com> References: <20220815185940.4744-1-christian.koenig@amd.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Luben Tuikov , =?utf-8?q?Christian_K=C3=B6nig?= Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Move setting the job resources into amdgpu_job.c Signed-off-by: Christian König Reviewed-by: Andrey Grodzovsky Reviewed-by: Luben Tuikov --- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 21 ++------------------- drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 17 +++++++++++++++++ drivers/gpu/drm/amd/amdgpu/amdgpu_job.h | 2 ++ 3 files changed, 21 insertions(+), 19 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index dfb7b4f46bc3..88f491dc7ca2 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -828,9 +828,6 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, struct amdgpu_vm *vm = &fpriv->vm; struct amdgpu_bo_list_entry *e; struct list_head duplicates; - struct amdgpu_bo *gds; - struct amdgpu_bo *gws; - struct amdgpu_bo *oa; int r; INIT_LIST_HEAD(&p->validated); @@ -947,22 +944,8 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, amdgpu_cs_report_moved_bytes(p->adev, p->bytes_moved, p->bytes_moved_vis); - gds = p->bo_list->gds_obj; - gws = p->bo_list->gws_obj; - oa = p->bo_list->oa_obj; - - if (gds) { - p->job->gds_base = amdgpu_bo_gpu_offset(gds) >> PAGE_SHIFT; - p->job->gds_size = amdgpu_bo_size(gds) >> PAGE_SHIFT; - } - if (gws) { - p->job->gws_base = amdgpu_bo_gpu_offset(gws) >> PAGE_SHIFT; - p->job->gws_size = amdgpu_bo_size(gws) >> PAGE_SHIFT; - } - if (oa) { - p->job->oa_base = amdgpu_bo_gpu_offset(oa) >> PAGE_SHIFT; - p->job->oa_size = amdgpu_bo_size(oa) >> PAGE_SHIFT; - } + amdgpu_job_set_resources(p->job, p->bo_list->gds_obj, + p->bo_list->gws_obj, p->bo_list->oa_obj); if (p->uf_entry.tv.bo) { struct amdgpu_bo *uf = ttm_to_amdgpu_bo(p->uf_entry.tv.bo); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c index d5b737c6dbbf..2348beea6a2e 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c @@ -132,6 +132,23 @@ int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev, unsigned size, return r; } +void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds, + struct amdgpu_bo *gws, struct amdgpu_bo *oa) +{ + if (gds) { + job->gds_base = amdgpu_bo_gpu_offset(gds) >> PAGE_SHIFT; + job->gds_size = amdgpu_bo_size(gds) >> PAGE_SHIFT; + } + if (gws) { + job->gws_base = 
amdgpu_bo_gpu_offset(gws) >> PAGE_SHIFT; + job->gws_size = amdgpu_bo_size(gws) >> PAGE_SHIFT; + } + if (oa) { + job->oa_base = amdgpu_bo_gpu_offset(oa) >> PAGE_SHIFT; + job->oa_size = amdgpu_bo_size(oa) >> PAGE_SHIFT; + } +} + void amdgpu_job_free_resources(struct amdgpu_job *job) { struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h index babc0af751c2..2a1961bf1194 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h @@ -76,6 +76,8 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs, struct amdgpu_job **job, struct amdgpu_vm *vm); int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev, unsigned size, enum amdgpu_ib_pool_type pool, struct amdgpu_job **job); +void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds, + struct amdgpu_bo *gws, struct amdgpu_bo *oa); void amdgpu_job_free_resources(struct amdgpu_job *job); void amdgpu_job_free(struct amdgpu_job *job); int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity, From patchwork Mon Aug 15 18:59:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Christian_K=C3=B6nig?= X-Patchwork-Id: 12943978 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 722DDC00140 for ; Mon, 15 Aug 2022 19:01:06 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 59210D25E5; Mon, 15 Aug 2022 19:00:50 +0000 (UTC) Received: from mail-ej1-x62f.google.com (mail-ej1-x62f.google.com [IPv6:2a00:1450:4864:20::62f]) by gabe.freedesktop.org (Postfix) with ESMTPS id E76B3D2535 for ; Mon, 15 Aug 2022 18:59:51 +0000 (UTC) Received: by mail-ej1-x62f.google.com with SMTP id y13so14995200ejp.13 for ; Mon, 15 Aug 2022 11:59:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc; bh=QEEHOcVX9jcWEvEnHDVnuQFTANJGIJsGfJ2gHsBc/Ng=; b=FHccX+3OmIR1DOhFLZkXnpKKirdv+4dT3lU60vF1DjmXJ7g2NKlweUs5UlOOdA8IxV tyEOvHMrV4kayxtKLA3a7Rw6q2ehingKewdiU+qrBDOUuT3mW7NywRZUHq3rLevPvEma t+SrVtohctsmlyLJ4pQnVeOvbV54hdPpso7fsa96j2qO2mDkR9keLzu4cvhUeL+xIQqJ Yyi+SEo7zShji4kSKafcNgEKSqX+SYJ8GYoHIuHhXQOS5mTG/61fC5FoZuKu3v6iNiqx wqjZJVGA910KGivD/iRVZF0lY+yfbHM/S5mHaAAJfhKLt+fLNjxAow6uQtAlwTxgJurM txcQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc; bh=QEEHOcVX9jcWEvEnHDVnuQFTANJGIJsGfJ2gHsBc/Ng=; b=EYEV5eAbULMNizQAh2kmnNXVjdZWwE/eYtQD4Zr7fAuwmuRKCk9MC5OCasFIXY/RKl yHmbxtKRRycJLdvcHAtf5BENoOFX7euXDyRkNo0a/u6UfAeiGvnvdZDrAKFoyMu6y87z mJY2Bn+NoEBrd/qmY/CyLiXMY2Qq1opI+3U2GlQn2MLyBLPiiHYr4dXOvhoJa1oMKpfE CLynjOYYeaKtkTHgHt9Pk/KucjYsaYgbc08k89Huc6/4T+MFKUaFnFH0Z3tavYsKdlFv unNRYK5bPv8gIKNP71xdTTx4ZBE+TO2ZoRT7aPr2gIHM7uSxrB4myKZRs8MRCdmsKkHL eVDA== X-Gm-Message-State: ACgBeo2ynfMUvnf8S7iLssfa/stbt89H0X29YNA+Ay2h+Ri2FhvKcTId OD/Sdwt3E0gbmL7ts1DLJBhuCAE7Dh8= 
X-Google-Smtp-Source: AA6agR4Qt1yFvTFYftOiZppBr0z9FrQKl8+LijUpfdAglnygyyZOpCtDJcj3x21TkE4K+PUSdIPJgA== X-Received: by 2002:a17:907:2c78:b0:730:df57:1237 with SMTP id ib24-20020a1709072c7800b00730df571237mr10802097ejc.196.1660589989981; Mon, 15 Aug 2022 11:59:49 -0700 (PDT) Received: from able.fritz.box (p57b0bd9f.dip0.t-ipconnect.de. [87.176.189.159]) by smtp.gmail.com with ESMTPSA id d10-20020a170906304a00b00731745a7e62sm3553805ejd.28.2022.08.15.11.59.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 15 Aug 2022 11:59:49 -0700 (PDT) From: " =?utf-8?q?Christian_K=C3=B6nig?= " X-Google-Original-From: =?utf-8?q?Christian_K=C3=B6nig?= To: dri-devel@lists.freedesktop.org Subject: [PATCH 07/10] drm/amdgpu: revert "fix limiting AV1 to the first instance on VCN3" Date: Mon, 15 Aug 2022 20:59:37 +0200 Message-Id: <20220815185940.4744-8-christian.koenig@amd.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220815185940.4744-1-christian.koenig@amd.com> References: <20220815185940.4744-1-christian.koenig@amd.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Christian_K=C3=B6nig?= Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" This reverts commit 250195ff744f260c169f5427422b6f39c58cb883. The job should now be initialized when we reach the parser functions. Signed-off-by: Christian König --- drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c | 17 ++++++++++------- 1 file changed, 10 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c index 39405f0db824..3cabceee5f57 100644 --- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c @@ -1761,21 +1761,23 @@ static const struct amdgpu_ring_funcs vcn_v3_0_dec_sw_ring_vm_funcs = { .emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper, }; -static int vcn_v3_0_limit_sched(struct amdgpu_cs_parser *p) +static int vcn_v3_0_limit_sched(struct amdgpu_cs_parser *p, + struct amdgpu_job *job) { struct drm_gpu_scheduler **scheds; /* The create msg must be in the first IB submitted */ - if (atomic_read(&p->entity->fence_seq)) + if (atomic_read(&job->base.entity->fence_seq)) return -EINVAL; scheds = p->adev->gpu_sched[AMDGPU_HW_IP_VCN_DEC] [AMDGPU_RING_PRIO_DEFAULT].sched; - drm_sched_entity_modify_sched(p->entity, scheds, 1); + drm_sched_entity_modify_sched(job->base.entity, scheds, 1); return 0; } -static int vcn_v3_0_dec_msg(struct amdgpu_cs_parser *p, uint64_t addr) +static int vcn_v3_0_dec_msg(struct amdgpu_cs_parser *p, struct amdgpu_job *job, + uint64_t addr) { struct ttm_operation_ctx ctx = { false, false }; struct amdgpu_bo_va_mapping *map; @@ -1846,7 +1848,7 @@ static int vcn_v3_0_dec_msg(struct amdgpu_cs_parser *p, uint64_t addr) if (create[0] == 0x7 || create[0] == 0x10 || create[0] == 0x11) continue; - r = vcn_v3_0_limit_sched(p); + r = vcn_v3_0_limit_sched(p, job); if (r) goto out; } @@ -1860,7 +1862,7 @@ static int vcn_v3_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p, struct amdgpu_job *job, struct amdgpu_ib *ib) { - struct amdgpu_ring *ring = to_amdgpu_ring(p->entity->rq->sched); + struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched); uint32_t msg_lo = 0, msg_hi = 0; unsigned i; int r; @@ -1879,7 +1881,8 @@ static int vcn_v3_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p, msg_hi = val; } else 
if (reg == PACKET0(p->adev->vcn.internal.cmd, 0) && val == 0) { - r = vcn_v3_0_dec_msg(p, ((u64)msg_hi) << 32 | msg_lo); + r = vcn_v3_0_dec_msg(p, job, + ((u64)msg_hi) << 32 | msg_lo); if (r) return r; } From patchwork Mon Aug 15 18:59:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Christian_K=C3=B6nig?= X-Patchwork-Id: 12943977 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D364EC00140 for ; Mon, 15 Aug 2022 19:00:57 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 93EEBD25A5; Mon, 15 Aug 2022 19:00:35 +0000 (UTC) Received: from mail-ej1-x630.google.com (mail-ej1-x630.google.com [IPv6:2a00:1450:4864:20::630]) by gabe.freedesktop.org (Postfix) with ESMTPS id 30E79D251F for ; Mon, 15 Aug 2022 18:59:51 +0000 (UTC) Received: by mail-ej1-x630.google.com with SMTP id fy5so15059863ejc.3 for ; Mon, 15 Aug 2022 11:59:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc; bh=H9WTXJxC0k6R+cX+DIl2M/kL4d+bV50OWP4NxIVQy5M=; b=loqB4sIZX1aXE5bgF3DuMWIhaaVlJgslrn/nUkEPZGgBgOS0gWF90MamLoIEHf6UZ8 ZKbCb8tYIkQ29LuL21xfeoBQ7CaGu/xAyUB3u67e822T/06UxZwhM2OQndpFhxvk8tL8 fU33Ac4n6g3qxfxWBV8y+k+neEJds+IInVBg6U2C5bbYKJGI7C2dmXHIshCxpmTlmn+5 SFnVHJc9ePADZHQ44xRw8qQB231ZdAEWEL6ZaeRfgB0mtf1FkPJxNwNO4M5odGu/7TkS 0S5ojPgtnYuA0GkG2aRSG/+LqUo+Di8c1ZQyYdcjw3As6kXrpbDXLRV9yg3LXduZHbvy 77Vg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc; bh=H9WTXJxC0k6R+cX+DIl2M/kL4d+bV50OWP4NxIVQy5M=; b=kDZSMO9NhJfWfplfiwZdK45LODuLUwmPswCs6JUL6sRot4rby2/9AYkCjAdta7vgiI 1h3SrW4oG3xCJpHIM+7yX9vWpJklAs0jW7i23DguATDLWybDCIKtxOrSAwbAjG6lqA5u u2H/6rKMzeE3dSYo+Is9u8sIrpD42BibGaNFGEruGWc2ngGEkcqRH2jkimQGCNOY8xQo tjcYMh7JPAhuBge/q6UBq6q1rNgz9om3ULBdWA/aId/uZ+KmcEFdvQQrp+IBEw0CKsRR qXpVCFX623bFzY1L9cuyIuhtalLlRuMtS9jmy/o9sDB+ni1qahFdOgOtBZdQltsTp2rS 3RAA== X-Gm-Message-State: ACgBeo0aTr+N2clDDedS+sG0mjMcUU5QZf8k5OWV4PpYEYkGBvxXW7+o LrV1jBZ5IAocOasXjj2FRxfRuKYjaFM= X-Google-Smtp-Source: AA6agR6gL/Ebe/O7qxgj7YrRO7698sAsToVVoUGcwy2r7LzsBFPpBOBExM2kAKmajdYeiW4unHjgjA== X-Received: by 2002:a17:906:cc0e:b0:731:6844:880a with SMTP id ml14-20020a170906cc0e00b007316844880amr11223645ejb.514.1660589990826; Mon, 15 Aug 2022 11:59:50 -0700 (PDT) Received: from able.fritz.box (p57b0bd9f.dip0.t-ipconnect.de. 
[87.176.189.159]) by smtp.gmail.com with ESMTPSA id d10-20020a170906304a00b00731745a7e62sm3553805ejd.28.2022.08.15.11.59.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 15 Aug 2022 11:59:50 -0700 (PDT) From: " =?utf-8?q?Christian_K=C3=B6nig?= " X-Google-Original-From: =?utf-8?q?Christian_K=C3=B6nig?= To: dri-devel@lists.freedesktop.org Subject: [PATCH 08/10] drm/amdgpu: cleanup instance limit on VCN4 Date: Mon, 15 Aug 2022 20:59:38 +0200 Message-Id: <20220815185940.4744-9-christian.koenig@amd.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220815185940.4744-1-christian.koenig@amd.com> References: <20220815185940.4744-1-christian.koenig@amd.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Christian_K=C3=B6nig?= Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Similar to what we did for VCN3 use the job instead of the parser entity. Cleanup the coding style quite a bit as well. Signed-off-by: Christian König --- drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c | 46 +++++++++++++++------------ 1 file changed, 25 insertions(+), 21 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c index ca14c3ef742e..a59418ff9c65 100644 --- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c @@ -1328,21 +1328,23 @@ static void vcn_v4_0_unified_ring_set_wptr(struct amdgpu_ring *ring) } } -static int vcn_v4_0_limit_sched(struct amdgpu_cs_parser *p) +static int vcn_v4_0_limit_sched(struct amdgpu_cs_parser *p, + struct amdgpu_job *job) { struct drm_gpu_scheduler **scheds; /* The create msg must be in the first IB submitted */ - if (atomic_read(&p->entity->fence_seq)) + if (atomic_read(&job->base.entity->fence_seq)) return -EINVAL; - scheds = p->adev->gpu_sched[AMDGPU_HW_IP_VCN_ENC] - [AMDGPU_RING_PRIO_0].sched; - drm_sched_entity_modify_sched(p->entity, scheds, 1); + scheds = p->adev->gpu_sched[AMDGPU_HW_IP_VCN_DEC] + [AMDGPU_RING_PRIO_DEFAULT].sched; + drm_sched_entity_modify_sched(job->base.entity, scheds, 1); return 0; } -static int vcn_v4_0_dec_msg(struct amdgpu_cs_parser *p, uint64_t addr) +static int vcn_v4_0_dec_msg(struct amdgpu_cs_parser *p, struct amdgpu_job *job, + uint64_t addr) { struct ttm_operation_ctx ctx = { false, false }; struct amdgpu_bo_va_mapping *map; @@ -1413,7 +1415,7 @@ static int vcn_v4_0_dec_msg(struct amdgpu_cs_parser *p, uint64_t addr) if (create[0] == 0x7 || create[0] == 0x10 || create[0] == 0x11) continue; - r = vcn_v4_0_limit_sched(p); + r = vcn_v4_0_limit_sched(p, job); if (r) goto out; } @@ -1426,32 +1428,34 @@ static int vcn_v4_0_dec_msg(struct amdgpu_cs_parser *p, uint64_t addr) #define RADEON_VCN_ENGINE_TYPE_DECODE (0x00000003) static int vcn_v4_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p, - struct amdgpu_job *job, - struct amdgpu_ib *ib) + struct amdgpu_job *job, + struct amdgpu_ib *ib) { - struct amdgpu_ring *ring = to_amdgpu_ring(p->entity->rq->sched); - struct amdgpu_vcn_decode_buffer *decode_buffer = NULL; + struct amdgpu_ring *ring = to_amdgpu_ring(job->base.entity->rq->sched); + struct amdgpu_vcn_decode_buffer *decode_buffer; + uint64_t addr; uint32_t val; - int r = 0; /* The first instance can decode anything */ if (!ring->me) - return r; + return 0; /* unified queue ib header has 8 double words. 
*/ if (ib->length_dw < 8) - return r; + return 0; val = amdgpu_ib_get_value(ib, 6); //RADEON_VCN_ENGINE_TYPE + if (val != RADEON_VCN_ENGINE_TYPE_DECODE) + return 0; - if (val == RADEON_VCN_ENGINE_TYPE_DECODE) { - decode_buffer = (struct amdgpu_vcn_decode_buffer *)&ib->ptr[10]; + decode_buffer = (struct amdgpu_vcn_decode_buffer *)&ib->ptr[10]; - if (decode_buffer->valid_buf_flag & 0x1) - r = vcn_v4_0_dec_msg(p, ((u64)decode_buffer->msg_buffer_address_hi) << 32 | - decode_buffer->msg_buffer_address_lo); - } - return r; + if (!(decode_buffer->valid_buf_flag & 0x1)) + return 0; + + addr = ((u64)decode_buffer->msg_buffer_address_hi) << 32 | + decode_buffer->msg_buffer_address_lo; + return vcn_v4_0_dec_msg(p, job, addr); } static const struct amdgpu_ring_funcs vcn_v4_0_unified_ring_vm_funcs = { From patchwork Mon Aug 15 18:59:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Christian_K=C3=B6nig?= X-Patchwork-Id: 12943981 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 752B8C00140 for ; Mon, 15 Aug 2022 19:01:30 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 1D997D260C; Mon, 15 Aug 2022 19:01:07 +0000 (UTC) Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com [IPv6:2a00:1450:4864:20::636]) by gabe.freedesktop.org (Postfix) with ESMTPS id 38B49D2544 for ; Mon, 15 Aug 2022 18:59:53 +0000 (UTC) Received: by mail-ej1-x636.google.com with SMTP id j8so15037552ejx.9 for ; Mon, 15 Aug 2022 11:59:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc; bh=4Anpw29AJGKoUDi9kQhyNxetXmSQysTXEjYIswTAMko=; b=jGSkpMZX66oaCistjsGZdY/aRx1xBSzl833DxeD6Prs98en6yB4Psm0FG3s/grPKmA VscHb2pVgTP+VrpBP7RZuQWZpvCWXh6zj/1exPHS26YrX+vvxIavu0llImAKG9wOSolz j8gMTli5nENfMdl9ZZzZoll2OjGDWM5u1KO9StG7oveguTU/ElTgRm0Bfds6lZrBAFGR p28aIjZpqpzMbAYT7UvpVtQDg4mQo+EVVNRp67T8bKo1Em5Klf54gfDCopL7LTyj/E6I ebvaPPuyUDaAgme2L7GT5o5hhoq/dL0tuHy5qCUl61R4NNchweyUbnLxRJUbrNWx9MyN tXlw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc; bh=4Anpw29AJGKoUDi9kQhyNxetXmSQysTXEjYIswTAMko=; b=HsqMN29fIAiTRFmrTCafqIDqKEcFTytExe0OrK0QfoMA3b3IGyX48XTG3+h1FB1Rzm hTWOdhXOIvkPe7TmpgoOU9YfLRkqGSU9/slSKUk843CTYZTJawEWYaNWoxYUOjHAYec3 roCsWnCIT6k6HX5giXmjfLrl/3fkah4wXt6BOuNDAIoScicYLmP+PL63Vt2eLEYHiPzn 8nniJARK84gPzeBSh9yelek8jLaNB4d+Gts+AVbtGyi1pqwUTENCz8MVALvrbr3gikZT WgzOxQRmklWqkWqtM0eLVe/Tu9Unt5p7rHXQsrqkGLBhvi7JGarl5lmEzS11IQe4SHtx KcLg== X-Gm-Message-State: ACgBeo1bJK+y/KjCOT9TYvxSnHR3zurfXCRC/rV+Oui9FzdfIF0PFOXN u/pu7lARdSSckWhsHJo+Za+MPAvgA2E= X-Google-Smtp-Source: AA6agR40Hl4fr3TjCk8fbBaVuCADogKvDHa5i8gmKeqAlSf7LprN5Rx9nbxfleigCziHKJywnOjbFw== X-Received: by 2002:a17:907:1690:b0:731:56b6:fded with SMTP id hc16-20020a170907169000b0073156b6fdedmr11509376ejc.119.1660589991648; Mon, 15 Aug 2022 11:59:51 -0700 (PDT) Received: from able.fritz.box (p57b0bd9f.dip0.t-ipconnect.de. 
[87.176.189.159]) by smtp.gmail.com with ESMTPSA id d10-20020a170906304a00b00731745a7e62sm3553805ejd.28.2022.08.15.11.59.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 15 Aug 2022 11:59:51 -0700 (PDT) From: " =?utf-8?q?Christian_K=C3=B6nig?= " X-Google-Original-From: =?utf-8?q?Christian_K=C3=B6nig?= To: dri-devel@lists.freedesktop.org Subject: [PATCH 09/10] drm/amdgpu: add gang submit backend v2 Date: Mon, 15 Aug 2022 20:59:39 +0200 Message-Id: <20220815185940.4744-10-christian.koenig@amd.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220815185940.4744-1-christian.koenig@amd.com> References: <20220815185940.4744-1-christian.koenig@amd.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Christian_K=C3=B6nig?= Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Allows submitting jobs as gang which needs to run on multiple engines at the same time. Basic idea is that we have a global gang submit fence representing when the gang leader is finally pushed to run on the hardware last. Jobs submitted as gang are never re-submitted in case of a GPU reset since this won't work and will just deadlock the hardware immediately again. v2: fix logic inversion, improve documentation, fix rcu Signed-off-by: Christian König --- drivers/gpu/drm/amd/amdgpu/amdgpu.h | 3 ++ drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 35 ++++++++++++++++++++++ drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 28 +++++++++++++++-- drivers/gpu/drm/amd/amdgpu/amdgpu_job.h | 3 ++ 4 files changed, 67 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h index 5a639c857bd0..3ac1e4d05fcb 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h @@ -885,6 +885,7 @@ struct amdgpu_device { u64 fence_context; unsigned num_rings; struct amdgpu_ring *rings[AMDGPU_MAX_RINGS]; + struct dma_fence __rcu *gang_submit; bool ib_pool_ready; struct amdgpu_sa_manager ib_pools[AMDGPU_IB_POOL_MAX]; struct amdgpu_sched gpu_sched[AMDGPU_HW_IP_NUM][AMDGPU_RING_PRIO_MAX]; @@ -1294,6 +1295,8 @@ u32 amdgpu_device_pcie_port_rreg(struct amdgpu_device *adev, u32 reg); void amdgpu_device_pcie_port_wreg(struct amdgpu_device *adev, u32 reg, u32 v); +struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device *adev, + struct dma_fence *gang); /* atpx handler */ #if defined(CONFIG_VGA_SWITCHEROO) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c index c84fdef0ac45..23f2938a1fea 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c @@ -3499,6 +3499,7 @@ int amdgpu_device_init(struct amdgpu_device *adev, adev->gmc.gart_size = 512 * 1024 * 1024; adev->accel_working = false; adev->num_rings = 0; + RCU_INIT_POINTER(adev->gang_submit, dma_fence_get_stub()); adev->mman.buffer_funcs = NULL; adev->mman.buffer_funcs_ring = NULL; adev->vm_manager.vm_pte_funcs = NULL; @@ -3979,6 +3980,7 @@ void amdgpu_device_fini_sw(struct amdgpu_device *adev) release_firmware(adev->firmware.gpu_info_fw); adev->firmware.gpu_info_fw = NULL; adev->accel_working = false; + dma_fence_put(rcu_dereference_protected(adev->gang_submit, true)); amdgpu_reset_fini(adev); @@ -5914,3 +5916,36 @@ void amdgpu_device_pcie_port_wreg(struct amdgpu_device *adev, 
(void)RREG32(data); spin_unlock_irqrestore(&adev->pcie_idx_lock, flags); } + +/** + * amdgpu_device_switch_gang - switch to a new gang + * @adev: amdgpu_device pointer + * @gang: the gang to switch to + * + * Try to switch to a new gang. + * Returns: NULL if we switched to the new gang or a reference to the current + * gang leader. + */ +struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device *adev, + struct dma_fence *gang) +{ + struct dma_fence *old = NULL; + + do { + dma_fence_put(old); + rcu_read_lock(); + old = dma_fence_get_rcu_safe(&adev->gang_submit); + rcu_read_unlock(); + + if (old == gang) + break; + + if (!dma_fence_is_signaled(old)) + return old; + + } while (cmpxchg((struct dma_fence __force **)&adev->gang_submit, + old, gang) != old); + + dma_fence_put(old); + return NULL; +} diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c index 2348beea6a2e..e4b791cdda2c 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c @@ -173,11 +173,29 @@ static void amdgpu_job_free_cb(struct drm_sched_job *s_job) dma_fence_put(&job->hw_fence); } +void amdgpu_job_set_gang_leader(struct amdgpu_job *job, + struct amdgpu_job *leader) +{ + struct dma_fence *fence = &leader->base.s_fence->scheduled; + + WARN_ON(job->gang_submit); + + /* + * Don't add a reference when we are the gang leader to avoid circle + * dependency. + */ + if (job != leader) + dma_fence_get(fence); + job->gang_submit = fence; +} + void amdgpu_job_free(struct amdgpu_job *job) { amdgpu_job_free_resources(job); amdgpu_sync_free(&job->sync); amdgpu_sync_free(&job->sched_sync); + if (job->gang_submit != &job->base.s_fence->scheduled) + dma_fence_put(job->gang_submit); dma_fence_put(&job->hw_fence); } @@ -244,12 +262,16 @@ static struct dma_fence *amdgpu_job_dependency(struct drm_sched_job *sched_job, fence = amdgpu_sync_get_fence(&job->sync); } + if (!fence && job->gang_submit) + fence = amdgpu_device_switch_gang(ring->adev, job->gang_submit); + return fence; } static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job) { struct amdgpu_ring *ring = to_amdgpu_ring(sched_job->sched); + struct amdgpu_device *adev = ring->adev; struct dma_fence *fence = NULL, *finished; struct amdgpu_job *job; int r = 0; @@ -261,8 +283,10 @@ static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job) trace_amdgpu_sched_run_job(job); - if (job->vram_lost_counter != atomic_read(&ring->adev->vram_lost_counter)) - dma_fence_set_error(finished, -ECANCELED);/* skip IB as well if VRAM lost */ + /* Skip job if VRAM is lost and never resubmit gangs */ + if (job->vram_lost_counter != atomic_read(&adev->vram_lost_counter) || + (job->job_run_counter && job->gang_submit)) + dma_fence_set_error(finished, -ECANCELED); if (finished->error < 0) { DRM_INFO("Skip scheduling IBs!\n"); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h index 2a1961bf1194..4763081eb6bc 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h @@ -50,6 +50,7 @@ struct amdgpu_job { struct amdgpu_sync sync; struct amdgpu_sync sched_sync; struct dma_fence hw_fence; + struct dma_fence *gang_submit; uint32_t preamble_status; uint32_t preemption_status; bool vm_needs_flush; @@ -79,6 +80,8 @@ int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev, unsigned size, void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds, struct amdgpu_bo *gws, struct amdgpu_bo *oa); void 
amdgpu_job_free_resources(struct amdgpu_job *job); +void amdgpu_job_set_gang_leader(struct amdgpu_job *job, + struct amdgpu_job *leader); void amdgpu_job_free(struct amdgpu_job *job); int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity, void *owner, struct dma_fence **f); From patchwork Mon Aug 15 18:59:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Christian_K=C3=B6nig?= X-Patchwork-Id: 12943976 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 998BFC25B0E for ; Mon, 15 Aug 2022 19:00:50 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 090BDD259F; Mon, 15 Aug 2022 19:00:35 +0000 (UTC) Received: from mail-ed1-x529.google.com (mail-ed1-x529.google.com [IPv6:2a00:1450:4864:20::529]) by gabe.freedesktop.org (Postfix) with ESMTPS id 2EFFDD2563 for ; Mon, 15 Aug 2022 18:59:54 +0000 (UTC) Received: by mail-ed1-x529.google.com with SMTP id w3so10724615edc.2 for ; Mon, 15 Aug 2022 11:59:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc; bh=ZUBwAN7jOcjh1b7YeRiG+gYCX7eh7YBVKIzEHIFMfHM=; b=Tjfpf2ah8DAK01B0VdXqbBMag4keDgUFbJhaVEy0ir+HAK2Y8F67+UrfiwE14nO27S 91zFbKqp4yMOCNfnD76jNH0b1oDO4rG2J3RsDYHsj0smpdCEk0BKPnayNS6kIKy+pV/2 Nf9mL3c0J22GbKzGHcTK49HwhesQAWbZa1pOpyggKYX7FxulQiR7XXJ22iOwmCTaB2b3 THBA0MXQt/GuMDpNP6lUL1PAFB9Ymwqw2q7kpSo3/qrvz4kYJsf3FkoZLDcfxiaO0EJ2 FvCk/Ne1Bj6Rjjti0vIRRLZA6Td04ZYmEh8oK4HUYx6Hi38mha74yCoAljPdDAAgSRND rHng== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc; bh=ZUBwAN7jOcjh1b7YeRiG+gYCX7eh7YBVKIzEHIFMfHM=; b=qUYitI87bd1WdlG2hBBlWj4I3vb/SMklMOsfNaZj2WKEur9ewUo9pGOg9Y8BVh3xqh m1hFh7C6GQPD85dlZKznn4gNqsGuUDhlFZFX44q5aiLdaiuAyM8spX70dG8COjgNX22j XSOw5Be3A7YjNPYmjqkaGXwt2PxW8S8kfNkm8Okb3v73c7WTi27tX2U/k3xuiQncgXWG yXrd0pzAp3LYtZTJm6WCfJzEXEN/ujdN1f/VI8A41pSD7Lk79zPG3sG0o5m3f203lhZX EaEe5cm5NXDAqMBUg/DcQgPBW7dgaLwE7VOe+RKMxngBfvuQQjxhR4WdVMJQrEhUjJ1U ZbHA== X-Gm-Message-State: ACgBeo0VJuuyal5BgmzNHKwxMyRSkc9glgbtOiN5Iwjd/JUMs1vR9VRQ PGb1vccyv5Xnj/F4rtG67KCSBRghuIE= X-Google-Smtp-Source: AA6agR6c3p+Fg05p043BGg/GJEq/umBgjCQsgdRv4yR2rBO2fr4aOk/2W89yd/Q86C7ygUliR2FloQ== X-Received: by 2002:a05:6402:4282:b0:43e:612c:fcf7 with SMTP id g2-20020a056402428200b0043e612cfcf7mr15257719edc.242.1660589992641; Mon, 15 Aug 2022 11:59:52 -0700 (PDT) Received: from able.fritz.box (p57b0bd9f.dip0.t-ipconnect.de. 
[87.176.189.159]) by smtp.gmail.com with ESMTPSA id d10-20020a170906304a00b00731745a7e62sm3553805ejd.28.2022.08.15.11.59.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 15 Aug 2022 11:59:52 -0700 (PDT) From: " =?utf-8?q?Christian_K=C3=B6nig?= " X-Google-Original-From: =?utf-8?q?Christian_K=C3=B6nig?= To: dri-devel@lists.freedesktop.org Subject: [PATCH 10/10] drm/amdgpu: add gang submit frontend v3 Date: Mon, 15 Aug 2022 20:59:40 +0200 Message-Id: <20220815185940.4744-11-christian.koenig@amd.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220815185940.4744-1-christian.koenig@amd.com> References: <20220815185940.4744-1-christian.koenig@amd.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: =?utf-8?q?Christian_K=C3=B6nig?= Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Allows submitting jobs as gang which needs to run on multiple engines at the same time. All members of the gang get the same implicit, explicit and VM dependencies. So no gang member will start running until everything else is ready. The last job is considered the gang leader (usually a submission to the GFX ring) and used for signaling output dependencies. Each job is remembered individually as user of a buffer object, so there is no joining of work at the end. v2: rebase and fix review comments from Andrey and Yogesh v3: use READ instead of BOOKKEEP for now because of VM unmaps, set gang leader only when necessary Signed-off-by: Christian König --- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 258 ++++++++++++++-------- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h | 10 +- drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h | 12 +- 3 files changed, 185 insertions(+), 95 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index 88f491dc7ca2..21f0a6c08eb4 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -69,6 +69,7 @@ static int amdgpu_cs_p1_ib(struct amdgpu_cs_parser *p, unsigned int *num_ibs) { struct drm_sched_entity *entity; + unsigned int i; int r; r = amdgpu_ctx_get_entity(p->ctx, chunk_ib->ip_type, @@ -77,17 +78,28 @@ static int amdgpu_cs_p1_ib(struct amdgpu_cs_parser *p, if (r) return r; - /* Abort if there is no run queue associated with this entity. - * Possibly because of disabled HW IP*/ + /* + * Abort if there is no run queue associated with this entity. + * Possibly because of disabled HW IP. 
+ */ if (entity->rq == NULL) return -EINVAL; - /* Currently we don't support submitting to multiple entities */ - if (p->entity && p->entity != entity) + /* Check if we can add this IB to some existing job */ + for (i = 0; i < p->gang_size; ++i) { + if (p->entities[i] == entity) + goto found; + } + + /* If not increase the gang size if possible */ + if (i == AMDGPU_CS_GANG_SIZE) return -EINVAL; - p->entity = entity; - ++(*num_ibs); + p->entities[i] = entity; + p->gang_size = i + 1; + +found: + ++(num_ibs[i]); return 0; } @@ -161,11 +173,12 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p, union drm_amdgpu_cs *cs) { struct amdgpu_fpriv *fpriv = p->filp->driver_priv; + unsigned int num_ibs[AMDGPU_CS_GANG_SIZE] = { }; struct amdgpu_vm *vm = &fpriv->vm; uint64_t *chunk_array_user; uint64_t *chunk_array; - unsigned size, num_ibs = 0; uint32_t uf_offset = 0; + unsigned int size; int ret; int i; @@ -231,7 +244,7 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p, if (size < sizeof(struct drm_amdgpu_cs_chunk_ib)) goto free_partial_kdata; - ret = amdgpu_cs_p1_ib(p, p->chunks[i].kdata, &num_ibs); + ret = amdgpu_cs_p1_ib(p, p->chunks[i].kdata, num_ibs); if (ret) goto free_partial_kdata; break; @@ -268,21 +281,28 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p, } } - ret = amdgpu_job_alloc(p->adev, num_ibs, &p->job, vm); - if (ret) - goto free_all_kdata; + if (!p->gang_size) + return -EINVAL; - ret = drm_sched_job_init(&p->job->base, p->entity, &fpriv->vm); - if (ret) - goto free_all_kdata; + for (i = 0; i < p->gang_size; ++i) { + ret = amdgpu_job_alloc(p->adev, num_ibs[i], &p->jobs[i], vm); + if (ret) + goto free_all_kdata; + + ret = drm_sched_job_init(&p->jobs[i]->base, p->entities[i], + &fpriv->vm); + if (ret) + goto free_all_kdata; + } + p->gang_leader = p->jobs[p->gang_size - 1]; - if (p->ctx->vram_lost_counter != p->job->vram_lost_counter) { + if (p->ctx->vram_lost_counter != p->gang_leader->vram_lost_counter) { ret = -ECANCELED; goto free_all_kdata; } if (p->uf_entry.tv.bo) - p->job->uf_addr = uf_offset; + p->gang_leader->uf_addr = uf_offset; kvfree(chunk_array); /* Use this opportunity to fill in task info for the vm */ @@ -304,22 +324,18 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p, return ret; } -static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p, - struct amdgpu_cs_chunk *chunk, - unsigned int *num_ibs, - unsigned int *ce_preempt, - unsigned int *de_preempt) +static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p, struct amdgpu_job *job, + struct amdgpu_ib *ib, struct amdgpu_cs_chunk *chunk, + unsigned int *ce_preempt, unsigned int *de_preempt) { - struct amdgpu_ring *ring = to_amdgpu_ring(p->job->base.sched); + struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched); struct drm_amdgpu_cs_chunk_ib *chunk_ib = chunk->kdata; struct amdgpu_fpriv *fpriv = p->filp->driver_priv; - struct amdgpu_ib *ib = &p->job->ibs[*num_ibs]; struct amdgpu_vm *vm = &fpriv->vm; int r; - /* MM engine doesn't support user fences */ - if (p->job->uf_addr && ring->funcs->no_user_fence) + if (job->uf_addr && ring->funcs->no_user_fence) return -EINVAL; if (chunk_ib->ip_type == AMDGPU_HW_IP_GFX && @@ -336,7 +352,7 @@ static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p, } if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE) - p->job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT; + job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT; r = amdgpu_ib_get(p->adev, vm, ring->funcs->parse_cs ? 
chunk_ib->ib_bytes : 0, @@ -349,8 +365,6 @@ static int amdgpu_cs_p2_ib(struct amdgpu_cs_parser *p, ib->gpu_addr = chunk_ib->va_start; ib->length_dw = chunk_ib->ib_bytes / 4; ib->flags = chunk_ib->flags; - - (*num_ibs)++; return 0; } @@ -399,7 +413,7 @@ static int amdgpu_cs_p2_dependencies(struct amdgpu_cs_parser *p, dma_fence_put(old); } - r = amdgpu_sync_fence(&p->job->sync, fence); + r = amdgpu_sync_fence(&p->gang_leader->sync, fence); dma_fence_put(fence); if (r) return r; @@ -421,7 +435,7 @@ static int amdgpu_syncobj_lookup_and_add(struct amdgpu_cs_parser *p, return r; } - r = amdgpu_sync_fence(&p->job->sync, fence); + r = amdgpu_sync_fence(&p->gang_leader->sync, fence); dma_fence_put(fence); return r; @@ -544,20 +558,30 @@ static int amdgpu_cs_p2_syncobj_timeline_signal(struct amdgpu_cs_parser *p, static int amdgpu_cs_pass2(struct amdgpu_cs_parser *p) { - unsigned int num_ibs = 0, ce_preempt = 0, de_preempt = 0; + unsigned int ce_preempt = 0, de_preempt = 0; + unsigned int job_idx = 0, ib_idx = 0; int i, r; for (i = 0; i < p->nchunks; ++i) { struct amdgpu_cs_chunk *chunk; + struct amdgpu_job *job; chunk = &p->chunks[i]; switch (chunk->chunk_id) { case AMDGPU_CHUNK_ID_IB: - r = amdgpu_cs_p2_ib(p, chunk, &num_ibs, + job = p->jobs[job_idx]; + r = amdgpu_cs_p2_ib(p, job, &job->ibs[ib_idx], chunk, &ce_preempt, &de_preempt); if (r) return r; + + if (++ib_idx == job->num_ibs) { + ++job_idx; + ib_idx = 0; + ce_preempt = 0; + de_preempt = 0; + } break; case AMDGPU_CHUNK_ID_DEPENDENCIES: case AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES: @@ -828,6 +852,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, struct amdgpu_vm *vm = &fpriv->vm; struct amdgpu_bo_list_entry *e; struct list_head duplicates; + unsigned int i; int r; INIT_LIST_HEAD(&p->validated); @@ -911,16 +936,6 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, e->bo_va = amdgpu_vm_bo_find(vm, bo); } - /* Move fence waiting after getting reservation lock of - * PD root. Then there is no need on a ctx mutex lock. 
- */ - r = amdgpu_ctx_wait_prev_fence(p->ctx, p->entity); - if (unlikely(r != 0)) { - if (r != -ERESTARTSYS) - DRM_ERROR("amdgpu_ctx_wait_prev_fence failed.\n"); - goto error_validate; - } - amdgpu_cs_get_threshold_for_moves(p->adev, &p->bytes_moved_threshold, &p->bytes_moved_vis_threshold); p->bytes_moved = 0; @@ -944,8 +959,10 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, amdgpu_cs_report_moved_bytes(p->adev, p->bytes_moved, p->bytes_moved_vis); - amdgpu_job_set_resources(p->job, p->bo_list->gds_obj, - p->bo_list->gws_obj, p->bo_list->oa_obj); + for (i = 0; i < p->gang_size; ++i) + amdgpu_job_set_resources(p->jobs[i], p->bo_list->gds_obj, + p->bo_list->gws_obj, + p->bo_list->oa_obj); if (p->uf_entry.tv.bo) { struct amdgpu_bo *uf = ttm_to_amdgpu_bo(p->uf_entry.tv.bo); @@ -954,7 +971,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, if (r) goto error_validate; - p->job->uf_addr += amdgpu_bo_gpu_offset(uf); + p->gang_leader->uf_addr += amdgpu_bo_gpu_offset(uf); } return 0; @@ -975,20 +992,24 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p, return r; } -static void trace_amdgpu_cs_ibs(struct amdgpu_cs_parser *parser) +static void trace_amdgpu_cs_ibs(struct amdgpu_cs_parser *p) { - int i; + int i, j; if (!trace_amdgpu_cs_enabled()) return; - for (i = 0; i < parser->job->num_ibs; i++) - trace_amdgpu_cs(parser, i); + for (i = 0; i < p->gang_size; ++i) { + struct amdgpu_job *job = p->jobs[i]; + + for (j = 0; j < job->num_ibs; ++j) + trace_amdgpu_cs(p, job, &job->ibs[j]); + } } -static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p) +static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p, + struct amdgpu_job *job) { - struct amdgpu_job *job = p->job; struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched); unsigned int i; int r; @@ -1029,12 +1050,12 @@ static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p) memcpy(ib->ptr, kptr, ib->length_dw * 4); amdgpu_bo_kunmap(aobj); - r = amdgpu_ring_parse_cs(ring, p, p->job, ib); + r = amdgpu_ring_parse_cs(ring, p, job, ib); if (r) return r; } else { ib->ptr = (uint32_t *)kptr; - r = amdgpu_ring_patch_cs_in_place(ring, p, p->job, ib); + r = amdgpu_ring_patch_cs_in_place(ring, p, job, ib); amdgpu_bo_kunmap(aobj); if (r) return r; @@ -1044,14 +1065,29 @@ static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p) return 0; } +static int amdgpu_cs_patch_jobs(struct amdgpu_cs_parser *p) +{ + unsigned int i; + int r; + + for (i = 0; i < p->gang_size; ++i) { + r = amdgpu_cs_patch_ibs(p, p->jobs[i]); + if (r) + return r; + } + return 0; +} + static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p) { struct amdgpu_fpriv *fpriv = p->filp->driver_priv; + struct amdgpu_job *job = p->gang_leader; struct amdgpu_device *adev = p->adev; struct amdgpu_vm *vm = &fpriv->vm; struct amdgpu_bo_list_entry *e; struct amdgpu_bo_va *bo_va; struct amdgpu_bo *bo; + unsigned int i; int r; r = amdgpu_vm_clear_freed(adev, vm, NULL); @@ -1062,7 +1098,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p) if (r) return r; - r = amdgpu_sync_fence(&p->job->sync, fpriv->prt_va->last_pt_update); + r = amdgpu_sync_fence(&job->sync, fpriv->prt_va->last_pt_update); if (r) return r; @@ -1072,7 +1108,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p) if (r) return r; - r = amdgpu_sync_fence(&p->job->sync, bo_va->last_pt_update); + r = amdgpu_sync_fence(&job->sync, bo_va->last_pt_update); if (r) return r; } @@ -1091,7 +1127,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p) if (r) return r; - r = 
amdgpu_sync_fence(&p->job->sync, bo_va->last_pt_update); + r = amdgpu_sync_fence(&job->sync, bo_va->last_pt_update); if (r) return r; } @@ -1104,11 +1140,18 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p) if (r) return r; - r = amdgpu_sync_fence(&p->job->sync, vm->last_update); + r = amdgpu_sync_fence(&job->sync, vm->last_update); if (r) return r; - p->job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.bo); + for (i = 0; i < p->gang_size; ++i) { + job = p->jobs[i]; + + if (!job->vm) + continue; + + job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.bo); + } if (amdgpu_vm_debug) { /* Invalidate all BOs to test for userspace bugs */ @@ -1129,7 +1172,9 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p) static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p) { struct amdgpu_fpriv *fpriv = p->filp->driver_priv; + struct amdgpu_job *leader = p->gang_leader; struct amdgpu_bo_list_entry *e; + unsigned int i; int r; list_for_each_entry(e, &p->validated, tv.head) { @@ -1139,12 +1184,23 @@ static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p) sync_mode = amdgpu_bo_explicit_sync(bo) ? AMDGPU_SYNC_EXPLICIT : AMDGPU_SYNC_NE_OWNER; - r = amdgpu_sync_resv(p->adev, &p->job->sync, resv, sync_mode, + r = amdgpu_sync_resv(p->adev, &leader->sync, resv, sync_mode, &fpriv->vm); if (r) return r; } - return 0; + + for (i = 0; i < p->gang_size - 1; ++i) { + r = amdgpu_sync_clone(&leader->sync, &p->jobs[i]->sync); + if (r) + return r; + } + + r = amdgpu_ctx_wait_prev_fence(p->ctx, p->entities[p->gang_size - 1]); + if (r && r != -ERESTARTSYS) + DRM_ERROR("amdgpu_ctx_wait_prev_fence failed.\n"); + + return r; } static void amdgpu_cs_post_dependencies(struct amdgpu_cs_parser *p) @@ -1168,16 +1224,28 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p, union drm_amdgpu_cs *cs) { struct amdgpu_fpriv *fpriv = p->filp->driver_priv; - struct drm_sched_entity *entity = p->entity; + struct amdgpu_job *leader = p->gang_leader; struct amdgpu_bo_list_entry *e; - struct amdgpu_job *job; + unsigned int i; uint64_t seq; int r; - job = p->job; - p->job = NULL; + for (i = 0; i < p->gang_size; ++i) + drm_sched_job_arm(&p->jobs[i]->base); - drm_sched_job_arm(&job->base); + for (i = 0; i < (p->gang_size - 1); ++i) { + struct dma_fence *fence; + + fence = &p->jobs[i]->base.s_fence->scheduled; + r = amdgpu_sync_fence(&leader->sync, fence); + if (r) + goto error_cleanup; + } + + if (p->gang_size > 1) { + for (i = 0; i < p->gang_size; ++i) + amdgpu_job_set_gang_leader(p->jobs[i], leader); + } /* No memory allocation is allowed while holding the notifier lock. 
* The lock is held until amdgpu_cs_submit is finished and fence is @@ -1195,45 +1263,60 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p, } if (r) { r = -EAGAIN; - goto error_abort; + goto error_unlock; } - p->fence = dma_fence_get(&job->base.s_fence->finished); + p->fence = dma_fence_get(&leader->base.s_fence->finished); - seq = amdgpu_ctx_add_fence(p->ctx, entity, p->fence); + seq = amdgpu_ctx_add_fence(p->ctx, p->entities[p->gang_size - 1], + p->fence); amdgpu_cs_post_dependencies(p); - if ((job->preamble_status & AMDGPU_PREAMBLE_IB_PRESENT) && + if ((leader->preamble_status & AMDGPU_PREAMBLE_IB_PRESENT) && !p->ctx->preamble_presented) { - job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST; + leader->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST; p->ctx->preamble_presented = true; } cs->out.handle = seq; - job->uf_sequence = seq; - - amdgpu_job_free_resources(job); + leader->uf_sequence = seq; - trace_amdgpu_cs_ioctl(job); amdgpu_vm_bo_trace_cs(&fpriv->vm, &p->ticket); - drm_sched_entity_push_job(&job->base); + for (i = 0; i < p->gang_size; ++i) { + amdgpu_job_free_resources(p->jobs[i]); + trace_amdgpu_cs_ioctl(p->jobs[i]); + drm_sched_entity_push_job(&p->jobs[i]->base); + } amdgpu_vm_move_to_lru_tail(p->adev, &fpriv->vm); - /* Make sure all BOs are remembered as writers */ - amdgpu_bo_list_for_each_entry(e, p->bo_list) - e->tv.num_shared = 0; + list_for_each_entry(e, &p->validated, tv.head) { + + /* Everybody except for the gang leader uses READ */ + for (i = 0; i < (p->gang_size - 1); ++i) { + dma_resv_add_fence(e->tv.bo->base.resv, + &p->jobs[i]->base.s_fence->finished, + DMA_RESV_USAGE_READ); + } + /* The gang leader as remembered as writer */ + e->tv.num_shared = 0; + } ttm_eu_fence_buffer_objects(&p->ticket, &p->validated, p->fence); + + for (i = 0; i < p->gang_size; ++i) + p->jobs[i] = NULL; + mutex_unlock(&p->adev->notifier_lock); mutex_unlock(&p->bo_list->bo_list_mutex); - return 0; -error_abort: - drm_sched_job_cleanup(&job->base); +error_unlock: mutex_unlock(&p->adev->notifier_lock); - amdgpu_job_free(job); + +error_cleanup: + for (i = 0; i < p->gang_size; ++i) + drm_sched_job_cleanup(&p->jobs[i]->base); return r; } @@ -1250,17 +1333,18 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser) dma_fence_put(parser->fence); - if (parser->ctx) { + if (parser->ctx) amdgpu_ctx_put(parser->ctx); - } if (parser->bo_list) amdgpu_bo_list_put(parser->bo_list); for (i = 0; i < parser->nchunks; i++) kvfree(parser->chunks[i].kdata); kvfree(parser->chunks); - if (parser->job) - amdgpu_job_free(parser->job); + for (i = 0; i < parser->gang_size; ++i) { + if (parser->jobs[i]) + amdgpu_job_free(parser->jobs[i]); + } if (parser->uf_entry.tv.bo) { struct amdgpu_bo *uf = ttm_to_amdgpu_bo(parser->uf_entry.tv.bo); @@ -1304,7 +1388,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) goto error_fini; } - r = amdgpu_cs_patch_ibs(&parser); + r = amdgpu_cs_patch_jobs(&parser); if (r) goto error_backoff; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h index 652b5593499f..cbaa19b2b8a3 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.h @@ -27,6 +27,8 @@ #include "amdgpu_bo_list.h" #include "amdgpu_ring.h" +#define AMDGPU_CS_GANG_SIZE 4 + struct amdgpu_bo_va_mapping; struct amdgpu_cs_chunk { @@ -50,9 +52,11 @@ struct amdgpu_cs_parser { unsigned nchunks; struct amdgpu_cs_chunk *chunks; - /* scheduler job object */ - struct drm_sched_entity *entity; - 
struct amdgpu_job *job; + /* scheduler job objects */ + unsigned int gang_size; + struct drm_sched_entity *entities[AMDGPU_CS_GANG_SIZE]; + struct amdgpu_job *jobs[AMDGPU_CS_GANG_SIZE]; + struct amdgpu_job *gang_leader; /* buffer objects */ struct ww_acquire_ctx ticket; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h index 06dfcf297a8d..5e6ddc7e101c 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h @@ -140,8 +140,10 @@ TRACE_EVENT(amdgpu_bo_create, ); TRACE_EVENT(amdgpu_cs, - TP_PROTO(struct amdgpu_cs_parser *p, int i), - TP_ARGS(p, i), + TP_PROTO(struct amdgpu_cs_parser *p, + struct amdgpu_job *job, + struct amdgpu_ib *ib), + TP_ARGS(p, job, ib), TP_STRUCT__entry( __field(struct amdgpu_bo_list *, bo_list) __field(u32, ring) @@ -151,10 +153,10 @@ TRACE_EVENT(amdgpu_cs, TP_fast_assign( __entry->bo_list = p->bo_list; - __entry->ring = to_amdgpu_ring(p->entity->rq->sched)->idx; - __entry->dw = p->job->ibs[i].length_dw; + __entry->ring = to_amdgpu_ring(job->base.sched)->idx; + __entry->dw = ib->length_dw; __entry->fences = amdgpu_fence_count_emitted( - to_amdgpu_ring(p->entity->rq->sched)); + to_amdgpu_ring(job->base.sched)); ), TP_printk("bo_list=%p, ring=%u, dw=%u, fences=%u", __entry->bo_list, __entry->ring, __entry->dw,
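
[Editor's note, not part of the patch] For readers unfamiliar with the CS uapi, a rough, purely illustrative userspace sketch of what a two-member gang submission could look like after this series: two AMDGPU_CHUNK_ID_IB chunks targeting different engines passed in a single CS ioctl. The file descriptor, context id, BO list handle and IB GPU addresses are placeholder assumptions, error and buffer handling are omitted, and the include path may differ between installed kernel headers and libdrm.

/*
 * Illustrative sketch only: submit one compute IB and one GFX IB as a
 * gang through the amdgpu CS ioctl. fd, ctx_id, bo_list and the IB GPU
 * virtual addresses/sizes are placeholders assumed to be set up
 * elsewhere; no error or BO handling is shown.
 */
#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/amdgpu_drm.h>	/* or libdrm's copy of amdgpu_drm.h */

static int submit_gang(int fd, uint32_t ctx_id, uint32_t bo_list,
		       uint64_t compute_va, uint32_t compute_bytes,
		       uint64_t gfx_va, uint32_t gfx_bytes, uint64_t *seq)
{
	struct drm_amdgpu_cs_chunk_ib ibs[2];
	struct drm_amdgpu_cs_chunk chunks[2];
	uint64_t chunk_ptrs[2];
	union drm_amdgpu_cs cs;
	unsigned int i;

	memset(ibs, 0, sizeof(ibs));
	memset(chunks, 0, sizeof(chunks));
	memset(&cs, 0, sizeof(cs));

	/* First gang member: a compute IB. */
	ibs[0].ip_type = AMDGPU_HW_IP_COMPUTE;
	ibs[0].va_start = compute_va;
	ibs[0].ib_bytes = compute_bytes;

	/* Last gang member: the GFX IB, whose job becomes the gang leader. */
	ibs[1].ip_type = AMDGPU_HW_IP_GFX;
	ibs[1].va_start = gfx_va;
	ibs[1].ib_bytes = gfx_bytes;

	for (i = 0; i < 2; i++) {
		/* One IB chunk per gang member, each pointing at an IB above. */
		chunks[i].chunk_id = AMDGPU_CHUNK_ID_IB;
		chunks[i].length_dw = sizeof(ibs[i]) / 4;
		chunks[i].chunk_data = (uintptr_t)&ibs[i];
		chunk_ptrs[i] = (uintptr_t)&chunks[i];
	}

	cs.in.ctx_id = ctx_id;
	cs.in.bo_list_handle = bo_list;
	cs.in.num_chunks = 2;
	cs.in.chunks = (uintptr_t)chunk_ptrs;	/* array of pointers to chunks */

	if (ioctl(fd, DRM_IOCTL_AMDGPU_CS, &cs))
		return -errno;

	/* Only the gang leader's finished fence is exposed as the result. */
	*seq = cs.out.handle;
	return 0;
}

Before this series such a submission would be rejected with -EINVAL in amdgpu_cs_p1_ib(); with it, the two IBs become separate jobs of one gang that share the leader's dependencies, and cs.out.handle refers to the gang leader's fence.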