From patchwork Fri Nov 5 19:38:47 2021
X-Patchwork-Submitter: Zack Rusin
X-Patchwork-Id: 12605309
From: Zack Rusin
To:
Subject: [PATCH v2 6/6] drm/ttm: Clarify that the TTM_PL_SYSTEM buffers need to stay idle
Date: Fri, 5 Nov 2021 15:38:47 -0400
Message-ID: <20211105193845.258816-7-zackr@vmware.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20211105193845.258816-1-zackr@vmware.com>
References:
<20211105193845.258816-1-zackr@vmware.com>
Cc: Thomas Hellström, Christian König

TTM was designed to allow TTM_PL_SYSTEM buffers to be fenced, but over the
years the code that was meant to handle it was broken, and new changes can
not deal with buffers which have been placed in TTM_PL_SYSTEM but do not
remain idle.

CPU buffers which need to be fenced and shared with accelerators should
be placed in driver-specific placements that can explicitly handle
CPU/accelerator buffer fencing. Currently, apart from things silently
failing, nothing enforces that requirement, which means that it's easy
for drivers and new developers to get this wrong. To avoid the confusion
we can document this requirement and clarify the solution.

This came up during a discussion on dri-devel:
https://lore.kernel.org/dri-devel/232f45e9-8748-1243-09bf-56763e6668b3@amd.com

Signed-off-by: Zack Rusin
Cc: Christian König
Cc: Thomas Hellström
---
 include/drm/ttm/ttm_placement.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/drm/ttm/ttm_placement.h b/include/drm/ttm/ttm_placement.h
index 76d1b9119a2b..89dfb58ff199 100644
--- a/include/drm/ttm/ttm_placement.h
+++ b/include/drm/ttm/ttm_placement.h
@@ -35,6 +35,16 @@
 
 /*
  * Memory regions for data placement.
+ *
+ * Due to the fact that TTM_PL_SYSTEM BO's can be accessed by the hardware
+ * and are not directly evictable they're handled slightly differently
+ * from other placements. The most important and driver visible side-effect
+ * of that is that TTM_PL_SYSTEM BO's are not allowed to be fenced and have
+ * to remain idle. For BO's which reside in system memory but for which
+ * the accelerator requires direct access (i.e. their usage needs to be
+ * synchronized between the CPU and accelerator via fences) a new, driver
+ * private placement should be introduced that can handle such scenarios.
+ *
  */
 
 #define TTM_PL_SYSTEM		0