
[v2,6/6] drm/ttm: Clarify that the TTM_PL_SYSTEM buffers need to stay idle

Message ID 20211105193845.258816-7-zackr@vmware.com (mailing list archive)
State New, archived
Series Support module unload and hotunplug

Commit Message

Zack Rusin Nov. 5, 2021, 7:38 p.m. UTC
TTM was designed to allow TTM_PL_SYSTEM buffers to be fenced, but over
the years the code that was meant to handle that has broken, and new
changes cannot deal with buffers which have been placed in
TTM_PL_SYSTEM but do not remain idle.
CPU buffers which need to be fenced and shared with accelerators should
be placed in driver-specific placements that can explicitly handle
CPU/accelerator buffer fencing.
Currently, apart from things silently failing, nothing enforces that
requirement, which means it is easy for drivers and new developers to
get this wrong. To avoid the confusion, we can document this
requirement and clarify the solution.
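
For illustration only (this is not part of the patch; the
DRV_PL_FENCED_SYS name and the placement below are made up), such a
driver-private placement could be built on top of TTM_PL_PRIV:

#define DRV_PL_FENCED_SYS	TTM_PL_PRIV

static const struct ttm_place drv_fenced_sys_place = {
	.fpfn = 0,
	.lpfn = 0,				/* no range restriction */
	.mem_type = DRV_PL_FENCED_SYS,		/* driver-private domain */
	.flags = 0,
};

static struct ttm_placement drv_fenced_sys_placement = {
	.num_placement = 1,
	.placement = &drv_fenced_sys_place,
	.num_busy_placement = 1,
	.busy_placement = &drv_fenced_sys_place,
};

BOs validated into such a placement are driver-managed, so the driver
can attach fences to them without running afoul of the rule documented
here.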

This came up during a discussion on dri-devel:
https://lore.kernel.org/dri-devel/232f45e9-8748-1243-09bf-56763e6668b3@amd.com

Signed-off-by: Zack Rusin <zackr@vmware.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 include/drm/ttm/ttm_placement.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

Comments

Christian König Nov. 8, 2021, 11:07 a.m. UTC | #1
Am 05.11.21 um 20:38 schrieb Zack Rusin:
> TTM was designed to allow TTM_PL_SYSTEM buffers to be fenced, but over
> the years the code that was meant to handle that has broken, and new
> changes cannot deal with buffers which have been placed in
> TTM_PL_SYSTEM but do not remain idle.
> CPU buffers which need to be fenced and shared with accelerators should
> be placed in driver-specific placements that can explicitly handle
> CPU/accelerator buffer fencing.
> Currently, apart from things silently failing, nothing enforces that
> requirement, which means it is easy for drivers and new developers to
> get this wrong. To avoid the confusion, we can document this
> requirement and clarify the solution.
>
> This came up during a discussion on dri-devel:
> https://lore.kernel.org/dri-devel/232f45e9-8748-1243-09bf-56763e6668b3@amd.com
>
> Signed-off-by: Zack Rusin <zackr@vmware.com>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
>   include/drm/ttm/ttm_placement.h | 10 ++++++++++
>   1 file changed, 10 insertions(+)
>
> diff --git a/include/drm/ttm/ttm_placement.h b/include/drm/ttm/ttm_placement.h
> index 76d1b9119a2b..89dfb58ff199 100644
> --- a/include/drm/ttm/ttm_placement.h
> +++ b/include/drm/ttm/ttm_placement.h
> @@ -35,6 +35,16 @@
>   
>   /*
>    * Memory regions for data placement.
> + *
> + * Due to the fact that TTM_PL_SYSTEM BO's can be accessed by the hardware
> + * and are not directly evictable they're handled slightly differently
> + * from other placements. The most important and driver visible side-effect
> + * of that is that TTM_PL_SYSTEM BO's are not allowed to be fenced and have
> + * to remain idle. For BO's which reside in system memory but for which
> + * the accelerator requires direct access (i.e. their usage needs to be
> + * synchronized between the CPU and accelerator via fences) a new, driver
> + * private placement should be introduced that can handle such scenarios.
> + *

That we don't have fences on them is just the tip of the iceberg, and
mostly a nice-to-have.

The major problem is that TTM_PL_SYSTEM BOs are considered under TTM's
control and are swapped out whenever TTM thinks it is a good idea.

That needs to be handled manually in the driver if TTM_PL_SYSTEM is
ever considered a valid placement.
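
A rough sketch of that driver-side handling (names hypothetical,
modelled loosely on the VMW_PL_SYSTEM manager vmwgfx uses): register a
private resource manager so the driver, not TTM's system-domain logic,
owns these BOs:

static int drv_sys_man_alloc(struct ttm_resource_manager *man,
			     struct ttm_buffer_object *bo,
			     const struct ttm_place *place,
			     struct ttm_resource **res)
{
	*res = kzalloc(sizeof(**res), GFP_KERNEL);
	if (!*res)
		return -ENOMEM;

	ttm_resource_init(bo, place, *res);
	return 0;
}

static void drv_sys_man_free(struct ttm_resource_manager *man,
			     struct ttm_resource *res)
{
	kfree(res);
}

static const struct ttm_resource_manager_func drv_sys_manager_func = {
	.alloc = drv_sys_man_alloc,
	.free = drv_sys_man_free,
};

static int drv_sys_man_init(struct ttm_device *bdev)
{
	struct ttm_resource_manager *man;

	man = kzalloc(sizeof(*man), GFP_KERNEL);
	if (!man)
		return -ENOMEM;

	man->use_tt = true;
	man->func = &drv_sys_manager_func;

	ttm_resource_manager_init(man, 0);
	/* DRV_PL_FENCED_SYS would be a TTM_PL_PRIV-based type. */
	ttm_set_driver_manager(bdev, DRV_PL_FENCED_SYS, man);
	ttm_resource_manager_set_used(man, true);
	return 0;
}

The driver then has to take care of idling and swapping these BOs
itself.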

Christian.

>    */
>   
>   #define TTM_PL_SYSTEM           0

Patch

diff --git a/include/drm/ttm/ttm_placement.h b/include/drm/ttm/ttm_placement.h
index 76d1b9119a2b..89dfb58ff199 100644
--- a/include/drm/ttm/ttm_placement.h
+++ b/include/drm/ttm/ttm_placement.h
@@ -35,6 +35,16 @@ 
 
 /*
  * Memory regions for data placement.
+ *
+ * Due to the fact that TTM_PL_SYSTEM BO's can be accessed by the hardware
+ * and are not directly evictable they're handled slightly differently
+ * from other placements. The most important and driver visible side-effect
+ * of that is that TTM_PL_SYSTEM BO's are not allowed to be fenced and have
+ * to remain idle. For BO's which reside in system memory but for which
+ * the accelerator requires direct access (i.e. their usage needs to be
+ * synchronized between the CPU and accelerator via fences) a new, driver
+ * private placement should be introduced that can handle such scenarios.
+ *
  */
 
 #define TTM_PL_SYSTEM           0
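
For completeness, the intended usage pattern with such a driver-private
placement (sketch only; drv_fenced_sys_placement is the hypothetical
placement named above, and dma_resv_add_excl_fence() is the
era-appropriate fencing call) would be to validate into the private
domain first and only then attach fences:

	struct ttm_operation_ctx ctx = {
		.interruptible = true,
		.no_wait_gpu = false,
	};
	int ret;

	/* Caller must hold the BO's reservation (dma_resv) lock. */

	/* Move the BO out of TTM_PL_SYSTEM before any GPU use. */
	ret = ttm_bo_validate(bo, &drv_fenced_sys_placement, &ctx);
	if (ret)
		return ret;

	/* Queue GPU work, then publish its completion fence. */
	dma_resv_add_excl_fence(bo->base.resv, fence);

A BO left in bare TTM_PL_SYSTEM must instead stay idle until it is
freed or moved.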