
[v11] drm/amdgpu: add drm buddy support to amdgpu

Message ID 20220323062552.228429-1-Arunpravin.PaneerSelvam@amd.com (mailing list archive)
State New, archived
Series [v11] drm/amdgpu: add drm buddy support to amdgpu

Commit Message

Paneer Selvam, Arunpravin March 23, 2022, 6:25 a.m. UTC
- Remove drm_mm references and replace them with drm buddy functionality
- Add res cursor support for drm buddy

v2(Matthew Auld):
  - replace the spinlock with a mutex, as we call kmem_cache_zalloc
    (..., GFP_KERNEL) in the drm_buddy_alloc() function

  - lock the drm_buddy_block_trim() function, as the mark_free/mark_split
    helpers it calls are globally visible

v3(Matthew Auld):
  - remove the trim method's error handling, as the failure case is
    handled in the drm_buddy_block_trim() function

v4:
  - fix warnings reported by kernel test robot <lkp@intel.com>

v5:
  - fix merge conflict issue

v6:
  - fix warnings reported by kernel test robot <lkp@intel.com>

v7:
  - remove DRM_BUDDY_RANGE_ALLOCATION flag usage

v8:
  - keep DRM_BUDDY_RANGE_ALLOCATION flag usage
  - resolve conflicts created by drm/amdgpu: remove VRAM accounting v2

v9(Christian):
  - merged the patch below:
     - drm/amdgpu: move vram inline functions into a header
  - rename the error label to "fallback"
  - move struct amdgpu_vram_mgr to amdgpu_vram_mgr.h
  - remove unnecessary flags from struct amdgpu_vram_reservation
  - rewrite the block NULL check condition
  - change the else style as per the coding standard
  - rewrite the node max size
  - add a helper function to fetch the first entry from the list

v10(Christian):
   - rename the amdgpu_get_node() function to amdgpu_vram_mgr_first_block()

v11:
   - if the size is not aligned to min_page_size, enable the is_contiguous
     flag; the size is then rounded up to the next power of two and
     trimmed back to the original size.
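The v11 round-up-and-trim behaviour can be sketched as a small userspace analog. All names here are hypothetical; roundup_pow_of_two() merely stands in for the kernel macro of the same name:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the kernel's roundup_pow_of_two(). */
static uint64_t roundup_pow_of_two(uint64_t x)
{
	uint64_t p = 1;

	while (p < x)
		p <<= 1;
	return p;
}

/*
 * Sketch of the v11 rule: if the requested size is not a multiple of
 * min_page_size, the allocation is treated as contiguous, rounded up
 * to the next power of two, and later trimmed back to the requested
 * size by drm_buddy_block_trim().
 */
static uint64_t buddy_alloc_size(uint64_t size, uint64_t min_page_size,
				 int *needs_trim)
{
	if (size % min_page_size == 0) {
		*needs_trim = 0;
		return size;
	}
	*needs_trim = 1;
	return roundup_pow_of_two(size);
}
```

For example, a 3 MiB request with a 2 MiB min_page_size would be allocated as a contiguous 4 MiB block and then trimmed back to 3 MiB.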

Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
---
 drivers/gpu/drm/Kconfig                       |   1 +
 .../gpu/drm/amd/amdgpu/amdgpu_res_cursor.h    |  97 +++++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h       |  10 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c  | 263 ++++++++++--------
 4 files changed, 234 insertions(+), 137 deletions(-)


base-commit: a678f97326454b60ffbbde6abf52d23997d71a27

Comments

Paul Menzel March 23, 2022, 6:42 a.m. UTC | #1
Dear Arunpravin,


Thank you for your patch.

Am 23.03.22 um 07:25 schrieb Arunpravin Paneer Selvam:
> - Remove drm_mm references and replace with drm buddy functionalities

The commit message summary suggested to me that you can somehow use both 
allocators now. Two suggestions below:

1.  Switch to drm buddy allocator
2.  Use drm buddy allocator

> - Add res cursor support for drm buddy

As an allocator switch sounds invasive, could you please extend the 
commit message, briefly describing the current situation, saying what 
the downsides are, and why the buddy allocator is “better”.

How did you test it? How can it be tested that there are no regressions?

> v2(Matthew Auld):

Nit: I’d add a space before (.


Kind regards,

Paul


>    - replace the spinlock with a mutex, as we call kmem_cache_zalloc
>      (..., GFP_KERNEL) in the drm_buddy_alloc() function
> 
>    - lock the drm_buddy_block_trim() function, as the mark_free/mark_split
>      helpers it calls are globally visible
> 
> v3(Matthew Auld):
>    - remove the trim method's error handling, as the failure case is
>      handled in the drm_buddy_block_trim() function
> 
> v4:
>    - fix warnings reported by kernel test robot <lkp@intel.com>
> 
> v5:
>    - fix merge conflict issue
> 
> v6:
>    - fix warnings reported by kernel test robot <lkp@intel.com>
> 
> v7:
>    - remove DRM_BUDDY_RANGE_ALLOCATION flag usage
> 
> v8:
>    - keep DRM_BUDDY_RANGE_ALLOCATION flag usage
>    - resolve conflicts created by drm/amdgpu: remove VRAM accounting v2
> 
> v9(Christian):
>    - merged the below patch
>       - drm/amdgpu: move vram inline functions into a header
>    - rename label name as fallback
>    - move struct amdgpu_vram_mgr to amdgpu_vram_mgr.h
>    - remove unnecessary flags from struct amdgpu_vram_reservation
>    - rewrite block NULL check condition
>    - change else style as per coding standard
>    - rewrite the node max size
>    - add a helper function to fetch the first entry from the list
> 
> v10(Christian):
>     - rename the amdgpu_get_node() function to amdgpu_vram_mgr_first_block()
> 
> v11:
>     - if the size is not aligned to min_page_size, enable the is_contiguous
>       flag; the size is then rounded up to the next power of two and
>       trimmed back to the original size.
> 
> Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
> ---
>   drivers/gpu/drm/Kconfig                       |   1 +
>   .../gpu/drm/amd/amdgpu/amdgpu_res_cursor.h    |  97 +++++--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h       |  10 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c  | 263 ++++++++++--------
>   4 files changed, 234 insertions(+), 137 deletions(-)
> 
> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> index f1422bee3dcc..5133c3f028ab 100644
> --- a/drivers/gpu/drm/Kconfig
> +++ b/drivers/gpu/drm/Kconfig
> @@ -280,6 +280,7 @@ config DRM_AMDGPU
>   	select HWMON
>   	select BACKLIGHT_CLASS_DEVICE
>   	select INTERVAL_TREE
> +	select DRM_BUDDY
>   	help
>   	  Choose this option if you have a recent AMD Radeon graphics card.
>   
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
> index acfa207cf970..864c609ba00b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
> @@ -30,12 +30,15 @@
>   #include <drm/ttm/ttm_resource.h>
>   #include <drm/ttm/ttm_range_manager.h>
>   
> +#include "amdgpu_vram_mgr.h"
> +
>   /* state back for walking over vram_mgr and gtt_mgr allocations */
>   struct amdgpu_res_cursor {
>   	uint64_t		start;
>   	uint64_t		size;
>   	uint64_t		remaining;
> -	struct drm_mm_node	*node;
> +	void			*node;
> +	uint32_t		mem_type;
>   };
>   
>   /**
> @@ -52,27 +55,63 @@ static inline void amdgpu_res_first(struct ttm_resource *res,
>   				    uint64_t start, uint64_t size,
>   				    struct amdgpu_res_cursor *cur)
>   {
> +	struct drm_buddy_block *block;
> +	struct list_head *head, *next;
>   	struct drm_mm_node *node;
>   
> -	if (!res || res->mem_type == TTM_PL_SYSTEM) {
> -		cur->start = start;
> -		cur->size = size;
> -		cur->remaining = size;
> -		cur->node = NULL;
> -		WARN_ON(res && start + size > res->num_pages << PAGE_SHIFT);
> -		return;
> -	}
> +	if (!res)
> +		goto fallback;
>   
>   	BUG_ON(start + size > res->num_pages << PAGE_SHIFT);
>   
> -	node = to_ttm_range_mgr_node(res)->mm_nodes;
> -	while (start >= node->size << PAGE_SHIFT)
> -		start -= node++->size << PAGE_SHIFT;
> +	cur->mem_type = res->mem_type;
> +
> +	switch (cur->mem_type) {
> +	case TTM_PL_VRAM:
> +		head = &to_amdgpu_vram_mgr_node(res)->blocks;
> +
> +		block = list_first_entry_or_null(head,
> +						 struct drm_buddy_block,
> +						 link);
> +		if (!block)
> +			goto fallback;
> +
> +		while (start >= amdgpu_node_size(block)) {
> +			start -= amdgpu_node_size(block);
> +
> +			next = block->link.next;
> +			if (next != head)
> +				block = list_entry(next, struct drm_buddy_block, link);
> +		}
> +
> +		cur->start = amdgpu_node_start(block) + start;
> +		cur->size = min(amdgpu_node_size(block) - start, size);
> +		cur->remaining = size;
> +		cur->node = block;
> +		break;
> +	case TTM_PL_TT:
> +		node = to_ttm_range_mgr_node(res)->mm_nodes;
> +		while (start >= node->size << PAGE_SHIFT)
> +			start -= node++->size << PAGE_SHIFT;
> +
> +		cur->start = (node->start << PAGE_SHIFT) + start;
> +		cur->size = min((node->size << PAGE_SHIFT) - start, size);
> +		cur->remaining = size;
> +		cur->node = node;
> +		break;
> +	default:
> +		goto fallback;
> +	}
>   
> -	cur->start = (node->start << PAGE_SHIFT) + start;
> -	cur->size = min((node->size << PAGE_SHIFT) - start, size);
> +	return;
> +
> +fallback:
> +	cur->start = start;
> +	cur->size = size;
>   	cur->remaining = size;
> -	cur->node = node;
> +	cur->node = NULL;
> +	WARN_ON(res && start + size > res->num_pages << PAGE_SHIFT);
> +	return;
>   }
>   
>   /**
> @@ -85,7 +124,9 @@ static inline void amdgpu_res_first(struct ttm_resource *res,
>    */
>   static inline void amdgpu_res_next(struct amdgpu_res_cursor *cur, uint64_t size)
>   {
> -	struct drm_mm_node *node = cur->node;
> +	struct drm_buddy_block *block;
> +	struct drm_mm_node *node;
> +	struct list_head *next;
>   
>   	BUG_ON(size > cur->remaining);
>   
> @@ -99,9 +140,27 @@ static inline void amdgpu_res_next(struct amdgpu_res_cursor *cur, uint64_t size)
>   		return;
>   	}
>   
> -	cur->node = ++node;
> -	cur->start = node->start << PAGE_SHIFT;
> -	cur->size = min(node->size << PAGE_SHIFT, cur->remaining);
> +	switch (cur->mem_type) {
> +	case TTM_PL_VRAM:
> +		block = cur->node;
> +
> +		next = block->link.next;
> +		block = list_entry(next, struct drm_buddy_block, link);
> +
> +		cur->node = block;
> +		cur->start = amdgpu_node_start(block);
> +		cur->size = min(amdgpu_node_size(block), cur->remaining);
> +		break;
> +	case TTM_PL_TT:
> +		node = cur->node;
> +
> +		cur->node = ++node;
> +		cur->start = node->start << PAGE_SHIFT;
> +		cur->size = min(node->size << PAGE_SHIFT, cur->remaining);
> +		break;
> +	default:
> +		return;
> +	}
>   }
>   
>   #endif
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> index 9120ae80ef52..6a70818039dd 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> @@ -26,6 +26,7 @@
>   
>   #include <linux/dma-direction.h>
>   #include <drm/gpu_scheduler.h>
> +#include "amdgpu_vram_mgr.h"
>   #include "amdgpu.h"
>   
>   #define AMDGPU_PL_GDS		(TTM_PL_PRIV + 0)
> @@ -38,15 +39,6 @@
>   
>   #define AMDGPU_POISON	0xd0bed0be
>   
> -struct amdgpu_vram_mgr {
> -	struct ttm_resource_manager manager;
> -	struct drm_mm mm;
> -	spinlock_t lock;
> -	struct list_head reservations_pending;
> -	struct list_head reserved_pages;
> -	atomic64_t vis_usage;
> -};
> -
>   struct amdgpu_gtt_mgr {
>   	struct ttm_resource_manager manager;
>   	struct drm_mm mm;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
> index 0a7611648573..41fb7e6a104b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
> @@ -32,10 +32,18 @@
>   #include "atom.h"
>   
>   struct amdgpu_vram_reservation {
> +	u64 start;
> +	u64 size;
> +	struct list_head block;
>   	struct list_head node;
> -	struct drm_mm_node mm_node;
>   };
>   
> +static inline struct drm_buddy_block *
> +amdgpu_vram_mgr_first_block(struct list_head *list)
> +{
> +	return list_first_entry_or_null(list, struct drm_buddy_block, link);
> +}
> +
>   static inline struct amdgpu_vram_mgr *
>   to_vram_mgr(struct ttm_resource_manager *man)
>   {
> @@ -194,10 +202,10 @@ const struct attribute_group amdgpu_vram_mgr_attr_group = {
>    * Calculate how many bytes of the MM node are inside visible VRAM
>    */
>   static u64 amdgpu_vram_mgr_vis_size(struct amdgpu_device *adev,
> -				    struct drm_mm_node *node)
> +				    struct drm_buddy_block *block)
>   {
> -	uint64_t start = node->start << PAGE_SHIFT;
> -	uint64_t end = (node->size + node->start) << PAGE_SHIFT;
> +	u64 start = amdgpu_node_start(block);
> +	u64 end = start + amdgpu_node_size(block);
>   
>   	if (start >= adev->gmc.visible_vram_size)
>   		return 0;
> @@ -218,9 +226,9 @@ u64 amdgpu_vram_mgr_bo_visible_size(struct amdgpu_bo *bo)
>   {
>   	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>   	struct ttm_resource *res = bo->tbo.resource;
> -	unsigned pages = res->num_pages;
> -	struct drm_mm_node *mm;
> -	u64 usage;
> +	struct amdgpu_vram_mgr_node *node = to_amdgpu_vram_mgr_node(res);
> +	struct drm_buddy_block *block;
> +	u64 usage = 0;
>   
>   	if (amdgpu_gmc_vram_full_visible(&adev->gmc))
>   		return amdgpu_bo_size(bo);
> @@ -228,9 +236,8 @@ u64 amdgpu_vram_mgr_bo_visible_size(struct amdgpu_bo *bo)
>   	if (res->start >= adev->gmc.visible_vram_size >> PAGE_SHIFT)
>   		return 0;
>   
> -	mm = &container_of(res, struct ttm_range_mgr_node, base)->mm_nodes[0];
> -	for (usage = 0; pages; pages -= mm->size, mm++)
> -		usage += amdgpu_vram_mgr_vis_size(adev, mm);
> +	list_for_each_entry(block, &node->blocks, link)
> +		usage += amdgpu_vram_mgr_vis_size(adev, block);
>   
>   	return usage;
>   }
> @@ -240,21 +247,28 @@ static void amdgpu_vram_mgr_do_reserve(struct ttm_resource_manager *man)
>   {
>   	struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
>   	struct amdgpu_device *adev = to_amdgpu_device(mgr);
> -	struct drm_mm *mm = &mgr->mm;
> +	struct drm_buddy *mm = &mgr->mm;
>   	struct amdgpu_vram_reservation *rsv, *temp;
> +	struct drm_buddy_block *block;
>   	uint64_t vis_usage;
>   
>   	list_for_each_entry_safe(rsv, temp, &mgr->reservations_pending, node) {
> -		if (drm_mm_reserve_node(mm, &rsv->mm_node))
> +		if (drm_buddy_alloc_blocks(mm, rsv->start, rsv->start + rsv->size,
> +					   rsv->size, mm->chunk_size, &rsv->block,
> +					   DRM_BUDDY_RANGE_ALLOCATION))
> +			continue;
> +
> +		block = amdgpu_vram_mgr_first_block(&rsv->block);
> +		if (!block)
>   			continue;
>   
>   		dev_dbg(adev->dev, "Reservation 0x%llx - %lld, Succeeded\n",
> -			rsv->mm_node.start, rsv->mm_node.size);
> +			rsv->start, rsv->size);
>   
> -		vis_usage = amdgpu_vram_mgr_vis_size(adev, &rsv->mm_node);
> +		vis_usage = amdgpu_vram_mgr_vis_size(adev, block);
>   		atomic64_add(vis_usage, &mgr->vis_usage);
>   		spin_lock(&man->bdev->lru_lock);
> -		man->usage += rsv->mm_node.size << PAGE_SHIFT;
> +		man->usage += rsv->size;
>   		spin_unlock(&man->bdev->lru_lock);
>   		list_move(&rsv->node, &mgr->reserved_pages);
>   	}
> @@ -279,13 +293,15 @@ int amdgpu_vram_mgr_reserve_range(struct amdgpu_vram_mgr *mgr,
>   		return -ENOMEM;
>   
>   	INIT_LIST_HEAD(&rsv->node);
> -	rsv->mm_node.start = start >> PAGE_SHIFT;
> -	rsv->mm_node.size = size >> PAGE_SHIFT;
> +	INIT_LIST_HEAD(&rsv->block);
>   
> -	spin_lock(&mgr->lock);
> +	rsv->start = start;
> +	rsv->size = size;
> +
> +	mutex_lock(&mgr->lock);
>   	list_add_tail(&rsv->node, &mgr->reservations_pending);
>   	amdgpu_vram_mgr_do_reserve(&mgr->manager);
> -	spin_unlock(&mgr->lock);
> +	mutex_unlock(&mgr->lock);
>   
>   	return 0;
>   }
> @@ -307,19 +323,19 @@ int amdgpu_vram_mgr_query_page_status(struct amdgpu_vram_mgr *mgr,
>   	struct amdgpu_vram_reservation *rsv;
>   	int ret;
>   
> -	spin_lock(&mgr->lock);
> +	mutex_lock(&mgr->lock);
>   
>   	list_for_each_entry(rsv, &mgr->reservations_pending, node) {
> -		if ((rsv->mm_node.start <= start) &&
> -		    (start < (rsv->mm_node.start + rsv->mm_node.size))) {
> +		if (rsv->start <= start &&
> +		    (start < (rsv->start + rsv->size))) {
>   			ret = -EBUSY;
>   			goto out;
>   		}
>   	}
>   
>   	list_for_each_entry(rsv, &mgr->reserved_pages, node) {
> -		if ((rsv->mm_node.start <= start) &&
> -		    (start < (rsv->mm_node.start + rsv->mm_node.size))) {
> +		if (rsv->start <= start &&
> +		    (start < (rsv->start + rsv->size))) {
>   			ret = 0;
>   			goto out;
>   		}
> @@ -327,32 +343,10 @@ int amdgpu_vram_mgr_query_page_status(struct amdgpu_vram_mgr *mgr,
>   
>   	ret = -ENOENT;
>   out:
> -	spin_unlock(&mgr->lock);
> +	mutex_unlock(&mgr->lock);
>   	return ret;
>   }
>   
> -/**
> - * amdgpu_vram_mgr_virt_start - update virtual start address
> - *
> - * @mem: ttm_resource to update
> - * @node: just allocated node
> - *
> - * Calculate a virtual BO start address to easily check if everything is CPU
> - * accessible.
> - */
> -static void amdgpu_vram_mgr_virt_start(struct ttm_resource *mem,
> -				       struct drm_mm_node *node)
> -{
> -	unsigned long start;
> -
> -	start = node->start + node->size;
> -	if (start > mem->num_pages)
> -		start -= mem->num_pages;
> -	else
> -		start = 0;
> -	mem->start = max(mem->start, start);
> -}
> -
>   /**
>    * amdgpu_vram_mgr_new - allocate new ranges
>    *
> @@ -368,13 +362,14 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
>   			       const struct ttm_place *place,
>   			       struct ttm_resource **res)
>   {
> -	unsigned long lpfn, num_nodes, pages_per_node, pages_left, pages;
> +	unsigned long lpfn, pages_per_node, pages_left, pages;
>   	struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
>   	struct amdgpu_device *adev = to_amdgpu_device(mgr);
> -	uint64_t vis_usage = 0, mem_bytes, max_bytes;
> -	struct ttm_range_mgr_node *node;
> -	struct drm_mm *mm = &mgr->mm;
> -	enum drm_mm_insert_mode mode;
> +	u64 vis_usage = 0, max_bytes, min_page_size;
> +	struct amdgpu_vram_mgr_node *node;
> +	struct drm_buddy *mm = &mgr->mm;
> +	struct drm_buddy_block *block;
> +	bool is_contiguous = 0;
>   	unsigned i;
>   	int r;
>   
> @@ -382,14 +377,15 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
>   	if (!lpfn)
>   		lpfn = man->size >> PAGE_SHIFT;
>   
> +	if (place->flags & TTM_PL_FLAG_CONTIGUOUS)
> +		is_contiguous = 1;
> +
>   	max_bytes = adev->gmc.mc_vram_size;
>   	if (tbo->type != ttm_bo_type_kernel)
>   		max_bytes -= AMDGPU_VM_RESERVED_VRAM;
>   
> -	mem_bytes = tbo->base.size;
>   	if (place->flags & TTM_PL_FLAG_CONTIGUOUS) {
>   		pages_per_node = ~0ul;
> -		num_nodes = 1;
>   	} else {
>   #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>   		pages_per_node = HPAGE_PMD_NR;
> @@ -399,11 +395,9 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
>   #endif
>   		pages_per_node = max_t(uint32_t, pages_per_node,
>   				       tbo->page_alignment);
> -		num_nodes = DIV_ROUND_UP_ULL(PFN_UP(mem_bytes), pages_per_node);
>   	}
>   
> -	node = kvmalloc(struct_size(node, mm_nodes, num_nodes),
> -			GFP_KERNEL | __GFP_ZERO);
> +	node = kzalloc(sizeof(*node), GFP_KERNEL);
>   	if (!node)
>   		return -ENOMEM;
>   
> @@ -415,48 +409,86 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
>   		goto error_fini;
>   	}
>   
> -	mode = DRM_MM_INSERT_BEST;
> +	INIT_LIST_HEAD(&node->blocks);
> +
>   	if (place->flags & TTM_PL_FLAG_TOPDOWN)
> -		mode = DRM_MM_INSERT_HIGH;
> +		node->flags |= DRM_BUDDY_TOPDOWN_ALLOCATION;
>   
> -	pages_left = node->base.num_pages;
> +	if (place->fpfn || lpfn != man->size >> PAGE_SHIFT)
> +		/* Allocate blocks in desired range */
> +		node->flags |= DRM_BUDDY_RANGE_ALLOCATION;
>   
> -	/* Limit maximum size to 2GB due to SG table limitations */
> -	pages = min(pages_left, 2UL << (30 - PAGE_SHIFT));
> +	BUG_ON(!node->base.num_pages);
> +	pages_left = node->base.num_pages;
>   
>   	i = 0;
> -	spin_lock(&mgr->lock);
>   	while (pages_left) {
> -		uint32_t alignment = tbo->page_alignment;
> +		if (tbo->page_alignment)
> +			min_page_size = tbo->page_alignment << PAGE_SHIFT;
> +		else
> +			min_page_size = mgr->default_page_size;
> +
> +		/* Limit maximum size to 2GB due to SG table limitations */
> +		pages = min(pages_left, 2UL << (30 - PAGE_SHIFT));
>   
>   		if (pages >= pages_per_node)
> -			alignment = pages_per_node;
> -
> -		r = drm_mm_insert_node_in_range(mm, &node->mm_nodes[i], pages,
> -						alignment, 0, place->fpfn,
> -						lpfn, mode);
> -		if (unlikely(r)) {
> -			if (pages > pages_per_node) {
> -				if (is_power_of_2(pages))
> -					pages = pages / 2;
> -				else
> -					pages = rounddown_pow_of_two(pages);
> -				continue;
> -			}
> -			goto error_free;
> +			min_page_size = pages_per_node << PAGE_SHIFT;
> +
> +		if (!is_contiguous && !IS_ALIGNED(pages, min_page_size >> PAGE_SHIFT))
> +			is_contiguous = 1;
> +
> +		if (is_contiguous) {
> +			pages = roundup_pow_of_two(pages);
> +			min_page_size = pages << PAGE_SHIFT;
> +
> +			if (pages > lpfn)
> +				lpfn = pages;
>   		}
>   
> -		vis_usage += amdgpu_vram_mgr_vis_size(adev, &node->mm_nodes[i]);
> -		amdgpu_vram_mgr_virt_start(&node->base, &node->mm_nodes[i]);
> -		pages_left -= pages;
> +		BUG_ON(min_page_size < mm->chunk_size);
> +
> +		mutex_lock(&mgr->lock);
> +		r = drm_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT,
> +					   (u64)lpfn << PAGE_SHIFT,
> +					   (u64)pages << PAGE_SHIFT,
> +					   min_page_size,
> +					   &node->blocks,
> +					   node->flags);
> +		mutex_unlock(&mgr->lock);
> +		if (unlikely(r))
> +			goto error_free_blocks;
> +
>   		++i;
>   
>   		if (pages > pages_left)
> -			pages = pages_left;
> +			pages_left = 0;
> +		else
> +			pages_left -= pages;
>   	}
> -	spin_unlock(&mgr->lock);
>   
> -	if (i == 1)
> +	/* Free unused pages for contiguous allocation */
> +	if (is_contiguous) {
> +		u64 actual_size = (u64)node->base.num_pages << PAGE_SHIFT;
> +
> +		mutex_lock(&mgr->lock);
> +		drm_buddy_block_trim(mm,
> +				     actual_size,
> +				     &node->blocks);
> +		mutex_unlock(&mgr->lock);
> +	}
> +
> +	list_for_each_entry(block, &node->blocks, link)
> +		vis_usage += amdgpu_vram_mgr_vis_size(adev, block);
> +
> +	block = amdgpu_vram_mgr_first_block(&node->blocks);
> +	if (!block) {
> +		r = -EINVAL;
> +		goto error_fini;
> +	}
> +
> +	node->base.start = amdgpu_node_start(block) >> PAGE_SHIFT;
> +
> +	if (i == 1 && is_contiguous)
>   		node->base.placement |= TTM_PL_FLAG_CONTIGUOUS;
>   
>   	if (adev->gmc.xgmi.connected_to_cpu)
> @@ -468,13 +500,13 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
>   	*res = &node->base;
>   	return 0;
>   
> -error_free:
> -	while (i--)
> -		drm_mm_remove_node(&node->mm_nodes[i]);
> -	spin_unlock(&mgr->lock);
> +error_free_blocks:
> +	mutex_lock(&mgr->lock);
> +	drm_buddy_free_list(mm, &node->blocks);
> +	mutex_unlock(&mgr->lock);
>   error_fini:
>   	ttm_resource_fini(man, &node->base);
> -	kvfree(node);
> +	kfree(node);
>   
>   	return r;
>   }
> @@ -490,27 +522,26 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
>   static void amdgpu_vram_mgr_del(struct ttm_resource_manager *man,
>   				struct ttm_resource *res)
>   {
> -	struct ttm_range_mgr_node *node = to_ttm_range_mgr_node(res);
> +	struct amdgpu_vram_mgr_node *node = to_amdgpu_vram_mgr_node(res);
>   	struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
>   	struct amdgpu_device *adev = to_amdgpu_device(mgr);
> +	struct drm_buddy *mm = &mgr->mm;
> +	struct drm_buddy_block *block;
>   	uint64_t vis_usage = 0;
> -	unsigned i, pages;
>   
> -	spin_lock(&mgr->lock);
> -	for (i = 0, pages = res->num_pages; pages;
> -	     pages -= node->mm_nodes[i].size, ++i) {
> -		struct drm_mm_node *mm = &node->mm_nodes[i];
> +	mutex_lock(&mgr->lock);
> +	list_for_each_entry(block, &node->blocks, link)
> +		vis_usage += amdgpu_vram_mgr_vis_size(adev, block);
>   
> -		drm_mm_remove_node(mm);
> -		vis_usage += amdgpu_vram_mgr_vis_size(adev, mm);
> -	}
>   	amdgpu_vram_mgr_do_reserve(man);
> -	spin_unlock(&mgr->lock);
> +
> +	drm_buddy_free_list(mm, &node->blocks);
> +	mutex_unlock(&mgr->lock);
>   
>   	atomic64_sub(vis_usage, &mgr->vis_usage);
>   
>   	ttm_resource_fini(man, res);
> -	kvfree(node);
> +	kfree(node);
>   }
>   
>   /**
> @@ -648,13 +679,22 @@ static void amdgpu_vram_mgr_debug(struct ttm_resource_manager *man,
>   				  struct drm_printer *printer)
>   {
>   	struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
> +	struct drm_buddy *mm = &mgr->mm;
> +	struct drm_buddy_block *block;
>   
>   	drm_printf(printer, "  vis usage:%llu\n",
>   		   amdgpu_vram_mgr_vis_usage(mgr));
>   
> -	spin_lock(&mgr->lock);
> -	drm_mm_print(&mgr->mm, printer);
> -	spin_unlock(&mgr->lock);
> +	mutex_lock(&mgr->lock);
> +	drm_printf(printer, "default_page_size: %lluKiB\n",
> +		   mgr->default_page_size >> 10);
> +
> +	drm_buddy_print(mm, printer);
> +
> +	drm_printf(printer, "reserved:\n");
> +	list_for_each_entry(block, &mgr->reserved_pages, link)
> +		drm_buddy_block_print(mm, block, printer);
> +	mutex_unlock(&mgr->lock);
>   }
>   
>   static const struct ttm_resource_manager_func amdgpu_vram_mgr_func = {
> @@ -674,16 +714,21 @@ int amdgpu_vram_mgr_init(struct amdgpu_device *adev)
>   {
>   	struct amdgpu_vram_mgr *mgr = &adev->mman.vram_mgr;
>   	struct ttm_resource_manager *man = &mgr->manager;
> +	int err;
>   
>   	ttm_resource_manager_init(man, &adev->mman.bdev,
>   				  adev->gmc.real_vram_size);
>   
>   	man->func = &amdgpu_vram_mgr_func;
>   
> -	drm_mm_init(&mgr->mm, 0, man->size >> PAGE_SHIFT);
> -	spin_lock_init(&mgr->lock);
> +	err = drm_buddy_init(&mgr->mm, man->size, PAGE_SIZE);
> +	if (err)
> +		return err;
> +
> +	mutex_init(&mgr->lock);
>   	INIT_LIST_HEAD(&mgr->reservations_pending);
>   	INIT_LIST_HEAD(&mgr->reserved_pages);
> +	mgr->default_page_size = PAGE_SIZE;
>   
>   	ttm_set_driver_manager(&adev->mman.bdev, TTM_PL_VRAM, &mgr->manager);
>   	ttm_resource_manager_set_used(man, true);
> @@ -711,16 +756,16 @@ void amdgpu_vram_mgr_fini(struct amdgpu_device *adev)
>   	if (ret)
>   		return;
>   
> -	spin_lock(&mgr->lock);
> +	mutex_lock(&mgr->lock);
>   	list_for_each_entry_safe(rsv, temp, &mgr->reservations_pending, node)
>   		kfree(rsv);
>   
>   	list_for_each_entry_safe(rsv, temp, &mgr->reserved_pages, node) {
> -		drm_mm_remove_node(&rsv->mm_node);
> +		drm_buddy_free_list(&mgr->mm, &rsv->block);
>   		kfree(rsv);
>   	}
> -	drm_mm_takedown(&mgr->mm);
> -	spin_unlock(&mgr->lock);
> +	drm_buddy_fini(&mgr->mm);
> +	mutex_unlock(&mgr->lock);
>   
>   	ttm_resource_manager_cleanup(man);
>   	ttm_set_driver_manager(&adev->mman.bdev, TTM_PL_VRAM, NULL);
> 
> base-commit: a678f97326454b60ffbbde6abf52d23997d71a27
Christian König March 23, 2022, 7:37 a.m. UTC | #2
Am 23.03.22 um 07:25 schrieb Arunpravin Paneer Selvam:
> [SNIP]
> @@ -415,48 +409,86 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
>   		goto error_fini;
>   	}
>   
> -	mode = DRM_MM_INSERT_BEST;
> +	INIT_LIST_HEAD(&node->blocks);
> +
>   	if (place->flags & TTM_PL_FLAG_TOPDOWN)
> -		mode = DRM_MM_INSERT_HIGH;
> +		node->flags |= DRM_BUDDY_TOPDOWN_ALLOCATION;
>   
> -	pages_left = node->base.num_pages;
> +	if (place->fpfn || lpfn != man->size >> PAGE_SHIFT)
> +		/* Allocate blocks in desired range */
> +		node->flags |= DRM_BUDDY_RANGE_ALLOCATION;
>   
> -	/* Limit maximum size to 2GB due to SG table limitations */
> -	pages = min(pages_left, 2UL << (30 - PAGE_SHIFT));
> +	BUG_ON(!node->base.num_pages);

Please drop this BUG_ON(). This is not something which prevents further 
data corruption, so the BUG_ON() is not justified.
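The alternative the review suggests can be sketched in userspace: an empty request is a recoverable caller error, so the function should fail gracefully rather than halt the kernel. Names and the literal errno value are illustrative only:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the suggested alternative to BUG_ON(): reject an empty
 * allocation request with an error code instead of crashing. -22 is
 * used here as a userspace stand-in for the kernel's -EINVAL.
 */
static int vram_mgr_check_request(uint64_t num_pages)
{
	if (!num_pages)
		return -22; /* -EINVAL: recoverable caller error */
	return 0;
}
```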

> +	pages_left = node->base.num_pages;
>   
>   	i = 0;
> -	spin_lock(&mgr->lock);
>   	while (pages_left) {
> -		uint32_t alignment = tbo->page_alignment;
> +		if (tbo->page_alignment)
> +			min_page_size = tbo->page_alignment << PAGE_SHIFT;
> +		else
> +			min_page_size = mgr->default_page_size;

The handling here looks extremely awkward to me.

min_page_size should be determined outside of the loop, based on 
default_page_size, alignment and contiguous flag.

Then why do you drop the lock and grab it again inside the loop? And 
what is "i" actually good for?

> +
> +		/* Limit maximum size to 2GB due to SG table limitations */
> +		pages = min(pages_left, 2UL << (30 - PAGE_SHIFT));
>   
>   		if (pages >= pages_per_node)
> -			alignment = pages_per_node;
> -
> -		r = drm_mm_insert_node_in_range(mm, &node->mm_nodes[i], pages,
> -						alignment, 0, place->fpfn,
> -						lpfn, mode);
> -		if (unlikely(r)) {
> -			if (pages > pages_per_node) {
> -				if (is_power_of_2(pages))
> -					pages = pages / 2;
> -				else
> -					pages = rounddown_pow_of_two(pages);
> -				continue;
> -			}
> -			goto error_free;
> +			min_page_size = pages_per_node << PAGE_SHIFT;
> +
> +		if (!is_contiguous && !IS_ALIGNED(pages, min_page_size >> PAGE_SHIFT))
> +			is_contiguous = 1;
> +
> +		if (is_contiguous) {
> +			pages = roundup_pow_of_two(pages);
> +			min_page_size = pages << PAGE_SHIFT;
> +
> +			if (pages > lpfn)
> +				lpfn = pages;
>   		}
>   
> -		vis_usage += amdgpu_vram_mgr_vis_size(adev, &node->mm_nodes[i]);
> -		amdgpu_vram_mgr_virt_start(&node->base, &node->mm_nodes[i]);
> -		pages_left -= pages;
> +		BUG_ON(min_page_size < mm->chunk_size);
> +
> +		mutex_lock(&mgr->lock);
> +		r = drm_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT,
> +					   (u64)lpfn << PAGE_SHIFT,
> +					   (u64)pages << PAGE_SHIFT,
> +					   min_page_size,
> +					   &node->blocks,
> +					   node->flags);
> +		mutex_unlock(&mgr->lock);
> +		if (unlikely(r))
> +			goto error_free_blocks;
> +
>   		++i;
>   
>   		if (pages > pages_left)
> -			pages = pages_left;
> +			pages_left = 0;
> +		else
> +			pages_left -= pages;
>   	}
> -	spin_unlock(&mgr->lock);
>   
> -	if (i == 1)
> +	/* Free unused pages for contiguous allocation */
> +	if (is_contiguous) {

Well, that looks really odd. Why is trimming not part of 
drm_buddy_alloc_blocks()?

> +		u64 actual_size = (u64)node->base.num_pages << PAGE_SHIFT;
> +
> +		mutex_lock(&mgr->lock);
> +		drm_buddy_block_trim(mm,
> +				     actual_size,
> +				     &node->blocks);

Why is the drm_buddy_block_trim() function given all the blocks and not 
just the last one?
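One possible reading of this question, as a userspace analog: since every buddy block is power-of-two sized, only the final block can carry the round-up overshoot, so trimming could shrink just the tail block until the total matches the originally requested size. This is purely illustrative and not the drm_buddy API:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical sketch: shrink only the last block of an allocation so
 * that the sum of all block sizes equals the requested size, leaving
 * the preceding blocks untouched.
 */
static uint64_t trim_tail_block(uint64_t *block_sizes, int n,
				uint64_t requested)
{
	uint64_t head = 0;
	int i;

	for (i = 0; i < n - 1; i++)
		head += block_sizes[i];
	block_sizes[n - 1] = requested - head; /* trim the overshoot */
	return block_sizes[n - 1];
}
```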

Regards,
Christian.

> +		mutex_unlock(&mgr->lock);
> +	}
> +
> +	list_for_each_entry(block, &node->blocks, link)
> +		vis_usage += amdgpu_vram_mgr_vis_size(adev, block);
> +
> +	block = amdgpu_vram_mgr_first_block(&node->blocks);
> +	if (!block) {
> +		r = -EINVAL;
> +		goto error_fini;
> +	}
> +
> +	node->base.start = amdgpu_node_start(block) >> PAGE_SHIFT;
> +
> +	if (i == 1 && is_contiguous)
>   		node->base.placement |= TTM_PL_FLAG_CONTIGUOUS;
>   
>   	if (adev->gmc.xgmi.connected_to_cpu)
> @@ -468,13 +500,13 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
>   	*res = &node->base;
>   	return 0;
>   
> -error_free:
> -	while (i--)
> -		drm_mm_remove_node(&node->mm_nodes[i]);
> -	spin_unlock(&mgr->lock);
> +error_free_blocks:
> +	mutex_lock(&mgr->lock);
> +	drm_buddy_free_list(mm, &node->blocks);
> +	mutex_unlock(&mgr->lock);
>   error_fini:
>   	ttm_resource_fini(man, &node->base);
> -	kvfree(node);
> +	kfree(node);
>   
>   	return r;
>   }
> @@ -490,27 +522,26 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
>   static void amdgpu_vram_mgr_del(struct ttm_resource_manager *man,
>   				struct ttm_resource *res)
>   {
> -	struct ttm_range_mgr_node *node = to_ttm_range_mgr_node(res);
> +	struct amdgpu_vram_mgr_node *node = to_amdgpu_vram_mgr_node(res);
>   	struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
>   	struct amdgpu_device *adev = to_amdgpu_device(mgr);
> +	struct drm_buddy *mm = &mgr->mm;
> +	struct drm_buddy_block *block;
>   	uint64_t vis_usage = 0;
> -	unsigned i, pages;
>   
> -	spin_lock(&mgr->lock);
> -	for (i = 0, pages = res->num_pages; pages;
> -	     pages -= node->mm_nodes[i].size, ++i) {
> -		struct drm_mm_node *mm = &node->mm_nodes[i];
> +	mutex_lock(&mgr->lock);
> +	list_for_each_entry(block, &node->blocks, link)
> +		vis_usage += amdgpu_vram_mgr_vis_size(adev, block);
>   
> -		drm_mm_remove_node(mm);
> -		vis_usage += amdgpu_vram_mgr_vis_size(adev, mm);
> -	}
>   	amdgpu_vram_mgr_do_reserve(man);
> -	spin_unlock(&mgr->lock);
> +
> +	drm_buddy_free_list(mm, &node->blocks);
> +	mutex_unlock(&mgr->lock);
>   
>   	atomic64_sub(vis_usage, &mgr->vis_usage);
>   
>   	ttm_resource_fini(man, res);
> -	kvfree(node);
> +	kfree(node);
>   }
>   
>   /**
> @@ -648,13 +679,22 @@ static void amdgpu_vram_mgr_debug(struct ttm_resource_manager *man,
>   				  struct drm_printer *printer)
>   {
>   	struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
> +	struct drm_buddy *mm = &mgr->mm;
> +	struct drm_buddy_block *block;
>   
>   	drm_printf(printer, "  vis usage:%llu\n",
>   		   amdgpu_vram_mgr_vis_usage(mgr));
>   
> -	spin_lock(&mgr->lock);
> -	drm_mm_print(&mgr->mm, printer);
> -	spin_unlock(&mgr->lock);
> +	mutex_lock(&mgr->lock);
> +	drm_printf(printer, "default_page_size: %lluKiB\n",
> +		   mgr->default_page_size >> 10);
> +
> +	drm_buddy_print(mm, printer);
> +
> +	drm_printf(printer, "reserved:\n");
> +	list_for_each_entry(block, &mgr->reserved_pages, link)
> +		drm_buddy_block_print(mm, block, printer);
> +	mutex_unlock(&mgr->lock);
>   }
>   
>   static const struct ttm_resource_manager_func amdgpu_vram_mgr_func = {
> @@ -674,16 +714,21 @@ int amdgpu_vram_mgr_init(struct amdgpu_device *adev)
>   {
>   	struct amdgpu_vram_mgr *mgr = &adev->mman.vram_mgr;
>   	struct ttm_resource_manager *man = &mgr->manager;
> +	int err;
>   
>   	ttm_resource_manager_init(man, &adev->mman.bdev,
>   				  adev->gmc.real_vram_size);
>   
>   	man->func = &amdgpu_vram_mgr_func;
>   
> -	drm_mm_init(&mgr->mm, 0, man->size >> PAGE_SHIFT);
> -	spin_lock_init(&mgr->lock);
> +	err = drm_buddy_init(&mgr->mm, man->size, PAGE_SIZE);
> +	if (err)
> +		return err;
> +
> +	mutex_init(&mgr->lock);
>   	INIT_LIST_HEAD(&mgr->reservations_pending);
>   	INIT_LIST_HEAD(&mgr->reserved_pages);
> +	mgr->default_page_size = PAGE_SIZE;
>   
>   	ttm_set_driver_manager(&adev->mman.bdev, TTM_PL_VRAM, &mgr->manager);
>   	ttm_resource_manager_set_used(man, true);
> @@ -711,16 +756,16 @@ void amdgpu_vram_mgr_fini(struct amdgpu_device *adev)
>   	if (ret)
>   		return;
>   
> -	spin_lock(&mgr->lock);
> +	mutex_lock(&mgr->lock);
>   	list_for_each_entry_safe(rsv, temp, &mgr->reservations_pending, node)
>   		kfree(rsv);
>   
>   	list_for_each_entry_safe(rsv, temp, &mgr->reserved_pages, node) {
> -		drm_mm_remove_node(&rsv->mm_node);
> +		drm_buddy_free_list(&mgr->mm, &rsv->block);
>   		kfree(rsv);
>   	}
> -	drm_mm_takedown(&mgr->mm);
> -	spin_unlock(&mgr->lock);
> +	drm_buddy_fini(&mgr->mm);
> +	mutex_unlock(&mgr->lock);
>   
>   	ttm_resource_manager_cleanup(man);
>   	ttm_set_driver_manager(&adev->mman.bdev, TTM_PL_VRAM, NULL);
>
> base-commit: a678f97326454b60ffbbde6abf52d23997d71a27
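[Editor's note on the v11 sizing change quoted above: for a contiguous request, the patch rounds the page count up to the next power of two, allocates that, and then trims the allocation back to the requested size via drm_buddy_block_trim(). The following is a minimal userspace sketch of that sizing arithmetic with hypothetical helper names; it is not the kernel API.]

```c
#include <assert.h>
#include <stdint.h>

/* Round up to the next power of two (userspace stand-in for the
 * kernel's roundup_pow_of_two()). */
static uint64_t roundup_pow_of_two_u64(uint64_t x)
{
	uint64_t p = 1;

	while (p < x)
		p <<= 1;
	return p;
}

/* Pages actually requested from the buddy allocator for a
 * contiguous allocation (hypothetical helper name). */
static uint64_t contiguous_alloc_pages(uint64_t requested_pages)
{
	return roundup_pow_of_two_u64(requested_pages);
}

/* Pages given back by the subsequent trim step, which restores the
 * originally requested size (hypothetical helper name). */
static uint64_t trimmed_pages(uint64_t requested_pages)
{
	return contiguous_alloc_pages(requested_pages) - requested_pages;
}
```

A 300-page contiguous request would thus briefly hold 512 pages before the trim returns the surplus 212 pages to the allocator.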
Christian König March 23, 2022, 7:42 a.m. UTC | #3
Hi Paul,

Am 23.03.22 um 07:42 schrieb Paul Menzel:
> Dear Arunpravin,
>
>
> Thank you for your patch.
>
> Am 23.03.22 um 07:25 schrieb Arunpravin Paneer Selvam:
>> - Remove drm_mm references and replace with drm buddy functionalities
>
> The commit message summary to me suggested, you can somehow use both 
> allocators now. Two suggestions below:
>
> 1.  Switch to drm buddy allocator
> 2.  Use drm buddy allocator
>
>> - Add res cursor support for drm buddy
>
> As an allocator switch sounds invasive, could you please extend the 
> commit message, briefly describing the current situation, saying what 
> the downsides are, and why the buddy allocator is “better”.

Well, Paul please stop bothering developers with those requests.

It's my job as maintainer to supervise the commit messages and it is 
certainly NOT require to explain all the details of the current 
situation in a commit message. That is just overkill.

A simple note that we are switching from the drm_mm backend to the buddy 
backend is sufficient, and that is exactly what the commit message is 
saying here.

Regards,
Christian.

>
> How did you test it? How can it be tested that there are no regressions?
>
>> v2(Matthew Auld):
>
> Nit: I’d add a space before (.
>
>
> Kind regards,
>
> Paul
>
>
>>    - replace spinlock with mutex as we call kmem_cache_zalloc
>>      (..., GFP_KERNEL) in drm_buddy_alloc() function
>>
>>    - lock drm_buddy_block_trim() function as it calls
>>      mark_free/mark_split are all globally visible
>>
>> v3(Matthew Auld):
>>    - remove trim method error handling as we address the failure case
>>      at drm_buddy_block_trim() function
>>
>> v4:
>>    - fix warnings reported by kernel test robot <lkp@intel.com>
>>
>> v5:
>>    - fix merge conflict issue
>>
>> v6:
>>    - fix warnings reported by kernel test robot <lkp@intel.com>
>>
>> v7:
>>    - remove DRM_BUDDY_RANGE_ALLOCATION flag usage
>>
>> v8:
>>    - keep DRM_BUDDY_RANGE_ALLOCATION flag usage
>>    - resolve conflicts created by drm/amdgpu: remove VRAM accounting v2
>>
>> v9(Christian):
>>    - merged the below patch
>>       - drm/amdgpu: move vram inline functions into a header
>>    - rename label name as fallback
>>    - move struct amdgpu_vram_mgr to amdgpu_vram_mgr.h
>>    - remove unnecessary flags from struct amdgpu_vram_reservation
>>    - rewrite block NULL check condition
>>    - change else style as per coding standard
>>    - rewrite the node max size
>>    - add a helper function to fetch the first entry from the list
>>
>> v10(Christian):
>>     - rename amdgpu_get_node() function name as 
>> amdgpu_vram_mgr_first_block
>>
>> v11:
>>     - if the size is not aligned to min_page_size, enable the
>>       is_contiguous flag, so that the size is rounded up to the next
>>       power of two and then trimmed back to the original size.
>>
>> Signed-off-by: Arunpravin Paneer Selvam 
>> <Arunpravin.PaneerSelvam@amd.com>
>> ---
>>   drivers/gpu/drm/Kconfig                       |   1 +
>>   .../gpu/drm/amd/amdgpu/amdgpu_res_cursor.h    |  97 +++++--
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h       |  10 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c  | 263 ++++++++++--------
>>   4 files changed, 234 insertions(+), 137 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
>> index f1422bee3dcc..5133c3f028ab 100644
>> --- a/drivers/gpu/drm/Kconfig
>> +++ b/drivers/gpu/drm/Kconfig
>> @@ -280,6 +280,7 @@ config DRM_AMDGPU
>>       select HWMON
>>       select BACKLIGHT_CLASS_DEVICE
>>       select INTERVAL_TREE
>> +    select DRM_BUDDY
>>       help
>>         Choose this option if you have a recent AMD Radeon graphics 
>> card.
>>   diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
>> index acfa207cf970..864c609ba00b 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
>> @@ -30,12 +30,15 @@
>>   #include <drm/ttm/ttm_resource.h>
>>   #include <drm/ttm/ttm_range_manager.h>
>>   +#include "amdgpu_vram_mgr.h"
>> +
>>   /* state back for walking over vram_mgr and gtt_mgr allocations */
>>   struct amdgpu_res_cursor {
>>       uint64_t        start;
>>       uint64_t        size;
>>       uint64_t        remaining;
>> -    struct drm_mm_node    *node;
>> +    void            *node;
>> +    uint32_t        mem_type;
>>   };
>>     /**
>> @@ -52,27 +55,63 @@ static inline void amdgpu_res_first(struct 
>> ttm_resource *res,
>>                       uint64_t start, uint64_t size,
>>                       struct amdgpu_res_cursor *cur)
>>   {
>> +    struct drm_buddy_block *block;
>> +    struct list_head *head, *next;
>>       struct drm_mm_node *node;
>>   -    if (!res || res->mem_type == TTM_PL_SYSTEM) {
>> -        cur->start = start;
>> -        cur->size = size;
>> -        cur->remaining = size;
>> -        cur->node = NULL;
>> -        WARN_ON(res && start + size > res->num_pages << PAGE_SHIFT);
>> -        return;
>> -    }
>> +    if (!res)
>> +        goto fallback;
>>         BUG_ON(start + size > res->num_pages << PAGE_SHIFT);
>>   -    node = to_ttm_range_mgr_node(res)->mm_nodes;
>> -    while (start >= node->size << PAGE_SHIFT)
>> -        start -= node++->size << PAGE_SHIFT;
>> +    cur->mem_type = res->mem_type;
>> +
>> +    switch (cur->mem_type) {
>> +    case TTM_PL_VRAM:
>> +        head = &to_amdgpu_vram_mgr_node(res)->blocks;
>> +
>> +        block = list_first_entry_or_null(head,
>> +                         struct drm_buddy_block,
>> +                         link);
>> +        if (!block)
>> +            goto fallback;
>> +
>> +        while (start >= amdgpu_node_size(block)) {
>> +            start -= amdgpu_node_size(block);
>> +
>> +            next = block->link.next;
>> +            if (next != head)
>> +                block = list_entry(next, struct drm_buddy_block, link);
>> +        }
>> +
>> +        cur->start = amdgpu_node_start(block) + start;
>> +        cur->size = min(amdgpu_node_size(block) - start, size);
>> +        cur->remaining = size;
>> +        cur->node = block;
>> +        break;
>> +    case TTM_PL_TT:
>> +        node = to_ttm_range_mgr_node(res)->mm_nodes;
>> +        while (start >= node->size << PAGE_SHIFT)
>> +            start -= node++->size << PAGE_SHIFT;
>> +
>> +        cur->start = (node->start << PAGE_SHIFT) + start;
>> +        cur->size = min((node->size << PAGE_SHIFT) - start, size);
>> +        cur->remaining = size;
>> +        cur->node = node;
>> +        break;
>> +    default:
>> +        goto fallback;
>> +    }
>>   -    cur->start = (node->start << PAGE_SHIFT) + start;
>> -    cur->size = min((node->size << PAGE_SHIFT) - start, size);
>> +    return;
>> +
>> +fallback:
>> +    cur->start = start;
>> +    cur->size = size;
>>       cur->remaining = size;
>> -    cur->node = node;
>> +    cur->node = NULL;
>> +    WARN_ON(res && start + size > res->num_pages << PAGE_SHIFT);
>> +    return;
>>   }
>>     /**
>> @@ -85,7 +124,9 @@ static inline void amdgpu_res_first(struct 
>> ttm_resource *res,
>>    */
>>   static inline void amdgpu_res_next(struct amdgpu_res_cursor *cur, 
>> uint64_t size)
>>   {
>> -    struct drm_mm_node *node = cur->node;
>> +    struct drm_buddy_block *block;
>> +    struct drm_mm_node *node;
>> +    struct list_head *next;
>>         BUG_ON(size > cur->remaining);
>>   @@ -99,9 +140,27 @@ static inline void amdgpu_res_next(struct 
>> amdgpu_res_cursor *cur, uint64_t size)
>>           return;
>>       }
>>   -    cur->node = ++node;
>> -    cur->start = node->start << PAGE_SHIFT;
>> -    cur->size = min(node->size << PAGE_SHIFT, cur->remaining);
>> +    switch (cur->mem_type) {
>> +    case TTM_PL_VRAM:
>> +        block = cur->node;
>> +
>> +        next = block->link.next;
>> +        block = list_entry(next, struct drm_buddy_block, link);
>> +
>> +        cur->node = block;
>> +        cur->start = amdgpu_node_start(block);
>> +        cur->size = min(amdgpu_node_size(block), cur->remaining);
>> +        break;
>> +    case TTM_PL_TT:
>> +        node = cur->node;
>> +
>> +        cur->node = ++node;
>> +        cur->start = node->start << PAGE_SHIFT;
>> +        cur->size = min(node->size << PAGE_SHIFT, cur->remaining);
>> +        break;
>> +    default:
>> +        return;
>> +    }
>>   }
>>     #endif
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
>> index 9120ae80ef52..6a70818039dd 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
>> @@ -26,6 +26,7 @@
>>     #include <linux/dma-direction.h>
>>   #include <drm/gpu_scheduler.h>
>> +#include "amdgpu_vram_mgr.h"
>>   #include "amdgpu.h"
>>     #define AMDGPU_PL_GDS        (TTM_PL_PRIV + 0)
>> @@ -38,15 +39,6 @@
>>     #define AMDGPU_POISON    0xd0bed0be
>>   -struct amdgpu_vram_mgr {
>> -    struct ttm_resource_manager manager;
>> -    struct drm_mm mm;
>> -    spinlock_t lock;
>> -    struct list_head reservations_pending;
>> -    struct list_head reserved_pages;
>> -    atomic64_t vis_usage;
>> -};
>> -
>>   struct amdgpu_gtt_mgr {
>>       struct ttm_resource_manager manager;
>>       struct drm_mm mm;
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
>> index 0a7611648573..41fb7e6a104b 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
>> @@ -32,10 +32,18 @@
>>   #include "atom.h"
>>     struct amdgpu_vram_reservation {
>> +    u64 start;
>> +    u64 size;
>> +    struct list_head block;
>>       struct list_head node;
>> -    struct drm_mm_node mm_node;
>>   };
>>   +static inline struct drm_buddy_block *
>> +amdgpu_vram_mgr_first_block(struct list_head *list)
>> +{
>> +    return list_first_entry_or_null(list, struct drm_buddy_block, 
>> link);
>> +}
>> +
>>   static inline struct amdgpu_vram_mgr *
>>   to_vram_mgr(struct ttm_resource_manager *man)
>>   {
>> @@ -194,10 +202,10 @@ const struct attribute_group 
>> amdgpu_vram_mgr_attr_group = {
>>    * Calculate how many bytes of the MM node are inside visible VRAM
>>    */
>>   static u64 amdgpu_vram_mgr_vis_size(struct amdgpu_device *adev,
>> -                    struct drm_mm_node *node)
>> +                    struct drm_buddy_block *block)
>>   {
>> -    uint64_t start = node->start << PAGE_SHIFT;
>> -    uint64_t end = (node->size + node->start) << PAGE_SHIFT;
>> +    u64 start = amdgpu_node_start(block);
>> +    u64 end = start + amdgpu_node_size(block);
>>         if (start >= adev->gmc.visible_vram_size)
>>           return 0;
>> @@ -218,9 +226,9 @@ u64 amdgpu_vram_mgr_bo_visible_size(struct 
>> amdgpu_bo *bo)
>>   {
>>       struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>>       struct ttm_resource *res = bo->tbo.resource;
>> -    unsigned pages = res->num_pages;
>> -    struct drm_mm_node *mm;
>> -    u64 usage;
>> +    struct amdgpu_vram_mgr_node *node = to_amdgpu_vram_mgr_node(res);
>> +    struct drm_buddy_block *block;
>> +    u64 usage = 0;
>>         if (amdgpu_gmc_vram_full_visible(&adev->gmc))
>>           return amdgpu_bo_size(bo);
>> @@ -228,9 +236,8 @@ u64 amdgpu_vram_mgr_bo_visible_size(struct 
>> amdgpu_bo *bo)
>>       if (res->start >= adev->gmc.visible_vram_size >> PAGE_SHIFT)
>>           return 0;
>>   -    mm = &container_of(res, struct ttm_range_mgr_node, 
>> base)->mm_nodes[0];
>> -    for (usage = 0; pages; pages -= mm->size, mm++)
>> -        usage += amdgpu_vram_mgr_vis_size(adev, mm);
>> +    list_for_each_entry(block, &node->blocks, link)
>> +        usage += amdgpu_vram_mgr_vis_size(adev, block);
>>         return usage;
>>   }
>> @@ -240,21 +247,28 @@ static void amdgpu_vram_mgr_do_reserve(struct 
>> ttm_resource_manager *man)
>>   {
>>       struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
>>       struct amdgpu_device *adev = to_amdgpu_device(mgr);
>> -    struct drm_mm *mm = &mgr->mm;
>> +    struct drm_buddy *mm = &mgr->mm;
>>       struct amdgpu_vram_reservation *rsv, *temp;
>> +    struct drm_buddy_block *block;
>>       uint64_t vis_usage;
>>         list_for_each_entry_safe(rsv, temp, 
>> &mgr->reservations_pending, node) {
>> -        if (drm_mm_reserve_node(mm, &rsv->mm_node))
>> +        if (drm_buddy_alloc_blocks(mm, rsv->start, rsv->start + 
>> rsv->size,
>> +                       rsv->size, mm->chunk_size, &rsv->block,
>> +                       DRM_BUDDY_RANGE_ALLOCATION))
>> +            continue;
>> +
>> +        block = amdgpu_vram_mgr_first_block(&rsv->block);
>> +        if (!block)
>>               continue;
>>             dev_dbg(adev->dev, "Reservation 0x%llx - %lld, Succeeded\n",
>> -            rsv->mm_node.start, rsv->mm_node.size);
>> +            rsv->start, rsv->size);
>>   -        vis_usage = amdgpu_vram_mgr_vis_size(adev, &rsv->mm_node);
>> +        vis_usage = amdgpu_vram_mgr_vis_size(adev, block);
>>           atomic64_add(vis_usage, &mgr->vis_usage);
>>           spin_lock(&man->bdev->lru_lock);
>> -        man->usage += rsv->mm_node.size << PAGE_SHIFT;
>> +        man->usage += rsv->size;
>>           spin_unlock(&man->bdev->lru_lock);
>>           list_move(&rsv->node, &mgr->reserved_pages);
>>       }
>> @@ -279,13 +293,15 @@ int amdgpu_vram_mgr_reserve_range(struct 
>> amdgpu_vram_mgr *mgr,
>>           return -ENOMEM;
>>         INIT_LIST_HEAD(&rsv->node);
>> -    rsv->mm_node.start = start >> PAGE_SHIFT;
>> -    rsv->mm_node.size = size >> PAGE_SHIFT;
>> +    INIT_LIST_HEAD(&rsv->block);
>>   -    spin_lock(&mgr->lock);
>> +    rsv->start = start;
>> +    rsv->size = size;
>> +
>> +    mutex_lock(&mgr->lock);
>>       list_add_tail(&rsv->node, &mgr->reservations_pending);
>>       amdgpu_vram_mgr_do_reserve(&mgr->manager);
>> -    spin_unlock(&mgr->lock);
>> +    mutex_unlock(&mgr->lock);
>>         return 0;
>>   }
>> @@ -307,19 +323,19 @@ int amdgpu_vram_mgr_query_page_status(struct 
>> amdgpu_vram_mgr *mgr,
>>       struct amdgpu_vram_reservation *rsv;
>>       int ret;
>>   -    spin_lock(&mgr->lock);
>> +    mutex_lock(&mgr->lock);
>>         list_for_each_entry(rsv, &mgr->reservations_pending, node) {
>> -        if ((rsv->mm_node.start <= start) &&
>> -            (start < (rsv->mm_node.start + rsv->mm_node.size))) {
>> +        if (rsv->start <= start &&
>> +            (start < (rsv->start + rsv->size))) {
>>               ret = -EBUSY;
>>               goto out;
>>           }
>>       }
>>         list_for_each_entry(rsv, &mgr->reserved_pages, node) {
>> -        if ((rsv->mm_node.start <= start) &&
>> -            (start < (rsv->mm_node.start + rsv->mm_node.size))) {
>> +        if (rsv->start <= start &&
>> +            (start < (rsv->start + rsv->size))) {
>>               ret = 0;
>>               goto out;
>>           }
>> @@ -327,32 +343,10 @@ int amdgpu_vram_mgr_query_page_status(struct 
>> amdgpu_vram_mgr *mgr,
>>         ret = -ENOENT;
>>   out:
>> -    spin_unlock(&mgr->lock);
>> +    mutex_unlock(&mgr->lock);
>>       return ret;
>>   }
>>   -/**
>> - * amdgpu_vram_mgr_virt_start - update virtual start address
>> - *
>> - * @mem: ttm_resource to update
>> - * @node: just allocated node
>> - *
>> - * Calculate a virtual BO start address to easily check if 
>> everything is CPU
>> - * accessible.
>> - */
>> -static void amdgpu_vram_mgr_virt_start(struct ttm_resource *mem,
>> -                       struct drm_mm_node *node)
>> -{
>> -    unsigned long start;
>> -
>> -    start = node->start + node->size;
>> -    if (start > mem->num_pages)
>> -        start -= mem->num_pages;
>> -    else
>> -        start = 0;
>> -    mem->start = max(mem->start, start);
>> -}
>> -
>>   /**
>>    * amdgpu_vram_mgr_new - allocate new ranges
>>    *
>> @@ -368,13 +362,14 @@ static int amdgpu_vram_mgr_new(struct 
>> ttm_resource_manager *man,
>>                      const struct ttm_place *place,
>>                      struct ttm_resource **res)
>>   {
>> -    unsigned long lpfn, num_nodes, pages_per_node, pages_left, pages;
>> +    unsigned long lpfn, pages_per_node, pages_left, pages;
>>       struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
>>       struct amdgpu_device *adev = to_amdgpu_device(mgr);
>> -    uint64_t vis_usage = 0, mem_bytes, max_bytes;
>> -    struct ttm_range_mgr_node *node;
>> -    struct drm_mm *mm = &mgr->mm;
>> -    enum drm_mm_insert_mode mode;
>> +    u64 vis_usage = 0, max_bytes, min_page_size;
>> +    struct amdgpu_vram_mgr_node *node;
>> +    struct drm_buddy *mm = &mgr->mm;
>> +    struct drm_buddy_block *block;
>> +    bool is_contiguous = 0;
>>       unsigned i;
>>       int r;
>>   @@ -382,14 +377,15 @@ static int amdgpu_vram_mgr_new(struct 
>> ttm_resource_manager *man,
>>       if (!lpfn)
>>           lpfn = man->size >> PAGE_SHIFT;
>>   +    if (place->flags & TTM_PL_FLAG_CONTIGUOUS)
>> +        is_contiguous = 1;
>> +
>>       max_bytes = adev->gmc.mc_vram_size;
>>       if (tbo->type != ttm_bo_type_kernel)
>>           max_bytes -= AMDGPU_VM_RESERVED_VRAM;
>>   -    mem_bytes = tbo->base.size;
>>       if (place->flags & TTM_PL_FLAG_CONTIGUOUS) {
>>           pages_per_node = ~0ul;
>> -        num_nodes = 1;
>>       } else {
>>   #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>           pages_per_node = HPAGE_PMD_NR;
>> @@ -399,11 +395,9 @@ static int amdgpu_vram_mgr_new(struct 
>> ttm_resource_manager *man,
>>   #endif
>>           pages_per_node = max_t(uint32_t, pages_per_node,
>>                          tbo->page_alignment);
>> -        num_nodes = DIV_ROUND_UP_ULL(PFN_UP(mem_bytes), 
>> pages_per_node);
>>       }
>>   -    node = kvmalloc(struct_size(node, mm_nodes, num_nodes),
>> -            GFP_KERNEL | __GFP_ZERO);
>> +    node = kzalloc(sizeof(*node), GFP_KERNEL);
>>       if (!node)
>>           return -ENOMEM;
>>   @@ -415,48 +409,86 @@ static int amdgpu_vram_mgr_new(struct 
>> ttm_resource_manager *man,
>>           goto error_fini;
>>       }
>>   -    mode = DRM_MM_INSERT_BEST;
>> +    INIT_LIST_HEAD(&node->blocks);
>> +
>>       if (place->flags & TTM_PL_FLAG_TOPDOWN)
>> -        mode = DRM_MM_INSERT_HIGH;
>> +        node->flags |= DRM_BUDDY_TOPDOWN_ALLOCATION;
>>   -    pages_left = node->base.num_pages;
>> +    if (place->fpfn || lpfn != man->size >> PAGE_SHIFT)
>> +        /* Allocate blocks in desired range */
>> +        node->flags |= DRM_BUDDY_RANGE_ALLOCATION;
>>   -    /* Limit maximum size to 2GB due to SG table limitations */
>> -    pages = min(pages_left, 2UL << (30 - PAGE_SHIFT));
>> +    BUG_ON(!node->base.num_pages);
>> +    pages_left = node->base.num_pages;
>>         i = 0;
>> -    spin_lock(&mgr->lock);
>>       while (pages_left) {
>> -        uint32_t alignment = tbo->page_alignment;
>> +        if (tbo->page_alignment)
>> +            min_page_size = tbo->page_alignment << PAGE_SHIFT;
>> +        else
>> +            min_page_size = mgr->default_page_size;
>> +
>> +        /* Limit maximum size to 2GB due to SG table limitations */
>> +        pages = min(pages_left, 2UL << (30 - PAGE_SHIFT));
>>             if (pages >= pages_per_node)
>> -            alignment = pages_per_node;
>> -
>> -        r = drm_mm_insert_node_in_range(mm, &node->mm_nodes[i], pages,
>> -                        alignment, 0, place->fpfn,
>> -                        lpfn, mode);
>> -        if (unlikely(r)) {
>> -            if (pages > pages_per_node) {
>> -                if (is_power_of_2(pages))
>> -                    pages = pages / 2;
>> -                else
>> -                    pages = rounddown_pow_of_two(pages);
>> -                continue;
>> -            }
>> -            goto error_free;
>> +            min_page_size = pages_per_node << PAGE_SHIFT;
>> +
>> +        if (!is_contiguous && !IS_ALIGNED(pages, min_page_size >> 
>> PAGE_SHIFT))
>> +            is_contiguous = 1;
>> +
>> +        if (is_contiguous) {
>> +            pages = roundup_pow_of_two(pages);
>> +            min_page_size = pages << PAGE_SHIFT;
>> +
>> +            if (pages > lpfn)
>> +                lpfn = pages;
>>           }
>>   -        vis_usage += amdgpu_vram_mgr_vis_size(adev, 
>> &node->mm_nodes[i]);
>> -        amdgpu_vram_mgr_virt_start(&node->base, &node->mm_nodes[i]);
>> -        pages_left -= pages;
>> +        BUG_ON(min_page_size < mm->chunk_size);
>> +
>> +        mutex_lock(&mgr->lock);
>> +        r = drm_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT,
>> +                       (u64)lpfn << PAGE_SHIFT,
>> +                       (u64)pages << PAGE_SHIFT,
>> +                       min_page_size,
>> +                       &node->blocks,
>> +                       node->flags);
>> +        mutex_unlock(&mgr->lock);
>> +        if (unlikely(r))
>> +            goto error_free_blocks;
>> +
>>           ++i;
>>             if (pages > pages_left)
>> -            pages = pages_left;
>> +            pages_left = 0;
>> +        else
>> +            pages_left -= pages;
>>       }
>> -    spin_unlock(&mgr->lock);
>>   -    if (i == 1)
>> +    /* Free unused pages for contiguous allocation */
>> +    if (is_contiguous) {
>> +        u64 actual_size = (u64)node->base.num_pages << PAGE_SHIFT;
>> +
>> +        mutex_lock(&mgr->lock);
>> +        drm_buddy_block_trim(mm,
>> +                     actual_size,
>> +                     &node->blocks);
>> +        mutex_unlock(&mgr->lock);
>> +    }
>> +
>> +    list_for_each_entry(block, &node->blocks, link)
>> +        vis_usage += amdgpu_vram_mgr_vis_size(adev, block);
>> +
>> +    block = amdgpu_vram_mgr_first_block(&node->blocks);
>> +    if (!block) {
>> +        r = -EINVAL;
>> +        goto error_fini;
>> +    }
>> +
>> +    node->base.start = amdgpu_node_start(block) >> PAGE_SHIFT;
>> +
>> +    if (i == 1 && is_contiguous)
>>           node->base.placement |= TTM_PL_FLAG_CONTIGUOUS;
>>         if (adev->gmc.xgmi.connected_to_cpu)
>> @@ -468,13 +500,13 @@ static int amdgpu_vram_mgr_new(struct 
>> ttm_resource_manager *man,
>>       *res = &node->base;
>>       return 0;
>>   -error_free:
>> -    while (i--)
>> -        drm_mm_remove_node(&node->mm_nodes[i]);
>> -    spin_unlock(&mgr->lock);
>> +error_free_blocks:
>> +    mutex_lock(&mgr->lock);
>> +    drm_buddy_free_list(mm, &node->blocks);
>> +    mutex_unlock(&mgr->lock);
>>   error_fini:
>>       ttm_resource_fini(man, &node->base);
>> -    kvfree(node);
>> +    kfree(node);
>>         return r;
>>   }
>> @@ -490,27 +522,26 @@ static int amdgpu_vram_mgr_new(struct 
>> ttm_resource_manager *man,
>>   static void amdgpu_vram_mgr_del(struct ttm_resource_manager *man,
>>                   struct ttm_resource *res)
>>   {
>> -    struct ttm_range_mgr_node *node = to_ttm_range_mgr_node(res);
>> +    struct amdgpu_vram_mgr_node *node = to_amdgpu_vram_mgr_node(res);
>>       struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
>>       struct amdgpu_device *adev = to_amdgpu_device(mgr);
>> +    struct drm_buddy *mm = &mgr->mm;
>> +    struct drm_buddy_block *block;
>>       uint64_t vis_usage = 0;
>> -    unsigned i, pages;
>>   -    spin_lock(&mgr->lock);
>> -    for (i = 0, pages = res->num_pages; pages;
>> -         pages -= node->mm_nodes[i].size, ++i) {
>> -        struct drm_mm_node *mm = &node->mm_nodes[i];
>> +    mutex_lock(&mgr->lock);
>> +    list_for_each_entry(block, &node->blocks, link)
>> +        vis_usage += amdgpu_vram_mgr_vis_size(adev, block);
>>   -        drm_mm_remove_node(mm);
>> -        vis_usage += amdgpu_vram_mgr_vis_size(adev, mm);
>> -    }
>>       amdgpu_vram_mgr_do_reserve(man);
>> -    spin_unlock(&mgr->lock);
>> +
>> +    drm_buddy_free_list(mm, &node->blocks);
>> +    mutex_unlock(&mgr->lock);
>>         atomic64_sub(vis_usage, &mgr->vis_usage);
>>         ttm_resource_fini(man, res);
>> -    kvfree(node);
>> +    kfree(node);
>>   }
>>     /**
>> @@ -648,13 +679,22 @@ static void amdgpu_vram_mgr_debug(struct 
>> ttm_resource_manager *man,
>>                     struct drm_printer *printer)
>>   {
>>       struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
>> +    struct drm_buddy *mm = &mgr->mm;
>> +    struct drm_buddy_block *block;
>>         drm_printf(printer, "  vis usage:%llu\n",
>>              amdgpu_vram_mgr_vis_usage(mgr));
>>   -    spin_lock(&mgr->lock);
>> -    drm_mm_print(&mgr->mm, printer);
>> -    spin_unlock(&mgr->lock);
>> +    mutex_lock(&mgr->lock);
>> +    drm_printf(printer, "default_page_size: %lluKiB\n",
>> +           mgr->default_page_size >> 10);
>> +
>> +    drm_buddy_print(mm, printer);
>> +
>> +    drm_printf(printer, "reserved:\n");
>> +    list_for_each_entry(block, &mgr->reserved_pages, link)
>> +        drm_buddy_block_print(mm, block, printer);
>> +    mutex_unlock(&mgr->lock);
>>   }
>>     static const struct ttm_resource_manager_func 
>> amdgpu_vram_mgr_func = {
>> @@ -674,16 +714,21 @@ int amdgpu_vram_mgr_init(struct amdgpu_device 
>> *adev)
>>   {
>>       struct amdgpu_vram_mgr *mgr = &adev->mman.vram_mgr;
>>       struct ttm_resource_manager *man = &mgr->manager;
>> +    int err;
>>         ttm_resource_manager_init(man, &adev->mman.bdev,
>>                     adev->gmc.real_vram_size);
>>         man->func = &amdgpu_vram_mgr_func;
>>   -    drm_mm_init(&mgr->mm, 0, man->size >> PAGE_SHIFT);
>> -    spin_lock_init(&mgr->lock);
>> +    err = drm_buddy_init(&mgr->mm, man->size, PAGE_SIZE);
>> +    if (err)
>> +        return err;
>> +
>> +    mutex_init(&mgr->lock);
>>       INIT_LIST_HEAD(&mgr->reservations_pending);
>>       INIT_LIST_HEAD(&mgr->reserved_pages);
>> +    mgr->default_page_size = PAGE_SIZE;
>>         ttm_set_driver_manager(&adev->mman.bdev, TTM_PL_VRAM, 
>> &mgr->manager);
>>       ttm_resource_manager_set_used(man, true);
>> @@ -711,16 +756,16 @@ void amdgpu_vram_mgr_fini(struct amdgpu_device 
>> *adev)
>>       if (ret)
>>           return;
>>   -    spin_lock(&mgr->lock);
>> +    mutex_lock(&mgr->lock);
>>       list_for_each_entry_safe(rsv, temp, &mgr->reservations_pending, 
>> node)
>>           kfree(rsv);
>>         list_for_each_entry_safe(rsv, temp, &mgr->reserved_pages, 
>> node) {
>> -        drm_mm_remove_node(&rsv->mm_node);
>> +        drm_buddy_free_list(&mgr->mm, &rsv->block);
>>           kfree(rsv);
>>       }
>> -    drm_mm_takedown(&mgr->mm);
>> -    spin_unlock(&mgr->lock);
>> +    drm_buddy_fini(&mgr->mm);
>> +    mutex_unlock(&mgr->lock);
>>         ttm_resource_manager_cleanup(man);
>>       ttm_set_driver_manager(&adev->mman.bdev, TTM_PL_VRAM, NULL);
>>
>> base-commit: a678f97326454b60ffbbde6abf52d23997d71a27
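[Editor's note on the amdgpu_res_first() rework in the patch quoted above: the resource cursor now walks a linked list of variable-sized buddy blocks instead of an array of equal-sized drm_mm nodes, skipping whole blocks until it reaches the requested byte offset. Below is a toy userspace model of that offset-skipping logic, using an array rather than a list and illustrative names only; it is not the kernel code.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy stand-in for a drm_buddy_block: start and size in bytes. */
struct toy_block {
	uint64_t start;
	uint64_t size;
};

/* Toy stand-in for amdgpu_res_cursor. */
struct toy_cursor {
	uint64_t start;     /* absolute start of the current chunk */
	uint64_t size;      /* usable bytes in the current chunk */
	uint64_t remaining; /* bytes left in the whole request */
	size_t idx;         /* index of the current block */
};

/* Model of amdgpu_res_first(): skip blocks wholly before the
 * requested offset, then point the cursor into the first block
 * that contains it. */
static void toy_res_first(const struct toy_block *blocks, size_t n,
			  uint64_t start, uint64_t size,
			  struct toy_cursor *cur)
{
	size_t i = 0;

	while (i + 1 < n && start >= blocks[i].size) {
		start -= blocks[i].size;
		i++;
	}

	cur->start = blocks[i].start + start;
	cur->size = (blocks[i].size - start < size) ?
		    blocks[i].size - start : size;
	cur->remaining = size;
	cur->idx = i;
}
```

Because buddy blocks can differ in size, the walk subtracts each block's own size rather than advancing by a fixed per-node page count as the old drm_mm-based cursor did.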
Paul Menzel March 23, 2022, 8:10 a.m. UTC | #4
Dear Christian,


Am 23.03.22 um 08:42 schrieb Christian König:

> Am 23.03.22 um 07:42 schrieb Paul Menzel:

>> Am 23.03.22 um 07:25 schrieb Arunpravin Paneer Selvam:
>>> - Remove drm_mm references and replace with drm buddy functionalities
>>
>> The commit message summary to me suggested, you can somehow use both 
>> allocators now. Two suggestions below:
>>
>> 1.  Switch to drm buddy allocator
>> 2.  Use drm buddy allocator
>>
>>> - Add res cursor support for drm buddy
>>
>> As an allocator switch sounds invasive, could you please extend the 
>> commit message, briefly describing the current situation, saying what 
>> the downsides are, and why the buddy allocator is “better”.
> 
> Well, Paul please stop bothering developers with those requests.
> 
> It's my job as maintainer to supervise the commit messages and it is 
> certainly NOT required to explain all the details of the current 
> situation in a commit message. That is just overkill.

I did not request all the details, and I think my requests are totally 
reasonable. But let’s change the perspective. If there were not any AMD 
graphics drivers bug, I would have never needed to look at the code and 
deal with it. Unfortunately the AMD graphics driver situation – which 
improved a lot in recent years – with no public documentation, 
proprietary firmware and complex devices is still not optimal, and a lot 
of bugs get reported, and I am also hit by bugs, taking time to deal 
with them, and maybe reporting and helping to analyze them. So to keep 
your wording, if you would stop bothering users with bugs and requesting 
their help in fixing them – asking the user to bisect the issue is often 
the first thing. Actually it should not be unreasonable for customers 
buying an AMD device to expect to get bug-free drivers. It's a strange 
and sad fact that the software industry succeeded in swaying that valid 
expectation, and customers now accept that they need to regularly 
install software updates, and do not get, for example, a price reduction 
when there are bugs.

Also, as stated everywhere, reviewer time is scarce, so commit authors 
should make it easy to attract new folks.

> A simple note that we are switching from the drm_mm backend to the buddy 
> backend is sufficient, and that is exactly what the commit message is 
> saying here.

Sorry, I disagree. The motivation needs to be part of the commit 
message. For example see recent discussion on the LWN article 
*Donenfeld: Random number generator enhancements for Linux 5.17 and 
5.18* [1].

How much the commit message should be extended, I do not know, but the 
current state is insufficient (too terse).


Kind regards,

Paul


[1]: https://lwn.net/Articles/888413/
      "Donenfeld: Random number generator enhancements for Linux 5.17 
and 5.18"
Christian König March 23, 2022, 8:18 a.m. UTC | #5
Hi Paul,

Am 23.03.22 um 09:10 schrieb Paul Menzel:
> Dear Christian,
>
>
> Am 23.03.22 um 08:42 schrieb Christian König:
>
>> Am 23.03.22 um 07:42 schrieb Paul Menzel:
>
>>> Am 23.03.22 um 07:25 schrieb Arunpravin Paneer Selvam:
>>>> - Remove drm_mm references and replace with drm buddy functionalities
>>>
>>> The commit message summary to me suggested, you can somehow use both 
>>> allocators now. Two suggestions below:
>>>
>>> 1.  Switch to drm buddy allocator
>>> 2.  Use drm buddy alllocator
>>>
>>>> - Add res cursor support for drm buddy
>>>
>>> As an allocator switch sounds invasive, could you please extend the 
>>> commit message, briefly describing the current situation, saying 
>>> what the downsides are, and why the buddy allocator is “better”.
>>
>> Well, Paul please stop bothering developers with those requests.
>>
>> It's my job as maintainer to supervise the commit messages and it is 
>> certainly NOT required to explain all the details of the current 
>> situation in a commit message. That is just overkill.
>
> I did not request all the details, and I think my requests are totally 
> reasonable. But let’s change the perspective. If there were not any 
> AMD graphics drivers bug, I would have never needed to look at the 
> code and deal with it. Unfortunately the AMD graphics driver situation 
> – which improved a lot in recent years – with no public documentation, 
> proprietary firmware and complex devices is still not optimal, and a 
> lot of bugs get reported, and I am also hit by bugs, taking time to 
> deal with them, and maybe reporting and helping to analyze them. So to 
> keep your wording, if you would stop bothering users with bugs and 
> requesting their help in fixing them – asking the user to bisect the 
> issue is often the first thing. Actually it should not be unreasonable 
> for customers buying an AMD device to expect to get bug-free drivers. 
> It’s a strange and sad fact that the software industry succeeded in 
> subverting that valid expectation: customers now expect that they need 
> to regularly install software updates, and do not get, for example, a 
> price reduction when there are bugs.
>
> Also, as stated everywhere, reviewer time is scarce, so commit authors 
> should make it easy to attract new folks.
>
>> A simple note that we are switching from the drm_mm backend to the 
>> buddy backend is sufficient, and that is exactly what the commit 
>> message is saying here.
>
> Sorry, I disagree. The motivation needs to be part of the commit 
> message. For example see recent discussion on the LWN article 
> *Donenfeld: Random number generator enhancements for Linux 5.17 and 
> 5.18* [1].
>
> How much the commit message should be extended, I do not know, but the 
> current state is insufficient (too terse).

Well, the key point is that it's not for you to judge that.

If you want to complain about the commit message then come to me with 
that and don't request information which isn't supposed to be publicly 
available.

So to make it clear: The information is intentionally held back and not 
made public.

Regards,
Christian.

>
>
> Kind regards,
>
> Paul
>
>
> [1]: 
> https://lwn.net/Articles/888413/
>      "Donenfeld: Random number generator enhancements for Linux 5.17 
> and 5.18"
kernel test robot March 23, 2022, 11:26 a.m. UTC | #6
Hi Arunpravin,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on a678f97326454b60ffbbde6abf52d23997d71a27]

url:    https://github.com/0day-ci/linux/commits/Arunpravin-Paneer-Selvam/drm-amdgpu-add-drm-buddy-support-to-amdgpu/20220323-142749
base:   a678f97326454b60ffbbde6abf52d23997d71a27
config: arc-allyesconfig (https://download.01.org/0day-ci/archive/20220323/202203231911.crbWBIZj-lkp@intel.com/config)
compiler: arceb-elf-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/5aa85728d353f9bcca7e25e17f800d014d77dee2
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Arunpravin-Paneer-Selvam/drm-amdgpu-add-drm-buddy-support-to-amdgpu/20220323-142749
        git checkout 5aa85728d353f9bcca7e25e17f800d014d77dee2
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=arc SHELL=/bin/bash

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   In file included from drivers/gpu/drm/amd/amdgpu/amdgpu.h:73,
                    from drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c:29:
>> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h:29:10: fatal error: amdgpu_vram_mgr.h: No such file or directory
      29 | #include "amdgpu_vram_mgr.h"
         |          ^~~~~~~~~~~~~~~~~~~
   compilation terminated.
--
   In file included from drivers/gpu/drm/amd/amdgpu/../amdgpu/amdgpu.h:73,
                    from drivers/gpu/drm/amd/amdgpu/../pm/swsmu/smu11/arcturus_ppt.c:27:
>> drivers/gpu/drm/amd/amdgpu/../amdgpu/amdgpu_ttm.h:29:10: fatal error: amdgpu_vram_mgr.h: No such file or directory
      29 | #include "amdgpu_vram_mgr.h"
         |          ^~~~~~~~~~~~~~~~~~~
   compilation terminated.
--
   In file included from drivers/gpu/drm/amd/amdgpu/../display/dmub/dmub_srv.h:67,
                    from drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm.c:35:
   drivers/gpu/drm/amd/amdgpu/../display/dmub/inc/dmub_cmd.h: In function 'dmub_rb_flush_pending':
   drivers/gpu/drm/amd/amdgpu/../display/dmub/inc/dmub_cmd.h:3049:26: warning: variable 'temp' set but not used [-Wunused-but-set-variable]
    3049 |                 uint64_t temp;
         |                          ^~~~
   In file included from drivers/gpu/drm/amd/amdgpu/../amdgpu/amdgpu.h:73,
                    from drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm.c:44:
   drivers/gpu/drm/amd/amdgpu/../amdgpu/amdgpu_ttm.h: At top level:
>> drivers/gpu/drm/amd/amdgpu/../amdgpu/amdgpu_ttm.h:29:10: fatal error: amdgpu_vram_mgr.h: No such file or directory
      29 | #include "amdgpu_vram_mgr.h"
         |          ^~~~~~~~~~~~~~~~~~~
   compilation terminated.


vim +29 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h

    26	
    27	#include <linux/dma-direction.h>
    28	#include <drm/gpu_scheduler.h>
  > 29	#include "amdgpu_vram_mgr.h"
    30	#include "amdgpu.h"
    31
Daniel Stone March 23, 2022, 2 p.m. UTC | #7
On Wed, 23 Mar 2022 at 08:19, Christian König <christian.koenig@amd.com> wrote:
> Am 23.03.22 um 09:10 schrieb Paul Menzel:
> > Sorry, I disagree. The motivation needs to be part of the commit
> > message. For example see recent discussion on the LWN article
> > *Donenfeld: Random number generator enhancements for Linux 5.17 and
> > 5.18* [1].
> >
> > How much the commit message should be extended, I do not know, but the
> > current state is insufficient (too terse).
>
> Well, the key point is that it's not for you to judge that.
>
> If you want to complain about the commit message then come to me with
> that and don't request information which isn't supposed to be publicly
> available.
>
> So to make it clear: The information is intentionally held back and not
> made public.

In that case, the code isn't suitable to be merged into upstream
trees; it can be resubmitted when it can be explained.

Cheers,
Daniel
Alex Deucher March 23, 2022, 2:42 p.m. UTC | #8
On Wed, Mar 23, 2022 at 10:00 AM Daniel Stone <daniel@fooishbar.org> wrote:
>
> On Wed, 23 Mar 2022 at 08:19, Christian König <christian.koenig@amd.com> wrote:
> > Am 23.03.22 um 09:10 schrieb Paul Menzel:
> > > Sorry, I disagree. The motivation needs to be part of the commit
> > > message. For example see recent discussion on the LWN article
> > > *Donenfeld: Random number generator enhancements for Linux 5.17 and
> > > 5.18* [1].
> > >
> > > How much the commit message should be extended, I do not know, but the
> > > current state is insufficient (too terse).
> >
> > Well, the key point is that it's not for you to judge that.
> >
> > If you want to complain about the commit message then come to me with
> > that and don't request information which isn't supposed to be publicly
> > available.
> >
> > So to make it clear: The information is intentionally held back and not
> > made public.
>
> In that case, the code isn't suitable to be merged into upstream
> trees; it can be resubmitted when it can be explained.

So you are saying we need to publish the problematic RTL to be able to
fix a HW bug in the kernel?  That seems a little unreasonable.  Also,
links to internal documents or bug trackers don't provide much value
to the community since they can't access them.  In general, adding
internal documents to commit messages is frowned on.

Alex
Daniel Stone March 23, 2022, 3:03 p.m. UTC | #9
Hi Alex,

On Wed, 23 Mar 2022 at 14:42, Alex Deucher <alexdeucher@gmail.com> wrote:
> On Wed, Mar 23, 2022 at 10:00 AM Daniel Stone <daniel@fooishbar.org> wrote:
> > On Wed, 23 Mar 2022 at 08:19, Christian König <christian.koenig@amd.com> wrote:
> > > Well, the key point is that it's not for you to judge that.
> > >
> > > If you want to complain about the commit message then come to me with
> > > that and don't request information which isn't supposed to be publicly
> > > available.
> > >
> > > So to make it clear: The information is intentionally held back and not
> > > made public.
> >
> > In that case, the code isn't suitable to be merged into upstream
> > trees; it can be resubmitted when it can be explained.
>
> So you are saying we need to publish the problematic RTL to be able to
> fix a HW bug in the kernel?  That seems a little unreasonable.  Also,
> links to internal documents or bug trackers don't provide much value
> to the community since they can't access them.  In general, adding
> internal documents to commit messages is frowned on.

That's not what anyone's saying here ...

No-one's demanding AMD publish RTL, or internal design docs, or
hardware specs, or URLs to JIRA tickets no-one can access.

This is a large and invasive commit with pretty big ramifications;
containing exactly two lines of commit message, one of which just
duplicates the subject.

It cannot be the case that it's completely impossible to provide any
justification, background, or details, about this commit being made.
Unless, of course, it's to fix a non-public security issue, that is
reasonable justification for eliding some of the details. But then
again, 'huge change which is very deliberately opaque' is a really
good way to draw a lot of attention to the commit, and it would be
better to provide more detail about the change to help it slip under
the radar.

If dri-devel@ isn't allowed to inquire about patches which are posted,
then CCing the list is just a façade; might as well just do it all
internally and periodically dump out pull requests.

Cheers,
Daniel
Alex Deucher March 23, 2022, 3:14 p.m. UTC | #10
On Wed, Mar 23, 2022 at 11:04 AM Daniel Stone <daniel@fooishbar.org> wrote:
>
> Hi Alex,
>
> On Wed, 23 Mar 2022 at 14:42, Alex Deucher <alexdeucher@gmail.com> wrote:
> > On Wed, Mar 23, 2022 at 10:00 AM Daniel Stone <daniel@fooishbar.org> wrote:
> > > On Wed, 23 Mar 2022 at 08:19, Christian König <christian.koenig@amd.com> wrote:
> > > > Well, the key point is that it's not for you to judge that.
> > > >
> > > > If you want to complain about the commit message then come to me with
> > > > that and don't request information which isn't supposed to be publicly
> > > > available.
> > > >
> > > > So to make it clear: The information is intentionally held back and not
> > > > made public.
> > >
> > > In that case, the code isn't suitable to be merged into upstream
> > > trees; it can be resubmitted when it can be explained.
> >
> > So you are saying we need to publish the problematic RTL to be able to
> > fix a HW bug in the kernel?  That seems a little unreasonable.  Also,
> > links to internal documents or bug trackers don't provide much value
> > to the community since they can't access them.  In general, adding
> > internal documents to commit messages is frowned on.
>
> That's not what anyone's saying here ...
>
> No-one's demanding AMD publish RTL, or internal design docs, or
> hardware specs, or URLs to JIRA tickets no-one can access.
>
> This is a large and invasive commit with pretty big ramifications;
> containing exactly two lines of commit message, one of which just
> duplicates the subject.
>
> It cannot be the case that it's completely impossible to provide any
> justification, background, or details, about this commit being made.
> Unless, of course, it's to fix a non-public security issue, that is
> reasonable justification for eliding some of the details. But then
> again, 'huge change which is very deliberately opaque' is a really
> good way to draw a lot of attention to the commit, and it would be
> better to provide more detail about the change to help it slip under
> the radar.
>
> If dri-devel@ isn't allowed to inquire about patches which are posted,
> then CCing the list is just a façade; might as well just do it all
> internally and periodically dump out pull requests.

I think we are in agreement. I think the withheld information
Christian was referring to was on another thread with Christian and
Paul discussing a workaround for a hardware bug:
https://www.spinics.net/lists/amd-gfx/msg75908.html

Alex
Christian König March 23, 2022, 3:19 p.m. UTC | #11
Am 23.03.22 um 15:00 schrieb Daniel Stone:
> On Wed, 23 Mar 2022 at 08:19, Christian König <christian.koenig@amd.com> wrote:
>> Am 23.03.22 um 09:10 schrieb Paul Menzel:
>>> Sorry, I disagree. The motivation needs to be part of the commit
>>> message. For example see recent discussion on the LWN article
>>> *Donenfeld: Random number generator enhancements for Linux 5.17 and
>>> 5.18* [1].
>>>
>>> How much the commit message should be extended, I do not know, but the
>>> current state is insufficient (too terse).
>> Well, the key point is that it's not for you to judge that.
>>
>> If you want to complain about the commit message then come to me with
>> that and don't request information which isn't supposed to be publicly
>> available.
>>
>> So to make it clear: The information is intentionally held back and not
>> made public.
> In that case, the code isn't suitable to be merged into upstream
> trees; it can be resubmitted when it can be explained.

Well, what Paul is requesting here is business information, not 
technical information.

In other words, we have already explained why it is technically 
necessary, which is sufficient.

Regards,
Christian.

>
> Cheers,
> Daniel
Daniel Stone March 23, 2022, 3:24 p.m. UTC | #12
On Wed, 23 Mar 2022 at 15:14, Alex Deucher <alexdeucher@gmail.com> wrote:
> On Wed, Mar 23, 2022 at 11:04 AM Daniel Stone <daniel@fooishbar.org> wrote:
> > That's not what anyone's saying here ...
> >
> > No-one's demanding AMD publish RTL, or internal design docs, or
> > hardware specs, or URLs to JIRA tickets no-one can access.
> >
> > This is a large and invasive commit with pretty big ramifications;
> > containing exactly two lines of commit message, one of which just
> > duplicates the subject.
> >
> > It cannot be the case that it's completely impossible to provide any
> > justification, background, or details, about this commit being made.
> > Unless, of course, it's to fix a non-public security issue, that is
> > reasonable justification for eliding some of the details. But then
> > again, 'huge change which is very deliberately opaque' is a really
> > good way to draw a lot of attention to the commit, and it would be
> > better to provide more detail about the change to help it slip under
> > the radar.
> >
> > If dri-devel@ isn't allowed to inquire about patches which are posted,
> > then CCing the list is just a façade; might as well just do it all
> > internally and periodically dump out pull requests.
>
> I think we are in agreement. I think the withheld information
> Christian was referring to was on another thread with Christian and
> Paul discussing a workaround for a hardware bug:
> https://www.spinics.net/lists/amd-gfx/msg75908.html

Right, that definitely seems like some crossed wires. I don't see
anything wrong with that commit at all: the commit message and a
comment notes that there is a hardware issue preventing Raven from
being able to do TMZ+GTT, and the code does the very straightforward
and obvious thing to ensure that on VCN 1.0, any TMZ buffer must be
VRAM-placed.

This one, on the other hand, is much less clear ...

Cheers,
Daniel
Christian König March 23, 2022, 3:32 p.m. UTC | #13
Am 23.03.22 um 16:24 schrieb Daniel Stone:
> On Wed, 23 Mar 2022 at 15:14, Alex Deucher <alexdeucher@gmail.com> wrote:
>> On Wed, Mar 23, 2022 at 11:04 AM Daniel Stone <daniel@fooishbar.org> wrote:
>>> That's not what anyone's saying here ...
>>>
>>> No-one's demanding AMD publish RTL, or internal design docs, or
>>> hardware specs, or URLs to JIRA tickets no-one can access.
>>>
>>> This is a large and invasive commit with pretty big ramifications;
>>> containing exactly two lines of commit message, one of which just
>>> duplicates the subject.
>>>
>>> It cannot be the case that it's completely impossible to provide any
>>> justification, background, or details, about this commit being made.
>>> Unless, of course, it's to fix a non-public security issue, that is
>>> reasonable justification for eliding some of the details. But then
>>> again, 'huge change which is very deliberately opaque' is a really
>>> good way to draw a lot of attention to the commit, and it would be
>>> better to provide more detail about the change to help it slip under
>>> the radar.
>>>
>>> If dri-devel@ isn't allowed to inquire about patches which are posted,
>>> then CCing the list is just a façade; might as well just do it all
>>> internally and periodically dump out pull requests.
>> I think we are in agreement. I think the withheld information
>> Christian was referring to was on another thread with Christian and
>> Paul discussing a workaround for a hardware bug:
>> https://www.spinics.net/lists/amd-gfx/msg75908.html
> Right, that definitely seems like some crossed wires. I don't see
> anything wrong with that commit at all: the commit message and a
> comment notes that there is a hardware issue preventing Raven from
> being able to do TMZ+GTT, and the code does the very straightforward
> and obvious thing to ensure that on VCN 1.0, any TMZ buffer must be
> VRAM-placed.
>
> This one, on the other hand, is much less clear ...

Yes, completely agree. I mean a good bunch of comments on commit 
messages are certainly valid and we could improve them.

But this patch here was worked on by both AMD and Intel developers, 
where both sides, and I think even people from other companies, 
perfectly understand the why, what, how, etc.

When somebody now comes along and asks for a whole explanation of the 
context of why we do it, that sounds really strange to me.

Thanks for jumping in here,
Christian.

>
> Cheers,
> Daniel
Daniel Vetter March 24, 2022, 10:30 a.m. UTC | #14
On Wed, 23 Mar 2022 at 16:32, Christian König <christian.koenig@amd.com> wrote:
>
> Am 23.03.22 um 16:24 schrieb Daniel Stone:
> > On Wed, 23 Mar 2022 at 15:14, Alex Deucher <alexdeucher@gmail.com> wrote:
> >> On Wed, Mar 23, 2022 at 11:04 AM Daniel Stone <daniel@fooishbar.org> wrote:
> >>> That's not what anyone's saying here ...
> >>>
> >>> No-one's demanding AMD publish RTL, or internal design docs, or
> >>> hardware specs, or URLs to JIRA tickets no-one can access.
> >>>
> >>> This is a large and invasive commit with pretty big ramifications;
> >>> containing exactly two lines of commit message, one of which just
> >>> duplicates the subject.
> >>>
> >>> It cannot be the case that it's completely impossible to provide any
> >>> justification, background, or details, about this commit being made.
> >>> Unless, of course, it's to fix a non-public security issue, that is
> >>> reasonable justification for eliding some of the details. But then
> >>> again, 'huge change which is very deliberately opaque' is a really
> >>> good way to draw a lot of attention to the commit, and it would be
> >>> better to provide more detail about the change to help it slip under
> >>> the radar.
> >>>
> >>> If dri-devel@ isn't allowed to inquire about patches which are posted,
> >>> then CCing the list is just a façade; might as well just do it all
> >>> internally and periodically dump out pull requests.
> >> I think we are in agreement. I think the withheld information
> >> Christian was referring to was on another thread with Christian and
> >> Paul discussing a workaround for a hardware bug:
> >> https://www.spinics.net/lists/amd-gfx/msg75908.html
> > Right, that definitely seems like some crossed wires. I don't see
> > anything wrong with that commit at all: the commit message and a
> > comment notes that there is a hardware issue preventing Raven from
> > being able to do TMZ+GTT, and the code does the very straightforward
> > and obvious thing to ensure that on VCN 1.0, any TMZ buffer must be
> > VRAM-placed.
> >
> > This one, on the other hand, is much less clear ...
>
> Yes, completely agree. I mean a good bunch of comments on commit
> messages are certainly valid and we could improve them.
>
> But this patch here was worked on by both AMD and Intel developers,
> where both sides, and I think even people from other companies,
> perfectly understand the why, what, how, etc.
>
> When somebody now comes along and asks for a whole explanation of the
> context of why we do it, that sounds really strange to me.

Yeah, GPUs are using pages a lot more like the CPU (bigger pages are a 
benefit, but not required, hence the buddy allocator to coalesce them), 
and the extremely funny contiguous allocations with bonkers 
requirements aren't needed anymore (which was the speciality of 
drm_mm.c). That is why both i915 and amdgpu move over to this new buddy 
allocator for managing vram.
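[Editor's illustration: for readers unfamiliar with the scheme being discussed, the core idea of a buddy allocator — split power-of-two blocks on demand, coalesce a freed block with its "buddy" — can be sketched in a few lines. This is a userspace Python toy, not the kernel's drm_buddy.c; the `BuddyAllocator` name and its structure are made up for this sketch.]

```python
# Toy buddy allocator: sizes are page counts and must be powers of two.
# Illustration only -- the real implementation is drivers/gpu/drm/drm_buddy.c.
class BuddyAllocator:
    def __init__(self, size):
        assert size & (size - 1) == 0, "size must be a power of two"
        self.size = size
        self.free = {size: [0]}  # block size -> list of free offsets

    def alloc(self, size):
        # Round the request up to a power of two.
        want = 1
        while want < size:
            want <<= 1
        # Find the smallest free block >= want, splitting larger ones.
        cur = want
        while cur <= self.size and not self.free.get(cur):
            cur <<= 1
        if cur > self.size:
            raise MemoryError("out of space")
        offset = self.free[cur].pop()
        while cur > want:  # split down, keeping the upper buddy free
            cur >>= 1
            self.free.setdefault(cur, []).append(offset + cur)
        return offset, want

    def free_block(self, offset, size):
        # Coalesce with the buddy whenever the buddy is also free.
        while size < self.size:
            buddy = offset ^ size
            if buddy in self.free.get(size, []):
                self.free[size].remove(buddy)
                offset = min(offset, buddy)
                size <<= 1
            else:
                break
        self.free.setdefault(size, []).append(offset)
```

A request for 3 pages from a 16-page pool rounds up to 4 and splits the 16-block twice; freeing everything coalesces back to a single 16-page block, which is exactly the "coalesce them" property mentioned above.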

I guess that could be added to the commit message, but also it's kinda
well known - the i915 patches also didn't explain why we want to
manage our vram with a buddy allocator (I think some of the earlier
versions explained it a bit, but the version with ttm integration that
landed didnt).

But yeah the confusing comments about hiding stuff that somehow
spilled over from other discussions into this didn't help :-/
-Daniel

> Thanks for jumping in here,
> Christian.
>
> >
> > Cheers,
> > Daniel
>
Paul Menzel March 25, 2022, 3:56 p.m. UTC | #15
Dear Christian, dear Daniel, dear Alex,


Am 23.03.22 um 16:32 schrieb Christian König:
> Am 23.03.22 um 16:24 schrieb Daniel Stone:
>> On Wed, 23 Mar 2022 at 15:14, Alex Deucher <alexdeucher@gmail.com> wrote:
>>> On Wed, Mar 23, 2022 at 11:04 AM Daniel Stone <daniel@fooishbar.org> 
>>> wrote:
>>>> That's not what anyone's saying here ...
>>>>
>>>> No-one's demanding AMD publish RTL, or internal design docs, or
>>>> hardware specs, or URLs to JIRA tickets no-one can access.
>>>>
>>>> This is a large and invasive commit with pretty big ramifications;
>>>> containing exactly two lines of commit message, one of which just
>>>> duplicates the subject.
>>>>
>>>> It cannot be the case that it's completely impossible to provide any
>>>> justification, background, or details, about this commit being made.
>>>> Unless, of course, it's to fix a non-public security issue, that is
>>>> reasonable justification for eliding some of the details. But then
>>>> again, 'huge change which is very deliberately opaque' is a really
>>>> good way to draw a lot of attention to the commit, and it would be
>>>> better to provide more detail about the change to help it slip under
>>>> the radar.
>>>>
>>>> If dri-devel@ isn't allowed to inquire about patches which are posted,
>>>> then CCing the list is just a façade; might as well just do it all
>>>> internally and periodically dump out pull requests.
>>> I think we are in agreement. I think the withheld information
>>> Christian was referring to was on another thread with Christian and
>>> Paul discussing a workaround for a hardware bug:
>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.spinics.net%2Flists%2Famd-gfx%2Fmsg75908.html&amp;data=04%7C01%7Cchristian.koenig%40amd.com%7C6a3f2815d83b4872577008da0ce1347a%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637836458652370599%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&amp;sdata=QtNB0XHMhTgH%2FNHMwF23Qn%2BgSdYyHJSenbpP%2FHG%2BkxE%3D&amp;reserved=0 

(Thank you Microsoft for keeping us safe.)

I guess it proves how assuming what other people should know or have 
read, especially when crossing message threads, causes confusion and 
misunderstandings.

>> Right, that definitely seems like some crossed wires. I don't see
>> anything wrong with that commit at all: the commit message and a
>> comment notes that there is a hardware issue preventing Raven from
>> being able to do TMZ+GTT, and the code does the very straightforward
>> and obvious thing to ensure that on VCN 1.0, any TMZ buffer must be
>> VRAM-placed.

My questions were:

> Where is that documented, and how can this be reproduced? 

Shouldn’t these be answered by the commit message? In five(?) years, 
somebody, maybe even with access to the currently non-public documents, 
may find a fault in the commit, and would be helped by having a 
document/errata number to look at. To verify the fix, the developer 
would need a method to reproduce the error, so why not just share it?

Also, I assume that workarounds often come with downsides, as otherwise 
it would have been programmed like this from the beginning, or instead 
of “workaround” it would be called “improvement”. Shouldn’t that also be 
answered?

So totally made-up example:

Currently, there is a graphics corruption running X on system Y. This is 
caused by a hardware bug in Raven ASIC (details internal document 
#NNNN/AMD-Jira #N), and can be worked around by [what is in the commit 
message].

The workaround does not affect the performance, and testing X shows the 
error is fixed.

>> This one, on the other hand, is much less clear ...
> 
> Yes, completely agree. I mean a good bunch of comments on commit 
> messages are certainly valid and we could improve them.

That’d be great.

> But this patch here was worked on by both AMD and Intel developers. 
> Where both sides and I think even people from other companies perfectly 
> understands why, what, how etc...
> 
> When now somebody comes along and asks for a whole explanation of the 
> context why we do it then that sounds really strange to me.

The motivation should be part of the commit message. I didn’t mean that 
anyone should rewrite the buddy memory allocator Wikipedia article [1]. 
But the commit message at hand for switching the allocator is 
definitely too terse.


Kind regards,

Paul


[1]: https://en.wikipedia.org/wiki/Buddy_memory_allocation
Paneer Selvam, Arunpravin March 29, 2022, 11:19 a.m. UTC | #16
> -----Original Message-----
> From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> On Behalf Of Christian König
> Sent: Wednesday, March 23, 2022 1:07 PM
> To: Paneer Selvam, Arunpravin <Arunpravin.PaneerSelvam@amd.com>; intel-gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org; amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; matthew.auld@intel.com; daniel@ffwll.ch; Koenig, Christian <Christian.Koenig@amd.com>
> Subject: Re: [PATCH v11] drm/amdgpu: add drm buddy support to amdgpu
> 
> Am 23.03.22 um 07:25 schrieb Arunpravin Paneer Selvam:
>> [SNIP]
>> @@ -415,48 +409,86 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
>>   		goto error_fini;
>>   	}
>>   
>> -	mode = DRM_MM_INSERT_BEST;
>> +	INIT_LIST_HEAD(&node->blocks);
>> +
>>   	if (place->flags & TTM_PL_FLAG_TOPDOWN)
>> -		mode = DRM_MM_INSERT_HIGH;
>> +		node->flags |= DRM_BUDDY_TOPDOWN_ALLOCATION;
>>   
>> -	pages_left = node->base.num_pages;
>> +	if (place->fpfn || lpfn != man->size >> PAGE_SHIFT)
>> +		/* Allocate blocks in desired range */
>> +		node->flags |= DRM_BUDDY_RANGE_ALLOCATION;
>>   
>> -	/* Limit maximum size to 2GB due to SG table limitations */
>> -	pages = min(pages_left, 2UL << (30 - PAGE_SHIFT));
>> +	BUG_ON(!node->base.num_pages);
> 
> Please drop this BUG_ON(). This is not something which prevents further data corruption, so the BUG_ON() is not justified.

ok
> 
>> +	pages_left = node->base.num_pages;
>>   
>>   	i = 0;
>> -	spin_lock(&mgr->lock);
>>   	while (pages_left) {
>> -		uint32_t alignment = tbo->page_alignment;
>> +		if (tbo->page_alignment)
>> +			min_page_size = tbo->page_alignment << PAGE_SHIFT;
>> +		else
>> +			min_page_size = mgr->default_page_size;
> 
> The handling here looks extremely awkward to me.
> 
> min_page_size should be determined outside of the loop, based on default_page_size, alignment and contiguous flag.
I kept the min_page_size determination logic inside the loop for cases 
with 2GiB+ requirements. Since we now round up the size to the required 
alignment, I moved the min_page_size determination logic outside of the 
loop in v12. Please review.
> 
> Then why do you drop the lock and grab it again inside the loop? And what is "i" actually good for?
modified the lock/unlock placement in v12.

"i" tracks when there is a 2GiB+ contiguous allocation request: first 
we allocate 2GiB contiguously (due to the SG table limit) and the 
remaining pages in the next iteration, hence such a request can't be 
contiguous. We use the "i" value to decide whether to set the placement 
flag. In that case the "i" value becomes 2 and we don't set the flag below:
node->base.placement |= TTM_PL_FLAG_CONTIGUOUS;

If we don't get such requests, I will remove "i".
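[Editor's illustration: the splitting described above — each loop iteration allocates at most 2GiB worth of pages because of the SG table limit, and "i" counts the iterations — can be sketched in userspace Python. The helper name `split_into_chunks` is made up; the real loop is the one quoted from amdgpu_vram_mgr_new().]

```python
PAGE_SHIFT = 12                          # assume 4 KiB pages
SG_MAX_PAGES = 2 << (30 - PAGE_SHIFT)    # 2 GiB worth of pages (SG table limit)

def split_into_chunks(pages):
    """Mirror of the allocation loop: each iteration takes at most 2 GiB
    worth of pages; len(chunks) corresponds to the final value of "i"."""
    chunks = []
    pages_left = pages
    while pages_left:
        step = min(pages_left, SG_MAX_PAGES)
        chunks.append(step)
        pages_left -= step
    return chunks
```

A 3GiB request yields two chunks (i == 2), so the result spans two allocations and cannot be flagged TTM_PL_FLAG_CONTIGUOUS; any request up to 2GiB yields one chunk (i == 1).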

>



>> +
>> +		/* Limit maximum size to 2GB due to SG table limitations */
>> +		pages = min(pages_left, 2UL << (30 - PAGE_SHIFT));
>>   
>>   		if (pages >= pages_per_node)
>> -			alignment = pages_per_node;
>> -
>> -		r = drm_mm_insert_node_in_range(mm, &node->mm_nodes[i], pages,
>> -						alignment, 0, place->fpfn,
>> -						lpfn, mode);
>> -		if (unlikely(r)) {
>> -			if (pages > pages_per_node) {
>> -				if (is_power_of_2(pages))
>> -					pages = pages / 2;
>> -				else
>> -					pages = rounddown_pow_of_two(pages);
>> -				continue;
>> -			}
>> -			goto error_free;
>> +			min_page_size = pages_per_node << PAGE_SHIFT;
>> +
>> +		if (!is_contiguous && !IS_ALIGNED(pages, min_page_size >> PAGE_SHIFT))
>> +			is_contiguous = 1;
>> +
>> +		if (is_contiguous) {
>> +			pages = roundup_pow_of_two(pages);
>> +			min_page_size = pages << PAGE_SHIFT;
>> +
>> +			if (pages > lpfn)
>> +				lpfn = pages;
>>   		}
>>   
>> -		vis_usage += amdgpu_vram_mgr_vis_size(adev, &node->mm_nodes[i]);
>> -		amdgpu_vram_mgr_virt_start(&node->base, &node->mm_nodes[i]);
>> -		pages_left -= pages;
>> +		BUG_ON(min_page_size < mm->chunk_size);
>> +
>> +		mutex_lock(&mgr->lock);
>> +		r = drm_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT,
>> +					   (u64)lpfn << PAGE_SHIFT,
>> +					   (u64)pages << PAGE_SHIFT,
>> +					   min_page_size,
>> +					   &node->blocks,
>> +					   node->flags);
>> +		mutex_unlock(&mgr->lock);
>> +		if (unlikely(r))
>> +			goto error_free_blocks;
>> +
>>   		++i;
>>   
>>   		if (pages > pages_left)
>> -			pages = pages_left;
>> +			pages_left = 0;
>> +		else
>> +			pages_left -= pages;
>>   	}
>> -	spin_unlock(&mgr->lock);
>>   
>> -	if (i == 1)
>> +	/* Free unused pages for contiguous allocation */
>> +	if (is_contiguous) {
> 
> Well that looks really odd, why is trimming not part of
> drm_buddy_alloc_blocks() ?
we didn't place the trim function inside drm_buddy_alloc_blocks() since we
thought this function can be a generic one that any other user can call
as well. For example, now we are using it for trimming the last block
when the size is not aligned to min_page_size.
> 
>> +		u64 actual_size = (u64)node->base.num_pages << PAGE_SHIFT;
>> +
>> +		mutex_lock(&mgr->lock);
>> +		drm_buddy_block_trim(mm,
>> +				     actual_size,
>> +				     &node->blocks);
> 
> Why is the drm_buddy_block_trim() function given all the blocks and not just the last one?
modified in v12.
> 
> Regards,
> Christian.
> 
>> +		mutex_unlock(&mgr->lock);
>> +	}
>> +
>> +	list_for_each_entry(block, &node->blocks, link)
>> +		vis_usage += amdgpu_vram_mgr_vis_size(adev, block);
>> +
>> +	block = amdgpu_vram_mgr_first_block(&node->blocks);
>> +	if (!block) {
>> +		r = -EINVAL;
>> +		goto error_fini;
>> +	}
>> +
>> +	node->base.start = amdgpu_node_start(block) >> PAGE_SHIFT;
>> +
>> +	if (i == 1 && is_contiguous)
>>   		node->base.placement |= TTM_PL_FLAG_CONTIGUOUS;
>>   
>>   	if (adev->gmc.xgmi.connected_to_cpu) @@ -468,13 +500,13 @@ static 
>> int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
>>   	*res = &node->base;
>>   	return 0;
>>   
>> -error_free:
>> -	while (i--)
>> -		drm_mm_remove_node(&node->mm_nodes[i]);
>> -	spin_unlock(&mgr->lock);
>> +error_free_blocks:
>> +	mutex_lock(&mgr->lock);
>> +	drm_buddy_free_list(mm, &node->blocks);
>> +	mutex_unlock(&mgr->lock);
>>   error_fini:
>>   	ttm_resource_fini(man, &node->base);
>> -	kvfree(node);
>> +	kfree(node);
>>   
>>   	return r;
>>   }
>> @@ -490,27 +522,26 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
>>   static void amdgpu_vram_mgr_del(struct ttm_resource_manager *man,
>>   				struct ttm_resource *res)
>>   {
>> -	struct ttm_range_mgr_node *node = to_ttm_range_mgr_node(res);
>> +	struct amdgpu_vram_mgr_node *node = to_amdgpu_vram_mgr_node(res);
>>   	struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
>>   	struct amdgpu_device *adev = to_amdgpu_device(mgr);
>> +	struct drm_buddy *mm = &mgr->mm;
>> +	struct drm_buddy_block *block;
>>   	uint64_t vis_usage = 0;
>> -	unsigned i, pages;
>>   
>> -	spin_lock(&mgr->lock);
>> -	for (i = 0, pages = res->num_pages; pages;
>> -	     pages -= node->mm_nodes[i].size, ++i) {
>> -		struct drm_mm_node *mm = &node->mm_nodes[i];
>> +	mutex_lock(&mgr->lock);
>> +	list_for_each_entry(block, &node->blocks, link)
>> +		vis_usage += amdgpu_vram_mgr_vis_size(adev, block);
>>   
>> -		drm_mm_remove_node(mm);
>> -		vis_usage += amdgpu_vram_mgr_vis_size(adev, mm);
>> -	}
>>   	amdgpu_vram_mgr_do_reserve(man);
>> -	spin_unlock(&mgr->lock);
>> +
>> +	drm_buddy_free_list(mm, &node->blocks);
>> +	mutex_unlock(&mgr->lock);
>>   
>>   	atomic64_sub(vis_usage, &mgr->vis_usage);
>>   
>>   	ttm_resource_fini(man, res);
>> -	kvfree(node);
>> +	kfree(node);
>>   }
>>   
>>   /**
>> @@ -648,13 +679,22 @@ static void amdgpu_vram_mgr_debug(struct ttm_resource_manager *man,
>>   				  struct drm_printer *printer)
>>   {
>>   	struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
>> +	struct drm_buddy *mm = &mgr->mm;
>> +	struct drm_buddy_block *block;
>>   
>>   	drm_printf(printer, "  vis usage:%llu\n",
>>   		   amdgpu_vram_mgr_vis_usage(mgr));
>>   
>> -	spin_lock(&mgr->lock);
>> -	drm_mm_print(&mgr->mm, printer);
>> -	spin_unlock(&mgr->lock);
>> +	mutex_lock(&mgr->lock);
>> +	drm_printf(printer, "default_page_size: %lluKiB\n",
>> +		   mgr->default_page_size >> 10);
>> +
>> +	drm_buddy_print(mm, printer);
>> +
>> +	drm_printf(printer, "reserved:\n");
>> +	list_for_each_entry(block, &mgr->reserved_pages, link)
>> +		drm_buddy_block_print(mm, block, printer);
>> +	mutex_unlock(&mgr->lock);
>>   }
>>   
>>   static const struct ttm_resource_manager_func amdgpu_vram_mgr_func = 
>> { @@ -674,16 +714,21 @@ int amdgpu_vram_mgr_init(struct amdgpu_device *adev)
>>   {
>>   	struct amdgpu_vram_mgr *mgr = &adev->mman.vram_mgr;
>>   	struct ttm_resource_manager *man = &mgr->manager;
>> +	int err;
>>   
>>   	ttm_resource_manager_init(man, &adev->mman.bdev,
>>   				  adev->gmc.real_vram_size);
>>   
>>   	man->func = &amdgpu_vram_mgr_func;
>>   
>> -	drm_mm_init(&mgr->mm, 0, man->size >> PAGE_SHIFT);
>> -	spin_lock_init(&mgr->lock);
>> +	err = drm_buddy_init(&mgr->mm, man->size, PAGE_SIZE);
>> +	if (err)
>> +		return err;
>> +
>> +	mutex_init(&mgr->lock);
>>   	INIT_LIST_HEAD(&mgr->reservations_pending);
>>   	INIT_LIST_HEAD(&mgr->reserved_pages);
>> +	mgr->default_page_size = PAGE_SIZE;
>>   
>>   	ttm_set_driver_manager(&adev->mman.bdev, TTM_PL_VRAM, &mgr->manager);
>>   	ttm_resource_manager_set_used(man, true); @@ -711,16 +756,16 @@ 
>> void amdgpu_vram_mgr_fini(struct amdgpu_device *adev)
>>   	if (ret)
>>   		return;
>>   
>> -	spin_lock(&mgr->lock);
>> +	mutex_lock(&mgr->lock);
>>   	list_for_each_entry_safe(rsv, temp, &mgr->reservations_pending, node)
>>   		kfree(rsv);
>>   
>>   	list_for_each_entry_safe(rsv, temp, &mgr->reserved_pages, node) {
>> -		drm_mm_remove_node(&rsv->mm_node);
>> +		drm_buddy_free_list(&mgr->mm, &rsv->block);
>>   		kfree(rsv);
>>   	}
>> -	drm_mm_takedown(&mgr->mm);
>> -	spin_unlock(&mgr->lock);
>> +	drm_buddy_fini(&mgr->mm);
>> +	mutex_unlock(&mgr->lock);
>>   
>>   	ttm_resource_manager_cleanup(man);
>>   	ttm_set_driver_manager(&adev->mman.bdev, TTM_PL_VRAM, NULL);
>>
>> base-commit: a678f97326454b60ffbbde6abf52d23997d71a27
Christian König March 29, 2022, 11:24 a.m. UTC | #17
Am 29.03.22 um 13:19 schrieb Arunpravin Paneer Selvam:
> [SNIP]
>>> +	pages_left = node->base.num_pages;
>>>    
>>>    	i = 0;
>>> -	spin_lock(&mgr->lock);
>>>    	while (pages_left) {
>>> -		uint32_t alignment = tbo->page_alignment;
>>> +		if (tbo->page_alignment)
>>> +			min_page_size = tbo->page_alignment << PAGE_SHIFT;
>>> +		else
>>> +			min_page_size = mgr->default_page_size;
>> The handling here looks extremely awkward to me.
>>
>> min_page_size should be determined outside of the loop, based on default_page_size, alignment and contiguous flag.
> I kept min_page_size determine logic inside the loop for cases 2GiB+
> requirements, Since now we are round up the size to the required
> alignment, I modified the min_page_size determine logic outside of the
> loop in v12. Please review.

Ah! So do we only have the loop so that each allocation isn't bigger 
than 2GiB? If yes couldn't we instead add a max_alloc_size or something 
similar?

BTW: I strongly suggest that you rename min_page_size to min_alloc_size. 
Otherwise somebody could think that those numbers are in pages and not 
bytes.

>> Then why do you drop the lock and grab it again inside the loop? And what is "i" actually good for?
> modified the lock/unlock placement in v12.
>
> "i" is to track when there is 2GiB+ contiguous allocation request, first
> we allocate 2GiB (due to SG table limit) continuously and the remaining
> pages in the next iteration, hence this request can't be a continuous.
> To set the placement flag we make use of "i" value. In our case "i"
> value becomes 2 and we don't set the below flag.
> node->base.placement |= TTM_PL_FLAG_CONTIGUOUS;
>
> If we don't get such requests, I will remove "i".

I'm not sure if that works.

As far as I can see drm_buddy_alloc_blocks() can allocate multiple 
blocks at the same time, but i is only incremented when we loop.

So what you should do instead is to check if node->blocks just contain 
exactly one element after the allocation but before the trim.
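That single-block check could be sketched like this in userspace C; the list type and helpers below are stand-ins mirroring the kernel's struct list_head and list_is_singular(), not the actual v12 code:

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal doubly linked list, mirroring the kernel's struct list_head. */
struct list_head {
	struct list_head *prev, *next;
};

static void list_init(struct list_head *h)
{
	h->prev = h->next = h;
}

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

/* True when the list holds exactly one entry
 * (userspace stand-in for the kernel's list_is_singular()). */
static bool list_is_singular(const struct list_head *h)
{
	return h->next != h && h->next == h->prev;
}
```

With such a check the driver would set TTM_PL_FLAG_CONTIGUOUS (and trim) only when node->blocks holds exactly one block after the allocation, instead of relying on the loop counter "i".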

>>> +
>>> +		/* Limit maximum size to 2GB due to SG table limitations */
>>> +		pages = min(pages_left, 2UL << (30 - PAGE_SHIFT));
>>>    
>>>    		if (pages >= pages_per_node)
>>> -			alignment = pages_per_node;
>>> -
>>> -		r = drm_mm_insert_node_in_range(mm, &node->mm_nodes[i], pages,
>>> -						alignment, 0, place->fpfn,
>>> -						lpfn, mode);
>>> -		if (unlikely(r)) {
>>> -			if (pages > pages_per_node) {
>>> -				if (is_power_of_2(pages))
>>> -					pages = pages / 2;
>>> -				else
>>> -					pages = rounddown_pow_of_two(pages);
>>> -				continue;
>>> -			}
>>> -			goto error_free;
>>> +			min_page_size = pages_per_node << PAGE_SHIFT;
>>> +
>>> +		if (!is_contiguous && !IS_ALIGNED(pages, min_page_size >> PAGE_SHIFT))
>>> +			is_contiguous = 1;
>>> +
>>> +		if (is_contiguous) {
>>> +			pages = roundup_pow_of_two(pages);
>>> +			min_page_size = pages << PAGE_SHIFT;
>>> +
>>> +			if (pages > lpfn)
>>> +				lpfn = pages;
>>>    		}
>>>    
>>> -		vis_usage += amdgpu_vram_mgr_vis_size(adev, &node->mm_nodes[i]);
>>> -		amdgpu_vram_mgr_virt_start(&node->base, &node->mm_nodes[i]);
>>> -		pages_left -= pages;
>>> +		BUG_ON(min_page_size < mm->chunk_size);
>>> +
>>> +		mutex_lock(&mgr->lock);
>>> +		r = drm_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT,
>>> +					   (u64)lpfn << PAGE_SHIFT,
>>> +					   (u64)pages << PAGE_SHIFT,
>>> +					   min_page_size,
>>> +					   &node->blocks,
>>> +					   node->flags);
>>> +		mutex_unlock(&mgr->lock);
>>> +		if (unlikely(r))
>>> +			goto error_free_blocks;
>>> +
>>>    		++i;
>>>    
>>>    		if (pages > pages_left)
>>> -			pages = pages_left;
>>> +			pages_left = 0;
>>> +		else
>>> +			pages_left -= pages;
>>>    	}
>>> -	spin_unlock(&mgr->lock);
>>>    
>>> -	if (i == 1)
>>> +	/* Free unused pages for contiguous allocation */
>>> +	if (is_contiguous) {
>> Well that looks really odd, why is trimming not part of
>> drm_buddy_alloc_blocks() ?
> we didn't place trim function part of drm_buddy_alloc_blocks since we
> thought this function can be a generic one and it can be used by any
> other application as well. For example, now we are using it for trimming
> the last block in case of size non-alignment with min_page_size.

Good argument. Another thing I just realized is that we probably want to 
double check if we only allocated one block before the trim.

Thanks,
Christian.
Paneer Selvam, Arunpravin March 29, 2022, 4 p.m. UTC | #18
On 29/03/22 4:54 pm, Christian König wrote:
> Am 29.03.22 um 13:19 schrieb Arunpravin Paneer Selvam:
>> [SNIP]
>>>> +	pages_left = node->base.num_pages;
>>>>    
>>>>    	i = 0;
>>>> -	spin_lock(&mgr->lock);
>>>>    	while (pages_left) {
>>>> -		uint32_t alignment = tbo->page_alignment;
>>>> +		if (tbo->page_alignment)
>>>> +			min_page_size = tbo->page_alignment << PAGE_SHIFT;
>>>> +		else
>>>> +			min_page_size = mgr->default_page_size;
>>> The handling here looks extremely awkward to me.
>>>
>>> min_page_size should be determined outside of the loop, based on default_page_size, alignment and contiguous flag.
>> I kept min_page_size determine logic inside the loop for cases 2GiB+
>> requirements, Since now we are round up the size to the required
>> alignment, I modified the min_page_size determine logic outside of the
>> loop in v12. Please review.
> 
> Ah! So do we only have the loop so that each allocation isn't bigger 
> than 2GiB? If yes couldn't we instead add a max_alloc_size or something 
> similar?
yes, we have the loop to limit each allocation to no bigger than 2GiB. I
think we cannot avoid the loop since we need to allocate the remaining
pages, if any, to complete a 2GiB+ size request. In other words, in the
first iteration we limit the max size to 2GiB and in the second
iteration we allocate the left over pages, if any.
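The two-iteration behaviour described above can be sketched in plain C; this is illustrative userspace code built around the 2GiB SG-table limit from the patch, not the driver loop itself:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12UL
/* SG-table limit discussed in the thread: at most 2GiB per allocation. */
#define MAX_ALLOC_PAGES (2UL << (30 - PAGE_SHIFT))

/* Pages one loop iteration would request (illustrative only). */
static uint64_t chunk_pages(uint64_t pages_left)
{
	return pages_left < MAX_ALLOC_PAGES ? pages_left : MAX_ALLOC_PAGES;
}

/* Number of drm_buddy_alloc_blocks() calls a request of n pages needs. */
static unsigned int chunks_needed(uint64_t pages)
{
	unsigned int i = 0;

	while (pages) {
		pages -= chunk_pages(pages);
		i++;
	}
	return i;
}
```

A 3GiB request (786432 pages with 4KiB pages) takes two iterations, which is exactly why "i" ends up at 2 and the contiguous placement flag is not set in that case.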
> 
> BTW: I strongly suggest that you rename min_page_size to min_alloc_size. 
> Otherwise somebody could think that those numbers are in pages and not 
> bytes.
modified in v12
> 
>>> Then why do you drop the lock and grab it again inside the loop? And what is "i" actually good for?
>> modified the lock/unlock placement in v12.
>>
>> "i" is to track when there is 2GiB+ contiguous allocation request, first
>> we allocate 2GiB (due to SG table limit) continuously and the remaining
>> pages in the next iteration, hence this request can't be a continuous.
>> To set the placement flag we make use of "i" value. In our case "i"
>> value becomes 2 and we don't set the below flag.
>> node->base.placement |= TTM_PL_FLAG_CONTIGUOUS;
>>
>> If we don't get such requests, I will remove "i".
> 
> I'm not sure if that works.
> 
> As far as I can see drm_buddy_alloc_blocks() can allocate multiple 
> blocks at the same time, but i is only incremented when we loop.
> 
> So what you should do instead is to check if node->blocks just contain 
> exactly one element after the allocation but before the trim.
ok
> 
>>>> +
>>>> +		/* Limit maximum size to 2GB due to SG table limitations */
>>>> +		pages = min(pages_left, 2UL << (30 - PAGE_SHIFT));
>>>>    
>>>>    		if (pages >= pages_per_node)
>>>> -			alignment = pages_per_node;
>>>> -
>>>> -		r = drm_mm_insert_node_in_range(mm, &node->mm_nodes[i], pages,
>>>> -						alignment, 0, place->fpfn,
>>>> -						lpfn, mode);
>>>> -		if (unlikely(r)) {
>>>> -			if (pages > pages_per_node) {
>>>> -				if (is_power_of_2(pages))
>>>> -					pages = pages / 2;
>>>> -				else
>>>> -					pages = rounddown_pow_of_two(pages);
>>>> -				continue;
>>>> -			}
>>>> -			goto error_free;
>>>> +			min_page_size = pages_per_node << PAGE_SHIFT;
>>>> +
>>>> +		if (!is_contiguous && !IS_ALIGNED(pages, min_page_size >> PAGE_SHIFT))
>>>> +			is_contiguous = 1;
>>>> +
>>>> +		if (is_contiguous) {
>>>> +			pages = roundup_pow_of_two(pages);
>>>> +			min_page_size = pages << PAGE_SHIFT;
>>>> +
>>>> +			if (pages > lpfn)
>>>> +				lpfn = pages;
>>>>    		}
>>>>    
>>>> -		vis_usage += amdgpu_vram_mgr_vis_size(adev, &node->mm_nodes[i]);
>>>> -		amdgpu_vram_mgr_virt_start(&node->base, &node->mm_nodes[i]);
>>>> -		pages_left -= pages;
>>>> +		BUG_ON(min_page_size < mm->chunk_size);
>>>> +
>>>> +		mutex_lock(&mgr->lock);
>>>> +		r = drm_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT,
>>>> +					   (u64)lpfn << PAGE_SHIFT,
>>>> +					   (u64)pages << PAGE_SHIFT,
>>>> +					   min_page_size,
>>>> +					   &node->blocks,
>>>> +					   node->flags);
>>>> +		mutex_unlock(&mgr->lock);
>>>> +		if (unlikely(r))
>>>> +			goto error_free_blocks;
>>>> +
>>>>    		++i;
>>>>    
>>>>    		if (pages > pages_left)
>>>> -			pages = pages_left;
>>>> +			pages_left = 0;
>>>> +		else
>>>> +			pages_left -= pages;
>>>>    	}
>>>> -	spin_unlock(&mgr->lock);
>>>>    
>>>> -	if (i == 1)
>>>> +	/* Free unused pages for contiguous allocation */
>>>> +	if (is_contiguous) {
>>> Well that looks really odd, why is trimming not part of
>>> drm_buddy_alloc_blocks() ?
>> we didn't place trim function part of drm_buddy_alloc_blocks since we
>> thought this function can be a generic one and it can be used by any
>> other application as well. For example, now we are using it for trimming
>> the last block in case of size non-alignment with min_page_size.
> 
> Good argument. Another thing I just realized is that we probably want to 
> double check if we only allocated one block before the trim.
ok
> 
> Thanks,
> Christian.
>
Paneer Selvam, Arunpravin March 29, 2022, 7:18 p.m. UTC | #19
On 29/03/22 9:30 pm, Arunpravin Paneer Selvam wrote:
> 
> 
> On 29/03/22 4:54 pm, Christian König wrote:
>> Am 29.03.22 um 13:19 schrieb Arunpravin Paneer Selvam:
>>> [SNIP]
>>>>> +	pages_left = node->base.num_pages;
>>>>>    
>>>>>    	i = 0;
>>>>> -	spin_lock(&mgr->lock);
>>>>>    	while (pages_left) {
>>>>> -		uint32_t alignment = tbo->page_alignment;
>>>>> +		if (tbo->page_alignment)
>>>>> +			min_page_size = tbo->page_alignment << PAGE_SHIFT;
>>>>> +		else
>>>>> +			min_page_size = mgr->default_page_size;
>>>> The handling here looks extremely awkward to me.
>>>>
>>>> min_page_size should be determined outside of the loop, based on default_page_size, alignment and contiguous flag.
>>> I kept min_page_size determine logic inside the loop for cases 2GiB+
>>> requirements, Since now we are round up the size to the required
>>> alignment, I modified the min_page_size determine logic outside of the
>>> loop in v12. Please review.
>>
>> Ah! So do we only have the loop so that each allocation isn't bigger 
>> than 2GiB? If yes couldn't we instead add a max_alloc_size or something 
>> similar?
> yes we have the loop to limit the allocation not bigger than 2GiB, I
> think we cannot avoid the loop since we need to allocate the remaining
> pages if any, to complete the 2GiB+ size request. In other words, first
> iteration we limit the max size to 2GiB and in the second iteration we
> allocate the left over pages if any.

Hi Christian,

Here my understanding might be incorrect: should we limit the max size to
2GiB and skip all the remaining pages for a 2GiB+ request?

Thanks,
Arun
>>
>> BTW: I strongly suggest that you rename min_page_size to min_alloc_size. 
>> Otherwise somebody could think that those numbers are in pages and not 
>> bytes.
> modified in v12
>>
>>>> Then why do you drop the lock and grab it again inside the loop? And what is "i" actually good for?
>>> modified the lock/unlock placement in v12.
>>>
>>> "i" is to track when there is 2GiB+ contiguous allocation request, first
>>> we allocate 2GiB (due to SG table limit) continuously and the remaining
>>> pages in the next iteration, hence this request can't be a continuous.
>>> To set the placement flag we make use of "i" value. In our case "i"
>>> value becomes 2 and we don't set the below flag.
>>> node->base.placement |= TTM_PL_FLAG_CONTIGUOUS;
>>>
>>> If we don't get such requests, I will remove "i".
>>
>> I'm not sure if that works.
>>
>> As far as I can see drm_buddy_alloc_blocks() can allocate multiple 
>> blocks at the same time, but i is only incremented when we loop.
>>
>> So what you should do instead is to check if node->blocks just contain 
>> exactly one element after the allocation but before the trim.
> ok
>>
>>>>> +
>>>>> +		/* Limit maximum size to 2GB due to SG table limitations */
>>>>> +		pages = min(pages_left, 2UL << (30 - PAGE_SHIFT));
>>>>>    
>>>>>    		if (pages >= pages_per_node)
>>>>> -			alignment = pages_per_node;
>>>>> -
>>>>> -		r = drm_mm_insert_node_in_range(mm, &node->mm_nodes[i], pages,
>>>>> -						alignment, 0, place->fpfn,
>>>>> -						lpfn, mode);
>>>>> -		if (unlikely(r)) {
>>>>> -			if (pages > pages_per_node) {
>>>>> -				if (is_power_of_2(pages))
>>>>> -					pages = pages / 2;
>>>>> -				else
>>>>> -					pages = rounddown_pow_of_two(pages);
>>>>> -				continue;
>>>>> -			}
>>>>> -			goto error_free;
>>>>> +			min_page_size = pages_per_node << PAGE_SHIFT;
>>>>> +
>>>>> +		if (!is_contiguous && !IS_ALIGNED(pages, min_page_size >> PAGE_SHIFT))
>>>>> +			is_contiguous = 1;
>>>>> +
>>>>> +		if (is_contiguous) {
>>>>> +			pages = roundup_pow_of_two(pages);
>>>>> +			min_page_size = pages << PAGE_SHIFT;
>>>>> +
>>>>> +			if (pages > lpfn)
>>>>> +				lpfn = pages;
>>>>>    		}
>>>>>    
>>>>> -		vis_usage += amdgpu_vram_mgr_vis_size(adev, &node->mm_nodes[i]);
>>>>> -		amdgpu_vram_mgr_virt_start(&node->base, &node->mm_nodes[i]);
>>>>> -		pages_left -= pages;
>>>>> +		BUG_ON(min_page_size < mm->chunk_size);
>>>>> +
>>>>> +		mutex_lock(&mgr->lock);
>>>>> +		r = drm_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT,
>>>>> +					   (u64)lpfn << PAGE_SHIFT,
>>>>> +					   (u64)pages << PAGE_SHIFT,
>>>>> +					   min_page_size,
>>>>> +					   &node->blocks,
>>>>> +					   node->flags);
>>>>> +		mutex_unlock(&mgr->lock);
>>>>> +		if (unlikely(r))
>>>>> +			goto error_free_blocks;
>>>>> +
>>>>>    		++i;
>>>>>    
>>>>>    		if (pages > pages_left)
>>>>> -			pages = pages_left;
>>>>> +			pages_left = 0;
>>>>> +		else
>>>>> +			pages_left -= pages;
>>>>>    	}
>>>>> -	spin_unlock(&mgr->lock);
>>>>>    
>>>>> -	if (i == 1)
>>>>> +	/* Free unused pages for contiguous allocation */
>>>>> +	if (is_contiguous) {
>>>> Well that looks really odd, why is trimming not part of
>>>> drm_buddy_alloc_blocks() ?
>>> we didn't place trim function part of drm_buddy_alloc_blocks since we
>>> thought this function can be a generic one and it can be used by any
>>> other application as well. For example, now we are using it for trimming
>>> the last block in case of size non-alignment with min_page_size.
>>
>> Good argument. Another thing I just realized is that we probably want to 
>> double check if we only allocated one block before the trim.
> ok
>>
>> Thanks,
>> Christian.
>>
Christian König March 30, 2022, 6:53 a.m. UTC | #20
Am 29.03.22 um 21:18 schrieb Arunpravin Paneer Selvam:
>
> On 29/03/22 9:30 pm, Arunpravin Paneer Selvam wrote:
>>
>> On 29/03/22 4:54 pm, Christian König wrote:
>>> Am 29.03.22 um 13:19 schrieb Arunpravin Paneer Selvam:
>>>> [SNIP]
>>>>>> +	pages_left = node->base.num_pages;
>>>>>>     
>>>>>>     	i = 0;
>>>>>> -	spin_lock(&mgr->lock);
>>>>>>     	while (pages_left) {
>>>>>> -		uint32_t alignment = tbo->page_alignment;
>>>>>> +		if (tbo->page_alignment)
>>>>>> +			min_page_size = tbo->page_alignment << PAGE_SHIFT;
>>>>>> +		else
>>>>>> +			min_page_size = mgr->default_page_size;
>>>>> The handling here looks extremely awkward to me.
>>>>>
>>>>> min_page_size should be determined outside of the loop, based on default_page_size, alignment and contiguous flag.
>>>> I kept min_page_size determine logic inside the loop for cases 2GiB+
>>>> requirements, Since now we are round up the size to the required
>>>> alignment, I modified the min_page_size determine logic outside of the
>>>> loop in v12. Please review.
>>> Ah! So do we only have the loop so that each allocation isn't bigger
>>> than 2GiB? If yes couldn't we instead add a max_alloc_size or something
>>> similar?
>> yes we have the loop to limit the allocation not bigger than 2GiB, I
>> think we cannot avoid the loop since we need to allocate the remaining
>> pages if any, to complete the 2GiB+ size request. In other words, first
>> iteration we limit the max size to 2GiB and in the second iteration we
>> allocate the left over pages if any.
> Hi Christian,
>
> Here my understanding might be incorrect, should we limit the max size =
> 2GiB and skip all the remaining pages for a 2GiB+ request?

No, the total size can be bigger than 2GiB. Only the contained pages 
should be a maximum of 2GiB.

See drm_buddy_alloc_blocks() already has the loop you need inside of it, 
all you need to do is to restrict the maximum allocation order.

In other words you got this line here in drm_buddy_alloc_blocks:

order = fls(pages) - 1;

Which then would become:

order = min(fls(pages), mm->max_order) - 1;

You then just need to give mm->max_order as a parameter to 
drm_buddy_init() instead of trying to figure it out yourself. This would 
then also make the following BUG_ON() superfluous.

And btw: IIRC fls() uses only 32 bits! You should either use fls64() or 
directly ilog2(), which optimizes into fls(), fls64() or a constant 
log2 based on the data type.
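A userspace sketch of that capped order computation; fls64_sketch() is a stand-in for the kernel's fls64(), and the exact capping expression the driver ends up with may differ:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-in for the kernel's fls64(): 1-based index of the
 * highest set bit, 0 for input 0. A plain 32-bit fls() would truncate
 * page counts >= 2^32, which is the pitfall noted above. */
static int fls64_sketch(uint64_t x)
{
	int r = 0;

	while (x) {
		x >>= 1;
		r++;
	}
	return r;
}

/* Buddy order for a page count, capped at max_order so no single
 * block exceeds the configured maximum allocation size. */
static int alloc_order(uint64_t pages, int max_order)
{
	int order = fls64_sketch(pages) - 1;

	return order > max_order ? max_order : order;
}
```

With the cap in place, a huge request is automatically split into multiple max-order blocks by the allocator's own loop, making the driver-side BUG_ON() unnecessary.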

Regards,
Christian.

>
> Thanks,
> Arun
>>> BTW: I strongly suggest that you rename min_page_size to min_alloc_size.
>>> Otherwise somebody could think that those numbers are in pages and not
>>> bytes.
>> modified in v12
>>>>> Then why do you drop the lock and grab it again inside the loop? And what is "i" actually good for?
>>>> modified the lock/unlock placement in v12.
>>>>
>>>> "i" is to track when there is 2GiB+ contiguous allocation request, first
>>>> we allocate 2GiB (due to SG table limit) continuously and the remaining
>>>> pages in the next iteration, hence this request can't be a continuous.
>>>> To set the placement flag we make use of "i" value. In our case "i"
>>>> value becomes 2 and we don't set the below flag.
>>>> node->base.placement |= TTM_PL_FLAG_CONTIGUOUS;
>>>>
>>>> If we don't get such requests, I will remove "i".
>>> I'm not sure if that works.
>>>
>>> As far as I can see drm_buddy_alloc_blocks() can allocate multiple
>>> blocks at the same time, but i is only incremented when we loop.
>>>
>>> So what you should do instead is to check if node->blocks just contain
>>> exactly one element after the allocation but before the trim.
>> ok
>>>>>> +
>>>>>> +		/* Limit maximum size to 2GB due to SG table limitations */
>>>>>> +		pages = min(pages_left, 2UL << (30 - PAGE_SHIFT));
>>>>>>     
>>>>>>     		if (pages >= pages_per_node)
>>>>>> -			alignment = pages_per_node;
>>>>>> -
>>>>>> -		r = drm_mm_insert_node_in_range(mm, &node->mm_nodes[i], pages,
>>>>>> -						alignment, 0, place->fpfn,
>>>>>> -						lpfn, mode);
>>>>>> -		if (unlikely(r)) {
>>>>>> -			if (pages > pages_per_node) {
>>>>>> -				if (is_power_of_2(pages))
>>>>>> -					pages = pages / 2;
>>>>>> -				else
>>>>>> -					pages = rounddown_pow_of_two(pages);
>>>>>> -				continue;
>>>>>> -			}
>>>>>> -			goto error_free;
>>>>>> +			min_page_size = pages_per_node << PAGE_SHIFT;
>>>>>> +
>>>>>> +		if (!is_contiguous && !IS_ALIGNED(pages, min_page_size >> PAGE_SHIFT))
>>>>>> +			is_contiguous = 1;
>>>>>> +
>>>>>> +		if (is_contiguous) {
>>>>>> +			pages = roundup_pow_of_two(pages);
>>>>>> +			min_page_size = pages << PAGE_SHIFT;
>>>>>> +
>>>>>> +			if (pages > lpfn)
>>>>>> +				lpfn = pages;
>>>>>>     		}
>>>>>>     
>>>>>> -		vis_usage += amdgpu_vram_mgr_vis_size(adev, &node->mm_nodes[i]);
>>>>>> -		amdgpu_vram_mgr_virt_start(&node->base, &node->mm_nodes[i]);
>>>>>> -		pages_left -= pages;
>>>>>> +		BUG_ON(min_page_size < mm->chunk_size);
>>>>>> +
>>>>>> +		mutex_lock(&mgr->lock);
>>>>>> +		r = drm_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT,
>>>>>> +					   (u64)lpfn << PAGE_SHIFT,
>>>>>> +					   (u64)pages << PAGE_SHIFT,
>>>>>> +					   min_page_size,
>>>>>> +					   &node->blocks,
>>>>>> +					   node->flags);
>>>>>> +		mutex_unlock(&mgr->lock);
>>>>>> +		if (unlikely(r))
>>>>>> +			goto error_free_blocks;
>>>>>> +
>>>>>>     		++i;
>>>>>>     
>>>>>>     		if (pages > pages_left)
>>>>>> -			pages = pages_left;
>>>>>> +			pages_left = 0;
>>>>>> +		else
>>>>>> +			pages_left -= pages;
>>>>>>     	}
>>>>>> -	spin_unlock(&mgr->lock);
>>>>>>     
>>>>>> -	if (i == 1)
>>>>>> +	/* Free unused pages for contiguous allocation */
>>>>>> +	if (is_contiguous) {
>>>>> Well that looks really odd, why is trimming not part of
>>>>> drm_buddy_alloc_blocks() ?
>>>> we didn't place trim function part of drm_buddy_alloc_blocks since we
>>>> thought this function can be a generic one and it can be used by any
>>>> other application as well. For example, now we are using it for trimming
>>>> the last block in case of size non-alignment with min_page_size.
>>> Good argument. Another thing I just realized is that we probably want to
>>> double check if we only allocated one block before the trim.
>> ok
>>> Thanks,
>>> Christian.
>>>

Patch

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index f1422bee3dcc..5133c3f028ab 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -280,6 +280,7 @@  config DRM_AMDGPU
 	select HWMON
 	select BACKLIGHT_CLASS_DEVICE
 	select INTERVAL_TREE
+	select DRM_BUDDY
 	help
 	  Choose this option if you have a recent AMD Radeon graphics card.
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
index acfa207cf970..864c609ba00b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
@@ -30,12 +30,15 @@ 
 #include <drm/ttm/ttm_resource.h>
 #include <drm/ttm/ttm_range_manager.h>
 
+#include "amdgpu_vram_mgr.h"
+
 /* state back for walking over vram_mgr and gtt_mgr allocations */
 struct amdgpu_res_cursor {
 	uint64_t		start;
 	uint64_t		size;
 	uint64_t		remaining;
-	struct drm_mm_node	*node;
+	void			*node;
+	uint32_t		mem_type;
 };
 
 /**
@@ -52,27 +55,63 @@  static inline void amdgpu_res_first(struct ttm_resource *res,
 				    uint64_t start, uint64_t size,
 				    struct amdgpu_res_cursor *cur)
 {
+	struct drm_buddy_block *block;
+	struct list_head *head, *next;
 	struct drm_mm_node *node;
 
-	if (!res || res->mem_type == TTM_PL_SYSTEM) {
-		cur->start = start;
-		cur->size = size;
-		cur->remaining = size;
-		cur->node = NULL;
-		WARN_ON(res && start + size > res->num_pages << PAGE_SHIFT);
-		return;
-	}
+	if (!res)
+		goto fallback;
 
 	BUG_ON(start + size > res->num_pages << PAGE_SHIFT);
 
-	node = to_ttm_range_mgr_node(res)->mm_nodes;
-	while (start >= node->size << PAGE_SHIFT)
-		start -= node++->size << PAGE_SHIFT;
+	cur->mem_type = res->mem_type;
+
+	switch (cur->mem_type) {
+	case TTM_PL_VRAM:
+		head = &to_amdgpu_vram_mgr_node(res)->blocks;
+
+		block = list_first_entry_or_null(head,
+						 struct drm_buddy_block,
+						 link);
+		if (!block)
+			goto fallback;
+
+		while (start >= amdgpu_node_size(block)) {
+			start -= amdgpu_node_size(block);
+
+			next = block->link.next;
+			if (next != head)
+				block = list_entry(next, struct drm_buddy_block, link);
+		}
+
+		cur->start = amdgpu_node_start(block) + start;
+		cur->size = min(amdgpu_node_size(block) - start, size);
+		cur->remaining = size;
+		cur->node = block;
+		break;
+	case TTM_PL_TT:
+		node = to_ttm_range_mgr_node(res)->mm_nodes;
+		while (start >= node->size << PAGE_SHIFT)
+			start -= node++->size << PAGE_SHIFT;
+
+		cur->start = (node->start << PAGE_SHIFT) + start;
+		cur->size = min((node->size << PAGE_SHIFT) - start, size);
+		cur->remaining = size;
+		cur->node = node;
+		break;
+	default:
+		goto fallback;
+	}
 
-	cur->start = (node->start << PAGE_SHIFT) + start;
-	cur->size = min((node->size << PAGE_SHIFT) - start, size);
+	return;
+
+fallback:
+	cur->start = start;
+	cur->size = size;
 	cur->remaining = size;
-	cur->node = node;
+	cur->node = NULL;
+	WARN_ON(res && start + size > res->num_pages << PAGE_SHIFT);
+	return;
 }
 
 /**
@@ -85,7 +124,9 @@  static inline void amdgpu_res_first(struct ttm_resource *res,
  */
 static inline void amdgpu_res_next(struct amdgpu_res_cursor *cur, uint64_t size)
 {
-	struct drm_mm_node *node = cur->node;
+	struct drm_buddy_block *block;
+	struct drm_mm_node *node;
+	struct list_head *next;
 
 	BUG_ON(size > cur->remaining);
 
@@ -99,9 +140,27 @@  static inline void amdgpu_res_next(struct amdgpu_res_cursor *cur, uint64_t size)
 		return;
 	}
 
-	cur->node = ++node;
-	cur->start = node->start << PAGE_SHIFT;
-	cur->size = min(node->size << PAGE_SHIFT, cur->remaining);
+	switch (cur->mem_type) {
+	case TTM_PL_VRAM:
+		block = cur->node;
+
+		next = block->link.next;
+		block = list_entry(next, struct drm_buddy_block, link);
+
+		cur->node = block;
+		cur->start = amdgpu_node_start(block);
+		cur->size = min(amdgpu_node_size(block), cur->remaining);
+		break;
+	case TTM_PL_TT:
+		node = cur->node;
+
+		cur->node = ++node;
+		cur->start = node->start << PAGE_SHIFT;
+		cur->size = min(node->size << PAGE_SHIFT, cur->remaining);
+		break;
+	default:
+		return;
+	}
 }
 
 #endif
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index 9120ae80ef52..6a70818039dd 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -26,6 +26,7 @@ 
 
 #include <linux/dma-direction.h>
 #include <drm/gpu_scheduler.h>
+#include "amdgpu_vram_mgr.h"
 #include "amdgpu.h"
 
 #define AMDGPU_PL_GDS		(TTM_PL_PRIV + 0)
@@ -38,15 +39,6 @@ 
 
 #define AMDGPU_POISON	0xd0bed0be
 
-struct amdgpu_vram_mgr {
-	struct ttm_resource_manager manager;
-	struct drm_mm mm;
-	spinlock_t lock;
-	struct list_head reservations_pending;
-	struct list_head reserved_pages;
-	atomic64_t vis_usage;
-};
-
 struct amdgpu_gtt_mgr {
 	struct ttm_resource_manager manager;
 	struct drm_mm mm;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
index 0a7611648573..41fb7e6a104b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
@@ -32,10 +32,18 @@ 
 #include "atom.h"
 
 struct amdgpu_vram_reservation {
+	u64 start;
+	u64 size;
+	struct list_head block;
 	struct list_head node;
-	struct drm_mm_node mm_node;
 };
 
+static inline struct drm_buddy_block *
+amdgpu_vram_mgr_first_block(struct list_head *list)
+{
+	return list_first_entry_or_null(list, struct drm_buddy_block, link);
+}
+
 static inline struct amdgpu_vram_mgr *
 to_vram_mgr(struct ttm_resource_manager *man)
 {
@@ -194,10 +202,10 @@  const struct attribute_group amdgpu_vram_mgr_attr_group = {
  * Calculate how many bytes of the MM node are inside visible VRAM
  */
 static u64 amdgpu_vram_mgr_vis_size(struct amdgpu_device *adev,
-				    struct drm_mm_node *node)
+				    struct drm_buddy_block *block)
 {
-	uint64_t start = node->start << PAGE_SHIFT;
-	uint64_t end = (node->size + node->start) << PAGE_SHIFT;
+	u64 start = amdgpu_node_start(block);
+	u64 end = start + amdgpu_node_size(block);
 
 	if (start >= adev->gmc.visible_vram_size)
 		return 0;
@@ -218,9 +226,9 @@  u64 amdgpu_vram_mgr_bo_visible_size(struct amdgpu_bo *bo)
 {
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
 	struct ttm_resource *res = bo->tbo.resource;
-	unsigned pages = res->num_pages;
-	struct drm_mm_node *mm;
-	u64 usage;
+	struct amdgpu_vram_mgr_node *node = to_amdgpu_vram_mgr_node(res);
+	struct drm_buddy_block *block;
+	u64 usage = 0;
 
 	if (amdgpu_gmc_vram_full_visible(&adev->gmc))
 		return amdgpu_bo_size(bo);
@@ -228,9 +236,8 @@  u64 amdgpu_vram_mgr_bo_visible_size(struct amdgpu_bo *bo)
 	if (res->start >= adev->gmc.visible_vram_size >> PAGE_SHIFT)
 		return 0;
 
-	mm = &container_of(res, struct ttm_range_mgr_node, base)->mm_nodes[0];
-	for (usage = 0; pages; pages -= mm->size, mm++)
-		usage += amdgpu_vram_mgr_vis_size(adev, mm);
+	list_for_each_entry(block, &node->blocks, link)
+		usage += amdgpu_vram_mgr_vis_size(adev, block);
 
 	return usage;
 }
@@ -240,21 +247,28 @@  static void amdgpu_vram_mgr_do_reserve(struct ttm_resource_manager *man)
 {
 	struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
 	struct amdgpu_device *adev = to_amdgpu_device(mgr);
-	struct drm_mm *mm = &mgr->mm;
+	struct drm_buddy *mm = &mgr->mm;
 	struct amdgpu_vram_reservation *rsv, *temp;
+	struct drm_buddy_block *block;
 	uint64_t vis_usage;
 
 	list_for_each_entry_safe(rsv, temp, &mgr->reservations_pending, node) {
-		if (drm_mm_reserve_node(mm, &rsv->mm_node))
+		if (drm_buddy_alloc_blocks(mm, rsv->start, rsv->start + rsv->size,
+					   rsv->size, mm->chunk_size, &rsv->block,
+					   DRM_BUDDY_RANGE_ALLOCATION))
+			continue;
+
+		block = amdgpu_vram_mgr_first_block(&rsv->block);
+		if (!block)
 			continue;
 
 		dev_dbg(adev->dev, "Reservation 0x%llx - %lld, Succeeded\n",
-			rsv->mm_node.start, rsv->mm_node.size);
+			rsv->start, rsv->size);
 
-		vis_usage = amdgpu_vram_mgr_vis_size(adev, &rsv->mm_node);
+		vis_usage = amdgpu_vram_mgr_vis_size(adev, block);
 		atomic64_add(vis_usage, &mgr->vis_usage);
 		spin_lock(&man->bdev->lru_lock);
-		man->usage += rsv->mm_node.size << PAGE_SHIFT;
+		man->usage += rsv->size;
 		spin_unlock(&man->bdev->lru_lock);
 		list_move(&rsv->node, &mgr->reserved_pages);
 	}
@@ -279,13 +293,15 @@  int amdgpu_vram_mgr_reserve_range(struct amdgpu_vram_mgr *mgr,
 		return -ENOMEM;
 
 	INIT_LIST_HEAD(&rsv->node);
-	rsv->mm_node.start = start >> PAGE_SHIFT;
-	rsv->mm_node.size = size >> PAGE_SHIFT;
+	INIT_LIST_HEAD(&rsv->block);
 
-	spin_lock(&mgr->lock);
+	rsv->start = start;
+	rsv->size = size;
+
+	mutex_lock(&mgr->lock);
 	list_add_tail(&rsv->node, &mgr->reservations_pending);
 	amdgpu_vram_mgr_do_reserve(&mgr->manager);
-	spin_unlock(&mgr->lock);
+	mutex_unlock(&mgr->lock);
 
 	return 0;
 }
@@ -307,19 +323,19 @@  int amdgpu_vram_mgr_query_page_status(struct amdgpu_vram_mgr *mgr,
 	struct amdgpu_vram_reservation *rsv;
 	int ret;
 
-	spin_lock(&mgr->lock);
+	mutex_lock(&mgr->lock);
 
 	list_for_each_entry(rsv, &mgr->reservations_pending, node) {
-		if ((rsv->mm_node.start <= start) &&
-		    (start < (rsv->mm_node.start + rsv->mm_node.size))) {
+		if (rsv->start <= start &&
+		    (start < (rsv->start + rsv->size))) {
 			ret = -EBUSY;
 			goto out;
 		}
 	}
 
 	list_for_each_entry(rsv, &mgr->reserved_pages, node) {
-		if ((rsv->mm_node.start <= start) &&
-		    (start < (rsv->mm_node.start + rsv->mm_node.size))) {
+		if (rsv->start <= start &&
+		    (start < (rsv->start + rsv->size))) {
 			ret = 0;
 			goto out;
 		}
@@ -327,32 +343,10 @@  int amdgpu_vram_mgr_query_page_status(struct amdgpu_vram_mgr *mgr,
 
 	ret = -ENOENT;
 out:
-	spin_unlock(&mgr->lock);
+	mutex_unlock(&mgr->lock);
 	return ret;
 }
 
-/**
- * amdgpu_vram_mgr_virt_start - update virtual start address
- *
- * @mem: ttm_resource to update
- * @node: just allocated node
- *
- * Calculate a virtual BO start address to easily check if everything is CPU
- * accessible.
- */
-static void amdgpu_vram_mgr_virt_start(struct ttm_resource *mem,
-				       struct drm_mm_node *node)
-{
-	unsigned long start;
-
-	start = node->start + node->size;
-	if (start > mem->num_pages)
-		start -= mem->num_pages;
-	else
-		start = 0;
-	mem->start = max(mem->start, start);
-}
-
 /**
  * amdgpu_vram_mgr_new - allocate new ranges
  *
@@ -368,13 +362,14 @@  static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
 			       const struct ttm_place *place,
 			       struct ttm_resource **res)
 {
-	unsigned long lpfn, num_nodes, pages_per_node, pages_left, pages;
+	unsigned long lpfn, pages_per_node, pages_left, pages;
 	struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
 	struct amdgpu_device *adev = to_amdgpu_device(mgr);
-	uint64_t vis_usage = 0, mem_bytes, max_bytes;
-	struct ttm_range_mgr_node *node;
-	struct drm_mm *mm = &mgr->mm;
-	enum drm_mm_insert_mode mode;
+	u64 vis_usage = 0, max_bytes, min_page_size;
+	struct amdgpu_vram_mgr_node *node;
+	struct drm_buddy *mm = &mgr->mm;
+	struct drm_buddy_block *block;
+	bool is_contiguous = false;
 	unsigned i;
 	int r;
 
@@ -382,14 +377,15 @@  static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
 	if (!lpfn)
 		lpfn = man->size >> PAGE_SHIFT;
 
+	if (place->flags & TTM_PL_FLAG_CONTIGUOUS)
+		is_contiguous = true;
+
 	max_bytes = adev->gmc.mc_vram_size;
 	if (tbo->type != ttm_bo_type_kernel)
 		max_bytes -= AMDGPU_VM_RESERVED_VRAM;
 
-	mem_bytes = tbo->base.size;
 	if (place->flags & TTM_PL_FLAG_CONTIGUOUS) {
 		pages_per_node = ~0ul;
-		num_nodes = 1;
 	} else {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 		pages_per_node = HPAGE_PMD_NR;
@@ -399,11 +395,9 @@  static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
 #endif
 		pages_per_node = max_t(uint32_t, pages_per_node,
 				       tbo->page_alignment);
-		num_nodes = DIV_ROUND_UP_ULL(PFN_UP(mem_bytes), pages_per_node);
 	}
 
-	node = kvmalloc(struct_size(node, mm_nodes, num_nodes),
-			GFP_KERNEL | __GFP_ZERO);
+	node = kzalloc(sizeof(*node), GFP_KERNEL);
 	if (!node)
 		return -ENOMEM;
 
@@ -415,48 +409,86 @@  static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
 		goto error_fini;
 	}
 
-	mode = DRM_MM_INSERT_BEST;
+	INIT_LIST_HEAD(&node->blocks);
+
 	if (place->flags & TTM_PL_FLAG_TOPDOWN)
-		mode = DRM_MM_INSERT_HIGH;
+		node->flags |= DRM_BUDDY_TOPDOWN_ALLOCATION;
 
-	pages_left = node->base.num_pages;
+	if (place->fpfn || lpfn != man->size >> PAGE_SHIFT)
+		/* Allocate blocks in desired range */
+		node->flags |= DRM_BUDDY_RANGE_ALLOCATION;
 
-	/* Limit maximum size to 2GB due to SG table limitations */
-	pages = min(pages_left, 2UL << (30 - PAGE_SHIFT));
+	BUG_ON(!node->base.num_pages);
+	pages_left = node->base.num_pages;
 
 	i = 0;
-	spin_lock(&mgr->lock);
 	while (pages_left) {
-		uint32_t alignment = tbo->page_alignment;
+		if (tbo->page_alignment)
+			min_page_size = tbo->page_alignment << PAGE_SHIFT;
+		else
+			min_page_size = mgr->default_page_size;
+
+		/* Limit maximum size to 2GB due to SG table limitations */
+		pages = min(pages_left, 2UL << (30 - PAGE_SHIFT));
 
 		if (pages >= pages_per_node)
-			alignment = pages_per_node;
-
-		r = drm_mm_insert_node_in_range(mm, &node->mm_nodes[i], pages,
-						alignment, 0, place->fpfn,
-						lpfn, mode);
-		if (unlikely(r)) {
-			if (pages > pages_per_node) {
-				if (is_power_of_2(pages))
-					pages = pages / 2;
-				else
-					pages = rounddown_pow_of_two(pages);
-				continue;
-			}
-			goto error_free;
+			min_page_size = pages_per_node << PAGE_SHIFT;
+
+		if (!is_contiguous && !IS_ALIGNED(pages, min_page_size >> PAGE_SHIFT))
+			is_contiguous = true;
+
+		if (is_contiguous) {
+			pages = roundup_pow_of_two(pages);
+			min_page_size = pages << PAGE_SHIFT;
+
+			if (pages > lpfn)
+				lpfn = pages;
 		}
 
-		vis_usage += amdgpu_vram_mgr_vis_size(adev, &node->mm_nodes[i]);
-		amdgpu_vram_mgr_virt_start(&node->base, &node->mm_nodes[i]);
-		pages_left -= pages;
+		BUG_ON(min_page_size < mm->chunk_size);
+
+		mutex_lock(&mgr->lock);
+		r = drm_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT,
+					   (u64)lpfn << PAGE_SHIFT,
+					   (u64)pages << PAGE_SHIFT,
+					   min_page_size,
+					   &node->blocks,
+					   node->flags);
+		mutex_unlock(&mgr->lock);
+		if (unlikely(r))
+			goto error_free_blocks;
+
 		++i;
 
 		if (pages > pages_left)
-			pages = pages_left;
+			pages_left = 0;
+		else
+			pages_left -= pages;
 	}
-	spin_unlock(&mgr->lock);
 
-	if (i == 1)
+	/* Free unused pages for contiguous allocation */
+	if (is_contiguous) {
+		u64 actual_size = (u64)node->base.num_pages << PAGE_SHIFT;
+
+		mutex_lock(&mgr->lock);
+		drm_buddy_block_trim(mm,
+				     actual_size,
+				     &node->blocks);
+		mutex_unlock(&mgr->lock);
+	}
+
+	list_for_each_entry(block, &node->blocks, link)
+		vis_usage += amdgpu_vram_mgr_vis_size(adev, block);
+
+	block = amdgpu_vram_mgr_first_block(&node->blocks);
+	if (!block) {
+		r = -EINVAL;
+		goto error_fini;
+	}
+
+	node->base.start = amdgpu_node_start(block) >> PAGE_SHIFT;
+
+	if (i == 1 && is_contiguous)
 		node->base.placement |= TTM_PL_FLAG_CONTIGUOUS;
 
 	if (adev->gmc.xgmi.connected_to_cpu)
@@ -468,13 +500,13 @@  static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
 	*res = &node->base;
 	return 0;
 
-error_free:
-	while (i--)
-		drm_mm_remove_node(&node->mm_nodes[i]);
-	spin_unlock(&mgr->lock);
+error_free_blocks:
+	mutex_lock(&mgr->lock);
+	drm_buddy_free_list(mm, &node->blocks);
+	mutex_unlock(&mgr->lock);
 error_fini:
 	ttm_resource_fini(man, &node->base);
-	kvfree(node);
+	kfree(node);
 
 	return r;
 }
@@ -490,27 +522,26 @@  static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
 static void amdgpu_vram_mgr_del(struct ttm_resource_manager *man,
 				struct ttm_resource *res)
 {
-	struct ttm_range_mgr_node *node = to_ttm_range_mgr_node(res);
+	struct amdgpu_vram_mgr_node *node = to_amdgpu_vram_mgr_node(res);
 	struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
 	struct amdgpu_device *adev = to_amdgpu_device(mgr);
+	struct drm_buddy *mm = &mgr->mm;
+	struct drm_buddy_block *block;
 	uint64_t vis_usage = 0;
-	unsigned i, pages;
 
-	spin_lock(&mgr->lock);
-	for (i = 0, pages = res->num_pages; pages;
-	     pages -= node->mm_nodes[i].size, ++i) {
-		struct drm_mm_node *mm = &node->mm_nodes[i];
+	mutex_lock(&mgr->lock);
+	list_for_each_entry(block, &node->blocks, link)
+		vis_usage += amdgpu_vram_mgr_vis_size(adev, block);
 
-		drm_mm_remove_node(mm);
-		vis_usage += amdgpu_vram_mgr_vis_size(adev, mm);
-	}
 	amdgpu_vram_mgr_do_reserve(man);
-	spin_unlock(&mgr->lock);
+
+	drm_buddy_free_list(mm, &node->blocks);
+	mutex_unlock(&mgr->lock);
 
 	atomic64_sub(vis_usage, &mgr->vis_usage);
 
 	ttm_resource_fini(man, res);
-	kvfree(node);
+	kfree(node);
 }
 
 /**
@@ -648,13 +679,22 @@  static void amdgpu_vram_mgr_debug(struct ttm_resource_manager *man,
 				  struct drm_printer *printer)
 {
 	struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
+	struct drm_buddy *mm = &mgr->mm;
+	struct drm_buddy_block *block;
 
 	drm_printf(printer, "  vis usage:%llu\n",
 		   amdgpu_vram_mgr_vis_usage(mgr));
 
-	spin_lock(&mgr->lock);
-	drm_mm_print(&mgr->mm, printer);
-	spin_unlock(&mgr->lock);
+	mutex_lock(&mgr->lock);
+	drm_printf(printer, "default_page_size: %lluKiB\n",
+		   mgr->default_page_size >> 10);
+
+	drm_buddy_print(mm, printer);
+
+	drm_printf(printer, "reserved:\n");
+	list_for_each_entry(block, &mgr->reserved_pages, link)
+		drm_buddy_block_print(mm, block, printer);
+	mutex_unlock(&mgr->lock);
 }
 
 static const struct ttm_resource_manager_func amdgpu_vram_mgr_func = {
@@ -674,16 +714,21 @@  int amdgpu_vram_mgr_init(struct amdgpu_device *adev)
 {
 	struct amdgpu_vram_mgr *mgr = &adev->mman.vram_mgr;
 	struct ttm_resource_manager *man = &mgr->manager;
+	int err;
 
 	ttm_resource_manager_init(man, &adev->mman.bdev,
 				  adev->gmc.real_vram_size);
 
 	man->func = &amdgpu_vram_mgr_func;
 
-	drm_mm_init(&mgr->mm, 0, man->size >> PAGE_SHIFT);
-	spin_lock_init(&mgr->lock);
+	err = drm_buddy_init(&mgr->mm, man->size, PAGE_SIZE);
+	if (err)
+		return err;
+
+	mutex_init(&mgr->lock);
 	INIT_LIST_HEAD(&mgr->reservations_pending);
 	INIT_LIST_HEAD(&mgr->reserved_pages);
+	mgr->default_page_size = PAGE_SIZE;
 
 	ttm_set_driver_manager(&adev->mman.bdev, TTM_PL_VRAM, &mgr->manager);
 	ttm_resource_manager_set_used(man, true);
@@ -711,16 +756,16 @@  void amdgpu_vram_mgr_fini(struct amdgpu_device *adev)
 	if (ret)
 		return;
 
-	spin_lock(&mgr->lock);
+	mutex_lock(&mgr->lock);
 	list_for_each_entry_safe(rsv, temp, &mgr->reservations_pending, node)
 		kfree(rsv);
 
 	list_for_each_entry_safe(rsv, temp, &mgr->reserved_pages, node) {
-		drm_mm_remove_node(&rsv->mm_node);
+		drm_buddy_free_list(&mgr->mm, &rsv->block);
 		kfree(rsv);
 	}
-	drm_mm_takedown(&mgr->mm);
-	spin_unlock(&mgr->lock);
+	drm_buddy_fini(&mgr->mm);
+	mutex_unlock(&mgr->lock);
 
 	ttm_resource_manager_cleanup(man);
 	ttm_set_driver_manager(&adev->mman.bdev, TTM_PL_VRAM, NULL);