[v3,4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v3)

Message ID 1534154331-11810-5-git-send-email-ray.huang@amd.com (mailing list archive)
State New, archived
Series drm/ttm,amdgpu: Introduce LRU bulk move functionality

Commit Message

Huang Rui Aug. 13, 2018, 9:58 a.m. UTC
I continued the work on bulk moving based on the proposal from Christian.

Background:
The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, then
moves each of them to the end of the LRU list one by one. Moving that many
BOs individually seriously impacts performance.

Christian then provided a workaround that avoids moving PD/PT BOs on the
LRU, with the patch below:
"drm/amdgpu: band aid validating VM PTs"
Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae

However, the final solution should move all PD/PT and per-VM BOs on the
LRU in bulk instead of one by one.

Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need
to be validated, we move all BOs together to the end of the LRU without
dropping the LRU lock.

While doing so, we note the beginning and end of this block in the LRU list.

Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything to do,
we don't move every BO one by one, but instead cut the LRU list into pieces so
that we bulk move everything to the end in just one operation.
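
To illustrate the cutting: remember the first and last entry of the block
while the BOs are moved one by one, then splice the whole block to the tail
in a single step. A minimal sketch of the idea (illustrative only, not the
TTM code itself), assuming the list_bulk_move_tail() helper from
<linux/list.h>:

#include <linux/list.h>

/* boundaries of one contiguous block of entries on the LRU */
struct bulk_pos {
	struct list_head *first;
	struct list_head *last;
};

/* record each entry as it is moved to the LRU tail */
static void bulk_track(struct bulk_pos *pos, struct list_head *entry)
{
	if (!pos->first)
		pos->first = entry;
	pos->last = entry;
}

/*
 * Cut first..last out of the list and splice the block back in front of
 * the head, i.e. at the tail: one operation, no matter how many entries.
 */
static void bulk_move_tail(struct list_head *lru, struct bulk_pos *pos)
{
	if (!pos->first)
		return;
	list_bulk_move_tail(lru, pos->first, pos->last);
}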

Test data:
+--------------+-----------------+-----------+---------------------------------------+
|              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                  |
|              |Principle(Vulkan)|           |                                       |
+--------------+-----------------+-----------+---------------------------------------+
|              |                 |           |0.319 ms(1K) 0.314 ms(2K) 0.308 ms(4K) |
| Original     |  147.7 FPS      |  76.86 us |0.307 ms(8K) 0.310 ms(16K)             |
+--------------+-----------------+-----------+---------------------------------------+
| Original + WA|                 |           |0.254 ms(1K) 0.241 ms(2K)              |
|(don't move   |  162.1 FPS      |  42.15 us |0.230 ms(4K) 0.223 ms(8K) 0.204 ms(16K)|
|PT BOs on LRU)|                 |           |                                       |
+--------------+-----------------+-----------+---------------------------------------+
| Bulk move    |  163.1 FPS      |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
|              |                 |           |0.214 ms(8K) 0.225 ms(16K)             |
+--------------+-----------------+-----------+---------------------------------------+

Testing with the three benchmarks above (Vulkan and OpenCL) shows a
visible improvement over the original code, and the result is even
slightly better than the original with the workaround.

v2: move all BOs, including those on the idle, relocated, and moved lists,
to the end of the LRU and keep them together.
v3: remove an unused parameter and use list_for_each_entry instead of the
safe variant.

Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 73 ++++++++++++++++++++++++++--------
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  4 ++
 2 files changed, 61 insertions(+), 16 deletions(-)

Comments

Zhang, Jerry(Junwei) Aug. 14, 2018, 2:26 a.m. UTC | #1
On 08/13/2018 05:58 PM, Huang Rui wrote:
> [snip]
> +/**
> + * amdgpu_vm_move_to_lru_tail - move all BOs to the end of LRU
> + *
> + * @adev: amdgpu device pointer
> + * @vm: vm providing the BOs
> + *
> + * Move all BOs to the end of LRU and remember their positions to put them
> + * together.
> + */
> +static void
> +amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev, struct amdgpu_vm *vm)
> +{
> +	struct ttm_bo_global *glob = adev->mman.bdev.glob;
> +
> +	spin_lock(&glob->lru_lock);
> +	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->idle);
> +	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->relocated);
> +	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->moved);

The moved list works under vm->moved_lock, so we may need to hold that as
well; otherwise, use the same lock for both.
(I'm not sure about the history behind moved_lock.)

> +	spin_unlock(&glob->lru_lock);
> +}
> [snip]
> -	spin_lock(&glob->lru_lock);
> -	list_for_each_entry(bo_base, &vm->idle, vm_status) {
> -		struct amdgpu_bo *bo = bo_base->bo;
> +	if (!validated) {
> +		spin_lock(&glob->lru_lock);
> +		ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);

To confirm:
we only do the actual bulk move when nothing was evicted and no validation
failed?

Regards,
Jerry

Huang Rui Aug. 14, 2018, 3:05 a.m. UTC | #2
On Tue, Aug 14, 2018 at 10:26:43AM +0800, Zhang, Jerry wrote:
> On 08/13/2018 05:58 PM, Huang Rui wrote:
> > [snip]
> > +	spin_lock(&glob->lru_lock);
> > +	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->idle);
> > +	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->relocated);
> > +	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->moved);
> 
> The moved list works under vm->moved_lock, so we may need to hold that as
> well; otherwise, use the same lock for both.
> (I'm not sure about the history behind moved_lock.)

We actually don't remove them from the moved list; we just move bo->lru to
the end of the LRU. The lru links are protected by glob->lru_lock, which we
already hold, so moved_lock isn't needed here.

> > [snip]
> > +	if (!validated) {
> > +		spin_lock(&glob->lru_lock);
> > +		ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
> 
> To confirm:
> we only do the actual bulk move when nothing was evicted and no validation
> failed?
> 

Yes. Because if some BOs are evicted back, they will be moved to the
moved/relocated lists, and then we need to update the positions for the
bulk move.

Thanks,
Ray

Christian König Aug. 14, 2018, 6:45 a.m. UTC | #3
On 14.08.2018 at 05:05, Huang Rui wrote:
> On Tue, Aug 14, 2018 at 10:26:43AM +0800, Zhang, Jerry wrote:
>> On 08/13/2018 05:58 PM, Huang Rui wrote:
>>> [snip]
>>> +	if (!validated) {
>>> +		spin_lock(&glob->lru_lock);
>>> +		ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
>> To confirm:
>> we only do the actual bulk move when nothing was evicted and no validation
>> failed?
>>
> Yes. Because if some BOs are evicted back, they will be moved to the
> moved/relocated lists, and then we need to update the positions for the
> bulk move.

Ah, crap that won't work. Jerry pointed out a quite important bug here.

The moved list contains both per-VM as well as independent BOs, so 
walking it and moving everything on the LRU won't work as expected.

Probably better to just walk the idle list after we are done with the 
state machine.

Christian.

Huang Rui Aug. 14, 2018, 7:24 a.m. UTC | #4
On Tue, Aug 14, 2018 at 02:45:22PM +0800, Koenig, Christian wrote:
> On 14.08.2018 at 05:05, Huang Rui wrote:
> [snip]
> > Yes. Because if some BOs are evicted back, they will be moved to the
> > moved/relocated lists, and then we need to update the positions for the
> > bulk move.
> 
> Ah, crap that won't work. Jerry pointed out a quite important bug here.
> 
> The moved list contains both per-VM as well as independent BOs, so 
> walking it and moving everything on the LRU won't work as expected.
> 

Our purpose is not to move the independent BOs (shared with other VMs),
right?

> Probably better to just walk the idle list after we are done with the 
> state machine.
> 

If we only walk the idle list here, we probably won't include all the
per-VM BOs, right? Or should we walk the idle list after command submission?

Thanks,
Ray

Christian König Aug. 14, 2018, 7:35 a.m. UTC | #5
On 14.08.2018 at 09:24, Huang Rui wrote:
> On Tue, Aug 14, 2018 at 02:45:22PM +0800, Koenig, Christian wrote:
>> [snip]
> Our purpose is not to move the independent BOs (shared with other VMs),
> right?
>
>> Probably better to just walk the idle list after we are done with the
>> state machine.
>>
> If we only walk the idle list here, we probably won't include all the
> per-VM BOs, right? Or should we walk the idle list after command submission?

Walking the idle list after command submission. At this point all BOs 
should be on there, except for the evicted ones and we can handle those 
separately.

Regards,
Christian.
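
A sketch of how that suggestion could look (an assumption drawn from this
discussion, not code from the series): rebuild the bulk-move positions by
walking only vm->idle, once per command submission:

static void
amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev, struct amdgpu_vm *vm)
{
	struct ttm_bo_global *glob = adev->mman.bdev.glob;
	struct amdgpu_vm_bo_base *bo_base;

	/* forget the old block boundaries before rebuilding them */
	memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move));

	spin_lock(&glob->lru_lock);
	list_for_each_entry(bo_base, &vm->idle, vm_status) {
		struct amdgpu_bo *bo = bo_base->bo;

		/* the root PD is handled separately */
		if (!bo->parent)
			continue;

		ttm_bo_move_to_lru_tail(&bo->tbo, &vm->lru_bulk_move);
		if (bo->shadow)
			ttm_bo_move_to_lru_tail(&bo->shadow->tbo,
						&vm->lru_bulk_move);
	}
	spin_unlock(&glob->lru_lock);
}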

Huang Rui Aug. 14, 2018, 8:17 a.m. UTC | #6
On Tue, Aug 14, 2018 at 09:35:50AM +0200, Christian König wrote:
> On 14.08.2018 at 09:24, Huang Rui wrote:
> >On Tue, Aug 14, 2018 at 02:45:22PM +0800, Koenig, Christian wrote:
> >>[snip]
> >>Probably better to just walk the idle list after we are done with the
> >>state machine.
> >>
> >If we only walk the idle list here, we probably won't include all the
> >per-VM BOs, right? Or should we walk the idle list after command submission?
> 
> Walking the idle list after command submission. At this point all
> BOs should be on there, except for the evicted ones and we can
> handle those separately.
> 

Thanks. BTW, could you please elaborate a bit more on the state machine?
I also see it mentioned in the comment on the idle list:

/* All BOs of this VM not currently in the state machine */
struct list_head        idle;

Thanks,
Ray

Patch

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 9c84770..ee1af53 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -268,6 +268,53 @@  void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
 }
 
 /**
+ * amdgpu_vm_move_to_lru_tail_by_list - move one list of BOs to end of LRU
+ *
+ * @vm: vm providing the BOs
+ * @list: the list that stored BOs
+ *
+ * Move one list of BOs to the end of LRU and update the positions.
+ */
+static void
+amdgpu_vm_move_to_lru_tail_by_list(struct amdgpu_vm *vm, struct list_head *list)
+{
+	struct amdgpu_vm_bo_base *bo_base;
+
+	list_for_each_entry(bo_base, list, vm_status) {
+		struct amdgpu_bo *bo = bo_base->bo;
+
+		if (!bo->parent)
+			continue;
+
+		ttm_bo_move_to_lru_tail(&bo->tbo, &vm->lru_bulk_move);
+		if (bo->shadow)
+			ttm_bo_move_to_lru_tail(&bo->shadow->tbo,
+						&vm->lru_bulk_move);
+	}
+}
+
+/**
+ * amdgpu_vm_move_to_lru_tail - move all BOs to the end of LRU
+ *
+ * @adev: amdgpu device pointer
+ * @vm: vm providing the BOs
+ *
+ * Move all BOs to the end of LRU and remember their positions to put them
+ * together.
+ */
+static void
+amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev, struct amdgpu_vm *vm)
+{
+	struct ttm_bo_global *glob = adev->mman.bdev.glob;
+
+	spin_lock(&glob->lru_lock);
+	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->idle);
+	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->relocated);
+	amdgpu_vm_move_to_lru_tail_by_list(vm, &vm->moved);
+	spin_unlock(&glob->lru_lock);
+}
+
+/**
  * amdgpu_vm_validate_pt_bos - validate the page table BOs
  *
  * @adev: amdgpu device pointer
@@ -286,6 +333,7 @@  int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 {
 	struct ttm_bo_global *glob = adev->mman.bdev.glob;
 	struct amdgpu_vm_bo_base *bo_base, *tmp;
+	bool validated = false;
 	int r = 0;
 
 	list_for_each_entry_safe(bo_base, tmp, &vm->evicted, vm_status) {
@@ -295,14 +343,9 @@  int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 			r = validate(param, bo);
 			if (r)
 				break;
-
-			spin_lock(&glob->lru_lock);
-			ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
-			if (bo->shadow)
-				ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
-			spin_unlock(&glob->lru_lock);
 		}
 
+		validated = true;
 		if (bo->tbo.type != ttm_bo_type_kernel) {
 			spin_lock(&vm->moved_lock);
 			list_move(&bo_base->vm_status, &vm->moved);
@@ -312,18 +355,16 @@  int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 		}
 	}
 
-	spin_lock(&glob->lru_lock);
-	list_for_each_entry(bo_base, &vm->idle, vm_status) {
-		struct amdgpu_bo *bo = bo_base->bo;
+	if (!validated) {
+		spin_lock(&glob->lru_lock);
+		ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
+		spin_unlock(&glob->lru_lock);
+		return 0;
+	}
 
-		if (!bo->parent)
-			continue;
+	memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move));
 
-		ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
-		if (bo->shadow)
-			ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
-	}
-	spin_unlock(&glob->lru_lock);
+	amdgpu_vm_move_to_lru_tail(adev, vm);
 
 	return r;
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 67a15d4..92725ac 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -29,6 +29,7 @@ 
 #include <linux/rbtree.h>
 #include <drm/gpu_scheduler.h>
 #include <drm/drm_file.h>
+#include <drm/ttm/ttm_bo_driver.h>
 
 #include "amdgpu_sync.h"
 #include "amdgpu_ring.h"
@@ -226,6 +227,9 @@  struct amdgpu_vm {
 
 	/* Some basic info about the task */
 	struct amdgpu_task_info task_info;
+
+	/* Store positions of group of BOs */
+	struct ttm_lru_bulk_move lru_bulk_move;
 };
 
 struct amdgpu_vm_manager {
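
For reference, the ttm_lru_bulk_move structure embedded above comes from an
earlier patch in this series; its shape is roughly the following (a sketch
of per-domain, per-priority first/last positions; treat the exact field
names as illustrative):

struct ttm_lru_bulk_move_pos {
	struct ttm_buffer_object *first;
	struct ttm_buffer_object *last;
};

struct ttm_lru_bulk_move {
	struct ttm_lru_bulk_move_pos tt[TTM_MAX_BO_PRIORITY];
	struct ttm_lru_bulk_move_pos vram[TTM_MAX_BO_PRIORITY];
	struct ttm_lru_bulk_move_pos swap[TTM_MAX_BO_PRIORITY];
};

Each ttm_bo_move_to_lru_tail() call with a non-NULL bulk argument updates
the matching first/last pair, and ttm_bo_bulk_move_lru_tail() later splices
each recorded block to the LRU tail in one operation, which is why
amdgpu_vm_validate_pt_bos() clears the positions with memset() before they
are rebuilt.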