
drm/i915/ggtt: Move ballooning support to i915_ggtt

Message ID 20191121175850.19844-1-michal.wajdeczko@intel.com (mailing list archive)
State New, archived
Series drm/i915/ggtt: Move ballooning support to i915_ggtt

Commit Message

Michal Wajdeczko Nov. 21, 2019, 5:58 p.m. UTC
Today VGT implements GGTT ballooning on its own, using a private
static structure to hold info about reserved GGTT nodes and then
manually updating the vm.reserved size owned by i915_ggtt.

As i915_ggtt already manages some custom reserved nodes (like uc_fw),
move VGT ballooning support into i915_ggtt and make it more generic
for possible future reuse in other scenarios.

As a bonus, we can drop another place in the driver that uses static data.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Xiong Zhang <xiong.y.zhang@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@intel.com>
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 47 +++++++++++++++++++
 drivers/gpu/drm/i915/i915_gem_gtt.h | 18 ++++++++
 drivers/gpu/drm/i915/i915_vgpu.c    | 72 ++++++-----------------------
 3 files changed, 78 insertions(+), 59 deletions(-)
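For readers skimming the diff below, the accounting contract the patch centralises - every successful balloon grows vm.reserved by the node size, every deballoon shrinks it, and deballooning an empty slot is a no-op - can be modelled in a few lines of plain C. The types and names here are toy stand-ins for illustration only, not the driver's real i915_ggtt/drm_mm_node:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy stand-ins for drm_mm_node and i915_ggtt (illustrative only). */
struct toy_node {
	uint64_t start;
	uint64_t size;
	bool allocated;
};

struct toy_ggtt {
	uint64_t reserved;	/* mirrors ggtt->vm.reserved */
	struct toy_node balloon[4];
};

/* Mirrors i915_ggtt_balloon_space(): reserve [start, end) in slot @id. */
static int toy_balloon_space(struct toy_ggtt *ggtt, int id,
			     uint64_t start, uint64_t end)
{
	struct toy_node *node = &ggtt->balloon[id];

	if (start >= end)
		return -1;	/* the real code has GEM_BUG_ON(start >= end) */

	node->start = start;
	node->size = end - start;
	node->allocated = true;
	ggtt->reserved += node->size;
	return 0;
}

/* Mirrors i915_ggtt_deballoon_space(): safe to call on an empty slot. */
static void toy_deballoon_space(struct toy_ggtt *ggtt, int id)
{
	struct toy_node *node = &ggtt->balloon[id];

	if (!node->allocated)
		return;

	assert(ggtt->reserved >= node->size);
	ggtt->reserved -= node->size;
	node->allocated = false;
}
```

The symmetry of the pair is what lets the reworked intel_vgt_deballoon() simply loop over all four slots without tracking which ones were actually ballooned.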

Comments

Chris Wilson Nov. 21, 2019, 6:05 p.m. UTC | #1
Quoting Michal Wajdeczko (2019-11-21 17:58:50)
> Today VGT implements GGTT ballooning on its own, using some private
> static structure to hold info about reserved GGTT nodes and then
> manually update vm.reserved size owned by i915_ggtt.
> 
> As i915_ggtt already manages some custom reserved nodes (like uc_fw)
> move VGT ballooning support to i915_ggtt and make it more generic
> for possible future reuse in other scenarios.
> 
> As a bonus we can drop another place in driver that uses static data.
> 
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> Cc: Xiong Zhang <xiong.y.zhang@intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Jani Nikula <jani.nikula@intel.com>
> ---
>  drivers/gpu/drm/i915/i915_gem_gtt.c | 47 +++++++++++++++++++
>  drivers/gpu/drm/i915/i915_gem_gtt.h | 18 ++++++++
>  drivers/gpu/drm/i915/i915_vgpu.c    | 72 ++++++-----------------------
>  3 files changed, 78 insertions(+), 59 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
> index 6239a9adbf14..d8b2084dab7e 100644
> --- a/drivers/gpu/drm/i915/i915_gem_gtt.c
> +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
> @@ -3827,6 +3827,53 @@ int i915_gem_gtt_insert(struct i915_address_space *vm,
>                                            start, end, DRM_MM_INSERT_EVICT);
>  }
>  
> +int i915_ggtt_balloon_space(struct i915_ggtt *ggtt,
> +                           enum i915_ggtt_balloon_id id,

Do we want to use a fixed number of slots?

So what exactly is wrong with the clients reserving nodes themselves?

I'm not seeing why exactly we want to move to a much more centralised
system and argue over ownership of i915_ggtt_balloon_id.
-Chris
Michal Wajdeczko Nov. 21, 2019, 7:10 p.m. UTC | #2
On Thu, 21 Nov 2019 19:05:25 +0100, Chris Wilson  
<chris@chris-wilson.co.uk> wrote:

> Quoting Michal Wajdeczko (2019-11-21 17:58:50)
>> Today VGT implements GGTT ballooning on its own, using some private
>> static structure to hold info about reserved GGTT nodes and then
>> manually update vm.reserved size owned by i915_ggtt.
>>
>> As i915_ggtt already manages some custom reserved nodes (like uc_fw)
>> move VGT ballooning support to i915_ggtt and make it more generic
>> for possible future reuse in other scenarios.
>>
>> As a bonus we can drop another place in driver that uses static data.
>>
>> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
>> Cc: Xiong Zhang <xiong.y.zhang@intel.com>
>> Cc: Chris Wilson <chris@chris-wilson.co.uk>
>> Cc: Jani Nikula <jani.nikula@intel.com>
>> ---
>>  drivers/gpu/drm/i915/i915_gem_gtt.c | 47 +++++++++++++++++++
>>  drivers/gpu/drm/i915/i915_gem_gtt.h | 18 ++++++++
>>  drivers/gpu/drm/i915/i915_vgpu.c    | 72 ++++++-----------------------
>>  3 files changed, 78 insertions(+), 59 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c  
>> b/drivers/gpu/drm/i915/i915_gem_gtt.c
>> index 6239a9adbf14..d8b2084dab7e 100644
>> --- a/drivers/gpu/drm/i915/i915_gem_gtt.c
>> +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
>> @@ -3827,6 +3827,53 @@ int i915_gem_gtt_insert(struct  
>> i915_address_space *vm,
>>                                            start, end,  
>> DRM_MM_INSERT_EVICT);
>>  }
>>
>> +int i915_ggtt_balloon_space(struct i915_ggtt *ggtt,
>> +                           enum i915_ggtt_balloon_id id,
>
> Do we want to use a fixed number of slots?

Probably not; the enum was just used as a starting point to provide an
explicit name for each of the available nodes and to avoid adding a
GEM_BUG_ON while accessing a fixed-size array.

>
> So what exactly is wrong with the clients reserving nodes themselves?

They still reserve nodes themselves. But now they will not do so in the
shadows, nor use static data to hold the actual nodes.

>
> I'm not seeing why exactly we want to move to a much more centralised
> system

I wanted to make it clear to all clients that i915_ggtt is a good place
to keep the list of reserved nodes (instead of hiding them in private
places).

And to provide a wrapper function that can be easily reused, so that
ballooning behaves the same way everywhere.

An extra bonus is that all such balloon tweaks live in one place (both
the code and the data nodes).

> and argue over ownership of i915_ggtt_balloon_id.

Now it's first-come, first-served, but we should already know all
current usages, and in case of a clash we can either pick a different id
or add a new one.

Note that today some scenarios are exclusive (gvt won't work with guc),
so some node placeholders can be shared - but this can equally be done
without enums, using a union of explicit node definitions:

	I915_GGTT_BALLOON_GUC_TOP = I915_GGTT_BALLOON3,
	I915_GGTT_BALLOON_VGT_UNMAP_TOP = I915_GGTT_BALLOON3,

	union {
		struct {
			struct drm_mm_node uc_fw;
		} guc;
		struct {
			struct drm_mm_node space[4];
		} vgt;
	} balloons;

but direct access to the nodes can also be wrapped in:

struct drm_mm_node *i915_ggtt_get_uc_fw_node(struct i915_ggtt *ggtt)
{
	return &ggtt->balloon[I915_GGTT_BALLOON_GUC_TOP];
}

Michal
Chris Wilson Nov. 21, 2019, 8:46 p.m. UTC | #3
Quoting Michal Wajdeczko (2019-11-21 19:10:46)
> On Thu, 21 Nov 2019 19:05:25 +0100, Chris Wilson  
> <chris@chris-wilson.co.uk> wrote:
> 
> > Quoting Michal Wajdeczko (2019-11-21 17:58:50)
> >> Today VGT implements GGTT ballooning on its own, using some private
> >> static structure to hold info about reserved GGTT nodes and then
> >> manually update vm.reserved size owned by i915_ggtt.
> >>
> >> As i915_ggtt already manages some custom reserved nodes (like uc_fw)
> >> move VGT ballooning support to i915_ggtt and make it more generic
> >> for possible future reuse in other scenarios.
> >>
> >> As a bonus we can drop another place in driver that uses static data.
> >>
> >> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> >> Cc: Xiong Zhang <xiong.y.zhang@intel.com>
> >> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> >> Cc: Jani Nikula <jani.nikula@intel.com>
> >> ---
> >>  drivers/gpu/drm/i915/i915_gem_gtt.c | 47 +++++++++++++++++++
> >>  drivers/gpu/drm/i915/i915_gem_gtt.h | 18 ++++++++
> >>  drivers/gpu/drm/i915/i915_vgpu.c    | 72 ++++++-----------------------
> >>  3 files changed, 78 insertions(+), 59 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c  
> >> b/drivers/gpu/drm/i915/i915_gem_gtt.c
> >> index 6239a9adbf14..d8b2084dab7e 100644
> >> --- a/drivers/gpu/drm/i915/i915_gem_gtt.c
> >> +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
> >> @@ -3827,6 +3827,53 @@ int i915_gem_gtt_insert(struct  
> >> i915_address_space *vm,
> >>                                            start, end,  
> >> DRM_MM_INSERT_EVICT);
> >>  }
> >>
> >> +int i915_ggtt_balloon_space(struct i915_ggtt *ggtt,
> >> +                           enum i915_ggtt_balloon_id id,
> >
> > Do we want to use a fixed number of slots?
> 
> Probably not; the enum was just used as a starting point to provide an
> explicit name for each of the available nodes and to avoid adding a
> GEM_BUG_ON while accessing a fixed-size array.
> 
> >
> > So what exactly is wrong with the clients reserving nodes themselves?
> 
> They still reserve nodes themselves. But now they will not do so in the
> shadows, nor use static data to hold the actual nodes.

But that's their decision; all we provide is the service to dole out the
ggtt (or not, depending on our whim). If they use a static allocation for
their nodes and not kmalloc -- we don't care; why should we, as long as
their code is correct?

> > I'm not seeing why exactly we want to move to a much more centralised
> > system
> 
> I wanted to make it clear to all clients that i915_ggtt is a good place
> to keep the list of reserved nodes (instead of hiding them in private
> places).
> 
> And to provide a wrapper function that can be easily reused, so that
> ballooning behaves the same way everywhere.
> 
> An extra bonus is that all such balloon tweaks live in one place (both
> the code and the data nodes).

So I think you would be happy with just a 

struct i915_ggtt_balloon (or whatever) {
	struct drm_mm_node node;
	struct list_head link;
	void *private;
} *i915_ggtt_reserve_balloon(i915/ggtt, u64 bottom, u64 top);

... but you are already past the point where you might as well just let
the client allocate their own struct and pass us the drm_mm_node. So

i915_ggtt_reserve(i915/ggtt, drm_mm_node, u64 bottom, u64 top, flags?)
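To make that shape concrete, here is a rough sketch of the caller-owned-node variant - again with toy types, since the real helper would sit on top of i915_gem_gtt_reserve() and the vm.reserved bookkeeping; nothing below is actual i915 API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy stand-ins (illustrative only, not the real i915 types). */
struct toy_node { uint64_t start, size; bool allocated; };
struct toy_ggtt { uint64_t reserved; };

/*
 * Sketch of the proposed i915_ggtt_reserve(): the caller owns the node,
 * so the ggtt core needs no fixed array of balloon slots and no id enum.
 */
static int toy_ggtt_reserve(struct toy_ggtt *ggtt, struct toy_node *node,
			    uint64_t bottom, uint64_t top)
{
	if (bottom >= top)
		return -1;

	node->start = bottom;
	node->size = top - bottom;
	node->allocated = true;
	ggtt->reserved += node->size;
	return 0;
}

static void toy_ggtt_unreserve(struct toy_ggtt *ggtt, struct toy_node *node)
{
	if (!node->allocated)
		return;

	ggtt->reserved -= node->size;
	node->allocated = false;
}
```

A vgt-style client would then keep its own array of four nodes (static or not - its choice) and call the reserve helper on each range, which is exactly the ownership split being argued for here.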

> > and argue over ownership of i915_ggtt_balloon_id.
> 
> Now it's first-come, first-served, but we should already know all
> current usages, and in case of a clash we can either pick a different id
> or add a new one.
> 
> Note that today some scenarios are exclusive (gvt won't work with guc),
> so some node placeholders can be shared - but this can equally be done
> without enums, using a union of explicit node definitions:
> 
>         I915_GGTT_BALLOON_GUC_TOP = I915_GGTT_BALLOON3,
>         I915_GGTT_BALLOON_VGT_UNMAP_TOP = I915_GGTT_BALLOON3,
> 
>         union {
>                 struct {
>                         struct drm_mm_node uc_fw;
>                 } guc;
>                 struct {
>                         struct drm_mm_node space[4];
>                 } vgt;
>         } balloons;
> 
> but direct access to the nodes can also be wrapped in:
> 
> struct drm_mm_node *i915_ggtt_get_uc_fw_node(struct i915_ggtt *ggtt)
> {
>         return &ggtt->balloon[I915_GGTT_BALLOON_GUC_TOP];
> }

I am not yet sold on putting an array of nodes here, we already have a
list of them all :-p
-Chris
Jani Nikula Nov. 25, 2019, 10:51 a.m. UTC | #4
On Thu, 21 Nov 2019, Michal Wajdeczko <michal.wajdeczko@intel.com> wrote:
> Today VGT implements GGTT ballooning on its own, using some private
> static structure to hold info about reserved GGTT nodes and then
> manually update vm.reserved size owned by i915_ggtt.
>
> As i915_ggtt already manages some custom reserved nodes (like uc_fw)
> move VGT ballooning support to i915_ggtt and make it more generic
> for possible future reuse in other scenarios.
>
> As a bonus we can drop another place in driver that uses static data.
>
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> Cc: Xiong Zhang <xiong.y.zhang@intel.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Jani Nikula <jani.nikula@intel.com>
> ---
>  drivers/gpu/drm/i915/i915_gem_gtt.c | 47 +++++++++++++++++++
>  drivers/gpu/drm/i915/i915_gem_gtt.h | 18 ++++++++

Maybe not part of this patch, but should we start moving this stuff out
of the top level instead of adding more?

BR,
Jani.

>  drivers/gpu/drm/i915/i915_vgpu.c    | 72 ++++++-----------------------
>  3 files changed, 78 insertions(+), 59 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
> index 6239a9adbf14..d8b2084dab7e 100644
> --- a/drivers/gpu/drm/i915/i915_gem_gtt.c
> +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
> @@ -3827,6 +3827,53 @@ int i915_gem_gtt_insert(struct i915_address_space *vm,
>  					   start, end, DRM_MM_INSERT_EVICT);
>  }
>  
> +int i915_ggtt_balloon_space(struct i915_ggtt *ggtt,
> +			    enum i915_ggtt_balloon_id id,
> +			    u64 start, u64 end)
> +{
> +	struct drm_mm_node *node = &ggtt->balloon[id];
> +	u64 size = end - start;
> +	int ret;
> +
> +	GEM_BUG_ON(id < 0 || id >= ARRAY_SIZE(ggtt->balloon));
> +	GEM_BUG_ON(start >= end);
> +	DRM_DEV_DEBUG_DRIVER(ggtt->vm.i915->drm.dev,
> +			     "%sGGTT [%#llx-%#llx] %lluK\n",
> +			     "ballooning ", start, end, size / SZ_1K);
> +
> +	ret = i915_gem_gtt_reserve(&ggtt->vm, node, size, start,
> +				   I915_COLOR_UNEVICTABLE, PIN_NOEVICT);
> +	if (unlikely(ret)) {
> +		DRM_DEV_ERROR(ggtt->vm.i915->drm.dev,
> +			      "%sGGTT [%#llx-%#llx] %lluK\n",
> +			      "Failed to balloon ", node->start,
> +			      node->start + node->size, node->size / SZ_1K);
> +	} else {
> +		ggtt->vm.reserved += node->size;
> +	}
> +
> +	return ret;
> +}
> +
> +void i915_ggtt_deballoon_space(struct i915_ggtt *ggtt,
> +			       enum i915_ggtt_balloon_id id)
> +{
> +	struct drm_mm_node *node = &ggtt->balloon[id];
> +
> +	GEM_BUG_ON(id < 0 || id >= ARRAY_SIZE(ggtt->balloon));
> +	if (!drm_mm_node_allocated(node))
> +		return;
> +
> +	DRM_DEV_DEBUG_DRIVER(ggtt->vm.i915->drm.dev,
> +			     "%sGGTT [%#llx-%#llx] %lluK\n",
> +			     "deballooning ", node->start,
> +			     node->start + node->size, node->size / SZ_1K);
> +
> +	GEM_BUG_ON(ggtt->vm.reserved < node->size);
> +	ggtt->vm.reserved -= node->size;
> +	drm_mm_remove_node(node);
> +}
> +
>  #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
>  #include "selftests/mock_gtt.c"
>  #include "selftests/i915_gem_gtt.c"
> diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
> index 402283ce2864..ba40cae0e97a 100644
> --- a/drivers/gpu/drm/i915/i915_gem_gtt.h
> +++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
> @@ -432,6 +432,12 @@ struct i915_ggtt {
>  
>  	struct drm_mm_node error_capture;
>  	struct drm_mm_node uc_fw;
> +
> +	/*
> +	 * There might be up to 4 regions ballooned.
> +	 * (2 for mappable and 2 for unmappable graphic memory)
> +	 */
> +	struct drm_mm_node balloon[4];
>  };
>  
>  struct i915_ppgtt {
> @@ -649,6 +655,18 @@ int i915_gem_gtt_insert(struct i915_address_space *vm,
>  			u64 size, u64 alignment, unsigned long color,
>  			u64 start, u64 end, unsigned int flags);
>  
> +enum i915_ggtt_balloon_id {
> +	I915_GGTT_BALLOON0,
> +	I915_GGTT_BALLOON1,
> +	I915_GGTT_BALLOON2,
> +	I915_GGTT_BALLOON3,
> +};
> +
> +int i915_ggtt_balloon_space(struct i915_ggtt *ggtt,
> +			    enum i915_ggtt_balloon_id id, u64 start, u64 end);
> +void i915_ggtt_deballoon_space(struct i915_ggtt *ggtt,
> +			       enum i915_ggtt_balloon_id id);
> +
>  /* Flags used by pin/bind&friends. */
>  #define PIN_NOEVICT		BIT_ULL(0)
>  #define PIN_NOSEARCH		BIT_ULL(1)
> diff --git a/drivers/gpu/drm/i915/i915_vgpu.c b/drivers/gpu/drm/i915/i915_vgpu.c
> index 968be26735c5..195c27725cbb 100644
> --- a/drivers/gpu/drm/i915/i915_vgpu.c
> +++ b/drivers/gpu/drm/i915/i915_vgpu.c
> @@ -106,32 +106,6 @@ bool intel_vgpu_has_full_ppgtt(struct drm_i915_private *dev_priv)
>  	return dev_priv->vgpu.caps & VGT_CAPS_FULL_PPGTT;
>  }
>  
> -struct _balloon_info_ {
> -	/*
> -	 * There are up to 2 regions per mappable/unmappable graphic
> -	 * memory that might be ballooned. Here, index 0/1 is for mappable
> -	 * graphic memory, 2/3 for unmappable graphic memory.
> -	 */
> -	struct drm_mm_node space[4];
> -};
> -
> -static struct _balloon_info_ bl_info;
> -
> -static void vgt_deballoon_space(struct i915_ggtt *ggtt,
> -				struct drm_mm_node *node)
> -{
> -	if (!drm_mm_node_allocated(node))
> -		return;
> -
> -	DRM_DEBUG_DRIVER("deballoon space: range [0x%llx - 0x%llx] %llu KiB.\n",
> -			 node->start,
> -			 node->start + node->size,
> -			 node->size / 1024);
> -
> -	ggtt->vm.reserved -= node->size;
> -	drm_mm_remove_node(node);
> -}
> -
>  /**
>   * intel_vgt_deballoon - deballoon reserved graphics address trunks
>   * @ggtt: the global GGTT from which we reserved earlier
> @@ -149,28 +123,7 @@ void intel_vgt_deballoon(struct i915_ggtt *ggtt)
>  	DRM_DEBUG("VGT deballoon.\n");
>  
>  	for (i = 0; i < 4; i++)
> -		vgt_deballoon_space(ggtt, &bl_info.space[i]);
> -}
> -
> -static int vgt_balloon_space(struct i915_ggtt *ggtt,
> -			     struct drm_mm_node *node,
> -			     unsigned long start, unsigned long end)
> -{
> -	unsigned long size = end - start;
> -	int ret;
> -
> -	if (start >= end)
> -		return -EINVAL;
> -
> -	DRM_INFO("balloon space: range [ 0x%lx - 0x%lx ] %lu KiB.\n",
> -		 start, end, size / 1024);
> -	ret = i915_gem_gtt_reserve(&ggtt->vm, node,
> -				   size, start, I915_COLOR_UNEVICTABLE,
> -				   0);
> -	if (!ret)
> -		ggtt->vm.reserved += size;
> -
> -	return ret;
> +		i915_ggtt_deballoon_space(ggtt, I915_GGTT_BALLOON0 + i);
>  }
>  
>  /**
> @@ -256,32 +209,33 @@ int intel_vgt_balloon(struct i915_ggtt *ggtt)
>  
>  	/* Unmappable graphic memory ballooning */
>  	if (unmappable_base > ggtt->mappable_end) {
> -		ret = vgt_balloon_space(ggtt, &bl_info.space[2],
> -					ggtt->mappable_end, unmappable_base);
> +		ret = i915_ggtt_balloon_space(ggtt, I915_GGTT_BALLOON2,
> +					      ggtt->mappable_end,
> +					      unmappable_base);
>  
>  		if (ret)
>  			goto err;
>  	}
>  
>  	if (unmappable_end < ggtt_end) {
> -		ret = vgt_balloon_space(ggtt, &bl_info.space[3],
> -					unmappable_end, ggtt_end);
> +		ret = i915_ggtt_balloon_space(ggtt, I915_GGTT_BALLOON3,
> +					      unmappable_end, ggtt_end);
>  		if (ret)
>  			goto err_upon_mappable;
>  	}
>  
>  	/* Mappable graphic memory ballooning */
>  	if (mappable_base) {
> -		ret = vgt_balloon_space(ggtt, &bl_info.space[0],
> -					0, mappable_base);
> +		ret = i915_ggtt_balloon_space(ggtt, I915_GGTT_BALLOON0,
> +					      0, mappable_base);
>  
>  		if (ret)
>  			goto err_upon_unmappable;
>  	}
>  
>  	if (mappable_end < ggtt->mappable_end) {
> -		ret = vgt_balloon_space(ggtt, &bl_info.space[1],
> -					mappable_end, ggtt->mappable_end);
> +		ret = i915_ggtt_balloon_space(ggtt, I915_GGTT_BALLOON1,
> +					      mappable_end, ggtt->mappable_end);
>  
>  		if (ret)
>  			goto err_below_mappable;
> @@ -291,11 +245,11 @@ int intel_vgt_balloon(struct i915_ggtt *ggtt)
>  	return 0;
>  
>  err_below_mappable:
> -	vgt_deballoon_space(ggtt, &bl_info.space[0]);
> +	i915_ggtt_deballoon_space(ggtt, I915_GGTT_BALLOON0);
>  err_upon_unmappable:
> -	vgt_deballoon_space(ggtt, &bl_info.space[3]);
> +	i915_ggtt_deballoon_space(ggtt, I915_GGTT_BALLOON3);
>  err_upon_mappable:
> -	vgt_deballoon_space(ggtt, &bl_info.space[2]);
> +	i915_ggtt_deballoon_space(ggtt, I915_GGTT_BALLOON2);
>  err:
>  	DRM_ERROR("VGT balloon fail\n");
>  	return ret;
Chris Wilson Nov. 25, 2019, 11:30 a.m. UTC | #5
Quoting Jani Nikula (2019-11-25 10:51:53)
> On Thu, 21 Nov 2019, Michal Wajdeczko <michal.wajdeczko@intel.com> wrote:
> > Today VGT implements GGTT ballooning on its own, using some private
> > static structure to hold info about reserved GGTT nodes and then
> > manually update vm.reserved size owned by i915_ggtt.
> >
> > As i915_ggtt already manages some custom reserved nodes (like uc_fw)
> > move VGT ballooning support to i915_ggtt and make it more generic
> > for possible future reuse in other scenarios.
> >
> > As a bonus we can drop another place in driver that uses static data.
> >
> > Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> > Cc: Xiong Zhang <xiong.y.zhang@intel.com>
> > Cc: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Jani Nikula <jani.nikula@intel.com>
> > ---
> >  drivers/gpu/drm/i915/i915_gem_gtt.c | 47 +++++++++++++++++++
> >  drivers/gpu/drm/i915/i915_gem_gtt.h | 18 ++++++++
> 
> Maybe not part of this patch, but should we start moving this stuff out
> of the top level instead of adding more?

It's on my todo list to split this between the HW pagetable management and
GEM and other interfaces on top.
-Chris

Patch

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 6239a9adbf14..d8b2084dab7e 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -3827,6 +3827,53 @@  int i915_gem_gtt_insert(struct i915_address_space *vm,
 					   start, end, DRM_MM_INSERT_EVICT);
 }
 
+int i915_ggtt_balloon_space(struct i915_ggtt *ggtt,
+			    enum i915_ggtt_balloon_id id,
+			    u64 start, u64 end)
+{
+	struct drm_mm_node *node = &ggtt->balloon[id];
+	u64 size = end - start;
+	int ret;
+
+	GEM_BUG_ON(id < 0 || id >= ARRAY_SIZE(ggtt->balloon));
+	GEM_BUG_ON(start >= end);
+	DRM_DEV_DEBUG_DRIVER(ggtt->vm.i915->drm.dev,
+			     "%sGGTT [%#llx-%#llx] %lluK\n",
+			     "ballooning ", start, end, size / SZ_1K);
+
+	ret = i915_gem_gtt_reserve(&ggtt->vm, node, size, start,
+				   I915_COLOR_UNEVICTABLE, PIN_NOEVICT);
+	if (unlikely(ret)) {
+		DRM_DEV_ERROR(ggtt->vm.i915->drm.dev,
+			      "%sGGTT [%#llx-%#llx] %lluK\n",
+			      "Failed to balloon ", node->start,
+			      node->start + node->size, node->size / SZ_1K);
+	} else {
+		ggtt->vm.reserved += node->size;
+	}
+
+	return ret;
+}
+
+void i915_ggtt_deballoon_space(struct i915_ggtt *ggtt,
+			       enum i915_ggtt_balloon_id id)
+{
+	struct drm_mm_node *node = &ggtt->balloon[id];
+
+	GEM_BUG_ON(id < 0 || id >= ARRAY_SIZE(ggtt->balloon));
+	if (!drm_mm_node_allocated(node))
+		return;
+
+	DRM_DEV_DEBUG_DRIVER(ggtt->vm.i915->drm.dev,
+			     "%sGGTT [%#llx-%#llx] %lluK\n",
+			     "deballooning ", node->start,
+			     node->start + node->size, node->size / SZ_1K);
+
+	GEM_BUG_ON(ggtt->vm.reserved < node->size);
+	ggtt->vm.reserved -= node->size;
+	drm_mm_remove_node(node);
+}
+
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftests/mock_gtt.c"
 #include "selftests/i915_gem_gtt.c"
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 402283ce2864..ba40cae0e97a 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -432,6 +432,12 @@  struct i915_ggtt {
 
 	struct drm_mm_node error_capture;
 	struct drm_mm_node uc_fw;
+
+	/*
+	 * There might be up to 4 regions ballooned.
+	 * (2 for mappable and 2 for unmappable graphic memory)
+	 */
+	struct drm_mm_node balloon[4];
 };
 
 struct i915_ppgtt {
@@ -649,6 +655,18 @@  int i915_gem_gtt_insert(struct i915_address_space *vm,
 			u64 size, u64 alignment, unsigned long color,
 			u64 start, u64 end, unsigned int flags);
 
+enum i915_ggtt_balloon_id {
+	I915_GGTT_BALLOON0,
+	I915_GGTT_BALLOON1,
+	I915_GGTT_BALLOON2,
+	I915_GGTT_BALLOON3,
+};
+
+int i915_ggtt_balloon_space(struct i915_ggtt *ggtt,
+			    enum i915_ggtt_balloon_id id, u64 start, u64 end);
+void i915_ggtt_deballoon_space(struct i915_ggtt *ggtt,
+			       enum i915_ggtt_balloon_id id);
+
 /* Flags used by pin/bind&friends. */
 #define PIN_NOEVICT		BIT_ULL(0)
 #define PIN_NOSEARCH		BIT_ULL(1)
diff --git a/drivers/gpu/drm/i915/i915_vgpu.c b/drivers/gpu/drm/i915/i915_vgpu.c
index 968be26735c5..195c27725cbb 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.c
+++ b/drivers/gpu/drm/i915/i915_vgpu.c
@@ -106,32 +106,6 @@  bool intel_vgpu_has_full_ppgtt(struct drm_i915_private *dev_priv)
 	return dev_priv->vgpu.caps & VGT_CAPS_FULL_PPGTT;
 }
 
-struct _balloon_info_ {
-	/*
-	 * There are up to 2 regions per mappable/unmappable graphic
-	 * memory that might be ballooned. Here, index 0/1 is for mappable
-	 * graphic memory, 2/3 for unmappable graphic memory.
-	 */
-	struct drm_mm_node space[4];
-};
-
-static struct _balloon_info_ bl_info;
-
-static void vgt_deballoon_space(struct i915_ggtt *ggtt,
-				struct drm_mm_node *node)
-{
-	if (!drm_mm_node_allocated(node))
-		return;
-
-	DRM_DEBUG_DRIVER("deballoon space: range [0x%llx - 0x%llx] %llu KiB.\n",
-			 node->start,
-			 node->start + node->size,
-			 node->size / 1024);
-
-	ggtt->vm.reserved -= node->size;
-	drm_mm_remove_node(node);
-}
-
 /**
  * intel_vgt_deballoon - deballoon reserved graphics address trunks
  * @ggtt: the global GGTT from which we reserved earlier
@@ -149,28 +123,7 @@  void intel_vgt_deballoon(struct i915_ggtt *ggtt)
 	DRM_DEBUG("VGT deballoon.\n");
 
 	for (i = 0; i < 4; i++)
-		vgt_deballoon_space(ggtt, &bl_info.space[i]);
-}
-
-static int vgt_balloon_space(struct i915_ggtt *ggtt,
-			     struct drm_mm_node *node,
-			     unsigned long start, unsigned long end)
-{
-	unsigned long size = end - start;
-	int ret;
-
-	if (start >= end)
-		return -EINVAL;
-
-	DRM_INFO("balloon space: range [ 0x%lx - 0x%lx ] %lu KiB.\n",
-		 start, end, size / 1024);
-	ret = i915_gem_gtt_reserve(&ggtt->vm, node,
-				   size, start, I915_COLOR_UNEVICTABLE,
-				   0);
-	if (!ret)
-		ggtt->vm.reserved += size;
-
-	return ret;
+		i915_ggtt_deballoon_space(ggtt, I915_GGTT_BALLOON0 + i);
 }
 
 /**
@@ -256,32 +209,33 @@  int intel_vgt_balloon(struct i915_ggtt *ggtt)
 
 	/* Unmappable graphic memory ballooning */
 	if (unmappable_base > ggtt->mappable_end) {
-		ret = vgt_balloon_space(ggtt, &bl_info.space[2],
-					ggtt->mappable_end, unmappable_base);
+		ret = i915_ggtt_balloon_space(ggtt, I915_GGTT_BALLOON2,
+					      ggtt->mappable_end,
+					      unmappable_base);
 
 		if (ret)
 			goto err;
 	}
 
 	if (unmappable_end < ggtt_end) {
-		ret = vgt_balloon_space(ggtt, &bl_info.space[3],
-					unmappable_end, ggtt_end);
+		ret = i915_ggtt_balloon_space(ggtt, I915_GGTT_BALLOON3,
+					      unmappable_end, ggtt_end);
 		if (ret)
 			goto err_upon_mappable;
 	}
 
 	/* Mappable graphic memory ballooning */
 	if (mappable_base) {
-		ret = vgt_balloon_space(ggtt, &bl_info.space[0],
-					0, mappable_base);
+		ret = i915_ggtt_balloon_space(ggtt, I915_GGTT_BALLOON0,
+					      0, mappable_base);
 
 		if (ret)
 			goto err_upon_unmappable;
 	}
 
 	if (mappable_end < ggtt->mappable_end) {
-		ret = vgt_balloon_space(ggtt, &bl_info.space[1],
-					mappable_end, ggtt->mappable_end);
+		ret = i915_ggtt_balloon_space(ggtt, I915_GGTT_BALLOON1,
+					      mappable_end, ggtt->mappable_end);
 
 		if (ret)
 			goto err_below_mappable;
@@ -291,11 +245,11 @@  int intel_vgt_balloon(struct i915_ggtt *ggtt)
 	return 0;
 
 err_below_mappable:
-	vgt_deballoon_space(ggtt, &bl_info.space[0]);
+	i915_ggtt_deballoon_space(ggtt, I915_GGTT_BALLOON0);
 err_upon_unmappable:
-	vgt_deballoon_space(ggtt, &bl_info.space[3]);
+	i915_ggtt_deballoon_space(ggtt, I915_GGTT_BALLOON3);
 err_upon_mappable:
-	vgt_deballoon_space(ggtt, &bl_info.space[2]);
+	i915_ggtt_deballoon_space(ggtt, I915_GGTT_BALLOON2);
 err:
 	DRM_ERROR("VGT balloon fail\n");
 	return ret;