[v2] drm/i915: Shrink the request kmem_cache on allocation error

Message ID 20180116131507.7791-1-chris@chris-wilson.co.uk (mailing list archive)
State New, archived
Headers show

Commit Message

Chris Wilson Jan. 16, 2018, 1:15 p.m. UTC
If we fail to allocate a new request, make sure we recover the pages
that are in the process of being freed by inserting an RCU barrier.

v2: Comment before the shrink and barrier in the error path.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/i915_gem_request.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

Comments

Tvrtko Ursulin Jan. 16, 2018, 3:19 p.m. UTC | #1
On 16/01/2018 13:15, Chris Wilson wrote:
> If we fail to allocate a new request, make sure we recover the pages
> that are in the process of being freed by inserting an RCU barrier.
> 
> v2: Comment before the shrink and barrier in the error path.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>   drivers/gpu/drm/i915/i915_gem_request.c | 11 +++++++++++
>   1 file changed, 11 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
> index 72bdc203716f..a0f451b4a4e8 100644
> --- a/drivers/gpu/drm/i915/i915_gem_request.c
> +++ b/drivers/gpu/drm/i915/i915_gem_request.c
> @@ -696,6 +696,17 @@ i915_gem_request_alloc(struct intel_engine_cs *engine,
>   		if (ret)
>   			goto err_unreserve;
>   
> +		/*
> +		 * We've forced the client to stall and catch up with whatever
> +		 * backlog there might have been. As we are assuming that we
> +		 * caused the mempressure, now is an opportune time to
> +		 * recover as much memory from the request pool as is possible.
> +		 * Having already penalized the client to stall, we spend
> +		 * a little extra time to re-optimise page allocation.
> +		 */
> +		kmem_cache_shrink(dev_priv->requests);
> +		rcu_barrier(); /* Recover the TYPESAFE_BY_RCU pages */
> +
>   		req = kmem_cache_alloc(dev_priv->requests, GFP_KERNEL);
>   		if (!req) {
>   			ret = -ENOMEM;
> 

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko
Patch

diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 72bdc203716f..a0f451b4a4e8 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -696,6 +696,17 @@ i915_gem_request_alloc(struct intel_engine_cs *engine,
 		if (ret)
 			goto err_unreserve;
 
+		/*
+		 * We've forced the client to stall and catch up with whatever
+		 * backlog there might have been. As we are assuming that we
+		 * caused the mempressure, now is an opportune time to
+		 * recover as much memory from the request pool as is possible.
+		 * Having already penalized the client to stall, we spend
+		 * a little extra time to re-optimise page allocation.
+		 */
+		kmem_cache_shrink(dev_priv->requests);
+		rcu_barrier(); /* Recover the TYPESAFE_BY_RCU pages */
+
 		req = kmem_cache_alloc(dev_priv->requests, GFP_KERNEL);
 		if (!req) {
 			ret = -ENOMEM;