From patchwork Tue Apr 26 20:06:09 2016
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Tue, 26 Apr 2016 21:06:09 +0100
Message-Id: <1461701180-895-15-git-send-email-chris@chris-wilson.co.uk>
In-Reply-To: <1461701180-895-1-git-send-email-chris@chris-wilson.co.uk>
References: <1461701180-895-1-git-send-email-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH v6 14/25] drm/i915: Manually unwind after a failed request allocation
In the next patches, we want to move the work out of freeing the request
and into its retirement (so that we can free the request without
requiring the struct_mutex). This means that we can no longer rely on
unreferencing the request to completely tear it down, and so we need to
manually unwind the failed allocation. In doing so, we reorder the
allocation to make the unwind simple (and to ensure that we don't try to
unwind a partial request that may have modified global state), which
means we end up pushing the initial preallocation down into the engine
request initialisation functions, where we have the requisite control
over the state of the request.

Moving the initial preallocation into the engine is less than ideal: it
moves logic for handling a specific request-allocation problem out of
the common code. On the other hand, it does give the backends
significantly more flexibility in performing their allocations.

Signed-off-by: Chris Wilson
Cc: Mika Kuoppala
Cc: Tvrtko Ursulin
Cc: Joonas Lahtinen
---
 drivers/gpu/drm/i915/i915_gem.c         | 28 +++++++++-------------------
 drivers/gpu/drm/i915/intel_lrc.c        | 16 ++++++++++++++--
 drivers/gpu/drm/i915/intel_ringbuffer.c |  2 +-
 3 files changed, 24 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 0e27484bd28a..d7ff5e79182f 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2766,15 +2766,6 @@ __i915_gem_request_alloc(struct intel_engine_cs *engine,
 	req->ctx = ctx;
 	i915_gem_context_reference(req->ctx);
 
-	if (i915.enable_execlists)
-		ret = intel_logical_ring_alloc_request_extras(req);
-	else
-		ret = intel_ring_alloc_request_extras(req);
-	if (ret) {
-		i915_gem_context_unreference(req->ctx);
-		goto err;
-	}
-
 	/*
 	 * Reserve space in the ring buffer for all the commands required to
 	 * eventually emit this request. This is to guarantee that the
@@ -2783,20 +2774,19 @@ __i915_gem_request_alloc(struct intel_engine_cs *engine,
 	 * away, e.g. because a GPU scheduler has deferred it.
 	 */
 	req->reserved_space = MIN_SPACE_FOR_ADD_REQUEST;
-	ret = intel_ring_begin(req, 0);
-	if (ret) {
-		/*
-		 * At this point, the request is fully allocated even if not
-		 * fully prepared. Thus it can be cleaned up using the proper
-		 * free code.
-		 */
-		i915_gem_request_unreference(req);
-		return ret;
-	}
+
+	if (i915.enable_execlists)
+		ret = intel_logical_ring_alloc_request_extras(req);
+	else
+		ret = intel_ring_alloc_request_extras(req);
+	if (ret)
+		goto err_ctx;
 
 	*req_out = req;
 	return 0;
 
+err_ctx:
+	i915_gem_context_unreference(ctx);
 err:
 	kmem_cache_free(dev_priv->requests, req);
 	return ret;
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 910044cf143e..01517dd7069b 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -698,7 +698,7 @@ static int execlists_move_to_gpu(struct drm_i915_gem_request *req,
 
 int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request)
 {
-	int ret = 0;
+	int ret;
 
 	request->ringbuf = request->ctx->engine[request->engine->id].ringbuf;
 
@@ -715,9 +715,21 @@ int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request
 		return ret;
 	}
 
-	if (request->ctx != request->i915->kernel_context)
+	if (request->ctx != request->i915->kernel_context) {
 		ret = intel_lr_context_pin(request->ctx, request->engine);
+		if (ret)
+			return ret;
+	}
 
+	ret = intel_ring_begin(request, 0);
+	if (ret)
+		goto err_unpin;
+
+	return 0;
+
+err_unpin:
+	if (request->ctx != request->i915->kernel_context)
+		intel_lr_context_unpin(request->ctx, request->engine);
 	return ret;
 }
 
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index ba5946b9fa06..1193372f74fd 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -2333,7 +2333,7 @@ int intel_engine_idle(struct intel_engine_cs *engine)
 int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request)
 {
 	request->ringbuf = request->engine->buffer;
-	return 0;
+	return intel_ring_begin(request, 0);
 }
 
 static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
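
For reference, the reordering above follows the usual kernel unwind
idiom: acquire state in an order whose reversal is trivial, run the
fallible step last, and release in reverse order through a ladder of
goto labels. Below is a minimal, self-contained sketch of that idiom
only; the names (make_request, ctx_get, ctx_put, engine_prepare) are
hypothetical stand-ins, not the i915 API.

/*
 * Sketch of the goto-ladder unwind pattern, not driver code.
 */
#include <errno.h>
#include <stdlib.h>

struct ctx { int refcount; };
struct request { struct ctx *ctx; };

static void ctx_get(struct ctx *c) { c->refcount++; }
static void ctx_put(struct ctx *c) { c->refcount--; }

/* Stand-in for the engine-specific preparation (e.g. reserving ring
 * space); on failure it is assumed to have cleaned up after itself. */
static int engine_prepare(struct request *rq) { (void)rq; return 0; }

static int make_request(struct ctx *c, struct request **out)
{
	struct request *rq;
	int ret;

	rq = malloc(sizeof(*rq));
	if (!rq)
		return -ENOMEM;	/* nothing acquired yet, nothing to unwind */

	rq->ctx = c;
	ctx_get(c);		/* trivially reversible: just a reference */

	/*
	 * The step that can fail runs last, after only easily reversible
	 * state has been touched, so the error path releases in exact
	 * reverse order of acquisition and never sees a partial request.
	 */
	ret = engine_prepare(rq);
	if (ret)
		goto err_ctx;

	*out = rq;
	return 0;

err_ctx:
	ctx_put(c);
	free(rq);
	return ret;
}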