From patchwork Mon May 21 08:17:52 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Zhenyu Wang
X-Patchwork-Id: 10414241
From: Zhenyu Wang
To: intel-gfx@lists.freedesktop.org
Cc: intel-gvt-dev@lists.freedesktop.org
Date: Mon, 21 May 2018 16:17:52 +0800
Message-Id: <20180521081752.31056-1-zhenyuw@linux.intel.com>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180518101305.8840-1-zhenyuw@linux.intel.com>
References: <20180518101305.8840-1-zhenyuw@linux.intel.com>
Subject: [Intel-gfx] [PATCH v2] drm/i915/gvt: Fix crash after request->hw_context change

When we do shadowing, the workload's request might not be allocated
yet, so we still need the shadow context's object. And when completing
a workload, delay zeroing the workload's request pointer until after it
has been used to update the guest context.

v2: Move request allocation earlier, since we already try to track
shadow status depending on request state; this also makes it easier to
use request->hw_context for the target engine context reference.
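For illustration only, not part of the patch: a condensed sketch of the
complete_current_workload() flow before and after this change, based on
the hunks below. update_guest_context() is the existing helper in
scheduler.c which, after the hw_context change, reaches the shadow
context state through workload->req; "..." marks elided steps.

	/* Before: the request pointer is zeroed up front, so the
	 * guest context update further down dereferences a NULL
	 * workload->req and crashes.
	 */
	rq = fetch_and_zero(&workload->req);	/* workload->req == NULL */
	if (rq) {
		...
		update_guest_context(workload);	/* reads workload->req */
		...
		i915_request_put(rq);
	}

	/* After: workload->req stays valid until the guest context
	 * update has used it; it is zeroed only at the final put.
	 */
	rq = workload->req;
	if (rq) {
		...
		update_guest_context(workload);	/* workload->req still valid */
		...
		i915_request_put(fetch_and_zero(&workload->req));
	}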
Fixes: 1fc44d9b1afb ("drm/i915: Store a pointer to intel_context in i915_request")
Cc: Chris Wilson
Cc: Tvrtko Ursulin
Cc: Zhi Wang
Cc: Weinan Li
Signed-off-by: Zhenyu Wang
Reviewed-by: Chris Wilson
---
 drivers/gpu/drm/i915/gvt/scheduler.c | 52 +++++++++-------------------
 1 file changed, 16 insertions(+), 36 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
index e1760030dda1..7f5e01df95ee 100644
--- a/drivers/gpu/drm/i915/gvt/scheduler.c
+++ b/drivers/gpu/drm/i915/gvt/scheduler.c
@@ -348,6 +348,7 @@ int intel_gvt_scan_and_shadow_workload(struct intel_vgpu_workload *workload)
 	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
 	struct intel_engine_cs *engine = dev_priv->engine[workload->ring_id];
 	struct intel_context *ce;
+	struct i915_request *rq;
 	int ret;
 
 	lockdep_assert_held(&dev_priv->drm.struct_mutex);
@@ -386,46 +387,26 @@ int intel_gvt_scan_and_shadow_workload(struct intel_vgpu_workload *workload)
 		goto err_shadow;
 	}
 
-	ret = populate_shadow_context(workload);
-	if (ret)
-		goto err_shadow;
-
-	return 0;
-
-err_shadow:
-	release_shadow_wa_ctx(&workload->wa_ctx);
-err_unpin:
-	intel_context_unpin(ce);
-	return ret;
-}
-
-static int intel_gvt_generate_request(struct intel_vgpu_workload *workload)
-{
-	int ring_id = workload->ring_id;
-	struct drm_i915_private *dev_priv = workload->vgpu->gvt->dev_priv;
-	struct i915_request *rq;
-	struct intel_vgpu *vgpu = workload->vgpu;
-	struct intel_vgpu_submission *s = &vgpu->submission;
-	struct i915_gem_context *shadow_ctx = s->shadow_ctx;
-	int ret;
-
-	rq = i915_request_alloc(dev_priv->engine[ring_id], shadow_ctx);
+	rq = i915_request_alloc(engine, shadow_ctx);
 	if (IS_ERR(rq)) {
 		gvt_vgpu_err("fail to allocate gem request\n");
 		ret = PTR_ERR(rq);
-		goto err_unpin;
+		goto err_shadow;
 	}
-
-	gvt_dbg_sched("ring id %d get i915 gem request %p\n", ring_id, rq);
-
 	workload->req = i915_request_get(rq);
-	ret = copy_workload_to_ring_buffer(workload);
+
+	ret = populate_shadow_context(workload);
 	if (ret)
-		goto err_unpin;
-	return 0;
+		goto err_req;
 
-err_unpin:
+	return 0;
+err_req:
+	rq = fetch_and_zero(&workload->req);
+	i915_request_put(rq);
+err_shadow:
 	release_shadow_wa_ctx(&workload->wa_ctx);
+err_unpin:
+	intel_context_unpin(ce);
 	return ret;
 }
 
@@ -609,7 +590,7 @@ static int prepare_workload(struct intel_vgpu_workload *workload)
 		goto err_unpin_mm;
 	}
 
-	ret = intel_gvt_generate_request(workload);
+	ret = copy_workload_to_ring_buffer(workload);
 	if (ret) {
 		gvt_vgpu_err("fail to generate request\n");
 		goto err_unpin_mm;
@@ -823,7 +804,7 @@ static void complete_current_workload(struct intel_gvt *gvt, int ring_id)
 		scheduler->current_workload[ring_id];
 	struct intel_vgpu *vgpu = workload->vgpu;
 	struct intel_vgpu_submission *s = &vgpu->submission;
-	struct i915_request *rq;
+	struct i915_request *rq = workload->req;
 	int event;
 
 	mutex_lock(&gvt->lock);
@@ -832,7 +813,6 @@ static void complete_current_workload(struct intel_gvt *gvt, int ring_id)
 	 * switch to make sure request is completed.
 	 * For the workload w/o request, directly complete the workload.
 	 */
-	rq = fetch_and_zero(&workload->req);
 	if (rq) {
 		wait_event(workload->shadow_ctx_status_wq,
 			   !atomic_read(&workload->shadow_ctx_active));
@@ -863,7 +843,7 @@ static void complete_current_workload(struct intel_gvt *gvt, int ring_id)
 		intel_context_unpin(rq->hw_context);
 		mutex_unlock(&rq->i915->drm.struct_mutex);
 
-		i915_request_put(rq);
+		i915_request_put(fetch_and_zero(&workload->req));
 	}
 
 	gvt_dbg_sched("ring id %d complete workload %p status %d\n",