From patchwork Fri May 18 10:13:05 2018
X-Patchwork-Submitter: Zhenyu Wang
X-Patchwork-Id: 10409397
From: Zhenyu Wang <zhenyuw@linux.intel.com>
To: intel-gfx@lists.freedesktop.org
Cc: intel-gvt-dev@lists.freedesktop.org
Date: Fri, 18 May 2018 18:13:05 +0800
Message-Id: <20180518101305.8840-1-zhenyuw@linux.intel.com>
Subject: [Intel-gfx] [PATCH] drm/i915/gvt: Fix crash after request->hw_context change

When we do shadowing, the workload's request might not be allocated
yet, so we still need to take the context state object from the
per-vGPU shadow context. And when completing a workload, delay zeroing
workload->req until after it has been used to update the guest context.
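As a sketch of the failing path (illustrative only, not verbatim driver
code): populate_shadow_context() runs while the workload is being
shadowed, i.e. before its i915_request has been allocated, so the
request-based lookup oopses, while the per-vGPU shadow context is
always available:

	/* workload->req may still be NULL at shadow time */
	ctx_obj = workload->req->hw_context->state->obj;	/* NULL deref */

	/* the same context state object is reachable through the
	 * vGPU's per-engine shadow context, which always exists:
	 */
	ctx_obj = shadow_ctx->__engine[ring_id].state->obj;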
Fixes: 1fc44d9b1afb ("drm/i915: Store a pointer to intel_context in i915_request")
Cc: Chris Wilson
Cc: Tvrtko Ursulin
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
---
 drivers/gpu/drm/i915/gvt/scheduler.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
index 2efb723b90cb..00f79fc940da 100644
--- a/drivers/gpu/drm/i915/gvt/scheduler.c
+++ b/drivers/gpu/drm/i915/gvt/scheduler.c
@@ -125,8 +125,9 @@ static int populate_shadow_context(struct intel_vgpu_workload *workload)
 	struct intel_vgpu *vgpu = workload->vgpu;
 	struct intel_gvt *gvt = vgpu->gvt;
 	int ring_id = workload->ring_id;
+	struct i915_gem_context *shadow_ctx = vgpu->submission.shadow_ctx;
 	struct drm_i915_gem_object *ctx_obj =
-		workload->req->hw_context->state->obj;
+		shadow_ctx->__engine[ring_id].state->obj;
 	struct execlist_ring_context *shadow_ring_context;
 	struct page *page;
 	void *dst;
@@ -595,8 +596,6 @@ static int prepare_workload(struct intel_vgpu_workload *workload)
 		return ret;
 	}
 
-	update_shadow_pdps(workload);
-
 	ret = intel_vgpu_sync_oos_pages(workload->vgpu);
 	if (ret) {
 		gvt_vgpu_err("fail to vgpu sync oos pages\n");
@@ -615,6 +614,8 @@ static int prepare_workload(struct intel_vgpu_workload *workload)
 		goto err_unpin_mm;
 	}
 
+	update_shadow_pdps(workload);
+
 	ret = prepare_shadow_batch_buffer(workload);
 	if (ret) {
 		gvt_vgpu_err("fail to prepare_shadow_batch_buffer\n");
@@ -825,7 +826,7 @@ static void complete_current_workload(struct intel_gvt *gvt, int ring_id)
 		scheduler->current_workload[ring_id];
 	struct intel_vgpu *vgpu = workload->vgpu;
 	struct intel_vgpu_submission *s = &vgpu->submission;
-	struct i915_request *rq;
+	struct i915_request *rq = workload->req;
 	int event;
 
 	mutex_lock(&vgpu->vgpu_lock);
@@ -835,7 +836,6 @@ static void complete_current_workload(struct intel_gvt *gvt, int ring_id)
 	 * switch to make sure request is completed.
 	 * For the workload w/o request, directly complete the workload.
 	 */
-	rq = fetch_and_zero(&workload->req);
 	if (rq) {
 		wait_event(workload->shadow_ctx_status_wq,
 			   !atomic_read(&workload->shadow_ctx_active));
@@ -866,7 +866,7 @@ static void complete_current_workload(struct intel_gvt *gvt, int ring_id)
 		intel_context_unpin(rq->hw_context);
 		mutex_unlock(&rq->i915->drm.struct_mutex);
 
-		i915_request_put(rq);
+		i915_request_put(fetch_and_zero(&workload->req));
 	}
 
 	gvt_dbg_sched("ring id %d complete workload %p status %d\n",
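For reference, fetch_and_zero() used above is the i915 helper macro
from drivers/gpu/drm/i915/i915_utils.h; its definition is roughly:

	#define fetch_and_zero(ptr) ({			\
		typeof(*ptr) __T = *(ptr);		\
		*(ptr) = (typeof(*ptr))0;		\
		__T;					\
	})

It returns the old value and clears the location in one step, so
calling it at the top of complete_current_workload() left workload->req
NULL while the guest-context update still needed it. Moving the
fetch_and_zero() into the final i915_request_put() keeps workload->req
valid until the guest context has been updated.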