From patchwork Tue May 21 07:16:37 2024
From: Thomas Hellström
To: intel-xe@lists.freedesktop.org
Cc: Thomas Hellström, Christian König, Somalapuram Amaranath, Matthew Brost, dri-devel@lists.freedesktop.org
Subject: [RFC PATCH v3 19/21] drm/xe: Use drm_exec for fault locking
Date: Tue, 21 May 2024 09:16:37 +0200
Message-ID: <20240521071639.77614-20-thomas.hellstrom@linux.intel.com>
In-Reply-To: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com>
References: <20240521071639.77614-1-thomas.hellstrom@linux.intel.com>

Similar to how TTM vm does this, convert the drm/xe fault handler
to use drm_exec locking.
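For reference, the locking pattern being adopted here looks roughly like
the minimal sketch below. This is illustrative only: it uses the generic
upstream drm_exec helpers (drm_exec_init(), drm_exec_until_all_locked(),
drm_exec_lock_obj(), drm_exec_retry_on_contention(), drm_exec_fini());
the helper name lock_obj_with_drm_exec() is made up for the example, and
note that this series additionally lets drm_exec_retry_on_contention()
take an extra argument and return a value, which the sketch does not use.

  #include <drm/drm_exec.h>

  /*
   * Illustrative only: lock a single GEM object under drm_exec, retrying
   * the whole locking sequence when ww_mutex contention is detected.
   */
  static int lock_obj_with_drm_exec(struct drm_gem_object *obj)
  {
  	struct drm_exec exec;
  	int err = 0;

  	drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
  	drm_exec_until_all_locked(&exec) {
  		err = drm_exec_lock_obj(&exec, obj);
  		/* On contention, drop all held locks and restart the loop. */
  		drm_exec_retry_on_contention(&exec);
  		if (err)
  			break;

  		/* All requested resv locks are held here; do the work. */
  	}
  	drm_exec_fini(&exec);

  	return err;
  }

In the fault handler below the locking goes through ttm_bo_vm_reserve()
and ttm_bo_vm_fault_reserved() instead, and a contended or interrupted
lock is turned into VM_FAULT_NOPAGE so the CPU fault is simply retried.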
Cc: Christian König
Cc: Somalapuram Amaranath
Cc: Matthew Brost
Cc:
Signed-off-by: Thomas Hellström
---
 drivers/gpu/drm/xe/xe_bo.c | 38 +++++++++++++++++++++++---------------
 1 file changed, 23 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 3c56858e0751..27d7d36401b5 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -1217,29 +1217,37 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
 	struct xe_device *xe = to_xe_device(ddev);
 	struct xe_bo *bo = ttm_to_xe_bo(tbo);
 	bool needs_rpm = bo->flags & XE_BO_FLAG_VRAM_MASK;
+	struct drm_exec exec;
 	vm_fault_t ret;
-	int idx;
+	int idx, err;
 
 	if (needs_rpm)
 		xe_pm_runtime_get(xe);
 
-	ret = ttm_bo_vm_reserve(tbo, vmf, NULL);
-	if (ret)
-		goto out;
+	drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 16);
+	drm_exec_until_all_locked(&exec) {
+		ret = ttm_bo_vm_reserve(tbo, vmf, &exec);
+		err = drm_exec_retry_on_contention(&exec, 0);
+		if (err)
+			ret = VM_FAULT_NOPAGE;
+		if (ret)
+			goto out;
 
-	if (drm_dev_enter(ddev, &idx)) {
-		trace_xe_bo_cpu_fault(bo);
+		if (drm_dev_enter(ddev, &idx)) {
+			trace_xe_bo_cpu_fault(bo);
 
-		ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
-					       TTM_BO_VM_NUM_PREFAULT,
-					       NULL);
-		drm_dev_exit(idx);
-	} else {
-		ret = ttm_bo_vm_dummy_page(vmf, vmf->vma->vm_page_prot);
+			ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
+						       TTM_BO_VM_NUM_PREFAULT,
+						       &exec);
+			drm_dev_exit(idx);
+			err = drm_exec_retry_on_contention(&exec, 0);
+			if (err)
+				ret = VM_FAULT_NOPAGE;
+		} else {
+			ret = ttm_bo_vm_dummy_page(vmf, vmf->vma->vm_page_prot);
+		}
 	}
 
-	if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
-		goto out;
 	/*
 	 * ttm_bo_vm_reserve() already has dma_resv_lock.
 	 */
@@ -1250,8 +1258,8 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
 		mutex_unlock(&xe->mem_access.vram_userfault.lock);
 	}
 
-	dma_resv_unlock(tbo->base.resv);
 out:
+	drm_exec_fini(&exec);
 	if (needs_rpm)
 		xe_pm_runtime_put(xe);