From patchwork Fri Nov 27 12:05:58 2020
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Abdiel Janulgue, Steve Hampson, dri-devel@lists.freedesktop.org,
 Thomas Hellström
Subject: [RFC PATCH 082/162] HAX drm/i915/lmem: support pread and pwrite
Date: Fri, 27 Nov 2020 12:05:58 +0000
Message-Id: <20201127120718.454037-83-matthew.auld@intel.com>
In-Reply-To: <20201127120718.454037-1-matthew.auld@intel.com>
References: <20201127120718.454037-1-matthew.auld@intel.com>

** DO NOT MERGE. PREAD/WRITE SUPPORT WILL BE DROPPED FROM DG1+ **

We need to add support for pread'ing and pwriting an LMEM object.
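For reference, nothing new is exposed to userspace by this patch: the hooks
are reached through the existing GEM pread/pwrite ioctls. A minimal sketch of
how they might be exercised (illustrative only, not part of this patch; "fd"
and "handle" are assumed to name an open i915 device and an LMEM-backed
object created elsewhere):

/*
 * Hypothetical userspace round-trip through the existing ioctls,
 * which now route to the LMEM .pread/.pwrite hooks for LMEM objects.
 */
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>          /* drmIoctl() from libdrm */
#include <drm/i915_drm.h>     /* struct drm_i915_gem_pread/pwrite */

static int lmem_roundtrip(int fd, __u32 handle)
{
	char out[4096] = "hello lmem", in[4096] = {};

	struct drm_i915_gem_pwrite pwrite = {
		.handle = handle,
		.offset = 0,
		.size = sizeof(out),
		.data_ptr = (__u64)(uintptr_t)out,
	};
	struct drm_i915_gem_pread pread = {
		.handle = handle,
		.offset = 0,
		.size = sizeof(in),
		.data_ptr = (__u64)(uintptr_t)in,
	};

	/* write into the LMEM object, then read it back */
	if (drmIoctl(fd, DRM_IOCTL_I915_GEM_PWRITE, &pwrite))
		return -1;
	if (drmIoctl(fd, DRM_IOCTL_I915_GEM_PREAD, &pread))
		return -1;

	return memcmp(out, in, sizeof(out)) ? -1 : 0;
}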
Cc: Joonas Lahtinen
Cc: Abdiel Janulgue
Signed-off-by: Matthew Auld
Signed-off-by: Steve Hampson
Signed-off-by: Thomas Hellström
---
 drivers/gpu/drm/i915/gem/i915_gem_lmem.c | 186 +++++++++++++++++++++++
 drivers/gpu/drm/i915/gem/i915_gem_lmem.h |   2 +
 2 files changed, 188 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
index f6c4d5998ff9..840b68eb10d3 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
@@ -8,6 +8,177 @@
 #include "gem/i915_gem_lmem.h"
 #include "i915_drv.h"
 
+static int
+i915_ww_pin_lock_interruptible(struct drm_i915_gem_object *obj)
+{
+	struct i915_gem_ww_ctx ww;
+	int ret;
+
+	for_i915_gem_ww(&ww, ret, true) {
+		ret = i915_gem_object_lock(obj, &ww);
+		if (ret)
+			continue;
+
+		ret = i915_gem_object_pin_pages(obj);
+		if (ret)
+			continue;
+
+		ret = i915_gem_object_set_to_wc_domain(obj, false);
+		if (ret)
+			goto out_unpin;
+
+		ret = i915_gem_object_wait(obj,
+					   I915_WAIT_INTERRUPTIBLE,
+					   MAX_SCHEDULE_TIMEOUT);
+		if (!ret)
+			continue;
+
+out_unpin:
+		i915_gem_object_unpin_pages(obj);
+
+		/* Unlocking is done implicitly */
+	}
+
+	return ret;
+}
+
+int i915_gem_object_lmem_pread(struct drm_i915_gem_object *obj,
+			       const struct drm_i915_gem_pread *arg)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	struct intel_runtime_pm *rpm = &i915->runtime_pm;
+	intel_wakeref_t wakeref;
+	char __user *user_data;
+	unsigned int offset;
+	unsigned long idx;
+	u64 remain;
+	int ret;
+
+	ret = i915_gem_object_wait(obj,
+				   I915_WAIT_INTERRUPTIBLE,
+				   MAX_SCHEDULE_TIMEOUT);
+	if (ret)
+		return ret;
+
+	ret = i915_ww_pin_lock_interruptible(obj);
+	if (ret)
+		return ret;
+
+	wakeref = intel_runtime_pm_get(rpm);
+
+	remain = arg->size;
+	user_data = u64_to_user_ptr(arg->data_ptr);
+	offset = offset_in_page(arg->offset);
+	for (idx = arg->offset >> PAGE_SHIFT; remain; idx++) {
+		unsigned long unwritten;
+		void __iomem *vaddr;
+		int length;
+
+		length = remain;
+		if (offset + length > PAGE_SIZE)
+			length = PAGE_SIZE - offset;
+
+		vaddr = i915_gem_object_lmem_io_map_page_atomic(obj, idx);
+		if (!vaddr) {
+			ret = -ENOMEM;
+			goto out_put;
+		}
+		unwritten = __copy_to_user_inatomic(user_data,
+						    (void __force *)vaddr + offset,
+						    length);
+		io_mapping_unmap_atomic(vaddr);
+		if (unwritten) {
+			vaddr = i915_gem_object_lmem_io_map_page(obj, idx);
+			unwritten = copy_to_user(user_data,
+						 (void __force *)vaddr + offset,
+						 length);
+			io_mapping_unmap(vaddr);
+		}
+		if (unwritten) {
+			ret = -EFAULT;
+			goto out_put;
+		}
+
+		remain -= length;
+		user_data += length;
+		offset = 0;
+	}
+
+out_put:
+	intel_runtime_pm_put(rpm, wakeref);
+	i915_gem_object_unpin_pages(obj);
+
+	return ret;
+}
+
+static int i915_gem_object_lmem_pwrite(struct drm_i915_gem_object *obj,
+				       const struct drm_i915_gem_pwrite *arg)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	struct intel_runtime_pm *rpm = &i915->runtime_pm;
+	intel_wakeref_t wakeref;
+	char __user *user_data;
+	unsigned int offset;
+	unsigned long idx;
+	u64 remain;
+	int ret;
+
+	ret = i915_gem_object_wait(obj,
+				   I915_WAIT_INTERRUPTIBLE,
+				   MAX_SCHEDULE_TIMEOUT);
+	if (ret)
+		return ret;
+
+	ret = i915_ww_pin_lock_interruptible(obj);
+	if (ret)
+		return ret;
+
+	wakeref = intel_runtime_pm_get(rpm);
+
+	remain = arg->size;
+	user_data = u64_to_user_ptr(arg->data_ptr);
+	offset = offset_in_page(arg->offset);
+	for (idx = arg->offset >> PAGE_SHIFT; remain; idx++) {
+		unsigned long unwritten;
+		void __iomem *vaddr;
+		int length;
+
+		length = remain;
+		if (offset + length > PAGE_SIZE)
+			length = PAGE_SIZE - offset;
+
+		vaddr = i915_gem_object_lmem_io_map_page_atomic(obj, idx);
+		if (!vaddr) {
+			ret = -ENOMEM;
+			goto out_put;
+		}
+
+		unwritten = __copy_from_user_inatomic_nocache((void __force *)vaddr + offset,
+							      user_data, length);
+		io_mapping_unmap_atomic(vaddr);
+		if (unwritten) {
+			vaddr = i915_gem_object_lmem_io_map_page(obj, idx);
+			unwritten = copy_from_user((void __force *)vaddr + offset,
+						   user_data, length);
+			io_mapping_unmap(vaddr);
+		}
+		if (unwritten) {
+			ret = -EFAULT;
+			goto out_put;
+		}
+
+		remain -= length;
+		user_data += length;
+		offset = 0;
+	}
+
+out_put:
+	intel_runtime_pm_put(rpm, wakeref);
+	i915_gem_object_unpin_pages(obj);
+
+	return ret;
+}
+
 const struct drm_i915_gem_object_ops i915_gem_lmem_obj_ops = {
 	.name = "i915_gem_object_lmem",
 	.flags = I915_GEM_OBJECT_HAS_IOMEM,
@@ -15,8 +186,23 @@ const struct drm_i915_gem_object_ops i915_gem_lmem_obj_ops = {
 	.get_pages = i915_gem_object_get_pages_buddy,
 	.put_pages = i915_gem_object_put_pages_buddy,
 	.release = i915_gem_object_release_memory_region,
+
+	.pread = i915_gem_object_lmem_pread,
+	.pwrite = i915_gem_object_lmem_pwrite,
 };
 
+void __iomem *
+i915_gem_object_lmem_io_map_page(struct drm_i915_gem_object *obj,
+				 unsigned long n)
+{
+	resource_size_t offset;
+
+	offset = i915_gem_object_get_dma_address(obj, n);
+	offset -= obj->mm.region->region.start;
+
+	return io_mapping_map_wc(&obj->mm.region->iomap, offset, PAGE_SIZE);
+}
+
 void __iomem *
 i915_gem_object_lmem_io_map_page_atomic(struct drm_i915_gem_object *obj,
 					unsigned long n)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.h b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h
index bf7e11fad17b..a24d94bc380f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h
@@ -14,6 +14,8 @@ struct intel_memory_region;
 
 extern const struct drm_i915_gem_object_ops i915_gem_lmem_obj_ops;
 
+void __iomem *i915_gem_object_lmem_io_map_page(struct drm_i915_gem_object *obj,
+					       unsigned long n);
 void __iomem *
 i915_gem_object_lmem_io_map_page_atomic(struct drm_i915_gem_object *obj,
 					unsigned long n);
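A note on the locking helper added above: i915_ww_pin_lock_interruptible()
leans on the tree's for_i915_gem_ww() loop, which re-runs its body whenever a
locking call inside returns -EDEADLK (after backing off the acquire context),
and otherwise exits with the recorded error. A distilled sketch of the
pattern, not part of the patch; the exact macro semantics are as I understand
them in the i915 tree around this series:

/*
 * Illustrative restatement of the ww-lock retry pattern used by
 * i915_ww_pin_lock_interruptible() in this patch.
 */
static int ww_lock_example(struct drm_i915_gem_object *obj)
{
	struct i915_gem_ww_ctx ww;
	int err;

	for_i915_gem_ww(&ww, err, true /* interruptible */) {
		err = i915_gem_object_lock(obj, &ww);
		if (err)
			continue;	/* -EDEADLK retries, others exit */

		/* ... critical section: pin pages, set domain, wait ... */
	}

	/* the loop fini()s the ww context itself, hence the bare return */
	return err;
}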