From patchwork Fri Feb 16 14:04:22 2018
X-Patchwork-Id: 10224785
From: Kevin Rogovin <kevin.rogovin@intel.com>
To: intel-gfx@lists.freedesktop.org
Date: Fri, 16 Feb 2018 16:04:22 +0200
Message-Id: <1518789862-4006-2-git-send-email-kevin.rogovin@intel.com>
In-Reply-To: <1518789862-4006-1-git-send-email-kevin.rogovin@intel.com>
References: <1518789862-4006-1-git-send-email-kevin.rogovin@intel.com>
Subject: [Intel-gfx] [PATCH 1/1 RFC] drivers/gpu/drm/i915: Documentation for batchbuffer submission

Signed-off-by: Kevin Rogovin <kevin.rogovin@intel.com>
---
 Documentation/gpu/i915.rst                 | 109 +++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |  10 +++
 2 files changed, 119 insertions(+)

diff --git a/Documentation/gpu/i915.rst b/Documentation/gpu/i915.rst
index 41dc881b00dc..36b3ade85839 100644
--- a/Documentation/gpu/i915.rst
+++ b/Documentation/gpu/i915.rst
@@ -13,6 +13,18 @@ Core Driver Infrastructure
 This section covers core driver infrastructure used by both the display
 and the GEM parts of the driver.
 
+Initialization
+--------------
+
+The real action of initialization for the i915 driver is handled by
+:c:func:`i915_driver_load`; from this function one can see the key
+data (in particular :c:struct:`drm_driver` for GEM) of the entry points
+to the driver from user space.
+
+.. kernel-doc:: drivers/gpu/drm/i915/i915_drv.c
+   :functions: i915_driver_load
+
+
 Runtime Power Management
 ------------------------
 
@@ -249,6 +261,102 @@ Memory Management and Command Submission
 This sections covers all things related to the GEM implementation in the
 i915 driver.
 
+Intel GPU Basics
+----------------
+
+An Intel GPU has multiple engines. There are several engine types:
+
+- The RCS engine is for rendering 3D and performing compute; it is named
+  `I915_EXEC_DEFAULT` in user space.
+- BCS is a blitting (copy) engine; it is named `I915_EXEC_BLT` in user space.
+- VCS is a video encode and decode engine; it is named `I915_EXEC_BSD` in
+  user space.
+- VECS is a video enhancement engine; it is named `I915_EXEC_VEBOX` in
+  user space.
+
+The Intel GPU family is a family of integrated GPUs using Unified Memory
+Access. To have the GPU "do work", user space feeds the GPU batchbuffers
+via one of the ioctls `DRM_IOCTL_I915_GEM_EXECBUFFER`,
+`DRM_IOCTL_I915_GEM_EXECBUFFER2` or `DRM_IOCTL_I915_GEM_EXECBUFFER2_WR`.
+Most such batchbuffers will instruct the GPU to perform work (for example
+rendering) and that work needs memory from which to read and memory to
+which to write. All memory is encapsulated within GEM buffer objects
+(usually created with the ioctl `DRM_IOCTL_I915_GEM_CREATE`). An ioctl
+providing a batchbuffer for the GPU to execute will also list all GEM
+buffer objects that the batchbuffer reads and/or writes.
+
+The GPU has its own memory management and address space. The kernel
+driver maintains the memory translation table for the GPU. For older
+GPUs (i.e. those before Gen8), there is a single global such translation
+table, a global Graphics Translation Table (GTT). For newer generation
+GPUs each hardware context has its own translation table, called a
+Per-Process Graphics Translation Table (PPGTT). Of important note is
+that, although the PPGTT is named per-process, it is actually per
+hardware context. When user space submits a batchbuffer, the kernel
+walks the list of GEM buffer objects used by the batchbuffer and
+guarantees that not only is the memory of each such GEM buffer object
+resident but that it is also present in the (PP)GTT. If a GEM buffer
+object is not yet placed in the (PP)GTT, it is given an address. Two
+consequences of this are: the kernel needs to edit the submitted
+batchbuffer to write the correct value of the GPU address when a GEM BO
+is assigned one, and the kernel might evict a different GEM BO from the
+(PP)GTT to make room in the address space for the GEM BO.
+
+Consequently, the ioctls submitting a batchbuffer for execution also
+include a list of all locations within buffers that refer to GPU
+addresses, so that the kernel can edit the buffer correctly. This
+process is dubbed relocation. The ioctls also allow user space to
+provide the GPU address it presumes each GEM buffer object has; if the
+kernel sees that the address provided by user space is still correct,
+it skips performing relocation for that GEM buffer object. In addition,
+the ioctls report back the addresses to which the kernel relocated each
+GEM buffer object.
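+
+As an illustration only (a minimal sketch, not an authoritative example),
+a single such submission from user space might look roughly like the
+following. It creates one target GEM buffer object, records one
+relocation inside the batchbuffer object, and submits both with
+`DRM_IOCTL_I915_GEM_EXECBUFFER2`. The DRM file descriptor and the
+batchbuffer handle are assumed to already exist, the batchbuffer
+contents are elided, header paths may differ by installation, and all
+error handling is omitted.
+
+.. code-block:: c
+
+    #include <stdint.h>
+    #include <string.h>
+    #include <sys/ioctl.h>
+    #include <drm/i915_drm.h>
+
+    static void submit_batch(int fd, uint32_t batch_handle)
+    {
+            /* Create a 4 KiB GEM buffer object; the kernel returns a handle. */
+            struct drm_i915_gem_create create = { .size = 4096 };
+            ioctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create);
+
+            /*
+             * One location inside the batchbuffer refers to the GPU address
+             * of the new buffer object; describe that location so the kernel
+             * can relocate it (or skip relocation if presumed_offset is right).
+             */
+            struct drm_i915_gem_relocation_entry reloc = {
+                    .target_handle   = create.handle,
+                    .offset          = 4,  /* where the address lives in the batch */
+                    .presumed_offset = 0,  /* user space's guess of the GPU address */
+                    .read_domains    = I915_GEM_DOMAIN_RENDER,
+                    .write_domain    = I915_GEM_DOMAIN_RENDER,
+            };
+
+            /* Exec object list; the batchbuffer is the last entry by default. */
+            struct drm_i915_gem_exec_object2 objs[2];
+            memset(objs, 0, sizeof(objs));
+            objs[0].handle = create.handle;
+            objs[1].handle = batch_handle;
+            objs[1].relocation_count = 1;
+            objs[1].relocs_ptr = (uintptr_t)&reloc;
+
+            struct drm_i915_gem_execbuffer2 execbuf = {
+                    .buffers_ptr  = (uintptr_t)objs,
+                    .buffer_count = 2,
+                    .batch_len    = 4096,  /* length of the batchbuffer contents */
+                    .flags        = I915_EXEC_RENDER,
+            };
+            ioctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &execbuf);
+    }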
+
+There is also an interface for user space to directly specify the
+address location of GEM BOs; this feature is called soft-pinning and is
+made active within an execbuffer2 ioctl by setting the
+`EXEC_OBJECT_PINNED` flag. If user space also specifies
+`I915_EXEC_NO_RELOC`, then the kernel performs no relocation at all and
+user space manages the address space of its PPGTT itself. The advantage
+of user space handling the address space is that the kernel then does
+far less work and user space can safely assume that the locations of
+GEM buffer objects in the GPU address space do not change.
+
+Starting in Gen6, Intel GPUs support hardware contexts. A GPU hardware
+context represents GPU state that can be saved and restored. When user
+space uses a hardware context, it does not need to restore the GPU
+state at the start of each batchbuffer because the kernel directs the
+GPU to load the state from the hardware context. Hardware contexts
+allow for much greater isolation between processes that use the GPU.
+
+Batchbuffer Submission
+----------------------
+
+Depending on the GPU generation, the i915 kernel driver submits
+batchbuffers in one of several ways. However, the top-level code logic
+is shared by all methods. The key function, i915_gem_do_execbuffer(),
+essentially converts the ioctl command into an internal data structure
+which is then added to a queue that is processed elsewhere to give the
+job to the GPU; the details of i915_gem_do_execbuffer() are covered in
+`Common Code`_.
+
+
+Common Code
+~~~~~~~~~~~
+
+.. kernel-doc:: drivers/gpu/drm/i915/i915_gem_execbuffer.c
+   :doc: User command execution
+
+.. kernel-doc:: drivers/gpu/drm/i915/i915_gem_execbuffer.c
+   :functions: i915_gem_do_execbuffer
+
+Batchbuffer Submission Varieties
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+As stated before, there are several varieties of batchbuffer submission
+to the GPU; which one is in use is controlled by the following function
+pointers in the C struct intel_engine_cs (defined in
+drivers/gpu/drm/i915/intel_ringbuffer.h):
+
+- request_alloc
+- submit_request
+
+The three varieties of submitting a batchbuffer to the GPU are the
+following:
+
+1. Batchbuffers are submitted directly to a ring buffer; this is the
+   most basic way to submit batchbuffers to the GPU and applies to
+   generations strictly before Gen8. When batchbuffers are submitted
+   this older way, their contents are checked via batchbuffer parsing,
+   see `Batchbuffer Parsing`_.
+2. Batchbuffers are submitted via execlists, a feature supported by
+   Gen8 and newer devices; the macro :c:macro:`HAS_EXECLISTS` is used
+   to determine whether a GPU supports submitting via execlists, see
+   `Logical Rings, Logical Ring Contexts and Execlists`_.
+3. Batchbuffers are submitted to the GuC, see `GuC`_.
+
+
+
 Batchbuffer Parsing
 -------------------
 
@@ -266,6 +374,7 @@ Batchbuffer Pools
 .. kernel-doc:: drivers/gpu/drm/i915/i915_gem_batch_pool.c
    :internal:
 
 Logical Rings, Logical Ring Contexts and Execlists
 --------------------------------------------------
 
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index b15305f2fb76..4a22ae86ceb3 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -2178,6 +2178,16 @@ signal_fence_array(struct i915_execbuffer *eb,
 	}
 }
 
+/**
+ * i915_gem_do_execbuffer() - Batchbuffer submission common implementation
+ *
+ * All ioctls for submitting a batchbuffer reduce to this function;
+ * it places the batchbuffer to be executed on a submission queue
+ * which is processed later (via an interrupt calling into the i915
+ * driver) to send the batchbuffer to the GPU.
+ *
+ * Return: 0 on success, error code on failure
+ */
 static int i915_gem_do_execbuffer(struct drm_device *dev,
 				  struct drm_file *file,
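
For reference (an illustrative sketch only, not part of the patch): a minimal
user space view of the soft-pinning path described in the new documentation,
i.e. EXEC_OBJECT_PINNED objects submitted together with I915_EXEC_NO_RELOC so
that the kernel performs no relocation. The DRM file descriptor, the BO
handles and the chosen GPU addresses are assumed values and error handling is
omitted.

    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    static int submit_pinned(int fd, uint32_t target_handle, uint32_t batch_handle)
    {
            struct drm_i915_gem_exec_object2 objs[2];
            memset(objs, 0, sizeof(objs));

            /* Target BO: user space dictates its GPU address (soft-pinning). */
            objs[0].handle = target_handle;
            objs[0].offset = 0x100000;              /* address chosen by user space */
            objs[0].flags  = EXEC_OBJECT_PINNED | EXEC_OBJECT_WRITE;

            /* Batchbuffer BO, last in the list by default. */
            objs[1].handle = batch_handle;
            objs[1].offset = 0x200000;
            objs[1].flags  = EXEC_OBJECT_PINNED;

            struct drm_i915_gem_execbuffer2 execbuf = {
                    .buffers_ptr  = (uintptr_t)objs,
                    .buffer_count = 2,
                    .batch_len    = 4096,
                    /* No relocations: the offsets above are authoritative. */
                    .flags        = I915_EXEC_RENDER | I915_EXEC_NO_RELOC,
            };
            return ioctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &execbuf);
    }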