From patchwork Wed Jul  1 20:15:22 2015
X-Patchwork-Submitter: Paulo Zanoni
X-Patchwork-Id: 6706401
From: Paulo Zanoni
To: intel-gfx@lists.freedesktop.org
Cc: Paulo Zanoni
Date: Wed, 1 Jul 2015 17:15:22 -0300
Message-Id: <1435781726-7282-4-git-send-email-przanoni@gmail.com>
In-Reply-To: <1435781726-7282-1-git-send-email-przanoni@gmail.com>
References: <1435781726-7282-1-git-send-email-przanoni@gmail.com>
Subject: [Intel-gfx] [PATCH 3/7] drm/i915: add dev_priv->mm.stolen_lock

From: Paulo Zanoni

Which should protect dev_priv->mm.stolen usage. This will allow us to
simplify the relationship between stolen memory, FBC and struct_mutex.
Cc: Chris Wilson
Signed-off-by: Paulo Zanoni
---
 drivers/gpu/drm/i915/i915_drv.h        |  7 +++-
 drivers/gpu/drm/i915/i915_gem_stolen.c | 69 +++++++++++++++++++++++-----------
 drivers/gpu/drm/i915/intel_fbc.c       | 29 +++++++++++---
 3 files changed, 77 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index c955037..0b908b1 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1245,6 +1245,10 @@ struct intel_l3_parity {
 struct i915_gem_mm {
 	/** Memory allocator for GTT stolen memory */
 	struct drm_mm stolen;
+	/** Protects the usage of the GTT stolen memory allocator. This is
+	 * always the inner lock when overlapping with struct_mutex. */
+	struct mutex stolen_lock;
+
 	/** List of all objects in gtt_space. Used to restore gtt
 	 * mappings on resume */
 	struct list_head bound_list;
@@ -3112,7 +3116,8 @@ static inline void i915_gem_chipset_flush(struct drm_device *dev)
 int i915_gem_stolen_insert_node(struct drm_i915_private *dev_priv,
 				struct drm_mm_node *node, u64 size,
 				unsigned alignment);
-void i915_gem_stolen_remove_node(struct drm_mm_node *node);
+void i915_gem_stolen_remove_node(struct drm_i915_private *dev_priv,
+				 struct drm_mm_node *node);
 int i915_gem_init_stolen(struct drm_device *dev);
 void i915_gem_cleanup_stolen(struct drm_device *dev);
 struct drm_i915_gem_object *
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index 0619786..b432085 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -46,6 +46,8 @@ int i915_gem_stolen_insert_node(struct drm_i915_private *dev_priv,
 				struct drm_mm_node *node, u64 size,
 				unsigned alignment)
 {
+	WARN_ON(!mutex_is_locked(&dev_priv->mm.stolen_lock));
+
 	if (!drm_mm_initialized(&dev_priv->mm.stolen))
 		return -ENODEV;
 
@@ -53,8 +55,11 @@ int i915_gem_stolen_insert_node(struct drm_i915_private *dev_priv,
 					  DRM_MM_SEARCH_DEFAULT);
 }
 
-void i915_gem_stolen_remove_node(struct drm_mm_node *node)
+void i915_gem_stolen_remove_node(struct drm_i915_private *dev_priv,
+				 struct drm_mm_node *node)
 {
+	WARN_ON(!mutex_is_locked(&dev_priv->mm.stolen_lock));
+
 	drm_mm_remove_node(node);
 }
 
@@ -171,10 +176,15 @@ void i915_gem_cleanup_stolen(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
 
+	mutex_lock(&dev_priv->mm.stolen_lock);
+
 	if (!drm_mm_initialized(&dev_priv->mm.stolen))
-		return;
+		goto out;
 
 	drm_mm_takedown(&dev_priv->mm.stolen);
+
+out:
+	mutex_unlock(&dev_priv->mm.stolen_lock);
 }
 
 int i915_gem_init_stolen(struct drm_device *dev)
@@ -183,6 +193,8 @@ int i915_gem_init_stolen(struct drm_device *dev)
 	u32 tmp;
 	int bios_reserved = 0;
 
+	mutex_init(&dev_priv->mm.stolen_lock);
+
 #ifdef CONFIG_INTEL_IOMMU
 	if (intel_iommu_gfx_mapped && INTEL_INFO(dev)->gen < 8) {
 		DRM_INFO("DMAR active, disabling use of stolen memory\n");
@@ -273,8 +285,10 @@ static void i915_gem_object_put_pages_stolen(struct drm_i915_gem_object *obj)
 static void
 i915_gem_object_release_stolen(struct drm_i915_gem_object *obj)
 {
+	struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
+
 	if (obj->stolen) {
-		i915_gem_stolen_remove_node(obj->stolen);
+		i915_gem_stolen_remove_node(dev_priv, obj->stolen);
 		kfree(obj->stolen);
 		obj->stolen = NULL;
 	}
@@ -325,29 +339,36 @@ i915_gem_object_create_stolen(struct drm_device *dev, u32 size)
 	struct drm_mm_node *stolen;
 	int ret;
 
+	mutex_lock(&dev_priv->mm.stolen_lock);
+
 	if (!drm_mm_initialized(&dev_priv->mm.stolen))
-		return NULL;
+		goto out_unlock;
 
 	DRM_DEBUG_KMS("creating stolen object: size=%x\n", size);
 	if (size == 0)
-		return NULL;
+		goto out_unlock;
 
 	stolen = kzalloc(sizeof(*stolen), GFP_KERNEL);
 	if (!stolen)
-		return NULL;
+		goto out_unlock;
 
 	ret = i915_gem_stolen_insert_node(dev_priv, stolen, size, 4096);
-	if (ret) {
-		kfree(stolen);
-		return NULL;
-	}
+	if (ret)
+		goto out_free;
 
 	obj = _i915_gem_object_create_stolen(dev, stolen);
-	if (obj)
-		return obj;
+	if (!obj)
+		goto out_node;
 
-	i915_gem_stolen_remove_node(stolen);
+	mutex_unlock(&dev_priv->mm.stolen_lock);
+	return obj;
+
+out_node:
+	i915_gem_stolen_remove_node(dev_priv, stolen);
+out_free:
 	kfree(stolen);
+out_unlock:
+	mutex_unlock(&dev_priv->mm.stolen_lock);
 	return NULL;
 }
 
@@ -364,8 +385,10 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	struct i915_vma *vma;
 	int ret;
 
+	mutex_lock(&dev_priv->mm.stolen_lock);
+
 	if (!drm_mm_initialized(&dev_priv->mm.stolen))
-		return NULL;
+		goto err_unlock;
 
 	DRM_DEBUG_KMS("creating preallocated stolen object: stolen_offset=%x, gtt_offset=%x, size=%x\n",
 			stolen_offset, gtt_offset, size);
@@ -373,11 +396,11 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	/* KISS and expect everything to be page-aligned */
 	if (WARN_ON(size == 0) || WARN_ON(size & 4095) ||
 	    WARN_ON(stolen_offset & 4095))
-		return NULL;
+		goto err_unlock;
 
 	stolen = kzalloc(sizeof(*stolen), GFP_KERNEL);
 	if (!stolen)
-		return NULL;
+		goto err_unlock;
 
 	stolen->start = stolen_offset;
 	stolen->size = size;
@@ -385,20 +408,20 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	if (ret) {
 		DRM_DEBUG_KMS("failed to allocate stolen space\n");
 		kfree(stolen);
-		return NULL;
+		goto err_unlock;
 	}
 
 	obj = _i915_gem_object_create_stolen(dev, stolen);
 	if (obj == NULL) {
 		DRM_DEBUG_KMS("failed to allocate stolen object\n");
-		i915_gem_stolen_remove_node(stolen);
+		i915_gem_stolen_remove_node(dev_priv, stolen);
 		kfree(stolen);
-		return NULL;
+		goto err_unlock;
 	}
 
 	/* Some objects just need physical mem from stolen space */
 	if (gtt_offset == I915_GTT_OFFSET_NONE)
-		return obj;
+		goto success;
 
 	vma = i915_gem_obj_lookup_or_create_vma(obj, ggtt);
 	if (IS_ERR(vma)) {
@@ -427,13 +450,17 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
 	list_add_tail(&vma->mm_list, &ggtt->inactive_list);
 	i915_gem_object_pin_pages(obj);
 
+success:
+	mutex_unlock(&dev_priv->mm.stolen_lock);
 	return obj;
 
 err_vma:
 	i915_gem_vma_destroy(vma);
 err_out:
-	i915_gem_stolen_remove_node(stolen);
+	i915_gem_stolen_remove_node(dev_priv, stolen);
 	kfree(stolen);
 	drm_gem_object_unreference(&obj->base);
+err_unlock:
+	mutex_unlock(&dev_priv->mm.stolen_lock);
 	return NULL;
 }
 
diff --git a/drivers/gpu/drm/i915/intel_fbc.c b/drivers/gpu/drm/i915/intel_fbc.c
index a91bf82..dcd83ab 100644
--- a/drivers/gpu/drm/i915/intel_fbc.c
+++ b/drivers/gpu/drm/i915/intel_fbc.c
@@ -601,40 +601,57 @@ static int intel_fbc_alloc_cfb(struct drm_device *dev, int size, int fb_cpp)
 
 err_fb:
 	kfree(compressed_llb);
-	i915_gem_stolen_remove_node(&dev_priv->fbc.compressed_fb);
+	i915_gem_stolen_remove_node(dev_priv, &dev_priv->fbc.compressed_fb);
 err_llb:
 	pr_info_once("drm: not enough stolen space for compressed buffer (need %d more bytes), disabling. Hint: you may be able to increase stolen memory size in the BIOS to avoid this.\n", size);
 	return -ENOSPC;
 }
 
-void intel_fbc_cleanup_cfb(struct drm_device *dev)
+static void __intel_fbc_cleanup_cfb(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
 
 	if (dev_priv->fbc.uncompressed_size == 0)
 		return;
 
-	i915_gem_stolen_remove_node(&dev_priv->fbc.compressed_fb);
+	i915_gem_stolen_remove_node(dev_priv, &dev_priv->fbc.compressed_fb);
 
 	if (dev_priv->fbc.compressed_llb) {
-		i915_gem_stolen_remove_node(dev_priv->fbc.compressed_llb);
+		i915_gem_stolen_remove_node(dev_priv,
+					    dev_priv->fbc.compressed_llb);
 		kfree(dev_priv->fbc.compressed_llb);
 	}
 
 	dev_priv->fbc.uncompressed_size = 0;
 }
 
+void intel_fbc_cleanup_cfb(struct drm_device *dev)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+
+	mutex_lock(&dev_priv->mm.stolen_lock);
+	__intel_fbc_cleanup_cfb(dev);
+	mutex_unlock(&dev_priv->mm.stolen_lock);
+}
+
 static int intel_fbc_setup_cfb(struct drm_device *dev, int size, int fb_cpp)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
+	int ret;
 
 	if (size <= dev_priv->fbc.uncompressed_size)
 		return 0;
 
+	mutex_lock(&dev_priv->mm.stolen_lock);
+
 	/* Release any current block */
-	intel_fbc_cleanup_cfb(dev);
+	__intel_fbc_cleanup_cfb(dev);
+
+	ret = intel_fbc_alloc_cfb(dev, size, fb_cpp);
+
+	mutex_unlock(&dev_priv->mm.stolen_lock);
 
-	return intel_fbc_alloc_cfb(dev, size, fb_cpp);
+	return ret;
}
 
 /**