From patchwork Mon Nov 4 17:37:18 2019
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 11226143
From: Daniel Vetter
To: Intel Graphics Development
Date: Mon, 4 Nov 2019 18:37:18 +0100
Message-Id: <20191104173720.2696-1-daniel.vetter@ffwll.ch>
Subject: [Intel-gfx] [PATCH 1/3] drm/i915: Switch obj->mm.lock lockdep annotations on its head
Cc: Daniel Vetter

The trouble with having a plain nesting flag for locks which do not
naturally nest (unlike block devices and their partitions, which is the
original motivation for nesting levels) is that lockdep will never spot
a true deadlock if you screw up.

This patch is an attempt at doing better, by highlighting a bit more
the actual nature of the nesting that's going on. Essentially we have
two kinds of objects:

- objects without pages allocated, which cannot be on any lru and are
  hence inaccessible to the shrinker.

- objects which have pages allocated, which are on an lru, and which
  the shrinker can decide to throw out.

For the former type of object, memory allocations while holding
obj->mm.lock are permissible. For the latter they are not. And
get/put_pages transitions between the two types of objects.

This is still not entirely fool-proof since the rules might change. But
as long as such code ever runs at runtime lockdep should be able to
observe the inconsistency and complain (like with any other lockdep
class that we've split up into multiple classes). But there are a few
clear benefits:

- We can drop the nesting flag parameter from
  __i915_gem_object_put_pages, because that function by definition is
  never going to allocate memory, and calling it on an object which
  doesn't have its pages allocated would be a bug.

- We strictly catch more bugs, since it's no longer just a single place
  in the entire tree that is annotated with the special class. All the
  other places that had explicit lockdep nesting annotations we're now
  going to leave up to lockdep again.

- Specifically this catches stuff like calling get_pages from put_pages
  (which isn't really a good idea, if we can call put_pages so could
  the shrinker). I've seen patches do exactly that (see the sketch
  after the diff for what this looks like).

Of course I fully expect CI will show me for the fool I am with this
one here :-)
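The heart of the new scheme is the priming sequence added to
i915_gem_object_init() (excerpted from the diff below, with comments
added here): at init time we take the lock with the GET_PAGES subclass,
enter a dummy reclaim section, and declare that the plain lock may be
taken inside reclaim. Any runtime acquisition contradicting this primed
ordering is then reported.

	if (IS_ENABLED(CONFIG_LOCKDEP)) {
		/*
		 * Teach lockdep the legal ordering once, up front:
		 * GET_PAGES subclass first, then fs_reclaim, and inside
		 * reclaim the plain (subclass 0) lock may be taken.
		 */
		mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
		fs_reclaim_acquire(GFP_KERNEL);
		might_lock(&obj->mm.lock);	/* subclass 0, I915_MM_NORMAL */
		fs_reclaim_release(GFP_KERNEL);
		mutex_unlock(&obj->mm.lock);
	}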
v2: There can only be one (lockdep only has a cache for the first
subclass, not for deeper ones, and we don't want to make these locks
even slower). Still separate enums for better documentation. Real fix:
don't forget about phys objs and pin_map(), and fix the shrinker to
have the right annotations ... silly me.

v3: Forgot userptr too ...

v4: Improve comment for pages_pin_count, drop the IMPORTANT comment
and instead prime lockdep (Chris).

v5: Appease checkpatch, no double empty lines (Chris)

v6: More rebasing over selftest changes. Also somehow I forgot to push
this patch :-/ Also format comments consistently while at it.

Cc: Chris Wilson
Cc: "Tang, CQ"
Cc: Tvrtko Ursulin
Cc: Joonas Lahtinen
Reviewed-by: Chris Wilson (v5)
Signed-off-by: Daniel Vetter
Reviewed-by: Joonas Lahtinen
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c      | 12 +++++++++++-
 drivers/gpu/drm/i915/gem/i915_gem_object.h      | 17 ++++++++++++++---
 .../gpu/drm/i915/gem/i915_gem_object_types.h    |  6 +++++-
 drivers/gpu/drm/i915/gem/i915_gem_pages.c       |  9 ++++-----
 drivers/gpu/drm/i915/gem/i915_gem_phys.c        |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.c    |  5 ++---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c     |  4 ++--
 drivers/gpu/drm/i915/gem/selftests/huge_pages.c | 14 +++++++-------
 .../drm/i915/selftests/intel_memory_region.c    |  4 ++--
 9 files changed, 48 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index a50296cce0d8..078d515d72c0 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -22,6 +22,8 @@
  *
  */
 
+#include <linux/sched/mm.h>
+
 #include "display/intel_frontbuffer.h"
 #include "gt/intel_gt.h"
 #include "i915_drv.h"
@@ -52,6 +54,14 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
 {
 	__mutex_init(&obj->mm.lock, "obj->mm.lock", key);
 
+	if (IS_ENABLED(CONFIG_LOCKDEP)) {
+		mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
+		fs_reclaim_acquire(GFP_KERNEL);
+		might_lock(&obj->mm.lock);
+		fs_reclaim_release(GFP_KERNEL);
+		mutex_unlock(&obj->mm.lock);
+	}
+
 	spin_lock_init(&obj->vma.lock);
 	INIT_LIST_HEAD(&obj->vma.list);
 
@@ -186,7 +196,7 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
 		GEM_BUG_ON(!list_empty(&obj->lut_list));
 
 		atomic_set(&obj->mm.pages_pin_count, 0);
-		__i915_gem_object_put_pages(obj, I915_MM_NORMAL);
+		__i915_gem_object_put_pages(obj);
 		GEM_BUG_ON(i915_gem_object_has_pages(obj));
 		bitmap_free(obj->bit_17);

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 458cd51331f1..edaf7126a84d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -319,11 +319,22 @@ i915_gem_object_unpin_pages(struct drm_i915_gem_object *obj)
 
 enum i915_mm_subclass { /* lockdep subclass for obj->mm.lock/struct_mutex */
 	I915_MM_NORMAL = 0,
-	I915_MM_SHRINKER /* called "recursively" from direct-reclaim-esque */
+	/*
+	 * Only used by struct_mutex, when called "recursively" from
+	 * direct-reclaim-esque. Safe because there is only ever one
+	 * struct_mutex in the entire system.
+	 */
+	I915_MM_SHRINKER = 1,
+	/*
+	 * Used for obj->mm.lock when allocating pages. Safe because the object
+	 * isn't yet on any LRU, and therefore the shrinker can't deadlock on
+	 * it. As soon as the object has pages, obj->mm.lock nests within
+	 * fs_reclaim.
+	 */
+	I915_MM_GET_PAGES = 1,
 };
 
-int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
-				enum i915_mm_subclass subclass);
+int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj);
 void i915_gem_object_truncate(struct drm_i915_gem_object *obj);
 void i915_gem_object_writeback(struct drm_i915_gem_object *obj);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 96008374a412..15f8297dc34e 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -162,7 +162,11 @@ struct drm_i915_gem_object {
 	atomic_t bind_count;
 
 	struct {
-		struct mutex lock; /* protects the pages and their use */
+		/*
+		 * Protects the pages and their use. Do not use directly, but
+		 * instead go through the pin/unpin interfaces.
+		 */
+		struct mutex lock;
 		atomic_t pages_pin_count;
 		atomic_t shrink_pin;

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index 29f4c2850745..f402c2c415c2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -106,7 +106,7 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
 {
 	int err;
 
-	err = mutex_lock_interruptible(&obj->mm.lock);
+	err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES);
 	if (err)
 		return err;
 
@@ -190,8 +190,7 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
 	return pages;
 }
 
-int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
-				enum i915_mm_subclass subclass)
+int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
 {
 	struct sg_table *pages;
 	int err;
@@ -202,7 +201,7 @@ int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
 	GEM_BUG_ON(atomic_read(&obj->bind_count));
 
 	/* May be called by shrinker from within get_pages() (on another bo) */
-	mutex_lock_nested(&obj->mm.lock, subclass);
+	mutex_lock(&obj->mm.lock);
 	if (unlikely(atomic_read(&obj->mm.pages_pin_count))) {
 		err = -EBUSY;
 		goto unlock;
@@ -308,7 +307,7 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
 	if (!i915_gem_object_type_has(obj, flags))
 		return ERR_PTR(-ENXIO);
 
-	err = mutex_lock_interruptible(&obj->mm.lock);
+	err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES);
 	if (err)
 		return ERR_PTR(err);

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
index 8043ff63d73f..b1b7c1b3038a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
@@ -164,7 +164,7 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 	if (err)
 		return err;
 
-	mutex_lock(&obj->mm.lock);
+	mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
 
 	if (obj->mm.madv != I915_MADV_WILLNEED) {
 		err = -EFAULT;

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
index fd3ce6da8497..066b3df677e8 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
@@ -57,7 +57,7 @@ static bool unsafe_drop_pages(struct drm_i915_gem_object *obj,
 		flags = I915_GEM_OBJECT_UNBIND_ACTIVE;
 
 	if (i915_gem_object_unbind(obj, flags) == 0)
-		__i915_gem_object_put_pages(obj, I915_MM_SHRINKER);
+		__i915_gem_object_put_pages(obj);
 
 	return !i915_gem_object_has_pages(obj);
 }
@@ -209,8 +209,7 @@ i915_gem_shrink(struct drm_i915_private *i915,
 
 			if (unsafe_drop_pages(obj, shrink)) {
 				/* May arrive from get_pages on another bo */
-				mutex_lock_nested(&obj->mm.lock,
-						  I915_MM_SHRINKER);
+				mutex_lock(&obj->mm.lock);
 				if (!i915_gem_object_has_pages(obj)) {
 					try_to_writeback(obj, shrink);
 					count += obj->base.size >> PAGE_SHIFT;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 1e045c337044..ee65c6acf0e2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -131,7 +131,7 @@ userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
 		ret = i915_gem_object_unbind(obj,
 					     I915_GEM_OBJECT_UNBIND_ACTIVE);
 		if (ret == 0)
-			ret = __i915_gem_object_put_pages(obj, I915_MM_SHRINKER);
+			ret = __i915_gem_object_put_pages(obj);
 		i915_gem_object_put(obj);
 		if (ret)
 			return ret;
@@ -483,7 +483,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
 		}
 	}
 
-	mutex_lock(&obj->mm.lock);
+	mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
 	if (obj->userptr.work == &work->work) {
 		struct sg_table *pages = ERR_PTR(ret);

diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index 688c49a24f32..5c9583349077 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -517,7 +517,7 @@ static int igt_mock_memory_region_huge_pages(void *arg)
 		i915_vma_unpin(vma);
 		i915_vma_close(vma);
 
-		__i915_gem_object_put_pages(obj, I915_MM_NORMAL);
+		__i915_gem_object_put_pages(obj);
 		i915_gem_object_put(obj);
 	}
 }
@@ -650,7 +650,7 @@ static int igt_mock_ppgtt_misaligned_dma(void *arg)
 		i915_vma_close(vma);
 
 		i915_gem_object_unpin_pages(obj);
-		__i915_gem_object_put_pages(obj, I915_MM_NORMAL);
+		__i915_gem_object_put_pages(obj);
 		i915_gem_object_put(obj);
 	}
 
@@ -678,7 +678,7 @@ static void close_object_list(struct list_head *objects,
 		list_del(&obj->st_link);
 
 		i915_gem_object_unpin_pages(obj);
-		__i915_gem_object_put_pages(obj, I915_MM_NORMAL);
+		__i915_gem_object_put_pages(obj);
 		i915_gem_object_put(obj);
 	}
 }
@@ -948,7 +948,7 @@ static int igt_mock_ppgtt_64K(void *arg)
 			i915_vma_close(vma);
 
 			i915_gem_object_unpin_pages(obj);
-			__i915_gem_object_put_pages(obj, I915_MM_NORMAL);
+			__i915_gem_object_put_pages(obj);
 			i915_gem_object_put(obj);
 		}
 	}
@@ -1301,7 +1301,7 @@ static int igt_ppgtt_exhaust_huge(void *arg)
 			}
 
 			i915_gem_object_unpin_pages(obj);
-			__i915_gem_object_put_pages(obj, I915_MM_NORMAL);
+			__i915_gem_object_put_pages(obj);
 			i915_gem_object_put(obj);
 		}
 	}
@@ -1442,7 +1442,7 @@ static int igt_ppgtt_smoke_huge(void *arg)
 		}
 out_unpin:
 		i915_gem_object_unpin_pages(obj);
-		__i915_gem_object_put_pages(obj, I915_MM_NORMAL);
+		__i915_gem_object_put_pages(obj);
 out_put:
 		i915_gem_object_put(obj);
 
@@ -1530,7 +1530,7 @@ static int igt_ppgtt_sanity_check(void *arg)
 		err = igt_write_huge(ctx, obj);
 
 		i915_gem_object_unpin_pages(obj);
-		__i915_gem_object_put_pages(obj, I915_MM_NORMAL);
+		__i915_gem_object_put_pages(obj);
 		i915_gem_object_put(obj);
 
 		if (err) {
diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
index 19e1cca8f143..95d609abd39b 100644
--- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
@@ -32,7 +32,7 @@ static void close_objects(struct intel_memory_region *mem,
 		if (i915_gem_object_has_pinned_pages(obj))
 			i915_gem_object_unpin_pages(obj);
 		/* No polluting the memory region between tests */
-		__i915_gem_object_put_pages(obj, I915_MM_NORMAL);
+		__i915_gem_object_put_pages(obj);
 		list_del(&obj->st_link);
 		i915_gem_object_put(obj);
 	}
@@ -122,7 +122,7 @@ igt_object_create(struct intel_memory_region *mem,
 static void igt_object_release(struct drm_i915_gem_object *obj)
 {
 	i915_gem_object_unpin_pages(obj);
-	__i915_gem_object_put_pages(obj, I915_MM_NORMAL);
+	__i915_gem_object_put_pages(obj);
 	list_del(&obj->st_link);
 	i915_gem_object_put(obj);
 }
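As a concrete illustration of the bug class mentioned in the commit
message (the helper below is invented; the two locked calls are the
real interfaces from this patch): calling get_pages from the put_pages
path inverts the primed ordering, so lockdep now reports it instead of
being pacified by a nesting flag.

/* Hypothetical broken code, for illustration only. */
static int broken_repopulate(struct drm_i915_gem_object *obj)
{
	int err;

	mutex_lock(&obj->mm.lock);		/* subclass 0, as in put_pages */
	/*
	 * get_pages takes obj->mm.lock with I915_MM_GET_PAGES; together
	 * with the primed GET_PAGES -> fs_reclaim -> NORMAL chain this
	 * closes a cycle, so lockdep complains.
	 */
	err = __i915_gem_object_get_pages(obj);
	mutex_unlock(&obj->mm.lock);

	return err;
}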
From patchwork Mon Nov 4 17:37:19 2019
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 11226145
From: Daniel Vetter
To: Intel Graphics Development
Date: Mon, 4 Nov 2019 18:37:19 +0100
Message-Id: <20191104173720.2696-2-daniel.vetter@ffwll.ch>
In-Reply-To: <20191104173720.2696-1-daniel.vetter@ffwll.ch>
References: <20191104173720.2696-1-daniel.vetter@ffwll.ch>
Subject: [Intel-gfx] [PATCH 2/3] lockdep: add might_lock_nested()
Cc: Peter Zijlstra, Daniel Vetter, linux-kernel@vger.kernel.org, Ingo Molnar, Will Deacon

Necessary to annotate functions where we might acquire a lock with
mutex_lock_nested() or similar. Needed by i915.

Acked-by: Peter Zijlstra (Intel)
Signed-off-by: Daniel Vetter
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Will Deacon
Cc: linux-kernel@vger.kernel.org
---
 include/linux/lockdep.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index e0eca94e58c8..c4155436e6fc 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -628,6 +628,13 @@ do {									\
 	lock_acquire(&(lock)->dep_map, 0, 0, 1, 1, NULL, _THIS_IP_);	\
 	lock_release(&(lock)->dep_map, 0, _THIS_IP_);			\
 } while (0)
+# define might_lock_nested(lock, subclass)				\
+do {									\
+	typecheck(struct lockdep_map *, &(lock)->dep_map);		\
+	lock_acquire(&(lock)->dep_map, subclass, 0, 1, 1, NULL,		\
+		     _THIS_IP_);					\
+	lock_release(&(lock)->dep_map, 0, _THIS_IP_);			\
+} while (0)
 
 #define lockdep_assert_irqs_enabled()	do {				\
 		WARN_ONCE(debug_locks && !current->lockdep_recursion &&	\
@@ -650,6 +657,7 @@ do {									\
 #else
 # define might_lock(lock) do { } while (0)
 # define might_lock_read(lock) do { } while (0)
+# define might_lock_nested(lock, subclass) do { } while (0)
 # define lockdep_assert_irqs_enabled() do { } while (0)
 # define lockdep_assert_irqs_disabled() do { } while (0)
 # define lockdep_assert_in_irq() do { } while (0)
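A hedged usage sketch of the new macro (struct and helper names are
invented; might_lock_nested(), mutex_lock_nested() and
SINGLE_DEPTH_NESTING are the real interfaces): a function whose fast
path skips the lock can still declare the dependency its slow path
would create, just as might_lock() does for subclass 0.

struct widget {
	struct mutex lock;	/* hypothetical example lock */
};

static void frob_widget(struct widget *w)
{
	/* Record the potential acquisition even when the fast path wins. */
	might_lock_nested(&w->lock, SINGLE_DEPTH_NESTING);

	if (try_fast_frob(w))	/* hypothetical lockless fast path */
		return;

	mutex_lock_nested(&w->lock, SINGLE_DEPTH_NESTING);
	slow_frob(w);		/* hypothetical locked slow path */
	mutex_unlock(&w->lock);
}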
From patchwork Mon Nov 4 17:37:20 2019
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 11226147
From: Daniel Vetter
To: Intel Graphics Development
Date: Mon, 4 Nov 2019 18:37:20 +0100
Message-Id: <20191104173720.2696-3-daniel.vetter@ffwll.ch>
In-Reply-To: <20191104173720.2696-1-daniel.vetter@ffwll.ch>
References: <20191104173720.2696-1-daniel.vetter@ffwll.ch>
Subject: [Intel-gfx] [PATCH 3/3] drm/i915: use might_lock_nested in get_pages annotation
Cc: Peter Zijlstra, Daniel Vetter, linux-kernel@vger.kernel.org, Ingo Molnar, Will Deacon

Strictly speaking the existing annotation is also ok, because we have a
chain of

  obj->mm.lock#I915_MM_GET_PAGES -> fs_reclaim -> obj->mm.lock

(the shrinker cannot get at an object while we're in get_pages, hence
this is safe). But it's confusing, so try to take the right subclass
of the lock.
This reduces our lockdep-based checking a bit, but it is also less
fragile in case we ever change the nesting around.

Signed-off-by: Daniel Vetter
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Will Deacon
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Joonas Lahtinen
---
 drivers/gpu/drm/i915/gem/i915_gem_object.h | 36 +++++++++++-----------
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index edaf7126a84d..e5750d506cc9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -271,10 +271,27 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
 int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj);
 int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj);
 
+enum i915_mm_subclass { /* lockdep subclass for obj->mm.lock/struct_mutex */
+	I915_MM_NORMAL = 0,
+	/*
+	 * Only used by struct_mutex, when called "recursively" from
+	 * direct-reclaim-esque. Safe because there is only ever one
+	 * struct_mutex in the entire system.
+	 */
+	I915_MM_SHRINKER = 1,
+	/*
+	 * Used for obj->mm.lock when allocating pages. Safe because the object
+	 * isn't yet on any LRU, and therefore the shrinker can't deadlock on
+	 * it. As soon as the object has pages, obj->mm.lock nests within
+	 * fs_reclaim.
+	 */
+	I915_MM_GET_PAGES = 1,
+};
+
 static inline int __must_check
 i915_gem_object_pin_pages(struct drm_i915_gem_object *obj)
 {
-	might_lock(&obj->mm.lock);
+	might_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
 
 	if (atomic_inc_not_zero(&obj->mm.pages_pin_count))
 		return 0;
 
@@ -317,23 +334,6 @@ i915_gem_object_unpin_pages(struct drm_i915_gem_object *obj)
 	__i915_gem_object_unpin_pages(obj);
 }
 
-enum i915_mm_subclass { /* lockdep subclass for obj->mm.lock/struct_mutex */
-	I915_MM_NORMAL = 0,
-	/*
-	 * Only used by struct_mutex, when called "recursively" from
-	 * direct-reclaim-esque. Safe because there is only ever one
-	 * struct_mutex in the entire system.
-	 */
-	I915_MM_SHRINKER = 1,
-	/*
-	 * Used for obj->mm.lock when allocating pages. Safe because the object
-	 * isn't yet on any LRU, and therefore the shrinker can't deadlock on
-	 * it. As soon as the object has pages, obj->mm.lock nests within
-	 * fs_reclaim.
-	 */
-	I915_MM_GET_PAGES = 1,
-};
-
 int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj);
 void i915_gem_object_truncate(struct drm_i915_gem_object *obj);
 void i915_gem_object_writeback(struct drm_i915_gem_object *obj);
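To close, a hypothetical lockdep smoke test (not part of the series,
function name invented) for what the combined annotations encode: once
an object can have pages, obj->mm.lock nests inside fs_reclaim, so
declaring a GET_PAGES acquisition from within a reclaim section should
produce a lockdep report, because the init-time priming recorded the
opposite order (GET_PAGES -> fs_reclaim).

#include <linux/sched/mm.h>	/* fs_reclaim_acquire/release */

static void check_get_pages_vs_reclaim(struct drm_i915_gem_object *obj)
{
	fs_reclaim_acquire(GFP_KERNEL);
	/* Inverts the primed GET_PAGES -> fs_reclaim ordering: splat. */
	might_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
	fs_reclaim_release(GFP_KERNEL);
}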