From patchwork Mon Nov 4 17:37:20 2019
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 11226147
From: Daniel Vetter
To: Intel Graphics Development
Cc: Peter Zijlstra, Daniel Vetter, linux-kernel@vger.kernel.org, Ingo Molnar, Will Deacon
Date: Mon, 4 Nov 2019 18:37:20 +0100
Message-Id: <20191104173720.2696-3-daniel.vetter@ffwll.ch>
In-Reply-To: <20191104173720.2696-1-daniel.vetter@ffwll.ch>
References: <20191104173720.2696-1-daniel.vetter@ffwll.ch>
Subject: [Intel-gfx] [PATCH 3/3] drm/i915: use might_lock_nested in get_pages annotation

So strictly speaking the
existing annotation is also ok, because we have a chain of
obj->mm.lock#I915_MM_GET_PAGES -> fs_reclaim -> obj->mm.lock (the
shrinker cannot get at an object while we're in get_pages, hence this
is safe). But it's confusing, so try to take the right subclass of
the lock.

This reduces our lockdep-based checking a bit, but it is also less
fragile in case we ever change the nesting.

Signed-off-by: Daniel Vetter
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Will Deacon
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Joonas Lahtinen
---
 drivers/gpu/drm/i915/gem/i915_gem_object.h | 36 +++++++++++-----------
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index edaf7126a84d..e5750d506cc9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -271,10 +271,27 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
 int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj);
 int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj);
 
+enum i915_mm_subclass { /* lockdep subclass for obj->mm.lock/struct_mutex */
+	I915_MM_NORMAL = 0,
+	/*
+	 * Only used by struct_mutex, when called "recursively" from
+	 * direct-reclaim-esque. Safe because there is only every one
+	 * struct_mutex in the entire system.
+	 */
+	I915_MM_SHRINKER = 1,
+	/*
+	 * Used for obj->mm.lock when allocating pages. Safe because the object
+	 * isn't yet on any LRU, and therefore the shrinker can't deadlock on
+	 * it. As soon as the object has pages, obj->mm.lock nests within
+	 * fs_reclaim.
+	 */
+	I915_MM_GET_PAGES = 1,
+};
+
 static inline int __must_check
 i915_gem_object_pin_pages(struct drm_i915_gem_object *obj)
 {
-	might_lock(&obj->mm.lock);
+	might_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
 
 	if (atomic_inc_not_zero(&obj->mm.pages_pin_count))
 		return 0;
@@ -317,23 +334,6 @@ i915_gem_object_unpin_pages(struct drm_i915_gem_object *obj)
 	__i915_gem_object_unpin_pages(obj);
 }
 
-enum i915_mm_subclass { /* lockdep subclass for obj->mm.lock/struct_mutex */
-	I915_MM_NORMAL = 0,
-	/*
-	 * Only used by struct_mutex, when called "recursively" from
-	 * direct-reclaim-esque. Safe because there is only every one
-	 * struct_mutex in the entire system.
-	 */
-	I915_MM_SHRINKER = 1,
-	/*
-	 * Used for obj->mm.lock when allocating pages. Safe because the object
-	 * isn't yet on any LRU, and therefore the shrinker can't deadlock on
-	 * it. As soon as the object has pages, obj->mm.lock nests within
-	 * fs_reclaim.
-	 */
-	I915_MM_GET_PAGES = 1,
-};
-
 int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj);
 void i915_gem_object_truncate(struct drm_i915_gem_object *obj);
 void i915_gem_object_writeback(struct drm_i915_gem_object *obj);