From patchwork Wed Apr 13 06:01:33 2022
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 12811536
Subject: [PATCH v2 01/12] device-core: Move device_lock() lockdep init to a helper
From: Dan Williams
To: linux-cxl@vger.kernel.org
Cc: Pierre-Louis Bossart, Dave Jiang, Kevin Tian, peterz@infradead.org,
    vishal.l.verma@intel.com, alison.schofield@intel.com,
    gregkh@linuxfoundation.org, linux-kernel@vger.kernel.org,
    nvdimm@lists.linux.dev
Date: Tue, 12 Apr 2022 23:01:33 -0700
Message-ID: <164982969345.684294.10649947254613843363.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com>

In preparation for new infrastructure to support lockdep validation of
device_lock() usage across driver subsystems, add a device_lockdep_init()
helper to contain those updates.
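Since the diff below is flattened in this archive, the helper it introduces
is restated here for readability (from the include/linux/device.h hunk; both
CONFIG_PROVE_LOCKING variants shown):

  #ifdef CONFIG_PROVE_LOCKING
  static inline void device_lockdep_init(struct device *dev)
  {
          /* debug peer lock used by subsystems for local lockdep coverage */
          mutex_init(&dev->lockdep_mutex);
          lockdep_set_novalidate_class(&dev->mutex);
  }
  #else
  static inline void device_lockdep_init(struct device *dev)
  {
          lockdep_set_novalidate_class(&dev->mutex);
  }
  #endif

device_initialize() then calls device_lockdep_init(dev) immediately after
mutex_init(&dev->mutex), replacing the previously open-coded #ifdef block.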
Suggested-by: Pierre-Louis Bossart Reviewed-by: Dave Jiang Reviewed-by: Kevin Tian Signed-off-by: Dan Williams --- drivers/base/core.c | 5 +---- include/linux/device.h | 13 +++++++++++++ 2 files changed, 14 insertions(+), 4 deletions(-) diff --git a/drivers/base/core.c b/drivers/base/core.c index 3d6430eb0c6a..cb782299ae44 100644 --- a/drivers/base/core.c +++ b/drivers/base/core.c @@ -2864,10 +2864,7 @@ void device_initialize(struct device *dev) kobject_init(&dev->kobj, &device_ktype); INIT_LIST_HEAD(&dev->dma_pools); mutex_init(&dev->mutex); -#ifdef CONFIG_PROVE_LOCKING - mutex_init(&dev->lockdep_mutex); -#endif - lockdep_set_novalidate_class(&dev->mutex); + device_lockdep_init(dev); spin_lock_init(&dev->devres_lock); INIT_LIST_HEAD(&dev->devres_head); device_pm_init(dev); diff --git a/include/linux/device.h b/include/linux/device.h index 93459724dcde..af2576ace130 100644 --- a/include/linux/device.h +++ b/include/linux/device.h @@ -762,6 +762,19 @@ static inline bool dev_pm_test_driver_flags(struct device *dev, u32 flags) return !!(dev->power.driver_flags & flags); } +#ifdef CONFIG_PROVE_LOCKING +static inline void device_lockdep_init(struct device *dev) +{ + mutex_init(&dev->lockdep_mutex); + lockdep_set_novalidate_class(&dev->mutex); +} +#else +static inline void device_lockdep_init(struct device *dev) +{ + lockdep_set_novalidate_class(&dev->mutex); +} +#endif + static inline void device_lock(struct device *dev) { mutex_lock(&dev->mutex); From patchwork Wed Apr 13 06:01:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dan Williams X-Patchwork-Id: 12811537 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DD89CC433F5 for ; Wed, 13 Apr 2022 06:02:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232742AbiDMGEW (ORCPT ); Wed, 13 Apr 2022 02:04:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35906 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231396AbiDMGEP (ORCPT ); Wed, 13 Apr 2022 02:04:15 -0400 Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6DC6F5132A; Tue, 12 Apr 2022 23:01:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649829715; x=1681365715; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=6f5NDzNq7ZkGie/x3qcHs7RbkZ14iszEa1ADUciL4uE=; b=YdNjLH8lTPtkYWxSK0hkVJWDF0vDbXTO+6raZi2zZvTyq7ARF4XZRw0E zh0+8ggxsKunCx0aKskXaSidG82Hf7Ei1SD53H4x2WR2eX5zURrdF1ECE lAomXaBzCXFzMMc9s3zKgg/i6lUpiKal1lvjTDvGQurhsp9eoZNTKK2ax weZUEOQRWpUDQad9APuD3POJpHn2pFVlZCU0Nn1qBPQxOQUQLMdKs/m6/ pGCyqMd6rDYxAS80RA+Ld8lLSlRSV0jYnMjzUjb7LTRJg/4LUUseTL8tm 3YgWLQnX6um/m0YTLcW9yjbJOoGEA7MD/Zw2z0L8RMsumpV/MgLlrjoG4 Q==; X-IronPort-AV: E=McAfee;i="6400,9594,10315"; a="262025895" X-IronPort-AV: E=Sophos;i="5.90,256,1643702400"; d="scan'208";a="262025895" Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Apr 2022 23:01:39 -0700 X-IronPort-AV: E=Sophos;i="5.90,255,1643702400"; d="scan'208";a="559626138" Received: from dwillia2-desk3.jf.intel.com (HELO dwillia2-desk3.amr.corp.intel.com) ([10.54.39.25]) 
Subject: [PATCH v2 02/12] device-core: Add dev->lock_class to enable device_lock() lockdep validation
From: Dan Williams
To: linux-cxl@vger.kernel.org
Cc: Greg Kroah-Hartman, "Rafael J. Wysocki", Dave Jiang, Kevin Tian,
    peterz@infradead.org, vishal.l.verma@intel.com,
    alison.schofield@intel.com, gregkh@linuxfoundation.org,
    linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev
Date: Tue, 12 Apr 2022 23:01:38 -0700
Message-ID: <164982969858.684294.17819743973041389492.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com>

The device_lock() is hidden from lockdep by default because, for example,
a device subsystem may do something like:

---
device_add(dev1);
...in driver core...
device_lock(dev1);
bus->probe(dev1); /* where bus->probe() calls driver1_probe() */

driver1_probe(struct device *dev)
{
	...do some enumeration...
	dev2->parent = dev;
	/* this triggers probe under device_lock(dev2); */
	device_add(dev2);
}
---

To lockdep, that device_lock(dev2) looks like a deadlock because lockdep
only sees lock classes, not individual lock instances. All device_lock()
instances across the entire kernel are the same class. However, this is
not a deadlock in practice because the locking is strictly hierarchical.
I.e. device_lock(dev1) is held over device_lock(dev2), but never the
reverse. In order for lockdep to be satisfied, and to see that the locking
is hierarchical in practice, the mutex_lock() call in device_lock() needs
to be moved to mutex_lock_nested(), where the @subclass argument to
mutex_lock_nested() represents the nesting level, i.e.:

s/device_lock(dev1)/mutex_lock_nested(&dev1->mutex, 1)/
s/device_lock(dev2)/mutex_lock_nested(&dev2->mutex, 2)/

Now, what if the internals of device_lock() could be annotated with the
right @subclass argument to call mutex_lock_nested()? With
device_set_lock_class() a subsystem can optionally add that metadata. The
device_lock() still takes dev->mutex, but when dev->lock_class is >= 0 it
additionally takes dev->lockdep_mutex with the proper nesting. Unlike
dev->mutex, dev->lockdep_mutex is not marked
lockdep_set_novalidate_class(), so lockdep becomes useful... at least for
one subsystem at a time.

It is still the case that only one subsystem can use lockdep with
lockdep_mutex at a time, because different subsystems' class numbers would
collide. You might say "well, how about subsystem1 gets class ids 0 to 9
and subsystem2 gets class ids 10 to 20?". MAX_LOCKDEP_SUBCLASSES is 8, and
8 is just enough class ids for one subsystem of moderate complexity.
Fixing that problem needs deeper changes, but for now moving the ability
to set a lock class into the core lets the NVDIMM and CXL subsystems drop
their incomplete solutions, which attempt to set the lock class and take
the lockdep mutex after the fact. This approach has prevented at least one
deadlock scenario from making its way upstream that was not caught by the
current "local / after-the-fact" usage of dev->lockdep_mutex (commit
87a30e1f05d7 ("driver-core, libnvdimm: Let device subsystems add local
lockdep coverage")).

Cc: Greg Kroah-Hartman
Cc: "Rafael J.
Wysocki" Reviewed-by: Dave Jiang Reviewed-by: Kevin Tian Signed-off-by: Dan Williams --- include/linux/device.h | 92 ++++++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 88 insertions(+), 4 deletions(-) diff --git a/include/linux/device.h b/include/linux/device.h index af2576ace130..6083e757e804 100644 --- a/include/linux/device.h +++ b/include/linux/device.h @@ -402,6 +402,7 @@ struct dev_msi_info { * @mutex: Mutex to synchronize calls to its driver. * @lockdep_mutex: An optional debug lock that a subsystem can use as a * peer lock to gain localized lockdep coverage of the device_lock. + * @lock_class: per-subsystem annotated device lock class * @bus: Type of bus device is on. * @driver: Which driver has allocated this * @platform_data: Platform data specific to the device. @@ -501,6 +502,7 @@ struct device { dev_set_drvdata/dev_get_drvdata */ #ifdef CONFIG_PROVE_LOCKING struct mutex lockdep_mutex; + int lock_class; #endif struct mutex mutex; /* mutex to synchronize calls to * its driver. @@ -762,18 +764,100 @@ static inline bool dev_pm_test_driver_flags(struct device *dev, u32 flags) return !!(dev->power.driver_flags & flags); } +static inline void device_lock_assert(struct device *dev) +{ + lockdep_assert_held(&dev->mutex); +} + #ifdef CONFIG_PROVE_LOCKING static inline void device_lockdep_init(struct device *dev) { mutex_init(&dev->lockdep_mutex); + dev->lock_class = -1; lockdep_set_novalidate_class(&dev->mutex); } -#else + +static inline void device_lock(struct device *dev) +{ + /* + * For double-lock programming errors the kernel will hang + * trying to acquire @dev->mutex before lockdep can report the + * problem acquiring @dev->lockdep_mutex, so manually assert + * before that hang. + */ + lockdep_assert_not_held(&dev->lockdep_mutex); + + mutex_lock(&dev->mutex); + if (dev->lock_class >= 0) + mutex_lock_nested(&dev->lockdep_mutex, dev->lock_class); +} + +static inline int device_lock_interruptible(struct device *dev) +{ + int rc; + + lockdep_assert_not_held(&dev->lockdep_mutex); + + rc = mutex_lock_interruptible(&dev->mutex); + if (rc || dev->lock_class < 0) + return rc; + + return mutex_lock_interruptible_nested(&dev->lockdep_mutex, + dev->lock_class); +} + +static inline int device_trylock(struct device *dev) +{ + if (mutex_trylock(&dev->mutex)) { + if (dev->lock_class >= 0) + mutex_lock_nested(&dev->lockdep_mutex, dev->lock_class); + return 1; + } + + return 0; +} + +static inline void device_unlock(struct device *dev) +{ + if (dev->lock_class >= 0) + mutex_unlock(&dev->lockdep_mutex); + mutex_unlock(&dev->mutex); +} + +/* + * Note: this routine expects that the state of @dev->mutex is stable + * from entry to exit. There is no support for changing lockdep + * validation classes, only enabling and disabling validation. + */ +static inline void device_set_lock_class(struct device *dev, int lock_class) +{ + /* + * Allow for setting or clearing the lock class while the + * device_lock() is held, in which case the paired nested lock + * might need to be acquired or released now to accommodate the + * next device_unlock(). + */ + if (dev->lock_class < 0 && lock_class >= 0) { + /* Enabling lockdep validation... */ + if (mutex_is_locked(&dev->mutex)) + mutex_lock_nested(&dev->lockdep_mutex, lock_class); + } else if (dev->lock_class >= 0 && lock_class < 0) { + /* Disabling lockdep validation... 
*/ + if (mutex_is_locked(&dev->mutex)) + mutex_unlock(&dev->lockdep_mutex); + } else { + dev_warn(dev, + "%s: failed to change lock_class from: %d to %d\n", + __func__, dev->lock_class, lock_class); + return; + } + dev->lock_class = lock_class; +} +#else /* !CONFIG_PROVE_LOCKING */ static inline void device_lockdep_init(struct device *dev) { lockdep_set_novalidate_class(&dev->mutex); } -#endif static inline void device_lock(struct device *dev) { @@ -795,10 +879,10 @@ static inline void device_unlock(struct device *dev) mutex_unlock(&dev->mutex); } -static inline void device_lock_assert(struct device *dev) +static inline void device_set_lock_class(struct device *dev, int lock_class) { - lockdep_assert_held(&dev->mutex); } +#endif /* CONFIG_PROVE_LOCKING */ static inline struct device_node *dev_of_node(struct device *dev) { From patchwork Wed Apr 13 06:01:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dan Williams X-Patchwork-Id: 12811539 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 42D35C4332F for ; Wed, 13 Apr 2022 06:02:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232787AbiDMGE0 (ORCPT ); Wed, 13 Apr 2022 02:04:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36124 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232793AbiDMGEZ (ORCPT ); Wed, 13 Apr 2022 02:04:25 -0400 Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6A510515AC; Tue, 12 Apr 2022 23:02:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649829725; x=1681365725; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=WeXGisg3F5xd5CDMy9xUZA6jIDX7Clw9WAukSk7hrxY=; b=Ks4wo2JHJlhUIyGqL0JCdMMRqzV+aAShE+bVLrkjN4JmmvNtXr7wEYBY 0yuCAeRzA1h3u6ZGArGRr3Yg5UqNecaNdu8u+a6TH8awDLeiPkpQwR05G mpi5GYc5hkhFp0JLbdmAVPGyFmYTLrZKKB8wTsa9nLIAF0ixNIlsQFPW2 2H3y8D4Hyudb+Nco9laJ58hbH153gnuxUBEGzp5/p+RXS30cUqrRCtXII gJZ6RZzU//U1Jk89yE5DAIBN5BAOE0qhpLPbBZO+AYGJCGZX8dQr9cv+V 93uf3Y9luEoXmqH1TNjwGMVlX8WFW7cmn+eIoOM7RNvnbWX7MMGYEsfdn A==; X-IronPort-AV: E=McAfee;i="6400,9594,10315"; a="262025971" X-IronPort-AV: E=Sophos;i="5.90,256,1643702400"; d="scan'208";a="262025971" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Apr 2022 23:02:05 -0700 X-IronPort-AV: E=Sophos;i="5.90,256,1643702400"; d="scan'208";a="623559249" Received: from dwillia2-desk3.jf.intel.com (HELO dwillia2-desk3.amr.corp.intel.com) ([10.54.39.25]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Apr 2022 23:01:44 -0700 Subject: [PATCH v2 03/12] cxl/core: Refactor a cxl_lock_class() out of cxl_nested_lock() From: Dan Williams To: linux-cxl@vger.kernel.org Cc: Alison Schofield , Vishal Verma , Ira Weiny , Ben Widawsky , Jonathan Cameron , Dave Jiang , Kevin Tian , peterz@infradead.org, gregkh@linuxfoundation.org, linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev Date: Tue, 12 Apr 2022 23:01:44 -0700 Message-ID: <164982970436.684294.12004091884213856239.stgit@dwillia2-desk3.amr.corp.intel.com> In-Reply-To: 
<164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com> References: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com> User-Agent: StGit/0.18-3-g996c MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org In preparation for upleveling device_lock() lockdep annotation support into the core, provide a helper to retrieve the lock class. This lock_class will be used with device_set_lock_class() to identify the CXL nested locking rules. Cc: Alison Schofield Cc: Vishal Verma Cc: Ira Weiny Cc: Ben Widawsky Reviewed-by: Jonathan Cameron Reviewed-by: Dave Jiang Reviewed-by: Kevin Tian Signed-off-by: Dan Williams --- drivers/cxl/cxl.h | 19 +++++++++++-------- 1 file changed, 11 insertions(+), 8 deletions(-) diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 990b6670222e..c9fda9304c54 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -418,13 +418,12 @@ enum cxl_lock_class { */ }; -static inline void cxl_nested_lock(struct device *dev) +static inline int cxl_lock_class(struct device *dev) { if (is_cxl_port(dev)) { struct cxl_port *port = to_cxl_port(dev); - mutex_lock_nested(&dev->lockdep_mutex, - CXL_PORT_LOCK + port->depth); + return CXL_PORT_LOCK + port->depth; } else if (is_cxl_decoder(dev)) { struct cxl_port *port = to_cxl_port(dev->parent); @@ -432,14 +431,18 @@ static inline void cxl_nested_lock(struct device *dev) * A decoder is the immediate child of a port, so set * its lock class equal to other child device siblings. */ - mutex_lock_nested(&dev->lockdep_mutex, - CXL_PORT_LOCK + port->depth + 1); + return CXL_PORT_LOCK + port->depth + 1; } else if (is_cxl_nvdimm_bridge(dev)) - mutex_lock_nested(&dev->lockdep_mutex, CXL_NVDIMM_BRIDGE_LOCK); + return CXL_NVDIMM_BRIDGE_LOCK; else if (is_cxl_nvdimm(dev)) - mutex_lock_nested(&dev->lockdep_mutex, CXL_NVDIMM_LOCK); + return CXL_NVDIMM_LOCK; else - mutex_lock_nested(&dev->lockdep_mutex, CXL_ANON_LOCK); + return CXL_ANON_LOCK; +} + +static inline void cxl_nested_lock(struct device *dev) +{ + mutex_lock_nested(&dev->lockdep_mutex, cxl_lock_class(dev)); } static inline void cxl_nested_unlock(struct device *dev) From patchwork Wed Apr 13 06:01:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dan Williams X-Patchwork-Id: 12811543 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 18737C433F5 for ; Wed, 13 Apr 2022 06:02:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232850AbiDMGEx (ORCPT ); Wed, 13 Apr 2022 02:04:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36848 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232855AbiDMGEu (ORCPT ); Wed, 13 Apr 2022 02:04:50 -0400 Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5D30051E51; Tue, 12 Apr 2022 23:02:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649829743; x=1681365743; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=vtmksge+PxxfmYNS5fXJl4pdxkZNhem2rCAEcMLmAY4=; b=Udw/7TFs+1Ip6tgvEmPFnSTqCWvGJVcDI0z0YiEc3x+S2LaXtRLfTqwT 
wvTful9XJrfT/QzQ1Jt3TTZ5e+5R2dEywD03pypsJQBxaTwpVwaC2G3Xl Iu78Plz3ynjclA2gYiTMYRGXe0D7x2nmpg2W0JI89moYr77wAawRIRblO l9JVxd29olICeYMD5A1oleIgqCOgHKoaSL0Mw2a8LfHDz/GepG0mCtQoJ 8ZjTqkJIRurczJxAc+vR8xVHm5fJljePkwDPvrde2oHIeZKXxWRF8uplr i5jSXMrp+DxtTcEAt4CmEc3lVxKB39kzMFe4TMVLo1ZDaI93GXR9k5LfJ w==; X-IronPort-AV: E=McAfee;i="6400,9594,10315"; a="323027707" X-IronPort-AV: E=Sophos;i="5.90,256,1643702400"; d="scan'208";a="323027707" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Apr 2022 23:01:50 -0700 X-IronPort-AV: E=Sophos;i="5.90,255,1643702400"; d="scan'208";a="645039160" Received: from dwillia2-desk3.jf.intel.com (HELO dwillia2-desk3.amr.corp.intel.com) ([10.54.39.25]) by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Apr 2022 23:01:50 -0700 Subject: [PATCH v2 04/12] cxl/core: Remove cxl_device_lock() From: Dan Williams To: linux-cxl@vger.kernel.org Cc: Alison Schofield , Vishal Verma , Ira Weiny , Ben Widawsky , Jonathan Cameron , Dave Jiang , Kevin Tian , peterz@infradead.org, gregkh@linuxfoundation.org, linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev Date: Tue, 12 Apr 2022 23:01:49 -0700 Message-ID: <164982970986.684294.11190977495722089239.stgit@dwillia2-desk3.amr.corp.intel.com> In-Reply-To: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com> References: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com> User-Agent: StGit/0.18-3-g996c MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org In preparation for moving lockdep_mutex nested lock acquisition into the core, remove the cxl_device_lock() wrapper, but preserve cxl_lock_class() that will be used to inform the core of the subsystem's lock ordering rules. Note that this reverts back to the default state of unvalidated device_lock(). A follow-on change to use the new dev->lock_class facility re-enables lockdep validation. Cc: Alison Schofield Cc: Vishal Verma Cc: Ira Weiny Cc: Ben Widawsky Reviewed-by: Jonathan Cameron Reviewed-by: Dave Jiang Reviewed-by: Kevin Tian Signed-off-by: Dan Williams --- drivers/cxl/core/pmem.c | 4 ++- drivers/cxl/core/port.c | 54 ++++++++++++++++++++--------------------------- drivers/cxl/cxl.h | 46 ---------------------------------------- drivers/cxl/mem.c | 4 ++- drivers/cxl/pmem.c | 12 +++++----- lib/Kconfig.debug | 2 +- 6 files changed, 34 insertions(+), 88 deletions(-) diff --git a/drivers/cxl/core/pmem.c b/drivers/cxl/core/pmem.c index 8de240c4d96b..60f71caa5dbd 100644 --- a/drivers/cxl/core/pmem.c +++ b/drivers/cxl/core/pmem.c @@ -121,10 +121,10 @@ static void unregister_nvb(void *_cxl_nvb) * work to flush. Once the state has been changed to 'dead' then no new * work can be queued by user-triggered bind. 
*/ - cxl_device_lock(&cxl_nvb->dev); + device_lock(&cxl_nvb->dev); flush = cxl_nvb->state != CXL_NVB_NEW; cxl_nvb->state = CXL_NVB_DEAD; - cxl_device_unlock(&cxl_nvb->dev); + device_unlock(&cxl_nvb->dev); /* * Even though the device core will trigger device_release_driver() diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c index 2ab1ba4499b3..437e8a71e5dc 100644 --- a/drivers/cxl/core/port.c +++ b/drivers/cxl/core/port.c @@ -312,10 +312,10 @@ static void cxl_port_release(struct device *dev) struct cxl_port *port = to_cxl_port(dev); struct cxl_ep *ep, *_e; - cxl_device_lock(dev); + device_lock(dev); list_for_each_entry_safe(ep, _e, &port->endpoints, list) cxl_ep_release(ep); - cxl_device_unlock(dev); + device_unlock(dev); ida_free(&cxl_port_ida, port->id); kfree(port); } @@ -554,7 +554,7 @@ static int match_root_child(struct device *dev, const void *match) return 0; port = to_cxl_port(dev); - cxl_device_lock(dev); + device_lock(dev); list_for_each_entry(dport, &port->dports, list) { iter = match; while (iter) { @@ -564,7 +564,7 @@ static int match_root_child(struct device *dev, const void *match) } } out: - cxl_device_unlock(dev); + device_unlock(dev); return !!iter; } @@ -623,13 +623,13 @@ static int add_dport(struct cxl_port *port, struct cxl_dport *new) static void cond_cxl_root_lock(struct cxl_port *port) { if (is_cxl_root(port)) - cxl_device_lock(&port->dev); + device_lock(&port->dev); } static void cond_cxl_root_unlock(struct cxl_port *port) { if (is_cxl_root(port)) - cxl_device_unlock(&port->dev); + device_unlock(&port->dev); } static void cxl_dport_remove(void *data) @@ -736,15 +736,15 @@ static int add_ep(struct cxl_port *port, struct cxl_ep *new) { struct cxl_ep *dup; - cxl_device_lock(&port->dev); + device_lock(&port->dev); if (port->dead) { - cxl_device_unlock(&port->dev); + device_unlock(&port->dev); return -ENXIO; } dup = find_ep(port, new->ep); if (!dup) list_add_tail(&new->list, &port->endpoints); - cxl_device_unlock(&port->dev); + device_unlock(&port->dev); return dup ? -EEXIST : 0; } @@ -854,12 +854,12 @@ static void delete_endpoint(void *data) goto out; parent = &parent_port->dev; - cxl_device_lock(parent); + device_lock(parent); if (parent->driver && endpoint->uport) { devm_release_action(parent, cxl_unlink_uport, endpoint); devm_release_action(parent, unregister_port, endpoint); } - cxl_device_unlock(parent); + device_unlock(parent); put_device(parent); out: put_device(&endpoint->dev); @@ -920,7 +920,7 @@ static void cxl_detach_ep(void *data) } parent_port = to_cxl_port(port->dev.parent); - cxl_device_lock(&parent_port->dev); + device_lock(&parent_port->dev); if (!parent_port->dev.driver) { /* * The bottom-up race to delete the port lost to a @@ -928,12 +928,12 @@ static void cxl_detach_ep(void *data) * parent_port ->remove() will have cleaned up all * descendants. */ - cxl_device_unlock(&parent_port->dev); + device_unlock(&parent_port->dev); put_device(&port->dev); continue; } - cxl_device_lock(&port->dev); + device_lock(&port->dev); ep = find_ep(port, &cxlmd->dev); dev_dbg(&cxlmd->dev, "disconnect %s from %s\n", ep ? 
dev_name(ep->ep) : "", dev_name(&port->dev)); @@ -948,7 +948,7 @@ static void cxl_detach_ep(void *data) port->dead = true; list_splice_init(&port->dports, &reap_dports); } - cxl_device_unlock(&port->dev); + device_unlock(&port->dev); if (!list_empty(&reap_dports)) { dev_dbg(&cxlmd->dev, "delete %s\n", @@ -956,7 +956,7 @@ static void cxl_detach_ep(void *data) delete_switch_port(port, &reap_dports); } put_device(&port->dev); - cxl_device_unlock(&parent_port->dev); + device_unlock(&parent_port->dev); } } @@ -1004,7 +1004,7 @@ static int add_port_attach_ep(struct cxl_memdev *cxlmd, return -EAGAIN; } - cxl_device_lock(&parent_port->dev); + device_lock(&parent_port->dev); if (!parent_port->dev.driver) { dev_warn(&cxlmd->dev, "port %s:%s disabled, failed to enumerate CXL.mem\n", @@ -1022,7 +1022,7 @@ static int add_port_attach_ep(struct cxl_memdev *cxlmd, get_device(&port->dev); } out: - cxl_device_unlock(&parent_port->dev); + device_unlock(&parent_port->dev); if (IS_ERR(port)) rc = PTR_ERR(port); @@ -1133,14 +1133,14 @@ struct cxl_dport *cxl_find_dport_by_dev(struct cxl_port *port, { struct cxl_dport *dport; - cxl_device_lock(&port->dev); + device_lock(&port->dev); list_for_each_entry(dport, &port->dports, list) if (dport->dport == dev) { - cxl_device_unlock(&port->dev); + device_unlock(&port->dev); return dport; } - cxl_device_unlock(&port->dev); + device_unlock(&port->dev); return NULL; } EXPORT_SYMBOL_NS_GPL(cxl_find_dport_by_dev, CXL); @@ -1379,9 +1379,9 @@ int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map) port = to_cxl_port(cxld->dev.parent); - cxl_device_lock(&port->dev); + device_lock(&port->dev); rc = cxl_decoder_add_locked(cxld, target_map); - cxl_device_unlock(&port->dev); + device_unlock(&port->dev); return rc; } @@ -1452,13 +1452,7 @@ static int cxl_bus_probe(struct device *dev) { int rc; - /* - * Take the CXL nested lock since the driver core only holds - * @dev->mutex and not @dev->lockdep_mutex. - */ - cxl_nested_lock(dev); rc = to_cxl_drv(dev->driver)->probe(dev); - cxl_nested_unlock(dev); dev_dbg(dev, "probe: %d\n", rc); return rc; @@ -1468,10 +1462,8 @@ static void cxl_bus_remove(struct device *dev) { struct cxl_driver *cxl_drv = to_cxl_drv(dev->driver); - cxl_nested_lock(dev); if (cxl_drv->remove) cxl_drv->remove(dev); - cxl_nested_unlock(dev); } static struct workqueue_struct *cxl_bus_wq; diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index c9fda9304c54..a6c1a027e389 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -439,51 +439,5 @@ static inline int cxl_lock_class(struct device *dev) else return CXL_ANON_LOCK; } - -static inline void cxl_nested_lock(struct device *dev) -{ - mutex_lock_nested(&dev->lockdep_mutex, cxl_lock_class(dev)); -} - -static inline void cxl_nested_unlock(struct device *dev) -{ - mutex_unlock(&dev->lockdep_mutex); -} - -static inline void cxl_device_lock(struct device *dev) -{ - /* - * For double lock errors the lockup will happen before lockdep - * warns at cxl_nested_lock(), so assert explicitly. 
- */ - lockdep_assert_not_held(&dev->lockdep_mutex); - - device_lock(dev); - cxl_nested_lock(dev); -} - -static inline void cxl_device_unlock(struct device *dev) -{ - cxl_nested_unlock(dev); - device_unlock(dev); -} -#else -static inline void cxl_nested_lock(struct device *dev) -{ -} - -static inline void cxl_nested_unlock(struct device *dev) -{ -} - -static inline void cxl_device_lock(struct device *dev) -{ - device_lock(dev); -} - -static inline void cxl_device_unlock(struct device *dev) -{ - device_unlock(dev); -} #endif #endif /* __CXL_H__ */ diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index 49a4b1c47299..91fb8d5b21a7 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -195,7 +195,7 @@ static int cxl_mem_probe(struct device *dev) return -ENXIO; } - cxl_device_lock(&parent_port->dev); + device_lock(&parent_port->dev); if (!parent_port->dev.driver) { dev_err(dev, "CXL port topology %s not enabled\n", dev_name(&parent_port->dev)); @@ -205,7 +205,7 @@ static int cxl_mem_probe(struct device *dev) rc = create_endpoint(cxlmd, parent_port); out: - cxl_device_unlock(&parent_port->dev); + device_unlock(&parent_port->dev); put_device(&parent_port->dev); return rc; } diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c index 15ad666ab03e..b65a272a2d6d 100644 --- a/drivers/cxl/pmem.c +++ b/drivers/cxl/pmem.c @@ -43,7 +43,7 @@ static int cxl_nvdimm_probe(struct device *dev) if (!cxl_nvb) return -ENXIO; - cxl_device_lock(&cxl_nvb->dev); + device_lock(&cxl_nvb->dev); if (!cxl_nvb->nvdimm_bus) { rc = -ENXIO; goto out; @@ -68,7 +68,7 @@ static int cxl_nvdimm_probe(struct device *dev) dev_set_drvdata(dev, nvdimm); rc = devm_add_action_or_reset(dev, unregister_nvdimm, nvdimm); out: - cxl_device_unlock(&cxl_nvb->dev); + device_unlock(&cxl_nvb->dev); put_device(&cxl_nvb->dev); return rc; @@ -233,7 +233,7 @@ static void cxl_nvb_update_state(struct work_struct *work) struct nvdimm_bus *victim_bus = NULL; bool release = false, rescan = false; - cxl_device_lock(&cxl_nvb->dev); + device_lock(&cxl_nvb->dev); switch (cxl_nvb->state) { case CXL_NVB_ONLINE: if (!online_nvdimm_bus(cxl_nvb)) { @@ -251,7 +251,7 @@ static void cxl_nvb_update_state(struct work_struct *work) default: break; } - cxl_device_unlock(&cxl_nvb->dev); + device_unlock(&cxl_nvb->dev); if (release) device_release_driver(&cxl_nvb->dev); @@ -327,9 +327,9 @@ static int cxl_nvdimm_bridge_reset(struct device *dev, void *data) return 0; cxl_nvb = to_cxl_nvdimm_bridge(dev); - cxl_device_lock(dev); + device_lock(dev); cxl_nvb->state = CXL_NVB_NEW; - cxl_device_unlock(dev); + device_unlock(dev); return 0; } diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 075cd25363ac..333d41da7621 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -1563,7 +1563,7 @@ config PROVE_CXL_LOCKING bool "CXL" depends on CXL_BUS help - Enable lockdep to validate cxl_device_lock() usage. + Enable lockdep to validate CXL subsystem usage of the device lock. 
 endchoice

From patchwork Wed Apr 13 06:01:55 2022
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 12811546
Subject: [PATCH v2 05/12] cxl/core: Clamp max lock_class
From: Dan Williams
To: linux-cxl@vger.kernel.org
Cc: Alison Schofield, Vishal Verma, Ira Weiny, Ben Widawsky, Dave Jiang,
    Kevin Tian, peterz@infradead.org, gregkh@linuxfoundation.org,
    linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev
Date: Tue, 12 Apr 2022 23:01:55 -0700
Message-ID: <164982971542.684294.12980151056534955888.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com>

MAX_LOCKDEP_SUBCLASSES limits the depth of the CXL topology that can be
validated by lockdep. Given that the cxl_test topology is already at this
limit, collapse some of the levels and clamp the max depth.
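Restated from the flattened drivers/cxl/cxl.h hunk below, the clamping
helper reads:

  static inline int clamp_lock_class(struct device *dev, int lock_class)
  {
          if (lock_class >= MAX_LOCKDEP_SUBCLASSES) {
                  dev_warn_once(dev,
                                "depth: %d, disabling lockdep for this device\n",
                                lock_class);
                  /* -1 == lockdep validation disabled for this device */
                  return -1;
          }

          return lock_class;
  }

Returning -1 reuses the "validation disabled" sentinel that patch 2
established for dev->lock_class, so an over-deep port simply opts out of
lockdep coverage rather than aliasing another class; the same hunk also
collapses CXL_PORT_LOCK onto CXL_NVDIMM_BRIDGE_LOCK to conserve class ids.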
Cc: Alison Schofield Cc: Vishal Verma Cc: Ira Weiny Cc: Ben Widawsky Reviewed-by: Dave Jiang Reviewed-by: Kevin Tian Signed-off-by: Dan Williams --- drivers/cxl/cxl.h | 25 +++++++++++++++++++++---- 1 file changed, 21 insertions(+), 4 deletions(-) diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index a6c1a027e389..b86aac8cde4f 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -410,20 +410,37 @@ enum cxl_lock_class { CXL_ANON_LOCK, CXL_NVDIMM_LOCK, CXL_NVDIMM_BRIDGE_LOCK, - CXL_PORT_LOCK, + /* + * Collapse the compatible port and nvdimm-bridge lock classes + * to save space + */ + CXL_PORT_LOCK = CXL_NVDIMM_BRIDGE_LOCK, /* * Be careful to add new lock classes here, CXL_PORT_LOCK is * extended by the port depth, so a maximum CXL port topology - * depth would need to be defined first. + * depth would need to be defined first. Also, the max + * validation depth is limited by MAX_LOCKDEP_SUBCLASSES. */ }; +static inline int clamp_lock_class(struct device *dev, int lock_class) +{ + if (lock_class >= MAX_LOCKDEP_SUBCLASSES) { + dev_warn_once(dev, + "depth: %d, disabling lockdep for this device\n", + lock_class); + return -1; + } + + return lock_class; +} + static inline int cxl_lock_class(struct device *dev) { if (is_cxl_port(dev)) { struct cxl_port *port = to_cxl_port(dev); - return CXL_PORT_LOCK + port->depth; + return clamp_lock_class(dev, CXL_PORT_LOCK + port->depth); } else if (is_cxl_decoder(dev)) { struct cxl_port *port = to_cxl_port(dev->parent); @@ -431,7 +448,7 @@ static inline int cxl_lock_class(struct device *dev) * A decoder is the immediate child of a port, so set * its lock class equal to other child device siblings. */ - return CXL_PORT_LOCK + port->depth + 1; + return clamp_lock_class(dev, CXL_PORT_LOCK + port->depth + 1); } else if (is_cxl_nvdimm_bridge(dev)) return CXL_NVDIMM_BRIDGE_LOCK; else if (is_cxl_nvdimm(dev)) From patchwork Wed Apr 13 06:02:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dan Williams X-Patchwork-Id: 12811538 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D132DC433EF for ; Wed, 13 Apr 2022 06:02:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232784AbiDMGEY (ORCPT ); Wed, 13 Apr 2022 02:04:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35916 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232761AbiDMGEV (ORCPT ); Wed, 13 Apr 2022 02:04:21 -0400 Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 37C985132A; Tue, 12 Apr 2022 23:02:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649829721; x=1681365721; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=71SXlL7E70/JQd0lNPz9C9v0IgR6KTTs4c9CnrGttxc=; b=MHCTqULph0N04BwRbY+O0FQ0+NQJD7hdcy5ef3ZW3zPunCALQDlLnupd n/HKFo3tPaGAX5sSjl1EkSCqomAYk8NOovDCML54Bw9IG47xUl4xyphvq Or3k1pn6SRPMN/cuTw0q1DNw1I11aT0al1x4mzWUuKeMnjz5Kwy0TFzR3 4LEbb3vxQ2arUrWFjuZ3I1neAopS6Gsb7t1kGun5Srajf92W/qmXFHiVB UWgqwbSiDq/fKG3oeqLFxEA4Vm7NlDfq0O7tZtmuYwvmDS6GH6MA3ZXMl GT54skJcoUFhWlBcRqFhmidva4U6XpBrIAdfeMEKSMdGI4H2U4y2EBi3z w==; X-IronPort-AV: E=McAfee;i="6400,9594,10315"; a="261430845" 
Subject: [PATCH v2 06/12] cxl/core: Use dev->lock_class for device_lock() lockdep validation
From: Dan Williams
To: linux-cxl@vger.kernel.org
Cc: Alison Schofield, Vishal Verma, Ira Weiny, Ben Widawsky, Dave Jiang,
    Kevin Tian, peterz@infradead.org, gregkh@linuxfoundation.org,
    linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev
Date: Tue, 12 Apr 2022 23:02:00 -0700
Message-ID: <164982972055.684294.1452258386098745666.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com>

Update CONFIG_PROVE_CXL_LOCKING to use the common device-core helpers for
device_lock() validation. When CONFIG_PROVE_LOCKING is enabled and
device_set_lock_class() is passed a lock class of 0 or greater, the core
acquires the 'struct device' @lockdep_mutex everywhere it acquires the
device_lock(). Unlike device_lock(), acquisitions of @lockdep_mutex are
not hidden from lockdep, so they are validated. cxl_set_lock_class()
arranges to turn CXL lock validation on or off depending on
CONFIG_PROVE_CXL_LOCKING, which is part of a Kconfig 'choice' among all
the subsystems that want to use dev->lockdep_mutex for lockdep validation.
For now, that 'choice' is just between the NVDIMM and CXL subsystems.
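The CXL-side glue in the (flattened) diff below is small; restated, it
maps the existing cxl_lock_class() result onto the new core hook and is
invoked immediately before each CXL device is registered:

  static inline void cxl_set_lock_class(struct device *dev)
  {
          device_set_lock_class(dev, cxl_lock_class(dev));
  }

  /* e.g. in devm_cxl_add_port(), just before registration: */
  cxl_set_lock_class(dev);
  rc = device_add(dev);

The same call is added ahead of registration in the memdev
(cdev_device_add()), nvdimm-bridge, nvdimm, and decoder paths.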
Cc: Alison Schofield Cc: Vishal Verma Cc: Ira Weiny Cc: Ben Widawsky Reviewed-by: Dave Jiang Reviewed-by: Kevin Tian Signed-off-by: Dan Williams --- drivers/cxl/core/memdev.c | 1 + drivers/cxl/core/pmem.c | 2 ++ drivers/cxl/core/port.c | 2 ++ drivers/cxl/cxl.h | 9 +++++++++ 4 files changed, 14 insertions(+) diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c index 1f76b28f9826..ad8c9f61c38a 100644 --- a/drivers/cxl/core/memdev.c +++ b/drivers/cxl/core/memdev.c @@ -343,6 +343,7 @@ struct cxl_memdev *devm_cxl_add_memdev(struct cxl_dev_state *cxlds) cxlmd->cxlds = cxlds; cdev = &cxlmd->cdev; + cxl_set_lock_class(dev); rc = cdev_device_add(cdev, dev); if (rc) goto err; diff --git a/drivers/cxl/core/pmem.c b/drivers/cxl/core/pmem.c index 60f71caa5dbd..b0eb0b795140 100644 --- a/drivers/cxl/core/pmem.c +++ b/drivers/cxl/core/pmem.c @@ -165,6 +165,7 @@ struct cxl_nvdimm_bridge *devm_cxl_add_nvdimm_bridge(struct device *host, if (rc) goto err; + cxl_set_lock_class(dev); rc = device_add(dev); if (rc) goto err; @@ -261,6 +262,7 @@ int devm_cxl_add_nvdimm(struct device *host, struct cxl_memdev *cxlmd) if (rc) goto err; + cxl_set_lock_class(dev); rc = device_add(dev); if (rc) goto err; diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c index 437e8a71e5dc..eb450397104b 100644 --- a/drivers/cxl/core/port.c +++ b/drivers/cxl/core/port.c @@ -469,6 +469,7 @@ struct cxl_port *devm_cxl_add_port(struct device *host, struct device *uport, if (rc) goto err; + cxl_set_lock_class(dev); rc = device_add(dev); if (rc) goto err; @@ -1349,6 +1350,7 @@ int cxl_decoder_add_locked(struct cxl_decoder *cxld, int *target_map) if (is_root_decoder(dev)) cxld->platform_res.name = dev_name(dev); + cxl_set_lock_class(dev); return device_add(dev); } EXPORT_SYMBOL_NS_GPL(cxl_decoder_add_locked, CXL); diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index b86aac8cde4f..fddbcb380e84 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -456,5 +456,14 @@ static inline int cxl_lock_class(struct device *dev) else return CXL_ANON_LOCK; } + +static inline void cxl_set_lock_class(struct device *dev) +{ + device_set_lock_class(dev, cxl_lock_class(dev)); +} +#else +static inline void cxl_set_lock_class(struct device *dev) +{ +} #endif #endif /* __CXL_H__ */ From patchwork Wed Apr 13 06:02:05 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dan Williams X-Patchwork-Id: 12811540 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3164EC433EF for ; Wed, 13 Apr 2022 06:02:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232549AbiDMGE2 (ORCPT ); Wed, 13 Apr 2022 02:04:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36128 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231396AbiDMGE0 (ORCPT ); Wed, 13 Apr 2022 02:04:26 -0400 Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 28094515B4; Tue, 12 Apr 2022 23:02:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649829726; x=1681365726; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=77GSFBA3SxqE/U83sHJwXKlAMeIpvS0DlqeYvtg365o=; 
Subject: [PATCH v2 07/12] cxl/acpi: Add a device_lock() lock class for the root platform device
From: Dan Williams
To: linux-cxl@vger.kernel.org
Cc: Alison Schofield, Vishal Verma, Ira Weiny, Ben Widawsky, Dave Jiang,
    Kevin Tian, peterz@infradead.org, gregkh@linuxfoundation.org,
    linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev
Date: Tue, 12 Apr 2022 23:02:05 -0700
Message-ID: <164982972569.684294.15140338434944364472.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com>

Now that the device-core can start validating lockdep usage after the
device has been added, use that capability to validate usage of
device_lock() against the ACPI0017 device relative to other subsystem
locks. The 'enum cxl_lock_class' definition moves outside of the ifdef
guard to support device_set_lock_class() being called from
cxl_acpi_probe().
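Restated from the flattened diff below, the change amounts to one new
entry in 'enum cxl_lock_class' plus one call at the top of
cxl_acpi_probe() (function bodies elided):

  enum cxl_lock_class {
          CXL_ANON_LOCK,
          CXL_ROOT_LOCK,          /* new: the ACPI0017 root platform device */
          CXL_NVDIMM_LOCK,
          CXL_NVDIMM_BRIDGE_LOCK,
          CXL_PORT_LOCK = CXL_NVDIMM_BRIDGE_LOCK,
  };

  static int cxl_acpi_probe(struct platform_device *pdev)
  {
          ...
          /* annotate the root before descendant ports nest under it */
          device_set_lock_class(&pdev->dev, CXL_ROOT_LOCK);
          root_port = devm_cxl_add_port(host, host, CXL_RESOURCE_NONE, NULL);
          if (IS_ERR(root_port))
                  return PTR_ERR(root_port);
          ...
  }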
Cc: Alison Schofield Cc: Vishal Verma Cc: Ira Weiny Cc: Ben Widawsky Reviewed-by: Dave Jiang Reviewed-by: Kevin Tian Signed-off-by: Dan Williams --- drivers/cxl/acpi.c | 1 + drivers/cxl/cxl.h | 3 ++- 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c index d15a6aec0331..ef5c3252bdb2 100644 --- a/drivers/cxl/acpi.c +++ b/drivers/cxl/acpi.c @@ -283,6 +283,7 @@ static int cxl_acpi_probe(struct platform_device *pdev) struct acpi_device *adev = ACPI_COMPANION(host); struct cxl_cfmws_context ctx; + device_set_lock_class(&pdev->dev, CXL_ROOT_LOCK); root_port = devm_cxl_add_port(host, host, CXL_RESOURCE_NONE, NULL); if (IS_ERR(root_port)) return PTR_ERR(root_port); diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index fddbcb380e84..05dc4c081ad2 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -405,9 +405,9 @@ struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(struct cxl_nvdimm *cxl_nvd); #define __mock static #endif -#ifdef CONFIG_PROVE_CXL_LOCKING enum cxl_lock_class { CXL_ANON_LOCK, + CXL_ROOT_LOCK, CXL_NVDIMM_LOCK, CXL_NVDIMM_BRIDGE_LOCK, /* @@ -423,6 +423,7 @@ enum cxl_lock_class { */ }; +#ifdef CONFIG_PROVE_CXL_LOCKING static inline int clamp_lock_class(struct device *dev, int lock_class) { if (lock_class >= MAX_LOCKDEP_SUBCLASSES) { From patchwork Wed Apr 13 06:02:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dan Williams X-Patchwork-Id: 12811541 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DD021C433F5 for ; Wed, 13 Apr 2022 06:02:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232820AbiDMGEg (ORCPT ); Wed, 13 Apr 2022 02:04:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36316 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232817AbiDMGEf (ORCPT ); Wed, 13 Apr 2022 02:04:35 -0400 Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 49F4C51330; Tue, 12 Apr 2022 23:02:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649829731; x=1681365731; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=faqWslrqMKaVyC0Fx4CcmYrY8vRuemnJ7TWhZUeAea8=; b=VJKkowjm710fKNmGlii/Veymc4Tfmw+e7yJb0A1zbXXlmUs1yWdP6UjX R6bniwC8cQuxhiw0p0M9bj9R7fd070HfYpxv9L5NUfAZtreeHU5wkroEP mfwOe+S03nB5dU7zSRDUVDreeHO4xNNtwFS/04DUKDc431WorYi2G+xwU GGAhP0963VQehuVNAHJAdOP2maWiuiov46U3znTPl/VPARRtDsBRk/gRQ 2DVFUbGR+WXwJU+x5VQKIkaGhe+JT8UkRxUrIAumg8oSQzi4QTJJ2WP7O 0+oNNvDL55FFEJ+W+ag+fslrdXl6Crkntl1lRImz5pG/p9rIWEfNWIq1r A==; X-IronPort-AV: E=McAfee;i="6400,9594,10315"; a="349020019" X-IronPort-AV: E=Sophos;i="5.90,255,1643702400"; d="scan'208";a="349020019" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Apr 2022 23:02:11 -0700 X-IronPort-AV: E=Sophos;i="5.90,256,1643702400"; d="scan'208";a="590632349" Received: from dwillia2-desk3.jf.intel.com (HELO dwillia2-desk3.amr.corp.intel.com) ([10.54.39.25]) by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Apr 2022 23:02:10 -0700 Subject: [PATCH v2 08/12] libnvdimm: Refactor an 
nvdimm_lock_class() helper From: Dan Williams To: linux-cxl@vger.kernel.org Cc: Vishal Verma , Dave Jiang , Ira Weiny , Dave Jiang , Kevin Tian , peterz@infradead.org, alison.schofield@intel.com, gregkh@linuxfoundation.org, linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev Date: Tue, 12 Apr 2022 23:02:10 -0700 Message-ID: <164982973080.684294.10727665061649724835.stgit@dwillia2-desk3.amr.corp.intel.com> In-Reply-To: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com> References: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com> User-Agent: StGit/0.18-3-g996c MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org In preparation for moving to the device-core device_lock lockdep validation, refactor an nvdimm_lock_class() helper to be used with device_set_lock_class(). Cc: Vishal Verma Cc: Dave Jiang Cc: Ira Weiny Reviewed-by: Dave Jiang Reviewed-by: Kevin Tian Signed-off-by: Dan Williams --- drivers/nvdimm/nd-core.h | 21 +++++++++++++-------- 1 file changed, 13 insertions(+), 8 deletions(-) diff --git a/drivers/nvdimm/nd-core.h b/drivers/nvdimm/nd-core.h index 448f9dcb4bb7..deb3d047571e 100644 --- a/drivers/nvdimm/nd-core.h +++ b/drivers/nvdimm/nd-core.h @@ -174,22 +174,27 @@ enum { LOCK_CLAIM, }; -static inline void debug_nvdimm_lock(struct device *dev) +static inline int nvdimm_lock_class(struct device *dev) { if (is_nd_region(dev)) - mutex_lock_nested(&dev->lockdep_mutex, LOCK_REGION); + return LOCK_REGION; else if (is_nvdimm(dev)) - mutex_lock_nested(&dev->lockdep_mutex, LOCK_DIMM); + return LOCK_DIMM; else if (is_nd_btt(dev) || is_nd_pfn(dev) || is_nd_dax(dev)) - mutex_lock_nested(&dev->lockdep_mutex, LOCK_CLAIM); + return LOCK_CLAIM; else if (dev->parent && (is_nd_region(dev->parent))) - mutex_lock_nested(&dev->lockdep_mutex, LOCK_NAMESPACE); + return LOCK_NAMESPACE; else if (is_nvdimm_bus(dev)) - mutex_lock_nested(&dev->lockdep_mutex, LOCK_BUS); + return LOCK_BUS; else if (dev->class && dev->class == nd_class) - mutex_lock_nested(&dev->lockdep_mutex, LOCK_NDCTL); + return LOCK_NDCTL; else - dev_WARN(dev, "unknown lock level\n"); + return -1; +} + +static inline void debug_nvdimm_lock(struct device *dev) +{ + mutex_lock_nested(&dev->lockdep_mutex, nvdimm_lock_class(dev)); } static inline void debug_nvdimm_unlock(struct device *dev) From patchwork Wed Apr 13 06:02:15 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dan Williams X-Patchwork-Id: 12811542 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 28F70C433F5 for ; Wed, 13 Apr 2022 06:02:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229717AbiDMGEr (ORCPT ); Wed, 13 Apr 2022 02:04:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36368 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232824AbiDMGEg (ORCPT ); Wed, 13 Apr 2022 02:04:36 -0400 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8F42C517D8; Tue, 12 Apr 2022 23:02:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649829736; x=1681365736; h=subject:from:to:cc:date:message-id:in-reply-to: 
Subject: [PATCH v2 09/12] ACPI: NFIT: Drop nfit_device_lock()
From: Dan Williams
To: linux-cxl@vger.kernel.org
Cc: Vishal Verma, Dave Jiang, Ira Weiny, "Rafael J. Wysocki", Dave Jiang,
    Kevin Tian, peterz@infradead.org, alison.schofield@intel.com,
    gregkh@linuxfoundation.org, linux-kernel@vger.kernel.org,
    nvdimm@lists.linux.dev
Date: Tue, 12 Apr 2022 23:02:15 -0700
Message-ID: <164982973591.684294.5429843723059577546.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com>

In preparation for the libnvdimm subsystem switching to device-core common
lockdep validation, delete nfit_device_lock(), which will need to be
replaced with an implementation that specifies a non-zero lock class. Note
that this reverts to the default state of unvalidated device_lock() until
a lock class for ACPI NFIT devices is added.

Cc: Vishal Verma
Cc: Dave Jiang
Cc: Ira Weiny
Cc: "Rafael J.
Wysocki" Reviewed-by: Dave Jiang Reviewed-by: Kevin Tian Signed-off-by: Dan Williams --- drivers/acpi/nfit/core.c | 30 +++++++++++++++--------------- drivers/acpi/nfit/nfit.h | 24 ------------------------ 2 files changed, 15 insertions(+), 39 deletions(-) diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c index fe61f617a943..ae5f4acf2675 100644 --- a/drivers/acpi/nfit/core.c +++ b/drivers/acpi/nfit/core.c @@ -1230,7 +1230,7 @@ static ssize_t hw_error_scrub_store(struct device *dev, if (rc) return rc; - nfit_device_lock(dev); + device_lock(dev); nd_desc = dev_get_drvdata(dev); if (nd_desc) { struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc); @@ -1247,7 +1247,7 @@ static ssize_t hw_error_scrub_store(struct device *dev, break; } } - nfit_device_unlock(dev); + device_unlock(dev); if (rc) return rc; return size; @@ -1267,10 +1267,10 @@ static ssize_t scrub_show(struct device *dev, ssize_t rc = -ENXIO; bool busy; - nfit_device_lock(dev); + device_lock(dev); nd_desc = dev_get_drvdata(dev); if (!nd_desc) { - nfit_device_unlock(dev); + device_unlock(dev); return rc; } acpi_desc = to_acpi_desc(nd_desc); @@ -1287,7 +1287,7 @@ static ssize_t scrub_show(struct device *dev, } mutex_unlock(&acpi_desc->init_mutex); - nfit_device_unlock(dev); + device_unlock(dev); return rc; } @@ -1304,14 +1304,14 @@ static ssize_t scrub_store(struct device *dev, if (val != 1) return -EINVAL; - nfit_device_lock(dev); + device_lock(dev); nd_desc = dev_get_drvdata(dev); if (nd_desc) { struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc); rc = acpi_nfit_ars_rescan(acpi_desc, ARS_REQ_LONG); } - nfit_device_unlock(dev); + device_unlock(dev); if (rc) return rc; return size; @@ -1697,9 +1697,9 @@ static void acpi_nvdimm_notify(acpi_handle handle, u32 event, void *data) struct acpi_device *adev = data; struct device *dev = &adev->dev; - nfit_device_lock(dev->parent); + device_lock(dev->parent); __acpi_nvdimm_notify(dev, event); - nfit_device_unlock(dev->parent); + device_unlock(dev->parent); } static bool acpi_nvdimm_has_method(struct acpi_device *adev, char *method) @@ -3152,8 +3152,8 @@ static int acpi_nfit_flush_probe(struct nvdimm_bus_descriptor *nd_desc) struct device *dev = acpi_desc->dev; /* Bounce the device lock to flush acpi_nfit_add / acpi_nfit_notify */ - nfit_device_lock(dev); - nfit_device_unlock(dev); + device_lock(dev); + device_unlock(dev); /* Bounce the init_mutex to complete initial registration */ mutex_lock(&acpi_desc->init_mutex); @@ -3305,8 +3305,8 @@ void acpi_nfit_shutdown(void *data) * acpi_nfit_ars_rescan() submissions have had a chance to * either submit or see ->cancel set. 
*/ - nfit_device_lock(bus_dev); - nfit_device_unlock(bus_dev); + device_lock(bus_dev); + device_unlock(bus_dev); flush_workqueue(nfit_wq); } @@ -3449,9 +3449,9 @@ EXPORT_SYMBOL_GPL(__acpi_nfit_notify); static void acpi_nfit_notify(struct acpi_device *adev, u32 event) { - nfit_device_lock(&adev->dev); + device_lock(&adev->dev); __acpi_nfit_notify(&adev->dev, adev->handle, event); - nfit_device_unlock(&adev->dev); + device_unlock(&adev->dev); } static const struct acpi_device_id acpi_nfit_ids[] = { diff --git a/drivers/acpi/nfit/nfit.h b/drivers/acpi/nfit/nfit.h index 50882bdbeb96..6023ad61831a 100644 --- a/drivers/acpi/nfit/nfit.h +++ b/drivers/acpi/nfit/nfit.h @@ -337,30 +337,6 @@ static inline struct acpi_nfit_desc *to_acpi_desc( return container_of(nd_desc, struct acpi_nfit_desc, nd_desc); } -#ifdef CONFIG_PROVE_LOCKING -static inline void nfit_device_lock(struct device *dev) -{ - device_lock(dev); - mutex_lock(&dev->lockdep_mutex); -} - -static inline void nfit_device_unlock(struct device *dev) -{ - mutex_unlock(&dev->lockdep_mutex); - device_unlock(dev); -} -#else -static inline void nfit_device_lock(struct device *dev) -{ - device_lock(dev); -} - -static inline void nfit_device_unlock(struct device *dev) -{ - device_unlock(dev); -} -#endif - const guid_t *to_nfit_uuid(enum nfit_uuids id); int acpi_nfit_init(struct acpi_nfit_desc *acpi_desc, void *nfit, acpi_size sz); void acpi_nfit_shutdown(void *data); From patchwork Wed Apr 13 06:02:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dan Williams X-Patchwork-Id: 12811544 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 26870C433EF for ; Wed, 13 Apr 2022 06:02:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232841AbiDMGEy (ORCPT ); Wed, 13 Apr 2022 02:04:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36808 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232844AbiDMGEt (ORCPT ); Wed, 13 Apr 2022 02:04:49 -0400 Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F36AF515AC; Tue, 12 Apr 2022 23:02:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649829742; x=1681365742; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=wYPZp09bL2aR6352eI2z1PZ2IcGJlP4QHePKX+aDpoc=; b=HL+vLX3ZizccylcbZcVfgT6Y+MPuUYsFXPh1UdzvDIH9mNS7SoA+YXpV pDca0B+cAzE5d9llCLoplbHPKch++kaLBW0m6oknyc9xR0vhVdAYdr34P +CmCU3ebPOIvjA6bCo1PwQT3AwNZclccjLUjHxcLUylRhdU1wwGYWcv7T riKqcOQLOwHpCBv1p9u80Har5wKRfPsps6mWN3fKWxu6oTliBghhzOXzP /8TrQ0lnJwKNKteK5UFCnVCLkuL7YSxuCQ9jmlMTs0pi+Mj+nPqA8QaPk wUYjOmR6MmjxKd20ZAB0nCf7EBTHqCkcsBwy3qMLRSLXtmBp/D/TtZy3M Q==; X-IronPort-AV: E=McAfee;i="6400,9594,10315"; a="244469641" X-IronPort-AV: E=Sophos;i="5.90,256,1643702400"; d="scan'208";a="244469641" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Apr 2022 23:02:21 -0700 X-IronPort-AV: E=Sophos;i="5.90,256,1643702400"; d="scan'208";a="507854482" Received: from dwillia2-desk3.jf.intel.com (HELO dwillia2-desk3.amr.corp.intel.com) ([10.54.39.25]) by orsmga003-auth.jf.intel.com 
with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Apr 2022 23:02:21 -0700 Subject: [PATCH v2 10/12] libnvdimm: Drop nd_device_lock() From: Dan Williams To: linux-cxl@vger.kernel.org Cc: Vishal Verma , Dave Jiang , Ira Weiny , Dave Jiang , Kevin Tian , peterz@infradead.org, alison.schofield@intel.com, gregkh@linuxfoundation.org, linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev Date: Tue, 12 Apr 2022 23:02:21 -0700 Message-ID: <164982974139.684294.10694905184257281907.stgit@dwillia2-desk3.amr.corp.intel.com> In-Reply-To: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com> References: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com> User-Agent: StGit/0.18-3-g996c MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org In preparation for switching to the core device_lock lockdep validation scheme, convert nd_device_lock() calls back to device_lock(). Note that this temporarily reverts the code back to the typical no validation of device_lock() until a follow-on patch to set dev->lock_class. Cc: Vishal Verma Cc: Dave Jiang Cc: Ira Weiny Reviewed-by: Dave Jiang Reviewed-by: Kevin Tian Signed-off-by: Dan Williams --- drivers/nvdimm/btt_devs.c | 16 ++++++++-------- drivers/nvdimm/bus.c | 23 +++++++++------------- drivers/nvdimm/core.c | 10 +++++----- drivers/nvdimm/dimm_devs.c | 8 ++++---- drivers/nvdimm/namespace_devs.c | 36 ++++++++++++++++++----------------- drivers/nvdimm/nd-core.h | 40 --------------------------------------- drivers/nvdimm/pfn_devs.c | 24 ++++++++++++----------- drivers/nvdimm/pmem.c | 2 +- drivers/nvdimm/region.c | 2 +- drivers/nvdimm/region_devs.c | 16 ++++++++-------- lib/Kconfig.debug | 3 ++- 11 files changed, 68 insertions(+), 112 deletions(-) diff --git a/drivers/nvdimm/btt_devs.c b/drivers/nvdimm/btt_devs.c index e5a58520d398..60c19b988bc4 100644 --- a/drivers/nvdimm/btt_devs.c +++ b/drivers/nvdimm/btt_devs.c @@ -50,14 +50,14 @@ static ssize_t sector_size_store(struct device *dev, struct nd_btt *nd_btt = to_nd_btt(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); rc = nd_size_select_store(dev, buf, &nd_btt->lbasize, btt_lbasize_supported); dev_dbg(dev, "result: %zd wrote: %s%s", rc, buf, buf[len - 1] == '\n' ? "" : "\n"); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc ? rc : len; } @@ -79,11 +79,11 @@ static ssize_t uuid_store(struct device *dev, struct nd_btt *nd_btt = to_nd_btt(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); rc = nd_uuid_store(dev, &nd_btt->uuid, buf, len); dev_dbg(dev, "result: %zd wrote: %s%s", rc, buf, buf[len - 1] == '\n' ? "" : "\n"); - nd_device_unlock(dev); + device_unlock(dev); return rc ? rc : len; } @@ -108,13 +108,13 @@ static ssize_t namespace_store(struct device *dev, struct nd_btt *nd_btt = to_nd_btt(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); rc = nd_namespace_store(dev, &nd_btt->ndns, buf, len); dev_dbg(dev, "result: %zd wrote: %s%s", rc, buf, buf[len - 1] == '\n' ? 
"" : "\n"); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc; } @@ -126,14 +126,14 @@ static ssize_t size_show(struct device *dev, struct nd_btt *nd_btt = to_nd_btt(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); if (dev->driver) rc = sprintf(buf, "%llu\n", nd_btt->size); else { /* no size to convey if the btt instance is disabled */ rc = -ENXIO; } - nd_device_unlock(dev); + device_unlock(dev); return rc; } diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c index 7b0d1443217a..b5a1317c30dd 100644 --- a/drivers/nvdimm/bus.c +++ b/drivers/nvdimm/bus.c @@ -88,9 +88,7 @@ static int nvdimm_bus_probe(struct device *dev) dev->driver->name, dev_name(dev)); nvdimm_bus_probe_start(nvdimm_bus); - debug_nvdimm_lock(dev); rc = nd_drv->probe(dev); - debug_nvdimm_unlock(dev); if ((rc == 0 || rc == -EOPNOTSUPP) && dev->parent && is_nd_region(dev->parent)) @@ -111,11 +109,8 @@ static void nvdimm_bus_remove(struct device *dev) struct module *provider = to_bus_provider(dev); struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(dev); - if (nd_drv->remove) { - debug_nvdimm_lock(dev); + if (nd_drv->remove) nd_drv->remove(dev); - debug_nvdimm_unlock(dev); - } dev_dbg(&nvdimm_bus->dev, "%s.remove(%s)\n", dev->driver->name, dev_name(dev)); @@ -139,7 +134,7 @@ static void nvdimm_bus_shutdown(struct device *dev) void nd_device_notify(struct device *dev, enum nvdimm_event event) { - nd_device_lock(dev); + device_lock(dev); if (dev->driver) { struct nd_device_driver *nd_drv; @@ -147,7 +142,7 @@ void nd_device_notify(struct device *dev, enum nvdimm_event event) if (nd_drv->notify) nd_drv->notify(dev, event); } - nd_device_unlock(dev); + device_unlock(dev); } EXPORT_SYMBOL(nd_device_notify); @@ -572,9 +567,9 @@ void nd_device_unregister(struct device *dev, enum nd_async_mode mode) * or otherwise let the async path handle it if the * unregistration was already queued. 
*/ - nd_device_lock(dev); + device_lock(dev); killed = kill_device(dev); - nd_device_unlock(dev); + device_unlock(dev); if (!killed) return; @@ -930,10 +925,10 @@ void wait_nvdimm_bus_probe_idle(struct device *dev) if (nvdimm_bus->probe_active == 0) break; nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); wait_event(nvdimm_bus->wait, nvdimm_bus->probe_active == 0); - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); } while (true); } @@ -1167,7 +1162,7 @@ static int __nd_ioctl(struct nvdimm_bus *nvdimm_bus, struct nvdimm *nvdimm, goto out; } - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); rc = nd_cmd_clear_to_send(nvdimm_bus, nvdimm, func, buf); if (rc) @@ -1189,7 +1184,7 @@ static int __nd_ioctl(struct nvdimm_bus *nvdimm_bus, struct nvdimm *nvdimm, out_unlock: nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); out: kfree(in_env); kfree(out_env); diff --git a/drivers/nvdimm/core.c b/drivers/nvdimm/core.c index 69a03358817f..144926b7451c 100644 --- a/drivers/nvdimm/core.c +++ b/drivers/nvdimm/core.c @@ -215,7 +215,7 @@ EXPORT_SYMBOL_GPL(to_nvdimm_bus_dev); * * Enforce that uuids can only be changed while the device is disabled * (driver detached) - * LOCKING: expects nd_device_lock() is held on entry + * LOCKING: expects device_lock() is held on entry */ int nd_uuid_store(struct device *dev, uuid_t **uuid_out, const char *buf, size_t len) @@ -316,15 +316,15 @@ static DEVICE_ATTR_RO(provider); static int flush_namespaces(struct device *dev, void *data) { - nd_device_lock(dev); - nd_device_unlock(dev); + device_lock(dev); + device_unlock(dev); return 0; } static int flush_regions_dimms(struct device *dev, void *data) { - nd_device_lock(dev); - nd_device_unlock(dev); + device_lock(dev); + device_unlock(dev); device_for_each_child(dev, NULL, flush_namespaces); return 0; } diff --git a/drivers/nvdimm/dimm_devs.c b/drivers/nvdimm/dimm_devs.c index ee507eed42b5..93113c0c4dc8 100644 --- a/drivers/nvdimm/dimm_devs.c +++ b/drivers/nvdimm/dimm_devs.c @@ -341,9 +341,9 @@ static ssize_t available_slots_show(struct device *dev, { ssize_t rc; - nd_device_lock(dev); + device_lock(dev); rc = __available_slots_show(dev_get_drvdata(dev), buf); - nd_device_unlock(dev); + device_unlock(dev); return rc; } @@ -386,12 +386,12 @@ static ssize_t security_store(struct device *dev, * done while probing is idle and the DIMM is not in active use * in any region. */ - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); wait_nvdimm_bus_probe_idle(dev); rc = nvdimm_security_store(dev, buf, len); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc; } diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c index 62b83b2e26e3..bc578dbea472 100644 --- a/drivers/nvdimm/namespace_devs.c +++ b/drivers/nvdimm/namespace_devs.c @@ -264,7 +264,7 @@ static ssize_t alt_name_store(struct device *dev, struct nd_region *nd_region = to_nd_region(dev->parent); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); wait_nvdimm_bus_probe_idle(dev); rc = __alt_name_store(dev, buf, len); @@ -272,7 +272,7 @@ static ssize_t alt_name_store(struct device *dev, rc = nd_namespace_label_update(nd_region, dev); dev_dbg(dev, "%s(%zd)\n", rc < 0 ? "fail " : "", rc); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc < 0 ? 
rc : len; } @@ -846,7 +846,7 @@ static ssize_t size_store(struct device *dev, if (rc) return rc; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); wait_nvdimm_bus_probe_idle(dev); rc = __size_store(dev, val); @@ -868,7 +868,7 @@ static ssize_t size_store(struct device *dev, dev_dbg(dev, "%llx %s (%d)\n", val, rc < 0 ? "fail" : "success", rc); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc < 0 ? rc : len; } @@ -1043,7 +1043,7 @@ static ssize_t uuid_store(struct device *dev, } else return -ENXIO; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); wait_nvdimm_bus_probe_idle(dev); if (to_ndns(dev)->claim) @@ -1059,7 +1059,7 @@ static ssize_t uuid_store(struct device *dev, dev_dbg(dev, "result: %zd wrote: %s%s", rc, buf, buf[len - 1] == '\n' ? "" : "\n"); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc < 0 ? rc : len; } @@ -1118,7 +1118,7 @@ static ssize_t sector_size_store(struct device *dev, } else return -ENXIO; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); if (to_ndns(dev)->claim) rc = -EBUSY; @@ -1129,7 +1129,7 @@ static ssize_t sector_size_store(struct device *dev, dev_dbg(dev, "result: %zd %s: %s%s", rc, rc < 0 ? "tried" : "wrote", buf, buf[len - 1] == '\n' ? "" : "\n"); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc ? rc : len; } @@ -1239,9 +1239,9 @@ static ssize_t holder_show(struct device *dev, struct nd_namespace_common *ndns = to_ndns(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); rc = sprintf(buf, "%s\n", ndns->claim ? dev_name(ndns->claim) : ""); - nd_device_unlock(dev); + device_unlock(dev); return rc; } @@ -1278,7 +1278,7 @@ static ssize_t holder_class_store(struct device *dev, struct nd_region *nd_region = to_nd_region(dev->parent); int rc; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); wait_nvdimm_bus_probe_idle(dev); rc = __holder_class_store(dev, buf); @@ -1286,7 +1286,7 @@ static ssize_t holder_class_store(struct device *dev, rc = nd_namespace_label_update(nd_region, dev); dev_dbg(dev, "%s(%d)\n", rc < 0 ? "fail " : "", rc); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc < 0 ? rc : len; } @@ -1297,7 +1297,7 @@ static ssize_t holder_class_show(struct device *dev, struct nd_namespace_common *ndns = to_ndns(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); if (ndns->claim_class == NVDIMM_CCLASS_NONE) rc = sprintf(buf, "\n"); else if ((ndns->claim_class == NVDIMM_CCLASS_BTT) || @@ -1309,7 +1309,7 @@ static ssize_t holder_class_show(struct device *dev, rc = sprintf(buf, "dax\n"); else rc = sprintf(buf, "\n"); - nd_device_unlock(dev); + device_unlock(dev); return rc; } @@ -1323,7 +1323,7 @@ static ssize_t mode_show(struct device *dev, char *mode; ssize_t rc; - nd_device_lock(dev); + device_lock(dev); claim = ndns->claim; if (claim && is_nd_btt(claim)) mode = "safe"; @@ -1336,7 +1336,7 @@ static ssize_t mode_show(struct device *dev, else mode = "raw"; rc = sprintf(buf, "%s\n", mode); - nd_device_unlock(dev); + device_unlock(dev); return rc; } @@ -1456,8 +1456,8 @@ struct nd_namespace_common *nvdimm_namespace_common_probe(struct device *dev) * Flush any in-progess probes / removals in the driver * for the raw personality of this namespace. 
*/ - nd_device_lock(&ndns->dev); - nd_device_unlock(&ndns->dev); + device_lock(&ndns->dev); + device_unlock(&ndns->dev); if (ndns->dev.driver) { dev_dbg(&ndns->dev, "is active, can't bind %s\n", dev_name(dev)); diff --git a/drivers/nvdimm/nd-core.h b/drivers/nvdimm/nd-core.h index deb3d047571e..1668a10e41ba 100644 --- a/drivers/nvdimm/nd-core.h +++ b/drivers/nvdimm/nd-core.h @@ -191,45 +191,5 @@ static inline int nvdimm_lock_class(struct device *dev) else return -1; } - -static inline void debug_nvdimm_lock(struct device *dev) -{ - mutex_lock_nested(&dev->lockdep_mutex, nvdimm_lock_class(dev)); -} - -static inline void debug_nvdimm_unlock(struct device *dev) -{ - mutex_unlock(&dev->lockdep_mutex); -} - -static inline void nd_device_lock(struct device *dev) -{ - device_lock(dev); - debug_nvdimm_lock(dev); -} - -static inline void nd_device_unlock(struct device *dev) -{ - debug_nvdimm_unlock(dev); - device_unlock(dev); -} -#else -static inline void nd_device_lock(struct device *dev) -{ - device_lock(dev); -} - -static inline void nd_device_unlock(struct device *dev) -{ - device_unlock(dev); -} - -static inline void debug_nvdimm_lock(struct device *dev) -{ -} - -static inline void debug_nvdimm_unlock(struct device *dev) -{ -} #endif #endif /* __ND_CORE_H__ */ diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c index c31e184bfa45..1672d139f708 100644 --- a/drivers/nvdimm/pfn_devs.c +++ b/drivers/nvdimm/pfn_devs.c @@ -55,7 +55,7 @@ static ssize_t mode_store(struct device *dev, struct nd_pfn *nd_pfn = to_nd_pfn_safe(dev); ssize_t rc = 0; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); if (dev->driver) rc = -EBUSY; @@ -77,7 +77,7 @@ static ssize_t mode_store(struct device *dev, dev_dbg(dev, "result: %zd wrote: %s%s", rc, buf, buf[len - 1] == '\n' ? "" : "\n"); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc ? rc : len; } @@ -123,14 +123,14 @@ static ssize_t align_store(struct device *dev, unsigned long aligns[MAX_NVDIMM_ALIGN] = { [0] = 0, }; ssize_t rc; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); rc = nd_size_select_store(dev, buf, &nd_pfn->align, nd_pfn_supported_alignments(aligns)); dev_dbg(dev, "result: %zd wrote: %s%s", rc, buf, buf[len - 1] == '\n' ? "" : "\n"); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc ? rc : len; } @@ -152,11 +152,11 @@ static ssize_t uuid_store(struct device *dev, struct nd_pfn *nd_pfn = to_nd_pfn_safe(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); rc = nd_uuid_store(dev, &nd_pfn->uuid, buf, len); dev_dbg(dev, "result: %zd wrote: %s%s", rc, buf, buf[len - 1] == '\n' ? "" : "\n"); - nd_device_unlock(dev); + device_unlock(dev); return rc ? rc : len; } @@ -181,13 +181,13 @@ static ssize_t namespace_store(struct device *dev, struct nd_pfn *nd_pfn = to_nd_pfn_safe(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); rc = nd_namespace_store(dev, &nd_pfn->ndns, buf, len); dev_dbg(dev, "result: %zd wrote: %s%s", rc, buf, buf[len - 1] == '\n' ? 
"" : "\n"); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc; } @@ -199,7 +199,7 @@ static ssize_t resource_show(struct device *dev, struct nd_pfn *nd_pfn = to_nd_pfn_safe(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); if (dev->driver) { struct nd_pfn_sb *pfn_sb = nd_pfn->pfn_sb; u64 offset = __le64_to_cpu(pfn_sb->dataoff); @@ -213,7 +213,7 @@ static ssize_t resource_show(struct device *dev, /* no address to convey if the pfn instance is disabled */ rc = -ENXIO; } - nd_device_unlock(dev); + device_unlock(dev); return rc; } @@ -225,7 +225,7 @@ static ssize_t size_show(struct device *dev, struct nd_pfn *nd_pfn = to_nd_pfn_safe(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); if (dev->driver) { struct nd_pfn_sb *pfn_sb = nd_pfn->pfn_sb; u64 offset = __le64_to_cpu(pfn_sb->dataoff); @@ -241,7 +241,7 @@ static ssize_t size_show(struct device *dev, /* no size to convey if the pfn instance is disabled */ rc = -ENXIO; } - nd_device_unlock(dev); + device_unlock(dev); return rc; } diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c index 58d95242a836..3992521c151f 100644 --- a/drivers/nvdimm/pmem.c +++ b/drivers/nvdimm/pmem.c @@ -573,7 +573,7 @@ static void nd_pmem_remove(struct device *dev) nvdimm_namespace_detach_btt(to_nd_btt(dev)); else { /* - * Note, this assumes nd_device_lock() context to not + * Note, this assumes device_lock() context to not * race nd_pmem_notify() */ sysfs_put(pmem->bb_state); diff --git a/drivers/nvdimm/region.c b/drivers/nvdimm/region.c index 188560b1c110..390123d293ea 100644 --- a/drivers/nvdimm/region.c +++ b/drivers/nvdimm/region.c @@ -95,7 +95,7 @@ static void nd_region_remove(struct device *dev) nvdimm_bus_unlock(dev); /* - * Note, this assumes nd_device_lock() context to not race + * Note, this assumes device_lock() context to not race * nd_region_notify() */ sysfs_put(nd_region->bb_state); diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c index 0cb274c2b508..83c1ce7f26a6 100644 --- a/drivers/nvdimm/region_devs.c +++ b/drivers/nvdimm/region_devs.c @@ -279,7 +279,7 @@ static ssize_t set_cookie_show(struct device *dev, * the v1.1 namespace label cookie definition. To read all this * data we need to wait for probing to settle. */ - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); wait_nvdimm_bus_probe_idle(dev); if (nd_region->ndr_mappings) { @@ -296,7 +296,7 @@ static ssize_t set_cookie_show(struct device *dev, } } nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); if (rc) return rc; @@ -353,12 +353,12 @@ static ssize_t available_size_show(struct device *dev, * memory nvdimm_bus_lock() is dropped, but that's userspace's * problem to not race itself. 
*/ - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); wait_nvdimm_bus_probe_idle(dev); available = nd_region_available_dpa(nd_region); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return sprintf(buf, "%llu\n", available); } @@ -370,12 +370,12 @@ static ssize_t max_available_extent_show(struct device *dev, struct nd_region *nd_region = to_nd_region(dev); unsigned long long available = 0; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); wait_nvdimm_bus_probe_idle(dev); available = nd_region_allocatable_dpa(nd_region); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return sprintf(buf, "%llu\n", available); } @@ -549,12 +549,12 @@ static ssize_t region_badblocks_show(struct device *dev, struct nd_region *nd_region = to_nd_region(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); if (dev->driver) rc = badblocks_show(&nd_region->bb, buf, 0); else rc = -ENXIO; - nd_device_unlock(dev); + device_unlock(dev); return rc; } diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 333d41da7621..1f8db59bdddd 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -1557,7 +1557,8 @@ config PROVE_NVDIMM_LOCKING bool "NVDIMM" depends on LIBNVDIMM help - Enable lockdep to validate nd_device_lock() usage. + Enable lockdep to validate libnvdimm subsystem usage of the + device lock. config PROVE_CXL_LOCKING bool "CXL" From patchwork Wed Apr 13 06:02:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dan Williams X-Patchwork-Id: 12811545 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D4AE0C433F5 for ; Wed, 13 Apr 2022 06:02:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232831AbiDMGE6 (ORCPT ); Wed, 13 Apr 2022 02:04:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36660 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232874AbiDMGEu (ORCPT ); Wed, 13 Apr 2022 02:04:50 -0400 Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 65A4651E7A; Tue, 12 Apr 2022 23:02:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649829747; x=1681365747; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=aSeTDj0W2XFMu9PYXuSRlRhFrP+VndjIlxTuiHIALMw=; b=SKwx41aZnak/CemJBqi0v27y+Yn+IJoU1Uti9b6pzmZY0e2jVg3fFVLB 4ZqYgLZmJJPPtXAChgFtOxLGoAzGYAOCkQcBKejcexsflpTn0s7xhATnF Eo/C2x9NJt5bvjrBskEdPja5HNEYfVTWS+C4dVRcZ1MuYcuMwI3zDX9IA ErRdESww5uoEWWi5Qh84ze3t30iJvPfeavY02M9IOx2jjb5aHVwU46Vw9 rt1Q/BcPGOwD3ZSFOCn325DRPMcYD4vuEsOn+Nn/S+a0NDS/BGD6akrFh nprV+n1d1JhHxpbEYSQHmS0bIADolAO9OM4wpvf8OrrCJf9dD3nKtV4Od A==; X-IronPort-AV: E=McAfee;i="6400,9594,10315"; a="260180234" X-IronPort-AV: E=Sophos;i="5.90,255,1643702400"; d="scan'208";a="260180234" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Apr 2022 23:02:27 -0700 X-IronPort-AV: E=Sophos;i="5.90,256,1643702400"; d="scan'208";a="552064642" Received: from dwillia2-desk3.jf.intel.com (HELO dwillia2-desk3.amr.corp.intel.com) ([10.54.39.25]) by orsmga007-auth.jf.intel.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Apr 2022 23:02:26 -0700 Subject: [PATCH v2 11/12] libnvdimm: Enable lockdep validation From: Dan Williams To: linux-cxl@vger.kernel.org Cc: Vishal Verma , Dave Jiang , Ira Weiny , Dave Jiang , Kevin Tian , peterz@infradead.org, alison.schofield@intel.com, gregkh@linuxfoundation.org, linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev Date: Tue, 12 Apr 2022 23:02:26 -0700 Message-ID: <164982974651.684294.17717238362306260099.stgit@dwillia2-desk3.amr.corp.intel.com> In-Reply-To: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com> References: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com> User-Agent: StGit/0.18-3-g996c MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org Register libnvdimm subsystem devices with a non-zero lock_class to enable the device-core to track lock dependencies. Cc: Vishal Verma Cc: Dave Jiang Cc: Ira Weiny Reviewed-by: Dave Jiang Reviewed-by: Kevin Tian Signed-off-by: Dan Williams --- drivers/nvdimm/bus.c | 3 +++ drivers/nvdimm/nd-core.h | 9 +++++++++ 2 files changed, 12 insertions(+) diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c index b5a1317c30dd..f2ae6825f533 100644 --- a/drivers/nvdimm/bus.c +++ b/drivers/nvdimm/bus.c @@ -360,6 +360,7 @@ struct nvdimm_bus *nvdimm_bus_register(struct device *parent, if (rc) goto err; + nvdimm_set_lock_class(&nvdimm_bus->dev); rc = device_add(&nvdimm_bus->dev); if (rc) { dev_dbg(&nvdimm_bus->dev, "registration failed: %d\n", rc); @@ -485,6 +486,7 @@ static void nd_async_device_register(void *d, async_cookie_t cookie) { struct device *dev = d; + nvdimm_set_lock_class(dev); if (device_add(dev) != 0) { dev_err(dev, "%s: failed\n", __func__); put_device(dev); @@ -738,6 +740,7 @@ int nvdimm_bus_create_ndctl(struct nvdimm_bus *nvdimm_bus) if (rc) goto err; + nvdimm_set_lock_class(dev); rc = device_add(dev); if (rc) { dev_dbg(&nvdimm_bus->dev, "failed to register ndctl%d: %d\n", diff --git a/drivers/nvdimm/nd-core.h b/drivers/nvdimm/nd-core.h index 1668a10e41ba..75892e43b2c9 100644 --- a/drivers/nvdimm/nd-core.h +++ b/drivers/nvdimm/nd-core.h @@ -191,5 +191,14 @@ static inline int nvdimm_lock_class(struct device *dev) else return -1; } + +static inline void nvdimm_set_lock_class(struct device *dev) +{ + device_set_lock_class(dev, nvdimm_lock_class(dev)); +} +#else +static inline void nvdimm_set_lock_class(struct device *dev) +{ +} #endif #endif /* __ND_CORE_H__ */ From patchwork Wed Apr 13 06:02:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dan Williams X-Patchwork-Id: 12811547 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3C5C2C433F5 for ; Wed, 13 Apr 2022 06:02:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232846AbiDMGFJ (ORCPT ); Wed, 13 Apr 2022 02:05:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36878 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232866AbiDMGEy (ORCPT ); Wed, 13 Apr 2022 02:04:54 -0400 Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BE85B517FA; Tue, 12 Apr 2022 23:02:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649829752; x=1681365752; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=0j1hA1fWHoFpMRJXA9DIC9aEvgsqA/8Z+smjBQbu3OM=; b=dN/FD7pf0u6FUei8E8yEZLuEA6Myijntxo0czz8MI3ZhXEhMKCFUGzPD kfRum4LtzqLMUyaWBfKrLt4P2YLtW5Le+9f/1CiIYhTQKj9cyGVZ9q1UQ jyNE3MyENBh2qfmf6qYaJyxnvy9ZTtj3Z+6jKC+NTXJP7xFb2RntNriI9 h5lxdC43oJcuTMb1AiowmA4vBceWve4DtySShD2CqGjaoDIyNrUkDphb/ wDKSJvc6SSEU6GhobZjYO3bXfRAdMEunLtPLDz1awgat0Puxz9xbvwcec IUa8CDCYWGEGMWbNOGb3jr3i0LPsVLahuYZ7r+8JFsyXqAGBmkpXxPj3x Q==; X-IronPort-AV: E=McAfee;i="6400,9594,10315"; a="242519932" X-IronPort-AV: E=Sophos;i="5.90,255,1643702400"; d="scan'208";a="242519932" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Apr 2022 23:02:32 -0700 X-IronPort-AV: E=Sophos;i="5.90,256,1643702400"; d="scan'208";a="611754268" Received: from dwillia2-desk3.jf.intel.com (HELO dwillia2-desk3.amr.corp.intel.com) ([10.54.39.25]) by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Apr 2022 23:02:32 -0700 Subject: [PATCH v2 12/12] device-core: Enable multi-subsystem device_lock() lockdep validation From: Dan Williams To: linux-cxl@vger.kernel.org Cc: Greg Kroah-Hartman , "Rafael J. Wysocki" , Pierre-Louis Bossart , Peter Zijlstra , Ingo Molnar , Will Deacon , Waiman Long , Boqun Feng , Dave Jiang , Kevin Tian , vishal.l.verma@intel.com, alison.schofield@intel.com, gregkh@linuxfoundation.org, linux-kernel@vger.kernel.org, nvdimm@lists.linux.dev Date: Tue, 12 Apr 2022 23:02:31 -0700 Message-ID: <164982975184.684294.8587689380766181427.stgit@dwillia2-desk3.amr.corp.intel.com> In-Reply-To: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com> References: <164982968798.684294.15817853329823976469.stgit@dwillia2-desk3.amr.corp.intel.com> User-Agent: StGit/0.18-3-g996c MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org While device_set_lock_class() allows device_lock() to take a nested lock per the requirements of a given subsystem, it does not allow multiple subsystems to enable lockdep validation at the same time. For example if CXL does: device_set_lock_class(&cxlmd->dev, 1); ...and LIBNVDIMM does: device_set_lock_class(&nvdimm->dev, 1); ...lockdep will treat those as identical locks and flag false positive reports depending on how those 2 subsystems interact. Given that there are not enough available lock class ids for each subsystem to own a slice of the number space, the only other way for lockdep to see that those are different locks is to literally make them different locks. Update device_set_lock_class() to also take a subsystem id which also acts as an index into a new array of lockdep_mutex instances. So now CXL and NVDIMM can do: device_set_lock_class(&cxlmd->dev, DEVICE_LOCK_CXL, 1); device_set_lock_class(&nvdimm->dev, DEVICE_LOCK_NVDIMM, 1); ...and lockdep will manage their dependency chains individually per expectations. Note that the reason the mutex_init() for the various subsystem device-locks is unrolled in device_lockdep_init(), vs a loop over [0 .. DEVICE_LOCK_MAX], is to give each lock a unique @lock_class_key and name in reports. While the approach is not elegant, and requires manual enabling, it seems the best that can be done given lockdep's expectations. Otherwise this validation has already proven useful in keeping device_lock() deadlocks from landing upstream. 
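[Editor's sketch, not part of the patch] With the interface described above, a subsystem annotates a device before registering it: pick the subsystem's slot in lockdep_mutex[] plus a nesting level, then call device_add(). A minimal illustration in kernel C, assuming CONFIG_LIBNVDIMM is enabled (so DEVICE_LOCK_NVDIMM exists) and a caller-chosen class value; example_register() is a hypothetical helper, not code from this series:

    #include <linux/device.h>

    /* Hypothetical: annotate for lockdep, then make the device visible. */
    static int example_register(struct device *dev, int lock_class)
    {
            /*
             * Select this subsystem's lockdep_mutex[] slot and the
             * mutex_lock_nested() subclass used by device_lock();
             * passing -1 as lock_class would disable validation again.
             */
            device_set_lock_class(dev, DEVICE_LOCK_NVDIMM, lock_class);

            return device_add(dev);
    }

This mirrors nvdimm_set_lock_class() from the previous patch; the only user-visible change relative to the single-subsystem interface is the extra subsystem id argument.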
Cc: Greg Kroah-Hartman Cc: "Rafael J. Wysocki" Cc: Pierre-Louis Bossart Cc: Peter Zijlstra Cc: Ingo Molnar Cc: Will Deacon Cc: Waiman Long Cc: Boqun Feng Reviewed-by: Dave Jiang Reviewed-by: Kevin Tian Signed-off-by: Dan Williams --- drivers/cxl/acpi.c | 2 - drivers/cxl/cxl.h | 4 +- drivers/cxl/port.c | 2 - drivers/nvdimm/nd-core.h | 5 +- include/linux/device.h | 95 ++++++++++++++++++++++++++++++++++++---------- lib/Kconfig.debug | 24 ------------ 6 files changed, 82 insertions(+), 50 deletions(-) diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c index ef5c3252bdb2..ed2fdb45b424 100644 --- a/drivers/cxl/acpi.c +++ b/drivers/cxl/acpi.c @@ -283,7 +283,7 @@ static int cxl_acpi_probe(struct platform_device *pdev) struct acpi_device *adev = ACPI_COMPANION(host); struct cxl_cfmws_context ctx; - device_set_lock_class(&pdev->dev, CXL_ROOT_LOCK); + device_set_lock_class(&pdev->dev, DEVICE_LOCK_CXL, CXL_ROOT_LOCK); root_port = devm_cxl_add_port(host, host, CXL_RESOURCE_NONE, NULL); if (IS_ERR(root_port)) return PTR_ERR(root_port); diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 05dc4c081ad2..2528d4ab2e74 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -423,7 +423,7 @@ enum cxl_lock_class { */ }; -#ifdef CONFIG_PROVE_CXL_LOCKING +#ifdef CONFIG_PROVE_LOCKING static inline int clamp_lock_class(struct device *dev, int lock_class) { if (lock_class >= MAX_LOCKDEP_SUBCLASSES) { @@ -460,7 +460,7 @@ static inline int cxl_lock_class(struct device *dev) static inline void cxl_set_lock_class(struct device *dev) { - device_set_lock_class(dev, cxl_lock_class(dev)); + device_set_lock_class(dev, DEVICE_LOCK_CXL, cxl_lock_class(dev)); } #else static inline void cxl_set_lock_class(struct device *dev) diff --git a/drivers/cxl/port.c b/drivers/cxl/port.c index d420da5fc39c..493a195a9f5a 100644 --- a/drivers/cxl/port.c +++ b/drivers/cxl/port.c @@ -17,7 +17,7 @@ * firmware) are managed in this drivers context. Each driver instance * is responsible for tearing down the driver context of immediate * descendant ports. The locking for this is validated by - * CONFIG_PROVE_CXL_LOCKING. + * CONFIG_PROVE_LOCKING. * * The primary service this driver provides is presenting APIs to other * drivers to utilize the decoders, and indicating to userspace (via bind diff --git a/drivers/nvdimm/nd-core.h b/drivers/nvdimm/nd-core.h index 75892e43b2c9..56a2e56e552b 100644 --- a/drivers/nvdimm/nd-core.h +++ b/drivers/nvdimm/nd-core.h @@ -162,7 +162,7 @@ static inline void devm_nsio_disable(struct device *dev, } #endif -#ifdef CONFIG_PROVE_NVDIMM_LOCKING +#ifdef CONFIG_PROVE_LOCKING extern struct class *nd_class; enum { @@ -194,7 +194,8 @@ static inline int nvdimm_lock_class(struct device *dev) static inline void nvdimm_set_lock_class(struct device *dev) { - device_set_lock_class(dev, nvdimm_lock_class(dev)); + device_set_lock_class(dev, DEVICE_LOCK_NVDIMM, + nvdimm_lock_class(dev)); } #else static inline void nvdimm_set_lock_class(struct device *dev) diff --git a/include/linux/device.h b/include/linux/device.h index 6083e757e804..12c675edb28d 100644 --- a/include/linux/device.h +++ b/include/linux/device.h @@ -386,6 +386,17 @@ struct dev_msi_info { #endif }; +enum device_lock_subsys { + DEVICE_LOCK_NONE = -1, +#if IS_ENABLED(CONFIG_LIBNVDIMM) + DEVICE_LOCK_NVDIMM, +#endif +#if IS_ENABLED(CONFIG_CXL_BUS) + DEVICE_LOCK_CXL, +#endif + DEVICE_LOCK_MAX, +}; + /** * struct device - The basic device structure * @parent: The device's "parent" device, the device to which it is attached. 
@@ -400,8 +411,8 @@ struct dev_msi_info { * This identifies the device type and carries type-specific * information. * @mutex: Mutex to synchronize calls to its driver. - * @lockdep_mutex: An optional debug lock that a subsystem can use as a - * peer lock to gain localized lockdep coverage of the device_lock. + * @lockdep_mutex: A set of optional debug locks that subsystems can use as a + * peer lock to @mutex to gain lockdep coverage of device_lock(). * @lock_class: per-subsystem annotated device lock class * @bus: Type of bus device is on. * @driver: Which driver has allocated this @@ -501,8 +512,9 @@ struct device { void *driver_data; /* Driver data, set and get with dev_set_drvdata/dev_get_drvdata */ #ifdef CONFIG_PROVE_LOCKING - struct mutex lockdep_mutex; + struct mutex lockdep_mutex[DEVICE_LOCK_MAX]; int lock_class; + enum device_lock_subsys lock_subsys; #endif struct mutex mutex; /* mutex to synchronize calls to * its driver. @@ -772,45 +784,77 @@ static inline void device_lock_assert(struct device *dev) #ifdef CONFIG_PROVE_LOCKING static inline void device_lockdep_init(struct device *dev) { - mutex_init(&dev->lockdep_mutex); +#if IS_ENABLED(CONFIG_CXL_BUS) + mutex_init(&dev->lockdep_mutex[DEVICE_LOCK_CXL]); +#endif +#if IS_ENABLED(CONFIG_LIBNVDIMM) + mutex_init(&dev->lockdep_mutex[DEVICE_LOCK_NVDIMM]); +#endif + dev->lock_subsys = DEVICE_LOCK_NONE; dev->lock_class = -1; lockdep_set_novalidate_class(&dev->mutex); } +static inline bool subsys_valid(enum device_lock_subsys subsys) +{ + if (DEVICE_LOCK_MAX == 0) + return false; + if (subsys <= DEVICE_LOCK_NONE) + return false; + if (subsys >= DEVICE_LOCK_MAX) + return false; + return true; +} + +static inline struct mutex *device_lockdep_mutex(struct device *dev) +{ + if (!subsys_valid(dev->lock_subsys)) + return NULL; + if (dev->lock_class < 0) + return NULL; + return &dev->lockdep_mutex[dev->lock_subsys]; +} + static inline void device_lock(struct device *dev) { + struct mutex *lockdep_mutex = device_lockdep_mutex(dev); + /* * For double-lock programming errors the kernel will hang * trying to acquire @dev->mutex before lockdep can report the - * problem acquiring @dev->lockdep_mutex, so manually assert - * before that hang. + * problem acquiring @lockdep_mutex, so manually assert before + * that hang. 
*/ - lockdep_assert_not_held(&dev->lockdep_mutex); + if (lockdep_mutex) + lockdep_assert_not_held(lockdep_mutex); mutex_lock(&dev->mutex); - if (dev->lock_class >= 0) - mutex_lock_nested(&dev->lockdep_mutex, dev->lock_class); + if (lockdep_mutex) + mutex_lock_nested(lockdep_mutex, dev->lock_class); } static inline int device_lock_interruptible(struct device *dev) { + struct mutex *lockdep_mutex = device_lockdep_mutex(dev); int rc; - lockdep_assert_not_held(&dev->lockdep_mutex); + if (lockdep_mutex) + lockdep_assert_not_held(lockdep_mutex); rc = mutex_lock_interruptible(&dev->mutex); - if (rc || dev->lock_class < 0) + if (rc || !lockdep_mutex) return rc; - return mutex_lock_interruptible_nested(&dev->lockdep_mutex, - dev->lock_class); + return mutex_lock_interruptible_nested(lockdep_mutex, dev->lock_class); } static inline int device_trylock(struct device *dev) { + struct mutex *lockdep_mutex = device_lockdep_mutex(dev); + if (mutex_trylock(&dev->mutex)) { - if (dev->lock_class >= 0) - mutex_lock_nested(&dev->lockdep_mutex, dev->lock_class); + if (lockdep_mutex) + mutex_lock_nested(lockdep_mutex, dev->lock_class); return 1; } @@ -819,8 +863,10 @@ static inline int device_trylock(struct device *dev) static inline void device_unlock(struct device *dev) { + struct mutex *lockdep_mutex = device_lockdep_mutex(dev); + if (dev->lock_class >= 0) - mutex_unlock(&dev->lockdep_mutex); + mutex_unlock(lockdep_mutex); mutex_unlock(&dev->mutex); } @@ -829,8 +875,13 @@ static inline void device_unlock(struct device *dev) * from entry to exit. There is no support for changing lockdep * validation classes, only enabling and disabling validation. */ -static inline void device_set_lock_class(struct device *dev, int lock_class) +static inline void device_set_lock_class(struct device *dev, + enum device_lock_subsys subsys, + int lock_class) { + if (!subsys_valid(subsys)) + return; + /* * Allow for setting or clearing the lock class while the * device_lock() is held, in which case the paired nested lock @@ -840,11 +891,12 @@ static inline void device_set_lock_class(struct device *dev, int lock_class) if (dev->lock_class < 0 && lock_class >= 0) { /* Enabling lockdep validation... */ if (mutex_is_locked(&dev->mutex)) - mutex_lock_nested(&dev->lockdep_mutex, lock_class); + mutex_lock_nested(&dev->lockdep_mutex[subsys], + lock_class); } else if (dev->lock_class >= 0 && lock_class < 0) { /* Disabling lockdep validation... */ if (mutex_is_locked(&dev->mutex)) - mutex_unlock(&dev->lockdep_mutex); + mutex_unlock(&dev->lockdep_mutex[subsys]); } else { dev_warn(dev, "%s: failed to change lock_class from: %d to %d\n", @@ -852,6 +904,7 @@ static inline void device_set_lock_class(struct device *dev, int lock_class) return; } dev->lock_class = lock_class; + dev->lock_subsys = subsys; } #else /* !CONFIG_PROVE_LOCKING */ static inline void device_lockdep_init(struct device *dev) @@ -879,7 +932,9 @@ static inline void device_unlock(struct device *dev) mutex_unlock(&dev->mutex); } -static inline void device_set_lock_class(struct device *dev, int lock_class) +static inline void device_set_lock_class(struct device *dev, + enum device_lock_subsys subssys, + int lock_class) { } #endif /* CONFIG_PROVE_LOCKING */ diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 1f8db59bdddd..cfe3b092c31d 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -1544,30 +1544,6 @@ config CSD_LOCK_WAIT_DEBUG include the IPI handler function currently executing (if any) and relevant stack traces. 
-choice - prompt "Lock debugging: prove subsystem device_lock() correctness" - depends on PROVE_LOCKING - help - For subsystems that have instrumented their usage of the device_lock() - with nested annotations, enable lock dependency checking. The locking - hierarchy 'subclass' identifiers are not compatible across - sub-systems, so only one can be enabled at a time. - -config PROVE_NVDIMM_LOCKING - bool "NVDIMM" - depends on LIBNVDIMM - help - Enable lockdep to validate libnvdimm subsystem usage of the - device lock. - -config PROVE_CXL_LOCKING - bool "CXL" - depends on CXL_BUS - help - Enable lockdep to validate CXL subsystem usage of the device lock. - -endchoice - endmenu # lock debugging config TRACE_IRQFLAGS
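[Editor's note, closing the series] The net effect of this series is that nested device_lock() usage is expressed through per-subsystem, per-class annotations on the regular device_lock() rather than through wrappers such as nd_device_lock(). A minimal sketch, in kernel C, of the ordering this lets lockdep track, assuming parent and child were previously given distinct lock classes via device_set_lock_class(); the function and variable names are illustrative only:

    #include <linux/device.h>

    /*
     * Because parent and child carry different lock classes, taking the
     * child's device_lock() while holding the parent's is recorded as a
     * nested acquisition. With equal classes lockdep would report possible
     * recursive locking, and with no class set the lock stays novalidate
     * (unchecked), as before this series.
     */
    static void example_walk_locked(struct device *parent, struct device *child)
    {
            device_lock(parent);
            device_lock(child);
            /* ... tear down or reconfigure child state under both locks ... */
            device_unlock(child);
            device_unlock(parent);
    }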