From patchwork Tue Mar 1 02:48:54 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dan Williams X-Patchwork-Id: 12763994 Subject: [PATCH 01/11] device-core: Enable lockdep validation From: Dan Williams To: gregkh@linuxfoundation.org, rafael.j.wysocki@intel.com Cc: "Rafael J. Wysocki" , vishal.l.verma@intel.com, alison.schofield@intel.com, linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org, nvdimm@lists.linux.dev Date: Mon, 28 Feb 2022 18:48:54 -0800 Message-ID: <164610293458.2682974.15975217569862336908.stgit@dwillia2-desk3.amr.corp.intel.com> In-Reply-To: <164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> References: <164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> User-Agent: StGit/0.18-3-g996c Precedence: bulk X-Mailing-List: nvdimm@lists.linux.dev List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The @lockdep_mutex attribute of 'struct device' was introduced to allow subsystems to wrap their usage of device_lock() with a local definition that adds nested annotations and lockdep validation. However, that approach leaves lockdep blind to the device_lock() usage in the device core. Instead of requiring each subsystem to replace device_lock(), teach the core device_lock() to consider a subsystem-specified lock class. While this enables increased coverage of the device_lock(), it is still limited to one subsystem at a time unless/until a unique "lockdep_mutex" is added for each subsystem that wants a distinct lock-class number space. Cc: Greg Kroah-Hartman Cc: "Rafael J.
Wysocki" Signed-off-by: Dan Williams --- drivers/base/core.c | 1 + include/linux/device.h | 73 +++++++++++++++++++++++++++++++++++++++++++++++- 2 files changed, 72 insertions(+), 2 deletions(-) diff --git a/drivers/base/core.c b/drivers/base/core.c index 7bb957b11861..96430fa5152e 100644 --- a/drivers/base/core.c +++ b/drivers/base/core.c @@ -2866,6 +2866,7 @@ void device_initialize(struct device *dev) mutex_init(&dev->mutex); #ifdef CONFIG_PROVE_LOCKING mutex_init(&dev->lockdep_mutex); + dev->lock_class = -1; #endif lockdep_set_novalidate_class(&dev->mutex); spin_lock_init(&dev->devres_lock); diff --git a/include/linux/device.h b/include/linux/device.h index 93459724dcde..e313ff21d670 100644 --- a/include/linux/device.h +++ b/include/linux/device.h @@ -402,6 +402,7 @@ struct dev_msi_info { * @mutex: Mutex to synchronize calls to its driver. * @lockdep_mutex: An optional debug lock that a subsystem can use as a * peer lock to gain localized lockdep coverage of the device_lock. + * @lock_class: per-subsystem annotated device lock class * @bus: Type of bus device is on. * @driver: Which driver has allocated this * @platform_data: Platform data specific to the device. @@ -501,6 +502,7 @@ struct device { dev_set_drvdata/dev_get_drvdata */ #ifdef CONFIG_PROVE_LOCKING struct mutex lockdep_mutex; + int lock_class; #endif struct mutex mutex; /* mutex to synchronize calls to * its driver. @@ -762,6 +764,12 @@ static inline bool dev_pm_test_driver_flags(struct device *dev, u32 flags) return !!(dev->power.driver_flags & flags); } +static inline void device_lock_assert(struct device *dev) +{ + lockdep_assert_held(&dev->mutex); +} + +#ifndef CONFIG_PROVE_LOCKING static inline void device_lock(struct device *dev) { mutex_lock(&dev->mutex); @@ -782,10 +790,71 @@ static inline void device_unlock(struct device *dev) mutex_unlock(&dev->mutex); } -static inline void device_lock_assert(struct device *dev) +static inline void device_set_lock_class(struct device *dev, int lock_class) { - lockdep_assert_held(&dev->mutex); } +#else +static inline void device_lock(struct device *dev) +{ + lockdep_assert_not_held(&dev->lockdep_mutex); + + mutex_lock(&dev->mutex); + if (dev->lock_class >= 0) + mutex_lock_nested(&dev->lockdep_mutex, dev->lock_class); +} + +static inline int device_lock_interruptible(struct device *dev) +{ + int rc = mutex_lock_interruptible(&dev->mutex); + + if (rc || dev->lock_class < 0) + return rc; + + return mutex_lock_interruptible_nested(&dev->lockdep_mutex, + dev->lock_class); +} + +static inline int device_trylock(struct device *dev) +{ + if (mutex_trylock(&dev->mutex)) { + if (dev->lock_class >= 0) + mutex_lock_nested(&dev->lockdep_mutex, dev->lock_class); + return 1; + } + + return 0; +} + +static inline void device_unlock(struct device *dev) +{ + if (dev->lock_class >= 0) + mutex_unlock(&dev->lockdep_mutex); + mutex_unlock(&dev->mutex); +} + +static inline void device_set_lock_class(struct device *dev, int lock_class) +{ + if (dev->lock_class < 0 && lock_class > 0) { + if (mutex_is_locked(&dev->mutex)) { + /* + * device_unlock() will unlock lockdep_mutex now that + * lock_class is set, so take the paired lock now + */ + mutex_lock_nested(&dev->lockdep_mutex, lock_class); + } + } else if (dev->lock_class >= 0 && lock_class < 0) { + if (mutex_is_locked(&dev->mutex)) { + /* + * device_unlock() will no longer drop lockdep_mutex now + * that lock_class is disabled, so drop the paired lock + * now. 
+ */ + mutex_unlock(&dev->lockdep_mutex); + } + } + dev->lock_class = lock_class; +} +#endif static inline struct device_node *dev_of_node(struct device *dev) { From patchwork Tue Mar 1 02:49:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dan Williams X-Patchwork-Id: 12763995 Subject: [PATCH 02/11] cxl/core: Refactor a cxl_lock_class() out of cxl_nested_lock() From: Dan Williams To: gregkh@linuxfoundation.org, rafael.j.wysocki@intel.com Cc: Alison Schofield , Vishal Verma , Ira Weiny , Ben Widawsky , linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org, nvdimm@lists.linux.dev Date: Mon, 28 Feb 2022 18:49:00 -0800 Message-ID: <164610294030.2682974.642590821548098371.stgit@dwillia2-desk3.amr.corp.intel.com> In-Reply-To: <164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> References: <164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> User-Agent: StGit/0.18-3-g996c Precedence: bulk X-Mailing-List: nvdimm@lists.linux.dev List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 In preparation for upleveling device_lock() lockdep annotation support into the core, provide a helper to retrieve the lock class. This lock_class will be used with device_set_lock_class() to identify the CXL nested locking rules.
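To make the intended usage concrete, here is a minimal sketch of how a subsystem could plug its nesting rules into the new core hook. It only relies on the interfaces added by this series (dev->lock_class, device_set_lock_class()); the foo_* names and the depth-based classing are illustrative assumptions, not part of the patches:

#include <linux/device.h>
#include <linux/lockdep.h>
#include <linux/minmax.h>

/* Hypothetical subsystem: derive a lock class from topology depth */
static int foo_lock_class(struct device *dev)
{
        struct device *iter;
        int depth = 0;

        for (iter = dev; iter->parent; iter = iter->parent)
                depth++;

        /* lockdep only supports MAX_LOCKDEP_SUBCLASSES distinct subclasses */
        return min(depth, (int)MAX_LOCKDEP_SUBCLASSES - 1);
}

static void foo_set_lock_class(struct device *dev)
{
        /*
         * With CONFIG_PROVE_LOCKING, device_lock() will now also take
         * dev->lockdep_mutex via mutex_lock_nested(..., dev->lock_class),
         * giving lockdep the visibility it does not get from dev->mutex
         * itself (which is set novalidate in device_initialize()).
         */
        device_set_lock_class(dev, foo_lock_class(dev));
}

In the CXL patches later in this series the equivalent call (cxl_set_lock_class()) is made just before device_add(), so the class is in place for the first device_lock() taken on the new device.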
Cc: Alison Schofield Cc: Vishal Verma Cc: Ira Weiny Cc: Ben Widawsky Signed-off-by: Dan Williams Reviewed-by: Jonathan Cameron --- drivers/cxl/cxl.h | 19 +++++++++++-------- 1 file changed, 11 insertions(+), 8 deletions(-) diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 5486fb6aebd4..ca8a61a623b7 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -509,13 +509,12 @@ enum cxl_lock_class { */ }; -static inline void cxl_nested_lock(struct device *dev) +static inline int cxl_lock_class(struct device *dev) { if (is_cxl_port(dev)) { struct cxl_port *port = to_cxl_port(dev); - mutex_lock_nested(&dev->lockdep_mutex, - CXL_PORT_LOCK + port->depth); + return CXL_PORT_LOCK + port->depth; } else if (is_cxl_decoder(dev)) { struct cxl_port *port = to_cxl_port(dev->parent); @@ -523,14 +522,18 @@ static inline void cxl_nested_lock(struct device *dev) * A decoder is the immediate child of a port, so set * its lock class equal to other child device siblings. */ - mutex_lock_nested(&dev->lockdep_mutex, - CXL_PORT_LOCK + port->depth + 1); + return CXL_PORT_LOCK + port->depth + 1; } else if (is_cxl_nvdimm_bridge(dev)) - mutex_lock_nested(&dev->lockdep_mutex, CXL_NVDIMM_BRIDGE_LOCK); + return CXL_NVDIMM_BRIDGE_LOCK; else if (is_cxl_nvdimm(dev)) - mutex_lock_nested(&dev->lockdep_mutex, CXL_NVDIMM_LOCK); + return CXL_NVDIMM_LOCK; else - mutex_lock_nested(&dev->lockdep_mutex, CXL_ANON_LOCK); + return CXL_ANON_LOCK; +} + +static inline void cxl_nested_lock(struct device *dev) +{ + mutex_lock_nested(&dev->lockdep_mutex, cxl_lock_class(dev)); } static inline void cxl_nested_unlock(struct device *dev) From patchwork Tue Mar 1 02:49:06 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dan Williams X-Patchwork-Id: 12763996 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5794717E4 for ; Tue, 1 Mar 2022 02:49:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1646102947; x=1677638947; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=I2OFBbygNRQGBO74DammV/3EkKdTLDNqydRcC46ubas=; b=E+gFQZurs5d5sDwOWufNH2ILwc1Vt9xUo76n2QpymR+UEBAiKGhNcyDx +fk+R4Uf8LVtufGAB5nxL95v48JRrVnEabHfWt1WWvXQxvwb8rhowUboe 2wvBO73NJ0loSGipSkZ8mDdZq6oaqECrc6e5yBN1mz14z+Ardyvv+Ww7l odhFGCSNNm0Faph+cv88AiglZmh35YvkZ038srBdTDAIPoRKEkEp39/nF T4TcZUeFinL0AB+g0r5m0EogdGAep9smgrwRoOF8EfH97HFa9pErCB3C0 cnjfcsiwS52RQPSpoe5KtZRLzlF7vbLFQ7JjcS+v7wz3eqWE7EAvx3ik4 A==; X-IronPort-AV: E=McAfee;i="6200,9189,10272"; a="253232833" X-IronPort-AV: E=Sophos;i="5.90,144,1643702400"; d="scan'208";a="253232833" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Feb 2022 18:49:06 -0800 X-IronPort-AV: E=Sophos;i="5.90,144,1643702400"; d="scan'208";a="685560509" Received: from dwillia2-desk3.jf.intel.com (HELO dwillia2-desk3.amr.corp.intel.com) ([10.54.39.25]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Feb 2022 18:49:06 -0800 Subject: [PATCH 03/11] cxl/core: Remove cxl_device_lock() From: Dan Williams To: gregkh@linuxfoundation.org, rafael.j.wysocki@intel.com Cc: Alison Schofield , Vishal Verma , Ira Weiny , Ben Widawsky , linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org, 
nvdimm@lists.linux.dev Date: Mon, 28 Feb 2022 18:49:06 -0800 Message-ID: <164610294604.2682974.11169622387063183603.stgit@dwillia2-desk3.amr.corp.intel.com> In-Reply-To: <164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> References: <164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> User-Agent: StGit/0.18-3-g996c Precedence: bulk X-Mailing-List: nvdimm@lists.linux.dev List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 In preparation for moving lockdep_mutex nested lock acquisition into the core, remove the cxl_device_lock() wrapper, but preserve cxl_lock_class() that will be used to inform the core of the subsystem's lock ordering rules. Cc: Alison Schofield Cc: Vishal Verma Cc: Ira Weiny Cc: Ben Widawsky Signed-off-by: Dan Williams Reviewed-by: Jonathan Cameron --- drivers/cxl/core/pmem.c | 4 ++-- drivers/cxl/core/port.c | 50 ++++++++++++++++++++++------------------------- drivers/cxl/cxl.h | 46 ------------------------------------------- drivers/cxl/mem.c | 4 ++-- drivers/cxl/pmem.c | 12 ++++++----- lib/Kconfig.debug | 2 +- 6 files changed, 34 insertions(+), 84 deletions(-) diff --git a/drivers/cxl/core/pmem.c b/drivers/cxl/core/pmem.c index 58dc6fba3130..c3d7e6ce3fdf 100644 --- a/drivers/cxl/core/pmem.c +++ b/drivers/cxl/core/pmem.c @@ -127,10 +127,10 @@ static void unregister_nvb(void *_cxl_nvb) * work to flush. Once the state has been changed to 'dead' then no new * work can be queued by user-triggered bind. */ - cxl_device_lock(&cxl_nvb->dev); + device_lock(&cxl_nvb->dev); flush = cxl_nvb->state != CXL_NVB_NEW; cxl_nvb->state = CXL_NVB_DEAD; - cxl_device_unlock(&cxl_nvb->dev); + device_unlock(&cxl_nvb->dev); /* * Even though the device core will trigger device_release_driver() diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c index d48d44e911c1..44006d8eb64d 100644 --- a/drivers/cxl/core/port.c +++ b/drivers/cxl/core/port.c @@ -350,10 +350,10 @@ static void cxl_port_release(struct device *dev) struct cxl_port *port = to_cxl_port(dev); struct cxl_ep *ep, *_e; - cxl_device_lock(dev); + device_lock(dev); list_for_each_entry_safe(ep, _e, &port->endpoints, list) cxl_ep_release(ep); - cxl_device_unlock(dev); + device_unlock(dev); ida_free(&cxl_port_ida, port->id); kfree(port); } @@ -592,7 +592,7 @@ static int match_root_child(struct device *dev, const void *match) return 0; port = to_cxl_port(dev); - cxl_device_lock(dev); + device_lock(dev); list_for_each_entry(dport, &port->dports, list) { iter = match; while (iter) { @@ -602,7 +602,7 @@ static int match_root_child(struct device *dev, const void *match) } } out: - cxl_device_unlock(dev); + device_unlock(dev); return !!iter; } @@ -661,13 +661,13 @@ static int add_dport(struct cxl_port *port, struct cxl_dport *new) static void cond_cxl_root_lock(struct cxl_port *port) { if (is_cxl_root(port)) - cxl_device_lock(&port->dev); + device_lock(&port->dev); } static void cond_cxl_root_unlock(struct cxl_port *port) { if (is_cxl_root(port)) - cxl_device_unlock(&port->dev); + device_unlock(&port->dev); } static void cxl_dport_remove(void *data) @@ -775,15 +775,15 @@ static int add_ep(struct cxl_port *port, struct cxl_ep *new) { struct cxl_ep *dup; - cxl_device_lock(&port->dev); + device_lock(&port->dev); if (port->dead) { - cxl_device_unlock(&port->dev); + device_unlock(&port->dev); return -ENXIO; } dup = find_ep(port, new->ep); if (!dup) list_add_tail(&new->list, &port->endpoints); - cxl_device_unlock(&port->dev); + device_unlock(&port->dev); return dup ? 
-EEXIST : 0; } @@ -893,12 +893,12 @@ static void delete_endpoint(void *data) goto out; parent = &parent_port->dev; - cxl_device_lock(parent); + device_lock(parent); if (parent->driver && endpoint->uport) { devm_release_action(parent, cxl_unlink_uport, endpoint); devm_release_action(parent, unregister_port, endpoint); } - cxl_device_unlock(parent); + device_unlock(parent); put_device(parent); out: put_device(&endpoint->dev); @@ -956,7 +956,7 @@ static void cxl_detach_ep(void *data) } parent_port = to_cxl_port(port->dev.parent); - cxl_device_lock(&parent_port->dev); + device_lock(&parent_port->dev); if (!parent_port->dev.driver) { /* * The bottom-up race to delete the port lost to a @@ -964,12 +964,12 @@ static void cxl_detach_ep(void *data) * parent_port ->remove() will have cleaned up all * descendants. */ - cxl_device_unlock(&parent_port->dev); + device_unlock(&parent_port->dev); put_device(&port->dev); continue; } - cxl_device_lock(&port->dev); + device_lock(&port->dev); ep = find_ep(port, &cxlmd->dev); dev_dbg(&cxlmd->dev, "disconnect %s from %s\n", ep ? dev_name(ep->ep) : "", dev_name(&port->dev)); @@ -984,7 +984,7 @@ static void cxl_detach_ep(void *data) port->dead = true; list_splice_init(&port->dports, &reap_dports); } - cxl_device_unlock(&port->dev); + device_unlock(&port->dev); if (!list_empty(&reap_dports)) { dev_dbg(&cxlmd->dev, "delete %s\n", @@ -992,7 +992,7 @@ static void cxl_detach_ep(void *data) delete_switch_port(port, &reap_dports); } put_device(&port->dev); - cxl_device_unlock(&parent_port->dev); + device_unlock(&parent_port->dev); } } @@ -1040,7 +1040,7 @@ static int add_port_attach_ep(struct cxl_memdev *cxlmd, return -EAGAIN; } - cxl_device_lock(&parent_port->dev); + device_lock(&parent_port->dev); if (!parent_port->dev.driver) { dev_warn(&cxlmd->dev, "port %s:%s disabled, failed to enumerate CXL.mem\n", @@ -1058,7 +1058,7 @@ static int add_port_attach_ep(struct cxl_memdev *cxlmd, get_device(&port->dev); } out: - cxl_device_unlock(&parent_port->dev); + device_unlock(&parent_port->dev); if (IS_ERR(port)) rc = PTR_ERR(port); @@ -1169,14 +1169,14 @@ struct cxl_dport *cxl_find_dport_by_dev(struct cxl_port *port, { struct cxl_dport *dport; - cxl_device_lock(&port->dev); + device_lock(&port->dev); list_for_each_entry(dport, &port->dports, list) if (dport->dport == dev) { - cxl_device_unlock(&port->dev); + device_unlock(&port->dev); return dport; } - cxl_device_unlock(&port->dev); + device_unlock(&port->dev); return NULL; } EXPORT_SYMBOL_NS_GPL(cxl_find_dport_by_dev, CXL); @@ -1419,9 +1419,9 @@ int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map) port = to_cxl_port(cxld->dev.parent); - cxl_device_lock(&port->dev); + device_lock(&port->dev); rc = cxl_decoder_add_locked(cxld, target_map); - cxl_device_unlock(&port->dev); + device_unlock(&port->dev); return rc; } @@ -1566,7 +1566,6 @@ static int cxl_bus_probe(struct device *dev) * Take the CXL nested lock since the driver core only holds * @dev->mutex and not @dev->lockdep_mutex. 
*/ - cxl_nested_lock(dev); if (id == CXL_DEVICE_REGION) { /* Regions cannot bind until parameters are set */ struct cxl_region *cxlr = to_cxl_region(dev); @@ -1576,7 +1575,6 @@ static int cxl_bus_probe(struct device *dev) } else { rc = to_cxl_drv(dev->driver)->probe(dev); } - cxl_nested_unlock(dev); dev_dbg(dev, "probe: %d\n", rc); @@ -1587,10 +1585,8 @@ static void cxl_bus_remove(struct device *dev) { struct cxl_driver *cxl_drv = to_cxl_drv(dev->driver); - cxl_nested_lock(dev); if (cxl_drv->remove) cxl_drv->remove(dev); - cxl_nested_unlock(dev); } static struct workqueue_struct *cxl_bus_wq; diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index ca8a61a623b7..97e6ca7e4940 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -530,51 +530,5 @@ static inline int cxl_lock_class(struct device *dev) else return CXL_ANON_LOCK; } - -static inline void cxl_nested_lock(struct device *dev) -{ - mutex_lock_nested(&dev->lockdep_mutex, cxl_lock_class(dev)); -} - -static inline void cxl_nested_unlock(struct device *dev) -{ - mutex_unlock(&dev->lockdep_mutex); -} - -static inline void cxl_device_lock(struct device *dev) -{ - /* - * For double lock errors the lockup will happen before lockdep - * warns at cxl_nested_lock(), so assert explicitly. - */ - lockdep_assert_not_held(&dev->lockdep_mutex); - - device_lock(dev); - cxl_nested_lock(dev); -} - -static inline void cxl_device_unlock(struct device *dev) -{ - cxl_nested_unlock(dev); - device_unlock(dev); -} -#else -static inline void cxl_nested_lock(struct device *dev) -{ -} - -static inline void cxl_nested_unlock(struct device *dev) -{ -} - -static inline void cxl_device_lock(struct device *dev) -{ - device_lock(dev); -} - -static inline void cxl_device_unlock(struct device *dev) -{ - device_unlock(dev); -} #endif #endif /* __CXL_H__ */ diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index 415f351629c8..a68c8f203626 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -207,7 +207,7 @@ static int cxl_mem_probe(struct device *dev) return -ENXIO; } - cxl_device_lock(&parent_port->dev); + device_lock(&parent_port->dev); if (!parent_port->dev.driver) { dev_err(dev, "CXL port topology %s not enabled\n", dev_name(&parent_port->dev)); @@ -226,7 +226,7 @@ static int cxl_mem_probe(struct device *dev) rc = devm_add_action_or_reset(dev, delete_memdev, cxlmd); out: - cxl_device_unlock(&parent_port->dev); + device_unlock(&parent_port->dev); put_device(&parent_port->dev); return rc; } diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c index fabdb0c6dbf2..564d125d25ef 100644 --- a/drivers/cxl/pmem.c +++ b/drivers/cxl/pmem.c @@ -43,7 +43,7 @@ static int cxl_nvdimm_probe(struct device *dev) if (!cxl_nvb) return -ENXIO; - cxl_device_lock(&cxl_nvb->dev); + device_lock(&cxl_nvb->dev); if (!cxl_nvb->nvdimm_bus) { rc = -ENXIO; goto out; @@ -68,7 +68,7 @@ static int cxl_nvdimm_probe(struct device *dev) dev_set_drvdata(dev, nvdimm); rc = devm_add_action_or_reset(dev, unregister_nvdimm, nvdimm); out: - cxl_device_unlock(&cxl_nvb->dev); + device_unlock(&cxl_nvb->dev); put_device(&cxl_nvb->dev); return rc; @@ -233,7 +233,7 @@ static void cxl_nvb_update_state(struct work_struct *work) struct nvdimm_bus *victim_bus = NULL; bool release = false, rescan = false; - cxl_device_lock(&cxl_nvb->dev); + device_lock(&cxl_nvb->dev); switch (cxl_nvb->state) { case CXL_NVB_ONLINE: if (!online_nvdimm_bus(cxl_nvb)) { @@ -251,7 +251,7 @@ static void cxl_nvb_update_state(struct work_struct *work) default: break; } - cxl_device_unlock(&cxl_nvb->dev); + device_unlock(&cxl_nvb->dev); if 
(release) device_release_driver(&cxl_nvb->dev); @@ -327,9 +327,9 @@ static int cxl_nvdimm_bridge_reset(struct device *dev, void *data) return 0; cxl_nvb = to_cxl_nvdimm_bridge(dev); - cxl_device_lock(dev); + device_lock(dev); cxl_nvb->state = CXL_NVB_NEW; - cxl_device_unlock(dev); + device_unlock(dev); return 0; } diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 7dea203964f7..5942b22d7f1b 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -1535,7 +1535,7 @@ config PROVE_CXL_LOCKING bool "CXL" depends on CXL_BUS help - Enable lockdep to validate cxl_device_lock() usage. + Enable lockdep to validate CXL subsystem usage of the device lock. endchoice From patchwork Tue Mar 1 02:49:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dan Williams X-Patchwork-Id: 12763997 Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5852917E4 for ; Tue, 1 Mar 2022 02:49:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1646102952; x=1677638952; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=0g12nHhygb9mDTBwhhm5NizRvH7d3c8Rr3u2nwGyQZM=; b=hf9+oi5hrkF0ECMyRg+QnvmqMbop5Y1aCgLWTgFa5gyFzCZ5kN76KYIy TefVh4RlTrZKcwm+emLyf9eGfDqygMLiAFmt0Sx2+KvdU0fqLv91Sj9CY aNM8ISFqXUrlXwmCXQAoOT+yac/bN5mzrxfUVoCXYMxXYNuquokeXaQO0 t+Lf7t49LusFjtb/lHAtQZ+vP33LnbvZPpRxBco2F754HG5sUBhL3Lz6v /b4lfs/FKH2XNOpiPdvslcKdU2KocasTD+6xz7jkrhoIvSB+TRf/vMRlk G1VIXZyhGghXt/J5errK4dqDuT6K9zrOzTBAkNf23Z9XHnhQaLuk8SZMm Q==; X-IronPort-AV: E=McAfee;i="6200,9189,10272"; a="313761718" X-IronPort-AV: E=Sophos;i="5.90,144,1643702400"; d="scan'208";a="313761718" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Feb 2022 18:49:11 -0800 X-IronPort-AV: E=Sophos;i="5.90,144,1643702400"; d="scan'208";a="507645813" Received: from dwillia2-desk3.jf.intel.com (HELO dwillia2-desk3.amr.corp.intel.com) ([10.54.39.25]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Feb 2022 18:49:11 -0800 Subject: [PATCH 04/11] cxl/core: Clamp max lock_class From: Dan Williams To: gregkh@linuxfoundation.org, rafael.j.wysocki@intel.com Cc: Alison Schofield , Vishal Verma , Ira Weiny , Ben Widawsky , linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org, nvdimm@lists.linux.dev Date: Mon, 28 Feb 2022 18:49:11 -0800 Message-ID: <164610295187.2682974.18123746840987009597.stgit@dwillia2-desk3.amr.corp.intel.com> In-Reply-To: <164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> References: <164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> User-Agent: StGit/0.18-3-g996c Precedence: bulk X-Mailing-List: nvdimm@lists.linux.dev List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 MAX_LOCKDEP_SUBCLASSES limits the depth of the CXL topology that can be validated by lockdep. Given that the cxl_test topology is already at this limit collapse some of the levels and clamp the max depth. 
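As a worked example of the clamp (assuming MAX_LOCKDEP_SUBCLASSES is 8, its value in current kernels): with CXL_PORT_LOCK = 2, a port at depth 5 computes lock class 2 + 5 = 7 and is still validated, but a decoder under that port computes 2 + 5 + 1 = 8, which is >= MAX_LOCKDEP_SUBCLASSES, so clamp_lock_class() emits a one-time warning and falls back to class 0 for that device rather than overflowing lockdep's subclass space.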
Cc: Alison Schofield Cc: Vishal Verma Cc: Ira Weiny Cc: Ben Widawsky Signed-off-by: Dan Williams --- drivers/cxl/cxl.h | 21 +++++++++++++++++---- 1 file changed, 17 insertions(+), 4 deletions(-) diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 97e6ca7e4940..1357a245037d 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -501,20 +501,33 @@ enum cxl_lock_class { CXL_ANON_LOCK, CXL_NVDIMM_LOCK, CXL_NVDIMM_BRIDGE_LOCK, - CXL_PORT_LOCK, + CXL_PORT_LOCK = 2, /* * Be careful to add new lock classes here, CXL_PORT_LOCK is * extended by the port depth, so a maximum CXL port topology - * depth would need to be defined first. + * depth would need to be defined first. Also, the max + * validation depth is limited by MAX_LOCKDEP_SUBCLASSES. */ }; +static inline int clamp_lock_class(struct device *dev, int lock_class) +{ + if (lock_class >= MAX_LOCKDEP_SUBCLASSES) { + dev_warn_once(dev, + "depth: %d, disabling lockdep for this device\n", + lock_class); + return 0; + } + + return lock_class; +} + static inline int cxl_lock_class(struct device *dev) { if (is_cxl_port(dev)) { struct cxl_port *port = to_cxl_port(dev); - return CXL_PORT_LOCK + port->depth; + return clamp_lock_class(dev, CXL_PORT_LOCK + port->depth); } else if (is_cxl_decoder(dev)) { struct cxl_port *port = to_cxl_port(dev->parent); @@ -522,7 +535,7 @@ static inline int cxl_lock_class(struct device *dev) * A decoder is the immediate child of a port, so set * its lock class equal to other child device siblings. */ - return CXL_PORT_LOCK + port->depth + 1; + return clamp_lock_class(dev, CXL_PORT_LOCK + port->depth + 1); } else if (is_cxl_nvdimm_bridge(dev)) return CXL_NVDIMM_BRIDGE_LOCK; else if (is_cxl_nvdimm(dev)) From patchwork Tue Mar 1 02:49:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dan Williams X-Patchwork-Id: 12763998 Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CDA4C17E4 for ; Tue, 1 Mar 2022 02:49:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1646102957; x=1677638957; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=xBvxSz2YU+OsRHmlwREZP7FJC9naWNMR23kWinHs1ws=; b=GOJHpmGc/4QV48tyaXrO24/lFRxwEzhwa0GnCn0POADDZkw6hBit/W+3 BPF2SPgMB09VYbV8p31/qbLO7Db1p6myf0gFYpvy6YYbc0NFQYz1pIWSY rmmQHnCMBZEpnk4aHZkxEAtUO1fs12KoDWs+BCYcvZxpDN+f9Q9+NsnPi k//NTJi7yOdlI4cUb9rR2jdYNHnG7XELFrXi9fTNSLEb6HkVn7kfIsQfC slIqR/8LDOh2bsC2MPp6pVqyNGVxoOs26C1S4+tjfq+Ui5e8tHrYZdws+ jRnwLCCq4evZo/pRIFklVtZrsjAc6Ev7tgC3+n80jgnH3azvfzl80JqBt g==; X-IronPort-AV: E=McAfee;i="6200,9189,10272"; a="236550265" X-IronPort-AV: E=Sophos;i="5.90,144,1643702400"; d="scan'208";a="236550265" Received: from orsmga004.jf.intel.com ([10.7.209.38]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Feb 2022 18:49:17 -0800 X-IronPort-AV: E=Sophos;i="5.90,144,1643702400"; d="scan'208";a="641105483" Received: from dwillia2-desk3.jf.intel.com (HELO dwillia2-desk3.amr.corp.intel.com) ([10.54.39.25]) by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Feb 2022 18:49:17 -0800 Subject: [PATCH 05/11] cxl/core: Introduce cxl_set_lock_class() From: Dan Williams To: gregkh@linuxfoundation.org, rafael.j.wysocki@intel.com Cc: Alison Schofield , Vishal Verma , Ira 
Weiny , Ben Widawsky , linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org, nvdimm@lists.linux.dev Date: Mon, 28 Feb 2022 18:49:17 -0800 Message-ID: <164610295699.2682974.3646198829625502263.stgit@dwillia2-desk3.amr.corp.intel.com> In-Reply-To: <164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> References: <164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> User-Agent: StGit/0.18-3-g996c Precedence: bulk X-Mailing-List: nvdimm@lists.linux.dev List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Update CONFIG_PROVE_CXL_LOCKING to use the common device-core helpers for device_lock validation. When CONFIG_PROVE_LOCKING is enabled and device_set_lock_class() is passed a non-zero lock class the core acquires the 'struct device' @lockdep_mutex everywhere it acquires the device_lock. Where lockdep_mutex does not skip lockdep validation like device_lock. cxl_set_lock_class() wraps device_set_lock_class() as to not collide with other subsystems that may also support this lockdep validation scheme. See the 'choice' for the various CONFIG_PROVE_$SUBSYS_LOCKING options in lib/Kconfig.debug. Cc: Alison Schofield Cc: Vishal Verma Cc: Ira Weiny Cc: Ben Widawsky Signed-off-by: Dan Williams --- drivers/cxl/core/memdev.c | 1 + drivers/cxl/core/pmem.c | 2 ++ drivers/cxl/core/port.c | 2 ++ drivers/cxl/core/region.c | 1 + drivers/cxl/cxl.h | 9 +++++++++ 5 files changed, 15 insertions(+) diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c index 1f76b28f9826..ad8c9f61c38a 100644 --- a/drivers/cxl/core/memdev.c +++ b/drivers/cxl/core/memdev.c @@ -343,6 +343,7 @@ struct cxl_memdev *devm_cxl_add_memdev(struct cxl_dev_state *cxlds) cxlmd->cxlds = cxlds; cdev = &cxlmd->cdev; + cxl_set_lock_class(dev); rc = cdev_device_add(cdev, dev); if (rc) goto err; diff --git a/drivers/cxl/core/pmem.c b/drivers/cxl/core/pmem.c index c3d7e6ce3fdf..e0bdda788b01 100644 --- a/drivers/cxl/core/pmem.c +++ b/drivers/cxl/core/pmem.c @@ -171,6 +171,7 @@ struct cxl_nvdimm_bridge *devm_cxl_add_nvdimm_bridge(struct device *host, if (rc) goto err; + cxl_set_lock_class(dev); rc = device_add(dev); if (rc) goto err; @@ -283,6 +284,7 @@ int devm_cxl_add_nvdimm(struct device *host, struct cxl_memdev *cxlmd) if (rc) goto err; + cxl_set_lock_class(dev); rc = device_add(dev); if (rc) goto err; diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c index 44006d8eb64d..5eee7de1c7f9 100644 --- a/drivers/cxl/core/port.c +++ b/drivers/cxl/core/port.c @@ -507,6 +507,7 @@ struct cxl_port *devm_cxl_add_port(struct device *host, struct device *uport, if (rc) goto err; + cxl_set_lock_class(dev); rc = device_add(dev); if (rc) goto err; @@ -1389,6 +1390,7 @@ int cxl_decoder_add_locked(struct cxl_decoder *cxld, int *target_map) if (is_root_decoder(dev)) cxld->platform_res.name = dev_name(dev); + cxl_set_lock_class(dev); return device_add(dev); } EXPORT_SYMBOL_NS_GPL(cxl_decoder_add_locked, CXL); diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c index b1a4ba622739..f0a821de94cf 100644 --- a/drivers/cxl/core/region.c +++ b/drivers/cxl/core/region.c @@ -476,6 +476,7 @@ int cxl_add_region(struct cxl_decoder *cxld, struct cxl_region *cxlr) if (rc) goto err; + cxl_set_lock_class(dev); rc = device_add(dev); if (rc) goto err; diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 1357a245037d..f94eff659cce 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -543,5 +543,14 @@ static inline int cxl_lock_class(struct device *dev) else return 
CXL_ANON_LOCK; } + +static inline void cxl_set_lock_class(struct device *dev) +{ + device_set_lock_class(dev, cxl_lock_class(dev)); +} +#else +static inline void cxl_set_lock_class(struct device *dev) +{ +} #endif #endif /* __CXL_H__ */ From patchwork Tue Mar 1 02:49:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dan Williams X-Patchwork-Id: 12763999 Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5026517E8 for ; Tue, 1 Mar 2022 02:49:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1646102963; x=1677638963; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=BKrCzA0DUWytFgTFvZI0tuJIcbREVqqSnKKBEdoNKwU=; b=oI2hR9pc8+CnxJkN6eB/k2hIhqObnHVIhfHm8fGH22959psXxYUv9jkd Urpr/UoVDo71C8YBjRN2JfX+F4EdGCEBN71TVdLWsjLhwGndzB0snJcJE 0ycAhHgSxWcrz8Lxyd78xezwEsnidv/hcx1zehJRu08oHDVD9newbUoZS edUMyqanJdJPobF2ziKvuUZcFbWAsv5uoKK8loRsF3Tjrlo4EV1jnvp2b swTl3NjWJghjRA4YmYAk9lnAD7lJKxRv26Z4tNMZ5s/mqFOSPiThX5Gla dovBtM5gJXUHC0Ufoa77cC4dNVRMZfc02+eF3PRmey4Hwy7lN/VGvuw1b A==; X-IronPort-AV: E=McAfee;i="6200,9189,10272"; a="240453169" X-IronPort-AV: E=Sophos;i="5.90,144,1643702400"; d="scan'208";a="240453169" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Feb 2022 18:49:22 -0800 X-IronPort-AV: E=Sophos;i="5.90,144,1643702400"; d="scan'208";a="778310473" Received: from dwillia2-desk3.jf.intel.com (HELO dwillia2-desk3.amr.corp.intel.com) ([10.54.39.25]) by fmsmga006-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Feb 2022 18:49:22 -0800 Subject: [PATCH 06/11] cxl/acpi: Add a lock class for the root platform device From: Dan Williams To: gregkh@linuxfoundation.org, rafael.j.wysocki@intel.com Cc: Alison Schofield , Vishal Verma , Ira Weiny , Ben Widawsky , linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org, nvdimm@lists.linux.dev Date: Mon, 28 Feb 2022 18:49:22 -0800 Message-ID: <164610296214.2682974.9364719321216244746.stgit@dwillia2-desk3.amr.corp.intel.com> In-Reply-To: <164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> References: <164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> User-Agent: StGit/0.18-3-g996c Precedence: bulk X-Mailing-List: nvdimm@lists.linux.dev List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Now that the device-core can start validating lockdep usage after the device has been added, use that capability to validate usage of device_lock() against the ACPI0017 device relative to other subsystem locks. 
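A minimal sketch of the ordering this relies on, with hypothetical foo_* names standing in for the CXL specifics (the behavior itself comes from patch 1 of this series): the driver core already holds device_lock() around ->probe(), so setting the class from probe makes device_set_lock_class() take the paired lockdep_mutex immediately, keeping the later device_unlock() balanced.

#include <linux/device.h>
#include <linux/platform_device.h>

enum { FOO_ROOT_LOCK = 1 };     /* assumed subsystem lock class, > 0 */

static int foo_platform_probe(struct platform_device *pdev)
{
        /*
         * The driver core holds device_lock(&pdev->dev) around ->probe(),
         * so device_set_lock_class() sees mutex_is_locked() and acquires
         * dev->lockdep_mutex with the new class right away; the
         * device_unlock() issued after probe then releases both in order.
         */
        device_set_lock_class(&pdev->dev, FOO_ROOT_LOCK);

        /* ... the rest of a normal probe routine would follow here ... */
        return 0;
}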
Cc: Alison Schofield Cc: Vishal Verma Cc: Ira Weiny Cc: Ben Widawsky Signed-off-by: Dan Williams --- drivers/cxl/acpi.c | 1 + drivers/cxl/cxl.h | 3 ++- 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c index 37bc8d787999..7fa7bf6088cd 100644 --- a/drivers/cxl/acpi.c +++ b/drivers/cxl/acpi.c @@ -313,6 +313,7 @@ static int cxl_acpi_probe(struct platform_device *pdev) struct acpi_device *adev = ACPI_COMPANION(host); struct cxl_cfmws_context ctx; + device_set_lock_class(&pdev->dev, CXL_ROOT_LOCK); root_port = devm_cxl_add_port(host, host, CXL_RESOURCE_NONE, NULL); if (IS_ERR(root_port)) return PTR_ERR(root_port); diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index f94eff659cce..5179b6bb1b36 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -496,9 +496,9 @@ struct cxl_nvdimm *cxl_find_nvdimm(struct cxl_memdev *cxlmd); #define __mock static #endif -#ifdef CONFIG_PROVE_CXL_LOCKING enum cxl_lock_class { CXL_ANON_LOCK, + CXL_ROOT_LOCK, CXL_NVDIMM_LOCK, CXL_NVDIMM_BRIDGE_LOCK, CXL_PORT_LOCK = 2, @@ -510,6 +510,7 @@ enum cxl_lock_class { */ }; +#ifdef CONFIG_PROVE_CXL_LOCKING static inline int clamp_lock_class(struct device *dev, int lock_class) { if (lock_class >= MAX_LOCKDEP_SUBCLASSES) { From patchwork Tue Mar 1 02:49:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dan Williams X-Patchwork-Id: 12764000 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6CC5D17E4 for ; Tue, 1 Mar 2022 02:49:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1646102968; x=1677638968; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=WSASaml+/DezX9y2ikDtAn54SMESZKM5xL0R4lEghco=; b=IFcXJqSC/34THnkfvN48HWLDESm8965xknZNHD2SUUjlxPz154mOBWJ1 a2XutaZmmMZ+zHJk4MnUmKf8SfLkCXfkqgTlUX3VgjD55V3zwjheshVwd 4mCiQdyEqojjEKQ9f7yg4HoBtBSinCDzoe+lT+ohBc+vYN2vhe2SYNQqv 2MTJoXRCZ3SE6l5vJAzYA0crzYhaMnwVASunzTmWqb4JpG8bfZ6JJZVnl WXG/Tnf4E+OFkE6GBAY9859QezGRK4l+DRQDUHU7F10bze7TFtH7AfYXZ PYlwejT09foisUfSMa8K9XosZNrG+a27wC6UXO5zf873EvuQxAhVgzLQH Q==; X-IronPort-AV: E=McAfee;i="6200,9189,10272"; a="233659258" X-IronPort-AV: E=Sophos;i="5.90,144,1643702400"; d="scan'208";a="233659258" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Feb 2022 18:49:27 -0800 X-IronPort-AV: E=Sophos;i="5.90,144,1643702400"; d="scan'208";a="507645876" Received: from dwillia2-desk3.jf.intel.com (HELO dwillia2-desk3.amr.corp.intel.com) ([10.54.39.25]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Feb 2022 18:49:27 -0800 Subject: [PATCH 07/11] libnvdimm: Refactor an nvdimm_lock_class() helper From: Dan Williams To: gregkh@linuxfoundation.org, rafael.j.wysocki@intel.com Cc: Vishal Verma , Dave Jiang , Ira Weiny , alison.schofield@intel.com, linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org, nvdimm@lists.linux.dev Date: Mon, 28 Feb 2022 18:49:27 -0800 Message-ID: <164610296761.2682974.11157102809606978208.stgit@dwillia2-desk3.amr.corp.intel.com> In-Reply-To: <164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> References: 
<164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> User-Agent: StGit/0.18-3-g996c Precedence: bulk X-Mailing-List: nvdimm@lists.linux.dev List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 In preparation for moving to the device-core device_lock lockdep validation, refactor an nvdimm_lock_class() helper to be used with device_set_lock_class(). Cc: Vishal Verma Cc: Dave Jiang Cc: Ira Weiny Signed-off-by: Dan Williams --- drivers/nvdimm/nd-core.h | 21 +++++++++++++-------- 1 file changed, 13 insertions(+), 8 deletions(-) diff --git a/drivers/nvdimm/nd-core.h b/drivers/nvdimm/nd-core.h index 2650a852eeaf..772568ffb9d6 100644 --- a/drivers/nvdimm/nd-core.h +++ b/drivers/nvdimm/nd-core.h @@ -197,22 +197,27 @@ enum { LOCK_CLAIM, }; -static inline void debug_nvdimm_lock(struct device *dev) +static inline int nvdimm_lock_class(struct device *dev) { if (is_nd_region(dev)) - mutex_lock_nested(&dev->lockdep_mutex, LOCK_REGION); + return LOCK_REGION; else if (is_nvdimm(dev)) - mutex_lock_nested(&dev->lockdep_mutex, LOCK_DIMM); + return LOCK_DIMM; else if (is_nd_btt(dev) || is_nd_pfn(dev) || is_nd_dax(dev)) - mutex_lock_nested(&dev->lockdep_mutex, LOCK_CLAIM); + return LOCK_CLAIM; else if (dev->parent && (is_nd_region(dev->parent))) - mutex_lock_nested(&dev->lockdep_mutex, LOCK_NAMESPACE); + return LOCK_NAMESPACE; else if (is_nvdimm_bus(dev)) - mutex_lock_nested(&dev->lockdep_mutex, LOCK_BUS); + return LOCK_BUS; else if (dev->class && dev->class == nd_class) - mutex_lock_nested(&dev->lockdep_mutex, LOCK_NDCTL); + return LOCK_NDCTL; else - dev_WARN(dev, "unknown lock level\n"); + return -1; +} + +static inline void debug_nvdimm_lock(struct device *dev) +{ + mutex_lock_nested(&dev->lockdep_mutex, nvdimm_lock_class(dev)); } static inline void debug_nvdimm_unlock(struct device *dev) From patchwork Tue Mar 1 02:49:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dan Williams X-Patchwork-Id: 12764001 Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3E7B617E4 for ; Tue, 1 Mar 2022 02:49:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1646102973; x=1677638973; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=dKqV9Xtx7Ao68rbouYZAIKnCg8z1qKxZcx94UG+yU6E=; b=WiW2SUPFiK5y39j3zGb40633OxqlZPMkrYOVqsaaV/ULhsIqr9u0WKqE 12h3wbVvxGOdDVChr5hygNubSdcOIQhPxct/yI0+SKiqdnaGkrSFas6rD VcNyT7Q7vwFfEYLTB4f6qwz+STW0vtKyPYgIqmOcRt7H8CFxZWgJV6RpE 80SA39W+zqIDpBymaIL2GISqzQ0uKBbAkhue8EDOzdZEv6ql9yRQO/3yW ha7X1Y6GWIY8JU/EiwnuaDWc5AO+bnnd85mDcJAwskuxSx/PuS7aAr3Pw OmfNfLXLtvej48bEBjgloN4INaAOjF3fLwWFYu9slBhP9yRTzmvWzHc50 w==; X-IronPort-AV: E=McAfee;i="6200,9189,10272"; a="316256544" X-IronPort-AV: E=Sophos;i="5.90,144,1643702400"; d="scan'208";a="316256544" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Feb 2022 18:49:32 -0800 X-IronPort-AV: E=Sophos;i="5.90,144,1643702400"; d="scan'208";a="708892236" Received: from dwillia2-desk3.jf.intel.com (HELO dwillia2-desk3.amr.corp.intel.com) ([10.54.39.25]) by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Feb 2022 18:49:32 -0800 Subject: [PATCH 08/11] ACPI: NFIT: Drop 
nfit_device_lock() From: Dan Williams To: gregkh@linuxfoundation.org, rafael.j.wysocki@intel.com Cc: Vishal Verma , Dave Jiang , Ira Weiny , "Rafael J. Wysocki" , alison.schofield@intel.com, linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org, nvdimm@lists.linux.dev Date: Mon, 28 Feb 2022 18:49:32 -0800 Message-ID: <164610297271.2682974.2282625073149152598.stgit@dwillia2-desk3.amr.corp.intel.com> In-Reply-To: <164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> References: <164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> User-Agent: StGit/0.18-3-g996c Precedence: bulk X-Mailing-List: nvdimm@lists.linux.dev List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 In preparation for the libnvdimm subsystem switching to device-core common lockdep validation. Delete nfit_device_lock() which will need to be replaced with an implementation that specifies a non-zero lock class. Cc: Vishal Verma Cc: Dave Jiang Cc: Ira Weiny Cc: "Rafael J. Wysocki" Signed-off-by: Dan Williams --- drivers/acpi/nfit/core.c | 30 +++++++++++++++--------------- drivers/acpi/nfit/nfit.h | 24 ------------------------ 2 files changed, 15 insertions(+), 39 deletions(-) diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c index e5d7f2bda13f..315fccdbfd84 100644 --- a/drivers/acpi/nfit/core.c +++ b/drivers/acpi/nfit/core.c @@ -1305,7 +1305,7 @@ static ssize_t hw_error_scrub_store(struct device *dev, if (rc) return rc; - nfit_device_lock(dev); + device_lock(dev); nd_desc = dev_get_drvdata(dev); if (nd_desc) { struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc); @@ -1322,7 +1322,7 @@ static ssize_t hw_error_scrub_store(struct device *dev, break; } } - nfit_device_unlock(dev); + device_unlock(dev); if (rc) return rc; return size; @@ -1342,10 +1342,10 @@ static ssize_t scrub_show(struct device *dev, ssize_t rc = -ENXIO; bool busy; - nfit_device_lock(dev); + device_lock(dev); nd_desc = dev_get_drvdata(dev); if (!nd_desc) { - nfit_device_unlock(dev); + device_unlock(dev); return rc; } acpi_desc = to_acpi_desc(nd_desc); @@ -1362,7 +1362,7 @@ static ssize_t scrub_show(struct device *dev, } mutex_unlock(&acpi_desc->init_mutex); - nfit_device_unlock(dev); + device_unlock(dev); return rc; } @@ -1379,14 +1379,14 @@ static ssize_t scrub_store(struct device *dev, if (val != 1) return -EINVAL; - nfit_device_lock(dev); + device_lock(dev); nd_desc = dev_get_drvdata(dev); if (nd_desc) { struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc); rc = acpi_nfit_ars_rescan(acpi_desc, ARS_REQ_LONG); } - nfit_device_unlock(dev); + device_unlock(dev); if (rc) return rc; return size; @@ -1774,9 +1774,9 @@ static void acpi_nvdimm_notify(acpi_handle handle, u32 event, void *data) struct acpi_device *adev = data; struct device *dev = &adev->dev; - nfit_device_lock(dev->parent); + device_lock(dev->parent); __acpi_nvdimm_notify(dev, event); - nfit_device_unlock(dev->parent); + device_unlock(dev->parent); } static bool acpi_nvdimm_has_method(struct acpi_device *adev, char *method) @@ -3532,8 +3532,8 @@ static int acpi_nfit_flush_probe(struct nvdimm_bus_descriptor *nd_desc) struct device *dev = acpi_desc->dev; /* Bounce the device lock to flush acpi_nfit_add / acpi_nfit_notify */ - nfit_device_lock(dev); - nfit_device_unlock(dev); + device_lock(dev); + device_unlock(dev); /* Bounce the init_mutex to complete initial registration */ mutex_lock(&acpi_desc->init_mutex); @@ -3686,8 +3686,8 @@ void acpi_nfit_shutdown(void *data) * acpi_nfit_ars_rescan() submissions have had 
a chance to * either submit or see ->cancel set. */ - nfit_device_lock(bus_dev); - nfit_device_unlock(bus_dev); + device_lock(bus_dev); + device_unlock(bus_dev); flush_workqueue(nfit_wq); } @@ -3830,9 +3830,9 @@ EXPORT_SYMBOL_GPL(__acpi_nfit_notify); static void acpi_nfit_notify(struct acpi_device *adev, u32 event) { - nfit_device_lock(&adev->dev); + device_lock(&adev->dev); __acpi_nfit_notify(&adev->dev, adev->handle, event); - nfit_device_unlock(&adev->dev); + device_unlock(&adev->dev); } static const struct acpi_device_id acpi_nfit_ids[] = { diff --git a/drivers/acpi/nfit/nfit.h b/drivers/acpi/nfit/nfit.h index c674f3df9be7..2c16a2dafb15 100644 --- a/drivers/acpi/nfit/nfit.h +++ b/drivers/acpi/nfit/nfit.h @@ -343,30 +343,6 @@ static inline struct acpi_nfit_desc *to_acpi_desc( return container_of(nd_desc, struct acpi_nfit_desc, nd_desc); } -#ifdef CONFIG_PROVE_LOCKING -static inline void nfit_device_lock(struct device *dev) -{ - device_lock(dev); - mutex_lock(&dev->lockdep_mutex); -} - -static inline void nfit_device_unlock(struct device *dev) -{ - mutex_unlock(&dev->lockdep_mutex); - device_unlock(dev); -} -#else -static inline void nfit_device_lock(struct device *dev) -{ - device_lock(dev); -} - -static inline void nfit_device_unlock(struct device *dev) -{ - device_unlock(dev); -} -#endif - const guid_t *to_nfit_uuid(enum nfit_uuids id); int acpi_nfit_init(struct acpi_nfit_desc *acpi_desc, void *nfit, acpi_size sz); void acpi_nfit_shutdown(void *data); From patchwork Tue Mar 1 02:49:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dan Williams X-Patchwork-Id: 12764004 Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 987AC17E4 for ; Tue, 1 Mar 2022 02:49:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1646102992; x=1677638992; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=XpyV9hsJHyJYFpBu6VWN90YcZLTsehIJAJjrttoPgSc=; b=iqP7MyQUw7j3qTBd1RkLV7dsfeJjfUkLENzCtSy1tNZMsTdo7JkfUyyZ wARVtA/ml1GF9FMQMkCywyHZZ+ciVEp8PO/T1YseHftlSODhxGdHdtaW/ JVsd3vmmhZHvvpB23VSmB3DUE8/wkUnx9jlr4o3u8UVS7Yk0w4n5jX5In FmOk3vXY1gaDxwlLfc37jBODf4UYUIbtg8czPXipbIHih3gTVEPQEZ2f+ YoiKArJ7sT+YqGj4mvunlHlis5LCuwRI9P4MWZXi5hgVkybqbBeEZ3poA t2XOG7eplB6mkudUV6bl6ryS8hDs1cMjOuG5CymxFIf3AZcJ3fJmn/Qv0 g==; X-IronPort-AV: E=McAfee;i="6200,9189,10272"; a="252968634" X-IronPort-AV: E=Sophos;i="5.90,144,1643702400"; d="scan'208";a="252968634" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Feb 2022 18:49:37 -0800 X-IronPort-AV: E=Sophos;i="5.90,144,1643702400"; d="scan'208";a="507645901" Received: from dwillia2-desk3.jf.intel.com (HELO dwillia2-desk3.amr.corp.intel.com) ([10.54.39.25]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Feb 2022 18:49:37 -0800 Subject: [PATCH 09/11] libnvdimm: Drop nd_device_lock() From: Dan Williams To: gregkh@linuxfoundation.org, rafael.j.wysocki@intel.com Cc: Vishal Verma , Dave Jiang , Ira Weiny , alison.schofield@intel.com, linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org, nvdimm@lists.linux.dev Date: Mon, 28 Feb 2022 18:49:37 -0800 Message-ID: 
<164610297784.2682974.3916870850468901338.stgit@dwillia2-desk3.amr.corp.intel.com> In-Reply-To: <164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> References: <164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> User-Agent: StGit/0.18-3-g996c Precedence: bulk X-Mailing-List: nvdimm@lists.linux.dev List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 In preparation for switching to the core device_lock lockdep validation scheme, convert nd_device_lock() calls back to device_lock(). Cc: Vishal Verma Cc: Dave Jiang Cc: Ira Weiny Signed-off-by: Dan Williams --- drivers/nvdimm/btt_devs.c | 16 ++++++++-------- drivers/nvdimm/bus.c | 23 +++++++++------------- drivers/nvdimm/core.c | 10 +++++----- drivers/nvdimm/dimm_devs.c | 8 ++++---- drivers/nvdimm/namespace_devs.c | 36 ++++++++++++++++++----------------- drivers/nvdimm/nd-core.h | 40 --------------------------------------- drivers/nvdimm/pfn_devs.c | 24 ++++++++++++----------- drivers/nvdimm/pmem.c | 2 +- drivers/nvdimm/region.c | 2 +- drivers/nvdimm/region_devs.c | 16 ++++++++-------- lib/Kconfig.debug | 3 ++- 11 files changed, 68 insertions(+), 112 deletions(-) diff --git a/drivers/nvdimm/btt_devs.c b/drivers/nvdimm/btt_devs.c index 8b52e5144f08..d4164b2cf22e 100644 --- a/drivers/nvdimm/btt_devs.c +++ b/drivers/nvdimm/btt_devs.c @@ -51,14 +51,14 @@ static ssize_t sector_size_store(struct device *dev, struct nd_btt *nd_btt = to_nd_btt(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); rc = nd_size_select_store(dev, buf, &nd_btt->lbasize, btt_lbasize_supported); dev_dbg(dev, "result: %zd wrote: %s%s", rc, buf, buf[len - 1] == '\n' ? "" : "\n"); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc ? rc : len; } @@ -80,11 +80,11 @@ static ssize_t uuid_store(struct device *dev, struct nd_btt *nd_btt = to_nd_btt(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); rc = nd_uuid_store(dev, &nd_btt->uuid, buf, len); dev_dbg(dev, "result: %zd wrote: %s%s", rc, buf, buf[len - 1] == '\n' ? "" : "\n"); - nd_device_unlock(dev); + device_unlock(dev); return rc ? rc : len; } @@ -109,13 +109,13 @@ static ssize_t namespace_store(struct device *dev, struct nd_btt *nd_btt = to_nd_btt(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); rc = nd_namespace_store(dev, &nd_btt->ndns, buf, len); dev_dbg(dev, "result: %zd wrote: %s%s", rc, buf, buf[len - 1] == '\n' ? 
"" : "\n"); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc; } @@ -127,14 +127,14 @@ static ssize_t size_show(struct device *dev, struct nd_btt *nd_btt = to_nd_btt(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); if (dev->driver) rc = sprintf(buf, "%llu\n", nd_btt->size); else { /* no size to convey if the btt instance is disabled */ rc = -ENXIO; } - nd_device_unlock(dev); + device_unlock(dev); return rc; } diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c index 9dc7f3edd42b..f481831929eb 100644 --- a/drivers/nvdimm/bus.c +++ b/drivers/nvdimm/bus.c @@ -91,9 +91,7 @@ static int nvdimm_bus_probe(struct device *dev) dev->driver->name, dev_name(dev)); nvdimm_bus_probe_start(nvdimm_bus); - debug_nvdimm_lock(dev); rc = nd_drv->probe(dev); - debug_nvdimm_unlock(dev); if ((rc == 0 || rc == -EOPNOTSUPP) && dev->parent && is_nd_region(dev->parent)) @@ -114,11 +112,8 @@ static void nvdimm_bus_remove(struct device *dev) struct module *provider = to_bus_provider(dev); struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(dev); - if (nd_drv->remove) { - debug_nvdimm_lock(dev); + if (nd_drv->remove) nd_drv->remove(dev); - debug_nvdimm_unlock(dev); - } dev_dbg(&nvdimm_bus->dev, "%s.remove(%s)\n", dev->driver->name, dev_name(dev)); @@ -142,7 +137,7 @@ static void nvdimm_bus_shutdown(struct device *dev) void nd_device_notify(struct device *dev, enum nvdimm_event event) { - nd_device_lock(dev); + device_lock(dev); if (dev->driver) { struct nd_device_driver *nd_drv; @@ -150,7 +145,7 @@ void nd_device_notify(struct device *dev, enum nvdimm_event event) if (nd_drv->notify) nd_drv->notify(dev, event); } - nd_device_unlock(dev); + device_unlock(dev); } EXPORT_SYMBOL(nd_device_notify); @@ -575,9 +570,9 @@ void nd_device_unregister(struct device *dev, enum nd_async_mode mode) * or otherwise let the async path handle it if the * unregistration was already queued. 
*/ - nd_device_lock(dev); + device_lock(dev); killed = kill_device(dev); - nd_device_unlock(dev); + device_unlock(dev); if (!killed) return; @@ -933,10 +928,10 @@ void wait_nvdimm_bus_probe_idle(struct device *dev) if (nvdimm_bus->probe_active == 0) break; nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); wait_event(nvdimm_bus->wait, nvdimm_bus->probe_active == 0); - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); } while (true); } @@ -1170,7 +1165,7 @@ static int __nd_ioctl(struct nvdimm_bus *nvdimm_bus, struct nvdimm *nvdimm, goto out; } - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); rc = nd_cmd_clear_to_send(nvdimm_bus, nvdimm, func, buf); if (rc) @@ -1192,7 +1187,7 @@ static int __nd_ioctl(struct nvdimm_bus *nvdimm_bus, struct nvdimm *nvdimm, out_unlock: nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); out: kfree(in_env); kfree(out_env); diff --git a/drivers/nvdimm/core.c b/drivers/nvdimm/core.c index 69a03358817f..144926b7451c 100644 --- a/drivers/nvdimm/core.c +++ b/drivers/nvdimm/core.c @@ -215,7 +215,7 @@ EXPORT_SYMBOL_GPL(to_nvdimm_bus_dev); * * Enforce that uuids can only be changed while the device is disabled * (driver detached) - * LOCKING: expects nd_device_lock() is held on entry + * LOCKING: expects device_lock() is held on entry */ int nd_uuid_store(struct device *dev, uuid_t **uuid_out, const char *buf, size_t len) @@ -316,15 +316,15 @@ static DEVICE_ATTR_RO(provider); static int flush_namespaces(struct device *dev, void *data) { - nd_device_lock(dev); - nd_device_unlock(dev); + device_lock(dev); + device_unlock(dev); return 0; } static int flush_regions_dimms(struct device *dev, void *data) { - nd_device_lock(dev); - nd_device_unlock(dev); + device_lock(dev); + device_unlock(dev); device_for_each_child(dev, NULL, flush_namespaces); return 0; } diff --git a/drivers/nvdimm/dimm_devs.c b/drivers/nvdimm/dimm_devs.c index dc7449a40003..00001024d0b2 100644 --- a/drivers/nvdimm/dimm_devs.c +++ b/drivers/nvdimm/dimm_devs.c @@ -362,9 +362,9 @@ static ssize_t available_slots_show(struct device *dev, { ssize_t rc; - nd_device_lock(dev); + device_lock(dev); rc = __available_slots_show(dev_get_drvdata(dev), buf); - nd_device_unlock(dev); + device_unlock(dev); return rc; } @@ -407,12 +407,12 @@ static ssize_t security_store(struct device *dev, * done while probing is idle and the DIMM is not in active use * in any region. */ - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); wait_nvdimm_bus_probe_idle(dev); rc = nvdimm_security_store(dev, buf, len); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc; } diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c index b57a2d36c517..16cedd854fa8 100644 --- a/drivers/nvdimm/namespace_devs.c +++ b/drivers/nvdimm/namespace_devs.c @@ -383,7 +383,7 @@ static ssize_t alt_name_store(struct device *dev, struct nd_region *nd_region = to_nd_region(dev->parent); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); wait_nvdimm_bus_probe_idle(dev); rc = __alt_name_store(dev, buf, len); @@ -391,7 +391,7 @@ static ssize_t alt_name_store(struct device *dev, rc = nd_namespace_label_update(nd_region, dev); dev_dbg(dev, "%s(%zd)\n", rc < 0 ? "fail " : "", rc); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc < 0 ? 
rc : len; } @@ -1056,7 +1056,7 @@ static ssize_t size_store(struct device *dev, if (rc) return rc; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); wait_nvdimm_bus_probe_idle(dev); rc = __size_store(dev, val); @@ -1082,7 +1082,7 @@ static ssize_t size_store(struct device *dev, dev_dbg(dev, "%llx %s (%d)\n", val, rc < 0 ? "fail" : "success", rc); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc < 0 ? rc : len; } @@ -1268,7 +1268,7 @@ static ssize_t uuid_store(struct device *dev, } else return -ENXIO; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); wait_nvdimm_bus_probe_idle(dev); if (to_ndns(dev)->claim) @@ -1284,7 +1284,7 @@ static ssize_t uuid_store(struct device *dev, dev_dbg(dev, "result: %zd wrote: %s%s", rc, buf, buf[len - 1] == '\n' ? "" : "\n"); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc < 0 ? rc : len; } @@ -1358,7 +1358,7 @@ static ssize_t sector_size_store(struct device *dev, } else return -ENXIO; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); if (to_ndns(dev)->claim) rc = -EBUSY; @@ -1369,7 +1369,7 @@ static ssize_t sector_size_store(struct device *dev, dev_dbg(dev, "result: %zd %s: %s%s", rc, rc < 0 ? "tried" : "wrote", buf, buf[len - 1] == '\n' ? "" : "\n"); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc ? rc : len; } @@ -1484,9 +1484,9 @@ static ssize_t holder_show(struct device *dev, struct nd_namespace_common *ndns = to_ndns(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); rc = sprintf(buf, "%s\n", ndns->claim ? dev_name(ndns->claim) : ""); - nd_device_unlock(dev); + device_unlock(dev); return rc; } @@ -1523,7 +1523,7 @@ static ssize_t holder_class_store(struct device *dev, struct nd_region *nd_region = to_nd_region(dev->parent); int rc; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); wait_nvdimm_bus_probe_idle(dev); rc = __holder_class_store(dev, buf); @@ -1531,7 +1531,7 @@ static ssize_t holder_class_store(struct device *dev, rc = nd_namespace_label_update(nd_region, dev); dev_dbg(dev, "%s(%d)\n", rc < 0 ? "fail " : "", rc); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc < 0 ? rc : len; } @@ -1542,7 +1542,7 @@ static ssize_t holder_class_show(struct device *dev, struct nd_namespace_common *ndns = to_ndns(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); if (ndns->claim_class == NVDIMM_CCLASS_NONE) rc = sprintf(buf, "\n"); else if ((ndns->claim_class == NVDIMM_CCLASS_BTT) || @@ -1554,7 +1554,7 @@ static ssize_t holder_class_show(struct device *dev, rc = sprintf(buf, "dax\n"); else rc = sprintf(buf, "\n"); - nd_device_unlock(dev); + device_unlock(dev); return rc; } @@ -1568,7 +1568,7 @@ static ssize_t mode_show(struct device *dev, char *mode; ssize_t rc; - nd_device_lock(dev); + device_lock(dev); claim = ndns->claim; if (claim && is_nd_btt(claim)) mode = "safe"; @@ -1581,7 +1581,7 @@ static ssize_t mode_show(struct device *dev, else mode = "raw"; rc = sprintf(buf, "%s\n", mode); - nd_device_unlock(dev); + device_unlock(dev); return rc; } @@ -1715,8 +1715,8 @@ struct nd_namespace_common *nvdimm_namespace_common_probe(struct device *dev) * Flush any in-progess probes / removals in the driver * for the raw personality of this namespace. 
*/ - nd_device_lock(&ndns->dev); - nd_device_unlock(&ndns->dev); + device_lock(&ndns->dev); + device_unlock(&ndns->dev); if (ndns->dev.driver) { dev_dbg(&ndns->dev, "is active, can't bind %s\n", dev_name(dev)); diff --git a/drivers/nvdimm/nd-core.h b/drivers/nvdimm/nd-core.h index 772568ffb9d6..90c4029c87d2 100644 --- a/drivers/nvdimm/nd-core.h +++ b/drivers/nvdimm/nd-core.h @@ -214,45 +214,5 @@ static inline int nvdimm_lock_class(struct device *dev) else return -1; } - -static inline void debug_nvdimm_lock(struct device *dev) -{ - mutex_lock_nested(&dev->lockdep_mutex, nvdimm_lock_class(dev)); -} - -static inline void debug_nvdimm_unlock(struct device *dev) -{ - mutex_unlock(&dev->lockdep_mutex); -} - -static inline void nd_device_lock(struct device *dev) -{ - device_lock(dev); - debug_nvdimm_lock(dev); -} - -static inline void nd_device_unlock(struct device *dev) -{ - debug_nvdimm_unlock(dev); - device_unlock(dev); -} -#else -static inline void nd_device_lock(struct device *dev) -{ - device_lock(dev); -} - -static inline void nd_device_unlock(struct device *dev) -{ - device_unlock(dev); -} - -static inline void debug_nvdimm_lock(struct device *dev) -{ -} - -static inline void debug_nvdimm_unlock(struct device *dev) -{ -} #endif #endif /* __ND_CORE_H__ */ diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c index 58eda16f5c53..ee357401fdd1 100644 --- a/drivers/nvdimm/pfn_devs.c +++ b/drivers/nvdimm/pfn_devs.c @@ -56,7 +56,7 @@ static ssize_t mode_store(struct device *dev, struct nd_pfn *nd_pfn = to_nd_pfn_safe(dev); ssize_t rc = 0; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); if (dev->driver) rc = -EBUSY; @@ -78,7 +78,7 @@ static ssize_t mode_store(struct device *dev, dev_dbg(dev, "result: %zd wrote: %s%s", rc, buf, buf[len - 1] == '\n' ? "" : "\n"); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc ? rc : len; } @@ -124,14 +124,14 @@ static ssize_t align_store(struct device *dev, unsigned long aligns[MAX_NVDIMM_ALIGN] = { [0] = 0, }; ssize_t rc; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); rc = nd_size_select_store(dev, buf, &nd_pfn->align, nd_pfn_supported_alignments(aligns)); dev_dbg(dev, "result: %zd wrote: %s%s", rc, buf, buf[len - 1] == '\n' ? "" : "\n"); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc ? rc : len; } @@ -153,11 +153,11 @@ static ssize_t uuid_store(struct device *dev, struct nd_pfn *nd_pfn = to_nd_pfn_safe(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); rc = nd_uuid_store(dev, &nd_pfn->uuid, buf, len); dev_dbg(dev, "result: %zd wrote: %s%s", rc, buf, buf[len - 1] == '\n' ? "" : "\n"); - nd_device_unlock(dev); + device_unlock(dev); return rc ? rc : len; } @@ -182,13 +182,13 @@ static ssize_t namespace_store(struct device *dev, struct nd_pfn *nd_pfn = to_nd_pfn_safe(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); rc = nd_namespace_store(dev, &nd_pfn->ndns, buf, len); dev_dbg(dev, "result: %zd wrote: %s%s", rc, buf, buf[len - 1] == '\n' ? 
"" : "\n"); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return rc; } @@ -200,7 +200,7 @@ static ssize_t resource_show(struct device *dev, struct nd_pfn *nd_pfn = to_nd_pfn_safe(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); if (dev->driver) { struct nd_pfn_sb *pfn_sb = nd_pfn->pfn_sb; u64 offset = __le64_to_cpu(pfn_sb->dataoff); @@ -214,7 +214,7 @@ static ssize_t resource_show(struct device *dev, /* no address to convey if the pfn instance is disabled */ rc = -ENXIO; } - nd_device_unlock(dev); + device_unlock(dev); return rc; } @@ -226,7 +226,7 @@ static ssize_t size_show(struct device *dev, struct nd_pfn *nd_pfn = to_nd_pfn_safe(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); if (dev->driver) { struct nd_pfn_sb *pfn_sb = nd_pfn->pfn_sb; u64 offset = __le64_to_cpu(pfn_sb->dataoff); @@ -242,7 +242,7 @@ static ssize_t size_show(struct device *dev, /* no size to convey if the pfn instance is disabled */ rc = -ENXIO; } - nd_device_unlock(dev); + device_unlock(dev); return rc; } diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c index 58d95242a836..3992521c151f 100644 --- a/drivers/nvdimm/pmem.c +++ b/drivers/nvdimm/pmem.c @@ -573,7 +573,7 @@ static void nd_pmem_remove(struct device *dev) nvdimm_namespace_detach_btt(to_nd_btt(dev)); else { /* - * Note, this assumes nd_device_lock() context to not + * Note, this assumes device_lock() context to not * race nd_pmem_notify() */ sysfs_put(pmem->bb_state); diff --git a/drivers/nvdimm/region.c b/drivers/nvdimm/region.c index e0c34120df37..466b3e9247b6 100644 --- a/drivers/nvdimm/region.c +++ b/drivers/nvdimm/region.c @@ -103,7 +103,7 @@ static void nd_region_remove(struct device *dev) nvdimm_bus_unlock(dev); /* - * Note, this assumes nd_device_lock() context to not race + * Note, this assumes device_lock() context to not race * nd_region_notify() */ sysfs_put(nd_region->bb_state); diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c index 9ccf3d608799..0f6c85b45df3 100644 --- a/drivers/nvdimm/region_devs.c +++ b/drivers/nvdimm/region_devs.c @@ -305,7 +305,7 @@ static ssize_t set_cookie_show(struct device *dev, * the v1.1 namespace label cookie definition. To read all this * data we need to wait for probing to settle. */ - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); wait_nvdimm_bus_probe_idle(dev); if (nd_region->ndr_mappings) { @@ -322,7 +322,7 @@ static ssize_t set_cookie_show(struct device *dev, } } nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); if (rc) return rc; @@ -398,12 +398,12 @@ static ssize_t available_size_show(struct device *dev, * memory nvdimm_bus_lock() is dropped, but that's userspace's * problem to not race itself. 
*/ - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); wait_nvdimm_bus_probe_idle(dev); available = nd_region_available_dpa(nd_region); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return sprintf(buf, "%llu\n", available); } @@ -415,12 +415,12 @@ static ssize_t max_available_extent_show(struct device *dev, struct nd_region *nd_region = to_nd_region(dev); unsigned long long available = 0; - nd_device_lock(dev); + device_lock(dev); nvdimm_bus_lock(dev); wait_nvdimm_bus_probe_idle(dev); available = nd_region_allocatable_dpa(nd_region); nvdimm_bus_unlock(dev); - nd_device_unlock(dev); + device_unlock(dev); return sprintf(buf, "%llu\n", available); } @@ -594,12 +594,12 @@ static ssize_t region_badblocks_show(struct device *dev, struct nd_region *nd_region = to_nd_region(dev); ssize_t rc; - nd_device_lock(dev); + device_lock(dev); if (dev->driver) rc = badblocks_show(&nd_region->bb, buf, 0); else rc = -ENXIO; - nd_device_unlock(dev); + device_unlock(dev); return rc; } diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 5942b22d7f1b..a4bd109f6178 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -1529,7 +1529,8 @@ config PROVE_NVDIMM_LOCKING bool "NVDIMM" depends on LIBNVDIMM help - Enable lockdep to validate nd_device_lock() usage. + Enable lockdep to validate libnvdimm subsystem usage of the + device lock. config PROVE_CXL_LOCKING bool "CXL" From patchwork Tue Mar 1 02:49:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dan Williams X-Patchwork-Id: 12764002 Received: from mga12.intel.com (mga12.intel.com [192.55.52.136]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B494F17E4 for ; Tue, 1 Mar 2022 02:49:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1646102983; x=1677638983; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=8eXFRq2MSU2EHz63ODKG6BPK3XE2iJj4bWnW+A+qYFg=; b=IGRGUH3xsfhv7e77bxxJRFdC7172mOtDtAwCfhdJHOhcm4+VLRCF/Xyw MjxSmrALaIsUcULobozspnGWXtdEvPrKC5HzJvFsm9R9Oc7u4TwQxL1DE 61bw5p9+FXFgLxL/KA2BSmdfmXVItaMNa9HK9juyeFovl2nUIGo61JAFp a0uitW0eRu/A3js6ZkXwlmxmYTOkvC+5XOblXt9da0ck6NGhteW6JbGkG rBjjqq1vv3fwrE0UmOrVCj0kkTJ1o2DVMyOgSckPpVOCDxplp//BmYJ6k UEXMdQ85q41IJuoT1Mh7aTL2P24pBz1NlPgDXP89+QhGRWOSHWKi4t0MJ w==; X-IronPort-AV: E=McAfee;i="6200,9189,10272"; a="233011450" X-IronPort-AV: E=Sophos;i="5.90,144,1643702400"; d="scan'208";a="233011450" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Feb 2022 18:49:43 -0800 X-IronPort-AV: E=Sophos;i="5.90,144,1643702400"; d="scan'208";a="507645915" Received: from dwillia2-desk3.jf.intel.com (HELO dwillia2-desk3.amr.corp.intel.com) ([10.54.39.25]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Feb 2022 18:49:43 -0800 Subject: [PATCH 10/11] libnvdimm: Enable lockdep validation From: Dan Williams To: gregkh@linuxfoundation.org, rafael.j.wysocki@intel.com Cc: Vishal Verma , Dave Jiang , Ira Weiny , alison.schofield@intel.com, linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org, nvdimm@lists.linux.dev Date: Mon, 28 Feb 2022 18:49:43 -0800 Message-ID: <164610298296.2682974.14404088631456886039.stgit@dwillia2-desk3.amr.corp.intel.com> In-Reply-To: 
<164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> References: <164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> User-Agent: StGit/0.18-3-g996c Precedence: bulk X-Mailing-List: nvdimm@lists.linux.dev List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Register libnvdimm subsystem devices with a non-zero lock_class to enable the device-core to track lock dependencies. Cc: Vishal Verma Cc: Dave Jiang Cc: Ira Weiny Signed-off-by: Dan Williams --- drivers/nvdimm/bus.c | 3 +++ drivers/nvdimm/nd-core.h | 9 +++++++++ 2 files changed, 12 insertions(+) diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c index f481831929eb..c2f70f9a8b70 100644 --- a/drivers/nvdimm/bus.c +++ b/drivers/nvdimm/bus.c @@ -363,6 +363,7 @@ struct nvdimm_bus *nvdimm_bus_register(struct device *parent, if (rc) goto err; + nvdimm_set_lock_class(&nvdimm_bus->dev); rc = device_add(&nvdimm_bus->dev); if (rc) { dev_dbg(&nvdimm_bus->dev, "registration failed: %d\n", rc); @@ -488,6 +489,7 @@ static void nd_async_device_register(void *d, async_cookie_t cookie) { struct device *dev = d; + nvdimm_set_lock_class(dev); if (device_add(dev) != 0) { dev_err(dev, "%s: failed\n", __func__); put_device(dev); @@ -741,6 +743,7 @@ int nvdimm_bus_create_ndctl(struct nvdimm_bus *nvdimm_bus) if (rc) goto err; + nvdimm_set_lock_class(dev); rc = device_add(dev); if (rc) { dev_dbg(&nvdimm_bus->dev, "failed to register ndctl%d: %d\n", diff --git a/drivers/nvdimm/nd-core.h b/drivers/nvdimm/nd-core.h index 90c4029c87d2..40065606a6e6 100644 --- a/drivers/nvdimm/nd-core.h +++ b/drivers/nvdimm/nd-core.h @@ -214,5 +214,14 @@ static inline int nvdimm_lock_class(struct device *dev) else return -1; } + +static inline void nvdimm_set_lock_class(struct device *dev) +{ + device_set_lock_class(dev, nvdimm_lock_class(dev)); +} +#else +static inline void nvdimm_set_lock_class(struct device *dev) +{ +} #endif #endif /* __ND_CORE_H__ */ From patchwork Tue Mar 1 02:49:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dan Williams X-Patchwork-Id: 12764003 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4FDFC17E4 for ; Tue, 1 Mar 2022 02:49:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1646102989; x=1677638989; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=0ejWEtzYFXkVmMw+7/8/byv2i/2ikDMzQlZB6daiDkE=; b=dGg7PM0O7tBQWA4MC6tCu0QkTTqEbMeaw4y3PDm03xYhzx7nIIYBSK+1 95buvFXuRZrwnG10iXwwwlWC2SOVFwkyX9g15qku61edGP8J9XT7KMkFV BcGfx0JkppBEUwm/PlGQuZkQs7Ets8BtJ1gXbgY1Wuf2FL/Pyiv66dP5x +Cu9wqfYMXfy87S4xUAhza7t4+xbxTMV+YCdfFhuKscsOdEwPP+cKOAIm KEWdhGuOQSQxjfYTpiD7GELu5F/mMCgiqJ7DSWrQ86XCh2RbPEKMH9D4V ecZ/ELyE8nMws+xpGX4jyKNIuqAOA67j6IUFdogXLTpydpcwis0BnXEFH Q==; X-IronPort-AV: E=McAfee;i="6200,9189,10272"; a="253232914" X-IronPort-AV: E=Sophos;i="5.90,144,1643702400"; d="scan'208";a="253232914" Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Feb 2022 18:49:48 -0800 X-IronPort-AV: E=Sophos;i="5.90,144,1643702400"; d="scan'208";a="544943085" Received: from dwillia2-desk3.jf.intel.com (HELO dwillia2-desk3.amr.corp.intel.com) ([10.54.39.25]) by 
fmsmga007-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Feb 2022 18:49:48 -0800 Subject: [PATCH 11/11] device-core: Introduce a per-subsystem lockdep_mutex From: Dan Williams To: gregkh@linuxfoundation.org, rafael.j.wysocki@intel.com Cc: "Rafael J. Wysocki" , Alison Schofield , Vishal Verma , Ira Weiny , Ben Widawsky , Dave Jiang , linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org, nvdimm@lists.linux.dev Date: Mon, 28 Feb 2022 18:49:48 -0800 Message-ID: <164610298807.2682974.4215886933533996734.stgit@dwillia2-desk3.amr.corp.intel.com> In-Reply-To: <164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> References: <164610292916.2682974.12924748003366352335.stgit@dwillia2-desk3.amr.corp.intel.com> User-Agent: StGit/0.18-3-g996c Precedence: bulk X-Mailing-List: nvdimm@lists.linux.dev List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 In order for multiple subsystems to convey their device_lock ordering constraints, each needs their own exclusive mutex. Rather than select a subsystem to validate at compile-time, allow for simultaneous validation of multiple subsystems. Note that the reason the mutex_init() for the various subsystem device-locks is unrolled in device_lockdep_init(), vs a DEVICE_LOCK_MAX loop, is to give each lock a unique lock_class_key and name in reports. That approach is not elegant, and not scalable, but it seems the best that can be done given lockdep's expectations. Cc: Greg Kroah-Hartman Cc: "Rafael J. Wysocki" Cc: Alison Schofield Cc: Vishal Verma Cc: Ira Weiny Cc: Ben Widawsky Cc: Dave Jiang Signed-off-by: Dan Williams --- drivers/base/core.c | 6 +-- drivers/cxl/acpi.c | 2 + drivers/cxl/cxl.h | 4 +- drivers/cxl/port.c | 2 + drivers/nvdimm/nd-core.h | 5 ++- include/linux/device.h | 83 ++++++++++++++++++++++++++++++++++++++-------- lib/Kconfig.debug | 24 ------------- 7 files changed, 76 insertions(+), 50 deletions(-) diff --git a/drivers/base/core.c b/drivers/base/core.c index 96430fa5152e..fae3073fd9c6 100644 --- a/drivers/base/core.c +++ b/drivers/base/core.c @@ -2864,11 +2864,7 @@ void device_initialize(struct device *dev) kobject_init(&dev->kobj, &device_ktype); INIT_LIST_HEAD(&dev->dma_pools); mutex_init(&dev->mutex); -#ifdef CONFIG_PROVE_LOCKING - mutex_init(&dev->lockdep_mutex); - dev->lock_class = -1; -#endif - lockdep_set_novalidate_class(&dev->mutex); + device_lockdep_init(dev); spin_lock_init(&dev->devres_lock); INIT_LIST_HEAD(&dev->devres_head); device_pm_init(dev); diff --git a/drivers/cxl/acpi.c b/drivers/cxl/acpi.c index 7fa7bf6088cd..218c4367c39f 100644 --- a/drivers/cxl/acpi.c +++ b/drivers/cxl/acpi.c @@ -313,7 +313,7 @@ static int cxl_acpi_probe(struct platform_device *pdev) struct acpi_device *adev = ACPI_COMPANION(host); struct cxl_cfmws_context ctx; - device_set_lock_class(&pdev->dev, CXL_ROOT_LOCK); + device_set_lock_class(&pdev->dev, DEVICE_LOCK_CXL, CXL_ROOT_LOCK); root_port = devm_cxl_add_port(host, host, CXL_RESOURCE_NONE, NULL); if (IS_ERR(root_port)) return PTR_ERR(root_port); diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 5179b6bb1b36..70a12bfd71b5 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -510,7 +510,7 @@ enum cxl_lock_class { */ }; -#ifdef CONFIG_PROVE_CXL_LOCKING +#ifdef CONFIG_PROVE_LOCKING static inline int clamp_lock_class(struct device *dev, int lock_class) { if (lock_class >= MAX_LOCKDEP_SUBCLASSES) { @@ -547,7 +547,7 @@ static inline int cxl_lock_class(struct device *dev) static inline void cxl_set_lock_class(struct device *dev) { - 
device_set_lock_class(dev, cxl_lock_class(dev)); + device_set_lock_class(dev, DEVICE_LOCK_CXL, cxl_lock_class(dev)); } #else static inline void cxl_set_lock_class(struct device *dev) diff --git a/drivers/cxl/port.c b/drivers/cxl/port.c index fdb62ed06433..f3c11a780bed 100644 --- a/drivers/cxl/port.c +++ b/drivers/cxl/port.c @@ -17,7 +17,7 @@ * firmware) are managed in this drivers context. Each driver instance * is responsible for tearing down the driver context of immediate * descendant ports. The locking for this is validated by - * CONFIG_PROVE_CXL_LOCKING. + * CONFIG_PROVE_LOCKING. * * The primary service this driver provides is presenting APIs to other * drivers to utilize the decoders, and indicating to userspace (via bind diff --git a/drivers/nvdimm/nd-core.h b/drivers/nvdimm/nd-core.h index 40065606a6e6..3754603e2ace 100644 --- a/drivers/nvdimm/nd-core.h +++ b/drivers/nvdimm/nd-core.h @@ -185,7 +185,7 @@ static inline void devm_nsio_disable(struct device *dev, } #endif -#ifdef CONFIG_PROVE_NVDIMM_LOCKING +#ifdef CONFIG_PROVE_LOCKING extern struct class *nd_class; enum { @@ -217,7 +217,8 @@ static inline int nvdimm_lock_class(struct device *dev) static inline void nvdimm_set_lock_class(struct device *dev) { - device_set_lock_class(dev, nvdimm_lock_class(dev)); + device_set_lock_class(dev, DEVICE_LOCK_NVDIMM, + nvdimm_lock_class(dev)); } #else static inline void nvdimm_set_lock_class(struct device *dev) diff --git a/include/linux/device.h b/include/linux/device.h index e313ff21d670..5cc8e4cf764f 100644 --- a/include/linux/device.h +++ b/include/linux/device.h @@ -386,6 +386,16 @@ struct dev_msi_info { #endif }; +enum device_lock_subsys { +#if IS_ENABLED(CONFIG_LIBNVDIMM) + DEVICE_LOCK_NVDIMM, +#endif +#if IS_ENABLED(CONFIG_CXL_BUS) + DEVICE_LOCK_CXL, +#endif + DEVICE_LOCK_MAX, +}; + /** * struct device - The basic device structure * @parent: The device's "parent" device, the device to which it is attached. @@ -400,7 +410,7 @@ struct dev_msi_info { * This identifies the device type and carries type-specific * information. * @mutex: Mutex to synchronize calls to its driver. - * @lockdep_mutex: An optional debug lock that a subsystem can use as a + * @lockdep_mutex: A set of optional debug locks that subsystem can use as a * peer lock to gain localized lockdep coverage of the device_lock. * @lock_class: per-subsystem annotated device lock class * @bus: Type of bus device is on. @@ -501,8 +511,9 @@ struct device { void *driver_data; /* Driver data, set and get with dev_set_drvdata/dev_get_drvdata */ #ifdef CONFIG_PROVE_LOCKING - struct mutex lockdep_mutex; + struct mutex lockdep_mutex[DEVICE_LOCK_MAX]; int lock_class; + int lock_subsys; #endif struct mutex mutex; /* mutex to synchronize calls to * its driver. 
@@ -790,35 +801,55 @@ static inline void device_unlock(struct device *dev) mutex_unlock(&dev->mutex); } -static inline void device_set_lock_class(struct device *dev, int lock_class) +static inline void device_set_lock_class(struct device *dev, + enum device_lock_subsys subssys, + int lock_class) +{ +} + +static inline void device_lockdep_init(struct device *dev) { } #else +static inline struct mutex *device_lockdep_mutex(struct device *dev) +{ + if (dev->lock_subsys < 0) + return NULL; + if (dev->lock_class < 0) + return NULL; + return &dev->lockdep_mutex[dev->lock_subsys]; +} + static inline void device_lock(struct device *dev) { - lockdep_assert_not_held(&dev->lockdep_mutex); + struct mutex *lockdep_mutex = device_lockdep_mutex(dev); + + if (lockdep_mutex) + lockdep_assert_not_held(lockdep_mutex); mutex_lock(&dev->mutex); - if (dev->lock_class >= 0) - mutex_lock_nested(&dev->lockdep_mutex, dev->lock_class); + if (lockdep_mutex) + mutex_lock_nested(lockdep_mutex, dev->lock_class); } static inline int device_lock_interruptible(struct device *dev) { int rc = mutex_lock_interruptible(&dev->mutex); + struct mutex *lockdep_mutex = device_lockdep_mutex(dev); - if (rc || dev->lock_class < 0) + if (rc || !lockdep_mutex) return rc; - return mutex_lock_interruptible_nested(&dev->lockdep_mutex, - dev->lock_class); + return mutex_lock_interruptible_nested(lockdep_mutex, dev->lock_class); } static inline int device_trylock(struct device *dev) { + struct mutex *lockdep_mutex = device_lockdep_mutex(dev); + if (mutex_trylock(&dev->mutex)) { - if (dev->lock_class >= 0) - mutex_lock_nested(&dev->lockdep_mutex, dev->lock_class); + if (lockdep_mutex) + mutex_lock_nested(lockdep_mutex, dev->lock_class); return 1; } @@ -827,20 +858,28 @@ static inline int device_trylock(struct device *dev) static inline void device_unlock(struct device *dev) { + struct mutex *lockdep_mutex = device_lockdep_mutex(dev); + if (dev->lock_class >= 0) - mutex_unlock(&dev->lockdep_mutex); + mutex_unlock(lockdep_mutex); mutex_unlock(&dev->mutex); } -static inline void device_set_lock_class(struct device *dev, int lock_class) +static inline void device_set_lock_class(struct device *dev, + enum device_lock_subsys subsys, + int lock_class) { + if (subsys < 0) + return; + if (dev->lock_class < 0 && lock_class > 0) { if (mutex_is_locked(&dev->mutex)) { /* * device_unlock() will unlock lockdep_mutex now that * lock_class is set, so take the paired lock now */ - mutex_lock_nested(&dev->lockdep_mutex, lock_class); + mutex_lock_nested(&dev->lockdep_mutex[subsys], + lock_class); } } else if (dev->lock_class >= 0 && lock_class < 0) { if (mutex_is_locked(&dev->mutex)) { @@ -849,10 +888,24 @@ static inline void device_set_lock_class(struct device *dev, int lock_class) * that lock_class is disabled, so drop the paired lock * now. 
*/ - mutex_unlock(&dev->lockdep_mutex); + mutex_unlock(&dev->lockdep_mutex[subsys]); } } dev->lock_class = lock_class; + dev->lock_subsys = subsys; +} + +static inline void device_lockdep_init(struct device *dev) +{ +#if IS_ENABLED(CONFIG_CXL_BUS) + mutex_init(&dev->lockdep_mutex[DEVICE_LOCK_CXL]); +#endif +#if IS_ENABLED(CONFIG_LIBNVDIMM) + mutex_init(&dev->lockdep_mutex[DEVICE_LOCK_NVDIMM]); +#endif + dev->lock_subsys = -1; + dev->lock_class = -1; + lockdep_set_novalidate_class(&dev->mutex); } #endif diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index a4bd109f6178..14b89aa37c5c 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -1516,30 +1516,6 @@ config CSD_LOCK_WAIT_DEBUG include the IPI handler function currently executing (if any) and relevant stack traces. -choice - prompt "Lock debugging: prove subsystem device_lock() correctness" - depends on PROVE_LOCKING - help - For subsystems that have instrumented their usage of the device_lock() - with nested annotations, enable lock dependency checking. The locking - hierarchy 'subclass' identifiers are not compatible across - sub-systems, so only one can be enabled at a time. - -config PROVE_NVDIMM_LOCKING - bool "NVDIMM" - depends on LIBNVDIMM - help - Enable lockdep to validate libnvdimm subsystem usage of the - device lock. - -config PROVE_CXL_LOCKING - bool "CXL" - depends on CXL_BUS - help - Enable lockdep to validate CXL subsystem usage of the device lock. - -endchoice - endmenu # lock debugging config TRACE_IRQFLAGS
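
For a subsystem that wants the same coverage but is not converted by this series, the opt-in boils down to three steps: add a slot to enum device_lock_subsys (plus the matching mutex_init() in device_lockdep_init()), map each device to a nesting level, and call device_set_lock_class() before device_add(). The sketch below is illustrative only, assuming the device_set_lock_class() / enum device_lock_subsys interfaces introduced above; the DEVICE_LOCK_FOO slot and the foo_*() / is_foo_*() helpers are hypothetical and not part of the series.

/*
 * Hypothetical "foo" bus opting into device_lock() lockdep coverage.
 * Assumes DEVICE_LOCK_FOO has been added to enum device_lock_subsys
 * and mutex_init(&dev->lockdep_mutex[DEVICE_LOCK_FOO]) to
 * device_lockdep_init(), mirroring the NVDIMM and CXL entries.
 */
enum foo_lock_class {
	FOO_ANON_LOCK,		/* no parent/child ordering constraint */
	FOO_PARENT_LOCK,	/* parents are locked before their children */
	FOO_CHILD_LOCK,
};

static int foo_lock_class(struct device *dev)
{
	if (is_foo_parent(dev))
		return FOO_PARENT_LOCK;
	if (is_foo_child(dev))
		return FOO_CHILD_LOCK;
	return FOO_ANON_LOCK;
}

static int foo_register_device(struct device *dev)
{
	/*
	 * Set the class before device_add() so that every later
	 * device_lock() also takes dev->lockdep_mutex[DEVICE_LOCK_FOO]
	 * with the annotated subclass, giving lockdep the visibility
	 * that the novalidate dev->mutex cannot provide.
	 */
	device_set_lock_class(dev, DEVICE_LOCK_FOO, foo_lock_class(dev));
	return device_add(dev);
}

Since device_set_lock_class() compiles to an empty stub when CONFIG_PROVE_LOCKING is disabled, the opt-in carries no cost for production configurations.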
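
The payoff of the per-class annotation is that nested device_lock() usage across a subsystem's device hierarchy becomes visible to lockdep instead of being hidden behind the novalidate dev->mutex. A minimal sketch of the pattern the nesting levels legitimize, again with hypothetical foo_* names: a path that locks a parent and then each of its children records a parent -> child dependency, so any code path that acquires the two classes in the opposite order produces a lockdep report rather than a silent deadlock risk.

/*
 * Illustrative only: with distinct FOO_PARENT_LOCK / FOO_CHILD_LOCK
 * classes, lockdep records this as a parent -> child lock dependency;
 * a path that locked a child and then its parent would be flagged.
 * foo_quiesce() is a placeholder for subsystem-specific work.
 */
static int foo_quiesce_child(struct device *child, void *data)
{
	device_lock(child);	/* nests as FOO_CHILD_LOCK */
	foo_quiesce(child);
	device_unlock(child);
	return 0;
}

static ssize_t quiesce_store(struct device *parent,
			     struct device_attribute *attr,
			     const char *buf, size_t len)
{
	device_lock(parent);	/* annotated as FOO_PARENT_LOCK */
	device_for_each_child(parent, NULL, foo_quiesce_child);
	device_unlock(parent);
	return len;
}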