From patchwork Fri Jan 13 06:59:53 2023
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 13100193
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rcu@vger.kernel.org, kvm@vger.kernel.org
Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng, Lai Jiangshan, "Paul E. McKenney", Josh Triplett, Steven Rostedt, Mathieu Desnoyers, David Woodhouse, Paolo Bonzini, seanjc@google.com, Joel Fernandes, Matthew Wilcox, Michal Luczaj
Subject: [PATCH 1/3] locking/lockdep: Introduce lock_sync()
Date: Thu, 12 Jan 2023 22:59:53 -0800
Message-Id: <20230113065955.815667-2-boqun.feng@gmail.com>
In-Reply-To: <20230113065955.815667-1-boqun.feng@gmail.com>
References: <20230113065955.815667-1-boqun.feng@gmail.com>

Currently, in order to annotate functions like synchronize_srcu() for
lockdep, the following trick can be used:

        lock_acquire();
        lock_release();

which indicates that synchronize_srcu() acts like an empty critical
section that waits for other (read-side) critical sections to finish.
This can certainly catch some deadlocks, but as the discussion brought
up by Paul McKenney [1] shows, it can introduce false positives because
of the irq-safe/unsafe detection. An extra trick can work around that:

        local_irq_disable(...);
        lock_acquire();
        lock_release();
        local_irq_enable(...);

But it is better for lockdep to provide an annotation for
synchronize_srcu()-like functions, so that people do not need to repeat
the ugly tricks above.

Therefore, introduce lock_sync(). It is simply a lock+unlock pair with
no irq-safe/unsafe deadlock check, since the to-be-annotated functions
do not create real critical sections, and therefore there is no way
that irqs can create extra dependencies.
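For illustration only, here is a minimal sketch (not part of this patch) of how
a wait-for-readers style primitive could use the new annotation instead of the
tricks above; struct foo_sync, foo_key and the foo_*() helpers are made up for
this example, while lockdep_init_map(), lock_map_acquire_read(),
lock_map_release() and the new lock_map_sync() are the real lockdep helpers:

    /* Sketch only: foo_sync, foo_key and the foo_*() helpers are invented. */
    #include <linux/lockdep.h>

    struct foo_sync {
            struct lockdep_map dep_map;
            /* ... real synchronization state would live here ... */
    };

    static struct lock_class_key foo_key;

    static void foo_init(struct foo_sync *f)
    {
            lockdep_init_map(&f->dep_map, "foo_sync", &foo_key, 0);
    }

    static void foo_read_lock(struct foo_sync *f)
    {
            lock_map_acquire_read(&f->dep_map);     /* reader-side critical section */
    }

    static void foo_read_unlock(struct foo_sync *f)
    {
            lock_map_release(&f->dep_map);
    }

    static void foo_wait_for_readers(struct foo_sync *f)
    {
            /* One annotation replaces the acquire/release (+ irq-disable) trick. */
            lock_map_sync(&f->dep_map);
            /* ... actually wait for all current readers to finish ... */
    }

As the hunk below shows, lock_map_sync() expands to lock_sync() with check=1;
patch 2 wires exactly this kind of annotation into SRCU.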
[1]: https://lore.kernel.org/lkml/20180412021233.ewncg5jjuzjw3x62@tardis/

Signed-off-by: Boqun Feng
Acked-by: Waiman Long
---
 include/linux/lockdep.h  |  5 +++++
 kernel/locking/lockdep.c | 34 ++++++++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 1f1099dac3f0..ba09df6a0872 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -268,6 +268,10 @@ extern void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 
 extern void lock_release(struct lockdep_map *lock, unsigned long ip);
 
+extern void lock_sync(struct lockdep_map *lock, unsigned int subclass,
+		      int read, int check, struct lockdep_map *nest_lock,
+		      unsigned long ip);
+
 /* lock_is_held_type() returns */
 #define LOCK_STATE_UNKNOWN	-1
 #define LOCK_STATE_NOT_HELD	0
@@ -555,6 +559,7 @@ do { \
 #define lock_map_acquire_read(l)	lock_acquire_shared_recursive(l, 0, 0, NULL, _THIS_IP_)
 #define lock_map_acquire_tryread(l)	lock_acquire_shared_recursive(l, 0, 1, NULL, _THIS_IP_)
 #define lock_map_release(l)		lock_release(l, _THIS_IP_)
+#define lock_map_sync(l)		lock_sync(l, 0, 0, 1, NULL, _THIS_IP_)
 
 #ifdef CONFIG_PROVE_LOCKING
 # define might_lock(lock) \
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index e3375bc40dad..cffa026a765f 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -5692,6 +5692,40 @@ void lock_release(struct lockdep_map *lock, unsigned long ip)
 }
 EXPORT_SYMBOL_GPL(lock_release);
 
+/*
+ * lock_sync() - A special annotation for synchronize_{s,}rcu()-like API.
+ *
+ * No actual critical section is created by the APIs annotated with this: these
+ * APIs are used to wait for one or multiple critical sections (on other CPUs
+ * or threads), and it means that calling these APIs inside these critical
+ * sections is potential deadlock.
+ *
+ * This annotation acts as an acqurie+release anontation pair with hardirqoff
+ * being 1. Since there's no critical section, no interrupt can create extra
+ * dependencies "inside" the annotation, hardirqoff == 1 allows us to avoid
+ * false positives.
+ */
+void lock_sync(struct lockdep_map *lock, unsigned subclass, int read,
+	       int check, struct lockdep_map *nest_lock, unsigned long ip)
+{
+	unsigned long flags;
+
+	if (unlikely(!lockdep_enabled()))
+		return;
+
+	raw_local_irq_save(flags);
+	check_flags(flags);
+
+	lockdep_recursion_inc();
+	__lock_acquire(lock, subclass, 0, read, check, 1, nest_lock, ip, 0, 0);
+
+	if (__lock_release(lock, ip))
+		check_chain_key(current);
+	lockdep_recursion_finish();
+	raw_local_irq_restore(flags);
+}
+EXPORT_SYMBOL_GPL(lock_sync);
+
 noinstr int lock_is_held_type(const struct lockdep_map *lock, int read)
 {
 	unsigned long flags;

From patchwork Fri Jan 13 06:59:54 2023
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 13100192
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rcu@vger.kernel.org, kvm@vger.kernel.org
Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng, Lai Jiangshan, "Paul E. McKenney", Josh Triplett, Steven Rostedt, Mathieu Desnoyers, David Woodhouse, Paolo Bonzini, seanjc@google.com, Joel Fernandes, Matthew Wilcox, Michal Luczaj
Subject: [PATCH 2/3] rcu: Equip sleepable RCU with lockdep dependency graph checks
Date: Thu, 12 Jan 2023 22:59:54 -0800
Message-Id: <20230113065955.815667-3-boqun.feng@gmail.com>
In-Reply-To: <20230113065955.815667-1-boqun.feng@gmail.com>
References: <20230113065955.815667-1-boqun.feng@gmail.com>

Although all flavors of RCU are annotated correctly with lockdep as
recursive read locks, the 'check' parameter of their lock_acquire()
calls is unset. This means that RCU read locks are not added into the
lockdep dependency graph, and therefore deadlock detection based on the
dependency graph won't catch deadlocks caused by RCU. This is fine for
"non-sleepable" RCU flavors, since wait-context detection and other
context-based detection can catch those deadlocks. However, for
sleepable RCU, such detection is limited.

Actually, we can detect the deadlocks caused by SRCU by 1) making
srcu_read_lock() a 'check'ed recursive read lock and 2) making
synchronize_srcu() an empty write-lock critical section. Even better,
with the newly introduced lock_sync(), we can avoid false positives
about irq-unsafe/safe. So do it.

Note that NMI-safe SRCU read-side critical sections are currently not
annotated, since a step-by-step approach helps us deal with false
positives. They may be annotated in the future.

Signed-off-by: Boqun Feng
Acked-by: Paul E. McKenney
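To make the effect concrete, below is a minimal sketch (not part of the patch)
of the kind of SRCU deadlock that becomes reportable; my_srcu, my_mutex and the
two functions are hypothetical, and the pattern mirrors the srcu_mutex_ABBA
selftest added later in this series:

    #include <linux/mutex.h>
    #include <linux/srcu.h>

    /* Sketch only: my_srcu, my_mutex and both functions are invented. */
    DEFINE_STATIC_SRCU(my_srcu);
    static DEFINE_MUTEX(my_mutex);

    static void writer_side(void)
    {
            mutex_lock(&my_mutex);
            synchronize_srcu(&my_srcu);     /* waits for readers while holding my_mutex */
            mutex_unlock(&my_mutex);
    }

    static void reader_side(void)
    {
            int idx = srcu_read_lock(&my_srcu);

            mutex_lock(&my_mutex);          /* may block on writer_side(), which waits on us */
            mutex_unlock(&my_mutex);
            srcu_read_unlock(&my_srcu, idx);
    }

With srcu_read_lock() now 'check'ed and synchronize_srcu() annotated via
lock_sync(), lockdep sees both the my_mutex -> my_srcu and the
my_srcu -> my_mutex dependencies and can report the cycle.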
---
 include/linux/srcu.h  | 23 +++++++++++++++++++++--
 kernel/rcu/srcutiny.c |  2 ++
 kernel/rcu/srcutree.c |  2 ++
 3 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index 9b9d0bbf1d3c..a1595f8c5155 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -102,6 +102,21 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
 	return lock_is_held(&ssp->dep_map);
 }
 
+static inline void srcu_lock_acquire(struct lockdep_map *map)
+{
+	lock_map_acquire_read(map);
+}
+
+static inline void srcu_lock_release(struct lockdep_map *map)
+{
+	lock_map_release(map);
+}
+
+static inline void srcu_lock_sync(struct lockdep_map *map)
+{
+	lock_map_sync(map);
+}
+
 #else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
 
 static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
@@ -109,6 +124,10 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
 	return 1;
 }
 
+#define srcu_lock_acquire(m) do { } while (0)
+#define srcu_lock_release(m) do { } while (0)
+#define srcu_lock_sync(m) do { } while (0)
+
 #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
 
 #define SRCU_NMI_UNKNOWN	0x0
@@ -182,7 +201,7 @@ static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp)
 
 	srcu_check_nmi_safety(ssp, false);
 	retval = __srcu_read_lock(ssp);
-	rcu_lock_acquire(&(ssp)->dep_map);
+	srcu_lock_acquire(&(ssp)->dep_map);
 	return retval;
 }
 
@@ -226,7 +245,7 @@ static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx)
 {
 	WARN_ON_ONCE(idx & ~0x1);
 	srcu_check_nmi_safety(ssp, false);
-	rcu_lock_release(&(ssp)->dep_map);
+	srcu_lock_release(&(ssp)->dep_map);
 	__srcu_read_unlock(ssp, idx);
 }
 
diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
index b12fb0cec44d..336af24e0fe3 100644
--- a/kernel/rcu/srcutiny.c
+++ b/kernel/rcu/srcutiny.c
@@ -197,6 +197,8 @@ void synchronize_srcu(struct srcu_struct *ssp)
 {
 	struct rcu_synchronize rs;
 
+	srcu_lock_sync(&ssp->dep_map);
+
 	RCU_LOCKDEP_WARN(lockdep_is_held(ssp) ||
 			lock_is_held(&rcu_bh_lock_map) ||
 			lock_is_held(&rcu_lock_map) ||
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index ca4b5dcec675..408088c73e0e 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -1267,6 +1267,8 @@ static void __synchronize_srcu(struct srcu_struct *ssp, bool do_norm)
 {
 	struct rcu_synchronize rcu;
 
+	srcu_lock_sync(&ssp->dep_map);
+
 	RCU_LOCKDEP_WARN(lockdep_is_held(ssp) ||
 			lock_is_held(&rcu_bh_lock_map) ||
 			lock_is_held(&rcu_lock_map) ||

From patchwork Fri Jan 13 06:59:55 2023
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 13100194
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rcu@vger.kernel.org, kvm@vger.kernel.org
Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng, Lai Jiangshan, "Paul E. McKenney", Josh Triplett, Steven Rostedt, Mathieu Desnoyers, David Woodhouse, Paolo Bonzini, seanjc@google.com, Joel Fernandes, Matthew Wilcox, Michal Luczaj
McKenney" , Josh Triplett , Steven Rostedt , Mathieu Desnoyers , David Woodhouse , Paolo Bonzini , seanjc@google.com, Joel Fernandes , Matthew Wilcox , Michal Luczaj Subject: [PATCH 3/3] WIP: locking/lockdep: selftests: Add selftests for SRCU Date: Thu, 12 Jan 2023 22:59:55 -0800 Message-Id: <20230113065955.815667-4-boqun.feng@gmail.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20230113065955.815667-1-boqun.feng@gmail.com> References: <20230113065955.815667-1-boqun.feng@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Signed-off-by: Boqun Feng --- lib/locking-selftest.c | 71 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 71 insertions(+) diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c index 8d24279fad05..5fc206a2f9f1 100644 --- a/lib/locking-selftest.c +++ b/lib/locking-selftest.c @@ -60,6 +60,7 @@ __setup("debug_locks_verbose=", setup_debug_locks_verbose); #define LOCKTYPE_RTMUTEX 0x20 #define LOCKTYPE_LL 0x40 #define LOCKTYPE_SPECIAL 0x80 +#define LOCKTYPE_SRCU 0x100 static struct ww_acquire_ctx t, t2; static struct ww_mutex o, o2, o3; @@ -100,6 +101,13 @@ static DEFINE_RT_MUTEX(rtmutex_D); #endif +#ifdef CONFIG_SRCU +static struct lock_class_key srcu_A_key; +static struct lock_class_key srcu_B_key; +static struct srcu_struct srcu_A; +static struct srcu_struct srcu_B; +#endif + /* * Locks that we initialize dynamically as well so that * e.g. X1 and X2 becomes two instances of the same class, @@ -1418,6 +1426,12 @@ static void reset_locks(void) memset(&ww_lockdep.acquire_key, 0, sizeof(ww_lockdep.acquire_key)); memset(&ww_lockdep.mutex_key, 0, sizeof(ww_lockdep.mutex_key)); local_irq_enable(); + +#ifdef CONFIG_SRCU + __init_srcu_struct(&srcu_A, "srcuA", &srcu_A_key); + __init_srcu_struct(&srcu_B, "srcuB", &srcu_B_key); +#endif + } #undef I @@ -2360,6 +2374,58 @@ static void ww_tests(void) pr_cont("\n"); } +static void srcu_ABBA(void) +{ + int ia, ib; + + ia = srcu_read_lock(&srcu_A); + synchronize_srcu(&srcu_B); + srcu_read_unlock(&srcu_A, ia); + + ib = srcu_read_lock(&srcu_B); + synchronize_srcu(&srcu_A); + srcu_read_unlock(&srcu_B, ib); // should fail +} + +static void srcu_mutex_ABBA(void) +{ + int ia; + + mutex_lock(&mutex_A); + synchronize_srcu(&srcu_A); + mutex_unlock(&mutex_A); + + ia = srcu_read_lock(&srcu_A); + mutex_lock(&mutex_A); + mutex_unlock(&mutex_A); + srcu_read_unlock(&srcu_A, ia); // should fail +} + +static void srcu_irqsafe(void) +{ + int ia; + + HARDIRQ_ENTER(); + ia = srcu_read_lock(&srcu_A); + srcu_read_unlock(&srcu_A, ia); + HARDIRQ_EXIT(); + + synchronize_srcu(&srcu_A); // should NOT fail +} + +static void srcu_tests(void) +{ + printk(" --------------------------------------------------------------------------\n"); + printk(" | SRCU tests |\n"); + printk(" ---------------\n"); + print_testname("ABBA read-sync/read-sync"); + dotest(srcu_ABBA, FAILURE, LOCKTYPE_SRCU); + print_testname("ABBA mutex-sync/read-mutex"); + dotest(srcu_mutex_ABBA, FAILURE, LOCKTYPE_SRCU); + print_testname("Irqsafe synchronize_srcu"); + dotest(srcu_irqsafe, SUCCESS, LOCKTYPE_SRCU); + pr_cont("\n"); +} /* * @@ -2881,6 +2947,10 @@ void locking_selftest(void) printk(" --------------------------------------------------------------------------\n"); init_shared_classes(); +#ifdef CONFIG_SRCU + __init_srcu_struct(&srcu_A, "srcuA", &srcu_A_key); + __init_srcu_struct(&srcu_B, "srcuB", &srcu_B_key); +#endif lockdep_set_selftest_task(current); DO_TESTCASE_6R("A-A deadlock", AA); @@ -2965,6 +3035,7 @@ void locking_selftest(void) 
 	DO_TESTCASE_6x2x2RW("irq read-recursion #3", irq_read_recursion3);
 
 	ww_tests();
+	srcu_tests();
 
 	force_read_lock_recursive = 0;
 	/*

From patchwork Fri Jan 13 23:57:22 2023
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 13101739
From: Boqun Feng
To: linux-kernel@vger.kernel.org, rcu@vger.kernel.org, kvm@vger.kernel.org
Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng, Lai Jiangshan, "Paul E. McKenney", Josh Triplett, Steven Rostedt, Mathieu Desnoyers, David Woodhouse, Paolo Bonzini, seanjc@google.com, Joel Fernandes, Matthew Wilcox, Michal Luczaj
Subject: [PATCH 4/3] locking/lockdep: Improve the deadlock scenario print for sync and read lock
Date: Fri, 13 Jan 2023 15:57:22 -0800
Message-Id: <20230113235722.1226525-1-boqun.feng@gmail.com>
In-Reply-To: <20230113065955.815667-1-boqun.feng@gmail.com>
References: <20230113065955.815667-1-boqun.feng@gmail.com>

Lock scenario print has always been a weak spot of lockdep splats. It
could be improved by reworking the dependency search and the error
printing. However, even without touching the graph search, we can
already improve the circular deadlock case a little, since at that point
we have the to-be-added lock dependency and know whether the two locks
involved are read, write or sync.

In order to know whether a held_lock is a sync or not, a bit is "stolen"
from ->references. This reduces the limit for nesting the same lock
class from 2^12 to 2^11, which should still be good enough.

Besides, since we now have a bit in held_lock for sync, we no longer
need the "hardirqoff being 1" trick, and we can also avoid the
__lock_release() by jumping out of __lock_acquire() before the held_lock
is stored.

With these changes, a deadlock case involving a read lock and a sync
gets a better print-out, from:

	[...]  Possible unsafe locking scenario:
	[...]
	[...]        CPU0                    CPU1
	[...]        ----                    ----
	[...]   lock(srcuA);
	[...]                            lock(srcuB);
	[...]                            lock(srcuA);
	[...]   lock(srcuB);

to:

	[...]  Possible unsafe locking scenario:
	[...]
	[...]        CPU0                    CPU1
	[...]        ----                    ----
	[...]   rlock(srcuA);
	[...]                            lock(srcuB);
	[...]                            lock(srcuA);
	[...]   sync(srcuB);
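As a side note on the ->references narrowing mentioned above, here is a small
stand-alone sketch (not from the patch) checking that the held_lock bitfield
word still packs into 32 bits once a bit is taken for ->sync; the 13-bit
class_idx width and the other widths besides those visible in the diff below
are assumptions based on mainline lockdep:

    /* Sketch only: field widths other than sync/references are assumed. */
    struct held_lock_bits {
            unsigned int class_idx:13;      /* assumed MAX_LOCKDEP_KEYS_BITS */
            unsigned int irq_context:2;
            unsigned int trylock:1;
            unsigned int read:2;
            unsigned int check:1;
            unsigned int hardirqs_off:1;
            unsigned int sync:1;            /* the newly stolen bit */
            unsigned int references:11;     /* 2^11 nested references, down from 2^12 */
    };
    _Static_assert(sizeof(struct held_lock_bits) == sizeof(unsigned int),
                   "held_lock bitfields no longer fit in 32 bits");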
Signed-off-by: Boqun Feng
---
 include/linux/lockdep.h  |  3 ++-
 kernel/locking/lockdep.c | 48 ++++++++++++++++++++++++++--------------
 2 files changed, 34 insertions(+), 17 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index ba09df6a0872..febd7ecc225c 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -134,7 +134,8 @@ struct held_lock {
 	unsigned int read:2;		/* see lock_acquire() comment */
 	unsigned int check:1;		/* see lock_acquire() comment */
 	unsigned int hardirqs_off:1;
-	unsigned int references:12;	/* 32 bits */
+	unsigned int sync:1;
+	unsigned int references:11;	/* 32 bits */
 	unsigned int pin_count;
 };
 
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index cffa026a765f..4031d87f6829 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1880,6 +1880,8 @@ print_circular_lock_scenario(struct held_lock *src,
 	struct lock_class *source = hlock_class(src);
 	struct lock_class *target = hlock_class(tgt);
 	struct lock_class *parent = prt->class;
+	int src_read = src->read;
+	int tgt_read = tgt->read;
 
 	/*
 	 * A direct locking problem where unsafe_class lock is taken
@@ -1907,7 +1909,10 @@ print_circular_lock_scenario(struct held_lock *src,
 	printk(" Possible unsafe locking scenario:\n\n");
 	printk("       CPU0                    CPU1\n");
 	printk("       ----                    ----\n");
-	printk("  lock(");
+	if (tgt_read != 0)
+		printk("  rlock(");
+	else
+		printk("  lock(");
 	__print_lock_name(target);
 	printk(KERN_CONT ");\n");
 	printk("                               lock(");
@@ -1916,7 +1921,12 @@ print_circular_lock_scenario(struct held_lock *src,
 	printk("                               lock(");
 	__print_lock_name(target);
 	printk(KERN_CONT ");\n");
-	printk("  lock(");
+	if (src_read != 0)
+		printk("  rlock(");
+	else if (src->sync)
+		printk("  sync(");
+	else
+		printk("  lock(");
 	__print_lock_name(source);
 	printk(KERN_CONT ");\n");
 	printk("\n *** DEADLOCK ***\n\n");
@@ -4530,7 +4540,13 @@ mark_usage(struct task_struct *curr, struct held_lock *hlock, int check)
 					return 0;
 		}
 	}
-	if (!hlock->hardirqs_off) {
+
+	/*
+	 * For lock_sync(), don't mark the ENABLED usage, since lock_sync()
+	 * creates no critical section and no extra dependency can be introduced
+	 * by interrupts
+	 */
+	if (!hlock->hardirqs_off && !hlock->sync) {
 		if (hlock->read) {
 			if (!mark_lock(curr, hlock,
 					LOCK_ENABLED_HARDIRQ_READ))
@@ -4909,7 +4925,7 @@ static int __lock_is_held(const struct lockdep_map *lock, int read);
 static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 			  int trylock, int read, int check, int hardirqs_off,
 			  struct lockdep_map *nest_lock, unsigned long ip,
-			  int references, int pin_count)
+			  int references, int pin_count, int sync)
 {
 	struct task_struct *curr = current;
 	struct lock_class *class = NULL;
@@ -4960,7 +4976,8 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 
 	class_idx = class - lock_classes;
 
-	if (depth) { /* we're holding locks */
+	if (depth && !sync) {
+		/* we're holding locks and the new held lock is not a sync */
 		hlock = curr->held_locks + depth - 1;
 		if (hlock->class_idx == class_idx && nest_lock) {
 			if (!references)
@@ -4994,6 +5011,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 	hlock->trylock = trylock;
 	hlock->read = read;
 	hlock->check = check;
+	hlock->sync = !!sync;
 	hlock->hardirqs_off = !!hardirqs_off;
 	hlock->references = references;
 #ifdef CONFIG_LOCK_STAT
@@ -5055,6 +5073,10 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 	if (!validate_chain(curr, hlock, chain_head, chain_key))
 		return 0;
 
+	/* For lock_sync(), we are done here since no actual critical section */
+	if (hlock->sync)
+		return 1;
+
 	curr->curr_chain_key = chain_key;
 	curr->lockdep_depth++;
 	check_chain_key(curr);
@@ -5196,7 +5218,7 @@ static int reacquire_held_locks(struct task_struct *curr, unsigned int depth,
 				hlock->read, hlock->check,
 				hlock->hardirqs_off,
 				hlock->nest_lock, hlock->acquire_ip,
-				hlock->references, hlock->pin_count)) {
+				hlock->references, hlock->pin_count, 0)) {
 		case 0:
 			return 1;
 		case 1:
@@ -5666,7 +5688,7 @@ void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 
 	lockdep_recursion_inc();
 	__lock_acquire(lock, subclass, trylock, read, check,
-		       irqs_disabled_flags(flags), nest_lock, ip, 0, 0);
+		       irqs_disabled_flags(flags), nest_lock, ip, 0, 0, 0);
 	lockdep_recursion_finish();
 	raw_local_irq_restore(flags);
 }
@@ -5699,11 +5721,6 @@ EXPORT_SYMBOL_GPL(lock_release);
  * APIs are used to wait for one or multiple critical sections (on other CPUs
  * or threads), and it means that calling these APIs inside these critical
  * sections is potential deadlock.
- *
- * This annotation acts as an acqurie+release anontation pair with hardirqoff
- * being 1. Since there's no critical section, no interrupt can create extra
- * dependencies "inside" the annotation, hardirqoff == 1 allows us to avoid
- * false positives.
  */
 void lock_sync(struct lockdep_map *lock, unsigned subclass, int read,
 	       int check, struct lockdep_map *nest_lock, unsigned long ip)
@@ -5717,10 +5734,9 @@ void lock_sync(struct lockdep_map *lock, unsigned subclass, int read,
 	check_flags(flags);
 
 	lockdep_recursion_inc();
-	__lock_acquire(lock, subclass, 0, read, check, 1, nest_lock, ip, 0, 0);
-
-	if (__lock_release(lock, ip))
-		check_chain_key(current);
+	__lock_acquire(lock, subclass, 0, read, check,
+		       irqs_disabled_flags(flags), nest_lock, ip, 0, 0, 1);
+	check_chain_key(current);
 	lockdep_recursion_finish();
 	raw_local_irq_restore(flags);
 }