From patchwork Wed May 8 09:46:58 2024
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13658426
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org,
djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com
Subject: [PATCH v14 01/28] llist: Move llist_{head, node} definition to types.h
Date: Wed, 8 May 2024 18:46:58 +0900
Message-Id: <20240508094726.35754-2-byungchul@sk.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20240508094726.35754-1-byungchul@sk.com>
References: <20240508094726.35754-1-byungchul@sk.com>

llist_head and llist_node can be used by very basic primitives. For
example, Dept, which tracks dependencies, uses llist in its header. To
avoid a header dependency, move those definitions to types.h.
Signed-off-by: Byungchul Park
---
 include/linux/llist.h | 8 --------
 include/linux/types.h | 8 ++++++++
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/linux/llist.h b/include/linux/llist.h
index 2c982ff7475a..3ac071857612 100644
--- a/include/linux/llist.h
+++ b/include/linux/llist.h
@@ -53,14 +53,6 @@
 #include
 #include
 
-struct llist_head {
-	struct llist_node *first;
-};
-
-struct llist_node {
-	struct llist_node *next;
-};
-
 #define LLIST_HEAD_INIT(name)	{ NULL }
 #define LLIST_HEAD(name)	struct llist_head name = LLIST_HEAD_INIT(name)

diff --git a/include/linux/types.h b/include/linux/types.h
index 2bc8766ba20c..a1e4e046cfa5 100644
--- a/include/linux/types.h
+++ b/include/linux/types.h
@@ -202,6 +202,14 @@ struct hlist_node {
 	struct hlist_node *next, **pprev;
 };
 
+struct llist_head {
+	struct llist_node *first;
+};
+
+struct llist_node {
+	struct llist_node *next;
+};
+
 struct ustat {
 	__kernel_daddr_t f_tfree;
 #ifdef CONFIG_ARCH_32BIT_USTAT_F_TINODE

From patchwork Wed May 8 09:46:59 2024
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13658446
From: Byungchul Park
To:
linux-kernel@vger.kernel.org
Subject: [PATCH v14 02/28] dept: Implement Dept(Dependency Tracker)
Date: Wed, 8 May 2024 18:46:59 +0900
Message-Id: <20240508094726.35754-3-byungchul@sk.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20240508094726.35754-1-byungchul@sk.com>
References: <20240508094726.35754-1-byungchul@sk.com>
CURRENT STATUS
--------------

Lockdep tracks the acquisition order of locks to detect deadlock, and
tracks IRQ and IRQ enable/disable state as well to take accidental
acquisitions into account. Lockdep should be turned off once it detects
and reports a deadlock, since its data structures and algorithm are not
reusable after detection because of the complex design.

PROBLEM
-------

*Waits* and their *events* that never arrive eventually cause deadlock.
However, Lockdep is only interested in lock acquisition order, forcing
it to emulate lock acquisition even for plain waits and events that
have nothing to do with a real lock. Even worse, no one likes Lockdep's
false positive reports, because each one suppresses further reports
that might be more valuable. That's why kernel developers are so
sensitive to Lockdep's false positives. Besides that, by tracking
acquisition order, it cannot correctly deal with read locks and
cross-context events, e.g. wait_for_completion()/complete(), for
deadlock detection. Lockdep is no longer a good tool for that purpose.

SOLUTION
--------

Again, *waits* and their *events* that never arrive eventually cause
deadlock. The new solution, Dept (DEPendency Tracker), focuses on waits
and events themselves. Dept tracks waits and events and reports if any
event would never be reachable. Dept:

. Works with read locks in the right way.
. Works with any wait and event, i.e. cross-context events.
. Continues to work even after reporting multiple times.
. Provides simple and intuitive APIs.
. Does exactly what a dependency checker should do.

Q & A
-----

Q. Is this the first try ever to address the problem?
A. No.
The cross-release feature (b09be676e0ff2 locking/lockdep: Implement
the 'crossrelease' feature) addressed it 2 years ago as a Lockdep
extension. It was merged, but reverted shortly after, because
cross-release started to report valuable hidden problems but produced
false positive reports as well. For sure, no one likes Lockdep's false
positive reports, since they make Lockdep stop, preventing it from
reporting further real problems.

Q. Why was Dept not developed as an extension of Lockdep?
A. Lockdep definitely embodies all the effort great developers have
made over a long time, so it is quite stable. But I had to design and
implement Dept anew because of the following:

1) Lockdep was designed to track lock acquisition order. The APIs and
   implementation do not fit the wait-event model.
2) Lockdep is turned off on detection, including on false positives,
   which is terrible and prevents developing any extension for
   stronger detection.

Q. Do you intend to totally replace Lockdep?
A. No. Lockdep also checks whether lock usage is correct. Of course,
the dependency check routine should be replaced, but the other
functions should remain.

Q. Do you mean the dependency check routine should be replaced right
away?
A. No. I admit Lockdep is stable enough thanks to the great efforts
kernel developers have made. Lockdep and Dept should both be in the
kernel until Dept is considered stable.

Q. Stronger detection capability would produce more false positive
reports, which was a big problem when cross-release was introduced. Is
that okay with Dept?
A. Yes. Dept allows multiple reports thanks to its simple and quite
generalized design. Of course, false positive reports should still be
fixed, but they are no longer as critical a problem as they once were.
Signed-off-by: Byungchul Park
---
 include/linux/dept.h            |  567 ++++++
 include/linux/hardirq.h         |    3 +
 include/linux/sched.h           |    3 +
 init/init_task.c                |    2 +
 init/main.c                     |    2 +
 kernel/Makefile                 |    1 +
 kernel/dependency/Makefile      |    3 +
 kernel/dependency/dept.c        | 2966 +++++++++++++++++++++++++++++++
 kernel/dependency/dept_hash.h   |   10 +
 kernel/dependency/dept_object.h |   13 +
 kernel/exit.c                   |    1 +
 kernel/fork.c                   |    2 +
 kernel/module/main.c            |    4 +
 kernel/sched/core.c             |   10 +
 lib/Kconfig.debug               |   27 +
 lib/locking-selftest.c          |    2 +
 16 files changed, 3616 insertions(+)
 create mode 100644 include/linux/dept.h
 create mode 100644 kernel/dependency/Makefile
 create mode 100644 kernel/dependency/dept.c
 create mode 100644 kernel/dependency/dept_hash.h
 create mode 100644 kernel/dependency/dept_object.h

diff --git a/include/linux/dept.h b/include/linux/dept.h
new file mode 100644
index 000000000000..c6e2291dd843
--- /dev/null
+++ b/include/linux/dept.h
@@ -0,0 +1,567 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * DEPT(DEPendency Tracker) - runtime dependency tracker
+ *
+ * Started by Byungchul Park :
+ *
+ * Copyright (c) 2020 LG Electronics, Inc., Byungchul Park
+ */
+
+#ifndef __LINUX_DEPT_H
+#define __LINUX_DEPT_H
+
+#ifdef CONFIG_DEPT
+
+#include
+
+struct task_struct;
+
+#define DEPT_MAX_STACK_ENTRY		16
+#define DEPT_MAX_WAIT_HIST		64
+#define DEPT_MAX_ECXT_HELD		48
+
+#define DEPT_MAX_SUBCLASSES		16
+#define DEPT_MAX_SUBCLASSES_EVT		2
+#define DEPT_MAX_SUBCLASSES_USR		(DEPT_MAX_SUBCLASSES / DEPT_MAX_SUBCLASSES_EVT)
+#define DEPT_MAX_SUBCLASSES_CACHE	2
+
+#define DEPT_SIRQ			0
+#define DEPT_HIRQ			1
+#define DEPT_IRQS_NR			2
+#define DEPT_SIRQF			(1UL << DEPT_SIRQ)
+#define DEPT_HIRQF			(1UL << DEPT_HIRQ)
+
+struct dept_ecxt;
+struct dept_iecxt {
+	struct dept_ecxt *ecxt;
+	int enirq;
+	/*
+	 * for preventing to add a new ecxt
+	 */
+	bool staled;
+};
+
+struct dept_wait;
+struct dept_iwait {
+	struct dept_wait *wait;
+	int irq;
+	/*
+	 * for preventing to add a new wait
+	 */
+	bool staled;
+	bool touched;
+};
+ +struct dept_class { + union { + struct llist_node pool_node; + struct { + /* + * reference counter for object management + */ + atomic_t ref; + + /* + * unique information about the class + */ + const char *name; + unsigned long key; + int sub_id; + + /* + * for BFS + */ + unsigned int bfs_gen; + int bfs_dist; + struct dept_class *bfs_parent; + + /* + * for hashing this object + */ + struct hlist_node hash_node; + + /* + * for linking all classes + */ + struct list_head all_node; + + /* + * for associating its dependencies + */ + struct list_head dep_head; + struct list_head dep_rev_head; + + /* + * for tracking IRQ dependencies + */ + struct dept_iecxt iecxt[DEPT_IRQS_NR]; + struct dept_iwait iwait[DEPT_IRQS_NR]; + + /* + * classified by a map embedded in task_struct, + * not an explicit map + */ + bool sched_map; + }; + }; +}; + +struct dept_key { + union { + /* + * Each byte-wise address will be used as its key. + */ + char base[DEPT_MAX_SUBCLASSES]; + + /* + * for caching the main class pointer + */ + struct dept_class *classes[DEPT_MAX_SUBCLASSES_CACHE]; + }; +}; + +struct dept_map { + const char *name; + struct dept_key *keys; + + /* + * subclass that can be set from user + */ + int sub_u; + + /* + * It's local copy for fast access to the associated classes. + * Also used for dept_key for static maps. 
+ */ + struct dept_key map_key; + + /* + * wait timestamp associated to this map + */ + unsigned int wgen; + + /* + * whether this map should be going to be checked or not + */ + bool nocheck; +}; + +#define DEPT_MAP_INITIALIZER(n, k) \ +{ \ + .name = #n, \ + .keys = (struct dept_key *)(k), \ + .sub_u = 0, \ + .map_key = { .classes = { NULL, } }, \ + .wgen = 0U, \ + .nocheck = false, \ +} + +struct dept_stack { + union { + struct llist_node pool_node; + struct { + /* + * reference counter for object management + */ + atomic_t ref; + + /* + * backtrace entries + */ + unsigned long raw[DEPT_MAX_STACK_ENTRY]; + int nr; + }; + }; +}; + +struct dept_ecxt { + union { + struct llist_node pool_node; + struct { + /* + * reference counter for object management + */ + atomic_t ref; + + /* + * function that entered to this ecxt + */ + const char *ecxt_fn; + + /* + * event function + */ + const char *event_fn; + + /* + * associated class + */ + struct dept_class *class; + + /* + * flag indicating which IRQ has been + * enabled within the event context + */ + unsigned long enirqf; + + /* + * where the IRQ-enabled happened + */ + unsigned long enirq_ip[DEPT_IRQS_NR]; + struct dept_stack *enirq_stack[DEPT_IRQS_NR]; + + /* + * where the event context started + */ + unsigned long ecxt_ip; + struct dept_stack *ecxt_stack; + + /* + * where the event triggered + */ + unsigned long event_ip; + struct dept_stack *event_stack; + }; + }; +}; + +struct dept_wait { + union { + struct llist_node pool_node; + struct { + /* + * reference counter for object management + */ + atomic_t ref; + + /* + * function causing this wait + */ + const char *wait_fn; + + /* + * the associated class + */ + struct dept_class *class; + + /* + * which IRQ the wait was placed in + */ + unsigned long irqf; + + /* + * where the IRQ wait happened + */ + unsigned long irq_ip[DEPT_IRQS_NR]; + struct dept_stack *irq_stack[DEPT_IRQS_NR]; + + /* + * where the wait happened + */ + unsigned long wait_ip; + struct dept_stack 
*wait_stack; + + /* + * whether this wait is for commit in scheduler + */ + bool sched_sleep; + }; + }; +}; + +struct dept_dep { + union { + struct llist_node pool_node; + struct { + /* + * reference counter for object management + */ + atomic_t ref; + + /* + * key data of dependency + */ + struct dept_ecxt *ecxt; + struct dept_wait *wait; + + /* + * This object can be referred without dept_lock + * held but with IRQ disabled, e.g. for hash + * lookup. So deferred deletion is needed. + */ + struct rcu_head rh; + + /* + * for BFS + */ + struct list_head bfs_node; + + /* + * for hashing this object + */ + struct hlist_node hash_node; + + /* + * for linking to a class object + */ + struct list_head dep_node; + struct list_head dep_rev_node; + }; + }; +}; + +struct dept_hash { + /* + * hash table + */ + struct hlist_head *table; + + /* + * size of the table e.i. 2^bits + */ + int bits; +}; + +struct dept_pool { + const char *name; + + /* + * object size + */ + size_t obj_sz; + + /* + * the number of the static array + */ + atomic_t obj_nr; + + /* + * offset of ->pool_node + */ + size_t node_off; + + /* + * pointer to the pool + */ + void *spool; + struct llist_head boot_pool; + struct llist_head __percpu *lpool; +}; + +struct dept_ecxt_held { + /* + * associated event context + */ + struct dept_ecxt *ecxt; + + /* + * unique key for this dept_ecxt_held + */ + struct dept_map *map; + + /* + * class of the ecxt of this dept_ecxt_held + */ + struct dept_class *class; + + /* + * the wgen when the event context started + */ + unsigned int wgen; + + /* + * subclass that only works in the local context + */ + int sub_l; +}; + +struct dept_wait_hist { + /* + * associated wait + */ + struct dept_wait *wait; + + /* + * unique id of all waits system-wise until wrapped + */ + unsigned int wgen; + + /* + * local context id to identify IRQ context + */ + unsigned int ctxt_id; +}; + +struct dept_task { + /* + * all event contexts that have entered and before exiting + */ + struct 
dept_ecxt_held ecxt_held[DEPT_MAX_ECXT_HELD]; + int ecxt_held_pos; + + /* + * ring buffer holding all waits that have happened + */ + struct dept_wait_hist wait_hist[DEPT_MAX_WAIT_HIST]; + int wait_hist_pos; + + /* + * sequential id to identify each IRQ context + */ + unsigned int irq_id[DEPT_IRQS_NR]; + + /* + * for tracking IRQ-enabled points with cross-event + */ + unsigned int wgen_enirq[DEPT_IRQS_NR]; + + /* + * for keeping up-to-date IRQ-enabled points + */ + unsigned long enirq_ip[DEPT_IRQS_NR]; + + /* + * for reserving a current stack instance at each operation + */ + struct dept_stack *stack; + + /* + * for preventing recursive call into DEPT engine + */ + int recursive; + + /* + * for staging data to commit a wait + */ + struct dept_map stage_m; + bool stage_sched_map; + const char *stage_w_fn; + unsigned long stage_ip; + + /* + * the number of missing ecxts + */ + int missing_ecxt; + + /* + * for tracking IRQ-enable state + */ + bool hardirqs_enabled; + bool softirqs_enabled; + + /* + * whether the current is on do_exit() + */ + bool task_exit; + + /* + * whether the current is running __schedule() + */ + bool in_sched; +}; + +#define DEPT_TASK_INITIALIZER(t) \ +{ \ + .wait_hist = { { .wait = NULL, } }, \ + .ecxt_held_pos = 0, \ + .wait_hist_pos = 0, \ + .irq_id = { 0U }, \ + .wgen_enirq = { 0U }, \ + .enirq_ip = { 0UL }, \ + .stack = NULL, \ + .recursive = 0, \ + .stage_m = DEPT_MAP_INITIALIZER((t)->stage_m, NULL), \ + .stage_sched_map = false, \ + .stage_w_fn = NULL, \ + .stage_ip = 0UL, \ + .missing_ecxt = 0, \ + .hardirqs_enabled = false, \ + .softirqs_enabled = false, \ + .task_exit = false, \ + .in_sched = false, \ +} + +extern void dept_on(void); +extern void dept_off(void); +extern void dept_init(void); +extern void dept_task_init(struct task_struct *t); +extern void dept_task_exit(struct task_struct *t); +extern void dept_free_range(void *start, unsigned int sz); +extern void dept_map_init(struct dept_map *m, struct dept_key *k, int sub_u, const 
char *n); +extern void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u, const char *n); +extern void dept_map_copy(struct dept_map *to, struct dept_map *from); + +extern void dept_wait(struct dept_map *m, unsigned long w_f, unsigned long ip, const char *w_fn, int sub_l); +extern void dept_stage_wait(struct dept_map *m, struct dept_key *k, unsigned long ip, const char *w_fn); +extern void dept_request_event_wait_commit(void); +extern void dept_clean_stage(void); +extern void dept_stage_event(struct task_struct *t, unsigned long ip); +extern void dept_ecxt_enter(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *c_fn, const char *e_fn, int sub_l); +extern bool dept_ecxt_holding(struct dept_map *m, unsigned long e_f); +extern void dept_request_event(struct dept_map *m); +extern void dept_event(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *e_fn); +extern void dept_ecxt_exit(struct dept_map *m, unsigned long e_f, unsigned long ip); +extern void dept_sched_enter(void); +extern void dept_sched_exit(void); + +static inline void dept_ecxt_enter_nokeep(struct dept_map *m) +{ + dept_ecxt_enter(m, 0UL, 0UL, NULL, NULL, 0); +} + +/* + * for users who want to manage external keys + */ +extern void dept_key_init(struct dept_key *k); +extern void dept_key_destroy(struct dept_key *k); +extern void dept_map_ecxt_modify(struct dept_map *m, unsigned long e_f, struct dept_key *new_k, unsigned long new_e_f, unsigned long new_ip, const char *new_c_fn, const char *new_e_fn, int new_sub_l); + +extern void dept_softirq_enter(void); +extern void dept_hardirq_enter(void); +extern void dept_softirqs_on_ip(unsigned long ip); +extern void dept_hardirqs_on(void); +extern void dept_softirqs_off(void); +extern void dept_hardirqs_off(void); +#else /* !CONFIG_DEPT */ +struct dept_key { }; +struct dept_map { }; +struct dept_task { }; + +#define DEPT_MAP_INITIALIZER(n, k) { } +#define DEPT_TASK_INITIALIZER(t) { } + +#define dept_on() do { } 
while (0) +#define dept_off() do { } while (0) +#define dept_init() do { } while (0) +#define dept_task_init(t) do { } while (0) +#define dept_task_exit(t) do { } while (0) +#define dept_free_range(s, sz) do { } while (0) +#define dept_map_init(m, k, su, n) do { (void)(n); (void)(k); } while (0) +#define dept_map_reinit(m, k, su, n) do { (void)(n); (void)(k); } while (0) +#define dept_map_copy(t, f) do { } while (0) + +#define dept_wait(m, w_f, ip, w_fn, sl) do { (void)(w_fn); } while (0) +#define dept_stage_wait(m, k, ip, w_fn) do { (void)(k); (void)(w_fn); } while (0) +#define dept_request_event_wait_commit() do { } while (0) +#define dept_clean_stage() do { } while (0) +#define dept_stage_event(t, ip) do { } while (0) +#define dept_ecxt_enter(m, e_f, ip, c_fn, e_fn, sl) do { (void)(c_fn); (void)(e_fn); } while (0) +#define dept_ecxt_holding(m, e_f) false +#define dept_request_event(m) do { } while (0) +#define dept_event(m, e_f, ip, e_fn) do { (void)(e_fn); } while (0) +#define dept_ecxt_exit(m, e_f, ip) do { } while (0) +#define dept_sched_enter() do { } while (0) +#define dept_sched_exit() do { } while (0) +#define dept_ecxt_enter_nokeep(m) do { } while (0) +#define dept_key_init(k) do { (void)(k); } while (0) +#define dept_key_destroy(k) do { (void)(k); } while (0) +#define dept_map_ecxt_modify(m, e_f, n_k, n_e_f, n_ip, n_c_fn, n_e_fn, n_sl) do { (void)(n_k); (void)(n_c_fn); (void)(n_e_fn); } while (0) + +#define dept_softirq_enter() do { } while (0) +#define dept_hardirq_enter() do { } while (0) +#define dept_softirqs_on_ip(ip) do { } while (0) +#define dept_hardirqs_on() do { } while (0) +#define dept_softirqs_off() do { } while (0) +#define dept_hardirqs_off() do { } while (0) +#endif +#endif /* __LINUX_DEPT_H */ diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h index d57cab4d4c06..bb279dbbe748 100644 --- a/include/linux/hardirq.h +++ b/include/linux/hardirq.h @@ -5,6 +5,7 @@ #include #include #include +#include #include #include #include @@ 
-106,6 +107,7 @@ void irq_exit_rcu(void); */ #define __nmi_enter() \ do { \ + dept_off(); \ lockdep_off(); \ arch_nmi_enter(); \ BUG_ON(in_nmi() == NMI_MASK); \ @@ -128,6 +130,7 @@ void irq_exit_rcu(void); __preempt_count_sub(NMI_OFFSET + HARDIRQ_OFFSET); \ arch_nmi_exit(); \ lockdep_on(); \ + dept_on(); \ } while (0) #define nmi_exit() \ diff --git a/include/linux/sched.h b/include/linux/sched.h index 3c2abbc587b4..ec313bf1ef94 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -46,6 +46,7 @@ #include #include #include +#include /* task_struct member predeclarations (sorted alphabetically): */ struct audit_context; @@ -1183,6 +1184,8 @@ struct task_struct { struct held_lock held_locks[MAX_LOCK_DEPTH]; #endif + struct dept_task dept_task; + #if defined(CONFIG_UBSAN) && !defined(CONFIG_UBSAN_TRAP) unsigned int in_ubsan; #endif diff --git a/init/init_task.c b/init/init_task.c index 4daee6d761c8..31395e3db9d1 100644 --- a/init/init_task.c +++ b/init/init_task.c @@ -13,6 +13,7 @@ #include #include #include +#include #include @@ -190,6 +191,7 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = { .curr_chain_key = INITIAL_CHAIN_KEY, .lockdep_recursion = 0, #endif + .dept_task = DEPT_TASK_INITIALIZER(init_task), #ifdef CONFIG_FUNCTION_GRAPH_TRACER .ret_stack = NULL, .tracing_graph_pause = ATOMIC_INIT(0), diff --git a/init/main.c b/init/main.c index 5dcf5274c09c..cdbeeb778134 100644 --- a/init/main.c +++ b/init/main.c @@ -65,6 +65,7 @@ #include #include #include +#include #include #include #include @@ -1019,6 +1020,7 @@ void start_kernel(void) panic_param); lockdep_init(); + dept_init(); /* * Need to run this when irqs are enabled, because it wants diff --git a/kernel/Makefile b/kernel/Makefile index 3c13240dfc9f..b2ddcbcbaa80 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -51,6 +51,7 @@ obj-y += livepatch/ obj-y += dma/ obj-y += entry/ obj-$(CONFIG_MODULES) += module/ +obj-y += dependency/ obj-$(CONFIG_KCMP) += kcmp.o obj-$(CONFIG_FREEZER) += 
freezer.o

diff --git a/kernel/dependency/Makefile b/kernel/dependency/Makefile
new file mode 100644
index 000000000000..b5cfb8a03c0c
--- /dev/null
+++ b/kernel/dependency/Makefile
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: GPL-2.0
+
+obj-$(CONFIG_DEPT) += dept.o

diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c
new file mode 100644
index 000000000000..a3e774479f94
--- /dev/null
+++ b/kernel/dependency/dept.c
@@ -0,0 +1,2966 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DEPT(DEPendency Tracker) - Runtime dependency tracker
+ *
+ * Started by Byungchul Park :
+ *
+ * Copyright (c) 2020 LG Electronics, Inc., Byungchul Park
+ *
+ * DEPT provides a general way to detect deadlock possibility at
+ * runtime, and the interest is not limited to typical locks but covers
+ * every synchronization primitive.
+ *
+ * The following ideas were borrowed from LOCKDEP:
+ *
+ * 1) Use a graph to track relationship between classes.
+ * 2) Prevent performance regression using hash.
+ *
+ * The following items were enhanced from LOCKDEP:
+ *
+ * 1) Cover more deadlock cases.
+ * 2) Allow multiple reports.
+ *
+ * TODO: Both LOCKDEP and DEPT should co-exist until DEPT is considered
+ * stable. Then the dependency check routine should be replaced with
+ * DEPT. It should finally look like:
+ *
+ *
+ *
+ * As is:
+ *
+ *    LOCKDEP
+ *    +-----------------------------------------+
+ *    | Lock usage correctness check            | <-> locks
+ *    |                                         |
+ *    |                                         |
+ *    | +-------------------------------------+ |
+ *    | | Dependency check                    | |
+ *    | | (by tracking lock acquisition order)| |
+ *    | +-------------------------------------+ |
+ *    |                                         |
+ *    +-----------------------------------------+
+ *
+ *    DEPT
+ *    +-----------------------------------------+
+ *    | Dependency check                        | <-> waits/events
+ *    | (by tracking wait and event context)    |
+ *    +-----------------------------------------+
+ *
+ *
+ *
+ * To be:
+ *
+ *    LOCKDEP
+ *    +-----------------------------------------+
+ *    | Lock usage correctness check            | <-> locks
+ *    |                                         |
+ *    |                                         |
+ *    |       (Request dependency check)        |
+ *    |                    T                    |
+ *    +--------------------|--------------------+
+ *                         |
+ *    DEPT                 V
+ *    +-----------------------------------------+
+ *    | Dependency check                        | <-> waits/events
+ *    | (by tracking wait and event context)    |
+ *    +-----------------------------------------+
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+static int dept_stop;
+static int dept_per_cpu_ready;
+
+#define DEPT_READY_WARN (!oops_in_progress)
+
+/*
+ * Make all operations using DEPT_WARN_ON() fail on oops_in_progress
+ * and prevent warning message.
+ */
+#define DEPT_WARN_ON_ONCE(c)						\
+	({								\
+		int __ret = 0;						\
+									\
+		if (likely(DEPT_READY_WARN))				\
+			__ret = WARN_ONCE(c, "DEPT_WARN_ON_ONCE: " #c);	\
+		__ret;							\
+	})
+
+#define DEPT_WARN_ONCE(s...)						\
+	({								\
+		if (likely(DEPT_READY_WARN))				\
+			WARN_ONCE(1, "DEPT_WARN_ONCE: " s);		\
+	})
+
+#define DEPT_WARN_ON(c)							\
+	({								\
+		int __ret = 0;						\
+									\
+		if (likely(DEPT_READY_WARN))				\
+			__ret = WARN(c, "DEPT_WARN_ON: " #c);		\
+		__ret;							\
+	})
+
+#define DEPT_WARN(s...)							\
+	({								\
+		if (likely(DEPT_READY_WARN))				\
+			WARN(1, "DEPT_WARN: " s);			\
+	})
+
+#define DEPT_STOP(s...)
\ + ({ \ + WRITE_ONCE(dept_stop, 1); \ + if (likely(DEPT_READY_WARN)) \ + WARN(1, "DEPT_STOP: " s); \ + }) + +#define DEPT_INFO_ONCE(s...) pr_warn_once("DEPT_INFO_ONCE: " s) + +static arch_spinlock_t dept_spin = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED; + +/* + * DEPT internal engine should be careful in using outside functions + * e.g. printk at reporting since that kind of usage might cause + * untrackable deadlock. + */ +static atomic_t dept_outworld = ATOMIC_INIT(0); + +static void dept_outworld_enter(void) +{ + atomic_inc(&dept_outworld); +} + +static void dept_outworld_exit(void) +{ + atomic_dec(&dept_outworld); +} + +static bool dept_outworld_entered(void) +{ + return atomic_read(&dept_outworld); +} + +static bool dept_lock(void) +{ + while (!arch_spin_trylock(&dept_spin)) + if (unlikely(dept_outworld_entered())) + return false; + return true; +} + +static void dept_unlock(void) +{ + arch_spin_unlock(&dept_spin); +} + +/* + * whether to stack-trace on every wait or every ecxt + */ +static bool rich_stack = true; + +enum bfs_ret { + BFS_CONTINUE, + BFS_CONTINUE_REV, + BFS_DONE, + BFS_SKIP, +}; + +static bool after(unsigned int a, unsigned int b) +{ + return (int)(b - a) < 0; +} + +static bool before(unsigned int a, unsigned int b) +{ + return (int)(a - b) < 0; +} + +static bool valid_stack(struct dept_stack *s) +{ + return s && s->nr > 0; +} + +static bool valid_class(struct dept_class *c) +{ + return c->key; +} + +static void invalidate_class(struct dept_class *c) +{ + c->key = 0UL; +} + +static struct dept_ecxt *dep_e(struct dept_dep *d) +{ + return d->ecxt; +} + +static struct dept_wait *dep_w(struct dept_dep *d) +{ + return d->wait; +} + +static struct dept_class *dep_fc(struct dept_dep *d) +{ + return dep_e(d)->class; +} + +static struct dept_class *dep_tc(struct dept_dep *d) +{ + return dep_w(d)->class; +} + +static const char *irq_str(int irq) +{ + if (irq == DEPT_SIRQ) + return "softirq"; + if (irq == DEPT_HIRQ) + return "hardirq"; + return 
"(unknown)"; +} + +static inline struct dept_task *dept_task(void) +{ + return &current->dept_task; +} + +/* + * Dept doesn't work when it's stopped by DEPT_STOP() or when in an NMI + * context. + */ +static bool dept_working(void) +{ + return !READ_ONCE(dept_stop) && !in_nmi(); +} + +/* + * Even k == NULL is considered as a valid key because it would use + * &->map_key as the key in that case. + */ +struct dept_key __dept_no_validate__; +static bool valid_key(struct dept_key *k) +{ + return &__dept_no_validate__ != k; +} + +/* + * Pool + * ===================================================================== + * DEPT maintains pools to provide objects in a safe way. + * + * 1) The static pool is used at the beginning of booting time. + * 2) The local pool is tried first, before the static pool. Freed + * objects are placed back into it. + */ + +enum object_t { +#define OBJECT(id, nr) OBJECT_##id, + #include "dept_object.h" +#undef OBJECT + OBJECT_NR, +}; + +#define OBJECT(id, nr) \ +static struct dept_##id spool_##id[nr]; \ +static DEFINE_PER_CPU(struct llist_head, lpool_##id); + #include "dept_object.h" +#undef OBJECT + +static struct dept_pool pool[OBJECT_NR] = { +#define OBJECT(id, nr) { \ + .name = #id, \ + .obj_sz = sizeof(struct dept_##id), \ + .obj_nr = ATOMIC_INIT(nr), \ + .node_off = offsetof(struct dept_##id, pool_node), \ + .spool = spool_##id, \ + .lpool = &lpool_##id, }, + #include "dept_object.h" +#undef OBJECT +}; + +/* + * Can use llist no matter whether CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG is + * enabled or not, because NMI and other contexts on the same CPU never + * run inside DEPT concurrently, as reentrance is prevented. + */ +static void *from_pool(enum object_t t) +{ + struct dept_pool *p; + struct llist_head *h; + struct llist_node *n; + + /* + * llist_del_first() doesn't allow concurrent access e.g. + * between process and IRQ context. + */ + if (DEPT_WARN_ON(!irqs_disabled())) + return NULL; + + p = &pool[t]; + + /* + * Try local pool first.
+ */ + if (likely(dept_per_cpu_ready)) + h = this_cpu_ptr(p->lpool); + else + h = &p->boot_pool; + + n = llist_del_first(h); + if (n) + return (void *)n - p->node_off; + + /* + * Try static pool. + */ + if (atomic_read(&p->obj_nr) > 0) { + int idx = atomic_dec_return(&p->obj_nr); + + if (idx >= 0) + return p->spool + (idx * p->obj_sz); + } + + DEPT_INFO_ONCE("---------------------------------------------\n" + " Some of Dept's internal resources have run out.\n" + " Dept might still work if the resources get freed.\n" + " However, the chances are Dept will suffer from\n" + " the shortage from now on. The internal resource\n" + " pools need to be extended. Ask max.byungchul.park@gmail.com\n"); + return NULL; +} + +static void to_pool(void *o, enum object_t t) +{ + struct dept_pool *p = &pool[t]; + struct llist_head *h; + + preempt_disable(); + if (likely(dept_per_cpu_ready)) + h = this_cpu_ptr(p->lpool); + else + h = &p->boot_pool; + + llist_add(o + p->node_off, h); + preempt_enable(); +} + +#define OBJECT(id, nr) \ +static void (*ctor_##id)(struct dept_##id *a); \ +static void (*dtor_##id)(struct dept_##id *a); \ +static struct dept_##id *new_##id(void) \ +{ \ + struct dept_##id *a; \ + \ + a = (struct dept_##id *)from_pool(OBJECT_##id); \ + if (unlikely(!a)) \ + return NULL; \ + \ + atomic_set(&a->ref, 1); \ + \ + if (ctor_##id) \ + ctor_##id(a); \ + \ + return a; \ +} \ + \ +static struct dept_##id *get_##id(struct dept_##id *a) \ +{ \ + atomic_inc(&a->ref); \ + return a; \ +} \ + \ +static void put_##id(struct dept_##id *a) \ +{ \ + if (!atomic_dec_return(&a->ref)) { \ + if (dtor_##id) \ + dtor_##id(a); \ + to_pool(a, OBJECT_##id); \ + } \ +} \ + \ +static void del_##id(struct dept_##id *a) \ +{ \ + put_##id(a); \ +} \ + \ +static bool __maybe_unused id##_consumed(struct dept_##id *a) \ +{ \ + return a && atomic_read(&a->ref) > 1; \ +} +#include "dept_object.h" +#undef OBJECT + +#define SET_CONSTRUCTOR(id, f) \ +static void (*ctor_##id)(struct dept_##id *a) = f + +static void 
initialize_dep(struct dept_dep *d) +{ + INIT_LIST_HEAD(&d->bfs_node); + INIT_LIST_HEAD(&d->dep_node); + INIT_LIST_HEAD(&d->dep_rev_node); +} +SET_CONSTRUCTOR(dep, initialize_dep); + +static void initialize_class(struct dept_class *c) +{ + int i; + + for (i = 0; i < DEPT_IRQS_NR; i++) { + struct dept_iecxt *ie = &c->iecxt[i]; + struct dept_iwait *iw = &c->iwait[i]; + + ie->ecxt = NULL; + ie->enirq = i; + ie->staled = false; + + iw->wait = NULL; + iw->irq = i; + iw->staled = false; + iw->touched = false; + } + c->bfs_gen = 0U; + + INIT_LIST_HEAD(&c->all_node); + INIT_LIST_HEAD(&c->dep_head); + INIT_LIST_HEAD(&c->dep_rev_head); +} +SET_CONSTRUCTOR(class, initialize_class); + +static void initialize_ecxt(struct dept_ecxt *e) +{ + int i; + + for (i = 0; i < DEPT_IRQS_NR; i++) { + e->enirq_stack[i] = NULL; + e->enirq_ip[i] = 0UL; + } + e->ecxt_ip = 0UL; + e->ecxt_stack = NULL; + e->enirqf = 0UL; + e->event_ip = 0UL; + e->event_stack = NULL; +} +SET_CONSTRUCTOR(ecxt, initialize_ecxt); + +static void initialize_wait(struct dept_wait *w) +{ + int i; + + for (i = 0; i < DEPT_IRQS_NR; i++) { + w->irq_stack[i] = NULL; + w->irq_ip[i] = 0UL; + } + w->wait_ip = 0UL; + w->wait_stack = NULL; + w->irqf = 0UL; +} +SET_CONSTRUCTOR(wait, initialize_wait); + +static void initialize_stack(struct dept_stack *s) +{ + s->nr = 0; +} +SET_CONSTRUCTOR(stack, initialize_stack); + +#define OBJECT(id, nr) \ +static void (*ctor_##id)(struct dept_##id *a); + #include "dept_object.h" +#undef OBJECT + +#undef SET_CONSTRUCTOR + +#define SET_DESTRUCTOR(id, f) \ +static void (*dtor_##id)(struct dept_##id *a) = f + +static void destroy_dep(struct dept_dep *d) +{ + if (dep_e(d)) + put_ecxt(dep_e(d)); + if (dep_w(d)) + put_wait(dep_w(d)); +} +SET_DESTRUCTOR(dep, destroy_dep); + +static void destroy_ecxt(struct dept_ecxt *e) +{ + int i; + + for (i = 0; i < DEPT_IRQS_NR; i++) + if (e->enirq_stack[i]) + put_stack(e->enirq_stack[i]); + if (e->class) + put_class(e->class); + if (e->ecxt_stack) + 
put_stack(e->ecxt_stack); + if (e->event_stack) + put_stack(e->event_stack); +} +SET_DESTRUCTOR(ecxt, destroy_ecxt); + +static void destroy_wait(struct dept_wait *w) +{ + int i; + + for (i = 0; i < DEPT_IRQS_NR; i++) + if (w->irq_stack[i]) + put_stack(w->irq_stack[i]); + if (w->class) + put_class(w->class); + if (w->wait_stack) + put_stack(w->wait_stack); +} +SET_DESTRUCTOR(wait, destroy_wait); + +#define OBJECT(id, nr) \ +static void (*dtor_##id)(struct dept_##id *a); + #include "dept_object.h" +#undef OBJECT + +#undef SET_DESTRUCTOR + +/* + * Caching and hashing + * ===================================================================== + * DEPT makes use of caching and hashing to improve performance. Each + * object can be obtained in O(1) with its key. + * + * NOTE: Currently we assume all the objects in the hashes will never be + * removed. Implement it when needed. + */ + +/* + * Some information might be lost but it's only for the hashing key. + */ +static unsigned long mix(unsigned long a, unsigned long b) +{ + int halfbits = sizeof(unsigned long) * 8 / 2; + unsigned long halfmask = (1UL << halfbits) - 1UL; + + return (a << halfbits) | (b & halfmask); +} + +static bool cmp_dep(struct dept_dep *d1, struct dept_dep *d2) +{ + return dep_fc(d1)->key == dep_fc(d2)->key && + dep_tc(d1)->key == dep_tc(d2)->key; +} + +static unsigned long key_dep(struct dept_dep *d) +{ + return mix(dep_fc(d)->key, dep_tc(d)->key); +} + +static bool cmp_class(struct dept_class *c1, struct dept_class *c2) +{ + return c1->key == c2->key; +} + +static unsigned long key_class(struct dept_class *c) +{ + return c->key; +} + +#define HASH(id, bits) \ +static struct hlist_head table_##id[1 << (bits)]; \ + \ +static struct hlist_head *head_##id(struct dept_##id *a) \ +{ \ + return table_##id + hash_long(key_##id(a), bits); \ +} \ + \ +static struct dept_##id *hash_lookup_##id(struct dept_##id *a) \ +{ \ + struct dept_##id *b; \ + \ + hlist_for_each_entry_rcu(b, head_##id(a), hash_node) \ + if 
(cmp_##id(a, b)) \ + return b; \ + return NULL; \ +} \ + \ +static void hash_add_##id(struct dept_##id *a) \ +{ \ + get_##id(a); \ + hlist_add_head_rcu(&a->hash_node, head_##id(a)); \ +} \ + \ +static void hash_del_##id(struct dept_##id *a) \ +{ \ + hlist_del_rcu(&a->hash_node); \ + put_##id(a); \ +} +#include "dept_hash.h" +#undef HASH + +static struct dept_dep *lookup_dep(struct dept_class *fc, + struct dept_class *tc) +{ + struct dept_ecxt onetime_e = { .class = fc }; + struct dept_wait onetime_w = { .class = tc }; + struct dept_dep onetime_d = { .ecxt = &onetime_e, + .wait = &onetime_w }; + return hash_lookup_dep(&onetime_d); +} + +static struct dept_class *lookup_class(unsigned long key) +{ + struct dept_class onetime_c = { .key = key }; + + return hash_lookup_class(&onetime_c); +} + +/* + * Report + * ===================================================================== + * DEPT prints useful information to help debugging on detection of a + * problematic dependency. + */ + +static void print_ip_stack(unsigned long ip, struct dept_stack *s) +{ + if (ip) + print_ip_sym(KERN_WARNING, ip); + + if (valid_stack(s)) { + pr_warn("stacktrace:\n"); + stack_trace_print(s->raw, s->nr, 5); + } + + if (!ip && !valid_stack(s)) + pr_warn("(N/A)\n"); +} + +#define print_spc(spc, fmt, ...) \ + pr_warn("%*c" fmt, (spc) * 4, ' ', ##__VA_ARGS__) + +static void print_diagram(struct dept_dep *d) +{ + struct dept_ecxt *e = dep_e(d); + struct dept_wait *w = dep_w(d); + struct dept_class *fc = dep_fc(d); + struct dept_class *tc = dep_tc(d); + unsigned long irqf; + int irq; + bool firstline = true; + int spc = 1; + const char *w_fn = w->wait_fn ?: "(unknown)"; + const char *e_fn = e->event_fn ?: "(unknown)"; + const char *c_fn = e->ecxt_fn ?: "(unknown)"; + const char *fc_n = fc->sched_map ? "" : (fc->name ?: "(unknown)"); + const char *tc_n = tc->sched_map ? 
"" : (tc->name ?: "(unknown)"); + + irqf = e->enirqf & w->irqf; + for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) { + if (!firstline) + pr_warn("\nor\n\n"); + firstline = false; + + print_spc(spc, "[S] %s(%s:%d)\n", c_fn, fc_n, fc->sub_id); + print_spc(spc, " <%s interrupt>\n", irq_str(irq)); + print_spc(spc + 1, "[W] %s(%s:%d)\n", w_fn, tc_n, tc->sub_id); + print_spc(spc, "[E] %s(%s:%d)\n", e_fn, fc_n, fc->sub_id); + } + + if (!irqf) { + print_spc(spc, "[S] %s(%s:%d)\n", c_fn, fc_n, fc->sub_id); + print_spc(spc, "[W] %s(%s:%d)\n", w_fn, tc_n, tc->sub_id); + print_spc(spc, "[E] %s(%s:%d)\n", e_fn, fc_n, fc->sub_id); + } +} + +static void print_dep(struct dept_dep *d) +{ + struct dept_ecxt *e = dep_e(d); + struct dept_wait *w = dep_w(d); + struct dept_class *fc = dep_fc(d); + struct dept_class *tc = dep_tc(d); + unsigned long irqf; + int irq; + const char *w_fn = w->wait_fn ?: "(unknown)"; + const char *e_fn = e->event_fn ?: "(unknown)"; + const char *c_fn = e->ecxt_fn ?: "(unknown)"; + const char *fc_n = fc->sched_map ? "" : (fc->name ?: "(unknown)"); + const char *tc_n = tc->sched_map ? 
"" : (tc->name ?: "(unknown)"); + + irqf = e->enirqf & w->irqf; + for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) { + pr_warn("%s has been enabled:\n", irq_str(irq)); + print_ip_stack(e->enirq_ip[irq], e->enirq_stack[irq]); + pr_warn("\n"); + + pr_warn("[S] %s(%s:%d):\n", c_fn, fc_n, fc->sub_id); + print_ip_stack(e->ecxt_ip, e->ecxt_stack); + pr_warn("\n"); + + pr_warn("[W] %s(%s:%d) in %s context:\n", + w_fn, tc_n, tc->sub_id, irq_str(irq)); + print_ip_stack(w->irq_ip[irq], w->irq_stack[irq]); + pr_warn("\n"); + + pr_warn("[E] %s(%s:%d):\n", e_fn, fc_n, fc->sub_id); + print_ip_stack(e->event_ip, e->event_stack); + } + + if (!irqf) { + pr_warn("[S] %s(%s:%d):\n", c_fn, fc_n, fc->sub_id); + print_ip_stack(e->ecxt_ip, e->ecxt_stack); + pr_warn("\n"); + + pr_warn("[W] %s(%s:%d):\n", w_fn, tc_n, tc->sub_id); + print_ip_stack(w->wait_ip, w->wait_stack); + pr_warn("\n"); + + pr_warn("[E] %s(%s:%d):\n", e_fn, fc_n, fc->sub_id); + print_ip_stack(e->event_ip, e->event_stack); + } +} + +static void save_current_stack(int skip); + +/* + * Print all classes in a circle. 
+ */ +static void print_circle(struct dept_class *c) +{ + struct dept_class *fc = c->bfs_parent; + struct dept_class *tc = c; + int i; + + dept_outworld_enter(); + save_current_stack(6); + + pr_warn("===================================================\n"); + pr_warn("DEPT: Circular dependency has been detected.\n"); + pr_warn("%s %.*s %s\n", init_utsname()->release, + (int)strcspn(init_utsname()->version, " "), + init_utsname()->version, + print_tainted()); + pr_warn("---------------------------------------------------\n"); + pr_warn("summary\n"); + pr_warn("---------------------------------------------------\n"); + + if (fc == tc) + pr_warn("*** AA DEADLOCK ***\n\n"); + else + pr_warn("*** DEADLOCK ***\n\n"); + + i = 0; + do { + struct dept_dep *d = lookup_dep(fc, tc); + + pr_warn("context %c\n", 'A' + (i++)); + print_diagram(d); + if (fc != c) + pr_warn("\n"); + + tc = fc; + fc = fc->bfs_parent; + } while (tc != c); + + pr_warn("\n"); + pr_warn("[S]: start of the event context\n"); + pr_warn("[W]: the wait blocked\n"); + pr_warn("[E]: the event not reachable\n"); + + i = 0; + do { + struct dept_dep *d = lookup_dep(fc, tc); + + pr_warn("---------------------------------------------------\n"); + pr_warn("context %c's detail\n", 'A' + i); + pr_warn("---------------------------------------------------\n"); + pr_warn("context %c\n", 'A' + (i++)); + print_diagram(d); + pr_warn("\n"); + print_dep(d); + + tc = fc; + fc = fc->bfs_parent; + } while (tc != c); + + pr_warn("---------------------------------------------------\n"); + pr_warn("information that might be helpful\n"); + pr_warn("---------------------------------------------------\n"); + dump_stack(); + + dept_outworld_exit(); +} + +/* + * BFS(Breadth First Search) + * ===================================================================== + * Whenever a new dependency is added into the graph, search the graph + * for a new circular dependency. 
+ */ + +static void enqueue(struct list_head *h, struct dept_dep *d) +{ + list_add_tail(&d->bfs_node, h); +} + +static struct dept_dep *dequeue(struct list_head *h) +{ + struct dept_dep *d; + + d = list_first_entry(h, struct dept_dep, bfs_node); + list_del(&d->bfs_node); + return d; +} + +static bool empty(struct list_head *h) +{ + return list_empty(h); +} + +static void extend_queue(struct list_head *h, struct dept_class *cur) +{ + struct dept_dep *d; + + list_for_each_entry(d, &cur->dep_head, dep_node) { + struct dept_class *next = dep_tc(d); + + if (cur->bfs_gen == next->bfs_gen) + continue; + next->bfs_gen = cur->bfs_gen; + next->bfs_dist = cur->bfs_dist + 1; + next->bfs_parent = cur; + enqueue(h, d); + } +} + +static void extend_queue_rev(struct list_head *h, struct dept_class *cur) +{ + struct dept_dep *d; + + list_for_each_entry(d, &cur->dep_rev_head, dep_rev_node) { + struct dept_class *next = dep_fc(d); + + if (cur->bfs_gen == next->bfs_gen) + continue; + next->bfs_gen = cur->bfs_gen; + next->bfs_dist = cur->bfs_dist + 1; + next->bfs_parent = cur; + enqueue(h, d); + } +} + +typedef enum bfs_ret bfs_f(struct dept_dep *d, void *in, void **out); +static unsigned int bfs_gen; + +/* + * NOTE: Must be called with dept_lock held. + */ +static void bfs(struct dept_class *c, bfs_f *cb, void *in, void **out) +{ + LIST_HEAD(q); + enum bfs_ret ret; + + if (DEPT_WARN_ON(!cb)) + return; + + /* + * Avoid zero bfs_gen. 
+ */ + bfs_gen = bfs_gen + 1 ?: 1; + + c->bfs_gen = bfs_gen; + c->bfs_dist = 0; + c->bfs_parent = c; + + ret = cb(NULL, in, out); + if (ret == BFS_DONE) + return; + if (ret == BFS_SKIP) + return; + if (ret == BFS_CONTINUE) + extend_queue(&q, c); + if (ret == BFS_CONTINUE_REV) + extend_queue_rev(&q, c); + + while (!empty(&q)) { + struct dept_dep *d = dequeue(&q); + + ret = cb(d, in, out); + if (ret == BFS_DONE) + break; + if (ret == BFS_SKIP) + continue; + if (ret == BFS_CONTINUE) + extend_queue(&q, dep_tc(d)); + if (ret == BFS_CONTINUE_REV) + extend_queue_rev(&q, dep_fc(d)); + } + + while (!empty(&q)) + dequeue(&q); +} + +/* + * Main operations + * ===================================================================== + * Add dependencies - Each new dependency is added into the graph and + * checked if it forms a circular dependency. + * + * Track waits - Waits are queued into the ring buffer for later use to + * generate appropriate dependencies with cross-event. + * + * Track event contexts(ecxt) - Event contexts are pushed into local + * stack for later use to generate appropriate dependencies with waits. + */ + +static unsigned long cur_enirqf(void); +static int cur_irq(void); +static unsigned int cur_ctxt_id(void); + +static struct dept_iecxt *iecxt(struct dept_class *c, int irq) +{ + return &c->iecxt[irq]; +} + +static struct dept_iwait *iwait(struct dept_class *c, int irq) +{ + return &c->iwait[irq]; +} + +static void stale_iecxt(struct dept_iecxt *ie) +{ + if (ie->ecxt) + put_ecxt(ie->ecxt); + + WRITE_ONCE(ie->ecxt, NULL); + WRITE_ONCE(ie->staled, true); +} + +static void set_iecxt(struct dept_iecxt *ie, struct dept_ecxt *e) +{ + /* + * ->ecxt will never be updated once getting set until the class + * gets removed. 
+ */ + if (ie->ecxt) + DEPT_WARN_ON(1); + else + WRITE_ONCE(ie->ecxt, get_ecxt(e)); +} + +static void stale_iwait(struct dept_iwait *iw) +{ + if (iw->wait) + put_wait(iw->wait); + + WRITE_ONCE(iw->wait, NULL); + WRITE_ONCE(iw->staled, true); +} + +static void set_iwait(struct dept_iwait *iw, struct dept_wait *w) +{ + /* + * ->wait will never be updated once getting set until the class + * gets removed. + */ + if (iw->wait) + DEPT_WARN_ON(1); + else + WRITE_ONCE(iw->wait, get_wait(w)); + + iw->touched = true; +} + +static void touch_iwait(struct dept_iwait *iw) +{ + iw->touched = true; +} + +static void untouch_iwait(struct dept_iwait *iw) +{ + iw->touched = false; +} + +static struct dept_stack *get_current_stack(void) +{ + struct dept_stack *s = dept_task()->stack; + + return s ? get_stack(s) : NULL; +} + +static void prepare_current_stack(void) +{ + struct dept_stack *s = dept_task()->stack; + + /* + * The dept_stack is already ready. + */ + if (s && !stack_consumed(s)) { + s->nr = 0; + return; + } + + if (s) + put_stack(s); + + s = dept_task()->stack = new_stack(); + if (!s) + return; + + get_stack(s); + del_stack(s); +} + +static void save_current_stack(int skip) +{ + struct dept_stack *s = dept_task()->stack; + + if (!s) + return; + if (valid_stack(s)) + return; + + s->nr = stack_trace_save(s->raw, DEPT_MAX_STACK_ENTRY, skip); +} + +static void finish_current_stack(void) +{ + struct dept_stack *s = dept_task()->stack; + + if (stack_consumed(s)) + save_current_stack(2); +} + +/* + * FIXME: For now, disable LOCKDEP while DEPT is working. + * + * Both LOCKDEP and DEPT report on deadlock detection using printk, + * taking the risk of another deadlock that might be caused by the + * locks of console or printk between the inside and outside of them. + * + * For DEPT, it's no problem since multiple reports are allowed. But it + * would be a bad idea for LOCKDEP since it will stop even on a single report. 
So we need to prevent LOCKDEP from reporting, and let DEPT take that risk + * when reporting something. + */ +#include + +void noinstr dept_off(void) +{ + dept_task()->recursive++; + lockdep_off(); +} + +void noinstr dept_on(void) +{ + dept_task()->recursive--; + lockdep_on(); +} + +static unsigned long dept_enter(void) +{ + unsigned long flags; + + flags = arch_local_irq_save(); + dept_off(); + prepare_current_stack(); + return flags; +} + +static void dept_exit(unsigned long flags) +{ + finish_current_stack(); + dept_on(); + arch_local_irq_restore(flags); +} + +static unsigned long dept_enter_recursive(void) +{ + unsigned long flags; + + flags = arch_local_irq_save(); + return flags; +} + +static void dept_exit_recursive(unsigned long flags) +{ + arch_local_irq_restore(flags); +} + +/* + * NOTE: Must be called with dept_lock held. + */ +static struct dept_dep *__add_dep(struct dept_ecxt *e, + struct dept_wait *w) +{ + struct dept_dep *d; + + if (DEPT_WARN_ON(!valid_class(e->class))) + return NULL; + + if (DEPT_WARN_ON(!valid_class(w->class))) + return NULL; + + if (lookup_dep(e->class, w->class)) + return NULL; + + d = new_dep(); + if (unlikely(!d)) + return NULL; + + d->ecxt = get_ecxt(e); + d->wait = get_wait(w); + + /* + * Add the dependency into hash and graph. + */ + hash_add_dep(d); + list_add(&d->dep_node, &dep_fc(d)->dep_head); + list_add(&d->dep_rev_node, &dep_tc(d)->dep_rev_head); + return d; +} + +static enum bfs_ret cb_check_dl(struct dept_dep *d, + void *in, void **out) +{ + struct dept_dep *new = (struct dept_dep *)in; + + /* + * initial condition for this BFS search + */ + if (!d) { + dep_tc(new)->bfs_parent = dep_fc(new); + + if (dep_tc(new) != dep_fc(new)) + return BFS_CONTINUE; + + /* + * An AA circle does not make an additional deadlock. We + * don't have to continue this BFS search. + */ + print_circle(dep_tc(new)); + return BFS_DONE; + } + + /* + * Allow multiple reports. 
+ */ + if (dep_tc(d) == dep_fc(new)) + print_circle(dep_tc(new)); + + return BFS_CONTINUE; +} + +/* + * This function is actually in charge of reporting. + */ +static void check_dl_bfs(struct dept_dep *d) +{ + bfs(dep_tc(d), cb_check_dl, (void *)d, NULL); +} + +static enum bfs_ret cb_find_iw(struct dept_dep *d, void *in, void **out) +{ + int irq = *(int *)in; + struct dept_class *fc; + struct dept_iwait *iw; + + if (DEPT_WARN_ON(!out)) + return BFS_DONE; + + /* + * initial condition for this BFS search + */ + if (!d) + return BFS_CONTINUE_REV; + + fc = dep_fc(d); + iw = iwait(fc, irq); + + /* + * If any parent's ->wait was set, then the children would've + * been touched. + */ + if (!iw->touched) + return BFS_SKIP; + + if (!iw->wait) + return BFS_CONTINUE_REV; + + *out = iw; + return BFS_DONE; +} + +static struct dept_iwait *find_iw_bfs(struct dept_class *c, int irq) +{ + struct dept_iwait *iw = iwait(c, irq); + struct dept_iwait *found = NULL; + + if (iw->wait) + return iw; + + /* + * '->touched == false' guarantees there's no parent that has + * been set ->wait. + */ + if (!iw->touched) + return NULL; + + bfs(c, cb_find_iw, (void *)&irq, (void **)&found); + + if (found) + return found; + + untouch_iwait(iw); + return NULL; +} + +static enum bfs_ret cb_touch_iw_find_ie(struct dept_dep *d, void *in, + void **out) +{ + int irq = *(int *)in; + struct dept_class *tc; + struct dept_iecxt *ie; + struct dept_iwait *iw; + + if (DEPT_WARN_ON(!out)) + return BFS_DONE; + + /* + * initial condition for this BFS search + */ + if (!d) + return BFS_CONTINUE; + + tc = dep_tc(d); + ie = iecxt(tc, irq); + iw = iwait(tc, irq); + + touch_iwait(iw); + + if (!ie->ecxt) + return BFS_CONTINUE; + + if (!*out) + *out = ie; + + return BFS_CONTINUE; +} + +static struct dept_iecxt *touch_iw_find_ie_bfs(struct dept_class *c, + int irq) +{ + struct dept_iecxt *ie = iecxt(c, irq); + struct dept_iwait *iw = iwait(c, irq); + struct dept_iecxt *found = ie->ecxt ? 
ie : NULL; + + touch_iwait(iw); + bfs(c, cb_touch_iw_find_ie, (void *)&irq, (void **)&found); + return found; +} + +/* + * Should be called with dept_lock held. + */ +static void __add_idep(struct dept_iecxt *ie, struct dept_iwait *iw) +{ + struct dept_dep *new; + + /* + * There's nothing to do. + */ + if (!ie || !iw || !ie->ecxt || !iw->wait) + return; + + new = __add_dep(ie->ecxt, iw->wait); + + /* + * Deadlock detected. Let check_dl_bfs() report it. + */ + if (new) { + check_dl_bfs(new); + stale_iecxt(ie); + stale_iwait(iw); + } + + /* + * If !new, it would be the case of a lack of object resources. + * Just let it go and let it be checked by other chances. + * Retrying is meaningless in that case. + */ +} + +static void set_check_iecxt(struct dept_class *c, int irq, + struct dept_ecxt *e) +{ + struct dept_iecxt *ie = iecxt(c, irq); + + set_iecxt(ie, e); + __add_idep(ie, find_iw_bfs(c, irq)); +} + +static void set_check_iwait(struct dept_class *c, int irq, + struct dept_wait *w) +{ + struct dept_iwait *iw = iwait(c, irq); + + set_iwait(iw, w); + __add_idep(touch_iw_find_ie_bfs(c, irq), iw); +} + +static void add_iecxt(struct dept_class *c, int irq, struct dept_ecxt *e, + bool stack) +{ + /* + * This access is safe since we ensure e->class has been set locally. + */ + struct dept_task *dt = dept_task(); + struct dept_iecxt *ie = iecxt(c, irq); + + if (DEPT_WARN_ON(!valid_class(c))) + return; + + if (unlikely(READ_ONCE(ie->staled))) + return; + + /* + * Skip add_iecxt() if ie->ecxt has ever been set at least once, + * which means it either has a valid ->ecxt or has been staled. + */ + if (READ_ONCE(ie->ecxt)) + return; + + if (unlikely(!dept_lock())) + return; + + if (unlikely(ie->staled)) + goto unlock; + if (ie->ecxt) + goto unlock; + + e->enirqf |= (1UL << irq); + + /* + * Should be NULL since it's the first time that these + * enirq_{ip,stack}[irq] have ever been set. 
+ */ + DEPT_WARN_ON(e->enirq_ip[irq]); + DEPT_WARN_ON(e->enirq_stack[irq]); + + e->enirq_ip[irq] = dt->enirq_ip[irq]; + e->enirq_stack[irq] = stack ? get_current_stack() : NULL; + + set_check_iecxt(c, irq, e); +unlock: + dept_unlock(); +} + +static void add_iwait(struct dept_class *c, int irq, struct dept_wait *w) +{ + struct dept_iwait *iw = iwait(c, irq); + + if (DEPT_WARN_ON(!valid_class(c))) + return; + + if (unlikely(READ_ONCE(iw->staled))) + return; + + /* + * Skip add_iwait() if iw->wait has ever been set at least once, + * which means it either has a valid ->wait or has been staled. + */ + if (READ_ONCE(iw->wait)) + return; + + if (unlikely(!dept_lock())) + return; + + if (unlikely(iw->staled)) + goto unlock; + if (iw->wait) + goto unlock; + + w->irqf |= (1UL << irq); + + /* + * Should be NULL since it's the first time that these + * irq_{ip,stack}[irq] have ever been set. + */ + DEPT_WARN_ON(w->irq_ip[irq]); + DEPT_WARN_ON(w->irq_stack[irq]); + + w->irq_ip[irq] = w->wait_ip; + w->irq_stack[irq] = get_current_stack(); + + set_check_iwait(c, irq, w); +unlock: + dept_unlock(); +} + +static struct dept_wait_hist *hist(int pos) +{ + struct dept_task *dt = dept_task(); + + return dt->wait_hist + (pos % DEPT_MAX_WAIT_HIST); +} + +static int hist_pos_next(void) +{ + struct dept_task *dt = dept_task(); + + return dt->wait_hist_pos % DEPT_MAX_WAIT_HIST; +} + +static void hist_advance(void) +{ + struct dept_task *dt = dept_task(); + + dt->wait_hist_pos++; + dt->wait_hist_pos %= DEPT_MAX_WAIT_HIST; +} + +static struct dept_wait_hist *new_hist(void) +{ + struct dept_wait_hist *wh = hist(hist_pos_next()); + + hist_advance(); + return wh; +} + +static void add_hist(struct dept_wait *w, unsigned int wg, unsigned int ctxt_id) +{ + struct dept_wait_hist *wh = new_hist(); + + if (likely(wh->wait)) + put_wait(wh->wait); + + wh->wait = get_wait(w); + wh->wgen = wg; + wh->ctxt_id = ctxt_id; +} + +/* + * Should be called after setting up e's iecxt and w's iwait. 
+ */ +static void add_dep(struct dept_ecxt *e, struct dept_wait *w) +{ + struct dept_class *fc = e->class; + struct dept_class *tc = w->class; + struct dept_dep *d; + int i; + + if (lookup_dep(fc, tc)) + return; + + if (unlikely(!dept_lock())) + return; + + /* + * __add_dep() will lookup_dep() again with lock held. + */ + d = __add_dep(e, w); + if (d) { + check_dl_bfs(d); + + for (i = 0; i < DEPT_IRQS_NR; i++) { + struct dept_iwait *fiw = iwait(fc, i); + struct dept_iecxt *found_ie; + struct dept_iwait *found_iw; + + /* + * '->touched == false' guarantees there's no + * parent that has been set ->wait. + */ + if (!fiw->touched) + continue; + + /* + * find_iw_bfs() will untouch the iwait if + * not found. + */ + found_iw = find_iw_bfs(fc, i); + + if (!found_iw) + continue; + + found_ie = touch_iw_find_ie_bfs(tc, i); + __add_idep(found_ie, found_iw); + } + } + dept_unlock(); +} + +static atomic_t wgen = ATOMIC_INIT(1); + +static void add_wait(struct dept_class *c, unsigned long ip, + const char *w_fn, int sub_l, bool sched_sleep) +{ + struct dept_task *dt = dept_task(); + struct dept_wait *w; + unsigned int wg = 0U; + int irq; + int i; + + if (DEPT_WARN_ON(!valid_class(c))) + return; + + w = new_wait(); + if (unlikely(!w)) + return; + + WRITE_ONCE(w->class, get_class(c)); + w->wait_ip = ip; + w->wait_fn = w_fn; + w->wait_stack = get_current_stack(); + w->sched_sleep = sched_sleep; + + irq = cur_irq(); + if (irq < DEPT_IRQS_NR) + add_iwait(c, irq, w); + + /* + * Avoid adding dependency between user aware nested ecxt and + * wait. + */ + for (i = dt->ecxt_held_pos - 1; i >= 0; i--) { + struct dept_ecxt_held *eh; + + eh = dt->ecxt_held + i; + + /* + * the case of invalid key'ed one + */ + if (!eh->ecxt) + continue; + + if (eh->ecxt->class != c || eh->sub_l == sub_l) + add_dep(eh->ecxt, w); + } + + if (!wait_consumed(w) && !rich_stack) { + if (w->wait_stack) + put_stack(w->wait_stack); + w->wait_stack = NULL; + } + + /* + * Avoid zero wgen. 
+ */ + wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen); + add_hist(w, wg, cur_ctxt_id()); + + del_wait(w); +} + +static bool add_ecxt(struct dept_map *m, struct dept_class *c, + unsigned long ip, const char *c_fn, + const char *e_fn, int sub_l) +{ + struct dept_task *dt = dept_task(); + struct dept_ecxt_held *eh; + struct dept_ecxt *e; + unsigned long irqf; + int irq; + + if (DEPT_WARN_ON(!valid_class(c))) + return false; + + if (DEPT_WARN_ON_ONCE(dt->ecxt_held_pos >= DEPT_MAX_ECXT_HELD)) + return false; + + if (m->nocheck) { + eh = dt->ecxt_held + (dt->ecxt_held_pos++); + eh->ecxt = NULL; + eh->map = m; + eh->class = get_class(c); + eh->wgen = atomic_read(&wgen); + eh->sub_l = sub_l; + + return true; + } + + e = new_ecxt(); + if (unlikely(!e)) + return false; + + e->class = get_class(c); + e->ecxt_ip = ip; + e->ecxt_stack = ip && rich_stack ? get_current_stack() : NULL; + e->event_fn = e_fn; + e->ecxt_fn = c_fn; + + eh = dt->ecxt_held + (dt->ecxt_held_pos++); + eh->ecxt = get_ecxt(e); + eh->map = m; + eh->class = get_class(c); + eh->wgen = atomic_read(&wgen); + eh->sub_l = sub_l; + + irqf = cur_enirqf(); + for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) + add_iecxt(c, irq, e, false); + + del_ecxt(e); + return true; +} + +static int find_ecxt_pos(struct dept_map *m, struct dept_class *c, + bool newfirst) +{ + struct dept_task *dt = dept_task(); + int i; + + if (newfirst) { + for (i = dt->ecxt_held_pos - 1; i >= 0; i--) { + struct dept_ecxt_held *eh; + + eh = dt->ecxt_held + i; + if (eh->map == m && eh->class == c) + return i; + } + } else { + for (i = 0; i < dt->ecxt_held_pos; i++) { + struct dept_ecxt_held *eh; + + eh = dt->ecxt_held + i; + if (eh->map == m && eh->class == c) + return i; + } + } + return -1; +} + +static bool pop_ecxt(struct dept_map *m, struct dept_class *c) +{ + struct dept_task *dt = dept_task(); + int pos; + int i; + + pos = find_ecxt_pos(m, c, true); + if (pos == -1) + return false; + + if (dt->ecxt_held[pos].class) + 
put_class(dt->ecxt_held[pos].class); + + if (dt->ecxt_held[pos].ecxt) + put_ecxt(dt->ecxt_held[pos].ecxt); + + dt->ecxt_held_pos--; + + for (i = pos; i < dt->ecxt_held_pos; i++) + dt->ecxt_held[i] = dt->ecxt_held[i + 1]; + return true; +} + +static bool good_hist(struct dept_wait_hist *wh, unsigned int wg) +{ + return wh->wait != NULL && before(wg, wh->wgen); +} + +/* + * Binary-search the ring buffer for the earliest valid wait. + */ +static int find_hist_pos(unsigned int wg) +{ + int oldest; + int l; + int r; + int pos; + + oldest = hist_pos_next(); + if (unlikely(good_hist(hist(oldest), wg))) { + DEPT_INFO_ONCE("Need to expand the ring buffer.\n"); + return oldest; + } + + l = oldest + 1; + r = oldest + DEPT_MAX_WAIT_HIST - 1; + for (pos = (l + r) / 2; l <= r; pos = (l + r) / 2) { + struct dept_wait_hist *p = hist(pos - 1); + struct dept_wait_hist *wh = hist(pos); + + if (!good_hist(p, wg) && good_hist(wh, wg)) + return pos % DEPT_MAX_WAIT_HIST; + if (good_hist(wh, wg)) + r = pos - 1; + else + l = pos + 1; + } + return -1; +} + +static void do_event(struct dept_map *m, struct dept_class *c, + unsigned int wg, unsigned long ip) +{ + struct dept_task *dt = dept_task(); + struct dept_wait_hist *wh; + struct dept_ecxt_held *eh; + unsigned int ctxt_id; + int end; + int pos; + int i; + + if (DEPT_WARN_ON(!valid_class(c))) + return; + + if (m->nocheck) + return; + + /* + * The event was triggered before wait. + */ + if (!wg) + return; + + pos = find_ecxt_pos(m, c, false); + if (pos == -1) + return; + + eh = dt->ecxt_held + pos; + + if (DEPT_WARN_ON(!eh->ecxt)) + return; + + eh->ecxt->event_ip = ip; + eh->ecxt->event_stack = get_current_stack(); + + /* + * The ecxt already has done what it needs. + */ + if (!before(wg, eh->wgen)) + return; + + pos = find_hist_pos(wg); + if (pos == -1) + return; + + ctxt_id = cur_ctxt_id(); + end = hist_pos_next(); + end = end > pos ? 
end : end + DEPT_MAX_WAIT_HIST; + for (wh = hist(pos); pos < end; wh = hist(++pos)) { + if (after(wh->wgen, eh->wgen)) + break; + + if (dt->in_sched && wh->wait->sched_sleep) + continue; + + if (wh->ctxt_id == ctxt_id) + add_dep(eh->ecxt, wh->wait); + } + + for (i = 0; i < DEPT_IRQS_NR; i++) { + struct dept_ecxt *e; + + if (before(dt->wgen_enirq[i], wg)) + continue; + + e = eh->ecxt; + add_iecxt(e->class, i, e, false); + } +} + +static void del_dep_rcu(struct rcu_head *rh) +{ + struct dept_dep *d = container_of(rh, struct dept_dep, rh); + + preempt_disable(); + del_dep(d); + preempt_enable(); +} + +/* + * NOTE: Must be called with dept_lock held. + */ +static void disconnect_class(struct dept_class *c) +{ + struct dept_dep *d, *n; + int i; + + list_for_each_entry_safe(d, n, &c->dep_head, dep_node) { + list_del_rcu(&d->dep_node); + list_del_rcu(&d->dep_rev_node); + hash_del_dep(d); + call_rcu(&d->rh, del_dep_rcu); + } + + list_for_each_entry_safe(d, n, &c->dep_rev_head, dep_rev_node) { + list_del_rcu(&d->dep_node); + list_del_rcu(&d->dep_rev_node); + hash_del_dep(d); + call_rcu(&d->rh, del_dep_rcu); + } + + for (i = 0; i < DEPT_IRQS_NR; i++) { + stale_iecxt(iecxt(c, i)); + stale_iwait(iwait(c, i)); + } +} + +/* + * Context control + * ===================================================================== + * Whether a wait is in {hard,soft}-IRQ context or whether + * {hard,soft}-IRQ has been enabled on the way to an event is very + * important to check dependency. All those things should be tracked. + */ + +static unsigned long cur_enirqf(void) +{ + struct dept_task *dt = dept_task(); + int he = dt->hardirqs_enabled; + int se = dt->softirqs_enabled; + + if (he) + return DEPT_HIRQF | (se ? 
DEPT_SIRQF : 0UL); + return 0UL; +} + +static int cur_irq(void) +{ + if (lockdep_softirq_context(current)) + return DEPT_SIRQ; + if (lockdep_hardirq_context()) + return DEPT_HIRQ; + return DEPT_IRQS_NR; +} + +static unsigned int cur_ctxt_id(void) +{ + struct dept_task *dt = dept_task(); + int irq = cur_irq(); + + /* + * Normal process context + */ + if (irq == DEPT_IRQS_NR) + return 0U; + + return dt->irq_id[irq] | (1UL << irq); +} + +static void enirq_transition(int irq) +{ + struct dept_task *dt = dept_task(); + int i; + + /* + * IRQ can cut in on the way to the event. Used for cross-event + * detection. + * + * wait context event context(ecxt) + * ------------ ------------------- + * wait event + * UPDATE wgen + * observe IRQ enabled + * UPDATE wgen + * keep the wgen locally + * + * on the event + * check the wgen kept + */ + + /* + * Avoid zero wgen. + */ + dt->wgen_enirq[irq] = atomic_inc_return(&wgen) ?: + atomic_inc_return(&wgen); + + for (i = dt->ecxt_held_pos - 1; i >= 0; i--) { + struct dept_ecxt_held *eh; + struct dept_ecxt *e; + + eh = dt->ecxt_held + i; + e = eh->ecxt; + if (e) + add_iecxt(e->class, irq, e, true); + } +} + +static void dept_enirq(unsigned long ip) +{ + struct dept_task *dt = dept_task(); + unsigned long irqf = cur_enirqf(); + int irq; + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + /* + * IRQ ON/OFF transition might happen while Dept is working. + * We cannot handle recursive entrance. Just ignore it. + * Only transitions outside of Dept will be considered. + */ + if (dt->recursive) + return; + + flags = dept_enter(); + + for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) { + dt->enirq_ip[irq] = ip; + enirq_transition(irq); + } + + dept_exit(flags); +} + +void dept_softirqs_on_ip(unsigned long ip) +{ + /* + * Assumes that it's called with IRQ disabled so that accessing + * current's fields is not racy. 
+ */ + dept_task()->softirqs_enabled = true; + dept_enirq(ip); +} + +void dept_hardirqs_on(void) +{ + /* + * Assumes that it's called with IRQ disabled so that accessing + * current's fields is not racy. + */ + dept_task()->hardirqs_enabled = true; + dept_enirq(_RET_IP_); +} + +void dept_softirqs_off(void) +{ + /* + * Assumes that it's called with IRQ disabled so that accessing + * current's fields is not racy. + */ + dept_task()->softirqs_enabled = false; +} + +void dept_hardirqs_off(void) +{ + /* + * Assumes that it's called with IRQ disabled so that accessing + * current's fields is not racy. + */ + dept_task()->hardirqs_enabled = false; +} + +/* + * Ensure it's the outermost softirq context. + */ +void dept_softirq_enter(void) +{ + struct dept_task *dt = dept_task(); + + dt->irq_id[DEPT_SIRQ] += 1UL << DEPT_IRQS_NR; +} + +/* + * Ensure it's the outermost hardirq context. + */ +void dept_hardirq_enter(void) +{ + struct dept_task *dt = dept_task(); + + dt->irq_id[DEPT_HIRQ] += 1UL << DEPT_IRQS_NR; +} + +void dept_sched_enter(void) +{ + dept_task()->in_sched = true; +} + +void dept_sched_exit(void) +{ + dept_task()->in_sched = false; +} + +/* + * Exposed APIs + * ===================================================================== + */ + +static void clean_classes_cache(struct dept_key *k) +{ + int i; + + for (i = 0; i < DEPT_MAX_SUBCLASSES_CACHE; i++) { + if (!READ_ONCE(k->classes[i])) + continue; + + WRITE_ONCE(k->classes[i], NULL); + } +} + +void dept_map_init(struct dept_map *m, struct dept_key *k, int sub_u, + const char *n) +{ + unsigned long flags; + + if (unlikely(!dept_working())) { + m->nocheck = true; + return; + } + + if (DEPT_WARN_ON(sub_u < 0)) { + m->nocheck = true; + return; + } + + if (DEPT_WARN_ON(sub_u >= DEPT_MAX_SUBCLASSES_USR)) { + m->nocheck = true; + return; + } + + /* + * Allow recursive entrance. 
+ */ + flags = dept_enter_recursive(); + + clean_classes_cache(&m->map_key); + + m->keys = k; + m->sub_u = sub_u; + m->name = n; + m->wgen = 0U; + m->nocheck = !valid_key(k); + + dept_exit_recursive(flags); +} +EXPORT_SYMBOL_GPL(dept_map_init); + +void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u, + const char *n) +{ + unsigned long flags; + + if (unlikely(!dept_working())) { + m->nocheck = true; + return; + } + + /* + * Allow recursive entrance. + */ + flags = dept_enter_recursive(); + + if (k) { + clean_classes_cache(&m->map_key); + m->keys = k; + m->nocheck = !valid_key(k); + } + + if (sub_u >= 0 && sub_u < DEPT_MAX_SUBCLASSES_USR) + m->sub_u = sub_u; + + if (n) + m->name = n; + + m->wgen = 0U; + + dept_exit_recursive(flags); +} +EXPORT_SYMBOL_GPL(dept_map_reinit); + +void dept_map_copy(struct dept_map *to, struct dept_map *from) +{ + if (unlikely(!dept_working())) { + to->nocheck = true; + return; + } + + *to = *from; + + /* + * XXX: 'to' might be in a stack or something. Using the address + * in a stack segment as a key is meaningless. Just ignore the + * case for now. + */ + if (!to->keys) { + to->nocheck = true; + return; + } + + /* + * Since the class cache can be modified concurrently we could + * observe half pointers (64bit arch using 32bit copy insns). + * Therefore clear the caches and take the performance hit. + * + * XXX: Doesn't work well with lockdep_set_class_and_subclass() + * since that relies on cache abuse. 
+ */ + clean_classes_cache(&to->map_key); +} + +static LIST_HEAD(classes); + +static bool within(const void *addr, void *start, unsigned long size) +{ + return addr >= start && addr < start + size; +} + +void dept_free_range(void *start, unsigned int sz) +{ + struct dept_task *dt = dept_task(); + struct dept_class *c, *n; + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) { + DEPT_STOP("Failed to free Dept objects.\n"); + return; + } + + flags = dept_enter(); + + /* + * dept_free_range() should not fail. + * + * FIXME: Should be fixed if dept_free_range() causes deadlock + * with dept_lock(). + */ + while (unlikely(!dept_lock())) + cpu_relax(); + + list_for_each_entry_safe(c, n, &classes, all_node) { + if (!within((void *)c->key, start, sz) && + !within(c->name, start, sz)) + continue; + + hash_del_class(c); + disconnect_class(c); + list_del(&c->all_node); + invalidate_class(c); + + /* + * Actual deletion will happen on the rcu callback + * that has been added in disconnect_class(). + */ + del_class(c); + } + dept_unlock(); + dept_exit(flags); + + /* + * Wait until even lockless hash_lookup_class() for the class + * returns NULL. + */ + might_sleep(); + synchronize_rcu(); +} + +static int sub_id(struct dept_map *m, int e) +{ + return (m ? m->sub_u : 0) + e * DEPT_MAX_SUBCLASSES_USR; +} + +static struct dept_class *check_new_class(struct dept_key *local, + struct dept_key *k, int sub_id, + const char *n, bool sched_map) +{ + struct dept_class *c = NULL; + + if (DEPT_WARN_ON(sub_id >= DEPT_MAX_SUBCLASSES)) + return NULL; + + if (DEPT_WARN_ON(!k)) + return NULL; + + /* + * XXX: Assume that users prevent the map from being used if any + * of the cached keys has been invalidated. If not, the cache, + * local->classes, should not be used because it would be racy + * with class deletion. 
+ */ + if (local && sub_id < DEPT_MAX_SUBCLASSES_CACHE) + c = READ_ONCE(local->classes[sub_id]); + + if (c) + return c; + + c = lookup_class((unsigned long)k->base + sub_id); + if (c) + goto caching; + + if (unlikely(!dept_lock())) + return NULL; + + c = lookup_class((unsigned long)k->base + sub_id); + if (unlikely(c)) + goto unlock; + + c = new_class(); + if (unlikely(!c)) + goto unlock; + + c->name = n; + c->sched_map = sched_map; + c->sub_id = sub_id; + c->key = (unsigned long)(k->base + sub_id); + hash_add_class(c); + list_add(&c->all_node, &classes); +unlock: + dept_unlock(); +caching: + if (local && sub_id < DEPT_MAX_SUBCLASSES_CACHE) + WRITE_ONCE(local->classes[sub_id], c); + + return c; +} + +/* + * Called between dept_enter() and dept_exit(). + */ +static void __dept_wait(struct dept_map *m, unsigned long w_f, + unsigned long ip, const char *w_fn, int sub_l, + bool sched_sleep, bool sched_map) +{ + int e; + + /* + * Be as conservative as possible. In case of multiple waits for + * a single dept_map, we are going to keep only the last wait's + * wgen for simplicity - keeping all wgens seems overengineering. + * + * Of course, it might miss some dependencies that would rarely, + * probably never, happen, but it helps avoid false positive + * reports. + */ + for_each_set_bit(e, &w_f, DEPT_MAX_SUBCLASSES_EVT) { + struct dept_class *c; + struct dept_key *k; + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, + sub_id(m, e), m->name, sched_map); + if (!c) + continue; + + add_wait(c, ip, w_fn, sub_l, sched_sleep); + } +} + +/* + * Called between dept_enter() and dept_exit(). + */ +static void __dept_event(struct dept_map *m, unsigned long e_f, + unsigned long ip, const char *e_fn, + bool sched_map) +{ + struct dept_class *c; + struct dept_key *k; + int e; + + e = find_first_bit(&e_f, DEPT_MAX_SUBCLASSES_EVT); + + if (DEPT_WARN_ON(e >= DEPT_MAX_SUBCLASSES_EVT)) + return; + + /* + * An event is an event. 
If the caller passed more than a single + * event, then warn about it and handle the event corresponding + * to the first bit anyway. + */ + DEPT_WARN_ON(1UL << e != e_f); + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, sub_id(m, e), m->name, sched_map); + + if (c && add_ecxt(m, c, 0UL, NULL, e_fn, 0)) { + do_event(m, c, READ_ONCE(m->wgen), ip); + pop_ecxt(m, c); + } +} + +void dept_wait(struct dept_map *m, unsigned long w_f, + unsigned long ip, const char *w_fn, int sub_l) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) + return; + + if (m->nocheck) + return; + + flags = dept_enter(); + + __dept_wait(m, w_f, ip, w_fn, sub_l, false, false); + + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_wait); + +void dept_stage_wait(struct dept_map *m, struct dept_key *k, + unsigned long ip, const char *w_fn) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + if (m && m->nocheck) + return; + + /* + * Either m or k should be passed. Which means Dept relies on + * either its own map or the caller's position in the code when + * determining its class. + */ + if (DEPT_WARN_ON(!m && !k)) + return; + + /* + * Allow recursive entrance. + */ + flags = dept_enter_recursive(); + + /* + * Ensure the outermost dept_stage_wait() works. + */ + if (dt->stage_m.keys) + goto exit; + + if (m) { + dt->stage_m = *m; + + /* + * Ensure dt->stage_m.keys != NULL and it works with the + * map's map_key, not stage_m's one when ->keys == NULL. + */ + if (!m->keys) + dt->stage_m.keys = &m->map_key; + } else { + dt->stage_m.name = w_fn; + dt->stage_sched_map = true; + } + + /* + * dept_map_reinit() includes WRITE_ONCE(->wgen, 0U) that + * effectively disables the map just in case real sleep won't + * happen. dept_request_event_wait_commit() will enable it. 
+ */ + dept_map_reinit(&dt->stage_m, k, -1, NULL); + + dt->stage_w_fn = w_fn; + dt->stage_ip = ip; +exit: + dept_exit_recursive(flags); +} +EXPORT_SYMBOL_GPL(dept_stage_wait); + +static void __dept_clean_stage(struct dept_task *dt) +{ + memset(&dt->stage_m, 0x0, sizeof(struct dept_map)); + dt->stage_sched_map = false; + dt->stage_w_fn = NULL; + dt->stage_ip = 0UL; +} + +void dept_clean_stage(void) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + /* + * Allow recursive entrance. + */ + flags = dept_enter_recursive(); + __dept_clean_stage(dt); + dept_exit_recursive(flags); +} +EXPORT_SYMBOL_GPL(dept_clean_stage); + +/* + * Always called from __schedule(). + */ +void dept_request_event_wait_commit(void) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + unsigned int wg; + unsigned long ip; + const char *w_fn; + bool sched_map; + + if (unlikely(!dept_working())) + return; + + /* + * It's impossible that __schedule() is called while Dept is + * working, because Dept disables IRQ at its entrance. + */ + if (DEPT_WARN_ON(dt->recursive)) + return; + + flags = dept_enter(); + + /* + * Check if current has staged a wait. + */ + if (!dt->stage_m.keys) + goto exit; + + w_fn = dt->stage_w_fn; + ip = dt->stage_ip; + sched_map = dt->stage_sched_map; + + /* + * Avoid zero wgen. + */ + wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen); + WRITE_ONCE(dt->stage_m.wgen, wg); + + __dept_wait(&dt->stage_m, 1UL, ip, w_fn, 0, true, sched_map); +exit: + dept_exit(flags); +} + +/* + * Always called from try_to_wake_up(). 
+ */ +void dept_stage_event(struct task_struct *requestor, unsigned long ip) +{ + struct dept_task *dt = dept_task(); + struct dept_task *dt_req = &requestor->dept_task; + unsigned long flags; + struct dept_map m; + bool sched_map; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) + return; + + flags = dept_enter(); + + /* + * Serializing is unnecessary as long as it always comes from + * try_to_wake_up(). + */ + m = dt_req->stage_m; + sched_map = dt_req->stage_sched_map; + __dept_clean_stage(dt_req); + + /* + * ->stage_m.keys should not be NULL if it's in use. Should + * make sure that it's not NULL when staging a valid map. + */ + if (!m.keys) + goto exit; + + __dept_event(&m, 1UL, ip, "try_to_wake_up", sched_map); +exit: + dept_exit(flags); +} + +/* + * Modifies the latest ecxt corresponding to m and e_f. + */ +void dept_map_ecxt_modify(struct dept_map *m, unsigned long e_f, + struct dept_key *new_k, unsigned long new_e_f, + unsigned long new_ip, const char *new_c_fn, + const char *new_e_fn, int new_sub_l) +{ + struct dept_task *dt = dept_task(); + struct dept_ecxt_held *eh; + struct dept_class *c; + struct dept_key *k; + unsigned long flags; + int pos = -1; + int new_e; + int e; + + if (unlikely(!dept_working())) + return; + + /* + * XXX: Can't handle re-entrance cases. Ignore it for now. + */ + if (dt->recursive) + return; + + /* + * Should go ahead no matter whether ->nocheck == true or not + * because ->nocheck value can be changed within the ecxt area + * delimited by dept_ecxt_enter() and dept_ecxt_exit(). + */ + + flags = dept_enter(); + + for_each_set_bit(e, &e_f, DEPT_MAX_SUBCLASSES_EVT) { + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, + sub_id(m, e), m->name, false); + if (!c) + continue; + + /* + * When an ecxt is found for any event in e_f, done. 
+ */ + pos = find_ecxt_pos(m, c, true); + if (pos != -1) + break; + } + + if (unlikely(pos == -1)) + goto exit; + + eh = dt->ecxt_held + pos; + new_sub_l = new_sub_l >= 0 ? new_sub_l : eh->sub_l; + + new_e = find_first_bit(&new_e_f, DEPT_MAX_SUBCLASSES_EVT); + + if (new_e < DEPT_MAX_SUBCLASSES_EVT) + /* + * Let it work with the first bit anyway. + */ + DEPT_WARN_ON(1UL << new_e != new_e_f); + else + new_e = e; + + pop_ecxt(m, c); + + /* + * Apply the key to the map. + */ + if (new_k) + dept_map_reinit(m, new_k, -1, NULL); + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, sub_id(m, new_e), m->name, false); + + if (c && add_ecxt(m, c, new_ip, new_c_fn, new_e_fn, new_sub_l)) + goto exit; + + /* + * Successfully pop_ecxt()ed but failed to add_ecxt(). + */ + dt->missing_ecxt++; +exit: + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_map_ecxt_modify); + +void dept_ecxt_enter(struct dept_map *m, unsigned long e_f, unsigned long ip, + const char *c_fn, const char *e_fn, int sub_l) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + struct dept_class *c; + struct dept_key *k; + int e; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) { + dt->missing_ecxt++; + return; + } + + /* + * Should go ahead no matter whether ->nocheck == true or not + * because ->nocheck value can be changed within the ecxt area + * delimited by dept_ecxt_enter() and dept_ecxt_exit(). + */ + + flags = dept_enter(); + + e = find_first_bit(&e_f, DEPT_MAX_SUBCLASSES_EVT); + + if (e >= DEPT_MAX_SUBCLASSES_EVT) + goto missing_ecxt; + + /* + * An event is an event. If the caller passed more than a single + * event, then warn about it and handle the event corresponding + * to the first bit anyway. 
+ */ + DEPT_WARN_ON(1UL << e != e_f); + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, sub_id(m, e), m->name, false); + + if (c && add_ecxt(m, c, ip, c_fn, e_fn, sub_l)) + goto exit; +missing_ecxt: + dt->missing_ecxt++; +exit: + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_ecxt_enter); + +bool dept_ecxt_holding(struct dept_map *m, unsigned long e_f) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + bool ret = false; + int e; + + if (unlikely(!dept_working())) + return false; + + if (dt->recursive) + return false; + + flags = dept_enter(); + + for_each_set_bit(e, &e_f, DEPT_MAX_SUBCLASSES_EVT) { + struct dept_class *c; + struct dept_key *k; + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, + sub_id(m, e), m->name, false); + if (!c) + continue; + + if (find_ecxt_pos(m, c, true) != -1) { + ret = true; + break; + } + } + + dept_exit(flags); + + return ret; +} +EXPORT_SYMBOL_GPL(dept_ecxt_holding); + +void dept_request_event(struct dept_map *m) +{ + unsigned long flags; + unsigned int wg; + + if (unlikely(!dept_working())) + return; + + if (m->nocheck) + return; + + /* + * Allow recursive entrance. + */ + flags = dept_enter_recursive(); + + /* + * Avoid zero wgen. + */ + wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen); + WRITE_ONCE(m->wgen, wg); + + dept_exit_recursive(flags); +} +EXPORT_SYMBOL_GPL(dept_request_event); + +void dept_event(struct dept_map *m, unsigned long e_f, + unsigned long ip, const char *e_fn) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + if (m->nocheck) + return; + + if (dt->recursive) { + /* + * Dept won't work with this even though an event + * context has been asked. Don't confuse the event + * handling. Disable it until the next. + */ + WRITE_ONCE(m->wgen, 0U); + return; + } + + flags = dept_enter(); + + __dept_event(m, e_f, ip, e_fn, false); + + /* + * Keep the map disabled until the next sleep. 
+ */ + WRITE_ONCE(m->wgen, 0U); + + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_event); + +void dept_ecxt_exit(struct dept_map *m, unsigned long e_f, + unsigned long ip) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + int e; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) { + dt->missing_ecxt--; + return; + } + + /* + * Should go ahead no matter whether ->nocheck == true or not + * because ->nocheck value can be changed within the ecxt area + * delimited by dept_ecxt_enter() and dept_ecxt_exit(). + */ + + flags = dept_enter(); + + for_each_set_bit(e, &e_f, DEPT_MAX_SUBCLASSES_EVT) { + struct dept_class *c; + struct dept_key *k; + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, + sub_id(m, e), m->name, false); + if (!c) + continue; + + /* + * When an ecxt is found for any event in e_f, done. + */ + if (pop_ecxt(m, c)) + goto exit; + } + + dt->missing_ecxt--; +exit: + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_ecxt_exit); + +void dept_task_exit(struct task_struct *t) +{ + struct dept_task *dt = &t->dept_task; + int i; + + if (unlikely(!dept_working())) + return; + + raw_local_irq_disable(); + + if (dt->stack) + put_stack(dt->stack); + + for (i = 0; i < dt->ecxt_held_pos; i++) { + if (dt->ecxt_held[i].class) + put_class(dt->ecxt_held[i].class); + if (dt->ecxt_held[i].ecxt) + put_ecxt(dt->ecxt_held[i].ecxt); + } + + for (i = 0; i < DEPT_MAX_WAIT_HIST; i++) + if (dt->wait_hist[i].wait) + put_wait(dt->wait_hist[i].wait); + + dt->task_exit = true; + dept_off(); + + raw_local_irq_enable(); +} + +void dept_task_init(struct task_struct *t) +{ + memset(&t->dept_task, 0x0, sizeof(struct dept_task)); +} + +void dept_key_init(struct dept_key *k) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + int sub_id; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) { + DEPT_STOP("Key initialization failed.\n"); + return; + } + + flags = dept_enter(); + + clean_classes_cache(k); + + /* + * 
dept_key_init() should not fail. + * + * FIXME: Should be fixed if dept_key_init() causes deadlock + * with dept_lock(). + */ + while (unlikely(!dept_lock())) + cpu_relax(); + + for (sub_id = 0; sub_id < DEPT_MAX_SUBCLASSES; sub_id++) { + struct dept_class *c; + + c = lookup_class((unsigned long)k->base + sub_id); + if (!c) + continue; + + DEPT_STOP("The class(%s/%d) has not been removed.\n", + c->name, sub_id); + break; + } + + dept_unlock(); + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_key_init); + +void dept_key_destroy(struct dept_key *k) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + int sub_id; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive == 1 && dt->task_exit) { + /* + * Need to allow going ahead in this case, where + * ->recursive has been set to 1 by dept_off() in + * dept_task_exit() and ->task_exit has been set to + * true in dept_task_exit(). + */ + } else if (dt->recursive) { + DEPT_STOP("Key destroying failed.\n"); + return; + } + + flags = dept_enter(); + + /* + * dept_key_destroy() should not fail. + * + * FIXME: Should be fixed if dept_key_destroy() causes deadlock + * with dept_lock(). + */ + while (unlikely(!dept_lock())) + cpu_relax(); + + for (sub_id = 0; sub_id < DEPT_MAX_SUBCLASSES; sub_id++) { + struct dept_class *c; + + c = lookup_class((unsigned long)k->base + sub_id); + if (!c) + continue; + + hash_del_class(c); + disconnect_class(c); + list_del(&c->all_node); + invalidate_class(c); + + /* + * Actual deletion will happen on the rcu callback + * that has been added in disconnect_class(). + */ + del_class(c); + } + + dept_unlock(); + dept_exit(flags); + + /* + * Wait until even lockless hash_lookup_class() for the class + * returns NULL. 
+ */ + might_sleep(); + synchronize_rcu(); +} +EXPORT_SYMBOL_GPL(dept_key_destroy); + +static void move_llist(struct llist_head *to, struct llist_head *from) +{ + struct llist_node *first = llist_del_all(from); + struct llist_node *last; + + if (!first) + return; + + for (last = first; last->next; last = last->next); + llist_add_batch(first, last, to); +} + +static void migrate_per_cpu_pool(void) +{ + const int boot_cpu = 0; + int i; + + /* + * The boot CPU has been using the temporary local pool so far. + * Now that the per_cpu areas are ready, use the per_cpu local + * pool instead. + */ + DEPT_WARN_ON(smp_processor_id() != boot_cpu); + for (i = 0; i < OBJECT_NR; i++) { + struct llist_head *from; + struct llist_head *to; + + from = &pool[i].boot_pool; + to = per_cpu_ptr(pool[i].lpool, boot_cpu); + move_llist(to, from); + } +} + +#define B2KB(B) ((B) / 1024) + +/* + * Should be called after setup_per_cpu_areas() and before any + * non-boot CPU comes online. + */ +void __init dept_init(void) +{ + size_t mem_total = 0; + + local_irq_disable(); + dept_per_cpu_ready = 1; + migrate_per_cpu_pool(); + local_irq_enable(); + +#define HASH(id, bits) BUILD_BUG_ON(1 << (bits) <= 0); + #include "dept_hash.h" +#undef HASH +#define OBJECT(id, nr) mem_total += sizeof(struct dept_##id) * nr; + #include "dept_object.h" +#undef OBJECT +#define HASH(id, bits) mem_total += sizeof(struct hlist_head) * (1 << (bits)); + #include "dept_hash.h" +#undef HASH + + pr_info("DEPendency Tracker: Copyright (c) 2020 LG Electronics, Inc., Byungchul Park\n"); + pr_info("... DEPT_MAX_STACK_ENTRY: %d\n", DEPT_MAX_STACK_ENTRY); + pr_info("... DEPT_MAX_WAIT_HIST : %d\n", DEPT_MAX_WAIT_HIST); + pr_info("... DEPT_MAX_ECXT_HELD : %d\n", DEPT_MAX_ECXT_HELD); + pr_info("... DEPT_MAX_SUBCLASSES : %d\n", DEPT_MAX_SUBCLASSES); +#define OBJECT(id, nr) \ + pr_info("... 
memory used by %s: %zu KB\n", \ + #id, B2KB(sizeof(struct dept_##id) * nr)); + #include "dept_object.h" +#undef OBJECT +#define HASH(id, bits) \ + pr_info("... hash list head used by %s: %zu KB\n", \ + #id, B2KB(sizeof(struct hlist_head) * (1 << (bits)))); + #include "dept_hash.h" +#undef HASH + pr_info("... total memory used by objects and hashes: %zu KB\n", B2KB(mem_total)); + pr_info("... per task memory footprint: %zu bytes\n", sizeof(struct dept_task)); +} diff --git a/kernel/dependency/dept_hash.h b/kernel/dependency/dept_hash.h new file mode 100644 index 000000000000..fd85aab1fdfb --- /dev/null +++ b/kernel/dependency/dept_hash.h @@ -0,0 +1,10 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * HASH(id, bits) + * + * id : Id for the object of struct dept_##id. + * bits: 1UL << bits is the hash table size. + */ + +HASH(dep, 12) +HASH(class, 12) diff --git a/kernel/dependency/dept_object.h b/kernel/dependency/dept_object.h new file mode 100644 index 000000000000..0b7eb16fe9fb --- /dev/null +++ b/kernel/dependency/dept_object.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * OBJECT(id, nr) + * + * id: Id for the object of struct dept_##id. + * nr: # of objects that should be kept in the pool. 
+ */ + +OBJECT(dep, 1024 * 8) +OBJECT(class, 1024 * 8) +OBJECT(stack, 1024 * 32) +OBJECT(ecxt, 1024 * 16) +OBJECT(wait, 1024 * 32) diff --git a/kernel/exit.c b/kernel/exit.c index 41a12630cbbc..525b17ab18c4 100644 --- a/kernel/exit.c +++ b/kernel/exit.c @@ -926,6 +926,7 @@ void __noreturn do_exit(long code) exit_tasks_rcu_finish(); lockdep_free_task(tsk); + dept_task_exit(tsk); do_task_dead(); } diff --git a/kernel/fork.c b/kernel/fork.c index aebb3e6c96dc..3eab17cfddd4 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -103,6 +103,7 @@ #include #include #include +#include #include #include @@ -2343,6 +2344,7 @@ __latent_entropy struct task_struct *copy_process( #ifdef CONFIG_LOCKDEP lockdep_init_task(p); #endif + dept_task_init(p); #ifdef CONFIG_DEBUG_MUTEXES p->blocked_on = NULL; /* not blocked yet */ diff --git a/kernel/module/main.c b/kernel/module/main.c index e1e8a7a9d6c1..9a291522e6da 100644 --- a/kernel/module/main.c +++ b/kernel/module/main.c @@ -1228,12 +1228,14 @@ static void free_mod_mem(struct module *mod) /* Free lock-classes; relies on the preceding sync_rcu(). 
*/ lockdep_free_key_range(mod_mem->base, mod_mem->size); + dept_free_range(mod_mem->base, mod_mem->size); if (mod_mem->size) module_memory_free(mod_mem->base, type); } /* MOD_DATA hosts mod, so free it at last */ lockdep_free_key_range(mod->mem[MOD_DATA].base, mod->mem[MOD_DATA].size); + dept_free_range(mod->mem[MOD_DATA].base, mod->mem[MOD_DATA].size); module_memory_free(mod->mem[MOD_DATA].base, MOD_DATA); } @@ -3040,6 +3042,8 @@ static int load_module(struct load_info *info, const char __user *uargs, for_class_mod_mem_type(type, core_data) { lockdep_free_key_range(mod->mem[type].base, mod->mem[type].size); + dept_free_range(mod->mem[type].base, + mod->mem[type].size); } module_deallocate(mod, info); diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 7019a40457a6..8d05e513d91c 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -65,6 +65,7 @@ #include #include #include +#include #ifdef CONFIG_PREEMPT_DYNAMIC # ifdef CONFIG_GENERIC_ENTRY @@ -4233,6 +4234,8 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags) guard(preempt)(); int cpu, success = 0; + dept_stage_event(p, _RET_IP_); + if (p == current) { /* * We're waking current, this means 'p->on_rq' and 'task_cpu(p) @@ -6626,6 +6629,12 @@ static void __sched notrace __schedule(unsigned int sched_mode) rq = cpu_rq(cpu); prev = rq->curr; + prev_state = READ_ONCE(prev->__state); + if (sched_mode != SM_PREEMPT && prev_state & TASK_NORMAL) + dept_request_event_wait_commit(); + + dept_sched_enter(); + schedule_debug(prev, !!sched_mode); if (sched_feat(HRTICK) || sched_feat(HRTICK_DL)) @@ -6749,6 +6758,7 @@ static void __sched notrace __schedule(unsigned int sched_mode) __balance_callbacks(rq); raw_spin_rq_unlock_irq(rq); } + dept_sched_exit(); } void __noreturn do_task_dead(void) diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 291185f54ee4..d366acacffec 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -1292,6 +1292,33 @@ config DEBUG_PREEMPT menu "Lock 
Debugging (spinlocks, mutexes, etc...)" +config DEPT + bool "Dependency tracking (EXPERIMENTAL)" + depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT + select DEBUG_SPINLOCK + select DEBUG_MUTEXES + select DEBUG_RT_MUTEXES if RT_MUTEXES + select DEBUG_RWSEMS + select DEBUG_WW_MUTEX_SLOWPATH + select DEBUG_LOCK_ALLOC + select TRACE_IRQFLAGS + select STACKTRACE + select FRAME_POINTER if !MIPS && !PPC && !ARM && !S390 && !MICROBLAZE && !ARC && !X86 + select KALLSYMS + select KALLSYMS_ALL + select PROVE_LOCKING + default n + help + Check dependencies between wait and event and report it if + deadlock possibility has been detected. Multiple reports are + allowed if there are more than a single problem. + + This feature is considered EXPERIMENTAL that might produce + false positive reports because new dependencies start to be + tracked, that have never been tracked before. It's worth + noting, to mitigate the impact by the false positives, multi + reporting has been supported. + config LOCK_DEBUGGING_SUPPORT bool depends on TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c index 6f6a5fc85b42..2558ae57b117 100644 --- a/lib/locking-selftest.c +++ b/lib/locking-selftest.c @@ -1398,6 +1398,8 @@ static void reset_locks(void) local_irq_disable(); lockdep_free_key_range(&ww_lockdep.acquire_key, 1); lockdep_free_key_range(&ww_lockdep.mutex_key, 1); + dept_free_range(&ww_lockdep.acquire_key, 1); + dept_free_range(&ww_lockdep.mutex_key, 1); I1(A); I1(B); I1(C); I1(D); I1(X1); I1(X2); I1(Y1); I1(Y2); I1(Z1); I1(Z2); From patchwork Wed May 8 09:47:00 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13658452 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using 
From: Byungchul Park To: linux-kernel@vger.kernel.org Subject: [PATCH v14 03/28] dept: Add single event dependency tracker APIs
Date: Wed, 8 May 2024 18:47:00 +0900 Message-Id: <20240508094726.35754-4-byungchul@sk.com> In-Reply-To: <20240508094726.35754-1-byungchul@sk.com> References: <20240508094726.35754-1-byungchul@sk.com> Wrapped the base APIs for easier annotation of waits and events. Start with supporting waiters on each single event. More general support for multiple events is future work, to be done when the need arises. How to annotate (the simplest way): 1. Initialize a map for the wait of interest. /* * Recommended: place it along with the wait instance. */ struct dept_map my_wait; /* * Recommended: place it in the initialization code. */ sdt_map_init(&my_wait); 2. Place the following at the wait code. sdt_wait(&my_wait); 3. Place the following at the event code. sdt_event(&my_wait); That's it!
Signed-off-by: Byungchul Park --- include/linux/dept_sdt.h | 62 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 62 insertions(+) create mode 100644 include/linux/dept_sdt.h diff --git a/include/linux/dept_sdt.h b/include/linux/dept_sdt.h new file mode 100644 index 000000000000..12a793b90c7e --- /dev/null +++ b/include/linux/dept_sdt.h @@ -0,0 +1,62 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Single-event Dependency Tracker + * + * Started by Byungchul Park : + * + * Copyright (c) 2020 LG Electronics, Inc., Byungchul Park + */ + +#ifndef __LINUX_DEPT_SDT_H +#define __LINUX_DEPT_SDT_H + +#include +#include + +#ifdef CONFIG_DEPT +#define sdt_map_init(m) \ + do { \ + static struct dept_key __key; \ + dept_map_init(m, &__key, 0, #m); \ + } while (0) + +#define sdt_map_init_key(m, k) dept_map_init(m, k, 0, #m) + +#define sdt_wait(m) \ + do { \ + dept_request_event(m); \ + dept_wait(m, 1UL, _THIS_IP_, __func__, 0); \ + } while (0) + +/* + * sdt_might_sleep() and its family will be committed in __schedule() + * when it actually gets to __schedule(). Both dept_request_event() and + * dept_wait() will be performed on the commit. + */ + +/* + * Use the code location as the class key if an explicit map is not used. + */ +#define sdt_might_sleep_start(m) \ + do { \ + struct dept_map *__m = m; \ + static struct dept_key __key; \ + dept_stage_wait(__m, __m ? 
NULL : &__key, _THIS_IP_, __func__);\ + } while (0) + +#define sdt_might_sleep_end() dept_clean_stage() + +#define sdt_ecxt_enter(m) dept_ecxt_enter(m, 1UL, _THIS_IP_, "start", "event", 0) +#define sdt_event(m) dept_event(m, 1UL, _THIS_IP_, __func__) +#define sdt_ecxt_exit(m) dept_ecxt_exit(m, 1UL, _THIS_IP_) +#else /* !CONFIG_DEPT */ +#define sdt_map_init(m) do { } while (0) +#define sdt_map_init_key(m, k) do { (void)(k); } while (0) +#define sdt_wait(m) do { } while (0) +#define sdt_might_sleep_start(m) do { } while (0) +#define sdt_might_sleep_end() do { } while (0) +#define sdt_ecxt_enter(m) do { } while (0) +#define sdt_event(m) do { } while (0) +#define sdt_ecxt_exit(m) do { } while (0) +#endif +#endif /* __LINUX_DEPT_SDT_H */ From patchwork Wed May 8 09:47:01 2024
From: Byungchul Park To: linux-kernel@vger.kernel.org Subject: [PATCH v14 04/28] dept: Add lock dependency tracker APIs Date: Wed, 8 May 2024 18:47:01 +0900 Message-Id: <20240508094726.35754-5-byungchul@sk.com> In-Reply-To: <20240508094726.35754-1-byungchul@sk.com> References: <20240508094726.35754-1-byungchul@sk.com> Wrapped the base APIs
for easier annotation on typical lock. Signed-off-by: Byungchul Park --- include/linux/dept_ldt.h | 77 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 77 insertions(+) create mode 100644 include/linux/dept_ldt.h diff --git a/include/linux/dept_ldt.h b/include/linux/dept_ldt.h new file mode 100644 index 000000000000..062613e89fc3 --- /dev/null +++ b/include/linux/dept_ldt.h @@ -0,0 +1,77 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Lock Dependency Tracker + * + * Started by Byungchul Park : + * + * Copyright (c) 2020 LG Electronics, Inc., Byungchul Park + */ + +#ifndef __LINUX_DEPT_LDT_H +#define __LINUX_DEPT_LDT_H + +#include + +#ifdef CONFIG_DEPT +#define LDT_EVT_L 1UL +#define LDT_EVT_R 2UL +#define LDT_EVT_W 1UL +#define LDT_EVT_RW (LDT_EVT_R | LDT_EVT_W) +#define LDT_EVT_ALL (LDT_EVT_L | LDT_EVT_RW) + +#define ldt_init(m, k, su, n) dept_map_init(m, k, su, n) +#define ldt_lock(m, sl, t, n, i) \ + do { \ + if (n) \ + dept_ecxt_enter_nokeep(m); \ + else if (t) \ + dept_ecxt_enter(m, LDT_EVT_L, i, "trylock", "unlock", sl);\ + else { \ + dept_wait(m, LDT_EVT_L, i, "lock", sl); \ + dept_ecxt_enter(m, LDT_EVT_L, i, "lock", "unlock", sl);\ + } \ + } while (0) + +#define ldt_rlock(m, sl, t, n, i, q) \ + do { \ + if (n) \ + dept_ecxt_enter_nokeep(m); \ + else if (t) \ + dept_ecxt_enter(m, LDT_EVT_R, i, "read_trylock", "read_unlock", sl);\ + else { \ + dept_wait(m, q ? 
LDT_EVT_RW : LDT_EVT_W, i, "read_lock", sl);\ + dept_ecxt_enter(m, LDT_EVT_R, i, "read_lock", "read_unlock", sl);\ + } \ + } while (0) + +#define ldt_wlock(m, sl, t, n, i) \ + do { \ + if (n) \ + dept_ecxt_enter_nokeep(m); \ + else if (t) \ + dept_ecxt_enter(m, LDT_EVT_W, i, "write_trylock", "write_unlock", sl);\ + else { \ + dept_wait(m, LDT_EVT_RW, i, "write_lock", sl); \ + dept_ecxt_enter(m, LDT_EVT_W, i, "write_lock", "write_unlock", sl);\ + } \ + } while (0) + +#define ldt_unlock(m, i) dept_ecxt_exit(m, LDT_EVT_ALL, i) + +#define ldt_downgrade(m, i) \ + do { \ + if (dept_ecxt_holding(m, LDT_EVT_W)) \ + dept_map_ecxt_modify(m, LDT_EVT_W, NULL, LDT_EVT_R, i, "downgrade", "read_unlock", -1);\ + } while (0) + +#define ldt_set_class(m, n, k, sl, i) dept_map_ecxt_modify(m, LDT_EVT_ALL, k, 0UL, i, "lock_set_class", "(any)unlock", sl) +#else /* !CONFIG_DEPT */ +#define ldt_init(m, k, su, n) do { (void)(k); } while (0) +#define ldt_lock(m, sl, t, n, i) do { } while (0) +#define ldt_rlock(m, sl, t, n, i, q) do { } while (0) +#define ldt_wlock(m, sl, t, n, i) do { } while (0) +#define ldt_unlock(m, i) do { } while (0) +#define ldt_downgrade(m, i) do { } while (0) +#define ldt_set_class(m, n, k, sl, i) do { } while (0) +#endif +#endif /* __LINUX_DEPT_LDT_H */ From patchwork Wed May 8 09:47:02 2024
From: Byungchul Park To: linux-kernel@vger.kernel.org Subject: [PATCH v14 05/28] dept: Tie to Lockdep and IRQ tracing Date: Wed, 8 May 2024 18:47:02 +0900 Message-Id: <20240508094726.35754-6-byungchul@sk.com> In-Reply-To: <20240508094726.35754-1-byungchul@sk.com> References: <20240508094726.35754-1-byungchul@sk.com> Placing Dept this way looks ugly, but it is inevitable for now. The approach should be improved gradually. Signed-off-by: Byungchul Park --- include/linux/irqflags.h | 7 +- include/linux/local_lock_internal.h | 1 + include/linux/lockdep.h | 102 ++++++++++++++++++++++------ include/linux/lockdep_types.h | 3 + include/linux/mutex.h | 1 + include/linux/percpu-rwsem.h | 2 +- include/linux/rtmutex.h | 1 + include/linux/rwlock_types.h | 1 + include/linux/rwsem.h | 1 + include/linux/seqlock.h | 2 +- include/linux/spinlock_types_raw.h | 3 + include/linux/srcu.h | 2 +- kernel/dependency/dept.c | 8 +-- kernel/locking/lockdep.c | 22 ++++++ 14 files changed, 127 insertions(+), 29 deletions(-) diff --git a/include/linux/irqflags.h b/include/linux/irqflags.h index 3f003d5fde53..f4506fe08611 100644 --- a/include/linux/irqflags.h +++ b/include/linux/irqflags.h @@ -15,6 +15,7 @@ #include #include #include +#include #include #include @@ -49,8 +50,10 @@ extern void trace_hardirqs_off(void); # define lockdep_softirqs_enabled(p) ((p)->softirqs_enabled) # define lockdep_hardirq_enter() \ do { \ - if (__this_cpu_inc_return(hardirq_context) == 1)\ + if (__this_cpu_inc_return(hardirq_context) == 1) { \ current->hardirq_threaded = 0; \ + dept_hardirq_enter(); \ + } \ } while (0) # define lockdep_hardirq_threaded() \ do { \ @@ -125,6 +128,8 @@ do { \ # define lockdep_softirq_enter() \ do { \ current->softirq_context++; \ + if (current->softirq_context == 1) \ +
dept_softirq_enter(); \ } while (0) # define lockdep_softirq_exit() \ do { \ diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h index 975e33b793a7..39f67788fd95 100644 --- a/include/linux/local_lock_internal.h +++ b/include/linux/local_lock_internal.h @@ -21,6 +21,7 @@ typedef struct { .name = #lockname, \ .wait_type_inner = LD_WAIT_CONFIG, \ .lock_type = LD_LOCK_PERCPU, \ + .dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\ }, \ .owner = NULL, diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h index 08b0d1d9d78b..4e569b35ff2a 100644 --- a/include/linux/lockdep.h +++ b/include/linux/lockdep.h @@ -12,6 +12,7 @@ #include #include +#include #include struct task_struct; @@ -39,6 +40,8 @@ static inline void lockdep_copy_map(struct lockdep_map *to, */ for (i = 0; i < NR_LOCKDEP_CACHING_CLASSES; i++) to->class_cache[i] = NULL; + + dept_map_copy(&to->dmap, &from->dmap); } /* @@ -409,7 +412,8 @@ enum xhlock_context_t { * Note that _name must not be NULL. 
*/ #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \ - { .name = (_name), .key = (void *)(_key), } + { .name = (_name), .key = (void *)(_key), \ + .dmap = DEPT_MAP_INITIALIZER(_name, _key) } static inline void lockdep_invariant_state(bool force) {} static inline void lockdep_free_task(struct task_struct *task) {} @@ -491,33 +495,89 @@ extern bool read_lock_is_recursive(void); #define lock_acquire_shared(l, s, t, n, i) lock_acquire(l, s, t, 1, 1, n, i) #define lock_acquire_shared_recursive(l, s, t, n, i) lock_acquire(l, s, t, 2, 1, n, i) -#define spin_acquire(l, s, t, i) lock_acquire_exclusive(l, s, t, NULL, i) -#define spin_acquire_nest(l, s, t, n, i) lock_acquire_exclusive(l, s, t, n, i) -#define spin_release(l, i) lock_release(l, i) - -#define rwlock_acquire(l, s, t, i) lock_acquire_exclusive(l, s, t, NULL, i) +#define spin_acquire(l, s, t, i) \ +do { \ + ldt_lock(&(l)->dmap, s, t, NULL, i); \ + lock_acquire_exclusive(l, s, t, NULL, i); \ +} while (0) +#define spin_acquire_nest(l, s, t, n, i) \ +do { \ + ldt_lock(&(l)->dmap, s, t, n, i); \ + lock_acquire_exclusive(l, s, t, n, i); \ +} while (0) +#define spin_release(l, i) \ +do { \ + ldt_unlock(&(l)->dmap, i); \ + lock_release(l, i); \ +} while (0) +#define rwlock_acquire(l, s, t, i) \ +do { \ + ldt_wlock(&(l)->dmap, s, t, NULL, i); \ + lock_acquire_exclusive(l, s, t, NULL, i); \ +} while (0) #define rwlock_acquire_read(l, s, t, i) \ do { \ + ldt_rlock(&(l)->dmap, s, t, NULL, i, !read_lock_is_recursive());\ if (read_lock_is_recursive()) \ lock_acquire_shared_recursive(l, s, t, NULL, i); \ else \ lock_acquire_shared(l, s, t, NULL, i); \ } while (0) - -#define rwlock_release(l, i) lock_release(l, i) - -#define seqcount_acquire(l, s, t, i) lock_acquire_exclusive(l, s, t, NULL, i) -#define seqcount_acquire_read(l, s, t, i) lock_acquire_shared_recursive(l, s, t, NULL, i) -#define seqcount_release(l, i) lock_release(l, i) - -#define mutex_acquire(l, s, t, i) lock_acquire_exclusive(l, s, t, NULL, i) -#define 
mutex_acquire_nest(l, s, t, n, i) lock_acquire_exclusive(l, s, t, n, i) -#define mutex_release(l, i) lock_release(l, i) - -#define rwsem_acquire(l, s, t, i) lock_acquire_exclusive(l, s, t, NULL, i) -#define rwsem_acquire_nest(l, s, t, n, i) lock_acquire_exclusive(l, s, t, n, i) -#define rwsem_acquire_read(l, s, t, i) lock_acquire_shared(l, s, t, NULL, i) -#define rwsem_release(l, i) lock_release(l, i) +#define rwlock_release(l, i) \ +do { \ + ldt_unlock(&(l)->dmap, i); \ + lock_release(l, i); \ +} while (0) +#define seqcount_acquire(l, s, t, i) \ +do { \ + ldt_wlock(&(l)->dmap, s, t, NULL, i); \ + lock_acquire_exclusive(l, s, t, NULL, i); \ +} while (0) +#define seqcount_acquire_read(l, s, t, i) \ +do { \ + ldt_rlock(&(l)->dmap, s, t, NULL, i, false); \ + lock_acquire_shared_recursive(l, s, t, NULL, i); \ +} while (0) +#define seqcount_release(l, i) \ +do { \ + ldt_unlock(&(l)->dmap, i); \ + lock_release(l, i); \ +} while (0) +#define mutex_acquire(l, s, t, i) \ +do { \ + ldt_lock(&(l)->dmap, s, t, NULL, i); \ + lock_acquire_exclusive(l, s, t, NULL, i); \ +} while (0) +#define mutex_acquire_nest(l, s, t, n, i) \ +do { \ + ldt_lock(&(l)->dmap, s, t, n, i); \ + lock_acquire_exclusive(l, s, t, n, i); \ +} while (0) +#define mutex_release(l, i) \ +do { \ + ldt_unlock(&(l)->dmap, i); \ + lock_release(l, i); \ +} while (0) +#define rwsem_acquire(l, s, t, i) \ +do { \ + ldt_lock(&(l)->dmap, s, t, NULL, i); \ + lock_acquire_exclusive(l, s, t, NULL, i); \ +} while (0) +#define rwsem_acquire_nest(l, s, t, n, i) \ +do { \ + ldt_lock(&(l)->dmap, s, t, n, i); \ + lock_acquire_exclusive(l, s, t, n, i); \ +} while (0) +#define rwsem_acquire_read(l, s, t, i) \ +do { \ + ldt_lock(&(l)->dmap, s, t, NULL, i); \ + lock_acquire_shared(l, s, t, NULL, i); \ +} while (0) +#define rwsem_release(l, i) \ +do { \ + ldt_unlock(&(l)->dmap, i); \ + lock_release(l, i); \ +} while (0) #define lock_map_acquire(l) lock_acquire_exclusive(l, 0, 0, NULL, _THIS_IP_) #define lock_map_acquire_try(l) 
lock_acquire_exclusive(l, 0, 1, NULL, _THIS_IP_) diff --git a/include/linux/lockdep_types.h b/include/linux/lockdep_types.h index 70d30d40ea4a..f473fdb8e7d6 100644 --- a/include/linux/lockdep_types.h +++ b/include/linux/lockdep_types.h @@ -11,6 +11,7 @@ #define __LINUX_LOCKDEP_TYPES_H #include +#include #define MAX_LOCKDEP_SUBCLASSES 8UL @@ -77,6 +78,7 @@ struct lock_class_key { struct hlist_node hash_entry; struct lockdep_subclass_key subkeys[MAX_LOCKDEP_SUBCLASSES]; }; + struct dept_key dkey; }; extern struct lock_class_key __lockdep_no_validate__; @@ -194,6 +196,7 @@ struct lockdep_map { int cpu; unsigned long ip; #endif + struct dept_map dmap; }; struct pin_cookie { unsigned int val; }; diff --git a/include/linux/mutex.h b/include/linux/mutex.h index 67edc4ca2bee..179d0ffd0d63 100644 --- a/include/linux/mutex.h +++ b/include/linux/mutex.h @@ -27,6 +27,7 @@ , .dep_map = { \ .name = #lockname, \ .wait_type_inner = LD_WAIT_SLEEP, \ + .dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\ } #else # define __DEP_MAP_MUTEX_INITIALIZER(lockname) diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h index 36b942b67b7d..e871aca04645 100644 --- a/include/linux/percpu-rwsem.h +++ b/include/linux/percpu-rwsem.h @@ -21,7 +21,7 @@ struct percpu_rw_semaphore { }; #ifdef CONFIG_DEBUG_LOCK_ALLOC -#define __PERCPU_RWSEM_DEP_MAP_INIT(lockname) .dep_map = { .name = #lockname }, +#define __PERCPU_RWSEM_DEP_MAP_INIT(lockname) .dep_map = { .name = #lockname, .dmap = DEPT_MAP_INITIALIZER(lockname, NULL) }, #else #define __PERCPU_RWSEM_DEP_MAP_INIT(lockname) #endif diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h index 7d049883a08a..35889ac5eeae 100644 --- a/include/linux/rtmutex.h +++ b/include/linux/rtmutex.h @@ -81,6 +81,7 @@ do { \ .dep_map = { \ .name = #mutexname, \ .wait_type_inner = LD_WAIT_SLEEP, \ + .dmap = DEPT_MAP_INITIALIZER(mutexname, NULL),\ } #else #define __DEP_MAP_RT_MUTEX_INITIALIZER(mutexname) diff --git a/include/linux/rwlock_types.h 
b/include/linux/rwlock_types.h index 1948442e7750..6e58dfc84997 100644 --- a/include/linux/rwlock_types.h +++ b/include/linux/rwlock_types.h @@ -10,6 +10,7 @@ .dep_map = { \ .name = #lockname, \ .wait_type_inner = LD_WAIT_CONFIG, \ + .dmap = DEPT_MAP_INITIALIZER(lockname, NULL), \ } #else # define RW_DEP_MAP_INIT(lockname) diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h index c8b543d428b0..2540b18e3489 100644 --- a/include/linux/rwsem.h +++ b/include/linux/rwsem.h @@ -22,6 +22,7 @@ .dep_map = { \ .name = #lockname, \ .wait_type_inner = LD_WAIT_SLEEP, \ + .dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\ }, #else # define __RWSEM_DEP_MAP_INIT(lockname) diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h index d90d8ee29d81..803a5a067f07 100644 --- a/include/linux/seqlock.h +++ b/include/linux/seqlock.h @@ -51,7 +51,7 @@ static inline void __seqcount_init(seqcount_t *s, const char *name, #ifdef CONFIG_DEBUG_LOCK_ALLOC # define SEQCOUNT_DEP_MAP_INIT(lockname) \ - .dep_map = { .name = #lockname } + .dep_map = { .name = #lockname, .dmap = DEPT_MAP_INITIALIZER(lockname, NULL) } /** * seqcount_init() - runtime initializer for seqcount_t diff --git a/include/linux/spinlock_types_raw.h b/include/linux/spinlock_types_raw.h index 91cb36b65a17..3dcc551ded25 100644 --- a/include/linux/spinlock_types_raw.h +++ b/include/linux/spinlock_types_raw.h @@ -31,11 +31,13 @@ typedef struct raw_spinlock { .dep_map = { \ .name = #lockname, \ .wait_type_inner = LD_WAIT_SPIN, \ + .dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\ } # define SPIN_DEP_MAP_INIT(lockname) \ .dep_map = { \ .name = #lockname, \ .wait_type_inner = LD_WAIT_CONFIG, \ + .dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\ } # define LOCAL_SPIN_DEP_MAP_INIT(lockname) \ @@ -43,6 +45,7 @@ typedef struct raw_spinlock { .name = #lockname, \ .wait_type_inner = LD_WAIT_CONFIG, \ .lock_type = LD_LOCK_PERCPU, \ + .dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\ } #else # define RAW_SPIN_DEP_MAP_INIT(lockname) diff --git 
a/include/linux/srcu.h b/include/linux/srcu.h index 236610e4a8fa..c5e0ae6088c4 100644 --- a/include/linux/srcu.h +++ b/include/linux/srcu.h @@ -35,7 +35,7 @@ int __init_srcu_struct(struct srcu_struct *ssp, const char *name, __init_srcu_struct((ssp), #ssp, &__srcu_key); \ }) -#define __SRCU_DEP_MAP_INIT(srcu_name) .dep_map = { .name = #srcu_name }, +#define __SRCU_DEP_MAP_INIT(srcu_name) .dep_map = { .name = #srcu_name, .dmap = DEPT_MAP_INITIALIZER(srcu_name, NULL) }, #else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */ int init_srcu_struct(struct srcu_struct *ssp); diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c index a3e774479f94..7e12e46dc4b7 100644 --- a/kernel/dependency/dept.c +++ b/kernel/dependency/dept.c @@ -244,10 +244,10 @@ static bool dept_working(void) * Even k == NULL is considered as a valid key because it would use * &->map_key as the key in that case. */ -struct dept_key __dept_no_validate__; +extern struct lock_class_key __lockdep_no_validate__; static bool valid_key(struct dept_key *k) { - return &__dept_no_validate__ != k; + return &__lockdep_no_validate__.dkey != k; } /* @@ -1936,7 +1936,7 @@ void dept_softirqs_off(void) dept_task()->softirqs_enabled = false; } -void dept_hardirqs_off(void) +void noinstr dept_hardirqs_off(void) { /* * Assumes that it's called with IRQ disabled so that accessing @@ -1958,7 +1958,7 @@ void dept_softirq_enter(void) /* * Ensure it's the outmost hardirq context. 
*/ -void dept_hardirq_enter(void) +void noinstr dept_hardirq_enter(void) { struct dept_task *dt = dept_task(); diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c index 151bd3de5936..e27cf9d17163 100644 --- a/kernel/locking/lockdep.c +++ b/kernel/locking/lockdep.c @@ -1215,6 +1215,8 @@ void lockdep_register_key(struct lock_class_key *key) struct lock_class_key *k; unsigned long flags; + dept_key_init(&key->dkey); + if (WARN_ON_ONCE(static_obj(key))) return; hash_head = keyhashentry(key); @@ -4310,6 +4312,8 @@ static void __trace_hardirqs_on_caller(void) */ void lockdep_hardirqs_on_prepare(void) { + dept_hardirqs_on(); + if (unlikely(!debug_locks)) return; @@ -4430,6 +4434,8 @@ EXPORT_SYMBOL_GPL(lockdep_hardirqs_on); */ void noinstr lockdep_hardirqs_off(unsigned long ip) { + dept_hardirqs_off(); + if (unlikely(!debug_locks)) return; @@ -4474,6 +4480,8 @@ void lockdep_softirqs_on(unsigned long ip) { struct irqtrace_events *trace = ¤t->irqtrace; + dept_softirqs_on_ip(ip); + if (unlikely(!lockdep_enabled())) return; @@ -4512,6 +4520,8 @@ void lockdep_softirqs_on(unsigned long ip) */ void lockdep_softirqs_off(unsigned long ip) { + dept_softirqs_off(); + if (unlikely(!lockdep_enabled())) return; @@ -4859,6 +4869,8 @@ void lockdep_init_map_type(struct lockdep_map *lock, const char *name, { int i; + ldt_init(&lock->dmap, &key->dkey, subclass, name); + for (i = 0; i < NR_LOCKDEP_CACHING_CLASSES; i++) lock->class_cache[i] = NULL; @@ -5630,6 +5642,12 @@ void lock_set_class(struct lockdep_map *lock, const char *name, { unsigned long flags; + /* + * dept_map_(re)init() might be called twice redundantly. But + * there's no choice as long as Dept relies on Lockdep. 
+ */ + ldt_set_class(&lock->dmap, name, &key->dkey, subclass, ip); + if (unlikely(!lockdep_enabled())) return; @@ -5647,6 +5665,8 @@ void lock_downgrade(struct lockdep_map *lock, unsigned long ip) { unsigned long flags; + ldt_downgrade(&lock->dmap, ip); + if (unlikely(!lockdep_enabled())) return; @@ -6447,6 +6467,8 @@ void lockdep_unregister_key(struct lock_class_key *key) unsigned long flags; bool found = false; + dept_key_destroy(&key->dkey); + might_sleep(); if (WARN_ON_ONCE(static_obj(key)))

From patchwork Wed May 8 09:47:03 2024
X-Patchwork-Id: 13658430
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v14 06/28] dept: Add proc knobs to show stats and dependency graph
Date: Wed, 8 May 2024 18:47:03 +0900
Message-Id: <20240508094726.35754-7-byungchul@sk.com>
In-Reply-To: <20240508094726.35754-1-byungchul@sk.com>
References: <20240508094726.35754-1-byungchul@sk.com>

It'd be useful to be able to inspect Dept's internal stats and its dependency graph at runtime, via procfs. Introduce the knobs.
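For a concrete picture of what the new /proc/dept_deps knob prints, here is a minimal Python model of the text format emitted by l_show() in the patch below (one "[key] name" line per class, then one " -> [key] name" line per forward dependency). The class names and key values are invented for illustration; this is not output from a real kernel.

```python
# Toy model of the /proc/dept_deps format. Keys and class names below
# are made up; only the line layout mirrors l_show()'s seq_printf calls.
classes = [
    ("0xffff0001", "console_lock", ["0xffff0002"]),
    ("0xffff0002", "port_lock", []),
]
by_key = {key: name for key, name, _ in classes}

def render(classes):
    out = ["All classes:", ""]
    for key, name, deps in classes:
        out.append("[%s] %s" % (key, name))            # class line
        for dep in deps:
            out.append(" -> [%s] %s" % (dep, by_key[dep]))  # forward dep
        out.append("")
    return "\n".join(out)

print(render(classes))
```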
Signed-off-by: Byungchul Park --- kernel/dependency/Makefile | 1 + kernel/dependency/dept.c | 24 +++----- kernel/dependency/dept_internal.h | 26 +++++++++ kernel/dependency/dept_proc.c | 95 +++++++++++++++++++++++++++++++ 4 files changed, 131 insertions(+), 15 deletions(-) create mode 100644 kernel/dependency/dept_internal.h create mode 100644 kernel/dependency/dept_proc.c diff --git a/kernel/dependency/Makefile b/kernel/dependency/Makefile index b5cfb8a03c0c..92f165400187 100644 --- a/kernel/dependency/Makefile +++ b/kernel/dependency/Makefile @@ -1,3 +1,4 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_DEPT) += dept.o +obj-$(CONFIG_DEPT) += dept_proc.o diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c index 7e12e46dc4b7..19406093103e 100644 --- a/kernel/dependency/dept.c +++ b/kernel/dependency/dept.c @@ -74,6 +74,7 @@ #include #include #include +#include "dept_internal.h" static int dept_stop; static int dept_per_cpu_ready; @@ -260,20 +261,13 @@ static bool valid_key(struct dept_key *k) * have been freed will be placed. */ -enum object_t { -#define OBJECT(id, nr) OBJECT_##id, - #include "dept_object.h" -#undef OBJECT - OBJECT_NR, -}; - #define OBJECT(id, nr) \ static struct dept_##id spool_##id[nr]; \ static DEFINE_PER_CPU(struct llist_head, lpool_##id); #include "dept_object.h" #undef OBJECT -static struct dept_pool pool[OBJECT_NR] = { +struct dept_pool dept_pool[OBJECT_NR] = { #define OBJECT(id, nr) { \ .name = #id, \ .obj_sz = sizeof(struct dept_##id), \ @@ -303,7 +297,7 @@ static void *from_pool(enum object_t t) if (DEPT_WARN_ON(!irqs_disabled())) return NULL; - p = &pool[t]; + p = &dept_pool[t]; /* * Try local pool first. 
@@ -338,7 +332,7 @@ static void *from_pool(enum object_t t) static void to_pool(void *o, enum object_t t) { - struct dept_pool *p = &pool[t]; + struct dept_pool *p = &dept_pool[t]; struct llist_head *h; preempt_disable(); @@ -2092,7 +2086,7 @@ void dept_map_copy(struct dept_map *to, struct dept_map *from) clean_classes_cache(&to->map_key); } -static LIST_HEAD(classes); +LIST_HEAD(dept_classes); static bool within(const void *addr, void *start, unsigned long size) { @@ -2124,7 +2118,7 @@ void dept_free_range(void *start, unsigned int sz) while (unlikely(!dept_lock())) cpu_relax(); - list_for_each_entry_safe(c, n, &classes, all_node) { + list_for_each_entry_safe(c, n, &dept_classes, all_node) { if (!within((void *)c->key, start, sz) && !within(c->name, start, sz)) continue; @@ -2200,7 +2194,7 @@ static struct dept_class *check_new_class(struct dept_key *local, c->sub_id = sub_id; c->key = (unsigned long)(k->base + sub_id); hash_add_class(c); - list_add(&c->all_node, &classes); + list_add(&c->all_node, &dept_classes); unlock: dept_unlock(); caching: @@ -2915,8 +2909,8 @@ static void migrate_per_cpu_pool(void) struct llist_head *from; struct llist_head *to; - from = &pool[i].boot_pool; - to = per_cpu_ptr(pool[i].lpool, boot_cpu); + from = &dept_pool[i].boot_pool; + to = per_cpu_ptr(dept_pool[i].lpool, boot_cpu); move_llist(to, from); } } diff --git a/kernel/dependency/dept_internal.h b/kernel/dependency/dept_internal.h new file mode 100644 index 000000000000..007c1eec6bab --- /dev/null +++ b/kernel/dependency/dept_internal.h @@ -0,0 +1,26 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Dept(DEPendency Tracker) - runtime dependency tracker internal header + * + * Started by Byungchul Park : + * + * Copyright (c) 2020 LG Electronics, Inc., Byungchul Park + */ + +#ifndef __DEPT_INTERNAL_H +#define __DEPT_INTERNAL_H + +#ifdef CONFIG_DEPT + +enum object_t { +#define OBJECT(id, nr) OBJECT_##id, + #include "dept_object.h" +#undef OBJECT + OBJECT_NR, +}; + +extern struct 
list_head dept_classes; +extern struct dept_pool dept_pool[]; + +#endif +#endif /* __DEPT_INTERNAL_H */ diff --git a/kernel/dependency/dept_proc.c b/kernel/dependency/dept_proc.c new file mode 100644 index 000000000000..7d61dfbc5865 --- /dev/null +++ b/kernel/dependency/dept_proc.c @@ -0,0 +1,95 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Procfs knobs for Dept(DEPendency Tracker) + * + * Started by Byungchul Park : + * + * Copyright (C) 2021 LG Electronics, Inc. , Byungchul Park + */ +#include +#include +#include +#include "dept_internal.h" + +static void *l_next(struct seq_file *m, void *v, loff_t *pos) +{ + /* + * XXX: Serialize list traversal if needed. The following might + * give a wrong information on contention. + */ + return seq_list_next(v, &dept_classes, pos); +} + +static void *l_start(struct seq_file *m, loff_t *pos) +{ + /* + * XXX: Serialize list traversal if needed. The following might + * give a wrong information on contention. + */ + return seq_list_start_head(&dept_classes, *pos); +} + +static void l_stop(struct seq_file *m, void *v) +{ +} + +static int l_show(struct seq_file *m, void *v) +{ + struct dept_class *fc = list_entry(v, struct dept_class, all_node); + struct dept_dep *d; + const char *prefix; + + if (v == &dept_classes) { + seq_puts(m, "All classes:\n\n"); + return 0; + } + + prefix = fc->sched_map ? " " : ""; + seq_printf(m, "[%p] %s%s\n", (void *)fc->key, prefix, fc->name); + + /* + * XXX: Serialize list traversal if needed. The following might + * give a wrong information on contention. + */ + list_for_each_entry(d, &fc->dep_head, dep_node) { + struct dept_class *tc = d->wait->class; + + prefix = tc->sched_map ? 
" " : ""; + seq_printf(m, " -> [%p] %s%s\n", (void *)tc->key, prefix, tc->name); + } + seq_puts(m, "\n"); + + return 0; +} + +static const struct seq_operations dept_deps_ops = { + .start = l_start, + .next = l_next, + .stop = l_stop, + .show = l_show, +}; + +static int dept_stats_show(struct seq_file *m, void *v) +{ + int r; + + seq_puts(m, "Availability in the static pools:\n\n"); +#define OBJECT(id, nr) \ + r = atomic_read(&dept_pool[OBJECT_##id].obj_nr); \ + if (r < 0) \ + r = 0; \ + seq_printf(m, "%s\t%d/%d(%d%%)\n", #id, r, nr, (r * 100) / (nr)); + #include "dept_object.h" +#undef OBJECT + + return 0; +} + +static int __init dept_proc_init(void) +{ + proc_create_seq("dept_deps", S_IRUSR, NULL, &dept_deps_ops); + proc_create_single("dept_stats", S_IRUSR, NULL, dept_stats_show); + return 0; +} + +__initcall(dept_proc_init); From patchwork Wed May 8 09:47:04 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13658437 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 54DBCC04FFE for ; Wed, 8 May 2024 10:03:15 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 43BA41126C0; Wed, 8 May 2024 10:02:59 +0000 (UTC) Received: from invmail4.hynix.com (exvmail4.skhynix.com [166.125.252.92]) by gabe.freedesktop.org (Postfix) with ESMTP id 0AF8B1126C0 for ; Wed, 8 May 2024 10:02:54 +0000 (UTC) X-AuditID: a67dfc5b-d85ff70000001748-85-663b4a394a9a From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, 
adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v14 07/28] dept: Distinguish each syscall context from another Date: Wed, 8 May 2024 18:47:04 +0900 Message-Id: <20240508094726.35754-8-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240508094726.35754-1-byungchul@sk.com> References: <20240508094726.35754-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSe2xLcRTH/X739t67RuXmkrhGPCoisRjzPEMECS5BCP+wBI3dWemm6Wy2 hWS62osJkirTSPdI7VE2t8Rma9VkL2zKhq1mbJnH6DZGx6xB6/HPySfnfM/n/HMYgrPLQhl1 /GFRF6/SKCk5Ke8fmz83cuPymPlGzzw4e2o++L5mkWAut1HgvlaGwHbjOIa+uvXwfNiLYLT5 EQEmoxtBfvdLAm7UdyFwFOspaO0dB22+QQqajCcpSC8sp+DxRz+GzvPnMJRJm+HBmQIMrpF3 JJj6KLhkSseB8h7DiLWUBmvaLOgpzqPB3x0BTV3PZODwhMHFy50U1DiaSKiv7MHQettMQZft lwwe1DeS4D6bK4OrAwUUfBy2EmD1DdLwxGXBUGEIiDK+/JRBQ64LQ0bRdQxtHdUInFmvMUi2 ZxTc83kx2CUjAT+u1CHoOd1Pw4lTIzRcOn4awckT50kwdC6G0e9malWkcM87SAgG+xHBMWwh 
hfsFvFCV95IWDE4PLVikRMFePEcorOnDQv6QTyZIpdmUIA2do4Wc/jYsDLS00ELjhVFS6G0z 4a2hu+QrokWNOknUzVu5Vx5bU1KNtA17kw36D3QaqtqUgxiGZxfx7dkxOSjkD5rdejrIFDub b28fIYI8gZ3O23PfyoJMsF45X9SyLsjj2S38q29OKqgh2Vn8xZzIYFvBLubrXVfIv8ppfFmF 648mhF3Cd7wbQEHmApnq9LzAKXkg84vhW/P0/xYm8XeL28kzSGFBY0oRp45PilOpNYvCY1Pi 1cnh+w7FSSjwStZj/qhKNOTeXotYBinHKlwTl8VwMlVSQkpcLeIZQjlBUZe5NIZTRKtSUkXd oT26RI2YUIsmM6RyomLB8JFojt2vOiweFEWtqPs/xUxIaBpKvdyc+Soq9c2OVbeL7u9q/H4g YsOLTFPlmgW3/NrIsPzq6dhVYukdWmhcv32qczeX6vhk6/uyzWlOljSPqpZkJzIc1t9sWPvV PKPZUsLdKVlbVp6s5TP6vatXvHj4me2YUhvu6cHZfju5M2p8p2fZGOnogS3a7oNZmplP/d1P i5RkQqwqYg6hS1D9BukEyo9GAwAA X-Brightmail-Tracker: H4sIAAAAAAAAAzWSfUzMcRzHfX/Pd+v47TT9pA03NKE04kNm/YF+89DwB2NMN/3S6bryO5U8 rKvOU8oK3ZFj18NOKuLOKHXtVisSOWpUKrSGnheupIaO+eez197v915/fRhcbiC9GZXmqCBq lGoFJSWkYcFpy9ZsDo5c7sqYBdkZy8H1/RwBprJSCpx3SxCUPkjBoLcuFN6ODiCYePESB2OO E0Hex04cHtR3IbAXpVLQ3DMdWlzDFDTkXKAgraCMglf9kxh0GC5hUGLdBo1Z+Rg4xj8TYOyl 4LoxDZs6XzAYtxTTYNEthO6iXBomPwZCQ9cbEmpvNJBgb18C1252UFBlbyCgvrwbg+bHJgq6 Sn+T0Fj/lABndiYJd4byKegfteBgcQ3T8NphxuCefsp25tsvEp5kOjA4U3gfg5a2SgTV5z5g YC19Q0GtawADmzUHh5+36hB0Xxyk4XTGOA3XUy4iuHDaQIC+IwgmfpiokLV87cAwzuttibx9 1Ezwz/I5viK3k+b11e00b7bG87YiP76gqhfj8766SN5afJ7irV8v0Xz6YAvGDzU10fzTqxME 39NixLbP2StdFyGoVQmCGLA+XBpVdbsSxT0JP6ZP7aN1qGJrOpIwHLuSMzlTaTdTrC/X2jqO u9mTncfZMj+RbsbZASlX2LTJzTPZMO79WDWVjhiGYBdy19LXuGMZG8TVO24R/5RzuZJ7jr8a CbuKa/s8hNwsn9pUpuXSWUhqRtOKkadKkxCjVKmD/LXRUUka1TH/g7ExVjT1LZZTk9nl6Htz aA1iGaTwkDmp4Eg5qUzQJsXUII7BFZ6yurOrI+WyCGXScUGMPSDGqwVtDZrDEAov2ebdQric PaQ8KkQLQpwg/m8xRuKtQ8khjkUB++fXHNoRKs6XHA7w6X30bOmJPMZQPnskfL9uy06poTNx aW52wZWDGwdzHJKyxYW+PcK7DaJ6V8WpsZNVic9bbb4+ybd/+jeZzp5cEOlXFnhzqxjx8Ejf 5bXx9r6sFXtok27GyJ55Yfs8fI1Epk/z2/NeSeU+yRaNdwhuBgWhjVIG+uGiVvkHdT4TCikD AAA= X-CFilter-Loop: Reflected X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: 
dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" It enters kernel mode on each syscall and each syscall handling should be considered independently from the point of view of Dept. Otherwise, Dept may wrongly track dependencies across different syscalls. That might be a real dependency from user mode. However, now that Dept just started to work, conservatively let Dept not track dependencies across different syscalls. Signed-off-by: Byungchul Park --- arch/arm64/kernel/syscall.c | 3 ++ arch/x86/entry/common.c | 4 +++ include/linux/dept.h | 39 ++++++++++++--------- kernel/dependency/dept.c | 67 +++++++++++++++++++------------------ 4 files changed, 64 insertions(+), 49 deletions(-) diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c index ad198262b981..1b41cc443db5 100644 --- a/arch/arm64/kernel/syscall.c +++ b/arch/arm64/kernel/syscall.c @@ -7,6 +7,7 @@ #include #include #include +#include #include #include @@ -97,6 +98,8 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr, * (Similarly for HVC and SMC elsewhere.) 
*/ + dept_kernel_enter(); + if (flags & _TIF_MTE_ASYNC_FAULT) { /* * Process the asynchronous tag check fault before the actual diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c index 51cc9c7cb9bd..69d41f863c7d 100644 --- a/arch/x86/entry/common.c +++ b/arch/x86/entry/common.c @@ -20,6 +20,7 @@ #include #include #include +#include #ifdef CONFIG_XEN_PV #include @@ -75,6 +76,7 @@ static __always_inline bool do_syscall_x32(struct pt_regs *regs, int nr) /* Returns true to return using SYSRET, or false to use IRET */ __visible noinstr bool do_syscall_64(struct pt_regs *regs, int nr) { + dept_kernel_enter(); add_random_kstack_offset(); nr = syscall_enter_from_user_mode(regs, nr); @@ -327,6 +329,7 @@ __visible noinstr void do_int80_syscall_32(struct pt_regs *regs) { int nr = syscall_32_enter(regs); + dept_kernel_enter(); add_random_kstack_offset(); /* * Subtlety here: if ptrace pokes something larger than 2^31-1 into @@ -348,6 +351,7 @@ static noinstr bool __do_fast_syscall_32(struct pt_regs *regs) int nr = syscall_32_enter(regs); int res; + dept_kernel_enter(); add_random_kstack_offset(); /* * This cannot use syscall_enter_from_user_mode() as it has to diff --git a/include/linux/dept.h b/include/linux/dept.h index c6e2291dd843..4e359f76ac3c 100644 --- a/include/linux/dept.h +++ b/include/linux/dept.h @@ -25,11 +25,16 @@ struct task_struct; #define DEPT_MAX_SUBCLASSES_USR (DEPT_MAX_SUBCLASSES / DEPT_MAX_SUBCLASSES_EVT) #define DEPT_MAX_SUBCLASSES_CACHE 2 -#define DEPT_SIRQ 0 -#define DEPT_HIRQ 1 -#define DEPT_IRQS_NR 2 -#define DEPT_SIRQF (1UL << DEPT_SIRQ) -#define DEPT_HIRQF (1UL << DEPT_HIRQ) +enum { + DEPT_CXT_SIRQ = 0, + DEPT_CXT_HIRQ, + DEPT_CXT_IRQS_NR, + DEPT_CXT_PROCESS = DEPT_CXT_IRQS_NR, + DEPT_CXTS_NR +}; + +#define DEPT_SIRQF (1UL << DEPT_CXT_SIRQ) +#define DEPT_HIRQF (1UL << DEPT_CXT_HIRQ) struct dept_ecxt; struct dept_iecxt { @@ -94,8 +99,8 @@ struct dept_class { /* * for tracking IRQ dependencies */ - struct dept_iecxt iecxt[DEPT_IRQS_NR]; - 
struct dept_iwait iwait[DEPT_IRQS_NR]; + struct dept_iecxt iecxt[DEPT_CXT_IRQS_NR]; + struct dept_iwait iwait[DEPT_CXT_IRQS_NR]; /* * classified by a map embedded in task_struct, @@ -207,8 +212,8 @@ struct dept_ecxt { /* * where the IRQ-enabled happened */ - unsigned long enirq_ip[DEPT_IRQS_NR]; - struct dept_stack *enirq_stack[DEPT_IRQS_NR]; + unsigned long enirq_ip[DEPT_CXT_IRQS_NR]; + struct dept_stack *enirq_stack[DEPT_CXT_IRQS_NR]; /* * where the event context started @@ -252,8 +257,8 @@ struct dept_wait { /* * where the IRQ wait happened */ - unsigned long irq_ip[DEPT_IRQS_NR]; - struct dept_stack *irq_stack[DEPT_IRQS_NR]; + unsigned long irq_ip[DEPT_CXT_IRQS_NR]; + struct dept_stack *irq_stack[DEPT_CXT_IRQS_NR]; /* * where the wait happened @@ -406,19 +411,19 @@ struct dept_task { int wait_hist_pos; /* - * sequential id to identify each IRQ context + * sequential id to identify each context */ - unsigned int irq_id[DEPT_IRQS_NR]; + unsigned int cxt_id[DEPT_CXTS_NR]; /* * for tracking IRQ-enabled points with cross-event */ - unsigned int wgen_enirq[DEPT_IRQS_NR]; + unsigned int wgen_enirq[DEPT_CXT_IRQS_NR]; /* * for keeping up-to-date IRQ-enabled points */ - unsigned long enirq_ip[DEPT_IRQS_NR]; + unsigned long enirq_ip[DEPT_CXT_IRQS_NR]; /* * for reserving a current stack instance at each operation @@ -465,7 +470,7 @@ struct dept_task { .wait_hist = { { .wait = NULL, } }, \ .ecxt_held_pos = 0, \ .wait_hist_pos = 0, \ - .irq_id = { 0U }, \ + .cxt_id = { 0U }, \ .wgen_enirq = { 0U }, \ .enirq_ip = { 0UL }, \ .stack = NULL, \ @@ -503,6 +508,7 @@ extern void dept_event(struct dept_map *m, unsigned long e_f, unsigned long ip, extern void dept_ecxt_exit(struct dept_map *m, unsigned long e_f, unsigned long ip); extern void dept_sched_enter(void); extern void dept_sched_exit(void); +extern void dept_kernel_enter(void); static inline void dept_ecxt_enter_nokeep(struct dept_map *m) { @@ -552,6 +558,7 @@ struct dept_task { }; #define dept_ecxt_exit(m, e_f, ip) do { } 
while (0) #define dept_sched_enter() do { } while (0) #define dept_sched_exit() do { } while (0) +#define dept_kernel_enter() do { } while (0) #define dept_ecxt_enter_nokeep(m) do { } while (0) #define dept_key_init(k) do { (void)(k); } while (0) #define dept_key_destroy(k) do { (void)(k); } while (0) diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c index 19406093103e..9aba9eb22760 100644 --- a/kernel/dependency/dept.c +++ b/kernel/dependency/dept.c @@ -220,9 +220,9 @@ static struct dept_class *dep_tc(struct dept_dep *d) static const char *irq_str(int irq) { - if (irq == DEPT_SIRQ) + if (irq == DEPT_CXT_SIRQ) return "softirq"; - if (irq == DEPT_HIRQ) + if (irq == DEPT_CXT_HIRQ) return "hardirq"; return "(unknown)"; } @@ -406,7 +406,7 @@ static void initialize_class(struct dept_class *c) { int i; - for (i = 0; i < DEPT_IRQS_NR; i++) { + for (i = 0; i < DEPT_CXT_IRQS_NR; i++) { struct dept_iecxt *ie = &c->iecxt[i]; struct dept_iwait *iw = &c->iwait[i]; @@ -431,7 +431,7 @@ static void initialize_ecxt(struct dept_ecxt *e) { int i; - for (i = 0; i < DEPT_IRQS_NR; i++) { + for (i = 0; i < DEPT_CXT_IRQS_NR; i++) { e->enirq_stack[i] = NULL; e->enirq_ip[i] = 0UL; } @@ -447,7 +447,7 @@ static void initialize_wait(struct dept_wait *w) { int i; - for (i = 0; i < DEPT_IRQS_NR; i++) { + for (i = 0; i < DEPT_CXT_IRQS_NR; i++) { w->irq_stack[i] = NULL; w->irq_ip[i] = 0UL; } @@ -486,7 +486,7 @@ static void destroy_ecxt(struct dept_ecxt *e) { int i; - for (i = 0; i < DEPT_IRQS_NR; i++) + for (i = 0; i < DEPT_CXT_IRQS_NR; i++) if (e->enirq_stack[i]) put_stack(e->enirq_stack[i]); if (e->class) @@ -502,7 +502,7 @@ static void destroy_wait(struct dept_wait *w) { int i; - for (i = 0; i < DEPT_IRQS_NR; i++) + for (i = 0; i < DEPT_CXT_IRQS_NR; i++) if (w->irq_stack[i]) put_stack(w->irq_stack[i]); if (w->class) @@ -651,7 +651,7 @@ static void print_diagram(struct dept_dep *d) const char *tc_n = tc->sched_map ? 
"" : (tc->name ?: "(unknown)"); irqf = e->enirqf & w->irqf; - for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) { + for_each_set_bit(irq, &irqf, DEPT_CXT_IRQS_NR) { if (!firstline) pr_warn("\nor\n\n"); firstline = false; @@ -684,7 +684,7 @@ static void print_dep(struct dept_dep *d) const char *tc_n = tc->sched_map ? "" : (tc->name ?: "(unknown)"); irqf = e->enirqf & w->irqf; - for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) { + for_each_set_bit(irq, &irqf, DEPT_CXT_IRQS_NR) { pr_warn("%s has been enabled:\n", irq_str(irq)); print_ip_stack(e->enirq_ip[irq], e->enirq_stack[irq]); pr_warn("\n"); @@ -910,7 +910,7 @@ static void bfs(struct dept_class *c, bfs_f *cb, void *in, void **out) */ static unsigned long cur_enirqf(void); -static int cur_irq(void); +static int cur_cxt(void); static unsigned int cur_ctxt_id(void); static struct dept_iecxt *iecxt(struct dept_class *c, int irq) @@ -1458,7 +1458,7 @@ static void add_dep(struct dept_ecxt *e, struct dept_wait *w) if (d) { check_dl_bfs(d); - for (i = 0; i < DEPT_IRQS_NR; i++) { + for (i = 0; i < DEPT_CXT_IRQS_NR; i++) { struct dept_iwait *fiw = iwait(fc, i); struct dept_iecxt *found_ie; struct dept_iwait *found_iw; @@ -1494,7 +1494,7 @@ static void add_wait(struct dept_class *c, unsigned long ip, struct dept_task *dt = dept_task(); struct dept_wait *w; unsigned int wg = 0U; - int irq; + int cxt; int i; if (DEPT_WARN_ON(!valid_class(c))) @@ -1510,9 +1510,9 @@ static void add_wait(struct dept_class *c, unsigned long ip, w->wait_stack = get_current_stack(); w->sched_sleep = sched_sleep; - irq = cur_irq(); - if (irq < DEPT_IRQS_NR) - add_iwait(c, irq, w); + cxt = cur_cxt(); + if (cxt == DEPT_CXT_HIRQ || cxt == DEPT_CXT_SIRQ) + add_iwait(c, cxt, w); /* * Avoid adding dependency between user aware nested ecxt and @@ -1593,7 +1593,7 @@ static bool add_ecxt(struct dept_map *m, struct dept_class *c, eh->sub_l = sub_l; irqf = cur_enirqf(); - for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) + for_each_set_bit(irq, &irqf, DEPT_CXT_IRQS_NR) 
add_iecxt(c, irq, e, false); del_ecxt(e); @@ -1745,7 +1745,7 @@ static void do_event(struct dept_map *m, struct dept_class *c, add_dep(eh->ecxt, wh->wait); } - for (i = 0; i < DEPT_IRQS_NR; i++) { + for (i = 0; i < DEPT_CXT_IRQS_NR; i++) { struct dept_ecxt *e; if (before(dt->wgen_enirq[i], wg)) @@ -1787,7 +1787,7 @@ static void disconnect_class(struct dept_class *c) call_rcu(&d->rh, del_dep_rcu); } - for (i = 0; i < DEPT_IRQS_NR; i++) { + for (i = 0; i < DEPT_CXT_IRQS_NR; i++) { stale_iecxt(iecxt(c, i)); stale_iwait(iwait(c, i)); } @@ -1812,27 +1812,21 @@ static unsigned long cur_enirqf(void) return 0UL; } -static int cur_irq(void) +static int cur_cxt(void) { if (lockdep_softirq_context(current)) - return DEPT_SIRQ; + return DEPT_CXT_SIRQ; if (lockdep_hardirq_context()) - return DEPT_HIRQ; - return DEPT_IRQS_NR; + return DEPT_CXT_HIRQ; + return DEPT_CXT_PROCESS; } static unsigned int cur_ctxt_id(void) { struct dept_task *dt = dept_task(); - int irq = cur_irq(); + int cxt = cur_cxt(); - /* - * Normal process context - */ - if (irq == DEPT_IRQS_NR) - return 0U; - - return dt->irq_id[irq] | (1UL << irq); + return dt->cxt_id[cxt] | (1UL << cxt); } static void enirq_transition(int irq) @@ -1893,7 +1887,7 @@ static void dept_enirq(unsigned long ip) flags = dept_enter(); - for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) { + for_each_set_bit(irq, &irqf, DEPT_CXT_IRQS_NR) { dt->enirq_ip[irq] = ip; enirq_transition(irq); } @@ -1939,6 +1933,13 @@ void noinstr dept_hardirqs_off(void) dept_task()->hardirqs_enabled = false; } +void noinstr dept_kernel_enter(void) +{ + struct dept_task *dt = dept_task(); + + dt->cxt_id[DEPT_CXT_PROCESS] += 1UL << DEPT_CXTS_NR; +} + /* * Ensure it's the outmost softirq context. 
*/ @@ -1946,7 +1947,7 @@ void dept_softirq_enter(void) { struct dept_task *dt = dept_task(); - dt->irq_id[DEPT_SIRQ] += 1UL << DEPT_IRQS_NR; + dt->cxt_id[DEPT_CXT_SIRQ] += 1UL << DEPT_CXTS_NR; } /* @@ -1956,7 +1957,7 @@ void noinstr dept_hardirq_enter(void) { struct dept_task *dt = dept_task(); - dt->irq_id[DEPT_HIRQ] += 1UL << DEPT_IRQS_NR; + dt->cxt_id[DEPT_CXT_HIRQ] += 1UL << DEPT_CXTS_NR; } void dept_sched_enter(void) From patchwork Wed May 8 09:47:05 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13658428 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 3A228C04FFE for ; Wed, 8 May 2024 10:03:02 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id F2FC0113530; Wed, 8 May 2024 10:02:57 +0000 (UTC) Received: from invmail4.hynix.com (exvmail4.skhynix.com [166.125.252.92]) by gabe.freedesktop.org (Postfix) with ESMTP id 2FB38113536 for ; Wed, 8 May 2024 10:02:55 +0000 (UTC) X-AuditID: a67dfc5b-d85ff70000001748-95-663b4a39c48b From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, 
linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v14 08/28] dept: Distinguish each work from another Date: Wed, 8 May 2024 18:47:05 +0900 Message-Id: <20240508094726.35754-9-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240508094726.35754-1-byungchul@sk.com> References: <20240508094726.35754-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSeUiTcRjH+/3e09XidRa9GVQsKjJSi46ngyiiersgkCIMqpWvOZpTZmoK gbW5zFLs0NUS8WIburS2EjusZTmPlVmZRzgvIpM8QtvIlGxa/fPw4ft8+PL88bCE7CEVyCrV Z0WNWqGS0xJSMjirYNXGvZsjQ81pa+Da1VDw/EgjIbfcSkNTWSkC64MLGPprdkOrdwDB+Ju3 BBiymxAU9LgJeODsRFBluUjDh8+zodkzTEN99hUatEXlNLz7NoGhI+c6hlLbAXBlFWJwjPWR YOin4Y5Bi33jK4YxUwkDppSl0GsxMjDRsxrqO1soqPq0Em7nddDwtKqeBGdlL4YPj3Np6LRO UuBy1pHQdC2DgrtDhTR885oIMHmGGXjvyMdwT+cr0o/+pqA2w4FBX3wfQ3P7EwTP0rox2Kwt NLz0DGCw27IJ+GWuQdCbOchA6tUxBu5cyERwJTWHBF3HOhj/mUtv2yi8HBgmBJ09Uajy5pNC QyEvPDK6GUH37BMj5NviBbslSCh62o+FghEPJdhKLtOCbeQ6I6QPNmNhqLGREepujZPC52YD PhgYLtkSIaqUCaImZOsJSZSrLhvHDgecM1Z0oRRUxqUjP5bn1vIN78zUfzY+dNFTTHPL+ba2 MWKK53CLeXvGl2mH4AYkfHHjrikO4Hbw1Ub3dE5yS/lXeTenfSm3jm9wjZJ/Oxfxpfcc07kf t55v7xtCUyzzOU+0RiYdSXzOT5Z/3Zf574j5/AtLG5mFpPloRgmSKdUJ0Qqlam1wVJJaeS74 VEy0DfmeyXR+4mglGmkKq0Yci+SzpI55myJllCIhLim6GvEsIZ8jrbm0IVImjVAkJYuamOOa 
eJUYV40WsKR8nnSNNzFCxp1WnBXPiGKsqPm/xaxfYAoq8fbY83oWdB0mVmQhZ4w9xRgw7lrY fWKf+ujoLv8crz19+c5jluJ2N3VEbV6iL/qoX1RxaHFp2Y3Wk5Pa5ye37A8zfE/+mqT7nTp3 /+CjsBxtcG/otqDwblpfG41/FKoMz5ND+tPC/eVd263dt+5bZuqKjky6ncr3e1qXnVnJmuVk XJRidRChiVP8AaNl6EpIAwAA X-Brightmail-Tracker: H4sIAAAAAAAAAzWSe0hTcRzF+/3ua45WlyV5sSdLCXpYQda3FtGLukTvf4r+qVXXHOqUTU2F SNOepqmlqzV1mS3xkTV710RcmjayleYrdbmkKS0X2cSlZc7on8OHcw7nryMipFoqUKRUxQpq lSJSRotJ8S556tI12+Vhy0cdsyH70nLw/DxPgr6ynAbb3TIE5Q9SMAzUbYO2YReC0TdvCdDm 2hDc7O0m4EF9DwJzyWkamvumQYvHTUNjbjoNqbcqaXj3dQxDV14OhjLTTrBmFWGo8TpJ0A7Q cEObiiekH4PXWMqAMTkYHCU6BsZ6V0BjTysFlvxGCsydi+F6QRcNL8yNJNQ/cWBofqanoad8 nAJrfQMJtuwMCioGi2j4OmwkwOhxM/C+xoDhXtrE2tmhPxS8yqjBcLb4PoaWjucIqs9/wmAq b6XB4nFhqDLlEvDrTh0CR+Y3Bs5c8jJwIyUTQfqZPBLSukJhdERPb1jLW1xugk+rOsGbhw0k /7qI45/quhk+rbqT4Q2mOL6qZBF/68UA5m/+8FC8qfQCzZt+5DD8xW8tmB9samL4hmujJN/X osV7Zh0UrzsmRCrjBfWy9YfF4daGXBzjnpGge2RHyeguexH5iTh2Jad7aKV9TLMLufZ2L+Fj f3Y+V5XxhfIxwbrEXHHTVh/PYDdztbruSZ9kg7mXBVcn+xI2lHttHSL/bc7jyu7VTPp+7Cqu wzmIfCyd6DxP1TFZSGxAU0qRv1IVH6VQRoaGaCLCE1XKhJCj0VEmNHEX48mx7CfoZ/O2WsSK kGyqxEbLw6SUIl6TGFWLOBEh85fUnVsdJpUcUyQmCeroQ+q4SEFTi2aJSFmAZPt+4bCUPa6I FSIEIUZQ/0+xyC8wGQW07bySxS7Ykvk0pHLJ9H5n4ALLwaBmeXdO58LVJUnz/yj2xUcYV6bn WWJi028r+y9PHy+2V7xp1Q3Ko09M/b7W/dvqnXtgvzS52n7KYD9UOHNv0qbSkZHPwfv0Gx/b dBUu5+78I9M+Bn0oTHFkaYXxOcsK9WInNqvsIaq+FL1xh4zUhCtWLCLUGsVflGbi5CoDAAA= X-CFilter-Loop: Reflected X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Workqueue already provides concurrency control. By that, any wait in a work doesn't prevents events in other works with the control enabled. Thus, each work would better be considered a different context. So let Dept assign a different context id to each work. 
Signed-off-by: Byungchul Park --- include/linux/dept.h | 2 ++ kernel/dependency/dept.c | 10 ++++++++++ kernel/workqueue.c | 3 +++ 3 files changed, 15 insertions(+) diff --git a/include/linux/dept.h b/include/linux/dept.h index 4e359f76ac3c..319a5b43df89 100644 --- a/include/linux/dept.h +++ b/include/linux/dept.h @@ -509,6 +509,7 @@ extern void dept_ecxt_exit(struct dept_map *m, unsigned long e_f, unsigned long extern void dept_sched_enter(void); extern void dept_sched_exit(void); extern void dept_kernel_enter(void); +extern void dept_work_enter(void); static inline void dept_ecxt_enter_nokeep(struct dept_map *m) { @@ -559,6 +560,7 @@ struct dept_task { }; #define dept_sched_enter() do { } while (0) #define dept_sched_exit() do { } while (0) #define dept_kernel_enter() do { } while (0) +#define dept_work_enter() do { } while (0) #define dept_ecxt_enter_nokeep(m) do { } while (0) #define dept_key_init(k) do { (void)(k); } while (0) #define dept_key_destroy(k) do { (void)(k); } while (0) diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c index 9aba9eb22760..a8e693fd590f 100644 --- a/kernel/dependency/dept.c +++ b/kernel/dependency/dept.c @@ -1933,6 +1933,16 @@ void noinstr dept_hardirqs_off(void) dept_task()->hardirqs_enabled = false; } +/* + * Assign a different context id to each work. 
+ */ +void dept_work_enter(void) +{ + struct dept_task *dt = dept_task(); + + dt->cxt_id[DEPT_CXT_PROCESS] += 1UL << DEPT_CXTS_NR; +} + void noinstr dept_kernel_enter(void) { struct dept_task *dt = dept_task(); diff --git a/kernel/workqueue.c b/kernel/workqueue.c index d2dbe099286b..e7b3818a26eb 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c @@ -55,6 +55,7 @@ #include #include #include +#include #include "workqueue_internal.h" @@ -3184,6 +3185,8 @@ __acquires(&pool->lock) lockdep_copy_map(&lockdep_map, &work->lockdep_map); #endif + dept_work_enter(); + /* ensure we're on the correct CPU */ WARN_ON_ONCE(!(pool->flags & POOL_DISASSOCIATED) && raw_smp_processor_id() != pool->cpu); From patchwork Wed May 8 09:47:06 2024 X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13658450 From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de,
rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v14 09/28] dept: Add a mechanism to refill the internal memory pools on running out Date: Wed, 8 May 2024 18:47:06 +0900 Message-Id: <20240508094726.35754-10-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240508094726.35754-1-byungchul@sk.com> References: <20240508094726.35754-1-byungchul@sk.com> Dept engine
works in a constrained environment. For example, Dept cannot make use of dynamic allocation, e.g. kmalloc(), so it has been using static pools to hold the memory chunks it uses. However, Dept barely works once any of the pools runs out. So implement a mechanism that refills a pool whenever it happens to run out, using irq work and a workqueue, which fit the constrained environment. Signed-off-by: Byungchul Park --- include/linux/dept.h | 19 ++++-- kernel/dependency/dept.c | 104 +++++++++++++++++++++++++++----- kernel/dependency/dept_object.h | 10 +-- kernel/dependency/dept_proc.c | 8 +-- 4 files changed, 112 insertions(+), 29 deletions(-) diff --git a/include/linux/dept.h b/include/linux/dept.h index 319a5b43df89..ca1a34be4127 100644 --- a/include/linux/dept.h +++ b/include/linux/dept.h @@ -336,9 +336,19 @@ struct dept_pool { size_t obj_sz; /* - * the number of the static array + * the remaining number of the object in spool */ - atomic_t obj_nr; + int obj_nr; + + /* + * the number of the object in spool + */ + int tot_nr; + + /* + * accumulated amount of memory used by the object in byte + */ + atomic_t acc_sz; /* * offset of ->pool_node @@ -348,9 +358,10 @@ struct dept_pool { /* * pointer to the pool */ - void *spool; + void *spool; /* static pool */ + void *rpool; /* reserved pool */ struct llist_head boot_pool; - struct llist_head __percpu *lpool; + struct llist_head __percpu *lpool; /* local pool */ }; struct dept_ecxt_held { diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c index a8e693fd590f..8ca46ad98e10 100644 --- a/kernel/dependency/dept.c +++ b/kernel/dependency/dept.c @@ -74,6 +74,9 @@ #include #include #include +#include +#include +#include #include "dept_internal.h" static int dept_stop; @@ -122,9 +125,11 @@ static int dept_per_cpu_ready; WARN(1, "DEPT_STOP: " s); \ }) -#define DEPT_INFO_ONCE(s...) pr_warn_once("DEPT_INFO_ONCE: " s) +#define DEPT_INFO_ONCE(s...) pr_warn_once("DEPT_INFO_ONCE: " s) +#define DEPT_INFO(s...)
pr_warn("DEPT_INFO: " s) static arch_spinlock_t dept_spin = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED; +static arch_spinlock_t dept_pool_spin = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED; /* * DEPT internal engine should be careful in using outside functions @@ -263,6 +268,7 @@ static bool valid_key(struct dept_key *k) #define OBJECT(id, nr) \ static struct dept_##id spool_##id[nr]; \ +static struct dept_##id rpool_##id[nr]; \ static DEFINE_PER_CPU(struct llist_head, lpool_##id); #include "dept_object.h" #undef OBJECT @@ -271,14 +277,70 @@ struct dept_pool dept_pool[OBJECT_NR] = { #define OBJECT(id, nr) { \ .name = #id, \ .obj_sz = sizeof(struct dept_##id), \ - .obj_nr = ATOMIC_INIT(nr), \ + .obj_nr = nr, \ + .tot_nr = nr, \ + .acc_sz = ATOMIC_INIT(sizeof(spool_##id) + sizeof(rpool_##id)), \ .node_off = offsetof(struct dept_##id, pool_node), \ .spool = spool_##id, \ + .rpool = rpool_##id, \ .lpool = &lpool_##id, }, #include "dept_object.h" #undef OBJECT }; +static void dept_wq_work_fn(struct work_struct *work) +{ + int i; + + for (i = 0; i < OBJECT_NR; i++) { + struct dept_pool *p = dept_pool + i; + int sz = p->tot_nr * p->obj_sz; + void *rpool; + bool need; + + arch_spin_lock(&dept_pool_spin); + need = !p->rpool; + arch_spin_unlock(&dept_pool_spin); + + if (!need) + continue; + + rpool = vmalloc(sz); + + if (!rpool) { + DEPT_STOP("Failed to extend internal resources.\n"); + break; + } + + arch_spin_lock(&dept_pool_spin); + if (!p->rpool) { + p->rpool = rpool; + rpool = NULL; + atomic_add(sz, &p->acc_sz); + } + arch_spin_unlock(&dept_pool_spin); + + if (rpool) + vfree(rpool); + else + DEPT_INFO("Dept object(%s) just got refilled successfully.\n", p->name); + } +} + +static DECLARE_WORK(dept_wq_work, dept_wq_work_fn); + +static void dept_irq_work_fn(struct irq_work *w) +{ + schedule_work(&dept_wq_work); +} + +static DEFINE_IRQ_WORK(dept_irq_work, dept_irq_work_fn); + +static void request_rpool_refill(void) +{ + irq_work_queue(&dept_irq_work); +} + /* * Can use 
llist no matter whether CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG is * enabled or not because NMI and other contexts in the same CPU never @@ -314,19 +376,31 @@ static void *from_pool(enum object_t t) /* * Try static pool. */ - if (atomic_read(&p->obj_nr) > 0) { - int idx = atomic_dec_return(&p->obj_nr); + arch_spin_lock(&dept_pool_spin); + + if (!p->obj_nr) { + p->spool = p->rpool; + p->obj_nr = p->rpool ? p->tot_nr : 0; + p->rpool = NULL; + request_rpool_refill(); + } + + if (p->obj_nr) { + void *ret; + + p->obj_nr--; + ret = p->spool + (p->obj_nr * p->obj_sz); + arch_spin_unlock(&dept_pool_spin); - if (idx >= 0) - return p->spool + (idx * p->obj_sz); + return ret; } + arch_spin_unlock(&dept_pool_spin); - DEPT_INFO_ONCE("---------------------------------------------\n" - " Some of Dept internal resources are run out.\n" - " Dept might still work if the resources get freed.\n" - " However, the chances are Dept will suffer from\n" - " the lack from now. Needs to extend the internal\n" - " resource pools. Ask max.byungchul.park@gmail.com\n"); + DEPT_INFO("------------------------------------------\n" + " Dept object(%s) is run out.\n" + " Dept is trying to refill the object.\n" + " Nevertheless, if it fails, Dept will stop.\n", + p->name); return NULL; } @@ -2957,8 +3031,8 @@ void __init dept_init(void) pr_info("... DEPT_MAX_ECXT_HELD : %d\n", DEPT_MAX_ECXT_HELD); pr_info("... DEPT_MAX_SUBCLASSES : %d\n", DEPT_MAX_SUBCLASSES); #define OBJECT(id, nr) \ - pr_info("... memory used by %s: %zu KB\n", \ - #id, B2KB(sizeof(struct dept_##id) * nr)); + pr_info("... memory initially used by %s: %zu KB\n", \ + #id, B2KB(sizeof(spool_##id) + sizeof(rpool_##id))); #include "dept_object.h" #undef OBJECT #define HASH(id, bits) \ @@ -2966,6 +3040,6 @@ void __init dept_init(void) #id, B2KB(sizeof(struct hlist_head) * (1 << (bits)))); #include "dept_hash.h" #undef HASH - pr_info("... total memory used by objects and hashs: %zu KB\n", B2KB(mem_total)); + pr_info("... 
total memory initially used by objects and hashs: %zu KB\n", B2KB(mem_total)); pr_info("... per task memory footprint: %zu bytes\n", sizeof(struct dept_task)); } diff --git a/kernel/dependency/dept_object.h b/kernel/dependency/dept_object.h index 0b7eb16fe9fb..4f936adfa8ee 100644 --- a/kernel/dependency/dept_object.h +++ b/kernel/dependency/dept_object.h @@ -6,8 +6,8 @@ * nr: # of the object that should be kept in the pool. */ -OBJECT(dep, 1024 * 8) -OBJECT(class, 1024 * 8) -OBJECT(stack, 1024 * 32) -OBJECT(ecxt, 1024 * 16) -OBJECT(wait, 1024 * 32) +OBJECT(dep, 1024 * 4 * 2) +OBJECT(class, 1024 * 4) +OBJECT(stack, 1024 * 4 * 8) +OBJECT(ecxt, 1024 * 4 * 2) +OBJECT(wait, 1024 * 4 * 4) diff --git a/kernel/dependency/dept_proc.c b/kernel/dependency/dept_proc.c index 7d61dfbc5865..f07a512b203f 100644 --- a/kernel/dependency/dept_proc.c +++ b/kernel/dependency/dept_proc.c @@ -73,12 +73,10 @@ static int dept_stats_show(struct seq_file *m, void *v) { int r; - seq_puts(m, "Availability in the static pools:\n\n"); + seq_puts(m, "Accumulated amount of memory used by pools:\n\n"); #define OBJECT(id, nr) \ - r = atomic_read(&dept_pool[OBJECT_##id].obj_nr); \ - if (r < 0) \ - r = 0; \ - seq_printf(m, "%s\t%d/%d(%d%%)\n", #id, r, nr, (r * 100) / (nr)); + r = atomic_read(&dept_pool[OBJECT_##id].acc_sz); \ + seq_printf(m, "%s\t%d KB\n", #id, r / 1024); #include "dept_object.h" #undef OBJECT From patchwork Wed May 8 09:47:07 2024 X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13658449
From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v14 10/28] dept: Record the latest one out of consecutive waits of the same class Date: Wed, 8 May 2024 18:47:07 +0900 Message-Id: <20240508094726.35754-11-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240508094726.35754-1-byungchul@sk.com>
References: <20240508094726.35754-1-byungchul@sk.com> The current code records every wait for later use, to track the relation between waits and events in each context. However, since waits of the same class are handled the same way, recording only one on behalf of the others is enough when they all share the same class. Although searching the whole history buffer for an existing entry of the class would be ideal, it would cost too much; alternatively, at least keep only the latest one when waits of the same class appear consecutively. Signed-off-by: Byungchul Park --- kernel/dependency/dept.c | 21 ++++++++++++++++++++- 1 file changed, 20 insertions(+), 1 deletion(-) diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c index 8ca46ad98e10..2c0f30646652 100644 --- a/kernel/dependency/dept.c +++ b/kernel/dependency/dept.c @@ -1497,9 +1497,28 @@ static struct dept_wait_hist *new_hist(void) return wh; } +static struct dept_wait_hist *last_hist(void) +{ + int pos_n = hist_pos_next(); + struct dept_wait_hist *wh_n = hist(pos_n); + + /* + * This is the first try.
+ */ + if (!pos_n && !wh_n->wait) + return NULL; + + return hist(pos_n + DEPT_MAX_WAIT_HIST - 1); +} + static void add_hist(struct dept_wait *w, unsigned int wg, unsigned int ctxt_id) { - struct dept_wait_hist *wh = new_hist(); + struct dept_wait_hist *wh; + + wh = last_hist(); + + if (!wh || wh->wait->class != w->class || wh->ctxt_id != ctxt_id) + wh = new_hist(); if (likely(wh->wait)) put_wait(wh->wait); From patchwork Wed May 8 09:47:08 2024 X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13658442 From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org,
akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v14 11/28] dept: Apply sdt_might_sleep_{start, end}() to wait_for_completion()/complete() Date: Wed, 8 May 2024 18:47:08 +0900 Message-Id: <20240508094726.35754-12-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240508094726.35754-1-byungchul@sk.com> References: <20240508094726.35754-1-byungchul@sk.com> Makes Dept able to track dependencies by wait_for_completion()/complete().
Signed-off-by: Byungchul Park --- include/linux/completion.h | 30 +++++++++++++++++++++++++----- 1 file changed, 25 insertions(+), 5 deletions(-) diff --git a/include/linux/completion.h b/include/linux/completion.h index fb2915676574..bd2c207481d6 100644 --- a/include/linux/completion.h +++ b/include/linux/completion.h @@ -10,6 +10,7 @@ */ #include +#include /* * struct completion - structure used to maintain state for a "completion" @@ -26,14 +27,33 @@ struct completion { unsigned int done; struct swait_queue_head wait; + struct dept_map dmap; }; +#define init_completion(x) \ +do { \ + sdt_map_init(&(x)->dmap); \ + __init_completion(x); \ +} while (0) + +/* + * XXX: No use cases for now. Fill the body when needed. + */ #define init_completion_map(x, m) init_completion(x) -static inline void complete_acquire(struct completion *x) {} -static inline void complete_release(struct completion *x) {} + +static inline void complete_acquire(struct completion *x) +{ + sdt_might_sleep_start(&x->dmap); +} + +static inline void complete_release(struct completion *x) +{ + sdt_might_sleep_end(); +} #define COMPLETION_INITIALIZER(work) \ - { 0, __SWAIT_QUEUE_HEAD_INITIALIZER((work).wait) } + { 0, __SWAIT_QUEUE_HEAD_INITIALIZER((work).wait), \ + .dmap = DEPT_MAP_INITIALIZER(work, NULL), } #define COMPLETION_INITIALIZER_ONSTACK_MAP(work, map) \ (*({ init_completion_map(&(work), &(map)); &(work); })) @@ -75,13 +95,13 @@ static inline void complete_release(struct completion *x) {} #endif /** - * init_completion - Initialize a dynamically allocated completion + * __init_completion - Initialize a dynamically allocated completion * @x: pointer to completion structure that is to be initialized * * This inline function will initialize a dynamically created completion * structure. 
*/ -static inline void init_completion(struct completion *x) +static inline void __init_completion(struct completion *x) { x->done = 0; init_swait_queue_head(&x->wait); From patchwork Wed May 8 09:47:09 2024 X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13658429 From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz,
Subject: [PATCH v14 12/28] dept: Apply sdt_might_sleep_{start,end}() to swait
Date: Wed, 8 May 2024 18:47:09 +0900
Message-Id: <20240508094726.35754-13-byungchul@sk.com>

Makes Dept able to track dependencies by swaits.
Signed-off-by: Byungchul Park
---
 include/linux/swait.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/linux/swait.h b/include/linux/swait.h
index d324419482a0..277ac74f61c3 100644
--- a/include/linux/swait.h
+++ b/include/linux/swait.h
@@ -6,6 +6,7 @@
 #include <linux/list.h>
 #include <linux/stddef.h>
 #include <linux/spinlock.h>
+#include <linux/dept_sdt.h>
 #include <linux/wait.h>

 /*
@@ -161,6 +162,7 @@ extern void finish_swait(struct swait_queue_head *q, struct swait_queue *wait);
 	struct swait_queue __wait;					\
 	long __ret = ret;						\
 									\
+	sdt_might_sleep_start(NULL);					\
 	INIT_LIST_HEAD(&__wait.task_list);				\
 	for (;;) {							\
 		long __int = prepare_to_swait_event(&wq, &__wait, state);\
@@ -176,6 +178,7 @@ extern void finish_swait(struct swait_queue_head *q, struct swait_queue *wait);
 		cmd;							\
 	}								\
 	finish_swait(&wq, &__wait);					\
+	sdt_might_sleep_end();						\
 __out:	__ret;								\
 })

From patchwork Wed May 8 09:47:10 2024
From: Byungchul Park
Subject: [PATCH v14 13/28] dept: Apply sdt_might_sleep_{start,end}() to waitqueue wait
Date: Wed, 8 May 2024 18:47:10 +0900
Message-Id: <20240508094726.35754-14-byungchul@sk.com>

Makes Dept able to track dependencies by waitqueue waits.

Signed-off-by: Byungchul Park
---
 include/linux/wait.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index 8aa3372f21a0..3177550a1b42 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -7,6 +7,7 @@
 #include <linux/list.h>
 #include <linux/stddef.h>
 #include <linux/spinlock.h>
+#include <linux/dept_sdt.h>

 #include <asm/current.h>
@@ -302,6 +303,7 @@ extern void init_wait_entry(struct wait_queue_entry *wq_entry, int flags);
 	struct wait_queue_entry __wq_entry;				\
 	long __ret = ret;	/* explicit shadow */			\
 									\
+	sdt_might_sleep_start(NULL);					\
 	init_wait_entry(&__wq_entry, exclusive ? WQ_FLAG_EXCLUSIVE : 0);\
 	for (;;) {							\
 		long __int = prepare_to_wait_event(&wq_head, &__wq_entry, state);\
@@ -317,6 +319,7 @@ extern void init_wait_entry(struct wait_queue_entry *wq_entry, int flags);
 		cmd;							\
 	}								\
 	finish_wait(&wq_head, &__wq_entry);				\
+	sdt_might_sleep_end();						\
 __out:	__ret;								\
 })

From patchwork Wed May 8 09:47:11 2024
From: Byungchul Park
Subject: [PATCH v14 14/28] dept: Apply sdt_might_sleep_{start,end}() to hashed-waitqueue wait
Date: Wed, 8 May 2024 18:47:11 +0900
Message-Id: <20240508094726.35754-15-byungchul@sk.com>

Makes Dept able to track dependencies by hashed-waitqueue waits.

Signed-off-by: Byungchul Park
---
 include/linux/wait_bit.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/linux/wait_bit.h b/include/linux/wait_bit.h
index 7725b7579b78..fe89282c3e96 100644
--- a/include/linux/wait_bit.h
+++ b/include/linux/wait_bit.h
@@ -6,6 +6,7 @@
  * Linux wait-bit related types and methods:
  */
 #include <linux/wait.h>
+#include <linux/dept_sdt.h>

 struct wait_bit_key {
 	void *flags;
@@ -246,6 +247,7 @@ extern wait_queue_head_t *__var_waitqueue(void *p);
 	struct wait_bit_queue_entry __wbq_entry;			\
 	long __ret = ret;	/* explicit shadow */			\
 									\
+	sdt_might_sleep_start(NULL);					\
 	init_wait_var_entry(&__wbq_entry, var,				\
 			    exclusive ? WQ_FLAG_EXCLUSIVE : 0);		\
 	for (;;) {							\
@@ -263,6 +265,7 @@ extern wait_queue_head_t *__var_waitqueue(void *p);
 		cmd;							\
 	}								\
 	finish_wait(__wq_head, &__wbq_entry.wq_entry);			\
+	sdt_might_sleep_end();						\
__out:	__ret;								\
 })

From patchwork Wed May 8 09:47:12 2024
From: Byungchul Park
Subject: [PATCH v14 15/28] dept: Apply sdt_might_sleep_{start,end}() to dma fence wait
Date: Wed, 8 May 2024 18:47:12 +0900
Message-Id: <20240508094726.35754-16-byungchul@sk.com>

Makes Dept able to track dma fence waits.

Signed-off-by: Byungchul Park
---
 drivers/dma-buf/dma-fence.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 0393a9bba3a8..d6f9b339b143 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -16,6 +16,7 @@
 #include <linux/dma-fence.h>
 #include <linux/sched/signal.h>
 #include <linux/seq_file.h>
+#include <linux/dept_sdt.h>

 #define CREATE_TRACE_POINTS
 #include <trace/events/dma_fence.h>
@@ -783,6 +784,7 @@ dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout)
 	cb.task = current;
 	list_add(&cb.base.node, &fence->cb_list);

+	sdt_might_sleep_start(NULL);
 	while (!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) {
 		if (intr)
 			__set_current_state(TASK_INTERRUPTIBLE);
@@ -796,6 +798,7 @@ dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout)
 		if (ret > 0 && intr && signal_pending(current))
 			ret = -ERESTARTSYS;
 	}
+	sdt_might_sleep_end();

 	if (!list_empty(&cb.base.node))
 		list_del(&cb.base.node);
@@ -885,6 +888,7 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
 		}
 	}

+	sdt_might_sleep_start(NULL);
 	while (ret > 0) {
 		if (intr)
 			set_current_state(TASK_INTERRUPTIBLE);
@@ -899,6 +903,7 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
 		if (ret > 0 && intr && signal_pending(current))
 			ret = -ERESTARTSYS;
 	}
+	sdt_might_sleep_end();

 	__set_current_state(TASK_RUNNING);

From patchwork Wed May 8 09:47:13 2024
From: Byungchul Park
Subject: [PATCH v14 16/28] dept: Track timeout waits separately with a new Kconfig
Date: Wed, 8 May 2024 18:47:13 +0900
Message-Id: <20240508094726.35754-17-byungchul@sk.com>

Waits with valid timeouts don't actually cause deadlocks. However, Dept
has been reporting such cases as well, because the circular dependency
is still worth knowing about in cases where, for example, a timeout is
used to avoid a deadlock and is not meant to expire. That said, there
are even more cases where a timeout is used for its obvious purpose and
is meant to expire. Let Dept report these as information rather than
shouting DEADLOCK. Plus, introduce a CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT
Kconfig option to make this optional, so that reports involving waits
with timeouts can be turned on or off depending on the purpose.
Signed-off-by: Byungchul Park
---
 include/linux/dept.h     | 15 ++++++---
 include/linux/dept_ldt.h |  6 ++--
 include/linux/dept_sdt.h | 12 +++++---
 kernel/dependency/dept.c | 66 ++++++++++++++++++++++++++++++++++------
 lib/Kconfig.debug        | 10 ++++++
 5 files changed, 89 insertions(+), 20 deletions(-)

diff --git a/include/linux/dept.h b/include/linux/dept.h
index ca1a34be4127..0280e45cc2af 100644
--- a/include/linux/dept.h
+++ b/include/linux/dept.h
@@ -270,6 +270,11 @@ struct dept_wait {
 			 * whether this wait is for commit in scheduler
 			 */
 			bool sched_sleep;
+
+			/*
+			 * whether a timeout is set
+			 */
+			bool timeout;
 		};
 	};
 };
@@ -453,6 +458,7 @@ struct dept_task {
 	bool stage_sched_map;
 	const char *stage_w_fn;
 	unsigned long stage_ip;
+	bool stage_timeout;

 	/*
 	 * the number of missing ecxts
@@ -490,6 +496,7 @@ struct dept_task {
 	.stage_sched_map = false,		\
 	.stage_w_fn = NULL,			\
 	.stage_ip = 0UL,			\
+	.stage_timeout = false,			\
 	.missing_ecxt = 0,			\
 	.hardirqs_enabled = false,		\
 	.softirqs_enabled = false,		\
@@ -507,8 +514,8 @@ extern void dept_map_init(struct dept_map *m, struct dept_key *k, int sub_u, const char *n);
 extern void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u, const char *n);
 extern void dept_map_copy(struct dept_map *to, struct dept_map *from);

-extern void dept_wait(struct dept_map *m, unsigned long w_f, unsigned long ip, const char *w_fn, int sub_l);
-extern void dept_stage_wait(struct dept_map *m, struct dept_key *k, unsigned long ip, const char *w_fn);
+extern void dept_wait(struct dept_map *m, unsigned long w_f, unsigned long ip, const char *w_fn, int sub_l, long timeout);
+extern void dept_stage_wait(struct dept_map *m, struct dept_key *k, unsigned long ip, const char *w_fn, long timeout);
 extern void dept_request_event_wait_commit(void);
 extern void dept_clean_stage(void);
 extern void dept_stage_event(struct task_struct *t, unsigned long ip);
@@ -558,8 +565,8 @@ struct dept_task { };
 #define dept_map_reinit(m, k, su, n)		do { (void)(n); (void)(k); } while (0)
 #define dept_map_copy(t, f)			do { } while (0)

-#define dept_wait(m, w_f, ip, w_fn, sl)		do { (void)(w_fn); } while (0)
-#define dept_stage_wait(m, k, ip, w_fn)		do { (void)(k); (void)(w_fn); } while (0)
+#define dept_wait(m, w_f, ip, w_fn, sl, t)	do { (void)(w_fn); } while (0)
+#define dept_stage_wait(m, k, ip, w_fn, t)	do { (void)(k); (void)(w_fn); } while (0)
 #define dept_request_event_wait_commit()	do { } while (0)
 #define dept_clean_stage()			do { } while (0)
 #define dept_stage_event(t, ip)			do { } while (0)

diff --git a/include/linux/dept_ldt.h b/include/linux/dept_ldt.h
index 062613e89fc3..8adf298dfcb8 100644
--- a/include/linux/dept_ldt.h
+++ b/include/linux/dept_ldt.h
@@ -27,7 +27,7 @@
 		else if (t)						\
 			dept_ecxt_enter(m, LDT_EVT_L, i, "trylock", "unlock", sl);\
 		else {							\
-			dept_wait(m, LDT_EVT_L, i, "lock", sl);		\
+			dept_wait(m, LDT_EVT_L, i, "lock", sl, false);	\
 			dept_ecxt_enter(m, LDT_EVT_L, i, "lock", "unlock", sl);\
 		}							\
 	} while (0)
@@ -39,7 +39,7 @@
 		else if (t)						\
 			dept_ecxt_enter(m, LDT_EVT_R, i, "read_trylock", "read_unlock", sl);\
 		else {							\
-			dept_wait(m, q ? LDT_EVT_RW : LDT_EVT_W, i, "read_lock", sl);\
+			dept_wait(m, q ? LDT_EVT_RW : LDT_EVT_W, i, "read_lock", sl, false);\
 			dept_ecxt_enter(m, LDT_EVT_R, i, "read_lock", "read_unlock", sl);\
 		}							\
 	} while (0)
@@ -51,7 +51,7 @@
 		else if (t)						\
 			dept_ecxt_enter(m, LDT_EVT_W, i, "write_trylock", "write_unlock", sl);\
 		else {							\
-			dept_wait(m, LDT_EVT_RW, i, "write_lock", sl);	\
+			dept_wait(m, LDT_EVT_RW, i, "write_lock", sl, false);\
 			dept_ecxt_enter(m, LDT_EVT_W, i, "write_lock", "write_unlock", sl);\
 		}							\
 	} while (0)

diff --git a/include/linux/dept_sdt.h b/include/linux/dept_sdt.h
index 12a793b90c7e..21fce525f031 100644
--- a/include/linux/dept_sdt.h
+++ b/include/linux/dept_sdt.h
@@ -22,11 +22,12 @@
 #define sdt_map_init_key(m, k)	dept_map_init(m, k, 0, #m)

-#define sdt_wait(m)							\
+#define sdt_wait_timeout(m, t)						\
 	do {								\
 		dept_request_event(m);					\
-		dept_wait(m, 1UL, _THIS_IP_, __func__, 0);		\
+		dept_wait(m, 1UL, _THIS_IP_, __func__, 0, t);		\
 	} while (0)
+#define sdt_wait(m)		sdt_wait_timeout(m, -1L)

 /*
  * sdt_might_sleep() and its family will be committed in __schedule()
@@ -37,12 +38,13 @@
 /*
  * Use the code location as the class key if an explicit map is not used.
  */
-#define sdt_might_sleep_start(m)					\
+#define sdt_might_sleep_start_timeout(m, t)				\
 	do {								\
 		struct dept_map *__m = m;				\
 		static struct dept_key __key;				\
-		dept_stage_wait(__m, __m ? NULL : &__key, _THIS_IP_, __func__);\
+		dept_stage_wait(__m, __m ? NULL : &__key, _THIS_IP_, __func__, t);\
 	} while (0)
+#define sdt_might_sleep_start(m)	sdt_might_sleep_start_timeout(m, -1L)

 #define sdt_might_sleep_end()	dept_clean_stage()

@@ -52,7 +54,9 @@
 #else /* !CONFIG_DEPT */
 #define sdt_map_init(m)				do { } while (0)
 #define sdt_map_init_key(m, k)			do { (void)(k); } while (0)
+#define sdt_wait_timeout(m, t)			do { } while (0)
 #define sdt_wait(m)				do { } while (0)
+#define sdt_might_sleep_start_timeout(m, t)	do { } while (0)
 #define sdt_might_sleep_start(m)		do { } while (0)
 #define sdt_might_sleep_end()			do { } while (0)
 #define sdt_ecxt_enter(m)			do { } while (0)

diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c
index 2c0f30646652..5c996f11abd5 100644
--- a/kernel/dependency/dept.c
+++ b/kernel/dependency/dept.c
@@ -739,6 +739,8 @@ static void print_diagram(struct dept_dep *d)
 	if (!irqf) {
 		print_spc(spc, "[S] %s(%s:%d)\n", c_fn, fc_n, fc->sub_id);
 		print_spc(spc, "[W] %s(%s:%d)\n", w_fn, tc_n, tc->sub_id);
+		if (w->timeout)
+			print_spc(spc, "--------------- >8 timeout ---------------\n");
 		print_spc(spc, "[E] %s(%s:%d)\n", e_fn, fc_n, fc->sub_id);
 	}
 }
@@ -792,6 +794,24 @@ static void print_dep(struct dept_dep *d)

 static void save_current_stack(int skip);

+static bool is_timeout_wait_circle(struct dept_class *c)
+{
+	struct dept_class *fc = c->bfs_parent;
+	struct dept_class *tc = c;
+
+	do {
+		struct dept_dep *d = lookup_dep(fc, tc);
+
+		if (d->wait->timeout)
+			return true;
+
+		tc = fc;
+		fc = fc->bfs_parent;
+	} while (tc != c);
+
+	return false;
+}
+
 /*
  * Print all classes in a circle.
*/ @@ -814,10 +834,14 @@ static void print_circle(struct dept_class *c) pr_warn("summary\n"); pr_warn("---------------------------------------------------\n"); - if (fc == tc) + if (is_timeout_wait_circle(c)) { + pr_warn("NOT A DEADLOCK BUT A CIRCULAR DEPENDENCY\n"); + pr_warn("CHECK IF THE TIMEOUT IS INTENDED\n\n"); + } else if (fc == tc) { pr_warn("*** AA DEADLOCK ***\n\n"); - else + } else { pr_warn("*** DEADLOCK ***\n\n"); + } i = 0; do { @@ -1582,7 +1606,8 @@ static void add_dep(struct dept_ecxt *e, struct dept_wait *w) static atomic_t wgen = ATOMIC_INIT(1); static void add_wait(struct dept_class *c, unsigned long ip, - const char *w_fn, int sub_l, bool sched_sleep) + const char *w_fn, int sub_l, bool sched_sleep, + bool timeout) { struct dept_task *dt = dept_task(); struct dept_wait *w; @@ -1602,6 +1627,7 @@ static void add_wait(struct dept_class *c, unsigned long ip, w->wait_fn = w_fn; w->wait_stack = get_current_stack(); w->sched_sleep = sched_sleep; + w->timeout = timeout; cxt = cur_cxt(); if (cxt == DEPT_CXT_HIRQ || cxt == DEPT_CXT_SIRQ) @@ -2313,7 +2339,7 @@ static struct dept_class *check_new_class(struct dept_key *local, */ static void __dept_wait(struct dept_map *m, unsigned long w_f, unsigned long ip, const char *w_fn, int sub_l, - bool sched_sleep, bool sched_map) + bool sched_sleep, bool sched_map, bool timeout) { int e; @@ -2336,7 +2362,7 @@ static void __dept_wait(struct dept_map *m, unsigned long w_f, if (!c) continue; - add_wait(c, ip, w_fn, sub_l, sched_sleep); + add_wait(c, ip, w_fn, sub_l, sched_sleep, timeout); } } @@ -2373,14 +2399,23 @@ static void __dept_event(struct dept_map *m, unsigned long e_f, } void dept_wait(struct dept_map *m, unsigned long w_f, - unsigned long ip, const char *w_fn, int sub_l) + unsigned long ip, const char *w_fn, int sub_l, + long timeoutval) { struct dept_task *dt = dept_task(); unsigned long flags; + bool timeout; if (unlikely(!dept_working())) return; + timeout = timeoutval > 0 && timeoutval < 
MAX_SCHEDULE_TIMEOUT; + +#if !defined(CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT) + if (timeout) + return; +#endif + if (dt->recursive) return; @@ -2389,21 +2424,30 @@ void dept_wait(struct dept_map *m, unsigned long w_f, flags = dept_enter(); - __dept_wait(m, w_f, ip, w_fn, sub_l, false, false); + __dept_wait(m, w_f, ip, w_fn, sub_l, false, false, timeout); dept_exit(flags); } EXPORT_SYMBOL_GPL(dept_wait); void dept_stage_wait(struct dept_map *m, struct dept_key *k, - unsigned long ip, const char *w_fn) + unsigned long ip, const char *w_fn, + long timeoutval) { struct dept_task *dt = dept_task(); unsigned long flags; + bool timeout; if (unlikely(!dept_working())) return; + timeout = timeoutval > 0 && timeoutval < MAX_SCHEDULE_TIMEOUT; + +#if !defined(CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT) + if (timeout) + return; +#endif + if (m && m->nocheck) return; @@ -2449,6 +2493,7 @@ void dept_stage_wait(struct dept_map *m, struct dept_key *k, dt->stage_w_fn = w_fn; dt->stage_ip = ip; + dt->stage_timeout = timeout; exit: dept_exit_recursive(flags); } @@ -2460,6 +2505,7 @@ static void __dept_clean_stage(struct dept_task *dt) dt->stage_sched_map = false; dt->stage_w_fn = NULL; dt->stage_ip = 0UL; + dt->stage_timeout = false; } void dept_clean_stage(void) @@ -2490,6 +2536,7 @@ void dept_request_event_wait_commit(void) unsigned long ip; const char *w_fn; bool sched_map; + bool timeout; if (unlikely(!dept_working())) return; @@ -2512,6 +2559,7 @@ void dept_request_event_wait_commit(void) w_fn = dt->stage_w_fn; ip = dt->stage_ip; sched_map = dt->stage_sched_map; + timeout = dt->stage_timeout; /* * Avoid zero wgen. 
@@ -2519,7 +2567,7 @@ void dept_request_event_wait_commit(void) wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen); WRITE_ONCE(dt->stage_m.wgen, wg); - __dept_wait(&dt->stage_m, 1UL, ip, w_fn, 0, true, sched_map); + __dept_wait(&dt->stage_m, 1UL, ip, w_fn, 0, true, sched_map, timeout); exit: dept_exit(flags); } diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index d366acacffec..b0351667a771 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -1319,6 +1319,16 @@ config DEPT noting, to mitigate the impact by the false positives, multi reporting has been supported. +config DEPT_AGGRESSIVE_TIMEOUT_WAIT + bool "Aggressively track even timeout waits" + depends on DEPT + default n + help + Timeout wait doesn't contribute to a deadlock. However, + informing a circular dependency might be helpful for cases + that timeout is used to avoid a deadlock. Say N if you'd like + to avoid verbose reports. + config LOCK_DEBUGGING_SUPPORT bool depends on TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT From patchwork Wed May 8 09:47:14 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13658440 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C102EC25B5F for ; Wed, 8 May 2024 10:03:19 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 9E06211353A; Wed, 8 May 2024 10:03:17 +0000 (UTC) Received: from invmail4.hynix.com (exvmail4.skhynix.com [166.125.252.92]) by gabe.freedesktop.org (Postfix) with ESMTP id C3A36113539 for ; Wed, 8 May 2024 10:02:55 +0000 (UTC) X-AuditID: 
From patchwork Wed May 8 09:47:14 2024
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13658440
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v14 17/28] dept: Apply timeout consideration to wait_for_completion()/complete()
Date: Wed, 8 May 2024 18:47:14 +0900
Message-Id: <20240508094726.35754-18-byungchul@sk.com>
In-Reply-To: <20240508094726.35754-1-byungchul@sk.com>

Now that CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT was introduced, apply the
consideration to wait_for_completion()/complete().

Signed-off-by: Byungchul Park
---
 include/linux/completion.h | 4 ++--
 kernel/sched/completion.c  | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/completion.h b/include/linux/completion.h
index bd2c207481d6..3200b741de28 100644
--- a/include/linux/completion.h
+++ b/include/linux/completion.h
@@ -41,9 +41,9 @@ do { \
  */
 #define init_completion_map(x, m) init_completion(x)

-static inline void complete_acquire(struct completion *x)
+static inline void complete_acquire(struct completion *x, long timeout)
 {
-	sdt_might_sleep_start(&x->dmap);
+	sdt_might_sleep_start_timeout(&x->dmap, timeout);
 }

 static inline void complete_release(struct completion *x)
diff --git a/kernel/sched/completion.c b/kernel/sched/completion.c
index 3561ab533dd4..499b1fee9dc1 100644
--- a/kernel/sched/completion.c
+++ b/kernel/sched/completion.c
@@ -110,7 +110,7 @@ __wait_for_common(struct completion *x,
 {
	might_sleep();

-	complete_acquire(x);
+	complete_acquire(x, timeout);

	raw_spin_lock_irq(&x->wait.lock);
	timeout = do_wait_for_common(x, action, timeout, state);
From patchwork Wed May 8 09:47:15 2024
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13658451
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v14 18/28] dept: Apply timeout consideration to swait
Date: Wed, 8 May 2024 18:47:15 +0900
Message-Id: <20240508094726.35754-19-byungchul@sk.com>
In-Reply-To: <20240508094726.35754-1-byungchul@sk.com>

Now that CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT was introduced, apply the
consideration to swait, assuming the input 'ret' in the ___swait_event()
macro is used as a timeout value.

Signed-off-by: Byungchul Park
---
 include/linux/swait.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/swait.h b/include/linux/swait.h
index 277ac74f61c3..233acdf55e9b 100644
--- a/include/linux/swait.h
+++ b/include/linux/swait.h
@@ -162,7 +162,7 @@ extern void finish_swait(struct swait_queue_head *q, struct swait_queue *wait);
	struct swait_queue __wait;					\
	long __ret = ret;						\
									\
-	sdt_might_sleep_start(NULL);					\
+	sdt_might_sleep_start_timeout(NULL, __ret);			\
	INIT_LIST_HEAD(&__wait.task_list);				\
	for (;;) {							\
		long __int = prepare_to_swait_event(&wq, &__wait, state);\
From patchwork Wed May 8 09:47:16 2024
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13658443
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v14 19/28] dept: Apply timeout consideration to waitqueue wait
Date: Wed, 8 May 2024 18:47:16 +0900
Message-Id: <20240508094726.35754-20-byungchul@sk.com>
In-Reply-To: <20240508094726.35754-1-byungchul@sk.com>

Now that CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT was introduced, apply the
consideration to waitqueue wait, assuming the input 'ret' in the
___wait_event() macro is used as a timeout value.

Signed-off-by: Byungchul Park
---
 include/linux/wait.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index 3177550a1b42..0bc7720273c8 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -303,7 +303,7 @@ extern void init_wait_entry(struct wait_queue_entry *wq_entry, int flags);
	struct wait_queue_entry __wq_entry;				\
	long __ret = ret;	/* explicit shadow */			\
									\
-	sdt_might_sleep_start(NULL);					\
+	sdt_might_sleep_start_timeout(NULL, __ret);			\
	init_wait_entry(&__wq_entry, exclusive ? WQ_FLAG_EXCLUSIVE : 0); \
	for (;;) {							\
		long __int = prepare_to_wait_event(&wq_head, &__wq_entry, state);\
From patchwork Wed May 8 09:47:17 2024
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13658445
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v14 20/28] dept: Apply timeout consideration to hashed-waitqueue wait
Date: Wed, 8 May 2024 18:47:17 +0900
Message-Id: <20240508094726.35754-21-byungchul@sk.com>
In-Reply-To: <20240508094726.35754-1-byungchul@sk.com>

Now that CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT was introduced, apply the
consideration to hashed-waitqueue wait, assuming the input 'ret' in the
___wait_var_event() macro is used as a timeout value.
Signed-off-by: Byungchul Park
---
 include/linux/wait_bit.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/wait_bit.h b/include/linux/wait_bit.h
index fe89282c3e96..3ef450d9a7c5 100644
--- a/include/linux/wait_bit.h
+++ b/include/linux/wait_bit.h
@@ -247,7 +247,7 @@ extern wait_queue_head_t *__var_waitqueue(void *p);
 	struct wait_bit_queue_entry __wbq_entry;			\
 	long __ret = ret; /* explicit shadow */				\
 									\
-	sdt_might_sleep_start(NULL);					\
+	sdt_might_sleep_start_timeout(NULL, __ret);			\
 	init_wait_var_entry(&__wbq_entry, var,				\
 			    exclusive ? WQ_FLAG_EXCLUSIVE : 0);		\
 	for (;;) {							\

From patchwork Wed May 8 09:47:18 2024
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v14 21/28] dept: Apply timeout consideration to dma fence wait
Date: Wed, 8 May 2024 18:47:18 +0900
Message-Id: <20240508094726.35754-22-byungchul@sk.com>
In-Reply-To: <20240508094726.35754-1-byungchul@sk.com>
References: <20240508094726.35754-1-byungchul@sk.com>

Now that CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT has been introduced, apply
the timeout consideration to dma fence waits.
Signed-off-by: Byungchul Park
---
 drivers/dma-buf/dma-fence.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index d6f9b339b143..ccd9beb140d1 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -784,7 +784,7 @@ dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout)
 	cb.task = current;
 	list_add(&cb.base.node, &fence->cb_list);
 
-	sdt_might_sleep_start(NULL);
+	sdt_might_sleep_start_timeout(NULL, timeout);
 	while (!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) {
 		if (intr)
 			__set_current_state(TASK_INTERRUPTIBLE);
@@ -888,7 +888,7 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
 		}
 	}
 
-	sdt_might_sleep_start(NULL);
+	sdt_might_sleep_start_timeout(NULL, timeout);
 	while (ret > 0) {
 		if (intr)
 			set_current_state(TASK_INTERRUPTIBLE);

From patchwork Wed May 8 09:47:19 2024
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v14 22/28] dept: Make Dept able to work with an external wgen
Date: Wed, 8 May 2024 18:47:19 +0900
Message-Id: <20240508094726.35754-23-byungchul@sk.com>
In-Reply-To: <20240508094726.35754-1-byungchul@sk.com>
References: <20240508094726.35754-1-byungchul@sk.com>
There are cases where the total number of maps for a wait/event class is
very large; struct page, with PG_locked and PG_writeback, is one such
case. If each struct page kept its own map all the way, the additional
memory would be 'the number of pages * sizeof(struct dept_map)', which
might be too big to accept. In such cases it is better to keep only the
minimum data, namely the timestamp, called 'wgen', that Dept makes use
of. So make Dept able to work with an external wgen when needed.

Signed-off-by: Byungchul Park
---
 include/linux/dept.h     | 18 ++++++++++++++----
 include/linux/dept_sdt.h |  4 ++--
 kernel/dependency/dept.c | 30 +++++++++++++++++++++---------
 3 files changed, 37 insertions(+), 15 deletions(-)

diff --git a/include/linux/dept.h b/include/linux/dept.h
index 0280e45cc2af..dea53ad5b356 100644
--- a/include/linux/dept.h
+++ b/include/linux/dept.h
@@ -482,6 +482,13 @@ struct dept_task {
 	bool in_sched;
 };
 
+/*
+ * for subsystems that require compact use of memory e.g. struct page
+ */
+struct dept_ext_wgen {
+	unsigned int wgen;
+};
+
 #define DEPT_TASK_INITIALIZER(t)				\
 {								\
 	.wait_hist = { { .wait = NULL, } },			\
@@ -512,6 +519,7 @@ extern void dept_task_exit(struct task_struct *t);
 extern void dept_free_range(void *start, unsigned int sz);
 extern void dept_map_init(struct dept_map *m, struct dept_key *k, int sub_u, const char *n);
 extern void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u, const char *n);
+extern void dept_ext_wgen_init(struct dept_ext_wgen *ewg);
 extern void dept_map_copy(struct dept_map *to, struct dept_map *from);
 extern void dept_wait(struct dept_map *m, unsigned long w_f, unsigned long ip, const char *w_fn, int sub_l, long timeout);
@@ -521,8 +529,8 @@ extern void dept_clean_stage(void);
 extern void dept_stage_event(struct task_struct *t, unsigned long ip);
 extern void dept_ecxt_enter(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *c_fn, const char *e_fn, int sub_l);
 extern bool dept_ecxt_holding(struct dept_map *m, unsigned long e_f);
-extern void dept_request_event(struct dept_map *m);
-extern void dept_event(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *e_fn);
+extern void dept_request_event(struct dept_map *m, struct dept_ext_wgen *ewg);
+extern void dept_event(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *e_fn, struct dept_ext_wgen *ewg);
 extern void dept_ecxt_exit(struct dept_map *m, unsigned long e_f, unsigned long ip);
 extern void dept_sched_enter(void);
 extern void dept_sched_exit(void);
@@ -551,6 +559,7 @@ extern void dept_hardirqs_off(void);
 struct dept_key  { };
 struct dept_map  { };
 struct dept_task { };
+struct dept_ext_wgen { };
 
 #define DEPT_MAP_INITIALIZER(n, k) { }
 #define DEPT_TASK_INITIALIZER(t) { }
@@ -563,6 +572,7 @@ struct dept_task { };
 #define dept_free_range(s, sz)		do { } while (0)
 #define dept_map_init(m, k, su, n)	do { (void)(n); (void)(k); } while (0)
 #define dept_map_reinit(m, k, su, n)	do { (void)(n); (void)(k); } while (0)
+#define dept_ext_wgen_init(wg)		do { } while (0)
 #define dept_map_copy(t, f)		do { } while (0)
 
 #define dept_wait(m, w_f, ip, w_fn, sl, t)	do { (void)(w_fn); } while (0)
@@ -572,8 +582,8 @@ struct dept_task { };
 #define dept_stage_event(t, ip)		do { } while (0)
 #define dept_ecxt_enter(m, e_f, ip, c_fn, e_fn, sl) do { (void)(c_fn); (void)(e_fn); } while (0)
 #define dept_ecxt_holding(m, e_f)	false
-#define dept_request_event(m)		do { } while (0)
-#define dept_event(m, e_f, ip, e_fn)	do { (void)(e_fn); } while (0)
+#define dept_request_event(m, wg)	do { } while (0)
+#define dept_event(m, e_f, ip, e_fn, wg) do { (void)(e_fn); } while (0)
 #define dept_ecxt_exit(m, e_f, ip)	do { } while (0)
 #define dept_sched_enter()		do { } while (0)
 #define dept_sched_exit()		do { } while (0)
diff --git a/include/linux/dept_sdt.h b/include/linux/dept_sdt.h
index 21fce525f031..8cdac7982036 100644
--- a/include/linux/dept_sdt.h
+++ b/include/linux/dept_sdt.h
@@ -24,7 +24,7 @@
 #define sdt_wait_timeout(m, t)						\
 	do {								\
-		dept_request_event(m);					\
+		dept_request_event(m, NULL);				\
 		dept_wait(m, 1UL, _THIS_IP_, __func__, 0, t);		\
 	} while (0)
 #define sdt_wait(m)		sdt_wait_timeout(m, -1L)
@@ -49,7 +49,7 @@
 #define sdt_might_sleep_end()	dept_clean_stage()
 
 #define sdt_ecxt_enter(m)	dept_ecxt_enter(m, 1UL, _THIS_IP_, "start", "event", 0)
-#define sdt_event(m)		dept_event(m, 1UL, _THIS_IP_, __func__)
+#define sdt_event(m)		dept_event(m, 1UL, _THIS_IP_, __func__, NULL)
 #define sdt_ecxt_exit(m)	dept_ecxt_exit(m, 1UL, _THIS_IP_)
 #else /* !CONFIG_DEPT */
 #define sdt_map_init(m)		do { } while (0)
diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c
index 5c996f11abd5..fb33c3758c25 100644
--- a/kernel/dependency/dept.c
+++ b/kernel/dependency/dept.c
@@ -2186,6 +2186,11 @@ void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u,
 }
 EXPORT_SYMBOL_GPL(dept_map_reinit);
 
+void dept_ext_wgen_init(struct dept_ext_wgen *ewg)
+{
+	ewg->wgen = 0U;
+}
+
 void dept_map_copy(struct dept_map *to, struct dept_map *from)
 {
 	if (unlikely(!dept_working())) {
@@ -2371,7 +2376,7 @@ static void __dept_wait(struct dept_map *m, unsigned long w_f,
  */
 static void __dept_event(struct dept_map *m, unsigned long e_f,
 			 unsigned long ip, const char *e_fn,
-			 bool sched_map)
+			 bool sched_map, unsigned int wg)
 {
 	struct dept_class *c;
 	struct dept_key *k;
@@ -2393,7 +2398,7 @@ static void __dept_event(struct dept_map *m, unsigned long e_f,
 	c = check_new_class(&m->map_key, k, sub_id(m, e), m->name, sched_map);
 
 	if (c && add_ecxt(m, c, 0UL, NULL, e_fn, 0)) {
-		do_event(m, c, READ_ONCE(m->wgen), ip);
+		do_event(m, c, wg, ip);
 		pop_ecxt(m, c);
 	}
@@ -2606,7 +2611,7 @@ void dept_stage_event(struct task_struct *requestor, unsigned long ip)
 	if (!m.keys)
 		goto exit;
 
-	__dept_event(&m, 1UL, ip, "try_to_wake_up", sched_map);
+	__dept_event(&m, 1UL, ip, "try_to_wake_up", sched_map, m.wgen);
 exit:
 	dept_exit(flags);
 }
@@ -2785,10 +2790,11 @@ bool dept_ecxt_holding(struct dept_map *m, unsigned long e_f)
 }
 EXPORT_SYMBOL_GPL(dept_ecxt_holding);
 
-void dept_request_event(struct dept_map *m)
+void dept_request_event(struct dept_map *m, struct dept_ext_wgen *ewg)
 {
 	unsigned long flags;
 	unsigned int wg;
+	unsigned int *wg_p;
 
 	if (unlikely(!dept_working()))
 		return;
@@ -2801,21 +2807,25 @@ void dept_request_event(struct dept_map *m)
 	 */
 	flags = dept_enter_recursive();
 
+	wg_p = ewg ? &ewg->wgen : &m->wgen;
+
 	/*
	 * Avoid zero wgen.
 	 */
 	wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen);
-	WRITE_ONCE(m->wgen, wg);
+	WRITE_ONCE(*wg_p, wg);
 
 	dept_exit_recursive(flags);
 }
 EXPORT_SYMBOL_GPL(dept_request_event);
 
 void dept_event(struct dept_map *m, unsigned long e_f,
-		unsigned long ip, const char *e_fn)
+		unsigned long ip, const char *e_fn,
+		struct dept_ext_wgen *ewg)
 {
 	struct dept_task *dt = dept_task();
 	unsigned long flags;
+	unsigned int *wg_p;
 
 	if (unlikely(!dept_working()))
 		return;
@@ -2823,24 +2833,26 @@ void dept_event(struct dept_map *m, unsigned long e_f,
 	if (m->nocheck)
 		return;
 
+	wg_p = ewg ? &ewg->wgen : &m->wgen;
+
 	if (dt->recursive) {
 		/*
		 * Dept won't work with this even though an event
		 * context has been asked. Don't make it confused at
		 * handling the event. Disable it until the next.
 		 */
-		WRITE_ONCE(m->wgen, 0U);
+		WRITE_ONCE(*wg_p, 0U);
 		return;
 	}
 
 	flags = dept_enter();
 
-	__dept_event(m, e_f, ip, e_fn, false);
+	__dept_event(m, e_f, ip, e_fn, false, READ_ONCE(*wg_p));
 
 	/*
	 * Keep the map disabled until the next sleep.
 	 */
-	WRITE_ONCE(m->wgen, 0U);
+	WRITE_ONCE(*wg_p, 0U);
 
 	dept_exit(flags);
 }

From patchwork Wed May 8 09:47:20 2024
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v14 23/28] dept: Track PG_locked with dept
Date: Wed, 8 May 2024 18:47:20 +0900
Message-Id: <20240508094726.35754-24-byungchul@sk.com>
In-Reply-To: <20240508094726.35754-1-byungchul@sk.com>
References: <20240508094726.35754-1-byungchul@sk.com>

Make Dept able to track PG_locked waits and events. It's going to be
useful in practice.
See the following link, which shows that dept worked with PG_locked and
can detect real issues:

https://lore.kernel.org/lkml/1674268856-31807-1-git-send-email-byungchul.park@lge.com/

Signed-off-by: Byungchul Park
---
 include/linux/mm_types.h   |   2 +
 include/linux/page-flags.h | 125 +++++++++++++++++++++++++++++++++----
 include/linux/pagemap.h    |   7 ++-
 mm/filemap.c               |  26 ++++++++
 mm/mm_init.c               |   2 +
 5 files changed, 149 insertions(+), 13 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5240bd7bca33..d21b2e298cdd 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include
 
 #include
@@ -203,6 +204,7 @@ struct page {
 		struct page *kmsan_shadow;
 		struct page *kmsan_origin;
 #endif
+	struct dept_ext_wgen PG_locked_wgen;
 } _struct_page_alignment;
 
 /*
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 4bf1c25fd1dc..74cbbf694c18 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -197,6 +197,61 @@ enum pageflags {
 
 #ifndef __GENERATING_BOUNDS_H
 
+#ifdef CONFIG_DEPT
+#include
+#include
+
+extern struct dept_map PG_locked_map;
+
+/*
+ * Place the following annotations at their suitable points in code:
+ *
+ * Annotate dept_page_set_bit() around the first set_bit*()
+ * Annotate dept_page_clear_bit() around clear_bit*()
+ * Annotate dept_page_wait_on_bit() around wait_on_bit*()
+ */
+
+static inline void dept_page_set_bit(struct page *p, int bit_nr)
+{
+	if (bit_nr == PG_locked)
+		dept_request_event(&PG_locked_map, &p->PG_locked_wgen);
+}
+
+static inline void dept_page_clear_bit(struct page *p, int bit_nr)
+{
+	if (bit_nr == PG_locked)
+		dept_event(&PG_locked_map, 1UL, _RET_IP_, __func__, &p->PG_locked_wgen);
+}
+
+static inline void dept_page_wait_on_bit(struct page *p, int bit_nr)
+{
+	if (bit_nr == PG_locked)
+		dept_wait(&PG_locked_map, 1UL, _RET_IP_, __func__, 0, -1L);
+}
+
+static inline void dept_folio_set_bit(struct folio *f, int bit_nr)
+{
+	dept_page_set_bit(&f->page, bit_nr);
+}
+
+static inline void dept_folio_clear_bit(struct folio *f, int bit_nr)
+{
+	dept_page_clear_bit(&f->page, bit_nr);
+}
+
+static inline void dept_folio_wait_on_bit(struct folio *f, int bit_nr)
+{
+	dept_page_wait_on_bit(&f->page, bit_nr);
+}
+#else
+#define dept_page_set_bit(p, bit_nr)		do { } while (0)
+#define dept_page_clear_bit(p, bit_nr)		do { } while (0)
+#define dept_page_wait_on_bit(p, bit_nr)	do { } while (0)
+#define dept_folio_set_bit(f, bit_nr)		do { } while (0)
+#define dept_folio_clear_bit(f, bit_nr)		do { } while (0)
+#define dept_folio_wait_on_bit(f, bit_nr)	do { } while (0)
+#endif
+
 #ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
 DECLARE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key);
@@ -381,27 +436,51 @@ static __always_inline bool folio_test_##name(const struct folio *folio) \
 
 #define FOLIO_SET_FLAG(name, page)					\
 static __always_inline void folio_set_##name(struct folio *folio)	\
-{ set_bit(PG_##name, folio_flags(folio, page)); }
+{									\
+	set_bit(PG_##name, folio_flags(folio, page));			\
+	dept_folio_set_bit(folio, PG_##name);				\
+}
 
 #define FOLIO_CLEAR_FLAG(name, page)					\
 static __always_inline void folio_clear_##name(struct folio *folio)	\
-{ clear_bit(PG_##name, folio_flags(folio, page)); }
+{									\
+	clear_bit(PG_##name, folio_flags(folio, page));			\
+	dept_folio_clear_bit(folio, PG_##name);				\
+}
 
 #define __FOLIO_SET_FLAG(name, page)					\
 static __always_inline void __folio_set_##name(struct folio *folio)	\
-{ __set_bit(PG_##name, folio_flags(folio, page)); }
+{									\
+	__set_bit(PG_##name, folio_flags(folio, page));			\
+	dept_folio_set_bit(folio, PG_##name);				\
+}
 
 #define __FOLIO_CLEAR_FLAG(name, page)					\
 static __always_inline void __folio_clear_##name(struct folio *folio)	\
-{ __clear_bit(PG_##name, folio_flags(folio, page)); }
+{									\
+	__clear_bit(PG_##name, folio_flags(folio, page));		\
+	dept_folio_clear_bit(folio, PG_##name);				\
+}
 
 #define FOLIO_TEST_SET_FLAG(name, page)					\
 static __always_inline bool folio_test_set_##name(struct folio *folio)	\
-{ return test_and_set_bit(PG_##name, folio_flags(folio, page)); }
+{									\
+	bool __ret = test_and_set_bit(PG_##name, folio_flags(folio, page)); \
+									\
+	if (!__ret)							\
+		dept_folio_set_bit(folio, PG_##name);			\
+	return __ret;							\
+}
 
 #define FOLIO_TEST_CLEAR_FLAG(name, page)				\
 static __always_inline bool folio_test_clear_##name(struct folio *folio) \
-{ return test_and_clear_bit(PG_##name, folio_flags(folio, page)); }
+{									\
+	bool __ret = test_and_clear_bit(PG_##name, folio_flags(folio, page)); \
+									\
+	if (__ret)							\
+		dept_folio_clear_bit(folio, PG_##name);			\
+	return __ret;							\
+}
 
 #define FOLIO_FLAG(name, page)						\
 FOLIO_TEST_FLAG(name, page)						\
@@ -416,32 +495,54 @@ static __always_inline int Page##uname(const struct page *page) \
 
 #define SETPAGEFLAG(uname, lname, policy)				\
 FOLIO_SET_FLAG(lname, FOLIO_##policy)					\
 static __always_inline void SetPage##uname(struct page *page)		\
-{ set_bit(PG_##lname, &policy(page, 1)->flags); }
+{									\
+	set_bit(PG_##lname, &policy(page, 1)->flags);			\
+	dept_page_set_bit(page, PG_##lname);				\
+}
 
 #define CLEARPAGEFLAG(uname, lname, policy)				\
 FOLIO_CLEAR_FLAG(lname, FOLIO_##policy)					\
 static __always_inline void ClearPage##uname(struct page *page)		\
-{ clear_bit(PG_##lname, &policy(page, 1)->flags); }
+{									\
+	clear_bit(PG_##lname, &policy(page, 1)->flags);			\
+	dept_page_clear_bit(page, PG_##lname);				\
+}
 
 #define __SETPAGEFLAG(uname, lname, policy)				\
 __FOLIO_SET_FLAG(lname, FOLIO_##policy)					\
 static __always_inline void __SetPage##uname(struct page *page)		\
-{ __set_bit(PG_##lname, &policy(page, 1)->flags); }
+{									\
+	__set_bit(PG_##lname, &policy(page, 1)->flags);			\
+	dept_page_set_bit(page, PG_##lname);				\
+}
 
 #define __CLEARPAGEFLAG(uname, lname, policy)				\
 __FOLIO_CLEAR_FLAG(lname, FOLIO_##policy)				\
 static __always_inline void __ClearPage##uname(struct page *page)	\
-{ __clear_bit(PG_##lname, &policy(page, 1)->flags); }
+{									\
+	__clear_bit(PG_##lname, &policy(page, 1)->flags);		\
+	dept_page_clear_bit(page, PG_##lname);				\
+}
 
 #define TESTSETFLAG(uname, lname, policy)				\
 FOLIO_TEST_SET_FLAG(lname, FOLIO_##policy)				\
 static __always_inline int TestSetPage##uname(struct page *page)	\
-{ return test_and_set_bit(PG_##lname, &policy(page, 1)->flags); }
+{									\
+	bool ret = test_and_set_bit(PG_##lname, &policy(page, 1)->flags);\
+	if (!ret)							\
+		dept_page_set_bit(page, PG_##lname);			\
+	return ret;							\
+}
 
 #define TESTCLEARFLAG(uname, lname, policy)				\
 FOLIO_TEST_CLEAR_FLAG(lname, FOLIO_##policy)				\
 static __always_inline int TestClearPage##uname(struct page *page)	\
-{ return test_and_clear_bit(PG_##lname, &policy(page, 1)->flags); }
+{									\
+	bool ret = test_and_clear_bit(PG_##lname, &policy(page, 1)->flags);\
+	if (ret)							\
+		dept_page_clear_bit(page, PG_##lname);			\
+	return ret;							\
+}
 
 #define PAGEFLAG(uname, lname, policy)					\
 	TESTPAGEFLAG(uname, lname, policy)				\
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 2df35e65557d..a438d8f038de 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -1008,7 +1008,12 @@ void folio_unlock(struct folio *folio);
  */
 static inline bool folio_trylock(struct folio *folio)
 {
-	return likely(!test_and_set_bit_lock(PG_locked, folio_flags(folio, 0)));
+	bool ret = !test_and_set_bit_lock(PG_locked, folio_flags(folio, 0));
+
+	if (ret)
+		dept_page_set_bit(&folio->page, PG_locked);
+
+	return likely(ret);
 }
 
 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index 30de18c4fd28..ceb24a7ee0b1 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -46,6 +46,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include "internal.h"
@@ -1108,6 +1109,7 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync,
 		if (flags & WQ_FLAG_CUSTOM) {
 			if (test_and_set_bit(key->bit_nr, &key->folio->flags))
 				return -1;
+			dept_page_set_bit(&key->folio->page, key->bit_nr);
 			flags |= WQ_FLAG_DONE;
 		}
 	}
@@ -1191,6 +1193,7 @@ static inline bool folio_trylock_flag(struct folio *folio, int bit_nr,
 	if (wait->flags & WQ_FLAG_EXCLUSIVE) {
 		if (test_and_set_bit(bit_nr, &folio->flags))
 			return false;
+		dept_page_set_bit(&folio->page, bit_nr);
 	} else if (test_bit(bit_nr, &folio->flags))
 		return false;
 
@@ -1201,6 +1204,9 @@ static inline bool folio_trylock_flag(struct folio *folio, int bit_nr,
 /* How many times do we accept lock stealing from under a waiter? */
 int sysctl_page_lock_unfairness = 5;
 
+struct dept_map __maybe_unused PG_locked_map = DEPT_MAP_INITIALIZER(PG_locked_map, NULL);
+EXPORT_SYMBOL(PG_locked_map);
+
 static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
 		int state, enum behavior behavior)
 {
@@ -1212,6 +1218,8 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
 	unsigned long pflags;
 	bool in_thrashing;
 
+	dept_page_wait_on_bit(&folio->page, bit_nr);
+
 	if (bit_nr == PG_locked &&
 	    !folio_test_uptodate(folio) && folio_test_workingset(folio)) {
 		delayacct_thrashing_start(&in_thrashing);
@@ -1305,6 +1313,23 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
 			break;
 	}
 
+	/*
	 * dept_page_set_bit() might have been called already in
	 * folio_trylock_flag(), wake_page_function() or somewhere.
	 * However, call it again to reset the wgen of dept to ensure
	 * dept_page_wait_on_bit() is called prior to
	 * dept_page_set_bit().
	 *
	 * Remember that dept considers all the waits between
	 * dept_page_set_bit() and dept_page_clear_bit() as potential
	 * event disturbers. Ensure the correct sequence so that dept
	 * can make correct decisions:
	 *
	 *	wait -> acquire(set bit) -> release(clear bit)
	 */
+	if (wait->flags & WQ_FLAG_DONE)
+		dept_page_set_bit(&folio->page, bit_nr);
+
 	/*
	 * If a signal happened, this 'finish_wait()' may remove the last
	 * waiter from the wait-queues, but the folio waiters bit will remain
@@ -1481,6 +1506,7 @@ void folio_unlock(struct folio *folio)
 	BUILD_BUG_ON(PG_waiters != 7);
 	BUILD_BUG_ON(PG_locked > 7);
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+	dept_page_clear_bit(&folio->page, PG_locked);
 	if (folio_xor_flags_has_waiters(folio, 1 << PG_locked))
 		folio_wake_bit(folio, PG_locked);
 }
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 549e76af8f82..a0c9069d3740 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -27,6 +27,7 @@
 #include
 #include
 #include
+#include
 #include "internal.h"
 #include "slab.h"
 #include "shuffle.h"
@@ -570,6 +571,7 @@ void __meminit __init_single_page(struct page *page, unsigned long pfn,
 	page_mapcount_reset(page);
 	page_cpupid_reset_last(page);
 	page_kasan_tag_reset(page);
+	dept_ext_wgen_init(&page->PG_locked_wgen);
 	INIT_LIST_HEAD(&page->lru);
 
 #ifdef WANT_PAGE_VIRTUAL

From patchwork Wed May 8 09:47:21 2024
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v14 24/28] dept: Print event context requestor's stacktrace on report
Date: Wed, 8 May 2024 18:47:21 +0900
Message-Id: <20240508094726.35754-25-byungchul@sk.com>
In-Reply-To: <20240508094726.35754-1-byungchul@sk.com>
References: <20240508094726.35754-1-byungchul@sk.com>

Currently, nothing is printed in place of [S] in a report, where [S]
should be the stacktrace of the event context's start, if the event is
not an unlock of a typical lock but a general event, because there is no
general way to specify the point where the event context started.
Unfortunately, that makes dept's report hard to interpret in such cases.

So print the event requestor's stacktrace in place of [S], instead of
the event context's start.

Signed-off-by: Byungchul Park
---
 include/linux/dept.h     | 13 +++++++
 kernel/dependency/dept.c | 83 ++++++++++++++++++++++++++++++++--------
 2 files changed, 80 insertions(+), 16 deletions(-)

diff --git a/include/linux/dept.h b/include/linux/dept.h
index dea53ad5b356..6db23d77905e 100644
--- a/include/linux/dept.h
+++ b/include/linux/dept.h
@@ -145,6 +145,11 @@ struct dept_map {
 	 */
 	unsigned int wgen;

+	/*
+	 * requestor for the event context to run
+	 */
+	struct dept_stack *req_stack;
+
 	/*
 	 * whether this map should be going to be checked or not
 	 */
@@ -486,7 +491,15 @@ struct dept_task {
  * for subsystems that requires compact use of memory e.g.
struct page
  */
 struct dept_ext_wgen{
+	/*
+	 * wait timestamp associated to this map
+	 */
 	unsigned int wgen;
+
+	/*
+	 * requestor for the event context to run
+	 */
+	struct dept_stack *req_stack;
 };

 #define DEPT_TASK_INITIALIZER(t)				\
diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c
index fb33c3758c25..abf1cdab0615 100644
--- a/kernel/dependency/dept.c
+++ b/kernel/dependency/dept.c
@@ -129,6 +129,7 @@ static int dept_per_cpu_ready;
 #define DEPT_INFO(s...) pr_warn("DEPT_INFO: " s)

 static arch_spinlock_t dept_spin = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
+static arch_spinlock_t dept_req_spin = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
 static arch_spinlock_t dept_pool_spin = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;

 /*
@@ -1669,7 +1670,8 @@ static void add_wait(struct dept_class *c, unsigned long ip,
 static bool add_ecxt(struct dept_map *m, struct dept_class *c,
 		     unsigned long ip, const char *c_fn,
-		     const char *e_fn, int sub_l)
+		     const char *e_fn, int sub_l,
+		     struct dept_stack *req_stack)
 {
 	struct dept_task *dt = dept_task();
 	struct dept_ecxt_held *eh;
@@ -1700,10 +1702,16 @@ static bool add_ecxt(struct dept_map *m, struct dept_class *c,

 	e->class = get_class(c);
 	e->ecxt_ip = ip;
-	e->ecxt_stack = ip && rich_stack ? get_current_stack() : NULL;
 	e->event_fn = e_fn;
 	e->ecxt_fn = c_fn;

+	if (req_stack)
+		e->ecxt_stack = get_stack(req_stack);
+	else if (ip && rich_stack)
+		e->ecxt_stack = get_current_stack();
+	else
+		e->ecxt_stack = NULL;
+
 	eh = dt->ecxt_held + (dt->ecxt_held_pos++);
 	eh->ecxt = get_ecxt(e);
 	eh->map = m;
@@ -2147,6 +2155,7 @@ void dept_map_init(struct dept_map *m, struct dept_key *k, int sub_u,
 	m->sub_u = sub_u;
 	m->name = n;
 	m->wgen = 0U;
+	m->req_stack = NULL;
 	m->nocheck = !valid_key(k);

 	dept_exit_recursive(flags);
@@ -2181,6 +2190,7 @@ void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u,

 	m->name = n;
 	m->wgen = 0U;
+	m->req_stack = NULL;

 	dept_exit_recursive(flags);
 }
@@ -2189,6 +2199,7 @@ EXPORT_SYMBOL_GPL(dept_map_reinit);
 void dept_ext_wgen_init(struct dept_ext_wgen *ewg)
 {
 	ewg->wgen = 0U;
+	ewg->req_stack = NULL;
 }

 void dept_map_copy(struct dept_map *to, struct dept_map *from)
@@ -2376,7 +2387,8 @@ static void __dept_wait(struct dept_map *m, unsigned long w_f,
  */
 static void __dept_event(struct dept_map *m, unsigned long e_f,
 			 unsigned long ip, const char *e_fn,
-			 bool sched_map, unsigned int wg)
+			 bool sched_map, unsigned int wg,
+			 struct dept_stack *req_stack)
 {
 	struct dept_class *c;
 	struct dept_key *k;
@@ -2397,7 +2409,7 @@ static void __dept_event(struct dept_map *m, unsigned long e_f,
 	k = m->keys ?: &m->map_key;
 	c = check_new_class(&m->map_key, k, sub_id(m, e), m->name, sched_map);

-	if (c && add_ecxt(m, c, 0UL, NULL, e_fn, 0)) {
+	if (c && add_ecxt(m, c, 0UL, "(event requestor)", e_fn, 0, req_stack)) {
 		do_event(m, c, wg, ip);
 		pop_ecxt(m, c);
 	}
@@ -2506,6 +2518,8 @@ EXPORT_SYMBOL_GPL(dept_stage_wait);

 static void __dept_clean_stage(struct dept_task *dt)
 {
+	if (dt->stage_m.req_stack)
+		put_stack(dt->stage_m.req_stack);
 	memset(&dt->stage_m, 0x0, sizeof(struct dept_map));
 	dt->stage_sched_map = false;
 	dt->stage_w_fn = NULL;
@@ -2571,6 +2585,7 @@ void dept_request_event_wait_commit(void)
 	 */
 	wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen);
 	WRITE_ONCE(dt->stage_m.wgen, wg);
+	dt->stage_m.req_stack = get_current_stack();

 	__dept_wait(&dt->stage_m, 1UL, ip, w_fn, 0, true, sched_map, timeout);
 exit:
@@ -2602,6 +2617,8 @@ void dept_stage_event(struct task_struct *requestor, unsigned long ip)
 	 */
 	m = dt_req->stage_m;
 	sched_map = dt_req->stage_sched_map;
+	if (m.req_stack)
+		get_stack(m.req_stack);
 	__dept_clean_stage(dt_req);

 	/*
@@ -2611,8 +2628,12 @@ void dept_stage_event(struct task_struct *requestor, unsigned long ip)
 	if (!m.keys)
 		goto exit;

-	__dept_event(&m, 1UL, ip, "try_to_wake_up", sched_map, m.wgen);
+	__dept_event(&m, 1UL, ip, "try_to_wake_up", sched_map, m.wgen,
+		     m.req_stack);
 exit:
+	if (m.req_stack)
+		put_stack(m.req_stack);
+
 	dept_exit(flags);
 }
@@ -2692,7 +2713,7 @@ void dept_map_ecxt_modify(struct dept_map *m, unsigned long e_f,
 	k = m->keys ?: &m->map_key;
 	c = check_new_class(&m->map_key, k, sub_id(m, new_e), m->name, false);

-	if (c && add_ecxt(m, c, new_ip, new_c_fn, new_e_fn, new_sub_l))
+	if (c && add_ecxt(m, c, new_ip, new_c_fn, new_e_fn, new_sub_l, NULL))
 		goto exit;

 	/*
@@ -2744,7 +2765,7 @@ void dept_ecxt_enter(struct dept_map *m, unsigned long e_f, unsigned long ip,
 	k = m->keys ?: &m->map_key;
 	c = check_new_class(&m->map_key, k, sub_id(m, e), m->name, false);

-	if (c && add_ecxt(m, c, ip, c_fn, e_fn, sub_l))
+	if (c && add_ecxt(m, c, ip, c_fn, e_fn, sub_l, NULL))
 		goto exit;
 missing_ecxt:
 	dt->missing_ecxt++;
@@ -2792,9 +2813,11 @@ EXPORT_SYMBOL_GPL(dept_ecxt_holding);

 void dept_request_event(struct dept_map *m, struct dept_ext_wgen *ewg)
 {
+	struct dept_task *dt = dept_task();
 	unsigned long flags;
 	unsigned int wg;
 	unsigned int *wg_p;
+	struct dept_stack **req_stack_p;

 	if (unlikely(!dept_working()))
 		return;
@@ -2802,12 +2825,18 @@ void dept_request_event(struct dept_map *m, struct dept_ext_wgen *ewg)
 	if (m->nocheck)
 		return;

-	/*
-	 * Allow recursive entrance.
-	 */
-	flags = dept_enter_recursive();
+	if (dt->recursive)
+		return;

-	wg_p = ewg ? &ewg->wgen : &m->wgen;
+	flags = dept_enter();
+
+	if (ewg) {
+		wg_p = &ewg->wgen;
+		req_stack_p = &ewg->req_stack;
+	} else {
+		wg_p = &m->wgen;
+		req_stack_p = &m->req_stack;
+	}

 	/*
 	 * Avoid zero wgen.
@@ -2815,7 +2844,13 @@ void dept_request_event(struct dept_map *m, struct dept_ext_wgen *ewg)
 	wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen);
 	WRITE_ONCE(*wg_p, wg);

-	dept_exit_recursive(flags);
+	arch_spin_lock(&dept_req_spin);
+	if (*req_stack_p)
+		put_stack(*req_stack_p);
+	*req_stack_p = get_current_stack();
+	arch_spin_unlock(&dept_req_spin);
+
+	dept_exit(flags);
 }
 EXPORT_SYMBOL_GPL(dept_request_event);

@@ -2826,6 +2861,8 @@ void dept_event(struct dept_map *m, unsigned long e_f,
 	struct dept_task *dt = dept_task();
 	unsigned long flags;
 	unsigned int *wg_p;
+	struct dept_stack **req_stack_p;
+	struct dept_stack *req_stack;

 	if (unlikely(!dept_working()))
 		return;
@@ -2833,7 +2870,18 @@ void dept_event(struct dept_map *m, unsigned long e_f,
 	if (m->nocheck)
 		return;

-	wg_p = ewg ? &ewg->wgen : &m->wgen;
+	if (ewg) {
+		wg_p = &ewg->wgen;
+		req_stack_p = &ewg->req_stack;
+	} else {
+		wg_p = &m->wgen;
+		req_stack_p = &m->req_stack;
+	}
+
+	arch_spin_lock(&dept_req_spin);
+	req_stack = *req_stack_p;
+	*req_stack_p = NULL;
+	arch_spin_unlock(&dept_req_spin);

 	if (dt->recursive) {
 		/*
@@ -2842,17 +2890,20 @@ void dept_event(struct dept_map *m, unsigned long e_f,
 		 * handling the event. Disable it until the next.
 		 */
 		WRITE_ONCE(*wg_p, 0U);
+		if (req_stack)
+			put_stack(req_stack);
 		return;
 	}

 	flags = dept_enter();
-
-	__dept_event(m, e_f, ip, e_fn, false, READ_ONCE(*wg_p));
+	__dept_event(m, e_f, ip, e_fn, false, READ_ONCE(*wg_p), req_stack);

 	/*
 	 * Keep the map diabled until the next sleep.
 	 */
 	WRITE_ONCE(*wg_p, 0U);
+	if (req_stack)
+		put_stack(req_stack);

 	dept_exit(flags);
 }

From patchwork Wed May 8 09:47:22 2024
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13658435
Subject: [PATCH v14 25/28] cpu/hotplug: Use a weaker annotation in AP thread
Date: Wed, 8 May 2024 18:47:22 +0900
Message-Id: <20240508094726.35754-26-byungchul@sk.com>
In-Reply-To: <20240508094726.35754-1-byungchul@sk.com>
References: <20240508094726.35754-1-byungchul@sk.com>

cb92173d1f0 ("locking/lockdep, cpu/hotplug: Annotate AP thread") was
introduced to make lockdep_assert_cpus_held() work in the AP thread.

However, the annotation is stronger than needed for that purpose; a
trylock annotation is enough. rwsem_acquire() implies:

   1. might become a waiter on contention of the lock.
   2. enters the critical section of the lock.

All that is needed here is 2, not 1, so the trylock version of the
annotation is sufficient.

Now that dept partially relies on lockdep annotations, dept interprets
rwsem_acquire() as a potential wait and might report a deadlock based on
that wait.
So replace it with the trylock version of the annotation.

Signed-off-by: Byungchul Park
---
 kernel/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/cpu.c b/kernel/cpu.c
index 63447eb85dab..da969f7269b5 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -534,7 +534,7 @@ int lockdep_is_cpus_held(void)

 static void lockdep_acquire_cpus_lock(void)
 {
-	rwsem_acquire(&cpu_hotplug_lock.dep_map, 0, 0, _THIS_IP_);
+	rwsem_acquire(&cpu_hotplug_lock.dep_map, 0, 1, _THIS_IP_);
 }

 static void lockdep_release_cpus_lock(void)

From patchwork Wed May 8 09:47:23 2024
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13658448
Subject: [PATCH v14 26/28] fs/jbd2: Use a weaker annotation in journal handling
Date: Wed, 8 May 2024 18:47:23 +0900
Message-Id: <20240508094726.35754-27-byungchul@sk.com>
In-Reply-To: <20240508094726.35754-1-byungchul@sk.com>
References: <20240508094726.35754-1-byungchul@sk.com>

jbd2 journal handling code doesn't want jbd2_might_wait_for_commit() to
be placed between start_this_handle() and stop_this_handle(), so it
marks the region with rwsem_acquire_read() and rwsem_release().
However, the annotation is stronger than needed for that purpose; a
trylock annotation is enough. rwsem_acquire_read() implies:

   1. might become a waiter on contention of the lock.
   2. enters the critical section of the lock.

All that is needed here is 2, not 1, so the trylock version of the
annotation is sufficient.

Now that dept partially relies on lockdep annotations, dept interprets
rwsem_acquire_read() as a potential wait and might report a deadlock
based on that wait. So replace it with the trylock version of the
annotation.

Signed-off-by: Byungchul Park
---
 fs/jbd2/transaction.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
index cb0b8d6fc0c6..58261621da27 100644
--- a/fs/jbd2/transaction.c
+++ b/fs/jbd2/transaction.c
@@ -460,7 +460,7 @@ static int start_this_handle(journal_t *journal, handle_t *handle,
 	read_unlock(&journal->j_state_lock);
 	current->journal_info = handle;

-	rwsem_acquire_read(&journal->j_trans_commit_map, 0, 0, _THIS_IP_);
+	rwsem_acquire_read(&journal->j_trans_commit_map, 0, 1, _THIS_IP_);
 	jbd2_journal_free_transaction(new_transaction);
 	/*
 	 * Ensure that no allocations done while the transaction is open are

From patchwork Wed May 8 09:47:24 2024
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13658444
Subject: [PATCH v14 27/28] dept: Add documentation for Dept
Date: Wed, 8 May 2024 18:47:24 +0900
Message-Id: <20240508094726.35754-28-byungchul@sk.com>
In-Reply-To: <20240508094726.35754-1-byungchul@sk.com>
References: <20240508094726.35754-1-byungchul@sk.com>

This document describes the concept of Dept.

Signed-off-by: Byungchul Park
---
 Documentation/dependency/dept.txt | 735 ++++++++++++++++++++++++++++++
 1 file changed, 735 insertions(+)
 create mode 100644 Documentation/dependency/dept.txt

diff --git a/Documentation/dependency/dept.txt b/Documentation/dependency/dept.txt
new file mode 100644
index 000000000000..5dd358b96734
--- /dev/null
+++ b/Documentation/dependency/dept.txt
@@ -0,0 +1,735 @@
+DEPT(DEPendency Tracker)
+========================
+
+Started by Byungchul Park
+
+How lockdep works
+-----------------
+
+Lockdep detects a deadlock by checking lock acquisition order. For
+example, a graph to track acquisition order built by lockdep might look
+like:
+
+   A -> B -
+           \
+            -> E
+           /
+   C -> D -
+
+   where 'A -> B' means that acquisition A is prior to acquisition B
+   with A still held.
+
+Lockdep keeps adding each new acquisition order into the graph in
+runtime. For example, 'E -> C' will be added when the two locks have
+been acquired in the order, E and then C. The graph will look like:
+
+   A -> B -
+           \
+            -> E -
+           /      \
+    -> C -> D -    \
+   /               /
+   \              /
+    ---------------
+
+   where 'A -> B' means that acquisition A is prior to acquisition B
+   with A still held.
+
+This graph contains a subgraph that demonstrates a loop like:
+
+            -> E -
+           /      \
+    -> C -> D -    \
+   /               /
+   \              /
+    ---------------
+
+   where 'A -> B' means that acquisition A is prior to acquisition B
+   with A still held.
+
+On detection of a loop, lockdep reports it as a deadlock and stops
+working.
+
+CONCLUSION
+
+Lockdep detects a deadlock by checking if a loop has been created after
+adding a new acquisition order into the graph.
+
+
+Limitation of lockdep
+---------------------
+
+Lockdep deals with deadlocks caused by typical locks, e.g. spinlock and
+mutex, that are supposed to be released within the acquisition context.
+However, when it comes to a deadlock caused by a folio lock, which is
+not necessarily released within the acquisition context, or by other
+general synchronization mechanisms, lockdep doesn't work.
+
+Can lockdep detect the following deadlock?
+
+   context X           context Y           context Z
+
+                       mutex_lock A
+   folio_lock B
+                       folio_lock B <- DEADLOCK
+                                           mutex_lock A <- DEADLOCK
+                                           folio_unlock B
+                       folio_unlock B
+                       mutex_unlock A
+                                           mutex_unlock A
+
+No. What about the following?
+
+   context X                       context Y
+
+   mutex_lock A
+                                   mutex_lock A <- DEADLOCK
+   wait_for_complete B <- DEADLOCK
+                                   complete B
+                                   mutex_unlock A
+   mutex_unlock A
+
+No.
+
+CONCLUSION
+
+Lockdep cannot detect a deadlock caused by a folio lock or by other
+general synchronization mechanisms.
+
+
+What leads to a deadlock
+------------------------
+
+A deadlock occurs when one or more contexts are waiting for events that
+will never happen. For example:
+
+   context X           context Y           context Z
+
+   |                   |                   |
+   v                   |                   |
+   1 wait for A        v                   |
+   .                   2 wait for C        v
+   event C             .                   3 wait for B
+                       event B             .
+                                           event A
+
+Event C cannot be triggered because context X is stuck at 1, event B
+cannot be triggered because context Y is stuck at 2, and event A cannot
+be triggered because context Z is stuck at 3. All the contexts are
+stuck. We call this a *deadlock*.
+
+If an event's occurrence is a prerequisite to reaching another event,
+we call it a *dependency*. In this example:
+
+   Event A's occurrence is a prerequisite to reaching event C.
+   Event C's occurrence is a prerequisite to reaching event B.
+   Event B's occurrence is a prerequisite to reaching event A.
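The stuck situation just described can be checked mechanically: an event fires only if its context gets past all earlier waits, and a wait completes only once its event has fired somewhere. Below is a minimal userspace sketch of that fixed-point reasoning (Python; the function names and data layout are mine for illustration, not DEPT's):

```python
# Toy fixed-point check of the X/Y/Z example above: an event can fire
# only if its context gets past all earlier waits, and a wait completes
# only once its event has fired somewhere. Illustrative only.

def stuck_contexts(contexts):
    fired = set()
    changed = True
    while changed:
        changed = False
        for actions in contexts.values():
            for kind, ev in actions:
                if kind == "wait" and ev not in fired:
                    break                  # this context is blocked here
                if kind == "event" and ev not in fired:
                    fired.add(ev)
                    changed = True
    # a context is stuck if one of its waits can never be satisfied
    return {name for name, actions in contexts.items()
            if any(kind == "wait" and ev not in fired
                   for kind, ev in actions)}

contexts = {
    "X": [("wait", "A"), ("event", "C")],
    "Y": [("wait", "C"), ("event", "B")],
    "Z": [("wait", "B"), ("event", "A")],
}
# Every context waits for an event that only another stuck context
# would trigger, so all three are stuck: a deadlock.
assert stuck_contexts(contexts) == {"X", "Y", "Z"}

# If context Z did not wait for B first, everything would unwind.
contexts["Z"] = [("event", "A")]
assert stuck_contexts(contexts) == set()
```

The second assertion shows why the loop matters: breaking any one wait in the cycle lets all three contexts run to completion.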
+
+In terms of dependency:
+
+   Event C depends on event A.
+   Event B depends on event C.
+   Event A depends on event B.
+
+The dependency graph reflecting this example will look like:
+
+    -> C -> A -> B -
+   /                \
+   \                /
+    ----------------
+
+   where 'A -> B' means that event A depends on event B.
+
+A circular dependency exists. Such a circular dependency leads to a
+deadlock since no waiter can have its desired event triggered.
+
+CONCLUSION
+
+A circular dependency of events leads to a deadlock.
+
+
+Introduce DEPT
+--------------
+
+DEPT(DEPendency Tracker) tracks waits and events instead of lock
+acquisition order so as to recognize the following situation:
+
+   context X           context Y           context Z
+
+   |                   |                   |
+   v                   |                   |
+   wait for A          v                   |
+   .                   wait for C          v
+   event C             .                   wait for B
+                       event B             .
+                                           event A
+
+and builds up a dependency graph at runtime that is similar to
+lockdep's. The graph might look like:
+
+    -> C -> A -> B -
+   /                \
+   \                /
+    ----------------
+
+   where 'A -> B' means that event A depends on event B.
+
+DEPT keeps adding each new dependency into the graph at runtime. For
+example, 'B -> D' will be added when event D's occurrence turns out to
+be a prerequisite to reaching event B, like:
+
+   |
+   v
+   wait for D
+   .
+   event B
+
+After the addition, the graph will look like:
+
+            -> D
+           /
+    -> C -> A -> B -
+   /                \
+   \                /
+    ----------------
+
+   where 'A -> B' means that event A depends on event B.
+
+DEPT reports a deadlock on detection of a new loop.
+
+CONCLUSION
+
+DEPT works on waits and events so as to theoretically detect all
+potential deadlocks.
+
+
+How DEPT works
+--------------
+
+Let's take a look at how DEPT works with the first example in the
+section 'Limitation of lockdep'.
+
+   context X           context Y           context Z
+
+                       mutex_lock A
+   folio_lock B
+                       folio_lock B <- DEADLOCK
+                                           mutex_lock A <- DEADLOCK
+                                           folio_unlock B
+                       folio_unlock B
+                       mutex_unlock A
+                                           mutex_unlock A
+
+Adding comments to describe DEPT's view in terms of wait and event:
+
+   context X           context Y           context Z
+
+                       mutex_lock A
+                       /* wait for A */
+                       /* start event A context */
+   folio_lock B
+   /* wait for B */
+   /* start event B context */
+
+                       folio_lock B
+                       /* wait for B */ <- DEADLOCK
+                       /* start event B context */
+
+                                           mutex_lock A
+                                           /* wait for A */ <- DEADLOCK
+                                           /* start event A context */
+
+                                           folio_unlock B
+                                           /* event B */
+                       folio_unlock B
+                       /* event B */
+
+                       mutex_unlock A
+                       /* event A */
+                                           mutex_unlock A
+                                           /* event A */
+
+Adding more supplementary comments to describe DEPT's view in detail:
+
+   context X           context Y           context Z
+
+                       mutex_lock A
+                       /* might wait for A */
+                       /* start to take into account event A's context */
+                       /* 1 */
+   folio_lock B
+   /* might wait for B */
+   /* start to take into account event B's context */
+   /* 2 */
+
+                       folio_lock B
+                       /* might wait for B */ <- DEADLOCK
+                       /* start to take into account event B's context */
+                       /* 3 */
+
+                                           mutex_lock A
+                                           /* might wait for A */ <- DEADLOCK
+                                           /* start to take into account
+                                              event A's context */
+                                           /* 4 */
+
+                                           folio_unlock B
+                                           /* event B that's been valid since 2 */
+                       folio_unlock B
+                       /* event B that's been valid since 3 */
+
+                       mutex_unlock A
+                       /* event A that's been valid since 1 */
+
+                                           mutex_unlock A
+                                           /* event A that's been valid since 4 */
+
+Let's build up the dependency graph from this example. First,
+context X:
+
+   context X
+
+   folio_lock B
+   /* might wait for B */
+   /* start to take into account event B's context */
+   /* 2 */
+
+There are no events to create a dependency.
+Next, context Y:
+
+   context Y
+
+   mutex_lock A
+   /* might wait for A */
+   /* start to take into account event A's context */
+   /* 1 */
+
+   folio_lock B
+   /* might wait for B */
+   /* start to take into account event B's context */
+   /* 3 */
+
+   folio_unlock B
+   /* event B that's been valid since 3 */
+
+   mutex_unlock A
+   /* event A that's been valid since 1 */
+
+There are two events. For event B, folio_unlock B, since there are no
+waits between 3 and the event, event B does not create a dependency.
+For event A, there is a wait, folio_lock B, between 1 and the event.
+This means event A cannot be triggered if event B does not wake up the
+wait. Therefore, we can say event A depends on event B, say, 'A -> B'.
+After adding the dependency, the graph will look like:
+
+   A -> B
+
+   where 'A -> B' means that event A depends on event B.
+
+Lastly, context Z:
+
+   context Z
+
+   mutex_lock A
+   /* might wait for A */
+   /* start to take into account event A's context */
+   /* 4 */
+
+   folio_unlock B
+   /* event B that's been valid since 2 */
+
+   mutex_unlock A
+   /* event A that's been valid since 4 */
+
+There are also two events. For event B, folio_unlock B, there is a
+wait, mutex_lock A, between 2 and the event - recall that 2 comes at
+the very start, before the wait, in the timeline. This means event B
+cannot be triggered if event A does not wake up the wait. Therefore,
+we can say event B depends on event A, say, 'B -> A'. After adding the
+dependency, the graph will look like:
+
+    -> A -> B -
+   /           \
+   \           /
+    -----------
+
+   where 'A -> B' means that event A depends on event B.
+
+A new loop has been created. So DEPT can report it as a deadlock. For
+event A, mutex_unlock A, since there are no waits between 4 and the
+event, event A does not create a dependency. That's it.
+
+CONCLUSION
+
+DEPT works well with any general synchronization mechanism by focusing
+on waits, events and their contexts.
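The walkthrough above applies one rule per context - an event depends on every event waited for between the event's context start and the event itself - and then looks for a loop. A minimal userspace sketch of that rule (Python; the names and data layout are mine for illustration, not DEPT's internals):

```python
# Toy model of the walkthrough: each context is a timeline of
# ('S'|'W'|'E', event) records. An event depends on every event waited
# for between its [S] and its [E], and a loop in the resulting graph
# means a potential deadlock. Illustrative only.

def derive_deps(timeline):
    deps = set()
    for i, (kind, ev) in enumerate(timeline):
        if kind != 'E':
            continue
        # the matching context start for this event
        start = max(j for j, (k, e) in enumerate(timeline[:i])
                    if k == 'S' and e == ev)
        deps |= {(ev, waited) for k, waited in timeline[start + 1:i]
                 if k == 'W'}
    return deps

def has_loop(deps):
    graph = {}
    for src, dst in deps:
        graph.setdefault(src, set()).add(dst)
    def reachable(frm, to, seen=()):
        return any(n == to or (n not in seen and
                               reachable(n, to, seen + (n,)))
                   for n in graph.get(frm, ()))
    return any(reachable(ev, ev) for ev in graph)

ctx_y = [('W', 'A'), ('S', 'A'),   # mutex_lock A    /* 1 */
         ('W', 'B'), ('S', 'B'),   # folio_lock B    /* 3 */
         ('E', 'B'),               # folio_unlock B
         ('E', 'A')]               # mutex_unlock A
ctx_z = [('S', 'B'),               # event B's context starts at /* 2 */
         ('W', 'A'), ('S', 'A'),   # mutex_lock A    /* 4 */
         ('E', 'B'),               # folio_unlock B
         ('E', 'A')]               # mutex_unlock A

deps = derive_deps(ctx_y) | derive_deps(ctx_z)
assert deps == {('A', 'B'), ('B', 'A')}   # 'A -> B' and 'B -> A'
assert has_loop(deps)                     # DEPT would report a deadlock
```

Note one simplification: event B's context actually starts in context X (point 2), while the event is triggered in context Z; the toy folds that [S] into Z's timeline.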
+
+
+Interpret DEPT report
+---------------------
+
+The following is the example in the section 'How DEPT works'.
+
+   context X           context Y           context Z
+
+                       mutex_lock A
+                       /* might wait for A */
+                       /* start to take into account event A's context */
+                       /* 1 */
+   folio_lock B
+   /* might wait for B */
+   /* start to take into account event B's context */
+   /* 2 */
+
+                       folio_lock B
+                       /* might wait for B */ <- DEADLOCK
+                       /* start to take into account event B's context */
+                       /* 3 */
+
+                                           mutex_lock A
+                                           /* might wait for A */ <- DEADLOCK
+                                           /* start to take into account
+                                              event A's context */
+                                           /* 4 */
+
+                                           folio_unlock B
+                                           /* event B that's been valid since 2 */
+                       folio_unlock B
+                       /* event B that's been valid since 3 */
+
+                       mutex_unlock A
+                       /* event A that's been valid since 1 */
+
+                                           mutex_unlock A
+                                           /* event A that's been valid since 4 */
+
+We can simplify this by replacing each waiting point with [W], each
+point where its event's context starts with [S] and each event with
+[E]. After the replacement, the example will look like:
+
+   context X           context Y           context Z
+
+                       [W][S] mutex_lock A
+   [W][S] folio_lock B
+                       [W][S] folio_lock B <- DEADLOCK
+
+                                           [W][S] mutex_lock A <- DEADLOCK
+                                           [E] folio_unlock B
+                       [E] folio_unlock B
+                       [E] mutex_unlock A
+                                           [E] mutex_unlock A
+
+DEPT uses the symbols [W], [S] and [E] in its report as described
+above. The following is an example reported by DEPT for a real
+problem.
+
+   Link: https://lore.kernel.org/lkml/6383cde5-cf4b-facf-6e07-1378a485657d@I-love.SAKURA.ne.jp/#t
+   Link: https://lore.kernel.org/lkml/1674268856-31807-1-git-send-email-byungchul.park@lge.com/
+
+   ===================================================
+   DEPT: Circular dependency has been detected.
+   6.2.0-rc1-00025-gb0c20ebf51ac-dirty #28 Not tainted
+   ---------------------------------------------------
+   summary
+   ---------------------------------------------------
+   *** DEADLOCK ***
+
+   context A
+       [S] lock(&ni->ni_lock:0)
+       [W] folio_wait_bit_common(PG_locked_map:0)
+       [E] unlock(&ni->ni_lock:0)
+
+   context B
+       [S] (unknown)(PG_locked_map:0)
+       [W] lock(&ni->ni_lock:0)
+       [E] folio_unlock(PG_locked_map:0)
+
+   [S]: start of the event context
+   [W]: the wait blocked
+   [E]: the event not reachable
+   ---------------------------------------------------
+   context A's detail
+   ---------------------------------------------------
+   context A
+       [S] lock(&ni->ni_lock:0)
+       [W] folio_wait_bit_common(PG_locked_map:0)
+       [E] unlock(&ni->ni_lock:0)
+
+   [S] lock(&ni->ni_lock:0):
+   [] ntfs3_setattr+0x54b/0xd40
+   stacktrace:
+      ntfs3_setattr+0x54b/0xd40
+      notify_change+0xcb3/0x1430
+      do_truncate+0x149/0x210
+      path_openat+0x21a3/0x2a90
+      do_filp_open+0x1ba/0x410
+      do_sys_openat2+0x16d/0x4e0
+      __x64_sys_creat+0xcd/0x120
+      do_syscall_64+0x41/0xc0
+      entry_SYSCALL_64_after_hwframe+0x63/0xcd
+
+   [W] folio_wait_bit_common(PG_locked_map:0):
+   [] truncate_inode_pages_range+0x9b0/0xf20
+   stacktrace:
+      folio_wait_bit_common+0x5e0/0xaf0
+      truncate_inode_pages_range+0x9b0/0xf20
+      truncate_pagecache+0x67/0x90
+      ntfs3_setattr+0x55a/0xd40
+      notify_change+0xcb3/0x1430
+      do_truncate+0x149/0x210
+      path_openat+0x21a3/0x2a90
+      do_filp_open+0x1ba/0x410
+      do_sys_openat2+0x16d/0x4e0
+      __x64_sys_creat+0xcd/0x120
+      do_syscall_64+0x41/0xc0
+      entry_SYSCALL_64_after_hwframe+0x63/0xcd
+
+   [E] unlock(&ni->ni_lock:0):
+   (N/A)
+   ---------------------------------------------------
+   context B's detail
+   ---------------------------------------------------
+   context B
+       [S] (unknown)(PG_locked_map:0)
+       [W] lock(&ni->ni_lock:0)
+       [E] folio_unlock(PG_locked_map:0)
+
+   [S] (unknown)(PG_locked_map:0):
+   (N/A)
+
+   [W] lock(&ni->ni_lock:0):
+   [] attr_data_get_block+0x32c/0x19f0
+   stacktrace:
+      attr_data_get_block+0x32c/0x19f0
+      ntfs_get_block_vbo+0x264/0x1330
+      __block_write_begin_int+0x3bd/0x14b0
+      block_write_begin+0xb9/0x4d0
+      ntfs_write_begin+0x27e/0x480
+      generic_perform_write+0x256/0x570
+      __generic_file_write_iter+0x2ae/0x500
+      ntfs_file_write_iter+0x66d/0x1d70
+      do_iter_readv_writev+0x20b/0x3c0
+      do_iter_write+0x188/0x710
+      vfs_iter_write+0x74/0xa0
+      iter_file_splice_write+0x745/0xc90
+      direct_splice_actor+0x114/0x180
+      splice_direct_to_actor+0x33b/0x8b0
+      do_splice_direct+0x1b7/0x280
+      do_sendfile+0xb49/0x1310
+
+   [E] folio_unlock(PG_locked_map:0):
+   [] generic_write_end+0xf2/0x440
+   stacktrace:
+      generic_write_end+0xf2/0x440
+      ntfs_write_end+0x42e/0x980
+      generic_perform_write+0x316/0x570
+      __generic_file_write_iter+0x2ae/0x500
+      ntfs_file_write_iter+0x66d/0x1d70
+      do_iter_readv_writev+0x20b/0x3c0
+      do_iter_write+0x188/0x710
+      vfs_iter_write+0x74/0xa0
+      iter_file_splice_write+0x745/0xc90
+      direct_splice_actor+0x114/0x180
+      splice_direct_to_actor+0x33b/0x8b0
+      do_splice_direct+0x1b7/0x280
+      do_sendfile+0xb49/0x1310
+      __x64_sys_sendfile64+0x1d0/0x210
+      do_syscall_64+0x41/0xc0
+      entry_SYSCALL_64_after_hwframe+0x63/0xcd
+   ---------------------------------------------------
+   information that might be helpful
+   ---------------------------------------------------
+   CPU: 1 PID: 8060 Comm: a.out Not tainted
+   6.2.0-rc1-00025-gb0c20ebf51ac-dirty #28
+   Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),
+   BIOS Bochs 01/01/2011
+   Call Trace:
+
+   dump_stack_lvl+0xf2/0x169
+   print_circle.cold+0xca4/0xd28
+   ? lookup_dep+0x240/0x240
+   ? extend_queue+0x223/0x300
+   cb_check_dl+0x1e7/0x260
+   bfs+0x27b/0x610
+   ? print_circle+0x240/0x240
+   ? llist_add_batch+0x180/0x180
+   ? extend_queue_rev+0x300/0x300
+   ? __add_dep+0x60f/0x810
+   add_dep+0x221/0x5b0
+   ? __add_idep+0x310/0x310
+   ? add_iecxt+0x1bc/0xa60
+   ? add_iecxt+0x1bc/0xa60
+   ? add_iecxt+0x1bc/0xa60
+   ? add_iecxt+0x1bc/0xa60
+   __dept_wait+0x600/0x1490
+   ? add_iecxt+0x1bc/0xa60
+   ? truncate_inode_pages_range+0x9b0/0xf20
+   ? check_new_class+0x790/0x790
+   ? dept_enirq_transition+0x519/0x9c0
+   dept_wait+0x159/0x3b0
+   ? truncate_inode_pages_range+0x9b0/0xf20
+   folio_wait_bit_common+0x5e0/0xaf0
+   ? filemap_get_folios_contig+0xa30/0xa30
+   ? dept_enirq_transition+0x519/0x9c0
+   ? lock_is_held_type+0x10e/0x160
+   ? lock_is_held_type+0x11e/0x160
+   truncate_inode_pages_range+0x9b0/0xf20
+   ? truncate_inode_partial_folio+0xba0/0xba0
+   ? setattr_prepare+0x142/0xc40
+   truncate_pagecache+0x67/0x90
+   ntfs3_setattr+0x55a/0xd40
+   ? ktime_get_coarse_real_ts64+0x1e5/0x2f0
+   ? ntfs_extend+0x5c0/0x5c0
+   ? mode_strip_sgid+0x210/0x210
+   ? ntfs_extend+0x5c0/0x5c0
+   notify_change+0xcb3/0x1430
+   ? do_truncate+0x149/0x210
+   do_truncate+0x149/0x210
+   ? file_open_root+0x430/0x430
+   ? process_measurement+0x18c0/0x18c0
+   ? ntfs_file_release+0x230/0x230
+   path_openat+0x21a3/0x2a90
+   ? path_lookupat+0x840/0x840
+   ? dept_enirq_transition+0x519/0x9c0
+   ? lock_is_held_type+0x10e/0x160
+   do_filp_open+0x1ba/0x410
+   ? may_open_dev+0xf0/0xf0
+   ? find_held_lock+0x2d/0x110
+   ? lock_release+0x43c/0x830
+   ? dept_ecxt_exit+0x31a/0x590
+   ? _raw_spin_unlock+0x3b/0x50
+   ? alloc_fd+0x2de/0x6e0
+   do_sys_openat2+0x16d/0x4e0
+   ? __ia32_sys_get_robust_list+0x3b0/0x3b0
+   ? build_open_flags+0x6f0/0x6f0
+   ? dept_enirq_transition+0x519/0x9c0
+   ? dept_enirq_transition+0x519/0x9c0
+   ? lock_is_held_type+0x4e/0x160
+   ? lock_is_held_type+0x4e/0x160
+   __x64_sys_creat+0xcd/0x120
+   ? __x64_compat_sys_openat+0x1f0/0x1f0
+   do_syscall_64+0x41/0xc0
+   entry_SYSCALL_64_after_hwframe+0x63/0xcd
+   RIP: 0033:0x7f8b9e4e4469
+   Code: 00 f3 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48
+   89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48>
+   3d 01 f0 ff ff 73 01 c3 48 8b 0d ff 49 2b 00 f7 d8 64 89 01 48
+   RSP: 002b:00007f8b9eea4ef8 EFLAGS: 00000202 ORIG_RAX: 0000000000000055
+   RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f8b9e4e4469
+   RDX: 0000000000737562 RSI: 0000000000000000 RDI: 0000000020000000
+   RBP: 00007f8b9eea4f20 R08: 0000000000000000 R09: 0000000000000000
+   R10: 0000000000000000 R11: 0000000000000202 R12: 00007fffa75511ee
+   R13: 00007fffa75511ef R14: 00007f8b9ee85000 R15: 0000000000000003
+
+
+Let's take a look at the summary, which is the most important part.
+
+   ---------------------------------------------------
+   summary
+   ---------------------------------------------------
+   *** DEADLOCK ***
+
+   context A
+       [S] lock(&ni->ni_lock:0)
+       [W] folio_wait_bit_common(PG_locked_map:0)
+       [E] unlock(&ni->ni_lock:0)
+
+   context B
+       [S] (unknown)(PG_locked_map:0)
+       [W] lock(&ni->ni_lock:0)
+       [E] folio_unlock(PG_locked_map:0)
+
+   [S]: start of the event context
+   [W]: the wait blocked
+   [E]: the event not reachable
+
+The summary shows the following scenario:
+
+   context A           context B           context ?(unknown)
+
+                                           [S] folio_lock(&f1)
+   [S] lock(&ni->ni_lock:0)
+   [W] folio_wait_bit_common(PG_locked_map:0)
+
+                       [W] lock(&ni->ni_lock:0)
+                       [E] folio_unlock(&f1)
+
+   [E] unlock(&ni->ni_lock:0)
+
+Adding supplementary comments to describe DEPT's view in detail:
+
+   context A           context B           context ?(unknown)
+
+                                           [S] folio_lock(&f1)
+                                           /* start to take into account context
+                                              B heading for folio_unlock(&f1) */
+                                           /* 1 */
+   [S] lock(&ni->ni_lock:0)
+   /* start to take into account this context heading for
+      unlock(&ni->ni_lock:0) */
+   /* 2 */
+
+   [W] folio_wait_bit_common(PG_locked_map:0) (= folio_lock(&f1))
+   /* might wait for folio_unlock(&f1) */
+
+                       [W] lock(&ni->ni_lock:0)
+                       /* might wait for unlock(&ni->ni_lock:0) */
+
+                       [E] folio_unlock(&f1)
+                       /* event that's been valid since 1 */
+
+   [E] unlock(&ni->ni_lock:0)
+   /* event that's been valid since 2 */
+
+Let's build up the dependency graph from this report. First,
+context A:
+
+   context A
+
+   [S] lock(&ni->ni_lock:0)
+   /* start to take into account this context heading for
+      unlock(&ni->ni_lock:0) */
+   /* 2 */
+
+   [W] folio_wait_bit_common(PG_locked_map:0) (= folio_lock(&f1))
+   /* might wait for folio_unlock(&f1) */
+
+   [E] unlock(&ni->ni_lock:0)
+   /* event that's been valid since 2 */
+
+There is one interesting event, unlock(&ni->ni_lock:0). There is a
+wait, folio_lock(&f1), between 2 and the event. This means
+unlock(&ni->ni_lock:0) is not reachable if folio_unlock(&f1) does not
+wake up the wait. Therefore, we can say unlock(&ni->ni_lock:0) depends
+on folio_unlock(&f1), say,
+'unlock(&ni->ni_lock:0) -> folio_unlock(&f1)'. After adding the
+dependency, the graph will look like:
+
+   unlock(&ni->ni_lock:0) -> folio_unlock(&f1)
+
+   where 'A -> B' means that event A depends on event B.
+
+Second, context B:
+
+   context B
+
+   [W] lock(&ni->ni_lock:0)
+   /* might wait for unlock(&ni->ni_lock:0) */
+
+   [E] folio_unlock(&f1)
+   /* event that's been valid since 1 */
+
+There is also one interesting event, folio_unlock(&f1). There is a
+wait, lock(&ni->ni_lock:0), between 1 and the event - recall that 1
+comes at the very start, before the wait, in the timeline. This means
+folio_unlock(&f1) is not reachable if unlock(&ni->ni_lock:0) does not
+wake up the wait. Therefore, we can say folio_unlock(&f1) depends on
+unlock(&ni->ni_lock:0), say,
+'folio_unlock(&f1) -> unlock(&ni->ni_lock:0)'. After adding the
+dependency, the graph will look like:
+
+    -> unlock(&ni->ni_lock:0) -> folio_unlock(&f1) -
+   /                                                \
+   \                                                /
+    ------------------------------------------------
+
+   where 'A -> B' means that event A depends on event B.
+
+A new loop has been created.
+So DEPT can report it as a deadlock!
+
+CONCLUSION
+
+DEPT works!

From patchwork Wed May 8 09:47:25 2024
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13658431
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v14 28/28] dept: Add documentation for Dept's APIs
Date: Wed, 8 May 2024 18:47:25 +0900
Message-Id: <20240508094726.35754-29-byungchul@sk.com>
In-Reply-To: <20240508094726.35754-1-byungchul@sk.com>
References: <20240508094726.35754-1-byungchul@sk.com>

This document describes the APIs of Dept.
Signed-off-by: Byungchul Park
---
 Documentation/dependency/dept_api.txt | 117 ++++++++++++++++++++++++++
 1 file changed, 117 insertions(+)
 create mode 100644 Documentation/dependency/dept_api.txt

diff --git a/Documentation/dependency/dept_api.txt b/Documentation/dependency/dept_api.txt
new file mode 100644
index 000000000000..8e0d5a118a46
--- /dev/null
+++ b/Documentation/dependency/dept_api.txt
@@ -0,0 +1,117 @@
+DEPT(DEPendency Tracker) APIs
+=============================
+
+Started by Byungchul Park
+
+SDT(Single-event Dependency Tracker) APIs
+-----------------------------------------
+Use these APIs to annotate waits and events. They have already been
+applied to the existing synchronization primitives, e.g. waitqueue,
+swait, wait_for_completion(), dma fence and so on. The basic APIs of
+SDT are:
+
+  /*
+   * After defining 'struct dept_map map', initialize the instance.
+   */
+  sdt_map_init(map);
+
+  /*
+   * Place just before the interesting wait.
+   */
+  sdt_wait(map);
+
+  /*
+   * Place just before the interesting event.
+   */
+  sdt_event(map);
+
+The advanced APIs of SDT are:
+
+  /*
+   * After defining 'struct dept_map map', initialize the instance
+   * using an external key.
+   */
+  sdt_map_init_key(map, key);
+
+  /*
+   * Place just before the interesting timeout wait.
+   */
+  sdt_wait_timeout(map, time);
+
+  /*
+   * Use sdt_might_sleep_start() and sdt_might_sleep_end() in pair.
+   * Place at the start of the interesting section that might enter
+   * schedule() or its family that needs to be woken up by
+   * try_to_wake_up().
+   */
+  sdt_might_sleep_start(map);
+
+  /*
+   * Use sdt_might_sleep_start_timeout() and sdt_might_sleep_end() in
+   * pair. Place at the start of the interesting section that might
+   * enter schedule_timeout() or its family that needs to be woken up
+   * by try_to_wake_up().
+   */
+  sdt_might_sleep_start_timeout(map, time);
+
+  /*
+   * Use sdt_might_sleep_start() and sdt_might_sleep_end() in pair.
+   * Place at the end of the interesting section that might enter
+   * schedule(), schedule_timeout() or its family that needs to be
+   * woken up by try_to_wake_up().
+   */
+  sdt_might_sleep_end();
+
+  /*
+   * Use sdt_ecxt_enter() and sdt_ecxt_exit() in pair. Place at the
+   * start of the interesting section where the interesting event
+   * might be triggered.
+   */
+  sdt_ecxt_enter(map);
+
+  /*
+   * Use sdt_ecxt_enter() and sdt_ecxt_exit() in pair. Place at the
+   * end of the interesting section where the interesting event might
+   * be triggered.
+   */
+  sdt_ecxt_exit(map);
+
+
+LDT(Lock Dependency Tracker) APIs
+---------------------------------
+Do not use these APIs directly. They are wrappers for typical locks
+and have already been applied internally to major locks, e.g.
+spinlock, mutex, rwlock and so on. The APIs of LDT are:
+
+  ldt_init(map, key, sub, name);
+  ldt_lock(map, sub_local, try, nest, ip);
+  ldt_rlock(map, sub_local, try, nest, ip, queued);
+  ldt_wlock(map, sub_local, try, nest, ip);
+  ldt_unlock(map, ip);
+  ldt_downgrade(map, ip);
+  ldt_set_class(map, name, key, sub_local, ip);
+
+
+Raw APIs
+--------
+Do not use these APIs directly. The raw APIs of DEPT are:
+
+  dept_free_range(start, size);
+  dept_map_init(map, key, sub, name);
+  dept_map_reinit(map, key, sub, name);
+  dept_ext_wgen_init(ext_wgen);
+  dept_map_copy(map_to, map_from);
+  dept_wait(map, wait_flags, ip, wait_func, sub_local, time);
+  dept_stage_wait(map, key, ip, wait_func, time);
+  dept_request_event_wait_commit();
+  dept_clean_stage();
+  dept_stage_event(task, ip);
+  dept_ecxt_enter(map, evt_flags, ip, ecxt_func, evt_func, sub_local);
+  dept_ecxt_holding(map, evt_flags);
+  dept_request_event(map, ext_wgen);
+  dept_event(map, evt_flags, ip, evt_func, ext_wgen);
+  dept_ecxt_exit(map, evt_flags, ip);
+  dept_ecxt_enter_nokeep(map);
+  dept_key_init(key);
+  dept_key_destroy(key);
+  dept_map_ecxt_modify(map, cur_evt_flags, key, evt_flags, ip, ecxt_func, evt_func, sub_local);
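The annotation flow can be illustrated with a userspace toy model. The real APIs above are kernel C, so this Python sketch only mirrors the idea that a wait observed inside an open event context becomes a dependency edge when the event finally triggers; the class, the `all_maps` parameter and all names here are mine, not part of the kernel API:

```python
# Userspace toy mirroring the SDT annotation flow (sdt_map_init /
# sdt_ecxt_enter / sdt_wait / sdt_event). Illustrative only: the real
# annotations are kernel C macros and do not take an 'all_maps' list.

class DeptMap:
    def __init__(self, name):    # plays the role of sdt_map_init()
        self.name = name
        self.waits_in_ecxt = []
        self.in_ecxt = False

def sdt_ecxt_enter(map_):
    # start of the section where map_'s event might be triggered
    map_.in_ecxt = True
    map_.waits_in_ecxt = []

def sdt_wait(map_, all_maps):
    # place just before the interesting wait: every currently open
    # event context now depends on map_'s event
    for m in all_maps:
        if m is not map_ and m.in_ecxt:
            m.waits_in_ecxt.append(map_.name)

def sdt_event(map_):
    # place just before the interesting event: emit the dependency
    # edges 'this event -> each event waited for in its context'
    deps = [(map_.name, w) for w in map_.waits_in_ecxt]
    map_.in_ecxt = False
    return deps

a, b = DeptMap("A"), DeptMap("B")
maps = [a, b]
sdt_ecxt_enter(a)              # like taking mutex A
sdt_wait(b, maps)              # like folio_lock B while A is held
deps = sdt_event(a)            # like releasing mutex A
assert deps == [("A", "B")]    # event A depends on event B
```

Feeding such edges into a cycle check, as in the dept.txt walkthrough, is what turns the annotations into deadlock reports.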