From patchwork Mon Aug 21 03:46:13 2023
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13359162
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org,
 damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org,
 adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com,
 peterz@infradead.org, will@kernel.org, tglx@linutronix.de,
 rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org,
 daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com,
 tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com,
 amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com,
 linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org,
 minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com,
 sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com,
 penberg@kernel.org, rientjes@google.com, vbabka@suse.cz,
 ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com,
 linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz,
 jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org,
 djwong@kernel.org, dri-devel@lists.freedesktop.org,
 rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com,
 hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com,
 gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com,
 boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com,
 her0gyugyu@gmail.com
Subject: [RESEND PATCH v10 01/25] llist: Move llist_{head,node} definition to types.h
Date: Mon, 21 Aug 2023 12:46:13 +0900
Message-Id: <20230821034637.34630-2-byungchul@sk.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230821034637.34630-1-byungchul@sk.com>
References: <20230821034637.34630-1-byungchul@sk.com>

llist_head and llist_node can be used by very low-level primitives. For
example, Dept, a dependency tracker, uses llist in its header. To avoid
a header dependency, move those definitions to types.h.

Signed-off-by: Byungchul Park
---
 include/linux/llist.h | 8 --------
 include/linux/types.h | 8 ++++++++
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/linux/llist.h b/include/linux/llist.h
index 85bda2d02d65..99cc3c30f79c 100644
--- a/include/linux/llist.h
+++ b/include/linux/llist.h
@@ -53,14 +53,6 @@
 #include
 #include
 
-struct llist_head {
-	struct llist_node *first;
-};
-
-struct llist_node {
-	struct llist_node *next;
-};
-
 #define LLIST_HEAD_INIT(name)	{ NULL }
 #define LLIST_HEAD(name)	struct llist_head name = LLIST_HEAD_INIT(name)
 
diff --git a/include/linux/types.h b/include/linux/types.h
index 688fb943556a..0ddb0d722b3d 100644
--- a/include/linux/types.h
+++ b/include/linux/types.h
@@ -193,6 +193,14 @@ struct hlist_node {
 	struct hlist_node *next, **pprev;
 };
 
+struct llist_head {
+	struct llist_node *first;
+};
+
+struct llist_node {
+	struct llist_node *next;
+};
+
 struct ustat {
 	__kernel_daddr_t f_tfree;
 #ifdef CONFIG_ARCH_32BIT_USTAT_F_TINODE
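[Editorial aside, not part of the series: for readers unfamiliar with llist,
a minimal userspace model of why these two tiny structs are useful on their
own -- a lock-free push needs nothing beyond a head pointer and a
compare-and-swap. The kernel keeps 'first' a plain pointer and uses
cmpxchg(); this sketch makes it _Atomic instead, so everything here is an
assumption-laden analogue, not kernel code.]

	/* build: cc -std=c11 llist_sketch.c */
	#include <stdatomic.h>
	#include <stdio.h>

	struct llist_node { struct llist_node *next; };

	/* In the kernel 'first' is a plain pointer; _Atomic is the
	 * userspace stand-in for cmpxchg() on it. */
	struct llist_head { _Atomic(struct llist_node *) first; };

	/* modeled on llist_add(): retry until we swing 'first' to us */
	static void llist_push(struct llist_head *h, struct llist_node *n)
	{
		struct llist_node *first = atomic_load(&h->first);

		do {
			n->next = first;
		} while (!atomic_compare_exchange_weak(&h->first, &first, n));
	}

	int main(void)
	{
		struct llist_head h = { .first = NULL };
		struct llist_node a, b;

		llist_push(&h, &a);
		llist_push(&h, &b);
		printf("top is b: %d\n", atomic_load(&h.first) == &b);
		return 0;
	}
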
From patchwork Mon Aug 21 03:46:14 2023
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13359136
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org,
 damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org,
 adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com,
 peterz@infradead.org, will@kernel.org, tglx@linutronix.de,
 rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org,
 daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com,
 tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com,
 amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com,
 linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org,
 minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com,
 sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com,
 penberg@kernel.org, rientjes@google.com, vbabka@suse.cz,
 ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com,
 linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz,
 jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org,
 djwong@kernel.org, dri-devel@lists.freedesktop.org,
 rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com,
 hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com,
 gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com,
 boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com,
 her0gyugyu@gmail.com
Subject: [RESEND PATCH v10 02/25] dept: Implement Dept(Dependency Tracker)
Date: Mon, 21 Aug 2023 12:46:14 +0900
Message-Id: <20230821034637.34630-3-byungchul@sk.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230821034637.34630-1-byungchul@sk.com>
References: <20230821034637.34630-1-byungchul@sk.com>

CURRENT STATUS
--------------

Lockdep tracks the acquisition order of locks in order to detect
deadlock, and IRQ enable/disable state as well, to take accidental
acquisitions into account. Lockdep has to be turned off once it detects
and reports a deadlock, since its data structures and algorithm are not
reusable after detection because of the complex design.

PROBLEM
-------

*Waits* and their *events* that are never reached are what eventually
cause deadlock. However, Lockdep is only interested in lock acquisition
order, forcing us to emulate lock acquisition even for waits and events
that have nothing to do with real locks. Even worse, no one likes
Lockdep's false positive reports, because each one prevents further
reports that might be more valuable. That's why all kernel developers
are sensitive to Lockdep's false positives.

Besides that, by tracking acquisition order, Lockdep cannot correctly
deal with read locks and cross-events, e.g.
wait_for_completion()/complete(), for deadlock detection. Lockdep is no
longer a good tool for that purpose.

SOLUTION
--------

Again, *waits* and their *events* that are never reached are what
eventually cause deadlock. The new solution, Dept(DEPendency Tracker),
focuses on waits and events themselves. Dept tracks waits and events
and reports if any event appears to be unreachable.

Dept does:

   . Works with read locks in the right way.
   . Works with any wait and event, i.e. cross-events.
   . Continues to work even after reporting multiple times.
   . Provides simple and intuitive APIs.
   . Does exactly what a dependency checker should do.

Q & A
-----

Q. Is this the first try ever to address the problem?

A. No. The cross-release feature (b09be676e0ff2 locking/lockdep:
   Implement the 'crossrelease' feature) addressed it 2 years ago as a
   Lockdep extension. It was merged but reverted shortly after, because
   cross-release started to report valuable hidden problems but gave
   false positive reports as well. And no one likes Lockdep's false
   positive reports, since each one makes Lockdep stop, preventing it
   from reporting further real problems.

Q. Why was Dept not developed as an extension of Lockdep?

A. Lockdep definitely includes all the efforts great developers have
   made for a long time, so it is quite stable. But I had to design and
   implement Dept from scratch because of the following:

   1) Lockdep was designed to track lock acquisition order. The APIs
      and implementation do not fit the wait-event model.

   2) Lockdep is turned off on detection, including false positives,
      which is terrible and prevents developing any extension for
      stronger detection.

Q. Do you intend to totally replace Lockdep?

A. No. Lockdep also checks whether lock usage is correct. Of course,
   the dependency check routine should be replaced, but the other
   functions should still be there.

Q. Do you mean the dependency check routine should be replaced right
   away?

A. No. I admit Lockdep is stable enough thanks to the great efforts
   kernel developers have made. Lockdep and Dept should both be in the
   kernel until Dept is considered stable.

Q. Stronger detection capability would give more false positive
   reports, which was a big problem when cross-release was introduced.
   Is that ok with Dept?

A. It's ok. Dept allows multiple reports thanks to its simple and quite
   generalized design. Of course, false positive reports should still
   be fixed, but they are no longer as critical a problem as they used
   to be.
Signed-off-by: Byungchul Park
---
 include/linux/dept.h            |  577 ++++++
 include/linux/hardirq.h         |    3 +
 include/linux/sched.h           |    3 +
 init/init_task.c                |    2 +
 init/main.c                     |    2 +
 kernel/Makefile                 |    1 +
 kernel/dependency/Makefile      |    3 +
 kernel/dependency/dept.c        | 3009 +++++++++++++++++++++++++++++++
 kernel/dependency/dept_hash.h   |   10 +
 kernel/dependency/dept_object.h |   13 +
 kernel/exit.c                   |    1 +
 kernel/fork.c                   |    2 +
 kernel/module/main.c            |    4 +
 kernel/sched/core.c             |    9 +
 lib/Kconfig.debug               |   27 +
 lib/locking-selftest.c          |    2 +
 16 files changed, 3668 insertions(+)
 create mode 100644 include/linux/dept.h
 create mode 100644 kernel/dependency/Makefile
 create mode 100644 kernel/dependency/dept.c
 create mode 100644 kernel/dependency/dept_hash.h
 create mode 100644 kernel/dependency/dept_object.h

diff --git a/include/linux/dept.h b/include/linux/dept.h
new file mode 100644
index 000000000000..b6d45b4b1fd6
--- /dev/null
+++ b/include/linux/dept.h
@@ -0,0 +1,577 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * DEPT(DEPendency Tracker) - runtime dependency tracker
+ *
+ * Started by Byungchul Park :
+ *
+ * Copyright (c) 2020 LG Electronics, Inc., Byungchul Park
+ */
+
+#ifndef __LINUX_DEPT_H
+#define __LINUX_DEPT_H
+
+#ifdef CONFIG_DEPT
+
+#include <linux/types.h>
+
+struct task_struct;
+
+#define DEPT_MAX_STACK_ENTRY		16
+#define DEPT_MAX_WAIT_HIST		64
+#define DEPT_MAX_ECXT_HELD		48
+
+#define DEPT_MAX_SUBCLASSES		16
+#define DEPT_MAX_SUBCLASSES_EVT		2
+#define DEPT_MAX_SUBCLASSES_USR		(DEPT_MAX_SUBCLASSES / DEPT_MAX_SUBCLASSES_EVT)
+#define DEPT_MAX_SUBCLASSES_CACHE	2
+
+#define DEPT_SIRQ			0
+#define DEPT_HIRQ			1
+#define DEPT_IRQS_NR			2
+#define DEPT_SIRQF			(1UL << DEPT_SIRQ)
+#define DEPT_HIRQF			(1UL << DEPT_HIRQ)
+
+struct dept_ecxt;
+struct dept_iecxt {
+	struct dept_ecxt		*ecxt;
+	int				enirq;
+	/*
+	 * for preventing a new ecxt from being added
+	 */
+	bool				staled;
+};
+
+struct dept_wait;
+struct dept_iwait {
+	struct dept_wait		*wait;
+	int				irq;
+	/*
+	 * for preventing a new wait from being added
+	 */
+	bool				staled;
+	bool				touched;
+};
+
+struct dept_class {
+	union {
+		struct llist_node	pool_node;
+		struct {
+			/*
+			 * reference counter for object management
+			 */
+			atomic_t	ref;
+
+			/*
+			 * unique information about the class
+			 */
+			const char	*name;
+			unsigned long	key;
+			int		sub_id;
+
+			/*
+			 * for BFS
+			 */
+			unsigned int	bfs_gen;
+			int		bfs_dist;
+			struct dept_class *bfs_parent;
+
+			/*
+			 * for hashing this object
+			 */
+			struct hlist_node hash_node;
+
+			/*
+			 * for linking all classes
+			 */
+			struct list_head all_node;
+
+			/*
+			 * for associating its dependencies
+			 */
+			struct list_head dep_head;
+			struct list_head dep_rev_head;
+
+			/*
+			 * for tracking IRQ dependencies
+			 */
+			struct dept_iecxt iecxt[DEPT_IRQS_NR];
+			struct dept_iwait iwait[DEPT_IRQS_NR];
+
+			/*
+			 * classified by a map embedded in task_struct,
+			 * not an explicit map
+			 */
+			bool		sched_map;
+		};
+	};
+};
+
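[Editorial aside, not part of the patch: how the DEPT_SIRQF/DEPT_HIRQF bits
defined above are meant to be read. Per-IRQ-context state such as the
enirqf and irqf fields later in this header carry one bit per IRQ context;
a minimal standalone model, with the defines copied from above:]

	#include <stdio.h>

	#define DEPT_SIRQ	0
	#define DEPT_HIRQ	1
	#define DEPT_IRQS_NR	2
	#define DEPT_SIRQF	(1UL << DEPT_SIRQ)
	#define DEPT_HIRQF	(1UL << DEPT_HIRQ)

	int main(void)
	{
		/* a flag word saying "both softirq and hardirq involved" */
		unsigned long irqf = DEPT_SIRQF | DEPT_HIRQF;
		int irq;

		for (irq = 0; irq < DEPT_IRQS_NR; irq++)
			if (irqf & (1UL << irq))
				printf("%s tracked\n",
				       irq == DEPT_SIRQ ? "softirq" : "hardirq");
		return 0;
	}
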
+struct dept_key {
+	union {
+		/*
+		 * Each byte-wise address will be used as its key.
+		 */
+		char			base[DEPT_MAX_SUBCLASSES];
+
+		/*
+		 * for caching the main class pointer
+		 */
+		struct dept_class	*classes[DEPT_MAX_SUBCLASSES_CACHE];
+	};
+};
+
+struct dept_map {
+	const char			*name;
+	struct dept_key			*keys;
+
+	/*
+	 * subclass that can be set by the user
+	 */
+	int				sub_u;
+
+	/*
+	 * A local copy for fast access to the associated classes.
+	 * Also used as the dept_key for static maps.
+	 */
+	struct dept_key			map_key;
+
+	/*
+	 * wait timestamp associated with this map
+	 */
+	unsigned int			wgen;
+
+	/*
+	 * whether this map should be checked or not
+	 */
+	bool				nocheck;
+};
+
+#define DEPT_MAP_INITIALIZER(n, k)					\
+{									\
+	.name = #n,							\
+	.keys = (struct dept_key *)(k),					\
+	.sub_u = 0,							\
+	.map_key = { .classes = { NULL, } },				\
+	.wgen = 0U,							\
+	.nocheck = false,						\
+}
+
+struct dept_stack {
+	union {
+		struct llist_node	pool_node;
+		struct {
+			/*
+			 * reference counter for object management
+			 */
+			atomic_t	ref;
+
+			/*
+			 * backtrace entries
+			 */
+			unsigned long	raw[DEPT_MAX_STACK_ENTRY];
+			int		nr;
+		};
+	};
+};
+
+struct dept_ecxt {
+	union {
+		struct llist_node	pool_node;
+		struct {
+			/*
+			 * reference counter for object management
+			 */
+			atomic_t	ref;
+
+			/*
+			 * function that entered this ecxt
+			 */
+			const char	*ecxt_fn;
+
+			/*
+			 * event function
+			 */
+			const char	*event_fn;
+
+			/*
+			 * associated class
+			 */
+			struct dept_class *class;
+
+			/*
+			 * flag indicating which IRQ has been
+			 * enabled within the event context
+			 */
+			unsigned long	enirqf;
+
+			/*
+			 * where the IRQ enable happened
+			 */
+			unsigned long	enirq_ip[DEPT_IRQS_NR];
+			struct dept_stack *enirq_stack[DEPT_IRQS_NR];
+
+			/*
+			 * where the event context started
+			 */
+			unsigned long	ecxt_ip;
+			struct dept_stack *ecxt_stack;
+
+			/*
+			 * where the event triggered
+			 */
+			unsigned long	event_ip;
+			struct dept_stack *event_stack;
+		};
+	};
+};
+
+struct dept_wait {
+	union {
+		struct llist_node	pool_node;
+		struct {
+			/*
+			 * reference counter for object management
+			 */
+			atomic_t	ref;
+
+			/*
+			 * function causing this wait
+			 */
+			const char	*wait_fn;
+
+			/*
+			 * the associated class
+			 */
+			struct dept_class *class;
+
+			/*
+			 * which IRQ the wait was placed in
+			 */
+			unsigned long	irqf;
+
+			/*
+			 * where the IRQ wait happened
+			 */
+			unsigned long	irq_ip[DEPT_IRQS_NR];
+			struct dept_stack *irq_stack[DEPT_IRQS_NR];
+
+			/*
+			 * where the wait happened
+			 */
+			unsigned long	wait_ip;
+			struct dept_stack *wait_stack;
+
+			/*
+			 * whether this wait is for a commit in the scheduler
+			 */
+			bool		sched_sleep;
+		};
+	};
+};
+
+struct dept_dep {
+	union {
+		struct llist_node	pool_node;
+		struct {
+			/*
+			 * reference counter for object management
+			 */
+			atomic_t	ref;
+
+			/*
+			 * key data of dependency
+			 */
+			struct dept_ecxt *ecxt;
+			struct dept_wait *wait;
+
+			/*
+			 * This object can be referenced without dept_lock
+			 * held but with IRQs disabled, e.g. for hash
+			 * lookup. So deferred deletion is needed.
+			 */
+			struct rcu_head	rh;
+
+			/*
+			 * for BFS
+			 */
+			struct list_head bfs_node;
+
+			/*
+			 * for hashing this object
+			 */
+			struct hlist_node hash_node;
+
+			/*
+			 * for linking to a class object
+			 */
+			struct list_head dep_node;
+			struct list_head dep_rev_node;
+		};
+	};
+};
+
+struct dept_hash {
+	/*
+	 * hash table
+	 */
+	struct hlist_head		*table;
+
+	/*
+	 * size of the table i.e.
2^bits + */ + int bits; +}; + +struct dept_pool { + const char *name; + + /* + * object size + */ + size_t obj_sz; + + /* + * the number of the static array + */ + atomic_t obj_nr; + + /* + * offset of ->pool_node + */ + size_t node_off; + + /* + * pointer to the pool + */ + void *spool; + struct llist_head boot_pool; + struct llist_head __percpu *lpool; +}; + +struct dept_ecxt_held { + /* + * associated event context + */ + struct dept_ecxt *ecxt; + + /* + * unique key for this dept_ecxt_held + */ + struct dept_map *map; + + /* + * class of the ecxt of this dept_ecxt_held + */ + struct dept_class *class; + + /* + * the wgen when the event context started + */ + unsigned int wgen; + + /* + * subclass that only works in the local context + */ + int sub_l; +}; + +struct dept_wait_hist { + /* + * associated wait + */ + struct dept_wait *wait; + + /* + * unique id of all waits system-wise until wrapped + */ + unsigned int wgen; + + /* + * local context id to identify IRQ context + */ + unsigned int ctxt_id; +}; + +struct dept_task { + /* + * all event contexts that have entered and before exiting + */ + struct dept_ecxt_held ecxt_held[DEPT_MAX_ECXT_HELD]; + int ecxt_held_pos; + + /* + * ring buffer holding all waits that have happened + */ + struct dept_wait_hist wait_hist[DEPT_MAX_WAIT_HIST]; + int wait_hist_pos; + + /* + * sequential id to identify each IRQ context + */ + unsigned int irq_id[DEPT_IRQS_NR]; + + /* + * for tracking IRQ-enabled points with cross-event + */ + unsigned int wgen_enirq[DEPT_IRQS_NR]; + + /* + * for keeping up-to-date IRQ-enabled points + */ + unsigned long enirq_ip[DEPT_IRQS_NR]; + + /* + * current effective IRQ-enabled flag + */ + unsigned long eff_enirqf; + + /* + * for reserving a current stack instance at each operation + */ + struct dept_stack *stack; + + /* + * for preventing recursive call into DEPT engine + */ + int recursive; + + /* + * for staging data to commit a wait + */ + struct dept_map stage_m; + bool stage_sched_map; + const char *stage_w_fn; + unsigned long stage_ip; + + /* + * the number of missing ecxts + */ + int missing_ecxt; + + /* + * for tracking IRQ-enable state + */ + bool hardirqs_enabled; + bool softirqs_enabled; + + /* + * whether the current is on do_exit() + */ + bool task_exit; + + /* + * whether the current is running __schedule() + */ + bool in_sched; +}; + +#define DEPT_TASK_INITIALIZER(t) \ +{ \ + .wait_hist = { { .wait = NULL, } }, \ + .ecxt_held_pos = 0, \ + .wait_hist_pos = 0, \ + .irq_id = { 0U }, \ + .wgen_enirq = { 0U }, \ + .enirq_ip = { 0UL }, \ + .eff_enirqf = 0UL, \ + .stack = NULL, \ + .recursive = 0, \ + .stage_m = DEPT_MAP_INITIALIZER((t)->stage_m, NULL), \ + .stage_sched_map = false, \ + .stage_w_fn = NULL, \ + .stage_ip = 0UL, \ + .missing_ecxt = 0, \ + .hardirqs_enabled = false, \ + .softirqs_enabled = false, \ + .task_exit = false, \ + .in_sched = false, \ +} + +extern void dept_on(void); +extern void dept_off(void); +extern void dept_init(void); +extern void dept_task_init(struct task_struct *t); +extern void dept_task_exit(struct task_struct *t); +extern void dept_free_range(void *start, unsigned int sz); +extern void dept_map_init(struct dept_map *m, struct dept_key *k, int sub_u, const char *n); +extern void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u, const char *n); +extern void dept_map_copy(struct dept_map *to, struct dept_map *from); + +extern void dept_wait(struct dept_map *m, unsigned long w_f, unsigned long ip, const char *w_fn, int sub_l); +extern void dept_stage_wait(struct 
dept_map *m, struct dept_key *k, unsigned long ip, const char *w_fn); +extern void dept_request_event_wait_commit(void); +extern void dept_clean_stage(void); +extern void dept_stage_event(struct task_struct *t, unsigned long ip); +extern void dept_ecxt_enter(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *c_fn, const char *e_fn, int sub_l); +extern bool dept_ecxt_holding(struct dept_map *m, unsigned long e_f); +extern void dept_request_event(struct dept_map *m); +extern void dept_event(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *e_fn); +extern void dept_ecxt_exit(struct dept_map *m, unsigned long e_f, unsigned long ip); +extern void dept_sched_enter(void); +extern void dept_sched_exit(void); + +static inline void dept_ecxt_enter_nokeep(struct dept_map *m) +{ + dept_ecxt_enter(m, 0UL, 0UL, NULL, NULL, 0); +} + +/* + * for users who want to manage external keys + */ +extern void dept_key_init(struct dept_key *k); +extern void dept_key_destroy(struct dept_key *k); +extern void dept_map_ecxt_modify(struct dept_map *m, unsigned long e_f, struct dept_key *new_k, unsigned long new_e_f, unsigned long new_ip, const char *new_c_fn, const char *new_e_fn, int new_sub_l); + +extern void dept_softirq_enter(void); +extern void dept_hardirq_enter(void); +extern void dept_softirqs_on_ip(unsigned long ip); +extern void dept_hardirqs_on(void); +extern void dept_hardirqs_on_ip(unsigned long ip); +extern void dept_softirqs_off_ip(unsigned long ip); +extern void dept_hardirqs_off(void); +extern void dept_hardirqs_off_ip(unsigned long ip); +#else /* !CONFIG_DEPT */ +struct dept_key { }; +struct dept_map { }; +struct dept_task { }; + +#define DEPT_MAP_INITIALIZER(n, k) { } +#define DEPT_TASK_INITIALIZER(t) { } + +#define dept_on() do { } while (0) +#define dept_off() do { } while (0) +#define dept_init() do { } while (0) +#define dept_task_init(t) do { } while (0) +#define dept_task_exit(t) do { } while (0) +#define dept_free_range(s, sz) do { } while (0) +#define dept_map_init(m, k, su, n) do { (void)(n); (void)(k); } while (0) +#define dept_map_reinit(m, k, su, n) do { (void)(n); (void)(k); } while (0) +#define dept_map_copy(t, f) do { } while (0) + +#define dept_wait(m, w_f, ip, w_fn, sl) do { (void)(w_fn); } while (0) +#define dept_stage_wait(m, k, ip, w_fn) do { (void)(k); (void)(w_fn); } while (0) +#define dept_request_event_wait_commit() do { } while (0) +#define dept_clean_stage() do { } while (0) +#define dept_stage_event(t, ip) do { } while (0) +#define dept_ecxt_enter(m, e_f, ip, c_fn, e_fn, sl) do { (void)(c_fn); (void)(e_fn); } while (0) +#define dept_ecxt_holding(m, e_f) false +#define dept_request_event(m) do { } while (0) +#define dept_event(m, e_f, ip, e_fn) do { (void)(e_fn); } while (0) +#define dept_ecxt_exit(m, e_f, ip) do { } while (0) +#define dept_sched_enter() do { } while (0) +#define dept_sched_exit() do { } while (0) +#define dept_ecxt_enter_nokeep(m) do { } while (0) +#define dept_key_init(k) do { (void)(k); } while (0) +#define dept_key_destroy(k) do { (void)(k); } while (0) +#define dept_map_ecxt_modify(m, e_f, n_k, n_e_f, n_ip, n_c_fn, n_e_fn, n_sl) do { (void)(n_k); (void)(n_c_fn); (void)(n_e_fn); } while (0) + +#define dept_softirq_enter() do { } while (0) +#define dept_hardirq_enter() do { } while (0) +#define dept_softirqs_on_ip(ip) do { } while (0) +#define dept_hardirqs_on() do { } while (0) +#define dept_hardirqs_on_ip(ip) do { } while (0) +#define dept_softirqs_off_ip(ip) do { } while (0) +#define dept_hardirqs_off() do { } 
while (0)
+#define dept_hardirqs_off_ip(ip)		do { } while (0)
+#endif
+#endif /* __LINUX_DEPT_H */
diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
index d57cab4d4c06..bb279dbbe748 100644
--- a/include/linux/hardirq.h
+++ b/include/linux/hardirq.h
@@ -5,6 +5,7 @@
 #include
 #include
 #include
+#include <linux/dept.h>
 #include
 #include
 #include
@@ -106,6 +107,7 @@ void irq_exit_rcu(void);
  */
 #define __nmi_enter()						\
 	do {							\
+		dept_off();					\
 		lockdep_off();					\
 		arch_nmi_enter();				\
 		BUG_ON(in_nmi() == NMI_MASK);			\
@@ -128,6 +130,7 @@ void irq_exit_rcu(void);
 		__preempt_count_sub(NMI_OFFSET + HARDIRQ_OFFSET);	\
 		arch_nmi_exit();				\
 		lockdep_on();					\
+		dept_on();					\
 	} while (0)
 
 #define nmi_exit()						\
diff --git a/include/linux/sched.h b/include/linux/sched.h
index eed5d65b8d1f..bb8f8e00b9ed 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -38,6 +38,7 @@
 #include
 #include
 #include
+#include <linux/dept.h>
 
 /* task_struct member predeclarations (sorted alphabetically): */
 struct audit_context;
@@ -1170,6 +1171,8 @@ struct task_struct {
 	struct held_lock		held_locks[MAX_LOCK_DEPTH];
 #endif
 
+	struct dept_task		dept_task;
+
 #if defined(CONFIG_UBSAN) && !defined(CONFIG_UBSAN_TRAP)
 	unsigned int			in_ubsan;
 #endif
diff --git a/init/init_task.c b/init/init_task.c
index ff6c4b9bfe6b..eb36ad68c912 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include <linux/dept.h>
 
 #include
@@ -194,6 +195,7 @@ struct task_struct init_task
 	.curr_chain_key = INITIAL_CHAIN_KEY,
 	.lockdep_recursion = 0,
 #endif
+	.dept_task = DEPT_TASK_INITIALIZER(init_task),
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	.ret_stack	= NULL,
 	.tracing_graph_pause	= ATOMIC_INIT(0),
diff --git a/init/main.c b/init/main.c
index af50044deed5..107e83a77cf4 100644
--- a/init/main.c
+++ b/init/main.c
@@ -65,6 +65,7 @@
 #include
 #include
 #include
+#include <linux/dept.h>
 #include
 #include
 #include
@@ -1017,6 +1018,7 @@ asmlinkage __visible void __init __no_sanitize_address __noreturn start_kernel(v
 	       panic_param);
 
 	lockdep_init();
+	dept_init();
 
 	/*
 	 * Need to run this when irqs are enabled, because it wants
diff --git a/kernel/Makefile b/kernel/Makefile
index b69c95315480..871f3c618492 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -51,6 +51,7 @@ obj-y += livepatch/
 obj-y += dma/
 obj-y += entry/
 obj-$(CONFIG_MODULES) += module/
+obj-y += dependency/
 
 obj-$(CONFIG_KCMP) += kcmp.o
 obj-$(CONFIG_FREEZER) += freezer.o
diff --git a/kernel/dependency/Makefile b/kernel/dependency/Makefile
new file mode 100644
index 000000000000..b5cfb8a03c0c
--- /dev/null
+++ b/kernel/dependency/Makefile
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: GPL-2.0
+
+obj-$(CONFIG_DEPT) += dept.o
diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c
new file mode 100644
index 000000000000..8ec638254e5f
--- /dev/null
+++ b/kernel/dependency/dept.c
@@ -0,0 +1,3009 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DEPT(DEPendency Tracker) - Runtime dependency tracker
+ *
+ * Started by Byungchul Park :
+ *
+ * Copyright (c) 2020 LG Electronics, Inc., Byungchul Park
+ *
+ * DEPT provides a general way to detect deadlock possibilities at
+ * runtime, and the interest is not limited to typical locks but
+ * extends to every synchronization primitive.
+ *
+ * The following ideas were borrowed from LOCKDEP:
+ *
+ *    1) Use a graph to track relationship between classes.
+ *    2) Prevent performance regression using hash.
+ *
+ * The following items were enhanced from LOCKDEP:
+ *
+ *    1) Cover more deadlock cases.
+ *    2) Allow multiple reports.
+ *
+ * TODO: Both LOCKDEP and DEPT should co-exist until DEPT is considered
+ * stable. Then the dependency check routine should be replaced with
+ * DEPT. It should finally look like:
+ *
+ *
+ *
+ * As is:
+ *
+ *    LOCKDEP
+ *    +-----------------------------------------+
+ *    | Lock usage correctness check            | <-> locks
+ *    |                                         |
+ *    |                                         |
+ *    | +-------------------------------------+ |
+ *    | | Dependency check                    | |
+ *    | | (by tracking lock acquisition order)| |
+ *    | +-------------------------------------+ |
+ *    |                                         |
+ *    +-----------------------------------------+
+ *
+ *    DEPT
+ *    +-----------------------------------------+
+ *    | Dependency check                        | <-> waits/events
+ *    | (by tracking wait and event context)    |
+ *    +-----------------------------------------+
+ *
+ *
+ *
+ * To be:
+ *
+ *    LOCKDEP
+ *    +-----------------------------------------+
+ *    | Lock usage correctness check            | <-> locks
+ *    |                                         |
+ *    |                                         |
+ *    |       (Request dependency check)        |
+ *    |                    T                    |
+ *    +--------------------|--------------------+
+ *                         |
+ *    DEPT                 V
+ *    +-----------------------------------------+
+ *    | Dependency check                        | <-> waits/events
+ *    | (by tracking wait and event context)    |
+ *    +-----------------------------------------+
+ */
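[Editorial aside, not part of the patch: before the engine below, a sketch
of how a synchronization primitive would drive the annotation API declared
in dept.h above. The waitqueue, all my_* names, and the flag value 1UL are
invented; only the dept_* signatures come from the header. A NULL key is
fine -- the map's own map_key is used then.]

	static struct dept_map wq_dmap;		/* per-object DEPT map */

	static void my_wait_queue_init(void)
	{
		dept_map_init(&wq_dmap, NULL, 0, "my_waitqueue");
	}

	static void my_wait(void)
	{
		/* declare "about to wait on this map" */
		dept_wait(&wq_dmap, 1UL, _RET_IP_, __func__, 0);
		/* ... actually sleep ... */
	}

	static void my_wake(void)
	{
		/* declare "the event this map waits for happened" */
		dept_event(&wq_dmap, 1UL, _RET_IP_, __func__);
		/* ... actually wake the waiter ... */
	}
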
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+static int dept_stop;
+static int dept_per_cpu_ready;
+
+#define DEPT_READY_WARN	(!oops_in_progress)
+
+/*
+ * Make all operations using DEPT_WARN_ON() fail on oops_in_progress and
+ * prevent warning messages.
+ */
+#define DEPT_WARN_ON_ONCE(c)						\
+	({								\
+		int __ret = 0;						\
+									\
+		if (likely(DEPT_READY_WARN))				\
+			__ret = WARN_ONCE(c, "DEPT_WARN_ON_ONCE: " #c);	\
+		__ret;							\
+	})
+
+#define DEPT_WARN_ONCE(s...)						\
+	({								\
+		if (likely(DEPT_READY_WARN))				\
+			WARN_ONCE(1, "DEPT_WARN_ONCE: " s);		\
+	})
+
+#define DEPT_WARN_ON(c)							\
+	({								\
+		int __ret = 0;						\
+									\
+		if (likely(DEPT_READY_WARN))				\
+			__ret = WARN(c, "DEPT_WARN_ON: " #c);		\
+		__ret;							\
+	})
+
+#define DEPT_WARN(s...)							\
+	({								\
+		if (likely(DEPT_READY_WARN))				\
+			WARN(1, "DEPT_WARN: " s);			\
+	})
+
+#define DEPT_STOP(s...)							\
+	({								\
+		WRITE_ONCE(dept_stop, 1);				\
+		if (likely(DEPT_READY_WARN))				\
+			WARN(1, "DEPT_STOP: " s);			\
+	})
+
+#define DEPT_INFO_ONCE(s...)	pr_warn_once("DEPT_INFO_ONCE: " s)
+
+static arch_spinlock_t dept_spin = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
+static arch_spinlock_t stage_spin = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
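[Editorial aside, not part of the patch: the DEPT_WARN_ON() family above
evaluates to the tested condition, so it can guard early returns, as
from_pool() does below with "if (DEPT_WARN_ON(!irqs_disabled())) return
NULL;". A minimal userspace model of that statement-expression shape
(GCC/Clang extension; MY_WARN_ON and the message are made up):]

	#include <stdio.h>

	#define MY_WARN_ON(c)						\
		({							\
			int __ret = !!(c);				\
			if (__ret)					\
				fprintf(stderr, "WARN_ON: %s\n", #c);	\
			__ret;	/* the macro's value */			\
		})

	int main(void)
	{
		int x = 0;

		if (MY_WARN_ON(x != 0))
			return 1;	/* early-out on the warned condition */
		puts("ok");
		return 0;
	}
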
+
+/*
+ * DEPT internal engine should be careful when using outside functions
+ * e.g. printk at reporting since that kind of usage might cause
+ * untrackable deadlock.
+ */
+static atomic_t dept_outworld = ATOMIC_INIT(0);
+
+static inline void dept_outworld_enter(void)
+{
+	atomic_inc(&dept_outworld);
+}
+
+static inline void dept_outworld_exit(void)
+{
+	atomic_dec(&dept_outworld);
+}
+
+static inline bool dept_outworld_entered(void)
+{
+	return atomic_read(&dept_outworld);
+}
+
+static inline bool dept_lock(void)
+{
+	while (!arch_spin_trylock(&dept_spin))
+		if (unlikely(dept_outworld_entered()))
+			return false;
+	return true;
+}
+
+static inline void dept_unlock(void)
+{
+	arch_spin_unlock(&dept_spin);
+}
+
+/*
+ * whether to stack-trace on every wait or every ecxt
+ */
+static bool rich_stack = true;
+
+enum bfs_ret {
+	BFS_CONTINUE,
+	BFS_CONTINUE_REV,
+	BFS_DONE,
+	BFS_SKIP,
+};
+
+static inline bool after(unsigned int a, unsigned int b)
+{
+	return (int)(b - a) < 0;
+}
+
+static inline bool before(unsigned int a, unsigned int b)
+{
+	return (int)(a - b) < 0;
+}
+
+static inline bool valid_stack(struct dept_stack *s)
+{
+	return s && s->nr > 0;
+}
+
+static inline bool valid_class(struct dept_class *c)
+{
+	return c->key;
+}
+
+static inline void invalidate_class(struct dept_class *c)
+{
+	c->key = 0UL;
+}
+
+static inline struct dept_ecxt *dep_e(struct dept_dep *d)
+{
+	return d->ecxt;
+}
+
+static inline struct dept_wait *dep_w(struct dept_dep *d)
+{
+	return d->wait;
+}
+
+static inline struct dept_class *dep_fc(struct dept_dep *d)
+{
+	return dep_e(d)->class;
+}
+
+static inline struct dept_class *dep_tc(struct dept_dep *d)
+{
+	return dep_w(d)->class;
+}
+
+static inline const char *irq_str(int irq)
+{
+	if (irq == DEPT_SIRQ)
+		return "softirq";
+	if (irq == DEPT_HIRQ)
+		return "hardirq";
+	return "(unknown)";
+}
+
+static inline struct dept_task *dept_task(void)
+{
+	return &current->dept_task;
+}
+
+/*
+ * Dept doesn't work either when it's stopped by DEPT_STOP() or when in
+ * NMI context.
+ */
+static inline bool dept_working(void)
+{
+	return !READ_ONCE(dept_stop) && !in_nmi();
+}
+
+/*
+ * Even k == NULL is considered a valid key because &->map_key would be
+ * used as the key in that case.
+ */
+struct dept_key __dept_no_validate__;
+static inline bool valid_key(struct dept_key *k)
+{
+	return &__dept_no_validate__ != k;
+}
+
+/*
+ * Pool
+ * =====================================================================
+ * DEPT maintains pools to provide objects in a safe way.
+ *
+ *    1) Static pool is used at the beginning of booting time.
+ *    2) Local pool is tried first before the static pool. Objects that
+ *       have been freed will be placed there.
+ */
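[Editorial aside, not part of the patch: a userspace model of the two-level
scheme just described. All names are hypothetical; a per-thread free list
stands in for the per-CPU llist, and an atomic index hands out static-array
slots the way obj_nr does in from_pool() below.]

	#include <stdatomic.h>
	#include <stddef.h>
	#include <stdio.h>

	struct obj { struct obj *next; int payload; };

	#define POOL_NR 4
	static struct obj spool[POOL_NR];	/* static pool */
	static atomic_int obj_nr = POOL_NR;	/* remaining static slots */
	static _Thread_local struct obj *lpool;	/* local free list */

	static struct obj *pool_get(void)
	{
		struct obj *o = lpool;
		int idx;

		if (o) {			/* try local pool first */
			lpool = o->next;
			return o;
		}
		/* then the static pool; indices are handed out once */
		idx = atomic_fetch_sub(&obj_nr, 1) - 1;
		return idx >= 0 ? &spool[idx] : NULL;
	}

	static void pool_put(struct obj *o)
	{
		o->next = lpool;		/* freed -> local pool */
		lpool = o;
	}

	int main(void)
	{
		struct obj *o = pool_get();

		printf("got %p\n", (void *)o);
		pool_put(o);
		return 0;
	}
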
+
+enum object_t {
+#define OBJECT(id, nr) OBJECT_##id,
+	#include "dept_object.h"
+#undef  OBJECT
+	OBJECT_NR,
+};
+
+#define OBJECT(id, nr)							\
+static struct dept_##id spool_##id[nr];					\
+static DEFINE_PER_CPU(struct llist_head, lpool_##id);
+	#include "dept_object.h"
+#undef  OBJECT
+
+static struct dept_pool pool[OBJECT_NR] = {
+#define OBJECT(id, nr) {						\
+	.name = #id,							\
+	.obj_sz = sizeof(struct dept_##id),				\
+	.obj_nr = ATOMIC_INIT(nr),					\
+	.node_off = offsetof(struct dept_##id, pool_node),		\
+	.spool = spool_##id,						\
+	.lpool = &lpool_##id, },
+	#include "dept_object.h"
+#undef  OBJECT
+};
+
+/*
+ * Can use llist no matter whether CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG is
+ * enabled or not because NMI and other contexts in the same CPU never
+ * run inside of DEPT concurrently by preventing reentrance.
+ */
+static void *from_pool(enum object_t t)
+{
+	struct dept_pool *p;
+	struct llist_head *h;
+	struct llist_node *n;
+
+	/*
+	 * llist_del_first() doesn't allow concurrent access e.g.
+	 * between process and IRQ context.
+	 */
+	if (DEPT_WARN_ON(!irqs_disabled()))
+		return NULL;
+
+	p = &pool[t];
+
+	/*
+	 * Try local pool first.
+	 */
+	if (likely(dept_per_cpu_ready))
+		h = this_cpu_ptr(p->lpool);
+	else
+		h = &p->boot_pool;
+
+	n = llist_del_first(h);
+	if (n)
+		return (void *)n - p->node_off;
+
+	/*
+	 * Try static pool.
+	 */
+	if (atomic_read(&p->obj_nr) > 0) {
+		int idx = atomic_dec_return(&p->obj_nr);
+
+		if (idx >= 0)
+			return p->spool + (idx * p->obj_sz);
+	}
+
+	DEPT_INFO_ONCE("---------------------------------------------\n"
+		       " Some of Dept internal resources are run out.\n"
+		       " Dept might still work if the resources get freed.\n"
+		       " However, the chances are Dept will suffer from\n"
+		       " the lack from now. Needs to extend the internal\n"
+		       " resource pools. Ask max.byungchul.park@gmail.com\n");
+	return NULL;
+}
+
+static void to_pool(void *o, enum object_t t)
+{
+	struct dept_pool *p = &pool[t];
+	struct llist_head *h;
+
+	preempt_disable();
+	if (likely(dept_per_cpu_ready))
+		h = this_cpu_ptr(p->lpool);
+	else
+		h = &p->boot_pool;
+
+	llist_add(o + p->node_off, h);
+	preempt_enable();
+}
+
+#define OBJECT(id, nr)							\
+static void (*ctor_##id)(struct dept_##id *a);				\
+static void (*dtor_##id)(struct dept_##id *a);				\
+static inline struct dept_##id *new_##id(void)				\
+{									\
+	struct dept_##id *a;						\
+									\
+	a = (struct dept_##id *)from_pool(OBJECT_##id);			\
+	if (unlikely(!a))						\
+		return NULL;						\
+									\
+	atomic_set(&a->ref, 1);						\
+									\
+	if (ctor_##id)							\
+		ctor_##id(a);						\
+									\
+	return a;							\
+}									\
+									\
+static inline struct dept_##id *get_##id(struct dept_##id *a)		\
+{									\
+	atomic_inc(&a->ref);						\
+	return a;							\
+}									\
+									\
+static inline void put_##id(struct dept_##id *a)			\
+{									\
+	if (!atomic_dec_return(&a->ref)) {				\
+		if (dtor_##id)						\
+			dtor_##id(a);					\
+		to_pool(a, OBJECT_##id);				\
+	}								\
+}									\
+									\
+static inline void del_##id(struct dept_##id *a)			\
+{									\
+	put_##id(a);							\
+}									\
+									\
+static inline bool id##_consumed(struct dept_##id *a)			\
+{									\
+	return a && atomic_read(&a->ref) > 1;				\
+}
+#include "dept_object.h"
+#undef  OBJECT
+
+#define SET_CONSTRUCTOR(id, f)						\
+static void (*ctor_##id)(struct dept_##id *a) = f
+
+static void initialize_dep(struct dept_dep *d)
+{
+	INIT_LIST_HEAD(&d->bfs_node);
+	INIT_LIST_HEAD(&d->dep_node);
+	INIT_LIST_HEAD(&d->dep_rev_node);
+}
+SET_CONSTRUCTOR(dep, initialize_dep);
+
+static void initialize_class(struct dept_class *c)
+{
+	int i;
+
+	for (i = 0; i < DEPT_IRQS_NR; i++) {
+		struct dept_iecxt *ie = &c->iecxt[i];
+		struct dept_iwait *iw = &c->iwait[i];
+
+		ie->ecxt = NULL;
+		ie->enirq = i;
+		ie->staled = false;
+
+		iw->wait = NULL;
+		iw->irq = i;
+		iw->staled = false;
+		iw->touched = false;
+	}
+	c->bfs_gen = 0U;
+
+	INIT_LIST_HEAD(&c->all_node);
+	INIT_LIST_HEAD(&c->dep_head);
+	INIT_LIST_HEAD(&c->dep_rev_head);
+}
+SET_CONSTRUCTOR(class, initialize_class);
+
+static void initialize_ecxt(struct dept_ecxt *e)
+{
+	int i;
+
+	for (i = 0; i < DEPT_IRQS_NR; i++) {
+		e->enirq_stack[i] = NULL;
+		e->enirq_ip[i] = 0UL;
+	}
+	e->ecxt_ip = 0UL;
+	e->ecxt_stack = NULL;
+	e->enirqf = 0UL;
+	e->event_ip = 0UL;
+	e->event_stack = NULL;
+}
+SET_CONSTRUCTOR(ecxt, initialize_ecxt);
+
+static void initialize_wait(struct dept_wait *w)
+{
+	int i;
+
+	for (i = 0; i < DEPT_IRQS_NR; i++) {
+		w->irq_stack[i] = NULL;
+		w->irq_ip[i] = 0UL;
+	}
+	w->wait_ip = 0UL;
+	w->wait_stack = NULL;
+	w->irqf = 0UL;
+}
+SET_CONSTRUCTOR(wait, initialize_wait);
+
+static void initialize_stack(struct dept_stack *s)
+{
+	s->nr = 0;
+}
+SET_CONSTRUCTOR(stack, initialize_stack);
+
+#define
OBJECT(id, nr) \ +static void (*ctor_##id)(struct dept_##id *a); + #include "dept_object.h" +#undef OBJECT + +#undef SET_CONSTRUCTOR + +#define SET_DESTRUCTOR(id, f) \ +static void (*dtor_##id)(struct dept_##id *a) = f + +static void destroy_dep(struct dept_dep *d) +{ + if (dep_e(d)) + put_ecxt(dep_e(d)); + if (dep_w(d)) + put_wait(dep_w(d)); +} +SET_DESTRUCTOR(dep, destroy_dep); + +static void destroy_ecxt(struct dept_ecxt *e) +{ + int i; + + for (i = 0; i < DEPT_IRQS_NR; i++) + if (e->enirq_stack[i]) + put_stack(e->enirq_stack[i]); + if (e->class) + put_class(e->class); + if (e->ecxt_stack) + put_stack(e->ecxt_stack); + if (e->event_stack) + put_stack(e->event_stack); +} +SET_DESTRUCTOR(ecxt, destroy_ecxt); + +static void destroy_wait(struct dept_wait *w) +{ + int i; + + for (i = 0; i < DEPT_IRQS_NR; i++) + if (w->irq_stack[i]) + put_stack(w->irq_stack[i]); + if (w->class) + put_class(w->class); + if (w->wait_stack) + put_stack(w->wait_stack); +} +SET_DESTRUCTOR(wait, destroy_wait); + +#define OBJECT(id, nr) \ +static void (*dtor_##id)(struct dept_##id *a); + #include "dept_object.h" +#undef OBJECT + +#undef SET_DESTRUCTOR + +/* + * Caching and hashing + * ===================================================================== + * DEPT makes use of caching and hashing to improve performance. Each + * object can be obtained in O(1) with its key. + * + * NOTE: Currently we assume all the objects in the hashs will never be + * removed. Implement it when needed. + */ + +/* + * Some information might be lost but it's only for hashing key. + */ +static inline unsigned long mix(unsigned long a, unsigned long b) +{ + int halfbits = sizeof(unsigned long) * 8 / 2; + unsigned long halfmask = (1UL << halfbits) - 1UL; + + return (a << halfbits) | (b & halfmask); +} + +static bool cmp_dep(struct dept_dep *d1, struct dept_dep *d2) +{ + return dep_fc(d1)->key == dep_fc(d2)->key && + dep_tc(d1)->key == dep_tc(d2)->key; +} + +static unsigned long key_dep(struct dept_dep *d) +{ + return mix(dep_fc(d)->key, dep_tc(d)->key); +} + +static bool cmp_class(struct dept_class *c1, struct dept_class *c2) +{ + return c1->key == c2->key; +} + +static unsigned long key_class(struct dept_class *c) +{ + return c->key; +} + +#define HASH(id, bits) \ +static struct hlist_head table_##id[1 << (bits)]; \ + \ +static inline struct hlist_head *head_##id(struct dept_##id *a) \ +{ \ + return table_##id + hash_long(key_##id(a), bits); \ +} \ + \ +static inline struct dept_##id *hash_lookup_##id(struct dept_##id *a) \ +{ \ + struct dept_##id *b; \ + \ + hlist_for_each_entry_rcu(b, head_##id(a), hash_node) \ + if (cmp_##id(a, b)) \ + return b; \ + return NULL; \ +} \ + \ +static inline void hash_add_##id(struct dept_##id *a) \ +{ \ + get_##id(a); \ + hlist_add_head_rcu(&a->hash_node, head_##id(a)); \ +} \ + \ +static inline void hash_del_##id(struct dept_##id *a) \ +{ \ + hlist_del_rcu(&a->hash_node); \ + put_##id(a); \ +} +#include "dept_hash.h" +#undef HASH + +static inline struct dept_dep *lookup_dep(struct dept_class *fc, + struct dept_class *tc) +{ + struct dept_ecxt onetime_e = { .class = fc }; + struct dept_wait onetime_w = { .class = tc }; + struct dept_dep onetime_d = { .ecxt = &onetime_e, + .wait = &onetime_w }; + return hash_lookup_dep(&onetime_d); +} + +static inline struct dept_class *lookup_class(unsigned long key) +{ + struct dept_class onetime_c = { .key = key }; + + return hash_lookup_class(&onetime_c); +} + +/* + * Report + * ===================================================================== + * DEPT prints 
useful information to help debuging on detection of + * problematic dependency. + */ + +static inline void print_ip_stack(unsigned long ip, struct dept_stack *s) +{ + if (ip) + print_ip_sym(KERN_WARNING, ip); + + if (valid_stack(s)) { + pr_warn("stacktrace:\n"); + stack_trace_print(s->raw, s->nr, 5); + } + + if (!ip && !valid_stack(s)) + pr_warn("(N/A)\n"); +} + +#define print_spc(spc, fmt, ...) \ + pr_warn("%*c" fmt, (spc) * 4, ' ', ##__VA_ARGS__) + +static void print_diagram(struct dept_dep *d) +{ + struct dept_ecxt *e = dep_e(d); + struct dept_wait *w = dep_w(d); + struct dept_class *fc = dep_fc(d); + struct dept_class *tc = dep_tc(d); + unsigned long irqf; + int irq; + bool firstline = true; + int spc = 1; + const char *w_fn = w->wait_fn ?: "(unknown)"; + const char *e_fn = e->event_fn ?: "(unknown)"; + const char *c_fn = e->ecxt_fn ?: "(unknown)"; + const char *fc_n = fc->sched_map ? "" : (fc->name ?: "(unknown)"); + const char *tc_n = tc->sched_map ? "" : (tc->name ?: "(unknown)"); + + irqf = e->enirqf & w->irqf; + for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) { + if (!firstline) + pr_warn("\nor\n\n"); + firstline = false; + + print_spc(spc, "[S] %s(%s:%d)\n", c_fn, fc_n, fc->sub_id); + print_spc(spc, " <%s interrupt>\n", irq_str(irq)); + print_spc(spc + 1, "[W] %s(%s:%d)\n", w_fn, tc_n, tc->sub_id); + print_spc(spc, "[E] %s(%s:%d)\n", e_fn, fc_n, fc->sub_id); + } + + if (!irqf) { + print_spc(spc, "[S] %s(%s:%d)\n", c_fn, fc_n, fc->sub_id); + print_spc(spc, "[W] %s(%s:%d)\n", w_fn, tc_n, tc->sub_id); + print_spc(spc, "[E] %s(%s:%d)\n", e_fn, fc_n, fc->sub_id); + } +} + +static void print_dep(struct dept_dep *d) +{ + struct dept_ecxt *e = dep_e(d); + struct dept_wait *w = dep_w(d); + struct dept_class *fc = dep_fc(d); + struct dept_class *tc = dep_tc(d); + unsigned long irqf; + int irq; + const char *w_fn = w->wait_fn ?: "(unknown)"; + const char *e_fn = e->event_fn ?: "(unknown)"; + const char *c_fn = e->ecxt_fn ?: "(unknown)"; + const char *fc_n = fc->sched_map ? "" : (fc->name ?: "(unknown)"); + const char *tc_n = tc->sched_map ? "" : (tc->name ?: "(unknown)"); + + irqf = e->enirqf & w->irqf; + for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) { + pr_warn("%s has been enabled:\n", irq_str(irq)); + print_ip_stack(e->enirq_ip[irq], e->enirq_stack[irq]); + pr_warn("\n"); + + pr_warn("[S] %s(%s:%d):\n", c_fn, fc_n, fc->sub_id); + print_ip_stack(e->ecxt_ip, e->ecxt_stack); + pr_warn("\n"); + + pr_warn("[W] %s(%s:%d) in %s context:\n", + w_fn, tc_n, tc->sub_id, irq_str(irq)); + print_ip_stack(w->irq_ip[irq], w->irq_stack[irq]); + pr_warn("\n"); + + pr_warn("[E] %s(%s:%d):\n", e_fn, fc_n, fc->sub_id); + print_ip_stack(e->event_ip, e->event_stack); + } + + if (!irqf) { + pr_warn("[S] %s(%s:%d):\n", c_fn, fc_n, fc->sub_id); + print_ip_stack(e->ecxt_ip, e->ecxt_stack); + pr_warn("\n"); + + pr_warn("[W] %s(%s:%d):\n", w_fn, tc_n, tc->sub_id); + print_ip_stack(w->wait_ip, w->wait_stack); + pr_warn("\n"); + + pr_warn("[E] %s(%s:%d):\n", e_fn, fc_n, fc->sub_id); + print_ip_stack(e->event_ip, e->event_stack); + } +} + +static void save_current_stack(int skip); + +/* + * Print all classes in a circle. 
+ */ +static void print_circle(struct dept_class *c) +{ + struct dept_class *fc = c->bfs_parent; + struct dept_class *tc = c; + int i; + + dept_outworld_enter(); + save_current_stack(6); + + pr_warn("===================================================\n"); + pr_warn("DEPT: Circular dependency has been detected.\n"); + pr_warn("%s %.*s %s\n", init_utsname()->release, + (int)strcspn(init_utsname()->version, " "), + init_utsname()->version, + print_tainted()); + pr_warn("---------------------------------------------------\n"); + pr_warn("summary\n"); + pr_warn("---------------------------------------------------\n"); + + if (fc == tc) + pr_warn("*** AA DEADLOCK ***\n\n"); + else + pr_warn("*** DEADLOCK ***\n\n"); + + i = 0; + do { + struct dept_dep *d = lookup_dep(fc, tc); + + pr_warn("context %c\n", 'A' + (i++)); + print_diagram(d); + if (fc != c) + pr_warn("\n"); + + tc = fc; + fc = fc->bfs_parent; + } while (tc != c); + + pr_warn("\n"); + pr_warn("[S]: start of the event context\n"); + pr_warn("[W]: the wait blocked\n"); + pr_warn("[E]: the event not reachable\n"); + + i = 0; + do { + struct dept_dep *d = lookup_dep(fc, tc); + + pr_warn("---------------------------------------------------\n"); + pr_warn("context %c's detail\n", 'A' + i); + pr_warn("---------------------------------------------------\n"); + pr_warn("context %c\n", 'A' + (i++)); + print_diagram(d); + pr_warn("\n"); + print_dep(d); + + tc = fc; + fc = fc->bfs_parent; + } while (tc != c); + + pr_warn("---------------------------------------------------\n"); + pr_warn("information that might be helpful\n"); + pr_warn("---------------------------------------------------\n"); + dump_stack(); + + dept_outworld_exit(); +} + +/* + * BFS(Breadth First Search) + * ===================================================================== + * Whenever a new dependency is added into the graph, search the graph + * for a new circular dependency. + */ + +static inline void enqueue(struct list_head *h, struct dept_dep *d) +{ + list_add_tail(&d->bfs_node, h); +} + +static inline struct dept_dep *dequeue(struct list_head *h) +{ + struct dept_dep *d; + + d = list_first_entry(h, struct dept_dep, bfs_node); + list_del(&d->bfs_node); + return d; +} + +static inline bool empty(struct list_head *h) +{ + return list_empty(h); +} + +static void extend_queue(struct list_head *h, struct dept_class *cur) +{ + struct dept_dep *d; + + list_for_each_entry(d, &cur->dep_head, dep_node) { + struct dept_class *next = dep_tc(d); + + if (cur->bfs_gen == next->bfs_gen) + continue; + next->bfs_gen = cur->bfs_gen; + next->bfs_dist = cur->bfs_dist + 1; + next->bfs_parent = cur; + enqueue(h, d); + } +} + +static void extend_queue_rev(struct list_head *h, struct dept_class *cur) +{ + struct dept_dep *d; + + list_for_each_entry(d, &cur->dep_rev_head, dep_rev_node) { + struct dept_class *next = dep_fc(d); + + if (cur->bfs_gen == next->bfs_gen) + continue; + next->bfs_gen = cur->bfs_gen; + next->bfs_dist = cur->bfs_dist + 1; + next->bfs_parent = cur; + enqueue(h, d); + } +} + +typedef enum bfs_ret bfs_f(struct dept_dep *d, void *in, void **out); +static unsigned int bfs_gen; + +/* + * NOTE: Must be called with dept_lock held. + */ +static void bfs(struct dept_class *c, bfs_f *cb, void *in, void **out) +{ + LIST_HEAD(q); + enum bfs_ret ret; + + if (DEPT_WARN_ON(!cb)) + return; + + /* + * Avoid zero bfs_gen. 
+ */ + bfs_gen = bfs_gen + 1 ?: 1; + + c->bfs_gen = bfs_gen; + c->bfs_dist = 0; + c->bfs_parent = c; + + ret = cb(NULL, in, out); + if (ret == BFS_DONE) + return; + if (ret == BFS_SKIP) + return; + if (ret == BFS_CONTINUE) + extend_queue(&q, c); + if (ret == BFS_CONTINUE_REV) + extend_queue_rev(&q, c); + + while (!empty(&q)) { + struct dept_dep *d = dequeue(&q); + + ret = cb(d, in, out); + if (ret == BFS_DONE) + break; + if (ret == BFS_SKIP) + continue; + if (ret == BFS_CONTINUE) + extend_queue(&q, dep_tc(d)); + if (ret == BFS_CONTINUE_REV) + extend_queue_rev(&q, dep_fc(d)); + } + + while (!empty(&q)) + dequeue(&q); +} + +/* + * Main operations + * ===================================================================== + * Add dependencies - Each new dependency is added into the graph and + * checked if it forms a circular dependency. + * + * Track waits - Waits are queued into the ring buffer for later use to + * generate appropriate dependencies with cross-event. + * + * Track event contexts(ecxt) - Event contexts are pushed into local + * stack for later use to generate appropriate dependencies with waits. + */ + +static inline unsigned long cur_enirqf(void); +static inline int cur_irq(void); +static inline unsigned int cur_ctxt_id(void); + +static inline struct dept_iecxt *iecxt(struct dept_class *c, int irq) +{ + return &c->iecxt[irq]; +} + +static inline struct dept_iwait *iwait(struct dept_class *c, int irq) +{ + return &c->iwait[irq]; +} + +static inline void stale_iecxt(struct dept_iecxt *ie) +{ + if (ie->ecxt) + put_ecxt(ie->ecxt); + + WRITE_ONCE(ie->ecxt, NULL); + WRITE_ONCE(ie->staled, true); +} + +static inline void set_iecxt(struct dept_iecxt *ie, struct dept_ecxt *e) +{ + /* + * ->ecxt will never be updated once getting set until the class + * gets removed. + */ + if (ie->ecxt) + DEPT_WARN_ON(1); + else + WRITE_ONCE(ie->ecxt, get_ecxt(e)); +} + +static inline void stale_iwait(struct dept_iwait *iw) +{ + if (iw->wait) + put_wait(iw->wait); + + WRITE_ONCE(iw->wait, NULL); + WRITE_ONCE(iw->staled, true); +} + +static inline void set_iwait(struct dept_iwait *iw, struct dept_wait *w) +{ + /* + * ->wait will never be updated once getting set until the class + * gets removed. + */ + if (iw->wait) + DEPT_WARN_ON(1); + else + WRITE_ONCE(iw->wait, get_wait(w)); + + iw->touched = true; +} + +static inline void touch_iwait(struct dept_iwait *iw) +{ + iw->touched = true; +} + +static inline void untouch_iwait(struct dept_iwait *iw) +{ + iw->touched = false; +} + +static inline struct dept_stack *get_current_stack(void) +{ + struct dept_stack *s = dept_task()->stack; + + return s ? get_stack(s) : NULL; +} + +static inline void prepare_current_stack(void) +{ + struct dept_stack *s = dept_task()->stack; + + /* + * The dept_stack is already ready. + */ + if (s && !stack_consumed(s)) { + s->nr = 0; + return; + } + + if (s) + put_stack(s); + + s = dept_task()->stack = new_stack(); + if (!s) + return; + + get_stack(s); + del_stack(s); +} + +static void save_current_stack(int skip) +{ + struct dept_stack *s = dept_task()->stack; + + if (!s) + return; + if (valid_stack(s)) + return; + + s->nr = stack_trace_save(s->raw, DEPT_MAX_STACK_ENTRY, skip); +} + +static void finish_current_stack(void) +{ + struct dept_stack *s = dept_task()->stack; + + if (stack_consumed(s)) + save_current_stack(2); +} + +/* + * FIXME: For now, disable LOCKDEP while DEPT is working. 
+ * + * Both LOCKDEP and DEPT report it on a deadlock detection using + * printk taking the risk of another deadlock that might be caused by + * locks of console or printk between inside and outside of them. + * + * For DEPT, it's no problem since multiple reports are allowed. But it + * would be a bad idea for LOCKDEP since it will stop even on a singe + * report. So we need to prevent LOCKDEP from its reporting the risk + * DEPT would take when reporting something. + */ +#include + +void dept_off(void) +{ + dept_task()->recursive++; + lockdep_off(); +} + +void dept_on(void) +{ + dept_task()->recursive--; + lockdep_on(); +} + +static inline unsigned long dept_enter(void) +{ + unsigned long flags; + + flags = arch_local_irq_save(); + dept_off(); + prepare_current_stack(); + return flags; +} + +static inline void dept_exit(unsigned long flags) +{ + finish_current_stack(); + dept_on(); + arch_local_irq_restore(flags); +} + +static inline unsigned long dept_enter_recursive(void) +{ + unsigned long flags; + + flags = arch_local_irq_save(); + return flags; +} + +static inline void dept_exit_recursive(unsigned long flags) +{ + arch_local_irq_restore(flags); +} + +/* + * NOTE: Must be called with dept_lock held. + */ +static struct dept_dep *__add_dep(struct dept_ecxt *e, + struct dept_wait *w) +{ + struct dept_dep *d; + + if (DEPT_WARN_ON(!valid_class(e->class))) + return NULL; + + if (DEPT_WARN_ON(!valid_class(w->class))) + return NULL; + + if (lookup_dep(e->class, w->class)) + return NULL; + + d = new_dep(); + if (unlikely(!d)) + return NULL; + + d->ecxt = get_ecxt(e); + d->wait = get_wait(w); + + /* + * Add the dependency into hash and graph. + */ + hash_add_dep(d); + list_add(&d->dep_node, &dep_fc(d)->dep_head); + list_add(&d->dep_rev_node, &dep_tc(d)->dep_rev_head); + return d; +} + +static enum bfs_ret cb_check_dl(struct dept_dep *d, + void *in, void **out) +{ + struct dept_dep *new = (struct dept_dep *)in; + + /* + * initial condition for this BFS search + */ + if (!d) { + dep_tc(new)->bfs_parent = dep_fc(new); + + if (dep_tc(new) != dep_fc(new)) + return BFS_CONTINUE; + + /* + * AA circle does not make additional deadlock. We don't + * have to continue this BFS search. + */ + print_circle(dep_tc(new)); + return BFS_DONE; + } + + /* + * Allow multiple reports. + */ + if (dep_tc(d) == dep_fc(new)) + print_circle(dep_tc(new)); + + return BFS_CONTINUE; +} + +/* + * This function is actually in charge of reporting. + */ +static inline void check_dl_bfs(struct dept_dep *d) +{ + bfs(dep_tc(d), cb_check_dl, (void *)d, NULL); +} + +static enum bfs_ret cb_find_iw(struct dept_dep *d, void *in, void **out) +{ + int irq = *(int *)in; + struct dept_class *fc; + struct dept_iwait *iw; + + if (DEPT_WARN_ON(!out)) + return BFS_DONE; + + /* + * initial condition for this BFS search + */ + if (!d) + return BFS_CONTINUE_REV; + + fc = dep_fc(d); + iw = iwait(fc, irq); + + /* + * If any parent's ->wait was set, then the children would've + * been touched. + */ + if (!iw->touched) + return BFS_SKIP; + + if (!iw->wait) + return BFS_CONTINUE_REV; + + *out = iw; + return BFS_DONE; +} + +static struct dept_iwait *find_iw_bfs(struct dept_class *c, int irq) +{ + struct dept_iwait *iw = iwait(c, irq); + struct dept_iwait *found = NULL; + + if (iw->wait) + return iw; + + /* + * '->touched == false' guarantees there's no parent that has + * been set ->wait. 
+ */ + if (!iw->touched) + return NULL; + + bfs(c, cb_find_iw, (void *)&irq, (void **)&found); + + if (found) + return found; + + untouch_iwait(iw); + return NULL; +} + +static enum bfs_ret cb_touch_iw_find_ie(struct dept_dep *d, void *in, + void **out) +{ + int irq = *(int *)in; + struct dept_class *tc; + struct dept_iecxt *ie; + struct dept_iwait *iw; + + if (DEPT_WARN_ON(!out)) + return BFS_DONE; + + /* + * initial condition for this BFS search + */ + if (!d) + return BFS_CONTINUE; + + tc = dep_tc(d); + ie = iecxt(tc, irq); + iw = iwait(tc, irq); + + touch_iwait(iw); + + if (!ie->ecxt) + return BFS_CONTINUE; + + if (!*out) + *out = ie; + + return BFS_CONTINUE; +} + +static struct dept_iecxt *touch_iw_find_ie_bfs(struct dept_class *c, + int irq) +{ + struct dept_iecxt *ie = iecxt(c, irq); + struct dept_iwait *iw = iwait(c, irq); + struct dept_iecxt *found = ie->ecxt ? ie : NULL; + + touch_iwait(iw); + bfs(c, cb_touch_iw_find_ie, (void *)&irq, (void **)&found); + return found; +} + +/* + * Should be called with dept_lock held. + */ +static void __add_idep(struct dept_iecxt *ie, struct dept_iwait *iw) +{ + struct dept_dep *new; + + /* + * There's nothing to do. + */ + if (!ie || !iw || !ie->ecxt || !iw->wait) + return; + + new = __add_dep(ie->ecxt, iw->wait); + + /* + * Deadlock detected. Let check_dl_bfs() report it. + */ + if (new) { + check_dl_bfs(new); + stale_iecxt(ie); + stale_iwait(iw); + } + + /* + * If !new, it would be the case of lack of object resource. + * Just let it go and get checked by other chances. Retrying is + * meaningless in that case. + */ +} + +static void set_check_iecxt(struct dept_class *c, int irq, + struct dept_ecxt *e) +{ + struct dept_iecxt *ie = iecxt(c, irq); + + set_iecxt(ie, e); + __add_idep(ie, find_iw_bfs(c, irq)); +} + +static void set_check_iwait(struct dept_class *c, int irq, + struct dept_wait *w) +{ + struct dept_iwait *iw = iwait(c, irq); + + set_iwait(iw, w); + __add_idep(touch_iw_find_ie_bfs(c, irq), iw); +} + +static void add_iecxt(struct dept_class *c, int irq, struct dept_ecxt *e, + bool stack) +{ + /* + * This access is safe since we ensure e->class has set locally. + */ + struct dept_task *dt = dept_task(); + struct dept_iecxt *ie = iecxt(c, irq); + + if (DEPT_WARN_ON(!valid_class(c))) + return; + + if (unlikely(READ_ONCE(ie->staled))) + return; + + /* + * Skip add_iecxt() if ie->ecxt has ever been set at least once. + * Which means it has a valid ->ecxt or been staled. + */ + if (READ_ONCE(ie->ecxt)) + return; + + if (unlikely(!dept_lock())) + return; + + if (unlikely(ie->staled)) + goto unlock; + if (ie->ecxt) + goto unlock; + + e->enirqf |= (1UL << irq); + + /* + * Should be NULL since it's the first time that these + * enirq_{ip,stack}[irq] have ever set. + */ + DEPT_WARN_ON(e->enirq_ip[irq]); + DEPT_WARN_ON(e->enirq_stack[irq]); + + e->enirq_ip[irq] = dt->enirq_ip[irq]; + e->enirq_stack[irq] = stack ? get_current_stack() : NULL; + + set_check_iecxt(c, irq, e); +unlock: + dept_unlock(); +} + +static void add_iwait(struct dept_class *c, int irq, struct dept_wait *w) +{ + struct dept_iwait *iw = iwait(c, irq); + + if (DEPT_WARN_ON(!valid_class(c))) + return; + + if (unlikely(READ_ONCE(iw->staled))) + return; + + /* + * Skip add_iwait() if iw->wait has ever been set at least once. + * Which means it has a valid ->wait or been staled. 
+ */ + if (READ_ONCE(iw->wait)) + return; + + if (unlikely(!dept_lock())) + return; + + if (unlikely(iw->staled)) + goto unlock; + if (iw->wait) + goto unlock; + + w->irqf |= (1UL << irq); + + /* + * Should be NULL since it's the first time that these + * irq_{ip,stack}[irq] have ever set. + */ + DEPT_WARN_ON(w->irq_ip[irq]); + DEPT_WARN_ON(w->irq_stack[irq]); + + w->irq_ip[irq] = w->wait_ip; + w->irq_stack[irq] = get_current_stack(); + + set_check_iwait(c, irq, w); +unlock: + dept_unlock(); +} + +static inline struct dept_wait_hist *hist(int pos) +{ + struct dept_task *dt = dept_task(); + + return dt->wait_hist + (pos % DEPT_MAX_WAIT_HIST); +} + +static inline int hist_pos_next(void) +{ + struct dept_task *dt = dept_task(); + + return dt->wait_hist_pos % DEPT_MAX_WAIT_HIST; +} + +static inline void hist_advance(void) +{ + struct dept_task *dt = dept_task(); + + dt->wait_hist_pos++; + dt->wait_hist_pos %= DEPT_MAX_WAIT_HIST; +} + +static inline struct dept_wait_hist *new_hist(void) +{ + struct dept_wait_hist *wh = hist(hist_pos_next()); + + hist_advance(); + return wh; +} + +static void add_hist(struct dept_wait *w, unsigned int wg, unsigned int ctxt_id) +{ + struct dept_wait_hist *wh = new_hist(); + + if (likely(wh->wait)) + put_wait(wh->wait); + + wh->wait = get_wait(w); + wh->wgen = wg; + wh->ctxt_id = ctxt_id; +} + +/* + * Should be called after setting up e's iecxt and w's iwait. + */ +static void add_dep(struct dept_ecxt *e, struct dept_wait *w) +{ + struct dept_class *fc = e->class; + struct dept_class *tc = w->class; + struct dept_dep *d; + int i; + + if (lookup_dep(fc, tc)) + return; + + if (unlikely(!dept_lock())) + return; + + /* + * __add_dep() will lookup_dep() again with lock held. + */ + d = __add_dep(e, w); + if (d) { + check_dl_bfs(d); + + for (i = 0; i < DEPT_IRQS_NR; i++) { + struct dept_iwait *fiw = iwait(fc, i); + struct dept_iecxt *found_ie; + struct dept_iwait *found_iw; + + /* + * '->touched == false' guarantees there's no + * parent that has been set ->wait. + */ + if (!fiw->touched) + continue; + + /* + * find_iw_bfs() will untouch the iwait if + * not found. + */ + found_iw = find_iw_bfs(fc, i); + + if (!found_iw) + continue; + + found_ie = touch_iw_find_ie_bfs(tc, i); + __add_idep(found_ie, found_iw); + } + } + dept_unlock(); +} + +static atomic_t wgen = ATOMIC_INIT(1); + +static void add_wait(struct dept_class *c, unsigned long ip, + const char *w_fn, int sub_l, bool sched_sleep) +{ + struct dept_task *dt = dept_task(); + struct dept_wait *w; + unsigned int wg = 0U; + int irq; + int i; + + if (DEPT_WARN_ON(!valid_class(c))) + return; + + w = new_wait(); + if (unlikely(!w)) + return; + + WRITE_ONCE(w->class, get_class(c)); + w->wait_ip = ip; + w->wait_fn = w_fn; + w->wait_stack = get_current_stack(); + w->sched_sleep = sched_sleep; + + irq = cur_irq(); + if (irq < DEPT_IRQS_NR) + add_iwait(c, irq, w); + + /* + * Avoid adding dependency between user aware nested ecxt and + * wait. + */ + for (i = dt->ecxt_held_pos - 1; i >= 0; i--) { + struct dept_ecxt_held *eh; + + eh = dt->ecxt_held + i; + + /* + * the case of invalid key'ed one + */ + if (!eh->ecxt) + continue; + + if (eh->ecxt->class != c || eh->sub_l == sub_l) + add_dep(eh->ecxt, w); + } + + if (!wait_consumed(w) && !rich_stack) { + if (w->wait_stack) + put_stack(w->wait_stack); + w->wait_stack = NULL; + } + + /* + * Avoid zero wgen. 
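+	 * A wgen of zero is reserved to mean "no event requested", so
+	 * the '?:' below retries the increment once on counter wrap to
+	 * skip the zero value: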
+ */ + wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen); + add_hist(w, wg, cur_ctxt_id()); + + del_wait(w); +} + +static bool add_ecxt(struct dept_map *m, struct dept_class *c, + unsigned long ip, const char *c_fn, + const char *e_fn, int sub_l) +{ + struct dept_task *dt = dept_task(); + struct dept_ecxt_held *eh; + struct dept_ecxt *e; + unsigned long irqf; + int irq; + + if (DEPT_WARN_ON(!valid_class(c))) + return false; + + if (DEPT_WARN_ON_ONCE(dt->ecxt_held_pos >= DEPT_MAX_ECXT_HELD)) + return false; + + if (m->nocheck) { + eh = dt->ecxt_held + (dt->ecxt_held_pos++); + eh->ecxt = NULL; + eh->map = m; + eh->class = get_class(c); + eh->wgen = atomic_read(&wgen); + eh->sub_l = sub_l; + + return true; + } + + e = new_ecxt(); + if (unlikely(!e)) + return false; + + e->class = get_class(c); + e->ecxt_ip = ip; + e->ecxt_stack = ip && rich_stack ? get_current_stack() : NULL; + e->event_fn = e_fn; + e->ecxt_fn = c_fn; + + eh = dt->ecxt_held + (dt->ecxt_held_pos++); + eh->ecxt = get_ecxt(e); + eh->map = m; + eh->class = get_class(c); + eh->wgen = atomic_read(&wgen); + eh->sub_l = sub_l; + + irqf = cur_enirqf(); + for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) + add_iecxt(c, irq, e, false); + + del_ecxt(e); + return true; +} + +static int find_ecxt_pos(struct dept_map *m, struct dept_class *c, + bool newfirst) +{ + struct dept_task *dt = dept_task(); + int i; + + if (newfirst) { + for (i = dt->ecxt_held_pos - 1; i >= 0; i--) { + struct dept_ecxt_held *eh; + + eh = dt->ecxt_held + i; + if (eh->map == m && eh->class == c) + return i; + } + } else { + for (i = 0; i < dt->ecxt_held_pos; i++) { + struct dept_ecxt_held *eh; + + eh = dt->ecxt_held + i; + if (eh->map == m && eh->class == c) + return i; + } + } + return -1; +} + +static bool pop_ecxt(struct dept_map *m, struct dept_class *c) +{ + struct dept_task *dt = dept_task(); + int pos; + int i; + + pos = find_ecxt_pos(m, c, true); + if (pos == -1) + return false; + + if (dt->ecxt_held[pos].class) + put_class(dt->ecxt_held[pos].class); + + if (dt->ecxt_held[pos].ecxt) + put_ecxt(dt->ecxt_held[pos].ecxt); + + dt->ecxt_held_pos--; + + for (i = pos; i < dt->ecxt_held_pos; i++) + dt->ecxt_held[i] = dt->ecxt_held[i + 1]; + return true; +} + +static inline bool good_hist(struct dept_wait_hist *wh, unsigned int wg) +{ + return wh->wait != NULL && before(wg, wh->wgen); +} + +/* + * Binary-search the ring buffer for the earliest valid wait. + */ +static int find_hist_pos(unsigned int wg) +{ + int oldest; + int l; + int r; + int pos; + + oldest = hist_pos_next(); + if (unlikely(good_hist(hist(oldest), wg))) { + DEPT_INFO_ONCE("Need to expand the ring buffer.\n"); + return oldest; + } + + l = oldest + 1; + r = oldest + DEPT_MAX_WAIT_HIST - 1; + for (pos = (l + r) / 2; l <= r; pos = (l + r) / 2) { + struct dept_wait_hist *p = hist(pos - 1); + struct dept_wait_hist *wh = hist(pos); + + if (!good_hist(p, wg) && good_hist(wh, wg)) + return pos % DEPT_MAX_WAIT_HIST; + if (good_hist(wh, wg)) + r = pos - 1; + else + l = pos + 1; + } + return -1; +} + +static void do_event(struct dept_map *m, struct dept_class *c, + unsigned int wg, unsigned long ip) +{ + struct dept_task *dt = dept_task(); + struct dept_wait_hist *wh; + struct dept_ecxt_held *eh; + unsigned int ctxt_id; + int end; + int pos; + int i; + + if (DEPT_WARN_ON(!valid_class(c))) + return; + + if (m->nocheck) + return; + + /* + * The event was triggered before wait. 
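+	 * (!wg means no dept_request_event() has run since the map was
+	 * initialized or last disabled, so there is nothing to match.)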
+ */ + if (!wg) + return; + + pos = find_ecxt_pos(m, c, false); + if (pos == -1) + return; + + eh = dt->ecxt_held + pos; + + if (DEPT_WARN_ON(!eh->ecxt)) + return; + + eh->ecxt->event_ip = ip; + eh->ecxt->event_stack = get_current_stack(); + + /* + * The ecxt already has done what it needs. + */ + if (!before(wg, eh->wgen)) + return; + + pos = find_hist_pos(wg); + if (pos == -1) + return; + + ctxt_id = cur_ctxt_id(); + end = hist_pos_next(); + end = end > pos ? end : end + DEPT_MAX_WAIT_HIST; + for (wh = hist(pos); pos < end; wh = hist(++pos)) { + if (after(wh->wgen, eh->wgen)) + break; + + if (dt->in_sched && wh->wait->sched_sleep) + continue; + + if (wh->ctxt_id == ctxt_id) + add_dep(eh->ecxt, wh->wait); + } + + for (i = 0; i < DEPT_IRQS_NR; i++) { + struct dept_ecxt *e; + + if (before(dt->wgen_enirq[i], wg)) + continue; + + e = eh->ecxt; + add_iecxt(e->class, i, e, false); + } +} + +static void del_dep_rcu(struct rcu_head *rh) +{ + struct dept_dep *d = container_of(rh, struct dept_dep, rh); + + preempt_disable(); + del_dep(d); + preempt_enable(); +} + +/* + * NOTE: Must be called with dept_lock held. + */ +static void disconnect_class(struct dept_class *c) +{ + struct dept_dep *d, *n; + int i; + + list_for_each_entry_safe(d, n, &c->dep_head, dep_node) { + list_del_rcu(&d->dep_node); + list_del_rcu(&d->dep_rev_node); + hash_del_dep(d); + call_rcu(&d->rh, del_dep_rcu); + } + + list_for_each_entry_safe(d, n, &c->dep_rev_head, dep_rev_node) { + list_del_rcu(&d->dep_node); + list_del_rcu(&d->dep_rev_node); + hash_del_dep(d); + call_rcu(&d->rh, del_dep_rcu); + } + + for (i = 0; i < DEPT_IRQS_NR; i++) { + stale_iecxt(iecxt(c, i)); + stale_iwait(iwait(c, i)); + } +} + +/* + * Context control + * ===================================================================== + * Whether a wait is in {hard,soft}-IRQ context or whether + * {hard,soft}-IRQ has been enabled on the way to an event is very + * important to check dependency. All those things should be tracked. + */ + +static inline unsigned long cur_enirqf(void) +{ + struct dept_task *dt = dept_task(); + int he = dt->hardirqs_enabled; + int se = dt->softirqs_enabled; + + if (he) + return DEPT_HIRQF | (se ? DEPT_SIRQF : 0UL); + return 0UL; +} + +static inline int cur_irq(void) +{ + if (lockdep_softirq_context(current)) + return DEPT_SIRQ; + if (lockdep_hardirq_context()) + return DEPT_HIRQ; + return DEPT_IRQS_NR; +} + +static inline unsigned int cur_ctxt_id(void) +{ + struct dept_task *dt = dept_task(); + int irq = cur_irq(); + + /* + * Normal process context + */ + if (irq == DEPT_IRQS_NR) + return 0U; + + return dt->irq_id[irq] | (1UL << irq); +} + +static void enirq_transition(int irq) +{ + struct dept_task *dt = dept_task(); + int i; + + /* + * READ wgen >= wgen of an event with IRQ enabled has been + * observed on the way to the event means, the IRQ can cut in + * within the ecxt. Used for cross-event detection. 
+	 *
+	 *    wait context	event context (ecxt)
+	 *    ------------	--------------------
+	 *    wait event
+	 *       WRITE wgen
+	 *    observe IRQ enabled
+	 *       READ wgen
+	 *       keep the wgen locally
+	 *
+	 *       on the event
+	 *          check the local wgen
+	 */
+	dt->wgen_enirq[irq] = atomic_read(&wgen);
+
+	for (i = dt->ecxt_held_pos - 1; i >= 0; i--) {
+		struct dept_ecxt_held *eh;
+		struct dept_ecxt *e;
+
+		eh = dt->ecxt_held + i;
+		e = eh->ecxt;
+		if (e)
+			add_iecxt(e->class, irq, e, true);
+	}
+}
+
+static void enirq_update(unsigned long ip)
+{
+	struct dept_task *dt = dept_task();
+	unsigned long irqf;
+	unsigned long prev;
+	int irq;
+
+	prev = dt->eff_enirqf;
+	irqf = cur_enirqf();
+	dt->eff_enirqf = irqf;
+
+	/*
+	 * Do enirq_transition() only on an OFF -> ON transition.
+	 */
+	for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) {
+		if (prev & (1UL << irq))
+			continue;
+
+		dt->enirq_ip[irq] = ip;
+		enirq_transition(irq);
+	}
+}
+
+/*
+ * Should be called on every IRQ ON/OFF transition.
+ */
+static void dept_enirq_transition(unsigned long ip)
+{
+	struct dept_task *dt = dept_task();
+	unsigned long flags;
+
+	if (unlikely(!dept_working()))
+		return;
+
+	/*
+	 * An IRQ ON/OFF transition might happen while Dept is working.
+	 * We cannot handle recursive entrance. Just ignore it.
+	 * Only transitions outside of Dept will be considered.
+	 */
+	if (dt->recursive)
+		return;
+
+	flags = dept_enter();
+
+	enirq_update(ip);
+
+	dept_exit(flags);
+}
+
+void dept_softirqs_on_ip(unsigned long ip)
+{
+	/*
+	 * Assumes that it's called with IRQ disabled so that accessing
+	 * current's fields is not racy.
+	 */
+	dept_task()->softirqs_enabled = true;
+	dept_enirq_transition(ip);
+}
+
+void dept_hardirqs_on(void)
+{
+	/*
+	 * Assumes that it's called with IRQ disabled so that accessing
+	 * current's fields is not racy.
+	 */
+	dept_task()->hardirqs_enabled = true;
+	dept_enirq_transition(_RET_IP_);
+}
+EXPORT_SYMBOL_GPL(dept_hardirqs_on);
+
+void dept_hardirqs_on_ip(unsigned long ip)
+{
+	/*
+	 * Assumes that it's called with IRQ disabled so that accessing
+	 * current's fields is not racy.
+	 */
+	dept_task()->hardirqs_enabled = true;
+	dept_enirq_transition(ip);
+}
+EXPORT_SYMBOL_GPL(dept_hardirqs_on_ip);
+
+void dept_softirqs_off_ip(unsigned long ip)
+{
+	/*
+	 * Assumes that it's called with IRQ disabled so that accessing
+	 * current's fields is not racy.
+	 */
+	dept_task()->softirqs_enabled = false;
+	dept_enirq_transition(ip);
+}
+
+void dept_hardirqs_off(void)
+{
+	/*
+	 * Assumes that it's called with IRQ disabled so that accessing
+	 * current's fields is not racy.
+	 */
+	dept_task()->hardirqs_enabled = false;
+	dept_enirq_transition(_RET_IP_);
+}
+EXPORT_SYMBOL_GPL(dept_hardirqs_off);
+
+void dept_hardirqs_off_ip(unsigned long ip)
+{
+	/*
+	 * Assumes that it's called with IRQ disabled so that accessing
+	 * current's fields is not racy.
+	 */
+	dept_task()->hardirqs_enabled = false;
+	dept_enirq_transition(ip);
+}
+EXPORT_SYMBOL_GPL(dept_hardirqs_off_ip);
+
+/*
+ * Ensure it's the outermost softirq context.
+ */
+void dept_softirq_enter(void)
+{
+	struct dept_task *dt = dept_task();
+
+	dt->irq_id[DEPT_SIRQ] += 1UL << DEPT_IRQS_NR;
+}
+
+/*
+ * Ensure it's the outermost hardirq context.
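+ *
+ * Bumping ->irq_id above bit DEPT_IRQS_NR keeps the low bits free for
+ * the (1UL << irq) tag that cur_ctxt_id() ORs in, so every outermost
+ * entrance yields a fresh context id.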
+ */ +void dept_hardirq_enter(void) +{ + struct dept_task *dt = dept_task(); + + dt->irq_id[DEPT_HIRQ] += 1UL << DEPT_IRQS_NR; +} + +void dept_sched_enter(void) +{ + dept_task()->in_sched = true; +} + +void dept_sched_exit(void) +{ + dept_task()->in_sched = false; +} + +/* + * Exposed APIs + * ===================================================================== + */ + +static inline void clean_classes_cache(struct dept_key *k) +{ + int i; + + for (i = 0; i < DEPT_MAX_SUBCLASSES_CACHE; i++) { + if (!READ_ONCE(k->classes[i])) + continue; + + WRITE_ONCE(k->classes[i], NULL); + } +} + +void dept_map_init(struct dept_map *m, struct dept_key *k, int sub_u, + const char *n) +{ + unsigned long flags; + + if (unlikely(!dept_working())) { + m->nocheck = true; + return; + } + + if (DEPT_WARN_ON(sub_u < 0)) { + m->nocheck = true; + return; + } + + if (DEPT_WARN_ON(sub_u >= DEPT_MAX_SUBCLASSES_USR)) { + m->nocheck = true; + return; + } + + /* + * Allow recursive entrance. + */ + flags = dept_enter_recursive(); + + clean_classes_cache(&m->map_key); + + m->keys = k; + m->sub_u = sub_u; + m->name = n; + m->wgen = 0U; + m->nocheck = !valid_key(k); + + dept_exit_recursive(flags); +} +EXPORT_SYMBOL_GPL(dept_map_init); + +void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u, + const char *n) +{ + unsigned long flags; + + if (unlikely(!dept_working())) { + m->nocheck = true; + return; + } + + /* + * Allow recursive entrance. + */ + flags = dept_enter_recursive(); + + if (k) { + clean_classes_cache(&m->map_key); + m->keys = k; + m->nocheck = !valid_key(k); + } + + if (sub_u >= 0 && sub_u < DEPT_MAX_SUBCLASSES_USR) + m->sub_u = sub_u; + + if (n) + m->name = n; + + m->wgen = 0U; + + dept_exit_recursive(flags); +} +EXPORT_SYMBOL_GPL(dept_map_reinit); + +void dept_map_copy(struct dept_map *to, struct dept_map *from) +{ + if (unlikely(!dept_working())) { + to->nocheck = true; + return; + } + + *to = *from; + + /* + * XXX: 'to' might be in a stack or something. Using the address + * in a stack segment as a key is meaningless. Just ignore the + * case for now. + */ + if (!to->keys) { + to->nocheck = true; + return; + } + + /* + * Since the class cache can be modified concurrently we could + * observe half pointers (64bit arch using 32bit copy insns). + * Therefore clear the caches and take the performance hit. + * + * XXX: Doesn't work well with lockdep_set_class_and_subclass() + * since that relies on cache abuse. + */ + clean_classes_cache(&to->map_key); +} + +static LIST_HEAD(classes); + +static inline bool within(const void *addr, void *start, unsigned long size) +{ + return addr >= start && addr < start + size; +} + +void dept_free_range(void *start, unsigned int sz) +{ + struct dept_task *dt = dept_task(); + struct dept_class *c, *n; + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) { + DEPT_STOP("Failed to successfully free Dept objects.\n"); + return; + } + + flags = dept_enter(); + + /* + * dept_free_range() should not fail. + * + * FIXME: Should be fixed if dept_free_range() causes deadlock + * with dept_lock(). + */ + while (unlikely(!dept_lock())) + cpu_relax(); + + list_for_each_entry_safe(c, n, &classes, all_node) { + if (!within((void *)c->key, start, sz) && + !within(c->name, start, sz)) + continue; + + hash_del_class(c); + disconnect_class(c); + list_del(&c->all_node); + invalidate_class(c); + + /* + * Actual deletion will happen on the rcu callback + * that has been added in disconnect_class(). 
+		 */
+		del_class(c);
+	}
+	dept_unlock();
+	dept_exit(flags);
+
+	/*
+	 * Wait until even lockless hash_lookup_class() for the class
+	 * returns NULL.
+	 */
+	might_sleep();
+	synchronize_rcu();
+}
+
+static inline int sub_id(struct dept_map *m, int e)
+{
+	return (m ? m->sub_u : 0) + e * DEPT_MAX_SUBCLASSES_USR;
+}
+
+static struct dept_class *check_new_class(struct dept_key *local,
+					  struct dept_key *k, int sub_id,
+					  const char *n, bool sched_map)
+{
+	struct dept_class *c = NULL;
+
+	if (DEPT_WARN_ON(sub_id >= DEPT_MAX_SUBCLASSES))
+		return NULL;
+
+	if (DEPT_WARN_ON(!k))
+		return NULL;
+
+	/*
+	 * XXX: Assume that users prevent the map from being used if
+	 * any of the cached keys has been invalidated. If not, the
+	 * cache, local->classes, should not be used because it would
+	 * be racy with class deletion.
+	 */
+	if (local && sub_id < DEPT_MAX_SUBCLASSES_CACHE)
+		c = READ_ONCE(local->classes[sub_id]);
+
+	if (c)
+		return c;
+
+	c = lookup_class((unsigned long)k->base + sub_id);
+	if (c)
+		goto caching;
+
+	if (unlikely(!dept_lock()))
+		return NULL;
+
+	c = lookup_class((unsigned long)k->base + sub_id);
+	if (unlikely(c))
+		goto unlock;
+
+	c = new_class();
+	if (unlikely(!c))
+		goto unlock;
+
+	c->name = n;
+	c->sched_map = sched_map;
+	c->sub_id = sub_id;
+	c->key = (unsigned long)(k->base + sub_id);
+	hash_add_class(c);
+	list_add(&c->all_node, &classes);
+unlock:
+	dept_unlock();
+caching:
+	if (local && sub_id < DEPT_MAX_SUBCLASSES_CACHE)
+		WRITE_ONCE(local->classes[sub_id], c);
+
+	return c;
+}
+
+/*
+ * Called between dept_enter() and dept_exit().
+ */
+static void __dept_wait(struct dept_map *m, unsigned long w_f,
+			unsigned long ip, const char *w_fn, int sub_l,
+			bool sched_sleep, bool sched_map)
+{
+	int e;
+
+	/*
+	 * Be as conservative as possible. In case of multiple waits for
+	 * a single dept_map, we are going to keep only the last wait's
+	 * wgen for simplicity - keeping all wgens seems overengineering.
+	 *
+	 * Of course, it might cause missing some dependencies that
+	 * would rarely, probably never, happen but it helps avoid
+	 * false positive reports.
+	 */
+	for_each_set_bit(e, &w_f, DEPT_MAX_SUBCLASSES_EVT) {
+		struct dept_class *c;
+		struct dept_key *k;
+
+		k = m->keys ?: &m->map_key;
+		c = check_new_class(&m->map_key, k,
+				    sub_id(m, e), m->name, sched_map);
+		if (!c)
+			continue;
+
+		add_wait(c, ip, w_fn, sub_l, sched_sleep);
+	}
+}
+
+/*
+ * Called between dept_enter() and dept_exit().
+ */
+static void __dept_event(struct dept_map *m, unsigned long e_f,
+			 unsigned long ip, const char *e_fn,
+			 bool sched_map)
+{
+	struct dept_class *c;
+	struct dept_key *k;
+	int e;
+
+	e = find_first_bit(&e_f, DEPT_MAX_SUBCLASSES_EVT);
+
+	if (DEPT_WARN_ON(e >= DEPT_MAX_SUBCLASSES_EVT))
+		goto exit;
+
+	/*
+	 * An event is an event. If the caller passed more than a
+	 * single event, then warn about it and handle the event
+	 * corresponding to the first bit anyway.
+	 */
+	DEPT_WARN_ON(1UL << e != e_f);
+
+	k = m->keys ?: &m->map_key;
+	c = check_new_class(&m->map_key, k, sub_id(m, e), m->name, sched_map);
+
+	if (c && add_ecxt(m, c, 0UL, NULL, e_fn, 0)) {
+		do_event(m, c, READ_ONCE(m->wgen), ip);
+		pop_ecxt(m, c);
+	}
+exit:
+	/*
+	 * Keep the map disabled until the next sleep.
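+	 * (wgen == 0 encodes "disabled"; dept_request_event() or the
+	 * next staged wait re-arms the map with a fresh nonzero wgen.)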
+ */ + WRITE_ONCE(m->wgen, 0U); +} + +void dept_wait(struct dept_map *m, unsigned long w_f, + unsigned long ip, const char *w_fn, int sub_l) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) + return; + + if (m->nocheck) + return; + + flags = dept_enter(); + + __dept_wait(m, w_f, ip, w_fn, sub_l, false, false); + + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_wait); + +void dept_stage_wait(struct dept_map *m, struct dept_key *k, + unsigned long ip, const char *w_fn) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + if (m && m->nocheck) + return; + + /* + * Either m or k should be passed. Which means Dept relies on + * either its own map or the caller's position in the code when + * determining its class. + */ + if (DEPT_WARN_ON(!m && !k)) + return; + + /* + * Allow recursive entrance. + */ + flags = dept_enter_recursive(); + + arch_spin_lock(&stage_spin); + + /* + * Ensure the outmost dept_stage_wait() works. + */ + if (dt->stage_m.keys) + goto unlock; + + if (m) { + dt->stage_m = *m; + + /* + * Ensure dt->stage_m.keys != NULL and it works with the + * map's map_key, not stage_m's one when ->keys == NULL. + */ + if (!m->keys) + dt->stage_m.keys = &m->map_key; + } else { + dt->stage_m.name = w_fn; + dt->stage_sched_map = true; + } + + /* + * dept_map_reinit() includes WRITE_ONCE(->wgen, 0U) that + * effectively disables the map just in case real sleep won't + * happen. dept_request_event_wait_commit() will enable it. + */ + dept_map_reinit(&dt->stage_m, k, -1, NULL); + + dt->stage_w_fn = w_fn; + dt->stage_ip = ip; +unlock: + arch_spin_unlock(&stage_spin); + + dept_exit_recursive(flags); +} +EXPORT_SYMBOL_GPL(dept_stage_wait); + +void dept_clean_stage(void) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + /* + * Allow recursive entrance. + */ + flags = dept_enter_recursive(); + + arch_spin_lock(&stage_spin); + memset(&dt->stage_m, 0x0, sizeof(struct dept_map)); + dt->stage_sched_map = false; + dt->stage_w_fn = NULL; + dt->stage_ip = 0UL; + arch_spin_unlock(&stage_spin); + + dept_exit_recursive(flags); +} +EXPORT_SYMBOL_GPL(dept_clean_stage); + +/* + * Always called from __schedule(). + */ +void dept_request_event_wait_commit(void) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + unsigned int wg; + unsigned long ip; + const char *w_fn; + bool sched_map; + + if (unlikely(!dept_working())) + return; + + /* + * It's impossible that __schedule() is called while Dept is + * working that already disabled IRQ at the entrance. + */ + if (DEPT_WARN_ON(dt->recursive)) + return; + + flags = dept_enter(); + + /* + * Checks if current has staged a wait. + */ + if (!dt->stage_m.keys) + goto exit; + + w_fn = dt->stage_w_fn; + ip = dt->stage_ip; + sched_map = dt->stage_sched_map; + + /* + * Avoid zero wgen. + */ + wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen); + WRITE_ONCE(dt->stage_m.wgen, wg); + + __dept_wait(&dt->stage_m, 1UL, ip, w_fn, 0, true, sched_map); +exit: + dept_exit(flags); +} + +/* + * Always called from try_to_wake_up(). 
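+ *
+ * This is the event side of the stage/commit protocol: the waker
+ * fires the wait that the wakee staged via dept_stage_wait() and
+ * committed in __schedule().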
+ */ +void dept_stage_event(struct task_struct *t, unsigned long ip) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + struct dept_map m; + bool sched_map; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) + return; + + flags = dept_enter(); + + arch_spin_lock(&stage_spin); + m = t->dept_task.stage_m; + sched_map = t->dept_task.stage_sched_map; + arch_spin_unlock(&stage_spin); + + /* + * ->stage_m.keys should not be NULL if it's in use. Should + * make sure that it's not NULL when staging a valid map. + */ + if (!m.keys) + goto exit; + + __dept_event(&m, 1UL, ip, "try_to_wake_up", sched_map); +exit: + dept_exit(flags); +} + +/* + * Modifies the latest ecxt corresponding to m and e_f. + */ +void dept_map_ecxt_modify(struct dept_map *m, unsigned long e_f, + struct dept_key *new_k, unsigned long new_e_f, + unsigned long new_ip, const char *new_c_fn, + const char *new_e_fn, int new_sub_l) +{ + struct dept_task *dt = dept_task(); + struct dept_ecxt_held *eh; + struct dept_class *c; + struct dept_key *k; + unsigned long flags; + int pos = -1; + int new_e; + int e; + + if (unlikely(!dept_working())) + return; + + /* + * XXX: Couldn't handle re-enterance cases. Ingore it for now. + */ + if (dt->recursive) + return; + + /* + * Should go ahead no matter whether ->nocheck == true or not + * because ->nocheck value can be changed within the ecxt area + * delimitated by dept_ecxt_enter() and dept_ecxt_exit(). + */ + + flags = dept_enter(); + + for_each_set_bit(e, &e_f, DEPT_MAX_SUBCLASSES_EVT) { + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, + sub_id(m, e), m->name, false); + if (!c) + continue; + + /* + * When it found an ecxt for any event in e_f, done. + */ + pos = find_ecxt_pos(m, c, true); + if (pos != -1) + break; + } + + if (unlikely(pos == -1)) + goto exit; + + eh = dt->ecxt_held + pos; + new_sub_l = new_sub_l >= 0 ? new_sub_l : eh->sub_l; + + new_e = find_first_bit(&new_e_f, DEPT_MAX_SUBCLASSES_EVT); + + if (new_e < DEPT_MAX_SUBCLASSES_EVT) + /* + * Let it work with the first bit anyway. + */ + DEPT_WARN_ON(1UL << new_e != new_e_f); + else + new_e = e; + + pop_ecxt(m, c); + + /* + * Apply the key to the map. + */ + if (new_k) + dept_map_reinit(m, new_k, -1, NULL); + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, sub_id(m, new_e), m->name, false); + + if (c && add_ecxt(m, c, new_ip, new_c_fn, new_e_fn, new_sub_l)) + goto exit; + + /* + * Successfully pop_ecxt()ed but failed to add_ecxt(). + */ + dt->missing_ecxt++; +exit: + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_map_ecxt_modify); + +void dept_ecxt_enter(struct dept_map *m, unsigned long e_f, unsigned long ip, + const char *c_fn, const char *e_fn, int sub_l) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + struct dept_class *c; + struct dept_key *k; + int e; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) { + dt->missing_ecxt++; + return; + } + + /* + * Should go ahead no matter whether ->nocheck == true or not + * because ->nocheck value can be changed within the ecxt area + * delimitated by dept_ecxt_enter() and dept_ecxt_exit(). + */ + + flags = dept_enter(); + + e = find_first_bit(&e_f, DEPT_MAX_SUBCLASSES_EVT); + + if (e >= DEPT_MAX_SUBCLASSES_EVT) + goto missing_ecxt; + + /* + * An event is an event. If the caller passed more than single + * event, then warn it and handle the event corresponding to + * the first bit anyway. 
+ */ + DEPT_WARN_ON(1UL << e != e_f); + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, sub_id(m, e), m->name, false); + + if (c && add_ecxt(m, c, ip, c_fn, e_fn, sub_l)) + goto exit; +missing_ecxt: + dt->missing_ecxt++; +exit: + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_ecxt_enter); + +bool dept_ecxt_holding(struct dept_map *m, unsigned long e_f) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + bool ret = false; + int e; + + if (unlikely(!dept_working())) + return false; + + if (dt->recursive) + return false; + + flags = dept_enter(); + + for_each_set_bit(e, &e_f, DEPT_MAX_SUBCLASSES_EVT) { + struct dept_class *c; + struct dept_key *k; + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, + sub_id(m, e), m->name, false); + if (!c) + continue; + + if (find_ecxt_pos(m, c, true) != -1) { + ret = true; + break; + } + } + + dept_exit(flags); + + return ret; +} +EXPORT_SYMBOL_GPL(dept_ecxt_holding); + +void dept_request_event(struct dept_map *m) +{ + unsigned long flags; + unsigned int wg; + + if (unlikely(!dept_working())) + return; + + if (m->nocheck) + return; + + /* + * Allow recursive entrance. + */ + flags = dept_enter_recursive(); + + /* + * Avoid zero wgen. + */ + wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen); + WRITE_ONCE(m->wgen, wg); + + dept_exit_recursive(flags); +} +EXPORT_SYMBOL_GPL(dept_request_event); + +void dept_event(struct dept_map *m, unsigned long e_f, + unsigned long ip, const char *e_fn) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) { + /* + * Dept won't work with this even though an event + * context has been asked. Don't make it confused at + * handling the event. Disable it until the next. + */ + WRITE_ONCE(m->wgen, 0U); + return; + } + + if (m->nocheck) + return; + + flags = dept_enter(); + + __dept_event(m, e_f, ip, e_fn, false); + + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_event); + +void dept_ecxt_exit(struct dept_map *m, unsigned long e_f, + unsigned long ip) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + int e; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) { + dt->missing_ecxt--; + return; + } + + /* + * Should go ahead no matter whether ->nocheck == true or not + * because ->nocheck value can be changed within the ecxt area + * delimitated by dept_ecxt_enter() and dept_ecxt_exit(). + */ + + flags = dept_enter(); + + for_each_set_bit(e, &e_f, DEPT_MAX_SUBCLASSES_EVT) { + struct dept_class *c; + struct dept_key *k; + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, + sub_id(m, e), m->name, false); + if (!c) + continue; + + /* + * When it found an ecxt for any event in e_f, done. 
+ */ + if (pop_ecxt(m, c)) + goto exit; + } + + dt->missing_ecxt--; +exit: + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_ecxt_exit); + +void dept_task_exit(struct task_struct *t) +{ + struct dept_task *dt = &t->dept_task; + int i; + + if (unlikely(!dept_working())) + return; + + raw_local_irq_disable(); + + if (dt->stack) + put_stack(dt->stack); + + for (i = 0; i < dt->ecxt_held_pos; i++) { + if (dt->ecxt_held[i].class) + put_class(dt->ecxt_held[i].class); + if (dt->ecxt_held[i].ecxt) + put_ecxt(dt->ecxt_held[i].ecxt); + } + + for (i = 0; i < DEPT_MAX_WAIT_HIST; i++) + if (dt->wait_hist[i].wait) + put_wait(dt->wait_hist[i].wait); + + dt->task_exit = true; + dept_off(); + + raw_local_irq_enable(); +} + +void dept_task_init(struct task_struct *t) +{ + memset(&t->dept_task, 0x0, sizeof(struct dept_task)); +} + +void dept_key_init(struct dept_key *k) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + int sub_id; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) { + DEPT_STOP("Key initialization fails.\n"); + return; + } + + flags = dept_enter(); + + clean_classes_cache(k); + + /* + * dept_key_init() should not fail. + * + * FIXME: Should be fixed if dept_key_init() causes deadlock + * with dept_lock(). + */ + while (unlikely(!dept_lock())) + cpu_relax(); + + for (sub_id = 0; sub_id < DEPT_MAX_SUBCLASSES; sub_id++) { + struct dept_class *c; + + c = lookup_class((unsigned long)k->base + sub_id); + if (!c) + continue; + + DEPT_STOP("The class(%s/%d) has not been removed.\n", + c->name, sub_id); + break; + } + + dept_unlock(); + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_key_init); + +void dept_key_destroy(struct dept_key *k) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + int sub_id; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive == 1 && dt->task_exit) { + /* + * Need to allow to go ahead in this case where + * ->recursive has been set to 1 by dept_off() in + * dept_task_exit() and ->task_exit has been set to + * true in dept_task_exit(). + */ + } else if (dt->recursive) { + DEPT_STOP("Key destroying fails.\n"); + return; + } + + flags = dept_enter(); + + /* + * dept_key_destroy() should not fail. + * + * FIXME: Should be fixed if dept_key_destroy() causes deadlock + * with dept_lock(). + */ + while (unlikely(!dept_lock())) + cpu_relax(); + + for (sub_id = 0; sub_id < DEPT_MAX_SUBCLASSES; sub_id++) { + struct dept_class *c; + + c = lookup_class((unsigned long)k->base + sub_id); + if (!c) + continue; + + hash_del_class(c); + disconnect_class(c); + list_del(&c->all_node); + invalidate_class(c); + + /* + * Actual deletion will happen on the rcu callback + * that has been added in disconnect_class(). + */ + del_class(c); + } + + dept_unlock(); + dept_exit(flags); + + /* + * Wait until even lockless hash_lookup_class() for the class + * returns NULL. + */ + might_sleep(); + synchronize_rcu(); +} +EXPORT_SYMBOL_GPL(dept_key_destroy); + +static void move_llist(struct llist_head *to, struct llist_head *from) +{ + struct llist_node *first = llist_del_all(from); + struct llist_node *last; + + if (!first) + return; + + for (last = first; last->next; last = last->next); + llist_add_batch(first, last, to); +} + +static void migrate_per_cpu_pool(void) +{ + const int boot_cpu = 0; + int i; + + /* + * The boot CPU has been using the temperal local pool so far. + * From now on that per_cpu areas have been ready, use the + * per_cpu local pool instead. 
+	 */
+	DEPT_WARN_ON(smp_processor_id() != boot_cpu);
+	for (i = 0; i < OBJECT_NR; i++) {
+		struct llist_head *from;
+		struct llist_head *to;
+
+		from = &pool[i].boot_pool;
+		to = per_cpu_ptr(pool[i].lpool, boot_cpu);
+		move_llist(to, from);
+	}
+}
+
+#define B2KB(B) ((B) / 1024)
+
+/*
+ * Should be called after setup_per_cpu_areas() and before any
+ * non-boot CPU comes online.
+ */
+void __init dept_init(void)
+{
+	size_t mem_total = 0;
+
+	local_irq_disable();
+	dept_per_cpu_ready = 1;
+	migrate_per_cpu_pool();
+	local_irq_enable();
+
+#define HASH(id, bits) BUILD_BUG_ON(1 << (bits) <= 0);
+	#include "dept_hash.h"
+#undef  HASH
+#define OBJECT(id, nr) mem_total += sizeof(struct dept_##id) * nr;
+	#include "dept_object.h"
+#undef  OBJECT
+#define HASH(id, bits) mem_total += sizeof(struct hlist_head) * (1 << (bits));
+	#include "dept_hash.h"
+#undef  HASH
+
+	pr_info("DEPendency Tracker: Copyright (c) 2020 LG Electronics, Inc., Byungchul Park\n");
+	pr_info("... DEPT_MAX_STACK_ENTRY: %d\n", DEPT_MAX_STACK_ENTRY);
+	pr_info("... DEPT_MAX_WAIT_HIST  : %d\n", DEPT_MAX_WAIT_HIST);
+	pr_info("... DEPT_MAX_ECXT_HELD  : %d\n", DEPT_MAX_ECXT_HELD);
+	pr_info("... DEPT_MAX_SUBCLASSES : %d\n", DEPT_MAX_SUBCLASSES);
+#define OBJECT(id, nr)							\
+	pr_info("... memory used by %s: %zu KB\n",			\
+		#id, B2KB(sizeof(struct dept_##id) * nr));
+	#include "dept_object.h"
+#undef  OBJECT
+#define HASH(id, bits)							\
+	pr_info("... hash list head used by %s: %zu KB\n",		\
+		#id, B2KB(sizeof(struct hlist_head) * (1 << (bits))));
+	#include "dept_hash.h"
+#undef  HASH
+	pr_info("... total memory used by objects and hashes: %zu KB\n", B2KB(mem_total));
+	pr_info("... per task memory footprint: %zu bytes\n", sizeof(struct dept_task));
+}
diff --git a/kernel/dependency/dept_hash.h b/kernel/dependency/dept_hash.h
new file mode 100644
index 000000000000..fd85aab1fdfb
--- /dev/null
+++ b/kernel/dependency/dept_hash.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * HASH(id, bits)
+ *
+ * id  : Id for the object of struct dept_##id.
+ * bits: 1UL << bits is the hash table size.
+ */
+
+HASH(dep, 12)
+HASH(class, 12)
diff --git a/kernel/dependency/dept_object.h b/kernel/dependency/dept_object.h
new file mode 100644
index 000000000000..0b7eb16fe9fb
--- /dev/null
+++ b/kernel/dependency/dept_object.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * OBJECT(id, nr)
+ *
+ * id: Id for the object of struct dept_##id.
+ * nr: Number of objects that should be kept in the pool.
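+ *
+ * Each OBJECT() below is expanded by whoever includes this header
+ * with its own OBJECT() definition, e.g. the memory accounting and
+ * the boot-time report in dept_init().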
+ */ + +OBJECT(dep, 1024 * 8) +OBJECT(class, 1024 * 8) +OBJECT(stack, 1024 * 32) +OBJECT(ecxt, 1024 * 16) +OBJECT(wait, 1024 * 32) diff --git a/kernel/exit.c b/kernel/exit.c index edb50b4c9972..8d3850eded25 100644 --- a/kernel/exit.c +++ b/kernel/exit.c @@ -923,6 +923,7 @@ void __noreturn do_exit(long code) exit_tasks_rcu_finish(); lockdep_free_task(tsk); + dept_task_exit(tsk); do_task_dead(); } diff --git a/kernel/fork.c b/kernel/fork.c index 41c964104b58..20fa77c47db2 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -99,6 +99,7 @@ #include #include #include +#include #include #include @@ -2460,6 +2461,7 @@ __latent_entropy struct task_struct *copy_process( #ifdef CONFIG_LOCKDEP lockdep_init_task(p); #endif + dept_task_init(p); #ifdef CONFIG_DEBUG_MUTEXES p->blocked_on = NULL; /* not blocked yet */ diff --git a/kernel/module/main.c b/kernel/module/main.c index 4e2cf784cf8c..aa4bd4dcc9aa 100644 --- a/kernel/module/main.c +++ b/kernel/module/main.c @@ -1235,12 +1235,14 @@ static void free_mod_mem(struct module *mod) /* Free lock-classes; relies on the preceding sync_rcu(). */ lockdep_free_key_range(mod_mem->base, mod_mem->size); + dept_free_range(mod_mem->base, mod_mem->size); if (mod_mem->size) module_memory_free(mod_mem->base, type); } /* MOD_DATA hosts mod, so free it at last */ lockdep_free_key_range(mod->mem[MOD_DATA].base, mod->mem[MOD_DATA].size); + dept_free_range(mod->mem[MOD_DATA].base, mod->mem[MOD_DATA].size); module_memory_free(mod->mem[MOD_DATA].base, MOD_DATA); } @@ -3019,6 +3021,8 @@ static int load_module(struct load_info *info, const char __user *uargs, for_class_mod_mem_type(type, core_data) { lockdep_free_key_range(mod->mem[type].base, mod->mem[type].size); + dept_free_range(mod->mem[type].base, + mod->mem[type].size); } module_deallocate(mod, info); diff --git a/kernel/sched/core.c b/kernel/sched/core.c index a68d1276bab0..243f3de42721 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -64,6 +64,7 @@ #include #include #include +#include #ifdef CONFIG_PREEMPT_DYNAMIC # ifdef CONFIG_GENERIC_ENTRY @@ -4162,6 +4163,7 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags) int cpu, success = 0; preempt_disable(); + dept_stage_event(p, _RET_IP_); if (p == current) { /* * We're waking current, this means 'p->on_rq' and 'task_cpu(p) @@ -6560,6 +6562,12 @@ static void __sched notrace __schedule(unsigned int sched_mode) rq = cpu_rq(cpu); prev = rq->curr; + prev_state = READ_ONCE(prev->__state); + if (sched_mode != SM_PREEMPT && prev_state & TASK_NORMAL) + dept_request_event_wait_commit(); + + dept_sched_enter(); + schedule_debug(prev, !!sched_mode); if (sched_feat(HRTICK) || sched_feat(HRTICK_DL)) @@ -6674,6 +6682,7 @@ static void __sched notrace __schedule(unsigned int sched_mode) __balance_callbacks(rq); raw_spin_rq_unlock_irq(rq); } + dept_sched_exit(); } void __noreturn do_task_dead(void) diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index ce51d4dc6803..aa62caa4dc14 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -1207,6 +1207,33 @@ config DEBUG_PREEMPT menu "Lock Debugging (spinlocks, mutexes, etc...)" +config DEPT + bool "Dependency tracking (EXPERIMENTAL)" + depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT + select DEBUG_SPINLOCK + select DEBUG_MUTEXES + select DEBUG_RT_MUTEXES if RT_MUTEXES + select DEBUG_RWSEMS + select DEBUG_WW_MUTEX_SLOWPATH + select DEBUG_LOCK_ALLOC + select TRACE_IRQFLAGS + select STACKTRACE + select FRAME_POINTER if !MIPS && !PPC && !ARM && !S390 && !MICROBLAZE && !ARC && !X86 + select KALLSYMS + 
select KALLSYMS_ALL
+	select PROVE_LOCKING
+	default n
+	help
+	  Check dependencies between waits and events and report a
+	  deadlock if one is possible. Multiple reports are allowed if
+	  there is more than a single problem.
+
+	  This feature is EXPERIMENTAL and might produce false positive
+	  reports, because it starts to track dependencies that have
+	  never been tracked before. To mitigate the impact of false
+	  positives, multiple reporting is supported.
+
 config LOCK_DEBUGGING_SUPPORT
 	bool
 	depends on TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index 8d24279fad05..cd89138d62ba 100644
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -1398,6 +1398,8 @@ static void reset_locks(void)
 	local_irq_disable();
 	lockdep_free_key_range(&ww_lockdep.acquire_key, 1);
 	lockdep_free_key_range(&ww_lockdep.mutex_key, 1);
+	dept_free_range(&ww_lockdep.acquire_key, 1);
+	dept_free_range(&ww_lockdep.mutex_key, 1);
 
 	I1(A); I1(B); I1(C); I1(D);
 	I1(X1); I1(X2); I1(Y1); I1(Y2); I1(Z1); I1(Z2);
From patchwork Mon Aug 21 03:46:15 2023
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [RESEND
PATCH v10 03/25] dept: Add single event dependency tracker APIs
Date: Mon, 21 Aug 2023 12:46:15 +0900
Message-Id: <20230821034637.34630-4-byungchul@sk.com>
In-Reply-To: <20230821034637.34630-1-byungchul@sk.com>
References: <20230821034637.34630-1-byungchul@sk.com>

Wrapped the base APIs for easier annotation on wait and event. Start
with supporting waiters on each single event. More general support for
multiple events is future work. Do more when the need arises.

How to annotate (the simplest way):

1. Initialize a map for the interesting wait.

	/*
	 * Recommended to place along with the wait instance.
	 */
	struct dept_map my_wait;

	/*
	 * Recommended to place in the initialization code.
	 */
	sdt_map_init(&my_wait);

2. Place the following at the wait code.

	sdt_wait(&my_wait);

3. Place the following at the event code.

	sdt_event(&my_wait);

That's it!
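For instance, a flag-style wait between two threads could be annotated
like this (an illustrative sketch only; my_data, ready and the busy
wait below are made-up names, not part of this patch):

	struct my_data {
		bool ready;
		struct dept_map ready_wait;	/* along with the wait instance */
	};

	static void my_data_init(struct my_data *d)
	{
		d->ready = false;
		sdt_map_init(&d->ready_wait);	/* in the initialization code */
	}

	static void consumer(struct my_data *d)
	{
		while (!READ_ONCE(d->ready)) {
			sdt_wait(&d->ready_wait);	/* at the wait code */
			cpu_relax();
		}
	}

	static void producer(struct my_data *d)
	{
		WRITE_ONCE(d->ready, true);
		sdt_event(&d->ready_wait);	/* at the event code */
	}

With these in place, Dept can detect a circular dependency involving
this wait/event pair and report it.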
Signed-off-by: Byungchul Park
---
 include/linux/dept_sdt.h | 62 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 62 insertions(+)
 create mode 100644 include/linux/dept_sdt.h

diff --git a/include/linux/dept_sdt.h b/include/linux/dept_sdt.h
new file mode 100644
index 000000000000..12a793b90c7e
--- /dev/null
+++ b/include/linux/dept_sdt.h
@@ -0,0 +1,62 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Single-event Dependency Tracker
+ *
+ * Started by Byungchul Park :
+ *
+ *   Copyright (c) 2020 LG Electronics, Inc., Byungchul Park
+ */
+
+#ifndef __LINUX_DEPT_SDT_H
+#define __LINUX_DEPT_SDT_H
+
+#include 
+#include 
+
+#ifdef CONFIG_DEPT
+#define sdt_map_init(m)						\
+	do {							\
+		static struct dept_key __key;			\
+		dept_map_init(m, &__key, 0, #m);		\
+	} while (0)
+
+#define sdt_map_init_key(m, k)		dept_map_init(m, k, 0, #m)
+
+#define sdt_wait(m)						\
+	do {							\
+		dept_request_event(m);				\
+		dept_wait(m, 1UL, _THIS_IP_, __func__, 0);	\
+	} while (0)
+
+/*
+ * sdt_might_sleep() and its family will be committed in __schedule()
+ * when it actually gets to __schedule(). Both dept_request_event() and
+ * dept_wait() will be performed on the commit.
+ */
+
+/*
+ * Use the code location as the class key if an explicit map is not used.
+ */
+#define sdt_might_sleep_start(m)				\
+	do {							\
+		struct dept_map *__m = m;			\
+		static struct dept_key __key;			\
+		dept_stage_wait(__m, __m ? NULL : &__key, _THIS_IP_, __func__);\
+	} while (0)
+
+#define sdt_might_sleep_end()		dept_clean_stage()
+
+#define sdt_ecxt_enter(m)		dept_ecxt_enter(m, 1UL, _THIS_IP_, "start", "event", 0)
+#define sdt_event(m)			dept_event(m, 1UL, _THIS_IP_, __func__)
+#define sdt_ecxt_exit(m)		dept_ecxt_exit(m, 1UL, _THIS_IP_)
+#else /* !CONFIG_DEPT */
+#define sdt_map_init(m)			do { } while (0)
+#define sdt_map_init_key(m, k)		do { (void)(k); } while (0)
+#define sdt_wait(m)			do { } while (0)
+#define sdt_might_sleep_start(m)	do { } while (0)
+#define sdt_might_sleep_end()		do { } while (0)
+#define sdt_ecxt_enter(m)		do { } while (0)
+#define sdt_event(m)			do { } while (0)
+#define sdt_ecxt_exit(m)		do { } while (0)
+#endif
+#endif /* __LINUX_DEPT_SDT_H */
From patchwork Mon Aug 21 03:46:16 2023
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [RESEND PATCH v10 04/25] dept: Add lock dependency tracker APIs
Date: Mon, 21 Aug 2023 12:46:16 +0900
Message-Id: <20230821034637.34630-5-byungchul@sk.com>
In-Reply-To: <20230821034637.34630-1-byungchul@sk.com>
References: <20230821034637.34630-1-byungchul@sk.com>

Wrapped the base APIs for easier annotation on typical lock.

Signed-off-by: Byungchul Park
---
 include/linux/dept_ldt.h | 77 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 77 insertions(+)
 create mode 100644 include/linux/dept_ldt.h

diff --git a/include/linux/dept_ldt.h b/include/linux/dept_ldt.h
new file mode 100644
index 000000000000..062613e89fc3
--- /dev/null
+++ b/include/linux/dept_ldt.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Lock Dependency Tracker
+ *
+ * Started by Byungchul Park :
+ *
+ *   Copyright (c) 2020 LG Electronics, Inc., Byungchul Park
+ */
+
+#ifndef __LINUX_DEPT_LDT_H
+#define __LINUX_DEPT_LDT_H
+
+#include 
+
+#ifdef CONFIG_DEPT
+#define LDT_EVT_L		1UL
+#define LDT_EVT_R		2UL
+#define LDT_EVT_W		1UL
+#define LDT_EVT_RW		(LDT_EVT_R | LDT_EVT_W)
+#define LDT_EVT_ALL		(LDT_EVT_L | LDT_EVT_RW)
+
+#define ldt_init(m, k, su, n)		dept_map_init(m, k, su, n)
+#define ldt_lock(m, sl, t, n, i)				\
+	do {							\
+		if (n)						\
+			dept_ecxt_enter_nokeep(m);		\
+		else if (t)					\
+			dept_ecxt_enter(m, LDT_EVT_L, i, "trylock", "unlock", sl);\
+		else {						\
+			dept_wait(m, LDT_EVT_L, i, "lock", sl);	\
+			dept_ecxt_enter(m, LDT_EVT_L, i, "lock", "unlock", sl);\
+		}						\
+	} while (0)
+
+#define ldt_rlock(m, sl, t, n, i, q)				\
+	do {							\
+		if (n)						\
+			dept_ecxt_enter_nokeep(m);		\
+		else if (t)					\
+			dept_ecxt_enter(m, LDT_EVT_R, i, "read_trylock", "read_unlock", sl);\
+		else {						\
+			dept_wait(m, q ? LDT_EVT_RW : LDT_EVT_W, i, "read_lock", sl);\
+			dept_ecxt_enter(m, LDT_EVT_R, i, "read_lock", "read_unlock", sl);\
+		}						\
+	} while (0)
+
+#define ldt_wlock(m, sl, t, n, i)				\
+	do {							\
+		if (n)						\
+			dept_ecxt_enter_nokeep(m);		\
+		else if (t)					\
+			dept_ecxt_enter(m, LDT_EVT_W, i, "write_trylock", "write_unlock", sl);\
+		else {						\
+			dept_wait(m, LDT_EVT_RW, i, "write_lock", sl);	\
+			dept_ecxt_enter(m, LDT_EVT_W, i, "write_lock", "write_unlock", sl);\
+		}						\
+	} while (0)
+
+#define ldt_unlock(m, i)		dept_ecxt_exit(m, LDT_EVT_ALL, i)
+
+#define ldt_downgrade(m, i)					\
+	do {							\
+		if (dept_ecxt_holding(m, LDT_EVT_W))		\
+			dept_map_ecxt_modify(m, LDT_EVT_W, NULL, LDT_EVT_R, i, "downgrade", "read_unlock", -1);\
+	} while (0)
+
+#define ldt_set_class(m, n, k, sl, i)	dept_map_ecxt_modify(m, LDT_EVT_ALL, k, 0UL, i, "lock_set_class", "(any)unlock", sl)
+#else /* !CONFIG_DEPT */
+#define ldt_init(m, k, su, n)		do { (void)(k); } while (0)
+#define ldt_lock(m, sl, t, n, i)	do { } while (0)
+#define ldt_rlock(m, sl, t, n, i, q)	do { } while (0)
+#define ldt_wlock(m, sl, t, n, i)	do { } while (0)
+#define ldt_unlock(m, i)		do { } while (0)
+#define ldt_downgrade(m, i)		do { } while (0)
+#define ldt_set_class(m, n, k, sl, i)	do { } while (0)
+#endif
+#endif /* __LINUX_DEPT_LDT_H */
From patchwork Mon Aug 21 03:46:17 2023
From patchwork Mon Aug 21 03:46:17 2023
From: Byungchul Park <byungchul@sk.com>
Subject: [RESEND PATCH v10 05/25] dept: Tie to Lockdep and IRQ tracing
Date: Mon, 21 Aug 2023 12:46:17 +0900
Message-Id: <20230821034637.34630-6-byungchul@sk.com>
In-Reply-To: <20230821034637.34630-1-byungchul@sk.com>
References: <20230821034637.34630-1-byungchul@sk.com>
wFH1m4S2phYCrIUFJNwZ0VEwPKnHQe8ZpaG7UYtBbZavLWf8FwnNBY0Y5Nyqw8De+xDB49y3 GBiqXlFg8bgxMBo0OPy8/QyBS/2Fhuz8KRquZqgR5GVfJqDL20xC1kAUTP8oo7ZFcxb3KM5l GU9ypkktwT3Xsdz90jc0l/W4j+a0hhTOWB7C3Xw0hHE3xjwkZ6g4T3GGsSKau/DFjnEjnZ00 13JlmuDe24uxPUsOCLfIeIU8lVeu3xorjFd7P5BJvWfRqaZqL56OrEcvoAABy0SyFu8T0m+K Wc329Ezhfgcxy1hjwaBvLxTgzLlAtvxrB+UP5jPb2fqWi39NMMFs9fkS2m8RE8V+d2jxf6VL 2craxr8OYDaxhocPkN9i38035zviEhJq0awKFCRPTE2QyhVRYapj8WmJ8lNhcccTDMj3Nfoz M4UNaMK2w4wYAZLMFsUuccnEpDRVlZZgRqwAlwSJFn93ysQimTTtNK88fkiZouBVZrRYQEgW inbu42PFzFFpMn+M55N45f8UEwQsSkeayM1rXnyQx1piVu060dUcN6MLvRUStzcht/+lekF0 +IIttfcCrUdG5PHGZueTgW55vujKoGm/xh3QebDhaVdUXoptY5jEvV+hjiz7kTyvyLyaZNvG D+9ZGrqyvS7DtmrZfXLF3L7cutKhrTvdPcs/x+zzlq7L0XnqY/o3PL9rbzdLCFW8NCIEV6qk fwAR2FxtMQMAAA== X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Yes. How to place Dept in here looks so ugly. But it's inevitable as long as relying on Lockdep. The way should be enhanced gradually. 1. Basically relies on Lockdep to track typical locks and IRQ things. 2. Dept fails to recognize IRQ situation so it generates false alarms when raw_local_irq_*() APIs are used. So made it track those too. 3. Lockdep doesn't track the outmost {hard,soft}irq entracnes but Dept makes use of it. So made it track those too. Signed-off-by: Byungchul Park --- include/linux/irqflags.h | 22 +++++- include/linux/local_lock_internal.h | 1 + include/linux/lockdep.h | 102 ++++++++++++++++++++++------ include/linux/lockdep_types.h | 3 + include/linux/mutex.h | 1 + include/linux/percpu-rwsem.h | 2 +- include/linux/rtmutex.h | 1 + include/linux/rwlock_types.h | 1 + include/linux/rwsem.h | 1 + include/linux/seqlock.h | 2 +- include/linux/spinlock_types_raw.h | 3 + include/linux/srcu.h | 2 +- kernel/dependency/dept.c | 4 +- kernel/locking/lockdep.c | 23 +++++++ 14 files changed, 139 insertions(+), 29 deletions(-) diff --git a/include/linux/irqflags.h b/include/linux/irqflags.h index 5ec0fa71399e..0ebc5ec2dbd4 100644 --- a/include/linux/irqflags.h +++ b/include/linux/irqflags.h @@ -13,6 +13,7 @@ #define _LINUX_TRACE_IRQFLAGS_H #include +#include #include #include @@ -60,8 +61,10 @@ extern void trace_hardirqs_off(void); # define lockdep_softirqs_enabled(p) ((p)->softirqs_enabled) # define lockdep_hardirq_enter() \ do { \ - if (__this_cpu_inc_return(hardirq_context) == 1)\ + if (__this_cpu_inc_return(hardirq_context) == 1) { \ current->hardirq_threaded = 0; \ + dept_hardirq_enter(); \ + } \ } while (0) # define lockdep_hardirq_threaded() \ do { \ @@ -136,6 +139,8 @@ do { \ # define lockdep_softirq_enter() \ do { \ current->softirq_context++; \ + if (current->softirq_context == 1) \ + dept_softirq_enter(); \ } while (0) # define lockdep_softirq_exit() \ do { \ @@ -170,17 +175,28 @@ extern void warn_bogus_irq_restore(void); /* * Wrap the arch provided IRQ routines to provide appropriate checks. 
  */
-#define raw_local_irq_disable()		arch_local_irq_disable()
-#define raw_local_irq_enable()		arch_local_irq_enable()
+#define raw_local_irq_disable()				\
+	do {						\
+		arch_local_irq_disable();		\
+		dept_hardirqs_off();			\
+	} while (0)
+#define raw_local_irq_enable()				\
+	do {						\
+		dept_hardirqs_on();			\
+		arch_local_irq_enable();		\
+	} while (0)
 #define raw_local_irq_save(flags)			\
 	do {						\
 		typecheck(unsigned long, flags);	\
 		flags = arch_local_irq_save();		\
+		dept_hardirqs_off();			\
 	} while (0)
 #define raw_local_irq_restore(flags)			\
 	do {						\
 		typecheck(unsigned long, flags);	\
 		raw_check_bogus_irq_restore();		\
+		if (!arch_irqs_disabled_flags(flags))	\
+			dept_hardirqs_on();		\
 		arch_local_irq_restore(flags);		\
 	} while (0)
 #define raw_local_save_flags(flags)			\

diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h
index 975e33b793a7..39f67788fd95 100644
--- a/include/linux/local_lock_internal.h
+++ b/include/linux/local_lock_internal.h
@@ -21,6 +21,7 @@ typedef struct {
 		.name = #lockname,			\
 		.wait_type_inner = LD_WAIT_CONFIG,	\
 		.lock_type = LD_LOCK_PERCPU,		\
+		.dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\
 	},						\
 	.owner = NULL,

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 74bd269a80a2..f6bf7567b8df 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -12,6 +12,7 @@
 
 #include 
+#include 
 #include 
 
 struct task_struct;
@@ -39,6 +40,8 @@ static inline void lockdep_copy_map(struct lockdep_map *to,
 	 */
 	for (i = 0; i < NR_LOCKDEP_CACHING_CLASSES; i++)
 		to->class_cache[i] = NULL;
+
+	dept_map_copy(&to->dmap, &from->dmap);
 }
 
 /*
@@ -458,7 +461,8 @@ enum xhlock_context_t {
  * Note that _name must not be NULL.
  */
 #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \
-	{ .name = (_name), .key = (void *)(_key), }
+	{ .name = (_name), .key = (void *)(_key), \
+	  .dmap = DEPT_MAP_INITIALIZER(_name, _key) }
 
 static inline void lockdep_invariant_state(bool force) {}
 static inline void lockdep_free_task(struct task_struct *task) {}
@@ -540,33 +544,89 @@ extern bool read_lock_is_recursive(void);
 #define lock_acquire_shared(l, s, t, n, i)		lock_acquire(l, s, t, 1, 1, n, i)
 #define lock_acquire_shared_recursive(l, s, t, n, i)	lock_acquire(l, s, t, 2, 1, n, i)
 
-#define spin_acquire(l, s, t, i)		lock_acquire_exclusive(l, s, t, NULL, i)
-#define spin_acquire_nest(l, s, t, n, i)	lock_acquire_exclusive(l, s, t, n, i)
-#define spin_release(l, i)			lock_release(l, i)
-
-#define rwlock_acquire(l, s, t, i)		lock_acquire_exclusive(l, s, t, NULL, i)
+#define spin_acquire(l, s, t, i)			\
+do {							\
+	ldt_lock(&(l)->dmap, s, t, NULL, i);		\
+	lock_acquire_exclusive(l, s, t, NULL, i);	\
+} while (0)
+#define spin_acquire_nest(l, s, t, n, i)		\
+do {							\
+	ldt_lock(&(l)->dmap, s, t, n, i);		\
+	lock_acquire_exclusive(l, s, t, n, i);		\
+} while (0)
+#define spin_release(l, i)				\
+do {							\
+	ldt_unlock(&(l)->dmap, i);			\
+	lock_release(l, i);				\
+} while (0)
+#define rwlock_acquire(l, s, t, i)			\
+do {							\
+	ldt_wlock(&(l)->dmap, s, t, NULL, i);		\
+	lock_acquire_exclusive(l, s, t, NULL, i);	\
+} while (0)
 #define rwlock_acquire_read(l, s, t, i)			\
 do {							\
+	ldt_rlock(&(l)->dmap, s, t, NULL, i, !read_lock_is_recursive());\
 	if (read_lock_is_recursive())			\
 		lock_acquire_shared_recursive(l, s, t, NULL, i);	\
 	else						\
 		lock_acquire_shared(l, s, t, NULL, i);	\
 } while (0)
-
-#define rwlock_release(l, i)			lock_release(l, i)
-
-#define seqcount_acquire(l, s, t, i)		lock_acquire_exclusive(l, s, t, NULL, i)
-#define seqcount_acquire_read(l, s, t, i)	lock_acquire_shared_recursive(l, s, t, NULL, i)
-#define seqcount_release(l, i)			lock_release(l, i)
-
-#define mutex_acquire(l, s, t, i)		lock_acquire_exclusive(l, s, t, NULL, i)
-#define mutex_acquire_nest(l, s, t, n, i)	lock_acquire_exclusive(l, s, t, n, i)
-#define mutex_release(l, i)			lock_release(l, i)
-
-#define rwsem_acquire(l, s, t, i)		lock_acquire_exclusive(l, s, t, NULL, i)
-#define rwsem_acquire_nest(l, s, t, n, i)	lock_acquire_exclusive(l, s, t, n, i)
-#define rwsem_acquire_read(l, s, t, i)		lock_acquire_shared(l, s, t, NULL, i)
-#define rwsem_release(l, i)			lock_release(l, i)
+#define rwlock_release(l, i)				\
+do {							\
+	ldt_unlock(&(l)->dmap, i);			\
+	lock_release(l, i);				\
+} while (0)
+#define seqcount_acquire(l, s, t, i)			\
+do {							\
+	ldt_wlock(&(l)->dmap, s, t, NULL, i);		\
+	lock_acquire_exclusive(l, s, t, NULL, i);	\
+} while (0)
+#define seqcount_acquire_read(l, s, t, i)		\
+do {							\
+	ldt_rlock(&(l)->dmap, s, t, NULL, i, false);	\
+	lock_acquire_shared_recursive(l, s, t, NULL, i);	\
+} while (0)
+#define seqcount_release(l, i)				\
+do {							\
+	ldt_unlock(&(l)->dmap, i);			\
+	lock_release(l, i);				\
+} while (0)
+#define mutex_acquire(l, s, t, i)			\
+do {							\
+	ldt_lock(&(l)->dmap, s, t, NULL, i);		\
+	lock_acquire_exclusive(l, s, t, NULL, i);	\
+} while (0)
+#define mutex_acquire_nest(l, s, t, n, i)		\
+do {							\
+	ldt_lock(&(l)->dmap, s, t, n, i);		\
+	lock_acquire_exclusive(l, s, t, n, i);		\
+} while (0)
+#define mutex_release(l, i)				\
+do {							\
+	ldt_unlock(&(l)->dmap, i);			\
+	lock_release(l, i);				\
+} while (0)
+#define rwsem_acquire(l, s, t, i)			\
+do {							\
+	ldt_lock(&(l)->dmap, s, t, NULL, i);		\
+	lock_acquire_exclusive(l, s, t, NULL, i);	\
+} while (0)
+#define rwsem_acquire_nest(l, s, t, n, i)		\
+do {							\
+	ldt_lock(&(l)->dmap, s, t, n, i);		\
+	lock_acquire_exclusive(l, s, t, n, i);		\
+} while (0)
+#define rwsem_acquire_read(l, s, t, i)			\
+do {							\
+	ldt_lock(&(l)->dmap, s, t, NULL, i);		\
+	lock_acquire_shared(l, s, t, NULL, i);		\
+} while (0)
+#define rwsem_release(l, i)				\
+do {							\
+	ldt_unlock(&(l)->dmap, i);			\
+	lock_release(l, i);				\
+} while (0)
 
 #define lock_map_acquire(l)		lock_acquire_exclusive(l, 0, 0, NULL, _THIS_IP_)
 #define lock_map_acquire_try(l)		lock_acquire_exclusive(l, 0, 1, NULL, _THIS_IP_)

diff --git a/include/linux/lockdep_types.h b/include/linux/lockdep_types.h
index 59f4fb1626ea..fc3e0c136b86 100644
--- a/include/linux/lockdep_types.h
+++ b/include/linux/lockdep_types.h
@@ -11,6 +11,7 @@
 #define __LINUX_LOCKDEP_TYPES_H
 
 #include 
+#include 
 
 #define MAX_LOCKDEP_SUBCLASSES		8UL
 
@@ -77,6 +78,7 @@ struct lock_class_key {
 		struct hlist_node		hash_entry;
 		struct lockdep_subclass_key	subkeys[MAX_LOCKDEP_SUBCLASSES];
 	};
+	struct dept_key				dkey;
 };
 
 extern struct lock_class_key __lockdep_no_validate__;
@@ -186,6 +188,7 @@ struct lockdep_map {
 	int				cpu;
 	unsigned long			ip;
 #endif
+	struct dept_map			dmap;
 };
 
 struct pin_cookie { unsigned int val; };

diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 8f226d460f51..58bf314eddeb 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -25,6 +25,7 @@
 		, .dep_map = {					\
 			.name = #lockname,			\
 			.wait_type_inner = LD_WAIT_SLEEP,	\
+			.dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\
 		}
 #else
 # define __DEP_MAP_MUTEX_INITIALIZER(lockname)

diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h
index 36b942b67b7d..e871aca04645 100644
--- a/include/linux/percpu-rwsem.h
+++ b/include/linux/percpu-rwsem.h
@@ -21,7 +21,7 @@ struct percpu_rw_semaphore {
 };
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
-#define __PERCPU_RWSEM_DEP_MAP_INIT(lockname)	.dep_map = { .name = #lockname },
+#define __PERCPU_RWSEM_DEP_MAP_INIT(lockname)	.dep_map = { .name = #lockname, .dmap = DEPT_MAP_INITIALIZER(lockname, NULL) },
 #else
 #define __PERCPU_RWSEM_DEP_MAP_INIT(lockname)
 #endif

diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h
index 7d049883a08a..35889ac5eeae 100644
--- a/include/linux/rtmutex.h
+++ b/include/linux/rtmutex.h
@@ -81,6 +81,7 @@ do { \
 	.dep_map = {					\
 		.name = #mutexname,			\
 		.wait_type_inner = LD_WAIT_SLEEP,	\
+		.dmap = DEPT_MAP_INITIALIZER(mutexname, NULL),\
 	}
 #else
 #define __DEP_MAP_RT_MUTEX_INITIALIZER(mutexname)

diff --git a/include/linux/rwlock_types.h b/include/linux/rwlock_types.h
index 1948442e7750..6e58dfc84997 100644
--- a/include/linux/rwlock_types.h
+++ b/include/linux/rwlock_types.h
@@ -10,6 +10,7 @@
 	.dep_map = {					\
 		.name = #lockname,			\
 		.wait_type_inner = LD_WAIT_CONFIG,	\
+		.dmap = DEPT_MAP_INITIALIZER(lockname, NULL),	\
 	}
 #else
 # define RW_DEP_MAP_INIT(lockname)

diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index efa5c324369a..4f856e745dce 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -21,6 +21,7 @@
 	.dep_map = {					\
 		.name = #lockname,			\
 		.wait_type_inner = LD_WAIT_SLEEP,	\
+		.dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\
 	},
 #else
 # define __RWSEM_DEP_MAP_INIT(lockname)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 3926e9027947..6ba00bcbc11a 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -81,7 +81,7 @@ static inline void __seqcount_init(seqcount_t *s, const char *name,
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 # define SEQCOUNT_DEP_MAP_INIT(lockname)		\
-		.dep_map = { .name = #lockname }
+		.dep_map = { .name = #lockname, .dmap = DEPT_MAP_INITIALIZER(lockname, NULL) }
 
 /**
  * seqcount_init() - runtime initializer for seqcount_t

diff --git a/include/linux/spinlock_types_raw.h b/include/linux/spinlock_types_raw.h
index 91cb36b65a17..3dcc551ded25 100644
--- a/include/linux/spinlock_types_raw.h
+++ b/include/linux/spinlock_types_raw.h
@@ -31,11 +31,13 @@ typedef struct raw_spinlock {
 	.dep_map = {					\
 		.name = #lockname,			\
 		.wait_type_inner = LD_WAIT_SPIN,	\
+		.dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\
 	}
 # define SPIN_DEP_MAP_INIT(lockname)			\
 	.dep_map = {					\
 		.name = #lockname,			\
 		.wait_type_inner = LD_WAIT_CONFIG,	\
+		.dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\
 	}
 
 # define LOCAL_SPIN_DEP_MAP_INIT(lockname)		\
@@ -43,6 +45,7 @@ typedef struct raw_spinlock {
 		.name = #lockname,			\
 		.wait_type_inner = LD_WAIT_CONFIG,	\
 		.lock_type = LD_LOCK_PERCPU,		\
+		.dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\
 	}
 #else
 # define RAW_SPIN_DEP_MAP_INIT(lockname)

diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index 41c4b26fb1c1..49efe1f427fa 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -35,7 +35,7 @@ int __init_srcu_struct(struct srcu_struct *ssp, const char *name,
 	__init_srcu_struct((ssp), #ssp, &__srcu_key); \
 })
 
-#define __SRCU_DEP_MAP_INIT(srcu_name)	.dep_map = { .name = #srcu_name },
+#define __SRCU_DEP_MAP_INIT(srcu_name)	.dep_map = { .name = #srcu_name, .dmap = DEPT_MAP_INITIALIZER(srcu_name, NULL) },
 
 #else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
 
 int init_srcu_struct(struct srcu_struct *ssp);

diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c
index 8ec638254e5f..d3b6d2f4cd7b 100644
--- a/kernel/dependency/dept.c
+++ b/kernel/dependency/dept.c
@@ -245,10 +245,10 @@ static inline bool dept_working(void)
  * Even k == NULL is considered as a valid key because it would use
  * &->map_key as the key in that case.
  */
-struct dept_key __dept_no_validate__;
+extern struct lock_class_key __lockdep_no_validate__;
 
 static inline bool valid_key(struct dept_key *k)
 {
-	return &__dept_no_validate__ != k;
+	return &__lockdep_no_validate__.dkey != k;
 }
 
 /*

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 4dfd2f3e09b2..97eaf13cddd8 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1221,6 +1221,8 @@ void lockdep_register_key(struct lock_class_key *key)
 	struct lock_class_key *k;
 	unsigned long flags;
 
+	dept_key_init(&key->dkey);
+
 	if (WARN_ON_ONCE(static_obj(key)))
 		return;
 	hash_head = keyhashentry(key);
@@ -4343,6 +4345,8 @@ void noinstr lockdep_hardirqs_on(unsigned long ip)
 {
 	struct irqtrace_events *trace = &current->irqtrace;
 
+	dept_hardirqs_on_ip(ip);
+
 	if (unlikely(!debug_locks))
 		return;
@@ -4408,6 +4412,8 @@ EXPORT_SYMBOL_GPL(lockdep_hardirqs_on);
  */
 void noinstr lockdep_hardirqs_off(unsigned long ip)
 {
+	dept_hardirqs_off_ip(ip);
+
 	if (unlikely(!debug_locks))
 		return;
@@ -4452,6 +4458,8 @@ void lockdep_softirqs_on(unsigned long ip)
 {
 	struct irqtrace_events *trace = &current->irqtrace;
 
+	dept_softirqs_on_ip(ip);
+
 	if (unlikely(!lockdep_enabled()))
 		return;
@@ -4490,6 +4498,9 @@ void lockdep_softirqs_on(unsigned long ip)
  */
 void lockdep_softirqs_off(unsigned long ip)
 {
+
+	dept_softirqs_off_ip(ip);
+
 	if (unlikely(!lockdep_enabled()))
 		return;
@@ -4837,6 +4848,8 @@ void lockdep_init_map_type(struct lockdep_map *lock, const char *name,
 {
 	int i;
 
+	ldt_init(&lock->dmap, &key->dkey, subclass, name);
+
 	for (i = 0; i < NR_LOCKDEP_CACHING_CLASSES; i++)
 		lock->class_cache[i] = NULL;
@@ -5581,6 +5594,12 @@ void lock_set_class(struct lockdep_map *lock, const char *name,
 {
 	unsigned long flags;
 
+	/*
+	 * dept_map_(re)init() might be called twice redundantly. But
+	 * there's no choice as long as Dept relies on Lockdep.
+	 */
+	ldt_set_class(&lock->dmap, name, &key->dkey, subclass, ip);
+
 	if (unlikely(!lockdep_enabled()))
 		return;
@@ -5598,6 +5617,8 @@ void lock_downgrade(struct lockdep_map *lock, unsigned long ip)
 {
 	unsigned long flags;
 
+	ldt_downgrade(&lock->dmap, ip);
+
 	if (unlikely(!lockdep_enabled()))
 		return;
@@ -6398,6 +6419,8 @@ void lockdep_unregister_key(struct lock_class_key *key)
 	unsigned long flags;
 	bool found = false;
 
+	dept_key_destroy(&key->dkey);
+
 	might_sleep();
 
 	if (WARN_ON_ONCE(static_obj(key)))
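One detail worth noting in the irqflags.h hunk above: the Dept notification is ordered so that Dept never believes IRQs are enabled while they are actually disabled. dept_hardirqs_off() runs after the architecture really disables IRQs, and dept_hardirqs_on() runs before they are re-enabled. A minimal sketch of what a caller now gets (the function name is hypothetical):

static void example(void)
{
	unsigned long flags;

	/* arch_local_irq_save() runs first, then dept_hardirqs_off() */
	raw_local_irq_save(flags);

	/* critical section: Dept models this region as hardirq-disabled */

	/* dept_hardirqs_on() runs first (only if flags say enabled),
	 * then arch_local_irq_restore() */
	raw_local_irq_restore(flags);
}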
HhoK8tPx2PiEIWAuYcCcOg88lisMjHQuA2fHWwpqWhfA5WttNDyscZJQX+3B0GQ10NBRNkpB Q/1TElwX9BTcHCiioddvJsDsG2Tgtc2IoSJjrCjz628KnuhtGDKLb2Nwv3uAoPb0Bwxi2Vsa HL4+DJViLgE/b9Qh8GT3M3DyXICBghPZCM6ezCPh5a8nFGS0rYThHwZ6/Wre0TdI8BmVR/ga v5HknxVx/P0r7QyfUdvK8EbxMF9pieRND3swXzjko3ix5AzNi0M5DJ/V78b8wIsXDP/00jDJ e935eFvEDukalaBRJwvaJevipPHVjw30gZZo3fXeiFR0b2EWCpVwbBRnLD6F//t5oYkImmbn cy0tgXFPZedwlfqPVBaSSgj21ATO8rmRDgZT2FjO5LWTQZPsPM7oKh23jF3JXczuJv6WzuZK K2zjDmWjOfGBFQUtH9v50tlFBks5Nj2Ua75s/nfFdO6RpYU8j2RGFFKC5OrE5ASlWhO1OD4l Ua1bvGd/gojGPsp8bGRnNRpyxdgRK0GKMFncTI9KTimTk1IS7IiTEIqpsojvnSq5TKVMOSpo 9+/WHtYISXYUISEV4bLl/iMqObtXeUjYJwgHBO3/FEtCZ6Qi+fZe053ahLvvGwYWpaVtCvhz 1+ubPmse6QwFkY5j0bEN7rXOvMlD7tGqLbG7okbDdTtzTPsy7zUqtlEjK453ybvCtjpjQ0Sv e8PH9hh78/DoDcdBXUprXYw/fFJ6eYw3ubzAobXdwdk3tzi5kJyNs9S2jXNrtatWPzZc7daH WxVkUrxyWSShTVL+AeovUdNNAwAA X-Brightmail-Tracker: H4sIAAAAAAAAAzXSfUzMcRwHcN/fc8fZz2nzo+a4KVuRMt0+hBm2fjOaP2xkNh33m27V0V0i ZivlKWrK6qKLq9P1TK5GHmrnUuTxuFtiddw5D7mIuOjBQ2X++ey1vd97//VhcImenMWo1CmC Rq1IlFEiQhQTlbnQ4XQrw11Xl0HeqXDwfT9OgP5yLQW2SzUIahszMOhti4bng30IRh49wUFX YENQ6urBobHdiaC58jAFds9UcPj6KegoOElBpvEyBU+9oxh0F+ZjUGPeAA9Ol2FgGXpPgK6X gmJdJjZ2PmAwZKqmwZQeBO7KczSMuiKgw9lJQmtJBwnNL0Ph7PluCm41dxDQ3uTGwH5DT4Gz 9g8JD9rvEWDLyyGh7nMZBd5BEw4mXz8NzywGDOqzxtaOfvtNwt0cCwZHL17BwPHiJoKW468x MNd2UtDq68OgwVyAw3BFGwJ37icajpwaoqE4IxfBySOFBDz5dZeErO5IGPmpp1ZF8a19/Tif 1bCPbx40EPz9Mo6/fq6H5rNaXtK8wbyXb6gM4Y23ejG+dMBH8ubqExRvHsin+exPDoz//Pgx zd8rGiF4j0OHbQzcKlquFBJVqYJm0co4UXzTHT21p0u+/4I3IB1dW5CN/BiOXcI9LDXi46bY +VxX19CE/dk5XEPOOzIbiRicPTaZq/zyiBoPprOxnNFjJcZNsEGcwVYzYTEbyZ3JfY//G5Vy NfWWCfuxcs588wYat2Ss89X1hjiNRAY0qRr5q9SpSQpVYmSYNiE+Ta3aH7Zzd5IZjf2M6dBo XhP6bo+2IpZBsiniuEC3UkIqUrVpSVbEMbjMXxzww6WUiJWKtAOCZvd2zd5EQWtFAQwhmyFe t1mIk7C7FClCgiDsETT/U4zxm5WOSkqK1Qvi8t0ZmYH2effLjZuiqgLh1ZfVjVK96nabL6io KuHY2uWm/DpvrE1hCl4mz/sofVuh28F4wsRhak4uDU2a3TLnxIeH20KGi9aHvkquDz+8KqVw rv1bwRrZlvnSnvSpETMjPJMOHnJMi1kcbcmNkm/wWpeiFclKZ/DSxnIZoY1XRITgGq3iL6UA 5aUvAwAA X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org It'd be useful to show Dept internal stats and dependency graph on runtime via proc for better information. Introduced the knobs. Signed-off-by: Byungchul Park --- kernel/dependency/Makefile | 1 + kernel/dependency/dept.c | 24 +++----- kernel/dependency/dept_internal.h | 26 +++++++++ kernel/dependency/dept_proc.c | 95 +++++++++++++++++++++++++++++++ 4 files changed, 131 insertions(+), 15 deletions(-) create mode 100644 kernel/dependency/dept_internal.h create mode 100644 kernel/dependency/dept_proc.c diff --git a/kernel/dependency/Makefile b/kernel/dependency/Makefile index b5cfb8a03c0c..92f165400187 100644 --- a/kernel/dependency/Makefile +++ b/kernel/dependency/Makefile @@ -1,3 +1,4 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_DEPT) += dept.o +obj-$(CONFIG_DEPT) += dept_proc.o diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c index d3b6d2f4cd7b..c5e23e9184b8 100644 --- a/kernel/dependency/dept.c +++ b/kernel/dependency/dept.c @@ -74,6 +74,7 @@ #include #include #include +#include "dept_internal.h" static int dept_stop; static int dept_per_cpu_ready; @@ -261,20 +262,13 @@ static inline bool valid_key(struct dept_key *k) * have been freed will be placed. 
  */
-enum object_t {
-#define OBJECT(id, nr) OBJECT_##id,
-	#include "dept_object.h"
-#undef  OBJECT
-	OBJECT_NR,
-};
-
 #define OBJECT(id, nr)						\
 static struct dept_##id spool_##id[nr];				\
 static DEFINE_PER_CPU(struct llist_head, lpool_##id);
 	#include "dept_object.h"
 #undef  OBJECT
 
-static struct dept_pool pool[OBJECT_NR] = {
+struct dept_pool dept_pool[OBJECT_NR] = {
 #define OBJECT(id, nr) {					\
 	.name = #id,						\
 	.obj_sz = sizeof(struct dept_##id),			\
@@ -304,7 +298,7 @@ static void *from_pool(enum object_t t)
 	if (DEPT_WARN_ON(!irqs_disabled()))
 		return NULL;
 
-	p = &pool[t];
+	p = &dept_pool[t];
 
 	/*
 	 * Try local pool first.
@@ -339,7 +333,7 @@ static void *from_pool(enum object_t t)
 
 static void to_pool(void *o, enum object_t t)
 {
-	struct dept_pool *p = &pool[t];
+	struct dept_pool *p = &dept_pool[t];
 	struct llist_head *h;
 
 	preempt_disable();
@@ -2136,7 +2130,7 @@ void dept_map_copy(struct dept_map *to, struct dept_map *from)
 	clean_classes_cache(&to->map_key);
 }
 
-static LIST_HEAD(classes);
+LIST_HEAD(dept_classes);
 
 static inline bool within(const void *addr, void *start, unsigned long size)
 {
@@ -2168,7 +2162,7 @@ void dept_free_range(void *start, unsigned int sz)
 	while (unlikely(!dept_lock()))
 		cpu_relax();
 
-	list_for_each_entry_safe(c, n, &classes, all_node) {
+	list_for_each_entry_safe(c, n, &dept_classes, all_node) {
 		if (!within((void *)c->key, start, sz) &&
 		    !within(c->name, start, sz))
 			continue;
@@ -2244,7 +2238,7 @@ static struct dept_class *check_new_class(struct dept_key *local,
 	c->sub_id = sub_id;
 	c->key = (unsigned long)(k->base + sub_id);
 	hash_add_class(c);
-	list_add(&c->all_node, &classes);
+	list_add(&c->all_node, &dept_classes);
 unlock:
 	dept_unlock();
 caching:
@@ -2958,8 +2952,8 @@ static void migrate_per_cpu_pool(void)
 		struct llist_head *from;
 		struct llist_head *to;
 
-		from = &pool[i].boot_pool;
-		to = per_cpu_ptr(pool[i].lpool, boot_cpu);
+		from = &dept_pool[i].boot_pool;
+		to = per_cpu_ptr(dept_pool[i].lpool, boot_cpu);
 		move_llist(to, from);
 	}
 }

diff --git a/kernel/dependency/dept_internal.h b/kernel/dependency/dept_internal.h
new file mode 100644
index 000000000000..007c1eec6bab
--- /dev/null
+++ b/kernel/dependency/dept_internal.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Dept(DEPendency Tracker) - runtime dependency tracker internal header
+ *
+ * Started by Byungchul Park :
+ *
+ * Copyright (c) 2020 LG Electronics, Inc., Byungchul Park
+ */
+
+#ifndef __DEPT_INTERNAL_H
+#define __DEPT_INTERNAL_H
+
+#ifdef CONFIG_DEPT
+
+enum object_t {
+#define OBJECT(id, nr) OBJECT_##id,
+	#include "dept_object.h"
+#undef  OBJECT
+	OBJECT_NR,
+};
+
+extern struct list_head dept_classes;
+extern struct dept_pool dept_pool[];
+
+#endif
+#endif /* __DEPT_INTERNAL_H */

diff --git a/kernel/dependency/dept_proc.c b/kernel/dependency/dept_proc.c
new file mode 100644
index 000000000000..7d61dfbc5865
--- /dev/null
+++ b/kernel/dependency/dept_proc.c
@@ -0,0 +1,95 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Procfs knobs for Dept(DEPendency Tracker)
+ *
+ * Started by Byungchul Park :
+ *
+ * Copyright (C) 2021 LG Electronics, Inc., Byungchul Park
+ */
+#include 
+#include 
+#include 
+#include "dept_internal.h"
+
+static void *l_next(struct seq_file *m, void *v, loff_t *pos)
+{
+	/*
+	 * XXX: Serialize list traversal if needed. The following might
+	 * give a wrong information on contention.
+	 */
+	return seq_list_next(v, &dept_classes, pos);
+}
+
+static void *l_start(struct seq_file *m, loff_t *pos)
+{
+	/*
+	 * XXX: Serialize list traversal if needed. The following might
+	 * give a wrong information on contention.
+	 */
+	return seq_list_start_head(&dept_classes, *pos);
+}
+
+static void l_stop(struct seq_file *m, void *v)
+{
+}
+
+static int l_show(struct seq_file *m, void *v)
+{
+	struct dept_class *fc = list_entry(v, struct dept_class, all_node);
+	struct dept_dep *d;
+	const char *prefix;
+
+	if (v == &dept_classes) {
+		seq_puts(m, "All classes:\n\n");
+		return 0;
+	}
+
+	prefix = fc->sched_map ? " " : "";
+	seq_printf(m, "[%p] %s%s\n", (void *)fc->key, prefix, fc->name);
+
+	/*
+	 * XXX: Serialize list traversal if needed. The following might
+	 * give a wrong information on contention.
+	 */
+	list_for_each_entry(d, &fc->dep_head, dep_node) {
+		struct dept_class *tc = d->wait->class;
+
+		prefix = tc->sched_map ? " " : "";
+		seq_printf(m, " -> [%p] %s%s\n", (void *)tc->key, prefix, tc->name);
+	}
+	seq_puts(m, "\n");
+
+	return 0;
+}
+
+static const struct seq_operations dept_deps_ops = {
+	.start	= l_start,
+	.next	= l_next,
+	.stop	= l_stop,
+	.show	= l_show,
+};
+
+static int dept_stats_show(struct seq_file *m, void *v)
+{
+	int r;
+
+	seq_puts(m, "Availability in the static pools:\n\n");
+#define OBJECT(id, nr)						\
+	r = atomic_read(&dept_pool[OBJECT_##id].obj_nr);	\
+	if (r < 0)						\
+		r = 0;						\
+	seq_printf(m, "%s\t%d/%d(%d%%)\n", #id, r, nr, (r * 100) / (nr));
+	#include "dept_object.h"
+#undef  OBJECT
+
+	return 0;
+}
+
+static int __init dept_proc_init(void)
+{
+	proc_create_seq("dept_deps", S_IRUSR, NULL, &dept_deps_ops);
+	proc_create_single("dept_stats", S_IRUSR, NULL, dept_stats_show);
+	return 0;
+}
+
+__initcall(dept_proc_init);
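Both the pool definitions in dept.c and dept_stats_show() above rely on re-including dept_object.h with a different OBJECT() definition each time. Since dept_object.h itself is not shown in this patch, here is a reduced sketch of the x-macro pattern with hypothetical entries, just to show how one list drives both the enum and the stats output:

/* dept_object.h -- hypothetical contents, for illustration only */
OBJECT(dep, 1024 * 8)
OBJECT(class, 1024 * 8)

/* one expansion builds the enum of object types */
enum object_t {
#define OBJECT(id, nr) OBJECT_##id,
	#include "dept_object.h"
#undef  OBJECT
	OBJECT_NR,
};

/* another expansion emits one stats line per object type */
#define OBJECT(id, nr) \
	seq_printf(m, "%s\t%d/%d\n", #id, \
		   atomic_read(&dept_pool[OBJECT_##id].obj_nr), nr);
	#include "dept_object.h"
#undef  OBJECT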
From patchwork Mon Aug 21 03:46:19 2023
From: Byungchul Park <byungchul@sk.com>
Subject: [RESEND PATCH v10 07/25] dept: Apply sdt_might_sleep_{start,end}() to wait_for_completion()/complete()
Date: Mon, 21 Aug 2023 12:46:19 +0900
Message-Id: <20230821034637.34630-8-byungchul@sk.com>
In-Reply-To: <20230821034637.34630-1-byungchul@sk.com>
References: <20230821034637.34630-1-byungchul@sk.com>

Makes Dept able to track dependencies by wait_for_completion()/complete().
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/completion.h | 30 +++++++++++++++++++++++++-----
 1 file changed, 25 insertions(+), 5 deletions(-)

diff --git a/include/linux/completion.h b/include/linux/completion.h
index 62b32b19e0a8..32d535abebf3 100644
--- a/include/linux/completion.h
+++ b/include/linux/completion.h
@@ -10,6 +10,7 @@
  */
 
 #include 
+#include 
 
 /*
  * struct completion - structure used to maintain state for a "completion"
@@ -26,14 +27,33 @@
 struct completion {
 	unsigned int done;
 	struct swait_queue_head wait;
+	struct dept_map dmap;
 };
 
+#define init_completion(x)		\
+do {					\
+	sdt_map_init(&(x)->dmap);	\
+	__init_completion(x);		\
+} while (0)
+
+/*
+ * XXX: No use cases for now. Fill the body when needed.
+ */
 #define init_completion_map(x, m) init_completion(x)
-static inline void complete_acquire(struct completion *x) {}
-static inline void complete_release(struct completion *x) {}
+
+static inline void complete_acquire(struct completion *x)
+{
+	sdt_might_sleep_start(&x->dmap);
+}
+
+static inline void complete_release(struct completion *x)
+{
+	sdt_might_sleep_end();
+}
 
 #define COMPLETION_INITIALIZER(work) \
-	{ 0, __SWAIT_QUEUE_HEAD_INITIALIZER((work).wait) }
+	{ 0, __SWAIT_QUEUE_HEAD_INITIALIZER((work).wait), \
+	.dmap = DEPT_MAP_INITIALIZER(work, NULL), }
 
 #define COMPLETION_INITIALIZER_ONSTACK_MAP(work, map) \
 	(*({ init_completion_map(&(work), &(map)); &(work); }))
@@ -75,13 +95,13 @@ static inline void complete_release(struct completion *x) {}
 #endif
 
 /**
- * init_completion - Initialize a dynamically allocated completion
+ * __init_completion - Initialize a dynamically allocated completion
 * @x:  pointer to completion structure that is to be initialized
 *
 * This inline function will initialize a dynamically created completion
 * structure.
 */
-static inline void init_completion(struct completion *x)
+static inline void __init_completion(struct completion *x)
 {
 	x->done = 0;
 	init_swait_queue_head(&x->wait);
 }
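With the dept_map embedded in struct completion, an ordinary waiter/completer pair is all Dept needs to record the wait and its paired event. A minimal sketch, assuming two kernel threads and a hypothetical 'done' completion:

static struct completion done;

static int waiter_fn(void *arg)
{
	/* wait_for_completion() brackets its sleep with the
	 * complete_acquire()/complete_release() hooks shown above */
	wait_for_completion(&done);
	return 0;
}

static int completer_fn(void *arg)
{
	complete(&done);	/* the event Dept pairs with the wait above */
	return 0;
}

static void setup(void)
{
	/* init_completion() now also runs sdt_map_init(&done.dmap) */
	init_completion(&done);
}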
hnP9pPDGkYu3hGxXrNaKel28KC1as0cRmTfkIGIGlQn/t05PQs6xaSiA5bll/ODPkygNsSN8 sXiTP6a52XxbWx/h54ncNN6W3kmlIQVLcCfG8pZPTbS/mMDp+P9eXkJ+JrmZvPy5AvtZyS3n 292VzKh/Kl92tXZEFMCt4OUbVSN71fDms+s1OboxBvBPc2JGOYi/bWkjM5DShMaUIpUuKt6g 0emXhUYmRukSQvdGG2Q0/E/mwwM7KpG3JbwOcSxSByr3THZrVZQmPjbRUId4llBPVIZ8c2lV Sq0m8aAoRe+W9uvF2DoUwpLqScolvQe0Ki5CEyf+K4oxovS7xWxAcBJaKh2qDn4Z96etIUj/ lz01ceuucftC3z74w9gzftX4iDlNUkRYpzNovWv15IKinaYxkOmp33twmzHn75D4zjml31VS 22ba2jTXNuX2p7eNqZe/hM9ybajKSLi1dIE9bMbzIbe7+MiVdc12sWShd8a+QEM09mZeK3eS ax0e+8rNhrMb1WRspCZsHiHFan4B8io5LUsDAAA= X-Brightmail-Tracker: H4sIAAAAAAAAAzWSfUzMcRzHfb+/x47jt6vxm+edNU+TmLOPeRiL+c6m+QtjpuN+000ld6SM SU/Sw5GtopJUTipPv5qFzm6lkvRAJw+r1DHEKZ0uTk+6Nv+899rer73/evOUKpuZyevDjkmG MG2ImlXQisC1scted9p1/gOJ0yAtxR9cA4k05NwtZaHlTgmC0vKzGHpqtsKbQQeCocZmCjLT WxBc7+6goLy2E4GlKIaF1k9TwebqY6E+PZmF2IK7LLz8PoyhPeMShhJ5OzRczMdgdX+hIbOH hezMWDweXzG4zcUcmKN9wV6UxcFw9wqo72xjoPpqPQOW90vhSm47C5WWehpqK+wYWh/lsNBZ OsZAQ+0zGlrSUhm43ZvPwvdBMwVmVx8Hr6x5GO7Fja8l/BploC7ViiGh8D4G27vHCJ4kdmGQ S9tYqHY5MJTJ6RT8vVmDwG76wUF8ipuD7LMmBMnxGTQ0j9QxENeugaE/OezGtaTa0UeRuLIT xDKYR5Pn+SJ5mNXBkbgn7zmSJx8nZUVLSEFlDybXnS6GyMXnWSI7L3Ek6YcNk96mJo48uzxE k0+2TLxj9h7FOp0Uoo+QDMs3BCmCs8dsVPiIMvJq24Jo1D45CfG8KKwScwu3JyEvnhUWim/f uikP+wjzxbLUz0wSUvCUcG6yWPSzkfUU3oJevPDhGvIwLfiKcn859rBS0Igd9grOw6IwTyy5 Z50Y8hJWi/LjRxO+atzp7/5IX0SKPDSpGPnowyJCtfoQjZ/xcHBUmD7S7+CRUBmNX8Z8ejit Ag20bq1CAo/UU5RBs+06FaONMEaFViGRp9Q+ylm/u3UqpU4bdVIyHNlvOB4iGavQLJ5Wz1Bu 2yUFqYRD2mPSYUkKlwz/W8x7zYxGm4f2ViVQu6drTdTcG7hJ6WeSUEBFW7/vogPivvVp3qcs Wc6VthkHYvxJTksr8nZsypizJvLCrVNev52BdX2RZ+b9HE3p6vLh7OcMDZU1MQ8WBfqFL9aY 56bPf+dufhr/YsuCb6/H1h1NNI0UFDY6HQ29dtPSAH7NTt/9GtnKimraGKxdsYQyGLX/AOWw WiQuAwAA X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Makes Dept able to track dependencies by PG_{locked,writeback} waits. Signed-off-by: Byungchul Park --- mm/filemap.c | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/mm/filemap.c b/mm/filemap.c index 83dda76d1fc3..eed64dc88e43 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -44,6 +44,7 @@ #include #include #include +#include #include #include #include "internal.h" @@ -1219,6 +1220,9 @@ static inline bool folio_trylock_flag(struct folio *folio, int bit_nr, /* How many times do we accept lock stealing from under a waiter? 
 */
 int sysctl_page_lock_unfairness = 5;
 
+static struct dept_map __maybe_unused PG_locked_map = DEPT_MAP_INITIALIZER(PG_locked_map, NULL);
+static struct dept_map __maybe_unused PG_writeback_map = DEPT_MAP_INITIALIZER(PG_writeback_map, NULL);
+
 static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
 		int state, enum behavior behavior)
 {
@@ -1230,6 +1234,11 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
 	unsigned long pflags;
 	bool in_thrashing;
 
+	if (bit_nr == PG_locked)
+		sdt_might_sleep_start(&PG_locked_map);
+	else if (bit_nr == PG_writeback)
+		sdt_might_sleep_start(&PG_writeback_map);
+
 	if (bit_nr == PG_locked &&
 	    !folio_test_uptodate(folio) && folio_test_workingset(folio)) {
 		delayacct_thrashing_start(&in_thrashing);
@@ -1331,6 +1340,8 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
 	 */
 	finish_wait(q, wait);
 
+	sdt_might_sleep_end();
+
 	if (thrashing) {
 		delayacct_thrashing_end(&in_thrashing);
 		psi_memstall_leave(&pflags);
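Note the design choice above: one static map per page flag, so every PG_locked wait in the system shares a single Dept class regardless of which page is involved (likewise for PG_writeback). A minimal sketch of the kind of wait this now covers (the two function names are hypothetical):

static void context_a(struct page *page)
{
	/* may end up sleeping in folio_wait_bit_common(..., PG_locked, ...),
	 * which is now bracketed by sdt_might_sleep_start(&PG_locked_map) */
	lock_page(page);
}

static void context_b(struct page *page)
{
	unlock_page(page);	/* the event paired with the wait above */
}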
From patchwork Mon Aug 21 03:46:21 2023
From: Byungchul Park <byungchul@sk.com>
Subject: [RESEND PATCH v10 09/25] dept: Apply sdt_might_sleep_{start,end}() to swait
Date: Mon, 21 Aug 2023 12:46:21 +0900
Message-Id: <20230821034637.34630-10-byungchul@sk.com>
In-Reply-To: <20230821034637.34630-1-byungchul@sk.com>
References: <20230821034637.34630-1-byungchul@sk.com>

Makes Dept able to track dependencies by swaits.
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/swait.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/linux/swait.h b/include/linux/swait.h
index 6a8c22b8c2a5..02848211cef5 100644
--- a/include/linux/swait.h
+++ b/include/linux/swait.h
@@ -6,6 +6,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 
 /*
@@ -161,6 +162,7 @@ extern void finish_swait(struct swait_queue_head *q, struct swait_queue *wait);
 	struct swait_queue __wait;					\
 	long __ret = ret;						\
 									\
+	sdt_might_sleep_start(NULL);					\
 	INIT_LIST_HEAD(&__wait.task_list);				\
 	for (;;) {							\
 		long __int = prepare_to_swait_event(&wq, &__wait, state);\
@@ -176,6 +178,7 @@ extern void finish_swait(struct swait_queue_head *q, struct swait_queue *wait);
 		cmd;							\
 	}								\
 	finish_swait(&wq, &__wait);					\
+	sdt_might_sleep_end();						\
__out:	__ret;								\
})
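Because the annotation passes NULL rather than a dedicated map, these swaits are tracked without a per-queue class. A minimal usage sketch with hypothetical names, showing the wait/wake pair Dept now sees through ___swait_event():

static DECLARE_SWAIT_QUEUE_HEAD(my_swq);
static bool my_cond;

static void waiter(void)
{
	/* expands to ___swait_event(): the sleep is now bracketed by
	 * sdt_might_sleep_start(NULL)/sdt_might_sleep_end() */
	swait_event_exclusive(my_swq, my_cond);
}

static void waker(void)
{
	my_cond = true;
	swake_up_one(&my_swq);
}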
From patchwork Mon Aug 21 03:46:22 2023
From: Byungchul Park <byungchul@sk.com>
Subject: [RESEND PATCH v10 10/25] dept: Apply sdt_might_sleep_{start,end}() to waitqueue wait
Date: Mon, 21 Aug 2023 12:46:22 +0900
Message-Id: <20230821034637.34630-11-byungchul@sk.com>
In-Reply-To: <20230821034637.34630-1-byungchul@sk.com>
References: <20230821034637.34630-1-byungchul@sk.com>

Makes Dept able to track dependencies by waitqueue waits.

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/wait.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index a0307b516b09..ff349e609da7 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -7,6 +7,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -303,6 +304,7 @@ extern void init_wait_entry(struct wait_queue_entry *wq_entry, int flags);
 	struct wait_queue_entry __wq_entry;				\
 	long __ret = ret;	/* explicit shadow */			\
 									\
+	sdt_might_sleep_start(NULL);					\
 	init_wait_entry(&__wq_entry, exclusive ? WQ_FLAG_EXCLUSIVE : 0); \
 	for (;;) {							\
 		long __int = prepare_to_wait_event(&wq_head, &__wq_entry, state);\
@@ -318,6 +320,7 @@ extern void init_wait_entry(struct wait_queue_entry *wq_entry, int flags);
 		cmd;							\
 	}								\
 	finish_wait(&wq_head, &__wq_entry);				\
+	sdt_might_sleep_end();						\
__out:	__ret;								\
})
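The same pattern as the swait case, one layer up: every wait_event*() variant funnels into ___wait_event(), so all of them inherit the annotation. A minimal sketch with hypothetical names:

static DECLARE_WAIT_QUEUE_HEAD(my_wq);
static bool my_cond;

static void waiter(void)
{
	/* ___wait_event() now wraps the prepare/finish loop with
	 * sdt_might_sleep_start(NULL)/sdt_might_sleep_end() */
	wait_event(my_wq, my_cond);
}

static void waker(void)
{
	my_cond = true;
	wake_up(&my_wq);
}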
wKBEzMTh571aBPYrgzScv+yi4fbZKwhSzt8goHmingRtZxiM/dBR61bxNQPDOK8tOcmbR/UE /zyX459kd9G8trKD5vXiCb7EGMjnVfRjfI7DSfJi/iWKFx0ZNJ88aMH4oZcvab7h5hjB91my sB1+EZLVSiFKFStolq7ZL4n80JVOHrUycRPPntAJyEolIy+GY5dzKf1D+H8uzxFJD1NsAGe1 uibz6ewCriT1vTuXMDh7wZszfnkxWfZlldy7zmTawwTrz93/2kR4WMqu4Fwjun8H5nMFD6sm RV7uXCwvQx6WsWHcV1sv4ZFybKIX195rIP8WZnFPjVYiDUn1aEo+kqnUsdEKVdTy4Mh4tSou +MCRaBG5P8pwenxvKXK07KxGLIPkPtL9c+xKGamIjYmPrkYcg8unS/2+25QyqVIRf0rQHNmn ORElxFQjP4aQz5SGjp5UythDiuPCYUE4Kmj+bzHGa3YCMvlrw2FOUlDztwA8tLQyQhoSqr7W 8bZYsssctiVu5fP1u3tDM2SF8/Yuaui431cZ9Cl86wx7evCZ7d4zIn66vKObfNTrvSHlahHn OGPXX1toUTlm9mzbs/FC65vr9aSuxzb3oFQ7MoE+5R+oSNuw4vFYe/gx587emk2Lf0zzzcsJ khMxkYqQQFwTo/gD5JDsxU0DAAA= X-Brightmail-Tracker: H4sIAAAAAAAAAzWSf0yMcRzHfZ/fnY5nJ+tRVnZbQ+ZHm+wzzGxmnhlmptn8mI57pqPS7irO Zi79UmSdSUpxFSd1yHMNueIU1Wn94ELU3XRMmiPJHafCnfHPe6+9P/u8/nozuKycDGNUyamC OlmRKKckhGTTisyFz50u5RKPOQr0p5aA59sJAspumijouVGLwFSfgcHw43Xw0utGMN7ZjUNx UQ+CikEHDvWtTgRN1ccpsL+bBr2eEQpsRScpyKy6ScHTjxMYDJw7g0GtuBE6CisxsPqGCCge puBCcSbmjw8Y+Iw1NBh1UeCqLqVhYjAGbM4XJLSU20hoer0ASi4OUNDYZCOg9a4LA/u9Mgqc pt8kdLS2E9CjLyDh+udKCj56jTgYPSM0PLMaMKjL8ttyxn6R0FZgxSDn8i0Mel9ZENw/8QYD 0fSCghaPGwOzWITDz6uPEbhOf6Ih+5SPhgsZpxGczD5HQPdkGwlZA7Ew/qOMWr2Cb3GP4HyW +RDf5DUQ/JNKjm8oddB81v3XNG8Q03hzdTRf1TiM8RVfPSQv1uRRvPj1DM3nf+rF+M9dXTTf fn6c4N/1FmObZ2+XrFQKiap0Qb14VbwkYcihJ1P6mMOTjxpoHeqj8lEQw7FLOUuFSAaYYudy fX0+PMAh7BzOXPDe30sYnM2dylV/6fz7MINVcv0D+XSACTaKuzbaQQRYyi7jfGNl/6SRXG2d 9a8oyN+LlnsowDI2lhsdfEsUIokBTalBIark9CSFKjF2keZAgjZZdXjR3oNJIvKPxnh0Qn8X fbOva0Ysg+TB0vjZLqWMVKRrtEnNiGNweYg0/PugUiZVKrRHBPXB3eq0REHTjMIZQh4qXb9N iJex+xSpwgFBSBHU/68YExSmQ7mqJDZ1w50GZSnuNaz2llxtvG1cFfRgvrtfl7fVIY2caW4e qt07fUy3w7IrrSH8d3Dw0kdr7Ieu5O5/kDHv4fI9Cy6ZDAqrQV8SkXPslb3buTPC9nI7td5c oN0ZysyNS88zzLg+K3+tNsJR6LMwYdn1c8iFm+K65WcnI8O3uFtH5YQmQRETjas1ij8RsZMA MAMAAA== X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Makes Dept able to track dependencies by hashed-waitqueue waits. Signed-off-by: Byungchul Park --- include/linux/wait_bit.h | 3 +++ 1 file changed, 3 insertions(+) diff --git a/include/linux/wait_bit.h b/include/linux/wait_bit.h index 7725b7579b78..fe89282c3e96 100644 --- a/include/linux/wait_bit.h +++ b/include/linux/wait_bit.h @@ -6,6 +6,7 @@ * Linux wait-bit related types and methods: */ #include +#include struct wait_bit_key { void *flags; @@ -246,6 +247,7 @@ extern wait_queue_head_t *__var_waitqueue(void *p); struct wait_bit_queue_entry __wbq_entry; \ long __ret = ret; /* explicit shadow */ \ \ + sdt_might_sleep_start(NULL); \ init_wait_var_entry(&__wbq_entry, var, \ exclusive ? 
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/wait_bit.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/linux/wait_bit.h b/include/linux/wait_bit.h
index 7725b7579b78..fe89282c3e96 100644
--- a/include/linux/wait_bit.h
+++ b/include/linux/wait_bit.h
@@ -6,6 +6,7 @@
  * Linux wait-bit related types and methods:
  */
 #include <linux/wait.h>
+#include <linux/dept_sdt.h>
 
 struct wait_bit_key {
 	void *flags;
@@ -246,6 +247,7 @@ extern wait_queue_head_t *__var_waitqueue(void *p);
 	struct wait_bit_queue_entry __wbq_entry;			\
 	long __ret = ret;	/* explicit shadow */			\
 									\
+	sdt_might_sleep_start(NULL);					\
 	init_wait_var_entry(&__wbq_entry, var,				\
 			    exclusive ? WQ_FLAG_EXCLUSIVE : 0);		\
 	for (;;) {							\
@@ -263,6 +265,7 @@ extern wait_queue_head_t *__var_waitqueue(void *p);
 		cmd;							\
 	}								\
 	finish_wait(__wq_head, &__wbq_entry.wq_entry);			\
+	sdt_might_sleep_end();						\
 __out:	__ret;								\
 })

From patchwork Mon Aug 21 03:46:24 2023
From: Byungchul Park <byungchul@sk.com>
Subject: [RESEND PATCH v10 12/25] dept: Distinguish each syscall context from another
Date: Mon, 21 Aug 2023 12:46:24 +0900
Message-Id: <20230821034637.34630-13-byungchul@sk.com>
xZZ6I17vu1WxQCPptFGSPmBRqCKsx9ZP7DkbeuBjRhKORefXJCIvVuCDhVtlt+lExA5zfGqw x6b5GUJjYx/hYW9+qmBN+UQlIgVL8CdGC+Zvz2hPMJ7fJFS/KCE9TPLThLNnLg4XOH6u8DX1 MjGyP0UoLLYPs9eQL9vKkYeVfIjw3fmBHLlJ8hIuZkwc4UnCfXMjmYa4XDSqACm1EVHhaq0u 2D8sOkJ7wH9nZLiMhh7KdGhg2x3UXbexCvEsUo3hQie7NEpKHWWIDq9CAkuovDnfn06NktOo o2MkfeR2/T6dZKhCviypmsjN6d2vUfK71Hul3ZK0R9L/TzHr5ROLDj8taVsZPT3mREiQdjBN Xuv/9f71pnvFFVHWl4MWtGDd5fY3G7iDumX52vL5nfjpbG8jl/Xj42IfO2cZXFe7I6AocGHw kS/Ghgmrt24ecG7zW85YlyxdVZrVMYn6Pq5lyxxbijr9T2HCzXK/nzMcyTkBK0tshnmpxY6Y Y/f8AidHVqtIQ5g6aCahN6j/AaVrYtRMAwAA X-Brightmail-Tracker: H4sIAAAAAAAAAzXSfUzMcRwHcN/v7/GO4+fc5rcYdtayDDWyzzBjsX4zzMOwmc3d3I87enKX lA3RFa4npVwUpTitB+muP+KUU4rEiZKn3Oo89qA8dHHKQ7X557PXPu+9P399WEKeS/mwuvAo UR+uDlXSUlK6fmn8vGcutyagKmcepCcHgGfgJAm55aU0NF8rQVBaeQxDV30IPB/sRTD06DEB 5qxmBJc63xBQ2eBCUF10nIaWdxOh1dNPQ2NWEg3xheU0POkZxtB+NgNDiXUdNJ0uwODwfiTB 3EVDjjkej4xPGLyWYgYscb7gLjrPwHBnIDS62iiou9BIQfWruXDuYjsNt6obSWiocmNouZlL g6v0LwVNDfdJaE5PoaCsr4CGnkELARZPPwNPHfkYrhtHriV+/0PBvRQHhsTLFRhaX9oR1Jzs wGAtbaOhztOLwWbNIuDX1XoE7tTPDCQkexnIOZaKICnhLAmPf9+jwNgeBEM/c+kVS4W63n5C MNoOCtWD+aTwoIAXbpx/wwjGmleMkG89INiK/IXCW11YuPTNQwnW4lO0YP2WwQimz61Y6HM6 GeF+9hApvGs14w3Tt0uXacRQXbSoX7BcJdV+tw8RkZmqmPcZSTgOZa81IZbluUV8QtoiE5Kw NOfHv3jhJUat4GbxtpQPlAlJWYI7MZ4v+vKIHg2mcFv4+icV5KhJzpfPPHNhrCDjFvPdaXlj 5rmZfMl1x5glI3ur/SYatZwL4r92viVPI2k+GleMFLrw6DC1LjRovmGfNjZcFzN/V0SYFY38 jOXwcHoVGmgJqUUci5QTZKrpbo2cUkcbYsNqEc8SSoVs2o9OjVymUcceEvURO/UHQkVDLZrG ksqpsjXbRJWc26OOEveJYqSo/59iVuITh6ruFH6c3TEgY5Xre2uOTFJ27BjMW7JMu83nYcj7 JqfTtss540b2koWJFRJ7Ro5vahlB2YN7/DdJYq5e8dt88PDrvg7nHL37w3EqOWT2yruRlaa8 /Xu7P/Unbr1t/hX8NdOocBWnKfwCNjq8US7t5rbu1buP+qxSTb5oCG7rCYxwNSlJg1Yd6E/o Dep/YqCJCy8DAAA= X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org It enters kernel mode on each syscall and each syscall handling should be considered independently from the point of view of Dept. Otherwise, Dept may wrongly track dependencies across different syscalls. That might be a real dependency from user mode. However, now that Dept just started to work, conservatively let Dept not track dependencies across different syscalls. 
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 arch/arm64/kernel/syscall.c |  2 ++
 arch/x86/entry/common.c     |  4 +++
 include/linux/dept.h        | 39 ++++++++++++---------
 kernel/dependency/dept.c    | 67 +++++++++++++++++++------------------
 4 files changed, 63 insertions(+), 49 deletions(-)

diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index da84cf855c44..d5a43e721173 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include <linux/dept.h>
 #include
 #include
@@ -105,6 +106,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 	 */
 	local_daif_restore(DAIF_PROCCTX);
 
+	dept_kernel_enter();
 	if (flags & _TIF_MTE_ASYNC_FAULT) {
 		/*
diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 6c2826417b33..7cdd27abe529 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include <linux/dept.h>
 
 #ifdef CONFIG_XEN_PV
 #include
@@ -72,6 +73,7 @@ static __always_inline bool do_syscall_x32(struct pt_regs *regs, int nr)
 
 __visible noinstr void do_syscall_64(struct pt_regs *regs, int nr)
 {
+	dept_kernel_enter();
 	add_random_kstack_offset();
 	nr = syscall_enter_from_user_mode(regs, nr);
 
@@ -120,6 +122,7 @@ __visible noinstr void do_int80_syscall_32(struct pt_regs *regs)
 {
 	int nr = syscall_32_enter(regs);
 
+	dept_kernel_enter();
 	add_random_kstack_offset();
 	/*
 	 * Subtlety here: if ptrace pokes something larger than 2^31-1 into
@@ -140,6 +143,7 @@ static noinstr bool __do_fast_syscall_32(struct pt_regs *regs)
 	int nr = syscall_32_enter(regs);
 	int res;
 
+	dept_kernel_enter();
 	add_random_kstack_offset();
 	/*
 	 * This cannot use syscall_enter_from_user_mode() as it has to
diff --git a/include/linux/dept.h b/include/linux/dept.h
index b6d45b4b1fd6..f62c7b6f42c6 100644
--- a/include/linux/dept.h
+++ b/include/linux/dept.h
@@ -25,11 +25,16 @@ struct task_struct;
 #define DEPT_MAX_SUBCLASSES_USR	(DEPT_MAX_SUBCLASSES / DEPT_MAX_SUBCLASSES_EVT)
 #define DEPT_MAX_SUBCLASSES_CACHE	2
 
-#define DEPT_SIRQ	0
-#define DEPT_HIRQ	1
-#define DEPT_IRQS_NR	2
-#define DEPT_SIRQF	(1UL << DEPT_SIRQ)
-#define DEPT_HIRQF	(1UL << DEPT_HIRQ)
+enum {
+	DEPT_CXT_SIRQ = 0,
+	DEPT_CXT_HIRQ,
+	DEPT_CXT_IRQS_NR,
+	DEPT_CXT_PROCESS = DEPT_CXT_IRQS_NR,
+	DEPT_CXTS_NR
+};
+
+#define DEPT_SIRQF	(1UL << DEPT_CXT_SIRQ)
+#define DEPT_HIRQF	(1UL << DEPT_CXT_HIRQ)
 
 struct dept_ecxt;
 struct dept_iecxt {
@@ -94,8 +99,8 @@ struct dept_class {
 	/*
 	 * for tracking IRQ dependencies
 	 */
-	struct dept_iecxt	iecxt[DEPT_IRQS_NR];
-	struct dept_iwait	iwait[DEPT_IRQS_NR];
+	struct dept_iecxt	iecxt[DEPT_CXT_IRQS_NR];
+	struct dept_iwait	iwait[DEPT_CXT_IRQS_NR];
 
 	/*
 	 * classified by a map embedded in task_struct,
@@ -207,8 +212,8 @@ struct dept_ecxt {
 	/*
 	 * where the IRQ-enabled happened
 	 */
-	unsigned long		enirq_ip[DEPT_IRQS_NR];
-	struct dept_stack	*enirq_stack[DEPT_IRQS_NR];
+	unsigned long		enirq_ip[DEPT_CXT_IRQS_NR];
+	struct dept_stack	*enirq_stack[DEPT_CXT_IRQS_NR];
 
 	/*
 	 * where the event context started
@@ -252,8 +257,8 @@ struct dept_wait {
 	/*
 	 * where the IRQ wait happened
 	 */
-	unsigned long		irq_ip[DEPT_IRQS_NR];
-	struct dept_stack	*irq_stack[DEPT_IRQS_NR];
+	unsigned long		irq_ip[DEPT_CXT_IRQS_NR];
+	struct dept_stack	*irq_stack[DEPT_CXT_IRQS_NR];
 
 	/*
 	 * where the wait happened
@@ -406,19 +411,19 @@ struct dept_task {
 	int			wait_hist_pos;
 
 	/*
-	 * sequential id to identify each IRQ context
+	 * sequential id to identify each context
 	 */
-	unsigned int		irq_id[DEPT_IRQS_NR];
+	unsigned int		cxt_id[DEPT_CXTS_NR];
 
 	/*
 	 * for tracking IRQ-enabled points with cross-event
 	 */
-	unsigned int		wgen_enirq[DEPT_IRQS_NR];
+	unsigned int		wgen_enirq[DEPT_CXT_IRQS_NR];
 
 	/*
 	 * for keeping up-to-date IRQ-enabled points
 	 */
-	unsigned long		enirq_ip[DEPT_IRQS_NR];
+	unsigned long		enirq_ip[DEPT_CXT_IRQS_NR];
 
 	/*
 	 * current effective IRQ-enabled flag
@@ -470,7 +475,7 @@ struct dept_task {
 	.wait_hist = { { .wait = NULL, } },				\
 	.ecxt_held_pos = 0,						\
 	.wait_hist_pos = 0,						\
-	.irq_id = { 0U },						\
+	.cxt_id = { 0U },						\
 	.wgen_enirq = { 0U },						\
 	.enirq_ip = { 0UL },						\
 	.eff_enirqf = 0UL,						\
@@ -509,6 +514,7 @@ extern void dept_event(struct dept_map *m, unsigned long e_f, unsigned long ip,
 extern void dept_ecxt_exit(struct dept_map *m, unsigned long e_f, unsigned long ip);
 extern void dept_sched_enter(void);
 extern void dept_sched_exit(void);
+extern void dept_kernel_enter(void);
 
 static inline void dept_ecxt_enter_nokeep(struct dept_map *m)
 {
@@ -560,6 +566,7 @@ struct dept_task { };
 #define dept_ecxt_exit(m, e_f, ip)		do { } while (0)
 #define dept_sched_enter()			do { } while (0)
 #define dept_sched_exit()			do { } while (0)
+#define dept_kernel_enter()			do { } while (0)
 #define dept_ecxt_enter_nokeep(m)		do { } while (0)
 #define dept_key_init(k)			do { (void)(k); } while (0)
 #define dept_key_destroy(k)			do { (void)(k); } while (0)
diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c
index c5e23e9184b8..4165cacf4ebb 100644
--- a/kernel/dependency/dept.c
+++ b/kernel/dependency/dept.c
@@ -221,9 +221,9 @@ static inline struct dept_class *dep_tc(struct dept_dep *d)
 
 static inline const char *irq_str(int irq)
 {
-	if (irq == DEPT_SIRQ)
+	if (irq == DEPT_CXT_SIRQ)
 		return "softirq";
-	if (irq == DEPT_HIRQ)
+	if (irq == DEPT_CXT_HIRQ)
 		return "hardirq";
 	return "(unknown)";
 }
@@ -407,7 +407,7 @@ static void initialize_class(struct dept_class *c)
 {
 	int i;
 
-	for (i = 0; i < DEPT_IRQS_NR; i++) {
+	for (i = 0; i < DEPT_CXT_IRQS_NR; i++) {
 		struct dept_iecxt *ie = &c->iecxt[i];
 		struct dept_iwait *iw = &c->iwait[i];
 
@@ -432,7 +432,7 @@ static void initialize_ecxt(struct dept_ecxt *e)
 {
 	int i;
 
-	for (i = 0; i < DEPT_IRQS_NR; i++) {
+	for (i = 0; i < DEPT_CXT_IRQS_NR; i++) {
 		e->enirq_stack[i] = NULL;
 		e->enirq_ip[i] = 0UL;
 	}
@@ -448,7 +448,7 @@ static void initialize_wait(struct dept_wait *w)
 {
 	int i;
 
-	for (i = 0; i < DEPT_IRQS_NR; i++) {
+	for (i = 0; i < DEPT_CXT_IRQS_NR; i++) {
 		w->irq_stack[i] = NULL;
 		w->irq_ip[i] = 0UL;
 	}
@@ -487,7 +487,7 @@ static void destroy_ecxt(struct dept_ecxt *e)
 {
 	int i;
 
-	for (i = 0; i < DEPT_IRQS_NR; i++)
+	for (i = 0; i < DEPT_CXT_IRQS_NR; i++)
 		if (e->enirq_stack[i])
 			put_stack(e->enirq_stack[i]);
 	if (e->class)
@@ -503,7 +503,7 @@ static void destroy_wait(struct dept_wait *w)
 {
 	int i;
 
-	for (i = 0; i < DEPT_IRQS_NR; i++)
+	for (i = 0; i < DEPT_CXT_IRQS_NR; i++)
 		if (w->irq_stack[i])
 			put_stack(w->irq_stack[i]);
 	if (w->class)
@@ -652,7 +652,7 @@ static void print_diagram(struct dept_dep *d)
 	const char *tc_n = tc->sched_map ? "" : (tc->name ?: "(unknown)");
 
 	irqf = e->enirqf & w->irqf;
-	for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) {
+	for_each_set_bit(irq, &irqf, DEPT_CXT_IRQS_NR) {
 		if (!firstline)
 			pr_warn("\nor\n\n");
 		firstline = false;
@@ -685,7 +685,7 @@ static void print_dep(struct dept_dep *d)
 	const char *tc_n = tc->sched_map ?
"" : (tc->name ?: "(unknown)"); irqf = e->enirqf & w->irqf; - for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) { + for_each_set_bit(irq, &irqf, DEPT_CXT_IRQS_NR) { pr_warn("%s has been enabled:\n", irq_str(irq)); print_ip_stack(e->enirq_ip[irq], e->enirq_stack[irq]); pr_warn("\n"); @@ -911,7 +911,7 @@ static void bfs(struct dept_class *c, bfs_f *cb, void *in, void **out) */ static inline unsigned long cur_enirqf(void); -static inline int cur_irq(void); +static inline int cur_cxt(void); static inline unsigned int cur_ctxt_id(void); static inline struct dept_iecxt *iecxt(struct dept_class *c, int irq) @@ -1459,7 +1459,7 @@ static void add_dep(struct dept_ecxt *e, struct dept_wait *w) if (d) { check_dl_bfs(d); - for (i = 0; i < DEPT_IRQS_NR; i++) { + for (i = 0; i < DEPT_CXT_IRQS_NR; i++) { struct dept_iwait *fiw = iwait(fc, i); struct dept_iecxt *found_ie; struct dept_iwait *found_iw; @@ -1495,7 +1495,7 @@ static void add_wait(struct dept_class *c, unsigned long ip, struct dept_task *dt = dept_task(); struct dept_wait *w; unsigned int wg = 0U; - int irq; + int cxt; int i; if (DEPT_WARN_ON(!valid_class(c))) @@ -1511,9 +1511,9 @@ static void add_wait(struct dept_class *c, unsigned long ip, w->wait_stack = get_current_stack(); w->sched_sleep = sched_sleep; - irq = cur_irq(); - if (irq < DEPT_IRQS_NR) - add_iwait(c, irq, w); + cxt = cur_cxt(); + if (cxt == DEPT_CXT_HIRQ || cxt == DEPT_CXT_SIRQ) + add_iwait(c, cxt, w); /* * Avoid adding dependency between user aware nested ecxt and @@ -1594,7 +1594,7 @@ static bool add_ecxt(struct dept_map *m, struct dept_class *c, eh->sub_l = sub_l; irqf = cur_enirqf(); - for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) + for_each_set_bit(irq, &irqf, DEPT_CXT_IRQS_NR) add_iecxt(c, irq, e, false); del_ecxt(e); @@ -1746,7 +1746,7 @@ static void do_event(struct dept_map *m, struct dept_class *c, add_dep(eh->ecxt, wh->wait); } - for (i = 0; i < DEPT_IRQS_NR; i++) { + for (i = 0; i < DEPT_CXT_IRQS_NR; i++) { struct dept_ecxt *e; if (before(dt->wgen_enirq[i], wg)) @@ -1788,7 +1788,7 @@ static void disconnect_class(struct dept_class *c) call_rcu(&d->rh, del_dep_rcu); } - for (i = 0; i < DEPT_IRQS_NR; i++) { + for (i = 0; i < DEPT_CXT_IRQS_NR; i++) { stale_iecxt(iecxt(c, i)); stale_iwait(iwait(c, i)); } @@ -1813,27 +1813,21 @@ static inline unsigned long cur_enirqf(void) return 0UL; } -static inline int cur_irq(void) +static inline int cur_cxt(void) { if (lockdep_softirq_context(current)) - return DEPT_SIRQ; + return DEPT_CXT_SIRQ; if (lockdep_hardirq_context()) - return DEPT_HIRQ; - return DEPT_IRQS_NR; + return DEPT_CXT_HIRQ; + return DEPT_CXT_PROCESS; } static inline unsigned int cur_ctxt_id(void) { struct dept_task *dt = dept_task(); - int irq = cur_irq(); + int cxt = cur_cxt(); - /* - * Normal process context - */ - if (irq == DEPT_IRQS_NR) - return 0U; - - return dt->irq_id[irq] | (1UL << irq); + return dt->cxt_id[cxt] | (1UL << cxt); } static void enirq_transition(int irq) @@ -1884,7 +1878,7 @@ static void enirq_update(unsigned long ip) /* * Do enirq_transition() only on an OFF -> ON transition. */ - for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) { + for_each_set_bit(irq, &irqf, DEPT_CXT_IRQS_NR) { if (prev & (1UL << irq)) continue; @@ -1983,6 +1977,13 @@ void dept_hardirqs_off_ip(unsigned long ip) } EXPORT_SYMBOL_GPL(dept_hardirqs_off_ip); +void dept_kernel_enter(void) +{ + struct dept_task *dt = dept_task(); + + dt->cxt_id[DEPT_CXT_PROCESS] += 1UL << DEPT_CXTS_NR; +} + /* * Ensure it's the outmost softirq context. 
  */
@@ -1990,7 +1991,7 @@ void dept_softirq_enter(void)
 {
 	struct dept_task *dt = dept_task();
 
-	dt->irq_id[DEPT_SIRQ] += 1UL << DEPT_IRQS_NR;
+	dt->cxt_id[DEPT_CXT_SIRQ] += 1UL << DEPT_CXTS_NR;
 }
 
 /*
@@ -2000,7 +2001,7 @@ void dept_hardirq_enter(void)
 {
 	struct dept_task *dt = dept_task();
 
-	dt->irq_id[DEPT_HIRQ] += 1UL << DEPT_IRQS_NR;
+	dt->cxt_id[DEPT_CXT_HIRQ] += 1UL << DEPT_CXTS_NR;
 }
 
 void dept_sched_enter(void)

From patchwork Mon Aug 21 03:46:25 2023
From: Byungchul Park <byungchul@sk.com>
Subject: [RESEND PATCH v10 13/25] dept: Distinguish each work from another
Date: Mon, 21 Aug 2023 12:46:25 +0900
Message-Id: <20230821034637.34630-14-byungchul@sk.com>
V1dCg8PyHwVdbXdJsOXpKfhjykjDuMdEgMk9zUBvkwFDVbpXlDG3TEG7vglDRtk1DPZH9Qga M4cxiJZ+GlrdExisYgEBC7/fQeDKmWTgwqV5BopTcxBkXygkoedVOwXpgzth8WUJHfEJ3zox TfDp1jN8g8dA8p1Gjr95eYjh0xsfM7xBPM1bzVv4q7dGMV8666Z4sTyL5sXZfIa/OGnH/FR3 N8Pf/XWR5J/ai/A3Qfulu9VCtCZB0G377LA0qtJiJ07OBJx9WL0uBYnsReQn4dgdnOFKP/WG O6sWSR/TbDA3MDBP+Hgtu4Gz6p95O1IJwf68mjPP3KN9QQC7h7vd3LtyQLIbuZ7UBeRjGbuL c87Y8GvpB1xFVdOKyM+7F+vrVjpydif3j/Nv0ifl2Gw/rn5ymHl98C7XbB4gc5HMgFaVI7km NiFGpYneERKVGKs5G3JUGyMi70eZkpe+r0Wztn0tiJUghb/s8HsutZxSJcQlxrQgTkIo1sqC XjjVcplalZgk6LSHdKejhbgWFCQhFe/ItnvOqOXsD6p44YQgnBR0b1Is8QtMQRn5G8SamgLo XBM4/u2Ra1r59eN7p46tH0xLjkz7Uvi6OKw1PFhfMZT/5MexT8uyvgtRtrsPfKU9OBew/+1N ifE/vZ8UsKdr1YN9OcGNPWPBzdpz3eFDHw0ft3q++NzN6Cwxf42E1uifbY7I/fBUKKEklMW/ 3Ch5tTySVGhU1tQvj5Q5FGRclCpsC6GLU/0PNCIzHU0DAAA= X-Brightmail-Tracker: H4sIAAAAAAAAAzWSfUzMcRzH+35/jx3Hz5X1m2zsNjMPUVvHp6ExNr+1ycMY4w8d95s76rK7 5GJtpdAj1ZZCrJLT6ii/+iPcJUWk6UHX5aHiTqGJjlz04KEy/3z22uf9+bz+erOEooCax+r0 MaJBr45U0jJSFr4mKaCz16UJnHigguyMQPB8TyGhoMJCQ9utcgSW6kQMA482Q9fIIILxZ60E 5OW2IShy9hBQ3diLwFZ6ioaOvllg9wzR0JSbTkPStQoa2j9NYOi+kIOhXNoCzVnFGOpGP5CQ N0DD5bwkPDk+Yhg1lzFgTlgErtJLDEw4g6Cp10FBw5UmCmyvlsHFq900WG1NJDTWuDB03C2g odfyh4LmxicktGVnUnDzSzENn0bMBJg9Qww8ryvEUJk8aTsz/JuCx5l1GM6U3MZgf3kPQW3K WwySxUFDg2cQQ5WUS8DYjUcIXOc+M3A6Y5SBy4nnEKSfvkBC66/HFCR3q2D8ZwG9fo3QMDhE CMlVxwXbSCEpPC3mhTuXehghufYVIxRKx4Sq0qXCNesAFoq+eShBKkulBelbDiOkfbZj4UtL CyM8yR8nhT57Ht42f69srUaM1MWKhpWhETJthcVOHHX7mLqq5yYgiUtD3izPBfNPK8fJKaa5 xfyLF6PEFPtyC/mqzPdUGpKxBHd2Bl/qfkZPBT5cGP/wwfPpB5JbxLcmjqEplnOreKe7Df+T LuDLK+umRd6Te+ne3ekbBafivzrfkVlIVoi8ypCvTh8bpdZFqlYYj2jj9DrTioPRURKa7Iw5 fiK7Bn3v2FyPOBYpZ8oj5rs0Ckoda4yLqkc8Syh95f4/nBqFXKOOOyEaovcbjkWKxnrkz5JK P3nYbjFCwR1Sx4hHRPGoaPifYtZ7XgKK35lqDNMWXY2pcERbw1XNczaEttSaAiXbrhv3vbT+ S2T5+1rbf3RaV38I7T+ZsjLg/MbtQ1alaZbjys5l1316Vg8fTMqKCdrb9zp6q3xsj76hxBTu Ffx2x57+JeschpAD5l3DmeaS2Tl+8Y1vfoW48zdZeg6nn2IH3B7DclET4n9TSRq16qClhMGo /gv9mtivLwMAAA== X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Workqueue already provides concurrency control. By that, any wait in a work doesn't prevents events in other works with the control enabled. Thus, each work would better be considered a different context. So let Dept assign a different context id to each work. 
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/dept.h     |  2 ++
 kernel/dependency/dept.c | 10 ++++++++++
 kernel/workqueue.c       |  3 +++
 3 files changed, 15 insertions(+)

diff --git a/include/linux/dept.h b/include/linux/dept.h
index f62c7b6f42c6..d9ca9dd50219 100644
--- a/include/linux/dept.h
+++ b/include/linux/dept.h
@@ -515,6 +515,7 @@ extern void dept_ecxt_exit(struct dept_map *m, unsigned long e_f, unsigned long
 extern void dept_sched_enter(void);
 extern void dept_sched_exit(void);
 extern void dept_kernel_enter(void);
+extern void dept_work_enter(void);
 
 static inline void dept_ecxt_enter_nokeep(struct dept_map *m)
 {
@@ -567,6 +568,7 @@ struct dept_task { };
 #define dept_sched_enter()			do { } while (0)
 #define dept_sched_exit()			do { } while (0)
 #define dept_kernel_enter()			do { } while (0)
+#define dept_work_enter()			do { } while (0)
 #define dept_ecxt_enter_nokeep(m)		do { } while (0)
 #define dept_key_init(k)			do { (void)(k); } while (0)
 #define dept_key_destroy(k)			do { (void)(k); } while (0)
diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c
index 4165cacf4ebb..6cf17f206b78 100644
--- a/kernel/dependency/dept.c
+++ b/kernel/dependency/dept.c
@@ -1977,6 +1977,16 @@ void dept_hardirqs_off_ip(unsigned long ip)
 }
 EXPORT_SYMBOL_GPL(dept_hardirqs_off_ip);
 
+/*
+ * Assign a different context id to each work.
+ */
+void dept_work_enter(void)
+{
+	struct dept_task *dt = dept_task();
+
+	dt->cxt_id[DEPT_CXT_PROCESS] += 1UL << DEPT_CXTS_NR;
+}
+
 void dept_kernel_enter(void)
 {
 	struct dept_task *dt = dept_task();
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index c913e333cce8..fa23d876a8b5 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -52,6 +52,7 @@
 #include
 #include
 #include
+#include <linux/dept.h>
 
 #include "workqueue_internal.h"
 
@@ -2318,6 +2319,8 @@ __acquires(&pool->lock)
 	lockdep_copy_map(&lockdep_map, &work->lockdep_map);
 #endif
 
+	dept_work_enter();
+
 	/* ensure we're on the correct CPU */
 	WARN_ON_ONCE(!(pool->flags & POOL_DISASSOCIATED) &&
 		     raw_smp_processor_id() != pool->cpu);

From patchwork Mon Aug 21 03:46:26 2023
From: Byungchul Park <byungchul@sk.com>
Subject: [RESEND PATCH v10 14/25] dept: Add a mechanism to refill the internal memory pools on running out
Date: Mon, 21 Aug 2023 12:46:26 +0900
Message-Id: <20230821034637.34630-15-byungchul@sk.com>
The Dept engine works in a constrained environment; for example, it cannot use dynamic allocation such as kmalloc() directly. So Dept has been using static pools to hold the memory chunks it uses. However, Dept barely works once any of the pools runs out. So implement a mechanism that refills a pool whenever it runs low, using irq work and a workqueue, which fit the constrained environment.
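The refill has to escape Dept's restricted context in two stages: an irq_work, which is safe to queue from almost anywhere, kicks a workqueue item, which can then vmalloc() a new reserve pool in sleepable process context. A stripped-down sketch of that deferral chain, with the actual pool bookkeeping omitted:

	#include <linux/irq_work.h>
	#include <linux/vmalloc.h>
	#include <linux/workqueue.h>

	static void refill_work_fn(struct work_struct *work)
	{
		/* Stage 2: process context, sleeping allocation is fine. */
		void *rpool = vmalloc(PAGE_SIZE);

		/* ...install rpool as the new reserve under the pool lock... */
	}
	static DECLARE_WORK(refill_work, refill_work_fn);

	static void refill_irq_work_fn(struct irq_work *w)
	{
		schedule_work(&refill_work);	/* hand off to a workqueue */
	}
	static DEFINE_IRQ_WORK(refill_irq_work, refill_irq_work_fn);

	static void request_refill(void)
	{
		/* Stage 1: NMI-safe entry point usable from Dept's guts. */
		irq_work_queue(&refill_irq_work);
	}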
pr_warn("DEPT_INFO: " s) static arch_spinlock_t dept_spin = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED; static arch_spinlock_t stage_spin = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED; +static arch_spinlock_t dept_pool_spin = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED; /* * DEPT internal engine should be careful in using outside functions @@ -264,6 +269,7 @@ static inline bool valid_key(struct dept_key *k) #define OBJECT(id, nr) \ static struct dept_##id spool_##id[nr]; \ +static struct dept_##id rpool_##id[nr]; \ static DEFINE_PER_CPU(struct llist_head, lpool_##id); #include "dept_object.h" #undef OBJECT @@ -272,14 +278,70 @@ struct dept_pool dept_pool[OBJECT_NR] = { #define OBJECT(id, nr) { \ .name = #id, \ .obj_sz = sizeof(struct dept_##id), \ - .obj_nr = ATOMIC_INIT(nr), \ + .obj_nr = nr, \ + .tot_nr = nr, \ + .acc_sz = ATOMIC_INIT(sizeof(spool_##id) + sizeof(rpool_##id)), \ .node_off = offsetof(struct dept_##id, pool_node), \ .spool = spool_##id, \ + .rpool = rpool_##id, \ .lpool = &lpool_##id, }, #include "dept_object.h" #undef OBJECT }; +static void dept_wq_work_fn(struct work_struct *work) +{ + int i; + + for (i = 0; i < OBJECT_NR; i++) { + struct dept_pool *p = dept_pool + i; + int sz = p->tot_nr * p->obj_sz; + void *rpool; + bool need; + + arch_spin_lock(&dept_pool_spin); + need = !p->rpool; + arch_spin_unlock(&dept_pool_spin); + + if (!need) + continue; + + rpool = vmalloc(sz); + + if (!rpool) { + DEPT_STOP("Failed to extend internal resources.\n"); + break; + } + + arch_spin_lock(&dept_pool_spin); + if (!p->rpool) { + p->rpool = rpool; + rpool = NULL; + atomic_add(sz, &p->acc_sz); + } + arch_spin_unlock(&dept_pool_spin); + + if (rpool) + vfree(rpool); + else + DEPT_INFO("Dept object(%s) just got refilled successfully.\n", p->name); + } +} + +static DECLARE_WORK(dept_wq_work, dept_wq_work_fn); + +static void dept_irq_work_fn(struct irq_work *w) +{ + schedule_work(&dept_wq_work); +} + +static DEFINE_IRQ_WORK(dept_irq_work, dept_irq_work_fn); + +static void request_rpool_refill(void) +{ + irq_work_queue(&dept_irq_work); +} + /* * Can use llist no matter whether CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG is * enabled or not because NMI and other contexts in the same CPU never @@ -315,19 +377,31 @@ static void *from_pool(enum object_t t) /* * Try static pool. */ - if (atomic_read(&p->obj_nr) > 0) { - int idx = atomic_dec_return(&p->obj_nr); + arch_spin_lock(&dept_pool_spin); + + if (!p->obj_nr) { + p->spool = p->rpool; + p->obj_nr = p->rpool ? p->tot_nr : 0; + p->rpool = NULL; + request_rpool_refill(); + } + + if (p->obj_nr) { + void *ret; + + p->obj_nr--; + ret = p->spool + (p->obj_nr * p->obj_sz); + arch_spin_unlock(&dept_pool_spin); - if (idx >= 0) - return p->spool + (idx * p->obj_sz); + return ret; } + arch_spin_unlock(&dept_pool_spin); - DEPT_INFO_ONCE("---------------------------------------------\n" - " Some of Dept internal resources are run out.\n" - " Dept might still work if the resources get freed.\n" - " However, the chances are Dept will suffer from\n" - " the lack from now. Needs to extend the internal\n" - " resource pools. Ask max.byungchul.park@gmail.com\n"); + DEPT_INFO("------------------------------------------\n" + " Dept object(%s) is run out.\n" + " Dept is trying to refill the object.\n" + " Nevertheless, if it fails, Dept will stop.\n", + p->name); return NULL; } @@ -3000,8 +3074,8 @@ void __init dept_init(void) pr_info("... DEPT_MAX_ECXT_HELD : %d\n", DEPT_MAX_ECXT_HELD); pr_info("... 
 	pr_info("... DEPT_MAX_SUBCLASSES : %d\n", DEPT_MAX_SUBCLASSES);
 #define OBJECT(id, nr)							\
-	pr_info("... memory used by %s: %zu KB\n",			\
-		#id, B2KB(sizeof(struct dept_##id) * nr));
+	pr_info("... memory initially used by %s: %zu KB\n",		\
+		#id, B2KB(sizeof(spool_##id) + sizeof(rpool_##id)));
 	#include "dept_object.h"
 #undef  OBJECT
 #define HASH(id, bits)							\
@@ -3009,6 +3083,6 @@ void __init dept_init(void)
 		#id, B2KB(sizeof(struct hlist_head) * (1 << (bits))));
 	#include "dept_hash.h"
 #undef  HASH
-	pr_info("... total memory used by objects and hashs: %zu KB\n", B2KB(mem_total));
+	pr_info("... total memory initially used by objects and hashs: %zu KB\n", B2KB(mem_total));
 	pr_info("... per task memory footprint: %zu bytes\n", sizeof(struct dept_task));
 }
diff --git a/kernel/dependency/dept_object.h b/kernel/dependency/dept_object.h
index 0b7eb16fe9fb..4f936adfa8ee 100644
--- a/kernel/dependency/dept_object.h
+++ b/kernel/dependency/dept_object.h
@@ -6,8 +6,8 @@
  * nr: # of the object that should be kept in the pool.
  */
 
-OBJECT(dep, 1024 * 8)
-OBJECT(class, 1024 * 8)
-OBJECT(stack, 1024 * 32)
-OBJECT(ecxt, 1024 * 16)
-OBJECT(wait, 1024 * 32)
+OBJECT(dep, 1024 * 4 * 2)
+OBJECT(class, 1024 * 4)
+OBJECT(stack, 1024 * 4 * 8)
+OBJECT(ecxt, 1024 * 4 * 2)
+OBJECT(wait, 1024 * 4 * 4)
diff --git a/kernel/dependency/dept_proc.c b/kernel/dependency/dept_proc.c
index 7d61dfbc5865..f07a512b203f 100644
--- a/kernel/dependency/dept_proc.c
+++ b/kernel/dependency/dept_proc.c
@@ -73,12 +73,10 @@ static int dept_stats_show(struct seq_file *m, void *v)
 {
 	int r;
 
-	seq_puts(m, "Availability in the static pools:\n\n");
+	seq_puts(m, "Accumulated amount of memory used by pools:\n\n");
 #define OBJECT(id, nr)							\
-	r = atomic_read(&dept_pool[OBJECT_##id].obj_nr);		\
-	if (r < 0)							\
-		r = 0;							\
-	seq_printf(m, "%s\t%d/%d(%d%%)\n", #id, r, nr, (r * 100) / (nr));
+	r = atomic_read(&dept_pool[OBJECT_##id].acc_sz);		\
+	seq_printf(m, "%s\t%d KB\n", #id, r / 1024);
 	#include "dept_object.h"
 #undef  OBJECT

From patchwork Mon Aug 21 03:46:27 2023
From: Byungchul Park <byungchul@sk.com>
Subject: [RESEND PATCH v10 15/25] locking/lockdep, cpu/hotplug: Use a weaker annotation in AP thread
Date: Mon, 21 Aug 2023 12:46:27 +0900
Message-Id: <20230821034637.34630-16-byungchul@sk.com>

Commit cb92173d1f0 ("locking/lockdep, cpu/hotplug: Annotate AP thread") was introduced to make lockdep_assert_cpus_held() work in the AP thread. However, the annotation is stronger than needed for that purpose; a try-lock annotation is sufficient. Furthermore, now that Dept has been introduced, the stronger annotation triggers false positive reports. Replace it with a try-lock annotation.
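For reference, the third argument of rwsem_acquire() is the trylock flag. Roughly, as abridged below from include/linux/lockdep.h (not verbatim, shown only to explain the one-character change that follows):

	/*
	 * The third parameter is the trylock flag:
	 *
	 *   rwsem_acquire(l, s, t, i)
	 *     -> lock_acquire_exclusive(l, s, t, NULL, i)
	 *       -> lock_acquire(l, s, t, 0, 1, NULL, i)
	 *
	 * A trylock acquisition cannot block, so annotating the AP
	 * thread's implicit hold with t == 1 keeps
	 * lockdep_assert_cpus_held() working without adding a wait
	 * dependency for Dept to chase.
	 */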
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 kernel/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/cpu.c b/kernel/cpu.c
index f4a2c5845bcb..19076f798b34 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -356,7 +356,7 @@ int lockdep_is_cpus_held(void)
 
 static void lockdep_acquire_cpus_lock(void)
 {
-	rwsem_acquire(&cpu_hotplug_lock.dep_map, 0, 0, _THIS_IP_);
+	rwsem_acquire(&cpu_hotplug_lock.dep_map, 0, 1, _THIS_IP_);
 }
 
 static void lockdep_release_cpus_lock(void)

From patchwork Mon Aug 21 03:46:28 2023
From: Byungchul Park <byungchul@sk.com>
Subject: [RESEND PATCH v10 16/25] dept: Apply sdt_might_sleep_{start,end}() to dma fence wait
Date: Mon, 21 Aug 2023 12:46:28 +0900
Message-Id: <20230821034637.34630-17-byungchul@sk.com>
Makes Dept able to track dma fence waits.
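dma_fence_default_wait() open-codes its sleep loop on the fence's callback list rather than going through wait_event(), so the generic annotations never see it and the bracketing has to be added by hand. A hypothetical caller whose fence wait becomes trackable, for illustration only:

	#include <linux/dma-fence.h>

	static long wait_for_gpu(struct dma_fence *fence)
	{
		/*
		 * Sleeps in dma_fence_default_wait() unless the driver
		 * overrides the wait op; with this patch that sleep is
		 * bracketed by sdt_might_sleep_{start,end}(), so a circle
		 * through fence signaling becomes visible to Dept.
		 */
		return dma_fence_wait(fence, true);	/* interruptible */
	}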
Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 drivers/dma-buf/dma-fence.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index f177c56269bb..ad2d7a94c868 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include <linux/dept_sdt.h>
 
 #define CREATE_TRACE_POINTS
 #include
@@ -782,6 +783,7 @@ dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout)
 	cb.task = current;
 	list_add(&cb.base.node, &fence->cb_list);
 
+	sdt_might_sleep_start(NULL);
 	while (!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) {
 		if (intr)
 			__set_current_state(TASK_INTERRUPTIBLE);
@@ -795,6 +797,7 @@ dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout)
 		if (ret > 0 && intr && signal_pending(current))
 			ret = -ERESTARTSYS;
 	}
+	sdt_might_sleep_end();
 
 	if (!list_empty(&cb.base.node))
 		list_del(&cb.base.node);
@@ -884,6 +887,7 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
 		}
 	}
 
+	sdt_might_sleep_start(NULL);
 	while (ret > 0) {
 		if (intr)
 			set_current_state(TASK_INTERRUPTIBLE);
@@ -898,6 +902,7 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
 		if (ret > 0 && intr && signal_pending(current))
 			ret = -ERESTARTSYS;
 	}
+	sdt_might_sleep_end();
 
 	__set_current_state(TASK_RUNNING);

From patchwork Mon Aug 21 03:46:29 2023
From: Byungchul Park <byungchul@sk.com>
Subject: [RESEND PATCH v10 17/25] dept: Track timeout waits separately with a new Kconfig
Date: Mon, 21 Aug 2023 12:46:29 +0900
Message-Id: <20230821034637.34630-18-byungchul@sk.com>

Waits with valid timeouts don't actually cause deadlocks. However, Dept has been reporting such cases as well, because informing about the circular dependency is still worthwhile for some of them, for example, where a timeout is used to avoid a deadlock but is not meant to expire. That said, there are many more cases where a timeout is used for its clear purpose and is meant to expire. Let Dept report these as information rather than shouting DEADLOCK.

Plus, introduce a CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT Kconfig option to make this optional, so that reports involving waits with timeouts can be turned on or off depending on the purpose.
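At the annotation sites this becomes a trailing timeout argument: a negative value means no timeout and keeps today's behavior, while a real timeout marks the wait so that a circle through it is reported as information only (unless the new Kconfig is set). Hypothetical usage of the helpers introduced below:

	#include <linux/dept_sdt.h>

	static void my_bounded_wait(void)	/* hypothetical caller */
	{
		/*
		 * Bounded wait: Dept knows this wait expires and cannot
		 * be the blocking edge of a real deadlock.
		 */
		sdt_might_sleep_start_timeout(NULL, HZ);
		/* ...sleep for up to one second... */
		sdt_might_sleep_end();
	}

	static void my_unbounded_wait(void)	/* hypothetical caller */
	{
		/*
		 * Unbounded wait: sdt_might_sleep_start(m) is now
		 * shorthand for sdt_might_sleep_start_timeout(m, -1L).
		 */
		sdt_might_sleep_start(NULL);
		/* ...sleep until woken... */
		sdt_might_sleep_end();
	}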
Signed-off-by: Byungchul Park
---
 include/linux/dept.h     | 15 ++++++---
 include/linux/dept_ldt.h |  6 ++--
 include/linux/dept_sdt.h | 12 +++++---
 kernel/dependency/dept.c | 66 ++++++++++++++++++++++++++++++++++------
 lib/Kconfig.debug        | 10 ++++++
 5 files changed, 89 insertions(+), 20 deletions(-)

diff --git a/include/linux/dept.h b/include/linux/dept.h
index 583e8fe2dd7b..0aa8d90558a9 100644
--- a/include/linux/dept.h
+++ b/include/linux/dept.h
@@ -270,6 +270,11 @@ struct dept_wait {
 			 * whether this wait is for commit in scheduler
 			 */
 			bool sched_sleep;
+
+			/*
+			 * whether a timeout is set
+			 */
+			bool timeout;
 		};
 	};
 };
@@ -458,6 +463,7 @@ struct dept_task {
 	bool stage_sched_map;
 	const char *stage_w_fn;
 	unsigned long stage_ip;
+	bool stage_timeout;

 	/*
 	 * the number of missing ecxts
@@ -496,6 +502,7 @@ struct dept_task {
 	.stage_sched_map = false,		\
 	.stage_w_fn = NULL,			\
 	.stage_ip = 0UL,			\
+	.stage_timeout = false,			\
 	.missing_ecxt = 0,			\
 	.hardirqs_enabled = false,		\
 	.softirqs_enabled = false,		\
@@ -513,8 +520,8 @@ extern void dept_map_init(struct dept_map *m, struct dept_key *k, int sub_u, con
 extern void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u, const char *n);
 extern void dept_map_copy(struct dept_map *to, struct dept_map *from);

-extern void dept_wait(struct dept_map *m, unsigned long w_f, unsigned long ip, const char *w_fn, int sub_l);
-extern void dept_stage_wait(struct dept_map *m, struct dept_key *k, unsigned long ip, const char *w_fn);
+extern void dept_wait(struct dept_map *m, unsigned long w_f, unsigned long ip, const char *w_fn, int sub_l, long timeout);
+extern void dept_stage_wait(struct dept_map *m, struct dept_key *k, unsigned long ip, const char *w_fn, long timeout);
 extern void dept_request_event_wait_commit(void);
 extern void dept_clean_stage(void);
 extern void dept_stage_event(struct task_struct *t, unsigned long ip);
@@ -566,8 +573,8 @@ struct dept_task { };
 #define dept_map_reinit(m, k, su, n)	do { (void)(n); (void)(k); } while (0)
 #define dept_map_copy(t, f)		do { } while (0)

-#define dept_wait(m, w_f, ip, w_fn, sl)	do { (void)(w_fn); } while (0)
-#define dept_stage_wait(m, k, ip, w_fn)	do { (void)(k); (void)(w_fn); } while (0)
+#define dept_wait(m, w_f, ip, w_fn, sl, t)	do { (void)(w_fn); } while (0)
+#define dept_stage_wait(m, k, ip, w_fn, t)	do { (void)(k); (void)(w_fn); } while (0)
 #define dept_request_event_wait_commit()	do { } while (0)
 #define dept_clean_stage()		do { } while (0)
 #define dept_stage_event(t, ip)		do { } while (0)
diff --git a/include/linux/dept_ldt.h b/include/linux/dept_ldt.h
index 062613e89fc3..8adf298dfcb8 100644
--- a/include/linux/dept_ldt.h
+++ b/include/linux/dept_ldt.h
@@ -27,7 +27,7 @@
 		else if (t)						\
 			dept_ecxt_enter(m, LDT_EVT_L, i, "trylock", "unlock", sl);\
 		else {							\
-			dept_wait(m, LDT_EVT_L, i, "lock", sl);		\
+			dept_wait(m, LDT_EVT_L, i, "lock", sl, false);	\
 			dept_ecxt_enter(m, LDT_EVT_L, i, "lock", "unlock", sl);\
 		}							\
 	} while (0)
@@ -39,7 +39,7 @@
 		else if (t)						\
 			dept_ecxt_enter(m, LDT_EVT_R, i, "read_trylock", "read_unlock", sl);\
 		else {							\
-			dept_wait(m, q ? LDT_EVT_RW : LDT_EVT_W, i, "read_lock", sl);\
+			dept_wait(m, q ? LDT_EVT_RW : LDT_EVT_W, i, "read_lock", sl, false);\
 			dept_ecxt_enter(m, LDT_EVT_R, i, "read_lock", "read_unlock", sl);\
 		}							\
 	} while (0)
@@ -51,7 +51,7 @@
 		else if (t)						\
 			dept_ecxt_enter(m, LDT_EVT_W, i, "write_trylock", "write_unlock", sl);\
 		else {							\
-			dept_wait(m, LDT_EVT_RW, i, "write_lock", sl);	\
+			dept_wait(m, LDT_EVT_RW, i, "write_lock", sl, false);\
 			dept_ecxt_enter(m, LDT_EVT_W, i, "write_lock", "write_unlock", sl);\
 		}							\
 	} while (0)
diff --git a/include/linux/dept_sdt.h b/include/linux/dept_sdt.h
index 12a793b90c7e..21fce525f031 100644
--- a/include/linux/dept_sdt.h
+++ b/include/linux/dept_sdt.h
@@ -22,11 +22,12 @@
 #define sdt_map_init_key(m, k)	dept_map_init(m, k, 0, #m)

-#define sdt_wait(m)							\
+#define sdt_wait_timeout(m, t)						\
 	do {								\
 		dept_request_event(m);					\
-		dept_wait(m, 1UL, _THIS_IP_, __func__, 0);		\
+		dept_wait(m, 1UL, _THIS_IP_, __func__, 0, t);		\
 	} while (0)
+#define sdt_wait(m)		sdt_wait_timeout(m, -1L)

 /*
  * sdt_might_sleep() and its family will be committed in __schedule()
@@ -37,12 +38,13 @@
 /*
  * Use the code location as the class key if an explicit map is not used.
  */
-#define sdt_might_sleep_start(m)					\
+#define sdt_might_sleep_start_timeout(m, t)				\
 	do {								\
 		struct dept_map *__m = m;				\
 		static struct dept_key __key;				\
-		dept_stage_wait(__m, __m ? NULL : &__key, _THIS_IP_, __func__);\
+		dept_stage_wait(__m, __m ? NULL : &__key, _THIS_IP_, __func__, t);\
 	} while (0)
+#define sdt_might_sleep_start(m)	sdt_might_sleep_start_timeout(m, -1L)

 #define sdt_might_sleep_end()	dept_clean_stage()

@@ -52,7 +54,9 @@
 #else /* !CONFIG_DEPT */
 #define sdt_map_init(m)			do { } while (0)
 #define sdt_map_init_key(m, k)		do { (void)(k); } while (0)
+#define sdt_wait_timeout(m, t)		do { } while (0)
 #define sdt_wait(m)			do { } while (0)
+#define sdt_might_sleep_start_timeout(m, t) do { } while (0)
 #define sdt_might_sleep_start(m)	do { } while (0)
 #define sdt_might_sleep_end()		do { } while (0)
 #define sdt_ecxt_enter(m)		do { } while (0)
diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c
index 8454f0a14d67..52537c099b68 100644
--- a/kernel/dependency/dept.c
+++ b/kernel/dependency/dept.c
@@ -740,6 +740,8 @@ static void print_diagram(struct dept_dep *d)
 	if (!irqf) {
 		print_spc(spc, "[S] %s(%s:%d)\n", c_fn, fc_n, fc->sub_id);
 		print_spc(spc, "[W] %s(%s:%d)\n", w_fn, tc_n, tc->sub_id);
+		if (w->timeout)
+			print_spc(spc, "--------------- >8 timeout ---------------\n");
 		print_spc(spc, "[E] %s(%s:%d)\n", e_fn, fc_n, fc->sub_id);
 	}
 }
@@ -793,6 +795,24 @@ static void print_dep(struct dept_dep *d)

 static void save_current_stack(int skip);

+static bool is_timeout_wait_circle(struct dept_class *c)
+{
+	struct dept_class *fc = c->bfs_parent;
+	struct dept_class *tc = c;
+
+	do {
+		struct dept_dep *d = lookup_dep(fc, tc);
+
+		if (d->wait->timeout)
+			return true;
+
+		tc = fc;
+		fc = fc->bfs_parent;
+	} while (tc != c);
+
+	return false;
+}
+
 /*
  * Print all classes in a circle.
  */
@@ -815,10 +835,14 @@ static void print_circle(struct dept_class *c)
 	pr_warn("summary\n");
 	pr_warn("---------------------------------------------------\n");

-	if (fc == tc)
+	if (is_timeout_wait_circle(c)) {
+		pr_warn("NOT A DEADLOCK BUT A CIRCULAR DEPENDENCY\n");
+		pr_warn("CHECK IF THE TIMEOUT IS INTENDED\n\n");
+	} else if (fc == tc) {
 		pr_warn("*** AA DEADLOCK ***\n\n");
-	else
+	} else {
 		pr_warn("*** DEADLOCK ***\n\n");
+	}

 	i = 0;
 	do {
@@ -1564,7 +1588,8 @@ static void add_dep(struct dept_ecxt *e, struct dept_wait *w)
 static atomic_t wgen = ATOMIC_INIT(1);

 static void add_wait(struct dept_class *c, unsigned long ip,
-		     const char *w_fn, int sub_l, bool sched_sleep)
+		     const char *w_fn, int sub_l, bool sched_sleep,
+		     bool timeout)
 {
 	struct dept_task *dt = dept_task();
 	struct dept_wait *w;
@@ -1584,6 +1609,7 @@ static void add_wait(struct dept_class *c, unsigned long ip,
 	w->wait_fn = w_fn;
 	w->wait_stack = get_current_stack();
 	w->sched_sleep = sched_sleep;
+	w->timeout = timeout;

 	cxt = cur_cxt();
 	if (cxt == DEPT_CXT_HIRQ || cxt == DEPT_CXT_SIRQ)
@@ -2338,7 +2364,7 @@ static struct dept_class *check_new_class(struct dept_key *local,
  */
 static void __dept_wait(struct dept_map *m, unsigned long w_f,
 			unsigned long ip, const char *w_fn, int sub_l,
-			bool sched_sleep, bool sched_map)
+			bool sched_sleep, bool sched_map, bool timeout)
 {
 	int e;

@@ -2361,7 +2387,7 @@ static void __dept_wait(struct dept_map *m, unsigned long w_f,
 		if (!c)
 			continue;

-		add_wait(c, ip, w_fn, sub_l, sched_sleep);
+		add_wait(c, ip, w_fn, sub_l, sched_sleep, timeout);
 	}
 }

@@ -2403,14 +2429,23 @@ static void __dept_event(struct dept_map *m, unsigned long e_f,
 }

 void dept_wait(struct dept_map *m, unsigned long w_f,
-	       unsigned long ip, const char *w_fn, int sub_l)
+	       unsigned long ip, const char *w_fn, int sub_l,
+	       long timeoutval)
 {
 	struct dept_task *dt = dept_task();
 	unsigned long flags;
+	bool timeout;

 	if (unlikely(!dept_working()))
 		return;

+	timeout = timeoutval > 0 && timeoutval < MAX_SCHEDULE_TIMEOUT;
+
+#if !defined(CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT)
+	if (timeout)
+		return;
+#endif
+
 	if (dt->recursive)
 		return;

@@ -2419,21 +2454,30 @@ void dept_wait(struct dept_map *m, unsigned long w_f,

 	flags = dept_enter();

-	__dept_wait(m, w_f, ip, w_fn, sub_l, false, false);
+	__dept_wait(m, w_f, ip, w_fn, sub_l, false, false, timeout);

 	dept_exit(flags);
 }
 EXPORT_SYMBOL_GPL(dept_wait);

 void dept_stage_wait(struct dept_map *m, struct dept_key *k,
-		     unsigned long ip, const char *w_fn)
+		     unsigned long ip, const char *w_fn,
+		     long timeoutval)
 {
 	struct dept_task *dt = dept_task();
 	unsigned long flags;
+	bool timeout;

 	if (unlikely(!dept_working()))
 		return;

+	timeout = timeoutval > 0 && timeoutval < MAX_SCHEDULE_TIMEOUT;
+
+#if !defined(CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT)
+	if (timeout)
+		return;
+#endif
+
 	if (m && m->nocheck)
 		return;

@@ -2481,6 +2525,7 @@ void dept_stage_wait(struct dept_map *m, struct dept_key *k,

 	dt->stage_w_fn = w_fn;
 	dt->stage_ip = ip;
+	dt->stage_timeout = timeout;
 unlock:
 	arch_spin_unlock(&stage_spin);

@@ -2506,6 +2551,7 @@ void dept_clean_stage(void)
 	dt->stage_sched_map = false;
 	dt->stage_w_fn = NULL;
 	dt->stage_ip = 0UL;
+	dt->stage_timeout = false;
 	arch_spin_unlock(&stage_spin);

 	dept_exit_recursive(flags);
@@ -2523,6 +2569,7 @@ void dept_request_event_wait_commit(void)
 	unsigned long ip;
 	const char *w_fn;
 	bool sched_map;
+	bool timeout;

 	if (unlikely(!dept_working()))
 		return;
@@ -2545,6 +2592,7 @@ void dept_request_event_wait_commit(void)
 	w_fn = dt->stage_w_fn;
 	ip = dt->stage_ip;
 	sched_map = dt->stage_sched_map;
+	timeout = dt->stage_timeout;

 	/*
	 * Avoid zero wgen.
@@ -2552,7 +2600,7 @@ void dept_request_event_wait_commit(void)
 	wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen);
 	WRITE_ONCE(dt->stage_m.wgen, wg);

-	__dept_wait(&dt->stage_m, 1UL, ip, w_fn, 0, true, sched_map);
+	__dept_wait(&dt->stage_m, 1UL, ip, w_fn, 0, true, sched_map, timeout);
exit:
 	dept_exit(flags);
 }
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index aa62caa4dc14..f78b3d721a2b 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1234,6 +1234,16 @@ config DEPT
 	  noting, to mitigate the impact by the false positives, multi
 	  reporting has been supported.

+config DEPT_AGGRESSIVE_TIMEOUT_WAIT
+	bool "Aggressively track even timeout waits"
+	depends on DEPT
+	default n
+	help
+	  Timeout wait doesn't contribute to a deadlock. However,
+	  informing a circular dependency might be helpful for cases
+	  that timeout is used to avoid a deadlock. Say N if you'd like
+	  to avoid verbose reports.
+
 config LOCK_DEBUGGING_SUPPORT
 	bool
 	depends on TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
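To make the new 'timeout' parameter concrete, the classification that
dept_wait() and dept_stage_wait() perform above can be reduced to the
following sketch (an illustration of the same test, not a helper the
patch actually adds):

	#include <linux/sched.h>	/* MAX_SCHEDULE_TIMEOUT */

	static bool wait_has_real_timeout(long timeoutval)
	{
		/*
		 * Negative values (e.g. -1L from the non-timeout
		 * wrappers) and MAX_SCHEDULE_TIMEOUT ("wait forever")
		 * do not count; only a finite positive timeout does.
		 */
		return timeoutval > 0 && timeoutval < MAX_SCHEDULE_TIMEOUT;
	}

With CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT=n such waits are skipped
entirely; with =y they are tracked, and a circle through one of them is
reported as "NOT A DEADLOCK BUT A CIRCULAR DEPENDENCY" instead of
"*** DEADLOCK ***", as the print_circle() hunk above shows.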
From patchwork Mon Aug 21 03:46:30 2023
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13359144
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [RESEND PATCH v10 18/25] dept: Apply timeout consideration to wait_for_completion()/complete()
Date: Mon, 21 Aug 2023 12:46:30 +0900
Message-Id: <20230821034637.34630-19-byungchul@sk.com>
In-Reply-To: <20230821034637.34630-1-byungchul@sk.com>
References: <20230821034637.34630-1-byungchul@sk.com>

Now that CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT has been introduced, apply
the timeout consideration to wait_for_completion()/complete().
Signed-off-by: Byungchul Park
---
 include/linux/completion.h | 4 ++--
 kernel/sched/completion.c  | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/completion.h b/include/linux/completion.h
index 32d535abebf3..15eede01a451 100644
--- a/include/linux/completion.h
+++ b/include/linux/completion.h
@@ -41,9 +41,9 @@ do { \
  */
 #define init_completion_map(x, m) init_completion(x)

-static inline void complete_acquire(struct completion *x)
+static inline void complete_acquire(struct completion *x, long timeout)
 {
-	sdt_might_sleep_start(&x->dmap);
+	sdt_might_sleep_start_timeout(&x->dmap, timeout);
 }

 static inline void complete_release(struct completion *x)
diff --git a/kernel/sched/completion.c b/kernel/sched/completion.c
index d57a5c1c1cd9..261807fa7118 100644
--- a/kernel/sched/completion.c
+++ b/kernel/sched/completion.c
@@ -100,7 +100,7 @@ __wait_for_common(struct completion *x,
 {
 	might_sleep();

-	complete_acquire(x);
+	complete_acquire(x, timeout);

 	raw_spin_lock_irq(&x->wait.lock);
 	timeout = do_wait_for_common(x, action, timeout, state);
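As a usage illustration (not part of the patch), the timeout a caller
passes now reaches the Dept annotation:

	/* finite timeout: Dept classifies this as a timeout wait */
	if (!wait_for_completion_timeout(&done, HZ))
		return -ETIMEDOUT;

	/*
	 * untimed variant: __wait_for_common() is entered with
	 * MAX_SCHEDULE_TIMEOUT, so Dept keeps treating it as a
	 * deadlock-capable wait
	 */
	wait_for_completion(&done);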
From patchwork Mon Aug 21 03:46:31 2023
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13359145
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [RESEND PATCH v10 19/25] dept: Apply timeout consideration to swait
Date: Mon, 21 Aug 2023 12:46:31 +0900
Message-Id: <20230821034637.34630-20-byungchul@sk.com>
In-Reply-To: <20230821034637.34630-1-byungchul@sk.com>
References: <20230821034637.34630-1-byungchul@sk.com>

Now that CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT has been introduced, apply
the timeout consideration to swait, assuming the 'ret' input to the
___swait_event() macro is used as a timeout value.
Signed-off-by: Byungchul Park
---
 include/linux/swait.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/swait.h b/include/linux/swait.h
index 02848211cef5..def1e47bb678 100644
--- a/include/linux/swait.h
+++ b/include/linux/swait.h
@@ -162,7 +162,7 @@ extern void finish_swait(struct swait_queue_head *q, struct swait_queue *wait);
 	struct swait_queue __wait;					\
 	long __ret = ret;						\
 									\
-	sdt_might_sleep_start(NULL);					\
+	sdt_might_sleep_start_timeout(NULL, __ret);			\
 	INIT_LIST_HEAD(&__wait.task_list);				\
 	for (;;) {							\
 		long __int = prepare_to_swait_event(&wq, &__wait, state);\

From patchwork Mon Aug 21 03:46:32 2023
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13359153
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [RESEND PATCH v10 20/25] dept: Apply timeout consideration to waitqueue wait
Date: Mon, 21 Aug 2023 12:46:32 +0900
Message-Id: <20230821034637.34630-21-byungchul@sk.com>
In-Reply-To: <20230821034637.34630-1-byungchul@sk.com>
References: <20230821034637.34630-1-byungchul@sk.com>

Now that CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT has been introduced, apply
the timeout consideration to waitqueue waits, assuming the 'ret' input
to the ___wait_event() macro is used as a timeout value.

Signed-off-by: Byungchul Park
---
 include/linux/wait.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index ff349e609da7..aa1bd964be1e 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -304,7 +304,7 @@ extern void init_wait_entry(struct wait_queue_entry *wq_entry, int flags);
 	struct wait_queue_entry __wq_entry;				\
 	long __ret = ret;	/* explicit shadow */			\
 									\
-	sdt_might_sleep_start(NULL);					\
+	sdt_might_sleep_start_timeout(NULL, __ret);			\
 	init_wait_entry(&__wq_entry, exclusive ? WQ_FLAG_EXCLUSIVE : 0); \
 	for (;;) {							\
 		long __int = prepare_to_wait_event(&wq_head, &__wq_entry, state);\
From patchwork Mon Aug 21 03:46:33 2023
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13359154
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [RESEND PATCH v10 21/25] dept: Apply timeout consideration to hashed-waitqueue wait
Date: Mon, 21 Aug 2023 12:46:33 +0900
Message-Id: <20230821034637.34630-22-byungchul@sk.com>
In-Reply-To: <20230821034637.34630-1-byungchul@sk.com>
References: <20230821034637.34630-1-byungchul@sk.com>

Now that CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT has been introduced, apply
the timeout consideration to hashed-waitqueue waits, assuming the 'ret'
input to the ___wait_var_event() macro is used as a timeout value.

Signed-off-by: Byungchul Park
---
 include/linux/wait_bit.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/wait_bit.h b/include/linux/wait_bit.h
index fe89282c3e96..3ef450d9a7c5 100644
--- a/include/linux/wait_bit.h
+++ b/include/linux/wait_bit.h
@@ -247,7 +247,7 @@ extern wait_queue_head_t *__var_waitqueue(void *p);
 	struct wait_bit_queue_entry __wbq_entry;			\
 	long __ret = ret;	/* explicit shadow */			\
 									\
-	sdt_might_sleep_start(NULL);					\
+	sdt_might_sleep_start_timeout(NULL, __ret);			\
 	init_wait_var_entry(&__wbq_entry, var,				\
 			    exclusive ? WQ_FLAG_EXCLUSIVE : 0);		\
 	for (;;) {							\
From patchwork Mon Aug 21 03:46:34 2023
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13359146
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [RESEND PATCH v10 22/25] dept: Apply timeout consideration to dma fence wait
Date: Mon, 21 Aug 2023 12:46:34 +0900
Message-Id: <20230821034637.34630-23-byungchul@sk.com>
In-Reply-To: <20230821034637.34630-1-byungchul@sk.com>
References: <20230821034637.34630-1-byungchul@sk.com>

Now that CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT has been introduced, apply
the timeout consideration to dma fence waits.

Signed-off-by: Byungchul Park
---
 drivers/dma-buf/dma-fence.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index ad2d7a94c868..ab10b228a147 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -783,7 +783,7 @@ dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout)
 	cb.task = current;
 	list_add(&cb.base.node, &fence->cb_list);

-	sdt_might_sleep_start(NULL);
+	sdt_might_sleep_start_timeout(NULL, timeout);
 	while (!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) {
 		if (intr)
 			__set_current_state(TASK_INTERRUPTIBLE);
@@ -887,7 +887,7 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
 		}
 	}

-	sdt_might_sleep_start(NULL);
+	sdt_might_sleep_start_timeout(NULL, timeout);
 	while (ret > 0) {
 		if (intr)
 			set_current_state(TASK_INTERRUPTIBLE);
From patchwork Mon Aug 21 03:46:35 2023
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13359152
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [RESEND PATCH v10 23/25] dept: Record the latest one out of consecutive waits of the same class
Date: Mon, 21 Aug 2023 12:46:35 +0900
Message-Id: <20230821034637.34630-24-byungchul@sk.com>
In-Reply-To: <20230821034637.34630-1-byungchul@sk.com>
References: <20230821034637.34630-1-byungchul@sk.com>

The current code records every wait for later use in tracking the
relation between waits and events in each context. However, since all
waits of the same class are handled the same way, it is enough to
record just one of them on behalf of the others. Ideally the whole
history buffer would be searched for such duplicates, but that would
cost too much; instead, at least keep only the latest one when waits
of the same class appear consecutively.

Signed-off-by: Byungchul Park
---
 kernel/dependency/dept.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c
index 52537c099b68..cdfda4acff58 100644
--- a/kernel/dependency/dept.c
+++ b/kernel/dependency/dept.c
@@ -1522,9 +1522,28 @@ static inline struct dept_wait_hist *new_hist(void)
 	return wh;
 }

+static inline struct dept_wait_hist *last_hist(void)
+{
+	int pos_n = hist_pos_next();
+	struct dept_wait_hist *wh_n = hist(pos_n);
+
+	/*
+	 * This is the first try.
+	 */
+	if (!pos_n && !wh_n->wait)
+		return NULL;
+
+	return hist(pos_n + DEPT_MAX_WAIT_HIST - 1);
+}
+
 static void add_hist(struct dept_wait *w, unsigned int wg, unsigned int ctxt_id)
 {
-	struct dept_wait_hist *wh = new_hist();
+	struct dept_wait_hist *wh;
+
+	wh = last_hist();
+
+	if (!wh || wh->wait->class != w->class || wh->ctxt_id != ctxt_id)
+		wh = new_hist();

 	if (likely(wh->wait))
 		put_wait(wh->wait);
daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [RESEND PATCH v10 24/25] dept: Make Dept able to work with an external wgen Date: Mon, 21 Aug 2023 12:46:36 +0900 Message-Id: <20230821034637.34630-25-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230821034637.34630-1-byungchul@sk.com> References: <20230821034637.34630-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSfUzMcRzHfb+/x46zn9PmRzYc1uQxT/vMzMNs+m5mbHkaM27uN91U7C6d jIlKlMhD5aHlKs6pE37X5uHKKpNLUxchLUenSXMVx0WucBf++ey1z/u99+f9x4enVHZmHK+L T5D08ZpYNaugFd0jCme+crm1cwY7psPpE3PA9+0YDfk3rSw4y0oRWMsPY+h6FAWv+jwI/E8b KcjLcSIobH9DQXmtC0Gl5QgLzztGQrOvl4W6nEwWUopvstD0aQBDW+4ZDKXyaqjPLsJQ1d9J Q14XC5fyUnBgfMTQby7hwJw8FdyWixwMtEdCneslA5Wt0+FCQRsLFZV1NNTedWN4fj+fBZf1 NwP1tQ4anKezGLjRU8TCpz4zBWZfLwfPqkwYbqUGgo5+/cXA46wqDEev3MbQ/NqO4MGxdxhk 60sWHvo8GGxyDgU/rz1C4D7ZzUHaiX4OLh0+iSAzLZeGxsHHDKS2LQD/j3x22SLy0NNLkVSb kVT2mWjypEgk9y6+4Ujqg1aOmOS9xGaJIMUVXZgUen0MkUuOs0T2nuFIRnczJj0NDRxxnPfT pKM5D68N26xYrJVidYmSfvaS7YoYa3E9s6di6b5cTxlORufnZaAQXhTmi/KXTuY/NzkzUJBZ IVxsaemnghwqTBRtWR8CHgVPCenDRcvnp2xQGC2sE9+esw2ZaGGqWOqw4yArhYViucmJ/4ZO EEtvVQ15QgJ72X5/6IBKWCB+aX9PB0NF4WyIOPj5+L8WY8VqSwudjZQmNKwEqXTxiXEaXez8 WTFJ8bp9s3bsjpNR4KXMBwe23EVeZ3QNEnikHqHcPt6tVTGaRENSXA0SeUodqgz73q5VKbWa pP2Sfvc2/d5YyVCDwnhaPUY5t8+oVQk7NQnSLknaI+n/q5gPGZeMFiavUk4wO6PTrt6p/YH9 GxrCpyV5D4hnryf8ioha71B56JUz1kTTo1R3jNS3TZOmeHZFuuJcHT3esZcL7B8OTRzjDjU6 ctamXDGsWP9iB+ks9LduZKu/buWi0huFamN+RTY/GcmZfoOxKaz8lLtm+aD3oDYGH0k/lLCh oXHFbDVtiNFERlB6g+YPcWdgxE4DAAA= X-Brightmail-Tracker: H4sIAAAAAAAAAzWSe0xTZxjG/b5zK91qjpWEEyG6NCEmGFCi6JtonMtCOFmicTdNSiJUe4RG QG0RRbcE5GoBuQyo46IVtHKpiqcYuZoGAlLwUmkVJYBScYoU2JQyKmwKLPvnyS/P8+R5/3kl hLySWiPRJCQK2gRVnIKWktI929OCB0Zc6k23rsihMHcTeGaySai4aabBfqMegbkxFcN4VwQM zLoRzD94RIChxI7g8ugwAY3dIwjaa87S4BhbCU7PNA22khwa0qpv0vB4YgHDUGkRhnpxN/QV VGGwet+QYBinodyQhhflLQavqY4BU0oguGrKGFgYDQXbyFMKOittFLQPboDfLw7R0NZuI6G7 yYXB0VJBw4j5EwV93T0k2AvzKLg+VUXDxKyJAJNnmoF+qxFDQ/riWuaHfym4l2fFkHnlFgbn 81YEd7NfYhDNT2no9LgxWMQSAj5e60LgOj/JQEaul4Hy1PMIcjJKSXj0zz0K0ofCYH6ugt61 ne90TxN8uuUk3z5rJPneKo5vLhtm+PS7gwxvFE/wlpogvrptHPOX33soXqw7R/Pi+yKG1086 MT/18CHD91yYJ/kxpwHvDVBKd6iFOE2SoN24M1oaa67uo461fX2q1H0Dp6ALm/XIR8KxW7jH dj1aYppdzz175iWW2Jf9irPk/UHpkVRCsFlfcDV/PqCXgtXsT9yLYstyiWQDufqeVrzEMnYr 12i04/9G13H1Ddbljs+iL7a2LB+Qs2HcX6OvyAIkNaIVdchXk5AUr9LEhYXojsQmJ2hOhRw6 Gi+ixacx/bpQ2IRmHBEdiJUgxZey6ACXWk6pknTJ8R2IkxAKX5n/36NquUytSj4taI9GaU/E CboO5C8hFX6y7/YL0XI2RpUoHBGEY4L2/xRLfNakIOXUmLI/xhbxWvoq8vYPw8WGM/7WtU/8 9lYmRU5MRw7ed5f/Yplb+a4lpujnq459ZZPkHbbZuzsnKvT7mIOZxpOHvy16HhLQsC78TS+X 
6Fe3qjnQ1b/NVvsh6LeF/HNCb/6eA8yPl5zVoia7tuubXH3xmKNWOxDclBfup2xQZh2XrVaQ ulhVaBCh1ak+A8p1h94wAwAA X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org There is a case where total maps for its wait/event is so large in size. For instance, struct page for PG_locked and PG_writeback is the case. The additional memory size for the maps would be 'the # of pages * sizeof(struct dept_map)' if each struct page keeps its map all the way, which might be too big to accept. It'd be better to keep the minimum data in the case, which is timestamp called 'wgen' that Dept makes use of. So made Dept able to work with an external wgen when needed. Signed-off-by: Byungchul Park --- include/linux/dept.h | 18 ++++++++++++++---- include/linux/dept_sdt.h | 4 ++-- kernel/dependency/dept.c | 30 +++++++++++++++++++++--------- 3 files changed, 37 insertions(+), 15 deletions(-) diff --git a/include/linux/dept.h b/include/linux/dept.h index 0aa8d90558a9..ad32ea7b57bb 100644 --- a/include/linux/dept.h +++ b/include/linux/dept.h @@ -487,6 +487,13 @@ struct dept_task { bool in_sched; }; +/* + * for subsystems that requires compact use of memory e.g. struct page + */ +struct dept_ext_wgen{ + unsigned int wgen; +}; + #define DEPT_TASK_INITIALIZER(t) \ { \ .wait_hist = { { .wait = NULL, } }, \ @@ -518,6 +525,7 @@ extern void dept_task_exit(struct task_struct *t); extern void dept_free_range(void *start, unsigned int sz); extern void dept_map_init(struct dept_map *m, struct dept_key *k, int sub_u, const char *n); extern void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u, const char *n); +extern void dept_ext_wgen_init(struct dept_ext_wgen *ewg); extern void dept_map_copy(struct dept_map *to, struct dept_map *from); extern void dept_wait(struct dept_map *m, unsigned long w_f, unsigned long ip, const char *w_fn, int sub_l, long timeout); @@ -527,8 +535,8 @@ extern void dept_clean_stage(void); extern void dept_stage_event(struct task_struct *t, unsigned long ip); extern void dept_ecxt_enter(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *c_fn, const char *e_fn, int sub_l); extern bool dept_ecxt_holding(struct dept_map *m, unsigned long e_f); -extern void dept_request_event(struct dept_map *m); -extern void dept_event(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *e_fn); +extern void dept_request_event(struct dept_map *m, struct dept_ext_wgen *ewg); +extern void dept_event(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *e_fn, struct dept_ext_wgen *ewg); extern void dept_ecxt_exit(struct dept_map *m, unsigned long e_f, unsigned long ip); extern void dept_sched_enter(void); extern void dept_sched_exit(void); @@ -559,6 +567,7 @@ extern void dept_hardirqs_off_ip(unsigned long ip); struct dept_key { }; struct dept_map { }; struct dept_task { }; +struct dept_ext_wgen { }; #define DEPT_MAP_INITIALIZER(n, k) { } #define DEPT_TASK_INITIALIZER(t) { } @@ -571,6 +580,7 @@ struct dept_task { }; #define dept_free_range(s, sz) do { } while (0) #define dept_map_init(m, k, su, n) do { (void)(n); (void)(k); } while (0) #define dept_map_reinit(m, k, su, n) do { (void)(n); (void)(k); } while (0) +#define dept_ext_wgen_init(wg) do { } while (0) #define dept_map_copy(t, f) do { } while (0) #define dept_wait(m, w_f, ip, w_fn, sl, t) do { (void)(w_fn); } while (0) @@ -580,8 +590,8 @@ struct dept_task { }; #define dept_stage_event(t, ip) do { } while (0) #define dept_ecxt_enter(m, e_f, ip, c_fn, e_fn, sl) do { (void)(c_fn); 
(void)(e_fn); } while (0) #define dept_ecxt_holding(m, e_f) false -#define dept_request_event(m) do { } while (0) -#define dept_event(m, e_f, ip, e_fn) do { (void)(e_fn); } while (0) +#define dept_request_event(m, wg) do { } while (0) +#define dept_event(m, e_f, ip, e_fn, wg) do { (void)(e_fn); } while (0) #define dept_ecxt_exit(m, e_f, ip) do { } while (0) #define dept_sched_enter() do { } while (0) #define dept_sched_exit() do { } while (0) diff --git a/include/linux/dept_sdt.h b/include/linux/dept_sdt.h index 21fce525f031..8cdac7982036 100644 --- a/include/linux/dept_sdt.h +++ b/include/linux/dept_sdt.h @@ -24,7 +24,7 @@ #define sdt_wait_timeout(m, t) \ do { \ - dept_request_event(m); \ + dept_request_event(m, NULL); \ dept_wait(m, 1UL, _THIS_IP_, __func__, 0, t); \ } while (0) #define sdt_wait(m) sdt_wait_timeout(m, -1L) @@ -49,7 +49,7 @@ #define sdt_might_sleep_end() dept_clean_stage() #define sdt_ecxt_enter(m) dept_ecxt_enter(m, 1UL, _THIS_IP_, "start", "event", 0) -#define sdt_event(m) dept_event(m, 1UL, _THIS_IP_, __func__) +#define sdt_event(m) dept_event(m, 1UL, _THIS_IP_, __func__, NULL) #define sdt_ecxt_exit(m) dept_ecxt_exit(m, 1UL, _THIS_IP_) #else /* !CONFIG_DEPT */ #define sdt_map_init(m) do { } while (0) diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c index cdfda4acff58..335e5f67bf55 100644 --- a/kernel/dependency/dept.c +++ b/kernel/dependency/dept.c @@ -2230,6 +2230,11 @@ void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u, } EXPORT_SYMBOL_GPL(dept_map_reinit); +void dept_ext_wgen_init(struct dept_ext_wgen *ewg) +{ + WRITE_ONCE(ewg->wgen, 0U); +} + void dept_map_copy(struct dept_map *to, struct dept_map *from) { if (unlikely(!dept_working())) { @@ -2415,7 +2420,7 @@ static void __dept_wait(struct dept_map *m, unsigned long w_f, */ static void __dept_event(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *e_fn, - bool sched_map) + bool sched_map, unsigned int *wgp) { struct dept_class *c; struct dept_key *k; @@ -2437,14 +2442,14 @@ static void __dept_event(struct dept_map *m, unsigned long e_f, c = check_new_class(&m->map_key, k, sub_id(m, e), m->name, sched_map); if (c && add_ecxt(m, c, 0UL, NULL, e_fn, 0)) { - do_event(m, c, READ_ONCE(m->wgen), ip); + do_event(m, c, READ_ONCE(*wgp), ip); pop_ecxt(m, c); } exit: /* * Keep the map diabled until the next sleep. */ - WRITE_ONCE(m->wgen, 0U); + WRITE_ONCE(*wgp, 0U); } void dept_wait(struct dept_map *m, unsigned long w_f, @@ -2654,7 +2659,7 @@ void dept_stage_event(struct task_struct *t, unsigned long ip) if (!m.keys) goto exit; - __dept_event(&m, 1UL, ip, "try_to_wake_up", sched_map); + __dept_event(&m, 1UL, ip, "try_to_wake_up", sched_map, &m.wgen); exit: dept_exit(flags); } @@ -2833,10 +2838,11 @@ bool dept_ecxt_holding(struct dept_map *m, unsigned long e_f) } EXPORT_SYMBOL_GPL(dept_ecxt_holding); -void dept_request_event(struct dept_map *m) +void dept_request_event(struct dept_map *m, struct dept_ext_wgen *ewg) { unsigned long flags; unsigned int wg; + unsigned int *wgp; if (unlikely(!dept_working())) return; @@ -2849,32 +2855,38 @@ void dept_request_event(struct dept_map *m) */ flags = dept_enter_recursive(); + wgp = ewg ? &ewg->wgen : &m->wgen; + /* * Avoid zero wgen. 
diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c
index cdfda4acff58..335e5f67bf55 100644
--- a/kernel/dependency/dept.c
+++ b/kernel/dependency/dept.c
@@ -2230,6 +2230,11 @@ void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u,
 }
 EXPORT_SYMBOL_GPL(dept_map_reinit);
 
+void dept_ext_wgen_init(struct dept_ext_wgen *ewg)
+{
+	WRITE_ONCE(ewg->wgen, 0U);
+}
+
 void dept_map_copy(struct dept_map *to, struct dept_map *from)
 {
 	if (unlikely(!dept_working())) {
@@ -2415,7 +2420,7 @@ static void __dept_wait(struct dept_map *m, unsigned long w_f,
  */
 static void __dept_event(struct dept_map *m, unsigned long e_f,
			 unsigned long ip, const char *e_fn,
-			 bool sched_map)
+			 bool sched_map, unsigned int *wgp)
 {
 	struct dept_class *c;
 	struct dept_key *k;
@@ -2437,14 +2442,14 @@ static void __dept_event(struct dept_map *m, unsigned long e_f,
 	c = check_new_class(&m->map_key, k, sub_id(m, e), m->name, sched_map);
 
 	if (c && add_ecxt(m, c, 0UL, NULL, e_fn, 0)) {
-		do_event(m, c, READ_ONCE(m->wgen), ip);
+		do_event(m, c, READ_ONCE(*wgp), ip);
 		pop_ecxt(m, c);
 	}
exit:
 	/*
	 * Keep the map disabled until the next sleep.
	 */
-	WRITE_ONCE(m->wgen, 0U);
+	WRITE_ONCE(*wgp, 0U);
 }
 
 void dept_wait(struct dept_map *m, unsigned long w_f,
@@ -2654,7 +2659,7 @@ void dept_stage_event(struct task_struct *t, unsigned long ip)
 	if (!m.keys)
 		goto exit;
 
-	__dept_event(&m, 1UL, ip, "try_to_wake_up", sched_map);
+	__dept_event(&m, 1UL, ip, "try_to_wake_up", sched_map, &m.wgen);
exit:
 	dept_exit(flags);
 }
@@ -2833,10 +2838,11 @@ bool dept_ecxt_holding(struct dept_map *m, unsigned long e_f)
 }
 EXPORT_SYMBOL_GPL(dept_ecxt_holding);
 
-void dept_request_event(struct dept_map *m)
+void dept_request_event(struct dept_map *m, struct dept_ext_wgen *ewg)
 {
 	unsigned long flags;
 	unsigned int wg;
+	unsigned int *wgp;
 
 	if (unlikely(!dept_working()))
 		return;
@@ -2849,32 +2855,38 @@ void dept_request_event(struct dept_map *m)
	 */
 	flags = dept_enter_recursive();
 
+	wgp = ewg ? &ewg->wgen : &m->wgen;
+
 	/*
	 * Avoid zero wgen.
	 */
 	wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen);
-	WRITE_ONCE(m->wgen, wg);
+	WRITE_ONCE(*wgp, wg);
 
 	dept_exit_recursive(flags);
 }
 EXPORT_SYMBOL_GPL(dept_request_event);
 
 void dept_event(struct dept_map *m, unsigned long e_f,
-		unsigned long ip, const char *e_fn)
+		unsigned long ip, const char *e_fn,
+		struct dept_ext_wgen *ewg)
 {
 	struct dept_task *dt = dept_task();
 	unsigned long flags;
+	unsigned int *wgp;
 
 	if (unlikely(!dept_working()))
 		return;
 
+	wgp = ewg ? &ewg->wgen : &m->wgen;
+
 	if (dt->recursive) {
 		/*
		 * Dept won't work with this even though an event
		 * context has been requested. Don't confuse it when
		 * handling the event. Disable it until the next.
		 */
-		WRITE_ONCE(m->wgen, 0U);
+		WRITE_ONCE(*wgp, 0U);
 		return;
 	}
 
@@ -2883,7 +2895,7 @@ void dept_event(struct dept_map *m, unsigned long e_f,
 
 	flags = dept_enter();
 
-	__dept_event(m, e_f, ip, e_fn, false);
+	__dept_event(m, e_f, ip, e_fn, false, wgp);
 
 	dept_exit(flags);
 }
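Two details in dept_request_event() above deserve spelling out: a wgen
value of zero is reserved to mean "no event requested / map disabled",
and an external wgen, when supplied, simply substitutes for the
embedded m->wgen. A rough sketch of that logic, simplified from the
function above with the recursion guard and dept_working() check
dropped:

	#include <linux/atomic.h>
	#include <linux/dept.h>

	static atomic_t wgen;

	static unsigned int next_wgen(void)
	{
		/*
		 * atomic_inc_return() yields 0 once per 2^32 wraparound.
		 * Retry once so a live request is never confused with
		 * the reserved "disabled" value.
		 */
		return atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen);
	}

	static void request_event(struct dept_map *m, struct dept_ext_wgen *ewg)
	{
		/* Use the caller's external wgen if one was supplied. */
		unsigned int *wgp = ewg ? &ewg->wgen : &m->wgen;

		WRITE_ONCE(*wgp, next_wgen());
	}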
From patchwork Mon Aug 21 03:46:37 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 13359156
Return-Path: 
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
	aws-us-west-2-korg-lkml-1.web.codeaurora.org
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18])
	by smtp.lore.kernel.org (Postfix) with ESMTP id 760B7EE49A5
	for ; Mon, 21 Aug 2023 04:13:50 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S232959AbjHUENu (ORCPT );
	Mon, 21 Aug 2023 00:13:50 -0400
Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39664 "EHLO
	lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK)
	by vger.kernel.org with ESMTP id S230261AbjHUENt (ORCPT );
	Mon, 21 Aug 2023 00:13:49 -0400
Received: from invmail4.hynix.com (exvmail4.skhynix.com [166.125.252.92])
	by lindbergh.monkeyblade.net (Postfix) with ESMTP id ACA209B;
	Sun, 20 Aug 2023 21:13:26 -0700 (PDT)
X-AuditID: a67dfc5b-d6dff70000001748-7b-64e2ded70d01
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org,
	damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org,
	adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org,
	mingo@redhat.com, peterz@infradead.org, will@kernel.org,
	tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org,
	sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com,
	johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu,
	willy@infradead.org, david@fromorbit.com, amir73il@gmail.com,
	gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org,
	akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org,
	hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org,
	jglisse@redhat.com, dennis@kernel.org, cl@linux.com,
	penberg@kernel.org, rientjes@google.com, vbabka@suse.cz,
	ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com,
	linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz,
	jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org,
	djwong@kernel.org, dri-devel@lists.freedesktop.org,
	rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com,
	hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com,
	gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com,
	boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com,
	her0gyugyu@gmail.com
Subject: [RESEND PATCH v10 25/25] dept: Track the potential waits of
	PG_{locked,writeback}
Date: Mon, 21 Aug 2023 12:46:37 +0900
Message-Id: <20230821034637.34630-26-byungchul@sk.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230821034637.34630-1-byungchul@sk.com>
References: <20230821034637.34630-1-byungchul@sk.com>
X-CFilter-Loop: Reflected
Precedence: bulk
List-ID: 
X-Mailing-List: linux-block@vger.kernel.org

Currently, Dept tracks only the real waits on PG_{locked,writeback},
that is, the ones that actually went through __schedule(), in order to
avoid false positives. However, that limits its deadlock detection
capacity: there are still many more potential dependencies from waits
that have not happened yet but may happen in the future and cause a
deadlock.

So let Dept assume that whenever a PG_{locked,writeback} bit gets
cleared, there might be waiters on the bit to be woken up. Even though
false positives may increase with this more aggressive tracking, it's
worth doing because it's useful in practice. See the following link
for an instance:

   https://lore.kernel.org/lkml/1674268856-31807-1-git-send-email-byungchul.park@lge.com/

Signed-off-by: Byungchul Park
---
 include/linux/mm_types.h   |   3 +
 include/linux/page-flags.h | 112 +++++++++++++++++++++++++++++++++----
 include/linux/pagemap.h    |   7 ++-
 mm/filemap.c               |  11 +++-
 mm/mm_init.c               |   3 +
 5 files changed, 121 insertions(+), 15 deletions(-)
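To restate the idea with a sketch (illustrative only; these helpers are
hypothetical, and the real annotation sites are in the diff below): a
potential dependency exists as soon as one context can clear a bit that
another context may block on, even if no task has actually slept on it
yet. The three hooks introduced below encode exactly that pattern
around the raw bit operations:

	#include <linux/page-flags.h>
	#include <linux/sched.h>
	#include <linux/wait_bit.h>

	/* Setting the bit: a waiter may appear until it's cleared. */
	static void my_writeback_start(struct page *p)
	{
		set_bit(PG_writeback, &p->flags);
		dept_page_set_bit(p, PG_writeback);
	}

	/* Clearing the bit: the potential wake-up event. */
	static void my_writeback_end(struct page *p)
	{
		dept_page_clear_bit(p, PG_writeback);
		clear_bit(PG_writeback, &p->flags);
	}

	/* Waiting on the bit: the potential wait. */
	static void my_writeback_wait(struct page *p)
	{
		dept_page_wait_on_bit(p, PG_writeback);
		wait_on_bit(&p->flags, PG_writeback, TASK_UNINTERRUPTIBLE);
	}

With these in place, Dept sees the wait/event pair the first time the
code runs, before any task ever blocks on the bit.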
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 306a3d1a0fa6..ac5048b66e5c 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -19,6 +19,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 
@@ -228,6 +229,8 @@ struct page {
 #ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
 	int _last_cpupid;
 #endif
+	struct dept_ext_wgen PG_locked_wgen;
+	struct dept_ext_wgen PG_writeback_wgen;
 } _struct_page_alignment;
 
 /*
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 92a2063a0a23..d91e67ed194c 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -196,6 +196,50 @@ enum pageflags {
 
 #ifndef __GENERATING_BOUNDS_H
 
+#ifdef CONFIG_DEPT
+#include 
+#include 
+
+extern struct dept_map PG_locked_map;
+extern struct dept_map PG_writeback_map;
+
+/*
+ * Place the following annotations at the suitable points in code:
+ *
+ * Annotate dept_page_set_bit() around the first set_bit*()
+ * Annotate dept_page_clear_bit() around clear_bit*()
+ * Annotate dept_page_wait_on_bit() around wait_on_bit*()
+ */
+
+static inline void dept_page_set_bit(struct page *p, int bit_nr)
+{
+	if (bit_nr == PG_locked)
+		dept_request_event(&PG_locked_map, &p->PG_locked_wgen);
+	else if (bit_nr == PG_writeback)
+		dept_request_event(&PG_writeback_map, &p->PG_writeback_wgen);
+}
+
+static inline void dept_page_clear_bit(struct page *p, int bit_nr)
+{
+	if (bit_nr == PG_locked)
+		dept_event(&PG_locked_map, 1UL, _RET_IP_, __func__, &p->PG_locked_wgen);
+	else if (bit_nr == PG_writeback)
+		dept_event(&PG_writeback_map, 1UL, _RET_IP_, __func__, &p->PG_writeback_wgen);
+}
+
+static inline void dept_page_wait_on_bit(struct page *p, int bit_nr)
+{
+	if (bit_nr == PG_locked)
+		dept_wait(&PG_locked_map, 1UL, _RET_IP_, __func__, 0, -1L);
+	else if (bit_nr == PG_writeback)
+		dept_wait(&PG_writeback_map, 1UL, _RET_IP_, __func__, 0, -1L);
+}
+#else
+#define dept_page_set_bit(p, bit_nr)		do { } while (0)
+#define dept_page_clear_bit(p, bit_nr)		do { } while (0)
+#define dept_page_wait_on_bit(p, bit_nr)	do { } while (0)
+#endif
+
 #ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
 DECLARE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key);
 
@@ -377,44 +421,88 @@ static __always_inline int Page##uname(struct page *page)	\
 
 #define SETPAGEFLAG(uname, lname, policy)				\
 static __always_inline							\
 void folio_set_##lname(struct folio *folio)				\
-{ set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); }		\
+{									\
+	set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy));	\
+	dept_page_set_bit(&folio->page, PG_##lname);			\
+}									\
 static __always_inline void SetPage##uname(struct page *page)		\
-{ set_bit(PG_##lname, &policy(page, 1)->flags); }
+{									\
+	set_bit(PG_##lname, &policy(page, 1)->flags);			\
+	dept_page_set_bit(page, PG_##lname);				\
+}
 
 #define CLEARPAGEFLAG(uname, lname, policy)				\
 static __always_inline							\
 void folio_clear_##lname(struct folio *folio)				\
-{ clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); }		\
+{									\
+	clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy));	\
+	dept_page_clear_bit(&folio->page, PG_##lname);			\
+}									\
 static __always_inline void ClearPage##uname(struct page *page)	\
-{ clear_bit(PG_##lname, &policy(page, 1)->flags); }
+{									\
+	clear_bit(PG_##lname, &policy(page, 1)->flags);			\
+	dept_page_clear_bit(page, PG_##lname);				\
+}
 
 #define __SETPAGEFLAG(uname, lname, policy)				\
 static __always_inline							\
 void __folio_set_##lname(struct folio *folio)				\
-{ __set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); }		\
+{									\
+	__set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy));	\
+	dept_page_set_bit(&folio->page, PG_##lname);			\
+}									\
 static __always_inline void __SetPage##uname(struct page *page)	\
-{ __set_bit(PG_##lname, &policy(page, 1)->flags); }
+{									\
+	__set_bit(PG_##lname, &policy(page, 1)->flags);			\
+	dept_page_set_bit(page, PG_##lname);				\
+}
 
 #define __CLEARPAGEFLAG(uname, lname, policy)				\
 static __always_inline							\
 void __folio_clear_##lname(struct folio *folio)			\
-{ __clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); }	\
+{									\
+	__clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy));	\
+	dept_page_clear_bit(&folio->page, PG_##lname);			\
+}									\
 static __always_inline void __ClearPage##uname(struct page *page)	\
-{ __clear_bit(PG_##lname, &policy(page, 1)->flags); }
+{									\
+	__clear_bit(PG_##lname, &policy(page, 1)->flags);		\
+	dept_page_clear_bit(page, PG_##lname);				\
+}
 
 #define TESTSETFLAG(uname, lname, policy)				\
 static __always_inline							\
 bool folio_test_set_##lname(struct folio *folio)			\
-{ return test_and_set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \
+{									\
+	bool ret = test_and_set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy));\
+	if (!ret)							\
+		dept_page_set_bit(&folio->page, PG_##lname);		\
+	return ret;							\
+}									\
 static __always_inline int TestSetPage##uname(struct page *page)	\
-{ return test_and_set_bit(PG_##lname, &policy(page, 1)->flags); }
+{									\
+	bool ret = test_and_set_bit(PG_##lname, &policy(page, 1)->flags);\
+	if (!ret)							\
+		dept_page_set_bit(page, PG_##lname);			\
+	return ret;							\
+}
 
 #define TESTCLEARFLAG(uname, lname, policy)				\
 static __always_inline							\
 bool folio_test_clear_##lname(struct folio *folio)			\
-{ return test_and_clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \
+{									\
+	bool ret = test_and_clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy));\
+	if (ret)							\
+		dept_page_clear_bit(&folio->page, PG_##lname);		\
+	return ret;							\
+}									\
 static __always_inline int TestClearPage##uname(struct page *page)	\
-{ return test_and_clear_bit(PG_##lname, &policy(page, 1)->flags); }
+{									\
+	bool ret = test_and_clear_bit(PG_##lname, &policy(page, 1)->flags);\
+	if (ret)							\
+		dept_page_clear_bit(page, PG_##lname);			\
+	return ret;							\
+}
 
 #define PAGEFLAG(uname, lname, policy)					\
 	TESTPAGEFLAG(uname, lname, policy)				\
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index a56308a9d1a4..a88e2430f415 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -915,7 +915,12 @@ void folio_unlock(struct folio *folio);
  */
 static inline bool folio_trylock(struct folio *folio)
 {
-	return likely(!test_and_set_bit_lock(PG_locked, folio_flags(folio, 0)));
+	bool ret = !test_and_set_bit_lock(PG_locked, folio_flags(folio, 0));
+
+	if (ret)
+		dept_page_set_bit(&folio->page, PG_locked);
+
+	return likely(ret);
 }
 
 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index eed64dc88e43..f05208bb50dc 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1101,6 +1101,7 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync,
 		if (flags & WQ_FLAG_CUSTOM) {
 			if (test_and_set_bit(key->bit_nr, &key->folio->flags))
 				return -1;
+			dept_page_set_bit(&key->folio->page, key->bit_nr);
 			flags |= WQ_FLAG_DONE;
 		}
 	}
@@ -1210,6 +1211,7 @@ static inline bool folio_trylock_flag(struct folio *folio, int bit_nr,
 	if (wait->flags & WQ_FLAG_EXCLUSIVE) {
 		if (test_and_set_bit(bit_nr, &folio->flags))
 			return false;
+		dept_page_set_bit(&folio->page, bit_nr);
 	} else if (test_bit(bit_nr, &folio->flags))
 		return false;
 
@@ -1220,8 +1222,10 @@ static inline bool folio_trylock_flag(struct folio *folio, int bit_nr,
 /* How many times do we accept lock stealing from under a waiter? */
 int sysctl_page_lock_unfairness = 5;
 
-static struct dept_map __maybe_unused PG_locked_map = DEPT_MAP_INITIALIZER(PG_locked_map, NULL);
-static struct dept_map __maybe_unused PG_writeback_map = DEPT_MAP_INITIALIZER(PG_writeback_map, NULL);
+struct dept_map __maybe_unused PG_locked_map = DEPT_MAP_INITIALIZER(PG_locked_map, NULL);
+struct dept_map __maybe_unused PG_writeback_map = DEPT_MAP_INITIALIZER(PG_writeback_map, NULL);
+EXPORT_SYMBOL(PG_locked_map);
+EXPORT_SYMBOL(PG_writeback_map);
 
 static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
					int state, enum behavior behavior)
@@ -1234,6 +1238,7 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
 	unsigned long pflags;
 	bool in_thrashing;
 
+	dept_page_wait_on_bit(&folio->page, bit_nr);
 	if (bit_nr == PG_locked)
 		sdt_might_sleep_start(&PG_locked_map);
 	else if (bit_nr == PG_writeback)
@@ -1331,6 +1336,7 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
 		wait->flags |= WQ_FLAG_DONE;
 		break;
 	}
+	dept_page_set_bit(&folio->page, bit_nr);
 
 	/*
	 * If a signal happened, this 'finish_wait()' may remove the last
@@ -1538,6 +1544,7 @@ void folio_unlock(struct folio *folio)
 	BUILD_BUG_ON(PG_waiters != 7);
 	BUILD_BUG_ON(PG_locked > 7);
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+	dept_page_clear_bit(&folio->page, PG_locked);
 	if (clear_bit_unlock_is_negative_byte(PG_locked, folio_flags(folio, 0)))
 		folio_wake_bit(folio, PG_locked);
 }
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 7f7f9c677854..a339f0cbe1b2 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -26,6 +26,7 @@
 #include 
 #include 
 #include 
+#include 
 #include "internal.h"
 #include "slab.h"
 #include "shuffle.h"
@@ -558,6 +559,8 @@ static void __meminit __init_single_page(struct page *page, unsigned long pfn,
 	page_mapcount_reset(page);
 	page_cpupid_reset_last(page);
 	page_kasan_tag_reset(page);
+	dept_ext_wgen_init(&page->PG_locked_wgen);
+	dept_ext_wgen_init(&page->PG_writeback_wgen);
 	INIT_LIST_HEAD(&page->lru);
 #ifdef WANT_PAGE_VIRTUAL