From patchwork Wed Jan 24 11:59:12 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529095 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 2EA7B60DD7; Wed, 24 Jan 2024 11:59:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097601; cv=none; b=LFFLGPSlqiLCq3KdlOeEed6BebJ+crXSlhTzPW9e1h2LORu2W2MzrUAJs+9nDOJbCMSu/dVqhqdkEmkSDe+y5z/RB4CugMslbmApfGVtWnvikSElOisJlHJgu15/qkKl53t+MD5ObJqywTdYOlFwoYoUuXMP8Ngw7jwyi0FzuVM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097601; c=relaxed/simple; bh=3z6wcnzZigmy1AnwKW6EWYXgb3eqsvFGY6rKaYlxPHE=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=ekfg+IvEPUpwhPYg7+kM8VobWCLZ2ApkQAVSaIDG77RoExaAbqcl1Y7bsXFYC4rZk0eqXLSpGOzgGFkcJO9nqwi2Lr7PTZnsj0WDv12MErKEu3ud6XQzu0C4Oe8k7iI8fy3g50hLbf+tg5L3dLv40t9nKWuoEPIeonfa86LZkow= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-51-65b0fbb4aca7 From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 01/26] llist: Move llist_{head,node} definition to types.h Date: Wed, 24 Jan 2024 20:59:12 +0900 Message-Id: <20240124115938.80132-2-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSW0wTaRTH/b6Z+WboUjMpGkd8QBurCcYLBs1xo4ZkH3Z40JhojJFEbWSU slxMy3WzRkQEF0RBU1CsWkBrU6Bgi9EVSipqoRKhYsN2CSASd5EVRMGycvHSsvpy8ss5//xy Hv4cpWhiwjlNSpqkTVEnKYmMlo2FVq5tnG2QNkx4NkLp2Q3g/3CGBkN9LQGPtQZBbeNJDCOP f4Y/p0YRzD7toqBc70FQ+bKfgkbXAAKHOZfA81cLwesfJ+DWFxE4VV1P4NmbOQx9ZRcw1Nh2 
QEdJFQbn9DAN5SMErpSfwoHxGsO0ycKCKUcFQ+YKFuZeRoF7oIcBR+8auHytj0Czw02D694Q huf3DQQGar8w0OFqp8FTWsxA3dsqAm+mTBSY/OMsdDuNGBryAqL8yc8MtBU7MeTfuI3B+1cT gpYzgxhstT0EHvpHMdhtegpmbj1GMHRujIXTZ6dZuHLyHIKi02U0dH1qYyCvbxPMfjSQmB/F h6PjlJhnzxQdU0ZafFIliH9U9LNiXksvKxpt6aLdHClWN49gsXLCz4g2y+9EtE1cYMXCMS8W 33Z2smL7pVlafOUtx7uW7ZdtjZeSNBmSdv32Q7KE/M5r7LGrsizjzTskB7VwhSiEE/ho4WJu G/OdHXWTJMiEXy34fNNUkBfxywV78T+BjIyj+IIfBPO7p/OhMH6n4Cgax0GmeZXwb1c3HWQ5 v0m47rLS/0sjhJoG57wohN8s1F3und8rAplBy3k2KBX4ghDB0lH47YulwgOzjy5BciNaYEEK TUpGslqTFL0uITtFk7XucGqyDQUqZTo+F3cPTXh2tyKeQ8pQeYylXlIw6gxddnIrEjhKuUju W2qVFPJ4dfavkjb1oDY9SdK1omUcrVwi3ziVGa/gj6rTpF8k6Zik/X7FXEh4DtI7dKuGS1Kl 9d7ImYN3PL7f7qpLa6L3xR7ZfTdu8knEgm5jxEwmdaDf2h2amL1yr8OZ82BP2QmDyur2Vxxe kS6b+Xz/aGxsXHJ4+DYhNO3D4iXmT6rzBTsMivd2fWOYzrXlp7+JL4I9kFgRo1S5Xz/72Hyo 57+hR++LTuxvejGcu0ZJ6xLUUZGUVqf+Ctduq0FOAwAA X-Brightmail-Tracker: H4sIAAAAAAAAAzWSf0yMcRzH+36f5/k+T8fZ08k8YmM3ZCEZtY9l1j94Zqtl/vBjpBsP3frB 7kgZW7mEcgiJujjhtOunp5Afl1up5EelQlrdqjVEydTFuURl/vnstff7vddfH45S5TI+nDbu gKSL08SoiYJWhAUblpa7S6UAxzUfyDgdAM7hkzSYSgoJNBUXICgsT8bQV7Me3o30I3C/aqQg K7MJwfXuTgrKax0IbPnHCLT0ToNW5yCB+sx0AoYbJQRefxnF0HHpPIYCORRenMvDYHd9pCGr j0BOlgGPn08YXBYrC5akBdCTn83CaPdyqHe8ZaA6t54BW/tiuHK1g8BjWz0NtRU9GFoemgg4 Cv8w8KL2GQ1NGUYGir7mEfgyYqHA4hxkodluxlCaMm5LHRpjoM5ox5B68w6G1vePEFSe7MIg F74lUO3sx1AmZ1Lw63YNgp4zAywcP+1iISf5DIL045doaPxdx0BKRyC4f5pISLBY3T9IiSll h0TbiJkWn+cJ4oPsTlZMqWxnRbN8UCzL9xNvPO7D4vXvTkaUraeIKH8/z4ppA61Y/NrQwIrP Lrtpsbc1C4fP2aZYvVuK0cZLumVrIhVRqQ1X2f25igTzrbskCVVyaciTE/iVgq1oiEww4X2F tjYXNcHe/DyhzPiBSUMKjuJPTBHyv72aHE3nwwRb+iCeYJpfIHxubKYnWMkHCtdqi+l/0rlC Qal9UuTJBwlFV9onc9X4pst6lj2HFGbkYUXe2rj4WI02JtBfHx2VGKdN8N+1L1ZG409jOTqa UYGGW9ZXIZ5D6qnKEGuJpGI08frE2CokcJTaW9k2q1hSKXdrEg9Lun07dQdjJH0Vms3R6pnK DZulSBW/V3NAipak/ZLuf4s5T58k9HLh/HivwPA39oiho3cNQyjNfmes87066GlzQrR84fKn JxH3N9VVrPJrCl203fUwOfuHh3H0jdc9khS5TnZEtPfO2KHBz71ClbZ5Fz1ig/3pnYt7jKYj 4dPdYSh0OGjJqoB3vw3JAVNC9qz1tW+92d218VjQFm2fqeh1dk7NqUVjK9S0Pkqz3I/S6TV/ AentXzAwAwAA X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: llist_head and llist_node can be used by very primitives. For example, Dept for tracking dependency uses llist things in its header. To avoid header dependency, move those to types.h. 
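Purely as illustration (not part of this patch), after the move a low-level header can embed the list head/node types while relying only on <linux/types.h>, instead of dragging <linux/llist.h> and everything it includes into headers such as sched.h:

	/* hypothetical-example.h -- names made up for illustration */
	#include <linux/types.h>	/* now provides llist_head/llist_node */

	struct foo_percpu_pool {
		struct llist_head free_objs;	/* no <linux/llist.h> needed here */
	};

The llist API itself (llist_add(), llist_del_first(), ...) remains declared only in <linux/llist.h>; only the two type definitions move.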
Signed-off-by: Byungchul Park --- include/linux/llist.h | 8 -------- include/linux/types.h | 8 ++++++++ 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/include/linux/llist.h b/include/linux/llist.h index 2c982ff7475a..3ac071857612 100644 --- a/include/linux/llist.h +++ b/include/linux/llist.h @@ -53,14 +53,6 @@ #include #include -struct llist_head { - struct llist_node *first; -}; - -struct llist_node { - struct llist_node *next; -}; - #define LLIST_HEAD_INIT(name) { NULL } #define LLIST_HEAD(name) struct llist_head name = LLIST_HEAD_INIT(name) diff --git a/include/linux/types.h b/include/linux/types.h index 253168bb3fe1..10d94b7f9e5d 100644 --- a/include/linux/types.h +++ b/include/linux/types.h @@ -199,6 +199,14 @@ struct hlist_node { struct hlist_node *next, **pprev; }; +struct llist_head { + struct llist_node *first; +}; + +struct llist_node { + struct llist_node *next; +}; + struct ustat { __kernel_daddr_t f_tfree; #ifdef CONFIG_ARCH_32BIT_USTAT_F_TINODE From patchwork Wed Jan 24 11:59:13 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529100 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 2EA4B60DC4; Wed, 24 Jan 2024 11:59:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097605; cv=none; b=nbTQOmjACG5HiPC2FfFY2rSCoDN77x2E2YyUH/4dUkUHDaYWNg+i/OTfAmjrVQR47OpoUystYwzcEDag73Em2lVvZEDJh1msT7fWraPylnn8u6iyxf/bo52T4QlDTxh2RWnYwH1evEuN5a+FqrGfRt4w147RREiLNorWXQe+rUA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097605; c=relaxed/simple; bh=asSKRbIQNcB08UYPRwv3bUSMHrCIvUqTNtShomc8xTg=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=MbfEn/bs9NGZfzbc/BFBjyLDV2GL9OAYL4vGZq/ul50oWfZiYKGmWwq/psns3kz6RVHJ+FuHrKi3mMs0heMyt5nXCjJ6AK2Ij9KGF3h/f3BFFDJNaDUCllHXcwlDxhnyMCoKsLQtWOY8bBHI6TtHQ6Pdz8WlbPK/yqG9A9ocHuM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-61-65b0fbb469ad From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, 
hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 02/26] dept: Implement Dept(Dependency Tracker) Date: Wed, 24 Jan 2024 20:59:13 +0900 Message-Id: <20240124115938.80132-3-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSe0hTcRSA+933VovrCrxlUayiF/mojEOEChFdiiAKDeoPHXnJ0Xyw2cyg 0Fy+llaCminli7nmcjonqLUyrdUSc+UwFZMSsURNs7ayaTat/jl8cL7z/XUYXGoj1zKKhGRB lSBXyigxIZ5cUbGryVsvBLfki+DW9WBwf88moMxsosBZV4vAZE3HYOz5YXjnmUDg7erGobjQ iaDi43scrPYhBDbDVQp6RlaCyz1FgaNQR0FGlZmCN+NzGAwWFWBQazkGnTcrMWib/URA8RgF pcUZmG98xmBWb6RBn7YFhg13aJj7GAKOoV4SbAM7oeTuIAWPbA4C7M3DGPS0llEwZFogodP+ kgDnrTwSHnyppGDco8dB756i4W1bOQb1Wl8o89tvEl7ktWGQWd2Agav/IYLH2R8wsJh6Kehw T2DQaCnE4VfNcwTD+ZM0XLs+S0Npej4C3bUiArrnX5CgHQwF788yKmI/3zExhfPaxhTe5ikn +FeVHN9y5z3Nax8P0Hy55QLfaNjBVz0aw/iKGTfJW4w5FG+ZKaD53EkXxn95/ZrmX972EvyI qxg7HnBafCBWUCo0giooLEYcN+7pJ5PmpsiLHpODSEMLdiIXiRiO3ct1V2eR/7n0bTq1yBS7 levrm8UXeTW7kWvMG/U5YgZns5ZzhumuJWkVe5CredKxFCLYLZyuxo0WWcKGcmN3rfTf6Aau tr5tKSRi93EPSgaWfKnP+WC8QS9GOTZDxH01f/13sIZ7augjbiJJOVpmRFJFgiZerlDuDYxL TVBcDDybGG9BvqfSX54704xmnCfbEcsg2QpJhNEsSEm5Rp0a3444BpetlvStqROkklh56iVB lRituqAU1O0ogCFk/pLdnpRYKXtOniycF4QkQfV/izGitWnoFBx1dTaVaKxh2o3RDwdOt/ol hmy3BIFZV3SofzoqLqLnlNeUmpR4IiaSmxhlhqvC163PyV7lFxhRvX3Pmyv9z8Ku1gffQ5E3 jJvtm6YDdsueaqyfGsKvhByx/9ApUxz3t+nQ/OcagjnOKMw5FcFq/1DndFSUf0H3wVb6RHun jFDHyUN24Cq1/A/ei8dFUAMAAA== X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0iTYRSAe7+709XHkvxSuzAqo4sVpBxIIovw7SbVnyiKGvWRQ7diM8so 8DLvl1QyK02Xxpxzqc396LYaStYSL+UqMxMbUoqa3bY0Z6VGfw4PnOc8vw5HyiroQE6pjhc1 akWcnJFQkuhNqWutkw3ield7CBTmrgf3j0wKyurNDHTW1SIwW5MJGHoSBW88Iwgm2zpIKCnu RHDzw3sSrC19CGzGFAa6BuaC0z3GgKM4h4HUqnoGXgx7Cei9UkRArWUPtBZUEmCf+ERByRAD pSWpxPQYJGDCYGLBkLQcXMbrLHg/bABH32samm84aLD1rIZr5b0MPLQ5KGi56yKg634ZA33m PzS0tjyjoLMwj4bbnysZGPYYSDC4x1h4adcT0KCbrqV//03D0zw7Aem37hDgfPsAwaPMfgIs 5tcMNLtHCGi0FJPwq/oJAlf+KAtpuRMslCbnI8hJu0JBx9RTGnS9YTA5XsZs2YSbR8ZIrGs8 i20ePYWfVwr43vX3LNY96mGx3nIGNxpX4aqHQwS++c1NY4spi8GWb0Uszh51EvhzezuLn12d pPCAs4TYG3xIEnFCjFMmiJp1m49JYoY9b+nT3jH6nMfsoJLQnxYqG/lwAr9RKH2ZzMwww4cI 3d0T5Az780uFxryPdDaScCSf4SsYv7TNSvP5bUL14+bZY4pfLuRUu9EMS/kwYajcyv6LLhFq G+yzIR8+XLh9rWfWl007/aZLbAGS6NEcE/JXqhNUCmVcWKg2NiZRrTwXevyUyoKm38Zw0Vt4 F/3oimpCPIfkftItpnpRRisStImqJiRwpNxf2r2wTpRJTygSz4uaU0c1Z+JEbRMK4ih5gHTn AfGYjD+piBdjRfG0qPm/JTifwCT0wjgVgFKW0DULkkNbnRAWFZjnurDgy45lW0n19vne/MP5 Rp3dtyqtIGbX193bD65J+Xk1129qoOhg2YUOTqJnswr6M/GBwH2R4T6JihqV7bsqYtEDs2n/ 5YDgIyve7ZgX/apv0FobKXPX7B6VL67Qyge/fgzaudK7b3xlRXD84gw5pY1RbFhFarSKv+tX z7AyAwAA X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: CURRENT STATUS -------------- Lockdep tracks acquisition order of locks in order to detect deadlock, and IRQ and IRQ enable/disable state as well to take accident acquisitions into account. Lockdep should be turned off once it detects and reports a deadlock since the data structure and algorithm are not reusable after detection because of the complex design. PROBLEM ------- *Waits* and their *events* that never reach eventually cause deadlock. 
However, Lockdep is only interested in lock acquisition order, so users are forced to emulate lock acquisition even for plain waits and events that have nothing to do with a real lock.

Even worse, nobody likes Lockdep's false positive reports, because a single one stops Lockdep and hides further reports that might be more valuable. That's why kernel developers are so sensitive to Lockdep's false positives.

Besides that, because it tracks acquisition order, Lockdep cannot correctly deal with read locks and cross-events, e.g. wait_for_completion()/complete(), for deadlock detection. Lockdep is no longer a good tool for that purpose.

SOLUTION
--------
Again, *waits* and *events* that are never reached eventually cause deadlock.

The new solution, Dept (DEPendency Tracker), focuses on the waits and events themselves. Dept tracks waits and events and reports if any event can never be reached.

Dept:

   . works with read locks in the right way,
   . works with any wait and event, i.e. cross-events,
   . continues to work even after reporting multiple times,
   . provides simple and intuitive APIs,
   . does exactly what a dependency checker should do.

Q & A
-----
Q. Is this the first attempt ever to address the problem?
A. No. The cross-release feature (b09be676e0ff2 "locking/lockdep: Implement the 'crossrelease' feature") addressed it years ago as a Lockdep extension. It was merged but reverted shortly afterwards because:

   Cross-release started to report valuable hidden problems, but it also started to produce false positive reports. And nobody likes Lockdep's false positive reports, since they stop Lockdep and prevent further real problems from being reported.

Q. Why was Dept not developed as an extension of Lockdep?
A. Lockdep certainly embodies all the effort great developers have put in over a long time, which is why it is so stable. But I had to design and implement Dept from scratch for the following reasons:

   1) Lockdep was designed to track lock acquisition order. Its APIs and implementation do not fit the wait-event model.
   2) Lockdep is turned off on any detection, including false positives. That is terrible and prevents building any extension for stronger detection on top of it.

Q. Do you intend to totally replace Lockdep?
A. No. Lockdep also checks whether lock usage is correct. Of course the dependency check routine should eventually be replaced, but the other functionality should stay.

Q. Do you mean the dependency check routine should be replaced right away?
A. No. I admit Lockdep is stable enough thanks to the great effort kernel developers have made. Both Lockdep and Dept should stay in the kernel until Dept is considered stable.

Q. Stronger detection capability tends to produce more false positive reports, which was a big problem when cross-release was introduced. Is that okay with Dept?
A. Yes. Dept allows multiple reports thanks to its simple and fairly generalized design. Of course false positive reports still need to be fixed, but they are no longer as critical a problem as they used to be.
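To make the wait/event view concrete, here is a minimal, made-up illustration (not taken from this series) of a deadlock that involves a completion rather than a second lock, which a pure lock-acquisition-order tracker cannot model:

	static DEFINE_MUTEX(m);
	static DECLARE_COMPLETION(c);

	/* thread A */
	mutex_lock(&m);
	wait_for_completion(&c);	/* wait for the event ... */
	mutex_unlock(&m);

	/* thread B */
	mutex_lock(&m);			/* ... but the event is only */
	complete(&c);			/* produced after taking the */
	mutex_unlock(&m);		/* same mutex */

Thread A holds m while waiting for c, and thread B can only complete() c after acquiring m, so neither makes progress. Seen as waits and events, "the completion event depends on m being released" and "m's release depends on the completion event" form a circle that a wait/event tracker can report, even though only one lock is involved.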
Signed-off-by: Byungchul Park --- include/linux/dept.h | 567 ++++++ include/linux/hardirq.h | 3 + include/linux/sched.h | 3 + init/init_task.c | 2 + init/main.c | 2 + kernel/Makefile | 1 + kernel/dependency/Makefile | 3 + kernel/dependency/dept.c | 2966 +++++++++++++++++++++++++++++++ kernel/dependency/dept_hash.h | 10 + kernel/dependency/dept_object.h | 13 + kernel/exit.c | 1 + kernel/fork.c | 2 + kernel/module/main.c | 4 + kernel/sched/core.c | 10 + lib/Kconfig.debug | 27 + lib/locking-selftest.c | 2 + 16 files changed, 3616 insertions(+) create mode 100644 include/linux/dept.h create mode 100644 kernel/dependency/Makefile create mode 100644 kernel/dependency/dept.c create mode 100644 kernel/dependency/dept_hash.h create mode 100644 kernel/dependency/dept_object.h diff --git a/include/linux/dept.h b/include/linux/dept.h new file mode 100644 index 000000000000..c6e2291dd843 --- /dev/null +++ b/include/linux/dept.h @@ -0,0 +1,567 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * DEPT(DEPendency Tracker) - runtime dependency tracker + * + * Started by Byungchul Park : + * + * Copyright (c) 2020 LG Electronics, Inc., Byungchul Park + */ + +#ifndef __LINUX_DEPT_H +#define __LINUX_DEPT_H + +#ifdef CONFIG_DEPT + +#include + +struct task_struct; + +#define DEPT_MAX_STACK_ENTRY 16 +#define DEPT_MAX_WAIT_HIST 64 +#define DEPT_MAX_ECXT_HELD 48 + +#define DEPT_MAX_SUBCLASSES 16 +#define DEPT_MAX_SUBCLASSES_EVT 2 +#define DEPT_MAX_SUBCLASSES_USR (DEPT_MAX_SUBCLASSES / DEPT_MAX_SUBCLASSES_EVT) +#define DEPT_MAX_SUBCLASSES_CACHE 2 + +#define DEPT_SIRQ 0 +#define DEPT_HIRQ 1 +#define DEPT_IRQS_NR 2 +#define DEPT_SIRQF (1UL << DEPT_SIRQ) +#define DEPT_HIRQF (1UL << DEPT_HIRQ) + +struct dept_ecxt; +struct dept_iecxt { + struct dept_ecxt *ecxt; + int enirq; + /* + * for preventing to add a new ecxt + */ + bool staled; +}; + +struct dept_wait; +struct dept_iwait { + struct dept_wait *wait; + int irq; + /* + * for preventing to add a new wait + */ + bool staled; + bool touched; +}; + +struct dept_class { + union { + struct llist_node pool_node; + struct { + /* + * reference counter for object management + */ + atomic_t ref; + + /* + * unique information about the class + */ + const char *name; + unsigned long key; + int sub_id; + + /* + * for BFS + */ + unsigned int bfs_gen; + int bfs_dist; + struct dept_class *bfs_parent; + + /* + * for hashing this object + */ + struct hlist_node hash_node; + + /* + * for linking all classes + */ + struct list_head all_node; + + /* + * for associating its dependencies + */ + struct list_head dep_head; + struct list_head dep_rev_head; + + /* + * for tracking IRQ dependencies + */ + struct dept_iecxt iecxt[DEPT_IRQS_NR]; + struct dept_iwait iwait[DEPT_IRQS_NR]; + + /* + * classified by a map embedded in task_struct, + * not an explicit map + */ + bool sched_map; + }; + }; +}; + +struct dept_key { + union { + /* + * Each byte-wise address will be used as its key. + */ + char base[DEPT_MAX_SUBCLASSES]; + + /* + * for caching the main class pointer + */ + struct dept_class *classes[DEPT_MAX_SUBCLASSES_CACHE]; + }; +}; + +struct dept_map { + const char *name; + struct dept_key *keys; + + /* + * subclass that can be set from user + */ + int sub_u; + + /* + * It's local copy for fast access to the associated classes. + * Also used for dept_key for static maps. 
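+	 * (When no external key is supplied, i.e. ->keys is NULL, the
+	 * address of this embedded ->map_key itself serves as the key.)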
+ */ + struct dept_key map_key; + + /* + * wait timestamp associated to this map + */ + unsigned int wgen; + + /* + * whether this map should be going to be checked or not + */ + bool nocheck; +}; + +#define DEPT_MAP_INITIALIZER(n, k) \ +{ \ + .name = #n, \ + .keys = (struct dept_key *)(k), \ + .sub_u = 0, \ + .map_key = { .classes = { NULL, } }, \ + .wgen = 0U, \ + .nocheck = false, \ +} + +struct dept_stack { + union { + struct llist_node pool_node; + struct { + /* + * reference counter for object management + */ + atomic_t ref; + + /* + * backtrace entries + */ + unsigned long raw[DEPT_MAX_STACK_ENTRY]; + int nr; + }; + }; +}; + +struct dept_ecxt { + union { + struct llist_node pool_node; + struct { + /* + * reference counter for object management + */ + atomic_t ref; + + /* + * function that entered to this ecxt + */ + const char *ecxt_fn; + + /* + * event function + */ + const char *event_fn; + + /* + * associated class + */ + struct dept_class *class; + + /* + * flag indicating which IRQ has been + * enabled within the event context + */ + unsigned long enirqf; + + /* + * where the IRQ-enabled happened + */ + unsigned long enirq_ip[DEPT_IRQS_NR]; + struct dept_stack *enirq_stack[DEPT_IRQS_NR]; + + /* + * where the event context started + */ + unsigned long ecxt_ip; + struct dept_stack *ecxt_stack; + + /* + * where the event triggered + */ + unsigned long event_ip; + struct dept_stack *event_stack; + }; + }; +}; + +struct dept_wait { + union { + struct llist_node pool_node; + struct { + /* + * reference counter for object management + */ + atomic_t ref; + + /* + * function causing this wait + */ + const char *wait_fn; + + /* + * the associated class + */ + struct dept_class *class; + + /* + * which IRQ the wait was placed in + */ + unsigned long irqf; + + /* + * where the IRQ wait happened + */ + unsigned long irq_ip[DEPT_IRQS_NR]; + struct dept_stack *irq_stack[DEPT_IRQS_NR]; + + /* + * where the wait happened + */ + unsigned long wait_ip; + struct dept_stack *wait_stack; + + /* + * whether this wait is for commit in scheduler + */ + bool sched_sleep; + }; + }; +}; + +struct dept_dep { + union { + struct llist_node pool_node; + struct { + /* + * reference counter for object management + */ + atomic_t ref; + + /* + * key data of dependency + */ + struct dept_ecxt *ecxt; + struct dept_wait *wait; + + /* + * This object can be referred without dept_lock + * held but with IRQ disabled, e.g. for hash + * lookup. So deferred deletion is needed. + */ + struct rcu_head rh; + + /* + * for BFS + */ + struct list_head bfs_node; + + /* + * for hashing this object + */ + struct hlist_node hash_node; + + /* + * for linking to a class object + */ + struct list_head dep_node; + struct list_head dep_rev_node; + }; + }; +}; + +struct dept_hash { + /* + * hash table + */ + struct hlist_head *table; + + /* + * size of the table e.i. 
2^bits + */ + int bits; +}; + +struct dept_pool { + const char *name; + + /* + * object size + */ + size_t obj_sz; + + /* + * the number of the static array + */ + atomic_t obj_nr; + + /* + * offset of ->pool_node + */ + size_t node_off; + + /* + * pointer to the pool + */ + void *spool; + struct llist_head boot_pool; + struct llist_head __percpu *lpool; +}; + +struct dept_ecxt_held { + /* + * associated event context + */ + struct dept_ecxt *ecxt; + + /* + * unique key for this dept_ecxt_held + */ + struct dept_map *map; + + /* + * class of the ecxt of this dept_ecxt_held + */ + struct dept_class *class; + + /* + * the wgen when the event context started + */ + unsigned int wgen; + + /* + * subclass that only works in the local context + */ + int sub_l; +}; + +struct dept_wait_hist { + /* + * associated wait + */ + struct dept_wait *wait; + + /* + * unique id of all waits system-wise until wrapped + */ + unsigned int wgen; + + /* + * local context id to identify IRQ context + */ + unsigned int ctxt_id; +}; + +struct dept_task { + /* + * all event contexts that have entered and before exiting + */ + struct dept_ecxt_held ecxt_held[DEPT_MAX_ECXT_HELD]; + int ecxt_held_pos; + + /* + * ring buffer holding all waits that have happened + */ + struct dept_wait_hist wait_hist[DEPT_MAX_WAIT_HIST]; + int wait_hist_pos; + + /* + * sequential id to identify each IRQ context + */ + unsigned int irq_id[DEPT_IRQS_NR]; + + /* + * for tracking IRQ-enabled points with cross-event + */ + unsigned int wgen_enirq[DEPT_IRQS_NR]; + + /* + * for keeping up-to-date IRQ-enabled points + */ + unsigned long enirq_ip[DEPT_IRQS_NR]; + + /* + * for reserving a current stack instance at each operation + */ + struct dept_stack *stack; + + /* + * for preventing recursive call into DEPT engine + */ + int recursive; + + /* + * for staging data to commit a wait + */ + struct dept_map stage_m; + bool stage_sched_map; + const char *stage_w_fn; + unsigned long stage_ip; + + /* + * the number of missing ecxts + */ + int missing_ecxt; + + /* + * for tracking IRQ-enable state + */ + bool hardirqs_enabled; + bool softirqs_enabled; + + /* + * whether the current is on do_exit() + */ + bool task_exit; + + /* + * whether the current is running __schedule() + */ + bool in_sched; +}; + +#define DEPT_TASK_INITIALIZER(t) \ +{ \ + .wait_hist = { { .wait = NULL, } }, \ + .ecxt_held_pos = 0, \ + .wait_hist_pos = 0, \ + .irq_id = { 0U }, \ + .wgen_enirq = { 0U }, \ + .enirq_ip = { 0UL }, \ + .stack = NULL, \ + .recursive = 0, \ + .stage_m = DEPT_MAP_INITIALIZER((t)->stage_m, NULL), \ + .stage_sched_map = false, \ + .stage_w_fn = NULL, \ + .stage_ip = 0UL, \ + .missing_ecxt = 0, \ + .hardirqs_enabled = false, \ + .softirqs_enabled = false, \ + .task_exit = false, \ + .in_sched = false, \ +} + +extern void dept_on(void); +extern void dept_off(void); +extern void dept_init(void); +extern void dept_task_init(struct task_struct *t); +extern void dept_task_exit(struct task_struct *t); +extern void dept_free_range(void *start, unsigned int sz); +extern void dept_map_init(struct dept_map *m, struct dept_key *k, int sub_u, const char *n); +extern void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u, const char *n); +extern void dept_map_copy(struct dept_map *to, struct dept_map *from); + +extern void dept_wait(struct dept_map *m, unsigned long w_f, unsigned long ip, const char *w_fn, int sub_l); +extern void dept_stage_wait(struct dept_map *m, struct dept_key *k, unsigned long ip, const char *w_fn); +extern void 
dept_request_event_wait_commit(void); +extern void dept_clean_stage(void); +extern void dept_stage_event(struct task_struct *t, unsigned long ip); +extern void dept_ecxt_enter(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *c_fn, const char *e_fn, int sub_l); +extern bool dept_ecxt_holding(struct dept_map *m, unsigned long e_f); +extern void dept_request_event(struct dept_map *m); +extern void dept_event(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *e_fn); +extern void dept_ecxt_exit(struct dept_map *m, unsigned long e_f, unsigned long ip); +extern void dept_sched_enter(void); +extern void dept_sched_exit(void); + +static inline void dept_ecxt_enter_nokeep(struct dept_map *m) +{ + dept_ecxt_enter(m, 0UL, 0UL, NULL, NULL, 0); +} + +/* + * for users who want to manage external keys + */ +extern void dept_key_init(struct dept_key *k); +extern void dept_key_destroy(struct dept_key *k); +extern void dept_map_ecxt_modify(struct dept_map *m, unsigned long e_f, struct dept_key *new_k, unsigned long new_e_f, unsigned long new_ip, const char *new_c_fn, const char *new_e_fn, int new_sub_l); + +extern void dept_softirq_enter(void); +extern void dept_hardirq_enter(void); +extern void dept_softirqs_on_ip(unsigned long ip); +extern void dept_hardirqs_on(void); +extern void dept_softirqs_off(void); +extern void dept_hardirqs_off(void); +#else /* !CONFIG_DEPT */ +struct dept_key { }; +struct dept_map { }; +struct dept_task { }; + +#define DEPT_MAP_INITIALIZER(n, k) { } +#define DEPT_TASK_INITIALIZER(t) { } + +#define dept_on() do { } while (0) +#define dept_off() do { } while (0) +#define dept_init() do { } while (0) +#define dept_task_init(t) do { } while (0) +#define dept_task_exit(t) do { } while (0) +#define dept_free_range(s, sz) do { } while (0) +#define dept_map_init(m, k, su, n) do { (void)(n); (void)(k); } while (0) +#define dept_map_reinit(m, k, su, n) do { (void)(n); (void)(k); } while (0) +#define dept_map_copy(t, f) do { } while (0) + +#define dept_wait(m, w_f, ip, w_fn, sl) do { (void)(w_fn); } while (0) +#define dept_stage_wait(m, k, ip, w_fn) do { (void)(k); (void)(w_fn); } while (0) +#define dept_request_event_wait_commit() do { } while (0) +#define dept_clean_stage() do { } while (0) +#define dept_stage_event(t, ip) do { } while (0) +#define dept_ecxt_enter(m, e_f, ip, c_fn, e_fn, sl) do { (void)(c_fn); (void)(e_fn); } while (0) +#define dept_ecxt_holding(m, e_f) false +#define dept_request_event(m) do { } while (0) +#define dept_event(m, e_f, ip, e_fn) do { (void)(e_fn); } while (0) +#define dept_ecxt_exit(m, e_f, ip) do { } while (0) +#define dept_sched_enter() do { } while (0) +#define dept_sched_exit() do { } while (0) +#define dept_ecxt_enter_nokeep(m) do { } while (0) +#define dept_key_init(k) do { (void)(k); } while (0) +#define dept_key_destroy(k) do { (void)(k); } while (0) +#define dept_map_ecxt_modify(m, e_f, n_k, n_e_f, n_ip, n_c_fn, n_e_fn, n_sl) do { (void)(n_k); (void)(n_c_fn); (void)(n_e_fn); } while (0) + +#define dept_softirq_enter() do { } while (0) +#define dept_hardirq_enter() do { } while (0) +#define dept_softirqs_on_ip(ip) do { } while (0) +#define dept_hardirqs_on() do { } while (0) +#define dept_softirqs_off() do { } while (0) +#define dept_hardirqs_off() do { } while (0) +#endif +#endif /* __LINUX_DEPT_H */ diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h index d57cab4d4c06..bb279dbbe748 100644 --- a/include/linux/hardirq.h +++ b/include/linux/hardirq.h @@ -5,6 +5,7 @@ #include #include #include +#include 
#include #include #include @@ -106,6 +107,7 @@ void irq_exit_rcu(void); */ #define __nmi_enter() \ do { \ + dept_off(); \ lockdep_off(); \ arch_nmi_enter(); \ BUG_ON(in_nmi() == NMI_MASK); \ @@ -128,6 +130,7 @@ void irq_exit_rcu(void); __preempt_count_sub(NMI_OFFSET + HARDIRQ_OFFSET); \ arch_nmi_exit(); \ lockdep_on(); \ + dept_on(); \ } while (0) #define nmi_exit() \ diff --git a/include/linux/sched.h b/include/linux/sched.h index 292c31697248..6dafdc93b462 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -38,6 +38,7 @@ #include #include #include +#include /* task_struct member predeclarations (sorted alphabetically): */ struct audit_context; @@ -1178,6 +1179,8 @@ struct task_struct { struct held_lock held_locks[MAX_LOCK_DEPTH]; #endif + struct dept_task dept_task; + #if defined(CONFIG_UBSAN) && !defined(CONFIG_UBSAN_TRAP) unsigned int in_ubsan; #endif diff --git a/init/init_task.c b/init/init_task.c index 5727d42149c3..171572fbdb43 100644 --- a/init/init_task.c +++ b/init/init_task.c @@ -12,6 +12,7 @@ #include #include #include +#include #include @@ -194,6 +195,7 @@ struct task_struct init_task .curr_chain_key = INITIAL_CHAIN_KEY, .lockdep_recursion = 0, #endif + .dept_task = DEPT_TASK_INITIALIZER(init_task), #ifdef CONFIG_FUNCTION_GRAPH_TRACER .ret_stack = NULL, .tracing_graph_pause = ATOMIC_INIT(0), diff --git a/init/main.c b/init/main.c index e24b0780fdff..39dac78fafc1 100644 --- a/init/main.c +++ b/init/main.c @@ -65,6 +65,7 @@ #include #include #include +#include #include #include #include @@ -1011,6 +1012,7 @@ void start_kernel(void) panic_param); lockdep_init(); + dept_init(); /* * Need to run this when irqs are enabled, because it wants diff --git a/kernel/Makefile b/kernel/Makefile index 3947122d618b..e864b66d6470 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -51,6 +51,7 @@ obj-y += livepatch/ obj-y += dma/ obj-y += entry/ obj-$(CONFIG_MODULES) += module/ +obj-y += dependency/ obj-$(CONFIG_KCMP) += kcmp.o obj-$(CONFIG_FREEZER) += freezer.o diff --git a/kernel/dependency/Makefile b/kernel/dependency/Makefile new file mode 100644 index 000000000000..b5cfb8a03c0c --- /dev/null +++ b/kernel/dependency/Makefile @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: GPL-2.0 + +obj-$(CONFIG_DEPT) += dept.o diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c new file mode 100644 index 000000000000..a3e774479f94 --- /dev/null +++ b/kernel/dependency/dept.c @@ -0,0 +1,2966 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * DEPT(DEPendency Tracker) - Runtime dependency tracker + * + * Started by Byungchul Park : + * + * Copyright (c) 2020 LG Electronics, Inc., Byungchul Park + * + * DEPT provides a general way to detect deadlock possibility in runtime + * and the interest is not limited to typical lock but to every + * syncronization primitives. + * + * The following ideas were borrowed from LOCKDEP: + * + * 1) Use a graph to track relationship between classes. + * 2) Prevent performance regression using hash. + * + * The following items were enhanced from LOCKDEP: + * + * 1) Cover more deadlock cases. + * 2) Allow muliple reports. + * + * TODO: Both LOCKDEP and DEPT should co-exist until DEPT is considered + * stable. Then the dependency check routine should be replaced with + * DEPT after. 
It should finally look like: + * + * + * + * As is: + * + * LOCKDEP + * +-----------------------------------------+ + * | Lock usage correctness check | <-> locks + * | | + * | | + * | +-------------------------------------+ | + * | | Dependency check | | + * | | (by tracking lock acquisition order)| | + * | +-------------------------------------+ | + * | | + * +-----------------------------------------+ + * + * DEPT + * +-----------------------------------------+ + * | Dependency check | <-> waits/events + * | (by tracking wait and event context) | + * +-----------------------------------------+ + * + * + * + * To be: + * + * LOCKDEP + * +-----------------------------------------+ + * | Lock usage correctness check | <-> locks + * | | + * | | + * | (Request dependency check) | + * | T | + * +--------------------|--------------------+ + * | + * DEPT V + * +-----------------------------------------+ + * | Dependency check | <-> waits/events + * | (by tracking wait and event context) | + * +-----------------------------------------+ + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +static int dept_stop; +static int dept_per_cpu_ready; + +#define DEPT_READY_WARN (!oops_in_progress) + +/* + * Make all operations using DEPT_WARN_ON() fail on oops_in_progress and + * prevent warning message. + */ +#define DEPT_WARN_ON_ONCE(c) \ + ({ \ + int __ret = 0; \ + \ + if (likely(DEPT_READY_WARN)) \ + __ret = WARN_ONCE(c, "DEPT_WARN_ON_ONCE: " #c); \ + __ret; \ + }) + +#define DEPT_WARN_ONCE(s...) \ + ({ \ + if (likely(DEPT_READY_WARN)) \ + WARN_ONCE(1, "DEPT_WARN_ONCE: " s); \ + }) + +#define DEPT_WARN_ON(c) \ + ({ \ + int __ret = 0; \ + \ + if (likely(DEPT_READY_WARN)) \ + __ret = WARN(c, "DEPT_WARN_ON: " #c); \ + __ret; \ + }) + +#define DEPT_WARN(s...) \ + ({ \ + if (likely(DEPT_READY_WARN)) \ + WARN(1, "DEPT_WARN: " s); \ + }) + +#define DEPT_STOP(s...) \ + ({ \ + WRITE_ONCE(dept_stop, 1); \ + if (likely(DEPT_READY_WARN)) \ + WARN(1, "DEPT_STOP: " s); \ + }) + +#define DEPT_INFO_ONCE(s...) pr_warn_once("DEPT_INFO_ONCE: " s) + +static arch_spinlock_t dept_spin = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED; + +/* + * DEPT internal engine should be careful in using outside functions + * e.g. printk at reporting since that kind of usage might cause + * untrackable deadlock. 
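+ *
+ * dept_outworld gets raised while such an outside function is in use
+ * (see print_circle()), so that dept_lock() can bail out instead of
+ * spinning behind a context that is busy reporting.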
+ */ +static atomic_t dept_outworld = ATOMIC_INIT(0); + +static void dept_outworld_enter(void) +{ + atomic_inc(&dept_outworld); +} + +static void dept_outworld_exit(void) +{ + atomic_dec(&dept_outworld); +} + +static bool dept_outworld_entered(void) +{ + return atomic_read(&dept_outworld); +} + +static bool dept_lock(void) +{ + while (!arch_spin_trylock(&dept_spin)) + if (unlikely(dept_outworld_entered())) + return false; + return true; +} + +static void dept_unlock(void) +{ + arch_spin_unlock(&dept_spin); +} + +/* + * whether to stack-trace on every wait or every ecxt + */ +static bool rich_stack = true; + +enum bfs_ret { + BFS_CONTINUE, + BFS_CONTINUE_REV, + BFS_DONE, + BFS_SKIP, +}; + +static bool after(unsigned int a, unsigned int b) +{ + return (int)(b - a) < 0; +} + +static bool before(unsigned int a, unsigned int b) +{ + return (int)(a - b) < 0; +} + +static bool valid_stack(struct dept_stack *s) +{ + return s && s->nr > 0; +} + +static bool valid_class(struct dept_class *c) +{ + return c->key; +} + +static void invalidate_class(struct dept_class *c) +{ + c->key = 0UL; +} + +static struct dept_ecxt *dep_e(struct dept_dep *d) +{ + return d->ecxt; +} + +static struct dept_wait *dep_w(struct dept_dep *d) +{ + return d->wait; +} + +static struct dept_class *dep_fc(struct dept_dep *d) +{ + return dep_e(d)->class; +} + +static struct dept_class *dep_tc(struct dept_dep *d) +{ + return dep_w(d)->class; +} + +static const char *irq_str(int irq) +{ + if (irq == DEPT_SIRQ) + return "softirq"; + if (irq == DEPT_HIRQ) + return "hardirq"; + return "(unknown)"; +} + +static inline struct dept_task *dept_task(void) +{ + return ¤t->dept_task; +} + +/* + * Dept doesn't work either when it's stopped by DEPT_STOP() or in a nmi + * context. + */ +static bool dept_working(void) +{ + return !READ_ONCE(dept_stop) && !in_nmi(); +} + +/* + * Even k == NULL is considered as a valid key because it would use + * &->map_key as the key in that case. + */ +struct dept_key __dept_no_validate__; +static bool valid_key(struct dept_key *k) +{ + return &__dept_no_validate__ != k; +} + +/* + * Pool + * ===================================================================== + * DEPT maintains pools to provide objects in a safe way. + * + * 1) Static pool is used at the beginning of booting time. + * 2) Local pool is tried first before the static pool. Objects that + * have been freed will be placed. + */ + +enum object_t { +#define OBJECT(id, nr) OBJECT_##id, + #include "dept_object.h" +#undef OBJECT + OBJECT_NR, +}; + +#define OBJECT(id, nr) \ +static struct dept_##id spool_##id[nr]; \ +static DEFINE_PER_CPU(struct llist_head, lpool_##id); + #include "dept_object.h" +#undef OBJECT + +static struct dept_pool pool[OBJECT_NR] = { +#define OBJECT(id, nr) { \ + .name = #id, \ + .obj_sz = sizeof(struct dept_##id), \ + .obj_nr = ATOMIC_INIT(nr), \ + .node_off = offsetof(struct dept_##id, pool_node), \ + .spool = spool_##id, \ + .lpool = &lpool_##id, }, + #include "dept_object.h" +#undef OBJECT +}; + +/* + * Can use llist no matter whether CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG is + * enabled or not because NMI and other contexts in the same CPU never + * run inside of DEPT concurrently by preventing reentrance. + */ +static void *from_pool(enum object_t t) +{ + struct dept_pool *p; + struct llist_head *h; + struct llist_node *n; + + /* + * llist_del_first() doesn't allow concurrent access e.g. + * between process and IRQ context. 
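+	 * With IRQs disabled on this CPU and reentrance into DEPT
+	 * prevented, no two contexts on this CPU can pop from the same
+	 * llist at once, hence the irqs_disabled() check below.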
+ */ + if (DEPT_WARN_ON(!irqs_disabled())) + return NULL; + + p = &pool[t]; + + /* + * Try local pool first. + */ + if (likely(dept_per_cpu_ready)) + h = this_cpu_ptr(p->lpool); + else + h = &p->boot_pool; + + n = llist_del_first(h); + if (n) + return (void *)n - p->node_off; + + /* + * Try static pool. + */ + if (atomic_read(&p->obj_nr) > 0) { + int idx = atomic_dec_return(&p->obj_nr); + + if (idx >= 0) + return p->spool + (idx * p->obj_sz); + } + + DEPT_INFO_ONCE("---------------------------------------------\n" + " Some of Dept internal resources are run out.\n" + " Dept might still work if the resources get freed.\n" + " However, the chances are Dept will suffer from\n" + " the lack from now. Needs to extend the internal\n" + " resource pools. Ask max.byungchul.park@gmail.com\n"); + return NULL; +} + +static void to_pool(void *o, enum object_t t) +{ + struct dept_pool *p = &pool[t]; + struct llist_head *h; + + preempt_disable(); + if (likely(dept_per_cpu_ready)) + h = this_cpu_ptr(p->lpool); + else + h = &p->boot_pool; + + llist_add(o + p->node_off, h); + preempt_enable(); +} + +#define OBJECT(id, nr) \ +static void (*ctor_##id)(struct dept_##id *a); \ +static void (*dtor_##id)(struct dept_##id *a); \ +static struct dept_##id *new_##id(void) \ +{ \ + struct dept_##id *a; \ + \ + a = (struct dept_##id *)from_pool(OBJECT_##id); \ + if (unlikely(!a)) \ + return NULL; \ + \ + atomic_set(&a->ref, 1); \ + \ + if (ctor_##id) \ + ctor_##id(a); \ + \ + return a; \ +} \ + \ +static struct dept_##id *get_##id(struct dept_##id *a) \ +{ \ + atomic_inc(&a->ref); \ + return a; \ +} \ + \ +static void put_##id(struct dept_##id *a) \ +{ \ + if (!atomic_dec_return(&a->ref)) { \ + if (dtor_##id) \ + dtor_##id(a); \ + to_pool(a, OBJECT_##id); \ + } \ +} \ + \ +static void del_##id(struct dept_##id *a) \ +{ \ + put_##id(a); \ +} \ + \ +static bool __maybe_unused id##_consumed(struct dept_##id *a) \ +{ \ + return a && atomic_read(&a->ref) > 1; \ +} +#include "dept_object.h" +#undef OBJECT + +#define SET_CONSTRUCTOR(id, f) \ +static void (*ctor_##id)(struct dept_##id *a) = f + +static void initialize_dep(struct dept_dep *d) +{ + INIT_LIST_HEAD(&d->bfs_node); + INIT_LIST_HEAD(&d->dep_node); + INIT_LIST_HEAD(&d->dep_rev_node); +} +SET_CONSTRUCTOR(dep, initialize_dep); + +static void initialize_class(struct dept_class *c) +{ + int i; + + for (i = 0; i < DEPT_IRQS_NR; i++) { + struct dept_iecxt *ie = &c->iecxt[i]; + struct dept_iwait *iw = &c->iwait[i]; + + ie->ecxt = NULL; + ie->enirq = i; + ie->staled = false; + + iw->wait = NULL; + iw->irq = i; + iw->staled = false; + iw->touched = false; + } + c->bfs_gen = 0U; + + INIT_LIST_HEAD(&c->all_node); + INIT_LIST_HEAD(&c->dep_head); + INIT_LIST_HEAD(&c->dep_rev_head); +} +SET_CONSTRUCTOR(class, initialize_class); + +static void initialize_ecxt(struct dept_ecxt *e) +{ + int i; + + for (i = 0; i < DEPT_IRQS_NR; i++) { + e->enirq_stack[i] = NULL; + e->enirq_ip[i] = 0UL; + } + e->ecxt_ip = 0UL; + e->ecxt_stack = NULL; + e->enirqf = 0UL; + e->event_ip = 0UL; + e->event_stack = NULL; +} +SET_CONSTRUCTOR(ecxt, initialize_ecxt); + +static void initialize_wait(struct dept_wait *w) +{ + int i; + + for (i = 0; i < DEPT_IRQS_NR; i++) { + w->irq_stack[i] = NULL; + w->irq_ip[i] = 0UL; + } + w->wait_ip = 0UL; + w->wait_stack = NULL; + w->irqf = 0UL; +} +SET_CONSTRUCTOR(wait, initialize_wait); + +static void initialize_stack(struct dept_stack *s) +{ + s->nr = 0; +} +SET_CONSTRUCTOR(stack, initialize_stack); + +#define OBJECT(id, nr) \ +static void (*ctor_##id)(struct dept_##id *a); 
+ #include "dept_object.h" +#undef OBJECT + +#undef SET_CONSTRUCTOR + +#define SET_DESTRUCTOR(id, f) \ +static void (*dtor_##id)(struct dept_##id *a) = f + +static void destroy_dep(struct dept_dep *d) +{ + if (dep_e(d)) + put_ecxt(dep_e(d)); + if (dep_w(d)) + put_wait(dep_w(d)); +} +SET_DESTRUCTOR(dep, destroy_dep); + +static void destroy_ecxt(struct dept_ecxt *e) +{ + int i; + + for (i = 0; i < DEPT_IRQS_NR; i++) + if (e->enirq_stack[i]) + put_stack(e->enirq_stack[i]); + if (e->class) + put_class(e->class); + if (e->ecxt_stack) + put_stack(e->ecxt_stack); + if (e->event_stack) + put_stack(e->event_stack); +} +SET_DESTRUCTOR(ecxt, destroy_ecxt); + +static void destroy_wait(struct dept_wait *w) +{ + int i; + + for (i = 0; i < DEPT_IRQS_NR; i++) + if (w->irq_stack[i]) + put_stack(w->irq_stack[i]); + if (w->class) + put_class(w->class); + if (w->wait_stack) + put_stack(w->wait_stack); +} +SET_DESTRUCTOR(wait, destroy_wait); + +#define OBJECT(id, nr) \ +static void (*dtor_##id)(struct dept_##id *a); + #include "dept_object.h" +#undef OBJECT + +#undef SET_DESTRUCTOR + +/* + * Caching and hashing + * ===================================================================== + * DEPT makes use of caching and hashing to improve performance. Each + * object can be obtained in O(1) with its key. + * + * NOTE: Currently we assume all the objects in the hashs will never be + * removed. Implement it when needed. + */ + +/* + * Some information might be lost but it's only for hashing key. + */ +static unsigned long mix(unsigned long a, unsigned long b) +{ + int halfbits = sizeof(unsigned long) * 8 / 2; + unsigned long halfmask = (1UL << halfbits) - 1UL; + + return (a << halfbits) | (b & halfmask); +} + +static bool cmp_dep(struct dept_dep *d1, struct dept_dep *d2) +{ + return dep_fc(d1)->key == dep_fc(d2)->key && + dep_tc(d1)->key == dep_tc(d2)->key; +} + +static unsigned long key_dep(struct dept_dep *d) +{ + return mix(dep_fc(d)->key, dep_tc(d)->key); +} + +static bool cmp_class(struct dept_class *c1, struct dept_class *c2) +{ + return c1->key == c2->key; +} + +static unsigned long key_class(struct dept_class *c) +{ + return c->key; +} + +#define HASH(id, bits) \ +static struct hlist_head table_##id[1 << (bits)]; \ + \ +static struct hlist_head *head_##id(struct dept_##id *a) \ +{ \ + return table_##id + hash_long(key_##id(a), bits); \ +} \ + \ +static struct dept_##id *hash_lookup_##id(struct dept_##id *a) \ +{ \ + struct dept_##id *b; \ + \ + hlist_for_each_entry_rcu(b, head_##id(a), hash_node) \ + if (cmp_##id(a, b)) \ + return b; \ + return NULL; \ +} \ + \ +static void hash_add_##id(struct dept_##id *a) \ +{ \ + get_##id(a); \ + hlist_add_head_rcu(&a->hash_node, head_##id(a)); \ +} \ + \ +static void hash_del_##id(struct dept_##id *a) \ +{ \ + hlist_del_rcu(&a->hash_node); \ + put_##id(a); \ +} +#include "dept_hash.h" +#undef HASH + +static struct dept_dep *lookup_dep(struct dept_class *fc, + struct dept_class *tc) +{ + struct dept_ecxt onetime_e = { .class = fc }; + struct dept_wait onetime_w = { .class = tc }; + struct dept_dep onetime_d = { .ecxt = &onetime_e, + .wait = &onetime_w }; + return hash_lookup_dep(&onetime_d); +} + +static struct dept_class *lookup_class(unsigned long key) +{ + struct dept_class onetime_c = { .key = key }; + + return hash_lookup_class(&onetime_c); +} + +/* + * Report + * ===================================================================== + * DEPT prints useful information to help debuging on detection of + * problematic dependency. 
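+ *
+ * A report consists of a summary diagram for each dependency in the
+ * detected circle (print_diagram()), followed by per-context details
+ * including stack traces (print_dep()); see print_circle() below.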
+ */ + +static void print_ip_stack(unsigned long ip, struct dept_stack *s) +{ + if (ip) + print_ip_sym(KERN_WARNING, ip); + + if (valid_stack(s)) { + pr_warn("stacktrace:\n"); + stack_trace_print(s->raw, s->nr, 5); + } + + if (!ip && !valid_stack(s)) + pr_warn("(N/A)\n"); +} + +#define print_spc(spc, fmt, ...) \ + pr_warn("%*c" fmt, (spc) * 4, ' ', ##__VA_ARGS__) + +static void print_diagram(struct dept_dep *d) +{ + struct dept_ecxt *e = dep_e(d); + struct dept_wait *w = dep_w(d); + struct dept_class *fc = dep_fc(d); + struct dept_class *tc = dep_tc(d); + unsigned long irqf; + int irq; + bool firstline = true; + int spc = 1; + const char *w_fn = w->wait_fn ?: "(unknown)"; + const char *e_fn = e->event_fn ?: "(unknown)"; + const char *c_fn = e->ecxt_fn ?: "(unknown)"; + const char *fc_n = fc->sched_map ? "" : (fc->name ?: "(unknown)"); + const char *tc_n = tc->sched_map ? "" : (tc->name ?: "(unknown)"); + + irqf = e->enirqf & w->irqf; + for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) { + if (!firstline) + pr_warn("\nor\n\n"); + firstline = false; + + print_spc(spc, "[S] %s(%s:%d)\n", c_fn, fc_n, fc->sub_id); + print_spc(spc, " <%s interrupt>\n", irq_str(irq)); + print_spc(spc + 1, "[W] %s(%s:%d)\n", w_fn, tc_n, tc->sub_id); + print_spc(spc, "[E] %s(%s:%d)\n", e_fn, fc_n, fc->sub_id); + } + + if (!irqf) { + print_spc(spc, "[S] %s(%s:%d)\n", c_fn, fc_n, fc->sub_id); + print_spc(spc, "[W] %s(%s:%d)\n", w_fn, tc_n, tc->sub_id); + print_spc(spc, "[E] %s(%s:%d)\n", e_fn, fc_n, fc->sub_id); + } +} + +static void print_dep(struct dept_dep *d) +{ + struct dept_ecxt *e = dep_e(d); + struct dept_wait *w = dep_w(d); + struct dept_class *fc = dep_fc(d); + struct dept_class *tc = dep_tc(d); + unsigned long irqf; + int irq; + const char *w_fn = w->wait_fn ?: "(unknown)"; + const char *e_fn = e->event_fn ?: "(unknown)"; + const char *c_fn = e->ecxt_fn ?: "(unknown)"; + const char *fc_n = fc->sched_map ? "" : (fc->name ?: "(unknown)"); + const char *tc_n = tc->sched_map ? "" : (tc->name ?: "(unknown)"); + + irqf = e->enirqf & w->irqf; + for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) { + pr_warn("%s has been enabled:\n", irq_str(irq)); + print_ip_stack(e->enirq_ip[irq], e->enirq_stack[irq]); + pr_warn("\n"); + + pr_warn("[S] %s(%s:%d):\n", c_fn, fc_n, fc->sub_id); + print_ip_stack(e->ecxt_ip, e->ecxt_stack); + pr_warn("\n"); + + pr_warn("[W] %s(%s:%d) in %s context:\n", + w_fn, tc_n, tc->sub_id, irq_str(irq)); + print_ip_stack(w->irq_ip[irq], w->irq_stack[irq]); + pr_warn("\n"); + + pr_warn("[E] %s(%s:%d):\n", e_fn, fc_n, fc->sub_id); + print_ip_stack(e->event_ip, e->event_stack); + } + + if (!irqf) { + pr_warn("[S] %s(%s:%d):\n", c_fn, fc_n, fc->sub_id); + print_ip_stack(e->ecxt_ip, e->ecxt_stack); + pr_warn("\n"); + + pr_warn("[W] %s(%s:%d):\n", w_fn, tc_n, tc->sub_id); + print_ip_stack(w->wait_ip, w->wait_stack); + pr_warn("\n"); + + pr_warn("[E] %s(%s:%d):\n", e_fn, fc_n, fc->sub_id); + print_ip_stack(e->event_ip, e->event_stack); + } +} + +static void save_current_stack(int skip); + +/* + * Print all classes in a circle. 
+ */ +static void print_circle(struct dept_class *c) +{ + struct dept_class *fc = c->bfs_parent; + struct dept_class *tc = c; + int i; + + dept_outworld_enter(); + save_current_stack(6); + + pr_warn("===================================================\n"); + pr_warn("DEPT: Circular dependency has been detected.\n"); + pr_warn("%s %.*s %s\n", init_utsname()->release, + (int)strcspn(init_utsname()->version, " "), + init_utsname()->version, + print_tainted()); + pr_warn("---------------------------------------------------\n"); + pr_warn("summary\n"); + pr_warn("---------------------------------------------------\n"); + + if (fc == tc) + pr_warn("*** AA DEADLOCK ***\n\n"); + else + pr_warn("*** DEADLOCK ***\n\n"); + + i = 0; + do { + struct dept_dep *d = lookup_dep(fc, tc); + + pr_warn("context %c\n", 'A' + (i++)); + print_diagram(d); + if (fc != c) + pr_warn("\n"); + + tc = fc; + fc = fc->bfs_parent; + } while (tc != c); + + pr_warn("\n"); + pr_warn("[S]: start of the event context\n"); + pr_warn("[W]: the wait blocked\n"); + pr_warn("[E]: the event not reachable\n"); + + i = 0; + do { + struct dept_dep *d = lookup_dep(fc, tc); + + pr_warn("---------------------------------------------------\n"); + pr_warn("context %c's detail\n", 'A' + i); + pr_warn("---------------------------------------------------\n"); + pr_warn("context %c\n", 'A' + (i++)); + print_diagram(d); + pr_warn("\n"); + print_dep(d); + + tc = fc; + fc = fc->bfs_parent; + } while (tc != c); + + pr_warn("---------------------------------------------------\n"); + pr_warn("information that might be helpful\n"); + pr_warn("---------------------------------------------------\n"); + dump_stack(); + + dept_outworld_exit(); +} + +/* + * BFS(Breadth First Search) + * ===================================================================== + * Whenever a new dependency is added into the graph, search the graph + * for a new circular dependency. + */ + +static void enqueue(struct list_head *h, struct dept_dep *d) +{ + list_add_tail(&d->bfs_node, h); +} + +static struct dept_dep *dequeue(struct list_head *h) +{ + struct dept_dep *d; + + d = list_first_entry(h, struct dept_dep, bfs_node); + list_del(&d->bfs_node); + return d; +} + +static bool empty(struct list_head *h) +{ + return list_empty(h); +} + +static void extend_queue(struct list_head *h, struct dept_class *cur) +{ + struct dept_dep *d; + + list_for_each_entry(d, &cur->dep_head, dep_node) { + struct dept_class *next = dep_tc(d); + + if (cur->bfs_gen == next->bfs_gen) + continue; + next->bfs_gen = cur->bfs_gen; + next->bfs_dist = cur->bfs_dist + 1; + next->bfs_parent = cur; + enqueue(h, d); + } +} + +static void extend_queue_rev(struct list_head *h, struct dept_class *cur) +{ + struct dept_dep *d; + + list_for_each_entry(d, &cur->dep_rev_head, dep_rev_node) { + struct dept_class *next = dep_fc(d); + + if (cur->bfs_gen == next->bfs_gen) + continue; + next->bfs_gen = cur->bfs_gen; + next->bfs_dist = cur->bfs_dist + 1; + next->bfs_parent = cur; + enqueue(h, d); + } +} + +typedef enum bfs_ret bfs_f(struct dept_dep *d, void *in, void **out); +static unsigned int bfs_gen; + +/* + * NOTE: Must be called with dept_lock held. + */ +static void bfs(struct dept_class *c, bfs_f *cb, void *in, void **out) +{ + LIST_HEAD(q); + enum bfs_ret ret; + + if (DEPT_WARN_ON(!cb)) + return; + + /* + * Avoid zero bfs_gen. 
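+	 * A class is initialized with ->bfs_gen == 0, so zero must never
+	 * be used as a valid generation or an unvisited class could be
+	 * mistaken for one already visited in the current search.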
+ */ + bfs_gen = bfs_gen + 1 ?: 1; + + c->bfs_gen = bfs_gen; + c->bfs_dist = 0; + c->bfs_parent = c; + + ret = cb(NULL, in, out); + if (ret == BFS_DONE) + return; + if (ret == BFS_SKIP) + return; + if (ret == BFS_CONTINUE) + extend_queue(&q, c); + if (ret == BFS_CONTINUE_REV) + extend_queue_rev(&q, c); + + while (!empty(&q)) { + struct dept_dep *d = dequeue(&q); + + ret = cb(d, in, out); + if (ret == BFS_DONE) + break; + if (ret == BFS_SKIP) + continue; + if (ret == BFS_CONTINUE) + extend_queue(&q, dep_tc(d)); + if (ret == BFS_CONTINUE_REV) + extend_queue_rev(&q, dep_fc(d)); + } + + while (!empty(&q)) + dequeue(&q); +} + +/* + * Main operations + * ===================================================================== + * Add dependencies - Each new dependency is added into the graph and + * checked if it forms a circular dependency. + * + * Track waits - Waits are queued into the ring buffer for later use to + * generate appropriate dependencies with cross-event. + * + * Track event contexts(ecxt) - Event contexts are pushed into local + * stack for later use to generate appropriate dependencies with waits. + */ + +static unsigned long cur_enirqf(void); +static int cur_irq(void); +static unsigned int cur_ctxt_id(void); + +static struct dept_iecxt *iecxt(struct dept_class *c, int irq) +{ + return &c->iecxt[irq]; +} + +static struct dept_iwait *iwait(struct dept_class *c, int irq) +{ + return &c->iwait[irq]; +} + +static void stale_iecxt(struct dept_iecxt *ie) +{ + if (ie->ecxt) + put_ecxt(ie->ecxt); + + WRITE_ONCE(ie->ecxt, NULL); + WRITE_ONCE(ie->staled, true); +} + +static void set_iecxt(struct dept_iecxt *ie, struct dept_ecxt *e) +{ + /* + * ->ecxt will never be updated once getting set until the class + * gets removed. + */ + if (ie->ecxt) + DEPT_WARN_ON(1); + else + WRITE_ONCE(ie->ecxt, get_ecxt(e)); +} + +static void stale_iwait(struct dept_iwait *iw) +{ + if (iw->wait) + put_wait(iw->wait); + + WRITE_ONCE(iw->wait, NULL); + WRITE_ONCE(iw->staled, true); +} + +static void set_iwait(struct dept_iwait *iw, struct dept_wait *w) +{ + /* + * ->wait will never be updated once getting set until the class + * gets removed. + */ + if (iw->wait) + DEPT_WARN_ON(1); + else + WRITE_ONCE(iw->wait, get_wait(w)); + + iw->touched = true; +} + +static void touch_iwait(struct dept_iwait *iw) +{ + iw->touched = true; +} + +static void untouch_iwait(struct dept_iwait *iw) +{ + iw->touched = false; +} + +static struct dept_stack *get_current_stack(void) +{ + struct dept_stack *s = dept_task()->stack; + + return s ? get_stack(s) : NULL; +} + +static void prepare_current_stack(void) +{ + struct dept_stack *s = dept_task()->stack; + + /* + * The dept_stack is already ready. + */ + if (s && !stack_consumed(s)) { + s->nr = 0; + return; + } + + if (s) + put_stack(s); + + s = dept_task()->stack = new_stack(); + if (!s) + return; + + get_stack(s); + del_stack(s); +} + +static void save_current_stack(int skip) +{ + struct dept_stack *s = dept_task()->stack; + + if (!s) + return; + if (valid_stack(s)) + return; + + s->nr = stack_trace_save(s->raw, DEPT_MAX_STACK_ENTRY, skip); +} + +static void finish_current_stack(void) +{ + struct dept_stack *s = dept_task()->stack; + + if (stack_consumed(s)) + save_current_stack(2); +} + +/* + * FIXME: For now, disable LOCKDEP while DEPT is working. + * + * Both LOCKDEP and DEPT report it on a deadlock detection using + * printk taking the risk of another deadlock that might be caused by + * locks of console or printk between inside and outside of them. 
+ * + * For DEPT, it's no problem since multiple reports are allowed. But it + * would be a bad idea for LOCKDEP since it will stop even on a singe + * report. So we need to prevent LOCKDEP from its reporting the risk + * DEPT would take when reporting something. + */ +#include + +void noinstr dept_off(void) +{ + dept_task()->recursive++; + lockdep_off(); +} + +void noinstr dept_on(void) +{ + dept_task()->recursive--; + lockdep_on(); +} + +static unsigned long dept_enter(void) +{ + unsigned long flags; + + flags = arch_local_irq_save(); + dept_off(); + prepare_current_stack(); + return flags; +} + +static void dept_exit(unsigned long flags) +{ + finish_current_stack(); + dept_on(); + arch_local_irq_restore(flags); +} + +static unsigned long dept_enter_recursive(void) +{ + unsigned long flags; + + flags = arch_local_irq_save(); + return flags; +} + +static void dept_exit_recursive(unsigned long flags) +{ + arch_local_irq_restore(flags); +} + +/* + * NOTE: Must be called with dept_lock held. + */ +static struct dept_dep *__add_dep(struct dept_ecxt *e, + struct dept_wait *w) +{ + struct dept_dep *d; + + if (DEPT_WARN_ON(!valid_class(e->class))) + return NULL; + + if (DEPT_WARN_ON(!valid_class(w->class))) + return NULL; + + if (lookup_dep(e->class, w->class)) + return NULL; + + d = new_dep(); + if (unlikely(!d)) + return NULL; + + d->ecxt = get_ecxt(e); + d->wait = get_wait(w); + + /* + * Add the dependency into hash and graph. + */ + hash_add_dep(d); + list_add(&d->dep_node, &dep_fc(d)->dep_head); + list_add(&d->dep_rev_node, &dep_tc(d)->dep_rev_head); + return d; +} + +static enum bfs_ret cb_check_dl(struct dept_dep *d, + void *in, void **out) +{ + struct dept_dep *new = (struct dept_dep *)in; + + /* + * initial condition for this BFS search + */ + if (!d) { + dep_tc(new)->bfs_parent = dep_fc(new); + + if (dep_tc(new) != dep_fc(new)) + return BFS_CONTINUE; + + /* + * AA circle does not make additional deadlock. We don't + * have to continue this BFS search. + */ + print_circle(dep_tc(new)); + return BFS_DONE; + } + + /* + * Allow multiple reports. + */ + if (dep_tc(d) == dep_fc(new)) + print_circle(dep_tc(new)); + + return BFS_CONTINUE; +} + +/* + * This function is actually in charge of reporting. + */ +static void check_dl_bfs(struct dept_dep *d) +{ + bfs(dep_tc(d), cb_check_dl, (void *)d, NULL); +} + +static enum bfs_ret cb_find_iw(struct dept_dep *d, void *in, void **out) +{ + int irq = *(int *)in; + struct dept_class *fc; + struct dept_iwait *iw; + + if (DEPT_WARN_ON(!out)) + return BFS_DONE; + + /* + * initial condition for this BFS search + */ + if (!d) + return BFS_CONTINUE_REV; + + fc = dep_fc(d); + iw = iwait(fc, irq); + + /* + * If any parent's ->wait was set, then the children would've + * been touched. + */ + if (!iw->touched) + return BFS_SKIP; + + if (!iw->wait) + return BFS_CONTINUE_REV; + + *out = iw; + return BFS_DONE; +} + +static struct dept_iwait *find_iw_bfs(struct dept_class *c, int irq) +{ + struct dept_iwait *iw = iwait(c, irq); + struct dept_iwait *found = NULL; + + if (iw->wait) + return iw; + + /* + * '->touched == false' guarantees there's no parent that has + * been set ->wait. 
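+	 * (Whenever a class's ->wait gets set, that class and all of its
+	 * forward descendants get touched; see set_check_iwait() and
+	 * cb_touch_iw_find_ie().)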
+ */ + if (!iw->touched) + return NULL; + + bfs(c, cb_find_iw, (void *)&irq, (void **)&found); + + if (found) + return found; + + untouch_iwait(iw); + return NULL; +} + +static enum bfs_ret cb_touch_iw_find_ie(struct dept_dep *d, void *in, + void **out) +{ + int irq = *(int *)in; + struct dept_class *tc; + struct dept_iecxt *ie; + struct dept_iwait *iw; + + if (DEPT_WARN_ON(!out)) + return BFS_DONE; + + /* + * initial condition for this BFS search + */ + if (!d) + return BFS_CONTINUE; + + tc = dep_tc(d); + ie = iecxt(tc, irq); + iw = iwait(tc, irq); + + touch_iwait(iw); + + if (!ie->ecxt) + return BFS_CONTINUE; + + if (!*out) + *out = ie; + + return BFS_CONTINUE; +} + +static struct dept_iecxt *touch_iw_find_ie_bfs(struct dept_class *c, + int irq) +{ + struct dept_iecxt *ie = iecxt(c, irq); + struct dept_iwait *iw = iwait(c, irq); + struct dept_iecxt *found = ie->ecxt ? ie : NULL; + + touch_iwait(iw); + bfs(c, cb_touch_iw_find_ie, (void *)&irq, (void **)&found); + return found; +} + +/* + * Should be called with dept_lock held. + */ +static void __add_idep(struct dept_iecxt *ie, struct dept_iwait *iw) +{ + struct dept_dep *new; + + /* + * There's nothing to do. + */ + if (!ie || !iw || !ie->ecxt || !iw->wait) + return; + + new = __add_dep(ie->ecxt, iw->wait); + + /* + * Deadlock detected. Let check_dl_bfs() report it. + */ + if (new) { + check_dl_bfs(new); + stale_iecxt(ie); + stale_iwait(iw); + } + + /* + * If !new, it would be the case of lack of object resource. + * Just let it go and get checked by other chances. Retrying is + * meaningless in that case. + */ +} + +static void set_check_iecxt(struct dept_class *c, int irq, + struct dept_ecxt *e) +{ + struct dept_iecxt *ie = iecxt(c, irq); + + set_iecxt(ie, e); + __add_idep(ie, find_iw_bfs(c, irq)); +} + +static void set_check_iwait(struct dept_class *c, int irq, + struct dept_wait *w) +{ + struct dept_iwait *iw = iwait(c, irq); + + set_iwait(iw, w); + __add_idep(touch_iw_find_ie_bfs(c, irq), iw); +} + +static void add_iecxt(struct dept_class *c, int irq, struct dept_ecxt *e, + bool stack) +{ + /* + * This access is safe since we ensure e->class has set locally. + */ + struct dept_task *dt = dept_task(); + struct dept_iecxt *ie = iecxt(c, irq); + + if (DEPT_WARN_ON(!valid_class(c))) + return; + + if (unlikely(READ_ONCE(ie->staled))) + return; + + /* + * Skip add_iecxt() if ie->ecxt has ever been set at least once. + * Which means it has a valid ->ecxt or been staled. + */ + if (READ_ONCE(ie->ecxt)) + return; + + if (unlikely(!dept_lock())) + return; + + if (unlikely(ie->staled)) + goto unlock; + if (ie->ecxt) + goto unlock; + + e->enirqf |= (1UL << irq); + + /* + * Should be NULL since it's the first time that these + * enirq_{ip,stack}[irq] have ever set. + */ + DEPT_WARN_ON(e->enirq_ip[irq]); + DEPT_WARN_ON(e->enirq_stack[irq]); + + e->enirq_ip[irq] = dt->enirq_ip[irq]; + e->enirq_stack[irq] = stack ? get_current_stack() : NULL; + + set_check_iecxt(c, irq, e); +unlock: + dept_unlock(); +} + +static void add_iwait(struct dept_class *c, int irq, struct dept_wait *w) +{ + struct dept_iwait *iw = iwait(c, irq); + + if (DEPT_WARN_ON(!valid_class(c))) + return; + + if (unlikely(READ_ONCE(iw->staled))) + return; + + /* + * Skip add_iwait() if iw->wait has ever been set at least once. + * Which means it has a valid ->wait or been staled. 
+ */ + if (READ_ONCE(iw->wait)) + return; + + if (unlikely(!dept_lock())) + return; + + if (unlikely(iw->staled)) + goto unlock; + if (iw->wait) + goto unlock; + + w->irqf |= (1UL << irq); + + /* + * Should be NULL since it's the first time that these + * irq_{ip,stack}[irq] have ever set. + */ + DEPT_WARN_ON(w->irq_ip[irq]); + DEPT_WARN_ON(w->irq_stack[irq]); + + w->irq_ip[irq] = w->wait_ip; + w->irq_stack[irq] = get_current_stack(); + + set_check_iwait(c, irq, w); +unlock: + dept_unlock(); +} + +static struct dept_wait_hist *hist(int pos) +{ + struct dept_task *dt = dept_task(); + + return dt->wait_hist + (pos % DEPT_MAX_WAIT_HIST); +} + +static int hist_pos_next(void) +{ + struct dept_task *dt = dept_task(); + + return dt->wait_hist_pos % DEPT_MAX_WAIT_HIST; +} + +static void hist_advance(void) +{ + struct dept_task *dt = dept_task(); + + dt->wait_hist_pos++; + dt->wait_hist_pos %= DEPT_MAX_WAIT_HIST; +} + +static struct dept_wait_hist *new_hist(void) +{ + struct dept_wait_hist *wh = hist(hist_pos_next()); + + hist_advance(); + return wh; +} + +static void add_hist(struct dept_wait *w, unsigned int wg, unsigned int ctxt_id) +{ + struct dept_wait_hist *wh = new_hist(); + + if (likely(wh->wait)) + put_wait(wh->wait); + + wh->wait = get_wait(w); + wh->wgen = wg; + wh->ctxt_id = ctxt_id; +} + +/* + * Should be called after setting up e's iecxt and w's iwait. + */ +static void add_dep(struct dept_ecxt *e, struct dept_wait *w) +{ + struct dept_class *fc = e->class; + struct dept_class *tc = w->class; + struct dept_dep *d; + int i; + + if (lookup_dep(fc, tc)) + return; + + if (unlikely(!dept_lock())) + return; + + /* + * __add_dep() will lookup_dep() again with lock held. + */ + d = __add_dep(e, w); + if (d) { + check_dl_bfs(d); + + for (i = 0; i < DEPT_IRQS_NR; i++) { + struct dept_iwait *fiw = iwait(fc, i); + struct dept_iecxt *found_ie; + struct dept_iwait *found_iw; + + /* + * '->touched == false' guarantees there's no + * parent that has been set ->wait. + */ + if (!fiw->touched) + continue; + + /* + * find_iw_bfs() will untouch the iwait if + * not found. + */ + found_iw = find_iw_bfs(fc, i); + + if (!found_iw) + continue; + + found_ie = touch_iw_find_ie_bfs(tc, i); + __add_idep(found_ie, found_iw); + } + } + dept_unlock(); +} + +static atomic_t wgen = ATOMIC_INIT(1); + +static void add_wait(struct dept_class *c, unsigned long ip, + const char *w_fn, int sub_l, bool sched_sleep) +{ + struct dept_task *dt = dept_task(); + struct dept_wait *w; + unsigned int wg = 0U; + int irq; + int i; + + if (DEPT_WARN_ON(!valid_class(c))) + return; + + w = new_wait(); + if (unlikely(!w)) + return; + + WRITE_ONCE(w->class, get_class(c)); + w->wait_ip = ip; + w->wait_fn = w_fn; + w->wait_stack = get_current_stack(); + w->sched_sleep = sched_sleep; + + irq = cur_irq(); + if (irq < DEPT_IRQS_NR) + add_iwait(c, irq, w); + + /* + * Avoid adding dependency between user aware nested ecxt and + * wait. + */ + for (i = dt->ecxt_held_pos - 1; i >= 0; i--) { + struct dept_ecxt_held *eh; + + eh = dt->ecxt_held + i; + + /* + * the case of invalid key'ed one + */ + if (!eh->ecxt) + continue; + + if (eh->ecxt->class != c || eh->sub_l == sub_l) + add_dep(eh->ecxt, w); + } + + if (!wait_consumed(w) && !rich_stack) { + if (w->wait_stack) + put_stack(w->wait_stack); + w->wait_stack = NULL; + } + + /* + * Avoid zero wgen. 
+ */ + wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen); + add_hist(w, wg, cur_ctxt_id()); + + del_wait(w); +} + +static bool add_ecxt(struct dept_map *m, struct dept_class *c, + unsigned long ip, const char *c_fn, + const char *e_fn, int sub_l) +{ + struct dept_task *dt = dept_task(); + struct dept_ecxt_held *eh; + struct dept_ecxt *e; + unsigned long irqf; + int irq; + + if (DEPT_WARN_ON(!valid_class(c))) + return false; + + if (DEPT_WARN_ON_ONCE(dt->ecxt_held_pos >= DEPT_MAX_ECXT_HELD)) + return false; + + if (m->nocheck) { + eh = dt->ecxt_held + (dt->ecxt_held_pos++); + eh->ecxt = NULL; + eh->map = m; + eh->class = get_class(c); + eh->wgen = atomic_read(&wgen); + eh->sub_l = sub_l; + + return true; + } + + e = new_ecxt(); + if (unlikely(!e)) + return false; + + e->class = get_class(c); + e->ecxt_ip = ip; + e->ecxt_stack = ip && rich_stack ? get_current_stack() : NULL; + e->event_fn = e_fn; + e->ecxt_fn = c_fn; + + eh = dt->ecxt_held + (dt->ecxt_held_pos++); + eh->ecxt = get_ecxt(e); + eh->map = m; + eh->class = get_class(c); + eh->wgen = atomic_read(&wgen); + eh->sub_l = sub_l; + + irqf = cur_enirqf(); + for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) + add_iecxt(c, irq, e, false); + + del_ecxt(e); + return true; +} + +static int find_ecxt_pos(struct dept_map *m, struct dept_class *c, + bool newfirst) +{ + struct dept_task *dt = dept_task(); + int i; + + if (newfirst) { + for (i = dt->ecxt_held_pos - 1; i >= 0; i--) { + struct dept_ecxt_held *eh; + + eh = dt->ecxt_held + i; + if (eh->map == m && eh->class == c) + return i; + } + } else { + for (i = 0; i < dt->ecxt_held_pos; i++) { + struct dept_ecxt_held *eh; + + eh = dt->ecxt_held + i; + if (eh->map == m && eh->class == c) + return i; + } + } + return -1; +} + +static bool pop_ecxt(struct dept_map *m, struct dept_class *c) +{ + struct dept_task *dt = dept_task(); + int pos; + int i; + + pos = find_ecxt_pos(m, c, true); + if (pos == -1) + return false; + + if (dt->ecxt_held[pos].class) + put_class(dt->ecxt_held[pos].class); + + if (dt->ecxt_held[pos].ecxt) + put_ecxt(dt->ecxt_held[pos].ecxt); + + dt->ecxt_held_pos--; + + for (i = pos; i < dt->ecxt_held_pos; i++) + dt->ecxt_held[i] = dt->ecxt_held[i + 1]; + return true; +} + +static bool good_hist(struct dept_wait_hist *wh, unsigned int wg) +{ + return wh->wait != NULL && before(wg, wh->wgen); +} + +/* + * Binary-search the ring buffer for the earliest valid wait. + */ +static int find_hist_pos(unsigned int wg) +{ + int oldest; + int l; + int r; + int pos; + + oldest = hist_pos_next(); + if (unlikely(good_hist(hist(oldest), wg))) { + DEPT_INFO_ONCE("Need to expand the ring buffer.\n"); + return oldest; + } + + l = oldest + 1; + r = oldest + DEPT_MAX_WAIT_HIST - 1; + for (pos = (l + r) / 2; l <= r; pos = (l + r) / 2) { + struct dept_wait_hist *p = hist(pos - 1); + struct dept_wait_hist *wh = hist(pos); + + if (!good_hist(p, wg) && good_hist(wh, wg)) + return pos % DEPT_MAX_WAIT_HIST; + if (good_hist(wh, wg)) + r = pos - 1; + else + l = pos + 1; + } + return -1; +} + +static void do_event(struct dept_map *m, struct dept_class *c, + unsigned int wg, unsigned long ip) +{ + struct dept_task *dt = dept_task(); + struct dept_wait_hist *wh; + struct dept_ecxt_held *eh; + unsigned int ctxt_id; + int end; + int pos; + int i; + + if (DEPT_WARN_ON(!valid_class(c))) + return; + + if (m->nocheck) + return; + + /* + * The event was triggered before wait. 
+ */ + if (!wg) + return; + + pos = find_ecxt_pos(m, c, false); + if (pos == -1) + return; + + eh = dt->ecxt_held + pos; + + if (DEPT_WARN_ON(!eh->ecxt)) + return; + + eh->ecxt->event_ip = ip; + eh->ecxt->event_stack = get_current_stack(); + + /* + * The ecxt already has done what it needs. + */ + if (!before(wg, eh->wgen)) + return; + + pos = find_hist_pos(wg); + if (pos == -1) + return; + + ctxt_id = cur_ctxt_id(); + end = hist_pos_next(); + end = end > pos ? end : end + DEPT_MAX_WAIT_HIST; + for (wh = hist(pos); pos < end; wh = hist(++pos)) { + if (after(wh->wgen, eh->wgen)) + break; + + if (dt->in_sched && wh->wait->sched_sleep) + continue; + + if (wh->ctxt_id == ctxt_id) + add_dep(eh->ecxt, wh->wait); + } + + for (i = 0; i < DEPT_IRQS_NR; i++) { + struct dept_ecxt *e; + + if (before(dt->wgen_enirq[i], wg)) + continue; + + e = eh->ecxt; + add_iecxt(e->class, i, e, false); + } +} + +static void del_dep_rcu(struct rcu_head *rh) +{ + struct dept_dep *d = container_of(rh, struct dept_dep, rh); + + preempt_disable(); + del_dep(d); + preempt_enable(); +} + +/* + * NOTE: Must be called with dept_lock held. + */ +static void disconnect_class(struct dept_class *c) +{ + struct dept_dep *d, *n; + int i; + + list_for_each_entry_safe(d, n, &c->dep_head, dep_node) { + list_del_rcu(&d->dep_node); + list_del_rcu(&d->dep_rev_node); + hash_del_dep(d); + call_rcu(&d->rh, del_dep_rcu); + } + + list_for_each_entry_safe(d, n, &c->dep_rev_head, dep_rev_node) { + list_del_rcu(&d->dep_node); + list_del_rcu(&d->dep_rev_node); + hash_del_dep(d); + call_rcu(&d->rh, del_dep_rcu); + } + + for (i = 0; i < DEPT_IRQS_NR; i++) { + stale_iecxt(iecxt(c, i)); + stale_iwait(iwait(c, i)); + } +} + +/* + * Context control + * ===================================================================== + * Whether a wait is in {hard,soft}-IRQ context or whether + * {hard,soft}-IRQ has been enabled on the way to an event is very + * important to check dependency. All those things should be tracked. + */ + +static unsigned long cur_enirqf(void) +{ + struct dept_task *dt = dept_task(); + int he = dt->hardirqs_enabled; + int se = dt->softirqs_enabled; + + if (he) + return DEPT_HIRQF | (se ? DEPT_SIRQF : 0UL); + return 0UL; +} + +static int cur_irq(void) +{ + if (lockdep_softirq_context(current)) + return DEPT_SIRQ; + if (lockdep_hardirq_context()) + return DEPT_HIRQ; + return DEPT_IRQS_NR; +} + +static unsigned int cur_ctxt_id(void) +{ + struct dept_task *dt = dept_task(); + int irq = cur_irq(); + + /* + * Normal process context + */ + if (irq == DEPT_IRQS_NR) + return 0U; + + return dt->irq_id[irq] | (1UL << irq); +} + +static void enirq_transition(int irq) +{ + struct dept_task *dt = dept_task(); + int i; + + /* + * IRQ can cut in on the way to the event. Used for cross-event + * detection. + * + * wait context event context(ecxt) + * ------------ ------------------- + * wait event + * UPDATE wgen + * observe IRQ enabled + * UPDATE wgen + * keep the wgen locally + * + * on the event + * check the wgen kept + */ + + /* + * Avoid zero wgen. 
+ */ + dt->wgen_enirq[irq] = atomic_inc_return(&wgen) ?: + atomic_inc_return(&wgen); + + for (i = dt->ecxt_held_pos - 1; i >= 0; i--) { + struct dept_ecxt_held *eh; + struct dept_ecxt *e; + + eh = dt->ecxt_held + i; + e = eh->ecxt; + if (e) + add_iecxt(e->class, irq, e, true); + } +} + +static void dept_enirq(unsigned long ip) +{ + struct dept_task *dt = dept_task(); + unsigned long irqf = cur_enirqf(); + int irq; + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + /* + * IRQ ON/OFF transition might happen while Dept is working. + * We cannot handle recursive entrance. Just ingnore it. + * Only transitions outside of Dept will be considered. + */ + if (dt->recursive) + return; + + flags = dept_enter(); + + for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) { + dt->enirq_ip[irq] = ip; + enirq_transition(irq); + } + + dept_exit(flags); +} + +void dept_softirqs_on_ip(unsigned long ip) +{ + /* + * Assumes that it's called with IRQ disabled so that accessing + * current's fields is not racy. + */ + dept_task()->softirqs_enabled = true; + dept_enirq(ip); +} + +void dept_hardirqs_on(void) +{ + /* + * Assumes that it's called with IRQ disabled so that accessing + * current's fields is not racy. + */ + dept_task()->hardirqs_enabled = true; + dept_enirq(_RET_IP_); +} + +void dept_softirqs_off(void) +{ + /* + * Assumes that it's called with IRQ disabled so that accessing + * current's fields is not racy. + */ + dept_task()->softirqs_enabled = false; +} + +void dept_hardirqs_off(void) +{ + /* + * Assumes that it's called with IRQ disabled so that accessing + * current's fields is not racy. + */ + dept_task()->hardirqs_enabled = false; +} + +/* + * Ensure it's the outmost softirq context. + */ +void dept_softirq_enter(void) +{ + struct dept_task *dt = dept_task(); + + dt->irq_id[DEPT_SIRQ] += 1UL << DEPT_IRQS_NR; +} + +/* + * Ensure it's the outmost hardirq context. + */ +void dept_hardirq_enter(void) +{ + struct dept_task *dt = dept_task(); + + dt->irq_id[DEPT_HIRQ] += 1UL << DEPT_IRQS_NR; +} + +void dept_sched_enter(void) +{ + dept_task()->in_sched = true; +} + +void dept_sched_exit(void) +{ + dept_task()->in_sched = false; +} + +/* + * Exposed APIs + * ===================================================================== + */ + +static void clean_classes_cache(struct dept_key *k) +{ + int i; + + for (i = 0; i < DEPT_MAX_SUBCLASSES_CACHE; i++) { + if (!READ_ONCE(k->classes[i])) + continue; + + WRITE_ONCE(k->classes[i], NULL); + } +} + +void dept_map_init(struct dept_map *m, struct dept_key *k, int sub_u, + const char *n) +{ + unsigned long flags; + + if (unlikely(!dept_working())) { + m->nocheck = true; + return; + } + + if (DEPT_WARN_ON(sub_u < 0)) { + m->nocheck = true; + return; + } + + if (DEPT_WARN_ON(sub_u >= DEPT_MAX_SUBCLASSES_USR)) { + m->nocheck = true; + return; + } + + /* + * Allow recursive entrance. + */ + flags = dept_enter_recursive(); + + clean_classes_cache(&m->map_key); + + m->keys = k; + m->sub_u = sub_u; + m->name = n; + m->wgen = 0U; + m->nocheck = !valid_key(k); + + dept_exit_recursive(flags); +} +EXPORT_SYMBOL_GPL(dept_map_init); + +void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u, + const char *n) +{ + unsigned long flags; + + if (unlikely(!dept_working())) { + m->nocheck = true; + return; + } + + /* + * Allow recursive entrance. 
+ */ + flags = dept_enter_recursive(); + + if (k) { + clean_classes_cache(&m->map_key); + m->keys = k; + m->nocheck = !valid_key(k); + } + + if (sub_u >= 0 && sub_u < DEPT_MAX_SUBCLASSES_USR) + m->sub_u = sub_u; + + if (n) + m->name = n; + + m->wgen = 0U; + + dept_exit_recursive(flags); +} +EXPORT_SYMBOL_GPL(dept_map_reinit); + +void dept_map_copy(struct dept_map *to, struct dept_map *from) +{ + if (unlikely(!dept_working())) { + to->nocheck = true; + return; + } + + *to = *from; + + /* + * XXX: 'to' might be in a stack or something. Using the address + * in a stack segment as a key is meaningless. Just ignore the + * case for now. + */ + if (!to->keys) { + to->nocheck = true; + return; + } + + /* + * Since the class cache can be modified concurrently we could + * observe half pointers (64bit arch using 32bit copy insns). + * Therefore clear the caches and take the performance hit. + * + * XXX: Doesn't work well with lockdep_set_class_and_subclass() + * since that relies on cache abuse. + */ + clean_classes_cache(&to->map_key); +} + +static LIST_HEAD(classes); + +static bool within(const void *addr, void *start, unsigned long size) +{ + return addr >= start && addr < start + size; +} + +void dept_free_range(void *start, unsigned int sz) +{ + struct dept_task *dt = dept_task(); + struct dept_class *c, *n; + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) { + DEPT_STOP("Failed to successfully free Dept objects.\n"); + return; + } + + flags = dept_enter(); + + /* + * dept_free_range() should not fail. + * + * FIXME: Should be fixed if dept_free_range() causes deadlock + * with dept_lock(). + */ + while (unlikely(!dept_lock())) + cpu_relax(); + + list_for_each_entry_safe(c, n, &classes, all_node) { + if (!within((void *)c->key, start, sz) && + !within(c->name, start, sz)) + continue; + + hash_del_class(c); + disconnect_class(c); + list_del(&c->all_node); + invalidate_class(c); + + /* + * Actual deletion will happen on the rcu callback + * that has been added in disconnect_class(). + */ + del_class(c); + } + dept_unlock(); + dept_exit(flags); + + /* + * Wait until even lockless hash_lookup_class() for the class + * returns NULL. + */ + might_sleep(); + synchronize_rcu(); +} + +static int sub_id(struct dept_map *m, int e) +{ + return (m ? m->sub_u : 0) + e * DEPT_MAX_SUBCLASSES_USR; +} + +static struct dept_class *check_new_class(struct dept_key *local, + struct dept_key *k, int sub_id, + const char *n, bool sched_map) +{ + struct dept_class *c = NULL; + + if (DEPT_WARN_ON(sub_id >= DEPT_MAX_SUBCLASSES)) + return NULL; + + if (DEPT_WARN_ON(!k)) + return NULL; + + /* + * XXX: Assume that users prevent the map from using if any of + * the cached keys has been invalidated. If not, the cache, + * local->classes should not be used because it would be racy + * with class deletion. 
+ */ + if (local && sub_id < DEPT_MAX_SUBCLASSES_CACHE) + c = READ_ONCE(local->classes[sub_id]); + + if (c) + return c; + + c = lookup_class((unsigned long)k->base + sub_id); + if (c) + goto caching; + + if (unlikely(!dept_lock())) + return NULL; + + c = lookup_class((unsigned long)k->base + sub_id); + if (unlikely(c)) + goto unlock; + + c = new_class(); + if (unlikely(!c)) + goto unlock; + + c->name = n; + c->sched_map = sched_map; + c->sub_id = sub_id; + c->key = (unsigned long)(k->base + sub_id); + hash_add_class(c); + list_add(&c->all_node, &classes); +unlock: + dept_unlock(); +caching: + if (local && sub_id < DEPT_MAX_SUBCLASSES_CACHE) + WRITE_ONCE(local->classes[sub_id], c); + + return c; +} + +/* + * Called between dept_enter() and dept_exit(). + */ +static void __dept_wait(struct dept_map *m, unsigned long w_f, + unsigned long ip, const char *w_fn, int sub_l, + bool sched_sleep, bool sched_map) +{ + int e; + + /* + * Be as conservative as possible. In case of mulitple waits for + * a single dept_map, we are going to keep only the last wait's + * wgen for simplicity - keeping all wgens seems overengineering. + * + * Of course, it might cause missing some dependencies that + * would rarely, probabily never, happen but it helps avoid + * false positive report. + */ + for_each_set_bit(e, &w_f, DEPT_MAX_SUBCLASSES_EVT) { + struct dept_class *c; + struct dept_key *k; + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, + sub_id(m, e), m->name, sched_map); + if (!c) + continue; + + add_wait(c, ip, w_fn, sub_l, sched_sleep); + } +} + +/* + * Called between dept_enter() and dept_exit(). + */ +static void __dept_event(struct dept_map *m, unsigned long e_f, + unsigned long ip, const char *e_fn, + bool sched_map) +{ + struct dept_class *c; + struct dept_key *k; + int e; + + e = find_first_bit(&e_f, DEPT_MAX_SUBCLASSES_EVT); + + if (DEPT_WARN_ON(e >= DEPT_MAX_SUBCLASSES_EVT)) + return; + + /* + * An event is an event. If the caller passed more than single + * event, then warn it and handle the event corresponding to + * the first bit anyway. + */ + DEPT_WARN_ON(1UL << e != e_f); + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, sub_id(m, e), m->name, sched_map); + + if (c && add_ecxt(m, c, 0UL, NULL, e_fn, 0)) { + do_event(m, c, READ_ONCE(m->wgen), ip); + pop_ecxt(m, c); + } +} + +void dept_wait(struct dept_map *m, unsigned long w_f, + unsigned long ip, const char *w_fn, int sub_l) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) + return; + + if (m->nocheck) + return; + + flags = dept_enter(); + + __dept_wait(m, w_f, ip, w_fn, sub_l, false, false); + + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_wait); + +void dept_stage_wait(struct dept_map *m, struct dept_key *k, + unsigned long ip, const char *w_fn) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + if (m && m->nocheck) + return; + + /* + * Either m or k should be passed. Which means Dept relies on + * either its own map or the caller's position in the code when + * determining its class. + */ + if (DEPT_WARN_ON(!m && !k)) + return; + + /* + * Allow recursive entrance. + */ + flags = dept_enter_recursive(); + + /* + * Ensure the outmost dept_stage_wait() works. + */ + if (dt->stage_m.keys) + goto exit; + + if (m) { + dt->stage_m = *m; + + /* + * Ensure dt->stage_m.keys != NULL and it works with the + * map's map_key, not stage_m's one when ->keys == NULL. 
+ */ + if (!m->keys) + dt->stage_m.keys = &m->map_key; + } else { + dt->stage_m.name = w_fn; + dt->stage_sched_map = true; + } + + /* + * dept_map_reinit() includes WRITE_ONCE(->wgen, 0U) that + * effectively disables the map just in case real sleep won't + * happen. dept_request_event_wait_commit() will enable it. + */ + dept_map_reinit(&dt->stage_m, k, -1, NULL); + + dt->stage_w_fn = w_fn; + dt->stage_ip = ip; +exit: + dept_exit_recursive(flags); +} +EXPORT_SYMBOL_GPL(dept_stage_wait); + +static void __dept_clean_stage(struct dept_task *dt) +{ + memset(&dt->stage_m, 0x0, sizeof(struct dept_map)); + dt->stage_sched_map = false; + dt->stage_w_fn = NULL; + dt->stage_ip = 0UL; +} + +void dept_clean_stage(void) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + /* + * Allow recursive entrance. + */ + flags = dept_enter_recursive(); + __dept_clean_stage(dt); + dept_exit_recursive(flags); +} +EXPORT_SYMBOL_GPL(dept_clean_stage); + +/* + * Always called from __schedule(). + */ +void dept_request_event_wait_commit(void) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + unsigned int wg; + unsigned long ip; + const char *w_fn; + bool sched_map; + + if (unlikely(!dept_working())) + return; + + /* + * It's impossible that __schedule() is called while Dept is + * working that already disabled IRQ at the entrance. + */ + if (DEPT_WARN_ON(dt->recursive)) + return; + + flags = dept_enter(); + + /* + * Checks if current has staged a wait. + */ + if (!dt->stage_m.keys) + goto exit; + + w_fn = dt->stage_w_fn; + ip = dt->stage_ip; + sched_map = dt->stage_sched_map; + + /* + * Avoid zero wgen. + */ + wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen); + WRITE_ONCE(dt->stage_m.wgen, wg); + + __dept_wait(&dt->stage_m, 1UL, ip, w_fn, 0, true, sched_map); +exit: + dept_exit(flags); +} + +/* + * Always called from try_to_wake_up(). + */ +void dept_stage_event(struct task_struct *requestor, unsigned long ip) +{ + struct dept_task *dt = dept_task(); + struct dept_task *dt_req = &requestor->dept_task; + unsigned long flags; + struct dept_map m; + bool sched_map; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) + return; + + flags = dept_enter(); + + /* + * Serializing is unnecessary as long as it always comes from + * try_to_wake_up(). + */ + m = dt_req->stage_m; + sched_map = dt_req->stage_sched_map; + __dept_clean_stage(dt_req); + + /* + * ->stage_m.keys should not be NULL if it's in use. Should + * make sure that it's not NULL when staging a valid map. + */ + if (!m.keys) + goto exit; + + __dept_event(&m, 1UL, ip, "try_to_wake_up", sched_map); +exit: + dept_exit(flags); +} + +/* + * Modifies the latest ecxt corresponding to m and e_f. + */ +void dept_map_ecxt_modify(struct dept_map *m, unsigned long e_f, + struct dept_key *new_k, unsigned long new_e_f, + unsigned long new_ip, const char *new_c_fn, + const char *new_e_fn, int new_sub_l) +{ + struct dept_task *dt = dept_task(); + struct dept_ecxt_held *eh; + struct dept_class *c; + struct dept_key *k; + unsigned long flags; + int pos = -1; + int new_e; + int e; + + if (unlikely(!dept_working())) + return; + + /* + * XXX: Couldn't handle re-enterance cases. Ingore it for now. + */ + if (dt->recursive) + return; + + /* + * Should go ahead no matter whether ->nocheck == true or not + * because ->nocheck value can be changed within the ecxt area + * delimitated by dept_ecxt_enter() and dept_ecxt_exit(). 
+ */ + + flags = dept_enter(); + + for_each_set_bit(e, &e_f, DEPT_MAX_SUBCLASSES_EVT) { + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, + sub_id(m, e), m->name, false); + if (!c) + continue; + + /* + * When it found an ecxt for any event in e_f, done. + */ + pos = find_ecxt_pos(m, c, true); + if (pos != -1) + break; + } + + if (unlikely(pos == -1)) + goto exit; + + eh = dt->ecxt_held + pos; + new_sub_l = new_sub_l >= 0 ? new_sub_l : eh->sub_l; + + new_e = find_first_bit(&new_e_f, DEPT_MAX_SUBCLASSES_EVT); + + if (new_e < DEPT_MAX_SUBCLASSES_EVT) + /* + * Let it work with the first bit anyway. + */ + DEPT_WARN_ON(1UL << new_e != new_e_f); + else + new_e = e; + + pop_ecxt(m, c); + + /* + * Apply the key to the map. + */ + if (new_k) + dept_map_reinit(m, new_k, -1, NULL); + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, sub_id(m, new_e), m->name, false); + + if (c && add_ecxt(m, c, new_ip, new_c_fn, new_e_fn, new_sub_l)) + goto exit; + + /* + * Successfully pop_ecxt()ed but failed to add_ecxt(). + */ + dt->missing_ecxt++; +exit: + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_map_ecxt_modify); + +void dept_ecxt_enter(struct dept_map *m, unsigned long e_f, unsigned long ip, + const char *c_fn, const char *e_fn, int sub_l) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + struct dept_class *c; + struct dept_key *k; + int e; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) { + dt->missing_ecxt++; + return; + } + + /* + * Should go ahead no matter whether ->nocheck == true or not + * because ->nocheck value can be changed within the ecxt area + * delimitated by dept_ecxt_enter() and dept_ecxt_exit(). + */ + + flags = dept_enter(); + + e = find_first_bit(&e_f, DEPT_MAX_SUBCLASSES_EVT); + + if (e >= DEPT_MAX_SUBCLASSES_EVT) + goto missing_ecxt; + + /* + * An event is an event. If the caller passed more than single + * event, then warn it and handle the event corresponding to + * the first bit anyway. + */ + DEPT_WARN_ON(1UL << e != e_f); + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, sub_id(m, e), m->name, false); + + if (c && add_ecxt(m, c, ip, c_fn, e_fn, sub_l)) + goto exit; +missing_ecxt: + dt->missing_ecxt++; +exit: + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_ecxt_enter); + +bool dept_ecxt_holding(struct dept_map *m, unsigned long e_f) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + bool ret = false; + int e; + + if (unlikely(!dept_working())) + return false; + + if (dt->recursive) + return false; + + flags = dept_enter(); + + for_each_set_bit(e, &e_f, DEPT_MAX_SUBCLASSES_EVT) { + struct dept_class *c; + struct dept_key *k; + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, + sub_id(m, e), m->name, false); + if (!c) + continue; + + if (find_ecxt_pos(m, c, true) != -1) { + ret = true; + break; + } + } + + dept_exit(flags); + + return ret; +} +EXPORT_SYMBOL_GPL(dept_ecxt_holding); + +void dept_request_event(struct dept_map *m) +{ + unsigned long flags; + unsigned int wg; + + if (unlikely(!dept_working())) + return; + + if (m->nocheck) + return; + + /* + * Allow recursive entrance. + */ + flags = dept_enter_recursive(); + + /* + * Avoid zero wgen. 
+ */ + wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen); + WRITE_ONCE(m->wgen, wg); + + dept_exit_recursive(flags); +} +EXPORT_SYMBOL_GPL(dept_request_event); + +void dept_event(struct dept_map *m, unsigned long e_f, + unsigned long ip, const char *e_fn) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + if (m->nocheck) + return; + + if (dt->recursive) { + /* + * Dept won't work with this even though an event + * context has been asked. Don't make it confused at + * handling the event. Disable it until the next. + */ + WRITE_ONCE(m->wgen, 0U); + return; + } + + flags = dept_enter(); + + __dept_event(m, e_f, ip, e_fn, false); + + /* + * Keep the map diabled until the next sleep. + */ + WRITE_ONCE(m->wgen, 0U); + + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_event); + +void dept_ecxt_exit(struct dept_map *m, unsigned long e_f, + unsigned long ip) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + int e; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) { + dt->missing_ecxt--; + return; + } + + /* + * Should go ahead no matter whether ->nocheck == true or not + * because ->nocheck value can be changed within the ecxt area + * delimitated by dept_ecxt_enter() and dept_ecxt_exit(). + */ + + flags = dept_enter(); + + for_each_set_bit(e, &e_f, DEPT_MAX_SUBCLASSES_EVT) { + struct dept_class *c; + struct dept_key *k; + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, + sub_id(m, e), m->name, false); + if (!c) + continue; + + /* + * When it found an ecxt for any event in e_f, done. + */ + if (pop_ecxt(m, c)) + goto exit; + } + + dt->missing_ecxt--; +exit: + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_ecxt_exit); + +void dept_task_exit(struct task_struct *t) +{ + struct dept_task *dt = &t->dept_task; + int i; + + if (unlikely(!dept_working())) + return; + + raw_local_irq_disable(); + + if (dt->stack) + put_stack(dt->stack); + + for (i = 0; i < dt->ecxt_held_pos; i++) { + if (dt->ecxt_held[i].class) + put_class(dt->ecxt_held[i].class); + if (dt->ecxt_held[i].ecxt) + put_ecxt(dt->ecxt_held[i].ecxt); + } + + for (i = 0; i < DEPT_MAX_WAIT_HIST; i++) + if (dt->wait_hist[i].wait) + put_wait(dt->wait_hist[i].wait); + + dt->task_exit = true; + dept_off(); + + raw_local_irq_enable(); +} + +void dept_task_init(struct task_struct *t) +{ + memset(&t->dept_task, 0x0, sizeof(struct dept_task)); +} + +void dept_key_init(struct dept_key *k) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + int sub_id; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) { + DEPT_STOP("Key initialization fails.\n"); + return; + } + + flags = dept_enter(); + + clean_classes_cache(k); + + /* + * dept_key_init() should not fail. + * + * FIXME: Should be fixed if dept_key_init() causes deadlock + * with dept_lock(). 
+ */ + while (unlikely(!dept_lock())) + cpu_relax(); + + for (sub_id = 0; sub_id < DEPT_MAX_SUBCLASSES; sub_id++) { + struct dept_class *c; + + c = lookup_class((unsigned long)k->base + sub_id); + if (!c) + continue; + + DEPT_STOP("The class(%s/%d) has not been removed.\n", + c->name, sub_id); + break; + } + + dept_unlock(); + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_key_init); + +void dept_key_destroy(struct dept_key *k) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + int sub_id; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive == 1 && dt->task_exit) { + /* + * Need to allow to go ahead in this case where + * ->recursive has been set to 1 by dept_off() in + * dept_task_exit() and ->task_exit has been set to + * true in dept_task_exit(). + */ + } else if (dt->recursive) { + DEPT_STOP("Key destroying fails.\n"); + return; + } + + flags = dept_enter(); + + /* + * dept_key_destroy() should not fail. + * + * FIXME: Should be fixed if dept_key_destroy() causes deadlock + * with dept_lock(). + */ + while (unlikely(!dept_lock())) + cpu_relax(); + + for (sub_id = 0; sub_id < DEPT_MAX_SUBCLASSES; sub_id++) { + struct dept_class *c; + + c = lookup_class((unsigned long)k->base + sub_id); + if (!c) + continue; + + hash_del_class(c); + disconnect_class(c); + list_del(&c->all_node); + invalidate_class(c); + + /* + * Actual deletion will happen on the rcu callback + * that has been added in disconnect_class(). + */ + del_class(c); + } + + dept_unlock(); + dept_exit(flags); + + /* + * Wait until even lockless hash_lookup_class() for the class + * returns NULL. + */ + might_sleep(); + synchronize_rcu(); +} +EXPORT_SYMBOL_GPL(dept_key_destroy); + +static void move_llist(struct llist_head *to, struct llist_head *from) +{ + struct llist_node *first = llist_del_all(from); + struct llist_node *last; + + if (!first) + return; + + for (last = first; last->next; last = last->next); + llist_add_batch(first, last, to); +} + +static void migrate_per_cpu_pool(void) +{ + const int boot_cpu = 0; + int i; + + /* + * The boot CPU has been using the temperal local pool so far. + * From now on that per_cpu areas have been ready, use the + * per_cpu local pool instead. + */ + DEPT_WARN_ON(smp_processor_id() != boot_cpu); + for (i = 0; i < OBJECT_NR; i++) { + struct llist_head *from; + struct llist_head *to; + + from = &pool[i].boot_pool; + to = per_cpu_ptr(pool[i].lpool, boot_cpu); + move_llist(to, from); + } +} + +#define B2KB(B) ((B) / 1024) + +/* + * Should be called after setup_per_cpu_areas() and before no non-boot + * CPUs have been on. + */ +void __init dept_init(void) +{ + size_t mem_total = 0; + + local_irq_disable(); + dept_per_cpu_ready = 1; + migrate_per_cpu_pool(); + local_irq_enable(); + +#define HASH(id, bits) BUILD_BUG_ON(1 << (bits) <= 0); + #include "dept_hash.h" +#undef HASH +#define OBJECT(id, nr) mem_total += sizeof(struct dept_##id) * nr; + #include "dept_object.h" +#undef OBJECT +#define HASH(id, bits) mem_total += sizeof(struct hlist_head) * (1 << (bits)); + #include "dept_hash.h" +#undef HASH + + pr_info("DEPendency Tracker: Copyright (c) 2020 LG Electronics, Inc., Byungchul Park\n"); + pr_info("... DEPT_MAX_STACK_ENTRY: %d\n", DEPT_MAX_STACK_ENTRY); + pr_info("... DEPT_MAX_WAIT_HIST : %d\n", DEPT_MAX_WAIT_HIST); + pr_info("... DEPT_MAX_ECXT_HELD : %d\n", DEPT_MAX_ECXT_HELD); + pr_info("... DEPT_MAX_SUBCLASSES : %d\n", DEPT_MAX_SUBCLASSES); +#define OBJECT(id, nr) \ + pr_info("... 
memory used by %s: %zu KB\n", \ + #id, B2KB(sizeof(struct dept_##id) * nr)); + #include "dept_object.h" +#undef OBJECT +#define HASH(id, bits) \ + pr_info("... hash list head used by %s: %zu KB\n", \ + #id, B2KB(sizeof(struct hlist_head) * (1 << (bits)))); + #include "dept_hash.h" +#undef HASH + pr_info("... total memory used by objects and hashs: %zu KB\n", B2KB(mem_total)); + pr_info("... per task memory footprint: %zu bytes\n", sizeof(struct dept_task)); +} diff --git a/kernel/dependency/dept_hash.h b/kernel/dependency/dept_hash.h new file mode 100644 index 000000000000..fd85aab1fdfb --- /dev/null +++ b/kernel/dependency/dept_hash.h @@ -0,0 +1,10 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * HASH(id, bits) + * + * id : Id for the object of struct dept_##id. + * bits: 1UL << bits is the hash table size. + */ + +HASH(dep, 12) +HASH(class, 12) diff --git a/kernel/dependency/dept_object.h b/kernel/dependency/dept_object.h new file mode 100644 index 000000000000..0b7eb16fe9fb --- /dev/null +++ b/kernel/dependency/dept_object.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * OBJECT(id, nr) + * + * id: Id for the object of struct dept_##id. + * nr: # of the object that should be kept in the pool. + */ + +OBJECT(dep, 1024 * 8) +OBJECT(class, 1024 * 8) +OBJECT(stack, 1024 * 32) +OBJECT(ecxt, 1024 * 16) +OBJECT(wait, 1024 * 32) diff --git a/kernel/exit.c b/kernel/exit.c index aedc0832c9f4..32b7f551674b 100644 --- a/kernel/exit.c +++ b/kernel/exit.c @@ -917,6 +917,7 @@ void __noreturn do_exit(long code) exit_tasks_rcu_finish(); lockdep_free_task(tsk); + dept_task_exit(tsk); do_task_dead(); } diff --git a/kernel/fork.c b/kernel/fork.c index 10917c3e1f03..2f1f5dbec154 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -99,6 +99,7 @@ #include #include #include +#include #include #include @@ -2455,6 +2456,7 @@ __latent_entropy struct task_struct *copy_process( #ifdef CONFIG_LOCKDEP lockdep_init_task(p); #endif + dept_task_init(p); #ifdef CONFIG_DEBUG_MUTEXES p->blocked_on = NULL; /* not blocked yet */ diff --git a/kernel/module/main.c b/kernel/module/main.c index 98fedfdb8db5..507c73b10258 100644 --- a/kernel/module/main.c +++ b/kernel/module/main.c @@ -1228,12 +1228,14 @@ static void free_mod_mem(struct module *mod) /* Free lock-classes; relies on the preceding sync_rcu(). 
*/ lockdep_free_key_range(mod_mem->base, mod_mem->size); + dept_free_range(mod_mem->base, mod_mem->size); if (mod_mem->size) module_memory_free(mod_mem->base, type); } /* MOD_DATA hosts mod, so free it at last */ lockdep_free_key_range(mod->mem[MOD_DATA].base, mod->mem[MOD_DATA].size); + dept_free_range(mod->mem[MOD_DATA].base, mod->mem[MOD_DATA].size); module_memory_free(mod->mem[MOD_DATA].base, MOD_DATA); } @@ -3020,6 +3022,8 @@ static int load_module(struct load_info *info, const char __user *uargs, for_class_mod_mem_type(type, core_data) { lockdep_free_key_range(mod->mem[type].base, mod->mem[type].size); + dept_free_range(mod->mem[type].base, + mod->mem[type].size); } module_deallocate(mod, info); diff --git a/kernel/sched/core.c b/kernel/sched/core.c index a708d225c28e..e2a7ee076a58 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -64,6 +64,7 @@ #include #include #include +#include #ifdef CONFIG_PREEMPT_DYNAMIC # ifdef CONFIG_GENERIC_ENTRY @@ -4197,6 +4198,8 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags) guard(preempt)(); int cpu, success = 0; + dept_stage_event(p, _RET_IP_); + if (p == current) { /* * We're waking current, this means 'p->on_rq' and 'task_cpu(p) @@ -6578,6 +6581,12 @@ static void __sched notrace __schedule(unsigned int sched_mode) rq = cpu_rq(cpu); prev = rq->curr; + prev_state = READ_ONCE(prev->__state); + if (sched_mode != SM_PREEMPT && prev_state & TASK_NORMAL) + dept_request_event_wait_commit(); + + dept_sched_enter(); + schedule_debug(prev, !!sched_mode); if (sched_feat(HRTICK) || sched_feat(HRTICK_DL)) @@ -6691,6 +6700,7 @@ static void __sched notrace __schedule(unsigned int sched_mode) __balance_callbacks(rq); raw_spin_rq_unlock_irq(rq); } + dept_sched_exit(); } void __noreturn do_task_dead(void) diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 4405f81248fb..9602f41ad8e8 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -1285,6 +1285,33 @@ config DEBUG_PREEMPT menu "Lock Debugging (spinlocks, mutexes, etc...)" +config DEPT + bool "Dependency tracking (EXPERIMENTAL)" + depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT + select DEBUG_SPINLOCK + select DEBUG_MUTEXES + select DEBUG_RT_MUTEXES if RT_MUTEXES + select DEBUG_RWSEMS + select DEBUG_WW_MUTEX_SLOWPATH + select DEBUG_LOCK_ALLOC + select TRACE_IRQFLAGS + select STACKTRACE + select FRAME_POINTER if !MIPS && !PPC && !ARM && !S390 && !MICROBLAZE && !ARC && !X86 + select KALLSYMS + select KALLSYMS_ALL + select PROVE_LOCKING + default n + help + Check dependencies between wait and event and report it if + deadlock possibility has been detected. Multiple reports are + allowed if there are more than a single problem. + + This feature is considered EXPERIMENTAL that might produce + false positive reports because new dependencies start to be + tracked, that have never been tracked before. It's worth + noting, to mitigate the impact by the false positives, multi + reporting has been supported. 
+ config LOCK_DEBUGGING_SUPPORT bool depends on TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c index 6f6a5fc85b42..2558ae57b117 100644 --- a/lib/locking-selftest.c +++ b/lib/locking-selftest.c @@ -1398,6 +1398,8 @@ static void reset_locks(void) local_irq_disable(); lockdep_free_key_range(&ww_lockdep.acquire_key, 1); lockdep_free_key_range(&ww_lockdep.mutex_key, 1); + dept_free_range(&ww_lockdep.acquire_key, 1); + dept_free_range(&ww_lockdep.mutex_key, 1); I1(A); I1(B); I1(C); I1(D); I1(X1); I1(X2); I1(Y1); I1(Y2); I1(Z1); I1(Z2); From patchwork Wed Jan 24 11:59:14 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529097 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 4693D60DEF; Wed, 24 Jan 2024 11:59:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097602; cv=none; b=DV7ygf//B/cgq2z2SSEnKUeHLIY5pe2tYpNwD/tJtHkWCksE2QQ6u7OX5kb90xXNUCXh2Csvor2xYuH3FPUYi8TiZ06ZRp/0EYf8klQ8J6LD9+T6aAzn3KFJz02JZW0JO9tp7P+9CRxcDTiLkCPxpt7UJb4TywW1OjzR/Y9VuHA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097602; c=relaxed/simple; bh=yYyo9J9LbrfncpB2CWVjqJhM6aD/eyXV6B0l1bR9NkI=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=c4Clt3zVw57xbYR/P7R5NXy6kGYspTnKPfkQ7QybycOQ+fQXOCAr55UZUplQu0O8pmzYFdp15x7FCVfaK8EGqbsuD1HWb9cUctaOBnBoPQg8CaVuRG/9aUwGYB6JQF/Pg2c5mWrLTIh+Ym7TGt98A3/2uT+NJHs5hWSjXsIy6Zo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-71-65b0fbb43154 From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 03/26] dept: Add single event 
dependency tracker APIs Date: Wed, 24 Jan 2024 20:59:14 +0900 Message-Id: <20240124115938.80132-4-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Wrapped the base APIs for easier annotation on wait and event. Start by supporting waiters on each single event. More general support for multiple events is future work. Do more when the need arises. How to annotate (the simplest way): 1. Initialize a map for the interesting wait. /* * Recommended to place along with the wait instance. */ struct dept_map my_wait; /* * Recommended to place in the initialization code. */ sdt_map_init(&my_wait); 2. Place the following at the wait code. sdt_wait(&my_wait); 3. Place the following at the event code. sdt_event(&my_wait); That's it!
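For illustration only (not part of the patch): a minimal sketch of how the three steps above might look on a hypothetical producer/consumer pair. The names my_data, data_wq and data_ready are made up for this example; only sdt_map_init(), sdt_wait() and sdt_event() come from this series.

#include <linux/wait.h>
#include <linux/dept_sdt.h>

struct my_data {
	wait_queue_head_t	data_wq;
	bool			data_ready;
	struct dept_map		data_wait;	/* step 1: map kept with the wait instance */
};

static void my_data_init(struct my_data *d)
{
	init_waitqueue_head(&d->data_wq);
	d->data_ready = false;
	sdt_map_init(&d->data_wait);		/* step 1: initialize the map */
}

static void my_data_consume(struct my_data *d)
{
	sdt_wait(&d->data_wait);		/* step 2: annotate the wait */
	wait_event(d->data_wq, d->data_ready);
}

static void my_data_produce(struct my_data *d)
{
	d->data_ready = true;
	sdt_event(&d->data_wait);		/* step 3: annotate the event */
	wake_up(&d->data_wq);
}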
Signed-off-by: Byungchul Park --- include/linux/dept_sdt.h | 62 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 62 insertions(+) create mode 100644 include/linux/dept_sdt.h diff --git a/include/linux/dept_sdt.h b/include/linux/dept_sdt.h new file mode 100644 index 000000000000..12a793b90c7e --- /dev/null +++ b/include/linux/dept_sdt.h @@ -0,0 +1,62 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Single-event Dependency Tracker + * + * Started by Byungchul Park : + * + * Copyright (c) 2020 LG Electronics, Inc., Byungchul Park + */ + +#ifndef __LINUX_DEPT_SDT_H +#define __LINUX_DEPT_SDT_H + +#include +#include + +#ifdef CONFIG_DEPT +#define sdt_map_init(m) \ + do { \ + static struct dept_key __key; \ + dept_map_init(m, &__key, 0, #m); \ + } while (0) + +#define sdt_map_init_key(m, k) dept_map_init(m, k, 0, #m) + +#define sdt_wait(m) \ + do { \ + dept_request_event(m); \ + dept_wait(m, 1UL, _THIS_IP_, __func__, 0); \ + } while (0) + +/* + * sdt_might_sleep() and its family will be committed in __schedule() + * when it actually gets to __schedule(). Both dept_request_event() and + * dept_wait() will be performed on the commit. + */ + +/* + * Use the code location as the class key if an explicit map is not used. + */ +#define sdt_might_sleep_start(m) \ + do { \ + struct dept_map *__m = m; \ + static struct dept_key __key; \ + dept_stage_wait(__m, __m ? NULL : &__key, _THIS_IP_, __func__);\ + } while (0) + +#define sdt_might_sleep_end() dept_clean_stage() + +#define sdt_ecxt_enter(m) dept_ecxt_enter(m, 1UL, _THIS_IP_, "start", "event", 0) +#define sdt_event(m) dept_event(m, 1UL, _THIS_IP_, __func__) +#define sdt_ecxt_exit(m) dept_ecxt_exit(m, 1UL, _THIS_IP_) +#else /* !CONFIG_DEPT */ +#define sdt_map_init(m) do { } while (0) +#define sdt_map_init_key(m, k) do { (void)(k); } while (0) +#define sdt_wait(m) do { } while (0) +#define sdt_might_sleep_start(m) do { } while (0) +#define sdt_might_sleep_end() do { } while (0) +#define sdt_ecxt_enter(m) do { } while (0) +#define sdt_event(m) do { } while (0) +#define sdt_ecxt_exit(m) do { } while (0) +#endif +#endif /* __LINUX_DEPT_SDT_H */ From patchwork Wed Jan 24 11:59:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529098 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 4502960DC5; Wed, 24 Jan 2024 11:59:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097602; cv=none; b=HIVPmwgSzZ2+zKGoLEwkXvrHDM0tnKUWtmXUNxMHX3/SrIiKpJC27CN1r3cVuH5Loz4amOVHAtCucWCgJsHlWRDjUhvXV+hrRAiL9dJcHhaVi3P44zd6NLHE9OcQLPwARkytG8ZXafCNMrn6cbE9FCfmzsgt3jHPfv6Vk6Mjmas= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097602; c=relaxed/simple; bh=ZnDomY68etRURkOftpc7pgxDDdwCW9QdH8k+FPh35/w=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=sQtIAxlx/a1szzowXynbed5Ad+8U07ne5Mii8PiUoVsqWdETMy/YChIX7NlV0CosX33ocBm6ispEXIRR36GwJS4dl3z26UoMw4R6shoAoksqoPi4oH8NonFC6y3Ej3ZuYpmZf8VJRVs2goxcKWLAYbgLZe7Ijqzq6r9iXdc0GY0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: 
smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-82-65b0fbb4b9cc From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 04/26] dept: Add lock dependency tracker APIs Date: Wed, 24 Jan 2024 20:59:15 +0900 Message-Id: <20240124115938.80132-5-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0hTcRjG+5/rtloc1sWjBcVKutEVq5eIMLodCKEoghKqoYccXptpGUQr tzIvpXmZN0Kt1piaaxp4WyxFzUS3ctWyaWZRmVuWtqFOrWn15eUHz8OP58MrwCUNZIBAHnOO V8TIoqSUiBC55pWuf+w18JuKMiSQlb4J3L9SCCiuqqDA+rAcQUXNFQwGWw7AG48TgbfTgoMm 14qg9EMvDjWtfQhMuqsUdH+aDzb3MAXtuWkUJN+touDF0CQGjrzbGJQbQ6AjswwD8/gXAjSD FBRpkjHf+YrBuFZPg1YZCAO6QhomP2yG9r7XJJh61kHBHQcFjaZ2AlprBzDori+moK/iNwkd rc8IsGZlkFD5vYyCIY8WB617mIaX5hIMDCqf6NroNAltGWYMrt17hIHtbQOCJyn9GBgrXlPQ 7HZiUG3MxWHiQQuCgZsuGtTp4zQUXbmJIE2dR4Blqo0ElWMreMeKqeAdXLNzGOdU1ec5k6eE 4J6XsVxdYS/NqZ700FyJMYGr1q3l7jYOYlzpiJvkjPobFGccuU1zqS4bxn3v6qK5Z/legvtk 02CHlpwQ7Qzno+SJvGLjrtOiCJVvXdxXvwv3zUqkROWSVCQUsEwQ+y1liPjP171p+AxTzCrW bh+f5YXMcrY64zOZikQCnLk+l9X96KRmggXMbnYqy4lmmGAC2Zbu97MiMbOVzW77iP5Kl7Hl BvOsSMhsYysLemY7El+nX3+LnpGyTLKQza7V/Fvhzz7V2YlMJC5Bc/RIIo9JjJbJo4I2RCTF yC9sCIuNNiLfS2kvTYbWohHrkSbECJB0njhYX8VLSFlifFJ0E2IFuHSh2O7/kJeIw2VJF3lF 7ClFQhQf34SWCAipn3iL53y4hDkjO8dH8nwcr/ifYgJhgBIdPdjbOm1Vv9u+0ikMK1C6Vw9Y Jppf1ZR5dmlyfoaZL9e9qu2cCs68lLY4dqzj7PDJ56Ia0DS6jSGW7YjdW9cWid/yXzrx0nDY sT/f9N5xPKJ+TWWuSx2aE9I/HWAvVMd2EbZJ/4/HwlyRKUWjCXGj2WM/r+6797Z034pIxaLL e6REfIRs81pcES/7Aya/SYBOAwAA X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0hTcRjG+//P1dXqsKROFlQrFYosq9VLRrcvHroR9CEqolYdcjlnbHmt YObKcmnOUstLeMkl874JZWoMRW1dLZeZmLekFG9hTbLZZRp9efnB8/Dj+fCyhCyH8mFVmvOi VqNUy2kJKdkfFL+myl0hrmsr2wymG+vA9f0aCdnlJTS0lBUjKKmKwzDYGAzvJ4YRuF++JiAj rQVBXu9HAqqauhDUFV2mobV/LjhdYzQ40ow0xBeU0/BmaApDZ3oqhmLrPnieko/BPvmFhIxB GrIy4rHnDGCYNFsYMOt9oa8ok4Gp3kBwdLVR0JDjoKCuYzXcvddJQ22dg4SmR30YWh9n09BV 8oeC501PSWgxJVFQOppPw9CEmQCza4yBt/ZcDBUGj+3qt98UNCfZMVy9X4nB+aEGwZNrPRis JW00NLiGMdisaQT8fNCIoC95hIErNyYZyIpLRmC8kk7C61/NFBg6FeD+kU3vCBIahscIwWCL EuomcknhWT4vVGd+ZATDkw5GyLVGCLaiVUJB7SAW8sZdlGC1XKcF63gqIySOOLEw+uoVIzy9 
4yaFfmcGPrDkiGTraVGtihS1a7edkIQYPOvODSyMLrTrkR4VyxKRF8tzG/kEt5GYZprz59vb J2fYm1vG25I+U4lIwhJcwmy+6OtLejqYz+3kf5mG0TSTnC/f2NpNTrOUU/C3mj+hf9KlfHGF fUbkxW3iS+92zHRknk6P5SaTgiS5aJYFeas0kWFKlVoRoAsNidGoogNOhYdZkedpzJemTI/Q 99bgesSxSD5HusNSLsooZaQuJqwe8Swh95a2LyoTZdLTyphYURt+XBuhFnX1aDFLyhdKdx8S T8i4M8rzYqgonhO1/1PMevno0UnjhmOBqpDy5Tma0abIhoB36akG27fseVOVK2oc1Q+To6pc g2xn7Elb5vqgFt+BlWce+k8sGM3Uz4/oTjk8vr1acXDkaPOCdzU3w26rw5dnbSk0bdyTJYsa 69r7Is7v64WLBfA+QhF6K6EWGzXBvs6E6AeNu9hY/yNnL/ttSfZLPysndSHKwFWEVqf8CxVQ taswAwAA X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Wrapped the base APIs for easier annotation on typical lock. Signed-off-by: Byungchul Park --- include/linux/dept_ldt.h | 77 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 77 insertions(+) create mode 100644 include/linux/dept_ldt.h diff --git a/include/linux/dept_ldt.h b/include/linux/dept_ldt.h new file mode 100644 index 000000000000..062613e89fc3 --- /dev/null +++ b/include/linux/dept_ldt.h @@ -0,0 +1,77 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Lock Dependency Tracker + * + * Started by Byungchul Park : + * + * Copyright (c) 2020 LG Electronics, Inc., Byungchul Park + */ + +#ifndef __LINUX_DEPT_LDT_H +#define __LINUX_DEPT_LDT_H + +#include + +#ifdef CONFIG_DEPT +#define LDT_EVT_L 1UL +#define LDT_EVT_R 2UL +#define LDT_EVT_W 1UL +#define LDT_EVT_RW (LDT_EVT_R | LDT_EVT_W) +#define LDT_EVT_ALL (LDT_EVT_L | LDT_EVT_RW) + +#define ldt_init(m, k, su, n) dept_map_init(m, k, su, n) +#define ldt_lock(m, sl, t, n, i) \ + do { \ + if (n) \ + dept_ecxt_enter_nokeep(m); \ + else if (t) \ + dept_ecxt_enter(m, LDT_EVT_L, i, "trylock", "unlock", sl);\ + else { \ + dept_wait(m, LDT_EVT_L, i, "lock", sl); \ + dept_ecxt_enter(m, LDT_EVT_L, i, "lock", "unlock", sl);\ + } \ + } while (0) + +#define ldt_rlock(m, sl, t, n, i, q) \ + do { \ + if (n) \ + dept_ecxt_enter_nokeep(m); \ + else if (t) \ + dept_ecxt_enter(m, LDT_EVT_R, i, "read_trylock", "read_unlock", sl);\ + else { \ + dept_wait(m, q ? 
LDT_EVT_RW : LDT_EVT_W, i, "read_lock", sl);\ + dept_ecxt_enter(m, LDT_EVT_R, i, "read_lock", "read_unlock", sl);\ + } \ + } while (0) + +#define ldt_wlock(m, sl, t, n, i) \ + do { \ + if (n) \ + dept_ecxt_enter_nokeep(m); \ + else if (t) \ + dept_ecxt_enter(m, LDT_EVT_W, i, "write_trylock", "write_unlock", sl);\ + else { \ + dept_wait(m, LDT_EVT_RW, i, "write_lock", sl); \ + dept_ecxt_enter(m, LDT_EVT_W, i, "write_lock", "write_unlock", sl);\ + } \ + } while (0) + +#define ldt_unlock(m, i) dept_ecxt_exit(m, LDT_EVT_ALL, i) + +#define ldt_downgrade(m, i) \ + do { \ + if (dept_ecxt_holding(m, LDT_EVT_W)) \ + dept_map_ecxt_modify(m, LDT_EVT_W, NULL, LDT_EVT_R, i, "downgrade", "read_unlock", -1);\ + } while (0) + +#define ldt_set_class(m, n, k, sl, i) dept_map_ecxt_modify(m, LDT_EVT_ALL, k, 0UL, i, "lock_set_class", "(any)unlock", sl) +#else /* !CONFIG_DEPT */ +#define ldt_init(m, k, su, n) do { (void)(k); } while (0) +#define ldt_lock(m, sl, t, n, i) do { } while (0) +#define ldt_rlock(m, sl, t, n, i, q) do { } while (0) +#define ldt_wlock(m, sl, t, n, i) do { } while (0) +#define ldt_unlock(m, i) do { } while (0) +#define ldt_downgrade(m, i) do { } while (0) +#define ldt_set_class(m, n, k, sl, i) do { } while (0) +#endif +#endif /* __LINUX_DEPT_LDT_H */ From patchwork Wed Jan 24 11:59:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529099 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id D46D260DF0; Wed, 24 Jan 2024 11:59:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097603; cv=none; b=plhFbhiMM2KEP5IQER70whya2oaNJ4laWeElnglb9mjIeFXJnKSmk1k3QczLLOxW+Pj9IcELprBLsoqSOGuXyOAuEQe0T/1oIPAr4oYwKB5ZQQj6DMrMzHs590V2TO5T+nFqaR0xOi6/t/P/9VYpcaWT+O46l/7VN37UYVw29ao= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097603; c=relaxed/simple; bh=QWHRCvl6WHVJCxC3TSQ+F2P9dkdR+c40EtoDliptNOM=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=nRgfrNTvsGi2hgyZZTWU6BzYXT1uM6+VWe9D/Np6XBsmguc5vU7apSYINkH1N6rrRkBG2waktIuwm7q8WEVp786F/t7VSllnrQa9RtC5P5VrKExXb5UvMXhXh1borHmHgWZsMNh2KJFlMa+DVSJf1Sjh7A25hFJTFjizUQmi84k= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-93-65b0fbb5e510 From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, 
dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 05/26] dept: Tie to Lockdep and IRQ tracing Date: Wed, 24 Jan 2024 20:59:16 +0900 Message-Id: <20240124115938.80132-6-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa2yLYRTHPc9760rtVRLvkEyaDJnb5pZDxDXiSZAt8Y0Ejb2xstXS6iiR zMxchy2mZgvdUNVdtRWGLTXZzXWsrGYWG5nNOqXWWnUurcuXk1/O/5zfp7+Ekt9hJkhU6l2i Rq1MUbBSWjowqnjmjWCVGDdoiYbcE3HgGzxCQ1FlGQstFaUIyuwHMPTVr4Y2vxtB8PFTCgz5 LQiKu95QYG/oRFBjzmSh9f1ocPo8LDTnH2fh4KVKFp71D2PoOJuHodS6Dh6eLsHgCHygwdDH QqHhIA6NXgwBk4UDU0YMdJvPczDcFQ/NnS8ZqGmfDgUXOli4W9NMQ8Otbgytt4tY6Cz7xcDD hiYaWnJzGCj/VMJCv99Egcnn4eC5w4ihKiskyv76k4HGHAeG7MvXMThf3UFQe+QtBmvZSxbu +9wYbNZ8Cr5frUfQfXKAg0MnAhwUHjiJ4PihszQ8/dHIQFbHfAgOFbHLFpH7bg9Fsmy7SY3f SJMHJQKpPv+GI1m17RwxWnXEZo4ll+72YVLs9THEajnKEqs3jyPHBpyYfHryhCNN54I0ee80 4MSJG6SLk8QUVbqomb1kizT52o9iOu3Mvj2vrF+5DPRFeQxFSAR+nnDxsxf954JqBxNmlp8q uFwBKszj+MmCLacntJdKKP7wSMH8+TEbDsbySwXPhZ4/TPMxgutdGw6zjJ8vGOxD/6TRQmmV 448ogl8glBe002GWh27eWk5xYanAH44QDL2FzN+HKOGe2UWfRjIjGmFBcpU6PVWpSpk3K1mv Vu2ZtXVnqhWFKmXaP7zxFvK2rK9DvAQpRsmWWSpFOaNM1+pT65AgoRTjZK6oClEuS1Lq94qa nZs1uhRRW4cmSmjFeNkc/+4kOb9NuUvcIYppouZ/iiUREzLQ7Mxo3XBC/Ot3xhn2zDYdbfeM dQQ56cd1U+oiH62Ymzh4s3r8muyAbr19Um3sjf7I5ZtsXVL3NPOKK0QR1Wj2r9X33LToH5yK 0xlaPyZvH1Mb41yV+KhoDOW+nvji20r78oXtvrTXTamm0SWW3stq70x7whZVfW5l+XPHkHbq 0jwFrU1WxsdSGq3yN5g/m6pOAwAA X-Brightmail-Tracker: H4sIAAAAAAAAAzWSf0yMcRzHfb/P83yf6zh7dm48YcONMS3OyD6T4Q/0zG9/tfUHbjzq5jq5 0y80pRxSqSbnCP3ayXUpd9n8KLvVlLRINSpplfzolyjXOldyx/zz2Wvv93uvvz4SSn6LmS/R 6E6Kep1aqyRSWronODnwoadcVDWXMpCVpgLXz4s05JbZCDTdL0Fgq0jC0P88BN6NDyHwNL6m wJTThCC/5wMFFbVdCKqKzxFo6ZsNra4RAvU5lwkkF5YReDM4iaHzWjaGEvtuaMgswOB0f6HB 1E/gpikZe89XDG6LlQVL4jLoLb7BwmTPGqjvestAza16Bqo6AsB8u5NAZVU9DbWPejG0PMkl 0GWbZqCh9gUNTVnpDJR+KyAwOG6hwOIaYaHZmYehPMVrM479ZqAu3YnBWPQAQ2v7UwTPLnZj sNveEqhxDWFw2HMo+HX3OYLejGEWzqe5WbiZlIHg8vlrNLyeqmMgpTMIPBO5ZEuwUDM0Qgkp jlihajyPFl4W8MLjGx9YIeVZByvk2aMFR/FKobCyHwv5oy5GsFsvEcE+ms0KqcOtWPj26hUr vLjuoYW+VhPetzBMuvGIqNXEiPrVmw5JI+5N5dNRV0/HtdvH2ET0Q52K/CQ8t443P3YyPibc cr6tzU35WMEt5h3pn725VEJxF2byxd8bia+Yw23mR25//ss0t4xv+/gO+1jGBfGmign0T7qI Lyl3/hX5cev5UnMH7WO5d9NtvcJmImkemmFFCo0uJlKt0QatMhyLiNdp4lYdPh5pR96nsSRM Zj1CP1tCqhEnQcpZsi3WMlHOqGMM8ZHViJdQSoWszf++KJcdUcefEvXHD+qjtaKhGi2Q0Mp5 sh2h4iE5F64+KR4TxShR/7/FEr/5iah5+HDtUYVj4geleElmFQ3qomwhYxnTxrjp9caE0u7N Jxqilrgngs3kd52zLmhgrS1dmR3WwO/qPXuQ3mnox3sDM/OHVQFxqh4uW9s45X9gwQH2TJHx gUdVuYJzXh3Y4L8twfB+HQ4vDM39lLr0hGKrOW10767w2LkD2/cn3ZmrpA0R6jUrKb1B/Qce LEDmMAMAAA== X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: How to place Dept this way looks so ugly. But it's inevitable for now. The way should be enhanced gradually. 
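To make the effect of this glue concrete, here is a minimal sketch of an ordinary spinlock section; example_lock and example_critical_section() are made-up names used only for illustration, and the comments trace the call path the ldt_*() wrappers from the previous patch are expected to take once the lockdep annotations below route into them. No caller changes are needed; the existing lockdep annotations are reused as-is.

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);	/* hypothetical lock, for illustration only */

static void example_critical_section(void)
{
	/*
	 * Under lockdep, spin_lock() annotates via spin_acquire() on the
	 * lock's dep_map; with CONFIG_DEPT that now also runs
	 * ldt_lock(&dep_map.dmap, ...), i.e. dept_wait() to record the
	 * wait and dept_ecxt_enter() to open the event context.
	 */
	spin_lock(&example_lock);

	/* critical section */

	/*
	 * spin_unlock() annotates via spin_release(), which now also runs
	 * ldt_unlock(&dep_map.dmap, ...) -> dept_ecxt_exit(..., LDT_EVT_ALL, ...).
	 */
	spin_unlock(&example_lock);
}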
Signed-off-by: Byungchul Park --- include/linux/irqflags.h | 7 +- include/linux/local_lock_internal.h | 1 + include/linux/lockdep.h | 102 ++++++++++++++++++++++------ include/linux/lockdep_types.h | 3 + include/linux/mutex.h | 1 + include/linux/percpu-rwsem.h | 2 +- include/linux/rtmutex.h | 1 + include/linux/rwlock_types.h | 1 + include/linux/rwsem.h | 1 + include/linux/seqlock.h | 2 +- include/linux/spinlock_types_raw.h | 3 + include/linux/srcu.h | 2 +- kernel/dependency/dept.c | 8 +-- kernel/locking/lockdep.c | 22 ++++++ 14 files changed, 127 insertions(+), 29 deletions(-) diff --git a/include/linux/irqflags.h b/include/linux/irqflags.h index 2b665c32f5fe..672dac1c3059 100644 --- a/include/linux/irqflags.h +++ b/include/linux/irqflags.h @@ -14,6 +14,7 @@ #include #include +#include #include #include @@ -61,8 +62,10 @@ extern void trace_hardirqs_off(void); # define lockdep_softirqs_enabled(p) ((p)->softirqs_enabled) # define lockdep_hardirq_enter() \ do { \ - if (__this_cpu_inc_return(hardirq_context) == 1)\ + if (__this_cpu_inc_return(hardirq_context) == 1) { \ current->hardirq_threaded = 0; \ + dept_hardirq_enter(); \ + } \ } while (0) # define lockdep_hardirq_threaded() \ do { \ @@ -137,6 +140,8 @@ do { \ # define lockdep_softirq_enter() \ do { \ current->softirq_context++; \ + if (current->softirq_context == 1) \ + dept_softirq_enter(); \ } while (0) # define lockdep_softirq_exit() \ do { \ diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h index 975e33b793a7..39f67788fd95 100644 --- a/include/linux/local_lock_internal.h +++ b/include/linux/local_lock_internal.h @@ -21,6 +21,7 @@ typedef struct { .name = #lockname, \ .wait_type_inner = LD_WAIT_CONFIG, \ .lock_type = LD_LOCK_PERCPU, \ + .dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\ }, \ .owner = NULL, diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h index dc2844b071c2..8825f535d36d 100644 --- a/include/linux/lockdep.h +++ b/include/linux/lockdep.h @@ -12,6 +12,7 @@ #include #include +#include #include struct task_struct; @@ -39,6 +40,8 @@ static inline void lockdep_copy_map(struct lockdep_map *to, */ for (i = 0; i < NR_LOCKDEP_CACHING_CLASSES; i++) to->class_cache[i] = NULL; + + dept_map_copy(&to->dmap, &from->dmap); } /* @@ -466,7 +469,8 @@ enum xhlock_context_t { * Note that _name must not be NULL. 
*/ #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \ - { .name = (_name), .key = (void *)(_key), } + { .name = (_name), .key = (void *)(_key), \ + .dmap = DEPT_MAP_INITIALIZER(_name, _key) } static inline void lockdep_invariant_state(bool force) {} static inline void lockdep_free_task(struct task_struct *task) {} @@ -548,33 +552,89 @@ extern bool read_lock_is_recursive(void); #define lock_acquire_shared(l, s, t, n, i) lock_acquire(l, s, t, 1, 1, n, i) #define lock_acquire_shared_recursive(l, s, t, n, i) lock_acquire(l, s, t, 2, 1, n, i) -#define spin_acquire(l, s, t, i) lock_acquire_exclusive(l, s, t, NULL, i) -#define spin_acquire_nest(l, s, t, n, i) lock_acquire_exclusive(l, s, t, n, i) -#define spin_release(l, i) lock_release(l, i) - -#define rwlock_acquire(l, s, t, i) lock_acquire_exclusive(l, s, t, NULL, i) +#define spin_acquire(l, s, t, i) \ +do { \ + ldt_lock(&(l)->dmap, s, t, NULL, i); \ + lock_acquire_exclusive(l, s, t, NULL, i); \ +} while (0) +#define spin_acquire_nest(l, s, t, n, i) \ +do { \ + ldt_lock(&(l)->dmap, s, t, n, i); \ + lock_acquire_exclusive(l, s, t, n, i); \ +} while (0) +#define spin_release(l, i) \ +do { \ + ldt_unlock(&(l)->dmap, i); \ + lock_release(l, i); \ +} while (0) +#define rwlock_acquire(l, s, t, i) \ +do { \ + ldt_wlock(&(l)->dmap, s, t, NULL, i); \ + lock_acquire_exclusive(l, s, t, NULL, i); \ +} while (0) #define rwlock_acquire_read(l, s, t, i) \ do { \ + ldt_rlock(&(l)->dmap, s, t, NULL, i, !read_lock_is_recursive());\ if (read_lock_is_recursive()) \ lock_acquire_shared_recursive(l, s, t, NULL, i); \ else \ lock_acquire_shared(l, s, t, NULL, i); \ } while (0) - -#define rwlock_release(l, i) lock_release(l, i) - -#define seqcount_acquire(l, s, t, i) lock_acquire_exclusive(l, s, t, NULL, i) -#define seqcount_acquire_read(l, s, t, i) lock_acquire_shared_recursive(l, s, t, NULL, i) -#define seqcount_release(l, i) lock_release(l, i) - -#define mutex_acquire(l, s, t, i) lock_acquire_exclusive(l, s, t, NULL, i) -#define mutex_acquire_nest(l, s, t, n, i) lock_acquire_exclusive(l, s, t, n, i) -#define mutex_release(l, i) lock_release(l, i) - -#define rwsem_acquire(l, s, t, i) lock_acquire_exclusive(l, s, t, NULL, i) -#define rwsem_acquire_nest(l, s, t, n, i) lock_acquire_exclusive(l, s, t, n, i) -#define rwsem_acquire_read(l, s, t, i) lock_acquire_shared(l, s, t, NULL, i) -#define rwsem_release(l, i) lock_release(l, i) +#define rwlock_release(l, i) \ +do { \ + ldt_unlock(&(l)->dmap, i); \ + lock_release(l, i); \ +} while (0) +#define seqcount_acquire(l, s, t, i) \ +do { \ + ldt_wlock(&(l)->dmap, s, t, NULL, i); \ + lock_acquire_exclusive(l, s, t, NULL, i); \ +} while (0) +#define seqcount_acquire_read(l, s, t, i) \ +do { \ + ldt_rlock(&(l)->dmap, s, t, NULL, i, false); \ + lock_acquire_shared_recursive(l, s, t, NULL, i); \ +} while (0) +#define seqcount_release(l, i) \ +do { \ + ldt_unlock(&(l)->dmap, i); \ + lock_release(l, i); \ +} while (0) +#define mutex_acquire(l, s, t, i) \ +do { \ + ldt_lock(&(l)->dmap, s, t, NULL, i); \ + lock_acquire_exclusive(l, s, t, NULL, i); \ +} while (0) +#define mutex_acquire_nest(l, s, t, n, i) \ +do { \ + ldt_lock(&(l)->dmap, s, t, n, i); \ + lock_acquire_exclusive(l, s, t, n, i); \ +} while (0) +#define mutex_release(l, i) \ +do { \ + ldt_unlock(&(l)->dmap, i); \ + lock_release(l, i); \ +} while (0) +#define rwsem_acquire(l, s, t, i) \ +do { \ + ldt_lock(&(l)->dmap, s, t, NULL, i); \ + lock_acquire_exclusive(l, s, t, NULL, i); \ +} while (0) +#define rwsem_acquire_nest(l, s, t, n, i) \ +do { \ + ldt_lock(&(l)->dmap, s, t, n, 
i); \ + lock_acquire_exclusive(l, s, t, n, i); \ +} while (0) +#define rwsem_acquire_read(l, s, t, i) \ +do { \ + ldt_lock(&(l)->dmap, s, t, NULL, i); \ + lock_acquire_shared(l, s, t, NULL, i); \ +} while (0) +#define rwsem_release(l, i) \ +do { \ + ldt_unlock(&(l)->dmap, i); \ + lock_release(l, i); \ +} while (0) #define lock_map_acquire(l) lock_acquire_exclusive(l, 0, 0, NULL, _THIS_IP_) #define lock_map_acquire_try(l) lock_acquire_exclusive(l, 0, 1, NULL, _THIS_IP_) diff --git a/include/linux/lockdep_types.h b/include/linux/lockdep_types.h index 2ebc323d345a..aecd65836b2c 100644 --- a/include/linux/lockdep_types.h +++ b/include/linux/lockdep_types.h @@ -11,6 +11,7 @@ #define __LINUX_LOCKDEP_TYPES_H #include +#include #define MAX_LOCKDEP_SUBCLASSES 8UL @@ -77,6 +78,7 @@ struct lock_class_key { struct hlist_node hash_entry; struct lockdep_subclass_key subkeys[MAX_LOCKDEP_SUBCLASSES]; }; + struct dept_key dkey; }; extern struct lock_class_key __lockdep_no_validate__; @@ -194,6 +196,7 @@ struct lockdep_map { int cpu; unsigned long ip; #endif + struct dept_map dmap; }; struct pin_cookie { unsigned int val; }; diff --git a/include/linux/mutex.h b/include/linux/mutex.h index a33aa9eb9fc3..04c41faace85 100644 --- a/include/linux/mutex.h +++ b/include/linux/mutex.h @@ -26,6 +26,7 @@ , .dep_map = { \ .name = #lockname, \ .wait_type_inner = LD_WAIT_SLEEP, \ + .dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\ } #else # define __DEP_MAP_MUTEX_INITIALIZER(lockname) diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h index 36b942b67b7d..e871aca04645 100644 --- a/include/linux/percpu-rwsem.h +++ b/include/linux/percpu-rwsem.h @@ -21,7 +21,7 @@ struct percpu_rw_semaphore { }; #ifdef CONFIG_DEBUG_LOCK_ALLOC -#define __PERCPU_RWSEM_DEP_MAP_INIT(lockname) .dep_map = { .name = #lockname }, +#define __PERCPU_RWSEM_DEP_MAP_INIT(lockname) .dep_map = { .name = #lockname, .dmap = DEPT_MAP_INITIALIZER(lockname, NULL) }, #else #define __PERCPU_RWSEM_DEP_MAP_INIT(lockname) #endif diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h index 7d049883a08a..35889ac5eeae 100644 --- a/include/linux/rtmutex.h +++ b/include/linux/rtmutex.h @@ -81,6 +81,7 @@ do { \ .dep_map = { \ .name = #mutexname, \ .wait_type_inner = LD_WAIT_SLEEP, \ + .dmap = DEPT_MAP_INITIALIZER(mutexname, NULL),\ } #else #define __DEP_MAP_RT_MUTEX_INITIALIZER(mutexname) diff --git a/include/linux/rwlock_types.h b/include/linux/rwlock_types.h index 1948442e7750..6e58dfc84997 100644 --- a/include/linux/rwlock_types.h +++ b/include/linux/rwlock_types.h @@ -10,6 +10,7 @@ .dep_map = { \ .name = #lockname, \ .wait_type_inner = LD_WAIT_CONFIG, \ + .dmap = DEPT_MAP_INITIALIZER(lockname, NULL), \ } #else # define RW_DEP_MAP_INIT(lockname) diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h index 1dd530ce8b45..1fa391e7770a 100644 --- a/include/linux/rwsem.h +++ b/include/linux/rwsem.h @@ -22,6 +22,7 @@ .dep_map = { \ .name = #lockname, \ .wait_type_inner = LD_WAIT_SLEEP, \ + .dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\ }, #else # define __RWSEM_DEP_MAP_INIT(lockname) diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h index e92f9d5577ba..dee83ab183e4 100644 --- a/include/linux/seqlock.h +++ b/include/linux/seqlock.h @@ -81,7 +81,7 @@ static inline void __seqcount_init(seqcount_t *s, const char *name, #ifdef CONFIG_DEBUG_LOCK_ALLOC # define SEQCOUNT_DEP_MAP_INIT(lockname) \ - .dep_map = { .name = #lockname } + .dep_map = { .name = #lockname, .dmap = DEPT_MAP_INITIALIZER(lockname, NULL) } /** * seqcount_init() - 
runtime initializer for seqcount_t diff --git a/include/linux/spinlock_types_raw.h b/include/linux/spinlock_types_raw.h index 91cb36b65a17..3dcc551ded25 100644 --- a/include/linux/spinlock_types_raw.h +++ b/include/linux/spinlock_types_raw.h @@ -31,11 +31,13 @@ typedef struct raw_spinlock { .dep_map = { \ .name = #lockname, \ .wait_type_inner = LD_WAIT_SPIN, \ + .dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\ } # define SPIN_DEP_MAP_INIT(lockname) \ .dep_map = { \ .name = #lockname, \ .wait_type_inner = LD_WAIT_CONFIG, \ + .dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\ } # define LOCAL_SPIN_DEP_MAP_INIT(lockname) \ @@ -43,6 +45,7 @@ typedef struct raw_spinlock { .name = #lockname, \ .wait_type_inner = LD_WAIT_CONFIG, \ .lock_type = LD_LOCK_PERCPU, \ + .dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\ } #else # define RAW_SPIN_DEP_MAP_INIT(lockname) diff --git a/include/linux/srcu.h b/include/linux/srcu.h index 127ef3b2e607..f6b8266a4bfd 100644 --- a/include/linux/srcu.h +++ b/include/linux/srcu.h @@ -35,7 +35,7 @@ int __init_srcu_struct(struct srcu_struct *ssp, const char *name, __init_srcu_struct((ssp), #ssp, &__srcu_key); \ }) -#define __SRCU_DEP_MAP_INIT(srcu_name) .dep_map = { .name = #srcu_name }, +#define __SRCU_DEP_MAP_INIT(srcu_name) .dep_map = { .name = #srcu_name, .dmap = DEPT_MAP_INITIALIZER(srcu_name, NULL) }, #else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */ int init_srcu_struct(struct srcu_struct *ssp); diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c index a3e774479f94..7e12e46dc4b7 100644 --- a/kernel/dependency/dept.c +++ b/kernel/dependency/dept.c @@ -244,10 +244,10 @@ static bool dept_working(void) * Even k == NULL is considered as a valid key because it would use * &->map_key as the key in that case. */ -struct dept_key __dept_no_validate__; +extern struct lock_class_key __lockdep_no_validate__; static bool valid_key(struct dept_key *k) { - return &__dept_no_validate__ != k; + return &__lockdep_no_validate__.dkey != k; } /* @@ -1936,7 +1936,7 @@ void dept_softirqs_off(void) dept_task()->softirqs_enabled = false; } -void dept_hardirqs_off(void) +void noinstr dept_hardirqs_off(void) { /* * Assumes that it's called with IRQ disabled so that accessing @@ -1958,7 +1958,7 @@ void dept_softirq_enter(void) /* * Ensure it's the outmost hardirq context. 
*/ -void dept_hardirq_enter(void) +void noinstr dept_hardirq_enter(void) { struct dept_task *dt = dept_task(); diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c index 151bd3de5936..e27cf9d17163 100644 --- a/kernel/locking/lockdep.c +++ b/kernel/locking/lockdep.c @@ -1215,6 +1215,8 @@ void lockdep_register_key(struct lock_class_key *key) struct lock_class_key *k; unsigned long flags; + dept_key_init(&key->dkey); + if (WARN_ON_ONCE(static_obj(key))) return; hash_head = keyhashentry(key); @@ -4310,6 +4312,8 @@ static void __trace_hardirqs_on_caller(void) */ void lockdep_hardirqs_on_prepare(void) { + dept_hardirqs_on(); + if (unlikely(!debug_locks)) return; @@ -4430,6 +4434,8 @@ EXPORT_SYMBOL_GPL(lockdep_hardirqs_on); */ void noinstr lockdep_hardirqs_off(unsigned long ip) { + dept_hardirqs_off(); + if (unlikely(!debug_locks)) return; @@ -4474,6 +4480,8 @@ void lockdep_softirqs_on(unsigned long ip) { struct irqtrace_events *trace = ¤t->irqtrace; + dept_softirqs_on_ip(ip); + if (unlikely(!lockdep_enabled())) return; @@ -4512,6 +4520,8 @@ void lockdep_softirqs_on(unsigned long ip) */ void lockdep_softirqs_off(unsigned long ip) { + dept_softirqs_off(); + if (unlikely(!lockdep_enabled())) return; @@ -4859,6 +4869,8 @@ void lockdep_init_map_type(struct lockdep_map *lock, const char *name, { int i; + ldt_init(&lock->dmap, &key->dkey, subclass, name); + for (i = 0; i < NR_LOCKDEP_CACHING_CLASSES; i++) lock->class_cache[i] = NULL; @@ -5630,6 +5642,12 @@ void lock_set_class(struct lockdep_map *lock, const char *name, { unsigned long flags; + /* + * dept_map_(re)init() might be called twice redundantly. But + * there's no choice as long as Dept relies on Lockdep. + */ + ldt_set_class(&lock->dmap, name, &key->dkey, subclass, ip); + if (unlikely(!lockdep_enabled())) return; @@ -5647,6 +5665,8 @@ void lock_downgrade(struct lockdep_map *lock, unsigned long ip) { unsigned long flags; + ldt_downgrade(&lock->dmap, ip); + if (unlikely(!lockdep_enabled())) return; @@ -6447,6 +6467,8 @@ void lockdep_unregister_key(struct lock_class_key *key) unsigned long flags; bool found = false; + dept_key_destroy(&key->dkey); + might_sleep(); if (WARN_ON_ONCE(static_obj(key))) From patchwork Wed Jan 24 11:59:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529102 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 4DFAB60DE6; Wed, 24 Jan 2024 12:00:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097605; cv=none; b=da1xFJ/ScpQq5NvbNC9K/Cye/mwlJH6YoKSXC65lIrIoVUJcHguJU+54AJOjhdTdfrOIKSR/lM4Ruur4p/Z2ucRUxY3U10Pros3bfWAZnJXFAbnWD+8HgqB2j4NjXptNQkVpsVhFPdVnT2ixuE8BBY3bpSqhI1l/NMooPgWEctM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097605; c=relaxed/simple; bh=37cSYW8WA/TnfUxZD66nhjfMFS+Rdxdh6OVt7EPfSog=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=Ceuc/+MoKjvHxTrY15a4sHiPfgSUvUHskcEl/5JNm0MzNUkD+guikEm9bREyDtLUN+X74yddiuAy+rg6kM/Dx3bidL8Lw1SmD55cQ6VxqGFtyhZ3GAuE83pCbT5qIsvRUNMTAMLewF0FkF5n3pO/0e05hSMhcEZ/r4nKiVnkbyo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none 
(p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-a5-65b0fbb529a6 From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 06/26] dept: Add proc knobs to show stats and dependency graph Date: Wed, 24 Jan 2024 20:59:17 +0900 Message-Id: <20240124115938.80132-7-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0iTYRTHe573uuXkbV18S6IYWNHVbnKKiL5ED0QURX3oPvRFR15iK8si 07Tb1HKBLsvCS6wxl9qmpKWyjDQLzXLVsrnKtBS1lTlraRdn9OXw4/z/5/fp8JSympnBa+IP Sdp4dayKldPygaDCRZUj5VJ4d0cYGDLDwTd0job8MisLraUlCKwVqRh6H26AV8P9CEaan1Jg zGlFUPi+g4KKBg+CWvMpFtq6gsHp87LQlJPBQlpxGQvP+kYxuHMvYSixbYIn2UUYHP5PNBh7 WbhqTMNjoweD32ThwJQSBp3mKxyMvl8KTZ6XDNS2L4C8624WamqbaGio6sTQdjefBY/1DwNP Gh7R0GrIYuDW5yIW+oZNFJh8Xg6eOwowlKePic58+81AY5YDw5kbtzE4X99DUHfuHQab9SUL D3z9GOy2HAp+3nyIoPPCAAenM/0cXE29gCDjdC4NT381MpDuXgkjP/LZdavJg34vRdLtR0jt cAFNHheJpPpKB0fS69o5UmA7TOzm+aS4pheTwkEfQ2yW8yyxDV7iiH7AicnnlhaOPLo8QpMu pxFvCd0pXxMlxWoSJe2StfvlMf43xw8+jjiaXVnPpqCShXok40Vhhdid8hHrET/O3/XBgTUr zBVdLj8V4CnCbNGe9ZHRIzlPCWcniuYvzWwgmCxsFXMbr9MBpoUw0dlbiQKsEFaKaXXd1D// LLGk3DHOMiFCvJXXPt5XjnXeWS5yAakonJWJ9uJr6N/BdPG+2UVnI0UBmmBBSk18YpxaE7ti cUxSvObo4siEOBsaeyjTidFdVWiwdVs9EnikClKss5RJSkadqEuKq0ciT6mmKFzTSyWlIkqd dEzSJuzTHo6VdPUolKdVIYplw0eilEK0+pB0QJIOStr/KeZlM1LQ5eomQ2nIxuWLVK+i28I3 Skalh9zRaaa+qJkEzV2RQnDPedlJa0OIO395cqZM803Ys3vq0GbbnOSetZm+AyeGNNPytq+v +pVjmBMkT92cseOrN9lz3923d2bUh8TyuxN2Od7O7NpZtyo6+GKLKnSNYZ7rGp6o3R3R/2Lr bHp1tjdBReti1EvnU1qd+i/8N+rITAMAAA== X-Brightmail-Tracker: H4sIAAAAAAAAAzWSfUzMcRzHfb+/xzsuv502v/K422JCysRnsvAHfmPM+MOG0S2/1aku3VWE tijhVMpW0VWrcM4VnUtb9OBWlGo90C1Jtcpjczriyrk8VJt/3ntt74e/3iwh11PerEodK2rU ykgFLSWle4KSV1e6zaK/tVsGWWn+4PxxiYT88jIaOu+XIih7eA7DyLMd8GrcjsDd1kFAbnYn guKhfgIeNg4gqDWep6HrnQfYnA4amrOv0JB8s5yGF58nMfTlXMNQatkNrZklGKyujyTkjtCg z03GU/IJg8tgYsCQ5APDxjwGJocCoHmgm4KGgmYKantXwo3CPhpqaptJaKwaxtD1OJ+GgbK/ FLQ2PiehMyudgnujJTR8HjcQYHA6GHhpLcJgTplaS/3+h4KmdCuG1FsPMNheVyOouzSIwVLW TUOD046hwpJNwK87zxAMZ3xh4EKaiwH9uQwEVy7kkNDxu4mClL5AcP/Mp7cECQ12ByGkVJwU 
aseLSKGlhBce5fUzQkpdLyMUWeKECqOvcLNmBAvFY05KsJgu04Jl7Boj6L7YsDDa3s4Iz6+7 SeGdLRfvXXhQuumYGKmKFzVrgkOk4a43Z060rD+VWVlPJ6HSVTrEsjy3jp/QeeiQhKW55XxP j4uYZk9uKV+R/oHSISlLcBdn88avbfS0MY/bx+c0FZLTTHI+vG2kEk2zjAvkk+vez5R5bglf arbOsIRbz9+70TuTl09lBk1XmUwkLUKzTMhTpY6PUqoiA/20EeEJatUpv9DoKAuauowhcTKr Cv3o2lGPOBYp5si2mMpFOaWM1yZE1SOeJRSesh6v+6JcdkyZcFrURB/VxEWK2nq0gCUV82U7 D4ghci5MGStGiOIJUfPfxazEOwktwAW7ur8t2rlbmrpxbUjQhgh9TRe5uNCr82neeEP73OFl XP/t/ca24xNvQwNWbg9+khgzVG6Xhp2s3jdbJzntl17tYZ/wuVt1qG/bGVHtdphXmF84YnHx 27HDEtORsKWHvhn+xB3OTPS+e0s9qgs+G7Onv2PO3tB5MXrdo+Wbt/b6KkhtuDLAl9Bolf8A egzQ2y4DAAA= X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: It'd be useful to show Dept internal stats and dependency graph on runtime via proc for better information. Introduced the knobs. Signed-off-by: Byungchul Park --- kernel/dependency/Makefile | 1 + kernel/dependency/dept.c | 24 +++----- kernel/dependency/dept_internal.h | 26 +++++++++ kernel/dependency/dept_proc.c | 95 +++++++++++++++++++++++++++++++ 4 files changed, 131 insertions(+), 15 deletions(-) create mode 100644 kernel/dependency/dept_internal.h create mode 100644 kernel/dependency/dept_proc.c diff --git a/kernel/dependency/Makefile b/kernel/dependency/Makefile index b5cfb8a03c0c..92f165400187 100644 --- a/kernel/dependency/Makefile +++ b/kernel/dependency/Makefile @@ -1,3 +1,4 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_DEPT) += dept.o +obj-$(CONFIG_DEPT) += dept_proc.o diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c index 7e12e46dc4b7..19406093103e 100644 --- a/kernel/dependency/dept.c +++ b/kernel/dependency/dept.c @@ -74,6 +74,7 @@ #include #include #include +#include "dept_internal.h" static int dept_stop; static int dept_per_cpu_ready; @@ -260,20 +261,13 @@ static bool valid_key(struct dept_key *k) * have been freed will be placed. */ -enum object_t { -#define OBJECT(id, nr) OBJECT_##id, - #include "dept_object.h" -#undef OBJECT - OBJECT_NR, -}; - #define OBJECT(id, nr) \ static struct dept_##id spool_##id[nr]; \ static DEFINE_PER_CPU(struct llist_head, lpool_##id); #include "dept_object.h" #undef OBJECT -static struct dept_pool pool[OBJECT_NR] = { +struct dept_pool dept_pool[OBJECT_NR] = { #define OBJECT(id, nr) { \ .name = #id, \ .obj_sz = sizeof(struct dept_##id), \ @@ -303,7 +297,7 @@ static void *from_pool(enum object_t t) if (DEPT_WARN_ON(!irqs_disabled())) return NULL; - p = &pool[t]; + p = &dept_pool[t]; /* * Try local pool first. 
@@ -338,7 +332,7 @@ static void *from_pool(enum object_t t) static void to_pool(void *o, enum object_t t) { - struct dept_pool *p = &pool[t]; + struct dept_pool *p = &dept_pool[t]; struct llist_head *h; preempt_disable(); @@ -2092,7 +2086,7 @@ void dept_map_copy(struct dept_map *to, struct dept_map *from) clean_classes_cache(&to->map_key); } -static LIST_HEAD(classes); +LIST_HEAD(dept_classes); static bool within(const void *addr, void *start, unsigned long size) { @@ -2124,7 +2118,7 @@ void dept_free_range(void *start, unsigned int sz) while (unlikely(!dept_lock())) cpu_relax(); - list_for_each_entry_safe(c, n, &classes, all_node) { + list_for_each_entry_safe(c, n, &dept_classes, all_node) { if (!within((void *)c->key, start, sz) && !within(c->name, start, sz)) continue; @@ -2200,7 +2194,7 @@ static struct dept_class *check_new_class(struct dept_key *local, c->sub_id = sub_id; c->key = (unsigned long)(k->base + sub_id); hash_add_class(c); - list_add(&c->all_node, &classes); + list_add(&c->all_node, &dept_classes); unlock: dept_unlock(); caching: @@ -2915,8 +2909,8 @@ static void migrate_per_cpu_pool(void) struct llist_head *from; struct llist_head *to; - from = &pool[i].boot_pool; - to = per_cpu_ptr(pool[i].lpool, boot_cpu); + from = &dept_pool[i].boot_pool; + to = per_cpu_ptr(dept_pool[i].lpool, boot_cpu); move_llist(to, from); } } diff --git a/kernel/dependency/dept_internal.h b/kernel/dependency/dept_internal.h new file mode 100644 index 000000000000..007c1eec6bab --- /dev/null +++ b/kernel/dependency/dept_internal.h @@ -0,0 +1,26 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Dept(DEPendency Tracker) - runtime dependency tracker internal header + * + * Started by Byungchul Park : + * + * Copyright (c) 2020 LG Electronics, Inc., Byungchul Park + */ + +#ifndef __DEPT_INTERNAL_H +#define __DEPT_INTERNAL_H + +#ifdef CONFIG_DEPT + +enum object_t { +#define OBJECT(id, nr) OBJECT_##id, + #include "dept_object.h" +#undef OBJECT + OBJECT_NR, +}; + +extern struct list_head dept_classes; +extern struct dept_pool dept_pool[]; + +#endif +#endif /* __DEPT_INTERNAL_H */ diff --git a/kernel/dependency/dept_proc.c b/kernel/dependency/dept_proc.c new file mode 100644 index 000000000000..7d61dfbc5865 --- /dev/null +++ b/kernel/dependency/dept_proc.c @@ -0,0 +1,95 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Procfs knobs for Dept(DEPendency Tracker) + * + * Started by Byungchul Park : + * + * Copyright (C) 2021 LG Electronics, Inc. , Byungchul Park + */ +#include +#include +#include +#include "dept_internal.h" + +static void *l_next(struct seq_file *m, void *v, loff_t *pos) +{ + /* + * XXX: Serialize list traversal if needed. The following might + * give a wrong information on contention. + */ + return seq_list_next(v, &dept_classes, pos); +} + +static void *l_start(struct seq_file *m, loff_t *pos) +{ + /* + * XXX: Serialize list traversal if needed. The following might + * give a wrong information on contention. + */ + return seq_list_start_head(&dept_classes, *pos); +} + +static void l_stop(struct seq_file *m, void *v) +{ +} + +static int l_show(struct seq_file *m, void *v) +{ + struct dept_class *fc = list_entry(v, struct dept_class, all_node); + struct dept_dep *d; + const char *prefix; + + if (v == &dept_classes) { + seq_puts(m, "All classes:\n\n"); + return 0; + } + + prefix = fc->sched_map ? " " : ""; + seq_printf(m, "[%p] %s%s\n", (void *)fc->key, prefix, fc->name); + + /* + * XXX: Serialize list traversal if needed. The following might + * give a wrong information on contention. 
+ */ + list_for_each_entry(d, &fc->dep_head, dep_node) { + struct dept_class *tc = d->wait->class; + + prefix = tc->sched_map ? " " : ""; + seq_printf(m, " -> [%p] %s%s\n", (void *)tc->key, prefix, tc->name); + } + seq_puts(m, "\n"); + + return 0; +} + +static const struct seq_operations dept_deps_ops = { + .start = l_start, + .next = l_next, + .stop = l_stop, + .show = l_show, +}; + +static int dept_stats_show(struct seq_file *m, void *v) +{ + int r; + + seq_puts(m, "Availability in the static pools:\n\n"); +#define OBJECT(id, nr) \ + r = atomic_read(&dept_pool[OBJECT_##id].obj_nr); \ + if (r < 0) \ + r = 0; \ + seq_printf(m, "%s\t%d/%d(%d%%)\n", #id, r, nr, (r * 100) / (nr)); + #include "dept_object.h" +#undef OBJECT + + return 0; +} + +static int __init dept_proc_init(void) +{ + proc_create_seq("dept_deps", S_IRUSR, NULL, &dept_deps_ops); + proc_create_single("dept_stats", S_IRUSR, NULL, dept_stats_show); + return 0; +} + +__initcall(dept_proc_init); From patchwork Wed Jan 24 11:59:18 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529101 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 60E2B612C5; Wed, 24 Jan 2024 12:00:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097605; cv=none; b=HKYBNZ+BwlpIMh3jZd4dsLHc0GoPtccn6GILA4H0jkd/NgWcBqg07lzlqeCghv2rOwmiDVZlQEVUGvO2FKKYDt0GEJAAxJp96N8I7uKXvtycAdXDd5JlsFN/uZCyoEWTVGgFl49A9BB1dd8DGwDqOqF5lXbt4ju8MV2A/FuJo70= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097605; c=relaxed/simple; bh=Z/lrsPfinthw0Keb2TOWgQylHJ0tuWxQmHrjSTWrwaY=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=m7mqMV3n7hkmlB4Tz2eULc5bIzAqeGrz2nPNJqjvnUGXJwgZDwt+LjFmDK3R6Lem9c9rG/NtNkGO45mK3oDsOcZcccmBZFHp054NI4uvDNPxpzXhPkAm1dzldGDWfa9nMjCFqCOBJcrlTWSfaRV3AWVTn+O0niAN21leRjOkHzg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-b5-65b0fbb512f0 From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, 
djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 07/26] dept: Apply sdt_might_sleep_{start,end}() to wait_for_completion()/complete() Date: Wed, 24 Jan 2024 20:59:18 +0900 Message-Id: <20240124115938.80132-8-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSf0zMcRjHfb4/r+P4OjbfmLFbzWQkqj3MLGbzMT/G/GMYjr5zN3XlUmQz 3ZVECllOP3CVnXMd5XvHUHGyUlEdtZyWVimkn0vXnBJ3mX+evfa8n+f111tCyp/R8yVqzQlB q1FGKRgpJR2YUbD80XipsFKfFgJXL60E92gaBfklVgacD4oRWO06AnqrNsOHsX4E4/WNJBiy nQgKOj+RYK9uR1Bh1jPQ1D0Tmt1DDNRmpzOQXFTCwLu+CQLarmcRUCxuhzdXCglweL5SYOhl IM+QTHjHNwI8JgsLpqRA6DLnsjDRGQK17S00VLQug5xbbQyUV9RSUP2ki4CmZ/kMtFv/0PCm uoYC59UMGu4PFjLQN2YiweQeYuG9w0hAaYpXlPpjkobXGQ4CUu88JKD5YxmC52kdBIjWFgZe ufsJsInZJPy6W4WgK3OAhXOXPCzk6TIRpJ+7TkHj79c0pLSFwfjPfCZiLX7VP0TiFNtJXDFm pHBdIY+f5n5iccrzVhYbxXhsMwfhovJeAheMuGksWi4wWBzJYvHFgWYCDzY0sLjmxjiFu5sN xM4Fe6XrIoUodYKgDV5/SKqqbe+hY1/MPjUxeTYJ2WdeRH4Sngvly85b0X/ObNDTPma4JbzL 5SF9PJdbzNsyvnj3UgnJnZ/Om4frGV8wh1PxrQ29U0xxgXyhwz4lknFhfPmLYvqfdBFfXOqY Evlx4fz9nFbKx3LvTYflMuuT8lyyH+/ILWL/PfjzL80u6gqSGdE0C5KrNQnRSnVU6ApVokZ9 asWRmGgReRtlOjOx7wkace6uRJwEKWbIIiwlgpxWJsQlRlciXkIq5spc/g8EuSxSmXha0MYc 1MZHCXGVaIGEUsyTrRo7GSnnjipPCMcEIVbQ/k8Jid/8JLR/y657uvCN3FO9+pgx4ABZpLFu 7dGpNn3Ub5y1ZlcP3X9tx+qqL31ZTeLhDS835Jnc8QGmwNCbb22Pt12OaNnjH3T8tj14+eh6 g9gtLlWKYqqnLmcytkuKs9oSB0oGv3+LqF9UFjzsvD1cs7kvbDRg4a/sztTPdbdcZHqg7vji NQoqTqUMCSK1ccq/nZYfxk0DAAA= X-Brightmail-Tracker: H4sIAAAAAAAAAzWSfUzMcRzHfX+P13H5dcJPnm9rJpNI+hijf6zvPI3ZPM2mGz86rthdjphJ RVF3ZMsl4Tp2bleUK+ah41YunfRALZUctUJKWbqbdB7umH8+e+39fu/110dESgvoEJEiMUlQ JcqVMkZMidcvS5t/d7RUiDB7J0NOdgS4hzMpKCgpZqDxdhGC4vKTBPQ6YuG1px/BaF0DCfrc RgSFnW9JKK92IbCZUxlo6g6EZvcgA87cLAbSrpcw8LLPS0DHxQsEFFnXQe15IwH2kY8U6HsZ uKxPI3znEwEjJgsLppRQ6DLns+DtXAhOVwsNVVecNNja58Glqx0MVNicFFTf7yKg6WEBA67i 3zTUVtdQ0JijpeHWgJGBPo+JBJN7kIVXdgMBpek+2+lvv2h4prUTcPrGHQKa2x4heJz5ngBr cQsDVe5+AsqsuST8uOlA0KX7wsKp7BEWLp/UIcg6dZGChp/PaEjviILR7wVMzDJc1T9I4vSy w9jmMVD4uZHHD/Lfsjj9cTuLDdZDuMwchq9X9BK4cMhNY6vlDIOtQxdYfPZLM4EH6utZXJM3 SuHuZj2xYdp28fLdglKhEVQLVsSJ452uHvrgk6Aj3l8nUlB54FkUIOK5xbyuPpX2M8PN4Vtb R0g/B3Oz+DLtB18uFpFcxlje/LWO8RcTuHi+vb73L1NcKG+0lyM/S7govuJJEf1POpMvKrX/ FQVwS/hbl9opP0t9m/eWc+x5JDagMRYUrEjUJMgVyqhw9f745ETFkfBdBxKsyPczpuPenPto uCm2EnEiJBsnibGUCFJarlEnJ1QiXkTKgiWtU24LUsluefJRQXVgp+qQUlBXoqkiSjZZsnqL ECfl9sqThP2CcFBQ/W8JUUBICnrp6O/RGHuCFk3qyIyMvLYmfPqMb9vr5IPad66W2d4b27QZ A8c6dVWRXbiPHaOmYuduLgyKvjmnbV/a8zzb1GhPd/6E8dGf25I2vfjpqandOm+NO+Pum2sT d3wtiWswOp5+3rQyIkDfY1pKdt9TV6buWaWhlGsdoYaEjTG2sKEQ3bCMUsfLF4aRKrX8D8Uv ZDsvAwAA X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Makes Dept able to track dependencies by wait_for_completion()/complete(). 
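A minimal sketch of the pattern this covers; example_done, example_waiter() and example_completer() are hypothetical names used only to show where the new hooks fire.

#include <linux/completion.h>

static DECLARE_COMPLETION(example_done);	/* hypothetical, for illustration */

static void example_waiter(void)
{
	/*
	 * wait_for_completion() goes through complete_acquire(), which now
	 * calls sdt_might_sleep_start(&example_done.dmap), so Dept records
	 * a wait on this completion's dept_map before the task can sleep;
	 * complete_release() ends it with sdt_might_sleep_end().
	 */
	wait_for_completion(&example_done);
}

static void example_completer(void)
{
	/*
	 * complete() is the paired event: per this patch's intent, Dept can
	 * now relate the completer's context to the sleeping waiter above.
	 */
	complete(&example_done);
}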
Signed-off-by: Byungchul Park --- include/linux/completion.h | 30 +++++++++++++++++++++++++----- 1 file changed, 25 insertions(+), 5 deletions(-) diff --git a/include/linux/completion.h b/include/linux/completion.h index fb2915676574..bd2c207481d6 100644 --- a/include/linux/completion.h +++ b/include/linux/completion.h @@ -10,6 +10,7 @@ */ #include +#include /* * struct completion - structure used to maintain state for a "completion" @@ -26,14 +27,33 @@ struct completion { unsigned int done; struct swait_queue_head wait; + struct dept_map dmap; }; +#define init_completion(x) \ +do { \ + sdt_map_init(&(x)->dmap); \ + __init_completion(x); \ +} while (0) + +/* + * XXX: No use cases for now. Fill the body when needed. + */ #define init_completion_map(x, m) init_completion(x) -static inline void complete_acquire(struct completion *x) {} -static inline void complete_release(struct completion *x) {} + +static inline void complete_acquire(struct completion *x) +{ + sdt_might_sleep_start(&x->dmap); +} + +static inline void complete_release(struct completion *x) +{ + sdt_might_sleep_end(); +} #define COMPLETION_INITIALIZER(work) \ - { 0, __SWAIT_QUEUE_HEAD_INITIALIZER((work).wait) } + { 0, __SWAIT_QUEUE_HEAD_INITIALIZER((work).wait), \ + .dmap = DEPT_MAP_INITIALIZER(work, NULL), } #define COMPLETION_INITIALIZER_ONSTACK_MAP(work, map) \ (*({ init_completion_map(&(work), &(map)); &(work); })) @@ -75,13 +95,13 @@ static inline void complete_release(struct completion *x) {} #endif /** - * init_completion - Initialize a dynamically allocated completion + * __init_completion - Initialize a dynamically allocated completion * @x: pointer to completion structure that is to be initialized * * This inline function will initialize a dynamically created completion * structure. 
*/ -static inline void init_completion(struct completion *x) +static inline void __init_completion(struct completion *x) { x->done = 0; init_swait_queue_head(&x->wait); From patchwork Wed Jan 24 11:59:19 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529103 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 25CA561689; Wed, 24 Jan 2024 12:00:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097606; cv=none; b=gWtQ/8TTRLUuCiL2wQMHe/8bkOL0XmEUwuKxVD8TYzfOiBnuQIOOwdCrsJH3iCdmPZHY7tX5xoImIAJ9+dr2pbxlSAAgsCvO9unkMuAqQsaamy/9r8clvNXa1JoywG+k6/8Wt7me/ZuO0z9eWLeIxtqUqhbUeM3HUjjs01I5ApU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097606; c=relaxed/simple; bh=tg0pA2ZlUNywKtzMfrdbwkG7GYbhIVlK2HrCAbcpiMY=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=Hj+u0yPTySOtfvFv4PV2MHBT6TLQLoPnOznZkhgLhOvtLpH3jPzvFVs+NcPt3C4YloUelNOf/yRkZHCcCvmJm9nyAVGhQLlInVs2qi/LbYqhBCHjKc7Uk8UST6ebKRUhbcCobSLHOW1WDg3zBocTjPrrTFe3wktGd/9ODyfiAE0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-c5-65b0fbb52f43 From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 08/26] dept: Apply sdt_might_sleep_{start,end}() to swait Date: Wed, 24 Jan 2024 20:59:19 +0900 Message-Id: <20240124115938.80132-9-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0iTcRSH/b93Z6uXJfRmRDKwe6ZlcjANoQ+9fQiioKAIXfmSq6k10zKI vKbNvIat8tLUmGuarq1ArdlSsky8S5qYqJlp3nK5kTorZ/Tl8ON3zvN8OgwuqSE9GHnkFUEZ 
KVNIKREhmlpVvOvFokHwGf3hDzl3fMA2l0ZAQVUFBe2V5QgqnidgMP72EPTYJxEstrThoM5r R1A89BmH540DCMy6RAq6RlZDt22Ggqa8dAqSSqso6JhwYNB/LxeDcuMRaM4uwcAy/40A9TgF +eokbHmMYTCv1dOgjfeCYd1DGhxDvtA08JEEc98OeFDUT8ErcxMBjdXDGHTVFlAwUPGHhObG 9wS052SQ8HS6hIIJuxYHrW2Ghk6LBgND8rLo1s/fJLzLsGBw6/EzDLo/vURQlzaIgbHiIwUN tkkMTMY8HBbK3iIYzpyiIeXOPA35CZkI0lPuEdC29I6E5P59sPirgAoO4BsmZ3A+2XSVN9s1 BP+hhONrHn6m+eS6PprXGGN4k247X/pqHOOLrTaSN+pvU7zRmkvzqqlujJ9ubaX59/cXCX6k W40d3XBKFBgmKOSxgnL3gVBReGaajbrUyFwzx08S8aiNUiFXhmP9uNSeWVqFmJX8yIacNcVu 4Xp753Fndmc9OVPGKKlCIgZnU9043Y+WFXYte4Qb7RkinCzBenE1po3OWszu44a/lhL/9Ju4 coNlxePK+nNPH/St9JLlm0F9Fu10cmyqK9fpSMT/Aeu5N7peIhuJNchFjyTyyNgImVzh5x0e Fym/5n0uKsKIlh9Ke8NxuhpZ24/XI5ZB0lXiYH2VICFlsdFxEfWIY3Cpu7h3faUgEYfJ4q4L yqgQZYxCiK5HGxhCuk68x341TMKel10RLgrCJUH5f4sxrh7x6PbmgKWF7xbN2P3dE9UuDs+s g8cvezUEgHjN5aDg07Vbz41su1sYFmS9cWyv4+zNM4WzmwIV38VF6W5lRaEZrVZ11hMX93VD ZRHoROXSG2vpVP3+EM/X/sw3Q4HCpbjWcYEl/VVjdzNj6r6ktZoGT+bVSe2BHb9jD3/JnfMz eLvtlBLR4TLf7bgyWvYXhYsroUwDAAA= X-Brightmail-Tracker: H4sIAAAAAAAAAzVSfUyMcRz3+z3P83uejuPZuc3jbezMGPPe2dcy/OHlN6NYf9i8TEcPHZXc kTKslFSUioSSFKddx9Udm3DWatKxFN16k6ZWkiJL15yLdDb/fPbZ5+2vj8Cocrlpgj7ymGyI 1IVriIJVBAYkLHrkLZWXOq2zIPPiUnAPJbOQZ7UQqH9QgsDyMB5D74tN0DTcj8BbW8dATnY9 gtsdHxh4WN2OwFF8lkBD10RwuQcIOLMvEEgoshJ42zeCoe1qFoYS21Z4nVGIocLTw0JOL4Hc nAQ8Bp8xeExmHkxxc6Gz+AYPIx3LwNneyEHVTScHjtaFcD2/jcAzh5OF6sedGBqe5BFot4xy 8Lq6hoX6zDQO7n8rJNA3bGLA5B7g4V1FAYbSxLG1pB9/OHiZVoEh6U4ZBlfLUwTPkz9isFka CVS5+zHYbdkM/Lr3AkFn+lcezl308JAbn47gwrmrLNT9fslBYpsWvD/zyLoAWtU/wNBE+wnq GC5g6atCiZbf+MDTxOetPC2wHaf24gW06FkvprcH3Ry1mVMItQ1m8TT1qwvTb2/e8LTmmpel Xa4cvG3GTsXqUDlcHy0blqwJUYSlJ7tJVLUQ44jrZ+NQHUlFgiCJ/tItN0pFfgIR50nNzR7G x9XibMme9olLRQqBEc+Pl4q/1xKfMVncKn1q6mB9XVacK5XbZ/pkpaiVOruLWB+XxFlSSWnF vx0/caV0/3rrP101lvlovsRnIEUBGmdGan1kdIROH65dbDwcFhupj1m8/0iEDY1dxnR6JPMx GmrYVIlEAWkmKNeZrbKK00UbYyMqkSQwGrWyeeoDWaUM1cWelA1H9hqOh8vGSjRdYDVTlJt3 yCEq8aDumHxYlqNkw38XC37T4tCj0ffDizKcV5SvXO/OBAbOcZfOrFKrA7LmrxqNvxRUtHuz K9TjFXf3hfRtH0j5Tpsua4OeLFye1Hx3wqD/5bUb1h9V+Q8GBbeX2coP3DnVOOnsii35OzcE H5ojSWXBu8hvHDN/9p6hlEouu8XRfTPfKtbkaVfXnu7p3tjTvWaf5UuXhjWG6ZYtYAxG3V8+ 1AJ4LgMAAA== X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Makes Dept able to track dependencies by swaits. 
Signed-off-by: Byungchul Park --- include/linux/swait.h | 3 +++ 1 file changed, 3 insertions(+) diff --git a/include/linux/swait.h b/include/linux/swait.h index d324419482a0..277ac74f61c3 100644 --- a/include/linux/swait.h +++ b/include/linux/swait.h @@ -6,6 +6,7 @@ #include #include #include +#include #include /* @@ -161,6 +162,7 @@ extern void finish_swait(struct swait_queue_head *q, struct swait_queue *wait); struct swait_queue __wait; \ long __ret = ret; \ \ + sdt_might_sleep_start(NULL); \ INIT_LIST_HEAD(&__wait.task_list); \ for (;;) { \ long __int = prepare_to_swait_event(&wq, &__wait, state);\ @@ -176,6 +178,7 @@ extern void finish_swait(struct swait_queue_head *q, struct swait_queue *wait); cmd; \ } \ finish_swait(&wq, &__wait); \ + sdt_might_sleep_end(); \ __out: __ret; \ }) From patchwork Wed Jan 24 11:59:20 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529104 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 57E6A6169C; Wed, 24 Jan 2024 12:00:03 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097606; cv=none; b=CWVE7mIovmw3THGVvNDLrCbICKsg/JkmR/hTpIj1q97AWSHcboqYyrJVrH13BC27O3jxZSwkPqHOi2xTFAcuiVwAPXFuqYXibX6+XgRpReehQGmObghEsKA1wGv+1JloY3YI+gqZXtNFWWcFxKgoWWlxdp3hkuPmgTegyzd9JW8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097606; c=relaxed/simple; bh=RIc5udGVOneBYojEExyl4QYDrUVZJa4Vo+L4Ic7nXzw=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=Ay9TKuzDvAlGjHcCdOK0PkrsYPJrIzI2uTjMH3BSCNrJt3TMTBTxLeI+iMGCOmTZHzboT8bpDQcxbuIvzQoq6069UH4g5+TwtGo2fO78Mne8vZzaL72LviEzdOP7oKsfdGfbF5E3g0Jw6oaoYvOYbsyxMB75GUdf4D68mvzhdkA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-d5-65b0fbb5c9e5 From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, 
gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 09/26] dept: Apply sdt_might_sleep_{start,end}() to waitqueue wait Date: Wed, 24 Jan 2024 20:59:20 +0900 Message-Id: <20240124115938.80132-10-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSfUzMcRzHfX/Pd5z9HOpHRrtpNs8PsQ/Lw9D6arMxmz/4g5t+000d7ipi LDrFXVG2RFp64Dp3x+UKSdcSRVEdFed2hdbQlNpxcS4PFf757L3X5/15/fXhSHkVPZ1TqRNE jVoZp2CklLR/QtGCO4EycXGeFUF2xmLwfT1NQb7NyoDzpgWBteIEAb31UfBqqA9BoLmVhNwc J4Kid50kVDR0IXCYTjLQ1jMR2n0DDDTmGBhILbEx8PzTMAGeC+cJsNg3w9OsYgJq/R8oyO1l 4HJuKjEyPhLgN5pZMKaEQbcpj4Xhd0ugseslDQ73PLhU4GGg2tFIQUNlNwFtVfkMdFl/0/C0 4QkFzuxMGm58Lmbg05CRBKNvgIUXtYUElOlGRGlfftHwOLOWgLSrtwhof30fQc3ptwTYrS8Z eOjrI6DcnkPCj9J6BN1n+1k4leFn4fKJswgMpy5Q0PrzMQ06z3IIfM9n1q3CD/sGSKwrP4Qd Q4UUbioW8L28ThbratwsLrQn4nLTXFxS3UvgIq+PxnbzGQbbvedZrO9vJ/DnlhYWP7kYoHBP ey6xJWSHNCJGjFMliZpFa3ZLY5u+9dIHrksOD2alMymolNUjCSfw4cLzQQv6n29fuzvGGX6O 4HL5ydE8hQ8VyjPf03ok5Ug+fbxgGmxm9IjjJvPbBVv/kdEOxYcJFb+LxvoyfoXQ05zyzzlL sJTVjnHJCL9xyU2NZjm/XHhrPseOOgU+VSIEvDX034NpwgOTi8pCskI0zozkKnVSvFIVF74w NlmtOrxwz/54Oxr5KOOx4Z2VyOvcVod4DikmyNaZbaKcViZpk+PrkMCRiiky17SbolwWo0w+ Imr279IkxonaOhTCUYpg2dKhQzFyfq8yQdwnigdEzf8twUmmp6Bl3jubGx0J0VEa94OI4HjD lu2bBFlJYuSMyTgjP+LKpqCgZ3U7PEm6lYbW7h/Q4WZnZFZpU6nOqdkhEQdnzo+snr3WmVdg 7kgzHvcssBi2aiKPbWhZ7QpVKydFha98ZDtaX7nCmhYarQsKLkisKNP9anI2bfRfT08o0K+n HmW9UVDaWOWSuaRGq/wDP4BQpk0DAAA= X-Brightmail-Tracker: H4sIAAAAAAAAAzWSfUzMcRzH+/6e7zh+TubnYbTbqi0cNvHZMuMv3zWZvzz+oaMf3aord4na ylFcIu6anIfYVVznSl1XrcJR14qrpeiWtEStIUoNFz14qMw/7732fr/3/uvNkfI8eimn1iSK Wo0qVsFIKenOsPQ1VZMOcV3RmWAwXVwHvu+ZFOSVlTDQXlqMoKTyNAGDjdvh1dgQgsnWNhLM ue0I8vvekFDZ1IvAZTvDQMfAPPD6Rhjw5F5gIL2wjIEXn6cI6LmaQ0CxMwJajAUE1I1/oMA8 yMBNczoxLR8JGLfaWbDqA6HfdoOFqb714OntpKHhlocGV/cquH67h4FHLg8FTTX9BHQ8yGOg t+QPDS1NzyhoN2XTcP9LAQOfx6wkWH0jLLyssxDgyJheO/ftNw1Ps+sIOHennADv64cIHme+ I8BZ0slAg2+IgApnLgkTRY0I+i8Ns3D24jgLN09fQnDh7FUK2n49pSGjJxQmf+YxW8Nww9AI iTMqTmDXmIXCzQUCrr3xhsUZj7tZbHEexxW2EFz4aJDA+V99NHbazzPY+TWHxVnDXgJ/ef6c xc+uTVJ4wGsmdi3fL90cJcaqk0Tt2i2R0ujmH4N0wj3JyVGjgdGjIjYLSTiB3yBU3a2eZYYP Frq6xskZ9ucDhIrs93QWknIkb5gj2EZbmSzEcQv53ULZcMpMh+IDhco/+bN9Gb9RGGjVo3+b K4ViR92sL5n271/vpmZYzocK7+yXWSOSWpCfHfmrNUlxKnVsqFIXE52sUZ9UHo6Pc6Lpz1hT p0w16HvHdjfiOaSYK9tqLxPltCpJlxznRgJHKvxlXUtKRbksSpWcImrjD2qPx4o6N1rGUYrF svA9YqScP6pKFGNEMUHU/k8JTrJUj+IXWtIKjR7HsN/UarOpyb1v0amAzreZQfV7t3ziEkrD fxmC9POueHGva9sOidJYU2+N6I8IOLCgRag/5ppQSqgnfocu15bbX6fq0245gpXNt53UNzHu yO98X1uYzXWiOTpnLHK3JtySeM+0YiIoYP4m/5cG2adQg2du40C1u1tB6aJV60NIrU71F0/v DPAvAwAA X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Makes Dept able to track dependencies by waitqueue waits. Signed-off-by: Byungchul Park --- include/linux/wait.h | 3 +++ 1 file changed, 3 insertions(+) diff --git a/include/linux/wait.h b/include/linux/wait.h index 3473b663176f..ebeb4678859f 100644 --- a/include/linux/wait.h +++ b/include/linux/wait.h @@ -7,6 +7,7 @@ #include #include #include +#include #include #include @@ -303,6 +304,7 @@ extern void init_wait_entry(struct wait_queue_entry *wq_entry, int flags); struct wait_queue_entry __wq_entry; \ long __ret = ret; /* explicit shadow */ \ \ + sdt_might_sleep_start(NULL); \ init_wait_entry(&__wq_entry, exclusive ? 
WQ_FLAG_EXCLUSIVE : 0); \ for (;;) { \ long __int = prepare_to_wait_event(&wq_head, &__wq_entry, state);\ @@ -318,6 +320,7 @@ extern void init_wait_entry(struct wait_queue_entry *wq_entry, int flags); cmd; \ } \ finish_wait(&wq_head, &__wq_entry); \ + sdt_might_sleep_end(); \ __out: __ret; \ }) From patchwork Wed Jan 24 11:59:21 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529105 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id A0E0C62802; Wed, 24 Jan 2024 12:00:03 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097607; cv=none; b=oKYeckXJmqxy94LMdwSDImHoi0JeadpkXaIQR/bXW0ksjw4qtiFoG/lKqmGY8C/yD+cwazVXKaluoRkJPVWoIfpZyKv8HkK6wbLUnbrbM9pI0ibeRM74oQzt+iQIHkOPaANCWnysjdJm2jsSMtS4zUAnu3stFKyyDg1hJXEHO7Y= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097607; c=relaxed/simple; bh=MruAYr/ze2aZU4Q6DDtEFdBspBpkL05uqzvhsrQG5Gc=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=e50MRHDFJ5tHy8sS9UFyU6fZzdvcQ5vVjTbgAS6aDNnFMG9UR/T5DqkNp78mS6cPTgQoLex9MiIzMOhx9W3UY4opWkoSKTaGYQ4hvtI6UhmuHzo37uI1zxWTJI2thlTjxv8ZoJIiKv2+82nMo2qYerW1xRfMlngixSlx8BEeewI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-e5-65b0fbb562db From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 10/26] dept: Apply sdt_might_sleep_{start,end}() to hashed-waitqueue wait Date: Wed, 24 Jan 2024 20:59:21 +0900 Message-Id: <20240124115938.80132-11-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: 
H4sIAAAAAAAAAzWSfUzMcRzHfb+/x7scvx3ml+Zhh9kYYmWfhfiLn43N1h82z7fuN3fTxS5F nhaKlChWp8TqcM51PfjlIeraFVKeOrpxUkkzOT0qF9cTV+afz957v/d+ff55s4TyETWD1UUf EA3R6igVLSflXRPzF98fKhGDeypkkHEuGLw/k0nILbbR4CwqQGC7ewKD5+l6eD/QiWDoVT0B xkwngvzPzQTcrWlBYLecpKHhyyRweXtoqMtMpeHU9WIa3nQMY2jKuoihQNoEL9JNGBy+dhKM HhquGE9h//mGwWe2MmBOmA9tlhwGhj8vg7qWdxTYGxdB9rUmGirsdSTUlLVhaHiUS0OL7Q8F L2pqSXBmpFFQ2G2ioWPATIDZ28PAW0cehpJEP+h0/ygFz9IcGE7fuIPB9aEcQWVyKwbJ9o6G x95ODKVSJgGDt54iaDvfxUDSOR8DV06cR5CalEVC/cgzChKbQmHody69Nkx43NlDCImlBwX7 QB4pPDfxwsOcZkZIrGxkhDwpVii1LBSuV3iwkN/npQTJepYWpL6LjJDS5cJC9+vXjFB7eYgU vriMeHPQVvkqjRilixMNS8N3y7XtzRnUfjd7aOTJQyYBuekUJGN5LoTPrneR//XPS2XMmKa5 Bbzb7SPG9FRuDl+a9pVKQXKW4M4E8JbeV+PlKdwOfniw1x+wLMnN56UHkWO2glvB11b3M/+Y s/mCEsc4R+b3C7Mbx38puVC+1XqBGWPy3BkZn3zvN/WvEMhXWdxkOlLkoQlWpNRFx+nVuqiQ Jdr4aN2hJZH79BLyL8p8bHhbGepzRlQjjkWqiYq11mJRSanjYuL11YhnCdVUhTuwSFQqNOr4 w6Jh3y5DbJQYU42CWFI1XbF84KBGye1RHxD3iuJ+0fA/xaxsRgKa1hj762XHpqsvu5NVETtn ObTHNSurnDM9Sa1PtH+Cqm6q1m1P/ZjTP3dmRN/tHVgduTvkqG3noEIbVr46SLPcOXtj/acj vgCtqWG0MnDyBZdng1HaMDo9OP1kQLnpWnNWmH1eiqu9bM2q7529ehRekn87HelHZoUunZQZ /mOddu4WFRmjVS9bSBhi1H8BjlB7+k0DAAA= X-Brightmail-Tracker: H4sIAAAAAAAAAzWSf0yMcRzHfb/Pzy7HsxOelB87jGWpRvYhzMbmmWFpMxvz41bPdKuL3VVk TFSkhGM5FLvKzrku5YkJXUtxdRlFt5xWLc2ciBpdnK64mH8+e+393vv114clFMXULFadkipq U1TJSlpGyrbGZIXfH60SI8fzIkB/NhI8w7kkFFdaaWi7U47Aeu8Ehv5nG+HNyACC0RetBBgK 2xCUvOsm4J69B4HNfJKG9vdTwOkZpMFRmE9DVlklDa8++zB0Xb6IoVzaAs8vlGKo97pJMPTT UGTIwv7zEYPXZGHAlLkQ+szXGPC9iwJHTwcFjdcdFNg6l8DVG1001NocJNhr+jC0Pyqmocf6 m4Ln9mYS2vQFFFR8LaXh84iJAJNnkIHX9UYMVdl+26nv4xQ0FdRjOHXzLgbn28cI6nJ7MUjW DhoaPQMYqqVCAn7deoag79wXBnLOehkoOnEOQX7OZRJax5ooyO6KhtGfxfS6GKFxYJAQsqsP CbYRIym0lPLCw2vdjJBd18kIRilNqDaHCWW1/Vgo+eahBMlyhhakbxcZIe+LEwtfX75khOYr o6Tw3mnAsaE7ZasTxGR1uqiNWLtPluju1lMHXezhsacPmUzkovNQAMtzy/nhSzXMBNPcIt7l 8hITHMTN46sLPlB5SMYS3OlA3jz04u9gGreb9/0a8hcsS3ILeelB/EQs51bwzQ3fmX/OuXx5 Vf1fT4A/r7jaSU6wgovmey3nmQtIZkSTLChInZKuUamTo5fqkhIzUtSHl8Yf0EjI/zOmYz59 DRpu39iAOBYpJ8vXWSpFBaVK12VoGhDPEsoguSv4jqiQJ6gyjojaA3u1acmirgGFsKRypnzT DnGfgtuvShWTRPGgqP3fYjZgViZa9gPHbn069bZ3s/mGOmRanP1tXXzZ4O8FgVxu+PzF20Lt wYs1T2IebVpZ07IhJMlnmr3d/SYubE5FkDQv9cgi6+NPzlWKWGO+YX9dU5xU4bC7o/Ql0Zpd RbdsbseC9YEoVbb3dMDQk/T2oyuOqwa2KIMDZ2iq7s4ndxX9HFkzvXGPktQlqqLCCK1O9Qdg X4soLwMAAA== X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Makes Dept able to track dependencies by hashed-waitqueue waits. Signed-off-by: Byungchul Park --- include/linux/wait_bit.h | 3 +++ 1 file changed, 3 insertions(+) diff --git a/include/linux/wait_bit.h b/include/linux/wait_bit.h index 7725b7579b78..fe89282c3e96 100644 --- a/include/linux/wait_bit.h +++ b/include/linux/wait_bit.h @@ -6,6 +6,7 @@ * Linux wait-bit related types and methods: */ #include +#include struct wait_bit_key { void *flags; @@ -246,6 +247,7 @@ extern wait_queue_head_t *__var_waitqueue(void *p); struct wait_bit_queue_entry __wbq_entry; \ long __ret = ret; /* explicit shadow */ \ \ + sdt_might_sleep_start(NULL); \ init_wait_var_entry(&__wbq_entry, var, \ exclusive ? 
WQ_FLAG_EXCLUSIVE : 0); \ for (;;) { \ @@ -263,6 +265,7 @@ extern wait_queue_head_t *__var_waitqueue(void *p); cmd; \ } \ finish_wait(__wq_head, &__wbq_entry.wq_entry); \ + sdt_might_sleep_end(); \ __out: __ret; \ }) From patchwork Wed Jan 24 11:59:22 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529107 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 8B5D362A06; Wed, 24 Jan 2024 12:00:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097609; cv=none; b=NcktwhyhdESAPQfMaWtK7GLkY3WEM9K3e/VzMOUkxIDTG+I0krKz0qVOgA+kPy/NgtLvelwmBNFwwtLPpNUVvOjwxmcPPHFjIUefmYObvgnAa4SbSkBrgD3iTGaNa1aB6O6nzOqui8mQiFt18zWRh2ai3l1p8O3hbu1M3XVI1sw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097609; c=relaxed/simple; bh=nifTUInVysx7YljtcW2Qa4mywSC4rocnbBoHNOK15ls=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=cuGKyyVF5yCHo1Tu8qF7zEcF6YooSKTIgKYDCABuBMH0/oVjpul+kGtfJ1trmQFhZ1CQGOQ8Zxawl6cgYWaGVQDKvnoUWRF3tKspEYhSI3gWOjwY6nuoe/lmytknRQgILE2wv+HslGJhXWJEcL5vZYe/mnbz/CY5OioApx9g/Xo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-f5-65b0fbb6d165 From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 11/26] dept: Distinguish each syscall context from another Date: Wed, 24 Jan 2024 20:59:22 +0900 Message-Id: <20240124115938.80132-12-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0hTYRjHe8/lPXN14rAKjwkZiwqMtIvFQ0QEfeh0M6lv9SEPecqhzpil 
GRSWc5XlTGvZxWRemGvOXDNM08Wy8tJyTR26RKVMSkkzrEk2s7zQl4cfz//3/D89MlJRRy+X qdSnJY1aTFRiOSUfXVS8viZgkzaUfgqBvOsbwP/zCgWFVVYMnkcVCKxPLhIw/Ho3dE+MIAi0 vSOhwOBBUPyxj4QnTf0IHOZLGDoHF4PXP4ah1XANQ2ZpFYb2r1ME9N7OJ6DCfgBcN0oIcE5+ oaBgGMP9gkxiZgwRMGmyMGDKWA0D5nsMTH3cCK39XTQ4etbB3aJeDA2OVgqaagcI6HxWiKHf +pcGV1MLBZ68HBoqv5Vg+DphIsHkH2Ogw2kkwKadKdL9mKahOcdJgK7sMQHe9/UInl/5QIDd 2oXhpX+EgGq7gYTf5a8RDOhHGci6PsnA/Yt6BNeyblPw7k8zDdreLRD4VYh3bhNejoyRgrY6 TXBMGCnhTQkv1N3rYwTt8x5GMNrPCNXmcKG0YZgQisf9tGC3XMWCfTyfEbJHvYTwze1mhJY7 AUoY9BYQMaFH5NvjpERVqqSJ3BErj2/PaMCnWmPPtg+KGah+fzYKkvFcFP+i/A36z823blGz jLm1vM83Sc7yUm4lX53zmc5GchnJXV7Im7+34dlgCRfNu75Pzx1Q3Gpe7/tCzzLLbeVthnmH 58L4CptzrihoZl95t2fOV3Bb+A+WXGbeyQziL2etmecQ/oXZR91ArBEtsCCFSp2aJKoSoyLi 09WqsxHHk5PsaOahTOenjtaicc/hRsTJkHIRu9NSJSloMTUlPakR8TJSuZT1hTySFGycmH5O 0iQf05xJlFIaUaiMUgazmybS4hTcSfG0lCBJpyTN/5SQBS3PQGGdkktvS2ivD1N1oRND3YGW zml51DL9ZvculJf7vix2KsZy7NND3U3/CP00vKXIZesKWVF56O2D5FfBHWyfJ9hji9lXnNxh cuh+BQwX1Ak1xu17Ni+M1lkZRQeOrs8fGmKX7BXLVml74kadbb2su7Iocm2udxt3MDQyzMEo qZR4cWM4qUkR/wEobJniTAMAAA== X-Brightmail-Tracker: H4sIAAAAAAAAAzWSf0yMcRzHfb/Pz46zx8k8YmM3tGXiNvGJZv3nmak1DPNj9YxnOiq5U2Sz RT/kKLLqUKzCOdfpxx0buqPV+nGaupScVNIatS6HuptUqMw/7732fr8/778+LKEoogJYdcIp SZMgxilpGSmL3JK29slEpbR+0LkZcq+sB+9YFglFFWYanOVlCMyPz2MYqt8G73xuBBOvWwnQ 5zsRlHzqIeBxQy8Cu/ECDe0D86HD66HBkX+ZhrS7FTS0DU9i6C64jqHMEgHN10ox1Ix/IUE/ REOhPg1PyyCGcYOJAUPqKug33mJg8pMKHL2dFNTddlBg71oDN+9002CzO0hoeNqPof15EQ29 5j8UNDc0keDMzabg0ddSGoZ9BgIMXg8Db2qKMVSmT69ljv6moDG7BkPmvSoMHe+rEbzI6sNg MXfSUOd1Y7Ba8gn49aAeQX/OCAMZV8YZKDyfg+ByRgEJrVONFKR3h8DEzyI6fItQ5/YQQrr1 tGD3FZPCq1JeeHarhxHSX3QxQrElSbAag4S7tiEslPzwUoLFdIkWLD+uM4JupAMLX1taGKHp xgQpDHTocdSy/bKwI1KcOlnSrNsaI4ttS7XRiY6YM20DYiqq3qFDfizPbeAb8/LIGaa5QN7l Gidm2J9bwVuzP1M6JGMJ7uJc3vjtNT0TLOQi+eZvv2cPSG4Vn+P6Qs2wnNvIV+b/6/Dccr6s smZ2yG/af3Sza7av4EL4PtNV5hqSFaM5JuSvTkiOF9VxIcHa47EpCeozwYdPxFvQ9M8Yzk3m PkVj7dtqEcci5Tx5uKlCUlBisjYlvhbxLKH0l7uWlEsK+REx5aykORGtSYqTtLVoKUsqF8u3 75ViFNxR8ZR0XJISJc3/FLN+AanogP9aeZjaFvDxT6DzYHR14eDUDXHjy+go/eix+026lZkt 56ZUdqdRZTXNP2QejvCwMebI3SNhoxrdwQ+20OClJ1efbPiwKEzedMyhDlz3RpdUXh+61Wkr dzmy9u0qacU8Hz4Y4n64W4ryFd5bsOl7blWeTxW6Ry96Qt92Js7ZqSS1saIqiNBoxb+mShGM LwMAAA== X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: It enters kernel mode on each syscall and each syscall handling should be considered independently from the point of view of Dept. Otherwise, Dept may wrongly track dependencies across different syscalls. That might be a real dependency from user mode. However, now that Dept just started to work, conservatively let Dept not track dependencies across different syscalls. Signed-off-by: Byungchul Park --- arch/arm64/kernel/syscall.c | 3 ++ arch/x86/entry/common.c | 4 +++ include/linux/dept.h | 39 ++++++++++++--------- kernel/dependency/dept.c | 67 +++++++++++++++++++------------------ 4 files changed, 64 insertions(+), 49 deletions(-) diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c index 9a70d9746b66..96c18ad1dbf7 100644 --- a/arch/arm64/kernel/syscall.c +++ b/arch/arm64/kernel/syscall.c @@ -7,6 +7,7 @@ #include #include #include +#include #include #include @@ -100,6 +101,8 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr, * (Similarly for HVC and SMC elsewhere.) 
*/ + dept_kernel_enter(); + if (flags & _TIF_MTE_ASYNC_FAULT) { /* * Process the asynchronous tag check fault before the actual diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c index 6356060caaf3..445e70937b38 100644 --- a/arch/x86/entry/common.c +++ b/arch/x86/entry/common.c @@ -20,6 +20,7 @@ #include #include #include +#include #ifdef CONFIG_XEN_PV #include @@ -75,6 +76,7 @@ static __always_inline bool do_syscall_x32(struct pt_regs *regs, int nr) /* Returns true to return using SYSRET, or false to use IRET */ __visible noinstr bool do_syscall_64(struct pt_regs *regs, int nr) { + dept_kernel_enter(); add_random_kstack_offset(); nr = syscall_enter_from_user_mode(regs, nr); @@ -262,6 +264,7 @@ __visible noinstr void do_int80_syscall_32(struct pt_regs *regs) { int nr = syscall_32_enter(regs); + dept_kernel_enter(); add_random_kstack_offset(); /* * Subtlety here: if ptrace pokes something larger than 2^31-1 into @@ -283,6 +286,7 @@ static noinstr bool __do_fast_syscall_32(struct pt_regs *regs) int nr = syscall_32_enter(regs); int res; + dept_kernel_enter(); add_random_kstack_offset(); /* * This cannot use syscall_enter_from_user_mode() as it has to diff --git a/include/linux/dept.h b/include/linux/dept.h index c6e2291dd843..4e359f76ac3c 100644 --- a/include/linux/dept.h +++ b/include/linux/dept.h @@ -25,11 +25,16 @@ struct task_struct; #define DEPT_MAX_SUBCLASSES_USR (DEPT_MAX_SUBCLASSES / DEPT_MAX_SUBCLASSES_EVT) #define DEPT_MAX_SUBCLASSES_CACHE 2 -#define DEPT_SIRQ 0 -#define DEPT_HIRQ 1 -#define DEPT_IRQS_NR 2 -#define DEPT_SIRQF (1UL << DEPT_SIRQ) -#define DEPT_HIRQF (1UL << DEPT_HIRQ) +enum { + DEPT_CXT_SIRQ = 0, + DEPT_CXT_HIRQ, + DEPT_CXT_IRQS_NR, + DEPT_CXT_PROCESS = DEPT_CXT_IRQS_NR, + DEPT_CXTS_NR +}; + +#define DEPT_SIRQF (1UL << DEPT_CXT_SIRQ) +#define DEPT_HIRQF (1UL << DEPT_CXT_HIRQ) struct dept_ecxt; struct dept_iecxt { @@ -94,8 +99,8 @@ struct dept_class { /* * for tracking IRQ dependencies */ - struct dept_iecxt iecxt[DEPT_IRQS_NR]; - struct dept_iwait iwait[DEPT_IRQS_NR]; + struct dept_iecxt iecxt[DEPT_CXT_IRQS_NR]; + struct dept_iwait iwait[DEPT_CXT_IRQS_NR]; /* * classified by a map embedded in task_struct, @@ -207,8 +212,8 @@ struct dept_ecxt { /* * where the IRQ-enabled happened */ - unsigned long enirq_ip[DEPT_IRQS_NR]; - struct dept_stack *enirq_stack[DEPT_IRQS_NR]; + unsigned long enirq_ip[DEPT_CXT_IRQS_NR]; + struct dept_stack *enirq_stack[DEPT_CXT_IRQS_NR]; /* * where the event context started @@ -252,8 +257,8 @@ struct dept_wait { /* * where the IRQ wait happened */ - unsigned long irq_ip[DEPT_IRQS_NR]; - struct dept_stack *irq_stack[DEPT_IRQS_NR]; + unsigned long irq_ip[DEPT_CXT_IRQS_NR]; + struct dept_stack *irq_stack[DEPT_CXT_IRQS_NR]; /* * where the wait happened @@ -406,19 +411,19 @@ struct dept_task { int wait_hist_pos; /* - * sequential id to identify each IRQ context + * sequential id to identify each context */ - unsigned int irq_id[DEPT_IRQS_NR]; + unsigned int cxt_id[DEPT_CXTS_NR]; /* * for tracking IRQ-enabled points with cross-event */ - unsigned int wgen_enirq[DEPT_IRQS_NR]; + unsigned int wgen_enirq[DEPT_CXT_IRQS_NR]; /* * for keeping up-to-date IRQ-enabled points */ - unsigned long enirq_ip[DEPT_IRQS_NR]; + unsigned long enirq_ip[DEPT_CXT_IRQS_NR]; /* * for reserving a current stack instance at each operation @@ -465,7 +470,7 @@ struct dept_task { .wait_hist = { { .wait = NULL, } }, \ .ecxt_held_pos = 0, \ .wait_hist_pos = 0, \ - .irq_id = { 0U }, \ + .cxt_id = { 0U }, \ .wgen_enirq = { 0U }, \ .enirq_ip = { 0UL }, \ .stack = NULL, \ 
@@ -503,6 +508,7 @@ extern void dept_event(struct dept_map *m, unsigned long e_f, unsigned long ip, extern void dept_ecxt_exit(struct dept_map *m, unsigned long e_f, unsigned long ip); extern void dept_sched_enter(void); extern void dept_sched_exit(void); +extern void dept_kernel_enter(void); static inline void dept_ecxt_enter_nokeep(struct dept_map *m) { @@ -552,6 +558,7 @@ struct dept_task { }; #define dept_ecxt_exit(m, e_f, ip) do { } while (0) #define dept_sched_enter() do { } while (0) #define dept_sched_exit() do { } while (0) +#define dept_kernel_enter() do { } while (0) #define dept_ecxt_enter_nokeep(m) do { } while (0) #define dept_key_init(k) do { (void)(k); } while (0) #define dept_key_destroy(k) do { (void)(k); } while (0) diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c index 19406093103e..9aba9eb22760 100644 --- a/kernel/dependency/dept.c +++ b/kernel/dependency/dept.c @@ -220,9 +220,9 @@ static struct dept_class *dep_tc(struct dept_dep *d) static const char *irq_str(int irq) { - if (irq == DEPT_SIRQ) + if (irq == DEPT_CXT_SIRQ) return "softirq"; - if (irq == DEPT_HIRQ) + if (irq == DEPT_CXT_HIRQ) return "hardirq"; return "(unknown)"; } @@ -406,7 +406,7 @@ static void initialize_class(struct dept_class *c) { int i; - for (i = 0; i < DEPT_IRQS_NR; i++) { + for (i = 0; i < DEPT_CXT_IRQS_NR; i++) { struct dept_iecxt *ie = &c->iecxt[i]; struct dept_iwait *iw = &c->iwait[i]; @@ -431,7 +431,7 @@ static void initialize_ecxt(struct dept_ecxt *e) { int i; - for (i = 0; i < DEPT_IRQS_NR; i++) { + for (i = 0; i < DEPT_CXT_IRQS_NR; i++) { e->enirq_stack[i] = NULL; e->enirq_ip[i] = 0UL; } @@ -447,7 +447,7 @@ static void initialize_wait(struct dept_wait *w) { int i; - for (i = 0; i < DEPT_IRQS_NR; i++) { + for (i = 0; i < DEPT_CXT_IRQS_NR; i++) { w->irq_stack[i] = NULL; w->irq_ip[i] = 0UL; } @@ -486,7 +486,7 @@ static void destroy_ecxt(struct dept_ecxt *e) { int i; - for (i = 0; i < DEPT_IRQS_NR; i++) + for (i = 0; i < DEPT_CXT_IRQS_NR; i++) if (e->enirq_stack[i]) put_stack(e->enirq_stack[i]); if (e->class) @@ -502,7 +502,7 @@ static void destroy_wait(struct dept_wait *w) { int i; - for (i = 0; i < DEPT_IRQS_NR; i++) + for (i = 0; i < DEPT_CXT_IRQS_NR; i++) if (w->irq_stack[i]) put_stack(w->irq_stack[i]); if (w->class) @@ -651,7 +651,7 @@ static void print_diagram(struct dept_dep *d) const char *tc_n = tc->sched_map ? "" : (tc->name ?: "(unknown)"); irqf = e->enirqf & w->irqf; - for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) { + for_each_set_bit(irq, &irqf, DEPT_CXT_IRQS_NR) { if (!firstline) pr_warn("\nor\n\n"); firstline = false; @@ -684,7 +684,7 @@ static void print_dep(struct dept_dep *d) const char *tc_n = tc->sched_map ? 
"" : (tc->name ?: "(unknown)"); irqf = e->enirqf & w->irqf; - for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) { + for_each_set_bit(irq, &irqf, DEPT_CXT_IRQS_NR) { pr_warn("%s has been enabled:\n", irq_str(irq)); print_ip_stack(e->enirq_ip[irq], e->enirq_stack[irq]); pr_warn("\n"); @@ -910,7 +910,7 @@ static void bfs(struct dept_class *c, bfs_f *cb, void *in, void **out) */ static unsigned long cur_enirqf(void); -static int cur_irq(void); +static int cur_cxt(void); static unsigned int cur_ctxt_id(void); static struct dept_iecxt *iecxt(struct dept_class *c, int irq) @@ -1458,7 +1458,7 @@ static void add_dep(struct dept_ecxt *e, struct dept_wait *w) if (d) { check_dl_bfs(d); - for (i = 0; i < DEPT_IRQS_NR; i++) { + for (i = 0; i < DEPT_CXT_IRQS_NR; i++) { struct dept_iwait *fiw = iwait(fc, i); struct dept_iecxt *found_ie; struct dept_iwait *found_iw; @@ -1494,7 +1494,7 @@ static void add_wait(struct dept_class *c, unsigned long ip, struct dept_task *dt = dept_task(); struct dept_wait *w; unsigned int wg = 0U; - int irq; + int cxt; int i; if (DEPT_WARN_ON(!valid_class(c))) @@ -1510,9 +1510,9 @@ static void add_wait(struct dept_class *c, unsigned long ip, w->wait_stack = get_current_stack(); w->sched_sleep = sched_sleep; - irq = cur_irq(); - if (irq < DEPT_IRQS_NR) - add_iwait(c, irq, w); + cxt = cur_cxt(); + if (cxt == DEPT_CXT_HIRQ || cxt == DEPT_CXT_SIRQ) + add_iwait(c, cxt, w); /* * Avoid adding dependency between user aware nested ecxt and @@ -1593,7 +1593,7 @@ static bool add_ecxt(struct dept_map *m, struct dept_class *c, eh->sub_l = sub_l; irqf = cur_enirqf(); - for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) + for_each_set_bit(irq, &irqf, DEPT_CXT_IRQS_NR) add_iecxt(c, irq, e, false); del_ecxt(e); @@ -1745,7 +1745,7 @@ static void do_event(struct dept_map *m, struct dept_class *c, add_dep(eh->ecxt, wh->wait); } - for (i = 0; i < DEPT_IRQS_NR; i++) { + for (i = 0; i < DEPT_CXT_IRQS_NR; i++) { struct dept_ecxt *e; if (before(dt->wgen_enirq[i], wg)) @@ -1787,7 +1787,7 @@ static void disconnect_class(struct dept_class *c) call_rcu(&d->rh, del_dep_rcu); } - for (i = 0; i < DEPT_IRQS_NR; i++) { + for (i = 0; i < DEPT_CXT_IRQS_NR; i++) { stale_iecxt(iecxt(c, i)); stale_iwait(iwait(c, i)); } @@ -1812,27 +1812,21 @@ static unsigned long cur_enirqf(void) return 0UL; } -static int cur_irq(void) +static int cur_cxt(void) { if (lockdep_softirq_context(current)) - return DEPT_SIRQ; + return DEPT_CXT_SIRQ; if (lockdep_hardirq_context()) - return DEPT_HIRQ; - return DEPT_IRQS_NR; + return DEPT_CXT_HIRQ; + return DEPT_CXT_PROCESS; } static unsigned int cur_ctxt_id(void) { struct dept_task *dt = dept_task(); - int irq = cur_irq(); + int cxt = cur_cxt(); - /* - * Normal process context - */ - if (irq == DEPT_IRQS_NR) - return 0U; - - return dt->irq_id[irq] | (1UL << irq); + return dt->cxt_id[cxt] | (1UL << cxt); } static void enirq_transition(int irq) @@ -1893,7 +1887,7 @@ static void dept_enirq(unsigned long ip) flags = dept_enter(); - for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) { + for_each_set_bit(irq, &irqf, DEPT_CXT_IRQS_NR) { dt->enirq_ip[irq] = ip; enirq_transition(irq); } @@ -1939,6 +1933,13 @@ void noinstr dept_hardirqs_off(void) dept_task()->hardirqs_enabled = false; } +void noinstr dept_kernel_enter(void) +{ + struct dept_task *dt = dept_task(); + + dt->cxt_id[DEPT_CXT_PROCESS] += 1UL << DEPT_CXTS_NR; +} + /* * Ensure it's the outmost softirq context. 
*/ @@ -1946,7 +1947,7 @@ void dept_softirq_enter(void) { struct dept_task *dt = dept_task(); - dt->irq_id[DEPT_SIRQ] += 1UL << DEPT_IRQS_NR; + dt->cxt_id[DEPT_CXT_SIRQ] += 1UL << DEPT_CXTS_NR; } /* @@ -1956,7 +1957,7 @@ void noinstr dept_hardirq_enter(void) { struct dept_task *dt = dept_task(); - dt->irq_id[DEPT_HIRQ] += 1UL << DEPT_IRQS_NR; + dt->cxt_id[DEPT_CXT_HIRQ] += 1UL << DEPT_CXTS_NR; } void dept_sched_enter(void) From patchwork Wed Jan 24 11:59:23 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529106 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id D217363118; Wed, 24 Jan 2024 12:00:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097609; cv=none; b=hGaJx/jr/aNz0vhTtRd75D6mPSVV8I953H/tincNIjtyFRl0hkhrKFlX/e5CPgZm3KFr6Qrrn/IqcjKU9ekIQgbSxcYMkI4Si+AEeUV8xkh4HmN7EHKD2xvtLDvP2Iw+NWwJ1dhA0NStUwoWXTjgiw5roFFNZrdDwlUOevoB+Mw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097609; c=relaxed/simple; bh=7Na745h6mqt6szPtcPf56L4nwkzcVctpfLScpiam1ys=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=OGWO5wPUbgL+WqVXnDHU9wfGzDSuM1ATrysUHZjjZ4t8+7pmc5z+Y76Lm1+bZMRwUAmEtb3mTDKA6IKUZ3Eg3KDCYylS3i6H8DK7vMNVsNUtdk5CWnmOXxcybr/qUZHWZf02lmpTQxRIwSnSSeszYRrdmZCW5NXCtXQokOzQoq0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-05-65b0fbb6d9f7 From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 12/26] dept: Distinguish each work from another Date: Wed, 24 Jan 2024 20:59:23 +0900 Message-Id: <20240124115938.80132-13-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: 
<20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSbUxTZxTH9zz3tR1drhfjrvQDpsaZsCAw1JywZWKm8ZpFojGLm5poM26k ESqWyovJEhgVEYSKDlFEw4t2pdSCBQ0OqgiCggOqdloJbyUMRAoYtjarMDde4peTX/7nn9/5 cliCv0uFsBqtXtJp1YkqWk7Kp4Iqwu/M1UmRb3vXQNHZSPD9nUtCWa2VBqetBoG1IQvDRPsO eOn3Ipjr7iWgpNiJoMIzQEBDxyACh/lnGp6PfgIu3wwNncX5NGRX1dLwdHIeQ//F8xhq7Lvg yblKDC2BcRJKJmi4UpKNF8ZrDAGThQFT5joYMZcyMO+Jgs7BFxQ4+j6Hy9f6aWh2dJLQ0TiC 4flvZTQMWv+j4EnHYxKcRQUU3JyupGHSbyLA5Jth4FlLOYY6w4Io56/3FDwqaMGQc/0WBter JgT3cocx2K0vaGjzeTHU24sJePdrO4KRwikGTp0NMHAlqxBB/qmLJPT++4gCQ/8mmPunjI6N Edu8M4RoqE8THf5yUuyqFMS7pQOMaLjXx4jl9hNivTlMrGqewGLFrI8S7ZYztGifPc+IeVMu LE739DDi40tzpDjqKsG7lfvlX8VLiZpUSRfx9WF5QunTGZT8Jjjd3WgkMlE1l4dkrMBtFNqf 5ZIfOLv7Br3INLdecLsDxCKv5NYI9QVjVB6SswR3+mPB/LZ7qRTMfSMEJodwHmJZklsnDPmj FmMFt1moriyilp2hQk1dy5JHtpDfvNy3dIvnNgnDFiOz3DktE3qGjy7zauGB2U2eQ4py9JEF 8RptapJak7hxQ0KGVpO+4cdjSXa08FCmn+YPNKJZ595WxLFIFaSItdRKPKVOTclIakUCS6hW KtyrbRKviFdnnJR0xw7pTiRKKa1IyZKqTxVf+NPiee6IWi8dlaRkSfdhi1lZSCYKndV7tF9e Gkx2PIylD/zQVl0UuyX/zavPsvT2bU2CRRZy4XdP7drt3wdbf9kft/m1bToo7mCyfpcx7Ez0 7WjliiHvvlV/mka3RmjjQnv/cL63RX5n48ciwuF+Om9U9nu6+D2ab0NdVyXDceWWmCrdir1V xeOFxoGdbdHebpzTtVNFpiSoo8IIXYr6f6shTHFMAwAA X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0hTcRjG+//P1dXqMI1OJRQjE4wuRsoLRgVBHYIuRFFYUcNOObxkm1p2 gdW0i2WmZOYtdNbSqalnUnYxlpZl4dQcecFmSTevi9rmlmZp0ZeHH8/78vv0sIQin5rHqmPj RU2sKlpJy0jZljD90ntjVeIKU4EPZFxeAS7nBRLyK8tpaL1bhqC85gyG/ucbocM9hGCsuYWA 7KxWBEUf3hFQ02hHUFdylob2jzPB5nLQ0JR1iQZ9cSUNbYPjGHquZ2IokzbD66sGDBbvFxKy +2nIy9bjyfiKwWs0MWDUBUBfSS4D4x+Cocn+loKGgiYK6rqXQM7NHhoe1zWR0Fjbh6H9YT4N 9vLfFLxufElCa0YaBRUjBhoG3UYCjC4HA28shRiqkidt535MUPAizYLh3K1qDLauRwieXHiP QSp/S0ODawiDWcoi4Oed5wj6rgwzkHLZy0DemSsILqVcJ6Hl1wsKkntCYMyTT68LExqGHISQ bD4m1LkLSeGVgRce5L5jhOQn3YxQKCUI5pIgofhxPxaKvrsoQTJdpAXpeyYjpA7bsDBitTLC yxtjpPDRlo23+YfLVh8Uo9WJomb5mgOyyNw2B4ob8D3eWZtO6FApl4p8WJ5bxeubb9NTTHOB fGenl5hiP24hb077TKUiGUtw56fzJd+a/z75cut572AvTkUsS3IBfK87eKqWc6F8qSGD+udc wJdVWf56fCb7ipxucooVXAj/3pTOXEWyQjTNhPzUsYkxKnV0yDJtVGRSrPr4sogjMRKa3Izx 9HhGLXK2b6xHHIuUM+TrTJWiglIlapNi6hHPEko/eefcu6JCflCVdELUHNmvSYgWtfVoPksq 58g37RIPKLjDqngxShTjRM3/K2Z95ulQ+LUBa4H/jkdOs2QY0EbJ2rqe7UUTszZvn50Zag8s yJl1dqXFP2frsO3U7pR9o0l+XWG9Vq/kXHm6xRq03TfOEkD2w77S2eHunYoIp2f508TaT2vN 96vbJceGE4sO9Xo6UuL0oYf2hHboKs7DYPrDw57Fo0cD7eRJT17xaJduQklqI1XBQYRGq/oD klMBFy8DAAA= X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Workqueue already provides concurrency control. By that, any wait in a work doesn't prevents events in other works with the control enabled. Thus, each work would better be considered a different context. So let Dept assign a different context id to each work. 
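To make the new id scheme concrete: the hunks below reuse the process-context slot introduced for syscalls and bump its generation once per work item, so two works executed back to back by the same worker thread get different context ids while the low bits still tag them as process context. The arithmetic can be modeled in a few lines of plain C. This is a user-space sketch for illustration only; the enum values and the two expressions are taken from this series, everything else (the function names, the demo main) is invented for the demo.

/*
 * User-space model of the context-id scheme, not kernel code.
 * The low DEPT_CXTS_NR bits tag the kind of context; the bits above
 * them hold a per-kind generation that is bumped whenever a new
 * instance starts (a syscall, a work item, a softirq or hardirq entry).
 */
#include <stdio.h>

enum {
        DEPT_CXT_SIRQ = 0,
        DEPT_CXT_HIRQ,
        DEPT_CXT_IRQS_NR,
        DEPT_CXT_PROCESS = DEPT_CXT_IRQS_NR,
        DEPT_CXTS_NR
};

static unsigned long cxt_id[DEPT_CXTS_NR];

/* Mirrors dept_work_enter()/dept_kernel_enter(): open a new process-context instance. */
static void new_process_instance(void)
{
        cxt_id[DEPT_CXT_PROCESS] += 1UL << DEPT_CXTS_NR;
}

/* Mirrors cur_ctxt_id(): generation bits OR'ed with the context-kind bit. */
static unsigned long cur_ctxt_id(int cxt)
{
        return cxt_id[cxt] | (1UL << cxt);
}

int main(void)
{
        new_process_instance();         /* first work picked up by a worker */
        printf("work #1 id: %#lx\n", cur_ctxt_id(DEPT_CXT_PROCESS));

        new_process_instance();         /* second work on the same worker */
        printf("work #2 id: %#lx\n", cur_ctxt_id(DEPT_CXT_PROCESS));
        return 0;
}

Running it prints two distinct ids (0xc and 0x14 with these values), which is what lets Dept stop chaining waits and events across different works run by one worker thread.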
Signed-off-by: Byungchul Park --- include/linux/dept.h | 2 ++ kernel/dependency/dept.c | 10 ++++++++++ kernel/workqueue.c | 3 +++ 3 files changed, 15 insertions(+) diff --git a/include/linux/dept.h b/include/linux/dept.h index 4e359f76ac3c..319a5b43df89 100644 --- a/include/linux/dept.h +++ b/include/linux/dept.h @@ -509,6 +509,7 @@ extern void dept_ecxt_exit(struct dept_map *m, unsigned long e_f, unsigned long extern void dept_sched_enter(void); extern void dept_sched_exit(void); extern void dept_kernel_enter(void); +extern void dept_work_enter(void); static inline void dept_ecxt_enter_nokeep(struct dept_map *m) { @@ -559,6 +560,7 @@ struct dept_task { }; #define dept_sched_enter() do { } while (0) #define dept_sched_exit() do { } while (0) #define dept_kernel_enter() do { } while (0) +#define dept_work_enter() do { } while (0) #define dept_ecxt_enter_nokeep(m) do { } while (0) #define dept_key_init(k) do { (void)(k); } while (0) #define dept_key_destroy(k) do { (void)(k); } while (0) diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c index 9aba9eb22760..a8e693fd590f 100644 --- a/kernel/dependency/dept.c +++ b/kernel/dependency/dept.c @@ -1933,6 +1933,16 @@ void noinstr dept_hardirqs_off(void) dept_task()->hardirqs_enabled = false; } +/* + * Assign a different context id to each work. + */ +void dept_work_enter(void) +{ + struct dept_task *dt = dept_task(); + + dt->cxt_id[DEPT_CXT_PROCESS] += 1UL << DEPT_CXTS_NR; +} + void noinstr dept_kernel_enter(void) { struct dept_task *dt = dept_task(); diff --git a/kernel/workqueue.c b/kernel/workqueue.c index 2989b57e154a..4452864b918b 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c @@ -53,6 +53,7 @@ #include #include #include +#include #include "workqueue_internal.h" @@ -2549,6 +2550,8 @@ __acquires(&pool->lock) lockdep_copy_map(&lockdep_map, &work->lockdep_map); #endif + dept_work_enter(); + /* ensure we're on the correct CPU */ WARN_ON_ONCE(!(pool->flags & POOL_DISASSOCIATED) && raw_smp_processor_id() != pool->cpu); From patchwork Wed Jan 24 11:59:24 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529108 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 0E2E7612D2; Wed, 24 Jan 2024 12:00:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097609; cv=none; b=E2YuDwxI97bH+DDKRn6EqOc138KMqvG+0Tx7i3Zkk95DKGWVqzGWaGm5oF0n9hS0mQEd+sxNsvnrVRBiS0kl5mxZZag36jWJrrPdD1LZjB2oWQaIgTgGMQR83tDcz0VC5xcmjXYFp1ELS5/odbDeFnY9K4WcCNRTY8eLtIn+t3o= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097609; c=relaxed/simple; bh=FoAZA+0jlnYhnPe3f+IjJ7FMJD3ZMq2wQkEX2mpeEjA=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=CFYEfpnw9vkaUtq2rUZC03CzGN10BBX4bRya1MCOUIgTeV3SQiVXdbAJ40urlwDexGMwpr5he606k5+LhavRCcFK6p2soRg+GIdxyq6u4INDdMxVrxSLhSuqRHHubrB9kEVvgeNityFSIWdBWCn01H1ePjU9dxGlcz7MfLKZ24E= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-15-65b0fbb69c45 From: 
Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 13/26] dept: Add a mechanism to refill the internal memory pools on running out Date: Wed, 24 Jan 2024 20:59:24 +0900 Message-Id: <20240124115938.80132-14-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSbUyTZxSG9zzvJ4Wa1+rkRWNm6gcJi9+iJ8apicn2/DFx8Y9xiVrhVapQ SBGUGQ0qilpA0EBFyAJlaWvBUls0ooIdBhQ7tEitlQDRTlSkCEFbRZDZYvxzcuW+c67z5/CU 4hYzm1drDkhajSpVycpo2VBM9eLr4zZp2eOCBCgpWAbBD6dpqKyvY8FtrUVQ13AMw0Drb/A0 FEAw3vGIAn2pG0H1i14KGtr6EDSZj7PQ9XIaeILDLLSX6lg4UVPPQufgBIaesvMYau2bwVVs wOAce02DfoCFCv0JHB5vMIwZLRwYcxeC33yJg4kXy6G9z8tAU/fPUP5XDwu3m9ppaLvhx9B1 s5KFvrr/GXC13afBXVLIwJV3BhYGQ0YKjMFhDh47qzDY8sKiU+8nGbhX6MRw6u+rGDzPbiFo Pv0cg73Oy8LdYACDw15KwWdTKwJ/0RAHJwvGOKg4VoRAd7KMhkdf7jGQ15MI458q2Y1ryd3A MEXyHAdJU6iKJg8MImm81MuRvOZujlTZs4jDnEBqbg9gUj0aZIjdcoYl9tHzHDk75MHk3cOH HLl/cZwmLz16vGXOdtm6ZClVnS1pl67fJUt54qrgMtwbDvXbQnQualxxFkXxorBK7PS66O88 6Q0wEWaFeNHnG6MiPFOYJzoKX4VzGU8J+dGieaSDjRQzhCSx/F8vF2FaWCgWtzmnWC6sFgc+ e6hv0p/EWptziqPC+ZXy7qljCiFRfG45x0WkopAfJU4W3cHfFuLEf8w+uhjJq9APFqRQa7LT VOrUVUtScjTqQ0uS0tPsKPxSxiMTf9xAo+6tLUjgkTJGvtFSLykYVXZmTloLEnlKOVPui7NK CnmyKudPSZu+U5uVKmW2oDk8rYyVrwgdTFYIe1UHpP2SlCFpv7eYj5qdixZpDXv2yGNLkjZ1 bHEPXq5J88e8ssnKfrlQYFKsdNr/c23OoHvn1/eb1lib0e+/pjuOxiib9dHvL378cZujd2l0 I4nX7PB2fjx8Z+SpZySOyipNXGMyrdw+fbFrn+9CwlzvtFjzVsOzBcOB3ddnWXXz+t8odNe6 WnXWt/kb/JjnlHRmimp5AqXNVH0FNmlKKE4DAAA= X-Brightmail-Tracker: H4sIAAAAAAAAAzXSfUzMcRwH8L6/x+s4+zmNn2TabWZ5KlT7jKgZ68s8jT/aZNPhp266cOdS xlZ6ED242io9IGUnVyl38ljtXKskcXROWjU1ofWMu0mFO+afz157v7f3Xx8RKb1Ke4oUMacE VYw8WsaIKfGuDUmr7k/VCH59PXMhO8MP7N/TKCiurmTAcqcCQeW9RAIGm0LhnWMYwVT7KxLy cy0IbvT1kHCvuRdBffl5Bjo+zgGrfYyB1tx0BpLKqhl4PTRNQHdeDgEVhp3Qpi0lwDT5mYL8 QQaK8pMI5/lCwKROz4IuYSn0lxeyMN23Blp7bTQ0Xm2lob5rBRRc62agrr6VguaH/QR0PC5m oLfyNw1tzc8osGRn0lA1WsrAkENHgs4+xsIbUwkBNcnOtdRvv2hoyTQRkHrzLgHW908QNKR9 IMBQaWOg0T5MgNGQS8LPW00I+rNGWEjJmGShKDELQXpKHgWvZlpoSO4OgKkfxUzIBtw4PEbi ZONpXO8oofDzUh4/KuxhcXJDF4tLDBpsLF+Oy+oGCXzjq53GBv1FBhu+5rD40oiVwKMvX7L4 2ZUpCn+05hN7vPaLg44I0YpYQeW7KUIc9batiD1hCY4bqHFQCejR2kvIXcRz/vwv2zDtMsMt 
4zs7J0mXPThv3pj5yZmLRSR3YRZfPt7OuIp53GG+4IWNdZnilvLaZtNfS7hAfvCnlfw3uoSv qDH9tbszryroolyWcgH8B/1lVovEJchNjzwUMbFKuSI6YLX6WFR8jCJu9eHjSgNyPo3u3HT2 Q/S9I9SMOBGSzZaE6KsFKS2PVccrzYgXkTIPSefCO4JUckQef0ZQHT+o0kQLajNaJKJkCyTb w4QIKRcpPyUcE4QTgup/S4jcPRPQxqi0pz0TDes909VuPnleZ8IjX+8YH7K2K83c4s1f1OqI rpCnA4e2+Qva3Xs1+zYXj648MHJ9x+fLJ/02SpWh82xxtd5ZOGugLfDBxNGUMMvIA39N4Lqz tyt83lzcchJTz3ODI9u3htU6ZoydhkifDIW5LH1+7dwmukWj9T0XHrRfRqmj5GuWkyq1/A+q DDFPMAMAAA== X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Dept engine works in a constrained environment. For example, Dept cannot make use of dynamic allocation e.g. kmalloc(). So Dept has been using static pools to keep memory chunks Dept uses. However, Dept would barely work once any of the pools gets run out. So implemented a mechanism for the refill on the lack by any chance, using irq work and workqueue that fits on the contrained environment. Signed-off-by: Byungchul Park --- include/linux/dept.h | 19 ++++-- kernel/dependency/dept.c | 104 +++++++++++++++++++++++++++----- kernel/dependency/dept_object.h | 10 +-- kernel/dependency/dept_proc.c | 8 +-- 4 files changed, 112 insertions(+), 29 deletions(-) diff --git a/include/linux/dept.h b/include/linux/dept.h index 319a5b43df89..ca1a34be4127 100644 --- a/include/linux/dept.h +++ b/include/linux/dept.h @@ -336,9 +336,19 @@ struct dept_pool { size_t obj_sz; /* - * the number of the static array + * the remaining number of the object in spool */ - atomic_t obj_nr; + int obj_nr; + + /* + * the number of the object in spool + */ + int tot_nr; + + /* + * accumulated amount of memory used by the object in byte + */ + atomic_t acc_sz; /* * offset of ->pool_node @@ -348,9 +358,10 @@ struct dept_pool { /* * pointer to the pool */ - void *spool; + void *spool; /* static pool */ + void *rpool; /* reserved pool */ struct llist_head boot_pool; - struct llist_head __percpu *lpool; + struct llist_head __percpu *lpool; /* local pool */ }; struct dept_ecxt_held { diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c index a8e693fd590f..8ca46ad98e10 100644 --- a/kernel/dependency/dept.c +++ b/kernel/dependency/dept.c @@ -74,6 +74,9 @@ #include #include #include +#include +#include +#include #include "dept_internal.h" static int dept_stop; @@ -122,9 +125,11 @@ static int dept_per_cpu_ready; WARN(1, "DEPT_STOP: " s); \ }) -#define DEPT_INFO_ONCE(s...) pr_warn_once("DEPT_INFO_ONCE: " s) +#define DEPT_INFO_ONCE(s...) pr_warn_once("DEPT_INFO_ONCE: " s) +#define DEPT_INFO(s...) 
pr_warn("DEPT_INFO: " s) static arch_spinlock_t dept_spin = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED; +static arch_spinlock_t dept_pool_spin = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED; /* * DEPT internal engine should be careful in using outside functions @@ -263,6 +268,7 @@ static bool valid_key(struct dept_key *k) #define OBJECT(id, nr) \ static struct dept_##id spool_##id[nr]; \ +static struct dept_##id rpool_##id[nr]; \ static DEFINE_PER_CPU(struct llist_head, lpool_##id); #include "dept_object.h" #undef OBJECT @@ -271,14 +277,70 @@ struct dept_pool dept_pool[OBJECT_NR] = { #define OBJECT(id, nr) { \ .name = #id, \ .obj_sz = sizeof(struct dept_##id), \ - .obj_nr = ATOMIC_INIT(nr), \ + .obj_nr = nr, \ + .tot_nr = nr, \ + .acc_sz = ATOMIC_INIT(sizeof(spool_##id) + sizeof(rpool_##id)), \ .node_off = offsetof(struct dept_##id, pool_node), \ .spool = spool_##id, \ + .rpool = rpool_##id, \ .lpool = &lpool_##id, }, #include "dept_object.h" #undef OBJECT }; +static void dept_wq_work_fn(struct work_struct *work) +{ + int i; + + for (i = 0; i < OBJECT_NR; i++) { + struct dept_pool *p = dept_pool + i; + int sz = p->tot_nr * p->obj_sz; + void *rpool; + bool need; + + arch_spin_lock(&dept_pool_spin); + need = !p->rpool; + arch_spin_unlock(&dept_pool_spin); + + if (!need) + continue; + + rpool = vmalloc(sz); + + if (!rpool) { + DEPT_STOP("Failed to extend internal resources.\n"); + break; + } + + arch_spin_lock(&dept_pool_spin); + if (!p->rpool) { + p->rpool = rpool; + rpool = NULL; + atomic_add(sz, &p->acc_sz); + } + arch_spin_unlock(&dept_pool_spin); + + if (rpool) + vfree(rpool); + else + DEPT_INFO("Dept object(%s) just got refilled successfully.\n", p->name); + } +} + +static DECLARE_WORK(dept_wq_work, dept_wq_work_fn); + +static void dept_irq_work_fn(struct irq_work *w) +{ + schedule_work(&dept_wq_work); +} + +static DEFINE_IRQ_WORK(dept_irq_work, dept_irq_work_fn); + +static void request_rpool_refill(void) +{ + irq_work_queue(&dept_irq_work); +} + /* * Can use llist no matter whether CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG is * enabled or not because NMI and other contexts in the same CPU never @@ -314,19 +376,31 @@ static void *from_pool(enum object_t t) /* * Try static pool. */ - if (atomic_read(&p->obj_nr) > 0) { - int idx = atomic_dec_return(&p->obj_nr); + arch_spin_lock(&dept_pool_spin); + + if (!p->obj_nr) { + p->spool = p->rpool; + p->obj_nr = p->rpool ? p->tot_nr : 0; + p->rpool = NULL; + request_rpool_refill(); + } + + if (p->obj_nr) { + void *ret; + + p->obj_nr--; + ret = p->spool + (p->obj_nr * p->obj_sz); + arch_spin_unlock(&dept_pool_spin); - if (idx >= 0) - return p->spool + (idx * p->obj_sz); + return ret; } + arch_spin_unlock(&dept_pool_spin); - DEPT_INFO_ONCE("---------------------------------------------\n" - " Some of Dept internal resources are run out.\n" - " Dept might still work if the resources get freed.\n" - " However, the chances are Dept will suffer from\n" - " the lack from now. Needs to extend the internal\n" - " resource pools. Ask max.byungchul.park@gmail.com\n"); + DEPT_INFO("------------------------------------------\n" + " Dept object(%s) is run out.\n" + " Dept is trying to refill the object.\n" + " Nevertheless, if it fails, Dept will stop.\n", + p->name); return NULL; } @@ -2957,8 +3031,8 @@ void __init dept_init(void) pr_info("... DEPT_MAX_ECXT_HELD : %d\n", DEPT_MAX_ECXT_HELD); pr_info("... DEPT_MAX_SUBCLASSES : %d\n", DEPT_MAX_SUBCLASSES); #define OBJECT(id, nr) \ - pr_info("... 
memory used by %s: %zu KB\n", \ - #id, B2KB(sizeof(struct dept_##id) * nr)); + pr_info("... memory initially used by %s: %zu KB\n", \ + #id, B2KB(sizeof(spool_##id) + sizeof(rpool_##id))); #include "dept_object.h" #undef OBJECT #define HASH(id, bits) \ @@ -2966,6 +3040,6 @@ void __init dept_init(void) #id, B2KB(sizeof(struct hlist_head) * (1 << (bits)))); #include "dept_hash.h" #undef HASH - pr_info("... total memory used by objects and hashs: %zu KB\n", B2KB(mem_total)); + pr_info("... total memory initially used by objects and hashs: %zu KB\n", B2KB(mem_total)); pr_info("... per task memory footprint: %zu bytes\n", sizeof(struct dept_task)); } diff --git a/kernel/dependency/dept_object.h b/kernel/dependency/dept_object.h index 0b7eb16fe9fb..4f936adfa8ee 100644 --- a/kernel/dependency/dept_object.h +++ b/kernel/dependency/dept_object.h @@ -6,8 +6,8 @@ * nr: # of the object that should be kept in the pool. */ -OBJECT(dep, 1024 * 8) -OBJECT(class, 1024 * 8) -OBJECT(stack, 1024 * 32) -OBJECT(ecxt, 1024 * 16) -OBJECT(wait, 1024 * 32) +OBJECT(dep, 1024 * 4 * 2) +OBJECT(class, 1024 * 4) +OBJECT(stack, 1024 * 4 * 8) +OBJECT(ecxt, 1024 * 4 * 2) +OBJECT(wait, 1024 * 4 * 4) diff --git a/kernel/dependency/dept_proc.c b/kernel/dependency/dept_proc.c index 7d61dfbc5865..f07a512b203f 100644 --- a/kernel/dependency/dept_proc.c +++ b/kernel/dependency/dept_proc.c @@ -73,12 +73,10 @@ static int dept_stats_show(struct seq_file *m, void *v) { int r; - seq_puts(m, "Availability in the static pools:\n\n"); + seq_puts(m, "Accumulated amount of memory used by pools:\n\n"); #define OBJECT(id, nr) \ - r = atomic_read(&dept_pool[OBJECT_##id].obj_nr); \ - if (r < 0) \ - r = 0; \ - seq_printf(m, "%s\t%d/%d(%d%%)\n", #id, r, nr, (r * 100) / (nr)); + r = atomic_read(&dept_pool[OBJECT_##id].acc_sz); \ + seq_printf(m, "%s\t%d KB\n", #id, r / 1024); #include "dept_object.h" #undef OBJECT From patchwork Wed Jan 24 11:59:25 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529109 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id CB41763410; Wed, 24 Jan 2024 12:00:06 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097610; cv=none; b=twyUBzbMAU4KNDiCoKiVXkXlvHHM5+Cht4o2/prwIZAD2Nhj1vpGSlGCCBzvnc7ViXVO1ub9Pr0dKMjRTlNpEDdNCFoMSFzmID4GGazfo0m8Pe8M8lq0gugT/uixw5TDg/dF3a3/okowUPhv2p1AfDljpG/v8pVeRdE8LwIqOJc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097610; c=relaxed/simple; bh=EIC7RWTml3WGFeQFxVb2eS9TiiJPhRDEmqyuGxWQs6c=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=AQF3cKVnEYujo9BD2kLaUsb4p4Fc1GmMNaBZztM4MbjW1jJpnsNbtFr/nxqxrqUnDEUanw3IslczltifqvZHC43Cf0TmXj3aGCuQ9nUK7y5+UsALJiR8wBW61X8Mbz9kpqsT4VqZqoEjF8TNwPoBjDwiE/LKMqIxA/zAgC4QhXo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-25-65b0fbb64f6a From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, 
damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 14/26] locking/lockdep, cpu/hotplus: Use a weaker annotation in AP thread Date: Wed, 24 Jan 2024 20:59:25 +0900 Message-Id: <20240124115938.80132-15-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSW0wTaRTH9/tm5puh2s1YTRjkwd1Gs8rGC8SaIxpDfHC/jZo1IfFhtVkn dBaq3CyCYjSCVkAu3giwKEFALU0p0i1m44IYqBHBC1asCgSrEKMSiyhatFIvBePLyS/n/P+/ pyMwmhZurmBM3aWYUuVkLVGxqtGZtYv/m3Qoy8rtS+FE8TLwvytgoarJTsB9oQGB/WIuhpFr v8HDCR+Cydt3GKgocyOoHXrEwMVOL4I260EC957+CB7/GIHusiICh842Ebj7MohhsPwkhgbn Rrh5vA5De+A5CxUjBE5XHMKh8QJDwGLjwZKzAIatp3gIDkVDt/cBB20Dv0Jl9SCBy23dLHRe GsZwr6WKgNf+hYObnV0suE+UcND4qo7AywkLAxb/GA+97TUYHOaQKO/tZw6ul7RjyDv3LwZP fyuCKwVPMDjtDwhc9fswNDvLGPhYfw3B8NFRHg4XB3g4nXsUQdHhchbufLrOgXlQB5Mfqkhc LL3qG2OouXk3bZuoYemNOon+f+oRT81XBnha48ykzdYoevbyCKa1436OOm1HCHWOn+Rp4agH 01c9PTzt+meSpU89FXhT5J+q1QYl2ZilmJau2aZK8p65i9Lr+T2luV6Sg0pIIQoTJHG5lFeZ w37nx+5cfoqJ+IvU1xdgpniO+JPUXPKMK0QqgRHzZ0jW17dDZUGYLeolc0HCVIYVF0jDwaFp j1pcIeW3vue+OedJDY72aU9YaN9YOTCd0Yg66YntGD/llMT8MKnQ189/K0RIHdY+9jhS16Af bEhjTM1KkY3Jy5ckZaca9yxJSEtxotBHWfYHt1xC4+54FxIFpJ2pjrM1KRpOzsrITnEhSWC0 c9R9ERcUjdogZ+9VTGl/mTKTlQwXihRYbbg6ZmK3QSMmyruUHYqSrpi+X7EQNjcH/dzo8R0w rPm9eCG7M+FzZmKr60bcjIX6KH94esQfmwNKwaqh2YZ989+Q2Flm3YtofZFjJbeaBiM+zLNy G2IeKgm1+7urS7NUqm2RvfdLR/AGWS+v77/Vm3cuVpeSuFbYujjNsaKrp/r89vhF74/F/62a pYtuWdexpb4j/K1rkT5Gy2YkydFRjClD/gpVmRPATQMAAA== X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0iTYRTHe5736mrxugzfrEgGFRldBLVDRkQFvkRJX7pJocNecrRZbGpa BOqm2VJrK52Z5S2WzZU2JewyE03LJLM0NbGVEpnlpdRZOrvMoi+HH+f8z+/TnyVkBZQfq4yN EzWxCpWclpCS8FDdmrvuSnG9w7QIjJnrwTWRQUJBhY2GttvlCGzVKRgGG8Oga3IIgfv5CwLM OW0IivveElDd5ETgKEulof3DfOhwjdLQnHOOBl1pBQ0vv8xg6M01YSi374KWCyUY6qYGSDAP 0nDFrMOe8QnDlMXKgCV5OfSX5TMw0xcIzc5OChquNlPg6FkNl6/10vDQ0UxCU00/hvb7BTQ4 bb8paGl6SkKbMYuCWyMlNHyZtBBgcY0y8KquCEOl3mNLH/9FwZOsOgzp1+9g6HjzAEFtxnsM dlsnDQ2uIQxV9hwCpm80IujPHmYgLXOKgSsp2QjOpeWS8OLnEwr0vcHg/lFAbwkVGoZGCUFf dUJwTBaRwrMSXriX/5YR9LU9jFBkjxeqygKE0oeDWCgec1GC3XqWFuxjJkYwDHdgYaS1lRGe 5rlJ4UOHGe9eEiHZdFhUKRNEzbrNUZIYZ+FLdPwGk3gxxUknoyzagLxYngvi37WlMLNMcyv5 7u4pYpZ9OH++KusjZUASluDOzOXLvj73PLDsAu4Qr8+Ins2Q3HK+f6aPnGUpF8KfefCd+udc 
xpdX1v31eHn2ty73/M3IuGD+vfU8cwFJitAcK/JRxiaoFUpV8Frt0ZikWGXi2uhjajvydMZy esZYgybaw+oRxyL5POkWa4UooxQJ2iR1PeJZQu4j7V50W5RJDyuSToqaY5GaeJWorUeLWVLu K92xT4yScUcUceJRUTwuav5fMevll4y88/bopGrvrt966/4wVdw48t2wN68ndclih7Eln3m9 ovsRPwBXmcSBWt/oYrw9U1fYGFl4szRd1xg+bbD/XGhYJY8IVZuWRu48cNB78lKnO+CzLWSg OHt6bFfkqv2KmDW5ERv9t1aOk6ao9i7zKeNF79ZvPx4HFcY1BY294wK3yUltjCIwgNBoFX8A 2+WWAC8DAAA= X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: cb92173d1f0 ("locking/lockdep, cpu/hotplug: Annotate AP thread") was introduced to make lockdep_assert_cpus_held() work in AP thread. However, the annotation is too strong for that purpose. We don't have to use more than try lock annotation for that. Furthermore, now that Dept was introduced, false positive alarms was reported by that. Replaced it with try lock annotation. Signed-off-by: Byungchul Park --- kernel/cpu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/cpu.c b/kernel/cpu.c index a86972a91991..b708989f789f 100644 --- a/kernel/cpu.c +++ b/kernel/cpu.c @@ -535,7 +535,7 @@ int lockdep_is_cpus_held(void) static void lockdep_acquire_cpus_lock(void) { - rwsem_acquire(&cpu_hotplug_lock.dep_map, 0, 0, _THIS_IP_); + rwsem_acquire(&cpu_hotplug_lock.dep_map, 0, 1, _THIS_IP_); } static void lockdep_release_cpus_lock(void) From patchwork Wed Jan 24 11:59:26 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529110 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 32B97634E7; Wed, 24 Jan 2024 12:00:07 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097610; cv=none; b=OyISF8rXrVF8Amgwmp8QnyWHp5XVeHJkr3xdFpjYs7ms3sahCClTIr3EaSiRvcVEi14mbm7SvatpJxF8HxR80lg1xYrCiedC9T2csVkFOoJcPm4mj+VjJ5ouZ5FAR1ZqGfGBclfo9csN5+81E/jpXp23drO2HaHXxiOKLRvbM5s= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097610; c=relaxed/simple; bh=t4Qh0s/9NGiebNos8RWbTI6PuT087Z/eTqnpXervcDg=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=jhxmndNZ6Qr5he0SiJ8rDB7uj376EeUGfTawsjczY0cwYWRDHRWCJ/yPaRZTi+jLf9AFl3CqpD4h9t/Oi38NRYDwNmRGgYMbG1xMfED42Kbx9Uu4HuAx3Im6QG6esoeMigyP3jdSCiHLhCK6MWqaLUsMoCLqN2FMAq4oqUt4Evo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-35-65b0fbb6fc5a From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, 
minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 15/26] dept: Apply sdt_might_sleep_{start,end}() to dma fence wait Date: Wed, 24 Jan 2024 20:59:26 +0900 Message-Id: <20240124115938.80132-16-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzXSW1BMcRwHcP9zb7XmzLqdynXNjpmMyxrMT2PIA04PGoZB9cCODrujFhvV MlHaqFW5TUU3tbLWlsouKaxJphKVpYZsabRTdI/YRheXlvHym8/8Lt+nH4NLKkhvRqU+JmjU ijApJSJEA575S8vGS4UVjTo/uJS8AlzfEwnILimiwF5ciKDoXhwGPdVb4N1IP4Lxhlc4ZKTZ EeR3fMDhXk07ApvpDAVNndOg2TVEQV3aeQrib5RQ8LpvAoO29MsYFFq2wsuLBgwqRz8TkNFD QVZGPDZZujEYNZppMMbKwGnKpGGiQw517W9JsDmWwLXcNgoe2+oIqCl3YtD0MJuC9qLfJLys eU6A/VIKCXcGDRT0jRhxMLqGaHhTmYdBqW4y6Oy3XyTUplRicLbgLgbN7x8heJL4EQNL0VsK nrn6MbBa0nAYu1WNwJk6QENC8igNWXGpCM4npBPw6mctCbq21TD+I5vy9+Of9Q/hvM4axdtG 8gj+hYHjKzI/0LzuiYPm8yzHeavJl7/xuAfj84ddJG8xJ1G8ZfgyzesHmjF+sLGR5p9fHSf4 zuYMbJtPsGhdqBCmihQ0y9fvEylTWxvRkThxtOU6HYu6RHrkwXDsKi69tx79t6E6AXebYhdz LS2jfz2DXcBZUz6ReiRicPbcVM70pYHSI4aZzu7iTH073DsEK+MKvr6g3Raza7ghVy/+L3M+ V1ha+dcek/071xyE2xJ2NffRfIF2Z3JsvAfXcdNJ/zvw4p6aWoiLSJyHppiRRKWODFeowlYt U2rVquhl+w+HW9DkQxljJkLK0bB9RxViGST1FPubSwQJqYiM0IZXIY7BpTPELV7FgkQcqtCe EDSH92qOhwkRVciHIaSzxStHokIl7EHFMeGQIBwRNP+nGOPhHYtSCwOaSkLG5FcOaNf6+Hnl dA5oF0Jgr+yBv+3rvpPBZdJ665hj02nrnk3MbbHBWRxU9z5Qbr/fqgv2PCWbrSuIDIq5Kquo LY6ex27flax2ytUBcaG5c4Wj7UmEsq90w/WbuYtyujbvnNetfOc/p2tmcGDAXt9p+vRZG2+d ++GA3VIiQqmQ++KaCMUfAxQQ8EwDAAA= X-Brightmail-Tracker: H4sIAAAAAAAAAzXSf0yMcRwHcN/nd6ezx7nNI4admVUkm/hs0rLZPLMxw5gfm57pUaer7K6O DCt1J0eptiRFp3LO3alcNFKtdfo9iRpJWhqSfhzqmnMdivnns9c+78/ef30YXFZA+jHKuARR HSeoFJSEkOzclLqmylMhBrdML4fsy8HgmkwnoLDcRkFnmRWB7UEKBsON2+D11CgCz7PnOOTl diK49f4dDg+a+hHUms9T0PVhHnS7nBS05l6iILWknIIXI9MY9F3NwcBq3wHtWcUY1LuHCMgb pqAgLxWbGZ8xcJssNJiSV8Kg+ToN0+/XQWv/KxIcN1pJqO0NhPybfRTU1LYS0PRoEIOu6kIK +m2/SWhvaiGgMzuDhHvjxRSMTJlwMLmcNLysN2JQkTbTpp/4RUJzRj0G+tL7GHS/eYKgLn0A A7vtFQUO1ygGlfZcHH7eaUQwmDlGg+6ym4aClEwEl3RXCXjubSYhrS8EPD8KqfBNvGPUifNp lSf52ikjwbcVc/zj6+9oPq2ul+aN9kS+0hzAl9QMY/yt7y6St1suUrz9ew7NG8a6MX68o4Pm W655CP5Ddx62a8lBSWikqFJqRfXasAhJdObbDnQiRXrKXkQno48SA/JhOHY9V9yow2dNsau4 nh73X8vZ5VxlxifSgCQMzl6Yy5m/PqMMiGEWsPs488ie2RuCXcmVfmujZy1lN3BO1xf8X+cy zlpR/9c+M/t7+b3ErGVsCDdguUJnIYkRzbEguTJOGysoVSFBmpjopDjlqaCj8bF2NPMyprPT 2Y/QZNe2BsQySOErDbeUizJS0GqSYhsQx+AKubRnUZkok0YKSadFdfwRdaJK1DSgxQyhWCjd vl+MkLFRQoIYI4onRPX/FGN8/JLR/MNLI6u8+bvu6ir2/YzCgw9vVYX2h7qE8pGevf7Hgs+c G9dD1UWnPP7axv0Td7mh12FX3Hpv5u1Jw0S7XKJvPzdQV/3DI+R4OzYbDVmDhzYEhhWRCdXp q7V94Vbf7BV+x72yjT5hrjm7n24pPeKwObRj/gfGEsmhqtK3D/18JxWEJlpYF4CrNcIfhViG ai4DAAA= X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Makes Dept able to track dma fence waits. 
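The annotation follows the same pattern applied to the plain waitqueue and hashed-waitqueue paths earlier in the series: bracket the sleepable retry loop with sdt_might_sleep_start()/sdt_might_sleep_end(), passing NULL so the call site's code location is used as the class key. A rough sketch of the shape for a hypothetical driver-side wait (illustrative only; my_wait_for_flag() and its flag parameter are invented here, the real annotations are the dma_fence_default_wait() and dma_fence_wait_any_timeout() hunks below, and the waker side, which would wake_up_process() the sleeper, is omitted):

/* Hypothetical wait loop annotated the same way as the hunks below; not from this series. */
#include <linux/atomic.h>
#include <linux/dept_sdt.h>
#include <linux/sched.h>

static signed long my_wait_for_flag(atomic_t *flag, signed long timeout)
{
        /* Stage a Dept wait before the task may block; NULL keys it by call site. */
        sdt_might_sleep_start(NULL);
        while (!atomic_read(flag) && timeout > 0) {
                set_current_state(TASK_UNINTERRUPTIBLE);
                if (atomic_read(flag))
                        break;
                timeout = schedule_timeout(timeout);
        }
        /* Always paired with the start above, after the last point that can sleep. */
        sdt_might_sleep_end();
        __set_current_state(TASK_RUNNING);
        return timeout;
}

The start/end pair wraps the whole loop rather than each schedule_timeout() call, matching how the dma-fence hunks treat the retry loop as a single logical wait.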
Signed-off-by: Byungchul Park --- drivers/dma-buf/dma-fence.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c index 8aa8f8cb7071..76dba11f0dab 100644 --- a/drivers/dma-buf/dma-fence.c +++ b/drivers/dma-buf/dma-fence.c @@ -16,6 +16,7 @@ #include #include #include +#include #define CREATE_TRACE_POINTS #include @@ -783,6 +784,7 @@ dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout) cb.task = current; list_add(&cb.base.node, &fence->cb_list); + sdt_might_sleep_start(NULL); while (!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) { if (intr) __set_current_state(TASK_INTERRUPTIBLE); @@ -796,6 +798,7 @@ dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout) if (ret > 0 && intr && signal_pending(current)) ret = -ERESTARTSYS; } + sdt_might_sleep_end(); if (!list_empty(&cb.base.node)) list_del(&cb.base.node); @@ -885,6 +888,7 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count, } } + sdt_might_sleep_start(NULL); while (ret > 0) { if (intr) set_current_state(TASK_INTERRUPTIBLE); @@ -899,6 +903,7 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count, if (ret > 0 && intr && signal_pending(current)) ret = -ERESTARTSYS; } + sdt_might_sleep_end(); __set_current_state(TASK_RUNNING); From patchwork Wed Jan 24 11:59:27 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529111 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 9C57F64A9F; Wed, 24 Jan 2024 12:00:07 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097611; cv=none; b=pq/MLCffRYdElnxZx1pZFb2ZJ+S5Eo+iOAwcrrViJEAZYQeuvoYcTPamQnabEnlD931SDznMEdnIcp1sLnla8B8Fscdp6AsKKJvSUOGXlUlzfVHDYC1RovMUxTbwLvjiBZLr4pcj4mncOHIIW8a/JNkF7oqAg8wxg45GGML1/XE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097611; c=relaxed/simple; bh=LIy0pGzgh1KfCx3V9I4vVrDFzUkgQh3CEYezt1V8iYA=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=uo0Gg9VCjeo5XSyKxMXf/ZLA56LWoTk/S+I5oZ74laPK7nbiBvTwjUeBJIFIyaXxKpMqZEasRLKwtPhiyW/xDLpxkRsU5mmyFTuM3g/oMDp+1PPHVViaj7MgTsC8Gmd/Th2d8bo8IpvCkXsizx/e9+5qOnyMEoR7ty7nD486N+4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-45-65b0fbb6953c From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, 
hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 16/26] dept: Track timeout waits separately with a new Kconfig Date: Wed, 24 Jan 2024 20:59:27 +0900 Message-Id: <20240124115938.80132-17-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0yTZxTHfZ732o6aNx3LXlCUdCE6FxQWxBNjFhJNfOIluhn94D7om/XN 6CiIBSugJAWKFxAvRKxcsgBduqYgsJYxvGAYBhQVLchcRSBIiIJQMUAbuzIVqn45+eX/P+f3 6fCU+hoTyevSMmVDmqTXsEpa6Q2rif0r2CTHOYZpuHAmDnxzp2ioaqxnwd1Qh6C+OQ/DROdW +Nc/hSDY85ACS5kbQc2zIQqau4YRtNnzWXg0thT6fdMsdJcVs1BgbWShd3Iew+ClUgx1zp1w 73wthvbACxosEyxUWgrwwhjHELA5OLCZYmDUXsHB/LN46B5+zEDbwDdQ/usgCzfaumnoah3F 8OhaFQvD9e8YuNd1hwb3hRIGrryqZWHSb6PA5pvmoK+9GkOTeUF0YvYtA7dL2jGc+O0PDP1P riO4eWoEg7P+MQu3fFMYXM4yCv77vRPB6FkvB4VnAhxU5p1FUFx4iYaH/99mwDy4HoJvqtik jeTW1DRFzK6jpM1fTZO7tSK5WjHEEfPNAY5UO48Ql30Nsd6YwKRmxscQp+M0S5wzpRwp8vZj 8urBA47cuRykyVi/Be9etl+5SSvrdUbZsO67g8rkzkkXmz5+MMsywZvQnzuKkIIXhQSxp9WP P/FV7zy7yKywSvR4AtQihwvRoqvkOVOElDwlnPxMtL/uCS19LvwgtgQrQ0wLMeJs81CIVUKi WPymj/4gXSnWNbWHRIqF/Er5QChXC+vFEcc5blEqCgUK8XSF5eNBhPi33UOfR6pqtMSB1Lo0 Y6qk0yesTc5O02Wt/elQqhMtfJQtd/7HVjTj3tOBBB5pwlRJjkZZzUjGjOzUDiTylCZc5Ylo kNUqrZSdIxsOHTAc0csZHWgZT2u+VH3rP6pVCz9LmXKKLKfLhk8t5hWRJhTVEyjNX33g+Zyx dnPflsOGmMiEjllPrpZoR5a+jPrihbQ66SSOj7GajvdGHxuPFlckzkWs2rvcO1n+9R5mxXFr 7o6LebzuqxSzO+f1VNK6OKvtn2P7uMJYhdEkNVTlDOVf737asneMcNsTRzL1G79v+aV11/0o 1dO7Qti2lKwNWzV0RrIUv4YyZEjvAZfTpuhNAwAA X-Brightmail-Tracker: H4sIAAAAAAAAAzWSfUzMcRzHfb+/x05nv502P7GxMw9lHkL2QZP//GbI08aw6fCbjit2dyLK oiv0pGvLodg5nHYd5e5Y6EgpzlOpk6QaeepcieqaXB5y5p/3Xnt/Pnv99WYJWREVyioTtKI6 QaGS0xJSsnJR2owb/jJxdpd3GuizZ4Ov/xgJRaVWGuqvliCwOg5j8NQshZcDXQj8T+sIMBTU Izj/to0AR207AmfxERoa348Ct6+HBldBFg1pF0ppeO4dwtB6Mh9DiW0FPM4zYagc/ESCwUND oSEND0cnhkGzhQFz6mToKD7DwNDbCHC1N1FQfdZFgbNlOpw+10pDhdNFQm15B4bGW0U0tFt/ U/C49iEJ9focCq58MdHgHTATYPb1MNBQacRQphu2ZfT9ouBBTiWGjIvXMLhf3UZw59gbDDZr Ew3Vvi4MdlsBAT8u1yDoyO1mID17kIHCw7kIstJPklD38wEFutZI8H8vopcsEqq7eghBZ98n OAeMpPDIxAs3z7Qxgu5OCyMYbXsFe3G4cKHCg4XzvT5KsFmO04KtN58RMrvdWPjy7BkjPDzl J4X3bgNeNX6jJGq7qFImiupZi2MlcTVeO72nM3a/wcOmouvLM1EQy3Pz+JvdQ/RfprmpfHPz IPGXQ7iJvD3nI5WJJCzBHR3JF399Gngaza3hb/gLA0xyk/k+R1uApdx8Put7A/lPOoEvKasM iIKG+yunWwK9jIvk31hOMHlIYkQjLChEmZAYr1CqImdqdsUlJSj3z9y2O96GhjdjThnSl6P+ xqVViGORPFi6xFIqyihFoiYpvgrxLCEPkTaPvSrKpNsVSQdE9e4t6r0qUVOFxrGkfIx02Xox VsbtUGjFXaK4R1T/v2I2KDQVqTb3Ja9e6JoUozy+jMzVOTSa0WWj9BlNseH39WutLducYXmb khtSI4I3VLQ/MkWkpOheTL2kKr/s6y/x9mV8/LZJ++S1uDUmZu7nXlnUzntMtnaDw/jTIves e/k6PzT67JTrkWFzo8LqjtQ+uYeC3+EpH3RzDpq0dxe4Lb34UHSNnNTEKSLCCbVG8QcVU6TH LwMAAA== X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Waits with valid timeouts don't actually cause deadlocks. 
However, Dept has been reporting the cases as well because it's worth informing the circular dependency for some cases where, for example, timeout is used to avoid a deadlock but not meant to be expired. However, yes, there are also a lot of, even more, cases where timeout is used for its clear purpose and meant to be expired. Let Dept report these as an information rather than shouting DEADLOCK. Plus, introduced CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT Kconfig to make it optional so that any reports involving waits with timeouts can be turned on/off depending on the purpose. Signed-off-by: Byungchul Park --- include/linux/dept.h | 15 ++++++--- include/linux/dept_ldt.h | 6 ++-- include/linux/dept_sdt.h | 12 +++++--- kernel/dependency/dept.c | 66 ++++++++++++++++++++++++++++++++++------ lib/Kconfig.debug | 10 ++++++ 5 files changed, 89 insertions(+), 20 deletions(-) diff --git a/include/linux/dept.h b/include/linux/dept.h index ca1a34be4127..0280e45cc2af 100644 --- a/include/linux/dept.h +++ b/include/linux/dept.h @@ -270,6 +270,11 @@ struct dept_wait { * whether this wait is for commit in scheduler */ bool sched_sleep; + + /* + * whether a timeout is set + */ + bool timeout; }; }; }; @@ -453,6 +458,7 @@ struct dept_task { bool stage_sched_map; const char *stage_w_fn; unsigned long stage_ip; + bool stage_timeout; /* * the number of missing ecxts @@ -490,6 +496,7 @@ struct dept_task { .stage_sched_map = false, \ .stage_w_fn = NULL, \ .stage_ip = 0UL, \ + .stage_timeout = false, \ .missing_ecxt = 0, \ .hardirqs_enabled = false, \ .softirqs_enabled = false, \ @@ -507,8 +514,8 @@ extern void dept_map_init(struct dept_map *m, struct dept_key *k, int sub_u, con extern void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u, const char *n); extern void dept_map_copy(struct dept_map *to, struct dept_map *from); -extern void dept_wait(struct dept_map *m, unsigned long w_f, unsigned long ip, const char *w_fn, int sub_l); -extern void dept_stage_wait(struct dept_map *m, struct dept_key *k, unsigned long ip, const char *w_fn); +extern void dept_wait(struct dept_map *m, unsigned long w_f, unsigned long ip, const char *w_fn, int sub_l, long timeout); +extern void dept_stage_wait(struct dept_map *m, struct dept_key *k, unsigned long ip, const char *w_fn, long timeout); extern void dept_request_event_wait_commit(void); extern void dept_clean_stage(void); extern void dept_stage_event(struct task_struct *t, unsigned long ip); @@ -558,8 +565,8 @@ struct dept_task { }; #define dept_map_reinit(m, k, su, n) do { (void)(n); (void)(k); } while (0) #define dept_map_copy(t, f) do { } while (0) -#define dept_wait(m, w_f, ip, w_fn, sl) do { (void)(w_fn); } while (0) -#define dept_stage_wait(m, k, ip, w_fn) do { (void)(k); (void)(w_fn); } while (0) +#define dept_wait(m, w_f, ip, w_fn, sl, t) do { (void)(w_fn); } while (0) +#define dept_stage_wait(m, k, ip, w_fn, t) do { (void)(k); (void)(w_fn); } while (0) #define dept_request_event_wait_commit() do { } while (0) #define dept_clean_stage() do { } while (0) #define dept_stage_event(t, ip) do { } while (0) diff --git a/include/linux/dept_ldt.h b/include/linux/dept_ldt.h index 062613e89fc3..8adf298dfcb8 100644 --- a/include/linux/dept_ldt.h +++ b/include/linux/dept_ldt.h @@ -27,7 +27,7 @@ else if (t) \ dept_ecxt_enter(m, LDT_EVT_L, i, "trylock", "unlock", sl);\ else { \ - dept_wait(m, LDT_EVT_L, i, "lock", sl); \ + dept_wait(m, LDT_EVT_L, i, "lock", sl, false); \ dept_ecxt_enter(m, LDT_EVT_L, i, "lock", "unlock", sl);\ } \ } while (0) @@ -39,7 +39,7 @@ else if 
(t) \ dept_ecxt_enter(m, LDT_EVT_R, i, "read_trylock", "read_unlock", sl);\ else { \ - dept_wait(m, q ? LDT_EVT_RW : LDT_EVT_W, i, "read_lock", sl);\ + dept_wait(m, q ? LDT_EVT_RW : LDT_EVT_W, i, "read_lock", sl, false);\ dept_ecxt_enter(m, LDT_EVT_R, i, "read_lock", "read_unlock", sl);\ } \ } while (0) @@ -51,7 +51,7 @@ else if (t) \ dept_ecxt_enter(m, LDT_EVT_W, i, "write_trylock", "write_unlock", sl);\ else { \ - dept_wait(m, LDT_EVT_RW, i, "write_lock", sl); \ + dept_wait(m, LDT_EVT_RW, i, "write_lock", sl, false);\ dept_ecxt_enter(m, LDT_EVT_W, i, "write_lock", "write_unlock", sl);\ } \ } while (0) diff --git a/include/linux/dept_sdt.h b/include/linux/dept_sdt.h index 12a793b90c7e..21fce525f031 100644 --- a/include/linux/dept_sdt.h +++ b/include/linux/dept_sdt.h @@ -22,11 +22,12 @@ #define sdt_map_init_key(m, k) dept_map_init(m, k, 0, #m) -#define sdt_wait(m) \ +#define sdt_wait_timeout(m, t) \ do { \ dept_request_event(m); \ - dept_wait(m, 1UL, _THIS_IP_, __func__, 0); \ + dept_wait(m, 1UL, _THIS_IP_, __func__, 0, t); \ } while (0) +#define sdt_wait(m) sdt_wait_timeout(m, -1L) /* * sdt_might_sleep() and its family will be committed in __schedule() @@ -37,12 +38,13 @@ /* * Use the code location as the class key if an explicit map is not used. */ -#define sdt_might_sleep_start(m) \ +#define sdt_might_sleep_start_timeout(m, t) \ do { \ struct dept_map *__m = m; \ static struct dept_key __key; \ - dept_stage_wait(__m, __m ? NULL : &__key, _THIS_IP_, __func__);\ + dept_stage_wait(__m, __m ? NULL : &__key, _THIS_IP_, __func__, t);\ } while (0) +#define sdt_might_sleep_start(m) sdt_might_sleep_start_timeout(m, -1L) #define sdt_might_sleep_end() dept_clean_stage() @@ -52,7 +54,9 @@ #else /* !CONFIG_DEPT */ #define sdt_map_init(m) do { } while (0) #define sdt_map_init_key(m, k) do { (void)(k); } while (0) +#define sdt_wait_timeout(m, t) do { } while (0) #define sdt_wait(m) do { } while (0) +#define sdt_might_sleep_start_timeout(m, t) do { } while (0) #define sdt_might_sleep_start(m) do { } while (0) #define sdt_might_sleep_end() do { } while (0) #define sdt_ecxt_enter(m) do { } while (0) diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c index 8ca46ad98e10..1b8fa9f69d73 100644 --- a/kernel/dependency/dept.c +++ b/kernel/dependency/dept.c @@ -739,6 +739,8 @@ static void print_diagram(struct dept_dep *d) if (!irqf) { print_spc(spc, "[S] %s(%s:%d)\n", c_fn, fc_n, fc->sub_id); print_spc(spc, "[W] %s(%s:%d)\n", w_fn, tc_n, tc->sub_id); + if (w->timeout) + print_spc(spc, "--------------- >8 timeout ---------------\n"); print_spc(spc, "[E] %s(%s:%d)\n", e_fn, fc_n, fc->sub_id); } } @@ -792,6 +794,24 @@ static void print_dep(struct dept_dep *d) static void save_current_stack(int skip); +static bool is_timeout_wait_circle(struct dept_class *c) +{ + struct dept_class *fc = c->bfs_parent; + struct dept_class *tc = c; + + do { + struct dept_dep *d = lookup_dep(fc, tc); + + if (d->wait->timeout) + return true; + + tc = fc; + fc = fc->bfs_parent; + } while (tc != c); + + return false; +} + /* * Print all classes in a circle. 
*/ @@ -814,10 +834,14 @@ static void print_circle(struct dept_class *c) pr_warn("summary\n"); pr_warn("---------------------------------------------------\n"); - if (fc == tc) + if (is_timeout_wait_circle(c)) { + pr_warn("NOT A DEADLOCK BUT A CIRCULAR DEPENDENCY\n"); + pr_warn("CHECK IF THE TIMEOUT IS INTENDED\n\n"); + } else if (fc == tc) { pr_warn("*** AA DEADLOCK ***\n\n"); - else + } else { pr_warn("*** DEADLOCK ***\n\n"); + } i = 0; do { @@ -1563,7 +1587,8 @@ static void add_dep(struct dept_ecxt *e, struct dept_wait *w) static atomic_t wgen = ATOMIC_INIT(1); static void add_wait(struct dept_class *c, unsigned long ip, - const char *w_fn, int sub_l, bool sched_sleep) + const char *w_fn, int sub_l, bool sched_sleep, + bool timeout) { struct dept_task *dt = dept_task(); struct dept_wait *w; @@ -1583,6 +1608,7 @@ static void add_wait(struct dept_class *c, unsigned long ip, w->wait_fn = w_fn; w->wait_stack = get_current_stack(); w->sched_sleep = sched_sleep; + w->timeout = timeout; cxt = cur_cxt(); if (cxt == DEPT_CXT_HIRQ || cxt == DEPT_CXT_SIRQ) @@ -2294,7 +2320,7 @@ static struct dept_class *check_new_class(struct dept_key *local, */ static void __dept_wait(struct dept_map *m, unsigned long w_f, unsigned long ip, const char *w_fn, int sub_l, - bool sched_sleep, bool sched_map) + bool sched_sleep, bool sched_map, bool timeout) { int e; @@ -2317,7 +2343,7 @@ static void __dept_wait(struct dept_map *m, unsigned long w_f, if (!c) continue; - add_wait(c, ip, w_fn, sub_l, sched_sleep); + add_wait(c, ip, w_fn, sub_l, sched_sleep, timeout); } } @@ -2354,14 +2380,23 @@ static void __dept_event(struct dept_map *m, unsigned long e_f, } void dept_wait(struct dept_map *m, unsigned long w_f, - unsigned long ip, const char *w_fn, int sub_l) + unsigned long ip, const char *w_fn, int sub_l, + long timeoutval) { struct dept_task *dt = dept_task(); unsigned long flags; + bool timeout; if (unlikely(!dept_working())) return; + timeout = timeoutval > 0 && timeoutval < MAX_SCHEDULE_TIMEOUT; + +#if !defined(CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT) + if (timeout) + return; +#endif + if (dt->recursive) return; @@ -2370,21 +2405,30 @@ void dept_wait(struct dept_map *m, unsigned long w_f, flags = dept_enter(); - __dept_wait(m, w_f, ip, w_fn, sub_l, false, false); + __dept_wait(m, w_f, ip, w_fn, sub_l, false, false, timeout); dept_exit(flags); } EXPORT_SYMBOL_GPL(dept_wait); void dept_stage_wait(struct dept_map *m, struct dept_key *k, - unsigned long ip, const char *w_fn) + unsigned long ip, const char *w_fn, + long timeoutval) { struct dept_task *dt = dept_task(); unsigned long flags; + bool timeout; if (unlikely(!dept_working())) return; + timeout = timeoutval > 0 && timeoutval < MAX_SCHEDULE_TIMEOUT; + +#if !defined(CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT) + if (timeout) + return; +#endif + if (m && m->nocheck) return; @@ -2430,6 +2474,7 @@ void dept_stage_wait(struct dept_map *m, struct dept_key *k, dt->stage_w_fn = w_fn; dt->stage_ip = ip; + dt->stage_timeout = timeout; exit: dept_exit_recursive(flags); } @@ -2441,6 +2486,7 @@ static void __dept_clean_stage(struct dept_task *dt) dt->stage_sched_map = false; dt->stage_w_fn = NULL; dt->stage_ip = 0UL; + dt->stage_timeout = false; } void dept_clean_stage(void) @@ -2471,6 +2517,7 @@ void dept_request_event_wait_commit(void) unsigned long ip; const char *w_fn; bool sched_map; + bool timeout; if (unlikely(!dept_working())) return; @@ -2493,6 +2540,7 @@ void dept_request_event_wait_commit(void) w_fn = dt->stage_w_fn; ip = dt->stage_ip; sched_map = dt->stage_sched_map; + 
timeout = dt->stage_timeout; /* * Avoid zero wgen. @@ -2500,7 +2548,7 @@ void dept_request_event_wait_commit(void) wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen); WRITE_ONCE(dt->stage_m.wgen, wg); - __dept_wait(&dt->stage_m, 1UL, ip, w_fn, 0, true, sched_map); + __dept_wait(&dt->stage_m, 1UL, ip, w_fn, 0, true, sched_map, timeout); exit: dept_exit(flags); } diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 9602f41ad8e8..0ec3addef504 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -1312,6 +1312,16 @@ config DEPT noting, to mitigate the impact by the false positives, multi reporting has been supported. +config DEPT_AGGRESSIVE_TIMEOUT_WAIT + bool "Aggressively track even timeout waits" + depends on DEPT + default n + help + Timeout wait doesn't contribute to a deadlock. However, + informing a circular dependency might be helpful for cases + that timeout is used to avoid a deadlock. Say N if you'd like + to avoid verbose reports. + config LOCK_DEBUGGING_SUPPORT bool depends on TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT From patchwork Wed Jan 24 11:59:28 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529113 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id D0B89651B4; Wed, 24 Jan 2024 12:00:09 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097613; cv=none; b=nzP06r1SMvG9N3s2bN4pW7E/IV5DR2W6uVDdqTqgyXwo/Z1RL4QOVCsYJ07X2c9KH1CtcWPv0TC7vwH4eZZM2kkRnzv3pM7ZBfu9zeqp10rOaC8GX4xgu3/gUNFOPof4flwB3snioY2tlsM5vSYXPZfjBLoFRJrhcE67feWS5b4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097613; c=relaxed/simple; bh=TSPxdh3H9mjhA/wDLLlJUz6YZTY7hYg5ulgpaHBJJSM=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=rOIsM/71uUjetYmoCoQKnWY2JxuXZQnf0JkZA8WpsiPMxkmffANMf+veXKOc7FZcRDkuXIBk/DeAK4JHn7d5HKbmiiIpxVf3LgYHUHpzbLTvffT/4UkBtIAvuFm/nbrWmBI0KT8rQQ7f90wT1CpPGGYODXz8FE0/AKLmMnKTkiY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-55-65b0fbb68577 From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, 
viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 17/26] dept: Apply timeout consideration to wait_for_completion()/complete() Date: Wed, 24 Jan 2024 20:59:28 +0900 Message-Id: <20240124115938.80132-18-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSf0zMcRjH+3y+P+84++4wX8VwkywrP+bH48eMjfmOhY2/+EM3fem4O+2i kw3h9EslbTmltbrs3K5LdWdz6JJrlTQ5aSRJP25I5TiuyeVHhX+evfZ+nue154+HJeT3qVBW pT0u6rRKtYKWktLhqaVRd4JV4rL+B1K4krUMAt/SSSiqtNHguVWOwHb7HIaBhm3wcmQIQfDJ UwKM+R4Epb1vCLjd2I3AZTlPw3PvNGgP+Ghozr9Ew4WyShqeDY5h6Lqah6HcHgMtuSYMdaPv STAO0HDdeAGPlw8YRs1WBswp4dBnKWRgrHc5NHe/oMDVuQQKirtoqHE1k9Do7MPw/F4RDd22 3xS0ND4iwXMlm4KKTyYaBkfMBJgDPgba6kowVBnGRalff1HQlF2HIfVGNYb2V/cR1Kb3YLDb XtBQHxjC4LDnE/DjZgOCvpxhBi5mjTJw/VwOgksXr5Lw9GcTBYauVRD8XkRvWifUD/kIweDQ C66RElJ4bOKFu4VvGMFQ28kIJfYTgsMSKZTVDGCh1B+gBLs1gxbs/jxGyBxux8Kn1lZGeHQt SArediPeHbZPuiFOVKuSRN3SjbHSeFtbAZNQLD3pyHiGU1APm4kkLM+t5D3eeuY/l3ls9ATT XATf0TFKTPAMbj7vyH5HZSIpS3BpU3jL5yeTQ9O5WL6h3jnJJBfO56S6Jhdk3Gq+9ss99Fc6 jy+vqpvMJeN5RUEnOcFybhXfY73MTEh5Lk3C15x3/rtiNv/Q0kHmIlkJCrEiuUqbpFGq1Cuj 45O1qpPRB49p7Gj8pcynx/Y7kd+zx404FimmyjZZK0U5pUxKTNa4Ec8Sihmyjtm3RLksTpl8 StQdO6A7oRYT3SiMJRWzZCtG9HFy7rDyuHhUFBNE3f8uZiWhKSjC9H14occZsyCkf592h+lL cZHPOXAmqikrc9fcvRnFeWuJCM3O8tiWiN2a0hC3aM+W6Be36fsPtXpkwZictu3Rg/5XRxcs jhzqPeXzZq3P4Mb6mDW1izaHu9+Gna2o3p+eoJ97JNebOsc80+A3Vn+8MceyV701qnNPaHTh lrTX6xVkYrxyeSShS1T+AVmDrvVOAwAA X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0hTcRjG+//P1dXitKxOFpgjKYzKKOOt7PKh8iAkEYTQ1ZGHXF6SLTWL aOa0WmkqqVkaXmKNubxsItOcDZemSWU5zERFzS6SaRc3MjVToy8PP57nfZ9PD0vICigvVhlz TlTFKKLktISUhOxIXl89USH6ZzX7Q+ZNf3CNXSMhv9xEQ1tZKQJTVRKGocYgeOseRjDx4hUB udltCIr6ewioaupFYDNcoaF9cCE4XaM0tGTfoCG5pJyG118mMXTnZGEoNR+A1oxiDPbxTyTk DtFwLzcZz8hnDON6IwN6jS8MGO4yMNm/CVp6OyhwFLRQYOtaB3n3u2mos7WQ0GQdwNBem09D r2magtamZhLaMtMoeDRSTMMXt54AvWuUgTf2QgwV2pm21J9/KHiWZseQ+qASg/PdYwT11/ow mE0dNDhcwxgs5mwCfj9sRDCQ/pWBlJvjDNxLSkdwIyWHhFdTzyjQdgfAxK98es8OwTE8Sgha S4JgcxeSwvNiXqi528MI2vouRig0xwkWg59QUjeEhaIfLkowG6/TgvlHFiPovjqxMPLyJSM0 35kghUFnLj648ogkMFyMUsaLqo27wiQRpjd5TOx9yXnL9ddYg/pYHfJgeW4LX9JmomeZ5tbw nZ3jxCx7cqt4S9pHSockLMFdnc8bvr2YO1rMhfGNDusck5wvn55qm3uQclv5+u+16F+pN19a YZ/zPWb8R3ld5CzLuAC+z3iLyUCSQjTPiDyVMfHRCmVUwAZ1ZERijPL8hlNno81oZjT6S5OZ VjTWHtSAOBbJF0j3GMtFGaWIVydGNyCeJeSe0s7lZaJMGq5IvCCqzp5UxUWJ6ga0giXly6TB oWKYjDutOCdGimKsqPqfYtbDS4OarEuTccK+xuDProt47+/NzvbF07EjdzqO9aqf9CcssYeX H9B8uLzIe9vHnsDK4E9TPkWJfiFO9swJc3Z1RqxV0xPk3r19dYHWc3tHfPplnUGj2/nU2/jO 58np/Vyg44yx7P38iLUeu46GHr6U4u0TrL7t7g6Zd9x6YXNWzqGawOm9clIdodjkR6jUir/I npxQMAMAAA== X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Now that CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT was introduced, apply the consideration to wait_for_completion()/complete(). 
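To see how the timeout actually reaches Dept on this path, here is a minimal caller-side sketch. It is illustrative only and not taken from the patch; wait_on_device() and the 100ms value are made up, only the completion API calls are real.

#include <linux/completion.h>
#include <linux/jiffies.h>
#include <linux/errno.h>

/* Illustrative sketch, not part of the patch. */
static int wait_on_device(struct completion *done)
{
	unsigned long left;

	/*
	 * Finite timeout: complete_acquire() now sees it, so Dept either
	 * flags the wait as a timeout wait or, without
	 * CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT, skips recording it.
	 */
	left = wait_for_completion_timeout(done, msecs_to_jiffies(100));
	if (!left)
		return -ETIMEDOUT;

	/*
	 * A plain wait_for_completion(done) passes MAX_SCHEDULE_TIMEOUT,
	 * which Dept's timeout classification excludes, so it keeps being
	 * tracked as an ordinary wait.
	 */
	return 0;
}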
Signed-off-by: Byungchul Park --- include/linux/completion.h | 4 ++-- kernel/sched/completion.c | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/include/linux/completion.h b/include/linux/completion.h index bd2c207481d6..3200b741de28 100644 --- a/include/linux/completion.h +++ b/include/linux/completion.h @@ -41,9 +41,9 @@ do { \ */ #define init_completion_map(x, m) init_completion(x) -static inline void complete_acquire(struct completion *x) +static inline void complete_acquire(struct completion *x, long timeout) { - sdt_might_sleep_start(&x->dmap); + sdt_might_sleep_start_timeout(&x->dmap, timeout); } static inline void complete_release(struct completion *x) diff --git a/kernel/sched/completion.c b/kernel/sched/completion.c index 3561ab533dd4..499b1fee9dc1 100644 --- a/kernel/sched/completion.c +++ b/kernel/sched/completion.c @@ -110,7 +110,7 @@ __wait_for_common(struct completion *x, { might_sleep(); - complete_acquire(x); + complete_acquire(x, timeout); raw_spin_lock_irq(&x->wait.lock); timeout = do_wait_for_common(x, action, timeout, state); From patchwork Wed Jan 24 11:59:29 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529112 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 22E0C657AB; Wed, 24 Jan 2024 12:00:09 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097613; cv=none; b=W0DnrCsQyMn61OhLCqOtq0NDo0Mb0Myqn0s7aSoB/uepxH56h376JHW8DN0hkFTjZjCmwazjC1AsVNLBSxM6a5oT0Bhu22LxCcbI5b4zQ8HCVtZ6dkziM4Apzgv3frOBLc8ijtloRy19gSOEtn7/CnPccUtRVm0rNt+UnxnuEX0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097613; c=relaxed/simple; bh=9o9SI0ONdb1WbiIWXrCSb5nX53qir4UGX5Gt0IQgpYM=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=qIYuRuD3nV/7LKQIvkqSpBguURmCSiuZtSxO0w3zXXaZGag8nJD4/vIB/300MqMuSlmwBN62tm0GxIVKo4HIDzQvkiJG6Od0TXnaBSLXIvuTwo+TDZTTlz2yFVXPuWQdeJG+KVBF8+vgC37jcl/C9XKjwLYdzKVwRIgC8xH214o= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-65-65b0fbb6c9f0 From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, 
viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 18/26] dept: Apply timeout consideration to swait Date: Wed, 24 Jan 2024 20:59:29 +0900 Message-Id: <20240124115938.80132-19-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0hTcRjG+//Pbc4WpyV1Vh+qYRRWmqLxEhESVCciCiKJImq0gy6nxUzN otK85qWs0OWFcCZrTG22WZppLEXTorSSWjZFzS7ipuWauGbWNPry8uN5n+f59IgI6WNquUgV f0bQxCvUclpMip0LdRsbvHXCptyS1XA9fxO4f+aQUG6qoaHnXjWCmvo0DKPtu+D9lAOB92U3 AdqiHgS6oX4C6jsGELQYLtPwdmQR9LonaOgqyqMh/Y6JhtdjMxjsxTcwVJv3wovCSgxWz1cS tKM0lGnTse98w+DRGxnQp66BYUMpAzNDodA18I6Clr71UHLbTkNzSxcJHY3DGN42ldMwUPOH ghcdnST0XC+goHa8koaxKT0BevcEA2+sFRjqMnxFWa5ZCp4VWDFkVd3H0PvhMYInOYMYzDXv aGhzOzBYzEUE/LrbjmD4qpOBzHwPA2VpVxHkZRaT0P37GQUZ9gjwTpfTkVv4NscEwWdYkvmW qQqSf17J8Y9K+xk+40kfw1eYE3mLIYi/0zyKed2km+LNxis0b568wfC5zl7Mj796xfCdt7wk P9KrxftXHBZvVQpqVZKgCdl2XBxjGklDp130WW17FZWKnlO5yE/EseHcbEED/Z+z2/LRHNPs Ws5m8xBzHMCu4iwFX3x+sYhgs/05w/eX84El7A5u3FU7byLZNVx3kwvPsYTdzGWmWsh/pSu5 6jrrvMfPp9eW9M3rUjaCGzReY+ZKOTbPjyv7XM78C8i4pwYbWYgkFWiBEUlV8UlxCpU6PDgm JV51NvjEqTgz8k1Kf2HmSCOa7DnQilgRki+URBpNgpRSJCWkxLUiTkTIAyQ22T1BKlEqUs4J mlPHNIlqIaEVrRCR8mWSsKlkpZSNVpwRYgXhtKD5/8Uiv+WpKC5A1l//7Yc30pq3SuYfWNo9 Ott5SBy+M9Tuv/27f/DXxddC9jSgxNiHGwOjQprH7K1CoX1DUNj5LJty7Vj07o+HfwZ5bg4d PfBp5tc6dVdxrLyakVWRmcqTSx8cPPRbF+HcbbqY4nCtNgbu62x0G+/rDE9XTjuRK2qw482l 9Q6bnEyIUYQGEZoExV8xoGXATgMAAA== X-Brightmail-Tracker: H4sIAAAAAAAAAzWSfUhTaxzHe55zznOOq8VxLTrXLtx7F9IbZUXGj+wdyocLNwuCoKIaeciR mmw2Neg2r1t2S0sD2630XrNYOjVtq7ByMRxqy3KmYiYmKfYimkZr3tbsZRr98+PD9/vh+9dP YFQlXJSgS02X9anaZA1RsIqtcTlLbodq5WV1uQCFecsg8OEkC8U1VQTarlciqLqZjWGoMR6e jo8gCD32MWAtakNwuf85Azeb+hC4yv8i0DE4EzoDYwS8RacJ5FypIfBkeAJD7/lzGCodf0BL QRkGd/A1C9YhApesOTh83mAI2uw82EzRMFB+kYeJ/uXg7eviwFPi5cDVsxgu/NtLoN7lZaGp bgBDx91iAn1VXzloaXrAQlthPgfVo2UEhsdtDNgCYzy0u0sx1JrDayf8XzhozndjOHH1BobO Z/cQ3D/5AoOjqouAJzCCwekoYuDTtUYEA2fe8mDJC/JwKfsMgtOW8yz4PjdzYO6NhdDHYrIh jnpGxhhqdmZQ13gpSx+WSfTOxec8Nd/v4Wmp4wh1li+iV+qHML38PsBRh/1vQh3vz/H01NtO TEdbW3n64J8QSwc7rXjbz7sUaxLlZJ1R1ses269IqhnMRml+kmltvMqZ0EPuFIoQJHGllOvJ Q5NMxPlSd3eQmWS1+KvkzH8VdhQCI+ZOl8rfPSaTxSxxszTqr56SWDFa8t3140lWiqski8nJ fh/9RaqsdU85EeG8+kLPVK4SY6UX9rN8AVKUoml2pNalGlO0uuTYpYZDSVmpusylBw6nOFD4 aWzHJgrr0IeO+AYkCkgzQ7nBXiOrOK3RkJXSgCSB0aiV3T9dl1XKRG3WUVl/eJ/+SLJsaEBz BVYzR/n7Tnm/SjyoTZcPyXKarP/RYiEiyoRMCZ7cYELzJv2bzxs/9bfkx3u2VphGF+y9Fbvl ltHSwGaY9qx+VbBwOz22OrJi1dr/MxdbyPqX3tm1w1q8wufZ7HbVTTs+M+bPsd1R5nZfXJ/y SYxuXoTxbFFJSZag1oQy/Pt+2y1VpO3Qixt9Tmw2z2vdon4UeY/a0itb/1MlaFhDknb5IkZv 0H4DOWK59jADAAA= X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Now that CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT was introduced, apply the consideration to swait, assuming an input 'ret' in ___swait_event() macro is used as a timeout value. 
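Under that assumption, a minimal usage sketch follows; my_wq, my_cond and example_swaits() are hypothetical, only the swait API is real. The plain variant expands with ret == 0 and stays an ordinary wait, while the timeout variant feeds its jiffies value into sdt_might_sleep_start_timeout().

#include <linux/swait.h>
#include <linux/jiffies.h>
#include <linux/printk.h>

static DECLARE_SWAIT_QUEUE_HEAD(my_wq);	/* hypothetical example queue */
static bool my_cond;

static void example_swaits(void)
{
	/* expands with ret == 0: recorded as a plain, non-timeout wait */
	swait_event_exclusive(my_wq, READ_ONCE(my_cond));

	/*
	 * expands with ret == HZ: 0 < HZ < MAX_SCHEDULE_TIMEOUT, so this is
	 * flagged as a timeout wait and reported only with
	 * CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT=y.
	 */
	if (!swait_event_timeout_exclusive(my_wq, READ_ONCE(my_cond), HZ))
		pr_debug("timed out\n");
}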
Signed-off-by: Byungchul Park --- include/linux/swait.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/linux/swait.h b/include/linux/swait.h index 277ac74f61c3..233acdf55e9b 100644 --- a/include/linux/swait.h +++ b/include/linux/swait.h @@ -162,7 +162,7 @@ extern void finish_swait(struct swait_queue_head *q, struct swait_queue *wait); struct swait_queue __wait; \ long __ret = ret; \ \ - sdt_might_sleep_start(NULL); \ + sdt_might_sleep_start_timeout(NULL, __ret); \ INIT_LIST_HEAD(&__wait.task_list); \ for (;;) { \ long __int = prepare_to_swait_event(&wq, &__wait, state);\ From patchwork Wed Jan 24 11:59:30 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529114 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 46DA3657BF; Wed, 24 Jan 2024 12:00:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097613; cv=none; b=cPzn0w8jT5mgLIxIQQHzN/EA/b6dEpOD9kb/XvJCR2m/2QG8EFHZBf9TdjDftoKOXW7qtezoki8whxp1YcrjhbC0X/UdPcd9zoWEp0Lx1YGbDJQyY2hRkKMBUHIzSsBWOcXdS+Mhk2WXuCehuvsMB/2CRLXQkqMdSqnXr7y6zqs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097613; c=relaxed/simple; bh=RHlJ2GxVpEp+l17KtAmYPgKdeN+t8Sb6ly9lphI8Ui8=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=t1CaZ0VZPxyLZV0mWq6qayofJdK2A8Ph4Y3WvpBA57muh9bsOG2YWH/wU8STvQ7YToUBTpzzcYezdPEpDu1JxxHelypMPP6HOJsTIDbacFsPNUK5rIw2yXhezzbFkDOwAxq003ecghVtksRBKSNX4bBm1SH7FuurishqqAcnsfU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-75-65b0fbb74679 From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 19/26] dept: Apply timeout consideration 
to waitqueue wait Date: Wed, 24 Jan 2024 20:59:30 +0900 Message-Id: <20240124115938.80132-20-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSWUxTeRTG/d+91Zpr3a5iHK0xxl2M6FEZ48Mk839RSdSYqFEbuUojVFIU xC1VKjpVGGWmRZYQFlOaslpwLSUIAVoRrYK1VkDElQBC0BJrQYQaX05++c75vu/lcKTcRs/m VOpjokatjFYwUkraNylv+Z1Aubjqv+7FcO3KKvB9vURBdlkxA67SIgTFlecI6K7/G14M9SII ND8hId3gQpD3pp2EyoYOBHbzeQZa3k2GVl8/A07DZQaSCsoYeNozTECbMY2AIusWaLqaT0CN /yMF6d0MZKUnEWPjEwF+k4UFk3YhdJkzWRh+EwrODjcNdu9SyMhpY6DK7qSg4W4XAS33sxno KB6loanBQYHrWgoNJZ/zGegZMpFg8vWz8Kwml4By3VhQ8pcfNDSm1BCQfOMmAa0vbQiqL3US YC12M1Dn6yWgwmog4XthPYKu1D4WLlzxs5B1LhXB5QtGCp6MNNKgawuDwLdsZvMGXNfbT2Jd RQK2D+VS+GG+gO9ltrNYV+1lca71OK4wL8EFVd0Ezhv00dhq+YfB1sE0Fuv7Wgn8+fFjFjuu Byj8rjWdiAjZLQ2PFKNV8aJm5aYD0qi8agcZe4s9ceOiF2mRjtEjCSfwa4RCbRH6zc5XDdQ4 M/wiwePxk+M8jZ8nVKR8oPVIypH8xYmCeaA5aJ7KbxXO17UHjyh+oWCrGgmyjF8r+JMesL9C /xCKymuCumRML8nwBgvkfJjQafmXHQ8V+CSJ0NjUQ/4yzBIemD3UVSTLRRMsSK5Sx8coVdFr VkQlqlUnVhw8GmNFYy9lOjO85y4adG2vRTyHFJNkmy1lopxWxsclxtQigSMV02SeWaWiXBap TDwpao7u1xyPFuNqUQhHKWbKVg8lRMr5w8pj4hFRjBU1v7cEJ5mtRf/v97zP/BQ+s/m1y1E4 VX974rJw59zOucY/Yx3mb5ZbUeWrfclqfcF6b4F718BGNxXmcp9aqn1kIKcvkJzcOxphrKqf rgwZeH52RkA2EpogO5jaUmaWzE+YY4N5Wwz2HTvfpmVVnxVGX1VyXdtt2pxCT07E6b+meLcd wmv3rUs1Kqi4KGXoElITp/wJkO8o2E4DAAA= X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0iTcRTG+//f21wtXpbQmxXFIAK7Q8ahRVhf+tNFi4jALznqJZc6Y8ul Xch03TQtK7OLlbemzKU2i6ymLceWq7xbmamZdNM0zZw15ypb9OXw4znneZ4vR0LJrzFBErVm n6jVqGIUrJSWhilTFt3zlotLB8YwZJ5eCu6RkzTklJlZaCwtQWC+cxRDr2MdvBrtR+Cta6Ag O6sRQd67TgruOLsQVBUns9Dyfiq0ugdZcGWlsZBSUMZC05dxDB0Xz2EosWyCZ2fzMdg8n2jI 7mXhanYKnhifMXiMJg6MSfOgp/gKB+PvloGr6yUD9msuBqraF8Dl6x0sWKtcNDgrezC0PMhh ocv8m4FnzloaGjPTGbj1NZ+FL6NGCozuQQ6abbkYyg0Tace//2LgSboNw/HC2xhaXz9EUH2y G4PF/JIFu7sfQ4Uli4KxIgeCnowBDo6d9nBw9WgGgrRjF2lo8D1hwNARAt6fOWyoktj7Byli qNhPqkZzafI0XyD3r3RyxFDdzpFcSzypKA4mBdZeTPKG3QyxmE6xxDJ8jiOpA62YfK2v50jt JS9N3rdm482zIqSrdokxar2oXbI6UhqVV11L7b3LJRSeaEdJyMCmogCJwC8XXG+c9F9m+flC W5uH+suB/FyhIv0jk4qkEoo/MVkoHqrzG6bxYUKyvdN/RPPzhIdWn59l/ArBk/KY+xc6Rygp t/n1gAn91uV2f4GcDxG6TWe4s0iaiyaZUKBao49VqWNCFuuioxI16oTFO+NiLWjiaYyHxzMr 0UjLuhrES5BiiizUVCbKGZVelxhbgwQJpQiUtc0oFeWyXarEA6I2boc2PkbU1aCZEloxXbZ+ uxgp53er9onRorhX1P7fYklAUBLaObi26L5t+MaWlj5fuG/ojONTyZBMWXkkP6Pj7edtallG Ibsw1BG2JnxL04YPnReYtsY961XfMvFGpzU8ubnOATeUmpuzX5zn7ButeuPdIG8cqVw5O2Qs LKJuZM6hH2AOHikd46W67b/7lI+2Hl4z7OsOOLgh7TkX2SvVP06OVtC6KNWyYEqrU/0BL/Mk kDADAAA= X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Now that CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT was introduced, apply the consideration to waitqueue wait, assuming an input 'ret' in ___wait_event() macro is used as a timeout value. Signed-off-by: Byungchul Park --- include/linux/wait.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/linux/wait.h b/include/linux/wait.h index ebeb4678859f..e5e3fb2981f4 100644 --- a/include/linux/wait.h +++ b/include/linux/wait.h @@ -304,7 +304,7 @@ extern void init_wait_entry(struct wait_queue_entry *wq_entry, int flags); struct wait_queue_entry __wq_entry; \ long __ret = ret; /* explicit shadow */ \ \ - sdt_might_sleep_start(NULL); \ + sdt_might_sleep_start_timeout(NULL, __ret); \ init_wait_entry(&__wq_entry, exclusive ? 
WQ_FLAG_EXCLUSIVE : 0); \ for (;;) { \ long __int = prepare_to_wait_event(&wq_head, &__wq_entry, state);\ From patchwork Wed Jan 24 11:59:31 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529115 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 8D513657DC; Wed, 24 Jan 2024 12:00:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097614; cv=none; b=I4p5mvKjRzkXZk89N+Oe5J1jSmx1g1lfBb9HBTEQOcd7Haq96WkVpLK9WQ4d/wnYIK1oVwq0QROESsTrv/7Nctk+/gsLe6534Vfi50d6dFTouv9V3y+9lFne4x5JLm9bqNyfEGFjdUECQqNaZFr3qpoe5E37felDr6R3yBI7Z8w= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097614; c=relaxed/simple; bh=oVu1M/z+Be6zJoonqEicwRg8jn1j0IbzUk3u5AE/er0=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=jIA7ZM+mXFXOGsLXQxdWEGtXusxDqnLY6pPze3NirweKx//ndNoU+DtvZTdGfPuCE27YI6B4rcVfR/vaQLIBlV/fmWJ+/97fMofmliv+iEzxmPEtUSfBhzgth/WUzP2T+o6GrgpZN0Kho9i97dAhiecnHm8p+hzvqxC293/wUfs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-85-65b0fbb7a24d From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 20/26] dept: Apply timeout consideration to hashed-waitqueue wait Date: Wed, 24 Jan 2024 20:59:31 +0900 Message-Id: <20240124115938.80132-21-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAz2SW0xTWRSG3ftc26HmTNXMEWM0NUanRgcVzZoZo/hgPF5DML7ogzbDybQZ qKZFBJPBKsULUlSSggiaFrQ2BSm0GhltlYHIRQVRqhamkILEkQBC0CK1eCkYfVn5sta/vqef JeS3qVhWo00TdVpVioKWktLhGOvyW5FqMc4TWA3n8+Ig9O4UCaXOShraqyoQVN44hmHg/mZ4 
MT6EINL6mIAiczsCa283ATcaexB47cdp6OifCb7QCA0t5jM0ZJc7aXgyOIkhUFiAocK1Ax6e K8NQF/6fhKIBGkqKsnF0vMYQtjkYsBkWQ5/9IgOTvSuhpec5Bd6uZVB8OUCDx9tCQmNtH4aO 26U09FR+puBhYzMJ7edNFFx/U0bD4LiNAFtohIGndRYM1cao6MTbTxQ0meownLhSg8HXeQfB 3VNBDK7K5zQ0hIYwuF1mAj5cu4+gL3+YgZy8MAMlx/IRnMkpJOHxxyYKjIE1EJkopRN+ExqG RgjB6D4seMctpPCgjBf+udjNCMa7XYxgcR0S3HalUO4ZwIJ1LEQJLsdpWnCNFTBC7rAPC2/a 2hih+UKEFPp9RThx3h7pumQxRZMu6n5Zv1+qfpTtRAfzmYzRgAkbUC+ViyQsz8XzZkvkO78c u0lMMc0t4f3+8DTP5hbybtOraEbKEtzJH3j7aCudi1h2Frebfx9WTmVIbjH/6n0xOcUybi3/ YfIS89W5gK+orpv2SKL768Vd0xk5t4YPOs4yU06ey5bwDQED8fVhLv+v3U+eQzILmuFAco02 PVWlSYlfoc7UajJW/HEg1YWijbL9Pbm3Fo2176pHHIsUMbIEh1OUU6p0fWZqPeJZQjFb5p9b JcplyarMI6LuwD7doRRRX4/msaTiJ9mq8cPJcu5PVZr4lygeFHXfrpiVxBpQgu6/R+qZknVZ 2nJRaeNHg5uumRKzjm696r4cszPfqdqfNkPZXLPdKnMGkzr8Cb5yZXDk9x3mz9bWm53LjHHJ /fPvSCc8hZ7RpLzurC07Z71UFjTlWDa602t/vhcfm6r+tXPbHL2rZNCb+SzDkLQkbfuPx9sW SiY2LDItYhOXmrCC1KtVK5WETq/6Ajv6e5lNAwAA X-Brightmail-Tracker: H4sIAAAAAAAAAzWSfUyMcRzA/X7Pa4+OZ6fm8W63GcvkZeI7Gf1hPGNizJgxPePhbip2RznG okNdLmpLSuiFq11XlyvvrqV2KXkphbSTiiFSLa6cOy9X5p/vPvt+P/v89WUJZQ41kdXEHpC1 sVK0iuZILjI8cc4tb5k8r26Ag7Qz88D9PYmEHJuVhsbSYgTWiuMYup2r4NVgDwLvk2cEZGY0 IsjrfENARW07AkfRCRqa34+BFncfDfUZKTQkFthoaPriw+A6n46h2L4WGs7lY6jyfCQhs5uG i5mJ2D8+YfCYLQyYE2ZAV1E2A77O+VDf/pKCmkv1FDjaZkPWZRcN9x31JNTe7sLQfDeHhnbr HwoaautIaEwzUVDSm0/Dl0EzAWZ3HwPPq3IxlBn8tVPfflPw0FSF4dTV6xhaXt9DUJnUgcFu fUlDjbsHQ7k9g4CfhU4EXalfGTh5xsPAxeOpCFJOnifh2a+HFBhcYeD9kUNHhIs1PX2EaCiP Fx2DuaT4KF8Q72S/YURDZRsj5toPiuVFIWLB/W4s5g24KdFuSaZF+0A6Ixq/tmCx9+lTRqy7 4CXF9y2ZeP3krdzSXXK0Jk7Wzl0WxakfJ9rQ/lTmUL/LhBNQJ2VEAazALxTeDdwghpnmZwqt rZ4RDuKnC+WmD36HYwn+9GihqP8JbUQsO47fJAx5QoYdkp8hfBjKIodZwS8SfvouMf+a04Ti sqqRToB/X5LVNuIo+TChw3KWOYe4XDTKgoI0sXExkiY6LFS3V62P1RwK3bkvxo78P2M+6ku7 jb43r6pGPItUgYoIi01WUlKcTh9TjQSWUAUpWieUykrFLkl/WNbu26E9GC3rqtEkllSNV6ze LEcp+T3SAXmvLO+Xtf+vmA2YmICcHflT2/vzSusWOHxJ8ZW9Vo0+eEXkixIwLj+WfGT9NdPN wikfietqi5p7UBgpeTca1J2h4VuTUUTK88B1Y7dJcVxD3prAWWN2f9Z5fhd03HMkbTCmpx7O 2tMU/DY7Unkl+NsWve3T0Fnf0CLL3MUPrPHbXYuXLHBuHuectGRa00oVqVNL80MIrU76C1s9 GEgvAwAA X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Now that CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT was introduced, apply the consideration to hashed-waitqueue wait, assuming an input 'ret' in ___wait_var_event() macro is used as a timeout value. Signed-off-by: Byungchul Park --- include/linux/wait_bit.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/linux/wait_bit.h b/include/linux/wait_bit.h index fe89282c3e96..3ef450d9a7c5 100644 --- a/include/linux/wait_bit.h +++ b/include/linux/wait_bit.h @@ -247,7 +247,7 @@ extern wait_queue_head_t *__var_waitqueue(void *p); struct wait_bit_queue_entry __wbq_entry; \ long __ret = ret; /* explicit shadow */ \ \ - sdt_might_sleep_start(NULL); \ + sdt_might_sleep_start_timeout(NULL, __ret); \ init_wait_var_entry(&__wbq_entry, var, \ exclusive ? 
WQ_FLAG_EXCLUSIVE : 0); \ for (;;) { \ From patchwork Wed Jan 24 11:59:32 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529116 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 32E9365BC0; Wed, 24 Jan 2024 12:00:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097614; cv=none; b=S2IqMYN+WWqc4u2ka/1sxk2fCOcycoKeeac3gqT1Au9iG7a4XI4233PhMzZP7/pMlfWcP6IdBIe1dEd8BDQS55NMcnRLQqzcj3tWt997bplOnoQpHjnFAluJf1pCcSrRg1Np4ej4Aa4PZ6/IhnDM/KYNlnAym/TS3+fKzlMDlWE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097614; c=relaxed/simple; bh=A0HNCh9MAJFiyY64P3s+ZnF2ZrT06e1PDF08t8YO27Y=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=njUqifyynp6vnjKTyWLx69qyTFRPVGIinTYV4Et3tybfSqnv9yPn/GNb40VTMkbQqSfQXkqQFsm9iDl9ta5EZGcQ4FBaveM1n8Hf6y7h6fKM4dele9tbS/j+PX4EAdq2H989asCr2bD9YRa8PgDJA4nIO9T0ivxbpCf4mBFgERg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-96-65b0fbb71d9f From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 21/26] dept: Apply timeout consideration to dma fence wait Date: Wed, 24 Jan 2024 20:59:32 +0900 Message-Id: <20240124115938.80132-22-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0hTYRjHfc/lPcfV6jCDjkkUC+lGF8vqKSKMiA5EVgR9KKhGHtpom7GZ lyCynGUru9G0UsIba23q1lZppaKWlxXlylVLVNTsIs0L1iybXTajLw8/nj/P7/nyZ0nZQ3oW q9KmiDqtQi3HEkoyOLV4SVXQIS63d8XD5fPLIfAth4JCezkGT6UNQfndkwQMNG2Bt2N+BMHn bSTkmzwIinu7SLjb3I2g1nIKQ3v/NPAGhjG4TecwZJXaMbz8MkFAZ94VAmzObfDsUgkB9eOf 
KMgfwFCQn0WExmcCxs1WBsyZsdBnucHARG8cuLvf0FDbsRiu3+zEUFPrpqC5uo+A9oeFGLrL /9DwrLmVAs/lXBoqhkowfBkzk2AODDPwqr6IAIchJDr99TcNLbn1BJwuu0OA990jBHU5PQQ4 y99geBzwE+Bymkj4easJQd+FQQayz48zUHDyAoJz2XkUtP1qocHQuQqCPwpxwjrhsX+YFAyu NKF2rIgSnpbwwoMbXYxgqOtghCLnUcFlWSSU1gwQQvFogBac1rNYcI5eYQTjoJcQhl68YITW a0FK6PfmEzti9kjWJ4lqVaqoW7bhgET5/X0WPtLOprc6fEwmsjFGxLI8F8+73XojipzEJ/d6 UJgxN5/3+cbJMM/g5vKu3I+0EUlYkjszhbeMPMfhIIpL5PsfVTJhprhY/laVcZKl3Gq+zV6F /0nn8DZH/aQoMrSvuN5BhVnGreJ7rBeZsJTnsiL5lroK5t9BNN9g8VGXkLQIRViRTKVN1ShU 6vilygytKn3pwWSNE4UaZT4+sbcajXp2NSKORfKp0gSrXZTRilR9hqYR8SwpnyH1RVeKMmmS IuOYqEverzuqFvWNKIal5DOlK8bSkmTcIUWKeFgUj4i6/ynBRs7KDP3VyHcs2Bht28nlvf6g juVc8rnSxJjYvLW9tyOiMhwS//3NNpOv7aqRq5tH9hhGlE2qrXG2g56E5Ha5tvrqkvQ73aqh 19qykU/K6R3WtJrsM8MNwYIu//a4lREbN605dsKkKUvZneRaeHP7h305SnaBN/HpfGlURe7M 2T/TGoJySq9UxC0idXrFX55lJ+hNAwAA X-Brightmail-Tracker: H4sIAAAAAAAAAzWSfUxNYRzHPc855zmnw7Wzq3HI2+7WWN5f4jfM/OcZ87bZbNh06Ex33cru JUJTbpIUsiUqltgtt6tu97KuXizFVd6KWq7kqjQ0qdCNlJfK/PPbZ9/vvp+/fgKjvcxNEfSR +1VjpGLQEZEVN640zysZtKsLf7VqIS1lIfj6kljILrIRqC8sQGC7FY+h88FaeNnfhWDwaR0D Gen1CK62vWHgltuLoCL/OIGGjvHQ6OshUJt+moD5WhGB55+GMLRcOI+hwLEBHp/LxVA58IGF jE4CWRlmPHw+YhiwWHmwxAVCe34mD0Nti6DW28RB9eVaDiqa58ClKy0EyitqWXC72jE0lGYT 8Nr+cPDYXcNCfVoqBze7cwl86rcwYPH18PCiMgeDPWHYlvjtNwcPUysxJF4vxtD4qgzB3aRW DA5bE4FqXxcGpyOdgZ95DxC0n/nMw4mUAR6y4s8gOH3iAgt1vx5ykNASDIM/ssmalbS6q4eh Cc6DtKI/h6WPcmV6J/MNTxPuNvM0x3GAOvOD6LXyTkyvfvVx1GE9Rajj63meJn9uxLT72TOe 1lwcZGlHYwbePHW7uCpUNeijVeOC1SFi2Pd3ZrKvQThUY/fwcaiAT0Z+giwtle/fbkUjTKRZ ssczwIywvzRTdqa+55KRKDDSybFyfu9TMlJMkDbKHWWFo2NWCpTzSpJHWSMtk+uKSsg/6Qy5 wF45KvIbzm9eamZHWCsFy63Ws/w5JOagMVbkr4+MjlD0huD5pvCwmEj9ofl7oiIcaPhpLLFD aS7U17C2CkkC0o3TrLEWqVpOiTbFRFQhWWB0/hrP5EJVqwlVYg6rxqhdxgMG1VSFAgRWN0mz bpsaopX2KvvVcFXdpxr/t1jwmxKHnNM0mWjx+u4bnoDYHdO3rjfLb6fbEpcsvR872+WqEQ9v 16Q8WeHK2wQHcVxV8OvFifdcEle4K2SH4SJDj7w8RjK9xe4toV9uiO5Vz3f3lcNcS8gc5DW3 KeZjpX+ME53i7TvK0Z12T1tA9OtZ5uKgPuPy+KbAqEmhx8f39mQl6VhTmLIoiDGalL/cQIe1 MAMAAA== X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Now that CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT was introduced, apply the consideration to dma fence wait. 
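For clarity, a minimal caller-side sketch of what this changes; wait_for_render() and the 500ms value are hypothetical, only the dma_fence calls are real.

#include <linux/dma-fence.h>
#include <linux/jiffies.h>

/* Illustrative sketch, not part of the patch. */
static long wait_for_render(struct dma_fence *fence, bool bounded)
{
	if (bounded)
		/*
		 * The finite timeout now reaches
		 * sdt_might_sleep_start_timeout(), so this wait is treated
		 * as a timeout wait and is skipped unless
		 * CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT=y.
		 */
		return dma_fence_wait_timeout(fence, true, msecs_to_jiffies(500));

	/*
	 * dma_fence_wait() passes MAX_SCHEDULE_TIMEOUT, which Dept's
	 * timeout classification excludes, so it keeps being tracked as a
	 * normal wait.
	 */
	return dma_fence_wait(fence, true);
}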
Signed-off-by: Byungchul Park --- drivers/dma-buf/dma-fence.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c index 76dba11f0dab..95121cbcc6b5 100644 --- a/drivers/dma-buf/dma-fence.c +++ b/drivers/dma-buf/dma-fence.c @@ -784,7 +784,7 @@ dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout) cb.task = current; list_add(&cb.base.node, &fence->cb_list); - sdt_might_sleep_start(NULL); + sdt_might_sleep_start_timeout(NULL, timeout); while (!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) { if (intr) __set_current_state(TASK_INTERRUPTIBLE); @@ -888,7 +888,7 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count, } } - sdt_might_sleep_start(NULL); + sdt_might_sleep_start_timeout(NULL, timeout); while (ret > 0) { if (intr) set_current_state(TASK_INTERRUPTIBLE); From patchwork Wed Jan 24 11:59:33 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529117 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 4C40960DC0; Wed, 24 Jan 2024 12:00:12 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097615; cv=none; b=jCQaHL5ECk41QhJhR7DsesLD1x+qqj3QPATApc/qokZaFypUPpK3OhanOmexb/58lAXJzq7nENOs0K620VGeJo2rkv2s5+9x5imvipHs0ut2pr3hlcqMj/UubiStvRDb2iVdkv3fzdC9K0hdNrgF3EkvB97LxvEXu9Z+SOiMWGw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097615; c=relaxed/simple; bh=rOOuSh+1eZIFPiRqy8Z4jjiN4PmFYfqaD61RmkKO4sk=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=SnTc9+v727RnY8ds6ZxSI+r2ibDZSr2SA8waKJx71JSehByx0nsSmag7nl0oWgHdW0sbbvHZaYaLdEFQ+esEOI6B/HxO/S+DVvdy4mZ13M844msNm50r5jsfsKNreN8Ok2n1D5ddP4SDixK3ThqwzA3rFj49U0k0QmBAnvtmHeQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-a5-65b0fbb74714 From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, 
rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 22/26] dept: Record the latest one out of consecutive waits of the same class Date: Wed, 24 Jan 2024 20:59:33 +0900 Message-Id: <20240124115938.80132-23-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSe0yTZxSHfd/v2mrdl47oK25xaUK2weSyiR51UWN0fiY6tkwTcTHawReo FDAFQUxIEIoybgKxFhQJoqm1oGDBiBYYg3DTcNkgiqwwrERt5JZi0QKyAcZ/Tp6c3/k9fx2e UtoYb14TEy/pYtRaFSun5WMrytbfm62SAsdTgyE/OxDcbzJoKK6sYKHndjmCipozGJwte+DJ 9CiC2c5uCoyGHgRXnw1SUNM6hKDenMpC78hK6HNPsNBhyGIh7VolC3+9nsNgv1iAody6Hx7l lWFo9Lykwehk4bIxDS+MVxg8JgsHphQfcJgvcTD3LAg6hh4zUD/gB0Uldhbq6jtoaK11YOh9 UMzCUMV/DDxqbaehJz+HgVvjZSy8njZRYHJPcPB3YymGKv2C6OzUPANtOY0Yzl6/g6HvqQ1B Q8YwBmvFYxaa3aMYqq0GCmZutCBw5I5xkJ7t4eDymVwEWekXaeh+38aA3h4Ms++K2R1bxObR CUrUVyeK9dOltPiwjIj3Lw1yor5hgBNLrSfFarOveK3OicWrLjcjWi2/s6LVVcCJmWN9WBzv 6uLE9sJZWhzpM+Kf1h6Wfx8uaTUJki5g2zF5ZPfdKeaEc/mpXMcISkFtskwk44mwgXT9e5P9 yPeuz9OLzApfkv5+D7XIXsIXpDrnBZOJ5DwlnFtOzJOdS4VPBTV5M1zCLDIt+BDH1NOlskLY SK4YUpkP0nWkvKpxSSRb2N8qGli6UQrBZNhynluUEiFNRv6ZNHIfCmvIn+Z+Og8pStEyC1Jq YhKi1RrtBv/IpBjNKf+w2GgrWngpU/Lcr7XI1fNLExJ4pFqh2GGplJSMOiEuKboJEZ5SeSn6 19yWlIpwddJpSRd7VHdSK8U1obU8rVqt+HY6MVwpRKjjpShJOiHpPqaYl3mnoBsNEau6zh8J m3Gl231tmrf5b/WfJWY/4I9+TiZKDF+l2WO3H4ja+mPoN93J856ve5exXj5/JDe3bw4IOX3h t+PeXiGuSkxmMlKKn/+AVlI+Bbnu2kN5u507d9mOfWLYmmBM33c8KvS7wqwQbeveEm9nZJNt MsrY+XOhX802+6aD71+o6LhIdZAvpYtT/w/5RYusTgMAAA== X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0iTcRSH+793Z5OXKfRqYTWQQMmUNA4Y0YfAl6AIiaL6UCtfcl5rS51F 5GVqaWpac13UvLFsWurmrXJiWtqKnOYok3kbUg5NI51oXkqNvhwezu+c59OPwSXFpBcjj70i KGJl0VJKRIiOhqTtbl6qEwK+rXpC/u0AcM7dJKCotoaC3ufVCGoaUjBwvA2FL/NTCJY+WnDQ anoRlI0N4dDQNYzAVJVKQf+4G1idMxSYNdkUpFXUUtA3uYyBrbAAg2rDEfhwpxyD9sXvBGgd FDzSpmFrYwKDRZ2eBl2yD9irHtKwPBYI5uHPJHQWm0kwDfrBgxIbBa0mMwFdLXYM+l8WUTBc 84eED13vCOjNzyHh2XQ5BZPzOhx0zhkaPrWXYlCnXrNlzK6S0J3TjkFGZT0G1q+vELTdHMXA UPOZgk7nFAZGgwaH30/eIrDn/qAh/fYiDY9SchFkpxcSYFnpJkFtC4alhSLqYAjfOTWD82pj Im+aLyX49+Uc/+LhEM2r2wZpvtQQzxurfPmKVgfGl/1ykrxBf4viDb8KaD7rhxXjp3t6aP7d /SWCH7dqsWPbTov2hwvR8gRBsefAOVGEpXGWvORwVeXax1Ey6nbJQi4MxwZxzZWrxDpT7C5u YGARX2cPdgdnzPlGZiERg7OZrlzVz4/UeuDOyri50RJynQnWh7PPft14FrP7uGJNKvlPup2r rmvfELms7Z89GNy4kbDB3Kg+j76DRKVokx55yGMTYmTy6GB/ZVREUqxc5X8hLsaA1kqju76c 34Lm+kM7EMsg6WbxQX2tICFlCcqkmA7EMbjUQzzg+VyQiMNlSVcFRdxZRXy0oOxAWxlCukV8 +KRwTsJelF0RogThkqD4n2KMi1cyKgr0Nt9LpiMuj72/H7WZubHz+ONDJZGWEXpB0xDmmsoO iNRu9VbjqdBrm+J8t9i8A0Ysx576z7gPZTZVWE6f+PLkza3itsjvBY0TvvGOkTRpXWPf3iB5 il/6PpuyMKQir0cv3J3Xvm4ISjjjSYU1qVTh51ULOMuZwqKycxJXzFJCGSEL9MUVStlfR33r vzADAAA= X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: The current code records all the waits for later use to track relation between waits and events in each context. However, since the same class is handled the same way, it'd be okay to record only one on behalf of the others if they all have the same class. Even though it's the ideal to search the whole history buffer for that, since it'd cost too high, alternatively, let's keep the latest one at least when the same class'ed waits consecutively appear. 
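As a reading aid, the reuse rule in the diff below boils down to the following toy model; it is plain C written for this explanation, not Dept's code, and HIST_SZ, record_wait() and the field names are made up.

#define HIST_SZ 8		/* stands in for DEPT_MAX_WAIT_HIST */

struct hist_ent {
	int class_id;
	int ctxt_id;
};

static struct hist_ent hist_buf[HIST_SZ];
static unsigned int pos;	/* next slot to write, like hist_pos_next() */

static void record_wait(int class_id, int ctxt_id)
{
	/* previous slot, computed without going negative before the wrap */
	unsigned int slot = (pos + HIST_SZ - 1) % HIST_SZ;

	/* only a consecutive wait of the same class in the same context is reused */
	if (pos == 0 || hist_buf[slot].class_id != class_id ||
	    hist_buf[slot].ctxt_id != ctxt_id)
		slot = pos++ % HIST_SZ;	/* otherwise take a fresh slot */

	hist_buf[slot].class_id = class_id;
	hist_buf[slot].ctxt_id = ctxt_id;
}

A burst of waits like A A A B A A is thus stored as A B A, which is enough because waits of the same class are handled the same way.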
Signed-off-by: Byungchul Park --- kernel/dependency/dept.c | 21 ++++++++++++++++++++- 1 file changed, 20 insertions(+), 1 deletion(-) diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c index 1b8fa9f69d73..5c996f11abd5 100644 --- a/kernel/dependency/dept.c +++ b/kernel/dependency/dept.c @@ -1521,9 +1521,28 @@ static struct dept_wait_hist *new_hist(void) return wh; } +static struct dept_wait_hist *last_hist(void) +{ + int pos_n = hist_pos_next(); + struct dept_wait_hist *wh_n = hist(pos_n); + + /* + * This is the first try. + */ + if (!pos_n && !wh_n->wait) + return NULL; + + return hist(pos_n + DEPT_MAX_WAIT_HIST - 1); +} + static void add_hist(struct dept_wait *w, unsigned int wg, unsigned int ctxt_id) { - struct dept_wait_hist *wh = new_hist(); + struct dept_wait_hist *wh; + + wh = last_hist(); + + if (!wh || wh->wait->class != w->class || wh->ctxt_id != ctxt_id) + wh = new_hist(); if (likely(wh->wait)) put_wait(wh->wait); From patchwork Wed Jan 24 11:59:34 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529121 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 9F42B67747; Wed, 24 Jan 2024 12:00:13 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097618; cv=none; b=ewRHgfm0PRB2BecFnIph+vjDN/NEivDhyPqsqLbkXcf/nEgzusgQLz0fhayRXoGVJ90m0g2+zAxWh1K+N6cPhFkLIR0MwRm/dn9C2tvArV41wcJ0/SuaDpvFzStVKVoP6vwSqzircG9AlWSSYbEKpTlIhvYdXCM/ckMQqtWJDb4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097618; c=relaxed/simple; bh=Zg3dPmxAeZfdPCpe0eFW6kAfwZet+/NUlIS50hmLxGs=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=bhxDkRfYIcsEEdz4VjJhdsngxj+QcHP9/cQKOOhRyE4wMHERr9XCg+7PTMZeqM3bAC2eAP8rbIxLNgN12uOBM0HgbcTSKUxGyU6HnLNcrXCCNLRa6GvmRTmbMzstFw6nKsDcCqHvnlsmefjuElP2Id2+gLeK7P913flqw4Jxh14= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-a6-65b0fbb7d9da From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, 
dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 23/26] dept: Make Dept able to work with an external wgen Date: Wed, 24 Jan 2024 20:59:34 +0900 Message-Id: <20240124115938.80132-24-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0iTYRTHe573NlfLl3V7MqJYiGCkJhmHCJOieguMoA9FGTXyRYe62bwH gaV20UwT1rpITI21pqltoyybTUdeKmvlLBMVNamkmWVNtFmmVl8OP/7n/H+fjoSS1zMBEpU6 VdSqlYkKVkpLRxaUrbvvqxXDSj5shksXwsD74xwNpTVVLLiqKxFU2U5hGH6yE96OexD42l9S oNe5EJQN9FJga+5DYDedZqFjaCG4vaMstOkKWMipqGHh1ecpDD2XSzBUWqLhWXE5BsfkRxr0 wyxc1+fgmfEJw6TRzIExOxAGTdc4mBpYD219bxiwd6+Fqzd6WHhkb6OhuW4QQ8fDUhb6qqYZ eNbcSoPrUiEDd76Us/B53EiB0TvKwWuHAUNt7ozozPffDLQUOjCcuXkXg/tdPYKGc/0YLFVv WHB6PRisFh0FP289QTB4cYSDvAuTHFw/dRFBQd5lGl7+amEgtycCfBOlbNQmwekZpYRca4Zg HzfQwtNyIjy41ssJuQ3dnGCwpAlWU7BQ8WgYC2VjXkawmM+zgmWshBPyR9xY+PLiBSe0XvHR wpBbj/euOCjdHCsmqtJFbWjkUWm87VMnlZy/JdNWfJvKRp7wfOQnIfwGMtI6wf5nd8XbOWb5 INLVNUnN8mJ+NbEWfmDykVRC8WfnE9PX9rmjRXw0sXY+RrNM84GkNKdyriDjN5K6e9X/pKtI Za1jLvebye9c7aZnWc5HkH5zETcrJXyOH+mcqGf+FpaTRlMXXYxkBjTPjOQqdXqSUpW4ISQ+ S63KDDmmSbKgmZcynpw6VIfGXPuaEC9BigWyKHONKGeU6SlZSU2ISCjFYlnX8mpRLotVZp0Q tZoj2rREMaUJrZDQimWy8PGMWDkfp0wVE0QxWdT+32KJX0A2mne6oP0wCVa6ArO0g4HTmrrt xq21/RkhPvy+R91aFB0eE9WY+c23csku+3TM3qP3B55r9uh0OxLMRfzA7tD9+7c6DqQ6ncdG TftiP0oNcYvOE3/9keQk5/GHQ4reVZqigKiDfXv8rwTnpccFmXsjb/uvMaxb2r67wxNpu9u5 bYwo6JR45fpgSpui/ANk+Ha5TgMAAA== X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0hTcRjG+5+7q9VhCZ1KqFYiGJVWyxeMkvrgITAkiC4EOfKQKzdlM9NA sNS11Jkz1C4rvLF02rRtlOYlm+U1L+WyJU7LpJQ0w5xk2kWNvjz8eJ73fT49DC65S65jFKo4 Qa2SR0spESE6HJyy7fFclRAwbdgChswA8EzrCDBWVlDQYylHUGG/jMHYi1B4OzOOYK6zG4f8 3B4EhR/cONibBxHUl16hoHdkJTg9kxS05WZQkFJcScGrL/MYDOTlYFBuDYOO7CIMGmc/E5A/ RsGd/BRsQUYxmDWZaTAl+8Jw6W0a5j8EQttgHwlNd9tIqO/fCrfuDVBQV99GQHP1MAa9T4wU DFb8IaGjuZWAHoOehAdfiyj4MmPCweSZpOF1YwEGVakLbdrvv0lo0TdioC15iIHzXS2CBt17 DKwVfRQ0ecYxsFlzcfh5/wWC4awJGtIyZ2m4czkLQUZaHgHdv1pISB2QwdwPIxUSzDeNT+J8 qu0iXz9TQPDtRRxfc9tN86kN/TRfYL3A20r9+eK6MYwvnPKQvNV8jeKtUzk0nz7hxPivXV00 33pzjuBHnPlYuM9J0d5IIVoRL6h37IsQRdlH3+Cx6fsT7NlleDIa35mOvBiO3c05i99Si0yx fpzLNYsvsje7kbPpP5HpSMTg7NXlXOm3zqWj1WwYZ3vzFC0ywfpyxpTypQcxu4erfmSh/pVu 4MqrGpd8rwX/wa1+YpElrIx7b75OZyNRAVpmRt4KVbxSroiWbdecj0pUKRK2n4lRWtHCaExJ 84ZqNN0b6kAsg6QrxCHmSkFCyuM1iUoH4hhc6i12rbUIEnGkPPGSoI45rb4QLWgcaD1DSNeI Dx0TIiTsWXmccF4QYgX1/xRjvNYlo1NhSqnFflW2NjyvpmRT0EHtK+U5iaU29PsRvGSgy91t aWD89h9sv3gkyOD2PeqIK/70s2zXgSHXy5hVkqP6ZzVBKGQiq2My0aXXuZJy7OpCFxf/o0au VWU0bdZ2tu416q4cKjpwQ5bl2Bl+fPRjhPvbUP+JLrpveHOLz3PdpjIpoYmSB/rjao38L2WF VicwAwAA X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: There is a case where total maps for its wait/event is so large in size. For instance, struct page for PG_locked and PG_writeback is the case. The additional memory size for the maps would be 'the # of pages * sizeof(struct dept_map)' if each struct page keeps its map all the way, which might be too big to accept. It'd be better to keep the minimum data in the case, which is timestamp called 'wgen' that Dept makes use of. So made Dept able to work with an external wgen when needed. 
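A sketch of the intended usage pattern follows; the 'foo' subsystem, its fields and functions are hypothetical, only struct dept_ext_wgen and the dept_*() calls come from this patch. One dept_map is shared by the whole class of objects while each object carries only the small wgen stamp.

#include <linux/kernel.h>
#include <linux/dept.h>

/* one shared map for the whole class of 'foo' objects */
static struct dept_map foo_dmap;	/* set up once with dept_map_init() */

struct foo {
	struct dept_ext_wgen dmap_wgen;	/* per-object: only the wgen timestamp */
	bool ready;
};

static void foo_init(struct foo *f)
{
	dept_ext_wgen_init(&f->dmap_wgen);
}

static void foo_wait(struct foo *f)
{
	/* waiter side: request the event against this object's external wgen */
	dept_request_event(&foo_dmap, &f->dmap_wgen);
	dept_wait(&foo_dmap, 1UL, _THIS_IP_, __func__, 0, -1L);
	/* ... actually sleep until f->ready ... */
}

static void foo_wake(struct foo *f)
{
	/* event side: report the event against the same external wgen */
	dept_event(&foo_dmap, 1UL, _THIS_IP_, __func__, &f->dmap_wgen);
	/* ... wake the waiter up ... */
}

This keeps the per-object footprint at a single unsigned int, which is what makes the scheme acceptable for things like struct page.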
Signed-off-by: Byungchul Park --- include/linux/dept.h | 18 ++++++++++++++---- include/linux/dept_sdt.h | 4 ++-- kernel/dependency/dept.c | 30 +++++++++++++++++++++--------- 3 files changed, 37 insertions(+), 15 deletions(-) diff --git a/include/linux/dept.h b/include/linux/dept.h index 0280e45cc2af..dea53ad5b356 100644 --- a/include/linux/dept.h +++ b/include/linux/dept.h @@ -482,6 +482,13 @@ struct dept_task { bool in_sched; }; +/* + * for subsystems that requires compact use of memory e.g. struct page + */ +struct dept_ext_wgen{ + unsigned int wgen; +}; + #define DEPT_TASK_INITIALIZER(t) \ { \ .wait_hist = { { .wait = NULL, } }, \ @@ -512,6 +519,7 @@ extern void dept_task_exit(struct task_struct *t); extern void dept_free_range(void *start, unsigned int sz); extern void dept_map_init(struct dept_map *m, struct dept_key *k, int sub_u, const char *n); extern void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u, const char *n); +extern void dept_ext_wgen_init(struct dept_ext_wgen *ewg); extern void dept_map_copy(struct dept_map *to, struct dept_map *from); extern void dept_wait(struct dept_map *m, unsigned long w_f, unsigned long ip, const char *w_fn, int sub_l, long timeout); @@ -521,8 +529,8 @@ extern void dept_clean_stage(void); extern void dept_stage_event(struct task_struct *t, unsigned long ip); extern void dept_ecxt_enter(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *c_fn, const char *e_fn, int sub_l); extern bool dept_ecxt_holding(struct dept_map *m, unsigned long e_f); -extern void dept_request_event(struct dept_map *m); -extern void dept_event(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *e_fn); +extern void dept_request_event(struct dept_map *m, struct dept_ext_wgen *ewg); +extern void dept_event(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *e_fn, struct dept_ext_wgen *ewg); extern void dept_ecxt_exit(struct dept_map *m, unsigned long e_f, unsigned long ip); extern void dept_sched_enter(void); extern void dept_sched_exit(void); @@ -551,6 +559,7 @@ extern void dept_hardirqs_off(void); struct dept_key { }; struct dept_map { }; struct dept_task { }; +struct dept_ext_wgen { }; #define DEPT_MAP_INITIALIZER(n, k) { } #define DEPT_TASK_INITIALIZER(t) { } @@ -563,6 +572,7 @@ struct dept_task { }; #define dept_free_range(s, sz) do { } while (0) #define dept_map_init(m, k, su, n) do { (void)(n); (void)(k); } while (0) #define dept_map_reinit(m, k, su, n) do { (void)(n); (void)(k); } while (0) +#define dept_ext_wgen_init(wg) do { } while (0) #define dept_map_copy(t, f) do { } while (0) #define dept_wait(m, w_f, ip, w_fn, sl, t) do { (void)(w_fn); } while (0) @@ -572,8 +582,8 @@ struct dept_task { }; #define dept_stage_event(t, ip) do { } while (0) #define dept_ecxt_enter(m, e_f, ip, c_fn, e_fn, sl) do { (void)(c_fn); (void)(e_fn); } while (0) #define dept_ecxt_holding(m, e_f) false -#define dept_request_event(m) do { } while (0) -#define dept_event(m, e_f, ip, e_fn) do { (void)(e_fn); } while (0) +#define dept_request_event(m, wg) do { } while (0) +#define dept_event(m, e_f, ip, e_fn, wg) do { (void)(e_fn); } while (0) #define dept_ecxt_exit(m, e_f, ip) do { } while (0) #define dept_sched_enter() do { } while (0) #define dept_sched_exit() do { } while (0) diff --git a/include/linux/dept_sdt.h b/include/linux/dept_sdt.h index 21fce525f031..8cdac7982036 100644 --- a/include/linux/dept_sdt.h +++ b/include/linux/dept_sdt.h @@ -24,7 +24,7 @@ #define sdt_wait_timeout(m, t) \ do { \ - dept_request_event(m); \ + 
dept_request_event(m, NULL); \ dept_wait(m, 1UL, _THIS_IP_, __func__, 0, t); \ } while (0) #define sdt_wait(m) sdt_wait_timeout(m, -1L) @@ -49,7 +49,7 @@ #define sdt_might_sleep_end() dept_clean_stage() #define sdt_ecxt_enter(m) dept_ecxt_enter(m, 1UL, _THIS_IP_, "start", "event", 0) -#define sdt_event(m) dept_event(m, 1UL, _THIS_IP_, __func__) +#define sdt_event(m) dept_event(m, 1UL, _THIS_IP_, __func__, NULL) #define sdt_ecxt_exit(m) dept_ecxt_exit(m, 1UL, _THIS_IP_) #else /* !CONFIG_DEPT */ #define sdt_map_init(m) do { } while (0) diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c index 5c996f11abd5..fb33c3758c25 100644 --- a/kernel/dependency/dept.c +++ b/kernel/dependency/dept.c @@ -2186,6 +2186,11 @@ void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u, } EXPORT_SYMBOL_GPL(dept_map_reinit); +void dept_ext_wgen_init(struct dept_ext_wgen *ewg) +{ + ewg->wgen = 0U; +} + void dept_map_copy(struct dept_map *to, struct dept_map *from) { if (unlikely(!dept_working())) { @@ -2371,7 +2376,7 @@ static void __dept_wait(struct dept_map *m, unsigned long w_f, */ static void __dept_event(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *e_fn, - bool sched_map) + bool sched_map, unsigned int wg) { struct dept_class *c; struct dept_key *k; @@ -2393,7 +2398,7 @@ static void __dept_event(struct dept_map *m, unsigned long e_f, c = check_new_class(&m->map_key, k, sub_id(m, e), m->name, sched_map); if (c && add_ecxt(m, c, 0UL, NULL, e_fn, 0)) { - do_event(m, c, READ_ONCE(m->wgen), ip); + do_event(m, c, wg, ip); pop_ecxt(m, c); } } @@ -2606,7 +2611,7 @@ void dept_stage_event(struct task_struct *requestor, unsigned long ip) if (!m.keys) goto exit; - __dept_event(&m, 1UL, ip, "try_to_wake_up", sched_map); + __dept_event(&m, 1UL, ip, "try_to_wake_up", sched_map, m.wgen); exit: dept_exit(flags); } @@ -2785,10 +2790,11 @@ bool dept_ecxt_holding(struct dept_map *m, unsigned long e_f) } EXPORT_SYMBOL_GPL(dept_ecxt_holding); -void dept_request_event(struct dept_map *m) +void dept_request_event(struct dept_map *m, struct dept_ext_wgen *ewg) { unsigned long flags; unsigned int wg; + unsigned int *wg_p; if (unlikely(!dept_working())) return; @@ -2801,21 +2807,25 @@ void dept_request_event(struct dept_map *m) */ flags = dept_enter_recursive(); + wg_p = ewg ? &ewg->wgen : &m->wgen; + /* * Avoid zero wgen. */ wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen); - WRITE_ONCE(m->wgen, wg); + WRITE_ONCE(*wg_p, wg); dept_exit_recursive(flags); } EXPORT_SYMBOL_GPL(dept_request_event); void dept_event(struct dept_map *m, unsigned long e_f, - unsigned long ip, const char *e_fn) + unsigned long ip, const char *e_fn, + struct dept_ext_wgen *ewg) { struct dept_task *dt = dept_task(); unsigned long flags; + unsigned int *wg_p; if (unlikely(!dept_working())) return; @@ -2823,24 +2833,26 @@ void dept_event(struct dept_map *m, unsigned long e_f, if (m->nocheck) return; + wg_p = ewg ? &ewg->wgen : &m->wgen; + if (dt->recursive) { /* * Dept won't work with this even though an event * context has been asked. Don't make it confused at * handling the event. Disable it until the next. */ - WRITE_ONCE(m->wgen, 0U); + WRITE_ONCE(*wg_p, 0U); return; } flags = dept_enter(); - __dept_event(m, e_f, ip, e_fn, false); + __dept_event(m, e_f, ip, e_fn, false, READ_ONCE(*wg_p)); /* * Keep the map diabled until the next sleep. 
*/ - WRITE_ONCE(m->wgen, 0U); + WRITE_ONCE(*wg_p, 0U); dept_exit(flags); } From patchwork Wed Jan 24 11:59:35 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529119 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id B75BD67753; Wed, 24 Jan 2024 12:00:13 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097618; cv=none; b=a7x2jZLuHxb5e3KAjVY1WJ1fnMxRMyhZ6z9dObGnKxMupDj99PRM7ToWkbQJBdFix1/3pWztaTAYS8r2CGF/bPlRQFaQI1xuffnM41WPmYbPeTmx7XOk9wzuDjjprJV/K494CKQPuac1+SI0tkn9627BE+3ggg7zxncE5AUw7Ao= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097618; c=relaxed/simple; bh=tC26hq3Ijka0/rt3bw+rfbu4EsnZw7fNBY7MOAe1vjA=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=u/qa00IaQ/gXUih7yV/3WQZexktmiExM4eJPEibhdybfLoQV8LbGMl9EIZ5hHcWvMivZa2Mvx/v7ywU1HH+nPXrqiEB+Ix9xGVgWSrJY3MqMjno5ckhnu8bhND85c0gucZcDb58ear6k8QHui8KceGd5IovM3zbCW7HZprHujCs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-b5-65b0fbb7556c From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 24/26] dept: Track PG_locked with dept Date: Wed, 24 Jan 2024 20:59:35 +0900 Message-Id: <20240124115938.80132-25-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSfWxLexzG/X7nnN85LZWT4jpGsIZ4ve52veQbEUEkfhKEkFz33j9o7ER7 bbV0L2wx2aiNbcXIOrOS2aRrulmnJYZVarIxL7OxaO8ywy6usa1s2pjOyzrxzzefPM/zff56 BEZ9jYsS9IZk2WjQxmuIklX2jin7tTZcI8f4rAooyI+B4MfDLFidVQRaqisRVF3KwtDdsAZ8 
oR4E4QcPGSgqbEFw7sVTBi41diLw2A8QePxyLLQFAwSaCvMIHCx3Emh9N4Shw3ICQ6VrPdw7 XobBO/g/C0XdBEqKDuLh8wbDoM3Bgy1zJnTZT/Mw9CIWmjqfcOBpnwfFZzsI1HmaWGis7cLw +JqVQGfVNw7uNd5hoaXAzMGFvjIC70I2BmzBAA+PvKUYakzDRdkDXzm4bfZiyD5/EUPbv9cR 3Dj8HIOr6gmBW8EeDG5XIQOfKxoQdB3t5eFQ/iAPJVlHEeQdsrDw8MttDkwdiyH8yUpWLKW3 egIMNbn3UE+olKV3yyR69fRTnpputPO01JVC3fa5tLyuG9Nz/UGOuhxHCHX1n+Bpbm8bpn3N zTy9cyrM0pdtRXjj5L+Uy+LkeH2qbPxt+XalrstsI4l1W/YWtnr4TDSwOhcpBElcJNX3WZhc JIxwlk8XkYk4S/L7B5kIjxenS27zay4XKQVGzBkt2d8/IBFjnLhU+vr29UiIFWdK2SU+PsIq cYmUefUi96N/mlRZ4x3JKIb1C8XtbITV4mLpueMYHymVxByF5PVeIT8eJkk37X72OFKVolEO pNYbUhO0+vhFC3RpBv3eBTt2J7jQ8KJsGUN/16L+ls31SBSQZoxqhcMpqzltalJaQj2SBEYz XuWfVC2rVXHatHTZuHubMSVeTqpHkwVWM1H1e2hPnFrcqU2Wd8lyomz86WJBEZWJRr1y6L0r fXXZUx0V+svub6TiVfQqZ/RWPRPYn76pY1/BwuWB6ozoM1GVGc2t7R++6N4MBD+dNG9J+cOf zOevSzwTU3B/Qt6Arnyt5c8ZR4obyp9N6f6nZrbV6f9lIp4fkh71BP2xlqFnG+Zcn7awf9+M xnC6rvY/utm01TArZ/p+XsMm6bSxcxljkvY7wDYYsk0DAAA= X-Brightmail-Tracker: H4sIAAAAAAAAAzWSbUhTYRzFe55773Ovq8VtGd0sKgZlKZVBxp/eqIi6GEVEUliUI286ctO2 shZFLufbSstILV2yLNaYmjYDe3ExHFpmlqakiY2SqCxfwpq4ZpYL+nL4cc7hfDocpbAwYZxa e1zSaVXJSiKjZTvXZiyrC9RIUS/eRkLBxSjw/cyhwVJdSaDtbgWCyvtGDP2N26BrdABBoPUV BcWFbQhufnhHwf0mLwKX/TyBjo/TodM3TKC58AKBjFvVBNq/jWPoLbqCocK5A1oul2Nw+z/T UNxPoLQ4A0/KFwx+m4MFW/oi6LOXsDD+YSU0e98w4LnRzICrJxKul/USqHc109D0oA9DxyML AW/lHwZamp7R0FaQx0DVUDmBb6M2Cmy+YRZeu60YakyTa1k/Jhh4mufGkHX7HobOt48RPMl5 j8FZ+YaAxzeAodZZSMGvO40I+vIHWci86Geh1JiP4EJmEQ2vfj9lwNQbDYExC9m4VvQMDFOi qfak6Bq10uLzckF8WPKOFU1PeljR6jwh1tojxFv1/Vi8OeJjRKcjl4jOkSusaB7sxOLQy5es +OxagBY/dhbjXfPiZOsSpGR1mqRbsSFeltSXZyOp9XtOFba72HT0Y4sZcZzArxKMXUlmFMIR Plzo7vZTQQ7lFwq1eZ8YM5JxFJ89VbB/byXBYCa/Rpj4+ulfieYXCVmlXWyQ5fxqIf3hPSbI Ar9AqKhx/+uETPpV13voICv4aOG94xJ7GcmsaIoDhaq1aRqVOjl6uf5okkGrPrX8cIrGiSY/ Yzs7XvAA/ezY1oB4DimnyTc6qiUFo0rTGzQNSOAoZai8e85dSSFPUBlOS7qUQ7oTyZK+Ac3l aOVsecxeKV7BJ6qOS0clKVXS/U8xFxKWjjxUrj/+s+Fq9vay/JgVmnNVmd6FERaYNyu1oy7R YupRnZ+9atPmuvVbs9TtY2diPQZX3ElMWs8Ku8pkxqi4wPZfOWav/mCjOzb8SIpt/wz9mj37 Wx3WkWMJS/7s3aGIrSpaOjV7PqUJN9oTWw6UVngWx0Ru2b0vt0ZZQGu0E5FKWp+kWhlB6fSq vyVxM00vAwAA X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Makes Dept able to track PG_locked waits and events. It's going to be useful in practice. 
See the following link that shows dept worked with PG_locked and can detect real issues: https://lore.kernel.org/lkml/1674268856-31807-1-git-send-email-byungchul.park@lge.com/ Signed-off-by: Byungchul Park --- include/linux/mm_types.h | 2 + include/linux/page-flags.h | 105 ++++++++++++++++++++++++++++++++----- include/linux/pagemap.h | 7 ++- mm/filemap.c | 26 +++++++++ mm/mm_init.c | 2 + 5 files changed, 129 insertions(+), 13 deletions(-) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 957ce38768b2..5c1112bc7a46 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -19,6 +19,7 @@ #include #include #include +#include #include @@ -203,6 +204,7 @@ struct page { struct page *kmsan_shadow; struct page *kmsan_origin; #endif + struct dept_ext_wgen PG_locked_wgen; } _struct_page_alignment; /* diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index a88e64acebfe..0a498f2c4543 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -198,6 +198,43 @@ enum pageflags { #ifndef __GENERATING_BOUNDS_H +#ifdef CONFIG_DEPT +#include +#include + +extern struct dept_map PG_locked_map; + +/* + * Place the following annotations in its suitable point in code: + * + * Annotate dept_page_set_bit() around firstly set_bit*() + * Annotate dept_page_clear_bit() around clear_bit*() + * Annotate dept_page_wait_on_bit() around wait_on_bit*() + */ + +static inline void dept_page_set_bit(struct page *p, int bit_nr) +{ + if (bit_nr == PG_locked) + dept_request_event(&PG_locked_map, &p->PG_locked_wgen); +} + +static inline void dept_page_clear_bit(struct page *p, int bit_nr) +{ + if (bit_nr == PG_locked) + dept_event(&PG_locked_map, 1UL, _RET_IP_, __func__, &p->PG_locked_wgen); +} + +static inline void dept_page_wait_on_bit(struct page *p, int bit_nr) +{ + if (bit_nr == PG_locked) + dept_wait(&PG_locked_map, 1UL, _RET_IP_, __func__, 0, -1L); +} +#else +#define dept_page_set_bit(p, bit_nr) do { } while (0) +#define dept_page_clear_bit(p, bit_nr) do { } while (0) +#define dept_page_wait_on_bit(p, bit_nr) do { } while (0) +#endif + #ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP DECLARE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key); @@ -379,44 +416,88 @@ static __always_inline int Page##uname(struct page *page) \ #define SETPAGEFLAG(uname, lname, policy) \ static __always_inline \ void folio_set_##lname(struct folio *folio) \ -{ set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \ +{ \ + set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); \ + dept_page_set_bit(&folio->page, PG_##lname); \ +} \ static __always_inline void SetPage##uname(struct page *page) \ -{ set_bit(PG_##lname, &policy(page, 1)->flags); } +{ \ + set_bit(PG_##lname, &policy(page, 1)->flags); \ + dept_page_set_bit(page, PG_##lname); \ +} #define CLEARPAGEFLAG(uname, lname, policy) \ static __always_inline \ void folio_clear_##lname(struct folio *folio) \ -{ clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \ +{ \ + clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); \ + dept_page_clear_bit(&folio->page, PG_##lname); \ +} \ static __always_inline void ClearPage##uname(struct page *page) \ -{ clear_bit(PG_##lname, &policy(page, 1)->flags); } +{ \ + clear_bit(PG_##lname, &policy(page, 1)->flags); \ + dept_page_clear_bit(page, PG_##lname); \ +} #define __SETPAGEFLAG(uname, lname, policy) \ static __always_inline \ void __folio_set_##lname(struct folio *folio) \ -{ __set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \ +{ \ + __set_bit(PG_##lname, 
folio_flags(folio, FOLIO_##policy)); \ + dept_page_set_bit(&folio->page, PG_##lname); \ +} \ static __always_inline void __SetPage##uname(struct page *page) \ -{ __set_bit(PG_##lname, &policy(page, 1)->flags); } +{ \ + __set_bit(PG_##lname, &policy(page, 1)->flags); \ + dept_page_set_bit(page, PG_##lname); \ +} #define __CLEARPAGEFLAG(uname, lname, policy) \ static __always_inline \ void __folio_clear_##lname(struct folio *folio) \ -{ __clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \ +{ \ + __clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); \ + dept_page_clear_bit(&folio->page, PG_##lname); \ +} \ static __always_inline void __ClearPage##uname(struct page *page) \ -{ __clear_bit(PG_##lname, &policy(page, 1)->flags); } +{ \ + __clear_bit(PG_##lname, &policy(page, 1)->flags); \ + dept_page_clear_bit(page, PG_##lname); \ +} #define TESTSETFLAG(uname, lname, policy) \ static __always_inline \ bool folio_test_set_##lname(struct folio *folio) \ -{ return test_and_set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \ +{ \ + bool ret = test_and_set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy));\ + if (!ret) \ + dept_page_set_bit(&folio->page, PG_##lname); \ + return ret; \ +} \ static __always_inline int TestSetPage##uname(struct page *page) \ -{ return test_and_set_bit(PG_##lname, &policy(page, 1)->flags); } +{ \ + bool ret = test_and_set_bit(PG_##lname, &policy(page, 1)->flags);\ + if (!ret) \ + dept_page_set_bit(page, PG_##lname); \ + return ret; \ +} #define TESTCLEARFLAG(uname, lname, policy) \ static __always_inline \ bool folio_test_clear_##lname(struct folio *folio) \ -{ return test_and_clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \ +{ \ + bool ret = test_and_clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy));\ + if (ret) \ + dept_page_clear_bit(&folio->page, PG_##lname); \ + return ret; \ +} \ static __always_inline int TestClearPage##uname(struct page *page) \ -{ return test_and_clear_bit(PG_##lname, &policy(page, 1)->flags); } +{ \ + bool ret = test_and_clear_bit(PG_##lname, &policy(page, 1)->flags);\ + if (ret) \ + dept_page_clear_bit(page, PG_##lname); \ + return ret; \ +} #define PAGEFLAG(uname, lname, policy) \ TESTPAGEFLAG(uname, lname, policy) \ diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 06142ff7f9ce..c6683b228b20 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -991,7 +991,12 @@ void folio_unlock(struct folio *folio); */ static inline bool folio_trylock(struct folio *folio) { - return likely(!test_and_set_bit_lock(PG_locked, folio_flags(folio, 0))); + bool ret = !test_and_set_bit_lock(PG_locked, folio_flags(folio, 0)); + + if (ret) + dept_page_set_bit(&folio->page, PG_locked); + + return likely(ret); } /* diff --git a/mm/filemap.c b/mm/filemap.c index ad5b4aa049a3..241a67a363b0 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -45,6 +45,7 @@ #include #include #include +#include #include #include #include "internal.h" @@ -1098,6 +1099,7 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync, if (flags & WQ_FLAG_CUSTOM) { if (test_and_set_bit(key->bit_nr, &key->folio->flags)) return -1; + dept_page_set_bit(&key->folio->page, key->bit_nr); flags |= WQ_FLAG_DONE; } } @@ -1181,6 +1183,7 @@ static inline bool folio_trylock_flag(struct folio *folio, int bit_nr, if (wait->flags & WQ_FLAG_EXCLUSIVE) { if (test_and_set_bit(bit_nr, &folio->flags)) return false; + dept_page_set_bit(&folio->page, bit_nr); } else if (test_bit(bit_nr, &folio->flags)) return 
false; @@ -1191,6 +1194,9 @@ static inline bool folio_trylock_flag(struct folio *folio, int bit_nr, /* How many times do we accept lock stealing from under a waiter? */ int sysctl_page_lock_unfairness = 5; +struct dept_map __maybe_unused PG_locked_map = DEPT_MAP_INITIALIZER(PG_locked_map, NULL); +EXPORT_SYMBOL(PG_locked_map); + static inline int folio_wait_bit_common(struct folio *folio, int bit_nr, int state, enum behavior behavior) { @@ -1202,6 +1208,8 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr, unsigned long pflags; bool in_thrashing; + dept_page_wait_on_bit(&folio->page, bit_nr); + if (bit_nr == PG_locked && !folio_test_uptodate(folio) && folio_test_workingset(folio)) { delayacct_thrashing_start(&in_thrashing); @@ -1295,6 +1303,23 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr, break; } + /* + * dept_page_set_bit() might have been called already in + * folio_trylock_flag(), wake_page_function() or somewhere. + * However, call it again to reset the wgen of dept to ensure + * dept_page_wait_on_bit() is called prior to + * dept_page_set_bit(). + * + * Remind dept considers all the waits between + * dept_page_set_bit() and dept_page_clear_bit() as potential + * event disturbers. Ensure the correct sequence so that dept + * can make correct decisions: + * + * wait -> acquire(set bit) -> release(clear bit) + */ + if (wait->flags & WQ_FLAG_DONE) + dept_page_set_bit(&folio->page, bit_nr); + /* * If a signal happened, this 'finish_wait()' may remove the last * waiter from the wait-queues, but the folio waiters bit will remain @@ -1471,6 +1496,7 @@ void folio_unlock(struct folio *folio) BUILD_BUG_ON(PG_waiters != 7); BUILD_BUG_ON(PG_locked > 7); VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio); + dept_page_clear_bit(&folio->page, PG_locked); if (folio_xor_flags_has_waiters(folio, 1 << PG_locked)) folio_wake_bit(folio, PG_locked); } diff --git a/mm/mm_init.c b/mm/mm_init.c index 077bfe393b5e..fc150d7a3686 100644 --- a/mm/mm_init.c +++ b/mm/mm_init.c @@ -26,6 +26,7 @@ #include #include #include +#include #include "internal.h" #include "slab.h" #include "shuffle.h" @@ -564,6 +565,7 @@ void __meminit __init_single_page(struct page *page, unsigned long pfn, page_mapcount_reset(page); page_cpupid_reset_last(page); page_kasan_tag_reset(page); + dept_ext_wgen_init(&page->PG_locked_wgen); INIT_LIST_HEAD(&page->lru); #ifdef WANT_PAGE_VIRTUAL From patchwork Wed Jan 24 11:59:36 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529118 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id CFCBF679E0; Wed, 24 Jan 2024 12:00:13 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097617; cv=none; b=Y0QWtLoiHyP8Rg+njGDdTkgX55aiLXlMfiO4PqcMXZhba3QC/63j78Qj/B7Y6+oqDAROSDfzuIGBQfs5LXVTtw6hfw0ZgPsEAQpZBgQSVrd7ph5xS/m03DDc1m9tVeB6CqEHWbp8VuvMiawkue56X+Uq0s8dPP0JDoV2ot5nn4A= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097617; c=relaxed/simple; bh=ul/wiSzDZit7bOUs1eKkFyN45Ghg8m4Ni94/4VNQ6UQ=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=kDjEdRtZoatvx19SN2Nxm2WdjcBTfM+AfGJUAb1SH/oSbjTbNmRZpe2AC79B63Ivf/8AC/+/2U78zVtmAcBiJ6Z7b11epdGu6ALxD9OhS7+6aO38WNY7DJvDntiNU/hc3Dei0TOQ9/px4yaqxKuzszQqsbqWtRDYOPmMJpLwy4M= 
ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-c4-65b0fbb78df2 From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 25/26] dept: Print event context requestor's stacktrace on report Date: Wed, 24 Jan 2024 20:59:36 +0900 Message-Id: <20240124115938.80132-26-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzXSfUzMcRwHcN/v7+nuOH6OzU9nww1tHkPymRn9wfwwY2v9wx/c9KOb67SL iNnCXURXihxJ6rJzu1LnKosenKKTlNAqqVQeW12XuFunPNx5+Oez1z6fvd9/fUSErIIKEqk0 hwStRqlW0BJS4pqUt6R8zCaE2G5MgfSUEPB8O0tCdnEhDc1FBQgKS09i6H+8Cdq8gwjGGp8T YMxsRpDX20VAaV03girLKRpevZ8MLR43DfWZ52k4nV9Mw4uBcQydlzMwFNi3QcMFEwaH7xMJ xn4arhlPY//4jMFntjJgTpwPfZYsBsZ7l0N9dysFVR2L4GpOJw2VVfUk1JX3YXh1P5uG7sJf FDTUPSGhOd1Awe0hEw0DXjMBZo+bgZeOXAw2nb8o6etPCpwGB4akm3cwtLyuQFB9tgeDvbCV hlrPIIYSeyYB3289RtCX6mJAn+Jj4NrJVATn9ZdJeP7DSYGucxWMjWbT4Wv42kE3wetKjvBV 3lySf2ri+HtZXQyvq+5g+Fz7Yb7EspDPr+zHfN6Ih+Lt1mSat49kMPw5Vwvmh5qaGP7JlTGS f99ixDvkOyVrowS1Kl7QLlu3RxJdf91FxCZvPPqhv5JMRFdWn0NiEceGcm09Tvq/kxobqYBp Nphrb/cRAU9n53Alho/+vUREsGcmcpbhxj+BaWwkV/HdigIm2fnc5+QyHLCUDeN+/hgl/pbO 5gpsjj8W+/e3r3aQAcvYVVyPNY0JlHLsGTF36Y3pX2Am99DSTl5A0lw0wYpkKk18jFKlDl0a naBRHV2692CMHflfynxifFc5GmmOqEGsCCkmScOtxYKMUsbHJcTUIE5EKKZL22cWCTJplDLh mKA9uFt7WC3E1SC5iFTMkK7wHomSsfuVh4QDghAraP9fsUgclIgy0uXbf2kj3Hf3nBj+kqbb 4ujKUd9ym4TNOYa1rfu+bhmVl0XFq4+vFq9XOEszL3ZtdLRtjTbNnZhhdmVNDntg2/ARR/pM QbO6PzljFw8tNsxL8RaF7DTKFXpV1oCljk7KdzwKn/rFHrktIVijL4p5d0mdtmDlh7epevAM Bj8LU5Bx0crlCwltnPI3HTPhU04DAAA= X-Brightmail-Tracker: H4sIAAAAAAAAAzXSfUzMcRwHcN/v7+k6zn67Mj9h7CZWppLiY4w2o9/ayNbMZjNu/OhWnXZX EWNRUjiVrZI6Ttlpd13ljnkqWlGd1gMl1Sqc0NGDua7n4o7557PX3u/t/ddHREi1lLdIoYwX VEp5jIwWk+I9W1LWPZquEALHv2+A7KuB4BxNJ6GwvJSG1jIjgtIH5zHYX4XB+7FBBNNNLQTk 5bQiuPOpl4AHdX0Iqkou0NDWvxDanSM0WHOu0JBSXE7Dmx8zGHpyr2MwmndDY1YRhurJbyTk 
2WkoyEvBrjOAYVJvYECf7AO2kpsMzHxaD9a+DgpqtVYKqrrXQv6tHhoqq6wk1D22YWh7WkhD X+lvChrrGkhozdZQYBououHHmJ4AvXOEgbfVOgwVqa61NMccBfWaagxpd+9jaO96huB5+kcM 5tIOGmqdgxgs5hwCpu69QmC7NsTAxauTDBScv4bgysVcElpm6ylI7QmB6YlCOnQLXzs4QvCp lpN81ZiO5F8XcfyTm70Mn/q8m+F15gTeUuLHF1faMX/nl5PizYYMmjf/us7wl4faMT/c3Mzw DTemSb6/PQ/vXXZAvPWoEKNIFFQB2w6Lo6zaISIuY+epL/ZKMhnd2HQZeYg4NphLa2qi3KbZ NVxn5yThthe7krNovrpysYhgL83nSn420e7Ck93HPZsyILdJ1ocbyHiI3ZawG7m52Qni3+gK zlhR/dcertyU3026LWVDuI+GTCYLiXVongF5KZSJsXJFTIi/OjoqSak45X/kRKwZuZ5Gf3Ym +zEabQurQawIyRZIQg3lgpSSJ6qTYmsQJyJkXpLOJWWCVHJUnnRaUJ04pEqIEdQ1aKmIlC2W hO8XDkvZ4/J4IVoQ4gTV/xaLPLyT0Ysg28aXxdmtIbe4d8v0Lf6ftfYkv4QvUw6m/GBBwO4P jf2Ou6FBDxc5Vs9ZO7YzcT4LJo6ZvPcc2zGh0/pHBhoHI74F+J2hO8YH8jURMGDbHNyzQ7K8 dyq8K6VgaPl45+gcudWhfnLbyPn4Rq4K9/yZucLyeVea5pxpW8VST1+TjFRHydf7ESq1/A9P xOmEMAMAAA== X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Currently, print nothing in place of [S] in report, which means stacktrace of event context's start if the event is not an unlock thing by typical lock but general event because it's not easy to specify the point in a general way, where the event context has started from. However, unfortunately it makes hard to interpret dept's report in that case. So made it print the event requestor's stacktrace instead of the event context's start, in place of [S] in report. Signed-off-by: Byungchul Park --- include/linux/dept.h | 13 +++++++ kernel/dependency/dept.c | 83 ++++++++++++++++++++++++++++++++-------- 2 files changed, 80 insertions(+), 16 deletions(-) diff --git a/include/linux/dept.h b/include/linux/dept.h index dea53ad5b356..6db23d77905e 100644 --- a/include/linux/dept.h +++ b/include/linux/dept.h @@ -145,6 +145,11 @@ struct dept_map { */ unsigned int wgen; + /* + * requestor for the event context to run + */ + struct dept_stack *req_stack; + /* * whether this map should be going to be checked or not */ @@ -486,7 +491,15 @@ struct dept_task { * for subsystems that requires compact use of memory e.g. struct page */ struct dept_ext_wgen{ + /* + * wait timestamp associated to this map + */ unsigned int wgen; + + /* + * requestor for the event context to run + */ + struct dept_stack *req_stack; }; #define DEPT_TASK_INITIALIZER(t) \ diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c index fb33c3758c25..abf1cdab0615 100644 --- a/kernel/dependency/dept.c +++ b/kernel/dependency/dept.c @@ -129,6 +129,7 @@ static int dept_per_cpu_ready; #define DEPT_INFO(s...) pr_warn("DEPT_INFO: " s) static arch_spinlock_t dept_spin = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED; +static arch_spinlock_t dept_req_spin = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED; static arch_spinlock_t dept_pool_spin = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED; /* @@ -1669,7 +1670,8 @@ static void add_wait(struct dept_class *c, unsigned long ip, static bool add_ecxt(struct dept_map *m, struct dept_class *c, unsigned long ip, const char *c_fn, - const char *e_fn, int sub_l) + const char *e_fn, int sub_l, + struct dept_stack *req_stack) { struct dept_task *dt = dept_task(); struct dept_ecxt_held *eh; @@ -1700,10 +1702,16 @@ static bool add_ecxt(struct dept_map *m, struct dept_class *c, e->class = get_class(c); e->ecxt_ip = ip; - e->ecxt_stack = ip && rich_stack ? 
get_current_stack() : NULL; e->event_fn = e_fn; e->ecxt_fn = c_fn; + if (req_stack) + e->ecxt_stack = get_stack(req_stack); + else if (ip && rich_stack) + e->ecxt_stack = get_current_stack(); + else + e->ecxt_stack = NULL; + eh = dt->ecxt_held + (dt->ecxt_held_pos++); eh->ecxt = get_ecxt(e); eh->map = m; @@ -2147,6 +2155,7 @@ void dept_map_init(struct dept_map *m, struct dept_key *k, int sub_u, m->sub_u = sub_u; m->name = n; m->wgen = 0U; + m->req_stack = NULL; m->nocheck = !valid_key(k); dept_exit_recursive(flags); @@ -2181,6 +2190,7 @@ void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u, m->name = n; m->wgen = 0U; + m->req_stack = NULL; dept_exit_recursive(flags); } @@ -2189,6 +2199,7 @@ EXPORT_SYMBOL_GPL(dept_map_reinit); void dept_ext_wgen_init(struct dept_ext_wgen *ewg) { ewg->wgen = 0U; + ewg->req_stack = NULL; } void dept_map_copy(struct dept_map *to, struct dept_map *from) @@ -2376,7 +2387,8 @@ static void __dept_wait(struct dept_map *m, unsigned long w_f, */ static void __dept_event(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *e_fn, - bool sched_map, unsigned int wg) + bool sched_map, unsigned int wg, + struct dept_stack *req_stack) { struct dept_class *c; struct dept_key *k; @@ -2397,7 +2409,7 @@ static void __dept_event(struct dept_map *m, unsigned long e_f, k = m->keys ?: &m->map_key; c = check_new_class(&m->map_key, k, sub_id(m, e), m->name, sched_map); - if (c && add_ecxt(m, c, 0UL, NULL, e_fn, 0)) { + if (c && add_ecxt(m, c, 0UL, "(event requestor)", e_fn, 0, req_stack)) { do_event(m, c, wg, ip); pop_ecxt(m, c); } @@ -2506,6 +2518,8 @@ EXPORT_SYMBOL_GPL(dept_stage_wait); static void __dept_clean_stage(struct dept_task *dt) { + if (dt->stage_m.req_stack) + put_stack(dt->stage_m.req_stack); memset(&dt->stage_m, 0x0, sizeof(struct dept_map)); dt->stage_sched_map = false; dt->stage_w_fn = NULL; @@ -2571,6 +2585,7 @@ void dept_request_event_wait_commit(void) */ wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen); WRITE_ONCE(dt->stage_m.wgen, wg); + dt->stage_m.req_stack = get_current_stack(); __dept_wait(&dt->stage_m, 1UL, ip, w_fn, 0, true, sched_map, timeout); exit: @@ -2602,6 +2617,8 @@ void dept_stage_event(struct task_struct *requestor, unsigned long ip) */ m = dt_req->stage_m; sched_map = dt_req->stage_sched_map; + if (m.req_stack) + get_stack(m.req_stack); __dept_clean_stage(dt_req); /* @@ -2611,8 +2628,12 @@ void dept_stage_event(struct task_struct *requestor, unsigned long ip) if (!m.keys) goto exit; - __dept_event(&m, 1UL, ip, "try_to_wake_up", sched_map, m.wgen); + __dept_event(&m, 1UL, ip, "try_to_wake_up", sched_map, m.wgen, + m.req_stack); exit: + if (m.req_stack) + put_stack(m.req_stack); + dept_exit(flags); } @@ -2692,7 +2713,7 @@ void dept_map_ecxt_modify(struct dept_map *m, unsigned long e_f, k = m->keys ?: &m->map_key; c = check_new_class(&m->map_key, k, sub_id(m, new_e), m->name, false); - if (c && add_ecxt(m, c, new_ip, new_c_fn, new_e_fn, new_sub_l)) + if (c && add_ecxt(m, c, new_ip, new_c_fn, new_e_fn, new_sub_l, NULL)) goto exit; /* @@ -2744,7 +2765,7 @@ void dept_ecxt_enter(struct dept_map *m, unsigned long e_f, unsigned long ip, k = m->keys ?: &m->map_key; c = check_new_class(&m->map_key, k, sub_id(m, e), m->name, false); - if (c && add_ecxt(m, c, ip, c_fn, e_fn, sub_l)) + if (c && add_ecxt(m, c, ip, c_fn, e_fn, sub_l, NULL)) goto exit; missing_ecxt: dt->missing_ecxt++; @@ -2792,9 +2813,11 @@ EXPORT_SYMBOL_GPL(dept_ecxt_holding); void dept_request_event(struct dept_map *m, struct dept_ext_wgen *ewg) { + 
struct dept_task *dt = dept_task(); unsigned long flags; unsigned int wg; unsigned int *wg_p; + struct dept_stack **req_stack_p; if (unlikely(!dept_working())) return; @@ -2802,12 +2825,18 @@ void dept_request_event(struct dept_map *m, struct dept_ext_wgen *ewg) if (m->nocheck) return; - /* - * Allow recursive entrance. - */ - flags = dept_enter_recursive(); + if (dt->recursive) + return; - wg_p = ewg ? &ewg->wgen : &m->wgen; + flags = dept_enter(); + + if (ewg) { + wg_p = &ewg->wgen; + req_stack_p = &ewg->req_stack; + } else { + wg_p = &m->wgen; + req_stack_p = &m->req_stack; + } /* * Avoid zero wgen. @@ -2815,7 +2844,13 @@ void dept_request_event(struct dept_map *m, struct dept_ext_wgen *ewg) wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen); WRITE_ONCE(*wg_p, wg); - dept_exit_recursive(flags); + arch_spin_lock(&dept_req_spin); + if (*req_stack_p) + put_stack(*req_stack_p); + *req_stack_p = get_current_stack(); + arch_spin_unlock(&dept_req_spin); + + dept_exit(flags); } EXPORT_SYMBOL_GPL(dept_request_event); @@ -2826,6 +2861,8 @@ void dept_event(struct dept_map *m, unsigned long e_f, struct dept_task *dt = dept_task(); unsigned long flags; unsigned int *wg_p; + struct dept_stack **req_stack_p; + struct dept_stack *req_stack; if (unlikely(!dept_working())) return; @@ -2833,7 +2870,18 @@ void dept_event(struct dept_map *m, unsigned long e_f, if (m->nocheck) return; - wg_p = ewg ? &ewg->wgen : &m->wgen; + if (ewg) { + wg_p = &ewg->wgen; + req_stack_p = &ewg->req_stack; + } else { + wg_p = &m->wgen; + req_stack_p = &m->req_stack; + } + + arch_spin_lock(&dept_req_spin); + req_stack = *req_stack_p; + *req_stack_p = NULL; + arch_spin_unlock(&dept_req_spin); if (dt->recursive) { /* @@ -2842,17 +2890,20 @@ void dept_event(struct dept_map *m, unsigned long e_f, * handling the event. Disable it until the next. */ WRITE_ONCE(*wg_p, 0U); + if (req_stack) + put_stack(req_stack); return; } flags = dept_enter(); - - __dept_event(m, e_f, ip, e_fn, false, READ_ONCE(*wg_p)); + __dept_event(m, e_f, ip, e_fn, false, READ_ONCE(*wg_p), req_stack); /* * Keep the map diabled until the next sleep. 
*/ WRITE_ONCE(*wg_p, 0U); + if (req_stack) + put_stack(req_stack); dept_exit(flags); } From patchwork Wed Jan 24 11:59:37 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 13529120 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 9CD7067E71; Wed, 24 Jan 2024 12:00:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097618; cv=none; b=WSjvNfCGXjnO3DiTFYyRHTSzNtfW0g0xPpt54CH40FuE9wSGjfbLfd2d7Le3H4WgkBy0YF5kFgQPPvYEcr6y8VaDvTK7bRkL9wyN4cInJHkwGPr9C8Bm/Y1dw/iPCaZFnPLKhRFZNkNwF9GL//HNc4fkiebTUErv23JClAMxAMk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706097618; c=relaxed/simple; bh=VZqpdCT23a4FLYa7casjJexKzDefLDFZjLgIXAUOM2A=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=dn04dRZ9CNFxEIUTdPo6bkr0IGQ/FSIHmLDS70KN9DYlnFzT4DaWH/3csme5qzQeDs1vvj3o8SRvLNyiZgDixESWzrUUoz+R9sbCpCgbe6gGFp5nzsYgpDWw2RfraHnzscmb4aF6O34b3WZ9H7u0CUmS3KoNP2B2ZIAAW2puYnQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d85ff70000001748-d3-65b0fbb8fd1e From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v11 26/26] locking/lockdep, fs/jbd2: Use a weaker annotation in journal handling Date: Wed, 24 Jan 2024 20:59:37 +0900 Message-Id: <20240124115938.80132-27-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240124115938.80132-1-byungchul@sk.com> References: <20240124115938.80132-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSbUyTZxiF9zzvZzu6vOsIvIrJli5mDhQURe8ZZ1hitmfLjDNmf1wiNvYN NBYkLZ9LNICoyJdKhPqBs6CptRSLBQ0KJQwjgsSCQqAiNEAIA6RA0DbryqaUbX/uXDnn5OT+ cXhK+ZBZy2vTMiR9mlqnYuW03BtWs6k52CBtrvdEwIXSzeB7W0RDtd3GQt+dOgS2pnwMM4+/ 
gyH/HILgs14KjJV9CGrGRylo6vQgcFoKWOif/AgGfAssdFeWsHDyhp2F56+XMYxUVWCoc+yF nvO1GNoDf9BgnGHhqvEkXjnTGAJmKwfmvPUwYbnCwfL4Fuj2DDLgHI6By7+NsNDq7Kahs3kC Q//DahY8tncM9HR20dB3oYyB+vlaFl77zRSYfQscvGg3YWgoXCk6/eYfBp6UtWM4ffMuhoGX LQjaisYwOGyDLDzyzWFodFRS8Netxwgmyr0cnCoNcHA1vxxByakqGnr/fsJA4UgCBP+sZhN3 kkdzCxQpbMwmTr+JJk9rRfLgyihHCtuGOWJyZJJGSzS50TqDSc2SjyEO61mWOJYqOFLsHcBk 3uXiSNelIE0mB4z4p6iD8l0aSafNkvRxuw/LUxY902z6XVlOgfkBk4eCXDGS8aKwTSy4OUkX I36VrV5NSGaFL0S3O0CFOFz4TGwsm2KKkZynhDMfipbFZ2zI+EQ4LHa+WVgN0cJ60V10G4dY IWwXL5Y/Z/7t/1Ssa2hfzchW9PrLw3SIlUKCOGY9x4VKRaFEJp6ZafrvoTXi7xY3fR4pTOgD K1Jq07JS1VrdttiU3DRtTuyRY6kOtLIo8/HlX5rRUt+BDiTwSBWmSLTaJSWjzjLkpnYgkadU 4Qr3mjuSUqFR5/4q6Y8l6TN1kqEDRfG0KlIR78/WKIVkdYZ0VJLSJf3/LuZla/NQTOnUgQBq 8R/cYZg12dsyut++WjxxfVPuZHxYzvzHO69H5Rza802+K+t74muuvkfG9zdEbK1K32Bftjm4 +8lDvovGlri4a4mxR/r3HL205CuPKLrlCki28J7MrumMH0brX83mxciTvv6yav+6jb37fvw8 Mvlbj9NQsU7X+pWm5+dsl4o2pKi3RFN6g/o93sw5bE0DAAA= X-Brightmail-Tracker: H4sIAAAAAAAAAzWSe0hTcRzF+/3u09XitsRu2UMWURS9SONb9vorL0FR/2hFD1deckxnbGrZ g6YzM01T0ZnOQq2WTVObPazUlqJpkY80M7OV2kNRW2gTbfZwQv8cPpxzOH8dlpDlUPNYpTpc 1KgVIXJaQkp2+epXPnSWimu6JjZC6qU14PgZT0JOSRENzcWFCIruRWPor/WDt6ODCJyvmgjI zGhGkNf9gYB7dTYElQUxNLR+ngltDjsNDRmJNOivl9DQMjCBocuQhqHQshNepuRjsI5/IyGz nwZjph5PSh+GcZOZAZNuCfQUZDMw0b0WGmztFNRcbaCgsnMFZF3roqGisoGEuvIeDK2Pc2iw Ff2l4GVdPQnNqUkU3PmeT8PAqIkAk8POwGtrLobS2Mm1uJE/FDxPsmKIu3EXQ9u7Jwiq4j9h sBS101DjGMRQZskg4NetWgQ9yUMMnL80zoAxOhlB4nkDCU2/n1MQ2+UDzrEcepuvUDNoJ4TY shNC5WguKbzI54VH2R8YIbaqkxFyLRFCWcFy4XpFPxbyhh2UYDFfpAXLcBojJAy1YeF7YyMj 1F9xksLntky8e/5+yaYgMUQZKWpWbwmUBP+w9dHH77qdjDE9onTIySQgluU5b948FJSA3Fia W8p3dIwTLnbnvPiypK9UApKwBHdhOl/w4xXtCmZzgXzdiH2qRHJL+I7429jFUm49n57cQrmY 5xbxhaXWqY7bpH8nq5N0sYzz4T+ZLzMpSJKLppmRu1IdGapQhvis0qqCo9TKk6uOhoVa0ORn TGcnUsvRz1a/asSxSD5Dus1cIsooRaQ2KrQa8Swhd5d2zC0WZdIgRdQpURN2WBMRImqrkSdL yudIdwSIgTLumCJcVInicVHzP8Ws2zwdWl/V5GkI9M+b9WDgo7rl97oFuw3LSus773v5djc9 8JHNaXltVJ027NFlmw8Vp2+3Ru+znzD4Z5T3Rg3n92/9ZrRdO6f3YFQHjBEevX2bA/QjGxYe OfzeviHduj15bHXS2N4YXYD/DG+vo/V4MfO0N/Qg7KutULU/DvsSf7P1zLM3TjmpDVasXU5o tIp/akcm2y8DAAA= X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-block@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: jbd2 journal handling code doesn't want any jbd2_might_wait_for_commit() to be between start_this_handle() and stop_this_handle(). So it marks the region with rwsem_acquire_read() and rwsem_release(). However, the annotation is too strong for that purpose. We don't have to use more than try lock annotation for that. Furthermore, now that Dept was introduced, false positive alarms was reported by that. Replaced it with try lock annotation. Signed-off-by: Byungchul Park --- fs/jbd2/transaction.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c index 5f08b5fd105a..2c159a547e15 100644 --- a/fs/jbd2/transaction.c +++ b/fs/jbd2/transaction.c @@ -460,7 +460,7 @@ static int start_this_handle(journal_t *journal, handle_t *handle, read_unlock(&journal->j_state_lock); current->journal_info = handle; - rwsem_acquire_read(&journal->j_trans_commit_map, 0, 0, _THIS_IP_); + rwsem_acquire_read(&journal->j_trans_commit_map, 0, 1, _THIS_IP_); jbd2_journal_free_transaction(new_transaction); /* * Ensure that no allocations done while the transaction is open are