From patchwork Mon Mar 20 14:43:37 2023
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 13181392
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Daniel Vetter, freedreno@lists.freedesktop.org,
 linux-arm-msm@vger.kernel.org, Rob Clark, MyungJoo Ham, Kyungmin Park,
 Chanwoo Choi, linux-pm@vger.kernel.org (open list:DEVICE FREQUENCY (DEVFREQ)),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 15/23] PM / devfreq: Drop unneeded locking to appease lockdep
Date: Mon, 20 Mar 2023 07:43:37 -0700
Message-Id: <20230320144356.803762-16-robdclark@gmail.com>
In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com>
References: <20230320144356.803762-1-robdclark@gmail.com>

From: Rob Clark

In the process of adding lockdep annotation for the GPU job_run() path,
to catch potential deadlocks against the shrinker/reclaim path, I turned
up this lockdep splat:

======================================================
WARNING: possible circular locking dependency detected
6.2.0-rc8-debug+ #556 Not tainted
------------------------------------------------------
ring0/123 is trying to acquire lock:
ffffff8087219078 (&devfreq->lock){+.+.}-{3:3}, at: devfreq_monitor_resume+0x3c/0xf0

but task is already holding lock:
ffffffd6f64e57e8 (dma_fence_map){++++}-{0:0}, at: msm_job_run+0x68/0x150

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #3 (dma_fence_map){++++}-{0:0}:
       __dma_fence_might_wait+0x74/0xc0
       dma_resv_lockdep+0x1f4/0x2f4
       do_one_initcall+0x104/0x2bc
       kernel_init_freeable+0x344/0x34c
       kernel_init+0x30/0x134
       ret_from_fork+0x10/0x20

-> #2 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}:
       fs_reclaim_acquire+0x80/0xa8
       slab_pre_alloc_hook.constprop.0+0x40/0x25c
       __kmem_cache_alloc_node+0x60/0x1cc
       __kmalloc+0xd8/0x100
       topology_parse_cpu_capacity+0x8c/0x178
       get_cpu_for_node+0x88/0xc4
       parse_cluster+0x1b0/0x28c
       parse_cluster+0x8c/0x28c
       init_cpu_topology+0x168/0x188
       smp_prepare_cpus+0x24/0xf8
       kernel_init_freeable+0x18c/0x34c
       kernel_init+0x30/0x134
       ret_from_fork+0x10/0x20

-> #1 (fs_reclaim){+.+.}-{0:0}:
       __fs_reclaim_acquire+0x3c/0x48
       fs_reclaim_acquire+0x54/0xa8
       slab_pre_alloc_hook.constprop.0+0x40/0x25c
       __kmem_cache_alloc_node+0x60/0x1cc
       __kmalloc_node_track_caller+0xb8/0xe0
       kstrdup+0x70/0x90
       kstrdup_const+0x38/0x48
       kvasprintf_const+0x48/0xbc
       kobject_set_name_vargs+0x40/0xb0
       dev_set_name+0x64/0x8c
       devfreq_add_device+0x31c/0x55c
       devm_devfreq_add_device+0x6c/0xb8
       msm_devfreq_init+0xa8/0x16c
       msm_gpu_init+0x38c/0x570
       adreno_gpu_init+0x1b4/0x2b4
       a6xx_gpu_init+0x15c/0x3e4
       adreno_bind+0x218/0x254
       component_bind_all+0x114/0x1ec
       msm_drm_bind+0x2b8/0x608
       try_to_bring_up_aggregate_device+0x88/0x1a4
       __component_add+0xec/0x13c
       component_add+0x1c/0x28
       dsi_dev_attach+0x28/0x34
       dsi_host_attach+0xdc/0x124
       mipi_dsi_attach+0x30/0x44
       devm_mipi_dsi_attach+0x2c/0x70
       ti_sn_bridge_probe+0x298/0x2c4
       auxiliary_bus_probe+0x7c/0x94
       really_probe+0x158/0x290
       __driver_probe_device+0xc8/0xe0
       driver_probe_device+0x44/0x100
       __device_attach_driver+0x64/0xdc
       bus_for_each_drv+0xa0/0xc8
       __device_attach+0xd8/0x168
       device_initial_probe+0x1c/0x28
       bus_probe_device+0x38/0xa0
       deferred_probe_work_func+0xc8/0xe0
       process_one_work+0x2d8/0x478
       process_scheduled_works+0x4c/0x50
       worker_thread+0x218/0x274
       kthread+0xf0/0x100
       ret_from_fork+0x10/0x20

-> #0 (&devfreq->lock){+.+.}-{3:3}:
       __lock_acquire+0xe00/0x1060
       lock_acquire+0x1e0/0x2f8
       __mutex_lock+0xcc/0x3c8
       mutex_lock_nested+0x30/0x44
       devfreq_monitor_resume+0x3c/0xf0
       devfreq_simple_ondemand_handler+0x54/0x7c
       devfreq_resume_device+0xa4/0xe8
       msm_devfreq_resume+0x78/0xa8
       a6xx_pm_resume+0x110/0x234
       adreno_runtime_resume+0x2c/0x38
       pm_generic_runtime_resume+0x30/0x44
       __rpm_callback+0x15c/0x174
       rpm_callback+0x78/0x7c
       rpm_resume+0x318/0x524
       __pm_runtime_resume+0x78/0xbc
       pm_runtime_get_sync.isra.0+0x14/0x20
       msm_gpu_submit+0x58/0x178
       msm_job_run+0x78/0x150
       drm_sched_main+0x290/0x370
       kthread+0xf0/0x100
       ret_from_fork+0x10/0x20

other info that might help us debug this:
Chain exists of:
  &devfreq->lock --> mmu_notifier_invalidate_range_start --> dma_fence_map

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(dma_fence_map);
                               lock(mmu_notifier_invalidate_range_start);
                               lock(dma_fence_map);
  lock(&devfreq->lock);

 *** DEADLOCK ***

2 locks held by ring0/123:
 #0: ffffff8087201170 (&gpu->lock){+.+.}-{3:3}, at: msm_job_run+0x64/0x150
 #1: ffffffd6f64e57e8 (dma_fence_map){++++}-{0:0}, at: msm_job_run+0x68/0x150

stack backtrace:
CPU: 6 PID: 123 Comm: ring0 Not tainted 6.2.0-rc8-debug+ #556
Hardware name: Google Lazor (rev1 - 2) with LTE (DT)
Call trace:
 dump_backtrace.part.0+0xb4/0xf8
 show_stack+0x20/0x38
 dump_stack_lvl+0x9c/0xd0
 dump_stack+0x18/0x34
 print_circular_bug+0x1b4/0x1f0
 check_noncircular+0x78/0xac
 __lock_acquire+0xe00/0x1060
 lock_acquire+0x1e0/0x2f8
 __mutex_lock+0xcc/0x3c8
 mutex_lock_nested+0x30/0x44
 devfreq_monitor_resume+0x3c/0xf0
 devfreq_simple_ondemand_handler+0x54/0x7c
 devfreq_resume_device+0xa4/0xe8
 msm_devfreq_resume+0x78/0xa8
 a6xx_pm_resume+0x110/0x234
 adreno_runtime_resume+0x2c/0x38
 pm_generic_runtime_resume+0x30/0x44
 __rpm_callback+0x15c/0x174
 rpm_callback+0x78/0x7c
 rpm_resume+0x318/0x524
 __pm_runtime_resume+0x78/0xbc
 pm_runtime_get_sync.isra.0+0x14/0x20
 msm_gpu_submit+0x58/0x178
 msm_job_run+0x78/0x150
 drm_sched_main+0x290/0x370
 kthread+0xf0/0x100
 ret_from_fork+0x10/0x20

The issue is that we cannot hold any lock that is also needed in the
job_run() path (which, in the case of devfreq, means in runpm_resume())
while doing memory allocations, because lockdep sees this as a potential
dependency.

Fortunately there is really no reason to hold the devfreq lock while we
are creating the devfreq device, as it is not yet visible to any other
task.  The only reason the lock was taken there was a lockdep assert in
devfreq_get_freq_range().  Instead, split that helper into an internal
function, used by devfreq_add_device() (where the lock is not required),
and a locked wrapper that keeps the assert.

Signed-off-by: Rob Clark
---
 drivers/devfreq/devfreq.c | 46 ++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 25 deletions(-)

diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
index 817c71da391a..11b774048bd2 100644
--- a/drivers/devfreq/devfreq.c
+++ b/drivers/devfreq/devfreq.c
@@ -111,23 +111,13 @@ static unsigned long find_available_max_freq(struct devfreq *devfreq)
         return max_freq;
 }

-/**
- * devfreq_get_freq_range() - Get the current freq range
- * @devfreq:    the devfreq instance
- * @min_freq:   the min frequency
- * @max_freq:   the max frequency
- *
- * This takes into consideration all constraints.
- */
-void devfreq_get_freq_range(struct devfreq *devfreq,
-                            unsigned long *min_freq,
-                            unsigned long *max_freq)
+static void __get_freq_range(struct devfreq *devfreq,
+                             unsigned long *min_freq,
+                             unsigned long *max_freq)
 {
         unsigned long *freq_table = devfreq->freq_table;
         s32 qos_min_freq, qos_max_freq;

-        lockdep_assert_held(&devfreq->lock);
-
         /*
          * Initialize minimum/maximum frequency from freq table.
          * The devfreq drivers can initialize this in either ascending or
@@ -158,6 +148,23 @@ void devfreq_get_freq_range(struct devfreq *devfreq,
         if (*min_freq > *max_freq)
                 *min_freq = *max_freq;
 }
+
+/**
+ * devfreq_get_freq_range() - Get the current freq range
+ * @devfreq:    the devfreq instance
+ * @min_freq:   the min frequency
+ * @max_freq:   the max frequency
+ *
+ * This takes into consideration all constraints.
+ */
+void devfreq_get_freq_range(struct devfreq *devfreq,
+                            unsigned long *min_freq,
+                            unsigned long *max_freq)
+{
+        lockdep_assert_held(&devfreq->lock);
+
+        __get_freq_range(devfreq, min_freq, max_freq);
+}
 EXPORT_SYMBOL(devfreq_get_freq_range);

 /**
@@ -810,7 +817,6 @@ struct devfreq *devfreq_add_device(struct device *dev,
         }

         mutex_init(&devfreq->lock);
-        mutex_lock(&devfreq->lock);
         devfreq->dev.parent = dev;
         devfreq->dev.class = devfreq_class;
         devfreq->dev.release = devfreq_dev_release;
@@ -823,17 +829,14 @@ struct devfreq *devfreq_add_device(struct device *dev,

         if (devfreq->profile->timer < 0 ||
             devfreq->profile->timer >= DEVFREQ_TIMER_NUM) {
-                mutex_unlock(&devfreq->lock);
                 err = -EINVAL;
                 goto err_dev;
         }

         if (!devfreq->profile->max_state || !devfreq->profile->freq_table) {
-                mutex_unlock(&devfreq->lock);
                 err = set_freq_table(devfreq);
                 if (err < 0)
                         goto err_dev;
-                mutex_lock(&devfreq->lock);
         } else {
                 devfreq->freq_table = devfreq->profile->freq_table;
                 devfreq->max_state = devfreq->profile->max_state;
@@ -841,19 +844,17 @@ struct devfreq *devfreq_add_device(struct device *dev,

         devfreq->scaling_min_freq = find_available_min_freq(devfreq);
         if (!devfreq->scaling_min_freq) {
-                mutex_unlock(&devfreq->lock);
                 err = -EINVAL;
                 goto err_dev;
         }

         devfreq->scaling_max_freq = find_available_max_freq(devfreq);
         if (!devfreq->scaling_max_freq) {
-                mutex_unlock(&devfreq->lock);
                 err = -EINVAL;
                 goto err_dev;
         }

-        devfreq_get_freq_range(devfreq, &min_freq, &max_freq);
+        __get_freq_range(devfreq, &min_freq, &max_freq);

         devfreq->suspend_freq = dev_pm_opp_get_suspend_opp_freq(dev);
         devfreq->opp_table = dev_pm_opp_get_opp_table(dev);
@@ -865,7 +866,6 @@ struct devfreq *devfreq_add_device(struct device *dev,
         dev_set_name(&devfreq->dev, "%s", dev_name(dev));
         err = device_register(&devfreq->dev);
         if (err) {
-                mutex_unlock(&devfreq->lock);
                 put_device(&devfreq->dev);
                 goto err_out;
         }
@@ -876,7 +876,6 @@ struct devfreq *devfreq_add_device(struct device *dev,
                                         devfreq->max_state),
                                         GFP_KERNEL);
         if (!devfreq->stats.trans_table) {
-                mutex_unlock(&devfreq->lock);
                 err = -ENOMEM;
                 goto err_devfreq;
         }
@@ -886,7 +885,6 @@ struct devfreq *devfreq_add_device(struct device *dev,
                                 sizeof(*devfreq->stats.time_in_state),
                                 GFP_KERNEL);
         if (!devfreq->stats.time_in_state) {
-                mutex_unlock(&devfreq->lock);
                 err = -ENOMEM;
                 goto err_devfreq;
         }
@@ -896,8 +894,6 @@ struct devfreq *devfreq_add_device(struct device *dev,

         srcu_init_notifier_head(&devfreq->transition_notifier_list);

-        mutex_unlock(&devfreq->lock);
-
         err = dev_pm_qos_add_request(dev, &devfreq->user_min_freq_req,
                                      DEV_PM_QOS_MIN_FREQUENCY, 0);
         if (err < 0)
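The shape of this fix is a reusable pattern: keep a lockdep-asserting
public entry point for external callers, and give init paths (where the
object is not yet published) an unlocked internal helper. A minimal
sketch of the pattern, using hypothetical foo names rather than the
devfreq code itself:

#include <linux/lockdep.h>
#include <linux/mutex.h>

struct foo {
        struct mutex lock;
        unsigned long min, max;
};

/*
 * Internal helper: no locking assumptions, safe to use during init
 * while the object is not yet visible to any other task.
 */
static void __foo_get_range(struct foo *f, unsigned long *min,
                            unsigned long *max)
{
        *min = f->min;
        *max = f->max;
}

/* Public API: callers must hold f->lock, and lockdep enforces it. */
void foo_get_range(struct foo *f, unsigned long *min, unsigned long *max)
{
        lockdep_assert_held(&f->lock);
        __foo_get_range(f, min, max);
}

The design choice is that correctness does not change: every path that
could race still takes the lock, while the one path that provably cannot
race (the constructor) no longer creates a lock dependency that lockdep
would chain against reclaim.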
From patchwork Mon Mar 20 14:43:38 2023
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 13181393
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Daniel Vetter, freedreno@lists.freedesktop.org,
 linux-arm-msm@vger.kernel.org, Rob Clark, MyungJoo Ham, Kyungmin Park,
 Chanwoo Choi, linux-pm@vger.kernel.org (open list:DEVICE FREQUENCY (DEVFREQ)),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 16/23] PM / devfreq: Teach lockdep about locking order
Date: Mon, 20 Mar 2023 07:43:38 -0700
Message-Id: <20230320144356.803762-17-robdclark@gmail.com>
In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com>
References: <20230320144356.803762-1-robdclark@gmail.com>

From: Rob Clark

This will make it easier to catch places doing allocations that can
trigger reclaim under devfreq->lock.

Because devfreq->lock is held over various devfreq_dev_profile
callbacks, there might be some fallout if those callbacks do allocations
that can trigger reclaim, but I've looked through the various callback
implementations and don't see anything obvious.  If it does trigger any
lockdep splats, those should be fixed.
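The annotation idiom this patch (and the later ones in the series) relies
on is worth spelling out. fs_reclaim_acquire()/fs_reclaim_release() make
lockdep believe the current task is inside reclaim, and might_lock()
records that the given lock may be taken in that context; from then on,
any GFP_KERNEL allocation made while holding that lock is reported as a
potential cycle at annotation time rather than deadlocking rarely at
runtime. A generic sketch, with a hypothetical my_lock standing in for
devfreq->lock:

#include <linux/gfp.h>
#include <linux/lockdep.h>
#include <linux/mutex.h>
#include <linux/sched/mm.h>     /* fs_reclaim_acquire/release */

static DEFINE_MUTEX(my_lock);

static void prime_reclaim_ordering(void)
{
        /*
         * Pretend to enter reclaim, then tell lockdep that my_lock can
         * be taken in that context.  This establishes the dependency
         * fs_reclaim -> my_lock once, deterministically, so the inverse
         * ordering (allocating under my_lock) is flagged immediately.
         */
        fs_reclaim_acquire(GFP_KERNEL);
        might_lock(&my_lock);
        fs_reclaim_release(GFP_KERNEL);
}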
Signed-off-by: Rob Clark
---
 drivers/devfreq/devfreq.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
index 11b774048bd2..5ce3bf9b59e7 100644
--- a/drivers/devfreq/devfreq.c
+++ b/drivers/devfreq/devfreq.c
@@ -817,6 +817,12 @@ struct devfreq *devfreq_add_device(struct device *dev,
         }

         mutex_init(&devfreq->lock);
+
+        /* Teach lockdep about lock ordering wrt. shrinker: */
+        fs_reclaim_acquire(GFP_KERNEL);
+        might_lock(&devfreq->lock);
+        fs_reclaim_release(GFP_KERNEL);
+
         devfreq->dev.parent = dev;
         devfreq->dev.class = devfreq_class;
         devfreq->dev.release = devfreq_dev_release;

From patchwork Mon Mar 20 14:43:39 2023
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 13181395
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Daniel Vetter, freedreno@lists.freedesktop.org,
 linux-arm-msm@vger.kernel.org, Rob Clark, "Rafael J. Wysocki",
 Pavel Machek, Len Brown, Greg Kroah-Hartman,
 linux-pm@vger.kernel.org (open list:HIBERNATION (aka Software Suspend, aka swsusp)),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 17/23] PM / QoS: Fix constraints alloc vs reclaim locking
Date: Mon, 20 Mar 2023 07:43:39 -0700
Message-Id: <20230320144356.803762-18-robdclark@gmail.com>
In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com>
References: <20230320144356.803762-1-robdclark@gmail.com>

From: Rob Clark

In the process of adding lockdep annotation for the drm GPU scheduler's
job_run() path, to detect potential deadlocks against shrinker/reclaim,
I hit this lockdep splat:

======================================================
WARNING: possible circular locking dependency detected
6.2.0-rc8-debug+ #558 Tainted: G        W
------------------------------------------------------
ring0/125 is trying to acquire lock:
ffffffd6d6ce0f28 (dev_pm_qos_mtx){+.+.}-{3:3}, at: dev_pm_qos_update_request+0x38/0x68

but task is already holding lock:
ffffff8087239208 (&gpu->active_lock){+.+.}-{3:3}, at: msm_gpu_submit+0xec/0x178

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #4 (&gpu->active_lock){+.+.}-{3:3}:
       __mutex_lock+0xcc/0x3c8
       mutex_lock_nested+0x30/0x44
       msm_gpu_submit+0xec/0x178
       msm_job_run+0x78/0x150
       drm_sched_main+0x290/0x370
       kthread+0xf0/0x100
       ret_from_fork+0x10/0x20

-> #3 (dma_fence_map){++++}-{0:0}:
       __dma_fence_might_wait+0x74/0xc0
       dma_resv_lockdep+0x1f4/0x2f4
       do_one_initcall+0x104/0x2bc
       kernel_init_freeable+0x344/0x34c
       kernel_init+0x30/0x134
       ret_from_fork+0x10/0x20

-> #2 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}:
       fs_reclaim_acquire+0x80/0xa8
       slab_pre_alloc_hook.constprop.0+0x40/0x25c
       __kmem_cache_alloc_node+0x60/0x1cc
       __kmalloc+0xd8/0x100
       topology_parse_cpu_capacity+0x8c/0x178
       get_cpu_for_node+0x88/0xc4
       parse_cluster+0x1b0/0x28c
       parse_cluster+0x8c/0x28c
       init_cpu_topology+0x168/0x188
       smp_prepare_cpus+0x24/0xf8
       kernel_init_freeable+0x18c/0x34c
       kernel_init+0x30/0x134
       ret_from_fork+0x10/0x20

-> #1 (fs_reclaim){+.+.}-{0:0}:
       __fs_reclaim_acquire+0x3c/0x48
       fs_reclaim_acquire+0x54/0xa8
       slab_pre_alloc_hook.constprop.0+0x40/0x25c
       __kmem_cache_alloc_node+0x60/0x1cc
       kmalloc_trace+0x50/0xa8
       dev_pm_qos_constraints_allocate+0x38/0x100
       __dev_pm_qos_add_request+0xb0/0x1e8
       dev_pm_qos_add_request+0x58/0x80
       dev_pm_qos_expose_latency_limit+0x60/0x13c
       register_cpu+0x12c/0x130
       topology_init+0xac/0xbc
       do_one_initcall+0x104/0x2bc
       kernel_init_freeable+0x344/0x34c
       kernel_init+0x30/0x134
       ret_from_fork+0x10/0x20

-> #0 (dev_pm_qos_mtx){+.+.}-{3:3}:
       __lock_acquire+0xe00/0x1060
       lock_acquire+0x1e0/0x2f8
       __mutex_lock+0xcc/0x3c8
       mutex_lock_nested+0x30/0x44
       dev_pm_qos_update_request+0x38/0x68
       msm_devfreq_boost+0x40/0x70
       msm_devfreq_active+0xc0/0xf0
       msm_gpu_submit+0x10c/0x178
       msm_job_run+0x78/0x150
       drm_sched_main+0x290/0x370
       kthread+0xf0/0x100
       ret_from_fork+0x10/0x20

other info that might help us debug this:

Chain exists of:
  dev_pm_qos_mtx --> dma_fence_map --> &gpu->active_lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&gpu->active_lock);
                               lock(dma_fence_map);
                               lock(&gpu->active_lock);
  lock(dev_pm_qos_mtx);

 *** DEADLOCK ***

3 locks held by ring0/123:
 #0: ffffff8087251170 (&gpu->lock){+.+.}-{3:3}, at: msm_job_run+0x64/0x150
 #1: ffffffd00b0e57e8 (dma_fence_map){++++}-{0:0}, at: msm_job_run+0x68/0x150
 #2: ffffff8087251208 (&gpu->active_lock){+.+.}-{3:3}, at: msm_gpu_submit+0xec/0x178

stack backtrace:
CPU: 6 PID: 123 Comm: ring0 Not tainted 6.2.0-rc8-debug+ #559
Hardware name: Google Lazor (rev1 - 2) with LTE (DT)
Call trace:
 dump_backtrace.part.0+0xb4/0xf8
 show_stack+0x20/0x38
 dump_stack_lvl+0x9c/0xd0
 dump_stack+0x18/0x34
 print_circular_bug+0x1b4/0x1f0
 check_noncircular+0x78/0xac
 __lock_acquire+0xe00/0x1060
 lock_acquire+0x1e0/0x2f8
 __mutex_lock+0xcc/0x3c8
 mutex_lock_nested+0x30/0x44
 dev_pm_qos_update_request+0x38/0x68
 msm_devfreq_boost+0x40/0x70
 msm_devfreq_active+0xc0/0xf0
 msm_gpu_submit+0x10c/0x178
 msm_job_run+0x78/0x150
 drm_sched_main+0x290/0x370
 kthread+0xf0/0x100
 ret_from_fork+0x10/0x20

The issue is that dev_pm_qos_mtx is held in the runpm suspend/resume (or
freq change) path, but it is also held across allocations that could
recurse into the shrinker.

Solve this by changing dev_pm_qos_constraints_allocate() into a function
that can be called unconditionally before the device's qos object is
needed and before acquiring dev_pm_qos_mtx.  This way the allocations can
be done without holding the mutex.  In the case that we raced with
another thread to allocate the qos object, detect this *after* acquiring
dev_pm_qos_mtx and simply free the redundant allocations.

Signed-off-by: Rob Clark
---
 drivers/base/power/qos.c | 60 +++++++++++++++++++++++++++-------------
 1 file changed, 41 insertions(+), 19 deletions(-)

diff --git a/drivers/base/power/qos.c b/drivers/base/power/qos.c
index 8e93167f1783..f3e0c6b65635 100644
--- a/drivers/base/power/qos.c
+++ b/drivers/base/power/qos.c
@@ -185,18 +185,24 @@ static int apply_constraint(struct dev_pm_qos_request *req,
 }

 /*
- * dev_pm_qos_constraints_allocate
+ * dev_pm_qos_constraints_ensure_allocated
  * @dev: device to allocate data for
  *
- * Called at the first call to add_request, for constraint data allocation
- * Must be called with the dev_pm_qos_mtx mutex held
+ * Called to ensure that devices qos is allocated, before acquiring
+ * dev_pm_qos_mtx.
  */
-static int dev_pm_qos_constraints_allocate(struct device *dev)
+static int dev_pm_qos_constraints_ensure_allocated(struct device *dev)
 {
         struct dev_pm_qos *qos;
         struct pm_qos_constraints *c;
         struct blocking_notifier_head *n;

+        if (!dev)
+                return -ENODEV;
+
+        if (!IS_ERR_OR_NULL(dev->power.qos))
+                return 0;
+
         qos = kzalloc(sizeof(*qos), GFP_KERNEL);
         if (!qos)
                 return -ENOMEM;
@@ -227,10 +233,26 @@ static int dev_pm_qos_constraints_allocate(struct device *dev)

         INIT_LIST_HEAD(&qos->flags.list);

+        mutex_lock(&dev_pm_qos_mtx);
+
+        if (!IS_ERR_OR_NULL(dev->power.qos)) {
+                /*
+                 * We have raced with another task to create the qos.
+                 * No biggie, just free the resources we've allocated
+                 * outside of dev_pm_qos_mtx and move on with life.
+                 */
+                kfree(n);
+                kfree(qos);
+                goto unlock;
+        }
+
         spin_lock_irq(&dev->power.lock);
         dev->power.qos = qos;
         spin_unlock_irq(&dev->power.lock);

+unlock:
+        mutex_unlock(&dev_pm_qos_mtx);
+
         return 0;
 }

@@ -331,17 +353,15 @@ static int __dev_pm_qos_add_request(struct device *dev,
 {
         int ret = 0;

-        if (!dev || !req || dev_pm_qos_invalid_req_type(dev, type))
+        if (!req || dev_pm_qos_invalid_req_type(dev, type))
                 return -EINVAL;

         if (WARN(dev_pm_qos_request_active(req),
                  "%s() called for already added request\n", __func__))
                 return -EINVAL;

-        if (IS_ERR(dev->power.qos))
+        if (IS_ERR_OR_NULL(dev->power.qos))
                 ret = -ENODEV;
-        else if (!dev->power.qos)
-                ret = dev_pm_qos_constraints_allocate(dev);

         trace_dev_pm_qos_add_request(dev_name(dev), type, value);
         if (ret)
@@ -390,6 +410,10 @@ int dev_pm_qos_add_request(struct device *dev, struct dev_pm_qos_request *req,
 {
         int ret;

+        ret = dev_pm_qos_constraints_ensure_allocated(dev);
+        if (ret)
+                return ret;
+
         mutex_lock(&dev_pm_qos_mtx);
         ret = __dev_pm_qos_add_request(dev, req, type, value);
         mutex_unlock(&dev_pm_qos_mtx);
@@ -537,15 +561,11 @@ int dev_pm_qos_add_notifier(struct device *dev, struct notifier_block *notifier,
 {
         int ret = 0;

-        mutex_lock(&dev_pm_qos_mtx);
-
-        if (IS_ERR(dev->power.qos))
-                ret = -ENODEV;
-        else if (!dev->power.qos)
-                ret = dev_pm_qos_constraints_allocate(dev);
-
+        ret = dev_pm_qos_constraints_ensure_allocated(dev);
         if (ret)
-                goto unlock;
+                return ret;
+
+        mutex_lock(&dev_pm_qos_mtx);

         switch (type) {
         case DEV_PM_QOS_RESUME_LATENCY:
@@ -565,7 +585,6 @@ int dev_pm_qos_add_notifier(struct device *dev, struct notifier_block *notifier,
                 ret = -EINVAL;
         }

-unlock:
         mutex_unlock(&dev_pm_qos_mtx);
         return ret;
 }
@@ -905,10 +924,13 @@ int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
 {
         int ret;

+        ret = dev_pm_qos_constraints_ensure_allocated(dev);
+        if (ret)
+                return ret;
+
         mutex_lock(&dev_pm_qos_mtx);

-        if (IS_ERR_OR_NULL(dev->power.qos)
-            || !dev->power.qos->latency_tolerance_req) {
+        if (!dev->power.qos->latency_tolerance_req) {
                 struct dev_pm_qos_request *req;

                 if (val < 0) {
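Stripped of the PM QoS specifics, the fix follows a standard pattern:
allocate optimistically before taking the mutex, then re-check under the
mutex whether another task won the race, and free the loser's
allocation. A hedged sketch of the pattern, with hypothetical obj and
big_mtx names rather than the qos.c types:

#include <linux/mutex.h>
#include <linux/slab.h>

struct obj {
        int state;
};

struct holder {
        struct obj *obj;        /* published pointer, protected by big_mtx */
};

static DEFINE_MUTEX(big_mtx);

static int ensure_obj_allocated(struct holder *h)
{
        struct obj *o;

        if (h->obj)             /* fast path: already allocated */
                return 0;

        /*
         * Allocate before taking big_mtx, so that reclaim triggered by
         * this allocation can never recurse into a path that also needs
         * big_mtx.
         */
        o = kzalloc(sizeof(*o), GFP_KERNEL);
        if (!o)
                return -ENOMEM;

        mutex_lock(&big_mtx);
        if (h->obj)
                kfree(o);       /* lost the race: drop the redundant copy */
        else
                h->obj = o;
        mutex_unlock(&big_mtx);

        return 0;
}

The redundant allocation on a lost race is deliberately accepted as
cheap; the win is that the mutex is never held across an allocation.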
From patchwork Mon Mar 20 14:43:40 2023
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 13181396
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Daniel Vetter, freedreno@lists.freedesktop.org,
 linux-arm-msm@vger.kernel.org, Rob Clark, "Rafael J. Wysocki", Len Brown,
 Pavel Machek, Greg Kroah-Hartman,
 linux-pm@vger.kernel.org (open list:POWER MANAGEMENT CORE),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 18/23] PM / QoS: Decouple request alloc from dev_pm_qos_mtx
Date: Mon, 20 Mar 2023 07:43:40 -0700
Message-Id: <20230320144356.803762-19-robdclark@gmail.com>
In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com>
References: <20230320144356.803762-1-robdclark@gmail.com>

From: Rob Clark

Similar to the previous patch, move the allocation out from under
dev_pm_qos_mtx by doing it speculatively, and handling any race after
acquiring dev_pm_qos_mtx by freeing the redundant allocation.
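The wrinkle relative to the previous patch is that the unlocked
pre-check is only a hint, so the locked section has to re-check and
either consume the allocation or discard it. A small sketch of that
shape, with hypothetical names (and relying on kfree(NULL) being a
no-op):

#include <linux/mutex.h>
#include <linux/slab.h>

struct req {
        long val;
};

static DEFINE_MUTEX(mtx);
static struct req *cur_req;     /* protected by mtx */

static int set_value(long val)
{
        struct req *req = NULL;
        int ret = 0;

        if (!cur_req)   /* unlocked peek: may be stale by the time we lock */
                req = kzalloc(sizeof(*req), GFP_KERNEL);

        mutex_lock(&mtx);
        if (!cur_req) {
                if (!req) {
                        ret = -ENOMEM;  /* allocation failed (or was skipped) */
                        goto out;
                }
                req->val = val;
                cur_req = req;
                req = NULL;     /* ownership transferred */
        } else {
                cur_req->val = val;
        }
out:
        mutex_unlock(&mtx);
        kfree(req);     /* frees the loser's copy; no-op if consumed */
        return ret;
}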
Signed-off-by: Rob Clark
---
 drivers/base/power/qos.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/drivers/base/power/qos.c b/drivers/base/power/qos.c
index f3e0c6b65635..9cba334b3729 100644
--- a/drivers/base/power/qos.c
+++ b/drivers/base/power/qos.c
@@ -922,12 +922,16 @@ s32 dev_pm_qos_get_user_latency_tolerance(struct device *dev)
  */
 int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
 {
+        struct dev_pm_qos_request *req = NULL;
         int ret;

         ret = dev_pm_qos_constraints_ensure_allocated(dev);
         if (ret)
                 return ret;

+        if (!dev->power.qos->latency_tolerance_req)
+                req = kzalloc(sizeof(*req), GFP_KERNEL);
+
         mutex_lock(&dev_pm_qos_mtx);

         if (!dev->power.qos->latency_tolerance_req) {
@@ -940,7 +944,6 @@ int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
                         ret = -EINVAL;
                         goto out;
                 }
-                req = kzalloc(sizeof(*req), GFP_KERNEL);
                 if (!req) {
                         ret = -ENOMEM;
                         goto out;
@@ -952,6 +955,13 @@ int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
                 }
                 dev->power.qos->latency_tolerance_req = req;
         } else {
+                /*
+                 * If we raced with another thread to allocate the request,
+                 * simply free the redundant allocation and move on.
+                 */
+                if (req)
+                        kfree(req);
+
                 if (val < 0) {
                         __dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY_TOLERANCE);
                         ret = 0;

From patchwork Mon Mar 20 14:43:41 2023
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 13181394
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Daniel Vetter, freedreno@lists.freedesktop.org,
 linux-arm-msm@vger.kernel.org, Rob Clark, "Rafael J. Wysocki",
 Pavel Machek, Len Brown, Greg Kroah-Hartman,
 linux-pm@vger.kernel.org (open list:HIBERNATION (aka Software Suspend, aka swsusp)),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 19/23] PM / QoS: Teach lockdep about dev_pm_qos_mtx locking order
Date: Mon, 20 Mar 2023 07:43:41 -0700
Message-Id: <20230320144356.803762-20-robdclark@gmail.com>
In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com>
References: <20230320144356.803762-1-robdclark@gmail.com>

From: Rob Clark

Annotate dev_pm_qos_mtx to teach lockdep to scream about allocations
that could trigger reclaim under dev_pm_qos_mtx.

Signed-off-by: Rob Clark
---
 drivers/base/power/qos.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/base/power/qos.c b/drivers/base/power/qos.c
index 9cba334b3729..d4addda3944a 100644
--- a/drivers/base/power/qos.c
+++ b/drivers/base/power/qos.c
@@ -1012,3 +1012,14 @@ void dev_pm_qos_hide_latency_tolerance(struct device *dev)
         pm_runtime_put(dev);
 }
 EXPORT_SYMBOL_GPL(dev_pm_qos_hide_latency_tolerance);
+
+static int __init dev_pm_qos_init(void)
+{
+        /* Teach lockdep about lock ordering wrt. shrinker: */
+        fs_reclaim_acquire(GFP_KERNEL);
+        might_lock(&dev_pm_qos_mtx);
+        fs_reclaim_release(GFP_KERNEL);
+
+        return 0;
+}
+early_initcall(dev_pm_qos_init);
From patchwork Mon Mar 20 14:43:43 2023
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 13181398
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Daniel Vetter, freedreno@lists.freedesktop.org,
 linux-arm-msm@vger.kernel.org, Rob Clark, Georgi Djakov,
 linux-pm@vger.kernel.org (open list:INTERCONNECT API),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 21/23] interconnect: Fix locking for runpm vs reclaim
Date: Mon, 20 Mar 2023 07:43:43 -0700
Message-Id: <20230320144356.803762-22-robdclark@gmail.com>
In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com>
References: <20230320144356.803762-1-robdclark@gmail.com>

From: Rob Clark

For cases where icc_set_bw() can be called in call paths that could
deadlock against shrinker/reclaim, such as runpm resume, we need to
decouple the icc locking.  Introduce a new icc_bw_lock for cases where
we need to serialize bw aggregation and update, to decouple that from
paths that require memory allocation such as node/link creation/
destruction.

Fixes this lockdep splat:

======================================================
WARNING: possible circular locking dependency detected
6.2.0-rc8-debug+ #554 Not tainted
------------------------------------------------------
ring0/132 is trying to acquire lock:
ffffff80871916d0 (&gmu->lock){+.+.}-{3:3}, at: a6xx_pm_resume+0xf0/0x234

but task is already holding lock:
ffffffdb5aee57e8 (dma_fence_map){++++}-{0:0}, at: msm_job_run+0x68/0x150

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #4 (dma_fence_map){++++}-{0:0}:
       __dma_fence_might_wait+0x74/0xc0
       dma_resv_lockdep+0x1f4/0x2f4
       do_one_initcall+0x104/0x2bc
       kernel_init_freeable+0x344/0x34c
       kernel_init+0x30/0x134
       ret_from_fork+0x10/0x20

-> #3 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}:
       fs_reclaim_acquire+0x80/0xa8
       slab_pre_alloc_hook.constprop.0+0x40/0x25c
       __kmem_cache_alloc_node+0x60/0x1cc
       __kmalloc+0xd8/0x100
       topology_parse_cpu_capacity+0x8c/0x178
       get_cpu_for_node+0x88/0xc4
       parse_cluster+0x1b0/0x28c
       parse_cluster+0x8c/0x28c
       init_cpu_topology+0x168/0x188
       smp_prepare_cpus+0x24/0xf8
       kernel_init_freeable+0x18c/0x34c
       kernel_init+0x30/0x134
       ret_from_fork+0x10/0x20

-> #2 (fs_reclaim){+.+.}-{0:0}:
       __fs_reclaim_acquire+0x3c/0x48
       fs_reclaim_acquire+0x54/0xa8
       slab_pre_alloc_hook.constprop.0+0x40/0x25c
       __kmem_cache_alloc_node+0x60/0x1cc
       __kmalloc+0xd8/0x100
       kzalloc.constprop.0+0x14/0x20
       icc_node_create_nolock+0x4c/0xc4
       icc_node_create+0x38/0x58
       qcom_icc_rpmh_probe+0x1b8/0x248
       platform_probe+0x70/0xc4
       really_probe+0x158/0x290
       __driver_probe_device+0xc8/0xe0
       driver_probe_device+0x44/0x100
       __driver_attach+0xf8/0x108
       bus_for_each_dev+0x78/0xc4
       driver_attach+0x2c/0x38
       bus_add_driver+0xd0/0x1d8
       driver_register+0xbc/0xf8
       __platform_driver_register+0x30/0x3c
       qnoc_driver_init+0x24/0x30
       do_one_initcall+0x104/0x2bc
       kernel_init_freeable+0x344/0x34c
       kernel_init+0x30/0x134
       ret_from_fork+0x10/0x20

-> #1 (icc_lock){+.+.}-{3:3}:
       __mutex_lock+0xcc/0x3c8
       mutex_lock_nested+0x30/0x44
       icc_set_bw+0x88/0x2b4
       _set_opp_bw+0x8c/0xd8
       _set_opp+0x19c/0x300
       dev_pm_opp_set_opp+0x84/0x94
       a6xx_gmu_resume+0x18c/0x804
       a6xx_pm_resume+0xf8/0x234
       adreno_runtime_resume+0x2c/0x38
       pm_generic_runtime_resume+0x30/0x44
       __rpm_callback+0x15c/0x174
       rpm_callback+0x78/0x7c
       rpm_resume+0x318/0x524
       __pm_runtime_resume+0x78/0xbc
       adreno_load_gpu+0xc4/0x17c
       msm_open+0x50/0x120
       drm_file_alloc+0x17c/0x228
       drm_open_helper+0x74/0x118
       drm_open+0xa0/0x144
       drm_stub_open+0xd4/0xe4
       chrdev_open+0x1b8/0x1e4
       do_dentry_open+0x2f8/0x38c
       vfs_open+0x34/0x40
       path_openat+0x64c/0x7b4
       do_filp_open+0x54/0xc4
       do_sys_openat2+0x9c/0x100
       do_sys_open+0x50/0x7c
       __arm64_sys_openat+0x28/0x34
       invoke_syscall+0x8c/0x128
       el0_svc_common.constprop.0+0xa0/0x11c
       do_el0_svc+0xac/0xbc
       el0_svc+0x48/0xa0
       el0t_64_sync_handler+0xac/0x13c
       el0t_64_sync+0x190/0x194

-> #0 (&gmu->lock){+.+.}-{3:3}:
       __lock_acquire+0xe00/0x1060
       lock_acquire+0x1e0/0x2f8
       __mutex_lock+0xcc/0x3c8
       mutex_lock_nested+0x30/0x44
       a6xx_pm_resume+0xf0/0x234
       adreno_runtime_resume+0x2c/0x38
       pm_generic_runtime_resume+0x30/0x44
       __rpm_callback+0x15c/0x174
       rpm_callback+0x78/0x7c
       rpm_resume+0x318/0x524
       __pm_runtime_resume+0x78/0xbc
       pm_runtime_get_sync.isra.0+0x14/0x20
       msm_gpu_submit+0x58/0x178
       msm_job_run+0x78/0x150
       drm_sched_main+0x290/0x370
       kthread+0xf0/0x100
       ret_from_fork+0x10/0x20

other info that might help us debug this:

Chain exists of:
  &gmu->lock --> mmu_notifier_invalidate_range_start --> dma_fence_map

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(dma_fence_map);
                               lock(mmu_notifier_invalidate_range_start);
                               lock(dma_fence_map);
  lock(&gmu->lock);

 *** DEADLOCK ***

2 locks held by ring0/132:
 #0: ffffff8087191170 (&gpu->lock){+.+.}-{3:3}, at: msm_job_run+0x64/0x150
 #1: ffffffdb5aee57e8 (dma_fence_map){++++}-{0:0}, at: msm_job_run+0x68/0x150

stack backtrace:
CPU: 7 PID: 132 Comm: ring0 Not tainted 6.2.0-rc8-debug+ #554
Hardware name: Google Lazor (rev1 - 2) with LTE (DT)
Call trace:
 dump_backtrace.part.0+0xb4/0xf8
 show_stack+0x20/0x38
 dump_stack_lvl+0x9c/0xd0
 dump_stack+0x18/0x34
 print_circular_bug+0x1b4/0x1f0
 check_noncircular+0x78/0xac
 __lock_acquire+0xe00/0x1060
 lock_acquire+0x1e0/0x2f8
 __mutex_lock+0xcc/0x3c8
 mutex_lock_nested+0x30/0x44
 a6xx_pm_resume+0xf0/0x234
 adreno_runtime_resume+0x2c/0x38
 pm_generic_runtime_resume+0x30/0x44
 __rpm_callback+0x15c/0x174
 rpm_callback+0x78/0x7c
 rpm_resume+0x318/0x524
 __pm_runtime_resume+0x78/0xbc
 pm_runtime_get_sync.isra.0+0x14/0x20
 msm_gpu_submit+0x58/0x178
 msm_job_run+0x78/0x150
 drm_sched_main+0x290/0x370
 kthread+0xf0/0x100
 ret_from_fork+0x10/0x20

Signed-off-by: Rob Clark
---
 drivers/interconnect/core.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
index 25debded65a8..f7251784765f 100644
--- a/drivers/interconnect/core.c
+++ b/drivers/interconnect/core.c
@@ -29,6 +29,7 @@ static LIST_HEAD(icc_providers);
 static int providers_count;
 static bool synced_state;
 static DEFINE_MUTEX(icc_lock);
+static DEFINE_MUTEX(icc_bw_lock);
 static struct dentry *icc_debugfs_dir;

 static void icc_summary_show_one(struct seq_file *s, struct icc_node *n)
@@ -632,7 +633,7 @@ int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw)
         if (WARN_ON(IS_ERR(path) || !path->num_nodes))
                 return -EINVAL;

-        mutex_lock(&icc_lock);
+        mutex_lock(&icc_bw_lock);

         old_avg = path->reqs[0].avg_bw;
         old_peak = path->reqs[0].peak_bw;
@@ -664,7 +665,7 @@ int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw)
                 apply_constraints(path);
         }

-        mutex_unlock(&icc_lock);
+        mutex_unlock(&icc_bw_lock);

         trace_icc_set_bw_end(path, ret);

@@ -963,6 +964,7 @@ void icc_node_add(struct icc_node *node, struct icc_provider *provider)
                 return;

         mutex_lock(&icc_lock);
+        mutex_lock(&icc_bw_lock);

         node->provider = provider;
         list_add_tail(&node->node_list, &provider->nodes);
@@ -988,6 +990,7 @@ void icc_node_add(struct icc_node *node, struct icc_provider *provider)
         node->avg_bw = 0;
         node->peak_bw = 0;

+        mutex_unlock(&icc_bw_lock);
         mutex_unlock(&icc_lock);
 }
 EXPORT_SYMBOL_GPL(icc_node_add);
@@ -1111,6 +1114,7 @@ void icc_sync_state(struct device *dev)
                 return;

         mutex_lock(&icc_lock);
+        mutex_lock(&icc_bw_lock);
         synced_state = true;
         list_for_each_entry(p, &icc_providers, provider_list) {
                 dev_dbg(p->dev, "interconnect provider is in synced state\n");
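The design choice here is a classic lock split: a coarse lock that
protects topology and may be held across allocations, and a finer lock
that only serializes the hot, allocation-free bandwidth path. Topology
changes take both locks in a fixed order, so bandwidth state stays
consistent while the hot path never nests inside an allocating one. A
hedged sketch of the idea, with hypothetical topo_lock/bw_lock names
rather than the interconnect code itself:

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/types.h>

static DEFINE_MUTEX(topo_lock);  /* node/link creation; may allocate */
static DEFINE_MUTEX(bw_lock);    /* bandwidth updates only; never allocates */
static LIST_HEAD(nodes);

struct node {
        struct list_head link;
        u32 bw;
};

/* Hot path (e.g. called during runpm resume): takes only bw_lock. */
static void set_bw(struct node *n, u32 bw)
{
        mutex_lock(&bw_lock);
        n->bw = bw;
        mutex_unlock(&bw_lock);
}

/*
 * Slow path: allocate before taking any lock, then take both locks in
 * a fixed order (topo_lock, then bw_lock) while publishing the node.
 */
static int add_node(u32 bw)
{
        struct node *n = kzalloc(sizeof(*n), GFP_KERNEL);

        if (!n)
                return -ENOMEM;

        mutex_lock(&topo_lock);
        mutex_lock(&bw_lock);
        n->bw = bw;
        list_add_tail(&n->link, &nodes);
        mutex_unlock(&bw_lock);
        mutex_unlock(&topo_lock);

        return 0;
}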
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181397 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0CAE9C7618D for ; Mon, 20 Mar 2023 14:46:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232029AbjCTOqz (ORCPT ); Mon, 20 Mar 2023 10:46:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52012 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231950AbjCTOqS (ORCPT ); Mon, 20 Mar 2023 10:46:18 -0400 Received: from mail-pf1-x429.google.com (mail-pf1-x429.google.com [IPv6:2607:f8b0:4864:20::429]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A47C5FF1F; Mon, 20 Mar 2023 07:45:16 -0700 (PDT) Received: by mail-pf1-x429.google.com with SMTP id fd25so7072120pfb.1; Mon, 20 Mar 2023 07:45:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323516; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ZTj6lDaB879i6vS4ZDrKb4b2c9OODf1NGI0zVIt2OIQ=; b=Bj+oH3CQChclMtHQOjFJOIB+Z7POqoHz3M05DrKcE49Bnf7ohSE3pDsx71PYs3ECvr oTHseaJiODm/MyHSlBMBASGteA5HzjOenIVagUXFeZlt7Q+3YFMZoD7rUejgRISy+aVY 2odaelScwYvdePOWDQYmQtu9g1I2XR+9uN9ba8aIcs04yVVbk2zqIX1fxAN1D/wbDGbE GKnk4Dos7F1ERtbzl7QoDAY8Tv0VU4CBkNuZAfbMf6wRdovcBhxA0rKvl1ZusIAFOTqG Ji6tDLvUkL6ZHoI/zPauDcA1VF/2Q7lzDyCG6Rg7kTodWzwhyU4JJcSU+8FUk5u7Mgzx O7Ag== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323516; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ZTj6lDaB879i6vS4ZDrKb4b2c9OODf1NGI0zVIt2OIQ=; b=fvJT19qqknclywQ8N5iPBs4hvjycnTKC7F5IriZQ581YZxPMea6PxZ/g2thCQJZ4iC cUd/DVF2jQTB3+Bt4xAPX3lYGnBaRX5gpPJYXA6oyEK0iU1OFY2FN2gY6j4fOXtnlQDU 7yj2/w1Qe2k67XRV2sua+WD1NoFK4nAOEaFyy+dNC44kUuoxszekUKIO7rP707kWZUCu 9vmkcwsspFNHKf/c7cVQvTXOB9xxsQWJwRyt5Ts2I2VAgVAjxlc1/abbV6SSGNJOSoUK /r51+/3Eao5L0xO+0lzJP2DhO1GhwYcmqJngWoAzNmBDLDT2mlaybGwVTQTWYkPwgHLz h5Rw== X-Gm-Message-State: AO0yUKUUHwpPEzNxKViOLlcC4vgvNTz2Dz3lPlcgNDBonC8oX7N5hSXN o8RaDMMcT1ekio0oaBr7dufZ6TfKbp4= X-Google-Smtp-Source: AK7set+YZdGvBmz/yJ5ACJx//DaOCsv2sHayWOfLM7STeuIIzbdJFihoX6ELSY7Jku8dzTLjKv9Llg== X-Received: by 2002:a62:7946:0:b0:625:2636:9cd2 with SMTP id u67-20020a627946000000b0062526369cd2mr19429597pfc.18.1679323516022; Mon, 20 Mar 2023 07:45:16 -0700 (PDT) Received: from localhost ([2a00:79e1:abd:4a00:61b:48ed:72ab:435b]) by smtp.gmail.com with ESMTPSA id x48-20020a056a000bf000b005a79596c795sm6428405pfu.29.2023.03.20.07.45.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Mar 2023 07:45:15 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Georgi Djakov , linux-pm@vger.kernel.org (open list:INTERCONNECT API), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 22/23] interconnect: Teach lockdep about icc_bw_lock order Date: Mon, 20 Mar 2023 07:43:44 -0700 Message-Id: <20230320144356.803762-23-robdclark@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com> 
References: <20230320144356.803762-1-robdclark@gmail.com>

From: Rob Clark

Teach lockdep that icc_bw_lock is needed in code paths that could
deadlock if they trigger reclaim.

Signed-off-by: Rob Clark
---
 drivers/interconnect/core.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
index f7251784765f..5619963ee85c 100644
--- a/drivers/interconnect/core.c
+++ b/drivers/interconnect/core.c
@@ -1127,13 +1127,21 @@ void icc_sync_state(struct device *dev)
                         }
                 }
         }
+        mutex_unlock(&icc_bw_lock);
         mutex_unlock(&icc_lock);
 }
 EXPORT_SYMBOL_GPL(icc_sync_state);

 static int __init icc_init(void)
 {
-        struct device_node *root = of_find_node_by_path("/");
+        struct device_node *root;
+
+        /* Teach lockdep about lock ordering wrt. shrinker: */
+        fs_reclaim_acquire(GFP_KERNEL);
+        might_lock(&icc_bw_lock);
+        fs_reclaim_release(GFP_KERNEL);
+
+        root = of_find_node_by_path("/");

         providers_count = of_count_icc_providers(root);
         of_node_put(root);