From patchwork Wed Nov 15 08:21:37 2023
X-Patchwork-Submitter: Liu Shixin
X-Patchwork-Id: 13456330
X-Patchwork-Delegate: geert@linux-m68k.org
From: Liu Shixin
To: Geert Uytterhoeven, Catalin Marinas, Patrick Wang, Andrew Morton, Kefeng Wang
CC: Linux-Renesas, Liu Shixin
Subject: [PATCH 1/2] Revert "mm/kmemleak: move the initialisation of object to __link_object"
Date: Wed, 15 Nov 2023 16:21:37 +0800
Message-ID: <20231115082138.2649870-2-liushixin2@huawei.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20231115082138.2649870-1-liushixin2@huawei.com>
References: <20231115082138.2649870-1-liushixin2@huawei.com>

Move the initialisation of the object back to __alloc_object(), because
set_track_prepare() attempts to acquire zone->lock (a spinlock) while
__link_object() is holding kmemleak_lock (a raw spinlock). Acquiring a
spinlock while holding a raw spinlock is not allowed in RT mode, where
spinlocks become sleeping locks.

This reverts commit 245245c2fffd0050772a3f30ba50e2be92537a32.

Signed-off-by: Liu Shixin
Reported-by: Geert Uytterhoeven
Acked-by: Catalin Marinas
---
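Note for context: below is a minimal sketch of the lock nesting this revert
removes. It is illustrative only and is not kmemleak code: save_trace()
stands in for the stack-depot work that set_track_prepare() does internally,
and example_raw_lock stands in for kmemleak_lock.

/*
 * Illustrative sketch, not kmemleak code. On PREEMPT_RT, spinlock_t is a
 * sleeping lock while raw_spinlock_t is not, so a path that may enter the
 * page allocator (which takes zone->lock, a spinlock_t) must not run with
 * a raw_spinlock_t such as kmemleak_lock held.
 */
#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/spinlock.h>
#include <linux/stackdepot.h>
#include <linux/stacktrace.h>

static DEFINE_RAW_SPINLOCK(example_raw_lock);	/* stands in for kmemleak_lock */

static depot_stack_handle_t save_trace(void)
{
	unsigned long entries[16];
	unsigned int nr;

	/* Roughly what set_track_prepare() does internally. */
	nr = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
	/* stack_depot_save() may allocate pages and so take zone->lock. */
	return stack_depot_save(entries, nr, GFP_NOWAIT);
}

/* The nesting being reverted: allocator spinlock taken under a raw spinlock. */
static depot_stack_handle_t broken_on_rt(void)
{
	depot_stack_handle_t handle;
	unsigned long flags;

	raw_spin_lock_irqsave(&example_raw_lock, flags);
	handle = save_trace();		/* may sleep on RT: invalid */
	raw_spin_unlock_irqrestore(&example_raw_lock, flags);
	return handle;
}

/* The ordering restored by the revert: do the allocating work first. */
static depot_stack_handle_t fine_on_rt(void)
{
	depot_stack_handle_t handle = save_trace();
	unsigned long flags;

	raw_spin_lock_irqsave(&example_raw_lock, flags);
	/* ... only plain stores under the raw spinlock ... */
	raw_spin_unlock_irqrestore(&example_raw_lock, flags);
	return handle;
}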
 mm/kmemleak.c | 36 +++++++++++++++++++-----------------
 1 file changed, 19 insertions(+), 17 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 1eacca03bedd..22bab3738a9e 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -642,32 +642,16 @@ static struct kmemleak_object *__alloc_object(gfp_t gfp)
 	if (!object) {
 		pr_warn("Cannot allocate a kmemleak_object structure\n");
 		kmemleak_disable();
+		return NULL;
 	}
 
-	return object;
-}
-
-static int __link_object(struct kmemleak_object *object, unsigned long ptr,
-			 size_t size, int min_count, bool is_phys)
-{
-
-	struct kmemleak_object *parent;
-	struct rb_node **link, *rb_parent;
-	unsigned long untagged_ptr;
-	unsigned long untagged_objp;
-
 	INIT_LIST_HEAD(&object->object_list);
 	INIT_LIST_HEAD(&object->gray_list);
 	INIT_HLIST_HEAD(&object->area_list);
 	raw_spin_lock_init(&object->lock);
 	atomic_set(&object->use_count, 1);
-	object->flags = OBJECT_ALLOCATED | (is_phys ? OBJECT_PHYS : 0);
-	object->pointer = ptr;
-	object->size = kfence_ksize((void *)ptr) ?: size;
 	object->excess_ref = 0;
-	object->min_count = min_count;
 	object->count = 0;			/* white color initially */
-	object->jiffies = jiffies;
 	object->checksum = 0;
 	object->del_state = 0;
 
@@ -692,6 +676,24 @@ static int __link_object(struct kmemleak_object *object, unsigned long ptr,
 	/* kernel backtrace */
 	object->trace_handle = set_track_prepare();
 
+	return object;
+}
+
+static int __link_object(struct kmemleak_object *object, unsigned long ptr,
+			 size_t size, int min_count, bool is_phys)
+{
+
+	struct kmemleak_object *parent;
+	struct rb_node **link, *rb_parent;
+	unsigned long untagged_ptr;
+	unsigned long untagged_objp;
+
+	object->flags = OBJECT_ALLOCATED | (is_phys ? OBJECT_PHYS : 0);
+	object->pointer = ptr;
+	object->size = kfence_ksize((void *)ptr) ?: size;
+	object->min_count = min_count;
+	object->jiffies = jiffies;
+
 	untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
 	/*
 	 * Only update min_addr and max_addr with object

From patchwork Wed Nov 15 08:21:38 2023
X-Patchwork-Submitter: Liu Shixin
X-Patchwork-Id: 13456331
X-Patchwork-Delegate: geert@linux-m68k.org
From: Liu Shixin
To: Geert Uytterhoeven, Catalin Marinas, Patrick Wang, Andrew Morton, Kefeng Wang
CC: Linux-Renesas, Liu Shixin
Subject: [PATCH 2/2] mm/kmemleak: move set_track_prepare() outside raw_spinlocks
Date: Wed, 15 Nov 2023 16:21:38 +0800
Message-ID: <20231115082138.2649870-3-liushixin2@huawei.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20231115082138.2649870-1-liushixin2@huawei.com>
References: <20231115082138.2649870-1-liushixin2@huawei.com>

set_track_prepare() will call __alloc_pages(), which attempts to acquire
zone->lock (a spinlock). Move the call outside object->lock (a raw
spinlock), because acquiring a spinlock while holding a raw spinlock is
not allowed in RT mode, where spinlocks become sleeping locks.

Signed-off-by: Liu Shixin
Acked-by: Catalin Marinas
---
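Note for context: the change below follows the usual prepare-outside,
publish-inside shape for raw spinlocks. The generic sketch here is
illustrative only and is not kmemleak code; it uses kzalloc() instead of
the stack depot, and struct example_state / example_update() are made-up
names for the example.

/*
 * Generic illustration of "prepare outside, publish inside" for a
 * raw_spinlock_t; not kmemleak code. kzalloc() may enter the page
 * allocator and take zone->lock (a spinlock_t), so the allocation
 * happens before the raw spinlock is taken, and the old buffer is
 * freed after the lock is dropped, keeping allocator locks out of
 * the raw-spinlock section entirely.
 */
#include <linux/gfp.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct example_state {
	raw_spinlock_t lock;	/* never sleeps, even on PREEMPT_RT */
	void *payload;
};

static int example_update(struct example_state *st, size_t size)
{
	unsigned long flags;
	void *new_buf;
	void *old;

	/* May allocate pages and take zone->lock: do it unlocked. */
	new_buf = kzalloc(size, GFP_KERNEL);
	if (!new_buf)
		return -ENOMEM;

	raw_spin_lock_irqsave(&st->lock, flags);
	old = st->payload;
	st->payload = new_buf;	/* only pointer updates under the lock */
	raw_spin_unlock_irqrestore(&st->lock, flags);

	kfree(old);		/* free the previous payload outside the lock */
	return 0;
}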
 mm/kmemleak.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 22bab3738a9e..5501363d6b31 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -1152,6 +1152,7 @@ EXPORT_SYMBOL_GPL(kmemleak_free_percpu);
 void __ref kmemleak_update_trace(const void *ptr)
 {
 	struct kmemleak_object *object;
+	depot_stack_handle_t trace_handle;
 	unsigned long flags;
 
 	pr_debug("%s(0x%px)\n", __func__, ptr);
@@ -1168,8 +1169,9 @@ void __ref kmemleak_update_trace(const void *ptr)
 		return;
 	}
 
+	trace_handle = set_track_prepare();
 	raw_spin_lock_irqsave(&object->lock, flags);
-	object->trace_handle = set_track_prepare();
+	object->trace_handle = trace_handle;
 	raw_spin_unlock_irqrestore(&object->lock, flags);
 
 	put_object(object);