From patchwork Mon Dec 23 11:33:26 2019
X-Patchwork-Submitter: Miles Chen <miles.chen@mediatek.com>
X-Patchwork-Id: 11308063
From: Miles Chen <miles.chen@mediatek.com>
To: Andrew Morton <akpm@linux-foundation.org>
CC: Michal Hocko <mhocko@suse.com>, <linux-kernel@vger.kernel.org>,
    <linux-mm@kvack.org>, <linux-mediatek@lists.infradead.org>,
    <wsd_upstream@mediatek.com>, Miles Chen <miles.chen@mediatek.com>
Subject: [PATCH] mm/page_owner: print largest memory consumer when OOM panic occurs
Date: Mon, 23 Dec 2019 19:33:26 +0800
Message-ID: <20191223113326.13828-1-miles.chen@mediatek.com>
X-Mailer: git-send-email 2.18.0

Motivation:
-----------
When debugging an OOM kernel panic, it is difficult to tell how much memory
was allocated by kernel drivers via vmalloc() just by checking the Mem-Info
or Node/Zone info. For example:

Mem-Info:
 active_anon:5144 inactive_anon:16120 isolated_anon:0
 active_file:0 inactive_file:0 isolated_file:0
 unevictable:0 dirty:0 writeback:0 unstable:0
 slab_reclaimable:739 slab_unreclaimable:442469
 mapped:534 shmem:21050 pagetables:21 bounce:0
 free:14808 free_pcp:3389 free_cma:8128
Node 0 active_anon:20576kB inactive_anon:64480kB active_file:0kB
 inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB
 mapped:2136kB dirty:0kB writeback:0kB shmem:84200kB shmem_thp: 0kB
 shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB
 all_unreclaimable? yes
Node 0 DMA free:14476kB min:21512kB low:26888kB high:32264kB
 reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB
 active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB
 present:1048576kB managed:952736kB mlocked:0kB kernel_stack:0kB
 pagetables:0kB bounce:0kB free_pcp:2716kB local_pcp:0kB free_cma:0kB

The information above shows the usage of the known memory categories, so we
can check for abnormally large numbers. However, if a memory leak does not
show up in any of these categories, we have to reproduce the issue with
CONFIG_PAGE_OWNER enabled.

It is possible to read the page owner information from coredump files.
However, coredump files are not always available, so my approach is to
print out the largest page consumer when an OOM kernel panic occurs.

This heuristic approach assumes that the OOM kernel panic is caused by a
single backtrace. The assumption is not always true, but it has worked in
many cases during our testing. We have been testing this heuristic on
Android devices since 2019/5. Of 38 internal OOM kernel panic reports:

31/38: could be analyzed by using existing information
 7/38: needed page owner information, and the heuristic approach in this
       patch printed the correct backtraces of the abnormal memory
       allocations, with no need to reproduce the issues.
Output:
-------
The output below is generated by a dummy infinite kmalloc(256, GFP_KERNEL) loop:

[   49.691027] OOM: largest memory consumer: 428468 pages are allocated from:
[   49.691278]  prep_new_page+0x198/0x19c
[   49.691390]  get_page_from_freelist+0x1cb4/0x1e54
[   49.691500]  __alloc_pages_nodemask+0x16c/0xe10
[   49.691599]  alloc_pages_current+0x104/0x190
[   49.691697]  alloc_slab_page+0x160/0x4e8
[   49.691782]  new_slab+0xb8/0x510
[   49.691866]  ___slab_alloc+0x294/0x3dc
[   49.691957]  kmem_cache_alloc+0x1f0/0x250
[   49.692047]  meminfo_proc_show+0x68/0x8fc
[   49.692135]  seq_read+0x1dc/0x47c
[   49.692217]  proc_reg_read+0x5c/0xb4
[   49.692303]  do_iter_read+0xdc/0x1c0
[   49.692389]  vfs_readv+0x60/0xa8
[   49.692471]  default_file_splice_read+0x1f0/0x304
[   49.692582]  splice_direct_to_actor+0x100/0x294
[   49.692679]  do_splice_direct+0x78/0xc8
[   39.328607] Kernel panic - not syncing: System is deadlocked on memory

Signed-off-by: Miles Chen <miles.chen@mediatek.com>
---
 include/linux/oom.h |   1 +
 mm/oom_kill.c       |   4 ++
 mm/page_owner.c     | 135 ++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 140 insertions(+)

diff --git a/include/linux/oom.h b/include/linux/oom.h
index c696c265f019..fe3c923ac8f3 100644
--- a/include/linux/oom.h
+++ b/include/linux/oom.h
@@ -121,6 +121,7 @@ extern bool oom_killer_disable(signed long timeout);
 extern void oom_killer_enable(void);
 
 extern struct task_struct *find_lock_task_mm(struct task_struct *p);
+extern void report_largest_page_consumer(void);
 
 /* sysctls */
 extern int sysctl_oom_dump_tasks;
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 71e3acea7817..9b069b5a4aff 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -42,6 +42,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include "internal.h"
@@ -1099,6 +1100,9 @@ bool out_of_memory(struct oom_control *oc)
 	if (!oc->chosen) {
 		dump_header(oc, NULL);
 		pr_warn("Out of memory and no killable processes...\n");
+#ifdef CONFIG_PAGE_OWNER
+		DO_ONCE(report_largest_page_consumer);
+#endif
 		/*
 		 * If we got here due to an actual allocation at the
 		 * system level, we cannot survive this and will enter
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 18ecde9f45b2..b23e5fe35dad 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -10,6 +10,8 @@
 #include
 #include
 #include
+#include
+#include
 
 #include "internal.h"
 
@@ -19,12 +21,16 @@
  */
 #define PAGE_OWNER_STACK_DEPTH (16)
 
+#define OOM_HANDLE_HASH_BITS 10
+
 struct page_owner {
 	unsigned short order;
 	short last_migrate_reason;
 	gfp_t gfp_mask;
 	depot_stack_handle_t handle;
 	depot_stack_handle_t free_handle;
+	struct hlist_node node;
+	unsigned long page_count; /* number of pages pointing to this handle */
 };
 
 static bool page_owner_enabled = false;
@@ -33,6 +39,8 @@ DEFINE_STATIC_KEY_FALSE(page_owner_inited);
 static depot_stack_handle_t dummy_handle;
 static depot_stack_handle_t failure_handle;
 static depot_stack_handle_t early_handle;
+static DEFINE_HASHTABLE(oom_handle_hash, OOM_HANDLE_HASH_BITS);
+static struct page_owner *most_referenced_page_owner;
 
 static void init_early_allocated_pages(void);
 
@@ -48,6 +56,57 @@ static int __init early_page_owner_param(char *buf)
 }
 early_param("page_owner", early_page_owner_param);
 
+static struct hlist_head *get_bucket(depot_stack_handle_t handle)
+{
+	unsigned long hash;
+
+	hash = hash_long(handle, OOM_HANDLE_HASH_BITS);
+	return &oom_handle_hash[hash];
+}
+
+/*
+ * Look up a page_owner in the hash bucket
+ */
+static struct page_owner *lookup_page_owner(depot_stack_handle_t handle,
+		struct hlist_head *b)
+{
+	struct page_owner *page_owner;
+
+	hlist_for_each_entry(page_owner, b, node) {
+		if (page_owner->handle == handle)
+			return page_owner;
+	}
+
+	return NULL;
+}
+
+/*
+ * Increase the page_owner->page_count in the handle_hash by (1 << order)
+ */
+static void increase_handle_count(struct page_owner *page_owner)
+{
+	struct hlist_head *bucket;
+	struct page_owner *owner;
+
+	bucket = get_bucket(page_owner->handle);
+
+	owner = lookup_page_owner(page_owner->handle, bucket);
+
+	if (!owner) {
+		owner = page_owner;
+		hlist_add_head(&page_owner->node, bucket);
+	}
+
+	/* increase page counter */
+	owner->page_count += (1 << owner->order);
+
+	/* update most_referenced_page_owner */
+	if (!most_referenced_page_owner)
+		most_referenced_page_owner = owner;
+	if (most_referenced_page_owner->page_count < owner->page_count)
+		most_referenced_page_owner = owner;
+}
+
 static bool need_page_owner(void)
 {
 	return page_owner_enabled;
@@ -172,6 +231,7 @@ static inline void __set_page_owner_handle(struct page *page,
 	page_owner->order = order;
 	page_owner->gfp_mask = gfp_mask;
 	page_owner->last_migrate_reason = -1;
+	page_owner->page_count = 0;
 	__set_bit(PAGE_EXT_OWNER, &page_ext->flags);
 	__set_bit(PAGE_EXT_OWNER_ALLOCATED, &page_ext->flags);
@@ -216,6 +276,7 @@ void __split_page_owner(struct page *page, unsigned int order)
 	for (i = 0; i < (1 << order); i++) {
 		page_owner = get_page_owner(page_ext);
 		page_owner->order = 0;
+		page_owner->page_count = 0;
 		page_ext = page_ext_next(page_ext);
 	}
 }
@@ -236,6 +297,7 @@ void __copy_page_owner(struct page *oldpage, struct page *newpage)
 	new_page_owner->last_migrate_reason =
 		old_page_owner->last_migrate_reason;
 	new_page_owner->handle = old_page_owner->handle;
+	new_page_owner->page_count = old_page_owner->page_count;
 
 	/*
 	 * We don't clear the bit on the oldpage as it's going to be freed
@@ -615,6 +677,79 @@ static void init_pages_in_zone(pg_data_t *pgdat, struct zone *zone)
 		pgdat->node_id, zone->name, count);
 }
 
+static void __report_largest_page_consumer(struct page_owner *page_owner)
+{
+	unsigned long *entries = NULL;
+	unsigned int nr_entries;
+
+	nr_entries = stack_depot_fetch(page_owner->handle, &entries);
+	pr_info("OOM: largest memory consumer: %lu pages are allocated from:\n",
+			page_owner->page_count);
+	stack_trace_print(entries, nr_entries, 0);
+}
+
+void report_largest_page_consumer(void)
+{
+	unsigned long pfn;
+	struct page *page;
+	struct page_ext *page_ext;
+	struct page_owner *page_owner;
+	depot_stack_handle_t handle;
+
+	pfn = min_low_pfn;
+
+	if (!static_branch_unlikely(&page_owner_inited))
+		return;
+
+	/* Find a valid PFN or the start of a MAX_ORDER_NR_PAGES area */
+	while (!pfn_valid(pfn) && (pfn & (MAX_ORDER_NR_PAGES - 1)) != 0)
+		pfn++;
+
+	/* Find an allocated page */
+	for (; pfn < max_pfn; pfn++) {
+		if ((pfn & (MAX_ORDER_NR_PAGES - 1)) == 0 && !pfn_valid(pfn)) {
+			pfn += MAX_ORDER_NR_PAGES - 1;
+			continue;
+		}
+
+		if (!pfn_valid_within(pfn))
+			continue;
+
+		page = pfn_to_page(pfn);
+		if (PageBuddy(page)) {
+			unsigned long freepage_order = page_order_unsafe(page);
+
+			if (freepage_order < MAX_ORDER)
+				pfn += (1UL << freepage_order) - 1;
+			continue;
+		}
+
+		if (PageReserved(page))
+			continue;
+
+		page_ext = lookup_page_ext(page);
+		if (unlikely(!page_ext))
+			continue;
+
+		if (!test_bit(PAGE_EXT_OWNER_ALLOCATED, &page_ext->flags))
+			continue;
+
+		page_owner = get_page_owner(page_ext);
+
+		if (!IS_ALIGNED(pfn, 1 << page_owner->order))
+			continue;
+
+		handle = READ_ONCE(page_owner->handle);
+		if (!handle)
+			continue;
+
+		increase_handle_count(page_owner);
+	}
+
+	__report_largest_page_consumer(most_referenced_page_owner);
+}
+
 static void init_zones_in_node(pg_data_t *pgdat)
 {
 	struct zone *zone;
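For comparison, the non-panic path to the same data: with CONFIG_PAGE_OWNER=y
and page_owner=on on the kernel command line, per-page allocation backtraces
can be read from debugfs and grouped with the page_owner_sort helper shipped
in the kernel tree. The file paths below are illustrative and assume debugfs
is mounted at /sys/kernel/debug:

```shell
# Boot with page owner tracking enabled (requires CONFIG_PAGE_OWNER=y):
#     ... page_owner=on ...
# Dump all tracked allocation backtraces from debugfs:
cat /sys/kernel/debug/page_owner > page_owner_full.txt
# Group identical backtraces and sort them by frequency with the
# in-tree helper (tools/vm/page_owner_sort.c at the time of this patch):
./page_owner_sort page_owner_full.txt sorted_page_owner.txt
```

This workflow requires a live (or reproducible) system, which is exactly
what the panic-time report in this patch avoids.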