From patchwork Tue Jun 6 18:24:52 2017
X-Patchwork-Submitter: Igor Stoppa
X-Patchwork-Id: 9769579
From: Igor Stoppa
Subject: [PATCH 3/4] Protectable Memory Allocator - Debug interface
Date: Tue, 6 Jun 2017 21:24:52 +0300
Message-ID: <20170606182453.32688-4-igor.stoppa@huawei.com>
In-Reply-To: <20170606182453.32688-1-igor.stoppa@huawei.com>
References: <20170606182453.32688-1-igor.stoppa@huawei.com>

Debugfs interface: it creates the file /sys/kernel/debug/pmalloc/pools
which exposes statistics about all the pools and memory nodes in use.

Signed-off-by: Igor Stoppa
---
 mm/Kconfig   |  11 ++++++
 mm/pmalloc.c | 113 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 124 insertions(+)

diff --git a/mm/Kconfig b/mm/Kconfig
index beb7a45..dfbdc07 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -539,6 +539,17 @@ config CMA_AREAS
 
 	  If unsure, leave the default value "7".
 
+config PMALLOC_DEBUG
+	bool "Protectable Memory Allocator debugging"
+	depends on DEBUG_KERNEL
+	default y
+	help
+	  Debugfs support for dumping information about memory pools.
+	  It shows internal stats: free/used/total space, protection
+	  status, data overhead, etc.
+
+	  If unsure, say "y".
+
 config MEM_SOFT_DIRTY
 	bool "Track memory changes"
 	depends on CHECKPOINT_RESTORE && HAVE_ARCH_SOFT_DIRTY && PROC_FS
diff --git a/mm/pmalloc.c b/mm/pmalloc.c
index 4ca1e4a..636169c 100644
--- a/mm/pmalloc.c
+++ b/mm/pmalloc.c
@@ -225,3 +225,116 @@ int __init pmalloc_init(void)
 	atomic_set(&pmalloc_data->pools_count, 0);
 	return 0;
 }
+
+#ifdef CONFIG_PMALLOC_DEBUG
+#include <linux/debugfs.h>
+static struct dentry *pmalloc_root;
+
+static void *__pmalloc_seq_start(struct seq_file *s, loff_t *pos)
+{
+	if (*pos)
+		return NULL;
+	return pos;
+}
+
+static void *__pmalloc_seq_next(struct seq_file *s, void *v, loff_t *pos)
+{
+	return NULL;
+}
+
+static void __pmalloc_seq_stop(struct seq_file *s, void *v)
+{
+}
+
+static __always_inline
+void __seq_printf_node(struct seq_file *s, struct pmalloc_node *node)
+{
+	unsigned long total_space, node_pages, end_of_node,
+		      used_space, available_space;
+	int total_words, used_words, available_words;
+
+	used_words = atomic_read(&node->used_words);
+	total_words = node->total_words;
+	available_words = total_words - used_words;
+	used_space = used_words * WORD_SIZE;
+	total_space = total_words * WORD_SIZE;
+	available_space = total_space - used_space;
+	node_pages = (total_space + HEADER_SIZE) / PAGE_SIZE;
+	end_of_node = total_space + HEADER_SIZE + (unsigned long) node;
+	seq_printf(s, " - node:\t\t%pK\n", node);
+	seq_printf(s, " - start of data ptr:\t%pK\n", node->data);
+	seq_printf(s, " - end of node ptr:\t%pK\n", (void *)end_of_node);
+	seq_printf(s, " - total words:\t%d\n", total_words);
+	seq_printf(s, " - used words:\t%d\n", used_words);
+	seq_printf(s, " - available words:\t%d\n", available_words);
+	seq_printf(s, " - pages:\t\t%lu\n", node_pages);
+	seq_printf(s, " - total space:\t%lu\n", total_space);
+	seq_printf(s, " - used space:\t%lu\n", used_space);
+	seq_printf(s, " - available space:\t%lu\n", available_space);
+}
+
+static __always_inline
+void __seq_printf_pool(struct seq_file *s, struct pmalloc_pool *pool)
+{
+	struct pmalloc_node *node;
+
+	seq_printf(s, "pool:\t\t\t%pK\n", pool);
+	seq_printf(s, " - name:\t\t%s\n", pool->name);
+	seq_printf(s, " - protected:\t\t%u\n", atomic_read(&pool->protected));
+	seq_printf(s, " - nodes count:\t\t%u\n",
+		   atomic_read(&pool->nodes_count));
+	rcu_read_lock();
+	hlist_for_each_entry_rcu(node, &pool->nodes_list_head, nodes_list)
+		__seq_printf_node(s, node);
+	rcu_read_unlock();
+}
+
+static int __pmalloc_seq_show(struct seq_file *s, void *v)
+{
+	struct pmalloc_pool *pool;
+
+	seq_printf(s, "pools count:\t\t%u\n",
+		   atomic_read(&pmalloc_data->pools_count));
+	seq_printf(s, "page size:\t\t%lu\n", PAGE_SIZE);
+	seq_printf(s, "word size:\t\t%lu\n", WORD_SIZE);
+	seq_printf(s, "node header size:\t%lu\n", HEADER_SIZE);
+	rcu_read_lock();
+	hlist_for_each_entry_rcu(pool, &pmalloc_data->pools_list_head,
+				 pools_list)
+		__seq_printf_pool(s, pool);
+	rcu_read_unlock();
+	return 0;
+}
+
+static const struct seq_operations pmalloc_seq_ops = {
+	.start = __pmalloc_seq_start,
+	.next = __pmalloc_seq_next,
+	.stop = __pmalloc_seq_stop,
+	.show = __pmalloc_seq_show,
+};
+
+static int __pmalloc_open(struct inode *inode, struct file *file)
+{
+	return seq_open(file, &pmalloc_seq_ops);
+}
+
+static const struct file_operations pmalloc_file_ops = {
+	.owner = THIS_MODULE,
+	.open = __pmalloc_open,
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = seq_release
+};
+
+
+static int __init __pmalloc_init_track_pool(void)
+{
+	struct dentry *de = NULL;
+
+	pmalloc_root = debugfs_create_dir("pmalloc", NULL);
+	debugfs_create_file("pools", 0644, pmalloc_root, NULL,
+			    &pmalloc_file_ops);
+	return 0;
+}
+late_initcall(__pmalloc_init_track_pool);
+#endif
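
Not part of the patch, only for review convenience: a rough sketch of what
reading the new debugfs entry could look like, put together from the
seq_printf() format strings above. The pool name ("example_pool"), the
pointer values and all the numbers are invented for illustration; actual %pK
output depends on kptr_restrict, and the word/header sizes are whatever
WORD_SIZE and HEADER_SIZE evaluate to on the target (8 and 32 bytes are
assumed here, on a 4096-byte page).

  # cat /sys/kernel/debug/pmalloc/pools
  pools count:          1
  page size:            4096
  word size:            8
  node header size:     32
  pool:                 ffff88003d0e2a00
   - name:              example_pool
   - protected:         0
   - nodes count:       1
   - node:              ffffc90000234000
   - start of data ptr: ffffc90000234020
   - end of node ptr:   ffffc90000235000
   - total words:       508
   - used words:        12
   - available words:   496
   - pages:             1
   - total space:       4064
   - used space:        96
   - available space:   3968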