From patchwork Tue Oct 25 00:13:08 2016
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 9393627
From: "Kirill A. Shutemov"
To: "Theodore Ts'o", Andreas Dilger, Jan Kara, Andrew Morton
Cc: Alexander Viro, Hugh Dickins, Andrea Arcangeli, Dave Hansen,
    Vlastimil Babka, Matthew Wilcox, Ross Zwisler,
    linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-block@vger.kernel.org, "Kirill A. Shutemov"
Subject: [PATCHv4 09/43] mm, rmap: account file thp pages
Date: Tue, 25 Oct 2016 03:13:08 +0300
Message-Id: <20161025001342.76126-10-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20161025001342.76126-1-kirill.shutemov@linux.intel.com>
References: <20161025001342.76126-1-kirill.shutemov@linux.intel.com>
X-Mailing-List: linux-block@vger.kernel.org

Let's add FileHugePages and FilePmdMapped fields to meminfo and smaps.
They show how many file-backed THPs are allocated and how many of them
are mapped with PMDs.

Signed-off-by: Kirill A. Shutemov
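
As a usage illustration (not part of the change itself): the new fields read
like any other meminfo counter. A minimal userspace sketch, assuming only the
FileHugePages/FilePmdMapped names added by this patch:

	/* Illustrative only: scan /proc/meminfo for the two new fields. */
	#include <stdio.h>

	int main(void)
	{
		char line[256];
		unsigned long kb;
		FILE *f = fopen("/proc/meminfo", "r");

		if (!f) {
			perror("fopen");
			return 1;
		}

		while (fgets(line, sizeof(line), f)) {
			/* Each field is "Name: <value> kB". */
			if (sscanf(line, "FileHugePages: %lu kB", &kb) == 1)
				printf("file THPs allocated:  %lu kB\n", kb);
			else if (sscanf(line, "FilePmdMapped: %lu kB", &kb) == 1)
				printf("file THPs PMD-mapped: %lu kB\n", kb);
		}

		fclose(f);
		return 0;
	}

The same counters are also exported per node (via drivers/base/node.c) and
show up in /proc/vmstat.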
---
 drivers/base/node.c    |  6 ++++++
 fs/proc/meminfo.c      |  4 ++++
 fs/proc/task_mmu.c     |  5 ++++-
 include/linux/mmzone.h |  2 ++
 mm/filemap.c           |  3 ++-
 mm/huge_memory.c       |  5 ++++-
 mm/page_alloc.c        |  5 +++++
 mm/rmap.c              | 12 ++++++++----
 mm/vmstat.c            |  2 ++
 9 files changed, 37 insertions(+), 7 deletions(-)

diff --git a/drivers/base/node.c b/drivers/base/node.c
index 5548f9686016..45be0ddb84ed 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -116,6 +116,8 @@ static ssize_t node_read_meminfo(struct device *dev,
 		       "Node %d AnonHugePages:  %8lu kB\n"
 		       "Node %d ShmemHugePages: %8lu kB\n"
 		       "Node %d ShmemPmdMapped: %8lu kB\n"
+		       "Node %d FileHugePages:  %8lu kB\n"
+		       "Node %d FilePmdMapped:  %8lu kB\n"
 #endif
 			,
 		       nid, K(node_page_state(pgdat, NR_FILE_DIRTY)),
@@ -139,6 +141,10 @@ static ssize_t node_read_meminfo(struct device *dev,
 		       nid, K(node_page_state(pgdat, NR_SHMEM_THPS) *
				       HPAGE_PMD_NR),
 		       nid, K(node_page_state(pgdat, NR_SHMEM_PMDMAPPED) *
+				       HPAGE_PMD_NR),
+		       nid, K(node_page_state(pgdat, NR_FILE_THPS) *
+				       HPAGE_PMD_NR),
+		       nid, K(node_page_state(pgdat, NR_FILE_PMDMAPPED) *
 				       HPAGE_PMD_NR));
 #else
 		       nid, K(sum_zone_node_page_state(nid, NR_SLAB_UNRECLAIMABLE)));
diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index 8a428498d6b2..8396843be7a7 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -146,6 +146,10 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 		    global_node_page_state(NR_SHMEM_THPS) * HPAGE_PMD_NR);
 	show_val_kb(m, "ShmemPmdMapped: ",
 		    global_node_page_state(NR_SHMEM_PMDMAPPED) * HPAGE_PMD_NR);
+	show_val_kb(m, "FileHugePages:  ",
+		    global_node_page_state(NR_FILE_THPS) * HPAGE_PMD_NR);
+	show_val_kb(m, "FilePmdMapped:  ",
+		    global_node_page_state(NR_FILE_PMDMAPPED) * HPAGE_PMD_NR);
 #endif

 #ifdef CONFIG_CMA
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 35b92d81692f..bb8b7f93528e 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -442,6 +442,7 @@ struct mem_size_stats {
 	unsigned long anonymous;
 	unsigned long anonymous_thp;
 	unsigned long shmem_thp;
+	unsigned long file_thp;
 	unsigned long swap;
 	unsigned long shared_hugetlb;
 	unsigned long private_hugetlb;
@@ -577,7 +578,7 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
 	else if (is_zone_device_page(page))
 		/* pass */;
 	else
-		VM_BUG_ON_PAGE(1, page);
+		mss->file_thp += HPAGE_PMD_SIZE;
 	smaps_account(mss, page, true, pmd_young(*pmd), pmd_dirty(*pmd));
 }
 #else
@@ -772,6 +773,7 @@ static int show_smap(struct seq_file *m, void *v, int is_pid)
 		   "Anonymous:      %8lu kB\n"
 		   "AnonHugePages:  %8lu kB\n"
 		   "ShmemPmdMapped: %8lu kB\n"
+		   "FilePmdMapped:  %8lu kB\n"
 		   "Shared_Hugetlb: %8lu kB\n"
 		   "Private_Hugetlb: %7lu kB\n"
 		   "Swap:           %8lu kB\n"
@@ -790,6 +792,7 @@ static int show_smap(struct seq_file *m, void *v, int is_pid)
 		   mss.anonymous >> 10,
 		   mss.anonymous_thp >> 10,
 		   mss.shmem_thp >> 10,
+		   mss.file_thp >> 10,
 		   mss.shared_hugetlb >> 10,
 		   mss.private_hugetlb >> 10,
 		   mss.swap >> 10,
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 7f2ae99e5daf..20c5fce13697 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -163,6 +163,8 @@ enum node_stat_item {
 	NR_SHMEM,		/* shmem pages (included tmpfs/GEM pages) */
 	NR_SHMEM_THPS,
 	NR_SHMEM_PMDMAPPED,
+	NR_FILE_THPS,
+	NR_FILE_PMDMAPPED,
 	NR_ANON_THPS,
 	NR_UNSTABLE_NFS,	/* NFS unstable pages */
 	NR_VMSCAN_WRITE,
diff --git a/mm/filemap.c b/mm/filemap.c
index 23fbbb61725c..e9376610ad3c 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -293,7 +293,8 @@ void __delete_from_page_cache(struct page *page, void *shadow)
 		if (PageTransHuge(page))
 			__dec_node_page_state(page, NR_SHMEM_THPS);
 	} else {
-		VM_BUG_ON_PAGE(PageTransHuge(page) && !PageHuge(page), page);
+		if (PageTransHuge(page) && !PageHuge(page))
+			__dec_node_page_state(page, NR_FILE_THPS);
 	}

 	/*
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9888dbec1d01..e7cda8e86f2c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1892,7 +1892,10 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		struct radix_tree_iter iter;
 		void **slot;

-		__dec_node_page_state(head, NR_SHMEM_THPS);
+		if (PageSwapBacked(page))
+			__dec_node_page_state(page, NR_SHMEM_THPS);
+		else
+			__dec_node_page_state(page, NR_FILE_THPS);

 		radix_tree_split(&mapping->page_tree, head->index, 0);
 		radix_tree_for_each_slot(slot, &mapping->page_tree, &iter,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2b3bf6767d54..447499b7054d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4293,6 +4293,8 @@ void show_free_areas(unsigned int filter)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 			" shmem_thp: %lukB"
 			" shmem_pmdmapped: %lukB"
+			" file_thp: %lukB"
+			" file_pmdmapped: %lukB"
 			" anon_thp: %lukB"
 #endif
 			" writeback_tmp:%lukB"
@@ -4315,6 +4317,9 @@ void show_free_areas(unsigned int filter)
 			K(node_page_state(pgdat, NR_SHMEM_THPS) * HPAGE_PMD_NR),
 			K(node_page_state(pgdat, NR_SHMEM_PMDMAPPED)
 					* HPAGE_PMD_NR),
+			K(node_page_state(pgdat, NR_FILE_THPS) * HPAGE_PMD_NR),
+			K(node_page_state(pgdat, NR_FILE_PMDMAPPED)
+					* HPAGE_PMD_NR),
 			K(node_page_state(pgdat, NR_ANON_THPS) * HPAGE_PMD_NR),
 #endif
 			K(node_page_state(pgdat, NR_SHMEM)),
diff --git a/mm/rmap.c b/mm/rmap.c
index 1ef36404e7b2..48c7310639bd 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1281,8 +1281,10 @@ void page_add_file_rmap(struct page *page, bool compound)
 		}
 		if (!atomic_inc_and_test(compound_mapcount_ptr(page)))
 			goto out;
-		VM_BUG_ON_PAGE(!PageSwapBacked(page), page);
-		__inc_node_page_state(page, NR_SHMEM_PMDMAPPED);
+		if (PageSwapBacked(page))
+			__inc_node_page_state(page, NR_SHMEM_PMDMAPPED);
+		else
+			__inc_node_page_state(page, NR_FILE_PMDMAPPED);
 	} else {
 		if (PageTransCompound(page) && page_mapping(page)) {
 			VM_WARN_ON_ONCE(!PageLocked(page));
@@ -1322,8 +1324,10 @@ static void page_remove_file_rmap(struct page *page, bool compound)
 		}
 		if (!atomic_add_negative(-1, compound_mapcount_ptr(page)))
 			goto out;
-		VM_BUG_ON_PAGE(!PageSwapBacked(page), page);
-		__dec_node_page_state(page, NR_SHMEM_PMDMAPPED);
+		if (PageSwapBacked(page))
+			__dec_node_page_state(page, NR_SHMEM_PMDMAPPED);
+		else
+			__dec_node_page_state(page, NR_FILE_PMDMAPPED);
 	} else {
 		if (!atomic_add_negative(-1, &page->_mapcount))
 			goto out;
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 604f26a4f696..04dc6bd8ee43 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -967,6 +967,8 @@ const char * const vmstat_text[] = {
 	"nr_shmem",
 	"nr_shmem_hugepages",
 	"nr_shmem_pmdmapped",
+	"nr_file_hugepages",
+	"nr_file_pmdmapped",
 	"nr_anon_transparent_hugepages",
 	"nr_unstable",
 	"nr_vmscan_write",