From patchwork Mon Jul 23 11:19:33 2018
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 10540011
From: Vlastimil Babka <vbabka@suse.cz>
To: Andrew Morton
Cc: Daniel Colascione, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, Alexey Dobriyan, linux-api@vger.kernel.org,
 Vlastimil Babka
Subject: [PATCH 4/4] mm: proc/pid/smaps_rollup: convert to single value seq_file
Date: Mon, 23 Jul 2018 13:19:33 +0200
Message-Id: <20180723111933.15443-5-vbabka@suse.cz>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180723111933.15443-1-vbabka@suse.cz>
References: <20180723111933.15443-1-vbabka@suse.cz>

The /proc/pid/smaps_rollup
file is currently implemented via the m_start/m_next/m_stop seq_file
iterators shared with the other maps files, which iterate over vmas.
However, the rollup file doesn't print anything for each vma; it only
accumulates the stats.

The current code has some issues, as reported in [1]: the accumulated
stats can get skewed if the seq_file start()/stop() ops are called
multiple times, if show() is called multiple times, or after a seek to
a non-zero position.

Patch [1] fixed those within the existing design, but I believe it is
fundamentally wrong to expose the vma iterators to the seq_file
mechanism when smaps_rollup logically shows a single set of values for
the whole address space.

This patch thus refactors the code to provide a single "value" at
offset 0, with the vma iteration to gather the stats done internally.
This fixes the situations where results are skewed, and simplifies the
code, especially in show_smap(), at the expense of somewhat less code
reuse.

[1] https://marc.info/?l=linux-mm&m=151927723128134&w=2

Reported-by: Daniel Colascione
Reviewed-by: Alexey Dobriyan
Signed-off-by: Vlastimil Babka
---
 fs/proc/task_mmu.c | 136 ++++++++++++++++++++++++++++-----------------
 1 file changed, 86 insertions(+), 50 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 1d6d315fd31b..31109e67804c 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -404,7 +404,6 @@ const struct file_operations proc_pid_maps_operations = {
 
 #ifdef CONFIG_PROC_PAGE_MONITOR
 struct mem_size_stats {
-	bool first;
 	unsigned long resident;
 	unsigned long shared_clean;
 	unsigned long shared_dirty;
@@ -418,11 +417,12 @@ struct mem_size_stats {
 	unsigned long swap;
 	unsigned long shared_hugetlb;
 	unsigned long private_hugetlb;
-	unsigned long first_vma_start;
+	unsigned long last_vma_end;
 	u64 pss;
 	u64 pss_locked;
 	u64 swap_pss;
 	bool check_shmem_swap;
+	bool finished;
 };
 
 static void smaps_account(struct mem_size_stats *mss, struct page *page,
@@ -775,58 +775,57 @@ static void __show_smap(struct seq_file *m, struct mem_size_stats *mss)
 
 static int show_smap(struct seq_file *m, void *v)
 {
-	struct proc_maps_private *priv = m->private;
 	struct vm_area_struct *vma = v;
-	struct mem_size_stats mss_stack;
-	struct mem_size_stats *mss;
-	int ret = 0;
-	bool rollup_mode;
-	bool last_vma;
-
-	if (priv->rollup) {
-		rollup_mode = true;
-		mss = priv->rollup;
-		if (mss->first) {
-			mss->first_vma_start = vma->vm_start;
-			mss->first = false;
-		}
-		last_vma = !m_next_vma(priv, vma);
-	} else {
-		rollup_mode = false;
-		memset(&mss_stack, 0, sizeof(mss_stack));
-		mss = &mss_stack;
-	}
+	struct mem_size_stats mss;
 
-	smap_gather_stats(vma, mss);
+	memset(&mss, 0, sizeof(mss));
 
-	if (!rollup_mode) {
-		show_map_vma(m, vma);
-	} else if (last_vma) {
-		show_vma_header_prefix(
-			m, mss->first_vma_start, vma->vm_end, 0, 0, 0, 0);
-		seq_pad(m, ' ');
-		seq_puts(m, "[rollup]\n");
-	} else {
-		ret = SEQ_SKIP;
-	}
+	smap_gather_stats(vma, &mss);
 
-	if (!rollup_mode) {
-		SEQ_PUT_DEC("Size:           ", vma->vm_end - vma->vm_start);
-		SEQ_PUT_DEC(" kB\nKernelPageSize: ", vma_kernel_pagesize(vma));
-		SEQ_PUT_DEC(" kB\nMMUPageSize:    ", vma_mmu_pagesize(vma));
-		seq_puts(m, " kB\n");
-	}
+	show_map_vma(m, vma);
 
-	if (!rollup_mode || last_vma)
-		__show_smap(m, mss);
+	SEQ_PUT_DEC("Size:           ", vma->vm_end - vma->vm_start);
+	SEQ_PUT_DEC(" kB\nKernelPageSize: ", vma_kernel_pagesize(vma));
+	SEQ_PUT_DEC(" kB\nMMUPageSize:    ", vma_mmu_pagesize(vma));
+	seq_puts(m, " kB\n");
+
+	__show_smap(m, &mss);
+
+	if (arch_pkeys_enabled())
+		seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma));
+	show_smap_vma_flags(m, vma);
 
-	if (!rollup_mode) {
-		if (arch_pkeys_enabled())
-			seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma));
-		show_smap_vma_flags(m, vma);
-	}
 	m_cache_vma(m, vma);
-	return ret;
+
+	return 0;
+}
+
+static int show_smaps_rollup(struct seq_file *m, void *v)
+{
+	struct proc_maps_private *priv = m->private;
+	struct mem_size_stats *mss = priv->rollup;
+	struct vm_area_struct *vma;
+
+	/*
+	 * We might be called multiple times when e.g. the seq buffer
+	 * overflows. Gather the stats only once.
+	 */
+	if (!mss->finished) {
+		for (vma = priv->mm->mmap; vma; vma = vma->vm_next) {
+			smap_gather_stats(vma, mss);
+			mss->last_vma_end = vma->vm_end;
+		}
+		mss->finished = true;
+	}
+
+	show_vma_header_prefix(m, priv->mm->mmap->vm_start,
+			       mss->last_vma_end, 0, 0, 0, 0);
+	seq_pad(m, ' ');
+	seq_puts(m, "[rollup]\n");
+
+	__show_smap(m, mss);
+
+	return 0;
 }
 #undef SEQ_PUT_DEC
@@ -837,6 +836,44 @@ static const struct seq_operations proc_pid_smaps_op = {
 	.show	= show_smap
 };
 
+static void *smaps_rollup_start(struct seq_file *m, loff_t *ppos)
+{
+	struct proc_maps_private *priv = m->private;
+	struct mm_struct *mm;
+
+	if (*ppos != 0)
+		return NULL;
+
+	priv->task = get_proc_task(priv->inode);
+	if (!priv->task)
+		return ERR_PTR(-ESRCH);
+
+	mm = priv->mm;
+	if (!mm || !mmget_not_zero(mm))
+		return NULL;
+
+	memset(priv->rollup, 0, sizeof(*priv->rollup));
+
+	down_read(&mm->mmap_sem);
+	hold_task_mempolicy(priv);
+
+	return mm;
+}
+
+static void *smaps_rollup_next(struct seq_file *m, void *v, loff_t *pos)
+{
+	(*pos)++;
+	vma_stop(m->private);
+	return NULL;
+}
+
+static const struct seq_operations proc_pid_smaps_rollup_op = {
+	.start	= smaps_rollup_start,
+	.next	= smaps_rollup_next,
+	.stop	= m_stop,
+	.show	= show_smaps_rollup
+};
+
 static int pid_smaps_open(struct inode *inode, struct file *file)
 {
 	return do_maps_open(inode, file, &proc_pid_smaps_op);
@@ -846,18 +883,17 @@ static int pid_smaps_rollup_open(struct inode *inode, struct file *file)
 {
 	struct seq_file *seq;
 	struct proc_maps_private *priv;
-	int ret = do_maps_open(inode, file, &proc_pid_smaps_op);
+	int ret = do_maps_open(inode, file, &proc_pid_smaps_rollup_op);
 
 	if (ret < 0)
 		return ret;
 	seq = file->private_data;
 	priv = seq->private;
-	priv->rollup = kzalloc(sizeof(*priv->rollup), GFP_KERNEL);
+	priv->rollup = kmalloc(sizeof(*priv->rollup), GFP_KERNEL);
 	if (!priv->rollup) {
 		proc_map_release(inode, file);
 		return -ENOMEM;
 	}
-	priv->rollup->first = true;
 	return 0;
 }