From patchwork Thu Jun 16 00:28:26 2016
X-Patchwork-Submitter: Andy Lutomirski
X-Patchwork-Id: 9179885
From: Andy Lutomirski
To: "linux-kernel@vger.kernel.org", x86@kernel.org, Borislav Petkov
Cc: Nadav Amit, Kees Cook, Brian Gerst, "kernel-hardening@lists.openwall.com",
    Linus Torvalds, Josh Poimboeuf, Andy Lutomirski, Vladimir Davydov,
    Johannes Weiner, Michal Hocko, linux-mm@kvack.org
Date: Wed, 15 Jun 2016 17:28:26 -0700
Message-Id: <24279d4009c821de64109055665429fad2a7bff7.1466036668.git.luto@kernel.org>
X-Mailer: git-send-email 2.7.4
Subject: [kernel-hardening] [PATCH 04/13] mm: Track NR_KERNEL_STACK in pages instead of number of stacks

Currently, NR_KERNEL_STACK tracks the number of kernel stacks in a
zone.  This only makes sense if each kernel stack exists entirely in
one zone, and allowing vmapped stacks could break this assumption.

It turns out that the code for tracking kernel stack allocations in
units of pages is slightly simpler, so just switch to counting pages.
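To see why the reported numbers do not change, here is a minimal userspace
sketch of the arithmetic (the constants and the nr_stacks value below are
illustrative assumptions; K() mirrors the pages-to-kB helper already used by
meminfo_proc_show() and show_free_areas()): accounting THREAD_SIZE / PAGE_SIZE
pages per stack and converting pages to kB yields the same KernelStack figure
as the old stacks * THREAD_SIZE / 1024 formula.

#include <stdio.h>

/* Illustrative values; the real ones are per-arch kernel constants. */
#define PAGE_SIZE   4096UL
#define PAGE_SHIFT  12
#define THREAD_SIZE (4UL * PAGE_SIZE)	/* 16 kB stacks, e.g. x86_64 */

/* Same conversion as meminfo's K(): page count -> kilobytes. */
#define K(pages) ((pages) << (PAGE_SHIFT - 10))

int main(void)
{
	unsigned long nr_stacks = 137;	/* arbitrary example count */

	/* Old scheme: the counter holds stacks, the report multiplies. */
	unsigned long old_kb = nr_stacks * THREAD_SIZE / 1024;

	/*
	 * New scheme: account_kernel_stack() bumps the counter by
	 * THREAD_SIZE / PAGE_SIZE per stack, and the report uses K().
	 */
	unsigned long nr_pages = nr_stacks * (THREAD_SIZE / PAGE_SIZE);
	unsigned long new_kb = K(nr_pages);

	printf("old: %lu kB, new: %lu kB\n", old_kb, new_kb);	/* identical */
	return 0;
}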
Cc: Vladimir Davydov
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: linux-mm@kvack.org
Signed-off-by: Andy Lutomirski
---
 fs/proc/meminfo.c | 2 +-
 kernel/fork.c     | 3 ++-
 mm/page_alloc.c   | 3 +--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index 83720460c5bc..8338c0569a8d 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -145,7 +145,7 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 		global_page_state(NR_SLAB_UNRECLAIMABLE)),
 		K(global_page_state(NR_SLAB_RECLAIMABLE)),
 		K(global_page_state(NR_SLAB_UNRECLAIMABLE)),
-		global_page_state(NR_KERNEL_STACK) * THREAD_SIZE / 1024,
+		K(global_page_state(NR_KERNEL_STACK)),
 		K(global_page_state(NR_PAGETABLE)),
 #ifdef CONFIG_QUICKLIST
 		K(quicklist_total_size()),
diff --git a/kernel/fork.c b/kernel/fork.c
index 5c2c355aa97f..95bebde59d79 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -225,7 +225,8 @@ static void account_kernel_stack(struct thread_info *ti, int account)
 {
 	struct zone *zone = page_zone(virt_to_page(ti));
 
-	mod_zone_page_state(zone, NR_KERNEL_STACK, account);
+	mod_zone_page_state(zone, NR_KERNEL_STACK,
+			    THREAD_SIZE / PAGE_SIZE * account);
 }
 
 void free_task(struct task_struct *tsk)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6903b695ebae..2b0203b3a976 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4457,8 +4457,7 @@ void show_free_areas(unsigned int filter)
 			K(zone_page_state(zone, NR_SHMEM)),
 			K(zone_page_state(zone, NR_SLAB_RECLAIMABLE)),
 			K(zone_page_state(zone, NR_SLAB_UNRECLAIMABLE)),
-			zone_page_state(zone, NR_KERNEL_STACK) *
-					THREAD_SIZE / 1024,
+			K(zone_page_state(zone, NR_KERNEL_STACK)),
 			K(zone_page_state(zone, NR_PAGETABLE)),
 			K(zone_page_state(zone, NR_UNSTABLE_NFS)),
 			K(zone_page_state(zone, NR_BOUNCE)),