From patchwork Tue Aug 14 16:24:39 2012
X-Patchwork-Submitter: Sasha Levin
X-Patchwork-Id: 1321961
From: Sasha Levin <levinsasha928@gmail.com>
To: torvalds@linux-foundation.org
Cc: tj@kernel.org, akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, paul.gortmaker@windriver.com, davem@davemloft.net,
	rostedt@goodmis.org, mingo@elte.hu, ebiederm@xmission.com,
	aarcange@redhat.com, ericvh@gmail.com, netdev@vger.kernel.org,
	josh@joshtriplett.org, eric.dumazet@gmail.com,
	mathieu.desnoyers@efficios.com, axboe@kernel.dk, agk@redhat.com,
	dm-devel@redhat.com, neilb@suse.de, ccaulfie@redhat.com,
	teigland@redhat.com, Trond.Myklebust@netapp.com, bfields@fieldses.org,
	fweisbec@gmail.com, jesse@nicira.com, venkat.x.venkatsubra@oracle.com,
	ejt@redhat.com, snitzer@redhat.com, edumazet@google.com,
	linux-nfs@vger.kernel.org, dev@openvswitch.org,
	rds-devel@oss.oracle.com, lw@cn.fujitsu.com,
	Sasha Levin <levinsasha928@gmail.com>
Subject: [PATCH 05/16] mm/huge_memory: use new hashtable implementation
Date: Tue, 14 Aug 2012 18:24:39 +0200
Message-Id: <1344961490-4068-6-git-send-email-levinsasha928@gmail.com>
In-Reply-To: <1344961490-4068-1-git-send-email-levinsasha928@gmail.com>
References: <1344961490-4068-1-git-send-email-levinsasha928@gmail.com>

Switch hugemem to use the new hashtable implementation. This reduces the
amount of generic, unrelated code in hugemem.

This also removes the dynamic allocation of the hash table: the size of the
table is constant, so there is no point in paying the price of an extra
dereference when accessing it.
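
For reference, a minimal sketch (not part of the patch) of how the
<linux/hashtable.h> API used below is driven. The item/insert_item/
lookup_item/remove_item names are made up for illustration; only the
hash_add()/hash_for_each_possible()/hash_del() calls mirror the ones in
the diff, including the extra hlist_node iterator argument that
hash_for_each_possible() takes in this version of the API.

#include <linux/hashtable.h>

#define ITEM_HASH_BITS 10				/* 2^10 buckets, like MM_SLOTS_HASH_BITS */
static DEFINE_HASHTABLE(item_hash, ITEM_HASH_BITS);	/* fixed size, no kzalloc() needed;
							 * the patch also calls hash_init()
							 * once at init time */

struct item {
	unsigned long key;
	struct hlist_node hash;				/* linkage into one hash bucket */
};

/* Add an item; the bucket is chosen by hashing 'key'. */
static void insert_item(struct item *it, unsigned long key)
{
	it->key = key;
	hash_add(item_hash, &it->hash, key);
}

/* Walk only the bucket that 'key' hashes to and compare keys. */
static struct item *lookup_item(unsigned long key)
{
	struct item *it;
	struct hlist_node *node;

	hash_for_each_possible(item_hash, it, node, hash, key)
		if (it->key == key)
			return it;

	return NULL;
}

/* Unlink an item from its bucket. */
static void remove_item(struct item *it)
{
	hash_del(&it->hash);
}
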
Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
 mm/huge_memory.c |   57 ++++++++++++++---------------------------------
 1 files changed, 15 insertions(+), 42 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 57c4b93..a5d0a8a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -17,6 +17,7 @@
 #include 
 #include 
 #include 
+#include <linux/hashtable.h>
 #include 
 #include 
 #include "internal.h"
@@ -57,12 +58,12 @@ static DECLARE_WAIT_QUEUE_HEAD(khugepaged_wait);
 static unsigned int khugepaged_max_ptes_none __read_mostly = HPAGE_PMD_NR-1;
 
 static int khugepaged(void *none);
-static int mm_slots_hash_init(void);
 static int khugepaged_slab_init(void);
 static void khugepaged_slab_free(void);
 
-#define MM_SLOTS_HASH_HEADS 1024
-static struct hlist_head *mm_slots_hash __read_mostly;
+#define MM_SLOTS_HASH_BITS 10
+static DEFINE_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);
+
 static struct kmem_cache *mm_slot_cache __read_mostly;
 
 /**
@@ -140,7 +141,7 @@ static int start_khugepaged(void)
 	int err = 0;
 	if (khugepaged_enabled()) {
 		int wakeup;
-		if (unlikely(!mm_slot_cache || !mm_slots_hash)) {
+		if (unlikely(!mm_slot_cache)) {
 			err = -ENOMEM;
 			goto out;
 		}
@@ -554,12 +555,6 @@ static int __init hugepage_init(void)
 	if (err)
 		goto out;
 
-	err = mm_slots_hash_init();
-	if (err) {
-		khugepaged_slab_free();
-		goto out;
-	}
-
 	/*
 	 * By default disable transparent hugepages on smaller systems,
 	 * where the extra memory used could hurt more than TLB overhead
@@ -1541,6 +1536,8 @@ static int __init khugepaged_slab_init(void)
 	if (!mm_slot_cache)
 		return -ENOMEM;
 
+	hash_init(mm_slots_hash);
+
 	return 0;
 }
 
@@ -1562,47 +1559,23 @@ static inline void free_mm_slot(struct mm_slot *mm_slot)
 	kmem_cache_free(mm_slot_cache, mm_slot);
 }
 
-static int __init mm_slots_hash_init(void)
-{
-	mm_slots_hash = kzalloc(MM_SLOTS_HASH_HEADS * sizeof(struct hlist_head),
-				GFP_KERNEL);
-	if (!mm_slots_hash)
-		return -ENOMEM;
-	return 0;
-}
-
-#if 0
-static void __init mm_slots_hash_free(void)
-{
-	kfree(mm_slots_hash);
-	mm_slots_hash = NULL;
-}
-#endif
-
 static struct mm_slot *get_mm_slot(struct mm_struct *mm)
 {
-	struct mm_slot *mm_slot;
-	struct hlist_head *bucket;
+	struct mm_slot *slot;
 	struct hlist_node *node;
 
-	bucket = &mm_slots_hash[((unsigned long)mm / sizeof(struct mm_struct))
-				% MM_SLOTS_HASH_HEADS];
-	hlist_for_each_entry(mm_slot, node, bucket, hash) {
-		if (mm == mm_slot->mm)
-			return mm_slot;
-	}
+	hash_for_each_possible(mm_slots_hash, slot, node, hash, (unsigned long) mm)
+		if (slot->mm == mm)
+			return slot;
+
 	return NULL;
 }
 
 static void insert_to_mm_slots_hash(struct mm_struct *mm,
 				    struct mm_slot *mm_slot)
 {
-	struct hlist_head *bucket;
-
-	bucket = &mm_slots_hash[((unsigned long)mm / sizeof(struct mm_struct))
-				% MM_SLOTS_HASH_HEADS];
 	mm_slot->mm = mm;
-	hlist_add_head(&mm_slot->hash, bucket);
+	hash_add(mm_slots_hash, &mm_slot->hash, (long)mm);
 }
 
 static inline int khugepaged_test_exit(struct mm_struct *mm)
@@ -1675,7 +1648,7 @@ void __khugepaged_exit(struct mm_struct *mm)
 	spin_lock(&khugepaged_mm_lock);
 	mm_slot = get_mm_slot(mm);
 	if (mm_slot && khugepaged_scan.mm_slot != mm_slot) {
-		hlist_del(&mm_slot->hash);
+		hash_del(&mm_slot->hash);
 		list_del(&mm_slot->mm_node);
 		free = 1;
 	}
@@ -2089,7 +2062,7 @@ static void collect_mm_slot(struct mm_slot *mm_slot)
 
 	if (khugepaged_test_exit(mm)) {
 		/* free mm_slot */
-		hlist_del(&mm_slot->hash);
+		hash_del(&mm_slot->hash);
 		list_del(&mm_slot->mm_node);
 
 		/*