From patchwork Fri Nov 8 19:39:57 2019
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 11235435
Date: Fri, 8 Nov 2019 12:39:57 -0700
In-Reply-To: <20190914000743.182739-1-yuzhao@google.com>
Message-Id: <20191108193958.205102-1-yuzhao@google.com>
References: <20190914000743.182739-1-yuzhao@google.com>
Subject: [PATCH v4 1/2] mm: clean up validate_slab()
From: Yu Zhao
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, "Kirill A. Shutemov", Tetsuo Handa
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao,
    "Kirill A. Shutemov"

The function doesn't need to return any value, and the check can be
done in one pass.

There is a behavior change: before the patch, we stop at the first
invalid free object; after the patch, we stop at the first invalid
object, free or in use. This shouldn't matter because the original
behavior isn't intentional anyway.

Acked-by: Kirill A. Shutemov
Signed-off-by: Yu Zhao
---
 mm/slub.c | 21 ++++++++-------------
 1 file changed, 8 insertions(+), 13 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index b25c807a111f..6930c3febad7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4404,31 +4404,26 @@ static int count_total(struct page *page)
 #endif

 #ifdef CONFIG_SLUB_DEBUG
-static int validate_slab(struct kmem_cache *s, struct page *page,
+static void validate_slab(struct kmem_cache *s, struct page *page,
                                                unsigned long *map)
 {
        void *p;
        void *addr = page_address(page);

-       if (!check_slab(s, page) ||
-                       !on_freelist(s, page, NULL))
-               return 0;
+       if (!check_slab(s, page) || !on_freelist(s, page, NULL))
+               return;

        /* Now we know that a valid freelist exists */
        bitmap_zero(map, page->objects);

        get_map(s, page, map);
        for_each_object(p, s, addr, page->objects) {
-               if (test_bit(slab_index(p, s, addr), map))
-                       if (!check_object(s, page, p, SLUB_RED_INACTIVE))
-                               return 0;
-       }
+               u8 val = test_bit(slab_index(p, s, addr), map) ?
+                       SLUB_RED_INACTIVE : SLUB_RED_ACTIVE;

-       for_each_object(p, s, addr, page->objects)
-               if (!test_bit(slab_index(p, s, addr), map))
-                       if (!check_object(s, page, p, SLUB_RED_ACTIVE))
-                               return 0;
-       return 1;
+               if (!check_object(s, page, p, val))
+                       break;
+       }
 }

 static void validate_slab_slab(struct kmem_cache *s, struct page *page,
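The one-pass check above can be illustrated outside the kernel. Below is a
minimal user-space C sketch of the same pattern, assuming a toy object array
and free map; every name in it (validate, the check_object stand-in,
OBJ_FREE, and so on) is hypothetical and only mimics the SLUB internals, it
is not kernel code:

#include <stdbool.h>
#include <stdio.h>

#define NR_OBJECTS 8

enum obj_state { OBJ_FREE, OBJ_IN_USE };

/* Toy stand-in for check_object(): the object must match the expected state. */
static bool check_object(const enum obj_state *objs, int i,
                         enum obj_state expected)
{
        return objs[i] == expected;
}

/*
 * One pass over all objects: the free map selects which check applies to
 * each object, instead of one loop for free objects and a second loop for
 * objects in use.
 */
static void validate(const enum obj_state *objs, const bool *free_map)
{
        for (int i = 0; i < NR_OBJECTS; i++) {
                enum obj_state expected = free_map[i] ? OBJ_FREE : OBJ_IN_USE;

                if (!check_object(objs, i, expected)) {
                        printf("object %d is invalid\n", i);
                        break;  /* stop at the first invalid object, free or in use */
                }
        }
}

int main(void)
{
        enum obj_state objs[NR_OBJECTS] = {
                OBJ_FREE, OBJ_IN_USE, OBJ_IN_USE, OBJ_FREE,
                OBJ_IN_USE, OBJ_FREE, OBJ_IN_USE, OBJ_IN_USE,
        };
        /* The last entry deliberately disagrees with objs[7]. */
        bool free_map[NR_OBJECTS] = {
                true, false, false, true, false, true, false, true,
        };

        validate(objs, free_map);       /* prints: object 7 is invalid */
        return 0;
}

This single loop is also why the behavior change noted in the commit message
exists: the walk stops at the first invalid object of either kind.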
From patchwork Fri Nov 8 19:39:58 2019
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 11235437
Date: Fri, 8 Nov 2019 12:39:58 -0700
In-Reply-To: <20191108193958.205102-1-yuzhao@google.com>
Message-Id: <20191108193958.205102-2-yuzhao@google.com>
References: <20190914000743.182739-1-yuzhao@google.com>
 <20191108193958.205102-1-yuzhao@google.com>
Subject: [PATCH v4 2/2] mm: avoid slub allocation while holding list_lock
From: Yu Zhao
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, "Kirill A. Shutemov", Tetsuo Handa
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao,
    "Kirill A. Shutemov"

If we are already under list_lock, don't call kmalloc(). Otherwise we
will run into a deadlock because kmalloc() also tries to grab the same
lock.

Fix the problem by using a static bitmap instead.

        WARNING: possible recursive locking detected
        --------------------------------------------
        mount-encrypted/4921 is trying to acquire lock:
        (&(&n->list_lock)->rlock){-.-.}, at: ___slab_alloc+0x104/0x437

        but task is already holding lock:
        (&(&n->list_lock)->rlock){-.-.}, at: __kmem_cache_shutdown+0x81/0x3cb

        other info that might help us debug this:
         Possible unsafe locking scenario:

               CPU0
               ----
          lock(&(&n->list_lock)->rlock);
          lock(&(&n->list_lock)->rlock);

         *** DEADLOCK ***

Acked-by: Kirill A. Shutemov
Signed-off-by: Yu Zhao
Signed-off-by: Christoph Lameter
---
 mm/slub.c | 88 +++++++++++++++++++++++++++++--------------------------
 1 file changed, 47 insertions(+), 41 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 6930c3febad7..7a4ec3c4b4d9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -441,19 +441,38 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
 }

 #ifdef CONFIG_SLUB_DEBUG
+static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
+static DEFINE_SPINLOCK(object_map_lock);
+
 /*
  * Determine a map of object in use on a page.
  *
  * Node listlock must be held to guarantee that the page does
  * not vanish from under us.
  */
-static void get_map(struct kmem_cache *s, struct page *page, unsigned long *map)
+static unsigned long *get_map(struct kmem_cache *s, struct page *page)
 {
        void *p;
        void *addr = page_address(page);

+       VM_BUG_ON(!irqs_disabled());
+
+       spin_lock(&object_map_lock);
+
+       bitmap_zero(object_map, page->objects);
+
        for (p = page->freelist; p; p = get_freepointer(s, p))
-               set_bit(slab_index(p, s, addr), map);
+               set_bit(slab_index(p, s, addr), object_map);
+
+       return object_map;
+}
+
+static void put_map(unsigned long *map)
+{
+       VM_BUG_ON(map != object_map);
+       lockdep_assert_held(&object_map_lock);
+
+       spin_unlock(&object_map_lock);
 }

 static inline unsigned int size_from_object(struct kmem_cache *s)
@@ -3695,13 +3714,12 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page,
 #ifdef CONFIG_SLUB_DEBUG
        void *addr = page_address(page);
        void *p;
-       unsigned long *map = bitmap_zalloc(page->objects, GFP_ATOMIC);
-       if (!map)
-               return;
+       unsigned long *map;
+
        slab_err(s, page, text, s->name);
        slab_lock(page);

-       get_map(s, page, map);
+       map = get_map(s, page);
        for_each_object(p, s, addr, page->objects) {

                if (!test_bit(slab_index(p, s, addr), map)) {
@@ -3709,8 +3727,9 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page,
                        print_tracking(s, p);
                }
        }
+       put_map(map);
+
        slab_unlock(page);
-       bitmap_free(map);
 #endif
 }

@@ -4404,19 +4423,19 @@ static int count_total(struct page *page)
 #endif

 #ifdef CONFIG_SLUB_DEBUG
-static void validate_slab(struct kmem_cache *s, struct page *page,
-                                               unsigned long *map)
+static void validate_slab(struct kmem_cache *s, struct page *page)
 {
        void *p;
        void *addr = page_address(page);
+       unsigned long *map;
+
+       slab_lock(page);

        if (!check_slab(s, page) || !on_freelist(s, page, NULL))
-               return;
+               goto unlock;

        /* Now we know that a valid freelist exists */
-       bitmap_zero(map, page->objects);
-
-       get_map(s, page, map);
+       map = get_map(s, page);
        for_each_object(p, s, addr, page->objects) {
                u8 val = test_bit(slab_index(p, s, addr), map) ?
                        SLUB_RED_INACTIVE : SLUB_RED_ACTIVE;

@@ -4424,18 +4443,13 @@ static void validate_slab(struct kmem_cache *s, struct page *page,
                if (!check_object(s, page, p, val))
                        break;
        }
-}
-
-static void validate_slab_slab(struct kmem_cache *s, struct page *page,
-                                               unsigned long *map)
-{
-       slab_lock(page);
-       validate_slab(s, page, map);
+       put_map(map);
+unlock:
        slab_unlock(page);
 }

 static int validate_slab_node(struct kmem_cache *s,
-               struct kmem_cache_node *n, unsigned long *map)
+               struct kmem_cache_node *n)
 {
        unsigned long count = 0;
        struct page *page;
@@ -4444,7 +4458,7 @@ static int validate_slab_node(struct kmem_cache *s,
        spin_lock_irqsave(&n->list_lock, flags);

        list_for_each_entry(page, &n->partial, slab_list) {
-               validate_slab_slab(s, page, map);
+               validate_slab(s, page);
                count++;
        }
        if (count != n->nr_partial)
@@ -4455,7 +4469,7 @@ static int validate_slab_node(struct kmem_cache *s,
                goto out;

        list_for_each_entry(page, &n->full, slab_list) {
-               validate_slab_slab(s, page, map);
+               validate_slab(s, page);
                count++;
        }
        if (count != atomic_long_read(&n->nr_slabs))
@@ -4472,15 +4486,11 @@ static long validate_slab_cache(struct kmem_cache *s)
        int node;
        unsigned long count = 0;
        struct kmem_cache_node *n;
-       unsigned long *map = bitmap_alloc(oo_objects(s->max), GFP_KERNEL);
-
-       if (!map)
-               return -ENOMEM;

        flush_all(s);
        for_each_kmem_cache_node(s, node, n)
-               count += validate_slab_node(s, n, map);
-       bitmap_free(map);
+               count += validate_slab_node(s, n);
+
        return count;
 }
 /*
@@ -4610,18 +4620,17 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
 }

 static void process_slab(struct loc_track *t, struct kmem_cache *s,
-               struct page *page, enum track_item alloc,
-               unsigned long *map)
+               struct page *page, enum track_item alloc)
 {
        void *addr = page_address(page);
        void *p;
+       unsigned long *map;

-       bitmap_zero(map, page->objects);
-       get_map(s, page, map);
-
+       map = get_map(s, page);
        for_each_object(p, s, addr, page->objects)
                if (!test_bit(slab_index(p, s, addr), map))
                        add_location(t, s, get_track(s, p, alloc));
+       put_map(map);
 }

 static int list_locations(struct kmem_cache *s, char *buf,
@@ -4632,11 +4641,9 @@ static int list_locations(struct kmem_cache *s, char *buf,
        struct loc_track t = { 0, 0, NULL };
        int node;
        struct kmem_cache_node *n;
-       unsigned long *map = bitmap_alloc(oo_objects(s->max), GFP_KERNEL);

-       if (!map || !alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
-                                    GFP_KERNEL)) {
-               bitmap_free(map);
+       if (!alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
+                            GFP_KERNEL)) {
                return sprintf(buf, "Out of memory\n");
        }
        /* Push back cpu slabs */
@@ -4651,9 +4658,9 @@ static int list_locations(struct kmem_cache *s, char *buf,
                spin_lock_irqsave(&n->list_lock, flags);
                list_for_each_entry(page, &n->partial, slab_list)
-                       process_slab(&t, s, page, alloc, map);
+                       process_slab(&t, s, page, alloc);
                list_for_each_entry(page, &n->full, slab_list)
-                       process_slab(&t, s, page, alloc, map);
+                       process_slab(&t, s, page, alloc);
                spin_unlock_irqrestore(&n->list_lock, flags);
        }

@@ -4702,7 +4709,6 @@ static int list_locations(struct kmem_cache *s, char *buf,
        }

        free_loc_track(&t);
-       bitmap_free(map);
        if (!t.count)
                len += sprintf(buf, "No data\n");

        return len;
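The fix follows a common pattern: when a scratch buffer is needed in a
context that must not allocate (here, under list_lock, which the allocator
itself may take), use one statically allocated buffer serialized by its own
dedicated lock, bracketed by get/put helpers. Below is a minimal user-space
C sketch of that pattern using a pthread mutex rather than a kernel
spinlock; the names mirror the patch (object_map, object_map_lock, get_map,
put_map) but the code is only an illustration, not kernel code:

#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define MAX_OBJS        512
#define BITS_PER_LONG   (8 * sizeof(unsigned long))
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* One static scratch bitmap, guarded by its own lock; never allocated. */
static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS)];
static pthread_mutex_t object_map_lock = PTHREAD_MUTEX_INITIALIZER;

/* Acquire the scratch bitmap, zeroed; no allocation can happen here. */
static unsigned long *get_map(void)
{
        pthread_mutex_lock(&object_map_lock);
        memset(object_map, 0, sizeof(object_map));
        return object_map;
}

/* Hand the scratch bitmap back; callers must return the same buffer. */
static void put_map(unsigned long *map)
{
        if (map == object_map)
                pthread_mutex_unlock(&object_map_lock);
}

int main(void)
{
        unsigned long *map = get_map();

        map[0] |= 1UL << 3;     /* e.g. mark object 3 as free */
        printf("bit 3 set: %lu\n", (map[0] >> 3) & 1UL);
        put_map(map);
        return 0;
}

The trade-off is that only one user can hold the map at a time, which the
patch can tolerate because all callers are rare CONFIG_SLUB_DEBUG paths.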