From patchwork Thu Sep 12 00:29:27 2019
X-Patchwork-Submitter: Yu Zhao <yuzhao@google.com>
X-Patchwork-Id: 11141973
Date: Wed, 11 Sep 2019 18:29:27 -0600
In-Reply-To: <20190911071331.770ecddff6a085330bf2b5f2@linux-foundation.org>
Message-Id: <20190912002929.78873-1-yuzhao@google.com>
Subject: [PATCH 1/3] mm: correct mask size for slub page->objects
From: Yu Zhao <yuzhao@google.com>
To: Christoph Lameter <cl@linux.com>, Pekka Enberg <penberg@kernel.org>,
    David Rientjes <rientjes@google.com>, Joonsoo Kim <iamjoonsoo.kim@lge.com>,
    Andrew Morton <akpm@linux-foundation.org>,
    "Kirill A. Shutemov" <kirill@shutemov.name>,
    Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao <yuzhao@google.com>

The mask for the number of slub objects per page shouldn't be wider
than what page->objects can hold. Hitting the problem requires more
than 2^15 objects on a single slab page, which is unlikely in
practice. It would still be nice to have the mask fixed, but it is
not worth cc'ing stable.

Fixes: 50d5c41cd151 ("slub: Do not use frozen page flag but a bit in the page counters")
Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/slub.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 8834563cdb4b..62053ceb4464 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -187,7 +187,7 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
  */
 #define DEBUG_METADATA_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)
 
-#define OO_SHIFT	16
+#define OO_SHIFT	15
 #define OO_MASK		((1 << OO_SHIFT) - 1)
 #define MAX_OBJS_PER_PAGE	32767 /* since page.objects is u15 */
 
@@ -343,6 +343,8 @@ static inline unsigned int oo_order(struct kmem_cache_order_objects x)
 
 static inline unsigned int oo_objects(struct kmem_cache_order_objects x)
 {
+	BUILD_BUG_ON(OO_MASK > MAX_OBJS_PER_PAGE);
+
	return x.x & OO_MASK;
 }
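[For readers outside mm/slub.c: kmem_cache_order_objects packs the page
order and the object count into a single word, with the order above
OO_SHIFT and the count in the low bits. A minimal user-space sketch of
that packing, simplified from the kernel (the real oo_make() derives the
count from order and object size), shows why the shift must match the
15-bit page->objects field; the assert plays the role of the
BUILD_BUG_ON added by this patch.]

#include <assert.h>
#include <stdio.h>

#define OO_SHIFT		15
#define OO_MASK			((1 << OO_SHIFT) - 1)
#define MAX_OBJS_PER_PAGE	32767	/* page->objects is a 15-bit field */

struct kmem_cache_order_objects { unsigned int x; };

/* Simplified: takes the object count directly instead of (order, size). */
static struct kmem_cache_order_objects oo_make(unsigned int order,
					       unsigned int objects)
{
	struct kmem_cache_order_objects x = { (order << OO_SHIFT) + objects };
	return x;
}

static unsigned int oo_order(struct kmem_cache_order_objects x)
{
	return x.x >> OO_SHIFT;
}

static unsigned int oo_objects(struct kmem_cache_order_objects x)
{
	return x.x & OO_MASK;
}

int main(void)
{
	struct kmem_cache_order_objects oo = oo_make(3, 512);

	/* With OO_SHIFT == 16, OO_MASK would be 65535 > 32767 and this fails:
	 * the mask would admit object counts that page->objects cannot hold. */
	assert(OO_MASK <= MAX_OBJS_PER_PAGE);
	printf("order=%u objects=%u\n", oo_order(oo), oo_objects(oo));
	return 0;
}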
From patchwork Thu Sep 12 00:29:28 2019
X-Patchwork-Submitter: Yu Zhao <yuzhao@google.com>
X-Patchwork-Id: 11141975
Date: Wed, 11 Sep 2019 18:29:28 -0600
In-Reply-To: <20190912002929.78873-1-yuzhao@google.com>
Message-Id: <20190912002929.78873-2-yuzhao@google.com>
Subject: [PATCH 2/3] mm: avoid slub allocation while holding list_lock
From: Yu Zhao <yuzhao@google.com>
To: Christoph Lameter <cl@linux.com>, Pekka Enberg <penberg@kernel.org>,
    David Rientjes <rientjes@google.com>, Joonsoo Kim <iamjoonsoo.kim@lge.com>,
    Andrew Morton <akpm@linux-foundation.org>,
    "Kirill A. Shutemov" <kirill@shutemov.name>,
    Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao <yuzhao@google.com>

If we are already holding list_lock, don't call kmalloc(). Otherwise
we will run into a deadlock because kmalloc() also tries to grab the
same lock. Instead, statically allocate the bitmap in struct
kmem_cache_node. Given that page->objects currently has 15 bits, this
bloats the per-node struct by 4K. So we waste some memory, but only
when slub debug is compiled in.

  WARNING: possible recursive locking detected
  --------------------------------------------
  mount-encrypted/4921 is trying to acquire lock:
  (&(&n->list_lock)->rlock){-.-.}, at: ___slab_alloc+0x104/0x437

  but task is already holding lock:
  (&(&n->list_lock)->rlock){-.-.}, at: __kmem_cache_shutdown+0x81/0x3cb

  other info that might help us debug this:
   Possible unsafe locking scenario:

         CPU0
         ----
    lock(&(&n->list_lock)->rlock);
    lock(&(&n->list_lock)->rlock);

   *** DEADLOCK ***

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/slub_def.h |  4 ++++
 mm/slab.h                |  1 +
 mm/slub.c                | 44 ++++++++++++++--------------------------
 3 files changed, 20 insertions(+), 29 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index d2153789bd9f..719d43574360 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -9,6 +9,10 @@
  */
 #include <linux/kobject.h>
 
+#define OO_SHIFT	15
+#define OO_MASK		((1 << OO_SHIFT) - 1)
+#define MAX_OBJS_PER_PAGE	32767 /* since page.objects is u15 */
+
 enum stat_item {
 	ALLOC_FASTPATH,		/* Allocation from cpu slab */
 	ALLOC_SLOWPATH,		/* Allocation by getting a new cpu slab */
diff --git a/mm/slab.h b/mm/slab.h
index 9057b8056b07..2d8639835db1 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -556,6 +556,7 @@ struct kmem_cache_node {
 	atomic_long_t nr_slabs;
 	atomic_long_t total_objects;
 	struct list_head full;
+	unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
 #endif
 #endif
diff --git a/mm/slub.c b/mm/slub.c
index 62053ceb4464..f28072c9f2ce 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -187,10 +187,6 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
  */
 #define DEBUG_METADATA_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)
 
-#define OO_SHIFT	15
-#define OO_MASK		((1 << OO_SHIFT) - 1)
-#define MAX_OBJS_PER_PAGE	32767 /* since page.objects is u15 */
-
 /* Internal SLUB flags */
 /* Poison object */
 #define __OBJECT_POISON		((slab_flags_t __force)0x80000000U)
@@ -454,6 +450,8 @@ static void get_map(struct kmem_cache *s, struct page *page, unsigned long *map)
 	void *p;
 	void *addr = page_address(page);
 
+	bitmap_zero(map, page->objects);
+
 	for (p = page->freelist; p; p = get_freepointer(s, p))
 		set_bit(slab_index(p, s, addr), map);
 }
@@ -3680,14 +3678,12 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
 }
 
 static void list_slab_objects(struct kmem_cache *s, struct page *page,
-			      const char *text)
+			      unsigned long *map, const char *text)
 {
 #ifdef CONFIG_SLUB_DEBUG
 	void *addr = page_address(page);
 	void *p;
-	unsigned long *map = bitmap_zalloc(page->objects, GFP_ATOMIC);
-	if (!map)
-		return;
+
 	slab_err(s, page, text, s->name);
 	slab_lock(page);
 
@@ -3699,8 +3695,8 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page,
 			print_tracking(s, p);
 		}
 	}
+
 	slab_unlock(page);
-	bitmap_free(map);
 #endif
 }
 
@@ -3721,7 +3717,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 			remove_partial(n, page);
 			list_add(&page->slab_list, &discard);
 		} else {
-			list_slab_objects(s, page,
+			list_slab_objects(s, page, n->object_map,
 				"Objects remaining in %s on __kmem_cache_shutdown()");
 		}
 	}
@@ -4397,7 +4393,6 @@ static int validate_slab(struct kmem_cache *s, struct page *page,
 		return 0;
 
 	/* Now we know that a valid freelist exists */
-	bitmap_zero(map, page->objects);
 
 	get_map(s, page, map);
 	for_each_object(p, s, addr, page->objects) {
@@ -4422,7 +4417,7 @@ static void validate_slab_slab(struct kmem_cache *s, struct page *page,
 }
 
 static int validate_slab_node(struct kmem_cache *s,
-		struct kmem_cache_node *n, unsigned long *map)
+		struct kmem_cache_node *n)
 {
 	unsigned long count = 0;
 	struct page *page;
@@ -4431,7 +4426,7 @@ static int validate_slab_node(struct kmem_cache *s,
 
 	spin_lock_irqsave(&n->list_lock, flags);
 	list_for_each_entry(page, &n->partial, slab_list) {
-		validate_slab_slab(s, page, map);
+		validate_slab_slab(s, page, n->object_map);
 		count++;
 	}
 	if (count != n->nr_partial)
@@ -4442,7 +4437,7 @@ static int validate_slab_node(struct kmem_cache *s,
 		goto out;
 
 	list_for_each_entry(page, &n->full, slab_list) {
-		validate_slab_slab(s, page, map);
+		validate_slab_slab(s, page, n->object_map);
 		count++;
 	}
 	if (count != atomic_long_read(&n->nr_slabs))
@@ -4459,15 +4454,11 @@ static long validate_slab_cache(struct kmem_cache *s)
 	int node;
 	unsigned long count = 0;
 	struct kmem_cache_node *n;
-	unsigned long *map = bitmap_alloc(oo_objects(s->max), GFP_KERNEL);
-
-	if (!map)
-		return -ENOMEM;
 
 	flush_all(s);
 	for_each_kmem_cache_node(s, node, n)
-		count += validate_slab_node(s, n, map);
-	bitmap_free(map);
+		count += validate_slab_node(s, n);
+
 	return count;
 }
 
@@ -4603,9 +4594,7 @@ static void process_slab(struct loc_track *t, struct kmem_cache *s,
 	void *addr = page_address(page);
 	void *p;
 
-	bitmap_zero(map, page->objects);
 	get_map(s, page, map);
-
 	for_each_object(p, s, addr, page->objects)
 		if (!test_bit(slab_index(p, s, addr), map))
 			add_location(t, s, get_track(s, p, alloc));
@@ -4619,11 +4608,9 @@ static int list_locations(struct kmem_cache *s, char *buf,
 	struct loc_track t = { 0, 0, NULL };
 	int node;
 	struct kmem_cache_node *n;
-	unsigned long *map = bitmap_alloc(oo_objects(s->max), GFP_KERNEL);
 
-	if (!map || !alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
-				     GFP_KERNEL)) {
-		bitmap_free(map);
+	if (!alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
+			     GFP_KERNEL)) {
 		return sprintf(buf, "Out of memory\n");
 	}
 	/* Push back cpu slabs */
@@ -4638,9 +4625,9 @@ static int list_locations(struct kmem_cache *s, char *buf,
 
 		spin_lock_irqsave(&n->list_lock, flags);
 		list_for_each_entry(page, &n->partial, slab_list)
-			process_slab(&t, s, page, alloc, map);
+			process_slab(&t, s, page, alloc, n->object_map);
 		list_for_each_entry(page, &n->full, slab_list)
-			process_slab(&t, s, page, alloc, map);
+			process_slab(&t, s, page, alloc, n->object_map);
 		spin_unlock_irqrestore(&n->list_lock, flags);
 	}
 
@@ -4689,7 +4676,6 @@ static int list_locations(struct kmem_cache *s, char *buf,
 	}
 
 	free_loc_track(&t);
-	bitmap_free(map);
 	if (!t.count)
 		len += sprintf(buf, "No data\n");
 	return len;
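[The 4K figure in the commit message above follows directly from the
bitmap sizing. A standalone check, with BITS_TO_LONGS re-derived here
to mirror the kernel macro, confirms it on a 64-bit build.]

#include <stdio.h>

#define MAX_OBJS_PER_PAGE	32767	/* page->objects is u15 */
#define BITS_PER_LONG		(8 * sizeof(long))
#define BITS_TO_LONGS(n)	(((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

int main(void)
{
	/* The array added to struct kmem_cache_node by this patch. */
	unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];

	/* On 64-bit: 512 longs * 8 bytes = 4096 bytes per node. */
	printf("%zu bytes\n", sizeof(object_map));
	return 0;
}

[The trade-off is one preallocated bitmap per node instead of a
GFP_ATOMIC/GFP_KERNEL allocation per debug operation, which is what
removes the kmalloc()-under-list_lock recursion that lockdep flagged.]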
From patchwork Thu Sep 12 00:29:29 2019
X-Patchwork-Submitter: Yu Zhao <yuzhao@google.com>
X-Patchwork-Id: 11141977
Date: Wed, 11 Sep 2019 18:29:29 -0600
In-Reply-To: <20190912002929.78873-1-yuzhao@google.com>
Message-Id: <20190912002929.78873-3-yuzhao@google.com>
Subject: [PATCH 3/3] mm: lock slub page when listing objects
From: Yu Zhao <yuzhao@google.com>
To: Christoph Lameter <cl@linux.com>, Pekka Enberg <penberg@kernel.org>,
    David Rientjes <rientjes@google.com>, Joonsoo Kim <iamjoonsoo.kim@lge.com>,
    Andrew Morton <akpm@linux-foundation.org>,
    "Kirill A. Shutemov" <kirill@shutemov.name>,
    Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao <yuzhao@google.com>

Though it is not clear what the side effect of such a race would be,
we generally want to prevent the freelist from being changed while
debugging the objects, so take slab_lock() around the walk in
process_slab().

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/slub.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index f28072c9f2ce..2734a092bbff 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4594,10 +4594,14 @@ static void process_slab(struct loc_track *t, struct kmem_cache *s,
 	void *addr = page_address(page);
 	void *p;
 
+	slab_lock(page);
+
 	get_map(s, page, map);
 	for_each_object(p, s, addr, page->objects)
 		if (!test_bit(slab_index(p, s, addr), map))
 			add_location(t, s, get_track(s, p, alloc));
+
+	slab_unlock(page);
 }
 
 static int list_locations(struct kmem_cache *s, char *buf,
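[To illustrate the pattern patch 3 applies: the freelist walk in
get_map() follows ->next-style pointers that a concurrent free can
rewrite mid-walk. A user-space analogue of the fix, with a mutex
standing in for the page's bit spinlock and every name invented for
the sketch:]

#include <pthread.h>
#include <stddef.h>

struct object {
	struct object *next;
};

/* Stand-ins for slab_lock(page) and page->freelist. */
static pthread_mutex_t page_lock = PTHREAD_MUTEX_INITIALIZER;
static struct object *freelist;

/* Count free objects; the lock keeps the list stable for the walk. */
static size_t count_free_objects(void)
{
	struct object *p;
	size_t n = 0;

	pthread_mutex_lock(&page_lock);		/* analogous to slab_lock(page) */
	for (p = freelist; p; p = p->next)
		n++;
	pthread_mutex_unlock(&page_lock);	/* analogous to slab_unlock(page) */

	return n;
}

[Without the lock, a concurrent free pushing onto the list while the
walk dereferences p->next could yield a torn or stale snapshot, which
is the race the commit message hedges about.]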