From patchwork Mon May 4 03:34:07 2020
X-Patchwork-Submitter: Kevin Hao
X-Patchwork-Id: 11524941
From: Kevin Hao <haokexin@gmail.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Sebastian Andrzej Siewior, Thomas Gleixner, Steven Rostedt
Subject: [PATCH v5.6-rt] mm: slub: Always flush the delayed empty slubs in flush_all()
Date: Mon, 4 May 2020 11:34:07 +0800
Message-Id: <20200504033407.2385-1-haokexin@gmail.com>
X-Mailer: git-send-email 2.26.0

After commit f0b231101c94 ("mm/SLUB: delay giving back empty slubs to
IRQ enabled regions"), when free_slab() is invoked with IRQs disabled,
the empty slubs are moved to a per-CPU list and freed later, once IRQs
have been re-enabled.
But the current code checks whether a CPU actually has a cpu slab before
flushing that CPU's delayed empty slubs. This can skip the flush and
leave behind a reference to an already released kmem_cache, in a
scenario like the one below:

  cpu 0                             cpu 1
  kmem_cache_destroy()
    flush_all()
      --->IPI                       flush_cpu_slab()
                                      flush_slab()
                                        deactivate_slab()
                                          discard_slab()
                                            free_slab()
                                      c->page = NULL;
    for_each_online_cpu(cpu)
      if (!has_cpu_slab(1, s))
        continue
    this skips flushing the delayed
    empty slub released by cpu1
    kmem_cache_free(kmem_cache, s)
                                    kmalloc()
                                      __slab_alloc()
                                        free_delayed()
                                          __free_slab()
                                            reference to released kmem_cache

Fixes: f0b231101c94 ("mm/SLUB: delay giving back empty slubs to IRQ enabled regions")
Signed-off-by: Kevin Hao <haokexin@gmail.com>
Acked-by: David Rientjes
---
 mm/slub.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 15c194ff16e6..83b29bf71fd0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2402,9 +2402,6 @@ static void flush_all(struct kmem_cache *s)
 	for_each_online_cpu(cpu) {
 		struct slub_free_list *f;
 
-		if (!has_cpu_slab(cpu, s))
-			continue;
-
 		f = &per_cpu(slub_free_list, cpu);
 		raw_spin_lock_irq(&f->lock);
 		list_splice_init(&f->list, &tofree);
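For readers outside the kernel tree, the shape of the bug can be sketched
with a toy model. This is plain Python, not kernel code: `flush_all`,
`deferred`, and `has_cpu_slab` here are illustrative stand-ins for the
per-CPU `slub_free_list` and the removed `has_cpu_slab()` check, under
the assumption that draining must be unconditional during destroy.

```python
# Toy model of the flush_all() race fixed by this patch: each CPU keeps
# a list of delayed empty slabs, and cache destruction must drain every
# list, or a stale entry can outlive the cache it belongs to.

def flush_all(deferred, has_cpu_slab, check_cpu_slab):
    """Drain the per-CPU delayed-free lists.

    With check_cpu_slab=True this mirrors the buggy behaviour: a CPU
    whose cpu slab is already NULL is skipped, so its delayed empty
    slabs stay queued and are freed later, after the cache is gone.
    """
    for cpu, slabs in deferred.items():
        if check_cpu_slab and not has_cpu_slab[cpu]:
            continue  # bug: this CPU's delayed empty slabs stay queued
        slabs.clear()  # drained while the cache is still valid

# cpu 1 freed its last slab with IRQs off: the empty slab sits on its
# delayed list and its cpu slab pointer (c->page) is already NULL.
deferred = {0: [], 1: ["delayed empty slab"]}
has_cpu_slab = {0: False, 1: False}

flush_all(deferred, has_cpu_slab, check_cpu_slab=True)
buggy_leftover = any(deferred.values())   # True: freed later against a dead cache

flush_all(deferred, has_cpu_slab, check_cpu_slab=False)
fixed_leftover = any(deferred.values())   # False: everything drained before destroy
```

The fix in the diff is exactly the `check_cpu_slab=False` behaviour:
drop the `has_cpu_slab()` early-out so `flush_all()` always splices and
frees every CPU's delayed list while the kmem_cache is still alive.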