From patchwork Sun Sep 30 10:28:21 2018
X-Patchwork-Submitter: zhong jiang
X-Patchwork-Id: 10621417
From: zhong jiang
Subject: [STABLE PATCH] slub: make ->cpu_partial unsigned int
Date: Sun, 30 Sep 2018 18:28:21 +0800
Message-ID: <1538303301-61784-1-git-send-email-zhongjiang@huawei.com>
X-Mailer: git-send-email 1.7.12.4

From: Alexey Dobriyan

[ Upstream commit e5d9998f3e09359b372a037a6ac55ba235d95d57 ]

	/*
	 * cpu_partial determined the maximum number of objects
	 * kept in the per cpu partial lists of a processor.
	 */

Can't be negative.

I hit a real issue that resulted in a large memory leak: slabs are
freed in interrupt context, so put_cpu_partial() can be interrupted
more than once. Because lru and pobjects share a union in struct page,
another core manipulating the page->lru list (for example,
remove_partial() in the slab-freeing path) can leave pobjects holding
a negative value (0xdead0000). As a result, a large number of slabs
are added to the per-cpu partial list.

I posted the issue to the community earlier; the detailed description
is here:

https://www.spinics.net/lists/kernel/msg2870979.html

After applying this patch the issue is fixed, so it is an effective
bugfix and should go into stable.

Link: http://lkml.kernel.org/r/20180305200730.15812-15-adobriyan@gmail.com
Signed-off-by: Alexey Dobriyan
Acked-by: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: # 4.4.x
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: zhong jiang
---
 include/linux/slub_def.h | 3 ++-
 mm/slub.c                | 6 +++---
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 3388511..9b681f2 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -67,7 +67,8 @@ struct kmem_cache {
 	int size;		/* The size of an object including meta data */
 	int object_size;	/* The size of an object without meta data */
 	int offset;		/* Free pointer offset. */
-	int cpu_partial;	/* Number of per cpu partial objects to keep around */
+	/* Number of per cpu partial objects to keep around */
+	unsigned int cpu_partial;
 	struct kmem_cache_order_objects oo;
 
 	/* Allocation and freeing of slabs */
diff --git a/mm/slub.c b/mm/slub.c
index 2284c43..c33b0e1 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1661,7 +1661,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 {
 	struct page *page, *page2;
 	void *object = NULL;
-	int available = 0;
+	unsigned int available = 0;
 	int objects;
 
 	/*
@@ -4674,10 +4674,10 @@ static ssize_t cpu_partial_show(struct kmem_cache *s, char *buf)
 static ssize_t cpu_partial_store(struct kmem_cache *s, const char *buf,
 				 size_t length)
 {
-	unsigned long objects;
+	unsigned int objects;
 	int err;
 
-	err = kstrtoul(buf, 10, &objects);
+	err = kstrtouint(buf, 10, &objects);
 	if (err)
 		return err;
 	if (objects && !kmem_cache_has_cpu_partial(s))