From patchwork Wed Jun 20 22:41:47 2018
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 10478801
From: Shakeel Butt <shakeelb@google.com>
To: "Jason A. Donenfeld"
Cc: David Rientjes, Christoph Lameter, Pekka Enberg, Joonsoo Kim,
    Andrew Morton, Andrey Ryabinin, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Shakeel Butt, stable@vger.kernel.org
Subject: [PATCH] slub: track number of slabs irrespective of CONFIG_SLUB_DEBUG
Date: Wed, 20 Jun 2018 15:41:47 -0700
Message-Id: <20180620224147.23777-1-shakeelb@google.com>
X-Mailer: git-send-email 2.18.0.rc1.244.gcf134e6275-goog

For !CONFIG_SLUB_DEBUG, SLUB does not maintain the number of slabs
allocated per node for a kmem_cache. Thus, slabs_node() in
__kmem_cache_empty(), __kmem_cache_shrink() and __kmem_cache_destroy()
will always return 0 for such configurations. This is wrong and can
cause issues for all callers of these functions. In fact, in [1] Jason
reported a system crash while using SLUB without CONFIG_SLUB_DEBUG; the
cause was the use of slabs_node() by __kmem_cache_empty().

The right fix is to make slabs_node() work even for !CONFIG_SLUB_DEBUG.
Commit 0f389ec63077 ("slub: No need for per node slab counters if
!SLUB_DEBUG") put the per-node slab counter under CONFIG_SLUB_DEBUG
because it was only read through the sysfs API, and that API is disabled
for !CONFIG_SLUB_DEBUG. However, the in-kernel users of the counter
assumed it would work regardless of CONFIG_SLUB_DEBUG. So, make the
counter work for !CONFIG_SLUB_DEBUG as well.

Note that commit f9e13c0a5a33 ("slab, slub: skip unnecessary
kasan_cache_shutdown()") exposed this issue, but it was present even
before that commit.

[1] http://lkml.kernel.org/r/CAHmME9rtoPwxUSnktxzKso14iuVCWT7BE_-_8PAC=pGw1iJnQg@mail.gmail.com
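To illustrate the failing path (a sketch based on the
__kmem_cache_empty() added by f9e13c0a5a33, not part of this patch):

/*
 * Sketch of __kmem_cache_empty() in mm/slub.c. It walks the per-node
 * structures and trusts slabs_node(); with the !CONFIG_SLUB_DEBUG stub
 * always returning 0, only nr_partial is actually checked.
 */
bool __kmem_cache_empty(struct kmem_cache *s)
{
	int node;
	struct kmem_cache_node *n;

	for_each_kmem_cache_node(s, node, n)
		if (n->nr_partial || slabs_node(s, node))
			return false;
	return true;
}

Since slab pages can sit entirely on per-cpu or full lists,
n->nr_partial can be 0 while the cache still holds live objects, so a
stubbed slabs_node() makes this wrongly report the cache as empty.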
Fixes: f9e13c0a5a33 ("slab, slub: skip unnecessary kasan_cache_shutdown()")
Signed-off-by: Shakeel Butt <shakeelb@google.com>
Suggested-by: David Rientjes
Reported-by: Jason A. Donenfeld
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: Joonsoo Kim
Cc: Andrew Morton
Cc: Andrey Ryabinin
Cc: <linux-mm@kvack.org>
Cc: <linux-kernel@vger.kernel.org>
Cc: <stable@vger.kernel.org>
---
 mm/slab.h |  2 +-
 mm/slub.c | 80 +++++++++++++++++++++++++------------------------------
 2 files changed, 38 insertions(+), 44 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 68bdf498da3b..a6545332cc86 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -473,8 +473,8 @@ struct kmem_cache_node {
 #ifdef CONFIG_SLUB
 	unsigned long nr_partial;
 	struct list_head partial;
-#ifdef CONFIG_SLUB_DEBUG
 	atomic_long_t nr_slabs;
+#ifdef CONFIG_SLUB_DEBUG
 	atomic_long_t total_objects;
 	struct list_head full;
 #endif
diff --git a/mm/slub.c b/mm/slub.c
index a3b8467c14af..c9c190d54687 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1030,42 +1030,6 @@ static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct
 	list_del(&page->lru);
 }
 
-/* Tracking of the number of slabs for debugging purposes */
-static inline unsigned long slabs_node(struct kmem_cache *s, int node)
-{
-	struct kmem_cache_node *n = get_node(s, node);
-
-	return atomic_long_read(&n->nr_slabs);
-}
-
-static inline unsigned long node_nr_slabs(struct kmem_cache_node *n)
-{
-	return atomic_long_read(&n->nr_slabs);
-}
-
-static inline void inc_slabs_node(struct kmem_cache *s, int node, int objects)
-{
-	struct kmem_cache_node *n = get_node(s, node);
-
-	/*
-	 * May be called early in order to allocate a slab for the
-	 * kmem_cache_node structure. Solve the chicken-egg
-	 * dilemma by deferring the increment of the count during
-	 * bootstrap (see early_kmem_cache_node_alloc).
-	 */
-	if (likely(n)) {
-		atomic_long_inc(&n->nr_slabs);
-		atomic_long_add(objects, &n->total_objects);
-	}
-}
-static inline void dec_slabs_node(struct kmem_cache *s, int node, int objects)
-{
-	struct kmem_cache_node *n = get_node(s, node);
-
-	atomic_long_dec(&n->nr_slabs);
-	atomic_long_sub(objects, &n->total_objects);
-}
-
 /* Object debug checks for alloc/free paths */
 static void setup_object_debug(struct kmem_cache *s, struct page *page,
 				void *object)
@@ -1321,16 +1285,46 @@ slab_flags_t kmem_cache_flags(unsigned int object_size,
 
 #define disable_higher_order_debug 0
 
+#endif /* CONFIG_SLUB_DEBUG */
+
 static inline unsigned long slabs_node(struct kmem_cache *s, int node)
-							{ return 0; }
+{
+	struct kmem_cache_node *n = get_node(s, node);
+
+	return atomic_long_read(&n->nr_slabs);
+}
+
 static inline unsigned long node_nr_slabs(struct kmem_cache_node *n)
-							{ return 0; }
-static inline void inc_slabs_node(struct kmem_cache *s, int node,
-							int objects) {}
-static inline void dec_slabs_node(struct kmem_cache *s, int node,
-							int objects) {}
+{
+	return atomic_long_read(&n->nr_slabs);
+}
 
-#endif /* CONFIG_SLUB_DEBUG */
+static inline void inc_slabs_node(struct kmem_cache *s, int node, int objects)
+{
+	struct kmem_cache_node *n = get_node(s, node);
+
+	/*
+	 * May be called early in order to allocate a slab for the
+	 * kmem_cache_node structure. Solve the chicken-egg
+	 * dilemma by deferring the increment of the count during
+	 * bootstrap (see early_kmem_cache_node_alloc).
+	 */
+	if (likely(n)) {
+		atomic_long_inc(&n->nr_slabs);
+#ifdef CONFIG_SLUB_DEBUG
+		atomic_long_add(objects, &n->total_objects);
+#endif
+	}
+}
+static inline void dec_slabs_node(struct kmem_cache *s, int node, int objects)
+{
+	struct kmem_cache_node *n = get_node(s, node);
+
+	atomic_long_dec(&n->nr_slabs);
+#ifdef CONFIG_SLUB_DEBUG
+	atomic_long_sub(objects, &n->total_objects);
+#endif
+}
 
 /*
  * Hooks for other subsystems that check memory allocations. In a typical
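For reference (again a sketch of the resulting behavior, not text from
the patch), with !CONFIG_SLUB_DEBUG the now-shared helpers above
effectively reduce to:

/*
 * Effective !CONFIG_SLUB_DEBUG behavior after this patch: nr_slabs is
 * always maintained, while total_objects and the full-list bookkeeping
 * remain compiled out under CONFIG_SLUB_DEBUG.
 */
static inline void inc_slabs_node(struct kmem_cache *s, int node, int objects)
{
	struct kmem_cache_node *n = get_node(s, node);

	/* n can still be NULL during early bootstrap, hence the check */
	if (likely(n))
		atomic_long_inc(&n->nr_slabs);
}

static inline void dec_slabs_node(struct kmem_cache *s, int node, int objects)
{
	atomic_long_dec(&get_node(s, node)->nr_slabs);
}

slabs_node() therefore returns the real per-node slab count in every
configuration, which is exactly what __kmem_cache_empty(),
__kmem_cache_shrink() and __kmem_cache_destroy() rely on.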