From patchwork Sun Feb 27 03:08:35 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12761525
Date: Sun, 27 Feb 2022 03:08:35 +0000
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: David Rientjes, Christoph Lameter, Joonsoo Kim, Pekka Enberg,
 Roman Gushchin, Andrew Morton, linux-mm@kvack.org, patches@lists.linux.dev,
 linux-kernel@vger.kernel.org, Oliver Glitta, Faiyaz Mohammed, Dmitry Vyukov,
 Eric Dumazet, Jarkko Sakkinen, Johannes Berg, Yury Norov, Arnd Bergmann,
 James Bottomley, Matteo Croce, Marco Elver, Andrey Konovalov, Imran Khan,
 Zqiang, Hyeonggon Yoo <42.hyeyoo@gmail.com>
Subject: [PATCH] lib/stackdepot: Use page allocator if both slab and memblock is unavailable
In-Reply-To: <20220225180318.20594-3-vbabka@suse.cz>

After commit 2dba5eb1c73b ("lib/stackdepot: allow optional init and
stack_table allocation by kvmalloc()"), stack_depot_init() is called
later if CONFIG_STACKDEPOT_ALWAYS_INIT=n, to avoid unnecessary memory
usage. It allocates stack_table using memblock_alloc() or kvmalloc(),
depending on whether the slab allocator is available.
But when stack_depot_init() is called while creating boot slab caches,
neither the slab allocator nor memblock is available, so the kernel
crashes.

Allocate stack_table from the page allocator when both the slab
allocator and memblock are unavailable. In that case, limit the size of
stack_table, because vmalloc() is also unavailable in kmem_cache_init():
it must not be larger than (PAGE_SIZE << (MAX_ORDER - 1)).

This patch was tested with both CONFIG_STACKDEPOT_ALWAYS_INIT=y and =n.

Fixes: 2dba5eb1c73b ("lib/stackdepot: allow optional init and stack_table allocation by kvmalloc()")
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reported-by: kernel test robot
Reported-by: kernel test robot
---
 lib/stackdepot.c | 28 +++++++++++++++++++++-------
 1 file changed, 21 insertions(+), 7 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index bf5ba9af0500..606f80ae2bf7 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -73,6 +73,14 @@ static int next_slab_inited;
 static size_t depot_offset;
 static DEFINE_RAW_SPINLOCK(depot_lock);
 
+static unsigned int stack_hash_size = (1 << CONFIG_STACK_HASH_ORDER);
+static inline unsigned int stack_hash_mask(void)
+{
+	return stack_hash_size - 1;
+}
+
+#define STACK_HASH_SEED 0x9747b28c
+
 static bool init_stack_slab(void **prealloc)
 {
 	if (!*prealloc)
@@ -142,10 +150,6 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 	return stack;
 }
 
-#define STACK_HASH_SIZE (1L << CONFIG_STACK_HASH_ORDER)
-#define STACK_HASH_MASK (STACK_HASH_SIZE - 1)
-#define STACK_HASH_SEED 0x9747b28c
-
 static bool stack_depot_disable;
 static struct stack_record **stack_table;
 
@@ -172,18 +176,28 @@ __ref int stack_depot_init(void)
 	mutex_lock(&stack_depot_init_mutex);
 
 	if (!stack_depot_disable && !stack_table) {
-		size_t size = (STACK_HASH_SIZE * sizeof(struct stack_record *));
+		size_t size = (stack_hash_size * sizeof(struct stack_record *));
 		int i;
 
 		if (slab_is_available()) {
 			pr_info("Stack Depot allocating hash table with kvmalloc\n");
 			stack_table = kvmalloc(size, GFP_KERNEL);
+		} else if (totalram_pages() > 0) {
+			/* Reduce size because vmalloc may be unavailable */
+			size = min(size, PAGE_SIZE << (MAX_ORDER - 1));
+			stack_hash_size = size / sizeof(struct stack_record *);
+
+			pr_info("Stack Depot allocating hash table with __get_free_pages\n");
+			stack_table = (struct stack_record **)
+				__get_free_pages(GFP_KERNEL, get_order(size));
 		} else {
 			pr_info("Stack Depot allocating hash table with memblock_alloc\n");
 			stack_table = memblock_alloc(size, SMP_CACHE_BYTES);
 		}
+
 		if (stack_table) {
-			for (i = 0; i < STACK_HASH_SIZE; i++)
+			pr_info("Stack Depot hash table size=%u\n", stack_hash_size);
+			for (i = 0; i < stack_hash_size; i++)
 				stack_table[i] = NULL;
 		} else {
 			pr_err("Stack Depot hash table allocation failed, disabling\n");
@@ -363,7 +377,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 		goto fast_exit;
 
 	hash = hash_stack(entries, nr_entries);
-	bucket = &stack_table[hash & STACK_HASH_MASK];
+	bucket = &stack_table[hash & stack_hash_mask()];
 
 	/*
 	 * Fast path: look the stack trace up without locking.