From patchwork Tue Feb 7 14:16:53 2023
X-Patchwork-Submitter: Thomas Gleixner <tglx@linutronix.de>
X-Patchwork-Id: 13131588
From: Thomas Gleixner <tglx@linutronix.de>
To: Vlastimil Babka, kernel test robot, Shanker Donthineni
Cc: oe-lkp@lists.linux.dev, lkp@intel.com, linux-kernel@vger.kernel.org,
 Marc Zyngier, Michael Walle, Sebastian Andrzej Siewior, Hans de Goede,
 Wolfram Sang, linux-mm@kvack.org, "Liam R. Howlett", Matthew Wilcox
Subject: mm, slab/slub: Ensure kmem_cache_alloc_bulk() is available early
In-Reply-To: <875ycdwyx6.ffs@tglx>
References: <202302011308.f53123d2-oliver.sang@intel.com>
 <87o7qdzfay.ffs@tglx> <9a682773-df56-f36c-f582-e8eeef55d7f8@suse.cz>
 <875ycdwyx6.ffs@tglx>
Date: Tue, 07 Feb 2023 15:16:53 +0100
Message-ID: <871qn1wofe.ffs@tglx>

The memory allocators are available during early boot even in the phase
where interrupts are disabled and scheduling is not yet possible.

The setup is so that GFP_KERNEL allocations work in this phase without
causing might_alloc() splats to be emitted because the system state is
SYSTEM_BOOTING at that point, which prevents the warnings from
triggering.

Most allocation/free functions use local_irq_save()/restore() or a lock
variant of that, but kmem_cache_alloc_bulk() and kmem_cache_free_bulk()
use local_[lock]_irq_disable()/enable(), which leads to a lockdep
warning when interrupts are enabled during the early boot phase.

This went unnoticed so far as there are no early users of these
interfaces.
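(Illustration, not part of the patch: local_irq_enable() unconditionally
turns interrupts on, whereas local_irq_restore() reinstates whatever
state local_irq_save() captured. The userspace toy model below, with
invented names and a plain bool standing in for the CPU interrupt flag,
shows why only the save/restore variant is correct while interrupts are
still disabled in early boot.)

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for the CPU interrupt-enable state; not kernel code. */
static bool irqs_on;

/* Models local_irq_disable()/local_irq_enable(): unconditional. */
static void toy_irq_disable(void) { irqs_on = false; }
static void toy_irq_enable(void)  { irqs_on = true; }

/* Models local_irq_save()/local_irq_restore(): state preserving. */
static void toy_irq_save(bool *flags)   { *flags = irqs_on; irqs_on = false; }
static void toy_irq_restore(bool flags) { irqs_on = flags; }

/* Bulk operation using the unconditional variant. */
static void bulk_op_enable(void)
{
	toy_irq_disable();
	/* ... allocate objects ... */
	toy_irq_enable();	/* forces interrupts on, wrong in early boot */
}

/* The same operation using the save/restore variant. */
static void bulk_op_save(void)
{
	bool flags;

	toy_irq_save(&flags);
	/* ... allocate objects ... */
	toy_irq_restore(flags);	/* returns to the caller's prior state */
}

int main(void)
{
	irqs_on = false;	/* early boot: interrupts still disabled */
	bulk_op_enable();
	printf("enable variant: irqs_on=%d (state corrupted)\n", irqs_on);

	irqs_on = false;
	bulk_op_save();
	printf("save variant:   irqs_on=%d (state preserved)\n", irqs_on);
	return 0;
}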
The upcoming conversion of the interrupt descriptor store from
radix_tree to maple_tree triggered this warning, as maple_tree uses the
bulk interface.

Cure this by moving the kmem_cache_alloc/free() bulk variants of SLUB
and SLAB to local[_lock]_irq_save()/restore().

There is obviously no reclaim possible or required at this point, so
there is no need to expand this coverage further.

No functional change.

Signed-off-by: Thomas Gleixner
Signed-off-by: Vlastimil Babka
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
Initial version: https://lore.kernel.org/r/87o7qdzfay.ffs@tglx
Changes: Update SLAB as well and add changelog
---
 mm/slab.c | 18 ++++++++++--------
 mm/slub.c |  9 +++++----
 2 files changed, 15 insertions(+), 12 deletions(-)

--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3479,14 +3479,15 @@ cache_alloc_debugcheck_after_bulk(struct
 int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
                           void **p)
 {
-        size_t i;
         struct obj_cgroup *objcg = NULL;
+        unsigned long irqflags;
+        size_t i;
 
         s = slab_pre_alloc_hook(s, NULL, &objcg, size, flags);
         if (!s)
                 return 0;
 
-        local_irq_disable();
+        local_irq_save(irqflags);
         for (i = 0; i < size; i++) {
                 void *objp = kfence_alloc(s, s->object_size, flags) ?:
                              __do_cache_alloc(s, flags, NUMA_NO_NODE);
@@ -3495,7 +3496,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
                         goto error;
                 p[i] = objp;
         }
-        local_irq_enable();
+        local_irq_restore(irqflags);
 
         cache_alloc_debugcheck_after_bulk(s, flags, size, p, _RET_IP_);
 
@@ -3508,7 +3509,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
         /* FIXME: Trace call missing. Christoph would like a bulk variant */
         return size;
 error:
-        local_irq_enable();
+        local_irq_restore(irqflags);
         cache_alloc_debugcheck_after_bulk(s, flags, i, p, _RET_IP_);
         slab_post_alloc_hook(s, objcg, flags, i, p, false, s->object_size);
         kmem_cache_free_bulk(s, i, p);
@@ -3610,8 +3611,9 @@ EXPORT_SYMBOL(kmem_cache_free);
 
 void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
 {
+        unsigned long flags;
 
-        local_irq_disable();
+        local_irq_save(flags);
         for (int i = 0; i < size; i++) {
                 void *objp = p[i];
                 struct kmem_cache *s;
@@ -3621,9 +3623,9 @@ void kmem_cache_free_bulk(struct kmem_ca
 
                 /* called via kfree_bulk */
                 if (!folio_test_slab(folio)) {
-                        local_irq_enable();
+                        local_irq_restore(flags);
                         free_large_kmalloc(folio, objp);
-                        local_irq_disable();
+                        local_irq_save(flags);
                         continue;
                 }
                 s = folio_slab(folio)->slab_cache;
@@ -3640,7 +3642,7 @@ void kmem_cache_free_bulk(struct kmem_ca
                 __cache_free(s, objp, _RET_IP_);
         }
 
-        local_irq_enable();
+        local_irq_restore(flags);
 
         /* FIXME: add tracing */
 }
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3913,6 +3913,7 @@ static inline int __kmem_cache_alloc_bul
                         size_t size, void **p, struct obj_cgroup *objcg)
 {
         struct kmem_cache_cpu *c;
+        unsigned long irqflags;
         int i;
 
         /*
@@ -3921,7 +3922,7 @@ static inline int __kmem_cache_alloc_bul
          * handlers invoking normal fastpath.
          */
         c = slub_get_cpu_ptr(s->cpu_slab);
-        local_lock_irq(&s->cpu_slab->lock);
+        local_lock_irqsave(&s->cpu_slab->lock, irqflags);
 
         for (i = 0; i < size; i++) {
                 void *object = kfence_alloc(s, s->object_size, flags);
@@ -3942,7 +3943,7 @@ static inline int __kmem_cache_alloc_bul
                          */
                         c->tid = next_tid(c->tid);
 
-                        local_unlock_irq(&s->cpu_slab->lock);
+                        local_unlock_irqrestore(&s->cpu_slab->lock, irqflags);
 
                         /*
                          * Invoking slow path likely have side-effect
@@ -3956,7 +3957,7 @@ static inline int __kmem_cache_alloc_bul
                         c = this_cpu_ptr(s->cpu_slab);
                         maybe_wipe_obj_freeptr(s, p[i]);
 
-                        local_lock_irq(&s->cpu_slab->lock);
+                        local_lock_irqsave(&s->cpu_slab->lock, irqflags);
 
                         continue; /* goto for-loop */
                 }
@@ -3965,7 +3966,7 @@ static inline int __kmem_cache_alloc_bul
                 maybe_wipe_obj_freeptr(s, p[i]);
         }
         c->tid = next_tid(c->tid);
-        local_unlock_irq(&s->cpu_slab->lock);
+        local_unlock_irqrestore(&s->cpu_slab->lock, irqflags);
         slub_put_cpu_ptr(s->cpu_slab);
 
         return i;
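For reference, and not part of the patch: a bulk-interface user looks
roughly like the kernel-style sketch below. demo_cache, struct demo_node
and demo_init() are invented names for illustration; only
kmem_cache_alloc_bulk(), kmem_cache_free_bulk() and the KMEM_CACHE()
helper are real interfaces. With the change above, such a caller is safe
even while interrupts are still disabled in early boot, because the bulk
paths now save and restore the interrupt state instead of force-enabling
it.

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/slab.h>

/* Invented example object type and cache, purely for illustration. */
struct demo_node {
        unsigned long data;
};

static struct kmem_cache *demo_cache;

static int __init demo_init(void)
{
        void *objs[16];
        int got;

        demo_cache = KMEM_CACHE(demo_node, 0);
        if (!demo_cache)
                return -ENOMEM;

        /*
         * Fills objs[] and returns the number of objects allocated
         * (all or nothing), 0 on failure. After this patch the call
         * no longer unconditionally re-enables interrupts.
         */
        got = kmem_cache_alloc_bulk(demo_cache, GFP_KERNEL,
                                    ARRAY_SIZE(objs), objs);
        if (!got)
                return -ENOMEM;

        /* ... use the objects ... */

        kmem_cache_free_bulk(demo_cache, got, objs);
        return 0;
}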