From patchwork Tue Aug 23 17:03:58 2022
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 12952324
From: Vlastimil Babka
To: Rongwei Wang, Christoph Lameter, Joonsoo Kim, David Rientjes,
    Pekka Enberg
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, linux-mm@kvack.org,
    Sebastian Andrzej Siewior, Thomas Gleixner, Mike Galbraith,
    Vlastimil Babka
Subject: [PATCH v2 3/5] mm/slub: remove slab_lock() usage for debug operations
Date: Tue, 23 Aug 2022 19:03:58 +0200
Message-Id: <20220823170400.26546-4-vbabka@suse.cz>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20220823170400.26546-1-vbabka@suse.cz>
References: <20220823170400.26546-1-vbabka@suse.cz>

All alloc and free operations on debug caches are now serialized by
n->list_lock, so we can remove slab_lock() usage in validate_slab()
and list_slab_objects(), as those also happen under n->list_lock.

Note the usage in list_slab_objects() could happen even on non-debug
caches, but only during cache shutdown time, so there should not be any
parallel freeing activity anymore, except for buggy slab users; in that
case the slab_lock() would not help against the common cmpxchg-based
fast paths (in non-debug caches) anyway.

Also adjust documentation comments accordingly.

Suggested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Vlastimil Babka
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Acked-by: David Rientjes
---
 mm/slub.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index a5a913879871..b4065e892f7c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -50,7 +50,7 @@
  *   1. slab_mutex (Global Mutex)
  *   2. node->list_lock (Spinlock)
  *   3. kmem_cache->cpu_slab->lock (Local lock)
- *   4. slab_lock(slab) (Only on some arches or for debugging)
+ *   4. slab_lock(slab) (Only on some arches)
  *   5. object_map_lock (Only for debugging)
  *
  *   slab_mutex
@@ -64,8 +64,9 @@
  *   The slab_lock is a wrapper around the page lock, thus it is a bit
  *   spinlock.
 *
- *   The slab_lock is only used for debugging and on arches that do not
- *   have the ability to do a cmpxchg_double. It only protects:
+ *   The slab_lock is only used on arches that do not have the ability
+ *   to do a cmpxchg_double. It only protects:
+ *
  *	A. slab->freelist	-> List of free objects in a slab
  *	B. slab->inuse		-> Number of objects in use
  *	C. slab->objects	-> Number of objects in slab
@@ -94,6 +95,9 @@
  *   allocating a long series of objects that fill up slabs does not require
  *   the list lock.
  *
+ *   For debug caches, all allocations are forced to go through a list_lock
+ *   protected region to serialize against concurrent validation.
+ *
  *   cpu_slab->lock local lock
  *
  *   This locks protect slowpath manipulation of all kmem_cache_cpu fields
@@ -4368,7 +4372,6 @@ static void list_slab_objects(struct kmem_cache *s, struct slab *slab,
 	void *p;
 
 	slab_err(s, slab, text, s->name);
-	slab_lock(slab, &flags);
 
 	map = get_map(s, slab);
 	for_each_object(p, s, addr, slab->objects) {
@@ -4379,7 +4382,6 @@ static void list_slab_objects(struct kmem_cache *s, struct slab *slab,
 		}
 	}
 	put_map(map);
-	slab_unlock(slab, &flags);
 #endif
 }
 
@@ -5107,12 +5109,9 @@ static void validate_slab(struct kmem_cache *s, struct slab *slab,
 {
 	void *p;
 	void *addr = slab_address(slab);
-	unsigned long flags;
-
-	slab_lock(slab, &flags);
 
 	if (!check_slab(s, slab) || !on_freelist(s, slab, NULL))
-		goto unlock;
+		return;
 
 	/* Now we know that a valid freelist exists */
 	__fill_map(obj_map, s, slab);
@@ -5123,8 +5122,6 @@ static void validate_slab(struct kmem_cache *s, struct slab *slab,
 		if (!check_object(s, slab, p, val))
 			break;
 	}
-unlock:
-	slab_unlock(slab, &flags);
 }
 
 static int validate_slab_node(struct kmem_cache *s,
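
For readers following the locking argument above: validation is driven from
validate_slab_node(), which already takes n->list_lock around every
validate_slab() call, so once debug-cache alloc/free is serialized by the same
lock, the per-slab slab_lock() adds nothing. Below is a simplified, abridged
sketch of that caller (not part of this patch; error reporting and the n->full
walk done for SLAB_STORE_USER are omitted, and details may differ slightly
from the tree this series is based on):

	static int validate_slab_node(struct kmem_cache *s,
			struct kmem_cache_node *n, unsigned long *obj_map)
	{
		unsigned long count = 0;
		struct slab *slab;
		unsigned long flags;

		/*
		 * Every validate_slab() call below runs under n->list_lock,
		 * the same lock that now serializes alloc/free on debug
		 * caches, so no per-slab slab_lock() is needed anymore.
		 */
		spin_lock_irqsave(&n->list_lock, flags);

		list_for_each_entry(slab, &n->partial, slab_list) {
			validate_slab(s, slab, obj_map);
			count++;
		}

		spin_unlock_irqrestore(&n->list_lock, flags);
		return count;
	}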