From patchwork Wed Jun 9 11:38:31 2021
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 12309837
From: Vlastimil Babka
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Christoph Lameter, David Rientjes, Pekka Enberg, Joonsoo Kim
Cc: Sebastian Andrzej Siewior, Thomas Gleixner, Mel Gorman, Jesper Dangaard Brouer, Peter Zijlstra, Jann Horn, Vlastimil Babka
Subject: [RFC v2 02/34] mm, slub: allocate private object map for sysfs listings
Date: Wed, 9 Jun 2021 13:38:31 +0200
Message-Id: <20210609113903.1421-3-vbabka@suse.cz>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210609113903.1421-1-vbabka@suse.cz>
References: <20210609113903.1421-1-vbabka@suse.cz>
MIME-Version: 1.0

SLUB has a static, spinlock-protected bitmap for marking which objects are on a freelist when it wants to list them. It is used in situations where dynamically allocating such a map could lead to recursion or locking issues, and where an on-stack bitmap would be too large.

The handlers of the sysfs files alloc_calls and free_calls also currently use this shared bitmap, but their syscall context makes it straightforward to allocate a private map before entering the locked sections, so switch these processing paths to a private bitmap.
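To make the shared-versus-private distinction concrete before the diff, here is a minimal userspace sketch of the same pattern; it is an analogue only, not kernel code. The names fill_map(), process_with_shared_map() and process_with_private_map() are made up for illustration, standing in for __fill_map(), the get_map()/put_map() path and the new private-map path, and calloc()/free() plus a pthread mutex stand in for bitmap_alloc()/bitmap_free() and object_map_lock.

/*
 * Illustrative sketch (assumed userspace analogue, not the patch itself).
 * A single static map must be filled and consumed under a lock; a
 * caller-allocated private map needs no serialization against other users.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_OBJS 512                     /* analogue of MAX_OBJS_PER_PAGE */

/* Shared variant: one static map, serialized by a lock (like object_map). */
static bool shared_map[MAX_OBJS];
static pthread_mutex_t shared_map_lock = PTHREAD_MUTEX_INITIALIZER;

/* Mark the objects that are currently on the freelist. */
static void fill_map(bool *map, const int *free_idx, int nr_free, int nr_objs)
{
	memset(map, 0, nr_objs * sizeof(*map));
	for (int i = 0; i < nr_free; i++)
		map[free_idx[i]] = true;
}

/* Path that must reuse the shared map: the lock is held while using it. */
static void process_with_shared_map(const int *free_idx, int nr_free, int nr_objs)
{
	pthread_mutex_lock(&shared_map_lock);
	fill_map(shared_map, free_idx, nr_free, nr_objs);
	/* ... walk all objects, skipping those set in shared_map ... */
	pthread_mutex_unlock(&shared_map_lock);
}

/* Sysfs-style path: allocate a private map up front, no shared lock needed. */
static void process_with_private_map(const int *free_idx, int nr_free, int nr_objs)
{
	bool *map = calloc(nr_objs, sizeof(*map));   /* analogue of bitmap_alloc() */

	if (!map) {
		fprintf(stderr, "Out of memory\n");
		return;
	}
	fill_map(map, free_idx, nr_free, nr_objs);
	/* ... walk all objects, skipping those set in map ... */
	free(map);                                   /* analogue of bitmap_free() */
}

int main(void)
{
	const int freelist[] = { 1, 3, 5 };

	process_with_shared_map(freelist, 3, 8);
	process_with_private_map(freelist, 3, 8);
	return 0;
}

The point of the second variant is that a caller which may sleep and allocate simply pays for its own map up front and never contends on the shared lock, which is exactly what the patch below does for the sysfs listing paths.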
Signed-off-by: Vlastimil Babka
Acked-by: Christoph Lameter
Acked-by: Mel Gorman
---
 mm/slub.c | 42 ++++++++++++++++++++++++++++--------------
 1 file changed, 28 insertions(+), 14 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index f928607230b2..92c3ab3a95ba 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -448,6 +448,18 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
 static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
 static DEFINE_SPINLOCK(object_map_lock);
 
+static void __fill_map(unsigned long *obj_map, struct kmem_cache *s,
+		       struct page *page)
+{
+	void *addr = page_address(page);
+	void *p;
+
+	bitmap_zero(obj_map, page->objects);
+
+	for (p = page->freelist; p; p = get_freepointer(s, p))
+		set_bit(__obj_to_index(s, addr, p), obj_map);
+}
+
 /*
  * Determine a map of object in use on a page.
  *
@@ -457,17 +469,11 @@ static DEFINE_SPINLOCK(object_map_lock);
 static unsigned long *get_map(struct kmem_cache *s, struct page *page)
 	__acquires(&object_map_lock)
 {
-	void *p;
-	void *addr = page_address(page);
-
 	VM_BUG_ON(!irqs_disabled());
 
 	spin_lock(&object_map_lock);
 
-	bitmap_zero(object_map, page->objects);
-
-	for (p = page->freelist; p; p = get_freepointer(s, p))
-		set_bit(__obj_to_index(s, addr, p), object_map);
+	__fill_map(object_map, s, page);
 
 	return object_map;
 }
@@ -4813,17 +4819,17 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
 }
 
 static void process_slab(struct loc_track *t, struct kmem_cache *s,
-		struct page *page, enum track_item alloc)
+		struct page *page, enum track_item alloc,
+		unsigned long *obj_map)
 {
 	void *addr = page_address(page);
 	void *p;
-	unsigned long *map;
 
-	map = get_map(s, page);
+	__fill_map(obj_map, s, page);
+
 	for_each_object(p, s, addr, page->objects)
-		if (!test_bit(__obj_to_index(s, addr, p), map))
+		if (!test_bit(__obj_to_index(s, addr, p), obj_map))
 			add_location(t, s, get_track(s, p, alloc));
-	put_map(map);
 }
 
 static int list_locations(struct kmem_cache *s, char *buf,
@@ -4834,9 +4840,15 @@ static int list_locations(struct kmem_cache *s, char *buf,
 	struct loc_track t = { 0, 0, NULL };
 	int node;
 	struct kmem_cache_node *n;
+	unsigned long *obj_map;
+
+	obj_map = bitmap_alloc(oo_objects(s->oo), GFP_KERNEL);
+	if (!obj_map)
+		return sysfs_emit(buf, "Out of memory\n");
 
 	if (!alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
 			     GFP_KERNEL)) {
+		bitmap_free(obj_map);
 		return sysfs_emit(buf, "Out of memory\n");
 	}
 
@@ -4849,12 +4861,14 @@ static int list_locations(struct kmem_cache *s, char *buf,
 		spin_lock_irqsave(&n->list_lock, flags);
 		list_for_each_entry(page, &n->partial, slab_list)
-			process_slab(&t, s, page, alloc);
+			process_slab(&t, s, page, alloc, obj_map);
 		list_for_each_entry(page, &n->full, slab_list)
-			process_slab(&t, s, page, alloc);
+			process_slab(&t, s, page, alloc, obj_map);
 		spin_unlock_irqrestore(&n->list_lock, flags);
 	}
 
+	bitmap_free(obj_map);
+
 	for (i = 0; i < t.count; i++) {
 		struct location *l = &t.loc[i];