From patchwork Sun Mar 19 07:09:31 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 13180241
From: Lorenzo Stoakes <lstoakes@gmail.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, Andrew Morton
Cc: Baoquan He, Uladzislau Rezki, Matthew Wilcox, David Hildenbrand, Liu Shixin, Jiri Olsa, Lorenzo Stoakes
Subject: [PATCH v2 2/4] mm: vmalloc: use rwsem, mutex for vmap_area_lock and vmap_block->lock
Date: Sun, 19 Mar 2023 07:09:31 +0000
Message-Id: <6c7f1ac0aeb55faaa46a09108d3999e4595870d9.1679209395.git.lstoakes@gmail.com>
X-Mailer: git-send-email 2.39.2

vmalloc() is, by design, not permitted to be used in atomic context and
already contains components which may sleep, so avoiding spin locks is not
a problem from the perspective of atomic context.

The global vmap_area_lock is held when the red/black tree rooted in
vmap_area_root is accessed and thus is rather long-held and under
potentially high contention. It is likely to be under contention for reads
rather than writes, so replace it with a rwsem.

Each individual vmap_block->lock is likely to be held for less time but
under low contention, so a mutex is not an outrageous choice here.

A subset of test_vmalloc.sh performance results:-

  fix_size_alloc_test             0.40%
  full_fit_alloc_test             2.08%
  long_busy_list_alloc_test       0.34%
  random_size_alloc_test         -0.25%
  random_size_align_alloc_test    0.06%
  ...
  all tests cycles                0.2%

This represents a tiny reduction in performance that sits barely above
noise. The reason for making this change is to build a basis for vread()
to be usable asynchronously, thus eliminating the need for a bounce buffer
when copying data to userland in read_kcore() and allowing that to be
converted to an iterator form.

Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
Signed-off-by: Uladzislau Rezki (Sony)
---
 mm/vmalloc.c | 77 +++++++++++++++++++++++++++-------------------------
 1 file changed, 40 insertions(+), 37 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 978194dc2bb8..c24b27664a97 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -40,6 +40,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
@@ -725,7 +726,7 @@ EXPORT_SYMBOL(vmalloc_to_pfn);
 
 #define DEBUG_AUGMENT_LOWEST_MATCH_CHECK 0
 
-static DEFINE_SPINLOCK(vmap_area_lock);
+static DECLARE_RWSEM(vmap_area_lock);
 static DEFINE_SPINLOCK(free_vmap_area_lock);
 /* Export for kexec only */
 LIST_HEAD(vmap_area_list);
@@ -1537,9 +1538,9 @@ static void free_vmap_area(struct vmap_area *va)
 	/*
 	 * Remove from the busy tree/list.
 	 */
-	spin_lock(&vmap_area_lock);
+	down_write(&vmap_area_lock);
 	unlink_va(va, &vmap_area_root);
-	spin_unlock(&vmap_area_lock);
+	up_write(&vmap_area_lock);
 
 	/*
 	 * Insert/Merge it back to the free tree/list.
@@ -1627,9 +1628,9 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	va->vm = NULL;
 	va->flags = va_flags;
 
-	spin_lock(&vmap_area_lock);
+	down_write(&vmap_area_lock);
 	insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
-	spin_unlock(&vmap_area_lock);
+	up_write(&vmap_area_lock);
 
 	BUG_ON(!IS_ALIGNED(va->va_start, align));
 	BUG_ON(va->va_start < vstart);
@@ -1854,9 +1855,9 @@ struct vmap_area *find_vmap_area(unsigned long addr)
 {
 	struct vmap_area *va;
 
-	spin_lock(&vmap_area_lock);
+	down_read(&vmap_area_lock);
 	va = __find_vmap_area(addr, &vmap_area_root);
-	spin_unlock(&vmap_area_lock);
+	up_read(&vmap_area_lock);
 
 	return va;
 }
@@ -1865,11 +1866,11 @@ static struct vmap_area *find_unlink_vmap_area(unsigned long addr)
 {
 	struct vmap_area *va;
 
-	spin_lock(&vmap_area_lock);
+	down_write(&vmap_area_lock);
 	va = __find_vmap_area(addr, &vmap_area_root);
 	if (va)
 		unlink_va(va, &vmap_area_root);
-	spin_unlock(&vmap_area_lock);
+	up_write(&vmap_area_lock);
 
 	return va;
 }
@@ -1914,7 +1915,7 @@ struct vmap_block_queue {
 };
 
 struct vmap_block {
-	spinlock_t lock;
+	struct mutex lock;
 	struct vmap_area *va;
 	unsigned long free, dirty;
 	DECLARE_BITMAP(used_map, VMAP_BBMAP_BITS);
@@ -1991,7 +1992,7 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
 	}
 
 	vaddr = vmap_block_vaddr(va->va_start, 0);
-	spin_lock_init(&vb->lock);
+	mutex_init(&vb->lock);
 	vb->va = va;
 	/* At least something should be left free */
 	BUG_ON(VMAP_BBMAP_BITS <= (1UL << order));
@@ -2026,9 +2027,9 @@ static void free_vmap_block(struct vmap_block *vb)
 	tmp = xa_erase(&vmap_blocks, addr_to_vb_idx(vb->va->va_start));
 	BUG_ON(tmp != vb);
 
-	spin_lock(&vmap_area_lock);
+	down_write(&vmap_area_lock);
 	unlink_va(vb->va, &vmap_area_root);
-	spin_unlock(&vmap_area_lock);
+	up_write(&vmap_area_lock);
 
 	free_vmap_area_noflush(vb->va);
 	kfree_rcu(vb, rcu_head);
@@ -2047,7 +2048,7 @@ static void purge_fragmented_blocks(int cpu)
 		if (!(vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS))
 			continue;
 
-		spin_lock(&vb->lock);
+		mutex_lock(&vb->lock);
 		if (vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS) {
 			vb->free = 0; /* prevent further allocs after releasing lock */
 			vb->dirty = VMAP_BBMAP_BITS; /* prevent purging it again */
@@ -2056,10 +2057,10 @@
 			spin_lock(&vbq->lock);
 			list_del_rcu(&vb->free_list);
 			spin_unlock(&vbq->lock);
-			spin_unlock(&vb->lock);
+			mutex_unlock(&vb->lock);
 			list_add_tail(&vb->purge, &purge);
 		} else
-			spin_unlock(&vb->lock);
+			mutex_unlock(&vb->lock);
 	}
 	rcu_read_unlock();
@@ -2101,9 +2102,9 @@ static void *vb_alloc(unsigned long size, gfp_t gfp_mask)
 	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
 		unsigned long pages_off;
 
-		spin_lock(&vb->lock);
+		mutex_lock(&vb->lock);
 		if (vb->free < (1UL << order)) {
-			spin_unlock(&vb->lock);
+			mutex_unlock(&vb->lock);
 			continue;
 		}
@@ -2117,7 +2118,7 @@
 			spin_unlock(&vbq->lock);
 		}
 
-		spin_unlock(&vb->lock);
+		mutex_unlock(&vb->lock);
 		break;
 	}
@@ -2144,16 +2145,16 @@ static void vb_free(unsigned long addr, unsigned long size)
 	order = get_order(size);
 	offset = (addr & (VMAP_BLOCK_SIZE - 1)) >> PAGE_SHIFT;
 	vb = xa_load(&vmap_blocks, addr_to_vb_idx(addr));
 
-	spin_lock(&vb->lock);
+	mutex_lock(&vb->lock);
 	bitmap_clear(vb->used_map, offset, (1UL << order));
-	spin_unlock(&vb->lock);
+	mutex_unlock(&vb->lock);
 
 	vunmap_range_noflush(addr, addr + size);
 
 	if (debug_pagealloc_enabled_static())
 		flush_tlb_kernel_range(addr, addr + size);
 
-	spin_lock(&vb->lock);
+	mutex_lock(&vb->lock);
 
 	/* Expand dirty range */
 	vb->dirty_min = min(vb->dirty_min, offset);
@@ -2162,10 +2163,10 @@
 	vb->dirty += 1UL << order;
 
 	if (vb->dirty == VMAP_BBMAP_BITS) {
 		BUG_ON(vb->free);
-		spin_unlock(&vb->lock);
+		mutex_unlock(&vb->lock);
 		free_vmap_block(vb);
 	} else
-		spin_unlock(&vb->lock);
+		mutex_unlock(&vb->lock);
 }
 
 static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
@@ -2183,7 +2184,7 @@ static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
 		rcu_read_lock();
 		list_for_each_entry_rcu(vb, &vbq->free, free_list) {
-			spin_lock(&vb->lock);
+			mutex_lock(&vb->lock);
 			if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
 				unsigned long va_start = vb->va->va_start;
 				unsigned long s, e;
@@ -2196,7 +2197,7 @@
 				flush = 1;
 			}
-			spin_unlock(&vb->lock);
+			mutex_unlock(&vb->lock);
 		}
 		rcu_read_unlock();
 	}
@@ -2451,9 +2452,9 @@ static inline void setup_vmalloc_vm_locked(struct vm_struct *vm,
 static void setup_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
 			     unsigned long flags, const void *caller)
 {
-	spin_lock(&vmap_area_lock);
+	down_write(&vmap_area_lock);
 	setup_vmalloc_vm_locked(vm, va, flags, caller);
-	spin_unlock(&vmap_area_lock);
+	up_write(&vmap_area_lock);
 }
 
 static void clear_vm_uninitialized_flag(struct vm_struct *vm)
@@ -3507,9 +3508,9 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
 	if (!vb)
 		goto finished;
 
-	spin_lock(&vb->lock);
+	mutex_lock(&vb->lock);
 	if (bitmap_empty(vb->used_map, VMAP_BBMAP_BITS)) {
-		spin_unlock(&vb->lock);
+		mutex_unlock(&vb->lock);
 		goto finished;
 	}
 	for_each_set_bitrange(rs, re, vb->used_map, VMAP_BBMAP_BITS) {
@@ -3536,7 +3537,7 @@
 		count -= n;
 	}
 unlock:
-	spin_unlock(&vb->lock);
+	mutex_unlock(&vb->lock);
 
 finished:
 	/* zero-fill the left dirty or free regions */
@@ -3576,13 +3577,15 @@ long vread(char *buf, char *addr, unsigned long count)
 	unsigned long buflen = count;
 	unsigned long n, size, flags;
 
+	might_sleep();
+
 	addr = kasan_reset_tag(addr);
 
 	/* Don't allow overflow */
 	if ((unsigned long) addr + count < count)
 		count = -(unsigned long) addr;
 
-	spin_lock(&vmap_area_lock);
+	down_read(&vmap_area_lock);
 	va = find_vmap_area_exceed_addr((unsigned long)addr);
 	if (!va)
 		goto finished;
@@ -3639,7 +3642,7 @@
 		count -= n;
 	}
 finished:
-	spin_unlock(&vmap_area_lock);
+	up_read(&vmap_area_lock);
 
 	if (buf == buf_start)
 		return 0;
@@ -3980,14 +3983,14 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 	}
 
 	/* insert all vm's */
-	spin_lock(&vmap_area_lock);
+	down_write(&vmap_area_lock);
 	for (area = 0; area < nr_vms; area++) {
 		insert_vmap_area(vas[area], &vmap_area_root, &vmap_area_list);
 
 		setup_vmalloc_vm_locked(vms[area], vas[area], VM_ALLOC,
				 pcpu_get_vm_areas);
 	}
-	spin_unlock(&vmap_area_lock);
+	up_write(&vmap_area_lock);
 
 	/*
 	 * Mark allocated areas as accessible. Do it now as a best-effort
@@ -4114,7 +4117,7 @@ static void *s_start(struct seq_file *m, loff_t *pos)
 	__acquires(&vmap_area_lock)
 {
 	mutex_lock(&vmap_purge_lock);
-	spin_lock(&vmap_area_lock);
+	down_read(&vmap_area_lock);
 
 	return seq_list_start(&vmap_area_list, *pos);
 }
@@ -4128,7 +4131,7 @@ static void s_stop(struct seq_file *m, void *p)
 	__releases(&vmap_area_lock)
 	__releases(&vmap_purge_lock)
 {
-	spin_unlock(&vmap_area_lock);
+	up_read(&vmap_area_lock);
 	mutex_unlock(&vmap_purge_lock);
 }
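
As a side note for readers less familiar with the primitive swapped in
above: the following is a minimal, self-contained sketch (not taken from
this patch) of the read-mostly rwsem pattern now used for vmap_area_lock.
The struct thing, thing_lock, thing_root and both helpers are hypothetical
stand-ins for vmap_area, vmap_area_lock, vmap_area_root and the
__find_vmap_area()/unlink_va() helpers.

	/*
	 * Illustrative sketch only -- hypothetical names, not code from
	 * this patch. Lookups take the semaphore shared, updates take it
	 * exclusive.
	 */
	#include <linux/rbtree.h>
	#include <linux/rwsem.h>

	struct thing {
		struct rb_node node;
		unsigned long addr;
	};

	static DECLARE_RWSEM(thing_lock);	/* replaces DEFINE_SPINLOCK() */
	static struct rb_root thing_root = RB_ROOT;

	/* Lookup: many readers may hold the rwsem concurrently. */
	static struct thing *thing_find(unsigned long addr)
	{
		struct rb_node *n;
		struct thing *found = NULL;

		down_read(&thing_lock);
		for (n = thing_root.rb_node; n; ) {
			struct thing *t = rb_entry(n, struct thing, node);

			if (addr < t->addr) {
				n = n->rb_left;
			} else if (addr > t->addr) {
				n = n->rb_right;
			} else {
				found = t;
				break;
			}
		}
		up_read(&thing_lock);

		return found;
	}

	/* Removal: a writer excludes both readers and other writers. */
	static void thing_remove(struct thing *t)
	{
		down_write(&thing_lock);
		rb_erase(&t->node, &thing_root);
		up_write(&thing_lock);
	}

The property the patch relies on is that down_read() admits any number of
concurrent readers while down_write() excludes everyone, and that both may
sleep - hence the might_sleep() annotation added to vread(), and hence the
conversion only being legal because vmalloc is never used in atomic
context.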