From patchwork Mon Feb 27 17:35:59 2023
Date: Mon, 27 Feb 2023 09:35:59 -0800
From: Suren Baghdasaryan
Subject: [PATCH v4 00/33] Per-VMA locks
Message-ID: <20230227173632.3292573-1-surenb@google.com>
To: akpm@linux-foundation.org
Cc: michel@lespinasse.org, jglisse@google.com, mhocko@suse.com, vbabka@suse.cz,
    hannes@cmpxchg.org, mgorman@techsingularity.net, dave@stgolabs.net,
    willy@infradead.org, liam.howlett@oracle.com, peterz@infradead.org,
    ldufour@linux.ibm.com, paulmck@kernel.org, mingo@redhat.com, will@kernel.org,
    luto@kernel.org, songliubraving@fb.com, peterx@redhat.com, david@redhat.com,
    dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de,
    kent.overstreet@linux.dev, punit.agrawal@bytedance.com, lstoakes@gmail.com,
    peterjung1337@gmail.com, rientjes@google.com, chriscli@google.com,
    axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
    rppt@kernel.org, jannh@google.com, shakeelb@google.com, tatashin@google.com,
    edumazet@google.com, gthelen@google.com, gurua@google.com,
    arjunroy@google.com, soheil@google.com, leewalsh@google.com, posk@google.com,
    michalechner92@googlemail.com, linux-mm@kvack.org,
    linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
    x86@kernel.org, linux-kernel@vger.kernel.org, kernel-team@android.com,
    Suren Baghdasaryan
Previous versions:
v3: https://lore.kernel.org/all/20230216051750.3125598-1-surenb@google.com/
v2: https://lore.kernel.org/lkml/20230127194110.533103-1-surenb@google.com/
v1: https://lore.kernel.org/all/20230109205336.3665937-1-surenb@google.com/
RFC: https://lore.kernel.org/all/20220901173516.702122-1-surenb@google.com/

LWN article describing the feature:
https://lwn.net/Articles/906852/

The per-VMA locks idea was discussed during the SPF [1] discussion at LSF/MM
last year [2], which concluded with the suggestion that “a reader/writer
semaphore could be put into the VMA itself; that would have the effect of
using the VMA as a sort of range lock. There would still be contention at
the VMA level, but it would be an improvement.” This patchset implements
that suggested approach. When handling page faults we look up the VMA that
contains the faulting page under RCU protection and try to acquire its lock.
If that fails we fall back to using mmap_lock, similar to how SPF handled
this situation.

One notable way the implementation deviates from the proposal is the way
VMAs are read-locked. During some mm updates, multiple VMAs need to be
locked until the end of the update (e.g. vma_merge, split_vma, etc).
Tracking all the locked VMAs, avoiding recursive locks, and figuring out
when it is safe to unlock previously locked VMAs would make the code more
complex. So, instead of the usual lock/unlock pattern, the proposed solution
marks a VMA as locked and provides an efficient way to:
1. Identify locked VMAs.
2. Unlock all locked VMAs in bulk.

We also postpone unlocking the locked VMAs until the end of the update, when
we do mmap_write_unlock. Potentially this keeps a VMA locked for longer than
is absolutely necessary, but it results in a big reduction of code
complexity.

Read-locking a VMA is done using two sequence numbers - one in the
vm_area_struct and one in the mm_struct. A VMA is considered read-locked
when these sequence numbers are equal. To read-lock a VMA we set the
sequence number in vm_area_struct to be equal to the sequence number in
mm_struct. To unlock all VMAs we increment mm_struct's sequence number. This
provides an efficient way to track locked VMAs and to drop the locks on all
VMAs at the end of the update.
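
To make the scheme above easier to picture, here is a small self-contained
userspace model of it (plain C + pthreads, compile with -pthread). This is
NOT the kernel code: the struct layout, field names and helper functions are
made up for illustration and only loosely mirror the helpers this series
adds. It shows the three operations described above: marking a VMA locked
while holding the mmap lock for writing, failing the read-side trylock while
the mark is in place, and unlocking everything in bulk by bumping the per-mm
sequence number.

/* Illustrative userspace model of the per-VMA lock scheme; not kernel code.
 * All names here are made up for the sake of the example. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct mm {                              /* stand-in for mm_struct */
	pthread_rwlock_t mmap_lock;
	int lock_seq;                    /* bumped to unlock all VMAs in bulk */
};

struct vma {                             /* stand-in for vm_area_struct */
	struct mm *mm;
	pthread_rwlock_t lock;           /* the per-VMA reader/writer lock */
	int lock_seq;                    /* == mm->lock_seq means "locked by an mm update" */
};

/* Called by an mm updater that already holds mmap_lock for writing. */
static void vma_mark_locked(struct vma *vma)
{
	if (vma->lock_seq == vma->mm->lock_seq)
		return;                          /* already marked during this update */
	pthread_rwlock_wrlock(&vma->lock);       /* wait for existing readers to drain */
	vma->lock_seq = vma->mm->lock_seq;       /* mark: sequence numbers now equal */
	pthread_rwlock_unlock(&vma->lock);
}

/* Called when the mm update is done (mmap_write_unlock time). */
static void vma_unlock_all(struct mm *mm)
{
	mm->lock_seq++;                          /* all marked VMAs become unlocked at once */
}

/* Page-fault side: try the per-VMA lock; failure means "fall back to mmap_lock". */
static bool vma_read_trylock(struct vma *vma)
{
	if (pthread_rwlock_tryrdlock(&vma->lock) != 0)
		return false;
	if (vma->lock_seq == vma->mm->lock_seq) {    /* VMA is part of an ongoing update */
		pthread_rwlock_unlock(&vma->lock);
		return false;
	}
	return true;
}

static void vma_read_unlock(struct vma *vma)
{
	pthread_rwlock_unlock(&vma->lock);
}

static void simulate_fault(struct vma *vma, const char *when)
{
	if (vma_read_trylock(vma)) {
		printf("fault %s: handled under the per-VMA lock\n", when);
		vma_read_unlock(vma);
	} else {
		printf("fault %s: would fall back to mmap_lock\n", when);
	}
}

int main(void)
{
	struct mm mm = { .mmap_lock = PTHREAD_RWLOCK_INITIALIZER, .lock_seq = 0 };
	struct vma vma = { .mm = &mm, .lock = PTHREAD_RWLOCK_INITIALIZER, .lock_seq = -1 };

	simulate_fault(&vma, "before update");

	pthread_rwlock_wrlock(&mm.mmap_lock);    /* mm update (e.g. vma_merge) begins */
	vma_mark_locked(&vma);
	simulate_fault(&vma, "during update");
	vma_unlock_all(&mm);                     /* bulk unlock at mmap_write_unlock time */
	pthread_rwlock_unlock(&mm.mmap_lock);

	simulate_fault(&vma, "after update");
	return 0;
}

In the kernel, the "mark" side corresponds to mm updaters that already hold
mmap_lock for writing, the read side to the page fault path
(lock_vma_under_rcu() in this series), and the bulk unlock to
mmap_write_unlock, as described above.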
The patchset implements per-VMA locking only for anonymous pages which are
not in swap, and avoids userfaultfd, as its handling is more complex.
Additional support for file-backed page faults, swapped pages and
userfaultfd can be added incrementally.

Performance benchmarks show benefits similar to, although slightly smaller
than, those of the SPF patchset (~75% of SPF's benefits). Still, with its
lower complexity, this approach might be more desirable.

Since the RFC was posted in September 2022, two separate Google teams
outside of Android evaluated the patchset and confirmed positive results.
Here are the known usecases where per-VMA locks show benefits:

Android:
Launch times of apps with a high number of threads (~100) improve by up to
20%. Each thread mmaps several areas upon startup (stack and thread-local
storage (TLS), thread signal stack, indirect ref table), which requires
taking mmap_lock in write mode. Page faults take mmap_lock in read mode.
During app launch, both thread creation and the page faults establishing the
active working set happen in parallel, and that causes lock contention
between mm writers and readers even if the updates and page faults happen in
different VMAs. Per-VMA locks prevent this contention by providing a more
granular lock.

Google Fibers:
We have several dynamically sized thread pools that spawn new threads under
increased load and reduce their number when idling. For example, Google's
in-process scheduling/threading framework, UMCG/Fibers, is backed by such a
thread pool. When idling, only a small number of idle worker threads are
available; when a spike of incoming requests arrives, each request is
handled in its own "fiber", which is a work item posted onto a UMCG worker
thread; quite often these spikes lead to a number of new threads spawning.
Each new thread needs to allocate and register an RSEQ section on its TLS,
then register itself with the kernel as a UMCG worker thread, and only after
that can it be considered by the in-process UMCG/Fiber scheduler as
available to do useful work. In short, during an incoming workload spike new
threads have to be spawned, and they perform several syscalls (RSEQ
registration, UMCG worker registration, memory allocations) before they can
actually start doing useful work. Removing any bottlenecks on this
thread-startup path will greatly improve our services' latencies when faced
with request/workload spikes.

At high scale, mmap_lock contention during thread creation and stack page
faults leads to user-visible multi-second serving latencies in a similar
pattern to Android app startup. The per-VMA locking patchset has been run
successfully in limited experiments with user-facing production workloads.
In these experiments, we observed that the peak thread creation rate was
high enough that thread creation is no longer a bottleneck.
TCP zerocopy receive:
From the point of view of TCP zerocopy receive, the per-VMA lock patch is
massively beneficial. In today's implementation, a process with N threads,
where N - 1 are performing zerocopy receive and 1 thread is performing
madvise() with the write lock taken (e.g. needs to change vm_flags), will
see all N - 1 receive threads block until the madvise is done. Conversely,
on a busy process receiving a lot of data, an madvise operation that does
need to take the mmap lock in write mode will need to wait for all of the
receives to be done - a lose:lose proposition. Per-VMA locking _removes_
this source of contention entirely, by definition.

There are other benefits for receive as well, chiefly a reduction in
cacheline bouncing across receiving threads for locking/unlocking the single
mmap lock. On an RPC-style synthetic workload with 4KB RPCs:
1a) The find+lock+unlock VMA path in the base case, without the per-VMA lock
    patchset, is about 0.7% of cycles as measured by perf.
1b) mmap_read_lock + mmap_read_unlock in the base case is about 0.5% of
    cycles overall - most of this is within the TCP read hotpath (a small
    fraction is 'other' usage in the system).
2a) The find+lock+unlock VMA path, with the per-VMA patchset and a trivial
    patch written to take advantage of it in TCP, is about 0.4% of cycles
    (down from 0.7% above).
2b) mmap_read_lock + mmap_read_unlock in the per-VMA patchset is < 0.1% of
    cycles and is out of the TCP read hotpath entirely (down from 0.5%
    before; the remaining usage is the 'other' usage in the system).

So, in addition to entirely removing an onerous source of contention, this
also reduces the CPU cycles of TCP receive zerocopy by about 0.5%+ (compared
to overall cycles in perf) for the 'small' RPC scenario.

The patchset structure is:
0001-0008: Enable maple-tree RCU mode
0009-0031: Main per-VMA locks patchset
0032-0033: Performance optimizations

Changes since v3:
- Changed patch [3] to move vma_prepare before vma_adjust_trans_huge
- Dropped patch [4] from the set as unnecessary, per Hyeonggon Yoo
- Changed patch [5] to do VMA locking inside vma_prepare, per Liam Howlett
- Dropped patch [6] from the set as unnecessary, per Liam Howlett

[1] https://lore.kernel.org/all/20220128131006.67712-1-michel@lespinasse.org/
[2] https://lwn.net/Articles/893906/
[3] https://lore.kernel.org/all/20230216051750.3125598-15-surenb@google.com/
[4] https://lore.kernel.org/all/20230216051750.3125598-17-surenb@google.com/
[5] https://lore.kernel.org/all/20230216051750.3125598-18-surenb@google.com/
[6] https://lore.kernel.org/all/20230216051750.3125598-22-surenb@google.com/

The patchset applies cleanly over the mm-unstable branch.

Laurent Dufour (1):
  powerc/mm: try VMA lock-based page fault handling first

Liam Howlett (4):
  maple_tree: Be more cautious about dead nodes
  maple_tree: Detect dead nodes in mas_start()
  maple_tree: Fix freeing of nodes in rcu mode
  maple_tree: remove extra smp_wmb() from mas_dead_leaves()

Liam R. Howlett (4):
  maple_tree: Fix write memory barrier of nodes once dead for RCU mode
  maple_tree: Add smp_rmb() to dead node detection
  maple_tree: Add RCU lock checking to rcu callback functions
  mm: Enable maple tree RCU mode by default.
Michel Lespinasse (1):
  mm: rcu safe VMA freeing

Suren Baghdasaryan (23):
  mm: introduce CONFIG_PER_VMA_LOCK
  mm: move mmap_lock assert function definitions
  mm: add per-VMA lock and helper functions to control it
  mm: mark VMA as being written when changing vm_flags
  mm/mmap: move vma_prepare before vma_adjust_trans_huge
  mm/khugepaged: write-lock VMA while collapsing a huge page
  mm/mmap: write-lock VMAs in vma_prepare before modifying them
  mm/mremap: write-lock VMA while remapping it to a new address range
  mm: write-lock VMAs before removing them from VMA tree
  mm: conditionally write-lock VMA in free_pgtables
  kernel/fork: assert no VMA readers during its destruction
  mm/mmap: prevent pagefault handler from racing with mmu_notifier registration
  mm: introduce vma detached flag
  mm: introduce lock_vma_under_rcu to be used from arch-specific code
  mm: fall back to mmap_lock if vma->anon_vma is not yet set
  mm: add FAULT_FLAG_VMA_LOCK flag
  mm: prevent do_swap_page from handling page faults under VMA lock
  mm: prevent userfaults to be handled under per-vma lock
  mm: introduce per-VMA lock statistics
  x86/mm: try VMA lock-based page fault handling first
  arm64/mm: try VMA lock-based page fault handling first
  mm/mmap: free vm_area_struct without call_rcu in exit_mmap
  mm: separate vma->lock from vm_area_struct

 arch/arm64/Kconfig                     |   1 +
 arch/arm64/mm/fault.c                  |  36 ++++
 arch/powerpc/mm/fault.c                |  41 ++++
 arch/powerpc/platforms/powernv/Kconfig |   1 +
 arch/powerpc/platforms/pseries/Kconfig |   1 +
 arch/x86/Kconfig                       |   1 +
 arch/x86/mm/fault.c                    |  36 ++++
 include/linux/mm.h                     | 108 +++++++++-
 include/linux/mm_types.h               |  32 ++-
 include/linux/mmap_lock.h              |  37 ++--
 include/linux/vm_event_item.h          |   6 +
 include/linux/vmstat.h                 |   6 +
 kernel/fork.c                          |  99 +++++++--
 lib/maple_tree.c                       | 269 +++++++++++++++++--------
 mm/Kconfig                             |  12 ++
 mm/Kconfig.debug                       |   6 +
 mm/init-mm.c                           |   3 +
 mm/internal.h                          |   2 +-
 mm/khugepaged.c                        |   5 +
 mm/memory.c                            |  72 ++++++-
 mm/mmap.c                              |  53 +++--
 mm/mremap.c                            |   1 +
 mm/nommu.c                             |   5 +
 mm/rmap.c                              |  31 +--
 mm/vmstat.c                            |   6 +
 tools/testing/radix-tree/maple.c       |  16 ++

 26 files changed, 737 insertions(+), 149 deletions(-)
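
For reviewers who want a quick feel for the arch-side changes before reading
the individual patches: "x86/mm: try VMA lock-based page fault handling
first" and its arm64/powerpc counterparts all follow roughly the shape
sketched below. The helpers (lock_vma_under_rcu(), vma_end_read(),
FAULT_FLAG_VMA_LOCK and the VMA lock statistics) are the ones introduced by
this series, but the wrapper function and the condensed control flow are
purely illustrative - a sketch, not the literal diff.

/*
 * Illustrative sketch only, not the literal patch code: try the per-VMA
 * lock first, and fall back to the existing mmap_lock path otherwise.
 */
static vm_fault_t try_vma_locked_fault(struct mm_struct *mm,
				       unsigned long address,
				       unsigned int flags,
				       struct pt_regs *regs)
{
	struct vm_area_struct *vma;
	vm_fault_t fault;

	/* RCU-walk the VMA tree and take the per-VMA lock for read. */
	vma = lock_vma_under_rcu(mm, address);
	if (!vma)
		return VM_FAULT_RETRY;	/* caller falls back to mmap_lock */

	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
	vma_end_read(vma);

	if (!(fault & VM_FAULT_RETRY))
		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
	else
		count_vm_vma_lock_event(VMA_LOCK_RETRY);

	return fault;
}

A VM_FAULT_RETRY result from this path simply means the fault is retried
under the traditional mmap_lock-protected path, so the fallback behavior
described earlier in this letter is preserved.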