From patchwork Mon Feb 27 17:36:32 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13153996
Date: Mon, 27 Feb 2023 09:36:32 -0800
In-Reply-To: <20230227173632.3292573-1-surenb@google.com>
References: <20230227173632.3292573-1-surenb@google.com>
Message-ID: <20230227173632.3292573-34-surenb@google.com>
Subject: [PATCH v4 33/33] mm: separate vma->lock from vm_area_struct
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: michel@lespinasse.org, jglisse@google.com, mhocko@suse.com, vbabka@suse.cz,
    hannes@cmpxchg.org, mgorman@techsingularity.net, dave@stgolabs.net,
    willy@infradead.org, liam.howlett@oracle.com, peterz@infradead.org,
    ldufour@linux.ibm.com, paulmck@kernel.org, mingo@redhat.com, will@kernel.org,
    luto@kernel.org, songliubraving@fb.com, peterx@redhat.com, david@redhat.com,
    dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de,
    kent.overstreet@linux.dev, punit.agrawal@bytedance.com, lstoakes@gmail.com,
    peterjung1337@gmail.com, rientjes@google.com, chriscli@google.com,
    axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
    rppt@kernel.org, jannh@google.com, shakeelb@google.com, tatashin@google.com,
    edumazet@google.com, gthelen@google.com, gurua@google.com,
    arjunroy@google.com, soheil@google.com, leewalsh@google.com, posk@google.com,
    michalechner92@googlemail.com, linux-mm@kvack.org,
    linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
    x86@kernel.org, linux-kernel@vger.kernel.org, kernel-team@android.com,
    Suren Baghdasaryan
vma->lock being part of the vm_area_struct causes a performance regression
during page faults: under contention the lock's count and owner fields are
constantly updated, and because other vm_area_struct fields used during page
fault handling sit next to them, this causes constant cache line bouncing.
Fix that by moving the lock outside of the vm_area_struct.

All attempts to keep vma->lock inside vm_area_struct in a separate cache line
still produced a performance regression, especially on NUMA machines. The
smallest regression was achieved when the lock was placed in the fourth cache
line, but that bloats vm_area_struct to 256 bytes.

Considering the performance and memory impact, a separate lock looks like the
best option. It increases the memory footprint of each VMA, but that can be
optimized later if the new size causes issues.

Note that after this change vma_init() no longer allocates or initializes
vma->lock. A number of drivers allocate a pseudo VMA on the stack but never
use the VMA's lock, so it does not need to be allocated for them. Future
drivers that need the VMA lock should use vm_area_alloc()/vm_area_free() to
allocate the VMA.
Signed-off-by: Suren Baghdasaryan
---
 include/linux/mm.h       | 23 ++++++-------
 include/linux/mm_types.h |  6 +++-
 kernel/fork.c            | 73 ++++++++++++++++++++++++++++++++--------
 3 files changed, 74 insertions(+), 28 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5e142bfe7a58..3d4bb18dfcb7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -627,12 +627,6 @@ struct vm_operations_struct {
 };
 
 #ifdef CONFIG_PER_VMA_LOCK
-static inline void vma_init_lock(struct vm_area_struct *vma)
-{
-	init_rwsem(&vma->lock);
-	vma->vm_lock_seq = -1;
-}
-
 /*
  * Try to read-lock a vma. The function is allowed to occasionally yield false
  * locked result to avoid performance overhead, in which case we fall back to
@@ -644,17 +638,17 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	if (vma->vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))
 		return false;
 
-	if (unlikely(down_read_trylock(&vma->lock) == 0))
+	if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))
 		return false;
 
 	/*
 	 * Overflow might produce false locked result.
 	 * False unlocked result is impossible because we modify and check
-	 * vma->vm_lock_seq under vma->lock protection and mm->mm_lock_seq
+	 * vma->vm_lock_seq under vma->vm_lock protection and mm->mm_lock_seq
 	 * modification invalidates all existing locks.
 	 */
 	if (unlikely(vma->vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))) {
-		up_read(&vma->lock);
+		up_read(&vma->vm_lock->lock);
 		return false;
 	}
 	return true;
@@ -663,7 +657,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 static inline void vma_end_read(struct vm_area_struct *vma)
 {
 	rcu_read_lock(); /* keeps vma alive till the end of up_read */
-	up_read(&vma->lock);
+	up_read(&vma->vm_lock->lock);
 	rcu_read_unlock();
 }
 
@@ -681,9 +675,9 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	if (vma->vm_lock_seq == mm_lock_seq)
 		return;
 
-	down_write(&vma->lock);
+	down_write(&vma->vm_lock->lock);
 	vma->vm_lock_seq = mm_lock_seq;
-	up_write(&vma->lock);
+	up_write(&vma->vm_lock->lock);
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
@@ -720,6 +714,10 @@ static inline void vma_mark_detached(struct vm_area_struct *vma,
 
 #endif /* CONFIG_PER_VMA_LOCK */
 
+/*
+ * WARNING: vma_init does not initialize vma->vm_lock.
+ * Use vm_area_alloc()/vm_area_free() if vma needs locking.
+ */
 static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 {
 	static const struct vm_operations_struct dummy_vm_ops = {};
@@ -729,7 +727,6 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_ops = &dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
 	vma_mark_detached(vma, false);
-	vma_init_lock(vma);
 }
 
 /* Use when VMA is not part of the VMA tree and needs no locking */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6768533a6b7c..89bbf7d8a312 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -471,6 +471,10 @@ struct anon_vma_name {
 	char name[];
 };
 
+struct vma_lock {
+	struct rw_semaphore lock;
+};
+
 /*
  * This struct describes a virtual memory area. There is one of these
  * per VM-area/task.  A VM area is any part of the process virtual memory
@@ -510,7 +514,7 @@ struct vm_area_struct {
 
 #ifdef CONFIG_PER_VMA_LOCK
 	int vm_lock_seq;
-	struct rw_semaphore lock;
+	struct vma_lock *vm_lock;
 
 	/* Flag to indicate areas detached from the mm->mm_mt tree */
 	bool detached;
diff --git a/kernel/fork.c b/kernel/fork.c
index ad37f1d0c5ab..75792157f51a 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -451,13 +451,49 @@ static struct kmem_cache *vm_area_cachep;
 /* SLAB cache for mm_struct structures (tsk->mm) */
 static struct kmem_cache *mm_cachep;
 
+#ifdef CONFIG_PER_VMA_LOCK
+
+/* SLAB cache for vm_area_struct.lock */
+static struct kmem_cache *vma_lock_cachep;
+
+static bool vma_lock_alloc(struct vm_area_struct *vma)
+{
+	vma->vm_lock = kmem_cache_alloc(vma_lock_cachep, GFP_KERNEL);
+	if (!vma->vm_lock)
+		return false;
+
+	init_rwsem(&vma->vm_lock->lock);
+	vma->vm_lock_seq = -1;
+
+	return true;
+}
+
+static inline void vma_lock_free(struct vm_area_struct *vma)
+{
+	kmem_cache_free(vma_lock_cachep, vma->vm_lock);
+}
+
+#else /* CONFIG_PER_VMA_LOCK */
+
+static inline bool vma_lock_alloc(struct vm_area_struct *vma) { return true; }
+static inline void vma_lock_free(struct vm_area_struct *vma) {}
+
+#endif /* CONFIG_PER_VMA_LOCK */
+
 struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 {
 	struct vm_area_struct *vma;
 
 	vma = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
-	if (vma)
-		vma_init(vma, mm);
+	if (!vma)
+		return NULL;
+
+	vma_init(vma, mm);
+	if (!vma_lock_alloc(vma)) {
+		kmem_cache_free(vm_area_cachep, vma);
+		return NULL;
+	}
 
 	return vma;
 }
@@ -465,24 +501,30 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 {
 	struct vm_area_struct *new = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
 
-	if (new) {
-		ASSERT_EXCLUSIVE_WRITER(orig->vm_flags);
-		ASSERT_EXCLUSIVE_WRITER(orig->vm_file);
-		/*
-		 * orig->shared.rb may be modified concurrently, but the clone
-		 * will be reinitialized.
-		 */
-		data_race(memcpy(new, orig, sizeof(*new)));
-		INIT_LIST_HEAD(&new->anon_vma_chain);
-		vma_init_lock(new);
-		dup_anon_vma_name(orig, new);
+	if (!new)
+		return NULL;
+
+	ASSERT_EXCLUSIVE_WRITER(orig->vm_flags);
+	ASSERT_EXCLUSIVE_WRITER(orig->vm_file);
+	/*
+	 * orig->shared.rb may be modified concurrently, but the clone
+	 * will be reinitialized.
+	 */
+	data_race(memcpy(new, orig, sizeof(*new)));
+	if (!vma_lock_alloc(new)) {
+		kmem_cache_free(vm_area_cachep, new);
+		return NULL;
 	}
+	INIT_LIST_HEAD(&new->anon_vma_chain);
+	dup_anon_vma_name(orig, new);
+
 	return new;
 }
 
 void __vm_area_free(struct vm_area_struct *vma)
 {
 	free_anon_vma_name(vma);
+	vma_lock_free(vma);
 	kmem_cache_free(vm_area_cachep, vma);
 }
 
@@ -493,7 +535,7 @@ static void vm_area_free_rcu_cb(struct rcu_head *head)
 						  vm_rcu);
 
 	/* The vma should not be locked while being destroyed. */
-	VM_BUG_ON_VMA(rwsem_is_locked(&vma->lock), vma);
+	VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock->lock), vma);
 	__vm_area_free(vma);
 }
 #endif
@@ -3160,6 +3202,9 @@ void __init proc_caches_init(void)
 			NULL);
 	vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT);
+#ifdef CONFIG_PER_VMA_LOCK
+	vma_lock_cachep = KMEM_CACHE(vma_lock, SLAB_PANIC|SLAB_ACCOUNT);
+#endif
 	mmap_init();
 	nsproxy_cache_init();
 }