From patchwork Thu Feb 13 22:46:50 2025
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13974141
Date: Thu, 13 Feb 2025 14:46:50 -0800
In-Reply-To: <20250213224655.1680278-1-surenb@google.com>
References: <20250213224655.1680278-1-surenb@google.com>
Message-ID: <20250213224655.1680278-14-surenb@google.com>
Subject: [PATCH v10 13/18] mm: move lesser used vma_area_struct members into the last cacheline
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, willy@infradead.org, liam.howlett@oracle.com,
 lorenzo.stoakes@oracle.com, david.laight.linux@gmail.com, mhocko@suse.com,
 vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
 mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com,
 oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
 dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
 lokeshgidra@google.com, minchan@google.com, jannh@google.com,
 shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com,
 klarasmodin@gmail.com, richard.weiyang@gmail.com, corbet@lwn.net,
 linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel-team@android.com, surenb@google.com

Move several vm_area_struct members which are rarely or never used
during page fault handling into the last cacheline to better pack
vm_area_struct. As a result, vm_area_struct will fit into 3 as opposed
to 4 cachelines.
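The layout shown below is pahole output; an equivalent report can be
regenerated from a built kernel with "pahole -C vm_area_struct vmlinux".
If the new 3-cacheline bound were to be guarded at build time, a minimal
sketch (not part of this patch; it assumes the 64-byte cacheline layout
reported below and static_assert() from <linux/build_bug.h>) could look
like:

	/*
	 * Illustrative only, not part of this patch: with 64-byte
	 * cachelines, catch any future growth of vm_area_struct past
	 * 3 cachelines at build time.
	 */
	static_assert(sizeof(struct vm_area_struct) <= 3 * 64,
		      "vm_area_struct no longer fits in 3 cachelines");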
New typical vm_area_struct layout:

struct vm_area_struct {
	union {
		struct {
			long unsigned int vm_start;       /*     0     8 */
			long unsigned int vm_end;         /*     8     8 */
		};                                        /*     0    16 */
		freeptr_t          vm_freeptr;            /*     0     8 */
	};                                                /*     0    16 */
	struct mm_struct *         vm_mm;                 /*    16     8 */
	pgprot_t                   vm_page_prot;          /*    24     8 */
	union {
		const vm_flags_t   vm_flags;              /*    32     8 */
		vm_flags_t         __vm_flags;            /*    32     8 */
	};                                                /*    32     8 */
	unsigned int               vm_lock_seq;           /*    40     4 */

	/* XXX 4 bytes hole, try to pack */

	struct list_head           anon_vma_chain;        /*    48    16 */
	/* --- cacheline 1 boundary (64 bytes) --- */
	struct anon_vma *          anon_vma;              /*    64     8 */
	const struct vm_operations_struct  * vm_ops;      /*    72     8 */
	long unsigned int          vm_pgoff;              /*    80     8 */
	struct file *              vm_file;               /*    88     8 */
	void *                     vm_private_data;       /*    96     8 */
	atomic_long_t              swap_readahead_info;   /*   104     8 */
	struct mempolicy *         vm_policy;             /*   112     8 */
	struct vma_numab_state *   numab_state;           /*   120     8 */
	/* --- cacheline 2 boundary (128 bytes) --- */
	refcount_t vm_refcnt (__aligned__(64));           /*   128     4 */

	/* XXX 4 bytes hole, try to pack */

	struct {
		struct rb_node     rb (__aligned__(8));   /*   136    24 */
		long unsigned int  rb_subtree_last;       /*   160     8 */
	} __attribute__((__aligned__(8))) shared;         /*   136    32 */
	struct anon_vma_name *     anon_name;             /*   168     8 */
	struct vm_userfaultfd_ctx  vm_userfaultfd_ctx;    /*   176     8 */

	/* size: 192, cachelines: 3, members: 18 */
	/* sum members: 176, holes: 2, sum holes: 8 */
	/* padding: 8 */
	/* forced alignments: 2, forced holes: 1, sum forced holes: 4 */
} __attribute__((__aligned__(64)));

Memory consumption per 1000 VMAs becomes 48 pages:

    slabinfo after vm_area_struct changes:
     <name>           ... <objsize> <objperslab> <pagesperslab> : ...
     vm_area_struct   ...    192   42    2 : ...

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
Changes since v9 [1]:
- Update vm_area_struct for tests, per Lorenzo Stoakes
- Add Reviewed-by, per Lorenzo Stoakes

[1] https://lore.kernel.org/all/20250111042604.3230628-13-surenb@google.com/

 include/linux/mm_types.h         | 38 +++++++++++++++-----------------
 tools/testing/vma/vma_internal.h | 37 +++++++++++++++----------------
 2 files changed, 36 insertions(+), 39 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 48ddfedfff83..63ab51699120 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -735,17 +735,6 @@ struct vm_area_struct {
 	 */
 	unsigned int vm_lock_seq;
 #endif
-
-	/*
-	 * For areas with an address space and backing store,
-	 * linkage into the address_space->i_mmap interval tree.
-	 *
-	 */
-	struct {
-		struct rb_node rb;
-		unsigned long rb_subtree_last;
-	} shared;
-
 	/*
 	 * A file's MAP_PRIVATE vma can be in both i_mmap tree and anon_vma
 	 * list, after a COW of one of the file pages. A MAP_SHARED vma
@@ -765,14 +754,6 @@ struct vm_area_struct {
 	struct file * vm_file;		/* File we map to (can be NULL). */
 	void * vm_private_data;		/* was vm_pte (shared mem) */
 
-#ifdef CONFIG_ANON_VMA_NAME
-	/*
-	 * For private and shared anonymous mappings, a pointer to a null
-	 * terminated string containing the name given to the vma, or NULL if
-	 * unnamed. Serialized by mmap_lock. Use anon_vma_name to access.
-	 */
-	struct anon_vma_name *anon_name;
-#endif
 #ifdef CONFIG_SWAP
 	atomic_long_t swap_readahead_info;
 #endif
@@ -785,7 +766,6 @@ struct vm_area_struct {
 #ifdef CONFIG_NUMA_BALANCING
 	struct vma_numab_state *numab_state;	/* NUMA Balancing state */
 #endif
-	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
 #ifdef CONFIG_PER_VMA_LOCK
 	/* Unstable RCU readers are allowed to read this. */
 	refcount_t vm_refcnt ____cacheline_aligned_in_smp;
@@ -793,6 +773,24 @@
 	struct lockdep_map vmlock_dep_map;
 #endif
 #endif
+	/*
+	 * For areas with an address space and backing store,
+	 * linkage into the address_space->i_mmap interval tree.
+	 *
+	 */
+	struct {
+		struct rb_node rb;
+		unsigned long rb_subtree_last;
+	} shared;
+#ifdef CONFIG_ANON_VMA_NAME
+	/*
+	 * For private and shared anonymous mappings, a pointer to a null
+	 * terminated string containing the name given to the vma, or NULL if
+	 * unnamed. Serialized by mmap_lock. Use anon_vma_name to access.
+	 */
+	struct anon_vma_name *anon_name;
+#endif
+	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
 } __randomize_layout;
 
 #ifdef CONFIG_NUMA
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index ba838097d3f6..b385170fbb8f 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -279,16 +279,6 @@ struct vm_area_struct {
 	unsigned int vm_lock_seq;
 #endif
 
-	/*
-	 * For areas with an address space and backing store,
-	 * linkage into the address_space->i_mmap interval tree.
-	 *
-	 */
-	struct {
-		struct rb_node rb;
-		unsigned long rb_subtree_last;
-	} shared;
-
 	/*
 	 * A file's MAP_PRIVATE vma can be in both i_mmap tree and anon_vma
 	 * list, after a COW of one of the file pages. A MAP_SHARED vma
@@ -308,14 +298,6 @@ struct vm_area_struct {
 	struct file * vm_file;		/* File we map to (can be NULL). */
 	void * vm_private_data;		/* was vm_pte (shared mem) */
 
-#ifdef CONFIG_ANON_VMA_NAME
-	/*
-	 * For private and shared anonymous mappings, a pointer to a null
-	 * terminated string containing the name given to the vma, or NULL if
-	 * unnamed. Serialized by mmap_lock. Use anon_vma_name to access.
-	 */
-	struct anon_vma_name *anon_name;
-#endif
 #ifdef CONFIG_SWAP
 	atomic_long_t swap_readahead_info;
 #endif
@@ -328,11 +310,28 @@ struct vm_area_struct {
 #ifdef CONFIG_NUMA_BALANCING
 	struct vma_numab_state *numab_state;	/* NUMA Balancing state */
 #endif
-	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
 #ifdef CONFIG_PER_VMA_LOCK
 	/* Unstable RCU readers are allowed to read this. */
 	refcount_t vm_refcnt;
 #endif
+	/*
+	 * For areas with an address space and backing store,
+	 * linkage into the address_space->i_mmap interval tree.
+	 *
+	 */
+	struct {
+		struct rb_node rb;
+		unsigned long rb_subtree_last;
+	} shared;
+#ifdef CONFIG_ANON_VMA_NAME
+	/*
+	 * For private and shared anonymous mappings, a pointer to a null
+	 * terminated string containing the name given to the vma, or NULL if
+	 * unnamed. Serialized by mmap_lock. Use anon_vma_name to access.
+	 */
+	struct anon_vma_name *anon_name;
+#endif
+	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
 } __randomize_layout;
 
 struct vm_fault {};
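For reference, the 48-pages figure quoted above follows directly from
the slabinfo numbers: with 192-byte objects packed 42 to a 2-page slab,
1000 VMAs need 1000 / 42 rounded up = 24 slabs, i.e. 24 * 2 = 48 pages.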