From patchwork Mon Jan 27 15:50:40 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
X-Patchwork-Id: 13951575
From: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
To: Andrew Morton
Cc: "Liam R. Howlett", Vlastimil Babka, Jann Horn, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/5] mm: simplify vma merge structure and expand comments
Date: Mon, 27 Jan 2025 15:50:40 +0000
X-Mailer: git-send-email 2.48.0
The merge code, while much improved, still has a number of points of confusion. As part of a broader series cleaning this up to make it more maintainable, we start by addressing some confusion around vma_merge_struct fields.

So far, the caller either provides no vmg->vma (a new VMA) or supplies the existing VMA which is being altered, setting vmg->start, vmg->end and vmg->pgoff to the proposed VMA dimensions. vmg->vma is then updated, as are vmg->start, end and pgoff, as the merge process proceeds and the appropriate merge strategy is determined.

This is rather confusing, as vmg->vma starts off as the 'middle' VMA between vmg->prev and vmg->next, but becomes the 'target' VMA, except in one specific edge case (merge next, shrink middle).

In this patch we introduce vmg->middle to describe the VMA that lies between vmg->prev and vmg->next, and which does NOT change during the merge operation.

We replace vmg->vma with vmg->target, and use this only during the merge operation itself. Aside from the merge right, shrink middle case, this becomes the VMA that forms the basis of the VMA that is returned. This edge case can be addressed in a future commit.

We also add a number of comments to explain what is going on.

Finally, we adjust the ASCII diagrams showing each merge case in vma_merge_existing_range() to be clearer - the arrow range previously showed the area spanned by vmg->start, end, but it is clearer to change this to show the final merged VMA.

This patch has no change in functional behaviour.
Signed-off-by: Lorenzo Stoakes
Reviewed-by: Vlastimil Babka
---
 mm/debug.c              |  18 ++---
 mm/mmap.c               |   2 +-
 mm/vma.c                | 166 +++++++++++++++++++++------------------
 mm/vma.h                |  42 ++++++++--
 tools/testing/vma/vma.c |  52 ++++++-------
 5 files changed, 159 insertions(+), 121 deletions(-)

diff --git a/mm/debug.c b/mm/debug.c
index 8d2acf432385..c9e07651677b 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -261,7 +261,7 @@ void dump_vmg(const struct vma_merge_struct *vmg, const char *reason)
 	pr_warn("vmg %px state: mm %px pgoff %lx\n"
 		"vmi %px [%lx,%lx)\n"
-		"prev %px next %px vma %px\n"
+		"prev %px middle %px next %px target %px\n"
 		"start %lx end %lx flags %lx\n"
 		"file %px anon_vma %px policy %px\n"
 		"uffd_ctx %px\n"
@@ -270,7 +270,7 @@ void dump_vmg(const struct vma_merge_struct *vmg, const char *reason)
 		vmg, vmg->mm, vmg->pgoff,
 		vmg->vmi, vmg->vmi ? vma_iter_addr(vmg->vmi) : 0,
 		vmg->vmi ? vma_iter_end(vmg->vmi) : 0,
-		vmg->prev, vmg->next, vmg->vma,
+		vmg->prev, vmg->middle, vmg->next, vmg->target,
 		vmg->start, vmg->end, vmg->flags,
 		vmg->file, vmg->anon_vma, vmg->policy,
 #ifdef CONFIG_USERFAULTFD
@@ -288,13 +288,6 @@ void dump_vmg(const struct vma_merge_struct *vmg, const char *reason)
 		pr_warn("vmg %px mm: (NULL)\n", vmg);
 	}
 
-	if (vmg->vma) {
-		pr_warn("vmg %px vma:\n", vmg);
-		dump_vma(vmg->vma);
-	} else {
-		pr_warn("vmg %px vma: (NULL)\n", vmg);
-	}
-
 	if (vmg->prev) {
 		pr_warn("vmg %px prev:\n", vmg);
 		dump_vma(vmg->prev);
@@ -302,6 +295,13 @@ void dump_vmg(const struct vma_merge_struct *vmg, const char *reason)
 		pr_warn("vmg %px prev: (NULL)\n", vmg);
 	}
 
+	if (vmg->middle) {
+		pr_warn("vmg %px middle:\n", vmg);
+		dump_vma(vmg->middle);
+	} else {
+		pr_warn("vmg %px middle: (NULL)\n", vmg);
+	}
+
 	if (vmg->next) {
 		pr_warn("vmg %px next:\n", vmg);
 		dump_vma(vmg->next);
diff --git a/mm/mmap.c b/mm/mmap.c
index cda01071c7b1..6401a1d73f4a 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1707,7 +1707,7 @@ int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift)
 	/*
 	 * cover the whole range: [new_start, old_end)
 	 */
-	vmg.vma = vma;
+	vmg.middle = vma;
 	if (vma_expand(&vmg))
 		return -ENOMEM;
diff --git a/mm/vma.c b/mm/vma.c
index af1d549b179c..68a301a76297 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -52,7 +52,7 @@ struct mmap_state {
 		.pgoff = (map_)->pgoff,				\
 		.file = (map_)->file,				\
 		.prev = (map_)->prev,				\
-		.vma = vma_,					\
+		.middle = vma_,					\
 		.next = (vma_) ? NULL : (map_)->next,		\
 		.state = VMA_MERGE_START,			\
 		.merge_flags = VMG_FLAG_DEFAULT,		\
@@ -639,7 +639,7 @@ static int commit_merge(struct vma_merge_struct *vmg,
 {
 	struct vma_prepare vp;
 
-	init_multi_vma_prep(&vp, vmg->vma, adjust, remove, remove2);
+	init_multi_vma_prep(&vp, vmg->target, adjust, remove, remove2);
 
 	VM_WARN_ON(vp.anon_vma && adjust && adjust->anon_vma &&
 		   vp.anon_vma != adjust->anon_vma);
@@ -652,15 +652,15 @@ static int commit_merge(struct vma_merge_struct *vmg,
 			   adjust->vm_end);
 	}
 
-	if (vma_iter_prealloc(vmg->vmi, vmg->vma))
+	if (vma_iter_prealloc(vmg->vmi, vmg->target))
 		return -ENOMEM;
 
 	vma_prepare(&vp);
-	vma_adjust_trans_huge(vmg->vma, vmg->start, vmg->end, adj_start);
-	vma_set_range(vmg->vma, vmg->start, vmg->end, vmg->pgoff);
+	vma_adjust_trans_huge(vmg->target, vmg->start, vmg->end, adj_start);
+	vma_set_range(vmg->target, vmg->start, vmg->end, vmg->pgoff);
 
 	if (expanded)
-		vma_iter_store(vmg->vmi, vmg->vma);
+		vma_iter_store(vmg->vmi, vmg->target);
 
 	if (adj_start) {
 		adjust->vm_start += adj_start;
@@ -671,7 +671,7 @@ static int commit_merge(struct vma_merge_struct *vmg,
 		}
 	}
 
-	vma_complete(&vp, vmg->vmi, vmg->vma->vm_mm);
+	vma_complete(&vp, vmg->vmi, vmg->target->vm_mm);
 
 	return 0;
 }
@@ -694,8 +694,9 @@ static bool can_merge_remove_vma(struct vm_area_struct *vma)
  * identical properties.
  *
  * This function checks for the existence of any such mergeable VMAs and updates
- * the maple tree describing the @vmg->vma->vm_mm address space to account for
- * this, as well as any VMAs shrunk/expanded/deleted as a result of this merge.
+ * the maple tree describing the @vmg->middle->vm_mm address space to account
+ * for this, as well as any VMAs shrunk/expanded/deleted as a result of this
+ * merge.
  *
  * As part of this operation, if a merge occurs, the @vmg object will have its
  * vma, start, end, and pgoff fields modified to execute the merge. Subsequent
@@ -704,45 +705,47 @@
  * Returns: The merged VMA if merge succeeds, or NULL otherwise.
  *
  * ASSUMPTIONS:
- * - The caller must assign the VMA to be modifed to @vmg->vma.
+ * - The caller must assign the VMA to be modifed to @vmg->middle.
  * - The caller must have set @vmg->prev to the previous VMA, if there is one.
  * - The caller must not set @vmg->next, as we determine this.
  * - The caller must hold a WRITE lock on the mm_struct->mmap_lock.
- * - vmi must be positioned within [@vmg->vma->vm_start, @vmg->vma->vm_end).
+ * - vmi must be positioned within [@vmg->middle->vm_start, @vmg->middle->vm_end).
  */
 static __must_check struct vm_area_struct *vma_merge_existing_range(
 		struct vma_merge_struct *vmg)
 {
-	struct vm_area_struct *vma = vmg->vma;
+	struct vm_area_struct *middle = vmg->middle;
 	struct vm_area_struct *prev = vmg->prev;
 	struct vm_area_struct *next, *res;
 	struct vm_area_struct *anon_dup = NULL;
 	struct vm_area_struct *adjust = NULL;
 	unsigned long start = vmg->start;
 	unsigned long end = vmg->end;
-	bool left_side = vma && start == vma->vm_start;
-	bool right_side = vma && end == vma->vm_end;
+	bool left_side = middle && start == middle->vm_start;
+	bool right_side = middle && end == middle->vm_end;
 	int err = 0;
 	long adj_start = 0;
-	bool merge_will_delete_vma, merge_will_delete_next;
+	bool merge_will_delete_middle, merge_will_delete_next;
 	bool merge_left, merge_right, merge_both;
 	bool expanded;
 
 	mmap_assert_write_locked(vmg->mm);
-	VM_WARN_ON_VMG(!vma, vmg); /* We are modifying a VMA, so caller must specify. */
+	VM_WARN_ON_VMG(!middle, vmg); /* We are modifying a VMA, so caller must specify. */
 	VM_WARN_ON_VMG(vmg->next, vmg); /* We set this. */
 	VM_WARN_ON_VMG(prev && start <= prev->vm_start, vmg);
 	VM_WARN_ON_VMG(start >= end, vmg);
 
 	/*
-	 * If vma == prev, then we are offset into a VMA. Otherwise, if we are
+	 * If middle == prev, then we are offset into a VMA. Otherwise, if we are
 	 * not, we must span a portion of the VMA.
 	 */
-	VM_WARN_ON_VMG(vma && ((vma != prev && vmg->start != vma->vm_start) ||
-		       vmg->end > vma->vm_end), vmg);
-	/* The vmi must be positioned within vmg->vma. */
-	VM_WARN_ON_VMG(vma && !(vma_iter_addr(vmg->vmi) >= vma->vm_start &&
-		       vma_iter_addr(vmg->vmi) < vma->vm_end), vmg);
+	VM_WARN_ON_VMG(middle &&
+		       ((middle != prev && vmg->start != middle->vm_start) ||
+			vmg->end > middle->vm_end), vmg);
+	/* The vmi must be positioned within vmg->middle. */
+	VM_WARN_ON_VMG(middle &&
+		       !(vma_iter_addr(vmg->vmi) >= middle->vm_start &&
+			 vma_iter_addr(vmg->vmi) < middle->vm_end), vmg);
 
 	vmg->state = VMA_MERGE_NOMERGE;
@@ -776,13 +779,13 @@ static __must_check struct vm_area_struct *vma_merge_existing_range(
 	merge_both = merge_left && merge_right;
 
 	/* If we span the entire VMA, a merge implies it will be deleted. */
-	merge_will_delete_vma = left_side && right_side;
+	merge_will_delete_middle = left_side && right_side;
 
 	/*
-	 * If we need to remove vma in its entirety but are unable to do so,
+	 * If we need to remove middle in its entirety but are unable to do so,
 	 * we have no sensible recourse but to abort the merge.
 	 */
-	if (merge_will_delete_vma && !can_merge_remove_vma(vma))
+	if (merge_will_delete_middle && !can_merge_remove_vma(middle))
 		return NULL;
 
 	/*
@@ -793,7 +796,7 @@ static __must_check struct vm_area_struct *vma_merge_existing_range(
 	/*
 	 * If we cannot delete next, then we can reduce the operation to merging
-	 * prev and vma (thereby deleting vma).
+	 * prev and middle (thereby deleting middle).
 	 */
 	if (merge_will_delete_next && !can_merge_remove_vma(next)) {
 		merge_will_delete_next = false;
@@ -801,8 +804,8 @@
 		merge_both = false;
 	}
 
-	/* No matter what happens, we will be adjusting vma. */
-	vma_start_write(vma);
+	/* No matter what happens, we will be adjusting middle. */
+	vma_start_write(middle);
 
 	if (merge_left)
 		vma_start_write(prev);
@@ -812,13 +815,13 @@
 
 	if (merge_both) {
 		/*
-		 * |<----->|
-		 * |-------*********-------|
-		 *   prev     vma     next
-		 *  extend   delete  delete
+		 * |<-------------------->|
+		 * |-------********-------|
+		 *   prev   middle   next
+		 *  extend  delete  delete
 		 */
-		vmg->vma = prev;
+		vmg->target = prev;
 		vmg->start = prev->vm_start;
 		vmg->end = next->vm_end;
 		vmg->pgoff = prev->vm_pgoff;
@@ -826,78 +829,79 @@
 		/*
 		 * We already ensured anon_vma compatibility above, so now it's
 		 * simply a case of, if prev has no anon_vma object, which of
-		 * next or vma contains the anon_vma we must duplicate.
+		 * next or middle contains the anon_vma we must duplicate.
 		 */
-		err = dup_anon_vma(prev, next->anon_vma ? next : vma, &anon_dup);
+		err = dup_anon_vma(prev, next->anon_vma ? next : middle,
+				   &anon_dup);
 	} else if (merge_left) {
 		/*
-		 * |<----->| OR
-		 * |<--------->|
+		 * |<------------>| OR
+		 * |<----------------->|
 		 * |-------*************
-		 *   prev     vma
+		 *   prev    middle
 		 *  extend shrink/delete
 		 */
-		vmg->vma = prev;
+		vmg->target = prev;
 		vmg->start = prev->vm_start;
 		vmg->pgoff = prev->vm_pgoff;
 
-		if (!merge_will_delete_vma) {
-			adjust = vma;
-			adj_start = vmg->end - vma->vm_start;
+		if (!merge_will_delete_middle) {
+			adjust = middle;
+			adj_start = vmg->end - middle->vm_start;
 		}
 
-		err = dup_anon_vma(prev, vma, &anon_dup);
+		err = dup_anon_vma(prev, middle, &anon_dup);
 	} else { /* merge_right */
 		/*
-		 *     |<----->| OR
-		 * |<--------->|
+		 *     |<------------->| OR
+		 * |<----------------->|
 		 * *************-------|
-		 *      vma      next
+		 *    middle     next
 		 * shrink/delete extend
 		 */
 		pgoff_t pglen = PHYS_PFN(vmg->end - vmg->start);
 
 		VM_WARN_ON_VMG(!merge_right, vmg);
-		/* If we are offset into a VMA, then prev must be vma. */
-		VM_WARN_ON_VMG(vmg->start > vma->vm_start && prev && vma != prev, vmg);
+		/* If we are offset into a VMA, then prev must be middle. */
+		VM_WARN_ON_VMG(vmg->start > middle->vm_start && prev && middle != prev, vmg);
 
-		if (merge_will_delete_vma) {
-			vmg->vma = next;
+		if (merge_will_delete_middle) {
+			vmg->target = next;
 			vmg->end = next->vm_end;
 			vmg->pgoff = next->vm_pgoff - pglen;
 		} else {
 			/*
-			 * We shrink vma and expand next.
+			 * We shrink middle and expand next.
 			 *
 			 * IMPORTANT: This is the ONLY case where the final
-			 * merged VMA is NOT vmg->vma, but rather vmg->next.
+			 * merged VMA is NOT vmg->target, but rather vmg->next.
 			 */
-
-			vmg->start = vma->vm_start;
+			vmg->target = middle;
+			vmg->start = middle->vm_start;
 			vmg->end = start;
-			vmg->pgoff = vma->vm_pgoff;
+			vmg->pgoff = middle->vm_pgoff;
 
 			adjust = next;
-			adj_start = -(vma->vm_end - start);
+			adj_start = -(middle->vm_end - start);
 		}
 
-		err = dup_anon_vma(next, vma, &anon_dup);
+		err = dup_anon_vma(next, middle, &anon_dup);
 	}
 
 	if (err)
 		goto abort;
 
 	/*
-	 * In nearly all cases, we expand vmg->vma. There is one exception -
+	 * In nearly all cases, we expand vmg->middle. There is one exception -
 	 * merge_right where we partially span the VMA. In this case we shrink
-	 * the end of vmg->vma and adjust the start of vmg->next accordingly.
+	 * the end of vmg->middle and adjust the start of vmg->next accordingly.
 	 */
-	expanded = !merge_right || merge_will_delete_vma;
+	expanded = !merge_right || merge_will_delete_middle;
 
 	if (commit_merge(vmg, adjust,
-			 merge_will_delete_vma ? vma : NULL,
+			 merge_will_delete_middle ? middle : NULL,
			 merge_will_delete_next ? next : NULL,
			 adj_start, expanded)) {
 		if (anon_dup)
@@ -973,7 +977,7 @@ struct vm_area_struct *vma_merge_new_range(struct vma_merge_struct *vmg)
 	bool just_expand = vmg->merge_flags & VMG_FLAG_JUST_EXPAND;
 
 	mmap_assert_write_locked(vmg->mm);
-	VM_WARN_ON_VMG(vmg->vma, vmg);
+	VM_WARN_ON_VMG(vmg->middle, vmg);
 	/* vmi must point at or before the gap. */
 	VM_WARN_ON_VMG(vma_iter_addr(vmg->vmi) > end, vmg);
@@ -989,13 +993,13 @@ struct vm_area_struct *vma_merge_new_range(struct vma_merge_struct *vmg)
 	/* If we can merge with the next VMA, adjust vmg accordingly. */
 	if (can_merge_right) {
 		vmg->end = next->vm_end;
-		vmg->vma = next;
+		vmg->middle = next;
 	}
 
 	/* If we can merge with the previous VMA, adjust vmg accordingly. */
 	if (can_merge_left) {
 		vmg->start = prev->vm_start;
-		vmg->vma = prev;
+		vmg->middle = prev;
 		vmg->pgoff = prev->vm_pgoff;
 
 		/*
@@ -1017,10 +1021,10 @@ struct vm_area_struct *vma_merge_new_range(struct vma_merge_struct *vmg)
 	 * Now try to expand adjacent VMA(s). This takes care of removing the
 	 * following VMA if we have VMAs on both sides.
 	 */
-	if (vmg->vma && !vma_expand(vmg)) {
-		khugepaged_enter_vma(vmg->vma, vmg->flags);
+	if (vmg->middle && !vma_expand(vmg)) {
+		khugepaged_enter_vma(vmg->middle, vmg->flags);
 		vmg->state = VMA_MERGE_SUCCESS;
-		return vmg->vma;
+		return vmg->middle;
 	}
 
 	return NULL;
@@ -1032,44 +1036,46 @@ struct vm_area_struct *vma_merge_new_range(struct vma_merge_struct *vmg)
  * @vmg: Describes a VMA expansion operation.
  *
  * Expand @vma to vmg->start and vmg->end. Can expand off the start and end.
- * Will expand over vmg->next if it's different from vmg->vma and vmg->end ==
- * vmg->next->vm_end. Checking if the vmg->vma can expand and merge with
+ * Will expand over vmg->next if it's different from vmg->middle and vmg->end ==
+ * vmg->next->vm_end. Checking if the vmg->middle can expand and merge with
  * vmg->next needs to be handled by the caller.
  *
  * Returns: 0 on success.
  *
  * ASSUMPTIONS:
- * - The caller must hold a WRITE lock on vmg->vma->mm->mmap_lock.
- * - The caller must have set @vmg->vma and @vmg->next.
+ * - The caller must hold a WRITE lock on vmg->middle->mm->mmap_lock.
+ * - The caller must have set @vmg->middle and @vmg->next.
  */
 int vma_expand(struct vma_merge_struct *vmg)
 {
 	struct vm_area_struct *anon_dup = NULL;
 	bool remove_next = false;
-	struct vm_area_struct *vma = vmg->vma;
+	struct vm_area_struct *middle = vmg->middle;
 	struct vm_area_struct *next = vmg->next;
 
 	mmap_assert_write_locked(vmg->mm);
 
-	vma_start_write(vma);
-	if (next && (vma != next) && (vmg->end == next->vm_end)) {
+	vma_start_write(middle);
+	if (next && (middle != next) && (vmg->end == next->vm_end)) {
 		int ret;
 
 		remove_next = true;
 		/* This should already have been checked by this point. */
 		VM_WARN_ON_VMG(!can_merge_remove_vma(next), vmg);
 		vma_start_write(next);
-		ret = dup_anon_vma(vma, next, &anon_dup);
+		ret = dup_anon_vma(middle, next, &anon_dup);
 		if (ret)
 			return ret;
 	}
 
 	/* Not merging but overwriting any part of next is not handled. */
 	VM_WARN_ON_VMG(next && !remove_next &&
-		       next != vma && vmg->end > next->vm_start, vmg);
+		       next != middle && vmg->end > next->vm_start, vmg);
 	/* Only handles expanding */
-	VM_WARN_ON_VMG(vma->vm_start < vmg->start || vma->vm_end > vmg->end, vmg);
+	VM_WARN_ON_VMG(middle->vm_start < vmg->start ||
+		       middle->vm_end > vmg->end, vmg);
 
+	vmg->target = middle;
 	if (commit_merge(vmg, NULL, remove_next ? next : NULL, NULL, 0, true))
 		goto nomem;
@@ -1508,7 +1514,7 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
  */
 static struct vm_area_struct *vma_modify(struct vma_merge_struct *vmg)
 {
-	struct vm_area_struct *vma = vmg->vma;
+	struct vm_area_struct *vma = vmg->middle;
 	struct vm_area_struct *merged;
 
 	/* First, try to merge. */
@@ -1605,7 +1611,7 @@ struct vm_area_struct *vma_merge_extend(struct vma_iterator *vmi,
 	VMG_VMA_STATE(vmg, vmi, vma, vma, vma->vm_end, vma->vm_end + delta);
 
 	vmg.next = vma_iter_next_rewind(vmi, NULL);
-	vmg.vma = NULL; /* We use the VMA to populate VMG fields only. */
+	vmg.middle = NULL; /* We use the VMA to populate VMG fields only. */
 
 	return vma_merge_new_range(&vmg);
 }
@@ -1726,7 +1732,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
 		if (new_vma && new_vma->vm_start < addr + len)
 			return NULL; /* should never get here */
 
-		vmg.vma = NULL; /* New VMA range. */
+		vmg.middle = NULL; /* New VMA range. */
 		vmg.pgoff = pgoff;
 		vmg.next = vma_iter_next_rewind(&vmi, NULL);
 		new_vma = vma_merge_new_range(&vmg);
diff --git a/mm/vma.h b/mm/vma.h
index a2e8710b8c47..5b5dd07e478c 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -69,16 +69,48 @@ enum vma_merge_flags {
 	VMG_FLAG_JUST_EXPAND = 1 << 0,
 };
 
-/* Represents a VMA merge operation. */
+/*
+ * Describes a VMA merge operation and is threaded throughout it.
+ *
+ * Any of the fields may be mutated by the merge operation, so no guarantees are
+ * made to the contents of this structure after a merge operation has completed.
+ */
 struct vma_merge_struct {
 	struct mm_struct *mm;
 	struct vma_iterator *vmi;
-	pgoff_t pgoff;
+	/*
+	 * Adjacent VMAs, any of which may be NULL if not present:
+	 *
+	 * |------|--------|------|
+	 * | prev | middle | next |
+	 * |------|--------|------|
+	 *
+	 * middle may not yet exist in the case of a proposed new VMA being
+	 * merged, or it may be an existing VMA.
+	 *
+	 * next may be assigned by the caller.
+	 */
 	struct vm_area_struct *prev;
-	struct vm_area_struct *next; /* Modified by vma_merge(). */
-	struct vm_area_struct *vma; /* Either a new VMA or the one being modified. */
+	struct vm_area_struct *middle;
+	struct vm_area_struct *next;
+	/*
+	 * This is the VMA we ultimately target to become the merged VMA, except
+	 * for the one exception of merge right, shrink next (for details of
+	 * this scenario see vma_merge_existing_range()).
+	 */
+	struct vm_area_struct *target;
+	/*
+	 * Initially, the start, end, pgoff fields are provided by the caller
+	 * and describe the proposed new VMA range, whether modifying an
+	 * existing VMA (which will be 'middle'), or adding a new one.
+	 *
+	 * During the merge process these fields are updated to describe the new
+	 * range _including those VMAs which will be merged_.
+	 */
 	unsigned long start;
 	unsigned long end;
+	pgoff_t pgoff;
+
 	unsigned long flags;
 	struct file *file;
 	struct anon_vma *anon_vma;
@@ -118,8 +150,8 @@ static inline pgoff_t vma_pgoff_offset(struct vm_area_struct *vma,
 		.mm = vma_->vm_mm,		\
 		.vmi = vmi_,			\
 		.prev = prev_,			\
+		.middle = vma_,			\
 		.next = NULL,			\
-		.vma = vma_,			\
 		.start = start_,		\
 		.end = end_,			\
 		.flags = vma_->vm_flags,	\
diff --git a/tools/testing/vma/vma.c b/tools/testing/vma/vma.c
index 04ab45e27fb8..3c0572120e94 100644
--- a/tools/testing/vma/vma.c
+++ b/tools/testing/vma/vma.c
@@ -147,8 +147,8 @@ static void vmg_set_range(struct vma_merge_struct *vmg, unsigned long start,
 	vma_iter_set(vmg->vmi, start);
 
 	vmg->prev = NULL;
+	vmg->middle = NULL;
 	vmg->next = NULL;
-	vmg->vma = NULL;
 
 	vmg->start = start;
 	vmg->end = end;
@@ -338,7 +338,7 @@ static bool test_simple_expand(void)
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
 		.vmi = &vmi,
-		.vma = vma,
+		.middle = vma,
 		.start = 0,
 		.end = 0x3000,
 		.pgoff = 0,
@@ -631,7 +631,7 @@ static bool test_vma_merge_special_flags(void)
 	 */
 	vma = alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, flags);
 	ASSERT_NE(vma, NULL);
-	vmg.vma = vma;
+	vmg.middle = vma;
 
 	for (i = 0; i < ARRAY_SIZE(special_flags); i++) {
 		vm_flags_t special_flag = special_flags[i];
@@ -760,7 +760,7 @@ static bool test_vma_merge_with_close(void)
 	vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags);
 	vmg.prev = vma_prev;
-	vmg.vma = vma;
+	vmg.middle = vma;
 
 	/*
 	 * The VMA being modified in a way that would otherwise merge should
@@ -787,7 +787,7 @@
 	vma->vm_ops = &vm_ops;
 	vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags);
-	vmg.vma = vma;
+	vmg.middle = vma;
 
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	/*
 	 * Initially this is misapprehended as an out of memory report, as the
@@ -817,7 +817,7 @@
 	vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags);
 	vmg.prev = vma_prev;
-	vmg.vma = vma;
+	vmg.middle = vma;
 
 	ASSERT_EQ(merge_existing(&vmg), NULL);
ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); @@ -843,7 +843,7 @@ static bool test_vma_merge_with_close(void) vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags); vmg.prev = vma_prev; - vmg.vma = vma; + vmg.middle = vma; ASSERT_EQ(merge_existing(&vmg), vma_prev); ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); @@ -940,7 +940,7 @@ static bool test_merge_existing(void) vma_next = alloc_and_link_vma(&mm, 0x6000, 0x9000, 6, flags); vma_next->vm_ops = &vm_ops; /* This should have no impact. */ vmg_set_range(&vmg, 0x3000, 0x6000, 3, flags); - vmg.vma = vma; + vmg.middle = vma; vmg.prev = vma; vma->anon_vma = &dummy_anon_vma; ASSERT_EQ(merge_existing(&vmg), vma_next); @@ -973,7 +973,7 @@ static bool test_merge_existing(void) vma_next = alloc_and_link_vma(&mm, 0x6000, 0x9000, 6, flags); vma_next->vm_ops = &vm_ops; /* This should have no impact. */ vmg_set_range(&vmg, 0x2000, 0x6000, 2, flags); - vmg.vma = vma; + vmg.middle = vma; vma->anon_vma = &dummy_anon_vma; ASSERT_EQ(merge_existing(&vmg), vma_next); ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); @@ -1003,7 +1003,7 @@ static bool test_merge_existing(void) vma->vm_ops = &vm_ops; /* This should have no impact. 
*/ vmg_set_range(&vmg, 0x3000, 0x6000, 3, flags); vmg.prev = vma_prev; - vmg.vma = vma; + vmg.middle = vma; vma->anon_vma = &dummy_anon_vma; ASSERT_EQ(merge_existing(&vmg), vma_prev); @@ -1037,7 +1037,7 @@ static bool test_merge_existing(void) vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, flags); vmg_set_range(&vmg, 0x3000, 0x7000, 3, flags); vmg.prev = vma_prev; - vmg.vma = vma; + vmg.middle = vma; vma->anon_vma = &dummy_anon_vma; ASSERT_EQ(merge_existing(&vmg), vma_prev); ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); @@ -1067,7 +1067,7 @@ static bool test_merge_existing(void) vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, flags); vmg_set_range(&vmg, 0x3000, 0x7000, 3, flags); vmg.prev = vma_prev; - vmg.vma = vma; + vmg.middle = vma; vma->anon_vma = &dummy_anon_vma; ASSERT_EQ(merge_existing(&vmg), vma_prev); ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); @@ -1102,37 +1102,37 @@ static bool test_merge_existing(void) vmg_set_range(&vmg, 0x4000, 0x5000, 4, flags); vmg.prev = vma; - vmg.vma = vma; + vmg.middle = vma; ASSERT_EQ(merge_existing(&vmg), NULL); ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); vmg_set_range(&vmg, 0x5000, 0x6000, 5, flags); vmg.prev = vma; - vmg.vma = vma; + vmg.middle = vma; ASSERT_EQ(merge_existing(&vmg), NULL); ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); vmg_set_range(&vmg, 0x6000, 0x7000, 6, flags); vmg.prev = vma; - vmg.vma = vma; + vmg.middle = vma; ASSERT_EQ(merge_existing(&vmg), NULL); ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); vmg_set_range(&vmg, 0x4000, 0x7000, 4, flags); vmg.prev = vma; - vmg.vma = vma; + vmg.middle = vma; ASSERT_EQ(merge_existing(&vmg), NULL); ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); vmg_set_range(&vmg, 0x4000, 0x6000, 4, flags); vmg.prev = vma; - vmg.vma = vma; + vmg.middle = vma; ASSERT_EQ(merge_existing(&vmg), NULL); ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); vmg_set_range(&vmg, 0x5000, 0x6000, 5, flags); vmg.prev = vma; - vmg.vma = vma; + vmg.middle = vma; ASSERT_EQ(merge_existing(&vmg), NULL); ASSERT_EQ(vmg.state, 
VMA_MERGE_NOMERGE); @@ -1197,7 +1197,7 @@ static bool test_anon_vma_non_mergeable(void) vmg_set_range(&vmg, 0x3000, 0x7000, 3, flags); vmg.prev = vma_prev; - vmg.vma = vma; + vmg.middle = vma; ASSERT_EQ(merge_existing(&vmg), vma_prev); ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); @@ -1277,7 +1277,7 @@ static bool test_dup_anon_vma(void) vma_next->anon_vma = &dummy_anon_vma; vmg_set_range(&vmg, 0, 0x5000, 0, flags); - vmg.vma = vma_prev; + vmg.middle = vma_prev; vmg.next = vma_next; ASSERT_EQ(expand_existing(&vmg), 0); @@ -1309,7 +1309,7 @@ static bool test_dup_anon_vma(void) vma_next->anon_vma = &dummy_anon_vma; vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags); vmg.prev = vma_prev; - vmg.vma = vma; + vmg.middle = vma; ASSERT_EQ(merge_existing(&vmg), vma_prev); ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); @@ -1338,7 +1338,7 @@ static bool test_dup_anon_vma(void) vma->anon_vma = &dummy_anon_vma; vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags); vmg.prev = vma_prev; - vmg.vma = vma; + vmg.middle = vma; ASSERT_EQ(merge_existing(&vmg), vma_prev); ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); @@ -1366,7 +1366,7 @@ static bool test_dup_anon_vma(void) vma->anon_vma = &dummy_anon_vma; vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags); vmg.prev = vma_prev; - vmg.vma = vma; + vmg.middle = vma; ASSERT_EQ(merge_existing(&vmg), vma_prev); ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); @@ -1394,7 +1394,7 @@ static bool test_dup_anon_vma(void) vma->anon_vma = &dummy_anon_vma; vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags); vmg.prev = vma; - vmg.vma = vma; + vmg.middle = vma; ASSERT_EQ(merge_existing(&vmg), vma_next); ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); @@ -1432,7 +1432,7 @@ static bool test_vmi_prealloc_fail(void) vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags); vmg.prev = vma_prev; - vmg.vma = vma; + vmg.middle = vma; fail_prealloc = true; @@ -1458,7 +1458,7 @@ static bool test_vmi_prealloc_fail(void) vma->anon_vma = &dummy_anon_vma; vmg_set_range(&vmg, 0, 0x5000, 3, flags); - vmg.vma = vma_prev; + 
vmg.middle = vma_prev; vmg.next = vma; fail_prealloc = true;