From patchwork Tue Apr 26 15:06:53 2022
X-Patchwork-Submitter: "Liam R. Howlett" <liam.howlett@oracle.com>
X-Patchwork-Id: 12827414
From: Liam Howlett <liam.howlett@oracle.com>
To: maple-tree@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Andrew Morton, Yu Zhao
Subject: [PATCH v8 68/70] mm: remove the vma linked list
Date: Tue, 26 Apr 2022 15:06:53 +0000
Message-ID: <20220426150616.3937571-69-Liam.Howlett@oracle.com>
References: <20220426150616.3937571-1-Liam.Howlett@oracle.com>
In-Reply-To: <20220426150616.3937571-1-Liam.Howlett@oracle.com>
X-Mailer: git-send-email 2.35.1
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
From: "Liam R. Howlett" <liam.howlett@oracle.com>

Replace any vm_next use with vma_find().

Update free_pgtables(), unmap_vmas(), and zap_page_range() to use the
maple tree.

Use the new free_pgtables() and unmap_vmas() in do_mas_align_munmap().
At the same time, alter the loop to be more compact.

Now that free_pgtables() and unmap_vmas() take a maple tree as an
argument, rearrange do_mas_align_munmap() to use the new tree to hold
the VMAs to remove.

Remove __vma_link_list() and __vma_unlink_list(), as they are used
exclusively to update the linked list.

Drop the linked list update from __insert_vm_struct().

Rework validation of the tree, as it depended on the linked list.

Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
---
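Two patterns recur throughout the conversion below: vma_find()/mas_find()
replaces "follow vm_next until the limit", and the iterator object rather
than the VMA carries the walk position. The following stand-alone C sketch
models only that lookup contract; toy_vma, toy_find() and the table are
invented for illustration and are not the kernel's maple tree API:

#include <stdio.h>
#include <stddef.h>

struct toy_vma {
	unsigned long vm_start;	/* inclusive, like vma->vm_start */
	unsigned long vm_end;	/* exclusive, like vma->vm_end */
};

/* Return the first entry with vm_end > *index and vm_start <= limit,
 * advancing *index past it, or NULL.  mas_find() behaves analogously,
 * except the position lives in the ma_state. */
static struct toy_vma *toy_find(struct toy_vma *tbl, size_t n,
				unsigned long *index, unsigned long limit)
{
	for (size_t i = 0; i < n; i++)
		if (tbl[i].vm_end > *index && tbl[i].vm_start <= limit) {
			*index = tbl[i].vm_end;
			return &tbl[i];
		}
	return NULL;
}

int main(void)
{
	struct toy_vma mm[] = {
		{ 0x1000, 0x2000 }, { 0x2000, 0x3000 }, { 0x5000, 0x6000 },
	};
	unsigned long index = 0x1500;
	/* Like the kernel callers, assume the starting VMA exists. */
	struct toy_vma *vma = toy_find(mm, 3, &index, 0x5fff);

	/* Same do/while shape as the new unmap_vmas()/free_pgtables(). */
	do {
		printf("visit [%#lx, %#lx)\n", vma->vm_start, vma->vm_end);
	} while ((vma = toy_find(mm, 3, &index, 0x5fff)) != NULL);
	return 0;
}

Because the walk state lives in the iterator, remove_vma() below no longer
needs to return the next element, and callers stop touching vm_next at all.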
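One subtlety worth calling out: free_pgtables() below passes "ceiling - 1"
as the search limit while USER_PGTABLES_CEILING may be 0, relying on
well-defined unsigned wrap-around (the in-tree comment says as much). A
trivial stand-alone check of that C fact, not kernel code:

#include <stdio.h>
#include <limits.h>

int main(void)
{
	unsigned long ceiling = 0;

	/* 0UL - 1 wraps to ULONG_MAX, so the limit becomes "end of the
	 * address space" rather than an error. */
	if (ceiling - 1 == ULONG_MAX)
		printf("ceiling - 1 == ULONG_MAX\n");
	return 0;
}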
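do_mas_align_munmap() also changes shape: instead of unlinking a list span
in place, it first moves every doomed VMA into a private side tree
(mt_detach) while counting them, and only then clears the range in mm_mt
and tears the detached set down. A rough stand-alone sketch of that
two-phase pattern, again with invented names standing in for the kernel
interfaces:

#include <stdio.h>

struct toy_vma { unsigned long vm_start, vm_end; int live; };

int main(void)
{
	struct toy_vma mm[] = {
		{ 0x1000, 0x2000, 1 }, { 0x2000, 0x3000, 1 },
		{ 0x3000, 0x4000, 1 }, { 0x8000, 0x9000, 1 },
	};
	struct toy_vma *detached[4];
	unsigned long start = 0x2000, end = 0x4000;
	int count = 0;

	/* Phase 1: detach everything overlapping [start, end), the
	 * moral equivalent of munmap_sidetree(). */
	for (int i = 0; i < 4; i++)
		if (mm[i].vm_start < end && mm[i].vm_end > start) {
			mm[i].live = 0;
			detached[count++] = &mm[i];
		}

	/* Phase 2: operate only on the detached set, as the new
	 * unmap_region()/remove_mt() calls do. */
	for (int i = 0; i < count; i++)
		printf("tear down [%#lx, %#lx)\n",
		       detached[i]->vm_start, detached[i]->vm_end);
	printf("map_count -= %d\n", count);
	return 0;
}

Keeping the count and cross-checking it against the side tree is what lets
the CONFIG_DEBUG_VM_MAPLE_TREE block assert that no VMA is lost at the
point of no return.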
 include/linux/mm.h       |   5 +-
 include/linux/mm_types.h |   4 -
 kernel/fork.c            |  15 +-
 mm/debug.c               |  14 +-
 mm/internal.h            |   8 +-
 mm/memory.c              |  33 ++-
 mm/mmap.c                | 433 ++++++++++++++++-----------------
 mm/nommu.c               |   5 -
 mm/util.c                |  40 ----
 9 files changed, 218 insertions(+), 339 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 110587299681..3200c954c385 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1868,8 +1868,9 @@ void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
 		   unsigned long size);
 void zap_page_range(struct vm_area_struct *vma, unsigned long address,
 		    unsigned long size);
-void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
-		unsigned long start, unsigned long end);
+void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt,
+		struct vm_area_struct *start_vma, unsigned long start,
+		unsigned long end);

 struct mmu_notifier_range;

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index ca47e66d1b14..b724cbe933e6 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -392,8 +392,6 @@ struct vm_area_struct {
 	unsigned long vm_end;		/* The first byte after our end address
 					   within vm_mm. */

-	/* linked list of VM areas per task, sorted by address */
-	struct vm_area_struct *vm_next, *vm_prev;
 	struct mm_struct *vm_mm;	/* The address space we belong to. */

 	/*
@@ -457,7 +455,6 @@ struct vm_area_struct {
 struct kioctx_table;
 struct mm_struct {
 	struct {
-		struct vm_area_struct *mmap;		/* list of VMAs */
 		struct maple_tree mm_mt;
 #ifdef CONFIG_MMU
 		unsigned long (*get_unmapped_area) (struct file *filp,
@@ -472,7 +469,6 @@ struct mm_struct {
 		unsigned long mmap_compat_legacy_base;
 #endif
 		unsigned long task_size;	/* size of task vm space */
-		unsigned long highest_vm_end;	/* highest vma end address */
 		pgd_t * pgd;

 #ifdef CONFIG_MEMBARRIER
diff --git a/kernel/fork.c b/kernel/fork.c
index fa0ec0783784..0b032f02c658 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -474,7 +474,6 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 		 */
 		*new = data_race(*orig);
 		INIT_LIST_HEAD(&new->anon_vma_chain);
-		new->vm_next = new->vm_prev = NULL;
 		dup_anon_vma_name(orig, new);
 	}
 	return new;
@@ -579,7 +578,7 @@ static void dup_mm_exe_file(struct mm_struct *mm, struct mm_struct *oldmm)
 static __latent_entropy int dup_mmap(struct mm_struct *mm,
 					struct mm_struct *oldmm)
 {
-	struct vm_area_struct *mpnt, *tmp, *prev, **pprev;
+	struct vm_area_struct *mpnt, *tmp;
 	int retval;
 	unsigned long charge = 0;
 	MA_STATE(old_mas, &oldmm->mm_mt, 0, 0);
@@ -606,7 +605,6 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 	mm->exec_vm = oldmm->exec_vm;
 	mm->stack_vm = oldmm->stack_vm;

-	pprev = &mm->mmap;
 	retval = ksm_fork(mm, oldmm);
 	if (retval)
 		goto out;
@@ -614,8 +612,6 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 	if (retval)
 		goto out;

-	prev = NULL;
-
 	retval = mas_expected_entries(&mas, oldmm->map_count);
 	if (retval)
 		goto out;
@@ -687,14 +683,6 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 		if (is_vm_hugetlb_page(tmp))
 			reset_vma_resv_huge_pages(tmp);

-		/*
-		 * Link in the new vma and copy the page table entries.
-		 */
-		*pprev = tmp;
-		pprev = &tmp->vm_next;
-		tmp->vm_prev = prev;
-		prev = tmp;
-
 		/* Link the vma into the MT */
 		mas.index = tmp->vm_start;
 		mas.last = tmp->vm_end - 1;
@@ -1112,7 +1100,6 @@ static void mm_init_uprobes_state(struct mm_struct *mm)
 static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
 	struct user_namespace *user_ns)
 {
-	mm->mmap = NULL;
 	mt_init_flags(&mm->mm_mt, MM_MT_FLAGS);
 	mt_set_external_lock(&mm->mm_mt, &mm->mmap_lock);
 	atomic_set(&mm->mm_users, 1);
diff --git a/mm/debug.c b/mm/debug.c
index 2d625ca0e326..0fd15ba70d16 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -139,13 +139,11 @@ EXPORT_SYMBOL(dump_page);

 void dump_vma(const struct vm_area_struct *vma)
 {
-	pr_emerg("vma %px start %px end %px\n"
-		"next %px prev %px mm %px\n"
+	pr_emerg("vma %px start %px end %px mm %px\n"
 		"prot %lx anon_vma %px vm_ops %px\n"
 		"pgoff %lx file %px private_data %px\n"
 		"flags: %#lx(%pGv)\n",
-		vma, (void *)vma->vm_start, (void *)vma->vm_end, vma->vm_next,
-		vma->vm_prev, vma->vm_mm,
+		vma, (void *)vma->vm_start, (void *)vma->vm_end, vma->vm_mm,
 		(unsigned long)pgprot_val(vma->vm_page_prot),
 		vma->anon_vma, vma->vm_ops, vma->vm_pgoff,
 		vma->vm_file, vma->vm_private_data,
@@ -155,11 +153,11 @@ EXPORT_SYMBOL(dump_vma);

 void dump_mm(const struct mm_struct *mm)
 {
-	pr_emerg("mm %px mmap %px task_size %lu\n"
+	pr_emerg("mm %px task_size %lu\n"
 #ifdef CONFIG_MMU
 		"get_unmapped_area %px\n"
 #endif
-		"mmap_base %lu mmap_legacy_base %lu highest_vm_end %lu\n"
+		"mmap_base %lu mmap_legacy_base %lu\n"
 		"pgd %px mm_users %d mm_count %d pgtables_bytes %lu map_count %d\n"
 		"hiwater_rss %lx hiwater_vm %lx total_vm %lx locked_vm %lx\n"
 		"pinned_vm %llx data_vm %lx exec_vm %lx stack_vm %lx\n"
@@ -183,11 +181,11 @@ void dump_mm(const struct mm_struct *mm)
 		"tlb_flush_pending %d\n"
 		"def_flags: %#lx(%pGv)\n",

-		mm, mm->mmap, mm->task_size,
+		mm, mm->task_size,
 #ifdef CONFIG_MMU
 		mm->get_unmapped_area,
 #endif
-		mm->mmap_base, mm->mmap_legacy_base, mm->highest_vm_end,
+		mm->mmap_base, mm->mmap_legacy_base,
 		mm->pgd, atomic_read(&mm->mm_users),
 		atomic_read(&mm->mm_count),
 		mm_pgtables_bytes(mm),
diff --git a/mm/internal.h b/mm/internal.h
index 92734fc9b90d..65f1294f0335 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -69,8 +69,9 @@ void folio_rotate_reclaimable(struct folio *folio);
 bool __folio_end_writeback(struct folio *folio);
 void deactivate_file_folio(struct folio *folio);

-void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
-		unsigned long floor, unsigned long ceiling);
+void free_pgtables(struct mmu_gather *tlb, struct maple_tree *mt,
+		   struct vm_area_struct *start_vma, unsigned long floor,
+		   unsigned long ceiling);
 void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte);

 struct zap_details;
@@ -492,9 +493,6 @@ static inline void vma_mas_remove(struct vm_area_struct *vma, struct ma_state *m
 }

 /* mm/util.c */
-void __vma_link_list(struct mm_struct *mm, struct vm_area_struct *vma,
-		struct vm_area_struct *prev);
-void __vma_unlink_list(struct mm_struct *mm, struct vm_area_struct *vma);
 struct anon_vma *folio_anon_vma(struct folio *folio);

 #ifdef CONFIG_MMU
diff --git a/mm/memory.c b/mm/memory.c
index 5513d049e7e0..77753da71cd1 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -400,12 +400,21 @@ void free_pgd_range(struct mmu_gather *tlb,
 	} while (pgd++, addr = next, addr != end);
 }

-void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
-		unsigned long floor, unsigned long ceiling)
+void free_pgtables(struct mmu_gather *tlb, struct maple_tree *mt,
+		   struct vm_area_struct *vma, unsigned long floor,
+		   unsigned long ceiling)
 {
-	while (vma) {
-		struct vm_area_struct *next = vma->vm_next;
+	MA_STATE(mas, mt, vma->vm_end, vma->vm_end);
+
+	do {
 		unsigned long addr = vma->vm_start;
+		struct vm_area_struct *next;
+
+		/*
+		 * Note: USER_PGTABLES_CEILING may be passed as ceiling and may
+		 * be 0.  This will underflow and is okay.
+		 */
+		next = mas_find(&mas, ceiling - 1);

 		/*
 		 * Hide vma from rmap and truncate_pagecache before freeing
@@ -424,7 +433,7 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			while (next && next->vm_start <= vma->vm_end + PMD_SIZE
 			       && !is_vm_hugetlb_page(next)) {
 				vma = next;
-				next = vma->vm_next;
+				next = mas_find(&mas, ceiling - 1);
 				unlink_anon_vmas(vma);
 				unlink_file_vma(vma);
 			}
@@ -432,7 +441,7 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
 				floor, next ? next->vm_start : ceiling);
 		}
 		vma = next;
-	}
+	} while (vma);
 }

 void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte)
@@ -1628,17 +1637,19 @@ static void unmap_single_vma(struct mmu_gather *tlb,
 * ensure that any thus-far unmapped pages are flushed before unmap_vmas()
 * drops the lock and schedules.
 */
-void unmap_vmas(struct mmu_gather *tlb,
+void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt,
 		struct vm_area_struct *vma, unsigned long start_addr,
 		unsigned long end_addr)
 {
 	struct mmu_notifier_range range;
+	MA_STATE(mas, mt, vma->vm_end, vma->vm_end);

 	mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma, vma->vm_mm,
 				start_addr, end_addr);
 	mmu_notifier_invalidate_range_start(&range);
-	for ( ; vma && vma->vm_start < end_addr; vma = vma->vm_next)
+	do {
 		unmap_single_vma(tlb, vma, start_addr, end_addr, NULL);
+	} while ((vma = mas_find(&mas, end_addr - 1)) != NULL);
 	mmu_notifier_invalidate_range_end(&range);
 }

@@ -1653,8 +1664,11 @@
 void zap_page_range(struct vm_area_struct *vma, unsigned long start,
 		unsigned long size)
 {
+	struct maple_tree *mt = &vma->vm_mm->mm_mt;
+	unsigned long end = start + size;
 	struct mmu_notifier_range range;
 	struct mmu_gather tlb;
+	MA_STATE(mas, mt, vma->vm_end, vma->vm_end);

 	lru_add_drain();
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
@@ -1662,8 +1676,9 @@
 	tlb_gather_mmu(&tlb, vma->vm_mm);
 	update_hiwater_rss(vma->vm_mm);
 	mmu_notifier_invalidate_range_start(&range);
-	for ( ; vma && vma->vm_start < range.end; vma = vma->vm_next)
+	do {
 		unmap_single_vma(&tlb, vma, start, range.end, NULL);
+	} while ((vma = mas_find(&mas, end - 1)) != NULL);
 	mmu_notifier_invalidate_range_end(&range);
 	tlb_finish_mmu(&tlb);
 }
diff --git a/mm/mmap.c b/mm/mmap.c
index 5a042f09bd69..50b616449c87 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -75,9 +75,10 @@ int mmap_rnd_compat_bits __read_mostly = CONFIG_ARCH_MMAP_RND_COMPAT_BITS;
 static bool ignore_rlimit_data;
 core_param(ignore_rlimit_data, ignore_rlimit_data, bool, 0644);

-static void unmap_region(struct mm_struct *mm,
+static void unmap_region(struct mm_struct *mm, struct maple_tree *mt,
 		struct vm_area_struct *vma, struct vm_area_struct *prev,
-		unsigned long start, unsigned long end);
+		struct vm_area_struct *next, unsigned long start,
+		unsigned long end);

 /* description of effects of mapping type and prot in current implementation.
  * this is due to the limited x86 page protection hardware.  The expected
@@ -180,12 +181,10 @@ void unlink_file_vma(struct vm_area_struct *vma)
 }

 /*
- * Close a vm structure and free it, returning the next.
+ * Close a vm structure and free it.
 */
-static struct vm_area_struct *remove_vma(struct vm_area_struct *vma)
+static void remove_vma(struct vm_area_struct *vma)
 {
-	struct vm_area_struct *next = vma->vm_next;
-
 	might_sleep();
 	if (vma->vm_ops && vma->vm_ops->close)
 		vma->vm_ops->close(vma);
@@ -193,7 +192,6 @@ static struct vm_area_struct *remove_vma(struct vm_area_struct *vma)
 		fput(vma->vm_file);
 	mpol_put(vma_policy(vma));
 	vm_area_free(vma);
-	return next;
 }

 /*
@@ -218,8 +216,7 @@ static int do_brk_munmap(struct ma_state *mas, struct vm_area_struct *vma,
 			 unsigned long newbrk, unsigned long oldbrk,
 			 struct list_head *uf);
 static int do_brk_flags(struct ma_state *mas, struct vm_area_struct *brkvma,
-			unsigned long addr, unsigned long request,
-			unsigned long flags);
+		unsigned long addr, unsigned long request, unsigned long flags);
 SYSCALL_DEFINE1(brk, unsigned long, brk)
 {
 	unsigned long newbrk, oldbrk, origbrk;
@@ -288,7 +285,6 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
 		 * before calling do_brk_munmap().
 		 */
 		mm->brk = brk;
-		mas.last = oldbrk - 1;
 		ret = do_brk_munmap(&mas, brkvma, newbrk, oldbrk, &uf);
 		if (ret == 1) {
 			downgraded = true;
@@ -343,42 +339,20 @@ extern void mt_dump(const struct maple_tree *mt);
 static void validate_mm_mt(struct mm_struct *mm)
 {
 	struct maple_tree *mt = &mm->mm_mt;
-	struct vm_area_struct *vma_mt, *vma = mm->mmap;
+	struct vm_area_struct *vma_mt;

 	MA_STATE(mas, mt, 0, 0);
-	mas_for_each(&mas, vma_mt, ULONG_MAX) {
-		if (xa_is_zero(vma_mt))
-			continue;
-
-		if (!vma)
-			break;

-		if ((vma != vma_mt) ||
-		    (vma->vm_start != vma_mt->vm_start) ||
-		    (vma->vm_end != vma_mt->vm_end) ||
-		    (vma->vm_start != mas.index) ||
-		    (vma->vm_end - 1 != mas.last)) {
+	mas_for_each(&mas, vma_mt, ULONG_MAX) {
+		if ((vma_mt->vm_start != mas.index) ||
+		    (vma_mt->vm_end - 1 != mas.last)) {
 			pr_emerg("issue in %s\n", current->comm);
 			dump_stack();
 			dump_vma(vma_mt);
-			pr_emerg("and vm_next\n");
-			dump_vma(vma->vm_next);
 			pr_emerg("mt piv: %px %lu - %lu\n", vma_mt,
 				 mas.index, mas.last);
 			pr_emerg("mt vma: %px %lu - %lu\n", vma_mt,
 				 vma_mt->vm_start, vma_mt->vm_end);
-			if (vma->vm_prev) {
-				pr_emerg("ll prev: %px %lu - %lu\n",
-					 vma->vm_prev, vma->vm_prev->vm_start,
-					 vma->vm_prev->vm_end);
-			}
-			pr_emerg("ll vma: %px %lu - %lu\n", vma,
-				 vma->vm_start, vma->vm_end);
-			if (vma->vm_next) {
-				pr_emerg("ll next: %px %lu - %lu\n",
-					 vma->vm_next, vma->vm_next->vm_start,
-					 vma->vm_next->vm_end);
-			}

 			mt_dump(mas.tree);
 			if (vma_mt->vm_end != mas.last + 1) {
@@ -395,11 +369,7 @@ static void validate_mm_mt(struct mm_struct *mm)
 			}
 			VM_BUG_ON_MM(vma_mt->vm_start != mas.index, mm);
 		}
-		VM_BUG_ON(vma != vma_mt);
-		vma = vma->vm_next;
 	}
-
-	VM_BUG_ON(vma);
 	mt_validate(&mm->mm_mt);
 }

@@ -407,12 +377,12 @@ static void validate_mm(struct mm_struct *mm)
 {
 	int bug = 0;
 	int i = 0;
-	unsigned long highest_address = 0;
-	struct vm_area_struct *vma = mm->mmap;
+	struct vm_area_struct *vma;
+	MA_STATE(mas, &mm->mm_mt, 0, 0);

 	validate_mm_mt(mm);

-	while (vma) {
+	mas_for_each(&mas, vma, ULONG_MAX) {
 #ifdef CONFIG_DEBUG_VM_RB
 		struct anon_vma *anon_vma = vma->anon_vma;
 		struct anon_vma_chain *avc;
@@ -424,18 +394,10 @@ static void validate_mm(struct mm_struct *mm)
 			anon_vma_unlock_read(anon_vma);
 		}
 #endif
-
-		highest_address = vm_end_gap(vma);
-		vma = vma->vm_next;
 		i++;
 	}
 	if (i != mm->map_count) {
-		pr_emerg("map_count %d vm_next %d\n", mm->map_count, i);
-		bug = 1;
-	}
-	if (highest_address != mm->highest_vm_end) {
-		pr_emerg("mm->highest_vm_end %lx, found %lx\n",
-			  mm->highest_vm_end, highest_address);
+		pr_emerg("map_count %d mas_for_each %d\n", mm->map_count, i);
 		bug = 1;
 	}
 	VM_BUG_ON_MM(bug, mm);
@@ -495,29 +457,13 @@ bool range_has_overlap(struct mm_struct *mm, unsigned long start,
 	struct vm_area_struct *existing;

 	MA_STATE(mas, &mm->mm_mt, start, start);
+	rcu_read_lock();
 	existing = mas_find(&mas, end - 1);
 	*pprev = mas_prev(&mas, 0);
+	rcu_read_unlock();
 	return existing ? true : false;
 }

-/*
- * __vma_next() - Get the next VMA.
- * @mm: The mm_struct.
- * @vma: The current vma.
- *
- * If @vma is NULL, return the first vma in the mm.
- *
- * Returns: The next VMA after @vma.
- */
-static inline struct vm_area_struct *__vma_next(struct mm_struct *mm,
-						 struct vm_area_struct *vma)
-{
-	if (!vma)
-		return mm->mmap;
-
-	return vma->vm_next;
-}
-
 static unsigned long count_vma_pages_range(struct mm_struct *mm,
 		unsigned long addr, unsigned long end)
 {
@@ -582,9 +528,7 @@ static void vma_store(struct mm_struct *mm, struct vm_area_struct *vma)
 	vma_mas_store(vma, &mas);
 }

-
-static int vma_link(struct mm_struct *mm, struct vm_area_struct *vma,
-			struct vm_area_struct *prev)
+static int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)
 {
 	MA_STATE(mas, &mm->mm_mt, 0, 0);
 	struct address_space *mapping = NULL;
@@ -598,7 +542,6 @@ static int vma_link(struct mm_struct *mm, struct vm_area_struct *vma,
 	}

 	vma_mas_store(vma, &mas);
-	__vma_link_list(mm, vma, prev);
 	__vma_link_file(vma);

 	if (mapping)
@@ -609,23 +552,6 @@ static int vma_link(struct mm_struct *mm, struct vm_area_struct *vma,
 	return 0;
 }

-/*
- * Helper for vma_adjust() in the split_vma insert case: insert a vma into the
- * mm's list and the mm tree.  It has already been inserted into the interval tree.
- */
-static void __insert_vm_struct(struct mm_struct *mm, struct ma_state *mas,
-			       struct vm_area_struct *vma, unsigned long location)
-{
-	struct vm_area_struct *prev;
-
-	mas_set(mas, location);
-	prev = mas_prev(mas, 0);
-	mas_reset(mas);
-	vma_mas_store(vma, mas);
-	__vma_link_list(mm, vma, prev);
-	mm->map_count++;
-}
-
 /*
  * vma_expand - Expand an existing VMA
@@ -702,15 +628,8 @@ inline int vma_expand(struct ma_state *mas, struct vm_area_struct *vma,
 	}

 	/* Expanding over the next vma */
-	if (remove_next) {
-		/* Remove from mm linked list - also updates highest_vm_end */
-		__vma_unlink_list(mm, next);
-
-		if (file)
-			__remove_shared_vm_struct(next, file, mapping);
-
-	} else if (!next) {
-		mm->highest_vm_end = vm_end_gap(vma);
+	if (remove_next && file) {
+		__remove_shared_vm_struct(next, file, mapping);
 	}

 	if (anon_vma) {
@@ -767,7 +686,6 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 	int remove_next = 0;
 	MA_STATE(mas, &mm->mm_mt, 0, 0);
 	struct vm_area_struct *exporter = NULL, *importer = NULL;
-	unsigned long ll_prev = vma->vm_start; /* linked list prev. */

 	if (next && !insert) {
 		if (end >= next->vm_end) {
@@ -813,7 +731,7 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 			 * next, if the vma overlaps with it.
 			 */
 			if (remove_next == 2 && !next->anon_vma)
-				exporter = next->vm_next;
+				exporter = find_vma(mm, next->vm_end);

 		} else if (end > next->vm_start) {
 			/*
@@ -914,15 +832,11 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 				vma_mt_szero(mm, end, vma->vm_end);
 				VM_WARN_ON(insert &&
 					   insert->vm_end < vma->vm_end);
-			} else if (insert->vm_start == end) {
-				ll_prev = vma->vm_end;
 			}
 		} else {
 			vma_changed = true;
 		}
 		vma->vm_end = end;
-		if (!next)
-			mm->highest_vm_end = vm_end_gap(vma);
 	}

 	if (vma_changed)
@@ -942,17 +856,17 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 		flush_dcache_mmap_unlock(mapping);
 	}

-	if (remove_next) {
-		__vma_unlink_list(mm, next);
-		if (file)
-			__remove_shared_vm_struct(next, file, mapping);
+	if (remove_next && file) {
+		__remove_shared_vm_struct(next, file, mapping);
 	} else if (insert) {
 		/*
 		 * split_vma has split insert from vma, and needs
 		 * us to insert it before dropping the locks
 		 * (it may either follow vma or precede it).
 		 */
-		__insert_vm_struct(mm, &mas, insert, ll_prev);
+		mas_reset(&mas);
+		vma_mas_store(insert, &mas);
+		mm->map_count++;
 	}

 	if (anon_vma) {
@@ -991,8 +905,8 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 			/*
 			 * If "next" was removed and vma->vm_end was
 			 * expanded (up) over it, in turn
-			 * "next->vm_prev->vm_end" changed and the
-			 * "vma->vm_next" gap must be updated.
+			 * "next->prev->vm_end" changed and the
+			 * "vma->next" gap must be updated.
 			 */
 			next = next_next;
 		} else {
@@ -1013,33 +927,14 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 			remove_next = 1;
 			end = next->vm_end;
 			goto again;
-		} else if (!next) {
-			/*
-			 * If remove_next == 2 we obviously can't
-			 * reach this path.
-			 *
-			 * If remove_next == 3 we can't reach this
-			 * path because pre-swap() next is always not
-			 * NULL. pre-swap() "next" is not being
-			 * removed and its next->vm_end is not altered
-			 * (and furthermore "end" already matches
-			 * next->vm_end in remove_next == 3).
-			 *
-			 * We reach this only in the remove_next == 1
-			 * case if the "next" vma that was removed was
-			 * the highest vma of the mm. However in such
-			 * case next->vm_end == "end" and the extended
-			 * "vma" has vma->vm_end == next->vm_end so
-			 * mm->highest_vm_end doesn't need any update
-			 * in remove_next == 1 case.
-			 */
-			VM_WARN_ON(mm->highest_vm_end != vm_end_gap(vma));
 		}
 	}

-	if (insert && file)
+	if (insert && file) {
 		uprobe_mmap(insert);
+	}

 	validate_mm(mm);
+
 	return 0;
 }

@@ -1199,10 +1094,10 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
 	if (vm_flags & VM_SPECIAL)
 		return NULL;

-	next = __vma_next(mm, prev);
+	next = find_vma(mm, prev ? prev->vm_end : 0);
 	area = next;
 	if (area && area->vm_end == end)		/* cases 6, 7, 8 */
-		next = next->vm_next;
+		next = find_vma(mm, next->vm_end);

 	/* verify some invariant that must be enforced by the caller */
 	VM_WARN_ON(prev && addr <= prev->vm_start);
@@ -1336,18 +1231,24 @@ static struct anon_vma *reusable_anon_vma(struct vm_area_struct *old, struct vm_
 */
 struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *vma)
 {
+	MA_STATE(mas, &vma->vm_mm->mm_mt, vma->vm_end, vma->vm_end);
 	struct anon_vma *anon_vma = NULL;
+	struct vm_area_struct *prev, *next;

 	/* Try next first. */
-	if (vma->vm_next) {
-		anon_vma = reusable_anon_vma(vma->vm_next, vma, vma->vm_next);
+	next = mas_walk(&mas);
+	if (next) {
+		anon_vma = reusable_anon_vma(next, vma, next);
 		if (anon_vma)
 			return anon_vma;
 	}

+	prev = mas_prev(&mas, 0);
+	VM_BUG_ON_VMA(prev != vma, vma);
+	prev = mas_prev(&mas, 0);
 	/* Try prev next. */
-	if (vma->vm_prev)
-		anon_vma = reusable_anon_vma(vma->vm_prev, vma->vm_prev, vma);
+	if (prev)
+		anon_vma = reusable_anon_vma(prev, prev, vma);

 	/*
 	 * We might reach here with anon_vma == NULL if we can't find
@@ -2103,8 +2004,8 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 	if (gap_addr < address || gap_addr > TASK_SIZE)
 		gap_addr = TASK_SIZE;

-	next = vma->vm_next;
-	if (next && next->vm_start < gap_addr && vma_is_accessible(next)) {
+	next = find_vma_intersection(mm, vma->vm_end, gap_addr);
+	if (next && vma_is_accessible(next)) {
 		if (!(next->vm_flags & VM_GROWSUP))
 			return -ENOMEM;
 		/* Check that both stack segments have the same anon_vma? */
@@ -2150,8 +2051,6 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 				/* Overwrite old entry in mtree. */
 				vma_store(mm, vma);
 				anon_vma_interval_tree_post_update_vma(vma);
-				if (!vma->vm_next)
-					mm->highest_vm_end = vm_end_gap(vma);
 				spin_unlock(&mm->page_table_lock);

 				perf_event_mmap(vma);
@@ -2170,6 +2069,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 int expand_downwards(struct vm_area_struct *vma, unsigned long address)
 {
 	struct mm_struct *mm = vma->vm_mm;
+	MA_STATE(mas, &mm->mm_mt, vma->vm_start, vma->vm_start);
 	struct vm_area_struct *prev;
 	int error = 0;

@@ -2178,7 +2078,7 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
 		return -EPERM;

 	/* Enforce stack_guard_gap */
-	prev = vma->vm_prev;
+	prev = mas_prev(&mas, 0);
 	/* Check that both stack segments have the same anon_vma? */
 	if (prev && !(prev->vm_flags & VM_GROWSDOWN) &&
 			vma_is_accessible(prev)) {
@@ -2308,25 +2208,26 @@ find_extend_vma(struct mm_struct *mm, unsigned long addr)
 EXPORT_SYMBOL_GPL(find_extend_vma);

 /*
- * Ok - we have the memory areas we should free on the vma list,
- * so release them, and do the vma updates.
+ * Ok - we have the memory areas we should free on a maple tree so release them,
+ * and do the vma updates.
 *
 * Called with the mm semaphore held.
 */
-static void remove_vma_list(struct mm_struct *mm, struct vm_area_struct *vma)
+static inline void remove_mt(struct mm_struct *mm, struct ma_state *mas)
 {
 	unsigned long nr_accounted = 0;
+	struct vm_area_struct *vma;

 	/* Update high watermark before we lower total_vm */
 	update_hiwater_vm(mm);
-	do {
+	mas_for_each(mas, vma, ULONG_MAX) {
 		long nrpages = vma_pages(vma);

 		if (vma->vm_flags & VM_ACCOUNT)
 			nr_accounted += nrpages;
 		vm_stat_account(mm, vma->vm_flags, -nrpages);
-		vma = remove_vma(vma);
-	} while (vma);
+		remove_vma(vma);
+	}
 	vm_unacct_memory(nr_accounted);
 	validate_mm(mm);
 }
@@ -2336,18 +2237,18 @@ static void remove_vma_list(struct mm_struct *mm, struct vm_area_struct *vma)
 *
 * Called with the mm semaphore held.
 */
-static void unmap_region(struct mm_struct *mm,
+static void unmap_region(struct mm_struct *mm, struct maple_tree *mt,
 		struct vm_area_struct *vma, struct vm_area_struct *prev,
+		struct vm_area_struct *next,
 		unsigned long start, unsigned long end)
 {
-	struct vm_area_struct *next = __vma_next(mm, prev);
 	struct mmu_gather tlb;

 	lru_add_drain();
 	tlb_gather_mmu(&tlb, mm);
 	update_hiwater_rss(mm);
-	unmap_vmas(&tlb, vma, start, end);
-	free_pgtables(&tlb, vma, prev ? prev->vm_end : FIRST_USER_ADDRESS,
+	unmap_vmas(&tlb, mt, vma, start, end);
+	free_pgtables(&tlb, mt, vma, prev ? prev->vm_end : FIRST_USER_ADDRESS,
 		      next ? next->vm_start : USER_PGTABLES_CEILING);
 	tlb_finish_mmu(&tlb);
 }
@@ -2431,24 +2332,13 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
 	return __split_vma(mm, vma, addr, new_below);
 }

-static inline int
-unlock_range(struct vm_area_struct *start, struct vm_area_struct **tail,
-	     unsigned long limit)
+static inline void munmap_sidetree(struct vm_area_struct *vma,
+				   struct ma_state *mas_detach)
 {
-	struct mm_struct *mm = start->vm_mm;
-	struct vm_area_struct *tmp = start;
-	int count = 0;
-
-	while (tmp && tmp->vm_start < limit) {
-		*tail = tmp;
-		count++;
-		if (tmp->vm_flags & VM_LOCKED)
-			mm->locked_vm -= vma_pages(tmp);
-
-		tmp = tmp->vm_next;
-	}
-
-	return count;
+	mas_set_range(mas_detach, vma->vm_start, vma->vm_end - 1);
+	mas_store_gfp(mas_detach, vma, GFP_KERNEL);
+	if (vma->vm_flags & VM_LOCKED)
+		vma->vm_mm->locked_vm -= vma_pages(vma);
 }

 /*
@@ -2468,8 +2358,12 @@ do_mas_align_munmap(struct ma_state *mas, struct vm_area_struct *vma,
 		    struct mm_struct *mm, unsigned long start,
 		    unsigned long end, struct list_head *uf, bool downgrade)
 {
-	struct vm_area_struct *prev, *last;
-	/* we have start < vma->vm_end  */
+	struct vm_area_struct *prev, *next = NULL;
+	struct maple_tree mt_detach;
+	int count = 0;
+	MA_STATE(mas_detach, &mt_detach, start, end - 1);
+	mt_init_flags(&mt_detach, MM_MT_FLAGS);
+	mt_set_external_lock(&mt_detach, &mm->mmap_lock);

 	mas->last = end - 1;

 	/*
@@ -2479,6 +2373,8 @@ do_mas_align_munmap(struct ma_state *mas, struct vm_area_struct *vma,
 	 * unmapped vm_area_struct will remain in use: so lower split_vma
 	 * places tmp vma above, and higher split_vma places tmp vma below.
 	 */
+
+	/* Does it split the first one? */
 	if (start > vma->vm_start) {
 		int error;

@@ -2490,34 +2386,55 @@ do_mas_align_munmap(struct ma_state *mas, struct vm_area_struct *vma,
 		if (end < vma->vm_end && mm->map_count >= sysctl_max_map_count)
 			return -ENOMEM;

+		/*
+		 * mas_pause() is not needed since mas->index needs to be set
+		 * differently than vma->vm_end anyways.
+		 */
 		error = __split_vma(mm, vma, start, 0);
 		if (error)
 			return error;
-		prev = vma;
-		vma = __vma_next(mm, prev);
-		mas->index = start;
-		mas_reset(mas);
-	} else {
-		prev = vma->vm_prev;
+
+		mas_set(mas, start);
+		vma = mas_walk(mas);
 	}

-	if (vma->vm_end >= end)
-		last = vma;
-	else
-		last = find_vma_intersection(mm, end - 1, end);
+	prev = mas_prev(mas, 0);
+	if (unlikely((!prev)))
+		mas_set(mas, start);

-	/* Does it split the last one? */
-	if (last && end < last->vm_end) {
-		int error = __split_vma(mm, last, end, 1);
+	/*
+	 * Detach a range of VMAs from the mm. Using next as a temp variable as
+	 * it is always overwritten.
+	 */
+	mas_for_each(mas, next, end - 1) {
+		/* Does it split the end? */
+		if (next->vm_end > end) {
+			struct vm_area_struct *split;
+			int error;

-		if (error)
-			return error;
+			error = __split_vma(mm, next, end, 1);
+			if (error)
+				return error;

-		if (vma == last)
-			vma = __vma_next(mm, prev);
-		mas_reset(mas);
+			mas_set(mas, end);
+			split = mas_prev(mas, 0);
+			munmap_sidetree(split, &mas_detach);
+			count++;
+			if (vma == next)
+				vma = split;
+			break;
+		}
+		count++;
+		munmap_sidetree(next, &mas_detach);
#ifdef CONFIG_DEBUG_VM_MAPLE_TREE
+		BUG_ON(next->vm_start < start);
+		BUG_ON(next->vm_start > end);
#endif
 	}

+	if (!next)
+		next = mas_next(mas, ULONG_MAX);
+
 	if (unlikely(uf)) {
 		/*
 		 * If userfaultfd_unmap_prep returns an error the vmas
@@ -2534,35 +2451,36 @@ do_mas_align_munmap(struct ma_state *mas, struct vm_area_struct *vma,
 			return error;
 	}

-	/*
-	 * unlock any mlock()ed ranges before detaching vmas, count the number
-	 * of VMAs to be dropped, and return the tail entry of the affected
-	 * area.
-	 */
-	mm->map_count -= unlock_range(vma, &last, end);
-	/* Drop removed area from the tree */
+	/* Point of no return */
+	mas_set_range(mas, start, end - 1);
+#if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
+	/* Make sure no VMAs are about to be lost. */
+	{
+		MA_STATE(test, &mt_detach, start, end - 1);
+		struct vm_area_struct *vma_mas, *vma_test;
+		int test_count = 0;
+
+		rcu_read_lock();
+		vma_test = mas_find(&test, end - 1);
+		mas_for_each(mas, vma_mas, end - 1) {
+			BUG_ON(vma_mas != vma_test);
+			test_count++;
+			vma_test = mas_next(&test, end - 1);
+		}
+		rcu_read_unlock();
+		BUG_ON(count != test_count);
+		mas_set_range(mas, start, end - 1);
+	}
+#endif
 	mas_store_gfp(mas, NULL, GFP_KERNEL);
-
-	/* Detach vmas from the MM linked list */
-	vma->vm_prev = NULL;
-	if (prev)
-		prev->vm_next = last->vm_next;
-	else
-		mm->mmap = last->vm_next;
-
-	if (last->vm_next) {
-		last->vm_next->vm_prev = prev;
-		last->vm_next = NULL;
-	} else
-		mm->highest_vm_end = prev ? vm_end_gap(prev) : 0;
-
+	mm->map_count -= count;
 	/*
 	 * Do not downgrade mmap_lock if we are next to VM_GROWSDOWN or
 	 * VM_GROWSUP VMA. Such VMAs can change their size under
 	 * down_read(mmap_lock) and collide with the VMA we are about to unmap.
 	 */
 	if (downgrade) {
-		if (last && (last->vm_flags & VM_GROWSDOWN))
+		if (next && (next->vm_flags & VM_GROWSDOWN))
 			downgrade = false;
 		else if (prev && (prev->vm_flags & VM_GROWSUP))
 			downgrade = false;
@@ -2570,10 +2488,12 @@ do_mas_align_munmap(struct ma_state *mas, struct vm_area_struct *vma,
 		mmap_write_downgrade(mm);
 	}

-	unmap_region(mm, vma, prev, start, end);
-
-	/* Fix up all other VM information */
-	remove_vma_list(mm, vma);
+	unmap_region(mm, &mt_detach, vma, prev, next, start, end);
+	/* Statistics and freeing VMAs */
+	mas_set(&mas_detach, start);
+	remove_mt(mm, &mas_detach);
+	validate_mm(mm);
+	__mt_destroy(&mt_detach);

 	return downgrade ? 1 : 0;
 }
@@ -2806,7 +2726,6 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		i_mmap_lock_write(vma->vm_file->f_mapping);

 	vma_mas_store(vma, &mas);
-	__vma_link_list(mm, vma, prev);
 	mm->map_count++;
 	if (vma->vm_file) {
 		if (vma->vm_flags & VM_SHARED)
@@ -2858,7 +2777,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	vma->vm_file = NULL;

 	/* Undo any partial mapping done by a device driver. */
-	unmap_region(mm, vma, prev, vma->vm_start, vma->vm_end);
+	unmap_region(mm, mas.tree, vma, prev, next, vma->vm_start, vma->vm_end);
 	charged = 0;
 	if (vm_flags & VM_SHARED)
 		mapping_unmap_writable(file->f_mapping);
@@ -2947,11 +2866,12 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
 		goto out;

 	if (start + size > vma->vm_end) {
-		struct vm_area_struct *next;
+		VMA_ITERATOR(vmi, mm, vma->vm_end);
+		struct vm_area_struct *next, *prev = vma;

-		for (next = vma->vm_next; next; next = next->vm_next) {
+		for_each_vma_range(vmi, next, start + size) {
 			/* hole between vmas ? */
-			if (next->vm_start != next->vm_prev->vm_end)
+			if (next->vm_start != prev->vm_end)
 				goto out;

 			if (next->vm_file != vma->vm_file)
@@ -2960,8 +2880,7 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
 			if (next->vm_flags != vma->vm_flags)
 				goto out;

-			if (start + size <= next->vm_end)
-				break;
+			prev = next;
 		}

 		if (!next)
@@ -3007,7 +2926,7 @@ static int do_brk_munmap(struct ma_state *mas, struct vm_area_struct *vma,
 			 struct list_head *uf)
 {
 	struct mm_struct *mm = vma->vm_mm;
-	struct vm_area_struct unmap;
+	struct vm_area_struct unmap, *next;
 	unsigned long unmap_pages;
 	int ret;

@@ -3024,6 +2943,7 @@ static int do_brk_munmap(struct ma_state *mas, struct vm_area_struct *vma,
 	ret = userfaultfd_unmap_prep(mm, newbrk, oldbrk, uf);
 	if (ret)
 		return ret;
+
 	ret = 1;

 	/* Change the oldbrk of vma to the newbrk of the munmap area */
@@ -3045,6 +2965,7 @@ static int do_brk_munmap(struct ma_state *mas, struct vm_area_struct *vma,
 	vma_mas_remove(&unmap, mas);

+	vma->vm_end = newbrk;
 	if (vma->anon_vma) {
 		anon_vma_interval_tree_post_update_vma(vma);
 		anon_vma_unlock_write(vma->anon_vma);
 	}
@@ -3054,8 +2975,9 @@ static int do_brk_munmap(struct ma_state *mas, struct vm_area_struct *vma,
 	if (vma->vm_flags & VM_LOCKED)
 		mm->locked_vm -= unmap_pages;

+	next = mas_next(mas, ULONG_MAX);
 	mmap_write_downgrade(mm);
-	unmap_region(mm, &unmap, vma, newbrk, oldbrk);
+	unmap_region(mm, mas->tree, &unmap, vma, next, newbrk, oldbrk);
 	/* Statistics */
 	vm_stat_account(mm, vma->vm_flags, -unmap_pages);
 	if (vma->vm_flags & VM_ACCOUNT)
@@ -3079,11 +3001,9 @@ static int do_brk_munmap(struct ma_state *mas, struct vm_area_struct *vma,
 * do some brk-specific accounting here.
 */
 static int do_brk_flags(struct ma_state *mas, struct vm_area_struct *vma,
-			unsigned long addr, unsigned long len,
-			unsigned long flags)
+		unsigned long addr, unsigned long len, unsigned long flags)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *prev = NULL;

 	validate_mm_mt(mm);
@@ -3126,7 +3046,6 @@ static int do_brk_flags(struct ma_state *mas, struct vm_area_struct *vma,
 		khugepaged_enter_vma_merge(vma, flags);
 		goto out;
 	}
-	prev = vma;

 	/* create a vma struct for an anonymous mapping */
 	vma = vm_area_alloc(mm);
@@ -3141,11 +3060,6 @@ static int do_brk_flags(struct ma_state *mas, struct vm_area_struct *vma,
 	vma->vm_page_prot = vm_get_page_prot(flags);
 	mas_set_range(mas, vma->vm_start, addr + len - 1);
 	mas_store_gfp(mas, vma, GFP_KERNEL);
-
-	if (!prev)
-		prev = mas_prev(mas, 0);
-
-	__vma_link_list(mm, vma, prev);
 	mm->map_count++;
 out:
 	perf_event_mmap(vma);
@@ -3154,7 +3068,7 @@ static int do_brk_flags(struct ma_state *mas, struct vm_area_struct *vma,
 	if (flags & VM_LOCKED)
 		mm->locked_vm += (len >> PAGE_SHIFT);
 	vma->vm_flags |= VM_SOFTDIRTY;
-	validate_mm_mt(mm);
+	validate_mm(mm);
 	return 0;

 	vm_area_free(vma);
@@ -3227,6 +3141,8 @@ void exit_mmap(struct mm_struct *mm)
 	struct mmu_gather tlb;
 	struct vm_area_struct *vma;
 	unsigned long nr_accounted = 0;
+	MA_STATE(mas, &mm->mm_mt, 0, 0);
+	int count = 0;

 	/* mm's last user has gone, and its about to be pulled down */
 	mmu_notifier_release(mm);
@@ -3250,7 +3166,7 @@ void exit_mmap(struct mm_struct *mm)
 	}

 	arch_exit_mmap(mm);
-	vma = mm->mmap;
+	vma = mas_find(&mas, ULONG_MAX);
 	if (!vma) {
 		/* Can happen if dup_mmap() received an OOM */
 		mmap_write_unlock(mm);
@@ -3262,17 +3178,25 @@ void exit_mmap(struct mm_struct *mm)
 	tlb_gather_mmu_fullmm(&tlb, mm);
 	/* update_hiwater_rss(mm) here? but nobody should be looking */
 	/* Use -1 here to ensure all VMAs in the mm are unmapped */
-	unmap_vmas(&tlb, vma, 0, -1);
-	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
+	unmap_vmas(&tlb, &mm->mm_mt, vma, 0, ULONG_MAX);
+	free_pgtables(&tlb, &mm->mm_mt, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
 	tlb_finish_mmu(&tlb);

-	/* Walk the list again, actually closing and freeing it. */
-	while (vma) {
+	/*
+	 * Walk the list again, actually closing and freeing it, with preemption
+	 * enabled, without holding any MM locks besides the unreachable
+	 * mmap_write_lock.
+	 */
+	do {
 		if (vma->vm_flags & VM_ACCOUNT)
 			nr_accounted += vma_pages(vma);
-		vma = remove_vma(vma);
+		remove_vma(vma);
+		count++;
 		cond_resched();
-	}
+	} while ((vma = mas_find(&mas, ULONG_MAX)) != NULL);
+
+	BUG_ON(count != mm->map_count);
+
 	trace_exit_mmap(mm);
 	__mt_destroy(&mm->mm_mt);
@@ -3312,7 +3236,7 @@ int insert_vm_struct(struct mm_struct *mm, struct vm_area_struct *vma)
 		vma->vm_pgoff = vma->vm_start >> PAGE_SHIFT;
 	}

-	if (vma_link(mm, vma, prev))
+	if (vma_link(mm, vma))
 		return -ENOMEM;

 	return 0;
@@ -3342,7 +3266,8 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
 		faulted_in_anon_vma = false;
 	}

-	if (range_has_overlap(mm, addr, addr + len, &prev))
+	new_vma = find_vma_prev(mm, addr, &prev);
+	if (new_vma->vm_start < addr + len)
 		return NULL;	/* should never get here */

 	new_vma = vma_merge(mm, prev, addr, addr + len, vma->vm_flags,
@@ -3385,7 +3310,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
 			get_file(new_vma->vm_file);
 		if (new_vma->vm_ops && new_vma->vm_ops->open)
 			new_vma->vm_ops->open(new_vma);
-		if (vma_link(mm, new_vma, prev))
+		if (vma_link(mm, new_vma))
 			goto out_vma_link;
 		*need_rmap_locks = false;
 	}
@@ -3690,12 +3615,13 @@ int mm_take_all_locks(struct mm_struct *mm)
 {
 	struct vm_area_struct *vma;
 	struct anon_vma_chain *avc;
+	MA_STATE(mas, &mm->mm_mt, 0, 0);

 	mmap_assert_write_locked(mm);

 	mutex_lock(&mm_all_locks_mutex);

-	for (vma = mm->mmap; vma; vma = vma->vm_next) {
+	mas_for_each(&mas, vma, ULONG_MAX) {
 		if (signal_pending(current))
 			goto out_unlock;
 		if (vma->vm_file && vma->vm_file->f_mapping &&
@@ -3703,7 +3629,8 @@ int mm_take_all_locks(struct mm_struct *mm)
 			vm_lock_mapping(mm, vma->vm_file->f_mapping);
 	}

-	for (vma = mm->mmap; vma; vma = vma->vm_next) {
+	mas_set(&mas, 0);
+	mas_for_each(&mas, vma, ULONG_MAX) {
 		if (signal_pending(current))
 			goto out_unlock;
 		if (vma->vm_file && vma->vm_file->f_mapping &&
@@ -3711,7 +3638,8 @@ int mm_take_all_locks(struct mm_struct *mm)
 			vm_lock_mapping(mm, vma->vm_file->f_mapping);
 	}

-	for (vma = mm->mmap; vma; vma = vma->vm_next) {
+	mas_set(&mas, 0);
+	mas_for_each(&mas, vma, ULONG_MAX) {
 		if (signal_pending(current))
 			goto out_unlock;
 		if (vma->anon_vma)
@@ -3770,11 +3698,12 @@ void mm_drop_all_locks(struct mm_struct *mm)
 {
 	struct vm_area_struct *vma;
 	struct anon_vma_chain *avc;
+	MA_STATE(mas, &mm->mm_mt, 0, 0);

 	mmap_assert_write_locked(mm);
 	BUG_ON(!mutex_is_locked(&mm_all_locks_mutex));

-	for (vma = mm->mmap; vma; vma = vma->vm_next) {
+	mas_for_each(&mas, vma, ULONG_MAX) {
 		if (vma->anon_vma)
 			list_for_each_entry(avc, &vma->anon_vma_chain, same_vma)
 				vm_unlock_anon_vma(avc->anon_vma);
diff --git a/mm/nommu.c b/mm/nommu.c
index 645d11d3a8ab..8b801f5c9ef9 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -551,7 +551,6 @@ static void put_nommu_region(struct vm_region *region)
 static void add_vma_to_mm(struct mm_struct *mm, struct vm_area_struct *vma)
 {
 	struct address_space *mapping;
-	struct vm_area_struct *prev;
 	MA_STATE(mas, &mm->mm_mt, vma->vm_start, vma->vm_end);

 	BUG_ON(!vma->vm_region);
@@ -570,11 +569,8 @@ static void add_vma_to_mm(struct mm_struct *mm, struct vm_area_struct *vma)
 		i_mmap_unlock_write(mapping);
 	}

-	prev = mas_prev(&mas, 0);
-	mas_reset(&mas);
 	/* add the VMA to the tree */
 	vma_mas_store(vma, &mas);
-	__vma_link_list(mm, vma, prev);
 }

 /*
@@ -599,7 +595,6 @@ static void delete_vma_from_mm(struct vm_area_struct *vma)

 	/* remove from the MM's tree and list */
 	vma_mas_remove(vma, &mas);
-	__vma_unlink_list(vma->vm_mm, vma);
 }

 /*
diff --git a/mm/util.c b/mm/util.c
index 486cb132a887..1a5856f469e1 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -272,46 +272,6 @@ void *memdup_user_nul(const void __user *src, size_t len)
 }
 EXPORT_SYMBOL(memdup_user_nul);

-void __vma_link_list(struct mm_struct *mm, struct vm_area_struct *vma,
-		struct vm_area_struct *prev)
-{
-	struct vm_area_struct *next;
-
-	vma->vm_prev = prev;
-	if (prev) {
-		next = prev->vm_next;
-		prev->vm_next = vma;
-	} else {
-		next = mm->mmap;
-		mm->mmap = vma;
-	}
-	vma->vm_next = next;
-	if (next)
-		next->vm_prev = vma;
-	else
-		mm->highest_vm_end = vm_end_gap(vma);
-}
-
-void __vma_unlink_list(struct mm_struct *mm, struct vm_area_struct *vma)
-{
-	struct vm_area_struct *prev, *next;
-
-	next = vma->vm_next;
-	prev = vma->vm_prev;
-	if (prev)
-		prev->vm_next = next;
-	else
-		mm->mmap = next;
-	if (next) {
-		next->vm_prev = prev;
-	} else {
-		if (prev)
-			mm->highest_vm_end = vm_end_gap(prev);
-		else
-			mm->highest_vm_end = 0;
-	}
-}
-
 /* Check if the vma is being used as a stack by this task */
 int vma_is_stack_for_current(struct vm_area_struct *vma)
 {