From patchwork Tue Jul 18 07:56:18 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alistair Popple <apopple@nvidia.com>
X-Patchwork-Id: 13316830
From: Alistair Popple <apopple@nvidia.com>
To: akpm@linux-foundation.org
Cc: ajd@linux.ibm.com, catalin.marinas@arm.com, fbarrat@linux.ibm.com,
 iommu@lists.linux.dev, jgg@ziepe.ca, jhubbard@nvidia.com,
 kevin.tian@intel.com, kvm@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org, mpe@ellerman.id.au,
 nicolinc@nvidia.com, npiggin@gmail.com, robin.murphy@arm.com,
 seanjc@google.com, will@kernel.org, x86@kernel.org,
 zhi.wang.linux@gmail.com, Alistair Popple <apopple@nvidia.com>
Subject: [PATCH 4/4] mmu_notifiers: Don't invalidate secondary TLBs as part of
 mmu_notifier_invalidate_range_end()
Date: Tue, 18 Jul 2023 17:56:18 +1000
Message-Id: <1de2f1853687c635add15a35f390ce62af36c5db.1689666760.git-series.apopple@nvidia.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To:
References:

Secondary TLBs are now invalidated from the architecture-specific TLB
invalidation functions. There is therefore no need to explicitly notify
or invalidate as part of the range end functions. This means we can
remove mmu_notifier_invalidate_range_only_end() and some of the
ptep_*_notify() functions.
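
To illustrate the end state, here is a minimal sketch (not part of this
patch; struct my_dev and my_dev_flush_tlb() are hypothetical) of a
driver that only needs TLB-style invalidation. It implements just the
invalidate_secondary_tlbs() callback, which is now called from the
architecture TLB invalidation code rather than from
mmu_notifier_invalidate_range_end():

  static void my_dev_invalidate_secondary_tlbs(struct mmu_notifier *mn,
					       struct mm_struct *mm,
					       unsigned long start,
					       unsigned long end)
  {
	/* Hypothetical driver state embedding the notifier. */
	struct my_dev *dev = container_of(mn, struct my_dev, notifier);

	/*
	 * Runs from the architecture TLB flush code, under the page
	 * table lock where required, so no invalidate_range_end()
	 * callback is needed purely for secondary TLB invalidation.
	 */
	my_dev_flush_tlb(dev, start, end);
  }

  static const struct mmu_notifier_ops my_dev_mmu_notifier_ops = {
	.invalidate_secondary_tlbs = my_dev_invalidate_secondary_tlbs,
  };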
Signed-off-by: Alistair Popple <apopple@nvidia.com>
---
 include/linux/mmu_notifier.h | 56 +------------------------------------
 kernel/events/uprobes.c      |  2 +-
 mm/huge_memory.c             | 25 ++---------------
 mm/hugetlb.c                 |  2 +-
 mm/memory.c                  |  8 +----
 mm/migrate_device.c          |  9 +-----
 mm/mmu_notifier.c            | 25 ++---------------
 mm/rmap.c                    | 42 +----------------------------
 8 files changed, 14 insertions(+), 155 deletions(-)

diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index a4bc818..6e3c857 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -395,8 +395,7 @@ extern int __mmu_notifier_test_young(struct mm_struct *mm,
 extern void __mmu_notifier_change_pte(struct mm_struct *mm,
				      unsigned long address, pte_t pte);
 extern int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *r);
-extern void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *r,
-						bool only_end);
+extern void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *r);
 extern void __mmu_notifier_arch_invalidate_secondary_tlbs(struct mm_struct *mm,
					unsigned long start, unsigned long end);
 extern bool
@@ -481,14 +480,7 @@ mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range)
		might_sleep();

	if (mm_has_notifiers(range->mm))
-		__mmu_notifier_invalidate_range_end(range, false);
-}
-
-static inline void
-mmu_notifier_invalidate_range_only_end(struct mmu_notifier_range *range)
-{
-	if (mm_has_notifiers(range->mm))
-		__mmu_notifier_invalidate_range_end(range, true);
+		__mmu_notifier_invalidate_range_end(range);
 }

 static inline void mmu_notifier_arch_invalidate_secondary_tlbs(struct mm_struct *mm,
@@ -582,45 +574,6 @@ static inline void mmu_notifier_range_init_owner(
	__young;							\
 })

-#define ptep_clear_flush_notify(__vma, __address, __ptep)		\
-({									\
-	unsigned long ___addr = __address & PAGE_MASK;			\
-	struct mm_struct *___mm = (__vma)->vm_mm;			\
-	pte_t ___pte;							\
-									\
-	___pte = ptep_clear_flush(__vma, __address, __ptep);		\
-	mmu_notifier_arch_invalidate_secondary_tlbs(___mm, ___addr,	\
-					___addr + PAGE_SIZE);		\
-									\
-	___pte;								\
-})
-
-#define pmdp_huge_clear_flush_notify(__vma, __haddr, __pmd)		\
-({									\
-	unsigned long ___haddr = __haddr & HPAGE_PMD_MASK;		\
-	struct mm_struct *___mm = (__vma)->vm_mm;			\
-	pmd_t ___pmd;							\
-									\
-	___pmd = pmdp_huge_clear_flush(__vma, __haddr, __pmd);		\
-	mmu_notifier_arch_invalidate_secondary_tlbs(___mm, ___haddr,	\
-					___haddr + HPAGE_PMD_SIZE);	\
-									\
-	___pmd;								\
-})
-
-#define pudp_huge_clear_flush_notify(__vma, __haddr, __pud)		\
-({									\
-	unsigned long ___haddr = __haddr & HPAGE_PUD_MASK;		\
-	struct mm_struct *___mm = (__vma)->vm_mm;			\
-	pud_t ___pud;							\
-									\
-	___pud = pudp_huge_clear_flush(__vma, __haddr, __pud);		\
-	mmu_notifier_arch_invalidate_secondary_tlbs(___mm, ___haddr,	\
-					___haddr + HPAGE_PUD_SIZE);	\
-									\
-	___pud;								\
-})
-
 /*
  * set_pte_at_notify() sets the pte _after_ running the notifier.
  * This is safe to start by updating the secondary MMUs, because the primary MMU
@@ -711,11 +664,6 @@ void mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range)
 {
 }

-static inline void
-mmu_notifier_invalidate_range_only_end(struct mmu_notifier_range *range)
-{
-}
-
 static inline void mmu_notifier_arch_invalidate_secondary_tlbs(struct mm_struct *mm,
				  unsigned long start, unsigned long end)
 {
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index f0ac5b8..3048589 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -193,7 +193,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
	}

	flush_cache_page(vma, addr, pte_pfn(ptep_get(pvmw.pte)));
-	ptep_clear_flush_notify(vma, addr, pvmw.pte);
+	ptep_clear_flush(vma, addr, pvmw.pte);
	if (new_page)
		set_pte_at_notify(mm, addr, pvmw.pte,
				  mk_pte(new_page, vma->vm_page_prot));
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a232891..c80d0f9 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2003,7 +2003,7 @@ static void __split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,

	count_vm_event(THP_SPLIT_PUD);

-	pudp_huge_clear_flush_notify(vma, haddr, pud);
+	pudp_huge_clear_flush(vma, haddr, pud);
 }

 void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
@@ -2023,11 +2023,7 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,

 out:
	spin_unlock(ptl);
-	/*
-	 * No need to double call mmu_notifier->invalidate_range() callback as
-	 * the above pudp_huge_clear_flush_notify() did already call it.
-	 */
-	mmu_notifier_invalidate_range_only_end(&range);
+	mmu_notifier_invalidate_range_end(&range);
 }

 #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
@@ -2094,7 +2090,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
	count_vm_event(THP_SPLIT_PMD);

	if (!vma_is_anonymous(vma)) {
-		old_pmd = pmdp_huge_clear_flush_notify(vma, haddr, pmd);
+		old_pmd = pmdp_huge_clear_flush(vma, haddr, pmd);
		/*
		 * We are going to unmap this huge page. So
		 * just go ahead and zap it
@@ -2304,20 +2300,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,

 out:
	spin_unlock(ptl);
-	/*
-	 * No need to double call mmu_notifier->invalidate_range() callback.
-	 * They are 3 cases to consider inside __split_huge_pmd_locked():
-	 * 1) pmdp_huge_clear_flush_notify() call invalidate_range() obvious
-	 * 2) __split_huge_zero_page_pmd() read only zero page and any write
-	 *    fault will trigger a flush_notify before pointing to a new page
-	 *    (it is fine if the secondary mmu keeps pointing to the old zero
-	 *    page in the meantime)
-	 * 3) Split a huge pmd into pte pointing to the same page. No need
-	 *    to invalidate secondary tlb entry they are all still valid.
-	 *    any further changes to individual pte will notify. So no need
-	 *    to call mmu_notifier->invalidate_range()
-	 */
-	mmu_notifier_invalidate_range_only_end(&range);
+	mmu_notifier_invalidate_range_end(&range);
 }

 void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 178c930..b903377 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5690,8 +5690,6 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,

		/* Break COW or unshare */
		huge_ptep_clear_flush(vma, haddr, ptep);
-		mmu_notifier_arch_invalidate_secondary_tlbs(mm, range.start,
-							    range.end);
		page_remove_rmap(&old_folio->page, vma, true);
		hugepage_add_new_anon_rmap(new_folio, vma, haddr);
		if (huge_pte_uffd_wp(pte))
diff --git a/mm/memory.c b/mm/memory.c
index 01f39e8..fbfcc01 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3149,7 +3149,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
		 * that left a window where the new PTE could be loaded into
		 * some TLBs while the old PTE remains in others.
		 */
-		ptep_clear_flush_notify(vma, vmf->address, vmf->pte);
+		ptep_clear_flush(vma, vmf->address, vmf->pte);
		folio_add_new_anon_rmap(new_folio, vma, vmf->address);
		folio_add_lru_vma(new_folio, vma);
		/*
@@ -3195,11 +3195,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
		pte_unmap_unlock(vmf->pte, vmf->ptl);
	}

-	/*
-	 * No need to double call mmu_notifier->invalidate_range() callback as
-	 * the above ptep_clear_flush_notify() did already call it.
-	 */
-	mmu_notifier_invalidate_range_only_end(&range);
+	mmu_notifier_invalidate_range_end(&range);

	if (new_folio)
		folio_put(new_folio);
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 8365158..9ce8214 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -658,7 +658,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,

	if (flush) {
		flush_cache_page(vma, addr, pte_pfn(orig_pte));
-		ptep_clear_flush_notify(vma, addr, ptep);
+		ptep_clear_flush(vma, addr, ptep);
		set_pte_at_notify(mm, addr, ptep, entry);
		update_mmu_cache(vma, addr, ptep);
	} else {
@@ -754,13 +754,8 @@ static void __migrate_device_pages(unsigned long *src_pfns,
			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
	}

-	/*
-	 * No need to double call mmu_notifier->invalidate_range() callback as
-	 * the above ptep_clear_flush_notify() inside migrate_vma_insert_page()
-	 * did already call it.
-	 */
	if (notified)
-		mmu_notifier_invalidate_range_only_end(&range);
+		mmu_notifier_invalidate_range_end(&range);
 }

 /**
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 34c5a84..42bcc0a 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -551,7 +551,7 @@ int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)

 static void
 mn_hlist_invalidate_end(struct mmu_notifier_subscriptions *subscriptions,
-			struct mmu_notifier_range *range, bool only_end)
+			struct mmu_notifier_range *range)
 {
	struct mmu_notifier *subscription;
	int id;
@@ -559,24 +559,6 @@ mn_hlist_invalidate_end(struct mmu_notifier_subscriptions *subscriptions,
	id = srcu_read_lock(&srcu);
	hlist_for_each_entry_rcu(subscription, &subscriptions->list, hlist,
				 srcu_read_lock_held(&srcu)) {
-		/*
-		 * Subsystems should register either invalidate_secondary_tlbs()
-		 * or invalidate_range_start()/end() callbacks.
-		 *
-		 * We call invalidate_secondary_tlbs() here so that subsystems
-		 * can use larger range based invalidations. In some cases
-		 * though invalidate_secondary_tlbs() needs to be called while
-		 * holding the page table lock. In that case call sites use
-		 * mmu_notifier_invalidate_range_only_end() and we know it is
-		 * safe to skip secondary TLB invalidation as it will have
-		 * already been done.
-		 */
-		if (!only_end && subscription->ops->invalidate_secondary_tlbs)
-			subscription->ops->invalidate_secondary_tlbs(
-				subscription,
-				range->mm,
-				range->start,
-				range->end);
		if (subscription->ops->invalidate_range_end) {
			if (!mmu_notifier_range_blockable(range))
				non_block_start();
@@ -589,8 +571,7 @@ mn_hlist_invalidate_end(struct mmu_notifier_subscriptions *subscriptions,
	srcu_read_unlock(&srcu, id);
 }

-void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range,
-					 bool only_end)
+void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range)
 {
	struct mmu_notifier_subscriptions *subscriptions =
		range->mm->notifier_subscriptions;
@@ -600,7 +581,7 @@ void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range,
	mn_itree_inv_end(subscriptions);

	if (!hlist_empty(&subscriptions->list))
-		mn_hlist_invalidate_end(subscriptions, range, only_end);
+		mn_hlist_invalidate_end(subscriptions, range);
	lock_map_release(&__mmu_notifier_invalidate_range_start_map);
 }

diff --git a/mm/rmap.c b/mm/rmap.c
index b74fc2c..1fbe83e 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -990,13 +990,6 @@ static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 #endif
		}

-		/*
-		 * No need to call mmu_notifier_arch_invalidate_secondary_tlbs() as
-		 * we are downgrading page table protection not changing it to
-		 * point to a new page.
-		 *
-		 * See Documentation/mm/mmu_notifier.rst
-		 */
		if (ret)
			cleaned++;
	}
@@ -1554,8 +1547,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
					hugetlb_vma_unlock_write(vma);
					flush_tlb_range(vma,
						range.start, range.end);
-					mmu_notifier_arch_invalidate_secondary_tlbs(
-						mm, range.start, range.end);
					/*
					 * The ref count of the PMD page was
					 * dropped which is part of the way map
@@ -1628,9 +1619,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
			 * copied pages.
			 */
			dec_mm_counter(mm, mm_counter(&folio->page));
-			/* We have to invalidate as we cleared the pte */
-			mmu_notifier_arch_invalidate_secondary_tlbs(mm, address,
-						      address + PAGE_SIZE);
		} else if (folio_test_anon(folio)) {
			swp_entry_t entry = { .val = page_private(subpage) };
			pte_t swp_pte;
@@ -1642,10 +1630,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
					folio_test_swapcache(folio))) {
				WARN_ON_ONCE(1);
				ret = false;
-				/* We have to invalidate as we cleared the pte */
-				mmu_notifier_arch_invalidate_secondary_tlbs(mm,
-							address,
-							address + PAGE_SIZE);
				page_vma_mapped_walk_done(&pvmw);
				break;
			}
@@ -1676,10 +1660,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
			 */
			if (ref_count == 1 + map_count &&
			    !folio_test_dirty(folio)) {
-				/* Invalidate as we cleared the pte */
-				mmu_notifier_arch_invalidate_secondary_tlbs(
-						mm, address,
-						address + PAGE_SIZE);
				dec_mm_counter(mm, MM_ANONPAGES);
				goto discard;
			}
@@ -1734,9 +1714,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
			if (pte_uffd_wp(pteval))
				swp_pte = pte_swp_mkuffd_wp(swp_pte);
			set_pte_at(mm, address, pvmw.pte, swp_pte);
-			/* Invalidate as we cleared the pte */
-			mmu_notifier_arch_invalidate_secondary_tlbs(mm, address,
-						      address + PAGE_SIZE);
		} else {
			/*
			 * This is a locked file-backed folio,
@@ -1752,13 +1729,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
			dec_mm_counter(mm, mm_counter_file(&folio->page));
		}
 discard:
-		/*
-		 * No need to call mmu_notifier_arch_invalidate_secondary_tlbs() it
-		 * has be done above for all cases requiring it to happen under
-		 * page table lock before mmu_notifier_invalidate_range_end()
-		 *
-		 * See Documentation/mm/mmu_notifier.rst
-		 */
		page_remove_rmap(subpage, vma, folio_test_hugetlb(folio));
		if (vma->vm_flags & VM_LOCKED)
			mlock_drain_local();
@@ -1937,8 +1907,6 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
					hugetlb_vma_unlock_write(vma);
					flush_tlb_range(vma,
						range.start, range.end);
-					mmu_notifier_arch_invalidate_secondary_tlbs(
-						mm, range.start, range.end);

					/*
					 * The ref count of the PMD page was
@@ -2043,9 +2011,6 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
			 * copied pages.
			 */
			dec_mm_counter(mm, mm_counter(&folio->page));
-			/* We have to invalidate as we cleared the pte */
-			mmu_notifier_arch_invalidate_secondary_tlbs(mm, address,
-						      address + PAGE_SIZE);
		} else {
			swp_entry_t entry;
			pte_t swp_pte;
@@ -2109,13 +2074,6 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
			 */
		}

-		/*
-		 * No need to call mmu_notifier_arch_invalidate_secondary_tlbs() it
-		 * has be done above for all cases requiring it to happen under
-		 * page table lock before mmu_notifier_invalidate_range_end()
-		 *
-		 * See Documentation/mm/mmu_notifier.rst
-		 */
		page_remove_rmap(subpage, vma, folio_test_hugetlb(folio));
		if (vma->vm_flags & VM_LOCKED)
			mlock_drain_local();
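
For reference, the caller-side change made throughout this patch is
mechanical. A minimal sketch (example_clear_pte() is hypothetical and
elides page table lock handling):

  static void example_clear_pte(struct vm_area_struct *vma,
				struct mm_struct *mm,
				unsigned long address, pte_t *ptep)
  {
	struct mmu_notifier_range range;

	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
				address & PAGE_MASK,
				(address & PAGE_MASK) + PAGE_SIZE);
	mmu_notifier_invalidate_range_start(&range);

	/*
	 * Was: ptep_clear_flush_notify(vma, address, ptep);
	 * ptep_clear_flush() now invalidates secondary TLBs via the
	 * architecture TLB flush code.
	 */
	ptep_clear_flush(vma, address, ptep);

	/*
	 * Was: mmu_notifier_invalidate_range_only_end(&range);
	 * The plain end function no longer touches secondary TLBs, so
	 * it is always the right call.
	 */
	mmu_notifier_invalidate_range_end(&range);
  }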