From patchwork Tue Jul 25 13:42:03 2023
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13326452
From: Alistair Popple <apopple@nvidia.com>
To: akpm@linux-foundation.org
Cc: jgg@ziepe.ca, npiggin@gmail.com, catalin.marinas@arm.com, jhubbard@nvidia.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, nicolinc@nvidia.com, robin.murphy@arm.com, seanjc@google.com, will@kernel.org, zhi.wang.linux@gmail.com, mpe@ellerman.id.au, linuxppc-dev@lists.ozlabs.org, rtummala@nvidia.com, kevin.tian@intel.com, iommu@lists.linux.dev, x86@kernel.org, fbarrat@linux.ibm.com, ajd@linux.ibm.com, chaitanya.kumar.borah@intel.com, tvrtko.ursulin@linux.intel.com, intel-gfx@lists.freedesktop.org, Alistair Popple, Jason Gunthorpe
Subject: [PATCH v4 1/5] arm64/smmu: Use TLBI ASID when invalidating entire range
Date: Tue, 25 Jul 2023 23:42:03 +1000
X-Mailer: git-send-email 2.39.2

The ARM SMMU has a specific command for invalidating the TLB for an entire
ASID. Currently this is used for the IO_PGTABLE API but not for ATS when
called from the MMU notifier.

The current implementation of notifiers does not attempt to invalidate such a
large address range, instead walking each VMA and invalidating each range
individually during mmap removal. However, in future, SMMU TLB invalidations
are going to be sent as part of the normal flush_tlb_*() kernel calls. To
better deal with that, add handling to use TLBI ASID when invalidating the
entire address space.
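As an aside for readers following the size arithmetic: later in this series a
full-address-space flush reaches the notifier as (start = 0, end = -1UL), so
end - start wraps to ULONG_MAX, which the hunk below maps onto a whole-ASID
invalidation instead of a ranged one. The stand-alone sketch here is
illustrative only; it mirrors just that size handling, and the helper names
in it are made up rather than SMMU driver functions.

#include <stdio.h>
#include <limits.h>

/* Illustrative stand-ins for the two SMMU invalidation strategies. */
static void inv_whole_asid(void)
{
	puts("TLBI ASID (whole address space)");
}

static void inv_range(unsigned long start, unsigned long size)
{
	printf("ranged invalidation: start=%#lx size=%#lx\n", start, size);
}

/* Mirrors only the size handling added to arm_smmu_mm_invalidate_range() below. */
static void smmu_invalidate(unsigned long start, unsigned long end)
{
	unsigned long size = end - start;

	if (size == ULONG_MAX)	/* start = 0, end = -1UL wraps to ULONG_MAX */
		size = 0;

	if (!size)
		inv_whole_asid();
	else
		inv_range(start, size);
}

int main(void)
{
	smmu_invalidate(0, -1UL);		/* full mm flush -> TLBI ASID */
	smmu_invalidate(0x1000, 0x2000);	/* one 4K page -> ranged inv. */
	return 0;
}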
Signed-off-by: Alistair Popple
Reviewed-by: Jason Gunthorpe
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
index a5a63b1..2a19784 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
@@ -200,10 +200,20 @@ static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
 	 * range. So do a simple translation here by calculating size correctly.
 	 */
 	size = end - start;
+	if (size == ULONG_MAX)
+		size = 0;
+
+	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_BTM)) {
+		if (!size)
+			arm_smmu_tlb_inv_asid(smmu_domain->smmu,
+					      smmu_mn->cd->asid);
+		else
+			arm_smmu_tlb_inv_range_asid(start, size,
+						    smmu_mn->cd->asid,
+						    PAGE_SIZE, false,
+						    smmu_domain);
+	}
 
-	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_BTM))
-		arm_smmu_tlb_inv_range_asid(start, size, smmu_mn->cd->asid,
-					    PAGE_SIZE, false, smmu_domain);
 	arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, start, size);
 }

From patchwork Tue Jul 25 13:42:04 2023
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13326453
From: Alistair Popple <apopple@nvidia.com>
To: akpm@linux-foundation.org
Cc: jgg@ziepe.ca, npiggin@gmail.com, catalin.marinas@arm.com, jhubbard@nvidia.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, nicolinc@nvidia.com, robin.murphy@arm.com, seanjc@google.com, will@kernel.org, zhi.wang.linux@gmail.com, mpe@ellerman.id.au, linuxppc-dev@lists.ozlabs.org, rtummala@nvidia.com, kevin.tian@intel.com, iommu@lists.linux.dev, x86@kernel.org, fbarrat@linux.ibm.com, ajd@linux.ibm.com, chaitanya.kumar.borah@intel.com, tvrtko.ursulin@linux.intel.com, intel-gfx@lists.freedesktop.org, Alistair Popple, Jason Gunthorpe
Subject: [PATCH v4 2/5] mmu_notifiers: Fixup comment in mmu_interval_read_begin()
Date: Tue, 25 Jul 2023 23:42:04 +1000
X-Mailer: git-send-email 2.39.2

The comment in mmu_interval_read_begin() refers to a function that doesn't
exist and uses the wrong call-back name. The op for mmu interval notifiers is
mmu_interval_notifier_ops->invalidate(), so fix the comment up to reflect
that.
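For reference, the call-back being named here is the one provided by struct
mmu_interval_notifier_ops. The minimal subscriber below is an illustrative
sketch only; my_ops and my_invalidate are placeholders rather than kernel
symbols, and a real driver would also take its own lock around the sequence
update.

#include <linux/mmu_notifier.h>

static bool my_invalidate(struct mmu_interval_notifier *interval_sub,
			  const struct mmu_notifier_range *range,
			  unsigned long cur_seq)
{
	if (!mmu_notifier_range_blockable(range))
		return false;

	/* Publish the new sequence so mmu_interval_read_retry() notices. */
	mmu_interval_set_seq(interval_sub, cur_seq);
	/* ...tear down device mappings covering range->start..range->end... */
	return true;
}

static const struct mmu_interval_notifier_ops my_ops = {
	.invalidate = my_invalidate,
};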
Signed-off-by: Alistair Popple
Reviewed-by: Jason Gunthorpe
---
 mm/mmu_notifier.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 50c0dde..b7ad155 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -199,7 +199,7 @@ mmu_interval_read_begin(struct mmu_interval_notifier *interval_sub)
 	 * invalidate_start/end and is colliding.
 	 *
 	 * The locking looks broadly like this:
-	 *    mn_tree_invalidate_start():          mmu_interval_read_begin():
+	 *    mn_itree_inv_start():                mmu_interval_read_begin():
 	 *                                          spin_lock
 	 *                                           seq = READ_ONCE(interval_sub->invalidate_seq);
 	 *                                           seq == subs->invalidate_seq
@@ -207,7 +207,7 @@ mmu_interval_read_begin(struct mmu_interval_notifier *interval_sub)
 	 *    spin_lock
 	 *     seq = ++subscriptions->invalidate_seq
 	 *    spin_unlock
-	 *     op->invalidate_range():
+	 *     op->invalidate():
 	 *       user_lock
 	 *        mmu_interval_set_seq()
 	 *         interval_sub->invalidate_seq = seq

From patchwork Tue Jul 25 13:42:05 2023
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13326454
From: Alistair Popple <apopple@nvidia.com>
To: akpm@linux-foundation.org
Cc: jgg@ziepe.ca, npiggin@gmail.com, catalin.marinas@arm.com, jhubbard@nvidia.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, nicolinc@nvidia.com, robin.murphy@arm.com, seanjc@google.com, will@kernel.org, zhi.wang.linux@gmail.com, mpe@ellerman.id.au, linuxppc-dev@lists.ozlabs.org, rtummala@nvidia.com, kevin.tian@intel.com, iommu@lists.linux.dev, x86@kernel.org, fbarrat@linux.ibm.com, ajd@linux.ibm.com, chaitanya.kumar.borah@intel.com, tvrtko.ursulin@linux.intel.com, intel-gfx@lists.freedesktop.org, Alistair Popple, SeongJae Park, Jason Gunthorpe
Subject: [PATCH v4 3/5] mmu_notifiers: Call invalidate_range() when invalidating TLBs
Date: Tue, 25 Jul 2023 23:42:05 +1000
Message-Id: <0287ae32d91393a582897d6c4db6f7456b1001f2.1690292440.git-series.apopple@nvidia.com>
X-Mailer: git-send-email 2.39.2

The invalidate_range() notifier is going to become an architecture-specific
mmu notifier used to keep the TLB of secondary MMUs, such as an IOMMU, in
sync with the CPU page tables. Currently it is called from separate code
paths to the main CPU TLB invalidations. This can lead to a secondary TLB not
getting invalidated when required and makes it hard to reason about when
exactly the secondary TLB is invalidated.

To fix this, move the notifier call to the architecture-specific TLB
maintenance functions for architectures that have secondary MMUs requiring
explicit software invalidations.

This fixes an SMMU bug on ARM64. On ARM64, PTE permission upgrades require a
TLB invalidation. This invalidation is done by the architecture-specific
ptep_set_access_flags(), which calls flush_tlb_page() if required. However,
this doesn't call the notifier, resulting in infinite faults being generated
by devices using the SMMU if it has previously cached a read-only PTE in its
TLB.

Moving the invalidations into the TLB invalidation functions ensures all
invalidations happen at the same time as the CPU invalidation. The
architecture-specific flush_tlb_all() routines do not call the notifier as
none of the IOMMUs require this.
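For context, this is roughly what a consumer of the notifier looks like. The
sketch below is illustrative only: the my_* symbols are placeholders rather
than kernel or driver functions, and it assumes the invalidate_range()
call-back as it exists at this point in the series.

#include <linux/mmu_notifier.h>
#include <linux/printk.h>

static void my_device_tlb_inv(unsigned long start, unsigned long end)
{
	/* Stand-in for the driver's real device-TLB invalidation. */
	pr_debug("device TLB inv [%#lx, %#lx)\n", start, end);
}

static void my_mn_invalidate_range(struct mmu_notifier *mn,
				   struct mm_struct *mm,
				   unsigned long start, unsigned long end)
{
	/*
	 * After this patch the call arrives from flush_tlb_mm(),
	 * __flush_tlb_range() and friends, so it is always paired with the
	 * CPU TLB invalidation rather than coming from a separate path.
	 */
	my_device_tlb_inv(start, end);
}

static const struct mmu_notifier_ops my_mn_ops = {
	.invalidate_range = my_mn_invalidate_range,
};

static struct mmu_notifier my_mn = { .ops = &my_mn_ops };

static int my_driver_bind(struct mm_struct *mm)
{
	/* Registration is unchanged; only the notification sites move. */
	return mmu_notifier_register(&my_mn, mm);
}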
Signed-off-by: Alistair Popple
Suggested-by: Jason Gunthorpe
Tested-by: SeongJae Park
Acked-by: Catalin Marinas
Reviewed-by: Jason Gunthorpe
---
 arch/arm64/include/asm/tlbflush.h             | 5 +++++
 arch/powerpc/include/asm/book3s/64/tlbflush.h | 1 +
 arch/powerpc/mm/book3s64/radix_hugetlbpage.c  | 1 +
 arch/powerpc/mm/book3s64/radix_tlb.c          | 4 ++++
 arch/x86/include/asm/tlbflush.h               | 2 ++
 arch/x86/mm/tlb.c                             | 2 ++
 include/asm-generic/tlb.h                     | 1 -
 7 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 3456866..a99349d 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include <linux/mmu_notifier.h>
 #include
 #include
@@ -252,6 +253,7 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 	__tlbi(aside1is, asid);
 	__tlbi_user(aside1is, asid);
 	dsb(ish);
+	mmu_notifier_invalidate_range(mm, 0, -1UL);
 }
 
 static inline void __flush_tlb_page_nosync(struct mm_struct *mm,
@@ -263,6 +265,8 @@ static inline void __flush_tlb_page_nosync(struct mm_struct *mm,
 	addr = __TLBI_VADDR(uaddr, ASID(mm));
 	__tlbi(vale1is, addr);
 	__tlbi_user(vale1is, addr);
+	mmu_notifier_invalidate_range(mm, uaddr & PAGE_MASK,
+				      (uaddr & PAGE_MASK) + PAGE_SIZE);
 }
 
 static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
@@ -396,6 +400,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 		scale++;
 	}
 	dsb(ish);
+	mmu_notifier_invalidate_range(vma->vm_mm, start, end);
 }
 
 static inline void flush_tlb_range(struct vm_area_struct *vma,
diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush.h b/arch/powerpc/include/asm/book3s/64/tlbflush.h
index 0d0c144..dca0477 100644
--- a/arch/powerpc/include/asm/book3s/64/tlbflush.h
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush.h
@@ -5,6 +5,7 @@
 #define MMU_NO_CONTEXT	~0UL
 
 #include
+#include <linux/mmu_notifier.h>
 #include
 #include
diff --git a/arch/powerpc/mm/book3s64/radix_hugetlbpage.c b/arch/powerpc/mm/book3s64/radix_hugetlbpage.c
index 5e31955..f3fb49f 100644
--- a/arch/powerpc/mm/book3s64/radix_hugetlbpage.c
+++ b/arch/powerpc/mm/book3s64/radix_hugetlbpage.c
@@ -39,6 +39,7 @@ void radix__flush_hugetlb_tlb_range(struct vm_area_struct *vma, unsigned long st
 		radix__flush_tlb_pwc_range_psize(vma->vm_mm, start, end, psize);
 	else
 		radix__flush_tlb_range_psize(vma->vm_mm, start, end, psize);
+	mmu_notifier_invalidate_range(vma->vm_mm, start, end);
 }
 
 void radix__huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
index 0bd4866..4d44902 100644
--- a/arch/powerpc/mm/book3s64/radix_tlb.c
+++ b/arch/powerpc/mm/book3s64/radix_tlb.c
@@ -987,6 +987,7 @@ void radix__flush_tlb_mm(struct mm_struct *mm)
 		}
 	}
 	preempt_enable();
+	mmu_notifier_invalidate_range(mm, 0, -1UL);
 }
 EXPORT_SYMBOL(radix__flush_tlb_mm);
 
@@ -1020,6 +1021,7 @@ static void __flush_all_mm(struct mm_struct *mm, bool fullmm)
 			_tlbiel_pid_multicast(mm, pid, RIC_FLUSH_ALL);
 	}
 	preempt_enable();
+	mmu_notifier_invalidate_range(mm, 0, -1UL);
 }
 
 void radix__flush_all_mm(struct mm_struct *mm)
@@ -1228,6 +1230,7 @@ static inline void __radix__flush_tlb_range(struct mm_struct *mm,
 	}
 out:
 	preempt_enable();
+	mmu_notifier_invalidate_range(mm, start, end);
 }
 
 void radix__flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
@@ -1392,6 +1395,7 @@ static void __radix__flush_tlb_range_psize(struct mm_struct *mm,
 	}
 out:
 	preempt_enable();
+	mmu_notifier_invalidate_range(mm, start, end);
 }
 
 void radix__flush_tlb_range_psize(struct mm_struct *mm, unsigned long start,
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 837e4a5..0a54323 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -3,6 +3,7 @@
 #define _ASM_X86_TLBFLUSH_H
 
 #include
+#include <linux/mmu_notifier.h>
 #include
 #include
@@ -282,6 +283,7 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
 {
 	inc_mm_tlb_gen(mm);
 	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
+	mmu_notifier_invalidate_range(mm, 0, -1UL);
 }
 
 static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 267acf2..93b2f81 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include <linux/mmu_notifier.h>
 #include
 #include
@@ -1036,6 +1037,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 
 	put_flush_tlb_info();
 	put_cpu();
+	mmu_notifier_invalidate_range(mm, start, end);
 }
 
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index b466172..bc32a22 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -456,7 +456,6 @@ static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 		return;
 
 	tlb_flush(tlb);
-	mmu_notifier_invalidate_range(tlb->mm, tlb->start, tlb->end);
 	__tlb_reset_range(tlb);
 }

From patchwork Tue Jul 25 13:42:06 2023
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13326455
From: Alistair Popple <apopple@nvidia.com>
To: akpm@linux-foundation.org
Cc: jgg@ziepe.ca, npiggin@gmail.com, catalin.marinas@arm.com, jhubbard@nvidia.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, nicolinc@nvidia.com, robin.murphy@arm.com, seanjc@google.com, will@kernel.org, zhi.wang.linux@gmail.com, mpe@ellerman.id.au, linuxppc-dev@lists.ozlabs.org, rtummala@nvidia.com, kevin.tian@intel.com, iommu@lists.linux.dev, x86@kernel.org, fbarrat@linux.ibm.com, ajd@linux.ibm.com, chaitanya.kumar.borah@intel.com, tvrtko.ursulin@linux.intel.com, intel-gfx@lists.freedesktop.org, Alistair Popple, Jason Gunthorpe
Subject: [PATCH v4 4/5] mmu_notifiers: Don't invalidate secondary TLBs as part of mmu_notifier_invalidate_range_end()
Date: Tue, 25 Jul 2023 23:42:06 +1000
Message-Id: <90d749d03cbab256ca0edeb5287069599566d783.1690292440.git-series.apopple@nvidia.com>
X-Mailer: git-send-email 2.39.2

Secondary TLBs are now invalidated from the architecture-specific TLB
invalidation functions. Therefore there is no need to explicitly notify or
invalidate as part of the range end functions. This means we can remove
mmu_notifier_invalidate_range_only_end() and some of the ptep_*_notify()
functions.
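To see what this means for a typical caller, here is a hedged sketch of the
pattern that remains after this change. The function below is hypothetical
and not taken from the patch; only the mmu_notifier_* and ptep_clear_flush()
interfaces it uses are the real ones touched by the diff that follows.

#include <linux/mm.h>
#include <linux/mmu_notifier.h>
#include <linux/pgtable.h>

static void example_clear_pte(struct vm_area_struct *vma, unsigned long addr,
			      pte_t *ptep)
{
	struct mmu_notifier_range range;

	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
				addr, addr + PAGE_SIZE);
	mmu_notifier_invalidate_range_start(&range);

	/*
	 * ptep_clear_flush() ends in the architecture TLB flush, which (as of
	 * the previous patch) already calls the invalidate_range() notifier,
	 * so the ptep_clear_flush_notify() wrapper is no longer needed.
	 */
	ptep_clear_flush(vma, addr, ptep);

	/*
	 * Because the flush above already notified, the plain _end() call is
	 * enough and the _only_end() variant can go away.
	 */
	mmu_notifier_invalidate_range_end(&range);
}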
HPAGE_PUD_SIZE); \ - \ - ___pud; \ -}) - /* * set_pte_at_notify() sets the pte _after_ running the notifier. * This is safe to start by updating the secondary MMUs, because the primary MMU @@ -711,11 +664,6 @@ void mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range) { } -static inline void -mmu_notifier_invalidate_range_only_end(struct mmu_notifier_range *range) -{ -} - static inline void mmu_notifier_invalidate_range(struct mm_struct *mm, unsigned long start, unsigned long end) { diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c index f0ac5b8..3048589 100644 --- a/kernel/events/uprobes.c +++ b/kernel/events/uprobes.c @@ -193,7 +193,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr, } flush_cache_page(vma, addr, pte_pfn(ptep_get(pvmw.pte))); - ptep_clear_flush_notify(vma, addr, pvmw.pte); + ptep_clear_flush(vma, addr, pvmw.pte); if (new_page) set_pte_at_notify(mm, addr, pvmw.pte, mk_pte(new_page, vma->vm_page_prot)); diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 762be2f..3ece117 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2003,7 +2003,7 @@ static void __split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud, count_vm_event(THP_SPLIT_PUD); - pudp_huge_clear_flush_notify(vma, haddr, pud); + pudp_huge_clear_flush(vma, haddr, pud); } void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud, @@ -2023,11 +2023,7 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud, out: spin_unlock(ptl); - /* - * No need to double call mmu_notifier->invalidate_range() callback as - * the above pudp_huge_clear_flush_notify() did already call it. - */ - mmu_notifier_invalidate_range_only_end(&range); + mmu_notifier_invalidate_range_end(&range); } #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */ @@ -2094,7 +2090,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, count_vm_event(THP_SPLIT_PMD); if (!vma_is_anonymous(vma)) { - old_pmd = pmdp_huge_clear_flush_notify(vma, haddr, pmd); + old_pmd = pmdp_huge_clear_flush(vma, haddr, pmd); /* * We are going to unmap this huge page. So * just go ahead and zap it @@ -2304,20 +2300,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd, out: spin_unlock(ptl); - /* - * No need to double call mmu_notifier->invalidate_range() callback. - * They are 3 cases to consider inside __split_huge_pmd_locked(): - * 1) pmdp_huge_clear_flush_notify() call invalidate_range() obvious - * 2) __split_huge_zero_page_pmd() read only zero page and any write - * fault will trigger a flush_notify before pointing to a new page - * (it is fine if the secondary mmu keeps pointing to the old zero - * page in the meantime) - * 3) Split a huge pmd into pte pointing to the same page. No need - * to invalidate secondary tlb entry they are all still valid. - * any further changes to individual pte will notify. 
So no need - * to call mmu_notifier->invalidate_range() - */ - mmu_notifier_invalidate_range_only_end(&range); + mmu_notifier_invalidate_range_end(&range); } void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address, diff --git a/mm/hugetlb.c b/mm/hugetlb.c index dc1ec19..9c6e431 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -5715,7 +5715,6 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma, /* Break COW or unshare */ huge_ptep_clear_flush(vma, haddr, ptep); - mmu_notifier_invalidate_range(mm, range.start, range.end); page_remove_rmap(&old_folio->page, vma, true); hugepage_add_new_anon_rmap(new_folio, vma, haddr); if (huge_pte_uffd_wp(pte)) diff --git a/mm/memory.c b/mm/memory.c index ad79039..8dca544 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3158,7 +3158,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf) * that left a window where the new PTE could be loaded into * some TLBs while the old PTE remains in others. */ - ptep_clear_flush_notify(vma, vmf->address, vmf->pte); + ptep_clear_flush(vma, vmf->address, vmf->pte); folio_add_new_anon_rmap(new_folio, vma, vmf->address); folio_add_lru_vma(new_folio, vma); /* @@ -3204,11 +3204,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf) pte_unmap_unlock(vmf->pte, vmf->ptl); } - /* - * No need to double call mmu_notifier->invalidate_range() callback as - * the above ptep_clear_flush_notify() did already call it. - */ - mmu_notifier_invalidate_range_only_end(&range); + mmu_notifier_invalidate_range_end(&range); if (new_folio) folio_put(new_folio); diff --git a/mm/migrate_device.c b/mm/migrate_device.c index e29626e..6c556b5 100644 --- a/mm/migrate_device.c +++ b/mm/migrate_device.c @@ -658,7 +658,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate, if (flush) { flush_cache_page(vma, addr, pte_pfn(orig_pte)); - ptep_clear_flush_notify(vma, addr, ptep); + ptep_clear_flush(vma, addr, ptep); set_pte_at_notify(mm, addr, ptep, entry); update_mmu_cache(vma, addr, ptep); } else { @@ -763,13 +763,8 @@ static void __migrate_device_pages(unsigned long *src_pfns, src_pfns[i] &= ~MIGRATE_PFN_MIGRATE; } - /* - * No need to double call mmu_notifier->invalidate_range() callback as - * the above ptep_clear_flush_notify() inside migrate_vma_insert_page() - * did already call it. - */ if (notified) - mmu_notifier_invalidate_range_only_end(&range); + mmu_notifier_invalidate_range_end(&range); } /** diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c index b7ad155..453a156 100644 --- a/mm/mmu_notifier.c +++ b/mm/mmu_notifier.c @@ -551,7 +551,7 @@ int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range) static void mn_hlist_invalidate_end(struct mmu_notifier_subscriptions *subscriptions, - struct mmu_notifier_range *range, bool only_end) + struct mmu_notifier_range *range) { struct mmu_notifier *subscription; int id; @@ -559,24 +559,6 @@ mn_hlist_invalidate_end(struct mmu_notifier_subscriptions *subscriptions, id = srcu_read_lock(&srcu); hlist_for_each_entry_rcu(subscription, &subscriptions->list, hlist, srcu_read_lock_held(&srcu)) { - /* - * Call invalidate_range here too to avoid the need for the - * subsystem of having to register an invalidate_range_end - * call-back when there is invalidate_range already. Usually a - * subsystem registers either invalidate_range_start()/end() or - * invalidate_range(), so this will be no additional overhead - * (besides the pointer check). 
- * - * We skip call to invalidate_range() if we know it is safe ie - * call site use mmu_notifier_invalidate_range_only_end() which - * is safe to do when we know that a call to invalidate_range() - * already happen under page table lock. - */ - if (!only_end && subscription->ops->invalidate_range) - subscription->ops->invalidate_range(subscription, - range->mm, - range->start, - range->end); if (subscription->ops->invalidate_range_end) { if (!mmu_notifier_range_blockable(range)) non_block_start(); @@ -589,8 +571,7 @@ mn_hlist_invalidate_end(struct mmu_notifier_subscriptions *subscriptions, srcu_read_unlock(&srcu, id); } -void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range, - bool only_end) +void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range) { struct mmu_notifier_subscriptions *subscriptions = range->mm->notifier_subscriptions; @@ -600,7 +581,7 @@ void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range, mn_itree_inv_end(subscriptions); if (!hlist_empty(&subscriptions->list)) - mn_hlist_invalidate_end(subscriptions, range, only_end); + mn_hlist_invalidate_end(subscriptions, range); lock_map_release(&__mmu_notifier_invalidate_range_start_map); } diff --git a/mm/rmap.c b/mm/rmap.c index 1355bf6..51ec8aa 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -985,13 +985,6 @@ static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw) #endif } - /* - * No need to call mmu_notifier_invalidate_range() as we are - * downgrading page table protection not changing it to point - * to a new page. - * - * See Documentation/mm/mmu_notifier.rst - */ if (ret) cleaned++; } @@ -1549,8 +1542,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma, hugetlb_vma_unlock_write(vma); flush_tlb_range(vma, range.start, range.end); - mmu_notifier_invalidate_range(mm, - range.start, range.end); /* * The ref count of the PMD page was * dropped which is part of the way map @@ -1623,9 +1614,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma, * copied pages. 
*/ dec_mm_counter(mm, mm_counter(&folio->page)); - /* We have to invalidate as we cleared the pte */ - mmu_notifier_invalidate_range(mm, address, - address + PAGE_SIZE); } else if (folio_test_anon(folio)) { swp_entry_t entry = { .val = page_private(subpage) }; pte_t swp_pte; @@ -1637,9 +1625,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma, folio_test_swapcache(folio))) { WARN_ON_ONCE(1); ret = false; - /* We have to invalidate as we cleared the pte */ - mmu_notifier_invalidate_range(mm, address, - address + PAGE_SIZE); page_vma_mapped_walk_done(&pvmw); break; } @@ -1670,9 +1655,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma, */ if (ref_count == 1 + map_count && !folio_test_dirty(folio)) { - /* Invalidate as we cleared the pte */ - mmu_notifier_invalidate_range(mm, - address, address + PAGE_SIZE); dec_mm_counter(mm, MM_ANONPAGES); goto discard; } @@ -1727,9 +1709,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma, if (pte_uffd_wp(pteval)) swp_pte = pte_swp_mkuffd_wp(swp_pte); set_pte_at(mm, address, pvmw.pte, swp_pte); - /* Invalidate as we cleared the pte */ - mmu_notifier_invalidate_range(mm, address, - address + PAGE_SIZE); } else { /* * This is a locked file-backed folio, @@ -1745,13 +1724,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma, dec_mm_counter(mm, mm_counter_file(&folio->page)); } discard: - /* - * No need to call mmu_notifier_invalidate_range() it has be - * done above for all cases requiring it to happen under page - * table lock before mmu_notifier_invalidate_range_end() - * - * See Documentation/mm/mmu_notifier.rst - */ page_remove_rmap(subpage, vma, folio_test_hugetlb(folio)); if (vma->vm_flags & VM_LOCKED) mlock_drain_local(); @@ -1930,8 +1902,6 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma, hugetlb_vma_unlock_write(vma); flush_tlb_range(vma, range.start, range.end); - mmu_notifier_invalidate_range(mm, - range.start, range.end); /* * The ref count of the PMD page was @@ -2036,9 +2006,6 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma, * copied pages. 
*/ dec_mm_counter(mm, mm_counter(&folio->page)); - /* We have to invalidate as we cleared the pte */ - mmu_notifier_invalidate_range(mm, address, - address + PAGE_SIZE); } else { swp_entry_t entry; pte_t swp_pte; @@ -2102,13 +2069,6 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma, */ } - /* - * No need to call mmu_notifier_invalidate_range() it has be - * done above for all cases requiring it to happen under page - * table lock before mmu_notifier_invalidate_range_end() - * - * See Documentation/mm/mmu_notifier.rst - */ page_remove_rmap(subpage, vma, folio_test_hugetlb(folio)); if (vma->vm_flags & VM_LOCKED) mlock_drain_local(); From patchwork Tue Jul 25 13:42:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alistair Popple X-Patchwork-Id: 13326456 
From: Alistair Popple To: akpm@linux-foundation.org Cc: jgg@ziepe.ca, npiggin@gmail.com, catalin.marinas@arm.com, jhubbard@nvidia.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, nicolinc@nvidia.com, robin.murphy@arm.com, seanjc@google.com, will@kernel.org, zhi.wang.linux@gmail.com, mpe@ellerman.id.au, linuxppc-dev@lists.ozlabs.org, rtummala@nvidia.com, kevin.tian@intel.com, iommu@lists.linux.dev, x86@kernel.org, fbarrat@linux.ibm.com, ajd@linux.ibm.com, chaitanya.kumar.borah@intel.com, tvrtko.ursulin@linux.intel.com, intel-gfx@lists.freedesktop.org, Alistair Popple , Jason Gunthorpe Subject: [PATCH v4 5/5] mmu_notifiers: Rename invalidate_range notifier Date: Tue, 25 Jul 2023 23:42:07 +1000 Message-Id: <6f77248cd25545c8020a54b4e567e8b72be4dca1.1690292440.git-series.apopple@nvidia.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: References: 
Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: There are two main use cases for mmu notifiers. One is by KVM, which uses mmu_notifier_invalidate_range_start()/end() to manage a software TLB. The other is to manage hardware TLBs, which need to use the invalidate_range() callback because HW can establish new TLB entries at any time. Hence using start()/end() can lead to memory corruption as these callbacks happen too soon or too late during page unmap. mmu notifier users should therefore use either the start()/end() callbacks or the invalidate_range() callback. To make this usage clearer, rename the invalidate_range() callback to arch_invalidate_secondary_tlbs() and update the documentation. 
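
As a rough sketch only (not part of this patch; the example_* names and the stub hardware flush are invented for illustration), a driver managing a hardware TLB that shares the CPU page tables would now implement just the renamed callback:

    #include <linux/mmu_notifier.h>

    /* Stub standing in for a device-specific TLB invalidation (assumed). */
    static void example_hw_flush_tlb(struct mm_struct *mm, unsigned long start,
                                     unsigned long end)
    {
            /* Issue the hardware TLB invalidation for [start, end) here. */
    }

    static void example_arch_invalidate_secondary_tlbs(struct mmu_notifier *mn,
                                                       struct mm_struct *mm,
                                                       unsigned long start,
                                                       unsigned long end)
    {
            /* May be called while holding the ptl, so it must not sleep. */
            example_hw_flush_tlb(mm, start, end);
    }

    static const struct mmu_notifier_ops example_mmu_notifier_ops = {
            .arch_invalidate_secondary_tlbs = example_arch_invalidate_secondary_tlbs,
            /*
             * Deliberately no invalidate_range_start()/end(); setting both
             * is now rejected at mmu notifier registration time.
             */
    };

A notifier using these ops is still hooked up with mmu_notifier_register() as before; only the callback name and the registration-time sanity check change.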
Signed-off-by: Alistair Popple Suggested-by: Jason Gunthorpe Acked-by: Catalin Marinas Reviewed-by: Jason Gunthorpe --- arch/arm64/include/asm/tlbflush.h | 6 +- arch/powerpc/mm/book3s64/radix_hugetlbpage.c | 2 +- arch/powerpc/mm/book3s64/radix_tlb.c | 8 +-- arch/x86/include/asm/tlbflush.h | 2 +- arch/x86/mm/tlb.c | 2 +- drivers/iommu/amd/iommu_v2.c | 10 ++-- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 13 ++--- drivers/iommu/intel/svm.c | 8 +-- drivers/misc/ocxl/link.c | 8 +-- include/linux/mmu_notifier.h | 48 +++++++++--------- mm/huge_memory.c | 4 +- mm/hugetlb.c | 7 +-- mm/mmu_notifier.c | 21 ++++++-- 13 files changed, 76 insertions(+), 63 deletions(-) diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h index a99349d..84a05a0 100644 --- a/arch/arm64/include/asm/tlbflush.h +++ b/arch/arm64/include/asm/tlbflush.h @@ -253,7 +253,7 @@ static inline void flush_tlb_mm(struct mm_struct *mm) __tlbi(aside1is, asid); __tlbi_user(aside1is, asid); dsb(ish); - mmu_notifier_invalidate_range(mm, 0, -1UL); + mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL); } static inline void __flush_tlb_page_nosync(struct mm_struct *mm, @@ -265,7 +265,7 @@ static inline void __flush_tlb_page_nosync(struct mm_struct *mm, addr = __TLBI_VADDR(uaddr, ASID(mm)); __tlbi(vale1is, addr); __tlbi_user(vale1is, addr); - mmu_notifier_invalidate_range(mm, uaddr & PAGE_MASK, + mmu_notifier_arch_invalidate_secondary_tlbs(mm, uaddr & PAGE_MASK, (uaddr & PAGE_MASK) + PAGE_SIZE); } @@ -400,7 +400,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma, scale++; } dsb(ish); - mmu_notifier_invalidate_range(vma->vm_mm, start, end); + mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, start, end); } static inline void flush_tlb_range(struct vm_area_struct *vma, diff --git a/arch/powerpc/mm/book3s64/radix_hugetlbpage.c b/arch/powerpc/mm/book3s64/radix_hugetlbpage.c index f3fb49f..17075c7 100644 --- a/arch/powerpc/mm/book3s64/radix_hugetlbpage.c +++ b/arch/powerpc/mm/book3s64/radix_hugetlbpage.c @@ -39,7 +39,7 @@ void radix__flush_hugetlb_tlb_range(struct vm_area_struct *vma, unsigned long st radix__flush_tlb_pwc_range_psize(vma->vm_mm, start, end, psize); else radix__flush_tlb_range_psize(vma->vm_mm, start, end, psize); - mmu_notifier_invalidate_range(vma->vm_mm, start, end); + mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, start, end); } void radix__huge_ptep_modify_prot_commit(struct vm_area_struct *vma, diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c index 4d44902..06e647e 100644 --- a/arch/powerpc/mm/book3s64/radix_tlb.c +++ b/arch/powerpc/mm/book3s64/radix_tlb.c @@ -987,7 +987,7 @@ void radix__flush_tlb_mm(struct mm_struct *mm) } } preempt_enable(); - mmu_notifier_invalidate_range(mm, 0, -1UL); + mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL); } EXPORT_SYMBOL(radix__flush_tlb_mm); @@ -1021,7 +1021,7 @@ static void __flush_all_mm(struct mm_struct *mm, bool fullmm) _tlbiel_pid_multicast(mm, pid, RIC_FLUSH_ALL); } preempt_enable(); - mmu_notifier_invalidate_range(mm, 0, -1UL); + mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL); } void radix__flush_all_mm(struct mm_struct *mm) @@ -1230,7 +1230,7 @@ static inline void __radix__flush_tlb_range(struct mm_struct *mm, } out: preempt_enable(); - mmu_notifier_invalidate_range(mm, start, end); + mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end); } void radix__flush_tlb_range(struct vm_area_struct *vma, unsigned long start, @@ -1395,7 +1395,7 @@ static void 
__radix__flush_tlb_range_psize(struct mm_struct *mm, } out: preempt_enable(); - mmu_notifier_invalidate_range(mm, start, end); + mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end); } void radix__flush_tlb_range_psize(struct mm_struct *mm, unsigned long start, diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h index 0a54323..6ab42ca 100644 --- a/arch/x86/include/asm/tlbflush.h +++ b/arch/x86/include/asm/tlbflush.h @@ -283,7 +283,7 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b { inc_mm_tlb_gen(mm); cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm)); - mmu_notifier_invalidate_range(mm, 0, -1UL); + mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL); } static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm) diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c index 93b2f81..2d25391 100644 --- a/arch/x86/mm/tlb.c +++ b/arch/x86/mm/tlb.c @@ -1037,7 +1037,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start, put_flush_tlb_info(); put_cpu(); - mmu_notifier_invalidate_range(mm, start, end); + mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end); } diff --git a/drivers/iommu/amd/iommu_v2.c b/drivers/iommu/amd/iommu_v2.c index 261352a..2596466 100644 --- a/drivers/iommu/amd/iommu_v2.c +++ b/drivers/iommu/amd/iommu_v2.c @@ -355,9 +355,9 @@ static struct pasid_state *mn_to_state(struct mmu_notifier *mn) return container_of(mn, struct pasid_state, mn); } -static void mn_invalidate_range(struct mmu_notifier *mn, - struct mm_struct *mm, - unsigned long start, unsigned long end) +static void mn_arch_invalidate_secondary_tlbs(struct mmu_notifier *mn, + struct mm_struct *mm, + unsigned long start, unsigned long end) { struct pasid_state *pasid_state; struct device_state *dev_state; @@ -391,8 +391,8 @@ static void mn_release(struct mmu_notifier *mn, struct mm_struct *mm) } static const struct mmu_notifier_ops iommu_mn = { - .release = mn_release, - .invalidate_range = mn_invalidate_range, + .release = mn_release, + .arch_invalidate_secondary_tlbs = mn_arch_invalidate_secondary_tlbs, }; static void set_pri_tag_status(struct pasid_state *pasid_state, diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c index 2a19784..dbc812a 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c @@ -186,9 +186,10 @@ static void arm_smmu_free_shared_cd(struct arm_smmu_ctx_desc *cd) } } -static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn, - struct mm_struct *mm, - unsigned long start, unsigned long end) +static void arm_smmu_mm_arch_invalidate_secondary_tlbs(struct mmu_notifier *mn, + struct mm_struct *mm, + unsigned long start, + unsigned long end) { struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn); struct arm_smmu_domain *smmu_domain = smmu_mn->domain; @@ -247,9 +248,9 @@ static void arm_smmu_mmu_notifier_free(struct mmu_notifier *mn) } static const struct mmu_notifier_ops arm_smmu_mmu_notifier_ops = { - .invalidate_range = arm_smmu_mm_invalidate_range, - .release = arm_smmu_mm_release, - .free_notifier = arm_smmu_mmu_notifier_free, + .arch_invalidate_secondary_tlbs = arm_smmu_mm_arch_invalidate_secondary_tlbs, + .release = arm_smmu_mm_release, + .free_notifier = arm_smmu_mmu_notifier_free, }; /* Allocate or get existing MMU notifier for this {domain, mm} pair */ diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c index e95b339..8f6d680 100644 --- 
a/drivers/iommu/intel/svm.c +++ b/drivers/iommu/intel/svm.c @@ -219,9 +219,9 @@ static void intel_flush_svm_range(struct intel_svm *svm, unsigned long address, } /* Pages have been freed at this point */ -static void intel_invalidate_range(struct mmu_notifier *mn, - struct mm_struct *mm, - unsigned long start, unsigned long end) +static void intel_arch_invalidate_secondary_tlbs(struct mmu_notifier *mn, + struct mm_struct *mm, + unsigned long start, unsigned long end) { struct intel_svm *svm = container_of(mn, struct intel_svm, notifier); @@ -256,7 +256,7 @@ static void intel_mm_release(struct mmu_notifier *mn, struct mm_struct *mm) static const struct mmu_notifier_ops intel_mmuops = { .release = intel_mm_release, - .invalidate_range = intel_invalidate_range, + .arch_invalidate_secondary_tlbs = intel_arch_invalidate_secondary_tlbs, }; static DEFINE_MUTEX(pasid_mutex); diff --git a/drivers/misc/ocxl/link.c b/drivers/misc/ocxl/link.c index 4cf4c55..c06c699 100644 --- a/drivers/misc/ocxl/link.c +++ b/drivers/misc/ocxl/link.c @@ -491,9 +491,9 @@ void ocxl_link_release(struct pci_dev *dev, void *link_handle) } EXPORT_SYMBOL_GPL(ocxl_link_release); -static void invalidate_range(struct mmu_notifier *mn, - struct mm_struct *mm, - unsigned long start, unsigned long end) +static void arch_invalidate_secondary_tlbs(struct mmu_notifier *mn, + struct mm_struct *mm, + unsigned long start, unsigned long end) { struct pe_data *pe_data = container_of(mn, struct pe_data, mmu_notifier); struct ocxl_link *link = pe_data->link; @@ -509,7 +509,7 @@ static void invalidate_range(struct mmu_notifier *mn, } static const struct mmu_notifier_ops ocxl_mmu_notifier_ops = { - .invalidate_range = invalidate_range, + .arch_invalidate_secondary_tlbs = arch_invalidate_secondary_tlbs, }; static u64 calculate_cfg_state(bool kernel) diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h index f2e9edc..6e3c857 100644 --- a/include/linux/mmu_notifier.h +++ b/include/linux/mmu_notifier.h @@ -187,27 +187,27 @@ struct mmu_notifier_ops { const struct mmu_notifier_range *range); /* - * invalidate_range() is either called between - * invalidate_range_start() and invalidate_range_end() when the - * VM has to free pages that where unmapped, but before the - * pages are actually freed, or outside of _start()/_end() when - * a (remote) TLB is necessary. + * arch_invalidate_secondary_tlbs() is used to manage a non-CPU TLB + * which shares page-tables with the CPU. The + * invalidate_range_start()/end() callbacks should not be implemented as + * invalidate_secondary_tlbs() already catches the points in time when + * an external TLB needs to be flushed. * - * If invalidate_range() is used to manage a non-CPU TLB with - * shared page-tables, it not necessary to implement the - * invalidate_range_start()/end() notifiers, as - * invalidate_range() already catches the points in time when an - * external TLB range needs to be flushed. For more in depth - * discussion on this see Documentation/mm/mmu_notifier.rst + * This requires arch_invalidate_secondary_tlbs() to be called while + * holding the ptl spin-lock and therefore this callback is not allowed + * to sleep. * - * Note that this function might be called with just a sub-range - * of what was passed to invalidate_range_start()/end(), if - * called between those functions. + * This is called by architecture code whenever invalidating a TLB + * entry. It is assumed that any secondary TLB has the same rules for + * when invalidations are required. 
If this is not the case architecture + * code will need to call this explicitly when required for secondary + * TLB invalidation. */ - void (*invalidate_range)(struct mmu_notifier *subscription, - struct mm_struct *mm, - unsigned long start, - unsigned long end); + void (*arch_invalidate_secondary_tlbs)( + struct mmu_notifier *subscription, + struct mm_struct *mm, + unsigned long start, + unsigned long end); /* * These callbacks are used with the get/put interface to manage the @@ -396,8 +396,8 @@ extern void __mmu_notifier_change_pte(struct mm_struct *mm, unsigned long address, pte_t pte); extern int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *r); extern void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *r); -extern void __mmu_notifier_invalidate_range(struct mm_struct *mm, - unsigned long start, unsigned long end); +extern void __mmu_notifier_arch_invalidate_secondary_tlbs(struct mm_struct *mm, + unsigned long start, unsigned long end); extern bool mmu_notifier_range_update_to_read_only(const struct mmu_notifier_range *range); @@ -483,11 +483,11 @@ mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range) __mmu_notifier_invalidate_range_end(range); } -static inline void mmu_notifier_invalidate_range(struct mm_struct *mm, - unsigned long start, unsigned long end) +static inline void mmu_notifier_arch_invalidate_secondary_tlbs(struct mm_struct *mm, + unsigned long start, unsigned long end) { if (mm_has_notifiers(mm)) - __mmu_notifier_invalidate_range(mm, start, end); + __mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end); } static inline void mmu_notifier_subscriptions_init(struct mm_struct *mm) @@ -664,7 +664,7 @@ void mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range) { } -static inline void mmu_notifier_invalidate_range(struct mm_struct *mm, +static inline void mmu_notifier_arch_invalidate_secondary_tlbs(struct mm_struct *mm, unsigned long start, unsigned long end) { } diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 3ece117..e0420de 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2120,8 +2120,8 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, if (is_huge_zero_pmd(*pmd)) { /* * FIXME: Do we want to invalidate secondary mmu by calling - * mmu_notifier_invalidate_range() see comments below inside - * __split_huge_pmd() ? + * mmu_notifier_arch_invalidate_secondary_tlbs() see comments below + * inside __split_huge_pmd() ? * * We are going from a zero huge page write protected to zero * small page also write protected so it does not seems useful diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 9c6e431..e0028cb 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -6676,8 +6676,9 @@ long hugetlb_change_protection(struct vm_area_struct *vma, else flush_hugetlb_tlb_range(vma, start, end); /* - * No need to call mmu_notifier_invalidate_range() we are downgrading - * page table protection not changing it to point to a new page. + * No need to call mmu_notifier_arch_invalidate_secondary_tlbs() we are + * downgrading page table protection not changing it to point to a new + * page. * * See Documentation/mm/mmu_notifier.rst */ @@ -7321,7 +7322,7 @@ static void hugetlb_unshare_pmds(struct vm_area_struct *vma, i_mmap_unlock_write(vma->vm_file->f_mapping); hugetlb_vma_unlock_write(vma); /* - * No need to call mmu_notifier_invalidate_range(), see + * No need to call mmu_notifier_arch_invalidate_secondary_tlbs(), see * Documentation/mm/mmu_notifier.rst. 
*/ mmu_notifier_invalidate_range_end(&range); diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c index 453a156..ec3b068 100644 --- a/mm/mmu_notifier.c +++ b/mm/mmu_notifier.c @@ -585,8 +585,8 @@ void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range) lock_map_release(&__mmu_notifier_invalidate_range_start_map); } -void __mmu_notifier_invalidate_range(struct mm_struct *mm, - unsigned long start, unsigned long end) +void __mmu_notifier_arch_invalidate_secondary_tlbs(struct mm_struct *mm, + unsigned long start, unsigned long end) { struct mmu_notifier *subscription; int id; @@ -595,9 +595,10 @@ void __mmu_notifier_invalidate_range(struct mm_struct *mm, hlist_for_each_entry_rcu(subscription, &mm->notifier_subscriptions->list, hlist, srcu_read_lock_held(&srcu)) { - if (subscription->ops->invalidate_range) - subscription->ops->invalidate_range(subscription, mm, - start, end); + if (subscription->ops->arch_invalidate_secondary_tlbs) + subscription->ops->arch_invalidate_secondary_tlbs( + subscription, mm, + start, end); } srcu_read_unlock(&srcu, id); } @@ -616,6 +617,16 @@ int __mmu_notifier_register(struct mmu_notifier *subscription, mmap_assert_write_locked(mm); BUG_ON(atomic_read(&mm->mm_users) <= 0); + /* + * Subsystems should only register for invalidate_secondary_tlbs() or + * invalidate_range_start()/end() callbacks, not both. + */ + if (WARN_ON_ONCE(subscription && + (subscription->ops->arch_invalidate_secondary_tlbs && + (subscription->ops->invalidate_range_start || + subscription->ops->invalidate_range_end)))) + return -EINVAL; + if (!mm->notifier_subscriptions) { /* * kmalloc cannot be called under mm_take_all_locks(), but we