From patchwork Thu Aug 15 15:11:22 2024
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 13764905
From: Jason Gunthorpe
To:
Cc: Alejandro Jimenez, Lu Baolu, David Hildenbrand, Christoph Hellwig,
	iommu@lists.linux.dev, Joao Martins, Kevin Tian, kvm@vger.kernel.org,
	linux-mm@kvack.org, Pasha Tatashin, Peter Xu, Ryan Roberts,
	Sean Christopherson, Tina Zhang
Subject: [PATCH 06/16] iommupt: Add map_pages op
Date: Thu, 15 Aug 2024 12:11:22 -0300
Message-ID: <6-v1-01fa10580981+1d-iommu_pt_jgg@nvidia.com>
In-Reply-To: <0-v1-01fa10580981+1d-iommu_pt_jgg@nvidia.com>

Implement a self-segmenting algorithm for map_pages. This can handle any
valid input VA/length and will automatically break it up into
appropriately sized table entries using a recursive descent algorithm.
The appropriate page size is computed at each step using some bitwise
calculations.

map is slightly complicated because it has to handle a number of special
edge cases:
 - Overmapping a previously shared table with an OA - requires validating
   and discarding the possibly empty tables
 - Doing the above across an entire to-be-created contiguous entry
 - Installing a new table concurrently with another thread
 - Racing table installation with CPU cache flushing
 - Expanding the table by adding more top levels on the fly

Managing the table installation race is done using a flag in the folio.
When the shared table entry is possibly unflushed, the flag will be set.
This works for all pagetable formats but is less efficient than the
io-pgtable-arm-lpae approach of using a SW table bit. It may be
interesting to provide the latter as an option.

Table expansion is a unique feature of AMDv1; this version is quite
similar, except we also handle racing with a concurrent lockless map.
The table top pointer and starting level are encoded in a single
uintptr_t, which ensures we can READ_ONCE() without tearing. Any op will
do the READ_ONCE() and use that fixed point as its starting point.
Concurrent expansion is handled with a table global spinlock.
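
To illustrate the two bitwise tricks described above, here is a rough,
self-contained userspace sketch. The helper names (best_pgsize_lg2(),
top_encode()) are hypothetical stand-ins, not the helpers this patch
actually uses (pt_compute_best_pgsize(), _pt_top_set()); it relies on
GCC/Clang builtins and only shows the general idea:

  /* Hypothetical illustration only, not the patch's actual helpers. */
  #include <stdint.h>
  #include <stdio.h>

  /*
   * Pick the largest supported page size (as lg2) that both the current
   * IOVA/OA alignment and the remaining length allow. Returns 0 if no
   * supported size fits and the walk must descend to a child table.
   */
  static unsigned int best_pgsize_lg2(uint64_t iova, uint64_t oa,
                                      uint64_t remaining,
                                      uint64_t pgsize_bitmap)
  {
          /* The lowest set bit of iova|oa bounds the usable alignment. */
          uint64_t addr_merge = iova | oa;
          unsigned int align_lg2 =
                  addr_merge ? __builtin_ctzll(addr_merge) : 63;
          unsigned int len_lg2 = 63 - __builtin_clzll(remaining);
          unsigned int max_lg2 = align_lg2 < len_lg2 ? align_lg2 : len_lg2;
          uint64_t usable;

          /* Drop page sizes above max_lg2, then take the biggest left. */
          usable = pgsize_bitmap &
                   (max_lg2 >= 63 ? ~0ull : (2ull << max_lg2) - 1);
          return usable ? 63 - __builtin_clzll(usable) : 0;
  }

  /*
   * Pack the top table pointer and its level into a single word so a
   * reader can READ_ONCE() both without tearing; tables are at least
   * page aligned, so the low pointer bits are free to hold the level.
   */
  static uintptr_t top_encode(void *table, unsigned int level)
  {
          return (uintptr_t)table | level;
  }

  int main(void)
  {
          /* 2M-aligned IOVA/OA, 4M left, 4K/2M/1G supported -> 21 (2M). */
          uint64_t supported = (1ull << 12) | (1ull << 21) | (1ull << 30);
          unsigned int buf[512] __attribute__((aligned(4096)));
          uintptr_t top = top_encode(buf, 2);

          printf("pgsize_lg2=%u level=%u\n",
                 best_pgsize_lg2(0x40200000, 0x80200000, 0x400000,
                                 supported),
                 (unsigned int)(top & 7));
          return 0;
  }
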
When inserting a new table entry, map checks that the portion of the
table is empty. This includes removing any empty interior tables. The
approach here is atomic per entry: either the new entry is written, or
no change is made to the table. This is done by keeping a list of
interior tables to free and only progressing once the entire space has
been checked to be empty.

Signed-off-by: Jason Gunthorpe
---
 drivers/iommu/generic_pt/iommu_pt.h | 337 ++++++++++++++++++++++++++++
 include/linux/generic_pt/iommu.h    |  29 +++
 2 files changed, 366 insertions(+)

diff --git a/drivers/iommu/generic_pt/iommu_pt.h b/drivers/iommu/generic_pt/iommu_pt.h
index 6d1c59b33d02f3..a886c94a33eb6c 100644
--- a/drivers/iommu/generic_pt/iommu_pt.h
+++ b/drivers/iommu/generic_pt/iommu_pt.h
@@ -159,6 +159,342 @@ static int __collect_tables(struct pt_range *range, void *arg,
 	return 0;
 }
 
+/* Allocate a table, the empty table will be ready to be installed. */
+static inline struct pt_table_p *_table_alloc(struct pt_common *common,
+					      size_t lg2sz, gfp_t gfp,
+					      bool no_incoherent_start)
+{
+	struct pt_iommu *iommu_table = iommu_from_common(common);
+	struct pt_table_p *table_mem;
+
+	table_mem = pt_radix_alloc(common, iommu_table->nid, lg2sz, gfp);
+	if (pt_feature(common, PT_FEAT_DMA_INCOHERENT) &&
+	    !no_incoherent_start) {
+		int ret = pt_radix_start_incoherent(
+			table_mem, iommu_table->iommu_device, true);
+		if (ret) {
+			pt_radix_free(table_mem);
+			return ERR_PTR(ret);
+		}
+	}
+	return table_mem;
+}
+
+static inline struct pt_table_p *table_alloc_top(struct pt_common *common,
+						 uintptr_t top_of_table,
+						 gfp_t gfp,
+						 bool no_incoherent_start)
+{
+	/*
+	 * FIXME top is special it doesn't need RCU or the list, and it might be
+	 * small. For now just waste a page on it regardless.
+	 */
+	return _table_alloc(common,
+			    max(pt_top_memsize_lg2(common, top_of_table),
+				PAGE_SHIFT),
+			    gfp, no_incoherent_start);
+}
+
+/* Allocate an interior table */
+static inline struct pt_table_p *table_alloc(struct pt_state *pts, gfp_t gfp,
+					     bool no_incoherent_start)
+{
+	return _table_alloc(pts->range->common,
+			    pt_num_items_lg2(pts) + ilog2(PT_ENTRY_WORD_SIZE),
+			    gfp, no_incoherent_start);
+}
+
+static inline int pt_iommu_new_table(struct pt_state *pts,
+				     struct pt_write_attrs *attrs,
+				     bool no_incoherent_start)
+{
+	struct pt_table_p *table_mem;
+
+	/* Given PA/VA/length can't be represented */
+	if (unlikely(!pt_can_have_table(pts)))
+		return -ENXIO;
+
+	table_mem = table_alloc(pts, attrs->gfp, no_incoherent_start);
+	if (IS_ERR(table_mem))
+		return PTR_ERR(table_mem);
+
+	if (!pt_install_table(pts, virt_to_phys(table_mem), attrs)) {
+		pt_radix_free(table_mem);
+		return -EAGAIN;
+	}
+	pts->table_lower = table_mem;
+	return 0;
+}
+
+struct pt_iommu_map_args {
+	struct pt_radix_list_head free_list;
+	struct pt_write_attrs attrs;
+	pt_oaddr_t oa;
+};
+
+/*
+ * Check that the items in a contiguous block are all empty. This will
+ * recursively check any tables in the block to validate they are empty and
+ * accumulate them on the free list. Makes no change on failure. On success
+ * caller must fill the items.
+ */
+static int pt_iommu_clear_contig(const struct pt_state *start_pts,
+				 struct pt_iommu_map_args *map,
+				 struct iommu_write_log *wlog,
+				 unsigned int pgsize_lg2)
+{
+	struct pt_range range = *start_pts->range;
+	struct pt_state pts =
+		pt_init(&range, start_pts->level, start_pts->table);
+	struct pt_iommu_collect_args collect = {
+		.free_list = map->free_list,
+	};
+	int ret;
+
+	pts.index = start_pts->index;
+	pts.table_lower = start_pts->table_lower;
+	pts.end_index = start_pts->index +
+			log2_to_int(pgsize_lg2 - pt_table_item_lg2sz(&pts));
+	pts.type = start_pts->type;
+	pts.entry = start_pts->entry;
+	while (true) {
+		if (pts.type == PT_ENTRY_TABLE) {
+			ret = pt_walk_child_all(&pts, __collect_tables,
+						&collect);
+			if (ret)
+				return ret;
+			pt_radix_add_list(&collect.free_list,
+					  pt_table_ptr(&pts));
+		} else if (pts.type != PT_ENTRY_EMPTY) {
+			return -EADDRINUSE;
+		}
+
+		_pt_advance(&pts, ilog2(1));
+		if (pts.index == pts.end_index)
+			break;
+		pt_load_entry(&pts);
+	}
+	map->free_list = collect.free_list;
+	return 0;
+}
+
+static int __map_pages(struct pt_range *range, void *arg, unsigned int level,
+		       struct pt_table_p *table)
+{
+	struct iommu_write_log wlog __cleanup(done_writes) = { .range = range };
+	struct pt_state pts = pt_init(range, level, table);
+	struct pt_iommu_map_args *map = arg;
+	int ret;
+
+again:
+	for_each_pt_level_item(&pts) {
+		/*
+		 * FIXME: This allows us to segment on our own, but there is
+		 * probably a better performing way to implement it.
+		 */
+		unsigned int pgsize_lg2 = pt_compute_best_pgsize(&pts, map->oa);
+
+		/*
+		 * Our mapping fully covers this page size of items starting
+		 * here
+		 */
+		if (pgsize_lg2) {
+			if (pgsize_lg2 != pt_table_item_lg2sz(&pts) ||
+			    pts.type != PT_ENTRY_EMPTY) {
+				ret = pt_iommu_clear_contig(&pts, map, &wlog,
+							    pgsize_lg2);
+				if (ret)
+					return ret;
+			}
+
+			record_write(&wlog, &pts, pgsize_lg2);
+			pt_install_leaf_entry(&pts, map->oa, pgsize_lg2,
+					      &map->attrs);
+			pts.type = PT_ENTRY_OA;
+			map->oa += log2_to_int(pgsize_lg2);
+			continue;
+		}
+
+		/* Otherwise we need to descend to a child table */
+
+		if (pts.type == PT_ENTRY_EMPTY) {
+			record_write(&wlog, &pts, ilog2(1));
+			ret = pt_iommu_new_table(&pts, &map->attrs, false);
+			if (ret) {
+				/*
+				 * Racing with another thread installing a table
+				 */
+				if (ret == -EAGAIN)
+					goto again;
+				return ret;
+			}
+			if (pts_feature(&pts, PT_FEAT_DMA_INCOHERENT)) {
+				done_writes(&wlog);
+				pt_radix_done_incoherent_flush(pts.table_lower);
+			}
+		} else if (pts.type == PT_ENTRY_TABLE) {
+			/*
+			 * Racing with a shared pt_iommu_new_table()? The other
+			 * thread is still flushing the cache, so we have to
+			 * also flush it to ensure that when our thread's map
+			 * completes our mapping is working.
+			 *
+			 * Using the folio memory means we don't have to rely on
+			 * an available PTE bit to keep track.
+			 */
+			if (pts_feature(&pts, PT_FEAT_DMA_INCOHERENT) &&
+			    pt_radix_incoherent_still_flushing(pts.table_lower))
+				record_write(&wlog, &pts, ilog2(1));
+		} else {
+			return -EADDRINUSE;
+		}
+
+		/*
+		 * Notice the already present table can possibly be shared with
+		 * another concurrent map.
+		 */
+		ret = pt_descend(&pts, arg, __map_pages);
+		if (ret)
+			return ret;
+	}
+	return 0;
+}
+
+/*
+ * Add a table to the top, increasing the top level as much as necessary to
+ * encompass range.
+ */
+static int increase_top(struct pt_iommu *iommu_table, struct pt_range *range,
+			struct pt_write_attrs *attrs)
+{
+	struct pt_common *common = common_from_iommu(iommu_table);
+	uintptr_t top_of_table = READ_ONCE(common->top_of_table);
+	uintptr_t new_top_of_table = top_of_table;
+	struct pt_radix_list_head free_list = {};
+	unsigned long flags;
+	int ret;
+
+	while (true) {
+		struct pt_range top_range =
+			_pt_top_range(common, new_top_of_table);
+		struct pt_state pts = pt_init_top(&top_range);
+		struct pt_table_p *table_mem;
+
+		top_range.va = range->va;
+		top_range.last_va = range->last_va;
+
+		if (!pt_check_range(&top_range))
+			break;
+
+		pts.level++;
+		if (pts.level > PT_MAX_TOP_LEVEL ||
+		    pt_table_item_lg2sz(&pts) >= common->max_vasz_lg2) {
+			ret = -ERANGE;
+			goto err_free;
+		}
+
+		table_mem = table_alloc_top(
+			common, _pt_top_set(NULL, pts.level), attrs->gfp, true);
+		if (IS_ERR(table_mem))
+			return PTR_ERR(table_mem);
+		pt_radix_add_list(&free_list, table_mem);
+
+		/* The new table links to the lower table always at index 0 */
+		top_range.va = 0;
+		pts.table_lower = pts.table;
+		pts.table = table_mem;
+		pt_load_single_entry(&pts);
+		PT_WARN_ON(pts.index != 0);
+		pt_install_table(&pts, virt_to_phys(pts.table_lower), attrs);
+		new_top_of_table = _pt_top_set(pts.table, pts.level);
+
+		top_range = _pt_top_range(common, new_top_of_table);
+	}
+
+	if (pt_feature(common, PT_FEAT_DMA_INCOHERENT)) {
+		ret = pt_radix_start_incoherent_list(
+			&free_list, iommu_from_common(common)->iommu_device);
+		if (ret)
+			goto err_free;
+	}
+
+	/*
+	 * top_of_table is write locked by the spinlock, but readers can use
+	 * READ_ONCE() to get the value. Since we encode both the level and the
+	 * pointer in one quanta the lockless reader will always see something
+	 * valid. The HW must be updated to the new level under the spinlock
+	 * before top_of_table is updated so that concurrent readers don't map
+	 * into the new level until it is fully functional. If another thread
+	 * already updated it while we were working then throw everything away
+	 * and try again.
+	 */
+	spin_lock_irqsave(&iommu_table->table_lock, flags);
+	if (common->top_of_table != top_of_table) {
+		spin_unlock_irqrestore(&iommu_table->table_lock, flags);
+		ret = -EAGAIN;
+		goto err_free;
+	}
+
+	/* FIXME update the HW here */
+	WRITE_ONCE(common->top_of_table, new_top_of_table);
+	spin_unlock_irqrestore(&iommu_table->table_lock, flags);
+
+	*range = pt_make_range(common, range->va, range->last_va);
+	PT_WARN_ON(pt_check_range(range));
+	return 0;
+
+err_free:
+	if (pt_feature(common, PT_FEAT_DMA_INCOHERENT))
+		pt_radix_stop_incoherent_list(
+			&free_list, iommu_from_common(common)->iommu_device);
+	pt_radix_free_list(&free_list);
+	return ret;
+}
+
+static int NS(map_pages)(struct pt_iommu *iommu_table, dma_addr_t iova,
+			 phys_addr_t paddr, dma_addr_t len, unsigned int prot,
+			 gfp_t gfp, size_t *mapped,
+			 struct iommu_iotlb_gather *iotlb_gather)
+{
+	struct pt_common *common = common_from_iommu(iommu_table);
+	struct pt_iommu_map_args map = { .oa = paddr };
+	struct pt_range range;
+	int ret;
+
+	if (WARN_ON(!(prot & (IOMMU_READ | IOMMU_WRITE))))
+		return -EINVAL;
+
+	if ((sizeof(pt_oaddr_t) > sizeof(paddr) && paddr > PT_VADDR_MAX) ||
+	    (common->max_oasz_lg2 != PT_VADDR_MAX_LG2 &&
+	     oalog2_div(paddr, common->max_oasz_lg2)))
+		return -ERANGE;
+
+	ret = pt_iommu_set_prot(common, &map.attrs, prot);
+	if (ret)
+		return ret;
+	map.attrs.gfp = gfp;
+
+again:
+	ret = make_range(common_from_iommu(iommu_table), &range, iova, len);
+	if (pt_feature(common, PT_FEAT_DYNAMIC_TOP) && ret == -ERANGE) {
+		ret = increase_top(iommu_table, &range, &map.attrs);
+		if (ret) {
+			if (ret == -EAGAIN)
+				goto again;
+			return ret;
+		}
+	}
+	if (ret)
+		return ret;
+
+	ret = pt_walk_range(&range, __map_pages, &map);
+
+	/* Bytes successfully mapped */
+	*mapped += map.oa - paddr;
+	return ret;
+}
+
 struct pt_unmap_args {
 	struct pt_radix_list_head free_list;
 	pt_vaddr_t unmapped;
@@ -285,6 +621,7 @@ static void NS(deinit)(struct pt_iommu *iommu_table)
 }
 
 static const struct pt_iommu_ops NS(ops) = {
+	.map_pages = NS(map_pages),
 	.unmap_pages = NS(unmap_pages),
 	.iova_to_phys = NS(iova_to_phys),
 	.get_info = NS(get_info),
diff --git a/include/linux/generic_pt/iommu.h b/include/linux/generic_pt/iommu.h
index bdb6bf2c2ebe85..88e45d21dd21c4 100644
--- a/include/linux/generic_pt/iommu.h
+++ b/include/linux/generic_pt/iommu.h
@@ -61,6 +61,35 @@ struct pt_iommu_info {
 
 /* See the function comments in iommu_pt.c for kdocs */
 struct pt_iommu_ops {
+	/**
+	 * map_pages() - Install translation for an IOVA range
+	 * @iommu_table: Table to manipulate
+	 * @iova: IO virtual address to start
+	 * @paddr: Physical/Output address to start
+	 * @len: Length of the range starting from @iova
+	 * @prot: A bitmap of IOMMU_READ/WRITE/CACHE/NOEXEC/MMIO
+	 * @gfp: GFP flags for any memory allocations
+	 * @iotlb_gather: Gather struct that must be flushed on return
+	 *
+	 * The range starting at IOVA will have paddr installed into it. The
+	 * range is automatically segmented into optimally sized table entries,
+	 * and can have any valid alignment.
+	 *
+	 * On error the caller will probably want to invoke unmap on the range
+	 * from iova up to the amount indicated by @mapped to return the table
+	 * back to an unchanged state.
+	 *
+	 * Context: The caller must hold a write range lock that includes
+	 * the whole range.
+	 *
+	 * Returns: -ERRNO on failure, 0 on success. The number of bytes of VA
+	 * that were mapped are added to @mapped; @mapped is not zeroed first.
+	 */
+	int (*map_pages)(struct pt_iommu *iommu_table, dma_addr_t iova,
+			 phys_addr_t paddr, dma_addr_t len, unsigned int prot,
+			 gfp_t gfp, size_t *mapped,
+			 struct iommu_iotlb_gather *iotlb_gather);
+
 	/**
 	 * unmap_pages() - Make a range of IOVA empty/not present
 	 * @iommu_table: Table to manipulate