From patchwork Wed Jun 16 13:38:41 2021
X-Patchwork-Submitter: Georgi Djakov
X-Patchwork-Id: 12325245
From: Georgi Djakov
Subject: [PATCH v7 00/15] Optimizing iommu_[map/unmap] performance
Date: Wed, 16 Jun 2021 06:38:41 -0700
Message-ID: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>

When unmapping a buffer from an IOMMU domain, the IOMMU framework unmaps
the buffer at a granule of the largest page size that is supported by
the IOMMU hardware and fits within the buffer. For every block that is
unmapped, the IOMMU framework calls into the IOMMU driver, and then into
the io-pgtable framework, to walk the page tables, find the entry that
corresponds to the IOVA, and unmap it.

This can be suboptimal in scenarios where a buffer, or a piece of a
buffer, can be split into several contiguous page blocks of the same
size. For example, consider an IOMMU that supports 4 KB, 2 MB, and 1 GB
page blocks, and a 4 MB buffer being unmapped at IOVA 0. The current
call-flow results in 4 indirect calls and 2 page table walks to unmap
2 entries that sit next to each other in the page tables, when both
entries could have been unmapped in one shot by clearing both page
table entries in the same call.

The same optimization is applicable to mapping buffers as well, so these
patches add a pair of callbacks, map_pages() and unmap_pages(), to the
io-pgtable code and the IOMMU drivers. Each callback maps or unmaps an
IOVA range consisting of a number of pages of the same page size
supported by the IOMMU hardware, which allows multiple page table
entries to be manipulated within a single set of indirect calls.

Introducing these as new callbacks gives other IOMMU drivers and
io-pgtable formats time to change over to them, so that the transition
to this approach can be done piecemeal.
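
For reference, this is roughly the shape of the new operations that this
series adds to the page table and IOMMU driver interfaces. The sketch
below is condensed from the additions to include/linux/io-pgtable.h and
include/linux/iommu.h; all unrelated members are omitted:

  /* Sketch: new io-pgtable op, added alongside the existing map/unmap */
  struct io_pgtable_ops {
          int (*map_pages)(struct io_pgtable_ops *ops, unsigned long iova,
                           phys_addr_t paddr, size_t pgsize, size_t pgcount,
                           int prot, gfp_t gfp, size_t *mapped);
          size_t (*unmap_pages)(struct io_pgtable_ops *ops, unsigned long iova,
                                size_t pgsize, size_t pgcount,
                                struct iommu_iotlb_gather *gather);
          /* ... existing map/unmap/iova_to_phys ops ... */
  };

  /* Sketch: new IOMMU driver op, added alongside the existing map/unmap */
  struct iommu_ops {
          int (*map_pages)(struct iommu_domain *domain, unsigned long iova,
                           phys_addr_t paddr, size_t pgsize, size_t pgcount,
                           int prot, gfp_t gfp, size_t *mapped);
          size_t (*unmap_pages)(struct iommu_domain *domain, unsigned long iova,
                                size_t pgsize, size_t pgcount,
                                struct iommu_iotlb_gather *iotlb_gather);
          /* ... */
  };

A single indirect call now covers a run of pgcount pages of size pgsize.
An implementation may handle fewer than pgcount pages; it reports the
number of bytes actually mapped (through *mapped) or unmapped (through
the return value) so that the core loop can continue from where the
callback stopped.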

Changes since V6:
(https://lore.kernel.org/r/1623776913-390160-1-git-send-email-quic_c_gdjako@quicinc.com/)

* Fix compiler warning (patch 08/15)
* Free underlying page tables for large mappings (patch 10/15)

  Consider the case where a 2N MB buffer (where N > 1) is composed
  entirely of 4 KB pages. This means that, at the second to last level,
  the buffer will have N non-leaf entries that point to page tables
  with 4 KB mappings. When the buffer is unmapped, all N entries will
  be cleared at the second to last level. However, the existing logic
  only checks whether it needs to free the underlying page tables for
  the first non-leaf entry, so the page table memory for the other N-1
  entries is leaked. Fix this memory leak by applying the same check to
  all N entries that are being unmapped (a simplified sketch of the
  resulting loop is appended after the diffstat below).

  When unmapping multiple entries, __arm_lpae_unmap() should unmap one
  entry at a time and perform TLB maintenance as required for that
  entry.

Changes since V5:
(https://lore.kernel.org/r/20210408171402.12607-1-isaacm@codeaurora.org/)

* Rebased on next-20210515.
* Fixed minor checkpatch warnings - indentation, extra blank lines.
* Use the correct function argument in __arm_lpae_map(). (chenxiang)

Changes since V4:

* Fixed the type of addr_merge, from phys_addr_t to unsigned long, so
  that GENMASK() can be used.
* Hooked up arm_v7s_[unmap/map]_pages to the io-pgtable ops.
* Introduced a macro for calculating the number of page table entries
  for the ARM LPAE io-pgtable format.

Changes since V3:

* Removed usage of the ULL variants of the bitops from Will's patches,
  as they were not needed.
* Instead of always unmapping/mapping exactly pgcount pages,
  unmap_pages() and map_pages() will unmap and map at most pgcount
  pages, allowing only part of the requested range to be handled. This
  was done to simplify the handling in the io-pgtable layer.
* Extended the existing PTE manipulation methods in io-pgtable-arm to
  handle multiple entries, per Robin's suggestion, eliminating the need
  to add functions to clear multiple PTEs.
* Implemented a naive form of [map/unmap]_pages() for the ARM v7s
  io-pgtable format.
* arm_[v7s/lpae]_[map/unmap] will call arm_[v7s/lpae]_[map/unmap]_pages
  with an argument of 1 page.
* The arm_smmu_[map/unmap] functions have been removed, since they have
  been replaced by arm_smmu_[map/unmap]_pages.

Changes since V2:

* Added a check in __iommu_map() for the existence of either the map or
  map_pages callback, as per Lu's suggestion.

Changes since V1:

* Implemented the map_pages() callbacks.
* Integrated Will's patches into this series, which address several
  concerns about how iommu_pgsize() partitioned a buffer (I made a
  minor change to the patch which converts iommu_pgsize() to use
  bitmaps, by using the ULL variants of the bitops). A sketch of the
  bitmap-based size calculation is also appended after the diffstat
  below.

Isaac J. Manjarres (12):
  iommu/io-pgtable: Introduce unmap_pages() as a page table op
  iommu: Add an unmap_pages() op for IOMMU drivers
  iommu/io-pgtable: Introduce map_pages() as a page table op
  iommu: Add a map_pages() op for IOMMU drivers
  iommu: Add support for the map_pages() callback
  iommu/io-pgtable-arm: Prepare PTE methods for handling multiple entries
  iommu/io-pgtable-arm: Implement arm_lpae_unmap_pages()
  iommu/io-pgtable-arm: Implement arm_lpae_map_pages()
  iommu/io-pgtable-arm-v7s: Implement arm_v7s_unmap_pages()
  iommu/io-pgtable-arm-v7s: Implement arm_v7s_map_pages()
  iommu/arm-smmu: Implement the unmap_pages() IOMMU driver callback
  iommu/arm-smmu: Implement the map_pages() IOMMU driver callback

Will Deacon (3):
  iommu: Use bitmap to calculate page size in iommu_pgsize()
  iommu: Split 'addr_merge' argument to iommu_pgsize() into separate parts
  iommu: Hook up '->unmap_pages' driver callback

 drivers/iommu/arm/arm-smmu/arm-smmu.c |  18 +--
 drivers/iommu/io-pgtable-arm-v7s.c    |  50 ++++++--
 drivers/iommu/io-pgtable-arm.c        | 223 +++++++++++++++++++++-------------
 drivers/iommu/iommu.c                 | 129 +++++++++++++++-----
 include/linux/io-pgtable.h            |   8 ++
 include/linux/iommu.h                 |   9 ++
 6 files changed, 307 insertions(+), 130 deletions(-)
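
To illustrate the v7 fix to patch 10/15 described in the V6 changelog,
here is a simplified, illustrative sketch (not the verbatim patch) of
the per-entry loop in __arm_lpae_unmap() once it handles multiple
entries. Helper names follow drivers/iommu/io-pgtable-arm.c; the quirk
and error paths are elided, and i, ptep, num_entries, iova, size, lvl,
data, iop and gather are assumed to come from the surrounding function:

  /*
   * Clear up to num_entries PTEs at this level. The table-freeing
   * check must be applied to every cleared entry, not just the first
   * one, or the page table pages behind entries 2..N are leaked.
   */
  while (i < num_entries) {
          arm_lpae_iopte pte = READ_ONCE(*ptep);

          if (WARN_ON(!pte))
                  break;

          __arm_lpae_clear_pte(ptep, &iop->cfg);

          if (!iopte_leaf(pte, lvl, iop->fmt)) {
                  /* Flush any partial walks, then free the next-level table */
                  io_pgtable_tlb_flush_walk(iop, iova + i * size, size,
                                            ARM_LPAE_GRANULE(data));
                  __arm_lpae_free_pgtable(data, lvl + 1,
                                          iopte_deref(pte, data));
          } else {
                  /* Leaf entry: queue TLB maintenance for this entry only */
                  io_pgtable_tlb_add_page(iop, gather, iova + i * size, size);
          }

          ptep++;
          i++;
  }

Note how the TLB maintenance is also performed per entry, matching the
"one entry at a time" behaviour described above.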
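
And, for reference on Will's iommu_pgsize() rework, a simplified sketch
of the bitmap-based page size calculation (modelled on "iommu: Use
bitmap to calculate page size in iommu_pgsize()"; the later patches in
the series also split addr_merge into separate iova/paddr arguments and
compute how many such pages are contiguous, for '->unmap_pages'):

  static size_t iommu_pgsize(struct iommu_domain *domain,
                             unsigned long addr_merge, size_t size)
  {
          unsigned int pgsize_idx;
          unsigned long pgsizes;

          /* Page sizes supported by the hardware and small enough for @size */
          pgsizes = domain->pgsize_bitmap & GENMASK(__fls(size), 0);

          /* Constrain the page sizes further based on the maximum alignment */
          if (likely(addr_merge))
                  pgsizes &= GENMASK(__ffs(addr_merge), 0);

          /* Make sure we have at least one suitable page size */
          BUG_ON(!pgsizes);

          /* Pick the biggest page size remaining */
          pgsize_idx = __fls(pgsizes);
          return BIT(pgsize_idx);
  }

A single __fls() pass over the constrained bitmap picks the largest
supported page size that both fits within @size and respects the
alignment implied by the combined address bits in @addr_merge.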