From patchwork Tue Apr 13 08:54:48 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12199685
From: Keqian Zhu
To: Robin Murphy, Will Deacon, Joerg Roedel, Yi Sun,
	Jean-Philippe Brucker, Jonathan Cameron, Tian Kevin, Lu Baolu
CC: Alex Williamson, Cornelia Huck, Kirti Wankhede
Subject: [PATCH v3 03/12] iommu: Add iommu_merge_page interface
Date: Tue, 13 Apr 2021 16:54:48 +0800
Message-ID: <20210413085457.25400-4-zhukeqian1@huawei.com>
In-Reply-To: <20210413085457.25400-1-zhukeqian1@huawei.com>
References: <20210413085457.25400-1-zhukeqian1@huawei.com>

If block (large page) mappings are split when dirty log tracking
starts, we need to recover them when tracking stops, for better DMA
performance. This adds a new interface, iommu_merge_page, to the IOMMU
base layer. A specific IOMMU driver can invoke it when stopping dirty
log tracking; to do so, the driver must also implement the merge_page
iommu_ops callback.

We flush all IOTLBs only after the whole procedure completes, to ease
pressure on the IOMMU, since we will generally handle a huge range of
mappings.
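For illustration only (not part of this patch): a driver that opts in
would implement the merge_page callback, wire it into its iommu_ops,
and call iommu_merge_page() from the stop path of its switch_dirty_log
callback. A minimal sketch, assuming a hypothetical "demo" driver;
every demo_* name below is invented:

static int demo_merge_page(struct iommu_domain *domain, unsigned long iova,
			   phys_addr_t paddr, size_t size, int prot)
{
	/*
	 * Replace the page-granule entries covering [iova, iova + size)
	 * with one block entry at paddr. Per-call IOTLB invalidation is
	 * unnecessary: iommu_merge_page() flushes once at the end.
	 */
	return 0;	/* page-table manipulation elided */
}

static int demo_switch_dirty_log(struct iommu_domain *domain, bool enable,
				 unsigned long iova, size_t size, int prot)
{
	if (enable)
		/* Start: split blocks so dirty bits are page-granule. */
		return iommu_split_block(domain, iova, size);

	/* Stop: recover block mappings for better DMA performance. */
	return iommu_merge_page(domain, iova, size, prot);
}

static const struct iommu_ops demo_iommu_ops = {
	/* ... other callbacks elided ... */
	.merge_page		= demo_merge_page,
	.switch_dirty_log	= demo_switch_dirty_log,
};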
Signed-off-by: Keqian Zhu
Signed-off-by: Kunkun Jiang
---
 drivers/iommu/iommu.c | 75 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/iommu.h | 12 +++++++
 2 files changed, 87 insertions(+)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index bb413a927870..8f0d71bafb3a 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2762,6 +2762,81 @@ int iommu_split_block(struct iommu_domain *domain, unsigned long iova,
 }
 EXPORT_SYMBOL_GPL(iommu_split_block);
 
+static int __iommu_merge_page(struct iommu_domain *domain,
+			      unsigned long iova, phys_addr_t paddr,
+			      size_t size, int prot)
+{
+	const struct iommu_ops *ops = domain->ops;
+	unsigned int min_pagesz;
+	size_t pgsize;
+	int ret = 0;
+
+	if (unlikely(!ops || !ops->merge_page))
+		return -ENODEV;
+
+	min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
+	if (!IS_ALIGNED(iova | paddr | size, min_pagesz)) {
+		pr_err("unaligned: iova 0x%lx pa %pa size 0x%zx min_pagesz 0x%x\n",
+		       iova, &paddr, size, min_pagesz);
+		return -EINVAL;
+	}
+
+	while (size) {
+		pgsize = iommu_pgsize(domain, iova | paddr, size);
+
+		ret = ops->merge_page(domain, iova, paddr, pgsize, prot);
+		if (ret)
+			break;
+
+		pr_debug("merge handled: iova 0x%lx pa %pa size 0x%zx\n",
+			 iova, &paddr, pgsize);
+
+		iova += pgsize;
+		paddr += pgsize;
+		size -= pgsize;
+	}
+
+	return ret;
+}
+
+int iommu_merge_page(struct iommu_domain *domain, unsigned long iova,
+		     size_t size, int prot)
+{
+	phys_addr_t phys;
+	dma_addr_t p, i;
+	size_t cont_size;
+	bool flush = false;
+	int ret = 0;
+
+	while (size) {
+		flush = true;
+
+		phys = iommu_iova_to_phys(domain, iova);
+		cont_size = PAGE_SIZE;
+		p = phys + cont_size;
+		i = iova + cont_size;
+
+		while (cont_size < size && p == iommu_iova_to_phys(domain, i)) {
+			p += PAGE_SIZE;
+			i += PAGE_SIZE;
+			cont_size += PAGE_SIZE;
+		}
+
+		ret = __iommu_merge_page(domain, iova, phys, cont_size, prot);
+		if (ret)
+			break;
+
+		iova += cont_size;
+		size -= cont_size;
+	}
+
+	if (flush)
+		iommu_flush_iotlb_all(domain);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(iommu_merge_page);
+
 int iommu_switch_dirty_log(struct iommu_domain *domain, bool enable,
 			   unsigned long iova, size_t size, int prot)
 {
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index c6c90ac069e3..fea3ecabff3d 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -209,6 +209,7 @@ struct iommu_iotlb_gather {
  * @domain_get_attr: Query domain attributes
  * @domain_set_attr: Change domain attributes
  * @split_block: Split block mapping into page mapping
+ * @merge_page: Merge page mapping into block mapping
  * @switch_dirty_log: Perform actions to start|stop dirty log tracking
  * @sync_dirty_log: Sync dirty log from IOMMU into a dirty bitmap
  * @clear_dirty_log: Clear dirty log of IOMMU by a mask bitmap
@@ -270,6 +271,8 @@ struct iommu_ops {
 	/* Track dirty log */
 	int (*split_block)(struct iommu_domain *domain, unsigned long iova,
 			   size_t size);
+	int (*merge_page)(struct iommu_domain *domain, unsigned long iova,
+			  phys_addr_t phys, size_t size, int prot);
 	int (*switch_dirty_log)(struct iommu_domain *domain, bool enable,
 				unsigned long iova, size_t size, int prot);
 	int (*sync_dirty_log)(struct iommu_domain *domain,
@@ -534,6 +537,8 @@ extern int iommu_domain_set_attr(struct iommu_domain *domain, enum iommu_attr,
 				 void *data);
 extern int iommu_split_block(struct iommu_domain *domain, unsigned long iova,
 			     size_t size);
+extern int iommu_merge_page(struct iommu_domain *domain, unsigned long iova,
+			    size_t size, int prot);
 extern int iommu_switch_dirty_log(struct iommu_domain *domain, bool enable,
 				  unsigned long iova, size_t size, int prot);
 extern int iommu_sync_dirty_log(struct iommu_domain *domain,
 				unsigned long iova,
@@ -940,6 +945,13 @@ static inline int iommu_split_block(struct iommu_domain *domain,
 	return -EINVAL;
 }
 
+static inline int iommu_merge_page(struct iommu_domain *domain,
+				   unsigned long iova, size_t size,
+				   int prot)
+{
+	return -EINVAL;
+}
+
 static inline int iommu_switch_dirty_log(struct iommu_domain *domain,
 					 bool enable, unsigned long iova,
 					 size_t size, int prot)
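Also for illustration: when CONFIG_IOMMU_API is disabled, the static
inline stubs above return -EINVAL, so an upper layer should treat
merging as best-effort. A hedged caller sketch (stop_dirty_tracking is
a made-up name; only the iommu_* call comes from this series):

static int stop_dirty_tracking(struct iommu_domain *domain,
			       unsigned long iova, size_t size, int prot)
{
	int ret;

	/* enable == false: the driver may merge pages back into blocks. */
	ret = iommu_switch_dirty_log(domain, false, iova, size, prot);
	if (ret)
		pr_warn("failed to stop dirty log tracking: %d\n", ret);

	return ret;
}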