From patchwork Mon Jun 26 13:38:47 2017
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 9809735
From: Zhen Lei
To: Will Deacon, Joerg Roedel, linux-arm-kernel, iommu, Robin Murphy, linux-kernel
Cc: John Garry, Hanjun Guo, Xinwei Hu, Zefan Li, Tianhong Ding, Zhen Lei
Subject: [PATCH 2/5] iommu: add a new member unmap_tlb_sync into struct iommu_ops
Date: Mon, 26 Jun 2017 21:38:47 +0800
Message-ID: <1498484330-10840-3-git-send-email-thunder.leizhen@huawei.com>
In-Reply-To: <1498484330-10840-1-git-send-email-thunder.leizhen@huawei.com>
References: <1498484330-10840-1-git-send-email-thunder.leizhen@huawei.com>

An iova range may contain many pages/blocks, especially in the unmap_sg case.
Currently, each page/block unmapping is followed by a TLB invalidation which
then waits (the tlb_sync) until that operation has completed. In fact, only
one tlb_sync is needed, at the very end. Look at the loop in iommu_unmap:

	while (unmapped < size) {
		...
		unmapped_page = domain->ops->unmap(domain, iova, pgsize);
		...
	}

It is not a good idea to keep the tlb_sync inside domain->ops->unmap.
Deferring it to a single call at the end brings clear benefits, because the
following costs are no longer paid on every iteration:

1. The IOMMU hardware is a resource shared by all CPUs, so the tlb_sync
   operation needs lock protection.
2. The IOMMU hardware sits outside the CPU, so starting a tlb_sync and
   polling for its completion can take a long time.

Some people might ask: is this safe? The answer is yes. The standard
processing flow is:

	alloc iova
	map
	process data
	unmap
	tlb invalidation and sync
	free iova

What must be guaranteed is that the "free iova" step comes after both the
"unmap" and the TLB invalidation, and that is exactly what happens here.
This ensures that all TLB entries for an iova range have been invalidated
before the iova is reallocated.
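To make the intended split concrete, here is a minimal sketch (not code from
this series) of how a driver could move the wait out of its per-page unmap
and into the new callback. The names my_domain, hw_issue_tlbi and
hw_wait_tlbi_done are hypothetical placeholders for driver-specific details:

	/*
	 * Illustrative sketch only: queue the invalidation in ->unmap, wait once
	 * in ->unmap_tlb_sync.
	 */
	#include <linux/iommu.h>
	#include <linux/spinlock.h>

	struct my_domain {
		struct iommu_domain	domain;
		spinlock_t		lock;	/* protects the shared invalidation queue */
	};

	/* Hypothetical hardware helpers: queue a TLBI, wait for all queued TLBIs. */
	static void hw_issue_tlbi(struct my_domain *md, unsigned long iova, size_t size) { }
	static void hw_wait_tlbi_done(struct my_domain *md) { }

	static size_t my_unmap(struct iommu_domain *domain, unsigned long iova,
			       size_t size)
	{
		struct my_domain *md = container_of(domain, struct my_domain, domain);

		/* ... tear down the page-table entries for [iova, iova + size) ... */

		/* Queue the invalidation, but do not wait here any more. */
		hw_issue_tlbi(md, iova, size);

		return size;
	}

	static void my_unmap_tlb_sync(struct iommu_domain *domain)
	{
		struct my_domain *md = container_of(domain, struct my_domain, domain);
		unsigned long flags;

		/* One lock round-trip and one wait for the whole iommu_unmap() call. */
		spin_lock_irqsave(&md->lock, flags);
		hw_wait_tlbi_done(md);
		spin_unlock_irqrestore(&md->lock, flags);
	}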
Signed-off-by: Zhen Lei
---
 drivers/iommu/iommu.c | 3 +++
 include/linux/iommu.h | 1 +
 2 files changed, 4 insertions(+)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index cf7ca7e..01e91a8 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1610,6 +1610,9 @@ size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
 		unmapped += unmapped_page;
 	}
 
+	if (domain->ops->unmap_tlb_sync)
+		domain->ops->unmap_tlb_sync(domain);
+
 	trace_unmap(orig_iova, size, unmapped);
 	return unmapped;
 }
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 2cb54ad..5964121 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -197,6 +197,7 @@ struct iommu_ops {
 		     phys_addr_t paddr, size_t size, int prot);
 	size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
 		     size_t size);
+	void (*unmap_tlb_sync)(struct iommu_domain *domain);
 	size_t (*map_sg)(struct iommu_domain *domain, unsigned long iova,
 			 struct scatterlist *sg, unsigned int nents, int prot);
 	phys_addr_t (*iova_to_phys)(struct iommu_domain *domain, dma_addr_t iova);
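For completeness, a hypothetical driver would then publish the new optional
callback next to its existing ones. Again this is only a sketch and the my_*
names are placeholders, not functions from this series; the core skips the
callback when it is left NULL, as the iommu_unmap() hunk above shows:

	/* Illustrative only: wiring the new optional member into a driver's iommu_ops. */
	static struct iommu_ops my_iommu_ops = {
		.map		= my_map,
		.unmap		= my_unmap,
		.unmap_tlb_sync	= my_unmap_tlb_sync,	/* new, optional: skipped if NULL */
		.iova_to_phys	= my_iova_to_phys,
		/* ... */
	};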