From patchwork Mon Apr 20 11:43:44 2015
X-Patchwork-Submitter: Tomasz Figa
X-Patchwork-Id: 6241491
From: Tomasz Figa
To: iommu@lists.linux-foundation.org
Cc: Heiko Stuebner, Joerg Roedel, Daniel Kurtz, Tomasz Figa,
    linux-rockchip@lists.infradead.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2] CHROMIUM: iommu: rockchip: Make sure that page table state is coherent
Date: Mon, 20 Apr 2015 20:43:44 +0900
Message-Id: <1429530224-24719-1-git-send-email-tfiga@chromium.org>
List-Id: Upstream kernel work for Rockchip platforms

To flush created mappings, the current mapping code relies on the
assumptions that during unmap the driver zaps every IOVA being unmapped
and that zapping a single IOVA of a page table is enough to evict the
entire page table from the IOMMU cache. Based on these assumptions, the
driver simply zaps the first IOVA of the mapping being created. This is
enough to invalidate the first page table, which could be shared with
another mapping (and thus could already be present in the IOMMU cache),
but it does nothing about the last page table, which could be shared
with other mappings as well.

Moreover, the flushing is performed before the page table contents are
actually modified, so there is a race between the CPU updating the page
tables and the hardware, which could be running at the same time and
triggering IOMMU look-ups that bring the stale page tables back into
the cache.

To fix both issues, this patch makes the mapping code zap the first and
last (if different) IOVAs of the new mapping after the page tables have
been updated.

Signed-off-by: Tomasz Figa
Reviewed-by: Daniel Kurtz
Tested-by: Heiko Stuebner
---
 drivers/iommu/rockchip-iommu.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
index 4015560..31004c0 100644
--- a/drivers/iommu/rockchip-iommu.c
+++ b/drivers/iommu/rockchip-iommu.c
@@ -551,6 +551,15 @@ static void rk_iommu_zap_iova(struct rk_iommu_domain *rk_domain,
 	spin_unlock_irqrestore(&rk_domain->iommus_lock, flags);
 }
 
+static void rk_iommu_zap_iova_first_last(struct rk_iommu_domain *rk_domain,
+					 dma_addr_t iova, size_t size)
+{
+	rk_iommu_zap_iova(rk_domain, iova, SPAGE_SIZE);
+	if (size > SPAGE_SIZE)
+		rk_iommu_zap_iova(rk_domain, iova + size - SPAGE_SIZE,
+				  SPAGE_SIZE);
+}
+
 static u32 *rk_dte_get_page_table(struct rk_iommu_domain *rk_domain,
 				  dma_addr_t iova)
 {
@@ -575,12 +584,6 @@ static u32 *rk_dte_get_page_table(struct rk_iommu_domain *rk_domain,
 	rk_table_flush(page_table, NUM_PT_ENTRIES);
 	rk_table_flush(dte_addr, 1);
 
-	/*
-	 * Zap the first iova of newly allocated page table so iommu evicts
-	 * old cached value of new dte from the iotlb.
-	 */
-	rk_iommu_zap_iova(rk_domain, iova, SPAGE_SIZE);
-
 done:
 	pt_phys = rk_dte_pt_address(dte);
 	return (u32 *)phys_to_virt(pt_phys);
@@ -630,6 +633,14 @@ static int rk_iommu_map_iova(struct rk_iommu_domain *rk_domain, u32 *pte_addr,
 
 	rk_table_flush(pte_addr, pte_count);
 
+	/*
+	 * Zap the first and last iova to evict from iotlb any previously
+	 * mapped cachelines holding stale values for its dte and pte.
+	 * We only zap the first and last iova, since only they could have
+	 * dte or pte shared with an existing mapping.
+	 */
+	rk_iommu_zap_iova_first_last(rk_domain, iova, size);
+
 	return 0;
 unwind:
 	/* Unmap the range of iovas that we just mapped */
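
For readers skimming the patch, here is a minimal user-space sketch (not
kernel code) of the arithmetic behind the claim that only the first and
last IOVA of a new mapping can touch a page table shared with an existing
mapping. It assumes the constants from rockchip-iommu.c (4 KiB small
pages, 1024 PTEs per page table, so one DTE covers 4 MiB of IOVA space);
the example mapping base and size are hypothetical.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Constants mirroring rockchip-iommu.c. */
#define SPAGE_SIZE	4096u				/* small page: 4 KiB */
#define NUM_PT_ENTRIES	1024u				/* PTEs per page table */
#define PT_SPAN		(SPAGE_SIZE * NUM_PT_ENTRIES)	/* 4 MiB of IOVA per DTE */

int main(void)
{
	/* Hypothetical mapping: 6 MiB starting at IOVA 3 MiB. */
	uint32_t iova = 3u << 20;
	uint32_t size = 6u << 20;

	/* Index of the page table covering the first and last small page. */
	uint32_t first_pt = iova / PT_SPAN;
	uint32_t last_pt  = (iova + size - SPAGE_SIZE) / PT_SPAN;

	/*
	 * Any page table strictly between first_pt and last_pt covers only
	 * IOVAs inside the new mapping, i.e. IOVAs that were unmapped (and
	 * therefore zapped from the IOTLB) before the range could be reused.
	 * Only the two boundary tables can also hold PTEs of other live
	 * mappings and hence still sit in the IOTLB, so only they need a zap.
	 */
	printf("mapping spans page tables %" PRIu32 "..%" PRIu32 "\n",
	       first_pt, last_pt);
	printf("zap needed for tables %" PRIu32 " and %" PRIu32 " only\n",
	       first_pt, last_pt);
	return 0;
}

With these numbers the 6 MiB mapping at IOVA 3 MiB spans page tables 0
through 2; table 1 lies entirely inside the just-unmapped range, so only
tables 0 and 2 can still be hot in the IOTLB, which correspond to exactly
the two IOVAs that rk_iommu_zap_iova_first_last() zaps.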