From patchwork Mon Apr 21 14:13:09 2014
X-Patchwork-Submitter: Laurent Pinchart
X-Patchwork-Id: 4025221
From: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
To: iommu@lists.linux-foundation.org
Cc: Will Deacon, linux-arm-kernel@lists.infradead.org,
	linux-sh@vger.kernel.org
Subject: [PATCH 9/9] iommu/ipmmu-vmsa: Support clearing mappings
Date: Mon, 21 Apr 2014 16:13:09 +0200
Message-Id: <1398089589-9181-10-git-send-email-laurent.pinchart+renesas@ideasonboard.com>
In-Reply-To: <1398089589-9181-1-git-send-email-laurent.pinchart+renesas@ideasonboard.com>
References: <1398089589-9181-1-git-send-email-laurent.pinchart+renesas@ideasonboard.com>

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
---
 drivers/iommu/ipmmu-vmsa.c | 196 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 192 insertions(+), 4 deletions(-)
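
For reference, the new unmap path is reached through the generic IOMMU
API. The sketch below is illustrative only and not part of the patch
(the function name and the way the domain and addresses are obtained
are made up); it maps a 2MB block and then punches a 4KB hole into it,
which exercises ipmmu_split_pmd(), ipmmu_split_pte() and
ipmmu_clear_pte() in turn:

	#include <linux/iommu.h>
	#include <linux/sizes.h>

	static int example_punch_hole(struct iommu_domain *domain,
				      unsigned long iova, phys_addr_t paddr)
	{
		int ret;

		/* Map a 2MB block; the driver can use one PMD section entry. */
		ret = iommu_map(domain, iova, paddr, SZ_2M,
				IOMMU_READ | IOMMU_WRITE);
		if (ret < 0)
			return ret;

		/*
		 * Unmap a single 4KB page inside the block. The driver first
		 * splits the section into pages, then clears the contiguous
		 * hint on the 16-entry group around the page, and finally
		 * clears the PTE itself.
		 */
		if (iommu_unmap(domain, iova + SZ_64K, SZ_4K) != SZ_4K)
			return -EIO;

		return 0;
	}
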
diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 711e5c4..de6461d 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -193,14 +193,22 @@ static LIST_HEAD(ipmmu_devices);
 #define ARM_VMSA_PTE_SH_NS		(((pteval_t)0) << 8)
 #define ARM_VMSA_PTE_SH_OS		(((pteval_t)2) << 8)
 #define ARM_VMSA_PTE_SH_IS		(((pteval_t)3) << 8)
+#define ARM_VMSA_PTE_SH_MASK		(((pteval_t)3) << 8)
 #define ARM_VMSA_PTE_NS			(((pteval_t)1) << 5)
 #define ARM_VMSA_PTE_PAGE		(((pteval_t)3) << 0)
 
 /* Stage-1 PTE */
+#define ARM_VMSA_PTE_nG			(((pteval_t)1) << 11)
 #define ARM_VMSA_PTE_AP_UNPRIV		(((pteval_t)1) << 6)
 #define ARM_VMSA_PTE_AP_RDONLY		(((pteval_t)2) << 6)
+#define ARM_VMSA_PTE_AP_MASK		(((pteval_t)3) << 6)
+#define ARM_VMSA_PTE_ATTRINDX_MASK	(((pteval_t)3) << 2)
 #define ARM_VMSA_PTE_ATTRINDX_SHIFT	2
-#define ARM_VMSA_PTE_nG			(((pteval_t)1) << 11)
+
+#define ARM_VMSA_PTE_ATTRS_MASK \
+	(ARM_VMSA_PTE_XN | ARM_VMSA_PTE_CONT | ARM_VMSA_PTE_nG | \
+	 ARM_VMSA_PTE_AF | ARM_VMSA_PTE_SH_MASK | ARM_VMSA_PTE_AP_MASK | \
+	 ARM_VMSA_PTE_NS | ARM_VMSA_PTE_ATTRINDX_MASK)
 
 #define ARM_VMSA_PTE_CONT_ENTRIES	16
 #define ARM_VMSA_PTE_CONT_SIZE		(PAGE_SIZE * ARM_VMSA_PTE_CONT_ENTRIES)
@@ -615,7 +623,7 @@ static int ipmmu_alloc_init_pmd(struct ipmmu_vmsa_device *mmu, pmd_t *pmd,
 	return 0;
 }
 
-static int ipmmu_handle_mapping(struct ipmmu_vmsa_domain *domain,
+static int ipmmu_create_mapping(struct ipmmu_vmsa_domain *domain,
 				unsigned long iova, phys_addr_t paddr,
 				size_t size, int prot)
 {
@@ -669,6 +677,186 @@ done:
 	return ret;
 }
 
+static void ipmmu_clear_pud(struct ipmmu_vmsa_device *mmu, pud_t *pud)
+{
+	/* Free the page table. */
+	pgtable_t table = pud_pgtable(*pud);
+	__free_page(table);
+
+	/* Clear the PUD. */
+	*pud = __pud(0);
+	ipmmu_flush_pgtable(mmu, pud, sizeof(*pud));
+}
+
+static void ipmmu_clear_pmd(struct ipmmu_vmsa_device *mmu, pud_t *pud,
+			    pmd_t *pmd)
+{
+	unsigned int i;
+
+	/* Free the page table. */
+	if (pmd_table(*pmd)) {
+		pgtable_t table = pmd_pgtable(*pmd);
+		__free_page(table);
+	}
+
+	/* Clear the PMD. */
+	*pmd = __pmd(0);
+	ipmmu_flush_pgtable(mmu, pmd, sizeof(*pmd));
+
+	/* Check whether the PUD is still needed. */
+	pmd = pmd_offset(pud, 0);
+	for (i = 0; i < IPMMU_PTRS_PER_PMD; ++i) {
+		if (!pmd_none(pmd[i]))
+			return;
+	}
+
+	/* Clear the parent PUD. */
+	ipmmu_clear_pud(mmu, pud);
+}
+
+static void ipmmu_clear_pte(struct ipmmu_vmsa_device *mmu, pud_t *pud,
+			    pmd_t *pmd, pte_t *pte, unsigned int num_ptes)
+{
+	unsigned int i;
+
+	/* Clear the PTE. */
+	for (i = num_ptes; i; --i)
+		pte[i-1] = __pte(0);
+
+	ipmmu_flush_pgtable(mmu, pte, sizeof(*pte) * num_ptes);
+
+	/* Check whether the PMD is still needed. */
+	pte = pte_offset_kernel(pmd, 0);
+	for (i = 0; i < IPMMU_PTRS_PER_PTE; ++i) {
+		if (!pte_none(pte[i]))
+			return;
+	}
+
+	/* Clear the parent PMD. */
+	ipmmu_clear_pmd(mmu, pud, pmd);
+}
+
+static int ipmmu_split_pmd(struct ipmmu_vmsa_device *mmu, pmd_t *pmd)
+{
+	pte_t *pte, *start;
+	pteval_t pteval;
+	unsigned long pfn;
+	unsigned int i;
+
+	pte = (pte_t *)get_zeroed_page(GFP_ATOMIC);
+	if (!pte)
+		return -ENOMEM;
+
+	/* Copy the PMD attributes. */
+	pteval = (pmd_val(*pmd) & ARM_VMSA_PTE_ATTRS_MASK)
+	       | ARM_VMSA_PTE_CONT | ARM_VMSA_PTE_PAGE;
+
+	pfn = pmd_pfn(*pmd);
+	start = pte;
+
+	for (i = IPMMU_PTRS_PER_PTE; i; --i)
+		*pte++ = pfn_pte(pfn++, __pgprot(pteval));
+
+	ipmmu_flush_pgtable(mmu, start, PAGE_SIZE);
+	*pmd = __pmd(__pa(start) | PMD_NSTABLE | PMD_TYPE_TABLE);
+	ipmmu_flush_pgtable(mmu, pmd, sizeof(*pmd));
+
+	return 0;
+}
+
+static void ipmmu_split_pte(struct ipmmu_vmsa_device *mmu, pte_t *pte)
+{
+	unsigned int i;
+
+	/* Align to the start of the contiguous group of entries. */
+	pte = (pte_t *)((unsigned long)pte &
+			~(ARM_VMSA_PTE_CONT_ENTRIES * sizeof(*pte) - 1));
+
+	for (i = ARM_VMSA_PTE_CONT_ENTRIES; i; --i)
+		pte[i-1] = __pte(pte_val(pte[i-1]) & ~ARM_VMSA_PTE_CONT);
+
+	ipmmu_flush_pgtable(mmu, pte, sizeof(*pte) * ARM_VMSA_PTE_CONT_ENTRIES);
+}
+
+static int ipmmu_clear_mapping(struct ipmmu_vmsa_domain *domain,
+			       unsigned long iova, size_t size)
+{
+	struct ipmmu_vmsa_device *mmu = domain->mmu;
+	unsigned long flags;
+	pgd_t *pgd = domain->pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+	int ret = 0;
+
+	if (!pgd)
+		return -EINVAL;
+
+	if (size & ~PAGE_MASK)
+		return -EINVAL;
+
+	pgd += pgd_index(iova);
+	pud = (pud_t *)pgd;
+
+	spin_lock_irqsave(&domain->lock, flags);
+
+	/* If there's no PUD or PMD we're done. */
+	if (pud_none(*pud))
+		goto done;
+
+	pmd = pmd_offset(pud, iova);
+	if (pmd_none(*pmd))
+		goto done;
+
+	/*
+	 * When freeing a 2MB block just clear the PMD. In the unlikely case the
+	 * block is mapped as individual pages this will free the corresponding
+	 * PTE page table.
+	 */
+	if (size == SZ_2M) {
+		ipmmu_clear_pmd(mmu, pud, pmd);
+		goto done;
+	}
+
+	/*
+	 * If the PMD has been mapped as a section remap it as pages to allow
+	 * freeing individual pages.
+	 */
+	if (pmd_sect(*pmd)) {
+		ret = ipmmu_split_pmd(mmu, pmd);
+		if (ret < 0)
+			goto done;
+	}
+
+	pte = pte_offset_kernel(pmd, iova);
+
+	/*
+	 * When freeing a 64kB block just clear the PTE entries. We don't have
+	 * to care about the contiguous hint of the surrounding entries.
+	 */
+	if (size == SZ_64K) {
+		ipmmu_clear_pte(mmu, pud, pmd, pte, ARM_VMSA_PTE_CONT_ENTRIES);
+		goto done;
+	}
+
+	/*
+	 * If the PTE has been mapped with the contiguous hint set remap it and
+	 * its surrounding PTEs to allow unmapping a single page.
+	 */
+	if (pte_val(*pte) & ARM_VMSA_PTE_CONT)
+		ipmmu_split_pte(mmu, pte);
+
+	/* Clear the PTE. */
+	ipmmu_clear_pte(mmu, pud, pmd, pte, 1);
+
+done:
+	spin_unlock_irqrestore(&domain->lock, flags);
+
+	ipmmu_tlb_invalidate(domain);
+
+	return ret;
+}
+
 /* -----------------------------------------------------------------------------
  * IOMMU Operations
  */
@@ -769,7 +957,7 @@ static int ipmmu_map(struct iommu_domain *io_domain, unsigned long iova,
 	if (!domain)
 		return -ENODEV;
 
-	return ipmmu_handle_mapping(domain, iova, paddr, size, prot);
+	return ipmmu_create_mapping(domain, iova, paddr, size, prot);
 }
 
 static size_t ipmmu_unmap(struct iommu_domain *io_domain, unsigned long iova,
@@ -778,7 +966,7 @@ static size_t ipmmu_unmap(struct iommu_domain *io_domain, unsigned long iova,
 	struct ipmmu_vmsa_domain *domain = io_domain->priv;
 	int ret;
 
-	ret = ipmmu_handle_mapping(domain, iova, 0, size, 0);
+	ret = ipmmu_clear_mapping(domain, iova, size);
 
 	return ret ? 0 : size;
 }
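
As background (illustrative only, not driver code -- the EXAMPLE_*
names below do not exist in the kernel), the three block sizes that
ipmmu_clear_mapping() distinguishes fall out of the LPAE long-descriptor
geometry with a 4KB granule:

	#include <linux/sizes.h>

	/*
	 * A page-sized PTE table holds PAGE_SIZE / sizeof(u64) = 512
	 * 64-bit descriptors, so one PMD section spans 512 * 4KB = 2MB
	 * (the SZ_2M case), one contiguous-hint group spans
	 * ARM_VMSA_PTE_CONT_ENTRIES * 4KB = 16 * 4KB = 64KB (the SZ_64K
	 * case), and everything else is unmapped one 4KB page at a time.
	 */
	#define EXAMPLE_PTRS_PER_PTE	(PAGE_SIZE / sizeof(u64))	/* 512 */
	#define EXAMPLE_SECT_SIZE	(EXAMPLE_PTRS_PER_PTE * SZ_4K)	/* SZ_2M */
	#define EXAMPLE_CONT_SIZE	(16 * SZ_4K)			/* SZ_64K */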