From patchwork Thu Jul 16 18:40:12 2015
X-Patchwork-Submitter: Robin Murphy
X-Patchwork-Id: 6810501
From: Robin Murphy <robin.murphy@arm.com>
To: iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org,
 joro@8bytes.org, will.deacon@arm.com, catalin.marinas@arm.com
Cc: laurent.pinchart+renesas@ideasonboard.com, arnd@arndb.de,
 djkurtz@google.com, thunder.leizhen@huawei.com, yingjoe.chen@mediatek.com,
 treding@nvidia.com, David Woodhouse, yong.wu@mediatek.com,
 m.szyprowski@samsung.com
Subject: [PATCH v4 1/4] iommu/iova: Avoid over-allocating when size-aligned
Date: Thu, 16 Jul 2015 19:40:12 +0100
Message-Id: <35cb877828b570db84e89684ef76308cd1857f0c.1437070995.git.robin.murphy@arm.com>

Currently, allocating a size-aligned IOVA region quietly adjusts the
actual allocation size in the process, returning a rounded-up
power-of-two-sized allocation.
This results in mismatched behaviour in
the IOMMU driver if the original size was not a power of two, where the
original size is mapped, but the rounded-up IOVA size is unmapped.

Whilst some IOMMUs will happily unmap already-unmapped pages, others
consider this an error, so fix it by computing the necessary alignment
padding without altering the actual allocation size. Also clean up by
making pad_size unsigned, since its callers always pass unsigned values
and negative padding makes little sense here anyway.

CC: David Woodhouse
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
 drivers/iommu/intel-iommu.c |  2 ++
 drivers/iommu/iova.c        | 23 ++++++-----------------
 2 files changed, 8 insertions(+), 17 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index a98a7b2..9210159 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3233,6 +3233,8 @@ static struct iova *intel_alloc_iova(struct device *dev,
 
 	/* Restrict dma_mask to the width that the iommu can handle */
 	dma_mask = min_t(uint64_t, DOMAIN_MAX_ADDR(domain->gaw), dma_mask);
+	/* Ensure we reserve the whole size-aligned region */
+	nrpages = __roundup_pow_of_two(nrpages);
 
 	if (!dmar_forcedac && dma_mask > DMA_BIT_MASK(32)) {
 		/*
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index b7c3d92..29f2efc 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -120,19 +120,14 @@ __cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
 	}
 }
 
-/* Computes the padding size required, to make the
- * the start address naturally aligned on its size
+/*
+ * Computes the padding size required, to make the start address
+ * naturally aligned on the power-of-two order of its size
  */
-static int
-iova_get_pad_size(int size, unsigned int limit_pfn)
+static unsigned int
+iova_get_pad_size(unsigned int size, unsigned int limit_pfn)
 {
-	unsigned int pad_size = 0;
-	unsigned int order = ilog2(size);
-
-	if (order)
-		pad_size = (limit_pfn + 1) % (1 << order);
-
-	return pad_size;
+	return (limit_pfn + 1 - size) & (__roundup_pow_of_two(size) - 1);
 }
 
 static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
@@ -265,12 +260,6 @@ alloc_iova(struct iova_domain *iovad, unsigned long size,
 	if (!new_iova)
 		return NULL;
 
-	/* If size aligned is set then round the size to
-	 * to next power of two.
-	 */
-	if (size_aligned)
-		size = __roundup_pow_of_two(size);
-
 	ret = __alloc_and_insert_iova_range(iovad, size, limit_pfn,
 			new_iova, size_aligned);
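
For readers unfamiliar with the allocator, here is a minimal userspace
sketch of the new padding computation; it is not part of the patch. The
roundup_pow_of_two() helper below is a local stand-in for the kernel's
__roundup_pow_of_two(), and the size/limit_pfn values are made up for
illustration.

/*
 * Illustrative sketch (not kernel code): the allocator works top-down
 * from limit_pfn, so a size-page allocation starts at
 * limit_pfn - (size + pad) + 1; the pad aligns that start to the power
 * of two covering 'size' without growing the allocation itself.
 */
#include <stdio.h>

/* Local stand-in for the kernel's __roundup_pow_of_two() */
static unsigned int roundup_pow_of_two(unsigned int n)
{
	unsigned int p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

/* Same expression as the patched iova_get_pad_size() */
static unsigned int iova_get_pad_size(unsigned int size, unsigned int limit_pfn)
{
	return (limit_pfn + 1 - size) & (roundup_pow_of_two(size) - 1);
}

int main(void)
{
	unsigned int size = 5;          /* 5 pages: not a power of two */
	unsigned int limit_pfn = 0xfff; /* hypothetical allocation limit */
	unsigned int pad = iova_get_pad_size(size, limit_pfn);
	unsigned int pfn_lo = limit_pfn - (size + pad) + 1;

	/*
	 * pad = (0x1000 - 5) & 7 = 3, so pfn_lo = 0xff8, which is
	 * 8-aligned, yet only 5 pages are reserved; the old code would
	 * have silently grown the allocation to 8 pages.
	 */
	printf("pad=%u pfn_lo=%#x\n", pad, pfn_lo);
	return 0;
}

Note that the intel-iommu hunk rounds nrpages up itself, so that driver
keeps reserving the whole size-aligned region as before.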