From patchwork Mon Apr 22 17:59:25 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10911323

From: Christoph Hellwig
To: Robin Murphy
Cc: Tom Lendacky, Catalin Marinas, Joerg Roedel, Will Deacon,
 linux-kernel@vger.kernel.org, iommu@lists.linux-foundation.org,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH 09/26] iommu/dma: Move domain lookup into __iommu_dma_{map,unmap}
Date: Mon, 22 Apr 2019 19:59:25 +0200
Message-Id: <20190422175942.18788-10-hch@lst.de>
In-Reply-To: <20190422175942.18788-1-hch@lst.de>
References: <20190422175942.18788-1-hch@lst.de>

From: Robin Murphy

Most of the callers don't care, and the couple that do already have the
domain to hand for other reasons are in slow paths where the (trivial)
overhead of a repeated lookup will be utterly immaterial.
Signed-off-by: Robin Murphy
Signed-off-by: Christoph Hellwig
---
 drivers/iommu/dma-iommu.c | 34 ++++++++++++++++------------------
 1 file changed, 16 insertions(+), 18 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index e33724497c7b..4ebd08e3a83a 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -419,9 +419,10 @@ static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
 			size >> iova_shift(iovad));
 }
 
-static void __iommu_dma_unmap(struct iommu_domain *domain, dma_addr_t dma_addr,
+static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
 		size_t size)
 {
+	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	struct iova_domain *iovad = &cookie->iovad;
 	size_t iova_off = iova_offset(iovad, dma_addr);
@@ -436,8 +437,9 @@ static void __iommu_dma_unmap(struct iommu_domain *domain, dma_addr_t dma_addr,
 }
 
 static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
-		size_t size, int prot, struct iommu_domain *domain)
+		size_t size, int prot)
 {
+	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	size_t iova_off = 0;
 	dma_addr_t iova;
@@ -536,7 +538,7 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
 static void __iommu_dma_free(struct device *dev, struct page **pages,
 		size_t size, dma_addr_t *handle)
 {
-	__iommu_dma_unmap(iommu_get_dma_domain(dev), *handle, size);
+	__iommu_dma_unmap(dev, *handle, size);
 	__iommu_dma_free_pages(pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
 	*handle = DMA_MAPPING_ERROR;
 }
@@ -699,14 +701,13 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
 static dma_addr_t __iommu_dma_map_page(struct device *dev, struct page *page,
 		unsigned long offset, size_t size, int prot)
 {
-	return __iommu_dma_map(dev, page_to_phys(page) + offset, size, prot,
-			iommu_get_dma_domain(dev));
+	return __iommu_dma_map(dev, page_to_phys(page) + offset, size, prot);
 }
 
 static void __iommu_dma_unmap_page(struct device *dev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	__iommu_dma_unmap(iommu_get_dma_domain(dev), handle, size);
+	__iommu_dma_unmap(dev, handle, size);
 }
 
 static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
@@ -715,11 +716,10 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 {
 	phys_addr_t phys = page_to_phys(page) + offset;
 	bool coherent = dev_is_dma_coherent(dev);
+	int prot = dma_info_to_prot(dir, coherent, attrs);
 	dma_addr_t dma_handle;
 
-	dma_handle =__iommu_dma_map(dev, phys, size,
-			dma_info_to_prot(dir, coherent, attrs),
-			iommu_get_dma_domain(dev));
+	dma_handle =__iommu_dma_map(dev, phys, size, prot);
 	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    dma_handle != DMA_MAPPING_ERROR)
 		arch_sync_dma_for_device(dev, phys, size, dir);
@@ -731,7 +731,7 @@ static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
 {
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		iommu_dma_sync_single_for_cpu(dev, dma_handle, size, dir);
-	__iommu_dma_unmap(iommu_get_dma_domain(dev), dma_handle, size);
+	__iommu_dma_unmap(dev, dma_handle, size);
 }
 
 /*
@@ -912,21 +912,20 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
 		sg = tmp;
 	}
 	end = sg_dma_address(sg) + sg_dma_len(sg);
-	__iommu_dma_unmap(iommu_get_dma_domain(dev), start, end - start);
+	__iommu_dma_unmap(dev, start, end - start);
 }
 
 static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
 	return __iommu_dma_map(dev, phys, size,
-			dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO,
-			iommu_get_dma_domain(dev));
+			dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO);
 }
 
 static void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	__iommu_dma_unmap(iommu_get_dma_domain(dev), handle, size);
+	__iommu_dma_unmap(dev, handle, size);
 }
 
 static void *iommu_dma_alloc(struct device *dev, size_t size,
@@ -1176,9 +1175,8 @@ void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 size)
 }
 
 static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
-		phys_addr_t msi_addr, struct iommu_domain *domain)
+		phys_addr_t msi_addr, struct iommu_dma_cookie *cookie)
 {
-	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	struct iommu_dma_msi_page *msi_page;
 	dma_addr_t iova;
 	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
@@ -1193,7 +1191,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
 	if (!msi_page)
 		return NULL;
 
-	iova = __iommu_dma_map(dev, msi_addr, size, prot, domain);
+	iova = __iommu_dma_map(dev, msi_addr, size, prot);
 	if (iova == DMA_MAPPING_ERROR)
 		goto out_free_page;
 
@@ -1228,7 +1226,7 @@ void iommu_dma_map_msi_msg(int irq, struct msi_msg *msg)
 	 * of an MSI from within an IPI handler.
 	 */
 	spin_lock_irqsave(&cookie->msi_lock, flags);
-	msi_page = iommu_dma_get_msi_page(dev, msi_addr, domain);
+	msi_page = iommu_dma_get_msi_page(dev, msi_addr, cookie);
 	spin_unlock_irqrestore(&cookie->msi_lock, flags);
 
 	if (WARN_ON(!msi_page)) {
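
For readers without the tree to hand, the shape of the change is easy to
see in isolation. Below is a minimal, self-contained C sketch of the same
pattern, not kernel code: "struct device", "struct domain" and
lookup_domain() here are hypothetical stand-ins for struct device,
struct iommu_domain and iommu_get_dma_domain(). The point is simply that
the callee now performs the cheap lookup its callers used to repeat at
every call site.

/*
 * Standalone sketch of the refactoring pattern (assumed names only;
 * lookup_domain() stands in for iommu_get_dma_domain()).
 */
#include <stdio.h>

struct domain { int id; };
struct device { struct domain dom; };

/* The cheap per-device lookup that callers used to repeat. */
static struct domain *lookup_domain(struct device *dev)
{
	return &dev->dom;
}

/* Before: every caller had to pass the domain explicitly. */
static void unmap_old(struct domain *domain, unsigned long addr)
{
	printf("unmap %#lx in domain %d\n", addr, domain->id);
}

/* After: the helper takes the device and looks the domain up itself. */
static void unmap_new(struct device *dev, unsigned long addr)
{
	struct domain *domain = lookup_domain(dev);

	printf("unmap %#lx in domain %d\n", addr, domain->id);
}

int main(void)
{
	struct device dev = { .dom = { .id = 1 } };

	unmap_old(lookup_domain(&dev), 0x1000);	/* old call site */
	unmap_new(&dev, 0x1000);		/* new, simpler call site */
	return 0;
}

The only callers that could skip the lookup are the slow paths mentioned
in the commit message, where repeating it is in the noise.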