From patchwork Sat Dec 21 15:03:53 2019
X-Patchwork-Submitter: Tom Murphy
X-Patchwork-Id: 11307001
From: Tom Murphy
To: iommu@lists.linux-foundation.org
Cc: Tom Murphy, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, David Airlie, Daniel Vetter, Joerg Roedel, Will Deacon, Robin Murphy, Marek Szyprowski, Kukjin Kim, Krzysztof Kozlowski, David Woodhouse, Lu Baolu, Andy Gross, Bjorn Andersson, Matthias Brugger, Rob Clark, Heiko Stuebner, Gerald Schaefer, Thierry Reding, Jonathan Hunter, Jean-Philippe Brucker, Alex Williamson, Cornelia Huck, Eric Auger, Julien Grall, Marc Zyngier, Thomas Gleixner, intel-gfx@lists.freedesktop.org,
dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-samsung-soc@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mediatek@lists.infradead.org, linux-rockchip@lists.infradead.org, linux-s390@vger.kernel.org, linux-tegra@vger.kernel.org, virtualization@lists.linux-foundation.org, kvm@vger.kernel.org
Subject: [PATCH 1/8] iommu/vt-d: clean up 32bit si_domain assignment
Date: Sat, 21 Dec 2019 15:03:53 +0000
Message-Id: <20191221150402.13868-2-murphyt7@tcd.ie>
In-Reply-To: <20191221150402.13868-1-murphyt7@tcd.ie>
References: <20191221150402.13868-1-murphyt7@tcd.ie>

In the Intel IOMMU driver, devices which only support 32-bit DMA can't be direct mapped. The current implementation of this is convoluted: we first assign such a device the direct mapped (identity) si_domain and then, at DMA map time, remove that domain and replace it with a DMA domain. We should just assign it a DMA domain from the beginning rather than needlessly swapping domains.

Signed-off-by: Tom Murphy
---
 drivers/iommu/intel-iommu.c | 88 +++++++++++++------------------------
 1 file changed, 31 insertions(+), 57 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c index 0c8d81f56a30..c1ea66467918 100644 --- a/drivers/iommu/intel-iommu.c +++ b/drivers/iommu/intel-iommu.c @@ -3462,46 +3462,9 @@ static struct dmar_domain *get_private_domain_for_dev(struct device *dev) } /* Check if the dev needs to go through non-identity map and unmap process.*/ -static bool iommu_need_mapping(struct device *dev) +static bool iommu_no_mapping(struct device *dev) { - int ret; - - if (iommu_dummy(dev)) - return false; - - ret = identity_mapping(dev); - if (ret) { - u64 dma_mask = *dev->dma_mask; - - if (dev->coherent_dma_mask && dev->coherent_dma_mask < dma_mask) - dma_mask = dev->coherent_dma_mask; - - if (dma_mask >= dma_direct_get_required_mask(dev)) - return false; - - /* - * 32 bit DMA is removed from si_domain and fall back to - * non-identity mapping.
- */ - dmar_remove_one_dev_info(dev); - ret = iommu_request_dma_domain_for_dev(dev); - if (ret) { - struct iommu_domain *domain; - struct dmar_domain *dmar_domain; - - domain = iommu_get_domain_for_dev(dev); - if (domain) { - dmar_domain = to_dmar_domain(domain); - dmar_domain->flags |= DOMAIN_FLAG_LOSE_CHILDREN; - } - dmar_remove_one_dev_info(dev); - get_private_domain_for_dev(dev); - } - - dev_info(dev, "32bit DMA uses non-identity mapping\n"); - } - - return true; + return iommu_dummy(dev) || identity_mapping(dev); } static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr, @@ -3568,20 +3531,22 @@ static dma_addr_t intel_map_page(struct device *dev, struct page *page, enum dma_data_direction dir, unsigned long attrs) { - if (iommu_need_mapping(dev)) - return __intel_map_single(dev, page_to_phys(page) + offset, - size, dir, *dev->dma_mask); - return dma_direct_map_page(dev, page, offset, size, dir, attrs); + if (iommu_no_mapping(dev)) + return dma_direct_map_page(dev, page, offset, size, dir, attrs); + + return __intel_map_single(dev, page_to_phys(page) + offset, size, dir, + *dev->dma_mask); } static dma_addr_t intel_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { - if (iommu_need_mapping(dev)) - return __intel_map_single(dev, phys_addr, size, dir, - *dev->dma_mask); - return dma_direct_map_resource(dev, phys_addr, size, dir, attrs); + if (iommu_no_mapping(dev)) + return dma_direct_map_resource(dev, phys_addr, size, dir, + attrs); + + return __intel_map_single(dev, phys_addr, size, dir, *dev->dma_mask); } static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size) @@ -3632,16 +3597,16 @@ static void intel_unmap_page(struct device *dev, dma_addr_t dev_addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { - if (iommu_need_mapping(dev)) - intel_unmap(dev, dev_addr, size); - else + if (iommu_no_mapping(dev)) dma_direct_unmap_page(dev, dev_addr, size, dir, attrs); + else + intel_unmap(dev, dev_addr, size); } static void intel_unmap_resource(struct device *dev, dma_addr_t dev_addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { - if (iommu_need_mapping(dev)) + if (!iommu_no_mapping(dev)) intel_unmap(dev, dev_addr, size); } @@ -3652,7 +3617,7 @@ static void *intel_alloc_coherent(struct device *dev, size_t size, struct page *page = NULL; int order; - if (!iommu_need_mapping(dev)) + if (iommu_no_mapping(dev)) return dma_direct_alloc(dev, size, dma_handle, flags, attrs); size = PAGE_ALIGN(size); @@ -3688,7 +3653,7 @@ static void intel_free_coherent(struct device *dev, size_t size, void *vaddr, int order; struct page *page = virt_to_page(vaddr); - if (!iommu_need_mapping(dev)) + if (iommu_no_mapping(dev)) return dma_direct_free(dev, size, vaddr, dma_handle, attrs); size = PAGE_ALIGN(size); @@ -3708,7 +3673,7 @@ static void intel_unmap_sg(struct device *dev, struct scatterlist *sglist, struct scatterlist *sg; int i; - if (!iommu_need_mapping(dev)) + if (iommu_no_mapping(dev)) return dma_direct_unmap_sg(dev, sglist, nelems, dir, attrs); for_each_sg(sglist, sg, nelems, i) { @@ -3734,7 +3699,7 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nele struct intel_iommu *iommu; BUG_ON(dir == DMA_NONE); - if (!iommu_need_mapping(dev)) + if (iommu_no_mapping(dev)) return dma_direct_map_sg(dev, sglist, nelems, dir, attrs); domain = deferred_attach_domain(dev); @@ -3782,7 +3747,7 @@ static int intel_map_sg(struct device *dev, struct 
scatterlist *sglist, int nele static u64 intel_get_required_mask(struct device *dev) { - if (!iommu_need_mapping(dev)) + if (iommu_no_mapping(dev)) return dma_direct_get_required_mask(dev); return DMA_BIT_MASK(32); } @@ -5618,9 +5583,13 @@ static int intel_iommu_add_device(struct device *dev) struct iommu_domain *domain; struct intel_iommu *iommu; struct iommu_group *group; + u64 dma_mask = *dev->dma_mask; u8 bus, devfn; int ret; + if (dev->coherent_dma_mask && dev->coherent_dma_mask < dma_mask) + dma_mask = dev->coherent_dma_mask; + iommu = device_to_iommu(dev, &bus, &devfn); if (!iommu) return -ENODEV; @@ -5640,7 +5609,12 @@ static int intel_iommu_add_device(struct device *dev) domain = iommu_get_domain_for_dev(dev); dmar_domain = to_dmar_domain(domain); if (domain->type == IOMMU_DOMAIN_DMA) { - if (device_def_domain_type(dev) == IOMMU_DOMAIN_IDENTITY) { + /* + * We check dma_mask >= dma_get_required_mask(dev) because + * 32 bit DMA falls back to non-identity mapping. + */ + if (device_def_domain_type(dev) == IOMMU_DOMAIN_IDENTITY && + dma_mask >= dma_get_required_mask(dev)) { ret = iommu_request_dm_for_dev(dev); if (ret) { dmar_remove_one_dev_info(dev);
From patchwork Sat Dec 21 15:03:54 2019
X-Patchwork-Submitter: Tom Murphy
X-Patchwork-Id: 11307013
From: Tom Murphy
To: iommu@lists.linux-foundation.org
Subject: [PATCH 2/8] iommu/vt-d: Use default dma_direct_* mapping functions for direct mapped devices
Date: Sat, 21 Dec 2019 15:03:54 +0000
Message-Id: <20191221150402.13868-3-murphyt7@tcd.ie>
In-Reply-To: <20191221150402.13868-1-murphyt7@tcd.ie>
References: <20191221150402.13868-1-murphyt7@tcd.ie>

We should only assign intel_dma_ops to devices which will actually use the IOMMU, and let the default dma_direct_* fallback functions handle all other devices. This doesn't change any behaviour; it just uses the generic implementations for direct mapped devices rather than Intel-specific ones.
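The idea, restated: rather than installing intel_dma_ops globally through dma_ops and having every DMA call check iommu_no_mapping(), the ops are chosen per device once its default domain is known. A minimal sketch of that decision follows; the helper name is invented for illustration only, since in the diff below the assignments are made inline in intel_iommu_add_device() and cleared in intel_iommu_remove_device():

	/* Hypothetical helper, for illustration only. */
	static void intel_assign_dma_ops(struct device *dev, struct iommu_domain *domain)
	{
		if (domain->type == IOMMU_DOMAIN_DMA)
			dev->dma_ops = &intel_dma_ops;	/* DMA API calls go through the IOMMU */
		else
			set_dma_ops(dev, NULL);		/* identity mapped: fall back to dma_direct_* */
	}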
Signed-off-by: Tom Murphy --- drivers/iommu/intel-iommu.c | 52 +++++-------------------------------- 1 file changed, 6 insertions(+), 46 deletions(-) diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c index c1ea66467918..64b1a9793daa 100644 --- a/drivers/iommu/intel-iommu.c +++ b/drivers/iommu/intel-iommu.c @@ -2794,17 +2794,6 @@ static int __init si_domain_init(int hw) return 0; } -static int identity_mapping(struct device *dev) -{ - struct device_domain_info *info; - - info = dev->archdata.iommu; - if (info && info != DUMMY_DEVICE_DOMAIN_INFO && info != DEFER_DEVICE_DOMAIN_INFO) - return (info->domain == si_domain); - - return 0; -} - static int domain_add_dev_info(struct dmar_domain *domain, struct device *dev) { struct dmar_domain *ndomain; @@ -3461,12 +3450,6 @@ static struct dmar_domain *get_private_domain_for_dev(struct device *dev) return domain; } -/* Check if the dev needs to go through non-identity map and unmap process.*/ -static bool iommu_no_mapping(struct device *dev) -{ - return iommu_dummy(dev) || identity_mapping(dev); -} - static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr, size_t size, int dir, u64 dma_mask) { @@ -3531,9 +3514,6 @@ static dma_addr_t intel_map_page(struct device *dev, struct page *page, enum dma_data_direction dir, unsigned long attrs) { - if (iommu_no_mapping(dev)) - return dma_direct_map_page(dev, page, offset, size, dir, attrs); - return __intel_map_single(dev, page_to_phys(page) + offset, size, dir, *dev->dma_mask); } @@ -3542,10 +3522,6 @@ static dma_addr_t intel_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { - if (iommu_no_mapping(dev)) - return dma_direct_map_resource(dev, phys_addr, size, dir, - attrs); - return __intel_map_single(dev, phys_addr, size, dir, *dev->dma_mask); } @@ -3597,17 +3573,13 @@ static void intel_unmap_page(struct device *dev, dma_addr_t dev_addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { - if (iommu_no_mapping(dev)) - dma_direct_unmap_page(dev, dev_addr, size, dir, attrs); - else - intel_unmap(dev, dev_addr, size); + intel_unmap(dev, dev_addr, size); } static void intel_unmap_resource(struct device *dev, dma_addr_t dev_addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { - if (!iommu_no_mapping(dev)) - intel_unmap(dev, dev_addr, size); + intel_unmap(dev, dev_addr, size); } static void *intel_alloc_coherent(struct device *dev, size_t size, @@ -3617,9 +3589,6 @@ static void *intel_alloc_coherent(struct device *dev, size_t size, struct page *page = NULL; int order; - if (iommu_no_mapping(dev)) - return dma_direct_alloc(dev, size, dma_handle, flags, attrs); - size = PAGE_ALIGN(size); order = get_order(size); @@ -3653,9 +3622,6 @@ static void intel_free_coherent(struct device *dev, size_t size, void *vaddr, int order; struct page *page = virt_to_page(vaddr); - if (iommu_no_mapping(dev)) - return dma_direct_free(dev, size, vaddr, dma_handle, attrs); - size = PAGE_ALIGN(size); order = get_order(size); @@ -3673,9 +3639,6 @@ static void intel_unmap_sg(struct device *dev, struct scatterlist *sglist, struct scatterlist *sg; int i; - if (iommu_no_mapping(dev)) - return dma_direct_unmap_sg(dev, sglist, nelems, dir, attrs); - for_each_sg(sglist, sg, nelems, i) { nrpages += aligned_nrpages(sg_dma_address(sg), sg_dma_len(sg)); } @@ -3699,8 +3662,6 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nele struct intel_iommu *iommu; BUG_ON(dir == DMA_NONE); - 
if (iommu_no_mapping(dev)) - return dma_direct_map_sg(dev, sglist, nelems, dir, attrs); domain = deferred_attach_domain(dev); if (!domain) @@ -3747,8 +3708,6 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nele static u64 intel_get_required_mask(struct device *dev) { - if (iommu_no_mapping(dev)) - return dma_direct_get_required_mask(dev); return DMA_BIT_MASK(32); } @@ -5014,7 +4973,6 @@ int __init intel_iommu_init(void) if (!has_untrusted_dev() || intel_no_bounce) swiotlb = 0; #endif - dma_ops = &intel_dma_ops; init_iommu_pm_ops(); @@ -5623,6 +5581,8 @@ static int intel_iommu_add_device(struct device *dev) dev_info(dev, "Device uses a private identity domain.\n"); } + } else { + dev->dma_ops = &intel_dma_ops; } } else { if (device_def_domain_type(dev) == IOMMU_DOMAIN_DMA) { @@ -5639,6 +5599,7 @@ static int intel_iommu_add_device(struct device *dev) dev_info(dev, "Device uses a private dma domain.\n"); } + dev->dma_ops = &intel_dma_ops; } } @@ -5665,8 +5626,7 @@ static void intel_iommu_remove_device(struct device *dev) iommu_device_unlink(&iommu->iommu, dev); - if (device_needs_bounce(dev)) - set_dma_ops(dev, NULL); + set_dma_ops(dev, NULL); } static void intel_iommu_get_resv_regions(struct device *device,
From patchwork Sat Dec 21 15:03:55 2019
X-Patchwork-Submitter: Tom Murphy
X-Patchwork-Id: 11307015
From: Tom Murphy
To: iommu@lists.linux-foundation.org
Subject: [PATCH 3/8] iommu/vt-d: Remove IOVA handling code from non-dma_ops path
Date: Sat, 21 Dec 2019 15:03:55 +0000
Message-Id: <20191221150402.13868-4-murphyt7@tcd.ie>
In-Reply-To: <20191221150402.13868-1-murphyt7@tcd.ie>
References: <20191221150402.13868-1-murphyt7@tcd.ie>

Remove all IOVA handling code from the non-dma_ops path in the Intel IOMMU driver. There's no need for the non-dma_ops path to keep track of IOVAs: the whole point of that path is that IOVA allocation is handled separately by the caller, so the IOVA handling code removed in this patch serves no purpose.
Signed-off-by: Tom Murphy --- drivers/iommu/intel-iommu.c | 89 ++++++++++++++----------------------- 1 file changed, 33 insertions(+), 56 deletions(-) diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c index 64b1a9793daa..8d72ea0fb843 100644 --- a/drivers/iommu/intel-iommu.c +++ b/drivers/iommu/intel-iommu.c @@ -1908,7 +1908,8 @@ static void domain_exit(struct dmar_domain *domain) domain_remove_dev_info(domain); /* destroy iovas */ - put_iova_domain(&domain->iovad); + if (domain->domain.type == IOMMU_DOMAIN_DMA) + put_iova_domain(&domain->iovad); if (domain->pgd) { struct page *freelist; @@ -2671,19 +2672,9 @@ static struct dmar_domain *set_domain_for_dev(struct device *dev, } static int iommu_domain_identity_map(struct dmar_domain *domain, - unsigned long long start, - unsigned long long end) + unsigned long first_vpfn, + unsigned long last_vpfn) { - unsigned long first_vpfn = start >> VTD_PAGE_SHIFT; - unsigned long last_vpfn = end >> VTD_PAGE_SHIFT; - - if (!reserve_iova(&domain->iovad, dma_to_mm_pfn(first_vpfn), - dma_to_mm_pfn(last_vpfn))) { - pr_err("Reserving iova failed\n"); - return -ENOMEM; - } - - pr_debug("Mapping reserved region %llx-%llx\n", start, end); /* * RMRR range might have overlap with physical memory range, * clear it first @@ -2760,7 +2751,8 @@ static int __init si_domain_init(int hw) for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) { ret = iommu_domain_identity_map(si_domain, - PFN_PHYS(start_pfn), PFN_PHYS(end_pfn)); + mm_to_dma_pfn(start_pfn), + mm_to_dma_pfn(end_pfn)); if (ret) return ret; } @@ -4593,58 +4585,37 @@ static int intel_iommu_memory_notifier(struct notifier_block *nb, unsigned long val, void *v) { struct memory_notify *mhp = v; - unsigned long long start, end; - unsigned long start_vpfn, last_vpfn; + unsigned long start_vpfn = mm_to_dma_pfn(mhp->start_pfn); + unsigned long last_vpfn = mm_to_dma_pfn(mhp->start_pfn + + mhp->nr_pages - 1); switch (val) { case MEM_GOING_ONLINE: - start = mhp->start_pfn << PAGE_SHIFT; - end = ((mhp->start_pfn + mhp->nr_pages) << PAGE_SHIFT) - 1; - if (iommu_domain_identity_map(si_domain, start, end)) { - pr_warn("Failed to build identity map for [%llx-%llx]\n", - start, end); + if (iommu_domain_identity_map(si_domain, start_vpfn, + last_vpfn)) { + pr_warn("Failed to build identity map for [%lx-%lx]\n", + start_vpfn, last_vpfn); return NOTIFY_BAD; } break; case MEM_OFFLINE: case MEM_CANCEL_ONLINE: - start_vpfn = mm_to_dma_pfn(mhp->start_pfn); - last_vpfn = mm_to_dma_pfn(mhp->start_pfn + mhp->nr_pages - 1); - while (start_vpfn <= last_vpfn) { - struct iova *iova; + { struct dmar_drhd_unit *drhd; struct intel_iommu *iommu; struct page *freelist; - iova = find_iova(&si_domain->iovad, start_vpfn); - if (iova == NULL) { - pr_debug("Failed get IOVA for PFN %lx\n", - start_vpfn); - break; - } - - iova = split_and_remove_iova(&si_domain->iovad, iova, - start_vpfn, last_vpfn); - if (iova == NULL) { - pr_warn("Failed to split IOVA PFN [%lx-%lx]\n", - start_vpfn, last_vpfn); - return NOTIFY_BAD; - } - - freelist = domain_unmap(si_domain, iova->pfn_lo, - iova->pfn_hi); + freelist = domain_unmap(si_domain, start_vpfn, + last_vpfn); rcu_read_lock(); for_each_active_iommu(iommu, drhd) iommu_flush_iotlb_psi(iommu, si_domain, - iova->pfn_lo, iova_size(iova), + start_vpfn, mhp->nr_pages, !freelist, 0); rcu_read_unlock(); dma_free_pagelist(freelist); - - start_vpfn = iova->pfn_hi + 1; - free_iova_mem(iova); } break; } @@ -4672,8 +4643,9 @@ static void free_all_cpu_cached_iovas(unsigned int cpu) for (did = 0; did 
< cap_ndoms(iommu->cap); did++) { domain = get_iommu_domain(iommu, (u16)did); - if (!domain) + if (!domain || domain->domain.type != IOMMU_DOMAIN_DMA) continue; + free_cpu_cached_iovas(cpu, &domain->iovad); } } @@ -5095,9 +5067,6 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width) { int adjust_width; - init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN); - domain_reserve_special_ranges(domain); - /* calculate AGAW */ domain->gaw = guest_width; adjust_width = guestwidth_to_adjustwidth(guest_width); @@ -5116,6 +5085,18 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width) return 0; } +static void intel_init_iova_domain(struct dmar_domain *dmar_domain) +{ + init_iova_domain(&dmar_domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN); + copy_reserved_iova(&reserved_iova_list, &dmar_domain->iovad); + + if (init_iova_flush_queue(&dmar_domain->iovad, iommu_flush_iova, + iova_entry_free)) { + pr_warn("iova flush queue initialization failed\n"); + intel_iommu_strict = 1; + } +} + static struct iommu_domain *intel_iommu_domain_alloc(unsigned type) { struct dmar_domain *dmar_domain; @@ -5136,12 +5117,8 @@ static struct iommu_domain *intel_iommu_domain_alloc(unsigned type) return NULL; } - if (type == IOMMU_DOMAIN_DMA && - init_iova_flush_queue(&dmar_domain->iovad, - iommu_flush_iova, iova_entry_free)) { - pr_warn("iova flush queue initialization failed\n"); - intel_iommu_strict = 1; - } + if (type == IOMMU_DOMAIN_DMA) + intel_init_iova_domain(dmar_domain); domain_update_iommu_cap(dmar_domain);
From patchwork Sat Dec 21 15:03:56 2019
X-Patchwork-Submitter: Tom Murphy
X-Patchwork-Id: 11307027
From: Tom Murphy
To: iommu@lists.linux-foundation.org
Subject: [PATCH 4/8] iommu: Handle freelists when using deferred flushing in iommu drivers
Date: Sat, 21 Dec 2019 15:03:56 +0000
Message-Id: <20191221150402.13868-5-murphyt7@tcd.ie>
In-Reply-To: <20191221150402.13868-1-murphyt7@tcd.ie>
References: <20191221150402.13868-1-murphyt7@tcd.ie>

Allow iommu_unmap_fast() to return the newly freed page table pages, and pass the freelist to queue_iova() in the dma-iommu ops path. This is useful for IOMMU drivers (in this case the Intel IOMMU driver) which need to wait for the IOTLB to be flushed before newly unmapped page table pages can be freed. This way we can still batch IOTLB flush operations while handling the freelists.
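Concretely, on the dma-iommu side the unmapped page-table pages are chained through page->freelist, the head of the chain is stashed in the IOVA flush-queue entry, and a new entry destructor releases the chain only after the deferred IOTLB flush has completed. This is essentially the iommu_dma_entry_dtor() added to drivers/iommu/dma-iommu.c later in this patch, repeated here in isolation for clarity:

	static void iommu_dma_entry_dtor(unsigned long data)
	{
		struct page *freelist = (struct page *)data;

		/* Walk the page->freelist chain and release each page-table page. */
		while (freelist != NULL) {
			unsigned long p = (unsigned long)page_address(freelist);

			freelist = freelist->freelist;
			free_page(p);
		}
	}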
Signed-off-by: Tom Murphy --- drivers/iommu/amd_iommu.c | 14 ++++++++- drivers/iommu/arm-smmu-v3.c | 3 +- drivers/iommu/arm-smmu.c | 3 +- drivers/iommu/dma-iommu.c | 45 ++++++++++++++++++++++------- drivers/iommu/exynos-iommu.c | 3 +- drivers/iommu/intel-iommu.c | 51 +++++++++++++++++++++------------ drivers/iommu/iommu.c | 29 ++++++++++++++----- drivers/iommu/ipmmu-vmsa.c | 3 +- drivers/iommu/msm_iommu.c | 3 +- drivers/iommu/mtk_iommu.c | 3 +- drivers/iommu/mtk_iommu_v1.c | 3 +- drivers/iommu/omap-iommu.c | 3 +- drivers/iommu/qcom_iommu.c | 3 +- drivers/iommu/rockchip-iommu.c | 3 +- drivers/iommu/s390-iommu.c | 3 +- drivers/iommu/tegra-gart.c | 3 +- drivers/iommu/tegra-smmu.c | 3 +- drivers/iommu/virtio-iommu.c | 3 +- drivers/vfio/vfio_iommu_type1.c | 2 +- include/linux/iommu.h | 25 ++++++++++++---- 20 files changed, 151 insertions(+), 57 deletions(-) diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c index bd25674ee4db..e8a4c0842624 100644 --- a/drivers/iommu/amd_iommu.c +++ b/drivers/iommu/amd_iommu.c @@ -2542,7 +2542,8 @@ static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova, static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova, size_t page_size, - struct iommu_iotlb_gather *gather) + struct iommu_iotlb_gather *gather, + struct page **freelist) { struct protection_domain *domain = to_pdomain(dom); @@ -2668,6 +2669,16 @@ static void amd_iommu_flush_iotlb_all(struct iommu_domain *domain) spin_unlock_irqrestore(&dom->lock, flags); } +static void amd_iommu_flush_iotlb_range(struct iommu_domain *domain, + unsigned long iova, size_t size, + struct page *freelist) +{ + struct protection_domain *dom = to_pdomain(domain); + + domain_flush_pages(dom, iova, size); + domain_flush_complete(dom); +} + static void amd_iommu_iotlb_sync(struct iommu_domain *domain, struct iommu_iotlb_gather *gather) { @@ -2692,6 +2703,7 @@ const struct iommu_ops amd_iommu_ops = { .is_attach_deferred = amd_iommu_is_attach_deferred, .pgsize_bitmap = AMD_IOMMU_PGSIZES, .flush_iotlb_all = amd_iommu_flush_iotlb_all, + .flush_iotlb_range = amd_iommu_flush_iotlb_range, .iotlb_sync = amd_iommu_iotlb_sync, }; diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c index effe72eb89e7..a27d4bf4492c 100644 --- a/drivers/iommu/arm-smmu-v3.c +++ b/drivers/iommu/arm-smmu-v3.c @@ -2459,7 +2459,8 @@ static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova, } static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova, - size_t size, struct iommu_iotlb_gather *gather) + size_t size, struct iommu_iotlb_gather *gather, + struct page **freelist) { struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops; diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c index 4f1a350d9529..ea1ab3387a07 100644 --- a/drivers/iommu/arm-smmu.c +++ b/drivers/iommu/arm-smmu.c @@ -1205,7 +1205,8 @@ static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova, } static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova, - size_t size, struct iommu_iotlb_gather *gather) + size_t size, struct iommu_iotlb_gather *gather, + struct page **freelist) { struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops; struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu; diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 0cc702a70a96..df28facdfb8b 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -50,6 +50,19 @@ struct 
iommu_dma_cookie { struct iommu_domain *fq_domain; }; +static void iommu_dma_entry_dtor(unsigned long data) +{ + struct page *freelist = (struct page *)data; + + while (freelist != NULL) { + unsigned long p = (unsigned long)page_address(freelist); + + freelist = freelist->freelist; + free_page(p); + } +} + + static inline size_t cookie_msi_granule(struct iommu_dma_cookie *cookie) { if (cookie->type == IOMMU_DMA_IOVA_COOKIE) @@ -345,7 +358,8 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base, if (!cookie->fq_domain && !iommu_domain_get_attr(domain, DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE, &attr) && attr) { cookie->fq_domain = domain; - init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all, NULL); + init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all, + iommu_dma_entry_dtor); } if (!dev) @@ -439,7 +453,7 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain, } static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie, - dma_addr_t iova, size_t size) + dma_addr_t iova, size_t size, struct page *freelist) { struct iova_domain *iovad = &cookie->iovad; @@ -448,7 +462,8 @@ static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie, cookie->msi_iova -= size; else if (cookie->fq_domain) /* non-strict mode */ queue_iova(iovad, iova_pfn(iovad, iova), - size >> iova_shift(iovad), 0); + size >> iova_shift(iovad), + (unsigned long) freelist); else free_iova_fast(iovad, iova_pfn(iovad, iova), size >> iova_shift(iovad)); @@ -462,18 +477,26 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr, struct iova_domain *iovad = &cookie->iovad; size_t iova_off = iova_offset(iovad, dma_addr); struct iommu_iotlb_gather iotlb_gather; + struct page *freelist = NULL; size_t unmapped; dma_addr -= iova_off; size = iova_align(iovad, size + iova_off); iommu_iotlb_gather_init(&iotlb_gather); - unmapped = iommu_unmap_fast(domain, dma_addr, size, &iotlb_gather); + unmapped = iommu_unmap_fast(domain, dma_addr, size, &iotlb_gather, + &freelist); WARN_ON(unmapped != size); - if (!cookie->fq_domain) - iommu_tlb_sync(domain, &iotlb_gather); - iommu_dma_free_iova(cookie, dma_addr, size); + if (!cookie->fq_domain) { + if (domain->ops->flush_iotlb_range) + domain->ops->flush_iotlb_range(domain, dma_addr, size, + freelist); + else + iommu_tlb_sync(domain, &iotlb_gather); + } + + iommu_dma_free_iova(cookie, dma_addr, size, freelist); } static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys, @@ -495,7 +518,7 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys, return DMA_MAPPING_ERROR; if (iommu_map_atomic(domain, iova, phys - iova_off, size, prot)) { - iommu_dma_free_iova(cookie, iova, size); + iommu_dma_free_iova(cookie, iova, size, NULL); return DMA_MAPPING_ERROR; } return iova + iova_off; @@ -650,7 +673,7 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size, out_free_sg: sg_free_table(&sgt); out_free_iova: - iommu_dma_free_iova(cookie, iova, size); + iommu_dma_free_iova(cookie, iova, size, NULL); out_free_pages: __iommu_dma_free_pages(pages, count); return NULL; @@ -901,7 +924,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, return __finalise_sg(dev, sg, nents, iova); out_free_iova: - iommu_dma_free_iova(cookie, iova, iova_len); + iommu_dma_free_iova(cookie, iova, iova_len, NULL); out_restore_sg: __invalidate_sg(sg, nents); return 0; @@ -1194,7 +1217,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev, return msi_page; out_free_iova: - 
iommu_dma_free_iova(cookie, iova, size); + iommu_dma_free_iova(cookie, iova, size, NULL); out_free_page: kfree(msi_page); return NULL; diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c index 186ff5cc975c..e6456eb8ac4d 100644 --- a/drivers/iommu/exynos-iommu.c +++ b/drivers/iommu/exynos-iommu.c @@ -1129,7 +1129,8 @@ static void exynos_iommu_tlb_invalidate_entry(struct exynos_iommu_domain *domain static size_t exynos_iommu_unmap(struct iommu_domain *iommu_domain, unsigned long l_iova, size_t size, - struct iommu_iotlb_gather *gather) + struct iommu_iotlb_gather *gather, + struct page **freelist) { struct exynos_iommu_domain *domain = to_exynos_domain(iommu_domain); sysmmu_iova_t iova = (sysmmu_iova_t)l_iova; diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c index 8d72ea0fb843..675ca2aa6e20 100644 --- a/drivers/iommu/intel-iommu.c +++ b/drivers/iommu/intel-iommu.c @@ -1145,9 +1145,9 @@ static struct page *dma_pte_clear_level(struct dmar_domain *domain, int level, pages can only be freed after the IOTLB flush has been done. */ static struct page *domain_unmap(struct dmar_domain *domain, unsigned long start_pfn, - unsigned long last_pfn) + unsigned long last_pfn, + struct page *freelist) { - struct page *freelist; BUG_ON(!domain_pfn_supported(domain, start_pfn)); BUG_ON(!domain_pfn_supported(domain, last_pfn)); @@ -1155,7 +1155,8 @@ static struct page *domain_unmap(struct dmar_domain *domain, /* we don't need lock here; nobody else touches the iova range */ freelist = dma_pte_clear_level(domain, agaw_to_level(domain->agaw), - domain->pgd, 0, start_pfn, last_pfn, NULL); + domain->pgd, 0, start_pfn, last_pfn, + freelist); /* free pgd */ if (start_pfn == 0 && last_pfn == DOMAIN_MAX_PFN(domain->gaw)) { @@ -1914,7 +1915,8 @@ static void domain_exit(struct dmar_domain *domain) if (domain->pgd) { struct page *freelist; - freelist = domain_unmap(domain, 0, DOMAIN_MAX_PFN(domain->gaw)); + freelist = domain_unmap(domain, 0, DOMAIN_MAX_PFN(domain->gaw), + NULL); dma_free_pagelist(freelist); } @@ -3541,7 +3543,7 @@ static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size) if (dev_is_pci(dev)) pdev = to_pci_dev(dev); - freelist = domain_unmap(domain, start_pfn, last_pfn); + freelist = domain_unmap(domain, start_pfn, last_pfn, NULL); if (intel_iommu_strict || (pdev && pdev->untrusted) || !has_iova_flush_queue(&domain->iovad)) { iommu_flush_iotlb_psi(iommu, domain, start_pfn, @@ -4607,7 +4609,7 @@ static int intel_iommu_memory_notifier(struct notifier_block *nb, struct page *freelist; freelist = domain_unmap(si_domain, start_vpfn, - last_vpfn); + last_vpfn, NULL); rcu_read_lock(); for_each_active_iommu(iommu, drhd) @@ -5412,13 +5414,12 @@ static int intel_iommu_map(struct iommu_domain *domain, static size_t intel_iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size, - struct iommu_iotlb_gather *gather) + struct iommu_iotlb_gather *gather, + struct page **freelist) { struct dmar_domain *dmar_domain = to_dmar_domain(domain); - struct page *freelist = NULL; unsigned long start_pfn, last_pfn; - unsigned int npages; - int iommu_id, level = 0; + int level = 0; /* Cope with horrid API which requires us to unmap more than the size argument if it happens to be a large-page mapping. 
*/ @@ -5432,20 +5433,33 @@ static size_t intel_iommu_unmap(struct iommu_domain *domain, start_pfn = iova >> VTD_PAGE_SHIFT; last_pfn = (iova + size - 1) >> VTD_PAGE_SHIFT; - freelist = domain_unmap(dmar_domain, start_pfn, last_pfn); + *freelist = domain_unmap(dmar_domain, start_pfn, last_pfn, *freelist); - npages = last_pfn - start_pfn + 1; + if (dmar_domain->max_addr == iova + size) + dmar_domain->max_addr = iova; + + return size; +} + +static void intel_iommu_flush_iotlb_range(struct iommu_domain *domain, + unsigned long iova, size_t size, + struct page *freelist) +{ + struct dmar_domain *dmar_domain = to_dmar_domain(domain); + unsigned long start_pfn, last_pfn; + unsigned long iova_pfn = IOVA_PFN(iova); + unsigned long nrpages; + int iommu_id; + + nrpages = aligned_nrpages(iova, size); + start_pfn = mm_to_dma_pfn(iova_pfn); + last_pfn = start_pfn + nrpages - 1; for_each_domain_iommu(iommu_id, dmar_domain) iommu_flush_iotlb_psi(g_iommus[iommu_id], dmar_domain, - start_pfn, npages, !freelist, 0); + start_pfn, nrpages, !freelist, 0); dma_free_pagelist(freelist); - - if (dmar_domain->max_addr == iova + size) - dmar_domain->max_addr = iova; - - return size; } static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain, @@ -5902,6 +5916,7 @@ const struct iommu_ops intel_iommu_ops = { .aux_get_pasid = intel_iommu_aux_get_pasid, .map = intel_iommu_map, .unmap = intel_iommu_unmap, + .flush_iotlb_range = intel_iommu_flush_iotlb_range, .iova_to_phys = intel_iommu_iova_to_phys, .add_device = intel_iommu_add_device, .remove_device = intel_iommu_remove_device, diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index db7bfd4f2d20..cec728f40d9c 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -667,7 +667,7 @@ static int iommu_group_create_direct_mappings(struct iommu_group *group, } - iommu_flush_tlb_all(domain); + iommu_flush_iotlb_all(domain); out: iommu_put_resv_regions(dev, &mappings); @@ -1961,11 +1961,13 @@ EXPORT_SYMBOL_GPL(iommu_map_atomic); static size_t __iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size, - struct iommu_iotlb_gather *iotlb_gather) + struct iommu_iotlb_gather *iotlb_gather, + struct page **freelist) { const struct iommu_ops *ops = domain->ops; size_t unmapped_page, unmapped = 0; unsigned long orig_iova = iova; + struct page *freelist_head = NULL; unsigned int min_pagesz; if (unlikely(ops->unmap == NULL || @@ -1998,7 +2000,8 @@ static size_t __iommu_unmap(struct iommu_domain *domain, while (unmapped < size) { size_t pgsize = iommu_pgsize(domain, iova, size - unmapped); - unmapped_page = ops->unmap(domain, iova, pgsize, iotlb_gather); + unmapped_page = ops->unmap(domain, iova, pgsize, iotlb_gather, + &freelist_head); if (!unmapped_page) break; @@ -2009,6 +2012,9 @@ static size_t __iommu_unmap(struct iommu_domain *domain, unmapped += unmapped_page; } + if (freelist) + *freelist = freelist_head; + trace_unmap(orig_iova, size, unmapped); return unmapped; } @@ -2016,12 +2022,20 @@ static size_t __iommu_unmap(struct iommu_domain *domain, size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size) { + const struct iommu_ops *ops = domain->ops; struct iommu_iotlb_gather iotlb_gather; + struct page *freelist; size_t ret; iommu_iotlb_gather_init(&iotlb_gather); - ret = __iommu_unmap(domain, iova, size, &iotlb_gather); - iommu_tlb_sync(domain, &iotlb_gather); + ret = __iommu_unmap(domain, iova, size, &iotlb_gather, + &freelist); + + if (ops->flush_iotlb_range) + ops->flush_iotlb_range(domain, iova, ret, + 
freelist); + else + iommu_tlb_sync(domain, &iotlb_gather); return ret; } @@ -2029,9 +2043,10 @@ EXPORT_SYMBOL_GPL(iommu_unmap); size_t iommu_unmap_fast(struct iommu_domain *domain, unsigned long iova, size_t size, - struct iommu_iotlb_gather *iotlb_gather) + struct iommu_iotlb_gather *iotlb_gather, + struct page **freelist) { - return __iommu_unmap(domain, iova, size, iotlb_gather); + return __iommu_unmap(domain, iova, size, iotlb_gather, freelist); } EXPORT_SYMBOL_GPL(iommu_unmap_fast); diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c index d02edd2751f3..63bbee653859 100644 --- a/drivers/iommu/ipmmu-vmsa.c +++ b/drivers/iommu/ipmmu-vmsa.c @@ -693,7 +693,8 @@ static int ipmmu_map(struct iommu_domain *io_domain, unsigned long iova, } static size_t ipmmu_unmap(struct iommu_domain *io_domain, unsigned long iova, - size_t size, struct iommu_iotlb_gather *gather) + size_t size, struct iommu_iotlb_gather *gather, + struct page **freelist) { struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain); diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c index 93f14bca26ee..66d1587ab714 100644 --- a/drivers/iommu/msm_iommu.c +++ b/drivers/iommu/msm_iommu.c @@ -518,7 +518,8 @@ static int msm_iommu_map(struct iommu_domain *domain, unsigned long iova, } static size_t msm_iommu_unmap(struct iommu_domain *domain, unsigned long iova, - size_t len, struct iommu_iotlb_gather *gather) + size_t len, struct iommu_iotlb_gather *gather, + struct page **freelist) { struct msm_priv *priv = to_msm_priv(domain); unsigned long flags; diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c index 6fc1f5ecf91e..6bd9f39bb259 100644 --- a/drivers/iommu/mtk_iommu.c +++ b/drivers/iommu/mtk_iommu.c @@ -402,7 +402,8 @@ static int mtk_iommu_map(struct iommu_domain *domain, unsigned long iova, static size_t mtk_iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size, - struct iommu_iotlb_gather *gather) + struct iommu_iotlb_gather *gather, + struct page **freelist) { struct mtk_iommu_domain *dom = to_mtk_domain(domain); diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c index e93b94ecac45..f94d225c3404 100644 --- a/drivers/iommu/mtk_iommu_v1.c +++ b/drivers/iommu/mtk_iommu_v1.c @@ -325,7 +325,8 @@ static int mtk_iommu_map(struct iommu_domain *domain, unsigned long iova, static size_t mtk_iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size, - struct iommu_iotlb_gather *gather) + struct iommu_iotlb_gather *gather, + struct page **freelist) { struct mtk_iommu_domain *dom = to_mtk_domain(domain); unsigned long flags; diff --git a/drivers/iommu/omap-iommu.c b/drivers/iommu/omap-iommu.c index be551cc34be4..c5e012267f2b 100644 --- a/drivers/iommu/omap-iommu.c +++ b/drivers/iommu/omap-iommu.c @@ -1383,7 +1383,8 @@ static int omap_iommu_map(struct iommu_domain *domain, unsigned long da, } static size_t omap_iommu_unmap(struct iommu_domain *domain, unsigned long da, - size_t size, struct iommu_iotlb_gather *gather) + size_t size, struct iommu_iotlb_gather *gather, + struct page **freelist) { struct omap_iommu_domain *omap_domain = to_omap_domain(domain); struct device *dev = omap_domain->dev; diff --git a/drivers/iommu/qcom_iommu.c b/drivers/iommu/qcom_iommu.c index 52f38292df5b..99ebf34e50be 100644 --- a/drivers/iommu/qcom_iommu.c +++ b/drivers/iommu/qcom_iommu.c @@ -440,7 +440,8 @@ static int qcom_iommu_map(struct iommu_domain *domain, unsigned long iova, } static size_t qcom_iommu_unmap(struct iommu_domain *domain, unsigned long 
iova, - size_t size, struct iommu_iotlb_gather *gather) + size_t size, struct iommu_iotlb_gather *gather, + struct page **freelist) { size_t ret; unsigned long flags; diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c index b33cdd5aad81..ec16e01c376a 100644 --- a/drivers/iommu/rockchip-iommu.c +++ b/drivers/iommu/rockchip-iommu.c @@ -795,7 +795,8 @@ static int rk_iommu_map(struct iommu_domain *domain, unsigned long _iova, } static size_t rk_iommu_unmap(struct iommu_domain *domain, unsigned long _iova, - size_t size, struct iommu_iotlb_gather *gather) + size_t size, struct iommu_iotlb_gather *gather, + struct page **freelist) { struct rk_iommu_domain *rk_domain = to_rk_domain(domain); unsigned long flags; diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c index 1137f3ddcb85..d69d7cf4dee3 100644 --- a/drivers/iommu/s390-iommu.c +++ b/drivers/iommu/s390-iommu.c @@ -315,7 +315,8 @@ static phys_addr_t s390_iommu_iova_to_phys(struct iommu_domain *domain, static size_t s390_iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size, - struct iommu_iotlb_gather *gather) + struct iommu_iotlb_gather *gather, + struct page **freelist) { struct s390_domain *s390_domain = to_s390_domain(domain); int flags = ZPCI_PTE_INVALID; diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c index 3fb7ba72507d..68e7eee9172f 100644 --- a/drivers/iommu/tegra-gart.c +++ b/drivers/iommu/tegra-gart.c @@ -207,7 +207,8 @@ static inline int __gart_iommu_unmap(struct gart_device *gart, } static size_t gart_iommu_unmap(struct iommu_domain *domain, unsigned long iova, - size_t bytes, struct iommu_iotlb_gather *gather) + size_t bytes, struct iommu_iotlb_gather *gather, + struct page **freelist) { struct gart_device *gart = gart_handle; int err; diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c index 63a147b623e6..0c5e5f2c3c7d 100644 --- a/drivers/iommu/tegra-smmu.c +++ b/drivers/iommu/tegra-smmu.c @@ -686,7 +686,8 @@ static int tegra_smmu_map(struct iommu_domain *domain, unsigned long iova, } static size_t tegra_smmu_unmap(struct iommu_domain *domain, unsigned long iova, - size_t size, struct iommu_iotlb_gather *gather) + size_t size, struct iommu_iotlb_gather *gather, + struct page **freelist) { struct tegra_smmu_as *as = to_smmu_as(domain); dma_addr_t pte_dma; diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c index 315c7cc4f99d..e74baab27b61 100644 --- a/drivers/iommu/virtio-iommu.c +++ b/drivers/iommu/virtio-iommu.c @@ -750,7 +750,8 @@ static int viommu_map(struct iommu_domain *domain, unsigned long iova, } static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova, - size_t size, struct iommu_iotlb_gather *gather) + size_t size, struct iommu_iotlb_gather *gather, + struct page *freelist) { int ret = 0; size_t unmapped; diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c index 2ada8e6cdb88..d24ea1181c03 100644 --- a/drivers/vfio/vfio_iommu_type1.c +++ b/drivers/vfio/vfio_iommu_type1.c @@ -685,7 +685,7 @@ static size_t unmap_unpin_fast(struct vfio_domain *domain, if (entry) { unmapped = iommu_unmap_fast(domain->domain, *iova, len, - iotlb_gather); + iotlb_gather, NULL); if (!unmapped) { kfree(entry); diff --git a/include/linux/iommu.h b/include/linux/iommu.h index f2223cbb5fd5..61cac25410b5 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -219,6 +219,7 @@ struct iommu_iotlb_gather { * @map: map a physically contiguous memory region to an iommu domain * 
@unmap: unmap a physically contiguous memory region from an iommu domain * @flush_iotlb_all: Synchronously flush all hardware TLBs for this domain + * @flush_iotlb_range: Flush given iova range of hardware TLBs for this domain * @iotlb_sync_map: Sync mappings created recently using @map to the hardware * @iotlb_sync: Flush all queued ranges from the hardware TLBs and empty flush * queue @@ -262,8 +263,12 @@ struct iommu_ops { int (*map)(struct iommu_domain *domain, unsigned long iova, phys_addr_t paddr, size_t size, int prot, gfp_t gfp); size_t (*unmap)(struct iommu_domain *domain, unsigned long iova, - size_t size, struct iommu_iotlb_gather *iotlb_gather); + size_t size, struct iommu_iotlb_gather *iotlb_gather, + struct page **freelist); void (*flush_iotlb_all)(struct iommu_domain *domain); + void (*flush_iotlb_range)(struct iommu_domain *domain, + unsigned long iova, size_t size, + struct page *freelist); void (*iotlb_sync_map)(struct iommu_domain *domain); void (*iotlb_sync)(struct iommu_domain *domain, struct iommu_iotlb_gather *iotlb_gather); @@ -444,7 +449,8 @@ extern size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size); extern size_t iommu_unmap_fast(struct iommu_domain *domain, unsigned long iova, size_t size, - struct iommu_iotlb_gather *iotlb_gather); + struct iommu_iotlb_gather *iotlb_gather, + struct page **freelist); extern size_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova, struct scatterlist *sg,unsigned int nents, int prot); extern size_t iommu_map_sg_atomic(struct iommu_domain *domain, @@ -518,12 +524,20 @@ extern void iommu_domain_window_disable(struct iommu_domain *domain, u32 wnd_nr) extern int report_iommu_fault(struct iommu_domain *domain, struct device *dev, unsigned long iova, int flags); -static inline void iommu_flush_tlb_all(struct iommu_domain *domain) +static inline void iommu_flush_iotlb_all(struct iommu_domain *domain) { if (domain->ops->flush_iotlb_all) domain->ops->flush_iotlb_all(domain); } +static inline void flush_iotlb_range(struct iommu_domain *domain, + unsigned long iova, size_t size, + struct page *freelist) +{ + if (domain->ops->flush_iotlb_range) + domain->ops->flush_iotlb_range(domain, iova, size, freelist); +} + static inline void iommu_tlb_sync(struct iommu_domain *domain, struct iommu_iotlb_gather *iotlb_gather) { @@ -699,7 +713,8 @@ static inline size_t iommu_unmap(struct iommu_domain *domain, static inline size_t iommu_unmap_fast(struct iommu_domain *domain, unsigned long iova, int gfp_order, - struct iommu_iotlb_gather *iotlb_gather) + struct iommu_iotlb_gather *iotlb_gather, + struct page **freelist) { return 0; } @@ -718,7 +733,7 @@ static inline size_t iommu_map_sg_atomic(struct iommu_domain *domain, return 0; } -static inline void iommu_flush_tlb_all(struct iommu_domain *domain) +static inline void iommu_flush_iotlb_all(struct iommu_domain *domain) { }
From patchwork Sat Dec 21 15:03:57 2019
X-Patchwork-Submitter: Tom Murphy
X-Patchwork-Id: 11307035
header.d=tcd-ie.20150623.gappssmtp.com header.i=@tcd-ie.20150623.gappssmtp.com header.b="WkrW4Y9g" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727190AbfLUPEw (ORCPT ); Sat, 21 Dec 2019 10:04:52 -0500 Received: from mail-ed1-f67.google.com ([209.85.208.67]:38355 "EHLO mail-ed1-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727450AbfLUPEr (ORCPT ); Sat, 21 Dec 2019 10:04:47 -0500 Received: by mail-ed1-f67.google.com with SMTP id i16so11383583edr.5 for ; Sat, 21 Dec 2019 07:04:47 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=tcd-ie.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=i4dEmM8P1MkKKnNa6NcbuVyKrLsO9ZvL1WqSkd7MOXA=; b=WkrW4Y9gFyG7Qyoell6pvhgdKX7JRReMnuuB9atJoZy04C/ONzngI9MJcZe1C52tLI oAbpeUorXN0OuO/Eg0uTEPtK43geQL68lw4o7Ot8H5XsWGNoa1eKf/zWorwqA2LubWhP ocpyZSMtNUbr1NWS3tmsbNFojfO0KFn4+WKore4fioT3aEQ4/J+J2uZcNnl0dckx4Wul NRwfEfsYHCfN/A/PJf0LwxVas5fBCAxc6UJkeMBPTkWvH4AkfCBijNzpnj6o2cB8Gozf 5sLHCOhpM/FGuPL/eZWe8ySg2l+HsNgX8TbOePQbTkrzEKNWd/Ih3KEJxTQ88SBthjQ7 3K5A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=i4dEmM8P1MkKKnNa6NcbuVyKrLsO9ZvL1WqSkd7MOXA=; b=CkQ+HwMVvvXMapWne8GRaYvu3lr/O7FfWijT2lYew+ynjqCKG288oW8tMCo1MyrEvR G0BMgEPdq/cFOVAaz44tHDxA6a4Gefu3Tukb9VKhKgIL9kDxtRbxMZMqCnXEQl3+YvXj gGG3AXxQdswbapznrkKQpqhVkm3pCLhjTWp6wIQkPm2a6zuu2Ksu41uB5DxxWz3eb/8Z lfnR1O1Sr5hcQWcPDClZUR40OVhe7wsvN8IJCfv1JGy7YMGhs50Tdzc+jcc/RHcipK9n VGfJU4o8F8MXgFVKEKsVVsZv74ci3AudieTENfJS26bhKf4mFD3jemXrHAMnKK59gCyR tTeA== X-Gm-Message-State: APjAAAUSImMrFbLmb0SmlvSul0xCF0uzohTKuqF4eU7zkYnODIMcFwQj MZ8fFqSb+BENXc2YFY5tN8Fjsg== X-Google-Smtp-Source: APXvYqzXACa8CmCe+lm4PqHAgWDF0JjnrRxrrR2y7kN6n1yqsWvuOaXwCLqKegFsZD4yWY1Pcx2X8g== X-Received: by 2002:a05:6402:2c3:: with SMTP id b3mr22084523edx.207.1576940686355; Sat, 21 Dec 2019 07:04:46 -0800 (PST) Received: from localhost.localdomain ([80.233.37.20]) by smtp.googlemail.com with ESMTPSA id u13sm1517639ejz.69.2019.12.21.07.04.42 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 21 Dec 2019 07:04:45 -0800 (PST) From: Tom Murphy To: iommu@lists.linux-foundation.org Cc: Tom Murphy , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , David Airlie , Daniel Vetter , Joerg Roedel , Will Deacon , Robin Murphy , Marek Szyprowski , Kukjin Kim , Krzysztof Kozlowski , David Woodhouse , Lu Baolu , Andy Gross , Bjorn Andersson , Matthias Brugger , Rob Clark , Heiko Stuebner , Gerald Schaefer , Thierry Reding , Jonathan Hunter , Jean-Philippe Brucker , Alex Williamson , Cornelia Huck , Eric Auger , Julien Grall , Marc Zyngier , Thomas Gleixner , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-samsung-soc@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mediatek@lists.infradead.org, linux-rockchip@lists.infradead.org, linux-s390@vger.kernel.org, linux-tegra@vger.kernel.org, virtualization@lists.linux-foundation.org, kvm@vger.kernel.org Subject: [PATCH 5/8] iommu: Add iommu_dma_free_cpu_cached_iovas function Date: Sat, 21 Dec 2019 15:03:57 +0000 Message-Id: <20191221150402.13868-6-murphyt7@tcd.ie> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20191221150402.13868-1-murphyt7@tcd.ie> References: <20191221150402.13868-1-murphyt7@tcd.ie> 
MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org to dma-iommu ops Add a iommu_dma_free_cpu_cached_iovas function to allow drivers which use the dma-iommu ops to free cached cpu iovas. Signed-off-by: Tom Murphy --- drivers/iommu/dma-iommu.c | 9 +++++++++ include/linux/dma-iommu.h | 3 +++ 2 files changed, 12 insertions(+) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index df28facdfb8b..4eac3cd35443 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -50,6 +50,15 @@ struct iommu_dma_cookie { struct iommu_domain *fq_domain; }; +void iommu_dma_free_cpu_cached_iovas(unsigned int cpu, + struct iommu_domain *domain) +{ + struct iommu_dma_cookie *cookie = domain->iova_cookie; + struct iova_domain *iovad = &cookie->iovad; + + free_cpu_cached_iovas(cpu, iovad); +} + static void iommu_dma_entry_dtor(unsigned long data) { struct page *freelist = (struct page *)data; diff --git a/include/linux/dma-iommu.h b/include/linux/dma-iommu.h index 2112f21f73d8..316d22a4a860 100644 --- a/include/linux/dma-iommu.h +++ b/include/linux/dma-iommu.h @@ -37,6 +37,9 @@ void iommu_dma_compose_msi_msg(struct msi_desc *desc, void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list); +void iommu_dma_free_cpu_cached_iovas(unsigned int cpu, + struct iommu_domain *domain); + #else /* CONFIG_IOMMU_DMA */ struct iommu_domain; From patchwork Sat Dec 21 15:03:58 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tom Murphy X-Patchwork-Id: 11307041 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3C0F213A4 for ; Sat, 21 Dec 2019 15:05:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 06CC521D7D for ; Sat, 21 Dec 2019 15:05:01 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=tcd-ie.20150623.gappssmtp.com header.i=@tcd-ie.20150623.gappssmtp.com header.b="1+ewonpp" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727476AbfLUPFA (ORCPT ); Sat, 21 Dec 2019 10:05:00 -0500 Received: from mail-ed1-f68.google.com ([209.85.208.68]:42953 "EHLO mail-ed1-f68.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727467AbfLUPE4 (ORCPT ); Sat, 21 Dec 2019 10:04:56 -0500 Received: by mail-ed1-f68.google.com with SMTP id e10so11389908edv.9 for ; Sat, 21 Dec 2019 07:04:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=tcd-ie.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=I0JykHmW6XFDV2Vp/ugaZ7Nn7DA+ujFyMXFyqOzSTdU=; b=1+ewonppiB9W8lYl08p7pr7gd0zz68oU69i75ATsgN+8WGxiWusi3bKC7mEUH2iUZT r5uuxiRFx6OdSJyL1vDTT0ELk3P59Rbywd2YUr6MPFBAqDg9BLH08FlJLPwUNE6DLLTL oBF6KagVp/BAfe3aaPxiclOamAqrOGT34nI75PzRaFRvXI3YHm865eNbfcU/BNc990SF 1jPGoBvfkIsGC/24dKPIs6QFcky/NgVLoTfYoW5PgQa7frPzYLKYkbDOR2yneEk5x2dH NoI0/BPadpvBuQht5nq6M38jRRJC31jTXLFsMRDmEvCnLzHv55aGt7qQ04zM16VkFEtn KxDQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=I0JykHmW6XFDV2Vp/ugaZ7Nn7DA+ujFyMXFyqOzSTdU=; 
b=M6d1/8yIrMHiaWs2JEV/G8Rg3LhYGMuT+ygG8KzF75TRh3Zp5wVcFjKZCJyABVBPZx 5Ubybxgc81LjebDf84SdBtQrxR36Qu+SAqr5KqNAF29JVf6yz4axYfZKRC7+pfscxJh5 97jfglM5Fltu3HWJ8CbFs/GejiQoL/UKnRb32ZTwZoQrviZYuNi1vRqS4A+Dr2wAJQWg iiG1l4edaF9iX+UhMDgT0SuI3hTR4UO+wk+eHIlX7rEsEwDKfJkR/gtIBEe3MNmExuhs TNsQwNFDADEt7pZq25ENBYvorVXTYw3xycddd5zXUzlkm8AzsYcr1REYJoikkqo4o/Lf IQFw== X-Gm-Message-State: APjAAAW5hn/4zoxIbilF5EXbp2083aBHomdBGww0t0oY6gxtBLb6RYde ZPgnWggSbz1xVfzEXPstEF+QEA== X-Google-Smtp-Source: APXvYqycgCWhv0uGKFS1lUiWbcIQuToFD3SFHqFOMFhJaeY3cSmbPrLeHRVjV6Yu+VQxlly42u07SA== X-Received: by 2002:a05:6402:221c:: with SMTP id cq28mr22032517edb.110.1576940693185; Sat, 21 Dec 2019 07:04:53 -0800 (PST) Received: from localhost.localdomain ([80.233.37.20]) by smtp.googlemail.com with ESMTPSA id u13sm1517639ejz.69.2019.12.21.07.04.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 21 Dec 2019 07:04:52 -0800 (PST) From: Tom Murphy To: iommu@lists.linux-foundation.org Cc: Tom Murphy , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , David Airlie , Daniel Vetter , Joerg Roedel , Will Deacon , Robin Murphy , Marek Szyprowski , Kukjin Kim , Krzysztof Kozlowski , David Woodhouse , Lu Baolu , Andy Gross , Bjorn Andersson , Matthias Brugger , Rob Clark , Heiko Stuebner , Gerald Schaefer , Thierry Reding , Jonathan Hunter , Jean-Philippe Brucker , Alex Williamson , Cornelia Huck , Eric Auger , Julien Grall , Marc Zyngier , Thomas Gleixner , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-samsung-soc@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mediatek@lists.infradead.org, linux-rockchip@lists.infradead.org, linux-s390@vger.kernel.org, linux-tegra@vger.kernel.org, virtualization@lists.linux-foundation.org, kvm@vger.kernel.org Subject: [PATCH 6/8] iommu: allow the dma-iommu api to use bounce buffers Date: Sat, 21 Dec 2019 15:03:58 +0000 Message-Id: <20191221150402.13868-7-murphyt7@tcd.ie> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20191221150402.13868-1-murphyt7@tcd.ie> References: <20191221150402.13868-1-murphyt7@tcd.ie> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Allow the dma-iommu api to use bounce buffers for untrusted devices. This is a copy of the intel bounce buffer code. 
Signed-off-by: Tom Murphy Reported-by: kbuild test robot --- drivers/iommu/dma-iommu.c | 93 ++++++++++++++++++++++++++++++++------- drivers/iommu/iommu.c | 10 +++++ include/linux/iommu.h | 9 +++- 3 files changed, 95 insertions(+), 17 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 4eac3cd35443..cf778db7d84d 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -20,9 +20,11 @@ #include #include #include +#include #include #include #include +#include struct iommu_dma_msi_page { struct list_head list; @@ -505,29 +507,89 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr, iommu_tlb_sync(domain, &iotlb_gather); } + iommu_dma_free_iova(cookie, dma_addr, size, freelist); } +static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr, + size_t size, enum dma_data_direction dir, + unsigned long attrs) +{ + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iommu_dma_cookie *cookie = domain->iova_cookie; + struct iova_domain *iovad = &cookie->iovad; + size_t iova_off = iova_offset(iovad, dma_addr); + size_t aligned_size = iova_align(iovad, size + iova_off); + phys_addr_t phys; + + phys = iommu_iova_to_phys(domain, dma_addr); + if (WARN_ON(!phys)) + return; + + __iommu_dma_unmap(dev, dma_addr, size); + +#ifdef CONFIG_SWIOTLB + if (unlikely(is_swiotlb_buffer(phys))) + swiotlb_tbl_unmap_single(dev, phys, size, + aligned_size, dir, attrs); +#endif +} + static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys, - size_t size, int prot, dma_addr_t dma_mask) + size_t org_size, dma_addr_t dma_mask, bool coherent, + enum dma_data_direction dir, unsigned long attrs) { + int prot = dma_info_to_prot(dir, coherent, attrs); struct iommu_domain *domain = iommu_get_dma_domain(dev); struct iommu_dma_cookie *cookie = domain->iova_cookie; struct iova_domain *iovad = &cookie->iovad; size_t iova_off = iova_offset(iovad, phys); + size_t aligned_size = iova_align(iovad, org_size + iova_off); dma_addr_t iova; if (unlikely(iommu_dma_deferred_attach(dev, domain))) return DMA_MAPPING_ERROR; - size = iova_align(iovad, size + iova_off); +#ifdef CONFIG_SWIOTLB + /* + * If both the physical buffer start address and size are + * page aligned, we don't need to use a bounce page. + */ + if (iommu_needs_bounce_buffer(dev) + && iova_offset(iovad, phys | org_size)) { + phys = swiotlb_tbl_map_single(dev, + __phys_to_dma(dev, io_tlb_start), + phys, org_size, aligned_size, dir, attrs); + + if (phys == DMA_MAPPING_ERROR) + return DMA_MAPPING_ERROR; + + /* Cleanup the padding area. 
*/ + void *padding_start = phys_to_virt(phys); + size_t padding_size = aligned_size; + + if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && + (dir == DMA_TO_DEVICE || + dir == DMA_BIDIRECTIONAL)) { + padding_start += org_size; + padding_size -= org_size; + } - iova = iommu_dma_alloc_iova(domain, size, dma_mask, dev); + memset(padding_start, 0, padding_size); + } +#endif + + iova = iommu_dma_alloc_iova(domain, aligned_size, dma_mask, dev); if (!iova) return DMA_MAPPING_ERROR; - if (iommu_map_atomic(domain, iova, phys - iova_off, size, prot)) { - iommu_dma_free_iova(cookie, iova, size, NULL); + if (iommu_map_atomic(domain, iova, phys - iova_off, aligned_size, + prot)) { + + if (unlikely(is_swiotlb_buffer(phys))) + swiotlb_tbl_unmap_single(dev, phys, aligned_size, + aligned_size, dir, attrs); + iommu_dma_free_iova(cookie, iova, aligned_size, NULL); return DMA_MAPPING_ERROR; } return iova + iova_off; @@ -761,10 +823,10 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page, { phys_addr_t phys = page_to_phys(page) + offset; bool coherent = dev_is_dma_coherent(dev); - int prot = dma_info_to_prot(dir, coherent, attrs); dma_addr_t dma_handle; - dma_handle = __iommu_dma_map(dev, phys, size, prot, dma_get_mask(dev)); + dma_handle = __iommu_dma_map(dev, phys, size, dma_get_mask(dev), + coherent, dir, attrs); if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC) && dma_handle != DMA_MAPPING_ERROR) arch_sync_dma_for_device(phys, size, dir); @@ -776,7 +838,7 @@ static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle, { if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC)) iommu_dma_sync_single_for_cpu(dev, dma_handle, size, dir); - __iommu_dma_unmap(dev, dma_handle, size); + __iommu_dma_unmap_swiotlb(dev, dma_handle, size, dir, attrs); } /* @@ -960,21 +1022,20 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg, sg = tmp; } end = sg_dma_address(sg) + sg_dma_len(sg); - __iommu_dma_unmap(dev, start, end - start); + __iommu_dma_unmap_swiotlb(dev, start, end - start, dir, attrs); } static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys, size_t size, enum dma_data_direction dir, unsigned long attrs) { - return __iommu_dma_map(dev, phys, size, - dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO, - dma_get_mask(dev)); + return __iommu_dma_map(dev, phys, size, dma_get_mask(dev), false, dir, + attrs); } static void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle, size_t size, enum dma_data_direction dir, unsigned long attrs) { - __iommu_dma_unmap(dev, handle, size); + __iommu_dma_unmap_swiotlb(dev, handle, size, dir, attrs); } static void __iommu_dma_free(struct device *dev, size_t size, void *cpu_addr) @@ -1056,7 +1117,6 @@ static void *iommu_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, gfp_t gfp, unsigned long attrs) { bool coherent = dev_is_dma_coherent(dev); - int ioprot = dma_info_to_prot(DMA_BIDIRECTIONAL, coherent, attrs); struct page *page = NULL; void *cpu_addr; @@ -1074,8 +1134,9 @@ static void *iommu_dma_alloc(struct device *dev, size_t size, if (!cpu_addr) return NULL; - *handle = __iommu_dma_map(dev, page_to_phys(page), size, ioprot, - dev->coherent_dma_mask); + *handle = __iommu_dma_map(dev, page_to_phys(page), size, + dev->coherent_dma_mask, coherent, DMA_BIDIRECTIONAL, + attrs); if (*handle == DMA_MAPPING_ERROR) { __iommu_dma_free(dev, size, cpu_addr); return NULL; diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index cec728f40d9c..e5653cb20c83 100644 --- a/drivers/iommu/iommu.c +++ 
b/drivers/iommu/iommu.c @@ -2236,6 +2236,16 @@ void iommu_get_resv_regions(struct device *dev, struct list_head *list) ops->get_resv_regions(dev, list); } +int iommu_needs_bounce_buffer(struct device *dev) +{ + const struct iommu_ops *ops = dev->bus->iommu_ops; + + if (ops && ops->needs_bounce_buffer) + return ops->needs_bounce_buffer(dev); + + return 0; +} + void iommu_put_resv_regions(struct device *dev, struct list_head *list) { const struct iommu_ops *ops = dev->bus->iommu_ops; diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 61cac25410b5..d377ffa362a7 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -280,6 +280,7 @@ struct iommu_ops { enum iommu_attr attr, void *data); int (*domain_set_attr)(struct iommu_domain *domain, enum iommu_attr attr, void *data); + int (*needs_bounce_buffer)(struct device *dev); /* Request/Free a list of reserved regions for a device */ void (*get_resv_regions)(struct device *dev, struct list_head *list); @@ -460,6 +461,7 @@ extern phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t io extern void iommu_set_fault_handler(struct iommu_domain *domain, iommu_fault_handler_t handler, void *token); +extern int iommu_needs_bounce_buffer(struct device *dev); extern void iommu_get_resv_regions(struct device *dev, struct list_head *list); extern void iommu_put_resv_regions(struct device *dev, struct list_head *list); extern int iommu_request_dm_for_dev(struct device *dev); @@ -530,7 +532,7 @@ static inline void iommu_flush_iotlb_all(struct iommu_domain *domain) domain->ops->flush_iotlb_all(domain); } -static inline void flush_iotlb_range(struct iommu_domain *domain, +static inline void iommu_flush_iotlb_range(struct iommu_domain *domain, unsigned long iova, size_t size, struct page *freelist) { @@ -764,6 +766,11 @@ static inline void iommu_set_fault_handler(struct iommu_domain *domain, { } +static inline int iommu_needs_bounce_buffer(struct device *dev) +{ + return 0; +} + static inline void iommu_get_resv_regions(struct device *dev, struct list_head *list) { From patchwork Sat Dec 21 15:03:59 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tom Murphy X-Patchwork-Id: 11307051 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5A67F138D for ; Sat, 21 Dec 2019 15:05:09 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 1B52A21927 for ; Sat, 21 Dec 2019 15:05:09 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=tcd-ie.20150623.gappssmtp.com header.i=@tcd-ie.20150623.gappssmtp.com header.b="M2OG7feF" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727280AbfLUPFD (ORCPT ); Sat, 21 Dec 2019 10:05:03 -0500 Received: from mail-ed1-f68.google.com ([209.85.208.68]:35262 "EHLO mail-ed1-f68.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727498AbfLUPFC (ORCPT ); Sat, 21 Dec 2019 10:05:02 -0500 Received: by mail-ed1-f68.google.com with SMTP id f8so11405641edv.2 for ; Sat, 21 Dec 2019 07:05:00 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=tcd-ie.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Y3cYbqZ8py3E9IwSpAkh1sxos0ylXdXRv0ylSXjRCJI=; 
b=M2OG7feFDhZPmM1Iz0gQ2x4cjoKR5ZB0SCoz0mHrGn/DRJVowG3CTrfFFvtV8HU2W9 RTVODG2tjx/nSRsbM3mqi0Hql9sjNgY9AwTGHVpxrx75eR3eiwAwuw3oC006lBrtJ64Y do68qdWbj6p1WKsR1TYozpLVA+PUvzuZMb6Lt1c8BNFf4YZjJEwrg37ef811goq1WarI giyzHPdC8AUAPcdCbFOHlNd5eCQ69hC+sRDqmq5/ok1Tp+C4V5QOz0TcVBNB0z5WT9LS /9i288Q7WgowCNGvErmibUCPq5Mdk4etkm/Y5/2qYw6kh7Ox0HhGWUAmOqYGJO8qVeMp HG7w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Y3cYbqZ8py3E9IwSpAkh1sxos0ylXdXRv0ylSXjRCJI=; b=dNU0JlQcmdtDvKnlZ6nC4UckPk/hutYI6boLYqwy89A9S/tfsJuxQk0Dv9YAtfrucn R6o/b+e84NQ9bWa+6xFZ3trNKN4/3e8Bk8+NqDaNLjbgvDbGJ4FLcBMR8RWdB3rXjyfZ bnq1tsOezP/DOX16HzGsS7Qq6uZdGxHk7xtXGrkyLm+28/mvuq5cZc8PGw5yKVIbAyZ7 Wabg+Om0zbK3J4w1mD2FjJLryaXec6RRZyXEdqYnQQzcfHQnTt7bStyCxHs7RsDNmq4N ie7w9Hw08OaXG1OHB9tU7KL4fZXO/SjOnfJFGYlTl07cRwTgEbPO3zU23kErNOfMX4EW alDw== X-Gm-Message-State: APjAAAWdqZhIe+4mrXGjd4+nRg3GLJGeS3cp/XGktDM0kRJ2d8EEFxYo 6WaMF5G0fJGebr+DKEwoRMkXmA== X-Google-Smtp-Source: APXvYqwRA7SzgUqH/GQQ0fPTYwqpIErKH/IKhAIz1vY1H1m8i6hLIzhXImFpZ4bwo42+SiwjsM7YgA== X-Received: by 2002:a05:6402:1659:: with SMTP id s25mr22152834edx.219.1576940699834; Sat, 21 Dec 2019 07:04:59 -0800 (PST) Received: from localhost.localdomain ([80.233.37.20]) by smtp.googlemail.com with ESMTPSA id u13sm1517639ejz.69.2019.12.21.07.04.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 21 Dec 2019 07:04:59 -0800 (PST) From: Tom Murphy To: iommu@lists.linux-foundation.org Cc: Tom Murphy , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , David Airlie , Daniel Vetter , Joerg Roedel , Will Deacon , Robin Murphy , Marek Szyprowski , Kukjin Kim , Krzysztof Kozlowski , David Woodhouse , Lu Baolu , Andy Gross , Bjorn Andersson , Matthias Brugger , Rob Clark , Heiko Stuebner , Gerald Schaefer , Thierry Reding , Jonathan Hunter , Jean-Philippe Brucker , Alex Williamson , Cornelia Huck , Eric Auger , Marc Zyngier , Julien Grall , Thomas Gleixner , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-samsung-soc@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mediatek@lists.infradead.org, linux-rockchip@lists.infradead.org, linux-s390@vger.kernel.org, linux-tegra@vger.kernel.org, virtualization@lists.linux-foundation.org, kvm@vger.kernel.org Subject: [PATCH 7/8] iommu/vt-d: Convert intel iommu driver to the iommu ops Date: Sat, 21 Dec 2019 15:03:59 +0000 Message-Id: <20191221150402.13868-8-murphyt7@tcd.ie> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20191221150402.13868-1-murphyt7@tcd.ie> References: <20191221150402.13868-1-murphyt7@tcd.ie> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Convert the intel iommu driver to the dma-iommu api. Remove the iova handling and reserve region code from the intel iommu driver. 
Signed-off-by: Tom Murphy --- drivers/iommu/Kconfig | 1 + drivers/iommu/intel-iommu.c | 742 +++--------------------------------- include/linux/intel-iommu.h | 1 - 3 files changed, 55 insertions(+), 689 deletions(-) diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig index 0b9d78a0f3ac..4126bb2794c7 100644 --- a/drivers/iommu/Kconfig +++ b/drivers/iommu/Kconfig @@ -188,6 +188,7 @@ config INTEL_IOMMU select NEED_DMA_MAP_STATE select DMAR_TABLE select SWIOTLB + select IOMMU_DMA help DMA remapping (DMAR) devices support enables independent address translations for Direct Memory Access (DMA) from devices. diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c index 675ca2aa6e20..e673e1ee9288 100644 --- a/drivers/iommu/intel-iommu.c +++ b/drivers/iommu/intel-iommu.c @@ -31,6 +31,7 @@ #include #include #include +#include #include #include #include @@ -41,7 +42,6 @@ #include #include #include -#include #include #include #include @@ -380,9 +380,6 @@ EXPORT_SYMBOL_GPL(intel_iommu_gfx_mapped); static DEFINE_SPINLOCK(device_domain_lock); static LIST_HEAD(device_domain_list); -#define device_needs_bounce(d) (!intel_no_bounce && dev_is_pci(d) && \ - to_pci_dev(d)->untrusted) - /* * Iterate over elements in device_domain_list and call the specified * callback @fn against each element. @@ -1180,13 +1177,6 @@ static void dma_free_pagelist(struct page *freelist) } } -static void iova_entry_free(unsigned long data) -{ - struct page *freelist = (struct page *)data; - - dma_free_pagelist(freelist); -} - /* iommu handling */ static int iommu_alloc_root_entry(struct intel_iommu *iommu) { @@ -1530,16 +1520,14 @@ static inline void __mapping_notify_one(struct intel_iommu *iommu, iommu_flush_write_buffer(iommu); } -static void iommu_flush_iova(struct iova_domain *iovad) +static void intel_flush_iotlb_all(struct iommu_domain *domain) { - struct dmar_domain *domain; + struct dmar_domain *dmar_domain = to_dmar_domain(domain); int idx; - domain = container_of(iovad, struct dmar_domain, iovad); - - for_each_domain_iommu(idx, domain) { + for_each_domain_iommu(idx, dmar_domain) { struct intel_iommu *iommu = g_iommus[idx]; - u16 did = domain->iommu_did[iommu->seq_id]; + u16 did = dmar_domain->iommu_did[iommu->seq_id]; iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH); @@ -1784,53 +1772,6 @@ static int domain_detach_iommu(struct dmar_domain *domain, return count; } -static struct iova_domain reserved_iova_list; -static struct lock_class_key reserved_rbtree_key; - -static int dmar_init_reserved_ranges(void) -{ - struct pci_dev *pdev = NULL; - struct iova *iova; - int i; - - init_iova_domain(&reserved_iova_list, VTD_PAGE_SIZE, IOVA_START_PFN); - - lockdep_set_class(&reserved_iova_list.iova_rbtree_lock, - &reserved_rbtree_key); - - /* IOAPIC ranges shouldn't be accessed by DMA */ - iova = reserve_iova(&reserved_iova_list, IOVA_PFN(IOAPIC_RANGE_START), - IOVA_PFN(IOAPIC_RANGE_END)); - if (!iova) { - pr_err("Reserve IOAPIC range failed\n"); - return -ENODEV; - } - - /* Reserve all PCI MMIO to avoid peer-to-peer access */ - for_each_pci_dev(pdev) { - struct resource *r; - - for (i = 0; i < PCI_NUM_RESOURCES; i++) { - r = &pdev->resource[i]; - if (!r->flags || !(r->flags & IORESOURCE_MEM)) - continue; - iova = reserve_iova(&reserved_iova_list, - IOVA_PFN(r->start), - IOVA_PFN(r->end)); - if (!iova) { - pci_err(pdev, "Reserve iova for %pR failed\n", r); - return -ENODEV; - } - } - } - return 0; -} - -static void domain_reserve_special_ranges(struct dmar_domain *domain) -{ - 
copy_reserved_iova(&reserved_iova_list, &domain->iovad); -} - static inline int guestwidth_to_adjustwidth(int gaw) { int agaw; @@ -1850,16 +1791,11 @@ static int domain_init(struct dmar_domain *domain, struct intel_iommu *iommu, { int adjust_width, agaw; unsigned long sagaw; - int err; - - init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN); - - err = init_iova_flush_queue(&domain->iovad, - iommu_flush_iova, iova_entry_free); - if (err) - return err; + int ret; - domain_reserve_special_ranges(domain); + ret = iommu_get_dma_cookie(&domain->domain); + if (ret) + return ret; /* calculate AGAW */ if (guest_width > cap_mgaw(iommu->cap)) @@ -1910,7 +1846,7 @@ static void domain_exit(struct dmar_domain *domain) /* destroy iovas */ if (domain->domain.type == IOMMU_DOMAIN_DMA) - put_iova_domain(&domain->iovad); + iommu_put_dma_cookie(&domain->domain); if (domain->pgd) { struct page *freelist; @@ -2439,20 +2375,6 @@ static struct dmar_domain *find_domain(struct device *dev) return NULL; } -static struct dmar_domain *deferred_attach_domain(struct device *dev) -{ - if (unlikely(dev->archdata.iommu == DEFER_DEVICE_DOMAIN_INFO)) { - struct iommu_domain *domain; - - dev->archdata.iommu = NULL; - domain = iommu_get_domain_for_dev(dev); - if (domain) - intel_iommu_attach_device(domain, dev); - } - - return find_domain(dev); -} - static inline struct device_domain_info * dmar_search_domain_by_dev_info(int segment, int bus, int devfn) { @@ -3363,39 +3285,6 @@ static int __init init_dmars(void) return ret; } -/* This takes a number of _MM_ pages, not VTD pages */ -static unsigned long intel_alloc_iova(struct device *dev, - struct dmar_domain *domain, - unsigned long nrpages, uint64_t dma_mask) -{ - unsigned long iova_pfn; - - /* Restrict dma_mask to the width that the iommu can handle */ - dma_mask = min_t(uint64_t, DOMAIN_MAX_ADDR(domain->gaw), dma_mask); - /* Ensure we reserve the whole size-aligned region */ - nrpages = __roundup_pow_of_two(nrpages); - - if (!dmar_forcedac && dma_mask > DMA_BIT_MASK(32)) { - /* - * First try to allocate an io virtual address in - * DMA_BIT_MASK(32) and if that fails then try allocating - * from higher range - */ - iova_pfn = alloc_iova_fast(&domain->iovad, nrpages, - IOVA_PFN(DMA_BIT_MASK(32)), false); - if (iova_pfn) - return iova_pfn; - } - iova_pfn = alloc_iova_fast(&domain->iovad, nrpages, - IOVA_PFN(dma_mask), true); - if (unlikely(!iova_pfn)) { - dev_err(dev, "Allocating %ld-page iova failed", nrpages); - return 0; - } - - return iova_pfn; -} - static struct dmar_domain *get_private_domain_for_dev(struct device *dev) { struct dmar_domain *domain, *tmp; @@ -3444,528 +3333,6 @@ static struct dmar_domain *get_private_domain_for_dev(struct device *dev) return domain; } -static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr, - size_t size, int dir, u64 dma_mask) -{ - struct dmar_domain *domain; - phys_addr_t start_paddr; - unsigned long iova_pfn; - int prot = 0; - int ret; - struct intel_iommu *iommu; - unsigned long paddr_pfn = paddr >> PAGE_SHIFT; - - BUG_ON(dir == DMA_NONE); - - domain = deferred_attach_domain(dev); - if (!domain) - return DMA_MAPPING_ERROR; - - iommu = domain_get_iommu(domain); - size = aligned_nrpages(paddr, size); - - iova_pfn = intel_alloc_iova(dev, domain, dma_to_mm_pfn(size), dma_mask); - if (!iova_pfn) - goto error; - - /* - * Check if DMAR supports zero-length reads on write only - * mappings.. 
- */ - if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL || \ - !cap_zlr(iommu->cap)) - prot |= DMA_PTE_READ; - if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL) - prot |= DMA_PTE_WRITE; - /* - * paddr - (paddr + size) might be partial page, we should map the whole - * page. Note: if two part of one page are separately mapped, we - * might have two guest_addr mapping to the same host paddr, but this - * is not a big problem - */ - ret = domain_pfn_mapping(domain, mm_to_dma_pfn(iova_pfn), - mm_to_dma_pfn(paddr_pfn), size, prot); - if (ret) - goto error; - - start_paddr = (phys_addr_t)iova_pfn << PAGE_SHIFT; - start_paddr += paddr & ~PAGE_MASK; - - trace_map_single(dev, start_paddr, paddr, size << VTD_PAGE_SHIFT); - - return start_paddr; - -error: - if (iova_pfn) - free_iova_fast(&domain->iovad, iova_pfn, dma_to_mm_pfn(size)); - dev_err(dev, "Device request: %zx@%llx dir %d --- failed\n", - size, (unsigned long long)paddr, dir); - return DMA_MAPPING_ERROR; -} - -static dma_addr_t intel_map_page(struct device *dev, struct page *page, - unsigned long offset, size_t size, - enum dma_data_direction dir, - unsigned long attrs) -{ - return __intel_map_single(dev, page_to_phys(page) + offset, size, dir, - *dev->dma_mask); -} - -static dma_addr_t intel_map_resource(struct device *dev, phys_addr_t phys_addr, - size_t size, enum dma_data_direction dir, - unsigned long attrs) -{ - return __intel_map_single(dev, phys_addr, size, dir, *dev->dma_mask); -} - -static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size) -{ - struct dmar_domain *domain; - unsigned long start_pfn, last_pfn; - unsigned long nrpages; - unsigned long iova_pfn; - struct intel_iommu *iommu; - struct page *freelist; - struct pci_dev *pdev = NULL; - - domain = find_domain(dev); - BUG_ON(!domain); - - iommu = domain_get_iommu(domain); - - iova_pfn = IOVA_PFN(dev_addr); - - nrpages = aligned_nrpages(dev_addr, size); - start_pfn = mm_to_dma_pfn(iova_pfn); - last_pfn = start_pfn + nrpages - 1; - - if (dev_is_pci(dev)) - pdev = to_pci_dev(dev); - - freelist = domain_unmap(domain, start_pfn, last_pfn, NULL); - if (intel_iommu_strict || (pdev && pdev->untrusted) || - !has_iova_flush_queue(&domain->iovad)) { - iommu_flush_iotlb_psi(iommu, domain, start_pfn, - nrpages, !freelist, 0); - /* free iova */ - free_iova_fast(&domain->iovad, iova_pfn, dma_to_mm_pfn(nrpages)); - dma_free_pagelist(freelist); - } else { - queue_iova(&domain->iovad, iova_pfn, nrpages, - (unsigned long)freelist); - /* - * queue up the release of the unmap to save the 1/6th of the - * cpu used up by the iotlb flush operation... 
- */ - } - - trace_unmap_single(dev, dev_addr, size); -} - -static void intel_unmap_page(struct device *dev, dma_addr_t dev_addr, - size_t size, enum dma_data_direction dir, - unsigned long attrs) -{ - intel_unmap(dev, dev_addr, size); -} - -static void intel_unmap_resource(struct device *dev, dma_addr_t dev_addr, - size_t size, enum dma_data_direction dir, unsigned long attrs) -{ - intel_unmap(dev, dev_addr, size); -} - -static void *intel_alloc_coherent(struct device *dev, size_t size, - dma_addr_t *dma_handle, gfp_t flags, - unsigned long attrs) -{ - struct page *page = NULL; - int order; - - size = PAGE_ALIGN(size); - order = get_order(size); - - if (gfpflags_allow_blocking(flags)) { - unsigned int count = size >> PAGE_SHIFT; - - page = dma_alloc_from_contiguous(dev, count, order, - flags & __GFP_NOWARN); - } - - if (!page) - page = alloc_pages(flags, order); - if (!page) - return NULL; - memset(page_address(page), 0, size); - - *dma_handle = __intel_map_single(dev, page_to_phys(page), size, - DMA_BIDIRECTIONAL, - dev->coherent_dma_mask); - if (*dma_handle != DMA_MAPPING_ERROR) - return page_address(page); - if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT)) - __free_pages(page, order); - - return NULL; -} - -static void intel_free_coherent(struct device *dev, size_t size, void *vaddr, - dma_addr_t dma_handle, unsigned long attrs) -{ - int order; - struct page *page = virt_to_page(vaddr); - - size = PAGE_ALIGN(size); - order = get_order(size); - - intel_unmap(dev, dma_handle, size); - if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT)) - __free_pages(page, order); -} - -static void intel_unmap_sg(struct device *dev, struct scatterlist *sglist, - int nelems, enum dma_data_direction dir, - unsigned long attrs) -{ - dma_addr_t startaddr = sg_dma_address(sglist) & PAGE_MASK; - unsigned long nrpages = 0; - struct scatterlist *sg; - int i; - - for_each_sg(sglist, sg, nelems, i) { - nrpages += aligned_nrpages(sg_dma_address(sg), sg_dma_len(sg)); - } - - intel_unmap(dev, startaddr, nrpages << VTD_PAGE_SHIFT); - - trace_unmap_sg(dev, startaddr, nrpages << VTD_PAGE_SHIFT); -} - -static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nelems, - enum dma_data_direction dir, unsigned long attrs) -{ - int i; - struct dmar_domain *domain; - size_t size = 0; - int prot = 0; - unsigned long iova_pfn; - int ret; - struct scatterlist *sg; - unsigned long start_vpfn; - struct intel_iommu *iommu; - - BUG_ON(dir == DMA_NONE); - - domain = deferred_attach_domain(dev); - if (!domain) - return 0; - - iommu = domain_get_iommu(domain); - - for_each_sg(sglist, sg, nelems, i) - size += aligned_nrpages(sg->offset, sg->length); - - iova_pfn = intel_alloc_iova(dev, domain, dma_to_mm_pfn(size), - *dev->dma_mask); - if (!iova_pfn) { - sglist->dma_length = 0; - return 0; - } - - /* - * Check if DMAR supports zero-length reads on write only - * mappings.. 
- */ - if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL || \ - !cap_zlr(iommu->cap)) - prot |= DMA_PTE_READ; - if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL) - prot |= DMA_PTE_WRITE; - - start_vpfn = mm_to_dma_pfn(iova_pfn); - - ret = domain_sg_mapping(domain, start_vpfn, sglist, size, prot); - if (unlikely(ret)) { - dma_pte_free_pagetable(domain, start_vpfn, - start_vpfn + size - 1, - agaw_to_level(domain->agaw) + 1); - free_iova_fast(&domain->iovad, iova_pfn, dma_to_mm_pfn(size)); - return 0; - } - - trace_map_sg(dev, iova_pfn << PAGE_SHIFT, - sg_phys(sglist), size << VTD_PAGE_SHIFT); - - return nelems; -} - -static u64 intel_get_required_mask(struct device *dev) -{ - return DMA_BIT_MASK(32); -} - -static const struct dma_map_ops intel_dma_ops = { - .alloc = intel_alloc_coherent, - .free = intel_free_coherent, - .map_sg = intel_map_sg, - .unmap_sg = intel_unmap_sg, - .map_page = intel_map_page, - .unmap_page = intel_unmap_page, - .map_resource = intel_map_resource, - .unmap_resource = intel_unmap_resource, - .dma_supported = dma_direct_supported, - .mmap = dma_common_mmap, - .get_sgtable = dma_common_get_sgtable, - .get_required_mask = intel_get_required_mask, -}; - -static void -bounce_sync_single(struct device *dev, dma_addr_t addr, size_t size, - enum dma_data_direction dir, enum dma_sync_target target) -{ - struct dmar_domain *domain; - phys_addr_t tlb_addr; - - domain = find_domain(dev); - if (WARN_ON(!domain)) - return; - - tlb_addr = intel_iommu_iova_to_phys(&domain->domain, addr); - if (is_swiotlb_buffer(tlb_addr)) - swiotlb_tbl_sync_single(dev, tlb_addr, size, dir, target); -} - -static dma_addr_t -bounce_map_single(struct device *dev, phys_addr_t paddr, size_t size, - enum dma_data_direction dir, unsigned long attrs, - u64 dma_mask) -{ - size_t aligned_size = ALIGN(size, VTD_PAGE_SIZE); - struct dmar_domain *domain; - struct intel_iommu *iommu; - unsigned long iova_pfn; - unsigned long nrpages; - phys_addr_t tlb_addr; - int prot = 0; - int ret; - - domain = deferred_attach_domain(dev); - if (WARN_ON(dir == DMA_NONE || !domain)) - return DMA_MAPPING_ERROR; - - iommu = domain_get_iommu(domain); - if (WARN_ON(!iommu)) - return DMA_MAPPING_ERROR; - - nrpages = aligned_nrpages(0, size); - iova_pfn = intel_alloc_iova(dev, domain, - dma_to_mm_pfn(nrpages), dma_mask); - if (!iova_pfn) - return DMA_MAPPING_ERROR; - - /* - * Check if DMAR supports zero-length reads on write only - * mappings.. - */ - if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL || - !cap_zlr(iommu->cap)) - prot |= DMA_PTE_READ; - if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL) - prot |= DMA_PTE_WRITE; - - /* - * If both the physical buffer start address and size are - * page aligned, we don't need to use a bounce page. - */ - if (!IS_ALIGNED(paddr | size, VTD_PAGE_SIZE)) { - tlb_addr = swiotlb_tbl_map_single(dev, - __phys_to_dma(dev, io_tlb_start), - paddr, size, aligned_size, dir, attrs); - if (tlb_addr == DMA_MAPPING_ERROR) { - goto swiotlb_error; - } else { - /* Cleanup the padding area. 
*/ - void *padding_start = phys_to_virt(tlb_addr); - size_t padding_size = aligned_size; - - if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && - (dir == DMA_TO_DEVICE || - dir == DMA_BIDIRECTIONAL)) { - padding_start += size; - padding_size -= size; - } - - memset(padding_start, 0, padding_size); - } - } else { - tlb_addr = paddr; - } - - ret = domain_pfn_mapping(domain, mm_to_dma_pfn(iova_pfn), - tlb_addr >> VTD_PAGE_SHIFT, nrpages, prot); - if (ret) - goto mapping_error; - - trace_bounce_map_single(dev, iova_pfn << PAGE_SHIFT, paddr, size); - - return (phys_addr_t)iova_pfn << PAGE_SHIFT; - -mapping_error: - if (is_swiotlb_buffer(tlb_addr)) - swiotlb_tbl_unmap_single(dev, tlb_addr, size, - aligned_size, dir, attrs); -swiotlb_error: - free_iova_fast(&domain->iovad, iova_pfn, dma_to_mm_pfn(nrpages)); - dev_err(dev, "Device bounce map: %zx@%llx dir %d --- failed\n", - size, (unsigned long long)paddr, dir); - - return DMA_MAPPING_ERROR; -} - -static void -bounce_unmap_single(struct device *dev, dma_addr_t dev_addr, size_t size, - enum dma_data_direction dir, unsigned long attrs) -{ - size_t aligned_size = ALIGN(size, VTD_PAGE_SIZE); - struct dmar_domain *domain; - phys_addr_t tlb_addr; - - domain = find_domain(dev); - if (WARN_ON(!domain)) - return; - - tlb_addr = intel_iommu_iova_to_phys(&domain->domain, dev_addr); - if (WARN_ON(!tlb_addr)) - return; - - intel_unmap(dev, dev_addr, size); - if (is_swiotlb_buffer(tlb_addr)) - swiotlb_tbl_unmap_single(dev, tlb_addr, size, - aligned_size, dir, attrs); - - trace_bounce_unmap_single(dev, dev_addr, size); -} - -static dma_addr_t -bounce_map_page(struct device *dev, struct page *page, unsigned long offset, - size_t size, enum dma_data_direction dir, unsigned long attrs) -{ - return bounce_map_single(dev, page_to_phys(page) + offset, - size, dir, attrs, *dev->dma_mask); -} - -static dma_addr_t -bounce_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size, - enum dma_data_direction dir, unsigned long attrs) -{ - return bounce_map_single(dev, phys_addr, size, - dir, attrs, *dev->dma_mask); -} - -static void -bounce_unmap_page(struct device *dev, dma_addr_t dev_addr, size_t size, - enum dma_data_direction dir, unsigned long attrs) -{ - bounce_unmap_single(dev, dev_addr, size, dir, attrs); -} - -static void -bounce_unmap_resource(struct device *dev, dma_addr_t dev_addr, size_t size, - enum dma_data_direction dir, unsigned long attrs) -{ - bounce_unmap_single(dev, dev_addr, size, dir, attrs); -} - -static void -bounce_unmap_sg(struct device *dev, struct scatterlist *sglist, int nelems, - enum dma_data_direction dir, unsigned long attrs) -{ - struct scatterlist *sg; - int i; - - for_each_sg(sglist, sg, nelems, i) - bounce_unmap_page(dev, sg->dma_address, - sg_dma_len(sg), dir, attrs); -} - -static int -bounce_map_sg(struct device *dev, struct scatterlist *sglist, int nelems, - enum dma_data_direction dir, unsigned long attrs) -{ - int i; - struct scatterlist *sg; - - for_each_sg(sglist, sg, nelems, i) { - sg->dma_address = bounce_map_page(dev, sg_page(sg), - sg->offset, sg->length, - dir, attrs); - if (sg->dma_address == DMA_MAPPING_ERROR) - goto out_unmap; - sg_dma_len(sg) = sg->length; - } - - return nelems; - -out_unmap: - bounce_unmap_sg(dev, sglist, i, dir, attrs | DMA_ATTR_SKIP_CPU_SYNC); - return 0; -} - -static void -bounce_sync_single_for_cpu(struct device *dev, dma_addr_t addr, - size_t size, enum dma_data_direction dir) -{ - bounce_sync_single(dev, addr, size, dir, SYNC_FOR_CPU); -} - -static void -bounce_sync_single_for_device(struct 
device *dev, dma_addr_t addr, - size_t size, enum dma_data_direction dir) -{ - bounce_sync_single(dev, addr, size, dir, SYNC_FOR_DEVICE); -} - -static void -bounce_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist, - int nelems, enum dma_data_direction dir) -{ - struct scatterlist *sg; - int i; - - for_each_sg(sglist, sg, nelems, i) - bounce_sync_single(dev, sg_dma_address(sg), - sg_dma_len(sg), dir, SYNC_FOR_CPU); -} - -static void -bounce_sync_sg_for_device(struct device *dev, struct scatterlist *sglist, - int nelems, enum dma_data_direction dir) -{ - struct scatterlist *sg; - int i; - - for_each_sg(sglist, sg, nelems, i) - bounce_sync_single(dev, sg_dma_address(sg), - sg_dma_len(sg), dir, SYNC_FOR_DEVICE); -} - -static const struct dma_map_ops bounce_dma_ops = { - .alloc = intel_alloc_coherent, - .free = intel_free_coherent, - .map_sg = bounce_map_sg, - .unmap_sg = bounce_unmap_sg, - .map_page = bounce_map_page, - .unmap_page = bounce_unmap_page, - .sync_single_for_cpu = bounce_sync_single_for_cpu, - .sync_single_for_device = bounce_sync_single_for_device, - .sync_sg_for_cpu = bounce_sync_sg_for_cpu, - .sync_sg_for_device = bounce_sync_sg_for_device, - .map_resource = bounce_map_resource, - .unmap_resource = bounce_unmap_resource, - .dma_supported = dma_direct_supported, -}; - static inline int iommu_domain_cache_init(void) { int ret = 0; @@ -4648,7 +4015,7 @@ static void free_all_cpu_cached_iovas(unsigned int cpu) if (!domain || domain->domain.type != IOMMU_DOMAIN_DMA) continue; - free_cpu_cached_iovas(cpu, &domain->iovad); + iommu_dma_free_cpu_cached_iovas(cpu, &domain->domain); } } } @@ -4917,12 +4284,6 @@ int __init intel_iommu_init(void) if (list_empty(&dmar_atsr_units)) pr_info("No ATSR found\n"); - if (dmar_init_reserved_ranges()) { - if (force_on) - panic("tboot: Failed to reserve iommu ranges\n"); - goto out_free_reserved_range; - } - if (dmar_map_gfx) intel_iommu_gfx_mapped = 1; @@ -4933,7 +4294,7 @@ int __init intel_iommu_init(void) if (force_on) panic("tboot: Failed to initialize DMARs\n"); pr_err("Initialization failed\n"); - goto out_free_reserved_range; + goto out_free_dmar; } up_write(&dmar_global_lock); @@ -4983,8 +4344,6 @@ int __init intel_iommu_init(void) return 0; -out_free_reserved_range: - put_iova_domain(&reserved_iova_list); out_free_dmar: intel_iommu_free_dmars(); up_write(&dmar_global_lock); @@ -5087,18 +4446,6 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width) return 0; } -static void intel_init_iova_domain(struct dmar_domain *dmar_domain) -{ - init_iova_domain(&dmar_domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN); - copy_reserved_iova(&reserved_iova_list, &dmar_domain->iovad); - - if (init_iova_flush_queue(&dmar_domain->iovad, iommu_flush_iova, - iova_entry_free)) { - pr_warn("iova flush queue initialization failed\n"); - intel_iommu_strict = 1; - } -} - static struct iommu_domain *intel_iommu_domain_alloc(unsigned type) { struct dmar_domain *dmar_domain; @@ -5119,8 +4466,9 @@ static struct iommu_domain *intel_iommu_domain_alloc(unsigned type) return NULL; } - if (type == IOMMU_DOMAIN_DMA) - intel_init_iova_domain(dmar_domain); + if (type == IOMMU_DOMAIN_DMA && + iommu_get_dma_cookie(&dmar_domain->domain)) + return NULL; domain_update_iommu_cap(dmar_domain); @@ -5319,6 +4667,9 @@ static int intel_iommu_attach_device(struct iommu_domain *domain, { int ret; + /* Clear deferred attach */ + dev->archdata.iommu = NULL; + if (domain->type == IOMMU_DOMAIN_UNMANAGED && device_is_rmrr_locked(dev)) { dev_warn(dev, "Device is 
ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor.\n"); @@ -5533,6 +4884,7 @@ static int intel_iommu_add_device(struct device *dev) struct intel_iommu *iommu; struct iommu_group *group; u64 dma_mask = *dev->dma_mask; + dma_addr_t base; u8 bus, devfn; int ret; @@ -5555,6 +4907,7 @@ static int intel_iommu_add_device(struct device *dev) iommu_group_put(group); + base = IOVA_START_PFN << VTD_PAGE_SHIFT; domain = iommu_get_domain_for_dev(dev); dmar_domain = to_dmar_domain(domain); if (domain->type == IOMMU_DOMAIN_DMA) { @@ -5573,7 +4926,9 @@ static int intel_iommu_add_device(struct device *dev) "Device uses a private identity domain.\n"); } } else { - dev->dma_ops = &intel_dma_ops; + iommu_setup_dma_ops(dev, base, + __DOMAIN_MAX_ADDR(dmar_domain->gaw) - + base); } } else { if (device_def_domain_type(dev) == IOMMU_DOMAIN_DMA) { @@ -5590,15 +4945,12 @@ static int intel_iommu_add_device(struct device *dev) dev_info(dev, "Device uses a private dma domain.\n"); } - dev->dma_ops = &intel_dma_ops; + iommu_setup_dma_ops(dev, base, + __DOMAIN_MAX_ADDR(dmar_domain->gaw) - + base); } } - if (device_needs_bounce(dev)) { - dev_info(dev, "Use Intel IOMMU bounce page dma_ops\n"); - set_dma_ops(dev, &bounce_dma_ops); - } - return 0; } @@ -5620,6 +4972,31 @@ static void intel_iommu_remove_device(struct device *dev) set_dma_ops(dev, NULL); } +static int intel_iommu_domain_get_attr(struct iommu_domain *domain, + enum iommu_attr attr, void *data) +{ + switch (domain->type) { + case IOMMU_DOMAIN_UNMANAGED: + return -ENODEV; + case IOMMU_DOMAIN_DMA: + switch (attr) { + case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE: + *(int *)data = !intel_iommu_strict; + return 0; + default: + return -ENODEV; + } + break; + default: + return -EINVAL; + } +} + +static int intel_iommu_needs_bounce_buffer(struct device *d) +{ + return !intel_no_bounce && dev_is_pci(d) && to_pci_dev(d)->untrusted; +} + static void intel_iommu_get_resv_regions(struct device *device, struct list_head *head) { @@ -5737,19 +5114,6 @@ int intel_iommu_enable_pasid(struct intel_iommu *iommu, struct device *dev) return ret; } -static void intel_iommu_apply_resv_region(struct device *dev, - struct iommu_domain *domain, - struct iommu_resv_region *region) -{ - struct dmar_domain *dmar_domain = to_dmar_domain(domain); - unsigned long start, end; - - start = IOVA_PFN(region->start); - end = IOVA_PFN(region->start + region->length - 1); - - WARN_ON_ONCE(!reserve_iova(&dmar_domain->iovad, start, end)); -} - #ifdef CONFIG_INTEL_IOMMU_SVM struct intel_iommu *intel_svm_device_to_iommu(struct device *dev) { @@ -5916,13 +5280,15 @@ const struct iommu_ops intel_iommu_ops = { .aux_get_pasid = intel_iommu_aux_get_pasid, .map = intel_iommu_map, .unmap = intel_iommu_unmap, + .flush_iotlb_all = intel_flush_iotlb_all, .flush_iotlb_range = intel_iommu_flush_iotlb_range, .iova_to_phys = intel_iommu_iova_to_phys, .add_device = intel_iommu_add_device, .remove_device = intel_iommu_remove_device, + .domain_get_attr = intel_iommu_domain_get_attr, + .needs_bounce_buffer = intel_iommu_needs_bounce_buffer, .get_resv_regions = intel_iommu_get_resv_regions, .put_resv_regions = intel_iommu_put_resv_regions, - .apply_resv_region = intel_iommu_apply_resv_region, .device_group = pci_device_group, .dev_has_feat = intel_iommu_dev_has_feat, .dev_feat_enabled = intel_iommu_dev_feat_enabled, diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h index 6d8bf4bdf240..d07f14340870 100644 --- a/include/linux/intel-iommu.h +++ 
b/include/linux/intel-iommu.h @@ -489,7 +489,6 @@ struct dmar_domain { bool has_iotlb_device; struct list_head devices; /* all devices' list */ struct list_head auxd; /* link to device's auxiliary list */ - struct iova_domain iovad; /* iova's that belong to this domain */ struct dma_pte *pgd; /* virtual address */ int gaw; /* max guest address width */ From patchwork Sat Dec 21 15:04:00 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tom Murphy X-Patchwork-Id: 11307059 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 93A1414F6 for ; Sat, 21 Dec 2019 15:05:18 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 72C1B21D7E for ; Sat, 21 Dec 2019 15:05:18 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=tcd-ie.20150623.gappssmtp.com header.i=@tcd-ie.20150623.gappssmtp.com header.b="Ukw6PoNA" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727516AbfLUPFO (ORCPT ); Sat, 21 Dec 2019 10:05:14 -0500 Received: from mail-ed1-f67.google.com ([209.85.208.67]:35290 "EHLO mail-ed1-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727208AbfLUPFN (ORCPT ); Sat, 21 Dec 2019 10:05:13 -0500 Received: by mail-ed1-f67.google.com with SMTP id f8so11405987edv.2 for ; Sat, 21 Dec 2019 07:05:11 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=tcd-ie.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=0N+893Qfq6YLP1OvUkum1ZO1qWJ01GUT6Ih8V2c80Vk=; b=Ukw6PoNAZjf5krJJRS06xPvTLavrof9Akn4Pak+RjVdM62j1/NqPdGsYUU6ngGpNzI ZHRA+faVKygYXpBoHQBzzSrMhFuDjDM3rPT8thlw7ti3Ya7E/2Ia2hgJGmjOkpCbhcKi fWFJ41yTlMvsKGf/oqAM1MkQf60LS0RKrqZ8loUGznkUimWWsqYUsRHeSebLKMi4vBlg XuhtpVGYw4uTiieNAiV6TEl+/HFnxAfvR6qkWvowyMkkJ9p39r4oe6DrrFdOAOaTjRHK ZuOahTLph0HLpnOdVRH8cOKfXQ7P8kaiwem0m0P3CTEBJSk0wQflsk31c0yoR6ujnQx5 TqqQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=0N+893Qfq6YLP1OvUkum1ZO1qWJ01GUT6Ih8V2c80Vk=; b=TenPp1A8EDpq53RBh5XtFgdqJglvETA0ob1zWxlid0Gxz2daeVqMrYALDLUVvXH75j 1edT3Q95MRgJtq5RTY5EZmlmJiqoACyXGJx+jj4WG18ycwvRXenhp7xS8xeU39lhh7rj QZ6Tc19ttyv3DhIcU7XVzO9asz633YyIYwoWpdAd/4/o8a2iZpULQYs0v/PSYy9NykNT 87riKT3X1WiGE6W9tBfTSieCZ8Fon+WVS6fkQhEiAr1ykXKMRHsccHbVnVWA+c5y4fuF b5bbKuSMVJSW6hU3nHYcIsIa9ZRfaXmbPFhbz000eb68rWHlbSN73BjQ3xFofjT3qsEJ Cimg== X-Gm-Message-State: APjAAAUUiJ/Ip/fCA4ePr6TCrG2VomKbcpzDxTOkeXiqpCOWDqMxyl1U 6O+K51KSoHO+5vPCqbcXTlIg4A== X-Google-Smtp-Source: APXvYqzYBtLgHyu41wW+pu/pR134HwMgfABxCFG7SFTaKHnLsnkRqp9tkFJSv1iaSEoyl+LQ52YyaQ== X-Received: by 2002:a17:906:2649:: with SMTP id i9mr22633139ejc.120.1576940711107; Sat, 21 Dec 2019 07:05:11 -0800 (PST) Received: from localhost.localdomain ([80.233.37.20]) by smtp.googlemail.com with ESMTPSA id u13sm1517639ejz.69.2019.12.21.07.05.03 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 21 Dec 2019 07:05:10 -0800 (PST) From: Tom Murphy To: iommu@lists.linux-foundation.org Cc: Tom Murphy , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , David Airlie , Daniel Vetter , Joerg Roedel , Will Deacon , Robin Murphy , Marek Szyprowski , Kukjin Kim , 
Krzysztof Kozlowski , David Woodhouse , Lu Baolu , Andy Gross , Bjorn Andersson , Matthias Brugger , Rob Clark , Heiko Stuebner , Gerald Schaefer , Thierry Reding , Jonathan Hunter , Jean-Philippe Brucker , Alex Williamson , Cornelia Huck , Marc Zyngier , Eric Auger , Julien Grall , Thomas Gleixner , intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-samsung-soc@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mediatek@lists.infradead.org, linux-rockchip@lists.infradead.org, linux-s390@vger.kernel.org, linux-tegra@vger.kernel.org, virtualization@lists.linux-foundation.org, kvm@vger.kernel.org Subject: [PATCH 8/8] DO NOT MERGE: iommu: disable list appending in dma-iommu ops __finalise_sg Date: Sat, 21 Dec 2019 15:04:00 +0000 Message-Id: <20191221150402.13868-9-murphyt7@tcd.ie> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20191221150402.13868-1-murphyt7@tcd.ie> References: <20191221150402.13868-1-murphyt7@tcd.ie> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Disable combining sg segments in the dma-iommu api. Combining the sg segments exposes a bug in the intel i915 driver which causes visual artifacts and the screen to freeze. This is most likely because of how the i915 handles the returned list. It probably doesn't respect the returned value specifying the number of elements in the list and instead depends on the previous behaviour of the intel iommu driver which would return the same number of elements in the output list as in the input list. Signed-off-by: Tom Murphy --- drivers/iommu/dma-iommu.c | 38 +++++++------------------------------- 1 file changed, 7 insertions(+), 31 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index cf778db7d84d..d7547b912c87 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -853,8 +853,7 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents, { struct scatterlist *s, *cur = sg; unsigned long seg_mask = dma_get_seg_boundary(dev); - unsigned int cur_len = 0, max_len = dma_get_max_seg_size(dev); - int i, count = 0; + int i; for_each_sg(sg, s, nents, i) { /* Restore this segment's original unaligned fields first */ @@ -862,39 +861,16 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents, unsigned int s_length = sg_dma_len(s); unsigned int s_iova_len = s->length; + if (i > 0) + cur = sg_next(cur); + s->offset += s_iova_off; s->length = s_length; - sg_dma_address(s) = DMA_MAPPING_ERROR; - sg_dma_len(s) = 0; - - /* - * Now fill in the real DMA data. If... - * - there is a valid output segment to append to - * - and this segment starts on an IOVA page boundary - * - but doesn't fall at a segment boundary - * - and wouldn't make the resulting output segment too long - */ - if (cur_len && !s_iova_off && (dma_addr & seg_mask) && - (max_len - cur_len >= s_length)) { - /* ...then concatenate it with the previous one */ - cur_len += s_length; - } else { - /* Otherwise start the next output segment */ - if (i > 0) - cur = sg_next(cur); - cur_len = s_length; - count++; - - sg_dma_address(cur) = dma_addr + s_iova_off; - } - - sg_dma_len(cur) = cur_len; + sg_dma_address(cur) = dma_addr + s_iova_off; + sg_dma_len(cur) = s_length; dma_addr += s_iova_len; - - if (s_length + s_iova_off < s_iova_len) - cur_len = 0; } - return count; + return nents; } /*