From patchwork Mon May 11 09:52:52 2015
X-Patchwork-Submitter: "Li, Zhen-Hua"
X-Patchwork-Id: 6375411
X-Patchwork-Delegate: bhelgaas@google.com
From: "Li, Zhen-Hua"
Subject: [PATCH v11 08/10] iommu/vt-d: assign new page table for dma_map
Date: Mon,
 11 May 2015 17:52:52 +0800
Message-Id: <1431337974-545-9-git-send-email-zhen-hual@hp.com>
X-Mailer: git-send-email 2.0.0-rc0
In-Reply-To: <1431337974-545-1-git-send-email-zhen-hual@hp.com>
References: <1431337974-545-1-git-send-email-zhen-hual@hp.com>
X-Mailing-List: linux-pci@vger.kernel.org

When a device driver issues the first dma_map call for a device, we assign
it a new, empty page table, thereby removing all of the old kernel's
mappings for that device.

Signed-off-by: Li, Zhen-Hua
---
 drivers/iommu/intel-iommu.c | 58 ++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 50 insertions(+), 8 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 91545bf..3cc1027 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -396,6 +396,9 @@ static int copy_root_entry_table(struct intel_iommu *iommu);
 
 static int intel_iommu_load_translation_tables(struct intel_iommu *iommu);
 
+static void unmap_device_dma(struct dmar_domain *domain,
+			     struct device *dev,
+			     struct intel_iommu *iommu);
 static void iommu_check_pre_te_status(struct intel_iommu *iommu);
 
 static u8 g_translation_pre_enabled;
@@ -3115,6 +3118,7 @@ static struct iova *intel_alloc_iova(struct device *dev,
 static struct dmar_domain *__get_valid_domain_for_dev(struct device *dev)
 {
 	struct dmar_domain *domain;
+	struct intel_iommu *iommu;
 	int ret;
 
 	domain = get_domain_for_dev(dev, DEFAULT_DOMAIN_ADDRESS_WIDTH);
@@ -3124,14 +3128,30 @@ static struct dmar_domain *__get_valid_domain_for_dev(struct device *dev)
 		return NULL;
 	}
 
-	/* make sure context mapping is ok */
-	if (unlikely(!domain_context_mapped(dev))) {
-		ret = domain_context_mapping(domain, dev, CONTEXT_TT_MULTI_LEVEL);
-		if (ret) {
-			printk(KERN_ERR "Domain context map for %s failed",
-			       dev_name(dev));
-			return NULL;
-		}
+	/* if in kdump kernel, we need to unmap the mapped dma pages,
+	 * detach this device first.
+	 */
+	if (likely(domain_context_mapped(dev))) {
+		iommu = domain_get_iommu(domain);
+		if (iommu->pre_enabled_trans) {
+			unmap_device_dma(domain, dev, iommu);
+
+			domain = get_domain_for_dev(dev,
+					DEFAULT_DOMAIN_ADDRESS_WIDTH);
+			if (!domain) {
+				pr_err("Allocating domain for %s failed",
+				       dev_name(dev));
+				return NULL;
+			}
+		} else
+			return domain;
+	}
+
+	ret = domain_context_mapping(domain, dev, CONTEXT_TT_MULTI_LEVEL);
+	if (ret) {
+		pr_err("Domain context map for %s failed",
+		       dev_name(dev));
+		return NULL;
 	}
 
 	return domain;
@@ -5168,6 +5188,28 @@ static int intel_iommu_load_translation_tables(struct intel_iommu *iommu)
 	return ret;
 }
 
+static void unmap_device_dma(struct dmar_domain *domain,
+			     struct device *dev,
+			     struct intel_iommu *iommu)
+{
+	struct context_entry *ce;
+	struct iova *iova;
+	phys_addr_t phys_addr;
+	dma_addr_t dev_addr;
+	struct pci_dev *pdev;
+
+	pdev = to_pci_dev(dev);
+	ce = iommu_context_addr(iommu, pdev->bus->number, pdev->devfn, 1);
+	phys_addr = context_address_root(ce) << VTD_PAGE_SHIFT;
+	dev_addr = phys_to_dma(dev, phys_addr);
+
+	iova = find_iova(&domain->iovad, IOVA_PFN(dev_addr));
+	if (iova)
+		intel_unmap(dev, dev_addr);
+
+	domain_remove_one_dev_info(domain, dev);
+}
+
 static void iommu_check_pre_te_status(struct intel_iommu *iommu)
 {
 	u32 sts;