From patchwork Fri Apr 10 08:42:10 2015
X-Patchwork-Submitter: "Li, Zhen-Hua"
X-Patchwork-Id: 6193201
X-Patchwork-Delegate: bhelgaas@google.com
From: "Li, Zhen-Hua"
Subject: [PATCH v10 07/10] iommu/vt-d: enable kdump support in iommu module
Date: Fri, 10 Apr 2015 16:42:10 +0800
Message-Id: <1428655333-19504-8-git-send-email-zhen-hual@hp.com>
In-Reply-To: <1428655333-19504-1-git-send-email-zhen-hual@hp.com>
References: <1428655333-19504-1-git-send-email-zhen-hual@hp.com>
X-Mailing-List: linux-pci@vger.kernel.org

Modify the operation of the following functions when they are called
during a crash dump:
	device_to_context_entry
	free_context_table
	get_domain_for_dev
	init_dmars
	intel_iommu_init

Bill Sumner: original version.

Zhenhua: updated the names of the new functions; do not disable and
re-enable TE in the kdump kernel; use the did and gaw from the old
context entry.

Signed-off-by: Bill Sumner
Signed-off-by: Li, Zhen-Hua
---
 drivers/iommu/intel-iommu.c | 94 +++++++++++++++++++++++++++++++++++++++------
 1 file changed, 82 insertions(+), 12 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 78c1d65..3d4ea43 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -399,6 +399,7 @@ static int copy_root_entry_table(struct intel_iommu *iommu);
 static int intel_iommu_load_translation_tables(struct intel_iommu *iommu);
 
 static void iommu_check_pre_te_status(struct intel_iommu *iommu);
+static u8 g_translation_pre_enabled;
 
 /*
  * This domain is a statically identity mapping domain.
@@ -839,6 +840,9 @@ static struct context_entry * device_to_context_entry(struct intel_iommu *iommu,
 		set_root_value(root, phy_addr);
 		set_root_present(root);
 		__iommu_flush_cache(iommu, root, sizeof(*root));
+
+		if (iommu->pre_enabled_trans)
+			__iommu_update_old_root_entry(iommu, bus);
 	}
 	spin_unlock_irqrestore(&iommu->lock, flags);
 	return &context[devfn];
@@ -890,7 +894,8 @@ static void free_context_table(struct intel_iommu *iommu)
 
 	spin_lock_irqsave(&iommu->lock, flags);
 	if (!iommu->root_entry) {
-		goto out;
+		spin_unlock_irqrestore(&iommu->lock, flags);
+		return;
 	}
 	for (i = 0; i < ROOT_ENTRY_NR; i++) {
 		root = &iommu->root_entry[i];
@@ -898,10 +903,23 @@ static void free_context_table(struct intel_iommu *iommu)
 		if (context)
 			free_pgtable_page(context);
 	}
+
+	if (iommu->pre_enabled_trans) {
+		iommu->root_entry_old_phys = 0;
+		root = iommu->root_entry_old_virt;
+		iommu->root_entry_old_virt = NULL;
+	}
+
 	free_pgtable_page(iommu->root_entry);
 	iommu->root_entry = NULL;
-out:
+
 	spin_unlock_irqrestore(&iommu->lock, flags);
+
+	/* iounmap() is called after releasing the spin lock because
+	 * calling it while holding a spin lock may cause an error.
+	 */
+	if (iommu->pre_enabled_trans)
+		iounmap(root);
 }
 
 static struct dma_pte *pfn_to_dma_pte(struct dmar_domain *domain,
@@ -2319,6 +2337,7 @@ static struct dmar_domain *get_domain_for_dev(struct device *dev, int gaw)
 	unsigned long flags;
 	u8 bus, devfn;
 	int did = -1;	/* Default to "no domain_id supplied" */
+	struct context_entry *ce = NULL;
 
 	domain = find_domain(dev);
 	if (domain)
@@ -2352,6 +2371,20 @@ static struct dmar_domain *get_domain_for_dev(struct device *dev, int gaw)
 	domain = alloc_domain(0);
 	if (!domain)
 		return NULL;
+
+	if (iommu->pre_enabled_trans) {
+		/*
+		 * If this device had a did in the old kernel,
+		 * use its values instead of generating new ones.
+		 */
+		ce = device_to_existing_context_entry(iommu, bus, devfn);
+
+		if (ce) {
+			did = context_domain_id(ce);
+			gaw = agaw_to_width(context_address_width(ce));
+		}
+	}
+
 	domain->id = iommu_attach_domain_with_id(domain, iommu, did);
 	if (domain->id < 0) {
 		free_domain_mem(domain);
@@ -2879,6 +2912,7 @@ static int __init init_dmars(void)
 		goto free_g_iommus;
 	}
 
+	g_translation_pre_enabled = 0;	/* To know whether to skip RMRR */
 	for_each_active_iommu(iommu, drhd) {
 		g_iommus[iommu->seq_id] = iommu;
 
@@ -2886,14 +2920,30 @@ static int __init init_dmars(void)
 		if (ret)
 			goto free_iommu;
 
-		/*
-		 * TBD:
-		 * we could share the same root & context tables
-		 * among all IOMMU's. Need to Split it later.
-		 */
-		ret = iommu_alloc_root_entry(iommu);
-		if (ret)
-			goto free_iommu;
+		iommu_check_pre_te_status(iommu);
+		if (iommu->pre_enabled_trans) {
+			pr_info("IOMMU: Copying translate tables from panicked kernel\n");
+			ret = intel_iommu_load_translation_tables(iommu);
+			if (ret) {
+				pr_err("IOMMU: Copy translate tables failed\n");
+
+				/* Best to stop trying */
+				goto free_iommu;
+			}
+			pr_info("IOMMU: root_cache:0x%12.12llx phys:0x%12.12llx\n",
+				(u64)iommu->root_entry,
+				(u64)iommu->root_entry_old_phys);
+		} else {
+			/*
+			 * TBD:
+			 * we could share the same root & context tables
+			 * among all IOMMU's. Need to Split it later.
+			 */
+			ret = iommu_alloc_root_entry(iommu);
+			if (ret)
+				goto free_iommu;
+		}
+
 		if (!ecap_pass_through(iommu->ecap))
 			hw_pass_through = 0;
 	}
@@ -2911,6 +2961,14 @@ static int __init init_dmars(void)
 	check_tylersburg_isoch();
 
 	/*
+	 * In the second kernel: skip setting up new domains for
+	 * si, rmrr, and the isa bus on the expectation that these
+	 * translations were copied from the old kernel.
+	 */
+	if (g_translation_pre_enabled)
+		goto skip_new_domains_for_si_rmrr_isa;
+
+	/*
 	 * If pass through is not set or not enabled, setup context entries for
 	 * identity mappings for rmrr, gfx, and isa and may fall back to static
 	 * identity mapping if iommu_identity_mapping is set.
@@ -2950,6 +3008,8 @@ static int __init init_dmars(void)
 
 	iommu_prepare_isa();
 
+skip_new_domains_for_si_rmrr_isa:;
+
 	/*
 	 * for each drhd
 	 *   enable fault log
@@ -2978,7 +3038,13 @@ static int __init init_dmars(void)
 
 		iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
 		iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
-		iommu_enable_translation(iommu);
+
+		if (iommu->pre_enabled_trans) {
+			if (!(iommu->gcmd & DMA_GCMD_TE))
+				iommu_enable_translation(iommu);
+		} else
+			iommu_enable_translation(iommu);
+
 		iommu_disable_protect_mem_regions(iommu);
 	}
 
@@ -4264,11 +4330,14 @@ int __init intel_iommu_init(void)
 	}
 
 	/*
-	 * Disable translation if already enabled prior to OS handover.
+	 * There is no need to disable translation if it was already enabled
+	 * prior to OS handover, because the old tables will be copied.
 	 */
+	/*
 	for_each_active_iommu(iommu, drhd)
 		if (iommu->gcmd & DMA_GCMD_TE)
 			iommu_disable_translation(iommu);
+	*/
 
 	if (dmar_dev_scope_init() < 0) {
 		if (force_on)
@@ -5090,5 +5159,6 @@ static void iommu_check_pre_te_status(struct intel_iommu *iommu)
 	if (sts & DMA_GSTS_TES) {
 		pr_info("Translation is enabled prior to OS.\n");
 		iommu->pre_enabled_trans = 1;
+		g_translation_pre_enabled = 1;
 	}
 }
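
As an illustration of the control flow this patch adds, the sketch below is a
small, self-contained user-space model; it is not kernel code and not part of
the patch. It mirrors the decision made in init_dmars(): detect whether
translation was left enabled by the panicked kernel, as
iommu_check_pre_te_status() does by testing the TES bit of the global status
register, and if so take the copy path instead of allocating a new root entry.
The struct, helper names, and the assumed bit position are simplified
stand-ins introduced only for this example.

/*
 * Simplified stand-alone model of the kdump detection flow.
 * MODEL_GSTS_TES and struct model_iommu are stand-ins, not the
 * kernel's definitions.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MODEL_GSTS_TES (1u << 31)	/* assumed translation-enable status bit */

struct model_iommu {
	uint32_t gsts;			/* snapshot of the global status register */
	bool pre_enabled_trans;		/* translation left on by the old kernel? */
};

/* Mirrors iommu_check_pre_te_status(): remember whether translation
 * was already enabled before this (kdump) kernel started. */
static void check_pre_te_status(struct model_iommu *iommu)
{
	if (iommu->gsts & MODEL_GSTS_TES)
		iommu->pre_enabled_trans = true;
}

/* Mirrors the init_dmars() change: copy the old tables when translation
 * was pre-enabled, otherwise allocate a fresh root entry. */
static void setup_root(struct model_iommu *iommu)
{
	check_pre_te_status(iommu);
	if (iommu->pre_enabled_trans)
		printf("copy translation tables from the panicked kernel\n");
	else
		printf("allocate a new root entry\n");
}

int main(void)
{
	struct model_iommu kdump_case  = { .gsts = MODEL_GSTS_TES };
	struct model_iommu normal_case = { .gsts = 0 };

	setup_root(&kdump_case);	/* takes the copy path */
	setup_root(&normal_case);	/* takes the allocate path */
	return 0;
}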