From patchwork Thu Mar 19 05:36:22 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Li, Zhen-Hua"
X-Patchwork-Id: 6046621
X-Patchwork-Delegate: bhelgaas@google.com
From: "Li, Zhen-Hua"
Subject: [PATCH v9 04/10] iommu/vt-d: functions to copy data from old mem
Date: Thu, 19 Mar 2015 13:36:22 +0800
Message-Id: <1426743388-26908-5-git-send-email-zhen-hual@hp.com>
X-Mailer: git-send-email 2.0.0-rc0
In-Reply-To: <1426743388-26908-1-git-send-email-zhen-hual@hp.com>
References: <1426743388-26908-1-git-send-email-zhen-hual@hp.com>
Sender: linux-pci-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-pci@vger.kernel.org

Add functions to copy data from the old kernel's memory. These functions
are used to copy the old kernel's context tables and page tables.

To avoid calling iounmap() between spin_lock_irqsave() and
spin_unlock_irqrestore(), keep the ioremap()ed pointers on a linked list
and call iounmap() on them later, outside the locked region.

Li, Zhen-hua: The functions and logic.
Takao Indoh: Check whether the pfn is RAM:
                if (page_is_ram(pfn))

Signed-off-by: Li, Zhen-Hua
Signed-off-by: Takao Indoh
---
 drivers/iommu/intel-iommu.c | 102 ++++++++++++++++++++++++++++++++++++++++++++
 include/linux/intel-iommu.h |   9 ++++
 2 files changed, 111 insertions(+)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index f7dbe70..7f3484a 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -374,6 +374,17 @@ static struct context_entry *device_to_existing_context_entry(
 						u8 bus, u8 devfn);
 
+/*
+ * A structure used to store the addresses allocated by ioremap();
+ * we need to call iounmap() to free them outside of spin_lock_irqsave/unlock.
+ */
+struct iommu_remapped_entry {
+	struct list_head list;
+	void __iomem *mem;
+};
+static LIST_HEAD(__iommu_remapped_mem);
+static DEFINE_MUTEX(__iommu_mem_list_lock);
+
 #endif /* CONFIG_CRASH_DUMP */
 
 /*
@@ -4822,4 +4833,95 @@ static struct context_entry *device_to_existing_context_entry(
 	return ret;
 }
 
+/*
+ * Copy memory from a physically-addressed area into a virtually-addressed area.
+ */
+int __iommu_load_from_oldmem(void *to, unsigned long from, unsigned long size)
+{
+	unsigned long pfn;		/* Page Frame Number */
+	size_t csize = (size_t)size;	/* number of bytes to copy */
+	unsigned long offset;		/* lower 12 bits of from */
+	void __iomem *virt_mem;
+	struct iommu_remapped_entry *mapped;
+
+	pfn = from >> VTD_PAGE_SHIFT;
+	offset = from & (~VTD_PAGE_MASK);
+
+	if (page_is_ram(pfn)) {
+		memcpy(to, pfn_to_kaddr(pfn) + offset, csize);
+	} else {
+		mapped = kzalloc(sizeof(struct iommu_remapped_entry),
+				GFP_KERNEL);
+		if (!mapped)
+			return -ENOMEM;
+
+		virt_mem = ioremap_cache((unsigned long)from, size);
+		if (!virt_mem) {
+			kfree(mapped);
+			return -ENOMEM;
+		}
+		memcpy(to, virt_mem, size);
+
+		mutex_lock(&__iommu_mem_list_lock);
+		mapped->mem = virt_mem;
+		list_add_tail(&mapped->list, &__iommu_remapped_mem);
+		mutex_unlock(&__iommu_mem_list_lock);
+	}
+	return size;
+}
+
+/*
+ * Copy memory from a virtually-addressed area into a physically-addressed area.
+ */
+int __iommu_save_to_oldmem(unsigned long to, void *from, unsigned long size)
+{
+	unsigned long pfn;		/* Page Frame Number */
+	size_t csize = (size_t)size;	/* number of bytes to copy */
+	unsigned long offset;		/* lower 12 bits of to */
+	void __iomem *virt_mem;
+	struct iommu_remapped_entry *mapped;
+
+	pfn = to >> VTD_PAGE_SHIFT;
+	offset = to & (~VTD_PAGE_MASK);
+
+	if (page_is_ram(pfn)) {
+		memcpy(pfn_to_kaddr(pfn) + offset, from, csize);
+	} else {
+		mapped = kzalloc(sizeof(struct iommu_remapped_entry),
+				GFP_KERNEL);
+		if (!mapped)
+			return -ENOMEM;
+
+		virt_mem = ioremap_cache((unsigned long)to, size);
+		if (!virt_mem) {
+			kfree(mapped);
+			return -ENOMEM;
+		}
+		memcpy(virt_mem, from, size);
+
+		mutex_lock(&__iommu_mem_list_lock);
+		mapped->mem = virt_mem;
+		list_add_tail(&mapped->list, &__iommu_remapped_mem);
+		mutex_unlock(&__iommu_mem_list_lock);
+	}
+	return size;
+}
+
+/*
+ * Free the memory mapped by ioremap() above.
+ */
+int __iommu_free_mapped_mem(void)
+{
+	struct iommu_remapped_entry *mem_entry, *tmp;
+
+	mutex_lock(&__iommu_mem_list_lock);
+	list_for_each_entry_safe(mem_entry, tmp, &__iommu_remapped_mem, list) {
+		iounmap(mem_entry->mem);
+		list_del(&mem_entry->list);
+		kfree(mem_entry);
+	}
+	mutex_unlock(&__iommu_mem_list_lock);
+	return 0;
+}
+
 #endif /* CONFIG_CRASH_DUMP */
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index a65208a..8ffa523 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -368,4 +369,12 @@ extern int dmar_ir_support(void);
 
 extern const struct attribute_group *intel_iommu_groups[];
 
+#ifdef CONFIG_CRASH_DUMP
+extern int __iommu_load_from_oldmem(void *to, unsigned long from,
+					unsigned long size);
+extern int __iommu_save_to_oldmem(unsigned long to, void *from,
+					unsigned long size);
+extern int __iommu_free_mapped_mem(void);
+#endif /* CONFIG_CRASH_DUMP */
+
 #endif

(Note: the header names in the include hunk above were stripped when this mail
was archived and are left blank rather than guessed.)
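
For context, here is a minimal, hypothetical sketch of how a later patch in
the series might call these helpers when pulling a context table out of the
crashed kernel's memory. The wrapper function and the old_context_phys value
are illustrative assumptions, not part of this patch:

/*
 * Illustrative example only -- not part of this patch.  It shows the
 * intended calling pattern: __iommu_load_from_oldmem() copies one page
 * of old-kernel data (ioremap_cache()ing it when the page is not RAM
 * and queueing the mapping on __iommu_remapped_mem), and the queued
 * mappings are then released in one batch.
 */
static int example_copy_old_context_page(void *new_ce,
					 unsigned long old_context_phys)
{
	int ret;

	/* old_context_phys: hypothetical physical address of an
	 * old-kernel context table page.
	 */
	ret = __iommu_load_from_oldmem(new_ce, old_context_phys,
					VTD_PAGE_SIZE);
	if (ret < 0)
		return ret;

	/*
	 * Once all copying is finished, iounmap() everything the copy
	 * helpers queued, outside of any spinlock-protected region.
	 */
	__iommu_free_mapped_mem();

	return 0;
}

The point of the list is exactly this split: the copy helpers only queue the
ioremap()ed areas, so iounmap() never has to be called between
spin_lock_irqsave() and spin_unlock_irqrestore().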