From patchwork Fri May 6 08:54:31 2016
From: Quan Xu <quan.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Fri, 6 May 2016 16:54:31 +0800
Message-Id: <1462524880-67205-2-git-send-email-quan.xu@intel.com>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1462524880-67205-1-git-send-email-quan.xu@intel.com>
References: <1462524880-67205-1-git-send-email-quan.xu@intel.com>
Cc: Kevin Tian, Keir Fraser, Quan Xu, Andrew Cooper,
    dario.faggioli@citrix.com, Jan Beulich, Feng Wu
Subject: [Xen-devel] [PATCH v4 01/10] vt-d: fix the IOMMU flush issue

The value propagated from the IOMMU flush interfaces may be positive,
which indicates that the caller needs to flush the IOMMU write buffer;
it is not a failure. When the propagated value is positive, this patch
fixes the flush issue as follows:
  - call iommu_flush_write_buffer() to flush the write buffer.
  - return zero.

Signed-off-by: Quan Xu <quan.xu@intel.com>
CC: Kevin Tian
CC: Feng Wu
CC: Keir Fraser
CC: Jan Beulich
CC: Andrew Cooper
---
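A note on the convention this patch adopts (not part of the commit
message): a flush callback returns a negative value on a real error,
zero on success, and a positive value when the hardware instead needs a
write-buffer flush. A minimal sketch of the pattern every call site now
follows -- the helper name normalize_flush_rc() is made up here purely
for illustration and does not exist in the tree:

    /*
     * Sketch only: 'rc' is the return value of any VT-d flush callback
     * (e.g. iommu_flush_iotlb_dsi() or iommu_flush_iotlb_psi()).
     * A positive rc is not an error: it means the IOMMU could not be
     * flushed directly and the write buffer must be flushed instead.
     */
    static int normalize_flush_rc(struct iommu *iommu, int rc)
    {
        if ( rc > 0 )
        {
            iommu_flush_write_buffer(iommu);
            rc = 0;              /* treated as success */
        }

        return rc;               /* 0, or a negative error value */
    }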
 xen/drivers/passthrough/vtd/iommu.c | 99 ++++++++++++++++++++++++-------------
 xen/include/asm-x86/iommu.h         |  2 +-
 2 files changed, 65 insertions(+), 36 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index db83949..64093a9 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -557,14 +557,16 @@ static void iommu_flush_all(void)
     }
 }
 
-static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
-        int dma_old_pte_present, unsigned int page_count)
+static int iommu_flush_iotlb(struct domain *d, unsigned long gfn,
+                             bool_t dma_old_pte_present,
+                             unsigned int page_count)
 {
     struct domain_iommu *hd = dom_iommu(d);
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu;
     int flush_dev_iotlb;
     int iommu_domid;
+    int rc = 0;
 
     /*
      * No need pcideves_lock here because we have flush
@@ -583,29 +585,34 @@ static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
             continue;
 
         if ( page_count != 1 || gfn == INVALID_GFN )
-        {
-            if ( iommu_flush_iotlb_dsi(iommu, iommu_domid,
-                                       0, flush_dev_iotlb) )
-                iommu_flush_write_buffer(iommu);
-        }
+            rc = iommu_flush_iotlb_dsi(iommu, iommu_domid,
+                                       0, flush_dev_iotlb);
         else
+            rc = iommu_flush_iotlb_psi(iommu, iommu_domid,
+                                       (paddr_t)gfn << PAGE_SHIFT_4K,
+                                       PAGE_ORDER_4K,
+                                       !dma_old_pte_present,
+                                       flush_dev_iotlb);
+
+        if ( rc > 0 )
         {
-            if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
-                                       (paddr_t)gfn << PAGE_SHIFT_4K, PAGE_ORDER_4K,
-                                       !dma_old_pte_present, flush_dev_iotlb) )
-                iommu_flush_write_buffer(iommu);
+            iommu_flush_write_buffer(iommu);
+            rc = 0;
         }
     }
+
+    return rc;
 }
 
-static void intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+static void iommu_flush_iotlb_page(struct domain *d, unsigned long gfn,
+                                   unsigned int page_count)
 {
-    __intel_iommu_iotlb_flush(d, gfn, 1, page_count);
+    iommu_flush_iotlb(d, gfn, 1, page_count);
 }
 
-static void intel_iommu_iotlb_flush_all(struct domain *d)
+static void iommu_flush_iotlb_all(struct domain *d)
 {
-    __intel_iommu_iotlb_flush(d, INVALID_GFN, 0, 0);
+    iommu_flush_iotlb(d, INVALID_GFN, 0, 0);
 }
 
 /* clear one page's page table */
@@ -639,7 +646,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
-        __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K, 1, 1);
+        iommu_flush_iotlb(domain, addr >> PAGE_SHIFT_4K, 1, 1);
 
     unmap_vtd_domain_page(page);
 }
@@ -1278,6 +1285,7 @@ int domain_context_mapping_one(
     u64 maddr, pgd_maddr;
     u16 seg = iommu->intel->drhd->segment;
     int agaw;
+    int rc;
 
     ASSERT(pcidevs_locked());
     spin_lock(&iommu->lock);
@@ -1391,13 +1399,19 @@ int domain_context_mapping_one(
     spin_unlock(&iommu->lock);
 
     /* Context entry was previously non-present (with domid 0). */
-    if ( iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) | devfn,
-                                    DMA_CCMD_MASK_NOBIT, 1) )
-        iommu_flush_write_buffer(iommu);
-    else
+    rc = iommu_flush_context_device(iommu, 0, (((u16)bus) << 8) | devfn,
+                                    DMA_CCMD_MASK_NOBIT, 1);
+
+    if ( !rc )
     {
         int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-        iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
+        rc = iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
+    }
+
+    if ( rc > 0 )
+    {
+        iommu_flush_write_buffer(iommu);
+        rc = 0;
     }
 
     set_bit(iommu->index, &hd->arch.iommu_bitmap);
@@ -1407,7 +1421,7 @@ int domain_context_mapping_one(
     if ( !seg )
         me_wifi_quirk(domain, bus, devfn, MAP_ME_PHANTOM_FUNC);
 
-    return 0;
+    return rc;
 }
 
 static int domain_context_mapping(
@@ -1502,6 +1516,7 @@ int domain_context_unmap_one(
     struct context_entry *context, *context_entries;
     u64 maddr;
     int iommu_domid;
+    int rc;
 
     ASSERT(pcidevs_locked());
     spin_lock(&iommu->lock);
@@ -1529,14 +1544,20 @@ int domain_context_unmap_one(
         return -EINVAL;
     }
 
-    if ( iommu_flush_context_device(iommu, iommu_domid,
+    rc = iommu_flush_context_device(iommu, iommu_domid,
                                     (((u16)bus) << 8) | devfn,
-                                    DMA_CCMD_MASK_NOBIT, 0) )
-        iommu_flush_write_buffer(iommu);
-    else
+                                    DMA_CCMD_MASK_NOBIT, 0);
+
+    if ( !rc )
     {
         int flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-        iommu_flush_iotlb_dsi(iommu, iommu_domid, 0, flush_dev_iotlb);
+        rc = iommu_flush_iotlb_dsi(iommu, iommu_domid, 0, flush_dev_iotlb);
+    }
+
+    if ( rc > 0 )
+    {
+        iommu_flush_write_buffer(iommu);
+        rc = 0;
     }
 
     spin_unlock(&iommu->lock);
@@ -1545,7 +1566,7 @@ int domain_context_unmap_one(
     if ( !iommu->intel->drhd->segment )
         me_wifi_quirk(domain, bus, devfn, UNMAP_ME_PHANTOM_FUNC);
 
-    return 0;
+    return rc;
 }
 
 static int domain_context_unmap(
@@ -1734,7 +1755,7 @@ static int intel_iommu_map_page(
     unmap_vtd_domain_page(page);
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
-        __intel_iommu_iotlb_flush(d, gfn, dma_pte_present(old), 1);
+        iommu_flush_iotlb(d, gfn, dma_pte_present(old), 1);
 
     return 0;
 }
@@ -1750,14 +1771,15 @@ static int intel_iommu_unmap_page(struct domain *d, unsigned long gfn)
     return 0;
 }
 
-void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
-                     int order, int present)
+int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
+                    int order, bool_t present)
 {
     struct acpi_drhd_unit *drhd;
     struct iommu *iommu = NULL;
     struct domain_iommu *hd = dom_iommu(d);
     int flush_dev_iotlb;
     int iommu_domid;
+    int rc = 0;
 
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
@@ -1771,11 +1793,18 @@ void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
         iommu_domid= domain_iommu_domid(d, iommu);
         if ( iommu_domid == -1 )
             continue;
-        if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
+
+        rc = iommu_flush_iotlb_psi(iommu, iommu_domid,
                                    (paddr_t)gfn << PAGE_SHIFT_4K,
-                                   order, !present, flush_dev_iotlb) )
+                                   order, !present, flush_dev_iotlb);
+        if ( rc > 0 )
+        {
             iommu_flush_write_buffer(iommu);
+            rc = 0;
+        }
     }
+
+    return rc;
 }
 
 static int __init vtd_ept_page_compatible(struct iommu *iommu)
@@ -2553,8 +2582,8 @@ const struct iommu_ops intel_iommu_ops = {
     .resume = vtd_resume,
     .share_p2m = iommu_set_pgd,
     .crash_shutdown = vtd_crash_shutdown,
-    .iotlb_flush = intel_iommu_iotlb_flush,
-    .iotlb_flush_all = intel_iommu_iotlb_flush_all,
+    .iotlb_flush = iommu_flush_iotlb_page,
+    .iotlb_flush_all = iommu_flush_iotlb_all,
     .get_reserved_device_memory = intel_iommu_get_reserved_device_memory,
     .dump_p2m_table = vtd_dump_p2m_table,
 };
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index e82a2f0..43f1620 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -27,7 +27,7 @@ int iommu_setup_hpet_msi(struct msi_desc *);
 /* While VT-d specific, this must get declared in a generic header. */
 int adjust_vtd_irq_affinities(void);
-void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, int present);
+int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, bool_t present);
 
 bool_t iommu_supports_eim(void);
 int iommu_enable_x2apic_IR(void);
 void iommu_disable_x2apic_IR(void);
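
A closing note, not part of the patch: with iommu_pte_flush() now
returning int, callers become able to propagate a flush failure rather
than silently discarding it. A hypothetical call site, for illustration
only (the variable names here are assumptions, not code from the tree):

    int rc;

    /* Illustrative only: hand any flush failure back to our caller. */
    rc = iommu_pte_flush(d, gfn, pte, order, present);
    if ( rc )
        return rc;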