From patchwork Fri Apr 29 09:25:10 2016
X-Patchwork-Submitter: Quan Xu
X-Patchwork-Id: 8979081
From: Quan Xu <quan.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Fri, 29 Apr 2016 17:25:10 +0800
Message-Id: <1461921917-48394-4-git-send-email-quan.xu@intel.com>
In-Reply-To: <1461921917-48394-1-git-send-email-quan.xu@intel.com>
References: <1461921917-48394-1-git-send-email-quan.xu@intel.com>
Cc: Kevin Tian, Keir Fraser, Jan Beulich, George Dunlap, Andrew Cooper,
    dario.faggioli@citrix.com, Jun Nakajima, Quan Xu
Subject: [Xen-devel] [PATCH v3 03/10] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping

When an IOMMU mapping fails, we issue a best-effort rollback: stop further
IOMMU mapping, unmap the previously established IOMMU mappings, and then
report the error up the call trees. When rollback is not feasible for the
hardware domain (in the early initialization phase, or as a trade-off
against complexity), we proceed on a best-effort basis and only log an
error message.

IOMMU unmapping should continue despite an error, in an attempt at
best-effort cleanup (a standalone sketch of both patterns follows the
diff).

Signed-off-by: Quan Xu <quan.xu@intel.com>

CC: Keir Fraser
CC: Jan Beulich
CC: Andrew Cooper
CC: Jun Nakajima
CC: Kevin Tian
CC: George Dunlap
---
 xen/arch/x86/mm.c               | 13 ++++++++-----
 xen/arch/x86/mm/p2m-ept.c       | 27 +++++++++++++++++++++++++--
 xen/arch/x86/mm/p2m-pt.c        | 24 ++++++++++++++++++++----
 xen/arch/x86/mm/p2m.c           | 11 +++++++++--
 xen/drivers/passthrough/iommu.c | 14 +++++++++++++-
 5 files changed, 75 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index a42097f..427097d 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2467,7 +2467,7 @@ static int __get_page_type(struct page_info *page, unsigned long type,
                            int preemptible)
 {
     unsigned long nx, x, y = page->u.inuse.type_info;
-    int rc = 0;
+    int rc = 0, ret = 0;
 
     ASSERT(!(type & ~(PGT_type_mask | PGT_pae_xen_l2)));
 
@@ -2578,11 +2578,11 @@ static int __get_page_type(struct page_info *page, unsigned long type,
         if ( d && is_pv_domain(d) && unlikely(need_iommu(d)) )
         {
             if ( (x & PGT_type_mask) == PGT_writable_page )
-                iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
+                ret = iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
             else if ( type == PGT_writable_page )
-                iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
-                               page_to_mfn(page),
-                               IOMMUF_readable|IOMMUF_writable);
+                ret = iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
+                                     page_to_mfn(page),
+                                     IOMMUF_readable|IOMMUF_writable);
         }
     }
 
@@ -2599,6 +2599,9 @@ static int __get_page_type(struct page_info *page, unsigned long type,
     if ( (x & PGT_partial) && !(nx & PGT_partial) )
         put_page(page);
 
+    if ( !rc )
+        rc = ret;
+
     return rc;
 }
 
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 1ed5b47..df87944 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -821,6 +821,8 @@ out:
     if ( needs_sync )
         ept_sync_domain(p2m);
 
+    ret = 0;
+
     /* For host p2m, may need to change VT-d page table.*/
     if ( rc == 0 && p2m_is_hostp2m(p2m) && need_iommu(d) &&
          need_modify_vtd_table )
@@ -831,11 +833,29 @@ out:
         {
             if ( iommu_flags )
                 for ( i = 0; i < (1 << order); i++ )
-                    iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
+                {
+                    rc = iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
+
+                    if ( !ret && unlikely(rc) )
+                    {
+                        while ( i-- )
+                            iommu_unmap_page(d, gfn + i);
+
+                        ret = rc;
+                        break;
+                    }
+                }
             else
                 for ( i = 0; i < (1 << order); i++ )
-                    iommu_unmap_page(d, gfn + i);
+                {
+                    rc = iommu_unmap_page(d, gfn + i);
+
+                    if ( !ret && unlikely(rc) )
+                        ret = rc;
+                }
         }
+
+        rc = 0;
     }
 
     unmap_domain_page(table);
 
@@ -850,6 +870,9 @@ out:
     if ( rc == 0 && p2m_is_hostp2m(p2m) )
         p2m_altp2m_propagate_change(d, _gfn(gfn), mfn, order, p2mt, p2ma);
 
+    if ( !rc )
+        rc = ret;
+
     return rc;
 }
 
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 3d80612..9f5539e 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -498,7 +498,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     l1_pgentry_t intermediate_entry = l1e_empty();
     l2_pgentry_t l2e_content;
     l3_pgentry_t l3e_content;
-    int rc;
+    int rc, ret;
     unsigned int iommu_pte_flags = p2m_get_iommu_flags(p2mt);
     /*
      * old_mfn and iommu_old_flags control possible flush/update needs on the
@@ -680,11 +680,27 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
         }
         else if ( iommu_pte_flags )
             for ( i = 0; i < (1UL << page_order); i++ )
-                iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
-                               iommu_pte_flags);
+            {
+                ret = iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
+                                     iommu_pte_flags);
+
+                if ( !rc && unlikely(ret) )
+                {
+                    while ( i-- )
+                        iommu_unmap_page(p2m->domain, gfn + i);
+
+                    rc = ret;
+                    break;
+                }
+            }
         else
             for ( i = 0; i < (1UL << page_order); i++ )
-                iommu_unmap_page(p2m->domain, gfn + i);
+            {
+                ret = iommu_unmap_page(p2m->domain, gfn + i);
+
+                if ( !rc && unlikely(ret) )
+                    rc = ret;
+            }
     }
 
     /*
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 6eef2f3..6a9bba1 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -636,13 +636,20 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn,
     mfn_t mfn_return;
     p2m_type_t t;
     p2m_access_t a;
+    int rc = 0, ret;
 
     if ( !paging_mode_translate(p2m->domain) )
     {
         if ( need_iommu(p2m->domain) )
             for ( i = 0; i < (1 << page_order); i++ )
-                iommu_unmap_page(p2m->domain, mfn + i);
-        return 0;
+            {
+                ret = iommu_unmap_page(p2m->domain, mfn + i);
+
+                if ( !rc && unlikely(ret) )
+                    rc = ret;
+            }
+
+        return rc;
     }
 
     ASSERT(gfn_locked_by_me(p2m, gfn));
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index a0003ac..d74433d 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -172,6 +172,8 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
     {
         struct page_info *page;
         unsigned int i = 0;
+        int ret, rc = 0;
+
         page_list_for_each ( page, &d->page_list )
         {
             unsigned long mfn = page_to_mfn(page);
@@ -182,10 +184,20 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
                  ((page->u.inuse.type_info & PGT_type_mask)
                   == PGT_writable_page) )
                 mapping |= IOMMUF_writable;
-            hd->platform_ops->map_page(d, gfn, mfn, mapping);
+
+            ret = hd->platform_ops->map_page(d, gfn, mfn, mapping);
+
+            if ( unlikely(ret) )
+                rc = ret;
+
             if ( !(i++ & 0xfffff) )
                 process_pending_softirqs();
         }
+
+        if ( rc )
+            printk(XENLOG_WARNING
+                   "iommu_hwdom_init: IOMMU mapping failed for dom%d.\n",
+                   d->domain_id);
     }
 
     return hd->platform_ops->hwdom_init(d);
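
For reference, the error-handling scheme used throughout the series can be
shown in standalone form. The following is a minimal sketch, not part of the
patch: map_one()/unmap_one() are hypothetical stand-ins for
iommu_map_page()/iommu_unmap_page(), assumed to return 0 on success and
non-zero on failure. Mapping rolls back the frames already mapped in the
batch as soon as one map fails; unmapping records the first error but keeps
going, so cleanup stays best effort.

#include <stdio.h>

/* Hypothetical per-frame primitives; here map_one() simulates a
 * failure on one particular frame so the rollback path is exercised. */
static int map_one(unsigned long gfn)
{
    return gfn == 3 ? -1 : 0;
}

static int unmap_one(unsigned long gfn)
{
    printf("unmap gfn %lu\n", gfn);
    return 0;
}

/* Map 'count' frames starting at 'gfn'; on the first failure, unmap
 * the frames already mapped in this batch (in reverse) and return the
 * error, mirroring the p2m-ept.c/p2m-pt.c map loops above. */
static int map_batch(unsigned long gfn, unsigned long count)
{
    unsigned long i;
    int rc = 0;

    for ( i = 0; i < count; i++ )
    {
        rc = map_one(gfn + i);
        if ( rc )
        {
            while ( i-- )              /* best-effort rollback */
                unmap_one(gfn + i);
            break;
        }
    }

    return rc;
}

/* Unmap 'count' frames; record the first error but keep unmapping,
 * mirroring the p2m_remove_page() loop above. */
static int unmap_batch(unsigned long gfn, unsigned long count)
{
    unsigned long i;
    int rc = 0, ret;

    for ( i = 0; i < count; i++ )
    {
        ret = unmap_one(gfn + i);
        if ( !rc && ret )
            rc = ret;
    }

    return rc;
}

int main(void)
{
    int rc = map_batch(0, 8);

    if ( rc )
        fprintf(stderr, "mapping failed (%d), rolled back\n", rc);

    return unmap_batch(0, 8) ? 1 : 0;
}

The asymmetry is deliberate: stopping a map loop early leaves a consistent
state once the partial batch is rolled back, while stopping an unmap loop
early would leave stale IOMMU mappings behind, so unmapping presses on and
merely reports the first error.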