From patchwork Tue Sep 27 15:57:04 2016
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 9352215
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Tue, 27 Sep 2016 17:57:04 +0200
Message-ID: <1474991845-27962-10-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 2.7.4 (Apple Git-66)
In-Reply-To: <1474991845-27962-1-git-send-email-roger.pau@citrix.com>
References: <1474991845-27962-1-git-send-email-roger.pau@citrix.com>
Cc: Kevin Tian, Feng Wu, George Dunlap, Andrew Cooper, Jan Beulich,
 boris.ostrovsky@oracle.com, Roger Pau Monne
Subject: [Xen-devel] [PATCH v2 09/30] x86/vtd: fix and simplify mapping RMRR
 regions
List-Id: Xen developer discussion
The current code used by Intel VTd will only map RMRR regions for the
hardware domain, but will fail to map RMRR regions for unprivileged domains
unless the page tables are shared between EPT and IOMMU.

Fix this and simplify the code, removing the {set/clear}_identity_p2m_entry
helpers and just using the normal MMIO mapping functions. Introduce a new
MMIO mapping/unmapping helper that takes care of checking for pending IRQs
if the mapped region is big enough that it cannot be done in one shot.

Signed-off-by: Roger Pau Monné
---
Cc: George Dunlap
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Kevin Tian
Cc: Feng Wu
---
 xen/arch/x86/mm/p2m.c               | 86 -------------------------------
 xen/drivers/passthrough/vtd/iommu.c | 21 +++++----
 xen/include/asm-x86/p2m.h           |  5 ---
 xen/include/xen/p2m-common.h        | 30 +++++++++++++
 4 files changed, 42 insertions(+), 100 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 9526fff..44492ae 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1029,56 +1029,6 @@ int set_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
     return set_typed_p2m_entry(d, gfn, mfn, order, p2m_mmio_direct, access);
 }
 
-int set_identity_p2m_entry(struct domain *d, unsigned long gfn,
-                           p2m_access_t p2ma, unsigned int flag)
-{
-    p2m_type_t p2mt;
-    p2m_access_t a;
-    mfn_t mfn;
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    int ret;
-
-    if ( !paging_mode_translate(p2m->domain) )
-    {
-        if ( !need_iommu(d) )
-            return 0;
-        return iommu_map_page(d, gfn, gfn, IOMMUF_readable|IOMMUF_writable);
-    }
-
-    gfn_lock(p2m, gfn, 0);
-
-    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
-
-    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_dm )
-        ret = p2m_set_entry(p2m, gfn, _mfn(gfn), PAGE_ORDER_4K,
-                            p2m_mmio_direct, p2ma);
-    else if ( mfn_x(mfn) == gfn && p2mt == p2m_mmio_direct && a == p2ma )
-    {
-        ret = 0;
-        /*
-         * PVH fixme: during Dom0 PVH construction, p2m entries are being set
-         * but iomem regions are not mapped with IOMMU. This makes sure that
-         * RMRRs are correctly mapped with IOMMU.
-         */
-        if ( is_hardware_domain(d) && !iommu_use_hap_pt(d) )
-            ret = iommu_map_page(d, gfn, gfn, IOMMUF_readable|IOMMUF_writable);
-    }
-    else
-    {
-        if ( flag & XEN_DOMCTL_DEV_RDM_RELAXED )
-            ret = 0;
-        else
-            ret = -EBUSY;
-        printk(XENLOG_G_WARNING
-               "Cannot setup identity map d%d:%lx,"
-               " gfn already mapped to %lx.\n",
-               d->domain_id, gfn, mfn_x(mfn));
-    }
-
-    gfn_unlock(p2m, gfn, 0);
-    return ret;
-}
-
 /*
  * Returns:
  *    0        for success
@@ -1127,42 +1077,6 @@ int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
     return rc;
 }
 
-int clear_identity_p2m_entry(struct domain *d, unsigned long gfn)
-{
-    p2m_type_t p2mt;
-    p2m_access_t a;
-    mfn_t mfn;
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    int ret;
-
-    if ( !paging_mode_translate(d) )
-    {
-        if ( !need_iommu(d) )
-            return 0;
-        return iommu_unmap_page(d, gfn);
-    }
-
-    gfn_lock(p2m, gfn, 0);
-
-    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
-    if ( p2mt == p2m_mmio_direct && mfn_x(mfn) == gfn )
-    {
-        ret = p2m_set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_4K,
-                            p2m_invalid, p2m->default_access);
-        gfn_unlock(p2m, gfn, 0);
-    }
-    else
-    {
-        gfn_unlock(p2m, gfn, 0);
-        printk(XENLOG_G_WARNING
-               "non-identity map d%d:%lx not cleared (mapped to %lx)\n",
-               d->domain_id, gfn, mfn_x(mfn));
-        ret = 0;
-    }
-
-    return ret;
-}
-
 /* Returns: 0 for success, -errno for failure */
 int set_shared_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
 {
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 919993e..714a19e 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1896,6 +1896,7 @@ static int rmrr_identity_mapping(struct domain *d, bool_t map,
     unsigned long end_pfn = PAGE_ALIGN_4K(rmrr->end_address) >> PAGE_SHIFT_4K;
     struct mapped_rmrr *mrmrr;
     struct domain_iommu *hd = dom_iommu(d);
+    int ret = 0;
 
     ASSERT(pcidevs_locked());
     ASSERT(rmrr->base_address < rmrr->end_address);
@@ -1909,8 +1910,6 @@ static int rmrr_identity_mapping(struct domain *d, bool_t map,
         if ( mrmrr->base == rmrr->base_address &&
              mrmrr->end == rmrr->end_address )
         {
-            int ret = 0;
-
             if ( map )
             {
                 ++mrmrr->count;
@@ -1920,9 +1919,10 @@ static int rmrr_identity_mapping(struct domain *d, bool_t map,
             if ( --mrmrr->count )
                 return 0;
 
-            while ( base_pfn < end_pfn )
+            ret = modify_mmio_11(d, base_pfn, end_pfn - base_pfn, false);
+            while ( !iommu_use_hap_pt(d) && base_pfn < end_pfn )
             {
-                if ( clear_identity_p2m_entry(d, base_pfn) )
+                if ( iommu_unmap_page(d, base_pfn) )
                     ret = -ENXIO;
                 base_pfn++;
             }
@@ -1936,12 +1936,15 @@ static int rmrr_identity_mapping(struct domain *d, bool_t map,
     if ( !map )
         return -ENOENT;
 
-    while ( base_pfn < end_pfn )
+    ret = modify_mmio_11(d, base_pfn, end_pfn - base_pfn, true);
+    if ( ret )
+        return ret;
+    while ( !iommu_use_hap_pt(d) && base_pfn < end_pfn )
     {
-        int err = set_identity_p2m_entry(d, base_pfn, p2m_access_rw, flag);
-
-        if ( err )
-            return err;
+        ret = iommu_map_page(d, base_pfn, base_pfn,
+                             IOMMUF_readable|IOMMUF_writable);
+        if ( ret )
+            return ret;
         base_pfn++;
     }
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 7035860..ccf19e5 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -602,11 +602,6 @@ int set_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
 int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
                          unsigned int order);
 
-/* Set identity addresses in the p2m table (for pass-through) */
-int set_identity_p2m_entry(struct domain *d, unsigned long gfn,
-                           p2m_access_t p2ma, unsigned int flag);
-int clear_identity_p2m_entry(struct domain *d, unsigned long gfn);
-
 /* Add foreign mapping to the guest's p2m table.
  */
 int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
                     unsigned long gpfn, domid_t foreign_domid);
diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
index 3be1e91..5f6b4ef 100644
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -2,6 +2,7 @@
 #define _XEN_P2M_COMMON_H
 
 #include <public/vm_event.h>
+#include <xen/softirq.h>
 
 /*
  * Additional access types, which are used to further restrict
@@ -46,6 +47,35 @@ int unmap_mmio_regions(struct domain *d,
                        mfn_t mfn);
 
 /*
+ * Preemptive helper for mapping/unmapping MMIO regions.
+ */
+static inline int modify_mmio_11(struct domain *d, unsigned long pfn,
+                                 unsigned long nr_pages, bool map)
+{
+    int rc;
+
+    while ( nr_pages > 0 )
+    {
+        rc = (map ? map_mmio_regions : unmap_mmio_regions)
+             (d, _gfn(pfn), nr_pages, _mfn(pfn));
+        if ( rc == 0 )
+            break;
+        if ( rc < 0 )
+        {
+            printk(XENLOG_ERR
+                   "Failed to %smap %#lx - %#lx into domain %d memory map: %d\n",
+                   map ? "" : "un", pfn, pfn + nr_pages, d->domain_id, rc);
+            return rc;
+        }
+        nr_pages -= rc;
+        pfn += rc;
+        process_pending_softirqs();
+    }
+
+    return rc;
+}
+
+/*
  * Set access type for a region of gfns.
  * If gfn == INVALID_GFN, sets the default access type.
  */