From patchwork Fri Apr 7 14:44:57 2017
X-Patchwork-Submitter: Yu Zhang
X-Patchwork-Id: 9669649
From: Yu Zhang <yu.c.zhang@linux.intel.com>
To: xen-devel@lists.xen.org
Date: Fri, 7 Apr 2017 22:44:57 +0800
Message-Id: <1491576297-16227-1-git-send-email-yu.c.zhang@linux.intel.com>
X-Mailer: git-send-email 1.9.1
Cc: Kevin Tian, Jun Nakajima, George Dunlap, Andrew Cooper, Paul Durrant,
    zhiyuan.lv@intel.com, Jan Beulich
Subject: [Xen-devel] [PATCH] x86/ioreq server: Asynchronously reset
 outstanding p2m_ioreq_server entries.

After an ioreq server has been unmapped, the remaining p2m_ioreq_server
entries need to be reset back to p2m_ram_rw. This patch does so
asynchronously, via the existing p2m_change_entry_type_global()
interface.

A new field, entry_count, is introduced in struct p2m_domain to record
the number of p2m_ioreq_server entries in the p2m page tables. One
property of these entries is that they only ever point to 4K page
frames, because all p2m_ioreq_server entries originate from p2m_ram_rw
ones in p2m_change_type_one(); we therefore do not need to worry about
counting 2M/1G pages.

This patch disallows mapping an ioreq server while p2m_ioreq_server
entries are still left over, in case another mapping occurs right after
the current one is unmapped and releases its lock, before the p2m table
has been synced.

It also disallows live migration while p2m_ioreq_server entries remain
in the p2m table: the core reason is that the current implementation of
p2m_change_entry_type_global() lacks the information needed to resync
p2m_ioreq_server entries correctly when global_logdirty is on.

Other recalculations still need to be handled, however: when performing
a recalculation, if the current type is p2m_ioreq_server, we check
whether p2m->ioreq.server is still valid. If it is, the entry keeps
type p2m_ioreq_server; if not, it is reset to p2m_ram_rw or
p2m_ram_logdirty as appropriate.
To avoid code duplication, lift recalc_type() out of p2m-pt.c and use it
for all type recalculations (in both p2m-pt.c and p2m-ept.c).

Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
Signed-off-by: George Dunlap
Reviewed-by: Paul Durrant
Reviewed-by: Jan Beulich
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Jun Nakajima
Cc: Kevin Tian

Note: this is patch 5/6 of v12 of the ioreq server patch series. With
the other patches already having reviewed-by/acked-by tags, I am
resending only this one.

changes in v8:
  - According to comments from Jan: do not reset p2m_ioreq_server back
    when an ioreq server is mapped.
  - According to comments from Jan: add a helper function to do the
    p2m type recalculation.
  - According to comments from Jan: add ASSERTs for the 1G and 2M page
    sizes.
  - According to comments from George: comment changes.
  - Added "Signed-off-by: George Dunlap".

changes in v7:
  - According to comments from George: add code to increase the
    entry_count.
  - Comment changes in {ept,p2m_pt}_set_entry.

changes in v6:
  - According to comments from Jan & George: move the count from
    p2m_change_type_one() to {ept,p2m_pt}_set_entry.
  - According to comments from George: comment changes.

changes in v5:
  - According to comments from Jan: use unsigned long for entry_count.
  - According to comments from Jan: refuse a mapping request while
    p2m_ioreq_server entries remain in the p2m table.
  - Added "Reviewed-by: Paul Durrant".

changes in v4:
  - According to comments from Jan: use ASSERT() instead of an 'if'
    condition in p2m_change_type_one().
  - According to comments from Jan: commit message changes, to mention
    that p2m_ioreq_server entries are all based on 4K pages.

changes in v3:
  - Move the synchronous resetting logic into patch 5.
  - According to comments from Jan: introduce p2m_check_changeable() to
    clarify the p2m type change code.
  - According to comments from George: take locks in the same order to
    avoid deadlock; call p2m_change_entry_type_global() after the unmap
    of the ioreq server is finished.
changes in v2:
  - Move the calculation of the ioreq server page entry_count into
    p2m_change_type_one() so that we do not need a separate lock.
    Note: entry_count is also updated in resolve_misconfig()/
    do_recalc(); fortunately, callers of both routines already hold
    the p2m lock.
  - Simplify logic in hvmop_set_mem_type().
  - Introduce routine p2m_finish_type_change() to walk the p2m table
    and do the p2m reset.
---
 xen/arch/x86/hvm/ioreq.c  |  8 +++++
 xen/arch/x86/mm/hap/hap.c |  9 ++++++
 xen/arch/x86/mm/p2m-ept.c | 75 +++++++++++++++++++++++++++++++----------------
 xen/arch/x86/mm/p2m-pt.c  | 62 ++++++++++++++++++++++++++-------------
 xen/arch/x86/mm/p2m.c     |  9 ++++++
 xen/include/asm-x86/p2m.h | 25 +++++++++++++++-
 6 files changed, 140 insertions(+), 48 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 5bf3b6d..07a6c26 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -955,6 +955,14 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
 
     spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 
+    if ( rc == 0 && flags == 0 )
+    {
+        struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+        if ( read_atomic(&p2m->ioreq.entry_count) )
+            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
+    }
+
     return rc;
 }
 
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index c0610c5..b981432 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -187,6 +187,15 @@ out:
  */
 static int hap_enable_log_dirty(struct domain *d, bool_t log_global)
 {
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    /*
+     * Refuse to turn on global log-dirty mode if
+     * there are outstanding p2m_ioreq_server pages.
+     */
+    if ( log_global && read_atomic(&p2m->ioreq.entry_count) )
+        return -EBUSY;
+
     /* turn on PG_log_dirty bit in paging mode */
     paging_lock(d);
     d->arch.paging.mode |= PG_log_dirty;
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index cc1eb21..478e7e8 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -533,6 +533,8 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
         {
             for ( gfn -= i, i = 0; i < EPT_PAGETABLE_ENTRIES; ++i )
             {
+                p2m_type_t nt;
+
                 e = atomic_read_ept_entry(&epte[i]);
                 if ( e.emt == MTRR_NUM_TYPES )
                     e.emt = 0;
@@ -542,10 +544,17 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
                                            _mfn(e.mfn), 0, &ipat,
                                            e.sa_p2mt == p2m_mmio_direct);
                 e.ipat = ipat;
-                if ( e.recalc && p2m_is_changeable(e.sa_p2mt) )
+
+                nt = p2m_recalc_type(e.recalc, e.sa_p2mt, p2m, gfn + i);
+                if ( nt != e.sa_p2mt )
                 {
-                    e.sa_p2mt = p2m_is_logdirty_range(p2m, gfn + i, gfn + i)
-                                ? p2m_ram_logdirty : p2m_ram_rw;
+                    if ( e.sa_p2mt == p2m_ioreq_server )
+                    {
+                        ASSERT(p2m->ioreq.entry_count > 0);
+                        p2m->ioreq.entry_count--;
+                    }
+
+                    e.sa_p2mt = nt;
                     ept_p2m_type_to_flags(p2m, &e, e.sa_p2mt, e.access);
                 }
                 e.recalc = 0;
@@ -562,23 +571,24 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
 
         if ( recalc && p2m_is_changeable(e.sa_p2mt) )
         {
-            unsigned long mask = ~0UL << (level * EPT_TABLE_ORDER);
-
-            switch ( p2m_is_logdirty_range(p2m, gfn & mask,
-                                           gfn | ~mask) )
-            {
-            case 0:
-                e.sa_p2mt = p2m_ram_rw;
-                e.recalc = 0;
-                break;
-            case 1:
-                e.sa_p2mt = p2m_ram_logdirty;
-                e.recalc = 0;
-                break;
-            default: /* Force split. */
-                emt = -1;
-                break;
-            }
+            unsigned long mask = ~0UL << (level * EPT_TABLE_ORDER);
+
+            ASSERT(e.sa_p2mt != p2m_ioreq_server);
+            switch ( p2m_is_logdirty_range(p2m, gfn & mask,
+                                           gfn | ~mask) )
+            {
+            case 0:
+                e.sa_p2mt = p2m_ram_rw;
+                e.recalc = 0;
+                break;
+            case 1:
+                e.sa_p2mt = p2m_ram_logdirty;
+                e.recalc = 0;
+                break;
+            default: /* Force split. */
+                emt = -1;
+                break;
+            }
         }
         if ( unlikely(emt < 0) )
         {
@@ -816,6 +826,23 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
         new_entry.suppress_ve = is_epte_valid(&old_entry) ?
                                     old_entry.suppress_ve : 1;
 
+        /*
+         * p2m_ioreq_server is only used for 4K pages, so the
+         * count is only done on ept page table entries.
+         */
+        if ( p2mt == p2m_ioreq_server )
+        {
+            ASSERT(i == 0);
+            p2m->ioreq.entry_count++;
+        }
+
+        if ( ept_entry->sa_p2mt == p2m_ioreq_server )
+        {
+            ASSERT(i == 0);
+            ASSERT(p2m->ioreq.entry_count > 0);
+            p2m->ioreq.entry_count--;
+        }
+
         rc = atomic_write_ept_entry(ept_entry, new_entry, target);
         if ( unlikely(rc) )
             old_entry.epte = 0;
@@ -964,12 +991,8 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
 
     if ( is_epte_valid(ept_entry) )
     {
-        if ( (recalc || ept_entry->recalc) &&
-             p2m_is_changeable(ept_entry->sa_p2mt) )
-            *t = p2m_is_logdirty_range(p2m, gfn, gfn) ? p2m_ram_logdirty
-                                                      : p2m_ram_rw;
-        else
-            *t = ept_entry->sa_p2mt;
+        *t = p2m_recalc_type(recalc || ept_entry->recalc,
+                             ept_entry->sa_p2mt, p2m, gfn);
         *a = ept_entry->access;
         if ( sve )
             *sve = ept_entry->suppress_ve;
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index c0055f3..5079b59 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -389,6 +389,7 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
     {
         unsigned long mask = ~0UL << (level * PAGETABLE_ORDER);
 
+        ASSERT(p2m_flags_to_type(l1e_get_flags(*pent)) != p2m_ioreq_server);
         if ( !needs_recalc(l1, *pent) ||
              !p2m_is_changeable(p2m_flags_to_type(l1e_get_flags(*pent))) ||
              p2m_is_logdirty_range(p2m, gfn & mask, gfn | ~mask) >= 0 )
@@ -436,17 +437,18 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
          needs_recalc(l1, *pent) )
     {
         l1_pgentry_t e = *pent;
+        p2m_type_t ot, nt;
+        unsigned long mask = ~0UL << (level * PAGETABLE_ORDER);
 
         if ( !valid_recalc(l1, e) )
            P2M_DEBUG("bogus recalc leaf at d%d:%lx:%u\n",
                      p2m->domain->domain_id, gfn, level);
-        if ( p2m_is_changeable(p2m_flags_to_type(l1e_get_flags(e))) )
+        ot = p2m_flags_to_type(l1e_get_flags(e));
+        nt = p2m_recalc_type_range(true, ot, p2m, gfn & mask, gfn | ~mask);
+        if ( nt != ot )
         {
-            unsigned long mask = ~0UL << (level * PAGETABLE_ORDER);
-            p2m_type_t p2mt = p2m_is_logdirty_range(p2m, gfn & mask, gfn | ~mask)
-                              ? p2m_ram_logdirty : p2m_ram_rw;
             unsigned long mfn = l1e_get_pfn(e);
-            unsigned long flags = p2m_type_to_flags(p2m, p2mt,
+            unsigned long flags = p2m_type_to_flags(p2m, nt,
                                                     _mfn(mfn), level);
 
             if ( level )
@@ -460,9 +462,17 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
                 mfn &= ~((unsigned long)_PAGE_PSE_PAT >> PAGE_SHIFT);
                 flags |= _PAGE_PSE;
             }
+
+            if ( ot == p2m_ioreq_server )
+            {
+                ASSERT(p2m->ioreq.entry_count > 0);
+                ASSERT(level == 0);
+                p2m->ioreq.entry_count--;
+            }
+
             e = l1e_from_pfn(mfn, flags);
             p2m_add_iommu_flags(&e, level,
-                                (p2mt == p2m_ram_rw)
+                                (nt == p2m_ram_rw)
                                 ? IOMMUF_readable|IOMMUF_writable : 0);
             ASSERT(!needs_recalc(l1, e));
         }
@@ -582,6 +592,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
             }
         }
 
+        ASSERT(p2m_flags_to_type(flags) != p2m_ioreq_server);
         ASSERT(!mfn_valid(mfn) || p2mt != p2m_mmio_direct);
         l3e_content = mfn_valid(mfn) || p2m_allows_invalid_mfn(p2mt)
             ? l3e_from_pfn(mfn_x(mfn),
@@ -606,6 +617,8 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
 
     if ( page_order == PAGE_ORDER_4K )
     {
+        p2m_type_t p2mt_old;
+
         rc = p2m_next_level(p2m, &table, &gfn_remainder, gfn,
                             L2_PAGETABLE_SHIFT - PAGE_SHIFT,
                             L2_PAGETABLE_ENTRIES, PGT_l1_page_table, 1);
@@ -629,6 +642,21 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
         if ( entry_content.l1 != 0 )
             p2m_add_iommu_flags(&entry_content, 0, iommu_pte_flags);
 
+        p2mt_old = p2m_flags_to_type(l1e_get_flags(*p2m_entry));
+
+        /*
+         * p2m_ioreq_server is only used for 4K pages, so
+         * the count is only done for level 1 entries.
+         */
+        if ( p2mt == p2m_ioreq_server )
+            p2m->ioreq.entry_count++;
+
+        if ( p2mt_old == p2m_ioreq_server )
+        {
+            ASSERT(p2m->ioreq.entry_count > 0);
+            p2m->ioreq.entry_count--;
+        }
+
         /* level 1 entry */
         p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 1);
         /* NB: paging_write_p2m_entry() handles tlb flushes properly */
@@ -655,7 +683,8 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
             intermediate_entry = *p2m_entry;
         }
     }
-
+
+    ASSERT(p2m_flags_to_type(flags) != p2m_ioreq_server);
     ASSERT(!mfn_valid(mfn) || p2mt != p2m_mmio_direct);
     if ( mfn_valid(mfn) || p2m_allows_invalid_mfn(p2mt) )
         l2e_content = l2e_from_pfn(mfn_x(mfn),
@@ -726,15 +755,6 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     return rc;
 }
 
-static inline p2m_type_t recalc_type(bool_t recalc, p2m_type_t t,
-                                     struct p2m_domain *p2m, unsigned long gfn)
-{
-    if ( !recalc || !p2m_is_changeable(t) )
-        return t;
-    return p2m_is_logdirty_range(p2m, gfn, gfn) ? p2m_ram_logdirty
-                                                : p2m_ram_rw;
-}
-
 static mfn_t p2m_pt_get_entry(struct p2m_domain *p2m,
                               unsigned long gfn,
                               p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
@@ -820,8 +840,8 @@ pod_retry_l3:
             mfn = _mfn(l3e_get_pfn(*l3e) +
                        l2_table_offset(addr) * L1_PAGETABLE_ENTRIES +
                        l1_table_offset(addr));
-            *t = recalc_type(recalc || _needs_recalc(flags),
-                             p2m_flags_to_type(flags), p2m, gfn);
+            *t = p2m_recalc_type(recalc || _needs_recalc(flags),
+                                 p2m_flags_to_type(flags), p2m, gfn);
             unmap_domain_page(l3e);
 
             ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t));
@@ -859,8 +879,8 @@ pod_retry_l2:
     if ( flags & _PAGE_PSE )
     {
         mfn = _mfn(l2e_get_pfn(*l2e) + l1_table_offset(addr));
-        *t = recalc_type(recalc || _needs_recalc(flags),
-                         p2m_flags_to_type(flags), p2m, gfn);
+        *t = p2m_recalc_type(recalc || _needs_recalc(flags),
+                             p2m_flags_to_type(flags), p2m, gfn);
         unmap_domain_page(l2e);
 
         ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t));
@@ -896,7 +916,7 @@ pod_retry_l1:
         return INVALID_MFN;
     }
     mfn = _mfn(l1e_get_pfn(*l1e));
-    *t = recalc_type(recalc || _needs_recalc(flags), l1t, p2m, gfn);
+    *t = p2m_recalc_type(recalc || _needs_recalc(flags), l1t, p2m, gfn);
     unmap_domain_page(l1e);
 
     ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t) || p2m_is_paging(*t));
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index b84add0..4169d18 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -317,6 +317,15 @@ int p2m_set_ioreq_server(struct domain *d,
         if ( p2m->ioreq.server != NULL )
             goto out;
 
+        /*
+         * It is possible that an ioreq server has just been unmapped,
+         * released the spin lock, with some p2m_ioreq_server entries
+         * in p2m table remained. We shall refuse another ioreq server
+         * mapping request in such case.
+         */
+        if ( read_atomic(&p2m->ioreq.entry_count) )
+            goto out;
+
         p2m->ioreq.server = s;
         p2m->ioreq.flags = flags;
     }
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 4521620..7d39113 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -120,7 +120,8 @@ typedef unsigned int p2m_query_t;
 
 /* Types that can be subject to bulk transitions. */
 #define P2M_CHANGEABLE_TYPES (p2m_to_mask(p2m_ram_rw) \
-                              | p2m_to_mask(p2m_ram_logdirty) )
+                              | p2m_to_mask(p2m_ram_logdirty) \
+                              | p2m_to_mask(p2m_ioreq_server) )
 
 #define P2M_POD_TYPES (p2m_to_mask(p2m_populate_on_demand))
 
@@ -349,6 +350,7 @@ struct p2m_domain {
          * are to be emulated by an ioreq server.
          */
         unsigned int flags;
+        unsigned long entry_count;
     } ioreq;
 };
 
@@ -744,6 +746,27 @@ static inline p2m_type_t p2m_flags_to_type(unsigned long flags)
     return (flags >> 12) & 0x7f;
 }
 
+static inline p2m_type_t p2m_recalc_type_range(bool recalc, p2m_type_t t,
+                                               struct p2m_domain *p2m,
+                                               unsigned long gfn_start,
+                                               unsigned long gfn_end)
+{
+    if ( !recalc || !p2m_is_changeable(t) )
+        return t;
+
+    if ( t == p2m_ioreq_server && p2m->ioreq.server != NULL )
+        return t;
+
+    return p2m_is_logdirty_range(p2m, gfn_start, gfn_end) ? p2m_ram_logdirty
+                                                          : p2m_ram_rw;
+}
+
+static inline p2m_type_t p2m_recalc_type(bool_t recalc, p2m_type_t t,
+                                         struct p2m_domain *p2m, unsigned long gfn)
+{
+    return p2m_recalc_type_range(recalc, t, p2m, gfn, gfn);
+}
+
 int p2m_pt_handle_deferred_changes(uint64_t gpa);
 
 /*