From patchwork Tue May 28 10:28:23 2019
X-Patchwork-Submitter: George Dunlap
X-Patchwork-Id: 10964583
From: George Dunlap <george.dunlap@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 28 May 2019 11:28:23 +0100
Message-ID:
<20190528102823.14171-1-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.20.1
Subject: [Xen-devel] [PATCH v3] x86/altp2m: cleanup p2m_altp2m_lazy_copy
Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper, Jan Beulich,
 Roger Pau Monne

From: Tamas K Lengyel

The p2m_altp2m_lazy_copy function is responsible for lazily populating
an altp2m view when the guest traps out due to no EPT entry being
present in the active view.  Currently, in addition to taking a number
of unused arguments, the whole calling convention involves a number of
redundant p2m lookups: the function reads the hostp2m, even though the
caller has just read the same hostp2m entry; and then the caller
re-reads the altp2m entry that the function has just read (and possibly
set).

Rework this function to make it a bit more rational.  Specifically:

- Pass in the hostp2m entry values we have just read, for the function
  to use when populating the altp2m entry if it finds that entry empty.

- If the altp2m entry is not empty, pass out the values we've read, so
  that the caller doesn't need to re-walk the tables.

- Either way, return with the gfn 'locked', to make clean-up handling
  more consistent.

Rename the function to better reflect this functionality.

While we're here, change bool_t to bool, and return true/false rather
than 1/0.

It's a bit grating to do both the p2m_lock() and the get_gfn(), knowing
that they boil down to the same thing at the moment; but we have to
maintain the fiction until such time as we decide to get rid of it
entirely.
Signed-off-by: Tamas K Lengyel
Signed-off-by: George Dunlap
Acked-by: Jan Beulich
---
This is identical to the patch I sent in response to Tamas' v2.

Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: Roger Pau Monne
Cc: George Dunlap
---
 xen/arch/x86/hvm/hvm.c    | 19 +++++---
 xen/arch/x86/mm/p2m.c     | 95 +++++++++++++++++++++------------------
 xen/include/asm-x86/p2m.h |  5 ++-
 3 files changed, 67 insertions(+), 52 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index ed1ff9c87f..2f4e7bd30e 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1691,6 +1691,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     int sharing_enomem = 0;
     vm_event_request_t *req_ptr = NULL;
     bool_t ap2m_active, sync = 0;
+    unsigned int page_order;
 
     /* On Nested Virtualization, walk the guest page table.
      * If this succeeds, all is fine.
@@ -1757,19 +1758,23 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     hostp2m = p2m_get_hostp2m(currd);
     mfn = get_gfn_type_access(hostp2m, gfn, &p2mt, &p2ma,
                               P2M_ALLOC | (npfec.write_access ? P2M_UNSHARE : 0),
-                              NULL);
+                              &page_order);
 
     if ( ap2m_active )
     {
-        if ( p2m_altp2m_lazy_copy(curr, gpa, gla, npfec, &p2m) )
+        p2m = p2m_get_altp2m(curr);
+
+        /*
+         * Get the altp2m entry if present; or if not, propagate from
+         * the host p2m.  NB that this returns with gfn locked in the
+         * altp2m.
+         */
+        if ( p2m_altp2m_get_or_propagate(p2m, gfn, &mfn, &p2mt, &p2ma,
+                                         page_order) )
         {
-            /* entry was lazily copied from host -- retry */
-            __put_gfn(hostp2m, gfn);
+            /* Entry was copied from host -- retry fault */
             rc = 1;
-            goto out;
+            goto out_put_gfn;
         }
-
-        mfn = get_gfn_type_access(p2m, gfn, &p2mt, &p2ma, 0, NULL);
     }
     else
         p2m = hostp2m;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 278e1c114e..385146ca63 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2397,65 +2397,74 @@ bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
 }
 
 /*
- * If the fault is for a not present entry:
- *     if the entry in the host p2m has a valid mfn, copy it and retry
- *     else indicate that outer handler should handle fault
+ * Read info about the gfn in an altp2m, locking the gfn.
  *
- * If the fault is for a present entry:
- *     indicate that outer handler should handle fault
+ * If the entry is valid, pass the results back to the caller.
+ *
+ * If the entry was invalid, and the host's entry is also invalid,
+ * return to the caller without any changes.
+ *
+ * If the entry is invalid, and the host entry was valid, propagate
+ * the host's entry to the altp2m (retaining page order), and indicate
+ * that the caller should re-try the faulting instruction.
  */
-
-bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
-                            unsigned long gla, struct npfec npfec,
-                            struct p2m_domain **ap2m)
+bool p2m_altp2m_get_or_propagate(struct p2m_domain *ap2m, unsigned long gfn_l,
+                                 mfn_t *mfn, p2m_type_t *p2mt,
+                                 p2m_access_t *p2ma, unsigned int page_order)
 {
-    struct p2m_domain *hp2m = p2m_get_hostp2m(v->domain);
-    p2m_type_t p2mt;
-    p2m_access_t p2ma;
-    unsigned int page_order;
-    gfn_t gfn = _gfn(paddr_to_pfn(gpa));
+    p2m_type_t ap2mt;
+    p2m_access_t ap2ma;
     unsigned long mask;
-    mfn_t mfn;
-    int rv;
-
-    *ap2m = p2m_get_altp2m(v);
-
-    mfn = get_gfn_type_access(*ap2m, gfn_x(gfn), &p2mt, &p2ma,
-                              0, &page_order);
-    __put_gfn(*ap2m, gfn_x(gfn));
-
-    if ( !mfn_eq(mfn, INVALID_MFN) )
-        return 0;
+    gfn_t gfn;
+    mfn_t amfn;
+    int rc;
 
-    mfn = get_gfn_type_access(hp2m, gfn_x(gfn), &p2mt, &p2ma,
-                              P2M_ALLOC, &page_order);
-    __put_gfn(hp2m, gfn_x(gfn));
+    /*
+     * NB we must get the full lock on the altp2m here, in addition to
+     * the lock on the individual gfn, since we may change a range of
+     * gfns below.
+     */
+    p2m_lock(ap2m);
+
+    amfn = get_gfn_type_access(ap2m, gfn_l, &ap2mt, &ap2ma, 0, NULL);
 
-    if ( mfn_eq(mfn, INVALID_MFN) )
-        return 0;
+    if ( !mfn_eq(amfn, INVALID_MFN) )
+    {
+        p2m_unlock(ap2m);
+        *mfn = amfn;
+        *p2mt = ap2mt;
+        *p2ma = ap2ma;
+        return false;
+    }
 
-    p2m_lock(*ap2m);
+    /* Host entry is also invalid; don't bother setting the altp2m entry. */
+    if ( mfn_eq(*mfn, INVALID_MFN) )
+    {
+        p2m_unlock(ap2m);
+        return false;
+    }
 
     /*
      * If this is a superpage mapping, round down both frame numbers
-     * to the start of the superpage.
+     * to the start of the superpage.  NB that we repurpose `amfn`
+     * here.
      */
     mask = ~((1UL << page_order) - 1);
-    mfn = _mfn(mfn_x(mfn) & mask);
-    gfn = _gfn(gfn_x(gfn) & mask);
+    amfn = _mfn(mfn_x(*mfn) & mask);
+    gfn = _gfn(gfn_l & mask);
 
-    rv = p2m_set_entry(*ap2m, gfn, mfn, page_order, p2mt, p2ma);
-    p2m_unlock(*ap2m);
+    rc = p2m_set_entry(ap2m, gfn, amfn, page_order, *p2mt, *p2ma);
+    p2m_unlock(ap2m);
 
-    if ( rv )
+    if ( rc )
     {
-        gdprintk(XENLOG_ERR,
-                 "failed to set entry for %#"PRIx64" -> %#"PRIx64" p2m %#"PRIx64"\n",
-                 gfn_x(gfn), mfn_x(mfn), (unsigned long)*ap2m);
-        domain_crash(hp2m->domain);
+        gprintk(XENLOG_ERR,
+                "failed to set entry for %#"PRIx64" -> %#"PRIx64" altp2m %#"PRIx16". rc: %"PRIi32"\n",
+                gfn_l, mfn_x(amfn), vcpu_altp2m(current).p2midx, rc);
+        domain_crash(ap2m->domain);
     }
-
-    return 1;
+
+    return true;
 }
 
 enum altp2m_reset_type {
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 719513f4ba..905fad7c8d 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -879,8 +879,9 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx);
 void p2m_flush_altp2m(struct domain *d);
 
 /* Alternate p2m paging */
-bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
-                            unsigned long gla, struct npfec npfec, struct p2m_domain **ap2m);
+bool p2m_altp2m_get_or_propagate(struct p2m_domain *ap2m, unsigned long gfn_l,
+                                 mfn_t *mfn, p2m_type_t *p2mt,
+                                 p2m_access_t *p2ma, unsigned int page_order);
 
 /* Make a specific alternate p2m valid */
 int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);