From patchwork Fri Apr 12 16:39:32 2019
X-Patchwork-Submitter: Tamas K Lengyel
X-Patchwork-Id: 10898807
From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Date: Fri, 12 Apr 2019 10:39:32 -0600
Message-Id: <20190412163932.2087-1-tamas@tklengyel.com>
Subject: [Xen-devel] [PATCH] x86/altp2m: cleanup p2m_altp2m_lazy_copy
Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
    Jan Beulich, Roger Pau Monne

p2m_altp2m_lazy_copy() is responsible for lazily populating an altp2m
view when the guest traps out due to no EPT entry being present in the
active view. Currently the function takes several inputs that it doesn't
use, and it locks/unlocks gfns when it doesn't need to. Clean this up by
having the caller look up the altp2m and pass in the host p2m entry
(mfn, type, access and page order) it has already fetched.

Signed-off-by: Tamas K Lengyel
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: Roger Pau Monne
Cc: George Dunlap
---
 xen/arch/x86/hvm/hvm.c    |  7 ++++--
 xen/arch/x86/mm/p2m.c     | 52 +++++++++++++++++----------------------
 xen/include/asm-x86/p2m.h |  5 ++--
 3 files changed, 30 insertions(+), 34 deletions(-)
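For readers new to altp2m, a minimal standalone sketch of the lazy-copy
rule this patch reworks, using hypothetical stand-ins (struct view, a
flat gfn -> mfn array where 0 means "not present") rather than the real
Xen p2m types; the actual implementation is in the diff below:

#include <stdint.h>
#include <stdio.h>

#define VIEW_SIZE 16

/* Hypothetical stand-in for a p2m view: gfn -> mfn, 0 == not present. */
struct view {
    uint64_t mfn[VIEW_SIZE];
};

/*
 * Lazy-copy rule: if the faulting gfn is unmapped in the active altp2m
 * view but mapped in the host view, copy the host entry into the
 * altp2m view and report "populated -- retry the faulting access".
 */
static int lazy_copy(struct view *altp2m, const struct view *host,
                     unsigned int gfn)
{
    if ( altp2m->mfn[gfn] != 0 || host->mfn[gfn] == 0 )
        return 0;               /* let the outer handler deal with it */

    altp2m->mfn[gfn] = host->mfn[gfn];
    return 1;
}

int main(void)
{
    struct view host = { .mfn = { [3] = 0x1000 } };
    struct view alt = { { 0 } };

    /* First touch of gfn 3 populates the altp2m view from the host. */
    printf("copied: %d, alt[3] = %#llx\n",
           lazy_copy(&alt, &host, 3), (unsigned long long)alt.mfn[3]);
    return 0;
}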
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 8adbb61b57..813e69a4c9 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1688,6 +1688,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     int sharing_enomem = 0;
     vm_event_request_t *req_ptr = NULL;
     bool_t ap2m_active, sync = 0;
+    unsigned int page_order;

     /* On Nested Virtualization, walk the guest page table.
      * If this succeeds, all is fine.
@@ -1754,11 +1755,13 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     hostp2m = p2m_get_hostp2m(currd);
     mfn = get_gfn_type_access(hostp2m, gfn, &p2mt, &p2ma,
                               P2M_ALLOC | (npfec.write_access ? P2M_UNSHARE : 0),
-                              NULL);
+                              &page_order);

     if ( ap2m_active )
     {
-        if ( p2m_altp2m_lazy_copy(curr, gpa, gla, npfec, &p2m) )
+        p2m = p2m_get_altp2m(curr);
+
+        if ( p2m_altp2m_lazy_copy(p2m, gfn, mfn, p2mt, p2ma, page_order) )
         {
             /* entry was lazily copied from host -- retry */
             __put_gfn(hostp2m, gfn);
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index b9bbb8f485..140c707348 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2375,54 +2375,46 @@ bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
  * indicate that outer handler should handle fault
  */

-bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
-                            unsigned long gla, struct npfec npfec,
-                            struct p2m_domain **ap2m)
+bool_t p2m_altp2m_lazy_copy(struct p2m_domain *ap2m, unsigned long gfn_l,
+                            mfn_t hmfn, p2m_type_t hp2mt, p2m_access_t hp2ma,
+                            unsigned int page_order)
 {
-    struct p2m_domain *hp2m = p2m_get_hostp2m(v->domain);
-    p2m_type_t p2mt;
-    p2m_access_t p2ma;
-    unsigned int page_order;
-    gfn_t gfn = _gfn(paddr_to_pfn(gpa));
+    p2m_type_t ap2mt;
+    p2m_access_t ap2ma;
     unsigned long mask;
-    mfn_t mfn;
+    gfn_t gfn;
+    mfn_t amfn;
     int rv;

-    *ap2m = p2m_get_altp2m(v);
-
-    mfn = get_gfn_type_access(*ap2m, gfn_x(gfn), &p2mt, &p2ma,
-                              0, &page_order);
-    __put_gfn(*ap2m, gfn_x(gfn));
-
-    if ( !mfn_eq(mfn, INVALID_MFN) )
-        return 0;
+    p2m_lock(ap2m);

-    mfn = get_gfn_type_access(hp2m, gfn_x(gfn), &p2mt, &p2ma,
-                              P2M_ALLOC, &page_order);
-    __put_gfn(hp2m, gfn_x(gfn));
+    amfn = __get_gfn_type_access(ap2m, gfn_l, &ap2mt, &ap2ma,
+                                 0, NULL, false);

-    if ( mfn_eq(mfn, INVALID_MFN) )
+    /* Bail if entry is already in altp2m or there is no entry in hostp2m */
+    if ( !mfn_eq(amfn, INVALID_MFN) || mfn_eq(hmfn, INVALID_MFN) )
+    {
+        p2m_unlock(ap2m);
         return 0;
-
-    p2m_lock(*ap2m);
+    }

     /*
      * If this is a superpage mapping, round down both frame numbers
      * to the start of the superpage.
      */
     mask = ~((1UL << page_order) - 1);
-    mfn = _mfn(mfn_x(mfn) & mask);
-    gfn = _gfn(gfn_x(gfn) & mask);
+    hmfn = _mfn(mfn_x(hmfn) & mask);
+    gfn = _gfn(gfn_l & mask);

-    rv = p2m_set_entry(*ap2m, gfn, mfn, page_order, p2mt, p2ma);
-    p2m_unlock(*ap2m);
+    rv = p2m_set_entry(ap2m, gfn, hmfn, page_order, hp2mt, hp2ma);
+    p2m_unlock(ap2m);

     if ( rv )
     {
         gdprintk(XENLOG_ERR,
-                 "failed to set entry for %#"PRIx64" -> %#"PRIx64" p2m %#"PRIx64"\n",
-                 gfn_x(gfn), mfn_x(mfn), (unsigned long)*ap2m);
-        domain_crash(hp2m->domain);
+                 "failed to set entry for %#"PRIx64" -> %#"PRIx64" p2m %#"PRIx64"\n",
+                 gfn_l, mfn_x(hmfn), (unsigned long)ap2m);
+        domain_crash(ap2m->domain);
     }

     return 1;
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 2801a8ccca..c25e2a3cd8 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -867,8 +867,9 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx);
 void p2m_flush_altp2m(struct domain *d);

 /* Alternate p2m paging */
-bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
-    unsigned long gla, struct npfec npfec, struct p2m_domain **ap2m);
+bool_t p2m_altp2m_lazy_copy(struct p2m_domain *ap2m, unsigned long gfn_l,
+                            mfn_t hmfn, p2m_type_t hp2mt, p2m_access_t hp2ma,
+                            unsigned int page_order);

 /* Make a specific alternate p2m valid */
 int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
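The superpage rounding kept above is plain power-of-two masking; a
minimal standalone example, with arbitrary frame numbers and
page_order == 9 (a 2MB EPT superpage covers 2^9 4KB frames):

#include <stdio.h>

int main(void)
{
    unsigned int page_order = 9;
    unsigned long mask = ~((1UL << page_order) - 1);

    /* Arbitrary gfn/mfn; both get rounded down to the first frame of
     * their superpage, as done before the p2m_set_entry() call. */
    unsigned long gfn = 0x12345, mfn = 0xabcde;

    printf("gfn %#lx -> %#lx\n", gfn, gfn & mask); /* 0x12345 -> 0x12200 */
    printf("mfn %#lx -> %#lx\n", mfn, mfn & mask); /* 0xabcde -> 0xabc00 */
    return 0;
}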