From patchwork Mon Mar 9 10:23:01 2020
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 11426647
Date: Mon, 9 Mar 2020 10:23:01 +0000
Message-ID: <20200309102304.1251-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200309102304.1251-1-paul@xen.org>
References: <20200309102304.1251-1-paul@xen.org>
Subject: [Xen-devel] [PATCH v5 3/6] x86 / pv: do not treat PGC_extra pages as RAM
Cc: Wei Liu, Paul Durrant, Andrew Cooper, Jan Beulich, Roger Pau Monné

From: Paul Durrant

This patch modifies several
places walking the domain's page_list to make them ignore PGC_extra
pages:

- dump_pageframe_info() should ignore PGC_extra pages in its dump as it
  determines whether to dump using domain_tot_pages(), which also ignores
  PGC_extra pages.

- arch_set_info_guest() is looking for an L4 page table, which will
  definitely not be in a PGC_extra page.

- audit_p2m() should ignore PGC_extra pages as it is perfectly legitimate
  for them not to be present in the P2M.

- dump_numa() should ignore PGC_extra pages as they are essentially
  uninteresting in that context.

- dom0_construct_pv() should ignore PGC_extra pages when setting up the
  physmap as they are only created for special purposes and, if they need
  to be mapped, will be mapped explicitly for whatever purpose is relevant.

- tboot_gen_domain_integrity() should ignore PGC_extra pages as they
  should not form part of the measurement.

Signed-off-by: Paul Durrant
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: "Roger Pau Monné"

v4:
 - Expand to cover more than just dom0_construct_pv()

v2:
 - New in v2
---
 xen/arch/x86/domain.c        | 6 +++++-
 xen/arch/x86/mm/p2m.c        | 3 +++
 xen/arch/x86/numa.c          | 3 +++
 xen/arch/x86/pv/dom0_build.c | 4 ++++
 xen/arch/x86/tboot.c         | 7 ++++++-
 5 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index bdcc0d972a..f6ed25e8ee 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -231,6 +231,9 @@ void dump_pageframe_info(struct domain *d)
             unsigned int index = MASK_EXTR(page->u.inuse.type_info,
                                            PGT_type_mask);
 
+            if ( page->count_info & PGC_extra )
+                continue;
+
             if ( ++total[index] > 16 )
             {
                 switch ( page->u.inuse.type_info & PGT_type_mask )
@@ -1044,7 +1047,8 @@ int arch_set_info_guest(
         {
             struct page_info *page = page_list_remove_head(&d->page_list);
 
-            if ( page_lock(page) )
+            if ( !(page->count_info & PGC_extra) &&
+                 page_lock(page) )
             {
                 if ( (page->u.inuse.type_info & PGT_type_mask) ==
                      PGT_l4_page_table )
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 9f51370327..71d2fb9bbc 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2843,6 +2843,9 @@ void audit_p2m(struct domain *d,
     spin_lock(&d->page_alloc_lock);
     page_list_for_each ( page, &d->page_list )
     {
+        if ( page->count_info & PGC_extra )
+            continue;
+
         mfn = mfn_x(page_to_mfn(page));
 
         P2M_PRINTK("auditing guest page, mfn=%#lx\n", mfn);
diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index f1066c59c7..7e5aa8dc95 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -428,6 +428,9 @@ static void dump_numa(unsigned char key)
         spin_lock(&d->page_alloc_lock);
         page_list_for_each(page, &d->page_list)
         {
+            if ( page->count_info & PGC_extra )
+                break;
+
             i = phys_to_nid(page_to_maddr(page));
             page_num_node[i]++;
         }
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index dc16ef2e79..f8f1bbe2f4 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -792,6 +792,10 @@ int __init dom0_construct_pv(struct domain *d,
     {
         mfn = mfn_x(page_to_mfn(page));
         BUG_ON(SHARED_M2P(get_gpfn_from_mfn(mfn)));
+
+        if ( page->count_info & PGC_extra )
+            continue;
+
         if ( get_gpfn_from_mfn(mfn) >= count )
         {
             BUG_ON(is_pv_32bit_domain(d));
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index 8c232270b4..6cc020cb71 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -220,7 +220,12 @@ static void tboot_gen_domain_integrity(const uint8_t key[TB_KEY_SIZE],
         spin_lock(&d->page_alloc_lock);
         page_list_for_each(page, &d->page_list)
         {
-            void *pg = __map_domain_page(page);
+            void *pg;
+
+            if ( page->count_info & PGC_extra )
+                continue;
+
+            pg = __map_domain_page(page);
             vmac_update(pg, PAGE_SIZE, &ctx);
             unmap_domain_page(pg);
         }
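
(Aside, not part of the patch itself: every hunk above applies the same basic
pattern, namely skipping any page whose count_info carries PGC_extra while
walking d->page_list under d->page_alloc_lock. The standalone sketch below
models that pattern outside of Xen; the struct, the flag value and the list
type are simplified stand-ins rather than the real struct page_info and
PGC_extra definitions.)

/* Standalone model of the "skip extra pages while walking page_list"
 * pattern. Types and the PGC_EXTRA bit are simplified stand-ins, not the
 * real Xen definitions. */
#include <stdio.h>
#include <stdint.h>

#define PGC_EXTRA (1u << 9)   /* illustrative bit position, not Xen's actual PGC_extra value */

struct page {
    uint32_t count_info;      /* reference count plus PGC_* flag bits, as in Xen */
    unsigned long mfn;        /* machine frame number of this page */
    struct page *next;        /* simplified singly-linked stand-in for page_list */
};

/* Walk the list the way the patched call sites do: process ordinary RAM
 * pages, skip pages marked "extra". */
static unsigned int count_ram_pages(const struct page *head)
{
    unsigned int total = 0;
    const struct page *pg;

    for ( pg = head; pg; pg = pg->next )
    {
        if ( pg->count_info & PGC_EXTRA )
            continue;         /* not RAM from the guest's point of view */

        total++;
    }

    return total;
}

int main(void)
{
    struct page extra = { .count_info = PGC_EXTRA, .mfn = 3, .next = NULL };
    struct page ram2  = { .count_info = 0, .mfn = 2, .next = &extra };
    struct page ram1  = { .count_info = 0, .mfn = 1, .next = &ram2 };

    printf("RAM pages: %u\n", count_ram_pages(&ram1));   /* prints "RAM pages: 2" */
    return 0;
}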