From patchwork Sun Jan 24 04:47:38 2021
X-Patchwork-Submitter: Elliott Mitchell
X-Patchwork-Id: 12042087
Message-Id: <202101240546.10O5krt7000451@m5p.com>
From: Elliott Mitchell
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Wei Liu
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Date: Sat, 23 Jan 2021 20:47:38 -0800
Subject: [PATCH] x86/pod: Do not fragment PoD memory allocations

Previously p2m_pod_set_cache_target() would fall back to allocating 4KB
pages if 2MB pages ran out.  This is counterproductive: a superpage
allocation failure suggests severe memory pressure, and falling back is
likely a precursor to a memory-exhaustion panic.  As such, don't try to
fill requests for 2MB pages from 4KB pages when 2MB pages run out.

Signed-off-by: Elliott Mitchell

---

I'm not including a separate cover message since this is a single hunk.

This really needs some checking in `xl`.  If one has a domain which
sometimes gets started on different hosts and is sometimes modified with
slightly differing settings, one can run into trouble.
In this case the particular domain is most often used PV/PVH, but every
so often is used as a template for HVM.  Starting it HVM will trigger PoD
mode.  If it is started on a machine with less memory than the others,
PoD may well exhaust all memory and then trigger a panic.  `xl` should
likely fail HVM domain creation when the domain's maximum memory exceeds
available memory (never mind total memory).
---
 xen/arch/x86/mm/p2m-pod.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 48e609d1ed..6a7c9ae7d1 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -216,12 +216,10 @@ p2m_pod_set_cache_target(struct p2m_domain *p2m, unsigned long pod_target, int p
         page = alloc_domheap_pages(d, order, 0);
         if ( unlikely(page == NULL) )
         {
-            if ( order == PAGE_ORDER_2M )
-            {
-                /* If we can't allocate a superpage, try singleton pages */
-                order = PAGE_ORDER_4K;
-                goto retry;
-            }
+            /* Superpage allocation failures likely indicate severe memory
+            ** pressure. Continuing to try to fulfill attempts using 4KB pages
+            ** is likely to exhaust memory and trigger a panic. As such it is
+            ** NOT worth trying to use 4KB pages to fulfill 2MB page requests.*/
             printk("%s: Unable to allocate page for PoD cache (target=%lu cache=%ld)\n",
                    __func__, pod_target, p2m->pod.count);