From patchwork Thu Sep 5 08:34:47 2019
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 11132405
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: George Dunlap, Andrew Cooper, Tim Deegan, Wei Liu, Roger Pau Monné
Message-ID: <9deab964-2685-3c04-9e4c-e3df04885742@suse.com>
Date: Thu, 5 Sep 2019 10:34:47 +0200
Subject: [Xen-devel] [PATCH] x86/shadow: fold p2m page accounting into sh_min_allocation()

This is to make the function live up to the promise its name makes. And it
simplifies all callers.

Suggested-by: Andrew Cooper
Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
Acked-by: Andrew Cooper
Acked-by: Tim Deegan
---

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1256,29 +1256,26 @@ static unsigned int sh_min_allocation(co
      * up of slot zero and an LAPIC page), plus one for HVM's 1-to-1 pagetable.
      */
     return shadow_min_acceptable_pages(d) +
-           max(d->tot_pages / 256,
-               is_hvm_domain(d) ? CONFIG_PAGING_LEVELS + 2 : 0U) +
-           is_hvm_domain(d);
+           max(max(d->tot_pages / 256,
+                   is_hvm_domain(d) ? CONFIG_PAGING_LEVELS + 2 : 0U) +
+               is_hvm_domain(d),
+               d->arch.paging.shadow.p2m_pages);
 }
 
 int shadow_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
 {
     struct page_info *sp;
-    unsigned int lower_bound;
 
     ASSERT(paging_locked_by_me(d));
 
     if ( pages > 0 )
     {
         /* Check for minimum value. */
-        if ( pages < d->arch.paging.shadow.p2m_pages )
-            pages = 0;
-        else
-            pages -= d->arch.paging.shadow.p2m_pages;
+        unsigned int lower_bound = sh_min_allocation(d);
 
-        lower_bound = sh_min_allocation(d);
         if ( pages < lower_bound )
             pages = lower_bound;
+        pages -= d->arch.paging.shadow.p2m_pages;
     }
 
     SHADOW_PRINTK("current %i target %i\n",
@@ -2607,7 +2604,7 @@ int shadow_enable(struct domain *d, u32
     /* Init the shadow memory allocation if the user hasn't done so */
     old_pages = d->arch.paging.shadow.total_pages;
-    if ( old_pages < sh_min_allocation(d) + d->arch.paging.shadow.p2m_pages )
+    if ( old_pages < sh_min_allocation(d) )
     {
         paging_lock(d);
         rv = shadow_set_allocation(d, 1024, NULL); /* Use at least 4MB */
@@ -2864,8 +2861,7 @@ static int shadow_one_bit_enable(struct
     mode |= PG_SH_enable;
 
-    if ( d->arch.paging.shadow.total_pages <
-         sh_min_allocation(d) + d->arch.paging.shadow.p2m_pages )
+    if ( d->arch.paging.shadow.total_pages < sh_min_allocation(d) )
     {
         /* Init the shadow memory allocation if the user hasn't done so */
         if ( shadow_set_allocation(d, 1, NULL) != 0 )
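For readers following the accounting change, the shape of the fold can be sketched outside Xen with simplified stand-ins. Every name and constant below (`struct dom`, `MIN_ACCEPTABLE`, `min_alloc_old`/`min_alloc_new`) is a hypothetical reduction for illustration, not the real `struct domain` or `shadow_min_acceptable_pages()`; the old function left p2m pages for each caller to add, while the new one takes the maximum with `d->arch.paging.shadow.p2m_pages` itself:

```c
#include <assert.h>

/* Hypothetical, simplified stand-ins for the Xen structures and helpers;
 * all names here are illustrative reductions, not the real Xen code. */
struct dom {
    unsigned int tot_pages;   /* stands in for d->tot_pages */
    unsigned int p2m_pages;   /* stands in for d->arch.paging.shadow.p2m_pages */
    int hvm;                  /* stands in for is_hvm_domain(d) */
};

#define PAGING_LEVELS   4U    /* stands in for CONFIG_PAGING_LEVELS */
#define MIN_ACCEPTABLE  128U  /* stands in for shadow_min_acceptable_pages(d) */

static unsigned int max_u(unsigned int a, unsigned int b)
{
    return a > b ? a : b;
}

/* Before the patch: callers had to add p2m_pages on top of this result. */
static unsigned int min_alloc_old(const struct dom *d)
{
    return MIN_ACCEPTABLE +
           max_u(d->tot_pages / 256, d->hvm ? PAGING_LEVELS + 2 : 0U) +
           (d->hvm ? 1U : 0U);
}

/* After the patch: the p2m page count is folded in via an outer max(),
 * so the function alone yields the full lower bound. */
static unsigned int min_alloc_new(const struct dom *d)
{
    return MIN_ACCEPTABLE +
           max_u(max_u(d->tot_pages / 256,
                       d->hvm ? PAGING_LEVELS + 2 : 0U) +
                     (d->hvm ? 1U : 0U),
                 d->p2m_pages);
}
```

Note the two bounds are deliberately not identical: old callers added p2m_pages unconditionally on top, whereas the folded version only raises the bound when p2m_pages exceeds the shadow estimate, which is what lets the reworked `shadow_set_allocation()` clamp to the bound first and subtract p2m_pages afterwards.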