From patchwork Sun Mar 22 16:14:02 2020
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 11451877
From: julien@xen.org
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org, Wei Liu, Andrew Cooper, Julien Grall, Jan Beulich,
 Roger Pau Monné
Date: Sun, 22 Mar 2020 16:14:02 +0000
Message-Id: <20200322161418.31606-2-julien@xen.org>
In-Reply-To: <20200322161418.31606-1-julien@xen.org>
References: <20200322161418.31606-1-julien@xen.org>
Subject: [Xen-devel] [PATCH 01/17] xen/x86: Introduce helpers to generate/convert the CR3 from/to a MFN/GFN

From: Julien Grall

Introduce handy helpers to generate/convert the CR3 from/to an MFN/GFN.

Note that we are using cr3_pa() rather than xen_cr3_to_pfn() because the
latter does not ignore the top 12 bits.

Take the opportunity to use the new helpers when possible.

Signed-off-by: Julien Grall
---
 xen/arch/x86/domain.c    |  4 ++--
 xen/arch/x86/mm.c        |  2 +-
 xen/include/asm-x86/mm.h | 20 ++++++++++++++++++++
 3 files changed, 23 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index caf2ecad7e..15750ce210 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1096,7 +1096,7 @@ int arch_set_info_guest(
         set_bit(_VPF_in_reset, &v->pause_flags);

     if ( !compat )
-        cr3_mfn = _mfn(xen_cr3_to_pfn(c.nat->ctrlreg[3]));
+        cr3_mfn = cr3_to_mfn(c.nat->ctrlreg[3]);
     else
         cr3_mfn = _mfn(compat_cr3_to_pfn(c.cmp->ctrlreg[3]));
     cr3_page = get_page_from_mfn(cr3_mfn, d);
@@ -1142,7 +1142,7 @@ int arch_set_info_guest(
         v->arch.guest_table = pagetable_from_page(cr3_page);
         if ( c.nat->ctrlreg[1] )
         {
-            cr3_mfn = _mfn(xen_cr3_to_pfn(c.nat->ctrlreg[1]));
+            cr3_mfn = cr3_to_mfn(c.nat->ctrlreg[1]);
             cr3_page = get_page_from_mfn(cr3_mfn, d);

             if ( !cr3_page )
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 62507ca651..069a61deb8 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -509,7 +509,7 @@ void make_cr3(struct vcpu *v, mfn_t mfn)
 {
     struct domain *d = v->domain;

-    v->arch.cr3 = mfn_x(mfn) << PAGE_SHIFT;
+    v->arch.cr3 = mfn_to_cr3(mfn);
     if ( is_pv_domain(d) && d->arch.pv.pcid )
         v->arch.cr3 |= get_pcid_bits(v, false);
 }
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index a06b2fb81f..9764362a38 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -524,6 +524,26 @@ extern struct rangeset *mmio_ro_ranges;
 #define compat_pfn_to_cr3(pfn) (((unsigned)(pfn) << 12) | ((unsigned)(pfn) >> 20))
 #define compat_cr3_to_pfn(cr3) (((unsigned)(cr3) >> 12) | ((unsigned)(cr3) << 20))

+static inline unsigned long mfn_to_cr3(mfn_t mfn)
+{
+    return xen_pfn_to_cr3(mfn_x(mfn));
+}
+
+static inline mfn_t cr3_to_mfn(unsigned long cr3)
+{
+    return maddr_to_mfn(cr3_pa(cr3));
+}
+
+static inline unsigned long gfn_to_cr3(gfn_t gfn)
+{
+    return xen_pfn_to_cr3(gfn_x(gfn));
+}
+
+static inline gfn_t cr3_to_gfn(unsigned long cr3)
+{
+    return gaddr_to_gfn(cr3_pa(cr3));
+}
+
 #ifdef MEMORY_GUARD
 void memguard_guard_range(void *p, unsigned long l);
 void memguard_unguard_range(void *p, unsigned long l);
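A quick standalone illustration of the distinction the commit message draws between masking the control bits out of CR3 (cr3_pa()-style) and merely shifting (xen_cr3_to_pfn()-style). Everything below is an assumption made for the sketch — the mask value, the constants and the demo_* names are stand-ins, not code from the Xen headers.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the macros the new helpers build on (assumed values). */
#define DEMO_PAGE_SHIFT    12
#define DEMO_CR3_ADDR_MASK 0x000ffffffffff000ULL  /* assumed cr3_pa()-style address mask */

typedef struct { uint64_t m; } demo_mfn_t;        /* minimal typesafe frame-number wrapper */

static uint64_t demo_mfn_to_cr3(demo_mfn_t mfn)
{
    return mfn.m << DEMO_PAGE_SHIFT;              /* frame number -> physical address */
}

/* Shift-only conversion: keeps whatever sits in the top bits of CR3. */
static demo_mfn_t demo_cr3_to_pfn_shift(uint64_t cr3)
{
    return (demo_mfn_t){ cr3 >> DEMO_PAGE_SHIFT };
}

/* Mask first, then shift: control bits outside the address field are dropped. */
static demo_mfn_t demo_cr3_to_mfn(uint64_t cr3)
{
    return (demo_mfn_t){ (cr3 & DEMO_CR3_ADDR_MASK) >> DEMO_PAGE_SHIFT };
}

int main(void)
{
    demo_mfn_t mfn = { 0x1234 };
    uint64_t cr3 = demo_mfn_to_cr3(mfn) | (1ULL << 63); /* e.g. a no-flush style control bit */

    assert(demo_cr3_to_mfn(cr3).m == mfn.m);            /* masked version round-trips */
    assert(demo_cr3_to_pfn_shift(cr3).m != mfn.m);      /* shift-only version does not */
    printf("cr3=%#llx -> mfn=%#llx\n",
           (unsigned long long)cr3, (unsigned long long)demo_cr3_to_mfn(cr3).m);
    return 0;
}
```

This masking is the reason the patch routes cr3_to_mfn()/cr3_to_gfn() through cr3_pa() before converting back to a frame number.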
From patchwork Sun Mar 22 16:14:03 2020
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 11451871
Received: from ufe34d9ed68d054.ant.amazon.com (54-240-197-235.amazon.com.
[54.240.197.235]) by smtp.gmail.com with ESMTPSA id v13sm106693edj.62.2020.03.22.09.14.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 22 Mar 2020 09:14:25 -0700 (PDT) From: julien@xen.org To: xen-devel@lists.xenproject.org Date: Sun, 22 Mar 2020 16:14:03 +0000 Message-Id: <20200322161418.31606-3-julien@xen.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200322161418.31606-1-julien@xen.org> References: <20200322161418.31606-1-julien@xen.org> Subject: [Xen-devel] [PATCH 02/17] xen/x86_64: Convert do_page_walk() to use typesafe MFN X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: julien@xen.org, Wei Liu , Andrew Cooper , Julien Grall , Jan Beulich , =?utf-8?q?Roger_Pau_Monn=C3=A9?= MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Julien Grall No functional changes intended. Signed-off-by: Julien Grall Reviewed-by: Jan Beulich --- xen/arch/x86/x86_64/mm.c | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c index b7ce833ffc..3516423bb0 100644 --- a/xen/arch/x86/x86_64/mm.c +++ b/xen/arch/x86/x86_64/mm.c @@ -46,7 +46,7 @@ l2_pgentry_t *compat_idle_pg_table_l2; void *do_page_walk(struct vcpu *v, unsigned long addr) { - unsigned long mfn = pagetable_get_pfn(v->arch.guest_table); + mfn_t mfn = pagetable_get_mfn(v->arch.guest_table); l4_pgentry_t l4e, *l4t; l3_pgentry_t l3e, *l3t; l2_pgentry_t l2e, *l2t; @@ -55,7 +55,7 @@ void *do_page_walk(struct vcpu *v, unsigned long addr) if ( !is_pv_vcpu(v) || !is_canonical_address(addr) ) return NULL; - l4t = map_domain_page(_mfn(mfn)); + l4t = map_domain_page(mfn); l4e = l4t[l4_table_offset(addr)]; unmap_domain_page(l4t); if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) ) @@ -64,36 +64,36 @@ void *do_page_walk(struct vcpu *v, unsigned long addr) l3t = map_l3t_from_l4e(l4e); l3e = l3t[l3_table_offset(addr)]; unmap_domain_page(l3t); - mfn = l3e_get_pfn(l3e); - if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) || !mfn_valid(_mfn(mfn)) ) + mfn = l3e_get_mfn(l3e); + if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) || !mfn_valid(mfn) ) return NULL; if ( (l3e_get_flags(l3e) & _PAGE_PSE) ) { - mfn += PFN_DOWN(addr & ((1UL << L3_PAGETABLE_SHIFT) - 1)); + mfn = mfn_add(mfn, PFN_DOWN(addr & ((1UL << L3_PAGETABLE_SHIFT) - 1))); goto ret; } - l2t = map_domain_page(_mfn(mfn)); + l2t = map_domain_page(mfn); l2e = l2t[l2_table_offset(addr)]; unmap_domain_page(l2t); - mfn = l2e_get_pfn(l2e); - if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) || !mfn_valid(_mfn(mfn)) ) + mfn = l2e_get_mfn(l2e); + if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) || !mfn_valid(mfn) ) return NULL; if ( (l2e_get_flags(l2e) & _PAGE_PSE) ) { - mfn += PFN_DOWN(addr & ((1UL << L2_PAGETABLE_SHIFT) - 1)); + mfn = mfn_add(mfn, PFN_DOWN(addr & ((1UL << L2_PAGETABLE_SHIFT) - 1))); goto ret; } - l1t = map_domain_page(_mfn(mfn)); + l1t = map_domain_page(mfn); l1e = l1t[l1_table_offset(addr)]; unmap_domain_page(l1t); - mfn = l1e_get_pfn(l1e); - if ( !(l1e_get_flags(l1e) & _PAGE_PRESENT) || !mfn_valid(_mfn(mfn)) ) + mfn = l1e_get_mfn(l1e); + if ( !(l1e_get_flags(l1e) & _PAGE_PRESENT) || !mfn_valid(mfn) ) return NULL; ret: - return map_domain_page(_mfn(mfn)) + (addr & ~PAGE_MASK); + return map_domain_page(mfn) + (addr & ~PAGE_MASK); } /* From patchwork Sun Mar 22 16:14:04 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 11451885
Received: from ufe34d9ed68d054.ant.amazon.com (54-240-197-235.amazon.com.
[54.240.197.235]) by smtp.gmail.com with ESMTPSA id v13sm106693edj.62.2020.03.22.09.14.25 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 22 Mar 2020 09:14:26 -0700 (PDT) From: julien@xen.org To: xen-devel@lists.xenproject.org Date: Sun, 22 Mar 2020 16:14:04 +0000 Message-Id: <20200322161418.31606-4-julien@xen.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200322161418.31606-1-julien@xen.org> References: <20200322161418.31606-1-julien@xen.org> Subject: [Xen-devel] [PATCH 03/17] xen/mm: Move the MM types in a separate header X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Stefano Stabellini , julien@xen.org, Wei Liu , Andrew Cooper , Julien Grall , Ian Jackson , George Dunlap , Jan Beulich MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Julien Grall It is getting incredibly difficult to use typesafe GFN/MFN/PFN in the headers because of circular dependency. For instance, asm-x86/page.h cannot include xen/mm.h. In order to convert more code to use typesafe, the types are now moved in a separate header that requires only a few dependencies. Signed-off-by: Julien Grall --- xen/include/xen/mm.h | 134 +------------------------------- xen/include/xen/mm_types.h | 155 +++++++++++++++++++++++++++++++++++++ 2 files changed, 156 insertions(+), 133 deletions(-) create mode 100644 xen/include/xen/mm_types.h diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h index d0d095d9c7..4337303f99 100644 --- a/xen/include/xen/mm.h +++ b/xen/include/xen/mm.h @@ -1,50 +1,7 @@ /****************************************************************************** * include/xen/mm.h * - * Definitions for memory pages, frame numbers, addresses, allocations, etc. - * * Copyright (c) 2002-2006, K A Fraser - * - * +---------------------+ - * Xen Memory Management - * +---------------------+ - * - * Xen has to handle many different address spaces. It is important not to - * get these spaces mixed up. The following is a consistent terminology which - * should be adhered to. - * - * mfn: Machine Frame Number - * The values Xen puts into its own pagetables. This is the host physical - * memory address space with RAM, MMIO etc. - * - * gfn: Guest Frame Number - * The values a guest puts in its own pagetables. For an auto-translated - * guest (hardware assisted with 2nd stage translation, or shadowed), gfn != - * mfn. For a non-translated guest which is aware of Xen, gfn == mfn. - * - * pfn: Pseudophysical Frame Number - * A linear idea of a guest physical address space. For an auto-translated - * guest, pfn == gfn while for a non-translated guest, pfn != gfn. - * - * dfn: Device DMA Frame Number (definitions in include/xen/iommu.h) - * The linear frame numbers of device DMA address space. All initiators for - * (i.e. all devices assigned to) a guest share a single DMA address space - * and, by default, Xen will ensure dfn == pfn. - * - * WARNING: Some of these terms have changed over time while others have been - * used inconsistently, meaning that a lot of existing code does not match the - * definitions above. New code should use these terms as described here, and - * over time older code should be corrected to be consistent. - * - * An incomplete list of larger work area: - * - Phase out the use of 'pfn' from the x86 pagetable code. Callers should - * know explicitly whether they are talking about mfns or gfns. 
- * - Phase out the use of 'pfn' from the ARM mm code. A cursory glance - * suggests that 'mfn' and 'pfn' are currently used interchangeably, where - * 'mfn' is the appropriate term to use. - * - Phase out the use of gpfn/gmfn where pfn/mfn are meant. This excludes - * the x86 shadow code, which uses gmfn/smfn pairs with different, - * documented, meanings. */ #ifndef __XEN_MM_H__ @@ -54,100 +11,11 @@ #include #include #include -#include #include +#include #include #include -TYPE_SAFE(unsigned long, mfn); -#define PRI_mfn "05lx" -#define INVALID_MFN _mfn(~0UL) -/* - * To be used for global variable initialization. This workaround a bug - * in GCC < 5.0. - */ -#define INVALID_MFN_INITIALIZER { ~0UL } - -#ifndef mfn_t -#define mfn_t /* Grep fodder: mfn_t, _mfn() and mfn_x() are defined above */ -#define _mfn -#define mfn_x -#undef mfn_t -#undef _mfn -#undef mfn_x -#endif - -static inline mfn_t mfn_add(mfn_t mfn, unsigned long i) -{ - return _mfn(mfn_x(mfn) + i); -} - -static inline mfn_t mfn_max(mfn_t x, mfn_t y) -{ - return _mfn(max(mfn_x(x), mfn_x(y))); -} - -static inline mfn_t mfn_min(mfn_t x, mfn_t y) -{ - return _mfn(min(mfn_x(x), mfn_x(y))); -} - -static inline bool_t mfn_eq(mfn_t x, mfn_t y) -{ - return mfn_x(x) == mfn_x(y); -} - -TYPE_SAFE(unsigned long, gfn); -#define PRI_gfn "05lx" -#define INVALID_GFN _gfn(~0UL) -/* - * To be used for global variable initialization. This workaround a bug - * in GCC < 5.0 https://gcc.gnu.org/bugzilla/show_bug.cgi?id=64856 - */ -#define INVALID_GFN_INITIALIZER { ~0UL } - -#ifndef gfn_t -#define gfn_t /* Grep fodder: gfn_t, _gfn() and gfn_x() are defined above */ -#define _gfn -#define gfn_x -#undef gfn_t -#undef _gfn -#undef gfn_x -#endif - -static inline gfn_t gfn_add(gfn_t gfn, unsigned long i) -{ - return _gfn(gfn_x(gfn) + i); -} - -static inline gfn_t gfn_max(gfn_t x, gfn_t y) -{ - return _gfn(max(gfn_x(x), gfn_x(y))); -} - -static inline gfn_t gfn_min(gfn_t x, gfn_t y) -{ - return _gfn(min(gfn_x(x), gfn_x(y))); -} - -static inline bool_t gfn_eq(gfn_t x, gfn_t y) -{ - return gfn_x(x) == gfn_x(y); -} - -TYPE_SAFE(unsigned long, pfn); -#define PRI_pfn "05lx" -#define INVALID_PFN (~0UL) - -#ifndef pfn_t -#define pfn_t /* Grep fodder: pfn_t, _pfn() and pfn_x() are defined above */ -#define _pfn -#define pfn_x -#undef pfn_t -#undef _pfn -#undef pfn_x -#endif - struct page_info; void put_page(struct page_info *); diff --git a/xen/include/xen/mm_types.h b/xen/include/xen/mm_types.h new file mode 100644 index 0000000000..f14359f571 --- /dev/null +++ b/xen/include/xen/mm_types.h @@ -0,0 +1,155 @@ +/****************************************************************************** + * include/xen/mm_types.h + * + * Definitions for memory pages, frame numbers, addresses, allocations, etc. + * + * Copyright (c) 2002-2006, K A Fraser + * + * +---------------------+ + * Xen Memory Management + * +---------------------+ + * + * Xen has to handle many different address spaces. It is important not to + * get these spaces mixed up. The following is a consistent terminology which + * should be adhered to. + * + * mfn: Machine Frame Number + * The values Xen puts into its own pagetables. This is the host physical + * memory address space with RAM, MMIO etc. + * + * gfn: Guest Frame Number + * The values a guest puts in its own pagetables. For an auto-translated + * guest (hardware assisted with 2nd stage translation, or shadowed), gfn != + * mfn. For a non-translated guest which is aware of Xen, gfn == mfn. 
+ * + * pfn: Pseudophysical Frame Number + * A linear idea of a guest physical address space. For an auto-translated + * guest, pfn == gfn while for a non-translated guest, pfn != gfn. + * + * dfn: Device DMA Frame Number (definitions in include/xen/iommu.h) + * The linear frame numbers of device DMA address space. All initiators for + * (i.e. all devices assigned to) a guest share a single DMA address space + * and, by default, Xen will ensure dfn == pfn. + * + * WARNING: Some of these terms have changed over time while others have been + * used inconsistently, meaning that a lot of existing code does not match the + * definitions above. New code should use these terms as described here, and + * over time older code should be corrected to be consistent. + * + * An incomplete list of larger work area: + * - Phase out the use of 'pfn' from the x86 pagetable code. Callers should + * know explicitly whether they are talking about mfns or gfns. + * - Phase out the use of 'pfn' from the ARM mm code. A cursory glance + * suggests that 'mfn' and 'pfn' are currently used interchangeably, where + * 'mfn' is the appropriate term to use. + * - Phase out the use of gpfn/gmfn where pfn/mfn are meant. This excludes + * the x86 shadow code, which uses gmfn/smfn pairs with different, + * documented, meanings. + */ + +#ifndef __XEN_MM_TYPES_H__ +#define __XEN_MM_TYPES_H__ + +#include +#include + +TYPE_SAFE(unsigned long, mfn); +#define PRI_mfn "05lx" +#define INVALID_MFN _mfn(~0UL) +/* + * To be used for global variable initialization. This workaround a bug + * in GCC < 5.0. + */ +#define INVALID_MFN_INITIALIZER { ~0UL } + +#ifndef mfn_t +#define mfn_t /* Grep fodder: mfn_t, _mfn() and mfn_x() are defined above */ +#define _mfn +#define mfn_x +#undef mfn_t +#undef _mfn +#undef mfn_x +#endif + +static inline mfn_t mfn_add(mfn_t mfn, unsigned long i) +{ + return _mfn(mfn_x(mfn) + i); +} + +static inline mfn_t mfn_max(mfn_t x, mfn_t y) +{ + return _mfn(max(mfn_x(x), mfn_x(y))); +} + +static inline mfn_t mfn_min(mfn_t x, mfn_t y) +{ + return _mfn(min(mfn_x(x), mfn_x(y))); +} + +static inline bool_t mfn_eq(mfn_t x, mfn_t y) +{ + return mfn_x(x) == mfn_x(y); +} + +TYPE_SAFE(unsigned long, gfn); +#define PRI_gfn "05lx" +#define INVALID_GFN _gfn(~0UL) +/* + * To be used for global variable initialization. 
This workaround a bug + * in GCC < 5.0 https://gcc.gnu.org/bugzilla/show_bug.cgi?id=64856 + */ +#define INVALID_GFN_INITIALIZER { ~0UL } + +#ifndef gfn_t +#define gfn_t /* Grep fodder: gfn_t, _gfn() and gfn_x() are defined above */ +#define _gfn +#define gfn_x +#undef gfn_t +#undef _gfn +#undef gfn_x +#endif + +static inline gfn_t gfn_add(gfn_t gfn, unsigned long i) +{ + return _gfn(gfn_x(gfn) + i); +} + +static inline gfn_t gfn_max(gfn_t x, gfn_t y) +{ + return _gfn(max(gfn_x(x), gfn_x(y))); +} + +static inline gfn_t gfn_min(gfn_t x, gfn_t y) +{ + return _gfn(min(gfn_x(x), gfn_x(y))); +} + +static inline bool_t gfn_eq(gfn_t x, gfn_t y) +{ + return gfn_x(x) == gfn_x(y); +} + +TYPE_SAFE(unsigned long, pfn); +#define PRI_pfn "05lx" +#define INVALID_PFN (~0UL) + +#ifndef pfn_t +#define pfn_t /* Grep fodder: pfn_t, _pfn() and pfn_x() are defined above */ +#define _pfn +#define pfn_x +#undef pfn_t +#undef _pfn +#undef pfn_x +#endif + +#endif /* __XEN_MM_TYPES_H__ */ + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ From patchwork Sun Mar 22 16:14:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 11451897 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2D6FD1744 for ; Sun, 22 Mar 2020 16:16:23 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id F18C720724 for ; Sun, 22 Mar 2020 16:16:22 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org F18C720724 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=xen.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3FL-0004dF-GO; Sun, 22 Mar 2020 16:14:43 +0000 Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3FK-0004cM-7b for xen-devel@lists.xenproject.org; Sun, 22 Mar 2020 16:14:42 +0000 X-Inumbo-ID: 3563f981-6c58-11ea-8134-12813bfff9fa Received: from mail-ed1-f68.google.com (unknown [209.85.208.68]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS id 3563f981-6c58-11ea-8134-12813bfff9fa; Sun, 22 Mar 2020 16:14:30 +0000 (UTC) Received: by mail-ed1-f68.google.com with SMTP id a20so13525357edj.2 for ; Sun, 22 Mar 2020 09:14:30 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=lot+uUhxxAln3AbFbT8y1IzSnOaGANX3BPF4a2iVCls=; b=tL2Ziksg0sQkKEenvEjzJm03cYtJMBQD/7Ifc/yopggLoguAyqPNH2liv49/KyVz2m kyglCJJA0EJSnCebX2k6NWkM0RPBOfSSWmFLDH7rEsfNFh+W9cDoWgDKuQar6+vk58u+ u8zDbhGH2qy0lorECuPltu5LrFytfsx7nVVntjqX843kqwRKeim9JyUqnyY4I+E/T7F8 O72gUoU/S13Mp98Nc1Ty9xp+RWyCd4yxNjY2tGSWddTPkAyhqW/18qYzJheIrRnCBe5n FJQFS5+ZjQA8nZb71/6j2NyRSGcfGTi2Ym8o4IBTgGRcuxz5w+IO3LAp2suhnUx/5Iek N9Hw== X-Gm-Message-State: ANhLgQ1tN/R9gRJCw93U+0/QlxezrvseT6jrc5NTEoEX7DC8NWL7rmQE UbvuB+Dn7Lgbzqe2JwQGIjF19/fkVuVgtQ== X-Google-Smtp-Source: 
ADFU+vuTFpDHwvEr80tHYWVVAL4QQrOZ+xQMsWhzGRDu1nPGhxzfPx62CGMl/Orzm4/sHcjx5vpmkQ==
From: julien@xen.org
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini, julien@xen.org, Wei Liu, Konrad Rzeszutek Wilk,
 Andrew Cooper, Julien Grall, Ian Jackson, George Dunlap, Ross Lagerwall,
 Lukasz Hawrylko, Jan Beulich, Volodymyr Babchuk, Roger Pau Monné
Date: Sun, 22 Mar 2020 16:14:05 +0000
Message-Id: <20200322161418.31606-5-julien@xen.org>
In-Reply-To: <20200322161418.31606-1-julien@xen.org>
References: <20200322161418.31606-1-julien@xen.org>
Subject: [Xen-devel] [PATCH 04/17] xen: Convert virt_to_mfn() and mfn_to_virt() to use typesafe MFN

From: Julien Grall

Most of Xen now either overrides the helpers virt_to_mfn() and mfn_to_virt()
to use typesafe MFN, or uses mfn_x() to remove the typesafety when calling
them. Therefore it is time to switch the two helpers to use typesafe MFN and
to remove the possibility of making them unsafe, by dropping the
double-underscore versions.

Places that were still using non-typesafe MFNs have either been converted to
the typesafe helpers (where the changes are simple) or use _mfn()/mfn_x()
until the rest of the code is changed.

There are a couple of noticeable changes in the code:
    - pvh_populate_p2m() was storing the MFN in a variable called 'addr'.
      This has now been renamed to 'mfn'.
    - allocate_cachealigned_memnodemap() was storing an address in a
      variable called 'mfn'. The code has been reworked to avoid repurposing
      the variable.

No functional changes intended.
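As background for the conversion, a minimal standalone model of what the typesafe MFN wrapper buys: the frame number lives inside a struct, so the compiler rejects raw integers where an MFN is expected. The demo_* names, the 12-bit shift and the direct-map base below are assumptions for the sketch, not Xen's definitions.

```c
#include <stdint.h>
#include <stdio.h>

/*
 * Minimal model of a TYPE_SAFE()-style wrapper: MFNs cannot be mixed up with
 * plain integers (or with GFNs) without going through explicit accessors.
 */
typedef struct { unsigned long m; } demo_mfn_t;

static inline demo_mfn_t demo_mfn(unsigned long m)     { return (demo_mfn_t){ m }; }
static inline unsigned long demo_mfn_x(demo_mfn_t mfn) { return mfn.m; }

static inline demo_mfn_t demo_mfn_add(demo_mfn_t mfn, unsigned long i)
{
    return demo_mfn(demo_mfn_x(mfn) + i);
}

/* A helper taking the wrapper type: passing a bare integer will not compile. */
static void *demo_mfn_to_virt(demo_mfn_t mfn, uintptr_t directmap_base)
{
    return (void *)(directmap_base + (demo_mfn_x(mfn) << 12));
}

int main(void)
{
    demo_mfn_t mfn = demo_mfn(0x1000);

    /* demo_mfn_to_virt(0x1000, 0x40000000UL) would be rejected by the compiler. */
    printf("virt=%p\n", demo_mfn_to_virt(demo_mfn_add(mfn, 1), 0x40000000UL));
    return 0;
}
```

The point of the series is that virt_to_mfn()/mfn_to_virt() now take and return such a wrapper directly, so the remaining _mfn()/mfn_x() calls mark exactly the places that still deal in raw frame numbers.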
Signed-off-by: Julien Grall Reviewed-by: Jan Beulich --- xen/arch/arm/acpi/domain_build.c | 4 ---- xen/arch/arm/alternative.c | 4 ---- xen/arch/arm/cpuerrata.c | 4 ---- xen/arch/arm/domain_build.c | 4 ---- xen/arch/arm/livepatch.c | 4 ---- xen/arch/arm/mm.c | 8 +------- xen/arch/x86/domain_page.c | 10 +++++----- xen/arch/x86/hvm/dom0_build.c | 20 ++++++++++--------- xen/arch/x86/mm.c | 30 +++++++++++++---------------- xen/arch/x86/numa.c | 8 +++----- xen/arch/x86/pv/descriptor-tables.c | 2 +- xen/arch/x86/pv/dom0_build.c | 4 ++-- xen/arch/x86/pv/shim.c | 3 --- xen/arch/x86/setup.c | 10 +++++----- xen/arch/x86/smpboot.c | 4 ++-- xen/arch/x86/srat.c | 2 +- xen/arch/x86/tboot.c | 4 ++-- xen/arch/x86/traps.c | 4 ++-- xen/arch/x86/x86_64/mm.c | 13 +++++++------ xen/common/domctl.c | 3 ++- xen/common/efi/boot.c | 7 ++++--- xen/common/grant_table.c | 8 ++++---- xen/common/page_alloc.c | 18 ++++++++--------- xen/common/trace.c | 19 +++++++++--------- xen/common/xenoprof.c | 4 ---- xen/drivers/acpi/osl.c | 2 +- xen/include/asm-arm/mm.h | 14 +++----------- xen/include/asm-x86/grant_table.h | 4 ++-- xen/include/asm-x86/mm.h | 2 +- xen/include/asm-x86/page.h | 6 ++---- xen/include/xen/domain_page.h | 6 +++--- 31 files changed, 96 insertions(+), 139 deletions(-) diff --git a/xen/arch/arm/acpi/domain_build.c b/xen/arch/arm/acpi/domain_build.c index 1b1cfabb00..b3ac32f601 100644 --- a/xen/arch/arm/acpi/domain_build.c +++ b/xen/arch/arm/acpi/domain_build.c @@ -20,10 +20,6 @@ #include #include -/* Override macros from asm/page.h to make them work with mfn_t */ -#undef virt_to_mfn -#define virt_to_mfn(va) _mfn(__virt_to_mfn(va)) - #define ACPI_DOM0_FDT_MIN_SIZE 4096 static int __init acpi_iomem_deny_access(struct domain *d) diff --git a/xen/arch/arm/alternative.c b/xen/arch/arm/alternative.c index 237c4e5642..724b0b187e 100644 --- a/xen/arch/arm/alternative.c +++ b/xen/arch/arm/alternative.c @@ -32,10 +32,6 @@ #include #include -/* Override macros from asm/page.h to make them work with mfn_t */ -#undef virt_to_mfn -#define virt_to_mfn(va) _mfn(__virt_to_mfn(va)) - extern const struct alt_instr __alt_instructions[], __alt_instructions_end[]; struct alt_region { diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c index 0248893de0..68105fe91f 100644 --- a/xen/arch/arm/cpuerrata.c +++ b/xen/arch/arm/cpuerrata.c @@ -14,10 +14,6 @@ #include #include -/* Override macros from asm/page.h to make them work with mfn_t */ -#undef virt_to_mfn -#define virt_to_mfn(va) _mfn(__virt_to_mfn(va)) - /* Hardening Branch predictor code for Arm64 */ #ifdef CONFIG_ARM64_HARDEN_BRANCH_PREDICTOR diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index 4307087536..5c9a55f084 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -52,10 +52,6 @@ struct map_range_data p2m_type_t p2mt; }; -/* Override macros from asm/page.h to make them work with mfn_t */ -#undef virt_to_mfn -#define virt_to_mfn(va) _mfn(__virt_to_mfn(va)) - //#define DEBUG_11_ALLOCATION #ifdef DEBUG_11_ALLOCATION # define D11PRINT(fmt, args...) 
printk(XENLOG_DEBUG fmt, ##args) diff --git a/xen/arch/arm/livepatch.c b/xen/arch/arm/livepatch.c index 915e9d926a..0ffdda6005 100644 --- a/xen/arch/arm/livepatch.c +++ b/xen/arch/arm/livepatch.c @@ -12,10 +12,6 @@ #include #include -/* Override macros from asm/page.h to make them work with mfn_t */ -#undef virt_to_mfn -#define virt_to_mfn(va) _mfn(__virt_to_mfn(va)) - void *vmap_of_xen_text; int arch_livepatch_safety_check(void) diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c index 727107eefa..1075e5fcaf 100644 --- a/xen/arch/arm/mm.c +++ b/xen/arch/arm/mm.c @@ -43,12 +43,6 @@ #include -/* Override macros from asm/page.h to make them work with mfn_t */ -#undef virt_to_mfn -#define virt_to_mfn(va) _mfn(__virt_to_mfn(va)) -#undef mfn_to_virt -#define mfn_to_virt(mfn) __mfn_to_virt(mfn_x(mfn)) - #ifdef NDEBUG static inline void __attribute__ ((__format__ (__printf__, 1, 2))) @@ -835,7 +829,7 @@ void __init setup_xenheap_mappings(unsigned long base_mfn, * Virtual address aligned to previous 1GB to match physical * address alignment done above. */ - vaddr = (vaddr_t)__mfn_to_virt(base_mfn) & FIRST_MASK; + vaddr = (vaddr_t)mfn_to_virt(_mfn(base_mfn)) & FIRST_MASK; while ( mfn < end_mfn ) { diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c index dd32712d2f..8b8bf4cbe8 100644 --- a/xen/arch/x86/domain_page.c +++ b/xen/arch/x86/domain_page.c @@ -78,17 +78,17 @@ void *map_domain_page(mfn_t mfn) #ifdef NDEBUG if ( mfn_x(mfn) <= PFN_DOWN(__pa(HYPERVISOR_VIRT_END - 1)) ) - return mfn_to_virt(mfn_x(mfn)); + return mfn_to_virt(mfn); #endif v = mapcache_current_vcpu(); if ( !v || !is_pv_vcpu(v) ) - return mfn_to_virt(mfn_x(mfn)); + return mfn_to_virt(mfn); dcache = &v->domain->arch.pv.mapcache; vcache = &v->arch.pv.mapcache; if ( !dcache->inuse ) - return mfn_to_virt(mfn_x(mfn)); + return mfn_to_virt(mfn); perfc_incr(map_domain_page_count); @@ -311,7 +311,7 @@ void *map_domain_page_global(mfn_t mfn) #ifdef NDEBUG if ( mfn_x(mfn) <= PFN_DOWN(__pa(HYPERVISOR_VIRT_END - 1)) ) - return mfn_to_virt(mfn_x(mfn)); + return mfn_to_virt(mfn); #endif return vmap(&mfn, 1); @@ -336,7 +336,7 @@ mfn_t domain_page_map_to_mfn(const void *ptr) const l1_pgentry_t *pl1e; if ( va >= DIRECTMAP_VIRT_START ) - return _mfn(virt_to_mfn(ptr)); + return virt_to_mfn(ptr); if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END ) { diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c index 2afd44c8a4..143b7e0a3c 100644 --- a/xen/arch/x86/hvm/dom0_build.c +++ b/xen/arch/x86/hvm/dom0_build.c @@ -444,31 +444,32 @@ static int __init pvh_populate_p2m(struct domain *d) /* Populate memory map. 
*/ for ( i = 0; i < d->arch.nr_e820; i++ ) { - unsigned long addr, size; + mfn_t mfn; + unsigned long size; if ( d->arch.e820[i].type != E820_RAM ) continue; - addr = PFN_DOWN(d->arch.e820[i].addr); + mfn = maddr_to_mfn(d->arch.e820[i].addr); size = PFN_DOWN(d->arch.e820[i].size); - rc = pvh_populate_memory_range(d, addr, size); + rc = pvh_populate_memory_range(d, mfn_x(mfn), size); if ( rc ) return rc; - if ( addr < MB1_PAGES ) + if ( mfn_x(mfn) < MB1_PAGES ) { uint64_t end = min_t(uint64_t, MB(1), d->arch.e820[i].addr + d->arch.e820[i].size); enum hvm_translation_result res = - hvm_copy_to_guest_phys(mfn_to_maddr(_mfn(addr)), - mfn_to_virt(addr), + hvm_copy_to_guest_phys(mfn_to_maddr(mfn), + mfn_to_virt(mfn), d->arch.e820[i].addr - end, v); if ( res != HVMTRANS_okay ) - printk("Failed to copy [%#lx, %#lx): %d\n", - addr, addr + size, res); + printk("Failed to copy [%"PRI_mfn", %"PRI_mfn"): %d\n", + mfn_x(mfn), mfn_x(mfn_add(mfn, size)), res); } } @@ -607,7 +608,8 @@ static int __init pvh_load_kernel(struct domain *d, const module_t *image, if ( initrd != NULL ) { - rc = hvm_copy_to_guest_phys(last_addr, mfn_to_virt(initrd->mod_start), + rc = hvm_copy_to_guest_phys(last_addr, + mfn_to_virt(_mfn(initrd->mod_start)), initrd->mod_end, v); if ( rc ) { diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c index 069a61deb8..7c0f81759a 100644 --- a/xen/arch/x86/mm.c +++ b/xen/arch/x86/mm.c @@ -152,10 +152,6 @@ #include "pv/mm.h" #endif -/* Override macros from asm/page.h to make them work with mfn_t */ -#undef virt_to_mfn -#define virt_to_mfn(v) _mfn(__virt_to_mfn(v)) - /* Mapping of the fixmap space needed early. */ l1_pgentry_t __section(".bss.page_aligned") __aligned(PAGE_SIZE) l1_fixmap[L1_PAGETABLE_ENTRIES]; @@ -323,8 +319,8 @@ void __init arch_init_memory(void) iostart_pfn = max_t(unsigned long, pfn, 1UL << (20 - PAGE_SHIFT)); ioend_pfn = min(rstart_pfn, 16UL << (20 - PAGE_SHIFT)); if ( iostart_pfn < ioend_pfn ) - destroy_xen_mappings((unsigned long)mfn_to_virt(iostart_pfn), - (unsigned long)mfn_to_virt(ioend_pfn)); + destroy_xen_mappings((unsigned long)mfn_to_virt(_mfn(iostart_pfn)), + (unsigned long)mfn_to_virt(_mfn(ioend_pfn))); /* Mark as I/O up to next RAM region. 
*/ for ( ; pfn < rstart_pfn; pfn++ ) @@ -785,21 +781,21 @@ bool is_iomem_page(mfn_t mfn) return (page_get_owner(page) == dom_io); } -static int update_xen_mappings(unsigned long mfn, unsigned int cacheattr) +static int update_xen_mappings(mfn_t mfn, unsigned int cacheattr) { int err = 0; - bool alias = mfn >= PFN_DOWN(xen_phys_start) && - mfn < PFN_UP(xen_phys_start + xen_virt_end - XEN_VIRT_START); + bool alias = mfn_x(mfn) >= PFN_DOWN(xen_phys_start) && + mfn_x(mfn) < PFN_UP(xen_phys_start + xen_virt_end - XEN_VIRT_START); unsigned long xen_va = - XEN_VIRT_START + ((mfn - PFN_DOWN(xen_phys_start)) << PAGE_SHIFT); + XEN_VIRT_START + mfn_to_maddr(mfn_add(mfn, -PFN_DOWN(xen_phys_start))); if ( unlikely(alias) && cacheattr ) - err = map_pages_to_xen(xen_va, _mfn(mfn), 1, 0); + err = map_pages_to_xen(xen_va, mfn, 1, 0); if ( !err ) - err = map_pages_to_xen((unsigned long)mfn_to_virt(mfn), _mfn(mfn), 1, + err = map_pages_to_xen((unsigned long)mfn_to_virt(mfn), mfn, 1, PAGE_HYPERVISOR | cacheattr_to_pte_flags(cacheattr)); if ( unlikely(alias) && !cacheattr && !err ) - err = map_pages_to_xen(xen_va, _mfn(mfn), 1, PAGE_HYPERVISOR); + err = map_pages_to_xen(xen_va, mfn, 1, PAGE_HYPERVISOR); return err; } @@ -1029,7 +1025,7 @@ get_page_from_l1e( nx = (x & ~PGC_cacheattr_mask) | (cacheattr << PGC_cacheattr_base); } while ( (y = cmpxchg(&page->count_info, x, nx)) != x ); - err = update_xen_mappings(mfn, cacheattr); + err = update_xen_mappings(_mfn(mfn), cacheattr); if ( unlikely(err) ) { cacheattr = y & PGC_cacheattr_mask; @@ -2449,7 +2445,7 @@ static int cleanup_page_mappings(struct page_info *page) BUG_ON(is_xen_heap_page(page)); - rc = update_xen_mappings(mfn, 0); + rc = update_xen_mappings(_mfn(mfn), 0); } /* @@ -4950,7 +4946,7 @@ void *alloc_xen_pagetable(void) { mfn_t mfn = alloc_xen_pagetable_new(); - return mfn_eq(mfn, INVALID_MFN) ? NULL : mfn_to_virt(mfn_x(mfn)); + return mfn_eq(mfn, INVALID_MFN) ? 
NULL : mfn_to_virt(mfn); } void free_xen_pagetable(void *v) @@ -4983,7 +4979,7 @@ mfn_t alloc_xen_pagetable_new(void) void free_xen_pagetable_new(mfn_t mfn) { if ( system_state != SYS_STATE_early_boot && !mfn_eq(mfn, INVALID_MFN) ) - free_xenheap_page(mfn_to_virt(mfn_x(mfn))); + free_xenheap_page(mfn_to_virt(mfn)); } static DEFINE_SPINLOCK(map_pgdir_lock); diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c index f1066c59c7..87f7365304 100644 --- a/xen/arch/x86/numa.c +++ b/xen/arch/x86/numa.c @@ -100,14 +100,12 @@ static int __init populate_memnodemap(const struct node *nodes, static int __init allocate_cachealigned_memnodemap(void) { unsigned long size = PFN_UP(memnodemapsize * sizeof(*memnodemap)); - unsigned long mfn = mfn_x(alloc_boot_pages(size, 1)); + mfn_t mfn = alloc_boot_pages(size, 1); memnodemap = mfn_to_virt(mfn); - mfn <<= PAGE_SHIFT; - size <<= PAGE_SHIFT; printk(KERN_DEBUG "NUMA: Allocated memnodemap from %lx - %lx\n", - mfn, mfn + size); - memnodemapsize = size / sizeof(*memnodemap); + mfn_to_maddr(mfn), mfn_to_maddr(mfn_add(mfn, size))); + memnodemapsize = (size << PAGE_SHIFT) / sizeof(*memnodemap); return 0; } diff --git a/xen/arch/x86/pv/descriptor-tables.c b/xen/arch/x86/pv/descriptor-tables.c index 940804b18a..f22beb1f3c 100644 --- a/xen/arch/x86/pv/descriptor-tables.c +++ b/xen/arch/x86/pv/descriptor-tables.c @@ -76,7 +76,7 @@ bool pv_destroy_ldt(struct vcpu *v) void pv_destroy_gdt(struct vcpu *v) { l1_pgentry_t *pl1e = pv_gdt_ptes(v); - mfn_t zero_mfn = _mfn(virt_to_mfn(zero_page)); + mfn_t zero_mfn = virt_to_mfn(zero_page); l1_pgentry_t zero_l1e = l1e_from_mfn(zero_mfn, __PAGE_HYPERVISOR_RO); unsigned int i; diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c index 5678da782d..30846b5f97 100644 --- a/xen/arch/x86/pv/dom0_build.c +++ b/xen/arch/x86/pv/dom0_build.c @@ -523,7 +523,7 @@ int __init dom0_construct_pv(struct domain *d, free_domheap_pages(page, order); page += 1UL << order; } - memcpy(page_to_virt(page), mfn_to_virt(initrd->mod_start), + memcpy(page_to_virt(page), mfn_to_virt(_mfn(initrd->mod_start)), initrd_len); mpt_alloc = (paddr_t)initrd->mod_start << PAGE_SHIFT; init_domheap_pages(mpt_alloc, @@ -601,7 +601,7 @@ int __init dom0_construct_pv(struct domain *d, maddr_to_page(mpt_alloc)->u.inuse.type_info = PGT_l4_page_table; l4start = l4tab = __va(mpt_alloc); mpt_alloc += PAGE_SIZE; clear_page(l4tab); - init_xen_l4_slots(l4tab, _mfn(virt_to_mfn(l4start)), + init_xen_l4_slots(l4tab, virt_to_mfn(l4start), d, INVALID_MFN, true); v->arch.guest_table = pagetable_from_paddr(__pa(l4start)); } diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c index ed2ece8a8a..b849c60699 100644 --- a/xen/arch/x86/pv/shim.c +++ b/xen/arch/x86/pv/shim.c @@ -39,9 +39,6 @@ #include -#undef virt_to_mfn -#define virt_to_mfn(va) _mfn(__virt_to_mfn(va)) - #ifdef CONFIG_PV_SHIM_EXCLUSIVE /* Tolerate "pv-shim" being passed to a CONFIG_PV_SHIM_EXCLUSIVE hypervisor. */ ignore_param("pv-shim"); diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c index 885919d5c3..cfe95c5dac 100644 --- a/xen/arch/x86/setup.c +++ b/xen/arch/x86/setup.c @@ -340,7 +340,7 @@ void *__init bootstrap_map(const module_t *mod) void *ret; if ( system_state != SYS_STATE_early_boot ) - return mod ? mfn_to_virt(mod->mod_start) : NULL; + return mod ? 
mfn_to_virt(_mfn(mod->mod_start)) : NULL; if ( !mod ) { @@ -1005,7 +1005,7 @@ void __init noreturn __start_xen(unsigned long mbi_p) * This needs to remain in sync with xen_in_range() and the * respective reserve_e820_ram() invocation below. */ - mod[mbi->mods_count].mod_start = virt_to_mfn(_stext); + mod[mbi->mods_count].mod_start = mfn_x(virt_to_mfn(_stext)); mod[mbi->mods_count].mod_end = __2M_rwdata_end - _stext; } @@ -1404,7 +1404,7 @@ void __init noreturn __start_xen(unsigned long mbi_p) { set_pdx_range(mod[i].mod_start, mod[i].mod_start + PFN_UP(mod[i].mod_end)); - map_pages_to_xen((unsigned long)mfn_to_virt(mod[i].mod_start), + map_pages_to_xen((unsigned long)mfn_to_virt(_mfn(mod[i].mod_start)), _mfn(mod[i].mod_start), PFN_UP(mod[i].mod_end), PAGE_HYPERVISOR); } @@ -1494,9 +1494,9 @@ void __init noreturn __start_xen(unsigned long mbi_p) numa_initmem_init(0, raw_max_page); - if ( max_page - 1 > virt_to_mfn(HYPERVISOR_VIRT_END - 1) ) + if ( max_page - 1 > mfn_x(virt_to_mfn(HYPERVISOR_VIRT_END - 1)) ) { - unsigned long limit = virt_to_mfn(HYPERVISOR_VIRT_END - 1); + unsigned long limit = mfn_x(virt_to_mfn(HYPERVISOR_VIRT_END - 1)); uint64_t mask = PAGE_SIZE - 1; if ( !highmem_start ) diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c index 09264b02d1..31b4366ab2 100644 --- a/xen/arch/x86/smpboot.c +++ b/xen/arch/x86/smpboot.c @@ -996,7 +996,7 @@ static int cpu_smpboot_alloc(unsigned int cpu) goto out; per_cpu(gdt, cpu) = gdt; per_cpu(gdt_l1e, cpu) = - l1e_from_pfn(virt_to_mfn(gdt), __PAGE_HYPERVISOR_RW); + l1e_from_mfn(virt_to_mfn(gdt), __PAGE_HYPERVISOR_RW); memcpy(gdt, boot_gdt, NR_RESERVED_GDT_PAGES * PAGE_SIZE); BUILD_BUG_ON(NR_CPUS > 0x10000); gdt[PER_CPU_GDT_ENTRY - FIRST_RESERVED_GDT_ENTRY].a = cpu; @@ -1005,7 +1005,7 @@ static int cpu_smpboot_alloc(unsigned int cpu) if ( gdt == NULL ) goto out; per_cpu(compat_gdt_l1e, cpu) = - l1e_from_pfn(virt_to_mfn(gdt), __PAGE_HYPERVISOR_RW); + l1e_from_mfn(virt_to_mfn(gdt), __PAGE_HYPERVISOR_RW); memcpy(gdt, boot_compat_gdt, NR_RESERVED_GDT_PAGES * PAGE_SIZE); gdt[PER_CPU_GDT_ENTRY - FIRST_RESERVED_GDT_ENTRY].a = cpu; diff --git a/xen/arch/x86/srat.c b/xen/arch/x86/srat.c index 506a56d66b..0baf8b97ce 100644 --- a/xen/arch/x86/srat.c +++ b/xen/arch/x86/srat.c @@ -196,7 +196,7 @@ void __init acpi_numa_slit_init(struct acpi_table_slit *slit) return; } mfn = alloc_boot_pages(PFN_UP(slit->header.length), 1); - acpi_slit = mfn_to_virt(mfn_x(mfn)); + acpi_slit = mfn_to_virt(mfn); memcpy(acpi_slit, slit, slit->header.length); } diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c index 8c232270b4..19ea69f7c1 100644 --- a/xen/arch/x86/tboot.c +++ b/xen/arch/x86/tboot.c @@ -260,7 +260,7 @@ static int mfn_in_guarded_stack(unsigned long mfn) continue; p = (void *)((unsigned long)stack_base[i] + STACK_SIZE - PRIMARY_STACK_SIZE - PAGE_SIZE); - if ( mfn == virt_to_mfn(p) ) + if ( mfn_eq(_mfn(mfn), virt_to_mfn(p)) ) return -1; } @@ -296,7 +296,7 @@ static void tboot_gen_xenheap_integrity(const uint8_t key[TB_KEY_SIZE], if ( mfn_in_guarded_stack(mfn) ) continue; /* skip guard stack, see memguard_guard_stack() in mm.c */ - pg = mfn_to_virt(mfn); + pg = mfn_to_virt(_mfn(mfn)); vmac_update((uint8_t *)pg, PAGE_SIZE, &ctx); } } diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c index e838846c6b..4aa7c35be4 100644 --- a/xen/arch/x86/traps.c +++ b/xen/arch/x86/traps.c @@ -2029,9 +2029,9 @@ void __init trap_init(void) /* Cache {,compat_}gdt_l1e now that physically relocation is done. 
*/ this_cpu(gdt_l1e) = - l1e_from_pfn(virt_to_mfn(boot_gdt), __PAGE_HYPERVISOR_RW); + l1e_from_mfn(virt_to_mfn(boot_gdt), __PAGE_HYPERVISOR_RW); this_cpu(compat_gdt_l1e) = - l1e_from_pfn(virt_to_mfn(boot_compat_gdt), __PAGE_HYPERVISOR_RW); + l1e_from_mfn(virt_to_mfn(boot_compat_gdt), __PAGE_HYPERVISOR_RW); percpu_traps_init(); diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c index 3516423bb0..ddd5f1ddc4 100644 --- a/xen/arch/x86/x86_64/mm.c +++ b/xen/arch/x86/x86_64/mm.c @@ -1369,11 +1369,12 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm) return -EINVAL; } - i = virt_to_mfn(HYPERVISOR_VIRT_END - 1) + 1; + i = mfn_x(virt_to_mfn(HYPERVISOR_VIRT_END - 1)) + 1; if ( spfn < i ) { - ret = map_pages_to_xen((unsigned long)mfn_to_virt(spfn), _mfn(spfn), - min(epfn, i) - spfn, PAGE_HYPERVISOR); + ret = map_pages_to_xen((unsigned long)mfn_to_virt(_mfn(spfn)), + _mfn(spfn), min(epfn, i) - spfn, + PAGE_HYPERVISOR); if ( ret ) goto destroy_directmap; } @@ -1381,7 +1382,7 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm) { if ( i < spfn ) i = spfn; - ret = map_pages_to_xen((unsigned long)mfn_to_virt(i), _mfn(i), + ret = map_pages_to_xen((unsigned long)mfn_to_virt(_mfn(i)), _mfn(i), epfn - i, __PAGE_HYPERVISOR_RW); if ( ret ) goto destroy_directmap; @@ -1473,8 +1474,8 @@ destroy_frametable: NODE_DATA(node)->node_start_pfn = old_node_start; NODE_DATA(node)->node_spanned_pages = old_node_span; destroy_directmap: - destroy_xen_mappings((unsigned long)mfn_to_virt(spfn), - (unsigned long)mfn_to_virt(epfn)); + destroy_xen_mappings((unsigned long)mfn_to_virt(_mfn(spfn)), + (unsigned long)mfn_to_virt(_mfn(epfn))); return ret; } diff --git a/xen/common/domctl.c b/xen/common/domctl.c index a69b3b59a8..e4a055dc67 100644 --- a/xen/common/domctl.c +++ b/xen/common/domctl.c @@ -196,7 +196,8 @@ void getdomaininfo(struct domain *d, struct xen_domctl_getdomaininfo *info) info->outstanding_pages = d->outstanding_pages; info->shr_pages = atomic_read(&d->shr_pages); info->paged_pages = atomic_read(&d->paged_pages); - info->shared_info_frame = mfn_to_gmfn(d, virt_to_mfn(d->shared_info)); + info->shared_info_frame = mfn_to_gmfn(d, + mfn_x(virt_to_mfn(d->shared_info))); BUG_ON(SHARED_M2P(info->shared_info_frame)); info->cpupool = cpupool_get_id(d); diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c index a6f84c945a..4f944fb3e8 100644 --- a/xen/common/efi/boot.c +++ b/xen/common/efi/boot.c @@ -1447,7 +1447,7 @@ static __init void copy_mapping(unsigned long mfn, unsigned long end, { l4_pgentry_t l4e = efi_l4_pgtable[l4_table_offset(mfn << PAGE_SHIFT)]; l3_pgentry_t *l3src, *l3dst; - unsigned long va = (unsigned long)mfn_to_virt(mfn); + unsigned long va = (unsigned long)mfn_to_virt(_mfn(mfn)); next = mfn + (1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)); if ( !is_valid(mfn, min(next, end)) ) @@ -1562,9 +1562,10 @@ void __init efi_init_memory(void) !(smfn & pfn_hole_mask) && !((smfn ^ (emfn - 1)) & ~pfn_pdx_bottom_mask) ) { - if ( (unsigned long)mfn_to_virt(emfn - 1) >= HYPERVISOR_VIRT_END ) + if ( (unsigned long)mfn_to_virt(_mfn(emfn - 1)) >= + HYPERVISOR_VIRT_END ) prot &= ~_PAGE_GLOBAL; - if ( map_pages_to_xen((unsigned long)mfn_to_virt(smfn), + if ( map_pages_to_xen((unsigned long)mfn_to_virt(_mfn(smfn)), _mfn(smfn), emfn - smfn, prot) == 0 ) desc->VirtualStart = (unsigned long)maddr_to_virt(desc->PhysicalStart); diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c index 9fd6e60416..407fdf08ff 100644 --- a/xen/common/grant_table.c +++ 
b/xen/common/grant_table.c @@ -3935,8 +3935,8 @@ static int gnttab_get_status_frame_mfn(struct domain *d, } /* Make sure idx is bounded wrt nr_status_frames */ - *mfn = _mfn(virt_to_mfn( - gt->status[array_index_nospec(idx, nr_status_frames(gt))])); + *mfn = virt_to_mfn( + gt->status[array_index_nospec(idx, nr_status_frames(gt))]); return 0; } @@ -3966,8 +3966,8 @@ static int gnttab_get_shared_frame_mfn(struct domain *d, } /* Make sure idx is bounded wrt nr_status_frames */ - *mfn = _mfn(virt_to_mfn( - gt->shared_raw[array_index_nospec(idx, nr_grant_frames(gt))])); + *mfn = virt_to_mfn( + gt->shared_raw[array_index_nospec(idx, nr_grant_frames(gt))]); return 0; } diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index 76d37226df..41e4fa899d 100644 --- a/xen/common/page_alloc.c +++ b/xen/common/page_alloc.c @@ -565,7 +565,7 @@ static unsigned int __read_mostly xenheap_bits; #define xenheap_bits 0 #endif -static unsigned long init_node_heap(int node, unsigned long mfn, +static unsigned long init_node_heap(int node, mfn_t mfn, unsigned long nr, bool *use_tail) { /* First node to be discovered has its heap metadata statically alloced. */ @@ -584,21 +584,21 @@ static unsigned long init_node_heap(int node, unsigned long mfn, needed = 0; } else if ( *use_tail && nr >= needed && - arch_mfn_in_directmap(mfn + nr) && + arch_mfn_in_directmap(mfn_x(mfn_add(mfn, nr))) && (!xenheap_bits || - !((mfn + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) ) + !((mfn_x(mfn) + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) ) { - _heap[node] = mfn_to_virt(mfn + nr - needed); - avail[node] = mfn_to_virt(mfn + nr - 1) + + _heap[node] = mfn_to_virt(mfn_add(mfn, nr - needed)); + avail[node] = mfn_to_virt(mfn_add(mfn, nr - 1)) + PAGE_SIZE - sizeof(**avail) * NR_ZONES; } else if ( nr >= needed && - arch_mfn_in_directmap(mfn + needed) && + arch_mfn_in_directmap(mfn_x(mfn_add(mfn, needed))) && (!xenheap_bits || - !((mfn + needed - 1) >> (xenheap_bits - PAGE_SHIFT))) ) + !((mfn_x(mfn) + needed - 1) >> (xenheap_bits - PAGE_SHIFT))) ) { _heap[node] = mfn_to_virt(mfn); - avail[node] = mfn_to_virt(mfn + needed - 1) + + avail[node] = mfn_to_virt(mfn_add(mfn, needed - 1)) + PAGE_SIZE - sizeof(**avail) * NR_ZONES; *use_tail = false; } @@ -1809,7 +1809,7 @@ static void init_heap_pages( (find_first_set_bit(e) <= find_first_set_bit(s)); unsigned long n; - n = init_node_heap(nid, mfn_x(page_to_mfn(pg + i)), nr_pages - i, + n = init_node_heap(nid, page_to_mfn(pg + i), nr_pages - i, &use_tail); BUG_ON(i + n > nr_pages); if ( n && !use_tail ) diff --git a/xen/common/trace.c b/xen/common/trace.c index a2a389a1c7..8dbbcd31de 100644 --- a/xen/common/trace.c +++ b/xen/common/trace.c @@ -218,7 +218,7 @@ static int alloc_trace_bufs(unsigned int pages) t_info_mfn_list[offset + i] = 0; goto out_dealloc; } - t_info_mfn_list[offset + i] = virt_to_mfn(p); + t_info_mfn_list[offset + i] = mfn_x(virt_to_mfn(p)); } } @@ -234,7 +234,8 @@ static int alloc_trace_bufs(unsigned int pages) offset = t_info->mfn_offset[cpu]; /* Initialize the buffer metadata */ - per_cpu(t_bufs, cpu) = buf = mfn_to_virt(t_info_mfn_list[offset]); + buf = mfn_to_virt(_mfn(t_info_mfn_list[offset])); + per_cpu(t_bufs, cpu) = buf; buf->cons = buf->prod = 0; printk(XENLOG_INFO "xentrace: p%d mfn %x offset %u\n", @@ -269,10 +270,10 @@ out_dealloc: continue; for ( i = 0; i < pages; i++ ) { - uint32_t mfn = t_info_mfn_list[offset + i]; - if ( !mfn ) + mfn_t mfn = _mfn(t_info_mfn_list[offset + i]); + if ( mfn_eq(mfn, _mfn(0)) ) break; - ASSERT(!(mfn_to_page(_mfn(mfn))->count_info & 
PGC_allocated)); + ASSERT(!(mfn_to_page(mfn)->count_info & PGC_allocated)); free_xenheap_pages(mfn_to_virt(mfn), 0); } } @@ -378,7 +379,7 @@ int tb_control(struct xen_sysctl_tbuf_op *tbc) { case XEN_SYSCTL_TBUFOP_get_info: tbc->evt_mask = tb_event_mask; - tbc->buffer_mfn = t_info ? virt_to_mfn(t_info) : 0; + tbc->buffer_mfn = t_info ? mfn_x(virt_to_mfn(t_info)) : 0; tbc->size = t_info_pages * PAGE_SIZE; break; case XEN_SYSCTL_TBUFOP_set_cpu_mask: @@ -512,7 +513,7 @@ static unsigned char *next_record(const struct t_buf *buf, uint32_t *next, uint16_t per_cpu_mfn_offset; uint32_t per_cpu_mfn_nr; uint32_t *mfn_list; - uint32_t mfn; + mfn_t mfn; unsigned char *this_page; barrier(); /* must read buf->prod and buf->cons only once */ @@ -533,7 +534,7 @@ static unsigned char *next_record(const struct t_buf *buf, uint32_t *next, per_cpu_mfn_nr = x >> PAGE_SHIFT; per_cpu_mfn_offset = t_info->mfn_offset[smp_processor_id()]; mfn_list = (uint32_t *)t_info; - mfn = mfn_list[per_cpu_mfn_offset + per_cpu_mfn_nr]; + mfn = _mfn(mfn_list[per_cpu_mfn_offset + per_cpu_mfn_nr]); this_page = mfn_to_virt(mfn); if (per_cpu_mfn_nr + 1 >= opt_tbuf_size) { @@ -542,7 +543,7 @@ static unsigned char *next_record(const struct t_buf *buf, uint32_t *next, } else { - mfn = mfn_list[per_cpu_mfn_offset + per_cpu_mfn_nr + 1]; + mfn = _mfn(mfn_list[per_cpu_mfn_offset + per_cpu_mfn_nr + 1]); *next_page = mfn_to_virt(mfn); } return this_page; diff --git a/xen/common/xenoprof.c b/xen/common/xenoprof.c index 4f3e799ebb..2721e99da7 100644 --- a/xen/common/xenoprof.c +++ b/xen/common/xenoprof.c @@ -19,10 +19,6 @@ #include #include -/* Override macros from asm/page.h to make them work with mfn_t */ -#undef virt_to_mfn -#define virt_to_mfn(va) _mfn(__virt_to_mfn(va)) - /* Limit amount of pages used for shared buffer (per domain) */ #define MAX_OPROF_SHARED_PAGES 32 diff --git a/xen/drivers/acpi/osl.c b/xen/drivers/acpi/osl.c index 4c8bb7839e..ca38565507 100644 --- a/xen/drivers/acpi/osl.c +++ b/xen/drivers/acpi/osl.c @@ -219,7 +219,7 @@ void *__init acpi_os_alloc_memory(size_t sz) void *ptr; if (system_state == SYS_STATE_early_boot) - return mfn_to_virt(mfn_x(alloc_boot_pages(PFN_UP(sz), 1))); + return mfn_to_virt(alloc_boot_pages(PFN_UP(sz), 1)); ptr = xmalloc_bytes(sz); ASSERT(!ptr || is_xmalloc_memory(ptr)); diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h index 7df91280bc..abf4cc23e4 100644 --- a/xen/include/asm-arm/mm.h +++ b/xen/include/asm-arm/mm.h @@ -285,16 +285,8 @@ static inline uint64_t gvirt_to_maddr(vaddr_t va, paddr_t *pa, #define __va(x) (maddr_to_virt(x)) /* Convert between Xen-heap virtual addresses and machine frame numbers. */ -#define __virt_to_mfn(va) (virt_to_maddr(va) >> PAGE_SHIFT) -#define __mfn_to_virt(mfn) (maddr_to_virt((paddr_t)(mfn) << PAGE_SHIFT)) - -/* - * We define non-underscored wrappers for above conversion functions. - * These are overriden in various source files while underscored version - * remain intact. - */ -#define virt_to_mfn(va) __virt_to_mfn(va) -#define mfn_to_virt(mfn) __mfn_to_virt(mfn) +#define virt_to_mfn(va) maddr_to_mfn(virt_to_maddr(va)) +#define mfn_to_virt(mfn) maddr_to_virt(mfn_to_maddr(mfn)) /* Convert between Xen-heap virtual addresses and page-info structures. 
*/ static inline struct page_info *virt_to_page(const void *v) @@ -312,7 +304,7 @@ static inline struct page_info *virt_to_page(const void *v) static inline void *page_to_virt(const struct page_info *pg) { - return mfn_to_virt(mfn_x(page_to_mfn(pg))); + return mfn_to_virt(page_to_mfn(pg)); } struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va, diff --git a/xen/include/asm-x86/grant_table.h b/xen/include/asm-x86/grant_table.h index 84e32960c0..5871238f6d 100644 --- a/xen/include/asm-x86/grant_table.h +++ b/xen/include/asm-x86/grant_table.h @@ -45,11 +45,11 @@ static inline int replace_grant_host_mapping(uint64_t addr, mfn_t frame, VALID_M2P(gpfn_) ? _gfn(gpfn_) : INVALID_GFN; \ }) -#define gnttab_shared_mfn(t, i) _mfn(__virt_to_mfn((t)->shared_raw[i])) +#define gnttab_shared_mfn(t, i) virt_to_mfn((t)->shared_raw[i]) #define gnttab_shared_gfn(d, t, i) mfn_to_gfn(d, gnttab_shared_mfn(t, i)) -#define gnttab_status_mfn(t, i) _mfn(__virt_to_mfn((t)->status[i])) +#define gnttab_status_mfn(t, i) virt_to_mfn((t)->status[i]) #define gnttab_status_gfn(d, t, i) mfn_to_gfn(d, gnttab_status_mfn(t, i)) diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h index 9764362a38..83058fb8d1 100644 --- a/xen/include/asm-x86/mm.h +++ b/xen/include/asm-x86/mm.h @@ -667,7 +667,7 @@ static inline bool arch_mfn_in_directmap(unsigned long mfn) { unsigned long eva = min(DIRECTMAP_VIRT_END, HYPERVISOR_VIRT_END); - return mfn <= (virt_to_mfn(eva - 1) + 1); + return mfn <= mfn_x(mfn_add(virt_to_mfn(eva - 1), 1)); } int arch_acquire_resource(struct domain *d, unsigned int type, diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h index c98d8f5ede..624dbbb949 100644 --- a/xen/include/asm-x86/page.h +++ b/xen/include/asm-x86/page.h @@ -236,8 +236,8 @@ void copy_page_sse2(void *, const void *); #define __va(x) (maddr_to_virt(x)) /* Convert between Xen-heap virtual addresses and machine frame numbers. */ -#define __virt_to_mfn(va) (virt_to_maddr(va) >> PAGE_SHIFT) -#define __mfn_to_virt(mfn) (maddr_to_virt((paddr_t)(mfn) << PAGE_SHIFT)) +#define virt_to_mfn(va) maddr_to_mfn(virt_to_maddr(va)) +#define mfn_to_virt(mfn) maddr_to_virt(mfn_to_maddr(mfn)) /* Convert between machine frame numbers and page-info structures. */ #define mfn_to_page(mfn) (frame_table + mfn_to_pdx(mfn)) @@ -260,8 +260,6 @@ void copy_page_sse2(void *, const void *); * overridden in various source files while underscored versions remain intact. 
*/ #define mfn_valid(mfn) __mfn_valid(mfn_x(mfn)) -#define virt_to_mfn(va) __virt_to_mfn(va) -#define mfn_to_virt(mfn) __mfn_to_virt(mfn) #define virt_to_maddr(va) __virt_to_maddr((unsigned long)(va)) #define maddr_to_virt(ma) __maddr_to_virt((unsigned long)(ma)) #define maddr_to_page(ma) __maddr_to_page(ma) diff --git a/xen/include/xen/domain_page.h b/xen/include/xen/domain_page.h index ab2be7b719..0314845921 100644 --- a/xen/include/xen/domain_page.h +++ b/xen/include/xen/domain_page.h @@ -53,14 +53,14 @@ static inline void *__map_domain_page_global(const struct page_info *pg) #else /* !CONFIG_DOMAIN_PAGE */ -#define map_domain_page(mfn) __mfn_to_virt(mfn_x(mfn)) +#define map_domain_page(mfn) mfn_to_virt(mfn) #define __map_domain_page(pg) page_to_virt(pg) #define unmap_domain_page(va) ((void)(va)) -#define domain_page_map_to_mfn(va) _mfn(virt_to_mfn((unsigned long)(va))) +#define domain_page_map_to_mfn(va) virt_to_mfn((unsigned long)(va)) static inline void *map_domain_page_global(mfn_t mfn) { - return mfn_to_virt(mfn_x(mfn)); + return mfn_to_virt(mfn); } static inline void *__map_domain_page_global(const struct page_info *pg) From patchwork Sun Mar 22 16:14:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 11451875 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B9CED17D4 for ; Sun, 22 Mar 2020 16:16:07 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 9437B20724 for ; Sun, 22 Mar 2020 16:16:07 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9437B20724 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=xen.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3FB-0004Wo-45; Sun, 22 Mar 2020 16:14:33 +0000 Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3FA-0004W0-7N for xen-devel@lists.xenproject.org; Sun, 22 Mar 2020 16:14:32 +0000 X-Inumbo-ID: 35b3b146-6c58-11ea-8134-12813bfff9fa Received: from mail-ed1-f66.google.com (unknown [209.85.208.66]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS id 35b3b146-6c58-11ea-8134-12813bfff9fa; Sun, 22 Mar 2020 16:14:30 +0000 (UTC) Received: by mail-ed1-f66.google.com with SMTP id n25so12545265eds.10 for ; Sun, 22 Mar 2020 09:14:30 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=h83qCfUbAW/2NAbwW1ERuK1KGMn6B8eXif9Ulq27FvE=; b=Plzv16lAi+7hz6qwQSg2f0VNYQ4dNP5eq7h4pi/71FuUosH8RN7rCT0H72OPwSA7Ah IrDr83WPGh9s8OqEZG2edmcQakPrCSjY+wbOiT/TrE+u8tR6oZ8EZ2xOF3P1fvZE2F8w tCzPikQa9QcbgWQAKi1lohEH8tV1ThJ1qeOs+ogICgAwUI2B/kqenUH18qAtFxGNgnCj RcjvYlweQMwyr7GNzJfHKHvK/21KHSHKIHhv7kCuVz0uisUD3LToI7lusWfsJHJAC6Yw vRdi9c1PaXRgQBHVYc338Z3tKp4zMkEyKIDzZTfj/uMQxD3JHT0vdVIOyLSvYxRkcQxn 7mVw== X-Gm-Message-State: ANhLgQ0WfG8pxCbQJ+d5GNFm7+Pgg8WnSX4KnpZe3EpInzqZRrfgZWao 
9NXEjcWbbZ+aXYd/khltUsoZrIhDhkuEFw== X-Google-Smtp-Source: ADFU+vsSf5w5HhH8Xe2gqsgBouk/xZQTxqN1/mvxgjQ0srN2RxUQWRAl7Xu368gThWCl/SH69aBRGw== X-Received: by 2002:a05:6402:110a:: with SMTP id u10mr11436967edv.159.1584893668808; Sun, 22 Mar 2020 09:14:28 -0700 (PDT) Received: from ufe34d9ed68d054.ant.amazon.com (54-240-197-235.amazon.com. [54.240.197.235]) by smtp.gmail.com with ESMTPSA id v13sm106693edj.62.2020.03.22.09.14.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 22 Mar 2020 09:14:28 -0700 (PDT) From: julien@xen.org To: xen-devel@lists.xenproject.org Date: Sun, 22 Mar 2020 16:14:06 +0000 Message-Id: <20200322161418.31606-6-julien@xen.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200322161418.31606-1-julien@xen.org> References: <20200322161418.31606-1-julien@xen.org> Subject: [Xen-devel] [PATCH 05/17] xen/x86: Remove the non-typesafe version of pagetable_* helpers X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Kevin Tian , julien@xen.org, Jun Nakajima , Wei Liu , Andrew Cooper , Julien Grall , Tim Deegan , George Dunlap , Jan Beulich , =?utf-8?q?Roger_Pau_Monn=C3=A9?= MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Julien Grall Most of the users of the pagetable_* helpers can use the typesafe version. Therefore, it is time to convert the callers still using the non-typesafe version to the typesafe one. Some parts of the code assume that a pagetable is NULL when the MFN is 0. When possible, this is replaced with the helper pagetable_is_null(). There are still some places which test against MFN 0, and it is not clear whether other unconverted parts of the code rely on that value. So, for now, the NULL value is not changed to INVALID_MFN. No functional changes intended.
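As an illustration (editor's sketch, not part of the patch), a caller-side before/after under the typesafe helpers could look like the following. example_walk_guest_table() is a made-up name; the helpers, fields and headers it uses (pagetable_is_null(), pagetable_get_mfn(), map_domain_page()/unmap_domain_page(), v->arch.guest_table) come from this patch and the existing Xen headers.

/*
 * Editor's sketch, not part of the patch: how a typical caller changes once
 * the non-typesafe pagetable helpers are removed. example_walk_guest_table()
 * is a hypothetical function used only for illustration.
 */
#include <xen/domain_page.h>
#include <xen/sched.h>
#include <asm/page.h>

static void example_walk_guest_table(struct vcpu *v)
{
    /* Before: raw PFN compared against 0. */
    /* if ( pagetable_get_pfn(v->arch.guest_table) == 0 ) return; */

    /* After: NULL-ness expressed with the typesafe helper. */
    if ( pagetable_is_null(v->arch.guest_table) )
        return;

    {
        /* The MFN stays wrapped in mfn_t all the way to the mapping. */
        l4_pgentry_t *l4tab =
            map_domain_page(pagetable_get_mfn(v->arch.guest_table));

        /* ... read or modify the root page table here ... */
        unmap_domain_page(l4tab);
    }
}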
Signed-off-by: Julien Grall --- xen/arch/x86/domain.c | 18 ++++++++------- xen/arch/x86/domctl.c | 6 ++--- xen/arch/x86/hvm/vmx/vmcs.c | 2 +- xen/arch/x86/hvm/vmx/vmx.c | 2 +- xen/arch/x86/hvm/vmx/vvmx.c | 2 +- xen/arch/x86/mm.c | 40 +++++++++++++++++----------------- xen/arch/x86/mm/hap/hap.c | 2 +- xen/arch/x86/mm/p2m-ept.c | 2 +- xen/arch/x86/mm/p2m-pt.c | 4 ++-- xen/arch/x86/mm/p2m.c | 2 +- xen/arch/x86/mm/shadow/multi.c | 24 ++++++++++---------- xen/arch/x86/pv/dom0_build.c | 10 ++++----- xen/arch/x86/traps.c | 6 ++--- xen/include/asm-x86/page.h | 19 ++++++++-------- 14 files changed, 70 insertions(+), 69 deletions(-) diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c index 15750ce210..18d8fda9bd 100644 --- a/xen/arch/x86/domain.c +++ b/xen/arch/x86/domain.c @@ -952,25 +952,27 @@ int arch_set_info_guest( } else { - unsigned long pfn = pagetable_get_pfn(v->arch.guest_table); + mfn_t mfn = pagetable_get_mfn(v->arch.guest_table); bool fail; if ( !compat ) { - fail = xen_pfn_to_cr3(pfn) != c.nat->ctrlreg[3]; + fail = mfn_to_cr3(mfn) != c.nat->ctrlreg[3]; if ( pagetable_is_null(v->arch.guest_table_user) ) fail |= c.nat->ctrlreg[1] || !(flags & VGCF_in_kernel); else { - pfn = pagetable_get_pfn(v->arch.guest_table_user); - fail |= xen_pfn_to_cr3(pfn) != c.nat->ctrlreg[1]; + mfn = pagetable_get_mfn(v->arch.guest_table_user); + fail |= mfn_to_cr3(mfn) != c.nat->ctrlreg[1]; } - } else { - l4_pgentry_t *l4tab = map_domain_page(_mfn(pfn)); + } + else + { + l4_pgentry_t *l4tab = map_domain_page(mfn); - pfn = l4e_get_pfn(*l4tab); + mfn = l4e_get_mfn(*l4tab); unmap_domain_page(l4tab); - fail = compat_pfn_to_cr3(pfn) != c.cmp->ctrlreg[3]; + fail = compat_pfn_to_cr3(mfn_x(mfn)) != c.cmp->ctrlreg[3]; } for ( i = 0; i < ARRAY_SIZE(v->arch.pv.gdt_frames); ++i ) diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c index ed86762fa6..02596c3810 100644 --- a/xen/arch/x86/domctl.c +++ b/xen/arch/x86/domctl.c @@ -1611,11 +1611,11 @@ void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c) if ( !compat ) { - c.nat->ctrlreg[3] = xen_pfn_to_cr3( - pagetable_get_pfn(v->arch.guest_table)); + c.nat->ctrlreg[3] = mfn_to_cr3( + pagetable_get_mfn(v->arch.guest_table)); c.nat->ctrlreg[1] = pagetable_is_null(v->arch.guest_table_user) ? 
0 - : xen_pfn_to_cr3(pagetable_get_pfn(v->arch.guest_table_user)); + : mfn_to_cr3(pagetable_get_mfn(v->arch.guest_table_user)); } else { diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c index 4c23645454..1f39367253 100644 --- a/xen/arch/x86/hvm/vmx/vmcs.c +++ b/xen/arch/x86/hvm/vmx/vmcs.c @@ -1290,7 +1290,7 @@ static int construct_vmcs(struct vcpu *v) struct p2m_domain *p2m = p2m_get_hostp2m(d); struct ept_data *ept = &p2m->ept; - ept->mfn = pagetable_get_pfn(p2m_get_pagetable(p2m)); + ept->mfn = mfn_x(pagetable_get_mfn(p2m_get_pagetable(p2m))); __vmwrite(EPT_POINTER, ept->eptp); __vmwrite(HOST_PAT, XEN_MSR_PAT); diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c index d265ed46ad..a1e3a19c0a 100644 --- a/xen/arch/x86/hvm/vmx/vmx.c +++ b/xen/arch/x86/hvm/vmx/vmx.c @@ -2110,7 +2110,7 @@ static void vmx_vcpu_update_eptp(struct vcpu *v) p2m = p2m_get_hostp2m(d); ept = &p2m->ept; - ept->mfn = pagetable_get_pfn(p2m_get_pagetable(p2m)); + ept->mfn = mfn_x(pagetable_get_mfn(p2m_get_pagetable(p2m))); vmx_vmcs_enter(v); diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c index f049920196..84b47ef277 100644 --- a/xen/arch/x86/hvm/vmx/vvmx.c +++ b/xen/arch/x86/hvm/vmx/vvmx.c @@ -1149,7 +1149,7 @@ static uint64_t get_shadow_eptp(struct vcpu *v) struct p2m_domain *p2m = p2m_get_nestedp2m(v); struct ept_data *ept = &p2m->ept; - ept->mfn = pagetable_get_pfn(p2m_get_pagetable(p2m)); + ept->mfn = mfn_x(pagetable_get_mfn(p2m_get_pagetable(p2m))); return ept->eptp; } diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c index 7c0f81759a..aa0bf3d0ee 100644 --- a/xen/arch/x86/mm.c +++ b/xen/arch/x86/mm.c @@ -3085,7 +3085,7 @@ int put_old_guest_table(struct vcpu *v) int vcpu_destroy_pagetables(struct vcpu *v) { - unsigned long mfn = pagetable_get_pfn(v->arch.guest_table); + mfn_t mfn = pagetable_get_mfn(v->arch.guest_table); struct page_info *page = NULL; int rc = put_old_guest_table(v); bool put_guest_table_user = false; @@ -3102,9 +3102,9 @@ int vcpu_destroy_pagetables(struct vcpu *v) */ if ( is_pv_32bit_vcpu(v) ) { - l4_pgentry_t *l4tab = map_domain_page(_mfn(mfn)); + l4_pgentry_t *l4tab = map_domain_page(mfn); - mfn = l4e_get_pfn(*l4tab); + mfn = l4e_get_mfn(*l4tab); l4e_write(l4tab, l4e_empty()); unmap_domain_page(l4tab); } @@ -3116,24 +3116,24 @@ int vcpu_destroy_pagetables(struct vcpu *v) /* Free that page if non-zero */ do { - if ( mfn ) + if ( !mfn_eq(mfn, _mfn(0)) ) { - page = mfn_to_page(_mfn(mfn)); + page = mfn_to_page(mfn); if ( paging_mode_refcounts(v->domain) ) put_page(page); else rc = put_page_and_type_preemptible(page); - mfn = 0; + mfn = _mfn(0); } if ( !rc && put_guest_table_user ) { /* Drop ref to guest_table_user (from MMUEXT_NEW_USER_BASEPTR) */ - mfn = pagetable_get_pfn(v->arch.guest_table_user); + mfn = pagetable_get_mfn(v->arch.guest_table_user); v->arch.guest_table_user = pagetable_null(); put_guest_table_user = false; } - } while ( mfn ); + } while ( !mfn_eq(mfn, _mfn(0)) ); /* * If a "put" operation was interrupted, finish things off in @@ -3551,7 +3551,8 @@ long do_mmuext_op( break; case MMUEXT_NEW_USER_BASEPTR: { - unsigned long old_mfn; + mfn_t old_mfn; + mfn_t new_mfn = _mfn(op.arg1.mfn); if ( unlikely(currd != pg_owner) ) rc = -EPERM; @@ -3560,19 +3561,18 @@ long do_mmuext_op( if ( unlikely(rc) ) break; - old_mfn = pagetable_get_pfn(curr->arch.guest_table_user); + old_mfn = pagetable_get_mfn(curr->arch.guest_table_user); /* * This is particularly important when getting restarted after the * previous attempt got preempted in 
the put-old-MFN phase. */ - if ( old_mfn == op.arg1.mfn ) + if ( mfn_eq(old_mfn, new_mfn) ) break; - if ( op.arg1.mfn != 0 ) + if ( !mfn_eq(new_mfn, _mfn(0)) ) { - rc = get_page_and_type_from_mfn( - _mfn(op.arg1.mfn), PGT_root_page_table, currd, PTF_preemptible); - + rc = get_page_and_type_from_mfn(new_mfn, PGT_root_page_table, + currd, PTF_preemptible); if ( unlikely(rc) ) { if ( rc == -EINTR ) @@ -3580,19 +3580,19 @@ long do_mmuext_op( else if ( rc != -ERESTART ) gdprintk(XENLOG_WARNING, "Error %d installing new mfn %" PRI_mfn "\n", - rc, op.arg1.mfn); + rc, mfn_x(new_mfn)); break; } if ( VM_ASSIST(currd, m2p_strict) ) - zap_ro_mpt(_mfn(op.arg1.mfn)); + zap_ro_mpt(new_mfn); } - curr->arch.guest_table_user = pagetable_from_pfn(op.arg1.mfn); + curr->arch.guest_table_user = pagetable_from_mfn(new_mfn); - if ( old_mfn != 0 ) + if ( !mfn_eq(old_mfn, _mfn(0)) ) { - page = mfn_to_page(_mfn(old_mfn)); + page = mfn_to_page(old_mfn); switch ( rc = put_page_and_type_preemptible(page) ) { diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c index a6d5e39b02..051e92169a 100644 --- a/xen/arch/x86/mm/hap/hap.c +++ b/xen/arch/x86/mm/hap/hap.c @@ -394,7 +394,7 @@ static mfn_t hap_make_monitor_table(struct vcpu *v) l4_pgentry_t *l4e; mfn_t m4mfn; - ASSERT(pagetable_get_pfn(v->arch.monitor_table) == 0); + ASSERT(pagetable_is_null(v->arch.monitor_table)); if ( (pg = hap_alloc(d)) == NULL ) goto oom; diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c index eb0f0edfef..346696e469 100644 --- a/xen/arch/x86/mm/p2m-ept.c +++ b/xen/arch/x86/mm/p2m-ept.c @@ -1366,7 +1366,7 @@ void p2m_init_altp2m_ept(struct domain *d, unsigned int i) p2m->ept.ad = hostp2m->ept.ad; ept = &p2m->ept; - ept->mfn = pagetable_get_pfn(p2m_get_pagetable(p2m)); + ept->mfn = mfn_x(pagetable_get_mfn(p2m_get_pagetable(p2m))); d->arch.altp2m_eptp[array_index_nospec(i, MAX_EPTP)] = ept->eptp; } diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c index eb66077496..cccb06c26e 100644 --- a/xen/arch/x86/mm/p2m-pt.c +++ b/xen/arch/x86/mm/p2m-pt.c @@ -867,7 +867,7 @@ static void p2m_pt_change_entry_type_global(struct p2m_domain *p2m, unsigned long gfn = 0; unsigned int i, changed; - if ( pagetable_get_pfn(p2m_get_pagetable(p2m)) == 0 ) + if ( pagetable_is_null(p2m_get_pagetable(p2m)) ) return; ASSERT(hap_enabled(p2m->domain)); @@ -950,7 +950,7 @@ long p2m_pt_audit_p2m(struct p2m_domain *p2m) ASSERT(pod_locked_by_me(p2m)); /* Audit part one: walk the domain's p2m table, checking the entries. 
*/ - if ( pagetable_get_pfn(p2m_get_pagetable(p2m)) != 0 ) + if ( !pagetable_is_null(p2m_get_pagetable(p2m)) ) { l2_pgentry_t *l2e; l1_pgentry_t *l1e; diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c index 9f51370327..45b4b784d3 100644 --- a/xen/arch/x86/mm/p2m.c +++ b/xen/arch/x86/mm/p2m.c @@ -702,7 +702,7 @@ int p2m_alloc_table(struct p2m_domain *p2m) return -EINVAL; } - if ( pagetable_get_pfn(p2m_get_pagetable(p2m)) != 0 ) + if ( !pagetable_is_null(p2m_get_pagetable(p2m)) ) { P2M_ERROR("p2m already allocated for this domain\n"); p2m_unlock(p2m); diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c index b6afc0fba4..5751dae344 100644 --- a/xen/arch/x86/mm/shadow/multi.c +++ b/xen/arch/x86/mm/shadow/multi.c @@ -1520,7 +1520,7 @@ sh_make_monitor_table(struct vcpu *v) { struct domain *d = v->domain; - ASSERT(pagetable_get_pfn(v->arch.monitor_table) == 0); + ASSERT(pagetable_is_null(v->arch.monitor_table)); /* Guarantee we can get the memory we need */ shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS); @@ -2351,11 +2351,11 @@ int sh_safe_not_to_sync(struct vcpu *v, mfn_t gl1mfn) ASSERT(mfn_valid(smfn)); #endif - if ( pagetable_get_pfn(v->arch.shadow_table[0]) == mfn_x(smfn) + if ( mfn_eq(pagetable_get_mfn(v->arch.shadow_table[0]), smfn) #if (SHADOW_PAGING_LEVELS == 3) - || pagetable_get_pfn(v->arch.shadow_table[1]) == mfn_x(smfn) - || pagetable_get_pfn(v->arch.shadow_table[2]) == mfn_x(smfn) - || pagetable_get_pfn(v->arch.shadow_table[3]) == mfn_x(smfn) + || mfn_eq(pagetable_get_mfn(v->arch.shadow_table[1]), smfn) + || mfn_eq(pagetable_get_mfn(v->arch.shadow_table[2]), smfn) + || mfn_eq(pagetable_get_mfn(v->arch.shadow_table[3]), smfn) #endif ) return 0; @@ -3707,7 +3707,7 @@ sh_update_linear_entries(struct vcpu *v) /* Don't try to update the monitor table if it doesn't exist */ if ( shadow_mode_external(d) - && pagetable_get_pfn(v->arch.monitor_table) == 0 ) + && pagetable_is_null(v->arch.monitor_table) ) return; #if SHADOW_PAGING_LEVELS == 4 @@ -3722,7 +3722,7 @@ sh_update_linear_entries(struct vcpu *v) if ( v == current ) { __linear_l4_table[l4_linear_offset(SH_LINEAR_PT_VIRT_START)] = - l4e_from_pfn(pagetable_get_pfn(v->arch.shadow_table[0]), + l4e_from_mfn(pagetable_get_mfn(v->arch.shadow_table[0]), __PAGE_HYPERVISOR_RW); } else @@ -3730,7 +3730,7 @@ sh_update_linear_entries(struct vcpu *v) l4_pgentry_t *ml4e; ml4e = map_domain_page(pagetable_get_mfn(v->arch.monitor_table)); ml4e[l4_table_offset(SH_LINEAR_PT_VIRT_START)] = - l4e_from_pfn(pagetable_get_pfn(v->arch.shadow_table[0]), + l4e_from_mfn(pagetable_get_mfn(v->arch.shadow_table[0]), __PAGE_HYPERVISOR_RW); unmap_domain_page(ml4e); } @@ -3964,15 +3964,15 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush) { ASSERT(shadow_mode_external(d)); if ( hvm_paging_enabled(v) ) - ASSERT(pagetable_get_pfn(v->arch.guest_table)); + ASSERT(!pagetable_is_null(v->arch.guest_table)); else - ASSERT(v->arch.guest_table.pfn - == d->arch.paging.shadow.unpaged_pagetable.pfn); + ASSERT(mfn_eq(pagetable_get_mfn(v->arch.guest_table), + pagetable_get_mfn(d->arch.paging.shadow.unpaged_pagetable))); } #endif SHADOW_PRINTK("%pv guest_table=%"PRI_mfn"\n", - v, (unsigned long)pagetable_get_pfn(v->arch.guest_table)); + v, mfn_x(pagetable_get_mfn(v->arch.guest_table))); #if GUEST_PAGING_LEVELS == 4 if ( !(v->arch.flags & TF_kernel_mode) ) diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c index 30846b5f97..8abd5d255c 100644 --- a/xen/arch/x86/pv/dom0_build.c +++ 
b/xen/arch/x86/pv/dom0_build.c @@ -93,14 +93,14 @@ static __init void mark_pv_pt_pages_rdonly(struct domain *d, } } -static __init void setup_pv_physmap(struct domain *d, unsigned long pgtbl_pfn, +static __init void setup_pv_physmap(struct domain *d, mfn_t pgtbl_mfn, unsigned long v_start, unsigned long v_end, unsigned long vphysmap_start, unsigned long vphysmap_end, unsigned long nr_pages) { struct page_info *page = NULL; - l4_pgentry_t *pl4e, *l4start = map_domain_page(_mfn(pgtbl_pfn)); + l4_pgentry_t *pl4e, *l4start = map_domain_page(pgtbl_mfn); l3_pgentry_t *pl3e = NULL; l2_pgentry_t *pl2e = NULL; l1_pgentry_t *pl1e = NULL; @@ -760,11 +760,9 @@ int __init dom0_construct_pv(struct domain *d, /* Set up the phys->machine table if not part of the initial mapping. */ if ( parms.p2m_base != UNSET_ADDR ) - { - pfn = pagetable_get_pfn(v->arch.guest_table); - setup_pv_physmap(d, pfn, v_start, v_end, vphysmap_start, vphysmap_end, + setup_pv_physmap(d, pagetable_get_mfn(v->arch.guest_table), + v_start, v_end, vphysmap_start, vphysmap_end, nr_pages); - } /* Write the phys->machine and machine->phys table entries. */ for ( pfn = 0; pfn < count; pfn++ ) diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c index 4aa7c35be4..04a3ebc0a2 100644 --- a/xen/arch/x86/traps.c +++ b/xen/arch/x86/traps.c @@ -247,12 +247,12 @@ static void compat_show_guest_stack(struct vcpu *v, if ( v != current ) { struct vcpu *vcpu; - unsigned long mfn; + mfn_t mfn; ASSERT(guest_kernel_mode(v, regs)); - mfn = read_cr3() >> PAGE_SHIFT; + mfn = cr3_to_mfn(read_cr3()); for_each_vcpu( v->domain, vcpu ) - if ( pagetable_get_pfn(vcpu->arch.guest_table) == mfn ) + if ( mfn_eq(pagetable_get_mfn(vcpu->arch.guest_table), mfn) ) break; if ( !vcpu ) { diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h index 624dbbb949..377ba14f6e 100644 --- a/xen/include/asm-x86/page.h +++ b/xen/include/asm-x86/page.h @@ -18,6 +18,7 @@ #ifndef __ASSEMBLY__ # include # include +# include #endif #include @@ -213,17 +214,17 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags) #ifndef __ASSEMBLY__ /* Page-table type. 
*/ -typedef struct { u64 pfn; } pagetable_t; -#define pagetable_get_paddr(x) ((paddr_t)(x).pfn << PAGE_SHIFT) +typedef struct { mfn_t mfn; } pagetable_t; +#define PAGETABLE_NULL_MFN _mfn(0) + +#define pagetable_get_paddr(x) mfn_to_maddr((x).mfn) #define pagetable_get_page(x) mfn_to_page(pagetable_get_mfn(x)) -#define pagetable_get_pfn(x) ((x).pfn) -#define pagetable_get_mfn(x) _mfn(((x).pfn)) -#define pagetable_is_null(x) ((x).pfn == 0) -#define pagetable_from_pfn(pfn) ((pagetable_t) { (pfn) }) -#define pagetable_from_mfn(mfn) ((pagetable_t) { mfn_x(mfn) }) +#define pagetable_get_mfn(x) ((x).mfn) +#define pagetable_is_null(x) mfn_eq((x).mfn, PAGETABLE_NULL_MFN) +#define pagetable_from_mfn(mfn) ((pagetable_t) { mfn }) #define pagetable_from_page(pg) pagetable_from_mfn(page_to_mfn(pg)) -#define pagetable_from_paddr(p) pagetable_from_pfn((p)>>PAGE_SHIFT) -#define pagetable_null() pagetable_from_pfn(0) +#define pagetable_from_paddr(p) pagetable_from_mfn(maddr_to_mfn(p)) +#define pagetable_null() pagetable_from_mfn(PAGETABLE_NULL_MFN) void clear_page_sse2(void *); void copy_page_sse2(void *, const void *); From patchwork Sun Mar 22 16:14:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 11451883 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A7E471744 for ; Sun, 22 Mar 2020 16:16:16 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 8E61020724 for ; Sun, 22 Mar 2020 16:16:16 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 8E61020724 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=xen.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3FF-0004Z9-SW; Sun, 22 Mar 2020 16:14:37 +0000 Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3FF-0004Yb-7d for xen-devel@lists.xenproject.org; Sun, 22 Mar 2020 16:14:37 +0000 X-Inumbo-ID: 363024ba-6c58-11ea-8134-12813bfff9fa Received: from mail-ed1-f68.google.com (unknown [209.85.208.68]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS id 363024ba-6c58-11ea-8134-12813bfff9fa; Sun, 22 Mar 2020 16:14:31 +0000 (UTC) Received: by mail-ed1-f68.google.com with SMTP id a43so13499869edf.6 for ; Sun, 22 Mar 2020 09:14:31 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=PPouXUKimyKPAUwaiX+i857NLzRatKER6he+nfREvTY=; b=BIMYd1zNzmrkGlcNacbjqG2decGr6MrAuDg7rLq8wR4AQTtRF3RVrDVCwv1dMwYrbW ESlms6d06c8UFG2cmxHPP3cupkXqPvdNmoXu0Id/1AceLaTP3r/aTcmIQRQGpXmOma0H KNbSRiomq3WpjXDmhS60aA/ado4ADJ13VQ23ttp4PgXONiv+weOqy+HeiTiXmY/8B1nG qieNWLVAXsUPhQ0auS76SDr2tM4b0kmIHyn22En2NbsHvg6dxpoMlH8/O5bCS6yxLe29 dSHyQRw+8NfJJoURCeR7D1Hv8QP9aIerdd+p0bHhG2AH3I/0ELfPNl9hnsJdJLRkQ4tn o4YA== X-Gm-Message-State: ANhLgQ2ZY8cu7UCOJhXHUwQQ8B5gxSIw40pWGxyHNLdZcEhlEZTvUR4G r9PKe29nC8THFM9Zsf/+Cb45Xqa5lzd+8g== X-Google-Smtp-Source: 
ADFU+vsib/mmBtFQpTucPP1SO25CZHQIu/B4TsbM7c0M8pSakiBiLTIIV9TqknxtFODWVl+JX6cK/g== X-Received: by 2002:a17:906:5c43:: with SMTP id c3mr15001421ejr.3.1584893670472; Sun, 22 Mar 2020 09:14:30 -0700 (PDT) Received: from ufe34d9ed68d054.ant.amazon.com (54-240-197-235.amazon.com. [54.240.197.235]) by smtp.gmail.com with ESMTPSA id v13sm106693edj.62.2020.03.22.09.14.28 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 22 Mar 2020 09:14:29 -0700 (PDT) From: julien@xen.org To: xen-devel@lists.xenproject.org Date: Sun, 22 Mar 2020 16:14:07 +0000 Message-Id: <20200322161418.31606-7-julien@xen.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200322161418.31606-1-julien@xen.org> References: <20200322161418.31606-1-julien@xen.org> Subject: [Xen-devel] [PATCH 06/17] xen/x86: mm: Fix the comment on top put_page_from_l2e() to use 'mfn' X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: julien@xen.org, Wei Liu , Andrew Cooper , Julien Grall , Jan Beulich , =?utf-8?q?Roger_Pau_Monn=C3=A9?= MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Julien Grall We are using 'mfn' to refer to a machine frame. As this function deals with an 'mfn', replace 'pfn' with 'mfn'. Signed-off-by: Julien Grall Acked-by: Jan Beulich --- I am not entirely sure I understand the comment on top of the function, so this change may be wrong. --- xen/arch/x86/mm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c index aa0bf3d0ee..65bc03984d 100644 --- a/xen/arch/x86/mm.c +++ b/xen/arch/x86/mm.c @@ -1321,7 +1321,7 @@ static int put_data_pages(struct page_info *page, bool writeable, int pt_shift) } /* - * NB. Virtual address 'l2e' maps to a machine address within frame 'pfn'. + * NB. Virtual address 'l2e' maps to a machine address within frame 'mfn'. * Note also that this automatically deals correctly with linear p.t.'s.
*/ static int put_page_from_l2e(l2_pgentry_t l2e, mfn_t l2mfn, unsigned int flags) From patchwork Sun Mar 22 16:14:08 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 11451881 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8FB296CA for ; Sun, 22 Mar 2020 16:16:14 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 764A220724 for ; Sun, 22 Mar 2020 16:16:14 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 764A220724 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=xen.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3FJ-0004ba-6H; Sun, 22 Mar 2020 16:14:41 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3FI-0004b4-EL for xen-devel@lists.xenproject.org; Sun, 22 Mar 2020 16:14:40 +0000 X-Inumbo-ID: 36de31cc-6c58-11ea-92cf-bc764e2007e4 Received: from mail-ed1-f66.google.com (unknown [209.85.208.66]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 36de31cc-6c58-11ea-92cf-bc764e2007e4; Sun, 22 Mar 2020 16:14:32 +0000 (UTC) Received: by mail-ed1-f66.google.com with SMTP id b21so13476193edy.9 for ; Sun, 22 Mar 2020 09:14:32 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=Q0DjDcTGZuLAgG9ZdNmzRZsu9N7ZM1zKIaO4GQDqQOw=; b=HspJf153GXap7dOlACvUhnceLxDiisfXHVeB6BUMTRFM2aOnvEfp46zKISbtfITmEH 27GofipwIvqNDBKqhcd7ra/+kCwaPNtSc4FqSyBJ6uq7Nft1fo8bJPu3qbbaCvI6Yqn0 M0LT5fhMrg/Y/NILn+kDuDgsZadVg6DoqJwNSReSPp5iRtNtAQMkMtGrO9ZNRevEiddw FcWoBGaKLbFgRSbXEY40tdd9oGVZtXLhH/KdtKBFjBxS03q3mX6VUCkrvL6mTM3tP8JX iOX8K0SQ6EFWoT/BfeA1/0vr82ELVTW2oABwewvxxuDedi9AzRKtfNz17FEnMYWEWS8q uuVQ== X-Gm-Message-State: ANhLgQ1lr1jPznOxsFRR/qgZF9i0Gg2rTNoI+G4tZ1NBoamQa515LwLr oyjGfQZ7c7sXKmWt1PIzKv2hzSzl4eB8Cg== X-Google-Smtp-Source: ADFU+vvgTgEpJ8ob/YW7qtJesBCrWJv3xoArKcCEXeWNlhIZ+k+dpM4FZTqO0NuHdQDUFNqaMCt7Xg== X-Received: by 2002:a17:906:4e81:: with SMTP id v1mr5182103eju.259.1584893671540; Sun, 22 Mar 2020 09:14:31 -0700 (PDT) Received: from ufe34d9ed68d054.ant.amazon.com (54-240-197-235.amazon.com. 
[54.240.197.235]) by smtp.gmail.com with ESMTPSA id v13sm106693edj.62.2020.03.22.09.14.30 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 22 Mar 2020 09:14:30 -0700 (PDT) From: julien@xen.org To: xen-devel@lists.xenproject.org Date: Sun, 22 Mar 2020 16:14:08 +0000 Message-Id: <20200322161418.31606-8-julien@xen.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200322161418.31606-1-julien@xen.org> References: <20200322161418.31606-1-julien@xen.org> Subject: [Xen-devel] [PATCH 07/17] xen/x86: traps: Convert __page_fault_type() to use typesafe MFN X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: julien@xen.org, Wei Liu , Andrew Cooper , Julien Grall , Jan Beulich , =?utf-8?q?Roger_Pau_Monn=C3=A9?= MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Julien Grall Note that the code is now using cr3_to_mfn() to get the MFN. This is slightly different as the top 12-bits will now be masked. No functional changes intended. Signed-off-by: Julien Grall --- xen/arch/x86/traps.c | 21 +++++++++++---------- 1 file changed, 11 insertions(+), 10 deletions(-) diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c index 04a3ebc0a2..4f524dc71e 100644 --- a/xen/arch/x86/traps.c +++ b/xen/arch/x86/traps.c @@ -1232,7 +1232,8 @@ enum pf_type { static enum pf_type __page_fault_type(unsigned long addr, const struct cpu_user_regs *regs) { - unsigned long mfn, cr3 = read_cr3(); + mfn_t mfn; + unsigned long cr3 = read_cr3(); l4_pgentry_t l4e, *l4t; l3_pgentry_t l3e, *l3t; l2_pgentry_t l2e, *l2t; @@ -1264,20 +1265,20 @@ static enum pf_type __page_fault_type(unsigned long addr, page_user = _PAGE_USER; - mfn = cr3 >> PAGE_SHIFT; + mfn = cr3_to_mfn(cr3); - l4t = map_domain_page(_mfn(mfn)); + l4t = map_domain_page(mfn); l4e = l4e_read_atomic(&l4t[l4_table_offset(addr)]); - mfn = l4e_get_pfn(l4e); + mfn = l4e_get_mfn(l4e); unmap_domain_page(l4t); if ( ((l4e_get_flags(l4e) & required_flags) != required_flags) || (l4e_get_flags(l4e) & disallowed_flags) ) return real_fault; page_user &= l4e_get_flags(l4e); - l3t = map_domain_page(_mfn(mfn)); + l3t = map_domain_page(mfn); l3e = l3e_read_atomic(&l3t[l3_table_offset(addr)]); - mfn = l3e_get_pfn(l3e); + mfn = l3e_get_mfn(l3e); unmap_domain_page(l3t); if ( ((l3e_get_flags(l3e) & required_flags) != required_flags) || (l3e_get_flags(l3e) & disallowed_flags) ) @@ -1286,9 +1287,9 @@ static enum pf_type __page_fault_type(unsigned long addr, if ( l3e_get_flags(l3e) & _PAGE_PSE ) goto leaf; - l2t = map_domain_page(_mfn(mfn)); + l2t = map_domain_page(mfn); l2e = l2e_read_atomic(&l2t[l2_table_offset(addr)]); - mfn = l2e_get_pfn(l2e); + mfn = l2e_get_mfn(l2e); unmap_domain_page(l2t); if ( ((l2e_get_flags(l2e) & required_flags) != required_flags) || (l2e_get_flags(l2e) & disallowed_flags) ) @@ -1297,9 +1298,9 @@ static enum pf_type __page_fault_type(unsigned long addr, if ( l2e_get_flags(l2e) & _PAGE_PSE ) goto leaf; - l1t = map_domain_page(_mfn(mfn)); + l1t = map_domain_page(mfn); l1e = l1e_read_atomic(&l1t[l1_table_offset(addr)]); - mfn = l1e_get_pfn(l1e); + mfn = l1e_get_mfn(l1e); unmap_domain_page(l1t); if ( ((l1e_get_flags(l1e) & required_flags) != required_flags) || (l1e_get_flags(l1e) & disallowed_flags) ) From patchwork Sun Mar 22 16:14:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 
11451891 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id F11286CA for ; Sun, 22 Mar 2020 16:16:18 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id D738820724 for ; Sun, 22 Mar 2020 16:16:18 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D738820724 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=xen.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3FQ-0004hZ-BN; Sun, 22 Mar 2020 16:14:48 +0000 Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3FP-0004gr-7p for xen-devel@lists.xenproject.org; Sun, 22 Mar 2020 16:14:47 +0000 X-Inumbo-ID: 3779db0e-6c58-11ea-8134-12813bfff9fa Received: from mail-ed1-f65.google.com (unknown [209.85.208.65]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS id 3779db0e-6c58-11ea-8134-12813bfff9fa; Sun, 22 Mar 2020 16:14:33 +0000 (UTC) Received: by mail-ed1-f65.google.com with SMTP id a20so13525554edj.2 for ; Sun, 22 Mar 2020 09:14:33 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=/5jiq14GiLSWNnqAgN6SMPOo4zQIP4LpAFn+6K8zDb8=; b=erZUGwjuwtE8Avn9f60efM0fc2MB8Mzclh4F4+qlUpOepfKT5Pyc+PWxaWj6O9yNeP w0povH6pcdT/GP+IwAW6Fzv2ekyCIhTz/4AVkxBaVmsfA0quUoFlF6GPc3k6hGtXrmez zRQ6F1seaoLZoIJ0p8Ww83AoOlhSaE6/UfemW5PAHCLjo9uRzoN3mrJy/Bdqq6TWO9oM FTDmpy+jw1DUeFn82j20B76JkMqElpyMgwbTEhBixDtPsu51+NjeAyZxUPaOzcW6SvP4 IRu/u2giW6+49qIJ13KGIm2dFCNHKUOFBJ4mAh43WXNDpOlVbxTDl6KBOxsT0RA5htd0 6oSg== X-Gm-Message-State: ANhLgQ1eVhjXzZg8F5ncJu1Hy7Yo+disazMBstc2mhSY3cLyM0QgWTiF V8lUkG0rSbYfPBQgqn0bdVzN6QD7mXTC6w== X-Google-Smtp-Source: ADFU+vt5MFpsUyDo/Clv14Mi5UAG0f1ZNtcUxTkSdcRj7tVwLZfJgz4rGcZu2ybIRehMoMymyxF29w== X-Received: by 2002:a50:d7d3:: with SMTP id m19mr17288059edj.329.1584893672524; Sun, 22 Mar 2020 09:14:32 -0700 (PDT) Received: from ufe34d9ed68d054.ant.amazon.com (54-240-197-235.amazon.com. [54.240.197.235]) by smtp.gmail.com with ESMTPSA id v13sm106693edj.62.2020.03.22.09.14.31 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 22 Mar 2020 09:14:32 -0700 (PDT) From: julien@xen.org To: xen-devel@lists.xenproject.org Date: Sun, 22 Mar 2020 16:14:09 +0000 Message-Id: <20200322161418.31606-9-julien@xen.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200322161418.31606-1-julien@xen.org> References: <20200322161418.31606-1-julien@xen.org> Subject: [Xen-devel] [PATCH 08/17] xen/x86: traps: Convert show_page_walk() to use typesafe MFN X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: julien@xen.org, Wei Liu , Andrew Cooper , Julien Grall , Jan Beulich , =?utf-8?q?Roger_Pau_Monn=C3=A9?= MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Julien Grall Note that the code is now using cr3_to_mfn() to get the MFN. 
This is slightly different as the top 12-bits will now be masked. No functional changes intended. Signed-off-by: Julien Grall --- xen/arch/x86/x86_64/traps.c | 42 ++++++++++++++++++------------------- 1 file changed, 21 insertions(+), 21 deletions(-) diff --git a/xen/arch/x86/x86_64/traps.c b/xen/arch/x86/x86_64/traps.c index c3d4faea6b..811c2cb37b 100644 --- a/xen/arch/x86/x86_64/traps.c +++ b/xen/arch/x86/x86_64/traps.c @@ -184,7 +184,8 @@ void vcpu_show_registers(const struct vcpu *v) void show_page_walk(unsigned long addr) { - unsigned long pfn, mfn = read_cr3() >> PAGE_SHIFT; + unsigned long pfn; + mfn_t mfn = cr3_to_mfn(read_cr3()); l4_pgentry_t l4e, *l4t; l3_pgentry_t l3e, *l3t; l2_pgentry_t l2e, *l2t; @@ -194,52 +195,51 @@ void show_page_walk(unsigned long addr) if ( !is_canonical_address(addr) ) return; - l4t = map_domain_page(_mfn(mfn)); + l4t = map_domain_page(mfn); l4e = l4t[l4_table_offset(addr)]; unmap_domain_page(l4t); - mfn = l4e_get_pfn(l4e); - pfn = mfn_valid(_mfn(mfn)) && machine_to_phys_mapping_valid ? - get_gpfn_from_mfn(mfn) : INVALID_M2P_ENTRY; + mfn = l4e_get_mfn(l4e); + pfn = mfn_valid(mfn) && machine_to_phys_mapping_valid ? + get_gpfn_from_mfn(mfn_x(mfn)) : INVALID_M2P_ENTRY; printk(" L4[0x%03lx] = %"PRIpte" %016lx\n", l4_table_offset(addr), l4e_get_intpte(l4e), pfn); - if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) || - !mfn_valid(_mfn(mfn)) ) + if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) || !mfn_valid(mfn) ) return; - l3t = map_domain_page(_mfn(mfn)); + l3t = map_domain_page(mfn); l3e = l3t[l3_table_offset(addr)]; unmap_domain_page(l3t); - mfn = l3e_get_pfn(l3e); - pfn = mfn_valid(_mfn(mfn)) && machine_to_phys_mapping_valid ? - get_gpfn_from_mfn(mfn) : INVALID_M2P_ENTRY; + mfn = l3e_get_mfn(l3e); + pfn = mfn_valid(mfn) && machine_to_phys_mapping_valid ? + get_gpfn_from_mfn(mfn_x(mfn)) : INVALID_M2P_ENTRY; printk(" L3[0x%03lx] = %"PRIpte" %016lx%s\n", l3_table_offset(addr), l3e_get_intpte(l3e), pfn, (l3e_get_flags(l3e) & _PAGE_PSE) ? " (PSE)" : ""); if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) || (l3e_get_flags(l3e) & _PAGE_PSE) || - !mfn_valid(_mfn(mfn)) ) + !mfn_valid(mfn) ) return; - l2t = map_domain_page(_mfn(mfn)); + l2t = map_domain_page(mfn); l2e = l2t[l2_table_offset(addr)]; unmap_domain_page(l2t); - mfn = l2e_get_pfn(l2e); - pfn = mfn_valid(_mfn(mfn)) && machine_to_phys_mapping_valid ? - get_gpfn_from_mfn(mfn) : INVALID_M2P_ENTRY; + mfn = l2e_get_mfn(l2e); + pfn = mfn_valid(mfn) && machine_to_phys_mapping_valid ? + get_gpfn_from_mfn(mfn_x(mfn)) : INVALID_M2P_ENTRY; printk(" L2[0x%03lx] = %"PRIpte" %016lx%s\n", l2_table_offset(addr), l2e_get_intpte(l2e), pfn, (l2e_get_flags(l2e) & _PAGE_PSE) ? " (PSE)" : ""); if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) || (l2e_get_flags(l2e) & _PAGE_PSE) || - !mfn_valid(_mfn(mfn)) ) + !mfn_valid(mfn) ) return; - l1t = map_domain_page(_mfn(mfn)); + l1t = map_domain_page(mfn); l1e = l1t[l1_table_offset(addr)]; unmap_domain_page(l1t); - mfn = l1e_get_pfn(l1e); - pfn = mfn_valid(_mfn(mfn)) && machine_to_phys_mapping_valid ? - get_gpfn_from_mfn(mfn) : INVALID_M2P_ENTRY; + mfn = l1e_get_mfn(l1e); + pfn = mfn_valid(mfn) && machine_to_phys_mapping_valid ? 
+ get_gpfn_from_mfn(mfn_x(mfn)) : INVALID_M2P_ENTRY; printk(" L1[0x%03lx] = %"PRIpte" %016lx\n", l1_table_offset(addr), l1e_get_intpte(l1e), pfn); } From patchwork Sun Mar 22 16:14:10 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 11451895 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D98F36CA for ; Sun, 22 Mar 2020 16:16:22 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id BFE7920724 for ; Sun, 22 Mar 2020 16:16:22 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org BFE7920724 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=xen.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3FV-0004my-VG; Sun, 22 Mar 2020 16:14:53 +0000 Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3FU-0004lW-7p for xen-devel@lists.xenproject.org; Sun, 22 Mar 2020 16:14:52 +0000 X-Inumbo-ID: 3839efde-6c58-11ea-8134-12813bfff9fa Received: from mail-ed1-f66.google.com (unknown [209.85.208.66]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS id 3839efde-6c58-11ea-8134-12813bfff9fa; Sun, 22 Mar 2020 16:14:34 +0000 (UTC) Received: by mail-ed1-f66.google.com with SMTP id n25so12545557eds.10 for ; Sun, 22 Mar 2020 09:14:34 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=hRWWcaYfaNesHVQvECGOs6y6OntZoli0r5yhjgk5SFw=; b=cdtW2AHGb85Bgd2vCKI0EXe0EMTVJX4xWXoaiSl6Vuc2EcA+FX4McGjfn2Dj9tqx3T pm6XnH0NfG1l3Ra15j9WGcC4P6KP6vQS6tJ4A+ydtBmZI4sKjPQgXC1VQU9qBHdPTnsG RB/jZaKoPzwKrMs3lD2Xmc2/UweG5Zo61+ifs8WmdIhWTCihdWEjtzan84IaETwf0b6f jzQKZC/saecW5U7ZIxkcs5j+5MYmn16xF+Hnmpk3oXkBTgCRz8mt/qU97ixbnxZDL4qE o/EWIyz8UU84PfXjrzxmW1iN+02RLeieqTmNpmJiBK0wESemqxlXIk2Z8OPCJO2WFDix hO5A== X-Gm-Message-State: ANhLgQ2JCNVjGmsTUm1fmON7sWyPHWeyw82EjBeGXyFK4ooCfPL9NJdM Jbr6E+TP8wAXV29HlLLHaKu7MgA9TkNHyQ== X-Google-Smtp-Source: ADFU+vuig2Dh/4OhjxdWgKHpoUm3uk6qpNguyjyBCWRQ6EXvc0WoB+oz7RkrCNYZUJF2f8nx+dx+Sw== X-Received: by 2002:a17:906:7fd9:: with SMTP id r25mr4979523ejs.138.1584893673666; Sun, 22 Mar 2020 09:14:33 -0700 (PDT) Received: from ufe34d9ed68d054.ant.amazon.com (54-240-197-235.amazon.com. 
[54.240.197.235]) by smtp.gmail.com with ESMTPSA id v13sm106693edj.62.2020.03.22.09.14.32 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 22 Mar 2020 09:14:33 -0700 (PDT) From: julien@xen.org To: xen-devel@lists.xenproject.org Date: Sun, 22 Mar 2020 16:14:10 +0000 Message-Id: <20200322161418.31606-10-julien@xen.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200322161418.31606-1-julien@xen.org> References: <20200322161418.31606-1-julien@xen.org> Subject: [Xen-devel] [PATCH 09/17] xen/x86: Reduce the number of use of l*e_{from, get}_pfn() X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: julien@xen.org, Wei Liu , Andrew Cooper , Julien Grall , Jan Beulich , =?utf-8?q?Roger_Pau_Monn=C3=A9?= MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Julien Grall It is preferable to use the typesafe l*e_{from, get}_mfn(). Sadly, this can't be used everywhere easily, so for now only replace the simple ones. No functional changes intended. Signed-off-by: Julien Grall Reviewed-by: Jan Beulich --- xen/arch/x86/machine_kexec.c | 2 +- xen/arch/x86/mm.c | 30 +++++++++++++++--------------- xen/arch/x86/setup.c | 2 +- xen/include/asm-x86/page.h | 2 +- 4 files changed, 18 insertions(+), 18 deletions(-) diff --git a/xen/arch/x86/machine_kexec.c b/xen/arch/x86/machine_kexec.c index b70d5a6a86..b69c2e5fad 100644 --- a/xen/arch/x86/machine_kexec.c +++ b/xen/arch/x86/machine_kexec.c @@ -86,7 +86,7 @@ int machine_kexec_add_page(struct kexec_image *image, unsigned long vaddr, l1 = __map_domain_page(l1_page); l1 += l1_table_offset(vaddr); - l1e_write(l1, l1e_from_pfn(maddr >> PAGE_SHIFT, __PAGE_HYPERVISOR)); + l1e_write(l1, l1e_from_mfn(maddr_to_mfn(maddr), __PAGE_HYPERVISOR)); ret = 0; out: diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c index 65bc03984d..2516548e49 100644 --- a/xen/arch/x86/mm.c +++ b/xen/arch/x86/mm.c @@ -1138,7 +1138,7 @@ static int get_page_from_l2e( l2_pgentry_t l2e, mfn_t l2mfn, struct domain *d, unsigned int flags) { - unsigned long mfn = l2e_get_pfn(l2e); + mfn_t mfn = l2e_get_mfn(l2e); int rc; if ( unlikely((l2e_get_flags(l2e) & L2_DISALLOW_MASK)) ) @@ -1150,7 +1150,7 @@ get_page_from_l2e( ASSERT(!(flags & PTF_preemptible)); - rc = get_page_and_type_from_mfn(_mfn(mfn), PGT_l1_page_table, d, flags); + rc = get_page_and_type_from_mfn(mfn, PGT_l1_page_table, d, flags); if ( unlikely(rc == -EINVAL) && get_l2_linear_pagetable(l2e, l2mfn, d) ) rc = 0; @@ -1209,14 +1209,14 @@ static int _put_page_type(struct page_info *page, unsigned int flags, void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner) { - unsigned long pfn = l1e_get_pfn(l1e); + mfn_t mfn = l1e_get_mfn(l1e); struct page_info *page; struct domain *pg_owner; - if ( !(l1e_get_flags(l1e) & _PAGE_PRESENT) || is_iomem_page(_mfn(pfn)) ) + if ( !(l1e_get_flags(l1e) & _PAGE_PRESENT) || is_iomem_page(mfn) ) return; - page = mfn_to_page(_mfn(pfn)); + page = mfn_to_page(mfn); pg_owner = page_get_owner(page); /* @@ -5219,8 +5219,8 @@ int map_pages_to_xen( for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ ) l2e_write(l2t + i, - l2e_from_pfn(l3e_get_pfn(ol3e) + - (i << PAGETABLE_ORDER), + l2e_from_mfn(mfn_add(l3e_get_mfn(ol3e), + (i << PAGETABLE_ORDER)), l3e_get_flags(ol3e))); if ( l3e_get_flags(ol3e) & _PAGE_GLOBAL ) @@ -5320,7 +5320,7 @@ int map_pages_to_xen( for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ ) l1e_write(&l1t[i], - 
l1e_from_pfn(l2e_get_pfn(*pl2e) + i, + l1e_from_mfn(mfn_add(l2e_get_mfn(*pl2e), i), lNf_to_l1f(l2e_get_flags(*pl2e)))); if ( l2e_get_flags(*pl2e) & _PAGE_GLOBAL ) @@ -5391,7 +5391,7 @@ int map_pages_to_xen( l1t = l2e_to_l1e(ol2e); base_mfn = l1e_get_pfn(l1t[0]) & ~(L1_PAGETABLE_ENTRIES - 1); for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ ) - if ( (l1e_get_pfn(l1t[i]) != (base_mfn + i)) || + if ( !mfn_eq(l1e_get_mfn(l1t[i]), _mfn(base_mfn + i)) || (l1e_get_flags(l1t[i]) != flags) ) break; if ( i == L1_PAGETABLE_ENTRIES ) @@ -5521,7 +5521,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf) { /* PAGE1GB: whole superpage is modified. */ l3_pgentry_t nl3e = !(nf & _PAGE_PRESENT) ? l3e_empty() - : l3e_from_pfn(l3e_get_pfn(*pl3e), + : l3e_from_mfn(l3e_get_mfn(*pl3e), (l3e_get_flags(*pl3e) & ~FLAGS_MASK) | nf); l3e_write_atomic(pl3e, nl3e); @@ -5535,8 +5535,8 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf) return -ENOMEM; for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ ) l2e_write(l2t + i, - l2e_from_pfn(l3e_get_pfn(*pl3e) + - (i << PAGETABLE_ORDER), + l2e_from_mfn(mfn_add(l3e_get_mfn(*pl3e), + (i << PAGETABLE_ORDER)), l3e_get_flags(*pl3e))); if ( locking ) spin_lock(&map_pgdir_lock); @@ -5576,7 +5576,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf) { /* PSE: whole superpage is modified. */ l2_pgentry_t nl2e = !(nf & _PAGE_PRESENT) ? l2e_empty() - : l2e_from_pfn(l2e_get_pfn(*pl2e), + : l2e_from_mfn(l2e_get_mfn(*pl2e), (l2e_get_flags(*pl2e) & ~FLAGS_MASK) | nf); l2e_write_atomic(pl2e, nl2e); @@ -5592,7 +5592,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf) return -ENOMEM; for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ ) l1e_write(&l1t[i], - l1e_from_pfn(l2e_get_pfn(*pl2e) + i, + l1e_from_mfn(mfn_add(l2e_get_mfn(*pl2e), i), l2e_get_flags(*pl2e) & ~_PAGE_PSE)); if ( locking ) spin_lock(&map_pgdir_lock); @@ -5625,7 +5625,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf) ASSERT(!(nf & _PAGE_PRESENT)); nl1e = !(nf & _PAGE_PRESENT) ? 
l1e_empty() - : l1e_from_pfn(l1e_get_pfn(*pl1e), + : l1e_from_mfn(l1e_get_mfn(*pl1e), (l1e_get_flags(*pl1e) & ~FLAGS_MASK) | nf); l1e_write_atomic(pl1e, nl1e); diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c index cfe95c5dac..4d1d38dae3 100644 --- a/xen/arch/x86/setup.c +++ b/xen/arch/x86/setup.c @@ -1147,7 +1147,7 @@ void __init noreturn __start_xen(unsigned long mbi_p) BUG_ON(using_2M_mapping() && l2_table_offset((unsigned long)_erodata) == l2_table_offset((unsigned long)_stext)); - *pl2e++ = l2e_from_pfn(xen_phys_start >> PAGE_SHIFT, + *pl2e++ = l2e_from_mfn(maddr_to_mfn(xen_phys_start), PAGE_HYPERVISOR_RX | _PAGE_PSE); for ( i = 1; i < L2_PAGETABLE_ENTRIES; i++, pl2e++ ) { diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h index 377ba14f6e..8d581cd1e7 100644 --- a/xen/include/asm-x86/page.h +++ b/xen/include/asm-x86/page.h @@ -270,7 +270,7 @@ void copy_page_sse2(void *, const void *); #define pfn_to_paddr(pfn) __pfn_to_paddr(pfn) #define paddr_to_pfn(pa) __paddr_to_pfn(pa) #define paddr_to_pdx(pa) pfn_to_pdx(paddr_to_pfn(pa)) -#define vmap_to_mfn(va) _mfn(l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va)))) +#define vmap_to_mfn(va) l1e_get_mfn(*virt_to_xen_l1e((unsigned long)(va))) #define vmap_to_page(va) mfn_to_page(vmap_to_mfn(va)) #endif /* !defined(__ASSEMBLY__) */ From patchwork Sun Mar 22 16:14:11 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 11451905 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AFA2E17D4 for ; Sun, 22 Mar 2020 16:16:30 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 95D6A20724 for ; Sun, 22 Mar 2020 16:16:30 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 95D6A20724 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=xen.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3Fa-0004tZ-S6; Sun, 22 Mar 2020 16:14:58 +0000 Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3FZ-0004rK-89 for xen-devel@lists.xenproject.org; Sun, 22 Mar 2020 16:14:57 +0000 X-Inumbo-ID: 38c696b4-6c58-11ea-8134-12813bfff9fa Received: from mail-ed1-f67.google.com (unknown [209.85.208.67]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS id 38c696b4-6c58-11ea-8134-12813bfff9fa; Sun, 22 Mar 2020 16:14:35 +0000 (UTC) Received: by mail-ed1-f67.google.com with SMTP id w26so7148397edu.7 for ; Sun, 22 Mar 2020 09:14:35 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=iAgcocLWJa/thTf8SxekVYdlKQsuyCse0uToobp5LkM=; b=ZlSMfy0JI9jNDjfv8XbrIm8Ujbp/XNpyDis9qA/9sItD6tsK491Hi0AaZoZof+eTSv ZVookV5cCnBggW+3nNMI1Ru7Xe/OdlgkbFhyBh3X+zl1PvNButN3oybZM15LTaq426q2 vzfCqhgPNbY/DryPeQrlCThp6/Djd0g6Hq77LLpqLG9ABpFJVK/Y7rPEfrMng8o5Rupa WhBpWunxjzggXXETHK+tgIO+JJupQHKZNsPvKxftftajPVahpoiEmYgnbPd0NV39IUj7 
0JyQ0gBI9B9icaiaz0vd+f0EcArCuaBZ5n48OHYfLJR/YXu+9VxI/QfVZsGPQTA7llea LZSA== X-Gm-Message-State: ANhLgQ1KYRWooMMBl42nHxink+hf3rVYNU5yT4EeboKvwhc/Z0CNiHxQ cdWUYHNUYVB2YjPEFiAFzoAHEXMN3jV/jQ== X-Google-Smtp-Source: ADFU+vtexKJQoP1o5dsMumo0E9vKS9kBg35/4qaOwxPd5C8lvf5cAFaKMC9T3UyBSG9Qp5xI6aIfiQ== X-Received: by 2002:a17:906:2455:: with SMTP id a21mr16168678ejb.11.1584893674710; Sun, 22 Mar 2020 09:14:34 -0700 (PDT) Received: from ufe34d9ed68d054.ant.amazon.com (54-240-197-235.amazon.com. [54.240.197.235]) by smtp.gmail.com with ESMTPSA id v13sm106693edj.62.2020.03.22.09.14.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 22 Mar 2020 09:14:34 -0700 (PDT) From: julien@xen.org To: xen-devel@lists.xenproject.org Date: Sun, 22 Mar 2020 16:14:11 +0000 Message-Id: <20200322161418.31606-11-julien@xen.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200322161418.31606-1-julien@xen.org> References: <20200322161418.31606-1-julien@xen.org> Subject: [Xen-devel] [PATCH 10/17] xen/x86: pv: Use maddr_to_mfn(...) instead of the open-coding version X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: julien@xen.org, Wei Liu , Andrew Cooper , Julien Grall , Jan Beulich , =?utf-8?q?Roger_Pau_Monn=C3=A9?= MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Julien Grall _mfn(addr >> PAGE_SHIFT) is equivalent to maddr_to_mfn(addr). Signed-off-by: Julien Grall Acked-by: Jan Beulich --- xen/arch/x86/pv/grant_table.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/xen/arch/x86/pv/grant_table.c b/xen/arch/x86/pv/grant_table.c index 0325618c98..f80e233621 100644 --- a/xen/arch/x86/pv/grant_table.c +++ b/xen/arch/x86/pv/grant_table.c @@ -72,7 +72,7 @@ int create_grant_pv_mapping(uint64_t addr, mfn_t frame, goto out; } - gl1mfn = _mfn(addr >> PAGE_SHIFT); + gl1mfn = maddr_to_mfn(addr); page = get_page_from_mfn(gl1mfn, currd); if ( !page ) @@ -228,7 +228,7 @@ int replace_grant_pv_mapping(uint64_t addr, mfn_t frame, goto out; } - gl1mfn = _mfn(addr >> PAGE_SHIFT); + gl1mfn = maddr_to_mfn(addr); page = get_page_from_mfn(gl1mfn, currd); if ( !page ) From patchwork Sun Mar 22 16:14:12 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 11451893 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2AA546CA for ; Sun, 22 Mar 2020 16:16:21 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 1100520724 for ; Sun, 22 Mar 2020 16:16:21 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 1100520724 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=xen.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3FO-0004fl-1J; Sun, 22 Mar 2020 16:14:46 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3FN-0004fK-Ev for 
xen-devel@lists.xenproject.org; Sun, 22 Mar 2020 16:14:45 +0000 X-Inumbo-ID: 395a120e-6c58-11ea-a6c1-bc764e2007e4 Received: from mail-ed1-f65.google.com (unknown [209.85.208.65]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 395a120e-6c58-11ea-a6c1-bc764e2007e4; Sun, 22 Mar 2020 16:14:36 +0000 (UTC) Received: by mail-ed1-f65.google.com with SMTP id z3so13446815edq.11 for ; Sun, 22 Mar 2020 09:14:36 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=hrdGFn7wEIILke7/AUPxIFCNrHwTfD8D/h3uXv3Thvc=; b=BtJnomIxQ4Z3iQ5OyQyifZESrN0qUKJHPSbbvBD7qqGi1Z2hz3p8BxkNBTWYrPd/KC 8pOYk0GLgdt2VKPnyY/ntG3xbXYqIVuuWn8d0ureTakDXfVqEBt7b/6N8DAjlFNKkiK8 aRz0gzD8obKe1mJtACNufwp+Yll14WK0UDJv6flq2Q1cJ6PG+zNpcO15T+6byNCycmxC fcjPrL4mnPIF6G4BcDv/XtTAa0q84aPfFUfAeB6TsaDzlA6hiVubNDQ9tbcsT7RYHdct j95u8QwaDubyDymGr9QhPVZ9oD5WizblUXKYkCMvAXlIkaY8wvxemJTfaEtZ+SjbVA5f ILrQ== X-Gm-Message-State: ANhLgQ0K8xUmfbwfBxqFpfHDBzxJLtYl1xYaf3MQtX9CV00TF7Q5i56w 4KDxah0JG2RzAeq9od6xrE/6xj/A2v5uJA== X-Google-Smtp-Source: ADFU+vtyMDwxcxqJb5xVmLFbP52Rq+jJz04J5YkoZGwltguOYdsoDUzRoEslYKv/dXZjs4Wcni4mYg== X-Received: by 2002:aa7:c607:: with SMTP id h7mr1105784edq.73.1584893675849; Sun, 22 Mar 2020 09:14:35 -0700 (PDT) Received: from ufe34d9ed68d054.ant.amazon.com (54-240-197-235.amazon.com. [54.240.197.235]) by smtp.gmail.com with ESMTPSA id v13sm106693edj.62.2020.03.22.09.14.34 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 22 Mar 2020 09:14:35 -0700 (PDT) From: julien@xen.org To: xen-devel@lists.xenproject.org Date: Sun, 22 Mar 2020 16:14:12 +0000 Message-Id: <20200322161418.31606-12-julien@xen.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200322161418.31606-1-julien@xen.org> References: <20200322161418.31606-1-julien@xen.org> Subject: [Xen-devel] [PATCH 11/17] xen/x86: nested_ept: Fix typo in the message in nept_translate_l2ga() X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: julien@xen.org, Wei Liu , Andrew Cooper , Julien Grall , George Dunlap , Jan Beulich , =?utf-8?q?Roger_Pau_Monn=C3=A9?= MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Julien Grall Signed-off-by: Julien Grall Acked-by: Jan Beulich --- xen/arch/x86/mm/hap/nested_ept.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c index 1cb7fefc37..7bae71cc47 100644 --- a/xen/arch/x86/mm/hap/nested_ept.c +++ b/xen/arch/x86/mm/hap/nested_ept.c @@ -255,7 +255,7 @@ int nept_translate_l2ga(struct vcpu *v, paddr_t l2ga, } else { - gdprintk(XENLOG_ERR, "Uncorrect l1 entry!\n"); + gdprintk(XENLOG_ERR, "Incorrect l1 entry!\n"); BUG(); } if ( nept_permission_check(rwx_acc, rwx_bits) ) From patchwork Sun Mar 22 16:14:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 11451879 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3378E1744 for ; Sun, 22 Mar 2020 16:16:11 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by 
mail.kernel.org (Postfix) with ESMTPS id 1986420724 for ; Sun, 22 Mar 2020 16:16:11 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 1986420724 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=xen.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3Ff-0004zM-GW; Sun, 22 Mar 2020 16:15:03 +0000 Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3Fe-0004xO-8L for xen-devel@lists.xenproject.org; Sun, 22 Mar 2020 16:15:02 +0000 X-Inumbo-ID: 39ecac9a-6c58-11ea-8134-12813bfff9fa Received: from mail-ed1-f66.google.com (unknown [209.85.208.66]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS id 39ecac9a-6c58-11ea-8134-12813bfff9fa; Sun, 22 Mar 2020 16:14:37 +0000 (UTC) Received: by mail-ed1-f66.google.com with SMTP id cf14so4284541edb.13 for ; Sun, 22 Mar 2020 09:14:37 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=L1T44arnxUCL2r70XG7PRouN++rEfiBPPpnuW47vsAs=; b=b9r1GJEMIp5wYoanlWO+Et1OxnaeiqwCV4vARVB8j0d34RVz/j7kuH50zXoUvravGW CIlvDM8P+f5wlz/8iuYnR+h/8g/zbk9hUQzyT6QrI6Iv6/GaMgks4mZYwQifZmCTh83G YODZtI68E1JXYJ5+YsDopBjcgCi008jjivsC8WrhcwqY4C/EDLgcmU3CoODt15c3wbZ4 av5KehbJ7JitHoBSTYMXhaZidP/ylP7n286T41kAMaaVD2gFNPRXUmTI8s6eJL2wyDmb 25hH/PCPhr1UO60W6XbOB5HhbLkxCoCLxtjrdWmwY9kzNBZhDyAgInzZIAeSjgOaD2RF nBJw== X-Gm-Message-State: ANhLgQ2ApvOrldFtgkkps80fCVJ0YHnfGzymyZ8TJYEWZwO29EOswREc Lv+JKSFbd0bDi+E5npP1p20iDUk1Br2+Og== X-Google-Smtp-Source: ADFU+vvVWEHQJmzDAewdRV2U/X28YEJ1g1rXfLbUWXvdbBGyUKail6uIE69qnl851t0Vjcg35VadOg== X-Received: by 2002:a17:906:7e07:: with SMTP id e7mr15978001ejr.135.1584893676693; Sun, 22 Mar 2020 09:14:36 -0700 (PDT) Received: from ufe34d9ed68d054.ant.amazon.com (54-240-197-235.amazon.com. [54.240.197.235]) by smtp.gmail.com with ESMTPSA id v13sm106693edj.62.2020.03.22.09.14.35 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 22 Mar 2020 09:14:36 -0700 (PDT) From: julien@xen.org To: xen-devel@lists.xenproject.org Date: Sun, 22 Mar 2020 16:14:13 +0000 Message-Id: <20200322161418.31606-13-julien@xen.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200322161418.31606-1-julien@xen.org> References: <20200322161418.31606-1-julien@xen.org> Subject: [Xen-devel] [PATCH 12/17] xen/x86: p2m: Remove duplicate error message in p2m_pt_audit_p2m() X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: julien@xen.org, Wei Liu , Andrew Cooper , George Dunlap , Julien Grall , Jan Beulich , =?utf-8?q?Roger_Pau_Monn=C3=A9?= MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Julien Grall p2m_pt_audit_p2m() has one place where the same message may be printed twice via printk and P2M_PRINTK. Remove the one printed using printk to stay consistent with the rest of the code. Signed-off-by: Julien Grall Acked-by: Jan Beulich --- This was originally sent as part of "xen/arm: Properly disable M2P on Arm" [1]. Changes since the original version: - Move the reflow in a separate patch. 
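As an editorial aside on this patch: the sketch below illustrates, with a made-up AUDIT_PRINTK macro and build knob (deliberately not Xen's actual P2M_PRINTK definition), why dropping the unconditional printk() and keeping only the debug-oriented macro loses no information in debug builds — the diagnostic is still emitted once, under the debug configuration, instead of twice unconditionally.

```c
/*
 * Illustrative sketch only -- NOT Xen's P2M_PRINTK. It shows the general
 * pattern of a debug-only print macro guarded by a (hypothetical) build
 * knob, so the audit message is printed once in debug builds and compiled
 * out otherwise.
 */
#include <stdio.h>

#define P2M_DEBUG_AUDIT 1  /* hypothetical build knob */

#if P2M_DEBUG_AUDIT
#define AUDIT_PRINTK(fmt, ...) printf("p2m-audit: " fmt, ##__VA_ARGS__)
#else
#define AUDIT_PRINTK(fmt, ...) ((void)0)
#endif

int main(void)
{
    unsigned long gfn = 0x1000, mfn = 0x2000, m2pfn = 0x3000;

    /* A single diagnostic instead of printing the same line twice. */
    AUDIT_PRINTK("mismatch: gfn %#lx -> mfn %#lx -> gfn %#lx\n",
                 gfn, mfn, m2pfn);
    return 0;
}
```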
[1] <20190603160350.29806-1-julien.grall@arm.com> --- xen/arch/x86/mm/p2m-pt.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c index cccb06c26e..77450a9484 100644 --- a/xen/arch/x86/mm/p2m-pt.c +++ b/xen/arch/x86/mm/p2m-pt.c @@ -1061,8 +1061,6 @@ long p2m_pt_audit_p2m(struct p2m_domain *p2m) !p2m_is_shared(type) ) { pmbad++; - printk("mismatch: gfn %#lx -> mfn %#lx" - " -> gfn %#lx\n", gfn, mfn, m2pfn); P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx" " -> gfn %#lx\n", gfn, mfn, m2pfn); BUG(); From patchwork Sun Mar 22 16:14:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 11451899 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 274C16CA for ; Sun, 22 Mar 2020 16:16:25 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 0DDA920724 for ; Sun, 22 Mar 2020 16:16:25 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0DDA920724 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=xen.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3FT-0004kf-Ky; Sun, 22 Mar 2020 16:14:51 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3FS-0004jY-F9 for xen-devel@lists.xenproject.org; Sun, 22 Mar 2020 16:14:50 +0000 X-Inumbo-ID: 3a7ed2a0-6c58-11ea-92cf-bc764e2007e4 Received: from mail-ed1-f65.google.com (unknown [209.85.208.65]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 3a7ed2a0-6c58-11ea-92cf-bc764e2007e4; Sun, 22 Mar 2020 16:14:38 +0000 (UTC) Received: by mail-ed1-f65.google.com with SMTP id n25so12545685eds.10 for ; Sun, 22 Mar 2020 09:14:38 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=9FdB03XeTb21IBXUeyCVMzkNzMUw0hNKTDFXHd2H3xg=; b=jYxQATlWHi1b9nXzJbJSLTE2uUupvg1lx/LoCQzKncFW8aE64z2DhaKuYsO59E5mP1 R5SAXrzMsRnWiujPzSgRB271pDl5yFhAl+xr1RF11qoD9G8Zt63Rf2E1bkNA0A5aAP+j ypGP0zAhkiWRbAdNSEs7khPYgrusSh41x5IdAMPLpDUWsLXuPE8w9zN+RCDU4Q2DBGRO CTtx3k9cYq6VJfFYnrXnmj1qMg+Cv5/qFNxABGQ51s2dFoxt9LZPNbeeE9RXVmDGh1a4 gZV69H+lStCio4YUTVXWq1Ad7tmLNMha8cwUkNdD+UwvHj//g/onHuTA8l2gBw568neD FU+w== X-Gm-Message-State: ANhLgQ2AYEgwsTRC12vzDmj3kFeF7aVo7tff6ZQAUXv0EZ3wWb8H40XU KBd5WlnLQYERHWD1REpoZNOMb8L2mwQbFQ== X-Google-Smtp-Source: ADFU+vsW1wr7I0nxWsGNMrzsFsK1KyQ6lfU4pzLU1Lj/etpEmfOvWxBWjc3BYZ/4qn1K7XCykTZxng== X-Received: by 2002:aa7:c9cb:: with SMTP id i11mr18191799edt.320.1584893677659; Sun, 22 Mar 2020 09:14:37 -0700 (PDT) Received: from ufe34d9ed68d054.ant.amazon.com (54-240-197-235.amazon.com. 
[54.240.197.235]) by smtp.gmail.com with ESMTPSA id v13sm106693edj.62.2020.03.22.09.14.36 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 22 Mar 2020 09:14:37 -0700 (PDT) From: julien@xen.org To: xen-devel@lists.xenproject.org Date: Sun, 22 Mar 2020 16:14:14 +0000 Message-Id: <20200322161418.31606-14-julien@xen.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200322161418.31606-1-julien@xen.org> References: <20200322161418.31606-1-julien@xen.org> Subject: [Xen-devel] [PATCH 13/17] xen/x86: p2m: Reflow P2M_PRINTK()s in p2m_pt_audit_p2m() X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: julien@xen.org, Wei Liu , Andrew Cooper , Julien Grall , George Dunlap , Jan Beulich , =?utf-8?q?Roger_Pau_Monn=C3=A9?= MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Julien Grall We tend to avoid splitting message on multiple line, so it is easier to find it. Signed-off-by: Julien Grall Acked-by: Jan Beulich --- xen/arch/x86/mm/p2m-pt.c | 14 ++++++-------- 1 file changed, 6 insertions(+), 8 deletions(-) diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c index 77450a9484..e9da34d668 100644 --- a/xen/arch/x86/mm/p2m-pt.c +++ b/xen/arch/x86/mm/p2m-pt.c @@ -994,9 +994,8 @@ long p2m_pt_audit_p2m(struct p2m_domain *p2m) if ( m2pfn != (gfn + i2) ) { pmbad++; - P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx" - " -> gfn %#lx\n", gfn+i2, mfn+i2, - m2pfn); + P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx -> gfn %#lx\n", + gfn + i2, mfn + i2, m2pfn); BUG(); } gfn += 1 << (L3_PAGETABLE_SHIFT - PAGE_SHIFT); @@ -1029,9 +1028,8 @@ long p2m_pt_audit_p2m(struct p2m_domain *p2m) if ( (m2pfn != (gfn + i1)) && !SHARED_M2P(m2pfn) ) { pmbad++; - P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx" - " -> gfn %#lx\n", gfn+i1, mfn+i1, - m2pfn); + P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx -> gfn %#lx\n", + gfn + i1, mfn + i1, m2pfn); BUG(); } } @@ -1061,8 +1059,8 @@ long p2m_pt_audit_p2m(struct p2m_domain *p2m) !p2m_is_shared(type) ) { pmbad++; - P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx" - " -> gfn %#lx\n", gfn, mfn, m2pfn); + P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx -> gfn %#lx\n", + gfn, mfn, m2pfn); BUG(); } } From patchwork Sun Mar 22 16:14:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 11451901 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 82F9C17D4 for ; Sun, 22 Mar 2020 16:16:25 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 6986320724 for ; Sun, 22 Mar 2020 16:16:25 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 6986320724 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=xen.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3FY-0004pn-Ae; Sun, 22 Mar 2020 16:14:56 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) 
(envelope-from ) id 1jG3FX-0004oq-Fx for xen-devel@lists.xenproject.org; Sun, 22 Mar 2020 16:14:55 +0000 X-Inumbo-ID: 3b2d69e6-6c58-11ea-b34e-bc764e2007e4 Received: from mail-ed1-f65.google.com (unknown [209.85.208.65]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 3b2d69e6-6c58-11ea-b34e-bc764e2007e4; Sun, 22 Mar 2020 16:14:39 +0000 (UTC) Received: by mail-ed1-f65.google.com with SMTP id z65so13537431ede.0 for ; Sun, 22 Mar 2020 09:14:39 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=klK9FdE2K1lUNI+mLYUEqRnbc1iAOenHTFOoesGJIXc=; b=bnY3Mp3NCu8lRY4LGX0L31v31DBOnTr+m4DsfJxqr2bKryyG57nZ43jLaiclNSGM90 gXhtKja6NQF/6F6694TpEfIISOjLTcWUVRuxul/JWUCTmaeATVxczQggEHlH5Of6+Fsv fQOXdvecnD0DQwNvoOa7CW75IMcxgQeh/zOy4UYKYCkt2GI6WHyqQjPk77n9b7SAeDcU B8U5NPI/3YwCTH8n1AjdzXXZqcfQs4qvgnTj0bothvE+7wIqhf74ev/4u2HACiCoFMet Bi2Am0Egfn4rI5JpRVcer2vwaObRQOZOj87KMxpt3+o00kuAJDzmXJeSuyjbK/wnSlV8 kDsg== X-Gm-Message-State: ANhLgQ2EYWwNrtr7F3E6uXw0PFawSxli7BdLjg0kr/wV+awyoUynZg4B c8sCtvkKZlNIkIKnDfcKNu93cEprCzhebA== X-Google-Smtp-Source: ADFU+vuUpOX1gSnROav2FUOZMUb+kId6Pc0sBLXjVopEjrCPInAdWcNh/jQ+8rbjIHyOA9qDj3UleA== X-Received: by 2002:a17:906:32d8:: with SMTP id k24mr92782ejk.2.1584893678743; Sun, 22 Mar 2020 09:14:38 -0700 (PDT) Received: from ufe34d9ed68d054.ant.amazon.com (54-240-197-235.amazon.com. [54.240.197.235]) by smtp.gmail.com with ESMTPSA id v13sm106693edj.62.2020.03.22.09.14.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 22 Mar 2020 09:14:38 -0700 (PDT) From: julien@xen.org To: xen-devel@lists.xenproject.org Date: Sun, 22 Mar 2020 16:14:15 +0000 Message-Id: <20200322161418.31606-15-julien@xen.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200322161418.31606-1-julien@xen.org> References: <20200322161418.31606-1-julien@xen.org> Subject: [Xen-devel] [PATCH 14/17] xen/x86: mm: Re-implement set_gpfn_from_mfn() as a static inline function X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: julien@xen.org, Wei Liu , Andrew Cooper , Julien Grall , Jan Beulich , =?utf-8?q?Roger_Pau_Monn=C3=A9?= MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Julien Grall set_gpfn_from_mfn() is currently implement in a 2 part macros. The second macro is only called within the first macro, so they can be folded together. Furthermore, this is now converted to a static inline making the code more readable and safer. Signed-off-by: Julien Grall Reviewed-by: Jan Beulich --- This was originally sent as part of "xen/arm: Properly disable M2P on Arm" [1]. Changes since the original version: - Remove the paragraph in the comment about dom_* as we don't need to move them anymore. 
- Constify 'd' as it is never modified within the function [1] <20190603160350.29806-1-julien.grall@arm.com> --- xen/include/asm-x86/mm.h | 25 +++++++++++++------------ 1 file changed, 13 insertions(+), 12 deletions(-) diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h index 83058fb8d1..53f2ed7c7d 100644 --- a/xen/include/asm-x86/mm.h +++ b/xen/include/asm-x86/mm.h @@ -493,24 +493,25 @@ extern paddr_t mem_hotplug; #define SHARED_M2P(_e) ((_e) == SHARED_M2P_ENTRY) #define compat_machine_to_phys_mapping ((unsigned int *)RDWR_COMPAT_MPT_VIRT_START) -#define _set_gpfn_from_mfn(mfn, pfn) ({ \ - struct domain *d = page_get_owner(mfn_to_page(_mfn(mfn))); \ - unsigned long entry = (d && (d == dom_cow)) ? \ - SHARED_M2P_ENTRY : (pfn); \ - ((void)((mfn) >= (RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) / 4 || \ - (compat_machine_to_phys_mapping[(mfn)] = (unsigned int)(entry))), \ - machine_to_phys_mapping[(mfn)] = (entry)); \ - }) /* * Disable some users of set_gpfn_from_mfn() (e.g., free_heap_pages()) until * the machine_to_phys_mapping is actually set up. */ extern bool machine_to_phys_mapping_valid; -#define set_gpfn_from_mfn(mfn, pfn) do { \ - if ( machine_to_phys_mapping_valid ) \ - _set_gpfn_from_mfn(mfn, pfn); \ -} while (0) + +static inline void set_gpfn_from_mfn(unsigned long mfn, unsigned long pfn) +{ + const struct domain *d = page_get_owner(mfn_to_page(_mfn(mfn))); + unsigned long entry = (d && (d == dom_cow)) ? SHARED_M2P_ENTRY : pfn; + + if ( !machine_to_phys_mapping_valid ) + return; + + if ( mfn < (RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) / 4 ) + compat_machine_to_phys_mapping[mfn] = entry; + machine_to_phys_mapping[mfn] = entry; +} extern struct rangeset *mmio_ro_ranges; From patchwork Sun Mar 22 16:14:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 11451907 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2C2B06CA for ; Sun, 22 Mar 2020 16:16:32 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 12E4A20724 for ; Sun, 22 Mar 2020 16:16:32 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 12E4A20724 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=xen.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3Fe-0004xM-60; Sun, 22 Mar 2020 16:15:02 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3Fc-0004vS-FR for xen-devel@lists.xenproject.org; Sun, 22 Mar 2020 16:15:00 +0000 X-Inumbo-ID: 3bb2274e-6c58-11ea-a6c1-bc764e2007e4 Received: from mail-ed1-f68.google.com (unknown [209.85.208.68]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 3bb2274e-6c58-11ea-a6c1-bc764e2007e4; Sun, 22 Mar 2020 16:14:40 +0000 (UTC) Received: by mail-ed1-f68.google.com with SMTP id i24so13538858eds.1 for ; Sun, 22 Mar 2020 09:14:40 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; 
h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=04zURAfP8Ele6NFckM2pfjn7bHXwVWe5JFmHviPQRzA=; b=YuXL4O7cxjvKSrL+wseiHk0fcxblsITDIi7JsvZLUGRnrgypZRSXRn7xEDQjGFrwRb YuCparMg+GiytwGidSK8YdtD/CLDMjUREB4j5hxi2p6DLZ9mh0RM5lIByrJ4eU1tsFv+ UGFpZ+qNjwihrmGvvNImlYYjCVTPGBkS3bPPC2PSEHUMfQ7FAkwGKKwQc5vBTQ/dJ5pB 8PLA8Dsux3LDvaunkeROg9Ar+3jIXBGzF5kxGTqX6WARxu9dnVQPwbGbJl3gIENibLRw H1A/Ww7RPMw9xpFC+KZalNovNOMvKOR74w3BWNZKsIkG4GS4RZV78ZeMVSkYUCV6xu1A 0DGw== X-Gm-Message-State: ANhLgQ1//LX0Pd/6mPh95QqTMQuw5ePqtY0d6va85sX1GjmgFTZX8emC 86SSFXgpCgvIAsRDFw8BotsRSbkZPH+3qw== X-Google-Smtp-Source: ADFU+vvn+ke5XLNEvi5vuNP8y5JT938xk3czZJWOTQaI95FgPxHJKvvqUt7hHf4YKPx8OMD1MIKJRQ== X-Received: by 2002:a05:6402:343:: with SMTP id r3mr17421186edw.85.1584893679752; Sun, 22 Mar 2020 09:14:39 -0700 (PDT) Received: from ufe34d9ed68d054.ant.amazon.com (54-240-197-235.amazon.com. [54.240.197.235]) by smtp.gmail.com with ESMTPSA id v13sm106693edj.62.2020.03.22.09.14.38 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 22 Mar 2020 09:14:39 -0700 (PDT) From: julien@xen.org To: xen-devel@lists.xenproject.org Date: Sun, 22 Mar 2020 16:14:16 +0000 Message-Id: <20200322161418.31606-16-julien@xen.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200322161418.31606-1-julien@xen.org> References: <20200322161418.31606-1-julien@xen.org> Subject: [Xen-devel] [PATCH 15/17] xen/x86: p2m: Rework printk format in audit_p2m() X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: julien@xen.org, Wei Liu , Andrew Cooper , George Dunlap , Julien Grall , Jan Beulich , =?utf-8?q?Roger_Pau_Monn=C3=A9?= MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Julien Grall One of the printk format in audit_p2m() may be difficult to read as it is not clear what is the first number. Furthermore, the format can now take advantage of %pd. Signed-off-by: Julien Grall Acked-by: Jan Beulich --- This was originally sent as part of "xen/arm: Properly disable M2P on Arm" [1]. 
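As an aside, the readability gain in the new message comes from the standard format-macro idiom: adjacent string literals concatenate, so a PRI_mfn-style macro slots straight into the format string. The sketch below uses an assumed PRI_MFN_DEMO definition purely for illustration; it does not reproduce Xen's real PRI_mfn or its %pd domain specifier.

```c
/*
 * Minimal sketch of the format-macro concatenation idiom. PRI_MFN_DEMO's
 * width/format is an assumption for the demo, not Xen's PRI_mfn.
 */
#include <stdio.h>
#include <inttypes.h>

#define PRI_MFN_DEMO "05" PRIx64   /* assumed zero-padded hex width */

int main(void)
{
    uint64_t mfn = 0x12345;

    /* "mfn %" PRI_MFN_DEMO " ..." collapses into one readable format. */
    printf("mfn %" PRI_MFN_DEMO " owner d%d != d%d\n", mfn, 1, 2);
    return 0;
}
```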
[1] <20190603160350.29806-1-julien.grall@arm.com> --- xen/arch/x86/mm/p2m.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c index 45b4b784d3..b6b01a71c8 100644 --- a/xen/arch/x86/mm/p2m.c +++ b/xen/arch/x86/mm/p2m.c @@ -2851,8 +2851,7 @@ void audit_p2m(struct domain *d, if ( od != d ) { - P2M_PRINTK("wrong owner %#lx -> %p(%u) != %p(%u)\n", - mfn, od, (od?od->domain_id:-1), d, d->domain_id); + P2M_PRINTK("mfn %"PRI_mfn" owner %pd != %pd\n", mfn, od, d); continue; } From patchwork Sun Mar 22 16:14:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 11451887 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D8BD16CA for ; Sun, 22 Mar 2020 16:16:17 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id A837C20724 for ; Sun, 22 Mar 2020 16:16:17 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A837C20724 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=xen.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3Fk-00056B-E2; Sun, 22 Mar 2020 16:15:08 +0000 Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3Fj-000548-8X for xen-devel@lists.xenproject.org; Sun, 22 Mar 2020 16:15:07 +0000 X-Inumbo-ID: 3ca8282e-6c58-11ea-8134-12813bfff9fa Received: from mail-ed1-f66.google.com (unknown [209.85.208.66]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS id 3ca8282e-6c58-11ea-8134-12813bfff9fa; Sun, 22 Mar 2020 16:14:42 +0000 (UTC) Received: by mail-ed1-f66.google.com with SMTP id a20so13525871edj.2 for ; Sun, 22 Mar 2020 09:14:42 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=mApuAxVqoNL0AJrg7IhfRhEE4AAZrv5zScEqKjeYVHA=; b=SsEGYGhlr2+tOZz6l1Mq5xw2ENc6S5JPQBiJojDw5kYAP5UP63NDi8NdJPNDek5wB3 OBa364Xy8lWHWxRz+JmZQ9sQe+6u5Lj/aDQQFzGzkspese+zMGLIPOTfvgtGOdEJR7bC ++u+L07wEbFfgXQVvCY3Ukrmw+vZSysJAiaivYewTYSOKaoNcwxONVysRiToQRv5lMex FVrrYuYHa++dtuAjWdTXB6IX00TAgfc83r9bUauHrp8wfdnHKWKr72t0ty0u/oqXEwMp YpeQUQCb+0mbJnTVENHz3ebwvD5f8JlcIJ3PgxtDlRrUVfmrRKxtYxB5jPwOqyYPkqas uodA== X-Gm-Message-State: ANhLgQ3CFa2VuOrAkXbHty7/nuHKT41Pj/087MkBjkZINt5KmnA7reiq PQa8dZ0DQcA9W7h57rXRjLV++0Mgkd89FA== X-Google-Smtp-Source: ADFU+vtviWxPf8i29iJJYhAPDo62/ZehU61jnW8XxgECDaObW9w42GVhESNCMnJUxNTYXWAVvYmG4A== X-Received: by 2002:a50:e716:: with SMTP id a22mr17288833edn.358.1584893680855; Sun, 22 Mar 2020 09:14:40 -0700 (PDT) Received: from ufe34d9ed68d054.ant.amazon.com (54-240-197-235.amazon.com. 
[54.240.197.235]) by smtp.gmail.com with ESMTPSA id v13sm106693edj.62.2020.03.22.09.14.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 22 Mar 2020 09:14:40 -0700 (PDT) From: julien@xen.org To: xen-devel@lists.xenproject.org Date: Sun, 22 Mar 2020 16:14:17 +0000 Message-Id: <20200322161418.31606-17-julien@xen.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200322161418.31606-1-julien@xen.org> References: <20200322161418.31606-1-julien@xen.org> Subject: [Xen-devel] [PATCH 16/17] xen/mm: Convert {s, g}et_gpfn_from_mfn() to use typesafe MFN X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Stefano Stabellini , julien@xen.org, Wei Liu , Andrew Cooper , Ian Jackson , George Dunlap , Julien Grall , Tamas K Lengyel , Jan Beulich , Volodymyr Babchuk , =?utf-8?q?Roger_Pau_Monn?= =?utf-8?q?=C3=A9?= MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Julien Grall The first parameter of {s,g}et_gpfn_from_mfn() is an MFN, so it can be switched to use the typesafe. At the same time, replace gpfn with pfn in the helpers as they all deal with PFN and also turn the macros to static inline. Note that the return of the getter and the 2nd parameter of the setter have not been converted to use typesafe PFN because it was requiring more changes than expected. Signed-off-by: Julien Grall Reviewed-by: Hongyan Xia --- This was originally sent as part of "xen/arm: Properly disable M2P on Arm" [1]. Changes since the original version: - mfn_to_gmfn() is still present for now so update it - Remove stray + - Avoid churn in set_pfn_from_mfn() by inverting mfn and mfn_ - Remove tags - Fix build in mem_sharing [1] <20190603160350.29806-1-julien.grall@arm.com> --- xen/arch/x86/cpu/mcheck/mcaction.c | 2 +- xen/arch/x86/mm.c | 14 +++---- xen/arch/x86/mm/mem_sharing.c | 20 ++++----- xen/arch/x86/mm/p2m-pod.c | 4 +- xen/arch/x86/mm/p2m-pt.c | 35 ++++++++-------- xen/arch/x86/mm/p2m.c | 66 +++++++++++++++--------------- xen/arch/x86/mm/paging.c | 4 +- xen/arch/x86/pv/dom0_build.c | 6 +-- xen/arch/x86/x86_64/traps.c | 8 ++-- xen/common/page_alloc.c | 2 +- xen/include/asm-arm/mm.h | 2 +- xen/include/asm-x86/grant_table.h | 2 +- xen/include/asm-x86/mm.h | 12 ++++-- xen/include/asm-x86/p2m.h | 2 +- 14 files changed, 93 insertions(+), 86 deletions(-) diff --git a/xen/arch/x86/cpu/mcheck/mcaction.c b/xen/arch/x86/cpu/mcheck/mcaction.c index 69332fb84d..5e78fb7703 100644 --- a/xen/arch/x86/cpu/mcheck/mcaction.c +++ b/xen/arch/x86/cpu/mcheck/mcaction.c @@ -89,7 +89,7 @@ mc_memerr_dhandler(struct mca_binfo *binfo, { d = get_domain_by_id(bank->mc_domid); ASSERT(d); - gfn = get_gpfn_from_mfn((bank->mc_addr) >> PAGE_SHIFT); + gfn = get_pfn_from_mfn(maddr_to_mfn(bank->mc_addr)); if ( unmmap_broken_page(d, mfn, gfn) ) { diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c index 2516548e49..2feb7a5993 100644 --- a/xen/arch/x86/mm.c +++ b/xen/arch/x86/mm.c @@ -476,7 +476,7 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d, if ( page_get_owner(page) == d ) return; - set_gpfn_from_mfn(mfn_x(page_to_mfn(page)), INVALID_M2P_ENTRY); + set_pfn_from_mfn(page_to_mfn(page), INVALID_M2P_ENTRY); spin_lock(&d->page_alloc_lock); @@ -1040,7 +1040,7 @@ get_page_from_l1e( gdprintk(XENLOG_WARNING, "Error updating mappings for mfn %" PRI_mfn " (pfn %" PRI_pfn ", from L1 entry %" PRIpte ") for d%d\n", - mfn, get_gpfn_from_mfn(mfn), + mfn, 
get_pfn_from_mfn(_mfn(mfn)), l1e_get_intpte(l1e), l1e_owner->domain_id); return err; } @@ -1051,7 +1051,7 @@ get_page_from_l1e( could_not_pin: gdprintk(XENLOG_WARNING, "Error getting mfn %" PRI_mfn " (pfn %" PRI_pfn ") from L1 entry %" PRIpte " for l1e_owner d%d, pg_owner d%d\n", - mfn, get_gpfn_from_mfn(mfn), + mfn, get_pfn_from_mfn(_mfn(mfn)), l1e_get_intpte(l1e), l1e_owner->domain_id, pg_owner->domain_id); if ( real_pg_owner != NULL ) put_page(page); @@ -2636,7 +2636,7 @@ static int validate_page(struct page_info *page, unsigned long type, " (pfn %" PRI_pfn ") for type %" PRtype_info ": caf=%08lx taf=%" PRtype_info "\n", mfn_x(page_to_mfn(page)), - get_gpfn_from_mfn(mfn_x(page_to_mfn(page))), + get_pfn_from_mfn(page_to_mfn(page)), type, page->count_info, page->u.inuse.type_info); if ( page != current->arch.old_guest_table ) page->u.inuse.type_info = 0; @@ -2946,7 +2946,7 @@ static int _get_page_type(struct page_info *page, unsigned long type, "Bad type (saw %" PRtype_info " != exp %" PRtype_info ") " "for mfn %" PRI_mfn " (pfn %" PRI_pfn ")\n", x, type, mfn_x(page_to_mfn(page)), - get_gpfn_from_mfn(mfn_x(page_to_mfn(page)))); + get_pfn_from_mfn(page_to_mfn(page))); return -EINVAL; } else if ( unlikely(!(x & PGT_validated)) ) @@ -4106,7 +4106,7 @@ long do_mmu_update( break; } - set_gpfn_from_mfn(mfn_x(mfn), gpfn); + set_pfn_from_mfn(mfn, gpfn); paging_mark_pfn_dirty(pg_owner, _pfn(gpfn)); put_page(page); @@ -4590,7 +4590,7 @@ int xenmem_add_to_physmap_one( goto put_both; /* Unmap from old location, if any. */ - old_gpfn = get_gpfn_from_mfn(mfn_x(mfn)); + old_gpfn = get_pfn_from_mfn(mfn); ASSERT(!SHARED_M2P(old_gpfn)); if ( space == XENMAPSPACE_gmfn && old_gpfn != gfn ) { diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c index 3835bc928f..018beec10f 100644 --- a/xen/arch/x86/mm/mem_sharing.c +++ b/xen/arch/x86/mm/mem_sharing.c @@ -426,15 +426,15 @@ static void mem_sharing_gfn_destroy(struct page_info *page, struct domain *d, xfree(gfn_info); } -static struct page_info *mem_sharing_lookup(unsigned long mfn) +static struct page_info *mem_sharing_lookup(mfn_t mfn) { struct page_info *page; unsigned long t; - if ( !mfn_valid(_mfn(mfn)) ) + if ( !mfn_valid(mfn) ) return NULL; - page = mfn_to_page(_mfn(mfn)); + page = mfn_to_page(mfn); if ( page_get_owner(page) != dom_cow ) return NULL; @@ -446,7 +446,7 @@ static struct page_info *mem_sharing_lookup(unsigned long mfn) t = read_atomic(&page->u.inuse.type_info); ASSERT((t & PGT_type_mask) == PGT_shared_page); ASSERT((t & PGT_count_mask) >= 2); - ASSERT(SHARED_M2P(get_gpfn_from_mfn(mfn))); + ASSERT(SHARED_M2P(get_pfn_from_mfn(mfn))); return page; } @@ -505,10 +505,10 @@ static int audit(void) } /* Check the m2p entry */ - if ( !SHARED_M2P(get_gpfn_from_mfn(mfn_x(mfn))) ) + if ( !SHARED_M2P(get_pfn_from_mfn(mfn)) ) { - gdprintk(XENLOG_ERR, "mfn %lx shared, but wrong m2p entry (%lx)!\n", - mfn_x(mfn), get_gpfn_from_mfn(mfn_x(mfn))); + gdprintk(XENLOG_ERR, "mfn %"PRI_mfn" shared, but wrong m2p entry (%lx)!\n", + mfn_x(mfn), get_pfn_from_mfn(mfn)); errors++; } @@ -736,7 +736,7 @@ static struct page_info *__grab_shared_page(mfn_t mfn) if ( !mem_sharing_page_lock(pg) ) return NULL; - if ( mem_sharing_lookup(mfn_x(mfn)) == NULL ) + if ( mem_sharing_lookup(mfn) == NULL ) { mem_sharing_page_unlock(pg); return NULL; @@ -918,7 +918,7 @@ static int nominate_page(struct domain *d, gfn_t gfn, atomic_inc(&nr_shared_mfns); /* Update m2p entry to SHARED_M2P_ENTRY */ - set_gpfn_from_mfn(mfn_x(mfn), SHARED_M2P_ENTRY); + set_pfn_from_mfn(mfn, 
SHARED_M2P_ENTRY); *phandle = page->sharing->handle; audit_add_list(page); @@ -1306,7 +1306,7 @@ int __mem_sharing_unshare_page(struct domain *d, } /* Update m2p entry */ - set_gpfn_from_mfn(mfn_x(page_to_mfn(page)), gfn); + set_pfn_from_mfn(page_to_mfn(page), gfn); /* * Now that the gfn<->mfn map is properly established, diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c index 2a7b8c117b..a9ac44a65c 100644 --- a/xen/arch/x86/mm/p2m-pod.c +++ b/xen/arch/x86/mm/p2m-pod.c @@ -644,7 +644,7 @@ p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn, unsigned int order) } p2m_tlb_flush_sync(p2m); for ( j = 0; j < n; ++j ) - set_gpfn_from_mfn(mfn_x(mfn), INVALID_M2P_ENTRY); + set_pfn_from_mfn(mfn, INVALID_M2P_ENTRY); p2m_pod_cache_add(p2m, page, cur_order); steal_for_cache = ( p2m->pod.entry_count > p2m->pod.count ); @@ -1194,7 +1194,7 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, gfn_t gfn, for( i = 0; i < (1UL << order); i++ ) { - set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_x(gfn_aligned) + i); + set_pfn_from_mfn(mfn_add(mfn, i), gfn_x(gfn_aligned) + i); paging_mark_pfn_dirty(d, _pfn(gfn_x(gfn_aligned) + i)); } diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c index e9da34d668..1601e9e5e9 100644 --- a/xen/arch/x86/mm/p2m-pt.c +++ b/xen/arch/x86/mm/p2m-pt.c @@ -944,7 +944,8 @@ static int p2m_pt_change_entry_type_range(struct p2m_domain *p2m, long p2m_pt_audit_p2m(struct p2m_domain *p2m) { unsigned long entry_count = 0, pmbad = 0; - unsigned long mfn, gfn, m2pfn; + unsigned long gfn, m2pfn; + mfn_t mfn; ASSERT(p2m_locked_by_me(p2m)); ASSERT(pod_locked_by_me(p2m)); @@ -983,19 +984,20 @@ long p2m_pt_audit_p2m(struct p2m_domain *p2m) /* check for 1GB super page */ if ( l3e_get_flags(l3e[i3]) & _PAGE_PSE ) { - mfn = l3e_get_pfn(l3e[i3]); - ASSERT(mfn_valid(_mfn(mfn))); + mfn = l3e_get_mfn(l3e[i3]); + ASSERT(mfn_valid(mfn)); /* we have to cover 512x512 4K pages */ for ( i2 = 0; i2 < (L2_PAGETABLE_ENTRIES * L1_PAGETABLE_ENTRIES); i2++) { - m2pfn = get_gpfn_from_mfn(mfn+i2); + m2pfn = get_pfn_from_mfn(mfn_add(mfn, i2)); if ( m2pfn != (gfn + i2) ) { pmbad++; - P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx -> gfn %#lx\n", - gfn + i2, mfn + i2, m2pfn); + P2M_PRINTK("mismatch: gfn %#lx -> mfn %"PRI_mfn" gfn %#lx\n", + gfn + i2, mfn_x(mfn_add(mfn, i2)), + m2pfn); BUG(); } gfn += 1 << (L3_PAGETABLE_SHIFT - PAGE_SHIFT); @@ -1019,17 +1021,18 @@ long p2m_pt_audit_p2m(struct p2m_domain *p2m) /* check for super page */ if ( l2e_get_flags(l2e[i2]) & _PAGE_PSE ) { - mfn = l2e_get_pfn(l2e[i2]); - ASSERT(mfn_valid(_mfn(mfn))); + mfn = l2e_get_mfn(l2e[i2]); + ASSERT(mfn_valid(mfn)); for ( i1 = 0; i1 < L1_PAGETABLE_ENTRIES; i1++) { - m2pfn = get_gpfn_from_mfn(mfn+i1); + m2pfn = get_pfn_from_mfn(mfn_add(mfn, i1)); /* Allow shared M2Ps */ if ( (m2pfn != (gfn + i1)) && !SHARED_M2P(m2pfn) ) { pmbad++; - P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx -> gfn %#lx\n", - gfn + i1, mfn + i1, m2pfn); + P2M_PRINTK("mismatch: gfn %#lx -> mfn %"PRI_mfn" -> gfn %#lx\n", + gfn + i1, mfn_x(mfn_add(mfn, i1)), + m2pfn); BUG(); } } @@ -1050,17 +1053,17 @@ long p2m_pt_audit_p2m(struct p2m_domain *p2m) entry_count++; continue; } - mfn = l1e_get_pfn(l1e[i1]); - ASSERT(mfn_valid(_mfn(mfn))); - m2pfn = get_gpfn_from_mfn(mfn); + mfn = l1e_get_mfn(l1e[i1]); + ASSERT(mfn_valid(mfn)); + m2pfn = get_pfn_from_mfn(mfn); if ( m2pfn != gfn && type != p2m_mmio_direct && !p2m_is_grant(type) && !p2m_is_shared(type) ) { pmbad++; - P2M_PRINTK("mismatch: gfn %#lx -> mfn %#lx -> gfn %#lx\n", - gfn, mfn, m2pfn); + P2M_PRINTK("mismatch: 
gfn %#lx -> mfn %"PRI_mfn" -> gfn %#lx\n", + gfn, mfn_x(mfn), m2pfn); BUG(); } } diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c index b6b01a71c8..587c062481 100644 --- a/xen/arch/x86/mm/p2m.c +++ b/xen/arch/x86/mm/p2m.c @@ -769,7 +769,7 @@ void p2m_final_teardown(struct domain *d) static int -p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn_l, unsigned long mfn, +p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn_l, mfn_t mfn, unsigned int page_order) { unsigned long i; @@ -783,17 +783,17 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn_l, unsigned long mfn, return 0; ASSERT(gfn_locked_by_me(p2m, gfn)); - P2M_DEBUG("removing gfn=%#lx mfn=%#lx\n", gfn_l, mfn); + P2M_DEBUG("removing gfn=%#lx mfn=%"PRI_mfn"\n", gfn_l, mfn_x(mfn)); - if ( mfn_valid(_mfn(mfn)) ) + if ( mfn_valid(mfn) ) { for ( i = 0; i < (1UL << page_order); i++ ) { mfn_return = p2m->get_entry(p2m, gfn_add(gfn, i), &t, &a, 0, NULL, NULL); if ( !p2m_is_grant(t) && !p2m_is_shared(t) && !p2m_is_foreign(t) ) - set_gpfn_from_mfn(mfn+i, INVALID_M2P_ENTRY); - ASSERT( !p2m_is_valid(t) || mfn + i == mfn_x(mfn_return) ); + set_pfn_from_mfn(mfn_add(mfn, i), INVALID_M2P_ENTRY); + ASSERT( !p2m_is_valid(t) || mfn_eq(mfn_add(mfn, i), mfn_return) ); } } return p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid, @@ -807,7 +807,7 @@ guest_physmap_remove_page(struct domain *d, gfn_t gfn, struct p2m_domain *p2m = p2m_get_hostp2m(d); int rc; gfn_lock(p2m, gfn, page_order); - rc = p2m_remove_page(p2m, gfn_x(gfn), mfn_x(mfn), page_order); + rc = p2m_remove_page(p2m, gfn_x(gfn), mfn, page_order); gfn_unlock(p2m, gfn, page_order); return rc; } @@ -842,7 +842,7 @@ guest_physmap_add_page(struct domain *d, gfn_t gfn, mfn_t mfn, else return -EINVAL; - set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_x(gfn) + i); + set_pfn_from_mfn(mfn_add(mfn, i), gfn_x(gfn) + i); } return 0; @@ -930,7 +930,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn, else if ( p2m_is_ram(ot) && !p2m_is_paged(ot) ) { ASSERT(mfn_valid(omfn)); - set_gpfn_from_mfn(mfn_x(omfn), INVALID_M2P_ENTRY); + set_pfn_from_mfn(omfn, INVALID_M2P_ENTRY); } else if ( ot == p2m_populate_on_demand ) { @@ -974,7 +974,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn, P2M_DEBUG("old gfn=%#lx -> mfn %#lx\n", gfn_x(ogfn) , mfn_x(omfn)); if ( mfn_eq(omfn, mfn_add(mfn, i)) ) - p2m_remove_page(p2m, gfn_x(ogfn), mfn_x(mfn_add(mfn, i)), + p2m_remove_page(p2m, gfn_x(ogfn), mfn_add(mfn, i), 0); } } @@ -992,8 +992,8 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn, if ( !p2m_is_grant(t) ) { for ( i = 0; i < (1UL << page_order); i++ ) - set_gpfn_from_mfn(mfn_x(mfn_add(mfn, i)), - gfn_x(gfn_add(gfn, i))); + set_pfn_from_mfn(mfn_add(mfn, i), + gfn_x(gfn_add(gfn, i))); } } @@ -1279,7 +1279,7 @@ static int set_typed_p2m_entry(struct domain *d, unsigned long gfn_l, for ( i = 0; i < (1UL << order); ++i ) { ASSERT(mfn_valid(mfn_add(omfn, i))); - set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY); + set_pfn_from_mfn(mfn_add(omfn, i), INVALID_M2P_ENTRY); } } @@ -1475,7 +1475,7 @@ int set_shared_p2m_entry(struct domain *d, unsigned long gfn_l, mfn_t mfn) pg_type = read_atomic(&(mfn_to_page(omfn)->u.inuse.type_info)); if ( (pg_type & PGT_count_mask) == 0 || (pg_type & PGT_type_mask) != PGT_shared_page ) - set_gpfn_from_mfn(mfn_x(omfn), INVALID_M2P_ENTRY); + set_pfn_from_mfn(omfn, INVALID_M2P_ENTRY); P2M_DEBUG("set shared %lx %lx\n", gfn_l, mfn_x(mfn)); rc = p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_ram_shared, @@ -1829,7 
+1829,7 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn_l, uint64_t buffer) ret = p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, paging_mode_log_dirty(d) ? p2m_ram_logdirty : p2m_ram_rw, a); - set_gpfn_from_mfn(mfn_x(mfn), gfn_l); + set_pfn_from_mfn(mfn, gfn_l); if ( !page_extant ) atomic_dec(&d->paged_pages); @@ -1880,7 +1880,7 @@ void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp) p2m_ram_rw, a); if ( !rc ) - set_gpfn_from_mfn(mfn_x(mfn), gfn_x(gfn)); + set_pfn_from_mfn(mfn, gfn_x(gfn)); } gfn_unlock(p2m, gfn, 0); } @@ -2706,7 +2706,7 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx, { mfn = ap2m->get_entry(ap2m, old_gfn, &t, &a, 0, NULL, NULL); if ( mfn_valid(mfn) ) - p2m_remove_page(ap2m, gfn_x(old_gfn), mfn_x(mfn), PAGE_ORDER_4K); + p2m_remove_page(ap2m, gfn_x(old_gfn), mfn, PAGE_ORDER_4K); rc = 0; goto out; } @@ -2820,8 +2820,8 @@ void audit_p2m(struct domain *d, { struct page_info *page; struct domain *od; - unsigned long mfn, gfn; - mfn_t p2mfn; + unsigned long gfn; + mfn_t p2mfn, mfn; unsigned long orphans_count = 0, mpbad = 0, pmbad = 0; p2m_access_t p2ma; p2m_type_t type; @@ -2843,53 +2843,53 @@ void audit_p2m(struct domain *d, spin_lock(&d->page_alloc_lock); page_list_for_each ( page, &d->page_list ) { - mfn = mfn_x(page_to_mfn(page)); + mfn = page_to_mfn(page); - P2M_PRINTK("auditing guest page, mfn=%#lx\n", mfn); + P2M_PRINTK("auditing guest page, mfn=%"PRI_mfn"\n", mfn_x(mfn)); od = page_get_owner(page); if ( od != d ) { - P2M_PRINTK("mfn %"PRI_mfn" owner %pd != %pd\n", mfn, od, d); + P2M_PRINTK("mfn %"PRI_mfn" owner %pd != %pd\n", mfn_x(mfn), od, d); continue; } - gfn = get_gpfn_from_mfn(mfn); + gfn = get_pfn_from_mfn(mfn); if ( gfn == INVALID_M2P_ENTRY ) { orphans_count++; - P2M_PRINTK("orphaned guest page: mfn=%#lx has invalid gfn\n", - mfn); + P2M_PRINTK("orphaned guest page: mfn=%"PRI_mfn" has invalid gfn\n", + mfn_x(mfn)); continue; } if ( SHARED_M2P(gfn) ) { - P2M_PRINTK("shared mfn (%lx) on domain page list!\n", - mfn); + P2M_PRINTK("shared mfn (%"PRI_mfn") on domain page list!\n", + mfn_x(mfn)); continue; } p2mfn = get_gfn_type_access(p2m, gfn, &type, &p2ma, 0, NULL); - if ( mfn_x(p2mfn) != mfn ) + if ( !mfn_eq(p2mfn, mfn) ) { mpbad++; - P2M_PRINTK("map mismatch mfn %#lx -> gfn %#lx -> mfn %#lx" + P2M_PRINTK("map mismatch mfn %"PRI_mfn" -> gfn %#lx -> mfn %"PRI_mfn"" " (-> gfn %#lx)\n", - mfn, gfn, mfn_x(p2mfn), + mfn_x(mfn), gfn, mfn_x(p2mfn), (mfn_valid(p2mfn) - ? get_gpfn_from_mfn(mfn_x(p2mfn)) + ? get_pfn_from_mfn(p2mfn) : -1u)); /* This m2p entry is stale: the domain has another frame in * this physical slot. No great disaster, but for neatness, * blow away the m2p entry. */ - set_gpfn_from_mfn(mfn, INVALID_M2P_ENTRY); + set_pfn_from_mfn(mfn, INVALID_M2P_ENTRY); } __put_gfn(p2m, gfn); - P2M_PRINTK("OK: mfn=%#lx, gfn=%#lx, p2mfn=%#lx\n", - mfn, gfn, mfn_x(p2mfn)); + P2M_PRINTK("OK: mfn=%"PRI_mfn", gfn=%#lx, p2mfn=%"PRI_mfn"\n", + mfn_x(mfn), gfn, mfn_x(p2mfn)); } spin_unlock(&d->page_alloc_lock); diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c index 469bb76429..2f6df74135 100644 --- a/xen/arch/x86/mm/paging.c +++ b/xen/arch/x86/mm/paging.c @@ -344,7 +344,7 @@ void paging_mark_dirty(struct domain *d, mfn_t gmfn) return; /* We /really/ mean PFN here, even for non-translated guests. 
*/ - pfn = _pfn(get_gpfn_from_mfn(mfn_x(gmfn))); + pfn = _pfn(get_pfn_from_mfn(gmfn)); paging_mark_pfn_dirty(d, pfn); } @@ -362,7 +362,7 @@ int paging_mfn_is_dirty(struct domain *d, mfn_t gmfn) ASSERT(paging_mode_log_dirty(d)); /* We /really/ mean PFN here, even for non-translated guests. */ - pfn = _pfn(get_gpfn_from_mfn(mfn_x(gmfn))); + pfn = _pfn(get_pfn_from_mfn(gmfn)); /* Invalid pages can't be dirty. */ if ( unlikely(!VALID_M2P(pfn_x(pfn))) ) return 0; diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c index 8abd5d255c..9f558b2932 100644 --- a/xen/arch/x86/pv/dom0_build.c +++ b/xen/arch/x86/pv/dom0_build.c @@ -39,7 +39,7 @@ void __init dom0_update_physmap(struct domain *d, unsigned long pfn, else ((unsigned int *)vphysmap_s)[pfn] = mfn; - set_gpfn_from_mfn(mfn, pfn); + set_pfn_from_mfn(_mfn(mfn), pfn); } static __init void mark_pv_pt_pages_rdonly(struct domain *d, @@ -789,8 +789,8 @@ int __init dom0_construct_pv(struct domain *d, page_list_for_each ( page, &d->page_list ) { mfn = mfn_x(page_to_mfn(page)); - BUG_ON(SHARED_M2P(get_gpfn_from_mfn(mfn))); - if ( get_gpfn_from_mfn(mfn) >= count ) + BUG_ON(SHARED_M2P(get_pfn_from_mfn(_mfn(mfn)))); + if ( get_pfn_from_mfn(_mfn(mfn)) >= count ) { BUG_ON(is_pv_32bit_domain(d)); if ( !page->u.inuse.type_info && diff --git a/xen/arch/x86/x86_64/traps.c b/xen/arch/x86/x86_64/traps.c index 811c2cb37b..bf5c2060e7 100644 --- a/xen/arch/x86/x86_64/traps.c +++ b/xen/arch/x86/x86_64/traps.c @@ -200,7 +200,7 @@ void show_page_walk(unsigned long addr) unmap_domain_page(l4t); mfn = l4e_get_mfn(l4e); pfn = mfn_valid(mfn) && machine_to_phys_mapping_valid ? - get_gpfn_from_mfn(mfn_x(mfn)) : INVALID_M2P_ENTRY; + get_pfn_from_mfn(mfn) : INVALID_M2P_ENTRY; printk(" L4[0x%03lx] = %"PRIpte" %016lx\n", l4_table_offset(addr), l4e_get_intpte(l4e), pfn); if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) || !mfn_valid(mfn) ) @@ -211,7 +211,7 @@ void show_page_walk(unsigned long addr) unmap_domain_page(l3t); mfn = l3e_get_mfn(l3e); pfn = mfn_valid(mfn) && machine_to_phys_mapping_valid ? - get_gpfn_from_mfn(mfn_x(mfn)) : INVALID_M2P_ENTRY; + get_pfn_from_mfn(mfn) : INVALID_M2P_ENTRY; printk(" L3[0x%03lx] = %"PRIpte" %016lx%s\n", l3_table_offset(addr), l3e_get_intpte(l3e), pfn, (l3e_get_flags(l3e) & _PAGE_PSE) ? " (PSE)" : ""); @@ -225,7 +225,7 @@ void show_page_walk(unsigned long addr) unmap_domain_page(l2t); mfn = l2e_get_mfn(l2e); pfn = mfn_valid(mfn) && machine_to_phys_mapping_valid ? - get_gpfn_from_mfn(mfn_x(mfn)) : INVALID_M2P_ENTRY; + get_pfn_from_mfn(mfn) : INVALID_M2P_ENTRY; printk(" L2[0x%03lx] = %"PRIpte" %016lx%s\n", l2_table_offset(addr), l2e_get_intpte(l2e), pfn, (l2e_get_flags(l2e) & _PAGE_PSE) ? " (PSE)" : ""); @@ -239,7 +239,7 @@ void show_page_walk(unsigned long addr) unmap_domain_page(l1t); mfn = l1e_get_mfn(l1e); pfn = mfn_valid(mfn) && machine_to_phys_mapping_valid ? - get_gpfn_from_mfn(mfn_x(mfn)) : INVALID_M2P_ENTRY; + get_pfn_from_mfn(mfn) : INVALID_M2P_ENTRY; printk(" L1[0x%03lx] = %"PRIpte" %016lx\n", l1_table_offset(addr), l1e_get_intpte(l1e), pfn); } diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index 41e4fa899d..239aac18dd 100644 --- a/xen/common/page_alloc.c +++ b/xen/common/page_alloc.c @@ -1430,7 +1430,7 @@ static void free_heap_pages( /* This page is not a guest frame any more. 
*/ page_set_owner(&pg[i], NULL); /* set_gpfn_from_mfn snoops pg owner */ - set_gpfn_from_mfn(mfn_x(mfn) + i, INVALID_M2P_ENTRY); + set_pfn_from_mfn(mfn_add(mfn, i), INVALID_M2P_ENTRY); if ( need_scrub ) { diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h index abf4cc23e4..11614f9107 100644 --- a/xen/include/asm-arm/mm.h +++ b/xen/include/asm-arm/mm.h @@ -319,7 +319,7 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va, #define SHARED_M2P(_e) ((_e) == SHARED_M2P_ENTRY) /* Xen always owns P2M on ARM */ -#define set_gpfn_from_mfn(mfn, pfn) do { (void) (mfn), (void)(pfn); } while (0) +static inline void set_pfn_from_mfn(mfn_t mfn, unsigned long pfn) {} #define mfn_to_gmfn(_d, mfn) (mfn) diff --git a/xen/include/asm-x86/grant_table.h b/xen/include/asm-x86/grant_table.h index 5871238f6d..b6a09c4c6c 100644 --- a/xen/include/asm-x86/grant_table.h +++ b/xen/include/asm-x86/grant_table.h @@ -41,7 +41,7 @@ static inline int replace_grant_host_mapping(uint64_t addr, mfn_t frame, #define gnttab_get_frame_gfn(gt, st, idx) ({ \ mfn_t mfn_ = (st) ? gnttab_status_mfn(gt, idx) \ : gnttab_shared_mfn(gt, idx); \ - unsigned long gpfn_ = get_gpfn_from_mfn(mfn_x(mfn_)); \ + unsigned long gpfn_ = get_pfn_from_mfn(mfn_); \ VALID_M2P(gpfn_) ? _gfn(gpfn_) : INVALID_GFN; \ }) diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h index 53f2ed7c7d..2a4f42e78f 100644 --- a/xen/include/asm-x86/mm.h +++ b/xen/include/asm-x86/mm.h @@ -500,9 +500,10 @@ extern paddr_t mem_hotplug; */ extern bool machine_to_phys_mapping_valid; -static inline void set_gpfn_from_mfn(unsigned long mfn, unsigned long pfn) +static inline void set_pfn_from_mfn(mfn_t mfn_, unsigned long pfn) { - const struct domain *d = page_get_owner(mfn_to_page(_mfn(mfn))); + const unsigned long mfn = mfn_x(mfn_); + const struct domain *d = page_get_owner(mfn_to_page(mfn_)); unsigned long entry = (d && (d == dom_cow)) ? SHARED_M2P_ENTRY : pfn; if ( !machine_to_phys_mapping_valid ) @@ -515,11 +516,14 @@ static inline void set_gpfn_from_mfn(unsigned long mfn, unsigned long pfn) extern struct rangeset *mmio_ro_ranges; -#define get_gpfn_from_mfn(mfn) (machine_to_phys_mapping[(mfn)]) +static inline unsigned long get_pfn_from_mfn(mfn_t mfn) +{ + return machine_to_phys_mapping[mfn_x(mfn)]; +} #define mfn_to_gmfn(_d, mfn) \ ( (paging_mode_translate(_d)) \ - ? get_gpfn_from_mfn(mfn) \ + ? 
get_pfn_from_mfn(_mfn(mfn)) \ : (mfn) ) #define compat_pfn_to_cr3(pfn) (((unsigned)(pfn) << 12) | ((unsigned)(pfn) >> 20)) diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h index a2c6049834..39dae242b0 100644 --- a/xen/include/asm-x86/p2m.h +++ b/xen/include/asm-x86/p2m.h @@ -505,7 +505,7 @@ static inline struct page_info *get_page_from_gfn( static inline gfn_t mfn_to_gfn(const struct domain *d, mfn_t mfn) { if ( paging_mode_translate(d) ) - return _gfn(get_gpfn_from_mfn(mfn_x(mfn))); + return _gfn(get_pfn_from_mfn(mfn)); else return _gfn(mfn_x(mfn)); } From patchwork Sun Mar 22 16:14:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 11451889 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id F1B4417EF for ; Sun, 22 Mar 2020 16:16:17 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id C9EED2072E for ; Sun, 22 Mar 2020 16:16:17 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C9EED2072E Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=xen.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3Fi-00053V-Qp; Sun, 22 Mar 2020 16:15:06 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.89) (envelope-from ) id 1jG3Fh-00051y-Gl for xen-devel@lists.xenproject.org; Sun, 22 Mar 2020 16:15:05 +0000 X-Inumbo-ID: 3da33d0e-6c58-11ea-bec1-bc764e2007e4 Received: from mail-ed1-f66.google.com (unknown [209.85.208.66]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 3da33d0e-6c58-11ea-bec1-bc764e2007e4; Sun, 22 Mar 2020 16:14:44 +0000 (UTC) Received: by mail-ed1-f66.google.com with SMTP id w26so7148658edu.7 for ; Sun, 22 Mar 2020 09:14:43 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=VngiJqA4TkOWd+cn3jQ19eMxtcbS39mHRtbW5NY77TQ=; b=CBTxP7cfkA5vjI5DTGXMCoZG8fUzvlZUV5qOm+RgMqVKwvqTWmfUgqfHG6aQcioZJO 01MxTLtCIJoMkSNawAGEq5GjvEZFnUqWRzklbqwHdz8ttW1RDKgHRdIu7T01n/dXRLds GNe+8bgE7i+cLvOc/JNS2kt2+rZ+L/USD+c1u7yOIvfLZEi+1fGNPMEVW9+3zZokUj9O zcTFALaXQ0Vf4ocx3Zvsk12Sm50P6j7Xo/v7IjAEnyPBzj8RckGmIXxjjCh8h4OQNbxf dzXTy7MDLK+OBq4naTro5BPDAT5zEUTBZ1FeKcZkhoW3hEFQMXZLYmfxTRi1M78/ckHa ivKA== X-Gm-Message-State: ANhLgQ3Jsuc7Lke3RI8PQm46ev+Egt28QiNMHd1nB/tN6v6GBsx9rfrN l1qFIlL0OYLbcbRoYLMEdSERJhkv7xr7uQ== X-Google-Smtp-Source: ADFU+vu0RCOUkPAG1LZcBxX3raqRlP4Hn2Dv7hIjR4k0+Iy+kCTkNRiueMPr9HQKcAg1NdcPR7+SOg== X-Received: by 2002:a17:906:7f07:: with SMTP id d7mr15669902ejr.54.1584893682125; Sun, 22 Mar 2020 09:14:42 -0700 (PDT) Received: from ufe34d9ed68d054.ant.amazon.com (54-240-197-235.amazon.com. 
[54.240.197.235]) by smtp.gmail.com with ESMTPSA id v13sm106693edj.62.2020.03.22.09.14.40 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 22 Mar 2020 09:14:41 -0700 (PDT) From: julien@xen.org To: xen-devel@lists.xenproject.org Date: Sun, 22 Mar 2020 16:14:18 +0000 Message-Id: <20200322161418.31606-18-julien@xen.org> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200322161418.31606-1-julien@xen.org> References: <20200322161418.31606-1-julien@xen.org> Subject: [Xen-devel] [PATCH 17/17] xen: Switch parameter in get_page_from_gfn to use typesafe gfn X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Kevin Tian , Stefano Stabellini , julien@xen.org, Jun Nakajima , Wei Liu , Paul Durrant , Andrew Cooper , Ian Jackson , George Dunlap , Tim Deegan , Julien Grall , Jan Beulich , Volodymyr Babchuk , =?utf-8?q?Roger_Pau_Monn?= =?utf-8?q?=C3=A9?= MIME-Version: 1.0 Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" From: Julien Grall No functional change intended. Only reasonable clean-ups are done in this patch. The rest will use _gfn for the time being. Signed-off-by: Julien Grall Reviewed-by: Paul Durrant --- get_page_from_gfn() is currently using an unsafe pattern as an MFN should be validated via mfn_valid() before using mfn_to_page(). At Jan's request, this was dropped for this patch as this was unrelated. If we want to fix it properly, then it should be done in a separate patch along with them modifications of all the other callers using this bad behavior.. This was originally sent as part of "More typesafe conversion of common interface." [1]. Changes since the original patch: - Use cr3_to_gfn() - Remove the re-ordering of mfn_valid() and mfn_to_page() (see above). [1] <20190819142651.11058-1-julien.grall@arm.com> --- xen/arch/arm/guestcopy.c | 2 +- xen/arch/arm/mm.c | 2 +- xen/arch/x86/cpu/vpmu.c | 2 +- xen/arch/x86/domctl.c | 6 +++--- xen/arch/x86/hvm/dm.c | 2 +- xen/arch/x86/hvm/domain.c | 6 ++++-- xen/arch/x86/hvm/hvm.c | 9 +++++---- xen/arch/x86/hvm/svm/svm.c | 8 ++++---- xen/arch/x86/hvm/viridian/viridian.c | 16 ++++++++-------- xen/arch/x86/hvm/vmx/vmx.c | 4 ++-- xen/arch/x86/hvm/vmx/vvmx.c | 12 ++++++------ xen/arch/x86/mm.c | 24 ++++++++++++++---------- xen/arch/x86/mm/p2m.c | 2 +- xen/arch/x86/mm/shadow/hvm.c | 6 +++--- xen/arch/x86/physdev.c | 3 ++- xen/arch/x86/pv/descriptor-tables.c | 4 ++-- xen/arch/x86/pv/emul-priv-op.c | 6 +++--- xen/arch/x86/pv/mm.c | 2 +- xen/arch/x86/traps.c | 11 ++++++----- xen/common/domain.c | 2 +- xen/common/event_fifo.c | 12 ++++++------ xen/common/memory.c | 4 ++-- xen/include/asm-arm/p2m.h | 6 +++--- xen/include/asm-x86/p2m.h | 12 ++++++++---- 24 files changed, 88 insertions(+), 75 deletions(-) diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c index 7a0f3e9d5f..55892062bb 100644 --- a/xen/arch/arm/guestcopy.c +++ b/xen/arch/arm/guestcopy.c @@ -37,7 +37,7 @@ static struct page_info *translate_get_page(copy_info_t info, uint64_t addr, return get_page_from_gva(info.gva.v, addr, write ? 
GV2M_WRITE : GV2M_READ);
-    page = get_page_from_gfn(info.gpa.d, paddr_to_pfn(addr), &p2mt, P2M_ALLOC);
+    page = get_page_from_gfn(info.gpa.d, gaddr_to_gfn(addr), &p2mt, P2M_ALLOC);
     if ( !page )
         return NULL;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 1075e5fcaf..d0ad06add4 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1446,7 +1446,7 @@ int xenmem_add_to_physmap_one(
         /* Take reference to the foreign domain page.
          * Reference will be released in XENMEM_remove_from_physmap */
-        page = get_page_from_gfn(od, idx, &p2mt, P2M_ALLOC);
+        page = get_page_from_gfn(od, _gfn(idx), &p2mt, P2M_ALLOC);
         if ( !page )
         {
             put_pg_owner(od);
diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index e50d478d23..9777efa4fb 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -617,7 +617,7 @@ static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
     struct vcpu *v;
     struct vpmu_struct *vpmu;
     struct page_info *page;
-    uint64_t gfn = params->val;
+    gfn_t gfn = _gfn(params->val);
     if ( (params->vcpu >= d->max_vcpus) || (d->vcpu[params->vcpu] == NULL) )
         return -EINVAL;
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 02596c3810..8f5010fd58 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -391,7 +391,7 @@ long arch_do_domctl(
                 break;
             }
-            page = get_page_from_gfn(d, gfn, &t, P2M_ALLOC);
+            page = get_page_from_gfn(d, _gfn(gfn), &t, P2M_ALLOC);
             if ( unlikely(!page) ||
                  unlikely(is_xen_heap_page(page)) )
@@ -461,11 +461,11 @@ long arch_do_domctl(
     case XEN_DOMCTL_hypercall_init:
     {
-        unsigned long gmfn = domctl->u.hypercall_init.gmfn;
+        gfn_t gfn = _gfn(domctl->u.hypercall_init.gmfn);
         struct page_info *page;
         void *hypercall_page;
-        page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
+        page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
         if ( !page || !get_page_type(page, PGT_writable_page) )
         {
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 96c5042b75..a09622007c 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -188,7 +188,7 @@ static int modified_memory(struct domain *d,
         {
             struct page_info *page;
-            page = get_page_from_gfn(d, pfn, NULL, P2M_UNSHARE);
+            page = get_page_from_gfn(d, _gfn(pfn), NULL, P2M_UNSHARE);
             if ( page )
             {
                 paging_mark_pfn_dirty(d, _pfn(pfn));
diff --git a/xen/arch/x86/hvm/domain.c b/xen/arch/x86/hvm/domain.c
index 5d5a746a25..3c29ff86be 100644
--- a/xen/arch/x86/hvm/domain.c
+++ b/xen/arch/x86/hvm/domain.c
@@ -296,8 +296,10 @@ int arch_set_info_hvm_guest(struct vcpu *v, const vcpu_hvm_context_t *ctx)
     if ( hvm_paging_enabled(v) && !paging_mode_hap(v->domain) )
     {
         /* Shadow-mode CR3 change. Check PDBR and update refcounts. */
-        struct page_info *page = get_page_from_gfn(v->domain,
-                                 v->arch.hvm.guest_cr[3] >> PAGE_SHIFT,
+        struct page_info *page;
+
+        page = get_page_from_gfn(v->domain,
+                                 gaddr_to_gfn(v->arch.hvm.guest_cr[3]),
                                  NULL, P2M_ALLOC);
         if ( !page )
         {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index a3d115b650..9f720e7aa1 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2216,7 +2216,7 @@ int hvm_set_cr0(unsigned long value, bool may_defer)
 {
     struct vcpu *v = current;
     struct domain *d = v->domain;
-    unsigned long gfn, old_value = v->arch.hvm.guest_cr[0];
+    unsigned long old_value = v->arch.hvm.guest_cr[0];
     struct page_info *page;
     HVM_DBG_LOG(DBG_LEVEL_VMMU, "Update CR0 value = %lx", value);
@@ -2271,7 +2271,8 @@ int hvm_set_cr0(unsigned long value, bool may_defer)
         if ( !paging_mode_hap(d) )
         {
             /* The guest CR3 must be pointing to the guest physical. */
-            gfn = v->arch.hvm.guest_cr[3] >> PAGE_SHIFT;
+            gfn_t gfn = gaddr_to_gfn(v->arch.hvm.guest_cr[3]);
+
             page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
             if ( !page )
             {
@@ -2363,7 +2364,7 @@ int hvm_set_cr3(unsigned long value, bool may_defer)
     {
         /* Shadow-mode CR3 change. Check PDBR and update refcounts. */
         HVM_DBG_LOG(DBG_LEVEL_VMMU, "CR3 value = %lx", value);
-        page = get_page_from_gfn(v->domain, value >> PAGE_SHIFT,
+        page = get_page_from_gfn(v->domain, cr3_to_gfn(value),
                                  NULL, P2M_ALLOC);
         if ( !page )
             goto bad_cr3;
@@ -3191,7 +3192,7 @@ enum hvm_translation_result hvm_translate_get_page(
          && hvm_mmio_internal(gfn_to_gaddr(gfn)) )
         return HVMTRANS_bad_gfn_to_mfn;
-    page = get_page_from_gfn(v->domain, gfn_x(gfn), &p2mt, P2M_UNSHARE);
+    page = get_page_from_gfn(v->domain, gfn, &p2mt, P2M_UNSHARE);
     if ( !page )
         return HVMTRANS_bad_gfn_to_mfn;
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 32d8d847f2..a9abd6d3f1 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -299,7 +299,7 @@ static int svm_vmcb_restore(struct vcpu *v, struct hvm_hw_cpu *c)
     {
         if ( c->cr0 & X86_CR0_PG )
         {
-            page = get_page_from_gfn(v->domain, c->cr3 >> PAGE_SHIFT,
+            page = get_page_from_gfn(v->domain, cr3_to_gfn(c->cr3),
                                      NULL, P2M_ALLOC);
             if ( !page )
             {
@@ -2230,9 +2230,9 @@ nsvm_get_nvmcb_page(struct vcpu *v, uint64_t vmcbaddr)
         return NULL;
     /* Need to translate L1-GPA to MPA */
-    page = get_page_from_gfn(v->domain,
-                             nv->nv_vvmcxaddr >> PAGE_SHIFT,
-                             &p2mt, P2M_ALLOC | P2M_UNSHARE);
+    page = get_page_from_gfn(v->domain,
+                             gaddr_to_gfn(nv->nv_vvmcxaddr),
+                             &p2mt, P2M_ALLOC | P2M_UNSHARE);
     if ( !page )
         return NULL;
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 977c1bc54f..3d75a0f133 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -242,16 +242,16 @@ static void dump_hypercall(const struct domain *d)
 static void enable_hypercall_page(struct domain *d)
 {
-    unsigned long gmfn = d->arch.hvm.viridian->hypercall_gpa.pfn;
-    struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
+    gfn_t gfn = _gfn(d->arch.hvm.viridian->hypercall_gpa.pfn);
+    struct page_info *page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
     uint8_t *p;
     if ( !page || !get_page_type(page, PGT_writable_page) )
     {
         if ( page )
             put_page(page);
-        gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
-                 gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
+        gdprintk(XENLOG_WARNING, "Bad GFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
+                 gfn_x(gfn), mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
         return;
     }
@@ -719,13 +719,13 @@ void viridian_dump_guest_page(const struct vcpu *v, const char *name,
 void viridian_map_guest_page(struct domain *d, struct viridian_page *vp)
 {
-    unsigned long gmfn = vp->msr.pfn;
+    gfn_t gfn = _gfn(vp->msr.pfn);
     struct page_info *page;
     if ( vp->ptr )
         return;
-    page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
+    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
     if ( !page )
         goto fail;
@@ -746,8 +746,8 @@ void viridian_map_guest_page(struct domain *d, struct viridian_page *vp)
     return;
  fail:
-    gdprintk(XENLOG_WARNING, "Bad GMFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
-             gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
+    gdprintk(XENLOG_WARNING, "Bad GFN %#"PRI_gfn" (MFN %#"PRI_mfn")\n",
+             gfn_x(gfn), mfn_x(page ? page_to_mfn(page) : INVALID_MFN));
 }
 void viridian_unmap_guest_page(struct viridian_page *vp)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index a1e3a19c0a..f1898c63c5 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -681,7 +681,7 @@ static int vmx_restore_cr0_cr3(
 {
     if ( cr0 & X86_CR0_PG )
     {
-        page = get_page_from_gfn(v->domain, cr3 >> PAGE_SHIFT,
+        page = get_page_from_gfn(v->domain, gaddr_to_gfn(cr3),
                                  NULL, P2M_ALLOC);
         if ( !page )
         {
@@ -1321,7 +1321,7 @@ static void vmx_load_pdptrs(struct vcpu *v)
     if ( (cr3 & 0x1fUL) && !hvm_pcid_enabled(v) )
         goto crash;
-    page = get_page_from_gfn(v->domain, cr3 >> PAGE_SHIFT, &p2mt, P2M_UNSHARE);
+    page = get_page_from_gfn(v->domain, gaddr_to_gfn(cr3), &p2mt, P2M_UNSHARE);
     if ( !page )
     {
         /* Ideally you don't want to crash but rather go into a wait
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 84b47ef277..eee4af3206 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -718,11 +718,11 @@ static void nvmx_update_apic_access_address(struct vcpu *v)
     if ( ctrl & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES )
     {
         p2m_type_t p2mt;
-        unsigned long apic_gpfn;
+        gfn_t apic_gfn;
         struct page_info *apic_pg;
-        apic_gpfn = get_vvmcs(v, APIC_ACCESS_ADDR) >> PAGE_SHIFT;
-        apic_pg = get_page_from_gfn(v->domain, apic_gpfn, &p2mt, P2M_ALLOC);
+        apic_gfn = gaddr_to_gfn(get_vvmcs(v, APIC_ACCESS_ADDR));
+        apic_pg = get_page_from_gfn(v->domain, apic_gfn, &p2mt, P2M_ALLOC);
         ASSERT(apic_pg && !p2m_is_paging(p2mt));
         __vmwrite(APIC_ACCESS_ADDR, page_to_maddr(apic_pg));
         put_page(apic_pg);
@@ -739,11 +739,11 @@ static void nvmx_update_virtual_apic_address(struct vcpu *v)
     if ( ctrl & CPU_BASED_TPR_SHADOW )
     {
         p2m_type_t p2mt;
-        unsigned long vapic_gpfn;
+        gfn_t vapic_gfn;
         struct page_info *vapic_pg;
-        vapic_gpfn = get_vvmcs(v, VIRTUAL_APIC_PAGE_ADDR) >> PAGE_SHIFT;
-        vapic_pg = get_page_from_gfn(v->domain, vapic_gpfn, &p2mt, P2M_ALLOC);
+        vapic_gfn = gaddr_to_gfn(get_vvmcs(v, VIRTUAL_APIC_PAGE_ADDR));
+        vapic_pg = get_page_from_gfn(v->domain, vapic_gfn, &p2mt, P2M_ALLOC);
         ASSERT(vapic_pg && !p2m_is_paging(p2mt));
         __vmwrite(VIRTUAL_APIC_PAGE_ADDR, page_to_maddr(vapic_pg));
         put_page(vapic_pg);
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 2feb7a5993..b9a656643b 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2150,7 +2150,7 @@ static int mod_l1_entry(l1_pgentry_t *pl1e, l1_pgentry_t nl1e,
             p2m_query_t q = l1e_get_flags(nl1e) & _PAGE_RW ?
                             P2M_ALLOC | P2M_UNSHARE : P2M_ALLOC;
-            page = get_page_from_gfn(pg_dom, l1e_get_pfn(nl1e), &p2mt, q);
+            page = get_page_from_gfn(pg_dom, _gfn(l1e_get_pfn(nl1e)), &p2mt, q);
             if ( p2m_is_paged(p2mt) )
             {
@@ -3433,7 +3433,8 @@ long do_mmuext_op(
             if ( paging_mode_refcounts(pg_owner) )
                 break;
-            page = get_page_from_gfn(pg_owner, op.arg1.mfn, NULL, P2M_ALLOC);
+            page = get_page_from_gfn(pg_owner, _gfn(op.arg1.mfn), NULL,
+                                     P2M_ALLOC);
             if ( unlikely(!page) )
             {
                 rc = -EINVAL;
@@ -3499,7 +3500,8 @@ long do_mmuext_op(
             if ( paging_mode_refcounts(pg_owner) )
                 break;
-            page = get_page_from_gfn(pg_owner, op.arg1.mfn, NULL, P2M_ALLOC);
+            page = get_page_from_gfn(pg_owner, _gfn(op.arg1.mfn), NULL,
+                                     P2M_ALLOC);
             if ( unlikely(!page) )
             {
                 gdprintk(XENLOG_WARNING,
@@ -3724,7 +3726,8 @@ long do_mmuext_op(
         }
         case MMUEXT_CLEAR_PAGE:
-            page = get_page_from_gfn(pg_owner, op.arg1.mfn, &p2mt, P2M_ALLOC);
+            page = get_page_from_gfn(pg_owner, _gfn(op.arg1.mfn), &p2mt,
+                                     P2M_ALLOC);
             if ( unlikely(p2mt != p2m_ram_rw) && page )
             {
                 put_page(page);
@@ -3752,7 +3755,7 @@ long do_mmuext_op(
         {
             struct page_info *src_page, *dst_page;
-            src_page = get_page_from_gfn(pg_owner, op.arg2.src_mfn, &p2mt,
+            src_page = get_page_from_gfn(pg_owner, _gfn(op.arg2.src_mfn), &p2mt,
                                          P2M_ALLOC);
             if ( unlikely(p2mt != p2m_ram_rw) && src_page )
             {
@@ -3768,7 +3771,7 @@ long do_mmuext_op(
                 break;
             }
-            dst_page = get_page_from_gfn(pg_owner, op.arg1.mfn, &p2mt,
+            dst_page = get_page_from_gfn(pg_owner, _gfn(op.arg1.mfn), &p2mt,
                                          P2M_ALLOC);
             if ( unlikely(p2mt != p2m_ram_rw) && dst_page )
             {
@@ -3856,7 +3859,8 @@ long do_mmu_update(
 {
     struct mmu_update req;
     void *va = NULL;
-    unsigned long gpfn, gmfn;
+    unsigned long gpfn;
+    gfn_t gfn;
     struct page_info *page;
     unsigned int cmd, i = 0, done = 0, pt_dom;
     struct vcpu *curr = current, *v = curr;
@@ -3969,8 +3973,8 @@ long do_mmu_update(
             rc = -EINVAL;
             req.ptr -= cmd;
-            gmfn = req.ptr >> PAGE_SHIFT;
-            page = get_page_from_gfn(pt_owner, gmfn, &p2mt, P2M_ALLOC);
+            gfn = gaddr_to_gfn(req.ptr);
+            page = get_page_from_gfn(pt_owner, gfn, &p2mt, P2M_ALLOC);
             if ( unlikely(!page) || p2mt != p2m_ram_rw )
             {
@@ -3978,7 +3982,7 @@ long do_mmu_update(
                     put_page(page);
                 if ( p2m_is_paged(p2mt) )
                 {
-                    p2m_mem_paging_populate(pt_owner, gmfn);
+                    p2m_mem_paging_populate(pt_owner, gfn_x(gfn));
                     rc = -ENOENT;
                 }
                 else
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 587c062481..1ce012600c 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2967,7 +2967,7 @@ int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
      * Take a refcnt on the mfn. NB: following supported for foreign mapping:
      * ram_rw | ram_logdirty | ram_ro | paging_out.
      */
-    page = get_page_from_gfn(fdom, fgfn, &p2mt, P2M_ALLOC);
+    page = get_page_from_gfn(fdom, _gfn(fgfn), &p2mt, P2M_ALLOC);
     if ( !page ||
          !p2m_is_ram(p2mt) || p2m_is_shared(p2mt) || p2m_is_hole(p2mt) )
     {
diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index 1e6024c71f..bb11f28531 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -398,15 +398,15 @@ void shadow_continue_emulation(struct sh_emulate_ctxt *sh_ctxt,
 static mfn_t emulate_gva_to_mfn(struct vcpu *v, unsigned long vaddr,
                                 struct sh_emulate_ctxt *sh_ctxt)
 {
-    unsigned long gfn;
+    gfn_t gfn;
     struct page_info *page;
     mfn_t mfn;
     p2m_type_t p2mt;
     uint32_t pfec = PFEC_page_present | PFEC_write_access;
     /* Translate the VA to a GFN. */
-    gfn = paging_get_hostmode(v)->gva_to_gfn(v, NULL, vaddr, &pfec);
-    if ( gfn == gfn_x(INVALID_GFN) )
+    gfn = _gfn(paging_get_hostmode(v)->gva_to_gfn(v, NULL, vaddr, &pfec));
+    if ( gfn_eq(gfn, INVALID_GFN) )
     {
         x86_emul_pagefault(pfec, vaddr, &sh_ctxt->ctxt);
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index 3a3c15890b..4f3f438614 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -229,7 +229,8 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             break;
         ret = -EINVAL;
-        page = get_page_from_gfn(current->domain, info.gmfn, NULL, P2M_ALLOC);
+        page = get_page_from_gfn(current->domain, _gfn(info.gmfn),
+                                 NULL, P2M_ALLOC);
         if ( !page )
             break;
         if ( !get_page_type(page, PGT_writable_page) )
diff --git a/xen/arch/x86/pv/descriptor-tables.c b/xen/arch/x86/pv/descriptor-tables.c
index f22beb1f3c..899ed45c6a 100644
--- a/xen/arch/x86/pv/descriptor-tables.c
+++ b/xen/arch/x86/pv/descriptor-tables.c
@@ -112,7 +112,7 @@ long pv_set_gdt(struct vcpu *v, unsigned long *frames, unsigned int entries)
     {
         struct page_info *page;
-        page = get_page_from_gfn(d, frames[i], NULL, P2M_ALLOC);
+        page = get_page_from_gfn(d, _gfn(frames[i]), NULL, P2M_ALLOC);
         if ( !page )
             goto fail;
         if ( !get_page_type(page, PGT_seg_desc_page) )
@@ -219,7 +219,7 @@ long do_update_descriptor(uint64_t gaddr, seg_desc_t d)
     if ( !IS_ALIGNED(gaddr, sizeof(d)) || !check_descriptor(currd, &d) )
         return -EINVAL;
-    page = get_page_from_gfn(currd, gfn_x(gfn), NULL, P2M_ALLOC);
+    page = get_page_from_gfn(currd, gfn, NULL, P2M_ALLOC);
     if ( !page )
         return -EINVAL;
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index e24b84f46a..552b669623 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -756,12 +756,12 @@ static int write_cr(unsigned int reg, unsigned long val,
     case 3: /* Write CR3 */
     {
         struct domain *currd = curr->domain;
-        unsigned long gfn;
+        gfn_t gfn;
         struct page_info *page;
         int rc;
-        gfn = !is_pv_32bit_domain(currd)
-            ? xen_cr3_to_pfn(val) : compat_cr3_to_pfn(val);
+        gfn = _gfn(!is_pv_32bit_domain(currd)
+                   ? xen_cr3_to_pfn(val) : compat_cr3_to_pfn(val));
         page = get_page_from_gfn(currd, gfn, NULL, P2M_ALLOC);
         if ( !page )
             break;
diff --git a/xen/arch/x86/pv/mm.c b/xen/arch/x86/pv/mm.c
index 2b0dadc8da..00df5edd6f 100644
--- a/xen/arch/x86/pv/mm.c
+++ b/xen/arch/x86/pv/mm.c
@@ -110,7 +110,7 @@ bool pv_map_ldt_shadow_page(unsigned int offset)
     if ( unlikely(!(l1e_get_flags(gl1e) & _PAGE_PRESENT)) )
         return false;
-    page = get_page_from_gfn(currd, l1e_get_pfn(gl1e), NULL, P2M_ALLOC);
+    page = get_page_from_gfn(currd, _gfn(l1e_get_pfn(gl1e)), NULL, P2M_ALLOC);
     if ( unlikely(!page) )
         return false;
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 4f524dc71e..e5de86845f 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -826,7 +826,7 @@ int guest_wrmsr_xen(struct vcpu *v, uint32_t idx, uint64_t val)
     case 0: /* Write hypercall page */
    {
         void *hypercall_page;
-        unsigned long gmfn = val >> PAGE_SHIFT;
+        gfn_t gfn = gaddr_to_gfn(val);
         unsigned int page_index = val & (PAGE_SIZE - 1);
         struct page_info *page;
         p2m_type_t t;
@@ -839,7 +839,7 @@ int guest_wrmsr_xen(struct vcpu *v, uint32_t idx, uint64_t val)
             return X86EMUL_EXCEPTION;
         }
-        page = get_page_from_gfn(d, gmfn, &t, P2M_ALLOC);
+        page = get_page_from_gfn(d, gfn, &t, P2M_ALLOC);
         if ( !page || !get_page_type(page, PGT_writable_page) )
         {
@@ -848,13 +848,14 @@ int guest_wrmsr_xen(struct vcpu *v, uint32_t idx, uint64_t val)
             if ( p2m_is_paging(t) )
             {
-                p2m_mem_paging_populate(d, gmfn);
+                p2m_mem_paging_populate(d, gfn_x(gfn));
                 return X86EMUL_RETRY;
             }
             gdprintk(XENLOG_WARNING,
-                     "Bad GMFN %lx (MFN %#"PRI_mfn") to MSR %08x\n",
-                     gmfn, mfn_x(page ? page_to_mfn(page) : INVALID_MFN), base);
+                     "Bad GFN %"PRI_gfn" (MFN %"PRI_mfn") to MSR %08x\n",
+                     gfn_x(gfn), mfn_x(page ? page_to_mfn(page) : INVALID_MFN),
+                     base);
             return X86EMUL_EXCEPTION;
         }
diff --git a/xen/common/domain.c b/xen/common/domain.c
index b4eb476a9c..8435528383 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1237,7 +1237,7 @@ int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset)
     if ( (v != current) && !(v->pause_flags & VPF_down) )
         return -EINVAL;
-    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
+    page = get_page_from_gfn(d, _gfn(gfn), NULL, P2M_ALLOC);
     if ( !page )
         return -EINVAL;
diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
index 230f440f14..073981ab43 100644
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -361,7 +361,7 @@ static const struct evtchn_port_ops evtchn_port_ops_fifo =
     .print_state = evtchn_fifo_print_state,
 };
-static int map_guest_page(struct domain *d, uint64_t gfn, void **virt)
+static int map_guest_page(struct domain *d, gfn_t gfn, void **virt)
 {
     struct page_info *p;
@@ -422,7 +422,7 @@ static int setup_control_block(struct vcpu *v)
     return 0;
 }
-static int map_control_block(struct vcpu *v, uint64_t gfn, uint32_t offset)
+static int map_control_block(struct vcpu *v, gfn_t gfn, uint32_t offset)
 {
     void *virt;
     unsigned int i;
@@ -508,7 +508,7 @@ int evtchn_fifo_init_control(struct evtchn_init_control *init_control)
 {
     struct domain *d = current->domain;
     uint32_t vcpu_id;
-    uint64_t gfn;
+    gfn_t gfn;
     uint32_t offset;
     struct vcpu *v;
     int rc;
@@ -516,7 +516,7 @@ int evtchn_fifo_init_control(struct evtchn_init_control *init_control)
     init_control->link_bits = EVTCHN_FIFO_LINK_BITS;
     vcpu_id = init_control->vcpu;
-    gfn = init_control->control_gfn;
+    gfn = _gfn(init_control->control_gfn);
     offset = init_control->offset;
     if ( (v = domain_vcpu(d, vcpu_id)) == NULL )
@@ -578,7 +578,7 @@ int evtchn_fifo_init_control(struct evtchn_init_control *init_control)
     return rc;
 }
-static int add_page_to_event_array(struct domain *d, unsigned long gfn)
+static int add_page_to_event_array(struct domain *d, gfn_t gfn)
 {
     void *virt;
     unsigned int slot;
@@ -628,7 +628,7 @@ int evtchn_fifo_expand_array(const struct evtchn_expand_array *expand_array)
         return -EOPNOTSUPP;
     spin_lock(&d->event_lock);
-    rc = add_page_to_event_array(d, expand_array->array_gfn);
+    rc = add_page_to_event_array(d, _gfn(expand_array->array_gfn));
     spin_unlock(&d->event_lock);
     return rc;
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 6e4b85674d..7e3c3bb7af 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1388,7 +1388,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             return rc;
         }
-        page = get_page_from_gfn(d, xrfp.gpfn, NULL, P2M_ALLOC);
+        page = get_page_from_gfn(d, _gfn(xrfp.gpfn), NULL, P2M_ALLOC);
         if ( page )
         {
             rc = guest_physmap_remove_page(d, _gfn(xrfp.gpfn),
@@ -1659,7 +1659,7 @@ int check_get_page_from_gfn(struct domain *d, gfn_t gfn, bool readonly,
     p2m_type_t p2mt;
     struct page_info *page;
-    page = get_page_from_gfn(d, gfn_x(gfn), &p2mt, q);
+    page = get_page_from_gfn(d, gfn, &p2mt, q);
 #ifdef CONFIG_HAS_MEM_PAGING
     if ( p2m_is_paging(p2mt) )
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 5fdb6e8183..f1d01ceb3f 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -304,7 +304,7 @@ struct page_info *p2m_get_page_from_gfn(struct domain *d, gfn_t gfn,
                                         p2m_type_t *t);
 static inline struct page_info *get_page_from_gfn(
-    struct domain *d, unsigned long gfn, p2m_type_t *t, p2m_query_t q)
+    struct domain *d, gfn_t gfn, p2m_type_t *t, p2m_query_t q)
 {
     mfn_t mfn;
     p2m_type_t _t;
@@ -315,7 +315,7 @@ static inline struct page_info *get_page_from_gfn(
      * not auto-translated.
      */
     if ( likely(d != dom_xen) )
-        return p2m_get_page_from_gfn(d, _gfn(gfn), t);
+        return p2m_get_page_from_gfn(d, gfn, t);
     if ( !t )
         t = &_t;
@@ -326,7 +326,7 @@ static inline struct page_info *get_page_from_gfn(
      * DOMID_XEN sees 1-1 RAM. The p2m_type is based on the type of the
      * page.
      */
-    mfn = _mfn(gfn);
+    mfn = _mfn(gfn_x(gfn));
     page = mfn_to_page(mfn);
     if ( !mfn_valid(mfn) || !get_page(page, d) )
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 39dae242b0..da842487bb 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -487,18 +487,22 @@ struct page_info *p2m_get_page_from_gfn(struct p2m_domain *p2m, gfn_t gfn,
                                         p2m_query_t q);
 static inline struct page_info *get_page_from_gfn(
-    struct domain *d, unsigned long gfn, p2m_type_t *t, p2m_query_t q)
+    struct domain *d, gfn_t gfn, p2m_type_t *t, p2m_query_t q)
 {
     struct page_info *page;
+    mfn_t mfn;
     if ( paging_mode_translate(d) )
-        return p2m_get_page_from_gfn(p2m_get_hostp2m(d), _gfn(gfn), t, NULL, q);
+        return p2m_get_page_from_gfn(p2m_get_hostp2m(d), gfn, t, NULL, q);
     /* Non-translated guests see 1-1 RAM / MMIO mappings everywhere */
     if ( t )
         *t = likely(d != dom_io) ? p2m_ram_rw : p2m_mmio_direct;
-    page = mfn_to_page(_mfn(gfn));
-    return mfn_valid(_mfn(gfn)) && get_page(page, d) ? page : NULL;
+
+    mfn = _mfn(gfn_x(gfn));
+
+    page = mfn_to_page(mfn);
+    return mfn_valid(mfn) && get_page(page, d) ? page : NULL;
 }
 /* General conversion function from mfn to gfn */