From patchwork Wed Apr 24 16:59:53 2019
From: Julien Grall
To: xen-devel@lists.xenproject.org
Date: Wed, 24 Apr 2019 17:59:53 +0100
Message-Id: <20190424165955.23718-11-julien.grall@arm.com>
In-Reply-To: <20190424165955.23718-1-julien.grall@arm.com>
References: <20190424165955.23718-1-julien.grall@arm.com>
Subject: [Xen-devel] [PATCH 10/12] xen/arm: mm: Rework Xen page-tables walk during update
Cc: Oleksandr_Tyshchenko@epam.com, Julien Grall, sstabellini@kernel.org, Andrii_Anisov@epam.com

Currently, xen_pt_update_entry() is only able to update the region covered
by xen_second (i.e. 0 to 0x7fffffff). Because of this restriction, we have
ended up with multiple functions in mm.c modifying the page-tables
differently.

Furthermore, we never walked the page-tables fully. This means that any
change in the layout may require a major rewrite of the page-tables code.

Lastly, we have been quite lucky that no one ever tried to pass an address
outside this range, because it would have blown up.
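As background for the full walk introduced below, here is a minimal,
self-contained sketch (not part of the patch; the helper name, macro names
and example address are all invented) of how a virtual address splits into
per-level table indices with the 4KB LPAE granule: 9 bits of index per
level on top of a 12-bit page offset. In the patch itself the equivalent
offsets[] array is produced by DECLARE_OFFSETS().

#include <stdint.h>
#include <stdio.h>

#define LPAE_SHIFT       9                      /* 512 entries per table */
#define PAGE_SHIFT_4K    12                     /* 4KB pages */
#define LPAE_ENTRY_MASK  ((1UL << LPAE_SHIFT) - 1)

/* Index into the level-<level> table for the given virtual address. */
static unsigned int table_offset(uint64_t va, unsigned int level)
{
    /* Level 3 covers bits [20:12], level 2 bits [29:21], and so on. */
    unsigned int shift = PAGE_SHIFT_4K + (3 - level) * LPAE_SHIFT;

    return (va >> shift) & LPAE_ENTRY_MASK;
}

int main(void)
{
    uint64_t va = 0x40201000ULL;                /* arbitrary example */
    unsigned int level;

    for ( level = 0; level <= 3; level++ )
        printf("level %u index: %u\n", level, table_offset(va, level));

    return 0;
}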
xen_pt_update_entry() is reworked to walk over the page-tables every time.
The logic has been borrowed from arch/arm/p2m.c and contains some
limitations for the time being:
    - Superpages cannot be shattered
    - Only level-3 (i.e. 4KB) mappings can be updated

Note that the parameter 'addr' has been renamed to 'virt' to make it clear
that we are dealing with a virtual address.

Signed-off-by: Julien Grall
Reviewed-by: Andrii Anisov
---
 xen/arch/arm/mm.c | 121 +++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 106 insertions(+), 15 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 115e8340f1..022967ff9f 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -979,6 +979,53 @@ static void xen_unmap_table(const lpae_t *table)
     unmap_domain_page(table);
 }
 
+#define XEN_TABLE_MAP_FAILED 0
+#define XEN_TABLE_SUPER_PAGE 1
+#define XEN_TABLE_NORMAL_PAGE 2
+
+/*
+ * Take the currently mapped table, find the corresponding entry,
+ * and map the next table, if available.
+ *
+ * The read_only parameter indicates whether intermediate tables should
+ * be allocated when not present.
+ *
+ * Return values:
+ *  XEN_TABLE_MAP_FAILED: Either read_only was set and the entry
+ *  was empty, or allocating a new page failed.
+ *  XEN_TABLE_NORMAL_PAGE: The next level was mapped normally.
+ *  XEN_TABLE_SUPER_PAGE: The next entry points to a superpage.
+ */
+static int xen_pt_next_level(bool read_only, unsigned int level,
+                             lpae_t **table, unsigned int offset)
+{
+    lpae_t *entry;
+    int ret;
+
+    entry = *table + offset;
+
+    if ( !lpae_is_valid(*entry) )
+    {
+        if ( read_only )
+            return XEN_TABLE_MAP_FAILED;
+
+        ret = create_xen_table(entry);
+        if ( ret )
+            return XEN_TABLE_MAP_FAILED;
+    }
+
+    ASSERT(lpae_is_valid(*entry));
+
+    /* The function xen_pt_next_level is never called at the 3rd level */
+    if ( lpae_is_mapping(*entry, level) )
+        return XEN_TABLE_SUPER_PAGE;
+
+    xen_unmap_table(*table);
+    *table = xen_map_table(lpae_get_mfn(*entry));
+
+    return XEN_TABLE_NORMAL_PAGE;
+}
+
 /* Sanity check of the entry */
 static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
 {
@@ -1038,30 +1085,65 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
     return true;
 }
 
-static int xen_pt_update_entry(unsigned long addr, mfn_t mfn,
-                               unsigned int flags)
+static int xen_pt_update_entry(mfn_t root, unsigned long virt,
+                               mfn_t mfn, unsigned int flags)
 {
     int rc;
+    unsigned int level;
+    /* We only support 4KB mapping (i.e. level 3) for now */
+    unsigned int target = 3;
+    lpae_t *table;
+    /*
+     * The intermediate page tables are read-only when the MFN is not valid
+     * and we are not populating the page tables.
+     * This means we either modify permissions or remove an entry.
+     */
+    bool read_only = mfn_eq(mfn, INVALID_MFN) && !(flags & _PAGE_POPULATE);
     lpae_t pte, *entry;
-    lpae_t *third = NULL;
+
+    /* convenience aliases */
+    DECLARE_OFFSETS(offsets, (paddr_t)virt);
 
     /* _PAGE_POPULATE and _PAGE_PRESENT should never be set together. */
     ASSERT((flags & (_PAGE_POPULATE|_PAGE_PRESENT)) !=
                     (_PAGE_POPULATE|_PAGE_PRESENT));
 
-    entry = &xen_second[second_linear_offset(addr)];
-    if ( !lpae_is_valid(*entry) || !lpae_is_table(*entry, 2) )
+    table = xen_map_table(root);
+    for ( level = HYP_PT_ROOT_LEVEL; level < target; level++ )
     {
-        int rc = create_xen_table(entry);
-        if ( rc < 0 ) {
-            printk("%s: L2 failed\n", __func__);
-            return rc;
+        rc = xen_pt_next_level(read_only, level, &table, offsets[level]);
+        if ( rc == XEN_TABLE_MAP_FAILED )
+        {
+            /*
+             * We are here because xen_pt_next_level has failed to map
+             * the intermediate page table (e.g. the table does not exist
+             * and the pt is read-only). It is a valid case when
+             * removing a mapping as it may not exist in the page table.
+             * In this case, just ignore it.
+             */
+            if ( flags & (_PAGE_PRESENT|_PAGE_POPULATE) )
+            {
+                mm_printk("%s: Unable to map level %u\n", __func__, level);
+                rc = -ENOENT;
+                goto out;
+            }
+            else
+            {
+                rc = 0;
+                goto out;
+            }
         }
+        else if ( rc != XEN_TABLE_NORMAL_PAGE )
+            break;
     }
-    BUG_ON(!lpae_is_valid(*entry));
+    if ( level != target )
+    {
+        mm_printk("%s: Shattering superpage is not supported\n", __func__);
+        rc = -EOPNOTSUPP;
+        goto out;
+    }
 
-    third = xen_map_table(lpae_get_mfn(*entry));
-    entry = &third[third_table_offset(addr)];
+    entry = table + offsets[level];
 
     rc = -EINVAL;
     if ( !xen_pt_check_entry(*entry, mfn, flags) )
@@ -1098,7 +1180,7 @@ static int xen_pt_update_entry(unsigned long addr, mfn_t mfn,
     rc = 0;
 
 out:
-    xen_unmap_table(third);
+    xen_unmap_table(table);
 
     return rc;
 }
@@ -1114,6 +1196,15 @@ static int xen_pt_update(unsigned long virt,
     unsigned long addr = virt, addr_end = addr + nr_mfns * PAGE_SIZE;
 
     /*
+     * For arm32, page-tables are different on each CPU. Yet, they share
+     * some common mappings. It is assumed that only common mappings
+     * will be modified with this function.
+     *
+     * XXX: Add a check.
+     */
+    const mfn_t root = virt_to_mfn(THIS_CPU_PGTABLE);
+
+    /*
      * The hardware was configured to forbid mapping both writeable and
      * executable.
      * When modifying/creating mapping (i.e. _PAGE_PRESENT is set),
@@ -1134,9 +1225,9 @@ static int xen_pt_update(unsigned long virt,
 
     spin_lock(&xen_pt_lock);
 
-    for( ; addr < addr_end; addr += PAGE_SIZE )
+    for ( ; addr < addr_end; addr += PAGE_SIZE )
     {
-        rc = xen_pt_update_entry(addr, mfn, flags);
+        rc = xen_pt_update_entry(root, addr, mfn, flags);
 
         if ( rc )
             break;
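For readers following the new control flow, below is a self-contained toy
model (not Xen code; every type, helper and constant here is invented) of
the walk now performed by xen_pt_update_entry(): descend from the root
level towards level 3, allocate missing intermediate tables only when the
walk is not read-only, treat a missing table as success when merely
removing a mapping (the 'removing' flag stands in for the
!(_PAGE_PRESENT|_PAGE_POPULATE) check), and give up rather than shatter a
superpage met on the way. It deliberately leaves out the table
mapping/unmapping and the sanity checks done by xen_pt_check_entry().

#include <stdbool.h>
#include <stdio.h>

#define ROOT_LEVEL    0
#define TARGET_LEVEL  3

enum next { MAP_FAILED, SUPER_PAGE, NORMAL_PAGE };

/* Fake description of what the walk meets on its way down. */
struct fake_pt
{
    int missing_level;    /* level whose table is absent, or -1 */
    int superpage_level;  /* level mapped with a superpage, or -1 */
};

static enum next next_level(const struct fake_pt *pt, bool read_only,
                            unsigned int level)
{
    if ( pt->missing_level == (int)level )
        /* A missing table may only be allocated when not read-only. */
        return read_only ? MAP_FAILED : NORMAL_PAGE;

    if ( pt->superpage_level == (int)level )
        return SUPER_PAGE;

    return NORMAL_PAGE;
}

/* Mirrors the shape of the walk: 0 on success, negative on error. */
static int update_entry(const struct fake_pt *pt, bool read_only,
                        bool removing)
{
    unsigned int level;

    for ( level = ROOT_LEVEL; level < TARGET_LEVEL; level++ )
    {
        enum next rc = next_level(pt, read_only, level);

        if ( rc == MAP_FAILED )
            /* Removing a mapping that was never there is not an error. */
            return removing ? 0 : -1;   /* -1 stands in for -ENOENT */
        else if ( rc != NORMAL_PAGE )
            break;                      /* hit a superpage early */
    }

    if ( level != TARGET_LEVEL )
        return -2;                      /* -2 stands in for -EOPNOTSUPP */

    return 0;                           /* the level-3 entry would be written here */
}

int main(void)
{
    struct fake_pt plain = { .missing_level = -1, .superpage_level = -1 };
    struct fake_pt super = { .missing_level = -1, .superpage_level =  1 };
    struct fake_pt empty = { .missing_level =  1, .superpage_level = -1 };

    printf("normal walk      : %d\n", update_entry(&plain, true, false)); /* 0 */
    printf("hits a superpage : %d\n", update_entry(&super, true, false)); /* -2 */
    printf("remove non-mapped: %d\n", update_entry(&empty, true, true));  /* 0 */

    return 0;
}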