From patchwork Thu Feb 23 05:40:58 2023
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk
Subject: [PATCH v2 1/8] xen/arm: remove stale addr_cells/size_cells in assign_shared_memory
Date: Thu, 23 Feb 2023 13:40:58 +0800
Message-Id: <20230223054105.2357217-2-Penny.Zheng@arm.com>
In-Reply-To: <20230223054105.2357217-1-Penny.Zheng@arm.com>
References: <20230223054105.2357217-1-Penny.Zheng@arm.com>

The function parameters {addr_cells,size_cells} of assign_shared_memory are stale, i.e. no longer used in the function body, so we shall remove them. (A condensed sketch of the resulting interface follows the diff.)
Signed-off-by: Penny Zheng
---
v1 -> v2:
- new commit
---
 xen/arch/arm/domain_build.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index edca23b986..4a6f743f35 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -836,7 +836,6 @@ static mfn_t __init acquire_shared_memory_bank(struct domain *d,
 }
 
 static int __init assign_shared_memory(struct domain *d,
-                                       uint32_t addr_cells, uint32_t size_cells,
                                        paddr_t pbase, paddr_t psize,
                                        paddr_t gbase)
 {
@@ -998,7 +997,6 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
          * specified, so they should be assigned to dom_io.
          */
         ret = assign_shared_memory(owner_dom_io ? dom_io : d,
-                                   addr_cells, size_cells,
                                    pbase, psize, gbase);
         if ( ret )
             return ret;
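[ Editor's note: a minimal sketch of the interface after this change, condensed from the diff above rather than quoted verbatim from the tree: ]

    /* Sketch: assign_shared_memory() once the unused cells are dropped. */
    static int __init assign_shared_memory(struct domain *d,
                                           paddr_t pbase, paddr_t psize,
                                           paddr_t gbase);

    /* Callers no longer thread addr_cells/size_cells through. */
    ret = assign_shared_memory(owner_dom_io ? dom_io : d,
                               pbase, psize, gbase);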
From patchwork Thu Feb 23 05:40:59 2023
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk
Subject: [PATCH v2 2/8] xen/arm: re-define a set of data structures for static shared memory region
Date: Thu, 23 Feb 2023 13:40:59 +0800
Message-Id: <20230223054105.2357217-3-Penny.Zheng@arm.com>
In-Reply-To: <20230223054105.2357217-1-Penny.Zheng@arm.com>
References: <20230223054105.2357217-1-Penny.Zheng@arm.com>

This commit introduces a set of separate data structures to deal with static shared memory at different stages.

In boot-time host device tree parsing, we introduce a new structure "struct shm_node" and a new field "shm_info" in bootinfo to describe and store parsed shm info. Only the SHMID and "nr_borrowers", which describes the number of borrower domains, are recorded per shm node.

We also introduce a new file-scope structure "shm_data" in bootfdt.c, in which each reserved memory bank is recorded together with its shm node, to assist in shm node verification.

In order to apply the above changes in acquire_nr_borrower_domain, we now use the SHMID to iterate over "shminfo" to find the requested shm node, from which we read "nr_borrowers".

Finally, a new anonymous structure "shminfo" is created in "kinfo" when dealing with domain information: an array of compound structures, each containing a SHMID and a "struct membank membank" describing a shared memory region in the guest address space.

Signed-off-by: Penny Zheng
---
v1 -> v2:
- As the original "struct shm_membank" made reserving memory more complex, and the memory information can still be retrieved from the host Device Tree when dealing with domain construction, introduce a new simple structure "struct shm_node" in bootinfo to store only the SHMID and "nr_borrowers"
- Further restrict the scope of the local variable "struct meminfo *mem = &bootinfo.reserved_mem"
- Introduce a new file-scope structure "shm_data" in bootfdt.c, in which each reserved memory bank is recorded together with its shm node, to assist in shm node verification
- Define a set of local variables that point to "shm_data.shm_nodes[i].membank->start", etc., to make the code more readable
- Use the SHMID to iterate over "shminfo" to find the requested shm node, as we no longer store host memory bank info in the shm node
- Introduce in "kinfo" a new anonymous structure: an array of compound structures, each containing a SHMID and a "struct membank membank" describing a shared memory region in the guest
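[ Editor's note: the relationship between the new structures can be summarised as follows; this is a condensed, non-authoritative sketch, with field names taken from the diff below: ]

    struct shm_node {                      /* per-SHMID bookkeeping */
        char shm_id[MAX_SHM_ID_LENGTH];    /* unique "xen,shm-id" */
        unsigned int nr_shm_borrowers;     /* number of borrower domains */
    };

    struct bootinfo {
        /* ... existing fields ... */
        struct {                           /* shm nodes parsed from host DT */
            unsigned int nr_nodes;
            struct shm_node node[NR_MEM_BANKS];
        } shminfo;
    };

    struct kernel_info {
        /* ... existing fields ... */
        struct {                           /* per-domain guest-side view */
            unsigned int nr_banks;
            struct {
                char shm_id[MAX_SHM_ID_LENGTH];
                struct membank membank;    /* region in guest address space */
            } bank[NR_MEM_BANKS];
        } shminfo;
    };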
--- xen/arch/arm/bootfdt.c | 61 ++++++++++++++++++------- xen/arch/arm/domain_build.c | 74 +++++++++++++++---------------- xen/arch/arm/include/asm/kernel.h | 9 +++- xen/arch/arm/include/asm/setup.h | 21 +++++++-- 4 files changed, 106 insertions(+), 59 deletions(-) diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c index 0085c28d74..dc32d1b9b3 100644 --- a/xen/arch/arm/bootfdt.c +++ b/xen/arch/arm/bootfdt.c @@ -16,6 +16,16 @@ #include #include +#ifdef CONFIG_STATIC_SHM +static __initdata struct { + unsigned int nr_nodes; + struct { + const struct shm_node *node; + const struct membank *membank; + } shm_nodes[NR_MEM_BANKS]; +} shm_data; +#endif + static bool __init device_tree_node_matches(const void *fdt, int node, const char *match) { @@ -380,7 +390,6 @@ static int __init process_shm_node(const void *fdt, int node, const struct fdt_property *prop, *prop_id, *prop_role; const __be32 *cell; paddr_t paddr, gaddr, size; - struct meminfo *mem = &bootinfo.reserved_mem; unsigned int i; int len; bool owner = false; @@ -452,17 +461,21 @@ static int __init process_shm_node(const void *fdt, int node, return -EINVAL; } - for ( i = 0; i < mem->nr_banks; i++ ) + for ( i = 0; i < shm_data.nr_nodes; i++ ) { + paddr_t bank_start = shm_data.shm_nodes[i].membank->start; + paddr_t bank_size = shm_data.shm_nodes[i].membank->size; + const char *bank_id = shm_data.shm_nodes[i].node->shm_id; + /* * Meet the following check: * 1) The shm ID matches and the region exactly match * 2) The shm ID doesn't match and the region doesn't overlap * with an existing one */ - if ( paddr == mem->bank[i].start && size == mem->bank[i].size ) + if ( paddr == bank_start && size == bank_size ) { - if ( strncmp(shm_id, mem->bank[i].shm_id, MAX_SHM_ID_LENGTH) == 0 ) + if ( strncmp(shm_id, bank_id, MAX_SHM_ID_LENGTH) == 0 ) break; else { @@ -474,17 +487,17 @@ static int __init process_shm_node(const void *fdt, int node, else { paddr_t end = paddr + size; - paddr_t bank_end = mem->bank[i].start + mem->bank[i].size; + paddr_t bank_end = bank_start + bank_size; - if ( (end <= paddr) || (bank_end <= mem->bank[i].start) ) + if ( (end <= paddr) || (bank_end <= bank_start) ) { printk("fdt: static shared memory region %s overflow\n", shm_id); return -EINVAL; } - if ( (end <= mem->bank[i].start) || (paddr >= bank_end) ) + if ( (end <= bank_start) || (paddr >= bank_end) ) { - if ( strcmp(shm_id, mem->bank[i].shm_id) != 0 ) + if ( strcmp(shm_id, bank_id) != 0 ) continue; else { @@ -496,22 +509,38 @@ static int __init process_shm_node(const void *fdt, int node, else { printk("fdt: shared memory region overlap with an existing entry %#"PRIpaddr" - %#"PRIpaddr"\n", - mem->bank[i].start, bank_end); + bank_start, bank_end); return -EINVAL; } } } - if ( i == mem->nr_banks ) + if ( (i == shm_data.nr_nodes) && (i == bootinfo.shminfo.nr_nodes) ) { - if ( i < NR_MEM_BANKS ) + struct meminfo *mem = &bootinfo.reserved_mem; + + if ( (i < NR_MEM_BANKS) && (mem->nr_banks < NR_MEM_BANKS) ) { /* Static shared memory shall be reserved from any other use. 
*/ - safe_strcpy(mem->bank[mem->nr_banks].shm_id, shm_id); - mem->bank[mem->nr_banks].start = paddr; - mem->bank[mem->nr_banks].size = size; - mem->bank[mem->nr_banks].type = MEMBANK_STATIC_DOMAIN; + struct membank *membank = &mem->bank[mem->nr_banks]; + struct shm_node *shm_node = &bootinfo.shminfo.node[i]; + + membank->start = paddr; + membank->size = size; + membank->type = MEMBANK_STATIC_DOMAIN; mem->nr_banks++; + + /* Record static shared memory node info in bootinfo.shminfo */ + safe_strcpy(shm_node->shm_id, shm_id); + bootinfo.shminfo.nr_nodes++; + + /* + * Reserved memory bank is recorded together with shm + * node info in shm_data, to assist doing shm node verification. + */ + shm_data.shm_nodes[i].node = shm_node; + shm_data.shm_nodes[i].membank = membank; + shm_data.nr_nodes++; } else { @@ -524,7 +553,7 @@ static int __init process_shm_node(const void *fdt, int node, * to calculate the reference count. */ if ( !owner ) - mem->bank[i].nr_shm_borrowers++; + bootinfo.shminfo.node[i].nr_shm_borrowers++; return 0; } diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index 4a6f743f35..2daf36cdbd 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -752,28 +752,25 @@ static void __init assign_static_memory_11(struct domain *d, } #ifdef CONFIG_STATIC_SHM -static int __init acquire_nr_borrower_domain(struct domain *d, - paddr_t pbase, paddr_t psize, +static int __init acquire_nr_borrower_domain(const char *shm_id, unsigned long *nr_borrowers) { - unsigned int bank; + struct shm_node *shm_node; + unsigned int i; - /* Iterate reserved memory to find requested shm bank. */ - for ( bank = 0 ; bank < bootinfo.reserved_mem.nr_banks; bank++ ) + /* Iterate to find requested static shared memory node. */ + for ( i = 0; i < bootinfo.shminfo.nr_nodes; i++ ) { - paddr_t bank_start = bootinfo.reserved_mem.bank[bank].start; - paddr_t bank_size = bootinfo.reserved_mem.bank[bank].size; + shm_node = &bootinfo.shminfo.node[i]; - if ( (pbase == bank_start) && (psize == bank_size) ) - break; + if ( strcmp(shm_id, shm_node->shm_id) == 0 ) + { + *nr_borrowers = shm_node->nr_shm_borrowers; + return 0; + } } - if ( bank == bootinfo.reserved_mem.nr_banks ) - return -ENOENT; - - *nr_borrowers = bootinfo.reserved_mem.bank[bank].nr_shm_borrowers; - - return 0; + return -EINVAL; } /* @@ -837,7 +834,7 @@ static mfn_t __init acquire_shared_memory_bank(struct domain *d, static int __init assign_shared_memory(struct domain *d, paddr_t pbase, paddr_t psize, - paddr_t gbase) + paddr_t gbase, const char *shm_id) { mfn_t smfn; int ret = 0; @@ -871,7 +868,7 @@ static int __init assign_shared_memory(struct domain *d, * Get the right amount of references per page, which is the number of * borrower domains. 
*/ - ret = acquire_nr_borrower_domain(d, pbase, psize, &nr_borrowers); + ret = acquire_nr_borrower_domain(shm_id, &nr_borrowers); if ( ret ) return ret; @@ -907,13 +904,16 @@ static int __init append_shm_bank_to_domain(struct kernel_info *kinfo, paddr_t start, paddr_t size, const char *shm_id) { - if ( kinfo->shm_mem.nr_banks >= NR_MEM_BANKS ) + unsigned int nr_banks = kinfo->shminfo.nr_banks; + struct membank *membank = &kinfo->shminfo.bank[nr_banks].membank; + + if ( nr_banks >= NR_MEM_BANKS ) return -ENOMEM; - kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].start = start; - kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].size = size; - safe_strcpy(kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].shm_id, shm_id); - kinfo->shm_mem.nr_banks++; + membank->start = start; + membank->size = size; + safe_strcpy(kinfo->shminfo.bank[nr_banks].shm_id, shm_id); + kinfo->shminfo.nr_banks++; return 0; } @@ -997,7 +997,7 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo, * specified, so they should be assigned to dom_io. */ ret = assign_shared_memory(owner_dom_io ? dom_io : d, - pbase, psize, gbase); + pbase, psize, gbase, shm_id); if ( ret ) return ret; } @@ -1354,12 +1354,12 @@ static int __init make_memory_node(const struct domain *d, static int __init make_shm_memory_node(const struct domain *d, void *fdt, int addrcells, int sizecells, - const struct meminfo *mem) + const struct kernel_info *kinfo) { unsigned int i = 0; int res = 0; - if ( mem->nr_banks == 0 ) + if ( kinfo->shminfo.nr_banks == 0 ) return -ENOENT; /* @@ -1369,10 +1369,10 @@ static int __init make_shm_memory_node(const struct domain *d, */ dt_dprintk("Create xen-shmem node\n"); - for ( ; i < mem->nr_banks; i++ ) + for ( ; i < kinfo->shminfo.nr_banks; i++ ) { - uint64_t start = mem->bank[i].start; - uint64_t size = mem->bank[i].size; + uint64_t start = kinfo->shminfo.bank[i].membank.start; + uint64_t size = kinfo->shminfo.bank[i].membank.size; /* Placeholder for xen-shmem@ + a 64-bit number + \0 */ char buf[27]; const char compat[] = "xen,shared-memory-v1"; @@ -1381,7 +1381,7 @@ static int __init make_shm_memory_node(const struct domain *d, __be32 *cells; unsigned int len = (addrcells + sizecells) * sizeof(__be32); - snprintf(buf, sizeof(buf), "xen-shmem@%"PRIx64, mem->bank[i].start); + snprintf(buf, sizeof(buf), "xen-shmem@%"PRIx64, start); res = fdt_begin_node(fdt, buf); if ( res ) return res; @@ -1400,7 +1400,7 @@ static int __init make_shm_memory_node(const struct domain *d, dt_dprintk("Shared memory bank %u: %#"PRIx64"->%#"PRIx64"\n", i, start, start + size); - res = fdt_property_string(fdt, "xen,id", mem->bank[i].shm_id); + res = fdt_property_string(fdt, "xen,id", kinfo->shminfo.bank[i].shm_id); if ( res ) return res; @@ -1425,7 +1425,7 @@ static int __init make_shm_memory_node(const struct domain *d, static int __init make_shm_memory_node(const struct domain *d, void *fdt, int addrcells, int sizecells, - const struct meminfo *mem) + const struct kernel_info *kinfo) { ASSERT_UNREACHABLE(); return -EOPNOTSUPP; @@ -1435,13 +1435,13 @@ static int __init make_shm_memory_node(const struct domain *d, static int __init make_resv_memory_node(const struct domain *d, void *fdt, int addrcells, int sizecells, - const struct meminfo *mem) + const struct kernel_info *kinfo) { int res = 0; /* Placeholder for reserved-memory\0 */ const char resvbuf[16] = "reserved-memory"; - if ( mem->nr_banks == 0 ) + if ( kinfo->shminfo.nr_banks == 0 ) /* No shared memory provided. 
*/ return 0; @@ -1463,7 +1463,7 @@ static int __init make_resv_memory_node(const struct domain *d, if ( res ) return res; - res = make_shm_memory_node(d, fdt, addrcells, sizecells, mem); + res = make_shm_memory_node(d, fdt, addrcells, sizecells, kinfo); if ( res ) return res; @@ -2696,8 +2696,7 @@ static int __init handle_node(struct domain *d, struct kernel_info *kinfo, return res; } - res = make_resv_memory_node(d, kinfo->fdt, addrcells, sizecells, - &kinfo->shm_mem); + res = make_resv_memory_node(d, kinfo->fdt, addrcells, sizecells, kinfo); if ( res ) return res; } @@ -3254,8 +3253,7 @@ static int __init prepare_dtb_domU(struct domain *d, struct kernel_info *kinfo) if ( ret ) goto err; - ret = make_resv_memory_node(d, kinfo->fdt, addrcells, sizecells, - &kinfo->shm_mem); + ret = make_resv_memory_node(d, kinfo->fdt, addrcells, sizecells, kinfo); if ( ret ) goto err; diff --git a/xen/arch/arm/include/asm/kernel.h b/xen/arch/arm/include/asm/kernel.h index 4617cdc83b..590bc56f6c 100644 --- a/xen/arch/arm/include/asm/kernel.h +++ b/xen/arch/arm/include/asm/kernel.h @@ -38,7 +38,14 @@ struct kernel_info { void *fdt; /* flat device tree */ paddr_t unassigned_mem; /* RAM not (yet) assigned to a bank */ struct meminfo mem; - struct meminfo shm_mem; + /* Static shared memory banks */ + struct { + unsigned int nr_banks; + struct { + char shm_id[MAX_SHM_ID_LENGTH]; + struct membank membank; + } bank[NR_MEM_BANKS]; + } shminfo; /* kernel entry point */ paddr_t entry; diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h index a926f30a2b..f92fb51551 100644 --- a/xen/arch/arm/include/asm/setup.h +++ b/xen/arch/arm/include/asm/setup.h @@ -50,10 +50,6 @@ struct membank { paddr_t start; paddr_t size; enum membank_type type; -#ifdef CONFIG_STATIC_SHM - char shm_id[MAX_SHM_ID_LENGTH]; - unsigned int nr_shm_borrowers; -#endif }; struct meminfo { @@ -95,6 +91,17 @@ struct bootcmdlines { struct bootcmdline cmdline[MAX_MODULES]; }; +#ifdef CONFIG_STATIC_SHM +/* + * struct shm_node represents a static shared memory node shared between + * multiple domains, identified by the unique SHMID("xen,shm-id"). 
+ */
+struct shm_node {
+    char shm_id[MAX_SHM_ID_LENGTH];
+    unsigned int nr_shm_borrowers;
+};
+#endif
+
 struct bootinfo {
     struct meminfo mem;
     /* The reserved regions are only used when booting using Device-Tree */
@@ -105,6 +112,12 @@ struct bootinfo {
     struct meminfo acpi;
 #endif
     bool static_heap;
+#ifdef CONFIG_STATIC_SHM
+    struct {
+        unsigned int nr_nodes;
+        struct shm_node node[NR_MEM_BANKS];
+    } shminfo;
+#endif
 };
 
 struct map_range_data

From patchwork Thu Feb 23 05:41:00 2023
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk
Subject: [PATCH v2 3/8] xen/arm: introduce allocate_domheap_memory and guest_physmap_memory
Date: Thu, 23 Feb 2023 13:41:00 +0800
Message-Id: <20230223054105.2357217-4-Penny.Zheng@arm.com>
In-Reply-To: <20230223054105.2357217-1-Penny.Zheng@arm.com>
References: <20230223054105.2357217-1-Penny.Zheng@arm.com>

We split the code of allocate_bank_memory into two parts: allocate_domheap_memory and guest_physmap_memory. The former allocates guest RAM from the heap, and can be re-used later for allocating static shared memory from the heap when the host address is not provided.
The latter builds up the guest P2M mapping.

We also define a set of MACRO helpers to access common fields in data structures of "meminfo" type; "struct meminfo" is one of them, and the new "struct shm_meminfo" introduced later is another. Structures of this kind must have the following characteristics:
- an array of "struct membank"
- a member called "nr_banks" indicating the current array size
- a field indicating the maximum array size

When introducing a new data structure of this kind, a corresponding callback of function type "retrieve_fn" shall be defined so that the MACRO helpers can be used (see the usage sketch after this patch's diff). This commit defines the callback "retrieve_meminfo" for the data structure "struct meminfo".

Signed-off-by: Penny Zheng
---
v1 -> v2:
- define a set of MACRO helpers to access common fields in data structures of "meminfo" type. "struct meminfo" is one of them, and the corresponding callback "retrieve_meminfo" is also introduced here
- typo: change 1ULL to 1UL
---
 xen/arch/arm/domain_build.c      | 119 ++++++++++++++++++++++++-------
 xen/arch/arm/include/asm/setup.h |  33 +++++++++
 2 files changed, 127 insertions(+), 25 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 2daf36cdbd..a6ce22b423 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -77,6 +77,26 @@ custom_param("dom0_mem", parse_dom0_mem);
  */
 #define DOM0_FDT_EXTRA_SIZE (128 + sizeof(struct fdt_reserve_entry))
 
+static void __init retrieve_meminfo(void *mem, unsigned int *max_mem_banks,
+                                    struct membank **bank,
+                                    unsigned int **nr_banks)
+{
+    struct meminfo *meminfo = (struct meminfo *)mem;
+
+    if ( max_mem_banks )
+        *max_mem_banks = NR_MEM_BANKS;
+
+    if ( nr_banks )
+        *nr_banks = &(meminfo->nr_banks);
+
+    if ( bank )
+        *bank = meminfo->bank;
+}
+
+retrieve_fn __initdata retrievers[MAX_MEMINFO_TYPE] = {
+    [NORMAL_MEMINFO] = retrieve_meminfo,
+};
+
 unsigned int __init dom0_max_vcpus(void)
 {
     if ( opt_dom0_max_vcpus == 0 )
@@ -391,34 +411,20 @@ static void __init allocate_memory_11(struct domain *d,
     }
 }
 
-static bool __init allocate_bank_memory(struct domain *d,
-                                        struct kernel_info *kinfo,
-                                        gfn_t sgfn,
-                                        paddr_t tot_size)
+static bool __init allocate_domheap_memory(struct domain *d,
+                                           paddr_t tot_size,
+                                           void *mem, enum meminfo_type type)
 {
-    int res;
     struct page_info *pg;
-    struct membank *bank;
     unsigned int max_order = ~0;
-
-    /*
-     * allocate_bank_memory can be called with a tot_size of zero for
-     * the second memory bank. It is not an error and we can safely
-     * avoid creating a zero-size memory bank.
- */ - if ( tot_size == 0 ) - return true; - - bank = &kinfo->mem.bank[kinfo->mem.nr_banks]; - bank->start = gfn_to_gaddr(sgfn); - bank->size = tot_size; + unsigned int *nr_banks = GET_NR_BANKS(mem, type); while ( tot_size > 0 ) { unsigned int order = get_allocation_size(tot_size); + struct membank *membank; order = min(max_order, order); - pg = alloc_domheap_pages(d, order, 0); if ( !pg ) { @@ -438,15 +444,78 @@ static bool __init allocate_bank_memory(struct domain *d, continue; } - res = guest_physmap_add_page(d, sgfn, page_to_mfn(pg), order); - if ( res ) - { - dprintk(XENLOG_ERR, "Failed map pages to DOMU: %d", res); + if ( *nr_banks == MAX_MEM_BANKS(type) ) return false; - } + + membank = GET_MEMBANK(mem, type, *nr_banks); + membank->start = mfn_to_maddr(page_to_mfn(pg)); + membank->size = 1ULL << (PAGE_SHIFT + order); + (*nr_banks)++; + tot_size -= membank->size; + } + + return true; +} + +static int __init guest_physmap_memory(struct domain *d, + void *mem, enum meminfo_type type, + gfn_t sgfn) +{ + unsigned int i; + int res; + unsigned int *nr_banks = GET_NR_BANKS(mem, type); + + for ( i = 0; i < *nr_banks; i++ ) + { + struct membank *membank = GET_MEMBANK(mem, type, i); + paddr_t start = membank->start; + paddr_t size = membank->size; + unsigned int order = get_order_from_bytes(size); + + /* Size must be power of two */ + BUG_ON(!size || (size & (size - 1))); + res = guest_physmap_add_page(d, sgfn, maddr_to_mfn(start), order); + if ( res ) + return res; sgfn = gfn_add(sgfn, 1UL << order); - tot_size -= (1ULL << (PAGE_SHIFT + order)); + } + + return 0; +} + +static bool __init allocate_bank_memory(struct domain *d, + struct kernel_info *kinfo, + gfn_t sgfn, + paddr_t total_size) +{ + struct membank *bank; + struct meminfo host = { 0 }; + + /* + * allocate_bank_memory can be called with a total_size of zero for + * the second memory bank. It is not an error and we can safely + * avoid creating a zero-size memory bank. + */ + if ( total_size == 0 ) + return true; + + bank = &kinfo->mem.bank[kinfo->mem.nr_banks]; + bank->start = gfn_to_gaddr(sgfn); + bank->size = total_size; + + if ( !allocate_domheap_memory(d, total_size, (void *)&host, NORMAL_MEMINFO) ) + { + printk(XENLOG_ERR "Failed to allocate (%"PRIpaddr"MB) pages to %pd\n", + total_size >> 20, d); + return false; + } + + if ( guest_physmap_memory(d, (void *)&host, NORMAL_MEMINFO, sgfn) ) + { + printk(XENLOG_ERR "Failed to map (%"PRIpaddr"MB) pages to %pd\n", + total_size >> 20, d); + return false; } kinfo->mem.nr_banks++; diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h index f92fb51551..c42fd83db2 100644 --- a/xen/arch/arm/include/asm/setup.h +++ b/xen/arch/arm/include/asm/setup.h @@ -57,6 +57,39 @@ struct meminfo { struct membank bank[NR_MEM_BANKS]; }; +enum meminfo_type { + NORMAL_MEMINFO, + MAX_MEMINFO_TYPE, +}; + +/* + * Define a set of MACRO helpers to access meminfo_type, like "struct meminfo" + * as type of NORMAL_MEMINFO, etc. + * This kind of structure must have a array of "struct membank", + * a member called nr_banks indicating the current array size, and also a field + * indicating the maximum array size. 
+ */
+typedef void (*retrieve_fn)(void *, unsigned int *, struct membank **,
+                            unsigned int **);
+
+#define MAX_MEM_BANKS(type) ({                              \
+    unsigned int _max_mem_banks;                            \
+    retrievers[type](NULL, &_max_mem_banks, NULL, NULL);    \
+    _max_mem_banks;                                         \
+})
+
+#define GET_MEMBANK(mem, type, index) ({                    \
+    struct membank *_bank;                                  \
+    retrievers[type]((void *)(mem), NULL, &_bank, NULL);    \
+    &(_bank[index]);                                        \
+})
+
+#define GET_NR_BANKS(mem, type) ({                          \
+    unsigned int *_nr_banks;                                \
+    retrievers[type]((void *)mem, NULL, NULL, &_nr_banks);  \
+    _nr_banks;                                              \
+})
+
 /*
  * The domU flag is set for kernels and ramdisks of "xen,domain" nodes.
  * The purpose of the domU flag is to avoid getting confused in
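[ Editor's note: a condensed usage sketch of the callback table and helpers above; it mirrors the pattern used by allocate_domheap_memory in the diff, but the addresses and sizes are made-up illustrative values: ]

    /* Sketch: generic code asks the per-type callback for the fields. */
    struct meminfo host = { 0 };
    unsigned int *nr_banks = GET_NR_BANKS(&host, NORMAL_MEMINFO);

    if ( *nr_banks < MAX_MEM_BANKS(NORMAL_MEMINFO) )
    {
        /* GET_MEMBANK() yields the next free bank slot. */
        struct membank *bank = GET_MEMBANK(&host, NORMAL_MEMINFO, *nr_banks);

        bank->start = 0x40000000UL;   /* hypothetical example values */
        bank->size  = 2UL << 20;
        (*nr_banks)++;
    }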

From patchwork Thu Feb 23 05:41:01 2023
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk
Subject: [PATCH v2 4/8] xen/arm: use paddr_assigned to indicate whether host address is provided
Date: Thu, 23 Feb 2023 13:41:01 +0800
Message-Id: <20230223054105.2357217-5-Penny.Zheng@arm.com>
In-Reply-To: <20230223054105.2357217-1-Penny.Zheng@arm.com>
References: <20230223054105.2357217-1-Penny.Zheng@arm.com>

We use paddr_assigned to indicate whether the host address is provided, by checking the length of the "xen,shared-mem" property. The shm matching criteria shall also be adapted to cover the new scenario: when the host address is not provided and the SHMID matches, the region size must match exactly too. (See the device-tree sketch after this patch's diff for the two property layouts.)

During domain creation, a static shared memory node can now be backed either by a statically configured host memory bank, or by a set of arbitrary host memory banks allocated from the heap. To cover both scenarios, we create a new structure, shm_meminfo. It is very similar to meminfo, but with a smaller maximum array size, NR_SHM_BANKS(16). As "shm_meminfo" is also a new member of "enum meminfo_type", we implement its own callback, "retrieve_shm_meminfo", to gain access to all the MACRO helpers, e.g. GET_MEMBANK(...).

Also, to keep the code tidy and clear, we extract the code that parses the "xen,shared-mem" property from "process_shm" and move it into a new helper, "parse_shm_property".

Signed-off-by: Penny Zheng
---
v1 -> v2
- In order to get allocated banked host memory info during domain creation, we create a new structure shm_meminfo. It is very similar to meminfo, with the maximum array size being NR_SHM_BANKS. As shm_meminfo is a new member of type meminfo_type, we implement its own callback retrieve_shm_meminfo to have access to all MACRO helpers, e.g. GET_MEMBANK(...)
- rename "acquire_shm_memnode" to "find_shm_memnode"
---
 xen/arch/arm/bootfdt.c           | 100 ++++++++++-----
 xen/arch/arm/domain_build.c      | 207 +++++++++++++++++++++++++------
 xen/arch/arm/include/asm/setup.h |   3 +
 3 files changed, 243 insertions(+), 67 deletions(-)

diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index dc32d1b9b3..98cf6b89f6 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -21,7 +21,15 @@ static __initdata struct {
     unsigned int nr_nodes;
     struct {
         const struct shm_node *node;
-        const struct membank *membank;
+        /*
+         * For a static shared memory node, it is either banked with a reserved
+         * host memory bank, or arbitrary host memory which shall
+         * be allocated by Xen with a specified total size(tot_size).
+         */
+        union {
+            const struct membank *membank;
+            paddr_t tot_size;
+        };
     } shm_nodes[NR_MEM_BANKS];
 } shm_data;
 #endif
@@ -392,7 +400,7 @@ static int __init process_shm_node(const void *fdt, int node,
     paddr_t paddr, gaddr, size;
     unsigned int i;
     int len;
-    bool owner = false;
+    bool owner = false, paddr_assigned = true;
     const char *shm_id;
 
     if ( address_cells < 1 || size_cells < 1 )
@@ -433,7 +441,7 @@ static int __init process_shm_node(const void *fdt, int node,
     }
 
     /*
-     * xen,shared-mem = <paddr, gaddr, size>;
+     * xen,shared-mem = <[paddr, ]gaddr, size>, and paddr could be optional
      * Memory region starting from physical address #paddr of #size shall
      * be mapped to guest physical address #gaddr as static shared memory
     * region.
@@ -444,16 +452,24 @@ static int __init process_shm_node(const void *fdt, int node, if ( len != dt_cells_to_size(address_cells + size_cells + address_cells) ) { + /* paddr is not provided in "xen,shared-mem" */ if ( len == dt_cells_to_size(size_cells + address_cells) ) - printk("fdt: host physical address must be chosen by users at the moment.\n"); - - printk("fdt: invalid `xen,shared-mem` property.\n"); - return -EINVAL; + paddr_assigned = false; + else + { + printk("fdt: invalid `xen,shared-mem` property.\n"); + return -EINVAL; + } } cell = (const __be32 *)prop->data; - device_tree_get_reg(&cell, address_cells, address_cells, &paddr, &gaddr); - size = dt_next_cell(size_cells, &cell); + if ( !paddr_assigned ) + device_tree_get_reg(&cell, address_cells, size_cells, &gaddr, &size); + else + { + device_tree_get_reg(&cell, address_cells, address_cells, &paddr, &gaddr); + size = dt_next_cell(size_cells, &cell); + } if ( !size ) { @@ -466,23 +482,37 @@ static int __init process_shm_node(const void *fdt, int node, paddr_t bank_start = shm_data.shm_nodes[i].membank->start; paddr_t bank_size = shm_data.shm_nodes[i].membank->size; const char *bank_id = shm_data.shm_nodes[i].node->shm_id; + paddr_t tot_size = shm_data.shm_nodes[i].tot_size; /* * Meet the following check: + * when host address is provided: * 1) The shm ID matches and the region exactly match * 2) The shm ID doesn't match and the region doesn't overlap * with an existing one + * when host address is not provided: + * 1) The shm ID matches and the region size exactly match */ - if ( paddr == bank_start && size == bank_size ) + if ( !paddr_assigned || ((paddr == bank_start) && (size == bank_size)) ) { if ( strncmp(shm_id, bank_id, MAX_SHM_ID_LENGTH) == 0 ) - break; - else + { + if ( !paddr_assigned && (size != tot_size) ) + { + printk("fdt: when host address is not provided, if xen,shm-id matches, size must stay the same too(%"PRIpaddr" -> %"PRIpaddr")\n", + size, tot_size); + return -EINVAL; + } + } + else if ( paddr_assigned ) { printk("fdt: xen,shm-id %s does not match for all the nodes using the same region.\n", shm_id); return -EINVAL; } + + /* host address is not provided, and with different SHMID. */ + continue; } else { @@ -517,37 +547,41 @@ static int __init process_shm_node(const void *fdt, int node, if ( (i == shm_data.nr_nodes) && (i == bootinfo.shminfo.nr_nodes) ) { - struct meminfo *mem = &bootinfo.reserved_mem; - - if ( (i < NR_MEM_BANKS) && (mem->nr_banks < NR_MEM_BANKS) ) + if ( i < NR_MEM_BANKS ) { - /* Static shared memory shall be reserved from any other use. */ - struct membank *membank = &mem->bank[mem->nr_banks]; struct shm_node *shm_node = &bootinfo.shminfo.node[i]; - - membank->start = paddr; - membank->size = size; - membank->type = MEMBANK_STATIC_DOMAIN; - mem->nr_banks++; + struct meminfo *mem = &bootinfo.reserved_mem; /* Record static shared memory node info in bootinfo.shminfo */ safe_strcpy(shm_node->shm_id, shm_id); bootinfo.shminfo.nr_nodes++; - /* - * Reserved memory bank is recorded together with shm - * node info in shm_data, to assist doing shm node verification. 
- */ shm_data.shm_nodes[i].node = shm_node; - shm_data.shm_nodes[i].membank = membank; shm_data.nr_nodes++; + if ( !paddr_assigned ) + shm_data.shm_nodes[i].tot_size = size; + else if ( mem->nr_banks < NR_MEM_BANKS ) + { + struct membank *membank = &mem->bank[mem->nr_banks]; + + membank->start = paddr; + membank->size = size; + membank->type = MEMBANK_STATIC_DOMAIN; + mem->nr_banks++; + + /* + * Reserved memory bank is recorded together with shm + * node info in shm_data, to assist doing shm node verification. + */ + shm_data.shm_nodes[i].membank = membank; + } + else + goto fail; } else - { - printk("Warning: Max number of supported memory regions reached.\n"); - return -ENOSPC; - } + goto fail; } + /* * keep a count of the number of borrowers, which later may be used * to calculate the reference count. @@ -556,6 +590,10 @@ static int __init process_shm_node(const void *fdt, int node, bootinfo.shminfo.node[i].nr_shm_borrowers++; return 0; + + fail: + printk("Warning: Max number of supported memory regions reached\n"); + return -ENOSPC; } #else static int __init process_shm_node(const void *fdt, int node, diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index a6ce22b423..91feb8f37c 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -49,6 +49,51 @@ boolean_param("ext_regions", opt_ext_regions); static u64 __initdata dom0_mem; static bool __initdata dom0_mem_set; +#ifdef CONFIG_STATIC_SHM +#define NR_SHM_BANKS 16 + +/* + * A static shared memory node could be banked with a statically + * configured host memory bank, or a set of arbitrary host memory + * banks allocated from heap. + */ +struct shm_meminfo { + unsigned int nr_banks; + struct membank bank[NR_SHM_BANKS]; + paddr_t tot_size; +}; + +/* + * struct shm_memnode holds banked host memory info for a static + * shared memory node + */ +struct shm_memnode { + char shm_id[MAX_SHM_ID_LENGTH]; + struct shm_meminfo meminfo; +}; + +static __initdata struct { + unsigned int nr_nodes; + struct shm_memnode node[NR_MEM_BANKS]; +} shm_memdata; + +static void __init retrieve_shm_meminfo(void *mem, unsigned int *max_mem_banks, + struct membank **bank, + unsigned int **nr_banks) +{ + struct shm_meminfo *meminfo = (struct shm_meminfo *)mem; + + if ( max_mem_banks ) + *max_mem_banks = NR_SHM_BANKS; + + if ( nr_banks ) + *nr_banks = &(meminfo->nr_banks); + + if ( bank ) + *bank = meminfo->bank; +} +#endif + static int __init parse_dom0_mem(const char *s) { dom0_mem_set = true; @@ -95,6 +140,9 @@ static void __init retrieve_meminfo(void *mem, unsigned int *max_mem_banks, retrieve_fn __initdata retrievers[MAX_MEMINFO_TYPE] = { [NORMAL_MEMINFO] = retrieve_meminfo, +#ifdef CONFIG_STATIC_SHM + [SHM_MEMINFO] = retrieve_shm_meminfo, +#endif }; unsigned int __init dom0_max_vcpus(void) @@ -842,6 +890,24 @@ static int __init acquire_nr_borrower_domain(const char *shm_id, return -EINVAL; } +static struct shm_memnode * __init find_shm_memnode(const char *shm_id) +{ + unsigned int i; + struct shm_memnode *shm_memnode; + + for ( i = 0 ; i < shm_memdata.nr_nodes; i++ ) + { + shm_memnode = &shm_memdata.node[i]; + + if ( strcmp(shm_id, shm_memnode->shm_id) == 0 ) + return shm_memnode; + } + + shm_memnode = &shm_memdata.node[i]; + safe_strcpy(shm_memnode->shm_id, shm_id); + return shm_memnode; +} + /* * This function checks whether the static shared memory region is * already allocated to dom_io. 
@@ -987,6 +1053,102 @@ static int __init append_shm_bank_to_domain(struct kernel_info *kinfo, return 0; } +/* Parse "xen,shared-mem" property in static shared memory node */ +static struct shm_memnode * __init parse_shm_property(struct domain *d, + const struct dt_device_node *dt_node, + bool *paddr_assigned, paddr_t *gbase, + const char *shm_id) +{ + uint32_t addr_cells, size_cells; + const struct dt_property *prop; + const __be32 *cells; + uint32_t len; + unsigned int i; + paddr_t pbase, psize; + struct shm_memnode *shm_memnode; + + /* xen,shared-mem = ; And pbase could be optional. */ + prop = dt_find_property(dt_node, "xen,shared-mem", &len); + BUG_ON(!prop); + cells = (const __be32 *)prop->value; + + addr_cells = dt_n_addr_cells(dt_node); + size_cells = dt_n_size_cells(dt_node); + if ( len != dt_cells_to_size(addr_cells + size_cells + addr_cells) ) + { + /* pbase is not provided in "xen,shared-mem" */ + if ( len == dt_cells_to_size(size_cells + addr_cells) ) + *paddr_assigned = false; + else + { + printk("fdt: invalid `xen,shared-mem` property.\n"); + return NULL; + } + } + + /* + * If we firstly process the shared memory node with unique "xen,shm-id", + * we allocate a new member "shm_memnode" for it in shm_memdata. + */ + shm_memnode = find_shm_memnode(shm_id); + BUG_ON(!shm_memnode); + if ( !(*paddr_assigned) ) + { + device_tree_get_reg(&cells, addr_cells, size_cells, gbase, &psize); + if ( shm_memnode->meminfo.tot_size == 0 ) + goto out_early1; + else + goto out_early2; + } + else + { + device_tree_get_reg(&cells, addr_cells, addr_cells, &pbase, gbase); + psize = dt_read_number(cells, size_cells); + } + + /* + * The static shared memory node #dt_node is banked with a + * statically configured host memory bank. + */ + shm_memnode->meminfo.bank[0].start = pbase; + shm_memnode->meminfo.bank[0].size = psize; + shm_memnode->meminfo.nr_banks = 1; + + if ( !IS_ALIGNED(pbase, PAGE_SIZE) ) + { + printk("%pd: physical address 0x%"PRIpaddr" is not suitably aligned.\n", + d, pbase); + return NULL; + } + + for ( i = 0; i < PFN_DOWN(psize); i++ ) + if ( !mfn_valid(mfn_add(maddr_to_mfn(pbase), i)) ) + { + printk("%pd: invalid physical MFN 0x%"PRI_mfn"\n for static shared memory node\n", + d, mfn_x(mfn_add(maddr_to_mfn(pbase), i))); + return NULL; + } + + out_early1: + if ( !IS_ALIGNED(psize, PAGE_SIZE) ) + { + printk("%pd: size 0x%"PRIpaddr" is not suitably aligned\n", + d, psize); + return NULL; + } + shm_memnode->meminfo.tot_size = psize; + + out_early2: + if ( !IS_ALIGNED(*gbase, PAGE_SIZE) ) + { + printk("%pd: guest address 0x%"PRIpaddr" is not suitably aligned.\n", + d, *gbase); + return NULL; + } + + return shm_memnode; +} + static int __init process_shm(struct domain *d, struct kernel_info *kinfo, const struct dt_device_node *node) { @@ -994,51 +1156,17 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo, dt_for_each_child_node(node, shm_node) { - const struct dt_property *prop; - const __be32 *cells; - uint32_t addr_cells, size_cells; paddr_t gbase, pbase, psize; int ret = 0; - unsigned int i; const char *role_str; const char *shm_id; bool owner_dom_io = true; + bool paddr_assigned = true; + struct shm_memnode *shm_memnode; if ( !dt_device_is_compatible(shm_node, "xen,domain-shared-memory-v1") ) continue; - /* - * xen,shared-mem = ; - * TODO: pbase is optional. 
-         */
-        addr_cells = dt_n_addr_cells(shm_node);
-        size_cells = dt_n_size_cells(shm_node);
-        prop = dt_find_property(shm_node, "xen,shared-mem", NULL);
-        BUG_ON(!prop);
-        cells = (const __be32 *)prop->value;
-        device_tree_get_reg(&cells, addr_cells, addr_cells, &pbase, &gbase);
-        psize = dt_read_number(cells, size_cells);
-        if ( !IS_ALIGNED(pbase, PAGE_SIZE) || !IS_ALIGNED(gbase, PAGE_SIZE) )
-        {
-            printk("%pd: physical address 0x%"PRIpaddr", or guest address 0x%"PRIpaddr" is not suitably aligned.\n",
-                   d, pbase, gbase);
-            return -EINVAL;
-        }
-        if ( !IS_ALIGNED(psize, PAGE_SIZE) )
-        {
-            printk("%pd: size 0x%"PRIpaddr" is not suitably aligned\n",
-                   d, psize);
-            return -EINVAL;
-        }
-
-        for ( i = 0; i < PFN_DOWN(psize); i++ )
-            if ( !mfn_valid(mfn_add(maddr_to_mfn(pbase), i)) )
-            {
-                printk("%pd: invalid physical address 0x%"PRI_mfn"\n",
-                       d, mfn_x(mfn_add(maddr_to_mfn(pbase), i)));
-                return -EINVAL;
-            }
-
         /*
          * "role" property is optional and if it is defined explicitly,
          * then the owner domain is not the default "dom_io" domain.
@@ -1053,6 +1181,13 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
         }
         BUG_ON((strlen(shm_id) <= 0) || (strlen(shm_id) >= MAX_SHM_ID_LENGTH));
 
+        shm_memnode = parse_shm_property(d, shm_node, &paddr_assigned, &gbase,
+                                         shm_id);
+        if ( !shm_memnode )
+            return -EINVAL;
+        pbase = shm_memnode->meminfo.bank[0].start;
+        psize = shm_memnode->meminfo.bank[0].size;
+
         /*
          * DOMID_IO is a fake domain and is not described in the Device-Tree.
          * Therefore when the owner of the shared region is DOMID_IO, we will
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index c42fd83db2..fc5ca11e08 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -59,6 +59,9 @@ struct meminfo {
 
 enum meminfo_type {
     NORMAL_MEMINFO,
+#ifdef CONFIG_STATIC_SHM
+    SHM_MEMINFO,
+#endif
     MAX_MEMINFO_TYPE,
 };
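[ Editor's note: for reference, the two accepted layouts of "xen,shared-mem" that this patch distinguishes look roughly as follows. This is an illustrative device-tree fragment assuming #address-cells = <1> and #size-cells = <1>; the node names, addresses and sizes are made-up examples, not taken from the series: ]

    /* Host address statically configured: <paddr gaddr size> */
    domU1-shared-mem@4fd00000 {
        compatible = "xen,domain-shared-memory-v1";
        xen,shm-id = "my-shared-mem-0";
        xen,shared-mem = <0x4fd00000 0x60000000 0x20000>;
    };

    /* Host address omitted: <gaddr size>; the shorter property length
     * flips paddr_assigned to false and Xen backs the region from heap. */
    domU2-shared-mem {
        compatible = "xen,domain-shared-memory-v1";
        xen,shm-id = "my-shared-mem-1";
        xen,shared-mem = <0x60000000 0x20000>;
    };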
From patchwork Thu Feb 23 05:41:02 2023
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Stefano Stabellini, Julien Grall, Bertrand Marquis, Volodymyr Babchuk
Subject: [PATCH v2 5/8] xen/arm: support static shared memory when host address not provided
Date: Thu, 23 Feb 2023 13:41:02 +0800
Message-Id: <20230223054105.2357217-6-Penny.Zheng@arm.com>
In-Reply-To: <20230223054105.2357217-1-Penny.Zheng@arm.com>
References: <20230223054105.2357217-1-Penny.Zheng@arm.com>

In order to support static shared memory when the host address is not provided, we make the following modifications:
- We let Xen allocate memory from the heap for static shared memory at the first domain that references it, no matter whether that domain is an owner or a borrower.
- In acquire_shared_memory_bank, as static shared memory has in that case already been allocated from the heap, we assign it to the owner domain using "assign_pages".
- A new function, get_shm_pages_reference, adds as many additional references as there are borrowers.
- We implement a new helper, "add_foreign_mapping_for_borrower", to set up the foreign memory mapping for a borrower. (A condensed sketch of the owner/borrower flow follows this patch's diff.)

Instead of using multiple function parameters to pass various shm-related info, like the host physical address, SHMID, etc., and with the introduction of the new struct "shm_memnode" holding the banked host memory info, we switch to passing "shm_memnode" as the function parameter to replace them all, to make the code clearer and tidier.

Signed-off-by: Penny Zheng
---
v1 -> v2:
- combine commits 4 - 6 of series 1
- Adapt to the changes from introducing "struct shm_memnode"
---
 xen/arch/arm/domain_build.c | 222 +++++++++++++++++++++++++-----------
 1 file changed, 155 insertions(+), 67 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 91feb8f37c..9b4aabaf22 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -869,6 +869,11 @@ static void __init assign_static_memory_11(struct domain *d,
 }
 
 #ifdef CONFIG_STATIC_SHM
+static bool __init is_shm_allocated_from_heap(struct shm_memnode *node)
+{
+    return (node->meminfo.nr_banks != 0);
+}
+
 static int __init acquire_nr_borrower_domain(const char *shm_id,
                                              unsigned long *nr_borrowers)
 {
@@ -912,12 +917,12 @@ static struct shm_memnode * __init find_shm_memnode(const char *shm_id)
  * This function checks whether the static shared memory region is
  * already allocated to dom_io.
 */
-static bool __init is_shm_allocated_to_domio(paddr_t pbase)
+static bool __init is_shm_allocated_to_domio(struct shm_memnode *node)
 {
     struct page_info *page;
     struct domain *d;
 
-    page = maddr_to_page(pbase);
+    page = maddr_to_page(node->meminfo.bank[0].start);
     d = page_get_owner_and_reference(page);
     if ( d == NULL )
         return false;
@@ -935,67 +940,129 @@ static int __init process_shm_node(const void *fdt, int node,
 }
 
 static mfn_t __init acquire_shared_memory_bank(struct domain *d,
-                                               paddr_t pbase, paddr_t psize)
+                                               struct shm_meminfo *meminfo,
+                                               bool paddr_assigned)
 {
-    mfn_t smfn;
-    unsigned long nr_pfns;
     int res;
+    unsigned int i = 0;
 
-    /*
-     * Pages of statically shared memory shall be included
-     * into domain_tot_pages().
-     */
-    nr_pfns = PFN_DOWN(psize);
-    if ( (UINT_MAX - d->max_pages) < nr_pfns )
+    for ( ; i < meminfo->nr_banks; i++ )
     {
-        printk(XENLOG_ERR "%pd: Over-allocation for d->max_pages: %lu.\n",
-               d, nr_pfns);
+        paddr_t pbase = meminfo->bank[i].start, psize = meminfo->bank[i].size;
+        unsigned long nr_pfns;
+
+        /*
+         * Pages of statically shared memory shall be included
+         * into domain_tot_pages().
+         */
+        nr_pfns = PFN_DOWN(psize);
+        if ( (UINT_MAX - d->max_pages) < nr_pfns )
+        {
+            printk(XENLOG_ERR "%pd: Over-allocation for d->max_pages: %lu.\n",
+                   d, nr_pfns);
+            return INVALID_MFN;
+        }
+        d->max_pages += nr_pfns;
+
+        if ( paddr_assigned )
+        {
+            res = acquire_domstatic_pages(d, maddr_to_mfn(pbase), nr_pfns, 0);
+            if ( res )
+            {
+                printk(XENLOG_ERR
+                       "%pd: failed to acquire static memory: %d.\n", d, res);
+                goto fail;
+            }
+        }
+        else
+            /*
+             * When host address is not provided, static shared memory is
+             * allocated from heap and shall be assigned to owner domain.
+             */
+            if ( assign_pages(maddr_to_page(pbase), nr_pfns, d, 0) )
+                goto fail;
+    }
+
+    return maddr_to_mfn(meminfo->bank[0].start);
+
+ fail:
+    /*
+     * i is unsigned, so count down with i-- rather than --i >= 0 (which
+     * would never terminate); undo the current and all previous banks,
+     * whose nr_pfns were already added to d->max_pages above.
+     */
+    do
+        d->max_pages -= PFN_DOWN(meminfo->bank[i].size);
+    while ( i-- > 0 );
         return INVALID_MFN;
+}
+
+static int __init get_shm_pages_reference(struct domain *d,
+                                          struct shm_meminfo *meminfo,
+                                          unsigned long count)
+{
+    struct page_info *page;
+    unsigned int i = 0, j;
+
+    for ( ; i < meminfo->nr_banks; i++ )
+    {
+        paddr_t pbase = meminfo->bank[i].start, psize = meminfo->bank[i].size;
+        unsigned long nr_pages = PFN_DOWN(psize);
+
+        page = maddr_to_page(pbase);
+        for ( j = 0; j < nr_pages; j++ )
+        {
+            if ( !get_page_nr(page + j, d, count) )
+            {
+                printk(XENLOG_ERR
+                       "Failed to add %lu references to page %"PRI_mfn".\n",
+                       count, mfn_x(page_to_mfn(page + j)));
+                goto fail;
+            }
+        }
     }
-    d->max_pages += nr_pfns;
 
-    smfn = maddr_to_mfn(pbase);
-    res = acquire_domstatic_pages(d, smfn, nr_pfns, 0);
-    if ( res )
+    return 0;
+
+ fail:
+    /* i and j are unsigned: count down with the post-decrement idiom. */
+    while ( j-- > 0 )
+        put_page_nr(page + j, count);
+    while ( i-- > 0 )
     {
-        printk(XENLOG_ERR
-               "%pd: failed to acquire static memory: %d.\n", d, res);
-        d->max_pages -= nr_pfns;
-        return INVALID_MFN;
+        page = maddr_to_page(meminfo->bank[i].start);
+        j = PFN_DOWN(meminfo->bank[i].size);
+        while ( j-- > 0 )
+            put_page_nr(page + j, count);
     }
+    return -EINVAL;
 
-    return smfn;
 }
-
 static int __init assign_shared_memory(struct domain *d,
-                                       paddr_t pbase, paddr_t psize,
-                                       paddr_t gbase, const char *shm_id)
+                                       struct shm_memnode *node, paddr_t gbase,
+                                       bool paddr_assigned)
 {
     mfn_t smfn;
-    int ret = 0;
-    unsigned long nr_pages, nr_borrowers, i;
-    struct page_info *page;
-
-    printk("%pd: allocate static shared memory BANK %#"PRIpaddr"-%#"PRIpaddr".\n",
-           d, pbase, pbase + psize);
+    int ret;
+    unsigned long nr_borrowers, i;
+    struct shm_meminfo *meminfo = &node->meminfo;
 
-    smfn = 
acquire_shared_memory_bank(d, pbase, psize); + smfn = acquire_shared_memory_bank(d, meminfo, paddr_assigned); if ( mfn_eq(smfn, INVALID_MFN) ) return -EINVAL; - /* - * DOMID_IO is not auto-translated (i.e. it sees RAM 1:1). So we do not need - * to create mapping in the P2M. - */ - nr_pages = PFN_DOWN(psize); - if ( d != dom_io ) + for ( i = 0; i < meminfo->nr_banks; i++ ) { - ret = guest_physmap_add_pages(d, gaddr_to_gfn(gbase), smfn, - PFN_DOWN(psize)); - if ( ret ) + paddr_t pbase = meminfo->bank[i].start, psize = meminfo->bank[i].size; + + /* + * DOMID_IO is not auto-translated (i.e. it sees RAM 1:1). So we do + * not need to create mapping in the P2M. + */ + if ( d != dom_io ) { - printk(XENLOG_ERR "Failed to map shared memory to %pd.\n", d); - return ret; + ret = guest_physmap_add_pages(d, gaddr_to_gfn(gbase), + maddr_to_mfn(pbase), + PFN_DOWN(psize)); + if ( ret ) + { + printk(XENLOG_ERR "Failed to map shared memory to %pd.\n", d); + return ret; + } + gbase += psize; } } @@ -1003,7 +1070,7 @@ static int __init assign_shared_memory(struct domain *d, * Get the right amount of references per page, which is the number of * borrower domains. */ - ret = acquire_nr_borrower_domain(shm_id, &nr_borrowers); + ret = acquire_nr_borrower_domain(node->shm_id, &nr_borrowers); if ( ret ) return ret; @@ -1015,24 +1082,30 @@ static int __init assign_shared_memory(struct domain *d, * So if the borrower is created first, it will cause adding pages * in the P2M without reference. */ - page = mfn_to_page(smfn); - for ( i = 0; i < nr_pages; i++ ) + return get_shm_pages_reference(d, meminfo, nr_borrowers); +} + +static int __init add_foreign_mapping_for_borrower(struct domain *d, + struct shm_memnode *node, + paddr_t gbase) +{ + unsigned int i = 0; + struct shm_meminfo *meminfo = &node->meminfo; + + for ( ; i < meminfo->nr_banks; i++ ) { - if ( !get_page_nr(page + i, d, nr_borrowers) ) - { - printk(XENLOG_ERR - "Failed to add %lu references to page %"PRI_mfn".\n", - nr_borrowers, mfn_x(smfn) + i); - goto fail; - } + paddr_t pbase = meminfo->bank[i].start, psize = meminfo->bank[i].size; + int ret; + + /* Set up P2M foreign mapping for borrower domain. */ + ret = map_regions_p2mt(d, _gfn(PFN_UP(gbase)), PFN_DOWN(psize), + _mfn(PFN_UP(pbase)), p2m_map_foreign_rw); + if ( ret ) + return ret; + gbase += psize; } return 0; - - fail: - while ( --i >= 0 ) - put_page_nr(page + i, nr_borrowers); - return ret; } static int __init append_shm_bank_to_domain(struct kernel_info *kinfo, @@ -1156,7 +1229,7 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo, dt_for_each_child_node(node, shm_node) { - paddr_t gbase, pbase, psize; + paddr_t gbase; int ret = 0; const char *role_str; const char *shm_id; @@ -1185,15 +1258,30 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo, shm_id); if ( !shm_memnode ) return -EINVAL; - pbase = shm_memnode->meminfo.bank[0].start; - psize = shm_memnode->meminfo.bank[0].size; + + /* + * When host address is not provided in "xen,shared-mem", + * we let Xen allocate memory from heap at first domain. + */ + if ( !paddr_assigned && !is_shm_allocated_from_heap(shm_memnode) ) + { + if ( !allocate_domheap_memory(NULL, shm_memnode->meminfo.tot_size, + (void *)&shm_memnode->meminfo, + SHM_MEMINFO) ) + { + printk(XENLOG_ERR + "Failed to allocate (%"PRIpaddr"MB) pages as static shared memory from heap\n", + shm_memnode->meminfo.tot_size >> 20); + return -EINVAL; + } + } /* * DOMID_IO is a fake domain and is not described in the Device-Tree. 
         /*
         * DOMID_IO is a fake domain and is not described in the Device-Tree.
          * Therefore when the owner of the shared region is DOMID_IO, we will
          * only find the borrowers.
          */
-        if ( (owner_dom_io && !is_shm_allocated_to_domio(pbase)) ||
+        if ( (owner_dom_io && !is_shm_allocated_to_domio(shm_memnode)) ||
              (!owner_dom_io && strcmp(role_str, "owner") == 0) )
         {
             /*
@@ -1201,16 +1297,14 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
              * specified, so they should be assigned to dom_io.
              */
             ret = assign_shared_memory(owner_dom_io ? dom_io : d,
-                                       pbase, psize, gbase, shm_id);
+                                       shm_memnode, gbase, paddr_assigned);
             if ( ret )
                 return ret;
         }
 
         if ( owner_dom_io || (strcmp(role_str, "borrower") == 0) )
         {
-            /* Set up P2M foreign mapping for borrower domain. */
-            ret = map_regions_p2mt(d, _gfn(PFN_UP(gbase)), PFN_DOWN(psize),
-                                   _mfn(PFN_UP(pbase)), p2m_map_foreign_rw);
+            ret = add_foreign_mapping_for_borrower(d, shm_memnode, gbase);
             if ( ret )
                 return ret;
         }
@@ -1219,7 +1313,9 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
          * Record static shared memory region info for later setting
          * up shm-node in guest device tree.
          */
-        ret = append_shm_bank_to_domain(kinfo, gbase, psize, shm_id);
+        ret = append_shm_bank_to_domain(kinfo, gbase,
+                                        shm_memnode->meminfo.tot_size,
+                                        shm_memnode->shm_id);
         if ( ret )
             return ret;
     }
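
A note on the reference discipline above, as a minimal sketch rather than part of the patch (the helper names here are illustrative; only get_page_nr()/put_page_nr() come from earlier in this series): the owner side front-loads one reference per borrower on every page, and each borrower later drops exactly one reference per page, so the memory stays alive until the last borrower is gone.

/* Owner side: take nr_borrowers references on every shared page. */
static int shm_owner_take_refs(struct domain *owner, struct page_info *page,
                               unsigned long nr_pages,
                               unsigned long nr_borrowers)
{
    unsigned long i;

    for ( i = 0; i < nr_pages; i++ )
        if ( !get_page_nr(page + i, owner, nr_borrowers) )
        {
            /* Unwind pages [0, i) before reporting failure. */
            while ( i-- > 0 )
                put_page_nr(page + i, nr_borrowers);
            return -EINVAL;
        }

    return 0;
}

/* Borrower teardown: give back exactly one reference per page. */
static void shm_borrower_drop_refs(struct page_info *page,
                                   unsigned long nr_pages)
{
    unsigned long i;

    for ( i = 0; i < nr_pages; i++ )
        put_page_nr(page + i, 1);
}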
From patchwork Thu Feb 23 05:41:03 2023
X-Patchwork-Submitter: Penny Zheng
X-Patchwork-Id: 13149877
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Penny Zheng, Stefano Stabellini, Julien Grall,
 Bertrand Marquis, Volodymyr Babchuk, Penny Zheng
Subject: [PATCH v2 6/8] xen/arm: remove shm holes for extended regions
Date: Thu, 23 Feb 2023 13:41:03 +0800
Message-Id: <20230223054105.2357217-7-Penny.Zheng@arm.com>
In-Reply-To: <20230223054105.2357217-1-Penny.Zheng@arm.com>
References: <20230223054105.2357217-1-Penny.Zheng@arm.com>

Static shared memory acts as reserved memory in the guest, so it shall
be excluded from the extended regions.

Extended regions are handled in three different scenarios: a normal
DomU, a direct-map domain with the IOMMU on, and a direct-map domain
with the IOMMU off.

For a normal DomU, we create a new function "remove_shm_holes_for_domU",
which first converts the original output into the "struct rangeset"
format, then uses "remove_shm_from_rangeset" to remove the static shared
memory ranges from it.

For a direct-map domain with the IOMMU on, after fetching the guest shm
info from "kinfo", we use "remove_shm_from_rangeset" to remove the
static shared memory ranges.

For a direct-map domain with the IOMMU off, static shared memory has
already been accounted for through the reserved memory banks, so nothing
needs to be done.
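
In code terms, the DomU path boils down to the following sketch (illustrative only: error handling is trimmed, the function name is made up, the reporting window is widened to the whole range, and add_ext_regions() is the existing callback the patch reuses):

static int trim_shm_sketch(const struct kernel_info *kinfo,
                           struct meminfo *ext)
{
    struct rangeset *holes = rangeset_new(NULL, NULL, 0);
    unsigned int i;
    int res;

    if ( !holes )
        return -ENOMEM;

    /* 1. Pour the current extended regions into a rangeset. */
    for ( i = 0; i < ext->nr_banks; i++ )
        rangeset_add_range(holes, ext->bank[i].start,
                           ext->bank[i].start + ext->bank[i].size - 1);

    /* 2. Punch out every static shared memory bank. */
    for ( i = 0; i < kinfo->shminfo.nr_banks; i++ )
    {
        const struct membank *b = &kinfo->shminfo.bank[i].membank;

        rangeset_remove_range(holes, b->start, b->start + b->size - 1);
    }

    /* 3. Read the surviving ranges back into the meminfo. */
    ext->nr_banks = 0;
    res = rangeset_report_ranges(holes, 0, ~0UL, add_ext_regions, ext);

    rangeset_destroy(holes);
    return res;
}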
Signed-off-by: Penny Zheng
---
v1 -> v2:
- new commit
---
 xen/arch/arm/domain_build.c | 98 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 97 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 9b4aabaf22..4cd1e3d433 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1914,6 +1914,32 @@ static int __init handle_pci_range(const struct dt_device_node *dev,
     return 0;
 }
 
+static int __init remove_shm_from_rangeset(const struct kernel_info *kinfo,
+                                           struct rangeset *rangeset)
+{
+    unsigned int i;
+
+    /* Remove static shared memory regions */
+    for ( i = 0; i < kinfo->shminfo.nr_banks; i++ )
+    {
+        struct membank membank = kinfo->shminfo.bank[i].membank;
+        paddr_t start, end;
+        int res;
+
+        start = membank.start;
+        end = membank.start + membank.size - 1;
+        res = rangeset_remove_range(rangeset, start, end);
+        if ( res )
+        {
+            printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
+                   start, end);
+            return -EINVAL;
+        }
+    }
+
+    return 0;
+}
+
 /*
  * Find the holes in the Host DT which can be exposed to Dom0 as extended
  * regions for the special memory mappings. In order to calculate regions
@@ -1922,6 +1948,8 @@ static int __init handle_pci_range(const struct dt_device_node *dev,
  *   - MMIO
  *   - Host RAM
  *   - PCI aperture
+ *   - Static shared memory regions, which are described by special property
+ *     "xen,static-shm"
  */
 static int __init find_memory_holes(const struct kernel_info *kinfo,
                                     struct meminfo *ext_regions)
@@ -1997,6 +2025,14 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
         }
     }
 
+    /* Remove static shared memory regions */
+    if ( kinfo->shminfo.nr_banks != 0 )
+    {
+        res = remove_shm_from_rangeset(kinfo, mem_holes);
+        if ( res )
+            goto out;
+    }
+
     start = 0;
     end = (1ULL << p2m_ipa_bits) - 1;
     res = rangeset_report_ranges(mem_holes, start, end,
@@ -2012,6 +2048,66 @@ out:
     return res;
 }
 
+static int __init remove_shm_holes_for_domU(const struct kernel_info *kinfo,
+                                            struct meminfo *orig_ext)
+{
+    struct rangeset *guest_holes;
+    unsigned int i = 0, tail;
+    int res;
+    paddr_t start, end;
+
+    /* No static shared memory region. */
+    if ( kinfo->shminfo.nr_banks == 0 )
+        return 0;
+
+    /* No extended region to trim: bank[0]/bank[tail] below need one. */
+    if ( orig_ext->nr_banks == 0 )
+        return -ENOENT;
+
+    dt_dprintk("Remove static shared memory holes for extended regions of DomU\n");
+
+    guest_holes = rangeset_new(NULL, NULL, 0);
+    if ( !guest_holes )
+        return -ENOMEM;
+
+    for ( ; i < orig_ext->nr_banks; i++ )
+    {
+        start = orig_ext->bank[i].start;
+        end = start + orig_ext->bank[i].size - 1;
+
+        res = rangeset_add_range(guest_holes, start, end);
+        if ( res )
+        {
+            printk(XENLOG_ERR "Failed to add: %#"PRIpaddr"->%#"PRIpaddr"\n",
+                   start, end);
+            goto out;
+        }
+    }
+
+    /* Remove static shared memory regions */
+    res = remove_shm_from_rangeset(kinfo, guest_holes);
+    if ( res )
+        goto out;
+
+    tail = orig_ext->nr_banks - 1;
+    start = orig_ext->bank[0].start;
+    end = orig_ext->bank[tail].start + orig_ext->bank[tail].size - 1;
+
+    /* Reset original extended regions to hold new value */
+    orig_ext->nr_banks = 0;
+    res = rangeset_report_ranges(guest_holes, start, end,
+                                 add_ext_regions, orig_ext);
+    if ( res )
+        orig_ext->nr_banks = 0;
+    else if ( !orig_ext->nr_banks )
+        res = -ENOENT;
+
+out:
+    rangeset_destroy(guest_holes);
+
+    return res;
+}
+
 static int __init find_domU_holes(const struct kernel_info *kinfo,
                                   struct meminfo *ext_regions)
 {
@@ -2039,7 +2135,7 @@ static int __init find_domU_holes(const struct kernel_info *kinfo,
         res = 0;
     }
 
-    return res;
+    return remove_shm_holes_for_domU(kinfo, ext_regions);
 }
 
 static int __init make_hypervisor_node(struct domain *d,
From patchwork Thu Feb 23 05:41:04 2023
X-Patchwork-Submitter: Penny Zheng
X-Patchwork-Id: 13149878
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Penny Zheng, Stefano Stabellini, Julien Grall,
 Bertrand Marquis, Volodymyr Babchuk, Penny Zheng
Subject: [PATCH v2 7/8] xen/p2m: put reference for superpage
Date: Thu, 23 Feb 2023 13:41:04 +0800
Message-Id: <20230223054105.2357217-8-Penny.Zheng@arm.com>
In-Reply-To: <20230223054105.2357217-1-Penny.Zheng@arm.com>
References: <20230223054105.2357217-1-Penny.Zheng@arm.com>

Static shared memory is mapped as foreign memory, and such mappings may
well end up superpage-mapped. However, p2m_put_l3_page() cannot handle
superpages today.

This commit implements a new function, p2m_put_superpage(), to handle
superpages, specifically to help put the extra references taken on
foreign superpages.
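
To give a sense of scale, a standalone back-of-the-envelope sketch, assuming the usual LPAE geometry of 4KB granules and 512 entries per table (i.e. XEN_PT_LPAE_ENTRIES == 512):

#include <stdio.h>

int main(void)
{
    const unsigned long entries = 512;     /* XEN_PT_LPAE_ENTRIES */
    unsigned long l2 = entries;            /* 4KB pages per 2MB superpage */
    unsigned long l1 = entries * entries;  /* 4KB pages per 1GB superpage */

    /* Each of these pages needs its reference dropped individually. */
    printf("level-2 (2MB) superpage: %lu pages\n", l2);   /* 512 */
    printf("level-1 (1GB) superpage: %lu pages\n", l1);   /* 262144 */
    return 0;
}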
Signed-off-by: Penny Zheng
---
v1 -> v2:
- new commit
---
 xen/arch/arm/p2m.c | 61 ++++++++++++++++++++++++++++++++++-------------
 1 file changed, 45 insertions(+), 16 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 948f199d84..e8fa61fcbc 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -851,17 +851,9 @@ static int p2m_mem_access_radix_set(struct p2m_domain *p2m, gfn_t gfn,
     return rc;
 }
 
-/*
- * Put any references on the single 4K page referenced by pte.
- * TODO: Handle superpages, for now we only take special references for leaf
- * pages (specifically foreign ones, which can't be super mapped today).
- */
-static void p2m_put_l3_page(const lpae_t pte)
+/* Put any references on the single 4K page referenced by mfn. */
+static void p2m_put_l3_page(mfn_t mfn, unsigned int type)
 {
-    mfn_t mfn = lpae_get_mfn(pte);
-
-    ASSERT(p2m_is_valid(pte));
-
     /*
      * TODO: Handle other p2m types
      *
@@ -869,16 +861,54 @@ static void p2m_put_l3_page(const lpae_t pte)
      * flush the TLBs if the page is reallocated before the end of
      * this loop.
      */
-    if ( p2m_is_foreign(pte.p2m.type) )
+    if ( p2m_is_foreign(type) )
     {
         ASSERT(mfn_valid(mfn));
         put_page(mfn_to_page(mfn));
     }
     /* Detect the xenheap page and mark the stored GFN as invalid. */
-    else if ( p2m_is_ram(pte.p2m.type) && is_xen_heap_mfn(mfn) )
+    else if ( p2m_is_ram(type) && is_xen_heap_mfn(mfn) )
         page_set_xenheap_gfn(mfn_to_page(mfn), INVALID_GFN);
 }
 
+/* Put any references on the superpage referenced by mfn. */
+static void p2m_put_superpage(mfn_t mfn, unsigned int next_level,
+                              unsigned int type)
+{
+    unsigned int i;
+    unsigned int level_order = XEN_PT_LEVEL_ORDER(next_level);
+
+    for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ )
+    {
+        if ( next_level == 3 )
+            p2m_put_l3_page(mfn, type);
+        else
+            p2m_put_superpage(mfn, next_level + 1, type);
+
+        mfn = mfn_add(mfn, 1 << level_order);
+    }
+}
+
+/* Put any references on the page referenced by pte. */
+static void p2m_put_page(const lpae_t pte, unsigned int level)
+{
+    mfn_t mfn = lpae_get_mfn(pte);
+
+    ASSERT(p2m_is_valid(pte));
+
+    /*
+     * The mapping is either a first-level (1GB) or second-level (2MB)
+     * superpage, or a third-level (4KB) page.
+     */
+    if ( p2m_is_superpage(pte, level) )
+        return p2m_put_superpage(mfn, level + 1, pte.p2m.type);
+    else
+    {
+        ASSERT(level == 3);
+        return p2m_put_l3_page(mfn, pte.p2m.type);
+    }
+}
+
 /* Free lpae sub-tree behind an entry */
 static void p2m_free_entry(struct p2m_domain *p2m,
                            lpae_t entry, unsigned int level)
@@ -907,9 +937,8 @@ static void p2m_free_entry(struct p2m_domain *p2m,
 #endif
         p2m->stats.mappings[level]--;
 
-        /* Nothing to do if the entry is a super-page. */
-        if ( level == 3 )
-            p2m_put_l3_page(entry);
+        p2m_put_page(entry, level);
+
         return;
     }
 
@@ -1752,7 +1781,7 @@ void p2m_final_teardown(struct domain *d)
  * p2m_final_teardown() is called either after domain_relinquish_resources()
  * where relinquish_p2m_mapping() has been called, or from failure path of
  * domain_create()/arch_domain_create() where mappings that require
- * p2m_put_l3_page() should never be created. For the latter case, also see
+ * p2m_put_page() should never be created. For the latter case, also see
  * comment on top of the p2m_set_entry() for more info.
  */
From patchwork Thu Feb 23 05:41:05 2023
X-Patchwork-Submitter: Penny Zheng
X-Patchwork-Id: 13149879
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Penny Zheng, Stefano Stabellini, Julien Grall,
 Bertrand Marquis, Volodymyr Babchuk, Penny Zheng
Subject: [PATCH v2 8/8] xen/docs: refine docs about static shared memory
Date: Thu, 23 Feb 2023 13:41:05 +0800
Message-Id: <20230223054105.2357217-9-Penny.Zheng@arm.com>
In-Reply-To: <20230223054105.2357217-1-Penny.Zheng@arm.com>
References: <20230223054105.2357217-1-Penny.Zheng@arm.com>

This commit amends the docs (docs/misc/arm/device-tree/booting.txt) to
cover the new scenario where the host address is not provided in the
"xen,shared-mem" property, and adds a new example with a detailed
explanation.

We also fix some incorrect information in the docs, e.g. the SHMID is
"my-shared-mem-1", not "0x1".
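
For quick reference, the two accepted shapes of the property are shown
below (the cell count of each field follows the parent node's
#address-cells/#size-cells, and the bracketed fields are placeholders):

e.g. xen,shared-mem = < [host physical address] [guest address] [size] >;
e.g. xen,shared-mem = < [guest address] [size] >;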
Signed-off-by: Penny Zheng
---
v1 -> v2:
- no new changes
---
 docs/misc/arm/device-tree/booting.txt | 52 ++++++++++++++++++++-------
 1 file changed, 39 insertions(+), 13 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 3879340b5e..ce9e71e087 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -574,7 +574,7 @@ communication.
 An array takes a physical address, which is the base address of the
 shared memory region in host physical address space, a size, and a guest
 physical address, as the target address of the mapping.
-e.g. xen,shared-mem = < [host physical address] [guest address] [size] >
+e.g. xen,shared-mem = < [host physical address] [guest address] [size] >;
 
 It shall also meet the following criteria:
 1) If the SHM ID matches with an existing region, the address range of the
@@ -585,8 +585,8 @@ communication.
 The number of cells for the host address (and size) is the same as the
 guest pseudo-physical address and they are inherited from the parent node.
 
-Host physical address is optional, when missing Xen decides the location
-(currently unimplemented).
+Host physical address is optional; when missing, Xen decides the location.
+e.g. xen,shared-mem = < [guest address] [size] >;
 
 - role (Optional)
 
@@ -613,7 +613,7 @@ chosen {
         role = "owner";
         xen,shm-id = "my-shared-mem-0";
         xen,shared-mem = <0x10000000 0x10000000 0x10000000>;
-    }
+    };
 
     domU1 {
         compatible = "xen,domain";
@@ -624,25 +624,36 @@ chosen {
         vpl011;
 
         /*
-         * shared memory region identified as 0x0(xen,shm-id = <0x0>)
-         * is shared between Dom0 and DomU1.
+         * shared memory region "my-shared-mem-0" is shared
+         * between Dom0 and DomU1.
          */
        domU1-shared-mem@10000000 {
            compatible = "xen,domain-shared-memory-v1";
            role = "borrower";
            xen,shm-id = "my-shared-mem-0";
            xen,shared-mem = <0x10000000 0x50000000 0x10000000>;
-        }
+        };
 
        /*
-        * shared memory region identified as 0x1(xen,shm-id = <0x1>)
-        * is shared between DomU1 and DomU2.
+        * shared memory region "my-shared-mem-1" is shared between
+        * DomU1 and DomU2.
         */
        domU1-shared-mem@50000000 {
            compatible = "xen,domain-shared-memory-v1";
            xen,shm-id = "my-shared-mem-1";
            xen,shared-mem = <0x50000000 0x60000000 0x20000000>;
-        }
+        };
+
+       /*
+        * shared memory region "my-shared-mem-2" is shared between
+        * DomU1 and DomU2.
+        */
+       domU1-shared-mem-2 {
+           compatible = "xen,domain-shared-memory-v1";
+           xen,shm-id = "my-shared-mem-2";
+           role = "owner";
+           xen,shared-mem = <0x80000000 0x20000000>;
+       };
 
         ......
 
@@ -656,14 +667,21 @@ chosen {
         cpus = <1>;
 
         /*
-         * shared memory region identified as 0x1(xen,shm-id = <0x1>)
-         * is shared between domU1 and domU2.
+         * shared memory region "my-shared-mem-1" is shared between
+         * domU1 and domU2.
          */
        domU2-shared-mem@50000000 {
            compatible = "xen,domain-shared-memory-v1";
            xen,shm-id = "my-shared-mem-1";
            xen,shared-mem = <0x50000000 0x70000000 0x20000000>;
-        }
+        };
+
+       domU2-shared-mem-2 {
+           compatible = "xen,domain-shared-memory-v1";
+           xen,shm-id = "my-shared-mem-2";
+           role = "borrower";
+           xen,shared-mem = <0x90000000 0x20000000>;
+       };
 
         ......
 
     };
 
@@ -683,3 +701,11 @@ shared between DomU1 and DomU2. It will get mapped at 0x60000000 in DomU1
 guest physical address space, and at 0x70000000 in DomU2 guest physical
 address space. DomU1 and DomU2 are both the borrower domain, the owner
 domain is the default owner domain DOMID_IO.
+
+For the static shared memory region "my-shared-mem-2", since the host physical
+address is not provided by the user, Xen will automatically allocate 512MB
+from the heap as static shared memory to be shared between DomU1 and DomU2.
+The automatically allocated static shared memory will get mapped at
+0x80000000 in DomU1 guest physical address space, and at 0x90000000 in DomU2
+guest physical address space. DomU1 is explicitly defined as the owner domain,
+and DomU2 is the borrower domain.