From patchwork Mon Aug 21 04:00:39 2023
X-Patchwork-Id: 13359081
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Penny Zheng, Stefano Stabellini, Julien Grall,
    Bertrand Marquis, Volodymyr Babchuk
Subject: [PATCH v3 1/8] xen/arm: remove stale addr_cells/size_cells in
 assign_shared_memory
Date: Mon, 21 Aug 2023 12:00:39 +0800
Message-Id: <20230821040046.2965665-2-Penny.Zheng@arm.com>
In-Reply-To: <20230821040046.2965665-1-Penny.Zheng@arm.com>
References: <20230821040046.2965665-1-Penny.Zheng@arm.com>

The function parameters {addr_cells,size_cells} of assign_shared_memory
are stale, so remove them.
Signed-off-by: Penny Zheng
---
v1 -> v2:
- new commit
---
v2 -> v3:
- rebase and no change
---
 xen/arch/arm/domain_build.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 54bf5623c8..ac78597bde 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -858,7 +858,6 @@ static mfn_t __init acquire_shared_memory_bank(struct domain *d,
 }
 
 static int __init assign_shared_memory(struct domain *d,
-                                       uint32_t addr_cells, uint32_t size_cells,
                                        paddr_t pbase, paddr_t psize,
                                        paddr_t gbase)
 {
@@ -1020,7 +1019,6 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
          * specified, so they should be assigned to dom_io.
          */
         ret = assign_shared_memory(owner_dom_io ? dom_io : d,
-                                   addr_cells, size_cells,
                                    pbase, psize, gbase);
         if ( ret )
             return ret;
From patchwork Mon Aug 21 04:00:40 2023
X-Patchwork-Id: 13359082
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Penny Zheng, Stefano Stabellini, Julien Grall,
    Bertrand Marquis, Volodymyr Babchuk
Subject: [PATCH v3 2/8] xen/arm: re-define a set of data structures for
 static shared memory region
Date: Mon, 21 Aug 2023 12:00:40 +0800
Message-Id: <20230821040046.2965665-3-Penny.Zheng@arm.com>
In-Reply-To: <20230821040046.2965665-1-Penny.Zheng@arm.com>
References: <20230821040046.2965665-1-Penny.Zheng@arm.com>

This commit introduces a set of separate data structures to deal with
static shared memory at different stages.

During boot-time host device tree parsing, we introduce a new structure
"struct shm_node" and a new field "shminfo" in bootinfo to describe and
store parsed shm info. Only the SHMID and "nr_borrowers", which records
the number of borrower domains, are tracked per shm node. We also
introduce a new file-scope variable "shm_data" in bootfdt.c, in which
each reserved memory bank is recorded together with its shm node, to
assist in shm node verification.

To apply the above changes in acquire_nr_borrower_domain, we now use the
SHMID to iterate over "shminfo" to find the requested shm node, from
which we acquire "nr_borrowers".

Finally, a new anonymous structure "shminfo" is created in "kinfo" when
dealing with domain information. It is an array of compound structures,
each containing the SHMID and a "struct membank membank" describing a
shared memory region in guest address space.

Signed-off-by: Penny Zheng
---
v1 -> v2:
- As the original "struct shm_membank" made reserving memory more
  complex, and the memory information can still be retrieved from the
  host Device Tree during domain construction, introduce a new simple
  structure "struct shm_node" in bootinfo to store only the SHMID and
  "nr_borrowers"
- Further restrict the scope of the local variable
  "struct meminfo *mem = &bootinfo.reserved_mem"
- Introduce a new file-scope variable "shm_data" in bootfdt.c, in which
  each reserved memory bank is recorded together with its shm node, to
  assist in shm node verification
- Define a set of local variables pointing at
  "shm_data.shm_nodes[i].membank->start", etc., to make the code more
  readable
- Use the SHMID to iterate over "shminfo" to find the requested shm
  node, as we no longer store host memory bank info in the shm node
- Introduce a new anonymous structure in "kinfo": an array of compound
  structures, each containing the SHMID and a "struct membank membank"
  describing a shared memory region in guest address space
---
v2 -> v3:
- rebase and no changes
---
 xen/arch/arm/bootfdt.c            | 57 ++++++++++++++++++------
 xen/arch/arm/domain_build.c       | 74 +++++++++++++++----------------
 xen/arch/arm/include/asm/kernel.h |  9 +++-
 xen/arch/arm/include/asm/setup.h  | 21 +++++++--
 4 files changed, 104 insertions(+), 57 deletions(-)

diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index 2673ad17a1..66ad3ab3db 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -16,6 +16,16 @@
 #include
 #include
 
+#ifdef CONFIG_STATIC_SHM
+static __initdata struct {
+    unsigned int nr_nodes;
+    struct {
+        const struct shm_node *node;
+        const struct membank *membank;
+    } shm_nodes[NR_MEM_BANKS];
+} shm_data;
+#endif
+
 static bool __init device_tree_node_matches(const void *fdt, int node,
                                             const char *match)
 {
@@ -409,7 +419,6 @@ static int __init process_shm_node(const void *fdt, int node,
     const struct fdt_property *prop, *prop_id, *prop_role;
     const __be32 *cell;
     paddr_t paddr, gaddr, size;
-    struct meminfo *mem = &bootinfo.reserved_mem;
     unsigned int i;
     int len;
     bool owner = false;
@@ -481,17 +490,21 @@ static int __init process_shm_node(const void *fdt, int node,
         return -EINVAL;
     }
 
-    for ( i = 0; i < mem->nr_banks; i++ )
+    for ( i = 0; i < shm_data.nr_nodes; i++ )
     {
+        paddr_t bank_start = shm_data.shm_nodes[i].membank->start;
+        paddr_t bank_size = shm_data.shm_nodes[i].membank->size;
+        const char *bank_id = shm_data.shm_nodes[i].node->shm_id;
+
         /*
          * Meet the following check:
          * 1) The shm ID matches and the region exactly match
          * 2) The shm ID doesn't match and the region doesn't overlap
          * with an existing one
          */
-        if ( paddr == mem->bank[i].start && size == mem->bank[i].size )
+        if ( paddr == bank_start && size == bank_size )
         {
-            if ( strncmp(shm_id, mem->bank[i].shm_id, MAX_SHM_ID_LENGTH) == 0 )
+            if ( strncmp(shm_id, bank_id, MAX_SHM_ID_LENGTH) == 0 )
                 break;
             else
             {
@@ -503,9 +516,9 @@ static int __init process_shm_node(const void *fdt, int node,
         else
         {
             paddr_t end = paddr + size;
-            paddr_t bank_end = mem->bank[i].start + mem->bank[i].size;
+            paddr_t bank_end = bank_start + bank_size;
 
-            if ( (end <= paddr) || (bank_end <= mem->bank[i].start) )
+            if ( (end <= paddr) || (bank_end <= bank_start) )
             {
                 printk("fdt: static shared memory region %s overflow\n", shm_id);
                 return -EINVAL;
@@ -515,7 +528,7 @@ static int __init process_shm_node(const void *fdt, int node,
                 return -EINVAL;
             else
             {
-                if ( strcmp(shm_id, mem->bank[i].shm_id) != 0 )
+                if ( strcmp(shm_id, bank_id) != 0 )
                     continue;
                 else
                 {
@@ -527,16 +540,32 @@ static int __init process_shm_node(const void *fdt, int node,
         }
     }
 
-    if ( i == mem->nr_banks )
+    if ( (i == shm_data.nr_nodes) && (i == bootinfo.shminfo.nr_nodes) )
     {
-        if ( i < NR_MEM_BANKS )
+        struct meminfo *mem = &bootinfo.reserved_mem;
+
+        if ( (i < NR_MEM_BANKS) && (mem->nr_banks < NR_MEM_BANKS) )
         {
             /* Static shared memory shall be reserved from any other use. */
-            safe_strcpy(mem->bank[mem->nr_banks].shm_id, shm_id);
-            mem->bank[mem->nr_banks].start = paddr;
-            mem->bank[mem->nr_banks].size = size;
-            mem->bank[mem->nr_banks].type = MEMBANK_STATIC_DOMAIN;
+            struct membank *membank = &mem->bank[mem->nr_banks];
+            struct shm_node *shm_node = &bootinfo.shminfo.node[i];
+
+            membank->start = paddr;
+            membank->size = size;
+            membank->type = MEMBANK_STATIC_DOMAIN;
             mem->nr_banks++;
+
+            /* Record static shared memory node info in bootinfo.shminfo */
+            safe_strcpy(shm_node->shm_id, shm_id);
+            bootinfo.shminfo.nr_nodes++;
+
+            /*
+             * Reserved memory bank is recorded together with shm
+             * node info in shm_data, to assist doing shm node verification.
+             */
+            shm_data.shm_nodes[i].node = shm_node;
+            shm_data.shm_nodes[i].membank = membank;
+            shm_data.nr_nodes++;
         }
         else
         {
@@ -549,7 +578,7 @@ static int __init process_shm_node(const void *fdt, int node,
      * to calculate the reference count.
      */
     if ( !owner )
-        mem->bank[i].nr_shm_borrowers++;
+        bootinfo.shminfo.node[i].nr_shm_borrowers++;
 
     return 0;
 }
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index ac78597bde..b639d3dc78 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -774,28 +774,25 @@ static void __init assign_static_memory_11(struct domain *d,
 }
 
 #ifdef CONFIG_STATIC_SHM
-static int __init acquire_nr_borrower_domain(struct domain *d,
-                                             paddr_t pbase, paddr_t psize,
+static int __init acquire_nr_borrower_domain(const char *shm_id,
                                              unsigned long *nr_borrowers)
 {
-    unsigned int bank;
+    struct shm_node *shm_node;
+    unsigned int i;
 
-    /* Iterate reserved memory to find requested shm bank. */
-    for ( bank = 0 ; bank < bootinfo.reserved_mem.nr_banks; bank++ )
+    /* Iterate to find requested static shared memory node. */
+    for ( i = 0; i < bootinfo.shminfo.nr_nodes; i++ )
     {
-        paddr_t bank_start = bootinfo.reserved_mem.bank[bank].start;
-        paddr_t bank_size = bootinfo.reserved_mem.bank[bank].size;
+        shm_node = &bootinfo.shminfo.node[i];
 
-        if ( (pbase == bank_start) && (psize == bank_size) )
-            break;
+        if ( strcmp(shm_id, shm_node->shm_id) == 0 )
+        {
+            *nr_borrowers = shm_node->nr_shm_borrowers;
+            return 0;
+        }
     }
 
-    if ( bank == bootinfo.reserved_mem.nr_banks )
-        return -ENOENT;
-
-    *nr_borrowers = bootinfo.reserved_mem.bank[bank].nr_shm_borrowers;
-
-    return 0;
+    return -EINVAL;
 }
 
 /*
@@ -859,7 +856,7 @@ static mfn_t __init acquire_shared_memory_bank(struct domain *d,
 
 static int __init assign_shared_memory(struct domain *d,
                                        paddr_t pbase, paddr_t psize,
-                                       paddr_t gbase)
+                                       paddr_t gbase, const char *shm_id)
 {
     mfn_t smfn;
     int ret = 0;
@@ -893,7 +890,7 @@ static int __init assign_shared_memory(struct domain *d,
      * Get the right amount of references per page, which is the number of
      * borrower domains.
      */
-    ret = acquire_nr_borrower_domain(d, pbase, psize, &nr_borrowers);
+    ret = acquire_nr_borrower_domain(shm_id, &nr_borrowers);
     if ( ret )
         return ret;
 
@@ -929,13 +926,16 @@ static int __init append_shm_bank_to_domain(struct kernel_info *kinfo,
                                             paddr_t start, paddr_t size,
                                             const char *shm_id)
 {
-    if ( kinfo->shm_mem.nr_banks >= NR_MEM_BANKS )
+    unsigned int nr_banks = kinfo->shminfo.nr_banks;
+    struct membank *membank = &kinfo->shminfo.bank[nr_banks].membank;
+
+    if ( nr_banks >= NR_MEM_BANKS )
         return -ENOMEM;
 
-    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].start = start;
-    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].size = size;
-    safe_strcpy(kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].shm_id, shm_id);
-    kinfo->shm_mem.nr_banks++;
+    membank->start = start;
+    membank->size = size;
+    safe_strcpy(kinfo->shminfo.bank[nr_banks].shm_id, shm_id);
+    kinfo->shminfo.nr_banks++;
 
     return 0;
 }
@@ -1019,7 +1019,7 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
          * specified, so they should be assigned to dom_io.
          */
         ret = assign_shared_memory(owner_dom_io ? dom_io : d,
-                                   pbase, psize, gbase);
+                                   pbase, psize, gbase, shm_id);
         if ( ret )
             return ret;
     }
@@ -1405,12 +1405,12 @@ static int __init make_memory_node(const struct domain *d,
 static int __init make_shm_memory_node(const struct domain *d,
                                        void *fdt,
                                        int addrcells, int sizecells,
-                                       const struct meminfo *mem)
+                                       const struct kernel_info *kinfo)
 {
     unsigned int i = 0;
     int res = 0;
 
-    if ( mem->nr_banks == 0 )
+    if ( kinfo->shminfo.nr_banks == 0 )
         return -ENOENT;
 
     /*
@@ -1420,17 +1420,17 @@ static int __init make_shm_memory_node(const struct domain *d,
      */
     dt_dprintk("Create xen-shmem node\n");
 
-    for ( ; i < mem->nr_banks; i++ )
+    for ( ; i < kinfo->shminfo.nr_banks; i++ )
     {
-        uint64_t start = mem->bank[i].start;
-        uint64_t size = mem->bank[i].size;
+        uint64_t start = kinfo->shminfo.bank[i].membank.start;
+        uint64_t size = kinfo->shminfo.bank[i].membank.size;
         const char compat[] = "xen,shared-memory-v1";
         /* Worst case addrcells + sizecells */
         __be32 reg[GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS];
         __be32 *cells;
         unsigned int len = (addrcells + sizecells) * sizeof(__be32);
 
-        res = domain_fdt_begin_node(fdt, "xen-shmem", mem->bank[i].start);
+        res = domain_fdt_begin_node(fdt, "xen-shmem", start);
         if ( res )
             return res;
 
@@ -1448,7 +1448,7 @@ static int __init make_shm_memory_node(const struct domain *d,
         dt_dprintk("Shared memory bank %u: %#"PRIx64"->%#"PRIx64"\n",
                    i, start, start + size);
 
-        res = fdt_property_string(fdt, "xen,id", mem->bank[i].shm_id);
+        res = fdt_property_string(fdt, "xen,id", kinfo->shminfo.bank[i].shm_id);
         if ( res )
             return res;
 
@@ -1473,7 +1473,7 @@ static int __init make_shm_memory_node(const struct domain *d,
 static int __init make_shm_memory_node(const struct domain *d,
                                        void *fdt,
                                        int addrcells, int sizecells,
-                                       const struct meminfo *mem)
+                                       const struct kernel_info *kinfo)
 {
     ASSERT_UNREACHABLE();
     return -EOPNOTSUPP;
@@ -1483,13 +1483,13 @@ static int __init make_shm_memory_node(const struct domain *d,
 static int __init make_resv_memory_node(const struct domain *d,
                                         void *fdt,
                                         int addrcells, int sizecells,
-                                        const struct meminfo *mem)
+                                        const struct kernel_info *kinfo)
 {
     int res = 0;
     /* Placeholder for reserved-memory\0 */
     const char resvbuf[16] = "reserved-memory";
 
-    if ( mem->nr_banks == 0 )
+    if ( kinfo->shminfo.nr_banks == 0 )
         /* No shared memory provided. */
         return 0;
 
@@ -1511,7 +1511,7 @@ static int __init make_resv_memory_node(const struct domain *d,
     if ( res )
         return res;
 
-    res = make_shm_memory_node(d, fdt, addrcells, sizecells, mem);
+    res = make_shm_memory_node(d, fdt, addrcells, sizecells, kinfo);
     if ( res )
         return res;
 
@@ -2763,8 +2763,7 @@ static int __init handle_node(struct domain *d, struct kernel_info *kinfo,
             return res;
     }
 
-    res = make_resv_memory_node(d, kinfo->fdt, addrcells, sizecells,
-                                &kinfo->shm_mem);
+    res = make_resv_memory_node(d, kinfo->fdt, addrcells, sizecells, kinfo);
     if ( res )
         return res;
 }
@@ -3320,8 +3319,7 @@ static int __init prepare_dtb_domU(struct domain *d, struct kernel_info *kinfo)
     if ( ret )
         goto err;
 
-    ret = make_resv_memory_node(d, kinfo->fdt, addrcells, sizecells,
-                                &kinfo->shm_mem);
+    ret = make_resv_memory_node(d, kinfo->fdt, addrcells, sizecells, kinfo);
     if ( ret )
         goto err;
 
diff --git a/xen/arch/arm/include/asm/kernel.h b/xen/arch/arm/include/asm/kernel.h
index 4617cdc83b..590bc56f6c 100644
--- a/xen/arch/arm/include/asm/kernel.h
+++ b/xen/arch/arm/include/asm/kernel.h
@@ -38,7 +38,14 @@ struct kernel_info {
     void *fdt; /* flat device tree */
     paddr_t unassigned_mem; /* RAM not (yet) assigned to a bank */
     struct meminfo mem;
-    struct meminfo shm_mem;
+    /* Static shared memory banks */
+    struct {
+        unsigned int nr_banks;
+        struct {
+            char shm_id[MAX_SHM_ID_LENGTH];
+            struct membank membank;
+        } bank[NR_MEM_BANKS];
+    } shminfo;
 
     /* kernel entry point */
     paddr_t entry;
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index 19dc637d55..840ef50ef5 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -50,10 +50,6 @@ struct membank {
     paddr_t start;
     paddr_t size;
     enum membank_type type;
-#ifdef CONFIG_STATIC_SHM
-    char shm_id[MAX_SHM_ID_LENGTH];
-    unsigned int nr_shm_borrowers;
-#endif
 };
 
 struct meminfo {
@@ -95,6 +91,17 @@ struct bootcmdlines {
     struct bootcmdline cmdline[MAX_MODULES];
 };
 
+#ifdef CONFIG_STATIC_SHM
+/*
+ * struct shm_node represents a static shared memory node shared between
+ * multiple domains, identified by the unique SHMID ("xen,shm-id").
+ */
+struct shm_node {
+    char shm_id[MAX_SHM_ID_LENGTH];
+    unsigned int nr_shm_borrowers;
+};
+#endif
+
 struct bootinfo {
     struct meminfo mem;
     /* The reserved regions are only used when booting using Device-Tree */
@@ -105,6 +112,12 @@ struct bootinfo {
     struct meminfo acpi;
 #endif
     bool static_heap;
+#ifdef CONFIG_STATIC_SHM
+    struct {
+        unsigned int nr_nodes;
+        struct shm_node node[NR_MEM_BANKS];
+    } shminfo;
+#endif
 };
 
 struct map_range_data
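As an aside between patches: a minimal standalone sketch of the lookup
pattern this patch introduces. Identity data (SHMID plus borrower count)
now lives in bootinfo.shminfo and is found by SHMID alone, mirroring the
reworked acquire_nr_borrower_domain(). The constant values and the main()
driver below are illustrative assumptions, not the patch's definitions.

/* Sketch only: SHMID-keyed lookup of borrower count; constants assumed. */
#include <stdio.h>
#include <string.h>

#define MAX_SHM_ID_LENGTH 16  /* assumed value for the sketch */
#define NR_MEM_BANKS      64  /* assumed value for the sketch */

struct shm_node {
    char shm_id[MAX_SHM_ID_LENGTH];
    unsigned int nr_shm_borrowers;
};

struct shminfo {
    unsigned int nr_nodes;
    struct shm_node node[NR_MEM_BANKS];
};

/* As in acquire_nr_borrower_domain(): find a node by SHMID only. */
static int find_nr_borrowers(const struct shminfo *shminfo, const char *shm_id,
                             unsigned long *nr_borrowers)
{
    unsigned int i;

    for ( i = 0; i < shminfo->nr_nodes; i++ )
        if ( strcmp(shm_id, shminfo->node[i].shm_id) == 0 )
        {
            *nr_borrowers = shminfo->node[i].nr_shm_borrowers;
            return 0;
        }

    return -1;
}

int main(void)
{
    struct shminfo info = { .nr_nodes = 1 };
    unsigned long n;

    strcpy(info.node[0].shm_id, "my-shm-0");
    info.node[0].nr_shm_borrowers = 2;

    if ( find_nr_borrowers(&info, "my-shm-0", &n) == 0 )
        printf("%lu borrowers\n", n);
    return 0;
}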
From patchwork Mon Aug 21 04:00:41 2023
X-Patchwork-Id: 13359083
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Penny Zheng, Stefano Stabellini, Julien Grall,
    Bertrand Marquis, Volodymyr Babchuk
Subject: [PATCH v3 3/8] xen/arm: introduce allocate_domheap_memory and
 guest_physmap_memory
Date: Mon, 21 Aug 2023 12:00:41 +0800
Message-Id: <20230821040046.2965665-4-Penny.Zheng@arm.com>
In-Reply-To: <20230821040046.2965665-1-Penny.Zheng@arm.com>
References: <20230821040046.2965665-1-Penny.Zheng@arm.com>

We split the code of allocate_bank_memory into two parts:
allocate_domheap_memory and guest_physmap_memory. The first is about
allocating guest RAM from the heap, and can be re-used later for
allocating static shared memory from the heap when no host address is
provided.
The second builds up the guest P2M mapping.

We also define a set of macro helpers to access common fields in any
data structure of "meminfo" type; "struct meminfo" is one of them, and
the new "struct shm_meminfo" introduced later is another. Structures of
this kind must have the following characteristics:
- an array of "struct membank"
- a member called "nr_banks" indicating the current array size
- a field indicating the maximum array size

When introducing a new such data structure, a corresponding callback of
function type "retrieve_fn" shall be defined so that the macro helpers
can be used with it. This commit defines the callback
"retrieve_meminfo" for the data structure "struct meminfo".

Signed-off-by: Penny Zheng
---
v1 -> v2:
- define a set of macro helpers to access common fields in data
  structures of "meminfo" type. "struct meminfo" is one of them, and the
  corresponding callback "retrieve_meminfo" is also introduced here
- fix typo: change 1ULL to 1UL
---
v2 -> v3:
- rebase and no changes
---
 xen/arch/arm/domain_build.c      | 119 ++++++++++++++++++++++++-------
 xen/arch/arm/include/asm/setup.h |  33 +++++++++
 2 files changed, 127 insertions(+), 25 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index b639d3dc78..97f9db1175 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -99,6 +99,26 @@ int __init parse_arch_dom0_param(const char *s, const char *e)
  */
 #define DOM0_FDT_EXTRA_SIZE (128 + sizeof(struct fdt_reserve_entry))
 
+static void __init retrieve_meminfo(void *mem, unsigned int *max_mem_banks,
+                                    struct membank **bank,
+                                    unsigned int **nr_banks)
+{
+    struct meminfo *meminfo = (struct meminfo *)mem;
+
+    if ( max_mem_banks )
+        *max_mem_banks = NR_MEM_BANKS;
+
+    if ( nr_banks )
+        *nr_banks = &(meminfo->nr_banks);
+
+    if ( bank )
+        *bank = meminfo->bank;
+}
+
+retrieve_fn __initdata retrievers[MAX_MEMINFO_TYPE] = {
+    [NORMAL_MEMINFO] = retrieve_meminfo,
+};
+
 unsigned int __init dom0_max_vcpus(void)
 {
     if ( opt_dom0_max_vcpus == 0 )
@@ -413,34 +433,20 @@ static void __init allocate_memory_11(struct domain *d,
     }
 }
 
-static bool __init allocate_bank_memory(struct domain *d,
-                                        struct kernel_info *kinfo,
-                                        gfn_t sgfn,
-                                        paddr_t tot_size)
+static bool __init allocate_domheap_memory(struct domain *d,
+                                           paddr_t tot_size,
+                                           void *mem, enum meminfo_type type)
 {
-    int res;
     struct page_info *pg;
-    struct membank *bank;
     unsigned int max_order = ~0;
-
-    /*
-     * allocate_bank_memory can be called with a tot_size of zero for
-     * the second memory bank. It is not an error and we can safely
-     * avoid creating a zero-size memory bank.
-     */
-    if ( tot_size == 0 )
-        return true;
-
-    bank = &kinfo->mem.bank[kinfo->mem.nr_banks];
-    bank->start = gfn_to_gaddr(sgfn);
-    bank->size = tot_size;
+    unsigned int *nr_banks = GET_NR_BANKS(mem, type);
 
     while ( tot_size > 0 )
     {
         unsigned int order = get_allocation_size(tot_size);
+        struct membank *membank;
 
         order = min(max_order, order);
-
         pg = alloc_domheap_pages(d, order, 0);
         if ( !pg )
         {
@@ -460,15 +466,78 @@ static bool __init allocate_bank_memory(struct domain *d,
             continue;
         }
 
-        res = guest_physmap_add_page(d, sgfn, page_to_mfn(pg), order);
-        if ( res )
-        {
-            dprintk(XENLOG_ERR, "Failed map pages to DOMU: %d", res);
+        if ( *nr_banks == MAX_MEM_BANKS(type) )
             return false;
-        }
+
+        membank = GET_MEMBANK(mem, type, *nr_banks);
+        membank->start = mfn_to_maddr(page_to_mfn(pg));
+        membank->size = 1ULL << (PAGE_SHIFT + order);
+        (*nr_banks)++;
+        tot_size -= membank->size;
+    }
+
+    return true;
+}
+
+static int __init guest_physmap_memory(struct domain *d,
+                                       void *mem, enum meminfo_type type,
+                                       gfn_t sgfn)
+{
+    unsigned int i;
+    int res;
+    unsigned int *nr_banks = GET_NR_BANKS(mem, type);
+
+    for ( i = 0; i < *nr_banks; i++ )
+    {
+        struct membank *membank = GET_MEMBANK(mem, type, i);
+        paddr_t start = membank->start;
+        paddr_t size = membank->size;
+        unsigned int order = get_order_from_bytes(size);
+
+        /* Size must be power of two */
+        BUG_ON(!size || (size & (size - 1)));
+        res = guest_physmap_add_page(d, sgfn, maddr_to_mfn(start), order);
+        if ( res )
+            return res;
 
         sgfn = gfn_add(sgfn, 1UL << order);
-        tot_size -= (1ULL << (PAGE_SHIFT + order));
+    }
+
+    return 0;
+}
+
+static bool __init allocate_bank_memory(struct domain *d,
+                                        struct kernel_info *kinfo,
+                                        gfn_t sgfn,
+                                        paddr_t total_size)
+{
+    struct membank *bank;
+    struct meminfo host = { 0 };
+
+    /*
+     * allocate_bank_memory can be called with a total_size of zero for
+     * the second memory bank. It is not an error and we can safely
+     * avoid creating a zero-size memory bank.
+     */
+    if ( total_size == 0 )
+        return true;
+
+    bank = &kinfo->mem.bank[kinfo->mem.nr_banks];
+    bank->start = gfn_to_gaddr(sgfn);
+    bank->size = total_size;
+
+    if ( !allocate_domheap_memory(d, total_size, (void *)&host, NORMAL_MEMINFO) )
+    {
+        printk(XENLOG_ERR "Failed to allocate (%"PRIpaddr"MB) pages to %pd\n",
+               total_size >> 20, d);
+        return false;
+    }
+
+    if ( guest_physmap_memory(d, (void *)&host, NORMAL_MEMINFO, sgfn) )
+    {
+        printk(XENLOG_ERR "Failed to map (%"PRIpaddr"MB) pages to %pd\n",
+               total_size >> 20, d);
+        return false;
     }
 
     kinfo->mem.nr_banks++;
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index 840ef50ef5..dba140d12f 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -57,6 +57,39 @@ struct meminfo {
     struct membank bank[NR_MEM_BANKS];
 };
 
+enum meminfo_type {
+    NORMAL_MEMINFO,
+    MAX_MEMINFO_TYPE,
+};
+
+/*
+ * Define a set of macro helpers to access meminfo_type, like
+ * "struct meminfo" as type NORMAL_MEMINFO, etc.
+ * This kind of structure must have an array of "struct membank",
+ * a member called nr_banks indicating the current array size, and also
+ * a field indicating the maximum array size.
+ */
+typedef void (*retrieve_fn)(void *, unsigned int *, struct membank **,
+                            unsigned int **);
+
+#define MAX_MEM_BANKS(type) ({                                  \
+    unsigned int _max_mem_banks;                                \
+    retrievers[type](NULL, &_max_mem_banks, NULL, NULL);        \
+    _max_mem_banks;                                             \
+})
+
+#define GET_MEMBANK(mem, type, index) ({                        \
+    struct membank *_bank;                                      \
+    retrievers[type]((void *)(mem), NULL, &_bank, NULL);        \
+    &(_bank[index]);                                            \
+})
+
+#define GET_NR_BANKS(mem, type) ({                              \
+    unsigned int *_nr_banks;                                    \
+    retrievers[type]((void *)mem, NULL, NULL, &_nr_banks);      \
+    _nr_banks;                                                  \
+})
+
 /*
  * The domU flag is set for kernels and ramdisks of "xen,domain" nodes.
  * The purpose of the domU flag is to avoid getting confused in
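As context for the macro helpers above: a minimal, self-contained sketch
(plain C, compilable outside Xen) of the "retrieve_fn" indirection. Each
meminfo-like type registers one callback, and generic helpers then reach
its fields without knowing the concrete type. The paddr_t typedef, the
NR_MEM_BANKS value and the append_bank() helper are stand-ins invented
for this sketch, not Xen definitions.

/* Sketch only: the retrieve_fn accessor pattern, outside Xen. */
#include <stdio.h>

typedef unsigned long long paddr_t;   /* stand-in for Xen's paddr_t */

struct membank { paddr_t start, size; };

enum meminfo_type { NORMAL_MEMINFO, MAX_MEMINFO_TYPE };

typedef void (*retrieve_fn)(void *mem, unsigned int *max_banks,
                            struct membank **bank, unsigned int **nr_banks);

#define NR_MEM_BANKS 64               /* assumed value for the sketch */

struct meminfo {
    unsigned int nr_banks;
    struct membank bank[NR_MEM_BANKS];
};

static void retrieve_meminfo(void *mem, unsigned int *max_banks,
                             struct membank **bank, unsigned int **nr_banks)
{
    struct meminfo *meminfo = mem;

    if ( max_banks )
        *max_banks = NR_MEM_BANKS;
    if ( nr_banks )
        *nr_banks = &meminfo->nr_banks;
    if ( bank )
        *bank = meminfo->bank;
}

static retrieve_fn retrievers[MAX_MEMINFO_TYPE] = {
    [NORMAL_MEMINFO] = retrieve_meminfo,
};

/* Generic helper: append one bank to any registered meminfo-like type. */
static int append_bank(void *mem, enum meminfo_type type,
                       paddr_t start, paddr_t size)
{
    unsigned int max, *nr;
    struct membank *bank;

    retrievers[type](mem, &max, &bank, &nr);
    if ( *nr >= max )
        return -1;
    bank[*nr].start = start;
    bank[*nr].size = size;
    (*nr)++;
    return 0;
}

int main(void)
{
    struct meminfo mem = { 0 };

    append_bank(&mem, NORMAL_MEMINFO, 0x40000000ULL, 0x10000000ULL);
    printf("bank 0: %#llx+%#llx\n", mem.bank[0].start, mem.bank[0].size);
    return 0;
}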
From patchwork Mon Aug 21 04:00:42 2023
X-Patchwork-Id: 13359085
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Penny Zheng, Stefano Stabellini, Julien Grall,
    Bertrand Marquis, Volodymyr Babchuk
Subject: [PATCH v3 4/8] xen/arm: use paddr_assigned to indicate whether host
 address is provided
Date: Mon, 21 Aug 2023 12:00:42 +0800
Message-Id: <20230821040046.2965665-5-Penny.Zheng@arm.com>
In-Reply-To: <20230821040046.2965665-1-Penny.Zheng@arm.com>
References: <20230821040046.2965665-1-Penny.Zheng@arm.com>

We use paddr_assigned to indicate whether the host address is provided,
by checking the length of the "xen,shared-mem" property. The shm
matching criteria shall also be adapted to cover the new scenario: when
the host address is not provided and the SHMID matches, the region size
must match exactly too.

During domain creation, a static shared memory node can now be banked
either with a statically configured host memory bank, or with a set of
arbitrary host memory banks allocated from the heap. To cover both
scenarios, we create a new structure, shm_meminfo. It is very much like
meminfo, but with a smaller maximum array size, NR_SHM_BANKS (16). As
"shm_meminfo" is also a new member of "enum meminfo_type", we implement
its own callback "retrieve_shm_meminfo" so that it has access to all the
macro helpers, e.g. GET_MEMBANK(...).

Also, to make the code tidy and clear, we extract the code that parses
the "xen,shared-mem" property from "process_shm" into a new helper,
"parse_shm_property".

Signed-off-by: Penny Zheng
---
v1 -> v2:
- In order to get allocated banked host memory info during domain
  creation, we create a new structure shm_meminfo. It is very much like
  meminfo, with the maximum array size being NR_SHM_BANKS. As
  shm_meminfo is a new member of type meminfo_type, we implement its own
  callback retrieve_shm_meminfo to have access to all macro helpers,
  e.g. GET_MEMBANK(...)
- rename "acquire_shm_memnode" to "find_shm_memnode"
---
v2 -> v3:
- rebase and no changes
---
 xen/arch/arm/bootfdt.c           | 100 ++++++++++-----
 xen/arch/arm/domain_build.c      | 207 +++++++++++++++++++++++++------
 xen/arch/arm/include/asm/setup.h |   3 +
 3 files changed, 243 insertions(+), 67 deletions(-)

diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index 66ad3ab3db..7d86dffd45 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -21,7 +21,15 @@ static __initdata struct {
     unsigned int nr_nodes;
     struct {
         const struct shm_node *node;
-        const struct membank *membank;
+        /*
+         * For a static shared memory node, it is either banked with a reserved
+         * host memory bank, or arbitrary host memory which shall
+         * be allocated by Xen with a specified total size (tot_size).
+         */
+        union {
+            const struct membank *membank;
+            paddr_t tot_size;
+        };
     } shm_nodes[NR_MEM_BANKS];
 } shm_data;
 #endif
@@ -421,7 +429,7 @@ static int __init process_shm_node(const void *fdt, int node,
     paddr_t paddr, gaddr, size;
     unsigned int i;
     int len;
-    bool owner = false;
+    bool owner = false, paddr_assigned = true;
     const char *shm_id;
 
     if ( address_cells < 1 || size_cells < 1 )
@@ -462,7 +470,7 @@ static int __init process_shm_node(const void *fdt, int node,
     }
 
     /*
-     * xen,shared-mem = <paddr, gaddr, size>;
+     * xen,shared-mem = <paddr, gaddr, size>, and paddr could be optional
      * Memory region starting from physical address #paddr of #size shall
      * be mapped to guest physical address #gaddr as static shared memory
      * region.
@@ -473,16 +481,24 @@ static int __init process_shm_node(const void *fdt, int node,
 
     if ( len != dt_cells_to_size(address_cells + size_cells + address_cells) )
     {
+        /* paddr is not provided in "xen,shared-mem" */
         if ( len == dt_cells_to_size(size_cells + address_cells) )
-            printk("fdt: host physical address must be chosen by users at the moment.\n");
-
-        printk("fdt: invalid `xen,shared-mem` property.\n");
-        return -EINVAL;
+            paddr_assigned = false;
+        else
+        {
+            printk("fdt: invalid `xen,shared-mem` property.\n");
+            return -EINVAL;
+        }
     }
 
     cell = (const __be32 *)prop->data;
-    device_tree_get_reg(&cell, address_cells, address_cells, &paddr, &gaddr);
-    size = dt_next_cell(size_cells, &cell);
+    if ( !paddr_assigned )
+        device_tree_get_reg(&cell, address_cells, size_cells, &gaddr, &size);
+    else
+    {
+        device_tree_get_reg(&cell, address_cells, address_cells, &paddr, &gaddr);
+        size = dt_next_cell(size_cells, &cell);
+    }
 
     if ( !size )
     {
@@ -495,23 +511,37 @@ static int __init process_shm_node(const void *fdt, int node,
         paddr_t bank_start = shm_data.shm_nodes[i].membank->start;
         paddr_t bank_size = shm_data.shm_nodes[i].membank->size;
         const char *bank_id = shm_data.shm_nodes[i].node->shm_id;
+        paddr_t tot_size = shm_data.shm_nodes[i].tot_size;
 
         /*
          * Meet the following check:
+         * when host address is provided:
          * 1) The shm ID matches and the region exactly match
          * 2) The shm ID doesn't match and the region doesn't overlap
          * with an existing one
+         * when host address is not provided:
+         * 1) The shm ID matches and the region size exactly match
          */
-        if ( paddr == bank_start && size == bank_size )
+        if ( !paddr_assigned || ((paddr == bank_start) && (size == bank_size)) )
         {
             if ( strncmp(shm_id, bank_id, MAX_SHM_ID_LENGTH) == 0 )
-                break;
-            else
+            {
+                if ( !paddr_assigned && (size != tot_size) )
+                {
+                    printk("fdt: when host address is not provided, if xen,shm-id matches, size must stay the same too(%"PRIpaddr" -> %"PRIpaddr")\n",
+                           size, tot_size);
+                    return -EINVAL;
+                }
+            }
+            else if ( paddr_assigned )
             {
                 printk("fdt: xen,shm-id %s does not match for all the nodes using the same region.\n",
                        shm_id);
                 return -EINVAL;
             }
+
+            /* host address is not provided, and with different SHMID. */
+            continue;
         }
         else
         {
@@ -542,37 +572,41 @@ static int __init process_shm_node(const void *fdt, int node,
 
     if ( (i == shm_data.nr_nodes) && (i == bootinfo.shminfo.nr_nodes) )
     {
-        struct meminfo *mem = &bootinfo.reserved_mem;
-
-        if ( (i < NR_MEM_BANKS) && (mem->nr_banks < NR_MEM_BANKS) )
+        if ( i < NR_MEM_BANKS )
         {
-            /* Static shared memory shall be reserved from any other use. */
-            struct membank *membank = &mem->bank[mem->nr_banks];
             struct shm_node *shm_node = &bootinfo.shminfo.node[i];
-
-            membank->start = paddr;
-            membank->size = size;
-            membank->type = MEMBANK_STATIC_DOMAIN;
-            mem->nr_banks++;
+            struct meminfo *mem = &bootinfo.reserved_mem;
 
             /* Record static shared memory node info in bootinfo.shminfo */
             safe_strcpy(shm_node->shm_id, shm_id);
             bootinfo.shminfo.nr_nodes++;
 
-            /*
-             * Reserved memory bank is recorded together with shm
-             * node info in shm_data, to assist doing shm node verification.
-             */
             shm_data.shm_nodes[i].node = shm_node;
-            shm_data.shm_nodes[i].membank = membank;
             shm_data.nr_nodes++;
+            if ( !paddr_assigned )
+                shm_data.shm_nodes[i].tot_size = size;
+            else if ( mem->nr_banks < NR_MEM_BANKS )
+            {
+                struct membank *membank = &mem->bank[mem->nr_banks];
+
+                membank->start = paddr;
+                membank->size = size;
+                membank->type = MEMBANK_STATIC_DOMAIN;
+                mem->nr_banks++;
+
+                /*
+                 * Reserved memory bank is recorded together with shm
+                 * node info in shm_data, to assist doing shm node verification.
+                 */
+                shm_data.shm_nodes[i].membank = membank;
+            }
+            else
+                goto fail;
         }
         else
-        {
-            printk("Warning: Max number of supported memory regions reached.\n");
-            return -ENOSPC;
-        }
+            goto fail;
     }
+
     /*
      * keep a count of the number of borrowers, which later may be used
      * to calculate the reference count.
@@ -581,6 +615,10 @@ static int __init process_shm_node(const void *fdt, int node,
         bootinfo.shminfo.node[i].nr_shm_borrowers++;
 
     return 0;
+
+ fail:
+    printk("Warning: Max number of supported memory regions reached\n");
+    return -ENOSPC;
 }
 #else
 static int __init process_shm_node(const void *fdt, int node,
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 97f9db1175..e37f3376a1 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -50,6 +50,51 @@ boolean_param("ext_regions", opt_ext_regions);
 static u64 __initdata dom0_mem;
 static bool __initdata dom0_mem_set;
 
+#ifdef CONFIG_STATIC_SHM
+#define NR_SHM_BANKS 16
+
+/*
+ * A static shared memory node could be banked with a statically
+ * configured host memory bank, or a set of arbitrary host memory
+ * banks allocated from heap.
+ */
+struct shm_meminfo {
+    unsigned int nr_banks;
+    struct membank bank[NR_SHM_BANKS];
+    paddr_t tot_size;
+};
+
+/*
+ * struct shm_memnode holds banked host memory info for a static
+ * shared memory node
+ */
+struct shm_memnode {
+    char shm_id[MAX_SHM_ID_LENGTH];
+    struct shm_meminfo meminfo;
+};
+
+static __initdata struct {
+    unsigned int nr_nodes;
+    struct shm_memnode node[NR_MEM_BANKS];
+} shm_memdata;
+
+static void __init retrieve_shm_meminfo(void *mem, unsigned int *max_mem_banks,
+                                        struct membank **bank,
+                                        unsigned int **nr_banks)
+{
+    struct shm_meminfo *meminfo = (struct shm_meminfo *)mem;
+
+    if ( max_mem_banks )
+        *max_mem_banks = NR_SHM_BANKS;
+
+    if ( nr_banks )
+        *nr_banks = &(meminfo->nr_banks);
+
+    if ( bank )
+        *bank = meminfo->bank;
+}
+#endif
+
 static int __init parse_dom0_mem(const char *s)
 {
     dom0_mem_set = true;
@@ -117,6 +162,9 @@ static void __init retrieve_meminfo(void *mem, unsigned int *max_mem_banks,
 
 retrieve_fn __initdata retrievers[MAX_MEMINFO_TYPE] = {
     [NORMAL_MEMINFO] = retrieve_meminfo,
+#ifdef CONFIG_STATIC_SHM
+    [SHM_MEMINFO] = retrieve_shm_meminfo,
+#endif
 };
 
 unsigned int __init dom0_max_vcpus(void)
@@ -864,6 +912,24 @@ static int __init acquire_nr_borrower_domain(const char *shm_id,
     return -EINVAL;
 }
 
+static struct shm_memnode * __init find_shm_memnode(const char *shm_id)
+{
+    unsigned int i;
+    struct shm_memnode *shm_memnode;
+
+    for ( i = 0 ; i < shm_memdata.nr_nodes; i++ )
+    {
+        shm_memnode = &shm_memdata.node[i];
+
+        if ( strcmp(shm_id, shm_memnode->shm_id) == 0 )
+            return shm_memnode;
+    }
+
+    shm_memnode = &shm_memdata.node[i];
+    safe_strcpy(shm_memnode->shm_id, shm_id);
+    return shm_memnode;
+}
+
 /*
  * This function checks whether the static shared memory region is
  * already allocated to dom_io.
@@ -1009,6 +1075,102 @@ static int __init append_shm_bank_to_domain(struct kernel_info *kinfo,
     return 0;
 }
 
+/* Parse "xen,shared-mem" property in static shared memory node */
+static struct shm_memnode * __init parse_shm_property(struct domain *d,
+                                     const struct dt_device_node *dt_node,
+                                     bool *paddr_assigned, paddr_t *gbase,
+                                     const char *shm_id)
+{
+    uint32_t addr_cells, size_cells;
+    const struct dt_property *prop;
+    const __be32 *cells;
+    uint32_t len;
+    unsigned int i;
+    paddr_t pbase, psize;
+    struct shm_memnode *shm_memnode;
+
+    /* xen,shared-mem = <pbase, gbase, psize>; And pbase could be optional. */
+    prop = dt_find_property(dt_node, "xen,shared-mem", &len);
+    BUG_ON(!prop);
+    cells = (const __be32 *)prop->value;
+
+    addr_cells = dt_n_addr_cells(dt_node);
+    size_cells = dt_n_size_cells(dt_node);
+    if ( len != dt_cells_to_size(addr_cells + size_cells + addr_cells) )
+    {
+        /* pbase is not provided in "xen,shared-mem" */
+        if ( len == dt_cells_to_size(size_cells + addr_cells) )
+            *paddr_assigned = false;
+        else
+        {
+            printk("fdt: invalid `xen,shared-mem` property.\n");
+            return NULL;
+        }
+    }
+
+    /*
+     * If we firstly process the shared memory node with unique "xen,shm-id",
+     * we allocate a new member "shm_memnode" for it in shm_memdata.
+     */
+    shm_memnode = find_shm_memnode(shm_id);
+    BUG_ON(!shm_memnode);
+    if ( !(*paddr_assigned) )
+    {
+        device_tree_get_reg(&cells, addr_cells, size_cells, gbase, &psize);
+        if ( shm_memnode->meminfo.tot_size == 0 )
+            goto out_early1;
+        else
+            goto out_early2;
+    }
+    else
+    {
+        device_tree_get_reg(&cells, addr_cells, addr_cells, &pbase, gbase);
+        psize = dt_read_number(cells, size_cells);
+    }
+
+    /*
+     * The static shared memory node #dt_node is banked with a
+     * statically configured host memory bank.
+     */
+    shm_memnode->meminfo.bank[0].start = pbase;
+    shm_memnode->meminfo.bank[0].size = psize;
+    shm_memnode->meminfo.nr_banks = 1;
+
+    if ( !IS_ALIGNED(pbase, PAGE_SIZE) )
+    {
+        printk("%pd: physical address 0x%"PRIpaddr" is not suitably aligned.\n",
+               d, pbase);
+        return NULL;
+    }
+
+    for ( i = 0; i < PFN_DOWN(psize); i++ )
+        if ( !mfn_valid(mfn_add(maddr_to_mfn(pbase), i)) )
+        {
+            printk("%pd: invalid physical MFN 0x%"PRI_mfn"\n for static shared memory node\n",
+                   d, mfn_x(mfn_add(maddr_to_mfn(pbase), i)));
+            return NULL;
+        }
+
+ out_early1:
+    if ( !IS_ALIGNED(psize, PAGE_SIZE) )
+    {
+        printk("%pd: size 0x%"PRIpaddr" is not suitably aligned\n",
+               d, psize);
+        return NULL;
+    }
+    shm_memnode->meminfo.tot_size = psize;
+
+ out_early2:
+    if ( !IS_ALIGNED(*gbase, PAGE_SIZE) )
+    {
+        printk("%pd: guest address 0x%"PRIpaddr" is not suitably aligned.\n",
+               d, *gbase);
+        return NULL;
+    }
+
+    return shm_memnode;
+}
+
 static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
                               const struct dt_device_node *node)
 {
@@ -1016,51 +1178,17 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
 
     dt_for_each_child_node(node, shm_node)
     {
-        const struct dt_property *prop;
-        const __be32 *cells;
-        uint32_t addr_cells, size_cells;
         paddr_t gbase, pbase, psize;
         int ret = 0;
-        unsigned int i;
         const char *role_str;
         const char *shm_id;
         bool owner_dom_io = true;
+        bool paddr_assigned = true;
+        struct shm_memnode *shm_memnode;
 
         if ( !dt_device_is_compatible(shm_node, "xen,domain-shared-memory-v1") )
             continue;
 
-        /*
-         * xen,shared-mem = <pbase, gbase, psize>;
-         * TODO: pbase is optional.
-         */
-        addr_cells = dt_n_addr_cells(shm_node);
-        size_cells = dt_n_size_cells(shm_node);
-        prop = dt_find_property(shm_node, "xen,shared-mem", NULL);
-        BUG_ON(!prop);
-        cells = (const __be32 *)prop->value;
-        device_tree_get_reg(&cells, addr_cells, addr_cells, &pbase, &gbase);
-        psize = dt_read_paddr(cells, size_cells);
-        if ( !IS_ALIGNED(pbase, PAGE_SIZE) || !IS_ALIGNED(gbase, PAGE_SIZE) )
-        {
-            printk("%pd: physical address 0x%"PRIpaddr", or guest address 0x%"PRIpaddr" is not suitably aligned.\n",
-                   d, pbase, gbase);
-            return -EINVAL;
-        }
-        if ( !IS_ALIGNED(psize, PAGE_SIZE) )
-        {
-            printk("%pd: size 0x%"PRIpaddr" is not suitably aligned\n",
-                   d, psize);
-            return -EINVAL;
-        }
-
-        for ( i = 0; i < PFN_DOWN(psize); i++ )
-            if ( !mfn_valid(mfn_add(maddr_to_mfn(pbase), i)) )
-            {
-                printk("%pd: invalid physical address 0x%"PRI_mfn"\n",
-                       d, mfn_x(mfn_add(maddr_to_mfn(pbase), i)));
-                return -EINVAL;
-            }
-
         /*
          * "role" property is optional and if it is defined explicitly,
          * then the owner domain is not the default "dom_io" domain.
@@ -1075,6 +1203,13 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
         }
         BUG_ON((strlen(shm_id) <= 0) || (strlen(shm_id) >= MAX_SHM_ID_LENGTH));
 
+        shm_memnode = parse_shm_property(d, shm_node, &paddr_assigned, &gbase,
+                                         shm_id);
+        if ( !shm_memnode )
+            return -EINVAL;
+        pbase = shm_memnode->meminfo.bank[0].start;
+        psize = shm_memnode->meminfo.bank[0].size;
+
         /*
          * DOMID_IO is a fake domain and is not described in the Device-Tree.
          * Therefore when the owner of the shared region is DOMID_IO, we will
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index dba140d12f..d607ad2723 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -59,6 +59,9 @@ struct meminfo {
 
 enum meminfo_type {
     NORMAL_MEMINFO,
+#ifdef CONFIG_STATIC_SHM
+    SHM_MEMINFO,
+#endif
     MAX_MEMINFO_TYPE,
 };
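As an aside between patches: a minimal standalone sketch of the
property-length check that drives paddr_assigned. The "xen,shared-mem"
property holds either <paddr gaddr size> or just <gaddr size>, so the
byte length alone distinguishes the two layouts. check_layout() and the
main() driver are names invented for this sketch; only dt_cells_to_size
mirrors the helper used in the patch.

/* Sketch only: detecting the optional host address by property length. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static size_t dt_cells_to_size(unsigned int cells)
{
    return cells * sizeof(uint32_t); /* one FDT cell is a 32-bit word */
}

static bool check_layout(size_t len, unsigned int address_cells,
                         unsigned int size_cells, bool *paddr_assigned)
{
    if ( len == dt_cells_to_size(address_cells + size_cells + address_cells) )
        *paddr_assigned = true;   /* <paddr gaddr size> */
    else if ( len == dt_cells_to_size(address_cells + size_cells) )
        *paddr_assigned = false;  /* <gaddr size> only */
    else
        return false;             /* malformed property */

    return true;
}

int main(void)
{
    bool paddr_assigned;

    /* #address-cells = <2>, #size-cells = <2>: 6 cells vs. 4 cells */
    if ( check_layout(dt_cells_to_size(6), 2, 2, &paddr_assigned) )
        printf("paddr %s provided\n", paddr_assigned ? "is" : "is not");
    return 0;
}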
From patchwork Mon Aug 21 04:00:43 2023
X-Patchwork-Id: 13359084
From: Penny Zheng
To: xen-devel@lists.xenproject.org
Cc: wei.chen@arm.com, Penny Zheng, Stefano Stabellini, Julien Grall,
    Bertrand Marquis, Volodymyr Babchuk
Subject: [PATCH v3 5/8] xen/arm: support static shared memory when host
 address not provided
Date: Mon, 21 Aug 2023 12:00:43 +0800
Message-Id: <20230821040046.2965665-6-Penny.Zheng@arm.com>
In-Reply-To: <20230821040046.2965665-1-Penny.Zheng@arm.com>
References: <20230821040046.2965665-1-Penny.Zheng@arm.com>

To support static shared memory when no host address is provided, we
make the following modifications:
- We let Xen allocate memory from the heap for static shared memory at
  the first domain that references it, no matter whether it is an owner
  or a borrower.
- In acquire_shared_memory_bank, as the static shared memory has already
  been allocated from the heap, we assign it to the owner domain using
  "assign_pages".
- A new function, get_shm_pages_reference, adds as many additional
  references as there are borrowers.
- We implement a new helper, "add_foreign_mapping_for_borrower", to set
  up the foreign memory mapping for a borrower.

Instead of using multiple function parameters to pass around various
shm-related info (host physical address, SHMID, etc.), and with the
introduction of the new struct "shm_memnode" holding the banked host
memory info, we switch to passing "shm_memnode" as a single function
parameter, to make the code clearer and tidier.

Signed-off-by: Penny Zheng
---
v1 -> v2:
- combine commits 4 - 6 in series 1
- adapt to the changes introducing "struct shm_memnode"
---
v2 -> v3:
- fix infinite loop bug and bad indentation
- rebase
---
 xen/arch/arm/domain_build.c | 223 +++++++++++++++++++++++++-----------
 1 file changed, 155 insertions(+), 68 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index e37f3376a1..687420956f 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -891,6 +891,11 @@ static void __init assign_static_memory_11(struct domain *d,
 }
 
 #ifdef CONFIG_STATIC_SHM
+static bool __init is_shm_allocated_from_heap(struct shm_memnode *node)
+{
+    return (node->meminfo.nr_banks != 0);
+}
+
 static int __init acquire_nr_borrower_domain(const char *shm_id,
                                              unsigned long *nr_borrowers)
 {
@@ -934,12 +939,12 @@ static struct shm_memnode * __init find_shm_memnode(const char *shm_id)
  * This function checks whether the static shared memory region is
  * already allocated to dom_io.
  */
-static bool __init is_shm_allocated_to_domio(paddr_t pbase)
+static bool __init is_shm_allocated_to_domio(struct shm_memnode *node)
 {
     struct page_info *page;
     struct domain *d;
 
-    page = maddr_to_page(pbase);
+    page = maddr_to_page(node->meminfo.bank[0].start);
     d = page_get_owner_and_reference(page);
     if ( d == NULL )
         return false;
@@ -957,67 +962,128 @@ static bool __init is_shm_allocated_to_domio(struct shm_memnode *node)
 }
 
 static mfn_t __init acquire_shared_memory_bank(struct domain *d,
-                                               paddr_t pbase, paddr_t psize)
+                                               struct shm_meminfo *meminfo,
+                                               bool paddr_assigned)
 {
-    mfn_t smfn;
-    unsigned long nr_pfns;
-    int res;
+    int res, i = 0;
 
-    /*
-     * Pages of statically shared memory shall be included
-     * into domain_tot_pages().
-     */
-    nr_pfns = PFN_DOWN(psize);
-    if ( (UINT_MAX - d->max_pages) < nr_pfns )
+    for ( ; i < meminfo->nr_banks; i++ )
     {
-        printk(XENLOG_ERR "%pd: Over-allocation for d->max_pages: %lu.\n",
-               d, nr_pfns);
-        return INVALID_MFN;
+        paddr_t pbase = meminfo->bank[i].start, psize = meminfo->bank[i].size;
+        unsigned long nr_pfns;
+
+        /*
+         * Pages of statically shared memory shall be included
+         * into domain_tot_pages().
+         */
+        nr_pfns = PFN_DOWN(psize);
+        if ( (UINT_MAX - d->max_pages) < nr_pfns )
+        {
+            printk(XENLOG_ERR "%pd: Over-allocation for d->max_pages: %lu.\n",
+                   d, nr_pfns);
+            return INVALID_MFN;
+        }
+        d->max_pages += nr_pfns;
+
+        if ( paddr_assigned )
+        {
+            res = acquire_domstatic_pages(d, maddr_to_mfn(pbase), nr_pfns, 0);
+            if ( res )
+            {
+                printk(XENLOG_ERR
+                       "%pd: failed to acquire static memory: %d.\n", d, res);
+                goto fail;
+            }
+        }
+        else
+            /*
+             * When host address is not provided, static shared memory is
+             * allocated from heap and shall be assigned to owner domain.
+             */
+            if ( assign_pages(maddr_to_page(pbase), nr_pfns, d, 0) )
+                goto fail;
     }
-    d->max_pages += nr_pfns;
 
-    smfn = maddr_to_mfn(pbase);
-    res = acquire_domstatic_pages(d, smfn, nr_pfns, 0);
-    if ( res )
+    return maddr_to_mfn(meminfo->bank[0].start);
+
+ fail:
+    while( --i >= 0 )
+        d->max_pages -= PFN_DOWN(meminfo->bank[i].size);
+    return INVALID_MFN;
+}
+
+static int __init get_shm_pages_reference(struct domain *d,
+                                          struct shm_meminfo *meminfo,
+                                          unsigned long count)
+{
+    struct page_info *page;
+    int i = 0, j;
+
+    for ( ; i < meminfo->nr_banks; i++ )
     {
-        printk(XENLOG_ERR
-               "%pd: failed to acquire static memory: %d.\n", d, res);
-        d->max_pages -= nr_pfns;
-        return INVALID_MFN;
+        paddr_t pbase = meminfo->bank[i].start, psize = meminfo->bank[i].size;
+        unsigned long nr_pages = PFN_DOWN(psize);
+
+        page = maddr_to_page(pbase);
+        for ( j = 0; j < nr_pages; j++ )
+        {
+            if ( !get_page_nr(page + j, d, count) )
+            {
+                printk(XENLOG_ERR
+                       "Failed to add %lu references to page %"PRI_mfn".\n",
+                       count, mfn_x(page_to_mfn(page + j)));
+                goto fail;
+            }
+        }
     }
 
-    return smfn;
+    return 0;
+
+ fail:
+    while ( --j >= 0 )
+        put_page_nr(page + j, count);
+    while ( --i >= 0 )
+    {
+        page = maddr_to_page(meminfo->bank[i].start);
+        j = PFN_DOWN(meminfo->bank[i].size);
+        while ( --j >= 0 )
+            put_page_nr(page + j, count);
+    }
+    return -EINVAL;
 }
 
 static int __init assign_shared_memory(struct domain *d,
-                                       paddr_t pbase, paddr_t psize,
-                                       paddr_t gbase, const char *shm_id)
+                                       struct shm_memnode *node, paddr_t gbase,
+                                       bool paddr_assigned)
 {
     mfn_t smfn;
-    int ret = 0;
-    unsigned long nr_pages, nr_borrowers, i;
-    struct page_info *page;
-
-    printk("%pd: allocate static shared memory BANK %#"PRIpaddr"-%#"PRIpaddr".\n",
-           d, pbase, pbase + psize);
+    int ret;
+    unsigned long nr_borrowers, i;
+    struct shm_meminfo *meminfo = &node->meminfo;
 
-    smfn = acquire_shared_memory_bank(d, pbase, psize);
+    smfn = acquire_shared_memory_bank(d, meminfo, paddr_assigned);
     if ( mfn_eq(smfn, INVALID_MFN) )
         return -EINVAL;
 
-    /*
-     * DOMID_IO is not auto-translated (i.e. it sees RAM 1:1). So we do not need
-     * to create mapping in the P2M.
-     */
-    nr_pages = PFN_DOWN(psize);
-    if ( d != dom_io )
+    for ( i = 0; i < meminfo->nr_banks; i++ )
     {
-        ret = guest_physmap_add_pages(d, gaddr_to_gfn(gbase), smfn,
-                                      PFN_DOWN(psize));
-        if ( ret )
+        paddr_t pbase = meminfo->bank[i].start, psize = meminfo->bank[i].size;
+
+        /*
+         * DOMID_IO is not auto-translated (i.e. it sees RAM 1:1). So we do
+         * not need to create mapping in the P2M.
+         */
+        if ( d != dom_io )
         {
-            printk(XENLOG_ERR "Failed to map shared memory to %pd.\n", d);
-            return ret;
+            ret = guest_physmap_add_pages(d, gaddr_to_gfn(gbase),
+                                          maddr_to_mfn(pbase),
+                                          PFN_DOWN(psize));
+            if ( ret )
+            {
+                printk(XENLOG_ERR "Failed to map shared memory to %pd.\n", d);
+                return ret;
+            }
+            gbase += psize;
         }
     }
 
@@ -1025,7 +1091,7 @@ static int __init assign_shared_memory(struct domain *d,
      * Get the right amount of references per page, which is the number of
      * borrower domains.
      */
-    ret = acquire_nr_borrower_domain(shm_id, &nr_borrowers);
+    ret = acquire_nr_borrower_domain(node->shm_id, &nr_borrowers);
     if ( ret )
         return ret;
 
@@ -1037,24 +1103,30 @@ static int __init assign_shared_memory(struct domain *d,
      * So if the borrower is created first, it will cause adding pages
      * in the P2M without reference.
      */
-    page = mfn_to_page(smfn);
-    for ( i = 0; i < nr_pages; i++ )
+    return get_shm_pages_reference(d, meminfo, nr_borrowers);
+}
+
+static int __init add_foreign_mapping_for_borrower(struct domain *d,
+                                                   struct shm_memnode *node,
+                                                   paddr_t gbase)
+{
+    unsigned int i = 0;
+    struct shm_meminfo *meminfo = &node->meminfo;
+
+    for ( ; i < meminfo->nr_banks; i++ )
     {
-        if ( !get_page_nr(page + i, d, nr_borrowers) )
-        {
-            printk(XENLOG_ERR
-                   "Failed to add %lu references to page %"PRI_mfn".\n",
-                   nr_borrowers, mfn_x(smfn) + i);
-            goto fail;
-        }
+        paddr_t pbase = meminfo->bank[i].start, psize = meminfo->bank[i].size;
+        int ret;
+
+        /* Set up P2M foreign mapping for borrower domain. */
+        ret = map_regions_p2mt(d, _gfn(PFN_UP(gbase)), PFN_DOWN(psize),
+                               _mfn(PFN_UP(pbase)), p2m_map_foreign_rw);
+        if ( ret )
+            return ret;
+        gbase += psize;
     }
 
     return 0;
-
- fail:
-    while ( --i >= 0 )
-        put_page_nr(page + i, nr_borrowers);
-    return ret;
 }
 
 static int __init append_shm_bank_to_domain(struct kernel_info *kinfo,
@@ -1178,7 +1250,7 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
 
     dt_for_each_child_node(node, shm_node)
     {
-        paddr_t gbase, pbase, psize;
+        paddr_t gbase;
         int ret = 0;
         const char *role_str;
         const char *shm_id;
@@ -1207,15 +1279,30 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
                                          shm_id);
         if ( !shm_memnode )
             return -EINVAL;
-        pbase = shm_memnode->meminfo.bank[0].start;
-        psize = shm_memnode->meminfo.bank[0].size;
+
+        /*
+         * When host address is not provided in "xen,shared-mem",
+         * we let Xen allocate memory from heap at first domain.
+         */
+        if ( !paddr_assigned && !is_shm_allocated_from_heap(shm_memnode) )
+        {
+            if ( !allocate_domheap_memory(NULL, shm_memnode->meminfo.tot_size,
+                                          (void *)&shm_memnode->meminfo,
+                                          SHM_MEMINFO) )
+            {
+                printk(XENLOG_ERR
+                       "Failed to allocate (%"PRIpaddr"MB) pages as static shared memory from heap\n",
+                       shm_memnode->meminfo.tot_size >> 20);
+                return -EINVAL;
+            }
+        }
 
         /*
          * DOMID_IO is a fake domain and is not described in the Device-Tree.
* Therefore when the owner of the shared region is DOMID_IO, we will * only find the borrowers. */ - if ( (owner_dom_io && !is_shm_allocated_to_domio(pbase)) || + if ( (owner_dom_io && !is_shm_allocated_to_domio(shm_memnode)) || (!owner_dom_io && strcmp(role_str, "owner") == 0) ) { /* @@ -1223,16 +1310,14 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo, * specified, so they should be assigned to dom_io. */ ret = assign_shared_memory(owner_dom_io ? dom_io : d, - pbase, psize, gbase, shm_id); + shm_memnode, gbase, paddr_assigned); if ( ret ) return ret; } if ( owner_dom_io || (strcmp(role_str, "borrower") == 0) ) { - /* Set up P2M foreign mapping for borrower domain. */ - ret = map_regions_p2mt(d, _gfn(PFN_UP(gbase)), PFN_DOWN(psize), - _mfn(PFN_UP(pbase)), p2m_map_foreign_rw); + ret = add_foreign_mapping_for_borrower(d, shm_memnode, gbase); if ( ret ) return ret; } @@ -1241,7 +1326,9 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo, * Record static shared memory region info for later setting * up shm-node in guest device tree. */ - ret = append_shm_bank_to_domain(kinfo, gbase, psize, shm_id); + ret = append_shm_bank_to_domain(kinfo, gbase, + shm_memnode->meminfo.tot_size, + shm_memnode->shm_id); if ( ret ) return ret; }
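The unwind pattern used by acquire_shared_memory_bank() and get_shm_pages_reference() above is worth spelling out: resources are taken bank by bank, and on failure only the banks already processed are released, in reverse order. Below is a minimal, self-contained sketch of that idiom; bank_t, acquire_bank() and release_bank() are simplified stand-ins for illustration, not the actual Xen API.

#include <stdio.h>

#define NR_BANKS 4

typedef struct { unsigned long start, size; } bank_t;

/* Stand-in for acquire_domstatic_pages()/get_page_nr(); fails on bank 2. */
static int acquire_bank(const bank_t *b)
{
    return (b->start == 2) ? -1 : 0;
}

/* Stand-in for put_page_nr() and the d->max_pages rollback. */
static void release_bank(const bank_t *b)
{
    printf("released bank %lu\n", b->start);
}

static int acquire_all(const bank_t *banks, int nr)
{
    int i;

    for ( i = 0; i < nr; i++ )
        if ( acquire_bank(&banks[i]) )
            goto fail;

    return 0;

 fail:
    /* Unwind only the banks that were successfully acquired. */
    while ( --i >= 0 )
        release_bank(&banks[i]);
    return -1;
}

int main(void)
{
    bank_t banks[NR_BANKS] = { { 0, 1 }, { 1, 1 }, { 2, 1 }, { 3, 1 } };

    return acquire_all(banks, NR_BANKS) ? 1 : 0;
}

Keeping the failure path a strict mirror of the acquisition order lets the same loop index drive both directions and guarantees a bank that was never acquired is never released.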
From patchwork Mon Aug 21 04:00:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Penny Zheng X-Patchwork-Id: 13359086 From: Penny Zheng To: xen-devel@lists.xenproject.org Cc: wei.chen@arm.com, Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng Subject: [PATCH v3 6/8] xen/arm: remove shm holes for extended regions Date: Mon, 21 Aug 2023 12:00:44 +0800 Message-Id: <20230821040046.2965665-7-Penny.Zheng@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230821040046.2965665-1-Penny.Zheng@arm.com> References: <20230821040046.2965665-1-Penny.Zheng@arm.com> MIME-Version: 1.0 Static shared memory acts as reserved memory in the guest, so it shall be excluded from the extended regions. Extended regions are computed in three different scenarios: a normal DomU, a direct-map domain with the IOMMU on, and a direct-map domain with the IOMMU off. For a normal DomU, we create a new function, "remove_shm_holes_for_domU", which first converts the original output into "struct rangeset" form, then uses "remove_shm_from_rangeset" to remove the static shm ranges from it. For a direct-map domain with the IOMMU on, after we get the guest shm info from "kinfo", we use "remove_shm_from_rangeset" to remove the static shm ranges. For a direct-map domain with the IOMMU off, static shm has already been taken care of through the reserved memory banks, so we do nothing. Signed-off-by: Penny Zheng --- v1 -> v2: - new commit --- v2 -> v3: - error out on non-zero res before remove_shm_holes_for_domU - rebase --- xen/arch/arm/domain_build.c | 97 ++++++++++++++++++++++++++++++++++++- 1 file changed, 96 insertions(+), 1 deletion(-) diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index 687420956f..1ab41f823e 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -1974,6 +1974,32 @@ static int __init handle_pci_range(const struct dt_device_node *dev, return 0; } +static int __init remove_shm_from_rangeset(const struct kernel_info *kinfo, + struct rangeset *rangeset) +{ + unsigned int i; + + /* Remove static shared memory regions */ + for ( i = 0; i < kinfo->shminfo.nr_banks; i++ ) + { + struct membank membank = kinfo->shminfo.bank[i].membank; + paddr_t start, end; + int res; + + start = membank.start; + end = membank.start + membank.size - 1; + res = rangeset_remove_range(rangeset, start, end); + if ( res ) + { + printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n", + start, end); + return -EINVAL; + } + } + + return 0; +} + /* * Find the holes in the Host DT which can be exposed to Dom0 as extended * regions for the special memory mappings. In order to calculate regions @@ -1982,6 +2008,8 @@ static int __init handle_pci_range(const struct dt_device_node *dev, * - MMIO * - Host RAM * - PCI aperture + * - Static shared memory regions, which are described by special property + * "xen,static-shm" */ static int __init find_memory_holes(const struct kernel_info *kinfo, struct meminfo *ext_regions) @@ -2058,6 +2086,14 @@ static int __init find_memory_holes(const struct kernel_info *kinfo, } } + /* Remove static shared memory regions */ + if ( kinfo->shminfo.nr_banks != 0 ) + { + res = remove_shm_from_rangeset(kinfo, mem_holes); + if ( res ) + goto out; + } + start = 0; end = (1ULL << p2m_ipa_bits) - 1; res = rangeset_report_ranges(mem_holes, PFN_DOWN(start), PFN_DOWN(end), @@ -2073,6 +2109,62 @@ out: return res; } +static int __init remove_shm_holes_for_domU(const struct kernel_info *kinfo, + struct meminfo *orig_ext) +{ + struct rangeset *guest_holes; + unsigned int i = 0, tail; + int res; + paddr_t start, end; + + /* No static shared memory region.
*/ + if ( kinfo->shminfo.nr_banks == 0 ) + return 0; + + dt_dprintk("Remove static shared memory holes for extended regions of DomU\n"); + + guest_holes = rangeset_new(NULL, NULL, 0); + if ( !guest_holes ) + return -ENOMEM; + + for ( ; i < orig_ext->nr_banks; i++ ) + { + start = orig_ext->bank[i].start; + end = start + orig_ext->bank[i].size - 1; + + res = rangeset_add_range(guest_holes, start, end); + if ( res ) + { + printk(XENLOG_ERR "Failed to add: %#"PRIpaddr"->%#"PRIpaddr"\n", + start, end); + goto out; + } + } + + /* Remove static shared memory regions */ + res = remove_shm_from_rangeset(kinfo, guest_holes); + if ( res ) + goto out; + + tail = orig_ext->nr_banks - 1; + start = orig_ext->bank[0].start; + end = orig_ext->bank[tail].start + orig_ext->bank[tail].size - 1; + + /* Reset original extended regions to hold new value */ + orig_ext->nr_banks = 0; + res = rangeset_report_ranges(guest_holes, start, end, + add_ext_regions, orig_ext); + if ( res ) + orig_ext->nr_banks = 0; + else if ( !orig_ext->nr_banks ) + res = -ENOENT; + +out: + rangeset_destroy(guest_holes); + + return res; +} + static int __init find_domU_holes(const struct kernel_info *kinfo, struct meminfo *ext_regions) { @@ -2100,7 +2192,10 @@ static int __init find_domU_holes(const struct kernel_info *kinfo, res = 0; } - return res; + if ( res ) + return res; + + return remove_shm_holes_for_domU(kinfo, ext_regions); } static int __init make_hypervisor_node(struct domain *d,
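A concrete example of what remove_shm_holes_for_domU() computes (addresses made up for illustration): suppose the original extended region reported for a DomU is a single bank covering 0x40000000-0x7fffffff, and the domain has one static shm bank at guest address 0x50000000 with size 0x10000000. The bank is first added to the rangeset as 0x40000000-0x7fffffff; removing 0x50000000-0x5fffffff (start + size - 1) splits it in two, and rangeset_report_ranges() then hands back two banks, 0x40000000-0x4fffffff and 0x60000000-0x7fffffff, to add_ext_regions(). The -ENOENT case at the end covers the degenerate outcome where the shm banks swallow the extended regions entirely and nothing is reported back.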
From patchwork Mon Aug 21 04:00:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Penny Zheng X-Patchwork-Id: 13359088 From: Penny Zheng To: xen-devel@lists.xenproject.org Cc: wei.chen@arm.com, Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng Subject: [PATCH v3 7/8] xen/p2m: put reference for superpage Date: Mon, 21 Aug 2023 12:00:45 +0800 Message-Id: <20230821040046.2965665-8-Penny.Zheng@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230821040046.2965665-1-Penny.Zheng@arm.com> References: <20230821040046.2965665-1-Penny.Zheng@arm.com> MIME-Version: 1.0 We are doing foreign memory mapping for static shared memory, and it is quite likely to be super mapped. But today, p2m_put_l3_page cannot handle superpages. This commit implements a new function, p2m_put_superpage, to handle superpages, specifically to put the extra references taken on foreign superpages. Signed-off-by: Penny Zheng --- v1 -> v2: - new commit --- v2 -> v3: - rebase and no change --- xen/arch/arm/p2m.c | 58 ++++++++++++++++++++++++++++++++++------------ 1 file changed, 43 insertions(+), 15 deletions(-) diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c index de32a2d638..b7b3db005e 100644 --- a/xen/arch/arm/p2m.c +++ b/xen/arch/arm/p2m.c @@ -851,17 +851,9 @@ static int p2m_mem_access_radix_set(struct p2m_domain *p2m, gfn_t gfn, return rc; } -/* - * Put any references on the single 4K page referenced by pte. - * TODO: Handle superpages, for now we only take special references for leaf - * pages (specifically foreign ones, which can't be super mapped today). - */ -static void p2m_put_l3_page(const lpae_t pte) +/* Put any references on the single 4K page referenced by mfn. */ +static void p2m_put_l3_page(mfn_t mfn, unsigned type) { - mfn_t mfn = lpae_get_mfn(pte); - - ASSERT(p2m_is_valid(pte)); - /* * TODO: Handle other p2m types * @@ -869,16 +861,53 @@ static void p2m_put_l3_page(const lpae_t pte) * It's safe to do the put_page here because page_alloc will * flush the TLBs if the page is reallocated before the end of * this loop. */ - if ( p2m_is_foreign(pte.p2m.type) ) + if ( p2m_is_foreign(type) ) { ASSERT(mfn_valid(mfn)); put_page(mfn_to_page(mfn)); } /* Detect the xenheap page and mark the stored GFN as invalid. */ - else if ( p2m_is_ram(pte.p2m.type) && is_xen_heap_mfn(mfn) ) + else if ( p2m_is_ram(type) && is_xen_heap_mfn(mfn) ) page_set_xenheap_gfn(mfn_to_page(mfn), INVALID_GFN); } +/* Put any references on the superpage referenced by mfn. */ +static void p2m_put_superpage(mfn_t mfn, unsigned int next_level, unsigned type) +{ + unsigned int i; + unsigned int level_order = XEN_PT_LEVEL_ORDER(next_level); + + for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ ) + { + if ( next_level == 3 ) + p2m_put_l3_page(mfn, type); + else + p2m_put_superpage(mfn, next_level + 1, type); + + mfn = mfn_add(mfn, 1 << level_order); + } +} + +/* Put any references on the page referenced by pte. */ +static void p2m_put_page(const lpae_t pte, unsigned int level) +{ + mfn_t mfn = lpae_get_mfn(pte); + + ASSERT(p2m_is_valid(pte)); + + /* + * The mapping is either a first-level 1G superpage or a + * second-level 2M superpage.
+ */ + if ( p2m_is_superpage(pte, level) ) + return p2m_put_superpage(mfn, level + 1, pte.p2m.type); + else + { + ASSERT(level == 3); + return p2m_put_l3_page(mfn, pte.p2m.type); + } +} + /* Free lpae sub-tree behind an entry */ static void p2m_free_entry(struct p2m_domain *p2m, lpae_t entry, unsigned int level) @@ -907,9 +936,8 @@ static void p2m_free_entry(struct p2m_domain *p2m, #endif p2m->stats.mappings[level]--; - /* Nothing to do if the entry is a super-page. */ - if ( level == 3 ) - p2m_put_l3_page(entry); + p2m_put_page(entry, level); + return; }
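To make the recursion concrete, the standalone sketch below mirrors p2m_put_superpage() and simply counts the leaf put operations a tear-down performs. LPAE_ENTRIES and the per-level orders are the Arm 4K-granule values the patch assumes (512 entries per table, 9 bits per level); the helpers are illustrative stand-ins, not the real Xen definitions.

#include <stdio.h>

#define LPAE_ENTRIES 512
#define LEVEL_ORDER(lvl) (9 * (3 - (lvl)))  /* L3 = 0, L2 = 9, L1 = 18 */

/* Mirror of p2m_put_superpage(): returns how many 4K pages get put. */
static unsigned long put_superpage(unsigned long mfn, unsigned int next_level)
{
    unsigned long puts = 0;
    unsigned int i;

    for ( i = 0; i < LPAE_ENTRIES; i++ )
    {
        if ( next_level == 3 )
            puts += 1;                /* one p2m_put_l3_page(mfn, type) */
        else
            puts += put_superpage(mfn, next_level + 1);

        mfn += 1UL << LEVEL_ORDER(next_level);
    }

    return puts;
}

int main(void)
{
    /* 2M second-level superpage: p2m_put_page() recurses with next_level == 3. */
    printf("2M superpage -> %lu puts\n", put_superpage(0, 3));
    /* 1G first-level superpage: recursion starts with next_level == 2. */
    printf("1G superpage -> %lu puts\n", put_superpage(0, 2));

    return 0;
}

This prints 512 puts for a 2M superpage and 512 * 512 = 262144 for a 1G one, and shows why the recursion depth stays bounded by the number of page-table levels even though the page count grows by a factor of LPAE_ENTRIES per level.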
From patchwork Mon Aug 21 04:00:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Penny Zheng X-Patchwork-Id: 13359087 From: Penny Zheng To: xen-devel@lists.xenproject.org Cc: wei.chen@arm.com, Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng Subject: [PATCH v3 8/8] xen/docs: refine docs about static shared memory Date: Mon, 21 Aug 2023 12:00:46 +0800 Message-Id: <20230821040046.2965665-9-Penny.Zheng@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230821040046.2965665-1-Penny.Zheng@arm.com> References: <20230821040046.2965665-1-Penny.Zheng@arm.com> MIME-Version: 1.0 This commit amends the docs (docs/misc/arm/device-tree/booting.txt) to cover the new scenario where the host address is not provided in the "xen,shared-mem" property, and adds a new example to explain it in detail. We also fix some incorrect information in the docs, e.g. the SHM ID is "my-shared-mem-1", not "0x1". Signed-off-by: Penny Zheng --- v1 -> v2: - no new changes --- v2 -> v3: - rebase and no change --- docs/misc/arm/device-tree/booting.txt | 52 ++++++++++++++++++++------- 1 file changed, 39 insertions(+), 13 deletions(-) diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt index bbd955e9c2..ac4bad6fe5 100644 --- a/docs/misc/arm/device-tree/booting.txt +++ b/docs/misc/arm/device-tree/booting.txt @@ -590,7 +590,7 @@ communication. An array takes a physical address, which is the base address of the shared memory region in host physical address space, a size, and a guest physical address, as the target address of the mapping. - e.g. xen,shared-mem = < [host physical address] [guest address] [size] > + e.g. xen,shared-mem = < [host physical address] [guest address] [size] >; It shall also meet the following criteria: 1) If the SHM ID matches with an existing region, the address range of the @@ -601,8 +601,8 @@ communication. The number of cells for the host address (and size) is the same as the guest pseudo-physical address and they are inherited from the parent node. - Host physical address is optional, when missing Xen decides the location - (currently unimplemented). + Host physical address is optional; when missing, Xen decides the location. + e.g. xen,shared-mem = < [guest address] [size] >; - role (Optional) @@ -629,7 +629,7 @@ chosen { role = "owner"; xen,shm-id = "my-shared-mem-0"; xen,shared-mem = <0x10000000 0x10000000 0x10000000>; - } + }; domU1 { compatible = "xen,domain"; @@ -640,25 +640,36 @@ chosen { vpl011; /* - * shared memory region identified as 0x0(xen,shm-id = <0x0>) - * is shared between Dom0 and DomU1. + * shared memory region "my-shared-mem-0" is shared + * between Dom0 and DomU1. */ domU1-shared-mem@10000000 { compatible = "xen,domain-shared-memory-v1"; role = "borrower"; xen,shm-id = "my-shared-mem-0"; xen,shared-mem = <0x10000000 0x50000000 0x10000000>; - } + }; /* - * shared memory region identified as 0x1(xen,shm-id = <0x1>) - * is shared between DomU1 and DomU2. + * shared memory region "my-shared-mem-1" is shared between + * DomU1 and DomU2. */ domU1-shared-mem@50000000 { compatible = "xen,domain-shared-memory-v1"; xen,shm-id = "my-shared-mem-1"; xen,shared-mem = <0x50000000 0x60000000 0x20000000>; - } + }; + + /* + * shared memory region "my-shared-mem-2" is shared between + * DomU1 and DomU2. + */ + domU1-shared-mem-2 { + compatible = "xen,domain-shared-memory-v1"; + xen,shm-id = "my-shared-mem-2"; + role = "owner"; + xen,shared-mem = <0x80000000 0x20000000>; + }; ...... @@ -672,14 +683,21 @@ chosen { cpus = <1>; /* - * shared memory region identified as 0x1(xen,shm-id = <0x1>) - * is shared between domU1 and domU2. + * shared memory region "my-shared-mem-1" is shared between + * domU1 and domU2. */ domU2-shared-mem@50000000 { compatible = "xen,domain-shared-memory-v1"; xen,shm-id = "my-shared-mem-1"; xen,shared-mem = <0x50000000 0x70000000 0x20000000>; - } + }; + + domU2-shared-mem-2 { + compatible = "xen,domain-shared-memory-v1"; + xen,shm-id = "my-shared-mem-2"; + role = "borrower"; + xen,shared-mem = <0x90000000 0x20000000>; + }; ...... }; @@ -699,3 +717,11 @@ shared between DomU1 and DomU2. It will get mapped at 0x60000000 in DomU1 guest physical address space, and at 0x70000000 in DomU2 guest physical address space. DomU1 and DomU2 are both the borrower domain, the owner domain is the default owner domain DOMID_IO.
+ +For the static shared memory region "my-shared-mem-2", since the host physical +address is not provided by the user, Xen will automatically allocate 512MB +from the heap as static shared memory to be shared between DomU1 and DomU2. +The automatically allocated static shared memory will get mapped at +0x80000000 in DomU1's guest physical address space, and at 0x90000000 in DomU2's +guest physical address space. DomU1 is explicitly defined as the owner domain, +and DomU2 is the borrower domain.
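As a quick sanity check on the sizes in this example: the size cell 0x20000000 is 2^29 bytes = 512 * 2^20, i.e. exactly the 512MB quoted above. Since the "my-shared-mem-2" nodes carry no host address cell, the two domains only need to agree on the SHM ID and the size; each one picks its own guest base address (0x80000000 for DomU1, 0x90000000 for DomU2) at which the same underlying heap allocation is mapped.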