From patchwork Wed Dec 6 09:06:13 2023
X-Patchwork-Submitter: Penny Zheng
X-Patchwork-Id: 13481293
From: Penny Zheng
To: xen-devel@lists.xenproject.org, michal.orzel@amd.com
Cc: wei.chen@arm.com, Penny Zheng, Stefano Stabellini, Julien Grall,
 Bertrand Marquis, Volodymyr Babchuk, Penny Zheng
Subject: [PATCH v5 01/11] xen/arm: remove stale addr_cells/size_cells in assign_shared_memory
Date: Wed, 6 Dec 2023 17:06:13 +0800
Message-Id: <20231206090623.1932275-2-Penny.Zheng@arm.com>
In-Reply-To: <20231206090623.1932275-1-Penny.Zheng@arm.com>
References: <20231206090623.1932275-1-Penny.Zheng@arm.com>

The function parameters {addr_cells, size_cells} are stale (no longer
used) in assign_shared_memory, so remove them.
Signed-off-by: Penny Zheng
Reviewed-by: Michal Orzel
---
v1 -> v2:
- new commit
---
v2 -> v3: rebase and no change
---
v3 -> v4: rebase and no change
---
v4 -> v5: rebase and no change
---
 xen/arch/arm/static-shmem.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c
index 9097bc8b15..cb268cd2ed 100644
--- a/xen/arch/arm/static-shmem.c
+++ b/xen/arch/arm/static-shmem.c
@@ -90,7 +90,6 @@ static mfn_t __init acquire_shared_memory_bank(struct domain *d,
 }
 
 static int __init assign_shared_memory(struct domain *d,
-                                       uint32_t addr_cells, uint32_t size_cells,
                                        paddr_t pbase, paddr_t psize,
                                        paddr_t gbase)
 {
@@ -252,7 +251,6 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
          * specified, so they should be assigned to dom_io.
          */
         ret = assign_shared_memory(owner_dom_io ? dom_io : d,
-                                   addr_cells, size_cells,
                                    pbase, psize, gbase);
         if ( ret )
             return ret;

From patchwork Wed Dec 6 09:06:14 2023
X-Patchwork-Submitter: Penny Zheng
X-Patchwork-Id: 13481296
From: Penny Zheng
To: xen-devel@lists.xenproject.org, michal.orzel@amd.com
Cc: wei.chen@arm.com, Penny Zheng, Stefano Stabellini, Julien Grall,
 Bertrand Marquis, Volodymyr Babchuk, Penny Zheng
Subject: [PATCH v5 02/11] xen/arm: avoid repetitive checking in process_shm_node
Date: Wed, 6 Dec 2023 17:06:14 +0800
Message-Id: <20231206090623.1932275-3-Penny.Zheng@arm.com>
In-Reply-To: <20231206090623.1932275-1-Penny.Zheng@arm.com>
References: <20231206090623.1932275-1-Penny.Zheng@arm.com>

Performing the overlap and overflow checks inside the loop repeats the
same work on every iteration, so this commit hoists both checks out of
the loop.
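For illustration, the hoisted test relies on unsigned wrap-around: with
an unsigned paddr_t, paddr + size overflowing wraps past zero, so
"end <= paddr" catches it for any non-zero size (size is already checked
against zero earlier in the function). A minimal standalone
demonstration of the idiom (not part of the patch; the typedef is chosen
to mirror Xen's):

    #include <assert.h>
    #include <stdint.h>

    typedef uint64_t paddr_t;

    static int region_overflows(paddr_t paddr, paddr_t size)
    {
        paddr_t end = paddr + size; /* unsigned arithmetic: may wrap */

        /* True exactly when paddr + size wrapped (given size != 0). */
        return end <= paddr;
    }

    int main(void)
    {
        assert(!region_overflows(0x40000000UL, 0x10000000UL));
        assert(region_overflows(UINT64_MAX - 0xfffUL, 0x2000UL));
        return 0;
    }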
Signed-off-by: Penny Zheng
Reviewed-by: Michal Orzel
---
v5: new commit
---
 xen/arch/arm/static-shmem.c | 39 +++++++++++++++----------------
 1 file changed, 16 insertions(+), 23 deletions(-)

diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c
index cb268cd2ed..1a1a9386e4 100644
--- a/xen/arch/arm/static-shmem.c
+++ b/xen/arch/arm/static-shmem.c
@@ -349,7 +349,7 @@ int __init process_shm_node(const void *fdt, int node, uint32_t address_cells,
 {
     const struct fdt_property *prop, *prop_id, *prop_role;
     const __be32 *cell;
-    paddr_t paddr, gaddr, size;
+    paddr_t paddr, gaddr, size, end;
     struct meminfo *mem = &bootinfo.reserved_mem;
     unsigned int i;
     int len;
@@ -422,6 +422,13 @@ int __init process_shm_node(const void *fdt, int node, uint32_t address_cells,
         return -EINVAL;
     }
 
+    end = paddr + size;
+    if ( end <= paddr )
+    {
+        printk("fdt: static shared memory region %s overflow\n", shm_id);
+        return -EINVAL;
+    }
+
     for ( i = 0; i < mem->nr_banks; i++ )
     {
         /*
@@ -441,30 +448,13 @@ int __init process_shm_node(const void *fdt, int node, uint32_t address_cells,
                 return -EINVAL;
             }
         }
+        else if ( strcmp(shm_id, mem->bank[i].shm_id) != 0 )
+            continue;
         else
         {
-            paddr_t end = paddr + size;
-            paddr_t bank_end = mem->bank[i].start + mem->bank[i].size;
-
-            if ( (end <= paddr) || (bank_end <= mem->bank[i].start) )
-            {
-                printk("fdt: static shared memory region %s overflow\n", shm_id);
-                return -EINVAL;
-            }
-
-            if ( check_reserved_regions_overlap(paddr, size) )
-                return -EINVAL;
-            else
-            {
-                if ( strcmp(shm_id, mem->bank[i].shm_id) != 0 )
-                    continue;
-                else
-                {
-                    printk("fdt: different shared memory region could not share the same shm ID %s\n",
-                           shm_id);
-                    return -EINVAL;
-                }
-            }
+            printk("fdt: different shared memory region could not share the same shm ID %s\n",
+                   shm_id);
+            return -EINVAL;
         }
     }
 
@@ -472,6 +462,9 @@ int __init process_shm_node(const void *fdt, int node, uint32_t address_cells,
     {
         if ( i < NR_MEM_BANKS )
         {
+            if ( check_reserved_regions_overlap(paddr, size) )
+                return -EINVAL;
+
             /* Static shared memory shall be reserved from any other use. */
             safe_strcpy(mem->bank[mem->nr_banks].shm_id, shm_id);
             mem->bank[mem->nr_banks].start = paddr;

From patchwork Wed Dec 6 09:06:15 2023
X-Patchwork-Submitter: Penny Zheng
X-Patchwork-Id: 13481295
From: Penny Zheng
To: xen-devel@lists.xenproject.org, michal.orzel@amd.com
Cc: wei.chen@arm.com, Penny Zheng, Stefano Stabellini, Julien Grall,
 Bertrand Marquis, Volodymyr Babchuk, Penny Zheng
Subject: [PATCH v5 03/11] xen/arm: re-define a set of data structures for static shared memory region
Date: Wed, 6 Dec 2023 17:06:15 +0800
Message-Id: <20231206090623.1932275-4-Penny.Zheng@arm.com>
In-Reply-To: <20231206090623.1932275-1-Penny.Zheng@arm.com>
References: <20231206090623.1932275-1-Penny.Zheng@arm.com>

This commit introduces a set of separate data structures to deal with
static shared memory at different stages.

In boot-time host device tree parsing, we introduce a new structure
"struct shm_node" and a new field "shminfo" in bootinfo to describe and
store the parsed shm info. In acquire_nr_borrower_domain, it is better
to use the SHMID as the unique identifier when iterating "shminfo",
rather than the address and size.

Finally, a new anonymous structure "shminfo", an array of compound
structures each containing an SHMID and a "struct membank membank"
describing the shared memory region in guest address space, is created
in "kinfo" when dealing with domain information.
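To illustrate the intended lookup model (a standalone sketch, not the
patch code; the array bound NR_NODES is made up), finding a node now
keys on the SHMID string instead of comparing start/size pairs:

    #include <string.h>

    #define MAX_SHM_ID_LENGTH 16
    #define NR_NODES          8

    struct shm_node {
        char shm_id[MAX_SHM_ID_LENGTH];
        unsigned int nr_shm_borrowers;
    };

    static struct shm_node nodes[NR_NODES];
    static unsigned int nr_nodes;

    /* Mirrors the reworked acquire_nr_borrower_domain(): match on SHMID only. */
    static int nr_borrowers_by_id(const char *shm_id, unsigned long *out)
    {
        unsigned int i;

        for ( i = 0; i < nr_nodes; i++ )
            if ( strcmp(shm_id, nodes[i].shm_id) == 0 )
            {
                *out = nodes[i].nr_shm_borrowers;
                return 0;
            }

        return -1; /* -ENOENT in the real code */
    }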
Signed-off-by: Penny Zheng
---
v1 -> v2:
- As the original "struct shm_membank" was making reserving memory more
  complex, and the memory information can still be obtained from the host
  Device Tree when dealing with domain construction, we introduce a new
  simple structure "struct shm_node" in bootinfo to store only the SHMID
  and "nr_borrowers"
- Further restrict the scope of the local variable
  "struct meminfo *mem = &bootinfo.reserved_mem"
- Introduce new file-scope data "shm_data" in bootfdt.c, in which the
  reserved memory bank is recorded together with the shm node, to assist
  in shm node verification
- Define a set of local variables that point to
  "shm_data.shm_nodes[i].membank->start", etc., to make the code more
  readable
- Use the SHMID to iterate "shminfo" to find the requested shm node, as
  we no longer store host memory bank info in the shm node
- A new anonymous structure, an array of compound structures containing
  an SHMID and a "struct membank membank" describing the shared memory
  region in the guest, is introduced in "kinfo"
---
v2 -> v3:
- rebase and no change
---
v3 -> v4: rebase and no change
---
v4 -> v5:
- With all shm-related functions consolidated into static-shmem.c, there
  is no need for the file-scope data "shm_data" any more.
---
 xen/arch/arm/dom0less-build.c           |   3 +-
 xen/arch/arm/domain_build.c             |   3 +-
 xen/arch/arm/include/asm/kernel.h       |   9 +-
 xen/arch/arm/include/asm/setup.h        |  24 +++++-
 xen/arch/arm/include/asm/static-shmem.h |   4 +-
 xen/arch/arm/static-shmem.c             | 104 ++++++++++++++----------
 6 files changed, 92 insertions(+), 55 deletions(-)

diff --git a/xen/arch/arm/dom0less-build.c b/xen/arch/arm/dom0less-build.c
index fb63ec6fd1..ac096fa3fa 100644
--- a/xen/arch/arm/dom0less-build.c
+++ b/xen/arch/arm/dom0less-build.c
@@ -645,8 +645,7 @@ static int __init prepare_dtb_domU(struct domain *d, struct kernel_info *kinfo)
     if ( ret )
         goto err;
 
-    ret = make_resv_memory_node(d, kinfo->fdt, addrcells, sizecells,
-                                &kinfo->shm_mem);
+    ret = make_resv_memory_node(d, kinfo->fdt, addrcells, sizecells, kinfo);
     if ( ret )
         goto err;
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 613b2885ce..64ae944431 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1767,8 +1767,7 @@ static int __init handle_node(struct domain *d, struct kernel_info *kinfo,
             return res;
     }
 
-    res = make_resv_memory_node(d, kinfo->fdt, addrcells, sizecells,
-                                &kinfo->shm_mem);
+    res = make_resv_memory_node(d, kinfo->fdt, addrcells, sizecells, kinfo);
     if ( res )
         return res;
 }
diff --git a/xen/arch/arm/include/asm/kernel.h b/xen/arch/arm/include/asm/kernel.h
index 0a23e86c2d..db3d8232fa 100644
--- a/xen/arch/arm/include/asm/kernel.h
+++ b/xen/arch/arm/include/asm/kernel.h
@@ -39,7 +39,14 @@ struct kernel_info {
     void *fdt; /* flat device tree */
     paddr_t unassigned_mem; /* RAM not (yet) assigned to a bank */
     struct meminfo mem;
-    struct meminfo shm_mem;
+    /* Static shared memory banks */
+    struct {
+        unsigned int nr_banks;
+        struct {
+            char shm_id[MAX_SHM_ID_LENGTH];
+            struct membank membank;
+        } bank[NR_MEM_BANKS];
+    } shminfo;
 
     /* kernel entry point */
     paddr_t entry;
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index d15a88d2e0..3a2b35ea46 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -50,10 +50,6 @@ struct membank {
     paddr_t start;
     paddr_t size;
     enum membank_type type;
-#ifdef CONFIG_STATIC_SHM
-    char shm_id[MAX_SHM_ID_LENGTH];
-    unsigned int nr_shm_borrowers;
-#endif
 };
 
 struct meminfo {
@@ -95,6 +91,17 @@ struct bootcmdlines {
     struct bootcmdline cmdline[MAX_MODULES];
 };
 
+#ifdef CONFIG_STATIC_SHM
+/*
+ * struct shm_node represents a static shared memory node shared between
+ * multiple domains, identified by the unique SHMID("xen,shm-id").
+ */
+struct shm_node {
+    char shm_id[MAX_SHM_ID_LENGTH];
+    unsigned int nr_shm_borrowers;
+};
+#endif
+
 struct bootinfo {
     struct meminfo mem;
     /* The reserved regions are only used when booting using Device-Tree */
@@ -105,6 +112,15 @@ struct bootinfo {
     struct meminfo acpi;
 #endif
     bool static_heap;
+#ifdef CONFIG_STATIC_SHM
+    struct {
+        unsigned int nr_nodes;
+        struct {
+            struct shm_node node;
+            const struct membank *membank;
+        } shm_nodes[NR_MEM_BANKS];
+    } shminfo;
+#endif
 };
 
 struct map_range_data
diff --git a/xen/arch/arm/include/asm/static-shmem.h b/xen/arch/arm/include/asm/static-shmem.h
index 1536ff18b8..66a3f4c146 100644
--- a/xen/arch/arm/include/asm/static-shmem.h
+++ b/xen/arch/arm/include/asm/static-shmem.h
@@ -8,7 +8,7 @@
 #ifdef CONFIG_STATIC_SHM
 
 int make_resv_memory_node(const struct domain *d, void *fdt, int addrcells,
-                          int sizecells, const struct meminfo *mem);
+                          int sizecells, const struct kernel_info *kinfo);
 
 int process_shm(struct domain *d, struct kernel_info *kinfo,
                 const struct dt_device_node *node);
@@ -28,7 +28,7 @@ int process_shm_node(const void *fdt, int node, uint32_t address_cells,
 
 static inline int make_resv_memory_node(const struct domain *d, void *fdt,
                                         int addrcells, int sizecells,
-                                        const struct meminfo *mem)
+                                        const struct kernel_info *kinfo)
 {
     return 0;
 }
diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c
index 1a1a9386e4..6a3d8a54bd 100644
--- a/xen/arch/arm/static-shmem.c
+++ b/xen/arch/arm/static-shmem.c
@@ -6,28 +6,25 @@
 #include
 #include
 
-static int __init acquire_nr_borrower_domain(struct domain *d,
-                                             paddr_t pbase, paddr_t psize,
+static int __init acquire_nr_borrower_domain(const char *shm_id,
                                              unsigned long *nr_borrowers)
 {
-    unsigned int bank;
+    struct shm_node *shm_node;
+    unsigned int i;
 
-    /* Iterate reserved memory to find requested shm bank. */
-    for ( bank = 0 ; bank < bootinfo.reserved_mem.nr_banks; bank++ )
+    /* Iterate to find requested static shared memory node. */
+    for ( i = 0; i < bootinfo.shminfo.nr_nodes; i++ )
     {
-        paddr_t bank_start = bootinfo.reserved_mem.bank[bank].start;
-        paddr_t bank_size = bootinfo.reserved_mem.bank[bank].size;
+        shm_node = &bootinfo.shminfo.shm_nodes[i].node;
 
-        if ( (pbase == bank_start) && (psize == bank_size) )
-            break;
+        if ( strcmp(shm_id, shm_node->shm_id) == 0 )
+        {
+            *nr_borrowers = shm_node->nr_shm_borrowers;
+            return 0;
+        }
     }
 
-    if ( bank == bootinfo.reserved_mem.nr_banks )
-        return -ENOENT;
-
-    *nr_borrowers = bootinfo.reserved_mem.bank[bank].nr_shm_borrowers;
-
-    return 0;
+    return -ENOENT;
 }
 
 /*
@@ -91,7 +88,7 @@ static mfn_t __init acquire_shared_memory_bank(struct domain *d,
 }
 
 static int __init assign_shared_memory(struct domain *d,
                                        paddr_t pbase, paddr_t psize,
-                                       paddr_t gbase)
+                                       paddr_t gbase, const char *shm_id)
 {
     mfn_t smfn;
     int ret = 0;
@@ -125,7 +122,7 @@ static int __init assign_shared_memory(struct domain *d,
      * Get the right amount of references per page, which is the number of
      * borrower domains.
      */
-    ret = acquire_nr_borrower_domain(d, pbase, psize, &nr_borrowers);
+    ret = acquire_nr_borrower_domain(shm_id, &nr_borrowers);
     if ( ret )
         return ret;
 
@@ -161,13 +158,16 @@ static int __init append_shm_bank_to_domain(struct kernel_info *kinfo,
                                             paddr_t start, paddr_t size,
                                             const char *shm_id)
 {
-    if ( kinfo->shm_mem.nr_banks >= NR_MEM_BANKS )
+    unsigned int nr_banks = kinfo->shminfo.nr_banks;
+    struct membank *membank = &kinfo->shminfo.bank[nr_banks].membank;
+
+    if ( nr_banks >= NR_MEM_BANKS )
         return -ENOMEM;
 
-    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].start = start;
-    kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].size = size;
-    safe_strcpy(kinfo->shm_mem.bank[kinfo->shm_mem.nr_banks].shm_id, shm_id);
-    kinfo->shm_mem.nr_banks++;
+    membank->start = start;
+    membank->size = size;
+    safe_strcpy(kinfo->shminfo.bank[nr_banks].shm_id, shm_id);
+    kinfo->shminfo.nr_banks++;
 
     return 0;
 }
@@ -251,7 +251,7 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
          * specified, so they should be assigned to dom_io.
          */
         ret = assign_shared_memory(owner_dom_io ? dom_io : d,
-                                   pbase, psize, gbase);
+                                   pbase, psize, gbase, shm_id);
         if ( ret )
             return ret;
     }
@@ -279,12 +279,12 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
 
 static int __init make_shm_memory_node(const struct domain *d, void *fdt,
                                        int addrcells, int sizecells,
-                                       const struct meminfo *mem)
+                                       const struct kernel_info *kinfo)
 {
     unsigned int i = 0;
     int res = 0;
 
-    if ( mem->nr_banks == 0 )
+    if ( kinfo->shminfo.nr_banks == 0 )
         return -ENOENT;
 
     /*
@@ -294,17 +294,17 @@ static int __init make_shm_memory_node(const struct domain *d, void *fdt,
      */
     dt_dprintk("Create xen-shmem node\n");
 
-    for ( ; i < mem->nr_banks; i++ )
+    for ( ; i < kinfo->shminfo.nr_banks; i++ )
     {
-        uint64_t start = mem->bank[i].start;
-        uint64_t size = mem->bank[i].size;
+        uint64_t start = kinfo->shminfo.bank[i].membank.start;
+        uint64_t size = kinfo->shminfo.bank[i].membank.size;
         const char compat[] = "xen,shared-memory-v1";
         /* Worst case addrcells + sizecells */
         __be32 reg[GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS];
         __be32 *cells;
         unsigned int len = (addrcells + sizecells) * sizeof(__be32);
 
-        res = domain_fdt_begin_node(fdt, "xen-shmem", mem->bank[i].start);
+        res = domain_fdt_begin_node(fdt, "xen-shmem", start);
         if ( res )
             return res;
 
@@ -322,7 +322,7 @@ static int __init make_shm_memory_node(const struct domain *d, void *fdt,
         dt_dprintk("Shared memory bank %u: %#"PRIx64"->%#"PRIx64"\n",
                    i, start, start + size);
 
-        res = fdt_property_string(fdt, "xen,id", mem->bank[i].shm_id);
+        res = fdt_property_string(fdt, "xen,id", kinfo->shminfo.bank[i].shm_id);
         if ( res )
             return res;
 
@@ -350,7 +350,6 @@ int __init process_shm_node(const void *fdt, int node, uint32_t address_cells,
     const struct fdt_property *prop, *prop_id, *prop_role;
     const __be32 *cell;
     paddr_t paddr, gaddr, size, end;
-    struct meminfo *mem = &bootinfo.reserved_mem;
     unsigned int i;
     int len;
     bool owner = false;
@@ -429,17 +428,21 @@ int __init process_shm_node(const void *fdt, int node, uint32_t address_cells,
         return -EINVAL;
     }
 
-    for ( i = 0; i < mem->nr_banks; i++ )
+    for ( i = 0; i < bootinfo.shminfo.nr_nodes; i++ )
     {
+        paddr_t bank_start = bootinfo.shminfo.shm_nodes[i].membank->start;
+        paddr_t bank_size = bootinfo.shminfo.shm_nodes[i].membank->size;
+        const char *bank_id = bootinfo.shminfo.shm_nodes[i].node.shm_id;
+
        /*
         * Meet the following check:
         * 1) The shm ID matches and the region exactly match
         * 2) The shm ID doesn't match and the region doesn't overlap
        *    with an existing one
        */
-        if ( paddr == mem->bank[i].start && size == mem->bank[i].size )
+        if ( paddr == bank_start && size == bank_size )
         {
-            if ( strncmp(shm_id, mem->bank[i].shm_id, MAX_SHM_ID_LENGTH) == 0 )
+            if ( strncmp(shm_id, bank_id, MAX_SHM_ID_LENGTH) == 0 )
                 break;
             else
             {
@@ -458,19 +461,32 @@ int __init process_shm_node(const void *fdt, int node, uint32_t address_cells,
         }
     }
 
-    if ( i == mem->nr_banks )
+    if ( i == bootinfo.shminfo.nr_nodes )
     {
-        if ( i < NR_MEM_BANKS )
+        struct meminfo *mem = &bootinfo.reserved_mem;
+
+        if ( (i < NR_MEM_BANKS) && (mem->nr_banks < NR_MEM_BANKS) )
         {
+            struct membank *membank = &mem->bank[mem->nr_banks];
+            struct shm_node *shm_node = &bootinfo.shminfo.shm_nodes[i].node;
+
             if ( check_reserved_regions_overlap(paddr, size) )
                 return -EINVAL;
 
             /* Static shared memory shall be reserved from any other use. */
-            safe_strcpy(mem->bank[mem->nr_banks].shm_id, shm_id);
-            mem->bank[mem->nr_banks].start = paddr;
-            mem->bank[mem->nr_banks].size = size;
-            mem->bank[mem->nr_banks].type = MEMBANK_STATIC_DOMAIN;
+            membank->start = paddr;
+            membank->size = size;
+            membank->type = MEMBANK_STATIC_DOMAIN;
             mem->nr_banks++;
+
+            /* Record static shared memory node info in bootinfo.shminfo */
+            safe_strcpy(shm_node->shm_id, shm_id);
+            /*
+             * Reserved memory bank is recorded together to assist
+             * doing shm node verification.
+             */
+            bootinfo.shminfo.shm_nodes[i].membank = membank;
+            bootinfo.shminfo.nr_nodes++;
         }
         else
         {
@@ -483,20 +499,20 @@ int __init process_shm_node(const void *fdt, int node, uint32_t address_cells,
      * to calculate the reference count.
      */
     if ( !owner )
-        mem->bank[i].nr_shm_borrowers++;
+        bootinfo.shminfo.shm_nodes[i].node.nr_shm_borrowers++;
 
     return 0;
 }
 
 int __init make_resv_memory_node(const struct domain *d, void *fdt,
                                  int addrcells, int sizecells,
-                                 const struct meminfo *mem)
+                                 const struct kernel_info *kinfo)
 {
     int res = 0;
     /* Placeholder for reserved-memory\0 */
     const char resvbuf[16] = "reserved-memory";
 
-    if ( mem->nr_banks == 0 )
+    if ( kinfo->shminfo.nr_banks == 0 )
         /* No shared memory provided. */
         return 0;
 
@@ -518,7 +534,7 @@ int __init make_resv_memory_node(const struct domain *d, void *fdt,
     if ( res )
         return res;
 
-    res = make_shm_memory_node(d, fdt, addrcells, sizecells, mem);
+    res = make_shm_memory_node(d, fdt, addrcells, sizecells, kinfo);
     if ( res )
         return res;

From patchwork Wed Dec 6 09:06:16 2023
X-Patchwork-Submitter: Penny Zheng
X-Patchwork-Id: 13481298
From: Penny Zheng
To: xen-devel@lists.xenproject.org, michal.orzel@amd.com
Cc: wei.chen@arm.com, Penny Zheng, Stefano Stabellini, Julien Grall,
 Bertrand Marquis, Volodymyr Babchuk, Penny Zheng
Subject: [PATCH v5 04/11] xen/arm: introduce allocate_domheap_memory and guest_physmap_memory
Date: Wed, 6 Dec 2023 17:06:16 +0800
Message-Id: <20231206090623.1932275-5-Penny.Zheng@arm.com>
In-Reply-To: <20231206090623.1932275-1-Penny.Zheng@arm.com>
References: <20231206090623.1932275-1-Penny.Zheng@arm.com>

We split the code of allocate_bank_memory into two parts:
allocate_domheap_memory and guest_physmap_memory. The first allocates
guest RAM from the heap, and can be re-used later for allocating static
shared memory from the heap when no host address is provided. The second
builds up the guest P2M mapping.

We also define a set of MACRO helpers to access common fields in any
data structure of "meminfo" type: "struct meminfo" is one such type, and
the new "struct shm_meminfo" introduced later is another. A structure of
this kind must have the following characteristics:
- an array of "struct membank"
- a member called "nr_banks" indicating the current array size
- a field indicating the maximum array size

When introducing a new data structure, a corresponding callback of
function type "retrieve_fn" must be defined so that the MACRO helpers
can be used (see the usage sketch below). This commit defines the
callback "retrieve_meminfo" for the data structure "struct meminfo".
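As a usage sketch (the callers has_free_bank/append_bank below are made
up for illustration; the macros and NORMAL_MEMINFO are the ones added by
this patch), code that only holds a void pointer plus a meminfo_type can
manipulate banks generically:

    static bool __init has_free_bank(void *mem, enum meminfo_type type)
    {
        unsigned int *nr_banks = GET_NR_BANKS(mem, type);

        /* MAX_MEM_BANKS(type) resolves through retrievers[type]. */
        return *nr_banks < MAX_MEM_BANKS(type);
    }

    static void __init append_bank(void *mem, enum meminfo_type type,
                                   paddr_t start, paddr_t size)
    {
        unsigned int *nr_banks = GET_NR_BANKS(mem, type);
        struct membank *bank = GET_MEMBANK(mem, type, *nr_banks);

        bank->start = start;
        bank->size = size;
        (*nr_banks)++;
    }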
"struct meminfo" is one of them, and later new "struct shm_meminfo" is also one of them. This kind of structures must have the following characteristics: - an array of "struct membank" - a member called "nr_banks" indicating current array size - a field indicating the maximum array size When introducing a new data structure, according callbacks with function type "retrieve_fn" shall be defined for using MACRO helpers. This commit defines callback "retrieve_meminfo" for data structure "struct meminfo". Signed-off-by: Penny Zheng --- v1 -> v2: - define a set of MACRO helpers to access common fields in data structure of "meminfo" type. "struct meminfo" is one of them, and according callback "retrieve_meminfo" is also introduced here. - typo of changing 1ULL to 1UL --- v2 -> v3 - rebase and no changes --- v3 -> v4: rebase and no change --- v4 -> v5: rebase and no change --- xen/arch/arm/domain_build.c | 119 +++++++++++++++++++++++++------ xen/arch/arm/include/asm/setup.h | 33 +++++++++ 2 files changed, 129 insertions(+), 23 deletions(-) diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index 64ae944431..a8bc78baa5 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -51,6 +51,28 @@ boolean_param("ext_regions", opt_ext_regions); static u64 __initdata dom0_mem; static bool __initdata dom0_mem_set; +#ifdef CONFIG_DOM0LESS_BOOT +static void __init retrieve_meminfo(void *mem, unsigned int *max_mem_banks, + struct membank **bank, + unsigned int **nr_banks) +{ + struct meminfo *meminfo = (struct meminfo *)mem; + + if ( max_mem_banks ) + *max_mem_banks = NR_MEM_BANKS; + + if ( nr_banks ) + *nr_banks = &(meminfo->nr_banks); + + if ( bank ) + *bank = meminfo->bank; +} + +retrieve_fn __initdata retrievers[MAX_MEMINFO_TYPE] = { + [NORMAL_MEMINFO] = retrieve_meminfo, +}; +#endif + static int __init parse_dom0_mem(const char *s) { dom0_mem_set = true; @@ -415,32 +437,20 @@ static void __init allocate_memory_11(struct domain *d, } #ifdef CONFIG_DOM0LESS_BOOT -bool __init allocate_bank_memory(struct domain *d, struct kernel_info *kinfo, - gfn_t sgfn, paddr_t tot_size) +static bool __init allocate_domheap_memory(struct domain *d, + paddr_t tot_size, + void *mem, enum meminfo_type type) { - int res; struct page_info *pg; - struct membank *bank; unsigned int max_order = ~0; - - /* - * allocate_bank_memory can be called with a tot_size of zero for - * the second memory bank. It is not an error and we can safely - * avoid creating a zero-size memory bank. 
-     */
-    if ( tot_size == 0 )
-        return true;
-
-    bank = &kinfo->mem.bank[kinfo->mem.nr_banks];
-    bank->start = gfn_to_gaddr(sgfn);
-    bank->size = tot_size;
+    unsigned int *nr_banks = GET_NR_BANKS(mem, type);
 
     while ( tot_size > 0 )
     {
         unsigned int order = get_allocation_size(tot_size);
+        struct membank *membank;
 
         order = min(max_order, order);
-
         pg = alloc_domheap_pages(d, order, 0);
         if ( !pg )
         {
@@ -460,15 +470,78 @@ bool __init allocate_bank_memory(struct domain *d, struct kernel_info *kinfo,
             continue;
         }
 
-        res = guest_physmap_add_page(d, sgfn, page_to_mfn(pg), order);
-        if ( res )
-        {
-            dprintk(XENLOG_ERR, "Failed map pages to DOMU: %d", res);
+        if ( *nr_banks == MAX_MEM_BANKS(type) )
             return false;
-        }
+
+        membank = GET_MEMBANK(mem, type, *nr_banks);
+        membank->start = mfn_to_maddr(page_to_mfn(pg));
+        membank->size = 1ULL << (PAGE_SHIFT + order);
+        (*nr_banks)++;
+        tot_size -= membank->size;
+    }
+
+    return true;
+}
+
+static int __init guest_physmap_memory(struct domain *d,
+                                       void *mem, enum meminfo_type type,
+                                       gfn_t sgfn)
+{
+    unsigned int i;
+    int res;
+    unsigned int *nr_banks = GET_NR_BANKS(mem, type);
+
+    for ( i = 0; i < *nr_banks; i++ )
+    {
+        struct membank *membank = GET_MEMBANK(mem, type, i);
+        paddr_t start = membank->start;
+        paddr_t size = membank->size;
+        unsigned int order = get_order_from_bytes(size);
+
+        /* Size must be power of two */
+        BUG_ON(!size || (size & (size - 1)));
+        res = guest_physmap_add_page(d, sgfn, maddr_to_mfn(start), order);
+        if ( res )
+            return res;
 
         sgfn = gfn_add(sgfn, 1UL << order);
-        tot_size -= (1ULL << (PAGE_SHIFT + order));
+    }
+
+    return 0;
+}
+
+bool __init allocate_bank_memory(struct domain *d,
+                                 struct kernel_info *kinfo,
+                                 gfn_t sgfn,
+                                 paddr_t total_size)
+{
+    struct membank *bank;
+    struct meminfo host = { 0 };
+
+    /*
+     * allocate_bank_memory can be called with a total_size of zero for
+     * the second memory bank. It is not an error and we can safely
+     * avoid creating a zero-size memory bank.
+     */
+    if ( total_size == 0 )
+        return true;
+
+    bank = &kinfo->mem.bank[kinfo->mem.nr_banks];
+    bank->start = gfn_to_gaddr(sgfn);
+    bank->size = total_size;
+
+    if ( !allocate_domheap_memory(d, total_size, (void *)&host, NORMAL_MEMINFO) )
+    {
+        printk(XENLOG_ERR "Failed to allocate (%"PRIpaddr"MB) pages to %pd\n",
+               total_size >> 20, d);
+        return false;
+    }
+
+    if ( guest_physmap_memory(d, (void *)&host, NORMAL_MEMINFO, sgfn) )
+    {
+        printk(XENLOG_ERR "Failed to map (%"PRIpaddr"MB) pages to %pd\n",
+               total_size >> 20, d);
+        return false;
     }
 
     kinfo->mem.nr_banks++;
 
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index 3a2b35ea46..bc5f08be97 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -57,6 +57,39 @@ struct meminfo {
     struct membank bank[NR_MEM_BANKS];
 };
 
+enum meminfo_type {
+    NORMAL_MEMINFO,
+    MAX_MEMINFO_TYPE,
+};
+
+/*
+ * Define a set of MACRO helpers to access meminfo_type, like "struct meminfo"
+ * as type of NORMAL_MEMINFO, etc.
+ * This kind of structure must have an array of "struct membank",
+ * a member called nr_banks indicating the current array size, and also a field
+ * indicating the maximum array size.
+ */
+typedef void (*retrieve_fn)(void *, unsigned int *, struct membank **,
+                            unsigned int **);
+
+#define MAX_MEM_BANKS(type) ({                                  \
+    unsigned int _max_mem_banks;                                \
+    retrievers[type](NULL, &_max_mem_banks, NULL, NULL);        \
+    _max_mem_banks;                                             \
+})
+
+#define GET_MEMBANK(mem, type, index) ({                        \
+    struct membank *_bank;                                      \
+    retrievers[type]((void *)(mem), NULL, &_bank, NULL);        \
+    &(_bank[index]);                                            \
+})
+
+#define GET_NR_BANKS(mem, type) ({                              \
+    unsigned int *_nr_banks;                                    \
+    retrievers[type]((void *)mem, NULL, NULL, &_nr_banks);      \
+    _nr_banks;                                                  \
+})
+
 /*
  * The domU flag is set for kernels and ramdisks of "xen,domain" nodes.
  * The purpose of the domU flag is to avoid getting confused in

From patchwork Wed Dec 6 09:06:17 2023
X-Patchwork-Submitter: Penny Zheng
X-Patchwork-Id: 13481297
From: Penny Zheng
To: xen-devel@lists.xenproject.org, michal.orzel@amd.com
Cc: wei.chen@arm.com, Penny Zheng, Stefano Stabellini, Julien Grall,
 Bertrand Marquis, Volodymyr Babchuk, Penny Zheng
Subject: [PATCH v5 05/11] xen/arm: use paddr_assigned to indicate whether host address is provided
Date: Wed, 6 Dec 2023 17:06:17 +0800
Message-Id: <20231206090623.1932275-6-Penny.Zheng@arm.com>
In-Reply-To: <20231206090623.1932275-1-Penny.Zheng@arm.com>
References: <20231206090623.1932275-1-Penny.Zheng@arm.com>

We use paddr_assigned to indicate whether the host address is provided,
by checking the length of the "xen,shared-mem" property. The shm
matching criteria must also be adapted to cover the new scenario, by
adding a new rule: when the host address is not provided and the SHMID
matches, the region size must match exactly too.

Right now, during domain creation, a static shared memory node can be
banked with a statically configured host memory bank, or with arbitrary
host memory of a dedicated size, which Xen will later allocate from the
heap. To cover both scenarios, we create a new structure shm_meminfo. It
is very similar to meminfo, but its maximum array size is the smaller
number NR_SHM_BANKS(16). As "shm_meminfo" is also a new member of
"enum meminfo_type", we implement its own callback
"retrieve_shm_meminfo" so that it has access to all the MACRO helpers,
e.g. GET_MEMBANK(...).

Also, to keep the code tidy and clear, we extract the code that parses
the "xen,shared-mem" property from the function "process_shm" and move
it into a new helper "parse_shm_property".
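For reference, an illustrative device-tree fragment (node names, IDs and
addresses are made up) covering the two forms of "xen,shared-mem" that
parse_shm_property has to accept, with and without the host physical
address:

    /* Host address statically configured: <pbase gbase size> */
    domU1-shared-mem@4e000000 {
        compatible = "xen,domain-shared-memory-v1";
        role = "owner";
        xen,shm-id = "my-shmem-0";
        xen,shared-mem = <0x4e000000 0x60000000 0x100000>;
    };

    /* Host address chosen by Xen at boot: <gbase size> only */
    domU2-shared-mem {
        compatible = "xen,domain-shared-memory-v1";
        role = "borrower";
        xen,shm-id = "my-shmem-1";
        xen,shared-mem = <0x60000000 0x100000>;
    };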
Signed-off-by: Penny Zheng
---
v1 -> v2:
- In order to get the allocated banked host memory info during domain
  creation, we create a new structure shm_meminfo. It is very similar to
  meminfo, with the maximum array size being NR_SHM_BANKS. As shm_meminfo
  is a new member of type meminfo_type, we implement its own callback
  retrieve_shm_meminfo to have access to all MACRO helpers, e.g.
  GET_MEMBANK(...)
- rename "acquire_shm_memnode" to "find_shm_memnode"
---
v2 -> v3:
- rebase and no change
---
v3 -> v4:
- rebase and no change
---
v4 -> v5:
- fix a bug: tot_size and membank shall be members of a struct, not of a
  union, to differentiate the two types of static shared memory node.
---
 xen/arch/arm/domain_build.c             |   3 +
 xen/arch/arm/include/asm/setup.h        |  14 +-
 xen/arch/arm/include/asm/static-shmem.h |   3 +
 xen/arch/arm/static-shmem.c             | 360 ++++++++++++++++++------
 4 files changed, 293 insertions(+), 87 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index a8bc78baa5..c69d481d34 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -70,6 +70,9 @@ static void __init retrieve_meminfo(void *mem, unsigned int *max_mem_banks,
 
 retrieve_fn __initdata retrievers[MAX_MEMINFO_TYPE] = {
     [NORMAL_MEMINFO] = retrieve_meminfo,
+#ifdef CONFIG_STATIC_SHM
+    [SHM_MEMINFO] = retrieve_shm_meminfo,
+#endif
 };
 #endif
 
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index bc5f08be97..043588cd2d 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -59,6 +59,9 @@ struct meminfo {
 
 enum meminfo_type {
     NORMAL_MEMINFO,
+#ifdef CONFIG_STATIC_SHM
+    SHM_MEMINFO,
+#endif
     MAX_MEMINFO_TYPE,
 };
 
@@ -150,7 +153,16 @@ struct bootinfo {
         unsigned int nr_nodes;
         struct {
             struct shm_node node;
-            const struct membank *membank;
+            /*
+             * For a static shared memory node, it is either banked with
+             * a statically configured host memory bank, or arbitrary host
+             * memory which will be allocated by Xen with a specified total
+             * size(tot_size).
+             */
+            struct {
+                const struct membank *membank;
+                paddr_t tot_size;
+            };
         } shm_nodes[NR_MEM_BANKS];
     } shminfo;
 #endif
diff --git a/xen/arch/arm/include/asm/static-shmem.h b/xen/arch/arm/include/asm/static-shmem.h
index 66a3f4c146..a67445cec8 100644
--- a/xen/arch/arm/include/asm/static-shmem.h
+++ b/xen/arch/arm/include/asm/static-shmem.h
@@ -24,6 +24,9 @@ static inline int process_shm_chosen(struct domain *d,
 int process_shm_node(const void *fdt, int node, uint32_t address_cells,
                      uint32_t size_cells);
 
+void retrieve_shm_meminfo(void *mem, unsigned int *max_mem_banks,
+                          struct membank **bank,
+                          unsigned int **nr_banks);
 #else /* !CONFIG_STATIC_SHM */
 
 static inline int make_resv_memory_node(const struct domain *d, void *fdt,
diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c
index 6a3d8a54bd..a9eb26d543 100644
--- a/xen/arch/arm/static-shmem.c
+++ b/xen/arch/arm/static-shmem.c
@@ -6,6 +6,50 @@
 #include
 #include
 
+#define NR_SHM_BANKS 16
+
+/*
+ * There are two types of static shared memory node:
+ * A static shared memory node could be banked with a statically
+ * configured host memory bank, or a set of arbitrary host memory
+ * banks allocated from heap by Xen on runtime.
+ */
+struct shm_meminfo {
+    unsigned int nr_banks;
+    struct membank bank[NR_SHM_BANKS];
+    paddr_t tot_size;
+};
+
+/*
+ * struct shm_memnode holds banked host memory info for a static
+ * shared memory node
+ */
+struct shm_memnode {
+    char shm_id[MAX_SHM_ID_LENGTH];
+    struct shm_meminfo meminfo;
+};
+
+static __initdata struct {
+    unsigned int nr_nodes;
+    struct shm_memnode node[NR_MEM_BANKS];
+} shm_memdata;
+
+void __init retrieve_shm_meminfo(void *mem, unsigned int *max_mem_banks,
+                                 struct membank **bank,
+                                 unsigned int **nr_banks)
+{
+    struct shm_meminfo *meminfo = (struct shm_meminfo *)mem;
+
+    if ( max_mem_banks )
+        *max_mem_banks = NR_SHM_BANKS;
+
+    if ( nr_banks )
+        *nr_banks = &(meminfo->nr_banks);
+
+    if ( bank )
+        *bank = meminfo->bank;
+}
+
 static int __init acquire_nr_borrower_domain(const char *shm_id,
                                              unsigned long *nr_borrowers)
 {
@@ -172,6 +216,129 @@ static int __init append_shm_bank_to_domain(struct kernel_info *kinfo,
     return 0;
 }
 
+static struct shm_memnode * __init find_shm_memnode(const char *shm_id)
+{
+    unsigned int i;
+    struct shm_memnode *shm_memnode;
+
+    for ( i = 0 ; i < shm_memdata.nr_nodes; i++ )
+    {
+        shm_memnode = &shm_memdata.node[i];
+
+        if ( strcmp(shm_id, shm_memnode->shm_id) == 0 )
+            return shm_memnode;
+    }
+
+    if ( i == NR_MEM_BANKS )
+        return NULL;
+
+    shm_memnode = &shm_memdata.node[i];
+    safe_strcpy(shm_memnode->shm_id, shm_id);
+    shm_memdata.nr_nodes++;
+    return shm_memnode;
+}
+
+/* Parse "xen,shared-mem" property in static shared memory node */
+static struct shm_memnode * __init parse_shm_property(struct domain *d,
+                                                      const struct dt_device_node *dt_node,
+                                                      bool *paddr_assigned, paddr_t *gbase,
+                                                      const char *shm_id)
+{
+    uint32_t addr_cells, size_cells;
+    const struct dt_property *prop;
+    const __be32 *cells;
+    uint32_t len;
+    unsigned int i;
+    paddr_t pbase, psize;
+    struct shm_memnode *shm_memnode;
+
+    /* xen,shared-mem = <pbase, gbase, size>; And pbase could be optional. */
+    prop = dt_find_property(dt_node, "xen,shared-mem", &len);
+    BUG_ON(!prop);
+    cells = (const __be32 *)prop->value;
+
+    addr_cells = dt_n_addr_cells(dt_node);
+    size_cells = dt_n_size_cells(dt_node);
+    if ( len != dt_cells_to_size(addr_cells + size_cells + addr_cells) )
+    {
+        /* pbase is not provided in "xen,shared-mem" */
+        if ( len == dt_cells_to_size(size_cells + addr_cells) )
+            *paddr_assigned = false;
+        else
+        {
+            printk("fdt: invalid `xen,shared-mem` property.\n");
+            return NULL;
+        }
+    }
+
+    /*
+     * If we firstly process the shared memory node with unique "xen,shm-id",
+     * we allocate a new member "shm_memnode" for it in shm_memdata.
+     */
+    shm_memnode = find_shm_memnode(shm_id);
+    BUG_ON(!shm_memnode);
+    if ( !(*paddr_assigned) )
+    {
+        device_tree_get_reg(&cells, addr_cells, size_cells, gbase, &psize);
+        /* Whether it is a new shm node? */
+        if ( shm_memnode->meminfo.tot_size == 0 )
+            goto out_early1;
+        else
+            goto out_early2;
+    }
+    else
+    {
+        device_tree_get_reg(&cells, addr_cells, addr_cells, &pbase, gbase);
+        psize = dt_read_number(cells, size_cells);
+
+        /* Whether it is a new shm node? */
+        if ( shm_memnode->meminfo.nr_banks != 0 )
+            goto out_early2;
+    }
+
+    /*
+     * The static shared memory node #dt_node is banked with a
+     * statically configured host memory bank.
+     */
+    shm_memnode->meminfo.bank[0].start = pbase;
+    shm_memnode->meminfo.bank[0].size = psize;
+    shm_memnode->meminfo.nr_banks = 1;
+
+    if ( !IS_ALIGNED(pbase, PAGE_SIZE) )
+    {
+        printk("%pd: physical address 0x%"PRIpaddr" is not suitably aligned.\n",
+               d, pbase);
+        return NULL;
+    }
+
+    for ( i = 0; i < PFN_DOWN(psize); i++ )
+        if ( !mfn_valid(mfn_add(maddr_to_mfn(pbase), i)) )
+        {
+            printk("%pd: invalid physical MFN 0x%"PRI_mfn"\n for static shared memory node\n",
+                   d, mfn_x(mfn_add(maddr_to_mfn(pbase), i)));
+            return NULL;
+        }
+
+ out_early1:
+    if ( !IS_ALIGNED(psize, PAGE_SIZE) )
+    {
+        printk("%pd: size 0x%"PRIpaddr" is not suitably aligned\n",
+               d, psize);
+        return NULL;
+    }
+    shm_memnode->meminfo.tot_size = psize;
+
+ out_early2:
+    if ( !IS_ALIGNED(*gbase, PAGE_SIZE) )
+    {
+        printk("%pd: guest address 0x%"PRIpaddr" is not suitably aligned.\n",
+               d, *gbase);
+        return NULL;
+    }
+
+    return shm_memnode;
+}
+
 int __init process_shm(struct domain *d, struct kernel_info *kinfo,
                        const struct dt_device_node *node)
 {
@@ -179,51 +346,17 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
 
     dt_for_each_child_node(node, shm_node)
     {
-        const struct dt_property *prop;
-        const __be32 *cells;
-        uint32_t addr_cells, size_cells;
         paddr_t gbase, pbase, psize;
         int ret = 0;
-        unsigned int i;
         const char *role_str;
         const char *shm_id;
         bool owner_dom_io = true;
+        bool paddr_assigned = true;
+        struct shm_memnode *shm_memnode;
 
         if ( !dt_device_is_compatible(shm_node, "xen,domain-shared-memory-v1") )
             continue;
 
-        /*
-         * xen,shared-mem = <pbase, gbase, size>;
-         * TODO: pbase is optional.
-         */
-        addr_cells = dt_n_addr_cells(shm_node);
-        size_cells = dt_n_size_cells(shm_node);
-        prop = dt_find_property(shm_node, "xen,shared-mem", NULL);
-        BUG_ON(!prop);
-        cells = (const __be32 *)prop->value;
-        device_tree_get_reg(&cells, addr_cells, addr_cells, &pbase, &gbase);
-        psize = dt_read_paddr(cells, size_cells);
-        if ( !IS_ALIGNED(pbase, PAGE_SIZE) || !IS_ALIGNED(gbase, PAGE_SIZE) )
-        {
-            printk("%pd: physical address 0x%"PRIpaddr", or guest address 0x%"PRIpaddr" is not suitably aligned.\n",
-                   d, pbase, gbase);
-            return -EINVAL;
-        }
-        if ( !IS_ALIGNED(psize, PAGE_SIZE) )
-        {
-            printk("%pd: size 0x%"PRIpaddr" is not suitably aligned\n",
-                   d, psize);
-            return -EINVAL;
-        }
-
-        for ( i = 0; i < PFN_DOWN(psize); i++ )
-            if ( !mfn_valid(mfn_add(maddr_to_mfn(pbase), i)) )
-            {
-                printk("%pd: invalid physical address 0x%"PRI_mfn"\n",
-                       d, mfn_x(mfn_add(maddr_to_mfn(pbase), i)));
-                return -EINVAL;
-            }
-
         /*
          * "role" property is optional and if it is defined explicitly,
          * then the owner domain is not the default "dom_io" domain.
@@ -238,6 +371,13 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
         }
         BUG_ON((strlen(shm_id) <= 0) || (strlen(shm_id) >= MAX_SHM_ID_LENGTH));
 
+        shm_memnode = parse_shm_property(d, shm_node, &paddr_assigned, &gbase,
+                                         shm_id);
+        if ( !shm_memnode )
+            return -EINVAL;
+        pbase = shm_memnode->meminfo.bank[0].start;
+        psize = shm_memnode->meminfo.bank[0].size;
+
         /*
          * DOMID_IO is a fake domain and is not described in the Device-Tree.
          * Therefore when the owner of the shared region is DOMID_IO, we will
@@ -349,10 +489,10 @@ int __init process_shm_node(const void *fdt, int node, uint32_t address_cells,
 {
     const struct fdt_property *prop, *prop_id, *prop_role;
     const __be32 *cell;
-    paddr_t paddr, gaddr, size, end;
+    paddr_t paddr, gaddr, size;
     unsigned int i;
     int len;
-    bool owner = false;
+    bool owner = false, paddr_assigned = true;
     const char *shm_id;
 
     if ( address_cells < 1 || size_cells < 1 )
@@ -404,96 +544,140 @@ int __init process_shm_node(const void *fdt, int node, uint32_t address_cells,
 
     if ( len != dt_cells_to_size(address_cells + size_cells + address_cells) )
     {
+        /* paddr is not provided in "xen,shared-mem" */
         if ( len == dt_cells_to_size(size_cells + address_cells) )
-            printk("fdt: host physical address must be chosen by users at the moment.\n");
-
-        printk("fdt: invalid `xen,shared-mem` property.\n");
-        return -EINVAL;
+            paddr_assigned = false;
+        else
+        {
+            printk("fdt: invalid `xen,shared-mem` property.\n");
+            return -EINVAL;
+        }
     }
 
     cell = (const __be32 *)prop->data;
-    device_tree_get_reg(&cell, address_cells, address_cells, &paddr, &gaddr);
-    size = dt_next_cell(size_cells, &cell);
-
-    if ( !size )
+    if ( !paddr_assigned )
+        device_tree_get_reg(&cell, address_cells, size_cells, &gaddr, &size);
+    else
     {
-        printk("fdt: the size for static shared memory region can not be zero\n");
-        return -EINVAL;
-    }
+        paddr_t end;
+
+        device_tree_get_reg(&cell, address_cells, address_cells, &paddr, &gaddr);
+        size = dt_next_cell(size_cells, &cell);
+
+        if ( !size )
+        {
+            printk("fdt: the size for static shared memory region can not be zero\n");
+            return -EINVAL;
+        }
+
+        end = paddr + size;
+        if ( end <= paddr )
+        {
+            printk("fdt: static shared memory region %s overflow\n", shm_id);
+            return -EINVAL;
+        }
 
-    end = paddr + size;
-    if ( end <= paddr )
-    {
-        printk("fdt: static shared memory region %s overflow\n", shm_id);
-        return -EINVAL;
     }
 
     for ( i = 0; i < bootinfo.shminfo.nr_nodes; i++ )
     {
-        paddr_t bank_start = bootinfo.shminfo.shm_nodes[i].membank->start;
-        paddr_t bank_size = bootinfo.shminfo.shm_nodes[i].membank->size;
         const char *bank_id = bootinfo.shminfo.shm_nodes[i].node.shm_id;
+        bool is_shmid_equal = strncmp(shm_id, bank_id, MAX_SHM_ID_LENGTH) == 0 ?
+                              true : false;
 
         /*
          * Meet the following check:
+         * when host address is provided:
          * 1) The shm ID matches and the region exactly match
          * 2) The shm ID doesn't match and the region doesn't overlap
          *    with an existing one
+         * when host address is not provided:
+         * 1) The shm ID matches and the region size exactly match
+         */
+        /*
+         * For a static shared memory node, it is either banked with
+         * a statically configured host memory bank(membank != NULL), or
+         * arbitrary host memory which will later be allocated by Xen(
+         * tot_size != 0).
          */
-        if ( paddr == bank_start && size == bank_size )
+        if ( bootinfo.shminfo.shm_nodes[i].membank != NULL )
         {
-            if ( strncmp(shm_id, bank_id, MAX_SHM_ID_LENGTH) == 0 )
+            paddr_t bank_start = bootinfo.shminfo.shm_nodes[i].membank->start;
+            paddr_t bank_size = bootinfo.shminfo.shm_nodes[i].membank->size;
+            bool is_same_region = (paddr == bank_start) && (size == bank_size) ?
+                                  true : false;
+
+            if ( is_same_region && is_shmid_equal )
                 break;
-            else
+            else if ( is_same_region || is_shmid_equal )
             {
                 printk("fdt: xen,shm-id %s does not match for all the nodes using the same region.\n",
                        shm_id);
                 return -EINVAL;
             }
         }
         else
         {
-            printk("fdt: different shared memory region could not share the same shm ID %s\n",
-                   shm_id);
-            return -EINVAL;
+            paddr_t tot_size = bootinfo.shminfo.shm_nodes[i].tot_size;
+            bool is_same_region = tot_size == size ? true : false;
+
+            if ( !paddr_assigned && is_same_region && is_shmid_equal )
+                break;
+            else if ( is_shmid_equal )
+            {
+                if ( paddr_assigned )
+                {
+                    printk("fdt: two different types of static shared memory region could not share the same shm-id %s\n",
+                           shm_id);
+                    return -EINVAL;
+                }
+
+                printk("fdt: when host address is not provided, if xen,shm-id matches, size must stay the same too(%"PRIpaddr" -> %"PRIpaddr")\n",
+                       size, tot_size);
+                return -EINVAL;
+            }
         }
     }
 
     if ( i == bootinfo.shminfo.nr_nodes )
     {
-        struct meminfo *mem = &bootinfo.reserved_mem;
-
-        if ( (i < NR_MEM_BANKS) && (mem->nr_banks < NR_MEM_BANKS) )
+        if ( i < NR_MEM_BANKS )
         {
-            struct membank *membank = &mem->bank[mem->nr_banks];
             struct shm_node *shm_node = &bootinfo.shminfo.shm_nodes[i].node;
-
-            if ( check_reserved_regions_overlap(paddr, size) )
-                return -EINVAL;
-
-            /* Static shared memory shall be reserved from any other use. */
-            membank->start = paddr;
-            membank->size = size;
-            membank->type = MEMBANK_STATIC_DOMAIN;
-            mem->nr_banks++;
+            struct meminfo *mem = &bootinfo.reserved_mem;
 
             /* Record static shared memory node info in bootinfo.shminfo */
             safe_strcpy(shm_node->shm_id, shm_id);
-            /*
-             * Reserved memory bank is recorded together to assist
-             * doing shm node verification.
-             */
-            bootinfo.shminfo.shm_nodes[i].membank = membank;
             bootinfo.shminfo.nr_nodes++;
+
+            if ( !paddr_assigned )
+                bootinfo.shminfo.shm_nodes[i].tot_size = size;
+            else if ( mem->nr_banks < NR_MEM_BANKS )
+            {
+                struct membank *membank = &mem->bank[mem->nr_banks];
+
+                if ( check_reserved_regions_overlap(paddr, size) )
+                    return -EINVAL;
+
+                /* Static shared memory shall be reserved from any other use. */
+                membank->start = paddr;
+                membank->size = size;
+                membank->type = MEMBANK_STATIC_DOMAIN;
+                mem->nr_banks++;
+
+                /*
+                 * Reserved memory bank is recorded together to assist
+                 * doing shm node verification.
+                 */
+                bootinfo.shminfo.shm_nodes[i].membank = membank;
+            }
+            else
+                goto fail;
         }
         else
-        {
-            printk("Warning: Max number of supported memory regions reached.\n");
-            return -ENOSPC;
-        }
+            goto fail;
     }
+
     /*
      * keep a count of the number of borrowers, which later may be used
      * to calculate the reference count.
@@ -502,6 +686,10 @@ int __init process_shm_node(const void *fdt, int node, uint32_t address_cells,
         bootinfo.shminfo.shm_nodes[i].node.nr_shm_borrowers++;
 
     return 0;
+
+ fail:
+    printk("Warning: Max number of supported memory regions reached.\n");
+    return -ENOSPC;
 }
 
 int __init make_resv_memory_node(const struct domain *d, void *fdt,

From patchwork Wed Dec 6 09:06:18 2023
X-Patchwork-Submitter: Penny Zheng
X-Patchwork-Id: 13481300
From: Penny Zheng
To: xen-devel@lists.xenproject.org, michal.orzel@amd.com
Cc: wei.chen@arm.com, Penny Zheng, Stefano Stabellini, Julien Grall,
 Bertrand Marquis, Volodymyr Babchuk, Penny Zheng
Subject: [PATCH v5 06/11] xen/arm: support static shared memory when host address not provided
Date: Wed, 6 Dec 2023 17:06:18 +0800
Message-Id: <20231206090623.1932275-7-Penny.Zheng@arm.com>
In-Reply-To: <20231206090623.1932275-1-Penny.Zheng@arm.com>
References: <20231206090623.1932275-1-Penny.Zheng@arm.com>

In order to support static shared memory when the host address is not
provided, we make the following modifications:
- We let Xen allocate memory from the heap for static shared memory when
  the first domain referencing it is constructed, no matter whether that
  domain is the owner or a borrower.
- In acquire_shared_memory_bank, as static shared memory has already
  been allocated from the heap, we assign it to the owner domain using
  the function "assign_pages".
- Function get_shm_pages_reference is created to add as many additional
  references as there are borrowers.
- We implement a new helper "add_foreign_mapping_for_borrower" to set up
  the foreign memory mapping for borrowers (see the sketch after this
  list).

Instead of using multiple function parameters to pass around various
shm-related info, like the host physical address, SHMID, etc., and with
the introduction of the new struct "shm_memnode" holding the banked host
memory info, we switch to passing "shm_memnode" as the function
parameter to replace them all, which makes the code clearer and tidier.
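The hunk adding add_foreign_mapping_for_borrower is not visible in this
excerpt, so the following is only a rough sketch of what the borrower
side boils down to, an assumption based on the description above, built
on the pre-existing Arm helpers map_regions_p2mt() and
p2m_map_foreign_rw: map each host bank into the borrower's P2M as
foreign memory, bank by bank.

    static int __init add_foreign_mapping_for_borrower(struct domain *d,
                                                       struct shm_memnode *node,
                                                       paddr_t gbase)
    {
        unsigned int i;

        for ( i = 0; i < node->meminfo.nr_banks; i++ )
        {
            paddr_t start = node->meminfo.bank[i].start;
            unsigned long nr_gfns = PFN_DOWN(node->meminfo.bank[i].size);
            /* Foreign mapping: the owner domain stays the page owner. */
            int ret = map_regions_p2mt(d, gaddr_to_gfn(gbase), nr_gfns,
                                       maddr_to_mfn(start),
                                       p2m_map_foreign_rw);

            if ( ret )
                return ret;

            gbase += node->meminfo.bank[i].size;
        }

        return 0;
    }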
provided, we make the following changes: - We let Xen allocate memory from the heap for the static shared memory at the first domain that references it, no matter whether that domain is the owner or a borrower. - In acquire_shared_memory_bank, as the static shared memory has already been allocated from the heap, we assign it to the owner domain using the function "assign_pages". - The function get_shm_pages_reference is created to add as many additional references as there are borrowers. - We implement a new helper, "add_foreign_mapping_for_borrower", to set up the foreign memory mapping for a borrower. Instead of delivering the various pieces of shm-related information (host physical address, SHMID, etc.) through multiple function parameters, and with the introduction of the new struct "shm_memnode" holding the banked host memory information, we switch to passing "shm_memnode" as a single function parameter, which makes the code clearer and tidier. Signed-off-by: Penny Zheng --- v1 -> v2: - combine commits 4 - 6 of series 1 - Adapt to the changes introducing "struct shm_memnode" --- v2 -> v3: - fix infinite loop bug and bad indentation - rebase --- v3 -> v4: rebase and no change --- v4 -> v5: rebase and no change --- xen/arch/arm/domain_build.c | 6 +- xen/arch/arm/include/asm/domain_build.h | 5 + xen/arch/arm/static-shmem.c | 223 ++++++++++++++++-------- 3 files changed, 163 insertions(+), 71 deletions(-) diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index c69d481d34..c58996e3e9 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -440,9 +440,9 @@ static void __init allocate_memory_11(struct domain *d, } #ifdef CONFIG_DOM0LESS_BOOT -static bool __init allocate_domheap_memory(struct domain *d, - paddr_t tot_size, - void *mem, enum meminfo_type type) +bool __init allocate_domheap_memory(struct domain *d, + paddr_t tot_size, + void *mem, enum meminfo_type type) { struct page_info *pg; unsigned int max_order = ~0; diff --git a/xen/arch/arm/include/asm/domain_build.h b/xen/arch/arm/include/asm/domain_build.h index da9e6025f3..1b75a4c6a8 100644 --- a/xen/arch/arm/include/asm/domain_build.h +++ b/xen/arch/arm/include/asm/domain_build.h @@ -51,6 +51,11 @@ static inline int prepare_acpi(struct domain *d, struct kernel_info *kinfo) int prepare_acpi(struct domain *d, struct kernel_info *kinfo); #endif +#ifdef CONFIG_DOM0LESS_BOOT +bool allocate_domheap_memory(struct domain *d, paddr_t tot_size, + void *mem, enum meminfo_type type); +#endif + #endif /* diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c index a9eb26d543..b04e58172b 100644 --- a/xen/arch/arm/static-shmem.c +++ b/xen/arch/arm/static-shmem.c @@ -50,6 +50,11 @@ void __init retrieve_shm_meminfo(void *mem, unsigned int *max_mem_banks, *bank = meminfo->bank; } +static bool __init is_shm_allocated_from_heap(struct shm_memnode *node) +{ + return (node->meminfo.nr_banks != 0); +} + static int __init acquire_nr_borrower_domain(const char *shm_id, unsigned long *nr_borrowers) { @@ -75,12 +80,12 @@ static int __init acquire_nr_borrower_domain(const char *shm_id, * This function checks whether the static shared memory region is * already allocated to dom_io.
*/ -static bool __init is_shm_allocated_to_domio(paddr_t pbase) +static bool __init is_shm_allocated_to_domio(struct shm_memnode *node) { struct page_info *page; struct domain *d; - page = maddr_to_page(pbase); + page = maddr_to_page(node->meminfo.bank[0].start); d = page_get_owner_and_reference(page); if ( d == NULL ) return false; @@ -98,67 +103,128 @@ static bool __init is_shm_allocated_to_domio(paddr_t pbase) } static mfn_t __init acquire_shared_memory_bank(struct domain *d, - paddr_t pbase, paddr_t psize) + struct shm_meminfo *meminfo, + bool paddr_assigned) { - mfn_t smfn; - unsigned long nr_pfns; - int res; + int res, i = 0; - /* - * Pages of statically shared memory shall be included - * into domain_tot_pages(). - */ - nr_pfns = PFN_DOWN(psize); - if ( (UINT_MAX - d->max_pages) < nr_pfns ) + for ( ; i < meminfo->nr_banks; i++ ) { - printk(XENLOG_ERR "%pd: Over-allocation for d->max_pages: %lu.\n", - d, nr_pfns); - return INVALID_MFN; + paddr_t pbase = meminfo->bank[i].start, psize = meminfo->bank[i].size; + unsigned long nr_pfns; + + /* + * Pages of statically shared memory shall be included + * into domain_tot_pages(). + */ + nr_pfns = PFN_DOWN(psize); + if ( (UINT_MAX - d->max_pages) < nr_pfns ) + { + printk(XENLOG_ERR "%pd: Over-allocation for d->max_pages: %lu.\n", + d, nr_pfns); + return INVALID_MFN; + } + d->max_pages += nr_pfns; + + if ( paddr_assigned ) + { + res = acquire_domstatic_pages(d, maddr_to_mfn(pbase), nr_pfns, 0); + if ( res ) + { + printk(XENLOG_ERR + "%pd: failed to acquire static memory: %d.\n", d, res); + goto fail; + } + } + else + /* + * When host address is not provided, static shared memory is + * allocated from heap and shall be assigned to owner domain. + */ + if ( assign_pages(maddr_to_page(pbase), nr_pfns, d, 0) ) + goto fail; } - d->max_pages += nr_pfns; - smfn = maddr_to_mfn(pbase); - res = acquire_domstatic_pages(d, smfn, nr_pfns, 0); - if ( res ) + return maddr_to_mfn(meminfo->bank[0].start); + + fail: + while( --i >= 0 ) + d->max_pages -= PFN_DOWN(meminfo->bank[i].size); + return INVALID_MFN; +} + +static int __init get_shm_pages_reference(struct domain *d, + struct shm_meminfo *meminfo, + unsigned long count) +{ + struct page_info *page; + int i = 0, j; + + for ( ; i < meminfo->nr_banks; i++ ) { - printk(XENLOG_ERR - "%pd: failed to acquire static memory: %d.\n", d, res); - d->max_pages -= nr_pfns; - return INVALID_MFN; + paddr_t pbase = meminfo->bank[i].start, psize = meminfo->bank[i].size; + unsigned long nr_pages = PFN_DOWN(psize); + + page = maddr_to_page(pbase); + for ( j = 0; j < nr_pages; j++ ) + { + if ( !get_page_nr(page + j, d, count) ) + { + printk(XENLOG_ERR + "Failed to add %lu references to page %"PRI_mfn".\n", + count, mfn_x(page_to_mfn(page + j))); + goto fail; + } + } } - return smfn; + return 0; + + fail: + while ( --j >= 0 ) + put_page_nr(page + j, count); + while ( --i >= 0 ) + { + page = maddr_to_page(meminfo->bank[i].start); + j = PFN_DOWN(meminfo->bank[i].size); + while ( --j >= 0 ) + put_page_nr(page + j, count); + } + return -EINVAL; } static int __init assign_shared_memory(struct domain *d, - paddr_t pbase, paddr_t psize, - paddr_t gbase, const char *shm_id) + struct shm_memnode *node, paddr_t gbase, + bool paddr_assigned) { mfn_t smfn; - int ret = 0; - unsigned long nr_pages, nr_borrowers, i; - struct page_info *page; - - printk("%pd: allocate static shared memory BANK %#"PRIpaddr"-%#"PRIpaddr".\n", - d, pbase, pbase + psize); + int ret; + unsigned long nr_borrowers, i; + struct shm_meminfo *meminfo = &node->meminfo; - 
smfn = acquire_shared_memory_bank(d, pbase, psize); + smfn = acquire_shared_memory_bank(d, meminfo, paddr_assigned); if ( mfn_eq(smfn, INVALID_MFN) ) return -EINVAL; - /* - * DOMID_IO is not auto-translated (i.e. it sees RAM 1:1). So we do not need - * to create mapping in the P2M. - */ - nr_pages = PFN_DOWN(psize); - if ( d != dom_io ) + for ( i = 0; i < meminfo->nr_banks; i++ ) { - ret = guest_physmap_add_pages(d, gaddr_to_gfn(gbase), smfn, - PFN_DOWN(psize)); - if ( ret ) + paddr_t pbase = meminfo->bank[i].start, psize = meminfo->bank[i].size; + + /* + * DOMID_IO is not auto-translated (i.e. it sees RAM 1:1). So we do not need + * to create mapping in the P2M. + */ + if ( d != dom_io ) { - printk(XENLOG_ERR "Failed to map shared memory to %pd.\n", d); - return ret; + ret = guest_physmap_add_pages(d, gaddr_to_gfn(gbase), + maddr_to_mfn(pbase), + PFN_DOWN(psize)); + if ( ret ) + { + printk(XENLOG_ERR "Failed to map shared memory to %pd.\n", d); + return ret; + } + gbase += psize; } } @@ -166,7 +232,7 @@ static int __init assign_shared_memory(struct domain *d, * Get the right amount of references per page, which is the number of * borrower domains. */ - ret = acquire_nr_borrower_domain(shm_id, &nr_borrowers); + ret = acquire_nr_borrower_domain(node->shm_id, &nr_borrowers); if ( ret ) return ret; @@ -178,24 +244,30 @@ static int __init assign_shared_memory(struct domain *d, * So if the borrower is created first, it will cause adding pages * in the P2M without reference. */ - page = mfn_to_page(smfn); - for ( i = 0; i < nr_pages; i++ ) + return get_shm_pages_reference(d, meminfo, nr_borrowers); +} + +static int __init add_foreign_mapping_for_borrower(struct domain *d, + struct shm_memnode *node, + paddr_t gbase) +{ + unsigned int i = 0; + struct shm_meminfo *meminfo = &node->meminfo; + + for ( ; i < meminfo->nr_banks; i++ ) { - if ( !get_page_nr(page + i, d, nr_borrowers) ) - { - printk(XENLOG_ERR - "Failed to add %lu references to page %"PRI_mfn".\n", - nr_borrowers, mfn_x(smfn) + i); - goto fail; - } + paddr_t pbase = meminfo->bank[i].start, psize = meminfo->bank[i].size; + int ret; + + /* Set up P2M foreign mapping for borrower domain. */ + ret = map_regions_p2mt(d, _gfn(PFN_UP(gbase)), PFN_DOWN(psize), + _mfn(PFN_UP(pbase)), p2m_map_foreign_rw); + if ( ret ) + return ret; + gbase += psize; } return 0; - - fail: - while ( --i >= 0 ) - put_page_nr(page + i, nr_borrowers); - return ret; } static int __init append_shm_bank_to_domain(struct kernel_info *kinfo, @@ -346,7 +418,7 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo, dt_for_each_child_node(node, shm_node) { - paddr_t gbase, pbase, psize; + paddr_t gbase; int ret = 0; const char *role_str; const char *shm_id; @@ -375,15 +447,30 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo, shm_id); if ( !shm_memnode ) return -EINVAL; - pbase = shm_memnode->meminfo.bank[0].start; - psize = shm_memnode->meminfo.bank[0].size; + + /* + * When host address is not provided in "xen,shared-mem", + * we let Xen allocate memory from heap at first domain. + */ + if ( !paddr_assigned && !is_shm_allocated_from_heap(shm_memnode) ) + { + if ( !allocate_domheap_memory(NULL, shm_memnode->meminfo.tot_size, + (void *)&shm_memnode->meminfo, + SHM_MEMINFO) ) + { + printk(XENLOG_ERR + "Failed to allocate (%"PRIpaddr"MB) pages as static shared memory from heap\n", + shm_memnode->meminfo.tot_size >> 20); + return -EINVAL; + } + } /* * DOMID_IO is a fake domain and is not described in the Device-Tree. 
* Therefore when the owner of the shared region is DOMID_IO, we will * only find the borrowers. */ - if ( (owner_dom_io && !is_shm_allocated_to_domio(pbase)) || + if ( (owner_dom_io && !is_shm_allocated_to_domio(shm_memnode)) || (!owner_dom_io && strcmp(role_str, "owner") == 0) ) { /* @@ -391,16 +478,14 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo, * specified, so they should be assigned to dom_io. */ ret = assign_shared_memory(owner_dom_io ? dom_io : d, - pbase, psize, gbase, shm_id); + shm_memnode, gbase, paddr_assigned); if ( ret ) return ret; } if ( owner_dom_io || (strcmp(role_str, "borrower") == 0) ) { - /* Set up P2M foreign mapping for borrower domain. */ - ret = map_regions_p2mt(d, _gfn(PFN_UP(gbase)), PFN_DOWN(psize), - _mfn(PFN_UP(pbase)), p2m_map_foreign_rw); + ret = add_foreign_mapping_for_borrower(d, shm_memnode, gbase); if ( ret ) return ret; } @@ -409,7 +494,9 @@ * Record static shared memory region info for later setting * up shm-node in guest device tree. */ - ret = append_shm_bank_to_domain(kinfo, gbase, psize, shm_id); + ret = append_shm_bank_to_domain(kinfo, gbase, + shm_memnode->meminfo.tot_size, + shm_memnode->shm_id); if ( ret ) return ret; }
From patchwork Wed Dec 6 09:06:19 2023 X-Patchwork-Submitter: Penny Zheng X-Patchwork-Id: 13481299
From: Penny Zheng To: xen-devel@lists.xenproject.org, michal.orzel@amd.com Cc: wei.chen@arm.com, Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng Subject: [PATCH v5 07/11] xen/arm: remove shm holes for extended regions Date: Wed, 6 Dec 2023 17:06:19 +0800 Message-Id: <20231206090623.1932275-8-Penny.Zheng@arm.com> In-Reply-To: <20231206090623.1932275-1-Penny.Zheng@arm.com> References: <20231206090623.1932275-1-Penny.Zheng@arm.com>
Static shared memory acts as reserved memory in the guest, so it shall be excluded from the extended regions. Extended regions are handled in three different scenarios: a normal DomU, a direct-map domain with the IOMMU on, and a direct-map domain with the IOMMU off. For a normal DomU, we create a new function, "remove_shm_holes_for_domU", which first converts the original output into a "struct rangeset" and then uses "remove_shm_from_rangeset" to remove the static shm from it. For a direct-map domain with the IOMMU on, after obtaining the guest shm info from "kinfo", we use "remove_shm_from_rangeset" to remove the static shm. For a direct-map domain with the IOMMU off, the static shm has already been covered by the reserved memory banks, so nothing needs to be done. Signed-off-by: Penny Zheng --- v1 -> v2: - new commit --- v2 -> v3: - error out non-zero res before remove_shm_holes_for_domU - rebase --- v3 -> v4: rebase and no change --- v4 -> v5: rebase and no change --- xen/arch/arm/domain_build.c | 19 +++++- xen/arch/arm/include/asm/domain_build.h | 2 + xen/arch/arm/include/asm/static-shmem.h | 17 +++++ xen/arch/arm/static-shmem.c | 83 +++++++++++++++++++++++++ 4 files changed, 118 insertions(+), 3 deletions(-) diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index c58996e3e9..e040f8a6d9 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -887,8 +887,8 @@ int __init make_memory_node(const struct domain *d, return res; } -static int __init add_ext_regions(unsigned long s_gfn, unsigned long e_gfn, - void *data) +int __init add_ext_regions(unsigned long s_gfn, unsigned long e_gfn, + void *data) { struct meminfo *ext_regions = data; paddr_t start, size; @@ -1062,6 +1062,8 @@ static int __init handle_pci_range(const struct dt_device_node *dev, * - MMIO * - Host RAM * - PCI aperture + * - Static shared memory regions, which are described by special property + * "xen,static-shm" */ static int __init find_memory_holes(const struct kernel_info *kinfo, struct meminfo *ext_regions) @@ -1078,6 +1080,14 @@ static int __init find_memory_holes(const struct kernel_info *kinfo, if ( !mem_holes ) return -ENOMEM; + /* Remove static shared memory regions */ + if ( kinfo->shminfo.nr_banks != 0 ) + { + res = remove_shm_from_rangeset(kinfo, mem_holes); + if ( res ) + goto out; + } + /* Start with maximum possible addressable physical memory range */ start = 0; end = (1ULL << p2m_ipa_bits) - 1; @@ -1180,7 +1190,10 @@ static int __init find_domU_holes(const struct kernel_info *kinfo, res = 0; } - return res; + if ( res ) + return res; + + return remove_shm_holes_for_domU(kinfo, ext_regions); } int __init make_hypervisor_node(struct domain *d, diff --git a/xen/arch/arm/include/asm/domain_build.h b/xen/arch/arm/include/asm/domain_build.h index 1b75a4c6a8..0433e76e68 100644 --- a/xen/arch/arm/include/asm/domain_build.h +++ b/xen/arch/arm/include/asm/domain_build.h @@ -56,6 +56,8 @@ bool allocate_domheap_memory(struct domain *d, paddr_t tot_size, void
*mem, enum meminfo_type type); #endif +int add_ext_regions(unsigned long s_gfn, unsigned long e_gfn, void *data); + #endif /* diff --git a/xen/arch/arm/include/asm/static-shmem.h b/xen/arch/arm/include/asm/static-shmem.h index a67445cec8..d149985291 100644 --- a/xen/arch/arm/include/asm/static-shmem.h +++ b/xen/arch/arm/include/asm/static-shmem.h @@ -27,6 +27,12 @@ int process_shm_node(const void *fdt, int node, uint32_t address_cells, void retrieve_shm_meminfo(void *mem, unsigned int *max_mem_banks, struct membank **bank, unsigned int **nr_banks); + +int remove_shm_from_rangeset(const struct kernel_info *kinfo, + struct rangeset *rangeset); + +int remove_shm_holes_for_domU(const struct kernel_info *kinfo, + struct meminfo *orig_ext); #else /* !CONFIG_STATIC_SHM */ static inline int make_resv_memory_node(const struct domain *d, void *fdt, @@ -55,6 +61,17 @@ static inline int process_shm_node(const void *fdt, int node, return -EINVAL; } +static inline int remove_shm_from_rangeset(const struct kernel_info *kinfo, + struct rangeset *rangeset) +{ + return 0; +} + +static inline int remove_shm_holes_for_domU(const struct kernel_info *kinfo, + struct meminfo *orig_ext) +{ + return 0; +} #endif /* CONFIG_STATIC_SHM */ #endif /* __ASM_STATIC_SHMEM_H_ */ diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c index b04e58172b..a06949abaf 100644 --- a/xen/arch/arm/static-shmem.c +++ b/xen/arch/arm/static-shmem.c @@ -1,6 +1,7 @@ /* SPDX-License-Identifier: GPL-2.0-only */ #include +#include #include #include @@ -818,6 +819,88 @@ int __init make_resv_memory_node(const struct domain *d, void *fdt, return res; } +int __init remove_shm_from_rangeset(const struct kernel_info *kinfo, + struct rangeset *rangeset) +{ + unsigned int i; + + /* Remove static shared memory regions */ + for ( i = 0; i < kinfo->shminfo.nr_banks; i++ ) + { + struct membank membank = kinfo->shminfo.bank[i].membank; + paddr_t start, end; + int res; + + start = membank.start; + end = membank.start + membank.size - 1; + res = rangeset_remove_range(rangeset, start, end); + if ( res ) + { + printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n", + start, end); + return -EINVAL; + } + } + + return 0; +} + +int __init remove_shm_holes_for_domU(const struct kernel_info *kinfo, + struct meminfo *orig_ext) +{ + struct rangeset *guest_holes; + unsigned int i = 0, tail; + int res; + paddr_t start, end; + + /* No static shared memory region. 
*/ + if ( kinfo->shminfo.nr_banks == 0 ) + return 0; + + dt_dprintk("Remove static shared memory holes for extended regions of DomU\n"); + + guest_holes = rangeset_new(NULL, NULL, 0); + if ( !guest_holes ) + return -ENOMEM; + + for ( ; i < orig_ext->nr_banks; i++ ) + { + start = orig_ext->bank[i].start; + end = start + orig_ext->bank[i].size - 1; + + res = rangeset_add_range(guest_holes, start, end); + if ( res ) + { + printk(XENLOG_ERR "Failed to add: %#"PRIpaddr"->%#"PRIpaddr"\n", + start, end); + goto out; + } + } + + /* Remove static shared memory regions */ + res = remove_shm_from_rangeset(kinfo, guest_holes); + if ( res ) + goto out; + + tail = orig_ext->nr_banks - 1; + start = orig_ext->bank[0].start; + end = orig_ext->bank[tail].start + orig_ext->bank[tail].size - 1; + + /* Reset original extended regions to hold new value */ + orig_ext->nr_banks = 0; + res = rangeset_report_ranges(guest_holes, start, end, + add_ext_regions, orig_ext); + if ( res ) + orig_ext->nr_banks = 0; + else if ( !orig_ext->nr_banks ) + res = -ENOENT; + +out: + rangeset_destroy(guest_holes); + + return res; +} + /* * Local variables: * mode: C
From patchwork Wed Dec 6 09:06:20 2023 X-Patchwork-Submitter: Penny Zheng X-Patchwork-Id: 13481301 From: Penny Zheng To: xen-devel@lists.xenproject.org, michal.orzel@amd.com Cc: wei.chen@arm.com, Penny Zheng , Stefano
Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng Subject: [PATCH v5 08/11] xen/p2m: put reference for superpage Date: Wed, 6 Dec 2023 17:06:20 +0800 Message-Id: <20231206090623.1932275-9-Penny.Zheng@arm.com> In-Reply-To: <20231206090623.1932275-1-Penny.Zheng@arm.com> References: <20231206090623.1932275-1-Penny.Zheng@arm.com>
We set up foreign memory mappings for static shared memory, and it is quite possible that such memory is superpage-mapped. However, today p2m_put_l3_page cannot handle superpages. This commit implements a new function, p2m_put_superpage, to handle superpages, specifically to help put the extra references taken on foreign superpages. Signed-off-by: Penny Zheng --- v1 -> v2: - new commit --- v2 -> v3: - rebase and no change --- v3 -> v4: rebase and no change --- v4 -> v5: rebase and no change --- xen/arch/arm/mmu/p2m.c | 58 +++++++++++++++++++++++++++++++----------- 1 file changed, 43 insertions(+), 15 deletions(-) diff --git a/xen/arch/arm/mmu/p2m.c b/xen/arch/arm/mmu/p2m.c index 6a5a080307..810c89397c 100644 --- a/xen/arch/arm/mmu/p2m.c +++ b/xen/arch/arm/mmu/p2m.c @@ -752,17 +752,9 @@ static int p2m_mem_access_radix_set(struct p2m_domain *p2m, gfn_t gfn, return rc; } -/* - * Put any references on the single 4K page referenced by pte. - * TODO: Handle superpages, for now we only take special references for leaf - * pages (specifically foreign ones, which can't be super mapped today). - */ -static void p2m_put_l3_page(const lpae_t pte) +/* Put any references on the single 4K page referenced by mfn. */ +static void p2m_put_l3_page(mfn_t mfn, unsigned type) { - mfn_t mfn = lpae_get_mfn(pte); - - ASSERT(p2m_is_valid(pte)); - /* * TODO: Handle other p2m types * @@ -770,16 +762,53 @@ static void p2m_put_l3_page(const lpae_t pte) * flush the TLBs if the page is reallocated before the end of * this loop. */ - if ( p2m_is_foreign(pte.p2m.type) ) + if ( p2m_is_foreign(type) ) { ASSERT(mfn_valid(mfn)); put_page(mfn_to_page(mfn)); } /* Detect the xenheap page and mark the stored GFN as invalid. */ - else if ( p2m_is_ram(pte.p2m.type) && is_xen_heap_mfn(mfn) ) + else if ( p2m_is_ram(type) && is_xen_heap_mfn(mfn) ) page_set_xenheap_gfn(mfn_to_page(mfn), INVALID_GFN); } +/* Put any references on the superpage referenced by mfn. */ +static void p2m_put_superpage(mfn_t mfn, unsigned int next_level, unsigned type) +{ + unsigned int i; + unsigned int level_order = XEN_PT_LEVEL_ORDER(next_level); + + for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ ) + { + if ( next_level == 3 ) + p2m_put_l3_page(mfn, type); + else + p2m_put_superpage(mfn, next_level + 1, type); + + mfn = mfn_add(mfn, 1 << level_order); + } +} + +/* Put any references on the page referenced by pte. */ +static void p2m_put_page(const lpae_t pte, unsigned int level) +{ + mfn_t mfn = lpae_get_mfn(pte); + + ASSERT(p2m_is_valid(pte)); + + /* + * We are either having a first level 1G superpage or a + * second level 2M superpage. + */ + if ( p2m_is_superpage(pte, level) ) + return p2m_put_superpage(mfn, level + 1, pte.p2m.type); + else + { + ASSERT(level == 3); + return p2m_put_l3_page(mfn, pte.p2m.type); + } +} + /* Free lpae sub-tree behind an entry */ static void p2m_free_entry(struct p2m_domain *p2m, lpae_t entry, unsigned int level) @@ -808,9 +837,8 @@ static void p2m_free_entry(struct p2m_domain *p2m, #endif p2m->stats.mappings[level]--; - /* Nothing to do if the entry is a super-page. 
*/ - if ( level == 3 ) - p2m_put_l3_page(entry); + p2m_put_page(entry, level); + return; }
From patchwork Wed Dec 6 09:06:21 2023 X-Patchwork-Submitter: Penny Zheng X-Patchwork-Id: 13481308 From: Penny Zheng To: xen-devel@lists.xenproject.org, michal.orzel@amd.com Cc: wei.chen@arm.com, Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng Subject: [PATCH v5 09/11] xen/docs: refine docs about static shared memory Date: Wed, 6 Dec 2023 17:06:21 +0800 Message-Id: <20231206090623.1932275-10-Penny.Zheng@arm.com> In-Reply-To: <20231206090623.1932275-1-Penny.Zheng@arm.com> References: <20231206090623.1932275-1-Penny.Zheng@arm.com>
This commit amends the docs (docs/misc/arm/device-tree/booting.txt) to cover the new scenario where the host address is not provided in the "xen,shared-mem" property, and adds a new example explaining it in detail. We also fix some incorrect information in the docs, e.g. the SHMID is "my-shared-mem-1", not "0x1".
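To make the two layouts concrete: with the host address present the property holds 2 * #address-cells + #size-cells cells, without it only #address-cells + #size-cells, so the two forms can always be told apart by length. The following standalone C sketch illustrates that check; it is only an illustration under that assumption, not Xen's actual parsing code, and the helper name is made up:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /*
     * Hypothetical helper: decide whether a "xen,shared-mem" property
     * carries a host physical address, based purely on its length.
     *
     * With host address:    < [host addr] [guest addr] [size] >
     *                       -> (2 * addr_cells + size_cells) cells
     * Without host address: < [guest addr] [size] >
     *                       -> (addr_cells + size_cells) cells
     */
    static bool shared_mem_has_host_addr(uint32_t addr_cells,
                                         uint32_t size_cells,
                                         uint32_t prop_len_bytes)
    {
        uint32_t ncells = prop_len_bytes / sizeof(uint32_t);

        return ncells == 2 * addr_cells + size_cells;
    }

    int main(void)
    {
        /* e.g. #address-cells = <2> and #size-cells = <2> on arm64 */
        uint32_t ac = 2, sc = 2;

        /* 6 cells: host + guest + size; 4 cells: guest + size only */
        printf("6 cells -> host address %s\n",
               shared_mem_has_host_addr(ac, sc, 6 * 4) ? "present" : "absent");
        printf("4 cells -> host address %s\n",
               shared_mem_has_host_addr(ac, sc, 4 * 4) ? "present" : "absent");

        return 0;
    }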
Signed-off-by: Penny Zheng --- docs/misc/arm/device-tree/booting.txt | 52 ++++++++++++++++++++------- 1 file changed, 39 insertions(+), 13 deletions(-) diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt index bbd955e9c2..ac4bad6fe5 100644 --- a/docs/misc/arm/device-tree/booting.txt +++ b/docs/misc/arm/device-tree/booting.txt @@ -590,7 +590,7 @@ communication. An array takes a physical address, which is the base address of the shared memory region in host physical address space, a size, and a guest physical address, as the target address of the mapping. - e.g. xen,shared-mem = < [host physical address] [guest address] [size] > + e.g. xen,shared-mem = < [host physical address] [guest address] [size] >; It shall also meet the following criteria: 1) If the SHM ID matches with an existing region, the address range of the @@ -601,8 +601,8 @@ communication. The number of cells for the host address (and size) is the same as the guest pseudo-physical address and they are inherited from the parent node. - Host physical address is optional, when missing Xen decides the location - (currently unimplemented). + Host physical address is optional, when missing Xen decides the location. + e.g. xen,shared-mem = < [guest address] [size] >; - role (Optional) @@ -629,7 +629,7 @@ chosen { role = "owner"; xen,shm-id = "my-shared-mem-0"; xen,shared-mem = <0x10000000 0x10000000 0x10000000>; - } + }; domU1 { compatible = "xen,domain"; @@ -640,25 +640,36 @@ chosen { vpl011; /* - * shared memory region identified as 0x0(xen,shm-id = <0x0>) - * is shared between Dom0 and DomU1. + * shared memory region "my-shared-mem-0" is shared + * between Dom0 and DomU1. */ domU1-shared-mem@10000000 { compatible = "xen,domain-shared-memory-v1"; role = "borrower"; xen,shm-id = "my-shared-mem-0"; xen,shared-mem = <0x10000000 0x50000000 0x10000000>; - } + }; /* - * shared memory region identified as 0x1(xen,shm-id = <0x1>) - * is shared between DomU1 and DomU2. + * shared memory region "my-shared-mem-1" is shared between + * DomU1 and DomU2. */ domU1-shared-mem@50000000 { compatible = "xen,domain-shared-memory-v1"; xen,shm-id = "my-shared-mem-1"; xen,shared-mem = <0x50000000 0x60000000 0x20000000>; - } + }; + + /* + * shared memory region "my-shared-mem-2" is shared between + * DomU1 and DomU2. + */ + domU1-shared-mem-2 { + compatible = "xen,domain-shared-memory-v1"; + xen,shm-id = "my-shared-mem-2"; + role = "owner"; + xen,shared-mem = <0x80000000 0x20000000>; + }; ...... @@ -672,14 +683,21 @@ chosen { cpus = <1>; /* - * shared memory region identified as 0x1(xen,shm-id = <0x1>) - * is shared between domU1 and domU2. + * shared memory region "my-shared-mem-1" is shared between + * domU1 and domU2. */ domU2-shared-mem@50000000 { compatible = "xen,domain-shared-memory-v1"; xen,shm-id = "my-shared-mem-1"; xen,shared-mem = <0x50000000 0x70000000 0x20000000>; - } + }; + + domU2-shared-mem-2 { + compatible = "xen,domain-shared-memory-v1"; + xen,shm-id = "my-shared-mem-2"; + role = "borrower"; + xen,shared-mem = <0x90000000 0x20000000>; + }; ...... }; @@ -699,3 +717,11 @@ shared between DomU1 and DomU2. It will get mapped at 0x60000000 in DomU1 guest physical address space, and at 0x70000000 in DomU2 guest physical address space. DomU1 and DomU2 are both the borrower domain, the owner domain is the default owner domain DOMID_IO. 
+ +For the static shared memory region "my-shared-mem-2", since host physical +address is not provided by user, Xen will automatically allocate 512MB +from heap as static shared memory to be shared between DomU1 and DomU2. +The automatically allocated static shared memory will get mapped at +0x80000000 in DomU1 guest physical address space, and at 0x90000000 in DomU2 +guest physical address space. DomU1 is explicitly defined as the owner domain, +and DomU2 is the borrower domain.
From patchwork Wed Dec 6 09:06:22 2023 X-Patchwork-Submitter: Penny Zheng X-Patchwork-Id: 13481310 From: Penny Zheng To: xen-devel@lists.xenproject.org, michal.orzel@amd.com Cc: wei.chen@arm.com, Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng Subject: [PATCH v5 10/11] xen/arm: fix duplicate /reserved-memory node in Dom0 Date: Wed, 6 Dec 2023 17:06:22 +0800 Message-Id: <20231206090623.1932275-11-Penny.Zheng@arm.com> In-Reply-To: <20231206090623.1932275-1-Penny.Zheng@arm.com> References: <20231206090623.1932275-1-Penny.Zheng@arm.com>
In case there is a /reserved-memory node already present in the host dtb, the current Xen code would create yet another /reserved-memory node specifically for the static shm in the Dom0 Device Tree.
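The underlying pattern is "create /reserved-memory only when the copied host DT did not already provide one". Below is a minimal sketch of that check using plain libfdt on a read-write FDT blob; the helper name is invented for illustration, and the actual patch instead hooks into Xen's write_properties()/kernel_info flow, as described next:

    #include <libfdt.h>

    /*
     * Hypothetical helper: make sure "fdt" contains a /reserved-memory
     * node, creating one only when the (copied) host DT did not already
     * provide it. Returns the node offset or a negative libfdt error.
     */
    static int ensure_resv_memory_node(void *fdt)
    {
        int err;
        int node = fdt_path_offset(fdt, "/reserved-memory");

        if ( node >= 0 )
            return node;                /* inherited from the host DT */
        if ( node != -FDT_ERR_NOTFOUND )
            return node;                /* genuine error, propagate */

        /*
         * Not present: create it. A fuller version would also mirror
         * the root #address-cells/#size-cells here.
         */
        node = fdt_add_subnode(fdt, 0, "reserved-memory");
        if ( node < 0 )
            return node;

        err = fdt_setprop_empty(fdt, node, "ranges");
        if ( err < 0 )
            return err;

        return node;
    }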
Xen uses write_properties() to copy the reserved memory nodes from the host dtb to the Dom0 FDT, so we insert the shm node as part of that copying. To avoid duplication, we add a check before make_resv_memory_node(). Signed-off-by: Penny Zheng --- v3 -> v4: new commit --- v4 -> v5: rebase and no change --- xen/arch/arm/domain_build.c | 27 ++++++++++++++++++++++--- xen/arch/arm/include/asm/kernel.h | 2 ++ xen/arch/arm/include/asm/static-shmem.h | 14 +++++++++++++ xen/arch/arm/static-shmem.c | 6 +++--- 4 files changed, 43 insertions(+), 6 deletions(-) diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index e040f8a6d9..f098678ea3 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -752,6 +752,23 @@ static int __init write_properties(struct domain *d, struct kernel_info *kinfo, } } + if ( dt_node_path_is_equal(node, "/reserved-memory") ) + { + kinfo->resv_mem = true; + + /* shared memory provided. */ + if ( kinfo->shminfo.nr_banks != 0 ) + { + uint32_t addrcells = dt_n_addr_cells(node), + sizecells = dt_n_size_cells(node); + + res = make_shm_memory_node(d, kinfo->fdt, + addrcells, sizecells, kinfo); + if ( res ) + return res; + } + } + return 0; } @@ -1856,9 +1873,13 @@ static int __init handle_node(struct domain *d, struct kernel_info *kinfo, return res; } - res = make_resv_memory_node(d, kinfo->fdt, addrcells, sizecells, kinfo); - if ( res ) - return res; + /* Avoid duplicate /reserved-memory nodes in Device Tree */ + if ( !kinfo->resv_mem ) + { + res = make_resv_memory_node(d, kinfo->fdt, addrcells, sizecells, kinfo); + if ( res ) + return res; + } } res = fdt_end_node(kinfo->fdt); diff --git a/xen/arch/arm/include/asm/kernel.h b/xen/arch/arm/include/asm/kernel.h index db3d8232fa..8fe2105a91 100644 --- a/xen/arch/arm/include/asm/kernel.h +++ b/xen/arch/arm/include/asm/kernel.h @@ -39,6 +39,8 @@ struct kernel_info { void *fdt; /* flat device tree */ paddr_t unassigned_mem; /* RAM not (yet) assigned to a bank */ struct meminfo mem; + /* Whether we have /reserved-memory node in host Device Tree */ + bool resv_mem; /* Static shared memory banks */ struct { unsigned int nr_banks; diff --git a/xen/arch/arm/include/asm/static-shmem.h b/xen/arch/arm/include/asm/static-shmem.h index d149985291..6cb4ef9646 100644 --- a/xen/arch/arm/include/asm/static-shmem.h +++ b/xen/arch/arm/include/asm/static-shmem.h @@ -33,6 +33,11 @@ int remove_shm_from_rangeset(const struct kernel_info *kinfo, struct meminfo *orig_ext); + +int make_shm_memory_node(const struct domain *d, + void *fdt, + int addrcells, int sizecells, + const struct kernel_info *kinfo); #else /* !CONFIG_STATIC_SHM */ static inline int make_resv_memory_node(const struct domain *d, void *fdt, @@ -72,6 +77,15 @@ static inline int remove_shm_holes_for_domU(const struct kernel_info *kinfo, { return 0; } + +static inline int make_shm_memory_node(const struct domain *d, + void *fdt, + int addrcells, int sizecells, + const struct kernel_info *kinfo) +{ + return 0; +} + #endif /* CONFIG_STATIC_SHM */ #endif /* __ASM_STATIC_SHMEM_H_ */ diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c index a06949abaf..bfce5bbad0 100644 --- a/xen/arch/arm/static-shmem.c +++ b/xen/arch/arm/static-shmem.c @@ -505,9 +505,9 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo, return 0; } -static int __init make_shm_memory_node(const struct domain *d, void *fdt, - int addrcells, int sizecells, - const struct kernel_info 
*kinfo) +int __init make_shm_memory_node(const struct domain *d, void *fdt, + int addrcells, int sizecells, + const struct kernel_info *kinfo) { unsigned int i = 0; int res = 0;
From patchwork Wed Dec 6 09:06:23 2023 X-Patchwork-Submitter: Penny Zheng X-Patchwork-Id: 13481309 From: Penny Zheng To: xen-devel@lists.xenproject.org, michal.orzel@amd.com Cc: wei.chen@arm.com, Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng Subject: [PATCH v5 11/11] xen/arm: create another /memory node for static shm Date: Wed, 6 Dec 2023 17:06:23 +0800 Message-Id: <20231206090623.1932275-12-Penny.Zheng@arm.com> In-Reply-To: <20231206090623.1932275-1-Penny.Zheng@arm.com> References: <20231206090623.1932275-1-Penny.Zheng@arm.com>
A static shared memory region shall be described both under /memory and /reserved-memory. We introduce export_shm_memory_node() to create another /memory node containing the static shared memory ranges.
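For reference, a /memory node is simply a node with device_type = "memory" and a reg property listing (base, size) pairs encoded with the parent's #address-cells/#size-cells. Here is a minimal libfdt sketch of building one from a bank list; the names and the fixed 2/2 cell layout are assumptions for illustration, not the internals of Xen's make_memory_node():

    #include <libfdt.h>
    #include <stdint.h>
    #include <stdio.h>

    struct bank {
        uint64_t start;
        uint64_t size;
    };

    #define MAX_BANKS 16

    /*
     * Hypothetical helper: append a memory@<base> node describing
     * "banks" to a read-write FDT, assuming #address-cells =
     * #size-cells = 2 (64-bit addresses split into two 32-bit cells).
     */
    static int add_memory_node(void *fdt, const struct bank *banks,
                               unsigned int nr_banks)
    {
        /* 2 address cells + 2 size cells per bank, big-endian words */
        fdt32_t reg[4 * MAX_BANKS];
        char name[32];
        unsigned int i;
        int node, err;

        if ( nr_banks == 0 || nr_banks > MAX_BANKS )
            return -FDT_ERR_BADVALUE;

        snprintf(name, sizeof(name), "memory@%llx",
                 (unsigned long long)banks[0].start);

        node = fdt_add_subnode(fdt, 0, name);
        if ( node < 0 )
            return node;

        for ( i = 0; i < nr_banks; i++ )
        {
            reg[4 * i + 0] = cpu_to_fdt32(banks[i].start >> 32);
            reg[4 * i + 1] = cpu_to_fdt32(banks[i].start & 0xffffffff);
            reg[4 * i + 2] = cpu_to_fdt32(banks[i].size >> 32);
            reg[4 * i + 3] = cpu_to_fdt32(banks[i].size & 0xffffffff);
        }

        err = fdt_setprop_string(fdt, node, "device_type", "memory");
        if ( err < 0 )
            return err;

        return fdt_setprop(fdt, node, "reg", reg,
                           4 * nr_banks * sizeof(fdt32_t));
    }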
Signed-off-by: Penny Zheng --- v3 -> v4: new commit --- v4 -> v5: rebase and no changes --- xen/arch/arm/dom0less-build.c | 8 ++++++++ xen/arch/arm/domain_build.c | 8 ++++++++ xen/arch/arm/include/asm/static-shmem.h | 10 ++++++++++ xen/arch/arm/static-shmem.c | 19 +++++++++++++++++++ 4 files changed, 45 insertions(+) diff --git a/xen/arch/arm/dom0less-build.c b/xen/arch/arm/dom0less-build.c index ac096fa3fa..870b8a553f 100644 --- a/xen/arch/arm/dom0less-build.c +++ b/xen/arch/arm/dom0less-build.c @@ -645,6 +645,14 @@ static int __init prepare_dtb_domU(struct domain *d, struct kernel_info *kinfo) if ( ret ) goto err; + /* Create a memory node to store the static shared memory regions */ + if ( kinfo->shminfo.nr_banks != 0 ) + { + ret = export_shm_memory_node(d, kinfo, addrcells, sizecells); + if ( ret ) + goto err; + } + ret = make_resv_memory_node(d, kinfo->fdt, addrcells, sizecells, kinfo); if ( ret ) goto err; diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index f098678ea3..4e450cb4c7 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -1873,6 +1873,14 @@ static int __init handle_node(struct domain *d, struct kernel_info *kinfo, return res; } + /* Create a memory node to store the static shared memory regions */ + if ( kinfo->shminfo.nr_banks != 0 ) + { + res = export_shm_memory_node(d, kinfo, addrcells, sizecells); + if ( res ) + return res; + } + /* Avoid duplicate /reserved-memory nodes in Device Tree */ if ( !kinfo->resv_mem ) { diff --git a/xen/arch/arm/include/asm/static-shmem.h b/xen/arch/arm/include/asm/static-shmem.h index 6cb4ef9646..385fd24c17 100644 --- a/xen/arch/arm/include/asm/static-shmem.h +++ b/xen/arch/arm/include/asm/static-shmem.h @@ -38,6 +38,10 @@ int make_shm_memory_node(const struct domain *d, void *fdt, int addrcells, int sizecells, const struct kernel_info *kinfo); + +int export_shm_memory_node(const struct domain *d, + const struct kernel_info *kinfo, + int addrcells, int sizecells); #else /* !CONFIG_STATIC_SHM */ static inline int make_resv_memory_node(const struct domain *d, void *fdt, @@ -86,6 +90,12 @@ static inline int make_shm_memory_node(const struct domain *d, return 0; } +static inline int export_shm_memory_node(const struct domain *d, + const struct kernel_info *kinfo, + int addrcells, int sizecells) +{ + return 0; +} #endif /* CONFIG_STATIC_SHM */ #endif /* __ASM_STATIC_SHMEM_H_ */ diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c index bfce5bbad0..e583aae685 100644 --- a/xen/arch/arm/static-shmem.c +++ b/xen/arch/arm/static-shmem.c @@ -505,6 +505,25 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo, return 0; } +int __init export_shm_memory_node(const struct domain *d, + const struct kernel_info *kinfo, + int addrcells, int sizecells) +{ + unsigned int i = 0; + struct meminfo shm_meminfo; + + /* Extract meminfo from kinfo.shminfo */ + for ( ; i < kinfo->shminfo.nr_banks; i++ ) + { + shm_meminfo.bank[i].start = kinfo->shminfo.bank[i].membank.start; + shm_meminfo.bank[i].size = kinfo->shminfo.bank[i].membank.size; + shm_meminfo.bank[i].type = MEMBANK_DEFAULT; + } + shm_meminfo.nr_banks = kinfo->shminfo.nr_banks; + + return make_memory_node(d, kinfo->fdt, addrcells, sizecells, &shm_meminfo); +} + int __init make_shm_memory_node(const struct domain *d, void *fdt, int addrcells, int sizecells, const struct kernel_info *kinfo)