From patchwork Tue Apr 23 08:25:26 2024
X-Patchwork-Submitter: Luca Fancellu
X-Patchwork-Id: 13639595
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel, Volodymyr Babchuk
Subject: [PATCH 1/7] xen/arm: Lookup bootinfo shm bank during the mapping
Date: Tue, 23 Apr 2024 09:25:26 +0100
Message-Id: <20240423082532.776623-2-luca.fancellu@arm.com>
In-Reply-To: <20240423082532.776623-1-luca.fancellu@arm.com>
References: <20240423082532.776623-1-luca.fancellu@arm.com>

The current static shared memory code uses the bootinfo banks when it needs to find the number of borrowers, so every time assign_shared_memory is called the bank is searched in the bootinfo.shmem structure. There is nothing wrong with that, however the bank can also be used to retrieve the start address and size, and to pass fewer arguments to assign_shared_memory. When the information is retrieved from the bootinfo bank, it is also possible to move the alignment checks to process_shm_node, in the early stages of boot.
So create a new function find_shm() which takes a 'struct shared_meminfo' structure and the shared memory ID, to look for a bank with a matching ID, take the physical host address and size from the bank, pass the bank to assign_shared_memory() removing the now unnecessary arguments and finally remove the acquire_nr_borrower_domain() function since now the information can be extracted from the passed bank. Move the "xen,shm-id" parsing early in process_shm to bail out quickly in case of errors (unlikely), as said above, move the checks on alignment to process_shm_node. Drawback of this change is that now the bootinfo are used also when the bank doesn't need to be allocated, however it will be convinient later to use it as an argument for assign_shared_memory when dealing with the use case where the Host physical address is not supplied by the user. Signed-off-by: Luca Fancellu --- xen/arch/arm/static-shmem.c | 105 ++++++++++++++++++++---------------- 1 file changed, 58 insertions(+), 47 deletions(-) diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c index 09f474ec6050..f6cf74e58a83 100644 --- a/xen/arch/arm/static-shmem.c +++ b/xen/arch/arm/static-shmem.c @@ -19,29 +19,24 @@ static void __init __maybe_unused build_assertions(void) offsetof(struct shared_meminfo, bank))); } -static int __init acquire_nr_borrower_domain(struct domain *d, - paddr_t pbase, paddr_t psize, - unsigned long *nr_borrowers) +static const struct membank __init *find_shm(const struct membanks *shmem, + const char *shm_id) { - const struct membanks *shmem = bootinfo_get_shmem(); unsigned int bank; - /* Iterate reserved memory to find requested shm bank. */ + BUG_ON(!shmem || !shm_id); + for ( bank = 0 ; bank < shmem->nr_banks; bank++ ) { - paddr_t bank_start = shmem->bank[bank].start; - paddr_t bank_size = shmem->bank[bank].size; - - if ( (pbase == bank_start) && (psize == bank_size) ) + if ( strncmp(shm_id, shmem->bank[bank].shmem_extra->shm_id, + MAX_SHM_ID_LENGTH) == 0 ) break; } if ( bank == shmem->nr_banks ) - return -ENOENT; - - *nr_borrowers = shmem->bank[bank].shmem_extra->nr_shm_borrowers; + return NULL; - return 0; + return &shmem->bank[bank]; } /* @@ -103,14 +98,20 @@ static mfn_t __init acquire_shared_memory_bank(struct domain *d, return smfn; } -static int __init assign_shared_memory(struct domain *d, - paddr_t pbase, paddr_t psize, - paddr_t gbase) +static int __init assign_shared_memory(struct domain *d, paddr_t gbase, + const struct membank *shm_bank) { mfn_t smfn; int ret = 0; unsigned long nr_pages, nr_borrowers, i; struct page_info *page; + paddr_t pbase, psize; + + BUG_ON(!shm_bank || !shm_bank->shmem_extra); + + pbase = shm_bank->start; + psize = shm_bank->size; + nr_borrowers = shm_bank->shmem_extra->nr_shm_borrowers; printk("%pd: allocate static shared memory BANK %#"PRIpaddr"-%#"PRIpaddr".\n", d, pbase, pbase + psize); @@ -135,14 +136,6 @@ static int __init assign_shared_memory(struct domain *d, } } - /* - * Get the right amount of references per page, which is the number of - * borrower domains. 
- */ - ret = acquire_nr_borrower_domain(d, pbase, psize, &nr_borrowers); - if ( ret ) - return ret; - /* * Instead of letting borrower domain get a page ref, we add as many * additional reference as the number of borrowers when the owner @@ -199,6 +192,7 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo, dt_for_each_child_node(node, shm_node) { + const struct membank *boot_shm_bank; const struct dt_property *prop; const __be32 *cells; uint32_t addr_cells, size_cells; @@ -212,6 +206,23 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo, if ( !dt_device_is_compatible(shm_node, "xen,domain-shared-memory-v1") ) continue; + if ( dt_property_read_string(shm_node, "xen,shm-id", &shm_id) ) + { + printk("%pd: invalid \"xen,shm-id\" property", d); + return -EINVAL; + } + BUG_ON((strlen(shm_id) <= 0) || (strlen(shm_id) >= MAX_SHM_ID_LENGTH)); + + boot_shm_bank = find_shm(bootinfo_get_shmem(), shm_id); + if ( !boot_shm_bank ) + { + printk("%pd: static shared memory bank not found: '%s'", d, shm_id); + return -ENOENT; + } + + pbase = boot_shm_bank->start; + psize = boot_shm_bank->size; + /* * xen,shared-mem = ; * TODO: pbase is optional. @@ -221,20 +232,7 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo, prop = dt_find_property(shm_node, "xen,shared-mem", NULL); BUG_ON(!prop); cells = (const __be32 *)prop->value; - device_tree_get_reg(&cells, addr_cells, addr_cells, &pbase, &gbase); - psize = dt_read_paddr(cells, size_cells); - if ( !IS_ALIGNED(pbase, PAGE_SIZE) || !IS_ALIGNED(gbase, PAGE_SIZE) ) - { - printk("%pd: physical address 0x%"PRIpaddr", or guest address 0x%"PRIpaddr" is not suitably aligned.\n", - d, pbase, gbase); - return -EINVAL; - } - if ( !IS_ALIGNED(psize, PAGE_SIZE) ) - { - printk("%pd: size 0x%"PRIpaddr" is not suitably aligned\n", - d, psize); - return -EINVAL; - } + gbase = dt_read_paddr(cells + addr_cells, addr_cells); for ( i = 0; i < PFN_DOWN(psize); i++ ) if ( !mfn_valid(mfn_add(maddr_to_mfn(pbase), i)) ) @@ -251,13 +249,6 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo, if ( dt_property_read_string(shm_node, "role", &role_str) == 0 ) owner_dom_io = false; - if ( dt_property_read_string(shm_node, "xen,shm-id", &shm_id) ) - { - printk("%pd: invalid \"xen,shm-id\" property", d); - return -EINVAL; - } - BUG_ON((strlen(shm_id) <= 0) || (strlen(shm_id) >= MAX_SHM_ID_LENGTH)); - /* * DOMID_IO is a fake domain and is not described in the Device-Tree. * Therefore when the owner of the shared region is DOMID_IO, we will @@ -270,8 +261,8 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo, * We found the first borrower of the region, the owner was not * specified, so they should be assigned to dom_io. */ - ret = assign_shared_memory(owner_dom_io ? dom_io : d, - pbase, psize, gbase); + ret = assign_shared_memory(owner_dom_io ? 
dom_io : d, gbase, + boot_shm_bank); if ( ret ) return ret; } @@ -440,6 +431,26 @@ int __init process_shm_node(const void *fdt, int node, uint32_t address_cells, device_tree_get_reg(&cell, address_cells, address_cells, &paddr, &gaddr); size = dt_next_cell(size_cells, &cell); + if ( !IS_ALIGNED(paddr, PAGE_SIZE) ) + { + printk("fdt: physical address 0x%"PRIpaddr" is not suitably aligned.\n", + paddr); + return -EINVAL; + } + + if ( !IS_ALIGNED(gaddr, PAGE_SIZE) ) + { + printk("fdt: guest address 0x%"PRIpaddr" is not suitably aligned.\n", + gaddr); + return -EINVAL; + } + + if ( !IS_ALIGNED(size, PAGE_SIZE) ) + { + printk("fdt: size 0x%"PRIpaddr" is not suitably aligned\n", size); + return -EINVAL; + } + if ( !size ) { printk("fdt: the size for static shared memory region can not be zero\n"); From patchwork Tue Apr 23 08:25:27 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Fancellu X-Patchwork-Id: 13639601 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D6EA1C1746D for ; Tue, 23 Apr 2024 08:26:02 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.710381.1109569 (Exim 4.92) (envelope-from ) id 1rzBTJ-0002kD-Fs; Tue, 23 Apr 2024 08:25:49 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 710381.1109569; Tue, 23 Apr 2024 08:25:49 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1rzBTJ-0002k4-B2; Tue, 23 Apr 2024 08:25:49 +0000 Received: by outflank-mailman (input) for mailman id 710381; Tue, 23 Apr 2024 08:25:47 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1rzBTH-0002VX-TS for xen-devel@lists.xenproject.org; Tue, 23 Apr 2024 08:25:47 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id 14d33859-014b-11ef-909a-e314d9c70b13; Tue, 23 Apr 2024 10:25:46 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 31EBE1476; Tue, 23 Apr 2024 01:26:13 -0700 (PDT) Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4E6313F64C; Tue, 23 Apr 2024 01:25:44 -0700 (PDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 14d33859-014b-11ef-909a-e314d9c70b13 From: Luca Fancellu To: xen-devel@lists.xenproject.org Cc: Stefano Stabellini , Julien Grall , Bertrand Marquis , Michal Orzel , Volodymyr Babchuk Subject: [PATCH 2/7] xen/arm: Wrap shared memory mapping code in one function Date: Tue, 23 Apr 2024 09:25:27 +0100 Message-Id: <20240423082532.776623-3-luca.fancellu@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240423082532.776623-1-luca.fancellu@arm.com> References: 
<20240423082532.776623-1-luca.fancellu@arm.com> MIME-Version: 1.0 Wrap the code and logic that is calling assign_shared_memory and map_regions_p2mt into a new function 'handle_shared_mem_bank', it will become useful later when the code will allow the user to don't pass the host physical address. Signed-off-by: Luca Fancellu --- xen/arch/arm/static-shmem.c | 71 +++++++++++++++++++++++-------------- 1 file changed, 45 insertions(+), 26 deletions(-) diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c index f6cf74e58a83..24e40495a481 100644 --- a/xen/arch/arm/static-shmem.c +++ b/xen/arch/arm/static-shmem.c @@ -185,6 +185,47 @@ append_shm_bank_to_domain(struct shared_meminfo *kinfo_shm_mem, paddr_t start, return 0; } +static int __init handle_shared_mem_bank(struct domain *d, paddr_t gbase, + bool owner_dom_io, + const char *role_str, + const struct membank *shm_bank) +{ + paddr_t pbase, psize; + int ret; + + BUG_ON(!shm_bank); + + pbase = shm_bank->start; + psize = shm_bank->size; + /* + * DOMID_IO is a fake domain and is not described in the Device-Tree. + * Therefore when the owner of the shared region is DOMID_IO, we will + * only find the borrowers. + */ + if ( (owner_dom_io && !is_shm_allocated_to_domio(pbase)) || + (!owner_dom_io && strcmp(role_str, "owner") == 0) ) + { + /* + * We found the first borrower of the region, the owner was not + * specified, so they should be assigned to dom_io. + */ + ret = assign_shared_memory(owner_dom_io ? dom_io : d, gbase, shm_bank); + if ( ret ) + return ret; + } + + if ( owner_dom_io || (strcmp(role_str, "borrower") == 0) ) + { + /* Set up P2M foreign mapping for borrower domain. */ + ret = map_regions_p2mt(d, _gfn(PFN_UP(gbase)), PFN_DOWN(psize), + _mfn(PFN_UP(pbase)), p2m_map_foreign_rw); + if ( ret ) + return ret; + } + + return 0; +} + int __init process_shm(struct domain *d, struct kernel_info *kinfo, const struct dt_device_node *node) { @@ -249,32 +290,10 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo, if ( dt_property_read_string(shm_node, "role", &role_str) == 0 ) owner_dom_io = false; - /* - * DOMID_IO is a fake domain and is not described in the Device-Tree. - * Therefore when the owner of the shared region is DOMID_IO, we will - * only find the borrowers. - */ - if ( (owner_dom_io && !is_shm_allocated_to_domio(pbase)) || - (!owner_dom_io && strcmp(role_str, "owner") == 0) ) - { - /* - * We found the first borrower of the region, the owner was not - * specified, so they should be assigned to dom_io. - */ - ret = assign_shared_memory(owner_dom_io ? dom_io : d, gbase, - boot_shm_bank); - if ( ret ) - return ret; - } - - if ( owner_dom_io || (strcmp(role_str, "borrower") == 0) ) - { - /* Set up P2M foreign mapping for borrower domain. 
*/ - ret = map_regions_p2mt(d, _gfn(PFN_UP(gbase)), PFN_DOWN(psize), - _mfn(PFN_UP(pbase)), p2m_map_foreign_rw); - if ( ret ) - return ret; - } + ret = handle_shared_mem_bank(d, gbase, owner_dom_io, role_str, + boot_shm_bank); + if ( ret ) + return ret; /* * Record static shared memory region info for later setting

From patchwork Tue Apr 23 08:25:28 2024
X-Patchwork-Submitter: Luca Fancellu
X-Patchwork-Id: 13639600
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Cc: Penny Zheng, Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel, Volodymyr Babchuk
Subject: [PATCH 3/7] xen/p2m: put reference for superpage
Date: Tue, 23 Apr 2024 09:25:28 +0100
Message-Id: <20240423082532.776623-4-luca.fancellu@arm.com>
In-Reply-To: <20240423082532.776623-1-luca.fancellu@arm.com>
References: <20240423082532.776623-1-luca.fancellu@arm.com>

From: Penny Zheng

We are doing foreign memory mapping for static shared memory, and there is a great possibility that it could be super mapped. But today, p2m_put_l3_page cannot handle superpages. This commit implements a new function, p2m_put_superpage, to handle superpages, specifically to help put the extra references taken on foreign superpages.
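As a rough illustration of what this means in practice (a sketch only, assuming the usual 4KB granule, so that each translation level fans out by XEN_PT_LPAE_ENTRIES == 512 entries):

/*
 * Putting references for a superpage has to visit every 4KB page it covers:
 *
 *   level 2 block (2MB) ->       512 L3 pages
 *   level 1 block (1GB) -> 512 * 512 = 262144 L3 pages
 *
 * The recursive p2m_put_superpage() below walks one level at a time,
 * advancing the MFN by 1 << XEN_PT_LEVEL_ORDER(next_level) per entry, so
 * only the innermost recursion level ends up calling p2m_put_l3_page().
 */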
Signed-off-by: Penny Zheng Signed-off-by: Luca Fancellu --- v1: - patch from https://patchwork.kernel.org/project/xen-devel/patch/20231206090623.1932275-9-Penny.Zheng@arm.com/ --- xen/arch/arm/mmu/p2m.c | 58 +++++++++++++++++++++++++++++++----------- 1 file changed, 43 insertions(+), 15 deletions(-) diff --git a/xen/arch/arm/mmu/p2m.c b/xen/arch/arm/mmu/p2m.c index 41fcca011cf4..479a80fbd4cf 100644 --- a/xen/arch/arm/mmu/p2m.c +++ b/xen/arch/arm/mmu/p2m.c @@ -753,17 +753,9 @@ static int p2m_mem_access_radix_set(struct p2m_domain *p2m, gfn_t gfn, return rc; } -/* - * Put any references on the single 4K page referenced by pte. - * TODO: Handle superpages, for now we only take special references for leaf - * pages (specifically foreign ones, which can't be super mapped today). - */ -static void p2m_put_l3_page(const lpae_t pte) +/* Put any references on the single 4K page referenced by mfn. */ +static void p2m_put_l3_page(mfn_t mfn, unsigned type) { - mfn_t mfn = lpae_get_mfn(pte); - - ASSERT(p2m_is_valid(pte)); - /* * TODO: Handle other p2m types * @@ -771,16 +763,53 @@ static void p2m_put_l3_page(const lpae_t pte) * flush the TLBs if the page is reallocated before the end of * this loop. */ - if ( p2m_is_foreign(pte.p2m.type) ) + if ( p2m_is_foreign(type) ) { ASSERT(mfn_valid(mfn)); put_page(mfn_to_page(mfn)); } /* Detect the xenheap page and mark the stored GFN as invalid. */ - else if ( p2m_is_ram(pte.p2m.type) && is_xen_heap_mfn(mfn) ) + else if ( p2m_is_ram(type) && is_xen_heap_mfn(mfn) ) page_set_xenheap_gfn(mfn_to_page(mfn), INVALID_GFN); } +/* Put any references on the superpage referenced by mfn. */ +static void p2m_put_superpage(mfn_t mfn, unsigned int next_level, unsigned type) +{ + unsigned int i; + unsigned int level_order = XEN_PT_LEVEL_ORDER(next_level); + + for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ ) + { + if ( next_level == 3 ) + p2m_put_l3_page(mfn, type); + else + p2m_put_superpage(mfn, next_level + 1, type); + + mfn = mfn_add(mfn, 1 << level_order); + } +} + +/* Put any references on the page referenced by pte. */ +static void p2m_put_page(const lpae_t pte, unsigned int level) +{ + mfn_t mfn = lpae_get_mfn(pte); + + ASSERT(p2m_is_valid(pte)); + + /* + * We are either having a first level 1G superpage or a + * second level 2M superpage. + */ + if ( p2m_is_superpage(pte, level) ) + return p2m_put_superpage(mfn, level + 1, pte.p2m.type); + else + { + ASSERT(level == 3); + return p2m_put_l3_page(mfn, pte.p2m.type); + } +} + /* Free lpae sub-tree behind an entry */ static void p2m_free_entry(struct p2m_domain *p2m, lpae_t entry, unsigned int level) @@ -809,9 +838,8 @@ static void p2m_free_entry(struct p2m_domain *p2m, #endif p2m->stats.mappings[level]--; - /* Nothing to do if the entry is a super-page. 
*/ - if ( level == 3 ) - p2m_put_l3_page(entry); + p2m_put_page(entry, level); + return; }

From patchwork Tue Apr 23 08:25:29 2024
X-Patchwork-Submitter: Luca Fancellu
X-Patchwork-Id: 13639602
From: Luca Fancellu
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel, Volodymyr Babchuk
Subject: [PATCH 4/7] xen/arm: Parse xen,shared-mem when host phys address is not provided
Date: Tue, 23 Apr 2024 09:25:29 +0100
Message-Id: <20240423082532.776623-5-luca.fancellu@arm.com>
In-Reply-To: <20240423082532.776623-1-luca.fancellu@arm.com>
References: <20240423082532.776623-1-luca.fancellu@arm.com>

Handle the parsing of the 'xen,shared-mem' property when the host physical address is not provided: this commit introduces the logic to parse it, but the functionality is still not implemented and will be part of future commits. Rework the logic inside process_shm_node to check the shm_id before doing the other checks, because it eases the logic itself, and add more comments on the logic.
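For illustration, these are the two "xen,shared-mem" layouts the reworked parser has to tell apart (a sketch based on the binding documented later in this series; cell counts come from the parent node's #address-cells/#size-cells):

/*
 * Layouts of the property, distinguished by their length:
 *
 *   xen,shared-mem = <paddr gaddr size>;   host address supplied by the user
 *   xen,shared-mem = <gaddr size>;         host address left for Xen to pick
 *
 * i.e. len == dt_cells_to_size(addr_cells + addr_cells + size_cells) in the
 * first case and len == dt_cells_to_size(addr_cells + size_cells) in the
 * second, which is the check process_shm_node performs below.
 */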
Now when the host physical address is not provided, the value INVALID_PADDR is chosen to signal this condition and it is stored as start of the bank, due to that change also early_print_info_shmem and init_sharedmem_pages are changed, to don't handle banks with start equal to INVALID_PADDR. Another change is done inside meminfo_overlap_check, to skip banks that are starting with the start address INVALID_PADDR, that function is used to check banks from reserved memory and ACPI and it's unlikely for these bank to have the start address as INVALID_PADDR. The change holds because of this consideration. Signed-off-by: Luca Fancellu --- xen/arch/arm/setup.c | 3 +- xen/arch/arm/static-shmem.c | 129 +++++++++++++++++++++++++----------- 2 files changed, 93 insertions(+), 39 deletions(-) diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c index d242674381d4..f15d40a85a5f 100644 --- a/xen/arch/arm/setup.c +++ b/xen/arch/arm/setup.c @@ -297,7 +297,8 @@ static bool __init meminfo_overlap_check(const struct membanks *mem, bank_start = mem->bank[i].start; bank_end = bank_start + mem->bank[i].size; - if ( region_end <= bank_start || region_start >= bank_end ) + if ( INVALID_PADDR == bank_start || region_end <= bank_start || + region_start >= bank_end ) continue; else { diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c index 24e40495a481..1c03bb7f1882 100644 --- a/xen/arch/arm/static-shmem.c +++ b/xen/arch/arm/static-shmem.c @@ -264,6 +264,12 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo, pbase = boot_shm_bank->start; psize = boot_shm_bank->size; + if ( INVALID_PADDR == pbase ) + { + printk("%pd: host physical address must be chosen by users at the moment.", d); + return -EINVAL; + } + /* * xen,shared-mem = ; * TODO: pbase is optional. 
@@ -382,7 +388,8 @@ int __init process_shm_node(const void *fdt, int node, uint32_t address_cells, { const struct fdt_property *prop, *prop_id, *prop_role; const __be32 *cell; - paddr_t paddr, gaddr, size, end; + paddr_t paddr = INVALID_PADDR; + paddr_t gaddr, size, end; struct membanks *mem = bootinfo_get_shmem(); struct shmem_membank_extra *shmem_extra = bootinfo_get_shmem_extra(); unsigned int i; @@ -437,24 +444,37 @@ int __init process_shm_node(const void *fdt, int node, uint32_t address_cells, if ( !prop ) return -ENOENT; + cell = (const __be32 *)prop->data; if ( len != dt_cells_to_size(address_cells + size_cells + address_cells) ) { - if ( len == dt_cells_to_size(size_cells + address_cells) ) - printk("fdt: host physical address must be chosen by users at the moment.\n"); - - printk("fdt: invalid `xen,shared-mem` property.\n"); - return -EINVAL; + if ( len == dt_cells_to_size(address_cells + size_cells) ) + device_tree_get_reg(&cell, address_cells, size_cells, &gaddr, + &size); + else + { + printk("fdt: invalid `xen,shared-mem` property.\n"); + return -EINVAL; + } } + else + { + device_tree_get_reg(&cell, address_cells, address_cells, &paddr, + &gaddr); + size = dt_next_cell(size_cells, &cell); - cell = (const __be32 *)prop->data; - device_tree_get_reg(&cell, address_cells, address_cells, &paddr, &gaddr); - size = dt_next_cell(size_cells, &cell); + if ( !IS_ALIGNED(paddr, PAGE_SIZE) ) + { + printk("fdt: physical address 0x%"PRIpaddr" is not suitably aligned.\n", + paddr); + return -EINVAL; + } - if ( !IS_ALIGNED(paddr, PAGE_SIZE) ) - { - printk("fdt: physical address 0x%"PRIpaddr" is not suitably aligned.\n", - paddr); - return -EINVAL; + end = paddr + size; + if ( end <= paddr ) + { + printk("fdt: static shared memory region %s overflow\n", shm_id); + return -EINVAL; + } } if ( !IS_ALIGNED(gaddr, PAGE_SIZE) ) @@ -476,41 +496,69 @@ int __init process_shm_node(const void *fdt, int node, uint32_t address_cells, return -EINVAL; } - end = paddr + size; - if ( end <= paddr ) - { - printk("fdt: static shared memory region %s overflow\n", shm_id); - return -EINVAL; - } - for ( i = 0; i < mem->nr_banks; i++ ) { /* * Meet the following check: + * when host address is provided: * 1) The shm ID matches and the region exactly match * 2) The shm ID doesn't match and the region doesn't overlap * with an existing one + * when host address is not provided: + * 1) The shm ID matches and the region size exactly match */ - if ( paddr == mem->bank[i].start && size == mem->bank[i].size ) + bool paddr_assigned = INVALID_PADDR == paddr; + bool shm_id_match = strncmp(shm_id, shmem_extra[i].shm_id, + MAX_SHM_ID_LENGTH) == 0; + if ( shm_id_match ) { - if ( strncmp(shm_id, shmem_extra[i].shm_id, - MAX_SHM_ID_LENGTH) == 0 ) + /* + * Regions have same shm_id (cases): + * 1) physical host address is supplied: + * - OK: paddr is equal and size is equal (same region) + * - Fail: paddr doesn't match or size doesn't match (there + * cannot exists two shmem regions with same shm_id) + * 2) physical host address is NOT supplied: + * - OK: size is equal (same region) + * - Fail: size is not equal (same shm_id must identify only one + * region, there can't be two different regions with same + * shm_id) + */ + bool start_match = paddr_assigned ? 
(paddr == mem->bank[i].start) : + true; + + if ( start_match && size == mem->bank[i].size ) break; else { - printk("fdt: xen,shm-id %s does not match for all the nodes using the same region.\n", + printk("fdt: different shared memory region could not share the same shm ID %s\n", shm_id); return -EINVAL; } } - else if ( strncmp(shm_id, shmem_extra[i].shm_id, - MAX_SHM_ID_LENGTH) != 0 ) - continue; else { - printk("fdt: different shared memory region could not share the same shm ID %s\n", - shm_id); - return -EINVAL; + /* + * Regions have different shm_id (cases): + * 1) physical host address is supplied: + * - OK: paddr different, or size different (case where paddr + * is equal but psize is different are wrong, but they + * are handled later when checking for overlapping) + * - Fail: paddr equal and size equal (the same region can't be + * identified with different shm_id) + * 2) physical host address is NOT supplied: + * - OK: Both have different shm_id so even with same size they + * can exists + */ + if ( !paddr_assigned || paddr != mem->bank[i].start || + size != mem->bank[i].size ) + continue; + else + { + printk("fdt: xen,shm-id %s does not match for all the nodes using the same region.\n", + shm_id); + return -EINVAL; + } } } @@ -518,7 +566,8 @@ int __init process_shm_node(const void *fdt, int node, uint32_t address_cells, { if (i < mem->max_banks) { - if ( check_reserved_regions_overlap(paddr, size) ) + if ( (paddr != INVALID_PADDR) && + check_reserved_regions_overlap(paddr, size) ) return -EINVAL; /* Static shared memory shall be reserved from any other use. */ @@ -588,13 +637,16 @@ void __init early_print_info_shmem(void) { const struct membanks *shmem = bootinfo_get_shmem(); unsigned int bank; + unsigned int printed = 0; for ( bank = 0; bank < shmem->nr_banks; bank++ ) - { - printk(" SHMEM[%u]: %"PRIpaddr" - %"PRIpaddr"\n", bank, - shmem->bank[bank].start, - shmem->bank[bank].start + shmem->bank[bank].size - 1); - } + if ( shmem->bank[bank].start != INVALID_PADDR ) + { + printk(" SHMEM[%u]: %"PRIpaddr" - %"PRIpaddr"\n", printed, + shmem->bank[bank].start, + shmem->bank[bank].start + shmem->bank[bank].size - 1); + printed++; + } } void __init init_sharedmem_pages(void) @@ -603,7 +655,8 @@ void __init init_sharedmem_pages(void) unsigned int bank; for ( bank = 0 ; bank < shmem->nr_banks; bank++ ) - init_staticmem_bank(&shmem->bank[bank]); + if ( shmem->bank[bank].start != INVALID_PADDR ) + init_staticmem_bank(&shmem->bank[bank]); } int __init remove_shm_from_rangeset(const struct kernel_info *kinfo, From patchwork Tue Apr 23 08:25:30 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Fancellu X-Patchwork-Id: 13639597 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id CCDFAC04FF8 for ; Tue, 23 Apr 2024 08:26:00 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.710385.1109608 (Exim 4.92) (envelope-from ) id 1rzBTM-0003lZ-Mi; Tue, 23 Apr 2024 08:25:52 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 710385.1109608; Tue, 23 Apr 2024 08:25:52 +0000 Received: from localhost ([127.0.0.1] 
helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1rzBTM-0003lO-IL; Tue, 23 Apr 2024 08:25:52 +0000 Received: by outflank-mailman (input) for mailman id 710385; Tue, 23 Apr 2024 08:25:50 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1rzBTK-0002TX-RL for xen-devel@lists.xenproject.org; Tue, 23 Apr 2024 08:25:50 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id 16dca818-014b-11ef-b4bb-af5377834399; Tue, 23 Apr 2024 10:25:49 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D09371063; Tue, 23 Apr 2024 01:26:16 -0700 (PDT) Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id ED13C3F64C; Tue, 23 Apr 2024 01:25:47 -0700 (PDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 16dca818-014b-11ef-b4bb-af5377834399 From: Luca Fancellu To: xen-devel@lists.xenproject.org Cc: Stefano Stabellini , Julien Grall , Bertrand Marquis , Michal Orzel , Volodymyr Babchuk Subject: [PATCH 5/7] xen/arm: Rework heap page allocation outside allocate_bank_memory Date: Tue, 23 Apr 2024 09:25:30 +0100 Message-Id: <20240423082532.776623-6-luca.fancellu@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240423082532.776623-1-luca.fancellu@arm.com> References: <20240423082532.776623-1-luca.fancellu@arm.com> MIME-Version: 1.0 The function allocate_bank_memory allocates pages from the heap and map them to the guest using guest_physmap_add_page. As a preparation work to support static shared memory bank when the host physical address is not provided, Xen needs to allocate memory from the heap, so rework allocate_bank_memory moving out the page allocation in a new function called allocate_domheap_memory. The function allocate_domheap_memory takes a callback function and a pointer to some extra information passed to the callback and this function will be called for every page allocated, until a defined size is reached. In order to keep allocate_bank_memory functionality, the callback passed to allocate_domheap_memory is a wrapper for guest_physmap_add_page. Let allocate_domheap_memory be externally visible, in order to use it in the future from the static shared memory module. Take the opportunity to change the signature of allocate_bank_memory and remove the 'struct domain' parameter, which can be retrieved from 'struct kernel_info'. No functional changes is intended. 
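As a usage illustration, a minimal sketch of the callback contract introduced here (count_pages_cb and alloc_example are made-up names for illustration only, and MB() is assumed to be Xen's usual size helper; the real callback added by this patch is guest_map_pages below):

/* Hypothetical callback: invoked once per allocated chunk of 2^order pages. */
static int __init count_pages_cb(struct domain *d, struct page_info *pg,
                                 unsigned int order, void *extra)
{
    unsigned long *total = extra;

    *total += 1UL << order;   /* record how many pages this chunk holds */

    return 0;                 /* a non-zero return aborts the allocation loop */
}

/* Hypothetical caller: allocate 16MB for d, counting the pages handed out. */
static bool __init alloc_example(struct domain *d)
{
    unsigned long pages = 0;

    return allocate_domheap_memory(d, MB(16), count_pages_cb, &pages);
}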
Signed-off-by: Luca Fancellu --- xen/arch/arm/dom0less-build.c | 4 +- xen/arch/arm/domain_build.c | 77 +++++++++++++++++-------- xen/arch/arm/include/asm/domain_build.h | 9 ++- 3 files changed, 62 insertions(+), 28 deletions(-) diff --git a/xen/arch/arm/dom0less-build.c b/xen/arch/arm/dom0less-build.c index 74f053c242f4..20ddf6f8f250 100644 --- a/xen/arch/arm/dom0less-build.c +++ b/xen/arch/arm/dom0less-build.c @@ -60,12 +60,12 @@ static void __init allocate_memory(struct domain *d, struct kernel_info *kinfo) mem->nr_banks = 0; bank_size = MIN(GUEST_RAM0_SIZE, kinfo->unassigned_mem); - if ( !allocate_bank_memory(d, kinfo, gaddr_to_gfn(GUEST_RAM0_BASE), + if ( !allocate_bank_memory(kinfo, gaddr_to_gfn(GUEST_RAM0_BASE), bank_size) ) goto fail; bank_size = MIN(GUEST_RAM1_SIZE, kinfo->unassigned_mem); - if ( !allocate_bank_memory(d, kinfo, gaddr_to_gfn(GUEST_RAM1_BASE), + if ( !allocate_bank_memory(kinfo, gaddr_to_gfn(GUEST_RAM1_BASE), bank_size) ) goto fail; diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index 0784e4c5e315..148db06b8ca3 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -417,26 +417,13 @@ static void __init allocate_memory_11(struct domain *d, } #ifdef CONFIG_DOM0LESS_BOOT -bool __init allocate_bank_memory(struct domain *d, struct kernel_info *kinfo, - gfn_t sgfn, paddr_t tot_size) +bool __init allocate_domheap_memory(struct domain *d, paddr_t tot_size, + alloc_domheap_mem_cb cb, void *extra) { - struct membanks *mem = kernel_info_get_mem(kinfo); - int res; + unsigned int max_order = UINT_MAX; struct page_info *pg; - struct membank *bank; - unsigned int max_order = ~0; - /* - * allocate_bank_memory can be called with a tot_size of zero for - * the second memory bank. It is not an error and we can safely - * avoid creating a zero-size memory bank. - */ - if ( tot_size == 0 ) - return true; - - bank = &mem->bank[mem->nr_banks]; - bank->start = gfn_to_gaddr(sgfn); - bank->size = tot_size; + BUG_ON(!cb); while ( tot_size > 0 ) { @@ -463,17 +450,61 @@ bool __init allocate_bank_memory(struct domain *d, struct kernel_info *kinfo, continue; } - res = guest_physmap_add_page(d, sgfn, page_to_mfn(pg), order); - if ( res ) - { - dprintk(XENLOG_ERR, "Failed map pages to DOMU: %d", res); + if ( cb(d, pg, order, extra) ) return false; - } - sgfn = gfn_add(sgfn, 1UL << order); tot_size -= (1ULL << (PAGE_SHIFT + order)); } + return true; +} + +static int __init guest_map_pages(struct domain *d, struct page_info *pg, + unsigned int order, void *extra) +{ + gfn_t *sgfn = (gfn_t *)extra; + int res; + + BUG_ON(!sgfn); + res = guest_physmap_add_page(d, *sgfn, page_to_mfn(pg), order); + if ( res ) + { + dprintk(XENLOG_ERR, "Failed map pages to DOMU: %d", res); + return res; + } + + *sgfn = gfn_add(*sgfn, 1UL << order); + + return 0; +} + +bool __init allocate_bank_memory(struct kernel_info *kinfo, gfn_t sgfn, + paddr_t tot_size) +{ + struct membanks *mem = kernel_info_get_mem(kinfo); + struct domain *d = kinfo->d; + struct membank *bank; + + /* + * allocate_bank_memory can be called with a tot_size of zero for + * the second memory bank. It is not an error and we can safely + * avoid creating a zero-size memory bank. + */ + if ( tot_size == 0 ) + return true; + + bank = &mem->bank[mem->nr_banks]; + bank->start = gfn_to_gaddr(sgfn); + bank->size = tot_size; + + /* + * Allocate pages from the heap until tot_size and map them to the guest + * using guest_map_pages, passing the starting gfn as extra parameter for + * the map operation. 
+ */ + if ( !allocate_domheap_memory(d, tot_size, guest_map_pages, &sgfn) ) + return false; + mem->nr_banks++; kinfo->unassigned_mem -= bank->size; diff --git a/xen/arch/arm/include/asm/domain_build.h b/xen/arch/arm/include/asm/domain_build.h index 45936212ca21..9eeb5839f6ed 100644 --- a/xen/arch/arm/include/asm/domain_build.h +++ b/xen/arch/arm/include/asm/domain_build.h @@ -5,9 +5,12 @@ #include typedef __be32 gic_interrupt_t[3]; - -bool allocate_bank_memory(struct domain *d, struct kernel_info *kinfo, - gfn_t sgfn, paddr_t tot_size); +typedef int (*alloc_domheap_mem_cb)(struct domain *d, struct page_info *pg, + unsigned int order, void *extra); +bool allocate_domheap_memory(struct domain *d, paddr_t tot_size, + alloc_domheap_mem_cb cb, void *extra); +bool allocate_bank_memory(struct kernel_info *kinfo, gfn_t sgfn, + paddr_t tot_size); int construct_domain(struct domain *d, struct kernel_info *kinfo); int domain_fdt_begin_node(void *fdt, const char *name, uint64_t unit); int make_chosen_node(const struct kernel_info *kinfo); From patchwork Tue Apr 23 08:25:31 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Fancellu X-Patchwork-Id: 13639599 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 7CDB0C10F15 for ; Tue, 23 Apr 2024 08:26:02 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.710386.1109618 (Exim 4.92) (envelope-from ) id 1rzBTO-000420-15; Tue, 23 Apr 2024 08:25:54 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 710386.1109618; Tue, 23 Apr 2024 08:25:53 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1rzBTN-00041H-Ro; Tue, 23 Apr 2024 08:25:53 +0000 Received: by outflank-mailman (input) for mailman id 710386; Tue, 23 Apr 2024 08:25:52 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1rzBTM-0002TX-27 for xen-devel@lists.xenproject.org; Tue, 23 Apr 2024 08:25:52 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id 178533e0-014b-11ef-b4bb-af5377834399; Tue, 23 Apr 2024 10:25:50 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id EB0291477; Tue, 23 Apr 2024 01:26:17 -0700 (PDT) Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 135B33F64C; Tue, 23 Apr 2024 01:25:48 -0700 (PDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 178533e0-014b-11ef-b4bb-af5377834399 From: Luca Fancellu To: xen-devel@lists.xenproject.org Cc: Stefano Stabellini , Julien Grall , Bertrand Marquis , Michal Orzel , Volodymyr Babchuk Subject: [PATCH 6/7] xen/arm: Implement the logic for static shared memory 
from Xen heap Date: Tue, 23 Apr 2024 09:25:31 +0100 Message-Id: <20240423082532.776623-7-luca.fancellu@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240423082532.776623-1-luca.fancellu@arm.com> References: <20240423082532.776623-1-luca.fancellu@arm.com> MIME-Version: 1.0 This commit implements the logic to have the static shared memory banks from the Xen heap instead of having the host physical address passed from the user. When the host physical address is not supplied, the physical memory is taken from the Xen heap using allocate_domheap_memory, the allocation needs to occur at the first handled DT node and the allocated banks need to be saved somewhere, so introduce the 'shm_heap_banks' static global variable of type 'struct meminfo' that will hold the banks allocated from the heap, its field .shmem_extra will be used to point to the bootinfo shared memory banks .shmem_extra space, so that there is not further allocation of memory and every bank in shm_heap_banks can be safely identified by the shm_id to reconstruct its traceability and if it was allocated or not. A search into 'shm_heap_banks' will reveal if the banks were allocated or not, in case the host address is not passed, and the callback given to allocate_domheap_memory will store the banks in the structure and map them to the current domain, to do that, some changes to acquire_shared_memory_bank are made to let it differentiate if the bank is from the heap and if it is, then assign_pages is called for every bank. When the bank is already allocated, for every bank allocated with the corresponding shm_id, handle_shared_mem_bank is called and the mapping are done. Signed-off-by: Luca Fancellu --- xen/arch/arm/static-shmem.c | 193 +++++++++++++++++++++++++++++------- 1 file changed, 157 insertions(+), 36 deletions(-) diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c index 1c03bb7f1882..10396ed52ff1 100644 --- a/xen/arch/arm/static-shmem.c +++ b/xen/arch/arm/static-shmem.c @@ -9,6 +9,18 @@ #include #include +typedef struct { + struct domain *d; + paddr_t gbase; + bool owner_dom_io; + const char *role_str; + struct shmem_membank_extra *bank_extra_info; +} alloc_heap_pages_cb_extra; + +static struct meminfo __initdata shm_heap_banks = { + .common.max_banks = NR_MEM_BANKS +}; + static void __init __maybe_unused build_assertions(void) { /* @@ -66,7 +78,8 @@ static bool __init is_shm_allocated_to_domio(paddr_t pbase) } static mfn_t __init acquire_shared_memory_bank(struct domain *d, - paddr_t pbase, paddr_t psize) + paddr_t pbase, paddr_t psize, + bool bank_from_heap) { mfn_t smfn; unsigned long nr_pfns; @@ -86,19 +99,31 @@ static mfn_t __init acquire_shared_memory_bank(struct domain *d, d->max_pages += nr_pfns; smfn = maddr_to_mfn(pbase); - res = acquire_domstatic_pages(d, smfn, nr_pfns, 0); + if ( bank_from_heap ) + /* + * When host address is not provided, static shared memory is + * allocated from heap and shall be assigned to owner domain. + */ + res = assign_pages(maddr_to_page(pbase), nr_pfns, d, 0); + else + res = acquire_domstatic_pages(d, smfn, nr_pfns, 0); + if ( res ) { - printk(XENLOG_ERR - "%pd: failed to acquire static memory: %d.\n", d, res); - d->max_pages -= nr_pfns; - return INVALID_MFN; + printk(XENLOG_ERR "%pd: failed to %s static memory: %d.\n", d, + bank_from_heap ? 
"assign" : "acquire", res); + goto fail; } return smfn; + + fail: + d->max_pages -= nr_pfns; + return INVALID_MFN; } static int __init assign_shared_memory(struct domain *d, paddr_t gbase, + bool bank_from_heap, const struct membank *shm_bank) { mfn_t smfn; @@ -113,10 +138,7 @@ static int __init assign_shared_memory(struct domain *d, paddr_t gbase, psize = shm_bank->size; nr_borrowers = shm_bank->shmem_extra->nr_shm_borrowers; - printk("%pd: allocate static shared memory BANK %#"PRIpaddr"-%#"PRIpaddr".\n", - d, pbase, pbase + psize); - - smfn = acquire_shared_memory_bank(d, pbase, psize); + smfn = acquire_shared_memory_bank(d, pbase, psize, bank_from_heap); if ( mfn_eq(smfn, INVALID_MFN) ) return -EINVAL; @@ -188,6 +210,7 @@ append_shm_bank_to_domain(struct shared_meminfo *kinfo_shm_mem, paddr_t start, static int __init handle_shared_mem_bank(struct domain *d, paddr_t gbase, bool owner_dom_io, const char *role_str, + bool bank_from_heap, const struct membank *shm_bank) { paddr_t pbase, psize; @@ -197,6 +220,10 @@ static int __init handle_shared_mem_bank(struct domain *d, paddr_t gbase, pbase = shm_bank->start; psize = shm_bank->size; + + printk("%pd: SHMEM map from %s: mphys 0x%"PRIpaddr" -> gphys 0x%"PRIpaddr", size 0x%"PRIpaddr"\n", + d, bank_from_heap ? "Xen heap" : "Host", pbase, gbase, psize); + /* * DOMID_IO is a fake domain and is not described in the Device-Tree. * Therefore when the owner of the shared region is DOMID_IO, we will @@ -209,7 +236,8 @@ static int __init handle_shared_mem_bank(struct domain *d, paddr_t gbase, * We found the first borrower of the region, the owner was not * specified, so they should be assigned to dom_io. */ - ret = assign_shared_memory(owner_dom_io ? dom_io : d, gbase, shm_bank); + ret = assign_shared_memory(owner_dom_io ? dom_io : d, gbase, + bank_from_heap, shm_bank); if ( ret ) return ret; } @@ -226,6 +254,40 @@ static int __init handle_shared_mem_bank(struct domain *d, paddr_t gbase, return 0; } +static int __init save_map_heap_pages(struct domain *d, struct page_info *pg, + unsigned int order, void *extra) +{ + alloc_heap_pages_cb_extra *b_extra = (alloc_heap_pages_cb_extra *)extra; + int idx = shm_heap_banks.common.nr_banks; + int ret = -ENOSPC; + + BUG_ON(!b_extra); + + if ( idx < shm_heap_banks.common.max_banks ) + { + shm_heap_banks.bank[idx].start = page_to_maddr(pg); + shm_heap_banks.bank[idx].size = (1ULL << (PAGE_SHIFT + order)); + shm_heap_banks.bank[idx].shmem_extra = b_extra->bank_extra_info; + shm_heap_banks.common.nr_banks++; + + ret = handle_shared_mem_bank(b_extra->d, b_extra->gbase, + b_extra->owner_dom_io, b_extra->role_str, + true, &shm_heap_banks.bank[idx]); + if ( !ret ) + { + /* Increment guest physical address for next mapping */ + b_extra->gbase += shm_heap_banks.bank[idx].size; + ret = 0; + } + } + + if ( ret ) + printk("Failed to allocate static shared memory from Xen heap: (%d)\n", + ret); + + return ret; +} + int __init process_shm(struct domain *d, struct kernel_info *kinfo, const struct dt_device_node *node) { @@ -264,42 +326,101 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo, pbase = boot_shm_bank->start; psize = boot_shm_bank->size; - if ( INVALID_PADDR == pbase ) - { - printk("%pd: host physical address must be chosen by users at the moment.", d); - return -EINVAL; - } + /* + * "role" property is optional and if it is defined explicitly, + * then the owner domain is not the default "dom_io" domain. 
+ */ + if ( dt_property_read_string(shm_node, "role", &role_str) == 0 ) + owner_dom_io = false; /* - * xen,shared-mem = ; - * TODO: pbase is optional. + * xen,shared-mem = <[pbase,] gbase, size>; + * pbase is optional. */ addr_cells = dt_n_addr_cells(shm_node); size_cells = dt_n_size_cells(shm_node); prop = dt_find_property(shm_node, "xen,shared-mem", NULL); BUG_ON(!prop); cells = (const __be32 *)prop->value; - gbase = dt_read_paddr(cells + addr_cells, addr_cells); - for ( i = 0; i < PFN_DOWN(psize); i++ ) - if ( !mfn_valid(mfn_add(maddr_to_mfn(pbase), i)) ) - { - printk("%pd: invalid physical address 0x%"PRI_mfn"\n", - d, mfn_x(mfn_add(maddr_to_mfn(pbase), i))); - return -EINVAL; - } + if ( pbase != INVALID_PADDR ) + { + /* guest phys address is after host phys address */ + gbase = dt_read_paddr(cells + addr_cells, addr_cells); + + for ( i = 0; i < PFN_DOWN(psize); i++ ) + if ( !mfn_valid(mfn_add(maddr_to_mfn(pbase), i)) ) + { + printk("%pd: invalid physical address 0x%"PRI_mfn"\n", + d, mfn_x(mfn_add(maddr_to_mfn(pbase), i))); + return -EINVAL; + } + + /* The host physical address is supplied by the user */ + ret = handle_shared_mem_bank(d, gbase, owner_dom_io, role_str, + false, boot_shm_bank); + if ( ret ) + return ret; + } + else + { + /* + * The host physical address is not supplied by the user, so it + * means that the banks needs to be allocated from the Xen heap, + * look into the already allocated banks from the heap. + */ + const struct membank *alloc_bank = find_shm(&shm_heap_banks.common, + shm_id); - /* - * "role" property is optional and if it is defined explicitly, - * then the owner domain is not the default "dom_io" domain. - */ - if ( dt_property_read_string(shm_node, "role", &role_str) == 0 ) - owner_dom_io = false; + /* guest phys address is right at the beginning */ + gbase = dt_read_paddr(cells, addr_cells); - ret = handle_shared_mem_bank(d, gbase, owner_dom_io, role_str, - boot_shm_bank); - if ( ret ) - return ret; + if ( !alloc_bank ) + { + alloc_heap_pages_cb_extra cb_arg = { d, gbase, owner_dom_io, + role_str, boot_shm_bank->shmem_extra }; + + /* shm_id identified bank is not yet allocated */ + if ( !allocate_domheap_memory(NULL, psize, save_map_heap_pages, + &cb_arg) ) + { + printk(XENLOG_ERR + "Failed to allocate (%"PRIpaddr"MB) pages as static shared memory from heap\n", + psize >> 20); + return -EINVAL; + } + } + else + { + /* shm_id identified bank is already allocated */ + const struct membank *end_bank = + &shm_heap_banks.bank[shm_heap_banks.common.nr_banks]; + paddr_t gbase_bank = gbase; + + /* + * Static shared memory banks that are taken from the Xen heap + * are allocated sequentially in shm_heap_banks, so starting + * from the first bank found identified by shm_id, the code can + * just advance by one bank at the time until it reaches the end + * of the array or it finds another bank NOT identified by + * shm_id + */ + for ( ; alloc_bank < end_bank; alloc_bank++ ) + { + if ( strncmp(shm_id, alloc_bank->shmem_extra->shm_id, + MAX_SHM_ID_LENGTH) != 0 ) + break; + + ret = handle_shared_mem_bank(d, gbase_bank, owner_dom_io, + role_str, true, alloc_bank); + if ( ret ) + return ret; + + /* Increment guest physical address for next mapping */ + gbase_bank += alloc_bank->size; + } + } + } /* * Record static shared memory region info for later setting From patchwork Tue Apr 23 08:25:32 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Luca Fancellu X-Patchwork-Id: 13639598 Return-Path: 
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 12792C4345F for ; Tue, 23 Apr 2024 08:26:02 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.710387.1109624 (Exim 4.92) (envelope-from ) id 1rzBTO-00049r-Mv; Tue, 23 Apr 2024 08:25:54 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 710387.1109624; Tue, 23 Apr 2024 08:25:54 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1rzBTO-00048O-H3; Tue, 23 Apr 2024 08:25:54 +0000 Received: by outflank-mailman (input) for mailman id 710387; Tue, 23 Apr 2024 08:25:53 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1rzBTN-0002TX-9s for xen-devel@lists.xenproject.org; Tue, 23 Apr 2024 08:25:53 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id 184fe73e-014b-11ef-b4bb-af5377834399; Tue, 23 Apr 2024 10:25:51 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 46D70339; Tue, 23 Apr 2024 01:26:19 -0700 (PDT) Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.43]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 2DCEF3F64C; Tue, 23 Apr 2024 01:25:50 -0700 (PDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 184fe73e-014b-11ef-b4bb-af5377834399 From: Luca Fancellu To: xen-devel@lists.xenproject.org Cc: Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Michal Orzel , Volodymyr Babchuk , Penny Zheng Subject: [PATCH 7/7] xen/docs: Describe static shared memory when host address is not provided Date: Tue, 23 Apr 2024 09:25:32 +0100 Message-Id: <20240423082532.776623-8-luca.fancellu@arm.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240423082532.776623-1-luca.fancellu@arm.com> References: <20240423082532.776623-1-luca.fancellu@arm.com> MIME-Version: 1.0 From: Penny Zheng This commit describe the new scenario where host address is not provided in "xen,shared-mem" property and a new example is added to the page to explain in details. Take the occasion to fix some typos in the page. Signed-off-by: Penny Zheng Signed-off-by: Luca Fancellu Reviewed-by: Michal Orzel --- v1: - patch from https://patchwork.kernel.org/project/xen-devel/patch/20231206090623.1932275-10-Penny.Zheng@arm.com/ with some changes in the commit message. --- docs/misc/arm/device-tree/booting.txt | 52 ++++++++++++++++++++------- 1 file changed, 39 insertions(+), 13 deletions(-) diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt index bbd955e9c2f6..ac4bad6fe5e0 100644 --- a/docs/misc/arm/device-tree/booting.txt +++ b/docs/misc/arm/device-tree/booting.txt @@ -590,7 +590,7 @@ communication. 
An array takes a physical address, which is the base address of the shared memory region in host physical address space, a size, and a guest physical address, as the target address of the mapping. - e.g. xen,shared-mem = < [host physical address] [guest address] [size] > + e.g. xen,shared-mem = < [host physical address] [guest address] [size] >; It shall also meet the following criteria: 1) If the SHM ID matches with an existing region, the address range of the @@ -601,8 +601,8 @@ communication. The number of cells for the host address (and size) is the same as the guest pseudo-physical address and they are inherited from the parent node. - Host physical address is optional, when missing Xen decides the location - (currently unimplemented). + Host physical address is optional, when missing Xen decides the location. + e.g. xen,shared-mem = < [guest address] [size] >; - role (Optional) @@ -629,7 +629,7 @@ chosen { role = "owner"; xen,shm-id = "my-shared-mem-0"; xen,shared-mem = <0x10000000 0x10000000 0x10000000>; - } + }; domU1 { compatible = "xen,domain"; @@ -640,25 +640,36 @@ chosen { vpl011; /* - * shared memory region identified as 0x0(xen,shm-id = <0x0>) - * is shared between Dom0 and DomU1. + * shared memory region "my-shared-mem-0" is shared + * between Dom0 and DomU1. */ domU1-shared-mem@10000000 { compatible = "xen,domain-shared-memory-v1"; role = "borrower"; xen,shm-id = "my-shared-mem-0"; xen,shared-mem = <0x10000000 0x50000000 0x10000000>; - } + }; /* - * shared memory region identified as 0x1(xen,shm-id = <0x1>) - * is shared between DomU1 and DomU2. + * shared memory region "my-shared-mem-1" is shared between + * DomU1 and DomU2. */ domU1-shared-mem@50000000 { compatible = "xen,domain-shared-memory-v1"; xen,shm-id = "my-shared-mem-1"; xen,shared-mem = <0x50000000 0x60000000 0x20000000>; - } + }; + + /* + * shared memory region "my-shared-mem-2" is shared between + * DomU1 and DomU2. + */ + domU1-shared-mem-2 { + compatible = "xen,domain-shared-memory-v1"; + xen,shm-id = "my-shared-mem-2"; + role = "owner"; + xen,shared-mem = <0x80000000 0x20000000>; + }; ...... @@ -672,14 +683,21 @@ chosen { cpus = <1>; /* - * shared memory region identified as 0x1(xen,shm-id = <0x1>) - * is shared between domU1 and domU2. + * shared memory region "my-shared-mem-1" is shared between + * domU1 and domU2. */ domU2-shared-mem@50000000 { compatible = "xen,domain-shared-memory-v1"; xen,shm-id = "my-shared-mem-1"; xen,shared-mem = <0x50000000 0x70000000 0x20000000>; - } + }; + + domU2-shared-mem-2 { + compatible = "xen,domain-shared-memory-v1"; + xen,shm-id = "my-shared-mem-2"; + role = "borrower"; + xen,shared-mem = <0x90000000 0x20000000>; + }; ...... }; @@ -699,3 +717,11 @@ shared between DomU1 and DomU2. It will get mapped at 0x60000000 in DomU1 guest physical address space, and at 0x70000000 in DomU2 guest physical address space. DomU1 and DomU2 are both the borrower domain, the owner domain is the default owner domain DOMID_IO. + +For the static shared memory region "my-shared-mem-2", since host physical +address is not provided by user, Xen will automatically allocate 512MB +from heap as static shared memory to be shared between DomU1 and DomU2. +The automatically allocated static shared memory will get mapped at +0x80000000 in DomU1 guest physical address space, and at 0x90000000 in DomU2 +guest physical address space. DomU1 is explicitly defined as the owner domain, +and DomU2 is the borrower domain.
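For quick reference, the cells in the two "my-shared-mem-2" nodes above decode as follows (plain arithmetic on the values already shown; there is no host address cell because Xen picks the backing memory):

/*
 * DomU1 (owner):    xen,shared-mem = <0x80000000 0x20000000>;
 * DomU2 (borrower): xen,shared-mem = <0x90000000 0x20000000>;
 *
 *   size = 0x20000000 bytes = 512MB
 *   DomU1 guest physical range: 0x80000000 - 0x9FFFFFFF
 *   DomU2 guest physical range: 0x90000000 - 0xAFFFFFFF
 *
 * The host physical range backing both mappings is whatever Xen allocates
 * from its heap at boot.
 */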