
[v2,13/13] xen/arm: List static shared memory regions as /memory nodes

Message ID 20240409114543.3332150-14-luca.fancellu@arm.com (mailing list archive)
State Superseded
Series Static shared memory followup v2 - pt1

Commit Message

Luca Fancellu April 9, 2024, 11:45 a.m. UTC
Currently Xen does not export the static shared memory regions
to the device tree as /memory nodes; this commit fixes that.

The static shared memory banks can be part of the memory range
available for the domain, so if they overlap the normal memory
banks, they need to be merged together in order to produce a
/memory node with non-overlapping ranges in 'reg'.

Given that make_memory_node now needs a 'struct kernel_info'
parameter in order to call the new function
shm_mem_node_merge_reg_range, take the occasion to remove the
unused struct domain parameter.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
v2:
 - try to use make_memory_node, don't add overlapping ranges of
   memory, commit message changed.
v1:
 - new patch
---
---
 xen/arch/arm/dom0less-build.c           |  2 +-
 xen/arch/arm/domain_build.c             | 38 ++++++++++++++++++-------
 xen/arch/arm/include/asm/domain_build.h |  2 +-
 xen/arch/arm/include/asm/static-shmem.h | 18 ++++++++++++
 xen/arch/arm/static-shmem.c             | 26 +++++++++++++++++
 5 files changed, 73 insertions(+), 13 deletions(-)
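The merge described in the commit message amounts to collapsing overlapping (start, size) banks into non-overlapping ranges. Below is a minimal sketch of that idea in plain C; the struct and function names are invented for illustration and are not the types or helpers the patch actually uses:

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative bank representation, loosely modelled on a membank. */
struct bank { uint64_t start, size; };

static int cmp_bank(const void *a, const void *b)
{
    const struct bank *x = a, *y = b;

    return (x->start > y->start) - (x->start < y->start);
}

/*
 * Sort banks by start address, then fold any bank that overlaps or
 * touches its predecessor into a single range. Returns the new count.
 */
static unsigned int merge_banks(struct bank *b, unsigned int n)
{
    unsigned int i, out = 0;

    if ( n == 0 )
        return 0;

    qsort(b, n, sizeof(*b), cmp_bank);

    for ( i = 1; i < n; i++ )
    {
        uint64_t end = b[out].start + b[out].size;

        if ( b[i].start <= end )  /* overlap (or adjacency): extend */
        {
            uint64_t new_end = b[i].start + b[i].size;

            if ( new_end > end )
                b[out].size = new_end - b[out].start;
        }
        else
            b[++out] = b[i];
    }

    return out + 1;
}
```

For instance, a RAM bank at 0x0 of size 0x8000 and a shared memory bank at 0x0 of size 0x4000 collapse into a single 0x0/0x8000 range, which is what a well-formed /memory 'reg' property requires.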

Comments

Julien Grall April 15, 2024, 6:41 p.m. UTC | #1
Hi Luca,

On 09/04/2024 12:45, Luca Fancellu wrote:
> Currently Xen is not exporting the static shared memory regions
> to the device tree as /memory node, this commit is fixing this
> issue.
> 
> The static shared memory banks can be part of the memory range
> available for the domain, so if they are overlapping with the
> normal memory banks, they need to be merged together in order
> to produce a /memory node with non overlapping ranges in 'reg'.

Before reviewing the code in more detail, I would like to understand a 
bit more about the use case and whether it should be valid.

 From my understanding, the case you are trying to prevent is the 
following setup:
   1. The Guest Physical region 0x0000 to 0x8000 is used for RAM
   2. The Guest Physical region 0x0000 to 0x4000 is used for static memory

The underlying Host Physical regions may be different. Xen doesn't 
guarantee in which order the regions will be mapped, so whether the 
overlapped region will point to the memory or the shared region is 
unknown. So nothing good will happen to the guest.

Did I understand correctly? If so, shouldn't this be a configuration we 
should forbid?

Cheers,
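For reference, the overlap in the scenario above can be detected with the usual half-open interval check; this helper is purely illustrative and not part of the patch:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * True if the guest-physical ranges [a_start, a_start + a_size) and
 * [b_start, b_start + b_size) intersect. In the scenario above, RAM
 * [0x0000, 0x8000) and static shared memory [0x0000, 0x4000) overlap,
 * so the final mapping depends on the (unguaranteed) mapping order.
 */
static bool ranges_overlap(uint64_t a_start, uint64_t a_size,
                           uint64_t b_start, uint64_t b_size)
{
    return a_start < b_start + b_size && b_start < a_start + a_size;
}
```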
Luca Fancellu April 16, 2024, 6:27 a.m. UTC | #2
Hi Julien,

> On 15 Apr 2024, at 19:41, Julien Grall <julien@xen.org> wrote:
> 
> Hi Luca,
> 
> On 09/04/2024 12:45, Luca Fancellu wrote:
>> Currently Xen is not exporting the static shared memory regions
>> to the device tree as /memory node, this commit is fixing this
>> issue.
>> The static shared memory banks can be part of the memory range
>> available for the domain, so if they are overlapping with the
>> normal memory banks, they need to be merged together in order
>> to produce a /memory node with non overlapping ranges in 'reg'.
> 
> Before reviewing the code in more details, I would like to understand a bit more the use case and whether it should be valid.
> 
> From my understanding, the case you are trying to prevent is the following setup:
>  1. The Guest Physical region 0x0000 to 0x8000 is used for RAM
>  2. The Guest Physical region 0x0000 to 0x4000 is used for static memory

So far, it was possible to map guest physical regions inside the memory range given to the guest,
so the above configuration was allowed and the underlying host physical regions were of course
different and enforced with checks. So I’m not trying to prevent this behaviour, however ...

> 
> The underlying Host Physical regions may be different. Xen doesn't guarantee in which order the regions will be mapped, So whether the overlapped region will point to the memory or the shared region is unknown (we don't guarantee the order of the mapping). So nothing good will happen to the guest.

... now here I don’t understand if this was wrong from the beginning or not; shall we also enforce that
guest physical regions for static shared memory are outside the memory given to the guest?

> 
> Did I understand correctly? If so, shouldn't this be a configuration we should forbid?
> 
> Cheers,
> 
> -- 
> Julien Grall

Cheers,
Luca
Julien Grall April 16, 2024, 8:50 a.m. UTC | #3
On 16/04/2024 07:27, Luca Fancellu wrote:
> Hi Julien,

Hi Luca,

> 
>> On 15 Apr 2024, at 19:41, Julien Grall <julien@xen.org> wrote:
>>
>> Hi Luca,
>>
>> On 09/04/2024 12:45, Luca Fancellu wrote:
>>> Currently Xen is not exporting the static shared memory regions
>>> to the device tree as /memory node, this commit is fixing this
>>> issue.
>>> The static shared memory banks can be part of the memory range
>>> available for the domain, so if they are overlapping with the
>>> normal memory banks, they need to be merged together in order
>>> to produce a /memory node with non overlapping ranges in 'reg'.
>>
>> Before reviewing the code in more details, I would like to understand a bit more the use case and whether it should be valid.
>>
>>  From my understanding, the case you are trying to prevent is the following setup:
>>   1. The Guest Physical region 0x0000 to 0x8000 is used for RAM
>>   2. The Guest Physical region 0x0000 to 0x4000 is used for static memory
> 
> So far, it was possible to map guest physical regions inside the memory range given to the guest,
> so the above configuration was allowed and the underlying host physical regions were of course
> different and enforced with checks. So I’m not trying to prevent this behaviour, however ...
> 
>>
>> The underlying Host Physical regions may be different. Xen doesn't guarantee in which order the regions will be mapped, So whether the overlapped region will point to the memory or the shared region is unknown (we don't guarantee the order of the mapping). So nothing good will happen to the guest.
> 
> ... now here I don’t understand if this was wrong from the beginning or not, shall we enforce also that
> guest physical regions for static shared memory are outside the memory given to the guest?

Nothing good will happen if you are trying to overwrite mappings. So I 
think this should be enforced. However, this is a more general problem. 
At the moment, this is pretty much a mess because you can overwrite any 
mapping (e.g. map MMIO on top of the RAM).

I think the easiest way to enforce this is to do it in the P2M code like 
x86 does for certain mappings.

Anyway, I don't think the problem should be solved here or by you (this 
is likely going to be a can of worms). For now, I would consider to 
simply drop the patches that are trying to do the merge.

Any thoughts?

Cheers,
Michal Orzel April 16, 2024, 8:57 a.m. UTC | #4
Hi Julien,

On 16/04/2024 10:50, Julien Grall wrote:
> 
> 
> On 16/04/2024 07:27, Luca Fancellu wrote:
>> Hi Julien,
> 
> Hi Luca,
> 
>>
>>> On 15 Apr 2024, at 19:41, Julien Grall <julien@xen.org> wrote:
>>>
>>> Hi Luca,
>>>
>>> On 09/04/2024 12:45, Luca Fancellu wrote:
>>>> Currently Xen is not exporting the static shared memory regions
>>>> to the device tree as /memory node, this commit is fixing this
>>>> issue.
>>>> The static shared memory banks can be part of the memory range
>>>> available for the domain, so if they are overlapping with the
>>>> normal memory banks, they need to be merged together in order
>>>> to produce a /memory node with non overlapping ranges in 'reg'.
>>>
>>> Before reviewing the code in more details, I would like to understand a bit more the use case and whether it should be valid.
>>>
>>>  From my understanding, the case you are trying to prevent is the following setup:
>>>   1. The Guest Physical region 0x0000 to 0x8000 is used for RAM
>>>   2. The Guest Physical region 0x0000 to 0x4000 is used for static memory
>>
>> So far, it was possible to map guest physical regions inside the memory range given to the guest,
>> so the above configuration was allowed and the underlying host physical regions were of course
>> different and enforced with checks. So I’m not trying to prevent this behaviour, however ...
>>
>>>
>>> The underlying Host Physical regions may be different. Xen doesn't guarantee in which order the regions will be mapped, So whether the overlapped region will point to the memory or the shared region is unknown (we don't guarantee the order of the mapping). So nothing good will happen to the guest.
>>
>> ... now here I don’t understand if this was wrong from the beginning or not, shall we enforce also that
>> guest physical regions for static shared memory are outside the memory given to the guest?
> 
> Nothing good will happen if you are trying to overwrite mappings. So I
> think this should be enforced. However, this is a more general problem.
> At the moment, this is pretty much as mess because you can overwrite any
> mapping (e.g. map MMIO on top of the RAM).
> 
> I think the easiest way to enforce is to do it in the P2M code like x86
> does for certain mappings.
> 
> Anyway, I don't think the problem should be solved here or by you (this
> is likely going to be a can of worms). For now, I would consider to
> simply drop the patches that are trying to do the merge.
> 
> Any thoughts?
I agree with your analysis. For now, let's drop this patch.
@Luca: This means I reviewed your series completely and you can send a respin.

~Michal
Luca Fancellu April 16, 2024, 8:59 a.m. UTC | #5
> On 16 Apr 2024, at 09:50, Julien Grall <julien@xen.org> wrote:
> 
> 
> 
> On 16/04/2024 07:27, Luca Fancellu wrote:
>> Hi Julien,
> 
> Hi Luca,
> 
>>> On 15 Apr 2024, at 19:41, Julien Grall <julien@xen.org> wrote:
>>> 
>>> Hi Luca,
>>> 
>>> On 09/04/2024 12:45, Luca Fancellu wrote:
>>>> Currently Xen is not exporting the static shared memory regions
>>>> to the device tree as /memory node, this commit is fixing this
>>>> issue.
>>>> The static shared memory banks can be part of the memory range
>>>> available for the domain, so if they are overlapping with the
>>>> normal memory banks, they need to be merged together in order
>>>> to produce a /memory node with non overlapping ranges in 'reg'.
>>> 
>>> Before reviewing the code in more details, I would like to understand a bit more the use case and whether it should be valid.
>>> 
>>> From my understanding, the case you are trying to prevent is the following setup:
>>>  1. The Guest Physical region 0x0000 to 0x8000 is used for RAM
>>>  2. The Guest Physical region 0x0000 to 0x4000 is used for static memory
>> So far, it was possible to map guest physical regions inside the memory range given to the guest,
>> so the above configuration was allowed and the underlying host physical regions were of course
>> different and enforced with checks. So I’m not trying to prevent this behaviour, however ...
>>> 
>>> The underlying Host Physical regions may be different. Xen doesn't guarantee in which order the regions will be mapped, So whether the overlapped region will point to the memory or the shared region is unknown (we don't guarantee the order of the mapping). So nothing good will happen to the guest.
>> ... now here I don’t understand if this was wrong from the beginning or not, shall we enforce also that
>> guest physical regions for static shared memory are outside the memory given to the guest?
> 
> Nothing good will happen if you are trying to overwrite mappings. So I think this should be enforced. However, this is a more general problem. At the moment, this is pretty much as mess because you can overwrite any mapping (e.g. map MMIO on top of the RAM).
> 
> I think the easiest way to enforce is to do it in the P2M code like x86 does for certain mappings.
> 
> Anyway, I don't think the problem should be solved here or by you (this is likely going to be a can of worms). For now, I would consider to simply drop the patches that are trying to do the merge.
> 
> Any thoughts?

Yes, I agree with you, I’ll drop the patch that tries to do the merge. I was thinking about checking that the guest phys static mem region is
inside these boundaries:

#define GUEST_RAM_BANK_BASES { GUEST_RAM0_BASE, GUEST_RAM1_BASE }
#define GUEST_RAM_BANK_SIZES { GUEST_RAM0_SIZE, GUEST_RAM1_SIZE }

and also that it doesn’t overlap with (struct kernel_info).mem; does it sound right to you?

So in this patch I will only populate the /memory nodes with the static shared memory banks.
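A rough sketch of the boundary check proposed above, i.e. requiring a guest-physical shared memory region to sit wholly inside one of the guest RAM banks. The GUEST_RAM* macros exist in Xen's public arch-arm headers, but the values and the helper below are illustrative assumptions, not the real definitions:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative stand-ins for Xen's guest memory layout macros; the real
 * values live in the public arch-arm header and may differ.
 */
#define GUEST_RAM0_BASE 0x40000000ULL
#define GUEST_RAM0_SIZE 0x40000000ULL
#define GUEST_RAM1_BASE 0x0200000000ULL
#define GUEST_RAM1_SIZE 0xfe00000000ULL

static const uint64_t ram_bases[] = { GUEST_RAM0_BASE, GUEST_RAM1_BASE };
static const uint64_t ram_sizes[] = { GUEST_RAM0_SIZE, GUEST_RAM1_SIZE };

/* True if [start, start + size) lies wholly inside one of the RAM banks. */
static bool inside_guest_ram(uint64_t start, uint64_t size)
{
    unsigned int i;

    for ( i = 0; i < 2; i++ )
        if ( start >= ram_bases[i] &&
             start + size <= ram_bases[i] + ram_sizes[i] )
            return true;

    return false;
}
```

As Julien notes in the next reply, such a check alone would not catch overlaps with other mapped regions (e.g. the GIC Distributor), which is why a generic P2M-level check was preferred.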


> 
> Cheers,
> 
> -- 
> Julien Grall
Julien Grall April 16, 2024, 9:06 a.m. UTC | #6
On 16/04/2024 09:59, Luca Fancellu wrote:
> 
> 
>> On 16 Apr 2024, at 09:50, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 16/04/2024 07:27, Luca Fancellu wrote:
>>> Hi Julien,
>>
>> Hi Luca,
>>
>>>> On 15 Apr 2024, at 19:41, Julien Grall <julien@xen.org> wrote:
>>>>
>>>> Hi Luca,
>>>>
>>>> On 09/04/2024 12:45, Luca Fancellu wrote:
>>>>> Currently Xen is not exporting the static shared memory regions
>>>>> to the device tree as /memory node, this commit is fixing this
>>>>> issue.
>>>>> The static shared memory banks can be part of the memory range
>>>>> available for the domain, so if they are overlapping with the
>>>>> normal memory banks, they need to be merged together in order
>>>>> to produce a /memory node with non overlapping ranges in 'reg'.
>>>>
>>>> Before reviewing the code in more details, I would like to understand a bit more the use case and whether it should be valid.
>>>>
>>>>  From my understanding, the case you are trying to prevent is the following setup:
>>>>   1. The Guest Physical region 0x0000 to 0x8000 is used for RAM
>>>>   2. The Guest Physical region 0x0000 to 0x4000 is used for static memory
>>> So far, it was possible to map guest physical regions inside the memory range given to the guest,
>>> so the above configuration was allowed and the underlying host physical regions were of course
>>> different and enforced with checks. So I’m not trying to prevent this behaviour, however ...
>>>>
>>>> The underlying Host Physical regions may be different. Xen doesn't guarantee in which order the regions will be mapped, So whether the overlapped region will point to the memory or the shared region is unknown (we don't guarantee the order of the mapping). So nothing good will happen to the guest.
>>> ... now here I don’t understand if this was wrong from the beginning or not, shall we enforce also that
>>> guest physical regions for static shared memory are outside the memory given to the guest?
>>
>> Nothing good will happen if you are trying to overwrite mappings. So I think this should be enforced. However, this is a more general problem. At the moment, this is pretty much as mess because you can overwrite any mapping (e.g. map MMIO on top of the RAM).
>>
>> I think the easiest way to enforce is to do it in the P2M code like x86 does for certain mappings.
>>
>> Anyway, I don't think the problem should be solved here or by you (this is likely going to be a can of worms). For now, I would consider to simply drop the patches that are trying to do the merge.
>>
>> Any thoughts?
> 
> Yes I agree with you, I’ll drop the patch that tries to do the merge, I was thinking about checking that the guest phys static mem region is
> inside these boundaries:
> 
> #define GUEST_RAM_BANK_BASES { GUEST_RAM0_BASE, GUEST_RAM1_BASE }
> #define GUEST_RAM_BANK_SIZES { GUEST_RAM0_SIZE, GUEST_RAM1_SIZE }
> 
> and also that doesn’t overlap with (struct kernel_info).mem, does it sounds right to you?

I don't fully understand what you want to do. But as I wrote before, 
overlaps can happen with many different regions (what if you try to use 
the GIC Distributor region for the shared memory?).

So if we want some overlaps check, then it has to be generic.

Cheers,
Luca Fancellu April 16, 2024, 10:50 a.m. UTC | #7
> On 16 Apr 2024, at 10:06, Julien Grall <julien@xen.org> wrote:
> 
> 
> 
> On 16/04/2024 09:59, Luca Fancellu wrote:
>>> On 16 Apr 2024, at 09:50, Julien Grall <julien@xen.org> wrote:
>>> 
>>> 
>>> 
>>> On 16/04/2024 07:27, Luca Fancellu wrote:
>>>> Hi Julien,
>>> 
>>> Hi Luca,
>>> 
>>>>> On 15 Apr 2024, at 19:41, Julien Grall <julien@xen.org> wrote:
>>>>> 
>>>>> Hi Luca,
>>>>> 
>>>>> On 09/04/2024 12:45, Luca Fancellu wrote:
>>>>>> Currently Xen is not exporting the static shared memory regions
>>>>>> to the device tree as /memory node, this commit is fixing this
>>>>>> issue.
>>>>>> The static shared memory banks can be part of the memory range
>>>>>> available for the domain, so if they are overlapping with the
>>>>>> normal memory banks, they need to be merged together in order
>>>>>> to produce a /memory node with non overlapping ranges in 'reg'.
>>>>> 
>>>>> Before reviewing the code in more details, I would like to understand a bit more the use case and whether it should be valid.
>>>>> 
>>>>> From my understanding, the case you are trying to prevent is the following setup:
>>>>>  1. The Guest Physical region 0x0000 to 0x8000 is used for RAM
>>>>>  2. The Guest Physical region 0x0000 to 0x4000 is used for static memory
>>>> So far, it was possible to map guest physical regions inside the memory range given to the guest,
>>>> so the above configuration was allowed and the underlying host physical regions were of course
>>>> different and enforced with checks. So I’m not trying to prevent this behaviour, however ...
>>>>> 
>>>>> The underlying Host Physical regions may be different. Xen doesn't guarantee in which order the regions will be mapped, So whether the overlapped region will point to the memory or the shared region is unknown (we don't guarantee the order of the mapping). So nothing good will happen to the guest.
>>>> ... now here I don’t understand if this was wrong from the beginning or not, shall we enforce also that
>>>> guest physical regions for static shared memory are outside the memory given to the guest?
>>> 
>>> Nothing good will happen if you are trying to overwrite mappings. So I think this should be enforced. However, this is a more general problem. At the moment, this is pretty much as mess because you can overwrite any mapping (e.g. map MMIO on top of the RAM).
>>> 
>>> I think the easiest way to enforce is to do it in the P2M code like x86 does for certain mappings.
>>> 
>>> Anyway, I don't think the problem should be solved here or by you (this is likely going to be a can of worms). For now, I would consider to simply drop the patches that are trying to do the merge.
>>> 
>>> Any thoughts?
>> Yes I agree with you, I’ll drop the patch that tries to do the merge, I was thinking about checking that the guest phys static mem region is
>> inside these boundaries:
>> #define GUEST_RAM_BANK_BASES { GUEST_RAM0_BASE, GUEST_RAM1_BASE }
>> #define GUEST_RAM_BANK_SIZES { GUEST_RAM0_SIZE, GUEST_RAM1_SIZE }
>> and also that doesn’t overlap with (struct kernel_info).mem, does it sounds right to you?
> 
> I don't fully understand what you want to do. But as I wrote before, the overlaps happen with many different regions (what if you try to use the GIC Distributor regions for the shared memory?).
> 
> So if we want some overlaps check, then it has to be generic.

After a chat on Matrix I now understand what you mean, thanks; I will just drop patch 12 and update this one.

Cheers,
Luca

Patch

diff --git a/xen/arch/arm/dom0less-build.c b/xen/arch/arm/dom0less-build.c
index 51cf03221d56..74f053c242f4 100644
--- a/xen/arch/arm/dom0less-build.c
+++ b/xen/arch/arm/dom0less-build.c
@@ -642,7 +642,7 @@  static int __init prepare_dtb_domU(struct domain *d, struct kernel_info *kinfo)
     if ( ret )
         goto err;
 
-    ret = make_memory_node(d, kinfo->fdt, addrcells, sizecells,
+    ret = make_memory_node(kinfo, addrcells, sizecells,
                            kernel_info_get_mem(kinfo));
     if ( ret )
         goto err;
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 71eebfcf7e03..f9b749d0a068 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -757,15 +757,14 @@  int __init domain_fdt_begin_node(void *fdt, const char *name, uint64_t unit)
     return fdt_begin_node(fdt, buf);
 }
 
-int __init make_memory_node(const struct domain *d,
-                            void *fdt,
-                            int addrcells, int sizecells,
-                            const struct membanks *mem)
+int __init make_memory_node(const struct kernel_info *kinfo, int addrcells,
+                            int sizecells, const struct membanks *mem)
 {
+    void *fdt = kinfo->fdt;
     unsigned int i;
     int res, reg_size = addrcells + sizecells;
     int nr_cells = 0;
-    __be32 reg[NR_MEM_BANKS * 4 /* Worst case addrcells + sizecells */];
+    __be32 reg[DT_MEM_NODE_REG_RANGE_SIZE];
     __be32 *cells;
 
     if ( mem->nr_banks == 0 )
@@ -798,14 +797,32 @@  int __init make_memory_node(const struct domain *d,
         if ( mem->bank[i].type == MEMBANK_STATIC_DOMAIN )
             continue;
 
-        dt_dprintk("  Bank %d: %#"PRIx64"->%#"PRIx64"\n",
-                   i, start, start + size);
-
         nr_cells += reg_size;
         BUG_ON(nr_cells >= ARRAY_SIZE(reg));
         dt_child_set_range(&cells, addrcells, sizecells, start, size);
     }
 
+    /*
+     * static shared memory banks need to be listed as /memory node, so when
+     * this function is handling the normal memory, merge the banks.
+     */
+    if ( mem == kernel_info_get_mem_const(kinfo) )
+    {
+        res = shm_mem_node_merge_reg_range(kinfo, reg, &nr_cells, addrcells,
+                                           sizecells);
+        if ( res )
+            return res;
+    }
+
+    for ( cells = reg, i = 0; cells < reg + nr_cells; i++, cells += reg_size )
+    {
+        u64 start = dt_read_number(cells, addrcells);
+        u64 size = dt_read_number(cells + addrcells, sizecells);
+
+        dt_dprintk("  Bank %d: %#"PRIx64"->%#"PRIx64"\n",
+                   i, start, start + size);
+    }
+
     dt_dprintk("(reg size %d, nr cells %d)\n", reg_size, nr_cells);
 
     res = fdt_property(fdt, "reg", reg, nr_cells * sizeof(*reg));
@@ -1771,7 +1788,7 @@  static int __init handle_node(struct domain *d, struct kernel_info *kinfo,
         if ( res )
             return res;
 
-        res = make_memory_node(d, kinfo->fdt, addrcells, sizecells,
+        res = make_memory_node(kinfo, addrcells, sizecells,
                                kernel_info_get_mem(kinfo));
         if ( res )
             return res;
@@ -1782,8 +1799,7 @@  static int __init handle_node(struct domain *d, struct kernel_info *kinfo,
          */
         if ( reserved_mem->nr_banks > 0 )
         {
-            res = make_memory_node(d, kinfo->fdt, addrcells, sizecells,
-                                   reserved_mem);
+            res = make_memory_node(kinfo, addrcells, sizecells, reserved_mem);
             if ( res )
                 return res;
         }
diff --git a/xen/arch/arm/include/asm/domain_build.h b/xen/arch/arm/include/asm/domain_build.h
index 026d975da28e..45936212ca21 100644
--- a/xen/arch/arm/include/asm/domain_build.h
+++ b/xen/arch/arm/include/asm/domain_build.h
@@ -14,7 +14,7 @@  int make_chosen_node(const struct kernel_info *kinfo);
 int make_cpus_node(const struct domain *d, void *fdt);
 int make_hypervisor_node(struct domain *d, const struct kernel_info *kinfo,
                          int addrcells, int sizecells);
-int make_memory_node(const struct domain *d, void *fdt, int addrcells,
+int make_memory_node(const struct kernel_info *kinfo, int addrcells,
                      int sizecells, const struct membanks *mem);
 int make_psci_node(void *fdt);
 int make_timer_node(const struct kernel_info *kinfo);
diff --git a/xen/arch/arm/include/asm/static-shmem.h b/xen/arch/arm/include/asm/static-shmem.h
index 7495a91e7a31..bb5624c801af 100644
--- a/xen/arch/arm/include/asm/static-shmem.h
+++ b/xen/arch/arm/include/asm/static-shmem.h
@@ -3,10 +3,15 @@ 
 #ifndef __ASM_STATIC_SHMEM_H_
 #define __ASM_STATIC_SHMEM_H_
 
+#include <xen/types.h>
 #include <asm/kernel.h>
+#include <asm/setup.h>
 
 #ifdef CONFIG_STATIC_SHM
 
+/* Worst case /memory node reg element: (addrcells + sizecells) */
+#define DT_MEM_NODE_REG_RANGE_SIZE ((NR_MEM_BANKS + NR_SHMEM_BANKS) * 4)
+
 int make_resv_memory_node(const struct kernel_info *kinfo, int addrcells,
                           int sizecells);
 
@@ -37,8 +42,14 @@  int remove_shm_holes_for_domU(const struct kernel_info *kinfo,
 int make_shm_resv_memory_node(const struct kernel_info *kinfo, int addrcells,
                               int sizecells);
 
+int shm_mem_node_merge_reg_range(const struct kernel_info *kinfo, __be32 *reg,
+                                 int *nr_cells, int addrcells, int sizecells);
+
 #else /* !CONFIG_STATIC_SHM */
 
+/* Worst case /memory node reg element: (addrcells + sizecells) */
+#define DT_MEM_NODE_REG_RANGE_SIZE (NR_MEM_BANKS * 4)
+
 static inline int make_resv_memory_node(const struct kernel_info *kinfo,
                                         int addrcells, int sizecells)
 {
@@ -86,6 +97,13 @@  static inline int make_shm_resv_memory_node(const struct kernel_info *kinfo,
     return 0;
 }
 
+static inline int shm_mem_node_merge_reg_range(const struct kernel_info *kinfo,
+                                               __be32 *reg, int *nr_cells,
+                                               int addrcells, int sizecells)
+{
+    return 0;
+}
+
 #endif /* CONFIG_STATIC_SHM */
 
 #endif /* __ASM_STATIC_SHMEM_H_ */
diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c
index 5ad6f1269c48..61fcbe217c61 100644
--- a/xen/arch/arm/static-shmem.c
+++ b/xen/arch/arm/static-shmem.c
@@ -1,5 +1,6 @@ 
 /* SPDX-License-Identifier: GPL-2.0-only */
 
+#include <xen/device_tree.h>
 #include <xen/libfdt/libfdt.h>
 #include <xen/rangeset.h>
 #include <xen/sched.h>
@@ -662,6 +663,31 @@  int __init remove_shm_holes_for_domU(const struct kernel_info *kinfo,
     return res;
 }
 
+int __init shm_mem_node_merge_reg_range(const struct kernel_info *kinfo,
+                                        __be32 *reg, int *nr_cells,
+                                        int addrcells, int sizecells)
+{
+    const struct membanks *mem = &kinfo->shm_mem.common;
+    unsigned int i;
+    __be32 *cells;
+
+    BUG_ON(!nr_cells || !reg);
+
+    cells = &reg[*nr_cells];
+    for ( i = 0; i < mem->nr_banks; i++ )
+    {
+        u64 start = mem->bank[i].start;
+        u64 size = mem->bank[i].size;
+
+        *nr_cells += addrcells + sizecells;
+        BUG_ON(*nr_cells >= DT_MEM_NODE_REG_RANGE_SIZE);
+        dt_child_set_range(&cells, addrcells, sizecells, start, size);
+    }
+
+    return dt_merge_overlapping_addr_size_intervals(reg, nr_cells, addrcells,
+                                                    sizecells);
+}
+
 /*
  * Local variables:
  * mode: C
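The new shm_mem_node_merge_reg_range() packs each bank into the 'reg' array with dt_child_set_range() and the /memory loop reads it back with dt_read_number(). A minimal model of that cell packing, with invented helper names and without the real __be32 byte-swapping (so the sketch stays endian-independent and self-contained):

```c
#include <stdint.h>

typedef uint32_t be32;  /* stand-in for Xen's __be32 */

/*
 * Write 'val' as 'ncells' 32-bit cells, most significant cell first,
 * advancing the cursor — the same shape as dt_child_set_range() uses
 * for an (addrcells, sizecells) pair in a 'reg' property.
 */
static void set_cells(be32 **cellp, int ncells, uint64_t val)
{
    while ( ncells-- )
        *(*cellp)++ = (be32)(val >> (32 * ncells));
}

/* Inverse operation, the shape of dt_read_number(). */
static uint64_t read_cells(const be32 *cells, int ncells)
{
    uint64_t val = 0;

    while ( ncells-- )
        val = (val << 32) | *cells++;

    return val;
}
```

With addrcells = sizecells = 2, each bank occupies four cells, which is where the `* 4` worst case in DT_MEM_NODE_REG_RANGE_SIZE comes from.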