Message ID | 20220224013023.50920-1-Henry.Wang@arm.com (mailing list archive)
---|---
Series | Introduce reserved Xenheap
Hi Henry,

On 24/02/2022 01:30, Henry Wang wrote:
> The reserved Xenheap, or statically configured Xenheap, refers to parts
> of RAM reserved in the beginning for Xenheap. Like the static memory
> allocation, such reserved Xenheap regions are reserved by configuration
> in the device tree using physical address ranges.

In Xen, we have the concept of domheap and xenheap. For Arm64 and x86
they would be the same. But for Arm32, they would be different: xenheap
is always mapped whereas domheap is separate.

Skimming through the series, I think you want to use the region for both
domheap and xenheap. Is that correct?

Furthermore, now that we are introducing more static regions, it will get
easier to overlap the regions by mistake. I think we want to have some
logic in Xen (or outside) to ensure that none of them overlaps. Do you
have any plan for that?

> This feature is useful to run Xen on Arm MPU systems, where only a
> finite number of memory protection regions are available. The limited
> number of protection regions places requirements on planning the use of
> MPU protection regions, and one or more MPU protection regions needs to
> be reserved only for Xenheap.
>
> Therefore, this patch series is sent as RFC for comments from the
> community. The first patch introduces the reserved Xenheap and the
> device tree processing code. The second patch adds the implementation of
> the reserved Xenheap pages handling in the boot and heap allocators on
> Arm64.
>
> Henry Wang (2):
>   docs, xen/arm: Introduce reserved Xenheap memory
>   xen/arm: Handle reserved Xenheap pages in boot/heap allocator
>
>  docs/misc/arm/device-tree/booting.txt | 43 ++++++++++++++++++++++
>  xen/arch/arm/bootfdt.c                | 52 +++++++++++++++++++++------
>  xen/arch/arm/include/asm/setup.h      |  3 ++
>  xen/arch/arm/setup.c                  | 52 +++++++++++++++++++--------
>  4 files changed, 125 insertions(+), 25 deletions(-)
Hi Julien,

Thanks very much for your time reading the series and for your feedback.
Please find my inline replies below.

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Saturday, February 26, 2022 4:09 AM
> To: Henry Wang <Henry.Wang@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; Penny Zheng <Penny.Zheng@arm.com>
> Subject: Re: [RFC PATCH 0/2] Introduce reserved Xenheap
>
> Hi Henry,
>
> On 24/02/2022 01:30, Henry Wang wrote:
> > The reserved Xenheap, or statically configured Xenheap, refers to parts
> > of RAM reserved in the beginning for Xenheap. Like the static memory
> > allocation, such reserved Xenheap regions are reserved by configuration
> > in the device tree using physical address ranges.
>
> In Xen, we have the concept of domheap and xenheap. For Arm64 and x86
> they would be the same. But for Arm32, they would be different: xenheap
> is always mapped whereas domheap is separate.
>
> Skimming through the series, I think you want to use the region for both
> domheap and xenheap. Is that correct?

Yes, I think that would be correct. For Arm32, instead of using the full
`ram_pages` as the initial value of `heap_pages`, we want to use the
region specified in the device tree. But we are confused if this is the
correct (or preferred) way for Arm32, so in this series we only
implemented the reserved heap for Arm64.

Could you please share your opinion on this? Thanks!

> Furthermore, now that we are introducing more static regions, it will get
> easier to overlap the regions by mistake. I think we want to have some
> logic in Xen (or outside) to ensure that none of them overlaps. Do you
> have any plan for that?

Totally agree with this idea, but before we actually implement the code,
we would like to first share our thoughts on this: One option could be to
add data structures to note down these static memory regions when the
device tree is parsed, and then we can check whether they overlap. Over
the long term (and this long term option is currently not in our plan),
maybe we can add something in the Xen toolstack for this usage?

Also, I am wondering if the overlapping check logic should be introduced
in this series. WDYT?

> > This feature is useful to run Xen on Arm MPU systems, where only a
> > finite number of memory protection regions are available. The limited
> > number of protection regions places requirements on planning the use of
> > MPU protection regions, and one or more MPU protection regions needs to
> > be reserved only for Xenheap.
> >
> > Therefore, this patch series is sent as RFC for comments from the
> > community. The first patch introduces the reserved Xenheap and the
> > device tree processing code. The second patch adds the implementation of
> > the reserved Xenheap pages handling in the boot and heap allocators on
> > Arm64.
> >
> > Henry Wang (2):
> >   docs, xen/arm: Introduce reserved Xenheap memory
> >   xen/arm: Handle reserved Xenheap pages in boot/heap allocator
> >
> >  docs/misc/arm/device-tree/booting.txt | 43 ++++++++++++++++++++++
> >  xen/arch/arm/bootfdt.c                | 52 +++++++++++++++++++++------
> >  xen/arch/arm/include/asm/setup.h      |  3 ++
> >  xen/arch/arm/setup.c                  | 52 +++++++++++++++++++--------
> >  4 files changed, 125 insertions(+), 25 deletions(-)
>
> --
> Julien Grall

Kind regards,
Henry
On 28/02/2022 07:12, Henry Wang wrote:
> Hi Julien,

Hi Henry,

>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Saturday, February 26, 2022 4:09 AM
>> To: Henry Wang <Henry.Wang@arm.com>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; Penny Zheng <Penny.Zheng@arm.com>
>> Subject: Re: [RFC PATCH 0/2] Introduce reserved Xenheap
>>
>> Hi Henry,
>>
>> On 24/02/2022 01:30, Henry Wang wrote:
>>> The reserved Xenheap, or statically configured Xenheap, refers to parts
>>> of RAM reserved in the beginning for Xenheap. Like the static memory
>>> allocation, such reserved Xenheap regions are reserved by configuration
>>> in the device tree using physical address ranges.
>>
>> In Xen, we have the concept of domheap and xenheap. For Arm64 and x86
>> they would be the same. But for Arm32, they would be different: xenheap
>> is always mapped whereas domheap is separate.
>>
>> Skimming through the series, I think you want to use the region for both
>> domheap and xenheap. Is that correct?
>
> Yes, I think that would be correct. For Arm32, instead of using the full
> `ram_pages` as the initial value of `heap_pages`, we want to use the
> region specified in the device tree. But we are confused if this is the
> correct (or preferred) way for Arm32, so in this series we only
> implemented the reserved heap for Arm64.

That's an interesting point. When I skimmed through the series on
Friday, my first thought was that for arm32 it would be only xenheap (so
all the rest of memory is domheap).

However, Xen can allocate memory from domheap for its own purpose (e.g.
we don't need contiguous memory, or for page-tables).

In a fully static environment, the domheap and xenheap are both going to
be quite small. It would also be somewhat difficult for a user to size
it. So I think it would be easier to use the region you introduce for
both domheap and xenheap.

Stefano, Bertrand, any opinions?

On a separate topic, I think we need some documentation explaining how a
user can size the xenheap. How did you figure this out for your setup?

>> Furthermore, now that we are introducing more static regions, it will get
>> easier to overlap the regions by mistake. I think we want to have some
>> logic in Xen (or outside) to ensure that none of them overlaps. Do you
>> have any plan for that?
>
> Totally agree with this idea, but before we actually implement the code,
> we would like to first share our thoughts on this: One option could be to
> add data structures to note down these static memory regions when the
> device tree is parsed, and then we can check whether they overlap.

This should work.

> Over
> the long term (and this long term option is currently not in our plan),
> maybe we can add something in the Xen toolstack for this usage?

When I read "Xen toolstack", I read the tools that will run in dom0. Is
that what you meant?

> Also, I am wondering if the overlapping check logic should be introduced
> in this series. WDYT?

I would do that in a separate series.

Cheers,
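As an aside, the overlap check discussed in the exchange above (record each static region as the device tree is parsed, then reject any pair that intersects) only needs a pairwise range comparison. Below is a minimal, self-contained sketch in plain C; the structure and function names are illustrative only and are not part of the series. In Xen itself such a check would naturally sit next to the device tree parsing in bootfdt.c, but the sketch is deliberately standalone so it can be compiled and tried on its own.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* One reserved region as it would be recorded while parsing the DT. */
struct mem_region {
    uint64_t start;
    uint64_t size;
};

/* Two half-open ranges [start, start + size) overlap iff each one
 * starts before the other one ends. */
static bool regions_overlap(const struct mem_region *a,
                            const struct mem_region *b)
{
    return a->start < b->start + b->size &&
           b->start < a->start + a->size;
}

/* Pairwise check over all recorded static regions; returns true and
 * reports the first overlapping pair, if any. */
static bool any_overlap(const struct mem_region *regions, unsigned int nr)
{
    for ( unsigned int i = 0; i < nr; i++ )
        for ( unsigned int j = i + 1; j < nr; j++ )
            if ( regions_overlap(&regions[i], &regions[j]) )
            {
                printf("Regions %u and %u overlap\n", i, j);
                return true;
            }

    return false;
}

int main(void)
{
    /* Example input: a static-mem bank and a reserved heap bank that
     * collide (addresses are made up for the example). */
    struct mem_region regions[] = {
        { 0x880000000ULL, 0x00100000ULL },  /* static-mem bank        */
        { 0x880080000ULL, 0x08000000ULL },  /* reserved heap (clash)  */
    };

    return any_overlap(regions, 2) ? 1 : 0;
}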
Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> On 28/02/2022 07:12, Henry Wang wrote:
> > Hi Julien,
>
> Hi Henry,
>
> >> -----Original Message-----
> >> From: Julien Grall <julien@xen.org>
> >> Hi Henry,
> >>
> >> On 24/02/2022 01:30, Henry Wang wrote:
> >>> The reserved Xenheap, or statically configured Xenheap, refers to parts
> >>> of RAM reserved in the beginning for Xenheap. Like the static memory
> >>> allocation, such reserved Xenheap regions are reserved by configuration
> >>> in the device tree using physical address ranges.
> >>
> >> In Xen, we have the concept of domheap and xenheap. For Arm64 and x86
> >> they would be the same. But for Arm32, they would be different: xenheap
> >> is always mapped whereas domheap is separate.
> >>
> >> Skimming through the series, I think you want to use the region for both
> >> domheap and xenheap. Is that correct?
> >
> > Yes, I think that would be correct. For Arm32, instead of using the full
> > `ram_pages` as the initial value of `heap_pages`, we want to use the
> > region specified in the device tree. But we are confused if this is the
> > correct (or preferred) way for Arm32, so in this series we only
> > implemented the reserved heap for Arm64.
>
> That's an interesting point. When I skimmed through the series on
> Friday, my first thought was that for arm32 it would be only xenheap (so
> all the rest of memory is domheap).
>
> However, Xen can allocate memory from domheap for its own purpose (e.g.
> we don't need contiguous memory, or for page-tables).
>
> In a fully static environment, the domheap and xenheap are both going to
> be quite small. It would also be somewhat difficult for a user to size
> it. So I think it would be easier to use the region you introduce for
> both domheap and xenheap.
>
> Stefano, Bertrand, any opinions?
>
> On a separate topic, I think we need some documentation explaining how a
> user can size the xenheap. How did you figure this out for your setup?

Not sure if I fully understand the question. I will explain in two parts:
I tested this series on a dom0less (static mem) system on FVP_Base.

(1) For configuring the system, I followed the documentation I added in the
first patch in this series (docs/misc/arm/device-tree/booting.txt). The idea
is adding some static mem regions under the chosen node.

chosen {
+    #xen,static-mem-address-cells = <0x2>;
+    #xen,static-mem-size-cells = <0x2>;
+    xen,static-mem = <0x8 0x80000000 0x0 0x00100000 0x8 0x90000000 0x0 0x08000000>;
    [...]
}

(2) For verifying this series, what I did was basically playing with the
region size and the number of regions, adding printks, and also seeing if
the guests can boot as expected when I change the xenheap size.

> >> Furthermore, now that we are introducing more static regions, it will get
> >> easier to overlap the regions by mistake. I think we want to have some
> >> logic in Xen (or outside) to ensure that none of them overlaps. Do you
> >> have any plan for that?
> >
> > Totally agree with this idea, but before we actually implement the code,
> > we would like to first share our thoughts on this: One option could be to
> > add data structures to note down these static memory regions when the
> > device tree is parsed, and then we can check whether they overlap.
>
> This should work.

Ack.

> > Over
> > the long term (and this long term option is currently not in our plan),
> > maybe we can add something in the Xen toolstack for this usage?
>
> When I read "Xen toolstack", I read the tools that will run in dom0. Is
> that what you meant?

Nonono, sorry for the misleading wording. I mean a build-time tool that can
run on the host (build machine) to generate/configure the Xen DTS for
statically allocated memory. But maybe this tool can be placed in the Xen
tools, or it can be a separate tool that is out of Xen's scope.

Anyway, this is just an idea, as we find it is not easy for users to
configure so many static items manually.

> > Also, I am wondering if the overlapping check logic should be introduced
> > in this series. WDYT?
>
> I would do that in a separate series.

Ack.

Kind regards,
Henry

> Cheers,
>
> --
> Julien Grall
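The chosen-node snippet quoted above covers the static-mem banks given to the domains; the reserved Xenheap region itself is declared separately under the chosen node by the binding added in the series' booting.txt patch. Purely for illustration, such a node could look like the sketch below, assuming a property named xen,static-heap with the same cells convention as xen,static-mem; the property name and the values are assumptions made here, not taken from the series, so refer to the documentation patch for the actual binding.

chosen {
    #xen,static-heap-address-cells = <0x2>;
    #xen,static-heap-size-cells = <0x2>;
    /* One 128MB region at 0x8_A0000000 reserved for the Xen heap (example values). */
    xen,static-heap = <0x8 0xA0000000 0x0 0x08000000>;
    [...]
};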
Hi Julien,

> -----Original Message-----
> From: Henry Wang <Henry.Wang@arm.com>
> Sent: 1 March 2022 10:11
> To: Julien Grall <julien@xen.org>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; Penny Zheng <Penny.Zheng@arm.com>
> Subject: RE: [RFC PATCH 0/2] Introduce reserved Xenheap
>
> Hi Julien,
>
> > -----Original Message-----
> > From: Julien Grall <julien@xen.org>
> > On 28/02/2022 07:12, Henry Wang wrote:
> > > Hi Julien,
> >
> > Hi Henry,
> >
> > >> -----Original Message-----
> > >> From: Julien Grall <julien@xen.org>
> > >> Hi Henry,
> > >>
> > >> On 24/02/2022 01:30, Henry Wang wrote:
> > >>> The reserved Xenheap, or statically configured Xenheap, refers to parts
> > >>> of RAM reserved in the beginning for Xenheap. Like the static memory
> > >>> allocation, such reserved Xenheap regions are reserved by configuration
> > >>> in the device tree using physical address ranges.
> > >>
> > >> In Xen, we have the concept of domheap and xenheap. For Arm64 and x86
> > >> they would be the same. But for Arm32, they would be different: xenheap
> > >> is always mapped whereas domheap is separate.
> > >>
> > >> Skimming through the series, I think you want to use the region for both
> > >> domheap and xenheap. Is that correct?
> > >
> > > Yes, I think that would be correct. For Arm32, instead of using the full
> > > `ram_pages` as the initial value of `heap_pages`, we want to use the
> > > region specified in the device tree. But we are confused if this is the
> > > correct (or preferred) way for Arm32, so in this series we only
> > > implemented the reserved heap for Arm64.
> >
> > That's an interesting point. When I skimmed through the series on
> > Friday, my first thought was that for arm32 it would be only xenheap (so
> > all the rest of memory is domheap).
> >
> > However, Xen can allocate memory from domheap for its own purpose (e.g.
> > we don't need contiguous memory, or for page-tables).
> >
> > In a fully static environment, the domheap and xenheap are both going to
> > be quite small. It would also be somewhat difficult for a user to size
> > it. So I think it would be easier to use the region you introduce for
> > both domheap and xenheap.
> >
> > Stefano, Bertrand, any opinions?
> >
> > On a separate topic, I think we need some documentation explaining how a
> > user can size the xenheap. How did you figure this out for your setup?
>
> Not sure if I fully understand the question. I will explain in two parts:
> I tested this series on a dom0less (static mem) system on FVP_Base.
>
> (1) For configuring the system, I followed the documentation I added in the
> first patch in this series (docs/misc/arm/device-tree/booting.txt). The idea
> is adding some static mem regions under the chosen node.
>
> chosen {
> +    #xen,static-mem-address-cells = <0x2>;
> +    #xen,static-mem-size-cells = <0x2>;
> +    xen,static-mem = <0x8 0x80000000 0x0 0x00100000 0x8 0x90000000 0x0 0x08000000>;
>     [...]
> }
>
> (2) For verifying this series, what I did was basically playing with the
> region size and the number of regions, adding printks, and also seeing if
> the guests can boot as expected when I change the xenheap size.
>
> > >> Furthermore, now that we are introducing more static regions, it will get
> > >> easier to overlap the regions by mistake. I think we want to have some
> > >> logic in Xen (or outside) to ensure that none of them overlaps. Do you
> > >> have any plan for that?
> > >
> > > Totally agree with this idea, but before we actually implement the code,
> > > we would like to first share our thoughts on this: One option could be to
> > > add data structures to note down these static memory regions when the
> > > device tree is parsed, and then we can check whether they overlap.
> >
> > This should work.
>
> Ack.
>
> > > Over
> > > the long term (and this long term option is currently not in our plan),
> > > maybe we can add something in the Xen toolstack for this usage?
> >
> > When I read "Xen toolstack", I read the tools that will run in dom0. Is
> > that what you meant?
>
> Nonono, sorry for the misleading wording. I mean a build-time tool that can
> run on the host (build machine) to generate/configure the Xen DTS for
> statically allocated memory. But maybe this tool can be placed in the Xen
> tools, or it can be a separate tool that is out of Xen's scope.
>
> Anyway, this is just an idea, as we find it is not easy for users to
> configure so many static items manually.

Not only for this one. As the Armv8-R64 support code also includes lots of
statically allocated items, it will also encounter this user configuration
issue. So this would be a long term consideration. We can discuss this topic
after the Xen Armv8-R64 support upstream work is done.

And this tool does not necessarily need to be provided by the community.
Vendors that want to use Xen can also do it. IMO, it would be better if the
community could provide it. Anyway, let's defer this topic :)

Thanks,
Wei Chen

> > > Also, I am wondering if the overlapping check logic should be introduced
> > > in this series. WDYT?
> >
> > I would do that in a separate series.
>
> Ack.
>
> Kind regards,
> Henry
>
> > Cheers,
> >
> > --
> > Julien Grall
Hi,

> On 28 Feb 2022, at 18:51, Julien Grall <julien@xen.org> wrote:
>
> On 28/02/2022 07:12, Henry Wang wrote:
>> Hi Julien,
>
> Hi Henry,
>
>>> -----Original Message-----
>>> From: Julien Grall <julien@xen.org>
>>> Sent: Saturday, February 26, 2022 4:09 AM
>>> To: Henry Wang <Henry.Wang@arm.com>; xen-devel@lists.xenproject.org;
>>> sstabellini@kernel.org
>>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>>> <Wei.Chen@arm.com>; Penny Zheng <Penny.Zheng@arm.com>
>>> Subject: Re: [RFC PATCH 0/2] Introduce reserved Xenheap
>>>
>>> Hi Henry,
>>>
>>> On 24/02/2022 01:30, Henry Wang wrote:
>>>> The reserved Xenheap, or statically configured Xenheap, refers to parts
>>>> of RAM reserved in the beginning for Xenheap. Like the static memory
>>>> allocation, such reserved Xenheap regions are reserved by configuration
>>>> in the device tree using physical address ranges.
>>>
>>> In Xen, we have the concept of domheap and xenheap. For Arm64 and x86
>>> they would be the same. But for Arm32, they would be different: xenheap
>>> is always mapped whereas domheap is separate.
>>>
>>> Skimming through the series, I think you want to use the region for both
>>> domheap and xenheap. Is that correct?
>>
>> Yes, I think that would be correct. For Arm32, instead of using the full
>> `ram_pages` as the initial value of `heap_pages`, we want to use the
>> region specified in the device tree. But we are confused if this is the
>> correct (or preferred) way for Arm32, so in this series we only
>> implemented the reserved heap for Arm64.
>
> That's an interesting point. When I skimmed through the series on
> Friday, my first thought was that for arm32 it would be only xenheap (so
> all the rest of memory is domheap).
>
> However, Xen can allocate memory from domheap for its own purpose (e.g.
> we don't need contiguous memory, or for page-tables).
>
> In a fully static environment, the domheap and xenheap are both going to
> be quite small. It would also be somewhat difficult for a user to size
> it. So I think it would be easier to use the region you introduce for
> both domheap and xenheap.
>
> Stefano, Bertrand, any opinions?

Only one region is easier to configure, and I think in this case it will
also prevent lots of over-allocation. So in a fully static case, having
only one heap is a good strategy for now.

There might be some cases where someone would want to fully control the
memory allocated by Xen per domain, and in this case be able to size it
for each guest (to make sure one guest cannot be impacted by another at
all). But this is definitely something that could be done later, if
needed.

Cheers
Bertrand

> On a separate topic, I think we need some documentation explaining how a
> user can size the xenheap. How did you figure this out for your setup?
>
>>> Furthermore, now that we are introducing more static regions, it will get
>>> easier to overlap the regions by mistake. I think we want to have some
>>> logic in Xen (or outside) to ensure that none of them overlaps. Do you
>>> have any plan for that?
>> Totally agree with this idea, but before we actually implement the code,
>> we would like to first share our thoughts on this: One option could be to
>> add data structures to note down these static memory regions when the
>> device tree is parsed, and then we can check whether they overlap.
>
> This should work.
>
>> Over
>> the long term (and this long term option is currently not in our plan),
>> maybe we can add something in the Xen toolstack for this usage?
>
> When I read "Xen toolstack", I read the tools that will run in dom0. Is
> that what you meant?
>
>> Also, I am wondering if the overlapping check logic should be introduced
>> in this series. WDYT?
>
> I would do that in a separate series.
>
> Cheers,
>
> --
> Julien Grall
On Tue, 1 Mar 2022, Wei Chen wrote:
> > Hi Julien,
> >
> > > -----Original Message-----
> > > From: Julien Grall <julien@xen.org>
> > > On 28/02/2022 07:12, Henry Wang wrote:
> > > > Hi Julien,
> > >
> > > Hi Henry,
> > >
> > > >> -----Original Message-----
> > > >> From: Julien Grall <julien@xen.org>
> > > >> Hi Henry,
> > > >>
> > > >> On 24/02/2022 01:30, Henry Wang wrote:
> > > >>> The reserved Xenheap, or statically configured Xenheap, refers to parts
> > > >>> of RAM reserved in the beginning for Xenheap. Like the static memory
> > > >>> allocation, such reserved Xenheap regions are reserved by configuration
> > > >>> in the device tree using physical address ranges.
> > > >>
> > > >> In Xen, we have the concept of domheap and xenheap. For Arm64 and x86
> > > >> they would be the same. But for Arm32, they would be different: xenheap
> > > >> is always mapped whereas domheap is separate.
> > > >>
> > > >> Skimming through the series, I think you want to use the region for both
> > > >> domheap and xenheap. Is that correct?
> > > >
> > > > Yes, I think that would be correct. For Arm32, instead of using the full
> > > > `ram_pages` as the initial value of `heap_pages`, we want to use the
> > > > region specified in the device tree. But we are confused if this is the
> > > > correct (or preferred) way for Arm32, so in this series we only
> > > > implemented the reserved heap for Arm64.
> > >
> > > That's an interesting point. When I skimmed through the series on
> > > Friday, my first thought was that for arm32 it would be only xenheap (so
> > > all the rest of memory is domheap).
> > >
> > > However, Xen can allocate memory from domheap for its own purpose (e.g.
> > > we don't need contiguous memory, or for page-tables).
> > >
> > > In a fully static environment, the domheap and xenheap are both going to
> > > be quite small. It would also be somewhat difficult for a user to size
> > > it. So I think it would be easier to use the region you introduce for
> > > both domheap and xenheap.
> > >
> > > Stefano, Bertrand, any opinions?
> > >
> > > On a separate topic, I think we need some documentation explaining how a
> > > user can size the xenheap. How did you figure this out for your setup?
> >
> > Not sure if I fully understand the question. I will explain in two parts:
> > I tested this series on a dom0less (static mem) system on FVP_Base.
> >
> > (1) For configuring the system, I followed the documentation I added in the
> > first patch in this series (docs/misc/arm/device-tree/booting.txt). The idea
> > is adding some static mem regions under the chosen node.
> >
> > chosen {
> > +    #xen,static-mem-address-cells = <0x2>;
> > +    #xen,static-mem-size-cells = <0x2>;
> > +    xen,static-mem = <0x8 0x80000000 0x0 0x00100000 0x8 0x90000000 0x0 0x08000000>;
> >     [...]
> > }
> >
> > (2) For verifying this series, what I did was basically playing with the
> > region size and the number of regions, adding printks, and also seeing if
> > the guests can boot as expected when I change the xenheap size.
> >
> > > >> Furthermore, now that we are introducing more static regions, it will get
> > > >> easier to overlap the regions by mistake. I think we want to have some
> > > >> logic in Xen (or outside) to ensure that none of them overlaps. Do you
> > > >> have any plan for that?
> > > >
> > > > Totally agree with this idea, but before we actually implement the code,
> > > > we would like to first share our thoughts on this: One option could be to
> > > > add data structures to note down these static memory regions when the
> > > > device tree is parsed, and then we can check whether they overlap.
> > >
> > > This should work.
> >
> > Ack.
> >
> > > > Over
> > > > the long term (and this long term option is currently not in our plan),
> > > > maybe we can add something in the Xen toolstack for this usage?
> > >
> > > When I read "Xen toolstack", I read the tools that will run in dom0. Is
> > > that what you meant?
> >
> > Nonono, sorry for the misleading wording. I mean a build-time tool that can
> > run on the host (build machine) to generate/configure the Xen DTS for
> > statically allocated memory. But maybe this tool can be placed in the Xen
> > tools, or it can be a separate tool that is out of Xen's scope.
> >
> > Anyway, this is just an idea, as we find it is not easy for users to
> > configure so many static items manually.
>
> Not only for this one. As the Armv8-R64 support code also includes lots of
> statically allocated items, it will also encounter this user configuration
> issue. So this would be a long term consideration. We can discuss this topic
> after the Xen Armv8-R64 support upstream work is done.
>
> And this tool does not necessarily need to be provided by the community.
> Vendors that want to use Xen can also do it. IMO, it would be better if the
> community could provide it. Anyway, let's defer this topic :)

Yes, I agree with you that it would be best if this tool was provided by
the community. I'll continue the conversation on the Armv8-R64 thread.