
[v4] boot allocator: Use arch helper for virt_to_mfn on DIRECTMAP

Message ID 1490704538-17557-1-git-send-email-vijay.kilari@gmail.com (mailing list archive)
State New, archived

Commit Message

Vijay Kilari March 28, 2017, 12:35 p.m. UTC
From: Vijaya Kumar K <Vijaya.Kumar@cavium.com>

On ARM64, virt_to_mfn uses the hardware for address
translation, so a translation fault is raised if the virtual
address is not mapped. On ARM64, the DIRECTMAP_VIRT region is
direct mapped.

On ARM platforms with NUMA, while initializing the second memory
node, a panic is triggered from init_node_heap() when
virt_to_mfn() is called for a DIRECTMAP_VIRT region address.
The check there is meant to ensure that the MFN is less than the
maximum mapped MFN, which is found by calling virt_to_mfn() on
DIRECTMAP_VIRT_END. Since the DIRECTMAP_VIRT region is not mapped
on ARM, this fails.

In this patch, instead of calling virt_to_mfn(), an arch helper
arch_mfn_in_directmap() is introduced. On ARM64 this helper
returns true, whereas on ARM the DIRECTMAP_VIRT region is not
directly mapped and only the xenheap region is, so it always
returns false. For x86 the helper keeps using virt_to_mfn().

Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@cavium.com>
---
 xen/common/page_alloc.c        |  7 ++-----
 xen/include/asm-arm/arm32/mm.h | 20 ++++++++++++++++++++
 xen/include/asm-arm/arm64/mm.h | 20 ++++++++++++++++++++
 xen/include/asm-arm/mm.h       |  8 ++++++++
 xen/include/asm-x86/mm.h       | 11 +++++++++++
 5 files changed, 61 insertions(+), 5 deletions(-)
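
For context, the check being replaced looked like this, condensed from
the removed lines in the diff at the bottom of the page (the local
variable "fits" is only for illustration). Per the description above,
the virt_to_mfn() call is what faults on ARM64 when the address is not
mapped:

    unsigned long eva = min(DIRECTMAP_VIRT_END, HYPERVISOR_VIRT_END);

    /* Part of the check that mfn + nr still falls within the direct mapping. */
    bool fits = (mfn + nr) <= (virt_to_mfn(eva - 1) + 1);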

Comments

Jan Beulich March 28, 2017, 12:51 p.m. UTC | #1
>>> On 28.03.17 at 14:35, <vijay.kilari@gmail.com> wrote:
> From: Vijaya Kumar K <Vijaya.Kumar@cavium.com>
> 
> On ARM64, virt_to_mfn uses the hardware for address
> translation. So if the virtual address is not mapped translation
> fault is raised. On ARM64, DIRECTMAP_VIRT region is direct mapped.
> 
> On ARM platforms with NUMA, While initializing second memory node,
> panic is triggered from init_node_heap() when virt_to_mfn()
> is called for DIRECTMAP_VIRT region address.
> Here the check is made to ensure that MFN less than max MFN mapped.
> The max MFN is found by calling virt_to_mfn of DIRECTMAP_VIRT_END
> region. Since DIRECMAP_VIRT region is not mapped to any virtual address
> on ARM, it fails.
> 
> In this patch, instead of calling virt_to_mfn(), arch helper
> arch_mfn_in_directmap() is introduced. On ARM64 this arch helper
> will return true, whereas on ARM DIRECTMAP_VIRT region is not directly 
> mapped
> only xenheap region is directly mapped. So on ARM return false always.
> For x86 this helper does virt_to_mfn.
> 
> Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@cavium.com>

Non-ARM parts
Reviewed-by: Jan Beulich <jbeulich@suse.com>
with two remarks:

> ---

Please get used to adding a brief change summary here.

> --- a/xen/include/asm-x86/mm.h
> +++ b/xen/include/asm-x86/mm.h
> @@ -602,4 +602,15 @@ extern const char zero_page[];
>  /* Build a 32bit PSE page table using 4MB pages. */
>  void write_32bit_pse_identmap(uint32_t *l2);
>  
> +/*
> + * x86 maps DIRECTMAP_VIRT to physical memory. Get the mfn for directmap
> + * memory region.

The comment does not describe what the function does. The ARM64
one doesn't look right, either. I'd say "x86 maps part of physical
memory via the directmap region. Return whether the input MFN
falls in that range."

> + */
> +static inline bool arch_mfn_in_directmap(unsigned long mfn)
> +{
> +    unsigned long eva = min(DIRECTMAP_VIRT_END, HYPERVISOR_VIRT_END);
> +
> +    return (mfn <= (virt_to_mfn(eva - 1) + 1));

Stray parentheses (at least the outer ones).
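
I.e. with both remarks applied, the helper would read roughly like
this (just a sketch, not necessarily the final form):

    /*
     * x86 maps part of physical memory via the directmap region. Return
     * whether the input MFN falls in that range.
     */
    static inline bool arch_mfn_in_directmap(unsigned long mfn)
    {
        unsigned long eva = min(DIRECTMAP_VIRT_END, HYPERVISOR_VIRT_END);

        return mfn <= (virt_to_mfn(eva - 1) + 1);
    }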

Both issues could be taken care of while committing, unless v5
becomes necessary for another reason.

Jan
Julien Grall April 3, 2017, 10:01 a.m. UTC | #2
Hi Vijay,

On 28/03/17 13:35, vijay.kilari@gmail.com wrote:
> From: Vijaya Kumar K <Vijaya.Kumar@cavium.com>
>
> On ARM64, virt_to_mfn uses the hardware for address
> translation. So if the virtual address is not mapped translation
> fault is raised. On ARM64, DIRECTMAP_VIRT region is direct mapped.

You are stating obvious things; a DIRECTMAP_VIRT region is, as the name
says, direct mapped. What matters is that all the RAM is mapped in Xen
on ARM64.

>
> On ARM platforms with NUMA, While initializing second memory node,

s/While/while/

> panic is triggered from init_node_heap() when virt_to_mfn()
> is called for DIRECTMAP_VIRT region address.
> Here the check is made to ensure that MFN less than max MFN mapped.

"The check is here to know whether the MFN is part of the direct mapping".

> The max MFN is found by calling virt_to_mfn of DIRECTMAP_VIRT_END
> region.

DIRECTMAP_VIRT_END is the end of the region, not a region.

> Since DIRECMAP_VIRT region is not mapped to any virtual address

s/DIRECMAP_VIRT/DIRECTMAP_VIRT/

> on ARM, it fails.
>
> In this patch, instead of calling virt_to_mfn(), arch helper
> arch_mfn_in_directmap() is introduced. On ARM64 this arch helper
> will return true, whereas on ARM DIRECTMAP_VIRT region is not directly mapped
> only xenheap region is directly mapped.

As said before, there is no DIRECTMAP_VIRT region on ARM. Not all of
the RAM is mapped in Xen, only the xenheap is.

> So on ARM return false always.

I am OK if you always return false on ARM. But you need to explain why
you don't return is_xen_heap_mfn(...);
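
I.e. the alternative I have in mind would be something like this
(only a sketch):

    static inline bool arch_mfn_in_directmap(unsigned long mfn)
    {
        /* Only the xenheap is always mapped on ARM32. */
        return is_xen_heap_mfn(mfn);
    }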

> For x86 this helper does virt_to_mfn.
>
> Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@cavium.com>
> ---
>  xen/common/page_alloc.c        |  7 ++-----
>  xen/include/asm-arm/arm32/mm.h | 20 ++++++++++++++++++++
>  xen/include/asm-arm/arm64/mm.h | 20 ++++++++++++++++++++
>  xen/include/asm-arm/mm.h       |  8 ++++++++
>  xen/include/asm-x86/mm.h       | 11 +++++++++++
>  5 files changed, 61 insertions(+), 5 deletions(-)
>
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 42c20cb..c4ffb31 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -520,9 +520,6 @@ static unsigned long init_node_heap(int node, unsigned long mfn,
>      unsigned long needed = (sizeof(**_heap) +
>                              sizeof(**avail) * NR_ZONES +
>                              PAGE_SIZE - 1) >> PAGE_SHIFT;
> -#ifdef DIRECTMAP_VIRT_END
> -    unsigned long eva = min(DIRECTMAP_VIRT_END, HYPERVISOR_VIRT_END);
> -#endif
>      int i, j;
>
>      if ( !first_node_initialised )
> @@ -534,7 +531,7 @@ static unsigned long init_node_heap(int node, unsigned long mfn,
>      }
>  #ifdef DIRECTMAP_VIRT_END

Sorry I didn't spot that before. Why do we keep the #ifdef here given 
that the check is arch specific now?
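
I.e. since arch_mfn_in_directmap() is now defined for every
architecture, the block below could presumably be compiled
unconditionally, e.g. (sketch):

    /* No #ifdef DIRECTMAP_VIRT_END around this any more. */
    else if ( *use_tail && nr >= needed &&
              arch_mfn_in_directmap(mfn + nr) &&
              (!xenheap_bits ||
               !((mfn + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )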

>      else if ( *use_tail && nr >= needed &&
> -              (mfn + nr) <= (virt_to_mfn(eva - 1) + 1) &&
> +              arch_mfn_in_directmap(mfn + nr) &&
>                (!xenheap_bits ||
>                 !((mfn + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )
>      {
> @@ -543,7 +540,7 @@ static unsigned long init_node_heap(int node, unsigned long mfn,
>                        PAGE_SIZE - sizeof(**avail) * NR_ZONES;
>      }
>      else if ( nr >= needed &&
> -              (mfn + needed) <= (virt_to_mfn(eva - 1) + 1) &&
> +              arch_mfn_in_directmap(mfn + needed) &&
>                (!xenheap_bits ||
>                 !((mfn + needed - 1) >> (xenheap_bits - PAGE_SHIFT))) )
>      {
> diff --git a/xen/include/asm-arm/arm32/mm.h b/xen/include/asm-arm/arm32/mm.h
> new file mode 100644
> index 0000000..e93d9df
> --- /dev/null
> +++ b/xen/include/asm-arm/arm32/mm.h
> @@ -0,0 +1,20 @@
> +#ifndef __ARM_ARM32_MM_H__
> +#define __ARM_ARM32_MM_H__
> +
> +/* On ARM only xenheap memory is directly mapped. Hence return false. */

Reading this comment, some people will wonder why you don't then check
whether the MFN is in the xenheap. As mentioned above, I am OK if you
always return false here, but you need to explain why.

> +static inline bool arch_mfn_in_directmap(unsigned long mfn)
> +{
> +    return false;
> +}
> +
> +#endif /* __ARM_ARM32_MM_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/include/asm-arm/arm64/mm.h b/xen/include/asm-arm/arm64/mm.h
> new file mode 100644
> index 0000000..36ee9c8
> --- /dev/null
> +++ b/xen/include/asm-arm/arm64/mm.h
> @@ -0,0 +1,20 @@
> +#ifndef __ARM_ARM64_MM_H__
> +#define __ARM_ARM64_MM_H__
> +
> +/* On ARM64 DIRECTMAP_VIRT region is directly mapped. Hence return true */

This comment is not correct. The directmap region is always direct
mapped, which is quite obvious from the name. What matters is that all
the RAM is mapped in Xen.

So I would reword the commit message with:

"On ARM64, all the RAM is currently direct mapped in Xen."

Cheers,
Vijay Kilari April 4, 2017, 5:50 a.m. UTC | #3
Hi Julien,

On Mon, Apr 3, 2017 at 3:31 PM, Julien Grall <julien.grall@arm.com> wrote:
> Hi Vijay,
>
> On 28/03/17 13:35, vijay.kilari@gmail.com wrote:
>>
>> From: Vijaya Kumar K <Vijaya.Kumar@cavium.com>
>>
>> On ARM64, virt_to_mfn uses the hardware for address
>> translation. So if the virtual address is not mapped translation
>> fault is raised. On ARM64, DIRECTMAP_VIRT region is direct mapped.
>
>
> You are stating obvious things, a DIRECTMAP_VIRT region is as the name said
> direct mapped. What matter is all the RAM is mapped in Xen on ARM64.
>
>>
>> On ARM platforms with NUMA, While initializing second memory node,
>
>
> s/While/while/
>
>> panic is triggered from init_node_heap() when virt_to_mfn()
>> is called for DIRECTMAP_VIRT region address.
>> Here the check is made to ensure that MFN less than max MFN mapped.
>
>
> "The check is here to know whether the MFN is part of the direct mapping".
>
>> The max MFN is found by calling virt_to_mfn of DIRECTMAP_VIRT_END
>> region.
>
>
> DIRECTMAP_VIRT_END is the end of the region not a region.
>
>> Since DIRECMAP_VIRT region is not mapped to any virtual address
>
>
> s/DIRECMAP_VIRT/DIRECTMAP_VIRT/
>
>> on ARM, it fails.
>>
>> In this patch, instead of calling virt_to_mfn(), arch helper
>> arch_mfn_in_directmap() is introduced. On ARM64 this arch helper
>> will return true, whereas on ARM DIRECTMAP_VIRT region is not directly
>> mapped
>> only xenheap region is directly mapped.
>
>
> As said before, there is no DIRECTMAP_VIRT region on ARM. All the RAM is not
> mapped on Xen but the xenheap.
>
>> So on ARM return false always.
>
>
> I am OK if you always return false on ARM. But you need to explain why not
> return is_xen_heap_mfn(...);
>
>> For x86 this helper does virt_to_mfn.
>>
>> Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@cavium.com>
>> ---
>>  xen/common/page_alloc.c        |  7 ++-----
>>  xen/include/asm-arm/arm32/mm.h | 20 ++++++++++++++++++++
>>  xen/include/asm-arm/arm64/mm.h | 20 ++++++++++++++++++++
>>  xen/include/asm-arm/mm.h       |  8 ++++++++
>>  xen/include/asm-x86/mm.h       | 11 +++++++++++
>>  5 files changed, 61 insertions(+), 5 deletions(-)
>>
>> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
>> index 42c20cb..c4ffb31 100644
>> --- a/xen/common/page_alloc.c
>> +++ b/xen/common/page_alloc.c
>> @@ -520,9 +520,6 @@ static unsigned long init_node_heap(int node, unsigned
>> long mfn,
>>      unsigned long needed = (sizeof(**_heap) +
>>                              sizeof(**avail) * NR_ZONES +
>>                              PAGE_SIZE - 1) >> PAGE_SHIFT;
>> -#ifdef DIRECTMAP_VIRT_END
>> -    unsigned long eva = min(DIRECTMAP_VIRT_END, HYPERVISOR_VIRT_END);
>> -#endif
>>      int i, j;
>>
>>      if ( !first_node_initialised )
>> @@ -534,7 +531,7 @@ static unsigned long init_node_heap(int node, unsigned
>> long mfn,
>>      }
>>  #ifdef DIRECTMAP_VIRT_END
>
>
> Sorry I didn't spot that before. Why do we keep the #ifdef here given that
> the check is arch specific now?
>
>>      else if ( *use_tail && nr >= needed &&
>> -              (mfn + nr) <= (virt_to_mfn(eva - 1) + 1) &&
>> +              arch_mfn_in_directmap(mfn + nr) &&
>>                (!xenheap_bits ||
>>                 !((mfn + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )
>>      {
>> @@ -543,7 +540,7 @@ static unsigned long init_node_heap(int node, unsigned
>> long mfn,
>>                        PAGE_SIZE - sizeof(**avail) * NR_ZONES;
>>      }
>>      else if ( nr >= needed &&
>> -              (mfn + needed) <= (virt_to_mfn(eva - 1) + 1) &&
>> +              arch_mfn_in_directmap(mfn + needed) &&
>>                (!xenheap_bits ||
>>                 !((mfn + needed - 1) >> (xenheap_bits - PAGE_SHIFT))) )
>>      {
>> diff --git a/xen/include/asm-arm/arm32/mm.h
>> b/xen/include/asm-arm/arm32/mm.h
>> new file mode 100644
>> index 0000000..e93d9df
>> --- /dev/null
>> +++ b/xen/include/asm-arm/arm32/mm.h
>> @@ -0,0 +1,20 @@
>> +#ifndef __ARM_ARM32_MM_H__
>> +#define __ARM_ARM32_MM_H__
>> +
>> +/* On ARM only xenheap memory is directly mapped. Hence return false. */
>
>
> By reading this comment some people will wonder why you don't check whether
> the mfn is in xenheap then. As mentioned above, I am ok if you always return
> false here. But you need to explain why.

Is this ok?

"On ARM32, all the RAM is not mapped by Xen, instead it is mapped by xenheap.
So DIRECTMAP_VIRT region is not mapped.
Hence we return always false when mfn is checked on DIRECTMAP_VIRT region."

>
>
>> +static inline bool arch_mfn_in_directmap(unsigned long mfn)
>> +{
>> +    return false;
>> +}
>> +
>> +#endif /* __ARM_ARM32_MM_H__ */
>> +
>> +/*
>> + * Local variables:
>> + * mode: C
>> + * c-file-style: "BSD"
>> + * c-basic-offset: 4
>> + * tab-width: 4
>> + * indent-tabs-mode: nil
>> + * End:
>> + */
>> diff --git a/xen/include/asm-arm/arm64/mm.h
>> b/xen/include/asm-arm/arm64/mm.h
>> new file mode 100644
>> index 0000000..36ee9c8
>> --- /dev/null
>> +++ b/xen/include/asm-arm/arm64/mm.h
>> @@ -0,0 +1,20 @@
>> +#ifndef __ARM_ARM64_MM_H__
>> +#define __ARM_ARM64_MM_H__
>> +
>> +/* On ARM64 DIRECTMAP_VIRT region is directly mapped. Hence return true
>> */
>
>
> This comment is not correct. A the directmap region is always direct mapped
> which is quite obvious by the name. What matter is all the RAM is mapped in
> Xen.
>
> So I would reword the commit message with:
>
> "On ARM64, all the RAM is currently direct mapped in Xen."
>
> Cheers,
>
> --
> Julien Grall
Julien Grall April 4, 2017, 8 a.m. UTC | #4
On 04/04/2017 06:50 AM, Vijay Kilari wrote:
> Hi Julien,

Hello Vijay,

> On Mon, Apr 3, 2017 at 3:31 PM, Julien Grall <julien.grall@arm.com> wrote:
>> On 28/03/17 13:35, vijay.kilari@gmail.com wrote:
>>> diff --git a/xen/include/asm-arm/arm32/mm.h
>>> b/xen/include/asm-arm/arm32/mm.h
>>> new file mode 100644
>>> index 0000000..e93d9df
>>> --- /dev/null
>>> +++ b/xen/include/asm-arm/arm32/mm.h
>>> @@ -0,0 +1,20 @@
>>> +#ifndef __ARM_ARM32_MM_H__
>>> +#define __ARM_ARM32_MM_H__
>>> +
>>> +/* On ARM only xenheap memory is directly mapped. Hence return false. */
>>
>>
>> By reading this comment some people will wonder why you don't check whether
>> the mfn is in xenheap then. As mentioned above, I am ok if you always return
>> false here. But you need to explain why.
>
> Is this ok?
>
> "On ARM32, all the RAM is not mapped by Xen, instead it is mapped by xenheap.
> So DIRECTMAP_VIRT region is not mapped.
> Hence we return always false when mfn is checked on DIRECTMAP_VIRT region."

This does not make any sense. You don't map the RAM using the xenheap;
the xenheap is the only portion of the RAM that is always mapped.
Furthermore, as I said, there is no "DIRECTMAP_VIRT" region on ARM;
grep for this keyword in the ARM32 code...

How about:

"Only a limited amount of RAM, called xenheap, is always mapped on 
ARM32. For convenience always return false."
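
I.e. (sketch):

    /*
     * Only a limited amount of RAM, called xenheap, is always mapped on
     * ARM32. For convenience always return false.
     */
    static inline bool arch_mfn_in_directmap(unsigned long mfn)
    {
        return false;
    }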

Cheers,

Patch

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 42c20cb..c4ffb31 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -520,9 +520,6 @@  static unsigned long init_node_heap(int node, unsigned long mfn,
     unsigned long needed = (sizeof(**_heap) +
                             sizeof(**avail) * NR_ZONES +
                             PAGE_SIZE - 1) >> PAGE_SHIFT;
-#ifdef DIRECTMAP_VIRT_END
-    unsigned long eva = min(DIRECTMAP_VIRT_END, HYPERVISOR_VIRT_END);
-#endif
     int i, j;
 
     if ( !first_node_initialised )
@@ -534,7 +531,7 @@  static unsigned long init_node_heap(int node, unsigned long mfn,
     }
 #ifdef DIRECTMAP_VIRT_END
     else if ( *use_tail && nr >= needed &&
-              (mfn + nr) <= (virt_to_mfn(eva - 1) + 1) &&
+              arch_mfn_in_directmap(mfn + nr) &&
               (!xenheap_bits ||
                !((mfn + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )
     {
@@ -543,7 +540,7 @@  static unsigned long init_node_heap(int node, unsigned long mfn,
                       PAGE_SIZE - sizeof(**avail) * NR_ZONES;
     }
     else if ( nr >= needed &&
-              (mfn + needed) <= (virt_to_mfn(eva - 1) + 1) &&
+              arch_mfn_in_directmap(mfn + needed) &&
               (!xenheap_bits ||
                !((mfn + needed - 1) >> (xenheap_bits - PAGE_SHIFT))) )
     {
diff --git a/xen/include/asm-arm/arm32/mm.h b/xen/include/asm-arm/arm32/mm.h
new file mode 100644
index 0000000..e93d9df
--- /dev/null
+++ b/xen/include/asm-arm/arm32/mm.h
@@ -0,0 +1,20 @@ 
+#ifndef __ARM_ARM32_MM_H__
+#define __ARM_ARM32_MM_H__
+
+/* On ARM only xenheap memory is directly mapped. Hence return false. */
+static inline bool arch_mfn_in_directmap(unsigned long mfn)
+{
+    return false;
+}
+
+#endif /* __ARM_ARM32_MM_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-arm/arm64/mm.h b/xen/include/asm-arm/arm64/mm.h
new file mode 100644
index 0000000..36ee9c8
--- /dev/null
+++ b/xen/include/asm-arm/arm64/mm.h
@@ -0,0 +1,20 @@ 
+#ifndef __ARM_ARM64_MM_H__
+#define __ARM_ARM64_MM_H__
+
+/* On ARM64 DIRECTMAP_VIRT region is directly mapped. Hence return true */
+static inline bool arch_mfn_in_directmap(unsigned long mfn)
+{
+    return true;
+}
+
+#endif /* __ARM_ARM64_MM_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 4892155..0fef612 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -6,6 +6,14 @@ 
 #include <public/xen.h>
 #include <xen/pdx.h>
 
+#if defined(CONFIG_ARM_32)
+# include <asm/arm32/mm.h>
+#elif defined(CONFIG_ARM_64)
+# include <asm/arm64/mm.h>
+#else
+# error "unknown ARM variant"
+#endif
+
 /* Align Xen to a 2 MiB boundary. */
 #define XEN_PADDR_ALIGN (1 << 21)
 
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index e22603c..efae611 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -602,4 +602,15 @@  extern const char zero_page[];
 /* Build a 32bit PSE page table using 4MB pages. */
 void write_32bit_pse_identmap(uint32_t *l2);
 
+/*
+ * x86 maps DIRECTMAP_VIRT to physical memory. Get the mfn for directmap
+ * memory region.
+ */
+static inline bool arch_mfn_in_directmap(unsigned long mfn)
+{
+    unsigned long eva = min(DIRECTMAP_VIRT_END, HYPERVISOR_VIRT_END);
+
+    return (mfn <= (virt_to_mfn(eva - 1) + 1));
+}
+
 #endif /* __ASM_X86_MM_H__ */