
[28/36] xen/arm: introduce xen_map_text_rw

Message ID 20220304174701.1453977-29-marco.solieri@minervasys.tech (mailing list archive)
State New, archived
Series Arm cache coloring

Commit Message

Marco Solieri March 4, 2022, 5:46 p.m. UTC
From: Luca Miccio <lucmiccio@gmail.com>

Introduce two new arm specific functions to temporarily map/unmap the
Xen text read-write (the Xen text is mapped read-only by default by
setup_pagetables): xen_map_text_rw and xen_unmap_text_rw.

There is only one caller in the alternative framework.

The non-colored implementation simply uses __vmap to do the mapping. In
other words, there are no changes to the non-colored case.

The colored implementation calculates Xen text physical addresses
appropriately, according to the coloring configuration.

Export vm_alloc because it is needed by the colored implementation of
xen_map_text_rw.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Signed-off-by: Luca Miccio <lucmiccio@gmail.com>
---
 xen/arch/arm/alternative.c    |  8 ++------
 xen/arch/arm/include/asm/mm.h |  3 +++
 xen/arch/arm/mm.c             | 38 +++++++++++++++++++++++++++++++++++
 xen/common/vmap.c             |  4 ++--
 xen/include/xen/vmap.h        |  2 ++
 5 files changed, 47 insertions(+), 8 deletions(-)
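
For reference, the calling pattern the two helpers are designed for, mirroring
the alternative.c hunk in the patch below (sketch only):

    void *xenmap = xen_map_text_rw();

    /* Re-mapping Xen is not expected to fail during boot. */
    BUG_ON(!xenmap);

    /* ... patch the code through the writable alias ... */

    xen_unmap_text_rw(xenmap);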

Comments

Jan Beulich March 7, 2022, 7:39 a.m. UTC | #1
On 04.03.2022 18:46, Marco Solieri wrote:
> From: Luca Miccio <lucmiccio@gmail.com>
> 
> Introduce two new arm specific functions to temporarily map/unmap the
> Xen text read-write (the Xen text is mapped read-only by default by
> setup_pagetables): xen_map_text_rw and xen_unmap_text_rw.
> 
> There is only one caller in the alternative framework.
> 
> The non-colored implementation simply uses __vmap to do the mapping. In
> other words, there are no changes to the non-colored case.
> 
> The colored implementation calculates Xen text physical addresses
> appropriately, according to the coloring configuration.
> 
> Export vm_alloc because it is needed by the colored implementation of
> xen_map_text_rw.

I'm afraid I view vm_alloc() as strictly an internal function to
vmap.c. Even livepatching infrastructure has got away without making
it non-static.

Jan
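
One way the colored implementation could avoid exporting vm_alloc() is to
build the per-page array of colored MFNs up front and hand it to the already
exported __vmap() with a granularity of 1. A minimal sketch, not part of the
posted series, assuming it sits in xen/arch/arm/mm.c next to the colored code
and reuses next_xen_colored() from this series together with the standard
xmalloc_array()/xfree() helpers:

    void *__init xen_map_text_rw(void)
    {
        unsigned int nr = 1U << get_order_from_bytes(_end - _start);
        paddr_t xen_paddr = __pa(_start);
        mfn_t *mfns = xmalloc_array(mfn_t, nr);
        void *va;
        unsigned int i;

        if ( !mfns )
            return NULL;

        /* Resolve the colored MFN backing each page of the Xen image. */
        for ( i = 0; i < nr; i++, xen_paddr += PAGE_SIZE )
        {
            xen_paddr = next_xen_colored(xen_paddr);
            mfns[i] = maddr_to_mfn(xen_paddr);
        }

        /* Granularity 1: each array entry maps exactly one page. */
        va = __vmap(mfns, 1, nr, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);

        xfree(mfns);

        return va;
    }

xen_unmap_text_rw() could then remain a plain vunmap(), as in the patch.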
Julien Grall March 11, 2022, 10:28 p.m. UTC | #2
Hi,

On 07/03/2022 07:39, Jan Beulich wrote:
> On 04.03.2022 18:46, Marco Solieri wrote:
>> From: Luca Miccio <lucmiccio@gmail.com>
>>
>> Introduce two new arm specific functions to temporarily map/unmap the
>> Xen text read-write (the Xen text is mapped read-only by default by
>> setup_pagetables): xen_map_text_rw and xen_unmap_text_rw.
>>
>> There is only one caller in the alternative framework.
>>
>> The non-colored implementation simply uses __vmap to do the mapping. In
>> other words, there are no changes to the non-colored case.
>>
>> The colored implementation calculates Xen text physical addresses
>> appropriately, according to the coloring configuration.
>>
>> Export vm_alloc because it is needed by the colored implementation of
>> xen_map_text_rw.
> 
> I'm afraid I view vm_alloc() as strictly an internal function to
> vmap.c. Even livepatching infrastructure has got away without making
> it non-static.

I think we can get away from using vmap() here. We were using it because 
Xen text mappings are RX and this is enforced by the processor via 
SCTLR_EL2.WXN.

The bit is cached in the TLB. Back then it wasn't very clear what would 
happen if we cleared the bit. Looking at the latest Arm ARM (ARM DDI 
0487H.a D5.10), there is now a section "TLB invalidation and System 
register control fields" providing more details.

Reading the section, it should be safe to temporarily disable WXN on every 
CPU and make the Xen text writable.

@Marco, would you be able to have a look?

Cheers,
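
A minimal sketch of what that suggestion could look like (not part of the
posted series): each CPU taking part in the stop_machine() callback clears
its local SCTLR_EL2.WXN bit before patching and sets it again afterwards,
mirroring the existing xen_pt_enforce_wnx(). SCTLR_Axx_ELx_WXN and
flush_xen_tlb_local() are assumed to be the existing Xen definitions; the
function names below are hypothetical, and the Xen text PTEs would still
need to be made writable (and restored) by the CPU doing the patching.

    static void relax_xen_wxn_local(void)
    {
        /*
         * Clear WXN so that a writable Xen text mapping does not become
         * execute-never while CPUs are still running from it.
         */
        WRITE_SYSREG(READ_SYSREG(SCTLR_EL2) & ~SCTLR_Axx_ELx_WXN, SCTLR_EL2);

        /*
         * SCTLR_EL2.WXN may be cached in the TLB (Arm DDI 0487H.a D5.10),
         * so synchronize the write and invalidate the local TLB.
         */
        isb();
        flush_xen_tlb_local();
    }

    static void enforce_xen_wxn_local(void)
    {
        /* Restore the boot-time W^X enforcement set by xen_pt_enforce_wnx(). */
        WRITE_SYSREG(READ_SYSREG(SCTLR_EL2) | SCTLR_Axx_ELx_WXN, SCTLR_EL2);
        isb();
        flush_xen_tlb_local();
    }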

Patch

diff --git a/xen/arch/arm/alternative.c b/xen/arch/arm/alternative.c
index 237c4e5642..2481521c9c 100644
--- a/xen/arch/arm/alternative.c
+++ b/xen/arch/arm/alternative.c
@@ -185,9 +185,6 @@  static int __apply_alternatives_multi_stop(void *unused)
     {
         int ret;
         struct alt_region region;
-        mfn_t xen_mfn = virt_to_mfn(_start);
-        paddr_t xen_size = _end - _start;
-        unsigned int xen_order = get_order_from_bytes(xen_size);
         void *xenmap;
 
         BUG_ON(patched);
@@ -196,8 +193,7 @@  static int __apply_alternatives_multi_stop(void *unused)
          * The text and inittext section are read-only. So re-map Xen to
          * be able to patch the code.
          */
-        xenmap = __vmap(&xen_mfn, 1U << xen_order, 1, 1, PAGE_HYPERVISOR,
-                        VMAP_DEFAULT);
+        xenmap = xen_map_text_rw();
         /* Re-mapping Xen is not expected to fail during boot. */
         BUG_ON(!xenmap);
 
@@ -208,7 +204,7 @@  static int __apply_alternatives_multi_stop(void *unused)
         /* The patching is not expected to fail during boot. */
         BUG_ON(ret != 0);
 
-        vunmap(xenmap);
+        xen_unmap_text_rw(xenmap);
 
         /* Barriers provided by the cache flushing */
         write_atomic(&patched, 1);
diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 1422091436..defb1efaad 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -195,6 +195,9 @@  extern void mmu_init_secondary_cpu(void);
 extern void setup_xenheap_mappings(unsigned long base_mfn, unsigned long nr_mfns);
 /* Map a frame table to cover physical addresses ps through pe */
 extern void setup_frametable_mappings(paddr_t ps, paddr_t pe);
+/* Create temporary Xen text read-write mapping */
+extern void *xen_map_text_rw(void);
+extern void xen_unmap_text_rw(void *va);
 /* Map a 4k page in a fixmap entry */
 extern void set_fixmap(unsigned map, mfn_t mfn, unsigned attributes);
 /* Remove a mapping from a fixmap entry */
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 53ea13641b..b18c7cd373 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -637,6 +637,31 @@  static void clear_table(void *table)
 }
 
 #ifdef CONFIG_COLORING
+void* __init xen_map_text_rw(void)
+{
+    paddr_t xen_paddr = __pa(_start);
+    unsigned int xen_size = 1 << get_order_from_bytes(_end - _start);
+    void *va = vm_alloc(xen_size, 1, VMAP_DEFAULT);
+    unsigned long cur = (unsigned long)va;
+    mfn_t mfn_col;
+    unsigned int i;
+
+    for ( i = 0; i < xen_size; i++, cur += PAGE_SIZE )
+    {
+        xen_paddr = next_xen_colored(xen_paddr);
+        mfn_col = maddr_to_mfn(xen_paddr);
+        if ( map_pages_to_xen(cur, mfn_col, 1, PAGE_HYPERVISOR) )
+            return NULL;
+        xen_paddr += PAGE_SIZE;
+    }
+    return va;
+}
+
+void __init xen_unmap_text_rw(void *va)
+{
+    vunmap(va);
+}
+
 /*
  * Translate a Xen (.text) virtual address to the colored physical one
  * depending on the hypervisor configuration.
@@ -796,6 +821,19 @@  void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
     xen_pt_enforce_wnx();
 }
 #else
+void* __init xen_map_text_rw(void)
+{
+    unsigned int xen_order = get_order_from_bytes(_end - _start);
+    mfn_t xen_mfn = virt_to_mfn(_start);
+    return __vmap(&xen_mfn, 1U << xen_order, 1, 1, PAGE_HYPERVISOR,
+                  VMAP_DEFAULT);
+}
+
+void __init xen_unmap_text_rw(void *va)
+{
+    vunmap(va);
+}
+
 /* Boot-time pagetable setup.
  * Changes here may need matching changes in head.S */
 void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
diff --git a/xen/common/vmap.c b/xen/common/vmap.c
index 4fd6b3067e..bedfc9d418 100644
--- a/xen/common/vmap.c
+++ b/xen/common/vmap.c
@@ -45,8 +45,8 @@  void __init vm_init_type(enum vmap_region type, void *start, void *end)
     populate_pt_range(va, vm_low[type] - nr);
 }
 
-static void *vm_alloc(unsigned int nr, unsigned int align,
-                      enum vmap_region t)
+void *vm_alloc(unsigned int nr, unsigned int align,
+               enum vmap_region t)
 {
     unsigned int start, bit;
 
diff --git a/xen/include/xen/vmap.h b/xen/include/xen/vmap.h
index b0f7632e89..dcf2be692f 100644
--- a/xen/include/xen/vmap.h
+++ b/xen/include/xen/vmap.h
@@ -12,6 +12,8 @@  enum vmap_region {
 
 void vm_init_type(enum vmap_region type, void *start, void *end);
 
+void *vm_alloc(unsigned int nr, unsigned int align,
+               enum vmap_region t);
 void *__vmap(const mfn_t *mfn, unsigned int granularity, unsigned int nr,
              unsigned int align, unsigned int flags, enum vmap_region);
 void *vmap(const mfn_t *mfn, unsigned int nr);