| Message ID | 20221216114853.8227-12-julien@xen.org (mailing list archive) |
|---|---|
| State | New |
| Series | Remove the directmap |
On 16.12.2022 12:48, Julien Grall wrote:
> From: Hongyan Xia <hongyxia@amazon.com>
>
> Also add a helper function to retrieve it. Change arch_mfns_in_direct_map
> to check this option before returning.

I think the abstract parts of this want to be generic right away. I can't
see why Arm would not suffer from the same issue that this work is trying
to address.

> This is added as a boot command line option, not a Kconfig to allow
> the user to experiment the feature without rebuild the hypervisor.

I think there wants to be a (generic) Kconfig piece here, to control the
default of the option. Plus a 2nd, prompt-less element which an arch can
select to force the setting to always-on, suppressing the choice of
default. That 2nd control would then be used to compile out the
boolean_param() for Arm for the time being.

That said, I think this change comes too early in the series, or there is
something missing. As said in reply to patch 10, while there the mapcache
is being initialized for the idle domain, I don't think it can be used
just yet. Read through mapcache_current_vcpu() to understand why I think
that way, paying particular attention to the ASSERT() near the end.

In preparation of this patch here I think the mfn_to_virt() uses have to
all disappear from map_domain_page(). Perhaps yet more strongly all
..._to_virt() (except fix_to_virt() and friends) and __va() have to
disappear up front from x86 and any code path which can be taken on x86
(which may simply mean purging all respective x86 #define-s, without
breaking the build in any way).

Jan
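Jan's two-symbol suggestion could be sketched roughly as below (a sketch only; the `HAS_ONLY_DIRECTMAP` and `DIRECTMAP_DEFAULT_ON` names are made up for illustration, not taken from the series):

```kconfig
# Prompt-less symbol an architecture selects to force the directmap
# always-on; it would also be used to compile out the boolean_param().
config HAS_ONLY_DIRECTMAP
	bool

# Generic, user-visible control for the *default* of the "directmap="
# command line option; hidden when an arch forces always-on.
config DIRECTMAP_DEFAULT_ON
	bool "Enable the directmap by default" if !HAS_ONLY_DIRECTMAP
	default y
```

With this shape, `opt_directmap` would be initialised from `CONFIG_DIRECTMAP_DEFAULT_ON` while the command line can still override it on architectures that do not select the prompt-less symbol.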
On Fri, 16 Dec 2022, Julien Grall wrote:
> From: Hongyan Xia <hongyxia@amazon.com>
>
> Also add a helper function to retrieve it. Change arch_mfns_in_direct_map
> to check this option before returning.
>
> This is added as a boot command line option, not a Kconfig to allow
> the user to experiment the feature without rebuild the hypervisor.
>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
>
> ----
>
> TODO:
>     * Do we also want to provide a Kconfig option?
>
> Changes since Hongyan's version:
>     * Reword the commit message
>     * opt_directmap is only modified during boot so mark it as
>       __ro_after_init
> ---
>  docs/misc/xen-command-line.pandoc | 12 ++++++++++++
>  xen/arch/arm/include/asm/mm.h     |  5 +++++
>  xen/arch/x86/include/asm/mm.h     | 17 ++++++++++++++++-
>  xen/arch/x86/mm.c                 |  3 +++
>  xen/arch/x86/setup.c              |  2 ++
>  5 files changed, 38 insertions(+), 1 deletion(-)
>
> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> index b7ee97be762e..a63e4612acac 100644
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -760,6 +760,18 @@ Specify the size of the console debug trace buffer. By specifying `cpu:`
>  additionally a trace buffer of the specified size is allocated per cpu.
>  The debug trace feature is only enabled in debugging builds of Xen.
>
> +### directmap (x86)
> +> `= <boolean>`
> +
> +> Default: `true`
> +
> +Enable or disable the direct map region in Xen.
> +
> +By default, Xen creates the direct map region which maps physical memory
> +in that region. Setting this to no will remove the direct map, blocking
> +exploits that leak secrets via speculative memory access in the direct
> +map.
> +
>  ### dma_bits
>  > `= <integer>`
>
> diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
> index 68adcac9fa8d..2366928d71aa 100644
> --- a/xen/arch/arm/include/asm/mm.h
> +++ b/xen/arch/arm/include/asm/mm.h
> @@ -406,6 +406,11 @@ static inline void page_set_xenheap_gfn(struct page_info *p, gfn_t gfn)
>      } while ( (y = cmpxchg(&p->u.inuse.type_info, x, nx)) != x );
>  }
>
> +static inline bool arch_has_directmap(void)
> +{
> +    return true;

Shouldn't arch_has_directmap return false for arm32?

> +}
> +
>  #endif /* __ARCH_ARM_MM__ */
>  /*
>   * Local variables:
> diff --git a/xen/arch/x86/include/asm/mm.h b/xen/arch/x86/include/asm/mm.h
> index db29e3e2059f..cf8b20817c6c 100644
> --- a/xen/arch/x86/include/asm/mm.h
> +++ b/xen/arch/x86/include/asm/mm.h
> @@ -464,6 +464,8 @@ static inline int get_page_and_type(struct page_info *page,
>      ASSERT(((_p)->count_info & PGC_count_mask) != 0);  \
>      ASSERT(page_get_owner(_p) == (_d))
>
> +extern bool opt_directmap;
> +
>  /******************************************************************************
>   * With shadow pagetables, the different kinds of address start
>   * to get get confusing.
> @@ -620,13 +622,26 @@ extern const char zero_page[];
>  /* Build a 32bit PSE page table using 4MB pages. */
>  void write_32bit_pse_identmap(uint32_t *l2);
>
> +static inline bool arch_has_directmap(void)
> +{
> +    return opt_directmap;
> +}
> +
>  /*
>   * x86 maps part of physical memory via the directmap region.
>   * Return whether the range of MFN falls in the directmap region.
> + *
> + * When boot command line sets directmap=no, we will not have a direct map at
> + * all so this will always return false.
>   */
>  static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
>  {
> -    unsigned long eva = min(DIRECTMAP_VIRT_END, HYPERVISOR_VIRT_END);
> +    unsigned long eva;
> +
> +    if ( !arch_has_directmap() )
> +        return false;
> +
> +    eva = min(DIRECTMAP_VIRT_END, HYPERVISOR_VIRT_END);
>
>      return (mfn + nr) <= (virt_to_mfn(eva - 1) + 1);
>  }
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 041bd4cfde17..e76e135b96fc 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -157,6 +157,9 @@ l1_pgentry_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
>  l1_pgentry_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
>      l1_fixmap_x[L1_PAGETABLE_ENTRIES];
>
> +bool __ro_after_init opt_directmap = true;
> +boolean_param("directmap", opt_directmap);
> +
>  /* Frame table size in pages. */
>  unsigned long max_page;
>  unsigned long total_pages;
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index 1c2e09711eb0..2cb051c6e4e7 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1423,6 +1423,8 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>      if ( highmem_start )
>          xenheap_max_mfn(PFN_DOWN(highmem_start - 1));
>
> +    printk("Booting with directmap %s\n", arch_has_directmap() ? "on" : "off");
> +
>      /*
>       * Walk every RAM region and map it in its entirety (on x86/64, at least)
>       * and notify it to the boot allocator.
> --
> 2.38.1
>
Hi Stefano,

On 23/01/2023 21:45, Stefano Stabellini wrote:
>> diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
>> index 68adcac9fa8d..2366928d71aa 100644
>> --- a/xen/arch/arm/include/asm/mm.h
>> +++ b/xen/arch/arm/include/asm/mm.h
>> @@ -406,6 +406,11 @@ static inline void page_set_xenheap_gfn(struct page_info *p, gfn_t gfn)
>>      } while ( (y = cmpxchg(&p->u.inuse.type_info, x, nx)) != x );
>>  }
>>
>> +static inline bool arch_has_directmap(void)
>> +{
>> +    return true;
>
> Shouldn't arch_has_directmap return false for arm32?

We still have a directmap on Arm32, but it only covers the xenheap.

Cheers,
diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index b7ee97be762e..a63e4612acac 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -760,6 +760,18 @@ Specify the size of the console debug trace buffer. By specifying `cpu:`
 additionally a trace buffer of the specified size is allocated per cpu.
 The debug trace feature is only enabled in debugging builds of Xen.
 
+### directmap (x86)
+> `= <boolean>`
+
+> Default: `true`
+
+Enable or disable the direct map region in Xen.
+
+By default, Xen creates the direct map region which maps physical memory
+in that region. Setting this to no will remove the direct map, blocking
+exploits that leak secrets via speculative memory access in the direct
+map.
+
 ### dma_bits
 > `= <integer>`
 
diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
index 68adcac9fa8d..2366928d71aa 100644
--- a/xen/arch/arm/include/asm/mm.h
+++ b/xen/arch/arm/include/asm/mm.h
@@ -406,6 +406,11 @@ static inline void page_set_xenheap_gfn(struct page_info *p, gfn_t gfn)
     } while ( (y = cmpxchg(&p->u.inuse.type_info, x, nx)) != x );
 }
 
+static inline bool arch_has_directmap(void)
+{
+    return true;
+}
+
 #endif /* __ARCH_ARM_MM__ */
 /*
  * Local variables:
diff --git a/xen/arch/x86/include/asm/mm.h b/xen/arch/x86/include/asm/mm.h
index db29e3e2059f..cf8b20817c6c 100644
--- a/xen/arch/x86/include/asm/mm.h
+++ b/xen/arch/x86/include/asm/mm.h
@@ -464,6 +464,8 @@ static inline int get_page_and_type(struct page_info *page,
     ASSERT(((_p)->count_info & PGC_count_mask) != 0);  \
     ASSERT(page_get_owner(_p) == (_d))
 
+extern bool opt_directmap;
+
 /******************************************************************************
  * With shadow pagetables, the different kinds of address start
  * to get get confusing.
@@ -620,13 +622,26 @@ extern const char zero_page[];
 /* Build a 32bit PSE page table using 4MB pages. */
 void write_32bit_pse_identmap(uint32_t *l2);
 
+static inline bool arch_has_directmap(void)
+{
+    return opt_directmap;
+}
+
 /*
  * x86 maps part of physical memory via the directmap region.
  * Return whether the range of MFN falls in the directmap region.
+ *
+ * When boot command line sets directmap=no, we will not have a direct map at
+ * all so this will always return false.
  */
 static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
 {
-    unsigned long eva = min(DIRECTMAP_VIRT_END, HYPERVISOR_VIRT_END);
+    unsigned long eva;
+
+    if ( !arch_has_directmap() )
+        return false;
+
+    eva = min(DIRECTMAP_VIRT_END, HYPERVISOR_VIRT_END);
 
     return (mfn + nr) <= (virt_to_mfn(eva - 1) + 1);
 }
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 041bd4cfde17..e76e135b96fc 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -157,6 +157,9 @@ l1_pgentry_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
 l1_pgentry_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
     l1_fixmap_x[L1_PAGETABLE_ENTRIES];
 
+bool __ro_after_init opt_directmap = true;
+boolean_param("directmap", opt_directmap);
+
 /* Frame table size in pages. */
 unsigned long max_page;
 unsigned long total_pages;
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 1c2e09711eb0..2cb051c6e4e7 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1423,6 +1423,8 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     if ( highmem_start )
         xenheap_max_mfn(PFN_DOWN(highmem_start - 1));
 
+    printk("Booting with directmap %s\n", arch_has_directmap() ? "on" : "off");
+
     /*
      * Walk every RAM region and map it in its entirety (on x86/64, at least)
      * and notify it to the boot allocator.