Message ID | 20241114145715.59777-3-roger.pau@citrix.com (mailing list archive)
---|---
State | New
Series | x86/mm: miscellaneous fixes
On 14.11.2024 15:57, Roger Pau Monne wrote:
> @@ -5517,7 +5524,8 @@ int map_pages_to_xen(
>          L3T_LOCK(current_l3page);
>          ol3e = *pl3e;
>
> -        if ( cpu_has_page1gb && IS_L3E_ALIGNED(virt, mfn) &&
> +        if ( cpu_has_page1gb &&
> +             (!(flags & _PAGE_PRESENT) || IS_L3E_ALIGNED(virt, mfn)) &&
>               nr_mfns >= (1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) &&
>               !(flags & (_PAGE_PAT | MAP_SMALL_PAGES)) )
>          {
> @@ -5636,7 +5644,7 @@ int map_pages_to_xen(
>          if ( !pl2e )
>              goto out;
>
> -        if ( IS_L2E_ALIGNED(virt, mfn) &&
> +        if ( (!(flags & _PAGE_PRESENT) || IS_L2E_ALIGNED(virt, mfn)) &&
>               (nr_mfns >= (1u << PAGETABLE_ORDER)) &&
>               !(flags & (_PAGE_PAT|MAP_SMALL_PAGES)) )
>          {

Upon inspecting Andrew's report of crashes I noticed that this can't be quite
right. We can't entirely skip the alignment check when non-present mappings
are requested; we merely need to limit the check to the VA. In a reply to
the 1st v2 I actually had it arranged to match that requirement:

    if ( (cpu_has_page1gb || !(flags & _PAGE_PRESENT)) &&
         IS_L3E_ALIGNED(virt, flags & _PAGE_PRESENT ? mfn : _mfn(0)) &&
         nr_mfns >= (1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) &&
         !(flags & (_PAGE_PAT | MAP_SMALL_PAGES)) )

Yet then I didn't pay attention to the difference when reviewing the 2nd v2
(that versioning issue of course isn't helping here either).

I'm afraid I can't (yet) connect the observed bad behavior with this issue,
though.

Jan
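To make the distinction concrete: for a non-present entry only the VA's
position within the slot matters, which is exactly what substituting _mfn(0)
achieves, while a present superpage additionally needs a suitably aligned MFN.
A minimal standalone sketch of that distinction (the constants and the check
below are mock-ups for illustration, not the definitions from
xen/arch/x86/mm.c):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Mock values matching x86: 4k pages, 512 entries per page-table level. */
    #define PAGE_SHIFT      12
    #define PAGETABLE_ORDER 9
    #define PFN_DOWN(va)    ((va) >> PAGE_SHIFT)

    /*
     * Mock alignment test: the low PAGETABLE_ORDER * (n - 1) bits of both the
     * VA's page frame and the MFN must be clear for a level-n superpage.
     */
    static bool is_lne_aligned(uint64_t va, uint64_t mfn, unsigned int n)
    {
        uint64_t mask = (1UL << (PAGETABLE_ORDER * (n - 1))) - 1;

        return !((PFN_DOWN(va) | mfn) & mask);
    }

    int main(void)
    {
        uint64_t va  = 1UL << 30;    /* a 1G-aligned (hence 2M-aligned) VA */
        uint64_t mfn = 0x12345;      /* an MFN that is not 2M-aligned */

        /* Present mapping: the misaligned MFN rules out an L2 superpage. */
        assert(!is_lne_aligned(va, mfn, 2));

        /* Non-present mapping: substituting 0 leaves a VA-only check, which passes. */
        assert(is_lne_aligned(va, 0, 2));

        /* A VA that is not 2M-aligned must still be rejected, present or not. */
        assert(!is_lne_aligned(va + (4UL << 10), 0, 2));

        puts("alignment sketch OK");
        return 0;
    }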
On Fri, Nov 15, 2024 at 10:09:39AM +0100, Jan Beulich wrote:
> On 14.11.2024 15:57, Roger Pau Monne wrote:
> > @@ -5517,7 +5524,8 @@ int map_pages_to_xen(
> >          L3T_LOCK(current_l3page);
> >          ol3e = *pl3e;
> >
> > -        if ( cpu_has_page1gb && IS_L3E_ALIGNED(virt, mfn) &&
> > +        if ( cpu_has_page1gb &&
> > +             (!(flags & _PAGE_PRESENT) || IS_L3E_ALIGNED(virt, mfn)) &&
> >               nr_mfns >= (1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) &&
> >               !(flags & (_PAGE_PAT | MAP_SMALL_PAGES)) )
> >          {
> > @@ -5636,7 +5644,7 @@ int map_pages_to_xen(
> >          if ( !pl2e )
> >              goto out;
> >
> > -        if ( IS_L2E_ALIGNED(virt, mfn) &&
> > +        if ( (!(flags & _PAGE_PRESENT) || IS_L2E_ALIGNED(virt, mfn)) &&
> >               (nr_mfns >= (1u << PAGETABLE_ORDER)) &&
> >               !(flags & (_PAGE_PAT|MAP_SMALL_PAGES)) )
> >          {
>
> Upon inspecting Andrew's report of crashes I noticed that this can't be quite
> right. We can't entirely skip the alignment check when non-present mappings
> are requested; we merely need to limit the check to the VA. In a reply to
> the 1st v2 I actually had it arranged to match that requirement:
>
>     if ( (cpu_has_page1gb || !(flags & _PAGE_PRESENT)) &&
>          IS_L3E_ALIGNED(virt, flags & _PAGE_PRESENT ? mfn : _mfn(0)) &&
>          nr_mfns >= (1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) &&
>          !(flags & (_PAGE_PAT | MAP_SMALL_PAGES)) )
>
> Yet then I didn't pay attention to the difference when reviewing the 2nd v2
> (that versioning issue of course isn't helping here either).
>
> I'm afraid I can't (yet) connect the observed bad behavior with this issue,
> though.

I think this might be caused by map_pages_to_xen() now freeing vmap regions
still in use, and that seems to manifest with the memnodemap array being
unintentionally unmapped (because it's allocated with vmap_contig()).

See the usage of mfn_to_nid() in free_heap_pages().

Will prepare a patch, sorry.

Thanks, Roger.
On 15.11.2024 10:09, Jan Beulich wrote:
> On 14.11.2024 15:57, Roger Pau Monne wrote:
>> @@ -5517,7 +5524,8 @@ int map_pages_to_xen(
>>          L3T_LOCK(current_l3page);
>>          ol3e = *pl3e;
>>
>> -        if ( cpu_has_page1gb && IS_L3E_ALIGNED(virt, mfn) &&
>> +        if ( cpu_has_page1gb &&
>> +             (!(flags & _PAGE_PRESENT) || IS_L3E_ALIGNED(virt, mfn)) &&
>>               nr_mfns >= (1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) &&
>>               !(flags & (_PAGE_PAT | MAP_SMALL_PAGES)) )
>>          {
>> @@ -5636,7 +5644,7 @@ int map_pages_to_xen(
>>          if ( !pl2e )
>>              goto out;
>>
>> -        if ( IS_L2E_ALIGNED(virt, mfn) &&
>> +        if ( (!(flags & _PAGE_PRESENT) || IS_L2E_ALIGNED(virt, mfn)) &&
>>               (nr_mfns >= (1u << PAGETABLE_ORDER)) &&
>>               !(flags & (_PAGE_PAT|MAP_SMALL_PAGES)) )
>>          {
>
> Upon inspecting Andrew's report of crashes I noticed that this can't be quite
> right. We can't entirely skip the alignment check when non-present mappings
> are requested; we merely need to limit the check to the VA. In a reply to
> the 1st v2 I actually had it arranged to match that requirement:
>
>     if ( (cpu_has_page1gb || !(flags & _PAGE_PRESENT)) &&
>          IS_L3E_ALIGNED(virt, flags & _PAGE_PRESENT ? mfn : _mfn(0)) &&
>          nr_mfns >= (1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) &&
>          !(flags & (_PAGE_PAT | MAP_SMALL_PAGES)) )
>
> Yet then I didn't pay attention to the difference when reviewing the 2nd v2
> (that versioning issue of course isn't helping here either).
>
> I'm afraid I can't (yet) connect the observed bad behavior with this issue,
> though.

I think I now can: memnodemap is set using vmap_contig(). The VESA frame
buffer unmap, neglecting to check VA alignment, will have wrongly unmapped
memnodemap.

Jan
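The mechanism Roger and Jan are pointing at can be modelled in a few lines.
This is a toy model, not Xen's page-table code: the layout, the node_map/fb
names and the unmap loop below are made up for illustration. If the destroy
path takes the 2M shortcut based only on the remaining size, without checking
that the VA is 2M-aligned, it clears the whole L2 slot the VA falls in, and
with it an unrelated neighbouring vmap allocation sharing that slot, which is
how a vmap_contig() user such as memnodemap can disappear:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE    (4UL << 10)
    #define L2_SIZE      (2UL << 20)            /* one L2 slot maps 2M */
    #define PAGES_PER_L2 (L2_SIZE / PAGE_SIZE)
    #define NR_PAGES     (4 * PAGES_PER_L2)     /* model an 8M window of the vmap area */

    static bool mapped[NR_PAGES];               /* one flag per 4k page */

    /* Buggy destroy path: takes the 2M shortcut on size alone, ignoring VA alignment. */
    static void buggy_unmap(unsigned long va, unsigned long size)
    {
        while ( size )
        {
            if ( size >= L2_SIZE )              /* no IS_L2E_ALIGNED()-style VA check */
            {
                unsigned long slot = va / L2_SIZE;  /* wipes the whole slot va falls in */

                memset(&mapped[slot * PAGES_PER_L2], 0, PAGES_PER_L2 * sizeof(bool));
                va += L2_SIZE;
                size -= L2_SIZE;
            }
            else
            {
                mapped[va / PAGE_SIZE] = false;
                va += PAGE_SIZE;
                size -= PAGE_SIZE;
            }
        }
    }

    int main(void)
    {
        /* Hypothetical layout: a 4k vmap_contig() user right below a larger, unaligned mapping. */
        unsigned long node_map = 1 * L2_SIZE;              /* stands in for memnodemap */
        unsigned long fb       = 1 * L2_SIZE + PAGE_SIZE;  /* frame-buffer stand-in, not 2M-aligned */
        unsigned long fb_size  = 2 * L2_SIZE;

        memset(mapped, 1, sizeof(mapped));

        buggy_unmap(fb, fb_size);

        assert(!mapped[fb / PAGE_SIZE]);        /* the frame buffer is gone, as requested... */
        assert(!mapped[node_map / PAGE_SIZE]);  /* ...but so is the unrelated neighbour: the bug */

        puts("toy model reproduces the collateral unmap");
        return 0;
    }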
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index ebb50a7836ac..b9a2234b53e1 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5493,10 +5493,17 @@ int map_pages_to_xen(
     }                                          \
 } while (0)

-/* Check if a (virt, mfn) tuple is aligned for a given slot level. */
-#define IS_LnE_ALIGNED(v, m, n) \
-    IS_ALIGNED(PFN_DOWN(v) | mfn_x(m), \
-               (1UL << (PAGETABLE_ORDER * ((n) - 1))) - 1)
+/*
+ * Check if a (virt, mfn) tuple is aligned for a given slot level. m must not
+ * be INVALID_MFN, since alignment is only relevant for present entries.
+ */
+#define IS_LnE_ALIGNED(v, m, n) ({                                  \
+    mfn_t m_ = m;                                                   \
+                                                                    \
+    ASSERT(!mfn_eq(m_, INVALID_MFN));                               \
+    IS_ALIGNED(PFN_DOWN(v) | mfn_x(m_),                             \
+               (1UL << (PAGETABLE_ORDER * ((n) - 1))) - 1);         \
+})

 #define IS_L2E_ALIGNED(v, m) IS_LnE_ALIGNED(v, m, 2)
 #define IS_L3E_ALIGNED(v, m) IS_LnE_ALIGNED(v, m, 3)
@@ -5517,7 +5524,8 @@ int map_pages_to_xen(
         L3T_LOCK(current_l3page);
         ol3e = *pl3e;

-        if ( cpu_has_page1gb && IS_L3E_ALIGNED(virt, mfn) &&
+        if ( cpu_has_page1gb &&
+             (!(flags & _PAGE_PRESENT) || IS_L3E_ALIGNED(virt, mfn)) &&
              nr_mfns >= (1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) &&
              !(flags & (_PAGE_PAT | MAP_SMALL_PAGES)) )
         {
@@ -5636,7 +5644,7 @@ int map_pages_to_xen(
         if ( !pl2e )
             goto out;

-        if ( IS_L2E_ALIGNED(virt, mfn) &&
+        if ( (!(flags & _PAGE_PRESENT) || IS_L2E_ALIGNED(virt, mfn)) &&
              (nr_mfns >= (1u << PAGETABLE_ORDER)) &&
              !(flags & (_PAGE_PAT|MAP_SMALL_PAGES)) )
         {
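As proposed in the hunk above, IS_LnE_ALIGNED() becomes a GNU statement
expression so that it can evaluate its MFN argument once, assert that it is a
real MFN, and still yield the alignment test as its value. A rough standalone
approximation of that shape (the mfn_t stand-ins, IS_ALIGNED() and the
constants below are simplified mock-ups, not Xen's definitions):

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT      12
    #define PAGETABLE_ORDER 9
    #define PFN_DOWN(va)    ((va) >> PAGE_SHIFT)
    #define ASSERT(x)       assert(x)

    /* Minimal stand-ins for Xen's typesafe MFN handling. */
    typedef struct { uint64_t m; } mfn_t;
    #define _mfn(x)         ((mfn_t){ .m = (x) })
    #define mfn_x(m)        ((m).m)
    #define INVALID_MFN     _mfn(~0UL)
    #define mfn_eq(a, b)    (mfn_x(a) == mfn_x(b))

    #define IS_ALIGNED(val, align) (!((val) & ((align) - 1)))

    /*
     * Same shape as the patched macro: a statement expression copies m once,
     * asserts it is a real MFN, and yields the alignment test as its value.
     */
    #define IS_LnE_ALIGNED(v, m, n) ({                              \
        mfn_t m_ = m;                                               \
                                                                    \
        ASSERT(!mfn_eq(m_, INVALID_MFN));                           \
        IS_ALIGNED(PFN_DOWN(v) | mfn_x(m_),                         \
                   1UL << (PAGETABLE_ORDER * ((n) - 1)));           \
    })
    #define IS_L2E_ALIGNED(v, m) IS_LnE_ALIGNED(v, m, 2)

    int main(void)
    {
        uint64_t virt = 2UL << 20;   /* a 2M-aligned VA */

        printf("aligned mfn:   %d\n", IS_L2E_ALIGNED(virt, _mfn(0x200)));  /* prints 1 */
        printf("unaligned mfn: %d\n", IS_L2E_ALIGNED(virt, _mfn(0x201)));  /* prints 0 */

        /* Passing INVALID_MFN now trips the assertion instead of silently "passing". */
        /* IS_L2E_ALIGNED(virt, INVALID_MFN); */
        return 0;
    }

Copying the argument into a local m_ also avoids evaluating the macro argument
twice, which plain-macro versions of such checks are otherwise prone to.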