x86/mm: fix alignment check for non-present entries

Message ID 20241115101225.70556-1-roger.pau@citrix.com (mailing list archive)
State New
Series x86/mm: fix alignment check for non-present entries

Commit Message

Roger Pau Monné Nov. 15, 2024, 10:12 a.m. UTC
While the alignment of the mfn is not relevant for non-present entries, the
alignment of the linear address is.  Commit 5b52e1b0436f introduced a
regression by not checking the alignment of the linear address when the new
entry was a non-present one.

Fix by always checking the alignment of the linear address; non-present entries
must only skip the alignment check of the physical address.

Fixes: 5b52e1b0436f ('x86/mm: skip super-page alignment checks for non-present entries')
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/mm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Jan Beulich Nov. 15, 2024, 10:37 a.m. UTC | #1
On 15.11.2024 11:12, Roger Pau Monne wrote:
> While the alignment of the mfn is not relevant for non-present entries, the
> alignment of the linear address is.  Commit 5b52e1b0436f introduced a
> regression by not checking the alignment of the linear address when the new
> entry was a non-present one.
> 
> Fix by always checking the alignment of the linear address; non-present entries
> must only skip the alignment check of the physical address.
> 
> Fixes: 5b52e1b0436f ('x86/mm: skip super-page alignment checks for non-present entries')
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Suggested-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
Patch

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 5d7e8d78718c..494c14e80ff9 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5525,7 +5525,7 @@  int map_pages_to_xen(
         ol3e = *pl3e;
 
         if ( cpu_has_page1gb &&
-             (!(flags & _PAGE_PRESENT) || IS_L3E_ALIGNED(virt, mfn)) &&
+             IS_L3E_ALIGNED(virt, flags & _PAGE_PRESENT ? mfn : _mfn(0)) &&
              nr_mfns >= (1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) &&
              !(flags & (_PAGE_PAT | MAP_SMALL_PAGES)) )
         {
@@ -5644,7 +5644,7 @@  int map_pages_to_xen(
         if ( !pl2e )
             goto out;
 
-        if ( (!(flags & _PAGE_PRESENT) || IS_L2E_ALIGNED(virt, mfn)) &&
+        if ( IS_L2E_ALIGNED(virt, flags & _PAGE_PRESENT ? mfn : _mfn(0)) &&
              (nr_mfns >= (1u << PAGETABLE_ORDER)) &&
              !(flags & (_PAGE_PAT|MAP_SMALL_PAGES)) )
         {