| Message ID | 20210909145945.12192-3-david@redhat.com (mailing list archive) |
|---|---|
| State | New |
| Series | s390: fixes, cleanups and optimizations for page table walkers |
```diff
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index b6b56cd4ca64..9023bf3ced89 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -690,9 +690,10 @@ void __gmap_zap(struct gmap *gmap, unsigned long gaddr)
 
 		/* Get pointer to the page table entry */
 		ptep = get_locked_pte(gmap->mm, vmaddr, &ptl);
-		if (likely(ptep))
+		if (likely(ptep)) {
 			ptep_zap_unused(gmap->mm, vmaddr, ptep, 0);
-		pte_unmap_unlock(ptep, ptl);
+			pte_unmap_unlock(ptep, ptl);
+		}
 	}
 }
 EXPORT_SYMBOL_GPL(__gmap_zap);
```
... otherwise we will try unlocking a spinlock that was never locked via a garbage pointer.

At the time we reach this code path, we usually successfully looked up a PGSTE already; however, evil user space could have manipulated the VMA layout in the meantime and triggered removal of the page table.

Fixes: 1e133ab296f3 ("s390/mm: split arch/s390/mm/pgtable.c")
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/s390/mm/gmap.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
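For illustration, here is a minimal sketch of the locking pattern the patch enforces. The helper name `zap_one_guest_pte()` and the surrounding simplifications are hypothetical; only `get_locked_pte()`, `ptep_zap_unused()` and `pte_unmap_unlock()` correspond to the calls in the diff above.

```c
#include <linux/mm.h>		/* get_locked_pte(), pte_unmap_unlock() */
#include <linux/pgtable.h>	/* pulls in the s390 ptep_zap_unused() declaration */
#include <asm/gmap.h>		/* struct gmap */

/*
 * Hypothetical, simplified helper mirroring the fixed __gmap_zap() flow:
 * the PTE is zapped and the page table lock released only when
 * get_locked_pte() actually found a page table and took the lock.
 *
 * If the page table was removed concurrently, get_locked_pte() returns
 * NULL and leaves ptl uninitialized; pte_unmap_unlock() boils down to
 * roughly spin_unlock(ptl) followed by pte_unmap(pte), so calling it on
 * the error path would unlock through a garbage pointer.
 */
static void zap_one_guest_pte(struct gmap *gmap, unsigned long vmaddr)
{
	spinlock_t *ptl;
	pte_t *ptep;

	ptep = get_locked_pte(gmap->mm, vmaddr, &ptl);
	if (likely(ptep)) {
		/* ptl now points to the held page table lock. */
		ptep_zap_unused(gmap->mm, vmaddr, ptep, 0);
		pte_unmap_unlock(ptep, ptl);
	}
	/* !ptep: nothing was mapped or locked, so there is nothing to unlock. */
}
```

The only functional change in the patch is the brace placement: `pte_unmap_unlock()` moves inside the `if (likely(ptep))` block, so the failure path no longer touches `ptl` at all.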