| Message ID | 20220110231530.665970-2-willy@infradead.org (mailing list archive) |
|---|---|
| State | Mainlined |
| Commit | 4e140f59d285c1ca1e5c81b4c13e27366865bd09 |
| Series | Assorted improvements to usercopy |
On Mon, 10 Jan 2022 23:15:27 +0000 "Matthew Wilcox (Oracle)" <willy@infradead.org> wrote:

> If you are copying to an address in the kmap region, you may not copy
> across a page boundary,

In the source, the destination or in both, and why may we not?

> no matter what the size of the underlying
> allocation.  You can't kmap() a slab page because slab pages always
> come from low memory.

Why not?  kmap() does

	if (!PageHighMem(page))
		addr = page_address(page);
	else
		addr = kmap_high(page);
On Mon, May 09, 2022 at 08:37:42PM -0700, Andrew Morton wrote:
> On Mon, 10 Jan 2022 23:15:27 +0000 "Matthew Wilcox (Oracle)" <willy@infradead.org> wrote:
>
> > If you are copying to an address in the kmap region, you may not copy
> > across a page boundary,
>
> In the source, the destination or in both, and why may we not?

This depends on direction.  For copying to userspace, the source (kmap).
For copying from userspace, the destination (kmap).

> > no matter what the size of the underlying
> > allocation.  You can't kmap() a slab page because slab pages always
> > come from low memory.

As in it'll be processed as a slab page instead of kmap by the usercopy
checks?
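The page-boundary rule under discussion reduces to address arithmetic: a kmap slot maps exactly one page, so a copy of n bytes starting at ptr is only safe if its last byte stays inside the page containing ptr. A minimal userspace sketch of that check, assuming 4 KiB pages (the helper name `spans_kmap_page` is illustrative, not a kernel symbol):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL	/* assumed page size, for illustration only */

/*
 * A kmap mapping covers a single page, so a copy of n bytes (n > 0)
 * starting at ptr is only valid if its last byte, ptr + n - 1, does
 * not pass the last byte of the page containing ptr.
 */
static bool spans_kmap_page(uintptr_t ptr, size_t n)
{
	/* Address of the last byte of the page containing ptr. */
	uintptr_t page_end = ptr | (PAGE_SIZE - 1);

	return ptr + n - 1 > page_end;
}

int main(void)
{
	/* 0x1ff0 + 32 - 1 = 0x200f crosses the page ending at 0x1fff. */
	printf("%d\n", spans_kmap_page(0x1ff0, 32));	/* 1: would abort */
	printf("%d\n", spans_kmap_page(0x1ff0, 16));	/* 0: ok */
	return 0;
}
```

This is exactly the `page_end` test the patch below adds to check_heap_object(), just lifted out of kernel context.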
diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
index 032e020853aa..731ee7cc40a5 100644
--- a/arch/x86/include/asm/highmem.h
+++ b/arch/x86/include/asm/highmem.h
@@ -26,6 +26,7 @@
 #include <asm/tlbflush.h>
 #include <asm/paravirt.h>
 #include <asm/fixmap.h>
+#include <asm/pgtable_areas.h>
 
 /* declarations for highmem.c */
 extern unsigned long highstart_pfn, highend_pfn;
diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
index 0a0b2b09b1b8..01fb76d101b0 100644
--- a/include/linux/highmem-internal.h
+++ b/include/linux/highmem-internal.h
@@ -149,6 +149,11 @@ static inline void totalhigh_pages_add(long count)
 	atomic_long_add(count, &_totalhigh_pages);
 }
 
+static inline bool is_kmap_addr(const void *x)
+{
+	unsigned long addr = (unsigned long)x;
+	return addr >= PKMAP_ADDR(0) && addr < PKMAP_ADDR(LAST_PKMAP);
+}
 #else /* CONFIG_HIGHMEM */
 
 static inline struct page *kmap_to_page(void *addr)
@@ -234,6 +239,11 @@ static inline void __kunmap_atomic(void *addr)
 static inline unsigned int nr_free_highpages(void) { return 0; }
 static inline unsigned long totalhigh_pages(void) { return 0UL; }
 
+static inline bool is_kmap_addr(const void *x)
+{
+	return false;
+}
+
 #endif /* CONFIG_HIGHMEM */
 
 /*
diff --git a/mm/usercopy.c b/mm/usercopy.c
index d0d268135d96..2d13bc3bd83b 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -229,12 +229,16 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (!virt_addr_valid(ptr))
 		return;
 
-	/*
-	 * When CONFIG_HIGHMEM=y, kmap_to_page() will give either the
-	 * highmem page or fallback to virt_to_page().  The following
-	 * is effectively a highmem-aware virt_to_slab().
-	 */
-	folio = page_folio(kmap_to_page((void *)ptr));
+	if (is_kmap_addr(ptr)) {
+		unsigned long page_end = (unsigned long)ptr | (PAGE_SIZE - 1);
+
+		if ((unsigned long)ptr + n - 1 > page_end)
+			usercopy_abort("kmap", NULL, to_user,
+				       offset_in_page(ptr), n);
+		return;
+	}
+
+	folio = virt_to_folio(ptr);
 
 	if (folio_test_slab(folio)) {
 		/* Check slab allocator for flags and size. */
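For readers unfamiliar with the PKMAP window: the new is_kmap_addr() is just a range test against the fixed span of virtual addresses that kmap_high() hands out. A standalone sketch of that test with placeholder values (PKMAP_BASE below is made up for illustration; the real value and LAST_PKMAP are set by the architecture):

```c
#include <stdbool.h>
#include <stdio.h>

/* Placeholder layout: LAST_PKMAP contiguous page slots at PKMAP_BASE. */
#define PAGE_SHIFT	12
#define LAST_PKMAP	1024
#define PKMAP_BASE	0xff400000UL	/* illustrative, not a real address */
#define PKMAP_ADDR(nr)	(PKMAP_BASE + ((unsigned long)(nr) << PAGE_SHIFT))

/* Mirrors the patch: true iff x lies inside the kmap virtual window. */
static bool is_kmap_addr(const void *x)
{
	unsigned long addr = (unsigned long)x;

	return addr >= PKMAP_ADDR(0) && addr < PKMAP_ADDR(LAST_PKMAP);
}

int main(void)
{
	printf("%d\n", is_kmap_addr((void *)PKMAP_ADDR(3)));	/* 1 */
	printf("%d\n", is_kmap_addr((void *)(PKMAP_BASE - 1)));	/* 0 */
	return 0;
}
```

With the kmap case classified up front like this, check_heap_object() can use the plain virt_to_folio() instead of the highmem-aware kmap_to_page() fallback it replaced.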