Message ID | 20220505071037.4121100-1-songyuanzheng@huawei.com (mailing list archive)
---|---
State | New
Series | mm: usercopy: move the virt_addr_valid() below the is_vmalloc_addr()
Matthew & Kees,

On Thu, 5 May 2022 07:10:37 +0000 Yuanzheng Song <songyuanzheng@huawei.com> wrote:

> The is_kmap_addr() and the is_vmalloc_addr() in the check_heap_object()
> will not work, because the virt_addr_valid() will exclude the kmap and
> vmalloc regions. So let's move the virt_addr_valid() below
> the is_vmalloc_addr().

The author,

> Signed-off-by: Yuanzheng Song <songyuanzheng@huawei.com>

Tells me off-list that this fix:

> --- a/mm/usercopy.c
> +++ b/mm/usercopy.c
> @@ -163,9 +163,6 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
>  {
>  	struct folio *folio;
>
> -	if (!virt_addr_valid(ptr))
> -		return;
> -
>  	if (is_kmap_addr(ptr)) {
>  		unsigned long page_end = (unsigned long)ptr | (PAGE_SIZE - 1);
>
> @@ -190,6 +187,9 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
>  		return;
>  	}
>
> +	if (!virt_addr_valid(ptr))
> +		return;
> +
>  	folio = virt_to_folio(ptr);
>
>  	if (folio_test_slab(folio)) {

is required to fix patches "mm/usercopy: Check kmap addresses properly"
and "mm/usercopy: Detect vmalloc overruns".
On Mon, May 09, 2022 at 08:37:32PM -0700, Andrew Morton wrote:
> Matthew & Kees,
>
> On Thu, 5 May 2022 07:10:37 +0000 Yuanzheng Song <songyuanzheng@huawei.com> wrote:
>
> > The is_kmap_addr() and the is_vmalloc_addr() in the check_heap_object()
> > will not work, because the virt_addr_valid() will exclude the kmap and
> > vmalloc regions. So let's move the virt_addr_valid() below
> > the is_vmalloc_addr().
>
> The author,
>
> > Signed-off-by: Yuanzheng Song <songyuanzheng@huawei.com>
>
> Tells me off-list that this fix:
>
> > --- a/mm/usercopy.c
> > +++ b/mm/usercopy.c
> > @@ -163,9 +163,6 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
> >  {
> >  	struct folio *folio;
> >
> > -	if (!virt_addr_valid(ptr))
> > -		return;
> > -
> >  	if (is_kmap_addr(ptr)) {
> >  		unsigned long page_end = (unsigned long)ptr | (PAGE_SIZE - 1);
> >
> > @@ -190,6 +187,9 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
> >  		return;
> >  	}
> >
> > +	if (!virt_addr_valid(ptr))
> > +		return;
> > +
> >  	folio = virt_to_folio(ptr);
> >
> >  	if (folio_test_slab(folio)) {
>
> is required to fix patches "mm/usercopy: Check kmap addresses properly"
> and "mm/usercopy: Detect vmalloc overruns".

Ah, this very well may be true! I will need to study this (or more
likely, I will build some selftests), but I suspect willy knows off the
top of his head. :)
On Thu, 5 May 2022 07:10:37 +0000, Yuanzheng Song wrote:
> The is_kmap_addr() and the is_vmalloc_addr() in the check_heap_object()
> will not work, because the virt_addr_valid() will exclude the kmap and
> vmalloc regions. So let's move the virt_addr_valid() below
> the is_vmalloc_addr().

Applied to for-next/hardening, thanks!

[1/1] mm: usercopy: move the virt_addr_valid() below the is_vmalloc_addr()
      https://git.kernel.org/kees/c/0a76d4c331b4
diff --git a/mm/usercopy.c b/mm/usercopy.c
index ac8a093e90c1..baeacc735b83 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -163,9 +163,6 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 {
 	struct folio *folio;
 
-	if (!virt_addr_valid(ptr))
-		return;
-
 	if (is_kmap_addr(ptr)) {
 		unsigned long page_end = (unsigned long)ptr | (PAGE_SIZE - 1);
 
@@ -190,6 +187,9 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 		return;
 	}
 
+	if (!virt_addr_valid(ptr))
+		return;
+
 	folio = virt_to_folio(ptr);
 
 	if (folio_test_slab(folio)) {
The is_kmap_addr() and the is_vmalloc_addr() in the check_heap_object()
will not work, because the virt_addr_valid() will exclude the kmap and
vmalloc regions. So let's move the virt_addr_valid() below
the is_vmalloc_addr().

Signed-off-by: Yuanzheng Song <songyuanzheng@huawei.com>
---
 mm/usercopy.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
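
To illustrate why the ordering matters, here is a minimal userspace C
sketch (not kernel code) that models the post-patch check order in
check_heap_object(). The stub_* classifiers are hypothetical stand-ins
for is_kmap_addr(), is_vmalloc_addr() and virt_addr_valid(); the real
kernel derives these from the pointer value. The sketch only shows the
control flow: kmap and vmalloc addresses are classified before the
virt_addr_valid() gate, so they are no longer silently skipped.

/* Minimal sketch of the post-patch dispatch order; stubs are illustrative. */
#include <stdbool.h>
#include <stdio.h>

enum region { REGION_KMAP, REGION_VMALLOC, REGION_LINEAR, REGION_OTHER };

/* Hypothetical stand-ins for the kernel's address classifiers. */
static bool stub_is_kmap_addr(enum region r)    { return r == REGION_KMAP; }
static bool stub_is_vmalloc_addr(enum region r) { return r == REGION_VMALLOC; }
static bool stub_virt_addr_valid(enum region r) { return r == REGION_LINEAR; }

/* Mirrors the check order in check_heap_object() after this patch. */
static const char *classify(enum region r)
{
	if (stub_is_kmap_addr(r))
		return "bounds-checked against the kmap'd page";

	if (stub_is_vmalloc_addr(r))
		return "bounds-checked against the vmalloc area";

	/* Only linear-map addresses reach the folio/slab checks. */
	if (!stub_virt_addr_valid(r))
		return "skipped";

	return "checked via virt_to_folio()";
}

int main(void)
{
	printf("kmap:    %s\n", classify(REGION_KMAP));
	printf("vmalloc: %s\n", classify(REGION_VMALLOC));
	printf("linear:  %s\n", classify(REGION_LINEAR));
	printf("other:   %s\n", classify(REGION_OTHER));
	return 0;
}

With the pre-patch order (virt_addr_valid() checked first), the kmap and
vmalloc cases in this sketch would both fall through to "skipped", which
is the dead-code situation the commit message describes.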