| Message ID | 20241008192910.2823726-1-snovitoll@gmail.com (mailing list archive) |
|---|---|
| State | Superseded |
| Series | [v4] mm, kasan, kmsan: copy_from/to_kernel_nofault |

| Context | Check | Description |
|---|---|---|
| netdev/tree_selection | success | Not a local patch |
On Tue, 8 Oct 2024 at 21:28, Sabyrzhan Tasbolatov <snovitoll@gmail.com> wrote:
>
> Instrument copy_from_kernel_nofault() with KMSAN for uninitialized kernel
> memory checks and copy_to_kernel_nofault() with KASAN, KCSAN to detect
> memory corruption.
>
> syzbot reported that the bpf_probe_read_kernel() kernel helper triggered a
> KASAN report via kasan_check_range(), which is not the expected behaviour,
> as copy_from_kernel_nofault() is meant to be a non-faulting helper.
>
> The solution, suggested by Marco Elver, is to replace the KASAN, KCSAN check
> in copy_from_kernel_nofault() with KMSAN detection of copying uninitialized
> kernel memory. In copy_to_kernel_nofault() we can retain
> instrument_write() explicitly for the memory corruption instrumentation.
>
> copy_to_kernel_nofault() is tested on x86_64 and arm64 with
> CONFIG_KASAN_SW_TAGS. On arm64 with CONFIG_KASAN_HW_TAGS, the
> kunit test currently fails. This needs more clarification, so it is
> currently disabled in the kunit test.
>
> Link: https://lore.kernel.org/linux-mm/CANpmjNMAVFzqnCZhEity9cjiqQ9CVN1X7qeeeAp_6yKjwKo8iw@mail.gmail.com/
> Reviewed-by: Marco Elver <elver@google.com>
> Reported-by: syzbot+61123a5daeb9f7454599@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=61123a5daeb9f7454599
> Reported-by: Andrey Konovalov <andreyknvl@gmail.com>
> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=210505
> Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
> ---
> v2:
> - squashed the 2 patches previously submitted to the -mm tree, based on Linus' tree
> v3:
> - moved checks to the *_nofault_loop macros per Marco's comments
> - edited the commit message
> v4:
> - replaced Suggested-by with Reviewed-by: Marco Elver

For future reference: no need to send a v+1 just for this tag. Usually
maintainers pick up tags from the last round without the original author
having to send out a v+1 with the tags. Of course, if you make other
corrections and need to send a v+1, then it is appropriate to collect tags
where those tags would remain valid (such as on unchanged patches that are
part of the series, or for simpler corrections).
On Wed, Oct 9, 2024 at 12:34 AM Marco Elver <elver@google.com> wrote:
>
> On Tue, 8 Oct 2024 at 21:28, Sabyrzhan Tasbolatov <snovitoll@gmail.com> wrote:
[...]
> > v4:
> > - replaced Suggested-By with Reviewed-By: Marco Elver
>
> For future reference: No need to send v+1 just for this tag. Usually
> maintainers pick up tags from the last round without the original
> author having to send out a v+1 with the tags. Of course, if you make
> other corrections and need to send a v+1, then it is appropriate to
> collect tags where those tags would remain valid (such as on unchanged
> patches part of the series, or for simpler corrections).

Thanks! Will do it next time.

Please advise whether Andrew needs to be notified in a separate cover letter
to drop the previously merged -mm tree patch and use this v4:
https://lore.kernel.org/all/20241008020150.4795AC4CEC6@smtp.kernel.org/
On Tue, Oct 8, 2024 at 9:28 PM Sabyrzhan Tasbolatov <snovitoll@gmail.com> wrote:
>
> Instrument copy_from_kernel_nofault() with KMSAN for uninitialized kernel
> memory check and copy_to_kernel_nofault() with KASAN, KCSAN to detect
> the memory corruption.
[...]
> Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com>

(Back from travels, looking at the patches again.)

[...]
> diff --git a/mm/kasan/kasan_test_c.c b/mm/kasan/kasan_test_c.c
> index a181e4780d9d..5cff90f831db 100644
> --- a/mm/kasan/kasan_test_c.c
> +++ b/mm/kasan/kasan_test_c.c
> @@ -1954,6 +1954,32 @@ static void rust_uaf(struct kunit *test)
>  	KUNIT_EXPECT_KASAN_FAIL(test, kasan_test_rust_uaf());
>  }
>
> +static void copy_to_kernel_nofault_oob(struct kunit *test)
> +{
> +	char *ptr;
> +	char buf[128];
> +	size_t size = sizeof(buf);
> +
> +	/* Not detecting fails currently with HW_TAGS */

Let's reword this to: This test currently fails with the HW_TAGS mode. The
reason is unknown and needs to be investigated.

> +	KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_HW_TAGS);
> +
> +	ptr = kmalloc(size - KASAN_GRANULE_SIZE, GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> +	OPTIMIZER_HIDE_VAR(ptr);
> +
> +	if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) {
> +		/* Check that the returned pointer is tagged. */
> +		KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN);
> +		KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL);
> +	}

Let's drop the checks above: if pointers returned by kmalloc are not tagged,
the checks below (and many other tests) will fail.

> +

Please add a comment here explaining why we only check copy_to_kernel_nofault
and not copy_from_kernel_nofault (is this because we cannot add KASAN
instrumentation to copy_from_kernel_nofault?).

> +	KUNIT_EXPECT_KASAN_FAIL(test,
> +			copy_to_kernel_nofault(&buf[0], ptr, size));
> +	KUNIT_EXPECT_KASAN_FAIL(test,
> +			copy_to_kernel_nofault(ptr, &buf[0], size));
> +	kfree(ptr);
> +}
[...]
On Wed, 9 Oct 2024 at 22:19, Andrey Konovalov <andreyknvl@gmail.com> wrote:
[...]
> Please add a comment here explaining why we only check
> copy_to_kernel_nofault and not copy_from_kernel_nofault (is this
> because we cannot add KASAN instrumentation to
> copy_from_kernel_nofault?).

Just to clarify: unless we can prove that there won't be any false positives,
I proposed to err on the side of being conservative here.

The new way of doing it, after we have already checked that the accessed
location is on a faulted-in page, may also be amenable to KASAN
instrumentation. But you can also come up with cases that would be a false
positive: e.g. some copy_from_kernel_nofault() for a large range, knowing
that if it accesses bad memory at least one page is not faulted in, but some
initial pages may be faulted in; in that case there'd be some error handling
that then deals with the failure. Again, this might be something that an eBPF
program could legally do.

On the other hand, we may want to know if we are leaking random
uninitialized kernel memory with KMSAN, to avoid infoleaks.

Only copy_to_kernel_nofault() should really have valid memory, otherwise we
risk corrupting the kernel. But these checks should only happen after we know
we're accessing faulted-in memory, again to avoid false positives.
On Wed, 9 Oct 2024 00:42:25 +0500 Sabyrzhan Tasbolatov <snovitoll@gmail.com> wrote:

> > > v4:
> > > - replaced Suggested-By with Reviewed-By: Marco Elver
> >
> > For future reference: No need to send v+1 just for this tag. [...]
>
> Thanks! Will do it next time.
>
> Please advise if Andrew should need to be notified in the separate cover letter
> to remove the prev. merged to -mm tree patch and use this v4:
> https://lore.kernel.org/all/20241008020150.4795AC4CEC6@smtp.kernel.org/

I've updated v3's changelog, thanks. I kept Marco's Suggested-by:, as that's
still relevant even with the Reviewed-by:.
```diff
diff --git a/mm/kasan/kasan_test_c.c b/mm/kasan/kasan_test_c.c
index a181e4780d9d..5cff90f831db 100644
--- a/mm/kasan/kasan_test_c.c
+++ b/mm/kasan/kasan_test_c.c
@@ -1954,6 +1954,32 @@ static void rust_uaf(struct kunit *test)
 	KUNIT_EXPECT_KASAN_FAIL(test, kasan_test_rust_uaf());
 }
 
+static void copy_to_kernel_nofault_oob(struct kunit *test)
+{
+	char *ptr;
+	char buf[128];
+	size_t size = sizeof(buf);
+
+	/* Not detecting fails currently with HW_TAGS */
+	KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_HW_TAGS);
+
+	ptr = kmalloc(size - KASAN_GRANULE_SIZE, GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+	OPTIMIZER_HIDE_VAR(ptr);
+
+	if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) {
+		/* Check that the returned pointer is tagged. */
+		KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN);
+		KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL);
+	}
+
+	KUNIT_EXPECT_KASAN_FAIL(test,
+			copy_to_kernel_nofault(&buf[0], ptr, size));
+	KUNIT_EXPECT_KASAN_FAIL(test,
+			copy_to_kernel_nofault(ptr, &buf[0], size));
+	kfree(ptr);
+}
+
 static struct kunit_case kasan_kunit_test_cases[] = {
 	KUNIT_CASE(kmalloc_oob_right),
 	KUNIT_CASE(kmalloc_oob_left),
@@ -2027,6 +2053,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
 	KUNIT_CASE(match_all_not_assigned),
 	KUNIT_CASE(match_all_ptr_tag),
 	KUNIT_CASE(match_all_mem_tag),
+	KUNIT_CASE(copy_to_kernel_nofault_oob),
 	KUNIT_CASE(rust_uaf),
 	{}
 };
diff --git a/mm/kmsan/kmsan_test.c b/mm/kmsan/kmsan_test.c
index 13236d579eba..9733a22c46c1 100644
--- a/mm/kmsan/kmsan_test.c
+++ b/mm/kmsan/kmsan_test.c
@@ -640,6 +640,22 @@ static void test_unpoison_memory(struct kunit *test)
 	KUNIT_EXPECT_TRUE(test, report_matches(&expect));
 }
 
+static void test_copy_from_kernel_nofault(struct kunit *test)
+{
+	long ret;
+	char buf[4], src[4];
+	size_t size = sizeof(buf);
+
+	EXPECTATION_UNINIT_VALUE_FN(expect, "copy_from_kernel_nofault");
+	kunit_info(
+		test,
+		"testing copy_from_kernel_nofault with uninitialized memory\n");
+
+	ret = copy_from_kernel_nofault((char *)&buf[0], (char *)&src[0], size);
+	USE(ret);
+	KUNIT_EXPECT_TRUE(test, report_matches(&expect));
+}
+
 static struct kunit_case kmsan_test_cases[] = {
 	KUNIT_CASE(test_uninit_kmalloc),
 	KUNIT_CASE(test_init_kmalloc),
@@ -664,6 +680,7 @@ static struct kunit_case kmsan_test_cases[] = {
 	KUNIT_CASE(test_long_origin_chain),
 	KUNIT_CASE(test_stackdepot_roundtrip),
 	KUNIT_CASE(test_unpoison_memory),
+	KUNIT_CASE(test_copy_from_kernel_nofault),
 	{},
 };
 
diff --git a/mm/maccess.c b/mm/maccess.c
index 518a25667323..3ca55ec63a6a 100644
--- a/mm/maccess.c
+++ b/mm/maccess.c
@@ -13,9 +13,14 @@ bool __weak copy_from_kernel_nofault_allowed(const void *unsafe_src,
 	return true;
 }
 
+/*
+ * The below only uses kmsan_check_memory() to ensure uninitialized kernel
+ * memory isn't leaked.
+ */
 #define copy_from_kernel_nofault_loop(dst, src, len, type, err_label)	\
 	while (len >= sizeof(type)) {					\
-		__get_kernel_nofault(dst, src, type, err_label);	\
+		__get_kernel_nofault(dst, src, type, err_label);	\
+		kmsan_check_memory(src, sizeof(type));			\
 		dst += sizeof(type);					\
 		src += sizeof(type);					\
 		len -= sizeof(type);					\
@@ -49,7 +54,8 @@ EXPORT_SYMBOL_GPL(copy_from_kernel_nofault);
 
 #define copy_to_kernel_nofault_loop(dst, src, len, type, err_label)	\
 	while (len >= sizeof(type)) {					\
-		__put_kernel_nofault(dst, src, type, err_label);	\
+		__put_kernel_nofault(dst, src, type, err_label);	\
+		instrument_write(dst, sizeof(type));			\
 		dst += sizeof(type);					\
 		src += sizeof(type);					\
 		len -= sizeof(type);					\
```