[RFC,v2,2/6] mm/slab: Perform init_on_free earlier

Message ID: 20200929183513.380760-3-alex.popov@linux.com (mailing list archive)
State: New, archived
Series: Break heap spraying needed for exploiting use-after-free

Commit Message

Alexander Popov Sept. 29, 2020, 6:35 p.m. UTC
Currently, in CONFIG_SLAB, init_on_free happens too late, and heap
objects go to the heap quarantine while still being dirty. Let's move
the memory clearing before the kasan_slab_free() call to fix that.

Signed-off-by: Alexander Popov <alex.popov@linux.com>
---
 mm/slab.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
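
For context, the condensed sketch below shows the ordering that this patch
changes in the CONFIG_SLAB free path when the KASAN quarantine is in use.
It is a simplified illustration derived from the diff, not the exact kernel
code: ___cache_free() is abbreviated and the trailing comments are added.

/* Before the patch (condensed): kasan_slab_free() may park the object in
 * the quarantine before the init_on_free clearing has run, so the object
 * sits there still holding its old contents.
 */
static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
					 unsigned long caller)
{
	/* Put the object into the quarantine, don't touch it for now. */
	if (kasan_slab_free(cachep, objp, _RET_IP_))
		return;				/* quarantined while still dirty */

	___cache_free(cachep, objp, caller);	/* memset() happens in here, too late */
}

/* After the patch (condensed): the object is zeroed first, so anything
 * that ends up in the quarantine is already cleared.
 */
static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
					 unsigned long caller)
{
	if (unlikely(slab_want_init_on_free(cachep)))
		memset(objp, 0, cachep->object_size);

	/* Put the object into the quarantine, don't touch it for now. */
	if (kasan_slab_free(cachep, objp, _RET_IP_))
		return;

	___cache_free(cachep, objp, caller);
}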

Comments

Alexander Potapenko Sept. 30, 2020, 12:50 p.m. UTC | #1
On Tue, Sep 29, 2020 at 8:35 PM Alexander Popov <alex.popov@linux.com> wrote:
>
> Currently, in CONFIG_SLAB, init_on_free happens too late, and heap
> objects go to the heap quarantine while still being dirty. Let's move
> the memory clearing before the kasan_slab_free() call to fix that.
>
> Signed-off-by: Alexander Popov <alex.popov@linux.com>
Reviewed-by: Alexander Potapenko <glider@google.com>

> ---
>  mm/slab.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index 3160dff6fd76..5140203c5b76 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -3414,6 +3414,9 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
>  static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
>                                          unsigned long caller)
>  {
> +       if (unlikely(slab_want_init_on_free(cachep)))
> +               memset(objp, 0, cachep->object_size);
> +
>         /* Put the object into the quarantine, don't touch it for now. */
>         if (kasan_slab_free(cachep, objp, _RET_IP_))
>                 return;
> @@ -3432,8 +3435,6 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
>         struct array_cache *ac = cpu_cache_get(cachep);
>
>         check_irq_off();
> -       if (unlikely(slab_want_init_on_free(cachep)))
> -               memset(objp, 0, cachep->object_size);
>         kmemleak_free_recursive(objp, cachep->flags);
>         objp = cache_free_debugcheck(cachep, objp, caller);
>         memcg_slab_free_hook(cachep, virt_to_head_page(objp), objp);
> --
> 2.26.2
>
Alexander Popov Oct. 1, 2020, 7:48 p.m. UTC | #2
On 30.09.2020 15:50, Alexander Potapenko wrote:
> On Tue, Sep 29, 2020 at 8:35 PM Alexander Popov <alex.popov@linux.com> wrote:
>>
>> Currently, in CONFIG_SLAB, init_on_free happens too late, and heap
>> objects go to the heap quarantine while still being dirty. Let's move
>> the memory clearing before the kasan_slab_free() call to fix that.
>>
>> Signed-off-by: Alexander Popov <alex.popov@linux.com>
> Reviewed-by: Alexander Potapenko <glider@google.com>

Thanks for the review, Alexander!

Do you have any idea how this patch series relates to the Memory Tagging
support that is currently under development?

Best regards,
Alexander
Alexander Popov Dec. 3, 2020, 7:50 p.m. UTC | #3
On 30.09.2020 15:50, Alexander Potapenko wrote:
> On Tue, Sep 29, 2020 at 8:35 PM Alexander Popov <alex.popov@linux.com> wrote:
>>
>> Currently, in CONFIG_SLAB, init_on_free happens too late, and heap
>> objects go to the heap quarantine while still being dirty. Let's move
>> the memory clearing before the kasan_slab_free() call to fix that.
>>
>> Signed-off-by: Alexander Popov <alex.popov@linux.com>
> Reviewed-by: Alexander Potapenko <glider@google.com>

Hello!

Can this particular patch be considered for the mainline kernel?


Note: I summarized the results of the experiment with the Linux kernel heap
quarantine in a short article, for future reference:
https://a13xp0p0v.github.io/2020/11/30/slab-quarantine.html

Best regards,
Alexander

>> ---
>>  mm/slab.c | 5 +++--
>>  1 file changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/slab.c b/mm/slab.c
>> index 3160dff6fd76..5140203c5b76 100644
>> --- a/mm/slab.c
>> +++ b/mm/slab.c
>> @@ -3414,6 +3414,9 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
>>  static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
>>                                          unsigned long caller)
>>  {
>> +       if (unlikely(slab_want_init_on_free(cachep)))
>> +               memset(objp, 0, cachep->object_size);
>> +
>>         /* Put the object into the quarantine, don't touch it for now. */
>>         if (kasan_slab_free(cachep, objp, _RET_IP_))
>>                 return;
>> @@ -3432,8 +3435,6 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
>>         struct array_cache *ac = cpu_cache_get(cachep);
>>
>>         check_irq_off();
>> -       if (unlikely(slab_want_init_on_free(cachep)))
>> -               memset(objp, 0, cachep->object_size);
>>         kmemleak_free_recursive(objp, cachep->flags);
>>         objp = cache_free_debugcheck(cachep, objp, caller);
>>         memcg_slab_free_hook(cachep, virt_to_head_page(objp), objp);
>> --
>> 2.26.2
>>
> 
>
Andrew Morton Dec. 3, 2020, 8:49 p.m. UTC | #4
On Thu, 3 Dec 2020 22:50:27 +0300 Alexander Popov <alex.popov@linux.com> wrote:

> On 30.09.2020 15:50, Alexander Potapenko wrote:
> > On Tue, Sep 29, 2020 at 8:35 PM Alexander Popov <alex.popov@linux.com> wrote:
> >>
> >> Currently, in CONFIG_SLAB, init_on_free happens too late, and heap
> >> objects go to the heap quarantine while still being dirty. Let's move
> >> the memory clearing before the kasan_slab_free() call to fix that.
> >>
> >> Signed-off-by: Alexander Popov <alex.popov@linux.com>
> > Reviewed-by: Alexander Potapenko <glider@google.com>
> 
> Hello!
> 
> Can this particular patch be considered for the mainline kernel?

All patches are considered ;) And merged if they're reviewed, tested,
judged useful, etc.

If you think this particular patch should be fast-tracked then please
send it as a non-RFC, standalone patch.  Please also enhance the
changelog so that it actually explains what goes wrong.  Presumably
"objects go to the heap quarantine being dirty" causes some
user-visible problem?  What is that problem?
Alexander Popov Dec. 4, 2020, 11:54 a.m. UTC | #5
On 03.12.2020 23:49, Andrew Morton wrote:
> On Thu, 3 Dec 2020 22:50:27 +0300 Alexander Popov <alex.popov@linux.com> wrote:
> 
>> On 30.09.2020 15:50, Alexander Potapenko wrote:
>>> On Tue, Sep 29, 2020 at 8:35 PM Alexander Popov <alex.popov@linux.com> wrote:
>>>>
>>>> Currently, in CONFIG_SLAB, init_on_free happens too late, and heap
>>>> objects go to the heap quarantine while still being dirty. Let's move
>>>> the memory clearing before the kasan_slab_free() call to fix that.
>>>>
>>>> Signed-off-by: Alexander Popov <alex.popov@linux.com>
>>> Reviewed-by: Alexander Potapenko <glider@google.com>
>>
>> Hello!
>>
>> Can this particular patch be considered for the mainline kernel?
> 
> All patches are considered ;) And merged if they're reviewed, tested,
> judged useful, etc.
> 
> If you think this particular patch should be fast-tracked then please
> send it as a non-RFC, standalone patch.  Please also enhance the
> changelog so that it actually explains what goes wrong.  Presumably
> "objects go to the heap quarantine being dirty" causes some
> user-visible problem?  What is that problem?

Ok, thanks!
I'll improve the commit message and send the patch separately.

Best regards,
Alexander

Patch

diff --git a/mm/slab.c b/mm/slab.c
index 3160dff6fd76..5140203c5b76 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3414,6 +3414,9 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
 static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
 					 unsigned long caller)
 {
+	if (unlikely(slab_want_init_on_free(cachep)))
+		memset(objp, 0, cachep->object_size);
+
 	/* Put the object into the quarantine, don't touch it for now. */
 	if (kasan_slab_free(cachep, objp, _RET_IP_))
 		return;
@@ -3432,8 +3435,6 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
 	struct array_cache *ac = cpu_cache_get(cachep);
 
 	check_irq_off();
-	if (unlikely(slab_want_init_on_free(cachep)))
-		memset(objp, 0, cachep->object_size);
 	kmemleak_free_recursive(objp, cachep->flags);
 	objp = cache_free_debugcheck(cachep, objp, caller);
 	memcg_slab_free_hook(cachep, virt_to_head_page(objp), objp);
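
For reference, the added memset() only fires when slab_want_init_on_free()
returns true. A rough sketch of that helper is shown below; it is paraphrased
from mm/slab.h of this kernel era, so treat the exact flag set as an
approximation rather than a quotation:

static inline bool slab_want_init_on_free(struct kmem_cache *c)
{
	/* Only clear when the feature is enabled and the cache has no
	 * constructor or RCU/poisoning semantics that forbid clearing. */
	if (static_branch_unlikely(&init_on_free))
		return !(c->ctor ||
			 (c->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)));
	return false;
}

The init_on_free static branch is enabled either by building with
CONFIG_INIT_ON_FREE_DEFAULT_ON=y or by booting with init_on_free=1, so the
reordering is a no-op on kernels that leave the feature off.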