[v3,1/3] slab: make check_object() more consistent

Message ID: 20240607-b4-slab-debug-v3-1-bb2a326c4ceb@linux.dev
Series: slab: fix and cleanup of slub_debug

Commit Message

Chengming Zhou June 7, 2024, 8:40 a.m. UTC
Now check_object() calls check_bytes_and_report() multiple times to
check every section of the object it cares about, such as the left and
right redzones, object poison, padding poison and the freepointer. It
aborts the checking process and returns 0 as soon as it finds an error.

There are two inconsistencies in check_object(): the alignment padding
check and the object padding check only print error messages but don't
return 0 to tell callers that something is wrong and needs to be
handled. Please see alloc_debug_processing() and
free_debug_processing() for details.

We want to perform all checks without skipping any, so use a local
variable "ret" to save each check's result and change
check_bytes_and_report() to only report the specific error it finds.
Then, at the end of check_object(), print the trailer once if any check
found an error.

Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
---
 mm/slub.c | 62 +++++++++++++++++++++++++++++++++++++++++---------------------
 1 file changed, 41 insertions(+), 21 deletions(-)
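
In essence, the patch replaces early returns with an accumulated
result. A minimal sketch of the pattern (simplified from the full diff
at the bottom of this page; check_left_redzone() and friends are
illustrative stand-ins for the real check_bytes_and_report() calls):

static int check_object(struct kmem_cache *s, struct slab *slab,
			void *object, u8 val)
{
	int ret = 1;	/* assume the object is fine */

	/* Run every check; record failures instead of returning early. */
	if (!check_left_redzone(s, slab, object, val))
		ret = 0;
	if (!check_right_redzone(s, slab, object, val))
		ret = 0;
	if (!check_padding(s, slab, object))
		ret = 0;

	/* Report the trailer and taint only once, after all sections
	 * of the object have been examined. */
	if (!ret) {
		print_trailer(s, slab, object);
		add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
	}

	return ret;
}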

Comments

Vlastimil Babka June 7, 2024, 8:58 a.m. UTC | #1
On 6/7/24 10:40 AM, Chengming Zhou wrote:
> Now check_object() calls check_bytes_and_report() multiple times to
> check every section of the object it cares about, such as the left and
> right redzones, object poison, padding poison and the freepointer. It
> aborts the checking process and returns 0 as soon as it finds an error.
> 
> There are two inconsistencies in check_object(): the alignment padding
> check and the object padding check only print error messages but don't
> return 0 to tell callers that something is wrong and needs to be
> handled. Please see alloc_debug_processing() and
> free_debug_processing() for details.
> 
> We want to perform all checks without skipping any, so use a local
> variable "ret" to save each check's result and change
> check_bytes_and_report() to only report the specific error it finds.
> Then, at the end of check_object(), print the trailer once if any check
> found an error.
> 
> Suggested-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>

Thanks.
Christoph Lameter (Ampere) June 10, 2024, 5:07 p.m. UTC | #2
On Fri, 7 Jun 2024, Chengming Zhou wrote:

> There are two inconsistencies in check_object(): the alignment padding
> check and the object padding check only print error messages but don't
> return 0 to tell callers that something is wrong and needs to be
> handled. Please see alloc_debug_processing() and
> free_debug_processing() for details.

If the error is in the padding and the redzones are ok, then it's likely
that the objects are ok. So we can actually continue with this slab page
instead of isolating it.

We isolate it in the case that the redzones have been violated, because
that suggests someone overwrote past the end of the object, for example.
In that case objects may be corrupted. It's best to isolate the slab and
hope for the best.

If it was just the padding, then the assumption is that this may be a
scribble. So clean it up and continue.
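
A minimal sketch of that policy, for illustration only
(handle_corruption(), slab_isolate() and padding_restore() are invented
names; in the kernel the decisions live in alloc_debug_processing() /
free_debug_processing() and their callers):

static void handle_corruption(struct kmem_cache *s, struct slab *slab,
			      void *object, bool redzone_bad,
			      bool padding_bad)
{
	if (redzone_bad) {
		/* Object contents may be corrupt: take the slab page
		 * out of circulation and hope for the best. */
		slab_isolate(s, slab);
		return;
	}

	if (padding_bad) {
		/* Likely a stray scribble past the object: restore the
		 * poison bytes and keep using the slab page. */
		padding_restore(s, slab, object);
	}
}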
Vlastimil Babka June 10, 2024, 8:54 p.m. UTC | #3
On 6/10/24 7:07 PM, Christoph Lameter (Ampere) wrote:
> On Fri, 7 Jun 2024, Chengming Zhou wrote:
> 
>> There are two inconsistencies in check_object(): the alignment padding
>> check and the object padding check only print error messages but don't
>> return 0 to tell callers that something is wrong and needs to be
>> handled. Please see alloc_debug_processing() and
>> free_debug_processing() for details.
> 
> If the error is in the padding and the redzones are ok, then it's likely
> that the objects are ok. So we can actually continue with this slab page
> instead of isolating it.
> 
> We isolate it in the case that the redzones have been violated, because
> that suggests someone overwrote past the end of the object, for example.
> In that case objects may be corrupted. It's best to isolate the slab and
> hope for the best.
> 
> If it was just the padding, then the assumption is that this may be a
> scribble. So clean it up and continue.

Hm, is it really worth such nuance? We enabled debugging and actually hit
a bug. I think it's best to keep things as much as they were and not
allow further changes. This e.g. allows more detailed analysis if
somebody later notices the bug report and decides to get a kdump crash
dump (or use drgn on a live system). Maybe we should even stop doing the
restore_bytes() stuff, and prevent any further frees in the slab page
from happening, if that's possible without affecting the fast paths (now
we mark everything as used but don't prevent further frees of objects
that were actually allocated before).

Even if some security people enable parts of slub debugging for security
purposes, it is my impression that they would rather panic/reboot or have
memory leaked than try to salvage the slab page? (CC Kees)
Kees Cook June 10, 2024, 9:37 p.m. UTC | #4
On Mon, Jun 10, 2024 at 10:54:26PM +0200, Vlastimil Babka wrote:
> On 6/10/24 7:07 PM, Christoph Lameter (Ampere) wrote:
> > On Fri, 7 Jun 2024, Chengming Zhou wrote:
> > 
> >> There are two inconsistencies in check_object(): the alignment padding
> >> check and the object padding check only print error messages but don't
> >> return 0 to tell callers that something is wrong and needs to be
> >> handled. Please see alloc_debug_processing() and
> >> free_debug_processing() for details.
> > 
> > If the error is in the padding and the redzones are ok, then it's likely
> > that the objects are ok. So we can actually continue with this slab page
> > instead of isolating it.
> > 
> > We isolate it in the case that the redzones have been violated, because
> > that suggests someone overwrote past the end of the object, for example.
> > In that case objects may be corrupted. It's best to isolate the slab and
> > hope for the best.
> > 
> > If it was just the padding, then the assumption is that this may be a
> > scribble. So clean it up and continue.

"a scribble"? :P If padding got touched, something has the wrong size
for an object write. It should be treated just like the redzones. We
want maximal coverage if this checking is enabled.

> Hm, is it really worth such nuance? We enabled debugging and actually hit
> a bug. I think it's best to keep things as much as they were and not
> allow further changes. This e.g. allows more detailed analysis if
> somebody later notices the bug report and decides to get a kdump crash
> dump (or use drgn on a live system). Maybe we should even stop doing the
> restore_bytes() stuff, and prevent any further frees in the slab page
> from happening, if that's possible without affecting the fast paths (now
> we mark everything as used but don't prevent further frees of objects
> that were actually allocated before).
> 
> Even if some security people enable parts of slub debugging for security
> purposes, it is my impression that they would rather panic/reboot or have
> memory leaked than try to salvage the slab page? (CC Kees)

Yeah, if we're doing these checks, we should do the checks fully.
Padding is just extra redzone. :)
Kees Cook June 12, 2024, 6:39 p.m. UTC | #5
On Tue, Jun 11, 2024 at 03:52:49PM -0700, Christoph Lameter (Ampere) wrote:
> On Mon, 10 Jun 2024, Vlastimil Babka wrote:
> 
> > Even if some security people enable parts of slub debugging for security
> > purposes, it is my impression that they would rather panic/reboot or have
> > memory leaked than try to salvage the slab page? (CC Kees)
> 
> In the past these resilience features have been used to allow the continued
> operation of a broken kernel.
> 
> So first the kernel crashed with some obscure oops in the allocator due to
> metadata corruption.
> 
> One can then put a slub_debug option on the kernel command line which will
> result in detailed error reports on what caused the corruption. It will also
> activate resilience measures that will often allow the continued operation
> until a fix becomes available.

Sure, as long as it's up to the deployment. I just don't want padding
errors unilaterally ignored. If it's useful, there's the
CHECK_DATA_CORRUPTION() macro. That'll let a deployment escalate the
issue from WARN to BUG, etc.
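
For illustration, a sketch of how a padding check could be escalated
that way (not part of the posted patch; CHECK_DATA_CORRUPTION() lives in
include/linux/bug.h and its exact signature has changed across kernel
versions, so check your tree):

	/*
	 * Hypothetical sketch: escalate a padding check via
	 * CHECK_DATA_CORRUPTION() instead of a plain report. With
	 * CONFIG_BUG_ON_DATA_CORRUPTION=y the macro BUGs, otherwise it
	 * WARNs, and it evaluates to true when the condition fired.
	 */
	if (CHECK_DATA_CORRUPTION(fault != NULL, fault,
				  "slab: padding overwritten in cache %s\n",
				  s->name))
		ret = 0;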
Chengming Zhou June 14, 2024, 2:40 a.m. UTC | #6
On 2024/6/12 06:52, Christoph Lameter (Ampere) wrote:
> On Mon, 10 Jun 2024, Vlastimil Babka wrote:
> 
>> Even if some security people enable parts of slub debugging for security
>> purposes, it is my impression that they would rather panic/reboot or have
>> memory leaked than try to salvage the slab page? (CC Kees)
> 
> In the past these resilience features have been used to allow the continued operation of a broken kernel.
> 
> So first the kernel crashed with some obscure oops in the allocator due to metadata corruption.
> 
> One can then put a slub_debug option on the kernel command line which will result in detailed error reports on what caused the corruption. It will also activate resilience measures that will often allow the continued operation until a fix becomes available.

This reminds me that we can't toggle slub_debug options for a kmem_cache
at runtime. I'm wondering: would it be useful to be able to
enable/disable debug options at runtime? We could implement this feature
with per-slab debug options, so each slab has an independent execution
path, in which slabs with debug options enabled go through the slow path
while others can still take the fast path.

Not sure if it's useful in some cases? Maybe KFENCE is enough? Just my
random thoughts.

Thanks.
Vlastimil Babka June 17, 2024, 9:51 a.m. UTC | #7
On 6/14/24 4:40 AM, Chengming Zhou wrote:
> On 2024/6/12 06:52, Christoph Lameter (Ampere) wrote:
>> On Mon, 10 Jun 2024, Vlastimil Babka wrote:
>> 
>>> Even if some security people enable parts of slub debugging for security
>>> purposes, it is my impression that they would rather panic/reboot or have
>>> memory leaked than try to salvage the slab page? (CC Kees)
>> 
>> In the past these resilience features have been used to allow the continued operation of a broken kernel.
>> 
>> So first the kernel crashed with some obscure oops in the allocator due to metadata corruption.
>> 
>> One can then put a slub_debug option on the kernel command line which will result in detailed error reports on what caused the corruption. It will also activate resilience measures that will often allow the continued operation until a fix becomes available.
> 
> This reminds me that we can't toggle slub_debug options for a kmem_cache
> at runtime. I'm wondering: would it be useful to be able to
> enable/disable debug options at runtime? We could implement this feature
> with per-slab debug options, so each slab has an independent execution
> path, in which slabs with debug options enabled go through the slow path
> while others can still take the fast path.

Many of the debug options change the layout of objects in slabs (i.e.
they affect calculate_sizes()), so it would be very complicated to
change things at runtime. Also the cache might be merged with other ones
if it boots without debug... I don't think it would be feasible at all.

> Not sure if it's useful in some cases? Maybe KFENCE is enough? Just my random thoughts.
> 
> Thanks.
Chengming Zhou June 17, 2024, 10:29 a.m. UTC | #8
On 2024/6/17 17:51, Vlastimil Babka wrote:
> On 6/14/24 4:40 AM, Chengming Zhou wrote:
>> On 2024/6/12 06:52, Christoph Lameter (Ampere) wrote:
>>> On Mon, 10 Jun 2024, Vlastimil Babka wrote:
>>>
>>>> Even if some security people enable parts of slub debugging for security
>>>> purposes, it is my impression that they would rather panic/reboot or have
>>>> memory leaked than try to salvage the slab page? (CC Kees)
>>>
>>> In the past these resilience features have been used to allow the continued operation of a broken kernel.
>>>
>>> So first the kernel crashed with some obscure oops in the allocator due to metadata corruption.
>>>
>>> One can then put a slub_debug option on the kernel command line which will result in detailed error reports on what caused the corruption. It will also activate resilience measures that will often allow the continued operation until a fix becomes available.
>>
>> This reminds me that we can't toggle slub_debug options for a kmem_cache
>> at runtime. I'm wondering: would it be useful to be able to
>> enable/disable debug options at runtime? We could implement this feature
>> with per-slab debug options, so each slab has an independent execution
>> path, in which slabs with debug options enabled go through the slow path
>> while others can still take the fast path.
> 
> Many of the debug options change the layout of objects in slabs (i.e.
> they affect calculate_sizes()), so it would be very complicated to change
> things at

Yeah, so each slab in the same kmem_cache could have a different layout
(depending on which debug options were enabled when it was created), and
we could use that information to decide which path each slab should take.

Then the problem is saving this per-slab layout information. Each slab
has an unused _mapcount field that we could reuse, e.g. as an index to
find its layout information in the kmem_cache.
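
A purely hypothetical sketch of that bookkeeping (none of these fields
or helpers exist in the kernel; they're invented to illustrate the idea):

struct slab_layout {
	slab_flags_t debug_flags;	/* debug options this slab was created with */
	unsigned int inuse;		/* object layout, as from calculate_sizes() */
	unsigned int red_left_pad;
};

static inline struct slab_layout *slab_layout(struct kmem_cache *s,
					      struct slab *slab)
{
	/* Reuse the slab page's otherwise-unused _mapcount as an index
	 * into a per-cache table of layout descriptors (s->layouts is a
	 * hypothetical field). */
	int idx = atomic_read(&slab_page(slab)->_mapcount);

	return &s->layouts[idx];
}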

I haven't thought too much about this, so I must be missing something.

Thanks.

> runtime. Also the cache might be merged with other ones if it boots without
> debug... I don't think it would be feasible at all.
> 
>> Not sure if it's useful in some cases? Maybe KFENCE is enough? Just my random thoughts.
>>
>> Thanks.
>
Vlastimil Babka June 17, 2024, 11:08 a.m. UTC | #9
On 6/17/24 12:29 PM, Chengming Zhou wrote:
> On 2024/6/17 17:51, Vlastimil Babka wrote:
>> On 6/14/24 4:40 AM, Chengming Zhou wrote:
>>>
>>> This reminds me that we can't toggle slub_debug options for a kmem_cache
>>> at runtime. I'm wondering: would it be useful to be able to
>>> enable/disable debug options at runtime? We could implement this feature
>>> with per-slab debug options, so each slab has an independent execution
>>> path, in which slabs with debug options enabled go through the slow path
>>> while others can still take the fast path.
>> 
>> Many of the debug options change the layout of objects in slabs (i.e.
>> they affect calculate_sizes()), so it would be very complicated to change
>> things at
> Yeah, so each slab in the same kmem_cache could have a different layout
> (depending on which debug options were enabled when it was created), and
> we could use that information to decide which path each slab should take.
> 
> Then the problem is saving this per-slab layout information. Each slab
> has an unused _mapcount field that we could reuse, e.g. as an index to
> find its layout information in the kmem_cache.
> 
> I haven't thought too much about this, so I must be missing something.

Yeah, it seems very complex, with dubious benefits. It would possibly
affect the fast paths too: we might disable debugging but still have
some slabs around that were created with debugging enabled, so we'd need
to keep doing the checks for them... We'd basically have to keep the
"percpu slabs and their fastpaths can't be used" mode for a given cache
even after the debugging is disabled, and that would already defeat most
of the performance benefit.

> Thanks.
> 
>> runtime. Also the cache might be merged with other ones if it boots without
>> debug... I don't think it would be feasible at all.
>> 
>>> Not sure if it's useful in some cases? Maybe KFENCE is enough? Just my random thoughts.
>>>
>>> Thanks.
>>

Patch

diff --git a/mm/slub.c b/mm/slub.c
index 0809760cf789..45f89d4bb687 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -788,8 +788,24 @@  static bool slab_add_kunit_errors(void)
 	kunit_put_resource(resource);
 	return true;
 }
+
+static bool slab_in_kunit_test(void)
+{
+	struct kunit_resource *resource;
+
+	if (!kunit_get_current_test())
+		return false;
+
+	resource = kunit_find_named_resource(current->kunit_test, "slab_errors");
+	if (!resource)
+		return false;
+
+	kunit_put_resource(resource);
+	return true;
+}
 #else
 static inline bool slab_add_kunit_errors(void) { return false; }
+static inline bool slab_in_kunit_test(void) { return false; }
 #endif
 
 static inline unsigned int size_from_object(struct kmem_cache *s)
@@ -1192,8 +1208,6 @@  static int check_bytes_and_report(struct kmem_cache *s, struct slab *slab,
 	pr_err("0x%p-0x%p @offset=%tu. First byte 0x%x instead of 0x%x\n",
 					fault, end - 1, fault - addr,
 					fault[0], value);
-	print_trailer(s, slab, object);
-	add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
 
 skip_bug_print:
 	restore_bytes(s, what, value, fault, end);
@@ -1302,15 +1316,16 @@  static int check_object(struct kmem_cache *s, struct slab *slab,
 	u8 *p = object;
 	u8 *endobject = object + s->object_size;
 	unsigned int orig_size, kasan_meta_size;
+	int ret = 1;
 
 	if (s->flags & SLAB_RED_ZONE) {
 		if (!check_bytes_and_report(s, slab, object, "Left Redzone",
 			object - s->red_left_pad, val, s->red_left_pad))
-			return 0;
+			ret = 0;
 
 		if (!check_bytes_and_report(s, slab, object, "Right Redzone",
 			endobject, val, s->inuse - s->object_size))
-			return 0;
+			ret = 0;
 
 		if (slub_debug_orig_size(s) && val == SLUB_RED_ACTIVE) {
 			orig_size = get_orig_size(s, object);
@@ -1319,14 +1334,15 @@  static int check_object(struct kmem_cache *s, struct slab *slab,
 				!check_bytes_and_report(s, slab, object,
 					"kmalloc Redzone", p + orig_size,
 					val, s->object_size - orig_size)) {
-				return 0;
+				ret = 0;
 			}
 		}
 	} else {
 		if ((s->flags & SLAB_POISON) && s->object_size < s->inuse) {
-			check_bytes_and_report(s, slab, p, "Alignment padding",
+			if (!check_bytes_and_report(s, slab, p, "Alignment padding",
 				endobject, POISON_INUSE,
-				s->inuse - s->object_size);
+				s->inuse - s->object_size))
+				ret = 0;
 		}
 	}
 
@@ -1342,27 +1358,25 @@  static int check_object(struct kmem_cache *s, struct slab *slab,
 			    !check_bytes_and_report(s, slab, p, "Poison",
 					p + kasan_meta_size, POISON_FREE,
 					s->object_size - kasan_meta_size - 1))
-				return 0;
+				ret = 0;
 			if (kasan_meta_size < s->object_size &&
 			    !check_bytes_and_report(s, slab, p, "End Poison",
 					p + s->object_size - 1, POISON_END, 1))
-				return 0;
+				ret = 0;
 		}
 		/*
 		 * check_pad_bytes cleans up on its own.
 		 */
-		check_pad_bytes(s, slab, p);
+		if (!check_pad_bytes(s, slab, p))
+			ret = 0;
 	}
 
-	if (!freeptr_outside_object(s) && val == SLUB_RED_ACTIVE)
-		/*
-		 * Object and freepointer overlap. Cannot check
-		 * freepointer while object is allocated.
-		 */
-		return 1;
-
-	/* Check free pointer validity */
-	if (!check_valid_pointer(s, slab, get_freepointer(s, p))) {
+	/*
+	 * Cannot check freepointer while object is allocated if
+	 * object and freepointer overlap.
+	 */
+	if ((freeptr_outside_object(s) || val != SLUB_RED_ACTIVE) &&
+	    !check_valid_pointer(s, slab, get_freepointer(s, p))) {
 		object_err(s, slab, p, "Freepointer corrupt");
 		/*
 		 * No choice but to zap it and thus lose the remainder
@@ -1370,9 +1384,15 @@  static int check_object(struct kmem_cache *s, struct slab *slab,
 		 * another error because the object count is now wrong.
 		 */
 		set_freepointer(s, p, NULL);
-		return 0;
+		ret = 0;
 	}
-	return 1;
+
+	if (!ret && !slab_in_kunit_test()) {
+		print_trailer(s, slab, object);
+		add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
+	}
+
+	return ret;
 }
 
 static int check_slab(struct kmem_cache *s, struct slab *slab)