Message ID | 20241011102020.58087-1-yuan.gao@ucloud.cn (mailing list archive)
---|---
State | New
Series | [v3] mm/slub: Avoid list corruption when removing a slab from the full list
On Fri, 11 Oct 2024, yuan.gao wrote:

> When an object belonging to the slab got freed later, the remove_full()
> function is called. Because the slab is neither on the partial list nor
> on the full list, it eventually leads to a list corruption.

We detect list poison....

> diff --git a/mm/slab.h b/mm/slab.h
> index 6c6fe6d630ce..7681e71d9a13 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -73,6 +73,10 @@ struct slab {
>  	struct {
>  		unsigned inuse:16;
>  		unsigned objects:15;
> +		/*
> +		 * Reuse frozen bit for slab with debug enabled:

"If slab debugging is enabled then the frozen bit can be reused to
indicate that the slab was corrupted"

> index 5b832512044e..b9265e9f11aa 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1423,6 +1423,11 @@ static int check_slab(struct kmem_cache *s, struct slab *slab)
>  			slab->inuse, slab->objects);
>  		return 0;
>  	}
> +	if (slab->frozen) {
> +		slab_err(s, slab, "Corrupted slab");

"Slab folio disabled due to metadata corruption" ?

> @@ -2744,7 +2750,10 @@ static void *alloc_single_from_partial(struct kmem_cache *s,
>  	slab->inuse++;
>
>  	if (!alloc_debug_processing(s, slab, object, orig_size)) {
> -		remove_partial(n, slab);
> +		if (folio_test_slab(slab_folio(slab))) {

Does folio_test_slab test for the frozen bit??
On 24/10/11 11:07AM, Christoph Lameter (Ampere) wrote:
> On Fri, 11 Oct 2024, yuan.gao wrote:
>
> > When an object belonging to the slab got freed later, the remove_full()
> > function is called. Because the slab is neither on the partial list nor
> > on the full list, it eventually leads to a list corruption.
>
> We detect list poison....
>
> > diff --git a/mm/slab.h b/mm/slab.h
> > index 6c6fe6d630ce..7681e71d9a13 100644
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> > @@ -73,6 +73,10 @@ struct slab {
> >  	struct {
> >  		unsigned inuse:16;
> >  		unsigned objects:15;
> > +		/*
> > +		 * Reuse frozen bit for slab with debug enabled:
>
> "If slab debugging is enabled then the frozen bit can be reused to
> indicate that the slab was corrupted"
>
> > index 5b832512044e..b9265e9f11aa 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -1423,6 +1423,11 @@ static int check_slab(struct kmem_cache *s, struct slab *slab)
> >  			slab->inuse, slab->objects);
> >  		return 0;
> >  	}
> > +	if (slab->frozen) {
> > +		slab_err(s, slab, "Corrupted slab");
>
> "Slab folio disabled due to metadata corruption" ?
>

Yes, that's what I meant.
Perhaps I should change the description from "Corrupted slab" to
"Metadata corrupt"?

> > @@ -2744,7 +2750,10 @@ static void *alloc_single_from_partial(struct kmem_cache *s,
> >  	slab->inuse++;
> >
> >  	if (!alloc_debug_processing(s, slab, object, orig_size)) {
> > -		remove_partial(n, slab);
> > +		if (folio_test_slab(slab_folio(slab))) {
>
> Does folio_test_slab test for the frozen bit??
>

For slab folios, slab->frozen has been set to 1.
For non-slab folios, we should not call remove_partial().
I'm not sure if I understand this correctly.

Thanks
On Sat, 12 Oct 2024, yuan.gao wrote:

> On 24/10/11 11:07AM, Christoph Lameter (Ampere) wrote:
> > On Fri, 11 Oct 2024, yuan.gao wrote:
> >
> > > When an object belonging to the slab got freed later, the remove_full()
> > > function is called. Because the slab is neither on the partial list nor
> > > on the full list, it eventually leads to a list corruption.
> >
> > We detect list poison....
> >
> > > diff --git a/mm/slab.h b/mm/slab.h
> > > index 6c6fe6d630ce..7681e71d9a13 100644
> > > --- a/mm/slab.h
> > > +++ b/mm/slab.h
> > > @@ -73,6 +73,10 @@ struct slab {
> > >  	struct {
> > >  		unsigned inuse:16;
> > >  		unsigned objects:15;
> > > +		/*
> > > +		 * Reuse frozen bit for slab with debug enabled:
> >
> > "If slab debugging is enabled then the frozen bit can be reused to
> > indicate that the slab was corrupted"
> >
> > > index 5b832512044e..b9265e9f11aa 100644
> > > --- a/mm/slub.c
> > > +++ b/mm/slub.c
> > > @@ -1423,6 +1423,11 @@ static int check_slab(struct kmem_cache *s, struct slab *slab)
> > >  			slab->inuse, slab->objects);
> > >  		return 0;
> > >  	}
> > > +	if (slab->frozen) {
> > > +		slab_err(s, slab, "Corrupted slab");
> >
> > "Slab folio disabled due to metadata corruption" ?
> >
>
> Yes, that's what I meant.
> Perhaps I should change the description from "Corrupted slab" to
> "Metadata corrupt"?
>

I think the point here is that slab page corruption is different from slab
metadata corruption :)

The suggested phrasing, "Slab folio disabled due to metadata corruption",
sounds good to me.

> > > @@ -2744,7 +2750,10 @@ static void *alloc_single_from_partial(struct kmem_cache *s,
> > >  	slab->inuse++;
> > >
> > >  	if (!alloc_debug_processing(s, slab, object, orig_size)) {
> > > -		remove_partial(n, slab);
> > > +		if (folio_test_slab(slab_folio(slab))) {
> >
> > Does folio_test_slab test for the frozen bit??
> >
>
> For slab folios, slab->frozen has been set to 1.
> For non-slab folios, we should not call remove_partial().
> I'm not sure if I understand this correctly.
>
> Thanks
>
On 10/13/24 22:46, David Rientjes wrote:
> On Sat, 12 Oct 2024, yuan.gao wrote:
>
>> On 24/10/11 11:07AM, Christoph Lameter (Ampere) wrote:
>> > On Fri, 11 Oct 2024, yuan.gao wrote:
>> >
>> > > When an object belonging to the slab got freed later, the remove_full()
>> > > function is called. Because the slab is neither on the partial list nor
>> > > on the full list, it eventually leads to a list corruption.
>> >
>> > We detect list poison....
>> >
>> > > diff --git a/mm/slab.h b/mm/slab.h
>> > > index 6c6fe6d630ce..7681e71d9a13 100644
>> > > --- a/mm/slab.h
>> > > +++ b/mm/slab.h
>> > > @@ -73,6 +73,10 @@ struct slab {
>> > >  	struct {
>> > >  		unsigned inuse:16;
>> > >  		unsigned objects:15;
>> > > +		/*
>> > > +		 * Reuse frozen bit for slab with debug enabled:
>> >
>> > "If slab debugging is enabled then the frozen bit can be reused to
>> > indicate that the slab was corrupted"
>> >
>> > > index 5b832512044e..b9265e9f11aa 100644
>> > > --- a/mm/slub.c
>> > > +++ b/mm/slub.c
>> > > @@ -1423,6 +1423,11 @@ static int check_slab(struct kmem_cache *s, struct slab *slab)
>> > >  			slab->inuse, slab->objects);
>> > >  		return 0;
>> > >  	}
>> > > +	if (slab->frozen) {
>> > > +		slab_err(s, slab, "Corrupted slab");
>> >
>> > "Slab folio disabled due to metadata corruption" ?
>> >
>>
>> Yes, that's what I meant.
>> Perhaps I should change the description from "Corrupted slab" to
>> "Metadata corrupt"?
>>
>
> I think the point here is that slab page corruption is different from slab
> metadata corruption :)
>
> The suggested phrasing, "Slab folio disabled due to metadata corruption",
> sounds good to me.

What about:

"Slab disabled due to previous consistency check failure" ?

>
>> > > @@ -2744,7 +2750,10 @@ static void *alloc_single_from_partial(struct kmem_cache *s,
>> > >  	slab->inuse++;
>> > >
>> > >  	if (!alloc_debug_processing(s, slab, object, orig_size)) {
>> > > -		remove_partial(n, slab);
>> > > +		if (folio_test_slab(slab_folio(slab))) {

Your patch adds add_full() here as in the previous versions. I wouldn't do
it anymore. Thanks to the frozen bit check in check_slab(), no further
list manipulation should happen that would trigger the list poison being
detected.

Adding to the full list would rather mean each sysfs-triggered validation
will reach the slab and output the new slab_err() message, which is not
useful. We want the corrupted slab to stay away from everything else, and
only be informed of further object freeing attempts.

>> >
>> >
>> > Does folio_test_slab test for the frozen bit??
>> >
>>
>> For slab folios, slab->frozen has been set to 1.
>> For non-slab folios, we should not call remove_partial().
>> I'm not sure if I understand this correctly.
>>
>> Thanks
>>
On Mon, 14 Oct 2024, Vlastimil Babka wrote:

> What about:
>
> "Slab disabled due to previous consistency check failure" ?

I think this implies more than we can actually check. We can only check
the metadata generated by SLUB. The consistency check for the object
itself does not exist and would have to be done by the subsystem.
On 10/14/24 18:47, Christoph Lameter (Ampere) wrote:
> On Mon, 14 Oct 2024, Vlastimil Babka wrote:
>
>> What about:
>>
>> "Slab disabled due to previous consistency check failure" ?
>
> I think this implies more than we can actually check. We can only check
> the metadata generated by SLUB. The consistency check for the object
> itself does not exist and would have to be done by the subsystem.

"Slab disabled since SLUB metadata consistency check failure" ?
On Tue, 15 Oct 2024, Vlastimil Babka wrote:

> On 10/14/24 18:47, Christoph Lameter (Ampere) wrote:
> > On Mon, 14 Oct 2024, Vlastimil Babka wrote:
> >
> >> What about:
> >>
> >> "Slab disabled due to previous consistency check failure" ?
> >
> > I think this implies more than we can actually check. We can only check
> > the metadata generated by SLUB. The consistency check for the object
> > itself does not exist and would have to be done by the subsystem.
>
> "Slab disabled since SLUB metadata consistency check failure" ?

"failed"

Sounds good.
diff --git a/mm/slab.h b/mm/slab.h
index 6c6fe6d630ce..7681e71d9a13 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -73,6 +73,10 @@ struct slab {
 			struct {
 				unsigned inuse:16;
 				unsigned objects:15;
+				/*
+				 * Reuse frozen bit for slab with debug enabled:
+				 * frozen == 1 means it is corrupted
+				 */
 				unsigned frozen:1;
 			};
 		};
diff --git a/mm/slub.c b/mm/slub.c
index 5b832512044e..b9265e9f11aa 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1423,6 +1423,11 @@ static int check_slab(struct kmem_cache *s, struct slab *slab)
 			slab->inuse, slab->objects);
 		return 0;
 	}
+	if (slab->frozen) {
+		slab_err(s, slab, "Corrupted slab");
+		return 0;
+	}
+
 	/* Slab_pad_check fixes things up after itself */
 	slab_pad_check(s, slab);
 	return 1;
@@ -1603,6 +1608,7 @@ static noinline bool alloc_debug_processing(struct kmem_cache *s,
 		slab_fix(s, "Marking all objects used");
 		slab->inuse = slab->objects;
 		slab->freelist = NULL;
+		slab->frozen = 1; /* mark consistency-failed slab as frozen */
 	}
 	return false;
 }
@@ -2744,7 +2750,10 @@ static void *alloc_single_from_partial(struct kmem_cache *s,
 	slab->inuse++;
 
 	if (!alloc_debug_processing(s, slab, object, orig_size)) {
-		remove_partial(n, slab);
+		if (folio_test_slab(slab_folio(slab))) {
+			remove_partial(n, slab);
+			add_full(s, n, slab);
+		}
 		return NULL;
 	}