
[v3] slub: add back check for free nonslab objects

Message ID 20210930070214.61499-1-wangkefeng.wang@huawei.com (mailing list archive)
State New
Series [v3] slub: add back check for free nonslab objects

Commit Message

Kefeng Wang Sept. 30, 2021, 7:02 a.m. UTC
After commit f227f0faf63b ("slub: fix unreclaimable slab stat for bulk
free"), the check for freeing a nonslab page was replaced by
VM_BUG_ON_PAGE(), which is only evaluated when CONFIG_DEBUG_VM is
enabled; since that option may impact performance, it is meant for
debug builds only.
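For reference, a simplified sketch (paraphrased from include/linux/mmdebug.h,
not part of this patch) of why the VM_BUG_ON_PAGE() check disappears without
CONFIG_DEBUG_VM:

	/*
	 * Simplified paraphrase of include/linux/mmdebug.h; the real macros
	 * go through VM_BUG_ON() and stringify the condition differently.
	 */
	#ifdef CONFIG_DEBUG_VM
	#define VM_BUG_ON_PAGE(cond, page)				\
		do {							\
			if (unlikely(cond)) {				\
				dump_page(page, "VM_BUG_ON_PAGE(" #cond ")"); \
				BUG();					\
			}						\
		} while (0)
	#else
	/* Compiles to (almost) nothing: the condition is only syntax-checked. */
	#define VM_BUG_ON_PAGE(cond, page) BUILD_BUG_ON_INVALID(cond)
	#endif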

Commit ("0937502af7c9 slub: Add check for kfree() of non slab objects.")
add the ability, which should be needed in any configs to catch the
invalid free, they even could be potential issue, eg, memory corruption,
use after free and double free, so replace VM_BUG_ON_PAGE to WARN_ON_ONCE,
add object address printing to help use to debug the issue.
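As an illustration (hypothetical caller, not part of this patch), one class
of invalid free the restored check reports even without CONFIG_DEBUG_VM is a
mismatched allocator API:

	/* Page-allocator memory: neither a slab page nor compound at order 0. */
	unsigned long addr = __get_free_pages(GFP_KERNEL, 0);

	/*
	 * Wrong free path: this should be free_pages(addr, 0). kfree() sees
	 * !PageSlab(), falls into free_nonslab_page(), and with this patch
	 * WARN_ON_ONCE(!PageCompound(page)) fires and the object pointer is
	 * printed, instead of the check being compiled out when
	 * CONFIG_DEBUG_VM is disabled.
	 */
	kfree((void *)addr);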

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
v3:
- use 'once' mechanism suggested by Shakeel Butt
- drop dump_page suggested by Matthew Wilcox
v2:
- add object address printing suggested by Matthew Wilcox

 mm/slub.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

Patch

diff --git a/mm/slub.c b/mm/slub.c
index 3d2025f7163b..336eceea0c75 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3513,7 +3513,9 @@ static inline void free_nonslab_page(struct page *page, void *object)
 {
 	unsigned int order = compound_order(page);
 
-	VM_BUG_ON_PAGE(!PageCompound(page), page);
+	if (WARN_ON_ONCE(!PageCompound(page)))
+		pr_warn_once("object pointer: 0x%p\n", object);
+
 	kfree_hook(object);
 	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order));
 	__free_pages(page, order);