
btrfs: disable slab merging in debug build

Message ID 20230712191712.18860-1-dsterba@suse.com (mailing list archive)
State New

Commit Message

David Sterba July 12, 2023, 7:17 p.m. UTC
The slab allocator now allows merging to be disabled per cache (since
commit d0bf7d5759c1 ("mm/slab: introduce kmem_cache flag
SLAB_NO_MERGE")). Set this for all caches in the debug build so we can
verify there are no leaks when the module gets reloaded.

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/backref.c          |  2 +-
 fs/btrfs/ctree.c            |  2 +-
 fs/btrfs/defrag.c           |  2 +-
 fs/btrfs/delayed-inode.c    |  2 +-
 fs/btrfs/delayed-ref.c      |  8 ++++----
 fs/btrfs/extent-io-tree.c   |  2 +-
 fs/btrfs/extent_io.c        |  2 +-
 fs/btrfs/extent_map.c       |  2 +-
 fs/btrfs/free-space-cache.c |  3 ++-
 fs/btrfs/inode.c            |  3 ++-
 fs/btrfs/misc.h             | 10 ++++++++++
 fs/btrfs/ordered-data.c     |  2 +-
 fs/btrfs/transaction.c      |  3 ++-
 13 files changed, 28 insertions(+), 15 deletions(-)

Comments

Vlastimil Babka July 13, 2023, 7:38 a.m. UTC | #1
On 7/12/23 21:17, David Sterba wrote:
> The slab allocator now allows merging to be disabled per cache (since
> commit d0bf7d5759c1 ("mm/slab: introduce kmem_cache flag
> SLAB_NO_MERGE")). Set this for all caches in the debug build so we can
> verify there are no leaks when the module gets reloaded.
> 
> Signed-off-by: David Sterba <dsterba@suse.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>
Christoph Hellwig July 13, 2023, 11:21 a.m. UTC | #2
On Wed, Jul 12, 2023 at 09:17:12PM +0200, David Sterba wrote:
> The slab allocator now allows merging to be disabled per cache (since
> commit d0bf7d5759c1 ("mm/slab: introduce kmem_cache flag
> SLAB_NO_MERGE")). Set this for all caches in the debug build so we can
> verify there are no leaks when the module gets reloaded.

So we're having a discussion on linux-mm about whether to just disable
slab merging by default, because it really is a pain.  Maybe wait for
that to settle before adding per-subsystem hacks for what really is a
slab problem?
David Sterba July 13, 2023, 12:03 p.m. UTC | #3
On Thu, Jul 13, 2023 at 04:21:10AM -0700, Christoph Hellwig wrote:
> On Wed, Jul 12, 2023 at 09:17:12PM +0200, David Sterba wrote:
> > The slab allocator now allows merging to be disabled per cache (since
> > commit d0bf7d5759c1 ("mm/slab: introduce kmem_cache flag
> > SLAB_NO_MERGE")). Set this for all caches in the debug build so we can
> > verify there are no leaks when the module gets reloaded.
> 
> So we're having a discussion on linux-mm about whether to just disable
> slab merging by default, because it really is a pain.  Maybe wait for
> that to settle before adding per-subsystem hacks for what really is a
> slab problem?

Yeah, I can wait with the patch. That slab merging is considered bad is
news to me. I remember discussions where Linus and (maybe?) the xfs folks
argued for/against merging of slabs, where xfs did not want merging and
had to resort to hacks like an empty slab constructor to prevent it. I
can't find the link, but that's the basis of my reasoning for adding the
flag, assuming that merging makes sense by default.
Vlastimil Babka July 13, 2023, 12:52 p.m. UTC | #4
On 7/13/23 14:03, David Sterba wrote:
> On Thu, Jul 13, 2023 at 04:21:10AM -0700, Christoph Hellwig wrote:
>> On Wed, Jul 12, 2023 at 09:17:12PM +0200, David Sterba wrote:
>> > The slab allocator now allows merging to be disabled per cache (since
>> > commit d0bf7d5759c1 ("mm/slab: introduce kmem_cache flag
>> > SLAB_NO_MERGE")). Set this for all caches in the debug build so we can
>> > verify there are no leaks when the module gets reloaded.
>> 
>> So we're having a discussion on linux-mm about whether to just disable
>> slab merging by default, because it really is a pain.  Maybe wait for
>> that to settle before adding per-subsystem hacks for what really is a
>> slab problem?
> 
> Yeah, I can wait with the patch. That slab merging is considered bad is
> news to me.

Yeah, I wouldn't say it's universally accepted. But even if we change the
default, it's just a default that distros or users might not follow, so
there's still space for per-cache enforcement, IMHO.

> I remember discussions where Linus and (maybe?) the xfs folks argued
> for/against merging of slabs, where xfs did not want merging and had to
> resort to hacks like an empty slab constructor to prevent it. I can't
> find the link, but that's the basis of my reasoning for adding the
> flag, assuming that merging makes sense by default.

Probably this discussion?

https://lore.kernel.org/all/CA+55aFyepmdpbg9U2Pvp+aHjKmmGCrTK2ywzqfmaOTMXQasYNw@mail.gmail.com/

Patch

diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index 79336fa853db..0167c3aaf15d 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -200,7 +200,7 @@  int __init btrfs_prelim_ref_init(void)
 	btrfs_prelim_ref_cache = kmem_cache_create("btrfs_prelim_ref",
 					sizeof(struct prelim_ref),
 					0,
-					SLAB_MEM_SPREAD,
+					BTRFS_DEBUG_SLAB_NO_MERGE | SLAB_MEM_SPREAD,
 					NULL);
 	if (!btrfs_prelim_ref_cache)
 		return -ENOMEM;
diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index a4cb4b642987..14fc47857caf 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -5166,7 +5166,7 @@  int __init btrfs_ctree_init(void)
 {
 	btrfs_path_cachep = kmem_cache_create("btrfs_path",
 			sizeof(struct btrfs_path), 0,
-			SLAB_MEM_SPREAD, NULL);
+			BTRFS_DEBUG_SLAB_NO_MERGE | SLAB_MEM_SPREAD, NULL);
 	if (!btrfs_path_cachep)
 		return -ENOMEM;
 	return 0;
diff --git a/fs/btrfs/defrag.c b/fs/btrfs/defrag.c
index f2ff4cbe8656..bac993bb757e 100644
--- a/fs/btrfs/defrag.c
+++ b/fs/btrfs/defrag.c
@@ -1370,7 +1370,7 @@  int __init btrfs_auto_defrag_init(void)
 {
 	btrfs_inode_defrag_cachep = kmem_cache_create("btrfs_inode_defrag",
 					sizeof(struct inode_defrag), 0,
-					SLAB_MEM_SPREAD,
+					BTRFS_DEBUG_SLAB_NO_MERGE | SLAB_MEM_SPREAD,
 					NULL);
 	if (!btrfs_inode_defrag_cachep)
 		return -ENOMEM;
diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
index 6b457b010cbc..c0a6a1784697 100644
--- a/fs/btrfs/delayed-inode.c
+++ b/fs/btrfs/delayed-inode.c
@@ -31,7 +31,7 @@  int __init btrfs_delayed_inode_init(void)
 	delayed_node_cache = kmem_cache_create("btrfs_delayed_node",
 					sizeof(struct btrfs_delayed_node),
 					0,
-					SLAB_MEM_SPREAD,
+					BTRFS_DEBUG_SLAB_NO_MERGE | SLAB_MEM_SPREAD,
 					NULL);
 	if (!delayed_node_cache)
 		return -ENOMEM;
diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
index 6a13cf00218b..d80ea0c52ccc 100644
--- a/fs/btrfs/delayed-ref.c
+++ b/fs/btrfs/delayed-ref.c
@@ -1104,28 +1104,28 @@  int __init btrfs_delayed_ref_init(void)
 	btrfs_delayed_ref_head_cachep = kmem_cache_create(
 				"btrfs_delayed_ref_head",
 				sizeof(struct btrfs_delayed_ref_head), 0,
-				SLAB_MEM_SPREAD, NULL);
+				BTRFS_DEBUG_SLAB_NO_MERGE | SLAB_MEM_SPREAD, NULL);
 	if (!btrfs_delayed_ref_head_cachep)
 		goto fail;
 
 	btrfs_delayed_tree_ref_cachep = kmem_cache_create(
 				"btrfs_delayed_tree_ref",
 				sizeof(struct btrfs_delayed_tree_ref), 0,
-				SLAB_MEM_SPREAD, NULL);
+				BTRFS_DEBUG_SLAB_NO_MERGE | SLAB_MEM_SPREAD, NULL);
 	if (!btrfs_delayed_tree_ref_cachep)
 		goto fail;
 
 	btrfs_delayed_data_ref_cachep = kmem_cache_create(
 				"btrfs_delayed_data_ref",
 				sizeof(struct btrfs_delayed_data_ref), 0,
-				SLAB_MEM_SPREAD, NULL);
+				BTRFS_DEBUG_SLAB_NO_MERGE | SLAB_MEM_SPREAD, NULL);
 	if (!btrfs_delayed_data_ref_cachep)
 		goto fail;
 
 	btrfs_delayed_extent_op_cachep = kmem_cache_create(
 				"btrfs_delayed_extent_op",
 				sizeof(struct btrfs_delayed_extent_op), 0,
-				SLAB_MEM_SPREAD, NULL);
+				BTRFS_DEBUG_SLAB_NO_MERGE | SLAB_MEM_SPREAD, NULL);
 	if (!btrfs_delayed_extent_op_cachep)
 		goto fail;
 
diff --git a/fs/btrfs/extent-io-tree.c b/fs/btrfs/extent-io-tree.c
index a2315a4b8b75..3075d9439907 100644
--- a/fs/btrfs/extent-io-tree.c
+++ b/fs/btrfs/extent-io-tree.c
@@ -1771,7 +1771,7 @@  int __init extent_state_init_cachep(void)
 {
 	extent_state_cache = kmem_cache_create("btrfs_extent_state",
 			sizeof(struct extent_state), 0,
-			SLAB_MEM_SPREAD, NULL);
+			BTRFS_DEBUG_SLAB_NO_MERGE | SLAB_MEM_SPREAD, NULL);
 	if (!extent_state_cache)
 		return -ENOMEM;
 
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index a845a90d46f7..094004c11a50 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -149,7 +149,7 @@  int __init extent_buffer_init_cachep(void)
 {
 	extent_buffer_cache = kmem_cache_create("btrfs_extent_buffer",
 			sizeof(struct extent_buffer), 0,
-			SLAB_MEM_SPREAD, NULL);
+			BTRFS_DEBUG_SLAB_NO_MERGE | SLAB_MEM_SPREAD, NULL);
 	if (!extent_buffer_cache)
 		return -ENOMEM;
 
diff --git a/fs/btrfs/extent_map.c b/fs/btrfs/extent_map.c
index 0cdb3e86f29b..d66f512613f7 100644
--- a/fs/btrfs/extent_map.c
+++ b/fs/btrfs/extent_map.c
@@ -17,7 +17,7 @@  int __init extent_map_init(void)
 {
 	extent_map_cache = kmem_cache_create("btrfs_extent_map",
 			sizeof(struct extent_map), 0,
-			SLAB_MEM_SPREAD, NULL);
+			BTRFS_DEBUG_SLAB_NO_MERGE | SLAB_MEM_SPREAD, NULL);
 	if (!extent_map_cache)
 		return -ENOMEM;
 	return 0;
diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
index 880800418075..c5b70dbec245 100644
--- a/fs/btrfs/free-space-cache.c
+++ b/fs/btrfs/free-space-cache.c
@@ -4160,12 +4160,13 @@  int __init btrfs_free_space_init(void)
 {
 	btrfs_free_space_cachep = kmem_cache_create("btrfs_free_space",
 			sizeof(struct btrfs_free_space), 0,
-			SLAB_MEM_SPREAD, NULL);
+			BTRFS_DEBUG_SLAB_NO_MERGE | SLAB_MEM_SPREAD, NULL);
 	if (!btrfs_free_space_cachep)
 		return -ENOMEM;
 
 	btrfs_free_space_bitmap_cachep = kmem_cache_create("btrfs_free_space_bitmap",
 							PAGE_SIZE, PAGE_SIZE,
+							BTRFS_DEBUG_SLAB_NO_MERGE |
 							SLAB_MEM_SPREAD, NULL);
 	if (!btrfs_free_space_bitmap_cachep) {
 		kmem_cache_destroy(btrfs_free_space_cachep);
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index b12850b31cb3..3bde49018530 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -8723,7 +8723,8 @@  int __init btrfs_init_cachep(void)
 {
 	btrfs_inode_cachep = kmem_cache_create("btrfs_inode",
 			sizeof(struct btrfs_inode), 0,
-			SLAB_RECLAIM_ACCOUNT | SLAB_MEM_SPREAD | SLAB_ACCOUNT,
+			BTRFS_DEBUG_SLAB_NO_MERGE | SLAB_RECLAIM_ACCOUNT |
+			SLAB_MEM_SPREAD | SLAB_ACCOUNT,
 			init_once);
 	if (!btrfs_inode_cachep)
 		goto fail;
diff --git a/fs/btrfs/misc.h b/fs/btrfs/misc.h
index 005751a12911..9f667ab1a3d5 100644
--- a/fs/btrfs/misc.h
+++ b/fs/btrfs/misc.h
@@ -8,6 +8,16 @@ 
 #include <linux/math64.h>
 #include <linux/rbtree.h>
 
+/*
+ * Don't merge slabs in debug build so we can verify there are no leaks when
+ * reloading module.
+ */
+#ifdef CONFIG_BTRFS_DEBUG
+#define BTRFS_DEBUG_SLAB_NO_MERGE		SLAB_NO_MERGE
+#else
+#define BTRFS_DEBUG_SLAB_NO_MERGE		0
+#endif
+
 #define in_range(b, first, len) ((b) >= (first) && (b) < (first) + (len))
 
 /*
diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index a629532283bc..c7306e8bd877 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -1245,7 +1245,7 @@  int __init ordered_data_init(void)
 {
 	btrfs_ordered_extent_cache = kmem_cache_create("btrfs_ordered_extent",
 				     sizeof(struct btrfs_ordered_extent), 0,
-				     SLAB_MEM_SPREAD,
+				     BTRFS_DEBUG_SLAB_NO_MERGE | SLAB_MEM_SPREAD,
 				     NULL);
 	if (!btrfs_ordered_extent_cache)
 		return -ENOMEM;
diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
index cf306351b148..e7cfc992e02a 100644
--- a/fs/btrfs/transaction.c
+++ b/fs/btrfs/transaction.c
@@ -2641,7 +2641,8 @@  int __init btrfs_transaction_init(void)
 {
 	btrfs_trans_handle_cachep = kmem_cache_create("btrfs_trans_handle",
 			sizeof(struct btrfs_trans_handle), 0,
-			SLAB_TEMPORARY | SLAB_MEM_SPREAD, NULL);
+			BTRFS_DEBUG_SLAB_NO_MERGE | SLAB_TEMPORARY | SLAB_MEM_SPREAD,
+			NULL);
 	if (!btrfs_trans_handle_cachep)
 		return -ENOMEM;
 	return 0;