
[v4] module: Add CONFIG_MODULE_DISABLE_INIT_FREE option

Message ID 20231012014720.19748-1-quic_jiangenj@quicinc.com (mailing list archive)
State New, archived
Series [v4] module: Add CONFIG_MODULE_DISABLE_INIT_FREE option

Commit Message

Joey Jiao Oct. 12, 2023, 1:47 a.m. UTC
To facilitate syzkaller testing, it's essential for the module to retain the
same address across reboots. In userspace, modprobe commands must be
executed sequentially. In the kernel, selecting the
CONFIG_MODULE_DISABLE_INIT_FREE option disables the asynchronous freeing
of init sections.

Signed-off-by: Joey Jiao <quic_jiangenj@quicinc.com>
---
 kernel/module/Kconfig | 12 ++++++++++++
 kernel/module/main.c  |  3 ++-
 2 files changed, 14 insertions(+), 1 deletion(-)
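
For illustration, assuming the option lands as proposed here, a syzkaller test
kernel would enable it in the config fragment used to build the kernel,
alongside module support:

  CONFIG_MODULES=y
  CONFIG_MODULE_DISABLE_INIT_FREE=y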

Comments

Luis Chamberlain Oct. 12, 2023, 4:56 p.m. UTC | #1
On Thu, Oct 12, 2023 at 07:17:19AM +0530, Joey Jiao wrote:
>  
> +config MODULE_DISABLE_INIT_FREE
> +	bool "Disable freeing of init sections"
> +	default n
> +	help
> +	  By default, the kernel frees init sections after a module is fully
> +	  loaded.
> +
> +	  MODULE_DISABLE_INIT_FREE allows users to prevent the freeing of init
> +	  sections. This option is particularly helpful for syzkaller fuzzing,
> +	  ensuring that the module consistently loads at the same address
> +	  across reboots.

How and why does not freeing init help with syzkaller exactly? I don't
see the relationship between not freeing init and ensuring the module
loads at the same address. There could be many things that could cause
the address a module would take to be allocated at another location.
I cannot fathom how this simple toggle could ensure modules follow the
same address allocations across reboots. That seems like odd chance,
not something actually deterministic.

> +
>  endif # MODULES
> diff --git a/kernel/module/main.c b/kernel/module/main.c
> index 98fedfdb8db5..0f242b7b29fe 100644
> --- a/kernel/module/main.c
> +++ b/kernel/module/main.c
> @@ -2593,7 +2593,8 @@ static noinline int do_init_module(struct module *mod)
>  	 * be cleaned up needs to sync with the queued work - ie
>  	 * rcu_barrier()
>  	 */
> -	if (llist_add(&freeinit->node, &init_free_list))
> +	if (llist_add(&freeinit->node, &init_free_list) &&
> +		!IS_ENABLED(CONFIG_MODULE_DISABLE_INIT_FREE))
>  		schedule_work(&init_free_wq);

Patch

diff --git a/kernel/module/Kconfig b/kernel/module/Kconfig
index 33a2e991f608..88206bc4c7d4 100644
--- a/kernel/module/Kconfig
+++ b/kernel/module/Kconfig
@@ -389,4 +389,16 @@ config MODULES_TREE_LOOKUP
 	def_bool y
 	depends on PERF_EVENTS || TRACING || CFI_CLANG
 
+config MODULE_DISABLE_INIT_FREE
+	bool "Disable freeing of init sections"
+	default n
+	help
+	  By default, the kernel frees init sections after a module is fully
+	  loaded.
+
+	  MODULE_DISABLE_INIT_FREE allows users to prevent the freeing of init
+	  sections. This option is particularly helpful for syzkaller fuzzing,
+	  ensuring that the module consistently loads at the same address
+	  across reboots.
+
 endif # MODULES
diff --git a/kernel/module/main.c b/kernel/module/main.c
index 98fedfdb8db5..0f242b7b29fe 100644
--- a/kernel/module/main.c
+++ b/kernel/module/main.c
@@ -2593,7 +2593,8 @@ static noinline int do_init_module(struct module *mod)
 	 * be cleaned up needs to sync with the queued work - ie
 	 * rcu_barrier()
 	 */
-	if (llist_add(&freeinit->node, &init_free_list))
+	if (llist_add(&freeinit->node, &init_free_list) &&
+		!IS_ENABLED(CONFIG_MODULE_DISABLE_INIT_FREE))
 		schedule_work(&init_free_wq);
 
 	mutex_unlock(&module_mutex);
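
Below is a simplified user-space sketch of the pattern in the hunk above (the
IS_ENABLED() stand-in and the stubs are illustrative, not the kernel
implementations): because IS_ENABLED() expands to a compile-time constant,
enabling the option makes the condition constant-false after llist_add() runs,
so the free work is simply never scheduled and the init sections stay in place.

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the kernel pieces, for illustration only. */
#define CONFIG_MODULE_DISABLE_INIT_FREE 1	/* pretend the option is set  */
#define IS_ENABLED(cfg) (cfg)			/* real macro checks =y or =m */

static bool llist_add_stub(void)		/* queues freeinit->node      */
{
	puts("freeinit node queued on init_free_list");
	return true;				/* list was previously empty  */
}

static void schedule_work_stub(void)		/* would free init sections   */
{
	puts("init_free_wq scheduled");
}

int main(void)
{
	/*
	 * Mirrors: if (llist_add(...) && !IS_ENABLED(...)) schedule_work(...);
	 * llist_add_stub() still runs (left operand of &&), but the right
	 * operand is constant 0, so schedule_work_stub() is never called.
	 */
	if (llist_add_stub() && !IS_ENABLED(CONFIG_MODULE_DISABLE_INIT_FREE))
		schedule_work_stub();
	return 0;
}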