
[1/4] arm64: module: create module allocations without exec permissions

Message ID 20190523102256.29168-2-ard.biesheuvel@arm.com (mailing list archive)
State New, archived
Series: arm64: wire up VM_FLUSH_RESET_PERMS

Commit Message

Ard Biesheuvel May 23, 2019, 10:22 a.m. UTC
Now that the core code manages the executable permissions of code
regions of modules explicitly, it is no longer necessary to create
the module vmalloc regions with RWX permissions, and we can create
them with RW- permissions instead, which is preferred from a
security perspective.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@arm.com>
---
 arch/arm64/kernel/module.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
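
As a sketch of the flow this patch relies on (illustrative, not verbatim:
the helper names follow the generic module loader of this era, and
module_enable_x() in particular may not exist in older trees):

        /* module_alloc() now hands back RW- (PAGE_KERNEL) memory ... */
        void *p = module_alloc(mod->core_layout.size);

        /* ... and once sections are laid out and relocations applied,
         * the core loader flips permissions per region:
         */
        module_enable_ro(mod, false);   /* text + rodata -> read-only */
        module_enable_nx(mod);          /* data regions  -> non-exec  */
        module_enable_x(mod);           /* text          -> R-X       */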

Comments

Anshuman Khandual May 28, 2019, 5:35 a.m. UTC | #1
On 05/23/2019 03:52 PM, Ard Biesheuvel wrote:
> Now that the core code manages the executable permissions of code
> regions of modules explicitly, it is no longer necessary to create

I guess the permission transition for the various module sections happens
through module_enable_[ro|nx]() after allocating via module_alloc().

> the module vmalloc regions with RWX permissions, and we can create
> them with RW- permissions instead, which is preferred from a
> security perspective.

Makes sense. Will this be followed in all architectures now?

> 
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@arm.com>
> ---
>  arch/arm64/kernel/module.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
> index 2e4e3915b4d0..88f0ed31d9aa 100644
> --- a/arch/arm64/kernel/module.c
> +++ b/arch/arm64/kernel/module.c
> @@ -41,7 +41,7 @@ void *module_alloc(unsigned long size)
>  
>  	p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_base,
>  				module_alloc_base + MODULES_VSIZE,
> -				gfp_mask, PAGE_KERNEL_EXEC, 0,
> +				gfp_mask, PAGE_KERNEL, 0,
>  				NUMA_NO_NODE, __builtin_return_address(0));
>  
>  	if (!p && IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
> @@ -57,7 +57,7 @@ void *module_alloc(unsigned long size)
>  		 */
>  		p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_base,
>  				module_alloc_base + SZ_4G, GFP_KERNEL,
> -				PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
> +				PAGE_KERNEL, 0, NUMA_NO_NODE,
>  				__builtin_return_address(0));
>  
>  	if (p && (kasan_module_alloc(p, size) < 0)) {
> 

Which just makes sure that PTE_PXN never gets dropped while creating
these mappings.
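
For reference, the two protection values differ exactly in PTE_PXN.
Roughly, from arch/arm64/include/asm/pgtable-prot.h of this era (the
exact composition of PROT_NORMAL varies by kernel version):

        #define PROT_NORMAL          (PROT_DEFAULT | PTE_PXN | PTE_UXN | \
                                      PTE_DIRTY | PTE_WRITE | \
                                      PTE_ATTRINDX(MT_NORMAL))

        #define PAGE_KERNEL          __pgprot(PROT_NORMAL)
        #define PAGE_KERNEL_EXEC     __pgprot(PROT_NORMAL & ~PTE_PXN)

So PAGE_KERNEL keeps PTE_PXN set (non-executable at EL1), while
PAGE_KERNEL_EXEC is the same set of attributes with PTE_PXN cleared.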
Ard Biesheuvel May 28, 2019, 6:24 a.m. UTC | #2
On 5/28/19 7:35 AM, Anshuman Khandual wrote:
> 
> 
> On 05/23/2019 03:52 PM, Ard Biesheuvel wrote:
>> Now that the core code manages the executable permissions of code
>> regions of modules explicitly, it is no longer necessary to create
> 
> I guess the permission transition for the various module sections happens
> through module_enable_[ro|nx]() after allocating via module_alloc().
> 

Indeed.

>> the module vmalloc regions with RWX permissions, and we can create
>> them with RW- permissions instead, which is preferred from a
>> security perspective.
> 
> Makes sense. Will this be followed in all architectures now?
> 

I am not sure whether every architecture implements module_enable_[ro|nx](),
but those that do should probably apply this change as well.
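
The generic mechanics behind those helpers look roughly like this
(paraphrased from kernel/module.c; the real code carries extra
alignment checks, and an architecture only benefits once it provides
working set_memory_nx()/set_memory_x() implementations):

        /* apply a set_memory_*() operation to a module's text region */
        static void frob_text(const struct module_layout *layout,
                              int (*set_memory)(unsigned long start,
                                                int num_pages))
        {
                set_memory((unsigned long)layout->base,
                           layout->text_size >> PAGE_SHIFT);
        }

Any architecture that wires these up should then be able to drop the
executable attribute from its module_alloc() allocation in the same way.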

>> [...]
> 
> Which just makes sure that PTE_PXN never gets dropped while creating
> these mappings.
> 

Not sure what you mean. Is there a question here?

Patch

diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 2e4e3915b4d0..88f0ed31d9aa 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -41,7 +41,7 @@ void *module_alloc(unsigned long size)
 
 	p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_base,
 				module_alloc_base + MODULES_VSIZE,
-				gfp_mask, PAGE_KERNEL_EXEC, 0,
+				gfp_mask, PAGE_KERNEL, 0,
 				NUMA_NO_NODE, __builtin_return_address(0));
 
 	if (!p && IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
@@ -57,7 +57,7 @@ void *module_alloc(unsigned long size)
 		 */
 		p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_base,
 				module_alloc_base + SZ_4G, GFP_KERNEL,
-				PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
+				PAGE_KERNEL, 0, NUMA_NO_NODE,
 				__builtin_return_address(0));
 
 	if (p && (kasan_module_alloc(p, size) < 0)) {