[bpf-next] bpf, arm64: enable kfunc call

Message ID 20220119144942.305568-1-houtao1@huawei.com (mailing list archive)
State New, archived
Series [bpf-next] bpf, arm64: enable kfunc call

Commit Message

Hou Tao Jan. 19, 2022, 2:49 p.m. UTC
Since commit b2eed9b58811 ("arm64/kernel: kaslr: reduce module
randomization range to 2 GB"), on arm64 modules are placed within
2 GB of the kernel region whether or not KASLR is enabled, so the
s32 field in bpf_kfunc_desc is sufficient to represent the offset
of a module function relative to __bpf_call_base. The only thing
needed is to override bpf_jit_supports_kfunc_call().

Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 arch/arm64/net/bpf_jit_comp.c | 5 +++++
 1 file changed, 5 insertions(+)
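
For context, the 2 GB assumption above is what the verifier already
enforces when it encodes a kfunc address. A minimal sketch of that
check follows (an editor's paraphrase of the add_kfunc_call() logic
in kernel/bpf/verifier.c; names and details may differ across kernel
versions):

/* Editor's sketch, not the exact kernel code: a kfunc address is
 * stored in bpf_kfunc_desc::imm as a s32 offset from __bpf_call_base,
 * so any function further than +/-2 GB away must be rejected.
 */
static int kfunc_addr_to_imm(unsigned long addr, s32 *imm)
{
	s64 off = (s64)(addr - (unsigned long)__bpf_call_base);

	if (off != (s32)off)
		return -ERANGE;	/* out of s32 range, cannot encode */

	*imm = (s32)off;
	return 0;
}

On arm64 the commit cited above guarantees module text always falls
within this range, which is why returning true from
bpf_jit_supports_kfunc_call() is the only change required.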

Comments

Daniel Borkmann Jan. 24, 2022, 4:21 p.m. UTC | #1
On 1/19/22 3:49 PM, Hou Tao wrote:
> Since commit b2eed9b58811 ("arm64/kernel: kaslr: reduce module
> randomization range to 2 GB"), on arm64 modules are placed within
> 2 GB of the kernel region whether or not KASLR is enabled, so the
> s32 field in bpf_kfunc_desc is sufficient to represent the offset
> of a module function relative to __bpf_call_base. The only thing
> needed is to override bpf_jit_supports_kfunc_call().
> 
> Signed-off-by: Hou Tao <houtao1@huawei.com>

Lgtm, could we also add a BPF selftest to assert that this assumption
won't break in future when bpf_jit_supports_kfunc_call() returns true?

E.g. extending lib/test_bpf.ko could be an option, wdyt?

> ---
>   arch/arm64/net/bpf_jit_comp.c | 5 +++++
>   1 file changed, 5 insertions(+)
> 
> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
> index e96d4d87291f..74f9a9b6a053 100644
> --- a/arch/arm64/net/bpf_jit_comp.c
> +++ b/arch/arm64/net/bpf_jit_comp.c
> @@ -1143,6 +1143,11 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
>   	return prog;
>   }
>   
> +bool bpf_jit_supports_kfunc_call(void)
> +{
> +	return true;
> +}
> +
>   u64 bpf_jit_alloc_exec_limit(void)
>   {
>   	return VMALLOC_END - VMALLOC_START;
>
Hou Tao Jan. 26, 2022, 11:10 a.m. UTC | #2
Hi,

On 1/25/2022 12:21 AM, Daniel Borkmann wrote:
> On 1/19/22 3:49 PM, Hou Tao wrote:
>> Since commit b2eed9b58811 ("arm64/kernel: kaslr: reduce module
>> randomization range to 2 GB"), on arm64 modules are placed within
>> 2 GB of the kernel region whether or not KASLR is enabled, so the
>> s32 field in bpf_kfunc_desc is sufficient to represent the offset
>> of a module function relative to __bpf_call_base. The only thing
>> needed is to override bpf_jit_supports_kfunc_call().
>>
>> Signed-off-by: Hou Tao <houtao1@huawei.com>
>
> Lgtm, could we also add a BPF selftest to assert that this assumption
> won't break in future when bpf_jit_supports_kfunc_call() returns true?
>
> E.g. extending lib/test_bpf.ko could be an option, wdyt?
Makes sense. Will figure out how to do that.

Regards,
Tao
>
>> ---
>>   arch/arm64/net/bpf_jit_comp.c | 5 +++++
>>   1 file changed, 5 insertions(+)
>>
>> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
>> index e96d4d87291f..74f9a9b6a053 100644
>> --- a/arch/arm64/net/bpf_jit_comp.c
>> +++ b/arch/arm64/net/bpf_jit_comp.c
>> @@ -1143,6 +1143,11 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
>>   	return prog;
>>   }
>>   
>> +bool bpf_jit_supports_kfunc_call(void)
>> +{
>> +	return true;
>> +}
>> +
>>   u64 bpf_jit_alloc_exec_limit(void)
>>   {
>>       return VMALLOC_END - VMALLOC_START;
>>
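
Regarding the suggested selftest: a test module would need to export
a kfunc and register it with the BTF kfunc infrastructure so that a
BPF program can call it. A hypothetical sketch follows (the function
name bpf_test_mod_kfunc is invented for illustration, and the
registration macros and register_btf_kfunc_id_set() API shown are
those of later kernels; the exact interface has changed over time):

/* Hypothetical module-side kfunc for a lib/test_bpf.ko-style test. */
#include <linux/bpf.h>
#include <linux/btf.h>
#include <linux/btf_ids.h>
#include <linux/module.h>

__bpf_kfunc u64 bpf_test_mod_kfunc(u64 x)
{
	return x + 1;
}

BTF_KFUNCS_START(test_kfunc_ids)
BTF_ID_FLAGS(func, bpf_test_mod_kfunc)
BTF_KFUNCS_END(test_kfunc_ids)

static const struct btf_kfunc_id_set test_kfunc_set = {
	.owner = THIS_MODULE,
	.set   = &test_kfunc_ids,
};

static int __init test_kfunc_init(void)
{
	/* Make the kfunc callable from, e.g., TC programs. */
	return register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS,
					 &test_kfunc_set);
}
module_init(test_kfunc_init);

MODULE_LICENSE("GPL");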

Patch

diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index e96d4d87291f..74f9a9b6a053 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -1143,6 +1143,11 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	return prog;
 }
 
+bool bpf_jit_supports_kfunc_call(void)
+{
+	return true;
+}
+
 u64 bpf_jit_alloc_exec_limit(void)
 {
 	return VMALLOC_END - VMALLOC_START;
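
The BPF-program side of such a test would then declare the kfunc as
an extern ksym and call it directly. A hypothetical caller, matching
the invented bpf_test_mod_kfunc name from the sketch above:

/* Hypothetical BPF-side caller; the verifier resolves this call via
 * bpf_kfunc_desc and, with this patch, the arm64 JIT can emit it.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

extern u64 bpf_test_mod_kfunc(u64 x) __ksym;

SEC("tc")
int call_mod_kfunc(struct __sk_buff *skb)
{
	return (int)bpf_test_mod_kfunc(41);
}

char _license[] SEC("license") = "GPL";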