Message ID | 20220130092917.14544-2-hotforest@gmail.com (mailing list archive) |
---|---|
State | New, archived |
Series | bpf, arm64: enable kfunc call |
On 1/30/22 10:29 AM, Hou Tao wrote:
> From: Hou Tao <houtao1@huawei.com>
>
> Since commit b2eed9b58811 ("arm64/kernel: kaslr: reduce module
> randomization range to 2 GB"), for arm64 the module is placed within
> 2GB of the kernel region whether KASLR is enabled or not, so the s32
> field in bpf_kfunc_desc is sufficient to represent the offset of a
> module function relative to __bpf_call_base. The only thing needed
> is to override bpf_jit_supports_kfunc_call().
>
> Signed-off-by: Hou Tao <houtao1@huawei.com>

Could you rebase patch 2 & 3 and resend as they don't apply to bpf-next
right now. Meanwhile, applied this one, thanks a lot!
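For readers following along: the 2 GB module placement matters because a kfunc call is emitted with a signed 32-bit immediate holding the target's offset from __bpf_call_base. Below is a minimal sketch of that constraint; the helper name kfunc_offset_fits() and the exact form of the check are illustrative, not the verifier's literal code.

```c
/* Illustrative sketch, not the verifier's literal code: the kfunc call
 * instruction's imm field is a signed 32 bits and holds the target's
 * offset from __bpf_call_base, so every callable kernel or module
 * function must lie within +/- 2 GB of that base.  Commit b2eed9b58811
 * gives arm64 that guarantee, which is why simply reporting support via
 * bpf_jit_supports_kfunc_call() is enough.  kfunc_offset_fits() is a
 * made-up name for this check.
 */
#include <linux/types.h>

extern u64 __bpf_call_base(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);

static bool kfunc_offset_fits(unsigned long kfunc_addr)
{
	s64 off = (s64)kfunc_addr - (s64)(unsigned long)&__bpf_call_base;

	/* The offset must survive a round trip through a 32-bit imm. */
	return off == (s64)(s32)off;
}
```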
```diff
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index e96d4d87291f..74f9a9b6a053 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -1143,6 +1143,11 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	return prog;
 }
 
+bool bpf_jit_supports_kfunc_call(void)
+{
+	return true;
+}
+
 u64 bpf_jit_alloc_exec_limit(void)
 {
 	return VMALLOC_END - VMALLOC_START;
```
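To see what this change enables from the program side, here is a rough sketch of a BPF program whose kfunc call the arm64 JIT can now compile. It is modeled on the kernel's kfunc_call selftests and targets bpf_kfunc_call_test1() from net/bpf/test_run.c; treat the section name, signature details, and overall shape as assumptions rather than a verbatim copy of the selftest.

```c
// SPDX-License-Identifier: GPL-2.0
/* Rough sketch, modeled on the kfunc_call selftests: a BPF program
 * calling the test kfunc exported by net/bpf/test_run.c.  Details such
 * as the signature and the fullsock conversion are assumptions here.
 */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

extern __u64 bpf_kfunc_call_test1(struct sock *sk, __u32 a, __u64 b,
				  __u32 c, __u64 d) __ksym;

SEC("tc")
int call_kfunc(struct __sk_buff *skb)
{
	struct bpf_sock *sk = skb->sk;

	if (!sk)
		return -1;

	sk = bpf_sk_fullsock(sk);
	if (!sk)
		return -1;

	/* With bpf_jit_supports_kfunc_call() returning true, the arm64 JIT
	 * emits this as a direct call resolved from the s32 offset.
	 */
	return (int)bpf_kfunc_call_test1((struct sock *)sk, 1, 2, 3, 4);
}

char LICENSE[] SEC("license") = "GPL";
```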