diff mbox series

[bpf-next,v3] bpf: reject kfunc calls that overflow insn->imm

Message ID 20220209091153.54116-1-houtao1@huawei.com (mailing list archive)
State Superseded
Delegated to: BPF
Headers show
Series [bpf-next,v3] bpf: reject kfunc calls that overflow insn->imm | expand

Checks

Context Check Description
netdev/tree_selection success Clearly marked for bpf-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Single patches do not need cover letters
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 20 this patch: 20
netdev/cc_maintainers warning 1 maintainers not CCed: kpsingh@kernel.org
netdev/build_clang success Errors and warnings before: 18 this patch: 18
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 25 this patch: 25
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 25 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-PR success PR summary
bpf/vmtest-bpf-next success VM_Test

Commit Message

Hou Tao Feb. 9, 2022, 9:11 a.m. UTC
Now the kfunc call uses s32 to represent the offset between the
address of the kfunc and __bpf_call_base, but it does not check
whether that offset overflows s32, so add a check to reject such
invalid kfunc calls.

Signed-off-by: Hou Tao <houtao1@huawei.com>
---
v3:
 * call BPF_CALL_IMM() once (suggested by Yonghong)

v2: https://lore.kernel.org/bpf/20220208123348.40360-1-houtao1@huawei.com
 * instead of checking the overflow in selftests, just reject
   these kfunc calls directly in verifier

v1: https://lore.kernel.org/bpf/20220206043107.18549-1-houtao1@huawei.com
---
 kernel/bpf/verifier.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

Comments

Yonghong Song Feb. 9, 2022, 3:42 p.m. UTC | #1
On 2/9/22 1:11 AM, Hou Tao wrote:
> Now the kfunc call uses s32 to represent the offset between the
> address of the kfunc and __bpf_call_base, but it does not check
> whether that offset overflows s32, so add a check to reject such
> invalid kfunc calls.
> 
> Signed-off-by: Hou Tao <houtao1@huawei.com>

The patch itself looks good, but the commit message doesn't
specify whether this is a theoretical case or one that could
really happen in practice. I looked at the patch history and
found the commit message below in v1 of the patch ([1]):

 > Since commit b2eed9b58811 ("arm64/kernel: kaslr: reduce module
 > randomization range to 2 GB"), for arm64 whether KASLR is enabled
 > or not, the module is placed within 2GB of the kernel region, so
 > s32 in bpf_kfunc_desc is sufficient to represent the offset of
 > module function relative to __bpf_call_base. The only thing needed
 > is to override bpf_jit_supports_kfunc_call().

So it does look like the overflow is possible.

So I suggest you add more description on *when* the overflow
may happen in this patch.

And you can also retain your previous selftest patch to test
this verifier change.

   [1] 
https://lore.kernel.org/bpf/20220119144942.305568-1-houtao1@huawei.com/

> ---
> v3:
>   * call BPF_CALL_IMM() once (suggested by Yonghong)
> 
> v2: https://lore.kernel.org/bpf/20220208123348.40360-1-houtao1@huawei.com
>   * instead of checking the overflow in selftests, just reject
>     these kfunc calls directly in verifier
> 
> v1: https://lore.kernel.org/bpf/20220206043107.18549-1-houtao1@huawei.com
> ---
>   kernel/bpf/verifier.c | 11 ++++++++++-
>   1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 1ae41d0cf96c..eb72e6139e2b 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -1842,6 +1842,7 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
>   	struct bpf_kfunc_desc *desc;
>   	const char *func_name;
>   	struct btf *desc_btf;
> +	unsigned long call_imm;
>   	unsigned long addr;
>   	int err;
>   
> @@ -1926,9 +1927,17 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
>   		return -EINVAL;
>   	}
>   
> +	call_imm = BPF_CALL_IMM(addr);
> +	/* Check whether or not the relative offset overflows desc->imm */
> +	if ((unsigned long)(s32)call_imm != call_imm) {
> +		verbose(env, "address of kernel function %s is out of range\n",
> +			func_name);
> +		return -EINVAL;
> +	}
> +
>   	desc = &tab->descs[tab->nr_descs++];
>   	desc->func_id = func_id;
> -	desc->imm = BPF_CALL_IMM(addr);
> +	desc->imm = call_imm;
>   	desc->offset = offset;
>   	err = btf_distill_func_proto(&env->log, desc_btf,
>   				     func_proto, func_name,
Hou Tao Feb. 15, 2022, 4:29 a.m. UTC | #2
Hi,

On 2/9/2022 11:42 PM, Yonghong Song wrote:
>
>
> On 2/9/22 1:11 AM, Hou Tao wrote:
>> Now the kfunc call uses s32 to represent the offset between the
>> address of the kfunc and __bpf_call_base, but it does not check
>> whether that offset overflows s32, so add a check to reject such
>> invalid kfunc calls.
>>
>> Signed-off-by: Hou Tao <houtao1@huawei.com>
>
> The patch itself looks good, but the commit message doesn't
> specify whether this is a theoretical case or one that could
> really happen in practice. I looked at the patch history and
> found the commit message below in v1 of the patch ([1]):
>
> > Since commit b2eed9b58811 ("arm64/kernel: kaslr: reduce module
> > randomization range to 2 GB"), for arm64 whether KASLR is enabled
> > or not, the module is placed within 2GB of the kernel region, so
> > s32 in bpf_kfunc_desc is sufficient to represent the offset of
> > module function relative to __bpf_call_base. The only thing needed
> > is to override bpf_jit_supports_kfunc_call().
>
> So it does look like the overflow is possible.
>
> So I suggest you add more description on *when* the overflow
> may happen in this patch.
Will do in v5.
>
> And you can also retain your previous selftest patch to test
> this verifier change.
Is it necessary? IMO it is just a duplication of the newly-added logic.

Regards,
Tao

>
>   [1] https://lore.kernel.org/bpf/20220119144942.305568-1-houtao1@huawei.com/
>
>> ---
>> v3:
>>   * call BPF_CALL_IMM() once (suggested by Yonghong)
>>
>> v2: https://lore.kernel.org/bpf/20220208123348.40360-1-houtao1@huawei.com
>>   * instead of checking the overflow in selftests, just reject
>>     these kfunc calls directly in verifier
>>
>> v1: https://lore.kernel.org/bpf/20220206043107.18549-1-houtao1@huawei.com
>> ---
>>   kernel/bpf/verifier.c | 11 ++++++++++-
>>   1 file changed, 10 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 1ae41d0cf96c..eb72e6139e2b 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -1842,6 +1842,7 @@ static int add_kfunc_call(struct bpf_verifier_env *env,
>> u32 func_id, s16 offset)
>>       struct bpf_kfunc_desc *desc;
>>       const char *func_name;
>>       struct btf *desc_btf;
>> +    unsigned long call_imm;
>>       unsigned long addr;
>>       int err;
>>   @@ -1926,9 +1927,17 @@ static int add_kfunc_call(struct bpf_verifier_env
>> *env, u32 func_id, s16 offset)
>>           return -EINVAL;
>>       }
>>   +    call_imm = BPF_CALL_IMM(addr);
>> +    /* Check whether or not the relative offset overflows desc->imm */
>> +    if ((unsigned long)(s32)call_imm != call_imm) {
>> +        verbose(env, "address of kernel function %s is out of range\n",
>> +            func_name);
>> +        return -EINVAL;
>> +    }
>> +
>>       desc = &tab->descs[tab->nr_descs++];
>>       desc->func_id = func_id;
>> -    desc->imm = BPF_CALL_IMM(addr);
>> +    desc->imm = call_imm;
>>       desc->offset = offset;
>>       err = btf_distill_func_proto(&env->log, desc_btf,
>>                        func_proto, func_name,
> .
Yonghong Song Feb. 15, 2022, 6:39 a.m. UTC | #3
On 2/14/22 8:29 PM, Hou Tao wrote:
> Hi,
> 
> On 2/9/2022 11:42 PM, Yonghong Song wrote:
>>
>>
>> On 2/9/22 1:11 AM, Hou Tao wrote:
>>> Now the kfunc call uses s32 to represent the offset between the
>>> address of the kfunc and __bpf_call_base, but it does not check
>>> whether that offset overflows s32, so add a check to reject such
>>> invalid kfunc calls.
>>>
>>> Signed-off-by: Hou Tao <houtao1@huawei.com>
>>
>> The patch itself looks good, but the commit message doesn't
>> specify whether this is a theoretical case or one that could
>> really happen in practice. I looked at the patch history and
>> found the commit message below in v1 of the patch ([1]):
>>
>>> Since commit b2eed9b58811 ("arm64/kernel: kaslr: reduce module
>>> randomization range to 2 GB"), for arm64 whether KASLR is enabled
>>> or not, the module is placed within 2GB of the kernel region, so
>>> s32 in bpf_kfunc_desc is sufficient to represent the offset of
>>> module function relative to __bpf_call_base. The only thing needed
>>> is to override bpf_jit_supports_kfunc_call().
>>
>> So it does look like the overflow is possible.
>>
>> So I suggest you add more description on *when* the overflow
>> may happen in this patch.
> Will do in v5.
>>
>> And you can also retain your previous selftest patch to test
>> this verifier change.
> Is it necessary? IMO it is just a duplication of the newly-added logic.

Okay, I just realized that the previous selftest doesn't really
verify the kernel change. That is, it will succeed regardless
of whether the kernel change is applied or not. So it is okay not
to have your previous selftest.

> 
> Regards,
> Tao
> 
>>
>>    [1] https://lore.kernel.org/bpf/20220119144942.305568-1-houtao1@huawei.com/
>>
>>> ---
>>> v3:
>>>    * call BPF_CALL_IMM() once (suggested by Yonghong)
>>>
>>> v2: https://lore.kernel.org/bpf/20220208123348.40360-1-houtao1@huawei.com
>>>    * instead of checking the overflow in selftests, just reject
>>>      these kfunc calls directly in verifier
>>>
>>> v1: https://lore.kernel.org/bpf/20220206043107.18549-1-houtao1@huawei.com
>>> ---
>>>    kernel/bpf/verifier.c | 11 ++++++++++-
>>>    1 file changed, 10 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>>> index 1ae41d0cf96c..eb72e6139e2b 100644
>>> --- a/kernel/bpf/verifier.c
>>> +++ b/kernel/bpf/verifier.c
>>> @@ -1842,6 +1842,7 @@ static int add_kfunc_call(struct bpf_verifier_env *env,
>>> u32 func_id, s16 offset)
>>>        struct bpf_kfunc_desc *desc;
>>>        const char *func_name;
>>>        struct btf *desc_btf;
>>> +    unsigned long call_imm;
>>>        unsigned long addr;
>>>        int err;
>>>    @@ -1926,9 +1927,17 @@ static int add_kfunc_call(struct bpf_verifier_env
>>> *env, u32 func_id, s16 offset)
>>>            return -EINVAL;
>>>        }
>>>    +    call_imm = BPF_CALL_IMM(addr);
>>> +    /* Check whether or not the relative offset overflows desc->imm */
>>> +    if ((unsigned long)(s32)call_imm != call_imm) {
>>> +        verbose(env, "address of kernel function %s is out of range\n",
>>> +            func_name);
>>> +        return -EINVAL;
>>> +    }
>>> +
>>>        desc = &tab->descs[tab->nr_descs++];
>>>        desc->func_id = func_id;
>>> -    desc->imm = BPF_CALL_IMM(addr);
>>> +    desc->imm = call_imm;
>>>        desc->offset = offset;
>>>        err = btf_distill_func_proto(&env->log, desc_btf,
>>>                         func_proto, func_name,
>> .
>

Patch

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1ae41d0cf96c..eb72e6139e2b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1842,6 +1842,7 @@  static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 	struct bpf_kfunc_desc *desc;
 	const char *func_name;
 	struct btf *desc_btf;
+	unsigned long call_imm;
 	unsigned long addr;
 	int err;
 
@@ -1926,9 +1927,17 @@  static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 		return -EINVAL;
 	}
 
+	call_imm = BPF_CALL_IMM(addr);
+	/* Check whether or not the relative offset overflows desc->imm */
+	if ((unsigned long)(s32)call_imm != call_imm) {
+		verbose(env, "address of kernel function %s is out of range\n",
+			func_name);
+		return -EINVAL;
+	}
+
 	desc = &tab->descs[tab->nr_descs++];
 	desc->func_id = func_id;
-	desc->imm = BPF_CALL_IMM(addr);
+	desc->imm = call_imm;
 	desc->offset = offset;
 	err = btf_distill_func_proto(&env->log, desc_btf,
 				     func_proto, func_name,