From patchwork Tue Sep 21 01:31:09 2021
X-Patchwork-Submitter: Martin KaFai Lau
X-Patchwork-Id: 12506703
From: Martin KaFai Lau
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Yonghong Song
Subject: [PATCH bpf-next 1/4] bpf: Check the other end of slot_type for STACK_SPILL
Date: Mon, 20 Sep 2021 18:31:09 -0700
Message-ID: <20210921013109.1036209-1-kafai@fb.com>
In-Reply-To: <20210921013102.1035356-1-kafai@fb.com>
References: <20210921013102.1035356-1-kafai@fb.com>
Every 8 bytes of the stack is tracked by one bpf_stack_state.  Within
each bpf_stack_state, there is a 'u8 slot_type[8]' that tracks the type
of each byte.  The verifier tests slot_type[0] == STACK_SPILL to decide
whether the spilled reg state has been saved.  The verifier currently
saves the reg state only if the whole 8 bytes are spilled to the stack,
so checking slot_type[7] is the same as checking slot_type[0].

A later patch will allow the verifier to save the reg state of a
bounded scalar also for a <8-byte spill.  There is an llvm patch [1] to
ensure that a <8-byte spill is 8-byte aligned, so checking slot_type[7]
instead of slot_type[0] is required.

While at it, this patch refactors the slot_type[0] == STACK_SPILL test
into a new function, is_spilled_reg(), and changes the check there from
slot_type[0] to slot_type[7].

[1] https://reviews.llvm.org/D109073

Signed-off-by: Martin KaFai Lau
---
 kernel/bpf/verifier.c | 30 +++++++++++++++++++-----------
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index e76b55917905..2ad2a12c5482 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -612,6 +612,14 @@ static const char *kernel_type_name(const struct btf* btf, u32 id)
     return btf_name_by_offset(btf, btf_type_by_id(btf, id)->name_off);
 }
 
+/* The reg state of a pointer or a bounded scalar was saved when
+ * it was spilled to the stack.
+ */
+static bool is_spilled_reg(const struct bpf_stack_state *stack)
+{
+    return stack->slot_type[BPF_REG_SIZE - 1] == STACK_SPILL;
+}
+
 static void print_verifier_state(struct bpf_verifier_env *env,
                  const struct bpf_func_state *state)
 {
@@ -717,7 +725,7 @@ static void print_verifier_state(struct bpf_verifier_env *env,
             continue;
         verbose(env, " fp%d", (-i - 1) * BPF_REG_SIZE);
         print_liveness(env, state->stack[i].spilled_ptr.live);
-        if (state->stack[i].slot_type[0] == STACK_SPILL) {
+        if (is_spilled_reg(&state->stack[i])) {
             reg = &state->stack[i].spilled_ptr;
             t = reg->type;
             verbose(env, "=%s", reg_type_str[t]);
@@ -2373,7 +2381,7 @@ static void mark_all_scalars_precise(struct bpf_verifier_env *env,
                 reg->precise = true;
             }
             for (j = 0; j < func->allocated_stack / BPF_REG_SIZE; j++) {
-                if (func->stack[j].slot_type[0] != STACK_SPILL)
+                if (!is_spilled_reg(&func->stack[j]))
                     continue;
                 reg = &func->stack[j].spilled_ptr;
                 if (reg->type != SCALAR_VALUE)
@@ -2415,7 +2423,7 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno,
         }
 
         while (spi >= 0) {
-            if (func->stack[spi].slot_type[0] != STACK_SPILL) {
+            if (!is_spilled_reg(&func->stack[spi])) {
                 stack_mask = 0;
                 break;
             }
@@ -2514,7 +2522,7 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno,
                 return 0;
             }
 
-            if (func->stack[i].slot_type[0] != STACK_SPILL) {
+            if (!is_spilled_reg(&func->stack[i])) {
                 stack_mask &= ~(1ull << i);
                 continue;
             }
@@ -2713,7 +2721,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
         /* regular write of data into stack destroys any spilled ptr */
         state->stack[spi].spilled_ptr.type = NOT_INIT;
         /* Mark slots as STACK_MISC if they belonged to spilled ptr. */
-        if (state->stack[spi].slot_type[0] == STACK_SPILL)
+        if (is_spilled_reg(&state->stack[spi]))
             for (i = 0; i < BPF_REG_SIZE; i++)
                 state->stack[spi].slot_type[i] = STACK_MISC;
 
@@ -2923,7 +2931,7 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
     stype = reg_state->stack[spi].slot_type;
     reg = &reg_state->stack[spi].spilled_ptr;
 
-    if (stype[0] == STACK_SPILL) {
+    if (is_spilled_reg(&reg_state->stack[spi])) {
         if (size != BPF_REG_SIZE) {
             if (reg->type != SCALAR_VALUE) {
                 verbose_linfo(env, env->insn_idx, "; ");
@@ -4514,11 +4522,11 @@ static int check_stack_range_initialized(
             goto mark;
         }
 
-        if (state->stack[spi].slot_type[0] == STACK_SPILL &&
+        if (is_spilled_reg(&state->stack[spi]) &&
             state->stack[spi].spilled_ptr.type == PTR_TO_BTF_ID)
             goto mark;
 
-        if (state->stack[spi].slot_type[0] == STACK_SPILL &&
+        if (is_spilled_reg(&state->stack[spi]) &&
             (state->stack[spi].spilled_ptr.type == SCALAR_VALUE ||
              env->allow_ptr_leaks)) {
             if (clobber) {
@@ -10356,9 +10364,9 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
              * return false to continue verification of this path
              */
             return false;
-        if (i % BPF_REG_SIZE)
+        if (i % BPF_REG_SIZE != BPF_REG_SIZE - 1)
             continue;
-        if (old->stack[spi].slot_type[0] != STACK_SPILL)
+        if (!is_spilled_reg(&old->stack[spi]))
             continue;
         if (!regsafe(env, &old->stack[spi].spilled_ptr,
                  &cur->stack[spi].spilled_ptr, idmap))
@@ -10565,7 +10573,7 @@ static int propagate_precision(struct bpf_verifier_env *env,
     }
 
     for (i = 0; i < state->allocated_stack / BPF_REG_SIZE; i++) {
-        if (state->stack[i].slot_type[0] != STACK_SPILL)
+        if (!is_spilled_reg(&state->stack[i]))
             continue;
         state_reg = &state->stack[i].spilled_ptr;
         if (state_reg->type != SCALAR_VALUE ||
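To make the slot_type indexing concrete: an 8-byte-aligned spill marks
slot_type[] from the [BPF_REG_SIZE - 1] end downward, so that end is set
for both full and partial spills, while slot_type[0] is only set for a
full 8-byte spill.  The user-space sketch below is editorial, not kernel
code: mark_spill() is a hypothetical stand-in for the marking loop that
patch 2 adds to save_register_state(), and the struct/constants merely
mirror the verifier's definitions.

    #include <assert.h>

    #define BPF_REG_SIZE 8

    enum { STACK_INVALID, STACK_SPILL, STACK_MISC };

    struct bpf_stack_state {
        unsigned char slot_type[BPF_REG_SIZE];
    };

    /* hypothetical: an aligned spill of 'size' bytes marks slot_type[7]
     * down to slot_type[8 - size]
     */
    static void mark_spill(struct bpf_stack_state *s, int size)
    {
        int i;

        for (i = BPF_REG_SIZE; i > BPF_REG_SIZE - size; i--)
            s->slot_type[i - 1] = STACK_SPILL;
    }

    int main(void)
    {
        struct bpf_stack_state full = {{0}}, partial = {{0}};

        mark_spill(&full, 8);    /* 8-byte spill: slot_type[0..7] */
        mark_spill(&partial, 4); /* 4-byte spill: slot_type[4..7] only */

        /* slot_type[7] identifies a spill in both cases; slot_type[0]
         * only in the full case -- hence is_spilled_reg() checks [7].
         */
        assert(full.slot_type[0] == STACK_SPILL);
        assert(partial.slot_type[0] == STACK_INVALID);
        assert(partial.slot_type[BPF_REG_SIZE - 1] == STACK_SPILL);
        return 0;
    }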
From patchwork Tue Sep 21 01:31:15 2021
X-Patchwork-Submitter: Martin KaFai Lau
X-Patchwork-Id: 12506709
From: Martin KaFai Lau
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Yonghong Song
Subject: [PATCH bpf-next 2/4] bpf: Support <8-byte scalar spill and refill
Date: Mon, 20 Sep 2021 18:31:15 -0700
Message-ID: <20210921013115.1037089-1-kafai@fb.com>
In-Reply-To: <20210921013102.1035356-1-kafai@fb.com>
References: <20210921013102.1035356-1-kafai@fb.com>

The verifier currently does not save the reg state when spilling a
<8-byte bounded scalar to the stack.  The bpf program will be
incorrectly rejected when this scalar is refilled to the reg and then
used to offset into a packet header.  A later patch has a simplified
bpf prog from a real use case to demonstrate this.  The current
workaround is to reparse the packet so that the offset scalar is
computed close to where the packet data is accessed, avoiding the
spill.  The header is thus parsed twice.

The llvm patch [1] aligns a <8-byte spill to an 8-byte stack address.
This simplifies the verifier support by avoiding the need to store
multiple reg states per 8-byte stack slot.

This patch changes the verifier to save the reg state when spilling a
<8-byte scalar to the stack.  The reg state saving is limited to spills
aligned to an 8-byte stack address.  The current refill logic already
calls coerce_reg_to_size(), so coerce_reg_to_size() is not called on
state->stack[spi].spilled_ptr during the spill.

When refilling in check_stack_read_fixed_off(), it checks that the
refill size is the same as the number of bytes marked with STACK_SPILL
before restoring the reg state.  When restoring the reg state to
state->regs[dst_regno], it must avoid overwriting
state->regs[dst_regno].subreg_def because that has already been set by
the earlier check_reg_arg() [check_mem_access() is called after
check_reg_arg() in do_check()].
Reordering check_mem_access() and check_reg_arg() would require many
changes to test_verifier's tests because of differences in the
verifier's error messages.  Thus, this patch saves
state->regs[dst_regno].subreg_def first in check_stack_read_fixed_off().

There are cases where the verifier needs to scrub a spilled slot from
STACK_SPILL to STACK_MISC.  After this patch, a spill is not always 8
bytes, so the scrub can no longer assume that the other 7 bytes are
always marked STACK_SPILL.  In particular, it must avoid changing an
uninitialized byte from STACK_INVALID to STACK_MISC; otherwise the
verifier would incorrectly accept a bpf program reading uninitialized
bytes from the stack.  A new helper, scrub_spilled_slot(), is created
for this purpose.

[1]: https://reviews.llvm.org/D109073

Signed-off-by: Martin KaFai Lau
---
 kernel/bpf/verifier.c | 67 +++++++++++++++++++++++++++++++++----------
 1 file changed, 52 insertions(+), 15 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 2ad2a12c5482..7a8351604f67 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -620,6 +620,12 @@ static bool is_spilled_reg(const struct bpf_stack_state *stack)
     return stack->slot_type[BPF_REG_SIZE - 1] == STACK_SPILL;
 }
 
+static void scrub_spilled_slot(u8 *stype)
+{
+    if (*stype != STACK_INVALID)
+        *stype = STACK_MISC;
+}
+
 static void print_verifier_state(struct bpf_verifier_env *env,
                  const struct bpf_func_state *state)
 {
@@ -2634,15 +2640,21 @@ static bool __is_pointer_value(bool allow_ptr_leaks,
 }
 
 static void save_register_state(struct bpf_func_state *state,
-                int spi, struct bpf_reg_state *reg)
+                int spi, struct bpf_reg_state *reg,
+                int size)
 {
     int i;
 
     state->stack[spi].spilled_ptr = *reg;
-    state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
+    if (size == BPF_REG_SIZE)
+        state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
 
-    for (i = 0; i < BPF_REG_SIZE; i++)
-        state->stack[spi].slot_type[i] = STACK_SPILL;
+    for (i = BPF_REG_SIZE; i > BPF_REG_SIZE - size; i--)
+        state->stack[spi].slot_type[i - 1] = STACK_SPILL;
+
+    /* size < 8 bytes spill */
+    for (; i; i--)
+        scrub_spilled_slot(&state->stack[spi].slot_type[i - 1]);
 }
 
 /* check_stack_{read,write}_fixed_off functions track spill/fill of registers,
@@ -2689,7 +2701,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
             env->insn_aux_data[insn_idx].sanitize_stack_spill = true;
     }
 
-    if (reg && size == BPF_REG_SIZE && register_is_bounded(reg) &&
+    if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
         !register_is_null(reg) && env->bpf_capable) {
         if (dst_reg != BPF_REG_FP) {
             /* The backtracking logic can only recognize explicit
@@ -2702,7 +2714,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
             if (err)
                 return err;
         }
-        save_register_state(state, spi, reg);
+        save_register_state(state, spi, reg, size);
     } else if (reg && is_spillable_regtype(reg->type)) {
         /* register containing pointer is being spilled into stack */
         if (size != BPF_REG_SIZE) {
@@ -2714,7 +2726,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
             verbose(env, "cannot spill pointers to stack into stack frame of the caller\n");
             return -EINVAL;
         }
-        save_register_state(state, spi, reg);
+        save_register_state(state, spi, reg, size);
     } else {
         u8 type = STACK_MISC;
 
@@ -2723,7 +2735,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
         /* Mark slots as STACK_MISC if they belonged to spilled ptr. */
         if (is_spilled_reg(&state->stack[spi]))
             for (i = 0; i < BPF_REG_SIZE; i++)
-                state->stack[spi].slot_type[i] = STACK_MISC;
+                scrub_spilled_slot(&state->stack[spi].slot_type[i]);
 
         /* only mark the slot as written if all 8 bytes were written
          * otherwise read propagation may incorrectly stop too soon
@@ -2926,23 +2938,50 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
     struct bpf_func_state *state = vstate->frame[vstate->curframe];
     int i, slot = -off - 1, spi = slot / BPF_REG_SIZE;
     struct bpf_reg_state *reg;
-    u8 *stype;
+    u8 *stype, type;
 
     stype = reg_state->stack[spi].slot_type;
     reg = &reg_state->stack[spi].spilled_ptr;
 
     if (is_spilled_reg(&reg_state->stack[spi])) {
         if (size != BPF_REG_SIZE) {
+            u8 scalar_size = 0;
+
             if (reg->type != SCALAR_VALUE) {
                 verbose_linfo(env, env->insn_idx, "; ");
                 verbose(env, "invalid size of register fill\n");
                 return -EACCES;
             }
-            if (dst_regno >= 0) {
+
+            mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
+            if (dst_regno < 0)
+                return 0;
+
+            for (i = BPF_REG_SIZE; i > 0 && stype[i - 1] == STACK_SPILL; i--)
+                scalar_size++;
+
+            if (!(off % BPF_REG_SIZE) && size == scalar_size) {
+                /* The earlier check_reg_arg() has decided the
+                 * subreg_def for this insn.  Save it first.
+                 */
+                s32 subreg_def = state->regs[dst_regno].subreg_def;
+
+                state->regs[dst_regno] = *reg;
+                state->regs[dst_regno].subreg_def = subreg_def;
+            } else {
+                for (i = 0; i < size; i++) {
+                    type = stype[(slot - i) % BPF_REG_SIZE];
+                    if (type == STACK_SPILL)
+                        continue;
+                    if (type == STACK_MISC)
+                        continue;
+                    verbose(env, "invalid read from stack off %d+%d size %d\n",
+                        off, i, size);
+                    return -EACCES;
+                }
                 mark_reg_unknown(env, state->regs, dst_regno);
-                state->regs[dst_regno].live |= REG_LIVE_WRITTEN;
             }
-            mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
+            state->regs[dst_regno].live |= REG_LIVE_WRITTEN;
             return 0;
         }
         for (i = 1; i < BPF_REG_SIZE; i++) {
@@ -2973,8 +3012,6 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
         }
         mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
     } else {
-        u8 type;
-
         for (i = 0; i < size; i++) {
             type = stype[(slot - i) % BPF_REG_SIZE];
             if (type == STACK_MISC)
@@ -4532,7 +4569,7 @@ static int check_stack_range_initialized(
         if (clobber) {
             __mark_reg_unknown(env, &state->stack[spi].spilled_ptr);
             for (j = 0; j < BPF_REG_SIZE; j++)
-                state->stack[spi].slot_type[j] = STACK_MISC;
+                scrub_spilled_slot(&state->stack[spi].slot_type[j]);
         }
         goto mark;
     }
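At the C level, the pattern this patch makes verifiable looks roughly
like the sketch below.  This is editorial, not part of the patch:
parse_hdr_len() is hypothetical, and whether clang actually spills the
scalar depends on register pressure; the llvm patch [1] only guarantees
that such a <8-byte spill lands on an 8-byte aligned slot.

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* hypothetical parser; a real one would derive a bounded length
     * from the packet bytes
     */
    static __always_inline __u32 parse_hdr_len(void *data, void *data_end)
    {
        return 20;
    }

    SEC("xdp")
    int refill_example(struct xdp_md *ctx)
    {
        void *data_end = (void *)(long)ctx->data_end;
        void *data = (void *)(long)ctx->data;
        __u32 off = parse_hdr_len(data, data_end);

        /* under register pressure clang may spill 'off' with a 4-byte
         * store and refill it here; before this patch the refilled
         * value came back as an unknown scalar, so the check below
         * could not prove the packet access safe
         */
        if (data + off + sizeof(__u32) > data_end)
            return XDP_DROP;

        return *(__u32 *)(data + off) ? XDP_PASS : XDP_DROP;
    }

    char LICENSE[] SEC("license") = "GPL";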
From patchwork Tue Sep 21 01:31:22 2021
X-Patchwork-Submitter: Martin KaFai Lau
X-Patchwork-Id: 12506705
From: Martin KaFai Lau
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Yonghong Song
Subject: [PATCH bpf-next 3/4] bpf: selftest: A bpf prog that has a 32bit scalar spill
Date: Mon, 20 Sep 2021 18:31:22 -0700
Message-ID: <20210921013122.1037548-1-kafai@fb.com>
In-Reply-To: <20210921013102.1035356-1-kafai@fb.com>
References: <20210921013102.1035356-1-kafai@fb.com>

This patch adds a simplified bpf prog, from a real use case, that can
trigger a 32-bit scalar spill.  The const scalar is later refilled and
added to skb->data.  Since the reg state of the 32-bit scalar spill is
currently not saved, adding the refilled reg to skb->data and then
comparing it with skb->data_end cannot verify the skb->data access.

With the earlier verifier patch and the llvm patch [1], the verifier
can correctly verify the bpf prog.

Here is the snippet of the verifier log that leads to the verifier's
conclusion that the packet data is unsafe to read:
67: R0=inv1 R1=inv17 R2=pkt(id=0,off=62,r=102,imm=0) R3=invP2 R4=pkt(id=0,off=68,r=102,imm=0) R5=inv1 R6=pkt(id=0,off=0,r=102,imm=0) R7=pkt_end(id=0,off=0,imm=0) R8=inv17 R9=inv102 R10=fp0
67: (63) *(u32 *)(r10 -16) = r9
68: R0=inv1 R1=inv17 R2=pkt(id=0,off=62,r=102,imm=0) R3=invP2 R4=pkt(id=0,off=68,r=102,imm=0) R5=inv1 R6=pkt(id=0,off=0,r=102,imm=0) R7=pkt_end(id=0,off=0,imm=0) R8=inv17 R9=inv102 R10=fp0 fp-16=????mmmm
...
74: R0_w=map_value_or_null(id=2,off=0,ks=16,vs=1,imm=0) R5_w=inv1 R6=pkt(id=0,off=0,r=102,imm=0) R7=pkt_end(id=0,off=0,imm=0) R8=inv17 R9_w=inv1 R10=fp0 fp-16=????mmmm
74: (61) r9 = *(u32 *)(r10 -16)
75: R0_w=map_value_or_null(id=2,off=0,ks=16,vs=1,imm=0) R5_w=inv1 R6=pkt(id=0,off=0,r=102,imm=0) R7=pkt_end(id=0,off=0,imm=0) R8=inv17 R9_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R10=fp0 fp-16=????mmmm
75: (bc) w1 = w9
76: R0_w=map_value_or_null(id=2,off=0,ks=16,vs=1,imm=0) R1_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R5_w=inv1 R6=pkt(id=0,off=0,r=102,imm=0) R7=pkt_end(id=0,off=0,imm=0) R8=inv17 R9_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R10=fp0 fp-16=????mmmm
76: (0f) r6 += r1
last_idx 76 first_idx 67
regs=2 stack=0 before 75: (bc) w1 = w9
regs=200 stack=0 before 74: (61) r9 = *(u32 *)(r10 -16)
77: R0_w=map_value_or_null(id=2,off=0,ks=16,vs=1,imm=0) R1_w=invP(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R5_w=inv1 R6_w=pkt(id=3,off=0,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R7=pkt_end(id=0,off=0,imm=0) R8=inv17 R9_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R10=fp0 fp-16=????mmmm
...
99: R0=inv1 R1=invP(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R5=inv1 R6=pkt(id=3,off=0,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R7=pkt_end(id=0,off=0,imm=0) R8_w=invP17 R9=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R10=fp0 fp-16=????mmmm
99: (bf) r1 = r6
100: R0=inv1 R1_w=pkt(id=3,off=0,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R5=inv1 R6=pkt(id=3,off=0,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R7=pkt_end(id=0,off=0,imm=0) R8_w=invP17 R9=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R10=fp0 fp-16=????mmmm
100: (07) r1 += 8
101: R0=inv1 R1_w=pkt(id=3,off=8,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R5=inv1 R6=pkt(id=3,off=0,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R7=pkt_end(id=0,off=0,imm=0) R8_w=invP17 R9=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R10=fp0 fp-16=????mmmm
101: (b4) w0 = 1
102: R0_w=inv1 R1_w=pkt(id=3,off=8,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R5=inv1 R6=pkt(id=3,off=0,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R7=pkt_end(id=0,off=0,imm=0) R8_w=invP17 R9=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R10=fp0 fp-16=????mmmm
102: (2d) if r1 > r7 goto pc-38
 R0_w=inv1 R1_w=pkt(id=3,off=8,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R5=inv1 R6=pkt(id=3,off=0,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R7=pkt_end(id=0,off=0,imm=0) R8_w=invP17 R9=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R10=fp0 fp-16=????mmmm
103: R0_w=inv1 R1_w=pkt(id=3,off=8,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R5=inv1 R6=pkt(id=3,off=0,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R7=pkt_end(id=0,off=0,imm=0) R8_w=invP17 R9=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R10=fp0 fp-16=????mmmm
103: (69) r7 = *(u16 *)(r6 +0)
invalid access to packet, off=0 size=2, R6(id=3,off=0,r=0)
R6 offset is outside of the packet
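Condensed, the failing sequence in the log above is (annotations added):

    67:  (63) *(u32 *)(r10 -16) = r9   /* 4-byte spill of r9=inv102: reg
                                        * state dropped, fp-16=????mmmm */
    74:  (61) r9 = *(u32 *)(r10 -16)   /* refill: r9 is now an unknown
                                        * scalar, umax_value=4294967295 */
    75:  (bc) w1 = w9                  /* unknown scalar copied to w1 */
    76:  (0f) r6 += r1                 /* pkt pointer r6 inherits the
                                        * unknown offset (id=3, r=0) */
    102: (2d) if r1 > r7 goto pc-38    /* comparing r1 with pkt_end cannot
                                        * bound r6's unknown offset */
    103: (69) r7 = *(u16 *)(r6 +0)     /* -> R6 offset is outside of the
                                        * packet */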
[1]: https://reviews.llvm.org/D109073

Signed-off-by: Martin KaFai Lau
---
 .../selftests/bpf/prog_tests/xdpwall.c        |  15 +
 tools/testing/selftests/bpf/progs/xdpwall.c   | 302 ++++++++++++++++++
 2 files changed, 317 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/xdpwall.c
 create mode 100644 tools/testing/selftests/bpf/progs/xdpwall.c

diff --git a/tools/testing/selftests/bpf/prog_tests/xdpwall.c b/tools/testing/selftests/bpf/prog_tests/xdpwall.c
new file mode 100644
index 000000000000..d263fd0a7860
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/xdpwall.c
@@ -0,0 +1,15 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2021 Facebook */
+
+#include "test_progs.h"
+#include "xdpwall.skel.h"
+
+void test_xdpwall(void)
+{
+    struct xdpwall *skel;
+
+    skel = xdpwall__open_and_load();
+    ASSERT_OK_PTR(skel, "skel");
+
+    xdpwall__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/xdpwall.c b/tools/testing/selftests/bpf/progs/xdpwall.c
new file mode 100644
index 000000000000..314a8d6beff0
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/xdpwall.c
@@ -0,0 +1,302 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2021 Facebook */
+#include <stdbool.h>
+#include <stdint.h>
+#include <linux/stddef.h>
+#include <linux/if_ether.h>
+#include <linux/in.h>
+#include <linux/in6.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include <linux/tcp.h>
+#include <linux/udp.h>
+#include <linux/bpf.h>
+#include <linux/types.h>
+#include <bpf/bpf_endian.h>
+#include <bpf/bpf_helpers.h>
+
+enum pkt_parse_err {
+    NO_ERR,
+    BAD_IP6_HDR,
+    BAD_IP4GUE_HDR,
+    BAD_IP6GUE_HDR,
+};
+
+enum pkt_flag {
+    TUNNEL = 0x1,
+    TCP_SYN = 0x2,
+    QUIC_INITIAL_FLAG = 0x4,
+    TCP_ACK = 0x8,
+    TCP_RST = 0x10
+};
+
+struct {
+    __uint(type, BPF_MAP_TYPE_HASH);
+    __uint(max_entries, 16);
+    __type(key, struct in6_addr);
+    __type(value, bool);
+} v6_addr_map SEC(".maps");
+
+struct {
+    __uint(type, BPF_MAP_TYPE_ARRAY);
+    __uint(max_entries, 16);
+    __type(key, int);
+    __type(value, __u8);
+} tcp_port_map SEC(".maps");
+
+struct {
+    __uint(type, BPF_MAP_TYPE_ARRAY);
+    __uint(max_entries, 16);
+    __type(key, int);
+    __type(value, __u16);
+} udp_port_map SEC(".maps");
+
+enum ip_type { V4 = 1, V6 = 2 };
+
+struct fw_match_info {
+    __u8 v4_src_ip_match;
+    __u8 v6_src_ip_match;
+    __u8 v4_src_prefix_match;
+    __u8 v4_dst_prefix_match;
+    __u8 v6_src_prefix_match;
+    __u8 tcp_dp_match;
+    __u16 udp_sp_match;
+    __u16 udp_dp_match;
+    bool is_tcp;
+    bool is_tcp_syn;
+    bool is_link_local_v6;
+};
+
+struct pkt_info {
+    enum ip_type type;
+    union {
+        struct iphdr *ipv4;
+        struct ipv6hdr *ipv6;
+    } ip;
+    int sport;
+    int dport;
+    __u16 trans_hdr_offset;
+    __u8 proto;
+    __u8 flags;
+};
+
+static __always_inline struct ethhdr *parse_ethhdr(void *data, void *data_end)
+{
+    struct ethhdr *eth = data;
+
+    if (eth + 1 > data_end)
+        return NULL;
+
+    return eth;
+}
+
+static __always_inline void
+filter_src_ip6(struct pkt_info *info, struct fw_match_info *match_info)
+{
+    if (info->type == V6 &&
+        bpf_map_lookup_elem(&v6_addr_map, &info->ip.ipv6->saddr))
+        match_info->v6_src_ip_match = true;
+}
+
+static __always_inline void *
+get_transport_hdr(__u16 offset, void *data, void *data_end)
+{
+    if (offset > 255 || data + offset > data_end)
+        return NULL;
+
+    return data + offset;
+}
+
+static __always_inline bool tcphdr_only_contains_flag(struct tcphdr *tcp,
+                              __u32 FLAG)
+{
+    return (tcp_flag_word(tcp) &
+        (TCP_FLAG_ACK | TCP_FLAG_RST | TCP_FLAG_SYN | TCP_FLAG_FIN)) == FLAG;
+}
+
+static __always_inline void set_tcp_flags(struct pkt_info *info,
+                      struct tcphdr *tcp) {
+    if (tcphdr_only_contains_flag(tcp, TCP_FLAG_SYN))
+        info->flags |= TCP_SYN;
+    else if (tcphdr_only_contains_flag(tcp, TCP_FLAG_ACK))
+        info->flags |= TCP_ACK;
+    else if (tcphdr_only_contains_flag(tcp, TCP_FLAG_RST))
+        info->flags |= TCP_RST;
+}
+
+static __always_inline bool
+parse_tcp(struct pkt_info *info, void *transport_hdr, void *data_end)
+{
+    struct tcphdr *tcp = transport_hdr;
+
+    if (tcp + 1 > data_end)
+        return false;
+
+    info->sport = bpf_ntohs(tcp->source);
+    info->dport = bpf_ntohs(tcp->dest);
+    set_tcp_flags(info, tcp);
+
+    return true;
+}
+
+static __always_inline bool
+parse_udp(struct pkt_info *info, void *transport_hdr, void *data_end)
+{
+    struct udphdr *udp = transport_hdr;
+
+    if (udp + 1 > data_end)
+        return false;
+
+    info->sport = bpf_ntohs(udp->source);
+    info->dport = bpf_ntohs(udp->dest);
+
+    return true;
+}
+
+static __always_inline __u8 filter_tcp_port(int port)
+{
+    __u8 *leaf = bpf_map_lookup_elem(&tcp_port_map, &port);
+
+    return leaf ? *leaf : 0;
+}
+
+static __always_inline __u16 filter_udp_port(int port)
+{
+    __u16 *leaf = bpf_map_lookup_elem(&udp_port_map, &port);
+
+    return leaf ? *leaf : 0;
+}
+
+static __always_inline bool
+filter_transport_hdr(void *transport_hdr, void *data_end,
+             struct pkt_info *info, struct fw_match_info *match_info)
+{
+    if (info->proto == IPPROTO_TCP) {
+        if (!parse_tcp(info, transport_hdr, data_end))
+            return false;
+
+        match_info->is_tcp = true;
+        match_info->is_tcp_syn = (info->flags & TCP_SYN) > 0;
+
+        match_info->tcp_dp_match = filter_tcp_port(info->dport);
+    } else if (info->proto == IPPROTO_UDP) {
+        if (!parse_udp(info, transport_hdr, data_end))
+            return false;
+
+        match_info->udp_dp_match = filter_udp_port(info->dport);
+        match_info->udp_sp_match = filter_udp_port(info->sport);
+    }
+
+    return true;
+}
+
+static __always_inline __u8
+parse_gue_v6(struct pkt_info *info, struct ipv6hdr *ip6h, void *data_end)
+{
+    struct udphdr *udp = (struct udphdr *)(ip6h + 1);
+    void *encap_data = udp + 1;
+
+    if (udp + 1 > data_end)
+        return BAD_IP6_HDR;
+
+    if (udp->dest != bpf_htons(6666))
+        return NO_ERR;
+
+    info->flags |= TUNNEL;
+
+    if (encap_data + 1 > data_end)
+        return BAD_IP6GUE_HDR;
+
+    if (*(__u8 *)encap_data & 0x30) {
+        struct ipv6hdr *inner_ip6h = encap_data;
+
+        if (inner_ip6h + 1 > data_end)
+            return BAD_IP6GUE_HDR;
+
+        info->type = V6;
+        info->proto = inner_ip6h->nexthdr;
+        info->ip.ipv6 = inner_ip6h;
+        info->trans_hdr_offset += sizeof(struct ipv6hdr) + sizeof(struct udphdr);
+    } else {
+        struct iphdr *inner_ip4h = encap_data;
+
+        if (inner_ip4h + 1 > data_end)
+            return BAD_IP6GUE_HDR;
+
+        info->type = V4;
+        info->proto = inner_ip4h->protocol;
+        info->ip.ipv4 = inner_ip4h;
+        info->trans_hdr_offset += sizeof(struct iphdr) + sizeof(struct udphdr);
+    }
+
+    return NO_ERR;
+}
+
+static __always_inline __u8 parse_ipv6_gue(struct pkt_info *info,
+                       void *data, void *data_end)
+{
+    struct ipv6hdr *ip6h = data + sizeof(struct ethhdr);
+
+    if (ip6h + 1 > data_end)
+        return BAD_IP6_HDR;
+
+    info->proto = ip6h->nexthdr;
+    info->ip.ipv6 = ip6h;
+    info->type = V6;
+    info->trans_hdr_offset = sizeof(struct ethhdr) + sizeof(struct ipv6hdr);
+
+    if (info->proto == IPPROTO_UDP)
+        return parse_gue_v6(info, ip6h, data_end);
+
+    return NO_ERR;
+}
+
+SEC("xdp")
+int edgewall(struct xdp_md *ctx)
+{
+    void *data_end = (void *)(long)(ctx->data_end);
+    void *data = (void *)(long)(ctx->data);
+    struct fw_match_info match_info = {};
+    struct pkt_info info = {};
+    __u8 parse_err = NO_ERR;
+    void *transport_hdr;
+    struct ethhdr *eth;
+    bool filter_res;
+    __u32 proto;
+
+    eth = parse_ethhdr(data, data_end);
+    if (!eth)
+        return XDP_DROP;
+
+    proto = eth->h_proto;
+    if (proto != bpf_htons(ETH_P_IPV6))
+        return XDP_DROP;
+
+    if (parse_ipv6_gue(&info, data, data_end))
+        return XDP_DROP;
+
+    if (info.proto == IPPROTO_ICMPV6)
+        return XDP_PASS;
+
+    if (info.proto != IPPROTO_TCP && info.proto != IPPROTO_UDP)
+        return XDP_DROP;
+
+    filter_src_ip6(&info, &match_info);
+
+    transport_hdr = get_transport_hdr(info.trans_hdr_offset, data,
+                      data_end);
+    if (!transport_hdr)
+        return XDP_DROP;
+
+    filter_res = filter_transport_hdr(transport_hdr, data_end,
+                      &info, &match_info);
+    if (!filter_res)
+        return XDP_DROP;
+
+    if (match_info.is_tcp && !match_info.is_tcp_syn)
+        return XDP_PASS;
+
+    return XDP_DROP;
+}
+
+char LICENSE[] SEC("license") = "GPL";
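The selftest only checks that the program loads, i.e. that it passes the
verifier: xdpwall__open_and_load() must return a non-NULL skeleton.
Assuming a built tools/testing/selftests/bpf tree, it should be runnable
with the usual test_progs filter:

    ./test_progs -t xdpwall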
From patchwork Tue Sep 21 01:31:29 2021
X-Patchwork-Submitter: Martin KaFai Lau
X-Patchwork-Id: 12506707
From: Martin KaFai Lau
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Yonghong Song
Subject: [PATCH bpf-next 4/4] bpf: selftest: Add verifier tests for <8-byte scalar spill and refill
Date: Mon, 20 Sep 2021 18:31:29 -0700
Message-ID: <20210921013129.1037825-1-kafai@fb.com>
In-Reply-To: <20210921013102.1035356-1-kafai@fb.com>
References: <20210921013102.1035356-1-kafai@fb.com>

This patch adds a few verifier tests for <8-byte spill and refill.

Signed-off-by: Martin KaFai Lau
---
 .../selftests/bpf/verifier/spill_fill.c | 161 ++++++++++++++++++
 1 file changed, 161 insertions(+)

diff --git a/tools/testing/selftests/bpf/verifier/spill_fill.c b/tools/testing/selftests/bpf/verifier/spill_fill.c
index 0b943897aaf6..c9991c3f3bd2 100644
--- a/tools/testing/selftests/bpf/verifier/spill_fill.c
+++ b/tools/testing/selftests/bpf/verifier/spill_fill.c
@@ -104,3 +104,164 @@
     .result = ACCEPT,
     .retval = POINTER_VALUE,
 },
+{
+    "Spill and refill a u32 const scalar.  Offset to skb->data",
+    .insns = {
+    BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+            offsetof(struct __sk_buff, data)),
+    BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+            offsetof(struct __sk_buff, data_end)),
+    /* r4 = 20 */
+    BPF_MOV32_IMM(BPF_REG_4, 20),
+    /* *(u32 *)(r10 -8) = r4 */
+    BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8),
+    /* r4 = *(u32 *)(r10 -8) */
+    BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -8),
+    /* r0 = r2 */
+    BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+    /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=inv20 */
+    BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
+    /* if (r0 > r3) R0=pkt,off=20 R2=pkt R3=pkt_end R4=inv20 */
+    BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+    /* r0 = *(u32 *)r2 R0=pkt,off=20,r=20 R2=pkt,r=20 R3=pkt_end R4=inv20 */
+    BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
+    BPF_MOV64_IMM(BPF_REG_0, 0),
+    BPF_EXIT_INSN(),
+    },
+    .result = ACCEPT,
+    .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+},
+{
+    "Spill a u32 const, refill from another half of the uninit u32 from the stack",
+    .insns = {
+    /* r4 = 20 */
+    BPF_MOV32_IMM(BPF_REG_4, 20),
+    /* *(u32 *)(r10 -8) = r4 */
+    BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8),
+    /* r4 = *(u32 *)(r10 -4) fp-8=????rrrr*/
+    BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -4),
+    BPF_MOV64_IMM(BPF_REG_0, 0),
+    BPF_EXIT_INSN(),
+    },
+    .result = REJECT,
+    .errstr = "invalid read from stack off -4+0 size 4",
+    .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+},
+{
+    "Spill a u32 const scalar.  Refill as u16.  Offset to skb->data",
+    .insns = {
+    BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+            offsetof(struct __sk_buff, data)),
+    BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+            offsetof(struct __sk_buff, data_end)),
+    /* r4 = 20 */
+    BPF_MOV32_IMM(BPF_REG_4, 20),
+    /* *(u32 *)(r10 -8) = r4 */
+    BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8),
+    /* r4 = *(u16 *)(r10 -8) */
+    BPF_LDX_MEM(BPF_H, BPF_REG_4, BPF_REG_10, -8),
+    /* r0 = r2 */
+    BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+    /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=inv,umax=65535 */
+    BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
+    /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv,umax=65535 */
+    BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+    /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv20 */
+    BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
+    BPF_MOV64_IMM(BPF_REG_0, 0),
+    BPF_EXIT_INSN(),
+    },
+    .result = REJECT,
+    .errstr = "invalid access to packet",
+    .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+},
+{
+    "Spill a u32 const scalar.  Refill as u16 from fp-6.  Offset to skb->data",
+    .insns = {
+    BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+            offsetof(struct __sk_buff, data)),
+    BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+            offsetof(struct __sk_buff, data_end)),
+    /* r4 = 20 */
+    BPF_MOV32_IMM(BPF_REG_4, 20),
+    /* *(u32 *)(r10 -8) = r4 */
+    BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8),
+    /* r4 = *(u16 *)(r10 -6) */
+    BPF_LDX_MEM(BPF_H, BPF_REG_4, BPF_REG_10, -6),
+    /* r0 = r2 */
+    BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+    /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=inv,umax=65535 */
+    BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
+    /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv,umax=65535 */
+    BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+    /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=inv20 */
+    BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
+    BPF_MOV64_IMM(BPF_REG_0, 0),
+    BPF_EXIT_INSN(),
+    },
+    .result = REJECT,
+    .errstr = "invalid access to packet",
+    .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+},
+{
+    "Spill and refill a u32 const scalar at non 8byte aligned stack addr.  Offset to skb->data",
+    .insns = {
+    BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+            offsetof(struct __sk_buff, data)),
+    BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+            offsetof(struct __sk_buff, data_end)),
+    /* r4 = 20 */
+    BPF_MOV32_IMM(BPF_REG_4, 20),
+    /* *(u32 *)(r10 -8) = r4 */
+    BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8),
+    /* *(u32 *)(r10 -4) = r4 */
+    BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -4),
+    /* r4 = *(u32 *)(r10 -4),  */
+    BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -4),
+    /* r0 = r2 */
+    BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+    /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=inv,umax=U32_MAX */
+    BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
+    /* if (r0 > r3) R0=pkt,umax=U32_MAX R2=pkt R3=pkt_end R4=inv */
+    BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+    /* r0 = *(u32 *)r2 R0=pkt,umax=U32_MAX R2=pkt R3=pkt_end R4=inv */
+    BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
+    BPF_MOV64_IMM(BPF_REG_0, 0),
+    BPF_EXIT_INSN(),
+    },
+    .result = REJECT,
+    .errstr = "invalid access to packet",
+    .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+},
+{
+    "Spill and refill a umax=40 bounded scalar.  Offset to skb->data",
+    .insns = {
+    BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+            offsetof(struct __sk_buff, data)),
+    BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+            offsetof(struct __sk_buff, data_end)),
+    BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_1,
+            offsetof(struct __sk_buff, tstamp)),
+    BPF_JMP_IMM(BPF_JLE, BPF_REG_4, 40, 2),
+    BPF_MOV64_IMM(BPF_REG_0, 0),
+    BPF_EXIT_INSN(),
+    /* *(u32 *)(r10 -8) = r4 R4=inv,umax=40 */
+    BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8),
+    /* r4 = (*u32 *)(r10 - 8) */
+    BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -8),
+    /* r2 += r4 R2=pkt R4=inv,umax=40 */
+    BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_4),
+    /* r0 = r2 R2=pkt,umax=40 R4=inv,umax=40 */
+    BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+    /* r2 += 20 R0=pkt,umax=40 R2=pkt,umax=40 */
+    BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 20),
+    /* if (r2 > r3) R0=pkt,umax=40 R2=pkt,off=20,umax=40 */
+    BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_3, 1),
+    /* r0 = *(u32 *)r0 R0=pkt,r=20,umax=40 R2=pkt,off=20,r=20,umax=40 */
+    BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0),
+    BPF_MOV64_IMM(BPF_REG_0, 0),
+    BPF_EXIT_INSN(),
+    },
+    .result = ACCEPT,
+    .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+},
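For readers mapping the BPF_*() macros back to C, the final ACCEPT test
above corresponds roughly to the restricted-C sketch below.  This is
editorial and not part of the patch; the compiler is free to emit a
different instruction sequence, and the explicit 4-byte spill/refill of
the bounded scalar is the part this series makes verifiable.

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("tc")
    int spill_umax40(struct __sk_buff *skb)
    {
        void *data_end = (void *)(long)skb->data_end;
        void *data = (void *)(long)skb->data;
        __u64 t = skb->tstamp;

        if (t > 40)                /* bound the scalar: umax=40 */
            return 0;

        /* the test spills t to fp-8 as 4 bytes and refills it; with
         * this series the umax=40 bound survives the round trip
         */
        data += (__u32)t;          /* pkt pointer, variable offset umax=40 */
        if (data + 20 > data_end)  /* range check covers the read below */
            return 0;

        return *(__u32 *)data ? 1 : 0;
    }

    char LICENSE[] SEC("license") = "GPL";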