From patchwork Sun Feb 7 01:10:24 2021
X-Patchwork-Submitter: Andrei Matei
X-Patchwork-Id: 12072411
X-Patchwork-Delegate: bpf@iogearbox.net
From: Andrei Matei
To: bpf@vger.kernel.org, ast@kernel.org, andrii@kernel.org
Cc: Andrei Matei
Subject: [PATCH bpf-next v3 1/4] bpf: allow variable-offset stack access
Date: Sat, 6 Feb 2021 20:10:24 -0500
Message-Id: <20210207011027.676572-2-andreimatei1@gmail.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210207011027.676572-1-andreimatei1@gmail.com>
References: <20210207011027.676572-1-andreimatei1@gmail.com>

Before this patch, variable offset access to the stack was disallowed
for regular instructions, but was allowed for "indirect" accesses (i.e.
helpers). This patch removes the restriction, allowing reading and
writing to the stack through stack pointers with variable offsets. This
makes stack-allocated buffers more usable in programs, and brings stack
pointers closer to other types of pointers.

The motivation is being able to use stack-allocated buffers for data
manipulation. When the stack size limit is sufficient, allocating
buffers on the stack is simpler than using per-cpu arrays or other
alternatives.

In unprivileged programs, variable-offset reads and writes are
disallowed (they were already disallowed for the indirect access case)
because the speculative execution checking code doesn't support them.
Additionally, when writing through a variable-offset stack pointer, if
any pointers are in the accessible range, there's a possibility of
later leaking pointers because the write cannot be tracked precisely.

Writes with variable offset mark the whole range as initialized, even
though we don't know which stack slots are actually written. This is in
order to not reject future reads to these slots. Note that this doesn't
affect writes done through helpers; like before, helpers need the whole
stack range to be initialized to begin with. All the stack slots that
are in range are considered scalars after the write; variable-offset
register spills are not tracked.

For reads, all the stack slots in the variable range need to be
initialized (but see above about what writes do), otherwise the read is
rejected. All registers spilled in stack slots that might be read are
marked as having been read, however reads through such pointers don't
do register filling; the target register will always be either a scalar
or a constant zero.

Signed-off-by: Andrei Matei
---
 include/linux/bpf.h | 5 + include/linux/bpf_verifier.h | 3 +- kernel/bpf/verifier.c | 657 +++++++++++++++++++++++++++-------- 3 files changed, 518 insertions(+), 147 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 321966fc35db..079162bbd387 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1290,6 +1290,11 @@ static inline bool bpf_allow_ptr_leaks(void) return perfmon_capable(); } +static inline bool bpf_allow_uninit_stack(void) +{ + return perfmon_capable(); +} + static inline bool bpf_allow_ptr_to_map_access(void) { return perfmon_capable(); diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index dfe6f85d97dd..532c97836d0d 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -195,7 +195,7 @@ struct bpf_func_state { * 0 = main function, 1 = first callee.
*/ u32 frameno; - /* subprog number == index within subprog_stack_depth + /* subprog number == index within subprog_info * zero == main subprog */ u32 subprogno; @@ -404,6 +404,7 @@ struct bpf_verifier_env { u32 used_btf_cnt; /* number of used BTF objects */ u32 id_gen; /* used to generate unique reg IDs */ bool allow_ptr_leaks; + bool allow_uninit_stack; bool allow_ptr_to_map_access; bool bpf_capable; bool bypass_spec_v1; diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 15694246f854..400d79e99fc8 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -2275,12 +2275,14 @@ static void save_register_state(struct bpf_func_state *state, state->stack[spi].slot_type[i] = STACK_SPILL; } -/* check_stack_read/write functions track spill/fill of registers, +/* check_stack_{read,write}_fixed_off functions track spill/fill of registers, * stack boundary and alignment are checked in check_mem_access() */ -static int check_stack_write(struct bpf_verifier_env *env, - struct bpf_func_state *state, /* func where register points to */ - int off, int size, int value_regno, int insn_idx) +static int check_stack_write_fixed_off(struct bpf_verifier_env *env, + /* stack frame we're writing to */ + struct bpf_func_state *state, + int off, int size, int value_regno, + int insn_idx) { struct bpf_func_state *cur; /* state of the current function */ int i, slot = -off - 1, spi = slot / BPF_REG_SIZE, err; @@ -2406,9 +2408,175 @@ static int check_stack_write(struct bpf_verifier_env *env, return 0; } -static int check_stack_read(struct bpf_verifier_env *env, - struct bpf_func_state *reg_state /* func where register points to */, - int off, int size, int value_regno) +/* Write the stack: 'stack[ptr_regno + off] = value_regno'. 'ptr_regno' is + * known to contain a variable offset. + * This function checks whether the write is permitted and conservatively + * tracks the effects of the write, considering that each stack slot in the + * dynamic range is potentially written to. + * + * 'off' includes 'regno->off'. + * 'value_regno' can be -1, meaning that an unknown value is being written to + * the stack. + * + * Spilled pointers in range are not marked as written because we don't know + * what's going to be actually written. This means that read propagation for + * future reads cannot be terminated by this write. + * + * For privileged programs, uninitialized stack slots are considered + * initialized by this write (even though we don't know exactly what offsets + * are going to be written to). The idea is that we don't want the verifier to + * reject future reads that access slots written to through variable offsets. 
+ */ +static int check_stack_write_var_off(struct bpf_verifier_env *env, + /* func where register points to */ + struct bpf_func_state *state, + int ptr_regno, int off, int size, + int value_regno, int insn_idx) +{ + struct bpf_func_state *cur; /* state of the current function */ + int min_off, max_off; + int i, err; + struct bpf_reg_state *ptr_reg = NULL, *value_reg = NULL; + bool writing_zero = false; + /* set if the fact that we're writing a zero is used to let any + * stack slots remain STACK_ZERO + */ + bool zero_used = false; + + cur = env->cur_state->frame[env->cur_state->curframe]; + ptr_reg = &cur->regs[ptr_regno]; + min_off = ptr_reg->smin_value + off; + max_off = ptr_reg->smax_value + off + size; + if (value_regno >= 0) + value_reg = &cur->regs[value_regno]; + if (value_reg && register_is_null(value_reg)) + writing_zero = true; + + err = realloc_func_state(state, round_up(-min_off, BPF_REG_SIZE), + state->acquired_refs, true); + if (err) + return err; + + + /* Variable offset writes destroy any spilled pointers in range. */ + for (i = min_off; i < max_off; i++) { + u8 new_type, *stype; + int slot, spi; + + slot = -i - 1; + spi = slot / BPF_REG_SIZE; + stype = &state->stack[spi].slot_type[slot % BPF_REG_SIZE]; + + if (!env->allow_ptr_leaks + && *stype != NOT_INIT + && *stype != SCALAR_VALUE) { + /* Reject the write if there's are spilled pointers in + * range. If we didn't reject here, the ptr status + * would be erased below (even though not all slots are + * actually overwritten), possibly opening the door to + * leaks. + */ + verbose(env, "spilled ptr in range of var-offset stack write; insn %d, ptr off: %d", + insn_idx, i); + return -EINVAL; + } + + /* Erase all spilled pointers. */ + state->stack[spi].spilled_ptr.type = NOT_INIT; + + /* Update the slot type. */ + new_type = STACK_MISC; + if (writing_zero && *stype == STACK_ZERO) { + new_type = STACK_ZERO; + zero_used = true; + } + /* If the slot is STACK_INVALID, we check whether it's OK to + * pretend that it will be initialized by this write. The slot + * might not actually be written to, and so if we mark it as + * initialized future reads might leak uninitialized memory. + * For privileged programs, we will accept such reads to slots + * that may or may not be written because, if we're reject + * them, the error would be too confusing. + */ + if (*stype == STACK_INVALID && !env->allow_uninit_stack) { + verbose(env, "uninit stack in range of var-offset write prohibited for !root; insn %d, off: %d", + insn_idx, i); + return -EINVAL; + } + *stype = new_type; + } + if (zero_used) { + /* backtracking doesn't work for STACK_ZERO yet. */ + err = mark_chain_precision(env, value_regno); + if (err) + return err; + } + return 0; +} + +/* When register 'dst_regno' is assigned some values from stack[min_off, + * max_off), we set the register's type according to the types of the + * respective stack slots. If all the stack values are known to be zeros, then + * so is the destination reg. Otherwise, the register is considered to be + * SCALAR. This function does not deal with register filling; the caller must + * ensure that all spilled registers in the stack range have been marked as + * read. 
+ */ +static void mark_reg_stack_read(struct bpf_verifier_env *env, + /* func where src register points to */ + struct bpf_func_state *ptr_state, + int min_off, int max_off, int dst_regno) +{ + struct bpf_verifier_state *vstate = env->cur_state; + struct bpf_func_state *state = vstate->frame[vstate->curframe]; + int i, slot, spi; + u8 *stype; + int zeros = 0; + + for (i = min_off; i < max_off; i++) { + slot = -i - 1; + spi = slot / BPF_REG_SIZE; + stype = ptr_state->stack[spi].slot_type; + if (stype[slot % BPF_REG_SIZE] != STACK_ZERO) + break; + zeros++; + } + if (zeros == max_off - min_off) { + /* any access_size read into register is zero extended, + * so the whole register == const_zero + */ + __mark_reg_const_zero(&state->regs[dst_regno]); + /* backtracking doesn't support STACK_ZERO yet, + * so mark it precise here, so that later + * backtracking can stop here. + * Backtracking may not need this if this register + * doesn't participate in pointer adjustment. + * Forward propagation of precise flag is not + * necessary either. This mark is only to stop + * backtracking. Any register that contributed + * to const 0 was marked precise before spill. + */ + state->regs[dst_regno].precise = true; + } else { + /* have read misc data from the stack */ + mark_reg_unknown(env, state->regs, dst_regno); + } + state->regs[dst_regno].live |= REG_LIVE_WRITTEN; +} + +/* Read the stack at 'off' and put the results into the register indicated by + * 'dst_regno'. It handles reg filling if the addressed stack slot is a + * spilled reg. + * + * 'dst_regno' can be -1, meaning that the read value is not going to a + * register. + * + * The access is assumed to be within the current stack bounds. + */ +static int check_stack_read_fixed_off(struct bpf_verifier_env *env, + /* func where src register points to */ + struct bpf_func_state *reg_state, + int off, int size, int dst_regno) { struct bpf_verifier_state *vstate = env->cur_state; struct bpf_func_state *state = vstate->frame[vstate->curframe]; @@ -2416,11 +2584,6 @@ static int check_stack_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg; u8 *stype; - if (reg_state->allocated_stack <= slot) { - verbose(env, "invalid read from stack off %d+0 size %d\n", - off, size); - return -EACCES; - } stype = reg_state->stack[spi].slot_type; reg = ®_state->stack[spi].spilled_ptr; @@ -2431,9 +2594,9 @@ static int check_stack_read(struct bpf_verifier_env *env, verbose(env, "invalid size of register fill\n"); return -EACCES; } - if (value_regno >= 0) { - mark_reg_unknown(env, state->regs, value_regno); - state->regs[value_regno].live |= REG_LIVE_WRITTEN; + if (dst_regno >= 0) { + mark_reg_unknown(env, state->regs, dst_regno); + state->regs[dst_regno].live |= REG_LIVE_WRITTEN; } mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64); return 0; @@ -2445,16 +2608,16 @@ static int check_stack_read(struct bpf_verifier_env *env, } } - if (value_regno >= 0) { + if (dst_regno >= 0) { /* restore register state from stack */ - state->regs[value_regno] = *reg; + state->regs[dst_regno] = *reg; /* mark reg as written since spilled pointer state likely * has its liveness marks cleared by is_state_visited() * which resets stack/reg liveness for state transitions */ - state->regs[value_regno].live |= REG_LIVE_WRITTEN; + state->regs[dst_regno].live |= REG_LIVE_WRITTEN; } else if (__is_pointer_value(env->allow_ptr_leaks, reg)) { - /* If value_regno==-1, the caller is asking us whether + /* If dst_regno==-1, the caller is asking us whether * it is acceptable to use this value as a 
SCALAR_VALUE * (e.g. for XADD). * We must not allow unprivileged callers to do that @@ -2466,70 +2629,167 @@ static int check_stack_read(struct bpf_verifier_env *env, } mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64); } else { - int zeros = 0; + u8 type; for (i = 0; i < size; i++) { - if (stype[(slot - i) % BPF_REG_SIZE] == STACK_MISC) + type = stype[(slot - i) % BPF_REG_SIZE]; + if (type == STACK_MISC) continue; - if (stype[(slot - i) % BPF_REG_SIZE] == STACK_ZERO) { - zeros++; + if (type == STACK_ZERO) continue; - } verbose(env, "invalid read from stack off %d+%d size %d\n", off, i, size); return -EACCES; } mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64); - if (value_regno >= 0) { - if (zeros == size) { - /* any size read into register is zero extended, - * so the whole register == const_zero - */ - __mark_reg_const_zero(&state->regs[value_regno]); - /* backtracking doesn't support STACK_ZERO yet, - * so mark it precise here, so that later - * backtracking can stop here. - * Backtracking may not need this if this register - * doesn't participate in pointer adjustment. - * Forward propagation of precise flag is not - * necessary either. This mark is only to stop - * backtracking. Any register that contributed - * to const 0 was marked precise before spill. - */ - state->regs[value_regno].precise = true; - } else { - /* have read misc data from the stack */ - mark_reg_unknown(env, state->regs, value_regno); - } - state->regs[value_regno].live |= REG_LIVE_WRITTEN; - } + if (dst_regno >= 0) + mark_reg_stack_read(env, reg_state, off, off + size, dst_regno); } return 0; } -static int check_stack_access(struct bpf_verifier_env *env, - const struct bpf_reg_state *reg, - int off, int size) +enum stack_access_src { + ACCESS_DIRECT = 1, /* the access is performed by an instruction */ + ACCESS_HELPER = 2, /* the access is performed by a helper */ +}; + +static int check_stack_range_initialized(struct bpf_verifier_env *env, + int regno, int off, int access_size, + bool zero_size_allowed, + enum stack_access_src type, + struct bpf_call_arg_meta *meta); + +static struct bpf_reg_state *reg_state(struct bpf_verifier_env *env, int regno) +{ + return cur_regs(env) + regno; +} + +/* Read the stack at 'ptr_regno + off' and put the result into the register + * 'dst_regno'. + * 'off' includes the pointer register's fixed offset(i.e. 'ptr_regno.off'), + * but not its variable offset. + * 'size' is assumed to be <= reg size and the access is assumed to be aligned. + * + * As opposed to check_stack_read_fixed_off, this function doesn't deal with + * filling registers (i.e. reads of spilled register cannot be detected when + * the offset is not fixed). We conservatively mark 'dst_regno' as containing + * SCALAR_VALUE. That's why we assert that the 'ptr_regno' has a variable + * offset; for a fixed offset check_stack_read_fixed_off should be used + * instead. + */ +static int check_stack_read_var_off(struct bpf_verifier_env *env, + int ptr_regno, int off, int size, int dst_regno) { - /* Stack accesses must be at a fixed offset, so that we - * can determine what type of data were returned. See - * check_stack_read(). + /* The state of the source register. */ + struct bpf_reg_state *reg = reg_state(env, ptr_regno); + struct bpf_func_state *ptr_state = func(env, reg); + int err; + int min_off, max_off; + + /* Note that we pass a NULL meta, so raw access will not be permitted. 
*/ - if (!tnum_is_const(reg->var_off)) { + err = check_stack_range_initialized(env, ptr_regno, off, size, + false, ACCESS_DIRECT, NULL); + if (err) + return err; + + min_off = reg->smin_value + off; + max_off = reg->smax_value + off; + mark_reg_stack_read(env, ptr_state, min_off, max_off + size, dst_regno); + return 0; +} + +/* check_stack_read dispatches to check_stack_read_fixed_off or + * check_stack_read_var_off. + * + * The caller must ensure that the offset falls within the allocated stack + * bounds. + * + * 'dst_regno' is a register which will receive the value from the stack. It + * can be -1, meaning that the read value is not going to a register. + */ +static int check_stack_read(struct bpf_verifier_env *env, + int ptr_regno, int off, int size, + int dst_regno) +{ + struct bpf_reg_state *reg = reg_state(env, ptr_regno); + struct bpf_func_state *state = func(env, reg); + int err; + /* Some accesses are only permitted with a static offset. */ + bool var_off = !tnum_is_const(reg->var_off); + + /* The offset is required to be static when reads don't go to a + * register, in order to not leak pointers (see + * check_stack_read_fixed_off). + */ + if (dst_regno < 0 && var_off) { char tn_buf[48]; tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off); - verbose(env, "variable stack access var_off=%s off=%d size=%d\n", + verbose(env, "variable offset stack pointer cannot be passed into helper function; var_off=%s off=%d size=%d\n", tn_buf, off, size); return -EACCES; } + /* Variable offset is prohibited for unprivileged mode for simplicity + * since it requires corresponding support in Spectre masking for stack + * ALU. See also retrieve_ptr_limit(). + */ + if (!env->bypass_spec_v1 && var_off) { + char tn_buf[48]; - if (off >= 0 || off < -MAX_BPF_STACK) { - verbose(env, "invalid stack off=%d size=%d\n", off, size); + tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off); + verbose(env, "R%d variable offset stack access prohibited for !root, var_off=%s\n", + ptr_regno, tn_buf); return -EACCES; } - return 0; + if (!var_off) { + off += reg->var_off.value; + err = check_stack_read_fixed_off(env, state, off, size, + dst_regno); + } else { + /* Variable offset stack reads need more conservative handling + * than fixed offset ones. Note that dst_regno >= 0 on this + * branch. + */ + err = check_stack_read_var_off(env, ptr_regno, off, size, + dst_regno); + } + return err; +} + + +/* check_stack_write dispatches to check_stack_write_fixed_off or + * check_stack_write_var_off. + * + * 'ptr_regno' is the register used as a pointer into the stack. + * 'off' includes 'ptr_regno->off', but not its variable offset (if any). + * 'value_regno' is the register whose value we're writing to the stack. It can + * be -1, meaning that we're not writing from a register. + * + * The caller must ensure that the offset falls within the maximum stack size. + */ +static int check_stack_write(struct bpf_verifier_env *env, + int ptr_regno, int off, int size, + int value_regno, int insn_idx) +{ + struct bpf_reg_state *reg = reg_state(env, ptr_regno); + struct bpf_func_state *state = func(env, reg); + int err; + + if (tnum_is_const(reg->var_off)) { + off += reg->var_off.value; + err = check_stack_write_fixed_off(env, state, off, size, + value_regno, insn_idx); + } else { + /* Variable offset stack reads need more conservative handling + * than fixed offset ones. 
+ */ + err = check_stack_write_var_off(env, state, + ptr_regno, off, size, + value_regno, insn_idx); + } + return err; } static int check_map_access_type(struct bpf_verifier_env *env, u32 regno, @@ -2862,11 +3122,6 @@ static int check_sock_access(struct bpf_verifier_env *env, int insn_idx, return -EACCES; } -static struct bpf_reg_state *reg_state(struct bpf_verifier_env *env, int regno) -{ - return cur_regs(env) + regno; -} - static bool is_pointer_value(struct bpf_verifier_env *env, int regno) { return __is_pointer_value(env->allow_ptr_leaks, reg_state(env, regno)); @@ -2985,8 +3240,8 @@ static int check_ptr_alignment(struct bpf_verifier_env *env, break; case PTR_TO_STACK: pointer_desc = "stack "; - /* The stack spill tracking logic in check_stack_write() - * and check_stack_read() relies on stack accesses being + /* The stack spill tracking logic in check_stack_write_fixed_off() + * and check_stack_read_fixed_off() relies on stack accesses being * aligned. */ strict = true; @@ -3402,6 +3657,91 @@ static int check_ptr_to_map_access(struct bpf_verifier_env *env, return 0; } +/* Check that the stack access at the given offset is within bounds. The + * maximum valid offset is -1. + * + * The minimum valid offset is -MAX_BPF_STACK for writes, and + * -state->allocated_stack for reads. + */ +static int check_stack_slot_within_bounds(int off, + struct bpf_func_state *state, + enum bpf_access_type t) +{ + int min_valid_off; + + if (t == BPF_WRITE) + min_valid_off = -MAX_BPF_STACK; + else + min_valid_off = -state->allocated_stack; + + if (off < min_valid_off || off > -1) + return -EACCES; + return 0; +} + +/* Check that the stack access at 'regno + off' falls within the maximum stack + * bounds. + * + * 'off' includes `regno->offset`, but not its dynamic part (if any). + */ +static int check_stack_access_within_bounds( + struct bpf_verifier_env *env, + int regno, int off, int access_size, + enum stack_access_src src, enum bpf_access_type type) +{ + struct bpf_reg_state *regs = cur_regs(env); + struct bpf_reg_state *reg = regs + regno; + struct bpf_func_state *state = func(env, reg); + int min_off, max_off; + int err; + char *err_extra; + + if (src == ACCESS_HELPER) + /* We don't know if helpers are reading or writing (or both). 
*/ + err_extra = " indirect access to"; + else if (type == BPF_READ) + err_extra = " read from"; + else + err_extra = " write to"; + + if (tnum_is_const(reg->var_off)) { + min_off = reg->var_off.value + off; + if (access_size > 0) + max_off = min_off + access_size - 1; + else + max_off = min_off; + } else { + if (reg->smax_value >= BPF_MAX_VAR_OFF || + reg->smin_value <= -BPF_MAX_VAR_OFF) { + verbose(env, "invalid unbounded variable-offset%s stack R%d\n", + err_extra, regno); + return -EACCES; + } + min_off = reg->smin_value + off; + if (access_size > 0) + max_off = reg->smax_value + off + access_size - 1; + else + max_off = min_off; + } + + err = check_stack_slot_within_bounds(min_off, state, type); + if (!err) + err = check_stack_slot_within_bounds(max_off, state, type); + + if (err) { + if (tnum_is_const(reg->var_off)) { + verbose(env, "invalid%s stack R%d off=%d size=%d\n", + err_extra, regno, off, access_size); + } else { + char tn_buf[48]; + + tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off); + verbose(env, "invalid variable-offset%s stack R%d var_off=%s size=%d\n", + err_extra, regno, tn_buf, access_size); + } + } + return err; +} /* check whether memory at (regno + off) is accessible for t = (read | write) * if t==write, value_regno is a register which value is stored into memory @@ -3517,8 +3857,8 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn } } else if (reg->type == PTR_TO_STACK) { - off += reg->var_off.value; - err = check_stack_access(env, reg, off, size); + /* Basic bounds checks. */ + err = check_stack_access_within_bounds(env, regno, off, size, ACCESS_DIRECT, t); if (err) return err; @@ -3527,12 +3867,12 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn if (err) return err; - if (t == BPF_WRITE) - err = check_stack_write(env, state, off, size, - value_regno, insn_idx); - else - err = check_stack_read(env, state, off, size, + if (t == BPF_READ) + err = check_stack_read(env, regno, off, size, value_regno); + else + err = check_stack_write(env, regno, off, size, + value_regno, insn_idx); } else if (reg_is_pkt_pointer(reg)) { if (t == BPF_WRITE && !may_access_direct_pkt_data(env, NULL, t)) { verbose(env, "cannot write into packet\n"); @@ -3699,49 +4039,53 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i return 0; } -static int __check_stack_boundary(struct bpf_verifier_env *env, u32 regno, - int off, int access_size, - bool zero_size_allowed) +/* When register 'regno' is used to read the stack (either directly or through + * a helper function) make sure that it's within stack boundary and, depending + * on the access type, that all elements of the stack are initialized. + * + * 'off' includes 'regno->off', but not its dynamic part (if any). + * + * All registers that have been spilled on the stack in the slots within the + * read offsets are marked as read. + */ +static int check_stack_range_initialized( + struct bpf_verifier_env *env, int regno, int off, + int access_size, bool zero_size_allowed, + enum stack_access_src type, struct bpf_call_arg_meta *meta) { struct bpf_reg_state *reg = reg_state(env, regno); + struct bpf_func_state *state = func(env, reg); + int err, min_off, max_off, i, j, slot, spi; + char *err_extra = type == ACCESS_HELPER ? " indirect" : ""; + enum bpf_access_type bounds_check_type; + /* Some accesses can write anything into the stack, others are + * read-only. 
+ */ + bool clobber = false; - if (off >= 0 || off < -MAX_BPF_STACK || off + access_size > 0 || - access_size < 0 || (access_size == 0 && !zero_size_allowed)) { - if (tnum_is_const(reg->var_off)) { - verbose(env, "invalid stack type R%d off=%d access_size=%d\n", - regno, off, access_size); - } else { - char tn_buf[48]; - - tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off); - verbose(env, "invalid stack type R%d var_off=%s access_size=%d\n", - regno, tn_buf, access_size); - } + if (access_size == 0 && !zero_size_allowed) { + verbose(env, "invalid zero-sized read\n"); return -EACCES; } - return 0; -} -/* when register 'regno' is passed into function that will read 'access_size' - * bytes from that pointer, make sure that it's within stack boundary - * and all elements of stack are initialized. - * Unlike most pointer bounds-checking functions, this one doesn't take an - * 'off' argument, so it has to add in reg->off itself. - */ -static int check_stack_boundary(struct bpf_verifier_env *env, int regno, - int access_size, bool zero_size_allowed, - struct bpf_call_arg_meta *meta) -{ - struct bpf_reg_state *reg = reg_state(env, regno); - struct bpf_func_state *state = func(env, reg); - int err, min_off, max_off, i, j, slot, spi; + if (type == ACCESS_HELPER) { + /* The bounds checks for writes are more permissive than for + * reads. However, if raw_mode is not set, we'll do extra + * checks below. + */ + bounds_check_type = BPF_WRITE; + clobber = true; + } else { + bounds_check_type = BPF_READ; + } + err = check_stack_access_within_bounds(env, regno, off, access_size, + type, bounds_check_type); + if (err) + return err; + if (tnum_is_const(reg->var_off)) { - min_off = max_off = reg->var_off.value + reg->off; - err = __check_stack_boundary(env, regno, min_off, access_size, - zero_size_allowed); - if (err) - return err; + min_off = max_off = reg->var_off.value + off; } else { /* Variable offset is prohibited for unprivileged mode for * simplicity since it requires corresponding support in @@ -3752,8 +4096,8 @@ static int check_stack_boundary(struct bpf_verifier_env *env, int regno, char tn_buf[48]; tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off); - verbose(env, "R%d indirect variable offset stack access prohibited for !root, var_off=%s\n", - regno, tn_buf); + verbose(env, "R%d%s variable offset stack access prohibited for !root, var_off=%s\n", + regno, err_extra, tn_buf); return -EACCES; } /* Only initialized buffer on stack is allowed to be accessed @@ -3765,28 +4109,8 @@ static int check_stack_boundary(struct bpf_verifier_env *env, int regno, if (meta && meta->raw_mode) meta = NULL; - if (reg->smax_value >= BPF_MAX_VAR_OFF || - reg->smax_value <= -BPF_MAX_VAR_OFF) { - verbose(env, "R%d unbounded indirect variable offset stack access\n", - regno); - return -EACCES; - } - min_off = reg->smin_value + reg->off; - max_off = reg->smax_value + reg->off; - err = __check_stack_boundary(env, regno, min_off, access_size, - zero_size_allowed); - if (err) { - verbose(env, "R%d min value is outside of stack bound\n", - regno); - return err; - } - err = __check_stack_boundary(env, regno, max_off, access_size, - zero_size_allowed); - if (err) { - verbose(env, "R%d max value is outside of stack bound\n", - regno); - return err; - } + min_off = reg->smin_value + off; + max_off = reg->smax_value + off; } if (meta && meta->raw_mode) { @@ -3806,8 +4130,10 @@ static int check_stack_boundary(struct bpf_verifier_env *env, int regno, if (*stype == STACK_MISC) goto mark; if (*stype == STACK_ZERO) { - /* helper can write 
anything into the stack */ - *stype = STACK_MISC; + if (clobber) { + /* helper can write anything into the stack */ + *stype = STACK_MISC; + } goto mark; } @@ -3818,22 +4144,24 @@ static int check_stack_boundary(struct bpf_verifier_env *env, int regno, if (state->stack[spi].slot_type[0] == STACK_SPILL && (state->stack[spi].spilled_ptr.type == SCALAR_VALUE || env->allow_ptr_leaks)) { - __mark_reg_unknown(env, &state->stack[spi].spilled_ptr); - for (j = 0; j < BPF_REG_SIZE; j++) - state->stack[spi].slot_type[j] = STACK_MISC; + if (clobber) { + __mark_reg_unknown(env, &state->stack[spi].spilled_ptr); + for (j = 0; j < BPF_REG_SIZE; j++) + state->stack[spi].slot_type[j] = STACK_MISC; + } goto mark; } err: if (tnum_is_const(reg->var_off)) { - verbose(env, "invalid indirect read from stack off %d+%d size %d\n", - min_off, i - min_off, access_size); + verbose(env, "invalid%s read from stack R%d off %d+%d size %d\n", + err_extra, regno, min_off, i - min_off, access_size); } else { char tn_buf[48]; tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off); - verbose(env, "invalid indirect read from stack var_off %s+%d size %d\n", - tn_buf, i - min_off, access_size); + verbose(env, "invalid%s read from stack R%d var_off %s+%d size %d\n", + err_extra, regno, tn_buf, i - min_off, access_size); } return -EACCES; mark: @@ -3882,8 +4210,10 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno, "rdwr", &env->prog->aux->max_rdwr_access); case PTR_TO_STACK: - return check_stack_boundary(env, regno, access_size, - zero_size_allowed, meta); + return check_stack_range_initialized( + env, + regno, reg->off, access_size, + zero_size_allowed, ACCESS_HELPER, meta); default: /* scalar_value or invalid ptr */ /* Allow zero-byte read from NULL, regardless of pointer type */ if (zero_size_allowed && access_size == 0 && @@ -5547,6 +5877,41 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env, return !ret ? -EFAULT : 0; } +/* check that stack access falls within stack limits and that 'reg' doesn't + * have a variable offset. + * + * Variable offset is prohibited for unprivileged mode for simplicity since it + * requires corresponding support in Spectre masking for stack ALU. See also + * retrieve_ptr_limit(). + * + * + * 'off' includes 'reg->off'. + */ +static int check_stack_access_for_ptr_arithmetic( + struct bpf_verifier_env *env, + int regno, + const struct bpf_reg_state *reg, + int off) +{ + if (!tnum_is_const(reg->var_off)) { + char tn_buf[48]; + + tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off); + verbose(env, "R%d variable stack access prohibited for !root, var_off=%s off=%d\n", + regno, tn_buf, off); + return -EACCES; + } + + if (off >= 0 || off < -MAX_BPF_STACK) { + verbose(env, "R%d stack pointer arithmetic goes out of range, " + "prohibited for !root; off=%d\n", regno, off); + return -EACCES; + } + + return 0; +} + + /* Handles arithmetic on a pointer and a scalar: computes new min/max and var_off. * Caller should also handle BPF_MOV case separately. 
* If we return -EACCES, caller may want to try again treating pointer as a @@ -5790,10 +6155,9 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env, "prohibited for !root\n", dst); return -EACCES; } else if (dst_reg->type == PTR_TO_STACK && - check_stack_access(env, dst_reg, dst_reg->off + - dst_reg->var_off.value, 1)) { - verbose(env, "R%d stack pointer arithmetic goes out of range, " - "prohibited for !root\n", dst); + check_stack_access_for_ptr_arithmetic( + env, dst, dst_reg, dst_reg->off + + dst_reg->var_off.value)) { return -EACCES; } } @@ -12129,6 +12493,7 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, env->strict_alignment = false; env->allow_ptr_leaks = bpf_allow_ptr_leaks(); + env->allow_uninit_stack = bpf_allow_uninit_stack(); env->allow_ptr_to_map_access = bpf_allow_ptr_to_map_access(); env->bypass_spec_v1 = bpf_bypass_spec_v1(); env->bypass_spec_v4 = bpf_bypass_spec_v4();
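A minimal sketch of the kind of program the patch above enables: a C BPF
program indexing a stack-allocated buffer with an offset the verifier only
knows within bounds, not exactly. The section name, helper choice and program
shape are assumptions made for illustration and are not taken from the patch;
the test_stack_var_off.c program added later in this series is the real test.

/* Sketch (assumed names): a privileged BPF program reading a stack buffer
 * at a variable, bounded offset. Built with clang -target bpf.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("tp/syscalls/sys_enter_getpid")
int stack_var_off_sketch(void *ctx)
{
	char buf[8] = {};
	__u64 off;

	/* An unknown value, masked so the verifier knows it stays in [0, 7]. */
	off = bpf_get_prandom_u32() & 7;

	/* 'buf + off' is a stack pointer with a variable offset. With this
	 * patch the direct load below is accepted for privileged programs;
	 * previously only helper ("indirect") accesses could go through such
	 * a pointer.
	 */
	return buf[off];
}

char LICENSE[] SEC("license") = "GPL";

Before this change, a load like the one above was rejected with "variable
stack access var_off=...", so programs had to fall back to constant-offset
accesses or bounce the data through per-cpu array maps.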
From patchwork Sun Feb 7 01:10:25 2021
X-Patchwork-Submitter: Andrei Matei
X-Patchwork-Id: 12072409
X-Patchwork-Delegate: bpf@iogearbox.net
From: Andrei Matei
To: bpf@vger.kernel.org, ast@kernel.org, andrii@kernel.org
Cc: Andrei Matei
Subject: [PATCH bpf-next v3 2/4] selftest/bpf: adjust expected verifier errors
Date: Sat, 6 Feb 2021 20:10:25 -0500
Message-Id: <20210207011027.676572-3-andreimatei1@gmail.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210207011027.676572-1-andreimatei1@gmail.com>
References: <20210207011027.676572-1-andreimatei1@gmail.com>

The verifier errors around stack accesses have changed slightly in the
previous commit (generally for the better).

Signed-off-by: Andrei Matei
---
 .../selftests/bpf/verifier/basic_stack.c | 2 +- tools/testing/selftests/bpf/verifier/calls.c | 4 ++-- .../testing/selftests/bpf/verifier/const_or.c | 4 ++-- .../bpf/verifier/helper_access_var_len.c | 12 +++++----- .../testing/selftests/bpf/verifier/int_ptr.c | 6 ++--- .../selftests/bpf/verifier/raw_stack.c | 10 ++++----- .../selftests/bpf/verifier/stack_ptr.c | 22 +++++++++++-------- tools/testing/selftests/bpf/verifier/unpriv.c | 2 +- .../testing/selftests/bpf/verifier/var_off.c | 16 +++++++------- 9 files changed, 41 insertions(+), 37 deletions(-) diff --git a/tools/testing/selftests/bpf/verifier/basic_stack.c b/tools/testing/selftests/bpf/verifier/basic_stack.c index b56f8117c09d..f995777dddb3 100644 --- a/tools/testing/selftests/bpf/verifier/basic_stack.c +++ b/tools/testing/selftests/bpf/verifier/basic_stack.c @@ -4,7 +4,7 @@ BPF_ST_MEM(BPF_DW, BPF_REG_10, 8, 0), BPF_EXIT_INSN(), }, - .errstr = "invalid stack", + .errstr = "invalid write to stack", .result = REJECT, }, { diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c index c4f5d909e58a..eb888c8479c3 100644 --- a/tools/testing/selftests/bpf/verifier/calls.c +++ b/tools/testing/selftests/bpf/verifier/calls.c @@ -1228,7 +1228,7 @@ .prog_type = BPF_PROG_TYPE_XDP, .fixup_map_hash_8b = { 23 }, .result = REJECT, - .errstr = "invalid read from stack off -16+0 size 8", + .errstr = "invalid read from stack R7 off=-16 size=8", }, { "calls: two calls that receive map_value via arg=ptr_stack_of_caller.
test1", @@ -1958,7 +1958,7 @@ BPF_EXIT_INSN(), }, .fixup_map_hash_48b = { 6 }, - .errstr = "invalid indirect read from stack off -8+0 size 8", + .errstr = "invalid indirect read from stack R2 off -8+0 size 8", .result = REJECT, .prog_type = BPF_PROG_TYPE_XDP, }, diff --git a/tools/testing/selftests/bpf/verifier/const_or.c b/tools/testing/selftests/bpf/verifier/const_or.c index 6c214c58e8d4..0719b0ddec04 100644 --- a/tools/testing/selftests/bpf/verifier/const_or.c +++ b/tools/testing/selftests/bpf/verifier/const_or.c @@ -23,7 +23,7 @@ BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), BPF_EXIT_INSN(), }, - .errstr = "invalid stack type R1 off=-48 access_size=58", + .errstr = "invalid indirect access to stack R1 off=-48 size=58", .result = REJECT, .prog_type = BPF_PROG_TYPE_TRACEPOINT, }, @@ -54,7 +54,7 @@ BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), BPF_EXIT_INSN(), }, - .errstr = "invalid stack type R1 off=-48 access_size=58", + .errstr = "invalid indirect access to stack R1 off=-48 size=58", .result = REJECT, .prog_type = BPF_PROG_TYPE_TRACEPOINT, }, diff --git a/tools/testing/selftests/bpf/verifier/helper_access_var_len.c b/tools/testing/selftests/bpf/verifier/helper_access_var_len.c index 87c4e7900083..0ab7f1dfc97a 100644 --- a/tools/testing/selftests/bpf/verifier/helper_access_var_len.c +++ b/tools/testing/selftests/bpf/verifier/helper_access_var_len.c @@ -39,7 +39,7 @@ BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), BPF_EXIT_INSN(), }, - .errstr = "invalid indirect read from stack off -64+0 size 64", + .errstr = "invalid indirect read from stack R1 off -64+0 size 64", .result = REJECT, .prog_type = BPF_PROG_TYPE_TRACEPOINT, }, @@ -59,7 +59,7 @@ BPF_MOV64_IMM(BPF_REG_0, 0), BPF_EXIT_INSN(), }, - .errstr = "invalid stack type R1 off=-64 access_size=65", + .errstr = "invalid indirect access to stack R1 off=-64 size=65", .result = REJECT, .prog_type = BPF_PROG_TYPE_TRACEPOINT, }, @@ -136,7 +136,7 @@ BPF_MOV64_IMM(BPF_REG_0, 0), BPF_EXIT_INSN(), }, - .errstr = "invalid stack type R1 off=-64 access_size=65", + .errstr = "invalid indirect access to stack R1 off=-64 size=65", .result = REJECT, .prog_type = BPF_PROG_TYPE_TRACEPOINT, }, @@ -156,7 +156,7 @@ BPF_MOV64_IMM(BPF_REG_0, 0), BPF_EXIT_INSN(), }, - .errstr = "invalid stack type R1 off=-64 access_size=65", + .errstr = "invalid indirect access to stack R1 off=-64 size=65", .result = REJECT, .prog_type = BPF_PROG_TYPE_TRACEPOINT, }, @@ -194,7 +194,7 @@ BPF_MOV64_IMM(BPF_REG_0, 0), BPF_EXIT_INSN(), }, - .errstr = "invalid indirect read from stack off -64+0 size 64", + .errstr = "invalid indirect read from stack R1 off -64+0 size 64", .result = REJECT, .prog_type = BPF_PROG_TYPE_TRACEPOINT, }, @@ -584,7 +584,7 @@ BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), BPF_EXIT_INSN(), }, - .errstr = "invalid indirect read from stack off -64+32 size 64", + .errstr = "invalid indirect read from stack R1 off -64+32 size 64", .result = REJECT, .prog_type = BPF_PROG_TYPE_TRACEPOINT, }, diff --git a/tools/testing/selftests/bpf/verifier/int_ptr.c b/tools/testing/selftests/bpf/verifier/int_ptr.c index ca3b4729df66..070893fb2900 100644 --- a/tools/testing/selftests/bpf/verifier/int_ptr.c +++ b/tools/testing/selftests/bpf/verifier/int_ptr.c @@ -27,7 +27,7 @@ }, .result = REJECT, .prog_type = BPF_PROG_TYPE_CGROUP_SYSCTL, - .errstr = "invalid indirect read from stack off -16+0 size 8", + .errstr = "invalid indirect read from stack R4 off -16+0 size 8", }, { "ARG_PTR_TO_LONG half-uninitialized", @@ -59,7 +59,7 @@ }, .result = REJECT, .prog_type = BPF_PROG_TYPE_CGROUP_SYSCTL, 
- .errstr = "invalid indirect read from stack off -16+4 size 8", + .errstr = "invalid indirect read from stack R4 off -16+4 size 8", }, { "ARG_PTR_TO_LONG misaligned", @@ -125,7 +125,7 @@ }, .result = REJECT, .prog_type = BPF_PROG_TYPE_CGROUP_SYSCTL, - .errstr = "invalid stack type R4 off=-4 access_size=8", + .errstr = "invalid indirect access to stack R4 off=-4 size=8", }, { "ARG_PTR_TO_LONG initialized", diff --git a/tools/testing/selftests/bpf/verifier/raw_stack.c b/tools/testing/selftests/bpf/verifier/raw_stack.c index 193d9e87d5a9..cc8e8c3cdc03 100644 --- a/tools/testing/selftests/bpf/verifier/raw_stack.c +++ b/tools/testing/selftests/bpf/verifier/raw_stack.c @@ -11,7 +11,7 @@ BPF_EXIT_INSN(), }, .result = REJECT, - .errstr = "invalid read from stack off -8+0 size 8", + .errstr = "invalid read from stack R6 off=-8 size=8", .prog_type = BPF_PROG_TYPE_SCHED_CLS, }, { @@ -59,7 +59,7 @@ BPF_EXIT_INSN(), }, .result = REJECT, - .errstr = "invalid stack type R3", + .errstr = "invalid zero-sized read", .prog_type = BPF_PROG_TYPE_SCHED_CLS, }, { @@ -205,7 +205,7 @@ BPF_EXIT_INSN(), }, .result = REJECT, - .errstr = "invalid stack type R3 off=-513 access_size=8", + .errstr = "invalid indirect access to stack R3 off=-513 size=8", .prog_type = BPF_PROG_TYPE_SCHED_CLS, }, { @@ -221,7 +221,7 @@ BPF_EXIT_INSN(), }, .result = REJECT, - .errstr = "invalid stack type R3 off=-1 access_size=8", + .errstr = "invalid indirect access to stack R3 off=-1 size=8", .prog_type = BPF_PROG_TYPE_SCHED_CLS, }, { @@ -285,7 +285,7 @@ BPF_EXIT_INSN(), }, .result = REJECT, - .errstr = "invalid stack type R3 off=-512 access_size=0", + .errstr = "invalid zero-sized read", .prog_type = BPF_PROG_TYPE_SCHED_CLS, }, { diff --git a/tools/testing/selftests/bpf/verifier/stack_ptr.c b/tools/testing/selftests/bpf/verifier/stack_ptr.c index 8bfeb77c60bd..07eaa04412ae 100644 --- a/tools/testing/selftests/bpf/verifier/stack_ptr.c +++ b/tools/testing/selftests/bpf/verifier/stack_ptr.c @@ -44,7 +44,7 @@ BPF_EXIT_INSN(), }, .result = REJECT, - .errstr = "invalid stack off=-79992 size=8", + .errstr = "invalid write to stack R1 off=-79992 size=8", .errstr_unpriv = "R1 stack pointer arithmetic goes out of range", }, { @@ -57,7 +57,7 @@ BPF_EXIT_INSN(), }, .result = REJECT, - .errstr = "invalid stack off=0 size=8", + .errstr = "invalid write to stack R1 off=0 size=8", }, { "PTR_TO_STACK check high 1", @@ -106,7 +106,7 @@ BPF_EXIT_INSN(), }, .errstr_unpriv = "R1 stack pointer arithmetic goes out of range", - .errstr = "invalid stack off=0 size=1", + .errstr = "invalid write to stack R1 off=0 size=1", .result = REJECT, }, { @@ -119,7 +119,8 @@ BPF_EXIT_INSN(), }, .result = REJECT, - .errstr = "invalid stack off", + .errstr_unpriv = "R1 stack pointer arithmetic goes out of range", + .errstr = "invalid write to stack R1", }, { "PTR_TO_STACK check high 6", @@ -131,7 +132,8 @@ BPF_EXIT_INSN(), }, .result = REJECT, - .errstr = "invalid stack off", + .errstr_unpriv = "R1 stack pointer arithmetic goes out of range", + .errstr = "invalid write to stack", }, { "PTR_TO_STACK check high 7", @@ -183,7 +185,7 @@ BPF_EXIT_INSN(), }, .errstr_unpriv = "R1 stack pointer arithmetic goes out of range", - .errstr = "invalid stack off=-513 size=1", + .errstr = "invalid write to stack R1 off=-513 size=1", .result = REJECT, }, { @@ -208,7 +210,8 @@ BPF_EXIT_INSN(), }, .result = REJECT, - .errstr = "invalid stack off", + .errstr_unpriv = "R1 stack pointer arithmetic goes out of range", + .errstr = "invalid write to stack", }, { "PTR_TO_STACK check low 6", @@ -220,7 
+223,8 @@ BPF_EXIT_INSN(), }, .result = REJECT, - .errstr = "invalid stack off", + .errstr = "invalid write to stack", + .errstr_unpriv = "R1 stack pointer arithmetic goes out of range", }, { "PTR_TO_STACK check low 7", @@ -292,7 +296,7 @@ BPF_EXIT_INSN(), }, .result_unpriv = REJECT, - .errstr_unpriv = "invalid stack off=0 size=1", + .errstr_unpriv = "invalid write to stack R1 off=0 size=1", .result = ACCEPT, .retval = 42, }, diff --git a/tools/testing/selftests/bpf/verifier/unpriv.c b/tools/testing/selftests/bpf/verifier/unpriv.c index ee298627abae..b018ad71e0a8 100644 --- a/tools/testing/selftests/bpf/verifier/unpriv.c +++ b/tools/testing/selftests/bpf/verifier/unpriv.c @@ -108,7 +108,7 @@ BPF_EXIT_INSN(), }, .fixup_map_hash_8b = { 3 }, - .errstr_unpriv = "invalid indirect read from stack off -8+0 size 8", + .errstr_unpriv = "invalid indirect read from stack R2 off -8+0 size 8", .result_unpriv = REJECT, .result = ACCEPT, }, diff --git a/tools/testing/selftests/bpf/verifier/var_off.c b/tools/testing/selftests/bpf/verifier/var_off.c index 8504ac937809..49b78a1a261b 100644 --- a/tools/testing/selftests/bpf/verifier/var_off.c +++ b/tools/testing/selftests/bpf/verifier/var_off.c @@ -18,7 +18,7 @@ .prog_type = BPF_PROG_TYPE_LWT_IN, }, { - "variable-offset stack access", + "variable-offset stack read, priv vs unpriv", .insns = { /* Fill the top 8 bytes of the stack */ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), @@ -63,7 +63,7 @@ BPF_MOV64_IMM(BPF_REG_0, 0), BPF_EXIT_INSN(), }, - .errstr = "R4 unbounded indirect variable offset stack access", + .errstr = "invalid unbounded variable-offset indirect access to stack R4", .result = REJECT, .prog_type = BPF_PROG_TYPE_SOCK_OPS, }, @@ -88,7 +88,7 @@ BPF_EXIT_INSN(), }, .fixup_map_hash_8b = { 5 }, - .errstr = "R2 max value is outside of stack bound", + .errstr = "invalid variable-offset indirect access to stack R2", .result = REJECT, .prog_type = BPF_PROG_TYPE_LWT_IN, }, @@ -113,7 +113,7 @@ BPF_EXIT_INSN(), }, .fixup_map_hash_8b = { 5 }, - .errstr = "R2 min value is outside of stack bound", + .errstr = "invalid variable-offset indirect access to stack R2", .result = REJECT, .prog_type = BPF_PROG_TYPE_LWT_IN, }, @@ -138,7 +138,7 @@ BPF_EXIT_INSN(), }, .fixup_map_hash_8b = { 5 }, - .errstr = "invalid indirect read from stack var_off", + .errstr = "invalid indirect read from stack R2 var_off", .result = REJECT, .prog_type = BPF_PROG_TYPE_LWT_IN, }, @@ -163,7 +163,7 @@ BPF_EXIT_INSN(), }, .fixup_map_hash_8b = { 5 }, - .errstr = "invalid indirect read from stack var_off", + .errstr = "invalid indirect read from stack R2 var_off", .result = REJECT, .prog_type = BPF_PROG_TYPE_LWT_IN, }, @@ -189,7 +189,7 @@ BPF_EXIT_INSN(), }, .fixup_map_hash_8b = { 6 }, - .errstr_unpriv = "R2 stack pointer arithmetic goes out of range, prohibited for !root", + .errstr_unpriv = "R2 variable stack access prohibited for !root", .result_unpriv = REJECT, .result = ACCEPT, .prog_type = BPF_PROG_TYPE_CGROUP_SKB, @@ -217,7 +217,7 @@ BPF_MOV64_IMM(BPF_REG_0, 0), BPF_EXIT_INSN(), }, - .errstr = "invalid indirect read from stack var_off", + .errstr = "invalid indirect read from stack R4 var_off", .result = REJECT, .prog_type = BPF_PROG_TYPE_SOCK_OPS, }, From patchwork Sun Feb 7 01:10:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrei Matei X-Patchwork-Id: 12072417 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on 
[100.33.73.206]) by smtp.gmail.com with ESMTPSA id z6sm5788628qtu.8.2021.02.06.17.11.12 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Sat, 06 Feb 2021 17:11:13 -0800 (PST) From: Andrei Matei To: bpf@vger.kernel.org, ast@kernel.org, andrii@kernel.org Cc: Andrei Matei Subject: [PATCH bpf-next v3 3/4] selftest/bpf: verifier tests for var-off access Date: Sat, 6 Feb 2021 20:10:26 -0500 Message-Id: <20210207011027.676572-4-andreimatei1@gmail.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210207011027.676572-1-andreimatei1@gmail.com> References: <20210207011027.676572-1-andreimatei1@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Add tests for the new functionality - reading and writing to the stack through a variable-offset pointer. Signed-off-by: Andrei Matei --- .../testing/selftests/bpf/verifier/var_off.c | 99 ++++++++++++++++++- 1 file changed, 97 insertions(+), 2 deletions(-) diff --git a/tools/testing/selftests/bpf/verifier/var_off.c b/tools/testing/selftests/bpf/verifier/var_off.c index 49b78a1a261b..eab1f7f56e2f 100644 --- a/tools/testing/selftests/bpf/verifier/var_off.c +++ b/tools/testing/selftests/bpf/verifier/var_off.c @@ -31,14 +31,109 @@ * we don't know which */ BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10), - /* dereference it */ + /* dereference it for a stack read */ + BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, + .result_unpriv = REJECT, + .errstr_unpriv = "R2 variable stack access prohibited for !root", + .prog_type = BPF_PROG_TYPE_CGROUP_SKB, +}, +{ + "variable-offset stack read, uninitialized", + .insns = { + /* Get an unknown value */ + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0), + /* Make it small and 4-byte aligned */ + BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 4), + BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 8), + /* add it to fp. We now have either fp-4 or fp-8, but + * we don't know which + */ + BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10), + /* dereference it for a stack read */ BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0), + BPF_MOV64_IMM(BPF_REG_0, 0), BPF_EXIT_INSN(), }, - .errstr = "variable stack access var_off=(0xfffffffffffffff8; 0x4)", .result = REJECT, + .errstr = "invalid variable-offset read from stack R2", .prog_type = BPF_PROG_TYPE_LWT_IN, }, +{ + "variable-offset stack write, priv vs unpriv", + .insns = { + /* Get an unknown value */ + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0), + /* Make it small and 8-byte aligned */ + BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 8), + BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 16), + /* Add it to fp. We now have either fp-8 or fp-16, but + * we don't know which + */ + BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10), + /* Dereference it for a stack write */ + BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), + /* Now read from the address we just wrote. This shows + * that, after a variable-offset write, a priviledged + * program can read the slots that were in the range of + * that write (even if the verifier doesn't actually know + * if the slot being read was really written to or not. + */ + BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_2, 0), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + /* Variable stack access is rejected for unprivileged. 
+	 */
+	.errstr_unpriv = "R2 variable stack access prohibited for !root",
+	.result_unpriv = REJECT,
+	.result = ACCEPT,
+},
+{
+	"variable-offset stack write clobbers spilled regs",
+	.insns = {
+	/* Dummy instruction; needed because we need to patch the next one
+	 * and we can't patch the first instruction.
+	 */
+	BPF_MOV64_IMM(BPF_REG_6, 0),
+	/* Make R0 a map ptr */
+	BPF_LD_MAP_FD(BPF_REG_0, 0),
+	/* Get an unknown value */
+	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0),
+	/* Make it small and 8-byte aligned */
+	BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 8),
+	BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 16),
+	/* Add it to fp. We now have either fp-8 or fp-16, but
+	 * we don't know which.
+	 */
+	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10),
+	/* Spill R0(map ptr) into stack */
+	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
+	/* Dereference the unknown value for a stack write */
+	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
+	/* Fill the register back into R2 */
+	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -8),
+	/* Try to dereference R2 for a memory load */
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 8),
+	BPF_EXIT_INSN(),
+	},
+	.fixup_map_hash_8b = { 1 },
+	/* The unprivileged case is not too interesting; variable
+	 * stack access is rejected.
+	 */
+	.errstr_unpriv = "R2 variable stack access prohibited for !root",
+	.result_unpriv = REJECT,
+	/* In the privileged case, dereferencing a spilled-and-then-filled
+	 * register is rejected because the previous variable-offset stack
+	 * write might have overwritten the spilled pointer (i.e. we lose track
+	 * of the spilled register when we analyze the write).
+	 */
+	.errstr = "R2 invalid mem access 'inv'",
+	.result = REJECT,
+},
 {
 	"indirect variable-offset stack access, unbounded",
 	.insns = {
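[Editor's note] For readers who don't parse the raw BPF_*() instruction macros
fluently, the following is a rough C-level sketch of the pattern the
"variable-offset stack write, priv vs unpriv" test above exercises: an offset
that is bounded but not statically known, used to write and then read back a
stack buffer. This is an illustration only; the function and variable names
are invented here and do not appear in the patch.

	/* Illustrative sketch only -- not part of the patch. */
	#include <string.h>

	static unsigned long read_after_var_off_write(unsigned long unknown)
	{
		char buf[16];
		unsigned long val;
		/* Mask the unknown value so the offset is small and 8-byte
		 * aligned: it ends up as either 0 or 8, but we don't know which.
		 */
		unsigned long off = unknown & 8;

		/* Variable-offset stack write: one of two 8-byte slots is zeroed. */
		memset(buf + off, 0, 8);
		/* Variable-offset stack read from the same range. After this patch
		 * set the verifier accepts the equivalent BPF read in privileged
		 * programs and still rejects it in unprivileged ones.
		 */
		memcpy(&val, buf + off, sizeof(val));
		return val;
	}

In the actual test the unknown value comes from the context register and the
accesses are single 8-byte BPF_ST/BPF_LDX instructions at either fp-8 or
fp-16; the C version is only meant to convey the shape of the access pattern.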
From patchwork Sun Feb 7 01:10:27 2021
X-Patchwork-Submitter: Andrei Matei
X-Patchwork-Id: 12072415
X-Patchwork-Delegate: bpf@iogearbox.net
From: Andrei Matei
To: bpf@vger.kernel.org, ast@kernel.org, andrii@kernel.org
Cc: Andrei Matei
Subject: [PATCH bpf-next v3 4/4] selftest/bpf: add test for var-offset stack access
Date: Sat, 6 Feb 2021 20:10:27 -0500
Message-Id: <20210207011027.676572-5-andreimatei1@gmail.com>
In-Reply-To: <20210207011027.676572-1-andreimatei1@gmail.com>
References: <20210207011027.676572-1-andreimatei1@gmail.com>

Add a higher-level test (a C BPF program) for the new functionality:
variable-offset stack reads and writes.

Signed-off-by: Andrei Matei
---
 .../selftests/bpf/prog_tests/stack_var_off.c | 36 ++++++++++++
 .../selftests/bpf/progs/test_stack_var_off.c | 56 +++++++++++++++++++
 2 files changed, 92 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/stack_var_off.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_stack_var_off.c

diff --git a/tools/testing/selftests/bpf/prog_tests/stack_var_off.c b/tools/testing/selftests/bpf/prog_tests/stack_var_off.c
new file mode 100644
index 000000000000..52e00486b1aa
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/stack_var_off.c
@@ -0,0 +1,36 @@
+// SPDX-License-Identifier: GPL-2.0
+#include
+#include "test_stack_var_off.skel.h"
+
+/* Test reads and writes to the stack performed with offsets that are not
+ * statically known.
+ */
+void test_stack_var_off(void)
+{
+	int duration = 0;
+	struct test_stack_var_off *skel;
+
+	skel = test_stack_var_off__open_and_load();
+	if (CHECK(!skel, "skel_open", "failed to open skeleton\n"))
+		return;
+	if (CHECK(!skel->bss, "check_bss", ".bss wasn't mmap()-ed\n"))
+		goto cleanup;
+
+	test_stack_var_off__attach(skel);
+
+	/* Give pid to bpf prog so it doesn't trigger for anyone else. */
+	skel->bss->test_pid = getpid();
+	/* Initialize the probe's input. */
+	skel->bss->input[0] = 2;
+	skel->bss->input[1] = 42; /* This will be returned in probe_res. */
+
+	/* Trigger probe. */
+	usleep(1);
+
+	if (CHECK(skel->bss->probe_res != 42, "check_probe_res",
+		  "wrong probe res: %d\n", skel->bss->probe_res))
+		goto cleanup;
+
+cleanup:
+	test_stack_var_off__destroy(skel);
+}

diff --git a/tools/testing/selftests/bpf/progs/test_stack_var_off.c b/tools/testing/selftests/bpf/progs/test_stack_var_off.c
new file mode 100644
index 000000000000..bd9c8d86cd91
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_stack_var_off.c
@@ -0,0 +1,56 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2017 Facebook
+
+#include
+#include
+#include
+#include
+
+int probe_res;
+
+char input[4] = {};
+int test_pid;
+
+SEC("tracepoint/syscalls/sys_enter_nanosleep")
+int probe(void *ctx)
+{
+	/* This BPF program performs variable-offset reads and writes on a
+	 * stack-allocated buffer.
+	 */
+	char stack_buf[16];
+	unsigned long len;
+	unsigned long last;
+
+	if (test_pid == 0)
+		return 0;
+	if ((bpf_get_current_pid_tgid() >> 32) != test_pid)
+		return 0;
+
+	/* Copy the input to the stack. */
+	__builtin_memcpy(stack_buf, input, 4);
+
+	/* The first byte in the buffer indicates the length. */
+	len = stack_buf[0] & 0xf;
+	last = (len - 1) & 0xf;
+
+	/* Append something to the buffer. The offset where we write is not
+	 * statically known; this is a variable-offset stack write.
+	 */
+	stack_buf[len] = 42;
+
+	/* Index into the buffer at an unknown offset. This is a
+	 * variable-offset stack read.
+	 *
+	 * Note that if it wasn't for the preceding variable-offset write, this
+	 * read would be rejected because the stack slot cannot be verified as
+	 * being initialized. With the preceding variable-offset write, the
+	 * stack slot still cannot be verified, but the write inhibits the
+	 * respective check on the reasoning that, if there was a
+	 * variable-offset write to a higher-or-equal spot, we're probably
+	 * reading what we just wrote.
+	 */
+	probe_res = stack_buf[last];
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
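[Editor's note] To make the comment about the read-after-write reasoning
concrete, here is an illustrative counter-example; it is not part of this
patch and the program and variable names are made up. It performs the same
kind of variable-offset stack read, but without a preceding variable-offset
write covering the accessed range, so with the verifier changes in this
series it should be rejected (for privileged programs, with an error along
the lines of "invalid variable-offset read from stack").

	// Illustrative counter-example, not part of the patch.
	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	int rejected_res;

	SEC("tracepoint/syscalls/sys_enter_nanosleep")
	int probe_rejected(void *ctx)
	{
		char stack_buf[16];
		unsigned long idx;

		/* A fixed-offset write initializes only stack_buf[0]; the
		 * other slots remain unknown to the verifier.
		 */
		stack_buf[0] = 1;

		/* Only the low 4 bits of idx are known; the exact offset is not. */
		idx = bpf_get_current_pid_tgid() & 0xf;

		/* Variable-offset read over a range that is not fully
		 * initialized: the verifier cannot prove the accessed slot was
		 * written, so the program should fail to load.
		 */
		rejected_res = stack_buf[idx];
		return 0;
	}

	char _license[] SEC("license") = "GPL";

The contrast with probe() above is the point: the variable-offset write in
probe() is what allows its subsequent variable-offset read to pass
verification.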