From patchwork Sat Jan 21 00:22:30 2023
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13110772
X-Patchwork-Delegate: bpf@iogearbox.net
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Eduard Zingerman, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau
Subject: [PATCH bpf-next v5 01/12] bpf: Fix state pruning for STACK_DYNPTR stack slots
Date: Sat, 21 Jan 2023 05:52:30 +0530
Message-Id: <20230121002241.2113993-2-memxor@gmail.com>
In-Reply-To: <20230121002241.2113993-1-memxor@gmail.com>
References: <20230121002241.2113993-1-memxor@gmail.com>
X-Mailing-List: bpf@vger.kernel.org

The root of the problem is missing liveness marking for STACK_DYNPTR slots. This leads to all kinds of problems inside stacksafe.

The verifier by default inside stacksafe ignores spilled_ptr in stack slots which do not have REG_LIVE_READ marks. Since this is being checked in the 'old' explored state, it must have already done clean_live_states for this old bpf_func_state. Hence, it won't be receiving any more liveness marks from the to-be-explored insns (it has received the REG_LIVE_DONE marking from the liveness point of view).

What this means is that the verifier considers it safe not to compare the stack slot if it was never read by children states. While liveness marks are usually propagated correctly following the parentage chain for spilled registers (SCALAR_VALUE and PTR_* types), the same is not the case for STACK_DYNPTR. clean_live_states hence simply rewrites these stack slots to the type STACK_INVALID since it sees no REG_LIVE_READ marks.

The end result is that we will never see STACK_DYNPTR slots in the explored state. Even if the verifier were conservatively matching !REG_LIVE_READ slots, the very next check, which continues the stacksafe loop on seeing STACK_INVALID, would again prevent further checks.

Now, as long as the verifier stores an explored state which we can compare to when reaching a pruning point, we can abuse this bug to make the verifier prune the search for obviously unsafe paths using STACK_DYNPTR slots, thinking they are never used and hence safe.

Doing this in unprivileged mode is a bit challenging. add_new_state is only set when seeing BPF_F_TEST_STATE_FREQ (which requires privileges) or when the jmps_processed difference is >= 2 and the insn_processed difference is >= 8. So coming up with the unprivileged case requires a little more work, but it is still totally possible. The test case being discussed below triggers the heuristic even in unprivileged mode. However, it no longer works since commit 8addbfc7b308 ("bpf: Gate dynptr API behind CAP_BPF").

Let's try to study the test step by step.
Consider the following program (C style BPF ASM):

  0  r0 = 0;
  1  r6 = &ringbuf_map;
  3  r1 = r6;
  4  r2 = 8;
  5  r3 = 0;
  6  r4 = r10;
  7  r4 -= -16;
  8  call bpf_ringbuf_reserve_dynptr;
  9  if r0 == 0 goto pc+1;
  10 goto pc+1;
  11 *(r10 - 16) = 0xeB9F;
  12 r1 = r10;
  13 r1 -= -16;
  14 r2 = 0;
  15 call bpf_ringbuf_discard_dynptr;
  16 r0 = 0;
  17 exit;

We know that insn 12 will be a pruning point, hence if we force add_new_state for it, it will first verify the following path as safe in straight line exploration:

  0 1 3 4 5 6 7 8 9 -> 10 -> (12) 13 14 15 16 17

Then, when we arrive at insn 12 from the following path:

  0 1 3 4 5 6 7 8 9 -> 11 (12)

we will find a state that has already been verified as safe at insn 12. Since the register state is the same at this point, regsafe will pass. Next, in stacksafe, the slots at spi = 0 and spi = 1 (the location of our dynptr) are skipped because of !REG_LIVE_READ. The rest matches, so stacksafe returns true. Next, refsafe is also true as the reference state is unchanged in both states. The states are considered equivalent and the search is pruned.

Hence, we are able to construct a dynptr with arbitrary contents and use the dynptr API to operate on this arbitrary pointer and arbitrary size + offset.

To fix this, first define a mark_dynptr_read function that propagates liveness marks whenever a valid initialized dynptr is accessed by dynptr helpers. REG_LIVE_WRITTEN is marked whenever we initialize an uninitialized dynptr. This is done in mark_stack_slots_dynptr. It allows screening off mark_reg_read and not propagating marks upwards from that point. This ensures that we either set REG_LIVE_READ64 on both dynptr slots, or on none, so clean_live_states either sets both slots to STACK_INVALID or neither of them. This is the invariant the checks inside stacksafe rely on.

Next, do a complete comparison of both stack slots whenever they have STACK_DYNPTR. Compare the dynptr type stored in the spilled_ptr, and also whether both form the same first_slot. Only then is the later path safe.

Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
Acked-by: Eduard Zingerman
Signed-off-by: Kumar Kartikeya Dwivedi
---
 kernel/bpf/verifier.c | 88 +++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 84 insertions(+), 4 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index ca7db2ce70b9..39d8ee38c338 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -781,6 +781,9 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_ state->stack[spi - 1].spilled_ptr.ref_obj_id = id; } + state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN; + state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN; + return 0; } @@ -805,6 +808,31 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re __mark_reg_not_init(env, &state->stack[spi].spilled_ptr); __mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr); + + /* Why do we need to set REG_LIVE_WRITTEN for STACK_INVALID slot? + * + * While we don't allow reading STACK_INVALID, it is still possible to + * do <8 byte writes marking some but not all slots as STACK_MISC. Then, + * helpers or insns can do partial read of that part without failing, + * but check_stack_range_initialized, check_stack_read_var_off, and + * check_stack_read_fixed_off will do mark_reg_read for all 8-bytes of + * the slot conservatively. Hence we need to prevent those liveness + * marking walks.
+ * + * This was not a problem before because STACK_INVALID is only set by + * default (where the default reg state has its reg->parent as NULL), or + * in clean_live_states after REG_LIVE_DONE (at which point + * mark_reg_read won't walk reg->parent chain), but not randomly during + * verifier state exploration (like we did above). Hence, for our case + * parentage chain will still be live (i.e. reg->parent may be + * non-NULL), while earlier reg->parent was NULL, so we need + * REG_LIVE_WRITTEN to screen off read marker propagation when it is + * done later on reads or by mark_dynptr_read as well to unnecessary + * mark registers in verifier state. + */ + state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN; + state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN; + return 0; } @@ -2390,6 +2418,30 @@ static int mark_reg_read(struct bpf_verifier_env *env, return 0; } +static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg) +{ + struct bpf_func_state *state = func(env, reg); + int spi, ret; + + /* For CONST_PTR_TO_DYNPTR, it must have already been done by + * check_reg_arg in check_helper_call and mark_btf_func_reg_size in + * check_kfunc_call. + */ + if (reg->type == CONST_PTR_TO_DYNPTR) + return 0; + spi = get_spi(reg->off); + /* Caller ensures dynptr is valid and initialized, which means spi is in + * bounds and spi is the first dynptr slot. Simply mark stack slot as + * read. + */ + ret = mark_reg_read(env, &state->stack[spi].spilled_ptr, + state->stack[spi].spilled_ptr.parent, REG_LIVE_READ64); + if (ret) + return ret; + return mark_reg_read(env, &state->stack[spi - 1].spilled_ptr, + state->stack[spi - 1].spilled_ptr.parent, REG_LIVE_READ64); +} + /* This function is supposed to be used by the following 32-bit optimization * code only. It returns TRUE if the source or destination register operates * on 64-bit, otherwise return FALSE. @@ -5977,6 +6029,8 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno, meta->uninit_dynptr_regno = regno; } else /* MEM_RDONLY and None case from above */ { + int err; + /* For the reg->type == PTR_TO_STACK case, bpf_dynptr is never const */ if (reg->type == CONST_PTR_TO_DYNPTR && !(arg_type & MEM_RDONLY)) { verbose(env, "cannot pass pointer to const bpf_dynptr, the helper mutates it\n"); @@ -6010,6 +6064,10 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno, err_extra, regno); return -EINVAL; } + + err = mark_dynptr_read(env, reg); + if (err) + return err; } return 0; } @@ -13215,10 +13273,9 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old, return false; if (i % BPF_REG_SIZE != BPF_REG_SIZE - 1) continue; - if (!is_spilled_reg(&old->stack[spi])) - continue; - if (!regsafe(env, &old->stack[spi].spilled_ptr, - &cur->stack[spi].spilled_ptr, idmap)) + /* Both old and cur are having same slot_type */ + switch (old->stack[spi].slot_type[BPF_REG_SIZE - 1]) { + case STACK_SPILL: /* when explored and current stack slot are both storing * spilled registers, check that stored pointers types * are the same as well. @@ -13229,7 +13286,30 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old, * such verifier states are not equivalent. 
* return false to continue verification of this path */ + if (!regsafe(env, &old->stack[spi].spilled_ptr, + &cur->stack[spi].spilled_ptr, idmap)) + return false; + break; + case STACK_DYNPTR: + { + const struct bpf_reg_state *old_reg, *cur_reg; + + old_reg = &old->stack[spi].spilled_ptr; + cur_reg = &cur->stack[spi].spilled_ptr; + if (old_reg->dynptr.type != cur_reg->dynptr.type || + old_reg->dynptr.first_slot != cur_reg->dynptr.first_slot || + !check_ids(old_reg->ref_obj_id, cur_reg->ref_obj_id, idmap)) + return false; + break; + } + case STACK_MISC: + case STACK_ZERO: + case STACK_INVALID: + continue; + /* Ensure that new unhandled slot types return false by default */ + default: return false; + } } return true; }

From patchwork Sat Jan 21 00:22:31 2023
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13110781
X-Patchwork-Delegate: bpf@iogearbox.net
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Joanne Koong, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau
Subject: [PATCH bpf-next v5 02/12] bpf: Fix missing var_off check for ARG_PTR_TO_DYNPTR
Date: Sat, 21 Jan 2023 05:52:31 +0530
Message-Id: <20230121002241.2113993-3-memxor@gmail.com>
In-Reply-To: <20230121002241.2113993-1-memxor@gmail.com>
References: <20230121002241.2113993-1-memxor@gmail.com>
X-Mailing-List: bpf@vger.kernel.org

Currently, the dynptr function is not checking the variable offset part of PTR_TO_STACK that it needs to check. The fixed offset is considered when computing the stack pointer index, but if the variable offset was not a constant (such that it could not be accumulated in reg->off), we will end up with a discrepancy where the runtime pointer does not point to the actual stack slot we mark as STACK_DYNPTR.

It is impossible to precisely track dynptr state when the variable offset is not constant, hence, just like bpf_timer, kptr, bpf_spin_lock, etc., simply reject the case where reg->var_off is not constant. Then, consider both reg->off and reg->var_off.value when computing the stack pointer index.

A new helper dynptr_get_spi is introduced to hide these details, since the dynptr stack slot needs to be located in multiple places outside the process_dynptr_func checks; once we know it's a PTR_TO_STACK, we need to enforce these checks in all of them.

Note that it is disallowed for unprivileged users to have a non-constant var_off, so this problem should only be possible to trigger from programs having CAP_PERFMON. However, its effects can vary.

Without the fix, it is possible to replace the contents of the dynptr arbitrarily by making the verifier mark different stack slots than the actual location, and then doing writes to the actual stack address of the dynptr at runtime.
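As an illustration, here is a minimal sketch (hypothetical program, not taken from the patch or the selftests; the buffer and function names are made up) of the pattern that dynptr_get_spi now rejects, where the dynptr lives at a stack offset the verifier cannot resolve to a constant:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  char buf[16];

  SEC("?raw_tp")
  int dynptr_non_const_off(void *ctx)
  {
      struct bpf_dynptr ptrs[2];
      /* Unknown scalar: the verifier only knows idx is 0 or 1. */
      __u32 idx = bpf_get_prandom_u32() & 1;

      /* &ptrs[idx] is a PTR_TO_STACK with non-constant var_off, so the
       * verifier cannot tell which slots to mark STACK_DYNPTR; with this
       * patch it is rejected with "dynptr has to be at a constant offset".
       */
      bpf_dynptr_from_mem(buf, sizeof(buf), 0, &ptrs[idx]);
      return 0;
  }

  char _license[] SEC("license") = "GPL";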
Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs") Acked-by: Joanne Koong Signed-off-by: Kumar Kartikeya Dwivedi --- kernel/bpf/verifier.c | 84 +++++++++++++++---- .../bpf/prog_tests/kfunc_dynptr_param.c | 2 +- .../testing/selftests/bpf/progs/dynptr_fail.c | 4 +- 3 files changed, 69 insertions(+), 21 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 39d8ee38c338..76afdbea425a 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -638,11 +638,34 @@ static void print_liveness(struct bpf_verifier_env *env, verbose(env, "D"); } -static int get_spi(s32 off) +static int __get_spi(s32 off) { return (-off - 1) / BPF_REG_SIZE; } +static int dynptr_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *reg) +{ + int off, spi; + + if (!tnum_is_const(reg->var_off)) { + verbose(env, "dynptr has to be at a constant offset\n"); + return -EINVAL; + } + + off = reg->off + reg->var_off.value; + if (off % BPF_REG_SIZE) { + verbose(env, "cannot pass in dynptr at an offset=%d\n", off); + return -EINVAL; + } + + spi = __get_spi(off); + if (spi < 1) { + verbose(env, "cannot pass in dynptr at an offset=%d\n", off); + return -EINVAL; + } + return spi; +} + static bool is_spi_bounds_valid(struct bpf_func_state *state, int spi, int nr_slots) { int allocated_slots = state->allocated_stack / BPF_REG_SIZE; @@ -754,7 +777,9 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_ enum bpf_dynptr_type type; int spi, i, id; - spi = get_spi(reg->off); + spi = dynptr_get_spi(env, reg); + if (spi < 0) + return spi; if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS)) return -EINVAL; @@ -792,7 +817,9 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re struct bpf_func_state *state = func(env, reg); int spi, i; - spi = get_spi(reg->off); + spi = dynptr_get_spi(env, reg); + if (spi < 0) + return spi; if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS)) return -EINVAL; @@ -844,7 +871,11 @@ static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_ if (reg->type == CONST_PTR_TO_DYNPTR) return false; - spi = get_spi(reg->off); + spi = dynptr_get_spi(env, reg); + if (spi < 0) + return false; + + /* We will do check_mem_access to check and update stack bounds later */ if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS)) return true; @@ -860,14 +891,15 @@ static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_ static bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_state *reg) { struct bpf_func_state *state = func(env, reg); - int spi; - int i; + int spi, i; /* This already represents first slot of initialized bpf_dynptr */ if (reg->type == CONST_PTR_TO_DYNPTR) return true; - spi = get_spi(reg->off); + spi = dynptr_get_spi(env, reg); + if (spi < 0) + return false; if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS) || !state->stack[spi].spilled_ptr.dynptr.first_slot) return false; @@ -896,7 +928,9 @@ static bool is_dynptr_type_expected(struct bpf_verifier_env *env, struct bpf_reg if (reg->type == CONST_PTR_TO_DYNPTR) { return reg->dynptr.type == dynptr_type; } else { - spi = get_spi(reg->off); + spi = dynptr_get_spi(env, reg); + if (spi < 0) + return false; return state->stack[spi].spilled_ptr.dynptr.type == dynptr_type; } } @@ -2429,7 +2463,9 @@ static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state * */ if (reg->type == CONST_PTR_TO_DYNPTR) return 0; - spi = get_spi(reg->off); + spi = 
dynptr_get_spi(env, reg); + if (spi < 0) + return spi; /* Caller ensures dynptr is valid and initialized, which means spi is in * bounds and spi is the first dynptr slot. Simply mark stack slot as * read. @@ -5992,12 +6028,15 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno, } /* CONST_PTR_TO_DYNPTR already has fixed and var_off as 0 due to * check_func_arg_reg_off's logic. We only need to check offset - * alignment for PTR_TO_STACK. + * and its alignment for PTR_TO_STACK. */ - if (reg->type == PTR_TO_STACK && (reg->off % BPF_REG_SIZE)) { - verbose(env, "cannot pass in dynptr at an offset=%d\n", reg->off); - return -EINVAL; + if (reg->type == PTR_TO_STACK) { + int err = dynptr_get_spi(env, reg); + + if (err < 0) + return err; } + /* MEM_UNINIT - Points to memory that is an appropriate candidate for * constructing a mutable bpf_dynptr object. * @@ -6405,15 +6444,16 @@ int check_func_arg_reg_off(struct bpf_verifier_env *env, } } -static u32 dynptr_ref_obj_id(struct bpf_verifier_env *env, struct bpf_reg_state *reg) +static int dynptr_ref_obj_id(struct bpf_verifier_env *env, struct bpf_reg_state *reg) { struct bpf_func_state *state = func(env, reg); int spi; if (reg->type == CONST_PTR_TO_DYNPTR) return reg->ref_obj_id; - - spi = get_spi(reg->off); + spi = dynptr_get_spi(env, reg); + if (spi < 0) + return spi; return state->stack[spi].spilled_ptr.ref_obj_id; } @@ -6487,7 +6527,9 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg, * PTR_TO_STACK. */ if (reg->type == PTR_TO_STACK) { - spi = get_spi(reg->off); + spi = dynptr_get_spi(env, reg); + if (spi < 0) + return spi; if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS) || !state->stack[spi].spilled_ptr.ref_obj_id) { verbose(env, "arg %d is an unacquired reference\n", regno); @@ -7977,13 +8019,19 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++) { if (arg_type_is_dynptr(fn->arg_type[i])) { struct bpf_reg_state *reg = ®s[BPF_REG_1 + i]; + int ref_obj_id; if (meta.ref_obj_id) { verbose(env, "verifier internal error: meta.ref_obj_id already set\n"); return -EFAULT; } - meta.ref_obj_id = dynptr_ref_obj_id(env, reg); + ref_obj_id = dynptr_ref_obj_id(env, reg); + if (ref_obj_id < 0) { + verbose(env, "verifier internal error: failed to obtain dynptr ref_obj_id\n"); + return ref_obj_id; + } + meta.ref_obj_id = ref_obj_id; break; } } diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c b/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c index a9229260a6ce..72800b1e8395 100644 --- a/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c +++ b/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c @@ -18,7 +18,7 @@ static struct { const char *expected_verifier_err_msg; int expected_runtime_err; } kfunc_dynptr_tests[] = { - {"not_valid_dynptr", "Expected an initialized dynptr as arg #1", 0}, + {"not_valid_dynptr", "cannot pass in dynptr at an offset=-8", 0}, {"not_ptr_to_stack", "arg#0 expected pointer to stack or dynptr_ptr", 0}, {"dynptr_data_null", NULL, -EBADMSG}, }; diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c index 78debc1b3820..02d57b95cf6e 100644 --- a/tools/testing/selftests/bpf/progs/dynptr_fail.c +++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c @@ -382,7 +382,7 @@ int invalid_helper1(void *ctx) /* A dynptr can't be passed into a helper function at a non-zero offset */ SEC("?raw_tp") -__failure __msg("Expected an 
initialized dynptr as arg #3") +__failure __msg("cannot pass in dynptr at an offset=-8") int invalid_helper2(void *ctx) { struct bpf_dynptr ptr; @@ -584,7 +584,7 @@ int invalid_read4(void *ctx) /* Initializing a dynptr on an offset should fail */ SEC("?raw_tp") -__failure __msg("invalid write to stack") +__failure __msg("cannot pass in dynptr at an offset=0") int invalid_offset(void *ctx) { struct bpf_dynptr ptr; From patchwork Sat Jan 21 00:22:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13110783 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 918F2C05027 for ; Sat, 21 Jan 2023 00:26:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229542AbjAUA0t (ORCPT ); Fri, 20 Jan 2023 19:26:49 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35728 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230310AbjAUAY7 (ORCPT ); Fri, 20 Jan 2023 19:24:59 -0500 Received: from mail-pj1-x1043.google.com (mail-pj1-x1043.google.com [IPv6:2607:f8b0:4864:20::1043]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 92C90C13DA for ; Fri, 20 Jan 2023 16:24:21 -0800 (PST) Received: by mail-pj1-x1043.google.com with SMTP id b10so7100260pjo.1 for ; Fri, 20 Jan 2023 16:24:21 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=2H/54ZqUIjU3VYoXRIgXxyEiI8YMEx+pUQ0BaawvGCY=; b=iOFVvDBKnQJSkUi+dAHuhUyrKgrH0YBL0gekJv0qfcKQPMSFSbg9Vc0zQ5AdAhBUqM deUzxo5xP4Fy3TBCiq5jKTHeYtLkKHJz9TQRrd3fEfjQLCMvB3EO+h6YnSgKk9P2hiZX GDsRE0NKvcRfecFquU7hy/xSGjHJrDxoHxdQfeeF9p37bJfAdcp4howypukHq3dXbfkj 1Dj6BaVXY2+UHCI6YyZaBe+Gv+3YhTPX3Lv/lp174pmsxq3LGku3sleVDkTc42bUgos0 Px0wir+Ji5oKzEogl4MnjD1y+dQocQb0jeRLE9ZURqOlOlgXtEGSjO7RLBZcLHB8fXUC YMdA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=2H/54ZqUIjU3VYoXRIgXxyEiI8YMEx+pUQ0BaawvGCY=; b=6wwTTmFiTOy54nJjqU8bPKbP7/F+GAAmnY4jELpSTWsBIe8Z7entfw3xcWpEkV2wtx zauihQrbwrKHcgKGs36ne/X72ebGd+yio3sndMYxLi2YGybDfZw9KQ6JcWaMCah6x4GB MEuGO1moOVOWoQ3Yn7eKqzlsCn4FK8EMZTXrO6GZAvvpLHeE8i7pMiBKUSOKPjzrT5nl Y78AKlxkUVwxdceLtk2PzVVlKjRD8FY/MW+9+tO0lq+U0xUnKTQftC4GbctLW3ZteL5U aXBZ/349bh7jo+T6iBgCSgCAhMmztwQlMa6SCyVxtiTUWAJcLPalL52/zd24qRCJJgOO nsmQ== X-Gm-Message-State: AFqh2kp80zDrGX314EgVMY+4hKvQTMklhuQ32KlKHvWQcvELJQJ+zK3t Z9pr43D7rv/L7HCW68gOoi5VOxaakOM= X-Google-Smtp-Source: AMrXdXvQdFOM7pHzBq8FoHpSvdfXZ2EB4E5bSjY3WKkx/lRg4oDk9nhAoLxoe96Fruw5G+M90zvV+w== X-Received: by 2002:a17:902:8c94:b0:193:ab5:39c7 with SMTP id t20-20020a1709028c9400b001930ab539c7mr17171468plo.11.1674260575973; Fri, 20 Jan 2023 16:22:55 -0800 (PST) Received: from localhost ([2405:201:6014:dae3:7dbb:8857:7c39:bb2a]) by smtp.gmail.com with ESMTPSA id i11-20020a17090332cb00b001949c0d7a33sm9585695plr.7.2023.01.20.16.22.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 20 Jan 2023 16:22:55 -0800 (PST) From: 
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau
Subject: [PATCH bpf-next v5 03/12] bpf: Fix partial dynptr stack slot reads/writes
Date: Sat, 21 Jan 2023 05:52:32 +0530
Message-Id: <20230121002241.2113993-4-memxor@gmail.com>
In-Reply-To: <20230121002241.2113993-1-memxor@gmail.com>
References: <20230121002241.2113993-1-memxor@gmail.com>
X-Mailing-List: bpf@vger.kernel.org

Currently, while reads are disallowed for dynptr stack slots, writes are not. Reads don't work through either direct access or helpers, while writes do work in both cases, but have the effect of overwriting the slot_type.

While this is fine, handling for a few edge cases is missing. Firstly, a user can overwrite the stack slots of a dynptr partially.

Consider the following layout:

  spi: [d][d][?]
        2  1  0

The first slot is at spi 2, the second at spi 1. Now, do a write of 1 to 8 bytes for spi 1. This will essentially either write STACK_MISC for all slot_types, or STACK_MISC and STACK_ZERO (in case of a size < BPF_REG_SIZE partial write of zeroes). The end result is that the slot is scrubbed.

Now, the layout is:

  spi: [d][m][?]
        2  1  0

Suppose the user now initializes spi = 1 as a dynptr. We get:

  spi: [d][d][d]
        2  1  0

But this time, both spi 2 and spi 1 have first_slot = true. Now, when passing spi 2 to a dynptr helper, it will be considered initialized, as the helper does not check whether the second slot has first_slot == false. And spi 1 should already work as normal.

This effectively replaces the size + offset of the first dynptr, hence allowing invalid OOB reads and writes.

Make a few changes to protect against this:

When writing to PTR_TO_STACK using BPF insns, when we touch the spi of a STACK_DYNPTR type, mark both the first and second slot (regardless of which slot we touch) as STACK_INVALID. Reads are already prevented.

Second, prevent writing to stack memory from helpers if the range may contain any STACK_DYNPTR slots. Reads are already prevented.

For helpers, we cannot allow such writes to destroy dynptrs, because depending on its arguments a helper may take uninit_mem and a dynptr at the same time. This would mean that the helper may write to uninit_mem before it reads the dynptr, which would be bad.
  PTR_TO_MEM: [?????dd]

Depending on the code inside the helper, it may end up overwriting the dynptr contents first and then read those as the dynptr argument.

The verifier would only simulate destruction when it does byte-by-byte access simulation in check_helper_call for meta.access_size, and would fail to catch this case, as it happens after the argument checks. The same would need to be done for any other non-trivial objects created on the stack in the future, such as bpf_list_head on stack, or bpf_rb_root on stack.

A common misunderstanding in the current code is that MEM_UNINIT means writes, but note that writes may also be performed even without MEM_UNINIT in case of helpers; in that case, the code after the meta && meta->raw_mode handling will complain when it sees STACK_DYNPTR. So that invalid read case also covers writes to potential STACK_DYNPTR slots. The only loophole was the meta->raw_mode case, which simulated writes through instructions that could overwrite them.

A future series sequenced after this will focus on the cleanup of helper access checks and bugs around that.

Fixes: 97e03f521050 ("bpf: Add verifier support for dynptrs")
Signed-off-by: Kumar Kartikeya Dwivedi
---
 kernel/bpf/verifier.c | 88 +++++++++++++++++++
 .../testing/selftests/bpf/progs/dynptr_fail.c | 6 +-
 2 files changed, 91 insertions(+), 3 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 76afdbea425a..5c7f29ca94ec 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -769,6 +769,8 @@ static void mark_dynptr_cb_reg(struct bpf_reg_state *reg, __mark_dynptr_reg(reg, type, true); } +static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env, + struct bpf_func_state *state, int spi); static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg, enum bpf_arg_type arg_type, int insn_idx) @@ -863,6 +865,55 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re return 0; } +static void __mark_reg_unknown(const struct bpf_verifier_env *env, + struct bpf_reg_state *reg); + +static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env, + struct bpf_func_state *state, int spi) +{ + int i; + + /* We always ensure that STACK_DYNPTR is never set partially, + * hence just checking for slot_type[0] is enough. This is + * different for STACK_SPILL, where it may be only set for + * 1 byte, so code has to use is_spilled_reg. + */ + if (state->stack[spi].slot_type[0] != STACK_DYNPTR) + return 0; + + /* Reposition spi to first slot */ + if (!state->stack[spi].spilled_ptr.dynptr.first_slot) + spi = spi + 1; + + if (dynptr_type_refcounted(state->stack[spi].spilled_ptr.dynptr.type)) { + verbose(env, "cannot overwrite referenced dynptr\n"); + return -EINVAL; + } + + mark_stack_slot_scratched(env, spi); + mark_stack_slot_scratched(env, spi - 1); + + /* Writing partially to one dynptr stack slot destroys both. */ + for (i = 0; i < BPF_REG_SIZE; i++) { + state->stack[spi].slot_type[i] = STACK_INVALID; + state->stack[spi - 1].slot_type[i] = STACK_INVALID; + } + + /* TODO: Invalidate any slices associated with this dynptr */ + + /* Do not release reference state, we are destroying dynptr on stack, + * not using some helper to release it. Just reset register.
+ */ + __mark_reg_not_init(env, &state->stack[spi].spilled_ptr); + __mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr); + + /* Same reason as unmark_stack_slots_dynptr above */ + state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN; + state->stack[spi - 1].spilled_ptr.live |= REG_LIVE_WRITTEN; + + return 0; +} + static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg) { struct bpf_func_state *state = func(env, reg); @@ -3391,6 +3442,10 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env, env->insn_aux_data[insn_idx].sanitize_stack_spill = true; } + err = destroy_if_dynptr_stack_slot(env, state, spi); + if (err) + return err; + mark_stack_slot_scratched(env, spi); if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && !register_is_null(reg) && env->bpf_capable) { @@ -3504,6 +3559,14 @@ static int check_stack_write_var_off(struct bpf_verifier_env *env, if (err) return err; + for (i = min_off; i < max_off; i++) { + int spi; + + spi = __get_spi(i); + err = destroy_if_dynptr_stack_slot(env, state, spi); + if (err) + return err; + } /* Variable offset writes destroy any spilled pointers in range. */ for (i = min_off; i < max_off; i++) { @@ -5531,6 +5594,31 @@ static int check_stack_range_initialized( } if (meta && meta->raw_mode) { + /* Ensure we won't be overwriting dynptrs when simulating byte + * by byte access in check_helper_call using meta.access_size. + * This would be a problem if we have a helper in the future + * which takes: + * + * helper(uninit_mem, len, dynptr) + * + * Now, uninint_mem may overlap with dynptr pointer. Hence, it + * may end up writing to dynptr itself when touching memory from + * arg 1. This can be relaxed on a case by case basis for known + * safe cases, but reject due to the possibilitiy of aliasing by + * default. 
+ */ + for (i = min_off; i < max_off + access_size; i++) { + int stack_off = -i - 1; + + spi = __get_spi(i); + /* raw_mode may write past allocated_stack */ + if (state->allocated_stack <= stack_off) + continue; + if (state->stack[spi].slot_type[stack_off % BPF_REG_SIZE] == STACK_DYNPTR) { + verbose(env, "potential write to dynptr at off=%d disallowed\n", i); + return -EACCES; + } + } meta->access_size = access_size; meta->regno = regno; return 0; diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c index 02d57b95cf6e..9dc3f23a8270 100644 --- a/tools/testing/selftests/bpf/progs/dynptr_fail.c +++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c @@ -420,7 +420,7 @@ int invalid_write1(void *ctx) * offset */ SEC("?raw_tp") -__failure __msg("Expected an initialized dynptr as arg #3") +__failure __msg("cannot overwrite referenced dynptr") int invalid_write2(void *ctx) { struct bpf_dynptr ptr; @@ -444,7 +444,7 @@ int invalid_write2(void *ctx) * non-const offset */ SEC("?raw_tp") -__failure __msg("Expected an initialized dynptr as arg #1") +__failure __msg("cannot overwrite referenced dynptr") int invalid_write3(void *ctx) { struct bpf_dynptr ptr; @@ -476,7 +476,7 @@ static int invalid_write4_callback(__u32 index, void *data) * be invalidated as a dynptr */ SEC("?raw_tp") -__failure __msg("arg 1 is an unacquired reference") +__failure __msg("cannot overwrite referenced dynptr") int invalid_write4(void *ctx) { struct bpf_dynptr ptr;

From patchwork Sat Jan 21 00:22:33 2023
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13110786
X-Patchwork-Delegate: bpf@iogearbox.net
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Joanne Koong, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau
Subject: [PATCH bpf-next v5 04/12] bpf: Invalidate slices on destruction of dynptrs on stack
Date: Sat, 21 Jan 2023 05:52:33 +0530
Message-Id: <20230121002241.2113993-5-memxor@gmail.com>
In-Reply-To: <20230121002241.2113993-1-memxor@gmail.com>
References: <20230121002241.2113993-1-memxor@gmail.com>
X-Mailing-List: bpf@vger.kernel.org

The previous commit implemented destroy_if_dynptr_stack_slot. It destroys the dynptr which the given spi belongs to, but still doesn't invalidate the slices that belong to such a dynptr. While we don't allow overwriting referenced dynptrs and return an error early for them, we do allow it for unreferenced dynptrs and destroy them.

To be able to enable precise and scoped invalidation of dynptr slices in this case, we must be able to associate slices that have been obtained using bpf_dynptr_data with their source dynptr. When doing destruction, only slices belonging to the dynptr being destructed should be invalidated, and nothing else.
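As an illustration of the intended behaviour, a rough sketch (hypothetical program, not part of this patch or the selftests; buffer and function names are made up): overwriting one dynptr's stack slots should invalidate only the slices derived from that dynptr, while slices of an unrelated dynptr stay usable:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  char buf1[16], buf2[16];

  SEC("?raw_tp")
  int dynptr_scoped_invalidation(void *ctx)
  {
      struct bpf_dynptr ptr1, ptr2;
      char *d1, *d2;

      bpf_dynptr_from_mem(buf1, sizeof(buf1), 0, &ptr1);
      bpf_dynptr_from_mem(buf2, sizeof(buf2), 0, &ptr2);

      d1 = bpf_dynptr_data(&ptr1, 0, 8);
      d2 = bpf_dynptr_data(&ptr2, 0, 8);
      if (!d1 || !d2)
          return 0;

      /* Scribbling over ptr1's stack slots destroys ptr1 (it is
       * unreferenced); after this, any use of d1 must be rejected ...
       */
      __builtin_memset(&ptr1, 0, sizeof(ptr1));

      /* ... but d2 belongs to ptr2, so with per-dynptr ids only d1 is
       * invalidated and this write stays legal.
       */
      *d2 = 0;
      return 0;
  }

  char _license[] SEC("license") = "GPL";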
Currently, dynptr slices belonging to different dynptrs are indistinguishible. Hence, allocate a unique id to each dynptr (CONST_PTR_TO_DYNPTR and those on stack). This will be stored as part of reg->id. Whenever using bpf_dynptr_data, transfer this unique dynptr id to the returned PTR_TO_MEM_OR_NULL slice pointer, and store it in a new per-PTR_TO_MEM dynptr_id register state member. Finally, after establishing such a relationship between dynptrs and their slices, implement precise invalidation logic that only invalidates slices belong to the destroyed dynptr in destroy_if_dynptr_stack_slot. Acked-by: Joanne Koong Signed-off-by: Kumar Kartikeya Dwivedi --- include/linux/bpf_verifier.h | 5 +- kernel/bpf/verifier.c | 74 ++++++++++++++++--- .../testing/selftests/bpf/progs/dynptr_fail.c | 4 +- 3 files changed, 68 insertions(+), 15 deletions(-) diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index 127058cfec47..aa83de1fe755 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -70,7 +70,10 @@ struct bpf_reg_state { u32 btf_id; }; - u32 mem_size; /* for PTR_TO_MEM | PTR_TO_MEM_OR_NULL */ + struct { /* for PTR_TO_MEM | PTR_TO_MEM_OR_NULL */ + u32 mem_size; + u32 dynptr_id; /* for dynptr slices */ + }; /* For dynptr stack slots */ struct { diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 5c7f29ca94ec..01cb802776fd 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -255,6 +255,7 @@ struct bpf_call_arg_meta { int mem_size; u64 msize_max_value; int ref_obj_id; + int dynptr_id; int map_uid; int func_id; struct btf *btf; @@ -750,23 +751,27 @@ static bool dynptr_type_refcounted(enum bpf_dynptr_type type) static void __mark_dynptr_reg(struct bpf_reg_state *reg, enum bpf_dynptr_type type, - bool first_slot); + bool first_slot, int dynptr_id); static void __mark_reg_not_init(const struct bpf_verifier_env *env, struct bpf_reg_state *reg); -static void mark_dynptr_stack_regs(struct bpf_reg_state *sreg1, +static void mark_dynptr_stack_regs(struct bpf_verifier_env *env, + struct bpf_reg_state *sreg1, struct bpf_reg_state *sreg2, enum bpf_dynptr_type type) { - __mark_dynptr_reg(sreg1, type, true); - __mark_dynptr_reg(sreg2, type, false); + int id = ++env->id_gen; + + __mark_dynptr_reg(sreg1, type, true, id); + __mark_dynptr_reg(sreg2, type, false, id); } -static void mark_dynptr_cb_reg(struct bpf_reg_state *reg, +static void mark_dynptr_cb_reg(struct bpf_verifier_env *env, + struct bpf_reg_state *reg, enum bpf_dynptr_type type) { - __mark_dynptr_reg(reg, type, true); + __mark_dynptr_reg(reg, type, true, ++env->id_gen); } static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env, @@ -795,7 +800,7 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_ if (type == BPF_DYNPTR_TYPE_INVALID) return -EINVAL; - mark_dynptr_stack_regs(&state->stack[spi].spilled_ptr, + mark_dynptr_stack_regs(env, &state->stack[spi].spilled_ptr, &state->stack[spi - 1].spilled_ptr, type); if (dynptr_type_refcounted(type)) { @@ -871,7 +876,9 @@ static void __mark_reg_unknown(const struct bpf_verifier_env *env, static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env, struct bpf_func_state *state, int spi) { - int i; + struct bpf_func_state *fstate; + struct bpf_reg_state *dreg; + int i, dynptr_id; /* We always ensure that STACK_DYNPTR is never set partially, * hence just checking for slot_type[0] is enough. 
This is @@ -899,7 +906,19 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env, state->stack[spi - 1].slot_type[i] = STACK_INVALID; } - /* TODO: Invalidate any slices associated with this dynptr */ + dynptr_id = state->stack[spi].spilled_ptr.id; + /* Invalidate any slices associated with this dynptr */ + bpf_for_each_reg_in_vstate(env->cur_state, fstate, dreg, ({ + /* Dynptr slices are only PTR_TO_MEM_OR_NULL and PTR_TO_MEM */ + if (dreg->type != (PTR_TO_MEM | PTR_MAYBE_NULL) && dreg->type != PTR_TO_MEM) + continue; + if (dreg->dynptr_id == dynptr_id) { + if (!env->allow_ptr_leaks) + __mark_reg_not_init(env, dreg); + else + __mark_reg_unknown(env, dreg); + } + })); /* Do not release reference state, we are destroying dynptr on stack, * not using some helper to release it. Just reset register. @@ -1562,7 +1581,7 @@ static void mark_reg_known_zero(struct bpf_verifier_env *env, } static void __mark_dynptr_reg(struct bpf_reg_state *reg, enum bpf_dynptr_type type, - bool first_slot) + bool first_slot, int dynptr_id) { /* reg->type has no meaning for STACK_DYNPTR, but when we set reg for * callback arguments, it does need to be CONST_PTR_TO_DYNPTR, so simply @@ -1570,6 +1589,8 @@ static void __mark_dynptr_reg(struct bpf_reg_state *reg, enum bpf_dynptr_type ty */ __mark_reg_known_zero(reg); reg->type = CONST_PTR_TO_DYNPTR; + /* Give each dynptr a unique id to uniquely associate slices to it. */ + reg->id = dynptr_id; reg->dynptr.type = type; reg->dynptr.first_slot = first_slot; } @@ -6532,6 +6553,19 @@ int check_func_arg_reg_off(struct bpf_verifier_env *env, } } +static int dynptr_id(struct bpf_verifier_env *env, struct bpf_reg_state *reg) +{ + struct bpf_func_state *state = func(env, reg); + int spi; + + if (reg->type == CONST_PTR_TO_DYNPTR) + return reg->id; + spi = dynptr_get_spi(env, reg); + if (spi < 0) + return spi; + return state->stack[spi].spilled_ptr.id; +} + static int dynptr_ref_obj_id(struct bpf_verifier_env *env, struct bpf_reg_state *reg) { struct bpf_func_state *state = func(env, reg); @@ -7601,7 +7635,7 @@ static int set_user_ringbuf_callback_state(struct bpf_verifier_env *env, * callback_fn(const struct bpf_dynptr_t* dynptr, void *callback_ctx); */ __mark_reg_not_init(env, &callee->regs[BPF_REG_0]); - mark_dynptr_cb_reg(&callee->regs[BPF_REG_1], BPF_DYNPTR_TYPE_LOCAL); + mark_dynptr_cb_reg(env, &callee->regs[BPF_REG_1], BPF_DYNPTR_TYPE_LOCAL); callee->regs[BPF_REG_2] = caller->regs[BPF_REG_3]; /* unused */ @@ -8107,18 +8141,31 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++) { if (arg_type_is_dynptr(fn->arg_type[i])) { struct bpf_reg_state *reg = ®s[BPF_REG_1 + i]; - int ref_obj_id; + int id, ref_obj_id; + + if (meta.dynptr_id) { + verbose(env, "verifier internal error: meta.dynptr_id already set\n"); + return -EFAULT; + } if (meta.ref_obj_id) { verbose(env, "verifier internal error: meta.ref_obj_id already set\n"); return -EFAULT; } + id = dynptr_id(env, reg); + if (id < 0) { + verbose(env, "verifier internal error: failed to obtain dynptr id\n"); + return id; + } + ref_obj_id = dynptr_ref_obj_id(env, reg); if (ref_obj_id < 0) { verbose(env, "verifier internal error: failed to obtain dynptr ref_obj_id\n"); return ref_obj_id; } + + meta.dynptr_id = id; meta.ref_obj_id = ref_obj_id; break; } @@ -8275,6 +8322,9 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn return -EFAULT; } + if (is_dynptr_ref_function(func_id)) + regs[BPF_REG_0].dynptr_id = 
meta.dynptr_id; + if (is_ptr_cast_function(func_id) || is_dynptr_ref_function(func_id)) { /* For release_reference() */ regs[BPF_REG_0].ref_obj_id = meta.ref_obj_id; diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c index 9dc3f23a8270..e43000c63c66 100644 --- a/tools/testing/selftests/bpf/progs/dynptr_fail.c +++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c @@ -67,7 +67,7 @@ static int get_map_val_dynptr(struct bpf_dynptr *ptr) * bpf_ringbuf_submit/discard_dynptr call */ SEC("?raw_tp") -__failure __msg("Unreleased reference id=1") +__failure __msg("Unreleased reference id=2") int ringbuf_missing_release1(void *ctx) { struct bpf_dynptr ptr; @@ -80,7 +80,7 @@ int ringbuf_missing_release1(void *ctx) } SEC("?raw_tp") -__failure __msg("Unreleased reference id=2") +__failure __msg("Unreleased reference id=4") int ringbuf_missing_release2(void *ctx) { struct bpf_dynptr ptr1, ptr2;

From patchwork Sat Jan 21 00:22:34 2023
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13110773
X-Patchwork-Delegate: bpf@iogearbox.net
X-Google-Smtp-Source: AMrXdXuTavOUIAPkv7lRGlVIzLakOY8XDIPsuXfjkUwMrAc87hz9dqIxD91OOal9f2k0udHl1a/+6A== X-Received: by 2002:aa7:94ac:0:b0:58d:e33b:d562 with SMTP id a12-20020aa794ac000000b0058de33bd562mr12277938pfl.11.1674260581969; Fri, 20 Jan 2023 16:23:01 -0800 (PST) Received: from localhost ([2405:201:6014:dae3:7dbb:8857:7c39:bb2a]) by smtp.gmail.com with ESMTPSA id d8-20020aa797a8000000b0058837da69edsm22961524pfq.128.2023.01.20.16.23.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 20 Jan 2023 16:23:01 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Joanne Koong , Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau Subject: [PATCH bpf-next v5 05/12] bpf: Allow reinitializing unreferenced dynptr stack slots Date: Sat, 21 Jan 2023 05:52:34 +0530 Message-Id: <20230121002241.2113993-6-memxor@gmail.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230121002241.2113993-1-memxor@gmail.com> References: <20230121002241.2113993-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=3888; i=memxor@gmail.com; h=from:subject; bh=YkhcRHw6M8R1uNyUytIog/IKy6jL6P3mgH0kEKFhQB8=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjyzAkJqykp549TuD8qfi+l8J7mju9PTDOmDmMoZB7 eluhbseJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY8swJAAKCRBM4MiGSL8RyqU8EA CIk5IXCIee4Z4x6+h/8564qNjbYFlWYLUHY3MA5qYoRHOXS1SCJrZO8wUP9iTtSYUrf8ogaK9bRviy xqVsC6fJVDDp8/l7z0eh+ysPOd9LbiNNE9ZsPmj0fjdQLKPFPuuQo76MnDy2EfAWy1oaV1B32TVGzh vusCOhq7yAC/PH2aFAgEDN/7rLUsYu3X2X0SQLizPODx01OO37vHhdseDoceBbx/jZChab2ZI3Bzpj xqbo6F8pAe34Yxl18zjVHEHrRfO6De2tlKCWr+pqii0LFRzXztp3/SaUm0HrRGJPi+9IfyAagQizsY uLUzcijmLkfjRdosbE+woC9deBVPGfHRjbyRfM8T4cMaVVDDvGJNkf/qHzIjgiu0JTL8+3i3Ds3+Rz rKmOgvFvDAm8kB4tMxSvAeNwVScJ8wqrmIruHUUk3DuTGWzXzoaBDgcmRXFXvs9JZ/H0OYyNAU3wMg csYmxSK56pssDNYSL11B+dF4Oyk2LtjGJOFgj5eD7NVu1c5oFicIC+nILbstvy1OiRkt5BxYvR7ggi l/HfeoOkrd9vmgsN8B/7zgoYbRkQ5Nk8gbDbpn4k9FV/GYkD1VqxAuCouLu8nNDh4aCWCFSB9t/xIs mSMIT93ePYP+mi0tOxVTEMGG8JzpGgmz0ZyB9eSelB7mTzODiUIBrv1FiWIg== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Consider a program like below: void prog(void) { { struct bpf_dynptr ptr; bpf_dynptr_from_mem(...); } ... { struct bpf_dynptr ptr; bpf_dynptr_from_mem(...); } } Here, the C compiler based on lifetime rules in the C standard would be well within in its rights to share stack storage for dynptr 'ptr' as their lifetimes do not overlap in the two distinct scopes. Currently, such an example would be rejected by the verifier, but this is too strict. Instead, we should allow reinitializing over dynptr stack slots and forget information about the old dynptr object. The destroy_if_dynptr_stack_slot function already makes necessary checks to avoid overwriting referenced dynptr slots. This is done to present a better error message instead of forgetting dynptr information on stack and preserving reference state, leading to an inevitable but undecipherable error at the end about an unreleased reference which has to be associated back to its allocating call instruction to make any sense to the user. 
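For reference, a minimal sketch of the pattern this change is meant to accept, written against the selftest conventions (not part of the patch; the map name, includes and program name are illustrative, and it mirrors what get_map_val_dynptr() in dynptr_fail.c does):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_ARRAY);
          __uint(max_entries, 1);
          __type(key, __u32);
          __type(value, __u64);
  } scratch_map SEC(".maps");

  SEC("?raw_tp")
  int dynptr_scoped_reuse(void *ctx)
  {
          __u32 key = 0;
          __u64 *val = bpf_map_lookup_elem(&scratch_map, &key);

          if (!val)
                  return 0;
          {
                  struct bpf_dynptr ptr;

                  /* First dynptr: marks its two stack slots STACK_DYNPTR. */
                  bpf_dynptr_from_mem(val, sizeof(*val), 0, &ptr);
          }
          {
                  struct bpf_dynptr ptr;

                  /* The compiler may reuse the same stack slots here; the
                   * verifier now destructs the old unreferenced dynptr and
                   * accepts the reinitialization instead of rejecting it.
                   */
                  bpf_dynptr_from_mem(val, sizeof(*val), 0, &ptr);
          }
          return 0;
  }

  char _license[] SEC("license") = "GPL";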
Acked-by: Joanne Koong Signed-off-by: Kumar Kartikeya Dwivedi --- kernel/bpf/verifier.c | 34 ++++++++++++++++++++++++++-------- 1 file changed, 26 insertions(+), 8 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 01cb802776fd..e5745b696bfe 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -782,7 +782,7 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_ { struct bpf_func_state *state = func(env, reg); enum bpf_dynptr_type type; - int spi, i, id; + int spi, i, id, err; spi = dynptr_get_spi(env, reg); if (spi < 0) @@ -791,6 +791,22 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_ if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS)) return -EINVAL; + /* We cannot assume both spi and spi - 1 belong to the same dynptr, + * hence we need to call destroy_if_dynptr_stack_slot twice for both, + * to ensure that for the following example: + * [d1][d1][d2][d2] + * spi 3 2 1 0 + * So marking spi = 2 should lead to destruction of both d1 and d2. In + * case they do belong to same dynptr, second call won't see slot_type + * as STACK_DYNPTR and will simply skip destruction. + */ + err = destroy_if_dynptr_stack_slot(env, state, spi); + if (err) + return err; + err = destroy_if_dynptr_stack_slot(env, state, spi - 1); + if (err) + return err; + for (i = 0; i < BPF_REG_SIZE; i++) { state->stack[spi].slot_type[i] = STACK_DYNPTR; state->stack[spi - 1].slot_type[i] = STACK_DYNPTR; @@ -936,7 +952,7 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env, static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg) { struct bpf_func_state *state = func(env, reg); - int spi, i; + int spi; if (reg->type == CONST_PTR_TO_DYNPTR) return false; @@ -949,12 +965,14 @@ static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_ if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS)) return true; - for (i = 0; i < BPF_REG_SIZE; i++) { - if (state->stack[spi].slot_type[i] == STACK_DYNPTR || - state->stack[spi - 1].slot_type[i] == STACK_DYNPTR) - return false; - } - + /* We allow overwriting existing unreferenced STACK_DYNPTR slots, see + * mark_stack_slots_dynptr which calls destroy_if_dynptr_stack_slot to + * ensure dynptr objects at the slots we are touching are completely + * destructed before we reinitialize them for a new one. For referenced + * ones, destroy_if_dynptr_stack_slot returns an error early instead of + * delaying it until the end where the user will get "Unreleased + * reference" error. 
+ */ return true; } From patchwork Sat Jan 21 00:22:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13110774 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CDFFFC27C76 for ; Sat, 21 Jan 2023 00:25:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229702AbjAUAZU (ORCPT ); Fri, 20 Jan 2023 19:25:20 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35596 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230107AbjAUAYy (ORCPT ); Fri, 20 Jan 2023 19:24:54 -0500 Received: from mail-pf1-x441.google.com (mail-pf1-x441.google.com [IPv6:2607:f8b0:4864:20::441]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 915979373B for ; Fri, 20 Jan 2023 16:24:16 -0800 (PST) Received: by mail-pf1-x441.google.com with SMTP id w2so5148574pfc.11 for ; Fri, 20 Jan 2023 16:24:16 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=w6atmUnu6yUDSAXWR/NCWLDY2ibxR/H8Klr3dPFutJk=; b=mppG7qRHwXcbgEO9HZYzPz3TqiEeu/eyo+nHrzzExuiNX77a4s6QkcjaXOFxXlYHUB tI7MMJRVNc1Ymg33kdbjPdRboAAfuIzIWjmg1arbBRRQ706iSdcbO6SNoCTyngWgYxd2 J9OzZj2lvbPOnENMBJkdmHfsN7MAhXtLrglyN5SRRnpyDSU3+YkKgXQ6VLS/s5PlaGRx +t31qVpyLBbmzfnDPVn7a6o7Xq6L1CjPHEkVeCpvzjLO8G6B44H7lnUh3c4LZtD6JzNk Bs3fSl2pNpTm/5tT5caZAe3HgeKubJCc703CiTQAHYm+D45e0Qzq7n43X7d/Xe/vtUFf LcLw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=w6atmUnu6yUDSAXWR/NCWLDY2ibxR/H8Klr3dPFutJk=; b=e1kbe7dcJto7J5NHwa0FVyQ9rCG2rapIIwcen+No1R36pvl9Tjx0PqcC0CVm2Y+Yig o3WWkVBV3LwsikPQ4Q/BxauSCDU6qPdZyE57FsAYDg5HTmPes6peQlqXL2XYt+KUVXwq mBah4LmKEaP+IJrDupbC4pgeWkn8VobQOhPfLPuDb9NaRwPXlpvYmoq0YjEaV0nNnG1C VDqR5N2smzH1C4XwhC+0gdTArCeY2MnvoJ3exPDfvbHjugkwpLo1d8aYAVeSZrc8Wllz PfDjqrID33w2/nwyvqeFKSzpiYHoIMNAvaFAeRYBcLoQc9BWrG6NQ5M+QKWBQwOi7U4s khtg== X-Gm-Message-State: AFqh2koIxSaPEsgwLmFGsWt4mmmeVP2aIY+DNUYHmwYxJEbRezjHTV2h mVJGCi8hAfYkBbSH2a40q+Ki29TmzDg= X-Google-Smtp-Source: AMrXdXtjywqiPAAgKck58Utr7SACMU/tB29Su4eDlqPKuIz2yE2RHiWaehsG8XVHTMFX8zkCO0gCRA== X-Received: by 2002:aa7:9467:0:b0:58b:aaaa:82a9 with SMTP id t7-20020aa79467000000b0058baaaa82a9mr16714152pfq.25.1674260586482; Fri, 20 Jan 2023 16:23:06 -0800 (PST) Received: from localhost ([2405:201:6014:dae3:7dbb:8857:7c39:bb2a]) by smtp.gmail.com with ESMTPSA id 63-20020a621942000000b0056d98e359a5sm26522078pfz.165.2023.01.20.16.23.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 20 Jan 2023 16:23:06 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Joanne Koong , Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau Subject: [PATCH bpf-next v5 06/12] bpf: Combine dynptr_get_spi and is_spi_bounds_valid Date: Sat, 21 Jan 2023 05:52:35 +0530 Message-Id: <20230121002241.2113993-7-memxor@gmail.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: 
<20230121002241.2113993-1-memxor@gmail.com> References: <20230121002241.2113993-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=6180; i=memxor@gmail.com; h=from:subject; bh=Io8R4nMkdTkL0qipRPdCVbaGYoI8/FQxzBNP3ACEZXc=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjyzAkM4sVtlsxGoBslFWC6lc7p/kxompbmt4+GNFv n46VPXCJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY8swJAAKCRBM4MiGSL8RykgDD/ wNGibpAHbQQqTl3PU87VMDnPDf2AyPgF+8cFYvp4KjB2NAizyGFpTGoGWNeFdthv6Az+F4k4Z+Yi0u t9bFdPDwtq6c+d9O/SLM4Y/DyhKYDu06YTAu0TkvZ5MWn//Fz0SEX8C0dKq/NRz1fJRhGA/vlSKCRo ViLOvqjZ53BQEVVX1BY4bpRRQYbC/KhisbYbf4Qb4dK5+SjhEXsknV9PF93yTuJ7tvnP7TD4G9grfI 9Y9VK6pvqdR6d0N0Mn9tdr0G3mITRKv+qXweFzP+kQfh1N+0AlNbMKzDk14IlWVKo3K9IWylcEJ1g7 fxGgvj42h7XDsHJVY8T7pYu1kMlPG4dDx5Ra2YD8yoWEwKFyk9+d/5p5nsVwziRD9b6ANT6OjJfUCV rUDMLTeHAuL7+fhHErDUZ4ZZ/y9j8MxT76S7YKZeH2o2+sfgif68hJfrRRr/nvC8mFYs5mvQVCGkK3 wj9ev4ksz6TZmYJW4Cj7SfXgpzGW+vmqOPU3dA5B7CadCK/RTxFyr2FkJiDOaqTBA15+CmIS/Aj8Px mV7gFtK5U4+gsIvruUbVhcu3lgP3c8cKJi1tNcXuw/RHGhP7x98nWjrJ2FdQZ7TNBkb1bETSKBIeBJ YPbjBNwKUaYIi1jt7Xo/2PiFTvg4I/f1ig2M1o38eg6pdCCdgqHVt+C3vtBQ== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Currently, a check on spi resides in dynptr_get_spi, while others checking its validity for being within the allocated stack slots happens in is_spi_bounds_valid. Almost always barring a couple of cases (where being beyond allocated stack slots is not an error as stack slots need to be populated), both are used together to make checks. Hence, subsume the is_spi_bounds_valid check in dynptr_get_spi, and return -ERANGE to specially distinguish the case where spi is valid but not within allocated slots in the stack state. The is_spi_bounds_valid function is still kept around as it is a generic helper that will be useful for other objects on stack similar to dynptr in the future. Suggested-by: Joanne Koong Acked-by: Joanne Koong Signed-off-by: Kumar Kartikeya Dwivedi --- kernel/bpf/verifier.c | 75 +++++++++++++++++++------------------------ 1 file changed, 33 insertions(+), 42 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index e5745b696bfe..29cbb3ef35e2 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -644,6 +644,28 @@ static int __get_spi(s32 off) return (-off - 1) / BPF_REG_SIZE; } +static struct bpf_func_state *func(struct bpf_verifier_env *env, + const struct bpf_reg_state *reg) +{ + struct bpf_verifier_state *cur = env->cur_state; + + return cur->frame[reg->frameno]; +} + +static bool is_spi_bounds_valid(struct bpf_func_state *state, int spi, int nr_slots) +{ + int allocated_slots = state->allocated_stack / BPF_REG_SIZE; + + /* We need to check that slots between [spi - nr_slots + 1, spi] are + * within [0, allocated_stack). + * + * Please note that the spi grows downwards. For example, a dynptr + * takes the size of two stack slots; the first slot will be at + * spi and the second slot will be at spi - 1. 
+ */ + return spi - nr_slots + 1 >= 0 && spi < allocated_slots; +} + static int dynptr_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *reg) { int off, spi; @@ -664,29 +686,10 @@ static int dynptr_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *re verbose(env, "cannot pass in dynptr at an offset=%d\n", off); return -EINVAL; } - return spi; -} - -static bool is_spi_bounds_valid(struct bpf_func_state *state, int spi, int nr_slots) -{ - int allocated_slots = state->allocated_stack / BPF_REG_SIZE; - /* We need to check that slots between [spi - nr_slots + 1, spi] are - * within [0, allocated_stack). - * - * Please note that the spi grows downwards. For example, a dynptr - * takes the size of two stack slots; the first slot will be at - * spi and the second slot will be at spi - 1. - */ - return spi - nr_slots + 1 >= 0 && spi < allocated_slots; -} - -static struct bpf_func_state *func(struct bpf_verifier_env *env, - const struct bpf_reg_state *reg) -{ - struct bpf_verifier_state *cur = env->cur_state; - - return cur->frame[reg->frameno]; + if (!is_spi_bounds_valid(func(env, reg), spi, BPF_DYNPTR_NR_SLOTS)) + return -ERANGE; + return spi; } static const char *kernel_type_name(const struct btf* btf, u32 id) @@ -788,9 +791,6 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_ if (spi < 0) return spi; - if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS)) - return -EINVAL; - /* We cannot assume both spi and spi - 1 belong to the same dynptr, * hence we need to call destroy_if_dynptr_stack_slot twice for both, * to ensure that for the following example: @@ -844,9 +844,6 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re if (spi < 0) return spi; - if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS)) - return -EINVAL; - for (i = 0; i < BPF_REG_SIZE; i++) { state->stack[spi].slot_type[i] = STACK_INVALID; state->stack[spi - 1].slot_type[i] = STACK_INVALID; @@ -951,20 +948,18 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env, static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg) { - struct bpf_func_state *state = func(env, reg); int spi; if (reg->type == CONST_PTR_TO_DYNPTR) return false; spi = dynptr_get_spi(env, reg); + /* For -ERANGE (i.e. spi not falling into allocated stack slots), we + * will do check_mem_access to check and update stack bounds later, so + * return true for that case. 
+ */ if (spi < 0) - return false; - - /* We will do check_mem_access to check and update stack bounds later */ - if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS)) - return true; - + return spi == -ERANGE; /* We allow overwriting existing unreferenced STACK_DYNPTR slots, see * mark_stack_slots_dynptr which calls destroy_if_dynptr_stack_slot to * ensure dynptr objects at the slots we are touching are completely @@ -988,8 +983,7 @@ static bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, struct bpf_re spi = dynptr_get_spi(env, reg); if (spi < 0) return false; - if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS) || - !state->stack[spi].spilled_ptr.dynptr.first_slot) + if (!state->stack[spi].spilled_ptr.dynptr.first_slot) return false; for (i = 0; i < BPF_REG_SIZE; i++) { @@ -6160,7 +6154,7 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno, if (reg->type == PTR_TO_STACK) { int err = dynptr_get_spi(env, reg); - if (err < 0) + if (err < 0 && err != -ERANGE) return err; } @@ -6668,10 +6662,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg, */ if (reg->type == PTR_TO_STACK) { spi = dynptr_get_spi(env, reg); - if (spi < 0) - return spi; - if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS) || - !state->stack[spi].spilled_ptr.ref_obj_id) { + if (spi < 0 || !state->stack[spi].spilled_ptr.ref_obj_id) { verbose(env, "arg %d is an unacquired reference\n", regno); return -EINVAL; } From patchwork Sat Jan 21 00:22:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13110776 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0600EC25B50 for ; Sat, 21 Jan 2023 00:25:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229511AbjAUAZW (ORCPT ); Fri, 20 Jan 2023 19:25:22 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35704 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230296AbjAUAY6 (ORCPT ); Fri, 20 Jan 2023 19:24:58 -0500 Received: from mail-pl1-x642.google.com (mail-pl1-x642.google.com [IPv6:2607:f8b0:4864:20::642]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 34CC0BC77D for ; Fri, 20 Jan 2023 16:24:21 -0800 (PST) Received: by mail-pl1-x642.google.com with SMTP id 5so1409721plo.3 for ; Fri, 20 Jan 2023 16:24:21 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Tmo0le0cUJOBzWSdrlYEJJL3gLctaWM5TmJR0sXVqHw=; b=BrF7Cz1NYMkWC33zj7SG7Hl6MRL96n9ijAW2w0pH85p1aXh0ObQoAg64nzORaHI4Pg toyFQPXRScj1GLglHxDbQDrtcAy+Gq7CWXNllBqdZuVbaQLuju/9uHygeepYve95s8qq IDCC++jdAmfkTaJ/U3ejAc5PCNu5DLXR02Lb28fHlMK2FJHa2HdoFLDkNNGmIF2WIXQR Q8hv4Rv8WR/7HYEOA1uelvLeEWbM9mwSbOTF5qk42RK4bMihirLUqo2iKyUlzQabFsot w980YBfdZbdlo9Ph90gSnWr1xDU8qA3BpLDDibmCySoJKsdG5GTzX2NcgHa+CftecW/s +CFA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; 
bh=Tmo0le0cUJOBzWSdrlYEJJL3gLctaWM5TmJR0sXVqHw=; b=evm1pRLcjijbhTt61P44gFcFIlXuu/VcCnwsGCUW8d0/Z+cDwGmK03AO3gRSi823ev qlcIr8jTGttZOWPjaTgZXTym1Zr4GHHO+44dpnAb4zx0+ep8z8l9uwijgihS3/WQwlz9 cow9uDOmbOQNzF3hC4esRibCbp0aJkJFO017mdMXouypeCsxwUzi8dPSG1R3GjvglzbI D43spjYhe7tEF50yvBuGTKWA5Goy/XQvi0TYh67F3HjOA8R3EcQhw0GmHXWJZhGKuF/L vSCo/VOLxQC0GMaM845k92x20EOGrhJ/t6FFIrJa3RF4oqNBy/5Ku/0hvJgDd+D3HeP7 QxEA== X-Gm-Message-State: AFqh2kp8DS5UHsPUUGM4s45PFR9uNloekqWGlvU9s/Lpd4VZjy4afQfk kSmHNW57Y/q7Dbs5Q1+z3ZLO1fjhmAA= X-Google-Smtp-Source: AMrXdXt5FiOt4T4GmUIzpPMavTBt8O4nNxyZI63mwbVu1U7QmS5eVqOZAMz5Z1aimWI4MiqG2JaFwQ== X-Received: by 2002:a17:90a:ea86:b0:229:9369:e13 with SMTP id h6-20020a17090aea8600b0022993690e13mr17211032pjz.36.1674260590591; Fri, 20 Jan 2023 16:23:10 -0800 (PST) Received: from localhost ([2405:201:6014:dae3:7dbb:8857:7c39:bb2a]) by smtp.gmail.com with ESMTPSA id m13-20020a17090a3f8d00b00223ed94759csm2027823pjc.39.2023.01.20.16.23.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 20 Jan 2023 16:23:10 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Joanne Koong , Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau Subject: [PATCH bpf-next v5 07/12] bpf: Avoid recomputing spi in process_dynptr_func Date: Sat, 21 Jan 2023 05:52:36 +0530 Message-Id: <20230121002241.2113993-8-memxor@gmail.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230121002241.2113993-1-memxor@gmail.com> References: <20230121002241.2113993-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=3698; i=memxor@gmail.com; h=from:subject; bh=n4U1dXvfeNH1zRNe/co4rVPmlxHdSUzypJJY0IRJzjw=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjyzAk1RvlX6RfReOLAFbMWC9wK/gvWsxIfRy4IvoV dOq3G5OJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY8swJAAKCRBM4MiGSL8RyqKGEA CqWcomND/5jfg8ip0Mj/py7B5U47HQFLbmlgo2okv4V8mWQfqvQhOunSfJdlK3o//AtqzVMs054eUF 8Z8x+kkR+R5cmBT2Cq8d0hqwnYrPWrQ2+ZrsUJlCnpW/2/q4DuIJ6KYCtN3qXWVqrkkXh9JsES63H/ uLFtFQlLyRohmUE7rg5l1JKSLf6tpwbzSvXAnZfE+lN50JP1TWCUExsClCfR/gRoBQMQnosjNAJjBV 8X/Jnd7Ue916+xF8ivBzOJmMfzZVfXZaLeEp8t/HXN4TGcA4I1tOmtHjH1q5F8j4w9nQAk3TytBjL6 +VCRQjPpVUJlrOnkTu+RL9MB28Bvs1+TIQ3vAxJtJ8jjR5IObvik2Htv7Mc/ANLvKVRy2HZq4dvgW5 /99+plj+U2VcwgqERMa4+DuklNqh9ji/cANczr8BujJfRW+d1cmVlmIHZlCjFVwugo6Y1Vhh1+CQap OWaFw1WOezGhPrxW84za6r4sQCe+QvN/GpuiOxOgC1F0VlA34YlhQoDac6D4YH1jgBjvDcf8MbX+ZG ZSXDTBpMwqaSsmgeDqC88n3Q26KaonE8kz2c/D01bkCpjy3N3AAy4613iVxwLGiuEfFwlKYUCCJC9y 6EqiTEVEYeUPzgT/2Rn2sFcHM7kpiQ/CoMHOlGMpgvXgqFwSszj195t6K2NA== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Currently, process_dynptr_func first calls dynptr_get_spi and then is_dynptr_reg_valid_init and is_dynptr_reg_valid_uninit have to call it again to obtain the spi value. Instead of doing this twice, reuse the already obtained value (which is by default 0, and is only set for PTR_TO_STACK, and only used in that case in aforementioned functions). The input value for these two functions will either be -ERANGE or >= 1, and can either be permitted or rejected based on the respective check. 
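To make the ">= 1 or -ERANGE" claim concrete, here is a small standalone sketch of the spi arithmetic (plain user space C, not kernel code; it only re-derives __get_spi() and is_spi_bounds_valid() from the previous patch with BPF_REG_SIZE == 8, BPF_DYNPTR_NR_SLOTS == 2 and an example frame of four allocated slots, and leaves out the offset alignment checks dynptr_get_spi also performs):

  #include <stdio.h>

  #define BPF_REG_SIZE            8
  #define BPF_DYNPTR_NR_SLOTS     2

  static int get_spi(int off)
  {
          return (-off - 1) / BPF_REG_SIZE;
  }

  static int spi_bounds_valid(int spi, int allocated_slots)
  {
          return spi - BPF_DYNPTR_NR_SLOTS + 1 >= 0 && spi < allocated_slots;
  }

  int main(void)
  {
          /* A dynptr at fp-16 occupies slots spi = 1 and spi - 1 = 0, so the
           * smallest in-bounds spi for a two-slot object is 1.
           */
          printf("fp-16: spi=%d in_bounds=%d\n", get_spi(-16),
                 spi_bounds_valid(get_spi(-16), 4));
          /* fp-8 would place the second slot below the frame: out of bounds,
           * i.e. the -ERANGE case after the previous patch.
           */
          printf("fp-8:  spi=%d in_bounds=%d\n", get_spi(-8),
                 spi_bounds_valid(get_spi(-8), 4));
          return 0;
  }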
Suggested-by: Joanne Koong Acked-by: Joanne Koong Signed-off-by: Kumar Kartikeya Dwivedi --- kernel/bpf/verifier.c | 24 +++++++++++------------- 1 file changed, 11 insertions(+), 13 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 29cbb3ef35e2..ecf7fed7881c 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -946,14 +946,12 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env, return 0; } -static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg) +static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg, + int spi) { - int spi; - if (reg->type == CONST_PTR_TO_DYNPTR) return false; - spi = dynptr_get_spi(env, reg); /* For -ERANGE (i.e. spi not falling into allocated stack slots), we * will do check_mem_access to check and update stack bounds later, so * return true for that case. @@ -971,16 +969,16 @@ static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_ return true; } -static bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_state *reg) +static bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_state *reg, + int spi) { struct bpf_func_state *state = func(env, reg); - int spi, i; + int i; /* This already represents first slot of initialized bpf_dynptr */ if (reg->type == CONST_PTR_TO_DYNPTR) return true; - spi = dynptr_get_spi(env, reg); if (spi < 0) return false; if (!state->stack[spi].spilled_ptr.dynptr.first_slot) @@ -6139,6 +6137,7 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno, enum bpf_arg_type arg_type, struct bpf_call_arg_meta *meta) { struct bpf_reg_state *regs = cur_regs(env), *reg = ®s[regno]; + int spi = 0; /* MEM_UNINIT and MEM_RDONLY are exclusive, when applied to an * ARG_PTR_TO_DYNPTR (or ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_*): @@ -6152,10 +6151,9 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno, * and its alignment for PTR_TO_STACK. */ if (reg->type == PTR_TO_STACK) { - int err = dynptr_get_spi(env, reg); - - if (err < 0 && err != -ERANGE) - return err; + spi = dynptr_get_spi(env, reg); + if (spi < 0 && spi != -ERANGE) + return spi; } /* MEM_UNINIT - Points to memory that is an appropriate candidate for @@ -6174,7 +6172,7 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno, * to. 
*/ if (arg_type & MEM_UNINIT) { - if (!is_dynptr_reg_valid_uninit(env, reg)) { + if (!is_dynptr_reg_valid_uninit(env, reg, spi)) { verbose(env, "Dynptr has to be an uninitialized dynptr\n"); return -EINVAL; } @@ -6197,7 +6195,7 @@ int process_dynptr_func(struct bpf_verifier_env *env, int regno, return -EINVAL; } - if (!is_dynptr_reg_valid_init(env, reg)) { + if (!is_dynptr_reg_valid_init(env, reg, spi)) { verbose(env, "Expected an initialized dynptr as arg #%d\n", regno); From patchwork Sat Jan 21 00:22:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13110777 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B02CBC27C76 for ; Sat, 21 Jan 2023 00:25:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229761AbjAUAZY (ORCPT ); Fri, 20 Jan 2023 19:25:24 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35752 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230340AbjAUAZB (ORCPT ); Fri, 20 Jan 2023 19:25:01 -0500 Received: from mail-pf1-x443.google.com (mail-pf1-x443.google.com [IPv6:2607:f8b0:4864:20::443]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 425DAAA5CD for ; Fri, 20 Jan 2023 16:24:26 -0800 (PST) Received: by mail-pf1-x443.google.com with SMTP id i1so5186309pfk.3 for ; Fri, 20 Jan 2023 16:24:26 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Zs5n2Q1YRbL+FZCik99wIL5O/1FCJ2BZ9GlI4NG+zGU=; b=quecMsS0/sgum1iU7XO+FkUnDIUSvEJwNZHSr5Cqyg0C3XH+VZhXABzpCQUo9pc/j/ 9wj+kvgbg9zh+jd+AYfGNiaBf8pQ2btOoZxP/GXQjSEtWtNz38N1Ay4MqFPQrKUkJQoh PNJ03BHEzdNcLIZJ2H7qiI0qU+rk6bWB9JXzoL+kWs0PzPrttgvheZ5nTGVCLJYjOF/N Y6SOXE7NzvyE4qklh7Io9NJ8oebVfSgBdugnVG7grAHfwBrrevnF+9E+UMJGk64E4WPw tp6j/BAf2uWgGn3BuSOs4MU03Mvorxz1HwNRsGO9VjumRXyJObMnPj8esETlXMQ2X7CY TKAg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Zs5n2Q1YRbL+FZCik99wIL5O/1FCJ2BZ9GlI4NG+zGU=; b=66swDRDt4vz6vgyciyauRCwaJc5GCQFJUAnChT48SvU4llxAyL7LGDvgnZw5hAwe86 h8wuNG1tOhJ7YpElo/E3fXQzFaRMnb+IiJhlZDbsyz6DNJqu9Szb1tq+FP9WhUJ+A7ec qGtyKxTIxx7a+0/ggOwjLx6qzmn0F1VruSlG/4Gc8WjA4KVT4DCuHWl4Z3fPILLXhm50 XAFaLEP2Urho666xXY9O7Duk/y2S5qJGl35YHyO2TUX2lSTadU2byWf4UKVwStUNs2SO Atw7fVVLT1VKzALaLO0BMWvZgzzglWOsKAcVwj6eBjp1ekpFK0Sk7d3uVrImf8ZHv2Ow WxYQ== X-Gm-Message-State: AFqh2kobje5h5a+3e8f+/XkG/5Bz8/a31dy/WFIQ5F54Kt99qnBloX+K junPLKIqJV0KM1KOLZLQQvIK5UBkNVQ= X-Google-Smtp-Source: AMrXdXvKwY1QBZgQEYlXaLVucOoVSzOAkuZzcElsMvXrXZil4zD/ThTDf5PxjS7Ubx07ky+XlgTCFQ== X-Received: by 2002:aa7:880b:0:b0:58d:abd5:504a with SMTP id c11-20020aa7880b000000b0058dabd5504amr15658790pfo.31.1674260594221; Fri, 20 Jan 2023 16:23:14 -0800 (PST) Received: from localhost ([2405:201:6014:dae3:7dbb:8857:7c39:bb2a]) by smtp.gmail.com with ESMTPSA id m63-20020a625842000000b005775c52dbc4sm3011286pfb.167.2023.01.20.16.23.13 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 
bits=256/256); Fri, 20 Jan 2023 16:23:13 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Andrii Nakryiko , Yonghong Song , Eduard Zingerman , Alexei Starovoitov , Daniel Borkmann , Martin KaFai Lau Subject: [PATCH bpf-next v5 08/12] selftests/bpf: convenience macro for use with 'asm volatile' blocks Date: Sat, 21 Jan 2023 05:52:37 +0530 Message-Id: <20230121002241.2113993-9-memxor@gmail.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230121002241.2113993-1-memxor@gmail.com> References: <20230121002241.2113993-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1699; i=memxor@gmail.com; h=from:subject; bh=SNRMXDcMW1zluX6a3W/WYa1RbpYDzrfdzhpCt6yThTA=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjyzAks5JRMTd9QT6uBQgwWA2rgY8rE3jweRYNnMEY kA+RJ3KJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY8swJAAKCRBM4MiGSL8RykP1EA CgtUWgZGiUO5Luilnr/T7cPGSXkCKQTfws7HQRMp9hZyPlOLfyRul3WDi9KjiRlvBwhvkYyuCSK5Wf tj8IQMKPlQHDhnedVp24GcymiBgizlrfMw/nQs9V13W+imV1h9ne/OVonZMs4KIeHxLDxHBUCtxbU9 wEubNbyik4Pr2fpE0FPYJbidEEgVwtoTSG0NknnDVwLTaLND7j0I32aJYN0hTLLUI7E2wQdvCOiAld qzaRhNp4ct/0938JbcU7S+z/OtBan7XX+siMaAIulKnwuTB9TSebYsv8CEx66z12A/rQ1jrqqYudZq fG7QaOfMa6Rll41Ei2H5zRa8ri8c0ZdLPhveP7Xz2zrx8/hq28GgGeohVeUeR3WXcXXHgKypas1cQo or+r57Hqjiv4I1h7Sp1PCysy9+HO/wrQgr5DcmxVlMCqtY67HMVltv7LJ906tEut8YYYgPPdREwcfm q7aSexttmeNfja9wNqYIgolRHhMaaAve4L6/RynQgqWYQJN/Ja6ydXXrYotehV6TrZhWD26T0SUKk9 PvucNz1sfEVUvtP0GV/7yuj5KgZfBRcmkp4XiMfOkb8KliY7dhjZu4AuDUypW2UjBJV6AsAeq64p72 Vmt2Oe8pm6dvZDo3F5zre7MQnU2ASm9r+y78gTWqST0rvSrI+P0o78TdIjEQ== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Eduard Zingerman A set of macros useful for writing naked BPF functions using inline assembly. E.g. as follows: struct map_struct { ... } map SEC(".maps"); SEC(...) 
__naked int foo_test(void) { asm volatile( "r0 = 0;" "*(u64*)(r10 - 8) = r0;" "r1 = %[map] ll;" "r2 = r10;" "r2 += -8;" "call %[bpf_map_lookup_elem];" "r0 = 0;" "exit;" : : __imm(bpf_map_lookup_elem), __imm_addr(map) : __clobber_all); } Acked-by: Andrii Nakryiko Acked-by: Yonghong Song Signed-off-by: Eduard Zingerman [ Kartikeya: Add acks, include __clobber_common from Andrii ] Signed-off-by: Kumar Kartikeya Dwivedi --- tools/testing/selftests/bpf/progs/bpf_misc.h | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/tools/testing/selftests/bpf/progs/bpf_misc.h b/tools/testing/selftests/bpf/progs/bpf_misc.h index 4a01ea9113bf..2d7b89b447b2 100644 --- a/tools/testing/selftests/bpf/progs/bpf_misc.h +++ b/tools/testing/selftests/bpf/progs/bpf_misc.h @@ -7,6 +7,13 @@ #define __success __attribute__((btf_decl_tag("comment:test_expect_success"))) #define __log_level(lvl) __attribute__((btf_decl_tag("comment:test_log_level="#lvl))) +/* Convenience macro for use with 'asm volatile' blocks */ +#define __naked __attribute__((naked)) +#define __clobber_all "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "memory" +#define __clobber_common "r0", "r1", "r2", "r3", "r4", "r5", "memory" +#define __imm(name) [name]"i"(name) +#define __imm_addr(name) [name]"i"(&name) + #if defined(__TARGET_ARCH_x86) #define SYSCALL_WRAPPER 1 #define SYS_PREFIX "__x64_" From patchwork Sat Jan 21 00:22:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13110775 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 44303C38159 for ; Sat, 21 Jan 2023 00:25:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229716AbjAUAZX (ORCPT ); Fri, 20 Jan 2023 19:25:23 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35746 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230336AbjAUAZB (ORCPT ); Fri, 20 Jan 2023 19:25:01 -0500 Received: from mail-pj1-x1043.google.com (mail-pj1-x1043.google.com [IPv6:2607:f8b0:4864:20::1043]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DF85990B1A for ; Fri, 20 Jan 2023 16:24:25 -0800 (PST) Received: by mail-pj1-x1043.google.com with SMTP id k10-20020a17090a590a00b0022ba875a1a4so3566620pji.3 for ; Fri, 20 Jan 2023 16:24:25 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=JjuxCRFRYw9wjpCtAG75vdMCkmNxJZQw7fQxwyidGTs=; b=bzDvN8UTAN6LmOoKIyHeVF3b4v3z1nAvL+9aGA37ezFQ5cahrFN6fO2UOAjvW1EgA3 Akd5NpJ0QEsZ8PAIcT7GvZYVQXJI8H8eNkmbyZmVobgJPrSePU+/IFdB+NGv0ElqmMnq LfkP7lcSRtW7Zi/kWLSi/ASZSCzMoc8kHrDiriXb27nh+MXkv9Z9sFp27E19CVQDt3tZ feIbVi0jybh33Uam24EQ2ka3Yj9wjgEDz/wPGuscJkGKs4s6JXantOrEL9P8MINGiOaZ 1ZjKUgcHyFo+iMms+LyGDEkcNLBjtsXXb0TkonCtowUulQ8K9/ZMFATZsIV4mPGmA2Iw a07w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=JjuxCRFRYw9wjpCtAG75vdMCkmNxJZQw7fQxwyidGTs=; 
b=wqy0G1xDLpIzLRS5TOqXnJc6WTieDnOgcfXXKVRFMbx4Qc4TnWZJRl/Ty0bUqb7fuG 7wO1rFvL+mU5ieIWWom8VEngLnKQEq3ULLwbfFgDGAEyoW7R/ZFYf35S5vbaniIAuNxS QFXpU2Cy6Uv2Vu8jTaS2tVFT4B4Lb7umF8Q3Lkh8VSNqjzwsjYGUZ1gw+uAUpKvwfgnx AWgPKnYc1Ra+f3jExbZhk6XwwIWRIyoMICAD73CkOfCSHPzaTfjJam+UWPwn+psQdXex C+ZlN5BYf+GQmHyGuhM3DOqz+BewRSIROe1YjdBbf9jTb+ATOmmuOlczyp5fLTSI8FwX obkw== X-Gm-Message-State: AFqh2kqyBhS7WEgDwKqaQmviRBfJIqZQzKvaxEsmqqKsVeZ6t6Z8oh56 vNmxaMAXMyz46mqmlQIIaHfHQLP8p4g= X-Google-Smtp-Source: AMrXdXuoP4tWJRCjDDnzHP3wvep9Q/GJ9NuvjVnY66FPSm6Fpq7Y6VNtmlpN3PZds3g9xBNjQ4LIMw== X-Received: by 2002:a05:6a21:6daa:b0:b5:bc9b:36e4 with SMTP id wl42-20020a056a216daa00b000b5bc9b36e4mr21193580pzb.13.1674260597246; Fri, 20 Jan 2023 16:23:17 -0800 (PST) Received: from localhost ([2405:201:6014:dae3:7dbb:8857:7c39:bb2a]) by smtp.gmail.com with ESMTPSA id d2-20020aa797a2000000b005895f9657ebsm21818621pfq.70.2023.01.20.16.23.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 20 Jan 2023 16:23:16 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau Subject: [PATCH bpf-next v5 09/12] selftests/bpf: Add dynptr pruning tests Date: Sat, 21 Jan 2023 05:52:38 +0530 Message-Id: <20230121002241.2113993-10-memxor@gmail.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230121002241.2113993-1-memxor@gmail.com> References: <20230121002241.2113993-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=4291; i=memxor@gmail.com; h=from:subject; bh=KOeUuf38swlcblzdJeZUFs0p+ftuZYTeMeza4RnQIH4=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjyzAkYF+eUnuacr+8EXLHi1Ln5LCgo7dSC6XL0jKR Zsw+QEeJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY8swJAAKCRBM4MiGSL8Rytw+EA C4V8XOEkDXC6UZ9/aMoySLuaZjfLGCwsgv590+hGBO2ZOsKKwQpuurDSHHBdI+g64RdIAFJ0EOjblj jSjxJEMuksGVFTC98gvFsh7tqne5fedHV0UimbGyPcTbZpbEqffyQw3xHOxf/wjNtcHHvAavJv7NIF hzYkImiuyFCgFzyt4W6856tq4Vq2jDEQ47YNCin7FI/2zyAy3z5htkAuObbtzwG5NbpFMGZ8MtNmEP Z+KZnqUTjk3FCj2tVr51KJyakkFVJKoUDuFgOUsnnnxNOQ3JCM7yjwgQzK16HXx3lcSmEZB+TW2e2o bj3NvBGHnZnWjsZiSRd1xZ9tftV7Jy2x7Oj5o+uEBXB4fXpGn3jegLlONyUGt5v8hn0vEnKTlsNQ+y Dwx99qrDPxIR6GAQbciv1Air3PNJaPwhZ1oCzy7a02+oLuD4HKavFAuvLLUdxXzC5w8AziO9rg9y/S 24JJTSWMIbeFqpxvYia6/auMs72F/jfFGwEEmCCTuCZ9M87lNSrPcuFfpft8aOFMOfctpJiyy5bsCJ HC6+D0mEo1jke5jpF6+bxhTfMg5GcOath119zFSRamkBLnKnlydRxbjYLHYXlJNJNoYLeA9nn8Hjw6 Y60J3o7UuyRmIQ3CN26eTJjgq1H4yQ2RwjYD1FKf+n6ckL0DwZ9ro3Jub5PQ== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Add verifier tests that verify the new pruning behavior for STACK_DYNPTR slots, and ensure that state equivalence takes into account changes to the old and current verifier state correctly. Also ensure that the stacksafe changes are actually enabling pruning in case states are equivalent from pruning PoV. 
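For readability, a rough C-level equivalent of the dynptr_pruning_overwrite test added below (illustrative only, not part of the patch; 'ringbuf' is the ring buffer map already defined in dynptr_fail.c). The actual test is written as an inline assembly block because the precise stack layout and the raw store over the dynptr's slots cannot be produced reliably from C:

  SEC("?tc")
  int dynptr_pruning_overwrite_c(struct __sk_buff *ctx)
  {
          struct bpf_dynptr ptr;
          long ret;

          ret = bpf_ringbuf_reserve_dynptr(&ringbuf, 8, 0, &ptr);
          if (!ret) {
                  /* Scribble over the dynptr's stack slots on this path only.
                   * The verifier must still walk this path rather than prune
                   * it against the clean one, and must reject the store over
                   * a referenced dynptr's slots.
                   */
                  *(__u64 *)&ptr = 0xeB9F;
          }
          /* Both paths join here and release the reservation. */
          bpf_ringbuf_discard_dynptr(&ptr, 0);
          return 0;
  }

Like the asm version, this is expected to fail verification with "cannot overwrite referenced dynptr".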
Signed-off-by: Kumar Kartikeya Dwivedi --- .../testing/selftests/bpf/progs/dynptr_fail.c | 141 ++++++++++++++++++ 1 file changed, 141 insertions(+) diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c index e43000c63c66..f1e047877279 100644 --- a/tools/testing/selftests/bpf/progs/dynptr_fail.c +++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c @@ -35,6 +35,13 @@ struct { __type(value, __u32); } array_map3 SEC(".maps"); +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, __u32); + __type(value, __u64); +} array_map4 SEC(".maps"); + struct sample { int pid; long value; @@ -653,3 +660,137 @@ int dynptr_from_mem_invalid_api(void *ctx) return 0; } + +SEC("?tc") +__failure __msg("cannot overwrite referenced dynptr") __log_level(2) +int dynptr_pruning_overwrite(struct __sk_buff *ctx) +{ + asm volatile ( + "r9 = 0xeB9F; \ + r6 = %[ringbuf] ll; \ + r1 = r6; \ + r2 = 8; \ + r3 = 0; \ + r4 = r10; \ + r4 += -16; \ + call %[bpf_ringbuf_reserve_dynptr]; \ + if r0 == 0 goto pjmp1; \ + goto pjmp2; \ + pjmp1: \ + *(u64 *)(r10 - 16) = r9; \ + pjmp2: \ + r1 = r10; \ + r1 += -16; \ + r2 = 0; \ + call %[bpf_ringbuf_discard_dynptr]; " + : + : __imm(bpf_ringbuf_reserve_dynptr), + __imm(bpf_ringbuf_discard_dynptr), + __imm_addr(ringbuf) + : __clobber_all + ); + return 0; +} + +SEC("?tc") +__success __msg("12: safe") __log_level(2) +int dynptr_pruning_stacksafe(struct __sk_buff *ctx) +{ + asm volatile ( + "r9 = 0xeB9F; \ + r6 = %[ringbuf] ll; \ + r1 = r6; \ + r2 = 8; \ + r3 = 0; \ + r4 = r10; \ + r4 += -16; \ + call %[bpf_ringbuf_reserve_dynptr]; \ + if r0 == 0 goto stjmp1; \ + goto stjmp2; \ + stjmp1: \ + r9 = r9; \ + stjmp2: \ + r1 = r10; \ + r1 += -16; \ + r2 = 0; \ + call %[bpf_ringbuf_discard_dynptr]; " + : + : __imm(bpf_ringbuf_reserve_dynptr), + __imm(bpf_ringbuf_discard_dynptr), + __imm_addr(ringbuf) + : __clobber_all + ); + return 0; +} + +SEC("?tc") +__failure __msg("cannot overwrite referenced dynptr") __log_level(2) +int dynptr_pruning_type_confusion(struct __sk_buff *ctx) +{ + asm volatile ( + "r6 = %[array_map4] ll; \ + r7 = %[ringbuf] ll; \ + r1 = r6; \ + r2 = r10; \ + r2 += -8; \ + r9 = 0; \ + *(u64 *)(r2 + 0) = r9; \ + r3 = r10; \ + r3 += -24; \ + r9 = 0xeB9FeB9F; \ + *(u64 *)(r10 - 16) = r9; \ + *(u64 *)(r10 - 24) = r9; \ + r9 = 0; \ + r4 = 0; \ + r8 = r2; \ + call %[bpf_map_update_elem]; \ + r1 = r6; \ + r2 = r8; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto tjmp1; \ + exit; \ + tjmp1: \ + r8 = r0; \ + r1 = r7; \ + r2 = 8; \ + r3 = 0; \ + r4 = r10; \ + r4 += -16; \ + r0 = *(u64 *)(r0 + 0); \ + call %[bpf_ringbuf_reserve_dynptr]; \ + if r0 == 0 goto tjmp2; \ + r8 = r8; \ + r8 = r8; \ + r8 = r8; \ + r8 = r8; \ + r8 = r8; \ + r8 = r8; \ + r8 = r8; \ + goto tjmp3; \ + tjmp2: \ + *(u64 *)(r10 - 8) = r9; \ + *(u64 *)(r10 - 16) = r9; \ + r1 = r8; \ + r1 += 8; \ + r2 = 0; \ + r3 = 0; \ + r4 = r10; \ + r4 += -16; \ + call %[bpf_dynptr_from_mem]; \ + tjmp3: \ + r1 = r10; \ + r1 += -16; \ + r2 = 0; \ + call %[bpf_ringbuf_discard_dynptr]; " + : + : __imm(bpf_map_update_elem), + __imm(bpf_map_lookup_elem), + __imm(bpf_ringbuf_reserve_dynptr), + __imm(bpf_dynptr_from_mem), + __imm(bpf_ringbuf_discard_dynptr), + __imm_addr(array_map4), + __imm_addr(ringbuf) + : __clobber_all + ); + return 0; +} From patchwork Sat Jan 21 00:22:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13110778 
X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 44D9EC38159 for ; Sat, 21 Jan 2023 00:25:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229813AbjAUAZ0 (ORCPT ); Fri, 20 Jan 2023 19:25:26 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35784 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230362AbjAUAZD (ORCPT ); Fri, 20 Jan 2023 19:25:03 -0500 Received: from mail-pf1-x441.google.com (mail-pf1-x441.google.com [IPv6:2607:f8b0:4864:20::441]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 96CD2C79D3 for ; Fri, 20 Jan 2023 16:24:31 -0800 (PST) Received: by mail-pf1-x441.google.com with SMTP id z31so2070905pfw.4 for ; Fri, 20 Jan 2023 16:24:31 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=LqKHgLPVOuLzH7ZCR7QAiED+nBVGYayNRgcRrmJ2ARk=; b=ey+lPeMjs+n9uhyzpCsLr05EwUjSMs2Zid4RnLkwSQBb0/7nvjzQ0VNXJapbV/cGhs JygdowJ/3qx8wClGCcNXoVeX7Gy3yJgIkKQQhV3EURkxEtH2B+MRSg7/LVwNgClp9YNT PGl2zZke4u9wq8FmORcixC9viihPtaeX6CUydvq8nzxX/6JHynxNJlJk1hc9oHMu9nXG D0p0vSv5XXqgIOxM6or+dzfc2eV1pasyeXAEKC617nBxtgpJI2Ia2OMzHm93zr9E2UGp pGTyRRt7MF4+tBx6r6DUaE1yUP57ejGEPJzN/N0bOU3J4BkBpV3PgoojHAnuiCEb4Hob bCjQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=LqKHgLPVOuLzH7ZCR7QAiED+nBVGYayNRgcRrmJ2ARk=; b=SAwgRpwhS8KxkCjR63o6wobJ90EKSIqNydwxZom/RRgzy+xkoTvwON2FIq4pculvBd 7eZyXUaX0KyHbZeWnpGleLtebLCL+6mALgzoyOkErCWABziYdbTLxG0eSr0Hf+UEcSSr jP02epUGKJx19tOw3hKWRwExT21Kz+dH5R+mplg6IA81AQAAjqdnTVXaEKxSE0WwbB3x X6Co1mc66g1Ej8Q4kY582p8eLglxYTIaqv+MK+Fmcwx7v0GPVtjYk/FrssRIWQfKgF4n jpIpEhS18z2JHs1oQBqcxyvimNATZKLUC3duKCJi4fRI2whnRMDLrBrs8iMLp/oElJuh Ymmg== X-Gm-Message-State: AFqh2koW/K/E1DZU/8tLHlQ9ELzqUXFien7fk4dBTVyS87YRKmnNDKW0 qKQL/YtpoaVnLlubj6NPZmy8SMbfQBc= X-Google-Smtp-Source: AMrXdXvcg3Rij6hKe7rvsyOKzx7no2xh3Yf3n4Nj561XFNPh8cKLcM8AdPyOHeebfbuHdj2F3cjGcQ== X-Received: by 2002:aa7:8c45:0:b0:584:6bbd:d78c with SMTP id e5-20020aa78c45000000b005846bbdd78cmr17087640pfd.34.1674260600470; Fri, 20 Jan 2023 16:23:20 -0800 (PST) Received: from localhost ([2405:201:6014:dae3:7dbb:8857:7c39:bb2a]) by smtp.gmail.com with ESMTPSA id 190-20020a6219c7000000b00589d91962d5sm21122336pfz.195.2023.01.20.16.23.19 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 20 Jan 2023 16:23:20 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau Subject: [PATCH bpf-next v5 10/12] selftests/bpf: Add dynptr var_off tests Date: Sat, 21 Jan 2023 05:52:39 +0530 Message-Id: <20230121002241.2113993-11-memxor@gmail.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230121002241.2113993-1-memxor@gmail.com> References: <20230121002241.2113993-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1662; i=memxor@gmail.com; h=from:subject; bh=35zrMYiHjIev+NoG4HbRoMpjPSoBEo4hvE45xRAMx28=; 
b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjyzAkPR7d9Dz+Hqxsm4lJqmrpIOh2VEe9eLixVhUX WICorMeJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY8swJAAKCRBM4MiGSL8RynazD/ 9yQItCNLVmDXfASIvd9OHSqtuvvmfkHvu+3Pehgqlm/oyuT2vP6vUKRt4pI5+9qeEnirFG84HQVa57 7Ovyd7ZK9BIa0GrCkiDpyUV5MTDptgsDS666i+IOHMe9NNtHAGOJSOFCh984u18KETONnTn6dI5ivY ut4+udoK5VD7ZzS3XvGl2fI0mLwJb4/oNe0qPeVDcrVWevoKKLvlGWkgkuOwsQHWodRuduZlNT0VsS ZrOq3QuYr+A1RB/qjF/f9jUt+ILwf7JtA2e1EVTFMjfsvPAfBortCtsfA/A6c8EDuwQvc9BsT/ibxU IQLqNWDVgef/KU8wZo02UI7qCl1izm1rlvwEZsC8M1apiPg/uNGxkVPKaZ0KylvCPH7N2jq4NhYNye UZ52cvU9wowUKk1SzOfkf0uqa9YMyiVu4R3/PzC1a5I8LNQ8nbzBm7Jo02Wl8J/DjeeQgGkl/CsXi9 geE+Xn2OSuaLToWBp+Ln+cl9jvuKn271Dkx8TqniS/tca0BZoiEWtNRfXrIKQ4iETK/ejyNkL7P7lF veRwdStzVWBVJ6FjpEWKvQcnXeuC6vv8ekym2WUjLDOV9Yrk86vzexQ7AquYYRgz6hCmWXaNGtDaXT ZXeRPdtT8Xos5913bPBPopY53b0tPe4O03p/ktgHP7czcnwCQazKs/yxTV5Q== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Ensure that variable offset is handled correctly, and verifier takes both fixed and variable part into account. Also ensures that only constant var_off is allowed. Signed-off-by: Kumar Kartikeya Dwivedi --- .../testing/selftests/bpf/progs/dynptr_fail.c | 40 +++++++++++++++++++ 1 file changed, 40 insertions(+) diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c index f1e047877279..2d899f2bebb0 100644 --- a/tools/testing/selftests/bpf/progs/dynptr_fail.c +++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c @@ -794,3 +794,43 @@ int dynptr_pruning_type_confusion(struct __sk_buff *ctx) ); return 0; } + +SEC("?tc") +__failure __msg("dynptr has to be at a constant offset") __log_level(2) +int dynptr_var_off_overwrite(struct __sk_buff *ctx) +{ + asm volatile ( + "r9 = 16; \ + *(u32 *)(r10 - 4) = r9; \ + r8 = *(u32 *)(r10 - 4); \ + if r8 >= 0 goto vjmp1; \ + r0 = 1; \ + exit; \ + vjmp1: \ + if r8 <= 16 goto vjmp2; \ + r0 = 1; \ + exit; \ + vjmp2: \ + r8 &= 16; \ + r1 = %[ringbuf] ll; \ + r2 = 8; \ + r3 = 0; \ + r4 = r10; \ + r4 += -32; \ + r4 += r8; \ + call %[bpf_ringbuf_reserve_dynptr]; \ + r9 = 0xeB9F; \ + *(u64 *)(r10 - 16) = r9; \ + r1 = r10; \ + r1 += -32; \ + r1 += r8; \ + r2 = 0; \ + call %[bpf_ringbuf_discard_dynptr]; " + : + : __imm(bpf_ringbuf_reserve_dynptr), + __imm(bpf_ringbuf_discard_dynptr), + __imm_addr(ringbuf) + : __clobber_all + ); + return 0; +} From patchwork Sat Jan 21 00:22:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13110780 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 52859C27C76 for ; Sat, 21 Jan 2023 00:25:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229797AbjAUAZ1 (ORCPT ); Fri, 20 Jan 2023 19:25:27 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35992 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230453AbjAUAZN (ORCPT ); Fri, 20 Jan 2023 19:25:13 -0500 Received: from mail-pf1-x441.google.com (mail-pf1-x441.google.com [IPv6:2607:f8b0:4864:20::441]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A3DFE73AC6 for ; Fri, 20 Jan 2023 
16:24:40 -0800 (PST) Received: by mail-pf1-x441.google.com with SMTP id g205so5178027pfb.6 for ; Fri, 20 Jan 2023 16:24:40 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=yqzLD9MYr6KiuRpe9sBVIUYjbcoAjTXay8FSvvNcEzA=; b=MHR/wVyKYf6JT96VHkiXh7CEYEDVoPffKaItvWTQYkqb5rHJs4yenqSdxTxMSVHz+j yTwVId05F+a4LgZIJvEYwr/Z+Kp3cPuKqoRsQtDB0NhqenIhkUqAgMAPqdfIgwhwBquy j0IOa0HQuJCXXDmTS/eapXe9FaCKi4Kqnd0IFheIYe2vyeMlKwuJ9xv8yo/z1AV3BoE8 RxMYwUfBVwuNP/0J3tH6j/DJYkV7J/VJFu4Ye/Zsl7NxkbkG6Uo93BHTx8ggSpz2zvNr k0dzHcMIZRtFfb3NMInS8GHb0m1QVMiSfJXvnHjq7woslt9FL50lb2uh4VR8hndfWirc ZHbA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=yqzLD9MYr6KiuRpe9sBVIUYjbcoAjTXay8FSvvNcEzA=; b=fJEuhKE/ktYyJ7Pij4d/9jrxAWEoP3nabgvHFyDjVigI69gMzVYnHpk+/MjeKccfMO ZrEYjrg6pUg3fYYnoSsOVO777gIfcYudxR2B8zFIQQ6zjFxWblhibznddYMIehNyUA/I Hw2eij6uYIZyc9Wke+AlAWFHJsdfLGN8WtdoTyC8jVSUBgtPysq5POyExfDpDkYc2kLy Qf17HrescqOSpDzzpDaOMnlcEGRBZy6PKlD7vNkLaFSzcoV7in3121g36tjpDkRhJE+0 AMYPHLVFdEXjwDs1Y7eKYwDEVbvb2OMLEU4jO8oInGjkIhb6LuLeJkn5AHo1EW6M4nLO 0vGg== X-Gm-Message-State: AFqh2kr9HLBtZbfW8VgDtQS7L8ZUZG6q+QN87aPDkH88v6nhYvW7VX3K oK6XjJTeg0uOA7OoLYyOifXTWhDsNss= X-Google-Smtp-Source: AMrXdXsg4S5A1Q5ifQJZ3GRjx/B0AW7FGXnNh4Xv8LHhgYpHOzwyU59dzgE6plKfQYLVCIEMTIaYEA== X-Received: by 2002:a05:6a00:4207:b0:580:eeae:e4ba with SMTP id cd7-20020a056a00420700b00580eeaee4bamr18434371pfb.4.1674260603663; Fri, 20 Jan 2023 16:23:23 -0800 (PST) Received: from localhost ([2405:201:6014:dae3:7dbb:8857:7c39:bb2a]) by smtp.gmail.com with ESMTPSA id 65-20020a621844000000b005877d374069sm24251619pfy.10.2023.01.20.16.23.22 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 20 Jan 2023 16:23:23 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau Subject: [PATCH bpf-next v5 11/12] selftests/bpf: Add dynptr partial slot overwrite tests Date: Sat, 21 Jan 2023 05:52:40 +0530 Message-Id: <20230121002241.2113993-12-memxor@gmail.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230121002241.2113993-1-memxor@gmail.com> References: <20230121002241.2113993-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2463; i=memxor@gmail.com; h=from:subject; bh=vb2M3d6AP1tJHi9lacIFMdvNjgTaEGGdyWxaqNz5OHI=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjyzAkXtdYoVT+uua2rJKUZ75sLYaxpcQxs+Mg4Z+P 6ikII2eJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY8swJAAKCRBM4MiGSL8RyntbEA CgN/UC9rB07GANvrLkbXrPZOHnFgic1ikW/UTeLitsSKTtjDFF8dk1/GRz4oamHhYV1a7Zf/ANAWhO t40GiA1lQLShKohv0qDwEqSD85VtCXQxEahhL0ldEjLGWMebBYj1/hyT5nNwnPiyAWUQHU+Q25VA19 OTeYVnHq+dcXVHRLzp9I+r1H+okBthZLUujl9w07FdSyccvPXisjq9XfiLNNmjM6id0yB7Bowdub36 bsFPWrzCstibKkV1JhGcRdRxBHZ9Zp5Qg/wUK/asiVaZ8uRL1uLd/2DxR5C1Bg+nJyBLzOzXvFwkmZ vgTYU5XpgJnwlyzuX2K/ffbfQFiVlcx93syYYVvpPleIp0DhIVh64gmEVmawTsjiD2L6rByiVQ3Grf rblV6jt+tl5kxkBeC85chcpg1/Vsjhcp36DHxUpNjjtrg0IdRx6169/6JrwZmeP5lEQZj+iF/PLzhq E8k34nf5q7GVlxaWL7MY5XS/Mx1FGcZntxZ6g8HX6RkO40qSNEsluJdZenhUYFhK73vInt6wpkL56J H99xJ2CapLn5jWQvFY3N5qshELQwwz+BaaL+E7VxfEipHvPxgCkaBpM0OjFuCwsLzO1ZR2SFICIk2E /aycmHR0IQgEU9uGXJ3ebd0bHY+e5H5hjx7YmM2AQcUhZtBRG1YBLAZQDzQA== 
X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Try creating a dynptr, then overwriting second slot with first slot of another dynptr. Then, the first slot of first dynptr should also be invalidated, but without our fix that does not happen. As a consequence, the unfixed case allows passing first dynptr (as the kernel check only checks for slot_type and then first_slot == true). Signed-off-by: Kumar Kartikeya Dwivedi --- .../testing/selftests/bpf/progs/dynptr_fail.c | 66 +++++++++++++++++++ 1 file changed, 66 insertions(+) diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c index 2d899f2bebb0..1cbec5468879 100644 --- a/tools/testing/selftests/bpf/progs/dynptr_fail.c +++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c @@ -834,3 +834,69 @@ int dynptr_var_off_overwrite(struct __sk_buff *ctx) ); return 0; } + +SEC("?tc") +__failure __msg("cannot overwrite referenced dynptr") __log_level(2) +int dynptr_partial_slot_invalidate(struct __sk_buff *ctx) +{ + asm volatile ( + "r6 = %[ringbuf] ll; \ + r7 = %[array_map4] ll; \ + r1 = r7; \ + r2 = r10; \ + r2 += -8; \ + r9 = 0; \ + *(u64 *)(r2 + 0) = r9; \ + r3 = r2; \ + r4 = 0; \ + r8 = r2; \ + call %[bpf_map_update_elem]; \ + r1 = r7; \ + r2 = r8; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto sjmp1; \ + exit; \ + sjmp1: \ + r7 = r0; \ + r1 = r6; \ + r2 = 8; \ + r3 = 0; \ + r4 = r10; \ + r4 += -24; \ + call %[bpf_ringbuf_reserve_dynptr]; \ + *(u64 *)(r10 - 16) = r9; \ + r1 = r7; \ + r2 = 8; \ + r3 = 0; \ + r4 = r10; \ + r4 += -16; \ + call %[bpf_dynptr_from_mem]; \ + r1 = r10; \ + r1 += -512; \ + r2 = 488; \ + r3 = r10; \ + r3 += -24; \ + r4 = 0; \ + r5 = 0; \ + call %[bpf_dynptr_read]; \ + r8 = 1; \ + if r0 != 0 goto sjmp2; \ + r8 = 0; \ + sjmp2: \ + r1 = r10; \ + r1 += -24; \ + r2 = 0; \ + call %[bpf_ringbuf_discard_dynptr]; " + : + : __imm(bpf_map_update_elem), + __imm(bpf_map_lookup_elem), + __imm(bpf_ringbuf_reserve_dynptr), + __imm(bpf_ringbuf_discard_dynptr), + __imm(bpf_dynptr_from_mem), + __imm(bpf_dynptr_read), + __imm_addr(ringbuf), + __imm_addr(array_map4) + : __clobber_all + ); + return 0; +} From patchwork Sat Jan 21 00:22:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13110779 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0C48EC25B50 for ; Sat, 21 Jan 2023 00:25:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229765AbjAUAZ0 (ORCPT ); Fri, 20 Jan 2023 19:25:26 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35838 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230380AbjAUAZE (ORCPT ); Fri, 20 Jan 2023 19:25:04 -0500 Received: from mail-pl1-x642.google.com (mail-pl1-x642.google.com [IPv6:2607:f8b0:4864:20::642]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7E67ECD235 for ; Fri, 20 Jan 2023 16:24:33 -0800 (PST) Received: by mail-pl1-x642.google.com with SMTP id jl3so6702078plb.8 for ; Fri, 20 Jan 2023 16:24:33 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; 
s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=I/QcajSZvIJYuBCKx7NlqHTz3x2/XOIswd7jGPSiPBU=; b=K87LMr+U7UzSEEs9lXC4BGm5uAvMQyNz7PR0lWWb957dCJ4r/ncKUWAMTLcApdMQP5 S20qA8lJdIw+C+GZ7ro8cCGHSj8rDiYWrJiXflBVCN+ogVTmzWtC8Ywzf6LuzDsxSP/I gX6rYYYcAbwg0O5R1MM1/1tJASV1s/rYMiYAducO3q00hhLR0PLcvojvnWg+odPEPCr9 BOiYH/mNpSfAUFK9bDNKppI5b0IiZh9cnjEKin92ck5EGjZYBB3ijiEgCRQwveWDPKXt MC9SufyGOCMPth3YqRm9mkbh25ICPQFSBzSS8azNw/fgSiulsqHoBaKu8XtTfxkcVK2n Bkvg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=I/QcajSZvIJYuBCKx7NlqHTz3x2/XOIswd7jGPSiPBU=; b=GSvbBq3Ld61urqrH064FV2eXNnWwGL07WdGxPJw9sPhhpixt4R0+VY0inY6wiEuLLA b4osXUySRKJIneHKxUs9htQOvp/v5VeWQkm3sYTltnpoGM5YzlhHs+e/D8JKTClCJrdM rDR4olCppCwiuk98H7vhG4zvPltbq5whaQAx+LN/duGHYdkx9cjCAxToPsxIk6s9KE+I ZSQwFqqfoQuWypiPmjIljJSnIVzaVnk6nqBo5EKTbUY+Y1fAvF9Pe349pZ427Q0Tt871 zH8RVEgIt2jQPIXf0FWK90japVFO5MPGbq4c6zBj6dABqTyIcAEC0GGJR52rAc8qPxp+ BTEQ== X-Gm-Message-State: AFqh2kp3npw7ctrSOW8eMZ/fqHq9O+FnEXyrbWnXZBoBRAWf8ZQwGS9E 5Orc3WEsSPHahNOKII2AJGJ1pfsS0Js= X-Google-Smtp-Source: AMrXdXsC34qBYiFAPeAv5NOno/ub8nq9KPSPGRXiEssLhjMxrcTwAuW4dt/KFDrw/XOSVT301MozlQ== X-Received: by 2002:a17:902:da90:b0:194:43e2:dcd9 with SMTP id j16-20020a170902da9000b0019443e2dcd9mr23571832plx.2.1674260606904; Fri, 20 Jan 2023 16:23:26 -0800 (PST) Received: from localhost ([2405:201:6014:dae3:7dbb:8857:7c39:bb2a]) by smtp.gmail.com with ESMTPSA id e13-20020a17090301cd00b0019498ee6d95sm9794162plh.105.2023.01.20.16.23.26 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 20 Jan 2023 16:23:26 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Joanne Koong , Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau Subject: [PATCH bpf-next v5 12/12] selftests/bpf: Add dynptr helper tests Date: Sat, 21 Jan 2023 05:52:41 +0530 Message-Id: <20230121002241.2113993-13-memxor@gmail.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20230121002241.2113993-1-memxor@gmail.com> References: <20230121002241.2113993-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=5259; i=memxor@gmail.com; h=from:subject; bh=EV/cG3Jk6SIyLbDKkxNscw4kpd97ighWn+qGJu9gvyU=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjyzAkcO3eN6JFdHCSIrXoiC4lRrWALMKyYcONMI7L g7D4V9CJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY8swJAAKCRBM4MiGSL8Ryv4UD/ 9OecdfuDoCldTedsqGtf2CfdhX3wNiQDM6tYM7Pj/M3yRFy9fhb2j/fDMGNOtpx67jK+u+qY3o9aQO us/UaHiY8Hk1JeGRSozzmU9sjJPVUvQPMSKhAT6qkFJvaIqSu9U0orKPbJ8/1K194+V1vZZCEOJe50 fcCtYG/lfNJtqPqPKaFwKjdYqGt0gEBKiiT4a0+q3JJV07d3rAON5n3v8ruaUvybLiEMx7Kb8ZaFwn I1u4rQUIM4zxf+WFYsl7CW6d0RQ7qNblcpFcS8pJx1ohXldT74//RucYj5BjgOAauyoJcFNMVsQPIX bBdjZmejCDzPgDm03V2AcfvxPDGD9x9V3e2+AK9SjgKBs4ztl1hRBmXx7CcafkTHF4xa+CGxGL13gg k2SG9lE/wNacXabERJK5ETyjX5frXCrO/YX9KutXAInG86aZJ4xijxJuE/q/YBKMI5hVaARm3Bi6xn 1McHDOaE33yyjikZ0kg3X0UKVXwnGJxSou8gNomYNgfoY4zptNWva5o+j3LfLd2M6fEVA8/+9ofE87 nMk9ixuxegfE2+ehi/BlDJp4V0LmswLWN+tMZQXtZyCEGtI9gwZ0g2sDcIFkwJBrxBkhQoZWu9EZx+ cqYjkVLl8eb2I3o+B3Ah9lD8D5bBxN1+zindQ3TqW5Fb35oUr1t0uSYvwKww== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net First test that we 
First, test that we allow overwriting dynptr slots and reinitializing them in
the unreferenced case, and disallow overwriting in the referenced case. Include
tests to ensure that slices obtained from a destroyed dynptr are invalidated on
its destruction. The invalidation needs to be scoped: slices of dynptr A should
not be invalidated when dynptr B is destroyed. Next, test that MEM_UNINIT
arguments are not allowed to write to dynptr stack slots.

Acked-by: Joanne Koong
Signed-off-by: Kumar Kartikeya Dwivedi
---
 .../testing/selftests/bpf/progs/dynptr_fail.c | 192 ++++++++++++++++++
 1 file changed, 192 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
index 1cbec5468879..5950ad6ec2e6 100644
--- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
+++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
@@ -900,3 +900,195 @@ int dynptr_partial_slot_invalidate(struct __sk_buff *ctx)
 	);
 	return 0;
 }
+
+/* Test that it is allowed to overwrite unreferenced dynptr. */
+SEC("?raw_tp")
+__success
+int dynptr_overwrite_unref(void *ctx)
+{
+	struct bpf_dynptr ptr;
+
+	if (get_map_val_dynptr(&ptr))
+		return 0;
+	if (get_map_val_dynptr(&ptr))
+		return 0;
+	if (get_map_val_dynptr(&ptr))
+		return 0;
+
+	return 0;
+}
+
+/* Test that slices are invalidated on reinitializing a dynptr. */
+SEC("?raw_tp")
+__failure __msg("invalid mem access 'scalar'")
+int dynptr_invalidate_slice_reinit(void *ctx)
+{
+	struct bpf_dynptr ptr;
+	__u8 *p;
+
+	if (get_map_val_dynptr(&ptr))
+		return 0;
+	p = bpf_dynptr_data(&ptr, 0, 1);
+	if (!p)
+		return 0;
+	if (get_map_val_dynptr(&ptr))
+		return 0;
+	/* this should fail */
+	return *p;
+}
+
+/* Invalidation of dynptr slices on destruction of dynptr should not miss
+ * mem_or_null pointers.
+ */
+SEC("?raw_tp")
+__failure __msg("R1 type=scalar expected=percpu_ptr_")
+int dynptr_invalidate_slice_or_null(void *ctx)
+{
+	struct bpf_dynptr ptr;
+	__u8 *p;
+
+	if (get_map_val_dynptr(&ptr))
+		return 0;
+
+	p = bpf_dynptr_data(&ptr, 0, 1);
+	*(__u8 *)&ptr = 0;
+	/* this should fail */
+	bpf_this_cpu_ptr(p);
+	return 0;
+}
+
+/* Destruction of a dynptr should also invalidate any slices obtained from it */
+SEC("?raw_tp")
+__failure __msg("R7 invalid mem access 'scalar'")
+int dynptr_invalidate_slice_failure(void *ctx)
+{
+	struct bpf_dynptr ptr1;
+	struct bpf_dynptr ptr2;
+	__u8 *p1, *p2;
+
+	if (get_map_val_dynptr(&ptr1))
+		return 0;
+	if (get_map_val_dynptr(&ptr2))
+		return 0;
+
+	p1 = bpf_dynptr_data(&ptr1, 0, 1);
+	if (!p1)
+		return 0;
+	p2 = bpf_dynptr_data(&ptr2, 0, 1);
+	if (!p2)
+		return 0;
+
+	*(__u8 *)&ptr1 = 0;
+	/* this should fail */
+	return *p1;
+}
+
+/* Invalidation of slices should be scoped and should not prevent dereferencing
+ * slices of another dynptr after destroying an unrelated dynptr
+ */
+SEC("?raw_tp")
+__success
+int dynptr_invalidate_slice_success(void *ctx)
+{
+	struct bpf_dynptr ptr1;
+	struct bpf_dynptr ptr2;
+	__u8 *p1, *p2;
+
+	if (get_map_val_dynptr(&ptr1))
+		return 1;
+	if (get_map_val_dynptr(&ptr2))
+		return 1;
+
+	p1 = bpf_dynptr_data(&ptr1, 0, 1);
+	if (!p1)
+		return 1;
+	p2 = bpf_dynptr_data(&ptr2, 0, 1);
+	if (!p2)
+		return 1;
+
+	*(__u8 *)&ptr1 = 0;
+	return *p2;
+}
+
+/* Overwriting a referenced dynptr should be rejected */
+SEC("?raw_tp")
+__failure __msg("cannot overwrite referenced dynptr")
+int dynptr_overwrite_ref(void *ctx)
+{
+	struct bpf_dynptr ptr;
+
+	bpf_ringbuf_reserve_dynptr(&ringbuf, 64, 0, &ptr);
+	/* this should fail */
+	if (get_map_val_dynptr(&ptr))
+		bpf_ringbuf_discard_dynptr(&ptr, 0);
+	return 0;
+}
+
+/* Reject writes to dynptr slot from bpf_dynptr_read */
+SEC("?raw_tp")
+__failure __msg("potential write to dynptr at off=-16")
+int dynptr_read_into_slot(void *ctx)
+{
+	union {
+		struct {
+			char _pad[48];
+			struct bpf_dynptr ptr;
+		};
+		char buf[64];
+	} data;
+
+	bpf_ringbuf_reserve_dynptr(&ringbuf, 64, 0, &data.ptr);
+	/* this should fail */
+	bpf_dynptr_read(data.buf, sizeof(data.buf), &data.ptr, 0, 0);
+
+	return 0;
+}
+
+/* Reject writes to dynptr slot for uninit arg */
+SEC("?raw_tp")
+__failure __msg("potential write to dynptr at off=-16")
+int uninit_write_into_slot(void *ctx)
+{
+	struct {
+		char buf[64];
+		struct bpf_dynptr ptr;
+	} data;
+
+	bpf_ringbuf_reserve_dynptr(&ringbuf, 80, 0, &data.ptr);
+	/* this should fail */
+	bpf_get_current_comm(data.buf, 80);
+
+	return 0;
+}
+
+static int callback(__u32 index, void *data)
+{
+	*(__u32 *)data = 123;
+
+	return 0;
+}
+
+/* If the dynptr is written into in a callback function, its data
+ * slices should be invalidated as well.
+ */
+SEC("?raw_tp")
+__failure __msg("invalid mem access 'scalar'")
+int invalid_data_slices(void *ctx)
+{
+	struct bpf_dynptr ptr;
+	__u32 *slice;
+
+	if (get_map_val_dynptr(&ptr))
+		return 0;
+
+	slice = bpf_dynptr_data(&ptr, 0, sizeof(__u32));
+	if (!slice)
+		return 0;
+
+	bpf_loop(10, callback, &ptr, 0);
+
+	/* this should fail */
+	*slice = 1;
+
+	return 0;
+}
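For reference, the get_map_val_dynptr() helper used throughout these tests is
defined earlier in dynptr_fail.c and is not part of this hunk. It creates an
unreferenced local dynptr over an array map value, roughly along the following
lines; this is only a sketch for context, and the map and value names used
here (array_map, zero/key locals) are illustrative assumptions, not taken from
the patch itself:

static int get_map_val_dynptr(struct bpf_dynptr *ptr)
{
	__u32 key = 0, zero = 0, *map_val;

	/* Ensure the element exists, then look it up. */
	bpf_map_update_elem(&array_map, &key, &zero, 0);
	map_val = bpf_map_lookup_elem(&array_map, &key);
	if (!map_val)
		return -ENOENT;

	/* Build a local (unreferenced) dynptr on top of the map value. Any
	 * non-zero return is treated as failure by the callers above.
	 */
	bpf_dynptr_from_mem(map_val, sizeof(*map_val), 0, ptr);

	return 0;
}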