From patchwork Thu Feb 22 00:50:02 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Eduard Zingerman
X-Patchwork-Id: 13566643
X-Patchwork-Delegate: bpf@iogearbox.net
X-Google-Smtp-Source:
AGHT+IFZN9fYAa646oeG9hWGS18TEPVnXuiTPdkmZI12hXQ5q7YP3XvJtdFpI9WrgcAxSUgG9eSypg== X-Received: by 2002:a05:600c:34d5:b0:412:85ff:ec0d with SMTP id d21-20020a05600c34d500b0041285ffec0dmr40608wmq.7.1708563025451; Wed, 21 Feb 2024 16:50:25 -0800 (PST) Received: from localhost.localdomain (host-176-36-0-241.b024.la.net.ua. [176.36.0.241]) by smtp.gmail.com with ESMTPSA id i17-20020a05600c355100b0041279ac13adsm2031992wmq.36.2024.02.21.16.50.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 21 Feb 2024 16:50:24 -0800 (PST) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yonghong.song@linux.dev, sunhao.th@gmail.com, Eduard Zingerman Subject: [PATCH bpf-next 1/4] bpf: replace env->cur_hist_ent with a getter function Date: Thu, 22 Feb 2024 02:50:02 +0200 Message-ID: <20240222005005.31784-2-eddyz87@gmail.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240222005005.31784-1-eddyz87@gmail.com> References: <20240222005005.31784-1-eddyz87@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Let push_jmp_history() peek current jump history entry basing on the passed bpf_verifier_state. This replaces a "global" variable in bpf_verifier_env allowing to use push_jmp_history() for states other than env->cur_state. Signed-off-by: Eduard Zingerman --- include/linux/bpf_verifier.h | 1 - kernel/bpf/verifier.c | 34 ++++++++++++++++------------------ 2 files changed, 16 insertions(+), 19 deletions(-) diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index 84365e6dd85d..cbfb235984c8 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -705,7 +705,6 @@ struct bpf_verifier_env { int cur_stack; } cfg; struct backtrack_state bt; - struct bpf_jmp_history_entry *cur_hist_ent; u32 pass_cnt; /* number of times do_check() was called */ u32 subprog_cnt; /* number of instructions analyzed by the verifier */ diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 011d54a1dc53..759ef089b33c 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -3304,24 +3304,34 @@ static bool is_jmp_point(struct bpf_verifier_env *env, int insn_idx) return env->insn_aux_data[insn_idx].jmp_point; } +static struct bpf_jmp_history_entry *get_jmp_hist_entry(struct bpf_verifier_state *st, + u32 hist_end, int insn_idx) +{ + if (hist_end > 0 && st->jmp_history[hist_end - 1].idx == insn_idx) + return &st->jmp_history[hist_end - 1]; + return NULL; +} + /* for any branch, call, exit record the history of jmps in the given state */ static int push_jmp_history(struct bpf_verifier_env *env, struct bpf_verifier_state *cur, int insn_flags) { + struct bpf_jmp_history_entry *p, *cur_hist_ent; u32 cnt = cur->jmp_history_cnt; - struct bpf_jmp_history_entry *p; size_t alloc_size; + cur_hist_ent = get_jmp_hist_entry(cur, cnt, env->insn_idx); + /* combine instruction flags if we already recorded this instruction */ - if (env->cur_hist_ent) { + if (cur_hist_ent) { /* atomic instructions push insn_flags twice, for READ and * WRITE sides, but they should agree on stack slot */ - WARN_ONCE((env->cur_hist_ent->flags & insn_flags) && - (env->cur_hist_ent->flags & insn_flags) != insn_flags, + WARN_ONCE((cur_hist_ent->flags & insn_flags) && + (cur_hist_ent->flags & insn_flags) != insn_flags, "verifier insn history bug: insn_idx %d cur flags %x new flags %x\n", - 
env->insn_idx, env->cur_hist_ent->flags, insn_flags); - env->cur_hist_ent->flags |= insn_flags; + env->insn_idx, cur_hist_ent->flags, insn_flags); + cur_hist_ent->flags |= insn_flags; return 0; } @@ -3337,19 +3347,10 @@ static int push_jmp_history(struct bpf_verifier_env *env, struct bpf_verifier_st p->prev_idx = env->prev_insn_idx; p->flags = insn_flags; cur->jmp_history_cnt = cnt; - env->cur_hist_ent = p; return 0; } -static struct bpf_jmp_history_entry *get_jmp_hist_entry(struct bpf_verifier_state *st, - u32 hist_end, int insn_idx) -{ - if (hist_end > 0 && st->jmp_history[hist_end - 1].idx == insn_idx) - return &st->jmp_history[hist_end - 1]; - return NULL; -} - /* Backtrack one insn at a time. If idx is not at the top of recorded * history then previous instruction came from straight line execution. * Return -ENOENT if we exhausted all instructions within given state. @@ -17437,9 +17438,6 @@ static int do_check(struct bpf_verifier_env *env) u8 class; int err; - /* reset current history entry on each new instruction */ - env->cur_hist_ent = NULL; - env->prev_insn_idx = prev_insn_idx; if (env->insn_idx >= insn_cnt) { verbose(env, "invalid insn idx %d insn_cnt %d\n", From patchwork Thu Feb 22 00:50:03 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13566644 X-Patchwork-Delegate: bpf@iogearbox.net Received: from mail-lj1-f169.google.com (mail-lj1-f169.google.com [209.85.208.169]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2828D28EF for ; Thu, 22 Feb 2024 00:50:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.208.169 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1708563031; cv=none; b=vFG4UMjx5+EJEGVNjphe0lPUW4fNTdOdZy+JhPB/W8hd/ltTk7fI9SPqRT4SZX1x2jo3Do/RKKXohArlmO1g3mVker10QaP2/sHlGeOCNm40cHVNJxISi8wPHQKrYXSaLlNuMH4JQMKMII1BqtPuqD3lUZfHXCbYOnuzy/zVtjk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1708563031; c=relaxed/simple; bh=giGk+YWOjOMm7lj7SFnq2ZlMxbBOcgizsNChrkDC2qs=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=b/nKbrptGgpaOZlaADQuByoaurP/teP5JBboL+3s4Lb7AmRC9FbN8X3DIrLMLsFvlrxF3ykFZxjRlYhGaQ6usygL0QxRFD+f+jOxWnwaPVl88CbvgE46HdGamdFz0nF6LiTwGnL6ArKXDYDml85WAHlC2wLO+vN8QcVJPckiADM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=Gbq9OJLx; arc=none smtp.client-ip=209.85.208.169 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="Gbq9OJLx" Received: by mail-lj1-f169.google.com with SMTP id 38308e7fff4ca-2d22fa5c822so58781061fa.2 for ; Wed, 21 Feb 2024 16:50:28 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1708563027; x=1709167827; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=XBhA5T/zsEAe6jZ1c9b7hpb8cI83Ga6aj5fC7T+/iqE=; 
From: Eduard Zingerman
To: bpf@vger.kernel.org, ast@kernel.org
Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yonghong.song@linux.dev, sunhao.th@gmail.com, Eduard Zingerman
Subject: [PATCH bpf-next 2/4] bpf: track find_equal_scalars history on per-instruction level
Date: Thu, 22 Feb 2024 02:50:03 +0200
Message-ID: <20240222005005.31784-3-eddyz87@gmail.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240222005005.31784-1-eddyz87@gmail.com>
References: <20240222005005.31784-1-eddyz87@gmail.com>

Use bpf_verifier_state->jmp_history to track which registers were
updated by find_equal_scalars() when a conditional jump was verified.
Use the recorded information in backtrack_insn() to propagate precision.

E.g. for the following program:

            while verifying instructions
  r1 = r0              |
  if r1 < 8  goto ...  | push r0,r1 as equal_scalars in jmp_history
  if r0 > 16 goto ...  | push r0,r1 as equal_scalars in jmp_history
  r2 = r10             |
  r2 += r0             v mark_chain_precision(r0)

            while doing mark_chain_precision(r0)
  r1 = r0              ^
  if r1 < 8  goto ...  | mark r0,r1 as precise
  if r0 > 16 goto ...  | mark r0,r1 as precise
  r2 = r10             |
  r2 += r0             | mark r0 precise

Technically, this is achieved in the following steps:
- Use 10 bits to identify each register that gains range because of
  find_equal_scalars():
  - 3 bits for frame number;
  - 6 bits for register or stack slot number;
  - 1 bit to indicate if the register is spilled.
- Use a u64 as a vector of 6 such records plus 4 bits for the vector length.
- Augment struct bpf_jmp_history_entry with a field 'equal_scalars'
  representing such a vector.
- When doing check_cond_jmp_op() for remember up to 6 registers that gain range because of find_equal_scalars() in such a vector. - Don't propagate range information and reset IDs for registers that don't fit in 6-value vector. - Push collected vector to bpf_verifier_state->jmp_history for instruction index of conditional jump. - When doing backtrack_insn() for conditional jumps check if any of recorded equal scalars is currently marked precise, if so mark all equal recorded scalars as precise. Fixes: 904e6ddf4133 ("bpf: Use scalar ids in mark_chain_precision()") Reported-by: Hao Sun Closes: https://lore.kernel.org/bpf/CAEf4BzZ0xidVCqB47XnkXcNhkPWF6_nTV7yt+_Lf0kcFEut2Mg@mail.gmail.com/ Suggested-by: Andrii Nakryiko Signed-off-by: Eduard Zingerman --- include/linux/bpf_verifier.h | 1 + kernel/bpf/verifier.c | 207 ++++++++++++++++-- .../bpf/progs/verifier_subprog_precision.c | 2 +- .../testing/selftests/bpf/verifier/precise.c | 2 +- 4 files changed, 195 insertions(+), 17 deletions(-) diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index cbfb235984c8..26e32555711c 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -361,6 +361,7 @@ struct bpf_jmp_history_entry { u32 prev_idx : 22; /* special flags, e.g., whether insn is doing register stack spill/load */ u32 flags : 10; + u64 equal_scalars; }; /* Maximum number of register states that can exist at once */ diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 759ef089b33c..b95b6842703c 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -3304,6 +3304,76 @@ static bool is_jmp_point(struct bpf_verifier_env *env, int insn_idx) return env->insn_aux_data[insn_idx].jmp_point; } +#define ES_FRAMENO_BITS 3 +#define ES_SPI_BITS 6 +#define ES_ENTRY_BITS (ES_SPI_BITS + ES_FRAMENO_BITS + 1) +#define ES_SIZE_BITS 4 +#define ES_FRAMENO_MASK ((1ul << ES_FRAMENO_BITS) - 1) +#define ES_SPI_MASK ((1ul << ES_SPI_BITS) - 1) +#define ES_SIZE_MASK ((1ul << ES_SIZE_BITS) - 1) +#define ES_SPI_OFF ES_FRAMENO_BITS +#define ES_IS_REG_OFF (ES_SPI_BITS + ES_FRAMENO_BITS) + +/* Pack one history entry for equal scalars as 10 bits in the following format: + * - 3-bits frameno + * - 6-bits spi_or_reg + * - 1-bit is_reg + */ +static u64 equal_scalars_pack(u32 frameno, u32 spi_or_reg, bool is_reg) +{ + u64 val = 0; + + val |= frameno & ES_FRAMENO_MASK; + val |= (spi_or_reg & ES_SPI_MASK) << ES_SPI_OFF; + val |= (is_reg ? 1 : 0) << ES_IS_REG_OFF; + return val; +} + +static void equal_scalars_unpack(u64 val, u32 *frameno, u32 *spi_or_reg, bool *is_reg) +{ + *frameno = val & ES_FRAMENO_MASK; + *spi_or_reg = (val >> ES_SPI_OFF) & ES_SPI_MASK; + *is_reg = (val >> ES_IS_REG_OFF) & 0x1; +} + +static u32 equal_scalars_size(u64 equal_scalars) +{ + return equal_scalars & ES_SIZE_MASK; +} + +/* Use u64 as a stack of 6 10-bit values, use first 4-bits to track + * number of elements currently in stack. 
+ */ +static bool equal_scalars_push(u64 *equal_scalars, u32 frameno, u32 spi_or_reg, bool is_reg) +{ + u32 num; + + num = equal_scalars_size(*equal_scalars); + if (num == 6) + return false; + *equal_scalars >>= ES_SIZE_BITS; + *equal_scalars <<= ES_ENTRY_BITS; + *equal_scalars |= equal_scalars_pack(frameno, spi_or_reg, is_reg); + *equal_scalars <<= ES_SIZE_BITS; + *equal_scalars |= num + 1; + return true; +} + +static bool equal_scalars_pop(u64 *equal_scalars, u32 *frameno, u32 *spi_or_reg, bool *is_reg) +{ + u32 num; + + num = equal_scalars_size(*equal_scalars); + if (num == 0) + return false; + *equal_scalars >>= ES_SIZE_BITS; + equal_scalars_unpack(*equal_scalars, frameno, spi_or_reg, is_reg); + *equal_scalars >>= ES_ENTRY_BITS; + *equal_scalars <<= ES_SIZE_BITS; + *equal_scalars |= num - 1; + return true; +} + static struct bpf_jmp_history_entry *get_jmp_hist_entry(struct bpf_verifier_state *st, u32 hist_end, int insn_idx) { @@ -3314,7 +3384,7 @@ static struct bpf_jmp_history_entry *get_jmp_hist_entry(struct bpf_verifier_stat /* for any branch, call, exit record the history of jmps in the given state */ static int push_jmp_history(struct bpf_verifier_env *env, struct bpf_verifier_state *cur, - int insn_flags) + int insn_flags, u64 equal_scalars) { struct bpf_jmp_history_entry *p, *cur_hist_ent; u32 cnt = cur->jmp_history_cnt; @@ -3332,6 +3402,12 @@ static int push_jmp_history(struct bpf_verifier_env *env, struct bpf_verifier_st "verifier insn history bug: insn_idx %d cur flags %x new flags %x\n", env->insn_idx, cur_hist_ent->flags, insn_flags); cur_hist_ent->flags |= insn_flags; + if (cur_hist_ent->equal_scalars != 0) { + verbose(env, "verifier bug: insn_idx %d equal_scalars != 0: %#llx\n", + env->insn_idx, cur_hist_ent->equal_scalars); + return -EFAULT; + } + cur_hist_ent->equal_scalars = equal_scalars; return 0; } @@ -3346,6 +3422,7 @@ static int push_jmp_history(struct bpf_verifier_env *env, struct bpf_verifier_st p->idx = env->insn_idx; p->prev_idx = env->prev_insn_idx; p->flags = insn_flags; + p->equal_scalars = equal_scalars; cur->jmp_history_cnt = cnt; return 0; @@ -3502,6 +3579,11 @@ static inline bool bt_is_reg_set(struct backtrack_state *bt, u32 reg) return bt->reg_masks[bt->frame] & (1 << reg); } +static inline bool bt_is_frame_reg_set(struct backtrack_state *bt, u32 frame, u32 reg) +{ + return bt->reg_masks[frame] & (1 << reg); +} + static inline bool bt_is_frame_slot_set(struct backtrack_state *bt, u32 frame, u32 slot) { return bt->stack_masks[frame] & (1ull << slot); @@ -3546,6 +3628,39 @@ static void fmt_stack_mask(char *buf, ssize_t buf_sz, u64 stack_mask) } } +/* If any register R in hist->equal_scalars is marked as precise in bt, + * do bt_set_frame_{reg,slot}(bt, R) for all registers in hist->equal_scalars. 
+ */ +static void bt_set_equal_scalars(struct backtrack_state *bt, struct bpf_jmp_history_entry *hist) +{ + bool is_reg, some_precise = false; + u64 equal_scalars; + u32 fr, spi; + + if (!hist || hist->equal_scalars == 0) + return; + + equal_scalars = hist->equal_scalars; + while (equal_scalars_pop(&equal_scalars, &fr, &spi, &is_reg)) { + if ((is_reg && bt_is_frame_reg_set(bt, fr, spi)) || + (!is_reg && bt_is_frame_slot_set(bt, fr, spi))) { + some_precise = true; + break; + } + } + + if (!some_precise) + return; + + equal_scalars = hist->equal_scalars; + while (equal_scalars_pop(&equal_scalars, &fr, &spi, &is_reg)) { + if (is_reg) + bt_set_frame_reg(bt, fr, spi); + else + bt_set_frame_slot(bt, fr, spi); + } +} + static bool calls_callback(struct bpf_verifier_env *env, int insn_idx); /* For given verifier state backtrack_insn() is called from the last insn to @@ -3802,6 +3917,7 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx, */ return 0; } else if (BPF_SRC(insn->code) == BPF_X) { + bt_set_equal_scalars(bt, hist); if (!bt_is_reg_set(bt, dreg) && !bt_is_reg_set(bt, sreg)) return 0; /* dreg sreg @@ -3812,6 +3928,9 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx, */ bt_set_reg(bt, dreg); bt_set_reg(bt, sreg); + bt_set_equal_scalars(bt, hist); + } else if (BPF_SRC(insn->code) == BPF_K) { + bt_set_equal_scalars(bt, hist); /* else dreg K * Only dreg still needs precision before * this insn, so for the K-based conditional @@ -4579,7 +4698,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env, } if (insn_flags) - return push_jmp_history(env, env->cur_state, insn_flags); + return push_jmp_history(env, env->cur_state, insn_flags, 0); return 0; } @@ -4884,7 +5003,7 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env, insn_flags = 0; /* we are not restoring spilled register */ } if (insn_flags) - return push_jmp_history(env, env->cur_state, insn_flags); + return push_jmp_history(env, env->cur_state, insn_flags, 0); return 0; } @@ -14835,16 +14954,58 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn, return true; } -static void find_equal_scalars(struct bpf_verifier_state *vstate, - struct bpf_reg_state *known_reg) +static void __find_equal_scalars(u64 *equal_scalars, + struct bpf_reg_state *reg, + u32 id, u32 frameno, u32 spi_or_reg, bool is_reg) { - struct bpf_func_state *state; + if (reg->type != SCALAR_VALUE || reg->id != id) + return; + + if (!equal_scalars_push(equal_scalars, frameno, spi_or_reg, is_reg)) + reg->id = 0; +} + +/* For all R being scalar registers or spilled scalar registers + * in verifier state, save R in equal_scalars if R->id == id. + * If there are too many Rs sharing same id, reset id for leftover Rs. 
+ */ +static void find_equal_scalars(struct bpf_verifier_state *vstate, u32 id, u64 *equal_scalars) +{ + struct bpf_func_state *func; struct bpf_reg_state *reg; + int i, j; - bpf_for_each_reg_in_vstate(vstate, state, reg, ({ - if (reg->type == SCALAR_VALUE && reg->id == known_reg->id) + for (i = vstate->curframe; i >= 0; i--) { + func = vstate->frame[i]; + for (j = 0; j < BPF_REG_FP; j++) { + reg = &func->regs[j]; + __find_equal_scalars(equal_scalars, reg, id, i, j, true); + } + for (j = 0; j < func->allocated_stack / BPF_REG_SIZE; j++) { + if (!is_spilled_reg(&func->stack[j])) + continue; + reg = &func->stack[j].spilled_ptr; + __find_equal_scalars(equal_scalars, reg, id, i, j, false); + } + } +} + +/* For all R in equal_scalars, copy known_reg range into R + * if R->id == known_reg->id. + */ +static void copy_known_reg(struct bpf_verifier_state *vstate, + struct bpf_reg_state *known_reg, u64 equal_scalars) +{ + struct bpf_reg_state *reg; + u32 fr, spi; + bool is_reg; + + while (equal_scalars_pop(&equal_scalars, &fr, &spi, &is_reg)) { + reg = is_reg ? &vstate->frame[fr]->regs[spi] + : &vstate->frame[fr]->stack[spi].spilled_ptr; + if (reg->id == known_reg->id) copy_register_state(reg, known_reg); - })); + } } static int check_cond_jmp_op(struct bpf_verifier_env *env, @@ -14857,6 +15018,7 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env, struct bpf_reg_state *eq_branch_regs; struct bpf_reg_state fake_reg = {}; u8 opcode = BPF_OP(insn->code); + u64 equal_scalars = 0; bool is_jmp32; int pred = -1; int err; @@ -14944,6 +15106,21 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env, return 0; } + /* Push scalar registers sharing same ID to jump history, + * do this before creating 'other_branch', so that both + * 'this_branch' and 'other_branch' share this history + * if parent state is created. + */ + if (BPF_SRC(insn->code) == BPF_X && src_reg->type == SCALAR_VALUE && src_reg->id) + find_equal_scalars(this_branch, src_reg->id, &equal_scalars); + if (dst_reg->type == SCALAR_VALUE && dst_reg->id) + find_equal_scalars(this_branch, dst_reg->id, &equal_scalars); + if (equal_scalars_size(equal_scalars) > 1) { + err = push_jmp_history(env, this_branch, 0, equal_scalars); + if (err) + return err; + } + other_branch = push_stack(env, *insn_idx + insn->off + 1, *insn_idx, false); if (!other_branch) @@ -14968,13 +15145,13 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env, if (BPF_SRC(insn->code) == BPF_X && src_reg->type == SCALAR_VALUE && src_reg->id && !WARN_ON_ONCE(src_reg->id != other_branch_regs[insn->src_reg].id)) { - find_equal_scalars(this_branch, src_reg); - find_equal_scalars(other_branch, &other_branch_regs[insn->src_reg]); + copy_known_reg(this_branch, src_reg, equal_scalars); + copy_known_reg(other_branch, &other_branch_regs[insn->src_reg], equal_scalars); } if (dst_reg->type == SCALAR_VALUE && dst_reg->id && !WARN_ON_ONCE(dst_reg->id != other_branch_regs[insn->dst_reg].id)) { - find_equal_scalars(this_branch, dst_reg); - find_equal_scalars(other_branch, &other_branch_regs[insn->dst_reg]); + copy_known_reg(this_branch, dst_reg, equal_scalars); + copy_known_reg(other_branch, &other_branch_regs[insn->dst_reg], equal_scalars); } /* if one pointer register is compared to another pointer @@ -17213,7 +17390,7 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx) * the current state. */ if (is_jmp_point(env, env->insn_idx)) - err = err ? : push_jmp_history(env, cur, 0); + err = err ? : push_jmp_history(env, cur, 0, 0); err = err ? 
: propagate_precision(env, &sl->state); if (err) return err; @@ -17477,7 +17654,7 @@ static int do_check(struct bpf_verifier_env *env) } if (is_jmp_point(env, env->insn_idx)) { - err = push_jmp_history(env, state, 0); + err = push_jmp_history(env, state, 0, 0); if (err) return err; } diff --git a/tools/testing/selftests/bpf/progs/verifier_subprog_precision.c b/tools/testing/selftests/bpf/progs/verifier_subprog_precision.c index 6f5d19665cf6..2c7261834149 100644 --- a/tools/testing/selftests/bpf/progs/verifier_subprog_precision.c +++ b/tools/testing/selftests/bpf/progs/verifier_subprog_precision.c @@ -191,7 +191,7 @@ __msg("mark_precise: frame0: last_idx 14 first_idx 9") __msg("mark_precise: frame0: regs=r6 stack= before 13: (bf) r1 = r7") __msg("mark_precise: frame0: regs=r6 stack= before 12: (27) r6 *= 4") __msg("mark_precise: frame0: regs=r6 stack= before 11: (25) if r6 > 0x3 goto pc+4") -__msg("mark_precise: frame0: regs=r6 stack= before 10: (bf) r6 = r0") +__msg("mark_precise: frame0: regs=r0,r6 stack= before 10: (bf) r6 = r0") __msg("mark_precise: frame0: regs=r0 stack= before 9: (85) call bpf_loop") /* State entering callback body popped from states stack */ __msg("from 9 to 17: frame1:") diff --git a/tools/testing/selftests/bpf/verifier/precise.c b/tools/testing/selftests/bpf/verifier/precise.c index 0a9293a57211..64d722199e8f 100644 --- a/tools/testing/selftests/bpf/verifier/precise.c +++ b/tools/testing/selftests/bpf/verifier/precise.c @@ -44,7 +44,7 @@ mark_precise: frame0: regs=r2 stack= before 23\ mark_precise: frame0: regs=r2 stack= before 22\ mark_precise: frame0: regs=r2 stack= before 20\ - mark_precise: frame0: parent state regs=r2 stack=:\ + mark_precise: frame0: parent state regs=r2,r9 stack=:\ mark_precise: frame0: last_idx 19 first_idx 10\ mark_precise: frame0: regs=r2,r9 stack= before 19\ mark_precise: frame0: regs=r9 stack= before 18\ From patchwork Thu Feb 22 00:50:04 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13566645 X-Patchwork-Delegate: bpf@iogearbox.net Received: from mail-wm1-f54.google.com (mail-wm1-f54.google.com [209.85.128.54]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 552F4EDC for ; Thu, 22 Feb 2024 00:50:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.54 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1708563032; cv=none; b=H50J+m1OqohMyo1XoeihrSTOHXVz/7Lc8qva//gJeQDbTvZ3aW3hD+TUWcSLVoym+Y0tuy+4HfNBfMhbEwYZqppEsDMyWk5PrpaC0PvttLjOyZQ4DZ+dNtJXYQYeh6L8IzYLM73C+e//lLaU36UDL+HTOkNhkNk39PPZ5dfsfeY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1708563032; c=relaxed/simple; bh=+quqcVFYvn4rfLVSvK2Be+9At7mrGUaitErENPiJyoM=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=lJc4RhJDnOJm5q3vZlgltE7cSn5x73l2Ppk5rda/EB8nUA2iE636eTa5bBsV+dJAH85xm/gqlYtjK6RtvjfcHe87faliwWq4MhHmtVeGAJj/I6j23yCVrhyzGnZ7l0KNrXapXx1pJepgQ9XeSLZphrQmMaSzTEvf93U9NRAhOMY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=WrU6ySuc; arc=none smtp.client-ip=209.85.128.54 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com 
From: Eduard Zingerman
To: bpf@vger.kernel.org, ast@kernel.org
Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yonghong.song@linux.dev, sunhao.th@gmail.com, Eduard Zingerman
Subject: [PATCH bpf-next 3/4] bpf: remove mark_precise_scalar_ids()
Date: Thu, 22 Feb 2024 02:50:04 +0200
Message-ID: <20240222005005.31784-4-eddyz87@gmail.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240222005005.31784-1-eddyz87@gmail.com>
References: <20240222005005.31784-1-eddyz87@gmail.com>

The function mark_precise_scalar_ids() is superseded by bt_set_equal_scalars() and by equal scalars tracking in the jump history. mark_precise_scalar_ids() propagates precision over registers sharing the same ID at parent/child state boundaries, while jump history records allow bt_set_equal_scalars() to propagate the same information with instruction-level granularity, which is strictly more precise.
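For illustration, here is an editorial sketch (not part of the patch) of a sequence where the two approaches differ; it mirrors the removed precision_same_state selftest:

    call bpf_ktime_get_ns
    r0 &= 0xff
    r1 = r0        // r0.id == r1.id, but no conditional jump ever
                   // propagates range between the two registers
    r2 = r10
    r2 += r0       // r0 has to be marked precise

When backtracking per state, mark_precise_scalar_ids() also marks r1 as precise, because r0 and r1 still share an ID at the state boundary. With per-instruction jump history tracking nothing is recorded for this sequence (there is no conditional jump), so only r0 is marked, which is sufficient.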
This commit removes mark_precise_scalar_ids() and updates test cases in progs/verifier_scalar_ids to reflect the new verifier behavior. The tests are updated in the following manner:
- mark_precise_scalar_ids() propagated precision regardless of the presence of conditional jumps, while the new jump-history-based logic only kicks in when conditional jumps are present. Hence, test cases are augmented with conditional jumps to still trigger precision propagation.
- As equal scalars tracking no longer relies on parent/child state boundaries, some test cases are no longer interesting and are removed, namely:
  - precision_same_state and precision_cross_state are superseded by equal_scalars_bpf_k;
  - precision_same_state_broken_link and precision_cross_state_broken_link are superseded by equal_scalars_broken_link.

Signed-off-by: Eduard Zingerman
---
 kernel/bpf/verifier.c                          | 115 ------------
 .../selftests/bpf/progs/verifier_scalar_ids.c  | 171 ++++++------------
 .../testing/selftests/bpf/verifier/precise.c   |   8 +-
 3 files changed, 59 insertions(+), 235 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b95b6842703c..921aee8b12f6 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4076,96 +4076,6 @@ static void mark_all_scalars_imprecise(struct bpf_verifier_env *env, struct bpf_
 	}
 }
 
-static bool idset_contains(struct bpf_idset *s, u32 id)
-{
-	u32 i;
-
-	for (i = 0; i < s->count; ++i)
-		if (s->ids[i] == id)
-			return true;
-
-	return false;
-}
-
-static int idset_push(struct bpf_idset *s, u32 id)
-{
-	if (WARN_ON_ONCE(s->count >= ARRAY_SIZE(s->ids)))
-		return -EFAULT;
-	s->ids[s->count++] = id;
-	return 0;
-}
-
-static void idset_reset(struct bpf_idset *s)
-{
-	s->count = 0;
-}
-
-/* Collect a set of IDs for all registers currently marked as precise in env->bt.
- * Mark all registers with these IDs as precise.
- */ -static int mark_precise_scalar_ids(struct bpf_verifier_env *env, struct bpf_verifier_state *st) -{ - struct bpf_idset *precise_ids = &env->idset_scratch; - struct backtrack_state *bt = &env->bt; - struct bpf_func_state *func; - struct bpf_reg_state *reg; - DECLARE_BITMAP(mask, 64); - int i, fr; - - idset_reset(precise_ids); - - for (fr = bt->frame; fr >= 0; fr--) { - func = st->frame[fr]; - - bitmap_from_u64(mask, bt_frame_reg_mask(bt, fr)); - for_each_set_bit(i, mask, 32) { - reg = &func->regs[i]; - if (!reg->id || reg->type != SCALAR_VALUE) - continue; - if (idset_push(precise_ids, reg->id)) - return -EFAULT; - } - - bitmap_from_u64(mask, bt_frame_stack_mask(bt, fr)); - for_each_set_bit(i, mask, 64) { - if (i >= func->allocated_stack / BPF_REG_SIZE) - break; - if (!is_spilled_scalar_reg(&func->stack[i])) - continue; - reg = &func->stack[i].spilled_ptr; - if (!reg->id) - continue; - if (idset_push(precise_ids, reg->id)) - return -EFAULT; - } - } - - for (fr = 0; fr <= st->curframe; ++fr) { - func = st->frame[fr]; - - for (i = BPF_REG_0; i < BPF_REG_10; ++i) { - reg = &func->regs[i]; - if (!reg->id) - continue; - if (!idset_contains(precise_ids, reg->id)) - continue; - bt_set_frame_reg(bt, fr, i); - } - for (i = 0; i < func->allocated_stack / BPF_REG_SIZE; ++i) { - if (!is_spilled_scalar_reg(&func->stack[i])) - continue; - reg = &func->stack[i].spilled_ptr; - if (!reg->id) - continue; - if (!idset_contains(precise_ids, reg->id)) - continue; - bt_set_frame_slot(bt, fr, i); - } - } - - return 0; -} - /* * __mark_chain_precision() backtracks BPF program instruction sequence and * chain of verifier states making sure that register *regno* (if regno >= 0) @@ -4298,31 +4208,6 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno) bt->frame, last_idx, first_idx, subseq_idx); } - /* If some register with scalar ID is marked as precise, - * make sure that all registers sharing this ID are also precise. - * This is needed to estimate effect of find_equal_scalars(). - * Do this at the last instruction of each state, - * bpf_reg_state::id fields are valid for these instructions. - * - * Allows to track precision in situation like below: - * - * r2 = unknown value - * ... - * --- state #0 --- - * ... - * r1 = r2 // r1 and r2 now share the same ID - * ... - * --- state #1 {r1.id = A, r2.id = A} --- - * ... - * if (r2 > 10) goto exit; // find_equal_scalars() assigns range to r1 - * ... - * --- state #2 {r1.id = A, r2.id = A} --- - * r3 = r10 - * r3 += r1 // need to mark both r1 and r2 - */ - if (mark_precise_scalar_ids(env, st)) - return -EFAULT; - if (last_idx < 0) { /* we are at the entry into subprog, which * is expected for global funcs, but only if diff --git a/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c index 13b29a7faa71..639db72b1c55 100644 --- a/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c +++ b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c @@ -5,54 +5,27 @@ #include "bpf_misc.h" /* Check that precision marks propagate through scalar IDs. - * Registers r{0,1,2} have the same scalar ID at the moment when r0 is - * marked to be precise, this mark is immediately propagated to r{1,2}. + * Registers r{0,1,2} have the same scalar ID. + * Range information is propagated for scalars sharing same ID. + * Check that precision mark for r0 causes precision marks for r{1,2} + * when range information is propagated for 'if ' insn. 
*/ SEC("socket") __success __log_level(2) -__msg("frame0: regs=r0,r1,r2 stack= before 4: (bf) r3 = r10") -__msg("frame0: regs=r0,r1,r2 stack= before 3: (bf) r2 = r0") -__msg("frame0: regs=r0,r1 stack= before 2: (bf) r1 = r0") -__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255") -__msg("frame0: regs=r0 stack= before 0: (85) call bpf_ktime_get_ns") -__flag(BPF_F_TEST_STATE_FREQ) -__naked void precision_same_state(void) -{ - asm volatile ( - /* r0 = random number up to 0xff */ - "call %[bpf_ktime_get_ns];" - "r0 &= 0xff;" - /* tie r0.id == r1.id == r2.id */ - "r1 = r0;" - "r2 = r0;" - /* force r0 to be precise, this immediately marks r1 and r2 as - * precise as well because of shared IDs - */ - "r3 = r10;" - "r3 += r0;" - "r0 = 0;" - "exit;" - : - : __imm(bpf_ktime_get_ns) - : __clobber_all); -} - -/* Same as precision_same_state, but mark propagates through state / - * parent state boundary. - */ -SEC("socket") -__success __log_level(2) -__msg("frame0: last_idx 6 first_idx 5 subseq_idx -1") -__msg("frame0: regs=r0,r1,r2 stack= before 5: (bf) r3 = r10") +/* first 'if' branch */ +__msg("6: (0f) r3 += r0") +__msg("frame0: regs=r0 stack= before 4: (25) if r1 > 0x7 goto pc+0") __msg("frame0: parent state regs=r0,r1,r2 stack=:") -__msg("frame0: regs=r0,r1,r2 stack= before 4: (05) goto pc+0") __msg("frame0: regs=r0,r1,r2 stack= before 3: (bf) r2 = r0") -__msg("frame0: regs=r0,r1 stack= before 2: (bf) r1 = r0") -__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255") -__msg("frame0: parent state regs=r0 stack=:") -__msg("frame0: regs=r0 stack= before 0: (85) call bpf_ktime_get_ns") +/* second 'if' branch */ +__msg("from 4 to 5: ") +__msg("6: (0f) r3 += r0") +__msg("frame0: regs=r0 stack= before 5: (bf) r3 = r10") +__msg("frame0: regs=r0 stack= before 4: (25) if r1 > 0x7 goto pc+0") +/* parent state already has r{0,1,2} as precise */ +__msg("frame0: parent state regs= stack=:") __flag(BPF_F_TEST_STATE_FREQ) -__naked void precision_cross_state(void) +__naked void equal_scalars_bpf_k(void) { asm volatile ( /* r0 = random number up to 0xff */ @@ -61,9 +34,8 @@ __naked void precision_cross_state(void) /* tie r0.id == r1.id == r2.id */ "r1 = r0;" "r2 = r0;" - /* force checkpoint */ - "goto +0;" - /* force r0 to be precise, this immediately marks r1 and r2 as + "if r1 > 7 goto +0;" + /* force r0 to be precise, this eventually marks r1 and r2 as * precise as well because of shared IDs */ "r3 = r10;" @@ -75,59 +47,18 @@ __naked void precision_cross_state(void) : __clobber_all); } -/* Same as precision_same_state, but break one of the +/* Same as equal_scalars_bpf_k, but break one of the * links, note that r1 is absent from regs=... in __msg below. 
*/ SEC("socket") __success __log_level(2) -__msg("frame0: regs=r0,r2 stack= before 5: (bf) r3 = r10") -__msg("frame0: regs=r0,r2 stack= before 4: (b7) r1 = 0") -__msg("frame0: regs=r0,r2 stack= before 3: (bf) r2 = r0") -__msg("frame0: regs=r0 stack= before 2: (bf) r1 = r0") -__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255") -__msg("frame0: regs=r0 stack= before 0: (85) call bpf_ktime_get_ns") -__flag(BPF_F_TEST_STATE_FREQ) -__naked void precision_same_state_broken_link(void) -{ - asm volatile ( - /* r0 = random number up to 0xff */ - "call %[bpf_ktime_get_ns];" - "r0 &= 0xff;" - /* tie r0.id == r1.id == r2.id */ - "r1 = r0;" - "r2 = r0;" - /* break link for r1, this is the only line that differs - * compared to the previous test - */ - "r1 = 0;" - /* force r0 to be precise, this immediately marks r1 and r2 as - * precise as well because of shared IDs - */ - "r3 = r10;" - "r3 += r0;" - "r0 = 0;" - "exit;" - : - : __imm(bpf_ktime_get_ns) - : __clobber_all); -} - -/* Same as precision_same_state_broken_link, but with state / - * parent state boundary. - */ -SEC("socket") -__success __log_level(2) -__msg("frame0: regs=r0,r2 stack= before 6: (bf) r3 = r10") -__msg("frame0: regs=r0,r2 stack= before 5: (b7) r1 = 0") -__msg("frame0: parent state regs=r0,r2 stack=:") -__msg("frame0: regs=r0,r1,r2 stack= before 4: (05) goto pc+0") -__msg("frame0: regs=r0,r1,r2 stack= before 3: (bf) r2 = r0") -__msg("frame0: regs=r0,r1 stack= before 2: (bf) r1 = r0") -__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255") +__msg("7: (0f) r3 += r0") +__msg("frame0: regs=r0 stack= before 6: (bf) r3 = r10") __msg("frame0: parent state regs=r0 stack=:") -__msg("frame0: regs=r0 stack= before 0: (85) call bpf_ktime_get_ns") +__msg("frame0: regs=r0 stack= before 5: (25) if r0 > 0x7 goto pc+0") +__msg("frame0: parent state regs=r0,r2 stack=:") __flag(BPF_F_TEST_STATE_FREQ) -__naked void precision_cross_state_broken_link(void) +__naked void equal_scalars_broken_link(void) { asm volatile ( /* r0 = random number up to 0xff */ @@ -136,18 +67,13 @@ __naked void precision_cross_state_broken_link(void) /* tie r0.id == r1.id == r2.id */ "r1 = r0;" "r2 = r0;" - /* force checkpoint, although link between r1 and r{0,2} is - * broken by the next statement current precision tracking - * algorithm can't react to it and propagates mark for r1 to - * the parent state. 
- */ - "goto +0;" /* break link for r1, this is the only line that differs - * compared to precision_cross_state() + * compared to the previous test */ "r1 = 0;" - /* force r0 to be precise, this immediately marks r1 and r2 as - * precise as well because of shared IDs + "if r0 > 7 goto +0;" + /* force r0 to be precise, + * this eventually marks r2 as precise because of shared IDs */ "r3 = r10;" "r3 += r0;" @@ -164,10 +90,16 @@ __naked void precision_cross_state_broken_link(void) */ SEC("socket") __success __log_level(2) -__msg("11: (0f) r2 += r1") +__msg("12: (0f) r2 += r1") /* Current state */ -__msg("frame2: last_idx 11 first_idx 10 subseq_idx -1") -__msg("frame2: regs=r1 stack= before 10: (bf) r2 = r10") +__msg("frame2: last_idx 12 first_idx 11 subseq_idx -1 ") +__msg("frame2: regs=r1 stack= before 11: (bf) r2 = r10") +__msg("frame2: parent state regs=r1 stack=") +__msg("frame1: parent state regs= stack=") +__msg("frame0: parent state regs= stack=") +/* Parent state */ +__msg("frame2: last_idx 10 first_idx 10 subseq_idx 11 ") +__msg("frame2: regs=r1 stack= before 10: (25) if r1 > 0x7 goto pc+0") __msg("frame2: parent state regs=r1 stack=") /* frame1.r{6,7} are marked because mark_precise_scalar_ids() * looks for all registers with frame2.r1.id in the current state @@ -192,7 +124,7 @@ __msg("frame1: regs=r1 stack= before 4: (85) call pc+1") __msg("frame0: parent state regs=r1,r6 stack=") /* Parent state */ __msg("frame0: last_idx 3 first_idx 1 subseq_idx 4") -__msg("frame0: regs=r0,r1,r6 stack= before 3: (bf) r6 = r0") +__msg("frame0: regs=r1,r6 stack= before 3: (bf) r6 = r0") __msg("frame0: regs=r0,r1 stack= before 2: (bf) r1 = r0") __msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255") __flag(BPF_F_TEST_STATE_FREQ) @@ -230,7 +162,8 @@ static __naked __noinline __used void precision_many_frames__bar(void) { asm volatile ( - /* force r1 to be precise, this immediately marks: + "if r1 > 7 goto +0;" + /* force r1 to be precise, this eventually marks: * - bar frame r1 * - foo frame r{1,6,7} * - main frame r{1,6} @@ -247,14 +180,16 @@ void precision_many_frames__bar(void) */ SEC("socket") __success __log_level(2) +__msg("11: (0f) r2 += r1") /* foo frame */ -__msg("frame1: regs=r1 stack=-8,-16 before 9: (bf) r2 = r10") +__msg("frame1: regs=r1 stack= before 10: (bf) r2 = r10") +__msg("frame1: regs=r1 stack= before 9: (25) if r1 > 0x7 goto pc+0") __msg("frame1: regs=r1 stack=-8,-16 before 8: (7b) *(u64 *)(r10 -16) = r1") __msg("frame1: regs=r1 stack=-8 before 7: (7b) *(u64 *)(r10 -8) = r1") __msg("frame1: regs=r1 stack= before 4: (85) call pc+2") /* main frame */ -__msg("frame0: regs=r0,r1 stack=-8 before 3: (7b) *(u64 *)(r10 -8) = r1") -__msg("frame0: regs=r0,r1 stack= before 2: (bf) r1 = r0") +__msg("frame0: regs=r1 stack=-8 before 3: (7b) *(u64 *)(r10 -8) = r1") +__msg("frame0: regs=r1 stack= before 2: (bf) r1 = r0") __msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255") __flag(BPF_F_TEST_STATE_FREQ) __naked void precision_stack(void) @@ -283,7 +218,8 @@ void precision_stack__foo(void) */ "*(u64*)(r10 - 8) = r1;" "*(u64*)(r10 - 16) = r1;" - /* force r1 to be precise, this immediately marks: + "if r1 > 7 goto +0;" + /* force r1 to be precise, this eventually marks: * - foo frame r1,fp{-8,-16} * - main frame r1,fp{-8} */ @@ -299,15 +235,17 @@ void precision_stack__foo(void) SEC("socket") __success __log_level(2) /* r{6,7} */ -__msg("11: (0f) r3 += r7") -__msg("frame0: regs=r6,r7 stack= before 10: (bf) r3 = r10") +__msg("12: (0f) r3 += r7") +__msg("frame0: regs=r7 stack= before 11: (bf) r3 
= r10") +__msg("frame0: regs=r7 stack= before 9: (25) if r7 > 0x7 goto pc+0") /* ... skip some insns ... */ __msg("frame0: regs=r6,r7 stack= before 3: (bf) r7 = r0") __msg("frame0: regs=r0,r6 stack= before 2: (bf) r6 = r0") /* r{8,9} */ -__msg("12: (0f) r3 += r9") -__msg("frame0: regs=r8,r9 stack= before 11: (0f) r3 += r7") +__msg("13: (0f) r3 += r9") +__msg("frame0: regs=r9 stack= before 12: (0f) r3 += r7") /* ... skip some insns ... */ +__msg("frame0: regs=r9 stack= before 10: (25) if r9 > 0x7 goto pc+0") __msg("frame0: regs=r8,r9 stack= before 7: (bf) r9 = r0") __msg("frame0: regs=r0,r8 stack= before 6: (bf) r8 = r0") __flag(BPF_F_TEST_STATE_FREQ) @@ -328,8 +266,9 @@ __naked void precision_two_ids(void) "r9 = r0;" /* clear r0 id */ "r0 = 0;" - /* force checkpoint */ - "goto +0;" + /* propagate equal scalars precision */ + "if r7 > 7 goto +0;" + "if r9 > 7 goto +0;" "r3 = r10;" /* force r7 to be precise, this also marks r6 */ "r3 += r7;" diff --git a/tools/testing/selftests/bpf/verifier/precise.c b/tools/testing/selftests/bpf/verifier/precise.c index 64d722199e8f..59a020c35647 100644 --- a/tools/testing/selftests/bpf/verifier/precise.c +++ b/tools/testing/selftests/bpf/verifier/precise.c @@ -106,7 +106,7 @@ mark_precise: frame0: regs=r2 stack= before 22\ mark_precise: frame0: parent state regs=r2 stack=:\ mark_precise: frame0: last_idx 20 first_idx 20\ - mark_precise: frame0: regs=r2,r9 stack= before 20\ + mark_precise: frame0: regs=r2 stack= before 20\ mark_precise: frame0: parent state regs=r2,r9 stack=:\ mark_precise: frame0: last_idx 19 first_idx 17\ mark_precise: frame0: regs=r2,r9 stack= before 19\ @@ -183,10 +183,10 @@ .prog_type = BPF_PROG_TYPE_XDP, .flags = BPF_F_TEST_STATE_FREQ, .errstr = "mark_precise: frame0: last_idx 7 first_idx 7\ - mark_precise: frame0: parent state regs=r4 stack=-8:\ + mark_precise: frame0: parent state regs=r4 stack=:\ mark_precise: frame0: last_idx 6 first_idx 4\ - mark_precise: frame0: regs=r4 stack=-8 before 6: (b7) r0 = -1\ - mark_precise: frame0: regs=r4 stack=-8 before 5: (79) r4 = *(u64 *)(r10 -8)\ + mark_precise: frame0: regs=r4 stack= before 6: (b7) r0 = -1\ + mark_precise: frame0: regs=r4 stack= before 5: (79) r4 = *(u64 *)(r10 -8)\ mark_precise: frame0: regs= stack=-8 before 4: (7b) *(u64 *)(r3 -8) = r0\ mark_precise: frame0: parent state regs=r0 stack=:\ mark_precise: frame0: last_idx 3 first_idx 3\ From patchwork Thu Feb 22 00:50:05 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13566646 X-Patchwork-Delegate: bpf@iogearbox.net Received: from mail-wr1-f45.google.com (mail-wr1-f45.google.com [209.85.221.45]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 80CE011712 for ; Thu, 22 Feb 2024 00:50:31 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.221.45 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1708563033; cv=none; b=QaG5pZFY+aQ5HhzJBBLg96E/I4QnQ2gaN2z7g6Qru8XB8MPbsigz63D/zhfNeIa2KlnmWbCXYxMzMcP7UwTMZ1GpKV4BSRjAXcEB3cBiPI8D0R+wQZ5/sv9ShTkg/RYGIl6V5oakdvwMqH9j3shuzpRSLyuE8mJpGaEcjfUfTSw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1708563033; c=relaxed/simple; bh=sm/zi4BAYhK0oXd2Ui+UTSdQgAGy1sC/8HLhXDkPggY=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; 
[176.36.0.241]) by smtp.gmail.com with ESMTPSA id i17-20020a05600c355100b0041279ac13adsm2031992wmq.36.2024.02.21.16.50.28 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 21 Feb 2024 16:50:28 -0800 (PST) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yonghong.song@linux.dev, sunhao.th@gmail.com, Eduard Zingerman Subject: [PATCH bpf-next 4/4] selftests/bpf: tests for per-insn find_equal_scalars() precision tracking Date: Thu, 22 Feb 2024 02:50:05 +0200 Message-ID: <20240222005005.31784-5-eddyz87@gmail.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240222005005.31784-1-eddyz87@gmail.com> References: <20240222005005.31784-1-eddyz87@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net Add a few test cases to verify precision tracking for scalars gaining range because of find_equal_scalars(): - check what happens when more than 6 registers might gain range in find_equal_scalars(); - check if precision is propagated correctly when operand of conditional jump gained range in find_equal_scalars() and one of linked registers is marked precise; - check if precision is propagated correctly when operand of conditional jump gained range in find_equal_scalars() and a other-linked operand of the conditional jump is marked precise; - add a minimized reproducer for precision tracking bug reported in [0]; - Check that mark_chain_precision() for one of the conditional jump operands does not trigger equal scalars precision propagation. [0] https://lore.kernel.org/bpf/CAEf4BzZ0xidVCqB47XnkXcNhkPWF6_nTV7yt+_Lf0kcFEut2Mg@mail.gmail.com/ Signed-off-by: Eduard Zingerman --- .../selftests/bpf/progs/verifier_scalar_ids.c | 165 ++++++++++++++++++ 1 file changed, 165 insertions(+) diff --git a/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c index 639db72b1c55..993c5affb3d7 100644 --- a/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c +++ b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c @@ -47,6 +47,72 @@ __naked void equal_scalars_bpf_k(void) : __clobber_all); } +/* Registers r{0,1,2} share same ID when 'if r1 > ...' insn is processed, + * check that verifier marks r{1,2} as precise while backtracking + * 'if r1 > ...' with r0 already marked. + */ +SEC("socket") +__success __log_level(2) +__flag(BPF_F_TEST_STATE_FREQ) +__msg("frame0: regs=r0 stack= before 5: (2d) if r1 > r3 goto pc+0") +__msg("frame0: parent state regs=r0,r1,r2,r3 stack=:") +__msg("frame0: regs=r0,r1,r2,r3 stack= before 4: (b7) r3 = 7") +__naked void equal_scalars_bpf_x_src(void) +{ + asm volatile ( + /* r0 = random number up to 0xff */ + "call %[bpf_ktime_get_ns];" + "r0 &= 0xff;" + /* tie r0.id == r1.id == r2.id */ + "r1 = r0;" + "r2 = r0;" + "r3 = 7;" + "if r1 > r3 goto +0;" + /* force r0 to be precise, this eventually marks r1 and r2 as + * precise as well because of shared IDs + */ + "r4 = r10;" + "r4 += r0;" + "r0 = 0;" + "exit;" + : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +/* Registers r{0,1,2} share same ID when 'if r1 > r3' insn is processed, + * check that verifier marks r{0,1,2} as precise while backtracking + * 'if r1 > r3' with r3 already marked. 
+ */ +SEC("socket") +__success __log_level(2) +__flag(BPF_F_TEST_STATE_FREQ) +__msg("frame0: regs=r3 stack= before 5: (2d) if r1 > r3 goto pc+0") +__msg("frame0: parent state regs=r0,r1,r2,r3 stack=:") +__msg("frame0: regs=r0,r1,r2,r3 stack= before 4: (b7) r3 = 7") +__naked void equal_scalars_bpf_x_dst(void) +{ + asm volatile ( + /* r0 = random number up to 0xff */ + "call %[bpf_ktime_get_ns];" + "r0 &= 0xff;" + /* tie r0.id == r1.id == r2.id */ + "r1 = r0;" + "r2 = r0;" + "r3 = 7;" + "if r1 > r3 goto +0;" + /* force r0 to be precise, this eventually marks r1 and r2 as + * precise as well because of shared IDs + */ + "r4 = r10;" + "r4 += r3;" + "r0 = 0;" + "exit;" + : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + /* Same as equal_scalars_bpf_k, but break one of the * links, note that r1 is absent from regs=... in __msg below. */ @@ -280,6 +346,105 @@ __naked void precision_two_ids(void) : __clobber_all); } +SEC("socket") +__success __log_level(2) +__flag(BPF_F_TEST_STATE_FREQ) +/* check thar r0 and r6 have different IDs after 'if', + * find_equal_scalars() can't tie more than 6 registers for a single insn. + */ +__msg("8: (25) if r0 > 0x7 goto pc+0 ; R0=scalar(id=1") +__msg("9: (bf) r6 = r6 ; R6_w=scalar(id=2") +/* check that r{0-5} are marked precise after 'if' */ +__msg("frame0: regs=r0 stack= before 8: (25) if r0 > 0x7 goto pc+0") +__msg("frame0: parent state regs=r0,r1,r2,r3,r4,r5 stack=:") +__naked void equal_scalars_too_many_regs(void) +{ + asm volatile ( + /* r0 = random number up to 0xff */ + "call %[bpf_ktime_get_ns];" + "r0 &= 0xff;" + /* tie r{0-6} IDs */ + "r1 = r0;" + "r2 = r0;" + "r3 = r0;" + "r4 = r0;" + "r5 = r0;" + "r6 = r0;" + /* propagate range for r{0-6} */ + "if r0 > 7 goto +0;" + /* make r6 appear in the log */ + "r6 = r6;" + /* force r0 to be precise, + * this would cause r{0-4} to be precise because of shared IDs + */ + "r7 = r10;" + "r7 += r0;" + "r0 = 0;" + "exit;" + : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__failure __log_level(2) +__flag(BPF_F_TEST_STATE_FREQ) +__msg("regs=r7 stack= before 5: (3d) if r8 >= r0") +__msg("parent state regs=r0,r7,r8") +__msg("regs=r0,r7,r8 stack= before 4: (25) if r0 > 0x1") +__msg("div by zero") +__naked void equal_scalars_broken_link_2(void) +{ + asm volatile ( + "call %[bpf_get_prandom_u32];" + "r7 = r0;" + "r8 = r0;" + "call %[bpf_get_prandom_u32];" + "if r0 > 1 goto +0;" + /* r7.id == r8.id, + * thus r7 precision implies r8 precision, + * which implies r0 precision because of the conditional below. + */ + "if r8 >= r0 goto 1f;" + /* break id relation between r7 and r8 */ + "r8 += r8;" + /* make r7 precise */ + "if r7 == 0 goto 1f;" + "r0 /= 0;" +"1:" + "r0 = 42;" + "exit;" + : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +/* Check that mark_chain_precision() for one of the conditional jump + * operands does not trigger equal scalars precision propagation. + */ +SEC("socket") +__success __log_level(2) +__msg("3: (25) if r1 > 0x100 goto pc+0") +__msg("frame0: regs=r1 stack= before 2: (bf) r1 = r0") +__naked void cjmp_no_equal_scalars_trigger(void) +{ + asm volatile ( + /* r0 = random number up to 0xff */ + "call %[bpf_ktime_get_ns];" + "r0 &= 0xff;" + /* tie r0.id == r1.id */ + "r1 = r0;" + /* the jump below would be predicted, thus r1 would be marked precise, + * this should not imply precision mark for r0 + */ + "if r1 > 256 goto +0;" + "r0 = 0;" + "exit;" + : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + /* Verify that check_ids() is used by regsafe() for scalars. 
* * r9 = ... some pointer with range X ...