From patchwork Mon Jan 8 20:51:55 2024
Miller" , Jakub Kicinski , Jesper Dangaard Brouer , bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, netdev@vger.kernel.org, Maxim Mikityanskiy Subject: [PATCH bpf-next v2 01/15] selftests/bpf: Fix the u64_offset_to_skb_data test Date: Mon, 8 Jan 2024 22:51:55 +0200 Message-ID: <20240108205209.838365-2-maxtram95@gmail.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240108205209.838365-1-maxtram95@gmail.com> References: <20240108205209.838365-1-maxtram95@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net From: Maxim Mikityanskiy The u64_offset_to_skb_data test is supposed to make a 64-bit fill, but instead makes a 16-bit one. Fix the test according to its intention and update the comments accordingly (umax is no longer 0xffff). The 16-bit fill is covered by u16_offset_to_skb_data. Signed-off-by: Maxim Mikityanskiy Acked-by: Eduard Zingerman --- tools/testing/selftests/bpf/progs/verifier_spill_fill.c | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c index 39fe3372e0e0..848f2930f599 100644 --- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c +++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c @@ -243,7 +243,7 @@ l0_%=: r0 = 0; \ SEC("tc") __description("Spill u32 const scalars. Refill as u64. Offset to skb->data") -__failure __msg("invalid access to packet") +__failure __msg("math between pkt pointer and register with unbounded min value is not allowed") __naked void u64_offset_to_skb_data(void) { asm volatile (" \ @@ -253,13 +253,11 @@ __naked void u64_offset_to_skb_data(void) w7 = 20; \ *(u32*)(r10 - 4) = r6; \ *(u32*)(r10 - 8) = r7; \ - r4 = *(u16*)(r10 - 8); \ + r4 = *(u64*)(r10 - 8); \ r0 = r2; \ - /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */\ + /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4= */ \ r0 += r4; \ - /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */\ if r0 > r3 goto l0_%=; \ - /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */\ r0 = *(u32*)(r2 + 0); \ l0_%=: r0 = 0; \ exit; \ From patchwork Mon Jan 8 20:51:56 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Mikityanskiy X-Patchwork-Id: 13513985 X-Patchwork-Delegate: bpf@iogearbox.net Received: from mail-wm1-f46.google.com (mail-wm1-f46.google.com [209.85.128.46]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 89B2D5467C; Mon, 8 Jan 2024 20:52:31 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="Haj2S4Sn" Received: by mail-wm1-f46.google.com with SMTP id 5b1f17b1804b1-40d5b89e2bfso24654625e9.0; Mon, 08 Jan 2024 12:52:31 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1704747150; x=1705351950; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=CcpIKqYveGeGHn6cD90ywmKwn6I2uEhHJBOWNqtefkQ=; 
Subject: [PATCH bpf-next v2 02/15] bpf: make infinite loop detection in is_state_visited() exact
Date: Mon, 8 Jan 2024 22:51:56 +0200
Message-ID: <20240108205209.838365-3-maxtram95@gmail.com>
In-Reply-To: <20240108205209.838365-1-maxtram95@gmail.com>

From: Eduard Zingerman

The current infinite loop detection mechanism is speculative:
- first, the states_maybe_looping() check is done, which simply does a
  memcmp of R1-R10 in the current frame;
- second, states_equal(..., exact=false) is called.

With exact=false, states_equal() compares scalars for equality only if
the scalar in the old state has a precision mark. Such logic might be
problematic if the compiler makes some unlucky stack spill/fill
decisions. An artificial example of a false positive looks as follows:

        r0 = ... unknown scalar ...
        r0 &= 0xff;
        *(u64 *)(r10 - 8) = r0;
        r0 = 0;
loop:
        r0 = *(u64 *)(r10 - 8);
        if r0 > 10 goto exit_;
        r0 += 1;
        *(u64 *)(r10 - 8) = r0;
        r0 = 0;
        goto loop;

This commit updates the call to states_equal() to use exact=true,
forcing all scalar comparisons to be exact.
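
For illustration, here is a toy userspace model (the struct and helper
names are invented; this is not the kernel's actual states_equal() or
regsafe() logic) of the difference between the two comparison modes:

        #include <stdbool.h>

        /* Toy model of a scalar register's tracked range. */
        struct scalar {
                unsigned long long umin, umax;
                bool precise; /* set when the range mattered for a verdict */
        };

        /* With exact=false an imprecise old scalar matches anything, which
         * is what makes the imprecise spilled counter above look "equal"
         * across loop iterations. With exact=true the ranges must match.
         */
        static bool scalars_equal(const struct scalar *old,
                                  const struct scalar *cur, bool exact)
        {
                if (exact)
                        return old->umin == cur->umin && old->umax == cur->umax;
                return !old->precise ||
                       (old->umin <= cur->umin && cur->umax <= old->umax);
        }
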
Signed-off-by: Eduard Zingerman
---
 kernel/bpf/verifier.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index adbf330d364b..bc565f445410 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -17023,7 +17023,7 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 		}
 		/* attempt to detect infinite loop to avoid unnecessary doomed work */
 		if (states_maybe_looping(&sl->state, cur) &&
-		    states_equal(env, &sl->state, cur, false) &&
+		    states_equal(env, &sl->state, cur, true) &&
 		    !iter_active_depths_differ(&sl->state, cur) &&
 		    sl->state.callback_unroll_depth == cur->callback_unroll_depth) {
 			verbose_linfo(env, insn_idx, "; ");

From patchwork Mon Jan 8 20:51:57 2024
Subject: [PATCH bpf-next v2 03/15] selftests/bpf: check if imprecise stack spills confuse infinite loop detection
Date: Mon, 8 Jan 2024 22:51:57 +0200
Message-ID: <20240108205209.838365-4-maxtram95@gmail.com>
In-Reply-To: <20240108205209.838365-1-maxtram95@gmail.com>

From: Eduard Zingerman

Verify that infinite loop detection logic separates states with
identical register states but different imprecise scalars spilled to
the stack.

Signed-off-by: Eduard Zingerman
---
 .../selftests/bpf/progs/verifier_loops1.c | 24 +++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/verifier_loops1.c b/tools/testing/selftests/bpf/progs/verifier_loops1.c
index 71735dbf33d4..e07b43b78fd2 100644
--- a/tools/testing/selftests/bpf/progs/verifier_loops1.c
+++ b/tools/testing/selftests/bpf/progs/verifier_loops1.c
@@ -259,4 +259,28 @@ l0_%=:	r2 += r1; \
 " ::: __clobber_all);
 }
 
+SEC("xdp")
+__success
+__naked void not_an_inifinite_loop(void)
+{
+	asm volatile (" \
+	call %[bpf_get_prandom_u32]; \
+	r0 &= 0xff; \
+	*(u64 *)(r10 - 8) = r0; \
+	r0 = 0; \
+loop_%=: \
+	r0 = *(u64 *)(r10 - 8); \
+	if r0 > 10 goto exit_%=; \
+	r0 += 1; \
+	*(u64 *)(r10 - 8) = r0; \
+	r0 = 0; \
+	goto loop_%=; \
+exit_%=: \
+	r0 = 0; \
+	exit; \
+" :
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
 char _license[] SEC("license") = "GPL";

From patchwork Mon Jan 8 20:51:58 2024
Subject: [PATCH bpf-next v2 04/15] bpf: Make bpf_for_each_spilled_reg consider narrow spills
Date: Mon, 8 Jan 2024 22:51:58 +0200
Message-ID: <20240108205209.838365-5-maxtram95@gmail.com>
In-Reply-To: <20240108205209.838365-1-maxtram95@gmail.com>

From: Maxim Mikityanskiy

Adjust the check in bpf_get_spilled_reg to take into account spilled
registers narrower than 64 bits. That allows find_equal_scalars to
properly adjust the range of all spilled registers that have the same
ID. Before this change, it was possible for a register and a spilled
register to have the same IDs but different ranges if the spill was
narrower than 64 bits and a range check was performed on the register.
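
For intuition: the verifier marks the bytes of a spill starting from the
high end of the 8-byte slot, so for a narrow spill only the trailing
slot_type entries become STACK_SPILL. A standalone sketch of that
marking scheme (a simplification for illustration, not the kernel code):

        #include <stdio.h>

        #define BPF_REG_SIZE 8
        enum { STACK_MISC, STACK_SPILL };

        int main(void)
        {
                char slot_type[BPF_REG_SIZE] = {0}; /* all STACK_MISC */
                int size = 4, i;                    /* a 32-bit spill */

                /* Mark spilled bytes from the end of the slot. */
                for (i = BPF_REG_SIZE; i > BPF_REG_SIZE - size; i--)
                        slot_type[i - 1] = STACK_SPILL;

                /* The old macro looked at byte 0 and missed narrow spills;
                 * the new one looks at byte BPF_REG_SIZE - 1 and sees them.
                 */
                printf("byte 0: %d, byte 7: %d\n",
                       slot_type[0] == STACK_SPILL,
                       slot_type[BPF_REG_SIZE - 1] == STACK_SPILL); /* 0, 1 */
                return 0;
        }
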
Signed-off-by: Maxim Mikityanskiy
Acked-by: Eduard Zingerman
---
 include/linux/bpf_verifier.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index d07d857ca67f..e11baecbde68 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -453,7 +453,7 @@ struct bpf_verifier_state {
 
 #define bpf_get_spilled_reg(slot, frame, mask)				\
 	(((slot < frame->allocated_stack / BPF_REG_SIZE) &&		\
-	  ((1 << frame->stack[slot].slot_type[0]) & (mask))) \
+	  ((1 << frame->stack[slot].slot_type[BPF_REG_SIZE - 1]) & (mask))) \
 	 ? &frame->stack[slot].spilled_ptr : NULL)
 
 /* Iterate over 'frame', setting 'reg' to either NULL or a spilled register. */

From patchwork Mon Jan 8 20:51:59 2024
Subject: [PATCH bpf-next v2 05/15] selftests/bpf: Add a test case for 32-bit spill tracking
Date: Mon, 8 Jan 2024 22:51:59 +0200
Message-ID: <20240108205209.838365-6-maxtram95@gmail.com>
In-Reply-To: <20240108205209.838365-1-maxtram95@gmail.com>

From: Maxim Mikityanskiy

When a range check is performed on a register that was 32-bit spilled
to the stack, the IDs of the two instances of the register are the
same, so the range should also be the same.

Signed-off-by: Maxim Mikityanskiy
Acked-by: Eduard Zingerman
---
 .../selftests/bpf/progs/verifier_spill_fill.c | 31 +++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 848f2930f599..f303ac19cf41 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -735,4 +735,35 @@ __naked void stack_load_preserves_const_precision_subreg(void)
 	: __clobber_common);
 }
 
+SEC("xdp")
+__description("32-bit spilled reg range should be tracked")
+__success __retval(0)
+__naked void spill_32bit_range_track(void)
+{
+	asm volatile(" \
+	call %[bpf_ktime_get_ns]; \
+	/* Make r0 bounded. */ \
+	r0 &= 65535; \
+	/* Assign an ID to r0. */ \
+	r1 = r0; \
+	/* 32-bit spill r0 to stack. */ \
+	*(u32*)(r10 - 8) = r0; \
+	/* Boundary check on r0. */ \
+	if r0 < 1 goto l0_%=; \
+	/* 32-bit fill r1 from stack. */ \
+	r1 = *(u32*)(r10 - 8); \
+	/* r1 == r0 => r1 >= 1 always. */ \
+	if r1 >= 1 goto l0_%=; \
+	/* Dead branch: the verifier should prune it. \
+	 * Do an invalid memory access if the verifier \
+	 * follows it. \
+	 */ \
+	r0 = *(u64*)(r9 + 0); \
+l0_%=:	r0 = 0; \
+	exit; \
+" :
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
 char _license[] SEC("license") = "GPL";

From patchwork Mon Jan 8 20:52:00 2024
Miller" , Jakub Kicinski , Jesper Dangaard Brouer , bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, netdev@vger.kernel.org, Maxim Mikityanskiy Subject: [PATCH bpf-next v2 06/15] bpf: Add the assign_scalar_id_before_mov function Date: Mon, 8 Jan 2024 22:52:00 +0200 Message-ID: <20240108205209.838365-7-maxtram95@gmail.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240108205209.838365-1-maxtram95@gmail.com> References: <20240108205209.838365-1-maxtram95@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net From: Maxim Mikityanskiy Extract the common code that generates a register ID for src_reg before MOV if needed into a new function. This function will also be used in a following commit. Signed-off-by: Maxim Mikityanskiy Acked-by: Eduard Zingerman --- kernel/bpf/verifier.c | 33 +++++++++++++++++++-------------- 1 file changed, 19 insertions(+), 14 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index bc565f445410..e3eff2becd64 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -4403,6 +4403,18 @@ static bool __is_pointer_value(bool allow_ptr_leaks, return reg->type != SCALAR_VALUE; } +static void assign_scalar_id_before_mov(struct bpf_verifier_env *env, + struct bpf_reg_state *src_reg) +{ + if (src_reg->type == SCALAR_VALUE && !src_reg->id && + !tnum_is_const(src_reg->var_off)) + /* Ensure that src_reg has a valid ID that will be copied to + * dst_reg and then will be used by find_equal_scalars() to + * propagate min/max range. + */ + src_reg->id = ++env->id_gen; +} + /* Copy src state preserving dst->parent and dst->live fields */ static void copy_register_state(struct bpf_reg_state *dst, const struct bpf_reg_state *src) { @@ -13901,20 +13913,13 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn) if (BPF_SRC(insn->code) == BPF_X) { struct bpf_reg_state *src_reg = regs + insn->src_reg; struct bpf_reg_state *dst_reg = regs + insn->dst_reg; - bool need_id = src_reg->type == SCALAR_VALUE && !src_reg->id && - !tnum_is_const(src_reg->var_off); if (BPF_CLASS(insn->code) == BPF_ALU64) { if (insn->off == 0) { /* case: R1 = R2 * copy register state to dest reg */ - if (need_id) - /* Assign src and dst registers the same ID - * that will be used by find_equal_scalars() - * to propagate min/max range. 
-					 */
-					src_reg->id = ++env->id_gen;
+				assign_scalar_id_before_mov(env, src_reg);
 				copy_register_state(dst_reg, src_reg);
 				dst_reg->live |= REG_LIVE_WRITTEN;
 				dst_reg->subreg_def = DEF_NOT_SUBREG;
@@ -13929,8 +13934,8 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 				bool no_sext;
 
 				no_sext = src_reg->umax_value < (1ULL << (insn->off - 1));
-				if (no_sext && need_id)
-					src_reg->id = ++env->id_gen;
+				if (no_sext)
+					assign_scalar_id_before_mov(env, src_reg);
 				copy_register_state(dst_reg, src_reg);
 				if (!no_sext)
 					dst_reg->id = 0;
@@ -13952,8 +13957,8 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 			if (insn->off == 0) {
 				bool is_src_reg_u32 = src_reg->umax_value <= U32_MAX;
 
-				if (is_src_reg_u32 && need_id)
-					src_reg->id = ++env->id_gen;
+				if (is_src_reg_u32)
+					assign_scalar_id_before_mov(env, src_reg);
 				copy_register_state(dst_reg, src_reg);
 				/* Make sure ID is cleared if src_reg is not in u32
 				 * range otherwise dst_reg min/max could be incorrectly
@@ -13967,8 +13972,8 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 				/* case: W1 = (s8, s16)W2 */
 				bool no_sext = src_reg->umax_value < (1ULL << (insn->off - 1));
 
-				if (no_sext && need_id)
-					src_reg->id = ++env->id_gen;
+				if (no_sext)
+					assign_scalar_id_before_mov(env, src_reg);
 				copy_register_state(dst_reg, src_reg);
 				if (!no_sext)
 					dst_reg->id = 0;

From patchwork Mon Jan 8 20:52:01 2024
Subject: [PATCH bpf-next v2 07/15] bpf: Add the get_reg_width function
Date: Mon, 8 Jan 2024 22:52:01 +0200
Message-ID: <20240108205209.838365-8-maxtram95@gmail.com>
In-Reply-To: <20240108205209.838365-1-maxtram95@gmail.com>

From: Maxim Mikityanskiy

Put the calculation of the register value width into a dedicated
function. This function will also be used in a following commit.
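
get_reg_width() relies on fls64(), which returns the 1-based position of
the most significant set bit (fls64(0) == 0), i.e. the number of bits
needed to represent umax_value. A quick userspace illustration (fls64 is
open-coded here as a stand-in, since the kernel helper isn't available
outside the kernel):

        #include <stdio.h>

        /* Userspace stand-in for the kernel's fls64(). */
        static int fls64_demo(unsigned long long x)
        {
                int bits = 0;

                while (x) {
                        bits++;
                        x >>= 1;
                }
                return bits;
        }

        int main(void)
        {
                printf("%d\n", fls64_demo(255));        /* 8: fits an 8-bit spill */
                printf("%d\n", fls64_demo(65535));      /* 16: fits a 16-bit spill */
                printf("%d\n", fls64_demo(1ULL << 32)); /* 33: wider than 32 bits */
                return 0;
        }
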
Signed-off-by: Maxim Mikityanskiy
---
 kernel/bpf/verifier.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index e3eff2becd64..4cd82a7c1318 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4450,6 +4450,11 @@ static bool is_bpf_st_mem(struct bpf_insn *insn)
 	return BPF_CLASS(insn->code) == BPF_ST && BPF_MODE(insn->code) == BPF_MEM;
 }
 
+static int get_reg_width(struct bpf_reg_state *reg)
+{
+	return fls64(reg->umax_value);
+}
+
 /* check_stack_{read,write}_fixed_off functions track spill/fill of registers,
  * stack boundary and alignment are checked in check_mem_access()
  */
@@ -4502,7 +4507,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
 	if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) {
 		save_register_state(env, state, spi, reg, size);
 		/* Break the relation on a narrowing spill. */
-		if (fls64(reg->umax_value) > BITS_PER_BYTE * size)
+		if (get_reg_width(reg) > BITS_PER_BYTE * size)
 			state->stack[spi].spilled_ptr.id = 0;
 	} else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
 		   insn->imm != 0 && env->bpf_capable) {
@@ -13955,7 +13960,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 			return -EACCES;
 		} else if (src_reg->type == SCALAR_VALUE) {
 			if (insn->off == 0) {
-				bool is_src_reg_u32 = src_reg->umax_value <= U32_MAX;
+				bool is_src_reg_u32 = get_reg_width(src_reg) <= 32;
 
 				if (is_src_reg_u32)
 					assign_scalar_id_before_mov(env, src_reg);

From patchwork Mon Jan 8 20:52:02 2024
Subject: [PATCH bpf-next v2 08/15] bpf: Assign ID to scalars on spill
Date: Mon, 8 Jan 2024 22:52:02 +0200
Message-ID: <20240108205209.838365-9-maxtram95@gmail.com>
In-Reply-To: <20240108205209.838365-1-maxtram95@gmail.com>

From: Maxim Mikityanskiy

Currently, when a scalar bounded register is spilled to the stack, its
ID is preserved, but only if it was already assigned, i.e. if this
register was MOVed before.

Assign an ID on spill if none is set, so that equal scalars could be
tracked if a register is spilled to the stack and filled into another
register.

One test is adjusted to reflect the change in register IDs.

Signed-off-by: Maxim Mikityanskiy
Acked-by: Eduard Zingerman
---
 kernel/bpf/verifier.c                                   | 8 +++++++-
 .../selftests/bpf/progs/verifier_direct_packet_access.c | 2 +-
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 4cd82a7c1318..055fa8096a08 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4505,9 +4505,15 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
 	mark_stack_slot_scratched(env, spi);
 	if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) {
+		bool reg_value_fits;
+
+		reg_value_fits = get_reg_width(reg) <= BITS_PER_BYTE * size;
+		/* Make sure that reg had an ID to build a relation on spill. */
+		if (reg_value_fits)
+			assign_scalar_id_before_mov(env, reg);
 		save_register_state(env, state, spi, reg, size);
 		/* Break the relation on a narrowing spill. */
-		if (get_reg_width(reg) > BITS_PER_BYTE * size)
+		if (!reg_value_fits)
 			state->stack[spi].spilled_ptr.id = 0;
 	} else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
 		   insn->imm != 0 && env->bpf_capable) {
diff --git a/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c b/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
index be95570ab382..28b602ac9cbe 100644
--- a/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
+++ b/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
@@ -568,7 +568,7 @@ l0_%=:	r0 = 0; \
 
 SEC("tc")
 __description("direct packet access: test23 (x += pkt_ptr, 4)")
-__failure __msg("invalid access to packet, off=0 size=8, R5(id=2,off=0,r=0)")
+__failure __msg("invalid access to packet, off=0 size=8, R5(id=3,off=0,r=0)")
 __flag(BPF_F_ANY_ALIGNMENT)
 __naked void test23_x_pkt_ptr_4(void)
 {

From patchwork Mon Jan 8 20:52:03 2024
Subject: [PATCH bpf-next v2 09/15] selftests/bpf: Test assigning ID to scalars on spill
Date: Mon, 8 Jan 2024 22:52:03 +0200
Message-ID: <20240108205209.838365-10-maxtram95@gmail.com>
In-Reply-To: <20240108205209.838365-1-maxtram95@gmail.com>

From: Maxim Mikityanskiy

The previous commit implemented assigning IDs to registers holding
scalars before spill. Add the test cases to check the new
functionality.

Signed-off-by: Maxim Mikityanskiy
Acked-by: Eduard Zingerman
---
 .../selftests/bpf/progs/verifier_spill_fill.c | 133 ++++++++++++++++++
 1 file changed, 133 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index f303ac19cf41..b05aab925ee5 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -766,4 +766,137 @@ l0_%=:	r0 = 0; \
 	: __clobber_all);
 }
 
+SEC("xdp")
+__description("64-bit spill of 64-bit reg should assign ID")
+__success __retval(0)
+__naked void spill_64bit_of_64bit_ok(void)
+{
+	asm volatile (" \
+	/* Roll one bit to make the register inexact. */ \
+	call %[bpf_get_prandom_u32]; \
+	r0 &= 0x80000000; \
+	r0 <<= 32; \
+	/* 64-bit spill r0 to stack - should assign an ID. */ \
+	*(u64*)(r10 - 8) = r0; \
+	/* 64-bit fill r1 from stack - should preserve the ID. */ \
+	r1 = *(u64*)(r10 - 8); \
+	/* Compare r1 with another register to trigger find_equal_scalars. \
+	 * Having one random bit is important here, otherwise the verifier cuts \
+	 * the corners. \
+	 */ \
+	r2 = 0; \
+	if r1 != r2 goto l0_%=; \
+	/* The result of this comparison is predefined. */ \
+	if r0 == r2 goto l0_%=; \
+	/* Dead branch: the verifier should prune it. Do an invalid memory \
+	 * access if the verifier follows it. \
+	 */ \
+	r0 = *(u64*)(r9 + 0); \
+	exit; \
+l0_%=:	r0 = 0; \
+	exit; \
+" :
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("32-bit spill of 32-bit reg should assign ID")
+__success __retval(0)
+__naked void spill_32bit_of_32bit_ok(void)
+{
+	asm volatile (" \
+	/* Roll one bit to make the register inexact. */ \
+	call %[bpf_get_prandom_u32]; \
+	w0 &= 0x80000000; \
+	/* 32-bit spill r0 to stack - should assign an ID. */ \
+	*(u32*)(r10 - 8) = r0; \
+	/* 32-bit fill r1 from stack - should preserve the ID. */ \
+	r1 = *(u32*)(r10 - 8); \
+	/* Compare r1 with another register to trigger find_equal_scalars. \
+	 * Having one random bit is important here, otherwise the verifier cuts \
+	 * the corners. \
+	 */ \
+	r2 = 0; \
+	if r1 != r2 goto l0_%=; \
+	/* The result of this comparison is predefined. */ \
+	if r0 == r2 goto l0_%=; \
+	/* Dead branch: the verifier should prune it. Do an invalid memory \
+	 * access if the verifier follows it. \
+	 */ \
+	r0 = *(u64*)(r9 + 0); \
+	exit; \
+l0_%=:	r0 = 0; \
+	exit; \
+" :
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("16-bit spill of 16-bit reg should assign ID")
+__success __retval(0)
+__naked void spill_16bit_of_16bit_ok(void)
+{
+	asm volatile (" \
+	/* Roll one bit to make the register inexact. */ \
+	call %[bpf_get_prandom_u32]; \
+	r0 &= 0x8000; \
+	/* 16-bit spill r0 to stack - should assign an ID. */ \
+	*(u16*)(r10 - 8) = r0; \
+	/* 16-bit fill r1 from stack - should preserve the ID. */ \
+	r1 = *(u16*)(r10 - 8); \
+	/* Compare r1 with another register to trigger find_equal_scalars. \
+	 * Having one random bit is important here, otherwise the verifier cuts \
+	 * the corners. \
+	 */ \
+	r2 = 0; \
+	if r1 != r2 goto l0_%=; \
+	/* The result of this comparison is predefined. */ \
+	if r0 == r2 goto l0_%=; \
+	/* Dead branch: the verifier should prune it. Do an invalid memory \
+	 * access if the verifier follows it. \
+	 */ \
+	r0 = *(u64*)(r9 + 0); \
+	exit; \
+l0_%=:	r0 = 0; \
+	exit; \
+" :
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("8-bit spill of 8-bit reg should assign ID")
+__success __retval(0)
+__naked void spill_8bit_of_8bit_ok(void)
+{
+	asm volatile (" \
+	/* Roll one bit to make the register inexact. */ \
+	call %[bpf_get_prandom_u32]; \
+	r0 &= 0x80; \
+	/* 8-bit spill r0 to stack - should assign an ID. */ \
+	*(u8*)(r10 - 8) = r0; \
+	/* 8-bit fill r1 from stack - should preserve the ID. */ \
+	r1 = *(u8*)(r10 - 8); \
+	/* Compare r1 with another register to trigger find_equal_scalars. \
+	 * Having one random bit is important here, otherwise the verifier cuts \
+	 * the corners. \
+	 */ \
+	r2 = 0; \
+	if r1 != r2 goto l0_%=; \
+	/* The result of this comparison is predefined. */ \
+	if r0 == r2 goto l0_%=; \
+	/* Dead branch: the verifier should prune it. Do an invalid memory \
+	 * access if the verifier follows it. \
+	 */ \
+	r0 = *(u64*)(r9 + 0); \
+	exit; \
+l0_%=:	r0 = 0; \
+	exit; \
+" :
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
 char _license[] SEC("license") = "GPL";

From patchwork Mon Jan 8 20:52:04 2024
Miller" , Jakub Kicinski , Jesper Dangaard Brouer , bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, netdev@vger.kernel.org, Maxim Mikityanskiy Subject: [PATCH bpf-next v2 10/15] bpf: Track spilled unbounded scalars Date: Mon, 8 Jan 2024 22:52:04 +0200 Message-ID: <20240108205209.838365-11-maxtram95@gmail.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240108205209.838365-1-maxtram95@gmail.com> References: <20240108205209.838365-1-maxtram95@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net From: Maxim Mikityanskiy Support the pattern where an unbounded scalar is spilled to the stack, then boundary checks are performed on the src register, after which the stack frame slot is refilled into a register. Before this commit, the verifier didn't treat the src register and the stack slot as related if the src register was an unbounded scalar. The register state wasn't copied, the id wasn't preserved, and the stack slot was marked as STACK_MISC. Subsequent boundary checks on the src register wouldn't result in updating the boundaries of the spilled variable on the stack. After this commit, the verifier will preserve the bond between src and dst even if src is unbounded, which permits to do boundary checks on src and refill dst later, still remembering its boundaries. Such a pattern is sometimes generated by clang when compiling complex long functions. One test is adjusted to reflect the fact that an untracked register is marked as precise at an earlier stage, and one more test is adjusted to reflect that now unbounded scalars are tracked. Signed-off-by: Maxim Mikityanskiy Acked-by: Eduard Zingerman --- kernel/bpf/verifier.c | 7 +------ tools/testing/selftests/bpf/progs/verifier_spill_fill.c | 6 +++--- tools/testing/selftests/bpf/verifier/precise.c | 6 +++--- 3 files changed, 7 insertions(+), 12 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 055fa8096a08..e7fff5f5aa1d 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -4389,11 +4389,6 @@ static bool __is_scalar_unbounded(struct bpf_reg_state *reg) reg->u32_min_value == 0 && reg->u32_max_value == U32_MAX; } -static bool register_is_bounded(struct bpf_reg_state *reg) -{ - return reg->type == SCALAR_VALUE && !__is_scalar_unbounded(reg); -} - static bool __is_pointer_value(bool allow_ptr_leaks, const struct bpf_reg_state *reg) { @@ -4504,7 +4499,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env, return err; mark_stack_slot_scratched(env, spi); - if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) { + if (reg && !(off % BPF_REG_SIZE) && reg->type == SCALAR_VALUE && env->bpf_capable) { bool reg_value_fits; reg_value_fits = get_reg_width(reg) <= BITS_PER_BYTE * size; diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c index b05aab925ee5..57eb70e100a3 100644 --- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c +++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c @@ -452,9 +452,9 @@ l0_%=: r1 >>= 16; \ SEC("raw_tp") __log_level(2) __success -__msg("fp-8=0m??mmmm") -__msg("fp-16=00mm??mm") -__msg("fp-24=00mm???m") +__msg("fp-8=0m??scalar()") +__msg("fp-16=00mm??scalar()") +__msg("fp-24=00mm???scalar()") __naked void spill_subregs_preserve_stack_zero(void) { asm volatile ( diff --git a/tools/testing/selftests/bpf/verifier/precise.c 
b/tools/testing/selftests/bpf/verifier/precise.c index 8a2ff81d8350..0a9293a57211 100644 --- a/tools/testing/selftests/bpf/verifier/precise.c +++ b/tools/testing/selftests/bpf/verifier/precise.c @@ -183,10 +183,10 @@ .prog_type = BPF_PROG_TYPE_XDP, .flags = BPF_F_TEST_STATE_FREQ, .errstr = "mark_precise: frame0: last_idx 7 first_idx 7\ - mark_precise: frame0: parent state regs=r4 stack=:\ + mark_precise: frame0: parent state regs=r4 stack=-8:\ mark_precise: frame0: last_idx 6 first_idx 4\ - mark_precise: frame0: regs=r4 stack= before 6: (b7) r0 = -1\ - mark_precise: frame0: regs=r4 stack= before 5: (79) r4 = *(u64 *)(r10 -8)\ + mark_precise: frame0: regs=r4 stack=-8 before 6: (b7) r0 = -1\ + mark_precise: frame0: regs=r4 stack=-8 before 5: (79) r4 = *(u64 *)(r10 -8)\ mark_precise: frame0: regs= stack=-8 before 4: (7b) *(u64 *)(r3 -8) = r0\ mark_precise: frame0: parent state regs=r0 stack=:\ mark_precise: frame0: last_idx 3 first_idx 3\ From patchwork Mon Jan 8 20:52:05 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Mikityanskiy X-Patchwork-Id: 13514029 X-Patchwork-Delegate: bpf@iogearbox.net Received: from mail-ej1-f54.google.com (mail-ej1-f54.google.com [209.85.218.54]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0856355776; Mon, 8 Jan 2024 20:53:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="WOrlTbdq" Received: by mail-ej1-f54.google.com with SMTP id a640c23a62f3a-a28fb463a28so225042866b.3; Mon, 08 Jan 2024 12:53:21 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1704747200; x=1705352000; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=FHtIR9EAB6yZ4OYaEE1Ew9mP0VH0SKAUUWk1oh8IBPI=; b=WOrlTbdqmnn6BIVElRBMLnuRDE04eT81re+o+ob55RxcIKolX0ItHxBTqi2UNr7y1l 9fzqqjBZeCsMZJMD6NJHqo0TiTBAsO41zh1wF3ygvaWaXAhL6oGcx/HrgNO4vgW9YvJ/ t83Pnw1jWmRlisi8mcZIK1cCNbYnh9EA0gls/WtQgS41HUImD2tHHxKrAt1M50EzyRQK M7NakBY7izXXK1R7OfmJkqnkFVOvf80yoXMCLxFWvwVtGJZoPnlWXTr3OEJo4w6BRaaJ GXTOFqXck/u39jhvbAcoE/W6UDgSSynVF8JL9sx4G6C0AvwCTsZCy15LP8QKgxa9Qk1J LvjQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1704747200; x=1705352000; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=FHtIR9EAB6yZ4OYaEE1Ew9mP0VH0SKAUUWk1oh8IBPI=; b=CACDRE96ralVqRr+2HZqGb6dTM7aEKcDBFK6d/RBXlxk5sqBZwn/dFux8hcdSuHcwb F5miszT3uKbrssy50M8D0ZpSKRI7EiZhegMv+nBazBSMDoMvGikl5RP3OuDaDL4ctI2P +4CajVHoDH+ijd9JCo+VoRb2mMy7q019V650b7pWLfueRKns88NpacXSu4x5QeP/RiiD ieEJrmDMJjQW2N6VUg5hL52wcJ2/eUfGZeZTcB06Kk7eWd2axhUoiGgK02o6dSnLHiTF UWwz8cOe3oR3xLcEFu/u2XPsIq7vHBATrfTXERoSSdLrFo0aF0RRbjuxLp1As9LLCaKP qCHQ== X-Gm-Message-State: AOJu0YyguXWxQniqxJiImV5J5Tj8WB0cuyXdOILXCFZfTd7GydhTTnHz kDnc+t2vikjNPDYb8YYGV0o= X-Google-Smtp-Source: AGHT+IGJoMCX9W+pG0MMsn3NOMoijN9+KI8+bd7cS8LWg2qUPDRn3c1WMcmewJWaAgqVMwChqzwHVA== X-Received: by 2002:a17:907:7615:b0:a27:e0ae:99a8 with SMTP id 
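To make the pattern above concrete, here is a hedged C sketch (not part of
the patch set; the program and all names in it are hypothetical) of the kind
of source for which clang may emit the spill / check / refill sequence this
patch teaches the verifier to track. Whether clang actually spills here
depends on register pressure in the surrounding function.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int spill_check_refill_sketch(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	/* A helper return value is an unbounded scalar to the verifier. */
	__u64 len = bpf_get_prandom_u32();

	/* Under register pressure clang may emit here:
	 *   *(u64 *)(r10 - 8) = r_len;	// spill
	 */
	if (len > 64)		/* boundary check on the live register */
		return XDP_DROP;
	/* ...and later:
	 *   r_len = *(u64 *)(r10 - 8);	// refill
	 * Before this patch the refilled value lost umax=64; with it, the
	 * spilled slot shares the id and inherits the boundaries.
	 */
	if (data + len > data_end)
		return XDP_DROP;
	/* At most `len` bytes of the packet may be read here. */
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";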
From patchwork Mon Jan 8 20:52:05 2024
X-Patchwork-Id: 13514029
From: Maxim Mikityanskiy
Subject: [PATCH bpf-next v2 11/15] selftests/bpf: Test tracking spilled unbounded scalars
Date: Mon, 8 Jan 2024 22:52:05 +0200
Message-ID: <20240108205209.838365-12-maxtram95@gmail.com>
In-Reply-To: <20240108205209.838365-1-maxtram95@gmail.com>
References: <20240108205209.838365-1-maxtram95@gmail.com>

From: Maxim Mikityanskiy

The previous commit added tracking for unbounded scalars on spill. Add a
test case to check the new functionality.

Signed-off-by: Maxim Mikityanskiy
Acked-by: Eduard Zingerman
---
 .../selftests/bpf/progs/verifier_spill_fill.c | 27 +++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 57eb70e100a3..cc6c5a3b464b 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -899,4 +899,31 @@ l0_%=:	r0 = 0;						\
 	: __clobber_all);
 }
 
+SEC("xdp")
+__description("spill unbounded reg, then range check src")
+__success __retval(0)
+__naked void spill_unbounded(void)
+{
+	asm volatile ("					\
+	/* Produce an unbounded scalar. */		\
+	call %[bpf_get_prandom_u32];			\
+	/* Spill r0 to stack. */			\
+	*(u64*)(r10 - 8) = r0;				\
+	/* Boundary check on r0. */			\
+	if r0 > 16 goto l0_%=;				\
+	/* Fill r0 from stack. */			\
+	r0 = *(u64*)(r10 - 8);				\
+	/* Boundary check on r0 with predetermined result. */\
+	if r0 <= 16 goto l0_%=;				\
+	/* Dead branch: the verifier should prune it. Do an invalid memory\
+	 * access if the verifier follows it.		\
+	 */						\
+	r0 = *(u64*)(r9 + 0);				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
 char _license[] SEC("license") = "GPL";
Miller" , Jakub Kicinski , Jesper Dangaard Brouer , bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, netdev@vger.kernel.org, Maxim Mikityanskiy Subject: [PATCH bpf-next v2 12/15] bpf: Preserve boundaries and track scalars on narrowing fill Date: Mon, 8 Jan 2024 22:52:06 +0200 Message-ID: <20240108205209.838365-13-maxtram95@gmail.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240108205209.838365-1-maxtram95@gmail.com> References: <20240108205209.838365-1-maxtram95@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net From: Maxim Mikityanskiy When the width of a fill is smaller than the width of the preceding spill, the information about scalar boundaries can still be preserved, as long as it's coerced to the right width (done by coerce_reg_to_size). Even further, if the actual value fits into the fill width, the ID can be preserved as well for further tracking of equal scalars. Implement the above improvements, which makes narrowing fills behave the same as narrowing spills and MOVs between registers. Two tests are adjusted to accommodate for endianness differences and to take into account that it's now allowed to do a narrowing fill from the least significant bits. reg_bounds_sync is added to coerce_reg_to_size to correctly adjust umin/umax boundaries after the var_off truncation, for example, a 64-bit value 0xXXXXXXXX00000000, when read as a 32-bit, gets umin = 0, umax = 0xFFFFFFFF, var_off = (0x0; 0xffffffff00000000), which needs to be synced down to umax = 0, otherwise reg_bounds_sanity_check doesn't pass. Signed-off-by: Maxim Mikityanskiy --- include/linux/bpf_verifier.h | 2 -- include/linux/filter.h | 12 ++++++++ kernel/bpf/verifier.c | 15 +++++++--- .../selftests/bpf/progs/verifier_spill_fill.c | 28 +++++++++++++------ 4 files changed, 42 insertions(+), 15 deletions(-) diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index e11baecbde68..95ea7657f07e 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -239,8 +239,6 @@ enum bpf_stack_slot_type { STACK_ITER, }; -#define BPF_REG_SIZE 8 /* size of eBPF register in bytes */ - #define BPF_REGMASK_ARGS ((1 << BPF_REG_1) | (1 << BPF_REG_2) | \ (1 << BPF_REG_3) | (1 << BPF_REG_4) | \ (1 << BPF_REG_5)) diff --git a/include/linux/filter.h b/include/linux/filter.h index 68fb6c8142fe..be784be7ed4e 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -39,6 +39,8 @@ struct sock_reuseport; struct ctl_table; struct ctl_table_header; +#define BPF_REG_SIZE 8 /* size of eBPF register in bytes */ + /* ArgX, context and stack frame pointer register positions. Note, * Arg1, Arg2, Arg3, etc are used as argument mappings of function * calls in BPF_CALL instruction. 
@@ -881,6 +883,16 @@ bpf_ctx_narrow_access_offset(u32 off, u32 size, u32 size_default) #define bpf_classic_proglen(fprog) (fprog->len * sizeof(fprog->filter[0])) +static inline bool +bpf_stack_narrow_access_ok(int off, int size, int spill_size) +{ +#ifdef __BIG_ENDIAN + off -= spill_size - size; +#endif + + return !(off % BPF_REG_SIZE); +} + static inline void bpf_prog_lock_ro(struct bpf_prog *fp) { #ifndef CONFIG_BPF_JIT_ALWAYS_ON diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index e7fff5f5aa1d..aeb3e198a5ea 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -4774,7 +4774,8 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env, if (dst_regno < 0) return 0; - if (!(off % BPF_REG_SIZE) && size == spill_size) { + if (size <= spill_size && + bpf_stack_narrow_access_ok(off, size, spill_size)) { /* The earlier check_reg_arg() has decided the * subreg_def for this insn. Save it first. */ @@ -4782,6 +4783,12 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env, copy_register_state(&state->regs[dst_regno], reg); state->regs[dst_regno].subreg_def = subreg_def; + + /* Break the relation on a narrowing fill. + * coerce_reg_to_size will adjust the boundaries. + */ + if (get_reg_width(reg) > size * BITS_PER_BYTE) + state->regs[dst_regno].id = 0; } else { int spill_cnt = 0, zero_cnt = 0; @@ -6057,10 +6064,10 @@ static void coerce_reg_to_size(struct bpf_reg_state *reg, int size) * values are also truncated so we push 64-bit bounds into * 32-bit bounds. Above were truncated < 32-bits already. */ - if (size < 4) { + if (size < 4) __mark_reg32_unbounded(reg); - reg_bounds_sync(reg); - } + + reg_bounds_sync(reg); } static void set_sext64_default_val(struct bpf_reg_state *reg, int size) diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c index cc6c5a3b464b..fab8ae9fe947 100644 --- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c +++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c @@ -217,7 +217,7 @@ __naked void uninit_u32_from_the_stack(void) SEC("tc") __description("Spill a u32 const scalar. Refill as u16. Offset to skb->data") -__failure __msg("invalid access to packet") +__success __retval(0) __naked void u16_offset_to_skb_data(void) { asm volatile (" \ @@ -225,19 +225,24 @@ __naked void u16_offset_to_skb_data(void) r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ w4 = 20; \ *(u32*)(r10 - 8) = r4; \ - r4 = *(u16*)(r10 - 8); \ + r4 = *(u16*)(r10 - %[offset]); \ r0 = r2; \ - /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */\ + /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=20 */\ r0 += r4; \ - /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */\ + /* if (r0 > r3) R0=pkt,off=20 R2=pkt R3=pkt_end R4=20 */\ if r0 > r3 goto l0_%=; \ - /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */\ + /* r0 = *(u32 *)r2 R0=pkt,off=20 R2=pkt R3=pkt_end R4=20 */\ r0 = *(u32*)(r2 + 0); \ l0_%=: r0 = 0; \ exit; \ " : : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), - __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)), +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ + __imm_const(offset, 8) +#else + __imm_const(offset, 6) +#endif : __clobber_all); } @@ -268,7 +273,7 @@ l0_%=: r0 = 0; \ } SEC("tc") -__description("Spill a u32 const scalar. Refill as u16 from fp-6. Offset to skb->data") +__description("Spill a u32 const scalar. Refill as u16 from MSB. 
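A side note on the endianness handling above: the helper added to
include/linux/filter.h can be exercised in isolation. The following
stand-alone user-space sketch copies the helper from the patch, with the
kernel's __BIG_ENDIAN conditional replaced by a hypothetical
FAKE_BIG_ENDIAN macro so both variants can be tried on any host. It shows
why a narrowing u32 fill of an 8-byte spill at fp-8 must read fp-8 on
little endian but fp-4 on big endian, matching the %[offset] values in the
adjusted tests.

#include <stdbool.h>
#include <stdio.h>

#define BPF_REG_SIZE 8	/* size of eBPF register in bytes */

/* Copy of the kernel helper; build with -DFAKE_BIG_ENDIAN to see the
 * big-endian behaviour regardless of the host byte order.
 */
static inline bool
bpf_stack_narrow_access_ok(int off, int size, int spill_size)
{
#ifdef FAKE_BIG_ENDIAN
	off -= spill_size - size;
#endif
	return !(off % BPF_REG_SIZE);
}

int main(void)
{
	/* An 8-byte spill at fp-8 occupies bytes fp-8..fp-1. The least
	 * significant 4 bytes start at fp-8 on little endian and at fp-4
	 * on big endian; only a fill of that half keeps the bounds.
	 */
	printf("u32 fill at fp-8: %d\n", bpf_stack_narrow_access_ok(-8, 4, 8));
	printf("u32 fill at fp-4: %d\n", bpf_stack_narrow_access_ok(-4, 4, 8));
	return 0;
}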
From patchwork Mon Jan 8 20:52:07 2024
X-Patchwork-Id: 13514031
From: Maxim Mikityanskiy
Subject: [PATCH bpf-next v2 13/15] selftests/bpf: Add test cases for narrowing fill
Date: Mon, 8 Jan 2024 22:52:07 +0200
Message-ID: <20240108205209.838365-14-maxtram95@gmail.com>
In-Reply-To: <20240108205209.838365-1-maxtram95@gmail.com>
References: <20240108205209.838365-1-maxtram95@gmail.com>

From: Maxim Mikityanskiy

The previous commit made it possible to preserve boundaries and track
IDs of scalars on narrowing fills. Add test cases for that pattern.

Signed-off-by: Maxim Mikityanskiy
Acked-by: Eduard Zingerman
---
 .../selftests/bpf/progs/verifier_spill_fill.c | 108 ++++++++++++++++++
 1 file changed, 108 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index fab8ae9fe947..3764111d190d 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -936,4 +936,112 @@ l0_%=:	r0 = 0;						\
 	: __clobber_all);
 }
 
+SEC("xdp")
+__description("32-bit fill after 64-bit spill")
+__success __retval(0)
+__naked void fill_32bit_after_spill_64bit(void)
+{
+	asm volatile("					\
+	/* Randomize the upper 32 bits. */		\
+	call %[bpf_get_prandom_u32];			\
+	r0 <<= 32;					\
+	/* 64-bit spill r0 to stack. */			\
+	*(u64*)(r10 - 8) = r0;				\
+	/* 32-bit fill r0 from stack. */		\
+	r0 = *(u32*)(r10 - %[offset]);			\
+	/* Boundary check on r0 with predetermined result. */\
+	if r0 == 0 goto l0_%=;				\
+	/* Dead branch: the verifier should prune it. Do an invalid memory\
+	 * access if the verifier follows it.		\
+	 */						\
+	r0 = *(u64*)(r9 + 0);				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32),
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	  __imm_const(offset, 8)
+#else
+	  __imm_const(offset, 4)
+#endif
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("32-bit fill after 64-bit spill of 32-bit value should preserve ID")
+__success __retval(0)
+__naked void fill_32bit_after_spill_64bit_preserve_id(void)
+{
+	asm volatile ("					\
+	/* Randomize the lower 32 bits. */		\
+	call %[bpf_get_prandom_u32];			\
+	w0 &= 0xffffffff;				\
+	/* 64-bit spill r0 to stack - should assign an ID. */\
+	*(u64*)(r10 - 8) = r0;				\
+	/* 32-bit fill r1 from stack - should preserve the ID. */\
+	r1 = *(u32*)(r10 - %[offset]);			\
+	/* Compare r1 with another register to trigger find_equal_scalars. */\
+	r2 = 0;						\
+	if r1 != r2 goto l0_%=;				\
+	/* The result of this comparison is predefined. */\
+	if r0 == r2 goto l0_%=;				\
+	/* Dead branch: the verifier should prune it. Do an invalid memory\
+	 * access if the verifier follows it.		\
+	 */						\
+	r0 = *(u64*)(r9 + 0);				\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32),
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	  __imm_const(offset, 8)
+#else
+	  __imm_const(offset, 4)
+#endif
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("32-bit fill after 64-bit spill should clear ID")
+__failure __msg("math between ctx pointer and 4294967295 is not allowed")
+__naked void fill_32bit_after_spill_64bit_clear_id(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	/* Roll one bit to force the verifier to track both branches. */\
+	call %[bpf_get_prandom_u32];			\
+	r0 &= 0x8;					\
+	/* Put a large number into r1. */		\
+	r1 = 0xffffffff;				\
+	r1 <<= 32;					\
+	r1 += r0;					\
+	/* 64-bit spill r1 to stack - should assign an ID. */\
+	*(u64*)(r10 - 8) = r1;				\
+	/* 32-bit fill r2 from stack - should clear the ID. */\
+	r2 = *(u32*)(r10 - %[offset]);			\
+	/* Compare r2 with another register to trigger find_equal_scalars.\
+	 * Having one random bit is important here, otherwise the verifier cuts\
+	 * the corners. If the ID was mistakenly preserved on fill, this would\
+	 * cause the verifier to think that r1 is also equal to zero in one of\
+	 * the branches, and equal to eight on the other branch.\
+	 */						\
+	r3 = 0;						\
+	if r2 != r3 goto l0_%=;				\
+l0_%=:	r1 >>= 32;					\
+	/* The verifier shouldn't propagate r2's range to r1, so it should\
+	 * still remember r1 = 0xffffffff and reject the below.\
+	 */						\
+	r6 += r1;					\
+	r0 = *(u32*)(r6 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32),
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	  __imm_const(offset, 8)
+#else
+	  __imm_const(offset, 4)
+#endif
	: __clobber_all);
+}
+
 char _license[] SEC("license") = "GPL";
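The offset arithmetic in these tests can be sanity-checked outside the
verifier. This small user-space sketch (illustrative only, not part of the
selftests) mimics the first test above: a 64-bit value whose lower half is
zero is "spilled" into a byte buffer and its least significant 32 bits are
"filled" back, which is why the test's `if r0 == 0` branch is always taken
and the other branch is dead.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Stand-in for the test's randomized upper 32 bits (r0 <<= 32). */
	uint64_t spill = (uint64_t)0xdeadbeef << 32;
	unsigned char stack[8];
	uint32_t fill;

	memcpy(stack, &spill, sizeof(spill));	/* the 64-bit spill */
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
	memcpy(&fill, stack, sizeof(fill));	/* LSB half at fp-8, %[offset] = 8 */
#else
	memcpy(&fill, stack + 4, sizeof(fill));	/* LSB half at fp-4, %[offset] = 4 */
#endif
	printf("32-bit fill reads %u\n", fill);	/* always 0 */
	return 0;
}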
From patchwork Mon Jan 8 20:52:08 2024
X-Patchwork-Id: 13514032
From: Maxim Mikityanskiy
Subject: [PATCH bpf-next v2 14/15] bpf: Optimize state pruning for spilled scalars
Date: Mon, 8 Jan 2024 22:52:08 +0200
Message-ID: <20240108205209.838365-15-maxtram95@gmail.com>
In-Reply-To: <20240108205209.838365-1-maxtram95@gmail.com>
References: <20240108205209.838365-1-maxtram95@gmail.com>

From: Eduard Zingerman

The changes for scalar ID tracking of spilled unbound scalars lead to a
certain verification performance regression. This commit mitigates the
regression by exploiting the following properties maintained by
check_stack_read_fixed_off():
- a mix of STACK_MISC, STACK_ZERO and STACK_INVALID marks is read as an
  unbounded scalar register;
- an spi with all slots marked STACK_ZERO is read as a scalar register
  with value zero.

This commit modifies stacksafe() to consider the situations above
equivalent.

Veristat results after this patch show significant gains:

$ ./veristat -e file,prog,states -f '!states_pct<10' -f '!states_b<10' -C not-opt after
File              Program   States (A)  States (B)     States (DIFF)
----------------  --------  ----------  ----------  ----------------
pyperf180.bpf.o   on_event       10456        8422   -2034 (-19.45%)
pyperf600.bpf.o   on_event       37319       22519  -14800 (-39.66%)
strobemeta.bpf.o  on_event       13435        4703   -8732 (-64.99%)

Signed-off-by: Eduard Zingerman
---
 kernel/bpf/verifier.c | 83 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 83 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index aeb3e198a5ea..cb82f8d4226f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1170,6 +1170,12 @@ static void mark_stack_slot_misc(struct bpf_verifier_env *env, u8 *stype)
 	*stype = STACK_MISC;
 }
 
+static bool is_spilled_scalar_reg64(const struct bpf_stack_state *stack)
+{
+	return stack->slot_type[0] == STACK_SPILL &&
+	       stack->spilled_ptr.type == SCALAR_VALUE;
+}
+
 static void scrub_spilled_slot(u8 *stype)
 {
 	if (*stype != STACK_INVALID)
@@ -16459,11 +16465,45 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
 	}
 }
 
+static bool is_stack_zero64(struct bpf_stack_state *stack)
+{
+	u32 i;
+
+	for (i = 0; i < ARRAY_SIZE(stack->slot_type); ++i)
+		if (stack->slot_type[i] != STACK_ZERO)
+			return false;
+	return true;
+}
+
+static bool is_stack_unbound_slot64(struct bpf_verifier_env *env,
+				    struct bpf_stack_state *stack)
+{
+	u32 i;
+
+	for (i = 0; i < ARRAY_SIZE(stack->slot_type); ++i)
+		if (stack->slot_type[i] != STACK_ZERO &&
+		    stack->slot_type[i] != STACK_MISC &&
+		    (!env->allow_uninit_stack || stack->slot_type[i] != STACK_INVALID))
+			return false;
+	return true;
+}
+
+static bool is_spilled_unbound_scalar_reg64(struct bpf_stack_state *stack)
+{
+	return is_spilled_scalar_reg64(stack) && __is_scalar_unbounded(&stack->spilled_ptr);
+}
+
 static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
 		      struct bpf_func_state *cur, struct bpf_idmap *idmap, bool exact)
 {
+	struct bpf_reg_state unbound_reg = {};
+	struct bpf_reg_state zero_reg = {};
 	int i, spi;
 
+	__mark_reg_unknown(env, &unbound_reg);
+	__mark_reg_const_zero(env, &zero_reg);
+	zero_reg.precise = true;
+
 	/* walk slots of the explored stack and ignore any additional
 	 * slots in the current stack, since explored(safe) state
 	 * didn't use them
@@ -16484,6 +16524,49 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
 			continue;
 		}
 
+		/* load of stack value with all MISC and ZERO slots produces unbounded
+		 * scalar value, call regsafe to ensure scalar ids are compared.
+		 */
+		if (is_spilled_unbound_scalar_reg64(&old->stack[spi]) &&
+		    is_stack_unbound_slot64(env, &cur->stack[spi])) {
+			i += BPF_REG_SIZE - 1;
+			if (!regsafe(env, &old->stack[spi].spilled_ptr, &unbound_reg,
+				     idmap, exact))
+				return false;
+			continue;
+		}
+
+		if (is_stack_unbound_slot64(env, &old->stack[spi]) &&
+		    is_spilled_unbound_scalar_reg64(&cur->stack[spi])) {
+			i += BPF_REG_SIZE - 1;
+			if (!regsafe(env, &unbound_reg, &cur->stack[spi].spilled_ptr,
+				     idmap, exact))
+				return false;
+			continue;
+		}
+
+		/* load of stack value with all ZERO slots produces scalar value 0,
+		 * call regsafe to ensure scalar ids are compared and precision
+		 * flags are taken into account.
+		 */
+		if (is_spilled_scalar_reg64(&old->stack[spi]) &&
+		    is_stack_zero64(&cur->stack[spi])) {
+			if (!regsafe(env, &old->stack[spi].spilled_ptr, &zero_reg,
+				     idmap, exact))
+				return false;
+			i += BPF_REG_SIZE - 1;
+			continue;
+		}
+
+		if (is_stack_zero64(&old->stack[spi]) &&
+		    is_spilled_scalar_reg64(&cur->stack[spi])) {
+			if (!regsafe(env, &zero_reg, &cur->stack[spi].spilled_ptr,
+				     idmap, exact))
+				return false;
+			i += BPF_REG_SIZE - 1;
+			continue;
+		}
+
 		if (old->stack[spi].slot_type[i % BPF_REG_SIZE] == STACK_INVALID)
 			continue;
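The two slot-type predicates above can be observed in action without a
kernel build. Here is a hedged user-space sketch: the enum values are
illustrative stand-ins for the kernel's bpf_stack_slot_type, and the
env->allow_uninit_stack field is reduced to a plain bool parameter, so only
the loop logic copied from the patch is exercised.

#include <stdbool.h>
#include <stdio.h>

enum { STACK_INVALID, STACK_MISC, STACK_ZERO, STACK_SPILL };
#define BPF_REG_SIZE 8

static bool is_stack_zero64(const unsigned char *slot_type)
{
	for (int i = 0; i < BPF_REG_SIZE; ++i)
		if (slot_type[i] != STACK_ZERO)
			return false;
	return true;
}

static bool is_stack_unbound_slot64(const unsigned char *slot_type,
				    bool allow_uninit_stack)
{
	for (int i = 0; i < BPF_REG_SIZE; ++i)
		if (slot_type[i] != STACK_ZERO && slot_type[i] != STACK_MISC &&
		    (!allow_uninit_stack || slot_type[i] != STACK_INVALID))
			return false;
	return true;
}

int main(void)
{
	/* fp-8 written as one misc byte, one zero byte, rest untouched:
	 * reads as an unbounded scalar (given allow_uninit_stack), but
	 * not as the constant zero.
	 */
	unsigned char mix[BPF_REG_SIZE] = {
		STACK_MISC, STACK_ZERO, STACK_INVALID, STACK_INVALID,
		STACK_INVALID, STACK_INVALID, STACK_INVALID, STACK_MISC,
	};
	printf("unbound ok: %d, zero ok: %d\n",
	       is_stack_unbound_slot64(mix, true), is_stack_zero64(mix));
	return 0;
}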
From patchwork Mon Jan 8 20:52:09 2024
X-Patchwork-Id: 13514033
From: Maxim Mikityanskiy
Subject: [PATCH bpf-next v2 15/15] selftests/bpf: states pruning checks for scalar vs STACK_{MISC,ZERO}
Date: Mon, 8 Jan 2024 22:52:09 +0200
Message-ID: <20240108205209.838365-16-maxtram95@gmail.com>
In-Reply-To: <20240108205209.838365-1-maxtram95@gmail.com>
References: <20240108205209.838365-1-maxtram95@gmail.com>

From: Eduard Zingerman

Check that stacksafe() considers the following old vs cur stack spill
state combinations equivalent:
- spill of unbound scalar vs combination of STACK_{MISC,ZERO,INVALID}
- STACK_MISC vs spill of unbound scalar
- spill of scalar 0 vs STACK_ZERO
- STACK_ZERO vs spill of scalar 0

Signed-off-by: Eduard Zingerman
---
 .../selftests/bpf/progs/verifier_spill_fill.c | 192 ++++++++++++++++++
 1 file changed, 192 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 3764111d190d..3cd3fe30357f 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -1044,4 +1044,196 @@ l0_%=:	r1 >>= 32;					\
 	: __clobber_all);
 }
 
+/* stacksafe(): check if spill of unbound scalar in old state is
+ * considered equivalent to any state of the spill in the current state.
+ *
+ * On the first verification path an unbounded scalar is written for
+ * fp-8 and later marked precise.
+ * On the second verification path a mix of STACK_MISC/ZERO/INVALID is
+ * written to fp-8. These should be considered equivalent.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("10: (79) r0 = *(u64 *)(r10 -8)")
+__msg("10: safe")
+__msg("processed 16 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void old_unbound_scalar_vs_cur_anything(void)
+{
+	asm volatile(
+	/* get a random value for branching */
+	"call %[bpf_ktime_get_ns];"
+	"r7 = r0;"
+	/* get a random value for storing at fp-8 */
+	"call %[bpf_ktime_get_ns];"
+	"if r7 == 0 goto 1f;"
+	/* unbound scalar written to fp-8 */
+	"*(u64*)(r10 - 8) = r0;"
+	"goto 2f;"
+"1:"
+	/* mark fp-8 as mix of STACK_MISC/ZERO/INVALID */
+	"r1 = 0;"
+	"*(u8*)(r10 - 8) = r0;"
+	"*(u8*)(r10 - 7) = r1;"
+	/* fp-2..fp-6 remain STACK_INVALID */
+	"*(u8*)(r10 - 1) = r0;"
+"2:"
+	/* read fp-8 and force it precise, should be considered safe
+	 * on second visit
+	 */
+	"r0 = *(u64*)(r10 - 8);"
+	"r0 &= 0xff;"
+	"r1 = r10;"
+	"r1 += r0;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* stacksafe(): check if stack spill of unbound scalar in old state is
+ * considered equivalent to STACK_MISC in cur state.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("8: (79) r0 = *(u64 *)(r10 -8) ; R0_w=scalar(id=1) R10=fp0 fp-8=scalar(id=1)")
+__msg("8: safe")
+__msg("processed 11 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void old_unbound_scalar_vs_cur_stack_misc(void)
+{
+	asm volatile(
+	/* get a random value for branching */
+	"call %[bpf_ktime_get_ns];"
+	"if r0 == 0 goto 1f;"
+	/* conjure unbound scalar at fp-8 */
+	"call %[bpf_ktime_get_ns];"
+	"*(u64*)(r10 - 8) = r0;"
+	"goto 2f;"
+"1:"
+	/* conjure STACK_MISC at fp-8 */
+	"call %[bpf_ktime_get_ns];"
+	"*(u64*)(r10 - 8) = r0;"
+	"*(u32*)(r10 - 4) = r0;"
+"2:"
+	/* read fp-8, should be considered safe on second visit */
+	"r0 = *(u64*)(r10 - 8);"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* stacksafe(): check if STACK_MISC in old state is considered
+ * equivalent to stack spill of unbound scalar in cur state.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("8: (79) r0 = *(u64 *)(r10 -8) ; R0_w=scalar() R10=fp0 fp-8=mmmmmmmm")
+__msg("8: safe")
+__msg("processed 11 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void old_stack_misc_vs_cur_unbound_scalar(void)
+{
+	asm volatile(
+	/* get a random value for branching */
+	"call %[bpf_ktime_get_ns];"
+	"if r0 == 0 goto 1f;"
+	/* conjure STACK_MISC at fp-8 */
+	"call %[bpf_ktime_get_ns];"
+	"*(u64*)(r10 - 8) = r0;"
+	"*(u32*)(r10 - 4) = r0;"
+	"goto 2f;"
+"1:"
+	/* conjure unbound scalar at fp-8 */
+	"call %[bpf_ktime_get_ns];"
+	"*(u64*)(r10 - 8) = r0;"
+"2:"
+	/* read fp-8, should be considered safe on second visit */
+	"r0 = *(u64*)(r10 - 8);"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* stacksafe(): check if spill of register with value 0 in old state
+ * is considered equivalent to STACK_ZERO.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("9: (79) r0 = *(u64 *)(r10 -8)")
+__msg("9: safe")
+__msg("processed 15 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void old_spill_zero_vs_stack_zero(void)
+{
+	asm volatile(
+	/* get a random value for branching */
+	"call %[bpf_ktime_get_ns];"
+	"r7 = r0;"
+	/* get a random value for storing at fp-8 */
+	"call %[bpf_ktime_get_ns];"
+	"if r7 == 0 goto 1f;"
+	/* conjure spilled register with value 0 at fp-8 */
+	"*(u64*)(r10 - 8) = r0;"
+	"if r0 != 0 goto 3f;"
+	"goto 2f;"
+"1:"
+	/* conjure STACK_ZERO at fp-8 */
+	"r1 = 0;"
+	"*(u64*)(r10 - 8) = r1;"
+"2:"
+	/* read fp-8 and force it precise, should be considered safe
+	 * on second visit
+	 */
+	"r0 = *(u64*)(r10 - 8);"
+	"r1 = r10;"
+	"r1 += r0;"
+"3:"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* stacksafe(): similar to old_spill_zero_vs_stack_zero() but the
+ * other way around: check if STACK_ZERO is considered equivalent to
+ * spill of register with value 0.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("8: (79) r0 = *(u64 *)(r10 -8)")
+__msg("8: safe")
+__msg("processed 14 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void old_stack_zero_vs_spill_zero(void)
+{
+	asm volatile(
+	/* get a random value for branching */
+	"call %[bpf_ktime_get_ns];"
+	"if r0 == 0 goto 1f;"
+	/* conjure STACK_ZERO at fp-8 */
+	"r1 = 0;"
+	"*(u64*)(r10 - 8) = r1;"
+	"goto 2f;"
+"1:"
+	/* conjure spilled register with value 0 at fp-8 */
+	"call %[bpf_ktime_get_ns];"
+	"*(u64*)(r10 - 8) = r0;"
+	"if r0 != 0 goto 3f;"
+"2:"
+	/* read fp-8 and force it precise, should be considered safe
+	 * on second visit
+	 */
+	"r0 = *(u64*)(r10 - 8);"
+	"r1 = r10;"
+	"r1 += r0;"
+"3:"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
 char _license[] SEC("license") = "GPL";