From patchwork Wed Dec 20 21:39:59 2023
X-Patchwork-Submitter: Maxim Mikityanskiy
X-Patchwork-Id: 13500463
From: Maxim Mikityanskiy
Miller" , Jakub Kicinski , Jesper Dangaard Brouer , bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, netdev@vger.kernel.org, Maxim Mikityanskiy Subject: [PATCH bpf-next 01/15] selftests/bpf: Fix the u64_offset_to_skb_data test Date: Wed, 20 Dec 2023 23:39:59 +0200 Message-ID: <20231220214013.3327288-2-maxtram95@gmail.com> X-Mailer: git-send-email 2.42.1 In-Reply-To: <20231220214013.3327288-1-maxtram95@gmail.com> References: <20231220214013.3327288-1-maxtram95@gmail.com> Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Maxim Mikityanskiy The u64_offset_to_skb_data test is supposed to make a 64-bit fill, but instead makes a 16-bit one. Fix the test according to its intention. The 16-bit fill is covered by u16_offset_to_skb_data. Signed-off-by: Maxim Mikityanskiy --- tools/testing/selftests/bpf/progs/verifier_spill_fill.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c index 39fe3372e0e0..84eccab36582 100644 --- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c +++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c @@ -243,7 +243,7 @@ l0_%=: r0 = 0; \ SEC("tc") __description("Spill u32 const scalars. Refill as u64. Offset to skb->data") -__failure __msg("invalid access to packet") +__failure __msg("math between pkt pointer and register with unbounded min value is not allowed") __naked void u64_offset_to_skb_data(void) { asm volatile (" \ @@ -253,7 +253,7 @@ __naked void u64_offset_to_skb_data(void) w7 = 20; \ *(u32*)(r10 - 4) = r6; \ *(u32*)(r10 - 8) = r7; \ - r4 = *(u16*)(r10 - 8); \ + r4 = *(u64*)(r10 - 8); \ r0 = r2; \ /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */\ r0 += r4; \ From patchwork Wed Dec 20 21:40:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Mikityanskiy X-Patchwork-Id: 13500464 Received: from mail-ej1-f42.google.com (mail-ej1-f42.google.com [209.85.218.42]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F044E4BAAE; Wed, 20 Dec 2023 21:40:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="RiZD02o9" Received: by mail-ej1-f42.google.com with SMTP id a640c23a62f3a-a2343c31c4bso12215366b.1; Wed, 20 Dec 2023 13:40:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1703108429; x=1703713229; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=3PXHJ62Jn+WSXkqHLjoo0Vmkp5AYsgfUhivyvdOpnBc=; b=RiZD02o9tXGegRY9/uZ5wKiHWKCs/zHd4oEKWxX9lGZ6kSFKsLaJ+/vpYm5mvkO8Ou aC28VZva0QBbZIeE8w8blE2KUvpD2+aGNtBqczhR7gfJIrERF9cYueiozUuYs4RF7VRc D8zUdBCWcFo61hRMY6IGBPO9GXus0CXW5cfGBK8kFJytJXmF3lijgMnurp0OV+g6Utt7 6bMgDWHieT/qUCpMn7+WxfwCvd6QBrvyHDvm2FrP8h79cczP/RbJO6HuZ0KB0mkafCnI z+dDbgAgWu+Z7XmQNihsmKNTojIokhopEAuwOh7GqXBboYFHUF5dCA7dyHlpNuYB8XN4 PQFg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; 
---
 tools/testing/selftests/bpf/progs/verifier_spill_fill.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 39fe3372e0e0..84eccab36582 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -243,7 +243,7 @@ l0_%=:	r0 = 0;						\
 
 SEC("tc")
 __description("Spill u32 const scalars.  Refill as u64.  Offset to skb->data")
-__failure __msg("invalid access to packet")
+__failure __msg("math between pkt pointer and register with unbounded min value is not allowed")
 __naked void u64_offset_to_skb_data(void)
 {
 	asm volatile ("					\
@@ -253,7 +253,7 @@ __naked void u64_offset_to_skb_data(void)
 	w7 = 20;					\
 	*(u32*)(r10 - 4) = r6;				\
 	*(u32*)(r10 - 8) = r7;				\
-	r4 = *(u16*)(r10 - 8);				\
+	r4 = *(u64*)(r10 - 8);				\
 	r0 = r2;					\
 	/* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */\
 	r0 += r4;					\

From patchwork Wed Dec 20 21:40:00 2023
X-Patchwork-Submitter: Maxim Mikityanskiy
X-Patchwork-Id: 13500464
From: Maxim Mikityanskiy
Subject: [PATCH bpf-next 02/15] bpf: make infinite loop detection in is_state_visited() exact
Date: Wed, 20 Dec 2023 23:40:00 +0200
Message-ID: <20231220214013.3327288-3-maxtram95@gmail.com>
In-Reply-To: <20231220214013.3327288-1-maxtram95@gmail.com>

From: Eduard Zingerman

The current infinite loop detection mechanism is speculative:
- first, the states_maybe_looping() check is done, which simply does a
  memcmp of R1-R10 in the current frame;
- second, states_equal(..., exact=false) is called. With exact=false,
  states_equal() compares scalars for equality only if the scalar in
  the old state has a precision mark.

Such logic can be problematic if the compiler makes some unlucky stack
spill/fill decisions. An artificial example of a false positive looks
as follows:

        r0 = ... unknown scalar ...
        r0 &= 0xff;
        *(u64 *)(r10 - 8) = r0;
        r0 = 0;
loop:
        r0 = *(u64 *)(r10 - 8);
        if r0 > 10 goto exit_;
        r0 += 1;
        *(u64 *)(r10 - 8) = r0;
        r0 = 0;
        goto loop;

This commit updates the call to states_equal() to use exact=true,
forcing all scalar comparisons to be exact.
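Concretely, for the example above (a reconstruction, assuming the
spilled counter never receives a precision mark):

    /* visit 1 of loop:  r0 = 0, fp-8 = scalar(umax=0xff), imprecise
     * visit 2 of loop:  r0 = 0, fp-8 = scalar in [1; 11]
     *
     * states_maybe_looping() matches because R1-R10 are identical: the
     * counter lives only on the stack.  With exact=false the imprecise
     * fp-8 scalar of the old state is treated as matching any scalar,
     * so states_equal() matches as well and an infinite loop is falsely
     * reported.  With exact=true the differing spilled scalars keep the
     * two states distinct.
     */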
Signed-off-by: Eduard Zingerman
---
 kernel/bpf/verifier.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index f13008d27f35..89f8c527ed3c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -17008,7 +17008,7 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 	}
 	/* attempt to detect infinite loop to avoid unnecessary doomed work */
 	if (states_maybe_looping(&sl->state, cur) &&
-	    states_equal(env, &sl->state, cur, false) &&
+	    states_equal(env, &sl->state, cur, true) &&
 	    !iter_active_depths_differ(&sl->state, cur) &&
 	    sl->state.callback_unroll_depth == cur->callback_unroll_depth) {
 		verbose_linfo(env, insn_idx, "; ");

From patchwork Wed Dec 20 21:40:01 2023
X-Patchwork-Submitter: Maxim Mikityanskiy
X-Patchwork-Id: 13500465
From: Maxim Mikityanskiy
Subject: [PATCH bpf-next 03/15] selftests/bpf: check if imprecise stack spills confuse infinite loop detection
Date: Wed, 20 Dec 2023 23:40:01 +0200
Message-ID: <20231220214013.3327288-4-maxtram95@gmail.com>
In-Reply-To: <20231220214013.3327288-1-maxtram95@gmail.com>

From: Eduard Zingerman

Verify that the infinite loop detection logic separates states with
identical register states but different imprecise scalars spilled to
the stack.

Signed-off-by: Eduard Zingerman
---
 .../selftests/bpf/progs/verifier_loops1.c | 24 +++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/verifier_loops1.c b/tools/testing/selftests/bpf/progs/verifier_loops1.c
index 71735dbf33d4..e07b43b78fd2 100644
--- a/tools/testing/selftests/bpf/progs/verifier_loops1.c
+++ b/tools/testing/selftests/bpf/progs/verifier_loops1.c
@@ -259,4 +259,28 @@ l0_%=:	r2 += r1;					\
 " ::: __clobber_all);
 }
 
+SEC("xdp")
+__success
+__naked void not_an_inifinite_loop(void)
+{
+	asm volatile ("				\
+	call %[bpf_get_prandom_u32];		\
+	r0 &= 0xff;				\
+	*(u64 *)(r10 - 8) = r0;			\
+	r0 = 0;					\
+loop_%=:					\
+	r0 = *(u64 *)(r10 - 8);			\
+	if r0 > 10 goto exit_%=;		\
+	r0 += 1;				\
+	*(u64 *)(r10 - 8) = r0;			\
+	r0 = 0;					\
+	goto loop_%=;				\
+exit_%=:					\
+	r0 = 0;					\
+	exit;					\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
 char _license[] SEC("license") = "GPL";

From patchwork Wed Dec 20 21:40:02 2023
X-Patchwork-Submitter: Maxim Mikityanskiy
X-Patchwork-Id: 13500466
From: Maxim Mikityanskiy
Subject: [PATCH bpf-next 04/15] bpf: Make bpf_for_each_spilled_reg consider narrow spills
Date: Wed, 20 Dec 2023 23:40:02 +0200
Message-ID: <20231220214013.3327288-5-maxtram95@gmail.com>
In-Reply-To: <20231220214013.3327288-1-maxtram95@gmail.com>

From: Maxim Mikityanskiy

Adjust the check in bpf_get_spilled_reg to take into account spilled
registers narrower than 64 bits. That allows find_equal_scalars to
properly adjust the range of all spilled registers that have the same
ID. Before this change, it was possible for a register and a spilled
register to have the same IDs but different ranges if the spill was
narrower than 64 bits and a range check was performed on the register.

Signed-off-by: Maxim Mikityanskiy
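For context, a sketch of the slot_type layout the macro has to cope
with (assuming the marking scheme of save_register_state(), where a
spill of size N tags the last N bytes of the 8-byte slot):

    /* 32-bit spill of w0 to fp-8:
     *
     *   slot_type[0..3] = STACK_MISC   (bytes not covered by the spill)
     *   slot_type[4..7] = STACK_SPILL  (bytes holding the spilled value)
     *
     * Testing slot_type[0] therefore misses every narrow spill, while
     * slot_type[BPF_REG_SIZE - 1] is set for spills of any width.
     */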
---
 include/linux/bpf_verifier.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index d07d857ca67f..e11baecbde68 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -453,7 +453,7 @@ struct bpf_verifier_state {
 
 #define bpf_get_spilled_reg(slot, frame, mask)				\
 	(((slot < frame->allocated_stack / BPF_REG_SIZE) &&		\
-	  ((1 << frame->stack[slot].slot_type[0]) & (mask)))		\
+	  ((1 << frame->stack[slot].slot_type[BPF_REG_SIZE - 1]) & (mask))) \
	 ? &frame->stack[slot].spilled_ptr : NULL)
 
 /* Iterate over 'frame', setting 'reg' to either NULL or a spilled register. */

From patchwork Wed Dec 20 21:40:03 2023
X-Patchwork-Submitter: Maxim Mikityanskiy
X-Patchwork-Id: 13500467
From: Maxim Mikityanskiy
Miller" , Jakub Kicinski , Jesper Dangaard Brouer , bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, netdev@vger.kernel.org, Maxim Mikityanskiy Subject: [PATCH bpf-next 05/15] selftests/bpf: Add a test case for 32-bit spill tracking Date: Wed, 20 Dec 2023 23:40:03 +0200 Message-ID: <20231220214013.3327288-6-maxtram95@gmail.com> X-Mailer: git-send-email 2.42.1 In-Reply-To: <20231220214013.3327288-1-maxtram95@gmail.com> References: <20231220214013.3327288-1-maxtram95@gmail.com> Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Maxim Mikityanskiy When a range check is performed on a register that was 32-bit spilled to the stack, the IDs of the two instances of the register are the same, so the range should also be the same. Signed-off-by: Maxim Mikityanskiy --- .../selftests/bpf/progs/verifier_spill_fill.c | 31 +++++++++++++++++++ 1 file changed, 31 insertions(+) diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c index 84eccab36582..f2c1fe5b1dba 100644 --- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c +++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c @@ -737,4 +737,35 @@ __naked void stack_load_preserves_const_precision_subreg(void) : __clobber_common); } +SEC("xdp") +__description("32-bit spilled reg range should be tracked") +__success __retval(0) +__naked void spill_32bit_range_track(void) +{ + asm volatile(" \ + call %[bpf_ktime_get_ns]; \ + /* Make r0 bounded. */ \ + r0 &= 65535; \ + /* Assign an ID to r0. */ \ + r1 = r0; \ + /* 32-bit spill r0 to stack. */ \ + *(u32*)(r10 - 8) = r0; \ + /* Boundary check on r0. */ \ + if r0 < 1 goto l0_%=; \ + /* 32-bit fill r1 from stack. */ \ + r1 = *(u32*)(r10 - 8); \ + /* r1 == r0 => r1 >= 1 always. */ \ + if r1 >= 1 goto l0_%=; \ + /* Dead branch: the verifier should prune it. \ + * Do an invalid memory access if the verifier \ + * follows it. 
+	 */					\
+	r0 = *(u64*)(r9 + 0);			\
+l0_%=:	r0 = 0;					\
+	exit;					\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
 char _license[] SEC("license") = "GPL";

From patchwork Wed Dec 20 21:40:04 2023
X-Patchwork-Submitter: Maxim Mikityanskiy
X-Patchwork-Id: 13500468
From: Maxim Mikityanskiy
Miller" , Jakub Kicinski , Jesper Dangaard Brouer , bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, netdev@vger.kernel.org, Maxim Mikityanskiy Subject: [PATCH bpf-next 06/15] bpf: Add the assign_scalar_id_before_mov function Date: Wed, 20 Dec 2023 23:40:04 +0200 Message-ID: <20231220214013.3327288-7-maxtram95@gmail.com> X-Mailer: git-send-email 2.42.1 In-Reply-To: <20231220214013.3327288-1-maxtram95@gmail.com> References: <20231220214013.3327288-1-maxtram95@gmail.com> Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Maxim Mikityanskiy Extract the common code that generates a register ID for src_reg before MOV if needed into a new function. This function will also be used in a following commit. Signed-off-by: Maxim Mikityanskiy --- kernel/bpf/verifier.c | 33 +++++++++++++++++++-------------- 1 file changed, 19 insertions(+), 14 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 89f8c527ed3c..a703e3adedd3 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -4401,6 +4401,18 @@ static bool __is_pointer_value(bool allow_ptr_leaks, return reg->type != SCALAR_VALUE; } +static void assign_scalar_id_before_mov(struct bpf_verifier_env *env, + struct bpf_reg_state *src_reg) +{ + if (src_reg->type == SCALAR_VALUE && !src_reg->id && + !tnum_is_const(src_reg->var_off)) + /* Ensure that src_reg has a valid ID that will be copied to + * dst_reg and then will be used by find_equal_scalars() to + * propagate min/max range. + */ + src_reg->id = ++env->id_gen; +} + /* Copy src state preserving dst->parent and dst->live fields */ static void copy_register_state(struct bpf_reg_state *dst, const struct bpf_reg_state *src) { @@ -13886,20 +13898,13 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn) if (BPF_SRC(insn->code) == BPF_X) { struct bpf_reg_state *src_reg = regs + insn->src_reg; struct bpf_reg_state *dst_reg = regs + insn->dst_reg; - bool need_id = src_reg->type == SCALAR_VALUE && !src_reg->id && - !tnum_is_const(src_reg->var_off); if (BPF_CLASS(insn->code) == BPF_ALU64) { if (insn->off == 0) { /* case: R1 = R2 * copy register state to dest reg */ - if (need_id) - /* Assign src and dst registers the same ID - * that will be used by find_equal_scalars() - * to propagate min/max range. 
-					 */
-					src_reg->id = ++env->id_gen;
+				assign_scalar_id_before_mov(env, src_reg);
 				copy_register_state(dst_reg, src_reg);
 				dst_reg->live |= REG_LIVE_WRITTEN;
 				dst_reg->subreg_def = DEF_NOT_SUBREG;
@@ -13914,8 +13919,8 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 					bool no_sext;
 
 					no_sext = src_reg->umax_value < (1ULL << (insn->off - 1));
-					if (no_sext && need_id)
-						src_reg->id = ++env->id_gen;
+					if (no_sext)
+						assign_scalar_id_before_mov(env, src_reg);
 					copy_register_state(dst_reg, src_reg);
 					if (!no_sext)
 						dst_reg->id = 0;
@@ -13937,8 +13942,8 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 			if (insn->off == 0) {
 				bool is_src_reg_u32 = src_reg->umax_value <= U32_MAX;
 
-				if (is_src_reg_u32 && need_id)
-					src_reg->id = ++env->id_gen;
+				if (is_src_reg_u32)
+					assign_scalar_id_before_mov(env, src_reg);
 				copy_register_state(dst_reg, src_reg);
 				/* Make sure ID is cleared if src_reg is not in u32
 				 * range otherwise dst_reg min/max could be incorrectly
@@ -13952,8 +13957,8 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 				/* case: W1 = (s8, s16)W2 */
 				bool no_sext = src_reg->umax_value < (1ULL << (insn->off - 1));
 
-				if (no_sext && need_id)
-					src_reg->id = ++env->id_gen;
+				if (no_sext)
+					assign_scalar_id_before_mov(env, src_reg);
 				copy_register_state(dst_reg, src_reg);
 				if (!no_sext)
 					dst_reg->id = 0;

From patchwork Wed Dec 20 21:40:05 2023
X-Patchwork-Submitter: Maxim Mikityanskiy
X-Patchwork-Id: 13500469
From: Maxim Mikityanskiy
Subject: [PATCH bpf-next 07/15] bpf: Add the get_reg_width function
Date: Wed, 20 Dec 2023 23:40:05 +0200
Message-ID: <20231220214013.3327288-8-maxtram95@gmail.com>
In-Reply-To: <20231220214013.3327288-1-maxtram95@gmail.com>

From: Maxim Mikityanskiy

Put the calculation of the register value width into a dedicated
function. This function will also be used in a following commit.

Signed-off-by: Maxim Mikityanskiy
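A minimal sketch of the helper's behavior (fls64() returns the 1-based
index of the most significant set bit, and 0 for an all-zero value):

    /*   get_reg_width(reg with umax_value = 0)       ->  0 bits
     *   get_reg_width(reg with umax_value = 255)     ->  8 bits
     *   get_reg_width(reg with umax_value = U32_MAX) -> 32 bits
     *
     * so the new "get_reg_width(src_reg) <= 32" check below is
     * equivalent to the previous "src_reg->umax_value <= U32_MAX".
     */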
---
 kernel/bpf/verifier.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index a703e3adedd3..b757fdbbbdd2 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4448,6 +4448,11 @@ static bool is_bpf_st_mem(struct bpf_insn *insn)
 	return BPF_CLASS(insn->code) == BPF_ST && BPF_MODE(insn->code) == BPF_MEM;
 }
 
+static int get_reg_width(struct bpf_reg_state *reg)
+{
+	return fls64(reg->umax_value);
+}
+
 /* check_stack_{read,write}_fixed_off functions track spill/fill of registers,
  * stack boundary and alignment are checked in check_mem_access()
  */
@@ -4500,7 +4505,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
 	if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
 	    env->bpf_capable) {
 		save_register_state(env, state, spi, reg, size);
 		/* Break the relation on a narrowing spill. */
-		if (fls64(reg->umax_value) > BITS_PER_BYTE * size)
+		if (get_reg_width(reg) > BITS_PER_BYTE * size)
 			state->stack[spi].spilled_ptr.id = 0;
 	} else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
 		   insn->imm != 0 && env->bpf_capable) {
@@ -13940,7 +13945,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 			return -EACCES;
 		} else if (src_reg->type == SCALAR_VALUE) {
 			if (insn->off == 0) {
-				bool is_src_reg_u32 = src_reg->umax_value <= U32_MAX;
+				bool is_src_reg_u32 = get_reg_width(src_reg) <= 32;
 
 				if (is_src_reg_u32)
 					assign_scalar_id_before_mov(env, src_reg);
 				copy_register_state(dst_reg, src_reg);

From patchwork Wed Dec 20 21:40:06 2023
X-Patchwork-Submitter: Maxim Mikityanskiy
X-Patchwork-Id: 13500470
From: Maxim Mikityanskiy
Subject: [PATCH bpf-next 08/15] bpf: Assign ID to scalars on spill
Date: Wed, 20 Dec 2023 23:40:06 +0200
Message-ID: <20231220214013.3327288-9-maxtram95@gmail.com>
In-Reply-To: <20231220214013.3327288-1-maxtram95@gmail.com>

From: Maxim Mikityanskiy

Currently, when a bounded scalar register is spilled to the stack, its
ID is preserved, but only if it was already assigned, i.e. if this
register was MOVed before.

Assign an ID on spill if none is set, so that equal scalars can be
tracked if a register is spilled to the stack and filled into another
register.

One test is adjusted to reflect the change in register IDs.

Signed-off-by: Maxim Mikityanskiy
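A condensed sketch of the pattern this enables (distilled from the
selftests added in the next commit, not new verifier semantics; the
"out" label is a placeholder):

    call %[bpf_get_prandom_u32];    /* r0: scalar, no ID assigned yet */
    r0 &= 0xff;                     /* bounded, but still no ID */
    *(u64 *)(r10 - 8) = r0;         /* spill: an ID is now assigned here */
    r1 = *(u64 *)(r10 - 8);         /* fill into r1: same ID as r0 */
    if r0 > 10 goto out;            /* range check on r0 also narrows r1
                                     * via find_equal_scalars() */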
---
 kernel/bpf/verifier.c                                    | 8 +++++++-
 .../selftests/bpf/progs/verifier_direct_packet_access.c | 2 +-
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b757fdbbbdd2..caa768f1e369 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4503,9 +4503,15 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
 	mark_stack_slot_scratched(env, spi);
 	if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
 	    env->bpf_capable) {
+		bool reg_value_fits;
+
+		reg_value_fits = get_reg_width(reg) <= BITS_PER_BYTE * size;
+		/* Make sure that reg had an ID to build a relation on spill. */
+		if (reg_value_fits)
+			assign_scalar_id_before_mov(env, reg);
 		save_register_state(env, state, spi, reg, size);
 		/* Break the relation on a narrowing spill. */
-		if (get_reg_width(reg) > BITS_PER_BYTE * size)
+		if (!reg_value_fits)
 			state->stack[spi].spilled_ptr.id = 0;
 	} else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
 		   insn->imm != 0 && env->bpf_capable) {
diff --git a/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c b/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
index be95570ab382..28b602ac9cbe 100644
--- a/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
+++ b/tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c
@@ -568,7 +568,7 @@ l0_%=:	r0 = 0;						\
 
 SEC("tc")
 __description("direct packet access: test23 (x += pkt_ptr, 4)")
-__failure __msg("invalid access to packet, off=0 size=8, R5(id=2,off=0,r=0)")
+__failure __msg("invalid access to packet, off=0 size=8, R5(id=3,off=0,r=0)")
 __flag(BPF_F_ANY_ALIGNMENT)
 __naked void test23_x_pkt_ptr_4(void)
 {

From patchwork Wed Dec 20 21:40:07 2023
X-Patchwork-Submitter: Maxim Mikityanskiy
X-Patchwork-Id: 13500471
From: Maxim Mikityanskiy
Subject: [PATCH bpf-next 09/15] selftests/bpf: Test assigning ID to scalars on spill
Date: Wed, 20 Dec 2023 23:40:07 +0200
Message-ID: <20231220214013.3327288-10-maxtram95@gmail.com>
In-Reply-To: <20231220214013.3327288-1-maxtram95@gmail.com>

From: Maxim Mikityanskiy

The previous commit implemented assigning IDs to registers holding
scalars before spill. Add the test cases to check the new
functionality.

Signed-off-by: Maxim Mikityanskiy
---
 .../selftests/bpf/progs/verifier_spill_fill.c | 133 ++++++++++++++++++
 1 file changed, 133 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index f2c1fe5b1dba..86881eaab4e2 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -768,4 +768,137 @@ l0_%=:	r0 = 0;						\
 	: __clobber_all);
 }
 
+SEC("xdp")
+__description("64-bit spill of 64-bit reg should assign ID")
+__success __retval(0)
+__naked void spill_64bit_of_64bit_ok(void)
+{
+	asm volatile ("				\
+	/* Roll one bit to make the register inexact. */\
+	call %[bpf_get_prandom_u32];		\
+	r0 &= 0x80000000;			\
+	r0 <<= 32;				\
+	/* 64-bit spill r0 to stack - should assign an ID. */\
+	*(u64*)(r10 - 8) = r0;			\
+	/* 64-bit fill r1 from stack - should preserve the ID. */\
+	r1 = *(u64*)(r10 - 8);			\
+	/* Compare r1 with another register to trigger find_equal_scalars.\
+	 * Having one random bit is important here, otherwise the verifier cuts\
+	 * the corners.				\
+	 */					\
+	r2 = 0;					\
+	if r1 != r2 goto l0_%=;			\
+	/* The result of this comparison is predefined. */\
+	if r0 == r2 goto l0_%=;			\
+	/* Dead branch: the verifier should prune it. Do an invalid memory\
+	 * access if the verifier follows it.	\
+	 */					\
+	r0 = *(u64*)(r9 + 0);			\
+	exit;					\
+l0_%=:	r0 = 0;					\
+	exit;					\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("32-bit spill of 32-bit reg should assign ID")
+__success __retval(0)
+__naked void spill_32bit_of_32bit_ok(void)
+{
+	asm volatile ("				\
+	/* Roll one bit to make the register inexact. */\
+	call %[bpf_get_prandom_u32];		\
+	w0 &= 0x80000000;			\
+	/* 32-bit spill r0 to stack - should assign an ID. */\
+	*(u32*)(r10 - 8) = r0;			\
+	/* 32-bit fill r1 from stack - should preserve the ID. */\
+	r1 = *(u32*)(r10 - 8);			\
+	/* Compare r1 with another register to trigger find_equal_scalars.\
+	 * Having one random bit is important here, otherwise the verifier cuts\
+	 * the corners.				\
+	 */					\
+	r2 = 0;					\
+	if r1 != r2 goto l0_%=;			\
+	/* The result of this comparison is predefined. */\
+	if r0 == r2 goto l0_%=;			\
+	/* Dead branch: the verifier should prune it. Do an invalid memory\
+	 * access if the verifier follows it.	\
+	 */					\
+	r0 = *(u64*)(r9 + 0);			\
+	exit;					\
+l0_%=:	r0 = 0;					\
+	exit;					\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("16-bit spill of 16-bit reg should assign ID")
+__success __retval(0)
+__naked void spill_16bit_of_16bit_ok(void)
+{
+	asm volatile ("				\
+	/* Roll one bit to make the register inexact. */\
+	call %[bpf_get_prandom_u32];		\
+	r0 &= 0x8000;				\
+	/* 16-bit spill r0 to stack - should assign an ID. */\
+	*(u16*)(r10 - 8) = r0;			\
+	/* 16-bit fill r1 from stack - should preserve the ID. */\
+	r1 = *(u16*)(r10 - 8);			\
+	/* Compare r1 with another register to trigger find_equal_scalars.\
+	 * Having one random bit is important here, otherwise the verifier cuts\
+	 * the corners.				\
+	 */					\
+	r2 = 0;					\
+	if r1 != r2 goto l0_%=;			\
+	/* The result of this comparison is predefined. */\
+	if r0 == r2 goto l0_%=;			\
+	/* Dead branch: the verifier should prune it. Do an invalid memory\
+	 * access if the verifier follows it.	\
+	 */					\
+	r0 = *(u64*)(r9 + 0);			\
+	exit;					\
+l0_%=:	r0 = 0;					\
+	exit;					\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("8-bit spill of 8-bit reg should assign ID")
+__success __retval(0)
+__naked void spill_8bit_of_8bit_ok(void)
+{
+	asm volatile ("				\
+	/* Roll one bit to make the register inexact. */\
+	call %[bpf_get_prandom_u32];		\
+	r0 &= 0x80;				\
+	/* 8-bit spill r0 to stack - should assign an ID. */\
+	*(u8*)(r10 - 8) = r0;			\
+	/* 8-bit fill r1 from stack - should preserve the ID. */\
+	r1 = *(u8*)(r10 - 8);			\
+	/* Compare r1 with another register to trigger find_equal_scalars.\
+	 * Having one random bit is important here, otherwise the verifier cuts\
+	 * the corners.				\
+	 */					\
+	r2 = 0;					\
+	if r1 != r2 goto l0_%=;			\
+	/* The result of this comparison is predefined. */\
+	if r0 == r2 goto l0_%=;			\
+	/* Dead branch: the verifier should prune it. Do an invalid memory\
+	 * access if the verifier follows it.	\
+	 */					\
+	r0 = *(u64*)(r9 + 0);			\
+	exit;					\
+l0_%=:	r0 = 0;					\
+	exit;					\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
 char _license[] SEC("license") = "GPL";

From patchwork Wed Dec 20 21:40:08 2023
X-Patchwork-Submitter: Maxim Mikityanskiy
X-Patchwork-Id: 13500472
From: Maxim Mikityanskiy
Miller" , Jakub Kicinski , Jesper Dangaard Brouer , bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, netdev@vger.kernel.org, Maxim Mikityanskiy Subject: [PATCH bpf-next 10/15] bpf: Track spilled unbounded scalars Date: Wed, 20 Dec 2023 23:40:08 +0200 Message-ID: <20231220214013.3327288-11-maxtram95@gmail.com> X-Mailer: git-send-email 2.42.1 In-Reply-To: <20231220214013.3327288-1-maxtram95@gmail.com> References: <20231220214013.3327288-1-maxtram95@gmail.com> Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Maxim Mikityanskiy Support the pattern where an unbounded scalar is spilled to the stack, then boundary checks are performed on the src register, after which the stack frame slot is refilled into a register. Before this commit, the verifier didn't treat the src register and the stack slot as related if the src register was an unbounded scalar. The register state wasn't copied, the id wasn't preserved, and the stack slot was marked as STACK_MISC. Subsequent boundary checks on the src register wouldn't result in updating the boundaries of the spilled variable on the stack. After this commit, the verifier will preserve the bond between src and dst even if src is unbounded, which permits to do boundary checks on src and refill dst later, still remembering its boundaries. Such a pattern is sometimes generated by clang when compiling complex long functions. One test is adjusted to reflect the fact that an untracked register is marked as precise at an earlier stage, and one more test is adjusted to reflect that now unbounded scalars are tracked. Signed-off-by: Maxim Mikityanskiy --- kernel/bpf/verifier.c | 7 +------ tools/testing/selftests/bpf/progs/verifier_spill_fill.c | 6 +++--- tools/testing/selftests/bpf/verifier/precise.c | 6 +++--- 3 files changed, 7 insertions(+), 12 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index caa768f1e369..9b5053389739 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -4387,11 +4387,6 @@ static bool __is_scalar_unbounded(struct bpf_reg_state *reg) reg->u32_min_value == 0 && reg->u32_max_value == U32_MAX; } -static bool register_is_bounded(struct bpf_reg_state *reg) -{ - return reg->type == SCALAR_VALUE && !__is_scalar_unbounded(reg); -} - static bool __is_pointer_value(bool allow_ptr_leaks, const struct bpf_reg_state *reg) { @@ -4502,7 +4497,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env, return err; mark_stack_slot_scratched(env, spi); - if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) { + if (reg && !(off % BPF_REG_SIZE) && reg->type == SCALAR_VALUE && env->bpf_capable) { bool reg_value_fits; reg_value_fits = get_reg_width(reg) <= BITS_PER_BYTE * size; diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c index 86881eaab4e2..92e446b18e10 100644 --- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c +++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c @@ -454,9 +454,9 @@ l0_%=: r1 >>= 16; \ SEC("raw_tp") __log_level(2) __success -__msg("fp-8=0m??mmmm") -__msg("fp-16=00mm??mm") -__msg("fp-24=00mm???m") +__msg("fp-8=0m??scalar()") +__msg("fp-16=00mm??scalar()") +__msg("fp-24=00mm???scalar()") __naked void spill_subregs_preserve_stack_zero(void) { asm volatile ( diff --git a/tools/testing/selftests/bpf/verifier/precise.c b/tools/testing/selftests/bpf/verifier/precise.c index 
Signed-off-by: Maxim Mikityanskiy
---
 kernel/bpf/verifier.c                                   | 7 +------
 tools/testing/selftests/bpf/progs/verifier_spill_fill.c | 6 +++---
 tools/testing/selftests/bpf/verifier/precise.c          | 6 +++---
 3 files changed, 7 insertions(+), 12 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index caa768f1e369..9b5053389739 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4387,11 +4387,6 @@ static bool __is_scalar_unbounded(struct bpf_reg_state *reg)
 	       reg->u32_min_value == 0 && reg->u32_max_value == U32_MAX;
 }
 
-static bool register_is_bounded(struct bpf_reg_state *reg)
-{
-	return reg->type == SCALAR_VALUE && !__is_scalar_unbounded(reg);
-}
-
 static bool __is_pointer_value(bool allow_ptr_leaks,
 			       const struct bpf_reg_state *reg)
 {
@@ -4502,7 +4497,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
 		return err;
 
 	mark_stack_slot_scratched(env, spi);
-	if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) {
+	if (reg && !(off % BPF_REG_SIZE) && reg->type == SCALAR_VALUE && env->bpf_capable) {
 		bool reg_value_fits;
 
 		reg_value_fits = get_reg_width(reg) <= BITS_PER_BYTE * size;
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 86881eaab4e2..92e446b18e10 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -454,9 +454,9 @@ l0_%=:	r1 >>= 16;					\
 SEC("raw_tp")
 __log_level(2)
 __success
-__msg("fp-8=0m??mmmm")
-__msg("fp-16=00mm??mm")
-__msg("fp-24=00mm???m")
+__msg("fp-8=0m??scalar()")
+__msg("fp-16=00mm??scalar()")
+__msg("fp-24=00mm???scalar()")
 __naked void spill_subregs_preserve_stack_zero(void)
 {
 	asm volatile (
diff --git a/tools/testing/selftests/bpf/verifier/precise.c b/tools/testing/selftests/bpf/verifier/precise.c
index 8a2ff81d8350..0a9293a57211 100644
--- a/tools/testing/selftests/bpf/verifier/precise.c
+++ b/tools/testing/selftests/bpf/verifier/precise.c
@@ -183,10 +183,10 @@
 	.prog_type = BPF_PROG_TYPE_XDP,
 	.flags = BPF_F_TEST_STATE_FREQ,
 	.errstr = "mark_precise: frame0: last_idx 7 first_idx 7\
-	mark_precise: frame0: parent state regs=r4 stack=:\
+	mark_precise: frame0: parent state regs=r4 stack=-8:\
 	mark_precise: frame0: last_idx 6 first_idx 4\
-	mark_precise: frame0: regs=r4 stack= before 6: (b7) r0 = -1\
-	mark_precise: frame0: regs=r4 stack= before 5: (79) r4 = *(u64 *)(r10 -8)\
+	mark_precise: frame0: regs=r4 stack=-8 before 6: (b7) r0 = -1\
+	mark_precise: frame0: regs=r4 stack=-8 before 5: (79) r4 = *(u64 *)(r10 -8)\
 	mark_precise: frame0: regs= stack=-8 before 4: (7b) *(u64 *)(r3 -8) = r0\
 	mark_precise: frame0: parent state regs=r0 stack=:\
 	mark_precise: frame0: last_idx 3 first_idx 3\

From patchwork Wed Dec 20 21:40:09 2023
From: Maxim Mikityanskiy
Subject: [PATCH bpf-next 11/15] selftests/bpf: Test tracking spilled unbounded scalars
Date: Wed, 20 Dec 2023 23:40:09 +0200
Message-ID: <20231220214013.3327288-12-maxtram95@gmail.com>
The previous commit added tracking for unbounded scalars on spill. Add
the test case to check the new functionality.

Signed-off-by: Maxim Mikityanskiy
---
 .../selftests/bpf/progs/verifier_spill_fill.c | 27 +++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 92e446b18e10..809a09732168 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -901,4 +901,31 @@ l0_%=:	r0 = 0;						\
 	: __clobber_all);
 }
 
+SEC("xdp")
+__description("spill unbounded reg, then range check src")
+__success __retval(0)
+__naked void spill_unbounded(void)
+{
+	asm volatile ("					\
+	/* Produce an unbounded scalar. */		\
+	call %[bpf_get_prandom_u32];			\
+	/* Spill r0 to stack. */			\
+	*(u64*)(r10 - 8) = r0;				\
+	/* Boundary check on r0. */			\
+	if r0 > 16 goto l0_%=;				\
+	/* Fill r0 from stack. */			\
+	r0 = *(u64*)(r10 - 8);				\
+	/* Boundary check on r0 with predetermined result. */\
+	if r0 <= 16 goto l0_%=;				\
+	/* Dead branch: the verifier should prune it. Do an invalid memory\
+	 * access if the verifier follows it.		\
+	 */						\
+	r0 = *(u64*)(r9 + 0);				\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
 char _license[] SEC("license") = "GPL";
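For reference, the spill/fill selftests can be exercised on their own
with test_progs (assuming a kernel tree with the BPF selftest
prerequisites set up; the commands below are a common workflow, not
part of the patch):

	$ cd tools/testing/selftests/bpf
	$ make
	$ ./test_progs -t verifier_spill_fill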
Miller" , Jakub Kicinski , Jesper Dangaard Brouer , bpf@vger.kernel.org, linux-kselftest@vger.kernel.org, netdev@vger.kernel.org, Maxim Mikityanskiy Subject: [PATCH bpf-next 12/15] bpf: Preserve boundaries and track scalars on narrowing fill Date: Wed, 20 Dec 2023 23:40:10 +0200 Message-ID: <20231220214013.3327288-13-maxtram95@gmail.com> X-Mailer: git-send-email 2.42.1 In-Reply-To: <20231220214013.3327288-1-maxtram95@gmail.com> References: <20231220214013.3327288-1-maxtram95@gmail.com> Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Maxim Mikityanskiy When the width of a fill is smaller than the width of the preceding spill, the information about scalar boundaries can still be preserved, as long as it's coerced to the right width (done by coerce_reg_to_size). Even further, if the actual value fits into the fill width, the ID can be preserved as well for further tracking of equal scalars. Implement the above improvements, which makes narrowing fills behave the same as narrowing spills and MOVs between registers. Two tests are adjusted to accommodate for endianness differences and to take into account that it's now allowed to do a narrowing fill from the least significant bits. reg_bounds_sync is added to coerce_reg_to_size to correctly adjust umin/umax boundaries after the var_off truncation, for example, a 64-bit value 0xXXXXXXXX00000000, when read as a 32-bit, gets umin = 0, umax = 0xFFFFFFFF, var_off = (0x0; 0xffffffff00000000), which needs to be synced down to umax = 0, otherwise reg_bounds_sanity_check doesn't pass. Signed-off-by: Maxim Mikityanskiy --- kernel/bpf/verifier.c | 20 ++++++++++--- .../selftests/bpf/progs/verifier_spill_fill.c | 28 +++++++++++++------ 2 files changed, 35 insertions(+), 13 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 9b5053389739..b6e252539e52 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -4772,7 +4772,13 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env, if (dst_regno < 0) return 0; - if (!(off % BPF_REG_SIZE) && size == spill_size) { + if (size <= spill_size && +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ + !(off % BPF_REG_SIZE) +#else + !((off + size - spill_size) % BPF_REG_SIZE) +#endif + ) { /* The earlier check_reg_arg() has decided the * subreg_def for this insn. Save it first. */ @@ -4780,6 +4786,12 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env, copy_register_state(&state->regs[dst_regno], reg); state->regs[dst_regno].subreg_def = subreg_def; + + /* Break the relation on a narrowing fill. + * coerce_reg_to_size will adjust the boundaries. + */ + if (get_reg_width(reg) > size * BITS_PER_BYTE) + state->regs[dst_regno].id = 0; } else { int spill_cnt = 0, zero_cnt = 0; @@ -6055,10 +6067,10 @@ static void coerce_reg_to_size(struct bpf_reg_state *reg, int size) * values are also truncated so we push 64-bit bounds into * 32-bit bounds. Above were truncated < 32-bits already. 
Signed-off-by: Maxim Mikityanskiy
---
 kernel/bpf/verifier.c                         | 20 ++++++++++---
 .../selftests/bpf/progs/verifier_spill_fill.c | 28 +++++++++++++------
 2 files changed, 35 insertions(+), 13 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9b5053389739..b6e252539e52 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4772,7 +4772,13 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
 	if (dst_regno < 0)
 		return 0;
 
-	if (!(off % BPF_REG_SIZE) && size == spill_size) {
+	if (size <= spill_size &&
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	    !(off % BPF_REG_SIZE)
+#else
+	    !((off + size - spill_size) % BPF_REG_SIZE)
+#endif
+	    ) {
 		/* The earlier check_reg_arg() has decided the
 		 * subreg_def for this insn. Save it first.
 		 */
@@ -4780,6 +4786,12 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
 		copy_register_state(&state->regs[dst_regno], reg);
 		state->regs[dst_regno].subreg_def = subreg_def;
+
+		/* Break the relation on a narrowing fill.
+		 * coerce_reg_to_size will adjust the boundaries.
+		 */
+		if (get_reg_width(reg) > size * BITS_PER_BYTE)
+			state->regs[dst_regno].id = 0;
 	} else {
 		int spill_cnt = 0, zero_cnt = 0;
 
@@ -6055,10 +6067,10 @@ static void coerce_reg_to_size(struct bpf_reg_state *reg, int size)
 	 * values are also truncated so we push 64-bit bounds into
 	 * 32-bit bounds. Above were truncated < 32-bits already.
 	 */
-	if (size < 4) {
+	if (size < 4)
 		__mark_reg32_unbounded(reg);
-		reg_bounds_sync(reg);
-	}
+
+	reg_bounds_sync(reg);
 }
 
 static void set_sext64_default_val(struct bpf_reg_state *reg, int size)
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 809a09732168..de03e72e07a9 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -217,7 +217,7 @@ __naked void uninit_u32_from_the_stack(void)
 
 SEC("tc")
 __description("Spill a u32 const scalar. Refill as u16. Offset to skb->data")
-__failure __msg("invalid access to packet")
+__success __retval(0)
 __naked void u16_offset_to_skb_data(void)
 {
 	asm volatile ("					\
@@ -225,19 +225,24 @@ __naked void u16_offset_to_skb_data(void)
 	r3 = *(u32*)(r1 + %[__sk_buff_data_end]);	\
 	w4 = 20;					\
 	*(u32*)(r10 - 8) = r4;				\
-	r4 = *(u16*)(r10 - 8);				\
+	r4 = *(u16*)(r10 - %[offset]);			\
 	r0 = r2;					\
-	/* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */\
+	/* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=20 */\
 	r0 += r4;					\
-	/* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */\
+	/* if (r0 > r3) R0=pkt,off=20 R2=pkt R3=pkt_end R4=20 */\
 	if r0 > r3 goto l0_%=;				\
-	/* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */\
+	/* r0 = *(u32 *)r2 R0=pkt,off=20 R2=pkt R3=pkt_end R4=20 */\
 	r0 = *(u32*)(r2 + 0);				\
 l0_%=:	r0 = 0;						\
 	exit;						\
 "	:
 	: __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
-	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+	  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)),
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	  __imm_const(offset, 8)
+#else
+	  __imm_const(offset, 6)
+#endif
 	: __clobber_all);
 }
 
@@ -270,7 +275,7 @@ l0_%=:	r0 = 0;						\
 }
 
 SEC("tc")
Offset to skb->data") __failure __msg("invalid access to packet") __naked void _6_offset_to_skb_data(void) { @@ -279,7 +284,7 @@ __naked void _6_offset_to_skb_data(void) r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ w4 = 20; \ *(u32*)(r10 - 8) = r4; \ - r4 = *(u16*)(r10 - 6); \ + r4 = *(u16*)(r10 - %[offset]); \ r0 = r2; \ /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */\ r0 += r4; \ @@ -291,7 +296,12 @@ l0_%=: r0 = 0; \ exit; \ " : : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), - __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)), +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ + __imm_const(offset, 6) +#else + __imm_const(offset, 8) +#endif : __clobber_all); } From patchwork Wed Dec 20 21:40:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Maxim Mikityanskiy X-Patchwork-Id: 13500475 Received: from mail-ej1-f50.google.com (mail-ej1-f50.google.com [209.85.218.50]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0EA444AF6B; Wed, 20 Dec 2023 21:40:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="GY+18ipp" Received: by mail-ej1-f50.google.com with SMTP id a640c23a62f3a-a2358a75b69so23202566b.1; Wed, 20 Dec 2023 13:40:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1703108454; x=1703713254; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=NAmo05RnOjTf6B7ADoC7uGraqPZAsfUxrzmafdm0UDs=; b=GY+18ipp1iLQ8raKR4tgkq9KteqMii6NDo8xWEfTxe4mmqO9miaHSqnCEKU2SP66hF GPGTu4ct8QlgZdOkJrbIVjY1kCol5adQvjQa5aI6eVM/rSXgwlqjtJjajRohLHpLsnt1 h//bBoMAGX++a59jqe40LwMSy3WXyDLlOgCdlPjcl37nbrpd/jgiYuCFzAU9PZ496EvN 385IVLT5CP089E7TwUr4H09VAYHlS2z/xpPwfDutLomSuVgamWYEwGlYCaNmZEmLk+IP nPDIJhQ1WCvlmfsVMRwaQR4d/SkkNjQD4z4HOB5VFnVpAlgxFxf3HRVOCvGg6qXb8Fxx AJow== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1703108454; x=1703713254; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=NAmo05RnOjTf6B7ADoC7uGraqPZAsfUxrzmafdm0UDs=; b=maRWh5fvwSQQAZJdgDJvQ9HgPZHSuzJFDxc3WPJ5oKooM8AiJ4hZ7/vv+U8m0f4IrB NP3oIaNt5tpoZpwHgZR3jwKFECl8R0Z9AoNQsqm7xvCm7evmOlIFirhRTHsE1/hgYo83 qB9tEmoQit/EWWcwj3wBMydjQD6ZG2Qpx5qDH94k2J9vKSJIgExE1IrnWWz7Uhp8W7YL yR/Z7ik3gxyCmJzr8y9EjxjZOfL9P0IM2LlEyC88oLvBgIb2jRY0ev8b0gQLR5ubUM5j uzkR8YsUqO+viK78Fcolx1bJC7Hgny79gG77t6Yu92YnN2i7wrq3HtiGSsgrHswdtwJW dbPw== X-Gm-Message-State: AOJu0YxSPczFzZlkxJ7HWewfMSKmMfAfhW85iXm1Dj1r0n1PfQIpa1Ps ccmTsCUS2IT/MO1cHpigVzU= X-Google-Smtp-Source: AGHT+IF+q5S2rEJ0JQi91pAua3JKvvrDe8tEuRGPT4sMkhb3YRVMJCg8JoEga3vqJ6rZLmt8JBeY+g== X-Received: by 2002:a17:906:d0cd:b0:a23:690e:48bf with SMTP id bq13-20020a170906d0cd00b00a23690e48bfmr3236891ejb.12.1703108454392; Wed, 20 Dec 2023 13:40:54 -0800 (PST) Received: from localhost ([185.220.101.166]) by smtp.gmail.com with ESMTPSA id 
The previous commit made it possible to preserve boundaries and track
IDs of scalars on narrowing fills. Add test cases for that pattern.
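The tests below parameterize the fill offset with %[offset] because the
least significant bits of a spilled value sit at different stack
addresses depending on endianness. A sketch of the layout for the
32-bit fill of a 64-bit spill (the diagram is an illustration, not part
of the patch):

	/* A 64-bit spill at fp-8 occupies bytes fp-8 .. fp-1.
	 *
	 * little endian: the LSB is at fp-8, so the low 32 bits are
	 *                read with *(u32*)(r10 - 8)  -> offset 8
	 * big endian:    the LSB is at fp-1, so the low 32 bits are
	 *                read with *(u32*)(r10 - 4)  -> offset 4
	 *
	 * The u16 fills in the previous patch follow the same rule,
	 * giving offsets 8 (LE) and 6 (BE) for a u32 spill at fp-8.
	 */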
Signed-off-by: Maxim Mikityanskiy
---
 .../selftests/bpf/progs/verifier_spill_fill.c | 108 ++++++++++++++
 1 file changed, 108 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index de03e72e07a9..df195cf5c77b 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -938,4 +938,112 @@ l0_%=:	r0 = 0;					\
 	: __clobber_all);
 }
 
+SEC("xdp")
+__description("32-bit fill after 64-bit spill")
+__success __retval(0)
+__naked void fill_32bit_after_spill_64bit(void)
+{
+	asm volatile("					\
+	/* Randomize the upper 32 bits. */		\
+	call %[bpf_get_prandom_u32];			\
+	r0 <<= 32;					\
+	/* 64-bit spill r0 to stack. */			\
+	*(u64*)(r10 - 8) = r0;				\
+	/* 32-bit fill r0 from stack. */		\
+	r0 = *(u32*)(r10 - %[offset]);			\
+	/* Boundary check on r0 with predetermined result. */\
+	if r0 == 0 goto l0_%=;				\
+	/* Dead branch: the verifier should prune it. Do an invalid memory\
+	 * access if the verifier follows it.		\
+	 */						\
+	r0 = *(u64*)(r9 + 0);				\
+l0_%=:	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32),
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	  __imm_const(offset, 8)
+#else
+	  __imm_const(offset, 4)
+#endif
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("32-bit fill after 64-bit spill of 32-bit value should preserve ID")
+__success __retval(0)
+__naked void fill_32bit_after_spill_64bit_preserve_id(void)
+{
+	asm volatile ("					\
+	/* Randomize the lower 32 bits. */		\
+	call %[bpf_get_prandom_u32];			\
+	w0 &= 0xffffffff;				\
+	/* 64-bit spill r0 to stack - should assign an ID. */\
+	*(u64*)(r10 - 8) = r0;				\
+	/* 32-bit fill r1 from stack - should preserve the ID. */\
+	r1 = *(u32*)(r10 - %[offset]);			\
+	/* Compare r1 with another register to trigger find_equal_scalars. */\
+	r2 = 0;						\
+	if r1 != r2 goto l0_%=;				\
+	/* The result of this comparison is predefined. */\
+	if r0 == r2 goto l0_%=;				\
+	/* Dead branch: the verifier should prune it. Do an invalid memory\
+	 * access if the verifier follows it.		\
+	 */						\
+	r0 = *(u64*)(r9 + 0);				\
+	exit;						\
+l0_%=:	r0 = 0;						\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32),
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	  __imm_const(offset, 8)
+#else
+	  __imm_const(offset, 4)
+#endif
+	: __clobber_all);
+}
+
+SEC("xdp")
+__description("32-bit fill after 64-bit spill should clear ID")
+__failure __msg("math between ctx pointer and 4294967295 is not allowed")
+__naked void fill_32bit_after_spill_64bit_clear_id(void)
+{
+	asm volatile ("					\
+	r6 = r1;					\
+	/* Roll one bit to force the verifier to track both branches. */\
+	call %[bpf_get_prandom_u32];			\
+	r0 &= 0x8;					\
+	/* Put a large number into r1. */		\
+	r1 = 0xffffffff;				\
+	r1 <<= 32;					\
+	r1 += r0;					\
+	/* 64-bit spill r1 to stack - should assign an ID. */\
+	*(u64*)(r10 - 8) = r1;				\
+	/* 32-bit fill r2 from stack - should clear the ID. */\
+	r2 = *(u32*)(r10 - %[offset]);			\
+	/* Compare r2 with another register to trigger find_equal_scalars.\
+	 * Having one random bit is important here, otherwise the verifier cuts\
+	 * the corners. If the ID was mistakenly preserved on fill, this would\
+	 * cause the verifier to think that r1 is also equal to zero in one of\
+	 * the branches, and equal to eight on the other branch.\
+	 */						\
+	r3 = 0;						\
+	if r2 != r3 goto l0_%=;				\
+l0_%=:	r1 >>= 32;					\
+	/* The verifier shouldn't propagate r2's range to r1, so it should\
+	 * still remember r1 = 0xffffffff and reject the below.\
+	 */						\
+	r6 += r1;					\
+	r0 = *(u32*)(r6 + 0);				\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32),
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	  __imm_const(offset, 8)
+#else
+	  __imm_const(offset, 4)
+#endif
+	: __clobber_all);
+}
+
 char _license[] SEC("license") = "GPL";

From patchwork Wed Dec 20 21:40:12 2023
From: Maxim Mikityanskiy
Subject: [PATCH bpf-next 14/15] bpf: Optimize state pruning for spilled scalars
Date: Wed, 20 Dec 2023 23:40:12 +0200
Message-ID: <20231220214013.3327288-15-maxtram95@gmail.com>
From: Eduard Zingerman

The changes for scalar ID tracking of spilled unbound scalars lead to
a verification performance regression in some cases. This commit
mitigates the regression by exploiting the following properties
maintained by check_stack_read_fixed_off():

- a mix of STACK_MISC, STACK_ZERO and STACK_INVALID marks is read as
  an unbounded scalar register;
- a spi with all slots marked STACK_ZERO is read as a scalar register
  with value zero.

This commit modifies stacksafe() to consider the situations above
equivalent.
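In tabular form, these are the old/cur stack slot combinations accepted
after this patch, in addition to exact matches (a summary derived from
the diff below; "unbound" means an unbounded scalar, and STACK_INVALID
is accepted in a mix only when uninitialized stack reads are allowed):

	old state                   cur state                   compared via
	--------------------------  --------------------------  ---------------------
	spill of unbound scalar     mix of STACK_MISC/ZERO/INV  regsafe(old, unbound)
	mix of STACK_MISC/ZERO/INV  spill of unbound scalar     regsafe(unbound, cur)
	spill of any scalar         all slots STACK_ZERO        regsafe(old, zero)
	all slots STACK_ZERO        spill of any scalar         regsafe(zero, cur)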
Veristat results after this patch show significant gains:

$ ./veristat -e file,prog,states -f '!states_pct<10' -f '!states_b<10' -C not-opt after
File              Program   States (A)  States (B)  States (DIFF)
----------------  --------  ----------  ----------  ----------------
pyperf180.bpf.o   on_event       10456        8422   -2034 (-19.45%)
pyperf600.bpf.o   on_event       37319       22519  -14800 (-39.66%)
strobemeta.bpf.o  on_event       13435        4703   -8732 (-64.99%)

Signed-off-by: Eduard Zingerman
---
 kernel/bpf/verifier.c | 83 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 83 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b6e252539e52..a020d4d83524 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1168,6 +1168,12 @@ static void mark_stack_slot_misc(struct bpf_verifier_env *env, u8 *stype)
 	*stype = STACK_MISC;
 }
 
+static bool is_spilled_scalar_reg64(const struct bpf_stack_state *stack)
+{
+	return stack->slot_type[0] == STACK_SPILL &&
+	       stack->spilled_ptr.type == SCALAR_VALUE;
+}
+
 static void scrub_spilled_slot(u8 *stype)
 {
 	if (*stype != STACK_INVALID)
@@ -16449,11 +16455,45 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
 	}
 }
 
+static bool is_stack_zero64(struct bpf_stack_state *stack)
+{
+	u32 i;
+
+	for (i = 0; i < ARRAY_SIZE(stack->slot_type); ++i)
+		if (stack->slot_type[i] != STACK_ZERO)
+			return false;
+	return true;
+}
+
+static bool is_stack_unbound_slot64(struct bpf_verifier_env *env,
+				    struct bpf_stack_state *stack)
+{
+	u32 i;
+
+	for (i = 0; i < ARRAY_SIZE(stack->slot_type); ++i)
+		if (stack->slot_type[i] != STACK_ZERO &&
+		    stack->slot_type[i] != STACK_MISC &&
+		    (!env->allow_uninit_stack || stack->slot_type[i] != STACK_INVALID))
+			return false;
+	return true;
+}
+
+static bool is_spilled_unbound_scalar_reg64(struct bpf_stack_state *stack)
+{
+	return is_spilled_scalar_reg64(stack) && __is_scalar_unbounded(&stack->spilled_ptr);
+}
+
 static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
 		      struct bpf_func_state *cur, struct bpf_idmap *idmap, bool exact)
 {
+	struct bpf_reg_state unbound_reg = {};
+	struct bpf_reg_state zero_reg = {};
 	int i, spi;
 
+	__mark_reg_unknown(env, &unbound_reg);
+	__mark_reg_const_zero(env, &zero_reg);
+	zero_reg.precise = true;
+
 	/* walk slots of the explored stack and ignore any additional
 	 * slots in the current stack, since explored(safe) state
 	 * didn't use them
@@ -16474,6 +16514,49 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
 			continue;
 		}
 
+		/* load of stack value with all MISC and ZERO slots produces unbounded
+		 * scalar value, call regsafe to ensure scalar ids are compared.
+		 */
+		if (is_spilled_unbound_scalar_reg64(&old->stack[spi]) &&
+		    is_stack_unbound_slot64(env, &cur->stack[spi])) {
+			i += BPF_REG_SIZE - 1;
+			if (!regsafe(env, &old->stack[spi].spilled_ptr, &unbound_reg,
+				     idmap, exact))
+				return false;
+			continue;
+		}
+
+		if (is_stack_unbound_slot64(env, &old->stack[spi]) &&
+		    is_spilled_unbound_scalar_reg64(&cur->stack[spi])) {
+			i += BPF_REG_SIZE - 1;
+			if (!regsafe(env, &unbound_reg, &cur->stack[spi].spilled_ptr,
+				     idmap, exact))
+				return false;
+			continue;
+		}
+
+		/* load of stack value with all ZERO slots produces scalar value 0,
+		 * call regsafe to ensure scalar ids are compared and precision
+		 * flags are taken into account.
+		 */
+		if (is_spilled_scalar_reg64(&old->stack[spi]) &&
+		    is_stack_zero64(&cur->stack[spi])) {
+			if (!regsafe(env, &old->stack[spi].spilled_ptr, &zero_reg,
+				     idmap, exact))
+				return false;
+			i += BPF_REG_SIZE - 1;
+			continue;
+		}
+
+		if (is_stack_zero64(&old->stack[spi]) &&
+		    is_spilled_scalar_reg64(&cur->stack[spi])) {
+			if (!regsafe(env, &zero_reg, &cur->stack[spi].spilled_ptr,
+				     idmap, exact))
+				return false;
+			i += BPF_REG_SIZE - 1;
+			continue;
+		}
+
 		if (old->stack[spi].slot_type[i % BPF_REG_SIZE] == STACK_INVALID)
 			continue;

From patchwork Wed Dec 20 21:40:13 2023
From: Maxim Mikityanskiy
Subject: [PATCH bpf-next 15/15] selftests/bpf: states pruning checks for scalar vs STACK_{MISC,ZERO}
Date: Wed, 20 Dec 2023 23:40:13 +0200
Message-ID: <20231220214013.3327288-16-maxtram95@gmail.com>
From: Eduard Zingerman

Check that stacksafe() considers the following old vs cur stack spill
state combinations equivalent:

- spill of unbound scalar vs combination of STACK_{MISC,ZERO,INVALID}
- STACK_MISC vs spill of unbound scalar
- spill of scalar 0 vs STACK_ZERO
- STACK_ZERO vs spill of scalar 0

Signed-off-by: Eduard Zingerman
---
 .../selftests/bpf/progs/verifier_spill_fill.c | 192 ++++++++++++++++++
 1 file changed, 192 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index df195cf5c77b..e2acc4fc3d10 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -1046,4 +1046,196 @@ l0_%=:	r1 >>= 32;					\
 	: __clobber_all);
 }
 
+/* stacksafe(): check if spill of unbound scalar in old state is
+ * considered equivalent to any state of the spill in the current state.
+ *
+ * On the first verification path an unbound scalar is written for
+ * fp-8 and later marked precise.
+ * On the second verification path a mix of STACK_MISC/ZERO/INVALID is
+ * written to fp-8. These should be considered equivalent.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("10: (79) r0 = *(u64 *)(r10 -8)")
+__msg("10: safe")
+__msg("processed 16 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void old_unbound_scalar_vs_cur_anything(void)
+{
+	asm volatile(
+	/* get a random value for branching */
+	"call %[bpf_ktime_get_ns];"
+	"r7 = r0;"
+	/* get a random value for storing at fp-8 */
+	"call %[bpf_ktime_get_ns];"
+	"if r7 == 0 goto 1f;"
+	/* unbound scalar written to fp-8 */
+	"*(u64*)(r10 - 8) = r0;"
+	"goto 2f;"
+"1:"
+	/* mark fp-8 as mix of STACK_MISC/ZERO/INVALID */
+	"r1 = 0;"
+	"*(u8*)(r10 - 8) = r0;"
+	"*(u8*)(r10 - 7) = r1;"
+	/* fp-2..fp-6 remain STACK_INVALID */
+	"*(u8*)(r10 - 1) = r0;"
+"2:"
+	/* read fp-8 and force it precise, should be considered safe
+	 * on second visit
+	 */
+	"r0 = *(u64*)(r10 - 8);"
+	"r0 &= 0xff;"
+	"r1 = r10;"
+	"r1 += r0;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* stacksafe(): check if spill of unbound scalar in old state is
+ * considered equivalent to STACK_MISC in cur state.
+ */ +SEC("socket") +__success __log_level(2) +__msg("8: (79) r0 = *(u64 *)(r10 -8) ; R0_w=scalar(id=1) R10=fp0 fp-8=scalar(id=1)") +__msg("8: safe") +__msg("processed 11 insns") +__flag(BPF_F_TEST_STATE_FREQ) +__naked void old_unbound_scalar_vs_cur_stack_misc(void) +{ + asm volatile( + /* get a random value for branching */ + "call %[bpf_ktime_get_ns];" + "if r0 == 0 goto 1f;" + /* conjure unbound scalar at fp-8 */ + "call %[bpf_ktime_get_ns];" + "*(u64*)(r10 - 8) = r0;" + "goto 2f;" +"1:" + /* conjure STACK_MISC at fp-8 */ + "call %[bpf_ktime_get_ns];" + "*(u64*)(r10 - 8) = r0;" + "*(u32*)(r10 - 4) = r0;" +"2:" + /* read fp-8, should be considered safe on second visit */ + "r0 = *(u64*)(r10 - 8);" + "exit;" + : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +/* stacksafe(): check if stack spill of unbound scalar in old state is + * considered equivalent to STACK_MISC in cur state. + */ +SEC("socket") +__success __log_level(2) +__msg("8: (79) r0 = *(u64 *)(r10 -8) ; R0_w=scalar() R10=fp0 fp-8=mmmmmmmm") +__msg("8: safe") +__msg("processed 11 insns") +__flag(BPF_F_TEST_STATE_FREQ) +__naked void old_stack_misc_vs_cur_unbound_scalar(void) +{ + asm volatile( + /* get a random value for branching */ + "call %[bpf_ktime_get_ns];" + "if r0 == 0 goto 1f;" + /* conjure STACK_MISC at fp-8 */ + "call %[bpf_ktime_get_ns];" + "*(u64*)(r10 - 8) = r0;" + "*(u32*)(r10 - 4) = r0;" + "goto 2f;" +"1:" + /* conjure unbound scalar at fp-8 */ + "call %[bpf_ktime_get_ns];" + "*(u64*)(r10 - 8) = r0;" +"2:" + /* read fp-8, should be considered safe on second visit */ + "r0 = *(u64*)(r10 - 8);" + "exit;" + : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +/* stacksafe(): check if spill of register with value 0 in old state + * is considered equivalent to STACK_ZERO. + */ +SEC("socket") +__success __log_level(2) +__msg("9: (79) r0 = *(u64 *)(r10 -8)") +__msg("9: safe") +__msg("processed 15 insns") +__flag(BPF_F_TEST_STATE_FREQ) +__naked void old_spill_zero_vs_stack_zero(void) +{ + asm volatile( + /* get a random value for branching */ + "call %[bpf_ktime_get_ns];" + "r7 = r0;" + /* get a random value for storing at fp-8 */ + "call %[bpf_ktime_get_ns];" + "if r7 == 0 goto 1f;" + /* conjure spilled register with value 0 at fp-8 */ + "*(u64*)(r10 - 8) = r0;" + "if r0 != 0 goto 3f;" + "goto 2f;" +"1:" + /* conjure STACK_ZERO at fp-8 */ + "r1 = 0;" + "*(u64*)(r10 - 8) = r1;" +"2:" + /* read fp-8 and force it precise, should be considered safe + * on second visit + */ + "r0 = *(u64*)(r10 - 8);" + "r1 = r10;" + "r1 += r0;" +"3:" + "exit;" + : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +/* stacksafe(): similar to old_spill_zero_vs_stack_zero() but the + * other way around: check if STACK_ZERO is considered equivalent to + * spill of register with value 0. 
+ */ +SEC("socket") +__success __log_level(2) +__msg("8: (79) r0 = *(u64 *)(r10 -8)") +__msg("8: safe") +__msg("processed 14 insns") +__flag(BPF_F_TEST_STATE_FREQ) +__naked void old_stack_zero_vs_spill_zero(void) +{ + asm volatile( + /* get a random value for branching */ + "call %[bpf_ktime_get_ns];" + "if r0 == 0 goto 1f;" + /* conjure STACK_ZERO at fp-8 */ + "r1 = 0;" + "*(u64*)(r10 - 8) = r1;" + "goto 2f;" +"1:" + /* conjure spilled register with value 0 at fp-8 */ + "call %[bpf_ktime_get_ns];" + "*(u64*)(r10 - 8) = r0;" + "if r0 != 0 goto 3f;" +"2:" + /* read fp-8 and force it precise, should be considered safe + * on second visit + */ + "r0 = *(u64*)(r10 - 8);" + "r1 = r10;" + "r1 += r0;" +"3:" + "exit;" + : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + char _license[] SEC("license") = "GPL";