From patchwork Mon Dec 4 19:26:00 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrii Nakryiko
X-Patchwork-Id: 13479014
X-Patchwork-Delegate: bpf@iogearbox.net
From: Andrii Nakryiko
CC: Eduard Zingerman
Subject: [PATCH v3 bpf-next 09/10] selftests/bpf: validate precision logic in
 partial_stack_load_preserves_zeros
Date: Mon, 4 Dec 2023 11:26:00 -0800
Message-ID: <20231204192601.2672497-10-andrii@kernel.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231204192601.2672497-1-andrii@kernel.org>
References: <20231204192601.2672497-1-andrii@kernel.org>
X-Mailing-List: bpf@vger.kernel.org

Enhance the partial_stack_load_preserves_zeros subtest with detailed
precision propagation log checks. We now expect fp-16 to hold a spilled,
initially imprecise, zero const register, which is later marked precise
when a partial stack slot load is performed, even though that load is not
a register fill (!).
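For reference, this is the instruction sequence the log checks below walk,
excerpted from the subtest itself (insn indices and register states match
the expected verifier log):

    *(u64 *)(r10 -8) = 0;       /* insn 2: fp-8 becomes all STACK_ZERO */
    r0 = 0;                     /* insn 3: zero const reg, imprecise */
    *(u64 *)(r10 -16) = r0;     /* insn 4: fp-16 = spilled zero reg */
    r1 = %[single_byte_buf];    /* insn 5: (bf) r1 = r6 in the log */
    r2 = *(u8 *)(r10 -1);       /* insn 6: partial load from STACK_ZERO */
    r1 += r2;                   /* insn 7 */
    *(u8 *)(r1 + 0) = r2;       /* insn 8 */
    r1 = %[single_byte_buf];    /* insn 9: (bf) r1 = r6 in the log */
    r2 = *(u8 *)(r10 -9);       /* insn 10: partial load of the spilled
                                 * zero at fp-16, r2 becomes precise 0
                                 * (R2_w=P0) */
    r1 += r2;                   /* insn 11: pointer arithmetic triggers
                                 * mark_precise, which must backtrack
                                 * through stack=-16 all the way back to
                                 * r0 = 0 at insn 3 */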
Acked-by: Eduard Zingerman
Signed-off-by: Andrii Nakryiko
---
 .../selftests/bpf/progs/verifier_spill_fill.c | 40 +++++++++++++++----
 1 file changed, 32 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index 7c1f1927f01a..f7bebc79fec4 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -492,6 +492,22 @@ char single_byte_buf[1] SEC(".data.single_byte_buf");
 SEC("raw_tp")
 __log_level(2)
 __success
+/* make sure fp-8 is all STACK_ZERO */
+__msg("2: (7a) *(u64 *)(r10 -8) = 0 ; R10=fp0 fp-8_w=00000000")
+/* but fp-16 is spilled IMPRECISE zero const reg */
+__msg("4: (7b) *(u64 *)(r10 -16) = r0 ; R0_w=0 R10=fp0 fp-16_w=0")
+/* and now check that precision propagation works even for such tricky case */
+__msg("10: (71) r2 = *(u8 *)(r10 -9) ; R2_w=P0 R10=fp0 fp-16_w=0")
+__msg("11: (0f) r1 += r2")
+__msg("mark_precise: frame0: last_idx 11 first_idx 0 subseq_idx -1")
+__msg("mark_precise: frame0: regs=r2 stack= before 10: (71) r2 = *(u8 *)(r10 -9)")
+__msg("mark_precise: frame0: regs= stack=-16 before 9: (bf) r1 = r6")
+__msg("mark_precise: frame0: regs= stack=-16 before 8: (73) *(u8 *)(r1 +0) = r2")
+__msg("mark_precise: frame0: regs= stack=-16 before 7: (0f) r1 += r2")
+__msg("mark_precise: frame0: regs= stack=-16 before 6: (71) r2 = *(u8 *)(r10 -1)")
+__msg("mark_precise: frame0: regs= stack=-16 before 5: (bf) r1 = r6")
+__msg("mark_precise: frame0: regs= stack=-16 before 4: (7b) *(u64 *)(r10 -16) = r0")
+__msg("mark_precise: frame0: regs=r0 stack= before 3: (b7) r0 = 0")
 __naked void partial_stack_load_preserves_zeros(void)
 {
 	asm volatile (
@@ -505,42 +521,50 @@ __naked void partial_stack_load_preserves_zeros(void)
 		/* load single U8 from non-aligned STACK_ZERO slot */
 		"r1 = %[single_byte_buf];"
 		"r2 = *(u8 *)(r10 -1);"
-		"r1 += r2;" /* this should be fine */
+		"r1 += r2;"
+		"*(u8 *)(r1 + 0) = r2;" /* this should be fine */
 
 		/* load single U8 from non-aligned ZERO REG slot */
 		"r1 = %[single_byte_buf];"
 		"r2 = *(u8 *)(r10 -9);"
-		"r1 += r2;" /* this should be fine */
+		"r1 += r2;"
+		"*(u8 *)(r1 + 0) = r2;" /* this should be fine */
 
 		/* load single U16 from non-aligned STACK_ZERO slot */
 		"r1 = %[single_byte_buf];"
 		"r2 = *(u16 *)(r10 -2);"
-		"r1 += r2;" /* this should be fine */
+		"r1 += r2;"
+		"*(u8 *)(r1 + 0) = r2;" /* this should be fine */
 
 		/* load single U16 from non-aligned ZERO REG slot */
 		"r1 = %[single_byte_buf];"
 		"r2 = *(u16 *)(r10 -10);"
-		"r1 += r2;" /* this should be fine */
+		"r1 += r2;"
+		"*(u8 *)(r1 + 0) = r2;" /* this should be fine */
 
 		/* load single U32 from non-aligned STACK_ZERO slot */
 		"r1 = %[single_byte_buf];"
 		"r2 = *(u32 *)(r10 -4);"
-		"r1 += r2;" /* this should be fine */
+		"r1 += r2;"
+		"*(u8 *)(r1 + 0) = r2;" /* this should be fine */
 
 		/* load single U32 from non-aligned ZERO REG slot */
 		"r1 = %[single_byte_buf];"
 		"r2 = *(u32 *)(r10 -12);"
-		"r1 += r2;" /* this should be fine */
+		"r1 += r2;"
+		"*(u8 *)(r1 + 0) = r2;" /* this should be fine */
 
 		/* for completeness, load U64 from STACK_ZERO slot */
 		"r1 = %[single_byte_buf];"
 		"r2 = *(u64 *)(r10 -8);"
-		"r1 += r2;" /* this should be fine */
+		"r1 += r2;"
+		"*(u8 *)(r1 + 0) = r2;" /* this should be fine */
 
 		/* for completeness, load U64 from ZERO REG slot */
 		"r1 = %[single_byte_buf];"
 		"r2 = *(u64 *)(r10 -16);"
-		"r1 += r2;" /* this should be fine */
+		"r1 += r2;"
+		"*(u8 *)(r1 + 0) = r2;" /* this should be fine */
 
 		"r0 = 0;"
 		"exit;"
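For context on how the __msg() checks above are exercised: programs in
verifier_spill_fill.c are driven by the selftests test_loader harness,
which loads each program, captures the verifier log at the requested
__log_level, and checks that the expected messages appear in order.
A minimal sketch, assuming the standard RUN_TESTS wiring in
prog_tests/verifier.c:

    #include <test_progs.h>
    #include "verifier_spill_fill.skel.h"

    void test_verifier_spill_fill(void)
    {
            /* loads every SEC()-annotated program in the skeleton and
             * matches all __msg() patterns against the verifier log;
             * __success additionally requires a clean load */
            RUN_TESTS(verifier_spill_fill);
    }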