From patchwork Thu Mar 5 23:44:12 2020
X-Patchwork-Submitter: Luke Nelson
X-Patchwork-Id: 11422795
From: Luke Nelson
To: bpf@vger.kernel.org
Cc: Luke Nelson, Xi Wang, Wang YanQing, "David S. Miller", Alexey Kuznetsov,
    Hideaki YOSHIFUJI, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    "H. Peter Anvin", x86@kernel.org, Alexei Starovoitov, Daniel Borkmann,
    Martin KaFai Lau, Song Liu, Yonghong Song, Andrii Nakryiko, Shuah Khan,
    Jiong Wang, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org
Subject: [PATCH bpf 1/2] bpf, x32: fix bug with JMP32 JSET BPF_X checking upper bits
Date: Thu, 5 Mar 2020 15:44:12 -0800
Message-Id: <20200305234416.31597-1-luke.r.nels@gmail.com>
X-Mailer: git-send-email 2.20.1

The current x32 BPF JIT is incorrect for JMP32 JSET BPF_X when the upper
32 bits of the operand registers are non-zero in certain situations.

The problem is in the following code:

    case BPF_JMP | BPF_JSET | BPF_X:
    case BPF_JMP32 | BPF_JSET | BPF_X:
    ...

    /* and dreg_lo,sreg_lo */
    EMIT2(0x23, add_2reg(0xC0, sreg_lo, dreg_lo));
    /* and dreg_hi,sreg_hi */
    EMIT2(0x23, add_2reg(0xC0, sreg_hi, dreg_hi));
    /* or dreg_lo,dreg_hi */
    EMIT2(0x09, add_2reg(0xC0, dreg_lo, dreg_hi));

This code checks the upper bits of the operand registers regardless of
whether the BPF instruction is a 32-bit (BPF_JMP32) or 64-bit (BPF_JMP)
jump. The registers dreg_hi and sreg_hi are not loaded from the stack for
BPF_JMP32; however, they can still be polluted with values from previous
instructions.

The following BPF program demonstrates the bug. The jset64 instruction
loads the temporary registers and performs the jump, since
((u64)r7 & (u64)r8) is non-zero. The jset32 branch should _not_ be taken,
as the lower 32 bits are all zero; however, the current JIT takes the
branch due to the pollution of the temporary registers by the earlier
jset64.

    mov64 r0, 0
    ld64 r7, 0x8000000000000000
    ld64 r8, 0x8000000000000000
    jset64 r7, r8, 1
    exit
    jset32 r7, r8, 1
    mov64 r0, 2
    exit

The expected return value of this program is 2; under the buggy x32 JIT
it returns 0.

The fix is to skip using the upper 32 bits for jset32 and to compare the
upper 32 bits only for jset64.

All tests in test_bpf.ko and selftests/bpf/test_verifier continue to pass
with this change.

We found this bug using our automated verification tool, Serval.
Fixes: 69f827eb6e14 ("x32: bpf: implement jitting of JMP32")
Co-developed-by: Xi Wang
Signed-off-by: Xi Wang
Signed-off-by: Luke Nelson
---
 arch/x86/net/bpf_jit_comp32.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
index 393d251798c0..4d2a7a764602 100644
--- a/arch/x86/net/bpf_jit_comp32.c
+++ b/arch/x86/net/bpf_jit_comp32.c
@@ -2039,10 +2039,12 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 			}
 			/* and dreg_lo,sreg_lo */
 			EMIT2(0x23, add_2reg(0xC0, sreg_lo, dreg_lo));
-			/* and dreg_hi,sreg_hi */
-			EMIT2(0x23, add_2reg(0xC0, sreg_hi, dreg_hi));
-			/* or dreg_lo,dreg_hi */
-			EMIT2(0x09, add_2reg(0xC0, dreg_lo, dreg_hi));
+			if (is_jmp64) {
+				/* and dreg_hi,sreg_hi */
+				EMIT2(0x23, add_2reg(0xC0, sreg_hi, dreg_hi));
+				/* or dreg_lo,dreg_hi */
+				EMIT2(0x09, add_2reg(0xC0, dreg_lo, dreg_hi));
+			}
 			goto emit_cond_jmp;
 		}
 		case BPF_JMP | BPF_JSET | BPF_K:
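
For completeness, the following small sketch mirrors the structure of the
patched emit path: the low-word AND is always performed, and the high-word
AND plus the OR that folds it in happen only when is_jmp64 is set. The
helper name and standalone form are hypothetical; the function merely
computes the value the emitted code would leave in dreg_lo, which the
subsequent conditional jump tests.

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical model of the fixed logic, not the JIT itself. */
    static uint32_t jset_test_value(uint32_t dreg_lo, uint32_t dreg_hi,
    				uint32_t sreg_lo, uint32_t sreg_hi,
    				bool is_jmp64)
    {
    	dreg_lo &= sreg_lo;		/* and dreg_lo,sreg_lo */
    	if (is_jmp64) {
    		dreg_hi &= sreg_hi;	/* and dreg_hi,sreg_hi */
    		dreg_lo |= dreg_hi;	/* or dreg_lo,dreg_hi */
    	}
    	return dreg_lo;			/* the conditional jump tests this */
    }

    int main(void)
    {
    	/* r7 = r8 = 0x8000000000000000: lo = 0, hi = 0x80000000. */
    	assert(jset_test_value(0, 0x80000000, 0, 0x80000000, true) != 0);
    	/* With the fix, jset32 ignores the (possibly stale) hi halves. */
    	assert(jset_test_value(0, 0x80000000, 0, 0x80000000, false) == 0);
    	return 0;
    }

Guarding only the hi-word instructions keeps the shared lo-word AND in
place for both cases, so the 64-bit path emits exactly the same code as
before the patch.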
From patchwork Thu Mar 5 23:44:13 2020
X-Patchwork-Submitter: Luke Nelson
X-Patchwork-Id: 11422797
From: Luke Nelson
To: bpf@vger.kernel.org
Cc: Luke Nelson, Xi Wang, Wang YanQing, "David S. Miller", Alexey Kuznetsov,
    Hideaki YOSHIFUJI, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    "H. Peter Anvin", x86@kernel.org, Alexei Starovoitov, Daniel Borkmann,
    Martin KaFai Lau, Song Liu, Yonghong Song, Andrii Nakryiko, Shuah Khan,
    Jiong Wang, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org
Subject: [PATCH bpf 2/2] selftests: bpf: add test for JMP32 JSET BPF_X with upper bits set
Date: Thu, 5 Mar 2020 15:44:13 -0800
Message-Id: <20200305234416.31597-2-luke.r.nels@gmail.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200305234416.31597-1-luke.r.nels@gmail.com>
References: <20200305234416.31597-1-luke.r.nels@gmail.com>

The existing tests attempt to check that JMP32 JSET ignores the upper
bits in the operand registers. However, they missed a bug in the x32 JIT
that is only uncovered when a previous instruction pollutes the upper
32 bits of the registers.

This patch adds a new test case that catches the bug by first executing
a 64-bit JSET to pollute the upper 32 bits of the temporary registers,
followed by a 32-bit JSET, which should ignore the upper 32 bits.

Co-developed-by: Xi Wang
Signed-off-by: Xi Wang
Signed-off-by: Luke Nelson
---
 tools/testing/selftests/bpf/verifier/jmp32.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/tools/testing/selftests/bpf/verifier/jmp32.c b/tools/testing/selftests/bpf/verifier/jmp32.c
index bf0322eb5346..bd5cae4a7f73 100644
--- a/tools/testing/selftests/bpf/verifier/jmp32.c
+++ b/tools/testing/selftests/bpf/verifier/jmp32.c
@@ -61,6 +61,21 @@
 	},
 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
 },
+{
+	"jset32: ignores upper bits",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_LD_IMM64(BPF_REG_7, 0x8000000000000000),
+	BPF_LD_IMM64(BPF_REG_8, 0x8000000000000000),
+	BPF_JMP_REG(BPF_JSET, BPF_REG_7, BPF_REG_8, 1),
+	BPF_EXIT_INSN(),
+	BPF_JMP32_REG(BPF_JSET, BPF_REG_7, BPF_REG_8, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 2,
+},
 {
 	"jset32: min/max deduction",
 	.insns = {