From patchwork Fri Oct 1 13:03:39 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johan Almbladh X-Patchwork-Id: 12530621 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B0C34C4332F for ; Fri, 1 Oct 2021 13:05:10 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 95B1E61A8B for ; Fri, 1 Oct 2021 13:05:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1354002AbhJANGx (ORCPT ); Fri, 1 Oct 2021 09:06:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34094 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1354186AbhJANGD (ORCPT ); Fri, 1 Oct 2021 09:06:03 -0400 Received: from mail-ed1-x533.google.com (mail-ed1-x533.google.com [IPv6:2a00:1450:4864:20::533]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4DA71C0613E7 for ; Fri, 1 Oct 2021 06:04:01 -0700 (PDT) Received: by mail-ed1-x533.google.com with SMTP id ba1so34577758edb.4 for ; Fri, 01 Oct 2021 06:04:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=anyfinetworks-com.20210112.gappssmtp.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=3jpHHECyJBSuDzV8sQEpnKqzKuSA0gHWMInU+P2sJMM=; b=L5/YvycX993FGYGpJAohPhtyp1IoH+ZofMEQx+8C7sgxQSekL4wFyP31hjA2QBgWfv EV+gdzbHEx73OO3Ld4PwEm+10WM2GhGTnQKUtneQqbpGZuz67uZ4NpC5qnT/bigGoOFO fFySZP8o7fA/N4bQGX08jVTDbxVJCFHM69U/o0OwW48cAmoubPMReXFi2Y5pOW/FLap7 P0ci01MagBRYw/+99RvDAEjK7vG44nj2EhVVpY5XwGZADmM9euYk+ya4iE6xmIypgHY6 IFKtyek8lsU2M3IUleLIjGtgUEGkxA37HBTU22GMQza9h8Lr0rUoy+Es8ICOg+lAvaRc iBUw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=3jpHHECyJBSuDzV8sQEpnKqzKuSA0gHWMInU+P2sJMM=; b=hsbkxSWGN90DKVbHDTeJZ6tROj8iNA4CqBeyPrCAizGozjj1rSx1HHjd5Na6Z6QSdz 4/G+tqzm2B6xD0C+q+m0NSy53Q8OkLnQVCb1mN/MNmxJToZO0GXCqcx/cyEhUTsDpV5G nVm0yAYyqrlhsqjmWJPG9scLwf+OJUXh/LHUzUgRacUu5ChoGTVKvDSnKgB5k+MVfPIJ MR79Lz92yWWPlUut60q23EFWqcsc83Ke8I8bMQo9cLJ1YQwtHAmzJv4+AziOUxSJLmYt tPNMuPf/E/dYpqUWnq+9u/vk03lTUHTCh5a8EXl1cg+/XjFV3R6wfxSQKXSfEFyYul0p JMkA== X-Gm-Message-State: AOAM530fOjS0szIq5nft7Qn0UWlgom/+RJrZKQncUYY+PeTQk7w5k6XO bOHq7sNSuNZEOHv000VHw1BTOw== X-Google-Smtp-Source: ABdhPJxP1recQuvCaOvbZrc5X2GV9hImHVScvrvBzFLUbGU3S9GFk9xHcoujT8qJsDVi7wLsmygxhw== X-Received: by 2002:a50:e1cf:: with SMTP id m15mr13988643edl.181.1633093438864; Fri, 01 Oct 2021 06:03:58 -0700 (PDT) Received: from anpc2.lan (static-213-115-136-2.sme.telenor.se. 
[213.115.136.2]) by smtp.gmail.com with ESMTPSA id p22sm2920279ejl.90.2021.10.01.06.03.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 01 Oct 2021 06:03:58 -0700 (PDT) From: Johan Almbladh To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com, paul@cilium.io, yangtiezhu@loongson.cn, netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh Subject: [PATCH bpf-next 01/10] bpf/tests: Add tests of BPF_LDX and BPF_STX with small sizes Date: Fri, 1 Oct 2021 15:03:39 +0200 Message-Id: <20211001130348.3670534-2-johan.almbladh@anyfinetworks.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211001130348.3670534-1-johan.almbladh@anyfinetworks.com> References: <20211001130348.3670534-1-johan.almbladh@anyfinetworks.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net This patch adds a series of tests to verify the behavior of BPF_LDX and BPF_STX with BPF_B//W sizes in isolation. In particular, it checks that BPF_LDX zero-extendeds the result, and that BPF_STX does not overwrite adjacent bytes in memory. BPF_ST and operations on BPF_DW size are deemed to be sufficiently tested by existing tests. Signed-off-by: Johan Almbladh --- lib/test_bpf.c | 254 +++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 254 insertions(+) diff --git a/lib/test_bpf.c b/lib/test_bpf.c index 21ea1ab253a1..a838a6179ca4 100644 --- a/lib/test_bpf.c +++ b/lib/test_bpf.c @@ -6907,6 +6907,260 @@ static struct bpf_test tests[] = { { }, { { 0, (u32) (cpu_to_le64(0xfedcba9876543210ULL) >> 32) } }, }, + /* BPF_LDX_MEM B/H/W/DW */ + { + "BPF_LDX_MEM | BPF_B", + .u.insns_int = { + BPF_LD_IMM64(R1, 0x0102030405060708ULL), + BPF_LD_IMM64(R2, 0x0000000000000008ULL), + BPF_STX_MEM(BPF_DW, R10, R1, -8), +#ifdef __BIG_ENDIAN + BPF_LDX_MEM(BPF_B, R0, R10, -1), +#else + BPF_LDX_MEM(BPF_B, R0, R10, -8), +#endif + BPF_JMP_REG(BPF_JNE, R0, R2, 1), + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_EXIT_INSN(), + }, + INTERNAL, + { }, + { { 0, 0 } }, + .stack_depth = 8, + }, + { + "BPF_LDX_MEM | BPF_B, MSB set", + .u.insns_int = { + BPF_LD_IMM64(R1, 0x8182838485868788ULL), + BPF_LD_IMM64(R2, 0x0000000000000088ULL), + BPF_STX_MEM(BPF_DW, R10, R1, -8), +#ifdef __BIG_ENDIAN + BPF_LDX_MEM(BPF_B, R0, R10, -1), +#else + BPF_LDX_MEM(BPF_B, R0, R10, -8), +#endif + BPF_JMP_REG(BPF_JNE, R0, R2, 1), + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_EXIT_INSN(), + }, + INTERNAL, + { }, + { { 0, 0 } }, + .stack_depth = 8, + }, + { + "BPF_LDX_MEM | BPF_H", + .u.insns_int = { + BPF_LD_IMM64(R1, 0x0102030405060708ULL), + BPF_LD_IMM64(R2, 0x0000000000000708ULL), + BPF_STX_MEM(BPF_DW, R10, R1, -8), +#ifdef __BIG_ENDIAN + BPF_LDX_MEM(BPF_H, R0, R10, -2), +#else + BPF_LDX_MEM(BPF_H, R0, R10, -8), +#endif + BPF_JMP_REG(BPF_JNE, R0, R2, 1), + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_EXIT_INSN(), + }, + INTERNAL, + { }, + { { 0, 0 } }, + .stack_depth = 8, + }, + { + "BPF_LDX_MEM | BPF_H, MSB set", + .u.insns_int = { + BPF_LD_IMM64(R1, 0x8182838485868788ULL), + BPF_LD_IMM64(R2, 0x0000000000008788ULL), + BPF_STX_MEM(BPF_DW, R10, R1, -8), +#ifdef __BIG_ENDIAN + BPF_LDX_MEM(BPF_H, R0, R10, -2), +#else + BPF_LDX_MEM(BPF_H, R0, R10, -8), +#endif + BPF_JMP_REG(BPF_JNE, R0, R2, 1), + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_EXIT_INSN(), + }, + INTERNAL, + { }, + { { 0, 0 } }, + .stack_depth = 8, + }, + { + "BPF_LDX_MEM | BPF_W", + .u.insns_int = { + BPF_LD_IMM64(R1, 
0x0102030405060708ULL), + BPF_LD_IMM64(R2, 0x0000000005060708ULL), + BPF_STX_MEM(BPF_DW, R10, R1, -8), +#ifdef __BIG_ENDIAN + BPF_LDX_MEM(BPF_W, R0, R10, -4), +#else + BPF_LDX_MEM(BPF_W, R0, R10, -8), +#endif + BPF_JMP_REG(BPF_JNE, R0, R2, 1), + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_EXIT_INSN(), + }, + INTERNAL, + { }, + { { 0, 0 } }, + .stack_depth = 8, + }, + { + "BPF_LDX_MEM | BPF_W, MSB set", + .u.insns_int = { + BPF_LD_IMM64(R1, 0x8182838485868788ULL), + BPF_LD_IMM64(R2, 0x0000000085868788ULL), + BPF_STX_MEM(BPF_DW, R10, R1, -8), +#ifdef __BIG_ENDIAN + BPF_LDX_MEM(BPF_W, R0, R10, -4), +#else + BPF_LDX_MEM(BPF_W, R0, R10, -8), +#endif + BPF_JMP_REG(BPF_JNE, R0, R2, 1), + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_EXIT_INSN(), + }, + INTERNAL, + { }, + { { 0, 0 } }, + .stack_depth = 8, + }, + /* BPF_STX_MEM B/H/W/DW */ + { + "BPF_STX_MEM | BPF_B", + .u.insns_int = { + BPF_LD_IMM64(R1, 0x8090a0b0c0d0e0f0ULL), + BPF_LD_IMM64(R2, 0x0102030405060708ULL), + BPF_LD_IMM64(R3, 0x8090a0b0c0d0e008ULL), + BPF_STX_MEM(BPF_DW, R10, R1, -8), +#ifdef __BIG_ENDIAN + BPF_STX_MEM(BPF_B, R10, R2, -1), +#else + BPF_STX_MEM(BPF_B, R10, R2, -8), +#endif + BPF_LDX_MEM(BPF_DW, R0, R10, -8), + BPF_JMP_REG(BPF_JNE, R0, R3, 1), + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_EXIT_INSN(), + }, + INTERNAL, + { }, + { { 0, 0 } }, + .stack_depth = 8, + }, + { + "BPF_STX_MEM | BPF_B, MSB set", + .u.insns_int = { + BPF_LD_IMM64(R1, 0x8090a0b0c0d0e0f0ULL), + BPF_LD_IMM64(R2, 0x8182838485868788ULL), + BPF_LD_IMM64(R3, 0x8090a0b0c0d0e088ULL), + BPF_STX_MEM(BPF_DW, R10, R1, -8), +#ifdef __BIG_ENDIAN + BPF_STX_MEM(BPF_B, R10, R2, -1), +#else + BPF_STX_MEM(BPF_B, R10, R2, -8), +#endif + BPF_LDX_MEM(BPF_DW, R0, R10, -8), + BPF_JMP_REG(BPF_JNE, R0, R3, 1), + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_EXIT_INSN(), + }, + INTERNAL, + { }, + { { 0, 0 } }, + .stack_depth = 8, + }, + { + "BPF_STX_MEM | BPF_H", + .u.insns_int = { + BPF_LD_IMM64(R1, 0x8090a0b0c0d0e0f0ULL), + BPF_LD_IMM64(R2, 0x0102030405060708ULL), + BPF_LD_IMM64(R3, 0x8090a0b0c0d00708ULL), + BPF_STX_MEM(BPF_DW, R10, R1, -8), +#ifdef __BIG_ENDIAN + BPF_STX_MEM(BPF_H, R10, R2, -2), +#else + BPF_STX_MEM(BPF_H, R10, R2, -8), +#endif + BPF_LDX_MEM(BPF_DW, R0, R10, -8), + BPF_JMP_REG(BPF_JNE, R0, R3, 1), + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_EXIT_INSN(), + }, + INTERNAL, + { }, + { { 0, 0 } }, + .stack_depth = 8, + }, + { + "BPF_STX_MEM | BPF_H, MSB set", + .u.insns_int = { + BPF_LD_IMM64(R1, 0x8090a0b0c0d0e0f0ULL), + BPF_LD_IMM64(R2, 0x8182838485868788ULL), + BPF_LD_IMM64(R3, 0x8090a0b0c0d08788ULL), + BPF_STX_MEM(BPF_DW, R10, R1, -8), +#ifdef __BIG_ENDIAN + BPF_STX_MEM(BPF_H, R10, R2, -2), +#else + BPF_STX_MEM(BPF_H, R10, R2, -8), +#endif + BPF_LDX_MEM(BPF_DW, R0, R10, -8), + BPF_JMP_REG(BPF_JNE, R0, R3, 1), + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_EXIT_INSN(), + }, + INTERNAL, + { }, + { { 0, 0 } }, + .stack_depth = 8, + }, + { + "BPF_STX_MEM | BPF_W", + .u.insns_int = { + BPF_LD_IMM64(R1, 0x8090a0b0c0d0e0f0ULL), + BPF_LD_IMM64(R2, 0x0102030405060708ULL), + BPF_LD_IMM64(R3, 0x8090a0b005060708ULL), + BPF_STX_MEM(BPF_DW, R10, R1, -8), +#ifdef __BIG_ENDIAN + BPF_STX_MEM(BPF_W, R10, R2, -4), +#else + BPF_STX_MEM(BPF_W, R10, R2, -8), +#endif + BPF_LDX_MEM(BPF_DW, R0, R10, -8), + BPF_JMP_REG(BPF_JNE, R0, R3, 1), + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_EXIT_INSN(), + }, + INTERNAL, + { }, + { { 0, 0 } }, + .stack_depth = 8, + }, + { + "BPF_STX_MEM | BPF_W, MSB set", + .u.insns_int = { + BPF_LD_IMM64(R1, 0x8090a0b0c0d0e0f0ULL), + BPF_LD_IMM64(R2, 0x8182838485868788ULL), + BPF_LD_IMM64(R3, 
0x8090a0b085868788ULL), + BPF_STX_MEM(BPF_DW, R10, R1, -8), +#ifdef __BIG_ENDIAN + BPF_STX_MEM(BPF_W, R10, R2, -4), +#else + BPF_STX_MEM(BPF_W, R10, R2, -8), +#endif + BPF_LDX_MEM(BPF_DW, R0, R10, -8), + BPF_JMP_REG(BPF_JNE, R0, R3, 1), + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_EXIT_INSN(), + }, + INTERNAL, + { }, + { { 0, 0 } }, + .stack_depth = 8, + }, /* BPF_ST(X) | BPF_MEM | BPF_B/H/W/DW */ { "ST_MEM_B: Store/Load byte: max negative", From patchwork Fri Oct 1 13:03:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johan Almbladh X-Patchwork-Id: 12530623 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 317F3C433F5 for ; Fri, 1 Oct 2021 13:05:13 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1EA6F61A8B for ; Fri, 1 Oct 2021 13:05:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1354064AbhJANGz (ORCPT ); Fri, 1 Oct 2021 09:06:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34090 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1353953AbhJANGv (ORCPT ); Fri, 1 Oct 2021 09:06:51 -0400 Received: from mail-ed1-x532.google.com (mail-ed1-x532.google.com [IPv6:2a00:1450:4864:20::532]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BE3D3C061780 for ; Fri, 1 Oct 2021 06:04:06 -0700 (PDT) Received: by mail-ed1-x532.google.com with SMTP id v18so33794774edc.11 for ; Fri, 01 Oct 2021 06:04:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=anyfinetworks-com.20210112.gappssmtp.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=viy6/L1w5zzBK7Cww93yBpzUD2D5BBFDTZa/eQUpqNc=; b=EXyHgYVUU43xdYgTdllAkO1ir3c44HrPz6AN9CYTieQ5CYAzXON8RnwGoHmDfdVvUm rkYN0NcgXrNVsj/cRVlWBkqZ550kQOvyZ1upqWTBdyEzSkaZu0qvKqjXAv9DVXY0MP7y bdumuL/tNZGucZMK6tKtidIt7vjPCCXsjCE96tBSApBksflMa1kA28+EHF06rNMJPIU8 iZhqDYVR8Zbdbwc7l8aPjzLqrYKa2wNY1JGzPQg8cJ9KutHlJo3m+udDV6CBs6xpeM1u Tz8oh4Yki2oOtcCd1fnfWxF3cbkuKPFBeqUIH/lsPxt6DpU99isNjUbDDMkW5U0aTIIU 4CKg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=viy6/L1w5zzBK7Cww93yBpzUD2D5BBFDTZa/eQUpqNc=; b=5yMyzyjs1+/1h752PO1tDNhgFL3YVXPd4/bp/nEZulpzpRpHn+cuWPT6MrzunVs/dJ RaE34045rBTxX+qlkrLrZMRSjWZ6iLlH7sXbhhxRDHtjapk/otxqVJT5l4ULdA0E7nBe GlpDLqsfOj6UTevjJj/5VNGTQv0+UmSU+DHfIhZ3d9oPXtHG3Yht2qRi5Dlo7TBENfKd rMqSOnJ/8/sJi2yWJ6igcNNsHhykPWd+aOXE5d1X+a2IaDD7iGVhVYDEFGQko+EXpHB9 UWJ0pOae9a+AfeQ+S4M/O3dl60vXll89Sk8Phzyg4pfbOUn9wTixw2Rzv9Xw4buw7FKO sm+w== X-Gm-Message-State: AOAM533zhEg+82GDSyVBlOrBsSZp9SmLbV+TqTQMTgDpO/FDQ9Jqj3Y9 pHZ4i4JM5lN8xvOg86gmgdw/FQ== X-Google-Smtp-Source: ABdhPJwgoeychVRQ5rN4WrY9sZgznqfBhrsFLMNvcK4WKpWZeiUi2Dq8BTJD+yTY3ilF3tNSac+3eQ== X-Received: by 2002:a50:bf4a:: with SMTP id g10mr14397741edk.11.1633093440032; Fri, 01 Oct 2021 06:04:00 -0700 (PDT) Received: from anpc2.lan (static-213-115-136-2.sme.telenor.se. 
[213.115.136.2]) by smtp.gmail.com with ESMTPSA id p22sm2920279ejl.90.2021.10.01.06.03.58 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 01 Oct 2021 06:03:59 -0700 (PDT) From: Johan Almbladh To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com, paul@cilium.io, yangtiezhu@loongson.cn, netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh Subject: [PATCH bpf-next 02/10] bpf/tests: Add zero-extension checks in BPF_ATOMIC tests Date: Fri, 1 Oct 2021 15:03:40 +0200 Message-Id: <20211001130348.3670534-3-johan.almbladh@anyfinetworks.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211001130348.3670534-1-johan.almbladh@anyfinetworks.com> References: <20211001130348.3670534-1-johan.almbladh@anyfinetworks.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net This patch updates the existing tests of BPF_ATOMIC operations to verify that a 32-bit register operand is properly zero-extended. In particular, it checks the operation on archs that require 32-bit operands to be properly zero-/sign-extended or the result is undefined, e.g. MIPS64. Signed-off-by: Johan Almbladh --- lib/test_bpf.c | 22 +++++++++++++++++----- 1 file changed, 17 insertions(+), 5 deletions(-) diff --git a/lib/test_bpf.c b/lib/test_bpf.c index a838a6179ca4..f6983ad7b981 100644 --- a/lib/test_bpf.c +++ b/lib/test_bpf.c @@ -7398,15 +7398,20 @@ static struct bpf_test tests[] = { * Individual tests are expanded from template macros for all * combinations of ALU operation, word size and fetching. */ +#define BPF_ATOMIC_POISON(width) ((width) == BPF_W ? (0xbaadf00dULL << 32) : 0) + #define BPF_ATOMIC_OP_TEST1(width, op, logic, old, update, result) \ { \ "BPF_ATOMIC | " #width ", " #op ": Test: " \ #old " " #logic " " #update " = " #result, \ .u.insns_int = { \ - BPF_ALU32_IMM(BPF_MOV, R5, update), \ + BPF_LD_IMM64(R5, (update) | BPF_ATOMIC_POISON(width)), \ BPF_ST_MEM(width, R10, -40, old), \ BPF_ATOMIC_OP(width, op, R10, R5, -40), \ BPF_LDX_MEM(width, R0, R10, -40), \ + BPF_ALU64_REG(BPF_MOV, R1, R0), \ + BPF_ALU64_IMM(BPF_RSH, R1, 32), \ + BPF_ALU64_REG(BPF_OR, R0, R1), \ BPF_EXIT_INSN(), \ }, \ INTERNAL, \ @@ -7420,11 +7425,14 @@ static struct bpf_test tests[] = { #old " " #logic " " #update " = " #result, \ .u.insns_int = { \ BPF_ALU64_REG(BPF_MOV, R1, R10), \ - BPF_ALU32_IMM(BPF_MOV, R0, update), \ + BPF_LD_IMM64(R0, (update) | BPF_ATOMIC_POISON(width)), \ BPF_ST_MEM(BPF_W, R10, -40, old), \ BPF_ATOMIC_OP(width, op, R10, R0, -40), \ BPF_ALU64_REG(BPF_MOV, R0, R10), \ BPF_ALU64_REG(BPF_SUB, R0, R1), \ + BPF_ALU64_REG(BPF_MOV, R1, R0), \ + BPF_ALU64_IMM(BPF_RSH, R1, 32), \ + BPF_ALU64_REG(BPF_OR, R0, R1), \ BPF_EXIT_INSN(), \ }, \ INTERNAL, \ @@ -7438,10 +7446,13 @@ static struct bpf_test tests[] = { #old " " #logic " " #update " = " #result, \ .u.insns_int = { \ BPF_ALU64_REG(BPF_MOV, R0, R10), \ - BPF_ALU32_IMM(BPF_MOV, R1, update), \ + BPF_LD_IMM64(R1, (update) | BPF_ATOMIC_POISON(width)), \ BPF_ST_MEM(width, R10, -40, old), \ BPF_ATOMIC_OP(width, op, R10, R1, -40), \ BPF_ALU64_REG(BPF_SUB, R0, R10), \ + BPF_ALU64_REG(BPF_MOV, R1, R0), \ + BPF_ALU64_IMM(BPF_RSH, R1, 32), \ + BPF_ALU64_REG(BPF_OR, R0, R1), \ BPF_EXIT_INSN(), \ }, \ INTERNAL, \ @@ -7454,10 +7465,10 @@ static struct bpf_test tests[] = { "BPF_ATOMIC | " #width ", " #op ": Test fetch: " \ #old " " #logic " " #update " = " #result, \ .u.insns_int = { \ 
- BPF_ALU32_IMM(BPF_MOV, R3, update), \ + BPF_LD_IMM64(R3, (update) | BPF_ATOMIC_POISON(width)), \ BPF_ST_MEM(width, R10, -40, old), \ BPF_ATOMIC_OP(width, op, R10, R3, -40), \ - BPF_ALU64_REG(BPF_MOV, R0, R3), \ + BPF_ALU32_REG(BPF_MOV, R0, R3), \ BPF_EXIT_INSN(), \ }, \ INTERNAL, \ @@ -7555,6 +7566,7 @@ static struct bpf_test tests[] = { BPF_ATOMIC_OP_TEST2(BPF_DW, BPF_XCHG, xchg, 0x12, 0xab, 0xab), BPF_ATOMIC_OP_TEST3(BPF_DW, BPF_XCHG, xchg, 0x12, 0xab, 0xab), BPF_ATOMIC_OP_TEST4(BPF_DW, BPF_XCHG, xchg, 0x12, 0xab, 0xab), +#undef BPF_ATOMIC_POISON #undef BPF_ATOMIC_OP_TEST1 #undef BPF_ATOMIC_OP_TEST2 #undef BPF_ATOMIC_OP_TEST3 From patchwork Fri Oct 1 13:03:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johan Almbladh X-Patchwork-Id: 12530625 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7A524C43217 for ; Fri, 1 Oct 2021 13:05:12 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 57E3E61A8F for ; Fri, 1 Oct 2021 13:05:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1354486AbhJANGy (ORCPT ); Fri, 1 Oct 2021 09:06:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34202 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1354306AbhJANGa (ORCPT ); Fri, 1 Oct 2021 09:06:30 -0400 Received: from mail-ed1-x52e.google.com (mail-ed1-x52e.google.com [IPv6:2a00:1450:4864:20::52e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id ABDB8C0613F0 for ; Fri, 1 Oct 2021 06:04:06 -0700 (PDT) Received: by mail-ed1-x52e.google.com with SMTP id g7so33803010edv.1 for ; Fri, 01 Oct 2021 06:04:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=anyfinetworks-com.20210112.gappssmtp.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=WMJpUkvFDTUqwUnATL8f4FPyfkuNd2ylh3XVlZE6hFI=; b=l9jQsitgcg7CrDbQfHwizDqKmOEgcl8K15bQGANSg3keq4TiYTg+/w2NjIVto3HfTz 7ldbRK8yKPHizPKf8xr7yOy5s6sp+8Qc6rek174QqFhNq6d6YBtM6eNiemRcgtG8v/SE sEx1UBHZTQx4lOHUzN03RMeCgnkkZYN5nqsKItNZMq/1W94+2vJyRVq4VVhS0x5tsEjq Rz7+elnhiW7IA1Qeu9X7tZMsuwnixWrqU9wvPX5Lc41ISj7VgFcf9J3/Q+jxI7pFKPXn rYu05dnPT1bsaZiyl2ADMlCK+Kw9Kwk8/M7TFLz/ca4FO+k0mxAeDtjbAl1qBr2o0YW+ B+Rw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=WMJpUkvFDTUqwUnATL8f4FPyfkuNd2ylh3XVlZE6hFI=; b=buDfBAV8wuu/bQXC+jUux4JcUEGh4a95y+m2tkzW26Lc45s3fub4jtALxh5dw7j2lM treWEOWld05qGjl6OPH26ZulImLMual2mQgVXvjmnci7LWH72pA4EHSjf0DhZfo9+wlc gKsTm8/NgSAqu5Xy+D7pX+e+itnz6MGLmTp8b9CFCNBOlesB86RDytiO1fzF8bON2ToW w6za6o6/piSv7t/oxHpUTJ4Q4EovtpEwyn6RJQYHcySosPBPPrGq+z/hStS05OoeG3Y2 KMOOdJGnjPGMEOoCsuCAk5yJgvf8UBqSvnaFhzLBmlVI1GdK6/PzZfuGkgfg36ZKH8iE 3hFA== X-Gm-Message-State: AOAM530nQMn65i2oj+wqcUidS1W7tgFGHlN//9lcR/h1K8SvDapsV66U mG3FTgfrmeVknA1bnLFDuBUxfw== X-Google-Smtp-Source: ABdhPJyAOQc5rpyPJ632HYhJ4scTVmF6ya8fJtXvJJvwDStdZ6xqYyDLpGnjt93m2WyNM3E5ZRd8DQ== X-Received: by 2002:a50:da0a:: with SMTP id z10mr14370430edj.95.1633093441551; Fri, 01 Oct 2021 06:04:01 -0700 (PDT) Received: from anpc2.lan 
(static-213-115-136-2.sme.telenor.se. [213.115.136.2]) by smtp.gmail.com with ESMTPSA id p22sm2920279ejl.90.2021.10.01.06.04.00 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 01 Oct 2021 06:04:00 -0700 (PDT) From: Johan Almbladh To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com, paul@cilium.io, yangtiezhu@loongson.cn, netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh Subject: [PATCH bpf-next 03/10] bpf/tests: Add exhaustive tests of BPF_ATOMIC magnitudes Date: Fri, 1 Oct 2021 15:03:41 +0200 Message-Id: <20211001130348.3670534-4-johan.almbladh@anyfinetworks.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211001130348.3670534-1-johan.almbladh@anyfinetworks.com> References: <20211001130348.3670534-1-johan.almbladh@anyfinetworks.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net This patch adds a series of test to verify the operation of BPF_ATOMIC with BPF_DW and BPF_W sizes, for all power-of-two magnitudes of the register value operand. Also fixes a confusing typo in the comment for a related test. Signed-off-by: Johan Almbladh --- lib/test_bpf.c | 504 ++++++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 503 insertions(+), 1 deletion(-) diff --git a/lib/test_bpf.c b/lib/test_bpf.c index f6983ad7b981..84efb23e09d0 100644 --- a/lib/test_bpf.c +++ b/lib/test_bpf.c @@ -796,7 +796,7 @@ static int __bpf_fill_pattern(struct bpf_test *self, void *arg, /* * Exhaustive tests of ALU operations for all combinations of power-of-two * magnitudes of the operands, both for positive and negative values. The - * test is designed to verify e.g. the JMP and JMP32 operations for JITs that + * test is designed to verify e.g. the ALU and ALU64 operations for JITs that * emit different code depending on the magnitude of the immediate value. */ @@ -1137,6 +1137,306 @@ static int bpf_fill_alu32_mod_reg(struct bpf_test *self) return __bpf_fill_alu32_reg(self, BPF_MOD); } +/* + * Exhaustive tests of atomic operations for all power-of-two operand + * magnitudes, both for positive and negative values. 
+ */ + +static int __bpf_emit_atomic64(struct bpf_test *self, void *arg, + struct bpf_insn *insns, s64 dst, s64 src) +{ + int op = *(int *)arg; + u64 keep, fetch, res; + int i = 0; + + if (!insns) + return 21; + + switch (op) { + case BPF_XCHG: + res = src; + break; + default: + __bpf_alu_result(&res, dst, src, BPF_OP(op)); + } + + keep = 0x0123456789abcdefULL; + if (op & BPF_FETCH) + fetch = dst; + else + fetch = src; + + i += __bpf_ld_imm64(&insns[i], R0, keep); + i += __bpf_ld_imm64(&insns[i], R1, dst); + i += __bpf_ld_imm64(&insns[i], R2, src); + i += __bpf_ld_imm64(&insns[i], R3, res); + i += __bpf_ld_imm64(&insns[i], R4, fetch); + i += __bpf_ld_imm64(&insns[i], R5, keep); + + insns[i++] = BPF_STX_MEM(BPF_DW, R10, R1, -8); + insns[i++] = BPF_ATOMIC_OP(BPF_DW, op, R10, R2, -8); + insns[i++] = BPF_LDX_MEM(BPF_DW, R1, R10, -8); + + insns[i++] = BPF_JMP_REG(BPF_JEQ, R1, R3, 1); + insns[i++] = BPF_EXIT_INSN(); + + insns[i++] = BPF_JMP_REG(BPF_JEQ, R2, R4, 1); + insns[i++] = BPF_EXIT_INSN(); + + insns[i++] = BPF_JMP_REG(BPF_JEQ, R0, R5, 1); + insns[i++] = BPF_EXIT_INSN(); + + return i; +} + +static int __bpf_emit_atomic32(struct bpf_test *self, void *arg, + struct bpf_insn *insns, s64 dst, s64 src) +{ + int op = *(int *)arg; + u64 keep, fetch, res; + int i = 0; + + if (!insns) + return 21; + + switch (op) { + case BPF_XCHG: + res = src; + break; + default: + __bpf_alu_result(&res, (u32)dst, (u32)src, BPF_OP(op)); + } + + keep = 0x0123456789abcdefULL; + if (op & BPF_FETCH) + fetch = (u32)dst; + else + fetch = src; + + i += __bpf_ld_imm64(&insns[i], R0, keep); + i += __bpf_ld_imm64(&insns[i], R1, (u32)dst); + i += __bpf_ld_imm64(&insns[i], R2, src); + i += __bpf_ld_imm64(&insns[i], R3, (u32)res); + i += __bpf_ld_imm64(&insns[i], R4, fetch); + i += __bpf_ld_imm64(&insns[i], R5, keep); + + insns[i++] = BPF_STX_MEM(BPF_W, R10, R1, -4); + insns[i++] = BPF_ATOMIC_OP(BPF_W, op, R10, R2, -4); + insns[i++] = BPF_LDX_MEM(BPF_W, R1, R10, -4); + + insns[i++] = BPF_JMP_REG(BPF_JEQ, R1, R3, 1); + insns[i++] = BPF_EXIT_INSN(); + + insns[i++] = BPF_JMP_REG(BPF_JEQ, R2, R4, 1); + insns[i++] = BPF_EXIT_INSN(); + + insns[i++] = BPF_JMP_REG(BPF_JEQ, R0, R5, 1); + insns[i++] = BPF_EXIT_INSN(); + + return i; +} + +static int __bpf_emit_cmpxchg64(struct bpf_test *self, void *arg, + struct bpf_insn *insns, s64 dst, s64 src) +{ + int i = 0; + + if (!insns) + return 23; + + i += __bpf_ld_imm64(&insns[i], R0, ~dst); + i += __bpf_ld_imm64(&insns[i], R1, dst); + i += __bpf_ld_imm64(&insns[i], R2, src); + + /* Result unsuccessful */ + insns[i++] = BPF_STX_MEM(BPF_DW, R10, R1, -8); + insns[i++] = BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -8); + insns[i++] = BPF_LDX_MEM(BPF_DW, R3, R10, -8); + + insns[i++] = BPF_JMP_REG(BPF_JEQ, R1, R3, 2); + insns[i++] = BPF_MOV64_IMM(R0, __LINE__); + insns[i++] = BPF_EXIT_INSN(); + + insns[i++] = BPF_JMP_REG(BPF_JEQ, R0, R3, 2); + insns[i++] = BPF_MOV64_IMM(R0, __LINE__); + insns[i++] = BPF_EXIT_INSN(); + + /* Result successful */ + insns[i++] = BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -8); + insns[i++] = BPF_LDX_MEM(BPF_DW, R3, R10, -8); + + insns[i++] = BPF_JMP_REG(BPF_JEQ, R2, R3, 2); + insns[i++] = BPF_MOV64_IMM(R0, __LINE__); + insns[i++] = BPF_EXIT_INSN(); + + insns[i++] = BPF_JMP_REG(BPF_JEQ, R0, R1, 2); + insns[i++] = BPF_MOV64_IMM(R0, __LINE__); + insns[i++] = BPF_EXIT_INSN(); + + return i; +} + +static int __bpf_emit_cmpxchg32(struct bpf_test *self, void *arg, + struct bpf_insn *insns, s64 dst, s64 src) +{ + int i = 0; + + if (!insns) + return 27; + + i += 
__bpf_ld_imm64(&insns[i], R0, ~dst); + i += __bpf_ld_imm64(&insns[i], R1, (u32)dst); + i += __bpf_ld_imm64(&insns[i], R2, src); + + /* Result unsuccessful */ + insns[i++] = BPF_STX_MEM(BPF_W, R10, R1, -4); + insns[i++] = BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, R10, R2, -4); + insns[i++] = BPF_ZEXT_REG(R0), /* Zext always inserted by verifier */ + insns[i++] = BPF_LDX_MEM(BPF_W, R3, R10, -4); + + insns[i++] = BPF_JMP32_REG(BPF_JEQ, R1, R3, 2); + insns[i++] = BPF_MOV32_IMM(R0, __LINE__); + insns[i++] = BPF_EXIT_INSN(); + + insns[i++] = BPF_JMP_REG(BPF_JEQ, R0, R3, 2); + insns[i++] = BPF_MOV32_IMM(R0, __LINE__); + insns[i++] = BPF_EXIT_INSN(); + + /* Result successful */ + i += __bpf_ld_imm64(&insns[i], R0, dst); + insns[i++] = BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, R10, R2, -4); + insns[i++] = BPF_ZEXT_REG(R0), /* Zext always inserted by verifier */ + insns[i++] = BPF_LDX_MEM(BPF_W, R3, R10, -4); + + insns[i++] = BPF_JMP32_REG(BPF_JEQ, R2, R3, 2); + insns[i++] = BPF_MOV32_IMM(R0, __LINE__); + insns[i++] = BPF_EXIT_INSN(); + + insns[i++] = BPF_JMP_REG(BPF_JEQ, R0, R1, 2); + insns[i++] = BPF_MOV32_IMM(R0, __LINE__); + insns[i++] = BPF_EXIT_INSN(); + + return i; +} + +static int __bpf_fill_atomic64(struct bpf_test *self, int op) +{ + return __bpf_fill_pattern(self, &op, 64, 64, + 0, PATTERN_BLOCK2, + &__bpf_emit_atomic64); +} + +static int __bpf_fill_atomic32(struct bpf_test *self, int op) +{ + return __bpf_fill_pattern(self, &op, 64, 64, + 0, PATTERN_BLOCK2, + &__bpf_emit_atomic32); +} + +/* 64-bit atomic operations */ +static int bpf_fill_atomic64_add(struct bpf_test *self) +{ + return __bpf_fill_atomic64(self, BPF_ADD); +} + +static int bpf_fill_atomic64_and(struct bpf_test *self) +{ + return __bpf_fill_atomic64(self, BPF_AND); +} + +static int bpf_fill_atomic64_or(struct bpf_test *self) +{ + return __bpf_fill_atomic64(self, BPF_OR); +} + +static int bpf_fill_atomic64_xor(struct bpf_test *self) +{ + return __bpf_fill_atomic64(self, BPF_XOR); +} + +static int bpf_fill_atomic64_add_fetch(struct bpf_test *self) +{ + return __bpf_fill_atomic64(self, BPF_ADD | BPF_FETCH); +} + +static int bpf_fill_atomic64_and_fetch(struct bpf_test *self) +{ + return __bpf_fill_atomic64(self, BPF_AND | BPF_FETCH); +} + +static int bpf_fill_atomic64_or_fetch(struct bpf_test *self) +{ + return __bpf_fill_atomic64(self, BPF_OR | BPF_FETCH); +} + +static int bpf_fill_atomic64_xor_fetch(struct bpf_test *self) +{ + return __bpf_fill_atomic64(self, BPF_XOR | BPF_FETCH); +} + +static int bpf_fill_atomic64_xchg(struct bpf_test *self) +{ + return __bpf_fill_atomic64(self, BPF_XCHG); +} + +static int bpf_fill_cmpxchg64(struct bpf_test *self) +{ + return __bpf_fill_pattern(self, NULL, 64, 64, 0, PATTERN_BLOCK2, + &__bpf_emit_cmpxchg64); +} + +/* 32-bit atomic operations */ +static int bpf_fill_atomic32_add(struct bpf_test *self) +{ + return __bpf_fill_atomic32(self, BPF_ADD); +} + +static int bpf_fill_atomic32_and(struct bpf_test *self) +{ + return __bpf_fill_atomic32(self, BPF_AND); +} + +static int bpf_fill_atomic32_or(struct bpf_test *self) +{ + return __bpf_fill_atomic32(self, BPF_OR); +} + +static int bpf_fill_atomic32_xor(struct bpf_test *self) +{ + return __bpf_fill_atomic32(self, BPF_XOR); +} + +static int bpf_fill_atomic32_add_fetch(struct bpf_test *self) +{ + return __bpf_fill_atomic32(self, BPF_ADD | BPF_FETCH); +} + +static int bpf_fill_atomic32_and_fetch(struct bpf_test *self) +{ + return __bpf_fill_atomic32(self, BPF_AND | BPF_FETCH); +} + +static int bpf_fill_atomic32_or_fetch(struct bpf_test *self) +{ + return 
__bpf_fill_atomic32(self, BPF_OR | BPF_FETCH); +} + +static int bpf_fill_atomic32_xor_fetch(struct bpf_test *self) +{ + return __bpf_fill_atomic32(self, BPF_XOR | BPF_FETCH); +} + +static int bpf_fill_atomic32_xchg(struct bpf_test *self) +{ + return __bpf_fill_atomic32(self, BPF_XCHG); +} + +static int bpf_fill_cmpxchg32(struct bpf_test *self) +{ + return __bpf_fill_pattern(self, NULL, 64, 64, 0, PATTERN_BLOCK2, + &__bpf_emit_cmpxchg32); +} + /* * Test the two-instruction 64-bit immediate load operation for all * power-of-two magnitudes of the immediate operand. For each MSB, a block @@ -10721,6 +11021,208 @@ static struct bpf_test tests[] = { { { 0, 1 } }, .fill_helper = bpf_fill_ld_imm64, }, + /* 64-bit ATOMIC magnitudes */ + { + "ATOMIC_DW_ADD: all operand magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic64_add, + .stack_depth = 8, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ATOMIC_DW_AND: all operand magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic64_and, + .stack_depth = 8, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ATOMIC_DW_OR: all operand magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic64_or, + .stack_depth = 8, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ATOMIC_DW_XOR: all operand magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic64_xor, + .stack_depth = 8, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ATOMIC_DW_ADD_FETCH: all operand magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic64_add_fetch, + .stack_depth = 8, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ATOMIC_DW_AND_FETCH: all operand magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic64_and_fetch, + .stack_depth = 8, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ATOMIC_DW_OR_FETCH: all operand magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic64_or_fetch, + .stack_depth = 8, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ATOMIC_DW_XOR_FETCH: all operand magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic64_xor_fetch, + .stack_depth = 8, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ATOMIC_DW_XCHG: all operand magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic64_xchg, + .stack_depth = 8, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ATOMIC_DW_CMPXCHG: all operand magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_cmpxchg64, + .stack_depth = 8, + .nr_testruns = NR_PATTERN_RUNS, + }, + /* 64-bit atomic magnitudes */ + { + "ATOMIC_W_ADD: all operand magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic32_add, + .stack_depth = 8, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ATOMIC_W_AND: all operand magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic32_and, + .stack_depth = 8, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ATOMIC_W_OR: all operand magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic32_or, + .stack_depth = 8, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ATOMIC_W_XOR: all operand magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + 
.fill_helper = bpf_fill_atomic32_xor, + .stack_depth = 8, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ATOMIC_W_ADD_FETCH: all operand magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic32_add_fetch, + .stack_depth = 8, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ATOMIC_W_AND_FETCH: all operand magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic32_and_fetch, + .stack_depth = 8, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ATOMIC_W_OR_FETCH: all operand magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic32_or_fetch, + .stack_depth = 8, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ATOMIC_W_XOR_FETCH: all operand magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic32_xor_fetch, + .stack_depth = 8, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ATOMIC_W_XCHG: all operand magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic32_xchg, + .stack_depth = 8, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ATOMIC_W_CMPXCHG: all operand magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_cmpxchg32, + .stack_depth = 8, + .nr_testruns = NR_PATTERN_RUNS, + }, /* JMP immediate magnitudes */ { "JMP_JSET_K: all immediate value magnitudes", From patchwork Fri Oct 1 13:03:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johan Almbladh X-Patchwork-Id: 12530619 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BB67CC433EF for ; Fri, 1 Oct 2021 13:05:11 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A1C0161A8B for ; Fri, 1 Oct 2021 13:05:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1353935AbhJANGy (ORCPT ); Fri, 1 Oct 2021 09:06:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34194 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1354285AbhJANGa (ORCPT ); Fri, 1 Oct 2021 09:06:30 -0400 Received: from mail-ed1-x534.google.com (mail-ed1-x534.google.com [IPv6:2a00:1450:4864:20::534]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0E79DC0613EE for ; Fri, 1 Oct 2021 06:04:05 -0700 (PDT) Received: by mail-ed1-x534.google.com with SMTP id r18so34436424edv.12 for ; Fri, 01 Oct 2021 06:04:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=anyfinetworks-com.20210112.gappssmtp.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=a6y4JSy406Wu+PL5pFim+RuAZbB2EbfKD2rlF0TOar4=; b=RzpiLocflVSAD7CChVeRqlnX4KIPe8PDNEC+u6+IUU3ez+0g/3JZa0oyT2EHegTfol XRq62uP4Yyrs2xsKClcNp+rDh8CAzRMYCHZpMM/uI1e6u4vf/dfJ3d/456kmgg8RJ+m8 rEwEq66lgQbQn8k6fVKHrgm5SOgbyUtYLtJJC97RjspN/iA1ZpnTDm/GDQVQyH7Aas3i K9X/qy8dvj9VUCNz3nbxFvd1CWLYgqyrfFPLq+avzg3QPSamtzeC8B7k/EO2uMAM3jzU ZSh11RPz+XxTNCKxfkXCQa1DsjVUrrwzmJpgVz71Y6HsVm59TECvq/hhAWVQ+rFpc5lH baQg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to 
:references:mime-version:content-transfer-encoding; bh=a6y4JSy406Wu+PL5pFim+RuAZbB2EbfKD2rlF0TOar4=; b=cVKs2KYJYV9XHF9Z9vUwdYLvf8F8TedaGN/odTaZbuMrskHPAFVPHgcxL9ke4XkXAa SrZvGe8qv5juM6W6bQt5+VccSm+uAr8uaqveVFxgb5M4vKdAVpvPBHY8IUWEUtyV2NVa 45Btclaq8/Nhn79Y3HfMixEksdBV/lvwMV3alasO4T9LXeiycEb0veORm/XCPOnKRY4o FLZThZTGPj6yVfaMWO20gmh+FyB66gIHlBGys3OlzSc8NPWb/9ouIhZoHjhjWyRVbIjZ IF/272jLXXdiTTHF20wFrZf7WCRoY6I+V6/+xuT5IW79+WBu6RFcEvrunpMxyXDEGRFA D9lA== X-Gm-Message-State: AOAM531MNMg6QQRjW3WE8uWDtd25132tfQIZGjbhqnx+6f5DtXR7i9x8 yhL6NKR2CIbEdJAlB6tOmkwxXw== X-Google-Smtp-Source: ABdhPJwj3yV8GHKjigIxa2Jnn7+zh39UAhWAy3pPE2urx3c414BWuJS1mfT237HxRD5wmYkPrqREjg== X-Received: by 2002:a50:8282:: with SMTP id 2mr14038488edg.98.1633093442602; Fri, 01 Oct 2021 06:04:02 -0700 (PDT) Received: from anpc2.lan (static-213-115-136-2.sme.telenor.se. [213.115.136.2]) by smtp.gmail.com with ESMTPSA id p22sm2920279ejl.90.2021.10.01.06.04.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 01 Oct 2021 06:04:02 -0700 (PDT) From: Johan Almbladh To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com, paul@cilium.io, yangtiezhu@loongson.cn, netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh Subject: [PATCH bpf-next 04/10] bpf/tests: Add tests to check source register zero-extension Date: Fri, 1 Oct 2021 15:03:42 +0200 Message-Id: <20211001130348.3670534-5-johan.almbladh@anyfinetworks.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211001130348.3670534-1-johan.almbladh@anyfinetworks.com> References: <20211001130348.3670534-1-johan.almbladh@anyfinetworks.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net This patch adds tests to check that the source register is preserved when zero-extending a 32-bit value. In particular, it checks that the source operand is not zero-extended in-place. 
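For BPF_ADD, the ALU32 variant of the check added below expands to essentially
the following sequence. The original 64-bit source value is kept in R0, the
32-bit operation uses R1 as its source register, and any change to R1 is folded
into the low 32 bits of the return value, so the test fails if a JIT
zero-extends the source in place:

	BPF_LD_IMM64(R1, 0x0123456789acbdefULL),	/* 64-bit source */
	BPF_LD_IMM64(R2, 0xfedcba9876543210ULL),	/* destination */
	BPF_ALU64_REG(BPF_MOV, R0, R1),			/* save original source */
	BPF_ALU32_REG(BPF_ADD, R2, R1),			/* 32-bit op reading R1 */
	BPF_ALU64_REG(BPF_SUB, R0, R1),			/* zero iff R1 unchanged */
	BPF_ALU64_REG(BPF_MOV, R1, R0),
	BPF_ALU64_IMM(BPF_RSH, R1, 32),
	BPF_ALU64_REG(BPF_OR, R0, R1),			/* fold upper half into lower */
	BPF_EXIT_INSN(),				/* expected return value is 0 */

The same save/subtract/fold idiom is used for the atomic and JMP32 variants.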
Signed-off-by: Johan Almbladh --- lib/test_bpf.c | 143 +++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 143 insertions(+) diff --git a/lib/test_bpf.c b/lib/test_bpf.c index 84efb23e09d0..c7db90112ef0 100644 --- a/lib/test_bpf.c +++ b/lib/test_bpf.c @@ -10586,6 +10586,149 @@ static struct bpf_test tests[] = { {}, { { 0, 2 } }, }, + /* Checking that ALU32 src is not zero extended in place */ +#define BPF_ALU32_SRC_ZEXT(op) \ + { \ + "ALU32_" #op "_X: src preserved in zext", \ + .u.insns_int = { \ + BPF_LD_IMM64(R1, 0x0123456789acbdefULL),\ + BPF_LD_IMM64(R2, 0xfedcba9876543210ULL),\ + BPF_ALU64_REG(BPF_MOV, R0, R1), \ + BPF_ALU32_REG(BPF_##op, R2, R1), \ + BPF_ALU64_REG(BPF_SUB, R0, R1), \ + BPF_ALU64_REG(BPF_MOV, R1, R0), \ + BPF_ALU64_IMM(BPF_RSH, R1, 32), \ + BPF_ALU64_REG(BPF_OR, R0, R1), \ + BPF_EXIT_INSN(), \ + }, \ + INTERNAL, \ + { }, \ + { { 0, 0 } }, \ + } + BPF_ALU32_SRC_ZEXT(MOV), + BPF_ALU32_SRC_ZEXT(AND), + BPF_ALU32_SRC_ZEXT(OR), + BPF_ALU32_SRC_ZEXT(XOR), + BPF_ALU32_SRC_ZEXT(ADD), + BPF_ALU32_SRC_ZEXT(SUB), + BPF_ALU32_SRC_ZEXT(MUL), + BPF_ALU32_SRC_ZEXT(DIV), + BPF_ALU32_SRC_ZEXT(MOD), +#undef BPF_ALU32_SRC_ZEXT + /* Checking that ATOMIC32 src is not zero extended in place */ +#define BPF_ATOMIC32_SRC_ZEXT(op) \ + { \ + "ATOMIC_W_" #op ": src preserved in zext", \ + .u.insns_int = { \ + BPF_LD_IMM64(R0, 0x0123456789acbdefULL), \ + BPF_ALU64_REG(BPF_MOV, R1, R0), \ + BPF_ST_MEM(BPF_W, R10, -4, 0), \ + BPF_ATOMIC_OP(BPF_W, BPF_##op, R10, R1, -4), \ + BPF_ALU64_REG(BPF_SUB, R0, R1), \ + BPF_ALU64_REG(BPF_MOV, R1, R0), \ + BPF_ALU64_IMM(BPF_RSH, R1, 32), \ + BPF_ALU64_REG(BPF_OR, R0, R1), \ + BPF_EXIT_INSN(), \ + }, \ + INTERNAL, \ + { }, \ + { { 0, 0 } }, \ + .stack_depth = 8, \ + } + BPF_ATOMIC32_SRC_ZEXT(ADD), + BPF_ATOMIC32_SRC_ZEXT(AND), + BPF_ATOMIC32_SRC_ZEXT(OR), + BPF_ATOMIC32_SRC_ZEXT(XOR), +#undef BPF_ATOMIC32_SRC_ZEXT + /* Checking that CMPXCHG32 src is not zero extended in place */ + { + "ATOMIC_W_CMPXCHG: src preserved in zext", + .u.insns_int = { + BPF_LD_IMM64(R1, 0x0123456789acbdefULL), + BPF_ALU64_REG(BPF_MOV, R2, R1), + BPF_ALU64_REG(BPF_MOV, R0, 0), + BPF_ST_MEM(BPF_W, R10, -4, 0), + BPF_ATOMIC_OP(BPF_W, BPF_CMPXCHG, R10, R1, -4), + BPF_ALU64_REG(BPF_SUB, R1, R2), + BPF_ALU64_REG(BPF_MOV, R2, R1), + BPF_ALU64_IMM(BPF_RSH, R2, 32), + BPF_ALU64_REG(BPF_OR, R1, R2), + BPF_ALU64_REG(BPF_MOV, R0, R1), + BPF_EXIT_INSN(), + }, + INTERNAL, + { }, + { { 0, 0 } }, + .stack_depth = 8, + }, + /* Checking that JMP32 immediate src is not zero extended in place */ +#define BPF_JMP32_IMM_ZEXT(op) \ + { \ + "JMP32_" #op "_K: operand preserved in zext", \ + .u.insns_int = { \ + BPF_LD_IMM64(R0, 0x0123456789acbdefULL),\ + BPF_ALU64_REG(BPF_MOV, R1, R0), \ + BPF_JMP32_IMM(BPF_##op, R0, 1234, 1), \ + BPF_JMP_A(0), /* Nop */ \ + BPF_ALU64_REG(BPF_SUB, R0, R1), \ + BPF_ALU64_REG(BPF_MOV, R1, R0), \ + BPF_ALU64_IMM(BPF_RSH, R1, 32), \ + BPF_ALU64_REG(BPF_OR, R0, R1), \ + BPF_EXIT_INSN(), \ + }, \ + INTERNAL, \ + { }, \ + { { 0, 0 } }, \ + } + BPF_JMP32_IMM_ZEXT(JEQ), + BPF_JMP32_IMM_ZEXT(JNE), + BPF_JMP32_IMM_ZEXT(JSET), + BPF_JMP32_IMM_ZEXT(JGT), + BPF_JMP32_IMM_ZEXT(JGE), + BPF_JMP32_IMM_ZEXT(JLT), + BPF_JMP32_IMM_ZEXT(JLE), + BPF_JMP32_IMM_ZEXT(JSGT), + BPF_JMP32_IMM_ZEXT(JSGE), + BPF_JMP32_IMM_ZEXT(JSGT), + BPF_JMP32_IMM_ZEXT(JSLT), + BPF_JMP32_IMM_ZEXT(JSLE), +#undef BPF_JMP2_IMM_ZEXT + /* Checking that JMP32 dst & src are not zero extended in place */ +#define BPF_JMP32_REG_ZEXT(op) \ + { \ + "JMP32_" #op "_X: operands preserved in zext", \ + .u.insns_int = { 
\ + BPF_LD_IMM64(R0, 0x0123456789acbdefULL),\ + BPF_LD_IMM64(R1, 0xfedcba9876543210ULL),\ + BPF_ALU64_REG(BPF_MOV, R2, R0), \ + BPF_ALU64_REG(BPF_MOV, R3, R1), \ + BPF_JMP32_IMM(BPF_##op, R0, R1, 1), \ + BPF_JMP_A(0), /* Nop */ \ + BPF_ALU64_REG(BPF_SUB, R0, R2), \ + BPF_ALU64_REG(BPF_SUB, R1, R3), \ + BPF_ALU64_REG(BPF_OR, R0, R1), \ + BPF_ALU64_REG(BPF_MOV, R1, R0), \ + BPF_ALU64_IMM(BPF_RSH, R1, 32), \ + BPF_ALU64_REG(BPF_OR, R0, R1), \ + BPF_EXIT_INSN(), \ + }, \ + INTERNAL, \ + { }, \ + { { 0, 0 } }, \ + } + BPF_JMP32_REG_ZEXT(JEQ), + BPF_JMP32_REG_ZEXT(JNE), + BPF_JMP32_REG_ZEXT(JSET), + BPF_JMP32_REG_ZEXT(JGT), + BPF_JMP32_REG_ZEXT(JGE), + BPF_JMP32_REG_ZEXT(JLT), + BPF_JMP32_REG_ZEXT(JLE), + BPF_JMP32_REG_ZEXT(JSGT), + BPF_JMP32_REG_ZEXT(JSGE), + BPF_JMP32_REG_ZEXT(JSGT), + BPF_JMP32_REG_ZEXT(JSLT), + BPF_JMP32_REG_ZEXT(JSLE), +#undef BPF_JMP2_REG_ZEXT /* Exhaustive test of ALU64 shift operations */ { "ALU64_LSH_K: all shift values", From patchwork Fri Oct 1 13:03:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johan Almbladh X-Patchwork-Id: 12530635 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C9EA9C433FE for ; Fri, 1 Oct 2021 13:05:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B036561AA3 for ; Fri, 1 Oct 2021 13:05:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1354154AbhJANG7 (ORCPT ); Fri, 1 Oct 2021 09:06:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34284 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1354089AbhJANGv (ORCPT ); Fri, 1 Oct 2021 09:06:51 -0400 Received: from mail-ed1-x52b.google.com (mail-ed1-x52b.google.com [IPv6:2a00:1450:4864:20::52b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 65B0BC061788 for ; Fri, 1 Oct 2021 06:04:09 -0700 (PDT) Received: by mail-ed1-x52b.google.com with SMTP id s17so33888442edd.8 for ; Fri, 01 Oct 2021 06:04:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=anyfinetworks-com.20210112.gappssmtp.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=2ABm10r1Yh2rzBKJDa5tee3k/Wciy5nB8H1vIAr4hqY=; b=Go0Lse7ZkShi1iDtfcAk3Ni+1Fp4LvrrcQ4yz0hKVXsr42ctrWcLvCvUOpmP4U7XAV rKtakcfp5AK/N4V/7oV81q0Jn0aXj8/7NvBxB2Fl+7UmDRJ6igm0CP5UYEfkGPwL9kV/ Z/K6Yh+vE/K8tChMFQGfWERX5YnJhvCV8pTwnq4PdDurenC9+kaRroOnNGB6wwfjPJCO l9i1tSiWTCdNXeVW//MnoYSHwnMwRjnBSb90AJVXzbsTA9da6oUIwIqMQPP8VhMp356m 3IPd0ap2v2K7/BHfTM/VNIR5czAku6LUTHuo7v/6XrquVFjkqtn5fiDQ8n9SZr/zNOep r68g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=2ABm10r1Yh2rzBKJDa5tee3k/Wciy5nB8H1vIAr4hqY=; b=CnfJP7SiDYKEJ6+Eq/4SQT1X4q9tmLV1Rcms5GAP8iJ5A0sQEAVf494OkDn+jQgl9X mLwNoDBaZRvvdOMTXh3D2vC6cBudQNRN0E8jeSAgEvO8Xlt1YysQCW6y+LtnbmZ9M4cu CMstxvq3D1/D+yQu8nVKz+J08hiXDT8tG/BYOyTcc7zpfK6Zs+m+WPKdDELB/paiPf0S MrGo7KszZJv/AzaB3+3ImzoalB0LRqYGk1BMMjy2SnTpaRuTPOXqh1u9pX44lpzDt3gj /Mc48FHpmuU3YXArmNL9H4p8kVvCv1xJipJSmrWQrFSxfa/5eK77DH4w4Fari86VoRIB 7IGQ== X-Gm-Message-State: 
AOAM532ObKyRj1HnG+ch5MU5Sjldc7zgGn55FDOvjS6kd9w8JdHu67AG gpbmAJYOXxc0KLls38MvpYCoiA== X-Google-Smtp-Source: ABdhPJxBY9RYRv2IioAS6zefS2xzamHZq3ToYGEV1xY8YBvKp6T04UmtntIutH7668rW09+B2VtQEA== X-Received: by 2002:a17:906:2bc3:: with SMTP id n3mr6083938ejg.548.1633093443753; Fri, 01 Oct 2021 06:04:03 -0700 (PDT) Received: from anpc2.lan (static-213-115-136-2.sme.telenor.se. [213.115.136.2]) by smtp.gmail.com with ESMTPSA id p22sm2920279ejl.90.2021.10.01.06.04.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 01 Oct 2021 06:04:03 -0700 (PDT) From: Johan Almbladh To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com, paul@cilium.io, yangtiezhu@loongson.cn, netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh Subject: [PATCH bpf-next 05/10] bpf/tests: Add more tests for ALU and ATOMIC register clobbering Date: Fri, 1 Oct 2021 15:03:43 +0200 Message-Id: <20211001130348.3670534-6-johan.almbladh@anyfinetworks.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211001130348.3670534-1-johan.almbladh@anyfinetworks.com> References: <20211001130348.3670534-1-johan.almbladh@anyfinetworks.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net This patch expands the register-clobbering-during-function-call tests to cover more all ALU32/64 MUL, DIV and MOD operations and all ATOMIC operations. In short, if a JIT implements a complex operation with a call to an external function, it must make sure to save and restore all its caller-saved registers that may be clobbered by the call. Signed-off-by: Johan Almbladh --- lib/test_bpf.c | 267 ++++++++++++++++++++++++++++++++++++------------- 1 file changed, 197 insertions(+), 70 deletions(-) diff --git a/lib/test_bpf.c b/lib/test_bpf.c index c7db90112ef0..201f34060eef 100644 --- a/lib/test_bpf.c +++ b/lib/test_bpf.c @@ -3744,76 +3744,6 @@ static struct bpf_test tests[] = { { }, { { 0, -1 } } }, - { - /* - * Register (non-)clobbering test, in the case where a 32-bit - * JIT implements complex ALU64 operations via function calls. - * If so, the function call must be invisible in the eBPF - * registers. The JIT must then save and restore relevant - * registers during the call. The following tests check that - * the eBPF registers retain their values after such a call. 
- */ - "INT: Register clobbering, R1 updated", - .u.insns_int = { - BPF_ALU32_IMM(BPF_MOV, R0, 0), - BPF_ALU32_IMM(BPF_MOV, R1, 123456789), - BPF_ALU32_IMM(BPF_MOV, R2, 2), - BPF_ALU32_IMM(BPF_MOV, R3, 3), - BPF_ALU32_IMM(BPF_MOV, R4, 4), - BPF_ALU32_IMM(BPF_MOV, R5, 5), - BPF_ALU32_IMM(BPF_MOV, R6, 6), - BPF_ALU32_IMM(BPF_MOV, R7, 7), - BPF_ALU32_IMM(BPF_MOV, R8, 8), - BPF_ALU32_IMM(BPF_MOV, R9, 9), - BPF_ALU64_IMM(BPF_DIV, R1, 123456789), - BPF_JMP_IMM(BPF_JNE, R0, 0, 10), - BPF_JMP_IMM(BPF_JNE, R1, 1, 9), - BPF_JMP_IMM(BPF_JNE, R2, 2, 8), - BPF_JMP_IMM(BPF_JNE, R3, 3, 7), - BPF_JMP_IMM(BPF_JNE, R4, 4, 6), - BPF_JMP_IMM(BPF_JNE, R5, 5, 5), - BPF_JMP_IMM(BPF_JNE, R6, 6, 4), - BPF_JMP_IMM(BPF_JNE, R7, 7, 3), - BPF_JMP_IMM(BPF_JNE, R8, 8, 2), - BPF_JMP_IMM(BPF_JNE, R9, 9, 1), - BPF_ALU32_IMM(BPF_MOV, R0, 1), - BPF_EXIT_INSN(), - }, - INTERNAL, - { }, - { { 0, 1 } } - }, - { - "INT: Register clobbering, R2 updated", - .u.insns_int = { - BPF_ALU32_IMM(BPF_MOV, R0, 0), - BPF_ALU32_IMM(BPF_MOV, R1, 1), - BPF_ALU32_IMM(BPF_MOV, R2, 2 * 123456789), - BPF_ALU32_IMM(BPF_MOV, R3, 3), - BPF_ALU32_IMM(BPF_MOV, R4, 4), - BPF_ALU32_IMM(BPF_MOV, R5, 5), - BPF_ALU32_IMM(BPF_MOV, R6, 6), - BPF_ALU32_IMM(BPF_MOV, R7, 7), - BPF_ALU32_IMM(BPF_MOV, R8, 8), - BPF_ALU32_IMM(BPF_MOV, R9, 9), - BPF_ALU64_IMM(BPF_DIV, R2, 123456789), - BPF_JMP_IMM(BPF_JNE, R0, 0, 10), - BPF_JMP_IMM(BPF_JNE, R1, 1, 9), - BPF_JMP_IMM(BPF_JNE, R2, 2, 8), - BPF_JMP_IMM(BPF_JNE, R3, 3, 7), - BPF_JMP_IMM(BPF_JNE, R4, 4, 6), - BPF_JMP_IMM(BPF_JNE, R5, 5, 5), - BPF_JMP_IMM(BPF_JNE, R6, 6, 4), - BPF_JMP_IMM(BPF_JNE, R7, 7, 3), - BPF_JMP_IMM(BPF_JNE, R8, 8, 2), - BPF_JMP_IMM(BPF_JNE, R9, 9, 1), - BPF_ALU32_IMM(BPF_MOV, R0, 1), - BPF_EXIT_INSN(), - }, - INTERNAL, - { }, - { { 0, 1 } } - }, { /* * Test 32-bit JITs that implement complex ALU64 operations as @@ -10586,6 +10516,203 @@ static struct bpf_test tests[] = { {}, { { 0, 2 } }, }, + /* + * Register (non-)clobbering tests for the case where a JIT implements + * complex ALU or ATOMIC operations via function calls. If so, the + * function call must be transparent to the eBPF registers. The JIT + * must therefore save and restore relevant registers across the call. + * The following tests check that the eBPF registers retain their + * values after such an operation. Mainly intended for complex ALU + * and atomic operation, but we run it for all. You never know... + * + * Note that each operations should be tested twice with different + * destinations, to check preservation for all registers. 
+ */ +#define BPF_TEST_CLOBBER_ALU(alu, op, dst, src) \ + { \ + #alu "_" #op " to " #dst ": no clobbering", \ + .u.insns_int = { \ + BPF_ALU64_IMM(BPF_MOV, R0, R0), \ + BPF_ALU64_IMM(BPF_MOV, R1, R1), \ + BPF_ALU64_IMM(BPF_MOV, R2, R2), \ + BPF_ALU64_IMM(BPF_MOV, R3, R3), \ + BPF_ALU64_IMM(BPF_MOV, R4, R4), \ + BPF_ALU64_IMM(BPF_MOV, R5, R5), \ + BPF_ALU64_IMM(BPF_MOV, R6, R6), \ + BPF_ALU64_IMM(BPF_MOV, R7, R7), \ + BPF_ALU64_IMM(BPF_MOV, R8, R8), \ + BPF_ALU64_IMM(BPF_MOV, R9, R9), \ + BPF_##alu(BPF_ ##op, dst, src), \ + BPF_ALU32_IMM(BPF_MOV, dst, dst), \ + BPF_JMP_IMM(BPF_JNE, R0, R0, 10), \ + BPF_JMP_IMM(BPF_JNE, R1, R1, 9), \ + BPF_JMP_IMM(BPF_JNE, R2, R2, 8), \ + BPF_JMP_IMM(BPF_JNE, R3, R3, 7), \ + BPF_JMP_IMM(BPF_JNE, R4, R4, 6), \ + BPF_JMP_IMM(BPF_JNE, R5, R5, 5), \ + BPF_JMP_IMM(BPF_JNE, R6, R6, 4), \ + BPF_JMP_IMM(BPF_JNE, R7, R7, 3), \ + BPF_JMP_IMM(BPF_JNE, R8, R8, 2), \ + BPF_JMP_IMM(BPF_JNE, R9, R9, 1), \ + BPF_ALU64_IMM(BPF_MOV, R0, 1), \ + BPF_EXIT_INSN(), \ + }, \ + INTERNAL, \ + { }, \ + { { 0, 1 } } \ + } + /* ALU64 operations, register clobbering */ + BPF_TEST_CLOBBER_ALU(ALU64_IMM, AND, R8, 123456789), + BPF_TEST_CLOBBER_ALU(ALU64_IMM, AND, R9, 123456789), + BPF_TEST_CLOBBER_ALU(ALU64_IMM, OR, R8, 123456789), + BPF_TEST_CLOBBER_ALU(ALU64_IMM, OR, R9, 123456789), + BPF_TEST_CLOBBER_ALU(ALU64_IMM, XOR, R8, 123456789), + BPF_TEST_CLOBBER_ALU(ALU64_IMM, XOR, R9, 123456789), + BPF_TEST_CLOBBER_ALU(ALU64_IMM, LSH, R8, 12), + BPF_TEST_CLOBBER_ALU(ALU64_IMM, LSH, R9, 12), + BPF_TEST_CLOBBER_ALU(ALU64_IMM, RSH, R8, 12), + BPF_TEST_CLOBBER_ALU(ALU64_IMM, RSH, R9, 12), + BPF_TEST_CLOBBER_ALU(ALU64_IMM, ARSH, R8, 12), + BPF_TEST_CLOBBER_ALU(ALU64_IMM, ARSH, R9, 12), + BPF_TEST_CLOBBER_ALU(ALU64_IMM, ADD, R8, 123456789), + BPF_TEST_CLOBBER_ALU(ALU64_IMM, ADD, R9, 123456789), + BPF_TEST_CLOBBER_ALU(ALU64_IMM, SUB, R8, 123456789), + BPF_TEST_CLOBBER_ALU(ALU64_IMM, SUB, R9, 123456789), + BPF_TEST_CLOBBER_ALU(ALU64_IMM, MUL, R8, 123456789), + BPF_TEST_CLOBBER_ALU(ALU64_IMM, MUL, R9, 123456789), + BPF_TEST_CLOBBER_ALU(ALU64_IMM, DIV, R8, 123456789), + BPF_TEST_CLOBBER_ALU(ALU64_IMM, DIV, R9, 123456789), + BPF_TEST_CLOBBER_ALU(ALU64_IMM, MOD, R8, 123456789), + BPF_TEST_CLOBBER_ALU(ALU64_IMM, MOD, R9, 123456789), + /* ALU32 immediate operations, register clobbering */ + BPF_TEST_CLOBBER_ALU(ALU32_IMM, AND, R8, 123456789), + BPF_TEST_CLOBBER_ALU(ALU32_IMM, AND, R9, 123456789), + BPF_TEST_CLOBBER_ALU(ALU32_IMM, OR, R8, 123456789), + BPF_TEST_CLOBBER_ALU(ALU32_IMM, OR, R9, 123456789), + BPF_TEST_CLOBBER_ALU(ALU32_IMM, XOR, R8, 123456789), + BPF_TEST_CLOBBER_ALU(ALU32_IMM, XOR, R9, 123456789), + BPF_TEST_CLOBBER_ALU(ALU32_IMM, LSH, R8, 12), + BPF_TEST_CLOBBER_ALU(ALU32_IMM, LSH, R9, 12), + BPF_TEST_CLOBBER_ALU(ALU32_IMM, RSH, R8, 12), + BPF_TEST_CLOBBER_ALU(ALU32_IMM, RSH, R9, 12), + BPF_TEST_CLOBBER_ALU(ALU32_IMM, ARSH, R8, 12), + BPF_TEST_CLOBBER_ALU(ALU32_IMM, ARSH, R9, 12), + BPF_TEST_CLOBBER_ALU(ALU32_IMM, ADD, R8, 123456789), + BPF_TEST_CLOBBER_ALU(ALU32_IMM, ADD, R9, 123456789), + BPF_TEST_CLOBBER_ALU(ALU32_IMM, SUB, R8, 123456789), + BPF_TEST_CLOBBER_ALU(ALU32_IMM, SUB, R9, 123456789), + BPF_TEST_CLOBBER_ALU(ALU32_IMM, MUL, R8, 123456789), + BPF_TEST_CLOBBER_ALU(ALU32_IMM, MUL, R9, 123456789), + BPF_TEST_CLOBBER_ALU(ALU32_IMM, DIV, R8, 123456789), + BPF_TEST_CLOBBER_ALU(ALU32_IMM, DIV, R9, 123456789), + BPF_TEST_CLOBBER_ALU(ALU32_IMM, MOD, R8, 123456789), + BPF_TEST_CLOBBER_ALU(ALU32_IMM, MOD, R9, 123456789), + /* ALU64 register operations, register clobbering */ + 
BPF_TEST_CLOBBER_ALU(ALU64_REG, AND, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU64_REG, AND, R9, R1), + BPF_TEST_CLOBBER_ALU(ALU64_REG, OR, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU64_REG, OR, R9, R1), + BPF_TEST_CLOBBER_ALU(ALU64_REG, XOR, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU64_REG, XOR, R9, R1), + BPF_TEST_CLOBBER_ALU(ALU64_REG, LSH, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU64_REG, LSH, R9, R1), + BPF_TEST_CLOBBER_ALU(ALU64_REG, RSH, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU64_REG, RSH, R9, R1), + BPF_TEST_CLOBBER_ALU(ALU64_REG, ARSH, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU64_REG, ARSH, R9, R1), + BPF_TEST_CLOBBER_ALU(ALU64_REG, ADD, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU64_REG, ADD, R9, R1), + BPF_TEST_CLOBBER_ALU(ALU64_REG, SUB, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU64_REG, SUB, R9, R1), + BPF_TEST_CLOBBER_ALU(ALU64_REG, MUL, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU64_REG, MUL, R9, R1), + BPF_TEST_CLOBBER_ALU(ALU64_REG, DIV, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU64_REG, DIV, R9, R1), + BPF_TEST_CLOBBER_ALU(ALU64_REG, MOD, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU64_REG, MOD, R9, R1), + /* ALU32 register operations, register clobbering */ + BPF_TEST_CLOBBER_ALU(ALU32_REG, AND, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU32_REG, AND, R9, R1), + BPF_TEST_CLOBBER_ALU(ALU32_REG, OR, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU32_REG, OR, R9, R1), + BPF_TEST_CLOBBER_ALU(ALU32_REG, XOR, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU32_REG, XOR, R9, R1), + BPF_TEST_CLOBBER_ALU(ALU32_REG, LSH, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU32_REG, LSH, R9, R1), + BPF_TEST_CLOBBER_ALU(ALU32_REG, RSH, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU32_REG, RSH, R9, R1), + BPF_TEST_CLOBBER_ALU(ALU32_REG, ARSH, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU32_REG, ARSH, R9, R1), + BPF_TEST_CLOBBER_ALU(ALU32_REG, ADD, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU32_REG, ADD, R9, R1), + BPF_TEST_CLOBBER_ALU(ALU32_REG, SUB, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU32_REG, SUB, R9, R1), + BPF_TEST_CLOBBER_ALU(ALU32_REG, MUL, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU32_REG, MUL, R9, R1), + BPF_TEST_CLOBBER_ALU(ALU32_REG, DIV, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU32_REG, DIV, R9, R1), + BPF_TEST_CLOBBER_ALU(ALU32_REG, MOD, R8, R1), + BPF_TEST_CLOBBER_ALU(ALU32_REG, MOD, R9, R1), +#undef BPF_TEST_CLOBBER_ALU +#define BPF_TEST_CLOBBER_ATOMIC(width, op) \ + { \ + "Atomic_" #width " " #op ": no clobbering", \ + .u.insns_int = { \ + BPF_ALU64_IMM(BPF_MOV, R0, 0), \ + BPF_ALU64_IMM(BPF_MOV, R1, 1), \ + BPF_ALU64_IMM(BPF_MOV, R2, 2), \ + BPF_ALU64_IMM(BPF_MOV, R3, 3), \ + BPF_ALU64_IMM(BPF_MOV, R4, 4), \ + BPF_ALU64_IMM(BPF_MOV, R5, 5), \ + BPF_ALU64_IMM(BPF_MOV, R6, 6), \ + BPF_ALU64_IMM(BPF_MOV, R7, 7), \ + BPF_ALU64_IMM(BPF_MOV, R8, 8), \ + BPF_ALU64_IMM(BPF_MOV, R9, 9), \ + BPF_ST_MEM(width, R10, -8, \ + (op) == BPF_CMPXCHG ? 0 : \ + (op) & BPF_FETCH ? 
1 : 0), \ + BPF_ATOMIC_OP(width, op, R10, R1, -8), \ + BPF_JMP_IMM(BPF_JNE, R0, 0, 10), \ + BPF_JMP_IMM(BPF_JNE, R1, 1, 9), \ + BPF_JMP_IMM(BPF_JNE, R2, 2, 8), \ + BPF_JMP_IMM(BPF_JNE, R3, 3, 7), \ + BPF_JMP_IMM(BPF_JNE, R4, 4, 6), \ + BPF_JMP_IMM(BPF_JNE, R5, 5, 5), \ + BPF_JMP_IMM(BPF_JNE, R6, 6, 4), \ + BPF_JMP_IMM(BPF_JNE, R7, 7, 3), \ + BPF_JMP_IMM(BPF_JNE, R8, 8, 2), \ + BPF_JMP_IMM(BPF_JNE, R9, 9, 1), \ + BPF_ALU64_IMM(BPF_MOV, R0, 1), \ + BPF_EXIT_INSN(), \ + }, \ + INTERNAL, \ + { }, \ + { { 0, 1 } }, \ + .stack_depth = 8, \ + } + /* 64-bit atomic operations, register clobbering */ + BPF_TEST_CLOBBER_ATOMIC(BPF_DW, BPF_ADD), + BPF_TEST_CLOBBER_ATOMIC(BPF_DW, BPF_AND), + BPF_TEST_CLOBBER_ATOMIC(BPF_DW, BPF_OR), + BPF_TEST_CLOBBER_ATOMIC(BPF_DW, BPF_XOR), + BPF_TEST_CLOBBER_ATOMIC(BPF_DW, BPF_ADD | BPF_FETCH), + BPF_TEST_CLOBBER_ATOMIC(BPF_DW, BPF_AND | BPF_FETCH), + BPF_TEST_CLOBBER_ATOMIC(BPF_DW, BPF_OR | BPF_FETCH), + BPF_TEST_CLOBBER_ATOMIC(BPF_DW, BPF_XOR | BPF_FETCH), + BPF_TEST_CLOBBER_ATOMIC(BPF_DW, BPF_XCHG), + BPF_TEST_CLOBBER_ATOMIC(BPF_DW, BPF_CMPXCHG), + /* 32-bit atomic operations, register clobbering */ + BPF_TEST_CLOBBER_ATOMIC(BPF_W, BPF_ADD), + BPF_TEST_CLOBBER_ATOMIC(BPF_W, BPF_AND), + BPF_TEST_CLOBBER_ATOMIC(BPF_W, BPF_OR), + BPF_TEST_CLOBBER_ATOMIC(BPF_W, BPF_XOR), + BPF_TEST_CLOBBER_ATOMIC(BPF_W, BPF_ADD | BPF_FETCH), + BPF_TEST_CLOBBER_ATOMIC(BPF_W, BPF_AND | BPF_FETCH), + BPF_TEST_CLOBBER_ATOMIC(BPF_W, BPF_OR | BPF_FETCH), + BPF_TEST_CLOBBER_ATOMIC(BPF_W, BPF_XOR | BPF_FETCH), + BPF_TEST_CLOBBER_ATOMIC(BPF_W, BPF_XCHG), + BPF_TEST_CLOBBER_ATOMIC(BPF_W, BPF_CMPXCHG), +#undef BPF_TEST_CLOBBER_ATOMIC /* Checking that ALU32 src is not zero extended in place */ #define BPF_ALU32_SRC_ZEXT(op) \ { \ From patchwork Fri Oct 1 13:03:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johan Almbladh X-Patchwork-Id: 12530637 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9FFBAC433F5 for ; Fri, 1 Oct 2021 13:05:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 85E9561AA8 for ; Fri, 1 Oct 2021 13:05:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231447AbhJANHe (ORCPT ); Fri, 1 Oct 2021 09:07:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34088 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1353957AbhJANGv (ORCPT ); Fri, 1 Oct 2021 09:06:51 -0400 Received: from mail-ed1-x529.google.com (mail-ed1-x529.google.com [IPv6:2a00:1450:4864:20::529]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BEFFFC061782 for ; Fri, 1 Oct 2021 06:04:07 -0700 (PDT) Received: by mail-ed1-x529.google.com with SMTP id r18so34436935edv.12 for ; Fri, 01 Oct 2021 06:04:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=anyfinetworks-com.20210112.gappssmtp.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=FX3awPb6VAt2eZU+efm5xZi4nnLa1upqR/daiKl0/CY=; b=1JaOw38bwLZcOUNtgfWT+mTbFWe80hsyJrOcdE4t5ubu3U70vSCC5DvTc3QFJ1CSgQ MBBR6xdJFokiKQaPx3lbww2CIZa5pZ+Ka0iv4u+At6OYzkNE/lpHghb91NxMUpf7bfMY 
f9tYZVbHpVHhJT8Vg6FtFlKWbCnxe5qx8ZynNWH9rF9Q+K+gf2Ss1pPlmr/Gl+vnmhZJ aPdhxr3BI66cTqRS2fTRcZMyWvrHlw/iUOHkP+sT+i+fXXlfZCpmLxZPXZInbjc6w0UC GLw6eG63kpcDCQIoA3SW0OwsFmhy1F2q32e7gvfOnS/8ly3zFKlv41RijLEBTj3dBEkI NonQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=FX3awPb6VAt2eZU+efm5xZi4nnLa1upqR/daiKl0/CY=; b=JowYCdNV0znVTgqX52tLgTAa5FAWx/u59eRKJgFsNWsZiveU+RnjtFx2AxAt+K4/Cv mY4XUwlH9zzbM5bFKq237EVL4+aCCLQOZ3NF4HTEaL/YDhvBl+76Owh9MIBRA1HiNj2E mXummKX39FwOaKUmeXrATjCYDmM0coVbedDCaLf+m3Yh6YcjpEdBolnH4DhfE4OAgD2D XAmh3aKYTCnpkQ+rki2b9QYVDth1laJVN465X6f9oBczw7MY30fY80oR/AGacVLmE1SC VgRjjQje/NvW4Y7p0MtxNe7OZaan/1PH5YmddYEUyDKlPKUQ+xHQt5qD0PgmIf/lzK8w WotQ== X-Gm-Message-State: AOAM530SbqHFVsPNqVQRaQKepOwbTrkqBSt2089tebC1l6pHBx/P4+u1 DmgDEa9bW0kwDowAJcRDthsSBA== X-Google-Smtp-Source: ABdhPJytKybAbDURYKoq+8eKzkF3v/ljvBrmTADi+GnUiyn3ezfuZ0Ic8Cv+wm+Zyc+x5FUwLU1uAQ== X-Received: by 2002:a17:906:660f:: with SMTP id b15mr6147768ejp.491.1633093445070; Fri, 01 Oct 2021 06:04:05 -0700 (PDT) Received: from anpc2.lan (static-213-115-136-2.sme.telenor.se. [213.115.136.2]) by smtp.gmail.com with ESMTPSA id p22sm2920279ejl.90.2021.10.01.06.04.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 01 Oct 2021 06:04:04 -0700 (PDT) From: Johan Almbladh To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com, paul@cilium.io, yangtiezhu@loongson.cn, netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh Subject: [PATCH bpf-next 06/10] bpf/tests: Minor restructuring of ALU tests Date: Fri, 1 Oct 2021 15:03:44 +0200 Message-Id: <20211001130348.3670534-7-johan.almbladh@anyfinetworks.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211001130348.3670534-1-johan.almbladh@anyfinetworks.com> References: <20211001130348.3670534-1-johan.almbladh@anyfinetworks.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net This patch moves the ALU LSH/RSH/ARSH reference computations into the common reference value function. Also fix typo in constants so they now have the intended values. 
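For the ALU32 case the shared helper still operates on 64-bit values, so
the operand is sign-extended for ARSH (and zero-extended for the other
shifts) before the call, and the result truncated afterwards. A rough
user-space sketch of why the sign extension matters (illustration only,
not the kernel helper itself):

  #include <stdint.h>
  #include <stdio.h>

  /* Model of the 64-bit ARSH reference: do a logical shift, then copy
   * in the sign bits by hand when the MSB of the operand was set.
   */
  static uint64_t ref_arsh64(uint64_t v1, uint64_t v2)
  {
          uint64_t res = v1 >> v2;

          if (v2 > 0 && v1 > INT64_MAX)
                  res |= ~0ULL << (64 - v2);
          return res;
  }

  int main(void)
  {
          uint32_t reg = 0x80000000;
          /* Sign-extend the 32-bit operand, shift, then truncate. */
          uint64_t wide = (uint64_t)(int64_t)(int32_t)reg;
          uint32_t val = (uint32_t)ref_arsh64(wide, 4);

          printf("0x%08x\n", val);        /* 0xf8000000 */
          return 0;
  }

Feeding the zero-extended operand into the same computation would drop
the copied sign bits and give 0x08000000 instead, which is why the ALU32
ARSH reference cannot simply truncate the inputs.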
Signed-off-by: Johan Almbladh --- lib/test_bpf.c | 137 +++++++++++++++++++++++-------------------------- 1 file changed, 65 insertions(+), 72 deletions(-) diff --git a/lib/test_bpf.c b/lib/test_bpf.c index 201f34060eef..919323a3b69f 100644 --- a/lib/test_bpf.c +++ b/lib/test_bpf.c @@ -538,6 +538,57 @@ static int bpf_fill_max_jmp_never_taken(struct bpf_test *self) return __bpf_fill_max_jmp(self, BPF_JLT, 0); } +/* ALU result computation used in tests */ +static bool __bpf_alu_result(u64 *res, u64 v1, u64 v2, u8 op) +{ + *res = 0; + switch (op) { + case BPF_MOV: + *res = v2; + break; + case BPF_AND: + *res = v1 & v2; + break; + case BPF_OR: + *res = v1 | v2; + break; + case BPF_XOR: + *res = v1 ^ v2; + break; + case BPF_LSH: + *res = v1 << v2; + break; + case BPF_RSH: + *res = v1 >> v2; + break; + case BPF_ARSH: + *res = v1 >> v2; + if (v2 > 0 && v1 > S64_MAX) + *res |= ~0ULL << (64 - v2); + break; + case BPF_ADD: + *res = v1 + v2; + break; + case BPF_SUB: + *res = v1 - v2; + break; + case BPF_MUL: + *res = v1 * v2; + break; + case BPF_DIV: + if (v2 == 0) + return false; + *res = div64_u64(v1, v2); + break; + case BPF_MOD: + if (v2 == 0) + return false; + div64_u64_rem(v1, v2, res); + break; + } + return true; +} + /* Test an ALU shift operation for all valid shift values */ static int __bpf_fill_alu_shift(struct bpf_test *self, u8 op, u8 mode, bool alu32) @@ -576,37 +627,19 @@ static int __bpf_fill_alu_shift(struct bpf_test *self, u8 op, insn[i++] = BPF_ALU32_IMM(op, R1, imm); else insn[i++] = BPF_ALU32_REG(op, R1, R2); - switch (op) { - case BPF_LSH: - val = (u32)reg << imm; - break; - case BPF_RSH: - val = (u32)reg >> imm; - break; - case BPF_ARSH: - val = (u32)reg >> imm; - if (imm > 0 && (reg & 0x80000000)) - val |= ~(u32)0 << (32 - imm); - break; - } + + if (op == BPF_ARSH) + reg = (s32)reg; + else + reg = (u32)reg; + __bpf_alu_result(&val, reg, imm, op); + val = (u32)val; } else { if (mode == BPF_K) insn[i++] = BPF_ALU64_IMM(op, R1, imm); else insn[i++] = BPF_ALU64_REG(op, R1, R2); - switch (op) { - case BPF_LSH: - val = (u64)reg << imm; - break; - case BPF_RSH: - val = (u64)reg >> imm; - break; - case BPF_ARSH: - val = (u64)reg >> imm; - if (imm > 0 && reg < 0) - val |= ~(u64)0 << (64 - imm); - break; - } + __bpf_alu_result(&val, reg, imm, op); } /* @@ -799,46 +832,6 @@ static int __bpf_fill_pattern(struct bpf_test *self, void *arg, * test is designed to verify e.g. the ALU and ALU64 operations for JITs that * emit different code depending on the magnitude of the immediate value. 
*/ - -static bool __bpf_alu_result(u64 *res, u64 v1, u64 v2, u8 op) -{ - *res = 0; - switch (op) { - case BPF_MOV: - *res = v2; - break; - case BPF_AND: - *res = v1 & v2; - break; - case BPF_OR: - *res = v1 | v2; - break; - case BPF_XOR: - *res = v1 ^ v2; - break; - case BPF_ADD: - *res = v1 + v2; - break; - case BPF_SUB: - *res = v1 - v2; - break; - case BPF_MUL: - *res = v1 * v2; - break; - case BPF_DIV: - if (v2 == 0) - return false; - *res = div64_u64(v1, v2); - break; - case BPF_MOD: - if (v2 == 0) - return false; - div64_u64_rem(v1, v2, res); - break; - } - return true; -} - static int __bpf_emit_alu64_imm(struct bpf_test *self, void *arg, struct bpf_insn *insns, s64 dst, s64 imm) { @@ -7881,7 +7874,7 @@ static struct bpf_test tests[] = { "BPF_ATOMIC | BPF_DW, BPF_CMPXCHG: Test successful return", .u.insns_int = { BPF_LD_IMM64(R1, 0x0123456789abcdefULL), - BPF_LD_IMM64(R2, 0xfecdba9876543210ULL), + BPF_LD_IMM64(R2, 0xfedcba9876543210ULL), BPF_ALU64_REG(BPF_MOV, R0, R1), BPF_STX_MEM(BPF_DW, R10, R1, -40), BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -40), @@ -7898,7 +7891,7 @@ static struct bpf_test tests[] = { "BPF_ATOMIC | BPF_DW, BPF_CMPXCHG: Test successful store", .u.insns_int = { BPF_LD_IMM64(R1, 0x0123456789abcdefULL), - BPF_LD_IMM64(R2, 0xfecdba9876543210ULL), + BPF_LD_IMM64(R2, 0xfedcba9876543210ULL), BPF_ALU64_REG(BPF_MOV, R0, R1), BPF_STX_MEM(BPF_DW, R10, R0, -40), BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -40), @@ -7916,7 +7909,7 @@ static struct bpf_test tests[] = { "BPF_ATOMIC | BPF_DW, BPF_CMPXCHG: Test failure return", .u.insns_int = { BPF_LD_IMM64(R1, 0x0123456789abcdefULL), - BPF_LD_IMM64(R2, 0xfecdba9876543210ULL), + BPF_LD_IMM64(R2, 0xfedcba9876543210ULL), BPF_ALU64_REG(BPF_MOV, R0, R1), BPF_ALU64_IMM(BPF_ADD, R0, 1), BPF_STX_MEM(BPF_DW, R10, R1, -40), @@ -7934,7 +7927,7 @@ static struct bpf_test tests[] = { "BPF_ATOMIC | BPF_DW, BPF_CMPXCHG: Test failure store", .u.insns_int = { BPF_LD_IMM64(R1, 0x0123456789abcdefULL), - BPF_LD_IMM64(R2, 0xfecdba9876543210ULL), + BPF_LD_IMM64(R2, 0xfedcba9876543210ULL), BPF_ALU64_REG(BPF_MOV, R0, R1), BPF_ALU64_IMM(BPF_ADD, R0, 1), BPF_STX_MEM(BPF_DW, R10, R1, -40), @@ -7953,11 +7946,11 @@ static struct bpf_test tests[] = { "BPF_ATOMIC | BPF_DW, BPF_CMPXCHG: Test side effects", .u.insns_int = { BPF_LD_IMM64(R1, 0x0123456789abcdefULL), - BPF_LD_IMM64(R2, 0xfecdba9876543210ULL), + BPF_LD_IMM64(R2, 0xfedcba9876543210ULL), BPF_ALU64_REG(BPF_MOV, R0, R1), BPF_STX_MEM(BPF_DW, R10, R1, -40), BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -40), - BPF_LD_IMM64(R0, 0xfecdba9876543210ULL), + BPF_LD_IMM64(R0, 0xfedcba9876543210ULL), BPF_JMP_REG(BPF_JNE, R0, R2, 1), BPF_ALU64_REG(BPF_SUB, R0, R2), BPF_EXIT_INSN(), From patchwork Fri Oct 1 13:03:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johan Almbladh X-Patchwork-Id: 12530631 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E09E5C43217 for ; Fri, 1 Oct 2021 13:05:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C55C761A83 for ; Fri, 1 Oct 2021 13:05:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1354515AbhJANHA (ORCPT ); Fri, 1 Oct 2021 09:07:00 -0400 Received: from 
lindbergh.monkeyblade.net ([23.128.96.19]:34280 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1354073AbhJANGv (ORCPT ); Fri, 1 Oct 2021 09:06:51 -0400 Received: from mail-ed1-x52d.google.com (mail-ed1-x52d.google.com [IPv6:2a00:1450:4864:20::52d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C2903C061786 for ; Fri, 1 Oct 2021 06:04:08 -0700 (PDT) Received: by mail-ed1-x52d.google.com with SMTP id b8so47317edk.2 for ; Fri, 01 Oct 2021 06:04:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=anyfinetworks-com.20210112.gappssmtp.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=ZpFHmDqOoyhA3jAK5s9z4k6+KE4kULIDGqlgShg+6uQ=; b=L4vDK+ih8jWxlpASn6qUuBl5+sEwkSFACj0AGmdvkq2Kve4JP7zQT8e2Ma+JuPR/ST B55u84/BjxgodDg99H0Cjv4y5W6ocP/VSqzMFHGe/EnERNmZlCEdF3gmiCyRkGxLGBZi H+SBoJMyJiV50Hr3DewCsr1/leGJ/YqT5k9FLiBDDDo18uwFHPhm+Z+ASrJDN340Uacm EA6j93FFBXwoz4rhKgOZLgFMigJeV/UD4pLpnUD80yc7H4auJYGBF7mqiYZ1eaB/pUeh x3sPci9qKVAHWYOm8Bh5MXmMQHbFDZaSPO6bKUAe4pTlYAPjQEmoy6j4ZwRnf/+cL27z vHkg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ZpFHmDqOoyhA3jAK5s9z4k6+KE4kULIDGqlgShg+6uQ=; b=bcbPe3g61ZBw9giCpK/o4ZM9Zw/uiZhRllJaud+yx6qUX9JZbzRQaF1KE+c/X/SmyE aoLB+LeJuRhxVuIFUtzIeRb1MFGIUXdXBO49NHIEa2rp6BucreGTNy9/4Du+rinIvZYN MhZN+tDQfYmT+2NcVxglB9SzD7ywemBQxulAQ7wmtdYF+8fwjWBJj1+YQRF7DojlhI2m CeBLByLvlmqUQH4kgtamaUK4VKmGSF0TGbrVPEr0gfmwKICoP2i/6lwtlrpKEl9aNMxa 5WBfXFhjxqVzQ2QHDMLs3Vs+eeENulZNmH55Lm9iQjy5QhZNGFXON50CFjfKteMqavCU 1Jag== X-Gm-Message-State: AOAM533v5nXZDEsNkVFEwq3h26QkyVb01ibkoCP4fut2MQby5GTMrVX6 igs0zDmY2PglsPRLvouIn064Kg== X-Google-Smtp-Source: ABdhPJxcEfLmyOyLtCXDlOSGa86W59p/xC3GMO/nSeijab3YMNHaWotVUKxIj7xl+htVS0qa/XfKOw== X-Received: by 2002:a17:906:3fd7:: with SMTP id k23mr6102451ejj.176.1633093446347; Fri, 01 Oct 2021 06:04:06 -0700 (PDT) Received: from anpc2.lan (static-213-115-136-2.sme.telenor.se. [213.115.136.2]) by smtp.gmail.com with ESMTPSA id p22sm2920279ejl.90.2021.10.01.06.04.05 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 01 Oct 2021 06:04:06 -0700 (PDT) From: Johan Almbladh To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com, paul@cilium.io, yangtiezhu@loongson.cn, netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh Subject: [PATCH bpf-next 07/10] bpf/tests: Add exhaustive tests of ALU register combinations Date: Fri, 1 Oct 2021 15:03:45 +0200 Message-Id: <20211001130348.3670534-8-johan.almbladh@anyfinetworks.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211001130348.3670534-1-johan.almbladh@anyfinetworks.com> References: <20211001130348.3670534-1-johan.almbladh@anyfinetworks.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net This patch replaces the current register combination test with new exhaustive tests. Before, only a subset of register combinations was tested for ALU64 DIV. Now, all combinatons of operand registers are tested, including the case when they are the same, and for all ALU32 and ALU64 operations. 
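When the destination and source are the same register, the second
immediate load overwrites the first, so the expected result becomes
op(src, src) rather than op(dst, src). A small user-space model of the
expectation logic (sketch only, using the ALU64 operand values from the
generator below and ADD as the example operation):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint64_t dst = 0x7edcba9876543210ULL;
          uint64_t src = 0x0123456789abcdefULL;
          uint64_t res = dst + src;       /* expected when rd != rs */
          uint64_t same = src + src;      /* expected when rd == rs */
          int rd, rs, pairs = 0;

          for (rd = 0; rd <= 9; rd++) {
                  for (rs = 0; rs <= 9; rs++) {
                          uint64_t expect = rd == rs ? same : res;

                          /* The generated program loads dst into rd,
                           * src into rs, applies the operation and
                           * compares rd against expect.
                           */
                          (void)expect;
                          pairs++;
                  }
          }

          printf("%d register pairs per operation\n", pairs); /* 100 */
          return 0;
  }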
Signed-off-by: Johan Almbladh --- lib/test_bpf.c | 834 ++++++++++++++++++++++++++++++++++++++++++++----- 1 file changed, 763 insertions(+), 71 deletions(-) diff --git a/lib/test_bpf.c b/lib/test_bpf.c index 919323a3b69f..924bf4c9783c 100644 --- a/lib/test_bpf.c +++ b/lib/test_bpf.c @@ -1130,6 +1130,381 @@ static int bpf_fill_alu32_mod_reg(struct bpf_test *self) return __bpf_fill_alu32_reg(self, BPF_MOD); } +/* + * Test JITs that implement complex ALU operations as function + * calls, and must re-arrange operands for argument passing. + */ +static int __bpf_fill_alu_imm_regs(struct bpf_test *self, u8 op, bool alu32) +{ + int len = 2 + 10 * 10; + struct bpf_insn *insns; + u64 dst, res; + int i = 0; + u32 imm; + int rd; + + insns = kmalloc_array(len, sizeof(*insns), GFP_KERNEL); + if (!insns) + return -ENOMEM; + + /* Operand and result values according to operation */ + if (alu32) + dst = 0x76543210U; + else + dst = 0x7edcba9876543210ULL; + imm = 0x01234567U; + + if (op == BPF_LSH || op == BPF_RSH || op == BPF_ARSH) + imm &= 31; + + __bpf_alu_result(&res, dst, imm, op); + + if (alu32) + res = (u32)res; + + /* Check all operand registers */ + for (rd = R0; rd <= R9; rd++) { + i += __bpf_ld_imm64(&insns[i], rd, dst); + + if (alu32) + insns[i++] = BPF_ALU32_IMM(op, rd, imm); + else + insns[i++] = BPF_ALU64_IMM(op, rd, imm); + + insns[i++] = BPF_JMP32_IMM(BPF_JEQ, rd, res, 2); + insns[i++] = BPF_MOV64_IMM(R0, __LINE__); + insns[i++] = BPF_EXIT_INSN(); + + insns[i++] = BPF_ALU64_IMM(BPF_RSH, rd, 32); + insns[i++] = BPF_JMP32_IMM(BPF_JEQ, rd, res >> 32, 2); + insns[i++] = BPF_MOV64_IMM(R0, __LINE__); + insns[i++] = BPF_EXIT_INSN(); + } + + insns[i++] = BPF_MOV64_IMM(R0, 1); + insns[i++] = BPF_EXIT_INSN(); + + self->u.ptr.insns = insns; + self->u.ptr.len = len; + BUG_ON(i != len); + + return 0; +} + +/* ALU64 K registers */ +static int bpf_fill_alu64_mov_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_MOV, false); +} + +static int bpf_fill_alu64_and_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_AND, false); +} + +static int bpf_fill_alu64_or_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_OR, false); +} + +static int bpf_fill_alu64_xor_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_XOR, false); +} + +static int bpf_fill_alu64_lsh_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_LSH, false); +} + +static int bpf_fill_alu64_rsh_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_RSH, false); +} + +static int bpf_fill_alu64_arsh_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_ARSH, false); +} + +static int bpf_fill_alu64_add_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_ADD, false); +} + +static int bpf_fill_alu64_sub_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_SUB, false); +} + +static int bpf_fill_alu64_mul_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_MUL, false); +} + +static int bpf_fill_alu64_div_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_DIV, false); +} + +static int bpf_fill_alu64_mod_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_MOD, false); +} + +/* ALU32 K registers */ +static int bpf_fill_alu32_mov_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_MOV, true); +} + +static int 
bpf_fill_alu32_and_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_AND, true); +} + +static int bpf_fill_alu32_or_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_OR, true); +} + +static int bpf_fill_alu32_xor_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_XOR, true); +} + +static int bpf_fill_alu32_lsh_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_LSH, true); +} + +static int bpf_fill_alu32_rsh_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_RSH, true); +} + +static int bpf_fill_alu32_arsh_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_ARSH, true); +} + +static int bpf_fill_alu32_add_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_ADD, true); +} + +static int bpf_fill_alu32_sub_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_SUB, true); +} + +static int bpf_fill_alu32_mul_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_MUL, true); +} + +static int bpf_fill_alu32_div_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_DIV, true); +} + +static int bpf_fill_alu32_mod_imm_regs(struct bpf_test *self) +{ + return __bpf_fill_alu_imm_regs(self, BPF_MOD, true); +} + +/* + * Test JITs that implement complex ALU operations as function + * calls, and must re-arrange operands for argument passing. + */ +static int __bpf_fill_alu_reg_pairs(struct bpf_test *self, u8 op, bool alu32) +{ + int len = 2 + 10 * 10 * 12; + u64 dst, src, res, same; + struct bpf_insn *insns; + int rd, rs; + int i = 0; + + insns = kmalloc_array(len, sizeof(*insns), GFP_KERNEL); + if (!insns) + return -ENOMEM; + + /* Operand and result values according to operation */ + if (alu32) { + dst = 0x76543210U; + src = 0x01234567U; + } else { + dst = 0x7edcba9876543210ULL; + src = 0x0123456789abcdefULL; + } + + if (op == BPF_LSH || op == BPF_RSH || op == BPF_ARSH) + src &= 31; + + __bpf_alu_result(&res, dst, src, op); + __bpf_alu_result(&same, src, src, op); + + if (alu32) { + res = (u32)res; + same = (u32)same; + } + + /* Check all combinations of operand registers */ + for (rd = R0; rd <= R9; rd++) { + for (rs = R0; rs <= R9; rs++) { + u64 val = rd == rs ? 
same : res; + + i += __bpf_ld_imm64(&insns[i], rd, dst); + i += __bpf_ld_imm64(&insns[i], rs, src); + + if (alu32) + insns[i++] = BPF_ALU32_REG(op, rd, rs); + else + insns[i++] = BPF_ALU64_REG(op, rd, rs); + + insns[i++] = BPF_JMP32_IMM(BPF_JEQ, rd, val, 2); + insns[i++] = BPF_MOV64_IMM(R0, __LINE__); + insns[i++] = BPF_EXIT_INSN(); + + insns[i++] = BPF_ALU64_IMM(BPF_RSH, rd, 32); + insns[i++] = BPF_JMP32_IMM(BPF_JEQ, rd, val >> 32, 2); + insns[i++] = BPF_MOV64_IMM(R0, __LINE__); + insns[i++] = BPF_EXIT_INSN(); + } + } + + insns[i++] = BPF_MOV64_IMM(R0, 1); + insns[i++] = BPF_EXIT_INSN(); + + self->u.ptr.insns = insns; + self->u.ptr.len = len; + BUG_ON(i != len); + + return 0; +} + +/* ALU64 X register combinations */ +static int bpf_fill_alu64_mov_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_MOV, false); +} + +static int bpf_fill_alu64_and_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_AND, false); +} + +static int bpf_fill_alu64_or_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_OR, false); +} + +static int bpf_fill_alu64_xor_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_XOR, false); +} + +static int bpf_fill_alu64_lsh_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_LSH, false); +} + +static int bpf_fill_alu64_rsh_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_RSH, false); +} + +static int bpf_fill_alu64_arsh_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_ARSH, false); +} + +static int bpf_fill_alu64_add_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_ADD, false); +} + +static int bpf_fill_alu64_sub_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_SUB, false); +} + +static int bpf_fill_alu64_mul_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_MUL, false); +} + +static int bpf_fill_alu64_div_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_DIV, false); +} + +static int bpf_fill_alu64_mod_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_MOD, false); +} + +/* ALU32 X register combinations */ +static int bpf_fill_alu32_mov_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_MOV, true); +} + +static int bpf_fill_alu32_and_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_AND, true); +} + +static int bpf_fill_alu32_or_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_OR, true); +} + +static int bpf_fill_alu32_xor_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_XOR, true); +} + +static int bpf_fill_alu32_lsh_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_LSH, true); +} + +static int bpf_fill_alu32_rsh_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_RSH, true); +} + +static int bpf_fill_alu32_arsh_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_ARSH, true); +} + +static int bpf_fill_alu32_add_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_ADD, true); +} + +static int bpf_fill_alu32_sub_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_SUB, true); +} + +static int bpf_fill_alu32_mul_reg_pairs(struct bpf_test *self) +{ + return 
__bpf_fill_alu_reg_pairs(self, BPF_MUL, true); +} + +static int bpf_fill_alu32_div_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_DIV, true); +} + +static int bpf_fill_alu32_mod_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_alu_reg_pairs(self, BPF_MOD, true); +} + /* * Exhaustive tests of atomic operations for all power-of-two operand * magnitudes, both for positive and negative values. @@ -3737,77 +4112,6 @@ static struct bpf_test tests[] = { { }, { { 0, -1 } } }, - { - /* - * Test 32-bit JITs that implement complex ALU64 operations as - * function calls R0 = f(R1, R2), and must re-arrange operands. - */ -#define NUMER 0xfedcba9876543210ULL -#define DENOM 0x0123456789abcdefULL - "ALU64_DIV X: Operand register permutations", - .u.insns_int = { - /* R0 / R2 */ - BPF_LD_IMM64(R0, NUMER), - BPF_LD_IMM64(R2, DENOM), - BPF_ALU64_REG(BPF_DIV, R0, R2), - BPF_JMP_IMM(BPF_JEQ, R0, NUMER / DENOM, 1), - BPF_EXIT_INSN(), - /* R1 / R0 */ - BPF_LD_IMM64(R1, NUMER), - BPF_LD_IMM64(R0, DENOM), - BPF_ALU64_REG(BPF_DIV, R1, R0), - BPF_JMP_IMM(BPF_JEQ, R1, NUMER / DENOM, 1), - BPF_EXIT_INSN(), - /* R0 / R1 */ - BPF_LD_IMM64(R0, NUMER), - BPF_LD_IMM64(R1, DENOM), - BPF_ALU64_REG(BPF_DIV, R0, R1), - BPF_JMP_IMM(BPF_JEQ, R0, NUMER / DENOM, 1), - BPF_EXIT_INSN(), - /* R2 / R0 */ - BPF_LD_IMM64(R2, NUMER), - BPF_LD_IMM64(R0, DENOM), - BPF_ALU64_REG(BPF_DIV, R2, R0), - BPF_JMP_IMM(BPF_JEQ, R2, NUMER / DENOM, 1), - BPF_EXIT_INSN(), - /* R2 / R1 */ - BPF_LD_IMM64(R2, NUMER), - BPF_LD_IMM64(R1, DENOM), - BPF_ALU64_REG(BPF_DIV, R2, R1), - BPF_JMP_IMM(BPF_JEQ, R2, NUMER / DENOM, 1), - BPF_EXIT_INSN(), - /* R1 / R2 */ - BPF_LD_IMM64(R1, NUMER), - BPF_LD_IMM64(R2, DENOM), - BPF_ALU64_REG(BPF_DIV, R1, R2), - BPF_JMP_IMM(BPF_JEQ, R1, NUMER / DENOM, 1), - BPF_EXIT_INSN(), - /* R1 / R1 */ - BPF_LD_IMM64(R1, NUMER), - BPF_ALU64_REG(BPF_DIV, R1, R1), - BPF_JMP_IMM(BPF_JEQ, R1, 1, 1), - BPF_EXIT_INSN(), - /* R2 / R2 */ - BPF_LD_IMM64(R2, DENOM), - BPF_ALU64_REG(BPF_DIV, R2, R2), - BPF_JMP_IMM(BPF_JEQ, R2, 1, 1), - BPF_EXIT_INSN(), - /* R3 / R4 */ - BPF_LD_IMM64(R3, NUMER), - BPF_LD_IMM64(R4, DENOM), - BPF_ALU64_REG(BPF_DIV, R3, R4), - BPF_JMP_IMM(BPF_JEQ, R3, NUMER / DENOM, 1), - BPF_EXIT_INSN(), - /* Successful return */ - BPF_LD_IMM64(R0, 1), - BPF_EXIT_INSN(), - }, - INTERNAL, - { }, - { { 0, 1 } }, -#undef NUMER -#undef DENOM - }, #ifdef CONFIG_32BIT { "INT: 32-bit context pointer word order and zero-extension", @@ -10849,6 +11153,394 @@ static struct bpf_test tests[] = { BPF_JMP32_REG_ZEXT(JSLT), BPF_JMP32_REG_ZEXT(JSLE), #undef BPF_JMP2_REG_ZEXT + /* ALU64 K register combinations */ + { + "ALU64_MOV_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_mov_imm_regs, + }, + { + "ALU64_AND_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_and_imm_regs, + }, + { + "ALU64_OR_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_or_imm_regs, + }, + { + "ALU64_XOR_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_xor_imm_regs, + }, + { + "ALU64_LSH_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_lsh_imm_regs, + }, + { + "ALU64_RSH_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_rsh_imm_regs, + }, + { + "ALU64_ARSH_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_arsh_imm_regs, + }, + { + "ALU64_ADD_K: registers", + { }, 
+ INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_add_imm_regs, + }, + { + "ALU64_SUB_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_sub_imm_regs, + }, + { + "ALU64_MUL_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_mul_imm_regs, + }, + { + "ALU64_DIV_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_div_imm_regs, + }, + { + "ALU64_MOD_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_mod_imm_regs, + }, + /* ALU32 K registers */ + { + "ALU32_MOV_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_mov_imm_regs, + }, + { + "ALU32_AND_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_and_imm_regs, + }, + { + "ALU32_OR_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_or_imm_regs, + }, + { + "ALU32_XOR_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_xor_imm_regs, + }, + { + "ALU32_LSH_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_lsh_imm_regs, + }, + { + "ALU32_RSH_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_rsh_imm_regs, + }, + { + "ALU32_ARSH_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_arsh_imm_regs, + }, + { + "ALU32_ADD_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_add_imm_regs, + }, + { + "ALU32_SUB_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_sub_imm_regs, + }, + { + "ALU32_MUL_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_mul_imm_regs, + }, + { + "ALU32_DIV_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_div_imm_regs, + }, + { + "ALU32_MOD_K: registers", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_mod_imm_regs, + }, + /* ALU64 X register combinations */ + { + "ALU64_MOV_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_mov_reg_pairs, + }, + { + "ALU64_AND_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_and_reg_pairs, + }, + { + "ALU64_OR_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_or_reg_pairs, + }, + { + "ALU64_XOR_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_xor_reg_pairs, + }, + { + "ALU64_LSH_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_lsh_reg_pairs, + }, + { + "ALU64_RSH_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_rsh_reg_pairs, + }, + { + "ALU64_ARSH_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_arsh_reg_pairs, + }, + { + "ALU64_ADD_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_add_reg_pairs, + }, + { + "ALU64_SUB_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_sub_reg_pairs, + }, + { + "ALU64_MUL_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_mul_reg_pairs, + }, + { + "ALU64_DIV_X: 
register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_div_reg_pairs, + }, + { + "ALU64_MOD_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_mod_reg_pairs, + }, + /* ALU32 X register combinations */ + { + "ALU32_MOV_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_mov_reg_pairs, + }, + { + "ALU32_AND_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_and_reg_pairs, + }, + { + "ALU32_OR_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_or_reg_pairs, + }, + { + "ALU32_XOR_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_xor_reg_pairs, + }, + { + "ALU32_LSH_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_lsh_reg_pairs, + }, + { + "ALU32_RSH_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_rsh_reg_pairs, + }, + { + "ALU32_ARSH_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_arsh_reg_pairs, + }, + { + "ALU32_ADD_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_add_reg_pairs, + }, + { + "ALU32_SUB_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_sub_reg_pairs, + }, + { + "ALU32_MUL_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_mul_reg_pairs, + }, + { + "ALU32_DIV_X: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_div_reg_pairs, + }, + { + "ALU32_MOD_X register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_mod_reg_pairs, + }, /* Exhaustive test of ALU64 shift operations */ { "ALU64_LSH_K: all shift values", From patchwork Fri Oct 1 13:03:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johan Almbladh X-Patchwork-Id: 12530629 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BE45CC433F5 for ; Fri, 1 Oct 2021 13:05:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A520161AAA for ; Fri, 1 Oct 2021 13:05:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1354157AbhJANG7 (ORCPT ); Fri, 1 Oct 2021 09:06:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34096 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1354097AbhJANGv (ORCPT ); Fri, 1 Oct 2021 09:06:51 -0400 Received: from mail-ed1-x529.google.com (mail-ed1-x529.google.com [IPv6:2a00:1450:4864:20::529]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 283A3C06178A for ; Fri, 1 Oct 2021 06:04:10 -0700 (PDT) Received: by mail-ed1-x529.google.com with SMTP id r18so34437328edv.12 for ; Fri, 01 Oct 2021 06:04:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=anyfinetworks-com.20210112.gappssmtp.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references 
:mime-version:content-transfer-encoding; bh=ERLGDhV2pBfoTQi2z5jcvKZZQA7s+VbF/uLzKflc344=; b=aAstHToN9+sA+adJ1X952a4pZ4qmf3+gkfbgZIILdNhAGgi2a5/T/snvVfjcC+nXsL +SAgQ+mH7K1erqJG5tssiQhpSTKV6lgvnAo1K5DD69lchLyV56HyieCgExvv3otQEkDD 9v22d9r4nrfGBXbJ2YmfTh5tFbK5mOW0sv6gA/FhpcJQVeeUSnUiaguljFZnRNJry9ce oweXiAtbJiTcf6uN4jpHFaPMy8hvhEF8qtuGt3vCr8jPUEKDkt+Ao6BIhPznWwHJdwns KDge8yU4kgvVKT4kJpFecW4M+6B6NJIJiCcd48vCT67n995c0YwCG6cixwka41o2yFYx dZjg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ERLGDhV2pBfoTQi2z5jcvKZZQA7s+VbF/uLzKflc344=; b=ccuUuvZAu/R6NOpdBMI42aUkgWKDn1rRaJExlL6exBewrVGqShDjs/So6MhpJEQmgj CW/ejm+oWn7z6apIVGWNt43tSjrDeiL6bZwBbi55MOb4m69hx99dV+nVLJ7lB62vDfhm eDcOJVgffjWhLHaSZ2mp46a8c/w0ONwT+p0DFhMFFufW5XS29R+1f7G/wP2Q1QAVBurd e4/tkj19qbjUlLw6Ai6wTKENKFzSuiTxXsT7SSLilguMg3PRuu+CvZFNjeoVAkCEbmPY fthFCdD6nkRFN5GGaknKD5XXFGPkYGJzG4irIMaKYE5VYTbyscmQsE2LeflXpX7KELMC aRQQ== X-Gm-Message-State: AOAM530ZAqxrVsLKBNiXgIkqo10gr+0k+wYL9Zb1mNpZ4hHRkVzcgTcT pnnA2sYQld6XLFLhV7HpPIi8yg== X-Google-Smtp-Source: ABdhPJyGVFnd8U3qrHZwUay8HgFmSidD3IJKVnC0zV5GSx37zcmn6/e2W3oRssJdmSRsNXPG/15NNA== X-Received: by 2002:a17:906:4c4a:: with SMTP id d10mr6264672ejw.391.1633093447628; Fri, 01 Oct 2021 06:04:07 -0700 (PDT) Received: from anpc2.lan (static-213-115-136-2.sme.telenor.se. [213.115.136.2]) by smtp.gmail.com with ESMTPSA id p22sm2920279ejl.90.2021.10.01.06.04.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 01 Oct 2021 06:04:07 -0700 (PDT) From: Johan Almbladh To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com, paul@cilium.io, yangtiezhu@loongson.cn, netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh Subject: [PATCH bpf-next 08/10] bpf/tests: Add exhaustive tests of BPF_ATOMIC register combinations Date: Fri, 1 Oct 2021 15:03:46 +0200 Message-Id: <20211001130348.3670534-9-johan.almbladh@anyfinetworks.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211001130348.3670534-1-johan.almbladh@anyfinetworks.com> References: <20211001130348.3670534-1-johan.almbladh@anyfinetworks.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net This patch adds tests of all register combinations for BPF_ATOMIC operations on both BPF_W and BPF_DW sizes. Signed-off-by: Johan Almbladh --- lib/test_bpf.c | 422 +++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 422 insertions(+) diff --git a/lib/test_bpf.c b/lib/test_bpf.c index 924bf4c9783c..40db4cee4f51 100644 --- a/lib/test_bpf.c +++ b/lib/test_bpf.c @@ -1805,6 +1805,246 @@ static int bpf_fill_cmpxchg32(struct bpf_test *self) &__bpf_emit_cmpxchg32); } +/* + * Test JITs that implement ATOMIC operations as function calls or + * other primitives, and must re-arrange operands for argument passing. 
+ */ +static int __bpf_fill_atomic_reg_pairs(struct bpf_test *self, u8 width, u8 op) +{ + struct bpf_insn *insn; + int len = 2 + 34 * 10 * 10; + u64 mem, upd, res; + int rd, rs, i = 0; + + insn = kmalloc_array(len, sizeof(*insn), GFP_KERNEL); + if (!insn) + return -ENOMEM; + + /* Operand and memory values */ + if (width == BPF_DW) { + mem = 0x0123456789abcdefULL; + upd = 0xfedcba9876543210ULL; + } else { /* BPF_W */ + mem = 0x01234567U; + upd = 0x76543210U; + } + + /* Memory updated according to operation */ + switch (op) { + case BPF_XCHG: + res = upd; + break; + case BPF_CMPXCHG: + res = mem; + break; + default: + __bpf_alu_result(&res, mem, upd, BPF_OP(op)); + } + + /* Test all operand registers */ + for (rd = R0; rd <= R9; rd++) { + for (rs = R0; rs <= R9; rs++) { + u64 cmp, src; + + /* Initialize value in memory */ + i += __bpf_ld_imm64(&insn[i], R0, mem); + insn[i++] = BPF_STX_MEM(width, R10, R0, -8); + + /* Initialize registers in order */ + i += __bpf_ld_imm64(&insn[i], R0, ~mem); + i += __bpf_ld_imm64(&insn[i], rs, upd); + insn[i++] = BPF_MOV64_REG(rd, R10); + + /* Perform atomic operation */ + insn[i++] = BPF_ATOMIC_OP(width, op, rd, rs, -8); + if (op == BPF_CMPXCHG && width == BPF_W) + insn[i++] = BPF_ZEXT_REG(R0); + + /* Check R0 register value */ + if (op == BPF_CMPXCHG) + cmp = mem; /* Expect value from memory */ + else if (R0 == rd || R0 == rs) + cmp = 0; /* Aliased, checked below */ + else + cmp = ~mem; /* Expect value to be preserved */ + if (cmp) { + insn[i++] = BPF_JMP32_IMM(BPF_JEQ, R0, + (u32)cmp, 2); + insn[i++] = BPF_MOV32_IMM(R0, __LINE__); + insn[i++] = BPF_EXIT_INSN(); + insn[i++] = BPF_ALU64_IMM(BPF_RSH, R0, 32); + insn[i++] = BPF_JMP32_IMM(BPF_JEQ, R0, + cmp >> 32, 2); + insn[i++] = BPF_MOV32_IMM(R0, __LINE__); + insn[i++] = BPF_EXIT_INSN(); + } + + /* Check source register value */ + if (rs == R0 && op == BPF_CMPXCHG) + src = 0; /* Aliased with R0, checked above */ + else if (rs == rd && (op == BPF_CMPXCHG || + !(op & BPF_FETCH))) + src = 0; /* Aliased with rd, checked below */ + else if (op == BPF_CMPXCHG) + src = upd; /* Expect value to be preserved */ + else if (op & BPF_FETCH) + src = mem; /* Expect fetched value from mem */ + else /* no fetch */ + src = upd; /* Expect value to be preserved */ + if (src) { + insn[i++] = BPF_JMP32_IMM(BPF_JEQ, rs, + (u32)src, 2); + insn[i++] = BPF_MOV32_IMM(R0, __LINE__); + insn[i++] = BPF_EXIT_INSN(); + insn[i++] = BPF_ALU64_IMM(BPF_RSH, rs, 32); + insn[i++] = BPF_JMP32_IMM(BPF_JEQ, rs, + src >> 32, 2); + insn[i++] = BPF_MOV32_IMM(R0, __LINE__); + insn[i++] = BPF_EXIT_INSN(); + } + + /* Check destination register value */ + if (!(rd == R0 && op == BPF_CMPXCHG) && + !(rd == rs && (op & BPF_FETCH))) { + insn[i++] = BPF_JMP_REG(BPF_JEQ, rd, R10, 2); + insn[i++] = BPF_MOV32_IMM(R0, __LINE__); + insn[i++] = BPF_EXIT_INSN(); + } + + /* Check value in memory */ + if (rs != rd) { /* No aliasing */ + i += __bpf_ld_imm64(&insn[i], R1, res); + } else if (op == BPF_XCHG) { /* Aliased, XCHG */ + insn[i++] = BPF_MOV64_REG(R1, R10); + } else if (op == BPF_CMPXCHG) { /* Aliased, CMPXCHG */ + i += __bpf_ld_imm64(&insn[i], R1, mem); + } else { /* Aliased, ALU oper */ + i += __bpf_ld_imm64(&insn[i], R1, mem); + insn[i++] = BPF_ALU64_REG(BPF_OP(op), R1, R10); + } + + insn[i++] = BPF_LDX_MEM(width, R0, R10, -8); + if (width == BPF_DW) + insn[i++] = BPF_JMP_REG(BPF_JEQ, R0, R1, 2); + else /* width == BPF_W */ + insn[i++] = BPF_JMP32_REG(BPF_JEQ, R0, R1, 2); + insn[i++] = BPF_MOV32_IMM(R0, __LINE__); + insn[i++] = BPF_EXIT_INSN(); + } + } + + 
insn[i++] = BPF_MOV64_IMM(R0, 1); + insn[i++] = BPF_EXIT_INSN(); + + self->u.ptr.insns = insn; + self->u.ptr.len = i; + BUG_ON(i > len); + + return 0; +} + +/* 64-bit atomic register tests */ +static int bpf_fill_atomic64_add_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_atomic_reg_pairs(self, BPF_DW, BPF_ADD); +} + +static int bpf_fill_atomic64_and_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_atomic_reg_pairs(self, BPF_DW, BPF_AND); +} + +static int bpf_fill_atomic64_or_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_atomic_reg_pairs(self, BPF_DW, BPF_OR); +} + +static int bpf_fill_atomic64_xor_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_atomic_reg_pairs(self, BPF_DW, BPF_XOR); +} + +static int bpf_fill_atomic64_add_fetch_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_atomic_reg_pairs(self, BPF_DW, BPF_ADD | BPF_FETCH); +} + +static int bpf_fill_atomic64_and_fetch_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_atomic_reg_pairs(self, BPF_DW, BPF_AND | BPF_FETCH); +} + +static int bpf_fill_atomic64_or_fetch_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_atomic_reg_pairs(self, BPF_DW, BPF_OR | BPF_FETCH); +} + +static int bpf_fill_atomic64_xor_fetch_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_atomic_reg_pairs(self, BPF_DW, BPF_XOR | BPF_FETCH); +} + +static int bpf_fill_atomic64_xchg_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_atomic_reg_pairs(self, BPF_DW, BPF_XCHG); +} + +static int bpf_fill_atomic64_cmpxchg_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_atomic_reg_pairs(self, BPF_DW, BPF_CMPXCHG); +} + +/* 32-bit atomic register tests */ +static int bpf_fill_atomic32_add_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_atomic_reg_pairs(self, BPF_W, BPF_ADD); +} + +static int bpf_fill_atomic32_and_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_atomic_reg_pairs(self, BPF_W, BPF_AND); +} + +static int bpf_fill_atomic32_or_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_atomic_reg_pairs(self, BPF_W, BPF_OR); +} + +static int bpf_fill_atomic32_xor_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_atomic_reg_pairs(self, BPF_W, BPF_XOR); +} + +static int bpf_fill_atomic32_add_fetch_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_atomic_reg_pairs(self, BPF_W, BPF_ADD | BPF_FETCH); +} + +static int bpf_fill_atomic32_and_fetch_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_atomic_reg_pairs(self, BPF_W, BPF_AND | BPF_FETCH); +} + +static int bpf_fill_atomic32_or_fetch_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_atomic_reg_pairs(self, BPF_W, BPF_OR | BPF_FETCH); +} + +static int bpf_fill_atomic32_xor_fetch_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_atomic_reg_pairs(self, BPF_W, BPF_XOR | BPF_FETCH); +} + +static int bpf_fill_atomic32_xchg_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_atomic_reg_pairs(self, BPF_W, BPF_XCHG); +} + +static int bpf_fill_atomic32_cmpxchg_reg_pairs(struct bpf_test *self) +{ + return __bpf_fill_atomic_reg_pairs(self, BPF_W, BPF_CMPXCHG); +} + /* * Test the two-instruction 64-bit immediate load operation for all * power-of-two magnitudes of the immediate operand. 
For each MSB, a block @@ -11976,6 +12216,188 @@ static struct bpf_test tests[] = { { { 0, 1 } }, .fill_helper = bpf_fill_ld_imm64, }, + /* 64-bit ATOMIC register combinations */ + { + "ATOMIC_DW_ADD: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic64_add_reg_pairs, + .stack_depth = 8, + }, + { + "ATOMIC_DW_AND: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic64_and_reg_pairs, + .stack_depth = 8, + }, + { + "ATOMIC_DW_OR: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic64_or_reg_pairs, + .stack_depth = 8, + }, + { + "ATOMIC_DW_XOR: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic64_xor_reg_pairs, + .stack_depth = 8, + }, + { + "ATOMIC_DW_ADD_FETCH: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic64_add_fetch_reg_pairs, + .stack_depth = 8, + }, + { + "ATOMIC_DW_AND_FETCH: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic64_and_fetch_reg_pairs, + .stack_depth = 8, + }, + { + "ATOMIC_DW_OR_FETCH: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic64_or_fetch_reg_pairs, + .stack_depth = 8, + }, + { + "ATOMIC_DW_XOR_FETCH: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic64_xor_fetch_reg_pairs, + .stack_depth = 8, + }, + { + "ATOMIC_DW_XCHG: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic64_xchg_reg_pairs, + .stack_depth = 8, + }, + { + "ATOMIC_DW_CMPXCHG: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic64_cmpxchg_reg_pairs, + .stack_depth = 8, + }, + /* 32-bit ATOMIC register combinations */ + { + "ATOMIC_W_ADD: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic32_add_reg_pairs, + .stack_depth = 8, + }, + { + "ATOMIC_W_AND: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic32_and_reg_pairs, + .stack_depth = 8, + }, + { + "ATOMIC_W_OR: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic32_or_reg_pairs, + .stack_depth = 8, + }, + { + "ATOMIC_W_XOR: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic32_xor_reg_pairs, + .stack_depth = 8, + }, + { + "ATOMIC_W_ADD_FETCH: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic32_add_fetch_reg_pairs, + .stack_depth = 8, + }, + { + "ATOMIC_W_AND_FETCH: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic32_and_fetch_reg_pairs, + .stack_depth = 8, + }, + { + "ATOMIC_W_OR_FETCH: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic32_or_fetch_reg_pairs, + .stack_depth = 8, + }, + { + "ATOMIC_W_XOR_FETCH: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic32_xor_fetch_reg_pairs, + .stack_depth = 8, + }, + { + "ATOMIC_W_XCHG: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic32_xchg_reg_pairs, + .stack_depth = 8, + }, + { + "ATOMIC_W_CMPXCHG: register combinations", + { }, + INTERNAL, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_atomic32_cmpxchg_reg_pairs, 
+ .stack_depth = 8, + }, /* 64-bit ATOMIC magnitudes */ { "ATOMIC_DW_ADD: all operand magnitudes", From patchwork Fri Oct 1 13:03:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johan Almbladh X-Patchwork-Id: 12530633 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1C9CDC4321E for ; Fri, 1 Oct 2021 13:05:49 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 08E7E61AA3 for ; Fri, 1 Oct 2021 13:05:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1354497AbhJANG5 (ORCPT ); Fri, 1 Oct 2021 09:06:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34290 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1354147AbhJANGv (ORCPT ); Fri, 1 Oct 2021 09:06:51 -0400 Received: from mail-ed1-x52d.google.com (mail-ed1-x52d.google.com [IPv6:2a00:1450:4864:20::52d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 786A8C061775 for ; Fri, 1 Oct 2021 06:04:17 -0700 (PDT) Received: by mail-ed1-x52d.google.com with SMTP id g7so33804937edv.1 for ; Fri, 01 Oct 2021 06:04:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=anyfinetworks-com.20210112.gappssmtp.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=1dyBuNYzpYckrS6HbYo2UriEXk3AJ+jSdp87B70JRTg=; b=HNqKx+ZGamiEvqIRVIhBXRul+X41vt7idhdmzZ2tqvQXSDhu1Y/8QY9s/4Btj1xNA2 sMRVJZ09fxzvGcPcPVdmxa5ts//bpPWe1x0PX6dTiIqvLxv9Y045HTJuqxjU53hopRmF NRfRgu6VCWnVI7DcTv6Q2t8qc3G/9Y8jTYy0OzHzBXj2Nt+rWOcv96n3qIXxf0JECfkB QMVkWIz60UFkwCBNXcvLPG1s/e9CZnEjt+lzFqQYo3g3Z+AgA9q/jwcqJE+jbrP+4iPI tJA3YyyQEl/CkVIkszb3/fAliu5P/YK1pDithnVEDtVMvbb3TDpSmIKhIK8qej+w4Unk HhIw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=1dyBuNYzpYckrS6HbYo2UriEXk3AJ+jSdp87B70JRTg=; b=DbHT4ODW7b280e80xQ17+gDgBBUR5fT29awETQojKQVdjbdk4u7Mnf6hl1WWSFEL2E E2ECv8x82C+i9y05ln9UrvsJUddLDRFtjzr5fLel4szT1Fx/5vYv7cJqgLMGoD6n6/dN t72HnbW9O4IcwgsYzFi6vBvWRGjUUY5tlwSJjg3Sw0YUaQ1pf+cZ14Uiige+bT5xzm3q suT4N2l6uNDrzHVI6T6lLmW7VdDmAh1ulRtEpAj95Scdt9ZmvqgopR1NDbis4+hWZA6s BezCjb+GxGAK/alK881WNLWl652nBz2YOo7LrNv1Dn0BcPNhLeKirG8sAN7kE0CNOCEs 8DFw== X-Gm-Message-State: AOAM532jFbQB3mSZcAh8vhtZfH7Z9KVi5RIxI2erK/qEE5gJjz9tMVBo 8BlLYRi/p6unH4w8Trzvxc3Ssw== X-Google-Smtp-Source: ABdhPJzUUJNfhTVTRaRNPEHw3vr+IXFQKIy7lZ1UXp50ZegUvxV75vx2AXRWvT6bjkHNGrJyIi9/dQ== X-Received: by 2002:a17:906:318b:: with SMTP id 11mr6364683ejy.493.1633093448772; Fri, 01 Oct 2021 06:04:08 -0700 (PDT) Received: from anpc2.lan (static-213-115-136-2.sme.telenor.se. 
[213.115.136.2]) by smtp.gmail.com with ESMTPSA id p22sm2920279ejl.90.2021.10.01.06.04.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 01 Oct 2021 06:04:08 -0700 (PDT) From: Johan Almbladh To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com, paul@cilium.io, yangtiezhu@loongson.cn, netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh Subject: [PATCH bpf-next 09/10] bpf/tests: Add test of ALU shifts with operand register aliasing Date: Fri, 1 Oct 2021 15:03:47 +0200 Message-Id: <20211001130348.3670534-10-johan.almbladh@anyfinetworks.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211001130348.3670534-1-johan.almbladh@anyfinetworks.com> References: <20211001130348.3670534-1-johan.almbladh@anyfinetworks.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net This patch adds a tests of ALU32 and ALU64 LSH/RSH/ARSH operations for the case when the two operands are the same register. Mainly intended to test JITs that implement ALU64 shifts with 32-bit CPU instructions. Also renamed related helper functions for consistency with the new tests. Signed-off-by: Johan Almbladh --- lib/test_bpf.c | 162 +++++++++++++++++++++++++++++++++++++++++++++---- 1 file changed, 149 insertions(+), 13 deletions(-) diff --git a/lib/test_bpf.c b/lib/test_bpf.c index 40db4cee4f51..dfcbdff714b6 100644 --- a/lib/test_bpf.c +++ b/lib/test_bpf.c @@ -660,37 +660,37 @@ static int __bpf_fill_alu_shift(struct bpf_test *self, u8 op, self->u.ptr.insns = insn; self->u.ptr.len = len; - BUG_ON(i > len); + BUG_ON(i != len); return 0; } -static int bpf_fill_alu_lsh_imm(struct bpf_test *self) +static int bpf_fill_alu64_lsh_imm(struct bpf_test *self) { return __bpf_fill_alu_shift(self, BPF_LSH, BPF_K, false); } -static int bpf_fill_alu_rsh_imm(struct bpf_test *self) +static int bpf_fill_alu64_rsh_imm(struct bpf_test *self) { return __bpf_fill_alu_shift(self, BPF_RSH, BPF_K, false); } -static int bpf_fill_alu_arsh_imm(struct bpf_test *self) +static int bpf_fill_alu64_arsh_imm(struct bpf_test *self) { return __bpf_fill_alu_shift(self, BPF_ARSH, BPF_K, false); } -static int bpf_fill_alu_lsh_reg(struct bpf_test *self) +static int bpf_fill_alu64_lsh_reg(struct bpf_test *self) { return __bpf_fill_alu_shift(self, BPF_LSH, BPF_X, false); } -static int bpf_fill_alu_rsh_reg(struct bpf_test *self) +static int bpf_fill_alu64_rsh_reg(struct bpf_test *self) { return __bpf_fill_alu_shift(self, BPF_RSH, BPF_X, false); } -static int bpf_fill_alu_arsh_reg(struct bpf_test *self) +static int bpf_fill_alu64_arsh_reg(struct bpf_test *self) { return __bpf_fill_alu_shift(self, BPF_ARSH, BPF_X, false); } @@ -725,6 +725,86 @@ static int bpf_fill_alu32_arsh_reg(struct bpf_test *self) return __bpf_fill_alu_shift(self, BPF_ARSH, BPF_X, true); } +/* + * Test an ALU register shift operation for all valid shift values + * for the case when the source and destination are the same. + */ +static int __bpf_fill_alu_shift_same_reg(struct bpf_test *self, u8 op, + bool alu32) +{ + int bits = alu32 ? 
32 : 64; + int len = 3 + 6 * bits; + struct bpf_insn *insn; + int i = 0; + u64 val; + + insn = kmalloc_array(len, sizeof(*insn), GFP_KERNEL); + if (!insn) + return -ENOMEM; + + insn[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 0); + + for (val = 0; val < bits; val++) { + u64 res; + + /* Perform operation */ + insn[i++] = BPF_ALU64_IMM(BPF_MOV, R1, val); + if (alu32) + insn[i++] = BPF_ALU32_REG(op, R1, R1); + else + insn[i++] = BPF_ALU64_REG(op, R1, R1); + + /* Compute the reference result */ + __bpf_alu_result(&res, val, val, op); + if (alu32) + res = (u32)res; + i += __bpf_ld_imm64(&insn[i], R2, res); + + /* Check the actual result */ + insn[i++] = BPF_JMP_REG(BPF_JEQ, R1, R2, 1); + insn[i++] = BPF_EXIT_INSN(); + } + + insn[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 1); + insn[i++] = BPF_EXIT_INSN(); + + self->u.ptr.insns = insn; + self->u.ptr.len = len; + BUG_ON(i != len); + + return 0; +} + +static int bpf_fill_alu64_lsh_same_reg(struct bpf_test *self) +{ + return __bpf_fill_alu_shift_same_reg(self, BPF_LSH, false); +} + +static int bpf_fill_alu64_rsh_same_reg(struct bpf_test *self) +{ + return __bpf_fill_alu_shift_same_reg(self, BPF_RSH, false); +} + +static int bpf_fill_alu64_arsh_same_reg(struct bpf_test *self) +{ + return __bpf_fill_alu_shift_same_reg(self, BPF_ARSH, false); +} + +static int bpf_fill_alu32_lsh_same_reg(struct bpf_test *self) +{ + return __bpf_fill_alu_shift_same_reg(self, BPF_LSH, true); +} + +static int bpf_fill_alu32_rsh_same_reg(struct bpf_test *self) +{ + return __bpf_fill_alu_shift_same_reg(self, BPF_RSH, true); +} + +static int bpf_fill_alu32_arsh_same_reg(struct bpf_test *self) +{ + return __bpf_fill_alu_shift_same_reg(self, BPF_ARSH, true); +} + /* * Common operand pattern generator for exhaustive power-of-two magnitudes * tests. The block size parameters can be adjusted to increase/reduce the @@ -11788,7 +11868,7 @@ static struct bpf_test tests[] = { INTERNAL | FLAG_NO_DATA, { }, { { 0, 1 } }, - .fill_helper = bpf_fill_alu_lsh_imm, + .fill_helper = bpf_fill_alu64_lsh_imm, }, { "ALU64_RSH_K: all shift values", @@ -11796,7 +11876,7 @@ static struct bpf_test tests[] = { INTERNAL | FLAG_NO_DATA, { }, { { 0, 1 } }, - .fill_helper = bpf_fill_alu_rsh_imm, + .fill_helper = bpf_fill_alu64_rsh_imm, }, { "ALU64_ARSH_K: all shift values", @@ -11804,7 +11884,7 @@ static struct bpf_test tests[] = { INTERNAL | FLAG_NO_DATA, { }, { { 0, 1 } }, - .fill_helper = bpf_fill_alu_arsh_imm, + .fill_helper = bpf_fill_alu64_arsh_imm, }, { "ALU64_LSH_X: all shift values", @@ -11812,7 +11892,7 @@ static struct bpf_test tests[] = { INTERNAL | FLAG_NO_DATA, { }, { { 0, 1 } }, - .fill_helper = bpf_fill_alu_lsh_reg, + .fill_helper = bpf_fill_alu64_lsh_reg, }, { "ALU64_RSH_X: all shift values", @@ -11820,7 +11900,7 @@ static struct bpf_test tests[] = { INTERNAL | FLAG_NO_DATA, { }, { { 0, 1 } }, - .fill_helper = bpf_fill_alu_rsh_reg, + .fill_helper = bpf_fill_alu64_rsh_reg, }, { "ALU64_ARSH_X: all shift values", @@ -11828,7 +11908,7 @@ static struct bpf_test tests[] = { INTERNAL | FLAG_NO_DATA, { }, { { 0, 1 } }, - .fill_helper = bpf_fill_alu_arsh_reg, + .fill_helper = bpf_fill_alu64_arsh_reg, }, /* Exhaustive test of ALU32 shift operations */ { @@ -11879,6 +11959,62 @@ static struct bpf_test tests[] = { { { 0, 1 } }, .fill_helper = bpf_fill_alu32_arsh_reg, }, + /* + * Exhaustive test of ALU64 shift operations when + * source and destination register are the same. 
+ */ + { + "ALU64_LSH_X: all shift values with the same register", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_lsh_same_reg, + }, + { + "ALU64_RSH_X: all shift values with the same register", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_rsh_same_reg, + }, + { + "ALU64_ARSH_X: all shift values with the same register", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_arsh_same_reg, + }, + /* + * Exhaustive test of ALU32 shift operations when + * source and destination register are the same. + */ + { + "ALU32_LSH_X: all shift values with the same register", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_lsh_same_reg, + }, + { + "ALU32_RSH_X: all shift values with the same register", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_rsh_same_reg, + }, + { + "ALU32_ARSH_X: all shift values with the same register", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_arsh_same_reg, + }, /* ALU64 immediate magnitudes */ { "ALU64_MOV_K: all immediate value magnitudes", From patchwork Fri Oct 1 13:03:48 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johan Almbladh X-Patchwork-Id: 12530627 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 49102C433FE for ; Fri, 1 Oct 2021 13:05:15 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 32F1C61A8F for ; Fri, 1 Oct 2021 13:05:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1354490AbhJANG5 (ORCPT ); Fri, 1 Oct 2021 09:06:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34286 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1354134AbhJANGv (ORCPT ); Fri, 1 Oct 2021 09:06:51 -0400 Received: from mail-ed1-x52c.google.com (mail-ed1-x52c.google.com [IPv6:2a00:1450:4864:20::52c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 54529C06178C for ; Fri, 1 Oct 2021 06:04:13 -0700 (PDT) Received: by mail-ed1-x52c.google.com with SMTP id bd28so34456017edb.9 for ; Fri, 01 Oct 2021 06:04:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=anyfinetworks-com.20210112.gappssmtp.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=5zYB8mqtE+ygJzihZK4VFYuXUtlev0s60JUhGV9c/Wg=; b=erH8r8HMaLwfNhcglVlsRt48YRykCk/HyR8bgmz5zhINC7Cvrz9kUo71KZD7CQZm8p CKDqPvJBgoMLOtLsCL3p75pUS5T63GkvrOD+jKAISfhpWZ3GFPFPAdjaIr/NEQ+bfy92 JKoBBFVy7sDRsfneNH9k1udTEV0Jae2hKmUMm8D1p0toxKLLtyuYv1OkqzU2zWZcpMcv 7GKC9yKN95uUeroae2ZRgu7Yk8pg6FxtfEmSPCF4Ibc4CREwEV+y62FlBhbAPt+y1CKv jW4eyRq0GF7wAF3ANTlo28+I7o1Za/IlJT9YvW5ZR36QZBSSgUUPBBCRu2PjMIFG0duz tgog== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=5zYB8mqtE+ygJzihZK4VFYuXUtlev0s60JUhGV9c/Wg=; b=VLxyRNBl1CmEOOYps4CL/RUKeLGzoNbKRvLeaOTbvIeqeKIYEm+Neqr9QT/aBNpu1L 
5+anIjwAzpWYmsGqiuDe6lmlsuX3CoSYY5MaY8y7zDf2kXk53yMaX2edkcmC6ipIRcTv xQF08DBy8AM+ti1CJekrlZB8ypXtLDKf40ZHraXNDa9QrHeKhww9o02sO21JZjxm3gEf jCfuGBWHDeJsIAtnoqgStCU4A0ACiS8NM0xvjLvNb1tQ0r1RLg+RF6oVuK7ZIi4kZ8D/ WwLy0LUpG8v68V1R2y+448sIipYAUrcNbSEZB5zJF4/yrlspX3AGuOZymCx8K7zDemCM BHTg== X-Gm-Message-State: AOAM532x4yDkDO1EpHysKcUtis9jfkQab58TegN78S4jjC3cHtXs8Zlx cjcSXOwMr8RY6Xbmo/AUZreIXQ== X-Google-Smtp-Source: ABdhPJzurTk3eXztmQhe69eT45MuApqyv5juzkQJqOdeF+jvXaaptb5Cp5FYqlWek6bVH/OhOxobTg== X-Received: by 2002:a17:906:36d6:: with SMTP id b22mr5978897ejc.387.1633093451801; Fri, 01 Oct 2021 06:04:11 -0700 (PDT) Received: from anpc2.lan (static-213-115-136-2.sme.telenor.se. [213.115.136.2]) by smtp.gmail.com with ESMTPSA id p22sm2920279ejl.90.2021.10.01.06.04.10 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 01 Oct 2021 06:04:11 -0700 (PDT) From: Johan Almbladh To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com, paul@cilium.io, yangtiezhu@loongson.cn, netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh Subject: [PATCH bpf-next 10/10] bpf/tests: Add test of LDX_MEM with operand aliasing Date: Fri, 1 Oct 2021 15:03:48 +0200 Message-Id: <20211001130348.3670534-11-johan.almbladh@anyfinetworks.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211001130348.3670534-1-johan.almbladh@anyfinetworks.com> References: <20211001130348.3670534-1-johan.almbladh@anyfinetworks.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net This patch adds a set of tests of BPF_LDX_MEM where both operand registers are the same register. Mainly testing 32-bit JITs that may load a 64-bit value in two 32-bit loads, and must not overwrite the address register. 
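To illustrate the failure mode these tests target, here is a minimal user-space sketch. It is not taken from any in-tree JIT; the names reg64, load32, store64 and ldx_dw_* are made up for illustration. It models a 32-bit target where a 64-bit BPF register lives in a lo/hi pair of 32-bit halves, and shows how lowering a BPF_DW load into two 32-bit loads can corrupt the result when the destination pair aliases the address register, while reading the address into a scratch register first avoids the problem.

#include <stdint.h>
#include <stdio.h>

static uint8_t mem[16];                 /* fake stack/frame memory */

struct reg64 { uint32_t lo, hi; };      /* one 64-bit BPF register, split in two */

static uint32_t load32(uint32_t off)    /* little-endian 32-bit load from mem[] */
{
        return mem[off] | mem[off + 1] << 8 |
               mem[off + 2] << 16 | (uint32_t)mem[off + 3] << 24;
}

static void store64(uint32_t off, uint64_t v)   /* little-endian 64-bit store */
{
        int i;

        for (i = 0; i < 8; i++)
                mem[off + i] = v >> (8 * i);
}

/* Broken lowering: the first load overwrites the address that the
 * second load still needs whenever rd and rs are the same pair. */
static void ldx_dw_broken(struct reg64 *rd, struct reg64 *rs)
{
        rd->lo = load32(rs->lo + 0);
        rd->hi = load32(rs->lo + 4);
}

/* Safe lowering: read the address into a scratch register first. */
static void ldx_dw_safe(struct reg64 *rd, struct reg64 *rs)
{
        uint32_t addr = rs->lo;

        rd->lo = load32(addr + 0);
        rd->hi = load32(addr + 4);
}

int main(void)
{
        struct reg64 r;

        /* 64-bit value at offset 0; its low word (8) is itself a valid
         * offset, so the broken variant reads the high word from
         * offset 12 instead of offset 4. */
        store64(0, 0xdeadbeef00000008ULL);
        store64(8, 0x1122334455667788ULL);

        r.lo = 0; r.hi = 0;             /* r holds the load address */
        ldx_dw_broken(&r, &r);
        printf("broken: %08x%08x\n", r.hi, r.lo);  /* 1122334400000008 */

        r.lo = 0; r.hi = 0;
        ldx_dw_safe(&r, &r);
        printf("safe:   %08x%08x\n", r.hi, r.lo);  /* deadbeef00000008 */

        return 0;
}

A JIT with the broken ordering would be caught by the LDX_MEM_DW aliasing test added below, while a lowering that preserves the address, as in the scratch-register variant, passes.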
Signed-off-by: Johan Almbladh --- lib/test_bpf.c | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 58 insertions(+) diff --git a/lib/test_bpf.c b/lib/test_bpf.c index dfcbdff714b6..b9fc330fc83b 100644 --- a/lib/test_bpf.c +++ b/lib/test_bpf.c @@ -11133,6 +11133,64 @@ static struct bpf_test tests[] = { {}, { { 0, 2 } }, }, + /* BPF_LDX_MEM with operand aliasing */ + { + "LDX_MEM_B: operand register aliasing", + .u.insns_int = { + BPF_ST_MEM(BPF_B, R10, -8, 123), + BPF_MOV64_REG(R0, R10), + BPF_LDX_MEM(BPF_B, R0, R0, -8), + BPF_EXIT_INSN(), + }, + INTERNAL, + { }, + { { 0, 123 } }, + .stack_depth = 8, + }, + { + "LDX_MEM_H: operand register aliasing", + .u.insns_int = { + BPF_ST_MEM(BPF_H, R10, -8, 12345), + BPF_MOV64_REG(R0, R10), + BPF_LDX_MEM(BPF_H, R0, R0, -8), + BPF_EXIT_INSN(), + }, + INTERNAL, + { }, + { { 0, 12345 } }, + .stack_depth = 8, + }, + { + "LDX_MEM_W: operand register aliasing", + .u.insns_int = { + BPF_ST_MEM(BPF_W, R10, -8, 123456789), + BPF_MOV64_REG(R0, R10), + BPF_LDX_MEM(BPF_W, R0, R0, -8), + BPF_EXIT_INSN(), + }, + INTERNAL, + { }, + { { 0, 123456789 } }, + .stack_depth = 8, + }, + { + "LDX_MEM_DW: operand register aliasing", + .u.insns_int = { + BPF_LD_IMM64(R1, 0x123456789abcdefULL), + BPF_STX_MEM(BPF_DW, R10, R1, -8), + BPF_MOV64_REG(R0, R10), + BPF_LDX_MEM(BPF_DW, R0, R0, -8), + BPF_ALU64_REG(BPF_SUB, R0, R1), + BPF_MOV64_REG(R1, R0), + BPF_ALU64_IMM(BPF_RSH, R1, 32), + BPF_ALU64_REG(BPF_OR, R0, R1), + BPF_EXIT_INSN(), + }, + INTERNAL, + { }, + { { 0, 0 } }, + .stack_depth = 8, + }, /* * Register (non-)clobbering tests for the case where a JIT implements * complex ALU or ATOMIC operations via function calls. If so, the