From patchwork Mon Jul 12 00:34:47 2021
From: Tony Ambardar
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Thomas Bogendoerfer, Paul Burton
Cc: Tony Ambardar, netdev@vger.kernel.org, bpf@vger.kernel.org, linux-mips@vger.kernel.org, Johan Almbladh, Hassan Naveed, David Daney, Luke Nelson, Serge Semin, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend, KP Singh
Subject: [RFC PATCH bpf-next v1 01/14] MIPS: eBPF: support BPF_TAIL_CALL in JIT static analysis
Date: Sun, 11 Jul 2021 17:34:47 -0700

Add support in reg_val_propagate_range() for BPF_TAIL_CALL, fixing many
kernel log WARNINGs ("Unhandled BPF_JMP case") seen during JIT testing.
Treat BPF_TAIL_CALL like a NOP, falling through as if the tail call
failed.

Fixes: b6bd53f9c4e8 ("MIPS: Add missing file for eBPF JIT.")
Signed-off-by: Tony Ambardar
---
 arch/mips/net/ebpf_jit.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c
index 939dd06764bc..ed47a10d533f 100644
--- a/arch/mips/net/ebpf_jit.c
+++ b/arch/mips/net/ebpf_jit.c
@@ -1714,6 +1714,9 @@ static int reg_val_propagate_range(struct jit_ctx *ctx, u64 initial_rvt,
 		for (reg = BPF_REG_0; reg <= BPF_REG_5; reg++)
 			set_reg_val_type(&exit_rvt, reg, REG_64BIT);

+		rvt[idx] |= RVT_DONE;
+		break;
+	case BPF_TAIL_CALL:
 		rvt[idx] |= RVT_DONE;
 		break;
 	default:

From patchwork Mon Jul 12 00:34:48 2021
From: Tony Ambardar
Subject: [RFC PATCH bpf-next v1 02/14] MIPS: eBPF: mask 32-bit index for tail calls
Date: Sun, 11 Jul 2021 17:34:48 -0700
Message-Id: <306525292b0b4959873301b8e62b10c8d4d60ff3.1625970384.git.Tony.Ambardar@gmail.com>

The program array index for tail-calls should be 32-bit, so zero-extend
to sanitize the value.
This fixes failures seen for test_verifier test:

852/p runtime/jit: pass > 32bit index to tail_call FAIL retval 2 != 42

Fixes: b6bd53f9c4e8 ("MIPS: Add missing file for eBPF JIT.")
Signed-off-by: Tony Ambardar
---
 arch/mips/net/ebpf_jit.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c
index ed47a10d533f..64f76c7191b1 100644
--- a/arch/mips/net/ebpf_jit.c
+++ b/arch/mips/net/ebpf_jit.c
@@ -611,6 +611,8 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx, int this_idx)
 	 * if (index >= array->map.max_entries)
 	 *	goto out;
 	 */
+	/* Mask index as 32-bit */
+	emit_instr(ctx, dinsu, MIPS_R_A2, MIPS_R_ZERO, 32, 32);
 	off = offsetof(struct bpf_array, map.max_entries);
 	emit_instr(ctx, lwu, MIPS_R_T5, off, MIPS_R_A1);
 	emit_instr(ctx, sltu, MIPS_R_AT, MIPS_R_T5, MIPS_R_A2);

From patchwork Mon Jul 12 00:34:49 2021
From: Tony Ambardar
Subject: [RFC PATCH bpf-next v1 03/14] MIPS: eBPF: fix BPF_ALU|ARSH handling in JIT static analysis
Date: Sun, 11 Jul 2021 17:34:49 -0700

Update reg_val_propagate_range() to add the missing case for
BPF_ALU|ARSH, otherwise zero-extension breaks for this opcode.
Resolves failures for test_verifier tests:

963/p arsh32 reg zero extend check FAIL retval -1 != 0
964/u arsh32 imm zero extend check FAIL retval -1 != 0
964/p arsh32 imm zero extend check FAIL retval -1 != 0

Fixes: b6bd53f9c4e8 ("MIPS: Add missing file for eBPF JIT.")
Signed-off-by: Tony Ambardar
---
 arch/mips/net/ebpf_jit.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c
index 64f76c7191b1..67b45d502435 100644
--- a/arch/mips/net/ebpf_jit.c
+++ b/arch/mips/net/ebpf_jit.c
@@ -1587,6 +1587,7 @@ static int reg_val_propagate_range(struct jit_ctx *ctx, u64 initial_rvt,
 	case BPF_AND:
 	case BPF_LSH:
 	case BPF_RSH:
+	case BPF_ARSH:
 	case BPF_NEG:
 	case BPF_MOD:
 	case BPF_XOR:

From patchwork Mon Jul 12 00:34:50 2021
From: Tony Ambardar
Subject: [RFC PATCH bpf-next v1 04/14] MIPS: eBPF: support BPF_JMP32 in JIT static analysis
Date: Sun, 11 Jul 2021 17:34:50 -0700

While the MIPS64 JIT rejects programs with JMP32 insns, it still
performs initial static analysis. Add support in
reg_val_propagate_range() for BPF_JMP32, fixing kernel log WARNINGs
("Unhandled BPF_JMP case") seen during JIT testing. Handle code
BPF_JMP32 the same as BPF_JMP.
Fixes: 092ed0968bb6 ("bpf: verifier support JMP32")
Signed-off-by: Tony Ambardar
---
 arch/mips/net/ebpf_jit.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c
index 67b45d502435..ad0e54a842fc 100644
--- a/arch/mips/net/ebpf_jit.c
+++ b/arch/mips/net/ebpf_jit.c
@@ -1683,6 +1683,7 @@ static int reg_val_propagate_range(struct jit_ctx *ctx, u64 initial_rvt,
 		rvt[idx] |= RVT_DONE;
 		break;
 	case BPF_JMP:
+	case BPF_JMP32:
 		switch (BPF_OP(insn->code)) {
 		case BPF_EXIT:
 			rvt[idx] = RVT_DONE | exit_rvt;

From patchwork Mon Jul 12 00:34:51 2021
From: Tony Ambardar
Subject: [RFC PATCH bpf-next v1 05/14] MIPS: eBPF: fix system hang with verifier dead-code patching
Date: Sun, 11 Jul 2021 17:34:51 -0700
Message-Id: <9d192df017fd2fb79030477508e7de88f21c6b4e.1625970384.git.Tony.Ambardar@gmail.com>

Commit 2a5418a13fcf changed verifier dead code handling from patching
with NOPs to using a loop trap made with BPF_JMP_IMM(BPF_JA, 0, 0, -1).
This confuses the JIT's static analysis, which follows the loop assuming
the verifier passed safe code, and results in a system hang and RCU
stall. Update reg_val_propagate_range() to fall through these trap
insns.

Trigger the bug using the test_verifier test "check known subreg with
unknown reg".

Fixes: 2a5418a13fcf ("bpf: improve dead code sanitizing")
Signed-off-by: Tony Ambardar
---
 arch/mips/net/ebpf_jit.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c
index ad0e54a842fc..e60a089ee3b3 100644
--- a/arch/mips/net/ebpf_jit.c
+++ b/arch/mips/net/ebpf_jit.c
@@ -1691,6 +1691,14 @@ static int reg_val_propagate_range(struct jit_ctx *ctx, u64 initial_rvt,
 			return idx;
 		case BPF_JA:
 			rvt[idx] |= RVT_DONE;
+			/*
+			 * Verifier dead code patching can use
+			 * infinite-loop traps, causing hangs and
+			 * RCU stalls here. Treat traps as nops
+			 * if detected and fall through.
+			 */
+			if (insn->off == -1)
+				break;
 			idx += insn->off;
 			break;
 		case BPF_JEQ:

From patchwork Mon Jul 12 00:34:52 2021
b=YH6QeaxEeMovxNg/MDaDzsL3I+L57mXEUwztNZX8OnzgEZhXzP241cBepJ5EeQ55Xs zsOCh7tMinRD+QNaddog6ArwHMRPUDW85nyjJ3llkDWg6nr/5MTSVhSdJf5VdevLlc8Q tVEZhpDEvW8Z0n1w831cYhNSeiZoYUjyPLegsNluVnL9gKxsLFiK3Kml+Rlzy7drm2ED AIMeL9rSLxWuEAG0lLLCKIo2rvevYxo85u0zqmFqWBG5KCswDxitv2TfRbOLdxTIyTPa fROJ5/VsyZ601lue5ruv6s+prHsS31oNg2VcAqCnmL3l/p4jkMF0AUANVNjJIp8eGcVS +c7g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=B0unYZuIjXroRmLeSdEAWvOsYYAcsr17oWsLSXDQ15w=; b=UjL04q2ec318qtZhnkUbjGZXH6v9+EHy6otBrCovQCAJ7Vvo3I2BhGEz5VhHSIQ6Of koyBBqNeo391tmh+gx1Vi51vwxQX2rftHQCc8UzcVYtZqVcJww/u9CSj+peRvVQ22cEc tH6e1rSwQhHuotNAHVpyQVgxXWEAJthOi2CGMCHtfYnFVl8Z7YGBiyfobvgl7nWHjD5l ESdA4yZg26ZFQQRK6DnuyWYCqDs3zdDio+wZJl9XQVsTamlRBT+uT9pB6T3SFKzsl4MH RcAfW34+Rj/mrG3dBGW3zUy8jUcoruTtrY5pfOVtbaoh5E4f741l0Wz01tPrNGdjM0rH VIDA== X-Gm-Message-State: AOAM532nI/90/nEsl5IT+XV5XvMdr1mlb5DFz9ab6t0/WC4J3sDFfVWL ay6p3P63Qvw70DBpAKO591Q= X-Google-Smtp-Source: ABdhPJxU1giKJp7HNS+yHRwlhR1DVja/yTEMojfwjkOXVlVXRYZ8+yh+rfP5cXpXdwqgcMhcOalWOQ== X-Received: by 2002:aa7:8d01:0:b029:311:47d0:f169 with SMTP id j1-20020aa78d010000b029031147d0f169mr36795986pfe.78.1626050134450; Sun, 11 Jul 2021 17:35:34 -0700 (PDT) Received: from localhost.localdomain ([2001:470:e92d:10:4039:6c0e:1168:cc74]) by smtp.gmail.com with ESMTPSA id c24sm15588447pgj.11.2021.07.11.17.35.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 11 Jul 2021 17:35:34 -0700 (PDT) From: Tony Ambardar X-Google-Original-From: Tony Ambardar To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Thomas Bogendoerfer , Paul Burton Cc: Tony Ambardar , netdev@vger.kernel.org, bpf@vger.kernel.org, linux-mips@vger.kernel.org, Johan Almbladh , Hassan Naveed , David Daney , Luke Nelson , Serge Semin , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh Subject: [RFC PATCH 
bpf-next v1 06/14] MIPS: eBPF: fix JIT static analysis hang with bounded loops Date: Sun, 11 Jul 2021 17:34:52 -0700 Message-Id: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC Support for bounded loops allowed the verifier to output backward jumps such as BPF_JMP_A(-4). These trap the JIT's static analysis in a loop, resulting in a system hang and eventual RCU stall. Fix by updating reg_val_propagate_range() to skip backward jumps when in fallthrough mode and if the jump target has been visited already. Trigger the bug using the test_verifier test "bounded loop that jumps out rather than in". Fixes: 2589726d12a1 ("bpf: introduce bounded loops") Signed-off-by: Tony Ambardar --- arch/mips/net/ebpf_jit.c | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c index e60a089ee3b3..4f641dcb2031 100644 --- a/arch/mips/net/ebpf_jit.c +++ b/arch/mips/net/ebpf_jit.c @@ -1690,6 +1690,10 @@ static int reg_val_propagate_range(struct jit_ctx *ctx, u64 initial_rvt, rvt[prog->len] = exit_rvt; return idx; case BPF_JA: + { + int tgt = idx + 1 + insn->off; + bool visited = (rvt[tgt] & RVT_FALL_THROUGH); + rvt[idx] |= RVT_DONE; /* * Verifier dead code patching can use @@ -1699,8 +1703,16 @@ static int reg_val_propagate_range(struct jit_ctx *ctx, u64 initial_rvt, */ if (insn->off == -1) break; + /* + * Bounded loops cause the same issues in + * fallthrough mode; follow only if jump + * target is unvisited to mitigate. 
+ */ + if (insn->off < 0 && !follow_taken && visited) + break; idx += insn->off; break; + } case BPF_JEQ: case BPF_JGT: case BPF_JGE: From patchwork Mon Jul 12 00:34:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Ambardar X-Patchwork-Id: 12369493 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BF900C11F70 for ; Mon, 12 Jul 2021 00:35:43 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A199D6100C for ; Mon, 12 Jul 2021 00:35:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232491AbhGLAi3 (ORCPT ); Sun, 11 Jul 2021 20:38:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33314 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232135AbhGLAiY (ORCPT ); Sun, 11 Jul 2021 20:38:24 -0400 Received: from mail-pg1-x531.google.com (mail-pg1-x531.google.com [IPv6:2607:f8b0:4864:20::531]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D69ECC0613E8; Sun, 11 Jul 2021 17:35:35 -0700 (PDT) Received: by mail-pg1-x531.google.com with SMTP id h4so16473264pgp.5; Sun, 11 Jul 2021 17:35:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; 
From patchwork Mon Jul 12 00:34:53 2021
X-Patchwork-Id: 12369493
From: Tony Ambardar
Subject: [RFC PATCH bpf-next v1 07/14] MIPS: eBPF: fix MOD64 insn on R6 ISA
Date: Sun, 11 Jul 2021 17:34:53 -0700

The BPF_ALU64 | BPF_MOD implementation is broken on MIPS64R6 due to use
of a 32-bit "modu" insn, as shown by the test_verifier failures:

  455/p MOD64 overflow, check 1 FAIL retval 0 != 1 (run 1/1)
  456/p MOD64 overflow, check 2 FAIL retval 0 != 1 (run 1/1)

Resolve by using the 64-bit "dmodu" instead.

Fixes: 6c2c8a188868 ("MIPS: eBPF: Provide eBPF support for MIPS64R6")
Signed-off-by: Tony Ambardar
---
 arch/mips/net/ebpf_jit.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c
index 4f641dcb2031..e8c403c6cfa3 100644
--- a/arch/mips/net/ebpf_jit.c
+++ b/arch/mips/net/ebpf_jit.c
@@ -800,7 +800,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 			if (bpf_op == BPF_DIV)
 				emit_instr(ctx, ddivu_r6, dst, dst, MIPS_R_AT);
 			else
-				emit_instr(ctx, modu, dst, dst, MIPS_R_AT);
+				emit_instr(ctx, dmodu, dst, dst, MIPS_R_AT);
 			break;
 		}
 		emit_instr(ctx, ddivu, dst, MIPS_R_AT);
@@ -882,7 +882,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 				emit_instr(ctx, ddivu_r6, dst, dst, src);
 			else
-				emit_instr(ctx, modu, dst, dst, src);
+				emit_instr(ctx, dmodu, dst, dst, src);
 			break;
 		}
 		emit_instr(ctx, ddivu, dst, src);
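The semantic difference can be modeled in plain C (the helpers below are illustrative names, not JIT code): a 32-bit "modu" only sees the low words of its operands, which is exactly what the MOD64 overflow tests catch.

```c
#include <assert.h>
#include <stdint.h>

/* What the buggy 32-bit "modu" effectively computes for a 64-bit mod */
static uint64_t mod_via_modu(uint64_t a, uint64_t b)
{
	return (uint32_t)a % (uint32_t)b;
}

/* What the 64-bit "dmodu" computes */
static uint64_t mod_via_dmodu(uint64_t a, uint64_t b)
{
	return a % b;
}
```

For a dividend above 2^32, e.g. 0x200000003 % 3, the truncated form returns 0 while the correct 64-bit result is 2.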
From patchwork Mon Jul 12 00:34:54 2021
X-Patchwork-Id: 12369481
From: Tony Ambardar
Subject: [RFC PATCH bpf-next v1 08/14] MIPS: eBPF: support long jump for BPF_JMP|EXIT
Date: Sun, 11 Jul 2021 17:34:54 -0700

Existing JIT code supports only short (18-bit) branches for BPF EXIT,
and results in some tests from module
test_bpf not being jited. Update code to fall back to long (28-bit)
jumps if short branches are insufficient.

Before:
  test_bpf: #296 BPF_MAXINSNS: exec all MSH jited:0 1556004 PASS
  test_bpf: #297 BPF_MAXINSNS: ld_abs+get_processor_id jited:0 824957 PASS
  test_bpf: Summary: 378 PASSED, 0 FAILED, [364/366 JIT'ed]

After:
  test_bpf: #296 BPF_MAXINSNS: exec all MSH jited:1 221998 PASS
  test_bpf: #297 BPF_MAXINSNS: ld_abs+get_processor_id jited:1 490507 PASS
  test_bpf: Summary: 378 PASSED, 0 FAILED, [366/366 JIT'ed]

Fixes: b6bd53f9c4e8 ("MIPS: Add missing file for eBPF JIT.")
Signed-off-by: Tony Ambardar
---
 arch/mips/net/ebpf_jit.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c
index e8c403c6cfa3..f510c692975e 100644
--- a/arch/mips/net/ebpf_jit.c
+++ b/arch/mips/net/ebpf_jit.c
@@ -994,9 +994,14 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	case BPF_JMP | BPF_EXIT:
 		if (this_idx + 1 < exit_idx) {
 			b_off = b_imm(exit_idx, ctx);
-			if (is_bad_offset(b_off))
-				return -E2BIG;
-			emit_instr(ctx, beq, MIPS_R_ZERO, MIPS_R_ZERO, b_off);
+			if (is_bad_offset(b_off)) {
+				target = j_target(ctx, exit_idx);
+				if (target == (unsigned int)-1)
+					return -E2BIG;
+				emit_instr(ctx, j, target);
+			} else {
+				emit_instr(ctx, b, b_off);
+			}
 			emit_instr(ctx, nop);
 		}
 		break;
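The range logic can be sketched as follows. The bounds mirror an 18-bit signed byte offset (a 16-bit word offset shifted left by 2); fits_short_branch() is an illustrative helper, not the kernel's is_bad_offset()/j_target() pair.

```c
#include <assert.h>
#include <stdbool.h>

/* MIPS b/beq take a signed 16-bit *word* offset, i.e. +-128 KiB of
 * bytes, and the offset must be word aligned.
 */
static bool fits_short_branch(long byte_off)
{
	return byte_off >= -0x20000 && byte_off <= 0x1ffff &&
	       (byte_off & 3) == 0;
}
```

When this returns false, the patch falls back to the j instruction, whose 26-bit word index can reach any target within the current 256 MiB (28-bit) region.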
From patchwork Mon Jul 12 00:34:55 2021
X-Patchwork-Id: 12369487
From: Tony Ambardar
Subject: [RFC PATCH bpf-next v1 09/14] MIPS: eBPF: drop src_reg restriction in BPF_LD|BPF_DW|BPF_IMM
Date: Sun, 11 Jul 2021 17:34:55 -0700

Stop enforcing (insn->src_reg == 0) and allow special verifier insns
such as "BPF_LD_MAP_FD(BPF_REG_2, 0)", used to refer to a process-local
map fd and introduced in 0246e64d9a5f ("bpf: handle pseudo BPF_LD_IMM64
insn"). This is consistent with other JITs such as riscv32 and also
used by test_verifier (e.g. "runtime/jit: tail_call within bounds, prog
once").

Fixes: b6bd53f9c4e8 ("MIPS: Add missing file for eBPF JIT.")
Signed-off-by: Tony Ambardar
---
 arch/mips/net/ebpf_jit.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c
index f510c692975e..bba41f334d07 100644
--- a/arch/mips/net/ebpf_jit.c
+++ b/arch/mips/net/ebpf_jit.c
@@ -1303,8 +1303,6 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		emit_instr(ctx, nop);
 		break;
 	case BPF_LD | BPF_DW | BPF_IMM:
-		if (insn->src_reg != 0)
-			return -EINVAL;
 		dst = ebpf_to_mips_reg(ctx, insn, dst_reg);
 		if (dst < 0)
 			return dst;
From patchwork Mon Jul 12 00:34:56 2021
X-Patchwork-Id: 12369489
From: Tony Ambardar
Subject: [RFC PATCH bpf-next v1 10/14] MIPS: eBPF: improve and clarify enum 'which_ebpf_reg'
Date: Sun, 11 Jul 2021 17:34:56 -0700

Update enum and literals to be more consistent and less confusing. Prior
usage resulted in 'dst_reg' variables and 'dst_reg' enum literals, often
adjacent to each other and hampering maintenance.

Signed-off-by: Tony Ambardar
---
 arch/mips/net/ebpf_jit.c | 64 ++++++++++++++++++++--------------------
 1 file changed, 32 insertions(+), 32 deletions(-)

diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c
index bba41f334d07..61d7894051aa 100644
--- a/arch/mips/net/ebpf_jit.c
+++ b/arch/mips/net/ebpf_jit.c
@@ -177,11 +177,11 @@ static u32 b_imm(unsigned int tgt, struct jit_ctx *ctx)
 		(ctx->idx * 4) - 4;
 }

-enum which_ebpf_reg {
-	src_reg,
-	src_reg_no_fp,
-	dst_reg,
-	dst_reg_fp_ok
+enum reg_usage {
+	REG_SRC_FP_OK,
+	REG_SRC_NO_FP,
+	REG_DST_FP_OK,
+	REG_DST_NO_FP
 };

 /*
@@ -192,9 +192,9 @@ enum which_ebpf_reg {
  */
 static int ebpf_to_mips_reg(struct jit_ctx *ctx,
 			    const struct bpf_insn *insn,
-			    enum which_ebpf_reg w)
+			    enum reg_usage u)
 {
-	int ebpf_reg = (w == src_reg || w == src_reg_no_fp) ?
+	int ebpf_reg = (u == REG_SRC_FP_OK || u == REG_SRC_NO_FP) ?
 		insn->src_reg : insn->dst_reg;

 	switch (ebpf_reg) {
@@ -223,7 +223,7 @@ static int ebpf_to_mips_reg(struct jit_ctx *ctx,
 		ctx->flags |= EBPF_SAVE_S3;
 		return MIPS_R_S3;
 	case BPF_REG_10:
-		if (w == dst_reg || w == src_reg_no_fp)
+		if (u == REG_DST_NO_FP || u == REG_SRC_NO_FP)
 			goto bad_reg;
 		ctx->flags |= EBPF_SEEN_FP;
 		/*
@@ -423,7 +423,7 @@ static int gen_imm_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 			int idx)
 {
 	int upper_bound, lower_bound;
-	int dst = ebpf_to_mips_reg(ctx, insn, dst_reg);
+	int dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);

 	if (dst < 0)
 		return dst;
@@ -696,7 +696,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 			return r;
 		break;
 	case BPF_ALU64 | BPF_MUL | BPF_K: /* ALU64_IMM */
-		dst = ebpf_to_mips_reg(ctx, insn, dst_reg);
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
 		if (dst < 0)
 			return dst;
 		if (get_reg_val_type(ctx, this_idx, insn->dst_reg) == REG_32BIT)
@@ -712,7 +712,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		}
 		break;
 	case BPF_ALU64 | BPF_NEG | BPF_K: /* ALU64_IMM */
-		dst = ebpf_to_mips_reg(ctx, insn, dst_reg);
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
 		if (dst < 0)
 			return dst;
 		if (get_reg_val_type(ctx, this_idx, insn->dst_reg) == REG_32BIT)
@@ -720,7 +720,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		emit_instr(ctx, dsubu, dst, MIPS_R_ZERO, dst);
 		break;
 	case BPF_ALU | BPF_MUL | BPF_K: /* ALU_IMM */
-		dst = ebpf_to_mips_reg(ctx, insn, dst_reg);
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
 		if (dst < 0)
 			return dst;
 		td = get_reg_val_type(ctx, this_idx, insn->dst_reg);
@@ -739,7 +739,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		}
 		break;
 	case BPF_ALU | BPF_NEG | BPF_K: /* ALU_IMM */
-		dst = ebpf_to_mips_reg(ctx, insn, dst_reg);
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
 		if (dst < 0)
 			return dst;
 		td = get_reg_val_type(ctx, this_idx, insn->dst_reg);
@@ -753,7 +753,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	case BPF_ALU | BPF_MOD | BPF_K: /* ALU_IMM */
 		if (insn->imm == 0)
 			return -EINVAL;
-		dst = ebpf_to_mips_reg(ctx, insn, dst_reg);
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
 		if (dst < 0)
 			return dst;
 		td = get_reg_val_type(ctx, this_idx, insn->dst_reg);
@@ -784,7 +784,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	case BPF_ALU64 | BPF_MOD | BPF_K: /* ALU_IMM */
 		if (insn->imm == 0)
 			return -EINVAL;
-		dst = ebpf_to_mips_reg(ctx, insn, dst_reg);
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
 		if (dst < 0)
 			return dst;
 		if (get_reg_val_type(ctx, this_idx, insn->dst_reg) == REG_32BIT)
@@ -821,8 +821,8 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	case BPF_ALU64 | BPF_LSH | BPF_X: /* ALU64_REG */
 	case BPF_ALU64 | BPF_RSH | BPF_X: /* ALU64_REG */
 	case BPF_ALU64 | BPF_ARSH | BPF_X: /* ALU64_REG */
-		src = ebpf_to_mips_reg(ctx, insn, src_reg);
-		dst = ebpf_to_mips_reg(ctx, insn, dst_reg);
+		src = ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK);
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
 		if (src < 0 || dst < 0)
 			return -EINVAL;
 		if (get_reg_val_type(ctx, this_idx, insn->dst_reg) == REG_32BIT)
@@ -917,8 +917,8 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	case BPF_ALU | BPF_LSH | BPF_X: /* ALU_REG */
 	case BPF_ALU | BPF_RSH | BPF_X: /* ALU_REG */
 	case BPF_ALU | BPF_ARSH | BPF_X: /* ALU_REG */
-		src = ebpf_to_mips_reg(ctx, insn, src_reg_no_fp);
-		dst = ebpf_to_mips_reg(ctx, insn, dst_reg);
+		src = ebpf_to_mips_reg(ctx, insn, REG_SRC_NO_FP);
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
 		if (src < 0 || dst < 0)
 			return -EINVAL;
 		td = get_reg_val_type(ctx, this_idx, insn->dst_reg);
@@ -1008,7 +1008,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	case BPF_JMP | BPF_JEQ | BPF_K: /* JMP_IMM */
 	case BPF_JMP | BPF_JNE | BPF_K: /* JMP_IMM */
 		cmp_eq = (bpf_op == BPF_JEQ);
-		dst = ebpf_to_mips_reg(ctx, insn, dst_reg_fp_ok);
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK);
 		if (dst < 0)
 			return dst;
 		if (insn->imm == 0) {
@@ -1029,8 +1029,8 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	case BPF_JMP | BPF_JGT | BPF_X:
 	case BPF_JMP | BPF_JGE | BPF_X:
 	case BPF_JMP | BPF_JSET | BPF_X:
-		src = ebpf_to_mips_reg(ctx, insn, src_reg_no_fp);
-		dst = ebpf_to_mips_reg(ctx, insn, dst_reg);
+		src = ebpf_to_mips_reg(ctx, insn, REG_SRC_NO_FP);
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
 		if (src < 0 || dst < 0)
 			return -EINVAL;
 		td = get_reg_val_type(ctx, this_idx, insn->dst_reg);
@@ -1160,7 +1160,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	case BPF_JMP | BPF_JSLT | BPF_K: /* JMP_IMM */
 	case BPF_JMP | BPF_JSLE | BPF_K: /* JMP_IMM */
 		cmp_eq = (bpf_op == BPF_JSGE);
-		dst = ebpf_to_mips_reg(ctx, insn, dst_reg_fp_ok);
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK);
 		if (dst < 0)
 			return dst;
@@ -1235,7 +1235,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	case BPF_JMP | BPF_JLT | BPF_K:
 	case BPF_JMP | BPF_JLE | BPF_K:
 		cmp_eq = (bpf_op == BPF_JGE);
-		dst = ebpf_to_mips_reg(ctx, insn, dst_reg_fp_ok);
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK);
 		if (dst < 0)
 			return dst;
 		/*
@@ -1258,7 +1258,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		goto jeq_common;

 	case BPF_JMP | BPF_JSET | BPF_K: /* JMP_IMM */
-		dst = ebpf_to_mips_reg(ctx, insn, dst_reg_fp_ok);
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK);
 		if (dst < 0)
 			return dst;
@@ -1303,7 +1303,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		emit_instr(ctx, nop);
 		break;
 	case BPF_LD | BPF_DW | BPF_IMM:
-		dst = ebpf_to_mips_reg(ctx, insn, dst_reg);
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
 		if (dst < 0)
 			return dst;
 		t64 = ((u64)(u32)insn->imm) | ((u64)(insn + 1)->imm << 32);
@@ -1326,7 +1326,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	case BPF_ALU | BPF_END | BPF_FROM_BE:
 	case BPF_ALU | BPF_END | BPF_FROM_LE:
-		dst = ebpf_to_mips_reg(ctx, insn, dst_reg);
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
 		if (dst < 0)
 			return dst;
 		td = get_reg_val_type(ctx, this_idx, insn->dst_reg);
@@ -1369,7 +1369,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 			dst = MIPS_R_SP;
 			mem_off = insn->off + MAX_BPF_STACK;
 		} else {
-			dst = ebpf_to_mips_reg(ctx, insn, dst_reg);
+			dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
 			if (dst < 0)
 				return dst;
 			mem_off = insn->off;
@@ -1400,12 +1400,12 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 			src = MIPS_R_SP;
 			mem_off = insn->off + MAX_BPF_STACK;
 		} else {
-			src = ebpf_to_mips_reg(ctx, insn, src_reg_no_fp);
+			src = ebpf_to_mips_reg(ctx, insn, REG_SRC_NO_FP);
 			if (src < 0)
 				return src;
 			mem_off = insn->off;
 		}
-		dst = ebpf_to_mips_reg(ctx, insn, dst_reg);
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
 		if (dst < 0)
 			return dst;
 		switch (BPF_SIZE(insn->code)) {
@@ -1435,12 +1435,12 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 			dst = MIPS_R_SP;
 			mem_off = insn->off + MAX_BPF_STACK;
 		} else {
-			dst = ebpf_to_mips_reg(ctx, insn, dst_reg);
+			dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
 			if (dst < 0)
 				return dst;
 			mem_off = insn->off;
 		}
-		src = ebpf_to_mips_reg(ctx, insn, src_reg_no_fp);
+		src = ebpf_to_mips_reg(ctx, insn, REG_SRC_NO_FP);
 		if (src < 0)
 			return src;
 		if (BPF_MODE(insn->code) == BPF_ATOMIC) {
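The renamed literals encode two independent properties: which operand field is read, and whether the frame pointer is legal there. A toy decoding (not the JIT's code; the helper names are invented) makes the naming scheme explicit:

```c
#include <assert.h>
#include <stdbool.h>

enum reg_usage {
	REG_SRC_FP_OK,	/* read src_reg, FP allowed */
	REG_SRC_NO_FP,	/* read src_reg, FP rejected */
	REG_DST_FP_OK,	/* read dst_reg, FP allowed */
	REG_DST_NO_FP	/* read dst_reg, FP rejected */
};

static bool fp_allowed(enum reg_usage u)
{
	return u == REG_SRC_FP_OK || u == REG_DST_FP_OK;
}

static bool reads_src(enum reg_usage u)
{
	return u == REG_SRC_FP_OK || u == REG_SRC_NO_FP;
}
```

Under the old names, the answer to "is FP allowed for dst_reg?" depended on remembering that the literal dst_reg meant "no" while dst_reg_fp_ok meant "yes"; the new names state both properties directly.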
From patchwork Mon Jul 12 00:34:57 2021
X-Patchwork-Id: 12369495
From: Tony Ambardar
Subject: [RFC PATCH bpf-next v1 11/14] MIPS: eBPF: add core support for 32/64-bit systems
Date: Sun, 11 Jul 2021 17:34:57 -0700
Message-Id: <9a6e1b11fdc1f6a4c1b81cfb1b361d4628cc14c6.1625970384.git.Tony.Ambardar@gmail.com>

Update register definitions and flags
for both 32/64-bit operation. Add a common register lookup table and
modify ebpf_to_mips_reg() to use this, and add is64bit() and isbigend()
common helper functions.

On MIPS32, BPF registers are held in register pairs defined by the base
register. Word-size and endian-aware helper macros select 32-bit
registers from a pair and generate 32-bit word memory storage offsets.
The BPF TCC is stored to the stack due to register pressure.

Modify build_int_prologue() and build_int_epilogue() to handle MIPS32
registers and any adjustments needed during program entry/exit/transfer
if transitioning between the native N64/O32 ABI and the BPF 64-bit ABI.
Also ensure ABI-consistent stack alignment and use verifier-provided
stack depth during setup.

Update emit_const_to_reg() to work across MIPS64 and MIPS32 systems and
optimize gen_imm_to_reg() to only set the lower halfword if needed.
Rework emit_bpf_tail_call() to also support MIPS32 usage and add common
helpers, emit_bpf_call() and emit_push_args(), handling TCC and ABI
variations on MIPS32/MIPS64. Add tail_call_present() and update
tailcall handling to support mixing BPF2BPF subprograms and tailcalls.

Add sign and zero-extension helpers usable with verifier zext
insertion, gen_zext_insn() and gen_sext_insn(). Add common functions
emit_caller_save() and emit_caller_restore(), which push and pop all
caller-saved BPF registers to the stack, for use with JIT-internal
kernel calls such as those needed for BPF insns unsupported by native
system ISA opcodes. Since these calls would be hidden from any BPF C
compiler, which would normally spill needed registers during a call,
the JIT must handle save/restore itself.

Adopt a dedicated BPF FP (in MIPS_R_S8), and relax FP usage within
insns. This reduces ad-hoc code doing $sp manipulation with temp
registers, and allows wider usage of BPF FP for comparison and
arithmetic.
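The register-pair handling described above can be sketched with endian-aware offset macros. OFFLO/OFFHI are invented names for illustration, not the patch's macros; the point is that a 64-bit BPF register spilled at a given offset has its low word at a different word slot depending on endianness.

```c
#include <assert.h>

/* A 64-bit BPF register spilled at byte offset "off" occupies two
 * 32-bit words; which word holds the low half depends on endianness.
 */
#ifdef __MIPSEB__			/* big-endian MIPS */
#define OFFLO(off)	((off) + 4)
#define OFFHI(off)	(off)
#else					/* little-endian (or non-MIPS host) */
#define OFFLO(off)	(off)
#define OFFHI(off)	((off) + 4)
#endif
```

Either way the two words tile the same 8-byte slot; only their order changes, so loads and stores generated with these offsets stay consistent across both endiannesses.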
For example, the following tests from test_verifier are now JITed, where
previously they were not:

  939/p store PTR_TO_STACK in R10 to array map using BPF_B
  981/p unpriv: cmp pointer with pointer
  984/p unpriv: indirectly pass pointer on stack to helper function
  985/p unpriv: mangle pointer on stack 1
  986/p unpriv: mangle pointer on stack 2
  1001/p unpriv: partial copy of pointer
  1097/p xadd/w check whether src/dst got mangled, 1
  1098/p xadd/w check whether src/dst got mangled, 2

Signed-off-by: Tony Ambardar
---
 arch/mips/net/ebpf_jit.c | 750 +++++++++++++++++++++++++++++----------
 1 file changed, 563 insertions(+), 187 deletions(-)

diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c
index 61d7894051aa..525f1df8db21 100644
--- a/arch/mips/net/ebpf_jit.c
+++ b/arch/mips/net/ebpf_jit.c
@@ -1,11 +1,13 @@ // SPDX-License-Identifier: GPL-2.0-only /* - * Just-In-Time compiler for eBPF filters on MIPS - * - * Copyright (c) 2017 Cavium, Inc. + * Just-In-Time compiler for eBPF filters on MIPS32/MIPS64 + * Copyright (c) 2021 Tony Ambardar * * Based on code from: * + * Copyright (c) 2017 Cavium, Inc. + * Author: David Daney + * * Copyright (c) 2014 Imagination Technologies Ltd.
* Author: Markos Chandras */ @@ -22,31 +24,42 @@ #include #include -/* Registers used by JIT */ +/* Registers used by JIT: (MIPS32) (MIPS64) */ #define MIPS_R_ZERO 0 #define MIPS_R_AT 1 -#define MIPS_R_V0 2 /* BPF_R0 */ -#define MIPS_R_V1 3 -#define MIPS_R_A0 4 /* BPF_R1 */ -#define MIPS_R_A1 5 /* BPF_R2 */ -#define MIPS_R_A2 6 /* BPF_R3 */ -#define MIPS_R_A3 7 /* BPF_R4 */ -#define MIPS_R_A4 8 /* BPF_R5 */ -#define MIPS_R_T4 12 /* BPF_AX */ -#define MIPS_R_T5 13 -#define MIPS_R_T6 14 -#define MIPS_R_T7 15 -#define MIPS_R_S0 16 /* BPF_R6 */ -#define MIPS_R_S1 17 /* BPF_R7 */ -#define MIPS_R_S2 18 /* BPF_R8 */ -#define MIPS_R_S3 19 /* BPF_R9 */ -#define MIPS_R_S4 20 /* BPF_TCC */ -#define MIPS_R_S5 21 -#define MIPS_R_S6 22 -#define MIPS_R_S7 23 -#define MIPS_R_T8 24 -#define MIPS_R_T9 25 +#define MIPS_R_V0 2 /* BPF_R0 BPF_R0 */ +#define MIPS_R_V1 3 /* BPF_R0 BPF_TCC */ +#define MIPS_R_A0 4 /* BPF_R1 BPF_R1 */ +#define MIPS_R_A1 5 /* BPF_R1 BPF_R2 */ +#define MIPS_R_A2 6 /* BPF_R2 BPF_R3 */ +#define MIPS_R_A3 7 /* BPF_R2 BPF_R4 */ + +/* MIPS64 replaces T0-T3 scratch regs with extra arguments A4-A7. 
*/ +#ifdef CONFIG_64BIT +# define MIPS_R_A4 8 /* (n/a) BPF_R5 */ +#else +# define MIPS_R_T0 8 /* BPF_R3 (n/a) */ +# define MIPS_R_T1 9 /* BPF_R3 (n/a) */ +# define MIPS_R_T2 10 /* BPF_R4 (n/a) */ +# define MIPS_R_T3 11 /* BPF_R4 (n/a) */ +#endif + +#define MIPS_R_T4 12 /* BPF_R5 BPF_AX */ +#define MIPS_R_T5 13 /* BPF_R5 (free) */ +#define MIPS_R_T6 14 /* BPF_AX (used) */ +#define MIPS_R_T7 15 /* BPF_AX (free) */ +#define MIPS_R_S0 16 /* BPF_R6 BPF_R6 */ +#define MIPS_R_S1 17 /* BPF_R6 BPF_R7 */ +#define MIPS_R_S2 18 /* BPF_R7 BPF_R8 */ +#define MIPS_R_S3 19 /* BPF_R7 BPF_R9 */ +#define MIPS_R_S4 20 /* BPF_R8 BPF_TCC */ +#define MIPS_R_S5 21 /* BPF_R8 (free) */ +#define MIPS_R_S6 22 /* BPF_R9 (free) */ +#define MIPS_R_S7 23 /* BPF_R9 (free) */ +#define MIPS_R_T8 24 /* (used) (used) */ +#define MIPS_R_T9 25 /* (used) (used) */ #define MIPS_R_SP 29 +#define MIPS_R_S8 30 /* BPF_R10 BPF_R10 */ #define MIPS_R_RA 31 /* eBPF flags */ @@ -55,10 +68,117 @@ #define EBPF_SAVE_S2 BIT(2) #define EBPF_SAVE_S3 BIT(3) #define EBPF_SAVE_S4 BIT(4) -#define EBPF_SAVE_RA BIT(5) -#define EBPF_SEEN_FP BIT(6) -#define EBPF_SEEN_TC BIT(7) -#define EBPF_TCC_IN_V1 BIT(8) +#define EBPF_SAVE_S5 BIT(5) +#define EBPF_SAVE_S6 BIT(6) +#define EBPF_SAVE_S7 BIT(7) +#define EBPF_SAVE_S8 BIT(8) +#define EBPF_SAVE_RA BIT(9) +#define EBPF_SEEN_FP BIT(10) +#define EBPF_SEEN_TC BIT(11) +#define EBPF_TCC_IN_RUN BIT(12) + +/* + * Extra JIT registers dedicated to holding TCC during runtime or saving + * across calls. + */ +enum { + JIT_RUN_TCC = MAX_BPF_JIT_REG, + JIT_SAV_TCC +}; +/* Temporary register for passing TCC if nothing dedicated. */ +#define TEMP_PASS_TCC MIPS_R_T8 + +/* + * Word-size and endianness-aware helpers for building MIPS32 vs MIPS64 + * tables and selecting 32-bit subregisters from a register pair base. + * Simplify use by emulating MIPS_R_SP and MIPS_R_ZERO as register pairs + * and adding HI/LO word memory offsets. 
+ */ +#ifdef CONFIG_64BIT +# define HI(reg) (reg) +# define LO(reg) (reg) +# define OFFHI(mem) (mem) +# define OFFLO(mem) (mem) +#else /* CONFIG_32BIT */ +# ifdef __BIG_ENDIAN +# define HI(reg) ((reg) == MIPS_R_SP ? MIPS_R_ZERO : \ + (reg) == MIPS_R_S8 ? MIPS_R_ZERO : \ + (reg)) +# define LO(reg) ((reg) == MIPS_R_ZERO ? (reg) : \ + (reg) == MIPS_R_SP ? (reg) : \ + (reg) == MIPS_R_S8 ? (reg) : \ + (reg) + 1) +# define OFFHI(mem) (mem) +# define OFFLO(mem) ((mem) + sizeof(long)) +# else /* __LITTLE_ENDIAN */ +# define HI(reg) ((reg) == MIPS_R_ZERO ? (reg) : \ + (reg) == MIPS_R_SP ? MIPS_R_ZERO : \ + (reg) == MIPS_R_S8 ? MIPS_R_ZERO : \ + (reg) + 1) +# define LO(reg) (reg) +# define OFFHI(mem) ((mem) + sizeof(long)) +# define OFFLO(mem) (mem) +# endif +#endif + +#ifdef CONFIG_64BIT +# define M(expr32, expr64) (expr64) +#else +# define M(expr32, expr64) (expr32) +#endif +const struct { + /* Register or pair base */ + int reg; + /* Register flags */ + u32 flags; + /* Usage table: (MIPS32) (MIPS64) */ +} bpf2mips[] = { + /* Return value from in-kernel function, and exit value from eBPF. */ + [BPF_REG_0] = {M(MIPS_R_V0, MIPS_R_V0)}, + /* Arguments from eBPF program to in-kernel/BPF functions. */ + [BPF_REG_1] = {M(MIPS_R_A0, MIPS_R_A0)}, + [BPF_REG_2] = {M(MIPS_R_A2, MIPS_R_A1)}, + [BPF_REG_3] = {M(MIPS_R_T0, MIPS_R_A2)}, + [BPF_REG_4] = {M(MIPS_R_T2, MIPS_R_A3)}, + [BPF_REG_5] = {M(MIPS_R_T4, MIPS_R_A4)}, + /* Callee-saved registers preserved by in-kernel/BPF functions. 
*/ + [BPF_REG_6] = {M(MIPS_R_S0, MIPS_R_S0), + M(EBPF_SAVE_S0|EBPF_SAVE_S1, EBPF_SAVE_S0)}, + [BPF_REG_7] = {M(MIPS_R_S2, MIPS_R_S1), + M(EBPF_SAVE_S2|EBPF_SAVE_S3, EBPF_SAVE_S1)}, + [BPF_REG_8] = {M(MIPS_R_S4, MIPS_R_S2), + M(EBPF_SAVE_S4|EBPF_SAVE_S5, EBPF_SAVE_S2)}, + [BPF_REG_9] = {M(MIPS_R_S6, MIPS_R_S3), + M(EBPF_SAVE_S6|EBPF_SAVE_S7, EBPF_SAVE_S3)}, + [BPF_REG_10] = {M(MIPS_R_S8, MIPS_R_S8), + M(EBPF_SAVE_S8|EBPF_SEEN_FP, EBPF_SAVE_S8|EBPF_SEEN_FP)}, + /* Internal register for rewriting insns during JIT blinding. */ + [BPF_REG_AX] = {M(MIPS_R_T6, MIPS_R_T4)}, + /* + * Internal registers for TCC runtime holding and saving during + * calls. A zero save register indicates using scratch space on + * the stack for storage during calls. A zero hold register means + * no dedicated register holds TCC during runtime (but a temp reg + * still passes TCC to tailcall or bpf2bpf call). + */ + [JIT_RUN_TCC] = {M(0, MIPS_R_V1)}, + [JIT_SAV_TCC] = {M(0, MIPS_R_S4), + M(0, EBPF_SAVE_S4)} +}; +#undef M + +static inline bool is64bit(void) +{ + return IS_ENABLED(CONFIG_64BIT); +} + +static inline bool isbigend(void) +{ + return IS_ENABLED(CONFIG_CPU_BIG_ENDIAN); +} + +/* Stack region alignment under N64 and O32 ABIs */ +#define STACK_ALIGN (2 * sizeof(long)) /* * For the mips64 ISA, we need to track the value range or type for @@ -100,6 +220,7 @@ enum reg_val_type { struct jit_ctx { const struct bpf_prog *skf; int stack_size; + int bpf_stack_off; u32 idx; u32 flags; u32 *offsets; @@ -177,6 +298,34 @@ static u32 b_imm(unsigned int tgt, struct jit_ctx *ctx) (ctx->idx * 4) - 4; } +/* Sign-extend dst register or HI 32-bit reg of pair. */ +static inline void gen_sext_insn(int dst, struct jit_ctx *ctx) +{ + if (is64bit()) + emit_instr(ctx, sll, dst, dst, 0); + else + emit_instr(ctx, sra, HI(dst), LO(dst), 31); +} + +/* + * Zero-extend dst register or HI 32-bit reg of pair, if either forced + * or the BPF verifier does not insert its own zext insns. 
+ */ +static inline void gen_zext_insn(int dst, bool force, struct jit_ctx *ctx) +{ + if (!ctx->skf->aux->verifier_zext || force) { + if (is64bit()) + emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32); + else + emit_instr(ctx, and, HI(dst), MIPS_R_ZERO, MIPS_R_ZERO); + } +} + +static inline bool tail_call_present(struct jit_ctx *ctx) +{ + return ctx->flags & EBPF_SEEN_TC || ctx->skf->aux->tail_call_reachable; +} + enum reg_usage { REG_SRC_FP_OK, REG_SRC_NO_FP, @@ -186,9 +335,8 @@ enum reg_usage { /* * For eBPF, the register mapping naturally falls out of the - * requirements of eBPF and the MIPS n64 ABI. We don't maintain a - * separate frame pointer, so BPF_REG_10 relative accesses are - * adjusted to be $sp relative. + * requirements of eBPF and the MIPS N64/O32 ABIs. We also maintain + * a separate frame pointer, setting BPF_REG_10 relative to $sp. */ static int ebpf_to_mips_reg(struct jit_ctx *ctx, const struct bpf_insn *insn, @@ -199,110 +347,139 @@ static int ebpf_to_mips_reg(struct jit_ctx *ctx, switch (ebpf_reg) { case BPF_REG_0: - return MIPS_R_V0; case BPF_REG_1: - return MIPS_R_A0; case BPF_REG_2: - return MIPS_R_A1; case BPF_REG_3: - return MIPS_R_A2; case BPF_REG_4: - return MIPS_R_A3; case BPF_REG_5: - return MIPS_R_A4; case BPF_REG_6: - ctx->flags |= EBPF_SAVE_S0; - return MIPS_R_S0; case BPF_REG_7: - ctx->flags |= EBPF_SAVE_S1; - return MIPS_R_S1; case BPF_REG_8: - ctx->flags |= EBPF_SAVE_S2; - return MIPS_R_S2; case BPF_REG_9: - ctx->flags |= EBPF_SAVE_S3; - return MIPS_R_S3; + case BPF_REG_AX: + ctx->flags |= bpf2mips[ebpf_reg].flags; + return bpf2mips[ebpf_reg].reg; case BPF_REG_10: if (u == REG_DST_NO_FP || u == REG_SRC_NO_FP) goto bad_reg; - ctx->flags |= EBPF_SEEN_FP; - /* - * Needs special handling, return something that - * cannot be clobbered just in case. 
- */ - return MIPS_R_ZERO; - case BPF_REG_AX: - return MIPS_R_T4; + ctx->flags |= bpf2mips[ebpf_reg].flags; + return bpf2mips[ebpf_reg].reg; default: bad_reg: WARN(1, "Illegal bpf reg: %d\n", ebpf_reg); return -EINVAL; } } + /* * eBPF stack frame will be something like: * * Entry $sp ------> +--------------------------------+ * | $ra (optional) | * +--------------------------------+ - * | $s0 (optional) | + * | $s8 (optional) | * +--------------------------------+ - * | $s1 (optional) | + * | $s7 (optional) | * +--------------------------------+ - * | $s2 (optional) | + * | $s6 (optional) | * +--------------------------------+ - * | $s3 (optional) | + * | $s5 (optional) | * +--------------------------------+ * | $s4 (optional) | * +--------------------------------+ - * | tmp-storage (if $ra saved) | - * $sp + tmp_offset --> +--------------------------------+ <--BPF_REG_10 + * | $s3 (optional) | + * +--------------------------------+ + * | $s2 (optional) | + * +--------------------------------+ + * | $s1 (optional) | + * +--------------------------------+ + * | $s0 (optional) | + * +--------------------------------+ + * | tmp-storage (optional) | + * $sp + bpf_stack_off->+--------------------------------+ <--BPF_REG_10 * | BPF_REG_10 relative storage | * | MAX_BPF_STACK (optional) | * | . | * | . | * | . | - * $sp --------> +--------------------------------+ + * $sp ------> +--------------------------------+ * * If BPF_REG_10 is never referenced, then the MAX_BPF_STACK sized * area is not allocated. */ -static int gen_int_prologue(struct jit_ctx *ctx) +static int build_int_prologue(struct jit_ctx *ctx) { + int tcc_run = bpf2mips[JIT_RUN_TCC].reg ? 
+ bpf2mips[JIT_RUN_TCC].reg : + TEMP_PASS_TCC; + int tcc_sav = bpf2mips[JIT_SAV_TCC].reg; + const struct bpf_prog *prog = ctx->skf; + int r10 = bpf2mips[BPF_REG_10].reg; + int r1 = bpf2mips[BPF_REG_1].reg; int stack_adjust = 0; int store_offset; int locals_size; if (ctx->flags & EBPF_SAVE_RA) - /* - * If RA we are doing a function call and may need - * extra 8-byte tmp area. - */ - stack_adjust += 2 * sizeof(long); - if (ctx->flags & EBPF_SAVE_S0) stack_adjust += sizeof(long); - if (ctx->flags & EBPF_SAVE_S1) + if (ctx->flags & EBPF_SAVE_S8) stack_adjust += sizeof(long); - if (ctx->flags & EBPF_SAVE_S2) + if (ctx->flags & EBPF_SAVE_S7) stack_adjust += sizeof(long); - if (ctx->flags & EBPF_SAVE_S3) + if (ctx->flags & EBPF_SAVE_S6) + stack_adjust += sizeof(long); + if (ctx->flags & EBPF_SAVE_S5) stack_adjust += sizeof(long); if (ctx->flags & EBPF_SAVE_S4) stack_adjust += sizeof(long); + if (ctx->flags & EBPF_SAVE_S3) + stack_adjust += sizeof(long); + if (ctx->flags & EBPF_SAVE_S2) + stack_adjust += sizeof(long); + if (ctx->flags & EBPF_SAVE_S1) + stack_adjust += sizeof(long); + if (ctx->flags & EBPF_SAVE_S0) + stack_adjust += sizeof(long); + if (tail_call_present(ctx) && + !(ctx->flags & EBPF_TCC_IN_RUN) && !tcc_sav) + /* Allocate scratch space for holding TCC if needed. */ + stack_adjust += sizeof(long); - BUILD_BUG_ON(MAX_BPF_STACK & 7); - locals_size = (ctx->flags & EBPF_SEEN_FP) ? MAX_BPF_STACK : 0; + stack_adjust = ALIGN(stack_adjust, STACK_ALIGN); + + locals_size = (ctx->flags & EBPF_SEEN_FP) ? prog->aux->stack_depth : 0; + locals_size = ALIGN(locals_size, STACK_ALIGN); stack_adjust += locals_size; ctx->stack_size = stack_adjust; + ctx->bpf_stack_off = locals_size; + + /* + * First instruction initializes the tail call count (TCC) if + * called from kernel or via BPF tail call. A BPF tail-caller + * will skip this instruction and pass the TCC via register. + * As a BPF2BPF subprog, we are called directly and must avoid + * resetting the TCC. 
+ */ + if (!ctx->skf->is_func) + emit_instr(ctx, addiu, tcc_run, MIPS_R_ZERO, MAX_TAIL_CALL_CNT); /* - * First instruction initializes the tail call count (TCC). - * On tail call we skip this instruction, and the TCC is - * passed in $v1 from the caller. + * If called from kernel under O32 ABI we must set up BPF R1 context, + * since BPF R1 is an endian-order register pair ($a0:$a1 or $a1:$a0) + * but context is always passed in $a0 as 32-bit pointer. Entry from + * a tail-call looks just like a kernel call, which means the caller + * must set up R1 context according to the kernel call ABI. If we are + * a BPF2BPF call then all registers are already correctly set up. */ - emit_instr(ctx, addiu, MIPS_R_V1, MIPS_R_ZERO, MAX_TAIL_CALL_CNT); + if (!is64bit() && !ctx->skf->is_func) { + if (isbigend()) + emit_instr(ctx, move, LO(r1), MIPS_R_A0); + /* Sanitize upper 32-bit reg */ + gen_zext_insn(r1, true, ctx); + } + if (stack_adjust) emit_instr_long(ctx, daddiu, addiu, MIPS_R_SP, MIPS_R_SP, -stack_adjust); @@ -316,24 +493,24 @@ static int gen_int_prologue(struct jit_ctx *ctx) MIPS_R_RA, store_offset, MIPS_R_SP); store_offset -= sizeof(long); } - if (ctx->flags & EBPF_SAVE_S0) { + if (ctx->flags & EBPF_SAVE_S8) { emit_instr_long(ctx, sd, sw, - MIPS_R_S0, store_offset, MIPS_R_SP); + MIPS_R_S8, store_offset, MIPS_R_SP); store_offset -= sizeof(long); } - if (ctx->flags & EBPF_SAVE_S1) { + if (ctx->flags & EBPF_SAVE_S7) { emit_instr_long(ctx, sd, sw, - MIPS_R_S1, store_offset, MIPS_R_SP); + MIPS_R_S7, store_offset, MIPS_R_SP); store_offset -= sizeof(long); } - if (ctx->flags & EBPF_SAVE_S2) { + if (ctx->flags & EBPF_SAVE_S6) { emit_instr_long(ctx, sd, sw, - MIPS_R_S2, store_offset, MIPS_R_SP); + MIPS_R_S6, store_offset, MIPS_R_SP); store_offset -= sizeof(long); } - if (ctx->flags & EBPF_SAVE_S3) { + if (ctx->flags & EBPF_SAVE_S5) { emit_instr_long(ctx, sd, sw, - MIPS_R_S3, store_offset, MIPS_R_SP); + MIPS_R_S5, store_offset, MIPS_R_SP); store_offset -= sizeof(long); } if
(ctx->flags & EBPF_SAVE_S4) { @@ -341,10 +518,40 @@ static int gen_int_prologue(struct jit_ctx *ctx) MIPS_R_S4, store_offset, MIPS_R_SP); store_offset -= sizeof(long); } + if (ctx->flags & EBPF_SAVE_S3) { + emit_instr_long(ctx, sd, sw, + MIPS_R_S3, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S2) { + emit_instr_long(ctx, sd, sw, + MIPS_R_S2, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S1) { + emit_instr_long(ctx, sd, sw, + MIPS_R_S1, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S0) { + emit_instr_long(ctx, sd, sw, + MIPS_R_S0, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + + /* Store TCC in backup register or stack scratch space if indicated. */ + if (tail_call_present(ctx) && !(ctx->flags & EBPF_TCC_IN_RUN)) { + if (tcc_sav) + emit_instr(ctx, move, tcc_sav, tcc_run); + else + emit_instr_long(ctx, sd, sw, + tcc_run, ctx->bpf_stack_off, MIPS_R_SP); + } - if ((ctx->flags & EBPF_SEEN_TC) && !(ctx->flags & EBPF_TCC_IN_V1)) - emit_instr_long(ctx, daddu, addu, - MIPS_R_S4, MIPS_R_V1, MIPS_R_ZERO); + /* Prepare BPF FP as single-reg ptr, emulate upper 32-bits as needed.*/ + if (ctx->flags & EBPF_SEEN_FP) + emit_instr_long(ctx, daddiu, addiu, r10, + MIPS_R_SP, ctx->bpf_stack_off); return 0; } @@ -354,14 +561,38 @@ static int build_int_epilogue(struct jit_ctx *ctx, int dest_reg) const struct bpf_prog *prog = ctx->skf; int stack_adjust = ctx->stack_size; int store_offset = stack_adjust - sizeof(long); + int r1 = bpf2mips[BPF_REG_1].reg; + int r0 = bpf2mips[BPF_REG_0].reg; enum reg_val_type td; - int r0 = MIPS_R_V0; - if (dest_reg == MIPS_R_RA) { - /* Don't let zero extended value escape. */ - td = get_reg_val_type(ctx, prog->len, BPF_REG_0); - if (td == REG_64BIT) - emit_instr(ctx, sll, r0, r0, 0); + /* + * Returns from BPF2BPF calls consistently use the BPF 64-bit ABI + * i.e. 
register usage and mapping between JIT and OS is unchanged. + * Returning to the kernel must follow the N64 or O32 ABI, and for + * the latter requires fixup of BPF R0 to MIPS V0 register mapping. + * + * Tail calls must ensure the passed R1 context is consistent with + * the kernel ABI, and requires fixup on MIPS32 bigendian systems. + */ + if (dest_reg == MIPS_R_RA && !ctx->skf->is_func) { /* kernel return */ + if (is64bit()) { + /* Don't let zero extended value escape. */ + td = get_reg_val_type(ctx, prog->len, BPF_REG_0); + if (td == REG_64BIT) + gen_sext_insn(r0, ctx); + } else if (isbigend()) { /* and 32-bit */ + /* + * O32 ABI specifies 32-bit return value always + * placed in MIPS_R_V0 regardless of the native + * endianness. This would be in the wrong position + * in a BPF R0 reg pair on big-endian systems, so + * we must relocate. + */ + emit_instr(ctx, move, MIPS_R_V0, LO(r0)); + } + } else if (dest_reg == MIPS_R_T9) { /* tail call */ + if (!is64bit() && isbigend()) + emit_instr(ctx, move, MIPS_R_A0, LO(r1)); } if (ctx->flags & EBPF_SAVE_RA) { @@ -369,24 +600,24 @@ static int build_int_epilogue(struct jit_ctx *ctx, int dest_reg) MIPS_R_RA, store_offset, MIPS_R_SP); store_offset -= sizeof(long); } - if (ctx->flags & EBPF_SAVE_S0) { + if (ctx->flags & EBPF_SAVE_S8) { emit_instr_long(ctx, ld, lw, - MIPS_R_S0, store_offset, MIPS_R_SP); + MIPS_R_S8, store_offset, MIPS_R_SP); store_offset -= sizeof(long); } - if (ctx->flags & EBPF_SAVE_S1) { + if (ctx->flags & EBPF_SAVE_S7) { emit_instr_long(ctx, ld, lw, - MIPS_R_S1, store_offset, MIPS_R_SP); + MIPS_R_S7, store_offset, MIPS_R_SP); store_offset -= sizeof(long); } - if (ctx->flags & EBPF_SAVE_S2) { + if (ctx->flags & EBPF_SAVE_S6) { emit_instr_long(ctx, ld, lw, - MIPS_R_S2, store_offset, MIPS_R_SP); + MIPS_R_S6, store_offset, MIPS_R_SP); store_offset -= sizeof(long); } - if (ctx->flags & EBPF_SAVE_S3) { + if (ctx->flags & EBPF_SAVE_S5) { emit_instr_long(ctx, ld, lw, -
MIPS_R_S5, store_offset, MIPS_R_SP); store_offset -= sizeof(long); } if (ctx->flags & EBPF_SAVE_S4) { @@ -394,8 +625,29 @@ static int build_int_epilogue(struct jit_ctx *ctx, int dest_reg) MIPS_R_S4, store_offset, MIPS_R_SP); store_offset -= sizeof(long); } + if (ctx->flags & EBPF_SAVE_S3) { + emit_instr_long(ctx, ld, lw, + MIPS_R_S3, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S2) { + emit_instr_long(ctx, ld, lw, + MIPS_R_S2, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S1) { + emit_instr_long(ctx, ld, lw, + MIPS_R_S1, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S0) { + emit_instr_long(ctx, ld, lw, + MIPS_R_S0, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } emit_instr(ctx, jr, dest_reg); + /* Delay slot */ if (stack_adjust) emit_instr_long(ctx, daddiu, addiu, MIPS_R_SP, MIPS_R_SP, stack_adjust); @@ -415,7 +667,9 @@ static void gen_imm_to_reg(const struct bpf_insn *insn, int reg, int upper = insn->imm - lower; emit_instr(ctx, lui, reg, upper >> 16); - emit_instr(ctx, addiu, reg, reg, lower); + /* lui already clears lower halfword */ + if (lower) + emit_instr(ctx, addiu, reg, reg, lower); } } @@ -564,12 +818,12 @@ static int gen_imm_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, return 0; } -static void emit_const_to_reg(struct jit_ctx *ctx, int dst, u64 value) +static void emit_const_to_reg(struct jit_ctx *ctx, int dst, unsigned long value) { - if (value >= 0xffffffffffff8000ull || value < 0x8000ull) { - emit_instr(ctx, daddiu, dst, MIPS_R_ZERO, (int)value); - } else if (value >= 0xffffffff80000000ull || - (value < 0x80000000 && value > 0xffff)) { + if (value >= S16_MIN || value <= S16_MAX) { + emit_instr_long(ctx, daddiu, addiu, dst, MIPS_R_ZERO, (int)value); + } else if (value >= S32_MIN || + (value <= S32_MAX && value > U16_MAX)) { emit_instr(ctx, lui, dst, (s32)(s16)(value >> 16)); emit_instr(ctx, ori, 
dst, dst, (unsigned int)(value & 0xffff)); } else { @@ -601,54 +855,145 @@ static void emit_const_to_reg(struct jit_ctx *ctx, int dst, u64 value) } } +/* + * Push BPF regs R3-R5 to the stack, skipping BPF regs R1-R2 which are + * passed via MIPS register pairs in $a0-$a3. Register order within pairs + * and the memory storage order are identical i.e. endian native. + */ +static void emit_push_args(struct jit_ctx *ctx) +{ + int store_offset = 2 * sizeof(u64); /* Skip R1-R2 in $a0-$a3 */ + int bpf, reg; + + for (bpf = BPF_REG_3; bpf <= BPF_REG_5; bpf++) { + reg = bpf2mips[bpf].reg; + + emit_instr(ctx, sw, LO(reg), OFFLO(store_offset), MIPS_R_SP); + emit_instr(ctx, sw, HI(reg), OFFHI(store_offset), MIPS_R_SP); + store_offset += sizeof(u64); + } +} + +/* + * Common helper for BPF_CALL insn, handling TCC and ABI variations. + * Kernel calls under O32 ABI require arguments passed on the stack, + * while BPF2BPF calls need the TCC passed via register as expected + * by the subprog's prologue. + * + * Under MIPS32 O32 ABI calling convention, u64 BPF regs R1-R2 are passed + * via reg pairs in $a0-$a3, while BPF regs R3-R5 are passed via the stack. + * Stack space is still reserved for $a0-$a3, and the whole area aligned. + */ +#define ARGS_SIZE (5 * sizeof(u64)) + +void emit_bpf_call(struct jit_ctx *ctx, const struct bpf_insn *insn) +{ + int stack_adjust = ALIGN(ARGS_SIZE, STACK_ALIGN); + int tcc_run = bpf2mips[JIT_RUN_TCC].reg ? 
+ bpf2mips[JIT_RUN_TCC].reg : + TEMP_PASS_TCC; + int tcc_sav = bpf2mips[JIT_SAV_TCC].reg; + long func_addr; + + ctx->flags |= EBPF_SAVE_RA; + + /* Ensure TCC passed into BPF subprog */ + if ((insn->src_reg == BPF_PSEUDO_CALL) && + tail_call_present(ctx) && !(ctx->flags & EBPF_TCC_IN_RUN)) { + /* Set TCC from reg or stack */ + if (tcc_sav) + emit_instr(ctx, move, tcc_run, tcc_sav); + else + emit_instr_long(ctx, ld, lw, tcc_run, + ctx->bpf_stack_off, MIPS_R_SP); + } + + /* Push O32 stack args for kernel call */ + if (!is64bit() && (insn->src_reg != BPF_PSEUDO_CALL)) { + emit_instr(ctx, addiu, MIPS_R_SP, MIPS_R_SP, -stack_adjust); + emit_push_args(ctx); + } + + func_addr = (long)__bpf_call_base + insn->imm; + emit_const_to_reg(ctx, MIPS_R_T9, func_addr); + emit_instr(ctx, jalr, MIPS_R_RA, MIPS_R_T9); + /* Delay slot */ + emit_instr(ctx, nop); + + /* Restore stack */ + if (!is64bit() && (insn->src_reg != BPF_PSEUDO_CALL)) + emit_instr(ctx, addiu, MIPS_R_SP, MIPS_R_SP, stack_adjust); +} + +/* + * Tail call helper arguments passed via BPF ABI as u64 parameters. On + * MIPS64 N64 ABI systems these are native regs, while on MIPS32 O32 ABI + * systems these are reg pairs: + * + * R1 -> &ctx + * R2 -> &array + * R3 -> index + */ static int emit_bpf_tail_call(struct jit_ctx *ctx, int this_idx) { + int tcc_run = bpf2mips[JIT_RUN_TCC].reg ? 
+ bpf2mips[JIT_RUN_TCC].reg : + TEMP_PASS_TCC; + int tcc_sav = bpf2mips[JIT_SAV_TCC].reg; + int r2 = bpf2mips[BPF_REG_2].reg; + int r3 = bpf2mips[BPF_REG_3].reg; int off, b_off; - int tcc_reg; + int tcc; ctx->flags |= EBPF_SEEN_TC; /* * if (index >= array->map.max_entries) * goto out; */ - /* Mask index as 32-bit */ - emit_instr(ctx, dinsu, MIPS_R_A2, MIPS_R_ZERO, 32, 32); + if (is64bit()) + /* Mask index as 32-bit */ + gen_zext_insn(r3, true, ctx); off = offsetof(struct bpf_array, map.max_entries); - emit_instr(ctx, lwu, MIPS_R_T5, off, MIPS_R_A1); - emit_instr(ctx, sltu, MIPS_R_AT, MIPS_R_T5, MIPS_R_A2); + emit_instr_long(ctx, lwu, lw, MIPS_R_AT, off, LO(r2)); + emit_instr(ctx, sltu, MIPS_R_AT, MIPS_R_AT, LO(r3)); b_off = b_imm(this_idx + 1, ctx); - emit_instr(ctx, bne, MIPS_R_AT, MIPS_R_ZERO, b_off); + emit_instr(ctx, bnez, MIPS_R_AT, b_off); /* * if (TCC-- < 0) * goto out; */ /* Delay slot */ - tcc_reg = (ctx->flags & EBPF_TCC_IN_V1) ? MIPS_R_V1 : MIPS_R_S4; - emit_instr(ctx, daddiu, MIPS_R_T5, tcc_reg, -1); + tcc = (ctx->flags & EBPF_TCC_IN_RUN) ? 
tcc_run : tcc_sav; + /* Get TCC from reg or stack */ + if (tcc) + emit_instr(ctx, move, MIPS_R_T8, tcc); + else + emit_instr_long(ctx, ld, lw, MIPS_R_T8, + ctx->bpf_stack_off, MIPS_R_SP); b_off = b_imm(this_idx + 1, ctx); - emit_instr(ctx, bltz, tcc_reg, b_off); + emit_instr(ctx, bltz, MIPS_R_T8, b_off); /* * prog = array->ptrs[index]; * if (prog == NULL) * goto out; */ /* Delay slot */ - emit_instr(ctx, dsll, MIPS_R_T8, MIPS_R_A2, 3); - emit_instr(ctx, daddu, MIPS_R_T8, MIPS_R_T8, MIPS_R_A1); + emit_instr_long(ctx, dsll, sll, MIPS_R_AT, LO(r3), ilog2(sizeof(long))); + emit_instr_long(ctx, daddu, addu, MIPS_R_AT, MIPS_R_AT, LO(r2)); off = offsetof(struct bpf_array, ptrs); - emit_instr(ctx, ld, MIPS_R_AT, off, MIPS_R_T8); + emit_instr_long(ctx, ld, lw, MIPS_R_AT, off, MIPS_R_AT); b_off = b_imm(this_idx + 1, ctx); - emit_instr(ctx, beq, MIPS_R_AT, MIPS_R_ZERO, b_off); + emit_instr(ctx, beqz, MIPS_R_AT, b_off); /* Delay slot */ emit_instr(ctx, nop); /* goto *(prog->bpf_func + 4); */ off = offsetof(struct bpf_prog, bpf_func); - emit_instr(ctx, ld, MIPS_R_T9, off, MIPS_R_AT); - /* All systems are go... propagate TCC */ - emit_instr(ctx, daddu, MIPS_R_V1, MIPS_R_T5, MIPS_R_ZERO); + emit_instr_long(ctx, ld, lw, MIPS_R_T9, off, MIPS_R_AT); + /* All systems are go... 
decrement and propagate TCC */ + emit_instr_long(ctx, daddiu, addiu, tcc_run, MIPS_R_T8, -1); /* Skip first instruction (TCC initialization) */ - emit_instr(ctx, daddiu, MIPS_R_T9, MIPS_R_T9, 4); + emit_instr_long(ctx, daddiu, addiu, MIPS_R_T9, MIPS_R_T9, 4); return build_int_epilogue(ctx, MIPS_R_T9); } @@ -828,15 +1173,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, if (get_reg_val_type(ctx, this_idx, insn->dst_reg) == REG_32BIT) emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32); did_move = false; - if (insn->src_reg == BPF_REG_10) { - if (bpf_op == BPF_MOV) { - emit_instr(ctx, daddiu, dst, MIPS_R_SP, MAX_BPF_STACK); - did_move = true; - } else { - emit_instr(ctx, daddiu, MIPS_R_AT, MIPS_R_SP, MAX_BPF_STACK); - src = MIPS_R_AT; - } - } else if (get_reg_val_type(ctx, this_idx, insn->src_reg) == REG_32BIT) { + if (get_reg_val_type(ctx, this_idx, insn->src_reg) == REG_32BIT) { int tmp_reg = MIPS_R_AT; if (bpf_op == BPF_MOV) { @@ -917,7 +1254,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, case BPF_ALU | BPF_LSH | BPF_X: /* ALU_REG */ case BPF_ALU | BPF_RSH | BPF_X: /* ALU_REG */ case BPF_ALU | BPF_ARSH | BPF_X: /* ALU_REG */ - src = ebpf_to_mips_reg(ctx, insn, REG_SRC_NO_FP); + src = ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK); dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); if (src < 0 || dst < 0) return -EINVAL; @@ -1029,8 +1366,8 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, case BPF_JMP | BPF_JGT | BPF_X: case BPF_JMP | BPF_JGE | BPF_X: case BPF_JMP | BPF_JSET | BPF_X: - src = ebpf_to_mips_reg(ctx, insn, REG_SRC_NO_FP); - dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); + src = ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK); + dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK); if (src < 0 || dst < 0) return -EINVAL; td = get_reg_val_type(ctx, this_idx, insn->dst_reg); @@ -1311,12 +1648,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, 
return 2; /* Double slot insn */ case BPF_JMP | BPF_CALL: - ctx->flags |= EBPF_SAVE_RA; - t64s = (s64)insn->imm + (long)__bpf_call_base; - emit_const_to_reg(ctx, MIPS_R_T9, (u64)t64s); - emit_instr(ctx, jalr, MIPS_R_RA, MIPS_R_T9); - /* delay slot */ - emit_instr(ctx, nop); + emit_bpf_call(ctx, insn); break; case BPF_JMP | BPF_TAIL_CALL: @@ -1364,16 +1696,10 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, case BPF_ST | BPF_H | BPF_MEM: case BPF_ST | BPF_W | BPF_MEM: case BPF_ST | BPF_DW | BPF_MEM: - if (insn->dst_reg == BPF_REG_10) { - ctx->flags |= EBPF_SEEN_FP; - dst = MIPS_R_SP; - mem_off = insn->off + MAX_BPF_STACK; - } else { - dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); - if (dst < 0) - return dst; - mem_off = insn->off; - } + dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK); + if (dst < 0) + return dst; + mem_off = insn->off; gen_imm_to_reg(insn, MIPS_R_AT, ctx); switch (BPF_SIZE(insn->code)) { case BPF_B: @@ -1395,19 +1721,11 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, case BPF_LDX | BPF_H | BPF_MEM: case BPF_LDX | BPF_W | BPF_MEM: case BPF_LDX | BPF_DW | BPF_MEM: - if (insn->src_reg == BPF_REG_10) { - ctx->flags |= EBPF_SEEN_FP; - src = MIPS_R_SP; - mem_off = insn->off + MAX_BPF_STACK; - } else { - src = ebpf_to_mips_reg(ctx, insn, REG_SRC_NO_FP); - if (src < 0) - return src; - mem_off = insn->off; - } dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); - if (dst < 0) - return dst; + src = ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK); + if (dst < 0 || src < 0) + return -EINVAL; + mem_off = insn->off; switch (BPF_SIZE(insn->code)) { case BPF_B: emit_instr(ctx, lbu, dst, mem_off, src); @@ -1430,25 +1748,16 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, case BPF_STX | BPF_DW | BPF_MEM: case BPF_STX | BPF_W | BPF_ATOMIC: case BPF_STX | BPF_DW | BPF_ATOMIC: - if (insn->dst_reg == BPF_REG_10) { - ctx->flags |= EBPF_SEEN_FP; - dst = MIPS_R_SP; - mem_off = insn->off 
+ MAX_BPF_STACK; - } else { - dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); - if (dst < 0) - return dst; - mem_off = insn->off; - } - src = ebpf_to_mips_reg(ctx, insn, REG_SRC_NO_FP); - if (src < 0) - return src; + dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK); + src = ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK); + if (src < 0 || dst < 0) + return -EINVAL; + mem_off = insn->off; if (BPF_MODE(insn->code) == BPF_ATOMIC) { if (insn->imm != BPF_ADD) { pr_err("ATOMIC OP %02x NOT HANDLED\n", insn->imm); return -EINVAL; } - /* * If mem_off does not fit within the 9 bit ll/sc * instruction immediate field, use a temp reg. @@ -1829,6 +2138,73 @@ static void jit_fill_hole(void *area, unsigned int size) uasm_i_break(&p, BRK_BUG); /* Increments p */ } +/* + * Save and restore the BPF VM state across a direct kernel call. This + * includes the caller-saved registers used for BPF_REG_0 .. BPF_REG_5 + * and BPF_REG_AX used by the verifier for blinding and other dark arts. + * Restore avoids clobbering bpf_ret, which holds the call return value. + * BPF_REG_6 .. BPF_REG_10 and TCC are already callee-saved or on stack. 
+ */ +static const int bpf_caller_save[] = { + BPF_REG_0, + BPF_REG_1, + BPF_REG_2, + BPF_REG_3, + BPF_REG_4, + BPF_REG_5, + BPF_REG_AX, +}; + +#define CALLER_ENV_SIZE (ARRAY_SIZE(bpf_caller_save) * sizeof(u64)) + +void emit_caller_save(struct jit_ctx *ctx) +{ + int stack_adj = ALIGN(CALLER_ENV_SIZE, STACK_ALIGN); + int i, bpf, reg, store_offset; + + emit_instr_long(ctx, daddiu, addiu, MIPS_R_SP, MIPS_R_SP, -stack_adj); + + for (i = 0; i < ARRAY_SIZE(bpf_caller_save); i++) { + bpf = bpf_caller_save[i]; + reg = bpf2mips[bpf].reg; + store_offset = i * sizeof(u64); + + if (is64bit()) { + emit_instr(ctx, sd, reg, store_offset, MIPS_R_SP); + } else { + emit_instr(ctx, sw, LO(reg), + OFFLO(store_offset), MIPS_R_SP); + emit_instr(ctx, sw, HI(reg), + OFFHI(store_offset), MIPS_R_SP); + } + } +} + +void emit_caller_restore(struct jit_ctx *ctx, int bpf_ret) +{ + int stack_adj = ALIGN(CALLER_ENV_SIZE, STACK_ALIGN); + int i, bpf, reg, store_offset; + + for (i = 0; i < ARRAY_SIZE(bpf_caller_save); i++) { + bpf = bpf_caller_save[i]; + reg = bpf2mips[bpf].reg; + store_offset = i * sizeof(u64); + if (bpf == bpf_ret) + continue; + + if (is64bit()) { + emit_instr(ctx, ld, reg, store_offset, MIPS_R_SP); + } else { + emit_instr(ctx, lw, LO(reg), + OFFLO(store_offset), MIPS_R_SP); + emit_instr(ctx, lw, HI(reg), + OFFHI(store_offset), MIPS_R_SP); + } + } + + emit_instr_long(ctx, daddiu, addiu, MIPS_R_SP, MIPS_R_SP, stack_adj); +} + struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog) { struct bpf_prog *orig_prog = prog; @@ -1889,14 +2265,14 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog) goto out_err; /* - * If no calls are made (EBPF_SAVE_RA), then tail call count - * in $v1, else we must save in n$s4. + * If no calls are made (EBPF_SAVE_RA), then tailcall count located + * in runtime reg if defined, else we backup to save reg or stack. 
 	 */
-	if (ctx.flags & EBPF_SEEN_TC) {
+	if (tail_call_present(&ctx)) {
 		if (ctx.flags & EBPF_SAVE_RA)
-			ctx.flags |= EBPF_SAVE_S4;
-		else
-			ctx.flags |= EBPF_TCC_IN_V1;
+			ctx.flags |= bpf2mips[JIT_SAV_TCC].flags;
+		else if (bpf2mips[JIT_RUN_TCC].reg)
+			ctx.flags |= EBPF_TCC_IN_RUN;
 	}
 
 	/*
@@ -1910,7 +2286,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	ctx.idx = 0;
 	ctx.gen_b_offsets = 1;
 	ctx.long_b_conversion = 0;
-	if (gen_int_prologue(&ctx))
+	if (build_int_prologue(&ctx))
 		goto out_err;
 	if (build_int_body(&ctx))
 		goto out_err;
@@ -1929,7 +2305,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 
 	/* Third pass generates the code */
 	ctx.idx = 0;
-	if (gen_int_prologue(&ctx))
+	if (build_int_prologue(&ctx))
 		goto out_err;
 	if (build_int_body(&ctx))
 		goto out_err;

From patchwork Mon Jul 12 00:34:58 2021
X-Patchwork-Submitter: Tony Ambardar
X-Patchwork-Id: 12369527
X-Patchwork-Delegate: bpf@iogearbox.net
From: Tony Ambardar
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Thomas Bogendoerfer, Paul Burton
Cc: Tony Ambardar, netdev@vger.kernel.org, bpf@vger.kernel.org,
	linux-mips@vger.kernel.org, Johan Almbladh, Hassan Naveed,
	David Daney, Luke Nelson, Serge Semin, Martin KaFai Lau,
	Song Liu, Yonghong Song, John Fastabend, KP Singh
Subject: [RFC PATCH bpf-next v1 12/14] MIPS: eBPF: refactor common
 MIPS64/MIPS32 functions and headers
Date: Sun, 11 Jul 2021 17:34:58 -0700
Message-Id: <9ef16ad29a0b85957c97d114b2b6279c3239bf61.1625970384.git.Tony.Ambardar@gmail.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC

Move core functions and headers to ebpf_jit_core.c and ebpf_jit.h, and
relocate the MIPS64 specific build_one_insn() to ebpf_jit_comp64.c.
Signed-off-by: Tony Ambardar
---
 arch/mips/net/Makefile          |    2 +-
 arch/mips/net/ebpf_jit.c        | 2341 -------------------------------
 arch/mips/net/ebpf_jit.h        |  295 ++++
 arch/mips/net/ebpf_jit_comp64.c |  987 +++++++++++++
 arch/mips/net/ebpf_jit_core.c   | 1112 ++++++++++++++++
 5 files changed, 2395 insertions(+), 2342 deletions(-)
 delete mode 100644 arch/mips/net/ebpf_jit.c
 create mode 100644 arch/mips/net/ebpf_jit.h
 create mode 100644 arch/mips/net/ebpf_jit_comp64.c
 create mode 100644 arch/mips/net/ebpf_jit_core.c

diff --git a/arch/mips/net/Makefile b/arch/mips/net/Makefile
index d55912349039..de42f4a4db56 100644
--- a/arch/mips/net/Makefile
+++ b/arch/mips/net/Makefile
@@ -2,4 +2,4 @@
 # MIPS networking code
 
 obj-$(CONFIG_MIPS_CBPF_JIT) += bpf_jit.o bpf_jit_asm.o
-obj-$(CONFIG_MIPS_EBPF_JIT) += ebpf_jit.o
+obj-$(CONFIG_MIPS_EBPF_JIT) += ebpf_jit_core.o ebpf_jit_comp64.o
diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c
deleted file mode 100644
index 525f1df8db21..000000000000
--- a/arch/mips/net/ebpf_jit.c
+++ /dev/null
@@ -1,2341 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Just-In-Time compiler for eBPF filters on MIPS32/MIPS64
- * Copyright (c) 2021 Tony Ambardar
- *
- * Based on code from:
- *
- * Copyright (c) 2017 Cavium, Inc.
- * Author: David Daney
- *
- * Copyright (c) 2014 Imagination Technologies Ltd.
- * Author: Markos Chandras
- */
-
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-
-/* Registers used by JIT:	  (MIPS32)  (MIPS64) */
-#define MIPS_R_ZERO	0
-#define MIPS_R_AT	1
-#define MIPS_R_V0	2	/* BPF_R0   BPF_R0  */
-#define MIPS_R_V1	3	/* BPF_R0   BPF_TCC */
-#define MIPS_R_A0	4	/* BPF_R1   BPF_R1  */
-#define MIPS_R_A1	5	/* BPF_R1   BPF_R2  */
-#define MIPS_R_A2	6	/* BPF_R2   BPF_R3  */
-#define MIPS_R_A3	7	/* BPF_R2   BPF_R4  */
-
-/* MIPS64 replaces T0-T3 scratch regs with extra arguments A4-A7.
*/ -#ifdef CONFIG_64BIT -# define MIPS_R_A4 8 /* (n/a) BPF_R5 */ -#else -# define MIPS_R_T0 8 /* BPF_R3 (n/a) */ -# define MIPS_R_T1 9 /* BPF_R3 (n/a) */ -# define MIPS_R_T2 10 /* BPF_R4 (n/a) */ -# define MIPS_R_T3 11 /* BPF_R4 (n/a) */ -#endif - -#define MIPS_R_T4 12 /* BPF_R5 BPF_AX */ -#define MIPS_R_T5 13 /* BPF_R5 (free) */ -#define MIPS_R_T6 14 /* BPF_AX (used) */ -#define MIPS_R_T7 15 /* BPF_AX (free) */ -#define MIPS_R_S0 16 /* BPF_R6 BPF_R6 */ -#define MIPS_R_S1 17 /* BPF_R6 BPF_R7 */ -#define MIPS_R_S2 18 /* BPF_R7 BPF_R8 */ -#define MIPS_R_S3 19 /* BPF_R7 BPF_R9 */ -#define MIPS_R_S4 20 /* BPF_R8 BPF_TCC */ -#define MIPS_R_S5 21 /* BPF_R8 (free) */ -#define MIPS_R_S6 22 /* BPF_R9 (free) */ -#define MIPS_R_S7 23 /* BPF_R9 (free) */ -#define MIPS_R_T8 24 /* (used) (used) */ -#define MIPS_R_T9 25 /* (used) (used) */ -#define MIPS_R_SP 29 -#define MIPS_R_S8 30 /* BPF_R10 BPF_R10 */ -#define MIPS_R_RA 31 - -/* eBPF flags */ -#define EBPF_SAVE_S0 BIT(0) -#define EBPF_SAVE_S1 BIT(1) -#define EBPF_SAVE_S2 BIT(2) -#define EBPF_SAVE_S3 BIT(3) -#define EBPF_SAVE_S4 BIT(4) -#define EBPF_SAVE_S5 BIT(5) -#define EBPF_SAVE_S6 BIT(6) -#define EBPF_SAVE_S7 BIT(7) -#define EBPF_SAVE_S8 BIT(8) -#define EBPF_SAVE_RA BIT(9) -#define EBPF_SEEN_FP BIT(10) -#define EBPF_SEEN_TC BIT(11) -#define EBPF_TCC_IN_RUN BIT(12) - -/* - * Extra JIT registers dedicated to holding TCC during runtime or saving - * across calls. - */ -enum { - JIT_RUN_TCC = MAX_BPF_JIT_REG, - JIT_SAV_TCC -}; -/* Temporary register for passing TCC if nothing dedicated. */ -#define TEMP_PASS_TCC MIPS_R_T8 - -/* - * Word-size and endianness-aware helpers for building MIPS32 vs MIPS64 - * tables and selecting 32-bit subregisters from a register pair base. - * Simplify use by emulating MIPS_R_SP and MIPS_R_ZERO as register pairs - * and adding HI/LO word memory offsets. 
- */ -#ifdef CONFIG_64BIT -# define HI(reg) (reg) -# define LO(reg) (reg) -# define OFFHI(mem) (mem) -# define OFFLO(mem) (mem) -#else /* CONFIG_32BIT */ -# ifdef __BIG_ENDIAN -# define HI(reg) ((reg) == MIPS_R_SP ? MIPS_R_ZERO : \ - (reg) == MIPS_R_S8 ? MIPS_R_ZERO : \ - (reg)) -# define LO(reg) ((reg) == MIPS_R_ZERO ? (reg) : \ - (reg) == MIPS_R_SP ? (reg) : \ - (reg) == MIPS_R_S8 ? (reg) : \ - (reg) + 1) -# define OFFHI(mem) (mem) -# define OFFLO(mem) ((mem) + sizeof(long)) -# else /* __LITTLE_ENDIAN */ -# define HI(reg) ((reg) == MIPS_R_ZERO ? (reg) : \ - (reg) == MIPS_R_SP ? MIPS_R_ZERO : \ - (reg) == MIPS_R_S8 ? MIPS_R_ZERO : \ - (reg) + 1) -# define LO(reg) (reg) -# define OFFHI(mem) ((mem) + sizeof(long)) -# define OFFLO(mem) (mem) -# endif -#endif - -#ifdef CONFIG_64BIT -# define M(expr32, expr64) (expr64) -#else -# define M(expr32, expr64) (expr32) -#endif -const struct { - /* Register or pair base */ - int reg; - /* Register flags */ - u32 flags; - /* Usage table: (MIPS32) (MIPS64) */ -} bpf2mips[] = { - /* Return value from in-kernel function, and exit value from eBPF. */ - [BPF_REG_0] = {M(MIPS_R_V0, MIPS_R_V0)}, - /* Arguments from eBPF program to in-kernel/BPF functions. */ - [BPF_REG_1] = {M(MIPS_R_A0, MIPS_R_A0)}, - [BPF_REG_2] = {M(MIPS_R_A2, MIPS_R_A1)}, - [BPF_REG_3] = {M(MIPS_R_T0, MIPS_R_A2)}, - [BPF_REG_4] = {M(MIPS_R_T2, MIPS_R_A3)}, - [BPF_REG_5] = {M(MIPS_R_T4, MIPS_R_A4)}, - /* Callee-saved registers preserved by in-kernel/BPF functions. 
*/ - [BPF_REG_6] = {M(MIPS_R_S0, MIPS_R_S0), - M(EBPF_SAVE_S0|EBPF_SAVE_S1, EBPF_SAVE_S0)}, - [BPF_REG_7] = {M(MIPS_R_S2, MIPS_R_S1), - M(EBPF_SAVE_S2|EBPF_SAVE_S3, EBPF_SAVE_S1)}, - [BPF_REG_8] = {M(MIPS_R_S4, MIPS_R_S2), - M(EBPF_SAVE_S4|EBPF_SAVE_S5, EBPF_SAVE_S2)}, - [BPF_REG_9] = {M(MIPS_R_S6, MIPS_R_S3), - M(EBPF_SAVE_S6|EBPF_SAVE_S7, EBPF_SAVE_S3)}, - [BPF_REG_10] = {M(MIPS_R_S8, MIPS_R_S8), - M(EBPF_SAVE_S8|EBPF_SEEN_FP, EBPF_SAVE_S8|EBPF_SEEN_FP)}, - /* Internal register for rewriting insns during JIT blinding. */ - [BPF_REG_AX] = {M(MIPS_R_T6, MIPS_R_T4)}, - /* - * Internal registers for TCC runtime holding and saving during - * calls. A zero save register indicates using scratch space on - * the stack for storage during calls. A zero hold register means - * no dedicated register holds TCC during runtime (but a temp reg - * still passes TCC to tailcall or bpf2bpf call). - */ - [JIT_RUN_TCC] = {M(0, MIPS_R_V1)}, - [JIT_SAV_TCC] = {M(0, MIPS_R_S4), - M(0, EBPF_SAVE_S4)} -}; -#undef M - -static inline bool is64bit(void) -{ - return IS_ENABLED(CONFIG_64BIT); -} - -static inline bool isbigend(void) -{ - return IS_ENABLED(CONFIG_CPU_BIG_ENDIAN); -} - -/* Stack region alignment under N64 and O32 ABIs */ -#define STACK_ALIGN (2 * sizeof(long)) - -/* - * For the mips64 ISA, we need to track the value range or type for - * each JIT register. The BPF machine requires zero extended 32-bit - * values, but the mips64 ISA requires sign extended 32-bit values. - * At each point in the BPF program we track the state of every - * register so that we can zero extend or sign extend as the BPF - * semantics require. - */ -enum reg_val_type { - /* uninitialized */ - REG_UNKNOWN, - /* not known to be 32-bit compatible. */ - REG_64BIT, - /* 32-bit compatible, no truncation needed for 64-bit ops. */ - REG_64BIT_32BIT, - /* 32-bit compatible, need truncation for 64-bit ops. */ - REG_32BIT, - /* 32-bit no sign/zero extension needed. 
*/ - REG_32BIT_POS -}; - -/* - * high bit of offsets indicates if long branch conversion done at - * this insn. - */ -#define OFFSETS_B_CONV BIT(31) - -/** - * struct jit_ctx - JIT context - * @skf: The sk_filter - * @stack_size: eBPF stack size - * @idx: Instruction index - * @flags: JIT flags - * @offsets: Instruction offsets - * @target: Memory location for the compiled filter - * @reg_val_types Packed enum reg_val_type for each register. - */ -struct jit_ctx { - const struct bpf_prog *skf; - int stack_size; - int bpf_stack_off; - u32 idx; - u32 flags; - u32 *offsets; - u32 *target; - u64 *reg_val_types; - unsigned int long_b_conversion:1; - unsigned int gen_b_offsets:1; - unsigned int use_bbit_insns:1; -}; - -static void set_reg_val_type(u64 *rvt, int reg, enum reg_val_type type) -{ - *rvt &= ~(7ull << (reg * 3)); - *rvt |= ((u64)type << (reg * 3)); -} - -static enum reg_val_type get_reg_val_type(const struct jit_ctx *ctx, - int index, int reg) -{ - return (ctx->reg_val_types[index] >> (reg * 3)) & 7; -} - -/* Simply emit the instruction if the JIT memory space has been allocated */ -#define emit_instr_long(ctx, func64, func32, ...) \ -do { \ - if ((ctx)->target != NULL) { \ - u32 *p = &(ctx)->target[ctx->idx]; \ - if (IS_ENABLED(CONFIG_64BIT)) \ - uasm_i_##func64(&p, ##__VA_ARGS__); \ - else \ - uasm_i_##func32(&p, ##__VA_ARGS__); \ - } \ - (ctx)->idx++; \ -} while (0) - -#define emit_instr(ctx, func, ...) \ - emit_instr_long(ctx, func, func, ##__VA_ARGS__) - -static unsigned int j_target(struct jit_ctx *ctx, int target_idx) -{ - unsigned long target_va, base_va; - unsigned int r; - - if (!ctx->target) - return 0; - - base_va = (unsigned long)ctx->target; - target_va = base_va + (ctx->offsets[target_idx] & ~OFFSETS_B_CONV); - - if ((base_va & ~0x0ffffffful) != (target_va & ~0x0ffffffful)) - return (unsigned int)-1; - r = target_va & 0x0ffffffful; - return r; -} - -/* Compute the immediate value for PC-relative branches. 
*/ -static u32 b_imm(unsigned int tgt, struct jit_ctx *ctx) -{ - if (!ctx->gen_b_offsets) - return 0; - - /* - * We want a pc-relative branch. tgt is the instruction offset - * we want to jump to. - - * Branch on MIPS: - * I: target_offset <- sign_extend(offset) - * I+1: PC += target_offset (delay slot) - * - * ctx->idx currently points to the branch instruction - * but the offset is added to the delay slot so we need - * to subtract 4. - */ - return (ctx->offsets[tgt] & ~OFFSETS_B_CONV) - - (ctx->idx * 4) - 4; -} - -/* Sign-extend dst register or HI 32-bit reg of pair. */ -static inline void gen_sext_insn(int dst, struct jit_ctx *ctx) -{ - if (is64bit()) - emit_instr(ctx, sll, dst, dst, 0); - else - emit_instr(ctx, sra, HI(dst), LO(dst), 31); -} - -/* - * Zero-extend dst register or HI 32-bit reg of pair, if either forced - * or the BPF verifier does not insert its own zext insns. - */ -static inline void gen_zext_insn(int dst, bool force, struct jit_ctx *ctx) -{ - if (!ctx->skf->aux->verifier_zext || force) { - if (is64bit()) - emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32); - else - emit_instr(ctx, and, HI(dst), MIPS_R_ZERO, MIPS_R_ZERO); - } -} - -static inline bool tail_call_present(struct jit_ctx *ctx) -{ - return ctx->flags & EBPF_SEEN_TC || ctx->skf->aux->tail_call_reachable; -} - -enum reg_usage { - REG_SRC_FP_OK, - REG_SRC_NO_FP, - REG_DST_FP_OK, - REG_DST_NO_FP -}; - -/* - * For eBPF, the register mapping naturally falls out of the - * requirements of eBPF and the MIPS N64/O32 ABIs. We also maintain - * a separate frame pointer, setting BPF_REG_10 relative to $sp. - */ -static int ebpf_to_mips_reg(struct jit_ctx *ctx, - const struct bpf_insn *insn, - enum reg_usage u) -{ - int ebpf_reg = (u == REG_SRC_FP_OK || u == REG_SRC_NO_FP) ? 
- insn->src_reg : insn->dst_reg; - - switch (ebpf_reg) { - case BPF_REG_0: - case BPF_REG_1: - case BPF_REG_2: - case BPF_REG_3: - case BPF_REG_4: - case BPF_REG_5: - case BPF_REG_6: - case BPF_REG_7: - case BPF_REG_8: - case BPF_REG_9: - case BPF_REG_AX: - ctx->flags |= bpf2mips[ebpf_reg].flags; - return bpf2mips[ebpf_reg].reg; - case BPF_REG_10: - if (u == REG_DST_NO_FP || u == REG_SRC_NO_FP) - goto bad_reg; - ctx->flags |= bpf2mips[ebpf_reg].flags; - return bpf2mips[ebpf_reg].reg; - default: -bad_reg: - WARN(1, "Illegal bpf reg: %d\n", ebpf_reg); - return -EINVAL; - } -} - -/* - * eBPF stack frame will be something like: - * - * Entry $sp ------> +--------------------------------+ - * | $ra (optional) | - * +--------------------------------+ - * | $s8 (optional) | - * +--------------------------------+ - * | $s7 (optional) | - * +--------------------------------+ - * | $s6 (optional) | - * +--------------------------------+ - * | $s5 (optional) | - * +--------------------------------+ - * | $s4 (optional) | - * +--------------------------------+ - * | $s3 (optional) | - * +--------------------------------+ - * | $s2 (optional) | - * +--------------------------------+ - * | $s1 (optional) | - * +--------------------------------+ - * | $s0 (optional) | - * +--------------------------------+ - * | tmp-storage (optional) | - * $sp + bpf_stack_off->+--------------------------------+ <--BPF_REG_10 - * | BPF_REG_10 relative storage | - * | MAX_BPF_STACK (optional) | - * | . | - * | . | - * | . | - * $sp ------> +--------------------------------+ - * - * If BPF_REG_10 is never referenced, then the MAX_BPF_STACK sized - * area is not allocated. - */ -static int build_int_prologue(struct jit_ctx *ctx) -{ - int tcc_run = bpf2mips[JIT_RUN_TCC].reg ? 
- bpf2mips[JIT_RUN_TCC].reg : - TEMP_PASS_TCC; - int tcc_sav = bpf2mips[JIT_SAV_TCC].reg; - const struct bpf_prog *prog = ctx->skf; - int r10 = bpf2mips[BPF_REG_10].reg; - int r1 = bpf2mips[BPF_REG_1].reg; - int stack_adjust = 0; - int store_offset; - int locals_size; - - if (ctx->flags & EBPF_SAVE_RA) - stack_adjust += sizeof(long); - if (ctx->flags & EBPF_SAVE_S8) - stack_adjust += sizeof(long); - if (ctx->flags & EBPF_SAVE_S7) - stack_adjust += sizeof(long); - if (ctx->flags & EBPF_SAVE_S6) - stack_adjust += sizeof(long); - if (ctx->flags & EBPF_SAVE_S5) - stack_adjust += sizeof(long); - if (ctx->flags & EBPF_SAVE_S4) - stack_adjust += sizeof(long); - if (ctx->flags & EBPF_SAVE_S3) - stack_adjust += sizeof(long); - if (ctx->flags & EBPF_SAVE_S2) - stack_adjust += sizeof(long); - if (ctx->flags & EBPF_SAVE_S1) - stack_adjust += sizeof(long); - if (ctx->flags & EBPF_SAVE_S0) - stack_adjust += sizeof(long); - if (tail_call_present(ctx) && - !(ctx->flags & EBPF_TCC_IN_RUN) && !tcc_sav) - /* Allocate scratch space for holding TCC if needed. */ - stack_adjust += sizeof(long); - - stack_adjust = ALIGN(stack_adjust, STACK_ALIGN); - - locals_size = (ctx->flags & EBPF_SEEN_FP) ? prog->aux->stack_depth : 0; - locals_size = ALIGN(locals_size, STACK_ALIGN); - - stack_adjust += locals_size; - - ctx->stack_size = stack_adjust; - ctx->bpf_stack_off = locals_size; - - /* - * First instruction initializes the tail call count (TCC) if - * called from kernel or via BPF tail call. A BPF tail-caller - * will skip this instruction and pass the TCC via register. - * As a BPF2BPF subprog, we are called directly and must avoid - * resetting the TCC. - */ - if (!ctx->skf->is_func) - emit_instr(ctx, addiu, tcc_run, MIPS_R_ZERO, MAX_TAIL_CALL_CNT); - - /* - * If called from kernel under O32 ABI we must set up BPF R1 context, - * since BPF R1 is an endian-order regster pair ($a0:$a1 or $a1:$a0) - * but context is always passed in $a0 as 32-bit pointer. 
Entry from - * a tail-call looks just like a kernel call, which means the caller - * must set up R1 context according to the kernel call ABI. If we are - * a BPF2BPF call then all registers are already correctly set up. - */ - if (!is64bit() && !ctx->skf->is_func) { - if (isbigend()) - emit_instr(ctx, move, LO(r1), MIPS_R_A0); - /* Sanitize upper 32-bit reg */ - gen_zext_insn(r1, true, ctx); - } - - if (stack_adjust) - emit_instr_long(ctx, daddiu, addiu, - MIPS_R_SP, MIPS_R_SP, -stack_adjust); - else - return 0; - - store_offset = stack_adjust - sizeof(long); - - if (ctx->flags & EBPF_SAVE_RA) { - emit_instr_long(ctx, sd, sw, - MIPS_R_RA, store_offset, MIPS_R_SP); - store_offset -= sizeof(long); - } - if (ctx->flags & EBPF_SAVE_S8) { - emit_instr_long(ctx, sd, sw, - MIPS_R_S8, store_offset, MIPS_R_SP); - store_offset -= sizeof(long); - } - if (ctx->flags & EBPF_SAVE_S7) { - emit_instr_long(ctx, sd, sw, - MIPS_R_S7, store_offset, MIPS_R_SP); - store_offset -= sizeof(long); - } - if (ctx->flags & EBPF_SAVE_S6) { - emit_instr_long(ctx, sd, sw, - MIPS_R_S6, store_offset, MIPS_R_SP); - store_offset -= sizeof(long); - } - if (ctx->flags & EBPF_SAVE_S5) { - emit_instr_long(ctx, sd, sw, - MIPS_R_S5, store_offset, MIPS_R_SP); - store_offset -= sizeof(long); - } - if (ctx->flags & EBPF_SAVE_S4) { - emit_instr_long(ctx, sd, sw, - MIPS_R_S4, store_offset, MIPS_R_SP); - store_offset -= sizeof(long); - } - if (ctx->flags & EBPF_SAVE_S3) { - emit_instr_long(ctx, sd, sw, - MIPS_R_S3, store_offset, MIPS_R_SP); - store_offset -= sizeof(long); - } - if (ctx->flags & EBPF_SAVE_S2) { - emit_instr_long(ctx, sd, sw, - MIPS_R_S2, store_offset, MIPS_R_SP); - store_offset -= sizeof(long); - } - if (ctx->flags & EBPF_SAVE_S1) { - emit_instr_long(ctx, sd, sw, - MIPS_R_S1, store_offset, MIPS_R_SP); - store_offset -= sizeof(long); - } - if (ctx->flags & EBPF_SAVE_S0) { - emit_instr_long(ctx, sd, sw, - MIPS_R_S0, store_offset, MIPS_R_SP); - store_offset -= sizeof(long); - } - - /* Store TCC in 
backup register or stack scratch space if indicated. */ - if (tail_call_present(ctx) && !(ctx->flags & EBPF_TCC_IN_RUN)) { - if (tcc_sav) - emit_instr(ctx, move, tcc_sav, tcc_run); - else - emit_instr_long(ctx, sd, sw, - tcc_run, ctx->bpf_stack_off, MIPS_R_SP); - } - - /* Prepare BPF FP as single-reg ptr, emulate upper 32-bits as needed.*/ - if (ctx->flags & EBPF_SEEN_FP) - emit_instr_long(ctx, daddiu, addiu, r10, - MIPS_R_SP, ctx->bpf_stack_off); - - return 0; -} - -static int build_int_epilogue(struct jit_ctx *ctx, int dest_reg) -{ - const struct bpf_prog *prog = ctx->skf; - int stack_adjust = ctx->stack_size; - int store_offset = stack_adjust - sizeof(long); - int r1 = bpf2mips[BPF_REG_1].reg; - int r0 = bpf2mips[BPF_REG_0].reg; - enum reg_val_type td; - - /* - * Returns from BPF2BPF calls consistently use the BPF 64-bit ABI - * i.e. register usage and mapping between JIT and OS is unchanged. - * Returning to the kernel must follow the N64 or O32 ABI, and for - * the latter requires fixup of BPF R0 to MIPS V0 register mapping. - * - * Tails calls must ensure the passed R1 context is consistent with - * the kernel ABI, and requires fixup on MIPS32 bigendian systems. - */ - if (dest_reg == MIPS_R_RA && !ctx->skf->is_func) { /* kernel return */ - if (is64bit()) { - /* Don't let zero extended value escape. */ - td = get_reg_val_type(ctx, prog->len, BPF_REG_0); - if (td == REG_64BIT) - gen_sext_insn(r0, ctx); - } else if (isbigend()) { /* and 32-bit */ - /* - * O32 ABI specifies 32-bit return value always - * placed in MIPS_R_V0 regardless of the native - * endianness. This would be in the wrong position - * in a BPF R0 reg pair on big-endian systems, so - * we must relocate. 
- */ - emit_instr(ctx, move, MIPS_R_V0, LO(r0)); - } - } else if (dest_reg == MIPS_R_T9) { /* tail call */ - if (!is64bit() && isbigend()) - emit_instr(ctx, move, MIPS_R_A0, LO(r1)); - } - - if (ctx->flags & EBPF_SAVE_RA) { - emit_instr_long(ctx, ld, lw, - MIPS_R_RA, store_offset, MIPS_R_SP); - store_offset -= sizeof(long); - } - if (ctx->flags & EBPF_SAVE_S8) { - emit_instr_long(ctx, ld, lw, - MIPS_R_S8, store_offset, MIPS_R_SP); - store_offset -= sizeof(long); - } - if (ctx->flags & EBPF_SAVE_S7) { - emit_instr_long(ctx, ld, lw, - MIPS_R_S7, store_offset, MIPS_R_SP); - store_offset -= sizeof(long); - } - if (ctx->flags & EBPF_SAVE_S6) { - emit_instr_long(ctx, ld, lw, - MIPS_R_S6, store_offset, MIPS_R_SP); - store_offset -= sizeof(long); - } - if (ctx->flags & EBPF_SAVE_S5) { - emit_instr_long(ctx, ld, lw, - MIPS_R_S5, store_offset, MIPS_R_SP); - store_offset -= sizeof(long); - } - if (ctx->flags & EBPF_SAVE_S4) { - emit_instr_long(ctx, ld, lw, - MIPS_R_S4, store_offset, MIPS_R_SP); - store_offset -= sizeof(long); - } - if (ctx->flags & EBPF_SAVE_S3) { - emit_instr_long(ctx, ld, lw, - MIPS_R_S3, store_offset, MIPS_R_SP); - store_offset -= sizeof(long); - } - if (ctx->flags & EBPF_SAVE_S2) { - emit_instr_long(ctx, ld, lw, - MIPS_R_S2, store_offset, MIPS_R_SP); - store_offset -= sizeof(long); - } - if (ctx->flags & EBPF_SAVE_S1) { - emit_instr_long(ctx, ld, lw, - MIPS_R_S1, store_offset, MIPS_R_SP); - store_offset -= sizeof(long); - } - if (ctx->flags & EBPF_SAVE_S0) { - emit_instr_long(ctx, ld, lw, - MIPS_R_S0, store_offset, MIPS_R_SP); - store_offset -= sizeof(long); - } - emit_instr(ctx, jr, dest_reg); - - /* Delay slot */ - if (stack_adjust) - emit_instr_long(ctx, daddiu, addiu, - MIPS_R_SP, MIPS_R_SP, stack_adjust); - else - emit_instr(ctx, nop); - - return 0; -} - -static void gen_imm_to_reg(const struct bpf_insn *insn, int reg, - struct jit_ctx *ctx) -{ - if (insn->imm >= S16_MIN && insn->imm <= S16_MAX) { - emit_instr(ctx, addiu, reg, MIPS_R_ZERO, 
insn->imm); - } else { - int lower = (s16)(insn->imm & 0xffff); - int upper = insn->imm - lower; - - emit_instr(ctx, lui, reg, upper >> 16); - /* lui already clears lower halfword */ - if (lower) - emit_instr(ctx, addiu, reg, reg, lower); - } -} - -static int gen_imm_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, - int idx) -{ - int upper_bound, lower_bound; - int dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); - - if (dst < 0) - return dst; - - switch (BPF_OP(insn->code)) { - case BPF_MOV: - case BPF_ADD: - upper_bound = S16_MAX; - lower_bound = S16_MIN; - break; - case BPF_SUB: - upper_bound = -(int)S16_MIN; - lower_bound = -(int)S16_MAX; - break; - case BPF_AND: - case BPF_OR: - case BPF_XOR: - upper_bound = 0xffff; - lower_bound = 0; - break; - case BPF_RSH: - case BPF_LSH: - case BPF_ARSH: - /* Shift amounts are truncated, no need for bounds */ - upper_bound = S32_MAX; - lower_bound = S32_MIN; - break; - default: - return -EINVAL; - } - - /* - * Immediate move clobbers the register, so no sign/zero - * extension needed. 
- */ - if (BPF_CLASS(insn->code) == BPF_ALU64 && - BPF_OP(insn->code) != BPF_MOV && - get_reg_val_type(ctx, idx, insn->dst_reg) == REG_32BIT) - emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32); - /* BPF_ALU | BPF_LSH doesn't need separate sign extension */ - if (BPF_CLASS(insn->code) == BPF_ALU && - BPF_OP(insn->code) != BPF_LSH && - BPF_OP(insn->code) != BPF_MOV && - get_reg_val_type(ctx, idx, insn->dst_reg) != REG_32BIT) - emit_instr(ctx, sll, dst, dst, 0); - - if (insn->imm >= lower_bound && insn->imm <= upper_bound) { - /* single insn immediate case */ - switch (BPF_OP(insn->code) | BPF_CLASS(insn->code)) { - case BPF_ALU64 | BPF_MOV: - emit_instr(ctx, daddiu, dst, MIPS_R_ZERO, insn->imm); - break; - case BPF_ALU64 | BPF_AND: - case BPF_ALU | BPF_AND: - emit_instr(ctx, andi, dst, dst, insn->imm); - break; - case BPF_ALU64 | BPF_OR: - case BPF_ALU | BPF_OR: - emit_instr(ctx, ori, dst, dst, insn->imm); - break; - case BPF_ALU64 | BPF_XOR: - case BPF_ALU | BPF_XOR: - emit_instr(ctx, xori, dst, dst, insn->imm); - break; - case BPF_ALU64 | BPF_ADD: - emit_instr(ctx, daddiu, dst, dst, insn->imm); - break; - case BPF_ALU64 | BPF_SUB: - emit_instr(ctx, daddiu, dst, dst, -insn->imm); - break; - case BPF_ALU64 | BPF_RSH: - emit_instr(ctx, dsrl_safe, dst, dst, insn->imm & 0x3f); - break; - case BPF_ALU | BPF_RSH: - emit_instr(ctx, srl, dst, dst, insn->imm & 0x1f); - break; - case BPF_ALU64 | BPF_LSH: - emit_instr(ctx, dsll_safe, dst, dst, insn->imm & 0x3f); - break; - case BPF_ALU | BPF_LSH: - emit_instr(ctx, sll, dst, dst, insn->imm & 0x1f); - break; - case BPF_ALU64 | BPF_ARSH: - emit_instr(ctx, dsra_safe, dst, dst, insn->imm & 0x3f); - break; - case BPF_ALU | BPF_ARSH: - emit_instr(ctx, sra, dst, dst, insn->imm & 0x1f); - break; - case BPF_ALU | BPF_MOV: - emit_instr(ctx, addiu, dst, MIPS_R_ZERO, insn->imm); - break; - case BPF_ALU | BPF_ADD: - emit_instr(ctx, addiu, dst, dst, insn->imm); - break; - case BPF_ALU | BPF_SUB: - emit_instr(ctx, addiu, dst, dst, 
-insn->imm); - break; - default: - return -EINVAL; - } - } else { - /* multi insn immediate case */ - if (BPF_OP(insn->code) == BPF_MOV) { - gen_imm_to_reg(insn, dst, ctx); - } else { - gen_imm_to_reg(insn, MIPS_R_AT, ctx); - switch (BPF_OP(insn->code) | BPF_CLASS(insn->code)) { - case BPF_ALU64 | BPF_AND: - case BPF_ALU | BPF_AND: - emit_instr(ctx, and, dst, dst, MIPS_R_AT); - break; - case BPF_ALU64 | BPF_OR: - case BPF_ALU | BPF_OR: - emit_instr(ctx, or, dst, dst, MIPS_R_AT); - break; - case BPF_ALU64 | BPF_XOR: - case BPF_ALU | BPF_XOR: - emit_instr(ctx, xor, dst, dst, MIPS_R_AT); - break; - case BPF_ALU64 | BPF_ADD: - emit_instr(ctx, daddu, dst, dst, MIPS_R_AT); - break; - case BPF_ALU64 | BPF_SUB: - emit_instr(ctx, dsubu, dst, dst, MIPS_R_AT); - break; - case BPF_ALU | BPF_ADD: - emit_instr(ctx, addu, dst, dst, MIPS_R_AT); - break; - case BPF_ALU | BPF_SUB: - emit_instr(ctx, subu, dst, dst, MIPS_R_AT); - break; - default: - return -EINVAL; - } - } - } - - return 0; -} - -static void emit_const_to_reg(struct jit_ctx *ctx, int dst, unsigned long value) -{ - if (value >= S16_MIN || value <= S16_MAX) { - emit_instr_long(ctx, daddiu, addiu, dst, MIPS_R_ZERO, (int)value); - } else if (value >= S32_MIN || - (value <= S32_MAX && value > U16_MAX)) { - emit_instr(ctx, lui, dst, (s32)(s16)(value >> 16)); - emit_instr(ctx, ori, dst, dst, (unsigned int)(value & 0xffff)); - } else { - int i; - bool seen_part = false; - int needed_shift = 0; - - for (i = 0; i < 4; i++) { - u64 part = (value >> (16 * (3 - i))) & 0xffff; - - if (seen_part && needed_shift > 0 && (part || i == 3)) { - emit_instr(ctx, dsll_safe, dst, dst, needed_shift); - needed_shift = 0; - } - if (part) { - if (i == 0 || (!seen_part && i < 3 && part < 0x8000)) { - emit_instr(ctx, lui, dst, (s32)(s16)part); - needed_shift = -16; - } else { - emit_instr(ctx, ori, dst, - seen_part ? 
dst : MIPS_R_ZERO, - (unsigned int)part); - } - seen_part = true; - } - if (seen_part) - needed_shift += 16; - } - } -} - -/* - * Push BPF regs R3-R5 to the stack, skipping BPF regs R1-R2 which are - * passed via MIPS register pairs in $a0-$a3. Register order within pairs - * and the memory storage order are identical i.e. endian native. - */ -static void emit_push_args(struct jit_ctx *ctx) -{ - int store_offset = 2 * sizeof(u64); /* Skip R1-R2 in $a0-$a3 */ - int bpf, reg; - - for (bpf = BPF_REG_3; bpf <= BPF_REG_5; bpf++) { - reg = bpf2mips[bpf].reg; - - emit_instr(ctx, sw, LO(reg), OFFLO(store_offset), MIPS_R_SP); - emit_instr(ctx, sw, HI(reg), OFFHI(store_offset), MIPS_R_SP); - store_offset += sizeof(u64); - } -} - -/* - * Common helper for BPF_CALL insn, handling TCC and ABI variations. - * Kernel calls under O32 ABI require arguments passed on the stack, - * while BPF2BPF calls need the TCC passed via register as expected - * by the subprog's prologue. - * - * Under MIPS32 O32 ABI calling convention, u64 BPF regs R1-R2 are passed - * via reg pairs in $a0-$a3, while BPF regs R3-R5 are passed via the stack. - * Stack space is still reserved for $a0-$a3, and the whole area aligned. - */ -#define ARGS_SIZE (5 * sizeof(u64)) - -void emit_bpf_call(struct jit_ctx *ctx, const struct bpf_insn *insn) -{ - int stack_adjust = ALIGN(ARGS_SIZE, STACK_ALIGN); - int tcc_run = bpf2mips[JIT_RUN_TCC].reg ? 
- bpf2mips[JIT_RUN_TCC].reg : - TEMP_PASS_TCC; - int tcc_sav = bpf2mips[JIT_SAV_TCC].reg; - long func_addr; - - ctx->flags |= EBPF_SAVE_RA; - - /* Ensure TCC passed into BPF subprog */ - if ((insn->src_reg == BPF_PSEUDO_CALL) && - tail_call_present(ctx) && !(ctx->flags & EBPF_TCC_IN_RUN)) { - /* Set TCC from reg or stack */ - if (tcc_sav) - emit_instr(ctx, move, tcc_run, tcc_sav); - else - emit_instr_long(ctx, ld, lw, tcc_run, - ctx->bpf_stack_off, MIPS_R_SP); - } - - /* Push O32 stack args for kernel call */ - if (!is64bit() && (insn->src_reg != BPF_PSEUDO_CALL)) { - emit_instr(ctx, addiu, MIPS_R_SP, MIPS_R_SP, -stack_adjust); - emit_push_args(ctx); - } - - func_addr = (long)__bpf_call_base + insn->imm; - emit_const_to_reg(ctx, MIPS_R_T9, func_addr); - emit_instr(ctx, jalr, MIPS_R_RA, MIPS_R_T9); - /* Delay slot */ - emit_instr(ctx, nop); - - /* Restore stack */ - if (!is64bit() && (insn->src_reg != BPF_PSEUDO_CALL)) - emit_instr(ctx, addiu, MIPS_R_SP, MIPS_R_SP, stack_adjust); -} - -/* - * Tail call helper arguments passed via BPF ABI as u64 parameters. On - * MIPS64 N64 ABI systems these are native regs, while on MIPS32 O32 ABI - * systems these are reg pairs: - * - * R1 -> &ctx - * R2 -> &array - * R3 -> index - */ -static int emit_bpf_tail_call(struct jit_ctx *ctx, int this_idx) -{ - int tcc_run = bpf2mips[JIT_RUN_TCC].reg ? 
- bpf2mips[JIT_RUN_TCC].reg : - TEMP_PASS_TCC; - int tcc_sav = bpf2mips[JIT_SAV_TCC].reg; - int r2 = bpf2mips[BPF_REG_2].reg; - int r3 = bpf2mips[BPF_REG_3].reg; - int off, b_off; - int tcc; - - ctx->flags |= EBPF_SEEN_TC; - /* - * if (index >= array->map.max_entries) - * goto out; - */ - if (is64bit()) - /* Mask index as 32-bit */ - gen_zext_insn(r3, true, ctx); - off = offsetof(struct bpf_array, map.max_entries); - emit_instr_long(ctx, lwu, lw, MIPS_R_AT, off, LO(r2)); - emit_instr(ctx, sltu, MIPS_R_AT, MIPS_R_AT, LO(r3)); - b_off = b_imm(this_idx + 1, ctx); - emit_instr(ctx, bnez, MIPS_R_AT, b_off); - /* - * if (TCC-- < 0) - * goto out; - */ - /* Delay slot */ - tcc = (ctx->flags & EBPF_TCC_IN_RUN) ? tcc_run : tcc_sav; - /* Get TCC from reg or stack */ - if (tcc) - emit_instr(ctx, move, MIPS_R_T8, tcc); - else - emit_instr_long(ctx, ld, lw, MIPS_R_T8, - ctx->bpf_stack_off, MIPS_R_SP); - b_off = b_imm(this_idx + 1, ctx); - emit_instr(ctx, bltz, MIPS_R_T8, b_off); - /* - * prog = array->ptrs[index]; - * if (prog == NULL) - * goto out; - */ - /* Delay slot */ - emit_instr_long(ctx, dsll, sll, MIPS_R_AT, LO(r3), ilog2(sizeof(long))); - emit_instr_long(ctx, daddu, addu, MIPS_R_AT, MIPS_R_AT, LO(r2)); - off = offsetof(struct bpf_array, ptrs); - emit_instr_long(ctx, ld, lw, MIPS_R_AT, off, MIPS_R_AT); - b_off = b_imm(this_idx + 1, ctx); - emit_instr(ctx, beqz, MIPS_R_AT, b_off); - /* Delay slot */ - emit_instr(ctx, nop); - - /* goto *(prog->bpf_func + 4); */ - off = offsetof(struct bpf_prog, bpf_func); - emit_instr_long(ctx, ld, lw, MIPS_R_T9, off, MIPS_R_AT); - /* All systems are go... 
decrement and propagate TCC */ - emit_instr_long(ctx, daddiu, addiu, tcc_run, MIPS_R_T8, -1); - /* Skip first instruction (TCC initialization) */ - emit_instr_long(ctx, daddiu, addiu, MIPS_R_T9, MIPS_R_T9, 4); - return build_int_epilogue(ctx, MIPS_R_T9); -} - -static bool is_bad_offset(int b_off) -{ - return b_off > 0x1ffff || b_off < -0x20000; -} - -/* Returns the number of insn slots consumed. */ -static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, - int this_idx, int exit_idx) -{ - int src, dst, r, td, ts, mem_off, b_off; - bool need_swap, did_move, cmp_eq; - unsigned int target = 0; - u64 t64; - s64 t64s; - int bpf_op = BPF_OP(insn->code); - - if (IS_ENABLED(CONFIG_32BIT) && ((BPF_CLASS(insn->code) == BPF_ALU64) - || (bpf_op == BPF_DW))) - return -EINVAL; - - switch (insn->code) { - case BPF_ALU64 | BPF_ADD | BPF_K: /* ALU64_IMM */ - case BPF_ALU64 | BPF_SUB | BPF_K: /* ALU64_IMM */ - case BPF_ALU64 | BPF_OR | BPF_K: /* ALU64_IMM */ - case BPF_ALU64 | BPF_AND | BPF_K: /* ALU64_IMM */ - case BPF_ALU64 | BPF_LSH | BPF_K: /* ALU64_IMM */ - case BPF_ALU64 | BPF_RSH | BPF_K: /* ALU64_IMM */ - case BPF_ALU64 | BPF_XOR | BPF_K: /* ALU64_IMM */ - case BPF_ALU64 | BPF_ARSH | BPF_K: /* ALU64_IMM */ - case BPF_ALU64 | BPF_MOV | BPF_K: /* ALU64_IMM */ - case BPF_ALU | BPF_MOV | BPF_K: /* ALU32_IMM */ - case BPF_ALU | BPF_ADD | BPF_K: /* ALU32_IMM */ - case BPF_ALU | BPF_SUB | BPF_K: /* ALU32_IMM */ - case BPF_ALU | BPF_OR | BPF_K: /* ALU64_IMM */ - case BPF_ALU | BPF_AND | BPF_K: /* ALU64_IMM */ - case BPF_ALU | BPF_LSH | BPF_K: /* ALU64_IMM */ - case BPF_ALU | BPF_RSH | BPF_K: /* ALU64_IMM */ - case BPF_ALU | BPF_XOR | BPF_K: /* ALU64_IMM */ - case BPF_ALU | BPF_ARSH | BPF_K: /* ALU64_IMM */ - r = gen_imm_insn(insn, ctx, this_idx); - if (r < 0) - return r; - break; - case BPF_ALU64 | BPF_MUL | BPF_K: /* ALU64_IMM */ - dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); - if (dst < 0) - return dst; - if (get_reg_val_type(ctx, this_idx, insn->dst_reg) 
== REG_32BIT) - emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32); - if (insn->imm == 1) /* Mult by 1 is a nop */ - break; - gen_imm_to_reg(insn, MIPS_R_AT, ctx); - if (MIPS_ISA_REV >= 6) { - emit_instr(ctx, dmulu, dst, dst, MIPS_R_AT); - } else { - emit_instr(ctx, dmultu, MIPS_R_AT, dst); - emit_instr(ctx, mflo, dst); - } - break; - case BPF_ALU64 | BPF_NEG | BPF_K: /* ALU64_IMM */ - dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); - if (dst < 0) - return dst; - if (get_reg_val_type(ctx, this_idx, insn->dst_reg) == REG_32BIT) - emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32); - emit_instr(ctx, dsubu, dst, MIPS_R_ZERO, dst); - break; - case BPF_ALU | BPF_MUL | BPF_K: /* ALU_IMM */ - dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); - if (dst < 0) - return dst; - td = get_reg_val_type(ctx, this_idx, insn->dst_reg); - if (td == REG_64BIT) { - /* sign extend */ - emit_instr(ctx, sll, dst, dst, 0); - } - if (insn->imm == 1) /* Mult by 1 is a nop */ - break; - gen_imm_to_reg(insn, MIPS_R_AT, ctx); - if (MIPS_ISA_REV >= 6) { - emit_instr(ctx, mulu, dst, dst, MIPS_R_AT); - } else { - emit_instr(ctx, multu, dst, MIPS_R_AT); - emit_instr(ctx, mflo, dst); - } - break; - case BPF_ALU | BPF_NEG | BPF_K: /* ALU_IMM */ - dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); - if (dst < 0) - return dst; - td = get_reg_val_type(ctx, this_idx, insn->dst_reg); - if (td == REG_64BIT) { - /* sign extend */ - emit_instr(ctx, sll, dst, dst, 0); - } - emit_instr(ctx, subu, dst, MIPS_R_ZERO, dst); - break; - case BPF_ALU | BPF_DIV | BPF_K: /* ALU_IMM */ - case BPF_ALU | BPF_MOD | BPF_K: /* ALU_IMM */ - if (insn->imm == 0) - return -EINVAL; - dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); - if (dst < 0) - return dst; - td = get_reg_val_type(ctx, this_idx, insn->dst_reg); - if (td == REG_64BIT) - /* sign extend */ - emit_instr(ctx, sll, dst, dst, 0); - if (insn->imm == 1) { - /* div by 1 is a nop, mod by 1 is zero */ - if (bpf_op == BPF_MOD) - emit_instr(ctx, addu, dst, MIPS_R_ZERO, 
MIPS_R_ZERO); - break; - } - gen_imm_to_reg(insn, MIPS_R_AT, ctx); - if (MIPS_ISA_REV >= 6) { - if (bpf_op == BPF_DIV) - emit_instr(ctx, divu_r6, dst, dst, MIPS_R_AT); - else - emit_instr(ctx, modu, dst, dst, MIPS_R_AT); - break; - } - emit_instr(ctx, divu, dst, MIPS_R_AT); - if (bpf_op == BPF_DIV) - emit_instr(ctx, mflo, dst); - else - emit_instr(ctx, mfhi, dst); - break; - case BPF_ALU64 | BPF_DIV | BPF_K: /* ALU_IMM */ - case BPF_ALU64 | BPF_MOD | BPF_K: /* ALU_IMM */ - if (insn->imm == 0) - return -EINVAL; - dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); - if (dst < 0) - return dst; - if (get_reg_val_type(ctx, this_idx, insn->dst_reg) == REG_32BIT) - emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32); - if (insn->imm == 1) { - /* div by 1 is a nop, mod by 1 is zero */ - if (bpf_op == BPF_MOD) - emit_instr(ctx, addu, dst, MIPS_R_ZERO, MIPS_R_ZERO); - break; - } - gen_imm_to_reg(insn, MIPS_R_AT, ctx); - if (MIPS_ISA_REV >= 6) { - if (bpf_op == BPF_DIV) - emit_instr(ctx, ddivu_r6, dst, dst, MIPS_R_AT); - else - emit_instr(ctx, dmodu, dst, dst, MIPS_R_AT); - break; - } - emit_instr(ctx, ddivu, dst, MIPS_R_AT); - if (bpf_op == BPF_DIV) - emit_instr(ctx, mflo, dst); - else - emit_instr(ctx, mfhi, dst); - break; - case BPF_ALU64 | BPF_MOV | BPF_X: /* ALU64_REG */ - case BPF_ALU64 | BPF_ADD | BPF_X: /* ALU64_REG */ - case BPF_ALU64 | BPF_SUB | BPF_X: /* ALU64_REG */ - case BPF_ALU64 | BPF_XOR | BPF_X: /* ALU64_REG */ - case BPF_ALU64 | BPF_OR | BPF_X: /* ALU64_REG */ - case BPF_ALU64 | BPF_AND | BPF_X: /* ALU64_REG */ - case BPF_ALU64 | BPF_MUL | BPF_X: /* ALU64_REG */ - case BPF_ALU64 | BPF_DIV | BPF_X: /* ALU64_REG */ - case BPF_ALU64 | BPF_MOD | BPF_X: /* ALU64_REG */ - case BPF_ALU64 | BPF_LSH | BPF_X: /* ALU64_REG */ - case BPF_ALU64 | BPF_RSH | BPF_X: /* ALU64_REG */ - case BPF_ALU64 | BPF_ARSH | BPF_X: /* ALU64_REG */ - src = ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK); - dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); - if (src < 0 || dst < 0) - return 
-EINVAL; - if (get_reg_val_type(ctx, this_idx, insn->dst_reg) == REG_32BIT) - emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32); - did_move = false; - if (get_reg_val_type(ctx, this_idx, insn->src_reg) == REG_32BIT) { - int tmp_reg = MIPS_R_AT; - - if (bpf_op == BPF_MOV) { - tmp_reg = dst; - did_move = true; - } - emit_instr(ctx, daddu, tmp_reg, src, MIPS_R_ZERO); - emit_instr(ctx, dinsu, tmp_reg, MIPS_R_ZERO, 32, 32); - src = MIPS_R_AT; - } - switch (bpf_op) { - case BPF_MOV: - if (!did_move) - emit_instr(ctx, daddu, dst, src, MIPS_R_ZERO); - break; - case BPF_ADD: - emit_instr(ctx, daddu, dst, dst, src); - break; - case BPF_SUB: - emit_instr(ctx, dsubu, dst, dst, src); - break; - case BPF_XOR: - emit_instr(ctx, xor, dst, dst, src); - break; - case BPF_OR: - emit_instr(ctx, or, dst, dst, src); - break; - case BPF_AND: - emit_instr(ctx, and, dst, dst, src); - break; - case BPF_MUL: - if (MIPS_ISA_REV >= 6) { - emit_instr(ctx, dmulu, dst, dst, src); - } else { - emit_instr(ctx, dmultu, dst, src); - emit_instr(ctx, mflo, dst); - } - break; - case BPF_DIV: - case BPF_MOD: - if (MIPS_ISA_REV >= 6) { - if (bpf_op == BPF_DIV) - emit_instr(ctx, ddivu_r6, - dst, dst, src); - else - emit_instr(ctx, dmodu, dst, dst, src); - break; - } - emit_instr(ctx, ddivu, dst, src); - if (bpf_op == BPF_DIV) - emit_instr(ctx, mflo, dst); - else - emit_instr(ctx, mfhi, dst); - break; - case BPF_LSH: - emit_instr(ctx, dsllv, dst, dst, src); - break; - case BPF_RSH: - emit_instr(ctx, dsrlv, dst, dst, src); - break; - case BPF_ARSH: - emit_instr(ctx, dsrav, dst, dst, src); - break; - default: - pr_err("ALU64_REG NOT HANDLED\n"); - return -EINVAL; - } - break; - case BPF_ALU | BPF_MOV | BPF_X: /* ALU_REG */ - case BPF_ALU | BPF_ADD | BPF_X: /* ALU_REG */ - case BPF_ALU | BPF_SUB | BPF_X: /* ALU_REG */ - case BPF_ALU | BPF_XOR | BPF_X: /* ALU_REG */ - case BPF_ALU | BPF_OR | BPF_X: /* ALU_REG */ - case BPF_ALU | BPF_AND | BPF_X: /* ALU_REG */ - case BPF_ALU | BPF_MUL | BPF_X: /* ALU_REG */ - 
case BPF_ALU | BPF_DIV | BPF_X: /* ALU_REG */ - case BPF_ALU | BPF_MOD | BPF_X: /* ALU_REG */ - case BPF_ALU | BPF_LSH | BPF_X: /* ALU_REG */ - case BPF_ALU | BPF_RSH | BPF_X: /* ALU_REG */ - case BPF_ALU | BPF_ARSH | BPF_X: /* ALU_REG */ - src = ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK); - dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); - if (src < 0 || dst < 0) - return -EINVAL; - td = get_reg_val_type(ctx, this_idx, insn->dst_reg); - if (td == REG_64BIT) { - /* sign extend */ - emit_instr(ctx, sll, dst, dst, 0); - } - did_move = false; - ts = get_reg_val_type(ctx, this_idx, insn->src_reg); - if (ts == REG_64BIT) { - int tmp_reg = MIPS_R_AT; - - if (bpf_op == BPF_MOV) { - tmp_reg = dst; - did_move = true; - } - /* sign extend */ - emit_instr(ctx, sll, tmp_reg, src, 0); - src = MIPS_R_AT; - } - switch (bpf_op) { - case BPF_MOV: - if (!did_move) - emit_instr(ctx, addu, dst, src, MIPS_R_ZERO); - break; - case BPF_ADD: - emit_instr(ctx, addu, dst, dst, src); - break; - case BPF_SUB: - emit_instr(ctx, subu, dst, dst, src); - break; - case BPF_XOR: - emit_instr(ctx, xor, dst, dst, src); - break; - case BPF_OR: - emit_instr(ctx, or, dst, dst, src); - break; - case BPF_AND: - emit_instr(ctx, and, dst, dst, src); - break; - case BPF_MUL: - emit_instr(ctx, mul, dst, dst, src); - break; - case BPF_DIV: - case BPF_MOD: - if (MIPS_ISA_REV >= 6) { - if (bpf_op == BPF_DIV) - emit_instr(ctx, divu_r6, dst, dst, src); - else - emit_instr(ctx, modu, dst, dst, src); - break; - } - emit_instr(ctx, divu, dst, src); - if (bpf_op == BPF_DIV) - emit_instr(ctx, mflo, dst); - else - emit_instr(ctx, mfhi, dst); - break; - case BPF_LSH: - emit_instr(ctx, sllv, dst, dst, src); - break; - case BPF_RSH: - emit_instr(ctx, srlv, dst, dst, src); - break; - case BPF_ARSH: - emit_instr(ctx, srav, dst, dst, src); - break; - default: - pr_err("ALU_REG NOT HANDLED\n"); - return -EINVAL; - } - break; - case BPF_JMP | BPF_EXIT: - if (this_idx + 1 < exit_idx) { - b_off = b_imm(exit_idx, ctx); - if 
(is_bad_offset(b_off)) { - target = j_target(ctx, exit_idx); - if (target == (unsigned int)-1) - return -E2BIG; - emit_instr(ctx, j, target); - } else { - emit_instr(ctx, b, b_off); - } - emit_instr(ctx, nop); - } - break; - case BPF_JMP | BPF_JEQ | BPF_K: /* JMP_IMM */ - case BPF_JMP | BPF_JNE | BPF_K: /* JMP_IMM */ - cmp_eq = (bpf_op == BPF_JEQ); - dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK); - if (dst < 0) - return dst; - if (insn->imm == 0) { - src = MIPS_R_ZERO; - } else { - gen_imm_to_reg(insn, MIPS_R_AT, ctx); - src = MIPS_R_AT; - } - goto jeq_common; - case BPF_JMP | BPF_JEQ | BPF_X: /* JMP_REG */ - case BPF_JMP | BPF_JNE | BPF_X: - case BPF_JMP | BPF_JSLT | BPF_X: - case BPF_JMP | BPF_JSLE | BPF_X: - case BPF_JMP | BPF_JSGT | BPF_X: - case BPF_JMP | BPF_JSGE | BPF_X: - case BPF_JMP | BPF_JLT | BPF_X: - case BPF_JMP | BPF_JLE | BPF_X: - case BPF_JMP | BPF_JGT | BPF_X: - case BPF_JMP | BPF_JGE | BPF_X: - case BPF_JMP | BPF_JSET | BPF_X: - src = ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK); - dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK); - if (src < 0 || dst < 0) - return -EINVAL; - td = get_reg_val_type(ctx, this_idx, insn->dst_reg); - ts = get_reg_val_type(ctx, this_idx, insn->src_reg); - if (td == REG_32BIT && ts != REG_32BIT) { - emit_instr(ctx, sll, MIPS_R_AT, src, 0); - src = MIPS_R_AT; - } else if (ts == REG_32BIT && td != REG_32BIT) { - emit_instr(ctx, sll, MIPS_R_AT, dst, 0); - dst = MIPS_R_AT; - } - if (bpf_op == BPF_JSET) { - emit_instr(ctx, and, MIPS_R_AT, dst, src); - cmp_eq = false; - dst = MIPS_R_AT; - src = MIPS_R_ZERO; - } else if (bpf_op == BPF_JSGT || bpf_op == BPF_JSLE) { - emit_instr(ctx, dsubu, MIPS_R_AT, dst, src); - if ((insn + 1)->code == (BPF_JMP | BPF_EXIT) && insn->off == 1) { - b_off = b_imm(exit_idx, ctx); - if (is_bad_offset(b_off)) - return -E2BIG; - if (bpf_op == BPF_JSGT) - emit_instr(ctx, blez, MIPS_R_AT, b_off); - else - emit_instr(ctx, bgtz, MIPS_R_AT, b_off); - emit_instr(ctx, nop); - return 2; /* We consumed the 
exit. */ - } - b_off = b_imm(this_idx + insn->off + 1, ctx); - if (is_bad_offset(b_off)) - return -E2BIG; - if (bpf_op == BPF_JSGT) - emit_instr(ctx, bgtz, MIPS_R_AT, b_off); - else - emit_instr(ctx, blez, MIPS_R_AT, b_off); - emit_instr(ctx, nop); - break; - } else if (bpf_op == BPF_JSGE || bpf_op == BPF_JSLT) { - emit_instr(ctx, slt, MIPS_R_AT, dst, src); - cmp_eq = bpf_op == BPF_JSGE; - dst = MIPS_R_AT; - src = MIPS_R_ZERO; - } else if (bpf_op == BPF_JGT || bpf_op == BPF_JLE) { - /* dst or src could be AT */ - emit_instr(ctx, dsubu, MIPS_R_T8, dst, src); - emit_instr(ctx, sltu, MIPS_R_AT, dst, src); - /* SP known to be non-zero, movz becomes boolean not */ - if (MIPS_ISA_REV >= 6) { - emit_instr(ctx, seleqz, MIPS_R_T9, - MIPS_R_SP, MIPS_R_T8); - } else { - emit_instr(ctx, movz, MIPS_R_T9, - MIPS_R_SP, MIPS_R_T8); - emit_instr(ctx, movn, MIPS_R_T9, - MIPS_R_ZERO, MIPS_R_T8); - } - emit_instr(ctx, or, MIPS_R_AT, MIPS_R_T9, MIPS_R_AT); - cmp_eq = bpf_op == BPF_JGT; - dst = MIPS_R_AT; - src = MIPS_R_ZERO; - } else if (bpf_op == BPF_JGE || bpf_op == BPF_JLT) { - emit_instr(ctx, sltu, MIPS_R_AT, dst, src); - cmp_eq = bpf_op == BPF_JGE; - dst = MIPS_R_AT; - src = MIPS_R_ZERO; - } else { /* JNE/JEQ case */ - cmp_eq = (bpf_op == BPF_JEQ); - } -jeq_common: - /* - * If the next insn is EXIT and we are jumping arround - * only it, invert the sense of the compare and - * conditionally jump to the exit. Poor man's branch - * chaining. 
- */ - if ((insn + 1)->code == (BPF_JMP | BPF_EXIT) && insn->off == 1) { - b_off = b_imm(exit_idx, ctx); - if (is_bad_offset(b_off)) { - target = j_target(ctx, exit_idx); - if (target == (unsigned int)-1) - return -E2BIG; - cmp_eq = !cmp_eq; - b_off = 4 * 3; - if (!(ctx->offsets[this_idx] & OFFSETS_B_CONV)) { - ctx->offsets[this_idx] |= OFFSETS_B_CONV; - ctx->long_b_conversion = 1; - } - } - - if (cmp_eq) - emit_instr(ctx, bne, dst, src, b_off); - else - emit_instr(ctx, beq, dst, src, b_off); - emit_instr(ctx, nop); - if (ctx->offsets[this_idx] & OFFSETS_B_CONV) { - emit_instr(ctx, j, target); - emit_instr(ctx, nop); - } - return 2; /* We consumed the exit. */ - } - b_off = b_imm(this_idx + insn->off + 1, ctx); - if (is_bad_offset(b_off)) { - target = j_target(ctx, this_idx + insn->off + 1); - if (target == (unsigned int)-1) - return -E2BIG; - cmp_eq = !cmp_eq; - b_off = 4 * 3; - if (!(ctx->offsets[this_idx] & OFFSETS_B_CONV)) { - ctx->offsets[this_idx] |= OFFSETS_B_CONV; - ctx->long_b_conversion = 1; - } - } - - if (cmp_eq) - emit_instr(ctx, beq, dst, src, b_off); - else - emit_instr(ctx, bne, dst, src, b_off); - emit_instr(ctx, nop); - if (ctx->offsets[this_idx] & OFFSETS_B_CONV) { - emit_instr(ctx, j, target); - emit_instr(ctx, nop); - } - break; - case BPF_JMP | BPF_JSGT | BPF_K: /* JMP_IMM */ - case BPF_JMP | BPF_JSGE | BPF_K: /* JMP_IMM */ - case BPF_JMP | BPF_JSLT | BPF_K: /* JMP_IMM */ - case BPF_JMP | BPF_JSLE | BPF_K: /* JMP_IMM */ - cmp_eq = (bpf_op == BPF_JSGE); - dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK); - if (dst < 0) - return dst; - - if (insn->imm == 0) { - if ((insn + 1)->code == (BPF_JMP | BPF_EXIT) && insn->off == 1) { - b_off = b_imm(exit_idx, ctx); - if (is_bad_offset(b_off)) - return -E2BIG; - switch (bpf_op) { - case BPF_JSGT: - emit_instr(ctx, blez, dst, b_off); - break; - case BPF_JSGE: - emit_instr(ctx, bltz, dst, b_off); - break; - case BPF_JSLT: - emit_instr(ctx, bgez, dst, b_off); - break; - case BPF_JSLE: - emit_instr(ctx, 
bgtz, dst, b_off); - break; - } - emit_instr(ctx, nop); - return 2; /* We consumed the exit. */ - } - b_off = b_imm(this_idx + insn->off + 1, ctx); - if (is_bad_offset(b_off)) - return -E2BIG; - switch (bpf_op) { - case BPF_JSGT: - emit_instr(ctx, bgtz, dst, b_off); - break; - case BPF_JSGE: - emit_instr(ctx, bgez, dst, b_off); - break; - case BPF_JSLT: - emit_instr(ctx, bltz, dst, b_off); - break; - case BPF_JSLE: - emit_instr(ctx, blez, dst, b_off); - break; - } - emit_instr(ctx, nop); - break; - } - /* - * only "LT" compare available, so we must use imm + 1 - * to generate "GT" and imm -1 to generate LE - */ - if (bpf_op == BPF_JSGT) - t64s = insn->imm + 1; - else if (bpf_op == BPF_JSLE) - t64s = insn->imm + 1; - else - t64s = insn->imm; - - cmp_eq = bpf_op == BPF_JSGT || bpf_op == BPF_JSGE; - if (t64s >= S16_MIN && t64s <= S16_MAX) { - emit_instr(ctx, slti, MIPS_R_AT, dst, (int)t64s); - src = MIPS_R_AT; - dst = MIPS_R_ZERO; - goto jeq_common; - } - emit_const_to_reg(ctx, MIPS_R_AT, (u64)t64s); - emit_instr(ctx, slt, MIPS_R_AT, dst, MIPS_R_AT); - src = MIPS_R_AT; - dst = MIPS_R_ZERO; - goto jeq_common; - - case BPF_JMP | BPF_JGT | BPF_K: - case BPF_JMP | BPF_JGE | BPF_K: - case BPF_JMP | BPF_JLT | BPF_K: - case BPF_JMP | BPF_JLE | BPF_K: - cmp_eq = (bpf_op == BPF_JGE); - dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK); - if (dst < 0) - return dst; - /* - * only "LT" compare available, so we must use imm + 1 - * to generate "GT" and imm -1 to generate LE - */ - if (bpf_op == BPF_JGT) - t64s = (u64)(u32)(insn->imm) + 1; - else if (bpf_op == BPF_JLE) - t64s = (u64)(u32)(insn->imm) + 1; - else - t64s = (u64)(u32)(insn->imm); - - cmp_eq = bpf_op == BPF_JGT || bpf_op == BPF_JGE; - - emit_const_to_reg(ctx, MIPS_R_AT, (u64)t64s); - emit_instr(ctx, sltu, MIPS_R_AT, dst, MIPS_R_AT); - src = MIPS_R_AT; - dst = MIPS_R_ZERO; - goto jeq_common; - - case BPF_JMP | BPF_JSET | BPF_K: /* JMP_IMM */ - dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK); - if (dst < 0) - return dst; 
- - if (ctx->use_bbit_insns && hweight32((u32)insn->imm) == 1) { - if ((insn + 1)->code == (BPF_JMP | BPF_EXIT) && insn->off == 1) { - b_off = b_imm(exit_idx, ctx); - if (is_bad_offset(b_off)) - return -E2BIG; - emit_instr(ctx, bbit0, dst, ffs((u32)insn->imm) - 1, b_off); - emit_instr(ctx, nop); - return 2; /* We consumed the exit. */ - } - b_off = b_imm(this_idx + insn->off + 1, ctx); - if (is_bad_offset(b_off)) - return -E2BIG; - emit_instr(ctx, bbit1, dst, ffs((u32)insn->imm) - 1, b_off); - emit_instr(ctx, nop); - break; - } - t64 = (u32)insn->imm; - emit_const_to_reg(ctx, MIPS_R_AT, t64); - emit_instr(ctx, and, MIPS_R_AT, dst, MIPS_R_AT); - src = MIPS_R_AT; - dst = MIPS_R_ZERO; - cmp_eq = false; - goto jeq_common; - - case BPF_JMP | BPF_JA: - /* - * Prefer relative branch for easier debugging, but - * fall back if needed. - */ - b_off = b_imm(this_idx + insn->off + 1, ctx); - if (is_bad_offset(b_off)) { - target = j_target(ctx, this_idx + insn->off + 1); - if (target == (unsigned int)-1) - return -E2BIG; - emit_instr(ctx, j, target); - } else { - emit_instr(ctx, b, b_off); - } - emit_instr(ctx, nop); - break; - case BPF_LD | BPF_DW | BPF_IMM: - dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); - if (dst < 0) - return dst; - t64 = ((u64)(u32)insn->imm) | ((u64)(insn + 1)->imm << 32); - emit_const_to_reg(ctx, dst, t64); - return 2; /* Double slot insn */ - - case BPF_JMP | BPF_CALL: - emit_bpf_call(ctx, insn); - break; - - case BPF_JMP | BPF_TAIL_CALL: - if (emit_bpf_tail_call(ctx, this_idx)) - return -EINVAL; - break; - - case BPF_ALU | BPF_END | BPF_FROM_BE: - case BPF_ALU | BPF_END | BPF_FROM_LE: - dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); - if (dst < 0) - return dst; - td = get_reg_val_type(ctx, this_idx, insn->dst_reg); - if (insn->imm == 64 && td == REG_32BIT) - emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32); - - if (insn->imm != 64 && td == REG_64BIT) { - /* sign extend */ - emit_instr(ctx, sll, dst, dst, 0); - } - -#ifdef __BIG_ENDIAN - 
need_swap = (BPF_SRC(insn->code) == BPF_FROM_LE); -#else - need_swap = (BPF_SRC(insn->code) == BPF_FROM_BE); -#endif - if (insn->imm == 16) { - if (need_swap) - emit_instr(ctx, wsbh, dst, dst); - emit_instr(ctx, andi, dst, dst, 0xffff); - } else if (insn->imm == 32) { - if (need_swap) { - emit_instr(ctx, wsbh, dst, dst); - emit_instr(ctx, rotr, dst, dst, 16); - } - } else { /* 64-bit*/ - if (need_swap) { - emit_instr(ctx, dsbh, dst, dst); - emit_instr(ctx, dshd, dst, dst); - } - } - break; - - case BPF_ST | BPF_B | BPF_MEM: - case BPF_ST | BPF_H | BPF_MEM: - case BPF_ST | BPF_W | BPF_MEM: - case BPF_ST | BPF_DW | BPF_MEM: - dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK); - if (dst < 0) - return dst; - mem_off = insn->off; - gen_imm_to_reg(insn, MIPS_R_AT, ctx); - switch (BPF_SIZE(insn->code)) { - case BPF_B: - emit_instr(ctx, sb, MIPS_R_AT, mem_off, dst); - break; - case BPF_H: - emit_instr(ctx, sh, MIPS_R_AT, mem_off, dst); - break; - case BPF_W: - emit_instr(ctx, sw, MIPS_R_AT, mem_off, dst); - break; - case BPF_DW: - emit_instr(ctx, sd, MIPS_R_AT, mem_off, dst); - break; - } - break; - - case BPF_LDX | BPF_B | BPF_MEM: - case BPF_LDX | BPF_H | BPF_MEM: - case BPF_LDX | BPF_W | BPF_MEM: - case BPF_LDX | BPF_DW | BPF_MEM: - dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); - src = ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK); - if (dst < 0 || src < 0) - return -EINVAL; - mem_off = insn->off; - switch (BPF_SIZE(insn->code)) { - case BPF_B: - emit_instr(ctx, lbu, dst, mem_off, src); - break; - case BPF_H: - emit_instr(ctx, lhu, dst, mem_off, src); - break; - case BPF_W: - emit_instr(ctx, lw, dst, mem_off, src); - break; - case BPF_DW: - emit_instr(ctx, ld, dst, mem_off, src); - break; - } - break; - - case BPF_STX | BPF_B | BPF_MEM: - case BPF_STX | BPF_H | BPF_MEM: - case BPF_STX | BPF_W | BPF_MEM: - case BPF_STX | BPF_DW | BPF_MEM: - case BPF_STX | BPF_W | BPF_ATOMIC: - case BPF_STX | BPF_DW | BPF_ATOMIC: - dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK); - src = 
ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK); - if (src < 0 || dst < 0) - return -EINVAL; - mem_off = insn->off; - if (BPF_MODE(insn->code) == BPF_ATOMIC) { - if (insn->imm != BPF_ADD) { - pr_err("ATOMIC OP %02x NOT HANDLED\n", insn->imm); - return -EINVAL; - } - /* - * If mem_off does not fit within the 9 bit ll/sc - * instruction immediate field, use a temp reg. - */ - if (MIPS_ISA_REV >= 6 && - (mem_off >= BIT(8) || mem_off < -BIT(8))) { - emit_instr(ctx, daddiu, MIPS_R_T6, - dst, mem_off); - mem_off = 0; - dst = MIPS_R_T6; - } - switch (BPF_SIZE(insn->code)) { - case BPF_W: - if (get_reg_val_type(ctx, this_idx, insn->src_reg) == REG_32BIT) { - emit_instr(ctx, sll, MIPS_R_AT, src, 0); - src = MIPS_R_AT; - } - emit_instr(ctx, ll, MIPS_R_T8, mem_off, dst); - emit_instr(ctx, addu, MIPS_R_T8, MIPS_R_T8, src); - emit_instr(ctx, sc, MIPS_R_T8, mem_off, dst); - /* - * On failure back up to LL (-4 - * instructions of 4 bytes each - */ - emit_instr(ctx, beq, MIPS_R_T8, MIPS_R_ZERO, -4 * 4); - emit_instr(ctx, nop); - break; - case BPF_DW: - if (get_reg_val_type(ctx, this_idx, insn->src_reg) == REG_32BIT) { - emit_instr(ctx, daddu, MIPS_R_AT, src, MIPS_R_ZERO); - emit_instr(ctx, dinsu, MIPS_R_AT, MIPS_R_ZERO, 32, 32); - src = MIPS_R_AT; - } - emit_instr(ctx, lld, MIPS_R_T8, mem_off, dst); - emit_instr(ctx, daddu, MIPS_R_T8, MIPS_R_T8, src); - emit_instr(ctx, scd, MIPS_R_T8, mem_off, dst); - emit_instr(ctx, beq, MIPS_R_T8, MIPS_R_ZERO, -4 * 4); - emit_instr(ctx, nop); - break; - } - } else { /* BPF_MEM */ - switch (BPF_SIZE(insn->code)) { - case BPF_B: - emit_instr(ctx, sb, src, mem_off, dst); - break; - case BPF_H: - emit_instr(ctx, sh, src, mem_off, dst); - break; - case BPF_W: - emit_instr(ctx, sw, src, mem_off, dst); - break; - case BPF_DW: - if (get_reg_val_type(ctx, this_idx, insn->src_reg) == REG_32BIT) { - emit_instr(ctx, daddu, MIPS_R_AT, src, MIPS_R_ZERO); - emit_instr(ctx, dinsu, MIPS_R_AT, MIPS_R_ZERO, 32, 32); - src = MIPS_R_AT; - } - emit_instr(ctx, sd, src, 
mem_off, dst); - break; - } - } - break; - - default: - pr_err("NOT HANDLED %d - (%02x)\n", - this_idx, (unsigned int)insn->code); - return -EINVAL; - } - return 1; -} - -#define RVT_VISITED_MASK 0xc000000000000000ull -#define RVT_FALL_THROUGH 0x4000000000000000ull -#define RVT_BRANCH_TAKEN 0x8000000000000000ull -#define RVT_DONE (RVT_FALL_THROUGH | RVT_BRANCH_TAKEN) - -static int build_int_body(struct jit_ctx *ctx) -{ - const struct bpf_prog *prog = ctx->skf; - const struct bpf_insn *insn; - int i, r; - - for (i = 0; i < prog->len; ) { - insn = prog->insnsi + i; - if ((ctx->reg_val_types[i] & RVT_VISITED_MASK) == 0) { - /* dead instruction, don't emit it. */ - i++; - continue; - } - - if (ctx->target == NULL) - ctx->offsets[i] = (ctx->offsets[i] & OFFSETS_B_CONV) | (ctx->idx * 4); - - r = build_one_insn(insn, ctx, i, prog->len); - if (r < 0) - return r; - i += r; - } - /* epilogue offset */ - if (ctx->target == NULL) - ctx->offsets[i] = ctx->idx * 4; - - /* - * All exits have an offset of the epilogue, some offsets may - * not have been set due to banch-around threading, so set - * them now. 
- */ - if (ctx->target == NULL) - for (i = 0; i < prog->len; i++) { - insn = prog->insnsi + i; - if (insn->code == (BPF_JMP | BPF_EXIT)) - ctx->offsets[i] = ctx->idx * 4; - } - return 0; -} - -/* return the last idx processed, or negative for error */ -static int reg_val_propagate_range(struct jit_ctx *ctx, u64 initial_rvt, - int start_idx, bool follow_taken) -{ - const struct bpf_prog *prog = ctx->skf; - const struct bpf_insn *insn; - u64 exit_rvt = initial_rvt; - u64 *rvt = ctx->reg_val_types; - int idx; - int reg; - - for (idx = start_idx; idx < prog->len; idx++) { - rvt[idx] = (rvt[idx] & RVT_VISITED_MASK) | exit_rvt; - insn = prog->insnsi + idx; - switch (BPF_CLASS(insn->code)) { - case BPF_ALU: - switch (BPF_OP(insn->code)) { - case BPF_ADD: - case BPF_SUB: - case BPF_MUL: - case BPF_DIV: - case BPF_OR: - case BPF_AND: - case BPF_LSH: - case BPF_RSH: - case BPF_ARSH: - case BPF_NEG: - case BPF_MOD: - case BPF_XOR: - set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT); - break; - case BPF_MOV: - if (BPF_SRC(insn->code)) { - set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT); - } else { - /* IMM to REG move*/ - if (insn->imm >= 0) - set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT_POS); - else - set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT); - } - break; - case BPF_END: - if (insn->imm == 64) - set_reg_val_type(&exit_rvt, insn->dst_reg, REG_64BIT); - else if (insn->imm == 32) - set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT); - else /* insn->imm == 16 */ - set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT_POS); - break; - } - rvt[idx] |= RVT_DONE; - break; - case BPF_ALU64: - switch (BPF_OP(insn->code)) { - case BPF_MOV: - if (BPF_SRC(insn->code)) { - /* REG to REG move*/ - set_reg_val_type(&exit_rvt, insn->dst_reg, REG_64BIT); - } else { - /* IMM to REG move*/ - if (insn->imm >= 0) - set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT_POS); - else - set_reg_val_type(&exit_rvt, insn->dst_reg, REG_64BIT_32BIT); - } - break; - default: 
- set_reg_val_type(&exit_rvt, insn->dst_reg, REG_64BIT); - } - rvt[idx] |= RVT_DONE; - break; - case BPF_LD: - switch (BPF_SIZE(insn->code)) { - case BPF_DW: - if (BPF_MODE(insn->code) == BPF_IMM) { - s64 val; - - val = (s64)((u32)insn->imm | ((u64)(insn + 1)->imm << 32)); - if (val > 0 && val <= S32_MAX) - set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT_POS); - else if (val >= S32_MIN && val <= S32_MAX) - set_reg_val_type(&exit_rvt, insn->dst_reg, REG_64BIT_32BIT); - else - set_reg_val_type(&exit_rvt, insn->dst_reg, REG_64BIT); - rvt[idx] |= RVT_DONE; - idx++; - } else { - set_reg_val_type(&exit_rvt, insn->dst_reg, REG_64BIT); - } - break; - case BPF_B: - case BPF_H: - set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT_POS); - break; - case BPF_W: - if (BPF_MODE(insn->code) == BPF_IMM) - set_reg_val_type(&exit_rvt, insn->dst_reg, - insn->imm >= 0 ? REG_32BIT_POS : REG_32BIT); - else - set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT); - break; - } - rvt[idx] |= RVT_DONE; - break; - case BPF_LDX: - switch (BPF_SIZE(insn->code)) { - case BPF_DW: - set_reg_val_type(&exit_rvt, insn->dst_reg, REG_64BIT); - break; - case BPF_B: - case BPF_H: - set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT_POS); - break; - case BPF_W: - set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT); - break; - } - rvt[idx] |= RVT_DONE; - break; - case BPF_JMP: - case BPF_JMP32: - switch (BPF_OP(insn->code)) { - case BPF_EXIT: - rvt[idx] = RVT_DONE | exit_rvt; - rvt[prog->len] = exit_rvt; - return idx; - case BPF_JA: - { - int tgt = idx + 1 + insn->off; - bool visited = (rvt[tgt] & RVT_FALL_THROUGH); - - rvt[idx] |= RVT_DONE; - /* - * Verifier dead code patching can use - * infinite-loop traps, causing hangs and - * RCU stalls here. Treat traps as nops - * if detected and fall through. - */ - if (insn->off == -1) - break; - /* - * Bounded loops cause the same issues in - * fallthrough mode; follow only if jump - * target is unvisited to mitigate. 
- */ - if (insn->off < 0 && !follow_taken && visited) - break; - idx += insn->off; - break; - } - case BPF_JEQ: - case BPF_JGT: - case BPF_JGE: - case BPF_JLT: - case BPF_JLE: - case BPF_JSET: - case BPF_JNE: - case BPF_JSGT: - case BPF_JSGE: - case BPF_JSLT: - case BPF_JSLE: - if (follow_taken) { - rvt[idx] |= RVT_BRANCH_TAKEN; - idx += insn->off; - follow_taken = false; - } else { - rvt[idx] |= RVT_FALL_THROUGH; - } - break; - case BPF_CALL: - set_reg_val_type(&exit_rvt, BPF_REG_0, REG_64BIT); - /* Upon call return, argument registers are clobbered. */ - for (reg = BPF_REG_0; reg <= BPF_REG_5; reg++) - set_reg_val_type(&exit_rvt, reg, REG_64BIT); - - rvt[idx] |= RVT_DONE; - break; - case BPF_TAIL_CALL: - rvt[idx] |= RVT_DONE; - break; - default: - WARN(1, "Unhandled BPF_JMP case.\n"); - rvt[idx] |= RVT_DONE; - break; - } - break; - default: - rvt[idx] |= RVT_DONE; - break; - } - } - return idx; -} - -/* - * Track the value range (i.e. 32-bit vs. 64-bit) of each register at - * each eBPF insn. This allows unneeded sign and zero extension - * operations to be omitted. - * - * Doesn't handle yet confluence of control paths with conflicting - * ranges, but it is good enough for most sane code. - */ -static int reg_val_propagate(struct jit_ctx *ctx) -{ - const struct bpf_prog *prog = ctx->skf; - u64 exit_rvt; - int reg; - int i; - - /* - * 11 registers * 3 bits/reg leaves top bits free for other - * uses. Bit-62..63 used to see if we have visited an insn. - */ - exit_rvt = 0; - - /* Upon entry, argument registers are 64-bit. */ - for (reg = BPF_REG_1; reg <= BPF_REG_5; reg++) - set_reg_val_type(&exit_rvt, reg, REG_64BIT); - - /* - * First follow all conditional branches on the fall-through - * edge of control flow.. - */ - reg_val_propagate_range(ctx, exit_rvt, 0, false); -restart_search: - /* - * Then repeatedly find the first conditional branch where - * both edges of control flow have not been taken, and follow - * the branch taken edge. 
We will end up restarting the - * search once per conditional branch insn. - */ - for (i = 0; i < prog->len; i++) { - u64 rvt = ctx->reg_val_types[i]; - - if ((rvt & RVT_VISITED_MASK) == RVT_DONE || - (rvt & RVT_VISITED_MASK) == 0) - continue; - if ((rvt & RVT_VISITED_MASK) == RVT_FALL_THROUGH) { - reg_val_propagate_range(ctx, rvt & ~RVT_VISITED_MASK, i, true); - } else { /* RVT_BRANCH_TAKEN */ - WARN(1, "Unexpected RVT_BRANCH_TAKEN case.\n"); - reg_val_propagate_range(ctx, rvt & ~RVT_VISITED_MASK, i, false); - } - goto restart_search; - } - /* - * Eventually all conditional branches have been followed on - * both branches and we are done. Any insn that has not been - * visited at this point is dead. - */ - - return 0; -} - -static void jit_fill_hole(void *area, unsigned int size) -{ - u32 *p; - - /* We are guaranteed to have aligned memory. */ - for (p = area; size >= sizeof(u32); size -= sizeof(u32)) - uasm_i_break(&p, BRK_BUG); /* Increments p */ -} - -/* - * Save and restore the BPF VM state across a direct kernel call. This - * includes the caller-saved registers used for BPF_REG_0 .. BPF_REG_5 - * and BPF_REG_AX used by the verifier for blinding and other dark arts. - * Restore avoids clobbering bpf_ret, which holds the call return value. - * BPF_REG_6 .. BPF_REG_10 and TCC are already callee-saved or on stack. 
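Reviewer's aside: the "11 registers * 3 bits/reg" packing mentioned in the comment above is easy to model in standalone C. The helpers below mirror set_reg_val_type()/get_reg_val_type() from the new ebpf_jit.h, but this is a hypothetical userspace sketch, not the kernel code itself:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace model of the packed register-type table: 11 BPF registers
 * x 3 bits each occupy the low 33 bits of a u64, leaving the top bits
 * (62..63 in the JIT) free for visit flags. */
enum reg_val_type { REG_UNKNOWN, REG_64BIT, REG_64BIT_32BIT, REG_32BIT, REG_32BIT_POS };

static void set_rvt(uint64_t *rvt, int reg, enum reg_val_type type)
{
	*rvt &= ~(7ull << (reg * 3));        /* clear the 3-bit slot */
	*rvt |= (uint64_t)type << (reg * 3); /* store the new type */
}

static enum reg_val_type get_rvt(uint64_t rvt, int reg)
{
	return (enum reg_val_type)((rvt >> (reg * 3)) & 7);
}
```

A write to one register's slot leaves all other slots untouched, which is what lets reg_val_propagate() carry the full machine state in a single u64 per insn.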
- */ -static const int bpf_caller_save[] = { - BPF_REG_0, - BPF_REG_1, - BPF_REG_2, - BPF_REG_3, - BPF_REG_4, - BPF_REG_5, - BPF_REG_AX, -}; - -#define CALLER_ENV_SIZE (ARRAY_SIZE(bpf_caller_save) * sizeof(u64)) - -void emit_caller_save(struct jit_ctx *ctx) -{ - int stack_adj = ALIGN(CALLER_ENV_SIZE, STACK_ALIGN); - int i, bpf, reg, store_offset; - - emit_instr_long(ctx, daddiu, addiu, MIPS_R_SP, MIPS_R_SP, -stack_adj); - - for (i = 0; i < ARRAY_SIZE(bpf_caller_save); i++) { - bpf = bpf_caller_save[i]; - reg = bpf2mips[bpf].reg; - store_offset = i * sizeof(u64); - - if (is64bit()) { - emit_instr(ctx, sd, reg, store_offset, MIPS_R_SP); - } else { - emit_instr(ctx, sw, LO(reg), - OFFLO(store_offset), MIPS_R_SP); - emit_instr(ctx, sw, HI(reg), - OFFHI(store_offset), MIPS_R_SP); - } - } -} - -void emit_caller_restore(struct jit_ctx *ctx, int bpf_ret) -{ - int stack_adj = ALIGN(CALLER_ENV_SIZE, STACK_ALIGN); - int i, bpf, reg, store_offset; - - for (i = 0; i < ARRAY_SIZE(bpf_caller_save); i++) { - bpf = bpf_caller_save[i]; - reg = bpf2mips[bpf].reg; - store_offset = i * sizeof(u64); - if (bpf == bpf_ret) - continue; - - if (is64bit()) { - emit_instr(ctx, ld, reg, store_offset, MIPS_R_SP); - } else { - emit_instr(ctx, lw, LO(reg), - OFFLO(store_offset), MIPS_R_SP); - emit_instr(ctx, lw, HI(reg), - OFFHI(store_offset), MIPS_R_SP); - } - } - - emit_instr_long(ctx, daddiu, addiu, MIPS_R_SP, MIPS_R_SP, stack_adj); -} - -struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog) -{ - struct bpf_prog *orig_prog = prog; - bool tmp_blinded = false; - struct bpf_prog *tmp; - struct bpf_binary_header *header = NULL; - struct jit_ctx ctx; - unsigned int image_size; - u8 *image_ptr; - - if (!prog->jit_requested) - return prog; - - tmp = bpf_jit_blind_constants(prog); - /* If blinding was requested and we failed during blinding, - * we must fall back to the interpreter. 
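For reference, the stack adjustment in emit_caller_save()/emit_caller_restore() reduces to rounding seven u64 save slots (BPF_R0..BPF_R5 plus BPF_REG_AX) up to the stack alignment. STACK_ALIGN is platform-dependent; the value 16 used below is only an assumption for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the CALLER_ENV_SIZE arithmetic above. ALIGN_UP is the
 * usual power-of-two round-up; STACK_ALIGN of 16 is assumed here. */
#define ALIGN_UP(x, a)  (((x) + (a) - 1) & ~((a) - 1))
#define NUM_CALLER_SAVE 7
#define CALLER_ENV_SIZE (NUM_CALLER_SAVE * sizeof(uint64_t))
```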
- */ - if (IS_ERR(tmp)) - return orig_prog; - if (tmp != prog) { - tmp_blinded = true; - prog = tmp; - } - - memset(&ctx, 0, sizeof(ctx)); - - preempt_disable(); - switch (current_cpu_type()) { - case CPU_CAVIUM_OCTEON: - case CPU_CAVIUM_OCTEON_PLUS: - case CPU_CAVIUM_OCTEON2: - case CPU_CAVIUM_OCTEON3: - ctx.use_bbit_insns = 1; - break; - default: - ctx.use_bbit_insns = 0; - } - preempt_enable(); - - ctx.offsets = kcalloc(prog->len + 1, sizeof(*ctx.offsets), GFP_KERNEL); - if (ctx.offsets == NULL) - goto out_err; - - ctx.reg_val_types = kcalloc(prog->len + 1, sizeof(*ctx.reg_val_types), GFP_KERNEL); - if (ctx.reg_val_types == NULL) - goto out_err; - - ctx.skf = prog; - - if (reg_val_propagate(&ctx)) - goto out_err; - - /* - * First pass discovers used resources and instruction offsets - * assuming short branches are used. - */ - if (build_int_body(&ctx)) - goto out_err; - - /* - * If no calls are made (EBPF_SAVE_RA), then tailcall count located - * in runtime reg if defined, else we backup to save reg or stack. - */ - if (tail_call_present(&ctx)) { - if (ctx.flags & EBPF_SAVE_RA) - ctx.flags |= bpf2mips[JIT_SAV_TCC].flags; - else if (bpf2mips[JIT_RUN_TCC].reg) - ctx.flags |= EBPF_TCC_IN_RUN; - } - - /* - * Second pass generates offsets, if any branches are out of - * range a jump-around long sequence is generated, and we have - * to try again from the beginning to generate the new - * offsets. This is done until no additional conversions are - * necessary. 
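The "try again until no additional conversions" loop above is a classic fixpoint: converting one short branch into a long jump-around sequence grows the code, which can push another branch out of range. A minimal standalone model (hypothetical instruction counts, sizes, and reach) behaves the same way:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the offset-generation fixpoint: a branch is
 * 1 word while its target is in range, 3 words once converted to a
 * jump-around sequence. Passes repeat until nothing changes. */
#define N 6
#define RANGE 3 /* deliberately tiny branch reach, in words */

static const int tgt[N] = { 5, -1, -1, -1, -1, 0 }; /* -1: not a branch */

static int gen_offsets(int size[N])
{
	int off[N + 1];
	bool changed;
	int passes = 0;

	do {
		int w = 0;

		changed = false;
		for (int i = 0; i < N; i++) {       /* word offset of each insn */
			off[i] = w;
			w += size[i];
		}
		off[N] = w;
		for (int i = 0; i < N; i++) {
			int d;

			if (tgt[i] < 0 || size[i] == 3) /* not a short branch */
				continue;
			d = off[tgt[i]] - off[i + 1];   /* relative to delay slot */
			if (d > RANGE || d < -RANGE) {
				size[i] = 3;            /* long jump-around */
				changed = true;
			}
		}
		passes++;
	} while (changed);
	return passes;
}
```

In this toy program both the forward branch at insn 0 and the backward branch at insn 5 go long on the first pass, and the second pass confirms the fixpoint.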
- */ - do { - ctx.idx = 0; - ctx.gen_b_offsets = 1; - ctx.long_b_conversion = 0; - if (build_int_prologue(&ctx)) - goto out_err; - if (build_int_body(&ctx)) - goto out_err; - if (build_int_epilogue(&ctx, MIPS_R_RA)) - goto out_err; - } while (ctx.long_b_conversion); - - image_size = 4 * ctx.idx; - - header = bpf_jit_binary_alloc(image_size, &image_ptr, - sizeof(u32), jit_fill_hole); - if (header == NULL) - goto out_err; - - ctx.target = (u32 *)image_ptr; - - /* Third pass generates the code */ - ctx.idx = 0; - if (build_int_prologue(&ctx)) - goto out_err; - if (build_int_body(&ctx)) - goto out_err; - if (build_int_epilogue(&ctx, MIPS_R_RA)) - goto out_err; - - /* Update the icache */ - flush_icache_range((unsigned long)ctx.target, - (unsigned long)&ctx.target[ctx.idx]); - - if (bpf_jit_enable > 1) - /* Dump JIT code */ - bpf_jit_dump(prog->len, image_size, 2, ctx.target); - - bpf_jit_binary_lock_ro(header); - prog->bpf_func = (void *)ctx.target; - prog->jited = 1; - prog->jited_len = image_size; -out_normal: - if (tmp_blinded) - bpf_jit_prog_release_other(prog, prog == orig_prog ? - tmp : orig_prog); - kfree(ctx.offsets); - kfree(ctx.reg_val_types); - - return prog; - -out_err: - prog = orig_prog; - if (header) - bpf_jit_binary_free(header); - goto out_normal; -} diff --git a/arch/mips/net/ebpf_jit.h b/arch/mips/net/ebpf_jit.h new file mode 100644 index 000000000000..1ca0fdf91842 --- /dev/null +++ b/arch/mips/net/ebpf_jit.h @@ -0,0 +1,295 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Just-In-Time compiler for eBPF filters on MIPS32/MIPS64 + * Copyright (c) 2021 Tony Ambardar + * + * Based on code from: + * + * Copyright (c) 2017 Cavium, Inc. + * Author: David Daney + * + * Copyright (c) 2014 Imagination Technologies Ltd. 
+ * Author: Markos Chandras + */ + +#ifndef _EBPF_JIT_H +#define _EBPF_JIT_H + +#include +#include +#include +#include + +/* Registers used by JIT: (MIPS32) (MIPS64) */ +#define MIPS_R_ZERO 0 +#define MIPS_R_AT 1 +#define MIPS_R_V0 2 /* BPF_R0 BPF_R0 */ +#define MIPS_R_V1 3 /* BPF_R0 BPF_TCC */ +#define MIPS_R_A0 4 /* BPF_R1 BPF_R1 */ +#define MIPS_R_A1 5 /* BPF_R1 BPF_R2 */ +#define MIPS_R_A2 6 /* BPF_R2 BPF_R3 */ +#define MIPS_R_A3 7 /* BPF_R2 BPF_R4 */ +/* MIPS64 swaps T0-T3 regs for extra args A4-A7. */ +#ifdef CONFIG_64BIT +# define MIPS_R_A4 8 /* (n/a) BPF_R5 */ +#else /* CONFIG_32BIT */ +# define MIPS_R_T0 8 /* BPF_R3 (n/a) */ +# define MIPS_R_T1 9 /* BPF_R3 (n/a) */ +# define MIPS_R_T2 10 /* BPF_R4 (n/a) */ +# define MIPS_R_T3 11 /* BPF_R4 (n/a) */ +#endif +#define MIPS_R_T4 12 /* BPF_R5 BPF_AX */ +#define MIPS_R_T5 13 /* BPF_R5 (free) */ +#define MIPS_R_T6 14 /* BPF_AX (used) */ +#define MIPS_R_T7 15 /* BPF_AX (free) */ +#define MIPS_R_S0 16 /* BPF_R6 BPF_R6 */ +#define MIPS_R_S1 17 /* BPF_R6 BPF_R7 */ +#define MIPS_R_S2 18 /* BPF_R7 BPF_R8 */ +#define MIPS_R_S3 19 /* BPF_R7 BPF_R9 */ +#define MIPS_R_S4 20 /* BPF_R8 BPF_TCC */ +#define MIPS_R_S5 21 /* BPF_R8 (free) */ +#define MIPS_R_S6 22 /* BPF_R9 (free) */ +#define MIPS_R_S7 23 /* BPF_R9 (free) */ +#define MIPS_R_T8 24 /* (used) (used) */ +#define MIPS_R_T9 25 /* (used) (used) */ +#define MIPS_R_SP 29 +#define MIPS_R_S8 30 /* BPF_R10 BPF_R10 */ +#define MIPS_R_RA 31 + +/* eBPF flags */ +#define EBPF_SAVE_S0 BIT(0) +#define EBPF_SAVE_S1 BIT(1) +#define EBPF_SAVE_S2 BIT(2) +#define EBPF_SAVE_S3 BIT(3) +#define EBPF_SAVE_S4 BIT(4) +#define EBPF_SAVE_S5 BIT(5) +#define EBPF_SAVE_S6 BIT(6) +#define EBPF_SAVE_S7 BIT(7) +#define EBPF_SAVE_S8 BIT(8) +#define EBPF_SAVE_RA BIT(9) +#define EBPF_SEEN_FP BIT(10) +#define EBPF_SEEN_TC BIT(11) +#define EBPF_TCC_IN_RUN BIT(12) + +/* + * Word-size and endianness-aware helpers for building MIPS32 vs MIPS64 + * tables and selecting 32-bit subregisters from a register pair 
base. + * Simplify use by emulating MIPS_R_SP and MIPS_R_ZERO as register pairs + * and adding HI/LO word memory offsets. + */ +#ifdef CONFIG_64BIT +# define HI(reg) (reg) +# define LO(reg) (reg) +# define OFFHI(mem) (mem) +# define OFFLO(mem) (mem) +#else /* CONFIG_32BIT */ +# ifdef __BIG_ENDIAN +# define HI(reg) ((reg) == MIPS_R_SP ? MIPS_R_ZERO : \ + (reg) == MIPS_R_S8 ? MIPS_R_ZERO : \ + (reg)) +# define LO(reg) ((reg) == MIPS_R_ZERO ? (reg) : \ + (reg) == MIPS_R_SP ? (reg) : \ + (reg) == MIPS_R_S8 ? (reg) : \ + (reg) + 1) +# define OFFHI(mem) (mem) +# define OFFLO(mem) ((mem) + sizeof(long)) +# else /* __LITTLE_ENDIAN */ +# define HI(reg) ((reg) == MIPS_R_ZERO ? (reg) : \ + (reg) == MIPS_R_SP ? MIPS_R_ZERO : \ + (reg) == MIPS_R_S8 ? MIPS_R_ZERO : \ + (reg) + 1) +# define LO(reg) (reg) +# define OFFHI(mem) ((mem) + sizeof(long)) +# define OFFLO(mem) (mem) +# endif +#endif + +static inline bool is64bit(void) +{ + return IS_ENABLED(CONFIG_64BIT); +} + +static inline bool isbigend(void) +{ + return IS_ENABLED(CONFIG_CPU_BIG_ENDIAN); +} + +/* + * For the mips64 ISA, we need to track the value range or type for + * each JIT register. The BPF machine requires zero extended 32-bit + * values, but the mips64 ISA requires sign extended 32-bit values. + * At each point in the BPF program we track the state of every + * register so that we can zero extend or sign extend as the BPF + * semantics require. + */ +enum reg_val_type { + /* uninitialized */ + REG_UNKNOWN, + /* not known to be 32-bit compatible. */ + REG_64BIT, + /* 32-bit compatible, no truncation needed for 64-bit ops. */ + REG_64BIT_32BIT, + /* 32-bit compatible, need truncation for 64-bit ops. */ + REG_32BIT, + /* 32-bit no sign/zero extension needed. 
*/ + REG_32BIT_POS +}; + +/** + * struct jit_ctx - JIT context + * @skf: The sk_filter + * @stack_size: eBPF stack size + * @idx: Instruction index + * @flags: JIT flags + * @offsets: Instruction offsets + * @target: Memory location for the compiled filter + * @reg_val_types Packed enum reg_val_type for each register. + */ +struct jit_ctx { + const struct bpf_prog *skf; + int stack_size; + int bpf_stack_off; + u32 idx; + u32 flags; + u32 *offsets; + u32 *target; + u64 *reg_val_types; + unsigned int long_b_conversion:1; + unsigned int gen_b_offsets:1; + unsigned int use_bbit_insns:1; +}; + +static inline void set_reg_val_type(u64 *rvt, int reg, enum reg_val_type type) +{ + *rvt &= ~(7ull << (reg * 3)); + *rvt |= ((u64)type << (reg * 3)); +} + +static inline enum reg_val_type get_reg_val_type(const struct jit_ctx *ctx, + int index, int reg) +{ + return (ctx->reg_val_types[index] >> (reg * 3)) & 7; +} + +/* Simply emit the instruction if the JIT memory space has been allocated */ +#define emit_instr_long(ctx, func64, func32, ...) \ +do { \ + if ((ctx)->target != NULL) { \ + u32 *p = &(ctx)->target[ctx->idx]; \ + if (IS_ENABLED(CONFIG_64BIT)) \ + uasm_i_##func64(&p, ##__VA_ARGS__); \ + else \ + uasm_i_##func32(&p, ##__VA_ARGS__); \ + } \ + (ctx)->idx++; \ +} while (0) + +#define emit_instr(ctx, func, ...) \ + emit_instr_long(ctx, func, func, ##__VA_ARGS__) + +/* + * High bit of offsets indicates if long branch conversion done at + * this insn. + */ +#define OFFSETS_B_CONV BIT(31) + +static inline unsigned int j_target(struct jit_ctx *ctx, int target_idx) +{ + unsigned long target_va, base_va; + unsigned int r; + + if (!ctx->target) + return 0; + + base_va = (unsigned long)ctx->target; + target_va = base_va + (ctx->offsets[target_idx] & ~OFFSETS_B_CONV); + + if ((base_va & ~0x0ffffffful) != (target_va & ~0x0ffffffful)) + return (unsigned int)-1; + r = target_va & 0x0ffffffful; + return r; +} + +/* Compute the immediate value for PC-relative branches. 
 */
+static inline u32 b_imm(unsigned int tgt, struct jit_ctx *ctx)
+{
+	if (!ctx->gen_b_offsets)
+		return 0;
+
+	/*
+	 * We want a pc-relative branch. tgt is the instruction offset
+	 * we want to jump to.
+	 *
+	 * Branch on MIPS:
+	 * I: target_offset <- sign_extend(offset)
+	 * I+1: PC += target_offset (delay slot)
+	 *
+	 * ctx->idx currently points to the branch instruction
+	 * but the offset is added to the delay slot so we need
+	 * to subtract 4.
+	 */
+	return (ctx->offsets[tgt] & ~OFFSETS_B_CONV) -
+		(ctx->idx * 4) - 4;
+}
+
+static inline bool tail_call_present(struct jit_ctx *ctx)
+{
+	return ctx->flags & EBPF_SEEN_TC || ctx->skf->aux->tail_call_reachable;
+}
+
+static inline bool is_bad_offset(int b_off)
+{
+	return b_off > 0x1ffff || b_off < -0x20000;
+}
+
+/* Sign-extend dst register or HI 32-bit reg of pair. */
+static inline void gen_sext_insn(int dst, struct jit_ctx *ctx)
+{
+	if (is64bit())
+		emit_instr(ctx, sll, dst, dst, 0);
+	else
+		emit_instr(ctx, sra, HI(dst), LO(dst), 31);
+}
+
+/*
+ * Zero-extend dst register or HI 32-bit reg of pair, if either forced
+ * or the BPF verifier does not insert its own zext insns.
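The delay-slot adjustment performed by b_imm() and the reach test in is_bad_offset() can be checked with plain arithmetic. This is a standalone sketch (byte offsets, 4 bytes per insn), not the JIT code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* The branch immediate computed by b_imm() above: MIPS applies the
 * offset from the delay slot (the insn after the branch), hence the
 * extra -4 on top of the branch's own byte position. */
static int32_t branch_imm(uint32_t tgt_byte_off, uint32_t branch_idx)
{
	return (int32_t)tgt_byte_off - (int32_t)(branch_idx * 4) - 4;
}

/* Mirrors is_bad_offset(): the 18-bit signed reach of a MIPS branch. */
static bool bad_offset(int32_t b_off)
{
	return b_off > 0x1ffff || b_off < -0x20000;
}
```

An offset of 0 means "continue at the insn immediately after the branch", and negative values reach backward, exactly as the comment in b_imm() describes.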
+ */ +static inline void gen_zext_insn(int dst, bool force, struct jit_ctx *ctx) +{ + if (!ctx->skf->aux->verifier_zext || force) { + if (is64bit()) + emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32); + else + emit_instr(ctx, and, HI(dst), MIPS_R_ZERO, MIPS_R_ZERO); + } +} + +enum reg_usage { + REG_SRC_FP_OK, + REG_SRC_NO_FP, + REG_DST_FP_OK, + REG_DST_NO_FP +}; + +extern int ebpf_to_mips_reg(struct jit_ctx *ctx, + const struct bpf_insn *insn, + enum reg_usage u); + +extern void gen_imm_to_reg(const struct bpf_insn *insn, int reg, + struct jit_ctx *ctx); + +extern void emit_const_to_reg(struct jit_ctx *ctx, int dst, unsigned long value); + +extern void emit_bpf_call(struct jit_ctx *ctx, const struct bpf_insn *insn); + +extern int emit_bpf_tail_call(struct jit_ctx *ctx, int this_idx); + +extern void emit_caller_save(struct jit_ctx *ctx); + +extern void emit_caller_restore(struct jit_ctx *ctx, int bpf_ret); + +extern int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, + int this_idx, int exit_idx); + +#endif /* _EBPF_JIT_H */ diff --git a/arch/mips/net/ebpf_jit_comp64.c b/arch/mips/net/ebpf_jit_comp64.c new file mode 100644 index 000000000000..de43f1758766 --- /dev/null +++ b/arch/mips/net/ebpf_jit_comp64.c @@ -0,0 +1,987 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Just-In-Time compiler for eBPF filters on MIPS32/MIPS64 + * Copyright (c) 2021 Tony Ambardar + * + * Based on code from: + * + * Copyright (c) 2017 Cavium, Inc. + * Author: David Daney + * + * Copyright (c) 2014 Imagination Technologies Ltd. 
+ * Author: Markos Chandras + */ + +#include +#include +#include + +#include "ebpf_jit.h" + +static int gen_imm_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, + int idx) +{ + int upper_bound, lower_bound; + int dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); + + if (dst < 0) + return dst; + + switch (BPF_OP(insn->code)) { + case BPF_MOV: + case BPF_ADD: + upper_bound = S16_MAX; + lower_bound = S16_MIN; + break; + case BPF_SUB: + upper_bound = -(int)S16_MIN; + lower_bound = -(int)S16_MAX; + break; + case BPF_AND: + case BPF_OR: + case BPF_XOR: + upper_bound = 0xffff; + lower_bound = 0; + break; + case BPF_RSH: + case BPF_LSH: + case BPF_ARSH: + /* Shift amounts are truncated, no need for bounds */ + upper_bound = S32_MAX; + lower_bound = S32_MIN; + break; + default: + return -EINVAL; + } + + /* + * Immediate move clobbers the register, so no sign/zero + * extension needed. + */ + if (BPF_CLASS(insn->code) == BPF_ALU64 && + BPF_OP(insn->code) != BPF_MOV && + get_reg_val_type(ctx, idx, insn->dst_reg) == REG_32BIT) + emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32); + /* BPF_ALU | BPF_LSH doesn't need separate sign extension */ + if (BPF_CLASS(insn->code) == BPF_ALU && + BPF_OP(insn->code) != BPF_LSH && + BPF_OP(insn->code) != BPF_MOV && + get_reg_val_type(ctx, idx, insn->dst_reg) != REG_32BIT) + emit_instr(ctx, sll, dst, dst, 0); + + if (insn->imm >= lower_bound && insn->imm <= upper_bound) { + /* single insn immediate case */ + switch (BPF_OP(insn->code) | BPF_CLASS(insn->code)) { + case BPF_ALU64 | BPF_MOV: + emit_instr(ctx, daddiu, dst, MIPS_R_ZERO, insn->imm); + break; + case BPF_ALU64 | BPF_AND: + case BPF_ALU | BPF_AND: + emit_instr(ctx, andi, dst, dst, insn->imm); + break; + case BPF_ALU64 | BPF_OR: + case BPF_ALU | BPF_OR: + emit_instr(ctx, ori, dst, dst, insn->imm); + break; + case BPF_ALU64 | BPF_XOR: + case BPF_ALU | BPF_XOR: + emit_instr(ctx, xori, dst, dst, insn->imm); + break; + case BPF_ALU64 | BPF_ADD: + emit_instr(ctx, daddiu, dst, dst, 
insn->imm); + break; + case BPF_ALU64 | BPF_SUB: + emit_instr(ctx, daddiu, dst, dst, -insn->imm); + break; + case BPF_ALU64 | BPF_RSH: + emit_instr(ctx, dsrl_safe, dst, dst, insn->imm & 0x3f); + break; + case BPF_ALU | BPF_RSH: + emit_instr(ctx, srl, dst, dst, insn->imm & 0x1f); + break; + case BPF_ALU64 | BPF_LSH: + emit_instr(ctx, dsll_safe, dst, dst, insn->imm & 0x3f); + break; + case BPF_ALU | BPF_LSH: + emit_instr(ctx, sll, dst, dst, insn->imm & 0x1f); + break; + case BPF_ALU64 | BPF_ARSH: + emit_instr(ctx, dsra_safe, dst, dst, insn->imm & 0x3f); + break; + case BPF_ALU | BPF_ARSH: + emit_instr(ctx, sra, dst, dst, insn->imm & 0x1f); + break; + case BPF_ALU | BPF_MOV: + emit_instr(ctx, addiu, dst, MIPS_R_ZERO, insn->imm); + break; + case BPF_ALU | BPF_ADD: + emit_instr(ctx, addiu, dst, dst, insn->imm); + break; + case BPF_ALU | BPF_SUB: + emit_instr(ctx, addiu, dst, dst, -insn->imm); + break; + default: + return -EINVAL; + } + } else { + /* multi insn immediate case */ + if (BPF_OP(insn->code) == BPF_MOV) { + gen_imm_to_reg(insn, dst, ctx); + } else { + gen_imm_to_reg(insn, MIPS_R_AT, ctx); + switch (BPF_OP(insn->code) | BPF_CLASS(insn->code)) { + case BPF_ALU64 | BPF_AND: + case BPF_ALU | BPF_AND: + emit_instr(ctx, and, dst, dst, MIPS_R_AT); + break; + case BPF_ALU64 | BPF_OR: + case BPF_ALU | BPF_OR: + emit_instr(ctx, or, dst, dst, MIPS_R_AT); + break; + case BPF_ALU64 | BPF_XOR: + case BPF_ALU | BPF_XOR: + emit_instr(ctx, xor, dst, dst, MIPS_R_AT); + break; + case BPF_ALU64 | BPF_ADD: + emit_instr(ctx, daddu, dst, dst, MIPS_R_AT); + break; + case BPF_ALU64 | BPF_SUB: + emit_instr(ctx, dsubu, dst, dst, MIPS_R_AT); + break; + case BPF_ALU | BPF_ADD: + emit_instr(ctx, addu, dst, dst, MIPS_R_AT); + break; + case BPF_ALU | BPF_SUB: + emit_instr(ctx, subu, dst, dst, MIPS_R_AT); + break; + default: + return -EINVAL; + } + } + } + + return 0; +} + +/* Returns the number of insn slots consumed. 
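A standalone check of the immediate-range split used by gen_imm_insn() for the MOV/ADD cases (AND/OR/XOR use unsigned 16-bit bounds instead, per the switch above); this is a sketch of the decision only, not the emitter:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Model of the single- vs multi-insn immediate decision: MOV/ADD
 * immediates fitting a signed 16-bit field can use one [d]addiu;
 * anything else is first materialized into MIPS_R_AT. */
#define S16_MIN (-32768)
#define S16_MAX 32767

static bool single_insn_imm(int32_t imm)
{
	return imm >= S16_MIN && imm <= S16_MAX;
}
```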
*/ +int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, + int this_idx, int exit_idx) +{ + int src, dst, r, td, ts, mem_off, b_off; + bool need_swap, did_move, cmp_eq; + unsigned int target = 0; + u64 t64; + s64 t64s; + int bpf_op = BPF_OP(insn->code); + + switch (insn->code) { + case BPF_ALU64 | BPF_ADD | BPF_K: /* ALU64_IMM */ + case BPF_ALU64 | BPF_SUB | BPF_K: /* ALU64_IMM */ + case BPF_ALU64 | BPF_OR | BPF_K: /* ALU64_IMM */ + case BPF_ALU64 | BPF_AND | BPF_K: /* ALU64_IMM */ + case BPF_ALU64 | BPF_LSH | BPF_K: /* ALU64_IMM */ + case BPF_ALU64 | BPF_RSH | BPF_K: /* ALU64_IMM */ + case BPF_ALU64 | BPF_XOR | BPF_K: /* ALU64_IMM */ + case BPF_ALU64 | BPF_ARSH | BPF_K: /* ALU64_IMM */ + case BPF_ALU64 | BPF_MOV | BPF_K: /* ALU64_IMM */ + case BPF_ALU | BPF_MOV | BPF_K: /* ALU32_IMM */ + case BPF_ALU | BPF_ADD | BPF_K: /* ALU32_IMM */ + case BPF_ALU | BPF_SUB | BPF_K: /* ALU32_IMM */ + case BPF_ALU | BPF_OR | BPF_K: /* ALU64_IMM */ + case BPF_ALU | BPF_AND | BPF_K: /* ALU64_IMM */ + case BPF_ALU | BPF_LSH | BPF_K: /* ALU64_IMM */ + case BPF_ALU | BPF_RSH | BPF_K: /* ALU64_IMM */ + case BPF_ALU | BPF_XOR | BPF_K: /* ALU64_IMM */ + case BPF_ALU | BPF_ARSH | BPF_K: /* ALU64_IMM */ + r = gen_imm_insn(insn, ctx, this_idx); + if (r < 0) + return r; + break; + case BPF_ALU64 | BPF_MUL | BPF_K: /* ALU64_IMM */ + dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); + if (dst < 0) + return dst; + if (get_reg_val_type(ctx, this_idx, insn->dst_reg) == REG_32BIT) + emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32); + if (insn->imm == 1) /* Mult by 1 is a nop */ + break; + gen_imm_to_reg(insn, MIPS_R_AT, ctx); + if (MIPS_ISA_REV >= 6) { + emit_instr(ctx, dmulu, dst, dst, MIPS_R_AT); + } else { + emit_instr(ctx, dmultu, MIPS_R_AT, dst); + emit_instr(ctx, mflo, dst); + } + break; + case BPF_ALU64 | BPF_NEG | BPF_K: /* ALU64_IMM */ + dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); + if (dst < 0) + return dst; + if (get_reg_val_type(ctx, this_idx, insn->dst_reg) == 
REG_32BIT) + emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32); + emit_instr(ctx, dsubu, dst, MIPS_R_ZERO, dst); + break; + case BPF_ALU | BPF_MUL | BPF_K: /* ALU_IMM */ + dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); + if (dst < 0) + return dst; + td = get_reg_val_type(ctx, this_idx, insn->dst_reg); + if (td == REG_64BIT) { + /* sign extend */ + emit_instr(ctx, sll, dst, dst, 0); + } + if (insn->imm == 1) /* Mult by 1 is a nop */ + break; + gen_imm_to_reg(insn, MIPS_R_AT, ctx); + if (MIPS_ISA_REV >= 6) { + emit_instr(ctx, mulu, dst, dst, MIPS_R_AT); + } else { + emit_instr(ctx, multu, dst, MIPS_R_AT); + emit_instr(ctx, mflo, dst); + } + break; + case BPF_ALU | BPF_NEG | BPF_K: /* ALU_IMM */ + dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); + if (dst < 0) + return dst; + td = get_reg_val_type(ctx, this_idx, insn->dst_reg); + if (td == REG_64BIT) { + /* sign extend */ + emit_instr(ctx, sll, dst, dst, 0); + } + emit_instr(ctx, subu, dst, MIPS_R_ZERO, dst); + break; + case BPF_ALU | BPF_DIV | BPF_K: /* ALU_IMM */ + case BPF_ALU | BPF_MOD | BPF_K: /* ALU_IMM */ + if (insn->imm == 0) + return -EINVAL; + dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); + if (dst < 0) + return dst; + td = get_reg_val_type(ctx, this_idx, insn->dst_reg); + if (td == REG_64BIT) + /* sign extend */ + emit_instr(ctx, sll, dst, dst, 0); + if (insn->imm == 1) { + /* div by 1 is a nop, mod by 1 is zero */ + if (bpf_op == BPF_MOD) + emit_instr(ctx, addu, dst, MIPS_R_ZERO, MIPS_R_ZERO); + break; + } + gen_imm_to_reg(insn, MIPS_R_AT, ctx); + if (MIPS_ISA_REV >= 6) { + if (bpf_op == BPF_DIV) + emit_instr(ctx, divu_r6, dst, dst, MIPS_R_AT); + else + emit_instr(ctx, modu, dst, dst, MIPS_R_AT); + break; + } + emit_instr(ctx, divu, dst, MIPS_R_AT); + if (bpf_op == BPF_DIV) + emit_instr(ctx, mflo, dst); + else + emit_instr(ctx, mfhi, dst); + break; + case BPF_ALU64 | BPF_DIV | BPF_K: /* ALU_IMM */ + case BPF_ALU64 | BPF_MOD | BPF_K: /* ALU_IMM */ + if (insn->imm == 0) + return -EINVAL; + dst = 
ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); + if (dst < 0) + return dst; + if (get_reg_val_type(ctx, this_idx, insn->dst_reg) == REG_32BIT) + emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32); + if (insn->imm == 1) { + /* div by 1 is a nop, mod by 1 is zero */ + if (bpf_op == BPF_MOD) + emit_instr(ctx, addu, dst, MIPS_R_ZERO, MIPS_R_ZERO); + break; + } + gen_imm_to_reg(insn, MIPS_R_AT, ctx); + if (MIPS_ISA_REV >= 6) { + if (bpf_op == BPF_DIV) + emit_instr(ctx, ddivu_r6, dst, dst, MIPS_R_AT); + else + emit_instr(ctx, dmodu, dst, dst, MIPS_R_AT); + break; + } + emit_instr(ctx, ddivu, dst, MIPS_R_AT); + if (bpf_op == BPF_DIV) + emit_instr(ctx, mflo, dst); + else + emit_instr(ctx, mfhi, dst); + break; + case BPF_ALU64 | BPF_MOV | BPF_X: /* ALU64_REG */ + case BPF_ALU64 | BPF_ADD | BPF_X: /* ALU64_REG */ + case BPF_ALU64 | BPF_SUB | BPF_X: /* ALU64_REG */ + case BPF_ALU64 | BPF_XOR | BPF_X: /* ALU64_REG */ + case BPF_ALU64 | BPF_OR | BPF_X: /* ALU64_REG */ + case BPF_ALU64 | BPF_AND | BPF_X: /* ALU64_REG */ + case BPF_ALU64 | BPF_MUL | BPF_X: /* ALU64_REG */ + case BPF_ALU64 | BPF_DIV | BPF_X: /* ALU64_REG */ + case BPF_ALU64 | BPF_MOD | BPF_X: /* ALU64_REG */ + case BPF_ALU64 | BPF_LSH | BPF_X: /* ALU64_REG */ + case BPF_ALU64 | BPF_RSH | BPF_X: /* ALU64_REG */ + case BPF_ALU64 | BPF_ARSH | BPF_X: /* ALU64_REG */ + src = ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK); + dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); + if (src < 0 || dst < 0) + return -EINVAL; + if (get_reg_val_type(ctx, this_idx, insn->dst_reg) == REG_32BIT) + emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32); + did_move = false; + if (get_reg_val_type(ctx, this_idx, insn->src_reg) == REG_32BIT) { + int tmp_reg = MIPS_R_AT; + + if (bpf_op == BPF_MOV) { + tmp_reg = dst; + did_move = true; + } + emit_instr(ctx, daddu, tmp_reg, src, MIPS_R_ZERO); + emit_instr(ctx, dinsu, tmp_reg, MIPS_R_ZERO, 32, 32); + src = MIPS_R_AT; + } + switch (bpf_op) { + case BPF_MOV: + if (!did_move) + emit_instr(ctx, daddu, dst, 
src, MIPS_R_ZERO); + break; + case BPF_ADD: + emit_instr(ctx, daddu, dst, dst, src); + break; + case BPF_SUB: + emit_instr(ctx, dsubu, dst, dst, src); + break; + case BPF_XOR: + emit_instr(ctx, xor, dst, dst, src); + break; + case BPF_OR: + emit_instr(ctx, or, dst, dst, src); + break; + case BPF_AND: + emit_instr(ctx, and, dst, dst, src); + break; + case BPF_MUL: + if (MIPS_ISA_REV >= 6) { + emit_instr(ctx, dmulu, dst, dst, src); + } else { + emit_instr(ctx, dmultu, dst, src); + emit_instr(ctx, mflo, dst); + } + break; + case BPF_DIV: + case BPF_MOD: + if (MIPS_ISA_REV >= 6) { + if (bpf_op == BPF_DIV) + emit_instr(ctx, ddivu_r6, + dst, dst, src); + else + emit_instr(ctx, dmodu, dst, dst, src); + break; + } + emit_instr(ctx, ddivu, dst, src); + if (bpf_op == BPF_DIV) + emit_instr(ctx, mflo, dst); + else + emit_instr(ctx, mfhi, dst); + break; + case BPF_LSH: + emit_instr(ctx, dsllv, dst, dst, src); + break; + case BPF_RSH: + emit_instr(ctx, dsrlv, dst, dst, src); + break; + case BPF_ARSH: + emit_instr(ctx, dsrav, dst, dst, src); + break; + default: + pr_err("ALU64_REG NOT HANDLED\n"); + return -EINVAL; + } + break; + case BPF_ALU | BPF_MOV | BPF_X: /* ALU_REG */ + case BPF_ALU | BPF_ADD | BPF_X: /* ALU_REG */ + case BPF_ALU | BPF_SUB | BPF_X: /* ALU_REG */ + case BPF_ALU | BPF_XOR | BPF_X: /* ALU_REG */ + case BPF_ALU | BPF_OR | BPF_X: /* ALU_REG */ + case BPF_ALU | BPF_AND | BPF_X: /* ALU_REG */ + case BPF_ALU | BPF_MUL | BPF_X: /* ALU_REG */ + case BPF_ALU | BPF_DIV | BPF_X: /* ALU_REG */ + case BPF_ALU | BPF_MOD | BPF_X: /* ALU_REG */ + case BPF_ALU | BPF_LSH | BPF_X: /* ALU_REG */ + case BPF_ALU | BPF_RSH | BPF_X: /* ALU_REG */ + case BPF_ALU | BPF_ARSH | BPF_X: /* ALU_REG */ + src = ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK); + dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); + if (src < 0 || dst < 0) + return -EINVAL; + td = get_reg_val_type(ctx, this_idx, insn->dst_reg); + if (td == REG_64BIT) { + /* sign extend */ + emit_instr(ctx, sll, dst, dst, 0); + } + 
did_move = false; + ts = get_reg_val_type(ctx, this_idx, insn->src_reg); + if (ts == REG_64BIT) { + int tmp_reg = MIPS_R_AT; + + if (bpf_op == BPF_MOV) { + tmp_reg = dst; + did_move = true; + } + /* sign extend */ + emit_instr(ctx, sll, tmp_reg, src, 0); + src = MIPS_R_AT; + } + switch (bpf_op) { + case BPF_MOV: + if (!did_move) + emit_instr(ctx, addu, dst, src, MIPS_R_ZERO); + break; + case BPF_ADD: + emit_instr(ctx, addu, dst, dst, src); + break; + case BPF_SUB: + emit_instr(ctx, subu, dst, dst, src); + break; + case BPF_XOR: + emit_instr(ctx, xor, dst, dst, src); + break; + case BPF_OR: + emit_instr(ctx, or, dst, dst, src); + break; + case BPF_AND: + emit_instr(ctx, and, dst, dst, src); + break; + case BPF_MUL: + emit_instr(ctx, mul, dst, dst, src); + break; + case BPF_DIV: + case BPF_MOD: + if (MIPS_ISA_REV >= 6) { + if (bpf_op == BPF_DIV) + emit_instr(ctx, divu_r6, dst, dst, src); + else + emit_instr(ctx, modu, dst, dst, src); + break; + } + emit_instr(ctx, divu, dst, src); + if (bpf_op == BPF_DIV) + emit_instr(ctx, mflo, dst); + else + emit_instr(ctx, mfhi, dst); + break; + case BPF_LSH: + emit_instr(ctx, sllv, dst, dst, src); + break; + case BPF_RSH: + emit_instr(ctx, srlv, dst, dst, src); + break; + case BPF_ARSH: + emit_instr(ctx, srav, dst, dst, src); + break; + default: + pr_err("ALU_REG NOT HANDLED\n"); + return -EINVAL; + } + break; + case BPF_JMP | BPF_EXIT: + if (this_idx + 1 < exit_idx) { + b_off = b_imm(exit_idx, ctx); + if (is_bad_offset(b_off)) { + target = j_target(ctx, exit_idx); + if (target == (unsigned int)-1) + return -E2BIG; + emit_instr(ctx, j, target); + } else { + emit_instr(ctx, b, b_off); + } + emit_instr(ctx, nop); + } + break; + case BPF_JMP | BPF_JEQ | BPF_K: /* JMP_IMM */ + case BPF_JMP | BPF_JNE | BPF_K: /* JMP_IMM */ + cmp_eq = (bpf_op == BPF_JEQ); + dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK); + if (dst < 0) + return dst; + if (insn->imm == 0) { + src = MIPS_R_ZERO; + } else { + gen_imm_to_reg(insn, MIPS_R_AT, ctx); + src 
= MIPS_R_AT; + } + goto jeq_common; + case BPF_JMP | BPF_JEQ | BPF_X: /* JMP_REG */ + case BPF_JMP | BPF_JNE | BPF_X: + case BPF_JMP | BPF_JSLT | BPF_X: + case BPF_JMP | BPF_JSLE | BPF_X: + case BPF_JMP | BPF_JSGT | BPF_X: + case BPF_JMP | BPF_JSGE | BPF_X: + case BPF_JMP | BPF_JLT | BPF_X: + case BPF_JMP | BPF_JLE | BPF_X: + case BPF_JMP | BPF_JGT | BPF_X: + case BPF_JMP | BPF_JGE | BPF_X: + case BPF_JMP | BPF_JSET | BPF_X: + src = ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK); + dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK); + if (src < 0 || dst < 0) + return -EINVAL; + td = get_reg_val_type(ctx, this_idx, insn->dst_reg); + ts = get_reg_val_type(ctx, this_idx, insn->src_reg); + if (td == REG_32BIT && ts != REG_32BIT) { + emit_instr(ctx, sll, MIPS_R_AT, src, 0); + src = MIPS_R_AT; + } else if (ts == REG_32BIT && td != REG_32BIT) { + emit_instr(ctx, sll, MIPS_R_AT, dst, 0); + dst = MIPS_R_AT; + } + if (bpf_op == BPF_JSET) { + emit_instr(ctx, and, MIPS_R_AT, dst, src); + cmp_eq = false; + dst = MIPS_R_AT; + src = MIPS_R_ZERO; + } else if (bpf_op == BPF_JSGT || bpf_op == BPF_JSLE) { + emit_instr(ctx, dsubu, MIPS_R_AT, dst, src); + if ((insn + 1)->code == (BPF_JMP | BPF_EXIT) && insn->off == 1) { + b_off = b_imm(exit_idx, ctx); + if (is_bad_offset(b_off)) + return -E2BIG; + if (bpf_op == BPF_JSGT) + emit_instr(ctx, blez, MIPS_R_AT, b_off); + else + emit_instr(ctx, bgtz, MIPS_R_AT, b_off); + emit_instr(ctx, nop); + return 2; /* We consumed the exit. 
 */
+			}
+			b_off = b_imm(this_idx + insn->off + 1, ctx);
+			if (is_bad_offset(b_off))
+				return -E2BIG;
+			if (bpf_op == BPF_JSGT)
+				emit_instr(ctx, bgtz, MIPS_R_AT, b_off);
+			else
+				emit_instr(ctx, blez, MIPS_R_AT, b_off);
+			emit_instr(ctx, nop);
+			break;
+		} else if (bpf_op == BPF_JSGE || bpf_op == BPF_JSLT) {
+			emit_instr(ctx, slt, MIPS_R_AT, dst, src);
+			cmp_eq = bpf_op == BPF_JSGE;
+			dst = MIPS_R_AT;
+			src = MIPS_R_ZERO;
+		} else if (bpf_op == BPF_JGT || bpf_op == BPF_JLE) {
+			/* dst or src could be AT */
+			emit_instr(ctx, dsubu, MIPS_R_T8, dst, src);
+			emit_instr(ctx, sltu, MIPS_R_AT, dst, src);
+			/* SP known to be non-zero, movz becomes boolean not */
+			if (MIPS_ISA_REV >= 6) {
+				emit_instr(ctx, seleqz, MIPS_R_T9,
+						MIPS_R_SP, MIPS_R_T8);
+			} else {
+				emit_instr(ctx, movz, MIPS_R_T9,
+						MIPS_R_SP, MIPS_R_T8);
+				emit_instr(ctx, movn, MIPS_R_T9,
+						MIPS_R_ZERO, MIPS_R_T8);
+			}
+			emit_instr(ctx, or, MIPS_R_AT, MIPS_R_T9, MIPS_R_AT);
+			cmp_eq = bpf_op == BPF_JGT;
+			dst = MIPS_R_AT;
+			src = MIPS_R_ZERO;
+		} else if (bpf_op == BPF_JGE || bpf_op == BPF_JLT) {
+			emit_instr(ctx, sltu, MIPS_R_AT, dst, src);
+			cmp_eq = bpf_op == BPF_JGE;
+			dst = MIPS_R_AT;
+			src = MIPS_R_ZERO;
+		} else { /* JNE/JEQ case */
+			cmp_eq = (bpf_op == BPF_JEQ);
+		}
+jeq_common:
+		/*
+		 * If the next insn is EXIT and we are jumping around
+		 * only it, invert the sense of the compare and
+		 * conditionally jump to the exit. Poor man's branch
+		 * chaining.
+ */ + if ((insn + 1)->code == (BPF_JMP | BPF_EXIT) && insn->off == 1) { + b_off = b_imm(exit_idx, ctx); + if (is_bad_offset(b_off)) { + target = j_target(ctx, exit_idx); + if (target == (unsigned int)-1) + return -E2BIG; + cmp_eq = !cmp_eq; + b_off = 4 * 3; + if (!(ctx->offsets[this_idx] & OFFSETS_B_CONV)) { + ctx->offsets[this_idx] |= OFFSETS_B_CONV; + ctx->long_b_conversion = 1; + } + } + + if (cmp_eq) + emit_instr(ctx, bne, dst, src, b_off); + else + emit_instr(ctx, beq, dst, src, b_off); + emit_instr(ctx, nop); + if (ctx->offsets[this_idx] & OFFSETS_B_CONV) { + emit_instr(ctx, j, target); + emit_instr(ctx, nop); + } + return 2; /* We consumed the exit. */ + } + b_off = b_imm(this_idx + insn->off + 1, ctx); + if (is_bad_offset(b_off)) { + target = j_target(ctx, this_idx + insn->off + 1); + if (target == (unsigned int)-1) + return -E2BIG; + cmp_eq = !cmp_eq; + b_off = 4 * 3; + if (!(ctx->offsets[this_idx] & OFFSETS_B_CONV)) { + ctx->offsets[this_idx] |= OFFSETS_B_CONV; + ctx->long_b_conversion = 1; + } + } + + if (cmp_eq) + emit_instr(ctx, beq, dst, src, b_off); + else + emit_instr(ctx, bne, dst, src, b_off); + emit_instr(ctx, nop); + if (ctx->offsets[this_idx] & OFFSETS_B_CONV) { + emit_instr(ctx, j, target); + emit_instr(ctx, nop); + } + break; + case BPF_JMP | BPF_JSGT | BPF_K: /* JMP_IMM */ + case BPF_JMP | BPF_JSGE | BPF_K: /* JMP_IMM */ + case BPF_JMP | BPF_JSLT | BPF_K: /* JMP_IMM */ + case BPF_JMP | BPF_JSLE | BPF_K: /* JMP_IMM */ + cmp_eq = (bpf_op == BPF_JSGE); + dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK); + if (dst < 0) + return dst; + + if (insn->imm == 0) { + if ((insn + 1)->code == (BPF_JMP | BPF_EXIT) && insn->off == 1) { + b_off = b_imm(exit_idx, ctx); + if (is_bad_offset(b_off)) + return -E2BIG; + switch (bpf_op) { + case BPF_JSGT: + emit_instr(ctx, blez, dst, b_off); + break; + case BPF_JSGE: + emit_instr(ctx, bltz, dst, b_off); + break; + case BPF_JSLT: + emit_instr(ctx, bgez, dst, b_off); + break; + case BPF_JSLE: + emit_instr(ctx, 
bgtz, dst, b_off); + break; + } + emit_instr(ctx, nop); + return 2; /* We consumed the exit. */ + } + b_off = b_imm(this_idx + insn->off + 1, ctx); + if (is_bad_offset(b_off)) + return -E2BIG; + switch (bpf_op) { + case BPF_JSGT: + emit_instr(ctx, bgtz, dst, b_off); + break; + case BPF_JSGE: + emit_instr(ctx, bgez, dst, b_off); + break; + case BPF_JSLT: + emit_instr(ctx, bltz, dst, b_off); + break; + case BPF_JSLE: + emit_instr(ctx, blez, dst, b_off); + break; + } + emit_instr(ctx, nop); + break; + } + /* + * only "LT" compare available, so we must use imm + 1 + * to generate "GT" and imm + 1 to generate "LE" + */ + if (bpf_op == BPF_JSGT) + t64s = insn->imm + 1; + else if (bpf_op == BPF_JSLE) + t64s = insn->imm + 1; + else + t64s = insn->imm; + + cmp_eq = bpf_op == BPF_JSGT || bpf_op == BPF_JSGE; + if (t64s >= S16_MIN && t64s <= S16_MAX) { + emit_instr(ctx, slti, MIPS_R_AT, dst, (int)t64s); + src = MIPS_R_AT; + dst = MIPS_R_ZERO; + goto jeq_common; + } + emit_const_to_reg(ctx, MIPS_R_AT, (u64)t64s); + emit_instr(ctx, slt, MIPS_R_AT, dst, MIPS_R_AT); + src = MIPS_R_AT; + dst = MIPS_R_ZERO; + goto jeq_common; + + case BPF_JMP | BPF_JGT | BPF_K: + case BPF_JMP | BPF_JGE | BPF_K: + case BPF_JMP | BPF_JLT | BPF_K: + case BPF_JMP | BPF_JLE | BPF_K: + cmp_eq = (bpf_op == BPF_JGE); + dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK); + if (dst < 0) + return dst; + /* + * only "LT" compare available, so we must use imm + 1 + * to generate "GT" and imm + 1 to generate "LE" + */ + if (bpf_op == BPF_JGT) + t64s = (u64)(u32)(insn->imm) + 1; + else if (bpf_op == BPF_JLE) + t64s = (u64)(u32)(insn->imm) + 1; + else + t64s = (u64)(u32)(insn->imm); + + cmp_eq = bpf_op == BPF_JGT || bpf_op == BPF_JGE; + + emit_const_to_reg(ctx, MIPS_R_AT, (u64)t64s); + emit_instr(ctx, sltu, MIPS_R_AT, dst, MIPS_R_AT); + src = MIPS_R_AT; + dst = MIPS_R_ZERO; + goto jeq_common; + + case BPF_JMP | BPF_JSET | BPF_K: /* JMP_IMM */ + dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK); + if (dst < 0) + return dst;
+ + if (ctx->use_bbit_insns && hweight32((u32)insn->imm) == 1) { + if ((insn + 1)->code == (BPF_JMP | BPF_EXIT) && insn->off == 1) { + b_off = b_imm(exit_idx, ctx); + if (is_bad_offset(b_off)) + return -E2BIG; + emit_instr(ctx, bbit0, dst, ffs((u32)insn->imm) - 1, b_off); + emit_instr(ctx, nop); + return 2; /* We consumed the exit. */ + } + b_off = b_imm(this_idx + insn->off + 1, ctx); + if (is_bad_offset(b_off)) + return -E2BIG; + emit_instr(ctx, bbit1, dst, ffs((u32)insn->imm) - 1, b_off); + emit_instr(ctx, nop); + break; + } + t64 = (u32)insn->imm; + emit_const_to_reg(ctx, MIPS_R_AT, t64); + emit_instr(ctx, and, MIPS_R_AT, dst, MIPS_R_AT); + src = MIPS_R_AT; + dst = MIPS_R_ZERO; + cmp_eq = false; + goto jeq_common; + + case BPF_JMP | BPF_JA: + /* + * Prefer relative branch for easier debugging, but + * fall back if needed. + */ + b_off = b_imm(this_idx + insn->off + 1, ctx); + if (is_bad_offset(b_off)) { + target = j_target(ctx, this_idx + insn->off + 1); + if (target == (unsigned int)-1) + return -E2BIG; + emit_instr(ctx, j, target); + } else { + emit_instr(ctx, b, b_off); + } + emit_instr(ctx, nop); + break; + case BPF_LD | BPF_DW | BPF_IMM: + dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); + if (dst < 0) + return dst; + t64 = ((u64)(u32)insn->imm) | ((u64)(insn + 1)->imm << 32); + emit_const_to_reg(ctx, dst, t64); + return 2; /* Double slot insn */ + + case BPF_JMP | BPF_CALL: + emit_bpf_call(ctx, insn); + break; + + case BPF_JMP | BPF_TAIL_CALL: + if (emit_bpf_tail_call(ctx, this_idx)) + return -EINVAL; + break; + + case BPF_ALU | BPF_END | BPF_FROM_BE: + case BPF_ALU | BPF_END | BPF_FROM_LE: + dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); + if (dst < 0) + return dst; + td = get_reg_val_type(ctx, this_idx, insn->dst_reg); + if (insn->imm == 64 && td == REG_32BIT) + emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32); + + if (insn->imm != 64 && td == REG_64BIT) { + /* sign extend */ + emit_instr(ctx, sll, dst, dst, 0); + } + +#ifdef __BIG_ENDIAN + 
need_swap = (BPF_SRC(insn->code) == BPF_FROM_LE); +#else + need_swap = (BPF_SRC(insn->code) == BPF_FROM_BE); +#endif + if (insn->imm == 16) { + if (need_swap) + emit_instr(ctx, wsbh, dst, dst); + emit_instr(ctx, andi, dst, dst, 0xffff); + } else if (insn->imm == 32) { + if (need_swap) { + emit_instr(ctx, wsbh, dst, dst); + emit_instr(ctx, rotr, dst, dst, 16); + } + } else { /* 64-bit*/ + if (need_swap) { + emit_instr(ctx, dsbh, dst, dst); + emit_instr(ctx, dshd, dst, dst); + } + } + break; + + case BPF_ST | BPF_B | BPF_MEM: + case BPF_ST | BPF_H | BPF_MEM: + case BPF_ST | BPF_W | BPF_MEM: + case BPF_ST | BPF_DW | BPF_MEM: + dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK); + if (dst < 0) + return dst; + mem_off = insn->off; + gen_imm_to_reg(insn, MIPS_R_AT, ctx); + switch (BPF_SIZE(insn->code)) { + case BPF_B: + emit_instr(ctx, sb, MIPS_R_AT, mem_off, dst); + break; + case BPF_H: + emit_instr(ctx, sh, MIPS_R_AT, mem_off, dst); + break; + case BPF_W: + emit_instr(ctx, sw, MIPS_R_AT, mem_off, dst); + break; + case BPF_DW: + emit_instr(ctx, sd, MIPS_R_AT, mem_off, dst); + break; + } + break; + + case BPF_LDX | BPF_B | BPF_MEM: + case BPF_LDX | BPF_H | BPF_MEM: + case BPF_LDX | BPF_W | BPF_MEM: + case BPF_LDX | BPF_DW | BPF_MEM: + dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); + src = ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK); + if (dst < 0 || src < 0) + return -EINVAL; + mem_off = insn->off; + switch (BPF_SIZE(insn->code)) { + case BPF_B: + emit_instr(ctx, lbu, dst, mem_off, src); + break; + case BPF_H: + emit_instr(ctx, lhu, dst, mem_off, src); + break; + case BPF_W: + emit_instr(ctx, lw, dst, mem_off, src); + break; + case BPF_DW: + emit_instr(ctx, ld, dst, mem_off, src); + break; + } + break; + + case BPF_STX | BPF_B | BPF_MEM: + case BPF_STX | BPF_H | BPF_MEM: + case BPF_STX | BPF_W | BPF_MEM: + case BPF_STX | BPF_DW | BPF_MEM: + case BPF_STX | BPF_W | BPF_ATOMIC: + case BPF_STX | BPF_DW | BPF_ATOMIC: + dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK); + src = 
ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK); + if (src < 0 || dst < 0) + return -EINVAL; + mem_off = insn->off; + if (BPF_MODE(insn->code) == BPF_ATOMIC) { + if (insn->imm != BPF_ADD) { + pr_err("ATOMIC OP %02x NOT HANDLED\n", insn->imm); + return -EINVAL; + } + /* + * If mem_off does not fit within the 9 bit ll/sc + * instruction immediate field, use a temp reg. + */ + if (MIPS_ISA_REV >= 6 && + (mem_off >= BIT(8) || mem_off < -BIT(8))) { + emit_instr(ctx, daddiu, MIPS_R_T6, + dst, mem_off); + mem_off = 0; + dst = MIPS_R_T6; + } + switch (BPF_SIZE(insn->code)) { + case BPF_W: + if (get_reg_val_type(ctx, this_idx, insn->src_reg) == REG_32BIT) { + emit_instr(ctx, sll, MIPS_R_AT, src, 0); + src = MIPS_R_AT; + } + emit_instr(ctx, ll, MIPS_R_T8, mem_off, dst); + emit_instr(ctx, addu, MIPS_R_T8, MIPS_R_T8, src); + emit_instr(ctx, sc, MIPS_R_T8, mem_off, dst); + /* + * On failure back up to LL (-4 + * instructions of 4 bytes each) + */ + emit_instr(ctx, beq, MIPS_R_T8, MIPS_R_ZERO, -4 * 4); + emit_instr(ctx, nop); + break; + case BPF_DW: + if (get_reg_val_type(ctx, this_idx, insn->src_reg) == REG_32BIT) { + emit_instr(ctx, daddu, MIPS_R_AT, src, MIPS_R_ZERO); + emit_instr(ctx, dinsu, MIPS_R_AT, MIPS_R_ZERO, 32, 32); + src = MIPS_R_AT; + } + emit_instr(ctx, lld, MIPS_R_T8, mem_off, dst); + emit_instr(ctx, daddu, MIPS_R_T8, MIPS_R_T8, src); + emit_instr(ctx, scd, MIPS_R_T8, mem_off, dst); + emit_instr(ctx, beq, MIPS_R_T8, MIPS_R_ZERO, -4 * 4); + emit_instr(ctx, nop); + break; + } + } else { /* BPF_MEM */ + switch (BPF_SIZE(insn->code)) { + case BPF_B: + emit_instr(ctx, sb, src, mem_off, dst); + break; + case BPF_H: + emit_instr(ctx, sh, src, mem_off, dst); + break; + case BPF_W: + emit_instr(ctx, sw, src, mem_off, dst); + break; + case BPF_DW: + if (get_reg_val_type(ctx, this_idx, insn->src_reg) == REG_32BIT) { + emit_instr(ctx, daddu, MIPS_R_AT, src, MIPS_R_ZERO); + emit_instr(ctx, dinsu, MIPS_R_AT, MIPS_R_ZERO, 32, 32); + src = MIPS_R_AT; + } + emit_instr(ctx, sd, src,
mem_off, dst); + break; + } + } + break; + + default: + pr_err("NOT HANDLED %d - (%02x)\n", + this_idx, (unsigned int)insn->code); + return -EINVAL; + } + return 1; +} diff --git a/arch/mips/net/ebpf_jit_core.c b/arch/mips/net/ebpf_jit_core.c new file mode 100644 index 000000000000..5bc33b4bbb2a --- /dev/null +++ b/arch/mips/net/ebpf_jit_core.c @@ -0,0 +1,1112 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Just-In-Time compiler for eBPF filters on MIPS32/MIPS64 + * Copyright (c) 2021 Tony Ambardar + * + * Based on code from: + * + * Copyright (c) 2017 Cavium, Inc. + * Author: David Daney + * + * Copyright (c) 2014 Imagination Technologies Ltd. + * Author: Markos Chandras + */ + +#include +#include +#include +#include +#include + +#include "ebpf_jit.h" + +/* + * Extra JIT registers dedicated to holding TCC during runtime or saving + * across calls. + */ +enum { + JIT_RUN_TCC = MAX_BPF_JIT_REG, + JIT_SAV_TCC +}; +/* Temporary register for passing TCC if nothing dedicated. */ +#define TEMP_PASS_TCC MIPS_R_T8 + +#ifdef CONFIG_64BIT +# define M(expr32, expr64) (expr64) +#else +# define M(expr32, expr64) (expr32) +#endif +static const struct { + /* Register or pair base */ + int reg; + /* Register flags */ + u32 flags; + /* Usage table: (MIPS32) (MIPS64) */ +} bpf2mips[] = { + /* Return value from in-kernel function, and exit value from eBPF. */ + [BPF_REG_0] = {M(MIPS_R_V0, MIPS_R_V0)}, + /* Arguments from eBPF program to in-kernel/BPF functions. */ + [BPF_REG_1] = {M(MIPS_R_A0, MIPS_R_A0)}, + [BPF_REG_2] = {M(MIPS_R_A2, MIPS_R_A1)}, + [BPF_REG_3] = {M(MIPS_R_T0, MIPS_R_A2)}, + [BPF_REG_4] = {M(MIPS_R_T2, MIPS_R_A3)}, + [BPF_REG_5] = {M(MIPS_R_T4, MIPS_R_A4)}, + /* Callee-saved registers preserved by in-kernel/BPF functions. 
*/ + [BPF_REG_6] = {M(MIPS_R_S0, MIPS_R_S0), + M(EBPF_SAVE_S0|EBPF_SAVE_S1, EBPF_SAVE_S0)}, + [BPF_REG_7] = {M(MIPS_R_S2, MIPS_R_S1), + M(EBPF_SAVE_S2|EBPF_SAVE_S3, EBPF_SAVE_S1)}, + [BPF_REG_8] = {M(MIPS_R_S4, MIPS_R_S2), + M(EBPF_SAVE_S4|EBPF_SAVE_S5, EBPF_SAVE_S2)}, + [BPF_REG_9] = {M(MIPS_R_S6, MIPS_R_S3), + M(EBPF_SAVE_S6|EBPF_SAVE_S7, EBPF_SAVE_S3)}, + [BPF_REG_10] = {M(MIPS_R_S8, MIPS_R_S8), + M(EBPF_SAVE_S8|EBPF_SEEN_FP, EBPF_SAVE_S8|EBPF_SEEN_FP)}, + /* Internal register for rewriting insns during JIT blinding. */ + [BPF_REG_AX] = {M(MIPS_R_T6, MIPS_R_T4)}, + /* + * Internal registers for TCC runtime holding and saving during + * calls. A zero save register indicates using scratch space on + * the stack for storage during calls. A zero hold register means + * no dedicated register holds TCC during runtime (but a temp reg + * still passes TCC to tailcall or bpf2bpf call). + */ + [JIT_RUN_TCC] = {M(0, MIPS_R_V1)}, + [JIT_SAV_TCC] = {M(0, MIPS_R_S4), + M(0, EBPF_SAVE_S4)} +}; +#undef M + +/* + * For eBPF, the register mapping naturally falls out of the + * requirements of eBPF and MIPS N64/O32 ABIs. We also maintain + * a separate frame pointer, setting BPF_REG_10 relative to $sp. + */ +int ebpf_to_mips_reg(struct jit_ctx *ctx, + const struct bpf_insn *insn, + enum reg_usage u) +{ + int ebpf_reg = (u == REG_SRC_FP_OK || u == REG_SRC_NO_FP) ? 
+ insn->src_reg : insn->dst_reg; + + switch (ebpf_reg) { + case BPF_REG_0: + case BPF_REG_1: + case BPF_REG_2: + case BPF_REG_3: + case BPF_REG_4: + case BPF_REG_5: + case BPF_REG_6: + case BPF_REG_7: + case BPF_REG_8: + case BPF_REG_9: + case BPF_REG_AX: + ctx->flags |= bpf2mips[ebpf_reg].flags; + return bpf2mips[ebpf_reg].reg; + case BPF_REG_10: + if (u == REG_DST_NO_FP || u == REG_SRC_NO_FP) + goto bad_reg; + ctx->flags |= bpf2mips[ebpf_reg].flags; + return bpf2mips[ebpf_reg].reg; + default: +bad_reg: + WARN(1, "Illegal bpf reg: %d\n", ebpf_reg); + return -EINVAL; + } +} + +void gen_imm_to_reg(const struct bpf_insn *insn, int reg, + struct jit_ctx *ctx) +{ + if (insn->imm >= S16_MIN && insn->imm <= S16_MAX) { + emit_instr(ctx, addiu, reg, MIPS_R_ZERO, insn->imm); + } else { + int lower = (s16)(insn->imm & 0xffff); + int upper = insn->imm - lower; + + emit_instr(ctx, lui, reg, upper >> 16); + /* lui already clears lower halfword */ + if (lower) + emit_instr(ctx, addiu, reg, reg, lower); + } +} + +void emit_const_to_reg(struct jit_ctx *ctx, int dst, unsigned long value) +{ + if (value >= S16_MIN || value <= S16_MAX) { + emit_instr_long(ctx, daddiu, addiu, dst, MIPS_R_ZERO, (int)value); + } else if (value >= S32_MIN || + (value <= S32_MAX && value > U16_MAX)) { + emit_instr(ctx, lui, dst, (s32)(s16)(value >> 16)); + emit_instr(ctx, ori, dst, dst, (unsigned int)(value & 0xffff)); + } else { + int i; + bool seen_part = false; + int needed_shift = 0; + + for (i = 0; i < 4; i++) { + u64 part = (value >> (16 * (3 - i))) & 0xffff; + + if (seen_part && needed_shift > 0 && (part || i == 3)) { + emit_instr(ctx, dsll_safe, dst, dst, needed_shift); + needed_shift = 0; + } + if (part) { + if (i == 0 || (!seen_part && i < 3 && part < 0x8000)) { + emit_instr(ctx, lui, dst, (s32)(s16)part); + needed_shift = -16; + } else { + emit_instr(ctx, ori, dst, + seen_part ? 
dst : MIPS_R_ZERO, + (unsigned int)part); + } + seen_part = true; + } + if (seen_part) + needed_shift += 16; + } + } +} + +#define RVT_VISITED_MASK 0xc000000000000000ull +#define RVT_FALL_THROUGH 0x4000000000000000ull +#define RVT_BRANCH_TAKEN 0x8000000000000000ull +#define RVT_DONE (RVT_FALL_THROUGH | RVT_BRANCH_TAKEN) + +/* return the last idx processed, or negative for error */ +static int reg_val_propagate_range(struct jit_ctx *ctx, u64 initial_rvt, + int start_idx, bool follow_taken) +{ + const struct bpf_prog *prog = ctx->skf; + const struct bpf_insn *insn; + u64 exit_rvt = initial_rvt; + u64 *rvt = ctx->reg_val_types; + int idx; + int reg; + + for (idx = start_idx; idx < prog->len; idx++) { + rvt[idx] = (rvt[idx] & RVT_VISITED_MASK) | exit_rvt; + insn = prog->insnsi + idx; + switch (BPF_CLASS(insn->code)) { + case BPF_ALU: + switch (BPF_OP(insn->code)) { + case BPF_ADD: + case BPF_SUB: + case BPF_MUL: + case BPF_DIV: + case BPF_OR: + case BPF_AND: + case BPF_LSH: + case BPF_RSH: + case BPF_ARSH: + case BPF_NEG: + case BPF_MOD: + case BPF_XOR: + set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT); + break; + case BPF_MOV: + if (BPF_SRC(insn->code)) { + set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT); + } else { + /* IMM to REG move*/ + if (insn->imm >= 0) + set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT_POS); + else + set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT); + } + break; + case BPF_END: + if (insn->imm == 64) + set_reg_val_type(&exit_rvt, insn->dst_reg, REG_64BIT); + else if (insn->imm == 32) + set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT); + else /* insn->imm == 16 */ + set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT_POS); + break; + } + rvt[idx] |= RVT_DONE; + break; + case BPF_ALU64: + switch (BPF_OP(insn->code)) { + case BPF_MOV: + if (BPF_SRC(insn->code)) { + /* REG to REG move*/ + set_reg_val_type(&exit_rvt, insn->dst_reg, REG_64BIT); + } else { + /* IMM to REG move*/ + if (insn->imm >= 0) + 
set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT_POS); + else + set_reg_val_type(&exit_rvt, insn->dst_reg, REG_64BIT_32BIT); + } + break; + default: + set_reg_val_type(&exit_rvt, insn->dst_reg, REG_64BIT); + } + rvt[idx] |= RVT_DONE; + break; + case BPF_LD: + switch (BPF_SIZE(insn->code)) { + case BPF_DW: + if (BPF_MODE(insn->code) == BPF_IMM) { + s64 val; + + val = (s64)((u32)insn->imm | ((u64)(insn + 1)->imm << 32)); + if (val > 0 && val <= S32_MAX) + set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT_POS); + else if (val >= S32_MIN && val <= S32_MAX) + set_reg_val_type(&exit_rvt, insn->dst_reg, REG_64BIT_32BIT); + else + set_reg_val_type(&exit_rvt, insn->dst_reg, REG_64BIT); + rvt[idx] |= RVT_DONE; + idx++; + } else { + set_reg_val_type(&exit_rvt, insn->dst_reg, REG_64BIT); + } + break; + case BPF_B: + case BPF_H: + set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT_POS); + break; + case BPF_W: + if (BPF_MODE(insn->code) == BPF_IMM) + set_reg_val_type(&exit_rvt, insn->dst_reg, + insn->imm >= 0 ? REG_32BIT_POS : REG_32BIT); + else + set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT); + break; + } + rvt[idx] |= RVT_DONE; + break; + case BPF_LDX: + switch (BPF_SIZE(insn->code)) { + case BPF_DW: + set_reg_val_type(&exit_rvt, insn->dst_reg, REG_64BIT); + break; + case BPF_B: + case BPF_H: + set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT_POS); + break; + case BPF_W: + set_reg_val_type(&exit_rvt, insn->dst_reg, REG_32BIT); + break; + } + rvt[idx] |= RVT_DONE; + break; + case BPF_JMP: + case BPF_JMP32: + switch (BPF_OP(insn->code)) { + case BPF_EXIT: + rvt[idx] = RVT_DONE | exit_rvt; + rvt[prog->len] = exit_rvt; + return idx; + case BPF_JA: + { + int tgt = idx + 1 + insn->off; + bool visited = (rvt[tgt] & RVT_FALL_THROUGH); + + rvt[idx] |= RVT_DONE; + /* + * Verifier dead code patching can use + * infinite-loop traps, causing hangs and + * RCU stalls here. Treat traps as nops + * if detected and fall through. 
+ */ + if (insn->off == -1) + break; + /* + * Bounded loops cause the same issues in + * fallthrough mode; follow only if jump + * target is unvisited to mitigate. + */ + if (insn->off < 0 && !follow_taken && visited) + break; + idx += insn->off; + break; + } + case BPF_JEQ: + case BPF_JGT: + case BPF_JGE: + case BPF_JLT: + case BPF_JLE: + case BPF_JSET: + case BPF_JNE: + case BPF_JSGT: + case BPF_JSGE: + case BPF_JSLT: + case BPF_JSLE: + if (follow_taken) { + rvt[idx] |= RVT_BRANCH_TAKEN; + idx += insn->off; + follow_taken = false; + } else { + rvt[idx] |= RVT_FALL_THROUGH; + } + break; + case BPF_CALL: + set_reg_val_type(&exit_rvt, BPF_REG_0, REG_64BIT); + /* Upon call return, argument registers are clobbered. */ + for (reg = BPF_REG_0; reg <= BPF_REG_5; reg++) + set_reg_val_type(&exit_rvt, reg, REG_64BIT); + + rvt[idx] |= RVT_DONE; + break; + case BPF_TAIL_CALL: + rvt[idx] |= RVT_DONE; + break; + default: + WARN(1, "Unhandled BPF_JMP case.\n"); + rvt[idx] |= RVT_DONE; + break; + } + break; + default: + rvt[idx] |= RVT_DONE; + break; + } + } + return idx; +} + +/* + * Track the value range (i.e. 32-bit vs. 64-bit) of each register at + * each eBPF insn. This allows unneeded sign and zero extension + * operations to be omitted. + * + * Doesn't handle yet confluence of control paths with conflicting + * ranges, but it is good enough for most sane code. + */ +static int reg_val_propagate(struct jit_ctx *ctx) +{ + const struct bpf_prog *prog = ctx->skf; + u64 exit_rvt; + int reg; + int i; + + /* + * 11 registers * 3 bits/reg leaves top bits free for other + * uses. Bit-62..63 used to see if we have visited an insn. + */ + exit_rvt = 0; + + /* Upon entry, argument registers are 64-bit. */ + for (reg = BPF_REG_1; reg <= BPF_REG_5; reg++) + set_reg_val_type(&exit_rvt, reg, REG_64BIT); + + /* + * First follow all conditional branches on the fall-through + * edge of control flow.. 
+ */ + reg_val_propagate_range(ctx, exit_rvt, 0, false); +restart_search: + /* + * Then repeatedly find the first conditional branch where + * both edges of control flow have not been taken, and follow + * the branch taken edge. We will end up restarting the + * search once per conditional branch insn. + */ + for (i = 0; i < prog->len; i++) { + u64 rvt = ctx->reg_val_types[i]; + + if ((rvt & RVT_VISITED_MASK) == RVT_DONE || + (rvt & RVT_VISITED_MASK) == 0) + continue; + if ((rvt & RVT_VISITED_MASK) == RVT_FALL_THROUGH) { + reg_val_propagate_range(ctx, rvt & ~RVT_VISITED_MASK, i, true); + } else { /* RVT_BRANCH_TAKEN */ + WARN(1, "Unexpected RVT_BRANCH_TAKEN case.\n"); + reg_val_propagate_range(ctx, rvt & ~RVT_VISITED_MASK, i, false); + } + goto restart_search; + } + /* + * Eventually all conditional branches have been followed on + * both branches and we are done. Any insn that has not been + * visited at this point is dead. + */ + + return 0; +} + +static void jit_fill_hole(void *area, unsigned int size) +{ + u32 *p; + + /* We are guaranteed to have aligned memory. 
*/ + for (p = area; size >= sizeof(u32); size -= sizeof(u32)) + uasm_i_break(&p, BRK_BUG); /* Increments p */ +} + +/* Stack region alignment under N64 and O32 ABIs */ +#define STACK_ALIGN (2 * sizeof(long)) + +/* + * eBPF stack frame will be something like: + * + * Entry $sp ------> +--------------------------------+ + * | $ra (optional) | + * +--------------------------------+ + * | $s8 (optional) | + * +--------------------------------+ + * | $s7 (optional) | + * +--------------------------------+ + * | $s6 (optional) | + * +--------------------------------+ + * | $s5 (optional) | + * +--------------------------------+ + * | $s4 (optional) | + * +--------------------------------+ + * | $s3 (optional) | + * +--------------------------------+ + * | $s2 (optional) | + * +--------------------------------+ + * | $s1 (optional) | + * +--------------------------------+ + * | $s0 (optional) | + * +--------------------------------+ + * | tmp-storage (optional) | + * $sp + bpf_stack_off->+--------------------------------+ <--BPF_REG_10 + * | BPF_REG_10 relative storage | + * | MAX_BPF_STACK (optional) | + * | . | + * | . | + * | . | + * $sp ------> +--------------------------------+ + * + * If BPF_REG_10 is never referenced, then the MAX_BPF_STACK sized + * area is not allocated. + */ +static int build_int_prologue(struct jit_ctx *ctx) +{ + int tcc_run = bpf2mips[JIT_RUN_TCC].reg ? 
+ bpf2mips[JIT_RUN_TCC].reg : + TEMP_PASS_TCC; + int tcc_sav = bpf2mips[JIT_SAV_TCC].reg; + const struct bpf_prog *prog = ctx->skf; + int r10 = bpf2mips[BPF_REG_10].reg; + int r1 = bpf2mips[BPF_REG_1].reg; + int stack_adjust = 0; + int store_offset; + int locals_size; + + if (ctx->flags & EBPF_SAVE_RA) + stack_adjust += sizeof(long); + if (ctx->flags & EBPF_SAVE_S8) + stack_adjust += sizeof(long); + if (ctx->flags & EBPF_SAVE_S7) + stack_adjust += sizeof(long); + if (ctx->flags & EBPF_SAVE_S6) + stack_adjust += sizeof(long); + if (ctx->flags & EBPF_SAVE_S5) + stack_adjust += sizeof(long); + if (ctx->flags & EBPF_SAVE_S4) + stack_adjust += sizeof(long); + if (ctx->flags & EBPF_SAVE_S3) + stack_adjust += sizeof(long); + if (ctx->flags & EBPF_SAVE_S2) + stack_adjust += sizeof(long); + if (ctx->flags & EBPF_SAVE_S1) + stack_adjust += sizeof(long); + if (ctx->flags & EBPF_SAVE_S0) + stack_adjust += sizeof(long); + if (tail_call_present(ctx) && + !(ctx->flags & EBPF_TCC_IN_RUN) && !tcc_sav) + /* Allocate scratch space for holding TCC if needed. */ + stack_adjust += sizeof(long); + + stack_adjust = ALIGN(stack_adjust, STACK_ALIGN); + + locals_size = (ctx->flags & EBPF_SEEN_FP) ? prog->aux->stack_depth : 0; + locals_size = ALIGN(locals_size, STACK_ALIGN); + + stack_adjust += locals_size; + + ctx->stack_size = stack_adjust; + ctx->bpf_stack_off = locals_size; + + /* + * First instruction initializes the tail call count (TCC) if + * called from kernel or via BPF tail call. A BPF tail-caller + * will skip this instruction and pass the TCC via register. + * As a BPF2BPF subprog, we are called directly and must avoid + * resetting the TCC. + */ + if (!ctx->skf->is_func) + emit_instr(ctx, addiu, tcc_run, MIPS_R_ZERO, MAX_TAIL_CALL_CNT); + + /* + * If called from kernel under O32 ABI we must set up BPF R1 context, + * since BPF R1 is an endian-order register pair ($a0:$a1 or $a1:$a0) + * but context is always passed in $a0 as 32-bit pointer.
Entry from + * a tail-call looks just like a kernel call, which means the caller + * must set up R1 context according to the kernel call ABI. If we are + * a BPF2BPF call then all registers are already correctly set up. + */ + if (!is64bit() && !ctx->skf->is_func) { + if (isbigend()) + emit_instr(ctx, move, LO(r1), MIPS_R_A0); + /* Sanitize upper 32-bit reg */ + gen_zext_insn(r1, true, ctx); + } + + if (stack_adjust) + emit_instr_long(ctx, daddiu, addiu, + MIPS_R_SP, MIPS_R_SP, -stack_adjust); + else + return 0; + + store_offset = stack_adjust - sizeof(long); + + if (ctx->flags & EBPF_SAVE_RA) { + emit_instr_long(ctx, sd, sw, + MIPS_R_RA, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S8) { + emit_instr_long(ctx, sd, sw, + MIPS_R_S8, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S7) { + emit_instr_long(ctx, sd, sw, + MIPS_R_S7, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S6) { + emit_instr_long(ctx, sd, sw, + MIPS_R_S6, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S5) { + emit_instr_long(ctx, sd, sw, + MIPS_R_S5, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S4) { + emit_instr_long(ctx, sd, sw, + MIPS_R_S4, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S3) { + emit_instr_long(ctx, sd, sw, + MIPS_R_S3, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S2) { + emit_instr_long(ctx, sd, sw, + MIPS_R_S2, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S1) { + emit_instr_long(ctx, sd, sw, + MIPS_R_S1, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S0) { + emit_instr_long(ctx, sd, sw, + MIPS_R_S0, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + + /* Store TCC in 
backup register or stack scratch space if indicated. */ + if (tail_call_present(ctx) && !(ctx->flags & EBPF_TCC_IN_RUN)) { + if (tcc_sav) + emit_instr(ctx, move, tcc_sav, tcc_run); + else + emit_instr_long(ctx, sd, sw, + tcc_run, ctx->bpf_stack_off, MIPS_R_SP); + } + + /* Prepare BPF FP as single-reg ptr, emulate upper 32-bits as needed. */ + if (ctx->flags & EBPF_SEEN_FP) + emit_instr_long(ctx, daddiu, addiu, r10, + MIPS_R_SP, ctx->bpf_stack_off); + + return 0; +} + +static int build_int_body(struct jit_ctx *ctx) +{ + const struct bpf_prog *prog = ctx->skf; + const struct bpf_insn *insn; + int i, r; + + for (i = 0; i < prog->len; ) { + insn = prog->insnsi + i; + if ((ctx->reg_val_types[i] & RVT_VISITED_MASK) == 0) { + /* dead instruction, don't emit it. */ + i++; + continue; + } + + if (ctx->target == NULL) + ctx->offsets[i] = (ctx->offsets[i] & OFFSETS_B_CONV) | (ctx->idx * 4); + + r = build_one_insn(insn, ctx, i, prog->len); + if (r < 0) + return r; + i += r; + } + /* epilogue offset */ + if (ctx->target == NULL) + ctx->offsets[i] = ctx->idx * 4; + + /* + * All exits have an offset of the epilogue, some offsets may + * not have been set due to branch-around threading, so set + * them now. + */ + if (ctx->target == NULL) + for (i = 0; i < prog->len; i++) { + insn = prog->insnsi + i; + if (insn->code == (BPF_JMP | BPF_EXIT)) + ctx->offsets[i] = ctx->idx * 4; + } + return 0; +} + +static int build_int_epilogue(struct jit_ctx *ctx, int dest_reg) +{ + const struct bpf_prog *prog = ctx->skf; + int stack_adjust = ctx->stack_size; + int store_offset = stack_adjust - sizeof(long); + int r1 = bpf2mips[BPF_REG_1].reg; + int r0 = bpf2mips[BPF_REG_0].reg; + enum reg_val_type td; + + /* + * Returns from BPF2BPF calls consistently use the BPF 64-bit ABI + * i.e. register usage and mapping between JIT and OS is unchanged. + * Returning to the kernel must follow the N64 or O32 ABI, and for + * the latter requires fixup of BPF R0 to MIPS V0 register mapping.
+ * + * Tail calls must ensure the passed R1 context is consistent with + * the kernel ABI, which requires fixup on MIPS32 bigendian systems. + */ + if (dest_reg == MIPS_R_RA && !ctx->skf->is_func) { /* kernel return */ + if (is64bit()) { + /* Don't let zero extended value escape. */ + td = get_reg_val_type(ctx, prog->len, BPF_REG_0); + if (td == REG_64BIT) + gen_sext_insn(r0, ctx); + } else if (isbigend()) { /* and 32-bit */ + /* + * O32 ABI specifies 32-bit return value always + * placed in MIPS_R_V0 regardless of the native + * endianness. This would be in the wrong position + * in a BPF R0 reg pair on big-endian systems, so + * we must relocate. + */ + emit_instr(ctx, move, MIPS_R_V0, LO(r0)); + } + } else if (dest_reg == MIPS_R_T9) { /* tail call */ + if (!is64bit() && isbigend()) + emit_instr(ctx, move, MIPS_R_A0, LO(r1)); + } + + + if (ctx->flags & EBPF_SAVE_RA) { + emit_instr_long(ctx, ld, lw, + MIPS_R_RA, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S8) { + emit_instr_long(ctx, ld, lw, + MIPS_R_S8, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S7) { + emit_instr_long(ctx, ld, lw, + MIPS_R_S7, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S6) { + emit_instr_long(ctx, ld, lw, + MIPS_R_S6, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S5) { + emit_instr_long(ctx, ld, lw, + MIPS_R_S5, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S4) { + emit_instr_long(ctx, ld, lw, + MIPS_R_S4, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S3) { + emit_instr_long(ctx, ld, lw, + MIPS_R_S3, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S2) { + emit_instr_long(ctx, ld, lw, + MIPS_R_S2, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags &
EBPF_SAVE_S1) { + emit_instr_long(ctx, ld, lw, + MIPS_R_S1, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + if (ctx->flags & EBPF_SAVE_S0) { + emit_instr_long(ctx, ld, lw, + MIPS_R_S0, store_offset, MIPS_R_SP); + store_offset -= sizeof(long); + } + emit_instr(ctx, jr, dest_reg); + + /* Delay slot */ + if (stack_adjust) + emit_instr_long(ctx, daddiu, addiu, + MIPS_R_SP, MIPS_R_SP, stack_adjust); + else + emit_instr(ctx, nop); + + return 0; +} + +/* + * Push BPF regs R3-R5 to the stack, skipping BPF regs R1-R2 which are + * passed via MIPS register pairs in $a0-$a3. Register order within pairs + * and the memory storage order are identical i.e. endian native. + */ +static void emit_push_args(struct jit_ctx *ctx) +{ + int store_offset = 2 * sizeof(u64); /* Skip R1-R2 in $a0-$a3 */ + int bpf, reg; + + for (bpf = BPF_REG_3; bpf <= BPF_REG_5; bpf++) { + reg = bpf2mips[bpf].reg; + + emit_instr(ctx, sw, LO(reg), OFFLO(store_offset), MIPS_R_SP); + emit_instr(ctx, sw, HI(reg), OFFHI(store_offset), MIPS_R_SP); + store_offset += sizeof(u64); + } +} + +/* + * Common helper for BPF_CALL insn, handling TCC and ABI variations. + * Kernel calls under O32 ABI require arguments passed on the stack, + * while BPF2BPF calls need the TCC passed via register as expected + * by the subprog's prologue. + * + * Under MIPS32 O32 ABI calling convention, u64 BPF regs R1-R2 are passed + * via reg pairs in $a0-$a3, while BPF regs R3-R5 are passed via the stack. + * Stack space is still reserved for $a0-$a3, and the whole area aligned. + */ +#define ARGS_SIZE (5 * sizeof(u64)) + +void emit_bpf_call(struct jit_ctx *ctx, const struct bpf_insn *insn) +{ + int stack_adjust = ALIGN(ARGS_SIZE, STACK_ALIGN); + int tcc_run = bpf2mips[JIT_RUN_TCC].reg ? 
+ bpf2mips[JIT_RUN_TCC].reg : + TEMP_PASS_TCC; + int tcc_sav = bpf2mips[JIT_SAV_TCC].reg; + long func_addr; + + ctx->flags |= EBPF_SAVE_RA; + + /* Ensure TCC passed into BPF subprog */ + if ((insn->src_reg == BPF_PSEUDO_CALL) && + tail_call_present(ctx) && !(ctx->flags & EBPF_TCC_IN_RUN)) { + /* Set TCC from reg or stack */ + if (tcc_sav) + emit_instr(ctx, move, tcc_run, tcc_sav); + else + emit_instr_long(ctx, ld, lw, tcc_run, + ctx->bpf_stack_off, MIPS_R_SP); + } + + /* Push O32 stack args for kernel call */ + if (!is64bit() && (insn->src_reg != BPF_PSEUDO_CALL)) { + emit_instr(ctx, addiu, MIPS_R_SP, MIPS_R_SP, -stack_adjust); + emit_push_args(ctx); + } + + func_addr = (long)__bpf_call_base + insn->imm; + emit_const_to_reg(ctx, MIPS_R_T9, func_addr); + emit_instr(ctx, jalr, MIPS_R_RA, MIPS_R_T9); + /* Delay slot */ + emit_instr(ctx, nop); + + /* Restore stack */ + if (!is64bit() && (insn->src_reg != BPF_PSEUDO_CALL)) + emit_instr(ctx, addiu, MIPS_R_SP, MIPS_R_SP, stack_adjust); +} + +/* + * Tail call helper arguments passed via BPF ABI as u64 parameters. On + * MIPS64 N64 ABI systems these are native regs, while on MIPS32 O32 ABI + * systems these are reg pairs: + * + * R1 -> &ctx + * R2 -> &array + * R3 -> index + */ +int emit_bpf_tail_call(struct jit_ctx *ctx, int this_idx) +{ + int tcc_run = bpf2mips[JIT_RUN_TCC].reg ? 
+ bpf2mips[JIT_RUN_TCC].reg : + TEMP_PASS_TCC; + int tcc_sav = bpf2mips[JIT_SAV_TCC].reg; + int r2 = bpf2mips[BPF_REG_2].reg; + int r3 = bpf2mips[BPF_REG_3].reg; + int off, b_off; + int tcc; + + ctx->flags |= EBPF_SEEN_TC; + /* + * if (index >= array->map.max_entries) + * goto out; + */ + if (is64bit()) + /* Mask index as 32-bit */ + gen_zext_insn(r3, true, ctx); + off = offsetof(struct bpf_array, map.max_entries); + emit_instr_long(ctx, lwu, lw, MIPS_R_AT, off, LO(r2)); + emit_instr(ctx, sltu, MIPS_R_AT, MIPS_R_AT, LO(r3)); + b_off = b_imm(this_idx + 1, ctx); + emit_instr(ctx, bnez, MIPS_R_AT, b_off); + /* + * if (TCC-- < 0) + * goto out; + */ + /* Delay slot */ + tcc = (ctx->flags & EBPF_TCC_IN_RUN) ? tcc_run : tcc_sav; + /* Get TCC from reg or stack */ + if (tcc) + emit_instr(ctx, move, MIPS_R_T8, tcc); + else + emit_instr_long(ctx, ld, lw, MIPS_R_T8, + ctx->bpf_stack_off, MIPS_R_SP); + b_off = b_imm(this_idx + 1, ctx); + emit_instr(ctx, bltz, MIPS_R_T8, b_off); + /* + * prog = array->ptrs[index]; + * if (prog == NULL) + * goto out; + */ + /* Delay slot */ + emit_instr_long(ctx, dsll, sll, MIPS_R_AT, LO(r3), ilog2(sizeof(long))); + emit_instr_long(ctx, daddu, addu, MIPS_R_AT, MIPS_R_AT, LO(r2)); + off = offsetof(struct bpf_array, ptrs); + emit_instr_long(ctx, ld, lw, MIPS_R_AT, off, MIPS_R_AT); + b_off = b_imm(this_idx + 1, ctx); + emit_instr(ctx, beqz, MIPS_R_AT, b_off); + /* Delay slot */ + emit_instr(ctx, nop); + + /* goto *(prog->bpf_func + 4); */ + off = offsetof(struct bpf_prog, bpf_func); + emit_instr_long(ctx, ld, lw, MIPS_R_T9, off, MIPS_R_AT); + /* All systems are go... decrement and propagate TCC */ + emit_instr_long(ctx, daddiu, addiu, tcc_run, MIPS_R_T8, -1); + /* Skip first instruction (TCC initialization) */ + emit_instr_long(ctx, daddiu, addiu, MIPS_R_T9, MIPS_R_T9, 4); + return build_int_epilogue(ctx, MIPS_R_T9); +} + +/* + * Save and restore the BPF VM state across a direct kernel call. 
This + * includes the caller-saved registers used for BPF_REG_0 .. BPF_REG_5 + * and BPF_REG_AX used by the verifier for blinding and other dark arts. + * Restore avoids clobbering bpf_ret, which holds the call return value. + * BPF_REG_6 .. BPF_REG_10 and TCC are already callee-saved or on stack. + */ +static const int bpf_caller_save[] = { + BPF_REG_0, + BPF_REG_1, + BPF_REG_2, + BPF_REG_3, + BPF_REG_4, + BPF_REG_5, + BPF_REG_AX, +}; + +#define CALLER_ENV_SIZE (ARRAY_SIZE(bpf_caller_save) * sizeof(u64)) + +void emit_caller_save(struct jit_ctx *ctx) +{ + int stack_adj = ALIGN(CALLER_ENV_SIZE, STACK_ALIGN); + int i, bpf, reg, store_offset; + + emit_instr_long(ctx, daddiu, addiu, MIPS_R_SP, MIPS_R_SP, -stack_adj); + + for (i = 0; i < ARRAY_SIZE(bpf_caller_save); i++) { + bpf = bpf_caller_save[i]; + reg = bpf2mips[bpf].reg; + store_offset = i * sizeof(u64); + + if (is64bit()) { + emit_instr(ctx, sd, reg, store_offset, MIPS_R_SP); + } else { + emit_instr(ctx, sw, LO(reg), + OFFLO(store_offset), MIPS_R_SP); + emit_instr(ctx, sw, HI(reg), + OFFHI(store_offset), MIPS_R_SP); + } + } +} + +void emit_caller_restore(struct jit_ctx *ctx, int bpf_ret) +{ + int stack_adj = ALIGN(CALLER_ENV_SIZE, STACK_ALIGN); + int i, bpf, reg, store_offset; + + for (i = 0; i < ARRAY_SIZE(bpf_caller_save); i++) { + bpf = bpf_caller_save[i]; + reg = bpf2mips[bpf].reg; + store_offset = i * sizeof(u64); + if (bpf == bpf_ret) + continue; + + if (is64bit()) { + emit_instr(ctx, ld, reg, store_offset, MIPS_R_SP); + } else { + emit_instr(ctx, lw, LO(reg), + OFFLO(store_offset), MIPS_R_SP); + emit_instr(ctx, lw, HI(reg), + OFFHI(store_offset), MIPS_R_SP); + } + } + + emit_instr_long(ctx, daddiu, addiu, MIPS_R_SP, MIPS_R_SP, stack_adj); +} + +struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog) +{ + struct bpf_prog *orig_prog = prog; + bool tmp_blinded = false; + struct bpf_prog *tmp; + struct bpf_binary_header *header = NULL; + struct jit_ctx ctx; + unsigned int image_size; + u8 *image_ptr; + + 
if (!prog->jit_requested) + return prog; + + tmp = bpf_jit_blind_constants(prog); + /* If blinding was requested and we failed during blinding, + * we must fall back to the interpreter. + */ + if (IS_ERR(tmp)) + return orig_prog; + if (tmp != prog) { + tmp_blinded = true; + prog = tmp; + } + + memset(&ctx, 0, sizeof(ctx)); + + preempt_disable(); + switch (current_cpu_type()) { + case CPU_CAVIUM_OCTEON: + case CPU_CAVIUM_OCTEON_PLUS: + case CPU_CAVIUM_OCTEON2: + case CPU_CAVIUM_OCTEON3: + ctx.use_bbit_insns = 1; + break; + default: + ctx.use_bbit_insns = 0; + } + preempt_enable(); + + ctx.offsets = kcalloc(prog->len + 1, sizeof(*ctx.offsets), GFP_KERNEL); + if (ctx.offsets == NULL) + goto out_err; + + ctx.reg_val_types = kcalloc(prog->len + 1, sizeof(*ctx.reg_val_types), GFP_KERNEL); + if (ctx.reg_val_types == NULL) + goto out_err; + + ctx.skf = prog; + + if (reg_val_propagate(&ctx)) + goto out_err; + + /* + * First pass discovers used resources and instruction offsets + * assuming short branches are used. + */ + if (build_int_body(&ctx)) + goto out_err; + + /* + * If no calls are made (EBPF_SAVE_RA unset), the tailcall count stays + * in the runtime reg if defined; otherwise we back it up to the save + * reg or stack. + */ + if (tail_call_present(&ctx)) { + if (ctx.flags & EBPF_SAVE_RA) + ctx.flags |= bpf2mips[JIT_SAV_TCC].flags; + else if (bpf2mips[JIT_RUN_TCC].reg) + ctx.flags |= EBPF_TCC_IN_RUN; + } + + /* + * Second pass generates offsets; if any branches are out of + * range, a jump-around long sequence is generated, and we have + * to try again from the beginning to generate the new + * offsets. This is done until no additional conversions are + * necessary.
+ */ + do { + ctx.idx = 0; + ctx.gen_b_offsets = 1; + ctx.long_b_conversion = 0; + if (build_int_prologue(&ctx)) + goto out_err; + if (build_int_body(&ctx)) + goto out_err; + if (build_int_epilogue(&ctx, MIPS_R_RA)) + goto out_err; + } while (ctx.long_b_conversion); + + image_size = 4 * ctx.idx; + + header = bpf_jit_binary_alloc(image_size, &image_ptr, + sizeof(u32), jit_fill_hole); + if (header == NULL) + goto out_err; + + ctx.target = (u32 *)image_ptr; + + /* Third pass generates the code */ + ctx.idx = 0; + if (build_int_prologue(&ctx)) + goto out_err; + if (build_int_body(&ctx)) + goto out_err; + if (build_int_epilogue(&ctx, MIPS_R_RA)) + goto out_err; + + /* Update the icache */ + flush_icache_range((unsigned long)ctx.target, + (unsigned long)&ctx.target[ctx.idx]); + + if (bpf_jit_enable > 1) + /* Dump JIT code */ + bpf_jit_dump(prog->len, image_size, 2, ctx.target); + + bpf_jit_binary_lock_ro(header); + prog->bpf_func = (void *)ctx.target; + prog->jited = 1; + prog->jited_len = image_size; +out_normal: + if (tmp_blinded) + bpf_jit_prog_release_other(prog, prog == orig_prog ? 
+ tmp : orig_prog); + kfree(ctx.offsets); + kfree(ctx.reg_val_types); + + return prog; + +out_err: + prog = orig_prog; + if (header) + bpf_jit_binary_free(header); + goto out_normal; +} From patchwork Mon Jul 12 00:34:59 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Ambardar X-Patchwork-Id: 12369491 X-Patchwork-Delegate: bpf@iogearbox.net From: Tony Ambardar X-Google-Original-From: Tony Ambardar To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Thomas Bogendoerfer , Paul Burton Cc: Tony Ambardar , netdev@vger.kernel.org, bpf@vger.kernel.org, linux-mips@vger.kernel.org, Johan Almbladh
, Hassan Naveed , David Daney , Luke Nelson , Serge Semin , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh Subject: [RFC PATCH bpf-next v1 13/14] MIPS: uasm: Enable muhu opcode for MIPS R6 Date: Sun, 11 Jul 2021 17:34:59 -0700 Message-Id: <94f6da32a6990e64c69ccbbd435fcba67d3c7ec6.1625970384.git.Tony.Ambardar@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC Enable the 'muhu' instruction, complementing the existing 'mulu', needed to implement a MIPS32 BPF JIT. Also fix a typo in the existing definition of 'dmulu'. Signed-off-by: Tony Ambardar --- arch/mips/include/asm/uasm.h | 1 + arch/mips/mm/uasm-mips.c | 4 +++- arch/mips/mm/uasm.c | 3 ++- 3 files changed, 6 insertions(+), 2 deletions(-) diff --git a/arch/mips/include/asm/uasm.h b/arch/mips/include/asm/uasm.h index f7effca791a5..5efa4e2dc9ab 100644 --- a/arch/mips/include/asm/uasm.h +++ b/arch/mips/include/asm/uasm.h @@ -145,6 +145,7 @@ Ip_u1(_mtlo); Ip_u3u1u2(_mul); Ip_u1u2(_multu); Ip_u3u1u2(_mulu); +Ip_u3u1u2(_muhu); Ip_u3u1u2(_nor); Ip_u3u1u2(_or); Ip_u2u1u3(_ori); diff --git a/arch/mips/mm/uasm-mips.c b/arch/mips/mm/uasm-mips.c index 7154a1d99aad..e15c6700cd08 100644 --- a/arch/mips/mm/uasm-mips.c +++ b/arch/mips/mm/uasm-mips.c @@ -90,7 +90,7 @@ static const struct insn insn_table[insn_invalid] = { RS | RT | RD}, [insn_dmtc0] = {M(cop0_op, dmtc_op, 0, 0, 0, 0), RT | RD | SET}, [insn_dmultu] = {M(spec_op, 0, 0, 0, 0, dmultu_op), RS | RT}, - [insn_dmulu] = {M(spec_op, 0, 0, 0, dmult_dmul_op, dmultu_op), + [insn_dmulu] = {M(spec_op, 0, 0, 0, dmultu_dmulu_op, dmultu_op), RS | RT | RD}, [insn_drotr] = {M(spec_op, 1, 0, 0, 0, dsrl_op), RT | RD | RE}, [insn_drotr32] = {M(spec_op, 1, 0, 0, 0, dsrl32_op), RT | RD | RE}, @@ -150,6 +150,8 @@ static const struct insn insn_table[insn_invalid] = { [insn_mtlo] = {M(spec_op, 0, 0, 0, 0, 
mtlo_op), RS}, [insn_mulu] = {M(spec_op, 0, 0, 0, multu_mulu_op, multu_op), RS | RT | RD}, + [insn_muhu] = {M(spec_op, 0, 0, 0, multu_muhu_op, multu_op), + RS | RT | RD}, #ifndef CONFIG_CPU_MIPSR6 [insn_mul] = {M(spec2_op, 0, 0, 0, 0, mul_op), RS | RT | RD}, #else diff --git a/arch/mips/mm/uasm.c b/arch/mips/mm/uasm.c index 81dd226d6b6b..125140979d62 100644 --- a/arch/mips/mm/uasm.c +++ b/arch/mips/mm/uasm.c @@ -59,7 +59,7 @@ enum opcode { insn_lddir, insn_ldpte, insn_ldx, insn_lh, insn_lhu, insn_ll, insn_lld, insn_lui, insn_lw, insn_lwu, insn_lwx, insn_mfc0, insn_mfhc0, insn_mfhi, insn_mflo, insn_modu, insn_movn, insn_movz, insn_mtc0, insn_mthc0, - insn_mthi, insn_mtlo, insn_mul, insn_multu, insn_mulu, insn_nor, + insn_mthi, insn_mtlo, insn_mul, insn_multu, insn_mulu, insn_muhu, insn_nor, insn_or, insn_ori, insn_pref, insn_rfe, insn_rotr, insn_sb, insn_sc, insn_scd, insn_seleqz, insn_selnez, insn_sd, insn_sh, insn_sll, insn_sllv, insn_slt, insn_slti, insn_sltiu, insn_sltu, insn_sra, @@ -344,6 +344,7 @@ I_u1(_mtlo) I_u3u1u2(_mul) I_u1u2(_multu) I_u3u1u2(_mulu) +I_u3u1u2(_muhu) I_u3u1u2(_nor) I_u3u1u2(_or) I_u2u1u3(_ori) From patchwork Mon Jul 12 00:35:00 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tony Ambardar X-Patchwork-Id: 12369497 X-Patchwork-Delegate: bpf@iogearbox.net From: Tony Ambardar X-Google-Original-From: Tony Ambardar To: Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Thomas Bogendoerfer , Paul Burton Cc: Tony Ambardar , netdev@vger.kernel.org, bpf@vger.kernel.org, linux-mips@vger.kernel.org, Johan Almbladh , Hassan Naveed , David Daney , Luke Nelson , Serge Semin , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh Subject: [RFC PATCH bpf-next v1 14/14] MIPS: eBPF: add MIPS32 JIT Date: Sun, 11 Jul 2021 17:35:00 -0700 Message-Id: <716e137c8d4c39f5f268ab99885a3fe9c080b80a.1625970384.git.Tony.Ambardar@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC Add a new variant of build_one_insn() supporting MIPS32, leveraging the previously added common functions, and disable static analysis as unneeded on MIPS32. Also define bpf_jit_needs_zext() to request verifier zext insertion. Handle these zext insns, and add conditional zext for all ALU32 and LDX word-size operations for cases where the verifier is unable to do so (e.g. test_bpf bypasses verifier).
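As an illustrative aside (not part of the patch): on MIPS32 each 64-bit BPF register occupies a pair of 32-bit registers, and the conditional zext described above amounts to clearing the high word after an ALU32 or LDX word-size operation. A minimal userspace model, with hypothetical names:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of a 64-bit BPF register held as a pair of
 * 32-bit MIPS registers (low word, high word); not kernel code. */
struct reg_pair {
	uint32_t lo;
	uint32_t hi;
};

/* An ALU32 op defines only the low 32 bits; the verifier-requested
 * zext then clears the high word, giving BPF's zero-extension
 * semantics for 32-bit subregisters. */
static void alu32_add_with_zext(struct reg_pair *dst, uint32_t imm)
{
	dst->lo += imm;	/* 32-bit add wraps modulo 2^32 */
	dst->hi = 0;	/* conditional zext emitted by the JIT */
}
```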
Aside from mapping 64-bit BPF registers to pairs of 32-bit MIPS registers, notable changes from the MIPS64 version of build_one_insn() include:

BPF_JMP32: implement all conditionals.
BPF_ALU{,64} | {DIV,MOD} | BPF_K: drop the divide-by-zero guard, as the underlying insns do not raise exceptions.
BPF_JMP | JSET | BPF_K: drop bbit insns only usable on MIPS64 Octeon.

The MIPS32 ISA does not include 64-bit div/mod or atomic opcodes. Add the emit_bpf_divmod64() and emit_bpf_atomic64() functions, which use built-in kernel functions to implement the following BPF insns:

BPF_STX | BPF_DW | BPF_ATOMIC
BPF_ALU64 | BPF_DIV | BPF_X
BPF_ALU64 | BPF_DIV | BPF_K
BPF_ALU64 | BPF_MOD | BPF_X
BPF_ALU64 | BPF_MOD | BPF_K

Atomics other than BPF_ADD or using BPF_FETCH are currently unsupported.

Testing and development primarily used LTS kernel 5.10.x and then 5.13.x, running under QEMU. Test suites included the 'test_bpf' module and the 'test_verifier' program from kselftests. Testing with 'test_progs' from kselftests was not possible in general, since cross-compilation depends on libbpf/bpftool, which does not support cross-endian builds (see also [1]).

The matrix of test configurations executed for this series covers:

MIPSWORD={64-bit,32-bit} x MIPSISA={R2,R6} x JIT={off,on,hardened}

On MIPS32BE and MIPS32LE there was general parity between the results of interpreter vs. JIT-backed tests with respect to the numbers of PASSED, SKIPPED, and FAILED tests.

root@OpenWrt:~# sysctl net.core.bpf_jit_enable=1
root@OpenWrt:~# modprobe test_bpf
...
test_bpf: Summary: 378 PASSED, 0 FAILED, [366/366 JIT'ed]
root@OpenWrt:~# ./test_verifier 0 853
...
Summary: 1127 PASSED, 0 SKIPPED, 89 FAILED
root@OpenWrt:~# ./test_verifier 855 1149
...
Summary: 408 PASSED, 7 SKIPPED, 53 FAILED Link: [1] https://lore.kernel.org/bpf/CAEf4BzZCnP3oB81w4BDL4TCmvO3vPw8MucOTbVnjbW8UuCtejw@mail.gmail.com/ Signed-off-by: Tony Ambardar --- Documentation/admin-guide/sysctl/net.rst | 6 +- Documentation/networking/filter.rst | 6 +- arch/mips/Kconfig | 4 +- arch/mips/net/Makefile | 8 +- arch/mips/net/ebpf_jit_comp32.c | 1241 ++++++++++++++++++++++ arch/mips/net/ebpf_jit_core.c | 20 +- 6 files changed, 1270 insertions(+), 15 deletions(-) create mode 100644 arch/mips/net/ebpf_jit_comp32.c diff --git a/Documentation/admin-guide/sysctl/net.rst b/Documentation/admin-guide/sysctl/net.rst index 4150f74c521a..099e3efbf38e 100644 --- a/Documentation/admin-guide/sysctl/net.rst +++ b/Documentation/admin-guide/sysctl/net.rst @@ -66,14 +66,16 @@ two flavors of JITs, the newer eBPF JIT currently supported on: - ppc64 - ppc32 - sparc64 - - mips64 + - mips64 (R2+) + - mips32 (R2+) - s390x - riscv64 - riscv32 And the older cBPF JIT supported on the following archs: - - mips + - mips64 (R1) + - mips32 (R1) - sparc eBPF JITs are a superset of cBPF JITs, meaning the kernel will diff --git a/Documentation/networking/filter.rst b/Documentation/networking/filter.rst index 3e2221f4abe4..31101411da0e 100644 --- a/Documentation/networking/filter.rst +++ b/Documentation/networking/filter.rst @@ -637,9 +637,9 @@ skb pointer). All constraints and restrictions from bpf_check_classic() apply before a conversion to the new layout is being done behind the scenes! Currently, the classic BPF format is being used for JITing on most -32-bit architectures, whereas x86-64, aarch64, s390x, powerpc64, -sparc64, arm32, riscv64, riscv32 perform JIT compilation from eBPF -instruction set. +32-bit architectures, whereas x86-64, aarch64, s390x, powerpc64, sparc64, +mips64, riscv64, arm32, riscv32, and mips32 perform JIT compilation from +eBPF instruction set. 
Some core changes of the new internal format: diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig index ed51970c08e7..d096d2332fe4 100644 --- a/arch/mips/Kconfig +++ b/arch/mips/Kconfig @@ -55,7 +55,7 @@ config MIPS select HAVE_ARCH_TRACEHOOK select HAVE_ARCH_TRANSPARENT_HUGEPAGE if CPU_SUPPORTS_HUGEPAGES select HAVE_ASM_MODVERSIONS - select HAVE_CBPF_JIT if !64BIT && !CPU_MICROMIPS + select HAVE_CBPF_JIT if !CPU_MICROMIPS && TARGET_ISA_REV < 2 select HAVE_CONTEXT_TRACKING select HAVE_TIF_NOHZ select HAVE_C_RECORDMCOUNT @@ -63,7 +63,7 @@ config MIPS select HAVE_DEBUG_STACKOVERFLOW select HAVE_DMA_CONTIGUOUS select HAVE_DYNAMIC_FTRACE - select HAVE_EBPF_JIT if 64BIT && !CPU_MICROMIPS && TARGET_ISA_REV >= 2 + select HAVE_EBPF_JIT if !CPU_MICROMIPS && TARGET_ISA_REV >= 2 select HAVE_EXIT_THREAD select HAVE_FAST_GUP select HAVE_FTRACE_MCOUNT_RECORD diff --git a/arch/mips/net/Makefile b/arch/mips/net/Makefile index de42f4a4db56..5f804bc54629 100644 --- a/arch/mips/net/Makefile +++ b/arch/mips/net/Makefile @@ -2,4 +2,10 @@ # MIPS networking code obj-$(CONFIG_MIPS_CBPF_JIT) += bpf_jit.o bpf_jit_asm.o -obj-$(CONFIG_MIPS_EBPF_JIT) += ebpf_jit_core.o ebpf_jit_comp64.o + +obj-$(CONFIG_MIPS_EBPF_JIT) += ebpf_jit_core.o +ifeq ($(CONFIG_CPU_MIPS64),y) + obj-$(CONFIG_MIPS_EBPF_JIT) += ebpf_jit_comp64.o +else + obj-$(CONFIG_MIPS_EBPF_JIT) += ebpf_jit_comp32.o +endif diff --git a/arch/mips/net/ebpf_jit_comp32.c b/arch/mips/net/ebpf_jit_comp32.c new file mode 100644 index 000000000000..069b3e044b89 --- /dev/null +++ b/arch/mips/net/ebpf_jit_comp32.c @@ -0,0 +1,1241 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Just-In-Time compiler for eBPF filters on MIPS32/MIPS64 + * Copyright (c) 2021 Tony Ambardar + * + * Based on code from: + * + * Copyright (c) 2017 Cavium, Inc. + * Author: David Daney + * + * Copyright (c) 2014 Imagination Technologies Ltd. 
+ * Author: Markos Chandras + */ + +#include +#include +#include + +#include "ebpf_jit.h" + +static int gen_imm_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, + int idx) +{ + int dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); + int upper_bound, lower_bound, shamt; + int imm = insn->imm; + + if (dst < 0) + return dst; + + switch (BPF_OP(insn->code)) { + case BPF_MOV: + case BPF_ADD: + upper_bound = S16_MAX; + lower_bound = S16_MIN; + break; + case BPF_SUB: + upper_bound = -(int)S16_MIN; + lower_bound = -(int)S16_MAX; + break; + case BPF_AND: + case BPF_OR: + case BPF_XOR: + upper_bound = 0xffff; + lower_bound = 0; + break; + case BPF_RSH: + case BPF_LSH: + case BPF_ARSH: + /* Shift amounts are truncated, no need for bounds */ + upper_bound = S32_MAX; + lower_bound = S32_MIN; + break; + default: + return -EINVAL; + } + + /* + * Immediate move clobbers the register, so no sign/zero + * extension needed. + */ + if (lower_bound <= imm && imm <= upper_bound) { + /* single insn immediate case */ + switch (BPF_OP(insn->code) | BPF_CLASS(insn->code)) { + case BPF_ALU64 | BPF_MOV: + emit_instr(ctx, addiu, LO(dst), MIPS_R_ZERO, imm); + if (imm < 0) + gen_sext_insn(dst, ctx); + else + gen_zext_insn(dst, true, ctx); + break; + case BPF_ALU | BPF_MOV: + emit_instr(ctx, addiu, LO(dst), MIPS_R_ZERO, imm); + break; + case BPF_ALU64 | BPF_AND: + if (imm >= 0) + gen_zext_insn(dst, true, ctx); + fallthrough; + case BPF_ALU | BPF_AND: + emit_instr(ctx, andi, LO(dst), LO(dst), imm); + break; + case BPF_ALU64 | BPF_OR: + if (imm < 0) + emit_instr(ctx, nor, HI(dst), + MIPS_R_ZERO, MIPS_R_ZERO); + fallthrough; + case BPF_ALU | BPF_OR: + emit_instr(ctx, ori, LO(dst), LO(dst), imm); + break; + case BPF_ALU64 | BPF_XOR: + if (imm < 0) + emit_instr(ctx, nor, HI(dst), + HI(dst), MIPS_R_ZERO); + fallthrough; + case BPF_ALU | BPF_XOR: + emit_instr(ctx, xori, LO(dst), LO(dst), imm); + break; + case BPF_ALU64 | BPF_ADD: + emit_instr(ctx, addiu, LO(dst), LO(dst), imm); + if (imm < 0) + 
emit_instr(ctx, addiu, HI(dst), HI(dst), -1); + emit_instr(ctx, sltiu, MIPS_R_AT, LO(dst), imm); + emit_instr(ctx, addu, HI(dst), HI(dst), MIPS_R_AT); + break; + case BPF_ALU64 | BPF_SUB: + emit_instr(ctx, addiu, MIPS_R_AT, LO(dst), -imm); + if (imm < 0) + emit_instr(ctx, addiu, HI(dst), HI(dst), 1); + emit_instr(ctx, sltu, MIPS_R_AT, LO(dst), MIPS_R_AT); + emit_instr(ctx, subu, HI(dst), HI(dst), MIPS_R_AT); + emit_instr(ctx, addiu, LO(dst), LO(dst), -imm); + break; + case BPF_ALU64 | BPF_ARSH: + shamt = imm & 0x3f; + if (shamt >= 32) { + emit_instr(ctx, sra, LO(dst), + HI(dst), shamt - 32); + emit_instr(ctx, sra, HI(dst), HI(dst), 31); + } else if (shamt > 0) { + emit_instr(ctx, srl, LO(dst), LO(dst), shamt); + emit_instr(ctx, ins, LO(dst), HI(dst), + 32 - shamt, shamt); + emit_instr(ctx, sra, HI(dst), HI(dst), shamt); + } + break; + case BPF_ALU64 | BPF_RSH: + shamt = imm & 0x3f; + if (shamt >= 32) { + emit_instr(ctx, srl, LO(dst), + HI(dst), shamt - 32); + emit_instr(ctx, and, HI(dst), + HI(dst), MIPS_R_ZERO); + } else if (shamt > 0) { + emit_instr(ctx, srl, LO(dst), LO(dst), shamt); + emit_instr(ctx, ins, LO(dst), HI(dst), + 32 - shamt, shamt); + emit_instr(ctx, srl, HI(dst), HI(dst), shamt); + } + break; + case BPF_ALU64 | BPF_LSH: + shamt = imm & 0x3f; + if (shamt >= 32) { + emit_instr(ctx, sll, HI(dst), + LO(dst), shamt - 32); + emit_instr(ctx, and, LO(dst), + LO(dst), MIPS_R_ZERO); + } else if (shamt > 0) { + emit_instr(ctx, srl, MIPS_R_AT, + LO(dst), 32 - shamt); + emit_instr(ctx, sll, HI(dst), HI(dst), shamt); + emit_instr(ctx, sll, LO(dst), LO(dst), shamt); + emit_instr(ctx, or, HI(dst), + HI(dst), MIPS_R_AT); + } + break; + case BPF_ALU | BPF_RSH: + emit_instr(ctx, srl, LO(dst), LO(dst), imm & 0x1f); + break; + case BPF_ALU | BPF_LSH: + emit_instr(ctx, sll, LO(dst), LO(dst), imm & 0x1f); + break; + case BPF_ALU | BPF_ARSH: + emit_instr(ctx, sra, LO(dst), LO(dst), imm & 0x1f); + break; + case BPF_ALU | BPF_ADD: + emit_instr(ctx, addiu, LO(dst), LO(dst), 
imm); + break; + case BPF_ALU | BPF_SUB: + emit_instr(ctx, addiu, LO(dst), LO(dst), -imm); + break; + default: + return -EINVAL; + } + } else { + /* multi insn immediate case */ + if (BPF_OP(insn->code) == BPF_MOV) { + gen_imm_to_reg(insn, LO(dst), ctx); + if (BPF_CLASS(insn->code) == BPF_ALU64) + gen_sext_insn(dst, ctx); + } else { + gen_imm_to_reg(insn, MIPS_R_AT, ctx); + switch (BPF_OP(insn->code) | BPF_CLASS(insn->code)) { + case BPF_ALU64 | BPF_AND: + if (imm >= 0) + gen_zext_insn(dst, true, ctx); + fallthrough; + case BPF_ALU | BPF_AND: + emit_instr(ctx, and, LO(dst), LO(dst), + MIPS_R_AT); + break; + case BPF_ALU64 | BPF_OR: + if (imm < 0) + emit_instr(ctx, nor, HI(dst), + MIPS_R_ZERO, MIPS_R_ZERO); + fallthrough; + case BPF_ALU | BPF_OR: + emit_instr(ctx, or, LO(dst), LO(dst), + MIPS_R_AT); + break; + case BPF_ALU64 | BPF_XOR: + if (imm < 0) + emit_instr(ctx, nor, HI(dst), + HI(dst), MIPS_R_ZERO); + fallthrough; + case BPF_ALU | BPF_XOR: + emit_instr(ctx, xor, LO(dst), LO(dst), + MIPS_R_AT); + break; + case BPF_ALU64 | BPF_ADD: + emit_instr(ctx, addu, LO(dst), + LO(dst), MIPS_R_AT); + if (imm < 0) + emit_instr(ctx, addiu, HI(dst), HI(dst), -1); + emit_instr(ctx, sltu, MIPS_R_AT, + LO(dst), MIPS_R_AT); + emit_instr(ctx, addu, HI(dst), + HI(dst), MIPS_R_AT); + break; + case BPF_ALU64 | BPF_SUB: + emit_instr(ctx, subu, LO(dst), + LO(dst), MIPS_R_AT); + if (imm < 0) + emit_instr(ctx, addiu, HI(dst), HI(dst), 1); + emit_instr(ctx, sltu, MIPS_R_AT, + MIPS_R_AT, LO(dst)); + emit_instr(ctx, subu, HI(dst), + HI(dst), MIPS_R_AT); + break; + case BPF_ALU | BPF_ADD: + emit_instr(ctx, addu, LO(dst), LO(dst), + MIPS_R_AT); + break; + case BPF_ALU | BPF_SUB: + emit_instr(ctx, subu, LO(dst), LO(dst), + MIPS_R_AT); + break; + default: + return -EINVAL; + } + } + } + + return 0; +} + +/* + * Implement 64-bit BPF div/mod insns on 32-bit systems by calling the + * equivalent built-in kernel function. 
The function args may be mixed + * 64/32-bit types, unlike the uniform u64 args of BPF kernel helpers. + * Func proto: u64 div64_u64_rem(u64 dividend, u64 divisor, u64 *remainder) + */ +static int emit_bpf_divmod64(struct jit_ctx *ctx, const struct bpf_insn *insn) +{ + const int bpf_src = BPF_SRC(insn->code); + const int bpf_op = BPF_OP(insn->code); + int rem_off, arg_off; + int src, dst, tmp; + u32 func_addr; + + ctx->flags |= EBPF_SAVE_RA; + + dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP); + if (dst < 0) + return -EINVAL; + + if (bpf_src == BPF_X) { + src = ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK); + if (src < 0) + return -EINVAL; + /* + * Use MIPS_R_T8 as temp reg pair to avoid target + * of dst from clobbering src. + */ + if (src == MIPS_R_A0) { + tmp = MIPS_R_T8; + emit_instr(ctx, move, LO(tmp), LO(src)); + emit_instr(ctx, move, HI(tmp), HI(src)); + src = tmp; + } + } + + /* Save caller registers */ + emit_caller_save(ctx); + /* Push O32 stack, aligned space for u64, u64, u64 *, u64 */ + emit_instr(ctx, addiu, MIPS_R_SP, MIPS_R_SP, -32); + + func_addr = (u32) &div64_u64_rem; + /* Move u64 dst to arg 1 as needed */ + if (dst != MIPS_R_A0) { + emit_instr(ctx, move, LO(MIPS_R_A0), LO(dst)); + emit_instr(ctx, move, HI(MIPS_R_A0), HI(dst)); + } + /* Load imm or move u64 src to arg 2 as needed */ + if (bpf_src == BPF_K) { + gen_imm_to_reg(insn, LO(MIPS_R_A2), ctx); + gen_sext_insn(MIPS_R_A2, ctx); + } else if (src != MIPS_R_A2) { /* BPF_X */ + emit_instr(ctx, move, LO(MIPS_R_A2), LO(src)); + emit_instr(ctx, move, HI(MIPS_R_A2), HI(src)); + } + /* Set up stack arg 3 as ptr to u64 remainder on stack */ + arg_off = 16; + rem_off = 24; + emit_instr(ctx, addiu, MIPS_R_AT, MIPS_R_SP, rem_off); + emit_instr(ctx, sw, MIPS_R_AT, arg_off, MIPS_R_SP); + + emit_const_to_reg(ctx, MIPS_R_T9, func_addr); + emit_instr(ctx, jalr, MIPS_R_RA, MIPS_R_T9); + /* Delay slot */ + emit_instr(ctx, nop); + + /* Move return value to dst as needed */ + switch (bpf_op) { + case BPF_DIV: + 
+		/* Quotient in MIPS_R_V0 reg pair */
+		if (dst != MIPS_R_V0) {
+			emit_instr(ctx, move, LO(dst), LO(MIPS_R_V0));
+			emit_instr(ctx, move, HI(dst), HI(MIPS_R_V0));
+		}
+		break;
+	case BPF_MOD:
+		/* Remainder on stack */
+		emit_instr(ctx, lw, LO(dst), OFFLO(rem_off), MIPS_R_SP);
+		emit_instr(ctx, lw, HI(dst), OFFHI(rem_off), MIPS_R_SP);
+		break;
+	}
+
+	/* Pop O32 call stack */
+	emit_instr(ctx, addiu, MIPS_R_SP, MIPS_R_SP, 32);
+	/* Restore all caller registers except call return value */
+	emit_caller_restore(ctx, insn->dst_reg);
+
+	return 0;
+}
+
+/*
+ * Implement 64-bit BPF atomic insns on 32-bit systems by calling the
+ * equivalent built-in kernel function. The function args may be mixed
+ * 64/32-bit types, unlike the uniform u64 args of BPF kernel helpers.
+ * Func proto: void atomic64_add(s64 a, atomic64_t *v)
+ */
+static int emit_bpf_atomic64(struct jit_ctx *ctx, const struct bpf_insn *insn)
+{
+	int src, dst, mem_off;
+	u32 func_addr;
+
+	ctx->flags |= EBPF_SAVE_RA;
+
+	dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK);
+	src = ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK);
+	if (src < 0 || dst < 0)
+		return -EINVAL;
+	mem_off = insn->off;
+
+	/* Save caller registers */
+	emit_caller_save(ctx);
+
+	switch (insn->imm) {
+	case BPF_ADD:
+		func_addr = (u32) &atomic64_add;
+		/* Move s64 src to arg 1 as needed */
+		if (src != MIPS_R_A0) {
+			emit_instr(ctx, move, LO(MIPS_R_A0), LO(src));
+			emit_instr(ctx, move, HI(MIPS_R_A0), HI(src));
+		}
+		/* Set up dst ptr in arg 2 base register */
+		emit_instr(ctx, addiu, MIPS_R_A2, LO(dst), mem_off);
+		break;
+	default:
+		pr_err("ATOMIC OP %02x NOT HANDLED\n", insn->imm);
+		return -EINVAL;
+	}
+
+	emit_const_to_reg(ctx, MIPS_R_T9, func_addr);
+	emit_instr(ctx, jalr, MIPS_R_RA, MIPS_R_T9);
+	/* Delay slot */
+	/* Push minimal O32 stack */
+	emit_instr(ctx, addiu, MIPS_R_SP, MIPS_R_SP, -16);
+
+	/* Pop minimal O32 stack */
+	emit_instr(ctx, addiu, MIPS_R_SP, MIPS_R_SP, 16);
+	/* Restore all caller registers since none clobbered by call */
+	emit_caller_restore(ctx, BPF_REG_FP);
+
+	return 0;
+}
+
+/* Returns the number of insn slots consumed. */
+int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
+		   int this_idx, int exit_idx)
+{
+	const int bpf_class = BPF_CLASS(insn->code);
+	const int bpf_size = BPF_SIZE(insn->code);
+	const int bpf_src = BPF_SRC(insn->code);
+	const int bpf_op = BPF_OP(insn->code);
+	int src, dst, r, mem_off, b_off;
+	bool need_swap, cmp_eq;
+	unsigned int target = 0;
+	u64 t64u;
+
+	switch (insn->code) {
+	case BPF_ALU64 | BPF_ADD | BPF_K: /* ALU64_IMM */
+	case BPF_ALU64 | BPF_SUB | BPF_K: /* ALU64_IMM */
+	case BPF_ALU64 | BPF_LSH | BPF_K: /* ALU64_IMM */
+	case BPF_ALU64 | BPF_RSH | BPF_K: /* ALU64_IMM */
+	case BPF_ALU64 | BPF_ARSH | BPF_K: /* ALU64_IMM */
+	case BPF_ALU64 | BPF_XOR | BPF_K: /* ALU64_IMM */
+	case BPF_ALU64 | BPF_MOV | BPF_K: /* ALU64_IMM */
+	case BPF_ALU64 | BPF_OR | BPF_K: /* ALU64_IMM */
+	case BPF_ALU64 | BPF_AND | BPF_K: /* ALU64_IMM */
+	case BPF_ALU | BPF_MOV | BPF_K: /* ALU32_IMM */
+	case BPF_ALU | BPF_ADD | BPF_K: /* ALU32_IMM */
+	case BPF_ALU | BPF_SUB | BPF_K: /* ALU32_IMM */
+	case BPF_ALU | BPF_OR | BPF_K: /* ALU32_IMM */
+	case BPF_ALU | BPF_AND | BPF_K: /* ALU32_IMM */
+	case BPF_ALU | BPF_LSH | BPF_K: /* ALU32_IMM */
+	case BPF_ALU | BPF_RSH | BPF_K: /* ALU32_IMM */
+	case BPF_ALU | BPF_XOR | BPF_K: /* ALU32_IMM */
+	case BPF_ALU | BPF_ARSH | BPF_K: /* ALU32_IMM */
+		r = gen_imm_insn(insn, ctx, this_idx);
+		if (r < 0)
+			return r;
+		break;
+	case BPF_ALU64 | BPF_MUL | BPF_K: /* ALU64_IMM */
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
+		if (dst < 0)
+			return dst;
+		if (insn->imm == 1) /* Mult by 1 is a nop */
+			break;
+		src = MIPS_R_T8; /* Use tmp reg pair for imm */
+		gen_imm_to_reg(insn, LO(src), ctx);
+		emit_instr(ctx, sra, HI(src), LO(src), 31);
+		goto case_alu64_mul_x;
+
+	case BPF_ALU64 | BPF_NEG | BPF_K: /* ALU64_IMM */
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
+		if (dst < 0)
+			return dst;
+		emit_instr(ctx, subu, LO(dst), MIPS_R_ZERO, LO(dst));
+		emit_instr(ctx, subu, HI(dst), MIPS_R_ZERO, HI(dst));
+		emit_instr(ctx, sltu, MIPS_R_AT, MIPS_R_ZERO, LO(dst));
+		emit_instr(ctx, subu, HI(dst), HI(dst), MIPS_R_AT);
+		break;
+	case BPF_ALU | BPF_MUL | BPF_K: /* ALU_IMM */
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
+		if (dst < 0)
+			return dst;
+		if (insn->imm == 1) /* Mult by 1 is a nop */
+			break;
+		gen_imm_to_reg(insn, MIPS_R_AT, ctx);
+		if (MIPS_ISA_REV >= 6) {
+			emit_instr(ctx, mulu, LO(dst), LO(dst), MIPS_R_AT);
+		} else {
+			emit_instr(ctx, multu, LO(dst), MIPS_R_AT);
+			emit_instr(ctx, mflo, LO(dst));
+		}
+		break;
+	case BPF_ALU | BPF_NEG | BPF_K: /* ALU_IMM */
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
+		if (dst < 0)
+			return dst;
+		emit_instr(ctx, subu, LO(dst), MIPS_R_ZERO, LO(dst));
+		break;
+	case BPF_ALU | BPF_DIV | BPF_K: /* ALU_IMM */
+	case BPF_ALU | BPF_MOD | BPF_K: /* ALU_IMM */
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
+		if (dst < 0)
+			return dst;
+		if (insn->imm == 1) {
+			/* div by 1 is a nop, mod by 1 is zero */
+			if (bpf_op == BPF_MOD)
+				emit_instr(ctx, move, LO(dst), MIPS_R_ZERO);
+			break;
+		}
+		gen_imm_to_reg(insn, MIPS_R_AT, ctx);
+		if (MIPS_ISA_REV >= 6) {
+			if (bpf_op == BPF_DIV)
+				emit_instr(ctx, divu_r6, LO(dst),
+					   LO(dst), MIPS_R_AT);
+			else
+				emit_instr(ctx, modu, LO(dst),
+					   LO(dst), MIPS_R_AT);
+			break;
+		}
+		emit_instr(ctx, divu, LO(dst), MIPS_R_AT);
+		if (bpf_op == BPF_DIV)
+			emit_instr(ctx, mflo, LO(dst));
+		else
+			emit_instr(ctx, mfhi, LO(dst));
+		break;
+	case BPF_ALU64 | BPF_DIV | BPF_K: /* ALU64_IMM */
+	case BPF_ALU64 | BPF_MOD | BPF_K: /* ALU64_IMM */
+	case BPF_ALU64 | BPF_DIV | BPF_X: /* ALU64_REG */
+	case BPF_ALU64 | BPF_MOD | BPF_X: /* ALU64_REG */
+		r = emit_bpf_divmod64(ctx, insn);
+		if (r < 0)
+			return r;
+		break;
+	case BPF_ALU64 | BPF_MUL | BPF_X: /* ALU64_REG */
+	case BPF_ALU64 | BPF_ADD | BPF_X: /* ALU64_REG */
+	case BPF_ALU64 | BPF_SUB | BPF_X: /* ALU64_REG */
+	case BPF_ALU64 | BPF_MOV | BPF_X: /* ALU64_REG */
+	case BPF_ALU64 | BPF_XOR | BPF_X: /* ALU64_REG */
+	case BPF_ALU64 | BPF_OR | BPF_X: /* ALU64_REG */
+	case BPF_ALU64 | BPF_AND | BPF_X: /* ALU64_REG */
+	case BPF_ALU64 | BPF_LSH | BPF_X: /* ALU64_REG */
+	case BPF_ALU64 | BPF_RSH | BPF_X: /* ALU64_REG */
+	case BPF_ALU64 | BPF_ARSH | BPF_X: /* ALU64_REG */
+		src = ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK);
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
+		if (src < 0 || dst < 0)
+			return -EINVAL;
+		switch (bpf_op) {
+		case BPF_MOV:
+			emit_instr(ctx, move, LO(dst), LO(src));
+			emit_instr(ctx, move, HI(dst), HI(src));
+			break;
+		case BPF_ADD:
+			emit_instr(ctx, addu, HI(dst), HI(dst), HI(src));
+			emit_instr(ctx, addu, MIPS_R_AT, LO(dst), LO(src));
+			emit_instr(ctx, sltu, MIPS_R_AT, MIPS_R_AT, LO(dst));
+			emit_instr(ctx, addu, HI(dst), HI(dst), MIPS_R_AT);
+			emit_instr(ctx, addu, LO(dst), LO(dst), LO(src));
+			break;
+		case BPF_SUB:
+			emit_instr(ctx, subu, HI(dst), HI(dst), HI(src));
+			emit_instr(ctx, subu, MIPS_R_AT, LO(dst), LO(src));
+			emit_instr(ctx, sltu, MIPS_R_AT, LO(dst), MIPS_R_AT);
+			emit_instr(ctx, subu, HI(dst), HI(dst), MIPS_R_AT);
+			emit_instr(ctx, subu, LO(dst), LO(dst), LO(src));
+			break;
+		case BPF_XOR:
+			emit_instr(ctx, xor, LO(dst), LO(dst), LO(src));
+			emit_instr(ctx, xor, HI(dst), HI(dst), HI(src));
+			break;
+		case BPF_OR:
+			emit_instr(ctx, or, LO(dst), LO(dst), LO(src));
+			emit_instr(ctx, or, HI(dst), HI(dst), HI(src));
+			break;
+		case BPF_AND:
+			emit_instr(ctx, and, LO(dst), LO(dst), LO(src));
+			emit_instr(ctx, and, HI(dst), HI(dst), HI(src));
+			break;
+		case BPF_MUL:
+case_alu64_mul_x:
+			emit_instr(ctx, mul, HI(dst), HI(dst), LO(src));
+			emit_instr(ctx, mul, MIPS_R_AT, LO(dst), HI(src));
+			emit_instr(ctx, addu, HI(dst), HI(dst), MIPS_R_AT);
+			if (MIPS_ISA_REV >= 6) {
+				emit_instr(ctx, muhu, MIPS_R_AT, LO(dst), LO(src));
+				emit_instr(ctx, mul, LO(dst), LO(dst), LO(src));
+			} else {
+				emit_instr(ctx, multu, LO(dst), LO(src));
+				emit_instr(ctx, mfhi, MIPS_R_AT);
+				emit_instr(ctx, mflo, LO(dst));
+			}
+			emit_instr(ctx, addu, HI(dst), HI(dst), MIPS_R_AT);
+			break;
+		case BPF_DIV:
+		case BPF_MOD:
+			return -EINVAL;
+		case BPF_LSH:
+			emit_instr(ctx, beqz, LO(src), 11 * 4);
+			emit_instr(ctx, addiu, MIPS_R_AT, LO(src), -32);
+			emit_instr(ctx, bltz, MIPS_R_AT, 4 * 4);
+			emit_instr(ctx, nop);
+			emit_instr(ctx, sllv, HI(dst), LO(dst), MIPS_R_AT);
+			emit_instr(ctx, and, LO(dst), LO(dst), MIPS_R_ZERO);
+			emit_instr(ctx, b, 5 * 4);
+			emit_instr(ctx, subu, MIPS_R_AT, MIPS_R_ZERO, MIPS_R_AT);
+			emit_instr(ctx, srlv, MIPS_R_AT, LO(dst), MIPS_R_AT);
+			emit_instr(ctx, sllv, HI(dst), HI(dst), LO(src));
+			emit_instr(ctx, sllv, LO(dst), LO(dst), LO(src));
+			emit_instr(ctx, or, HI(dst), HI(dst), MIPS_R_AT);
+			break;
+		case BPF_RSH:
+			emit_instr(ctx, beqz, LO(src), 11 * 4);
+			emit_instr(ctx, addiu, MIPS_R_AT, LO(src), -32);
+			emit_instr(ctx, bltz, MIPS_R_AT, 4 * 4);
+			emit_instr(ctx, nop);
+			emit_instr(ctx, srlv, LO(dst), HI(dst), MIPS_R_AT);
+			emit_instr(ctx, and, HI(dst), HI(dst), MIPS_R_ZERO);
+			emit_instr(ctx, b, 5 * 4);
+			emit_instr(ctx, subu, MIPS_R_AT, MIPS_R_ZERO, MIPS_R_AT);
+			emit_instr(ctx, sllv, MIPS_R_AT, HI(dst), MIPS_R_AT);
+			emit_instr(ctx, srlv, HI(dst), HI(dst), LO(src));
+			emit_instr(ctx, srlv, LO(dst), LO(dst), LO(src));
+			emit_instr(ctx, or, LO(dst), LO(dst), MIPS_R_AT);
+			break;
+		case BPF_ARSH:
+			emit_instr(ctx, beqz, LO(src), 11 * 4);
+			emit_instr(ctx, addiu, MIPS_R_AT, LO(src), -32);
+			emit_instr(ctx, bltz, MIPS_R_AT, 4 * 4);
+			emit_instr(ctx, nop);
+			emit_instr(ctx, srav, LO(dst), HI(dst), MIPS_R_AT);
+			emit_instr(ctx, sra, HI(dst), HI(dst), 31);
+			emit_instr(ctx, b, 5 * 4);
+			emit_instr(ctx, subu, MIPS_R_AT, MIPS_R_ZERO, MIPS_R_AT);
+			emit_instr(ctx, sllv, MIPS_R_AT, HI(dst), MIPS_R_AT);
+			emit_instr(ctx, srav, HI(dst), HI(dst), LO(src));
+			emit_instr(ctx, srlv, LO(dst), LO(dst), LO(src));
+			emit_instr(ctx, or, LO(dst), LO(dst), MIPS_R_AT);
+			break;
+		default:
+			pr_err("ALU64_REG NOT HANDLED\n");
+			return -EINVAL;
+		}
+		break;
+	case BPF_ALU | BPF_MOV | BPF_X: /* ALU_REG */
+	case BPF_ALU | BPF_ADD | BPF_X: /* ALU_REG */
+	case BPF_ALU | BPF_SUB | BPF_X: /* ALU_REG */
+	case BPF_ALU | BPF_XOR | BPF_X: /* ALU_REG */
+	case BPF_ALU | BPF_OR | BPF_X: /* ALU_REG */
+	case BPF_ALU | BPF_AND | BPF_X: /* ALU_REG */
+	case BPF_ALU | BPF_MUL | BPF_X: /* ALU_REG */
+	case BPF_ALU | BPF_DIV | BPF_X: /* ALU_REG */
+	case BPF_ALU | BPF_MOD | BPF_X: /* ALU_REG */
+	case BPF_ALU | BPF_LSH | BPF_X: /* ALU_REG */
+	case BPF_ALU | BPF_RSH | BPF_X: /* ALU_REG */
+	case BPF_ALU | BPF_ARSH | BPF_X: /* ALU_REG */
+		src = ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK);
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
+		if (src < 0 || dst < 0)
+			return -EINVAL;
+		/* Special BPF_MOV zext insn from verifier. */
+		if (insn_is_zext(insn)) {
+			gen_zext_insn(dst, true, ctx);
+			break;
+		}
+		switch (bpf_op) {
+		case BPF_MOV:
+			emit_instr(ctx, move, LO(dst), LO(src));
+			break;
+		case BPF_ADD:
+			emit_instr(ctx, addu, LO(dst), LO(dst), LO(src));
+			break;
+		case BPF_SUB:
+			emit_instr(ctx, subu, LO(dst), LO(dst), LO(src));
+			break;
+		case BPF_XOR:
+			emit_instr(ctx, xor, LO(dst), LO(dst), LO(src));
+			break;
+		case BPF_OR:
+			emit_instr(ctx, or, LO(dst), LO(dst), LO(src));
+			break;
+		case BPF_AND:
+			emit_instr(ctx, and, LO(dst), LO(dst), LO(src));
+			break;
+		case BPF_MUL:
+			emit_instr(ctx, mul, LO(dst), LO(dst), LO(src));
+			break;
+		case BPF_DIV:
+		case BPF_MOD:
+			if (MIPS_ISA_REV >= 6) {
+				if (bpf_op == BPF_DIV)
+					emit_instr(ctx, divu_r6, LO(dst),
+						   LO(dst), LO(src));
+				else
+					emit_instr(ctx, modu, LO(dst),
+						   LO(dst), LO(src));
+				break;
+			}
+			emit_instr(ctx, divu, LO(dst), LO(src));
+			if (bpf_op == BPF_DIV)
+				emit_instr(ctx, mflo, LO(dst));
+			else
+				emit_instr(ctx, mfhi, LO(dst));
+			break;
+		case BPF_LSH:
+			emit_instr(ctx, sllv, LO(dst), LO(dst), LO(src));
+			break;
+		case BPF_RSH:
+			emit_instr(ctx, srlv, LO(dst), LO(dst), LO(src));
+			break;
+		case BPF_ARSH:
+			emit_instr(ctx, srav, LO(dst), LO(dst), LO(src));
+			break;
+		default:
+			pr_err("ALU_REG NOT HANDLED\n");
+			return -EINVAL;
+		}
+		break;
+	case BPF_JMP | BPF_EXIT:
+		if (this_idx + 1 < exit_idx) {
+			b_off = b_imm(exit_idx, ctx);
+			if (is_bad_offset(b_off)) {
+				target = j_target(ctx, exit_idx);
+				if (target == (unsigned int)-1)
+					return -E2BIG;
+				emit_instr(ctx, j, target);
+			} else {
+				emit_instr(ctx, b, b_off);
+			}
+			emit_instr(ctx, nop);
+		}
+		break;
+	case BPF_JMP32 | BPF_JSLT | BPF_X:
+	case BPF_JMP32 | BPF_JSLE | BPF_X:
+	case BPF_JMP32 | BPF_JSGT | BPF_X:
+	case BPF_JMP32 | BPF_JSGE | BPF_X:
+	case BPF_JMP32 | BPF_JSGT | BPF_K:
+	case BPF_JMP32 | BPF_JSGE | BPF_K:
+	case BPF_JMP32 | BPF_JSLT | BPF_K:
+	case BPF_JMP32 | BPF_JSLE | BPF_K:
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
+		if (dst < 0)
+			return -EINVAL;
+
+		if (bpf_src == BPF_X) {
+			src = ebpf_to_mips_reg(ctx, insn, REG_SRC_NO_FP);
+			if (src < 0)
+				return -EINVAL;
+		} else if (insn->imm == 0) { /* and BPF_K */
+			src = MIPS_R_ZERO;
+		} else {
+			src = MIPS_R_T8;
+			gen_imm_to_reg(insn, LO(src), ctx);
+		}
+
+		cmp_eq = bpf_op == BPF_JSLE || bpf_op == BPF_JSGE;
+		switch (bpf_op) {
+		case BPF_JSGE:
+			emit_instr(ctx, slt, MIPS_R_AT, LO(dst), LO(src));
+			break;
+		case BPF_JSLT:
+			emit_instr(ctx, slt, MIPS_R_AT, LO(dst), LO(src));
+			break;
+		case BPF_JSGT:
+			emit_instr(ctx, slt, MIPS_R_AT, LO(src), LO(dst));
+			break;
+		case BPF_JSLE:
+			emit_instr(ctx, slt, MIPS_R_AT, LO(src), LO(dst));
+			break;
+		}
+
+		src = MIPS_R_AT;
+		dst = MIPS_R_ZERO;
+		goto jeq_common;
+
+	case BPF_JMP | BPF_JSLT | BPF_X:
+	case BPF_JMP | BPF_JSLE | BPF_X:
+	case BPF_JMP | BPF_JSGT | BPF_X:
+	case BPF_JMP | BPF_JSGE | BPF_X:
+	case BPF_JMP | BPF_JSGT | BPF_K:
+	case BPF_JMP | BPF_JSGE | BPF_K:
+	case BPF_JMP | BPF_JSLT | BPF_K:
+	case BPF_JMP | BPF_JSLE | BPF_K:
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
+		if (dst < 0)
+			return -EINVAL;
+
+		if (bpf_src == BPF_X) {
+			src = ebpf_to_mips_reg(ctx, insn, REG_SRC_NO_FP);
+			if (src < 0)
+				return -EINVAL;
+		} else if (insn->imm == 0) { /* and BPF_K */
+			src = MIPS_R_ZERO;
+		} else {
+			src = MIPS_R_T8;
+			gen_imm_to_reg(insn, LO(src), ctx);
+			if (insn->imm < 0)
+				gen_sext_insn(src, ctx);
+			else
+				gen_zext_insn(src, true, ctx);
+		}
+
+		cmp_eq = bpf_op == BPF_JSGT || bpf_op == BPF_JSGE;
+
+		if (bpf_op == BPF_JSGT || bpf_op == BPF_JSLE) {
+			/* Check dst <= src */
+			emit_instr(ctx, bne, HI(dst), HI(src), 4 * 4);
+			/* Delay slot */
+			emit_instr(ctx, slt, MIPS_R_AT, HI(dst), HI(src));
+			emit_instr(ctx, bne, LO(dst), LO(src), 2 * 4);
+			/* Delay slot */
+			emit_instr(ctx, sltu, MIPS_R_AT, LO(dst), LO(src));
+			emit_instr(ctx, nor, MIPS_R_AT, MIPS_R_ZERO, MIPS_R_AT);
+		} else {
+			/* Check dst < src */
+			emit_instr(ctx, bne, HI(dst), HI(src), 2 * 4);
+			/* Delay slot */
+			emit_instr(ctx, slt, MIPS_R_AT, HI(dst), HI(src));
+			emit_instr(ctx, sltu, MIPS_R_AT, LO(dst), LO(src));
+		}
+
+		src = MIPS_R_AT;
+		dst = MIPS_R_ZERO;
+		goto jeq_common;
+
+	case BPF_JMP | BPF_JLT | BPF_X:
+	case BPF_JMP | BPF_JLE | BPF_X:
+	case BPF_JMP | BPF_JGT | BPF_X:
+	case BPF_JMP | BPF_JGE | BPF_X:
+	case BPF_JMP | BPF_JGT | BPF_K:
+	case BPF_JMP | BPF_JGE | BPF_K:
+	case BPF_JMP | BPF_JLT | BPF_K:
+	case BPF_JMP | BPF_JLE | BPF_K:
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
+		if (dst < 0)
+			return -EINVAL;
+
+		if (bpf_src == BPF_X) {
+			src = ebpf_to_mips_reg(ctx, insn, REG_SRC_NO_FP);
+			if (src < 0)
+				return -EINVAL;
+		} else if (insn->imm == 0) { /* and BPF_K */
+			src = MIPS_R_ZERO;
+		} else {
+			src = MIPS_R_T8;
+			gen_imm_to_reg(insn, LO(src), ctx);
+			if (insn->imm < 0)
+				gen_sext_insn(src, ctx);
+			else
+				gen_zext_insn(src, true, ctx);
+		}
+
+		cmp_eq = bpf_op == BPF_JGT || bpf_op == BPF_JGE;
+
+		if (bpf_op == BPF_JGT || bpf_op == BPF_JLE) {
+			/* Check dst <= src */
+			emit_instr(ctx, bne, HI(dst), HI(src), 4 * 4);
+			/* Delay slot */
+			emit_instr(ctx, sltu, MIPS_R_AT, HI(dst), HI(src));
+			emit_instr(ctx, bne, LO(dst), LO(src), 2 * 4);
+			/* Delay slot */
+			emit_instr(ctx, sltu, MIPS_R_AT, LO(dst), LO(src));
+			emit_instr(ctx, nor, MIPS_R_AT, MIPS_R_ZERO, MIPS_R_AT);
+		} else {
+			/* Check dst < src */
+			emit_instr(ctx, bne, HI(dst), HI(src), 2 * 4);
+			/* Delay slot */
+			emit_instr(ctx, sltu, MIPS_R_AT, HI(dst), HI(src));
+			emit_instr(ctx, sltu, MIPS_R_AT, LO(dst), LO(src));
+		}
+
+		src = MIPS_R_AT;
+		dst = MIPS_R_ZERO;
+		goto jeq_common;
+
+	case BPF_JMP32 | BPF_JLT | BPF_X:
+	case BPF_JMP32 | BPF_JLE | BPF_X:
+	case BPF_JMP32 | BPF_JGT | BPF_X:
+	case BPF_JMP32 | BPF_JGE | BPF_X:
+	case BPF_JMP32 | BPF_JGT | BPF_K:
+	case BPF_JMP32 | BPF_JGE | BPF_K:
+	case BPF_JMP32 | BPF_JLT | BPF_K:
+	case BPF_JMP32 | BPF_JLE | BPF_K:
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
+		if (dst < 0)
+			return -EINVAL;
+
+		if (bpf_src == BPF_X) {
+			src = ebpf_to_mips_reg(ctx, insn, REG_SRC_NO_FP);
+			if (src < 0)
+				return -EINVAL;
+		} else if (insn->imm == 0) { /* and BPF_K */
+			src = MIPS_R_ZERO;
+		} else {
+			src = MIPS_R_T8;
+			gen_imm_to_reg(insn, LO(src), ctx);
+		}
+
+		cmp_eq = bpf_op == BPF_JLE || bpf_op == BPF_JGE;
+		switch (bpf_op) {
+		case BPF_JGE:
+			emit_instr(ctx, sltu, MIPS_R_AT, LO(dst), LO(src));
+			break;
+		case BPF_JLT:
+			emit_instr(ctx, sltu, MIPS_R_AT, LO(dst), LO(src));
+			break;
+		case BPF_JGT:
+			emit_instr(ctx, sltu, MIPS_R_AT, LO(src), LO(dst));
+			break;
+		case BPF_JLE:
+			emit_instr(ctx, sltu, MIPS_R_AT, LO(src), LO(dst));
+			break;
+		}
+
+		src = MIPS_R_AT;
+		dst = MIPS_R_ZERO;
+		goto jeq_common;
+
+	case BPF_JMP | BPF_JEQ | BPF_X: /* JMP_REG */
+	case BPF_JMP | BPF_JNE | BPF_X:
+	case BPF_JMP32 | BPF_JEQ | BPF_X:
+	case BPF_JMP32 | BPF_JNE | BPF_X:
+		src = ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK);
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK);
+		if (src < 0 || dst < 0)
+			return -EINVAL;
+
+		cmp_eq = (bpf_op == BPF_JEQ);
+		if (bpf_class == BPF_JMP) {
+			emit_instr(ctx, beq, HI(dst), HI(src), 2 * 4);
+			/* Delay slot */
+			emit_instr(ctx, move, MIPS_R_AT, LO(src));
+			/* Make low words unequal if high word unequal. */
+			emit_instr(ctx, addu, MIPS_R_AT, LO(dst), MIPS_R_SP);
+			dst = LO(dst);
+			src = MIPS_R_AT;
+		} else { /* BPF_JMP32 */
+			dst = LO(dst);
+			src = LO(src);
+		}
+		goto jeq_common;
+
+	case BPF_JMP | BPF_JSET | BPF_X: /* JMP_REG */
+	case BPF_JMP32 | BPF_JSET | BPF_X:
+		src = ebpf_to_mips_reg(ctx, insn, REG_SRC_NO_FP);
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
+		if (src < 0 || dst < 0)
+			return -EINVAL;
+		emit_instr(ctx, and, MIPS_R_AT, LO(dst), LO(src));
+		if (bpf_class == BPF_JMP) {
+			emit_instr(ctx, and, MIPS_R_T8, HI(dst), HI(src));
+			emit_instr(ctx, or, MIPS_R_AT, MIPS_R_AT, MIPS_R_T8);
+		}
+		cmp_eq = false;
+		dst = MIPS_R_AT;
+		src = MIPS_R_ZERO;
+jeq_common:
+		/*
+		 * If the next insn is EXIT and we are jumping around
+		 * only it, invert the sense of the compare and
+		 * conditionally jump to the exit. Poor man's branch
+		 * chaining.
+		 */
+		if ((insn + 1)->code == (BPF_JMP | BPF_EXIT) && insn->off == 1) {
+			b_off = b_imm(exit_idx, ctx);
+			if (is_bad_offset(b_off)) {
+				target = j_target(ctx, exit_idx);
+				if (target == (unsigned int)-1)
+					return -E2BIG;
+				cmp_eq = !cmp_eq;
+				b_off = 4 * 3;
+				if (!(ctx->offsets[this_idx] & OFFSETS_B_CONV)) {
+					ctx->offsets[this_idx] |= OFFSETS_B_CONV;
+					ctx->long_b_conversion = 1;
+				}
+			}
+
+			if (cmp_eq)
+				emit_instr(ctx, bne, dst, src, b_off);
+			else
+				emit_instr(ctx, beq, dst, src, b_off);
+			emit_instr(ctx, nop);
+			if (ctx->offsets[this_idx] & OFFSETS_B_CONV) {
+				emit_instr(ctx, j, target);
+				emit_instr(ctx, nop);
+			}
+			return 2; /* We consumed the exit. */
+		}
+		b_off = b_imm(this_idx + insn->off + 1, ctx);
+		if (is_bad_offset(b_off)) {
+			target = j_target(ctx, this_idx + insn->off + 1);
+			if (target == (unsigned int)-1)
+				return -E2BIG;
+			cmp_eq = !cmp_eq;
+			b_off = 4 * 3;
+			if (!(ctx->offsets[this_idx] & OFFSETS_B_CONV)) {
+				ctx->offsets[this_idx] |= OFFSETS_B_CONV;
+				ctx->long_b_conversion = 1;
+			}
+		}
+
+		if (cmp_eq)
+			emit_instr(ctx, beq, dst, src, b_off);
+		else
+			emit_instr(ctx, bne, dst, src, b_off);
+		emit_instr(ctx, nop);
+		if (ctx->offsets[this_idx] & OFFSETS_B_CONV) {
+			emit_instr(ctx, j, target);
+			emit_instr(ctx, nop);
+		}
+		break;
+
+	case BPF_JMP | BPF_JEQ | BPF_K: /* JMP_IMM */
+	case BPF_JMP | BPF_JNE | BPF_K: /* JMP_IMM */
+	case BPF_JMP32 | BPF_JEQ | BPF_K: /* JMP_IMM */
+	case BPF_JMP32 | BPF_JNE | BPF_K: /* JMP_IMM */
+		cmp_eq = (bpf_op == BPF_JEQ);
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK);
+		if (dst < 0)
+			return dst;
+		if (insn->imm == 0) {
+			src = MIPS_R_ZERO;
+			if (bpf_class == BPF_JMP32) {
+				dst = LO(dst);
+			} else { /* BPF_JMP */
+				emit_instr(ctx, or, MIPS_R_AT, LO(dst), HI(dst));
+				dst = MIPS_R_AT;
+			}
+		} else if (bpf_class == BPF_JMP32) {
+			gen_imm_to_reg(insn, MIPS_R_AT, ctx);
+			src = MIPS_R_AT;
+			dst = LO(dst);
+		} else { /* BPF_JMP */
+			gen_imm_to_reg(insn, MIPS_R_AT, ctx);
+			/* If low words equal, check high word vs imm sign. */
+			emit_instr(ctx, beq, LO(dst), MIPS_R_AT, 2 * 4);
+			emit_instr(ctx, nop);
+			/* Make high word signs unequal if low words unequal. */
+			emit_instr(ctx, nor, MIPS_R_AT, MIPS_R_ZERO, HI(dst));
+			emit_instr(ctx, sra, MIPS_R_AT, MIPS_R_AT, 31);
+			src = MIPS_R_AT;
+			dst = HI(dst);
+		}
+		goto jeq_common;
+
+	case BPF_JMP | BPF_JSET | BPF_K: /* JMP_IMM */
+	case BPF_JMP32 | BPF_JSET | BPF_K: /* JMP_IMM */
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
+		if (dst < 0)
+			return dst;
+
+		t64u = (u32)insn->imm;
+		gen_imm_to_reg(insn, MIPS_R_AT, ctx);
+		emit_instr(ctx, and, MIPS_R_AT, LO(dst), MIPS_R_AT);
+		if (bpf_class == BPF_JMP && insn->imm < 0)
+			emit_instr(ctx, or, MIPS_R_AT, MIPS_R_AT, HI(dst));
+		src = MIPS_R_AT;
+		dst = MIPS_R_ZERO;
+		cmp_eq = false;
+		goto jeq_common;
+
+	case BPF_JMP | BPF_JA:
+		/*
+		 * Prefer relative branch for easier debugging, but
+		 * fall back if needed.
+		 */
+		b_off = b_imm(this_idx + insn->off + 1, ctx);
+		if (is_bad_offset(b_off)) {
+			target = j_target(ctx, this_idx + insn->off + 1);
+			if (target == (unsigned int)-1)
+				return -E2BIG;
+			emit_instr(ctx, j, target);
+		} else {
+			emit_instr(ctx, b, b_off);
+		}
+		emit_instr(ctx, nop);
+		break;
+	case BPF_LD | BPF_DW | BPF_IMM:
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
+		if (dst < 0)
+			return dst;
+		gen_imm_to_reg(insn, LO(dst), ctx);
+		gen_imm_to_reg(insn+1, HI(dst), ctx);
+		return 2; /* Double slot insn */
+
+	case BPF_JMP | BPF_CALL:
+		emit_bpf_call(ctx, insn);
+		break;
+	case BPF_JMP | BPF_TAIL_CALL:
+		if (emit_bpf_tail_call(ctx, this_idx))
+			return -EINVAL;
+		break;
+
+	case BPF_ALU | BPF_END | BPF_FROM_BE:
+	case BPF_ALU | BPF_END | BPF_FROM_LE:
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
+		if (dst < 0)
+			return dst;
+#ifdef __BIG_ENDIAN
+		need_swap = (bpf_src == BPF_FROM_LE);
+#else
+		need_swap = (bpf_src == BPF_FROM_BE);
+#endif
+		if (insn->imm == 16) {
+			if (need_swap)
+				emit_instr(ctx, wsbh, LO(dst), LO(dst));
+			emit_instr(ctx, andi, LO(dst), LO(dst), 0xffff);
+		} else if (insn->imm == 32) {
+			if (need_swap) {
+				emit_instr(ctx, wsbh, LO(dst), LO(dst));
+				emit_instr(ctx, rotr, LO(dst), LO(dst), 16);
+			}
+		} else { /* 64-bit */
+			if (need_swap) {
+				emit_instr(ctx, wsbh, MIPS_R_AT, LO(dst));
+				emit_instr(ctx, wsbh, LO(dst), HI(dst));
+				emit_instr(ctx, rotr, HI(dst), MIPS_R_AT, 16);
+				emit_instr(ctx, rotr, LO(dst), LO(dst), 16);
+			}
+		}
+		break;
+
+	case BPF_ST | BPF_DW | BPF_MEM:
+	case BPF_ST | BPF_B | BPF_MEM:
+	case BPF_ST | BPF_H | BPF_MEM:
+	case BPF_ST | BPF_W | BPF_MEM:
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK);
+		if (dst < 0)
+			return -EINVAL;
+		mem_off = insn->off;
+		gen_imm_to_reg(insn, MIPS_R_AT, ctx);
+
+		switch (bpf_size) {
+		case BPF_B:
+			emit_instr(ctx, sb, MIPS_R_AT, mem_off, LO(dst));
+			break;
+		case BPF_H:
+			emit_instr(ctx, sh, MIPS_R_AT, mem_off, LO(dst));
+			break;
+		case BPF_W:
+			emit_instr(ctx, sw, MIPS_R_AT, mem_off, LO(dst));
+			break;
+		case BPF_DW:
+			/* Memory order == register order in pair */
+			emit_instr(ctx, sw, MIPS_R_AT, OFFLO(mem_off), LO(dst));
+			if (insn->imm < 0) {
+				emit_instr(ctx, nor, MIPS_R_AT,
+					   MIPS_R_ZERO, MIPS_R_ZERO);
+				emit_instr(ctx, sw, MIPS_R_AT,
+					   OFFHI(mem_off), LO(dst));
+			} else {
+				emit_instr(ctx, sw, MIPS_R_ZERO,
+					   OFFHI(mem_off), LO(dst));
+			}
+			break;
+		}
+		break;
+
+	case BPF_LDX | BPF_DW | BPF_MEM:
+	case BPF_LDX | BPF_B | BPF_MEM:
+	case BPF_LDX | BPF_H | BPF_MEM:
+	case BPF_LDX | BPF_W | BPF_MEM:
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_NO_FP);
+		src = ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK);
+		if (src < 0 || dst < 0)
+			return -EINVAL;
+		mem_off = insn->off;
+
+		switch (bpf_size) {
+		case BPF_B:
+			emit_instr(ctx, lbu, LO(dst), mem_off, LO(src));
+			break;
+		case BPF_H:
+			emit_instr(ctx, lhu, LO(dst), mem_off, LO(src));
+			break;
+		case BPF_W:
+			emit_instr(ctx, lw, LO(dst), mem_off, LO(src));
+			break;
+		case BPF_DW:
+			/*
+			 * Careful: update HI(dst) first in case dst == src,
+			 * since only LO(src) is the usable pointer.
+			 */
+			emit_instr(ctx, lw, HI(dst), OFFHI(mem_off), LO(src));
+			emit_instr(ctx, lw, LO(dst), OFFLO(mem_off), LO(src));
+			break;
+		}
+		break;
+
+	case BPF_STX | BPF_DW | BPF_ATOMIC:
+		r = emit_bpf_atomic64(ctx, insn);
+		if (r < 0)
+			return r;
+		break;
+	case BPF_STX | BPF_W | BPF_ATOMIC:
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK);
+		src = ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK);
+		if (src < 0 || dst < 0)
+			return -EINVAL;
+		mem_off = insn->off;
+		if (insn->imm != BPF_ADD) {
+			pr_err("ATOMIC OP %02x NOT HANDLED\n", insn->imm);
+			return -EINVAL;
+		}
+		/*
+		 * Drop reg pair scheme for more efficient temp register usage
+		 * given BPF_W mode.
+		 */
+		dst = LO(dst);
+		src = LO(src);
+		/*
+		 * If mem_off does not fit within the 9-bit ll/sc instruction
+		 * immediate field, use a temp reg.
+		 */
+		if (MIPS_ISA_REV >= 6 &&
+		    (mem_off >= BIT(8) || mem_off < -BIT(8))) {
+			emit_instr(ctx, addiu, MIPS_R_T9, dst, mem_off);
+			mem_off = 0;
+			dst = MIPS_R_T9;
+		}
+		emit_instr(ctx, ll, MIPS_R_AT, mem_off, dst);
+		emit_instr(ctx, addu, MIPS_R_AT, MIPS_R_AT, src);
+		emit_instr(ctx, sc, MIPS_R_AT, mem_off, dst);
+		/*
+		 * On failure back up to LL (-4 insns of 4 bytes each)
+		 */
+		emit_instr(ctx, beqz, MIPS_R_AT, -4 * 4);
+		emit_instr(ctx, nop);
+		break;
+
+	case BPF_STX | BPF_DW | BPF_MEM:
+	case BPF_STX | BPF_B | BPF_MEM:
+	case BPF_STX | BPF_H | BPF_MEM:
+	case BPF_STX | BPF_W | BPF_MEM:
+		dst = ebpf_to_mips_reg(ctx, insn, REG_DST_FP_OK);
+		src = ebpf_to_mips_reg(ctx, insn, REG_SRC_FP_OK);
+		if (src < 0 || dst < 0)
+			return -EINVAL;
+		mem_off = insn->off;
+
+		switch (bpf_size) {
+		case BPF_B:
+			emit_instr(ctx, sb, LO(src), mem_off, LO(dst));
+			break;
+		case BPF_H:
+			emit_instr(ctx, sh, LO(src), mem_off, LO(dst));
+			break;
+		case BPF_W:
+			emit_instr(ctx, sw, LO(src), mem_off, LO(dst));
+			break;
+		case BPF_DW:
+			emit_instr(ctx, sw, HI(src), OFFHI(mem_off), LO(dst));
+			emit_instr(ctx, sw, LO(src), OFFLO(mem_off), LO(dst));
+			break;
+		}
+		break;
+
+	default:
+		pr_err("NOT HANDLED %d - (%02x)\n",
+		       this_idx, (unsigned int)insn->code);
+		return -EINVAL;
+	}
+	/*
+	 * Handle zero-extension if the verifier is unable to patch and
+	 * insert its own special zext insns.
+	 */
+	if ((bpf_class == BPF_ALU && !(bpf_op == BPF_END && insn->imm == 64)) ||
+	    (bpf_class == BPF_LDX && bpf_size != BPF_DW))
+		gen_zext_insn(dst, false, ctx);
+	return 1;
+}
+
+/* Enable the verifier to insert zext insn for ALU32 ops as needed. */
+bool bpf_jit_needs_zext(void)
+{
+	return true;
+}
diff --git a/arch/mips/net/ebpf_jit_core.c b/arch/mips/net/ebpf_jit_core.c
index 5bc33b4bbb2a..5ea5d4afd661 100644
--- a/arch/mips/net/ebpf_jit_core.c
+++ b/arch/mips/net/ebpf_jit_core.c
@@ -633,7 +633,8 @@ static int build_int_body(struct jit_ctx *ctx)
 
 	for (i = 0; i < prog->len; ) {
 		insn = prog->insnsi + i;
-		if ((ctx->reg_val_types[i] & RVT_VISITED_MASK) == 0) {
+		if (is64bit() && (ctx->reg_val_types[i] &
+				  RVT_VISITED_MASK) == 0) {
 			/* dead instruction, don't emit it. */
 			i++;
 			continue;
@@ -1019,14 +1020,19 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	if (ctx.offsets == NULL)
 		goto out_err;
 
-	ctx.reg_val_types = kcalloc(prog->len + 1, sizeof(*ctx.reg_val_types), GFP_KERNEL);
-	if (ctx.reg_val_types == NULL)
-		goto out_err;
-
 	ctx.skf = prog;
 
-	if (reg_val_propagate(&ctx))
-		goto out_err;
+	/* Static analysis only used for MIPS64. */
+	if (is64bit()) {
+		ctx.reg_val_types = kcalloc(prog->len + 1,
+					    sizeof(*ctx.reg_val_types),
+					    GFP_KERNEL);
+		if (ctx.reg_val_types == NULL)
+			goto out_err;
+
+		if (reg_val_propagate(&ctx))
+			goto out_err;
+	}
 
 	/*
 	 * First pass discovers used resources and instruction offsets