From patchwork Tue Jan 28 02:11:42 2020
X-Patchwork-Id: 11353465
From: Palmer Dabbelt
To: Bjorn Topel
Cc: daniel@iogearbox.net, ast@kernel.org, zlim.lnx@gmail.com,
    catalin.marinas@arm.com, will@kernel.org, kafai@fb.com,
    songliubraving@fb.com, yhs@fb.com, andriin@fb.com, shuah@kernel.org,
    Palmer Dabbelt, netdev@vger.kernel.org, bpf@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org, clang-built-linux@googlegroups.com,
    kernel-team@android.com
Subject: [PATCH 1/4] selftests/bpf: Elide a check for LLVM versions that can't compile it
Date: Mon, 27 Jan 2020 18:11:42 -0800
Message-Id: <20200128021145.36774-2-palmerdabbelt@google.com>
In-Reply-To: <20200128021145.36774-1-palmerdabbelt@google.com>
References: <20200128021145.36774-1-palmerdabbelt@google.com>

The current stable LLVM BPF backend fails to compile the BPF selftests
due to a compiler bug.  The bug has been fixed in trunk, but that fix
hasn't landed in the binary packages I'm using yet (Fedora arm64).
Without this workaround the tests don't compile for me.

This patch triggers a preprocessor warning on LLVM versions that
definitely have the bug.  The check may be conservative (i.e., I'm not
sure if 9.1 will have the fix), but it should at least make the current
set of stable releases work together.

See https://reviews.llvm.org/D69438 for more information on the fix.  I
obtained the workaround from
https://lore.kernel.org/linux-kselftest/aed8eda7-df20-069b-ea14-f06628984566@gmail.com/T/

Fixes: 20a9ad2e7136 ("selftests/bpf: add CO-RE relocs array tests")
Signed-off-by: Palmer Dabbelt
---
 .../testing/selftests/bpf/progs/test_core_reloc_arrays.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c b/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
index bf67f0fdf743..c9a3e0585a84 100644
--- a/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
+++ b/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
@@ -40,15 +40,23 @@ int test_core_arrays(void *ctx)
 	/* in->a[2] */
 	if (BPF_CORE_READ(&out->a2, &in->a[2]))
 		return 1;
+#if defined(__clang__) && (__clang_major__ < 10) && (__clang_minor__ < 1)
+# warning "clang 9.0 SEGVs on multidimensional arrays, see https://reviews.llvm.org/D69438"
+#else
 	/* in->b[1][2][3] */
 	if (BPF_CORE_READ(&out->b123, &in->b[1][2][3]))
 		return 1;
+#endif
 	/* in->c[1].c */
 	if (BPF_CORE_READ(&out->c1c, &in->c[1].c))
 		return 1;
+#if defined(__clang__) && (__clang_major__ < 10) && (__clang_minor__ < 1)
+# warning "clang 9.0 SEGVs on multidimensional arrays, see https://reviews.llvm.org/D69438"
+#else
 	/* in->d[0][0].d */
 	if (BPF_CORE_READ(&out->d00d, &in->d[0][0].d))
 		return 1;
+#endif
 
 	return 0;
 }
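[Editorial note: the guard above keys entirely off clang's version macros.  The snippet below is not part of the patch; it is a minimal standalone sketch of which compilers take which branch, and its warning string is a placeholder.]

/* Minimal sketch of the version gate used in the patch above (the
 * warning text is a placeholder, not the string from the patch). */
#if defined(__clang__) && (__clang_major__ < 10) && (__clang_minor__ < 1)
/* clang 9.0 lands here: major 9 < 10 and minor 0 < 1, so the
 * multidimensional-array reads are elided and a warning is printed. */
# warning "using the clang 9.0 workaround path"
#else
/* clang 9.1 (minor 1), clang 10 and newer (major >= 10), and GCC
 * (__clang__ undefined) all compile the full test body. */
#endif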
From patchwork Tue Jan 28 02:11:43 2020
X-Patchwork-Id: 11353467
From: Palmer Dabbelt
To: Bjorn Topel
Cc: daniel@iogearbox.net, ast@kernel.org, zlim.lnx@gmail.com,
    catalin.marinas@arm.com, will@kernel.org, kafai@fb.com,
    songliubraving@fb.com, yhs@fb.com, andriin@fb.com, shuah@kernel.org,
    Palmer Dabbelt, netdev@vger.kernel.org, bpf@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org, clang-built-linux@googlegroups.com,
    kernel-team@android.com
Subject: [PATCH 2/4] arm64: bpf: Convert bpf2a64 to a function
Date: Mon, 27 Jan 2020 18:11:43 -0800
Message-Id: <20200128021145.36774-3-palmerdabbelt@google.com>
In-Reply-To: <20200128021145.36774-1-palmerdabbelt@google.com>
References: <20200128021145.36774-1-palmerdabbelt@google.com>

This patch is not intended to change any functionality; it just allows
me to more cleanly add dynamic register mapping later.
Signed-off-by: Palmer Dabbelt
---
 arch/arm64/net/bpf_jit_comp.c | 53 +++++++++++++++++++----------------
 1 file changed, 29 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index cdc79de0c794..8eee68705056 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -25,7 +25,7 @@
 #define TMP_REG_3 (MAX_BPF_JIT_REG + 3)
 
 /* Map BPF registers to A64 registers */
-static const int bpf2a64[] = {
+static const int bpf2a64_default[] = {
 	/* return value from in-kernel function, and exit value from eBPF */
 	[BPF_REG_0] = A64_R(7),
 	/* arguments from eBPF program to in-kernel function */
@@ -60,6 +60,11 @@ struct jit_ctx {
 	u32 stack_size;
 };
 
+static inline int bpf2a64(struct jit_ctx *ctx, int bpf_reg)
+{
+	return bpf2a64_default[bpf_reg];
+}
+
 static inline void emit(const u32 insn, struct jit_ctx *ctx)
 {
 	if (ctx->image != NULL)
@@ -176,12 +181,12 @@ static inline int epilogue_offset(const struct jit_ctx *ctx)
 static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf)
 {
 	const struct bpf_prog *prog = ctx->prog;
-	const u8 r6 = bpf2a64[BPF_REG_6];
-	const u8 r7 = bpf2a64[BPF_REG_7];
-	const u8 r8 = bpf2a64[BPF_REG_8];
-	const u8 r9 = bpf2a64[BPF_REG_9];
-	const u8 fp = bpf2a64[BPF_REG_FP];
-	const u8 tcc = bpf2a64[TCALL_CNT];
+	const u8 r6 = bpf2a64(ctx, BPF_REG_6);
+	const u8 r7 = bpf2a64(ctx, BPF_REG_7);
+	const u8 r8 = bpf2a64(ctx, BPF_REG_8);
+	const u8 r9 = bpf2a64(ctx, BPF_REG_9);
+	const u8 fp = bpf2a64(ctx, BPF_REG_FP);
+	const u8 tcc = bpf2a64(ctx, TCALL_CNT);
 	const int idx0 = ctx->idx;
 	int cur_offset;
 
@@ -243,12 +248,12 @@ static int out_offset = -1; /* initialized on the first pass of build_body() */
 static int emit_bpf_tail_call(struct jit_ctx *ctx)
 {
 	/* bpf_tail_call(void *prog_ctx, struct bpf_array *array, u64 index) */
-	const u8 r2 = bpf2a64[BPF_REG_2];
-	const u8 r3 = bpf2a64[BPF_REG_3];
+	const u8 r2 = bpf2a64(ctx, BPF_REG_2);
+	const u8 r3 = bpf2a64(ctx, BPF_REG_3);
 
-	const u8 tmp = bpf2a64[TMP_REG_1];
-	const u8 prg = bpf2a64[TMP_REG_2];
-	const u8 tcc = bpf2a64[TCALL_CNT];
+	const u8 tmp = bpf2a64(ctx, TMP_REG_1);
+	const u8 prg = bpf2a64(ctx, TMP_REG_2);
+	const u8 tcc = bpf2a64(ctx, TCALL_CNT);
 	const int idx0 = ctx->idx;
 #define cur_offset (ctx->idx - idx0)
 #define jmp_offset (out_offset - (cur_offset))
@@ -307,12 +312,12 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
 static void build_epilogue(struct jit_ctx *ctx)
 {
-	const u8 r0 = bpf2a64[BPF_REG_0];
-	const u8 r6 = bpf2a64[BPF_REG_6];
-	const u8 r7 = bpf2a64[BPF_REG_7];
-	const u8 r8 = bpf2a64[BPF_REG_8];
-	const u8 r9 = bpf2a64[BPF_REG_9];
-	const u8 fp = bpf2a64[BPF_REG_FP];
+	const u8 r0 = bpf2a64(ctx, BPF_REG_0);
+	const u8 r6 = bpf2a64(ctx, BPF_REG_6);
+	const u8 r7 = bpf2a64(ctx, BPF_REG_7);
+	const u8 r8 = bpf2a64(ctx, BPF_REG_8);
+	const u8 r9 = bpf2a64(ctx, BPF_REG_9);
+	const u8 fp = bpf2a64(ctx, BPF_REG_FP);
 
 	/* We're done with BPF stack */
 	emit(A64_ADD_I(1, A64_SP, A64_SP, ctx->stack_size), ctx);
@@ -343,11 +348,11 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		      bool extra_pass)
 {
 	const u8 code = insn->code;
-	const u8 dst = bpf2a64[insn->dst_reg];
-	const u8 src = bpf2a64[insn->src_reg];
-	const u8 tmp = bpf2a64[TMP_REG_1];
-	const u8 tmp2 = bpf2a64[TMP_REG_2];
-	const u8 tmp3 = bpf2a64[TMP_REG_3];
+	const u8 dst = bpf2a64(ctx, insn->dst_reg);
+	const u8 src = bpf2a64(ctx, insn->src_reg);
+	const u8 tmp = bpf2a64(ctx, TMP_REG_1);
+	const u8 tmp2 = bpf2a64(ctx, TMP_REG_2);
+	const u8 tmp3 = bpf2a64(ctx, TMP_REG_3);
 	const s16 off = insn->off;
 	const s32 imm = insn->imm;
 	const int i = insn - ctx->prog->insnsi;
@@ -634,7 +639,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	/* function call */
 	case BPF_JMP | BPF_CALL:
 	{
-		const u8 r0 = bpf2a64[BPF_REG_0];
+		const u8 r0 = bpf2a64(ctx, BPF_REG_0);
 		bool func_addr_fixed;
 		u64 func_addr;
 		int ret;
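[Editorial note: the value of the indirection is that a function can consult JIT state, which a plain array lookup cannot.  The sketch below shows the kind of state-dependent mapping this enables; patch 4 later in this series proposes essentially this check, and the snippet mirrors it rather than inventing a new API.]

/* Sketch of a state-dependent mapping made possible by the accessor
 * introduced above; see patch 4 for the version actually proposed in
 * this series. */
static inline int bpf2a64(struct jit_ctx *ctx, int bpf_reg)
{
	/* If BPF_REG_0's value currently lives in the register normally
	 * used for BPF_REG_1, redirect the lookup. */
	if (ctx->reg0_in_reg1 && bpf_reg == BPF_REG_0)
		bpf_reg = BPF_REG_1;

	return bpf2a64_default[bpf_reg];
}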
From patchwork Tue Jan 28 02:11:44 2020
X-Patchwork-Id: 11353469
From: Palmer Dabbelt
To: Bjorn Topel
Cc: daniel@iogearbox.net, ast@kernel.org, zlim.lnx@gmail.com,
    catalin.marinas@arm.com, will@kernel.org, kafai@fb.com,
    songliubraving@fb.com, yhs@fb.com, andriin@fb.com, shuah@kernel.org,
    Palmer Dabbelt, netdev@vger.kernel.org, bpf@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org, clang-built-linux@googlegroups.com,
    kernel-team@android.com
Subject: [PATCH 3/4] arm64: bpf: Split the read and write halves of dst
Date: Mon, 27 Jan 2020 18:11:44 -0800
Message-Id: <20200128021145.36774-4-palmerdabbelt@google.com>
In-Reply-To: <20200128021145.36774-1-palmerdabbelt@google.com>
References: <20200128021145.36774-1-palmerdabbelt@google.com>

This patch is not intended to change any functionality; it just allows
me to do register renaming later.

Signed-off-by: Palmer Dabbelt
---
 arch/arm64/net/bpf_jit_comp.c | 107 +++++++++++++++++-----------------
 1 file changed, 54 insertions(+), 53 deletions(-)

diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 8eee68705056..fba5b1b00cd7 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -348,7 +348,8 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		      bool extra_pass)
 {
 	const u8 code = insn->code;
-	const u8 dst = bpf2a64(ctx, insn->dst_reg);
+	const u8 dstw = bpf2a64(ctx, insn->dst_reg);
+	const u8 dstr = bpf2a64(ctx, insn->dst_reg);
 	const u8 src = bpf2a64(ctx, insn->src_reg);
 	const u8 tmp = bpf2a64(ctx, TMP_REG_1);
 	const u8 tmp2 = bpf2a64(ctx, TMP_REG_2);
@@ -377,32 +378,32 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	/* dst = src */
 	case BPF_ALU | BPF_MOV | BPF_X:
 	case BPF_ALU64 | BPF_MOV | BPF_X:
-		emit(A64_MOV(is64, dst, src), ctx);
+		emit(A64_MOV(is64, dstw, src), ctx);
 		break;
 	/* dst = dst OP src */
 	case BPF_ALU | BPF_ADD | BPF_X:
 	case BPF_ALU64 | BPF_ADD | BPF_X:
-		emit(A64_ADD(is64, dst, dst, src), ctx);
+		emit(A64_ADD(is64, dstw, dstr, src), ctx);
 		break;
 	case BPF_ALU | BPF_SUB | BPF_X:
 	case BPF_ALU64 | BPF_SUB | BPF_X:
-		emit(A64_SUB(is64, dst, dst, src), ctx);
+		emit(A64_SUB(is64, dstw, dstr, src), ctx);
 		break;
 	case BPF_ALU | BPF_AND | BPF_X:
 	case BPF_ALU64 | BPF_AND | BPF_X:
-		emit(A64_AND(is64, dst, dst, src), ctx);
+		emit(A64_AND(is64, dstw, dstr, src), ctx);
 		break;
 	case BPF_ALU | BPF_OR | BPF_X:
 	case BPF_ALU64 | BPF_OR | BPF_X:
-		emit(A64_ORR(is64, dst, dst, src), ctx);
+		emit(A64_ORR(is64, dstw, dstr, src), ctx);
 		break;
 	case BPF_ALU | BPF_XOR | BPF_X:
 	case BPF_ALU64 | BPF_XOR | BPF_X:
-		emit(A64_EOR(is64, dst, dst, src), ctx);
+		emit(A64_EOR(is64, dstw, dstr, src), ctx);
 		break;
 	case BPF_ALU | BPF_MUL | BPF_X:
 	case BPF_ALU64 | BPF_MUL | BPF_X:
-		emit(A64_MUL(is64, dst, dst, src), ctx);
+		emit(A64_MUL(is64, dstw, dstr, src), ctx);
 		break;
 	case BPF_ALU | BPF_DIV | BPF_X:
 	case BPF_ALU64 | BPF_DIV | BPF_X:
@@ -410,30 +411,30 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	case BPF_ALU64 | BPF_MOD | BPF_X:
 		switch (BPF_OP(code)) {
 		case BPF_DIV:
-			emit(A64_UDIV(is64, dst, dst, src), ctx);
+			emit(A64_UDIV(is64, dstw, dstr, src), ctx);
 			break;
 		case BPF_MOD:
-			emit(A64_UDIV(is64, tmp, dst, src), ctx);
-			emit(A64_MSUB(is64, dst, dst, tmp, src), ctx);
+			emit(A64_UDIV(is64, tmp, dstr, src), ctx);
+			emit(A64_MSUB(is64, dstw, dstr, tmp, src), ctx);
 			break;
 		}
 		break;
 	case BPF_ALU | BPF_LSH | BPF_X:
 	case BPF_ALU64 | BPF_LSH | BPF_X:
-		emit(A64_LSLV(is64, dst, dst, src), ctx);
+		emit(A64_LSLV(is64, dstw, dstr, src), ctx);
 		break;
 	case BPF_ALU | BPF_RSH | BPF_X:
 	case BPF_ALU64 | BPF_RSH | BPF_X:
-		emit(A64_LSRV(is64, dst, dst, src), ctx);
+		emit(A64_LSRV(is64, dstw, dstr, src), ctx);
 		break;
 	case BPF_ALU | BPF_ARSH | BPF_X:
 	case BPF_ALU64 | BPF_ARSH | BPF_X:
-		emit(A64_ASRV(is64, dst, dst, src), ctx);
+		emit(A64_ASRV(is64, dstw, dstr, src), ctx);
 		break;
 	/* dst = -dst */
 	case BPF_ALU | BPF_NEG:
 	case BPF_ALU64 | BPF_NEG:
-		emit(A64_NEG(is64, dst, dst), ctx);
+		emit(A64_NEG(is64, dstw, dstr), ctx);
 		break;
 	/* dst = BSWAP##imm(dst) */
 	case BPF_ALU | BPF_END | BPF_FROM_LE:
@@ -447,16 +448,16 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 #endif
 		switch (imm) {
 		case 16:
-			emit(A64_REV16(is64, dst, dst), ctx);
+			emit(A64_REV16(is64, dstw, dstr), ctx);
 			/* zero-extend 16 bits into 64 bits */
-			emit(A64_UXTH(is64, dst, dst), ctx);
+			emit(A64_UXTH(is64, dstw, dstr), ctx);
 			break;
 		case 32:
-			emit(A64_REV32(is64, dst, dst), ctx);
+			emit(A64_REV32(is64, dstw, dstr), ctx);
 			/* upper 32 bits already cleared */
 			break;
 		case 64:
-			emit(A64_REV64(dst, dst), ctx);
+			emit(A64_REV64(dstw, dstr), ctx);
 			break;
 		}
 		break;
@@ -464,11 +465,11 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		switch (imm) {
 		case 16:
 			/* zero-extend 16 bits into 64 bits */
-			emit(A64_UXTH(is64, dst, dst), ctx);
+			emit(A64_UXTH(is64, dstw, dstr), ctx);
 			break;
 		case 32:
 			/* zero-extend 32 bits into 64 bits */
-			emit(A64_UXTW(is64, dst, dst), ctx);
+			emit(A64_UXTW(is64, dstw, dstr), ctx);
 			break;
 		case 64:
 			/* nop */
@@ -478,61 +479,61 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	/* dst = imm */
 	case BPF_ALU | BPF_MOV | BPF_K:
 	case BPF_ALU64 | BPF_MOV | BPF_K:
-		emit_a64_mov_i(is64, dst, imm, ctx);
+		emit_a64_mov_i(is64, dstw, imm, ctx);
 		break;
 	/* dst = dst OP imm */
 	case BPF_ALU | BPF_ADD | BPF_K:
 	case BPF_ALU64 | BPF_ADD | BPF_K:
 		emit_a64_mov_i(is64, tmp, imm, ctx);
-		emit(A64_ADD(is64, dst, dst, tmp), ctx);
+		emit(A64_ADD(is64, dstw, dstr, tmp), ctx);
 		break;
 	case BPF_ALU | BPF_SUB | BPF_K:
 	case BPF_ALU64 | BPF_SUB | BPF_K:
 		emit_a64_mov_i(is64, tmp, imm, ctx);
-		emit(A64_SUB(is64, dst, dst, tmp), ctx);
+		emit(A64_SUB(is64, dstw, dstr, tmp), ctx);
 		break;
 	case BPF_ALU | BPF_AND | BPF_K:
 	case BPF_ALU64 | BPF_AND | BPF_K:
 		emit_a64_mov_i(is64, tmp, imm, ctx);
-		emit(A64_AND(is64, dst, dst, tmp), ctx);
+		emit(A64_AND(is64, dstw, dstr, tmp), ctx);
 		break;
 	case BPF_ALU | BPF_OR | BPF_K:
 	case BPF_ALU64 | BPF_OR | BPF_K:
 		emit_a64_mov_i(is64, tmp, imm, ctx);
-		emit(A64_ORR(is64, dst, dst, tmp), ctx);
+		emit(A64_ORR(is64, dstw, dstr, tmp), ctx);
 		break;
 	case BPF_ALU | BPF_XOR | BPF_K:
 	case BPF_ALU64 | BPF_XOR | BPF_K:
 		emit_a64_mov_i(is64, tmp, imm, ctx);
-		emit(A64_EOR(is64, dst, dst, tmp), ctx);
+		emit(A64_EOR(is64, dstw, dstr, tmp), ctx);
 		break;
 	case BPF_ALU | BPF_MUL | BPF_K:
 	case BPF_ALU64 | BPF_MUL | BPF_K:
 		emit_a64_mov_i(is64, tmp, imm, ctx);
-		emit(A64_MUL(is64, dst, dst, tmp), ctx);
+		emit(A64_MUL(is64, dstw, dstr, tmp), ctx);
 		break;
 	case BPF_ALU | BPF_DIV | BPF_K:
 	case BPF_ALU64 | BPF_DIV | BPF_K:
 		emit_a64_mov_i(is64, tmp, imm, ctx);
-		emit(A64_UDIV(is64, dst, dst, tmp), ctx);
+		emit(A64_UDIV(is64, dstw, dstr, tmp), ctx);
 		break;
 	case BPF_ALU | BPF_MOD | BPF_K:
 	case BPF_ALU64 | BPF_MOD | BPF_K:
 		emit_a64_mov_i(is64, tmp2, imm, ctx);
-		emit(A64_UDIV(is64, tmp, dst, tmp2), ctx);
-		emit(A64_MSUB(is64, dst, dst, tmp, tmp2), ctx);
+		emit(A64_UDIV(is64, tmp, dstr, tmp2), ctx);
+		emit(A64_MSUB(is64, dstw, dstr, tmp, tmp2), ctx);
 		break;
 	case BPF_ALU | BPF_LSH | BPF_K:
 	case BPF_ALU64 | BPF_LSH | BPF_K:
-		emit(A64_LSL(is64, dst, dst, imm), ctx);
+		emit(A64_LSL(is64, dstw, dstr, imm), ctx);
 		break;
 	case BPF_ALU | BPF_RSH | BPF_K:
 	case BPF_ALU64 | BPF_RSH | BPF_K:
-		emit(A64_LSR(is64, dst, dst, imm), ctx);
+		emit(A64_LSR(is64, dstw, dstr, imm), ctx);
 		break;
 	case BPF_ALU | BPF_ARSH | BPF_K:
 	case BPF_ALU64 | BPF_ARSH | BPF_K:
-		emit(A64_ASR(is64, dst, dst, imm), ctx);
+		emit(A64_ASR(is64, dstw, dstr, imm), ctx);
 		break;
 
 	/* JUMP off */
@@ -562,7 +563,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	case BPF_JMP32 | BPF_JSLT | BPF_X:
 	case BPF_JMP32 | BPF_JSGE | BPF_X:
 	case BPF_JMP32 | BPF_JSLE | BPF_X:
-		emit(A64_CMP(is64, dst, src), ctx);
+		emit(A64_CMP(is64, dstr, src), ctx);
 emit_cond_jmp:
 		jmp_offset = bpf2a64_offset(i + off, i, ctx);
 		check_imm19(jmp_offset);
@@ -605,7 +606,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		break;
 	case BPF_JMP | BPF_JSET | BPF_X:
 	case BPF_JMP32 | BPF_JSET | BPF_X:
-		emit(A64_TST(is64, dst, src), ctx);
+		emit(A64_TST(is64, dstr, src), ctx);
 		goto emit_cond_jmp;
 	/* IF (dst COND imm) JUMP off */
 	case BPF_JMP | BPF_JEQ | BPF_K:
@@ -629,12 +630,12 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	case BPF_JMP32 | BPF_JSGE | BPF_K:
 	case BPF_JMP32 | BPF_JSLE | BPF_K:
 		emit_a64_mov_i(is64, tmp, imm, ctx);
-		emit(A64_CMP(is64, dst, tmp), ctx);
+		emit(A64_CMP(is64, dstr, tmp), ctx);
 		goto emit_cond_jmp;
 	case BPF_JMP | BPF_JSET | BPF_K:
 	case BPF_JMP32 | BPF_JSET | BPF_K:
 		emit_a64_mov_i(is64, tmp, imm, ctx);
-		emit(A64_TST(is64, dst, tmp), ctx);
+		emit(A64_TST(is64, dstr, tmp), ctx);
 		goto emit_cond_jmp;
 	/* function call */
 	case BPF_JMP | BPF_CALL:
@@ -676,7 +677,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		u64 imm64;
 
 		imm64 = (u64)insn1.imm << 32 | (u32)imm;
-		emit_a64_mov_i64(dst, imm64, ctx);
+		emit_a64_mov_i64(dstw, imm64, ctx);
 
 		return 1;
 	}
@@ -689,16 +690,16 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		emit_a64_mov_i(1, tmp, off, ctx);
 		switch (BPF_SIZE(code)) {
 		case BPF_W:
-			emit(A64_LDR32(dst, src, tmp), ctx);
+			emit(A64_LDR32(dstw, src, tmp), ctx);
 			break;
 		case BPF_H:
-			emit(A64_LDRH(dst, src, tmp), ctx);
+			emit(A64_LDRH(dstw, src, tmp), ctx);
 			break;
 		case BPF_B:
-			emit(A64_LDRB(dst, src, tmp), ctx);
+			emit(A64_LDRB(dstw, src, tmp), ctx);
 			break;
 		case BPF_DW:
-			emit(A64_LDR64(dst, src, tmp), ctx);
+			emit(A64_LDR64(dstw, src, tmp), ctx);
 			break;
 		}
 		break;
@@ -713,16 +714,16 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		emit_a64_mov_i(1, tmp, imm, ctx);
 		switch (BPF_SIZE(code)) {
 		case BPF_W:
-			emit(A64_STR32(tmp, dst, tmp2), ctx);
+			emit(A64_STR32(tmp, dstr, tmp2), ctx);
 			break;
 		case BPF_H:
-			emit(A64_STRH(tmp, dst, tmp2), ctx);
+			emit(A64_STRH(tmp, dstr, tmp2), ctx);
 			break;
 		case BPF_B:
-			emit(A64_STRB(tmp, dst, tmp2), ctx);
+			emit(A64_STRB(tmp, dstr, tmp2), ctx);
 			break;
 		case BPF_DW:
-			emit(A64_STR64(tmp, dst, tmp2), ctx);
+			emit(A64_STR64(tmp, dstr, tmp2), ctx);
 			break;
 		}
 		break;
@@ -735,16 +736,16 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		emit_a64_mov_i(1, tmp, off, ctx);
 		switch (BPF_SIZE(code)) {
 		case BPF_W:
-			emit(A64_STR32(src, dst, tmp), ctx);
+			emit(A64_STR32(src, dstr, tmp), ctx);
 			break;
 		case BPF_H:
-			emit(A64_STRH(src, dst, tmp), ctx);
+			emit(A64_STRH(src, dstr, tmp), ctx);
 			break;
 		case BPF_B:
-			emit(A64_STRB(src, dst, tmp), ctx);
+			emit(A64_STRB(src, dstr, tmp), ctx);
 			break;
 		case BPF_DW:
-			emit(A64_STR64(src, dst, tmp), ctx);
+			emit(A64_STR64(src, dstr, tmp), ctx);
 			break;
 		}
 		break;
@@ -754,10 +755,10 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	/* STX XADD: lock *(u64 *)(dst + off) += src */
 	case BPF_STX | BPF_XADD | BPF_DW:
 		if (!off) {
-			reg = dst;
+			reg = dstr;
 		} else {
 			emit_a64_mov_i(1, tmp, off, ctx);
-			emit(A64_ADD(1, tmp, tmp, dst), ctx);
+			emit(A64_ADD(1, tmp, tmp, dstr), ctx);
 			reg = tmp;
 		}
 		if (cpus_have_cap(ARM64_HAS_LSE_ATOMICS)) {
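[Editorial note: at this point the split is purely a naming convention -- every A64 operand that is read comes from dstr and every operand that is written goes to dstw, and both still resolve to the same register.  The line below is taken from the ADD case in the diff above, with an explanatory comment added; it is not new code.]

/* BPF "dst = dst + src" is emitted with the written half and the read
 * half of dst named separately: */
emit(A64_ADD(is64, dstw, dstr, src), ctx);	/* dstw := dstr + src */
/* With the identity mapping in this patch, dstw == dstr, so the
 * generated instruction is unchanged; the next patch lets the two
 * lookups return different registers while BPF_REG_0 is being tracked
 * in BPF_REG_1's slot. */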
From patchwork Tue Jan 28 02:11:45 2020
X-Patchwork-Id: 11353471
From: Palmer Dabbelt
To: Bjorn Topel
Cc: daniel@iogearbox.net, ast@kernel.org, zlim.lnx@gmail.com,
    catalin.marinas@arm.com, will@kernel.org, kafai@fb.com,
    songliubraving@fb.com, yhs@fb.com, andriin@fb.com, shuah@kernel.org,
    Palmer Dabbelt, netdev@vger.kernel.org, bpf@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org, clang-built-linux@googlegroups.com,
    kernel-team@android.com
Subject: [PATCH 4/4] arm64: bpf: Elide some moves to a0 after calls
Date: Mon, 27 Jan 2020 18:11:45 -0800
Message-Id: <20200128021145.36774-5-palmerdabbelt@google.com>
In-Reply-To: <20200128021145.36774-1-palmerdabbelt@google.com>
References: <20200128021145.36774-1-palmerdabbelt@google.com>

On arm64, the BPF function ABI doesn't match the C function ABI.
Specifically, arm64 encodes calls as `a0 = f(a0, a1, ...)` while BPF
encodes calls as `BPF_REG_0 = f(BPF_REG_1, BPF_REG_2, ...)`.  This
discrepancy results in function calls being encoded as a two-operation
sequence that first does a C ABI call and then moves the return
register into the right place, which costs one extra instruction for
every function call.  This patch adds an optimization to the arm64 BPF
JIT backend that aims to avoid some of these moves.

I've done no benchmarking to determine whether this is worthwhile.  I
ran the BPF selftests before and after the change on arm64 in QEMU and
found that I had a single failure both before and after.  I'm not at
all confident this code actually works, as it's my first time doing
anything with either arm64 or BPF and I didn't even open the
documentation for either of them.  I was particularly surprised that
the code didn't fail any tests -- I was kind of assuming this would
fail the tests, get put on the backburner, sit long enough for me to
stop caring, and then get deleted.

Signed-off-by: Palmer Dabbelt
---
 arch/arm64/net/bpf_jit_comp.c | 71 +++++++++++++++++++++++++++++++++--
 1 file changed, 68 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index fba5b1b00cd7..48d900cc7258 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -58,10 +58,14 @@ struct jit_ctx {
 	int *offset;
 	__le32 *image;
 	u32 stack_size;
+	int reg0_in_reg1;
 };
 
 static inline int bpf2a64(struct jit_ctx *ctx, int bpf_reg)
 {
+	if (ctx->reg0_in_reg1 && bpf_reg == BPF_REG_0)
+		bpf_reg = BPF_REG_1;
+
 	return bpf2a64_default[bpf_reg];
 }
 
@@ -338,6 +342,47 @@ static void build_epilogue(struct jit_ctx *ctx)
 	emit(A64_RET(A64_LR), ctx);
 }
 
+static int dead_register(const struct jit_ctx *ctx, int offset, int bpf_reg)
+{
+	const struct bpf_prog *prog = ctx->prog;
+	int i;
+
+	for (i = offset; i < prog->len; ++i) {
+		const struct bpf_insn *insn = &prog->insnsi[i];
+		const u8 code = insn->code;
+		const u8 bpf_dst = insn->dst_reg;
+		const u8 bpf_src = insn->src_reg;
+		const int writes_dst = !((code & BPF_ST) || (code & BPF_STX)
+					 || (code & BPF_JMP32) || (code & BPF_JMP));
+		const int reads_dst = !((code & BPF_LD));
+		const int reads_src = true;
+
+		/* Calls are a bit special in that they clobber a bunch of registers. */
+		if ((code & (BPF_JMP | BPF_CALL)) || (code & (BPF_JMP | BPF_TAIL_CALL)))
+			if ((bpf_reg >= BPF_REG_0) && (bpf_reg <= BPF_REG_5))
+				return false;
+
+		/* Registers that are read before they're written are alive.
+		 * Most opcodes are of the form DST = DST op SRC, but there
+		 * are some exceptions. */
+		if (bpf_src == bpf_reg && reads_src)
+			return false;
+
+		if (bpf_dst == bpf_reg && reads_dst)
+			return false;
+
+		if (bpf_dst == bpf_reg && writes_dst)
+			return true;
+
+		/* Most BPF instructions are 8 bytes long, but some are 16 bytes
+		 * long. */
+		if (code & (BPF_LD | BPF_IMM | BPF_DW))
+			++i;
+	}
+
+	return true;
+}
+
 /* JITs an eBPF instruction.
  * Returns:
  * 0 - successfully JITed an 8-byte eBPF instruction.
@@ -348,7 +393,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		      bool extra_pass)
 {
 	const u8 code = insn->code;
-	const u8 dstw = bpf2a64(ctx, insn->dst_reg);
+	u8 dstw;
 	const u8 dstr = bpf2a64(ctx, insn->dst_reg);
 	const u8 src = bpf2a64(ctx, insn->src_reg);
 	const u8 tmp = bpf2a64(ctx, TMP_REG_1);
@@ -374,6 +419,27 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 #define check_imm19(imm) check_imm(19, imm)
 #define check_imm26(imm) check_imm(26, imm)
 
+	/* Handle BPF_REG_0, which may be in the wrong place because the ARM64
+	 * ABI doesn't match the BPF ABI for function calls. */
+	if (ctx->reg0_in_reg1) {
+		/* If we're writing BPF_REG_0 then we don't need to do any
+		 * extra work to get the registers back in their correct
+		 * locations. */
+		if (insn->dst_reg == BPF_REG_0)
+			ctx->reg0_in_reg1 = false;
+
+		/* If we're writing to BPF_REG_1 then we need to save BPF_REG_0
+		 * into the correct location if it's still alive, as otherwise
+		 * it will be clobbered. */
+		if (insn->dst_reg == BPF_REG_1) {
+			if (!dead_register(ctx, off + 1, BPF_REG_0))
+				emit(A64_MOV(1, A64_R(7), A64_R(0)), ctx);
+			ctx->reg0_in_reg1 = false;
+		}
+	}
+
+	dstw = bpf2a64(ctx, insn->dst_reg);
+
 	switch (code) {
 	/* dst = src */
 	case BPF_ALU | BPF_MOV | BPF_X:
@@ -640,7 +706,6 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	/* function call */
 	case BPF_JMP | BPF_CALL:
 	{
-		const u8 r0 = bpf2a64(ctx, BPF_REG_0);
 		bool func_addr_fixed;
 		u64 func_addr;
 		int ret;
@@ -651,7 +716,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 			return ret;
 		emit_addr_mov_i64(tmp, func_addr, ctx);
 		emit(A64_BLR(tmp), ctx);
-		emit(A64_MOV(1, r0, A64_R(0)), ctx);
+		ctx->reg0_in_reg1 = true;
 		break;
 	}
 	/* tail call */