From patchwork Tue Dec 6 09:14:10 2022
X-Patchwork-Submitter: Pu Lehui
X-Patchwork-Id: 13065582
X-Patchwork-Delegate: bpf@iogearbox.net
From: Pu Lehui
To: bpf@vger.kernel.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org
Cc: Björn Töpel, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
    Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend, KP Singh,
    Stanislav Fomichev, Hao Luo, Jiri Olsa, Paul Walmsley, Palmer Dabbelt,
    Albert Ou, Pu Lehui, Pu Lehui
Subject: [PATCH bpf v3] riscv, bpf: Emit fixed-length instructions for BPF_PSEUDO_FUNC
Date: Tue, 6 Dec 2022 17:14:10 +0800
Message-Id: <20221206091410.1584784-1-pulehui@huaweicloud.com>
X-Mailer: git-send-email 2.25.1
X-Mailing-List: bpf@vger.kernel.org

From: Pu Lehui

For the BPF_PSEUDO_FUNC instruction, the verifier will refill imm with the
correct addresses of bpf_calls and then run the last pass of the JIT.
Since emit_imm of RV64 is variable-length and emits an instruction sequence
whose length depends on the imm, the refilled address may change the number
of emitted instructions, break ctx->offset and lead to unpredictable
problems, such as inaccurate jumps. So let's fix it with fixed-length
instructions.

Fixes: 69c087ba6225 ("bpf: Add bpf_for_each_map_elem() helper")
Signed-off-by: Pu Lehui
Suggested-by: Björn Töpel
Acked-by: Björn Töpel
Reviewed-by: Björn Töpel
---
 arch/riscv/net/bpf_jit_comp64.c | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index 89744dbd6d86..2f6207b72e12 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -139,6 +139,25 @@ static bool in_auipc_jalr_range(s64 val)
 		val < ((1L << 31) - (1L << 11));
 }
 
+/* Emit fixed-length instructions for address */
+static int emit_addr(u8 rd, u64 addr, bool extra_pass, struct rv_jit_context *ctx)
+{
+	u64 ip = (u64)(ctx->insns + ctx->ninsns);
+	s64 off = addr - ip;
+	s64 upper = (off + (1 << 11)) >> 12;
+	s64 lower = off & 0xfff;
+
+	if (extra_pass && !in_auipc_jalr_range(off)) {
+		pr_err("bpf-jit: target offset 0x%llx is out of range\n", off);
+		return -ERANGE;
+	}
+
+	emit(rv_auipc(rd, upper), ctx);
+	emit(rv_addi(rd, rd, lower), ctx);
+	return 0;
+}
+
+/* Emit variable-length instructions for 32-bit and 64-bit imm */
 static void emit_imm(u8 rd, s64 val, struct rv_jit_context *ctx)
 {
 	/* Note that the immediate from the add is sign-extended,
@@ -1053,7 +1072,15 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		u64 imm64;
 
 		imm64 = (u64)insn1.imm << 32 | (u32)imm;
-		emit_imm(rd, imm64, ctx);
+		if (bpf_pseudo_func(insn)) {
+			/* fixed-length insns for extra jit pass */
+			ret = emit_addr(rd, imm64, extra_pass, ctx);
+			if (ret)
+				return ret;
+		} else {
+			emit_imm(rd, imm64, ctx);
+		}
+
 		return 1;
 	}
 
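For reference, a minimal user-space sketch (not part of the patch; the offset
value is just an arbitrary example) of the auipc/addi split that emit_addr()
relies on. It shows why the upper part is rounded with (1 << 11) before the
shift: addi sign-extends its 12-bit immediate, so the rounding makes auipc
land on the page from which the possibly negative low part reaches the target.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* arbitrary example of a PC-relative offset within auipc+addi range */
	int64_t off = 0x12345801;

	/* same split as emit_addr() */
	int64_t upper = (off + (1 << 11)) >> 12;	/* auipc immediate */
	int64_t lower = off & 0xfff;			/* addi immediate  */

	/* the CPU sign-extends the 12-bit addi immediate */
	int64_t lower_sext = (lower >= 0x800) ? lower - 0x1000 : lower;

	/* reconstructed value must equal the original offset */
	printf("reconstructed 0x%llx, expected 0x%llx\n",
	       (long long)((upper << 12) + lower_sext), (long long)off);
	return 0;
}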