From patchwork Thu Dec 1 07:56:26 2022
From: Christophe Leroy <christophe.leroy@csgroup.eu>
To: Michael Ellerman, Nicholas Piggin, "Naveen N. Rao"
Cc: Christophe Leroy, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, bpf@vger.kernel.org, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, "Naveen N. Rao", stable@vger.kernel.org
Subject: [PATCH v1 01/10] powerpc/bpf/32: Fix Oops on tail call tests
Date: Thu, 1 Dec 2022 08:56:26 +0100

test_bpf tail call tests end up as:

  test_bpf: #0 Tail call leaf jited:1 85 PASS
  test_bpf: #1 Tail call 2 jited:1 111 PASS
  test_bpf: #2 Tail call 3 jited:1 145 PASS
  test_bpf: #3 Tail call 4 jited:1 170 PASS
  test_bpf: #4 Tail call load/store leaf jited:1 190 PASS
  test_bpf: #5 Tail call load/store jited:1
  BUG: Unable to handle kernel data access on write at 0xf1b4e000
  Faulting instruction address: 0xbe86b710
  Oops: Kernel access of bad area, sig: 11 [#1]
  BE PAGE_SIZE=4K MMU=Hash PowerMac
  Modules linked in: test_bpf(+)
  CPU: 0 PID: 97 Comm: insmod Not tainted 6.1.0-rc4+ #195
  Hardware name: PowerMac3,1 750CL 0x87210 PowerMac
  NIP: be86b710 LR: be857e88 CTR: be86b704
  REGS: f1b4df20 TRAP: 0300  Not tainted  (6.1.0-rc4+)
  MSR: 00009032  CR: 28008242  XER: 00000000
  DAR: f1b4e000 DSISR: 42000000
  GPR00: 00000001 f1b4dfe0 c11d2280 00000000 00000000 00000000 00000002 00000000
  GPR08: f1b4e000 be86b704 f1b4e000 00000000 00000000 100d816a f2440000 fe73baa8
  GPR16: f2458000 00000000 c1941ae4 f1fe2248 00000045 c0de0000 f2458030 00000000
  GPR24: 000003e8 0000000f f2458000 f1b4dc90 3e584b46 00000000 f24466a0 c1941a00
  NIP [be86b710] 0xbe86b710
  LR [be857e88] __run_one+0xec/0x264 [test_bpf]
  Call Trace:
  [f1b4dfe0] [00000002] 0x2 (unreliable)
  Instruction dump:
  XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX
  XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX
  ---[ end trace 0000000000000000 ]---

This is an attempt to write above the stack. The problem is encountered
with the tests added by commit 38608ee7b690 ("bpf, tests: Add load
store test case for tail call").

This happens because the tail call is done to a BPF prog with a
different stack_depth. At the time being, the stack is kept as is when
the caller tail calls its callee. But at exit, the callee restores the
stack based on its own properties. Therefore here, at each run, r1 is
erroneously increased by 32 - 16 = 16 bytes.

This was done that way in order to pass the tail call count from caller
to callee through the stack. As powerpc32 doesn't have a red zone in
the stack, it was necessary to maintain the stack as is for the tail
call. But it was not anticipated that the BPF frame size could be
different.

Let's take a new approach. Use register r4 to carry the tail call count
during the tail call, and save it into the stack at function entry if
required. This means the input parameter must be in r3, which is more
correct as it is a 32-bit parameter; the tail call then better matches
a normal BPF function entry, the downside being that we move that input
parameter back and forth between r3 and r4. That can be optimised
later.

Doing it that way also has the advantage of maximising the common parts
between tail calls and a normal function exit.
With the fix, tail call tests are now successful:

  test_bpf: #0 Tail call leaf jited:1 53 PASS
  test_bpf: #1 Tail call 2 jited:1 115 PASS
  test_bpf: #2 Tail call 3 jited:1 154 PASS
  test_bpf: #3 Tail call 4 jited:1 165 PASS
  test_bpf: #4 Tail call load/store leaf jited:1 101 PASS
  test_bpf: #5 Tail call load/store jited:1 141 PASS
  test_bpf: #6 Tail call error path, max count reached jited:1 994 PASS
  test_bpf: #7 Tail call count preserved across function calls jited:1 140975 PASS
  test_bpf: #8 Tail call error path, NULL target jited:1 110 PASS
  test_bpf: #9 Tail call error path, index out of range jited:1 69 PASS
  test_bpf: test_tail_calls: Summary: 10 PASSED, 0 FAILED, [10/10 JIT'ed]

Suggested-by: Naveen N. Rao
Fixes: 51c66ad849a7 ("powerpc/bpf: Implement extended BPF on PPC32")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy
Tested-by: Naveen N. Rao
Link: https://lore.kernel.org/r/757acccb7fbfc78efa42dcf3c974b46678198905.1669278887.git.christophe.leroy@csgroup.eu
---
 arch/powerpc/net/bpf_jit_comp32.c | 52 +++++++++++++------------------
 1 file changed, 21 insertions(+), 31 deletions(-)

diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
index 43f1c76d48ce..a379b0ce19ff 100644
--- a/arch/powerpc/net/bpf_jit_comp32.c
+++ b/arch/powerpc/net/bpf_jit_comp32.c
@@ -113,23 +113,19 @@ void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx)
 {
 	int i;
 
-	/* First arg comes in as a 32 bits pointer. */
-	EMIT(PPC_RAW_MR(bpf_to_ppc(BPF_REG_1), _R3));
-	EMIT(PPC_RAW_LI(bpf_to_ppc(BPF_REG_1) - 1, 0));
+	/* Initialize tail_call_cnt, to be skipped if we do tail calls. */
+	EMIT(PPC_RAW_LI(_R4, 0));
+
+#define BPF_TAILCALL_PROLOGUE_SIZE	4
+
 	EMIT(PPC_RAW_STWU(_R1, _R1, -BPF_PPC_STACKFRAME(ctx)));
 
-	/*
-	 * Initialize tail_call_cnt in stack frame if we do tail calls.
-	 * Otherwise, put in NOPs so that it can be skipped when we are
-	 * invoked through a tail call.
-	 */
 	if (ctx->seen & SEEN_TAILCALL)
-		EMIT(PPC_RAW_STW(bpf_to_ppc(BPF_REG_1) - 1, _R1,
-				 bpf_jit_stack_offsetof(ctx, BPF_PPC_TC)));
-	else
-		EMIT(PPC_RAW_NOP());
+		EMIT(PPC_RAW_STW(_R4, _R1, bpf_jit_stack_offsetof(ctx, BPF_PPC_TC)));
 
-#define BPF_TAILCALL_PROLOGUE_SIZE	16
+	/* First arg comes in as a 32 bits pointer. */
+	EMIT(PPC_RAW_MR(bpf_to_ppc(BPF_REG_1), _R3));
+	EMIT(PPC_RAW_LI(bpf_to_ppc(BPF_REG_1) - 1, 0));
 
 	/*
 	 * We need a stack frame, but we don't necessarily need to
@@ -170,24 +166,24 @@ static void bpf_jit_emit_common_epilogue(u32 *image, struct codegen_context *ctx
 	for (i = BPF_PPC_NVR_MIN; i <= 31; i++)
 		if (bpf_is_seen_register(ctx, i))
 			EMIT(PPC_RAW_LWZ(i, _R1, bpf_jit_stack_offsetof(ctx, i)));
-}
-
-void bpf_jit_build_epilogue(u32 *image, struct codegen_context *ctx)
-{
-	EMIT(PPC_RAW_MR(_R3, bpf_to_ppc(BPF_REG_0)));
-
-	bpf_jit_emit_common_epilogue(image, ctx);
-
-	/* Tear down our stack frame */
 
 	if (ctx->seen & SEEN_FUNC)
 		EMIT(PPC_RAW_LWZ(_R0, _R1, BPF_PPC_STACKFRAME(ctx) + PPC_LR_STKOFF));
 
+	/* Tear down our stack frame */
 	EMIT(PPC_RAW_ADDI(_R1, _R1, BPF_PPC_STACKFRAME(ctx)));
 
 	if (ctx->seen & SEEN_FUNC)
 		EMIT(PPC_RAW_MTLR(_R0));
+}
+
+void bpf_jit_build_epilogue(u32 *image, struct codegen_context *ctx)
+{
+	EMIT(PPC_RAW_MR(_R3, bpf_to_ppc(BPF_REG_0)));
+
+	bpf_jit_emit_common_epilogue(image, ctx);
+
 	EMIT(PPC_RAW_BLR());
 }
 
@@ -244,7 +240,6 @@ static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 o
 	EMIT(PPC_RAW_RLWINM(_R3, b2p_index, 2, 0, 29));
 	EMIT(PPC_RAW_ADD(_R3, _R3, b2p_bpf_array));
 	EMIT(PPC_RAW_LWZ(_R3, _R3, offsetof(struct bpf_array, ptrs)));
-	EMIT(PPC_RAW_STW(_R0, _R1, bpf_jit_stack_offsetof(ctx, BPF_PPC_TC)));
 
 	/*
 	 * if (prog == NULL)
@@ -255,19 +250,14 @@ static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 o
 
 	/* goto *(prog->bpf_func + prologue_size); */
 	EMIT(PPC_RAW_LWZ(_R3, _R3, offsetof(struct bpf_prog, bpf_func)));
-
-	if (ctx->seen & SEEN_FUNC)
-		EMIT(PPC_RAW_LWZ(_R0, _R1, BPF_PPC_STACKFRAME(ctx) + PPC_LR_STKOFF));
-
 	EMIT(PPC_RAW_ADDIC(_R3, _R3, BPF_TAILCALL_PROLOGUE_SIZE));
-
-	if (ctx->seen & SEEN_FUNC)
-		EMIT(PPC_RAW_MTLR(_R0));
-
 	EMIT(PPC_RAW_MTCTR(_R3));
 
 	EMIT(PPC_RAW_MR(_R3, bpf_to_ppc(BPF_REG_1)));
 
+	/* Put tail_call_cnt in r4 */
+	EMIT(PPC_RAW_MR(_R4, _R0));
+
 	/* tear down stack, restore NVRs, ... */
 	bpf_jit_emit_common_epilogue(image, ctx);
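Aside: a minimal user-space C model of the convention this patch
introduces. Everything here is hypothetical (names, and the
MAX_TAIL_CALL_CNT value, are assumptions for illustration, not the JIT
itself); the point is only that the count travels with the call like a
register, so the callee's frame size no longer matters.

#include <stdio.h>

#define MAX_TAIL_CALL_CNT 33	/* assumed limit, for the model only */

static int prog(unsigned int ctx, unsigned int tcc);

/* A tail call hands the count to the callee, like r4 in the JIT. */
static int tail_call(unsigned int ctx, unsigned int tcc)
{
	if (tcc >= MAX_TAIL_CALL_CNT)	/* count check done by the caller */
		return -1;
	return prog(ctx, tcc + 1);
}

static int prog(unsigned int ctx, unsigned int tcc)
{
	char frame[16];		/* callee frame size is irrelevant now */

	snprintf(frame, sizeof(frame), "depth=%u", tcc);
	puts(frame);
	return tail_call(ctx, tcc);
}

int main(void)
{
	return prog(0, 0) == -1 ? 0 : 1;
}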
From patchwork Thu Dec 1 07:56:27 2022
From: Christophe Leroy <christophe.leroy@csgroup.eu>
To: Michael Ellerman, Nicholas Piggin, "Naveen N. Rao"
Rao" Cc: Christophe Leroy , linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, bpf@vger.kernel.org, Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa Subject: [PATCH v1 02/10] powerpc: Remove __kernel_text_address() in show_instructions() Date: Thu, 1 Dec 2022 08:56:27 +0100 Message-Id: <96a35025667d6e44f5253f82f61bb49b0a6ecf2a.1669881248.git.christophe.leroy@csgroup.eu> X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 X-Developer-Signature: v=1; a=ed25519-sha256; t=1669881390; l=1412; s=20211009; h=from:subject:message-id; bh=/zpH2ZZvEoJhCvxUe4ujuN1Lgf9RLeBH5yxOMRR6WTo=; b=VO+bDKToRC6Z5qShLIgfWwHi4YRQyWdYTYB6ld+4YoQvb5mLByEIdDa3Pk7kXIovhVtsg6pGixkz x7bgqbDbBxts/Evn0oZmXOnVq3gJ9+QZTXiuGSLkD57Qv+Byc1IZ X-Developer-Key: i=christophe.leroy@csgroup.eu; a=ed25519; pk=HIzTzUj91asvincQGOFx6+ZF5AoUuP9GdOtQChs7Mm0= Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org That test was introducted in 2006 by commit 00ae36de49cc ("[POWERPC] Better check in show_instructions"). At that time, there was no BPF progs. As seen in message of commit 89d21e259a94 ("powerpc/bpf/32: Fix Oops on tail call tests"), when a page fault occurs in test_bpf.ko for instance, the code is dumped as XXXXXXXXs. Allthough __kernel_text_address() checks is_bpf_text_address(), it seems it is not enough. Today, show_instructions() uses get_kernel_nofault() to read the code, so there is no real need for additional verifications. ARM64 and x86 don't do any additional check before dumping instructions. Do the same and remove __kernel_text_address() in show_instructions(). Signed-off-by: Christophe Leroy --- arch/powerpc/kernel/process.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c index e3e1feaa536a..c4e9f090ad22 100644 --- a/arch/powerpc/kernel/process.c +++ b/arch/powerpc/kernel/process.c @@ -1373,8 +1373,7 @@ static void show_instructions(struct pt_regs *regs) for (i = 0; i < NR_INSN_TO_PRINT; i++) { int instr; - if (!__kernel_text_address(pc) || - get_kernel_nofault(instr, (const void *)pc)) { + if (get_kernel_nofault(instr, (const void *)pc)) { pr_cont("XXXXXXXX "); } else { if (nip == pc) From patchwork Thu Dec 1 07:56:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christophe Leroy X-Patchwork-Id: 13061030 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 073D0C43217 for ; Thu, 1 Dec 2022 07:57:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229904AbiLAH5c (ORCPT ); Thu, 1 Dec 2022 02:57:32 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35482 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229875AbiLAH53 (ORCPT ); Thu, 1 Dec 2022 02:57:29 -0500 Received: from pegase2.c-s.fr (pegase2.c-s.fr [93.17.235.10]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8502D532F8; Wed, 30 Nov 2022 23:57:11 -0800 (PST) Received: from localhost (mailhub3.si.c-s.fr [172.26.127.67]) by localhost (Postfix) with ESMTP id 4NN7hH2FBjz9sZt; Thu, 1 Dec 2022 08:57:03 +0100 (CET) X-Virus-Scanned: amavisd-new at 
From patchwork Thu Dec 1 07:56:28 2022
From: Christophe Leroy <christophe.leroy@csgroup.eu>
To: Michael Ellerman, Nicholas Piggin, "Naveen N. Rao"
Cc: Christophe Leroy, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, bpf@vger.kernel.org, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa
Subject: [PATCH v1 03/10] powerpc/bpf/32: No need to zeroise r4 when not doing tail call
Date: Thu, 1 Dec 2022 08:56:28 +0100

r4 is cleared at function entry and used as the tail call count. But
when the function does not perform tail calls, r4 is ignored, so there
is no need to clear it. Replace the clearing with a NOP in that case.

Signed-off-by: Christophe Leroy
---
 arch/powerpc/net/bpf_jit_comp32.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
index a379b0ce19ff..4e6caee9c98a 100644
--- a/arch/powerpc/net/bpf_jit_comp32.c
+++ b/arch/powerpc/net/bpf_jit_comp32.c
@@ -114,7 +114,10 @@ void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx)
 	int i;
 
 	/* Initialize tail_call_cnt, to be skipped if we do tail calls. */
-	EMIT(PPC_RAW_LI(_R4, 0));
+	if (ctx->seen & SEEN_TAILCALL)
+		EMIT(PPC_RAW_LI(_R4, 0));
+	else
+		EMIT(PPC_RAW_NOP());
 
 #define BPF_TAILCALL_PROLOGUE_SIZE	4
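Aside: why a NOP rather than nothing, sketched as standalone C (the
constant mirrors the patch; the address and print statements are
purely illustrative). A tail call enters the callee one instruction
past bpf_func and inherits the caller's r4, so the first prologue slot
must always exist for that fixed entry offset to stay valid.

#include <stdio.h>

#define BPF_TAILCALL_PROLOGUE_SIZE 4	/* bytes: one ppc32 instruction */

int main(void)
{
	unsigned long bpf_func = 0x1000;	/* hypothetical image address */

	/* Normal calls run the "li r4,0" (or the nop); tail calls skip it. */
	printf("normal entry:    %#lx\n", bpf_func);
	printf("tail-call entry: %#lx\n", bpf_func + BPF_TAILCALL_PROLOGUE_SIZE);
	return 0;
}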
Rao" Cc: Christophe Leroy , linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, bpf@vger.kernel.org, Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa Subject: [PATCH v1 04/10] powerpc/bpf/32: Only set a stack frame when necessary Date: Thu, 1 Dec 2022 08:56:29 +0100 Message-Id: <01949e27523916850fe17e87e774c08b9ffea4fe.1669881248.git.christophe.leroy@csgroup.eu> X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 X-Developer-Signature: v=1; a=ed25519-sha256; t=1669881390; l=2645; s=20211009; h=from:subject:message-id; bh=Th6pI9aT8WZokbYGGkcX3VXZ2tYVta6n6C2EO5biRt0=; b=FrTWCdvOwq1ZVy8tre7lNePq9iNu8Qn888IQ1HZ47f2BHpdWUijFnHo6zptT4cnDp3kRswEakAg3 qd8Ui6ZLAg8OLOT4WG6zbJrYOVSAGdcDTbCGqHWYeh3hZWcTOCfp X-Developer-Key: i=christophe.leroy@csgroup.eu; a=ed25519; pk=HIzTzUj91asvincQGOFx6+ZF5AoUuP9GdOtQChs7Mm0= Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Until now a stack frame was set at all time due to the need to keep tail call counter in the stack. But since commit fa025537f584 ("powerpc/bpf/32: Fix Oops on tail call tests"), the tail call counter is passed via register r4. It is therefore not necessary anymore to have a stack frame for that. Just like PPC64, implement bpf_has_stack_frame() and only sets the frame when needed. The difference with PPC64 is that PPC32 doesn't have a redzone, so the stack is required as soon as non volatile registers are used or when tail call count is set up. Signed-off-by: Christophe Leroy --- arch/powerpc/net/bpf_jit_comp32.c | 20 ++++++++++++++++++-- 1 file changed, 18 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c index 4e6caee9c98a..7f54d37bede6 100644 --- a/arch/powerpc/net/bpf_jit_comp32.c +++ b/arch/powerpc/net/bpf_jit_comp32.c @@ -79,6 +79,20 @@ static int bpf_jit_stack_offsetof(struct codegen_context *ctx, int reg) #define SEEN_NVREG_FULL_MASK 0x0003ffff /* Non volatile registers r14-r31 */ #define SEEN_NVREG_TEMP_MASK 0x00001e01 /* BPF_REG_5, BPF_REG_AX, TMP_REG */ +static inline bool bpf_has_stack_frame(struct codegen_context *ctx) +{ + /* + * We only need a stack frame if: + * - we call other functions (kernel helpers), or + * - we use non volatile registers, or + * - we use tail call counter + * - the bpf program uses its stack area + * The latter condition is deduced from the usage of BPF_REG_FP + */ + return ctx->seen & (SEEN_FUNC | SEEN_TAILCALL | SEEN_NVREG_FULL_MASK) || + bpf_is_seen_register(ctx, bpf_to_ppc(BPF_REG_FP)); +} + void bpf_jit_realloc_regs(struct codegen_context *ctx) { unsigned int nvreg_mask; @@ -121,7 +135,8 @@ void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx) #define BPF_TAILCALL_PROLOGUE_SIZE 4 - EMIT(PPC_RAW_STWU(_R1, _R1, -BPF_PPC_STACKFRAME(ctx))); + if (bpf_has_stack_frame(ctx)) + EMIT(PPC_RAW_STWU(_R1, _R1, -BPF_PPC_STACKFRAME(ctx))); if (ctx->seen & SEEN_TAILCALL) EMIT(PPC_RAW_STW(_R4, _R1, bpf_jit_stack_offsetof(ctx, BPF_PPC_TC))); @@ -174,7 +189,8 @@ static void bpf_jit_emit_common_epilogue(u32 *image, struct codegen_context *ctx EMIT(PPC_RAW_LWZ(_R0, _R1, BPF_PPC_STACKFRAME(ctx) + PPC_LR_STKOFF)); /* Tear down our stack frame */ - EMIT(PPC_RAW_ADDI(_R1, _R1, BPF_PPC_STACKFRAME(ctx))); + if (bpf_has_stack_frame(ctx)) + EMIT(PPC_RAW_ADDI(_R1, _R1, BPF_PPC_STACKFRAME(ctx))); if (ctx->seen & SEEN_FUNC) EMIT(PPC_RAW_MTLR(_R0)); From 
From patchwork Thu Dec 1 07:56:30 2022
From: Christophe Leroy <christophe.leroy@csgroup.eu>
To: Michael Ellerman, Nicholas Piggin, "Naveen N. Rao"
Rao" Cc: Christophe Leroy , linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, bpf@vger.kernel.org, Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa Subject: [PATCH v1 05/10] powerpc/bpf/32: BPF prog is never called with more than one arg Date: Thu, 1 Dec 2022 08:56:30 +0100 Message-Id: X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 X-Developer-Signature: v=1; a=ed25519-sha256; t=1669881390; l=1258; s=20211009; h=from:subject:message-id; bh=ZtovIKf+Bs9+YlyJOZZ5XopWCTSLLeOP3s1Uy3d0LfE=; b=D7B8FCrw8H9txBaTo4ZlVMJKLtvYqTkEGDiS/DOKWEVBDop2Z7klOsKfEkxXvNFwbTNvD71cxWtr m2YFo1RWB0EjbtknoQVDqoz7lQ2UISRffDvJ5WtJ77JjYbsJPCoU X-Developer-Key: i=christophe.leroy@csgroup.eu; a=ed25519; pk=HIzTzUj91asvincQGOFx6+ZF5AoUuP9GdOtQChs7Mm0= Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net BPF progs are never called with more than one argument, plus the tail call count as a second argument when needed. So, no need to retrieve 9th and 10th argument (5th 64 bits argument) from the stack in prologue. Signed-off-by: Christophe Leroy --- arch/powerpc/net/bpf_jit_comp32.c | 6 ------ 1 file changed, 6 deletions(-) diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c index 7f54d37bede6..7c129fe810f5 100644 --- a/arch/powerpc/net/bpf_jit_comp32.c +++ b/arch/powerpc/net/bpf_jit_comp32.c @@ -159,12 +159,6 @@ void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx) if (bpf_is_seen_register(ctx, i)) EMIT(PPC_RAW_STW(i, _R1, bpf_jit_stack_offsetof(ctx, i))); - /* If needed retrieve arguments 9 and 10, ie 5th 64 bits arg.*/ - if (bpf_is_seen_register(ctx, bpf_to_ppc(BPF_REG_5))) { - EMIT(PPC_RAW_LWZ(bpf_to_ppc(BPF_REG_5) - 1, _R1, BPF_PPC_STACKFRAME(ctx)) + 8); - EMIT(PPC_RAW_LWZ(bpf_to_ppc(BPF_REG_5), _R1, BPF_PPC_STACKFRAME(ctx)) + 12); - } - /* Setup frame pointer to point to the bpf stack area */ if (bpf_is_seen_register(ctx, bpf_to_ppc(BPF_REG_FP))) { EMIT(PPC_RAW_LI(bpf_to_ppc(BPF_REG_FP) - 1, 0)); From patchwork Thu Dec 1 07:56:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christophe Leroy X-Patchwork-Id: 13061033 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A6746C43217 for ; Thu, 1 Dec 2022 07:58:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229944AbiLAH6g (ORCPT ); Thu, 1 Dec 2022 02:58:36 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36720 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229953AbiLAH6A (ORCPT ); Thu, 1 Dec 2022 02:58:00 -0500 Received: from pegase2.c-s.fr (pegase2.c-s.fr [93.17.235.10]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 24A5763D58; Wed, 30 Nov 2022 23:57:27 -0800 (PST) Received: from localhost (mailhub3.si.c-s.fr [172.26.127.67]) by localhost (Postfix) with ESMTP id 4NN7hL1C1Dz9sZv; Thu, 1 Dec 2022 08:57:06 +0100 (CET) X-Virus-Scanned: amavisd-new at c-s.fr Received: from pegase2.c-s.fr ([172.26.127.65]) by localhost (pegase2.c-s.fr [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id H_JiJorG2A5g; Thu, 1 Dec 2022 08:57:06 +0100 
From patchwork Thu Dec 1 07:56:31 2022
From: Christophe Leroy <christophe.leroy@csgroup.eu>
To: Michael Ellerman, Nicholas Piggin, "Naveen N. Rao"
Cc: Christophe Leroy, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, bpf@vger.kernel.org, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa
Subject: [PATCH v1 06/10] powerpc/bpf: Perform complete extra passes to update addresses
Date: Thu, 1 Dec 2022 08:56:31 +0100

BPF core calls the JIT compiler again for an extra pass in order to
properly set subprog addresses. Unlike other architectures, powerpc
only updates the addresses during that extra pass. It means that holes
must have been left in the code in order to allow for the maximum
possible instruction size.

In order to avoid wasting space, and wasting CPU time on powerpc
processors on which the NOP instruction is not 0-cycle, perform two
real additional passes.
Signed-off-by: Christophe Leroy
---
 arch/powerpc/net/bpf_jit_comp.c | 85 ---------------------------------
 1 file changed, 85 deletions(-)

diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index 43e634126514..8833bf23f5aa 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -23,74 +23,6 @@ static void bpf_jit_fill_ill_insns(void *area, unsigned int size)
 	memset32(area, BREAKPOINT_INSTRUCTION, size / 4);
 }
 
-/* Fix updated addresses (for subprog calls, ldimm64, et al) during extra pass */
-static int bpf_jit_fixup_addresses(struct bpf_prog *fp, u32 *image,
-				   struct codegen_context *ctx, u32 *addrs)
-{
-	const struct bpf_insn *insn = fp->insnsi;
-	bool func_addr_fixed;
-	u64 func_addr;
-	u32 tmp_idx;
-	int i, j, ret;
-
-	for (i = 0; i < fp->len; i++) {
-		/*
-		 * During the extra pass, only the branch target addresses for
-		 * the subprog calls need to be fixed. All other instructions
-		 * can left untouched.
-		 *
-		 * The JITed image length does not change because we already
-		 * ensure that the JITed instruction sequence for these calls
-		 * are of fixed length by padding them with NOPs.
-		 */
-		if (insn[i].code == (BPF_JMP | BPF_CALL) &&
-		    insn[i].src_reg == BPF_PSEUDO_CALL) {
-			ret = bpf_jit_get_func_addr(fp, &insn[i], true,
-						    &func_addr,
-						    &func_addr_fixed);
-			if (ret < 0)
-				return ret;
-
-			/*
-			 * Save ctx->idx as this would currently point to the
-			 * end of the JITed image and set it to the offset of
-			 * the instruction sequence corresponding to the
-			 * subprog call temporarily.
-			 */
-			tmp_idx = ctx->idx;
-			ctx->idx = addrs[i] / 4;
-			ret = bpf_jit_emit_func_call_rel(image, ctx, func_addr);
-			if (ret)
-				return ret;
-
-			/*
-			 * Restore ctx->idx here. This is safe as the length
-			 * of the JITed sequence remains unchanged.
-			 */
-			ctx->idx = tmp_idx;
-		} else if (insn[i].code == (BPF_LD | BPF_IMM | BPF_DW)) {
-			tmp_idx = ctx->idx;
-			ctx->idx = addrs[i] / 4;
-#ifdef CONFIG_PPC32
-			PPC_LI32(bpf_to_ppc(insn[i].dst_reg) - 1, (u32)insn[i + 1].imm);
-			PPC_LI32(bpf_to_ppc(insn[i].dst_reg), (u32)insn[i].imm);
-			for (j = ctx->idx - addrs[i] / 4; j < 4; j++)
-				EMIT(PPC_RAW_NOP());
-#else
-			func_addr = ((u64)(u32)insn[i].imm) | (((u64)(u32)insn[i + 1].imm) << 32);
-			PPC_LI64(bpf_to_ppc(insn[i].dst_reg), func_addr);
-			/* overwrite rest with nops */
-			for (j = ctx->idx - addrs[i] / 4; j < 5; j++)
-				EMIT(PPC_RAW_NOP());
-#endif
-			ctx->idx = tmp_idx;
-			i++;
-		}
-	}
-
-	return 0;
-}
-
 int bpf_jit_emit_exit_insn(u32 *image, struct codegen_context *ctx, int tmp_reg, long exit_addr)
 {
 	if (!exit_addr || is_offset_in_branch_range(exit_addr - (ctx->idx * 4))) {
@@ -234,22 +166,6 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 skip_init_ctx:
 	code_base = (u32 *)(image + FUNCTION_DESCR_SIZE);
 
-	if (extra_pass) {
-		/*
-		 * Do not touch the prologue and epilogue as they will remain
-		 * unchanged. Only fix the branch target address for subprog
-		 * calls in the body, and ldimm64 instructions.
-		 *
-		 * This does not change the offsets and lengths of the subprog
-		 * call instruction sequences and hence, the size of the JITed
-		 * image as well.
-		 */
-		bpf_jit_fixup_addresses(fp, code_base, &cgctx, addrs);
-
-		/* There is no need to perform the usual passes. */
-		goto skip_codegen_passes;
-	}
-
 	/* Code generation passes 1-2 */
 	for (pass = 1; pass < 3; pass++) {
 		/* Now build the prologue, body code & epilogue for real. */
@@ -268,7 +184,6 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 			proglen - (cgctx.idx * 4), cgctx.seen);
 	}
 
-skip_codegen_passes:
 	if (bpf_jit_enable > 1)
 		/*
 		 * Note that we output the base address of the code_base
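Aside: a toy standalone model (not the JIT code) of why rerunning full
passes converges. An instruction's encoding length can depend on
offsets, and offsets depend on the lengths chosen before it, so a
second pass re-measures with the first pass's layout and settles it.

#include <stdio.h>

int main(void)
{
	/* Worst-case lengths from the initial sizing pass, in insns. */
	int len[3] = { 2, 5, 2 };
	int addr[3];

	for (int pass = 1; pass < 3; pass++) {
		int off = 0;

		for (int i = 0; i < 3; i++) {
			addr[i] = off;
			/* The middle "call" picks a short form once the code
			 * before it has shrunk; its choice feeds pass 2. */
			if (i == 1)
				len[i] = addr[i] < 4 ? 1 : 5;
			off += len[i];
		}
		printf("pass %d: %d instructions\n", pass, off);
	}
	return 0;
}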
Rao" Cc: Christophe Leroy , linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, bpf@vger.kernel.org, Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa Subject: [PATCH v1 07/10] powerpc/bpf: Only pad length-variable code at initial pass Date: Thu, 1 Dec 2022 08:56:32 +0100 Message-Id: X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 X-Developer-Signature: v=1; a=ed25519-sha256; t=1669881391; l=3050; s=20211009; h=from:subject:message-id; bh=nL5CqXWPiz/BUngCysQIWSZa7hNQZeFj8xwmb11p9kA=; b=RnNHGX2YEBBid1pt9gpxSmgra+AVaIn4fG1kA2C89gHzjdE/DzWTQW+BefFonH49GkyGXE1f26Xo 7vNq9xWLD5Dfh2nBL4rDcXOWOJTvaNHIUAOY+GPWE7ITgEQZ0142 X-Developer-Key: i=christophe.leroy@csgroup.eu; a=ed25519; pk=HIzTzUj91asvincQGOFx6+ZF5AoUuP9GdOtQChs7Mm0= Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Now that two real additional passes are performed in case of extra pass requested by BPF core, padding is not needed anymore except during initial pass done before memory allocation to count maximum possible program size. So, only do the padding when 'image' is NULL. Signed-off-by: Christophe Leroy --- arch/powerpc/net/bpf_jit_comp32.c | 8 +++----- arch/powerpc/net/bpf_jit_comp64.c | 12 +++++++----- 2 files changed, 10 insertions(+), 10 deletions(-) diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c index 7c129fe810f5..6c45d953d4e8 100644 --- a/arch/powerpc/net/bpf_jit_comp32.c +++ b/arch/powerpc/net/bpf_jit_comp32.c @@ -206,9 +206,6 @@ int bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx, u64 func if (image && rel < 0x2000000 && rel >= -0x2000000) { PPC_BL(func); - EMIT(PPC_RAW_NOP()); - EMIT(PPC_RAW_NOP()); - EMIT(PPC_RAW_NOP()); } else { /* Load function address into r0 */ EMIT(PPC_RAW_LIS(_R0, IMM_H(func))); @@ -973,8 +970,9 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context * PPC_LI32(dst_reg_h, (u32)insn[i + 1].imm); PPC_LI32(dst_reg, (u32)insn[i].imm); /* padding to allow full 4 instructions for later patching */ - for (j = ctx->idx - tmp_idx; j < 4; j++) - EMIT(PPC_RAW_NOP()); + if (!image) + for (j = ctx->idx - tmp_idx; j < 4; j++) + EMIT(PPC_RAW_NOP()); /* Adjust for two bpf instructions */ addrs[++i] = ctx->idx * 4; break; diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c index 29ee306d6302..1c2931b4bc15 100644 --- a/arch/powerpc/net/bpf_jit_comp64.c +++ b/arch/powerpc/net/bpf_jit_comp64.c @@ -240,13 +240,14 @@ int bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx, u64 func * load the callee's address, but this may optimize the number of * instructions required based on the nature of the address. * - * Since we don't want the number of instructions emitted to change, + * Since we don't want the number of instructions emitted to increase, * we pad the optimized PPC_LI64() call with NOPs to guarantee that * we always have a five-instruction sequence, which is the maximum * that PPC_LI64() can emit. 
*/ - for (i = ctx->idx - ctx_idx; i < 5; i++) - EMIT(PPC_RAW_NOP()); + if (!image) + for (i = ctx->idx - ctx_idx; i < 5; i++) + EMIT(PPC_RAW_NOP()); EMIT(PPC_RAW_MTCTR(_R12)); EMIT(PPC_RAW_BCTRL()); @@ -938,8 +939,9 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context * tmp_idx = ctx->idx; PPC_LI64(dst_reg, imm64); /* padding to allow full 5 instructions for later patching */ - for (j = ctx->idx - tmp_idx; j < 5; j++) - EMIT(PPC_RAW_NOP()); + if (!image) + for (j = ctx->idx - tmp_idx; j < 5; j++) + EMIT(PPC_RAW_NOP()); /* Adjust for two bpf instructions */ addrs[++i] = ctx->idx * 4; break; From patchwork Thu Dec 1 07:56:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christophe Leroy X-Patchwork-Id: 13061032 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6CE5BC43217 for ; Thu, 1 Dec 2022 07:58:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229870AbiLAH6M (ORCPT ); Thu, 1 Dec 2022 02:58:12 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35338 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229925AbiLAH5t (ORCPT ); Thu, 1 Dec 2022 02:57:49 -0500 Received: from pegase2.c-s.fr (pegase2.c-s.fr [93.17.235.10]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C0C8F83259; Wed, 30 Nov 2022 23:57:20 -0800 (PST) Received: from localhost (mailhub3.si.c-s.fr [172.26.127.67]) by localhost (Postfix) with ESMTP id 4NN7hK2X0Xz9sZs; Thu, 1 Dec 2022 08:57:05 +0100 (CET) X-Virus-Scanned: amavisd-new at c-s.fr Received: from pegase2.c-s.fr ([172.26.127.65]) by localhost (pegase2.c-s.fr [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id iG_uvUsi9AHU; Thu, 1 Dec 2022 08:57:05 +0100 (CET) Received: from messagerie.si.c-s.fr (messagerie.si.c-s.fr [192.168.25.192]) by pegase2.c-s.fr (Postfix) with ESMTP id 4NN7hD3nz5z9sZv; Thu, 1 Dec 2022 08:57:00 +0100 (CET) Received: from localhost (localhost [127.0.0.1]) by messagerie.si.c-s.fr (Postfix) with ESMTP id 6AFC58B790; Thu, 1 Dec 2022 08:57:00 +0100 (CET) X-Virus-Scanned: amavisd-new at c-s.fr Received: from messagerie.si.c-s.fr ([127.0.0.1]) by localhost (messagerie.si.c-s.fr [127.0.0.1]) (amavisd-new, port 10023) with ESMTP id s9cs-cE-xsxk; Thu, 1 Dec 2022 08:57:00 +0100 (CET) Received: from PO20335.IDSI0.si.c-s.fr (unknown [172.25.230.108]) by messagerie.si.c-s.fr (Postfix) with ESMTP id DBA528B78C; Thu, 1 Dec 2022 08:56:59 +0100 (CET) Received: from PO20335.IDSI0.si.c-s.fr (localhost [127.0.0.1]) by PO20335.IDSI0.si.c-s.fr (8.17.1/8.16.1) with ESMTPS id 2B17uvd3130837 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NOT); Thu, 1 Dec 2022 08:56:57 +0100 Received: (from chleroy@localhost) by PO20335.IDSI0.si.c-s.fr (8.17.1/8.17.1/Submit) id 2B17uvEt130835; Thu, 1 Dec 2022 08:56:57 +0100 X-Authentication-Warning: PO20335.IDSI0.si.c-s.fr: chleroy set sender to christophe.leroy@csgroup.eu using -f From: Christophe Leroy To: Michael Ellerman , Nicholas Piggin , "Naveen N. 
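Aside: a standalone toy of the sizing-versus-emission split this
relies on (the function name and lengths are illustrative; image ==
NULL stands in for the sizing pass). The sizing pass still pads a
variable-length immediate load to its worst case so the allocation is
safe, while the real passes emit only what is needed.

#include <stdbool.h>
#include <stdio.h>

static int emit_li32_len(bool image, long imm)
{
	int n = (imm >= -32768 && imm < 32768) ? 1 : 2;	/* li vs lis+ori */

	if (!image)
		while (n < 2)	/* pad to the 2-insn worst case */
			n++;
	return n;
}

int main(void)
{
	printf("sizing pass: %d insn(s), real pass: %d insn(s) for imm=42\n",
	       emit_li32_len(false, 42), emit_li32_len(true, 42));
	return 0;
}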
Rao" Cc: Christophe Leroy , linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, bpf@vger.kernel.org, Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa Subject: [PATCH v1 08/10] powerpc/bpf/32: Optimise some particular const operations Date: Thu, 1 Dec 2022 08:56:33 +0100 Message-Id: <3dce3db8a490a053b5e0bf11ddaa6665f95422c1.1669881248.git.christophe.leroy@csgroup.eu> X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 X-Developer-Signature: v=1; a=ed25519-sha256; t=1669881391; l=2234; s=20211009; h=from:subject:message-id; bh=VwMKF2Awkd37GsVn0u5EZiXK9a3k4RENHDOgOO9LVEY=; b=5t5D82yM8v/1TM6KY7mduXB3rodDFp5mXkqWfbNzo8deF0CjLCkBzEhDK3HZOFaRc/jxw5TuhJ/n q43XJmuYDUAy9x2yflHjm/QoPCP4EdowwDyhhHGIXpleulXB/Wz4 X-Developer-Key: i=christophe.leroy@csgroup.eu; a=ed25519; pk=HIzTzUj91asvincQGOFx6+ZF5AoUuP9GdOtQChs7Mm0= Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Simplify multiplications and divisions with constants when the constant is 1 or -1. When the constant is a power of 2, replace them by bit shits. Signed-off-by: Christophe Leroy --- arch/powerpc/net/bpf_jit_comp32.c | 23 ++++++++++++++++++++--- 1 file changed, 20 insertions(+), 3 deletions(-) diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c index 6c45d953d4e8..e6e156c0c5d6 100644 --- a/arch/powerpc/net/bpf_jit_comp32.c +++ b/arch/powerpc/net/bpf_jit_comp32.c @@ -391,7 +391,13 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context * EMIT(PPC_RAW_MULW(dst_reg, dst_reg, src_reg)); break; case BPF_ALU | BPF_MUL | BPF_K: /* (u32) dst *= (u32) imm */ - if (imm >= -32768 && imm < 32768) { + if (imm == 1) + break; + if (imm == -1) { + EMIT(PPC_RAW_SUBFIC(dst_reg, dst_reg, 0)); + } else if (is_power_of_2((u32)imm)) { + EMIT(PPC_RAW_SLWI(dst_reg, dst_reg, ilog2(imm))); + } else if (imm >= -32768 && imm < 32768) { EMIT(PPC_RAW_MULI(dst_reg, dst_reg, imm)); } else { PPC_LI32(_R0, imm); @@ -411,6 +417,13 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context * EMIT(PPC_RAW_SUBFZE(dst_reg_h, dst_reg_h)); break; } + if (imm > 0 && is_power_of_2(imm)) { + imm = ilog2(imm); + EMIT(PPC_RAW_RLWINM(dst_reg_h, dst_reg_h, imm, 0, 31 - imm)); + EMIT(PPC_RAW_RLWIMI(dst_reg_h, dst_reg, imm, 32 - imm, 31)); + EMIT(PPC_RAW_SLWI(dst_reg, dst_reg, imm)); + break; + } bpf_set_seen_register(ctx, tmp_reg); PPC_LI32(tmp_reg, imm); EMIT(PPC_RAW_MULW(dst_reg_h, dst_reg_h, tmp_reg)); @@ -438,8 +451,12 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context * if (imm == 1) break; - PPC_LI32(_R0, imm); - EMIT(PPC_RAW_DIVWU(dst_reg, dst_reg, _R0)); + if (is_power_of_2((u32)imm)) { + EMIT(PPC_RAW_SRWI(dst_reg, dst_reg, ilog2(imm))); + } else { + PPC_LI32(_R0, imm); + EMIT(PPC_RAW_DIVWU(dst_reg, dst_reg, _R0)); + } break; case BPF_ALU | BPF_MOD | BPF_K: /* (u32) dst %= (u32) imm */ if (!imm) From patchwork Thu Dec 1 07:56:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christophe Leroy X-Patchwork-Id: 13061031 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C7970C43217 for ; 
From patchwork Thu Dec 1 07:56:34 2022
From: Christophe Leroy <christophe.leroy@csgroup.eu>
To: Michael Ellerman, Nicholas Piggin, "Naveen N. Rao"
Rao" Cc: Christophe Leroy , linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, bpf@vger.kernel.org, Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa Subject: [PATCH v1 09/10] powerpc/bpf/32: introduce a second source register for ALU operations Date: Thu, 1 Dec 2022 08:56:34 +0100 Message-Id: <5e09173898731378fefa91b2ca986173aa7310ae.1669881248.git.christophe.leroy@csgroup.eu> X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 X-Developer-Signature: v=1; a=ed25519-sha256; t=1669881391; l=24323; s=20211009; h=from:subject:message-id; bh=bpYtcKU8c/fGO729q+2W/8IcYai75AITh9hFQY4O7Jc=; b=44ZzMv/1Wklf3QlzUNp+FIcEr475KK/vtXL7PmLfkHryFObVGLlmWQC64QE/anQIAXu4B3cJSsqQ 9H5/itcWA/jNmqq8e7iep4qvgTgpDa5CRm61fiMH2lkA4QFSZQo0 X-Developer-Key: i=christophe.leroy@csgroup.eu; a=ed25519; pk=HIzTzUj91asvincQGOFx6+ZF5AoUuP9GdOtQChs7Mm0= Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net At the time being, all ALU operation are performed with same L-source and destination, requiring the L-source to be moved into destination via a separate register move, like: 70: 7f c6 f3 78 mr r6,r30 74: 7f a5 eb 78 mr r5,r29 78: 30 c6 ff f4 addic r6,r6,-12 7c: 7c a5 01 d4 addme r5,r5 Introduce a second source register to all ALU operations. For the time being that second source register is made equal to the destination register. That change will allow, via following patch, to optimise the generated code as: 70: 30 de ff f4 addic r6,r30,-12 74: 7c bd 01 d4 addme r5,r29 Signed-off-by: Christophe Leroy --- arch/powerpc/net/bpf_jit_comp32.c | 350 ++++++++++++++++-------------- 1 file changed, 183 insertions(+), 167 deletions(-) diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c index e6e156c0c5d6..c45db3d280b2 100644 --- a/arch/powerpc/net/bpf_jit_comp32.c +++ b/arch/powerpc/net/bpf_jit_comp32.c @@ -294,6 +294,8 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context * u32 dst_reg_h = dst_reg - 1; u32 src_reg = bpf_to_ppc(insn[i].src_reg); u32 src_reg_h = src_reg - 1; + u32 src2_reg = dst_reg; + u32 src2_reg_h = dst_reg_h; u32 ax_reg = bpf_to_ppc(BPF_REG_AX); u32 tmp_reg = bpf_to_ppc(TMP_REG); u32 size = BPF_SIZE(code); @@ -338,108 +340,111 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context * * Arithmetic operations: ADD/SUB/MUL/DIV/MOD/NEG */ case BPF_ALU | BPF_ADD | BPF_X: /* (u32) dst += (u32) src */ - EMIT(PPC_RAW_ADD(dst_reg, dst_reg, src_reg)); + EMIT(PPC_RAW_ADD(dst_reg, src2_reg, src_reg)); break; case BPF_ALU64 | BPF_ADD | BPF_X: /* dst += src */ - EMIT(PPC_RAW_ADDC(dst_reg, dst_reg, src_reg)); - EMIT(PPC_RAW_ADDE(dst_reg_h, dst_reg_h, src_reg_h)); + EMIT(PPC_RAW_ADDC(dst_reg, src2_reg, src_reg)); + EMIT(PPC_RAW_ADDE(dst_reg_h, src2_reg_h, src_reg_h)); break; case BPF_ALU | BPF_SUB | BPF_X: /* (u32) dst -= (u32) src */ - EMIT(PPC_RAW_SUB(dst_reg, dst_reg, src_reg)); + EMIT(PPC_RAW_SUB(dst_reg, src2_reg, src_reg)); break; case BPF_ALU64 | BPF_SUB | BPF_X: /* dst -= src */ - EMIT(PPC_RAW_SUBFC(dst_reg, src_reg, dst_reg)); - EMIT(PPC_RAW_SUBFE(dst_reg_h, src_reg_h, dst_reg_h)); + EMIT(PPC_RAW_SUBFC(dst_reg, src_reg, src2_reg)); + EMIT(PPC_RAW_SUBFE(dst_reg_h, src_reg_h, src2_reg_h)); break; case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */ imm = -imm; fallthrough; case BPF_ALU | BPF_ADD | BPF_K: /* (u32) dst += (u32) imm 
case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */ imm = -imm; fallthrough; case BPF_ALU | BPF_ADD | BPF_K: /* (u32) dst += (u32) imm */ - if (IMM_HA(imm) & 0xffff) - EMIT(PPC_RAW_ADDIS(dst_reg, dst_reg, IMM_HA(imm))); + if (!imm) { + EMIT(PPC_RAW_MR(dst_reg, src2_reg)); + } else if (IMM_HA(imm) & 0xffff) { + EMIT(PPC_RAW_ADDIS(dst_reg, src2_reg, IMM_HA(imm))); + src2_reg = dst_reg; + } if (IMM_L(imm)) - EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm))); + EMIT(PPC_RAW_ADDI(dst_reg, src2_reg, IMM_L(imm))); break; case BPF_ALU64 | BPF_SUB | BPF_K: /* dst -= imm */ imm = -imm; fallthrough; case BPF_ALU64 | BPF_ADD | BPF_K: /* dst += imm */ - if (!imm) + if (!imm) { + EMIT(PPC_RAW_MR(dst_reg, src2_reg)); + EMIT(PPC_RAW_MR(dst_reg_h, src2_reg_h)); break; - + } if (imm >= -32768 && imm < 32768) { - EMIT(PPC_RAW_ADDIC(dst_reg, dst_reg, imm)); + EMIT(PPC_RAW_ADDIC(dst_reg, src2_reg, imm)); } else { PPC_LI32(_R0, imm); - EMIT(PPC_RAW_ADDC(dst_reg, dst_reg, _R0)); + EMIT(PPC_RAW_ADDC(dst_reg, src2_reg, _R0)); } if (imm >= 0 || (BPF_OP(code) == BPF_SUB && imm == 0x80000000)) - EMIT(PPC_RAW_ADDZE(dst_reg_h, dst_reg_h)); + EMIT(PPC_RAW_ADDZE(dst_reg_h, src2_reg_h)); else - EMIT(PPC_RAW_ADDME(dst_reg_h, dst_reg_h)); + EMIT(PPC_RAW_ADDME(dst_reg_h, src2_reg_h)); break; case BPF_ALU64 | BPF_MUL | BPF_X: /* dst *= src */ bpf_set_seen_register(ctx, tmp_reg); - EMIT(PPC_RAW_MULW(_R0, dst_reg, src_reg_h)); - EMIT(PPC_RAW_MULW(dst_reg_h, dst_reg_h, src_reg)); - EMIT(PPC_RAW_MULHWU(tmp_reg, dst_reg, src_reg)); - EMIT(PPC_RAW_MULW(dst_reg, dst_reg, src_reg)); + EMIT(PPC_RAW_MULW(_R0, src2_reg, src_reg_h)); + EMIT(PPC_RAW_MULW(dst_reg_h, src2_reg_h, src_reg)); + EMIT(PPC_RAW_MULHWU(tmp_reg, src2_reg, src_reg)); + EMIT(PPC_RAW_MULW(dst_reg, src2_reg, src_reg)); EMIT(PPC_RAW_ADD(dst_reg_h, dst_reg_h, _R0)); EMIT(PPC_RAW_ADD(dst_reg_h, dst_reg_h, tmp_reg)); break; case BPF_ALU | BPF_MUL | BPF_X: /* (u32) dst *= (u32) src */ - EMIT(PPC_RAW_MULW(dst_reg, dst_reg, src_reg)); + EMIT(PPC_RAW_MULW(dst_reg, src2_reg, src_reg)); break; case BPF_ALU | BPF_MUL | BPF_K: /* (u32) dst *= (u32) imm */ - if (imm == 1) - break; - if (imm == -1) { - EMIT(PPC_RAW_SUBFIC(dst_reg, dst_reg, 0)); + if (imm == 1) { + EMIT(PPC_RAW_MR(dst_reg, src2_reg)); + } else if (imm == -1) { + EMIT(PPC_RAW_SUBFIC(dst_reg, src2_reg, 0)); } else if (is_power_of_2((u32)imm)) { - EMIT(PPC_RAW_SLWI(dst_reg, dst_reg, ilog2(imm))); + EMIT(PPC_RAW_SLWI(dst_reg, src2_reg, ilog2(imm))); } else if (imm >= -32768 && imm < 32768) { - EMIT(PPC_RAW_MULI(dst_reg, dst_reg, imm)); + EMIT(PPC_RAW_MULI(dst_reg, src2_reg, imm)); } else { PPC_LI32(_R0, imm); - EMIT(PPC_RAW_MULW(dst_reg, dst_reg, _R0)); + EMIT(PPC_RAW_MULW(dst_reg, src2_reg, _R0)); } break;
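The BPF_ALU64 | BPF_MUL | BPF_X sequence above computes a 64x64->64 multiply from 32-bit halves: the low word is lo32(a_lo * b_lo), and the high word collects both cross products plus the high half of the low product. A C sketch of that identity (illustration only, not kernel code; the helper name is invented):

    #include <stdint.h>

    /* (a_hi:a_lo) * (b_hi:b_lo), keeping only the low 64 bits of the result. */
    static void mul64(uint32_t *lo, uint32_t *hi,
                      uint32_t a_lo, uint32_t a_hi, uint32_t b_lo, uint32_t b_hi)
    {
            uint64_t low = (uint64_t)a_lo * b_lo;

            *hi = a_lo * b_hi              /* mulw:   lo32(a_lo * b_hi) */
                + a_hi * b_lo              /* mulw:   lo32(a_hi * b_lo) */
                + (uint32_t)(low >> 32);   /* mulhwu: hi32(a_lo * b_lo) */
            *lo = (uint32_t)low;           /* mulw:   lo32(a_lo * b_lo) */
    }

The a_hi * b_hi term never contributes to the low 64 bits, which is why four multiply instructions suffice.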
case BPF_ALU64 | BPF_MUL | BPF_K: /* dst *= imm */ if (!imm) { PPC_LI32(dst_reg, 0); PPC_LI32(dst_reg_h, 0); - break; - } - if (imm == 1) - break; - if (imm == -1) { - EMIT(PPC_RAW_SUBFIC(dst_reg, dst_reg, 0)); - EMIT(PPC_RAW_SUBFZE(dst_reg_h, dst_reg_h)); - break; - } - if (imm > 0 && is_power_of_2(imm)) { + } else if (imm == 1) { + EMIT(PPC_RAW_MR(dst_reg, src2_reg)); + EMIT(PPC_RAW_MR(dst_reg_h, src2_reg_h)); + } else if (imm == -1) { + EMIT(PPC_RAW_SUBFIC(dst_reg, src2_reg, 0)); + EMIT(PPC_RAW_SUBFZE(dst_reg_h, src2_reg_h)); + } else if (imm > 0 && is_power_of_2(imm)) { imm = ilog2(imm); - EMIT(PPC_RAW_RLWINM(dst_reg_h, dst_reg_h, imm, 0, 31 - imm)); + EMIT(PPC_RAW_RLWINM(dst_reg_h, src2_reg_h, imm, 0, 31 - imm)); EMIT(PPC_RAW_RLWIMI(dst_reg_h, dst_reg, imm, 32 - imm, 31)); - EMIT(PPC_RAW_SLWI(dst_reg, dst_reg, imm)); - break; + EMIT(PPC_RAW_SLWI(dst_reg, src2_reg, imm)); + } else { + bpf_set_seen_register(ctx, tmp_reg); + PPC_LI32(tmp_reg, imm); + EMIT(PPC_RAW_MULW(dst_reg_h, src2_reg_h, tmp_reg)); + if (imm < 0) + EMIT(PPC_RAW_SUB(dst_reg_h, dst_reg_h, src2_reg)); + EMIT(PPC_RAW_MULHWU(_R0, src2_reg, tmp_reg)); + EMIT(PPC_RAW_MULW(dst_reg, src2_reg, tmp_reg)); + EMIT(PPC_RAW_ADD(dst_reg_h, dst_reg_h, _R0)); } - bpf_set_seen_register(ctx, tmp_reg); - PPC_LI32(tmp_reg, imm); - EMIT(PPC_RAW_MULW(dst_reg_h, dst_reg_h, tmp_reg)); - if (imm < 0) - EMIT(PPC_RAW_SUB(dst_reg_h, dst_reg_h, dst_reg)); - EMIT(PPC_RAW_MULHWU(_R0, dst_reg, tmp_reg)); - EMIT(PPC_RAW_MULW(dst_reg, dst_reg, tmp_reg)); - EMIT(PPC_RAW_ADD(dst_reg_h, dst_reg_h, _R0)); break; case BPF_ALU | BPF_DIV | BPF_X: /* (u32) dst /= (u32) src */ - EMIT(PPC_RAW_DIVWU(dst_reg, dst_reg, src_reg)); + EMIT(PPC_RAW_DIVWU(dst_reg, src2_reg, src_reg)); break; case BPF_ALU | BPF_MOD | BPF_X: /* (u32) dst %= (u32) src */ - EMIT(PPC_RAW_DIVWU(_R0, dst_reg, src_reg)); + EMIT(PPC_RAW_DIVWU(_R0, src2_reg, src_reg)); EMIT(PPC_RAW_MULW(_R0, src_reg, _R0)); - EMIT(PPC_RAW_SUB(dst_reg, dst_reg, _R0)); + EMIT(PPC_RAW_SUB(dst_reg, src2_reg, _R0)); break; case BPF_ALU64 | BPF_DIV | BPF_X: /* dst /= src */ return -EOPNOTSUPP;
@@ -448,14 +453,13 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context * case BPF_ALU | BPF_DIV | BPF_K: /* (u32) dst /= (u32) imm */ if (!imm) return -EINVAL; - if (imm == 1) - break; - - if (is_power_of_2((u32)imm)) { - EMIT(PPC_RAW_SRWI(dst_reg, dst_reg, ilog2(imm))); + if (imm == 1) { + EMIT(PPC_RAW_MR(dst_reg, src2_reg)); + } else if (is_power_of_2((u32)imm)) { + EMIT(PPC_RAW_SRWI(dst_reg, src2_reg, ilog2(imm))); } else { PPC_LI32(_R0, imm); - EMIT(PPC_RAW_DIVWU(dst_reg, dst_reg, _R0)); + EMIT(PPC_RAW_DIVWU(dst_reg, src2_reg, _R0)); } break; case BPF_ALU | BPF_MOD | BPF_K: /* (u32) dst %= (u32) imm */
@@ -465,16 +469,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context * if (!is_power_of_2((u32)imm)) { bpf_set_seen_register(ctx, tmp_reg); PPC_LI32(tmp_reg, imm); - EMIT(PPC_RAW_DIVWU(_R0, dst_reg, tmp_reg)); + EMIT(PPC_RAW_DIVWU(_R0, src2_reg, tmp_reg)); EMIT(PPC_RAW_MULW(_R0, tmp_reg, _R0)); - EMIT(PPC_RAW_SUB(dst_reg, dst_reg, _R0)); - break; - } - if (imm == 1) + EMIT(PPC_RAW_SUB(dst_reg, src2_reg, _R0)); + } else if (imm == 1) { EMIT(PPC_RAW_LI(dst_reg, 0)); - else - EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 0, 32 - ilog2((u32)imm), 31)); - + } else { + imm = ilog2((u32)imm); + EMIT(PPC_RAW_RLWINM(dst_reg, src2_reg, 0, 32 - imm, 31)); + } break; case BPF_ALU64 | BPF_MOD | BPF_K: /* dst %= imm */ if (!imm)
@@ -486,7 +489,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context * if (imm == 1) EMIT(PPC_RAW_LI(dst_reg, 0)); else - EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 0, 32 - ilog2(imm), 31)); + EMIT(PPC_RAW_RLWINM(dst_reg, src2_reg, 0, 32 - ilog2(imm), 31)); EMIT(PPC_RAW_LI(dst_reg_h, 0)); break;
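Both BPF_MOD | BPF_K cases above use the same two strategies: a power-of-two modulus reduces to keeping the low bits (one rlwinm with the right mask), and any other modulus is computed as n - (n / d) * d with divwu, mulw and sub. A C sketch (illustration only, not kernel code; the helper name is invented):

    #include <stdint.h>

    static uint32_t mod_imm(uint32_t n, uint32_t d)  /* d != 0, rejected earlier */
    {
            if ((d & (d - 1)) == 0)       /* power of two */
                    return n & (d - 1);   /* rlwinm keeps the low ilog2(d) bits */
            return n - (n / d) * d;       /* divwu + mulw + sub */
    }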
case BPF_ALU64 | BPF_DIV | BPF_K: /* dst /= imm */
@@ -496,34 +499,38 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context * return -EOPNOTSUPP; if (imm < 0) { - EMIT(PPC_RAW_SUBFIC(dst_reg, dst_reg, 0)); - EMIT(PPC_RAW_SUBFZE(dst_reg_h, dst_reg_h)); + EMIT(PPC_RAW_SUBFIC(dst_reg, src2_reg, 0)); + EMIT(PPC_RAW_SUBFZE(dst_reg_h, src2_reg_h)); imm = -imm; + src2_reg = dst_reg; + } + if (imm == 1) { + EMIT(PPC_RAW_MR(dst_reg, src2_reg)); + EMIT(PPC_RAW_MR(dst_reg_h, src2_reg_h)); + } else { + imm = ilog2(imm); + EMIT(PPC_RAW_RLWINM(dst_reg, src2_reg, 32 - imm, imm, 31)); + EMIT(PPC_RAW_RLWIMI(dst_reg, src2_reg_h, 32 - imm, 0, imm - 1)); + EMIT(PPC_RAW_SRAWI(dst_reg_h, src2_reg_h, imm)); } - if (imm == 1) - break; - imm = ilog2(imm); - EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 32 - imm, imm, 31)); - EMIT(PPC_RAW_RLWIMI(dst_reg, dst_reg_h, 32 - imm, 0, imm - 1)); - EMIT(PPC_RAW_SRAWI(dst_reg_h, dst_reg_h, imm)); break; case BPF_ALU | BPF_NEG: /* (u32) dst = -dst */ - EMIT(PPC_RAW_NEG(dst_reg, dst_reg)); + EMIT(PPC_RAW_NEG(dst_reg, src2_reg)); break; case BPF_ALU64 | BPF_NEG: /* dst = -dst */ - EMIT(PPC_RAW_SUBFIC(dst_reg, dst_reg, 0)); - EMIT(PPC_RAW_SUBFZE(dst_reg_h, dst_reg_h)); + EMIT(PPC_RAW_SUBFIC(dst_reg, src2_reg, 0)); + EMIT(PPC_RAW_SUBFZE(dst_reg_h, src2_reg_h)); break; /* * Logical operations: AND/OR/XOR/[A]LSH/[A]RSH */ case BPF_ALU64 | BPF_AND | BPF_X: /* dst = dst & src */ - EMIT(PPC_RAW_AND(dst_reg, dst_reg, src_reg)); - EMIT(PPC_RAW_AND(dst_reg_h, dst_reg_h, src_reg_h)); + EMIT(PPC_RAW_AND(dst_reg, src2_reg, src_reg)); + EMIT(PPC_RAW_AND(dst_reg_h, src2_reg_h, src_reg_h)); break; case BPF_ALU | BPF_AND | BPF_X: /* (u32) dst = dst & src */ - EMIT(PPC_RAW_AND(dst_reg, dst_reg, src_reg)); + EMIT(PPC_RAW_AND(dst_reg, src2_reg, src_reg)); break; case BPF_ALU64 | BPF_AND | BPF_K: /* dst = dst & imm */ if (imm >= 0)
@@ -531,23 +538,23 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context * fallthrough; case BPF_ALU | BPF_AND | BPF_K: /* (u32) dst = dst & imm */ if (!IMM_H(imm)) { - EMIT(PPC_RAW_ANDI(dst_reg, dst_reg, IMM_L(imm))); + EMIT(PPC_RAW_ANDI(dst_reg, src2_reg, IMM_L(imm))); } else if (!IMM_L(imm)) { - EMIT(PPC_RAW_ANDIS(dst_reg, dst_reg, IMM_H(imm))); + EMIT(PPC_RAW_ANDIS(dst_reg, src2_reg, IMM_H(imm))); } else if (imm == (((1 << fls(imm)) - 1) ^ ((1 << (ffs(i) - 1)) - 1))) { - EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 0, + EMIT(PPC_RAW_RLWINM(dst_reg, src2_reg, 0, 32 - fls(imm), 32 - ffs(imm))); } else { PPC_LI32(_R0, imm); - EMIT(PPC_RAW_AND(dst_reg, dst_reg, _R0)); + EMIT(PPC_RAW_AND(dst_reg, src2_reg, _R0)); } break; case BPF_ALU64 | BPF_OR | BPF_X: /* dst = dst | src */ - EMIT(PPC_RAW_OR(dst_reg, dst_reg, src_reg)); - EMIT(PPC_RAW_OR(dst_reg_h, dst_reg_h, src_reg_h)); + EMIT(PPC_RAW_OR(dst_reg, src2_reg, src_reg)); + EMIT(PPC_RAW_OR(dst_reg_h, src2_reg_h, src_reg_h)); break; case BPF_ALU | BPF_OR | BPF_X: /* dst = (u32) dst | (u32) src */ - EMIT(PPC_RAW_OR(dst_reg, dst_reg, src_reg)); + EMIT(PPC_RAW_OR(dst_reg, src2_reg, src_reg)); break; case BPF_ALU64 | BPF_OR | BPF_K:/* dst = dst | imm */ /* Sign-extended */
@@ -555,145 +562,154 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context * EMIT(PPC_RAW_LI(dst_reg_h, -1)); fallthrough; case BPF_ALU | BPF_OR | BPF_K:/* dst = (u32) dst | (u32) imm */ - if (IMM_L(imm)) - EMIT(PPC_RAW_ORI(dst_reg, dst_reg, IMM_L(imm))); + if (IMM_L(imm)) { + EMIT(PPC_RAW_ORI(dst_reg, src2_reg, IMM_L(imm))); + src2_reg = dst_reg; + } if (IMM_H(imm)) - EMIT(PPC_RAW_ORIS(dst_reg, dst_reg, IMM_H(imm))); + EMIT(PPC_RAW_ORIS(dst_reg, src2_reg, IMM_H(imm))); break; case BPF_ALU64 | BPF_XOR | BPF_X: /* dst ^= src */ if (dst_reg == src_reg) { EMIT(PPC_RAW_LI(dst_reg, 0)); EMIT(PPC_RAW_LI(dst_reg_h, 0)); } else { - EMIT(PPC_RAW_XOR(dst_reg, dst_reg, src_reg)); - EMIT(PPC_RAW_XOR(dst_reg_h, dst_reg_h, src_reg_h)); + EMIT(PPC_RAW_XOR(dst_reg, src2_reg, src_reg)); + EMIT(PPC_RAW_XOR(dst_reg_h, src2_reg_h, src_reg_h)); } break; case BPF_ALU | BPF_XOR | BPF_X: /* (u32) dst ^= src */ if (dst_reg == src_reg) EMIT(PPC_RAW_LI(dst_reg, 0)); else - EMIT(PPC_RAW_XOR(dst_reg, dst_reg, src_reg)); + EMIT(PPC_RAW_XOR(dst_reg, src2_reg, src_reg)); break;
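One branch of the AND-with-immediate case above is worth spelling out: when the set bits of the immediate form a single contiguous run, one rlwinm with mask boundaries 32 - fls(imm) .. 32 - ffs(imm) performs the whole AND. A C sketch of that test, written here with ffs(imm) (illustration only; the helper name is invented, and the 64-bit shifts merely avoid overflow when bit 31 is set):

    /* True when imm looks like 0b000111..1000: one contiguous run of ones.
     * fls()/ffs() are the usual kernel helpers, 1-based bit positions. */
    static bool imm_is_contiguous_mask(u32 imm)
    {
            u64 high = (1ull << fls(imm)) - 1;       /* ones up to the top set bit */
            u64 low = (1ull << (ffs(imm) - 1)) - 1;  /* ones below the bottom set bit */

            return imm == (u32)(high ^ low);
    }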
case BPF_ALU64 | BPF_XOR | BPF_K: /* dst ^= imm */ if (imm < 0) - EMIT(PPC_RAW_NOR(dst_reg_h, dst_reg_h, dst_reg_h)); + EMIT(PPC_RAW_NOR(dst_reg_h, src2_reg_h, src2_reg_h)); fallthrough; case BPF_ALU | BPF_XOR | BPF_K: /* (u32) dst ^= (u32) imm */ - if (IMM_L(imm)) - EMIT(PPC_RAW_XORI(dst_reg, dst_reg, IMM_L(imm))); + if (IMM_L(imm)) { + EMIT(PPC_RAW_XORI(dst_reg, src2_reg, IMM_L(imm))); + src2_reg = dst_reg; + } if (IMM_H(imm)) - EMIT(PPC_RAW_XORIS(dst_reg, dst_reg, IMM_H(imm))); + EMIT(PPC_RAW_XORIS(dst_reg, src2_reg, IMM_H(imm))); break; case BPF_ALU | BPF_LSH | BPF_X: /* (u32) dst <<= (u32) src */ - EMIT(PPC_RAW_SLW(dst_reg, dst_reg, src_reg)); + EMIT(PPC_RAW_SLW(dst_reg, src2_reg, src_reg)); break; case BPF_ALU64 | BPF_LSH | BPF_X: /* dst <<= src; */ bpf_set_seen_register(ctx, tmp_reg); EMIT(PPC_RAW_SUBFIC(_R0, src_reg, 32)); - EMIT(PPC_RAW_SLW(dst_reg_h, dst_reg_h, src_reg)); + EMIT(PPC_RAW_SLW(dst_reg_h, src2_reg_h, src_reg)); EMIT(PPC_RAW_ADDI(tmp_reg, src_reg, 32)); - EMIT(PPC_RAW_SRW(_R0, dst_reg, _R0)); - EMIT(PPC_RAW_SLW(tmp_reg, dst_reg, tmp_reg)); + EMIT(PPC_RAW_SRW(_R0, src2_reg, _R0)); + EMIT(PPC_RAW_SLW(tmp_reg, src2_reg, tmp_reg)); EMIT(PPC_RAW_OR(dst_reg_h, dst_reg_h, _R0)); - EMIT(PPC_RAW_SLW(dst_reg, dst_reg, src_reg)); + EMIT(PPC_RAW_SLW(dst_reg, src2_reg, src_reg)); EMIT(PPC_RAW_OR(dst_reg_h, dst_reg_h, tmp_reg)); break; case BPF_ALU | BPF_LSH | BPF_K: /* (u32) dst <<= (u32) imm */ - if (!imm) - break; - EMIT(PPC_RAW_SLWI(dst_reg, dst_reg, imm)); + if (imm) + EMIT(PPC_RAW_SLWI(dst_reg, src2_reg, imm)); + else + EMIT(PPC_RAW_MR(dst_reg, src2_reg)); break; case BPF_ALU64 | BPF_LSH | BPF_K: /* dst <<= imm */ if (imm < 0) return -EINVAL; - if (!imm) - break; - if (imm < 32) { - EMIT(PPC_RAW_RLWINM(dst_reg_h, dst_reg_h, imm, 0, 31 - imm)); - EMIT(PPC_RAW_RLWIMI(dst_reg_h, dst_reg, imm, 32 - imm, 31)); - EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, imm, 0, 31 - imm)); - break; - } - if (imm < 64) - EMIT(PPC_RAW_RLWINM(dst_reg_h, dst_reg, imm, 0, 31 - imm)); - else + if (!imm) { + EMIT(PPC_RAW_MR(dst_reg, src2_reg)); + } else if (imm < 32) { + EMIT(PPC_RAW_RLWINM(dst_reg_h, src2_reg_h, imm, 0, 31 - imm)); + EMIT(PPC_RAW_RLWIMI(dst_reg_h, src2_reg, imm, 32 - imm, 31)); + EMIT(PPC_RAW_RLWINM(dst_reg, src2_reg, imm, 0, 31 - imm)); + } else if (imm < 64) { + EMIT(PPC_RAW_RLWINM(dst_reg_h, src2_reg, imm, 0, 31 - imm)); + EMIT(PPC_RAW_LI(dst_reg, 0)); + } else { EMIT(PPC_RAW_LI(dst_reg_h, 0)); - EMIT(PPC_RAW_LI(dst_reg, 0)); + EMIT(PPC_RAW_LI(dst_reg, 0)); + } break; case BPF_ALU | BPF_RSH | BPF_X: /* (u32) dst >>= (u32) src */ - EMIT(PPC_RAW_SRW(dst_reg, dst_reg, src_reg)); + EMIT(PPC_RAW_SRW(dst_reg, src2_reg, src_reg)); break; case BPF_ALU64 | BPF_RSH | BPF_X: /* dst >>= src */ bpf_set_seen_register(ctx, tmp_reg); EMIT(PPC_RAW_SUBFIC(_R0, src_reg, 32)); - EMIT(PPC_RAW_SRW(dst_reg, dst_reg, src_reg)); + EMIT(PPC_RAW_SRW(dst_reg, src2_reg, src_reg)); EMIT(PPC_RAW_ADDI(tmp_reg, src_reg, 32)); - EMIT(PPC_RAW_SLW(_R0, dst_reg_h, _R0)); + EMIT(PPC_RAW_SLW(_R0, src2_reg_h, _R0)); EMIT(PPC_RAW_SRW(tmp_reg, dst_reg_h, tmp_reg)); EMIT(PPC_RAW_OR(dst_reg, dst_reg, _R0)); - EMIT(PPC_RAW_SRW(dst_reg_h, dst_reg_h, src_reg)); + EMIT(PPC_RAW_SRW(dst_reg_h, src2_reg_h, src_reg)); EMIT(PPC_RAW_OR(dst_reg, dst_reg, tmp_reg)); break; case BPF_ALU | BPF_RSH | BPF_K: /* (u32) dst >>= (u32) imm */ - if (!imm) - break; - EMIT(PPC_RAW_SRWI(dst_reg, dst_reg, imm)); + if (imm) + EMIT(PPC_RAW_SRWI(dst_reg, src2_reg, imm)); + else + EMIT(PPC_RAW_MR(dst_reg, src2_reg)); break;
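The BPF_ALU64 | BPF_LSH | BPF_K case above, like its right-shift counterparts below, splits on the shift amount in three ways: below 32, bits cross between the two halves; from 32 to 63, one half becomes a shifted copy of the other; at 64 and beyond, the result is zero. A C sketch for the left shift (illustration only, not kernel code; the helper name is invented):

    #include <stdint.h>

    static void lsh64(uint32_t *lo, uint32_t *hi, unsigned int n)
    {
            if (n == 0) {
                    /* nothing to do: the JIT now emits a plain mr copy */
            } else if (n < 32) {
                    *hi = (*hi << n) | (*lo >> (32 - n));  /* rlwinm + rlwimi */
                    *lo <<= n;                             /* rlwinm */
            } else if (n < 64) {
                    *hi = *lo << (n - 32);
                    *lo = 0;
            } else {
                    *hi = 0;
                    *lo = 0;
            }
    }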
case BPF_ALU64 | BPF_RSH | BPF_K: /* dst >>= imm */ if (imm < 0) return -EINVAL; - if (!imm) - break; - if (imm < 32) { - EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 32 - imm, imm, 31)); - EMIT(PPC_RAW_RLWIMI(dst_reg, dst_reg_h, 32 - imm, 0, imm - 1)); - EMIT(PPC_RAW_RLWINM(dst_reg_h, dst_reg_h, 32 - imm, imm, 31)); - break; - } - if (imm < 64) - EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg_h, 64 - imm, imm - 32, 31)); - else + if (!imm) { + EMIT(PPC_RAW_MR(dst_reg, src2_reg)); + EMIT(PPC_RAW_MR(dst_reg_h, src2_reg_h)); + } else if (imm < 32) { + EMIT(PPC_RAW_RLWINM(dst_reg, src2_reg, 32 - imm, imm, 31)); + EMIT(PPC_RAW_RLWIMI(dst_reg, src2_reg_h, 32 - imm, 0, imm - 1)); + EMIT(PPC_RAW_RLWINM(dst_reg_h, src2_reg_h, 32 - imm, imm, 31)); + } else if (imm < 64) { + EMIT(PPC_RAW_RLWINM(dst_reg, src2_reg_h, 64 - imm, imm - 32, 31)); + EMIT(PPC_RAW_LI(dst_reg_h, 0)); + } else { EMIT(PPC_RAW_LI(dst_reg, 0)); - EMIT(PPC_RAW_LI(dst_reg_h, 0)); + EMIT(PPC_RAW_LI(dst_reg_h, 0)); + } break; case BPF_ALU | BPF_ARSH | BPF_X: /* (s32) dst >>= src */ - EMIT(PPC_RAW_SRAW(dst_reg, dst_reg, src_reg)); + EMIT(PPC_RAW_SRAW(dst_reg, src2_reg, src_reg)); break; case BPF_ALU64 | BPF_ARSH | BPF_X: /* (s64) dst >>= src */ bpf_set_seen_register(ctx, tmp_reg); EMIT(PPC_RAW_SUBFIC(_R0, src_reg, 32)); - EMIT(PPC_RAW_SRW(dst_reg, dst_reg, src_reg)); - EMIT(PPC_RAW_SLW(_R0, dst_reg_h, _R0)); + EMIT(PPC_RAW_SRW(dst_reg, src2_reg, src_reg)); + EMIT(PPC_RAW_SLW(_R0, src2_reg_h, _R0)); EMIT(PPC_RAW_ADDI(tmp_reg, src_reg, 32)); EMIT(PPC_RAW_OR(dst_reg, dst_reg, _R0)); EMIT(PPC_RAW_RLWINM(_R0, tmp_reg, 0, 26, 26)); - EMIT(PPC_RAW_SRAW(tmp_reg, dst_reg_h, tmp_reg)); - EMIT(PPC_RAW_SRAW(dst_reg_h, dst_reg_h, src_reg)); + EMIT(PPC_RAW_SRAW(tmp_reg, src2_reg_h, tmp_reg)); + EMIT(PPC_RAW_SRAW(dst_reg_h, src2_reg_h, src_reg)); EMIT(PPC_RAW_SLW(tmp_reg, tmp_reg, _R0)); EMIT(PPC_RAW_OR(dst_reg, dst_reg, tmp_reg)); break; case BPF_ALU | BPF_ARSH | BPF_K: /* (s32) dst >>= imm */ - if (!imm) - break; - EMIT(PPC_RAW_SRAWI(dst_reg, dst_reg, imm)); + if (imm) + EMIT(PPC_RAW_SRAWI(dst_reg, src2_reg, imm)); + else + EMIT(PPC_RAW_MR(dst_reg, src2_reg)); break; case BPF_ALU64 | BPF_ARSH | BPF_K: /* (s64) dst >>= imm */ if (imm < 0) return -EINVAL; - if (!imm) - break; - if (imm < 32) { - EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 32 - imm, imm, 31)); - EMIT(PPC_RAW_RLWIMI(dst_reg, dst_reg_h, 32 - imm, 0, imm - 1)); - EMIT(PPC_RAW_SRAWI(dst_reg_h, dst_reg_h, imm)); - break; + if (!imm) { + EMIT(PPC_RAW_MR(dst_reg, src2_reg)); + EMIT(PPC_RAW_MR(dst_reg_h, src2_reg_h)); + } else if (imm < 32) { + EMIT(PPC_RAW_RLWINM(dst_reg, src2_reg, 32 - imm, imm, 31)); + EMIT(PPC_RAW_RLWIMI(dst_reg, src2_reg_h, 32 - imm, 0, imm - 1)); + EMIT(PPC_RAW_SRAWI(dst_reg_h, src2_reg_h, imm)); + } else if (imm < 64) { + EMIT(PPC_RAW_SRAWI(dst_reg, src2_reg_h, imm - 32)); + EMIT(PPC_RAW_SRAWI(dst_reg_h, src2_reg_h, 31)); + } else { + EMIT(PPC_RAW_SRAWI(dst_reg, src2_reg_h, 31)); + EMIT(PPC_RAW_SRAWI(dst_reg_h, src2_reg_h, 31)); } - if (imm < 64) - EMIT(PPC_RAW_SRAWI(dst_reg, dst_reg_h, imm - 32)); - else - EMIT(PPC_RAW_SRAWI(dst_reg, dst_reg_h, 31)); - EMIT(PPC_RAW_SRAWI(dst_reg_h, dst_reg_h, 31)); break;
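The arithmetic right shift above is the same three-way split, except that vacated high bits are filled with copies of the sign bit, which is also what the imm >= 64 fallback emits. A C sketch (illustration only, not kernel code; it assumes >> on a signed value is an arithmetic shift, as it is on PowerPC):

    #include <stdint.h>

    static void arsh64(uint32_t *lo, uint32_t *hi, unsigned int n)
    {
            int32_t h = (int32_t)*hi;

            if (n == 0) {
                    /* nothing to do: the JIT now emits plain mr copies */
            } else if (n < 32) {
                    *lo = (*lo >> n) | ((uint32_t)h << (32 - n));
                    *hi = (uint32_t)(h >> n);
            } else if (n < 64) {
                    *lo = (uint32_t)(h >> (n - 32));
                    *hi = (uint32_t)(h >> 31);    /* sign fill */
            } else {
                    *lo = (uint32_t)(h >> 31);
                    *hi = (uint32_t)(h >> 31);
            }
    }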
/*
@@ -727,7 +743,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context * switch (imm) { case 16: /* Copy 16 bits to upper part */ - EMIT(PPC_RAW_RLWIMI(dst_reg, dst_reg, 16, 0, 15)); + EMIT(PPC_RAW_RLWIMI(dst_reg, src2_reg, 16, 0, 15)); /* Rotate 8 bits right & mask */ EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 24, 16, 31)); break;
@@ -737,23 +753,23 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context * * 2 bytes are already in their final position * -- byte 2 and 4 (of bytes 1, 2, 3 and 4) */ - EMIT(PPC_RAW_RLWINM(_R0, dst_reg, 8, 0, 31)); + EMIT(PPC_RAW_RLWINM(_R0, src2_reg, 8, 0, 31)); /* Rotate 24 bits and insert byte 1 */ - EMIT(PPC_RAW_RLWIMI(_R0, dst_reg, 24, 0, 7)); + EMIT(PPC_RAW_RLWIMI(_R0, src2_reg, 24, 0, 7)); /* Rotate 24 bits and insert byte 3 */ - EMIT(PPC_RAW_RLWIMI(_R0, dst_reg, 24, 16, 23)); + EMIT(PPC_RAW_RLWIMI(_R0, src2_reg, 24, 16, 23)); EMIT(PPC_RAW_MR(dst_reg, _R0)); break; case 64: bpf_set_seen_register(ctx, tmp_reg); - EMIT(PPC_RAW_RLWINM(tmp_reg, dst_reg, 8, 0, 31)); - EMIT(PPC_RAW_RLWINM(_R0, dst_reg_h, 8, 0, 31)); + EMIT(PPC_RAW_RLWINM(tmp_reg, src2_reg, 8, 0, 31)); + EMIT(PPC_RAW_RLWINM(_R0, src2_reg_h, 8, 0, 31)); /* Rotate 24 bits and insert byte 1 */ - EMIT(PPC_RAW_RLWIMI(tmp_reg, dst_reg, 24, 0, 7)); - EMIT(PPC_RAW_RLWIMI(_R0, dst_reg_h, 24, 0, 7)); + EMIT(PPC_RAW_RLWIMI(tmp_reg, src2_reg, 24, 0, 7)); + EMIT(PPC_RAW_RLWIMI(_R0, src2_reg_h, 24, 0, 7)); /* Rotate 24 bits and insert byte 3 */ - EMIT(PPC_RAW_RLWIMI(tmp_reg, dst_reg, 24, 16, 23)); - EMIT(PPC_RAW_RLWIMI(_R0, dst_reg_h, 24, 16, 23)); + EMIT(PPC_RAW_RLWIMI(tmp_reg, src2_reg, 24, 16, 23)); + EMIT(PPC_RAW_RLWIMI(_R0, src2_reg_h, 24, 16, 23)); EMIT(PPC_RAW_MR(dst_reg, _R0)); EMIT(PPC_RAW_MR(dst_reg_h, tmp_reg)); break;
@@ -763,7 +779,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context * switch (imm) { case 16: /* zero-extend 16 bits into 32 bits */ - EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 0, 16, 31)); + EMIT(PPC_RAW_RLWINM(dst_reg, src2_reg, 0, 16, 31)); break; case 32: case 64:
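The 32-bit byte swap in the hunks above costs only three instructions: one full rotate by 8 puts bytes 2 and 4 in place, then two rotate-and-insert (rlwimi) operations drop bytes 1 and 3 into their slots. A C sketch of the same trick (illustration only, not kernel code; the masks correspond to the big-endian bit ranges 0-7 and 16-23 given to rlwimi above):

    #include <stdint.h>

    static uint32_t swab32_by_rotate(uint32_t x)
    {
            uint32_t r8 = (x << 8) | (x >> 24);    /* rotate left 8: bytes 2 and 4 land */
            uint32_t r24 = (x << 24) | (x >> 8);   /* the rotation rlwimi applies */

            r8 = (r8 & 0x00ffffff) | (r24 & 0xff000000);  /* insert byte 1 */
            r8 = (r8 & 0xffff00ff) | (r24 & 0x0000ff00);  /* insert byte 3 */
            return r8;
    }

The 64-bit case in the same hunk applies this to both halves and swaps them with the two final mr instructions.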
From patchwork Thu Dec 1 07:56:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christophe Leroy X-Patchwork-Id: 13061034 X-Patchwork-Delegate: bpf@iogearbox.net From: Christophe Leroy To: Michael Ellerman , Nicholas Piggin , "Naveen N. Rao" Cc: Christophe Leroy , linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, bpf@vger.kernel.org, Alexei Starovoitov , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa Subject: [PATCH v1 10/10] powerpc/bpf/32: perform three operands ALU operations Date: Thu, 1 Dec 2022 08:56:35 +0100 Message-Id: X-Mailer: git-send-email 2.38.1 In-Reply-To: References: MIME-Version: 1.0 X-Developer-Signature: v=1; a=ed25519-sha256; t=1669881391; l=2064; s=20211009; h=from:subject:message-id; bh=x9e4wsb7bwKsXC6S+KcXmVuzQBBX7AhIURwS3dX/rR4=; b=XsN+vrNJB5stqWsxL7BjReZg+9Ci3BiT40M67fhpNUf77tkXWOUxpOfPm8CKFXMi9eyiYb3IVTok HDoMyJP3CSO/U9BCU4ujNUEnj3SkVPGu4o3qJAHEzf2u9z5CzwmD X-Developer-Key: i=christophe.leroy@csgroup.eu; a=ed25519; pk=HIzTzUj91asvincQGOFx6+ZF5AoUuP9GdOtQChs7Mm0= Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net
When an ALU instruction is preceded by a MOV instruction that just moves a source register into the destination register of the ALU, replace that MOV+ALU pair with a single ALU operation taking the source of the MOV as second source instead of its destination. Before the change, code could look like the following, with superfluous separate register move (mr) instructions. 70: 7f c6 f3 78 mr r6,r30 74: 7f a5 eb 78 mr r5,r29 78: 30 c6 ff f4 addic r6,r6,-12 7c: 7c a5 01 d4 addme r5,r5 With this commit, the addition instructions take r30 and r29 directly. 70: 30 de ff f4 addic r6,r30,-12 74: 7c bd 01 d4 addme r5,r29 Signed-off-by: Christophe Leroy ---
arch/powerpc/net/bpf_jit_comp32.c | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c index c45db3d280b2..4b8d3634dc04 100644 --- a/arch/powerpc/net/bpf_jit_comp32.c +++ b/arch/powerpc/net/bpf_jit_comp32.c
@@ -290,6 +290,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context * for (i = 0; i < flen; i++) { u32 code = insn[i].code; + u32 prevcode = i ? 
insn[i - 1].code : 0; u32 dst_reg = bpf_to_ppc(insn[i].dst_reg); u32 dst_reg_h = dst_reg - 1; u32 src_reg = bpf_to_ppc(insn[i].src_reg); @@ -308,6 +309,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context * u32 tmp_idx; int j; + if (i && (BPF_CLASS(code) == BPF_ALU64 || BPF_CLASS(code) == BPF_ALU) && + (BPF_CLASS(prevcode) == BPF_ALU64 || BPF_CLASS(prevcode) == BPF_ALU) && + BPF_OP(prevcode) == BPF_MOV && BPF_SRC(prevcode) == BPF_X && + insn[i - 1].dst_reg == insn[i].dst_reg && insn[i - 1].imm != 1) { + src2_reg = bpf_to_ppc(insn[i - 1].src_reg); + src2_reg_h = src2_reg - 1; + ctx->idx = addrs[i - 1] / 4; + } + /* * addrs[] maps a BPF bytecode address into a real offset from * the start of the body code.
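Two details of the hunk above deserve a note: addrs[i - 1] records the byte offset at which BPF instruction i - 1 was emitted, so resetting ctx->idx to addrs[i - 1] / 4 makes the next EMIT overwrite the now-redundant mov; and a mov with imm == 1 is excluded because that value marks a zero-extension mov inserted by the verifier (see insn_is_zext()), which must keep its own semantics. A C sketch of the fusibility test, equivalent to the condition in the hunk (illustration only; the helper name is invented, and the caller must guarantee i > 0 so that prev exists):

    static bool is_fusable_mov(const struct bpf_insn *prev, const struct bpf_insn *cur)
    {
            u8 pc = BPF_CLASS(prev->code), cc = BPF_CLASS(cur->code);

            return (cc == BPF_ALU || cc == BPF_ALU64) &&
                   (pc == BPF_ALU || pc == BPF_ALU64) &&
                   BPF_OP(prev->code) == BPF_MOV &&
                   BPF_SRC(prev->code) == BPF_X &&
                   prev->dst_reg == cur->dst_reg &&
                   prev->imm != 1;    /* verifier-inserted zext marker */
    }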