From patchwork Thu Jan 4 14:22:23 2024
X-Patchwork-Submitter: Leon Hwang <hffilwlqm@gmail.com>
X-Patchwork-Id: 13511165
X-Patchwork-Delegate: bpf@iogearbox.net
From: Leon Hwang <hffilwlqm@gmail.com>
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
    maciej.fijalkowski@intel.com, jakub@cloudflare.com, iii@linux.ibm.com,
    hengqi.chen@gmail.com, hffilwlqm@gmail.com, kernel-patches-bot@fb.com
Subject: [PATCH bpf-next 1/4] bpf, x64: Use emit_nops() to replace
 memcpy()'ing x86_nops[5]
Date: Thu, 4 Jan 2024 22:22:23 +0800
Message-ID: <20240104142226.87869-2-hffilwlqm@gmail.com>
In-Reply-To: <20240104142226.87869-1-hffilwlqm@gmail.com>
References: <20240104142226.87869-1-hffilwlqm@gmail.com>

To let the next commit reuse emit_nops(), move emit_nops() ahead of
emit_prologue(). While at it, replace the open-coded
memcpy(prog, x86_nops[5], X86_PATCH_SIZE) calls with
emit_nops(&prog, X86_PATCH_SIZE).

Signed-off-by: Leon Hwang <hffilwlqm@gmail.com>
---
 arch/x86/net/bpf_jit_comp.c | 47 +++++++++++++++++--------------------
 1 file changed, 22 insertions(+), 25 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index bdacbb84456d9..fe30b9ebb8de4 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -307,6 +307,25 @@ static void pop_callee_regs(u8 **pprog, bool *callee_regs_used)
 	*pprog = prog;
 }
 
+static void emit_nops(u8 **pprog, int len)
+{
+	u8 *prog = *pprog;
+	int i, noplen;
+
+	while (len > 0) {
+		noplen = len;
+
+		if (noplen > ASM_NOP_MAX)
+			noplen = ASM_NOP_MAX;
+
+		for (i = 0; i < noplen; i++)
+			EMIT1(x86_nops[noplen][i]);
+		len -= noplen;
+	}
+
+	*pprog = prog;
+}
+
 /*
  * Emit the various CFI preambles, see asm/cfi.h and the comments about FineIBT
  * in arch/x86/kernel/alternative.c
@@ -385,8 +404,7 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
 	/* BPF trampoline can be made to work without these nops,
 	 * but let's waste 5 bytes for now and optimize later
 	 */
-	memcpy(prog, x86_nops[5], X86_PATCH_SIZE);
-	prog += X86_PATCH_SIZE;
+	emit_nops(&prog, X86_PATCH_SIZE);
 	if (!ebpf_from_cbpf) {
 		if (tail_call_reachable && !is_subprog)
 			/* When it's the entry of the whole tailcall context,
 			 * zeroing rax means initialising tail_call_cnt.
@@ -692,8 +710,7 @@ static void emit_bpf_tail_call_direct(struct bpf_prog *bpf_prog,
 	if (stack_depth)
 		EMIT3_off32(0x48, 0x81, 0xC4, round_up(stack_depth, 8));
 
-	memcpy(prog, x86_nops[5], X86_PATCH_SIZE);
-	prog += X86_PATCH_SIZE;
+	emit_nops(&prog, X86_PATCH_SIZE);
 
 	/* out: */
 	ctx->tail_call_direct_label = prog - start;
@@ -1055,25 +1072,6 @@ static void detect_reg_usage(struct bpf_insn *insn, int insn_cnt,
 	}
 }
 
-static void emit_nops(u8 **pprog, int len)
-{
-	u8 *prog = *pprog;
-	int i, noplen;
-
-	while (len > 0) {
-		noplen = len;
-
-		if (noplen > ASM_NOP_MAX)
-			noplen = ASM_NOP_MAX;
-
-		for (i = 0; i < noplen; i++)
-			EMIT1(x86_nops[noplen][i]);
-		len -= noplen;
-	}
-
-	*pprog = prog;
-}
-
 /* emit the 3-byte VEX prefix
  *
  * r: same as rex.r, extra bit for ModRM reg field
@@ -2700,8 +2698,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
 		/* remember return value in a stack for bpf prog to access */
 		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
 		im->ip_after_call = image + (prog - (u8 *)rw_image);
-		memcpy(prog, x86_nops[5], X86_PATCH_SIZE);
-		prog += X86_PATCH_SIZE;
+		emit_nops(&prog, X86_PATCH_SIZE);
 	}
 
 	if (fmod_ret->nr_links) {

From patchwork Thu Jan 4 14:22:24 2024
X-Patchwork-Submitter: Leon Hwang <hffilwlqm@gmail.com>
X-Patchwork-Id: 13511166
X-Patchwork-Delegate: bpf@iogearbox.net
From: Leon Hwang <hffilwlqm@gmail.com>
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
    maciej.fijalkowski@intel.com, jakub@cloudflare.com, iii@linux.ibm.com,
    hengqi.chen@gmail.com, hffilwlqm@gmail.com, kernel-patches-bot@fb.com
Subject: [PATCH bpf-next 2/4] bpf, x64: Fix tailcall hierarchy
Date: Thu, 4 Jan 2024 22:22:24 +0800
Message-ID: <20240104142226.87869-3-hffilwlqm@gmail.com>
In-Reply-To: <20240104142226.87869-1-hffilwlqm@gmail.com>
References: <20240104142226.87869-1-hffilwlqm@gmail.com>

Since commit ebf7d1f508a73871 ("bpf, x64: rework pro/epilogue and tailcall
handling in JIT"), tailcalls on x64 work better than before. And since
commit e411901c0b775a3a ("bpf: allow for tailcalls in BPF subprograms for
x64 JIT"), tailcalls are able to run in BPF subprograms on x64.

But what happens when:

1. more than one subprogram is called in a bpf program, and
2. the tailcalls in those subprograms call the bpf program itself?

Because tail_call_cnt is not propagated back from callee to caller, a
tailcall hierarchy comes up, and the MAX_TAIL_CALL_CNT limit does not work
for this case.

Let's take a look at an example (the include lines were mangled in
transit; they are restored here to the usual selftest headers):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include "bpf_legacy.h"

struct {
	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
	__uint(max_entries, 1);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, sizeof(__u32));
} jmp_table SEC(".maps");

int count = 0;

static __noinline
int subprog_tail(struct __sk_buff *skb)
{
	bpf_tail_call_static(skb, &jmp_table, 0);
	return 0;
}

SEC("tc")
int entry(struct __sk_buff *skb)
{
	volatile int ret = 1;

	count++;
	subprog_tail(skb); /* subprog call1 */
	subprog_tail(skb); /* subprog call2 */

	return ret;
}

char __license[] SEC("license") = "GPL";

The entry bpf prog is populated into the 0th slot of jmp_table. Then, what
happens when the entry bpf prog runs? The CPU stalls because of too many
tailcalls, as in this CI run:

https://github.com/kernel-patches/bpf/pull/5807/checks

In those CI results, test_progs failed to run on aarch64 and s390x because
of "rcu: INFO: rcu_sched self-detected stall on CPU".

So, if the CPU did not stall, how many tailcalls would there be in this
case? And why does the MAX_TAIL_CALL_CNT limit not catch it? Let's step
through the execution.

The very first time subprog_tail() is called, it tailcalls the entry bpf
prog. Then subprog_tail() is called a second time at the position "subprog
call1", and it tailcalls the entry bpf prog again. And so on, again and
again.

The very first time the MAX_TAIL_CALL_CNT limit kicks in, subprog_tail()
has been called 34 times at the position "subprog call1", and at that
point tail_call_cnt is 33 in subprog_tail(). The 34th subprog_tail() then
returns to entry() because of the MAX_TAIL_CALL_CNT limit.

In the 34th entry(), after the 34th subprog_tail() at the position
"subprog call1" finishes and before the 1st subprog_tail() at the position
"subprog call2" is called, what is the value of tail_call_cnt in entry()?
It's 33. As we know, tail_call_cnt is pushed on the stack of entry() and
propagates to subprog_tail() via %rax from the stack.
Then, when subprog_tail() at the position "subprog call2" is called for
its first time, tail_call_cnt 33 propagates to it via %rax, and its
tailcall is likewise aborted because tail_call_cnt >= MAX_TAIL_CALL_CNT.

Then, subprog_tail() at the position "subprog call2" ends, and the 34th
entry() ends. Control returns to the 33rd subprog_tail(), the one called
from the position "subprog call1". But wait, what is the value of
tail_call_cnt on the stack of that subprog_tail()? It's 33.

Then, in the 33rd entry(), after the 33rd subprog_tail() at the position
"subprog call1" finishes and before the 2nd subprog_tail() at the position
"subprog call2" is called, what is the value of tail_call_cnt in the
current entry()? It's *32*.

Why not 33? Before stepping into subprog_tail() at the position "subprog
call2" in the 33rd entry(), like stopping a time machine, let's have a
look at the stack memory:

|  STACK  |
+---------+ RBP <-- current rbp
|   ret   | STACK of 33rd entry()
|   tcc   | its value is 32
+---------+ RSP <-- current rsp
|   rip   | STACK of 34th entry()
|   rbp   | reuse the STACK of 33rd subprog_tail() at the position
|   ret   |                                        subprog call1
|   tcc   | its value is 33
+---------+ rsp
|   rip   | STACK of 1st subprog_tail() at the position subprog call2
|   rbp   |
|   tcc   | its value is 33
+---------+ rsp

It is not 33 because tail_call_cnt does not back-propagate from
subprog_tail() to entry().

Then, while stepping into subprog_tail() at the position "subprog call2"
in the 33rd entry():

|  STACK  |
+---------+
|   ret   | STACK of 33rd entry()
|   tcc   | its value is 32
|   rip   |
|   rbp   |
+---------+ RBP <-- current rbp
|   tcc   | its value is 32; STACK of subprog_tail() at the position
+---------+ RSP <-- current rsp            subprog call2

Then, while pausing right after the tailcall in the 2nd subprog_tail() at
the position "subprog call2":

|  STACK  |
+---------+
|   ret   | STACK of 33rd entry()
|   tcc   | its value is 32
|   rip   |
|   rbp   |
+---------+ RBP <-- current rbp
|   tcc   | its value is 33; STACK of subprog_tail() at the position
+---------+ RSP <-- current rsp            subprog call2

Note what happens to tail_call_cnt:

	/*
	 * if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
	 *	goto out;
	 */

The check against MAX_TAIL_CALL_CNT happens first, and only then is
tail_call_cnt incremented. So the current tailcall (32 < 33) is allowed to
run.

Then, entry() is tailcalled, and the stack memory status is:

|  STACK  |
+---------+
|   ret   | STACK of 33rd entry()
|   tcc   | its value is 32
|   rip   |
|   rbp   |
+---------+ RBP <-- current rbp
|   ret   | STACK of 35th entry(); reuse STACK of subprog_tail() at
|   tcc   | its value is 33            the position subprog call2
+---------+ RSP <-- current rsp

So, the tailcalls in the 35th entry() will be aborted. And so on, again
and again. :(

I hope it is now clear why the MAX_TAIL_CALL_CNT limit does not work for
this case.

So, how many tailcalls happen in this case if the CPU does not stall?
Viewed top-down, it is a hierarchy, layer by layer: a hierarchy model with
2+4+8+...+2**33 tailcalls. As a result, if the CPU does not stall, there
will be 2**34 - 2 = 17,179,869,182 tailcalls. That is what stalls the CPU.

And what if there are N subprog_tail() call sites in entry()? By the same
layer model there will be N + N**2 + ... + N**33 = (N**34 - N) / (N - 1)
tailcalls; a small counting sketch for N = 2 follows.
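To double-check the arithmetic, here is a tiny stand-alone user-space C
sketch (an editor's illustration only, not part of the patch) that just
sums the layers described above:

#include <stdio.h>

int main(void)
{
	unsigned long long total = 0, layer = 2;	/* 2 call sites in entry() */
	int depth;

	for (depth = 1; depth <= 33; depth++) {	/* 33 == MAX_TAIL_CALL_CNT */
		total += layer;		/* 2 + 4 + ... + 2**33 */
		layer *= 2;
	}
	printf("%llu\n", total);	/* prints 17179869182, i.e. 2**34 - 2 */
	return 0;
}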
Knowing the issue, how does this patch resolve it? I hope you have the
patience to read the following details, because the code is really hard to
understand directly.

As we know, in the tail call context, tail_call_cnt propagates via the
stack and the %rax register between BPF subprograms and trampolines. How
about propagating a pointer to tail_call_cnt instead of the value of
tail_call_cnt?

When the tail_call_cnt pointer is propagated via the stack and %rax,
tail_call_cnt works like a global variable within the current tail call
context, and the MAX_TAIL_CALL_CNT limit is then able to cover all
tailcalls in that context.

But where is tail_call_cnt stored? It is stored on the stack of the entry
bpf prog's caller, like this:

   |  STACK  |
   |         |
   |   rip   |
+->|   tcc   |
|  |   rip   |
|  |   rbp   |
|  +---------+ RBP
|  |         |
|  |         |
|  |         |
+--| tcc_ptr |
   |   rbx   |
   +---------+ RSP

Note: tcc is tail_call_cnt, tcc_ptr is the tail_call_cnt pointer.

So, how does the entry bpf prog store tail_call_cnt on its caller's stack?
In its prologue, before pushing %rbp, it initialises tail_call_cnt with
"xor eax, eax" and pushes it to the stack with "push rax". Then it makes
%rax point to tail_call_cnt with "mov rax, rsp". Next, it calls the main
part of the entry bpf prog with "call 2". (This is the tricky point.) The
"call" pushes %rip to the stack. At the end of the entry bpf prog's run,
that %rip is popped from the stack; then "pop rcx" pops tail_call_cnt from
the stack too; and finally it "ret"s again. The "pop rcx" and "ret" are
the 2 bytes that "call 2" jumps over.

It does seem invasive to use a "call" here, but it is the key of this
patch. With this "call", tail_call_cnt can be stored on the stack of the
entry bpf prog's caller instead of on the stack of the entry bpf prog
itself. As a result, tail_call_cnt is effectively protected by the "call".
Meanwhile, tcc_ptr does not need to be popped from the stack in the
epilogue of a bpf prog, following the approach of commit d207929d97ea028f
("bpf, x64: Drop "pop %rcx" instruction on BPF JIT epilogue").

And when a tailcall happens, the tail_call_cnt pointer is loaded from the
stack into %rax with "mov rax, qword ptr [rbp - tcc_ptr_off]",
tail_call_cnt is compared with MAX_TAIL_CALL_CNT by "cmp dword ptr [rax],
MAX_TAIL_CALL_CNT", and then incremented by "add dword ptr [rax], 1".
Finally, the "pop rax" pops the tail_call_cnt pointer from the stack into
%rax. A short sketch of these runtime semantics follows.
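The user-space C model below is an editor's illustration, not the JITed
code; the names prog_entry() and prog_main() are hypothetical stand-ins
for the emitted x86 shown in the diff:

#include <stdio.h>

#define MAX_TAIL_CALL_CNT 33

/* models the emitted tailcall check: compare first, then increment */
static int tail_call(int *tcc_ptr)
{
	if ((*tcc_ptr)++ >= MAX_TAIL_CALL_CNT)	/* cmp [rax], 33; add [rax], 1 */
		return -1;			/* jae out */
	return 0;
}

/* models the main part of the entry prog, reached by "call 2" */
static int prog_main(int *tcc_ptr)
{
	return tail_call(tcc_ptr);
}

/* models the new prologue of the entry prog */
static int prog_entry(void)
{
	int tail_call_cnt = 0;		/* xor eax, eax; push rax */
	int *tcc_ptr = &tail_call_cnt;	/* mov rax, rsp */

	return prog_main(tcc_ptr);	/* call 2; ...; pop rcx; ret */
}

int main(void)
{
	printf("%d\n", prog_entry());	/* 0: the first tailcall is allowed */
	return 0;
}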
Next, let's step into some running steps.

After the prologue of entry() has run, the stack of entry() looks like:

   |  STACK  | STACK of entry()'s caller
   |         |
   |   rip   |
+->|   tcc   | its value is 0
|  |   rip   |
|  |   rbp   |
|  +---------+ RBP <-- current rbp
|  |   ret   | STACK of entry()
+--| tcc_ptr |
   |   rbx   | saved regs
   +---------+ RSP <-- current rsp

Then, when subprog_tail() is called for its very first time, the stack
looks like:

   |  STACK  | STACK of entry()'s caller
   |         |
   |   rip   |
+->|   tcc   | its value is 0
|  |   rip   |
|  |   rbp   |
|  +---------+ rbp
|  |   ret   | STACK of entry()
+--| tcc_ptr |
|  |   rbx   | saved regs
|  |   rip   |
|  |   rbp   |
|  +---------+ RBP <-- current rbp
+--| tcc_ptr | STACK of subprog_tail()
   +---------+ RSP <-- current rsp

Then, when subprog_tail() tailcalls entry():

   |  STACK  | STACK of entry()'s caller
   |         |
   |   rip   |
+->|   tcc   | its value is 1
|  |   rip   |
|  |   rbp   |
|  +---------+ rbp
|  |   ret   | STACK of entry()
+--| tcc_ptr |
|  |   rbx   | saved regs
|  |   rip   |
|  |   rbp   |
|  +---------+ RBP <-- current rbp
|  |   ret   | STACK of entry(), reuse STACK of subprog_tail()
+--| tcc_ptr |
   +---------+ RSP <-- current rsp

Then, when that entry() calls subprog_tail():

   |  STACK  | STACK of entry()'s caller
   |         |
   |   rip   |
+->|   tcc   | its value is 1
|  |   rip   |
|  |   rbp   |
|  +---------+ rbp
|  |   ret   | STACK of entry()
+--| tcc_ptr |
|  |   rbx   | saved regs
|  |   rip   |
|  |   rbp   |
|  +---------+ rbp
|  |   ret   | STACK of entry(), reuse STACK of subprog_tail()
+--| tcc_ptr |
|  |   rip   |
|  |   rbp   |
|  +---------+ RBP <-- current rbp
+--| tcc_ptr | STACK of subprog_tail()
   +---------+ RSP <-- current rsp

Then, when subprog_tail() tailcalls entry() again:

   |  STACK  | STACK of entry()'s caller
   |         |
   |   rip   |
+->|   tcc   | its value is 2
|  |   rip   |
|  |   rbp   |
|  +---------+ rbp
|  |   ret   | STACK of entry()
+--| tcc_ptr |
|  |   rbx   | saved regs
|  |   rip   |
|  |   rbp   |
|  +---------+ rbp
|  |   ret   | STACK of entry(), reuse STACK of subprog_tail()
+--| tcc_ptr |
|  |   rip   |
|  |   rbp   |
|  +---------+ RBP <-- current rbp
|  |   ret   | STACK of entry(), reuse STACK of subprog_tail()
+--| tcc_ptr |
   +---------+ RSP <-- current rsp

And so on, again and again. The very first time the MAX_TAIL_CALL_CNT
limit kicks in, subprog_tail() has been called 34 times at the position
"subprog call1", and the stack looks like:

   |  STACK  | STACK of entry()'s caller
   |         |
   |   rip   |
+->|   tcc   | its value is 33
|  |   rip   |
|  |   rbp   |
|  +---------+ rbp
|  |   ret   | STACK of entry()
+--| tcc_ptr |
|  |   rbx   | saved regs
|  |   rip   |
|  |   rbp   |
|  +---------+ rbp
|  |   ret   | STACK of entry(), reuse STACK of subprog_tail()
+--| tcc_ptr |
|  |   rip   |
|  |   rbp   |
|  +---------+ rbp
|  |    *    |
|  |    *    |
|  |    *    |
|  +---------+ RBP <-- current rbp
+--| tcc_ptr | STACK of subprog_tail()
   +---------+ RSP <-- current rsp

From this point on, all further tailcalls are aborted because the shared
tail_call_cnt has reached its MAX_TAIL_CALL_CNT limit of 33.

This is how this patch works. If you have reached this point, the diff
below should be much easier to follow. As a final cross-check before the
code, here is a small simulation of the fixed behaviour.
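Again an editor's user-space model, not the JITed code: it runs the
two-call-site example with a single shared counter and reports entry()
running 34 times, which is the count the selftest in patch 4 asserts.

#include <stdio.h>

#define MAX_TAIL_CALL_CNT 33

static int count;	/* models the "count" global of the example prog */

static void entry(int *tcc_ptr);

static void subprog_tail(int *tcc_ptr)
{
	/* bpf_tail_call_static(): abort once the shared counter hits the cap */
	if ((*tcc_ptr)++ >= MAX_TAIL_CALL_CNT)
		return;
	entry(tcc_ptr);		/* tailcall back into the entry prog */
}

static void entry(int *tcc_ptr)
{
	count++;
	subprog_tail(tcc_ptr);	/* subprog call1 */
	subprog_tail(tcc_ptr);	/* subprog call2 */
}

int main(void)
{
	int tail_call_cnt = 0;	/* lives in the caller's frame, as in the patch */

	entry(&tail_call_cnt);
	printf("entry() ran %d times\n", count);	/* 34 */
	return 0;
}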
Fixes: ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT")
Fixes: e411901c0b77 ("bpf: allow for tailcalls in BPF subprograms for x64 JIT")
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Leon Hwang <hffilwlqm@gmail.com>
---
 arch/x86/net/bpf_jit_comp.c | 40 ++++++++++++++++++++++---------------
 1 file changed, 24 insertions(+), 16 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index fe30b9ebb8de4..67fa337fc2e0c 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -259,7 +259,7 @@ struct jit_context {
 /* Number of bytes emit_patch() needs to generate instructions */
 #define X86_PATCH_SIZE		5
 /* Number of bytes that will be skipped on tailcall */
-#define X86_TAIL_CALL_OFFSET	(11 + ENDBR_INSN_SIZE)
+#define X86_TAIL_CALL_OFFSET	(22 + ENDBR_INSN_SIZE)
 
 static void push_r12(u8 **pprog)
 {
@@ -406,14 +406,21 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
 	 */
 	emit_nops(&prog, X86_PATCH_SIZE);
 	if (!ebpf_from_cbpf) {
-		if (tail_call_reachable && !is_subprog)
+		if (tail_call_reachable && !is_subprog) {
 			/* When it's the entry of the whole tailcall context,
 			 * zeroing rax means initialising tail_call_cnt.
 			 */
-			EMIT2(0x31, 0xC0); /* xor eax, eax */
-		else
-			/* Keep the same instruction layout. */
-			EMIT2(0x66, 0x90); /* nop2 */
+			EMIT2(0x31, 0xC0); /* xor eax, eax */
+			EMIT1(0x50);       /* push rax */
+			/* Make rax as ptr that points to tail_call_cnt. */
+			EMIT3(0x48, 0x89, 0xE0); /* mov rax, rsp */
+			EMIT1_off32(0xE8, 2);    /* call main prog */
+			EMIT1(0x59);       /* pop rcx, get rid of tail_call_cnt */
+			EMIT1(0xC3);       /* ret */
+		} else {
+			/* Keep the same instruction size. */
+			emit_nops(&prog, 13);
+		}
 	}
 	/* Exception callback receives FP as third parameter */
 	if (is_exception_cb) {
@@ -439,6 +446,7 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
 	if (stack_depth)
 		EMIT3_off32(0x48, 0x81, 0xEC, round_up(stack_depth, 8));
 	if (tail_call_reachable)
+		/* Here, rax is tail_call_cnt_ptr. */
 		EMIT1(0x50);         /* push rax */
 	*pprog = prog;
 }
 
@@ -594,7 +602,7 @@ static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
 					u32 stack_depth, u8 *ip,
 					struct jit_context *ctx)
 {
-	int tcc_off = -4 - round_up(stack_depth, 8);
+	int tcc_ptr_off = -8 - round_up(stack_depth, 8);
 	u8 *prog = *pprog, *start = *pprog;
 	int offset;
 
@@ -619,13 +627,12 @@ static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
 	 * if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
 	 *	goto out;
 	 */
-	EMIT2_off32(0x8B, 0x85, tcc_off);           /* mov eax, dword ptr [rbp - tcc_off] */
-	EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT);       /* cmp eax, MAX_TAIL_CALL_CNT */
+	EMIT3_off32(0x48, 0x8B, 0x85, tcc_ptr_off); /* mov rax, qword ptr [rbp - tcc_ptr_off] */
+	EMIT3(0x83, 0x38, MAX_TAIL_CALL_CNT);       /* cmp dword ptr [rax], MAX_TAIL_CALL_CNT */
 
 	offset = ctx->tail_call_indirect_label - (prog + 2 - start);
 	EMIT2(X86_JAE, offset);                     /* jae out */
-	EMIT3(0x83, 0xC0, 0x01);                    /* add eax, 1 */
-	EMIT2_off32(0x89, 0x85, tcc_off);           /* mov dword ptr [rbp - tcc_off], eax */
+	EMIT3(0x83, 0x00, 0x01);                    /* add dword ptr [rax], 1 */
 
 	/* prog = array->ptrs[index]; */
 	EMIT4_off32(0x48, 0x8B, 0x8C, 0xD6,         /* mov rcx, [rsi + rdx * 8 + offsetof(...)] */
@@ -647,6 +654,7 @@ static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
 		pop_callee_regs(&prog, callee_regs_used);
 	}
 
+	/* pop tail_call_cnt_ptr */
 	EMIT1(0x58);                                /* pop rax */
 	if (stack_depth)
 		EMIT3_off32(0x48, 0x81, 0xC4,       /* add rsp, sd */
@@ -675,7 +683,7 @@ static void emit_bpf_tail_call_direct(struct bpf_prog *bpf_prog,
 				      bool *callee_regs_used, u32 stack_depth,
 				      struct jit_context *ctx)
 {
-	int tcc_off = -4 - round_up(stack_depth, 8);
+	int tcc_ptr_off = -8 - round_up(stack_depth, 8);
 	u8 *prog = *pprog, *start = *pprog;
 	int offset;
 
@@ -683,13 +691,12 @@ static void emit_bpf_tail_call_direct(struct bpf_prog *bpf_prog,
 	 * if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
 	 *	goto out;
 	 */
-	EMIT2_off32(0x8B, 0x85, tcc_off);           /* mov eax, dword ptr [rbp - tcc_off] */
-	EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT);       /* cmp eax, MAX_TAIL_CALL_CNT */
+	EMIT3_off32(0x48, 0x8B, 0x85, tcc_ptr_off); /* mov rax, qword ptr [rbp - tcc_ptr_off] */
+	EMIT3(0x83, 0x38, MAX_TAIL_CALL_CNT);       /* cmp dword ptr [rax], MAX_TAIL_CALL_CNT */
 
 	offset = ctx->tail_call_direct_label - (prog + 2 - start);
 	EMIT2(X86_JAE, offset);                     /* jae out */
-	EMIT3(0x83, 0xC0, 0x01);                    /* add eax, 1 */
-	EMIT2_off32(0x89, 0x85, tcc_off);           /* mov dword ptr [rbp - tcc_off], eax */
+	EMIT3(0x83, 0x00, 0x01);                    /* add dword ptr [rax], 1 */
 
 	poke->tailcall_bypass = ip + (prog - start);
 	poke->adj_off = X86_TAIL_CALL_OFFSET;
@@ -706,6 +713,7 @@ static void emit_bpf_tail_call_direct(struct bpf_prog *bpf_prog,
 		pop_callee_regs(&prog, callee_regs_used);
 	}
 
+	/* pop tail_call_cnt_ptr */
 	EMIT1(0x58);                                /* pop rax */
 	if (stack_depth)
 		EMIT3_off32(0x48, 0x81, 0xC4, round_up(stack_depth, 8));

From patchwork Thu Jan 4 14:22:25 2024
X-Patchwork-Submitter: Leon Hwang <hffilwlqm@gmail.com>
X-Patchwork-Id: 13511167
X-Patchwork-Delegate: bpf@iogearbox.net
From: Leon Hwang <hffilwlqm@gmail.com>
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
    maciej.fijalkowski@intel.com, jakub@cloudflare.com, iii@linux.ibm.com,
    hengqi.chen@gmail.com, hffilwlqm@gmail.com, kernel-patches-bot@fb.com
Subject: [PATCH bpf-next 3/4] bpf, x64: Rename RESTORE_TAIL_CALL_CNT() to
 LOAD_TAIL_CALL_CNT_PTR()
Date: Thu, 4 Jan 2024 22:22:25 +0800
Message-ID: <20240104142226.87869-4-hffilwlqm@gmail.com>
In-Reply-To: <20240104142226.87869-1-hffilwlqm@gmail.com>
References: <20240104142226.87869-1-hffilwlqm@gmail.com>

With the previous commit, %rax is used to propagate the tail_call_cnt
pointer instead of the tail_call_cnt value, so LOAD_TAIL_CALL_CNT_PTR() is
the more accurate name. A small sketch of what the macro loads follows.
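Below is an editor's user-space mirror of the load the macro emits (the
helper name and the fake frame are hypothetical): the slot just below the
rounded BPF stack area holds a pointer to the counter, not the counter
itself, hence the rename.

#include <stdint.h>
#include <stdio.h>

/* hypothetical C mirror of: EMIT3_off32(0x48, 0x8B, 0x85, -round_up(stack, 8) - 8) */
static uint32_t *load_tail_call_cnt_ptr(uint8_t *rbp, uint32_t stack_depth)
{
	uint32_t off = ((stack_depth + 7) & ~7U) + 8;	/* round_up(stack, 8) + 8 */

	/* mov rax, qword ptr [rbp - off] */
	return *(uint32_t **)(rbp - off);
}

int main(void)
{
	uint8_t frame[64] __attribute__((aligned(8))) = {0};
	uint32_t tail_call_cnt = 0;
	uint8_t *rbp = frame + sizeof(frame);
	uint32_t stack_depth = 16;

	/* what the prologue's "push rax" left below the stack area */
	*(uint32_t **)(rbp - (((stack_depth + 7) & ~7U) + 8)) = &tail_call_cnt;

	(*load_tail_call_cnt_ptr(rbp, stack_depth))++;	/* add dword ptr [rax], 1 */
	printf("%u\n", tail_call_cnt);			/* 1 */
	return 0;
}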
Signed-off-by: Leon Hwang <hffilwlqm@gmail.com>
---
 arch/x86/net/bpf_jit_comp.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 67fa337fc2e0c..4065bdcc5b2a4 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1142,7 +1142,7 @@ static void emit_shiftx(u8 **pprog, u32 dst_reg, u8 src_reg, bool is64, u8 op)
 #define INSN_SZ_DIFF (((addrs[i] - addrs[i - 1]) - (prog - temp)))
 
 /* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
-#define RESTORE_TAIL_CALL_CNT(stack)				\
+#define LOAD_TAIL_CALL_CNT_PTR(stack)				\
 	EMIT3_off32(0x48, 0x8B, 0x85, -round_up(stack, 8) - 8)
 
 static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image,
@@ -1762,7 +1762,7 @@ st:			if (is_imm8(insn->off))
 			func = (u8 *) __bpf_call_base + imm32;
 			if (tail_call_reachable) {
-				RESTORE_TAIL_CALL_CNT(bpf_prog->aux->stack_depth);
+				LOAD_TAIL_CALL_CNT_PTR(bpf_prog->aux->stack_depth);
 				if (!imm32)
 					return -EINVAL;
 				offs = 7 + x86_call_depth_emit_accounting(&prog, func);
@@ -2558,7 +2558,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
 	 *                     [ ...             ]
 	 *                     [ stack_arg2      ]
 	 * RBP - arg_stack_off [ stack_arg1      ]
-	 * RSP                 [ tail_call_cnt ] BPF_TRAMP_F_TAIL_CALL_CTX
+	 * RSP                 [ tail_call_cnt_ptr ] BPF_TRAMP_F_TAIL_CALL_CTX
 	 */
 
 	/* room for return value of orig_call or fentry prog */
@@ -2686,12 +2686,11 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
 		restore_regs(m, &prog, regs_off);
 		save_args(m, &prog, arg_stack_off, true);
 
-		if (flags & BPF_TRAMP_F_TAIL_CALL_CTX) {
-			/* Before calling the original function, restore the
-			 * tail_call_cnt from stack to rax.
+		if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
+			/* Before calling the original function, load the
+			 * tail_call_cnt_ptr to rax.
 			 */
-			RESTORE_TAIL_CALL_CNT(stack_size);
-		}
+			LOAD_TAIL_CALL_CNT_PTR(stack_size);
 
 		if (flags & BPF_TRAMP_F_ORIG_STACK) {
 			emit_ldx(&prog, BPF_DW, BPF_REG_6, BPF_REG_FP, 8);
@@ -2749,10 +2748,10 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
 			goto cleanup;
 		}
 	} else if (flags & BPF_TRAMP_F_TAIL_CALL_CTX) {
-		/* Before running the original function, restore the
-		 * tail_call_cnt from stack to rax.
+		/* Before running the original function, load the
+		 * tail_call_cnt_ptr to rax.
 		 */
-		RESTORE_TAIL_CALL_CNT(stack_size);
+		LOAD_TAIL_CALL_CNT_PTR(stack_size);
 	}
 
 	/* restore return value of orig_call or fentry prog back into RAX */

From patchwork Thu Jan 4 14:22:26 2024
X-Patchwork-Submitter: Leon Hwang <hffilwlqm@gmail.com>
X-Patchwork-Id: 13511168
X-Patchwork-Delegate: bpf@iogearbox.net
From: Leon Hwang <hffilwlqm@gmail.com>
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
    maciej.fijalkowski@intel.com, jakub@cloudflare.com, iii@linux.ibm.com,
    hengqi.chen@gmail.com, hffilwlqm@gmail.com, kernel-patches-bot@fb.com
Subject: [PATCH bpf-next 4/4] selftests/bpf: Add testcases for tailcall
 hierarchy fixing
Date: Thu, 4 Jan 2024 22:22:26 +0800
Message-ID: <20240104142226.87869-5-hffilwlqm@gmail.com>
In-Reply-To: <20240104142226.87869-1-hffilwlqm@gmail.com>
References: <20240104142226.87869-1-hffilwlqm@gmail.com>

Add some test cases to confirm that the tailcall hierarchy issue has been
fixed:

tools/testing/selftests/bpf/test_progs -t tailcalls
305/18  tailcalls/tailcall_bpf2bpf_hierarchy_1:OK
305/19  tailcalls/tailcall_bpf2bpf_hierarchy_fentry:OK
305/20  tailcalls/tailcall_bpf2bpf_hierarchy_fexit:OK
305/21  tailcalls/tailcall_bpf2bpf_hierarchy_fentry_fexit:OK
305/22  tailcalls/tailcall_bpf2bpf_hierarchy_2:OK
305/23  tailcalls/tailcall_bpf2bpf_hierarchy_3:OK
305     tailcalls:OK
Summary: 1/23 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Leon Hwang <hffilwlqm@gmail.com>
---
 .../selftests/bpf/prog_tests/tailcalls.c      | 418 ++++++++++++++++++
 .../bpf/progs/tailcall_bpf2bpf_hierarchy1.c   |  34 ++
 .../bpf/progs/tailcall_bpf2bpf_hierarchy2.c   |  55 +++
 .../bpf/progs/tailcall_bpf2bpf_hierarchy3.c   |  46 ++
 4 files changed, 553 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy1.c
 create mode 100644 tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy2.c
 create mode 100644 tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy3.c

diff --git a/tools/testing/selftests/bpf/prog_tests/tailcalls.c b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
index 59993fc9c0d7e..e796328faf2e6 100644
--- a/tools/testing/selftests/bpf/prog_tests/tailcalls.c
+++ b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
@@ -1187,6 +1187,412 @@ static void test_tailcall_poke(void)
 	tailcall_poke__destroy(call);
 }
 
+static void test_tailcall_hierarchy_count(const char *which, bool test_fentry,
+					  bool test_fexit)
+{
+	int err, map_fd, prog_fd, main_data_fd, fentry_data_fd, fexit_data_fd, i, val;
+	struct bpf_object *obj = NULL, *fentry_obj = NULL, *fexit_obj = NULL;
+	struct bpf_link *fentry_link = NULL, *fexit_link = NULL;
+	struct bpf_map *prog_array, *data_map;
+	struct bpf_program *prog;
+	char buff[128] = {};
+
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		    .data_in = buff,
+		    .data_size_in = sizeof(buff),
+		    .repeat = 1,
+	);
+
+	err = bpf_prog_test_load(which, BPF_PROG_TYPE_SCHED_CLS, &obj,
+				 &prog_fd);
+	if (!ASSERT_OK(err, "load obj"))
+		return;
+
+	prog = bpf_object__find_program_by_name(obj, "entry");
+	if (!ASSERT_OK_PTR(prog, "find entry prog"))
+		goto out;
+
+	prog_fd = bpf_program__fd(prog);
+	if (!ASSERT_GE(prog_fd, 0, "prog_fd"))
+		goto out;
+
+	prog_array = bpf_object__find_map_by_name(obj, "jmp_table");
+	if (!ASSERT_OK_PTR(prog_array, "find jmp_table"))
+		goto out;
+
+	map_fd = bpf_map__fd(prog_array);
+	if (!ASSERT_GE(map_fd, 0, "map_fd"))
+		goto out;
+
+	i = 0;
+	err = bpf_map_update_elem(map_fd, &i, &prog_fd, BPF_ANY);
+	if (!ASSERT_OK(err, "update jmp_table"))
+		goto out;
+
+	if (test_fentry) {
+		fentry_obj = bpf_object__open_file("tailcall_bpf2bpf_fentry.bpf.o",
+						   NULL);
+		if (!ASSERT_OK_PTR(fentry_obj, "open fentry_obj file"))
+			goto out;
+
+		prog = bpf_object__find_program_by_name(fentry_obj, "fentry");
+		if (!ASSERT_OK_PTR(prog, "find fentry prog"))
+			goto out;
+
+		err = bpf_program__set_attach_target(prog, prog_fd,
+						     "subprog_tail");
+		if (!ASSERT_OK(err, "set_attach_target subprog_tail"))
+			goto out;
+
+		err = bpf_object__load(fentry_obj);
+		if (!ASSERT_OK(err, "load fentry_obj"))
+			goto out;
+
+		fentry_link = bpf_program__attach_trace(prog);
+		if (!ASSERT_OK_PTR(fentry_link, "attach_trace"))
+			goto out;
+	}
+
+	if (test_fexit) {
+		fexit_obj = bpf_object__open_file("tailcall_bpf2bpf_fexit.bpf.o",
+						  NULL);
+		if (!ASSERT_OK_PTR(fexit_obj, "open fexit_obj file"))
+			goto out;
+
+		prog = bpf_object__find_program_by_name(fexit_obj, "fexit");
+		if (!ASSERT_OK_PTR(prog, "find fexit prog"))
+			goto out;
+
+		err = bpf_program__set_attach_target(prog, prog_fd,
+						     "subprog_tail");
+		if (!ASSERT_OK(err, "set_attach_target subprog_tail"))
+			goto out;
+
+		err = bpf_object__load(fexit_obj);
+		if (!ASSERT_OK(err, "load fexit_obj"))
+			goto out;
+
+		fexit_link = bpf_program__attach_trace(prog);
+		if (!ASSERT_OK_PTR(fexit_link, "attach_trace"))
+			goto out;
+	}
+
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "tailcall");
+	ASSERT_EQ(topts.retval, 1, "tailcall retval");
+
+	data_map = bpf_object__find_map_by_name(obj, ".bss");
+	if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map),
+			  "find data_map"))
+		goto out;
+
+	main_data_fd = bpf_map__fd(data_map);
+	if (!ASSERT_GE(main_data_fd, 0, "main_data_fd"))
+		goto out;
+
+	i = 0;
+	err = bpf_map_lookup_elem(main_data_fd, &i, &val);
+	ASSERT_OK(err, "tailcall count");
+	ASSERT_EQ(val, 34, "tailcall count");
+
+	if (test_fentry) {
+		data_map = bpf_object__find_map_by_name(fentry_obj, ".bss");
+		if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map),
+				  "find tailcall_bpf2bpf_fentry.bss map"))
+			goto out;
+
+		fentry_data_fd = bpf_map__fd(data_map);
+		if (!ASSERT_GE(fentry_data_fd, 0,
+			       "find tailcall_bpf2bpf_fentry.bss map fd"))
+			goto out;
+
+		i = 0;
+		err = bpf_map_lookup_elem(fentry_data_fd, &i, &val);
+		ASSERT_OK(err, "fentry count");
+		ASSERT_EQ(val, 68, "fentry count");
+	}
+
+	if (test_fexit) {
+		data_map = bpf_object__find_map_by_name(fexit_obj, ".bss");
+		if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map),
+				  "find tailcall_bpf2bpf_fexit.bss map"))
+			goto out;
+
+		fexit_data_fd = bpf_map__fd(data_map);
+		if (!ASSERT_GE(fexit_data_fd, 0,
+			       "find tailcall_bpf2bpf_fexit.bss map fd"))
+			goto out;
+
+		i = 0;
+		err = bpf_map_lookup_elem(fexit_data_fd, &i, &val);
+		ASSERT_OK(err, "fexit count");
+		ASSERT_EQ(val, 68, "fexit count");
+	}
+
+	i = 0;
+	err = bpf_map_delete_elem(map_fd, &i);
+	if (!ASSERT_OK(err, "delete_elem from jmp_table"))
+		goto out;
+
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "tailcall");
+	ASSERT_EQ(topts.retval, 1, "tailcall retval");
+
+	i = 0;
+	err = bpf_map_lookup_elem(main_data_fd, &i, &val);
+	ASSERT_OK(err, "tailcall count");
+	ASSERT_EQ(val, 35, "tailcall count");
+
+	if (test_fentry) {
+		i = 0;
+		err = bpf_map_lookup_elem(fentry_data_fd, &i, &val);
+		ASSERT_OK(err, "fentry count");
+		ASSERT_EQ(val, 70, "fentry count");
+	}
+
+	if (test_fexit) {
+		i = 0;
+		err = bpf_map_lookup_elem(fexit_data_fd, &i, &val);
+		ASSERT_OK(err, "fexit count");
+		ASSERT_EQ(val, 70, "fexit count");
+	}
+
+out:
+	bpf_link__destroy(fentry_link);
+	bpf_link__destroy(fexit_link);
+	bpf_object__close(fentry_obj);
+	bpf_object__close(fexit_obj);
+	bpf_object__close(obj);
+}
+
+/* test_tailcall_bpf2bpf_hierarchy_1 checks that the count value of the tail
+ * call limit enforcement matches with expectations when tailcalls are preceded
+ * with two bpf2bpf calls.
+ *
+ *         subprog --tailcall-> entry
+ * entry <
+ *         subprog --tailcall-> entry
+ */
+static void test_tailcall_bpf2bpf_hierarchy_1(void)
+{
+	test_tailcall_hierarchy_count("tailcall_bpf2bpf_hierarchy1.bpf.o",
+				      false, false);
+}
+
+/* test_tailcall_bpf2bpf_hierarchy_fentry checks that the count value of the
+ * tail call limit enforcement matches with expectations when tailcalls are
+ * preceded with two bpf2bpf calls, and the two subprogs are traced by fentry.
+ */
+static void test_tailcall_bpf2bpf_hierarchy_fentry(void)
+{
+	test_tailcall_hierarchy_count("tailcall_bpf2bpf_hierarchy1.bpf.o",
+				      true, false);
+}
+
+/* test_tailcall_bpf2bpf_hierarchy_fexit checks that the count value of the tail
+ * call limit enforcement matches with expectations when tailcalls are preceded
+ * with two bpf2bpf calls, and the two subprogs are traced by fexit.
+ */
+static void test_tailcall_bpf2bpf_hierarchy_fexit(void)
+{
+	test_tailcall_hierarchy_count("tailcall_bpf2bpf_hierarchy1.bpf.o",
+				      false, true);
+}
+
+/* test_tailcall_bpf2bpf_hierarchy_fentry_fexit checks that the count value of
+ * the tail call limit enforcement matches with expectations when tailcalls are
+ * preceded with two bpf2bpf calls, and the two subprogs are traced by both
+ * fentry and fexit.
+ */
+static void test_tailcall_bpf2bpf_hierarchy_fentry_fexit(void)
+{
+	test_tailcall_hierarchy_count("tailcall_bpf2bpf_hierarchy1.bpf.o",
+				      true, true);
+}
+
+/* test_tailcall_bpf2bpf_hierarchy_2 checks that the count value of the tail
+ * call limit enforcement matches with expectations:
+ *
+ *         subprog_tail0 --tailcall-> classifier_0 -> subprog_tail0
+ * entry <
+ *         subprog_tail1 --tailcall-> classifier_1 -> subprog_tail1
+ */
+static void test_tailcall_bpf2bpf_hierarchy_2(void)
+{
+	int err, map_fd, prog_fd, data_fd, main_fd, i, val[2];
+	struct bpf_map *prog_array, *data_map;
+	struct bpf_object *obj = NULL;
+	struct bpf_program *prog;
+	char buff[128] = {};
+
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		    .data_in = buff,
+		    .data_size_in = sizeof(buff),
+		    .repeat = 1,
+	);
+
+	err = bpf_prog_test_load("tailcall_bpf2bpf_hierarchy2.bpf.o",
+				 BPF_PROG_TYPE_SCHED_CLS,
+				 &obj, &prog_fd);
+	if (!ASSERT_OK(err, "load obj"))
+		return;
+
+	prog = bpf_object__find_program_by_name(obj, "entry");
+	if (!ASSERT_OK_PTR(prog, "find entry prog"))
+		goto out;
+
+	main_fd = bpf_program__fd(prog);
+	if (!ASSERT_GE(main_fd, 0, "main_fd"))
+		goto out;
+
+	prog_array = bpf_object__find_map_by_name(obj, "jmp_table");
+	if (!ASSERT_OK_PTR(prog_array, "find jmp_table map"))
+		goto out;
+
+	map_fd = bpf_map__fd(prog_array);
+	if (!ASSERT_GE(map_fd, 0, "find jmp_table map fd"))
+		goto out;
+
+	prog = bpf_object__find_program_by_name(obj, "classifier_0");
+	if (!ASSERT_OK_PTR(prog, "find classifier_0 prog"))
+		goto out;
+
+	prog_fd = bpf_program__fd(prog);
+	if (!ASSERT_GE(prog_fd, 0, "find classifier_0 prog fd"))
+		goto out;
+
+	i = 0;
+	err = bpf_map_update_elem(map_fd, &i, &prog_fd, BPF_ANY);
+	if (!ASSERT_OK(err, "update jmp_table"))
+		goto out;
+
+	prog = bpf_object__find_program_by_name(obj, "classifier_1");
+	if (!ASSERT_OK_PTR(prog, "find classifier_1 prog"))
+		goto out;
+
+	prog_fd = bpf_program__fd(prog);
+	if (!ASSERT_GE(prog_fd, 0, "find classifier_1 prog fd"))
prog fd")) + goto out; + + i = 1; + err = bpf_map_update_elem(map_fd, &i, &prog_fd, BPF_ANY); + if (!ASSERT_OK(err, "update jmp_table")) + goto out; + + err = bpf_prog_test_run_opts(main_fd, &topts); + ASSERT_OK(err, "tailcall"); + ASSERT_EQ(topts.retval, 1, "tailcall retval"); + + data_map = bpf_object__find_map_by_name(obj, ".bss"); + if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map), + "find .bss map")) + goto out; + + data_fd = bpf_map__fd(data_map); + if (!ASSERT_GE(data_fd, 0, "find .bss map fd")) + goto out; + + i = 0; + err = bpf_map_lookup_elem(data_fd, &i, &val); + ASSERT_OK(err, "tailcall counts"); + ASSERT_EQ(val[0], 33, "tailcall count0"); + ASSERT_EQ(val[1], 0, "tailcall count1"); + +out: + bpf_object__close(obj); +} + +/* test_tailcall_bpf2bpf_hierarchy_3 checks that the count value of the tail + * call limit enforcement matches with expectations: + * + * subprog with jmp_table0 to classifier_0 + * entry --tailcall-> classifier_0 < + * subprog with jmp_table1 to classifier_0 + */ +static void test_tailcall_bpf2bpf_hierarchy_3(void) +{ + int err, map_fd, prog_fd, data_fd, main_fd, i, val; + struct bpf_map *prog_array, *data_map; + struct bpf_object *obj = NULL; + struct bpf_program *prog; + char buff[128] = {}; + + LIBBPF_OPTS(bpf_test_run_opts, topts, + .data_in = buff, + .data_size_in = sizeof(buff), + .repeat = 1, + ); + + err = bpf_prog_test_load("tailcall_bpf2bpf_hierarchy3.bpf.o", + BPF_PROG_TYPE_SCHED_CLS, + &obj, &prog_fd); + if (!ASSERT_OK(err, "load obj")) + return; + + prog = bpf_object__find_program_by_name(obj, "entry"); + if (!ASSERT_OK_PTR(prog, "find entry prog")) + goto out; + + main_fd = bpf_program__fd(prog); + if (!ASSERT_GE(main_fd, 0, "main_fd")) + goto out; + + prog_array = bpf_object__find_map_by_name(obj, "jmp_table0"); + if (!ASSERT_OK_PTR(prog_array, "find jmp_table0 map")) + goto out; + + map_fd = bpf_map__fd(prog_array); + if (!ASSERT_GE(map_fd, 0, "find jmp_table0 map fd")) + goto out; + + prog = bpf_object__find_program_by_name(obj, "classifier_0"); + if (!ASSERT_OK_PTR(prog, "find classifier_0 prog")) + goto out; + + prog_fd = bpf_program__fd(prog); + if (!ASSERT_GE(prog_fd, 0, "find classifier_0 prog fd")) + goto out; + + i = 0; + err = bpf_map_update_elem(map_fd, &i, &prog_fd, BPF_ANY); + if (!ASSERT_OK(err, "update jmp_table0")) + goto out; + + prog_array = bpf_object__find_map_by_name(obj, "jmp_table1"); + if (!ASSERT_OK_PTR(prog_array, "find jmp_table1 map")) + goto out; + + map_fd = bpf_map__fd(prog_array); + if (!ASSERT_GE(map_fd, 0, "find jmp_table1 map fd")) + goto out; + + i = 0; + err = bpf_map_update_elem(map_fd, &i, &prog_fd, BPF_ANY); + if (!ASSERT_OK(err, "update jmp_table1")) + goto out; + + err = bpf_prog_test_run_opts(main_fd, &topts); + ASSERT_OK(err, "tailcall"); + ASSERT_EQ(topts.retval, 1, "tailcall retval"); + + data_map = bpf_object__find_map_by_name(obj, ".bss"); + if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map), + "find .bss map")) + goto out; + + data_fd = bpf_map__fd(data_map); + if (!ASSERT_GE(data_fd, 0, "find .bss map fd")) + goto out; + + i = 0; + err = bpf_map_lookup_elem(data_fd, &i, &val); + ASSERT_OK(err, "tailcall count"); + ASSERT_EQ(val, 33, "tailcall count"); + +out: + bpf_object__close(obj); +} + void test_tailcalls(void) { if (test__start_subtest("tailcall_1")) @@ -1223,4 +1629,16 @@ void test_tailcalls(void) test_tailcall_bpf2bpf_fentry_entry(); if (test__start_subtest("tailcall_poke")) test_tailcall_poke(); + if (test__start_subtest("tailcall_bpf2bpf_hierarchy_1")) + 
+		test_tailcall_bpf2bpf_hierarchy_1();
+	if (test__start_subtest("tailcall_bpf2bpf_hierarchy_fentry"))
+		test_tailcall_bpf2bpf_hierarchy_fentry();
+	if (test__start_subtest("tailcall_bpf2bpf_hierarchy_fexit"))
+		test_tailcall_bpf2bpf_hierarchy_fexit();
+	if (test__start_subtest("tailcall_bpf2bpf_hierarchy_fentry_fexit"))
+		test_tailcall_bpf2bpf_hierarchy_fentry_fexit();
+	if (test__start_subtest("tailcall_bpf2bpf_hierarchy_2"))
+		test_tailcall_bpf2bpf_hierarchy_2();
+	if (test__start_subtest("tailcall_bpf2bpf_hierarchy_3"))
+		test_tailcall_bpf2bpf_hierarchy_3();
 }
diff --git a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy1.c b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy1.c
new file mode 100644
index 0000000000000..0bfbb7c9637b7
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy1.c
@@ -0,0 +1,34 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_legacy.h"
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+	__uint(max_entries, 1);
+	__uint(key_size, sizeof(__u32));
+	__uint(value_size, sizeof(__u32));
+} jmp_table SEC(".maps");
+
+int count = 0;
+
+static __noinline
+int subprog_tail(struct __sk_buff *skb)
+{
+	bpf_tail_call_static(skb, &jmp_table, 0);
+	return 0;
+}
+
+SEC("tc")
+int entry(struct __sk_buff *skb)
+{
+	volatile int ret = 1;
+
+	count++;
+	subprog_tail(skb);
+	subprog_tail(skb);
+
+	return ret;
+}
+
+char __license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy2.c b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy2.c
new file mode 100644
index 0000000000000..b84541546082e
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy2.c
@@ -0,0 +1,55 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_legacy.h"
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+	__uint(max_entries, 2);
+	__uint(key_size, sizeof(__u32));
+	__uint(value_size, sizeof(__u32));
+} jmp_table SEC(".maps");
+
+int count0 = 0;
+int count1 = 0;
+
+static __noinline
+int subprog_tail0(struct __sk_buff *skb)
+{
+	bpf_tail_call_static(skb, &jmp_table, 0);
+	return 0;
+}
+
+SEC("tc")
+int classifier_0(struct __sk_buff *skb)
+{
+	count0++;
+	subprog_tail0(skb);
+	return 0;
+}
+
+static __noinline
+int subprog_tail1(struct __sk_buff *skb)
+{
+	bpf_tail_call_static(skb, &jmp_table, 1);
+	return 0;
+}
+
+SEC("tc")
+int classifier_1(struct __sk_buff *skb)
+{
+	count1++;
+	subprog_tail1(skb);
+	return 0;
+}
+
+SEC("tc")
+int entry(struct __sk_buff *skb)
+{
+	subprog_tail0(skb);
+	subprog_tail1(skb);
+
+	return 1;
+}
+
+char __license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy3.c b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy3.c
new file mode 100644
index 0000000000000..6398a1d277fc7
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy3.c
@@ -0,0 +1,46 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_legacy.h"
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+	__uint(max_entries, 1);
+	__uint(key_size, sizeof(__u32));
+	__uint(value_size, sizeof(__u32));
+} jmp_table0 SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+	__uint(max_entries, 1);
+	__uint(key_size, sizeof(__u32));
+	__uint(value_size, sizeof(__u32));
+} jmp_table1 SEC(".maps");
+
+int count = 0;
+
+static __noinline
+int subprog_tail(struct __sk_buff *skb, void *jmp_table)
+{
+	bpf_tail_call_static(skb, jmp_table, 0);
+	return 0;
+}
+
+SEC("tc")
+int classifier_0(struct __sk_buff *skb)
+{
+	count++;
+	subprog_tail(skb, &jmp_table0);
+	subprog_tail(skb, &jmp_table1);
+	return 1;
+}
+
+SEC("tc")
+int entry(struct __sk_buff *skb)
+{
+	bpf_tail_call_static(skb, &jmp_table0, 0);
+
+	return 0;
+}
+
+char __license[] SEC("license") = "GPL";