From patchwork Mon Apr 11 17:34:26 2022
X-Patchwork-Submitter: Kui-Feng Lee
X-Patchwork-Id: 12809471
X-Patchwork-Delegate: bpf@iogearbox.net
From: Kui-Feng Lee <kuifeng@fb.com>
Subject: [PATCH bpf-next v4 2/5] bpf, x86: Create bpf_tramp_run_ctx on the
 caller thread's stack
Date: Mon, 11 Apr 2022 10:34:26 -0700
Message-ID: <20220411173429.4139609-3-kuifeng@fb.com>
In-Reply-To: <20220411173429.4139609-1-kuifeng@fb.com>
References: <20220411173429.4139609-1-kuifeng@fb.com>
X-Mailing-List: bpf@vger.kernel.org

BPF trampolines will create a bpf_tramp_run_ctx, a bpf_run_ctx, on the
caller thread's stack and set/reset the current bpf_run_ctx
before/after calling a bpf_prog.
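
For readers new to the run-ctx mechanism, the stand-alone sketch below
models what the generated code in this patch does: the caller allocates
a bpf_tramp_run_ctx on its own stack, the enter helper installs it as
the current run context and remembers the previous one in
saved_run_ctx, and the exit helper restores the previous one. This is a
user-space model, not kernel code; prog_enter()/prog_exit() and
current_ctx are illustrative stand-ins for
__bpf_prog_enter()/__bpf_prog_exit() and current->bpf_ctx.

    /* Build: gcc -std=gnu11 sketch.c (empty structs are a GNU C extension,
     * as in the kernel).
     */
    #include <assert.h>
    #include <stdint.h>

    struct bpf_run_ctx {};	/* empty tag type, as in the kernel */

    struct bpf_tramp_run_ctx {
    	struct bpf_run_ctx run_ctx;
    	uint64_t bpf_cookie;
    	struct bpf_run_ctx *saved_run_ctx;
    };

    /* stand-in for current->bpf_ctx, which is per-task in the kernel */
    static _Thread_local struct bpf_run_ctx *current_ctx;

    static struct bpf_run_ctx *bpf_set_run_ctx(struct bpf_run_ctx *new_ctx)
    {
    	struct bpf_run_ctx *old_ctx = current_ctx;

    	current_ctx = new_ctx;
    	return old_ctx;
    }

    static void bpf_reset_run_ctx(struct bpf_run_ctx *old_ctx)
    {
    	current_ctx = old_ctx;
    }

    /* what the trampoline does before calling the bpf_prog */
    static void prog_enter(struct bpf_tramp_run_ctx *run_ctx)
    {
    	run_ctx->saved_run_ctx = bpf_set_run_ctx(&run_ctx->run_ctx);
    }

    /* what the trampoline does after the bpf_prog returns */
    static void prog_exit(struct bpf_tramp_run_ctx *run_ctx)
    {
    	bpf_reset_run_ctx(run_ctx->saved_run_ctx);
    }

    int main(void)
    {
    	/* run_ctx lives on the caller's stack, as in the trampoline */
    	struct bpf_tramp_run_ctx outer = { .bpf_cookie = 1 };
    	struct bpf_tramp_run_ctx inner = { .bpf_cookie = 2 };

    	prog_enter(&outer);
    	assert(current_ctx == &outer.run_ctx);

    	prog_enter(&inner);	/* nested entry remembers &outer.run_ctx */
    	assert(current_ctx == &inner.run_ctx);

    	prog_exit(&inner);	/* unwinds back to the outer context */
    	assert(current_ctx == &outer.run_ctx);

    	prog_exit(&outer);
    	assert(current_ctx == NULL);
    	return 0;
    }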
Signed-off-by: Kui-Feng Lee <kuifeng@fb.com>
---
 arch/x86/net/bpf_jit_comp.c | 55 +++++++++++++++++++++++++++++++++++++
 include/linux/bpf.h         | 17 +++++++++---
 kernel/bpf/syscall.c        |  4 +--
 kernel/bpf/trampoline.c     | 23 +++++++++++++---
 4 files changed, 89 insertions(+), 10 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 4dcc0b1ac770..0f521be68f7b 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1766,10 +1766,26 @@ static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
 {
 	u8 *prog = *pprog;
 	u8 *jmp_insn;
+	int ctx_cookie_off = offsetof(struct bpf_tramp_run_ctx, bpf_cookie);
 	struct bpf_prog *p = l->link.prog;
 
+	/* mov rdi, 0 */
+	emit_mov_imm64(&prog, BPF_REG_1, 0, 0);
+
+	/* Prepare struct bpf_tramp_run_ctx.
+	 *
+	 * bpf_tramp_run_ctx is already preserved by
+	 * arch_prepare_bpf_trampoline().
+	 *
+	 * mov QWORD PTR [rsp + ctx_cookie_off], rdi
+	 */
+	EMIT4(0x48, 0x89, 0x7C, 0x24); EMIT1(ctx_cookie_off);
+
 	/* arg1: mov rdi, progs[i] */
 	emit_mov_imm64(&prog, BPF_REG_1, (long) p >> 32, (u32) (long) p);
+	/* arg2: mov rsi, rsp (struct bpf_run_ctx *) */
+	EMIT3(0x48, 0x89, 0xE6);
+
 	if (emit_call(&prog,
 		      p->aux->sleepable ? __bpf_prog_enter_sleepable :
 		      __bpf_prog_enter, prog))
@@ -1815,6 +1831,8 @@ static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
 	emit_mov_imm64(&prog, BPF_REG_1, (long) p >> 32, (u32) (long) p);
 	/* arg2: mov rsi, rbx <- start time in nsec */
 	emit_mov_reg(&prog, true, BPF_REG_2, BPF_REG_6);
+	/* arg3: mov rdx, rsp (struct bpf_run_ctx *) */
+	EMIT3(0x48, 0x89, 0xE2);
 	if (emit_call(&prog,
 		      p->aux->sleepable ? __bpf_prog_exit_sleepable :
 		      __bpf_prog_exit, prog))
@@ -2079,6 +2097,16 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 		}
 	}
 
+	if (nr_args < 3 && (fentry->nr_links || fexit->nr_links || fmod_ret->nr_links))
+		EMIT1(0x52);	 /* push rdx */
+
+	if (fentry->nr_links || fexit->nr_links || fmod_ret->nr_links) {
+		/* Prepare struct bpf_tramp_run_ctx.
+		 * sub rsp, sizeof(struct bpf_tramp_run_ctx)
+		 */
+		EMIT4(0x48, 0x83, 0xEC, sizeof(struct bpf_tramp_run_ctx));
+	}
+
 	if (fentry->nr_links)
 		if (invoke_bpf(m, &prog, fentry, regs_off,
 			       flags & BPF_TRAMP_F_RET_FENTRY_RET))
@@ -2098,6 +2126,15 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 	}
 
 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
+		/* pop struct bpf_tramp_run_ctx
+		 * add rsp, sizeof(struct bpf_tramp_run_ctx)
+		 */
+		if (fentry->nr_links || fexit->nr_links || fmod_ret->nr_links)
+			EMIT4(0x48, 0x83, 0xC4, sizeof(struct bpf_tramp_run_ctx));
+
+		if (nr_args < 3 && (fentry->nr_links || fexit->nr_links || fmod_ret->nr_links))
+			EMIT1(0x5A); /* pop rdx */
+
 		restore_regs(m, &prog, nr_args, regs_off);
 
 		/* call original function */
@@ -2110,6 +2147,15 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 		im->ip_after_call = prog;
 		memcpy(prog, x86_nops[5], X86_PATCH_SIZE);
 		prog += X86_PATCH_SIZE;
+
+		if (nr_args < 3 && (fentry->nr_links || fexit->nr_links || fmod_ret->nr_links))
+			EMIT1(0x52); /* push rdx */
+
+		/* Prepare struct bpf_tramp_run_ctx.
+		 * sub rsp, sizeof(struct bpf_tramp_run_ctx)
+		 */
+		if (fentry->nr_links || fexit->nr_links || fmod_ret->nr_links)
+			EMIT4(0x48, 0x83, 0xEC, sizeof(struct bpf_tramp_run_ctx));
 	}
 
 	if (fmod_ret->nr_links) {
@@ -2133,6 +2179,15 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 			goto cleanup;
 		}
 
+	/* pop struct bpf_tramp_run_ctx
+	 * add rsp, sizeof(struct bpf_tramp_run_ctx)
+	 */
+	if (fentry->nr_links || fexit->nr_links || fmod_ret->nr_links)
+		EMIT4(0x48, 0x83, 0xC4, sizeof(struct bpf_tramp_run_ctx));
+
+	if (nr_args < 3 && (fentry->nr_links || fexit->nr_links || fmod_ret->nr_links))
+		EMIT1(0x5A); /* pop rdx */
+
 	if (flags & BPF_TRAMP_F_RESTORE_REGS)
 		restore_regs(m, &prog, nr_args, regs_off);
 
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index a1a3722d11b0..a94891bfab7f 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -681,6 +681,8 @@ struct bpf_tramp_links {
 	int nr_links;
 };
 
+struct bpf_tramp_run_ctx;
+
 /* Different use cases for BPF trampoline:
  * 1. replace nop at the function entry (kprobe equivalent)
  *    flags = BPF_TRAMP_F_RESTORE_REGS
@@ -707,10 +709,11 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *tr, void *image, void *i
 			       struct bpf_tramp_links *tlinks,
 			       void *orig_call);
 /* these two functions are called from generated trampoline */
-u64 notrace __bpf_prog_enter(struct bpf_prog *prog);
-void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start);
-u64 notrace __bpf_prog_enter_sleepable(struct bpf_prog *prog);
-void notrace __bpf_prog_exit_sleepable(struct bpf_prog *prog, u64 start);
+u64 notrace __bpf_prog_enter(struct bpf_prog *prog, struct bpf_tramp_run_ctx *run_ctx);
+void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start, struct bpf_tramp_run_ctx *run_ctx);
+u64 notrace __bpf_prog_enter_sleepable(struct bpf_prog *prog, struct bpf_tramp_run_ctx *run_ctx);
+void notrace __bpf_prog_exit_sleepable(struct bpf_prog *prog, u64 start,
+				       struct bpf_tramp_run_ctx *run_ctx);
 void notrace __bpf_tramp_enter(struct bpf_tramp_image *tr);
 void notrace __bpf_tramp_exit(struct bpf_tramp_image *tr);
 
@@ -1304,6 +1307,12 @@ struct bpf_trace_run_ctx {
 	u64 bpf_cookie;
 };
 
+struct bpf_tramp_run_ctx {
+	struct bpf_run_ctx run_ctx;
+	u64 bpf_cookie;
+	struct bpf_run_ctx *saved_run_ctx;
+};
+
 static inline struct bpf_run_ctx *bpf_set_run_ctx(struct bpf_run_ctx *new_ctx)
 {
 	struct bpf_run_ctx *old_ctx = NULL;
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 3078c0c9317f..e8f06311a0b5 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -4802,13 +4802,13 @@ BPF_CALL_3(bpf_sys_bpf, int, cmd, union bpf_attr *, attr, u32, attr_size)
 		return -EINVAL;
 	}
 
-	if (!__bpf_prog_enter_sleepable(prog)) {
+	if (!__bpf_prog_enter_sleepable(prog, NULL)) {
 		/* recursion detected */
 		bpf_prog_put(prog);
 		return -EBUSY;
 	}
 	attr->test.retval = bpf_prog_run(prog, (void *) (long) attr->test.ctx_in);
-	__bpf_prog_exit_sleepable(prog, 0 /* bpf_prog_run does runtime stats */);
+	__bpf_prog_exit_sleepable(prog, 0 /* bpf_prog_run does runtime stats */, NULL);
 	bpf_prog_put(prog);
 	return 0;
 #endif
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index d5e6bc5517cb..0c32828c2698 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -568,11 +568,15 @@ static void notrace inc_misses_counter(struct bpf_prog *prog)
  * [2..MAX_U64] - execute bpf prog and record execution time.
  *   This is start time.
 */
-u64 notrace __bpf_prog_enter(struct bpf_prog *prog)
+u64 notrace __bpf_prog_enter(struct bpf_prog *prog, struct bpf_tramp_run_ctx *run_ctx)
 	__acquires(RCU)
 {
 	rcu_read_lock();
 	migrate_disable();
+
+	if (run_ctx)
+		run_ctx->saved_run_ctx = bpf_set_run_ctx(&run_ctx->run_ctx);
+
 	if (unlikely(__this_cpu_inc_return(*(prog->active)) != 1)) {
 		inc_misses_counter(prog);
 		return 0;
@@ -602,20 +606,27 @@ static void notrace update_prog_stats(struct bpf_prog *prog,
 	}
 }
 
-void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start)
+void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start, struct bpf_tramp_run_ctx *run_ctx)
 	__releases(RCU)
 {
+	if (run_ctx)
+		bpf_reset_run_ctx(run_ctx->saved_run_ctx);
+
 	update_prog_stats(prog, start);
 	__this_cpu_dec(*(prog->active));
 	migrate_enable();
 	rcu_read_unlock();
 }
 
-u64 notrace __bpf_prog_enter_sleepable(struct bpf_prog *prog)
+u64 notrace __bpf_prog_enter_sleepable(struct bpf_prog *prog, struct bpf_tramp_run_ctx *run_ctx)
 {
 	rcu_read_lock_trace();
 	migrate_disable();
 	might_fault();
+
+	if (run_ctx)
+		run_ctx->saved_run_ctx = bpf_set_run_ctx(&run_ctx->run_ctx);
+
 	if (unlikely(__this_cpu_inc_return(*(prog->active)) != 1)) {
 		inc_misses_counter(prog);
 		return 0;
@@ -623,8 +634,12 @@ u64 notrace __bpf_prog_enter_sleepable(struct bpf_prog *prog)
 	return bpf_prog_start_time();
 }
 
-void notrace __bpf_prog_exit_sleepable(struct bpf_prog *prog, u64 start)
+void notrace __bpf_prog_exit_sleepable(struct bpf_prog *prog, u64 start,
+				       struct bpf_tramp_run_ctx *run_ctx)
 {
+	if (run_ctx)
+		bpf_reset_run_ctx(run_ctx->saved_run_ctx);
+
 	update_prog_stats(prog, start);
 	__this_cpu_dec(*(prog->active));
 	migrate_enable();
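
For reference, the raw bytes in the EMIT*() calls above decode to the
following x86-64 instructions (the disp8/imm8 operands are filled in at
JIT time):

    0x48 0x83 0xEC imm8        sub  rsp, imm8           /* reserve struct bpf_tramp_run_ctx */
    0x48 0x89 0x7C 0x24 disp8  mov  [rsp + disp8], rdi  /* store bpf_cookie */
    0x48 0x89 0xE6             mov  rsi, rsp            /* arg2 of __bpf_prog_enter* */
    0x48 0x89 0xE2             mov  rdx, rsp            /* arg3 of __bpf_prog_exit* */
    0x48 0x83 0xC4 imm8        add  rsp, imm8           /* release struct bpf_tramp_run_ctx */
    0x52 / 0x5A                push rdx / pop rdx

When the traced function has fewer than three arguments (nr_args < 3),
rdx is additionally saved and restored around the run-ctx area with the
push rdx / pop rdx pair, since it is not covered by the register save
area in that case.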