From patchwork Sat May 8 03:48:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245759 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4FEEAC433B4 for ; Sat, 8 May 2021 03:48:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 216F261106 for ; Sat, 8 May 2021 03:48:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231140AbhEHDtp (ORCPT ); Fri, 7 May 2021 23:49:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34304 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229947AbhEHDto (ORCPT ); Fri, 7 May 2021 23:49:44 -0400 Received: from mail-pj1-x102e.google.com (mail-pj1-x102e.google.com [IPv6:2607:f8b0:4864:20::102e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1F4DCC061574 for ; Fri, 7 May 2021 20:48:43 -0700 (PDT) Received: by mail-pj1-x102e.google.com with SMTP id g24so6293303pji.4 for ; Fri, 07 May 2021 20:48:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=7NBSOquCBwdDutQSOyJSpYJU/InzThTPO7vwE3A0bCY=; b=fHs7zMiitSdKKpWiuSCNJM1VXueVFqcIXzmsdHJpdm50vh8phZnsfi53Yez6OnWh1D ddruW27s+u2CyJlCJp6JZDZKUxpMI038YPtMivlVdC/NlaPXFGjrI71IUXjkHiAN7Cv9 YLXDyQGBjExvj+Z9+9MuxwASKIGLwRmwXCd3V48HJk/KvkxVNWxkmGcJBZKrD/jPZXYd j52L/NmX2g9ilAWm24vb4k3yW3oW035eKjZ4Z+/19AQNlg7ZIlLU7CTbf7BUzOBY4A8t SlN68G8l+29RdHylZjRLWzj+DwrUmUTF/9+xhIkraM4B4y62SBGXzXilPZ/M5UHbWofq OzPw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=7NBSOquCBwdDutQSOyJSpYJU/InzThTPO7vwE3A0bCY=; b=mRhZFNUPJGdpvBTJE2vzbIXmxQIPdhmpEsUtkhVl8xlQAdufo6ARZASeNOMdS91K8p 6fZt2swd/8y8yUha2eDSNaIuYD96JI0vEoh0K3I5B0xvVrujltFEWQB4uPz6Sk6G8lNv +NkOkoKkYCGvM/xJFAwDlUWk6y7IPC48rXm7Yy5sRImfd8WpSti5WYAVr3f3VAe0SbXp NMpq5s7O7MiQ9vy0CGecv1dNcEl0AfVWsVYHgS/Pdfpu+kIznDx6h9/hhMvbGNSHgIEF XxtlNCS3Xz/QPvr41nOihMuUtDiuykaZmzBOVyvJWbUXRUQ21Jiwpb52rW5RiaetieGh /qCA== X-Gm-Message-State: AOAM53163Rr5auk1IxGIYjExG/1ZKUoJ6rp4OcWy2niLCiG+OY01q6p8 qdePElnKGzMsYf9uPU8p9gmDOsUCnUo= X-Google-Smtp-Source: ABdhPJxA/dgtyXZ/vgYvOSV0gp69aTGMoK1bRaBOKbaUcmTrRVLCQAlsyOmtKnev6ZPZ7tnfX+VOzg== X-Received: by 2002:a17:90a:246:: with SMTP id t6mr14564557pje.228.1620445722670; Fri, 07 May 2021 20:48:42 -0700 (PDT) Received: from ast-mbp.thefacebook.com ([163.114.132.1]) by smtp.gmail.com with ESMTPSA id u12sm5784606pfh.122.2021.05.07.20.48.40 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Fri, 07 May 2021 20:48:41 -0700 (PDT) From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 01/22] bpf: 
Introduce bpf_sys_bpf() helper and program type. Date: Fri, 7 May 2021 20:48:16 -0700 Message-Id: <20210508034837.64585-2-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov Add placeholders for bpf_sys_bpf() helper and new program type. Make sure to check that expected_attach_type is zero for future extensibility. Allow tracing helper functions to be used in this program type, since they will only execute from user context via bpf_prog_test_run. Signed-off-by: Alexei Starovoitov Acked-by: John Fastabend Acked-by: Andrii Nakryiko --- include/linux/bpf.h | 10 +++++++ include/linux/bpf_types.h | 2 ++ include/uapi/linux/bpf.h | 8 +++++ kernel/bpf/syscall.c | 53 ++++++++++++++++++++++++++++++++++ kernel/bpf/verifier.c | 8 +++++ net/bpf/test_run.c | 43 +++++++++++++++++++++++++++ tools/include/uapi/linux/bpf.h | 8 +++++ 7 files changed, 132 insertions(+) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 02b02cb29ce2..04a2bf41ae72 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1826,6 +1826,9 @@ static inline bool bpf_map_is_dev_bound(struct bpf_map *map) struct bpf_map *bpf_map_offload_map_alloc(union bpf_attr *attr); void bpf_map_offload_map_free(struct bpf_map *map); +int bpf_prog_test_run_syscall(struct bpf_prog *prog, + const union bpf_attr *kattr, + union bpf_attr __user *uattr); #else static inline int bpf_prog_offload_init(struct bpf_prog *prog, union bpf_attr *attr) @@ -1851,6 +1854,13 @@ static inline struct bpf_map *bpf_map_offload_map_alloc(union bpf_attr *attr) static inline void bpf_map_offload_map_free(struct bpf_map *map) { } + +static inline int bpf_prog_test_run_syscall(struct bpf_prog *prog, + const union bpf_attr *kattr, + union bpf_attr __user *uattr) +{ + return -ENOTSUPP; +} #endif /* CONFIG_NET && CONFIG_BPF_SYSCALL */ #if defined(CONFIG_INET) && defined(CONFIG_BPF_SYSCALL) diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h index f883f01a5061..a9db1eae6796 100644 --- a/include/linux/bpf_types.h +++ b/include/linux/bpf_types.h @@ -77,6 +77,8 @@ BPF_PROG_TYPE(BPF_PROG_TYPE_LSM, lsm, void *, void *) #endif /* CONFIG_BPF_LSM */ #endif +BPF_PROG_TYPE(BPF_PROG_TYPE_SYSCALL, bpf_syscall, + void *, void *) BPF_MAP_TYPE(BPF_MAP_TYPE_ARRAY, array_map_ops) BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_ARRAY, percpu_array_map_ops) diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index ec6d85a81744..c92648f38144 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -937,6 +937,7 @@ enum bpf_prog_type { BPF_PROG_TYPE_EXT, BPF_PROG_TYPE_LSM, BPF_PROG_TYPE_SK_LOOKUP, + BPF_PROG_TYPE_SYSCALL, /* a program that can execute syscalls */ }; enum bpf_attach_type { @@ -4735,6 +4736,12 @@ union bpf_attr { * be zero-terminated except when **str_size** is 0. * * Or **-EBUSY** if the per-CPU memory copy buffer is busy. + * + * long bpf_sys_bpf(u32 cmd, void *attr, u32 attr_size) + * Description + * Execute bpf syscall with given arguments. + * Return + * A syscall result. 
*/ #define __BPF_FUNC_MAPPER(FN) \ FN(unspec), \ @@ -4903,6 +4910,7 @@ union bpf_attr { FN(check_mtu), \ FN(for_each_map_elem), \ FN(snprintf), \ + FN(sys_bpf), \ /* */ /* integer value in 'imm' field of BPF_CALL instruction selects which helper diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 941ca06d9dfa..b1e7352919cb 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -2014,6 +2014,7 @@ bpf_prog_load_check_attach(enum bpf_prog_type prog_type, if (expected_attach_type == BPF_SK_LOOKUP) return 0; return -EINVAL; + case BPF_PROG_TYPE_SYSCALL: case BPF_PROG_TYPE_EXT: if (expected_attach_type) return -EINVAL; @@ -4508,3 +4509,55 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz return err; } + +static bool syscall_prog_is_valid_access(int off, int size, + enum bpf_access_type type, + const struct bpf_prog *prog, + struct bpf_insn_access_aux *info) +{ + if (off < 0 || off >= U16_MAX) + return false; + if (off % size != 0) + return false; + return true; +} + +BPF_CALL_3(bpf_sys_bpf, int, cmd, void *, attr, u32, attr_size) +{ + return -EINVAL; +} + +const struct bpf_func_proto bpf_sys_bpf_proto = { + .func = bpf_sys_bpf, + .gpl_only = false, + .ret_type = RET_INTEGER, + .arg1_type = ARG_ANYTHING, + .arg2_type = ARG_PTR_TO_MEM, + .arg3_type = ARG_CONST_SIZE, +}; + +const struct bpf_func_proto * __weak +tracing_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) +{ + return bpf_base_func_proto(func_id); +} + +static const struct bpf_func_proto * +syscall_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) +{ + switch (func_id) { + case BPF_FUNC_sys_bpf: + return &bpf_sys_bpf_proto; + default: + return tracing_prog_func_proto(func_id, prog); + } +} + +const struct bpf_verifier_ops bpf_syscall_verifier_ops = { + .get_func_proto = syscall_prog_func_proto, + .is_valid_access = syscall_prog_is_valid_access, +}; + +const struct bpf_prog_ops bpf_syscall_prog_ops = { + .test_run = bpf_prog_test_run_syscall, +}; diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 8fd552c16763..ad4df0a4ce54 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -13207,6 +13207,14 @@ static int check_attach_btf_id(struct bpf_verifier_env *env) int ret; u64 key; + if (prog->type == BPF_PROG_TYPE_SYSCALL) { + if (prog->aux->sleepable) + /* attach_btf_id checked to be zero already */ + return 0; + verbose(env, "Syscall programs can only be sleepable\n"); + return -EINVAL; + } + if (prog->aux->sleepable && prog->type != BPF_PROG_TYPE_TRACING && prog->type != BPF_PROG_TYPE_LSM) { verbose(env, "Only fentry/fexit/fmod_ret and lsm programs can be sleepable\n"); diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index a5d72c48fb66..a6972d7ddf80 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -918,3 +918,46 @@ int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, const union bpf_attr *kat kfree(user_ctx); return ret; } + +int bpf_prog_test_run_syscall(struct bpf_prog *prog, + const union bpf_attr *kattr, + union bpf_attr __user *uattr) +{ + void __user *ctx_in = u64_to_user_ptr(kattr->test.ctx_in); + __u32 ctx_size_in = kattr->test.ctx_size_in; + void *ctx = NULL; + u32 retval; + int err = 0; + + /* doesn't support data_in/out, ctx_out, duration, or repeat or flags */ + if (kattr->test.data_in || kattr->test.data_out || + kattr->test.ctx_out || kattr->test.duration || + kattr->test.repeat || kattr->test.flags) + return -EINVAL; + + if (ctx_size_in < prog->aux->max_ctx_offset || + ctx_size_in > 
U16_MAX) + return -EINVAL; + + if (ctx_size_in) { + ctx = kzalloc(ctx_size_in, GFP_USER); + if (!ctx) + return -ENOMEM; + if (copy_from_user(ctx, ctx_in, ctx_size_in)) { + err = -EFAULT; + goto out; + } + } + retval = bpf_prog_run_pin_on_cpu(prog, ctx); + + if (copy_to_user(&uattr->test.retval, &retval, sizeof(u32))) { + err = -EFAULT; + goto out; + } + if (ctx_size_in) + if (copy_to_user(ctx_in, ctx, ctx_size_in)) + err = -EFAULT; +out: + kfree(ctx); + return err; +} diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index ec6d85a81744..c92648f38144 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -937,6 +937,7 @@ enum bpf_prog_type { BPF_PROG_TYPE_EXT, BPF_PROG_TYPE_LSM, BPF_PROG_TYPE_SK_LOOKUP, + BPF_PROG_TYPE_SYSCALL, /* a program that can execute syscalls */ }; enum bpf_attach_type { @@ -4735,6 +4736,12 @@ union bpf_attr { * be zero-terminated except when **str_size** is 0. * * Or **-EBUSY** if the per-CPU memory copy buffer is busy. + * + * long bpf_sys_bpf(u32 cmd, void *attr, u32 attr_size) + * Description + * Execute bpf syscall with given arguments. + * Return + * A syscall result. */ #define __BPF_FUNC_MAPPER(FN) \ FN(unspec), \ @@ -4903,6 +4910,7 @@ union bpf_attr { FN(check_mtu), \ FN(for_each_map_elem), \ FN(snprintf), \ + FN(sys_bpf), \ /* */ /* integer value in 'imm' field of BPF_CALL instruction selects which helper From patchwork Sat May 8 03:48:17 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245761 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A0D7BC43460 for ; Sat, 8 May 2021 03:48:46 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 72CBF611CC for ; Sat, 8 May 2021 03:48:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231144AbhEHDtq (ORCPT ); Fri, 7 May 2021 23:49:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34314 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229947AbhEHDtq (ORCPT ); Fri, 7 May 2021 23:49:46 -0400 Received: from mail-pj1-x102a.google.com (mail-pj1-x102a.google.com [IPv6:2607:f8b0:4864:20::102a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C23DFC061574 for ; Fri, 7 May 2021 20:48:44 -0700 (PDT) Received: by mail-pj1-x102a.google.com with SMTP id t2-20020a17090ae502b029015b0fbfbc50so6534529pjy.3 for ; Fri, 07 May 2021 20:48:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=Nwc5XWdKz5jG+SAGsD0OMALB7sp2qsIR5lQRCTO9YJs=; b=XHDGtO6ThS73hy5r3lTVUnTdFA9eHcXN0de0UzN1V6f0Cphd33TGtA7dAUgQ2kSkXD Z3AODx80pnP1fGpyUBXbhtR3KhSxRm7KP9uPigMvaKAT3VHJsypUIM5QmreoB6tpmYJ9 88YE2TqItTL0hFZUptwuCb37ac+RBaxdii3CyQ+s5rQN60w05iGZKM0VyLBB939AXKvH 
xK8PrfMzAHkuqVB/L+Q3xpYyW0mOTOQ0/KVHTsn77QvFjk0dlFY//bLWvrTUcDqIaCKg ZtuJP9+aLf9evfUBXNvLqfJKuK3Bq1vAOvHXGiQLHMyVs58GvWtd/XSDVBTWL865tiWM zwQQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=Nwc5XWdKz5jG+SAGsD0OMALB7sp2qsIR5lQRCTO9YJs=; b=o69VP5HqWkTcco0If5lSFOM++GN63EVjrPoHV7csCIaOgBQMx3rI8N/8pX2K+eP7qM JO88kncK+pYFug4ZJeAxRNOrcWQnuGfAGA5EGZOpfGlfcVc3PTcmC1cOJTRMlZ+EgSTg QCzGhdik56N5rWpyEdYwmbO1lGPi4UXkXj2Evvqj0aZX5dtykQjWj6vD2fGwsNHHAxs4 Qw1x/2Hz+ZF7gfRTwjnXnnps8UfQt/jysVUWk5X9Voz0Iujo2YknzznaEYUyTS88GCR/ oF8yaNXh1fUyrbdR+K9r4RIFtm05Nns3nsOGXx2xQLDRNS9sCrDLgJKVOgZRPyOcOCdb sOGQ== X-Gm-Message-State: AOAM533ACb8GQyTqv/B8ADntu9jc8NIp7GfhrZyfK32beCyW+h43B3bs ZKv92lKjYIPNGvkhGZCuetk= X-Google-Smtp-Source: ABdhPJw4lyJ/BcCdwSpVjb4S/w6iXUwuhzjmL8izMzBWnR/XwIgtyh1B6ISIR4XV6MWSdrN1t+uEIA== X-Received: by 2002:a17:902:6bca:b029:ee:b72c:5585 with SMTP id m10-20020a1709026bcab02900eeb72c5585mr12950736plt.46.1620445724376; Fri, 07 May 2021 20:48:44 -0700 (PDT) Received: from ast-mbp.thefacebook.com ([163.114.132.1]) by smtp.gmail.com with ESMTPSA id u12sm5784606pfh.122.2021.05.07.20.48.42 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Fri, 07 May 2021 20:48:43 -0700 (PDT) From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 02/22] bpf: Introduce bpfptr_t user/kernel pointer. Date: Fri, 7 May 2021 20:48:17 -0700 Message-Id: <20210508034837.64585-3-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov Similar to sockptr_t introduce bpfptr_t with few additions: make_bpfptr() creates new user/kernel pointer in the same address space as existing user/kernel pointer. bpfptr_add() advances the user/kernel pointer. Signed-off-by: Alexei Starovoitov Acked-by: Andrii Nakryiko --- include/linux/bpfptr.h | 81 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 81 insertions(+) create mode 100644 include/linux/bpfptr.h diff --git a/include/linux/bpfptr.h b/include/linux/bpfptr.h new file mode 100644 index 000000000000..e370acb04977 --- /dev/null +++ b/include/linux/bpfptr.h @@ -0,0 +1,81 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* A pointer that can point to either kernel or userspace memory. 
*/ +#ifndef _LINUX_BPFPTR_H +#define _LINUX_BPFPTR_H + +#include + +typedef sockptr_t bpfptr_t; + +static inline bool bpfptr_is_kernel(bpfptr_t bpfptr) +{ + return bpfptr.is_kernel; +} + +static inline bpfptr_t KERNEL_BPFPTR(void *p) +{ + return (bpfptr_t) { .kernel = p, .is_kernel = true }; +} + +static inline bpfptr_t USER_BPFPTR(void __user *p) +{ + return (bpfptr_t) { .user = p }; +} + +static inline bpfptr_t make_bpfptr(u64 addr, bool is_kernel) +{ + if (is_kernel) + return (bpfptr_t) { + .kernel = (void*) (uintptr_t) addr, + .is_kernel = true, + }; + else + return (bpfptr_t) { + .user = u64_to_user_ptr(addr), + .is_kernel = false, + }; +} + +static inline bool bpfptr_is_null(bpfptr_t bpfptr) +{ + if (bpfptr_is_kernel(bpfptr)) + return !bpfptr.kernel; + return !bpfptr.user; +} + +static inline void bpfptr_add(bpfptr_t *bpfptr, size_t val) +{ + if (bpfptr_is_kernel(*bpfptr)) + bpfptr->kernel += val; + else + bpfptr->user += val; +} + +static inline int copy_from_bpfptr_offset(void *dst, bpfptr_t src, + size_t offset, size_t size) +{ + return copy_from_sockptr_offset(dst, (sockptr_t) src, offset, size); +} + +static inline int copy_from_bpfptr(void *dst, bpfptr_t src, size_t size) +{ + return copy_from_bpfptr_offset(dst, src, 0, size); +} + +static inline int copy_to_bpfptr_offset(bpfptr_t dst, size_t offset, + const void *src, size_t size) +{ + return copy_to_sockptr_offset((sockptr_t) dst, offset, src, size); +} + +static inline void *memdup_bpfptr(bpfptr_t src, size_t len) +{ + return memdup_sockptr((sockptr_t) src, len); +} + +static inline long strncpy_from_bpfptr(char *dst, bpfptr_t src, size_t count) +{ + return strncpy_from_sockptr(dst, (sockptr_t) src, count); +} + +#endif /* _LINUX_BPFPTR_H */ From patchwork Sat May 8 03:48:18 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245763 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D76D2C433B4 for ; Sat, 8 May 2021 03:48:48 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B65C3611CC for ; Sat, 8 May 2021 03:48:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231145AbhEHDts (ORCPT ); Fri, 7 May 2021 23:49:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34322 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229947AbhEHDts (ORCPT ); Fri, 7 May 2021 23:49:48 -0400 Received: from mail-pf1-x429.google.com (mail-pf1-x429.google.com [IPv6:2607:f8b0:4864:20::429]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A7B60C061574 for ; Fri, 7 May 2021 20:48:46 -0700 (PDT) Received: by mail-pf1-x429.google.com with SMTP id i190so9212921pfc.12 for ; Fri, 07 May 2021 20:48:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; 
bh=PT+4d1mGt/QbJB/soVZxkaMHp2oUCXI6ryUSkWOCcxQ=; b=JPwvCG803XcWU5roPeF49/6vANBawxqRAEkr1ka7m53YRYW4L6Zdjxj6d+xjIGyxii dZn0NRj/1o8UIdzTtqZq/Y4dXnBHmnMnti6T12mZVwKEHZ/75NPWaOLy1PXmZnvy183r S6BnmD4FX//WXvHhp/Gi8p1q2jPuQypeiFFcPZcvsbWWw7h+FOq0C8Y0gw4EDXTZDaZx AddMxFBMgQXk48BojOfACn4f6rUZQZl93+Aosv5dvtqG85fBmRcRpKZyFWXFhNAdPbIR pYHgg0Mxi9Cc5KunbDZ05zb/miYtQIGWs3mBOfxe7bhB+gSqGm2Ig9ekVL7cspiEIBLM uM+g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=PT+4d1mGt/QbJB/soVZxkaMHp2oUCXI6ryUSkWOCcxQ=; b=pfuF4KaZtecqorps+qMcbWxi0pvZW779K53qxFU+XfMzNfoqnhsmjL1X185Phn4dKj kLePqum2ppe7hAnmD7ZzVYXva5T0bEzFmKvOSM3BwjGQzaoT8DcLnJ4TdfW0Si2gS4zc GTDjk2eEVQrTUZpvnaLfD5QXQ6HteTbUbjMoEll74ZQOMMDZbFe6jtzIOHa1gVxgYH9Q DvCavo1EXuM9dpSU/d+bJcn8TOPmq7lwEcvAeMsPLqPJzRu6lG/18VzNqi+5qXUG+sEE YdvYT0gw1YtsEMuH5xBVAQAzIkUvJ7NtV8tuCAGXci/2Q1YoolHRuNMtatBOESwFk/iJ EJ5w== X-Gm-Message-State: AOAM530YwdZwYqqoE10lg+ZlmKKNo7xpJlqKApYWY9eMtNgxci2COXVo 2l3Bx+jlbD51hLdQcPJ3yy4= X-Google-Smtp-Source: ABdhPJwYU3WZnUeh+otcXG0jin4ps8W6qnXxHmqrSjdAOQD/vbUvQxBvvwYGMuj/z6RQwJ8hOewrKA== X-Received: by 2002:a63:1c02:: with SMTP id c2mr13568246pgc.195.1620445726178; Fri, 07 May 2021 20:48:46 -0700 (PDT) Received: from ast-mbp.thefacebook.com ([163.114.132.1]) by smtp.gmail.com with ESMTPSA id u12sm5784606pfh.122.2021.05.07.20.48.44 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Fri, 07 May 2021 20:48:45 -0700 (PDT) From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 03/22] bpf: Prepare bpf syscall to be used from kernel and user space. Date: Fri, 7 May 2021 20:48:18 -0700 Message-Id: <20210508034837.64585-4-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov With the help from bpfptr_t prepare relevant bpf syscall commands to be used from kernel and user space. 
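For illustration only (not part of this patch): once a command handler takes a bpfptr_t, nested pointers are materialized with make_bpfptr() in the same address space as uattr, so a single copy_from_bpfptr() path serves both the user-space bpf(2) caller and the kernel-side bpf_sys_bpf() caller. A minimal sketch, using a hypothetical handler name and the helpers from the previous patch:

static int example_copy_key(union bpf_attr *attr, bpfptr_t uattr)
{
	/* The nested pointer inherits the address space of uattr. */
	bpfptr_t ukey = make_bpfptr(attr->key, uattr.is_kernel);
	u64 key;

	/* One copy helper works for both user and kernel callers. */
	if (copy_from_bpfptr(&key, ukey, sizeof(key)))
		return -EFAULT;
	return 0;
}
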
Signed-off-by: Alexei Starovoitov Acked-by: Andrii Nakryiko --- include/linux/bpf.h | 8 +-- kernel/bpf/bpf_iter.c | 13 ++--- kernel/bpf/syscall.c | 113 +++++++++++++++++++++++++++--------------- kernel/bpf/verifier.c | 34 +++++++------ net/bpf/test_run.c | 2 +- 5 files changed, 104 insertions(+), 66 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 04a2bf41ae72..7fd53380c981 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -22,6 +22,7 @@ #include #include #include +#include struct bpf_verifier_env; struct bpf_verifier_log; @@ -1428,7 +1429,7 @@ struct bpf_iter__bpf_map_elem { int bpf_iter_reg_target(const struct bpf_iter_reg *reg_info); void bpf_iter_unreg_target(const struct bpf_iter_reg *reg_info); bool bpf_iter_prog_supported(struct bpf_prog *prog); -int bpf_iter_link_attach(const union bpf_attr *attr, struct bpf_prog *prog); +int bpf_iter_link_attach(const union bpf_attr *attr, bpfptr_t uattr, struct bpf_prog *prog); int bpf_iter_new_fd(struct bpf_link *link); bool bpf_link_is_iter(struct bpf_link *link); struct bpf_prog *bpf_iter_get_info(struct bpf_iter_meta *meta, bool in_stop); @@ -1459,7 +1460,7 @@ int bpf_fd_htab_map_update_elem(struct bpf_map *map, struct file *map_file, int bpf_fd_htab_map_lookup_elem(struct bpf_map *map, void *key, u32 *value); int bpf_get_file_flag(int flags); -int bpf_check_uarg_tail_zero(void __user *uaddr, size_t expected_size, +int bpf_check_uarg_tail_zero(bpfptr_t uaddr, size_t expected_size, size_t actual_size); /* memcpy that is used with 8-byte aligned pointers, power-of-8 size and @@ -1479,8 +1480,7 @@ static inline void bpf_long_memcpy(void *dst, const void *src, u32 size) } /* verify correctness of eBPF program */ -int bpf_check(struct bpf_prog **fp, union bpf_attr *attr, - union bpf_attr __user *uattr); +int bpf_check(struct bpf_prog **fp, union bpf_attr *attr, bpfptr_t uattr); #ifndef CONFIG_BPF_JIT_ALWAYS_ON void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth); diff --git a/kernel/bpf/bpf_iter.c b/kernel/bpf/bpf_iter.c index 931870f9cf56..2d4fbdbb194e 100644 --- a/kernel/bpf/bpf_iter.c +++ b/kernel/bpf/bpf_iter.c @@ -473,15 +473,16 @@ bool bpf_link_is_iter(struct bpf_link *link) return link->ops == &bpf_iter_link_lops; } -int bpf_iter_link_attach(const union bpf_attr *attr, struct bpf_prog *prog) +int bpf_iter_link_attach(const union bpf_attr *attr, bpfptr_t uattr, + struct bpf_prog *prog) { - union bpf_iter_link_info __user *ulinfo; struct bpf_link_primer link_primer; struct bpf_iter_target_info *tinfo; union bpf_iter_link_info linfo; struct bpf_iter_link *link; u32 prog_btf_id, linfo_len; bool existed = false; + bpfptr_t ulinfo; int err; if (attr->link_create.target_fd || attr->link_create.flags) @@ -489,18 +490,18 @@ int bpf_iter_link_attach(const union bpf_attr *attr, struct bpf_prog *prog) memset(&linfo, 0, sizeof(union bpf_iter_link_info)); - ulinfo = u64_to_user_ptr(attr->link_create.iter_info); + ulinfo = make_bpfptr(attr->link_create.iter_info, uattr.is_kernel); linfo_len = attr->link_create.iter_info_len; - if (!ulinfo ^ !linfo_len) + if (bpfptr_is_null(ulinfo) ^ !linfo_len) return -EINVAL; - if (ulinfo) { + if (!bpfptr_is_null(ulinfo)) { err = bpf_check_uarg_tail_zero(ulinfo, sizeof(linfo), linfo_len); if (err) return err; linfo_len = min_t(u32, linfo_len, sizeof(linfo)); - if (copy_from_user(&linfo, ulinfo, linfo_len)) + if (copy_from_bpfptr(&linfo, ulinfo, linfo_len)) return -EFAULT; } diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index b1e7352919cb..28387fe149ba 100644 --- 
a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -72,11 +72,10 @@ static const struct bpf_map_ops * const bpf_map_types[] = { * copy_from_user() call. However, this is not a concern since this function is * meant to be a future-proofing of bits. */ -int bpf_check_uarg_tail_zero(void __user *uaddr, +int bpf_check_uarg_tail_zero(bpfptr_t uaddr, size_t expected_size, size_t actual_size) { - unsigned char __user *addr = uaddr + expected_size; int res; if (unlikely(actual_size > PAGE_SIZE)) /* silly large */ @@ -85,7 +84,12 @@ int bpf_check_uarg_tail_zero(void __user *uaddr, if (actual_size <= expected_size) return 0; - res = check_zeroed_user(addr, actual_size - expected_size); + if (uaddr.is_kernel) + res = memchr_inv(uaddr.kernel + expected_size, 0, + actual_size - expected_size) == NULL; + else + res = check_zeroed_user(uaddr.user + expected_size, + actual_size - expected_size); if (res < 0) return res; return res ? 0 : -E2BIG; @@ -1004,6 +1008,17 @@ static void *__bpf_copy_key(void __user *ukey, u64 key_size) return NULL; } +static void *___bpf_copy_key(bpfptr_t ukey, u64 key_size) +{ + if (key_size) + return memdup_bpfptr(ukey, key_size); + + if (!bpfptr_is_null(ukey)) + return ERR_PTR(-EINVAL); + + return NULL; +} + /* last field in 'union bpf_attr' used by this command */ #define BPF_MAP_LOOKUP_ELEM_LAST_FIELD flags @@ -1074,10 +1089,10 @@ static int map_lookup_elem(union bpf_attr *attr) #define BPF_MAP_UPDATE_ELEM_LAST_FIELD flags -static int map_update_elem(union bpf_attr *attr) +static int map_update_elem(union bpf_attr *attr, bpfptr_t uattr) { - void __user *ukey = u64_to_user_ptr(attr->key); - void __user *uvalue = u64_to_user_ptr(attr->value); + bpfptr_t ukey = make_bpfptr(attr->key, uattr.is_kernel); + bpfptr_t uvalue = make_bpfptr(attr->value, uattr.is_kernel); int ufd = attr->map_fd; struct bpf_map *map; void *key, *value; @@ -1103,7 +1118,7 @@ static int map_update_elem(union bpf_attr *attr) goto err_put; } - key = __bpf_copy_key(ukey, map->key_size); + key = ___bpf_copy_key(ukey, map->key_size); if (IS_ERR(key)) { err = PTR_ERR(key); goto err_put; @@ -1123,7 +1138,7 @@ static int map_update_elem(union bpf_attr *attr) goto free_key; err = -EFAULT; - if (copy_from_user(value, uvalue, value_size) != 0) + if (copy_from_bpfptr(value, uvalue, value_size) != 0) goto free_value; err = bpf_map_update_value(map, f, key, value, attr->flags); @@ -2076,7 +2091,7 @@ static bool is_perfmon_prog_type(enum bpf_prog_type prog_type) /* last field in 'union bpf_attr' used by this command */ #define BPF_PROG_LOAD_LAST_FIELD attach_prog_fd -static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr) +static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr) { enum bpf_prog_type type = attr->prog_type; struct bpf_prog *prog, *dst_prog = NULL; @@ -2101,8 +2116,9 @@ static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr) return -EPERM; /* copy eBPF program license from user space */ - if (strncpy_from_user(license, u64_to_user_ptr(attr->license), - sizeof(license) - 1) < 0) + if (strncpy_from_bpfptr(license, + make_bpfptr(attr->license, uattr.is_kernel), + sizeof(license) - 1) < 0) return -EFAULT; license[sizeof(license) - 1] = 0; @@ -2186,8 +2202,9 @@ static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr) prog->len = attr->insn_cnt; err = -EFAULT; - if (copy_from_user(prog->insns, u64_to_user_ptr(attr->insns), - bpf_prog_insn_size(prog)) != 0) + if (copy_from_bpfptr(prog->insns, + make_bpfptr(attr->insns, uattr.is_kernel), + 
bpf_prog_insn_size(prog)) != 0) goto free_prog_sec; prog->orig_prog = NULL; @@ -3423,7 +3440,7 @@ static int bpf_prog_get_info_by_fd(struct file *file, u32 ulen; int err; - err = bpf_check_uarg_tail_zero(uinfo, sizeof(info), info_len); + err = bpf_check_uarg_tail_zero(USER_BPFPTR(uinfo), sizeof(info), info_len); if (err) return err; info_len = min_t(u32, sizeof(info), info_len); @@ -3702,7 +3719,7 @@ static int bpf_map_get_info_by_fd(struct file *file, u32 info_len = attr->info.info_len; int err; - err = bpf_check_uarg_tail_zero(uinfo, sizeof(info), info_len); + err = bpf_check_uarg_tail_zero(USER_BPFPTR(uinfo), sizeof(info), info_len); if (err) return err; info_len = min_t(u32, sizeof(info), info_len); @@ -3745,7 +3762,7 @@ static int bpf_btf_get_info_by_fd(struct file *file, u32 info_len = attr->info.info_len; int err; - err = bpf_check_uarg_tail_zero(uinfo, sizeof(*uinfo), info_len); + err = bpf_check_uarg_tail_zero(USER_BPFPTR(uinfo), sizeof(*uinfo), info_len); if (err) return err; @@ -3762,7 +3779,7 @@ static int bpf_link_get_info_by_fd(struct file *file, u32 info_len = attr->info.info_len; int err; - err = bpf_check_uarg_tail_zero(uinfo, sizeof(info), info_len); + err = bpf_check_uarg_tail_zero(USER_BPFPTR(uinfo), sizeof(info), info_len); if (err) return err; info_len = min_t(u32, sizeof(info), info_len); @@ -4023,13 +4040,14 @@ static int bpf_map_do_batch(const union bpf_attr *attr, return err; } -static int tracing_bpf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog) +static int tracing_bpf_link_attach(const union bpf_attr *attr, bpfptr_t uattr, + struct bpf_prog *prog) { if (attr->link_create.attach_type != prog->expected_attach_type) return -EINVAL; if (prog->expected_attach_type == BPF_TRACE_ITER) - return bpf_iter_link_attach(attr, prog); + return bpf_iter_link_attach(attr, uattr, prog); else if (prog->type == BPF_PROG_TYPE_EXT) return bpf_tracing_prog_attach(prog, attr->link_create.target_fd, @@ -4038,7 +4056,7 @@ static int tracing_bpf_link_attach(const union bpf_attr *attr, struct bpf_prog * } #define BPF_LINK_CREATE_LAST_FIELD link_create.iter_info_len -static int link_create(union bpf_attr *attr) +static int link_create(union bpf_attr *attr, bpfptr_t uattr) { enum bpf_prog_type ptype; struct bpf_prog *prog; @@ -4057,7 +4075,7 @@ static int link_create(union bpf_attr *attr) goto out; if (prog->type == BPF_PROG_TYPE_EXT) { - ret = tracing_bpf_link_attach(attr, prog); + ret = tracing_bpf_link_attach(attr, uattr, prog); goto out; } @@ -4078,7 +4096,7 @@ static int link_create(union bpf_attr *attr) ret = cgroup_bpf_link_attach(attr, prog); break; case BPF_PROG_TYPE_TRACING: - ret = tracing_bpf_link_attach(attr, prog); + ret = tracing_bpf_link_attach(attr, uattr, prog); break; case BPF_PROG_TYPE_FLOW_DISSECTOR: case BPF_PROG_TYPE_SK_LOOKUP: @@ -4366,7 +4384,7 @@ static int bpf_prog_bind_map(union bpf_attr *attr) return ret; } -SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, size) +static int __sys_bpf(int cmd, bpfptr_t uattr, unsigned int size) { union bpf_attr attr; int err; @@ -4381,7 +4399,7 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz /* copy attributes from user space, may be less than sizeof(bpf_attr) */ memset(&attr, 0, sizeof(attr)); - if (copy_from_user(&attr, uattr, size) != 0) + if (copy_from_bpfptr(&attr, uattr, size) != 0) return -EFAULT; err = security_bpf(cmd, &attr, size); @@ -4396,7 +4414,7 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz err = 
map_lookup_elem(&attr); break; case BPF_MAP_UPDATE_ELEM: - err = map_update_elem(&attr); + err = map_update_elem(&attr, uattr); break; case BPF_MAP_DELETE_ELEM: err = map_delete_elem(&attr); @@ -4423,21 +4441,21 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz err = bpf_prog_detach(&attr); break; case BPF_PROG_QUERY: - err = bpf_prog_query(&attr, uattr); + err = bpf_prog_query(&attr, uattr.user); break; case BPF_PROG_TEST_RUN: - err = bpf_prog_test_run(&attr, uattr); + err = bpf_prog_test_run(&attr, uattr.user); break; case BPF_PROG_GET_NEXT_ID: - err = bpf_obj_get_next_id(&attr, uattr, + err = bpf_obj_get_next_id(&attr, uattr.user, &prog_idr, &prog_idr_lock); break; case BPF_MAP_GET_NEXT_ID: - err = bpf_obj_get_next_id(&attr, uattr, + err = bpf_obj_get_next_id(&attr, uattr.user, &map_idr, &map_idr_lock); break; case BPF_BTF_GET_NEXT_ID: - err = bpf_obj_get_next_id(&attr, uattr, + err = bpf_obj_get_next_id(&attr, uattr.user, &btf_idr, &btf_idr_lock); break; case BPF_PROG_GET_FD_BY_ID: @@ -4447,7 +4465,7 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz err = bpf_map_get_fd_by_id(&attr); break; case BPF_OBJ_GET_INFO_BY_FD: - err = bpf_obj_get_info_by_fd(&attr, uattr); + err = bpf_obj_get_info_by_fd(&attr, uattr.user); break; case BPF_RAW_TRACEPOINT_OPEN: err = bpf_raw_tracepoint_open(&attr); @@ -4459,26 +4477,26 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz err = bpf_btf_get_fd_by_id(&attr); break; case BPF_TASK_FD_QUERY: - err = bpf_task_fd_query(&attr, uattr); + err = bpf_task_fd_query(&attr, uattr.user); break; case BPF_MAP_LOOKUP_AND_DELETE_ELEM: err = map_lookup_and_delete_elem(&attr); break; case BPF_MAP_LOOKUP_BATCH: - err = bpf_map_do_batch(&attr, uattr, BPF_MAP_LOOKUP_BATCH); + err = bpf_map_do_batch(&attr, uattr.user, BPF_MAP_LOOKUP_BATCH); break; case BPF_MAP_LOOKUP_AND_DELETE_BATCH: - err = bpf_map_do_batch(&attr, uattr, + err = bpf_map_do_batch(&attr, uattr.user, BPF_MAP_LOOKUP_AND_DELETE_BATCH); break; case BPF_MAP_UPDATE_BATCH: - err = bpf_map_do_batch(&attr, uattr, BPF_MAP_UPDATE_BATCH); + err = bpf_map_do_batch(&attr, uattr.user, BPF_MAP_UPDATE_BATCH); break; case BPF_MAP_DELETE_BATCH: - err = bpf_map_do_batch(&attr, uattr, BPF_MAP_DELETE_BATCH); + err = bpf_map_do_batch(&attr, uattr.user, BPF_MAP_DELETE_BATCH); break; case BPF_LINK_CREATE: - err = link_create(&attr); + err = link_create(&attr, uattr); break; case BPF_LINK_UPDATE: err = link_update(&attr); @@ -4487,7 +4505,7 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz err = bpf_link_get_fd_by_id(&attr); break; case BPF_LINK_GET_NEXT_ID: - err = bpf_obj_get_next_id(&attr, uattr, + err = bpf_obj_get_next_id(&attr, uattr.user, &link_idr, &link_idr_lock); break; case BPF_ENABLE_STATS: @@ -4510,6 +4528,11 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz return err; } +SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, size) +{ + return __sys_bpf(cmd, USER_BPFPTR(uattr), size); +} + static bool syscall_prog_is_valid_access(int off, int size, enum bpf_access_type type, const struct bpf_prog *prog, @@ -4524,7 +4547,19 @@ static bool syscall_prog_is_valid_access(int off, int size, BPF_CALL_3(bpf_sys_bpf, int, cmd, void *, attr, u32, attr_size) { - return -EINVAL; + switch (cmd) { + case BPF_MAP_CREATE: + case BPF_MAP_UPDATE_ELEM: + case BPF_MAP_FREEZE: + case BPF_PROG_LOAD: + break; + /* case BPF_PROG_TEST_RUN: + * is not part 
of this list to prevent recursive test_run + */ + default: + return -EINVAL; + } + return __sys_bpf(cmd, KERNEL_BPFPTR(attr), attr_size); } const struct bpf_func_proto bpf_sys_bpf_proto = { diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index ad4df0a4ce54..ba5aa685572c 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -9432,7 +9432,7 @@ static int check_abnormal_return(struct bpf_verifier_env *env) static int check_btf_func(struct bpf_verifier_env *env, const union bpf_attr *attr, - union bpf_attr __user *uattr) + bpfptr_t uattr) { const struct btf_type *type, *func_proto, *ret_type; u32 i, nfuncs, urec_size, min_size; @@ -9441,7 +9441,7 @@ static int check_btf_func(struct bpf_verifier_env *env, struct bpf_func_info_aux *info_aux = NULL; struct bpf_prog *prog; const struct btf *btf; - void __user *urecord; + bpfptr_t urecord; u32 prev_offset = 0; bool scalar_return; int ret = -ENOMEM; @@ -9469,7 +9469,7 @@ static int check_btf_func(struct bpf_verifier_env *env, prog = env->prog; btf = prog->aux->btf; - urecord = u64_to_user_ptr(attr->func_info); + urecord = make_bpfptr(attr->func_info, uattr.is_kernel); min_size = min_t(u32, krec_size, urec_size); krecord = kvcalloc(nfuncs, krec_size, GFP_KERNEL | __GFP_NOWARN); @@ -9487,13 +9487,15 @@ static int check_btf_func(struct bpf_verifier_env *env, /* set the size kernel expects so loader can zero * out the rest of the record. */ - if (put_user(min_size, &uattr->func_info_rec_size)) + if (copy_to_bpfptr_offset(uattr, + offsetof(union bpf_attr, func_info_rec_size), + &min_size, sizeof(min_size))) ret = -EFAULT; } goto err_free; } - if (copy_from_user(&krecord[i], urecord, min_size)) { + if (copy_from_bpfptr(&krecord[i], urecord, min_size)) { ret = -EFAULT; goto err_free; } @@ -9545,7 +9547,7 @@ static int check_btf_func(struct bpf_verifier_env *env, } prev_offset = krecord[i].insn_off; - urecord += urec_size; + bpfptr_add(&urecord, urec_size); } prog->aux->func_info = krecord; @@ -9577,14 +9579,14 @@ static void adjust_btf_func(struct bpf_verifier_env *env) static int check_btf_line(struct bpf_verifier_env *env, const union bpf_attr *attr, - union bpf_attr __user *uattr) + bpfptr_t uattr) { u32 i, s, nr_linfo, ncopy, expected_size, rec_size, prev_offset = 0; struct bpf_subprog_info *sub; struct bpf_line_info *linfo; struct bpf_prog *prog; const struct btf *btf; - void __user *ulinfo; + bpfptr_t ulinfo; int err; nr_linfo = attr->line_info_cnt; @@ -9610,7 +9612,7 @@ static int check_btf_line(struct bpf_verifier_env *env, s = 0; sub = env->subprog_info; - ulinfo = u64_to_user_ptr(attr->line_info); + ulinfo = make_bpfptr(attr->line_info, uattr.is_kernel); expected_size = sizeof(struct bpf_line_info); ncopy = min_t(u32, expected_size, rec_size); for (i = 0; i < nr_linfo; i++) { @@ -9618,14 +9620,15 @@ static int check_btf_line(struct bpf_verifier_env *env, if (err) { if (err == -E2BIG) { verbose(env, "nonzero tailing record in line_info"); - if (put_user(expected_size, - &uattr->line_info_rec_size)) + if (copy_to_bpfptr_offset(uattr, + offsetof(union bpf_attr, line_info_rec_size), + &expected_size, sizeof(expected_size))) err = -EFAULT; } goto err_free; } - if (copy_from_user(&linfo[i], ulinfo, ncopy)) { + if (copy_from_bpfptr(&linfo[i], ulinfo, ncopy)) { err = -EFAULT; goto err_free; } @@ -9677,7 +9680,7 @@ static int check_btf_line(struct bpf_verifier_env *env, } prev_offset = linfo[i].insn_off; - ulinfo += rec_size; + bpfptr_add(&ulinfo, rec_size); } if (s != env->subprog_cnt) { @@ -9699,7 +9702,7 @@ static int 
check_btf_line(struct bpf_verifier_env *env, static int check_btf_info(struct bpf_verifier_env *env, const union bpf_attr *attr, - union bpf_attr __user *uattr) + bpfptr_t uattr) { struct btf *btf; int err; @@ -13286,8 +13289,7 @@ struct btf *bpf_get_btf_vmlinux(void) return btf_vmlinux; } -int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, - union bpf_attr __user *uattr) +int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr) { u64 start_time = ktime_get_ns(); struct bpf_verifier_env *env; diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index a6972d7ddf80..aa47af349ba8 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -409,7 +409,7 @@ static void *bpf_ctx_init(const union bpf_attr *kattr, u32 max_size) return ERR_PTR(-ENOMEM); if (data_in) { - err = bpf_check_uarg_tail_zero(data_in, max_size, size); + err = bpf_check_uarg_tail_zero(USER_BPFPTR(data_in), max_size, size); if (err) { kfree(data); return ERR_PTR(err); From patchwork Sat May 8 03:48:19 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245765 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E4F15C433B4 for ; Sat, 8 May 2021 03:48:51 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BEFCE611CC for ; Sat, 8 May 2021 03:48:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231146AbhEHDtv (ORCPT ); Fri, 7 May 2021 23:49:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34332 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229947AbhEHDtu (ORCPT ); Fri, 7 May 2021 23:49:50 -0400 Received: from mail-pj1-x102d.google.com (mail-pj1-x102d.google.com [IPv6:2607:f8b0:4864:20::102d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D8B16C061574 for ; Fri, 7 May 2021 20:48:48 -0700 (PDT) Received: by mail-pj1-x102d.google.com with SMTP id k3-20020a17090ad083b0290155b934a295so6532214pju.2 for ; Fri, 07 May 2021 20:48:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=rltzKfGuTM0FZ3M77es8qWnUbcE1pdywuZVdQaq1mmQ=; b=H30M3Dz+iwSyykTrWji2NXgIHayC1yCLXqHUytrw0zhkTuiI2NOsgeaetzWxsvim5c gDRiO13T50I7/jkC7Rlxy5pBgFterCFis07zxMVPquqQn4287+7z8sZk7wVaaS51qE1d ob7cai+g3C+s+imFi3QwppOkafFqUBlUtoessWGJTr8AntB1mnf5QS7d6oovtAZGunCb dG4BReETsmqA/JOD5xMFotlivgWitFk8hXJ8qlCYWifCRBOJQvb9lQQ/Rxx1/SxksYnQ nMb7bnc47MOnVcmyh+tvefc7oTKiHw4ynNBcz0txSVyGR+nfHfYQfsKKjHkKxGmGNjxI zCGQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=rltzKfGuTM0FZ3M77es8qWnUbcE1pdywuZVdQaq1mmQ=; b=kY4AwSNOtRAEUKvZSS+E7ZfeQ7BKJzV3q9vAxtAYy4ek25zelyxkpmHXZGZc4t/rXO 
d99yfFqfTiuyWc916iH9X+InhN+sUgFhNpBI7byTWpd7suAAoTndYSxwt6XME2jcus2A 3io1PeLYvutUpMBPUORAWqRG5C51DfhxZg9zxaAo5X5MzeJHf2VqWDlTFJC6NiNtcFmD +bACjr5RZFNY3TSzq1xIpbnAedE0Q5UaOqmtq5abM4xi2gQ/kv77T//2trnRD0JxgpK7 gxOCRgfG/TbM6jIuSrXJPSZqpqI+SKl28huMcTnh6NQPuxdkQZ/N8peeOi49xx0Zo3Sy VEFA== X-Gm-Message-State: AOAM532gsqrQWxuCTI6DK8PwohsxS451L9k0WUTb3RejBIbghFdyo+WX bHkklx3gLB5bnwqfOmCWigg= X-Google-Smtp-Source: ABdhPJwO9o7JNzwhARRQXoGiGKxUwUA/VjzCpzKO1zOKM2c7hG+8iviiRwOREBimR8rLIXKrqXV4kw== X-Received: by 2002:a17:902:8205:b029:ee:aa49:489b with SMTP id x5-20020a1709028205b02900eeaa49489bmr13874849pln.5.1620445728474; Fri, 07 May 2021 20:48:48 -0700 (PDT) Received: from ast-mbp.thefacebook.com ([163.114.132.1]) by smtp.gmail.com with ESMTPSA id u12sm5784606pfh.122.2021.05.07.20.48.46 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Fri, 07 May 2021 20:48:47 -0700 (PDT) From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 04/22] libbpf: Support for syscall program type Date: Fri, 7 May 2021 20:48:19 -0700 Message-Id: <20210508034837.64585-5-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov Trivial support for syscall program type. Signed-off-by: Alexei Starovoitov Acked-by: Andrii Nakryiko --- tools/lib/bpf/libbpf.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index e2a3cf437814..491349a31a06 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -8884,6 +8884,8 @@ static const struct bpf_sec_def section_defs[] = { .expected_attach_type = BPF_TRACE_ITER, .is_attach_btf = true, .attach_fn = attach_iter), + SEC_DEF("syscall", SYSCALL, + .is_sleepable = true), BPF_EAPROG_SEC("xdp_devmap/", BPF_PROG_TYPE_XDP, BPF_XDP_DEVMAP), BPF_EAPROG_SEC("xdp_cpumap/", BPF_PROG_TYPE_XDP, From patchwork Sat May 8 03:48:20 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245767 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AD096C43460 for ; Sat, 8 May 2021 03:48:52 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8B8B5611CC for ; Sat, 8 May 2021 03:48:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231147AbhEHDtw (ORCPT ); Fri, 7 May 2021 23:49:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34340 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229947AbhEHDtw (ORCPT ); Fri, 7 May 2021 23:49:52 -0400 Received: from 
mail-pl1-x62a.google.com (mail-pl1-x62a.google.com [IPv6:2607:f8b0:4864:20::62a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EE26DC061574 for ; Fri, 7 May 2021 20:48:50 -0700 (PDT) Received: by mail-pl1-x62a.google.com with SMTP id b15so55922plh.10 for ; Fri, 07 May 2021 20:48:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=0K8Xzl62e9lc94hq4NcUnRsH0duC86K/QW580qzN/2I=; b=GU9MahvfOZ/S9uhKjfjj5rsLO0+7LXNg8njDFSH6MM9i2AgAjDJm9PdRkRxwB/YIjA h75EIC3SwWrGc4dwL3mLCe741+SJ52m313/NJOaaseccpNtukUgs49MQ6SkR9nYQp89l 8qptQUhlY1LwcHcfgh0/bdo9A5bNouuWuha+W8XmSjGcRVok5i1veC6IXgZA5buAQa6O O9yQWwzemQdLLeTigQRSwa2u1mEFUufgM+D3gjUvhz4Pkyi7CfKWprGJSxWH2EqJ7f0C 1vsKCHzuDOs31N5f+tT4xtj5d9dXJs3TQOYi9Kuxn4j1W0aJVD6a4fkh6R0lsFazOLFi +OyQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=0K8Xzl62e9lc94hq4NcUnRsH0duC86K/QW580qzN/2I=; b=ov/qHIa5tJE2ReeXHVU+BfaHTRlmFlzrDsApdxvKK78WZJUJK/Y/CkFhBSt6r3dKU1 OkIpSWDEh7Z1RO/OanwlHqUrMBd6PU03MUuphbnfar/D/2ETSEWVrjshVqew3A5ktgTH A4Ymr/vDKoE3TFrzEA+k+6z2iDnj/07vbQvZMCvL5m5ihPiaPfqd5884F0y5Hjfu5C+u qN30y1dljh0O4AdjjFb1G6JCND7S9xO4BBujDdt6n1UxDFcfIAuSsoC/B+KQsbdZ2Ebe C3QQifmk9LZqJB+6Qu36RUMi1yJUdDwehgqNWEz3JfrkGKUFN29WEBz5weo+3Sg6qofG ZazQ== X-Gm-Message-State: AOAM532CtoqmmG46hs/KW28LzW+o7B0gy9tWZAx5cxYM2JjXFV2Z6RH5 HCAmtQ6Dx9WS8/EBLwGlMYM= X-Google-Smtp-Source: ABdhPJzsJwGnMGBnCMbsIXg+zwxBi0LF80c1bgePLSkediLr9J5my8Q1vVbHuEI2Y5HKeaCCzmWQ/A== X-Received: by 2002:a17:902:c18d:b029:ee:f005:41d with SMTP id d13-20020a170902c18db02900eef005041dmr13354382pld.68.1620445730388; Fri, 07 May 2021 20:48:50 -0700 (PDT) Received: from ast-mbp.thefacebook.com ([163.114.132.1]) by smtp.gmail.com with ESMTPSA id u12sm5784606pfh.122.2021.05.07.20.48.48 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Fri, 07 May 2021 20:48:49 -0700 (PDT) From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 05/22] selftests/bpf: Test for syscall program type Date: Fri, 7 May 2021 20:48:20 -0700 Message-Id: <20210508034837.64585-6-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov bpf_prog_type_syscall is a program that creates a bpf map, updates it, and loads another bpf program using bpf_sys_bpf() helper. 
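As a hedged aside (not part of the patch): the libbpf call in the test below ultimately issues a plain BPF_PROG_TEST_RUN command. Since bpf_prog_test_run_syscall() accepts only ctx_in/ctx_size_in (no data_in/out, ctx_out, repeat or flags), driving an already loaded "syscall" program without libbpf could look roughly like this sketch; run_syscall_prog() is a hypothetical helper, not an API added by this series:

#include <linux/bpf.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>

/* Hypothetical helper: run a loaded "syscall" program once, handing it
 * @ctx via ctx_in; the kernel copies the context back out on return.
 */
static int run_syscall_prog(int prog_fd, void *ctx, __u32 ctx_size, __u32 *retval)
{
	union bpf_attr attr;
	int err;

	memset(&attr, 0, sizeof(attr));
	attr.test.prog_fd = prog_fd;
	attr.test.ctx_in = (unsigned long) ctx;	/* copied into the prog's ctx */
	attr.test.ctx_size_in = ctx_size;	/* must cover the prog's max ctx offset */

	err = syscall(__NR_bpf, BPF_PROG_TEST_RUN, &attr, sizeof(attr));
	if (!err && retval)
		*retval = attr.test.retval;	/* the program's return value */
	return err;
}
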
Signed-off-by: Alexei Starovoitov Acked-by: Andrii Nakryiko --- .../selftests/bpf/prog_tests/syscall.c | 49 +++++++++++++ tools/testing/selftests/bpf/progs/syscall.c | 71 +++++++++++++++++++ 2 files changed, 120 insertions(+) create mode 100644 tools/testing/selftests/bpf/prog_tests/syscall.c create mode 100644 tools/testing/selftests/bpf/progs/syscall.c diff --git a/tools/testing/selftests/bpf/prog_tests/syscall.c b/tools/testing/selftests/bpf/prog_tests/syscall.c new file mode 100644 index 000000000000..fb376c112f0c --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/syscall.c @@ -0,0 +1,49 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2021 Facebook */ +#include +#include "syscall.skel.h" + +struct args { + __u64 log_buf; + __u32 log_size; + int max_entries; + int map_fd; + int prog_fd; +}; + +void test_syscall(void) +{ + static char verifier_log[8192]; + struct args ctx = { + .max_entries = 1024, + .log_buf = (uintptr_t) verifier_log, + .log_size = sizeof(verifier_log), + }; + struct bpf_prog_test_run_attr tattr = { + .ctx_in = &ctx, + .ctx_size_in = sizeof(ctx), + }; + struct syscall *skel = NULL; + __u64 key = 12, value = 0; + __u32 duration = 0; + int err; + + skel = syscall__open_and_load(); + if (CHECK(!skel, "skel_load", "syscall skeleton failed\n")) + goto cleanup; + + tattr.prog_fd = bpf_program__fd(skel->progs.bpf_prog); + err = bpf_prog_test_run_xattr(&tattr); + ASSERT_EQ(err, 0, "err"); + ASSERT_EQ(tattr.retval, 1, "retval"); + ASSERT_GT(ctx.map_fd, 0, "ctx.map_fd"); + ASSERT_GT(ctx.prog_fd, 0, "ctx.prog_fd"); + ASSERT_OK(memcmp(verifier_log, "processed", sizeof("processed") - 1), + "verifier_log"); + + err = bpf_map_lookup_elem(ctx.map_fd, &key, &value); + ASSERT_EQ(err, 0, "map_lookup"); + ASSERT_EQ(value, 34, "map lookup value"); +cleanup: + syscall__destroy(skel); +} diff --git a/tools/testing/selftests/bpf/progs/syscall.c b/tools/testing/selftests/bpf/progs/syscall.c new file mode 100644 index 000000000000..865b5269ecbb --- /dev/null +++ b/tools/testing/selftests/bpf/progs/syscall.c @@ -0,0 +1,71 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2021 Facebook */ +#include +#include +#include +#include +#include <../../../tools/include/linux/filter.h> + +char _license[] SEC("license") = "GPL"; + +struct args { + __u64 log_buf; + __u32 log_size; + int max_entries; + int map_fd; + int prog_fd; +}; + +SEC("syscall") +int bpf_prog(struct args *ctx) +{ + static char license[] = "GPL"; + static struct bpf_insn insns[] = { + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), + BPF_LD_MAP_FD(BPF_REG_1, 0), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }; + static union bpf_attr map_create_attr = { + .map_type = BPF_MAP_TYPE_HASH, + .key_size = 8, + .value_size = 8, + }; + static union bpf_attr map_update_attr = { .map_fd = 1, }; + static __u64 key = 12; + static __u64 value = 34; + static union bpf_attr prog_load_attr = { + .prog_type = BPF_PROG_TYPE_XDP, + .insn_cnt = sizeof(insns) / sizeof(insns[0]), + }; + int ret; + + map_create_attr.max_entries = ctx->max_entries; + prog_load_attr.license = (long) license; + prog_load_attr.insns = (long) insns; + prog_load_attr.log_buf = ctx->log_buf; + prog_load_attr.log_size = ctx->log_size; + prog_load_attr.log_level = 1; + + ret = bpf_sys_bpf(BPF_MAP_CREATE, &map_create_attr, sizeof(map_create_attr)); + if (ret <= 0) + return ret; + ctx->map_fd = ret; + insns[3].imm = 
ret; + + map_update_attr.map_fd = ret; + map_update_attr.key = (long) &key; + map_update_attr.value = (long) &value; + ret = bpf_sys_bpf(BPF_MAP_UPDATE_ELEM, &map_update_attr, sizeof(map_update_attr)); + if (ret < 0) + return ret; + + ret = bpf_sys_bpf(BPF_PROG_LOAD, &prog_load_attr, sizeof(prog_load_attr)); + if (ret <= 0) + return ret; + ctx->prog_fd = ret; + return 1; +} From patchwork Sat May 8 03:48:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245769 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 53082C433B4 for ; Sat, 8 May 2021 03:48:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 33BD261107 for ; Sat, 8 May 2021 03:48:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231151AbhEHDty (ORCPT ); Fri, 7 May 2021 23:49:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34350 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229947AbhEHDtx (ORCPT ); Fri, 7 May 2021 23:49:53 -0400 Received: from mail-pg1-x536.google.com (mail-pg1-x536.google.com [IPv6:2607:f8b0:4864:20::536]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 72AA6C061574 for ; Fri, 7 May 2021 20:48:52 -0700 (PDT) Received: by mail-pg1-x536.google.com with SMTP id m12so8764326pgr.9 for ; Fri, 07 May 2021 20:48:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=n2RjWMYAO5iIDg2VJ348LHvtsRegR2feMc3MAzsS5l4=; b=q3tU2RtZEyItt2HzM6f2VVoc3u+GT2brxI5AU64odkm6j86GajNp7H/5JbCfRmeFL0 y3Ib7ikEulYZduW0Xh9iMODGNiuh6G5R55q87yQ4KodARe1FWoShrfHQ/mleK/OTqUJ9 kpHiFQEMTs+Q0U5WSllN+EuTjTwDB+lpnpnfYNXpkQ/BPW5EWLIs5899JY1/D1RiPpnF VUgiz3kZ/pZBXU6vm6pKGwk3cd91aeYJSoaZzoTv3AgGtoVUOhDej72wvFkbmRbkqwkT MwEnm0CwxD0pXYvfDjJefzVpta3lDQ8hH7Bveq/nHKk3L9+sR05VwXzArpDMmkW0wp2e wnkg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=n2RjWMYAO5iIDg2VJ348LHvtsRegR2feMc3MAzsS5l4=; b=mZdax8TeR5lOnlvwiaq2pDIxmaxTYHvxXYQcEERQ29gBGST/KkPn5DG9nTNdfF2zUK N4wWtCRBxPVuQNYRErSu4b2CJiGC+t7SJ6Lvh2mXNMDuow9ZyHo0XnxD6Mv8D9gAETpD u1caKYVzQDmHrAWr+pVQEaHwSsaev2AkH5Rlt2Yh+W5/XKxG/dI8tt8GZPfQIYIbGZzq utTJaULz714mvrAwyUS4OaUCgS8l47eof0WLKtoH9+0ywb0RownWX7yPoLywFOA2DPLM uujK452tuo61b6EDk2m+/e7w84/HMLkxZFgr1eNdPzvz4bUTzCaRb+BUXgltbW5UkR0/ 2pnA== X-Gm-Message-State: AOAM532/xPPmjtZ7BkAYX6FFSzRz+8I1G5THNTQkSyo2l0QsjDVvZGXr CHoWXKYmSUcBU6oyzS4eMH4= X-Google-Smtp-Source: ABdhPJzUelU4IVka4wYG6bZ8E/hVd0LPfNfydtEbG0CZWnWeqosp058YQW979ZbOaVVqW9PQmpY7Ag== X-Received: by 2002:aa7:8d84:0:b029:1f8:3449:1bc6 with SMTP id i4-20020aa78d840000b02901f834491bc6mr13288950pfr.76.1620445732027; Fri, 07 May 2021 20:48:52 -0700 (PDT) Received: from 
ast-mbp.thefacebook.com ([163.114.132.1]) by smtp.gmail.com with ESMTPSA id u12sm5784606pfh.122.2021.05.07.20.48.50 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Fri, 07 May 2021 20:48:51 -0700 (PDT) From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 06/22] bpf: Make btf_load command to be bpfptr_t compatible. Date: Fri, 7 May 2021 20:48:21 -0700 Message-Id: <20210508034837.64585-7-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov Similar to prog_load, make the btf_load command available to the bpf_prog_type_syscall program. Signed-off-by: Alexei Starovoitov Acked-by: Andrii Nakryiko --- include/linux/btf.h | 2 +- kernel/bpf/btf.c | 8 ++++---- kernel/bpf/syscall.c | 7 ++++--- 3 files changed, 9 insertions(+), 8 deletions(-) diff --git a/include/linux/btf.h b/include/linux/btf.h index 3bac66e0183a..94a0c976c90f 100644 --- a/include/linux/btf.h +++ b/include/linux/btf.h @@ -21,7 +21,7 @@ extern const struct file_operations btf_fops; void btf_get(struct btf *btf); void btf_put(struct btf *btf); -int btf_new_fd(const union bpf_attr *attr); +int btf_new_fd(const union bpf_attr *attr, bpfptr_t uattr); struct btf *btf_get_by_fd(int fd); int btf_get_info_by_fd(const struct btf *btf, const union bpf_attr *attr, diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 0600ed325fa0..fbf6c06a9d62 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -4257,7 +4257,7 @@ static int btf_parse_hdr(struct btf_verifier_env *env) return 0; } -static struct btf *btf_parse(void __user *btf_data, u32 btf_data_size, +static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size, u32 log_level, char __user *log_ubuf, u32 log_size) { struct btf_verifier_env *env = NULL; @@ -4306,7 +4306,7 @@ static struct btf *btf_parse(void __user *btf_data, u32 btf_data_size, btf->data = data; btf->data_size = btf_data_size; - if (copy_from_user(data, btf_data, btf_data_size)) { + if (copy_from_bpfptr(data, btf_data, btf_data_size)) { err = -EFAULT; goto errout; } @@ -5780,12 +5780,12 @@ static int __btf_new_fd(struct btf *btf) return anon_inode_getfd("btf", &btf_fops, btf, O_RDONLY | O_CLOEXEC); } -int btf_new_fd(const union bpf_attr *attr) +int btf_new_fd(const union bpf_attr *attr, bpfptr_t uattr) { struct btf *btf; int ret; - btf = btf_parse(u64_to_user_ptr(attr->btf), + btf = btf_parse(make_bpfptr(attr->btf, uattr.is_kernel), attr->btf_size, attr->btf_log_level, u64_to_user_ptr(attr->btf_log_buf), attr->btf_log_size); diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 28387fe149ba..415865c49dd4 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -3842,7 +3842,7 @@ static int bpf_obj_get_info_by_fd(const union bpf_attr *attr, #define BPF_BTF_LOAD_LAST_FIELD btf_log_level -static int bpf_btf_load(const union bpf_attr *attr) +static int bpf_btf_load(const union bpf_attr *attr, bpfptr_t uattr) { if (CHECK_ATTR(BPF_BTF_LOAD)) return -EINVAL; @@ -3850,7 +3850,7 @@ static int bpf_btf_load(const union bpf_attr *attr) if (!bpf_capable()) return -EPERM; - return btf_new_fd(attr); + return btf_new_fd(attr, uattr); } #define BPF_BTF_GET_FD_BY_ID_LAST_FIELD btf_id @@ -4471,7 +4471,7 @@
static int __sys_bpf(int cmd, bpfptr_t uattr, unsigned int size) err = bpf_raw_tracepoint_open(&attr); break; case BPF_BTF_LOAD: - err = bpf_btf_load(&attr); + err = bpf_btf_load(&attr, uattr); break; case BPF_BTF_GET_FD_BY_ID: err = bpf_btf_get_fd_by_id(&attr); @@ -4552,6 +4552,7 @@ BPF_CALL_3(bpf_sys_bpf, int, cmd, void *, attr, u32, attr_size) case BPF_MAP_UPDATE_ELEM: case BPF_MAP_FREEZE: case BPF_PROG_LOAD: + case BPF_BTF_LOAD: break; /* case BPF_PROG_TEST_RUN: * is not part of this list to prevent recursive test_run From patchwork Sat May 8 03:48:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245771 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 243B1C43460 for ; Sat, 8 May 2021 03:48:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 05C62611CC for ; Sat, 8 May 2021 03:48:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231152AbhEHDtz (ORCPT ); Fri, 7 May 2021 23:49:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34358 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229947AbhEHDty (ORCPT ); Fri, 7 May 2021 23:49:54 -0400 Received: from mail-pj1-x102b.google.com (mail-pj1-x102b.google.com [IPv6:2607:f8b0:4864:20::102b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 11DDDC061574 for ; Fri, 7 May 2021 20:48:54 -0700 (PDT) Received: by mail-pj1-x102b.google.com with SMTP id gc22-20020a17090b3116b02901558435aec1so6653270pjb.4 for ; Fri, 07 May 2021 20:48:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=7aJAToK/NwcLONPHrtxlk3jkIiqRlSPfWOsB4QDB7vg=; b=CqDFNJfqZPmPPNJMjjIVFvWjx/8cpqWHYM3qFeht7rq0gpPFHX7m4ac1goBeYhUndP i/8PFTmNrNkjlCbfpG8NKqOYwzN1HNQGhvdjLhew//EUrjmT8DSiXDY5uKYjxT3m8WI/ XgzAOhxTSZX+Co7KUNSW7DF/U3uMljZmFPZHbzs/B6GRKX1F6ucbsI4SwXarWgnhDaJb CrxKqDSQeuactBg7w29Ji8f8xpXgVjA+Qttnge+Z4Hw+xdbEWsAbU8DYyL71vTXHAJ03 6Y1hqC70sF7aUoZBwtYkTrMADg/5tloovrtZwfN+EYfhgpnZqpjfPOVu1kUXKJ+RGw8M 0bZg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=7aJAToK/NwcLONPHrtxlk3jkIiqRlSPfWOsB4QDB7vg=; b=IQzXeJTdvTpHndicRJnBXnIs3WO+9T1a6UWiv2fbNDlLoM5bw9YkOKvK6PH2VvE0xW RUtsaiFQKp/9Vd2pXVFUQpYYA2DpQ/Ey+tl32bG+2uoJQLC3GBRK8RNYsh3QOztd8msB WgJuSSWhOK9KYpll09//yz97YxFqvlSjnnGkyZ3IvZSferX983Tt4Rm+ndexyYV7S7EQ GkEIuyosSNh1IN2VjFVEZ5msD5aIel25vbhInsO9XyDClbeUVUwZ96g0ZUt+5kWljxvG G3PGSQpBq0H8p/E+WTFYRIXoNSfwV2fwfkoAawXq9RZ1ox0Wi0srDWQhLFHGTbfNuW6j WiFg== X-Gm-Message-State: AOAM530EGPmER2CAh5/ONxd4iQ0p4a0ooMMr0NBhAmjjmoimZkEMZc5D 6KEFwb3l5I9fp1KeRwDOneM= X-Google-Smtp-Source: ABdhPJwFyPbooss4Dexevzf5Koq4AzWCZy+2/+//jTVLRnzMaztXmS0xEFuKscxR6rbKkEW5DMEqaA== 
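The bpfptr_t plumbing in the patch above is what lets the same BPF_BTF_LOAD path accept either a user-space pointer (regular syscall) or a kernel pointer (when invoked via bpf_sys_bpf() from a BPF_PROG_TYPE_SYSCALL program). A minimal sketch of the idea follows; it uses a simplified stand-in type, not the kernel's actual bpfptr_t (which is a thin wrapper around sockptr_t):

/*
 * Simplified sketch of the bpfptr_t idea (not the kernel's actual code):
 * one pointer type that can carry either a user or a kernel address, so a
 * single copy routine serves both the regular syscall path and a
 * BPF_PROG_TYPE_SYSCALL caller.
 */
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/uaccess.h>

typedef struct {
        union {
                void *kernel;
                void __user *user;
        };
        bool is_kernel;
} bpfptr_sketch_t;

static inline long copy_from_bpfptr_sketch(void *dst, bpfptr_sketch_t src, size_t size)
{
        if (src.is_kernel) {
                memcpy(dst, src.kernel, size);
                return 0;
        }
        return copy_from_user(dst, src.user, size) ? -EFAULT : 0;
}

With btf_parse() taking such a pointer, bpf_btf_load() no longer has to care whether the attr payload came from user space or from a bpf program.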
X-Received: by 2002:a17:903:2403:b029:ee:eaf1:848d with SMTP id e3-20020a1709032403b02900eeeaf1848dmr13206225plo.63.1620445733667; Fri, 07 May 2021 20:48:53 -0700 (PDT) Received: from ast-mbp.thefacebook.com ([163.114.132.1]) by smtp.gmail.com with ESMTPSA id u12sm5784606pfh.122.2021.05.07.20.48.52 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Fri, 07 May 2021 20:48:53 -0700 (PDT) From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 07/22] selftests/bpf: Test for btf_load command. Date: Fri, 7 May 2021 20:48:22 -0700 Message-Id: <20210508034837.64585-8-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov Improve selftest to check that btf_load is working from bpf program. Signed-off-by: Alexei Starovoitov --- tools/testing/selftests/bpf/progs/syscall.c | 48 +++++++++++++++++++++ 1 file changed, 48 insertions(+) diff --git a/tools/testing/selftests/bpf/progs/syscall.c b/tools/testing/selftests/bpf/progs/syscall.c index 865b5269ecbb..4353b8d8fb7f 100644 --- a/tools/testing/selftests/bpf/progs/syscall.c +++ b/tools/testing/selftests/bpf/progs/syscall.c @@ -5,6 +5,7 @@ #include #include #include <../../../tools/include/linux/filter.h> +#include char _license[] SEC("license") = "GPL"; @@ -16,6 +17,45 @@ struct args { int prog_fd; }; +#define BTF_INFO_ENC(kind, kind_flag, vlen) \ + ((!!(kind_flag) << 31) | ((kind) << 24) | ((vlen) & BTF_MAX_VLEN)) +#define BTF_TYPE_ENC(name, info, size_or_type) (name), (info), (size_or_type) +#define BTF_INT_ENC(encoding, bits_offset, nr_bits) \ + ((encoding) << 24 | (bits_offset) << 16 | (nr_bits)) +#define BTF_TYPE_INT_ENC(name, encoding, bits_offset, bits, sz) \ + BTF_TYPE_ENC(name, BTF_INFO_ENC(BTF_KIND_INT, 0, 0), sz), \ + BTF_INT_ENC(encoding, bits_offset, bits) + +static int btf_load(void) +{ + struct btf_blob { + struct btf_header btf_hdr; + __u32 types[8]; + __u32 str; + } raw_btf = { + .btf_hdr = { + .magic = BTF_MAGIC, + .version = BTF_VERSION, + .hdr_len = sizeof(struct btf_header), + .type_len = sizeof(__u32) * 8, + .str_off = sizeof(__u32) * 8, + .str_len = sizeof(__u32), + }, + .types = { + /* long */ + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 64, 8), /* [1] */ + /* unsigned long */ + BTF_TYPE_INT_ENC(0, 0, 0, 64, 8), /* [2] */ + }, + }; + static union bpf_attr btf_load_attr = { + .btf_size = sizeof(raw_btf), + }; + + btf_load_attr.btf = (long)&raw_btf; + return bpf_sys_bpf(BPF_BTF_LOAD, &btf_load_attr, sizeof(btf_load_attr)); +} + SEC("syscall") int bpf_prog(struct args *ctx) { @@ -33,6 +73,8 @@ int bpf_prog(struct args *ctx) .map_type = BPF_MAP_TYPE_HASH, .key_size = 8, .value_size = 8, + .btf_key_type_id = 1, + .btf_value_type_id = 2, }; static union bpf_attr map_update_attr = { .map_fd = 1, }; static __u64 key = 12; @@ -43,7 +85,13 @@ int bpf_prog(struct args *ctx) }; int ret; + ret = btf_load(); + if (ret < 0) + return ret; + map_create_attr.max_entries = ctx->max_entries; + map_create_attr.btf_fd = ret; + prog_load_attr.license = (long) license; prog_load_attr.insns = (long) insns; prog_load_attr.log_buf = ctx->log_buf; From patchwork Sat May 8 03:48:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
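The raw_btf blob constructed by btf_load() in the selftest above is fairly dense. Purely as an illustration (this snippet is not part of the selftest), expanding the first BTF_TYPE_INT_ENC() entry by hand shows the three __u32 type words plus the one __u32 of integer metadata that describe the anonymous signed 64-bit key type:

/* Illustrative only: hand-expansion of the selftest's
 * BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 64, 8).
 */
#include <linux/btf.h>
#include <stdio.h>

int main(void)
{
        unsigned int name_off = 0;                       /* anonymous type, no string */
        unsigned int info = BTF_KIND_INT << 24;          /* kind = INT, vlen = 0, kind_flag = 0 */
        unsigned int size = 8;                           /* byte size of the integer */
        unsigned int int_data = (BTF_INT_SIGNED << 24) | (0 << 16) | 64; /* signed, bit offset 0, 64 bits */

        /* prints: 0x00000000 0x01000000 0x00000008 0x01000040 */
        printf("0x%08x 0x%08x 0x%08x 0x%08x\n", name_off, info, size, int_data);
        return 0;
}

The second entry in raw_btf.types[] is the same layout without BTF_INT_SIGNED, giving the unsigned 64-bit value type.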
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245773 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1E866C433B4 for ; Sat, 8 May 2021 03:48:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id F1C4F61106 for ; Sat, 8 May 2021 03:48:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230520AbhEHDt5 (ORCPT ); Fri, 7 May 2021 23:49:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34368 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229947AbhEHDt5 (ORCPT ); Fri, 7 May 2021 23:49:57 -0400 Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com [IPv6:2607:f8b0:4864:20::636]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3A6E1C061574 for ; Fri, 7 May 2021 20:48:56 -0700 (PDT) Received: by mail-pl1-x636.google.com with SMTP id a11so6233311plh.3 for ; Fri, 07 May 2021 20:48:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=IENnOVrj9N0Os5OTKavEIzHiuU2ntsg+fjSHvpV+DRE=; b=tLR/99Ky4DxP1mbQqXM20yTyjsod9qyo2DFoUJKwo5GForyTr8Ql20G3sDvJEWOjI+ x9PUeEVFeqqKmiOC8Cc4u7XJzVh90S4qEOa2VxEI5PRs/dpZDfA4J1FcFBKWf7KQQ70l Nl6u9JQyDNffXM8lCvm4MI24Hcz0pSO0HVYCvk7x/JZviuF/Ei8a9IlmVPEEDv/4rygX qWghbvB75hY45xWmB2OYxWvUYjA5m3sTmj0RYrY4bqQWGbyJYu/djo6d/iXRjGL5sqHK b3xnWY2ux7OtJjvfvR/EuFMrbapSIJMFIvRoaWbJsFTyKLg9bO0owDSTCU5+a0mV+6L8 cS/Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=IENnOVrj9N0Os5OTKavEIzHiuU2ntsg+fjSHvpV+DRE=; b=pBPH5fKFTrRxwAwyXUSfFrll28rzVk5keZOcYb2dkTTjEhLLVxC8cb/z9ZVFfPMVU6 JPDhxRXldoTfqHpcyouTCBNMv1Ipdk9ysk6tTZ6mfbbIOt8V0PoM8z+MbUEo+GjxP1YH olhZUpaZ3BnimjY0DzLC1tBsfDC1kHnFxFdrINZf61e9J+IRuyWGt2YxMr/jkAe/pIvH CuEqootk/P+P1KXEFq/bng4N/86qAa2pBgyALaNw6R9ro6VVzUDMMTLIuZ6PwprX+uIA snz1hfmavQYTvEfzSpNDGPcRFdnuoceSAnkz+4jNh8g8zjcTt8OGLBdmP8MO7OPfU6Aw fw2g== X-Gm-Message-State: AOAM5337xdZBBErRsQ7rvGoEmLvEAtQ2L364Ao7KdUda5pYVBLPFwhHy Zf1X5UQGbCy3sQrTjPS9G3w= X-Google-Smtp-Source: ABdhPJwDeksnHWekZuA7O9EPJy0aVvr4//rFJpkLHxJtXE+qrdhGxH6SrkKYyf8GvjqO1zm7fJP2Sg== X-Received: by 2002:a17:90b:4c4f:: with SMTP id np15mr13798468pjb.191.1620445735691; Fri, 07 May 2021 20:48:55 -0700 (PDT) Received: from ast-mbp.thefacebook.com ([163.114.132.1]) by smtp.gmail.com with ESMTPSA id u12sm5784606pfh.122.2021.05.07.20.48.53 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Fri, 07 May 2021 20:48:54 -0700 (PDT) From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 08/22] bpf: Introduce fd_idx Date: Fri, 7 May 2021 20:48:23 -0700 Message-Id: 
<20210508034837.64585-9-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov A typical program loading sequence involves creating bpf maps and applying map FDs into bpf instructions in various places in the bpf program. This job is done by libbpf, which uses compiler-generated ELF relocations to patch certain instructions after maps are created and BTFs are loaded. The goal of fd_idx is to allow bpf instructions to stay immutable after compilation. At load time libbpf would still create maps as usual, but it wouldn't need to patch instructions. It would store map_fds into a __u32 fd_array[] and pass that pointer to sys_bpf(BPF_PROG_LOAD). Signed-off-by: Alexei Starovoitov Acked-by: Andrii Nakryiko --- include/linux/bpf_verifier.h | 1 + include/uapi/linux/bpf.h | 16 ++++++++---- kernel/bpf/syscall.c | 2 +- kernel/bpf/verifier.c | 47 ++++++++++++++++++++++++++-------- tools/include/uapi/linux/bpf.h | 16 ++++++++---- 5 files changed, 61 insertions(+), 21 deletions(-) diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index 6023a1367853..a5a3b4b3e804 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -441,6 +441,7 @@ struct bpf_verifier_env { u32 peak_states; /* longest register parentage chain walked for liveness marking */ u32 longest_mark_read_walk; + bpfptr_t fd_array; }; __printf(2, 0) void bpf_verifier_vlog(struct bpf_verifier_log *log, diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index c92648f38144..de58a714ed36 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -1098,8 +1098,8 @@ enum bpf_link_type { /* When BPF ldimm64's insn[0].src_reg != 0 then this can have * the following extensions: * - * insn[0].src_reg: BPF_PSEUDO_MAP_FD - * insn[0].imm: map fd + * insn[0].src_reg: BPF_PSEUDO_MAP_[FD|IDX] + * insn[0].imm: map fd or fd_idx * insn[1].imm: 0 * insn[0].off: 0 * insn[1].off: 0 @@ -1107,15 +1107,19 @@ enum bpf_link_type { * verifier type: CONST_PTR_TO_MAP */ #define BPF_PSEUDO_MAP_FD 1 -/* insn[0].src_reg: BPF_PSEUDO_MAP_VALUE - * insn[0].imm: map fd +#define BPF_PSEUDO_MAP_IDX 5 + +/* insn[0].src_reg: BPF_PSEUDO_MAP_[IDX_]VALUE + * insn[0].imm: map fd or fd_idx * insn[1].imm: offset into value * insn[0].off: 0 * insn[1].off: 0 * ldimm64 rewrite: address of map[0]+offset * verifier type: PTR_TO_MAP_VALUE */ -#define BPF_PSEUDO_MAP_VALUE 2 +#define BPF_PSEUDO_MAP_VALUE 2 +#define BPF_PSEUDO_MAP_IDX_VALUE 6 + /* insn[0].src_reg: BPF_PSEUDO_BTF_ID * insn[0].imm: kernel btd id of VAR * insn[1].imm: 0 @@ -1315,6 +1319,8 @@ union bpf_attr { /* or valid module BTF object fd or 0 to attach to vmlinux */ __u32 attach_btf_obj_fd; }; + __u32 :32; /* pad */ + __aligned_u64 fd_array; /* array of FDs */ }; struct { /* anonymous struct used by BPF_OBJ_* commands */ diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 415865c49dd4..da7dc2406470 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -2089,7 +2089,7 @@ static bool is_perfmon_prog_type(enum bpf_prog_type prog_type) } /* last field in 'union bpf_attr' used by this command */ -#define BPF_PROG_LOAD_LAST_FIELD attach_prog_fd +#define BPF_PROG_LOAD_LAST_FIELD fd_array static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr) { diff --git
a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index ba5aa685572c..11dfa4420053 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -8911,12 +8911,14 @@ static int check_ld_imm(struct bpf_verifier_env *env, struct bpf_insn *insn) mark_reg_known_zero(env, regs, insn->dst_reg); dst_reg->map_ptr = map; - if (insn->src_reg == BPF_PSEUDO_MAP_VALUE) { + if (insn->src_reg == BPF_PSEUDO_MAP_VALUE || + insn->src_reg == BPF_PSEUDO_MAP_IDX_VALUE) { dst_reg->type = PTR_TO_MAP_VALUE; dst_reg->off = aux->map_off; if (map_value_has_spin_lock(map)) dst_reg->id = ++env->id_gen; - } else if (insn->src_reg == BPF_PSEUDO_MAP_FD) { + } else if (insn->src_reg == BPF_PSEUDO_MAP_FD || + insn->src_reg == BPF_PSEUDO_MAP_IDX) { dst_reg->type = CONST_PTR_TO_MAP; } else { verbose(env, "bpf verifier is misconfigured\n"); @@ -11185,6 +11187,7 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env) struct bpf_map *map; struct fd f; u64 addr; + u32 fd; if (i == insn_cnt - 1 || insn[1].code != 0 || insn[1].dst_reg != 0 || insn[1].src_reg != 0 || @@ -11214,16 +11217,38 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env) /* In final convert_pseudo_ld_imm64() step, this is * converted into regular 64-bit imm load insn. */ - if ((insn[0].src_reg != BPF_PSEUDO_MAP_FD && - insn[0].src_reg != BPF_PSEUDO_MAP_VALUE) || - (insn[0].src_reg == BPF_PSEUDO_MAP_FD && - insn[1].imm != 0)) { - verbose(env, - "unrecognized bpf_ld_imm64 insn\n"); + switch (insn[0].src_reg) { + case BPF_PSEUDO_MAP_VALUE: + case BPF_PSEUDO_MAP_IDX_VALUE: + break; + case BPF_PSEUDO_MAP_FD: + case BPF_PSEUDO_MAP_IDX: + if (insn[1].imm == 0) + break; + fallthrough; + default: + verbose(env, "unrecognized bpf_ld_imm64 insn\n"); return -EINVAL; } - f = fdget(insn[0].imm); + switch (insn[0].src_reg) { + case BPF_PSEUDO_MAP_IDX_VALUE: + case BPF_PSEUDO_MAP_IDX: + if (bpfptr_is_null(env->fd_array)) { + verbose(env, "fd_idx without fd_array is invalid\n"); + return -EPROTO; + } + if (copy_from_bpfptr_offset(&fd, env->fd_array, + insn[0].imm * sizeof(fd), + sizeof(fd))) + return -EFAULT; + break; + default: + fd = insn[0].imm; + break; + } + + f = fdget(fd); map = __bpf_map_get(f); if (IS_ERR(map)) { verbose(env, "fd %d is not pointing to valid bpf_map\n", @@ -11238,7 +11263,8 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env) } aux = &env->insn_aux_data[i]; - if (insn->src_reg == BPF_PSEUDO_MAP_FD) { + if (insn[0].src_reg == BPF_PSEUDO_MAP_FD || + insn[0].src_reg == BPF_PSEUDO_MAP_IDX) { addr = (unsigned long)map; } else { u32 off = insn[1].imm; @@ -13319,6 +13345,7 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr) env->insn_aux_data[i].orig_idx = i; env->prog = *prog; env->ops = bpf_verifier_ops[env->prog->type]; + env->fd_array = make_bpfptr(attr->fd_array, uattr.is_kernel); is_priv = bpf_capable(); bpf_get_btf_vmlinux(); diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index c92648f38144..de58a714ed36 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -1098,8 +1098,8 @@ enum bpf_link_type { /* When BPF ldimm64's insn[0].src_reg != 0 then this can have * the following extensions: * - * insn[0].src_reg: BPF_PSEUDO_MAP_FD - * insn[0].imm: map fd + * insn[0].src_reg: BPF_PSEUDO_MAP_[FD|IDX] + * insn[0].imm: map fd or fd_idx * insn[1].imm: 0 * insn[0].off: 0 * insn[1].off: 0 @@ -1107,15 +1107,19 @@ enum bpf_link_type { * verifier type: CONST_PTR_TO_MAP */ #define BPF_PSEUDO_MAP_FD 1 -/* insn[0].src_reg: BPF_PSEUDO_MAP_VALUE - 
* insn[0].imm: map fd +#define BPF_PSEUDO_MAP_IDX 5 + +/* insn[0].src_reg: BPF_PSEUDO_MAP_[IDX_]VALUE + * insn[0].imm: map fd or fd_idx * insn[1].imm: offset into value * insn[0].off: 0 * insn[1].off: 0 * ldimm64 rewrite: address of map[0]+offset * verifier type: PTR_TO_MAP_VALUE */ -#define BPF_PSEUDO_MAP_VALUE 2 +#define BPF_PSEUDO_MAP_VALUE 2 +#define BPF_PSEUDO_MAP_IDX_VALUE 6 + /* insn[0].src_reg: BPF_PSEUDO_BTF_ID * insn[0].imm: kernel btd id of VAR * insn[1].imm: 0 @@ -1315,6 +1319,8 @@ union bpf_attr { /* or valid module BTF object fd or 0 to attach to vmlinux */ __u32 attach_btf_obj_fd; }; + __u32 :32; /* pad */ + __aligned_u64 fd_array; /* array of FDs */ }; struct { /* anonymous struct used by BPF_OBJ_* commands */ From patchwork Sat May 8 03:48:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245775 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C9158C43461 for ; Sat, 8 May 2021 03:49:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A5C1C611CC for ; Sat, 8 May 2021 03:49:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229947AbhEHDuA (ORCPT ); Fri, 7 May 2021 23:50:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34378 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231153AbhEHDt7 (ORCPT ); Fri, 7 May 2021 23:49:59 -0400 Received: from mail-pj1-x102d.google.com (mail-pj1-x102d.google.com [IPv6:2607:f8b0:4864:20::102d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 10296C061761 for ; Fri, 7 May 2021 20:48:58 -0700 (PDT) Received: by mail-pj1-x102d.google.com with SMTP id md17so6292948pjb.0 for ; Fri, 07 May 2021 20:48:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=a4f5LsGI83/jfgglNNDvyMZbbZCWkDq90kN1+mjqueY=; b=RqSlRW0cY4lKYrtg1S28scNtUBVDoSVG9el9nj2M8szAZ9+lXVfWAgSXDjhMtmKsa8 FFM5JRcc9fftT7WycUy/tQ1uZjSqw1fRz7Um6tt4xzDU6BacRwUgpObpGEl3Q2uBD1+f VWC/DDmJ54j15gTQp+H/9IBuXai2UA6cu70UApvimAtYLxi+UkJD1W14VXom50/d79xU QwaA0uRA/IiGK8EGQpyvo8PAKs0OJ2+yG/CZcDwyLaWtn+rI50MVArtQ9e5gjLytIqbF /kmFnvzdRx8S9ykbuo07x/OkDf6pqTmUsC6Orq9kFw/cNQZ3HfGHqcmFk7FUT9aTqEYg y8eA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=a4f5LsGI83/jfgglNNDvyMZbbZCWkDq90kN1+mjqueY=; b=TrRbWH6MOrMBzfg6FbNvI3Xz+IAu+cwGC3jGzWozrSXc0res93w2vRqNgho6iV2cfX RBUFs6DupaYosUF1qwV5kXYgZno+v40Kem75fGoRru+2+sDYnDUwCgmpruvNsehDMmcW nO14ayxGoX3DIH2sWet3PyK+qAEICO1xIiJIcPjdZM+qIPZGRM8h826zO8TPhBYkR9PJ vqOloWBCELsqUNrjCb5UbDjQtHrK5GwKxRkNBVLNioJ5YFJaP4h3l3jejDxACr3zcmc4 YXTRGpc6X0GjngATX+6plfLFLkVKh5pmR+ot1pCFAUhUMRo+Ryz6MTv7JAjIMpFSbETQ 49DQ== X-Gm-Message-State: 
AOAM5317dQ0a/iMjAd7SpbpRqEZUckMrzgwqZHkBdldG0mxfwVw7sMCa +kc/8LOm7SyI18W4oAl+51Mb0p9ABvQ= X-Google-Smtp-Source: ABdhPJwwIzNl5/rM3CrzBKQLh0qaUkM5ukL44qIwFNsTMpD+yUqU9B8AWpmef+Azw2qMMrUTZcl1oQ== X-Received: by 2002:a17:902:8b81:b029:eb:5a4:9cae with SMTP id ay1-20020a1709028b81b02900eb05a49caemr13738885plb.13.1620445737577; Fri, 07 May 2021 20:48:57 -0700 (PDT) Received: from ast-mbp.thefacebook.com ([163.114.132.1]) by smtp.gmail.com with ESMTPSA id u12sm5784606pfh.122.2021.05.07.20.48.55 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Fri, 07 May 2021 20:48:57 -0700 (PDT) From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 09/22] libbpf: Support for fd_idx Date: Fri, 7 May 2021 20:48:24 -0700 Message-Id: <20210508034837.64585-10-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov Add support for FD_IDX and make libbpf prefer that approach to loading programs to make testing and bisection easy. The next patch will fine tune libbpf behavior to use FEAT_FD_IDX only when generating loader program. Signed-off-by: Alexei Starovoitov --- tools/lib/bpf/bpf.c | 1 + tools/lib/bpf/libbpf.c | 73 +++++++++++++++++++++++++++++---- tools/lib/bpf/libbpf_internal.h | 1 + 3 files changed, 68 insertions(+), 7 deletions(-) diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c index bba48ff4c5c0..b96a3aba6fcc 100644 --- a/tools/lib/bpf/bpf.c +++ b/tools/lib/bpf/bpf.c @@ -252,6 +252,7 @@ int libbpf__bpf_prog_load(const struct bpf_prog_load_params *load_attr) attr.prog_btf_fd = load_attr->prog_btf_fd; attr.prog_flags = load_attr->prog_flags; + attr.fd_array = ptr_to_u64(load_attr->fd_array); attr.func_info_rec_size = load_attr->func_info_rec_size; attr.func_info_cnt = load_attr->func_info_cnt; diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 491349a31a06..0b9270403a77 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -175,6 +175,8 @@ enum kern_feature_id { FEAT_MODULE_BTF, /* BTF_KIND_FLOAT support */ FEAT_BTF_FLOAT, + /* Kernel support for FD_IDX */ + FEAT_FD_IDX, __FEAT_CNT, }; @@ -288,6 +290,7 @@ struct bpf_program { __u32 line_info_rec_size; __u32 line_info_cnt; __u32 prog_flags; + int *fd_array; }; struct bpf_struct_ops { @@ -4238,6 +4241,29 @@ static int probe_module_btf(void) return !err; } +static int probe_kern_fd_idx(void) +{ + struct bpf_load_program_attr attr; + struct bpf_insn insns[] = { + BPF_LD_IMM64_RAW(BPF_REG_0, BPF_PSEUDO_MAP_IDX, 0), + BPF_EXIT_INSN(), + }; + int err; + + memset(&attr, 0, sizeof(attr)); + attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER; + attr.insns = insns; + attr.insns_cnt = ARRAY_SIZE(insns); + attr.license = "GPL"; + + err = bpf_load_program_xattr(&attr, NULL, 0); + if (err >= 0) { + close(err); + return 0; + } + return errno == EPROTO; +} + enum kern_feature_result { FEAT_UNKNOWN = 0, FEAT_SUPPORTED = 1, @@ -4288,6 +4314,9 @@ static struct kern_feature_desc { [FEAT_BTF_FLOAT] = { "BTF_KIND_FLOAT support", probe_kern_btf_float, }, + [FEAT_FD_IDX] = { + "prog_load with fd_idx", probe_kern_fd_idx, + }, }; static bool kernel_supports(enum kern_feature_id feat_id) @@ -6368,19 +6397,34 @@ bpf_object__relocate_data(struct 
bpf_object *obj, struct bpf_program *prog) switch (relo->type) { case RELO_LD64: - insn[0].src_reg = BPF_PSEUDO_MAP_FD; - insn[0].imm = obj->maps[relo->map_idx].fd; + if (kernel_supports(FEAT_FD_IDX)) { + insn[0].src_reg = BPF_PSEUDO_MAP_IDX; + insn[0].imm = relo->map_idx; + } else { + insn[0].src_reg = BPF_PSEUDO_MAP_FD; + insn[0].imm = obj->maps[relo->map_idx].fd; + } break; case RELO_DATA: - insn[0].src_reg = BPF_PSEUDO_MAP_VALUE; insn[1].imm = insn[0].imm + relo->sym_off; - insn[0].imm = obj->maps[relo->map_idx].fd; + if (kernel_supports(FEAT_FD_IDX)) { + insn[0].src_reg = BPF_PSEUDO_MAP_IDX_VALUE; + insn[0].imm = relo->map_idx; + } else { + insn[0].src_reg = BPF_PSEUDO_MAP_VALUE; + insn[0].imm = obj->maps[relo->map_idx].fd; + } break; case RELO_EXTERN_VAR: ext = &obj->externs[relo->sym_off]; if (ext->type == EXT_KCFG) { - insn[0].src_reg = BPF_PSEUDO_MAP_VALUE; - insn[0].imm = obj->maps[obj->kconfig_map_idx].fd; + if (kernel_supports(FEAT_FD_IDX)) { + insn[0].src_reg = BPF_PSEUDO_MAP_IDX_VALUE; + insn[0].imm = obj->kconfig_map_idx; + } else { + insn[0].src_reg = BPF_PSEUDO_MAP_VALUE; + insn[0].imm = obj->maps[obj->kconfig_map_idx].fd; + } insn[1].imm = ext->kcfg.data_off; } else /* EXT_KSYM */ { if (ext->ksym.type_id) { /* typed ksyms */ @@ -7106,6 +7150,7 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt, load_attr.attach_btf_id = prog->attach_btf_id; load_attr.kern_version = kern_version; load_attr.prog_ifindex = prog->prog_ifindex; + load_attr.fd_array = prog->fd_array; /* specify func_info/line_info only if kernel supports them */ btf_fd = bpf_object__btf_fd(prog->obj); @@ -7296,6 +7341,7 @@ static int bpf_object__load_progs(struct bpf_object *obj, int log_level) { struct bpf_program *prog; + int *fd_array = NULL; size_t i; int err; @@ -7306,6 +7352,14 @@ bpf_object__load_progs(struct bpf_object *obj, int log_level) return err; } + if (kernel_supports(FEAT_FD_IDX) && obj->nr_maps) { + fd_array = malloc(sizeof(int) * obj->nr_maps); + if (!fd_array) + return -ENOMEM; + for (i = 0; i < obj->nr_maps; i++) + fd_array[i] = obj->maps[i].fd; + } + for (i = 0; i < obj->nr_programs; i++) { prog = &obj->programs[i]; if (prog_is_subprog(obj, prog)) @@ -7315,10 +7369,15 @@ bpf_object__load_progs(struct bpf_object *obj, int log_level) continue; } prog->log_level |= log_level; + prog->fd_array = fd_array; err = bpf_program__load(prog, obj->license, obj->kern_version); - if (err) + prog->fd_array = NULL; + if (err) { + free(fd_array); return err; + } } + free(fd_array); return 0; } diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h index ee426226928f..2d4f4a995f35 100644 --- a/tools/lib/bpf/libbpf_internal.h +++ b/tools/lib/bpf/libbpf_internal.h @@ -249,6 +249,7 @@ struct bpf_prog_load_params { __u32 log_level; char *log_buf; size_t log_buf_sz; + int *fd_array; }; int libbpf__bpf_prog_load(const struct bpf_prog_load_params *load_attr); From patchwork Sat May 8 03:48:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245777 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, 
MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7DFD1C43460 for ; Sat, 8 May 2021 03:49:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 62FC261106 for ; Sat, 8 May 2021 03:49:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231160AbhEHDuC (ORCPT ); Fri, 7 May 2021 23:50:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34388 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231153AbhEHDuB (ORCPT ); Fri, 7 May 2021 23:50:01 -0400 Received: from mail-pf1-x42c.google.com (mail-pf1-x42c.google.com [IPv6:2607:f8b0:4864:20::42c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2B002C061574 for ; Fri, 7 May 2021 20:49:00 -0700 (PDT) Received: by mail-pf1-x42c.google.com with SMTP id a5so2227840pfa.11 for ; Fri, 07 May 2021 20:49:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=y+0bajeKnpUKeB1pFAPp4noxdFqa3DUCLHKzbgOGDnI=; b=goCL3ksFi3/TTS5U3EzFqv6O8XajWUFDx763w9/emlvHTGftAsoYc9d/BodIOZAbjc M02m4QmBk1V3+IZgwll2MfXdZWKViGZ3oZcAzZVkL7ica4aYKPaslBsn/vTpUPAvKOHE nVRePw1c9kvMdZIzGlmAi+szecyTa+1y3neKmc3lGzzi9KLdYynFVu0g8mBaFRDrF3SN JPkRA7zvWaAO1UMKVDTBwQ8JQY2NbB9NT8LzK940gUFV86VtzGlH6kDvaEBxi7jwqtCr jPDn07SoIlwHzseeqnatW/KXP/XaNqf/HL8gAp6lh6G3pzDqBGyEYBtsop9smnPLSICQ hcYg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=y+0bajeKnpUKeB1pFAPp4noxdFqa3DUCLHKzbgOGDnI=; b=fN2piEH9wQTHTUdwqQcrVYk49oyIwl6ZcvfigX8DPVBUkPG3nnMSl+FQ+Zzlf7OE8r VYsVxhdA8KjzQxW4tOEh7fERBfTap3di4OFKSaHfWqqGNWDDtkHAdQZzKrm4HuQ6QrTb omyrZIq3RQ6XKqugUXo58K/VKa/aHmZbCN5V0dpdavVaUwion8ILCAWyKqof2pj8zNHy 00aJFJACOhh1Dnb+qT0gPvgKvGpdnTdK6frNgS8iUYIePy1vGzfXDzJHy6GU6DmRtk5G eD0PHU/EUcuPnOOCiEraX5W/v8dTotHlSowHump83rKSIisBERFiVxufowkWrMeaXGTf Z3Fw== X-Gm-Message-State: AOAM533x7FDVtnLwB3w/hwZpnx5Fg+dPpJEHU+vPKf7L7C65fAzrTWqz GTTghdP7j2TgzC7Y4rBSaqhlS1h5hrM= X-Google-Smtp-Source: ABdhPJx1TP301IhcZhbnRtfO48aH9JqaqunxNwWm77sBI8HScoEfeYZ2Sc3NM+5+e3Vf1/iQs4ynjw== X-Received: by 2002:a63:ff66:: with SMTP id s38mr13422863pgk.154.1620445739688; Fri, 07 May 2021 20:48:59 -0700 (PDT) Received: from ast-mbp.thefacebook.com ([163.114.132.1]) by smtp.gmail.com with ESMTPSA id u12sm5784606pfh.122.2021.05.07.20.48.57 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Fri, 07 May 2021 20:48:59 -0700 (PDT) From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 10/22] bpf: Add bpf_btf_find_by_name_kind() helper. 
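As a preview of how the packed return value of this new helper can be consumed, here is a hypothetical syscall/loader program fragment. The resolved type name and the section/program names are made up for illustration, and the helper declaration assumes headers regenerated from the updated UAPI shown below:

/* Hypothetical use of bpf_btf_find_by_name_kind() from a syscall program;
 * illustrative only, not part of this patch.
 */
#include <linux/bpf.h>
#include <linux/btf.h>
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

static const char type_name[] = "task_struct";

SEC("syscall")
int resolve_task_struct(void *ctx)
{
        long res;
        int btf_id, btf_obj_fd;

        res = bpf_btf_find_by_name_kind((char *)type_name, sizeof(type_name),
                                        BTF_KIND_STRUCT, 0);
        if (res < 0)
                return res;              /* not found in vmlinux or module BTFs */
        btf_id = (__u32)res;             /* lower 32 bits: the type id */
        btf_obj_fd = res >> 32;          /* upper 32 bits: module BTF fd, 0 for vmlinux */
        bpf_printk("btf_id %d btf_obj_fd %d", btf_id, btf_obj_fd);
        return 0;
}

A real generated loader program would patch the returned btf_id and btf_obj_fd into ld_imm64 instructions or prog_load attributes rather than printing them, as described later in this series.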
Date: Fri, 7 May 2021 20:48:25 -0700 Message-Id: <20210508034837.64585-11-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov Add new helper: long bpf_btf_find_by_name_kind(char *name, int name_sz, u32 kind, int flags) Description Find BTF type with given name and kind in vmlinux BTF or in module's BTFs. Return Returns btf_id and btf_obj_fd in lower and upper 32 bits. It will be used by loader program to find btf_id to attach the program to and to find btf_ids of ksyms. Signed-off-by: Alexei Starovoitov Acked-by: Andrii Nakryiko --- include/linux/bpf.h | 1 + include/uapi/linux/bpf.h | 7 ++++ kernel/bpf/btf.c | 62 ++++++++++++++++++++++++++++++++++ kernel/bpf/syscall.c | 2 ++ tools/include/uapi/linux/bpf.h | 7 ++++ 5 files changed, 79 insertions(+) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 7fd53380c981..9dc44ba97584 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1974,6 +1974,7 @@ extern const struct bpf_func_proto bpf_get_socket_ptr_cookie_proto; extern const struct bpf_func_proto bpf_task_storage_get_proto; extern const struct bpf_func_proto bpf_task_storage_delete_proto; extern const struct bpf_func_proto bpf_for_each_map_elem_proto; +extern const struct bpf_func_proto bpf_btf_find_by_name_kind_proto; const struct bpf_func_proto *bpf_tracing_func_proto( enum bpf_func_id func_id, const struct bpf_prog *prog); diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index de58a714ed36..3cc07351c1cf 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -4748,6 +4748,12 @@ union bpf_attr { * Execute bpf syscall with given arguments. * Return * A syscall result. + * + * long bpf_btf_find_by_name_kind(char *name, int name_sz, u32 kind, int flags) + * Description + * Find BTF type with given name and kind in vmlinux BTF or in module's BTFs. + * Return + * Returns btf_id and btf_obj_fd in lower and upper 32 bits. */ #define __BPF_FUNC_MAPPER(FN) \ FN(unspec), \ @@ -4917,6 +4923,7 @@ union bpf_attr { FN(for_each_map_elem), \ FN(snprintf), \ FN(sys_bpf), \ + FN(btf_find_by_name_kind), \ /* */ /* integer value in 'imm' field of BPF_CALL instruction selects which helper diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index fbf6c06a9d62..85716327c375 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -6085,3 +6085,65 @@ struct module *btf_try_get_module(const struct btf *btf) return res; } + +BPF_CALL_4(bpf_btf_find_by_name_kind, char *, name, int, name_sz, u32, kind, int, flags) +{ + struct btf *btf; + long ret; + + if (flags) + return -EINVAL; + + if (name_sz <= 1 || name[name_sz - 1]) + return -EINVAL; + + btf = bpf_get_btf_vmlinux(); + if (IS_ERR(btf)) + return PTR_ERR(btf); + + ret = btf_find_by_name_kind(btf, name, kind); + /* ret is never zero, since btf_find_by_name_kind returns + * positive btf_id or negative error. 
+ */ + if (ret < 0) { + struct btf *mod_btf; + int id; + + /* If name is not found in vmlinux's BTF then search in module's BTFs */ + spin_lock_bh(&btf_idr_lock); + idr_for_each_entry(&btf_idr, mod_btf, id) { + if (!btf_is_module(mod_btf)) + continue; + /* linear search could be slow hence unlock/lock + * the IDR to avoiding holding it for too long + */ + btf_get(mod_btf); + spin_unlock_bh(&btf_idr_lock); + ret = btf_find_by_name_kind(mod_btf, name, kind); + if (ret > 0) { + int btf_obj_fd; + + btf_obj_fd = __btf_new_fd(mod_btf); + if (btf_obj_fd < 0) { + btf_put(mod_btf); + return btf_obj_fd; + } + return ret | (((u64)btf_obj_fd) << 32); + } + spin_lock_bh(&btf_idr_lock); + btf_put(mod_btf); + } + spin_unlock_bh(&btf_idr_lock); + } + return ret; +} + +const struct bpf_func_proto bpf_btf_find_by_name_kind_proto = { + .func = bpf_btf_find_by_name_kind, + .gpl_only = false, + .ret_type = RET_INTEGER, + .arg1_type = ARG_PTR_TO_MEM, + .arg2_type = ARG_CONST_SIZE, + .arg3_type = ARG_ANYTHING, + .arg4_type = ARG_ANYTHING, +}; diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index da7dc2406470..f93ff2ebf96d 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -4584,6 +4584,8 @@ syscall_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) switch (func_id) { case BPF_FUNC_sys_bpf: return &bpf_sys_bpf_proto; + case BPF_FUNC_btf_find_by_name_kind: + return &bpf_btf_find_by_name_kind_proto; default: return tracing_prog_func_proto(func_id, prog); } diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index de58a714ed36..3cc07351c1cf 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -4748,6 +4748,12 @@ union bpf_attr { * Execute bpf syscall with given arguments. * Return * A syscall result. + * + * long bpf_btf_find_by_name_kind(char *name, int name_sz, u32 kind, int flags) + * Description + * Find BTF type with given name and kind in vmlinux BTF or in module's BTFs. + * Return + * Returns btf_id and btf_obj_fd in lower and upper 32 bits. 
*/ #define __BPF_FUNC_MAPPER(FN) \ FN(unspec), \ @@ -4917,6 +4923,7 @@ union bpf_attr { FN(for_each_map_elem), \ FN(snprintf), \ FN(sys_bpf), \ + FN(btf_find_by_name_kind), \ /* */ /* integer value in 'imm' field of BPF_CALL instruction selects which helper From patchwork Sat May 8 03:48:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245779 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7E258C433B4 for ; Sat, 8 May 2021 03:49:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 624B061107 for ; Sat, 8 May 2021 03:49:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231162AbhEHDuD (ORCPT ); Fri, 7 May 2021 23:50:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34396 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231153AbhEHDuD (ORCPT ); Fri, 7 May 2021 23:50:03 -0400 Received: from mail-pl1-x62b.google.com (mail-pl1-x62b.google.com [IPv6:2607:f8b0:4864:20::62b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CCDB6C061574 for ; Fri, 7 May 2021 20:49:01 -0700 (PDT) Received: by mail-pl1-x62b.google.com with SMTP id a11so6233363plh.3 for ; Fri, 07 May 2021 20:49:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=i2IYG8EW9hvtHL7j1SorOuzsq3cQ3EMooNko0bSzuhU=; b=DDs4hYpN1JlGK07wzIbAbPGr9HCVfGHzQ/TZlu2Qpme2zJxOQQ0VeD/iMJaiyqfMNi aN4nyTT788EWNMPOmKN/rO99QXOKRjyu69Rs1uhAOD3cob4p/69pSds7cYbX9fkugvOU ZlAqBQe5hfyQR7Si4RZU5hJWpH3+XdGKlubSpAhOLdk+qiJpv3PkHKhi3B1bMVnaoUro cOzVTVtQscIPU0dx47jbpFWkA/QpM+DkUH9YswktUiJONOP7Ea+chWeveCI9Yl7QUHzf 4zmgdVaq3otryddBbCtkRNpa5pScr0T9/EQg/d68jO8I6iH8aHEpQZp2yjr2jViddfkV W1WQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=i2IYG8EW9hvtHL7j1SorOuzsq3cQ3EMooNko0bSzuhU=; b=BXcPJoHTwYGUsaxirF/SUVKgTMnjscNRI+WNKb1vdG9IAiGSors/PXDjglSvubUg0j LFeVST5YNkSzHVQujnWqjiRzEMl1neo6mdKuvCrs0NvCTG20ef0tY6/0it99RjZMSV9R ycTX1HNW18IKf171WNF2R6TN0yPMSHOz5Hzw3PIdsWwd7PyISqaPzP4xe6abX4Fi2w3M Yo8LsTLGIigisQvn9/f4IoHPuvrOsQLpJTmukwVXjci9AaRvh31Pni56Gh/3sn8RD/Hx MuuZ6ogJuK9RcWJfhUv5uJVfHlLrLKtDlALXRe8bR++5Qd7EbTslvknlJQTe9BRajudV HisA== X-Gm-Message-State: AOAM530GyNRnA/fCiplv9j7AqENw/jQhnhb8XfyoVU2k3AAmIdwJUnbZ L2og+ZbgRiqZmpRIMECr9eXSSXWRJQA= X-Google-Smtp-Source: ABdhPJwsIY/vO3JhJcXL0sqeiuiet98NddilHt1jLCjVzsHUtiC2H+JzfM7MgZGODkUz4eQ2lqrkXQ== X-Received: by 2002:a17:902:8a8a:b029:ec:857a:4d51 with SMTP id p10-20020a1709028a8ab02900ec857a4d51mr12995505plo.68.1620445741390; Fri, 07 May 2021 20:49:01 -0700 (PDT) Received: from ast-mbp.thefacebook.com ([163.114.132.1]) by smtp.gmail.com with ESMTPSA id u12sm5784606pfh.122.2021.05.07.20.48.59 
(version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Fri, 07 May 2021 20:49:00 -0700 (PDT) From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 11/22] bpf: Add bpf_sys_close() helper. Date: Fri, 7 May 2021 20:48:26 -0700 Message-Id: <20210508034837.64585-12-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov Add bpf_sys_close() helper to be used by the syscall/loader program to close intermediate FDs and other cleanup. Note this helper must never be allowed inside fdget/fdput bracketing. Signed-off-by: Alexei Starovoitov --- include/uapi/linux/bpf.h | 7 +++++++ kernel/bpf/syscall.c | 19 +++++++++++++++++++ tools/include/uapi/linux/bpf.h | 7 +++++++ 3 files changed, 33 insertions(+) diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 3cc07351c1cf..4cd9a0181f27 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -4754,6 +4754,12 @@ union bpf_attr { * Find BTF type with given name and kind in vmlinux BTF or in module's BTFs. * Return * Returns btf_id and btf_obj_fd in lower and upper 32 bits. + * + * long bpf_sys_close(u32 fd) + * Description + * Execute close syscall for given FD. + * Return + * A syscall result. */ #define __BPF_FUNC_MAPPER(FN) \ FN(unspec), \ @@ -4924,6 +4930,7 @@ union bpf_attr { FN(snprintf), \ FN(sys_bpf), \ FN(btf_find_by_name_kind), \ + FN(sys_close), \ /* */ /* integer value in 'imm' field of BPF_CALL instruction selects which helper diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index f93ff2ebf96d..0f1ce2171f1e 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -4578,6 +4578,23 @@ tracing_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) return bpf_base_func_proto(func_id); } +BPF_CALL_1(bpf_sys_close, u32, fd) +{ + /* When bpf program calls this helper there should not be + * an fdget() without matching completed fdput(). + * This helper is allowed in the following callchain only: + * sys_bpf->prog_test_run->bpf_prog->bpf_sys_close + */ + return close_fd(fd); +} + +const struct bpf_func_proto bpf_sys_close_proto = { + .func = bpf_sys_close, + .gpl_only = false, + .ret_type = RET_INTEGER, + .arg1_type = ARG_ANYTHING, +}; + static const struct bpf_func_proto * syscall_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) { @@ -4586,6 +4603,8 @@ syscall_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) return &bpf_sys_bpf_proto; case BPF_FUNC_btf_find_by_name_kind: return &bpf_btf_find_by_name_kind_proto; + case BPF_FUNC_sys_close: + return &bpf_sys_close_proto; default: return tracing_prog_func_proto(func_id, prog); } diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 3cc07351c1cf..4cd9a0181f27 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -4754,6 +4754,12 @@ union bpf_attr { * Find BTF type with given name and kind in vmlinux BTF or in module's BTFs. * Return * Returns btf_id and btf_obj_fd in lower and upper 32 bits. + * + * long bpf_sys_close(u32 fd) + * Description + * Execute close syscall for given FD. + * Return + * A syscall result. 
*/ #define __BPF_FUNC_MAPPER(FN) \ FN(unspec), \ @@ -4924,6 +4930,7 @@ union bpf_attr { FN(snprintf), \ FN(sys_bpf), \ FN(btf_find_by_name_kind), \ + FN(sys_close), \ /* */ /* integer value in 'imm' field of BPF_CALL instruction selects which helper From patchwork Sat May 8 03:48:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245781 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7FE78C433B4 for ; Sat, 8 May 2021 03:49:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 61F12611CC for ; Sat, 8 May 2021 03:49:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231163AbhEHDuF (ORCPT ); Fri, 7 May 2021 23:50:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34398 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230249AbhEHDuF (ORCPT ); Fri, 7 May 2021 23:50:05 -0400 Received: from mail-pg1-x535.google.com (mail-pg1-x535.google.com [IPv6:2607:f8b0:4864:20::535]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9494CC061761 for ; Fri, 7 May 2021 20:49:03 -0700 (PDT) Received: by mail-pg1-x535.google.com with SMTP id m12so8764530pgr.9 for ; Fri, 07 May 2021 20:49:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=apTCvuF1PDMqXFjbxcj20yhNLZaDmIF6JzD+/Z/MuZ4=; b=SolNCXKT/LnoaoeP7WXA+VTtkpcrtVYHo7sIROSnTykkpTWv1NYz84FOj7ATcxX8mg XEIFUYx15x5QhRj9vFvwl9NO0mKFw7MU/M0UBmH1P3fllTzQOQxdDxgD6KNF9W3CMjri US/V+FcyV4mTsFo58JBAmjPOWmU5Wsztg2a+Jg0cnz9FPdlBtGAr0Ov/BGYosBzhS3A1 QXRjWMppOSBmy2tN0hLlC3heBpir9Ly7xMogmeY7HsSGDWHxfd6ejubJbpUuEN9VHvYj DN1qrDh/rjRLkRxHmMslNbQUBY3z9kiJ/2TonXHZDHV07c/TKOyG9/UUzt5v4YzYOxmK xTSQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=apTCvuF1PDMqXFjbxcj20yhNLZaDmIF6JzD+/Z/MuZ4=; b=Rlh6dNccxJKowjFuAdRD4M5iC44KRbxT6WJpyv4f5rf+o4cflBY8LI6P45+FmeD1BU cj+U2b4VK48ihaRV7989rte5eZ14ZydCp2XMyTdOQqda4R/kFKVYjwo7zPU3QmCRgT+L fVGGs5e2CMfkAZBNO7cNiApwDQNZRHneE1MnHj3ZkrxVc/e8LRJXzMddi09jslsrBTda j/27+nimjGHg+/9TQyMeUORyXkfNK7Ud+MLNM9j4FJrTuJ2yNnYbKyFOww4TKEP6uWZ9 jbWX6xe9l9O1kRIKMHDMaUTUUXdk3sA/DEBI2+9Qbfwz74Cqlb6NcaW5UI2Rkdkca3ff eD1Q== X-Gm-Message-State: AOAM532B9YJew5g6Xax90JARFVVX414qeGueX0mGftIvJ3bhNtmxgEGU QINHvg+zOjRmjFjzPsXGkwg= X-Google-Smtp-Source: ABdhPJwNMVZxSfnfMdDB1c/ooUNiWRjOIQzB82UA4wNmc/Pd3kaMdUvAATJTy+qHO7oVJAbM1Mmq6Q== X-Received: by 2002:a63:e911:: with SMTP id i17mr13749614pgh.148.1620445743172; Fri, 07 May 2021 20:49:03 -0700 (PDT) Received: from ast-mbp.thefacebook.com ([163.114.132.1]) by smtp.gmail.com with ESMTPSA id u12sm5784606pfh.122.2021.05.07.20.49.01 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); 
Fri, 07 May 2021 20:49:02 -0700 (PDT) From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 12/22] libbpf: Change the order of data and text relocations. Date: Fri, 7 May 2021 20:48:27 -0700 Message-Id: <20210508034837.64585-13-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov In order to be able to generate loader program in the later patches change the order of data and text relocations. Also improve the test to include data relos. If the kernel supports "FD array" the map_fd relocations can be processed before text relos since generated loader program won't need to manually patch ld_imm64 insns with map_fd. But ksym and kfunc relocations can only be processed after all calls are relocated, since loader program will consist of a sequence of calls to bpf_btf_find_by_name_kind() followed by patching of btf_id and btf_obj_fd into corresponding ld_imm64 insns. The locations of those ld_imm64 insns are specified in relocations. Hence process all data relocations (maps, ksym, kfunc) together after call relos. Signed-off-by: Alexei Starovoitov Acked-by: Andrii Nakryiko --- tools/lib/bpf/libbpf.c | 86 ++++++++++++++++--- .../selftests/bpf/progs/test_subprogs.c | 13 +++ 2 files changed, 85 insertions(+), 14 deletions(-) diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 0b9270403a77..f04bac9f398c 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -6443,11 +6443,15 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog) insn[0].imm = ext->ksym.kernel_btf_id; break; case RELO_SUBPROG_ADDR: - insn[0].src_reg = BPF_PSEUDO_FUNC; - /* will be handled as a follow up pass */ + if (insn[0].src_reg != BPF_PSEUDO_FUNC) { + pr_warn("prog '%s': relo #%d: bad insn\n", + prog->name, i); + return -EINVAL; + } + /* handled already */ break; case RELO_CALL: - /* will be handled as a follow up pass */ + /* handled already */ break; default: pr_warn("prog '%s': relo #%d: bad relo type %d\n", @@ -6616,6 +6620,30 @@ static struct reloc_desc *find_prog_insn_relo(const struct bpf_program *prog, si sizeof(*prog->reloc_desc), cmp_relo_by_insn_idx); } +static int append_subprog_relos(struct bpf_program *main_prog, struct bpf_program *subprog) +{ + int new_cnt = main_prog->nr_reloc + subprog->nr_reloc; + struct reloc_desc *relos; + int i; + + if (main_prog == subprog) + return 0; + relos = libbpf_reallocarray(main_prog->reloc_desc, new_cnt, sizeof(*relos)); + if (!relos) + return -ENOMEM; + memcpy(relos + main_prog->nr_reloc, subprog->reloc_desc, + sizeof(*relos) * subprog->nr_reloc); + + for (i = main_prog->nr_reloc; i < new_cnt; i++) + relos[i].insn_idx += subprog->sub_insn_off; + /* After insn_idx adjustment the 'relos' array is still sorted + * by insn_idx and doesn't break bsearch. 
+ */ + main_prog->reloc_desc = relos; + main_prog->nr_reloc = new_cnt; + return 0; +} + static int bpf_object__reloc_code(struct bpf_object *obj, struct bpf_program *main_prog, struct bpf_program *prog) @@ -6636,6 +6664,11 @@ bpf_object__reloc_code(struct bpf_object *obj, struct bpf_program *main_prog, continue; relo = find_prog_insn_relo(prog, insn_idx); + if (relo && relo->type == RELO_EXTERN_FUNC) + /* kfunc relocations will be handled later + * in bpf_object__relocate_data() + */ + continue; if (relo && relo->type != RELO_CALL && relo->type != RELO_SUBPROG_ADDR) { pr_warn("prog '%s': unexpected relo for insn #%zu, type %d\n", prog->name, insn_idx, relo->type); @@ -6710,6 +6743,10 @@ bpf_object__reloc_code(struct bpf_object *obj, struct bpf_program *main_prog, pr_debug("prog '%s': added %zu insns from sub-prog '%s'\n", main_prog->name, subprog->insns_cnt, subprog->name); + /* The subprog insns are now appended. Append its relos too. */ + err = append_subprog_relos(main_prog, subprog); + if (err) + return err; err = bpf_object__reloc_code(obj, main_prog, subprog); if (err) return err; @@ -6843,7 +6880,7 @@ static int bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_path) { struct bpf_program *prog; - size_t i; + size_t i, j; int err; if (obj->btf_ext) { @@ -6854,23 +6891,32 @@ bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_path) return err; } } - /* relocate data references first for all programs and sub-programs, - * as they don't change relative to code locations, so subsequent - * subprogram processing won't need to re-calculate any of them + + /* Before relocating calls pre-process relocations and mark + * few ld_imm64 instructions that points to subprogs. + * Otherwise bpf_object__reloc_code() later would have to consider + * all ld_imm64 insns as relocation candidates. That would + * reduce relocation speed, since amount of find_prog_insn_relo() + * would increase and most of them will fail to find a relo. */ for (i = 0; i < obj->nr_programs; i++) { prog = &obj->programs[i]; - err = bpf_object__relocate_data(obj, prog); - if (err) { - pr_warn("prog '%s': failed to relocate data references: %d\n", - prog->name, err); - return err; + for (j = 0; j < prog->nr_reloc; j++) { + struct reloc_desc *relo = &prog->reloc_desc[j]; + struct bpf_insn *insn = &prog->insns[relo->insn_idx]; + + /* mark the insn, so it's recognized by insn_is_pseudo_func() */ + if (relo->type == RELO_SUBPROG_ADDR) + insn[0].src_reg = BPF_PSEUDO_FUNC; } } - /* now relocate subprogram calls and append used subprograms to main + + /* relocate subprogram calls and append used subprograms to main * programs; each copy of subprogram code needs to be relocated * differently for each main program, because its code location might - * have changed + * have changed. + * Append subprog relos to main programs to allow data relos to be + * processed after text is completely relocated. 
*/ for (i = 0; i < obj->nr_programs; i++) { prog = &obj->programs[i]; @@ -6887,6 +6933,18 @@ bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_path) return err; } } + /* Process data relos for main programs */ + for (i = 0; i < obj->nr_programs; i++) { + prog = &obj->programs[i]; + if (prog_is_subprog(obj, prog)) + continue; + err = bpf_object__relocate_data(obj, prog); + if (err) { + pr_warn("prog '%s': failed to relocate data references: %d\n", + prog->name, err); + return err; + } + } /* free up relocation descriptors */ for (i = 0; i < obj->nr_programs; i++) { prog = &obj->programs[i]; diff --git a/tools/testing/selftests/bpf/progs/test_subprogs.c b/tools/testing/selftests/bpf/progs/test_subprogs.c index d3c5673c0218..b7c37ca09544 100644 --- a/tools/testing/selftests/bpf/progs/test_subprogs.c +++ b/tools/testing/selftests/bpf/progs/test_subprogs.c @@ -4,8 +4,18 @@ const char LICENSE[] SEC("license") = "GPL"; +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, __u32); + __type(value, __u64); +} array SEC(".maps"); + __noinline int sub1(int x) { + int key = 0; + + bpf_map_lookup_elem(&array, &key); return x + 1; } @@ -23,6 +33,9 @@ static __noinline int sub3(int z) static __noinline int sub4(int w) { + int key = 0; + + bpf_map_lookup_elem(&array, &key); return w + sub3(5) + sub1(6); } From patchwork Sat May 8 03:48:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245783 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7085EC43460 for ; Sat, 8 May 2021 03:49:07 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4D9E761106 for ; Sat, 8 May 2021 03:49:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231164AbhEHDuH (ORCPT ); Fri, 7 May 2021 23:50:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34410 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231153AbhEHDuG (ORCPT ); Fri, 7 May 2021 23:50:06 -0400 Received: from mail-pf1-x431.google.com (mail-pf1-x431.google.com [IPv6:2607:f8b0:4864:20::431]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7CF7AC061574 for ; Fri, 7 May 2021 20:49:05 -0700 (PDT) Received: by mail-pf1-x431.google.com with SMTP id v191so9245665pfc.8 for ; Fri, 07 May 2021 20:49:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=n4jxsbH9IlQHNNbf1NgCfus6Brf7sC4xj3Gte1cIIh4=; b=BXc6hK4evQsZJROvIylNgXOjTxuw6MC8q8g1VDRtvGqyRagvvzwiqKfzVIVKZDuMOe D3Be36rO3S2HEwpUzeO257CvCR5U4J2ySSC+ffWFtutZwPv3xdpH2uY3eIbw1YyOm2x5 vRQBw1tmVZcb/R3YvJwCdu37nUnvfz3tEjQ1OjYi1YPGAfI280dGSxWpqJ8VkuXHhBfE A5ei4ZJwkziCsQyG+M8jaS/dtPZanjA+eLRjLyghAipADoQjSd2KpkewwLBISq2OG8Au 
ws3UXD1DU0KY0PuueypsI2z1MtaHEfIx1PiiSxmqtoul0w06bQF6rTbPBuQrqEuN9MS5 O1Hw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=n4jxsbH9IlQHNNbf1NgCfus6Brf7sC4xj3Gte1cIIh4=; b=WSq5/mVqiFE+LW+eHEbGcleiK5GNxoAg0adWbagsyCDJdw7lT9RMEQiq0vpar7xF7k zAxl+Y5mRFzzlSUst/OssVrFh4kD503GJWyhg35uzCX48/dvn2NEuI5LSFCiC5RLlEKz yadPwSKXIB/KGtrr55cJztepUe5f0DFaSrseWZElGqmCgCnGXn8MrktzAp1MNqJnLL/a hpepaL0UwoqOZ5Yg+hpilXdQETSVU7gMVbZXtuyVeMg4Jz53ErqAEoLW0E9qMtg8lRUf ldaiu2Xgnw5f0v4hI0kgzol1fZEYYAVUicvZsavesCmwZGSjLy0Jl8wd6mSKVEjWhi1Z bkXw== X-Gm-Message-State: AOAM532zmEItFMosw9Fg5AguKnQThZBcIrA27C3HX+kW1zmBjIMx12Di y56YJZQyRomTHU19Z8dYWzg= X-Google-Smtp-Source: ABdhPJwd9IxdXl3E/uDKTBissaguNNxef5wps9vdikGXi9k4wrDnK+2wVqN/TbiBfO2XaUpUqcrViQ== X-Received: by 2002:a65:584d:: with SMTP id s13mr13427714pgr.97.1620445745062; Fri, 07 May 2021 20:49:05 -0700 (PDT) Received: from ast-mbp.thefacebook.com ([163.114.132.1]) by smtp.gmail.com with ESMTPSA id u12sm5784606pfh.122.2021.05.07.20.49.03 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Fri, 07 May 2021 20:49:04 -0700 (PDT) From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 13/22] libbpf: Add bpf_object pointer to kernel_supports(). Date: Fri, 7 May 2021 20:48:28 -0700 Message-Id: <20210508034837.64585-14-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov Add a pointer to 'struct bpf_object' to kernel_supports() helper. It will be used in the next patch. No functional changes. 
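[Editor's note] The "used in the next patch" reference is concrete: the loader-generation patch later in this series adds the following guard at the top of kernel_supports(), which only becomes possible once the object pointer is threaded through (quoted ahead of time for context; see the gen_loader patch below):

	if (obj->gen_loader)
		/* To generate loader program assume the latest kernel
		 * to avoid doing extra prog_load, map_create syscalls.
		 */
		return true;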
Signed-off-by: Alexei Starovoitov Acked-by: Andrii Nakryiko --- tools/lib/bpf/libbpf.c | 52 +++++++++++++++++++++--------------------- 1 file changed, 26 insertions(+), 26 deletions(-) diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index f04bac9f398c..75a0ca75db77 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -180,7 +180,7 @@ enum kern_feature_id { __FEAT_CNT, }; -static bool kernel_supports(enum kern_feature_id feat_id); +static bool kernel_supports(const struct bpf_object *obj, enum kern_feature_id feat_id); enum reloc_type { RELO_LD64, @@ -2446,20 +2446,20 @@ static bool section_have_execinstr(struct bpf_object *obj, int idx) static bool btf_needs_sanitization(struct bpf_object *obj) { - bool has_func_global = kernel_supports(FEAT_BTF_GLOBAL_FUNC); - bool has_datasec = kernel_supports(FEAT_BTF_DATASEC); - bool has_float = kernel_supports(FEAT_BTF_FLOAT); - bool has_func = kernel_supports(FEAT_BTF_FUNC); + bool has_func_global = kernel_supports(obj, FEAT_BTF_GLOBAL_FUNC); + bool has_datasec = kernel_supports(obj, FEAT_BTF_DATASEC); + bool has_float = kernel_supports(obj, FEAT_BTF_FLOAT); + bool has_func = kernel_supports(obj, FEAT_BTF_FUNC); return !has_func || !has_datasec || !has_func_global || !has_float; } static void bpf_object__sanitize_btf(struct bpf_object *obj, struct btf *btf) { - bool has_func_global = kernel_supports(FEAT_BTF_GLOBAL_FUNC); - bool has_datasec = kernel_supports(FEAT_BTF_DATASEC); - bool has_float = kernel_supports(FEAT_BTF_FLOAT); - bool has_func = kernel_supports(FEAT_BTF_FUNC); + bool has_func_global = kernel_supports(obj, FEAT_BTF_GLOBAL_FUNC); + bool has_datasec = kernel_supports(obj, FEAT_BTF_DATASEC); + bool has_float = kernel_supports(obj, FEAT_BTF_FLOAT); + bool has_func = kernel_supports(obj, FEAT_BTF_FUNC); struct btf_type *t; int i, j, vlen; @@ -2665,7 +2665,7 @@ static int bpf_object__sanitize_and_load_btf(struct bpf_object *obj) if (!obj->btf) return 0; - if (!kernel_supports(FEAT_BTF)) { + if (!kernel_supports(obj, FEAT_BTF)) { if (kernel_needs_btf(obj)) { err = -EOPNOTSUPP; goto report; @@ -4319,7 +4319,7 @@ static struct kern_feature_desc { }, }; -static bool kernel_supports(enum kern_feature_id feat_id) +static bool kernel_supports(const struct bpf_object *obj, enum kern_feature_id feat_id) { struct kern_feature_desc *feat = &feature_probes[feat_id]; int ret; @@ -4438,7 +4438,7 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map) memset(&create_attr, 0, sizeof(create_attr)); - if (kernel_supports(FEAT_PROG_NAME)) + if (kernel_supports(obj, FEAT_PROG_NAME)) create_attr.name = map->name; create_attr.map_ifindex = map->map_ifindex; create_attr.map_type = def->type; @@ -5003,7 +5003,7 @@ static int load_module_btfs(struct bpf_object *obj) obj->btf_modules_loaded = true; /* kernel too old to support module BTFs */ - if (!kernel_supports(FEAT_MODULE_BTF)) + if (!kernel_supports(obj, FEAT_MODULE_BTF)) return 0; while (true) { @@ -6397,7 +6397,7 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog) switch (relo->type) { case RELO_LD64: - if (kernel_supports(FEAT_FD_IDX)) { + if (kernel_supports(obj, FEAT_FD_IDX)) { insn[0].src_reg = BPF_PSEUDO_MAP_IDX; insn[0].imm = relo->map_idx; } else { @@ -6407,7 +6407,7 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog) break; case RELO_DATA: insn[1].imm = insn[0].imm + relo->sym_off; - if (kernel_supports(FEAT_FD_IDX)) { + if (kernel_supports(obj, FEAT_FD_IDX)) { insn[0].src_reg = 
BPF_PSEUDO_MAP_IDX_VALUE; insn[0].imm = relo->map_idx; } else { @@ -6418,7 +6418,7 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog) case RELO_EXTERN_VAR: ext = &obj->externs[relo->sym_off]; if (ext->type == EXT_KCFG) { - if (kernel_supports(FEAT_FD_IDX)) { + if (kernel_supports(obj, FEAT_FD_IDX)) { insn[0].src_reg = BPF_PSEUDO_MAP_IDX_VALUE; insn[0].imm = obj->kconfig_map_idx; } else { @@ -6542,7 +6542,7 @@ reloc_prog_func_and_line_info(const struct bpf_object *obj, /* no .BTF.ext relocation if .BTF.ext is missing or kernel doesn't * supprot func/line info */ - if (!obj->btf_ext || !kernel_supports(FEAT_BTF_FUNC)) + if (!obj->btf_ext || !kernel_supports(obj, FEAT_BTF_FUNC)) return 0; /* only attempt func info relocation if main program's func_info @@ -7150,12 +7150,12 @@ static int bpf_object__sanitize_prog(struct bpf_object *obj, struct bpf_program switch (func_id) { case BPF_FUNC_probe_read_kernel: case BPF_FUNC_probe_read_user: - if (!kernel_supports(FEAT_PROBE_READ_KERN)) + if (!kernel_supports(obj, FEAT_PROBE_READ_KERN)) insn->imm = BPF_FUNC_probe_read; break; case BPF_FUNC_probe_read_kernel_str: case BPF_FUNC_probe_read_user_str: - if (!kernel_supports(FEAT_PROBE_READ_KERN)) + if (!kernel_supports(obj, FEAT_PROBE_READ_KERN)) insn->imm = BPF_FUNC_probe_read_str; break; default: @@ -7190,12 +7190,12 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt, load_attr.prog_type = prog->type; /* old kernels might not support specifying expected_attach_type */ - if (!kernel_supports(FEAT_EXP_ATTACH_TYPE) && prog->sec_def && + if (!kernel_supports(prog->obj, FEAT_EXP_ATTACH_TYPE) && prog->sec_def && prog->sec_def->is_exp_attach_type_optional) load_attr.expected_attach_type = 0; else load_attr.expected_attach_type = prog->expected_attach_type; - if (kernel_supports(FEAT_PROG_NAME)) + if (kernel_supports(prog->obj, FEAT_PROG_NAME)) load_attr.name = prog->name; load_attr.insns = insns; load_attr.insn_cnt = insns_cnt; @@ -7212,7 +7212,7 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt, /* specify func_info/line_info only if kernel supports them */ btf_fd = bpf_object__btf_fd(prog->obj); - if (btf_fd >= 0 && kernel_supports(FEAT_BTF_FUNC)) { + if (btf_fd >= 0 && kernel_supports(prog->obj, FEAT_BTF_FUNC)) { load_attr.prog_btf_fd = btf_fd; load_attr.func_info = prog->func_info; load_attr.func_info_rec_size = prog->func_info_rec_size; @@ -7242,7 +7242,7 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt, pr_debug("verifier log:\n%s", log_buf); if (prog->obj->rodata_map_idx >= 0 && - kernel_supports(FEAT_PROG_BIND_MAP)) { + kernel_supports(prog->obj, FEAT_PROG_BIND_MAP)) { struct bpf_map *rodata_map = &prog->obj->maps[prog->obj->rodata_map_idx]; @@ -7410,7 +7410,7 @@ bpf_object__load_progs(struct bpf_object *obj, int log_level) return err; } - if (kernel_supports(FEAT_FD_IDX) && obj->nr_maps) { + if (kernel_supports(obj, FEAT_FD_IDX) && obj->nr_maps) { fd_array = malloc(sizeof(int) * obj->nr_maps); if (!fd_array) return -ENOMEM; @@ -7614,11 +7614,11 @@ static int bpf_object__sanitize_maps(struct bpf_object *obj) bpf_object__for_each_map(m, obj) { if (!bpf_map__is_internal(m)) continue; - if (!kernel_supports(FEAT_GLOBAL_DATA)) { + if (!kernel_supports(obj, FEAT_GLOBAL_DATA)) { pr_warn("kernel doesn't support global data\n"); return -ENOTSUP; } - if (!kernel_supports(FEAT_ARRAY_MMAP)) + if (!kernel_supports(obj, FEAT_ARRAY_MMAP)) m->def.map_flags ^= BPF_F_MMAPABLE; } From patchwork Sat May 
8 03:48:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245785 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9B2C2C433B4 for ; Sat, 8 May 2021 03:49:09 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 71B6161107 for ; Sat, 8 May 2021 03:49:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231153AbhEHDuJ (ORCPT ); Fri, 7 May 2021 23:50:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34420 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230249AbhEHDuJ (ORCPT ); Fri, 7 May 2021 23:50:09 -0400 Received: from mail-pf1-x42f.google.com (mail-pf1-x42f.google.com [IPv6:2607:f8b0:4864:20::42f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A77A8C061574 for ; Fri, 7 May 2021 20:49:07 -0700 (PDT) Received: by mail-pf1-x42f.google.com with SMTP id q2so9210093pfh.13 for ; Fri, 07 May 2021 20:49:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=yFihKYDYA4Yun8BFaZXFiprAqmYNKuLwP9FjaZRbDuo=; b=YGBrMmvMEDfuu09/trBDqI1191t535/fwKqOvEqAc5n/xGg+Z/bSulNsxwFWoRmwwH WE+P+wXjV1EebHn76pdgvHfHUe/HeC9TIQp8UKhKUV+hx7uwVl3W04dEEQgiSQ5NU1HZ 2T+yI978Mo0cKL1bQ5f2d6reDoOqGQy7XgUmMS9LDBlR67VXdVre92bo5diX63E4fjuy any/kX3us0MxLAZa2KbPrTwwPQQSP//0DXtFIXreu8uSEypNMuyWmuTY/fJvJbtwPl78 ry8VOpWkyMO5JTv0S2tOgPgP8+EAoTB4IGuwn8Zo61MoNZoInEEnbJkteLIxG7sqsqCU BeTA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=yFihKYDYA4Yun8BFaZXFiprAqmYNKuLwP9FjaZRbDuo=; b=G47XigBUMuIjLZ26MIvyj5qwTmLEuiJzyP8R1fjckS9H3dEEDhbN+JQw6s8yG6GjxD qdM5PwgU31AwjLLBrG1NCPamzUUHeGtLFu6v0gBqtNffAPOajJc1tow9Gj+Dg8PDd3GP mYVn2HbJ/7Uh+32uW8To0YEi26W2XCb8R5M3wCRJSWy3Bfhd8aBx6vViOdHzch01mdKB nDE0YAaf8wsWd/8mIDclpvEjqtXXl5NcREpDnTVukXtJkCufhX0b/zP0AO0p+nJT5D31 TI2anfZ4GAQWOoaDPxEIvvaj4GNM5zz9tdVUxUvR4osmqKVRg1SSPu6+odo5R42XD7kx F8Xw== X-Gm-Message-State: AOAM532xeZWVwPf68yImtfrylNkoOKdB+LXNxEydhjaQSbXcaHEKchwz S0BxwVVs01pEbYjKO46jaFo= X-Google-Smtp-Source: ABdhPJxkl/8Ws7T76aG7t0qnSZooDTRJd9qMUEja17sZMTU054sYOH6Yd+bRoxRTyI6VlARBZyx3Rg== X-Received: by 2002:a63:f502:: with SMTP id w2mr14027846pgh.197.1620445746909; Fri, 07 May 2021 20:49:06 -0700 (PDT) Received: from ast-mbp.thefacebook.com ([163.114.132.1]) by smtp.gmail.com with ESMTPSA id u12sm5784606pfh.122.2021.05.07.20.49.05 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Fri, 07 May 2021 20:49:06 -0700 (PDT) From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 14/22] libbpf: Generate loader program out of BPF 
ELF file. Date: Fri, 7 May 2021 20:48:29 -0700 Message-Id: <20210508034837.64585-15-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov The BPF program loading process performed by libbpf is quite complex and consists of the following steps: "open" phase: - parse elf file and remember relocations, sections - collect externs and ksyms including their btf_ids in prog's BTF - patch BTF datasec (since llvm couldn't do it) - init maps (old style map_def, BTF based, global data map, kconfig map) - collect relocations against progs and maps "load" phase: - probe kernel features - load vmlinux BTF - resolve externs (kconfig and ksym) - load program BTF - init struct_ops - create maps - apply CO-RE relocations - patch ld_imm64 insns with src_reg=PSEUDO_MAP, PSEUDO_MAP_VALUE, PSEUDO_BTF_ID - reposition subprograms and adjust call insns - sanitize and load progs During this process libbpf does sys_bpf() calls to load BTF, create maps, populate maps and finally load programs. Instead of actually doing the syscalls generate a trace of what libbpf would have done and represent it as the "loader program". The "loader program" consists of single map with: - union bpf_attr(s) - BTF bytes - map value bytes - insns bytes and single bpf program that passes bpf_attr(s) and data into bpf_sys_bpf() helper. Executing such "loader program" via bpf_prog_test_run() command will replay the sequence of syscalls that libbpf would have done which will result the same maps created and programs loaded as specified in the elf file. The "loader program" removes libelf and majority of libbpf dependency from program loading process. kconfig, typeless ksym, struct_ops and CO-RE are not supported yet. The order of relocate_data and relocate_calls had to change, so that bpf_gen__prog_load() can see all relocations for a given program with correct insn_idx-es. 
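[Editor's note] For orientation, a minimal caller-side sketch of the new API added by this patch (the object file name is illustrative and error handling is abbreviated; the intended consumer is the light-skeleton generation added later in the series):

#include <stdio.h>
#include <bpf/libbpf.h>

static int generate_loader(const char *path)
{
	struct gen_loader_opts gen = { .sz = sizeof(gen) };
	struct bpf_object *obj;
	int err;

	obj = bpf_object__open_file(path, NULL);
	err = libbpf_get_error(obj);
	if (err)
		return err;
	/* record the would-be syscalls instead of executing them */
	err = bpf_object__gen_loader(obj, &gen);
	err = err ? : bpf_object__load(obj);
	if (!err) {
		/* gen.insns/insns_sz is the loader program; gen.data/data_sz is
		 * the blob of bpf_attr-s, BTF, map values and insns it refers to.
		 * A generated .lskel.h embeds both and replays them with
		 * bpf_load_and_run() at runtime.
		 */
		printf("loader: %u insn bytes, %u data bytes\n",
		       gen.insns_sz, gen.data_sz);
	}
	bpf_object__close(obj);
	return err;
}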
Signed-off-by: Alexei Starovoitov Acked-by: Andrii Nakryiko --- tools/lib/bpf/Build | 2 +- tools/lib/bpf/bpf_gen_internal.h | 40 ++ tools/lib/bpf/gen_loader.c | 657 +++++++++++++++++++++++++++++++ tools/lib/bpf/libbpf.c | 224 +++++++++-- tools/lib/bpf/libbpf.h | 12 + tools/lib/bpf/libbpf.map | 1 + tools/lib/bpf/libbpf_internal.h | 2 + tools/lib/bpf/skel_internal.h | 116 ++++++ 8 files changed, 1022 insertions(+), 32 deletions(-) create mode 100644 tools/lib/bpf/bpf_gen_internal.h create mode 100644 tools/lib/bpf/gen_loader.c create mode 100644 tools/lib/bpf/skel_internal.h diff --git a/tools/lib/bpf/Build b/tools/lib/bpf/Build index 9b057cc7650a..430f6874fa41 100644 --- a/tools/lib/bpf/Build +++ b/tools/lib/bpf/Build @@ -1,3 +1,3 @@ libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_errno.o str_error.o \ netlink.o bpf_prog_linfo.o libbpf_probes.o xsk.o hashmap.o \ - btf_dump.o ringbuf.o strset.o linker.o + btf_dump.o ringbuf.o strset.o linker.o gen_loader.o diff --git a/tools/lib/bpf/bpf_gen_internal.h b/tools/lib/bpf/bpf_gen_internal.h new file mode 100644 index 000000000000..f42a55efd559 --- /dev/null +++ b/tools/lib/bpf/bpf_gen_internal.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */ +/* Copyright (c) 2021 Facebook */ +#ifndef __BPF_GEN_INTERNAL_H +#define __BPF_GEN_INTERNAL_H + +struct ksym_relo_desc { + const char *name; + int kind; + int insn_idx; +}; + +struct bpf_gen { + struct gen_loader_opts *opts; + void *data_start; + void *data_cur; + void *insn_start; + void *insn_cur; + __u32 nr_progs; + __u32 nr_maps; + int log_level; + int error; + struct ksym_relo_desc *relos; + int relo_cnt; + char attach_target[128]; + int attach_kind; +}; + +void bpf_gen__init(struct bpf_gen *gen, int log_level); +int bpf_gen__finish(struct bpf_gen *gen); +void bpf_gen__free(struct bpf_gen *gen); +void bpf_gen__load_btf(struct bpf_gen *gen, const void *raw_data, __u32 raw_size); +void bpf_gen__map_create(struct bpf_gen *gen, struct bpf_create_map_attr *map_attr, int map_idx); +struct bpf_prog_load_params; +void bpf_gen__prog_load(struct bpf_gen *gen, struct bpf_prog_load_params *load_attr, int prog_idx); +void bpf_gen__map_update_elem(struct bpf_gen *gen, int map_idx, void *value, __u32 value_size); +void bpf_gen__map_freeze(struct bpf_gen *gen, int map_idx); +void bpf_gen__record_attach_target(struct bpf_gen *gen, const char *name, enum bpf_attach_type type); +void bpf_gen__record_extern(struct bpf_gen *gen, const char *name, int kind, int insn_idx); + +#endif diff --git a/tools/lib/bpf/gen_loader.c b/tools/lib/bpf/gen_loader.c new file mode 100644 index 000000000000..585c672cc53e --- /dev/null +++ b/tools/lib/bpf/gen_loader.c @@ -0,0 +1,657 @@ +// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) +/* Copyright (c) 2021 Facebook */ +#include +#include +#include +#include +#include +#include "btf.h" +#include "bpf.h" +#include "libbpf.h" +#include "libbpf_internal.h" +#include "hashmap.h" +#include "bpf_gen_internal.h" +#include "skel_internal.h" + +#define MAX_USED_MAPS 64 +#define MAX_USED_PROGS 32 + +/* The following structure describes the stack layout of the loader program. + * In addition R6 contains the pointer to context. + * R7 contains the result of the last sys_bpf command (typically error or FD). + * R9 contains the result of the last sys_close command. + * + * Naming convention: + * ctx - bpf program context + * stack - bpf program stack + * blob - bpf_attr-s, strings, insns, map data. + * All the bytes that loader prog will use for read/write. 
+ */ +struct loader_stack { + __u32 btf_fd; + __u32 map_fd[MAX_USED_MAPS]; + __u32 prog_fd[MAX_USED_PROGS]; + __u32 inner_map_fd; +}; +#define stack_off(field) (__s16)(-sizeof(struct loader_stack) + offsetof(struct loader_stack, field)) + +static int bpf_gen__realloc_insn_buf(struct bpf_gen *gen, __u32 size) +{ + size_t off = gen->insn_cur - gen->insn_start; + void *insn_start; + + if (gen->error) + return gen->error; + if (size > INT32_MAX || off + size > INT32_MAX) { + gen->error = -ERANGE; + return -ERANGE; + } + insn_start = realloc(gen->insn_start, off + size); + if (!insn_start) { + gen->error = -ENOMEM; + free(gen->insn_start); + gen->insn_start = NULL; + return -ENOMEM; + } + gen->insn_start = insn_start; + gen->insn_cur = insn_start + off; + return 0; +} + +static int bpf_gen__realloc_data_buf(struct bpf_gen *gen, __u32 size) +{ + size_t off = gen->data_cur - gen->data_start; + void *data_start; + + if (gen->error) + return gen->error; + if (size > INT32_MAX || off + size > INT32_MAX) { + gen->error = -ERANGE; + return -ERANGE; + } + data_start = realloc(gen->data_start, off + size); + if (!data_start) { + gen->error = -ENOMEM; + free(gen->data_start); + gen->data_start = NULL; + return -ENOMEM; + } + gen->data_start = data_start; + gen->data_cur = data_start + off; + return 0; +} + +static void bpf_gen__emit(struct bpf_gen *gen, struct bpf_insn insn) +{ + if (bpf_gen__realloc_insn_buf(gen, sizeof(insn))) + return; + memcpy(gen->insn_cur, &insn, sizeof(insn)); + gen->insn_cur += sizeof(insn); +} + +static void bpf_gen__emit2(struct bpf_gen *gen, struct bpf_insn insn1, struct bpf_insn insn2) +{ + bpf_gen__emit(gen, insn1); + bpf_gen__emit(gen, insn2); +} + +void bpf_gen__init(struct bpf_gen *gen, int log_level) +{ + gen->log_level = log_level; + bpf_gen__emit(gen, BPF_MOV64_REG(BPF_REG_6, BPF_REG_1)); +} + +static int bpf_gen__add_data(struct bpf_gen *gen, const void *data, __u32 size) +{ + void *prev; + + if (bpf_gen__realloc_data_buf(gen, size)) + return 0; + prev = gen->data_cur; + memcpy(gen->data_cur, data, size); + gen->data_cur += size; + return prev - gen->data_start; +} + +static int insn_bytes_to_bpf_size(__u32 sz) +{ + switch (sz) { + case 8: return BPF_DW; + case 4: return BPF_W; + case 2: return BPF_H; + case 1: return BPF_B; + default: return -1; + } +} + +/* *(u64 *)(blob + off) = (u64)(void *)(blob + data) */ +static void bpf_gen__emit_rel_store(struct bpf_gen *gen, int off, int data) +{ + bpf_gen__emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_0, BPF_PSEUDO_MAP_IDX_VALUE, 0, 0, 0, data)); + bpf_gen__emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX_VALUE, 0, 0, 0, off)); + bpf_gen__emit(gen, BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0)); +} + +/* *(u64 *)(blob + off) = (u64)(void *)(%sp + stack_off) */ +static void bpf_gen__emit_rel_store_sp(struct bpf_gen *gen, int off, int stack_off) +{ + bpf_gen__emit(gen, BPF_MOV64_REG(BPF_REG_0, BPF_REG_10)); + bpf_gen__emit(gen, BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, stack_off)); + bpf_gen__emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX_VALUE, 0, 0, 0, off)); + bpf_gen__emit(gen, BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0)); +} + +static void bpf_gen__move_ctx2blob(struct bpf_gen *gen, int off, int size, int ctx_off, + bool check_non_zero) +{ + bpf_gen__emit(gen, BPF_LDX_MEM(insn_bytes_to_bpf_size(size), BPF_REG_0, BPF_REG_6, ctx_off)); + if (check_non_zero) + /* If value in ctx is zero don't update the blob. 
+ * For example: when ctx->map.max_entries == 0, keep default max_entries from bpf.c + */ + bpf_gen__emit(gen, BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3)); + bpf_gen__emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX_VALUE, 0, 0, 0, off)); + bpf_gen__emit(gen, BPF_STX_MEM(insn_bytes_to_bpf_size(size), BPF_REG_1, BPF_REG_0, 0)); +} + +static void bpf_gen__move_stack2blob(struct bpf_gen *gen, int off, int size, int stack_off) +{ + bpf_gen__emit(gen, BPF_LDX_MEM(insn_bytes_to_bpf_size(size), BPF_REG_0, BPF_REG_10, stack_off)); + bpf_gen__emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX_VALUE, 0, 0, 0, off)); + bpf_gen__emit(gen, BPF_STX_MEM(insn_bytes_to_bpf_size(size), BPF_REG_1, BPF_REG_0, 0)); +} + +static void bpf_gen__move_stack2ctx(struct bpf_gen *gen, int ctx_off, int size, int stack_off) +{ + bpf_gen__emit(gen, BPF_LDX_MEM(insn_bytes_to_bpf_size(size), BPF_REG_0, BPF_REG_10, stack_off)); + bpf_gen__emit(gen, BPF_STX_MEM(insn_bytes_to_bpf_size(size), BPF_REG_6, BPF_REG_0, ctx_off)); +} + +static void bpf_gen__emit_sys_bpf(struct bpf_gen *gen, int cmd, int attr, int attr_size) +{ + bpf_gen__emit(gen, BPF_MOV64_IMM(BPF_REG_1, cmd)); + bpf_gen__emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_2, BPF_PSEUDO_MAP_IDX_VALUE, 0, 0, 0, attr)); + bpf_gen__emit(gen, BPF_MOV64_IMM(BPF_REG_3, attr_size)); + bpf_gen__emit(gen, BPF_EMIT_CALL(BPF_FUNC_sys_bpf)); + /* remember the result in R7 */ + bpf_gen__emit(gen, BPF_MOV64_REG(BPF_REG_7, BPF_REG_0)); +} + +static void bpf_gen__emit_check_err(struct bpf_gen *gen) +{ + bpf_gen__emit(gen, BPF_JMP_IMM(BPF_JSGE, BPF_REG_7, 0, 2)); + bpf_gen__emit(gen, BPF_MOV64_REG(BPF_REG_0, BPF_REG_7)); + /* TODO: close intermediate FDs in case of error */ + bpf_gen__emit(gen, BPF_EXIT_INSN()); +} + +/* reg1 and reg2 should not be R1 - R5. They can be R0, R6 - R10 */ +static void __bpf_gen__debug(struct bpf_gen *gen, int reg1, int reg2, const char *fmt, va_list args) +{ + char buf[1024]; + int addr, len, ret; + + if (!gen->log_level) + return; + ret = vsnprintf(buf, sizeof(buf), fmt, args); + if (ret < 1024 - 7 && reg1 >= 0 && reg2 < 0) + /* The special case to accommodate common bpf_gen__debug_ret(): + * to avoid specifying BPF_REG_7 and adding " r=%%d" to prints explicitly. + */ + strcat(buf, " r=%d"); + len = strlen(buf) + 1; + addr = bpf_gen__add_data(gen, buf, len); + + bpf_gen__emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX_VALUE, 0, 0, 0, addr)); + bpf_gen__emit(gen, BPF_MOV64_IMM(BPF_REG_2, len)); + if (reg1 >= 0) + bpf_gen__emit(gen, BPF_MOV64_REG(BPF_REG_3, reg1)); + if (reg2 >= 0) + bpf_gen__emit(gen, BPF_MOV64_REG(BPF_REG_4, reg2)); + bpf_gen__emit(gen, BPF_EMIT_CALL(BPF_FUNC_trace_printk)); +} + +static void bpf_gen__debug_regs(struct bpf_gen *gen, int reg1, int reg2, const char *fmt, ...) +{ + va_list args; + + va_start(args, fmt); + __bpf_gen__debug(gen, reg1, reg2, fmt, args); + va_end(args); +} + +static void bpf_gen__debug_ret(struct bpf_gen *gen, const char *fmt, ...) +{ + va_list args; + + va_start(args, fmt); + __bpf_gen__debug(gen, BPF_REG_7, -1, fmt, args); + va_end(args); +} + +static void __bpf_gen__emit_sys_close(struct bpf_gen *gen) +{ + bpf_gen__emit(gen, BPF_JMP_IMM(BPF_JSLE, BPF_REG_1, 0, + /* 2 is the number of the following insns + * 6 is additional insns in debug_regs + */ + 2 + (gen->log_level ? 
6 : 0))); + bpf_gen__emit(gen, BPF_MOV64_REG(BPF_REG_9, BPF_REG_1)); + bpf_gen__emit(gen, BPF_EMIT_CALL(BPF_FUNC_sys_close)); + bpf_gen__debug_regs(gen, BPF_REG_9, BPF_REG_0, "close(%%d) = %%d"); +} + +static void bpf_gen__emit_sys_close_stack(struct bpf_gen *gen, int stack_off) +{ + bpf_gen__emit(gen, BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, stack_off)); + __bpf_gen__emit_sys_close(gen); +} + +static void bpf_gen__emit_sys_close_blob(struct bpf_gen *gen, int blob_off) +{ + bpf_gen__emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_0, BPF_PSEUDO_MAP_IDX_VALUE, + 0, 0, 0, blob_off)); + bpf_gen__emit(gen, BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0)); + __bpf_gen__emit_sys_close(gen); +} + +int bpf_gen__finish(struct bpf_gen *gen) +{ + int i; + + bpf_gen__emit_sys_close_stack(gen, stack_off(btf_fd)); + for (i = 0; i < gen->nr_progs; i++) + bpf_gen__move_stack2ctx(gen, + sizeof(struct bpf_loader_ctx) + + sizeof(struct bpf_map_desc) * gen->nr_maps + + sizeof(struct bpf_prog_desc) * i + + offsetof(struct bpf_prog_desc, prog_fd), 4, + stack_off(prog_fd[i])); + for (i = 0; i < gen->nr_maps; i++) + bpf_gen__move_stack2ctx(gen, + sizeof(struct bpf_loader_ctx) + + sizeof(struct bpf_map_desc) * i + + offsetof(struct bpf_map_desc, map_fd), 4, + stack_off(map_fd[i])); + bpf_gen__emit(gen, BPF_MOV64_IMM(BPF_REG_0, 0)); + bpf_gen__emit(gen, BPF_EXIT_INSN()); + pr_debug("gen: finish %d\n", gen->error); + if (!gen->error) { + struct gen_loader_opts *opts = gen->opts; + + opts->insns = gen->insn_start; + opts->insns_sz = gen->insn_cur - gen->insn_start; + opts->data = gen->data_start; + opts->data_sz = gen->data_cur - gen->data_start; + } + return gen->error; +} + +void bpf_gen__free(struct bpf_gen *gen) +{ + if (!gen) + return; + free(gen->data_start); + free(gen->insn_start); + free(gen); +} + +void bpf_gen__load_btf(struct bpf_gen *gen, const void *btf_raw_data, __u32 btf_raw_size) +{ + int attr_size = offsetofend(union bpf_attr, btf_log_level); + int btf_data, btf_load_attr; + union bpf_attr attr; + + memset(&attr, 0, attr_size); + pr_debug("gen: load_btf: size %d\n", btf_raw_size); + btf_data = bpf_gen__add_data(gen, btf_raw_data, btf_raw_size); + + attr.btf_size = btf_raw_size; + btf_load_attr = bpf_gen__add_data(gen, &attr, attr_size); + + /* populate union bpf_attr with user provided log details */ + bpf_gen__move_ctx2blob(gen, btf_load_attr + offsetof(union bpf_attr, btf_log_level), 4, + offsetof(struct bpf_loader_ctx, log_level), false); + bpf_gen__move_ctx2blob(gen, btf_load_attr + offsetof(union bpf_attr, btf_log_size), 4, + offsetof(struct bpf_loader_ctx, log_size), false); + bpf_gen__move_ctx2blob(gen, btf_load_attr + offsetof(union bpf_attr, btf_log_buf), 8, + offsetof(struct bpf_loader_ctx, log_buf), false); + /* populate union bpf_attr with a pointer to the BTF data */ + bpf_gen__emit_rel_store(gen, btf_load_attr + offsetof(union bpf_attr, btf), btf_data); + /* emit BTF_LOAD command */ + bpf_gen__emit_sys_bpf(gen, BPF_BTF_LOAD, btf_load_attr, attr_size); + bpf_gen__debug_ret(gen, "btf_load size %d", btf_raw_size); + bpf_gen__emit_check_err(gen); + /* remember btf_fd in the stack, if successful */ + bpf_gen__emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_7, stack_off(btf_fd))); +} + +void bpf_gen__map_create(struct bpf_gen *gen, struct bpf_create_map_attr *map_attr, int map_idx) +{ + int attr_size = offsetofend(union bpf_attr, btf_vmlinux_value_type_id); + bool close_inner_map_fd = false; + int map_create_attr; + union bpf_attr attr; + + memset(&attr, 0, attr_size); + attr.map_type = 
map_attr->map_type; + attr.key_size = map_attr->key_size; + attr.value_size = map_attr->value_size; + attr.map_flags = map_attr->map_flags; + memcpy(attr.map_name, map_attr->name, + min((unsigned)strlen(map_attr->name), BPF_OBJ_NAME_LEN - 1)); + attr.numa_node = map_attr->numa_node; + attr.map_ifindex = map_attr->map_ifindex; + attr.max_entries = map_attr->max_entries; + switch (attr.map_type) { + case BPF_MAP_TYPE_PERF_EVENT_ARRAY: + case BPF_MAP_TYPE_CGROUP_ARRAY: + case BPF_MAP_TYPE_STACK_TRACE: + case BPF_MAP_TYPE_ARRAY_OF_MAPS: + case BPF_MAP_TYPE_HASH_OF_MAPS: + case BPF_MAP_TYPE_DEVMAP: + case BPF_MAP_TYPE_DEVMAP_HASH: + case BPF_MAP_TYPE_CPUMAP: + case BPF_MAP_TYPE_XSKMAP: + case BPF_MAP_TYPE_SOCKMAP: + case BPF_MAP_TYPE_SOCKHASH: + case BPF_MAP_TYPE_QUEUE: + case BPF_MAP_TYPE_STACK: + case BPF_MAP_TYPE_RINGBUF: + break; + default: + attr.btf_key_type_id = map_attr->btf_key_type_id; + attr.btf_value_type_id = map_attr->btf_value_type_id; + } + + pr_debug("gen: map_create: %s idx %d type %d value_type_id %d\n", + attr.map_name, map_idx, map_attr->map_type, attr.btf_value_type_id); + + map_create_attr = bpf_gen__add_data(gen, &attr, attr_size); + if (attr.btf_value_type_id) + /* populate union bpf_attr with btf_fd saved in the stack earlier */ + bpf_gen__move_stack2blob(gen, map_create_attr + offsetof(union bpf_attr, btf_fd), 4, + stack_off(btf_fd)); + switch (attr.map_type) { + case BPF_MAP_TYPE_ARRAY_OF_MAPS: + case BPF_MAP_TYPE_HASH_OF_MAPS: + bpf_gen__move_stack2blob(gen, map_create_attr + offsetof(union bpf_attr, inner_map_fd), + 4, stack_off(inner_map_fd)); + close_inner_map_fd = true; + break; + default: + break; + } + /* conditionally update max_entries */ + if (map_idx >= 0) + bpf_gen__move_ctx2blob(gen, map_create_attr + offsetof(union bpf_attr, max_entries), 4, + sizeof(struct bpf_loader_ctx) + + sizeof(struct bpf_map_desc) * map_idx + + offsetof(struct bpf_map_desc, max_entries), + true /* check that max_entries != 0 */); + /* emit MAP_CREATE command */ + bpf_gen__emit_sys_bpf(gen, BPF_MAP_CREATE, map_create_attr, attr_size); + bpf_gen__debug_ret(gen, "map_create %s idx %d type %d value_size %d value_btf_id %d", + attr.map_name, map_idx, map_attr->map_type, attr.value_size, + attr.btf_value_type_id); + bpf_gen__emit_check_err(gen); + /* remember map_fd in the stack, if successful */ + if (map_idx < 0) { + /* This bpf_gen__map_create() function is called with map_idx >= 0 for all maps + * that libbpf loading logic tracks. + * It's called with -1 to create an inner map. 
+ */ + bpf_gen__emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_7, stack_off(inner_map_fd))); + } else if (map_idx != gen->nr_maps) { + gen->error = -EDOM; /* internal bug */ + return; + } else { + bpf_gen__emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_7, stack_off(map_fd[map_idx]))); + gen->nr_maps++; + } + if (close_inner_map_fd) + bpf_gen__emit_sys_close_stack(gen, stack_off(inner_map_fd)); +} + +void bpf_gen__record_attach_target(struct bpf_gen *gen, const char *attach_name, + enum bpf_attach_type type) +{ + const char *prefix; + int kind, ret; + + btf_get_kernel_prefix_kind(type, &prefix, &kind); + gen->attach_kind = kind; + ret = snprintf(gen->attach_target, sizeof(gen->attach_target), "%s%s", prefix, attach_name); + if (ret == sizeof(gen->attach_target)) + gen->error = -ENOSPC; +} + +static void bpf_gen__emit_find_attach_target(struct bpf_gen *gen) +{ + int name, len = strlen(gen->attach_target) + 1; + + pr_debug("gen: find_attach_tgt %s %d\n", gen->attach_target, gen->attach_kind); + name = bpf_gen__add_data(gen, gen->attach_target, len); + + bpf_gen__emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX_VALUE, 0, 0, 0, name)); + bpf_gen__emit(gen, BPF_MOV64_IMM(BPF_REG_2, len)); + bpf_gen__emit(gen, BPF_MOV64_IMM(BPF_REG_3, gen->attach_kind)); + bpf_gen__emit(gen, BPF_MOV64_IMM(BPF_REG_4, 0)); + bpf_gen__emit(gen, BPF_EMIT_CALL(BPF_FUNC_btf_find_by_name_kind)); + bpf_gen__emit(gen, BPF_MOV64_REG(BPF_REG_7, BPF_REG_0)); + bpf_gen__debug_ret(gen, "find_by_name_kind(%s,%d)", gen->attach_target, gen->attach_kind); + bpf_gen__emit_check_err(gen); + /* if successful, btf_id is in lower 32-bit of R7 and btf_obj_fd is in upper 32-bit */ +} + +void bpf_gen__record_extern(struct bpf_gen *gen, const char *name, int kind, int insn_idx) +{ + struct ksym_relo_desc *relo; + + relo = libbpf_reallocarray(gen->relos, gen->relo_cnt + 1, sizeof(*relo)); + if (!relo) { + gen->error = -ENOMEM; + return; + } + gen->relos = relo; + relo += gen->relo_cnt; + relo->name = name; + relo->kind = kind; + relo->insn_idx = insn_idx; + gen->relo_cnt++; +} + +static void bpf_gen__emit_relo(struct bpf_gen *gen, struct ksym_relo_desc *relo, int insns) +{ + int name, insn, len = strlen(relo->name) + 1; + + pr_debug("gen: emit_relo: %s at %d\n", relo->name, relo->insn_idx); + name = bpf_gen__add_data(gen, relo->name, len); + + bpf_gen__emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX_VALUE, 0, 0, 0, name)); + bpf_gen__emit(gen, BPF_MOV64_IMM(BPF_REG_2, len)); + bpf_gen__emit(gen, BPF_MOV64_IMM(BPF_REG_3, relo->kind)); + bpf_gen__emit(gen, BPF_MOV64_IMM(BPF_REG_4, 0)); + bpf_gen__emit(gen, BPF_EMIT_CALL(BPF_FUNC_btf_find_by_name_kind)); + bpf_gen__emit(gen, BPF_MOV64_REG(BPF_REG_7, BPF_REG_0)); + bpf_gen__debug_ret(gen, "find_by_name_kind(%s,%d)", relo->name, relo->kind); + bpf_gen__emit_check_err(gen); + /* store btf_id into insn[insn_idx].imm */ + insn = insns + sizeof(struct bpf_insn) * relo->insn_idx + offsetof(struct bpf_insn, imm); + bpf_gen__emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_0, BPF_PSEUDO_MAP_IDX_VALUE, 0, 0, 0, insn)); + bpf_gen__emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, 0)); + if (relo->kind == BTF_KIND_VAR) { + /* store btf_obj_fd into insn[insn_idx + 1].imm */ + bpf_gen__emit(gen, BPF_ALU64_IMM(BPF_RSH, BPF_REG_7, 32)); + bpf_gen__emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, sizeof(struct bpf_insn))); + } +} + +static void bpf_gen__emit_relos(struct bpf_gen *gen, int insns) +{ + int i; + + for (i = 0; i < gen->relo_cnt; i++) + bpf_gen__emit_relo(gen, gen->relos 
+ i, insns); +} + +static void bpf_gen__cleanup_relos(struct bpf_gen *gen, int insns) +{ + int i, insn; + + for (i = 0; i < gen->relo_cnt; i++) { + if (gen->relos[i].kind != BTF_KIND_VAR) + continue; + /* close fd recorded in insn[insn_idx + 1].imm */ + insn = insns + sizeof(struct bpf_insn) * (gen->relos[i].insn_idx + 1) + + offsetof(struct bpf_insn, imm); + bpf_gen__emit_sys_close_blob(gen, insn); + } + if (gen->relo_cnt) { + free(gen->relos); + gen->relo_cnt = 0; + gen->relos = NULL; + } +} + +void bpf_gen__prog_load(struct bpf_gen *gen, struct bpf_prog_load_params *load_attr, int prog_idx) +{ + int attr_size = offsetofend(union bpf_attr, fd_array); + int prog_load_attr, license, insns, func_info, line_info; + union bpf_attr attr; + + memset(&attr, 0, attr_size); + pr_debug("gen: prog_load: type %d insns_cnt %zd\n", + load_attr->prog_type, load_attr->insn_cnt); + /* add license string to blob of bytes */ + license = bpf_gen__add_data(gen, load_attr->license, strlen(load_attr->license) + 1); + /* add insns to blob of bytes */ + insns = bpf_gen__add_data(gen, load_attr->insns, + load_attr->insn_cnt * sizeof(struct bpf_insn)); + + attr.prog_type = load_attr->prog_type; + attr.expected_attach_type = load_attr->expected_attach_type; + attr.attach_btf_id = load_attr->attach_btf_id; + attr.prog_ifindex = load_attr->prog_ifindex; + attr.kern_version = 0; + attr.insn_cnt = (__u32)load_attr->insn_cnt; + attr.prog_flags = load_attr->prog_flags; + + attr.func_info_rec_size = load_attr->func_info_rec_size; + attr.func_info_cnt = load_attr->func_info_cnt; + func_info = bpf_gen__add_data(gen, load_attr->func_info, + attr.func_info_cnt * attr.func_info_rec_size); + + attr.line_info_rec_size = load_attr->line_info_rec_size; + attr.line_info_cnt = load_attr->line_info_cnt; + line_info = bpf_gen__add_data(gen, load_attr->line_info, + attr.line_info_cnt * attr.line_info_rec_size); + + memcpy(attr.prog_name, load_attr->name, + min((unsigned)strlen(load_attr->name), BPF_OBJ_NAME_LEN - 1)); + prog_load_attr = bpf_gen__add_data(gen, &attr, attr_size); + + /* populate union bpf_attr with a pointer to license */ + bpf_gen__emit_rel_store(gen, prog_load_attr + offsetof(union bpf_attr, license), license); + + /* populate union bpf_attr with a pointer to instructions */ + bpf_gen__emit_rel_store(gen, prog_load_attr + offsetof(union bpf_attr, insns), insns); + + /* populate union bpf_attr with a pointer to func_info */ + bpf_gen__emit_rel_store(gen, prog_load_attr + offsetof(union bpf_attr, func_info), func_info); + + /* populate union bpf_attr with a pointer to line_info */ + bpf_gen__emit_rel_store(gen, prog_load_attr + offsetof(union bpf_attr, line_info), line_info); + + /* populate union bpf_attr fd_array with a pointer to stack where map_fds are saved */ + bpf_gen__emit_rel_store_sp(gen, prog_load_attr + offsetof(union bpf_attr, fd_array), + stack_off(map_fd[0])); + + /* populate union bpf_attr with user provided log details */ + bpf_gen__move_ctx2blob(gen, prog_load_attr + offsetof(union bpf_attr, log_level), 4, + offsetof(struct bpf_loader_ctx, log_level), false); + bpf_gen__move_ctx2blob(gen, prog_load_attr + offsetof(union bpf_attr, log_size), 4, + offsetof(struct bpf_loader_ctx, log_size), false); + bpf_gen__move_ctx2blob(gen, prog_load_attr + offsetof(union bpf_attr, log_buf), 8, + offsetof(struct bpf_loader_ctx, log_buf), false); + /* populate union bpf_attr with btf_fd saved in the stack earlier */ + bpf_gen__move_stack2blob(gen, prog_load_attr + offsetof(union bpf_attr, prog_btf_fd), 4, + 
stack_off(btf_fd)); + if (gen->attach_kind) { + bpf_gen__emit_find_attach_target(gen); + /* populate union bpf_attr with btf_id and btf_obj_fd found by helper */ + bpf_gen__emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_0, BPF_PSEUDO_MAP_IDX_VALUE, + 0, 0, 0, prog_load_attr)); + bpf_gen__emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, + offsetof(union bpf_attr, attach_btf_id))); + bpf_gen__emit(gen, BPF_ALU64_IMM(BPF_RSH, BPF_REG_7, 32)); + bpf_gen__emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, + offsetof(union bpf_attr, attach_btf_obj_fd))); + } + bpf_gen__emit_relos(gen, insns); + /* emit PROG_LOAD command */ + bpf_gen__emit_sys_bpf(gen, BPF_PROG_LOAD, prog_load_attr, attr_size); + bpf_gen__debug_ret(gen, "prog_load %s insn_cnt %d", attr.prog_name, attr.insn_cnt); + /* successful or not, close btf module FDs used in extern ksyms and attach_btf_obj_fd */ + bpf_gen__cleanup_relos(gen, insns); + if (gen->attach_kind) + bpf_gen__emit_sys_close_blob(gen, + prog_load_attr + offsetof(union bpf_attr, attach_btf_obj_fd)); + bpf_gen__emit_check_err(gen); + /* remember prog_fd in the stack, if successful */ + bpf_gen__emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_7, stack_off(prog_fd[gen->nr_progs]))); + gen->nr_progs++; +} + +void bpf_gen__map_update_elem(struct bpf_gen *gen, int map_idx, void *pvalue, __u32 value_size) +{ + int attr_size = offsetofend(union bpf_attr, flags); + int map_update_attr, value, key; + union bpf_attr attr; + int zero = 0; + + memset(&attr, 0, attr_size); + pr_debug("gen: map_update_elem: idx %d\n", map_idx); + + value = bpf_gen__add_data(gen, pvalue, value_size); + key = bpf_gen__add_data(gen, &zero, sizeof(zero)); + + /* if (map_desc[map_idx].initial_value) + * copy_from_user(value, initial_value, value_size); + */ + bpf_gen__emit(gen, BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_6, + sizeof(struct bpf_loader_ctx) + + sizeof(struct bpf_map_desc) * map_idx + + offsetof(struct bpf_map_desc, initial_value))); + bpf_gen__emit(gen, BPF_JMP_IMM(BPF_JEQ, BPF_REG_3, 0, 4)); + bpf_gen__emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX_VALUE, 0, 0, 0, value)); + bpf_gen__emit(gen, BPF_MOV64_IMM(BPF_REG_2, value_size)); + bpf_gen__emit(gen, BPF_EMIT_CALL(BPF_FUNC_copy_from_user)); + + map_update_attr = bpf_gen__add_data(gen, &attr, attr_size); + bpf_gen__move_stack2blob(gen, map_update_attr + offsetof(union bpf_attr, map_fd), 4, + stack_off(map_fd[map_idx])); + bpf_gen__emit_rel_store(gen, map_update_attr + offsetof(union bpf_attr, key), key); + bpf_gen__emit_rel_store(gen, map_update_attr + offsetof(union bpf_attr, value), value); + /* emit MAP_UPDATE_ELEM command */ + bpf_gen__emit_sys_bpf(gen, BPF_MAP_UPDATE_ELEM, map_update_attr, attr_size); + bpf_gen__debug_ret(gen, "update_elem idx %d value_size %d", map_idx, value_size); + bpf_gen__emit_check_err(gen); +} + +void bpf_gen__map_freeze(struct bpf_gen *gen, int map_idx) +{ + int attr_size = offsetofend(union bpf_attr, map_fd); + int map_freeze_attr; + union bpf_attr attr; + + memset(&attr, 0, attr_size); + pr_debug("gen: map_freeze: idx %d\n", map_idx); + map_freeze_attr = bpf_gen__add_data(gen, &attr, attr_size); + bpf_gen__move_stack2blob(gen, map_freeze_attr + offsetof(union bpf_attr, map_fd), 4, + stack_off(map_fd[map_idx])); + /* emit MAP_FREEZE command */ + bpf_gen__emit_sys_bpf(gen, BPF_MAP_FREEZE, map_freeze_attr, attr_size); + bpf_gen__debug_ret(gen, "map_freeze"); + bpf_gen__emit_check_err(gen); +} diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 75a0ca75db77..0ca79712f850 100644 --- 
a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -54,6 +54,7 @@ #include "str_error.h" #include "libbpf_internal.h" #include "hashmap.h" +#include "bpf_gen_internal.h" #ifndef BPF_FS_MAGIC #define BPF_FS_MAGIC 0xcafe4a11 @@ -435,6 +436,8 @@ struct bpf_object { bool loaded; bool has_subcalls; + struct bpf_gen *gen_loader; + /* * Information when doing elf related work. Only valid if fd * is valid. @@ -2722,7 +2725,20 @@ static int bpf_object__sanitize_and_load_btf(struct bpf_object *obj) bpf_object__sanitize_btf(obj, kern_btf); } - err = btf__load(kern_btf); + if (obj->gen_loader) { + __u32 raw_size = 0; + const void *raw_data = btf__get_raw_data(kern_btf, &raw_size); + + if (!raw_data) + return -ENOMEM; + bpf_gen__load_btf(obj->gen_loader, raw_data, raw_size); + /* Pretend to have valid FD to pass various fd >= 0 checks. + * This fd == 0 will not be used with any syscall and will be reset to -1 eventually. + */ + btf__set_fd(kern_btf, 0); + } else { + err = btf__load(kern_btf); + } if (sanitize) { if (!err) { /* move fd to libbpf's BTF */ @@ -4324,6 +4340,12 @@ static bool kernel_supports(const struct bpf_object *obj, enum kern_feature_id f struct kern_feature_desc *feat = &feature_probes[feat_id]; int ret; + if (obj->gen_loader) + /* To generate loader program assume the latest kernel + * to avoid doing extra prog_load, map_create syscalls. + */ + return true; + if (READ_ONCE(feat->res) == FEAT_UNKNOWN) { ret = feat->probe(); if (ret > 0) { @@ -4406,6 +4428,13 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map) char *cp, errmsg[STRERR_BUFSIZE]; int err, zero = 0; + if (obj->gen_loader) { + bpf_gen__map_update_elem(obj->gen_loader, map - obj->maps, + map->mmaped, map->def.value_size); + if (map_type == LIBBPF_MAP_RODATA || map_type == LIBBPF_MAP_KCONFIG) + bpf_gen__map_freeze(obj->gen_loader, map - obj->maps); + return 0; + } err = bpf_map_update_elem(map->fd, &zero, map->mmaped, 0); if (err) { err = -errno; @@ -4431,7 +4460,7 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map) static void bpf_map__destroy(struct bpf_map *map); -static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map) +static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, bool is_inner) { struct bpf_create_map_attr create_attr; struct bpf_map_def *def = &map->def; @@ -4479,7 +4508,7 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map) if (map->inner_map) { int err; - err = bpf_object__create_map(obj, map->inner_map); + err = bpf_object__create_map(obj, map->inner_map, true); if (err) { pr_warn("map '%s': failed to create inner map: %d\n", map->name, err); @@ -4491,7 +4520,15 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map) create_attr.inner_map_fd = map->inner_map_fd; } - map->fd = bpf_create_map_xattr(&create_attr); + if (obj->gen_loader) { + bpf_gen__map_create(obj->gen_loader, &create_attr, is_inner ? -1 : map - obj->maps); + /* Pretend to have valid FD to pass various fd >= 0 checks. + * This fd == 0 will not be used with any syscall and will be reset to -1 eventually. 
+ */ + map->fd = 0; + } else { + map->fd = bpf_create_map_xattr(&create_attr); + } if (map->fd < 0 && (create_attr.btf_key_type_id || create_attr.btf_value_type_id)) { char *cp, errmsg[STRERR_BUFSIZE]; @@ -4512,6 +4549,8 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map) return -errno; if (bpf_map_type__is_map_in_map(def->type) && map->inner_map) { + if (obj->gen_loader) + map->inner_map->fd = -1; bpf_map__destroy(map->inner_map); zfree(&map->inner_map); } @@ -4519,11 +4558,11 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map) return 0; } -static int init_map_slots(struct bpf_map *map) +static int init_map_slots(struct bpf_object *obj, struct bpf_map *map) { const struct bpf_map *targ_map; unsigned int i; - int fd, err; + int fd, err = 0; for (i = 0; i < map->init_slots_sz; i++) { if (!map->init_slots[i]) @@ -4531,7 +4570,13 @@ static int init_map_slots(struct bpf_map *map) targ_map = map->init_slots[i]; fd = bpf_map__fd(targ_map); - err = bpf_map_update_elem(map->fd, &i, &fd, 0); + if (obj->gen_loader) { + pr_warn("// TODO map_update_elem: idx %ld key %d value==map_idx %ld\n", + map - obj->maps, i, targ_map - obj->maps); + return -ENOTSUP; + } else { + err = bpf_map_update_elem(map->fd, &i, &fd, 0); + } if (err) { err = -errno; pr_warn("map '%s': failed to initialize slot [%d] to map '%s' fd=%d: %d\n", @@ -4573,7 +4618,7 @@ bpf_object__create_maps(struct bpf_object *obj) pr_debug("map '%s': skipping creation (preset fd=%d)\n", map->name, map->fd); } else { - err = bpf_object__create_map(obj, map); + err = bpf_object__create_map(obj, map, false); if (err) goto err_out; @@ -4589,7 +4634,7 @@ bpf_object__create_maps(struct bpf_object *obj) } if (map->init_slots_sz) { - err = init_map_slots(map); + err = init_map_slots(obj, map); if (err < 0) { zclose(map->fd); goto err_out; @@ -4999,6 +5044,9 @@ static int load_module_btfs(struct bpf_object *obj) if (obj->btf_modules_loaded) return 0; + if (obj->gen_loader) + return 0; + /* don't do this again, even if we find no module BTFs */ obj->btf_modules_loaded = true; @@ -6146,6 +6194,12 @@ static int bpf_core_apply_relo(struct bpf_program *prog, if (str_is_empty(spec_str)) return -EINVAL; + if (prog->obj->gen_loader) { + pr_warn("// TODO core_relo: prog %ld insn[%d] %s %s kind %d\n", + prog - prog->obj->programs, relo->insn_off / 8, + local_name, spec_str, relo->kind); + return -ENOTSUP; + } err = bpf_core_parse_spec(local_btf, local_id, spec_str, relo->kind, &local_spec); if (err) { pr_warn("prog '%s': relo #%d: parsing [%d] %s %s + %s failed: %d\n", @@ -6876,6 +6930,20 @@ bpf_object__relocate_calls(struct bpf_object *obj, struct bpf_program *prog) return 0; } +static void +bpf_object__free_relocs(struct bpf_object *obj) +{ + struct bpf_program *prog; + int i; + + /* free up relocation descriptors */ + for (i = 0; i < obj->nr_programs; i++) { + prog = &obj->programs[i]; + zfree(&prog->reloc_desc); + prog->nr_reloc = 0; + } +} + static int bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_path) { @@ -6945,12 +7013,8 @@ bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_path) return err; } } - /* free up relocation descriptors */ - for (i = 0; i < obj->nr_programs; i++) { - prog = &obj->programs[i]; - zfree(&prog->reloc_desc); - prog->nr_reloc = 0; - } + if (!obj->gen_loader) + bpf_object__free_relocs(obj); return 0; } @@ -7139,6 +7203,9 @@ static int bpf_object__sanitize_prog(struct bpf_object *obj, struct bpf_program enum bpf_func_id func_id; int i; + if 
(obj->gen_loader) + return 0; + for (i = 0; i < prog->insns_cnt; i++, insn++) { if (!insn_is_helper_call(insn, &func_id)) continue; @@ -7224,6 +7291,12 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt, load_attr.log_level = prog->log_level; load_attr.prog_flags = prog->prog_flags; + if (prog->obj->gen_loader) { + bpf_gen__prog_load(prog->obj->gen_loader, &load_attr, + prog - prog->obj->programs); + *pfd = -1; + return 0; + } retry_load: if (log_buf_size) { log_buf = malloc(log_buf_size); @@ -7301,6 +7374,38 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt, return ret; } +static int bpf_program__record_externs(struct bpf_program *prog) +{ + struct bpf_object *obj = prog->obj; + int i; + + for (i = 0; i < prog->nr_reloc; i++) { + struct reloc_desc *relo = &prog->reloc_desc[i]; + struct extern_desc *ext = &obj->externs[relo->sym_off]; + + switch (relo->type) { + case RELO_EXTERN_VAR: + if (ext->type != EXT_KSYM) + continue; + if (!ext->ksym.type_id) { + pr_warn("typeless ksym %s is not supported yet\n", + ext->name); + return -ENOTSUP; + } + bpf_gen__record_extern(obj->gen_loader, ext->name, BTF_KIND_VAR, + relo->insn_idx); + break; + case RELO_EXTERN_FUNC: + bpf_gen__record_extern(obj->gen_loader, ext->name, BTF_KIND_FUNC, + relo->insn_idx); + break; + default: + continue; + } + } + return 0; +} + static int libbpf_find_attach_btf_id(struct bpf_program *prog, int *btf_obj_fd, int *btf_type_id); int bpf_program__load(struct bpf_program *prog, char *license, __u32 kern_ver) @@ -7346,6 +7451,8 @@ int bpf_program__load(struct bpf_program *prog, char *license, __u32 kern_ver) pr_warn("prog '%s': inconsistent nr(%d) != 1\n", prog->name, prog->instances.nr); } + if (prog->obj->gen_loader) + bpf_program__record_externs(prog); err = load_program(prog, prog->insns, prog->insns_cnt, license, kern_ver, &fd); if (!err) @@ -7435,6 +7542,8 @@ bpf_object__load_progs(struct bpf_object *obj, int log_level) return err; } } + if (obj->gen_loader) + bpf_object__free_relocs(obj); free(fd_array); return 0; } @@ -7816,6 +7925,12 @@ static int bpf_object__resolve_ksyms_btf_id(struct bpf_object *obj) if (ext->type != EXT_KSYM || !ext->ksym.type_id) continue; + if (obj->gen_loader) { + ext->is_set = true; + ext->ksym.kernel_btf_obj_fd = 0; + ext->ksym.kernel_btf_id = 0; + continue; + } t = btf__type_by_id(obj->btf, ext->btf_id); if (btf_is_var(t)) err = bpf_object__resolve_ksym_var_btf_id(obj, ext); @@ -7930,6 +8045,9 @@ int bpf_object__load_xattr(struct bpf_object_load_attr *attr) return -EINVAL; } + if (obj->gen_loader) + bpf_gen__init(obj->gen_loader, attr->log_level); + err = bpf_object__probe_loading(obj); err = err ? : bpf_object__load_vmlinux_btf(obj, false); err = err ? : bpf_object__resolve_externs(obj, obj->kconfig); @@ -7940,6 +8058,15 @@ int bpf_object__load_xattr(struct bpf_object_load_attr *attr) err = err ? : bpf_object__relocate(obj, attr->target_btf_path); err = err ? 
: bpf_object__load_progs(obj, attr->log_level); + if (obj->gen_loader) { + /* reset FDs */ + btf__set_fd(obj->btf, -1); + for (i = 0; i < obj->nr_maps; i++) + obj->maps[i].fd = -1; + if (!err) + err = bpf_gen__finish(obj->gen_loader); + } + /* clean up module BTFs */ for (i = 0; i < obj->btf_module_cnt; i++) { close(obj->btf_modules[i].fd); @@ -8565,6 +8692,7 @@ void bpf_object__close(struct bpf_object *obj) if (obj->clear_priv) obj->clear_priv(obj, obj->priv); + bpf_gen__free(obj->gen_loader); bpf_object__elf_finish(obj); bpf_object__unload(obj); btf__free(obj->btf); @@ -8655,6 +8783,22 @@ void *bpf_object__priv(const struct bpf_object *obj) return obj ? obj->priv : ERR_PTR(-EINVAL); } +int bpf_object__gen_loader(struct bpf_object *obj, struct gen_loader_opts *opts) +{ + struct bpf_gen *gen; + + if (!opts) + return -EFAULT; + if (!OPTS_VALID(opts, gen_loader_opts)) + return -EINVAL; + gen = calloc(sizeof(*gen), 1); + if (!gen) + return -ENOMEM; + gen->opts = opts; + obj->gen_loader = gen; + return 0; +} + static struct bpf_program * __bpf_program__iter(const struct bpf_program *p, const struct bpf_object *obj, bool forward) @@ -9292,6 +9436,28 @@ static int bpf_object__collect_st_ops_relos(struct bpf_object *obj, #define BTF_ITER_PREFIX "bpf_iter_" #define BTF_MAX_NAME_SIZE 128 +void btf_get_kernel_prefix_kind(enum bpf_attach_type attach_type, + const char **prefix, int *kind) +{ + switch (attach_type) { + case BPF_TRACE_RAW_TP: + *prefix = BTF_TRACE_PREFIX; + *kind = BTF_KIND_TYPEDEF; + break; + case BPF_LSM_MAC: + *prefix = BTF_LSM_PREFIX; + *kind = BTF_KIND_FUNC; + break; + case BPF_TRACE_ITER: + *prefix = BTF_ITER_PREFIX; + *kind = BTF_KIND_FUNC; + break; + default: + *prefix = ""; + *kind = BTF_KIND_FUNC; + } +} + static int find_btf_by_prefix_kind(const struct btf *btf, const char *prefix, const char *name, __u32 kind) { @@ -9312,21 +9478,11 @@ static int find_btf_by_prefix_kind(const struct btf *btf, const char *prefix, static inline int find_attach_btf_id(struct btf *btf, const char *name, enum bpf_attach_type attach_type) { - int err; - - if (attach_type == BPF_TRACE_RAW_TP) - err = find_btf_by_prefix_kind(btf, BTF_TRACE_PREFIX, name, - BTF_KIND_TYPEDEF); - else if (attach_type == BPF_LSM_MAC) - err = find_btf_by_prefix_kind(btf, BTF_LSM_PREFIX, name, - BTF_KIND_FUNC); - else if (attach_type == BPF_TRACE_ITER) - err = find_btf_by_prefix_kind(btf, BTF_ITER_PREFIX, name, - BTF_KIND_FUNC); - else - err = btf__find_by_name_kind(btf, name, BTF_KIND_FUNC); + const char *prefix; + int kind; - return err; + btf_get_kernel_prefix_kind(attach_type, &prefix, &kind); + return find_btf_by_prefix_kind(btf, prefix, name, kind); } int libbpf_find_vmlinux_btf_id(const char *name, @@ -9425,7 +9581,7 @@ static int libbpf_find_attach_btf_id(struct bpf_program *prog, int *btf_obj_fd, __u32 attach_prog_fd = prog->attach_prog_fd; const char *name = prog->sec_name, *attach_name; const struct bpf_sec_def *sec = NULL; - int i, err; + int i, err = 0; if (!name) return -EINVAL; @@ -9460,7 +9616,13 @@ static int libbpf_find_attach_btf_id(struct bpf_program *prog, int *btf_obj_fd, } /* kernel/module BTF ID */ - err = find_kernel_btf_id(prog->obj, attach_name, attach_type, btf_obj_fd, btf_type_id); + if (prog->obj->gen_loader) { + bpf_gen__record_attach_target(prog->obj->gen_loader, attach_name, attach_type); + *btf_obj_fd = 0; + *btf_type_id = 1; + } else { + err = find_kernel_btf_id(prog->obj, attach_name, attach_type, btf_obj_fd, btf_type_id); + } if (err) { pr_warn("failed to find kernel BTF type ID of 
'%s': %d\n", attach_name, err); return err; diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h index bec4e6a6e31d..fb291b4529e8 100644 --- a/tools/lib/bpf/libbpf.h +++ b/tools/lib/bpf/libbpf.h @@ -756,6 +756,18 @@ LIBBPF_API int bpf_object__attach_skeleton(struct bpf_object_skeleton *s); LIBBPF_API void bpf_object__detach_skeleton(struct bpf_object_skeleton *s); LIBBPF_API void bpf_object__destroy_skeleton(struct bpf_object_skeleton *s); +struct gen_loader_opts { + size_t sz; /* size of this struct, for forward/backward compatiblity */ + const char *data; + const char *insns; + __u32 data_sz; + __u32 insns_sz; +}; + +#define gen_loader_opts__last_field insns_sz +LIBBPF_API int bpf_object__gen_loader(struct bpf_object *obj, + struct gen_loader_opts *opts); + enum libbpf_tristate { TRI_NO = 0, TRI_YES = 1, diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map index b9b29baf1df8..889ee2f3611c 100644 --- a/tools/lib/bpf/libbpf.map +++ b/tools/lib/bpf/libbpf.map @@ -360,5 +360,6 @@ LIBBPF_0.4.0 { bpf_linker__free; bpf_linker__new; bpf_map__inner_map; + bpf_object__gen_loader; bpf_object__set_kversion; } LIBBPF_0.3.0; diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h index 2d4f4a995f35..2404cb0f01fb 100644 --- a/tools/lib/bpf/libbpf_internal.h +++ b/tools/lib/bpf/libbpf_internal.h @@ -259,6 +259,8 @@ int bpf_object__section_size(const struct bpf_object *obj, const char *name, int bpf_object__variable_offset(const struct bpf_object *obj, const char *name, __u32 *off); struct btf *btf_get_from_fd(int btf_fd, struct btf *base_btf); +void btf_get_kernel_prefix_kind(enum bpf_attach_type attach_type, + const char **prefix, int *kind); struct btf_ext_info { /* diff --git a/tools/lib/bpf/skel_internal.h b/tools/lib/bpf/skel_internal.h new file mode 100644 index 000000000000..78005b387063 --- /dev/null +++ b/tools/lib/bpf/skel_internal.h @@ -0,0 +1,116 @@ +/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */ +/* Copyright (c) 2021 Facebook */ +#ifndef __SKEL_INTERNAL_H +#define __SKEL_INTERNAL_H + +#include +#include +#include + +/* This file is a base header for auto-generated *.lskel.h files. + * Its contents will change and may become part of auto-generation in the future. + * + * The layout of bpf_[map|prog]_desc and bpf_loader_ctx is feature dependent + * and will change from one version of libbpf to another and features + * requested during loader program generation. 
+ */ +struct bpf_map_desc { + union { + /* input for the loader prog */ + struct { + __aligned_u64 initial_value; + __u32 max_entries; + }; + /* output of the loader prog */ + struct { + int map_fd; + }; + }; +}; +struct bpf_prog_desc { + int prog_fd; +}; + +struct bpf_loader_ctx { + size_t sz; + __u32 log_level; + __u32 log_size; + __u64 log_buf; +}; + +struct bpf_load_and_run_opts { + struct bpf_loader_ctx *ctx; + const void *data; + const void *insns; + __u32 data_sz; + __u32 insns_sz; + const char *errstr; +}; + +static inline int sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr, + unsigned int size) +{ + return syscall(__NR_bpf, cmd, attr, size); +} + +static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts) +{ + int map_fd = -1, prog_fd = -1, key = 0, err; + union bpf_attr attr; + + map_fd = bpf_create_map_name(BPF_MAP_TYPE_ARRAY, "__loader.map", 4, + opts->data_sz, 1, 0); + if (map_fd < 0) { + opts->errstr = "failed to create loader map"; + err = -errno; + goto out; + } + + err = bpf_map_update_elem(map_fd, &key, opts->data, 0); + if (err < 0) { + opts->errstr = "failed to update loader map"; + err = -errno; + goto out; + } + + memset(&attr, 0, sizeof(attr)); + attr.prog_type = BPF_PROG_TYPE_SYSCALL; + attr.insns = (long) opts->insns; + attr.insn_cnt = opts->insns_sz / sizeof(struct bpf_insn); + attr.license = (long) "Dual BSD/GPL"; + memcpy(attr.prog_name, "__loader.prog", sizeof("__loader.prog")); + attr.fd_array = (long) &map_fd; + attr.log_level = opts->ctx->log_level; + attr.log_size = opts->ctx->log_size; + attr.log_buf = opts->ctx->log_buf; + attr.prog_flags = BPF_F_SLEEPABLE; + prog_fd = sys_bpf(BPF_PROG_LOAD, &attr, sizeof(attr)); + if (prog_fd < 0) { + opts->errstr = "failed to load loader prog"; + err = -errno; + goto out; + } + + memset(&attr, 0, sizeof(attr)); + attr.test.prog_fd = prog_fd; + attr.test.ctx_in = (long) opts->ctx; + attr.test.ctx_size_in = opts->ctx->sz; + err = sys_bpf(BPF_PROG_TEST_RUN, &attr, sizeof(attr)); + if (err < 0 || (int)attr.test.retval < 0) { + opts->errstr = "failed to execute loader prog"; + if (err < 0) + err = -errno; + else + err = (int)attr.test.retval; + goto out; + } + err = 0; +out: + if (map_fd >= 0) + close(map_fd); + if (prog_fd >= 0) + close(prog_fd); + return err; +} + +#endif From patchwork Sat May 8 03:48:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245789 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EDE95C43460 for ; Sat, 8 May 2021 03:49:12 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D1EFC6128A for ; Sat, 8 May 2021 03:49:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230507AbhEHDuM (ORCPT ); Fri, 7 May 2021 23:50:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34430 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by 
vger.kernel.org with ESMTP id S230249AbhEHDuM (ORCPT ); Fri, 7 May 2021 23:50:12 -0400 From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 15/22] libbpf: Use fd_array only with gen_loader. Date: Fri, 7 May 2021 20:48:30 -0700 Message-Id: <20210508034837.64585-16-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov Rely on the fd_array kernel feature only when generating the loader program, since it is mandatory there. Avoid using fd_array by default to preserve test coverage for old-style map_fd patching.
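For context, the difference between the two relocation styles (see the RELO_LD64 hunks below) at the instruction level: old-style patching writes a map FD into the ld_imm64 immediate, while the gen_loader path writes an index into the fd_array passed to the kernel at program load time. A minimal sketch, assuming a <linux/bpf.h> that already carries the BPF_PSEUDO_MAP_IDX definition added earlier in this series; the FD and index values are made up:

	#include <linux/bpf.h>
	#include <stdio.h>

	int main(void)
	{
		/* A map reference is an ld_imm64 (two struct bpf_insn slots);
		 * only the first slot, which carries src_reg and the low imm,
		 * is shown here.
		 */
		struct bpf_insn old_style = {
			.code    = BPF_LD | BPF_IMM | BPF_DW,
			.src_reg = BPF_PSEUDO_MAP_FD,  /* imm is a map FD patched in by libbpf */
			.imm     = 7,                  /* hypothetical FD */
		};
		struct bpf_insn gen_loader_style = {
			.code    = BPF_LD | BPF_IMM | BPF_DW,
			.src_reg = BPF_PSEUDO_MAP_IDX, /* imm is an index into attr.fd_array[] */
			.imm     = 0,                  /* hypothetical index */
		};

		printf("old src_reg=%d new src_reg=%d\n",
		       old_style.src_reg, gen_loader_style.src_reg);
		return 0;
	}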
Signed-off-by: Alexei Starovoitov --- tools/lib/bpf/libbpf.c | 24 ++++-------------------- 1 file changed, 4 insertions(+), 20 deletions(-) diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 0ca79712f850..24a659448782 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -291,7 +291,6 @@ struct bpf_program { __u32 line_info_rec_size; __u32 line_info_cnt; __u32 prog_flags; - int *fd_array; }; struct bpf_struct_ops { @@ -6451,7 +6450,7 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog) switch (relo->type) { case RELO_LD64: - if (kernel_supports(obj, FEAT_FD_IDX)) { + if (obj->gen_loader) { insn[0].src_reg = BPF_PSEUDO_MAP_IDX; insn[0].imm = relo->map_idx; } else { @@ -6461,7 +6460,7 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog) break; case RELO_DATA: insn[1].imm = insn[0].imm + relo->sym_off; - if (kernel_supports(obj, FEAT_FD_IDX)) { + if (obj->gen_loader) { insn[0].src_reg = BPF_PSEUDO_MAP_IDX_VALUE; insn[0].imm = relo->map_idx; } else { @@ -6472,7 +6471,7 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog) case RELO_EXTERN_VAR: ext = &obj->externs[relo->sym_off]; if (ext->type == EXT_KCFG) { - if (kernel_supports(obj, FEAT_FD_IDX)) { + if (obj->gen_loader) { insn[0].src_reg = BPF_PSEUDO_MAP_IDX_VALUE; insn[0].imm = obj->kconfig_map_idx; } else { @@ -7275,7 +7274,6 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt, load_attr.attach_btf_id = prog->attach_btf_id; load_attr.kern_version = kern_version; load_attr.prog_ifindex = prog->prog_ifindex; - load_attr.fd_array = prog->fd_array; /* specify func_info/line_info only if kernel supports them */ btf_fd = bpf_object__btf_fd(prog->obj); @@ -7506,7 +7504,6 @@ static int bpf_object__load_progs(struct bpf_object *obj, int log_level) { struct bpf_program *prog; - int *fd_array = NULL; size_t i; int err; @@ -7517,14 +7514,6 @@ bpf_object__load_progs(struct bpf_object *obj, int log_level) return err; } - if (kernel_supports(obj, FEAT_FD_IDX) && obj->nr_maps) { - fd_array = malloc(sizeof(int) * obj->nr_maps); - if (!fd_array) - return -ENOMEM; - for (i = 0; i < obj->nr_maps; i++) - fd_array[i] = obj->maps[i].fd; - } - for (i = 0; i < obj->nr_programs; i++) { prog = &obj->programs[i]; if (prog_is_subprog(obj, prog)) @@ -7534,17 +7523,12 @@ bpf_object__load_progs(struct bpf_object *obj, int log_level) continue; } prog->log_level |= log_level; - prog->fd_array = fd_array; err = bpf_program__load(prog, obj->license, obj->kern_version); - prog->fd_array = NULL; - if (err) { - free(fd_array); + if (err) return err; - } } if (obj->gen_loader) bpf_object__free_relocs(obj); - free(fd_array); return 0; } From patchwork Sat May 8 03:48:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245787 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 52E38C433ED for ; Sat, 8 May 2021 
03:49:12 +0000 (UTC) From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 16/22] libbpf: Cleanup temp FDs when intermediate sys_bpf fails. Date: Fri, 7 May 2021 20:48:31 -0700 Message-Id: <20210508034837.64585-17-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov Fix loader program to close temporary FDs when intermediate sys_bpf command fails.
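The gen_loader.c hunk below replaces the per-check exit sequence with a single cleanup block emitted in bpf_gen__init(), and makes every error check branch backwards to it. Since BPF jump offsets are counted in instructions relative to the instruction after the jump, and each struct bpf_insn is 8 bytes, the emitted offset is -(insn_cur - insn_start - cleanup_label) / 8 - 1. A standalone sanity check of that arithmetic with made-up positions (a sketch, not libbpf code; insn_start is taken as 0):

	#include <assert.h>

	int main(void)
	{
		const long insn_sz = 8;            /* sizeof(struct bpf_insn) */
		long cleanup_label = 6 * insn_sz;  /* cleanup block starts at insn #6 */
		long insn_cur = 40 * insn_sz;      /* the JA is being emitted at insn #40 */
		long off = -(insn_cur - cleanup_label) / insn_sz - 1;

		/* jump target = index of the JA + 1 + off */
		assert(40 + 1 + off == 6);
		return 0;
	}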
Signed-off-by: Alexei Starovoitov Acked-by: Andrii Nakryiko --- tools/lib/bpf/bpf_gen_internal.h | 1 + tools/lib/bpf/gen_loader.c | 38 ++++++++++++++++++++++++++++---- 2 files changed, 35 insertions(+), 4 deletions(-) diff --git a/tools/lib/bpf/bpf_gen_internal.h b/tools/lib/bpf/bpf_gen_internal.h index f42a55efd559..da2c026a3f31 100644 --- a/tools/lib/bpf/bpf_gen_internal.h +++ b/tools/lib/bpf/bpf_gen_internal.h @@ -15,6 +15,7 @@ struct bpf_gen { void *data_cur; void *insn_start; void *insn_cur; + size_t cleanup_label; __u32 nr_progs; __u32 nr_maps; int log_level; diff --git a/tools/lib/bpf/gen_loader.c b/tools/lib/bpf/gen_loader.c index 585c672cc53e..b1709421ba90 100644 --- a/tools/lib/bpf/gen_loader.c +++ b/tools/lib/bpf/gen_loader.c @@ -97,8 +97,36 @@ static void bpf_gen__emit2(struct bpf_gen *gen, struct bpf_insn insn1, struct bp void bpf_gen__init(struct bpf_gen *gen, int log_level) { + size_t stack_sz = sizeof(struct loader_stack); + int i; + gen->log_level = log_level; + /* save ctx pointer into R6 */ bpf_gen__emit(gen, BPF_MOV64_REG(BPF_REG_6, BPF_REG_1)); + + /* bzero stack */ + bpf_gen__emit(gen, BPF_MOV64_REG(BPF_REG_1, BPF_REG_10)); + bpf_gen__emit(gen, BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -stack_sz)); + bpf_gen__emit(gen, BPF_MOV64_IMM(BPF_REG_2, stack_sz)); + bpf_gen__emit(gen, BPF_MOV64_IMM(BPF_REG_3, 0)); + bpf_gen__emit(gen, BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel)); + + /* jump over cleanup code */ + bpf_gen__emit(gen, BPF_JMP_IMM(BPF_JA, 0, 0, + /* size of cleanup code below */ + (stack_sz / 4) * 3 + 2)); + + /* remember the label where all error branches will jump to */ + gen->cleanup_label = gen->insn_cur - gen->insn_start; + /* emit cleanup code: close all temp FDs */ + for (i = 0; i < stack_sz; i+= 4) { + bpf_gen__emit(gen, BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -stack_sz + i)); + bpf_gen__emit(gen, BPF_JMP_IMM(BPF_JSLE, BPF_REG_1, 0, 1)); + bpf_gen__emit(gen, BPF_EMIT_CALL(BPF_FUNC_sys_close)); + } + /* R7 contains the error code from sys_bpf. Copy it into R0 and exit. */ + bpf_gen__emit(gen, BPF_MOV64_REG(BPF_REG_0, BPF_REG_7)); + bpf_gen__emit(gen, BPF_EXIT_INSN()); } static int bpf_gen__add_data(struct bpf_gen *gen, const void *data, __u32 size) @@ -179,10 +207,12 @@ static void bpf_gen__emit_sys_bpf(struct bpf_gen *gen, int cmd, int attr, int at static void bpf_gen__emit_check_err(struct bpf_gen *gen) { - bpf_gen__emit(gen, BPF_JMP_IMM(BPF_JSGE, BPF_REG_7, 0, 2)); - bpf_gen__emit(gen, BPF_MOV64_REG(BPF_REG_0, BPF_REG_7)); - /* TODO: close intermediate FDs in case of error */ - bpf_gen__emit(gen, BPF_EXIT_INSN()); + /* R7 contains result of last sys_bpf command. + * if (R7 < 0) goto cleanup; + */ + bpf_gen__emit(gen, BPF_JMP_IMM(BPF_JSGE, BPF_REG_7, 0, 1)); + bpf_gen__emit(gen, BPF_JMP_IMM(BPF_JA, 0, 0, + -(gen->insn_cur - gen->insn_start - gen->cleanup_label) / 8 - 1)); } /* reg1 and reg2 should not be R1 - R5. 
They can be R0, R6 - R10 */ From patchwork Sat May 8 03:48:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245791 X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, 
bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 17/22] libbpf: Introduce bpf_map__get_initial_value(). Date: Fri, 7 May 2021 20:48:32 -0700 Message-Id: <20210508034837.64585-18-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov Introduce bpf_map__get_initial_value() to read the initial contents of rodata/bss maps. Note that only mmaped maps qualify, just as bpf_map__set_initial_value() works only for the mmaped kconfig map.
Signed-off-by: Alexei Starovoitov --- tools/lib/bpf/libbpf.c | 10 ++++++++++ tools/lib/bpf/libbpf.h | 2 ++ tools/lib/bpf/libbpf.map | 1 + 3 files changed, 13 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 24a659448782..f7cdbb0e1faf 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -9763,6 +9763,16 @@ int bpf_map__set_initial_value(struct bpf_map *map, return 0; } +int bpf_map__get_initial_value(struct bpf_map *map, + const void **pdata, size_t *psize) +{ + if (!map->mmaped) + return -EINVAL; + *psize = map->def.value_size; + *pdata = map->mmaped; + return 0; +} + bool bpf_map__is_offload_neutral(const struct bpf_map *map) { return map->def.type == BPF_MAP_TYPE_PERF_EVENT_ARRAY;
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h index fb291b4529e8..f8976a30586f 100644 --- a/tools/lib/bpf/libbpf.h +++ b/tools/lib/bpf/libbpf.h @@ -471,6 +471,8 @@ LIBBPF_API int bpf_map__set_priv(struct bpf_map *map, void *priv, LIBBPF_API void *bpf_map__priv(const struct bpf_map *map); LIBBPF_API int bpf_map__set_initial_value(struct bpf_map *map, const void *data, size_t size); +LIBBPF_API int bpf_map__get_initial_value(struct bpf_map *map, + const void **pdata, size_t *psize); LIBBPF_API bool bpf_map__is_offload_neutral(const struct bpf_map *map); LIBBPF_API bool bpf_map__is_internal(const struct bpf_map *map); LIBBPF_API int bpf_map__set_pin_path(struct bpf_map *map, const char *path);
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map index 889ee2f3611c..44285045ddf4 100644 --- a/tools/lib/bpf/libbpf.map +++ b/tools/lib/bpf/libbpf.map @@ -360,6 +360,7 @@ LIBBPF_0.4.0 { bpf_linker__free; bpf_linker__new; bpf_map__inner_map; + bpf_map__get_initial_value; bpf_object__gen_loader; bpf_object__set_kversion; } LIBBPF_0.3.0;
From patchwork Sat May 8 03:48:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245793 X-Patchwork-Delegate: bpf@iogearbox.net Received: (majordomo@vger.kernel.org) 
by vger.kernel.org via listexpand id S230249AbhEHDuQ (ORCPT ); Fri, 7 May 2021 23:50:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34458 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231168AbhEHDuQ (ORCPT ); Fri, 7 May 2021 23:50:16 -0400 Received: from mail-pj1-x1033.google.com (mail-pj1-x1033.google.com [IPv6:2607:f8b0:4864:20::1033]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AE9CFC061574 for ; Fri, 7 May 2021 20:49:14 -0700 (PDT) Received: by mail-pj1-x1033.google.com with SMTP id gq14-20020a17090b104eb029015be008ab0fso6532564pjb.1 for ; Fri, 07 May 2021 20:49:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=LBlNClTdqNIVb9EuhEQguHdvy8XMHoWsMd/kS+HTaPQ=; b=t1S6RMVbnaZkM7XASlU3qaEtL+tKvEOISkutDOvuZQcPHGzmbs5aAIaRNej2CcnLGu lyRLBuWLZJaEY3BOamjRmQ8ydA7O+vg2guL8tOsnlcDFOBI4BfuIiPc/jvibIywG2hW1 aCGzfk374k1cqeCbCLxd7nPJkcjEnZlvH8cYTL0rIQpkBYj/7Ya8saOzHrt3HqBXx31x OQrz5+Cr2Mwt7eopPBSnTxp2wsWetQvzDDu0fFLrimeg+fYgoJu373NhdSlAU49OAm/G qxQEZmgOVchZoTxv7+aY9VpNoiYALUZILfHNt4WD71GE6YlBii7DlNttV5ARvNZEB8AT Ua3w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=LBlNClTdqNIVb9EuhEQguHdvy8XMHoWsMd/kS+HTaPQ=; b=ejCvCx4/bS3tEQJflbfEP5vqNcwqsCQ2UwWc1WmQrRs2/vUA2hUEOUQgHXHUOaEwRN z//+7z1QerUkRABHAHWBbKNqONTQD2hdU1YSJapm6RSCL9iAHn0eSC6C9ihRnJrlRIU5 Em/7eX6IaBvTr40a+6dk/2QuaNGBKFiAMDdzOlVFy71Ne1ToZR2Y4WEcUbxEpY42UADN olgmXgp2JbCniyL91uXsu6L0BwNvutxdtgaSseJnUmKNrUahPTBBk3ZO90d+KfQ9zIvz bbP0KSbIBHvQdDpnTI0TvfbaCUxnaFpmuHCj5kkIRBuD+3OEd9Qf8ZmYaknGsw7K6YRP VIcw== X-Gm-Message-State: AOAM5300MZPGTVCpZV3FXaDYwHWZhi+m2G7+YwUSW1A/Y0EUzsvmHmJ2 FuRuQdLa2CpVyALnNSnqzBk= X-Google-Smtp-Source: ABdhPJytLbowQ9ZsWSaQ6jB0Cluwjs3ddVAuAt4j6OmYOphNErOhHj2baFZDhki4+dA+O5W9PG57CA== X-Received: by 2002:a17:90a:bf0c:: with SMTP id c12mr13669918pjs.206.1620445754146; Fri, 07 May 2021 20:49:14 -0700 (PDT) Received: from ast-mbp.thefacebook.com ([163.114.132.1]) by smtp.gmail.com with ESMTPSA id u12sm5784606pfh.122.2021.05.07.20.49.12 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Fri, 07 May 2021 20:49:13 -0700 (PDT) From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 18/22] bpftool: Use syscall/loader program in "prog load" and "gen skeleton" command. Date: Fri, 7 May 2021 20:48:33 -0700 Message-Id: <20210508034837.64585-19-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov Add -L flag to bpftool to use libbpf gen_trace facility and syscall/loader program for skeleton generation and program loading. "bpftool gen skeleton -L" command will generate a "light skeleton" or "loader skeleton" that is similar to existing skeleton, but has one major difference: $ bpftool gen skeleton lsm.o > lsm.skel.h $ bpftool gen skeleton -L lsm.o > lsm.lskel.h $ diff lsm.skel.h lsm.lskel.h @@ -5,34 +4,34 @@ #define __LSM_SKEL_H__ #include -#include +#include The light skeleton does not use majority of libbpf infrastructure. 
It doesn't need libelf. It doesn't parse .o file. It only needs few sys_bpf wrappers. All of them are in bpf/bpf.h file. In future libbpf/bpf.c can be inlined into bpf.h, so not even libbpf.a would be needed to work with light skeleton. "bpftool prog load -L file.o" command is introduced for debugging of syscall/loader program generation. Just like the same command without -L it will try to load the programs from file.o into the kernel. It won't even try to pin them. "bpftool prog load -L -d file.o" command will provide additional debug messages on how syscall/loader program was generated. Also the execution of syscall/loader program will use bpf_trace_printk() for each step of loading BTF, creating maps, and loading programs. The user can do "cat /.../trace_pipe" for further debug. An example of fexit_sleep.lskel.h generated from progs/fexit_sleep.c: struct fexit_sleep { struct bpf_loader_ctx ctx; struct { struct bpf_map_desc bss; } maps; struct { struct bpf_prog_desc nanosleep_fentry; struct bpf_prog_desc nanosleep_fexit; } progs; struct { int nanosleep_fentry_fd; int nanosleep_fexit_fd; } links; struct fexit_sleep__bss { int pid; int fentry_cnt; int fexit_cnt; } *bss; }; Signed-off-by: Alexei Starovoitov --- tools/bpf/bpftool/Makefile | 2 +- tools/bpf/bpftool/gen.c | 362 ++++++++++++++++++++++++++++-- tools/bpf/bpftool/main.c | 7 +- tools/bpf/bpftool/main.h | 1 + tools/bpf/bpftool/prog.c | 104 +++++++++ tools/bpf/bpftool/xlated_dumper.c | 3 + 6 files changed, 456 insertions(+), 23 deletions(-) diff --git a/tools/bpf/bpftool/Makefile b/tools/bpf/bpftool/Makefile index b3073ae84018..d16d289ade7a 100644 --- a/tools/bpf/bpftool/Makefile +++ b/tools/bpf/bpftool/Makefile @@ -136,7 +136,7 @@ endif BPFTOOL_BOOTSTRAP := $(BOOTSTRAP_OUTPUT)bpftool -BOOTSTRAP_OBJS = $(addprefix $(BOOTSTRAP_OUTPUT),main.o common.o json_writer.o gen.o btf.o) +BOOTSTRAP_OBJS = $(addprefix $(BOOTSTRAP_OUTPUT),main.o common.o json_writer.o gen.o btf.o xlated_dumper.o btf_dumper.o) $(OUTPUT)disasm.o OBJS = $(patsubst %.c,$(OUTPUT)%.o,$(SRCS)) $(OUTPUT)disasm.o VMLINUX_BTF_PATHS ?= $(if $(O),$(O)/vmlinux) \ diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c index 31ade77f5ef8..7a3e343f31db 100644 --- a/tools/bpf/bpftool/gen.c +++ b/tools/bpf/bpftool/gen.c @@ -18,6 +18,7 @@ #include #include #include +#include #include "json_writer.h" #include "main.h" @@ -268,6 +269,303 @@ static void codegen(const char *template, ...) free(s); } +static void print_hex(const char *obj_data, int file_sz) +{ + int i, len; + + for (i = 0, len = 0; i < file_sz; i++) { + int w = obj_data[i] ? 
4 : 2; + + len += w; + if (len > 78) { + printf("\\\n"); + len = w; + } + if (!obj_data[i]) + printf("\\0"); + else + printf("\\x%02x", (unsigned char)obj_data[i]); + } +} + +static size_t bpf_map_mmap_sz(const struct bpf_map *map) +{ + long page_sz = sysconf(_SC_PAGE_SIZE); + size_t map_sz; + + map_sz = (size_t)roundup(bpf_map__value_size(map), 8) * bpf_map__max_entries(map); + map_sz = roundup(map_sz, page_sz); + return map_sz; +} + +static void codegen_attach_detach(struct bpf_object *obj, const char *obj_name) +{ + struct bpf_program *prog; + + bpf_object__for_each_program(prog, obj) { + codegen("\ + \n\ + \n\ + static inline int \n\ + %1$s__%2$s__attach(struct %1$s *skel) \n\ + { \n\ + int fd = bpf_raw_tracepoint_open( \ + ", obj_name, bpf_program__name(prog)); + + switch (bpf_program__get_type(prog)) { + case BPF_PROG_TYPE_RAW_TRACEPOINT: + putchar('"'); + fputs(strchr(bpf_program__section_name(prog), '/') + 1, stdout); + putchar('"'); + break; + default: + fputs("NULL", stdout); + break; + } + codegen("\ + \n\ + , skel->progs.%1$s.prog_fd); \n\ + if (fd > 0) skel->links.%1$s_fd = fd; \n\ + return fd; \n\ + } \n\ + ", bpf_program__name(prog)); + } + + codegen("\ + \n\ + \n\ + static inline int \n\ + %1$s__attach(struct %1$s *skel) \n\ + { \n\ + int ret = 0; \n\ + ", obj_name); + + bpf_object__for_each_program(prog, obj) { + codegen("\ + \n\ + ret = ret < 0 ? ret : %1$s__%2$s__attach(skel); \n\ + ", obj_name, bpf_program__name(prog)); + } + + codegen("\ + \n\ + return ret < 0 ? ret : 0; \n\ + } \n\ + \n\ + static inline void \n\ + %1$s__detach(struct %1$s *skel) \n\ + { \n\ + ", obj_name); + bpf_object__for_each_program(prog, obj) { + printf("\tif (skel->links.%1$s_fd > 0) close(skel->links.%1$s_fd);\n", + bpf_program__name(prog)); + } + codegen("\ + \n\ + } \n\ + "); +} + +static void codegen_destroy(struct bpf_object *obj, const char *obj_name) +{ + struct bpf_program *prog; + struct bpf_map *map; + + codegen("\ + \n\ + static void \n\ + %1$s__destroy(struct %1$s *skel) \n\ + { \n\ + if (!skel) \n\ + return; \n\ + %1$s__detach(skel); \n\ + ", + obj_name); + bpf_object__for_each_program(prog, obj) { + printf("\tif (skel->progs.%1$s.prog_fd > 0) close(skel->progs.%1$s.prog_fd);\n", + bpf_program__name(prog)); + } + bpf_object__for_each_map(map, obj) { + const char * ident; + + ident = get_map_ident(map); + if (!ident) + continue; + if (bpf_map__is_internal(map) && + (bpf_map__def(map)->map_flags & BPF_F_MMAPABLE)) + printf("\tmunmap(skel->%1$s, %2$zd);\n", + ident, bpf_map_mmap_sz(map)); + printf("\tif (skel->maps.%1$s.map_fd > 0) close(skel->maps.%1$s.map_fd);\n", ident); + } + codegen("\ + \n\ + free(skel); \n\ + } \n\ + ", + obj_name); +} + +static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *header_guard) +{ + struct bpf_object_load_attr load_attr = {}; + DECLARE_LIBBPF_OPTS(gen_loader_opts, opts); + struct bpf_map *map; + int err = 0; + + err = bpf_object__gen_loader(obj, &opts); + if (err) + return err; + + load_attr.obj = obj; + if (verifier_logs) + /* log_level1 + log_level2 + stats, but not stable UAPI */ + load_attr.log_level = 1 + 2 + 4; + + err = bpf_object__load_xattr(&load_attr); + if (err) { + p_err("failed to load object file"); + goto out; + } + /* If there was no error during load then gen_loader_opts + * are populated with the loader program. 
+ */ + + /* finish generating 'struct skel' */ + codegen("\ + \n\ + }; \n\ + ", obj_name); + + + codegen_attach_detach(obj, obj_name); + + codegen_destroy(obj, obj_name); + + codegen("\ + \n\ + static inline struct %1$s * \n\ + %1$s__open(void) \n\ + { \n\ + struct %1$s *skel; \n\ + \n\ + skel = calloc(sizeof(*skel), 1); \n\ + if (!skel) \n\ + return NULL; \n\ + skel->ctx.sz = (void *)&skel->links - (void *)skel; \n\ + ", + obj_name, opts.data_sz); + bpf_object__for_each_map(map, obj) { + const char *ident; + const void *mmap_data = NULL; + size_t mmap_size = 0; + + ident = get_map_ident(map); + if (!ident) + continue; + + if (!bpf_map__is_internal(map) || + !(bpf_map__def(map)->map_flags & BPF_F_MMAPABLE)) + continue; + + printf("\tskel->%1$s =\n" + "\t\tmmap(NULL, %2$zd, PROT_READ | PROT_WRITE,\n" + "\t\t\tMAP_SHARED | MAP_ANONYMOUS, -1, 0);\n" + "\tmemcpy(skel->%1$s, (void *)\"", + ident, bpf_map_mmap_sz(map)); + bpf_map__get_initial_value(map, &mmap_data, &mmap_size); + print_hex(mmap_data, mmap_size); + printf("\", %2$zd);\n" + "\tskel->maps.%1$s.initial_value = (__u64)(long)skel->%1$s;\n", + ident, mmap_size); + } + codegen("\ + \n\ + return skel; \n\ + } \n\ + \n\ + static inline int \n\ + %1$s__load(struct %1$s *skel) \n\ + { \n\ + struct bpf_load_and_run_opts opts = {}; \n\ + int err; \n\ + \n\ + opts.ctx = (struct bpf_loader_ctx *)skel; \n\ + opts.data_sz = %2$d; \n\ + opts.data = (void *)\"\\ \n\ + ", + obj_name, opts.data_sz); + print_hex(opts.data, opts.data_sz); + codegen("\ + \n\ + \"; \n\ + "); + + codegen("\ + \n\ + opts.insns_sz = %d; \n\ + opts.insns = (void *)\"\\ \n\ + ", + opts.insns_sz); + print_hex(opts.insns, opts.insns_sz); + codegen("\ + \n\ + \"; \n\ + err = bpf_load_and_run(&opts); \n\ + if (err < 0) \n\ + return err; \n\ + ", obj_name); + bpf_object__for_each_map(map, obj) { + const char *ident, *mmap_flags; + + ident = get_map_ident(map); + if (!ident) + continue; + + if (!bpf_map__is_internal(map) || + !(bpf_map__def(map)->map_flags & BPF_F_MMAPABLE)) + continue; + if (bpf_map__def(map)->map_flags & BPF_F_RDONLY_PROG) + mmap_flags = "PROT_READ"; + else + mmap_flags = "PROT_READ | PROT_WRITE"; + + printf("\tskel->%1$s =\n" + "\t\tmmap(skel->%1$s, %2$zd, %3$s, MAP_SHARED | MAP_FIXED,\n" + "\t\t\tskel->maps.%1$s.map_fd, 0);\n", + ident, bpf_map_mmap_sz(map), mmap_flags); + } + codegen("\ + \n\ + return 0; \n\ + } \n\ + \n\ + static inline struct %1$s * \n\ + %1$s__open_and_load(void) \n\ + { \n\ + struct %1$s *skel; \n\ + \n\ + skel = %1$s__open(); \n\ + if (!skel) \n\ + return NULL; \n\ + if (%1$s__load(skel)) { \n\ + %1$s__destroy(skel); \n\ + return NULL; \n\ + } \n\ + return skel; \n\ + } \n\ + ", obj_name); + + codegen("\ + \n\ + \n\ + #endif /* %s */ \n\ + ", + header_guard); + err = 0; +out: + return err; +} + static int do_skeleton(int argc, char **argv) { char header_guard[MAX_OBJ_NAME_LEN + sizeof("__SKEL_H__")]; @@ -277,7 +575,7 @@ static int do_skeleton(int argc, char **argv) struct bpf_object *obj = NULL; const char *file, *ident; struct bpf_program *prog; - int fd, len, err = -1; + int fd, err = -1; struct bpf_map *map; struct btf *btf; struct stat st; @@ -359,7 +657,25 @@ static int do_skeleton(int argc, char **argv) } get_header_guard(header_guard, obj_name); - codegen("\ + if (use_loader) { + codegen("\ + \n\ + /* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */ \n\ + /* THIS FILE IS AUTOGENERATED! 
*/ \n\ + #ifndef %2$s \n\ + #define %2$s \n\ + \n\ + #include \n\ + #include \n\ + #include \n\ + \n\ + struct %1$s { \n\ + struct bpf_loader_ctx ctx; \n\ + ", + obj_name, header_guard + ); + } else { + codegen("\ \n\ /* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */ \n\ \n\ @@ -375,7 +691,8 @@ static int do_skeleton(int argc, char **argv) struct bpf_object *obj; \n\ ", obj_name, header_guard - ); + ); + } if (map_cnt) { printf("\tstruct {\n"); @@ -383,7 +700,10 @@ static int do_skeleton(int argc, char **argv) ident = get_map_ident(map); if (!ident) continue; - printf("\t\tstruct bpf_map *%s;\n", ident); + if (use_loader) + printf("\t\tstruct bpf_map_desc %s;\n", ident); + else + printf("\t\tstruct bpf_map *%s;\n", ident); } printf("\t} maps;\n"); } @@ -391,14 +711,22 @@ static int do_skeleton(int argc, char **argv) if (prog_cnt) { printf("\tstruct {\n"); bpf_object__for_each_program(prog, obj) { - printf("\t\tstruct bpf_program *%s;\n", - bpf_program__name(prog)); + if (use_loader) + printf("\t\tstruct bpf_prog_desc %s;\n", + bpf_program__name(prog)); + else + printf("\t\tstruct bpf_program *%s;\n", + bpf_program__name(prog)); } printf("\t} progs;\n"); printf("\tstruct {\n"); bpf_object__for_each_program(prog, obj) { - printf("\t\tstruct bpf_link *%s;\n", - bpf_program__name(prog)); + if (use_loader) + printf("\t\tint %s_fd;\n", + bpf_program__name(prog)); + else + printf("\t\tstruct bpf_link *%s;\n", + bpf_program__name(prog)); } printf("\t} links;\n"); } @@ -409,6 +737,10 @@ static int do_skeleton(int argc, char **argv) if (err) goto out; } + if (use_loader) { + err = gen_trace(obj, obj_name, header_guard); + goto out; + } codegen("\ \n\ @@ -578,19 +910,7 @@ static int do_skeleton(int argc, char **argv) file_sz); /* embed contents of BPF object file */ - for (i = 0, len = 0; i < file_sz; i++) { - int w = obj_data[i] ? 
4 : 2; - - len += w; - if (len > 78) { - printf("\\\n"); - len = w; - } - if (!obj_data[i]) - printf("\\0"); - else - printf("\\x%02x", (unsigned char)obj_data[i]); - } + print_hex(obj_data, file_sz); codegen("\ \n\ diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c index d9afb730136a..7f2817d97079 100644 --- a/tools/bpf/bpftool/main.c +++ b/tools/bpf/bpftool/main.c @@ -29,6 +29,7 @@ bool show_pinned; bool block_mount; bool verifier_logs; bool relaxed_maps; +bool use_loader; struct btf *base_btf; struct pinned_obj_table prog_table; struct pinned_obj_table map_table; @@ -392,6 +393,7 @@ int main(int argc, char **argv) { "mapcompat", no_argument, NULL, 'm' }, { "nomount", no_argument, NULL, 'n' }, { "debug", no_argument, NULL, 'd' }, + { "use-loader", no_argument, NULL, 'L' }, { "base-btf", required_argument, NULL, 'B' }, { 0 } }; @@ -409,7 +411,7 @@ int main(int argc, char **argv) hash_init(link_table.table); opterr = 0; - while ((opt = getopt_long(argc, argv, "VhpjfmndB:", + while ((opt = getopt_long(argc, argv, "VhpjfLmndB:", options, NULL)) >= 0) { switch (opt) { case 'V': @@ -452,6 +454,9 @@ int main(int argc, char **argv) return -1; } break; + case 'L': + use_loader = true; + break; default: p_err("unrecognized option '%s'", argv[optind - 1]); if (json_output) diff --git a/tools/bpf/bpftool/main.h b/tools/bpf/bpftool/main.h index 76e91641262b..c1cf29798b99 100644 --- a/tools/bpf/bpftool/main.h +++ b/tools/bpf/bpftool/main.h @@ -90,6 +90,7 @@ extern bool show_pids; extern bool block_mount; extern bool verifier_logs; extern bool relaxed_maps; +extern bool use_loader; extern struct btf *base_btf; extern struct pinned_obj_table prog_table; extern struct pinned_obj_table map_table; diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c index 3f067d2d7584..55401b65815a 100644 --- a/tools/bpf/bpftool/prog.c +++ b/tools/bpf/bpftool/prog.c @@ -16,6 +16,7 @@ #include #include #include +#include #include #include @@ -24,6 +25,8 @@ #include #include #include +#include +#include #include "cfg.h" #include "main.h" @@ -1645,8 +1648,109 @@ static int load_with_options(int argc, char **argv, bool first_prog_only) return -1; } +static int count_open_fds(void) +{ + DIR *dp = opendir("/proc/self/fd"); + struct dirent *de; + int cnt = -3; + + if (!dp) + return -1; + + while ((de = readdir(dp))) + cnt++; + + closedir(dp); + return cnt; +} + +static int try_loader(struct gen_loader_opts *gen) +{ + struct bpf_load_and_run_opts opts = {}; + struct bpf_loader_ctx *ctx; + int ctx_sz = sizeof(*ctx) + 64 * max(sizeof(struct bpf_map_desc), sizeof(struct bpf_prog_desc)); + int log_buf_sz = (1u << 24) - 1; + int err, fds_before, fd_delta; + char *log_buf; + + ctx = alloca(ctx_sz); + memset(ctx, 0, ctx_sz); + ctx->sz = ctx_sz; + ctx->log_level = 1; + ctx->log_size = log_buf_sz; + log_buf = malloc(log_buf_sz); + if (!log_buf) + return -ENOMEM; + ctx->log_buf = (long) log_buf; + opts.ctx = ctx; + opts.data = gen->data; + opts.data_sz = gen->data_sz; + opts.insns = gen->insns; + opts.insns_sz = gen->insns_sz; + fds_before = count_open_fds(); + err = bpf_load_and_run(&opts); + fd_delta = count_open_fds() - fds_before; + if (err < 0) { + fprintf(stderr, "err %d\n%s\n%s", err, opts.errstr, log_buf); + if (fd_delta) + fprintf(stderr, "loader prog leaked %d FDs\n", + fd_delta); + } + free(log_buf); + return err; +} + +static int do_loader(int argc, char **argv) +{ + DECLARE_LIBBPF_OPTS(bpf_object_open_opts, open_opts); + DECLARE_LIBBPF_OPTS(gen_loader_opts, gen); + struct bpf_object_load_attr load_attr 
= {}; + struct bpf_object *obj; + const char *file; + int err = 0; + + if (!REQ_ARGS(1)) + return -1; + file = GET_ARG(); + + obj = bpf_object__open_file(file, &open_opts); + if (IS_ERR_OR_NULL(obj)) { + p_err("failed to open object file"); + goto err_close_obj; + } + + err = bpf_object__gen_loader(obj, &gen); + if (err) + goto err_close_obj; + + load_attr.obj = obj; + if (verifier_logs) + /* log_level1 + log_level2 + stats, but not stable UAPI */ + load_attr.log_level = 1 + 2 + 4; + + err = bpf_object__load_xattr(&load_attr); + if (err) { + p_err("failed to load object file"); + goto err_close_obj; + } + + if (verifier_logs) { + struct dump_data dd = {}; + + kernel_syms_load(&dd); + dump_xlated_plain(&dd, (void *)gen.insns, gen.insns_sz, false, false); + kernel_syms_destroy(&dd); + } + err = try_loader(&gen); +err_close_obj: + bpf_object__close(obj); + return err; +} + static int do_load(int argc, char **argv) { + if (use_loader) + return do_loader(argc, argv); return load_with_options(argc, argv, true); } diff --git a/tools/bpf/bpftool/xlated_dumper.c b/tools/bpf/bpftool/xlated_dumper.c index 6fc3e6f7f40c..f1f32e21d5cd 100644 --- a/tools/bpf/bpftool/xlated_dumper.c +++ b/tools/bpf/bpftool/xlated_dumper.c @@ -196,6 +196,9 @@ static const char *print_imm(void *private_data, else if (insn->src_reg == BPF_PSEUDO_MAP_VALUE) snprintf(dd->scratch_buff, sizeof(dd->scratch_buff), "map[id:%u][0]+%u", insn->imm, (insn + 1)->imm); + else if (insn->src_reg == BPF_PSEUDO_MAP_IDX_VALUE) + snprintf(dd->scratch_buff, sizeof(dd->scratch_buff), + "map[idx:%u]+%u", insn->imm, (insn + 1)->imm); else if (insn->src_reg == BPF_PSEUDO_FUNC) snprintf(dd->scratch_buff, sizeof(dd->scratch_buff), "subprog[%+d]", insn->imm); From patchwork Sat May 8 03:48:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245795 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 49E4DC433B4 for ; Sat, 8 May 2021 03:49:22 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 221246128A for ; Sat, 8 May 2021 03:49:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231166AbhEHDuT (ORCPT ); Fri, 7 May 2021 23:50:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34470 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231167AbhEHDuR (ORCPT ); Fri, 7 May 2021 23:50:17 -0400 Received: from mail-pf1-x432.google.com (mail-pf1-x432.google.com [IPv6:2607:f8b0:4864:20::432]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 89FD7C061574 for ; Fri, 7 May 2021 20:49:16 -0700 (PDT) Received: by mail-pf1-x432.google.com with SMTP id k19so9276764pfu.5 for ; Fri, 07 May 2021 20:49:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; 
From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 19/22] selftests/bpf: Convert few tests to light skeleton. Date: Fri, 7 May 2021 20:48:34 -0700 Message-Id: <20210508034837.64585-20-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov Convert a few tests that don't use CO-RE to the light skeleton.
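For readers new to the light skeleton API, the shape of a converted test is sketched below (a hypothetical, condensed example based on the conversions in this patch): programs and maps are exposed as plain FDs in the skeleton struct, and the per-program attach helpers return an FD instead of a struct bpf_link *.

	#include "fentry_test.lskel.h"	/* generated with: bpftool gen skeleton -L fentry_test.o */

	static int run_fentry_once(void)
	{
		struct fentry_test *skel;
		int link_fd, prog_fd, err = -1;

		skel = fentry_test__open_and_load();
		if (!skel)
			return -1;
		link_fd = fentry_test__test1__attach(skel);	/* raw FD, not a struct bpf_link * */
		if (link_fd <= 0)
			goto out;
		prog_fd = skel->progs.test1.prog_fd;		/* FD exposed directly in the skeleton */
		(void)prog_fd;	/* would be passed to bpf_prog_test_run() here */
		err = 0;
	out:
		fentry_test__destroy(skel);			/* detaches and closes all FDs */
		return err;
	}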
Signed-off-by: Alexei Starovoitov Acked-by: Andrii Nakryiko --- tools/testing/selftests/bpf/.gitignore | 1 + tools/testing/selftests/bpf/Makefile | 16 +++++++++++++++- .../selftests/bpf/prog_tests/fentry_fexit.c | 6 +++--- .../selftests/bpf/prog_tests/fentry_test.c | 10 +++++----- .../selftests/bpf/prog_tests/fexit_sleep.c | 6 +++--- .../selftests/bpf/prog_tests/fexit_test.c | 10 +++++----- .../selftests/bpf/prog_tests/kfunc_call.c | 6 +++--- .../selftests/bpf/prog_tests/ksyms_module.c | 2 +- tools/testing/selftests/bpf/prog_tests/ringbuf.c | 8 +++----- tools/testing/selftests/bpf/progs/test_ringbuf.c | 4 ++-- 10 files changed, 41 insertions(+), 28 deletions(-) diff --git a/tools/testing/selftests/bpf/.gitignore b/tools/testing/selftests/bpf/.gitignore index 4866f6a21901..a030aa4a8a9e 100644 --- a/tools/testing/selftests/bpf/.gitignore +++ b/tools/testing/selftests/bpf/.gitignore @@ -30,6 +30,7 @@ test_sysctl xdping test_cpp *.skel.h +*.lskel.h /no_alu32 /bpf_gcc /tools diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile index 283e5ad8385e..f794f16c79b8 100644 --- a/tools/testing/selftests/bpf/Makefile +++ b/tools/testing/selftests/bpf/Makefile @@ -312,6 +312,10 @@ SKEL_BLACKLIST := btf__% test_pinning_invalid.c test_sk_assign.c LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h \ linked_vars.skel.h linked_maps.skel.h +LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \ + test_ksyms_module.c test_ringbuf.c +SKEL_BLACKLIST += $$(LSKELS) + test_static_linked.skel.h-deps := test_static_linked1.o test_static_linked2.o linked_funcs.skel.h-deps := linked_funcs1.o linked_funcs2.o linked_vars.skel.h-deps := linked_vars1.o linked_vars2.o @@ -339,6 +343,7 @@ TRUNNER_BPF_OBJS := $$(patsubst %.c,$$(TRUNNER_OUTPUT)/%.o, $$(TRUNNER_BPF_SRCS) TRUNNER_BPF_SKELS := $$(patsubst %.c,$$(TRUNNER_OUTPUT)/%.skel.h, \ $$(filter-out $(SKEL_BLACKLIST) $(LINKED_BPF_SRCS),\ $$(TRUNNER_BPF_SRCS))) +TRUNNER_BPF_LSKELS := $$(patsubst %.c,$$(TRUNNER_OUTPUT)/%.lskel.h, $$(LSKELS)) TRUNNER_BPF_SKELS_LINKED := $$(addprefix $$(TRUNNER_OUTPUT)/,$(LINKED_SKELS)) TEST_GEN_FILES += $$(TRUNNER_BPF_OBJS) @@ -380,6 +385,14 @@ $(TRUNNER_BPF_SKELS): %.skel.h: %.o $(BPFTOOL) | $(TRUNNER_OUTPUT) $(Q)diff $$(<:.o=.linked2.o) $$(<:.o=.linked3.o) $(Q)$$(BPFTOOL) gen skeleton $$(<:.o=.linked3.o) name $$(notdir $$(<:.o=)) > $$@ +$(TRUNNER_BPF_LSKELS): %.lskel.h: %.o $(BPFTOOL) | $(TRUNNER_OUTPUT) + $$(call msg,GEN-SKEL,$(TRUNNER_BINARY),$$@) + $(Q)$$(BPFTOOL) gen object $$(<:.o=.linked1.o) $$< + $(Q)$$(BPFTOOL) gen object $$(<:.o=.linked2.o) $$(<:.o=.linked1.o) + $(Q)$$(BPFTOOL) gen object $$(<:.o=.linked3.o) $$(<:.o=.linked2.o) + $(Q)diff $$(<:.o=.linked2.o) $$(<:.o=.linked3.o) + $(Q)$$(BPFTOOL) gen skeleton -L $$(<:.o=.linked3.o) name $$(notdir $$(<:.o=)) > $$@ + $(TRUNNER_BPF_SKELS_LINKED): $(TRUNNER_BPF_OBJS) $(BPFTOOL) | $(TRUNNER_OUTPUT) $$(call msg,LINK-BPF,$(TRUNNER_BINARY),$$(@:.skel.h=.o)) $(Q)$$(BPFTOOL) gen object $$(@:.skel.h=.linked1.o) $$(addprefix $(TRUNNER_OUTPUT)/,$$($$(@F)-deps)) @@ -409,6 +422,7 @@ $(TRUNNER_TEST_OBJS): $(TRUNNER_OUTPUT)/%.test.o: \ $(TRUNNER_EXTRA_HDRS) \ $(TRUNNER_BPF_OBJS) \ $(TRUNNER_BPF_SKELS) \ + $(TRUNNER_BPF_LSKELS) \ $(TRUNNER_BPF_SKELS_LINKED) \ $$(BPFOBJ) | $(TRUNNER_OUTPUT) $$(call msg,TEST-OBJ,$(TRUNNER_BINARY),$$@) @@ -516,6 +530,6 @@ $(OUTPUT)/bench: $(OUTPUT)/bench.o $(OUTPUT)/testing_helpers.o \ EXTRA_CLEAN := $(TEST_CUSTOM_PROGS) $(SCRATCH_DIR) $(HOST_SCRATCH_DIR) \ prog_tests/tests.h map_tests/tests.h verifier/tests.h \ 
feature \ - $(addprefix $(OUTPUT)/,*.o *.skel.h no_alu32 bpf_gcc bpf_testmod.ko) + $(addprefix $(OUTPUT)/,*.o *.skel.h *.lskel.h no_alu32 bpf_gcc bpf_testmod.ko) .PHONY: docs docs-clean diff --git a/tools/testing/selftests/bpf/prog_tests/fentry_fexit.c b/tools/testing/selftests/bpf/prog_tests/fentry_fexit.c index 109d0345a2be..91154c2ba256 100644 --- a/tools/testing/selftests/bpf/prog_tests/fentry_fexit.c +++ b/tools/testing/selftests/bpf/prog_tests/fentry_fexit.c @@ -1,8 +1,8 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright (c) 2019 Facebook */ #include -#include "fentry_test.skel.h" -#include "fexit_test.skel.h" +#include "fentry_test.lskel.h" +#include "fexit_test.lskel.h" void test_fentry_fexit(void) { @@ -26,7 +26,7 @@ void test_fentry_fexit(void) if (CHECK(err, "fexit_attach", "fexit attach failed: %d\n", err)) goto close_prog; - prog_fd = bpf_program__fd(fexit_skel->progs.test1); + prog_fd = fexit_skel->progs.test1.prog_fd; err = bpf_prog_test_run(prog_fd, 1, NULL, 0, NULL, NULL, &retval, &duration); CHECK(err || retval, "ipv6", diff --git a/tools/testing/selftests/bpf/prog_tests/fentry_test.c b/tools/testing/selftests/bpf/prog_tests/fentry_test.c index 7cb111b11995..174c89e7456e 100644 --- a/tools/testing/selftests/bpf/prog_tests/fentry_test.c +++ b/tools/testing/selftests/bpf/prog_tests/fentry_test.c @@ -1,13 +1,13 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright (c) 2019 Facebook */ #include -#include "fentry_test.skel.h" +#include "fentry_test.lskel.h" static int fentry_test(struct fentry_test *fentry_skel) { int err, prog_fd, i; __u32 duration = 0, retval; - struct bpf_link *link; + int link_fd; __u64 *result; err = fentry_test__attach(fentry_skel); @@ -15,11 +15,11 @@ static int fentry_test(struct fentry_test *fentry_skel) return err; /* Check that already linked program can't be attached again. */ - link = bpf_program__attach(fentry_skel->progs.test1); - if (!ASSERT_ERR_PTR(link, "fentry_attach_link")) + link_fd = fentry_test__test1__attach(fentry_skel); + if (!ASSERT_LT(link_fd, 0, "fentry_attach_link")) return -1; - prog_fd = bpf_program__fd(fentry_skel->progs.test1); + prog_fd = fentry_skel->progs.test1.prog_fd; err = bpf_prog_test_run(prog_fd, 1, NULL, 0, NULL, NULL, &retval, &duration); ASSERT_OK(err, "test_run"); diff --git a/tools/testing/selftests/bpf/prog_tests/fexit_sleep.c b/tools/testing/selftests/bpf/prog_tests/fexit_sleep.c index ccc7e8a34ab6..4e7f4b42ea29 100644 --- a/tools/testing/selftests/bpf/prog_tests/fexit_sleep.c +++ b/tools/testing/selftests/bpf/prog_tests/fexit_sleep.c @@ -6,7 +6,7 @@ #include #include #include -#include "fexit_sleep.skel.h" +#include "fexit_sleep.lskel.h" static int do_sleep(void *skel) { @@ -58,8 +58,8 @@ void test_fexit_sleep(void) * waiting for percpu_ref_kill to confirm). The other one * will be freed quickly. 
*/ - close(bpf_program__fd(fexit_skel->progs.nanosleep_fentry)); - close(bpf_program__fd(fexit_skel->progs.nanosleep_fexit)); + close(fexit_skel->progs.nanosleep_fentry.prog_fd); + close(fexit_skel->progs.nanosleep_fexit.prog_fd); fexit_sleep__detach(fexit_skel); /* kill the thread to unwind sys_nanosleep stack through the trampoline */ diff --git a/tools/testing/selftests/bpf/prog_tests/fexit_test.c b/tools/testing/selftests/bpf/prog_tests/fexit_test.c index 6792e41f7f69..af3dba726701 100644 --- a/tools/testing/selftests/bpf/prog_tests/fexit_test.c +++ b/tools/testing/selftests/bpf/prog_tests/fexit_test.c @@ -1,13 +1,13 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright (c) 2019 Facebook */ #include -#include "fexit_test.skel.h" +#include "fexit_test.lskel.h" static int fexit_test(struct fexit_test *fexit_skel) { int err, prog_fd, i; __u32 duration = 0, retval; - struct bpf_link *link; + int link_fd; __u64 *result; err = fexit_test__attach(fexit_skel); @@ -15,11 +15,11 @@ static int fexit_test(struct fexit_test *fexit_skel) return err; /* Check that already linked program can't be attached again. */ - link = bpf_program__attach(fexit_skel->progs.test1); - if (!ASSERT_ERR_PTR(link, "fexit_attach_link")) + link_fd = fexit_test__test1__attach(fexit_skel); + if (!ASSERT_LT(link_fd, 0, "fexit_attach_link")) return -1; - prog_fd = bpf_program__fd(fexit_skel->progs.test1); + prog_fd = fexit_skel->progs.test1.prog_fd; err = bpf_prog_test_run(prog_fd, 1, NULL, 0, NULL, NULL, &retval, &duration); ASSERT_OK(err, "test_run"); diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c index 7fc0951ee75f..30a7b9b837bf 100644 --- a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c +++ b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c @@ -2,7 +2,7 @@ /* Copyright (c) 2021 Facebook */ #include #include -#include "kfunc_call_test.skel.h" +#include "kfunc_call_test.lskel.h" #include "kfunc_call_test_subprog.skel.h" static void test_main(void) @@ -14,13 +14,13 @@ static void test_main(void) if (!ASSERT_OK_PTR(skel, "skel")) return; - prog_fd = bpf_program__fd(skel->progs.kfunc_call_test1); + prog_fd = skel->progs.kfunc_call_test1.prog_fd; err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4), NULL, NULL, (__u32 *)&retval, NULL); ASSERT_OK(err, "bpf_prog_test_run(test1)"); ASSERT_EQ(retval, 12, "test1-retval"); - prog_fd = bpf_program__fd(skel->progs.kfunc_call_test2); + prog_fd = skel->progs.kfunc_call_test2.prog_fd; err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4), NULL, NULL, (__u32 *)&retval, NULL); ASSERT_OK(err, "bpf_prog_test_run(test2)"); diff --git a/tools/testing/selftests/bpf/prog_tests/ksyms_module.c b/tools/testing/selftests/bpf/prog_tests/ksyms_module.c index 4c232b456479..2cd5cded543f 100644 --- a/tools/testing/selftests/bpf/prog_tests/ksyms_module.c +++ b/tools/testing/selftests/bpf/prog_tests/ksyms_module.c @@ -4,7 +4,7 @@ #include #include #include -#include "test_ksyms_module.skel.h" +#include "test_ksyms_module.lskel.h" static int duration; diff --git a/tools/testing/selftests/bpf/prog_tests/ringbuf.c b/tools/testing/selftests/bpf/prog_tests/ringbuf.c index de78617f6550..80c11ac0ffb1 100644 --- a/tools/testing/selftests/bpf/prog_tests/ringbuf.c +++ b/tools/testing/selftests/bpf/prog_tests/ringbuf.c @@ -12,7 +12,7 @@ #include #include #include -#include "test_ringbuf.skel.h" +#include "test_ringbuf.lskel.h" #define EDONE 7777 @@ -93,9 +93,7 @@ void test_ringbuf(void) if (CHECK(!skel, "skel_open", "skeleton 
open failed\n")) return; - err = bpf_map__set_max_entries(skel->maps.ringbuf, page_size); - if (CHECK(err != 0, "bpf_map__set_max_entries", "bpf_map__set_max_entries failed\n")) - goto cleanup; + skel->maps.ringbuf.max_entries = page_size; err = test_ringbuf__load(skel); if (CHECK(err != 0, "skel_load", "skeleton load failed\n")) @@ -104,7 +102,7 @@ void test_ringbuf(void) /* only trigger BPF program for current process */ skel->bss->pid = getpid(); - ringbuf = ring_buffer__new(bpf_map__fd(skel->maps.ringbuf), + ringbuf = ring_buffer__new(skel->maps.ringbuf.map_fd, process_sample, NULL, NULL); if (CHECK(!ringbuf, "ringbuf_create", "failed to create ringbuf\n")) goto cleanup; diff --git a/tools/testing/selftests/bpf/progs/test_ringbuf.c b/tools/testing/selftests/bpf/progs/test_ringbuf.c index 6b3f288b7c63..eaa7d9dba0be 100644 --- a/tools/testing/selftests/bpf/progs/test_ringbuf.c +++ b/tools/testing/selftests/bpf/progs/test_ringbuf.c @@ -35,7 +35,7 @@ long prod_pos = 0; /* inner state */ long seq = 0; -SEC("tp/syscalls/sys_enter_getpgid") +SEC("fentry/__x64_sys_getpgid") int test_ringbuf(void *ctx) { int cur_pid = bpf_get_current_pid_tgid() >> 32; @@ -48,7 +48,7 @@ int test_ringbuf(void *ctx) sample = bpf_ringbuf_reserve(&ringbuf, sizeof(*sample), 0); if (!sample) { __sync_fetch_and_add(&dropped, 1); - return 1; + return 0; } sample->pid = pid; From patchwork Sat May 8 03:48:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245797 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DA98DC433ED for ; Sat, 8 May 2021 03:49:22 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B377C6128A for ; Sat, 8 May 2021 03:49:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231167AbhEHDuV (ORCPT ); Fri, 7 May 2021 23:50:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34478 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231168AbhEHDuT (ORCPT ); Fri, 7 May 2021 23:50:19 -0400 Received: from mail-pl1-x632.google.com (mail-pl1-x632.google.com [IPv6:2607:f8b0:4864:20::632]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 60069C061761 for ; Fri, 7 May 2021 20:49:18 -0700 (PDT) Received: by mail-pl1-x632.google.com with SMTP id z18so2645617plg.8 for ; Fri, 07 May 2021 20:49:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=GqMwLc8bZDgAdQd/jqEc6DeFkm+cdgyoPkr6k8iE7nM=; b=VbrKA3+0i4+Oho5UKudks7y7KAY02+CFOZBkFB/qFiigY0eMCPH8nDKWeVy4Q4ativ EqzvTHLX3Uz+jxIrt7mOKGTLnBNB+tAH/JRelu8MVW8TliN/E2zGeEOYKtU4ghFqLklo +XaOJUW7k9yka2NHbrsgwXZk/s2QCtP+hmAS1rlwJmz4aapUmqUS8rukBEZ9dEQDhPU8 CmbETiSo2wE8C3HWCz1XF9R5s8/OyxBUIQBPzMzb6UQUujZcPP2TwbZ9f5ywSOzHX52x 
5CQtVrdE5n5mIq8WMe97unD5X1sOIL4yoU4b2Dhst33qgkskrODAI88VyxNE956rnZA/ W7jw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=GqMwLc8bZDgAdQd/jqEc6DeFkm+cdgyoPkr6k8iE7nM=; b=jkUdMhyw6ddg/etpkPKkaXz2AN//361+FnWRukxPTBIAQha64WcWJYw3qkyWF+EaKK zv0UL1Bf2R/+Bs5yv4E/gaPDMoCt5EiR7Vvax8RLIyV6sY6csBFN8oULZt390i6XpaNt cZt/oRZf5JVHOi0V2lMCjM6YMfFO7EGb9t13yJej7/ahEwMZdm+nUPkR5NyMHFAVOEoZ vBaKY6JKEMMKk23d1vRngcdH1hz/Vvnw46zwax+ku7J9+ccbsUjmOxq4Oi5sYFC2p8Up 2XgBoXCI4IoJ3Z0iHmcV6PS2JrzIBAmy8WARo1LfiInFyjGWgWp9nCi+xyaTmthmmM8J otIA== X-Gm-Message-State: AOAM5331aPpTVJuuD9w0YZHf5fyn4iC98yLnGZV/PCCaJNWK1z5HvUzC w6nhkVmvDG75enIizSVuVtA= X-Google-Smtp-Source: ABdhPJxvKzeUGFf3kcL0oZeaMg5XIhtvCmshGosZcxUOeYnK9tSsmTlrcA17gBSYHU9K/ktws4GsCQ== X-Received: by 2002:a17:90a:590d:: with SMTP id k13mr15081797pji.68.1620445757946; Fri, 07 May 2021 20:49:17 -0700 (PDT) Received: from ast-mbp.thefacebook.com ([163.114.132.1]) by smtp.gmail.com with ESMTPSA id u12sm5784606pfh.122.2021.05.07.20.49.16 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Fri, 07 May 2021 20:49:17 -0700 (PDT) From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 20/22] selftests/bpf: Convert atomics test to light skeleton. Date: Fri, 7 May 2021 20:48:35 -0700 Message-Id: <20210508034837.64585-21-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov Convert prog_tests/atomics.c to lskel.h Signed-off-by: Alexei Starovoitov Acked-by: Andrii Nakryiko --- tools/testing/selftests/bpf/Makefile | 2 +- .../selftests/bpf/prog_tests/atomics.c | 73 ++++++++++--------- 2 files changed, 38 insertions(+), 37 deletions(-) diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile index f794f16c79b8..4f50e4367e42 100644 --- a/tools/testing/selftests/bpf/Makefile +++ b/tools/testing/selftests/bpf/Makefile @@ -313,7 +313,7 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h \ linked_vars.skel.h linked_maps.skel.h LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \ - test_ksyms_module.c test_ringbuf.c + test_ksyms_module.c test_ringbuf.c atomics.c SKEL_BLACKLIST += $$(LSKELS) test_static_linked.skel.h-deps := test_static_linked1.o test_static_linked2.o diff --git a/tools/testing/selftests/bpf/prog_tests/atomics.c b/tools/testing/selftests/bpf/prog_tests/atomics.c index 21efe7bbf10d..b5b139ee5614 100644 --- a/tools/testing/selftests/bpf/prog_tests/atomics.c +++ b/tools/testing/selftests/bpf/prog_tests/atomics.c @@ -2,19 +2,19 @@ #include -#include "atomics.skel.h" +#include "atomics.lskel.h" static void test_add(struct atomics *skel) { int err, prog_fd; __u32 duration = 0, retval; - struct bpf_link *link; + int link_fd; - link = bpf_program__attach(skel->progs.add); - if (CHECK(IS_ERR(link), "attach(add)", "err: %ld\n", PTR_ERR(link))) + link_fd = atomics__add__attach(skel); + if (!ASSERT_GT(link_fd, 0, "attach(add)")) return; - prog_fd = bpf_program__fd(skel->progs.add); + prog_fd = skel->progs.add.prog_fd; err = bpf_prog_test_run(prog_fd, 1, NULL, 0, NULL, NULL, &retval, 
&duration); if (CHECK(err || retval, "test_run add", @@ -32,21 +32,22 @@ static void test_add(struct atomics *skel) ASSERT_EQ(skel->data->add_noreturn_value, 3, "add_noreturn_value"); + cleanup: - bpf_link__destroy(link); + close(link_fd); } static void test_sub(struct atomics *skel) { int err, prog_fd; __u32 duration = 0, retval; - struct bpf_link *link; + int link_fd; - link = bpf_program__attach(skel->progs.sub); - if (CHECK(IS_ERR(link), "attach(sub)", "err: %ld\n", PTR_ERR(link))) + link_fd = atomics__sub__attach(skel); + if (!ASSERT_GT(link_fd, 0, "attach(sub)")) return; - prog_fd = bpf_program__fd(skel->progs.sub); + prog_fd = skel->progs.sub.prog_fd; err = bpf_prog_test_run(prog_fd, 1, NULL, 0, NULL, NULL, &retval, &duration); if (CHECK(err || retval, "test_run sub", @@ -66,20 +67,20 @@ static void test_sub(struct atomics *skel) ASSERT_EQ(skel->data->sub_noreturn_value, -1, "sub_noreturn_value"); cleanup: - bpf_link__destroy(link); + close(link_fd); } static void test_and(struct atomics *skel) { int err, prog_fd; __u32 duration = 0, retval; - struct bpf_link *link; + int link_fd; - link = bpf_program__attach(skel->progs.and); - if (CHECK(IS_ERR(link), "attach(and)", "err: %ld\n", PTR_ERR(link))) + link_fd = atomics__and__attach(skel); + if (!ASSERT_GT(link_fd, 0, "attach(and)")) return; - prog_fd = bpf_program__fd(skel->progs.and); + prog_fd = skel->progs.and.prog_fd; err = bpf_prog_test_run(prog_fd, 1, NULL, 0, NULL, NULL, &retval, &duration); if (CHECK(err || retval, "test_run and", @@ -94,20 +95,20 @@ static void test_and(struct atomics *skel) ASSERT_EQ(skel->data->and_noreturn_value, 0x010ull << 32, "and_noreturn_value"); cleanup: - bpf_link__destroy(link); + close(link_fd); } static void test_or(struct atomics *skel) { int err, prog_fd; __u32 duration = 0, retval; - struct bpf_link *link; + int link_fd; - link = bpf_program__attach(skel->progs.or); - if (CHECK(IS_ERR(link), "attach(or)", "err: %ld\n", PTR_ERR(link))) + link_fd = atomics__or__attach(skel); + if (!ASSERT_GT(link_fd, 0, "attach(or)")) return; - prog_fd = bpf_program__fd(skel->progs.or); + prog_fd = skel->progs.or.prog_fd; err = bpf_prog_test_run(prog_fd, 1, NULL, 0, NULL, NULL, &retval, &duration); if (CHECK(err || retval, "test_run or", @@ -123,20 +124,20 @@ static void test_or(struct atomics *skel) ASSERT_EQ(skel->data->or_noreturn_value, 0x111ull << 32, "or_noreturn_value"); cleanup: - bpf_link__destroy(link); + close(link_fd); } static void test_xor(struct atomics *skel) { int err, prog_fd; __u32 duration = 0, retval; - struct bpf_link *link; + int link_fd; - link = bpf_program__attach(skel->progs.xor); - if (CHECK(IS_ERR(link), "attach(xor)", "err: %ld\n", PTR_ERR(link))) + link_fd = atomics__xor__attach(skel); + if (!ASSERT_GT(link_fd, 0, "attach(xor)")) return; - prog_fd = bpf_program__fd(skel->progs.xor); + prog_fd = skel->progs.xor.prog_fd; err = bpf_prog_test_run(prog_fd, 1, NULL, 0, NULL, NULL, &retval, &duration); if (CHECK(err || retval, "test_run xor", @@ -151,20 +152,20 @@ static void test_xor(struct atomics *skel) ASSERT_EQ(skel->data->xor_noreturn_value, 0x101ull << 32, "xor_nxoreturn_value"); cleanup: - bpf_link__destroy(link); + close(link_fd); } static void test_cmpxchg(struct atomics *skel) { int err, prog_fd; __u32 duration = 0, retval; - struct bpf_link *link; + int link_fd; - link = bpf_program__attach(skel->progs.cmpxchg); - if (CHECK(IS_ERR(link), "attach(cmpxchg)", "err: %ld\n", PTR_ERR(link))) + link_fd = atomics__cmpxchg__attach(skel); + if (!ASSERT_GT(link_fd, 0, "attach(cmpxchg)")) 
return; - prog_fd = bpf_program__fd(skel->progs.cmpxchg); + prog_fd = skel->progs.cmpxchg.prog_fd; err = bpf_prog_test_run(prog_fd, 1, NULL, 0, NULL, NULL, &retval, &duration); if (CHECK(err || retval, "test_run add", @@ -180,20 +181,20 @@ static void test_cmpxchg(struct atomics *skel) ASSERT_EQ(skel->bss->cmpxchg32_result_succeed, 1, "cmpxchg_result_succeed"); cleanup: - bpf_link__destroy(link); + close(link_fd); } static void test_xchg(struct atomics *skel) { int err, prog_fd; __u32 duration = 0, retval; - struct bpf_link *link; + int link_fd; - link = bpf_program__attach(skel->progs.xchg); - if (CHECK(IS_ERR(link), "attach(xchg)", "err: %ld\n", PTR_ERR(link))) + link_fd = atomics__xchg__attach(skel); + if (!ASSERT_GT(link_fd, 0, "attach(xchg)")) return; - prog_fd = bpf_program__fd(skel->progs.xchg); + prog_fd = skel->progs.xchg.prog_fd; err = bpf_prog_test_run(prog_fd, 1, NULL, 0, NULL, NULL, &retval, &duration); if (CHECK(err || retval, "test_run add", @@ -207,7 +208,7 @@ static void test_xchg(struct atomics *skel) ASSERT_EQ(skel->bss->xchg32_result, 1, "xchg32_result"); cleanup: - bpf_link__destroy(link); + close(link_fd); } void test_atomics(void) From patchwork Sat May 8 03:48:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245801 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2B18AC43460 for ; Sat, 8 May 2021 03:49:23 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 090CC61435 for ; Sat, 8 May 2021 03:49:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231168AbhEHDuW (ORCPT ); Fri, 7 May 2021 23:50:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34488 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231174AbhEHDuV (ORCPT ); Fri, 7 May 2021 23:50:21 -0400 Received: from mail-pl1-x635.google.com (mail-pl1-x635.google.com [IPv6:2607:f8b0:4864:20::635]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 14308C061574 for ; Fri, 7 May 2021 20:49:20 -0700 (PDT) Received: by mail-pl1-x635.google.com with SMTP id t21so6242279plo.2 for ; Fri, 07 May 2021 20:49:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=sTXESP1aSicDhTl1UsNmdxVoNAuNzTa1c5MFyG0wCi8=; b=Lwxu5lPsGhldyHfYS0BbETtP8YJNhBlg21PeqEzJ02IzFlDNKneCHVhFDewE06bpzf brPGYh+CDzuv+uarAwZqzyPtL87GO/cEEafaU1PRLR1wLQWzU6HBTuKHH6HcTpYdtV1C YHUeKrrGbBNY8lgOwFGaaasx28091cPpMalqoGPclvXqZ+WcrNNgEQR14Ao7IXbuUTpR tNxCkjFLSV7Rf2xZk8My9wkeogb993js7XF9A4Ozg1e7HMI2Gj1/QisISLEf0aeT920j ydTwGoVNtqRI7igWOUnsJeFPrHJ9FsgJSVrYthwfv4STzRJ1CxMKx0LkAPbv6PwtxEwl ux8A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; 
bh=sTXESP1aSicDhTl1UsNmdxVoNAuNzTa1c5MFyG0wCi8=; b=lKcPOhiSVjkWO/uWXYGthWxOW9du78j2ZC4Wy18rYxstPye+YQ9MN+PrCcn6BVV7y3 gDNFdhJyZHB78lRFZpVqovbr4Qb55v+J1vf4sOziThFxYpt+f+64K7Ap1xbz9dk2SOEG YYjHLdRkBTbOi8BLgi+TLPrJc9/1BED+6nZyLP9R+SbJSKCX8lqz3H1asqDGNheUXd9z wdz2CoGi/cnXvQet8U9tzB/fKmydypqFs3h4INKPtNeUM76yQ47AM16pJnwRP/zRlusZ W4N1l4g0mjEfnW4W6AjHOmY++y9/vlaujwevcEM58o0cH9o5Upy8iHS04lELMM1r+EJK Excg== X-Gm-Message-State: AOAM531h0IlisoLczSVh+43hXOEK9KDdK1XO4gkpV2QvAdBm2iLdOpB8 XGJSgnEa+P4RlqsBOvRW2BY= X-Google-Smtp-Source: ABdhPJzueRBUuCiNJOTWU315bBhasntU/ifUfEXiDrso2x36ibZm9jKPU44Cr8voaByYuO/wDW52Sw== X-Received: by 2002:a17:902:b104:b029:ee:beb3:ef0a with SMTP id q4-20020a170902b104b02900eebeb3ef0amr13430572plr.80.1620445759604; Fri, 07 May 2021 20:49:19 -0700 (PDT) Received: from ast-mbp.thefacebook.com ([163.114.132.1]) by smtp.gmail.com with ESMTPSA id u12sm5784606pfh.122.2021.05.07.20.49.18 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Fri, 07 May 2021 20:49:19 -0700 (PDT) From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 21/22] selftests/bpf: Convert test printk to use rodata. Date: Fri, 7 May 2021 20:48:36 -0700 Message-Id: <20210508034837.64585-22-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov Convert test trace_printk to more aggressively validate and use rodata. Signed-off-by: Alexei Starovoitov --- tools/testing/selftests/bpf/prog_tests/trace_printk.c | 3 +++ tools/testing/selftests/bpf/progs/trace_printk.c | 4 ++-- 2 files changed, 5 insertions(+), 2 deletions(-) diff --git a/tools/testing/selftests/bpf/prog_tests/trace_printk.c b/tools/testing/selftests/bpf/prog_tests/trace_printk.c index 39b0decb1bb2..60c2347a3181 100644 --- a/tools/testing/selftests/bpf/prog_tests/trace_printk.c +++ b/tools/testing/selftests/bpf/prog_tests/trace_printk.c @@ -21,6 +21,9 @@ void test_trace_printk(void) if (CHECK(!skel, "skel_open", "failed to open skeleton\n")) return; + ASSERT_EQ(skel->rodata->sys_enter_fmt[0], 'T', "invalid printk fmt string"); + skel->rodata->sys_enter_fmt[0] = 't'; + err = trace_printk__load(skel); if (CHECK(err, "skel_load", "failed to load skeleton: %d\n", err)) goto cleanup; diff --git a/tools/testing/selftests/bpf/progs/trace_printk.c b/tools/testing/selftests/bpf/progs/trace_printk.c index 8ca7f399b670..18c8baaf1143 100644 --- a/tools/testing/selftests/bpf/progs/trace_printk.c +++ b/tools/testing/selftests/bpf/progs/trace_printk.c @@ -10,10 +10,10 @@ char _license[] SEC("license") = "GPL"; int trace_printk_ret = 0; int trace_printk_ran = 0; -SEC("tp/raw_syscalls/sys_enter") +SEC("fentry/__x64_sys_nanosleep") int sys_enter(void *ctx) { - static const char fmt[] = "testing,testing %d\n"; + static const char fmt[] = "Testing,testing %d\n"; trace_printk_ret = bpf_trace_printk(fmt, sizeof(fmt), ++trace_printk_ran); From patchwork Sat May 8 03:48:37 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexei Starovoitov X-Patchwork-Id: 12245799 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on 
aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B01EBC43461 for ; Sat, 8 May 2021 03:49:23 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 96BA361106 for ; Sat, 8 May 2021 03:49:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231174AbhEHDuX (ORCPT ); Fri, 7 May 2021 23:50:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34496 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231173AbhEHDuX (ORCPT ); Fri, 7 May 2021 23:50:23 -0400 Received: from mail-pf1-x436.google.com (mail-pf1-x436.google.com [IPv6:2607:f8b0:4864:20::436]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CDA82C061761 for ; Fri, 7 May 2021 20:49:21 -0700 (PDT) Received: by mail-pf1-x436.google.com with SMTP id i13so9386074pfu.2 for ; Fri, 07 May 2021 20:49:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=zxPHvkEvgEDaR2TrDa228MMqgxEhIF+gTHbwPvP7qCg=; b=jHwryWcSA0W3jtm+0L5/gcpohrICrq8/pUoOW2BaxzGM6z0dKy1I/SFBXO8wChDIf0 11drwy3xABo5XHnYsZb069WHdyix3t2Izxec+TP28V+cvfXDWDh3LgWAbVbV/4jt4g8n jZ9xFNk0jNPIvearN1NOsFE6ioiwM57GRJ74mdmV86mZ3J4teAw6W+raKDsmTNIeCVOx eKjUhXgFbMi3E1PxwPaVW6Ss91NSIjhpjSHyEbTLnR4CCUOEJdIjWD4s2PcBYgdlsjTv 3/PbAVJFgKr+Wo9/F5s1gd/OHLrE99wP7ErHOoImKR30NPSLpCAlUBCcCs6AP5/156QC bCEw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=zxPHvkEvgEDaR2TrDa228MMqgxEhIF+gTHbwPvP7qCg=; b=P6J3XK3RhBjDL2v//bufbEYnZtzjNOdWY2qKMErAlt1XNbM3I1m+cpkMjnA7RgxScG Muslm+y1/03Mghmwv6A/Jza+vXyxQIFBgLtjrTriImPuRJkvQk2TrreVqX/z33GNuOTZ btquKD5mgUg6+oKJ803z8jjApOcr35rvNNrE1kI7zAiCOnYSiScdicZS3A8DhXq8Me6P yY69wli1FiB4dlBddNAYvw0Gmfkr1KX4gOhNae8zIRkHDWN/tL/wslq9+mUce6+UsBCh /mGFbwM9FsO5mK71Due7KmveoKBZMUCj/hI7aHVG/D1Yvtu7p1JS330gi4/8BwVfnoUP sU2g== X-Gm-Message-State: AOAM532B945dqdXMkWJCtF0ymJ1AvS3TdM5iw6rexGS/oh2E82Edosey bhfD5AW+WQNCbwaXkvEpKR8= X-Google-Smtp-Source: ABdhPJyvGCV7uByRTtUlUatlmTU/MLMQ76PQgSYIUWGtXx1QT4jk27PbmICCrHCsbOUbKQA8dG/j+w== X-Received: by 2002:a63:e918:: with SMTP id i24mr13593485pgh.118.1620445761422; Fri, 07 May 2021 20:49:21 -0700 (PDT) Received: from ast-mbp.thefacebook.com ([163.114.132.1]) by smtp.gmail.com with ESMTPSA id u12sm5784606pfh.122.2021.05.07.20.49.19 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Fri, 07 May 2021 20:49:20 -0700 (PDT) From: Alexei Starovoitov To: davem@davemloft.net Cc: daniel@iogearbox.net, andrii@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, kernel-team@fb.com Subject: [PATCH v4 bpf-next 22/22] selftests/bpf: Convert test trace_printk to lskel. 
Date: Fri, 7 May 2021 20:48:37 -0700 Message-Id: <20210508034837.64585-23-alexei.starovoitov@gmail.com> X-Mailer: git-send-email 2.13.5 In-Reply-To: <20210508034837.64585-1-alexei.starovoitov@gmail.com> References: <20210508034837.64585-1-alexei.starovoitov@gmail.com> Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From: Alexei Starovoitov Convert test trace_printk to light skeleton to check rodata support in lskel. Signed-off-by: Alexei Starovoitov Acked-by: Andrii Nakryiko --- tools/testing/selftests/bpf/Makefile | 2 +- tools/testing/selftests/bpf/prog_tests/trace_printk.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile index 4f50e4367e42..8d252238b005 100644 --- a/tools/testing/selftests/bpf/Makefile +++ b/tools/testing/selftests/bpf/Makefile @@ -313,7 +313,7 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h \ linked_vars.skel.h linked_maps.skel.h LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \ - test_ksyms_module.c test_ringbuf.c atomics.c + test_ksyms_module.c test_ringbuf.c atomics.c trace_printk.c SKEL_BLACKLIST += $$(LSKELS) test_static_linked.skel.h-deps := test_static_linked1.o test_static_linked2.o diff --git a/tools/testing/selftests/bpf/prog_tests/trace_printk.c b/tools/testing/selftests/bpf/prog_tests/trace_printk.c index 60c2347a3181..e67268e929bd 100644 --- a/tools/testing/selftests/bpf/prog_tests/trace_printk.c +++ b/tools/testing/selftests/bpf/prog_tests/trace_printk.c @@ -3,7 +3,7 @@ #include -#include "trace_printk.skel.h" +#include "trace_printk.lskel.h" #define TRACEBUF "/sys/kernel/debug/tracing/trace_pipe" #define SEARCHMSG "testing,testing"
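Taken together, the conversions in this series follow one mechanical pattern: include the .lskel.h header instead of .skel.h, read program and map fds as plain struct fields, attach through the generated per-program helpers that return a raw link fd, and close() that fd instead of destroying a struct bpf_link. Below is a minimal sketch of the resulting user-space flow. It is illustrative only: the object name "example", the program "handler", the map "ringbuf", and the example__open()/__load()/__destroy() helper names are assumptions following the naming pattern visible in the converted tests, not code from these patches.

#include <unistd.h>
#include <bpf/bpf.h>
#include "example.lskel.h"              /* was: "example.skel.h" */

static int run_example(void)
{
	struct example *skel;
	__u32 duration = 0, retval;
	int err, prog_fd, link_fd;

	skel = example__open();
	if (!skel)
		return -1;

	/* Light skeleton: map parameters (and rodata) are plain fields that
	 * can be adjusted before load, replacing bpf_map__set_max_entries()
	 * and similar setters (compare the test_ringbuf and trace_printk
	 * conversions above).
	 */
	skel->maps.ringbuf.max_entries = 4096;

	err = example__load(skel);
	if (err)
		goto cleanup;

	/* Per-program attach helper returns a link fd (> 0 on success),
	 * replacing bpf_program__attach() and struct bpf_link *.
	 */
	link_fd = example__handler__attach(skel);
	if (link_fd < 0) {
		err = link_fd;
		goto cleanup;
	}

	/* Prog fd is a struct field, replacing bpf_program__fd(). */
	prog_fd = skel->progs.handler.prog_fd;
	err = bpf_prog_test_run(prog_fd, 1, NULL, 0, NULL, NULL,
				&retval, &duration);

	close(link_fd);                  /* was: bpf_link__destroy(link) */
cleanup:
	example__destroy(skel);
	return err;
}

The same fd-based surface is what skel->maps.ringbuf.map_fd feeds into ring_buffer__new() in the converted ringbuf test, so constructs that need polling callbacks keep working unchanged on top of a light skeleton.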