From patchwork Sat Oct 10 18:17:32 2020
X-Patchwork-Submitter: "Daniel T. Lee"
X-Patchwork-Id: 11830353
X-Patchwork-Delegate: bpf@iogearbox.net
From: "Daniel T. Lee"
Lee" To: Daniel Borkmann , Alexei Starovoitov , Jesper Dangaard Brouer , Andrii Nakryiko , Lorenzo Bianconi Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, Xdp Subject: [PATCH bpf-next v2 1/3] samples: bpf: Refactor xdp_monitor with libbpf Date: Sun, 11 Oct 2020 03:17:32 +0900 Message-Id: <20201010181734.1109-2-danieltimlee@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20201010181734.1109-1-danieltimlee@gmail.com> References: <20201010181734.1109-1-danieltimlee@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net To avoid confusion caused by the increasing fragmentation of the BPF Loader program, this commit would like to change to the libbpf loader instead of using the bpf_load. Thanks to libbpf's bpf_link interface, managing the tracepoint BPF program is much easier. bpf_program__attach_tracepoint manages the enable of tracepoint event and attach of BPF programs to it with a single interface bpf_link, so there is no need to manage event_fd and prog_fd separately. This commit refactors xdp_monitor with using this libbpf API, and the bpf_load is removed and migrated to libbpf. Signed-off-by: Daniel T. Lee Acked-by: Andrii Nakryiko --- Changes in v2: - added cleanup logic for bpf_link and bpf_object - split increment into seperate satement - refactor pointer array initialization samples/bpf/Makefile | 2 +- samples/bpf/xdp_monitor_user.c | 159 +++++++++++++++++++++++++-------- 2 files changed, 121 insertions(+), 40 deletions(-) diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile index 4f1ed0e3cf9f..0cee2aa8970f 100644 --- a/samples/bpf/Makefile +++ b/samples/bpf/Makefile @@ -99,7 +99,7 @@ per_socket_stats_example-objs := cookie_uid_helper_example.o xdp_redirect-objs := xdp_redirect_user.o xdp_redirect_map-objs := xdp_redirect_map_user.o xdp_redirect_cpu-objs := bpf_load.o xdp_redirect_cpu_user.o -xdp_monitor-objs := bpf_load.o xdp_monitor_user.o +xdp_monitor-objs := xdp_monitor_user.o xdp_rxq_info-objs := xdp_rxq_info_user.o syscall_tp-objs := syscall_tp_user.o cpustat-objs := cpustat_user.o diff --git a/samples/bpf/xdp_monitor_user.c b/samples/bpf/xdp_monitor_user.c index ef53b93db573..03d0a182913f 100644 --- a/samples/bpf/xdp_monitor_user.c +++ b/samples/bpf/xdp_monitor_user.c @@ -26,12 +26,37 @@ static const char *__doc_err_only__= #include #include +#include #include -#include "bpf_load.h" +#include #include "bpf_util.h" +enum map_type { + REDIRECT_ERR_CNT, + EXCEPTION_CNT, + CPUMAP_ENQUEUE_CNT, + CPUMAP_KTHREAD_CNT, + DEVMAP_XMIT_CNT, +}; + +static const char *const map_type_strings[] = { + [REDIRECT_ERR_CNT] = "redirect_err_cnt", + [EXCEPTION_CNT] = "exception_cnt", + [CPUMAP_ENQUEUE_CNT] = "cpumap_enqueue_cnt", + [CPUMAP_KTHREAD_CNT] = "cpumap_kthread_cnt", + [DEVMAP_XMIT_CNT] = "devmap_xmit_cnt", +}; + +#define NUM_MAP 5 +#define NUM_TP 8 + +static int tp_cnt; +static int map_cnt; static int verbose = 1; static bool debug = false; +struct bpf_map *map_data[NUM_MAP] = {}; +struct bpf_link *tp_links[NUM_TP] = {}; +struct bpf_object *obj; static const struct option long_options[] = { {"help", no_argument, NULL, 'h' }, @@ -41,6 +66,16 @@ static const struct option long_options[] = { {0, 0, NULL, 0 } }; +static void int_exit(int sig) +{ + /* Detach tracepoints */ + while (tp_cnt) + bpf_link__destroy(tp_links[--tp_cnt]); + + bpf_object__close(obj); + exit(0); +} + /* C standard specifies two constants, EXIT_SUCCESS(0) and EXIT_FAILURE(1) */ #define EXIT_FAIL_MEM 5 @@ -483,23 +518,23 @@ static 
bool stats_collect(struct stats_record *rec) * this can happen by someone running perf-record -e */ - fd = map_data[0].fd; /* map0: redirect_err_cnt */ + fd = bpf_map__fd(map_data[REDIRECT_ERR_CNT]); for (i = 0; i < REDIR_RES_MAX; i++) map_collect_record_u64(fd, i, &rec->xdp_redirect[i]); - fd = map_data[1].fd; /* map1: exception_cnt */ + fd = bpf_map__fd(map_data[EXCEPTION_CNT]); for (i = 0; i < XDP_ACTION_MAX; i++) { map_collect_record_u64(fd, i, &rec->xdp_exception[i]); } - fd = map_data[2].fd; /* map2: cpumap_enqueue_cnt */ + fd = bpf_map__fd(map_data[CPUMAP_ENQUEUE_CNT]); for (i = 0; i < MAX_CPUS; i++) map_collect_record(fd, i, &rec->xdp_cpumap_enqueue[i]); - fd = map_data[3].fd; /* map3: cpumap_kthread_cnt */ + fd = bpf_map__fd(map_data[CPUMAP_KTHREAD_CNT]); map_collect_record(fd, 0, &rec->xdp_cpumap_kthread); - fd = map_data[4].fd; /* map4: devmap_xmit_cnt */ + fd = bpf_map__fd(map_data[DEVMAP_XMIT_CNT]); map_collect_record(fd, 0, &rec->xdp_devmap_xmit); return true; @@ -598,8 +633,8 @@ static void stats_poll(int interval, bool err_only) /* TODO Need more advanced stats on error types */ if (verbose) { - printf(" - Stats map0: %s\n", map_data[0].name); - printf(" - Stats map1: %s\n", map_data[1].name); + printf(" - Stats map0: %s\n", bpf_map__name(map_data[0])); + printf(" - Stats map1: %s\n", bpf_map__name(map_data[1])); printf("\n"); } fflush(stdout); @@ -618,44 +653,51 @@ static void stats_poll(int interval, bool err_only) static void print_bpf_prog_info(void) { - int i; + struct bpf_program *prog; + struct bpf_map *map; + int i = 0; /* Prog info */ - printf("Loaded BPF prog have %d bpf program(s)\n", prog_cnt); - for (i = 0; i < prog_cnt; i++) { - printf(" - prog_fd[%d] = fd(%d)\n", i, prog_fd[i]); + printf("Loaded BPF prog have %d bpf program(s)\n", tp_cnt); + bpf_object__for_each_program(prog, obj) { + printf(" - prog_fd[%d] = fd(%d)\n", i, bpf_program__fd(prog)); + i++; } + i = 0; /* Maps info */ - printf("Loaded BPF prog have %d map(s)\n", map_data_count); - for (i = 0; i < map_data_count; i++) { - char *name = map_data[i].name; - int fd = map_data[i].fd; + printf("Loaded BPF prog have %d map(s)\n", map_cnt); + bpf_object__for_each_map(map, obj) { + const char *name = bpf_map__name(map); + int fd = bpf_map__fd(map); printf(" - map_data[%d] = fd(%d) name:%s\n", i, fd, name); + i++; } /* Event info */ - printf("Searching for (max:%d) event file descriptor(s)\n", prog_cnt); - for (i = 0; i < prog_cnt; i++) { - if (event_fd[i] != -1) - printf(" - event_fd[%d] = fd(%d)\n", i, event_fd[i]); + printf("Searching for (max:%d) event file descriptor(s)\n", tp_cnt); + for (i = 0; i < tp_cnt; i++) { + int fd = bpf_link__fd(tp_links[i]); + + if (fd != -1) + printf(" - event_fd[%d] = fd(%d)\n", i, fd); } } int main(int argc, char **argv) { struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY}; + struct bpf_program *prog; int longindex = 0, opt; - int ret = EXIT_SUCCESS; - char bpf_obj_file[256]; + int ret = EXIT_FAILURE; + enum map_type type; + char filename[256]; /* Default settings: */ bool errors_only = true; int interval = 2; - snprintf(bpf_obj_file, sizeof(bpf_obj_file), "%s_kern.o", argv[0]); - /* Parse commands line args */ while ((opt = getopt_long(argc, argv, "hDSs:", long_options, &longindex)) != -1) { @@ -672,40 +714,79 @@ int main(int argc, char **argv) case 'h': default: usage(argv); - return EXIT_FAILURE; + return ret; } } + snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]); if (setrlimit(RLIMIT_MEMLOCK, &r)) { perror("setrlimit(RLIMIT_MEMLOCK)"); - return EXIT_FAILURE; + 
return ret; } - if (load_bpf_file(bpf_obj_file)) { - printf("ERROR - bpf_log_buf: %s", bpf_log_buf); - return EXIT_FAILURE; + /* Remove tracepoint program when program is interrupted or killed */ + signal(SIGINT, int_exit); + signal(SIGTERM, int_exit); + + obj = bpf_object__open_file(filename, NULL); + if (libbpf_get_error(obj)) { + printf("ERROR: opening BPF object file failed\n"); + obj = NULL; + goto cleanup; + } + + /* load BPF program */ + if (bpf_object__load(obj)) { + printf("ERROR: loading BPF object file failed\n"); + goto cleanup; + } + + for (type = 0; type < NUM_MAP; type++) { + map_data[type] = + bpf_object__find_map_by_name(obj, map_type_strings[type]); + + if (libbpf_get_error(map_data[type])) { + printf("ERROR: finding a map in obj file failed\n"); + goto cleanup; + } + map_cnt++; } - if (!prog_fd[0]) { - printf("ERROR - load_bpf_file: %s\n", strerror(errno)); - return EXIT_FAILURE; + + bpf_object__for_each_program(prog, obj) { + tp_links[tp_cnt] = bpf_program__attach(prog); + if (libbpf_get_error(tp_links[tp_cnt])) { + printf("ERROR: bpf_program__attach failed\n"); + tp_links[tp_cnt] = NULL; + goto cleanup; + } + tp_cnt++; } if (debug) { print_bpf_prog_info(); } - /* Unload/stop tracepoint event by closing fd's */ + /* Unload/stop tracepoint event by closing bpf_link's */ if (errors_only) { - /* The prog_fd[i] and event_fd[i] depend on the - * order the functions was defined in _kern.c + /* The bpf_link[i] depend on the order of + * the functions was defined in _kern.c */ - close(event_fd[2]); /* tracepoint/xdp/xdp_redirect */ - close(prog_fd[2]); /* func: trace_xdp_redirect */ - close(event_fd[3]); /* tracepoint/xdp/xdp_redirect_map */ - close(prog_fd[3]); /* func: trace_xdp_redirect_map */ + bpf_link__destroy(tp_links[2]); /* tracepoint/xdp/xdp_redirect */ + tp_links[2] = NULL; + + bpf_link__destroy(tp_links[3]); /* tracepoint/xdp/xdp_redirect_map */ + tp_links[3] = NULL; } stats_poll(interval, errors_only); + ret = EXIT_SUCCESS; + +cleanup: + /* Detach tracepoints */ + while (tp_cnt) + bpf_link__destroy(tp_links[--tp_cnt]); + + bpf_object__close(obj); return ret; }
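
For readers following the refactor above, here is a minimal, standalone
sketch of the bpf_link-based open/load/attach/cleanup flow that replaces
bpf_load in this patch. It is not part of the patch: the object file name
("xdp_monitor_kern.o") and the MAX_LINKS bound are illustrative
assumptions, and error handling is reduced to the bare minimum.

/* Sketch only: a trimmed-down xdp_monitor-style loader. */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <bpf/libbpf.h>

#define MAX_LINKS 8	/* assumption: enough slots for every program in the object */

static struct bpf_object *obj;
static struct bpf_link *links[MAX_LINKS];
static int link_cnt;

static void do_cleanup(void)
{
	while (link_cnt)
		bpf_link__destroy(links[--link_cnt]);	/* detaches and closes fds */
	bpf_object__close(obj);				/* unloads programs and maps */
}

static void int_exit(int sig)
{
	do_cleanup();
	exit(0);
}

int main(void)
{
	struct bpf_program *prog;

	obj = bpf_object__open_file("xdp_monitor_kern.o", NULL);
	if (libbpf_get_error(obj)) {
		fprintf(stderr, "ERROR: opening BPF object file failed\n");
		return EXIT_FAILURE;
	}

	/* clean up tracepoints when the monitor is interrupted or killed */
	signal(SIGINT, int_exit);
	signal(SIGTERM, int_exit);

	if (bpf_object__load(obj))
		goto err;

	/* One bpf_link per program replaces the old event_fd/prog_fd pair. */
	bpf_object__for_each_program(prog, obj) {
		if (link_cnt == MAX_LINKS)
			break;
		links[link_cnt] = bpf_program__attach(prog);
		if (libbpf_get_error(links[link_cnt])) {
			links[link_cnt] = NULL;
			goto err;
		}
		link_cnt++;
	}

	/* ... collect and print stats here, as stats_poll() does ... */

	do_cleanup();
	return EXIT_SUCCESS;
err:
	do_cleanup();
	return EXIT_FAILURE;
}

The key point is that bpf_link__destroy() both detaches the program and
closes the underlying file descriptors, so the separate event_fd/prog_fd
bookkeeping of bpf_load disappears.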
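
The patch also replaces hard-coded map indices (map_data[0], map_data[1],
...) with an enum kept in sync with the map names in the _kern.c object.
The sketch below shows that lookup-by-name pattern plus how a
BPF_MAP_TYPE_PERCPU_ARRAY value is read from user space; it keeps only two
of the sample's five maps for brevity, and the summing helper is
illustrative rather than taken from the patch.

/* Sketch only: enum-indexed map handles resolved by name. */
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

enum map_type {
	REDIRECT_ERR_CNT,
	EXCEPTION_CNT,
};
#define NUM_MAP 2	/* the real sample tracks five maps */

static const char * const map_type_strings[] = {
	[REDIRECT_ERR_CNT] = "redirect_err_cnt",
	[EXCEPTION_CNT]    = "exception_cnt",
};

static struct bpf_map *map_data[NUM_MAP];

static int resolve_maps(struct bpf_object *obj)
{
	enum map_type type;

	for (type = 0; type < NUM_MAP; type++) {
		map_data[type] = bpf_object__find_map_by_name(obj,
						map_type_strings[type]);
		/* an error pointer is returned if the name does not exist */
		if (libbpf_get_error(map_data[type]))
			return -1;
	}
	return 0;
}

/* A PERCPU_ARRAY lookup returns one value per possible CPU. */
static unsigned long long read_percpu_sum(enum map_type type, unsigned int key)
{
	int i, nr_cpus = libbpf_num_possible_cpus();
	unsigned long long sum = 0;

	if (nr_cpus <= 0)
		return 0;

	unsigned long long values[nr_cpus];

	if (bpf_map_lookup_elem(bpf_map__fd(map_data[type]), &key, values))
		return 0;
	for (i = 0; i < nr_cpus; i++)
		sum += values[i];
	return sum;
}

Keeping the enum next to map_type_strings[] means a mismatch between the
user-space index and the map name in the kernel object fails loudly at
resolve time instead of silently reading the wrong map.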
Lee" X-Patchwork-Id: 11830543 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5B461C433DF for ; Sat, 10 Oct 2020 23:13:32 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 233E820795 for ; Sat, 10 Oct 2020 23:13:32 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="gYg2Evfh" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730819AbgJJWzC (ORCPT ); Sat, 10 Oct 2020 18:55:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59744 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729326AbgJJSo1 (ORCPT ); Sat, 10 Oct 2020 14:44:27 -0400 Received: from mail-pf1-x441.google.com (mail-pf1-x441.google.com [IPv6:2607:f8b0:4864:20::441]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 06E08C08EC67; Sat, 10 Oct 2020 11:17:50 -0700 (PDT) Received: by mail-pf1-x441.google.com with SMTP id w21so9742667pfc.7; Sat, 10 Oct 2020 11:17:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=+PD1+9LxtrbZLPhCmTKFgS/KC0LBYYbezLQcohdy+HE=; b=gYg2Evfh7f6PF6/ik05IyidjXnyoLUaYhly2TgQQul8pT5qT02DX8RKpIiDW////wr /LKJgW2r9KKMDCvwiuMH94KyLMHo/GYmMh7J2xXfu7pomva8Y6nvqBZmGlfsoiWBFWC7 u3CcqHcn3uA2u/j28X3t63Pfyag1Sf9qKdIzSGLnjR1+zYNoSdlBYgMiV1BG4JIl8LAk CbZ0uuzvzN/4F61oRpkh+2H05H+EC6VH0PoHlwtKRl6NNDc5jZWMZNtmKiIhGIHSQYF1 6IVvlBXa7qawSeVHv/xE4AwVSYGatPLHTb6pPShCcoQ/mwylo895DLVcVeVQc9nczwfH 0P7Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=+PD1+9LxtrbZLPhCmTKFgS/KC0LBYYbezLQcohdy+HE=; b=qYBau+/G94OO55/ElOfAOSY2RUDbssl2m+uM2lA8K6PNa0bcldB7KO2vPAPG4SdO0W v9YQk0TfZQdFt1OIRVLIDoPr9N/0ngkXqzbPLXIs2+4v9LGxCjmw/lWneuNs1gzfoT8C 1oo48/NGAJS1p+dYhFuQYIdXj+RvYY+Pv599B05MaqIu9SUBY/+HrOyW3l/CfUdr6KKL tsMutPLQBmYnz09yUyyqRLpwZUP1tewrmiMHG3uXwS8VPMOv6/PiW7gzWJXjij90fK27 HmcwZqTNjyBr95u5vxECDOeFgfB/HPkJvJfr52FxkGHO2RvbeOLQwL1sxi/JBSnD4yWp t+Ew== X-Gm-Message-State: AOAM533Q+64ZkhvkjGw21jMRI8vXhN+A4pJ6nLUKUK3eqN0K/rmpVPlc l+GK3oAuxV4XMEZw2WXTjQ== X-Google-Smtp-Source: ABdhPJx4EF+FMyt2dg+E1LgfQreBQ7n6O3tCRM+NCXScTjovNZHQL1WKFQofvN7j55V9VuvVYf0xfw== X-Received: by 2002:a63:3d8c:: with SMTP id k134mr7595120pga.22.1602353869490; Sat, 10 Oct 2020 11:17:49 -0700 (PDT) Received: from localhost.localdomain ([182.209.58.45]) by smtp.gmail.com with ESMTPSA id q65sm14974615pfq.219.2020.10.10.11.17.46 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 10 Oct 2020 11:17:48 -0700 (PDT) From: "Daniel T. 
Lee" To: Daniel Borkmann , Alexei Starovoitov , Jesper Dangaard Brouer , Andrii Nakryiko , Lorenzo Bianconi Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, Xdp Subject: [PATCH bpf-next v2 2/3] samples: bpf: Replace attach_tracepoint() to attach() in xdp_redirect_cpu Date: Sun, 11 Oct 2020 03:17:33 +0900 Message-Id: <20201010181734.1109-3-danieltimlee@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20201010181734.1109-1-danieltimlee@gmail.com> References: <20201010181734.1109-1-danieltimlee@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net From commit d7a18ea7e8b6 ("libbpf: Add generic bpf_program__attach()"), for some BPF programs, it is now possible to attach BPF programs with __attach() instead of explicitly calling __attach_(). This commit refactors the __attach_tracepoint() with libbpf's generic __attach() method. In addition, this refactors the logic of setting the map FD to simplify the code. Also, the missing removal of bpf_load.o in Makefile has been fixed. Signed-off-by: Daniel T. Lee Acked-by: Andrii Nakryiko --- Changes in v2: - program section match with bpf_program__is_ instead of strncmp - refactor pointer array initialization - error code cleanup samples/bpf/Makefile | 2 +- samples/bpf/xdp_redirect_cpu_user.c | 153 +++++++++++++--------------- 2 files changed, 73 insertions(+), 82 deletions(-) diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile index 0cee2aa8970f..ac9175705b2f 100644 --- a/samples/bpf/Makefile +++ b/samples/bpf/Makefile @@ -98,7 +98,7 @@ test_map_in_map-objs := test_map_in_map_user.o per_socket_stats_example-objs := cookie_uid_helper_example.o xdp_redirect-objs := xdp_redirect_user.o xdp_redirect_map-objs := xdp_redirect_map_user.o -xdp_redirect_cpu-objs := bpf_load.o xdp_redirect_cpu_user.o +xdp_redirect_cpu-objs := xdp_redirect_cpu_user.o xdp_monitor-objs := xdp_monitor_user.o xdp_rxq_info-objs := xdp_rxq_info_user.o syscall_tp-objs := syscall_tp_user.o diff --git a/samples/bpf/xdp_redirect_cpu_user.c b/samples/bpf/xdp_redirect_cpu_user.c index 3dd366e9474d..6fb8dbde62c5 100644 --- a/samples/bpf/xdp_redirect_cpu_user.c +++ b/samples/bpf/xdp_redirect_cpu_user.c @@ -37,18 +37,35 @@ static __u32 prog_id; static __u32 xdp_flags = XDP_FLAGS_UPDATE_IF_NOEXIST; static int n_cpus; -static int cpu_map_fd; -static int rx_cnt_map_fd; -static int redirect_err_cnt_map_fd; -static int cpumap_enqueue_cnt_map_fd; -static int cpumap_kthread_cnt_map_fd; -static int cpus_available_map_fd; -static int cpus_count_map_fd; -static int cpus_iterator_map_fd; -static int exception_cnt_map_fd; + +enum map_type { + CPU_MAP, + RX_CNT, + REDIRECT_ERR_CNT, + CPUMAP_ENQUEUE_CNT, + CPUMAP_KTHREAD_CNT, + CPUS_AVAILABLE, + CPUS_COUNT, + CPUS_ITERATOR, + EXCEPTION_CNT, +}; + +static const char *const map_type_strings[] = { + [CPU_MAP] = "cpu_map", + [RX_CNT] = "rx_cnt", + [REDIRECT_ERR_CNT] = "redirect_err_cnt", + [CPUMAP_ENQUEUE_CNT] = "cpumap_enqueue_cnt", + [CPUMAP_KTHREAD_CNT] = "cpumap_kthread_cnt", + [CPUS_AVAILABLE] = "cpus_available", + [CPUS_COUNT] = "cpus_count", + [CPUS_ITERATOR] = "cpus_iterator", + [EXCEPTION_CNT] = "exception_cnt", +}; #define NUM_TP 5 -struct bpf_link *tp_links[NUM_TP] = { 0 }; +#define NUM_MAP 9 +struct bpf_link *tp_links[NUM_TP] = {}; +static int map_fds[NUM_MAP]; static int tp_cnt = 0; /* Exit return codes */ @@ -527,20 +544,20 @@ static void stats_collect(struct stats_record *rec) { int fd, i; - fd = rx_cnt_map_fd; + fd = map_fds[RX_CNT]; map_collect_percpu(fd, 
0, &rec->rx_cnt); - fd = redirect_err_cnt_map_fd; + fd = map_fds[REDIRECT_ERR_CNT]; map_collect_percpu(fd, 1, &rec->redir_err); - fd = cpumap_enqueue_cnt_map_fd; + fd = map_fds[CPUMAP_ENQUEUE_CNT]; for (i = 0; i < n_cpus; i++) map_collect_percpu(fd, i, &rec->enq[i]); - fd = cpumap_kthread_cnt_map_fd; + fd = map_fds[CPUMAP_KTHREAD_CNT]; map_collect_percpu(fd, 0, &rec->kthread); - fd = exception_cnt_map_fd; + fd = map_fds[EXCEPTION_CNT]; map_collect_percpu(fd, 0, &rec->exception); } @@ -565,7 +582,7 @@ static int create_cpu_entry(__u32 cpu, struct bpf_cpumap_val *value, /* Add a CPU entry to cpumap, as this allocate a cpu entry in * the kernel for the cpu. */ - ret = bpf_map_update_elem(cpu_map_fd, &cpu, value, 0); + ret = bpf_map_update_elem(map_fds[CPU_MAP], &cpu, value, 0); if (ret) { fprintf(stderr, "Create CPU entry failed (err:%d)\n", ret); exit(EXIT_FAIL_BPF); @@ -574,21 +591,21 @@ static int create_cpu_entry(__u32 cpu, struct bpf_cpumap_val *value, /* Inform bpf_prog's that a new CPU is available to select * from via some control maps. */ - ret = bpf_map_update_elem(cpus_available_map_fd, &avail_idx, &cpu, 0); + ret = bpf_map_update_elem(map_fds[CPUS_AVAILABLE], &avail_idx, &cpu, 0); if (ret) { fprintf(stderr, "Add to avail CPUs failed\n"); exit(EXIT_FAIL_BPF); } /* When not replacing/updating existing entry, bump the count */ - ret = bpf_map_lookup_elem(cpus_count_map_fd, &key, &curr_cpus_count); + ret = bpf_map_lookup_elem(map_fds[CPUS_COUNT], &key, &curr_cpus_count); if (ret) { fprintf(stderr, "Failed reading curr cpus_count\n"); exit(EXIT_FAIL_BPF); } if (new) { curr_cpus_count++; - ret = bpf_map_update_elem(cpus_count_map_fd, &key, + ret = bpf_map_update_elem(map_fds[CPUS_COUNT], &key, &curr_cpus_count, 0); if (ret) { fprintf(stderr, "Failed write curr cpus_count\n"); @@ -612,7 +629,7 @@ static void mark_cpus_unavailable(void) int ret, i; for (i = 0; i < n_cpus; i++) { - ret = bpf_map_update_elem(cpus_available_map_fd, &i, + ret = bpf_map_update_elem(map_fds[CPUS_AVAILABLE], &i, &invalid_cpu, 0); if (ret) { fprintf(stderr, "Failed marking CPU unavailable\n"); @@ -665,68 +682,37 @@ static void stats_poll(int interval, bool use_separators, char *prog_name, free_stats_record(prev); } -static struct bpf_link * attach_tp(struct bpf_object *obj, - const char *tp_category, - const char* tp_name) +static int init_tracepoints(struct bpf_object *obj) { struct bpf_program *prog; - struct bpf_link *link; - char sec_name[PATH_MAX]; - int len; - len = snprintf(sec_name, PATH_MAX, "tracepoint/%s/%s", - tp_category, tp_name); - if (len < 0) - exit(EXIT_FAIL); + bpf_object__for_each_program(prog, obj) { + if (bpf_program__is_tracepoint(prog) != true) + continue; - prog = bpf_object__find_program_by_title(obj, sec_name); - if (!prog) { - fprintf(stderr, "ERR: finding progsec: %s\n", sec_name); - exit(EXIT_FAIL_BPF); + tp_links[tp_cnt] = bpf_program__attach(prog); + if (libbpf_get_error(tp_links[tp_cnt])) { + tp_links[tp_cnt] = NULL; + return -EINVAL; + } + tp_cnt++; } - link = bpf_program__attach_tracepoint(prog, tp_category, tp_name); - if (libbpf_get_error(link)) - exit(EXIT_FAIL_BPF); - - return link; -} - -static void init_tracepoints(struct bpf_object *obj) { - tp_links[tp_cnt++] = attach_tp(obj, "xdp", "xdp_redirect_err"); - tp_links[tp_cnt++] = attach_tp(obj, "xdp", "xdp_redirect_map_err"); - tp_links[tp_cnt++] = attach_tp(obj, "xdp", "xdp_exception"); - tp_links[tp_cnt++] = attach_tp(obj, "xdp", "xdp_cpumap_enqueue"); - tp_links[tp_cnt++] = attach_tp(obj, "xdp", "xdp_cpumap_kthread"); + 
return 0; } static int init_map_fds(struct bpf_object *obj) { - /* Maps updated by tracepoints */ - redirect_err_cnt_map_fd = - bpf_object__find_map_fd_by_name(obj, "redirect_err_cnt"); - exception_cnt_map_fd = - bpf_object__find_map_fd_by_name(obj, "exception_cnt"); - cpumap_enqueue_cnt_map_fd = - bpf_object__find_map_fd_by_name(obj, "cpumap_enqueue_cnt"); - cpumap_kthread_cnt_map_fd = - bpf_object__find_map_fd_by_name(obj, "cpumap_kthread_cnt"); - - /* Maps used by XDP */ - rx_cnt_map_fd = bpf_object__find_map_fd_by_name(obj, "rx_cnt"); - cpu_map_fd = bpf_object__find_map_fd_by_name(obj, "cpu_map"); - cpus_available_map_fd = - bpf_object__find_map_fd_by_name(obj, "cpus_available"); - cpus_count_map_fd = bpf_object__find_map_fd_by_name(obj, "cpus_count"); - cpus_iterator_map_fd = - bpf_object__find_map_fd_by_name(obj, "cpus_iterator"); - - if (cpu_map_fd < 0 || rx_cnt_map_fd < 0 || - redirect_err_cnt_map_fd < 0 || cpumap_enqueue_cnt_map_fd < 0 || - cpumap_kthread_cnt_map_fd < 0 || cpus_available_map_fd < 0 || - cpus_count_map_fd < 0 || cpus_iterator_map_fd < 0 || - exception_cnt_map_fd < 0) - return -ENOENT; + enum map_type type; + + for (type = 0; type < NUM_MAP; type++) { + map_fds[type] = + bpf_object__find_map_fd_by_name(obj, + map_type_strings[type]); + + if (map_fds[type] < 0) + return -ENOENT; + } return 0; }
@@ -795,13 +781,13 @@ int main(int argc, char **argv) bool stress_mode = false; struct bpf_program *prog; struct bpf_object *obj; + int err = EXIT_FAIL; char filename[256]; int added_cpus = 0; int longindex = 0; int interval = 2; int add_cpu = -1; - int opt, err; - int prog_fd; + int opt, prog_fd; int *cpu, i; __u32 qsize;
@@ -824,24 +810,29 @@ int main(int argc, char **argv) } if (bpf_prog_load_xattr(&prog_load_attr, &obj, &prog_fd)) - return EXIT_FAIL; + return err; if (prog_fd < 0) { fprintf(stderr, "ERR: bpf_prog_load_xattr: %s\n", strerror(errno)); - return EXIT_FAIL; + return err; } - init_tracepoints(obj); + + if (init_tracepoints(obj) < 0) { + fprintf(stderr, "ERR: bpf_program__attach failed\n"); + return err; + } + if (init_map_fds(obj) < 0) { fprintf(stderr, "bpf_object__find_map_fd_by_name failed\n"); - return EXIT_FAIL; + return err; } mark_cpus_unavailable(); cpu = malloc(n_cpus * sizeof(int)); if (!cpu) { fprintf(stderr, "failed to allocate cpu array\n"); - return EXIT_FAIL; + return err; } memset(cpu, 0, n_cpus * sizeof(int));
@@ -960,14 +951,12 @@ int main(int argc, char **argv) prog = bpf_object__find_program_by_title(obj, prog_name); if (!prog) { fprintf(stderr, "bpf_object__find_program_by_title failed\n"); - err = EXIT_FAIL; goto out; } prog_fd = bpf_program__fd(prog); if (prog_fd < 0) { fprintf(stderr, "bpf_program__fd failed\n"); - err = EXIT_FAIL; goto out; }
@@ -986,6 +975,8 @@ int main(int argc, char **argv) stats_poll(interval, use_separators, prog_name, mprog_name, &value, stress_mode); + + err = EXIT_OK; out: free(cpu); return err;
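
As context for the change above: with the generic attach, libbpf derives
the attach type and target from each program's section name (e.g.
"tracepoint/xdp/xdp_redirect_err"), so the sample no longer builds section
strings by hand. Below is a minimal sketch of that pattern, essentially
the init_tracepoints() added by this patch with extra comments.

/* Sketch only: generic attach for every tracepoint program in an object. */
#include <errno.h>
#include <bpf/libbpf.h>

#define NUM_TP 5	/* matches the number of tracepoint programs in the sample */

static struct bpf_link *tp_links[NUM_TP];
static int tp_cnt;

static int init_tracepoints(struct bpf_object *obj)
{
	struct bpf_program *prog;

	bpf_object__for_each_program(prog, obj) {
		/* skip the XDP programs that live in the same object */
		if (!bpf_program__is_tracepoint(prog))
			continue;

		/* attach type and tracepoint name come from the SEC() string */
		tp_links[tp_cnt] = bpf_program__attach(prog);
		if (libbpf_get_error(tp_links[tp_cnt])) {
			tp_links[tp_cnt] = NULL;
			return -EINVAL;
		}
		tp_cnt++;
	}
	return 0;
}

Filtering with bpf_program__is_tracepoint() is what lets one object hold
both the XDP programs and the tracepoint programs without attaching the
wrong kind.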
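
For completeness, here is a sketch of how the user-space side of such a
sample is loaded and how a map FD is then resolved by name, as the
refactored init_map_fds() does. The object file name and the surrounding
main() are illustrative assumptions, not the patch's code; only the
"cpu_map" name comes from the sample.

/* Sketch only: load the object, then look up map fds by name. */
#include <stdio.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

int main(void)
{
	struct bpf_prog_load_attr prog_load_attr = {
		.prog_type = BPF_PROG_TYPE_XDP,
		.file	   = "xdp_redirect_cpu_kern.o",
	};
	struct bpf_object *obj;
	int prog_fd, cpu_map_fd;

	if (bpf_prog_load_xattr(&prog_load_attr, &obj, &prog_fd))
		return 1;

	/* the name must match the map definition in the _kern.c object */
	cpu_map_fd = bpf_object__find_map_fd_by_name(obj, "cpu_map");
	if (cpu_map_fd < 0) {
		fprintf(stderr, "ERR: finding cpu_map failed\n");
		bpf_object__close(obj);
		return 1;
	}

	/* ... create_cpu_entry()-style bpf_map_update_elem() calls here ... */

	bpf_object__close(obj);
	return 0;
}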
Lee" X-Patchwork-Id: 11830355 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D71DDC433DF for ; Sat, 10 Oct 2020 22:53:24 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9DC302075E for ; Sat, 10 Oct 2020 22:53:24 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="RP9Lr0dw" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388114AbgJJWxX (ORCPT ); Sat, 10 Oct 2020 18:53:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59742 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728919AbgJJSo1 (ORCPT ); Sat, 10 Oct 2020 14:44:27 -0400 Received: from mail-pf1-x443.google.com (mail-pf1-x443.google.com [IPv6:2607:f8b0:4864:20::443]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E0561C08EC68; Sat, 10 Oct 2020 11:17:52 -0700 (PDT) Received: by mail-pf1-x443.google.com with SMTP id n14so9744871pff.6; Sat, 10 Oct 2020 11:17:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=OW/gvTkGu86YgUxpgjSRkvij/4SXXCYc66hmNhRMo9c=; b=RP9Lr0dwyZrTys8jBoQ9I0zvpSSbuymiY1u0AVevyxRC9RcycEfPU1qogzQIlGLn5F bnzf29bXersmue4wdam21u2v8bnvKumcx48SimZTQHgY/0I/J6ljJ5HZagxEpc+XotcQ I3hpZDoD8a2ujP2AYkAoLG15I5gNVjTZrSqWsUyvJNHIKlWT3Y5ggaQ46BivmN5DG0hR 9KXC/IBWD+1XZld+iSEDbfoE7RKQH3dGYxAtZLtgjTT8+z4eyXFDzlc1umSnqobPPVT4 0Ra0EZOfp32uTwsIdjKttEpdIzSIGa4pCO8koolM5YRKLeGQyIle+bvHpqGlcn+5MAkr aqxA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=OW/gvTkGu86YgUxpgjSRkvij/4SXXCYc66hmNhRMo9c=; b=jkqZ6wOOqCz9MsBtTFlb0hL7KheY42c8jvO3BlRzpfOtybg1fSrxXVB4PRbEVTRdFN FXeGhRCL7UNZHcxE4RBM8eqbroeR5tkD4Z2FRAr/XY8TS9RiPHWhB1QHUVHir6z7zmVc MSK+ECUGf4SV8rwT9aEnHnIb1sW2tEQQzRdaOx+W7pmkta8af+laLwWhKLwCsEYPlOVz Qd8QdFZgzV7OCu2xFsXXmN/SWnboK9Kahp/3cyljuOEvS1AY+F2PUrY4Bf25/NlEB964 S1BZFNjk3rNh56CZ4c6zZCktSJY3t5J+5wztXqYABlH/a4xiffFSGDTrCcB05ah0Cse4 MESg== X-Gm-Message-State: AOAM533VRYyndYgfmS2NI/4KWKDxvCu/yUPRAqfsQCfASFSIyf6Nq0qg kwvWooa2mtpPDI/0SbFE0g== X-Google-Smtp-Source: ABdhPJz3z9tzziFA8Y4tikPOGrztipNfWRECVbGGGWQUD3ZzPw+iH0RspWjfzcpT1xXSQModH4AOAQ== X-Received: by 2002:a17:90a:71cb:: with SMTP id m11mr10999364pjs.14.1602353872462; Sat, 10 Oct 2020 11:17:52 -0700 (PDT) Received: from localhost.localdomain ([182.209.58.45]) by smtp.gmail.com with ESMTPSA id q65sm14974615pfq.219.2020.10.10.11.17.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 10 Oct 2020 11:17:51 -0700 (PDT) From: "Daniel T. 
Lee" To: Daniel Borkmann , Alexei Starovoitov , Jesper Dangaard Brouer , Andrii Nakryiko , Lorenzo Bianconi Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, Xdp Subject: [PATCH bpf-next v2 3/3] samples: bpf: refactor XDP kern program maps with BTF-defined map Date: Sun, 11 Oct 2020 03:17:34 +0900 Message-Id: <20201010181734.1109-4-danieltimlee@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20201010181734.1109-1-danieltimlee@gmail.com> References: <20201010181734.1109-1-danieltimlee@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Most of the samples were converted to use the new BTF-defined MAP as they moved to libbpf, but some of the samples were missing. Instead of using the previous BPF MAP definition, this commit refactors xdp_monitor and xdp_sample_pkts_kern MAP definition with the new BTF-defined MAP format. Also, this commit removes the max_entries attribute at PERF_EVENT_ARRAY map type. The libbpf's bpf_object__create_map() will automatically set max_entries to the maximum configured number of CPUs on the host. Signed-off-by: Daniel T. Lee Acked-by: Andrii Nakryiko --- Changes in v2: - revert BTF key/val type to default of BPF_MAP_TYPE_PERF_EVENT_ARRAY samples/bpf/xdp_monitor_kern.c | 60 +++++++++++++++--------------- samples/bpf/xdp_sample_pkts_kern.c | 14 +++---- samples/bpf/xdp_sample_pkts_user.c | 1 - 3 files changed, 36 insertions(+), 39 deletions(-) diff --git a/samples/bpf/xdp_monitor_kern.c b/samples/bpf/xdp_monitor_kern.c index 3d33cca2d48a..5c955b812c47 100644 --- a/samples/bpf/xdp_monitor_kern.c +++ b/samples/bpf/xdp_monitor_kern.c @@ -6,21 +6,21 @@ #include #include -struct bpf_map_def SEC("maps") redirect_err_cnt = { - .type = BPF_MAP_TYPE_PERCPU_ARRAY, - .key_size = sizeof(u32), - .value_size = sizeof(u64), - .max_entries = 2, +struct { + __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY); + __type(key, u32); + __type(value, u64); + __uint(max_entries, 2); /* TODO: have entries for all possible errno's */ -}; +} redirect_err_cnt SEC(".maps"); #define XDP_UNKNOWN XDP_REDIRECT + 1 -struct bpf_map_def SEC("maps") exception_cnt = { - .type = BPF_MAP_TYPE_PERCPU_ARRAY, - .key_size = sizeof(u32), - .value_size = sizeof(u64), - .max_entries = XDP_UNKNOWN + 1, -}; +struct { + __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY); + __type(key, u32); + __type(value, u64); + __uint(max_entries, XDP_UNKNOWN + 1); +} exception_cnt SEC(".maps"); /* Tracepoint format: /sys/kernel/debug/tracing/events/xdp/xdp_redirect/format * Code in: kernel/include/trace/events/xdp.h @@ -129,19 +129,19 @@ struct datarec { }; #define MAX_CPUS 64 -struct bpf_map_def SEC("maps") cpumap_enqueue_cnt = { - .type = BPF_MAP_TYPE_PERCPU_ARRAY, - .key_size = sizeof(u32), - .value_size = sizeof(struct datarec), - .max_entries = MAX_CPUS, -}; +struct { + __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY); + __type(key, u32); + __type(value, struct datarec); + __uint(max_entries, MAX_CPUS); +} cpumap_enqueue_cnt SEC(".maps"); -struct bpf_map_def SEC("maps") cpumap_kthread_cnt = { - .type = BPF_MAP_TYPE_PERCPU_ARRAY, - .key_size = sizeof(u32), - .value_size = sizeof(struct datarec), - .max_entries = 1, -}; +struct { + __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY); + __type(key, u32); + __type(value, struct datarec); + __uint(max_entries, 1); +} cpumap_kthread_cnt SEC(".maps"); /* Tracepoint: /sys/kernel/debug/tracing/events/xdp/xdp_cpumap_enqueue/format * Code in: kernel/include/trace/events/xdp.h @@ -210,12 +210,12 @@ int trace_xdp_cpumap_kthread(struct 
cpumap_kthread_ctx *ctx) return 0; } -struct bpf_map_def SEC("maps") devmap_xmit_cnt = { - .type = BPF_MAP_TYPE_PERCPU_ARRAY, - .key_size = sizeof(u32), - .value_size = sizeof(struct datarec), - .max_entries = 1, -}; +struct { + __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY); + __type(key, u32); + __type(value, struct datarec); + __uint(max_entries, 1); +} devmap_xmit_cnt SEC(".maps"); /* Tracepoint: /sys/kernel/debug/tracing/events/xdp/xdp_devmap_xmit/format * Code in: kernel/include/trace/events/xdp.h diff --git a/samples/bpf/xdp_sample_pkts_kern.c b/samples/bpf/xdp_sample_pkts_kern.c index 33377289e2a8..9cf76b340dd7 100644 --- a/samples/bpf/xdp_sample_pkts_kern.c +++ b/samples/bpf/xdp_sample_pkts_kern.c @@ -5,14 +5,12 @@ #include #define SAMPLE_SIZE 64ul -#define MAX_CPUS 128 - -struct bpf_map_def SEC("maps") my_map = { - .type = BPF_MAP_TYPE_PERF_EVENT_ARRAY, - .key_size = sizeof(int), - .value_size = sizeof(u32), - .max_entries = MAX_CPUS, -}; + +struct { + __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY); + __uint(key_size, sizeof(int)); + __uint(value_size, sizeof(u32)); +} my_map SEC(".maps"); SEC("xdp_sample") int xdp_sample_prog(struct xdp_md *ctx) diff --git a/samples/bpf/xdp_sample_pkts_user.c b/samples/bpf/xdp_sample_pkts_user.c index 991ef6f0880b..4b2a300c750c 100644 --- a/samples/bpf/xdp_sample_pkts_user.c +++ b/samples/bpf/xdp_sample_pkts_user.c @@ -18,7 +18,6 @@ #include "perf-sys.h" -#define MAX_CPUS 128 static int if_idx; static char *if_name; static __u32 xdp_flags = XDP_FLAGS_UPDATE_IF_NOEXIST;
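
To make the conversion above easier to compare, here is a minimal
kernel-side sketch of the same kind of per-CPU counter map in both styles,
together with a tracepoint program that updates it. The map, program, and
include names are illustrative assumptions (the samples' own #include
targets were elided in the diff above), not taken verbatim from the tree.

/* Sketch only: legacy vs BTF-defined map for a per-CPU counter. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Legacy style that this patch removes:
 *
 *	struct bpf_map_def SEC("maps") err_cnt = {
 *		.type        = BPF_MAP_TYPE_PERCPU_ARRAY,
 *		.key_size    = sizeof(__u32),
 *		.value_size  = sizeof(__u64),
 *		.max_entries = 1,
 *	};
 */

/* BTF-defined style: key and value carry real types, so the map layout is
 * described by BTF and visible to tooling such as bpftool.
 */
struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
	__type(key, __u32);
	__type(value, __u64);
	__uint(max_entries, 1);
} err_cnt SEC(".maps");

SEC("tracepoint/xdp/xdp_exception")
int count_xdp_exception(void *ctx)
{
	__u32 key = 0;
	__u64 *cnt = bpf_map_lookup_elem(&err_cnt, &key);

	if (cnt)
		(*cnt)++;	/* per-CPU slot, so a plain increment is fine */
	return 0;
}

char _license[] SEC("license") = "GPL";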
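
The BPF_MAP_TYPE_PERF_EVENT_ARRAY change is the one behavioural nicety
here: with the BTF-defined form the max_entries line (and the sample's
MAX_CPUS define) can simply be dropped, because libbpf sizes the map to
the number of configured CPUs when it creates it. A hedged sketch follows,
with an illustrative XDP program that emits a fixed cookie rather than the
packet data the real sample sends.

/* Sketch only: a perf event array sized automatically by libbpf. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(__u32));
	/* no max_entries: filled in with the CPU count at load time */
} sample_events SEC(".maps");

SEC("xdp")
int xdp_emit_sample(struct xdp_md *ctx)
{
	__u64 cookie = 0x2020;	/* illustrative payload */

	/* stream a small record to user space on the current CPU's ring */
	bpf_perf_event_output(ctx, &sample_events, BPF_F_CURRENT_CPU,
			      &cookie, sizeof(cookie));
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";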