From patchwork Wed Dec 6 14:10:24 2023
From: Anton Protopopov
To: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Jiri Olsa, Martin KaFai Lau, Stanislav Fomichev, bpf@vger.kernel.org
Subject: [PATCH bpf-next 1/7] bpf: extract bpf_prog_bind_map logic into an inline helper
Date: Wed, 6 Dec 2023 14:10:24 +0000
Message-Id: <20231206141030.1478753-2-aspsk@isovalent.com>

Add a new inline function, __bpf_prog_bind_map(), which adds a new map to
prog->aux->used_maps. This new helper will be used in a subsequent patch.
(This change also simplifies the code of the bpf_prog_bind_map() function.)

Signed-off-by: Anton Protopopov
---
 kernel/bpf/syscall.c | 58 ++++++++++++++++++++++----------------------
 1 file changed, 29 insertions(+), 29 deletions(-)

diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 0ed286b8a0f0..81625ef98a7d 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -2366,6 +2366,28 @@ struct bpf_prog *bpf_prog_get_type_dev(u32 ufd, enum bpf_prog_type type,
 }
 EXPORT_SYMBOL_GPL(bpf_prog_get_type_dev);
 
+static int __bpf_prog_bind_map(struct bpf_prog *prog, struct bpf_map *map)
+{
+    struct bpf_map **used_maps_new;
+    int i;
+
+    for (i = 0; i < prog->aux->used_map_cnt; i++)
+        if (prog->aux->used_maps[i] == map)
+            return -EEXIST;
+
+    used_maps_new = krealloc_array(prog->aux->used_maps,
+                                   prog->aux->used_map_cnt + 1,
+                                   sizeof(used_maps_new[0]),
+                                   GFP_KERNEL);
+    if (!used_maps_new)
+        return -ENOMEM;
+
+    prog->aux->used_maps = used_maps_new;
+    prog->aux->used_maps[prog->aux->used_map_cnt++] = map;
+
+    return 0;
+}
+
 /* Initially all BPF programs could be loaded w/o specifying
  * expected_attach_type. Later for some of them specifying expected_attach_type
  * at load time became required so that program could be validated properly.
@@ -5285,8 +5307,7 @@ static int bpf_prog_bind_map(union bpf_attr *attr)
 {
     struct bpf_prog *prog;
     struct bpf_map *map;
-    struct bpf_map **used_maps_old, **used_maps_new;
-    int i, ret = 0;
+    int ret = 0;
 
     if (CHECK_ATTR(BPF_PROG_BIND_MAP))
         return -EINVAL;
@@ -5305,37 +5326,16 @@ static int bpf_prog_bind_map(union bpf_attr *attr)
     }
 
     mutex_lock(&prog->aux->used_maps_mutex);
-
-    used_maps_old = prog->aux->used_maps;
-
-    for (i = 0; i < prog->aux->used_map_cnt; i++)
-        if (used_maps_old[i] == map) {
-            bpf_map_put(map);
-            goto out_unlock;
-        }
-
-    used_maps_new = kmalloc_array(prog->aux->used_map_cnt + 1,
-                                  sizeof(used_maps_new[0]),
-                                  GFP_KERNEL);
-    if (!used_maps_new) {
-        ret = -ENOMEM;
-        goto out_unlock;
-    }
-
-    memcpy(used_maps_new, used_maps_old,
-           sizeof(used_maps_old[0]) * prog->aux->used_map_cnt);
-    used_maps_new[prog->aux->used_map_cnt] = map;
-
-    prog->aux->used_map_cnt++;
-    prog->aux->used_maps = used_maps_new;
-
-    kfree(used_maps_old);
-
-out_unlock:
+    ret = __bpf_prog_bind_map(prog, map);
     mutex_unlock(&prog->aux->used_maps_mutex);
     if (ret)
         bpf_map_put(map);
+
+    /* This map was already bound to the program */
+    if (ret == -EEXIST)
+        ret = 0;
+
 out_prog_put:
     bpf_prog_put(prog);
     return ret;
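For context, the userspace entry point to this path is libbpf's existing
bpf_prog_bind_map() wrapper for the BPF_PROG_BIND_MAP command. A minimal
sketch (not part of the patch):

    #include <bpf/bpf.h>

    /* Bind one extra map to an already-loaded program. With this patch
     * the kernel side goes through __bpf_prog_bind_map(); binding the
     * same map twice is still reported as success (-EEXIST is squashed
     * to 0 by the caller above).
     */
    int bind_extra_map(int prog_fd, int map_fd)
    {
        return bpf_prog_bind_map(prog_fd, map_fd, NULL);
    }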
From patchwork Wed Dec 6 14:10:25 2023
From: Anton Protopopov
To: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Jiri Olsa, Martin KaFai Lau, Stanislav Fomichev, bpf@vger.kernel.org
Subject: [PATCH bpf-next 2/7] bpf: rename and export a struct definition
Date: Wed, 6 Dec 2023 14:10:25 +0000
Message-Id: <20231206141030.1478753-3-aspsk@isovalent.com>

The structure struct prog_poke_elem, defined in kernel/bpf/arraymap.c,
contains a list_head and a pointer to a bpf_prog_aux instance; its purpose
is to serve as an element in a list of bpf_prog_aux instances. Rename it to
struct bpf_prog_aux_list_elem and define it in include/linux/bpf.h so that
it can be reused for similar purposes by other pieces of code.
Signed-off-by: Anton Protopopov
---
 include/linux/bpf.h   |  5 +++++
 kernel/bpf/arraymap.c | 13 ++++---------
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index eb84caf133df..8085780b7fcd 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -3227,4 +3227,9 @@ static inline bool bpf_is_subprog(const struct bpf_prog *prog)
     return prog->aux->func_idx != 0;
 }
 
+struct bpf_prog_aux_list_elem {
+    struct list_head list;
+    struct bpf_prog_aux *aux;
+};
+
 #endif /* _LINUX_BPF_H */
diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index 2058e89b5ddd..7e6df6bd7e7a 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -956,15 +956,10 @@ static void prog_array_map_seq_show_elem(struct bpf_map *map, void *key,
     rcu_read_unlock();
 }
 
-struct prog_poke_elem {
-    struct list_head list;
-    struct bpf_prog_aux *aux;
-};
-
 static int prog_array_map_poke_track(struct bpf_map *map,
                                      struct bpf_prog_aux *prog_aux)
 {
-    struct prog_poke_elem *elem;
+    struct bpf_prog_aux_list_elem *elem;
     struct bpf_array_aux *aux;
     int ret = 0;
 
@@ -997,7 +992,7 @@ static int prog_array_map_poke_track(struct bpf_map *map,
 static void prog_array_map_poke_untrack(struct bpf_map *map,
                                         struct bpf_prog_aux *prog_aux)
 {
-    struct prog_poke_elem *elem, *tmp;
+    struct bpf_prog_aux_list_elem *elem, *tmp;
     struct bpf_array_aux *aux;
 
     aux = container_of(map, struct bpf_array, map)->aux;
@@ -1017,7 +1012,7 @@ static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
                                     struct bpf_prog *new)
 {
     u8 *old_addr, *new_addr, *old_bypass_addr;
-    struct prog_poke_elem *elem;
+    struct bpf_prog_aux_list_elem *elem;
     struct bpf_array_aux *aux;
 
     aux = container_of(map, struct bpf_array, map)->aux;
@@ -1148,7 +1143,7 @@ static struct bpf_map *prog_array_map_alloc(union bpf_attr *attr)
 
 static void prog_array_map_free(struct bpf_map *map)
 {
-    struct prog_poke_elem *elem, *tmp;
+    struct bpf_prog_aux_list_elem *elem, *tmp;
     struct bpf_array_aux *aux;
 
     aux = container_of(map, struct bpf_array, map)->aux;
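As an illustration of the pattern the shared type enables, here is a hedged
sketch mirroring the arraymap.c call sites; the caller and the list head it
owns are hypothetical:

    /* Track a bpf_prog_aux instance on a caller-owned list using the
     * now-exported element type; "tracked_list" is a placeholder for
     * whatever structure embeds the list head.
     */
    static int track_prog_aux(struct list_head *tracked_list,
                              struct bpf_prog_aux *aux)
    {
        struct bpf_prog_aux_list_elem *elem;

        elem = kmalloc(sizeof(*elem), GFP_KERNEL);
        if (!elem)
            return -ENOMEM;

        INIT_LIST_HEAD(&elem->list);
        elem->aux = aux;
        list_add_tail(&elem->list, tracked_list);
        return 0;
    }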
From patchwork Wed Dec 6 14:10:26 2023
From: Anton Protopopov
To: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Jiri Olsa, Martin KaFai Lau, Stanislav Fomichev, bpf@vger.kernel.org
Subject: [PATCH bpf-next 3/7] bpf: adjust functions offsets when patching progs
Date: Wed, 6 Dec 2023 14:10:26 +0000
Message-Id: <20231206141030.1478753-4-aspsk@isovalent.com>

When patching instructions with the bpf_patch_insn_data() function, patch
env->prog->aux->func_info[i].insn_off as well. Currently this doesn't seem
to break anything, but this field will be used in a subsequent patch.
Signed-off-by: Anton Protopopov
---
 kernel/bpf/verifier.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index bf94ba50c6ee..5d38ee2e74a1 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -18181,6 +18181,20 @@ static void adjust_insn_aux_data(struct bpf_verifier_env *env,
     vfree(old_data);
 }
 
+static void adjust_func_info(struct bpf_verifier_env *env, u32 off, u32 len)
+{
+    int i;
+
+    if (len == 1)
+        return;
+
+    for (i = 0; i < env->prog->aux->func_info_cnt; i++) {
+        if (env->prog->aux->func_info[i].insn_off <= off)
+            continue;
+        env->prog->aux->func_info[i].insn_off += len - 1;
+    }
+}
+
 static void adjust_subprog_starts(struct bpf_verifier_env *env, u32 off, u32 len)
 {
     int i;
@@ -18232,6 +18246,7 @@ static struct bpf_prog *bpf_patch_insn_data(struct bpf_verifier_env *env, u32 off,
         return NULL;
     }
     adjust_insn_aux_data(env, new_data, new_prog, off, len);
+    adjust_func_info(env, off, len);
     adjust_subprog_starts(env, off, len);
     adjust_poke_descs(new_prog, off, len);
     return new_prog;
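A concrete example of the adjustment rule (illustrative numbers only): if
bpf_patch_insn_data() replaces the single instruction at offset 10 with a
patch of len = 3, the program grows by len - 1 = 2 instructions, so a
func_info entry with insn_off = 20 must be moved to insn_off = 22, while an
entry with insn_off <= 10, before the patch site, is left untouched.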
From patchwork Wed Dec 6 14:10:27 2023
From: Anton Protopopov
To: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Jiri Olsa, Martin KaFai Lau, Stanislav Fomichev, bpf@vger.kernel.org
Subject: [PATCH bpf-next 4/7] bpf: implement BPF Static Keys support
Date: Wed, 6 Dec 2023 14:10:27 +0000
Message-Id: <20231206141030.1478753-5-aspsk@isovalent.com>

BPF static keys are created as array maps with BPF_F_STATIC_KEY set in
map_flags and with the following parameters (any other combination is
considered invalid):

    map_type:    BPF_MAP_TYPE_ARRAY
    key_size:    4
    value_size:  4
    max_entries: 1

Given such a map, a BPF program can use it to control a "static branch":
a JA +OFF instruction which can be toggled to become a JA +0 instruction,
and back, by writing 1 or 0 to the map. One branch is described by the
following structure:

    struct bpf_static_branch_info {
        __u32 map_fd;
        __u32 insn_offset;
        __u32 jump_target;
        __u32 flags;
    };

Here map_fd should point to the corresponding static key, insn_offset is
the offset of the JA instruction, and jump_target is the absolute offset
of the instruction to which the JA jumps. The flags field can be either 0
or BPF_F_INVERSE_BRANCH, which lets users specify one of two types of
static branches: a normal branch is patched to NOP/JUMP when the key is
zero/non-zero, an inverse branch the other way around. Why both kinds are
needed may not be obvious: both are required to implement "unlikely" and
"likely" branches controlled by a static key; see the subsequent patch
which implements libbpf support for BPF static keys.

On program load a list of branches described by struct
bpf_static_branch_info is passed via new attributes:

    __aligned_u64 static_branches_info;
    __u32 static_branches_info_size;

This patch doesn't actually fully implement the functionality for any
architecture. To do so, one should implement a bpf_arch_poke_static_branch()
helper which performs text poking for the particular architecture. The
arch-specific code should also configure the internal representation of the
static branch appropriately (fill in the arch-specific fields).

Verification of the new feature is straightforward: instead of following
one edge for a JA instruction, insert two edges, one for the original JA
and one for the NOP (i.e., JA +0) instruction. In order not to pollute the
kernel/bpf/{syscall.c,verifier.c} files with new code, a new
kernel/bpf/skey.c file was added.
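To make the interface concrete, here is a hedged userspace sketch (not part
of the patch; it assumes BPF_F_STATIC_KEY comes from the updated uapi
header) that creates a static key with the exact parameters required above
and turns it on:

    #include <bpf/bpf.h>

    int create_and_set_static_key(void)
    {
        LIBBPF_OPTS(bpf_map_create_opts, opts, .map_flags = BPF_F_STATIC_KEY);
        __u32 key = 0, on = 1;
        int map_fd;

        /* key_size = 4, value_size = 4, max_entries = 1 */
        map_fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, "skey", 4, 4, 1, &opts);
        if (map_fd < 0)
            return map_fd;

        /* writing a non-zero value turns all controlled branches on */
        return bpf_map_update_elem(map_fd, &key, &on, BPF_ANY);
    }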
For more details on the design of the feature see the following talk at
Linux Plumbers 2023: https://lpc.events/event/17/contributions/1608/

Signed-off-by: Anton Protopopov
---
 MAINTAINERS                    |   6 +
 include/linux/bpf.h            |  29 ++++
 include/uapi/linux/bpf.h       |  18 ++
 kernel/bpf/Makefile            |   2 +
 kernel/bpf/arraymap.c          |   2 +-
 kernel/bpf/core.c              |   9 +
 kernel/bpf/skey.c              | 306 +++++++++++++++++++++++++++++++++
 kernel/bpf/syscall.c           |  46 ++++-
 kernel/bpf/verifier.c          |  88 +++++++++-
 tools/include/uapi/linux/bpf.h |  18 ++
 10 files changed, 511 insertions(+), 13 deletions(-)
 create mode 100644 kernel/bpf/skey.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 14e1194faa4b..e2f655980c6c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3887,6 +3887,12 @@ S:	Maintained
 F:	kernel/bpf/stackmap.c
 F:	kernel/trace/bpf_trace.c
 
+BPF [STATIC KEYS]
+M:	Anton Protopopov
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	kernel/bpf/skey.c
+
 BROADCOM ASP 2.0 ETHERNET DRIVER
 M:	Justin Chen
 M:	Florian Fainelli
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 8085780b7fcd..6985b4893191 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -289,8 +289,18 @@ struct bpf_map {
     bool bypass_spec_v1;
     bool frozen; /* write-once; write-protected by freeze_mutex */
     s64 __percpu *elem_count;
+    struct list_head static_key_list_head;
+    struct mutex static_key_mutex;
 };
 
+bool bpf_jit_supports_static_keys(void);
+struct bpf_static_branch *bpf_static_branch_by_offset(struct bpf_prog *bpf_prog,
+                                                      u32 offset);
+int bpf_prog_register_static_branches(struct bpf_prog *prog);
+int bpf_prog_init_static_branches(struct bpf_prog *prog, union bpf_attr *attr);
+int bpf_static_key_update(struct bpf_map *map, void *key, void *value, u64 flags);
+void bpf_static_key_remove_prog(struct bpf_map *map, struct bpf_prog_aux *aux);
+
 static inline const char *btf_field_type_name(enum btf_field_type type)
 {
     switch (type) {
@@ -1381,6 +1391,17 @@ struct btf_mod_pair {
 
 struct bpf_kfunc_desc_tab;
 
+struct bpf_static_branch {
+    struct bpf_map *map;
+    u32 flags;
+    u32 bpf_offset;
+    void *arch_addr;
+    u32 arch_len;
+    u8 bpf_jmp[8];
+    u8 arch_nop[8];
+    u8 arch_jmp[8];
+};
+
 struct bpf_prog_aux {
     atomic64_t refcnt;
     u32 used_map_cnt;
@@ -1473,6 +1494,8 @@ struct bpf_prog_aux {
         struct work_struct work;
         struct rcu_head rcu;
     };
+    struct bpf_static_branch *static_branches;
+    u32 static_branches_len;
 };
 
 struct bpf_prog {
@@ -3176,6 +3199,9 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
 void *bpf_arch_text_copy(void *dst, void *src, size_t len);
 int bpf_arch_text_invalidate(void *dst, size_t len);
 
+int bpf_arch_poke_static_branch(struct bpf_prog *prog,
+                                struct bpf_static_branch *branch, bool on);
+
 struct btf_id_set;
 bool btf_id_set_contains(const struct btf_id_set *set, u32 id);
 
@@ -3232,4 +3258,7 @@ struct bpf_prog_aux_list_elem {
     struct bpf_prog_aux *aux;
 };
 
+int __bpf_prog_bind_map(struct bpf_prog *prog, struct bpf_map *map,
+                        bool check_boundaries);
+
 #endif /* _LINUX_BPF_H */
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 0f6cdf52b1da..2d3cf9175cf9 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1325,6 +1325,9 @@ enum {
 /* Get path from provided FD in BPF_OBJ_PIN/BPF_OBJ_GET commands */
     BPF_F_PATH_FD = (1U << 14),
+
+/* Treat this map as a BPF Static Key */
+    BPF_F_STATIC_KEY = (1U << 15),
 };
 
 /* Flags for BPF_PROG_QUERY. */
@@ -1369,6 +1372,18 @@ struct bpf_stack_build_id {
 
 #define BPF_OBJ_NAME_LEN 16U
 
+/* flags for bpf_static_branch_info */
+enum {
+    BPF_F_INVERSE_BRANCH = 1,
+};
+
+struct bpf_static_branch_info {
+    __u32 map_fd;      /* map in control */
+    __u32 insn_offset; /* absolute offset of the branch instruction */
+    __u32 jump_target; /* absolute offset of the jump target */
+    __u32 flags;
+};
+
 union bpf_attr {
     struct { /* anonymous struct used by BPF_MAP_CREATE command */
         __u32 map_type; /* one of enum bpf_map_type */
@@ -1467,6 +1482,9 @@ union bpf_attr {
          * truncated), or smaller (if log buffer wasn't filled completely).
          */
         __u32 log_true_size;
+        /* An array of struct bpf_static_branch_info */
+        __aligned_u64 static_branches_info;
+        __u32 static_branches_info_size;
     };
 
     struct { /* anonymous struct used by BPF_OBJ_* commands */
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index f526b7573e97..f0f0eb9acf18 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -46,3 +46,5 @@ obj-$(CONFIG_BPF_PRELOAD) += preload/
 obj-$(CONFIG_BPF_SYSCALL) += relo_core.o
 $(obj)/relo_core.o: $(srctree)/tools/lib/bpf/relo_core.c FORCE
 	$(call if_changed_rule,cc_o_c)
+
+obj-$(CONFIG_BPF_SYSCALL) += skey.o
diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index 7e6df6bd7e7a..f968489e1df8 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -17,7 +17,7 @@
 
 #define ARRAY_CREATE_FLAG_MASK \
 	(BPF_F_NUMA_NODE | BPF_F_MMAPABLE | BPF_F_ACCESS_MASK | \
-	 BPF_F_PRESERVE_ELEMS | BPF_F_INNER_MAP)
+	 BPF_F_PRESERVE_ELEMS | BPF_F_INNER_MAP | BPF_F_STATIC_KEY)
 
 static void bpf_array_free_percpu(struct bpf_array *array)
 {
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 08626b519ce2..b10ffcb0a6e6 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2670,6 +2670,8 @@ void __bpf_free_used_maps(struct bpf_prog_aux *aux,
         map = used_maps[i];
         if (map->ops->map_poke_untrack)
             map->ops->map_poke_untrack(map, aux);
+        if (map->map_flags & BPF_F_STATIC_KEY)
+            bpf_static_key_remove_prog(map, aux);
         bpf_map_put(map);
     }
 }
@@ -2927,6 +2929,13 @@ void __weak arch_bpf_stack_walk(bool (*consume_fn)(void *cookie, u64 ip, u64 sp,
 {
 }
 
+int __weak bpf_arch_poke_static_branch(struct bpf_prog *prog,
+                                       struct bpf_static_branch *branch,
+                                       bool on)
+{
+    return -EOPNOTSUPP;
+}
+
 #ifdef CONFIG_BPF_SYSCALL
 static int __init bpf_global_ma_init(void)
 {
diff --git a/kernel/bpf/skey.c b/kernel/bpf/skey.c
new file mode 100644
index 000000000000..8f1915ba6d44
--- /dev/null
+++ b/kernel/bpf/skey.c
@@ -0,0 +1,306 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2023 Isovalent
+ */
+
+#include <linux/bpf.h>
+
+bool bpf_jit_supports_static_keys(void)
+{
+    int err;
+
+    /* Should return -EINVAL if supported */
+    err = bpf_arch_poke_static_branch(NULL, NULL, false);
+    return err != -EOPNOTSUPP;
+}
+
+struct bpf_static_branch *bpf_static_branch_by_offset(struct bpf_prog *bpf_prog, u32 offset)
+{
+    u32 i, n = bpf_prog->aux->static_branches_len;
+    struct bpf_static_branch *branch;
+
+    for (i = 0; i < n; i++) {
+        branch = &bpf_prog->aux->static_branches[i];
+        if (branch->bpf_offset == offset)
+            return branch;
+    }
+    return NULL;
+}
+
+static int bpf_prog_update_static_branches(struct bpf_prog *prog,
+                                           const struct bpf_map *map, bool on)
+{
+    struct bpf_static_branch *branch;
+    int err = 0;
+    int i;
+
+    for (i = 0; i < prog->aux->static_branches_len; i++) {
+        branch = &prog->aux->static_branches[i];
+        if (branch->map != map)
+            continue;
+
+        err = bpf_arch_poke_static_branch(prog, branch, on);
+        if (err)
+            break;
+    }
+
+    return err;
+}
+
+static int static_key_add_prog(struct bpf_map *map, struct bpf_prog *prog)
+{
+    struct bpf_prog_aux_list_elem *elem;
+    u32 key = 0;
+    int err = 0;
+    u32 *val;
+
+    mutex_lock(&map->static_key_mutex);
+
+    val = map->ops->map_lookup_elem(map, &key);
+    if (!val) {
+        err = -ENOENT;
+        goto unlock_ret;
+    }
+
+    list_for_each_entry(elem, &map->static_key_list_head, list)
+        if (elem->aux == prog->aux)
+            goto unlock_ret;
+
+    elem = kmalloc(sizeof(*elem), GFP_KERNEL);
+    if (!elem) {
+        err = -ENOMEM;
+        goto unlock_ret;
+    }
+
+    INIT_LIST_HEAD(&elem->list);
+    elem->aux = prog->aux;
+
+    list_add_tail(&elem->list, &map->static_key_list_head);
+
+    err = bpf_prog_update_static_branches(prog, map, *val);
+
+unlock_ret:
+    mutex_unlock(&map->static_key_mutex);
+    return err;
+}
+
+void bpf_static_key_remove_prog(struct bpf_map *map, struct bpf_prog_aux *aux)
+{
+    struct bpf_prog_aux_list_elem *elem, *tmp;
+
+    mutex_lock(&map->static_key_mutex);
+    list_for_each_entry_safe(elem, tmp, &map->static_key_list_head, list) {
+        if (elem->aux == aux) {
+            list_del_init(&elem->list);
+            kfree(elem);
+            break;
+        }
+    }
+    mutex_unlock(&map->static_key_mutex);
+}
+
+int bpf_static_key_update(struct bpf_map *map, void *key, void *value, u64 flags)
+{
+    struct bpf_prog_aux_list_elem *elem;
+    bool on = *(u32 *)value;
+    int err;
+
+    mutex_lock(&map->static_key_mutex);
+
+    err = map->ops->map_update_elem(map, key, value, flags);
+    if (err)
+        goto unlock_ret;
+
+    list_for_each_entry(elem, &map->static_key_list_head, list) {
+        err = bpf_prog_update_static_branches(elem->aux->prog, map, on);
+        if (err)
+            break;
+    }
+
+unlock_ret:
+    mutex_unlock(&map->static_key_mutex);
+    return err;
+}
+
+static bool init_static_jump_instruction(struct bpf_prog *prog,
+                                         struct bpf_static_branch *branch,
+                                         struct bpf_static_branch_info *branch_info)
+{
+    bool inverse = !!(branch_info->flags & BPF_F_INVERSE_BRANCH);
+    u32 insn_offset = branch_info->insn_offset;
+    u32 jump_target = branch_info->jump_target;
+    struct bpf_insn *jump_insn;
+    s32 jump_offset;
+
+    if (insn_offset % 8 || jump_target % 8)
+        return false;
+
+    if (insn_offset / 8 >= prog->len || jump_target / 8 >= prog->len)
+        return false;
+
+    jump_insn = &prog->insnsi[insn_offset / 8];
+    if (jump_insn->code != (BPF_JMP | BPF_JA) &&
+        jump_insn->code != (BPF_JMP32 | BPF_JA))
+        return false;
+
+    if (jump_insn->dst_reg || jump_insn->src_reg)
+        return false;
+
+    if (jump_insn->off && jump_insn->imm)
+        return false;
+
+    jump_offset = ((long)jump_target - (long)insn_offset) / 8 - 1;
+
+    if (inverse) {
+        if (jump_insn->code == (BPF_JMP | BPF_JA)) {
+            if (jump_insn->off != jump_offset)
+                return false;
+        } else {
+            if (jump_insn->imm != jump_offset)
+                return false;
+        }
+    } else {
+        /* The instruction here should be JA 0. We will replace it by a
+         * non-zero jump so that it is simpler to verify this program
+         * (the verifier might optimize out such instructions and we
+         * don't want to care about this).
+         * After verification the
+         * instruction will be set to the proper value.
+         */
+        if (jump_insn->off || jump_insn->imm)
+            return false;
+
+        if (jump_insn->code == (BPF_JMP | BPF_JA))
+            jump_insn->off = jump_offset;
+        else
+            jump_insn->imm = jump_offset;
+    }
+
+    memcpy(branch->bpf_jmp, jump_insn, 8);
+    branch->bpf_offset = insn_offset;
+    return true;
+}
+
+static int
+__bpf_prog_init_static_branches(struct bpf_prog *prog,
+                                struct bpf_static_branch_info *static_branches_info,
+                                int n)
+{
+    size_t size = n * sizeof(*prog->aux->static_branches);
+    struct bpf_static_branch *static_branches;
+    struct bpf_map *map;
+    int i, err = 0;
+
+    static_branches = kzalloc(size, GFP_USER | __GFP_NOWARN);
+    if (!static_branches)
+        return -ENOMEM;
+
+    for (i = 0; i < n; i++) {
+        if (static_branches_info[i].flags & ~(BPF_F_INVERSE_BRANCH)) {
+            err = -EINVAL;
+            goto free_static_branches;
+        }
+        static_branches[i].flags = static_branches_info[i].flags;
+
+        if (!init_static_jump_instruction(prog, &static_branches[i],
+                                          &static_branches_info[i])) {
+            err = -EINVAL;
+            goto free_static_branches;
+        }
+
+        map = bpf_map_get(static_branches_info[i].map_fd);
+        if (IS_ERR(map)) {
+            err = PTR_ERR(map);
+            goto free_static_branches;
+        }
+
+        if (!(map->map_flags & BPF_F_STATIC_KEY)) {
+            bpf_map_put(map);
+            err = -EINVAL;
+            goto free_static_branches;
+        }
+
+        err = __bpf_prog_bind_map(prog, map, true);
+        if (err) {
+            bpf_map_put(map);
+            if (err != -EEXIST)
+                goto free_static_branches;
+        }
+
+        static_branches[i].map = map;
+    }
+
+    prog->aux->static_branches = static_branches;
+    prog->aux->static_branches_len = n;
+
+    return 0;
+
+free_static_branches:
+    kfree(static_branches);
+    return err;
+}
+
+int bpf_prog_init_static_branches(struct bpf_prog *prog, union bpf_attr *attr)
+{
+    void __user *user_static_branches = u64_to_user_ptr(attr->static_branches_info);
+    size_t item_size = sizeof(struct bpf_static_branch_info);
+    struct bpf_static_branch_info *static_branches_info;
+    size_t size = attr->static_branches_info_size;
+    int err = 0;
+
+    if (!attr->static_branches_info)
+        return size ? -EINVAL : 0;
+    if (!size)
+        return -EINVAL;
+    if (size % item_size)
+        return -EINVAL;
+
+    if (!bpf_jit_supports_static_keys())
+        return -EOPNOTSUPP;
+
+    static_branches_info = kzalloc(size, GFP_USER | __GFP_NOWARN);
+    if (!static_branches_info)
+        return -ENOMEM;
+
+    if (copy_from_user(static_branches_info, user_static_branches, size)) {
+        err = -EFAULT;
+        goto free_branches;
+    }
+
+    err = __bpf_prog_init_static_branches(prog, static_branches_info,
+                                          size / item_size);
+    if (err)
+        goto free_branches;
+
+    err = 0;
+
+free_branches:
+    kfree(static_branches_info);
+    return err;
+}
+
+int bpf_prog_register_static_branches(struct bpf_prog *prog)
+{
+    int n_branches = prog->aux->static_branches_len;
+    struct bpf_static_branch *branch;
+    int err = 0;
+    u32 i;
+
+    for (i = 0; i < n_branches; i++) {
+        branch = &prog->aux->static_branches[i];
+
+        /* JIT compiler did not detect this branch
+         * and thus won't be able to poke it when asked to
+         */
+        if (!branch->arch_len)
+            return -EINVAL;
+    }
+
+    for (i = 0; i < n_branches; i++) {
+        branch = &prog->aux->static_branches[i];
+        err = static_key_add_prog(branch->map, prog);
+        if (err)
+            break;
+    }
+
+    return err;
+}
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 81625ef98a7d..a85ade499e45 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -197,6 +197,10 @@ static int bpf_map_update_value(struct bpf_map *map, struct file *map_file,
            map->map_type == BPF_MAP_TYPE_STACK ||
            map->map_type == BPF_MAP_TYPE_BLOOM_FILTER) {
         err = map->ops->map_push_elem(map, value, flags);
+    } else if (map->map_flags & BPF_F_STATIC_KEY) {
+        rcu_read_lock();
+        err = bpf_static_key_update(map, key, value, flags);
+        rcu_read_unlock();
     } else {
         rcu_read_lock();
         err = map->ops->map_update_elem(map, key, value, flags);
@@ -1096,6 +1100,16 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf,
     return ret;
 }
 
+static bool is_static_key(u32 map_type, u32 key_size, u32 value_size,
+                          u32 max_entries, u32 map_flags)
+{
+    return map_type == BPF_MAP_TYPE_ARRAY &&
+           key_size == 4 &&
+           value_size == 4 &&
+           max_entries == 1 &&
+           map_flags & BPF_F_STATIC_KEY;
+}
+
 #define BPF_MAP_CREATE_LAST_FIELD map_extra
 /* called via syscall */
 static int map_create(union bpf_attr *attr)
@@ -1104,6 +1118,7 @@ static int map_create(union bpf_attr *attr)
     int numa_node = bpf_map_attr_numa_node(attr);
     u32 map_type = attr->map_type;
     struct bpf_map *map;
+    bool static_key;
     int f_flags;
     int err;
 
@@ -1123,6 +1138,13 @@ static int map_create(union bpf_attr *attr)
         attr->map_extra != 0)
         return -EINVAL;
 
+    static_key = is_static_key(attr->map_type, attr->key_size, attr->value_size,
+                               attr->max_entries, attr->map_flags);
+    if (static_key && !bpf_jit_supports_static_keys())
+        return -EOPNOTSUPP;
+    if (!static_key && (attr->map_flags & BPF_F_STATIC_KEY))
+        return -EINVAL;
+
     f_flags = bpf_get_file_flag(attr->map_flags);
     if (f_flags < 0)
         return f_flags;
@@ -1221,7 +1243,9 @@ static int map_create(union bpf_attr *attr)
     atomic64_set(&map->refcnt, 1);
     atomic64_set(&map->usercnt, 1);
     mutex_init(&map->freeze_mutex);
+    mutex_init(&map->static_key_mutex);
     spin_lock_init(&map->owner.lock);
+    INIT_LIST_HEAD(&map->static_key_list_head);
 
     if (attr->btf_key_type_id || attr->btf_value_type_id ||
         /* Even the map's value is a kernel's struct,
@@ -2366,7 +2390,7 @@ struct bpf_prog *bpf_prog_get_type_dev(u32 ufd, enum bpf_prog_type type,
 }
 EXPORT_SYMBOL_GPL(bpf_prog_get_type_dev);
 
-static int __bpf_prog_bind_map(struct bpf_prog *prog, struct bpf_map *map)
+int __bpf_prog_bind_map(struct bpf_prog *prog, struct bpf_map *map, bool check_boundaries)
 {
     struct bpf_map **used_maps_new;
     int i;
@@ -2375,6 +2399,13 @@ static int __bpf_prog_bind_map(struct bpf_prog *prog, struct bpf_map *map)
         if (prog->aux->used_maps[i] == map)
             return -EEXIST;
 
+    /*
+     * It is ok to add more maps after the program is loaded, but not
+     * before bpf_check, as the verifier env only has MAX_USED_MAPS slots
+     */
+    if (check_boundaries && prog->aux->used_map_cnt >= MAX_USED_MAPS)
+        return -E2BIG;
+
     used_maps_new = krealloc_array(prog->aux->used_maps,
                                    prog->aux->used_map_cnt + 1,
                                    sizeof(used_maps_new[0]),
@@ -2388,6 +2419,7 @@ static int __bpf_prog_bind_map(struct bpf_prog *prog, struct bpf_map *map)
     return 0;
 }
 
+
 /* Initially all BPF programs could be loaded w/o specifying
  * expected_attach_type. Later for some of them specifying expected_attach_type
  * at load time became required so that program could be validated properly.
@@ -2576,7 +2608,7 @@ static bool is_perfmon_prog_type(enum bpf_prog_type prog_type)
 }
 
 /* last field in 'union bpf_attr' used by this command */
-#define BPF_PROG_LOAD_LAST_FIELD log_true_size
+#define BPF_PROG_LOAD_LAST_FIELD static_branches_info_size
 
 static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
 {
@@ -2734,6 +2766,10 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
     if (err < 0)
         goto free_prog_sec;
 
+    err = bpf_prog_init_static_branches(prog, attr);
+    if (err < 0)
+        goto free_prog_sec;
+
     /* run eBPF verifier */
     err = bpf_check(&prog, attr, uattr, uattr_size);
     if (err < 0)
@@ -2743,6 +2779,10 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
     if (err < 0)
         goto free_used_maps;
 
+    err = bpf_prog_register_static_branches(prog);
+    if (err < 0)
+        goto free_used_maps;
+
     err = bpf_prog_alloc_id(prog);
     if (err)
         goto free_used_maps;
@@ -5326,7 +5366,7 @@ static int bpf_prog_bind_map(union bpf_attr *attr)
     }
 
     mutex_lock(&prog->aux->used_maps_mutex);
-    ret = __bpf_prog_bind_map(prog, map);
+    ret = __bpf_prog_bind_map(prog, map, false);
     mutex_unlock(&prog->aux->used_maps_mutex);
 
     if (ret)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5d38ee2e74a1..6b591f4a01c6 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -15534,6 +15534,7 @@ static int visit_func_call_insn(int t, struct bpf_insn *insns,
 static int visit_insn(int t, struct bpf_verifier_env *env)
 {
     struct bpf_insn *insns = env->prog->insnsi, *insn = &insns[t];
+    struct bpf_static_branch *branch;
     int ret, off;
 
     if (bpf_pseudo_func(insn))
@@ -15587,15 +15588,26 @@ static int visit_insn(int t, struct bpf_verifier_env *env)
         else
             off = insn->imm;
 
-        /* unconditional jump with single edge */
-        ret = push_insn(t, t + off + 1, FALLTHROUGH, env,
-                        true);
-        if (ret)
-            return ret;
+        branch = bpf_static_branch_by_offset(env->prog, t * 8);
+        if (unlikely(branch)) {
+            /* static branch with two edges */
+            mark_prune_point(env, t);
 
-        mark_prune_point(env, t + off + 1);
-        mark_jmp_point(env, t + off + 1);
+            ret = push_insn(t, t + 1, FALLTHROUGH, env, true);
+            if (ret)
+                return ret;
+
+            ret = push_insn(t, t + off + 1, BRANCH, env, true);
+        } else {
+            /* unconditional jump with single edge */
+            ret = push_insn(t, t + off + 1, FALLTHROUGH, env,
+                            true);
+            if (ret)
+                return ret;
+
+            mark_prune_point(env, t + off + 1);
+            mark_jmp_point(env, t + off + 1);
+        }
         return ret;
 
     default:
@@ -17547,6 +17559,10 @@ static int do_check(struct bpf_verifier_env *env)
                     mark_reg_scratched(env, BPF_REG_0);
             } else if (opcode == BPF_JA) {
+                struct bpf_verifier_state *other_branch;
+                struct bpf_static_branch *branch;
+                u32 jmp_offset;
+
                 if (BPF_SRC(insn->code) != BPF_K ||
                     insn->src_reg != BPF_REG_0 ||
                     insn->dst_reg != BPF_REG_0 ||
@@ -17557,9 +17573,20 @@ static int do_check(struct bpf_verifier_env *env)
                 }
 
                 if (class == BPF_JMP)
-                    env->insn_idx += insn->off + 1;
+                    jmp_offset = insn->off + 1;
                 else
-                    env->insn_idx += insn->imm + 1;
+                    jmp_offset = insn->imm + 1;
+
+                branch = bpf_static_branch_by_offset(env->prog, env->insn_idx * 8);
+                if (unlikely(branch)) {
+                    other_branch = push_stack(env, env->insn_idx + jmp_offset,
+                                              env->insn_idx, false);
+                    if (!other_branch)
+                        return -EFAULT;
+
+                    jmp_offset = 1;
+                }
+                env->insn_idx += jmp_offset;
 
                 continue;
             } else if (opcode == BPF_EXIT) {
@@ -17854,6 +17881,11 @@ static int check_map_prog_compatibility(struct bpf_verifier_env *env,
 {
     enum bpf_prog_type prog_type = resolve_prog_type(prog);
 
+    if (map->map_flags & BPF_F_STATIC_KEY) {
+        verbose(env, "progs cannot access static keys yet\n");
+        return -EINVAL;
+    }
+
     if (btf_record_has_field(map->record, BPF_LIST_HEAD) ||
         btf_record_has_field(map->record, BPF_RB_ROOT)) {
         if (is_tracing_prog_type(prog_type)) {
@@ -18223,6 +18255,25 @@ static void adjust_poke_descs(struct bpf_prog *prog, u32 off, u32 len)
     }
 }
 
+static void adjust_static_branches(struct bpf_prog *prog, u32 off, u32 len)
+{
+    struct bpf_static_branch *branch;
+    const u32 delta = (len - 1) * 8; /* # of new prog bytes */
+    int i;
+
+    if (len <= 1)
+        return;
+
+    for (i = 0; i < prog->aux->static_branches_len; i++) {
+        branch = &prog->aux->static_branches[i];
+        if (branch->bpf_offset <= off * 8)
+            continue;
+
+        branch->bpf_offset += delta;
+        memcpy(branch->bpf_jmp, &prog->insnsi[branch->bpf_offset / 8], 8);
+    }
+}
+
 static struct bpf_prog *bpf_patch_insn_data(struct bpf_verifier_env *env, u32 off,
                                             const struct bpf_insn *patch, u32 len)
 {
@@ -18249,6 +18300,7 @@ static struct bpf_prog *bpf_patch_insn_data(struct bpf_verifier_env *env, u32 off,
     adjust_func_info(env, off, len);
     adjust_subprog_starts(env, off, len);
     adjust_poke_descs(new_prog, off, len);
+    adjust_static_branches(new_prog, off, len);
     return new_prog;
 }
 
@@ -18914,6 +18966,9 @@ static int jit_subprogs(struct bpf_verifier_env *env)
         func[i]->aux->nr_linfo = prog->aux->nr_linfo;
         func[i]->aux->jited_linfo = prog->aux->jited_linfo;
         func[i]->aux->linfo_idx = env->subprog_info[i].linfo_idx;
+        func[i]->aux->static_branches = prog->aux->static_branches;
+        func[i]->aux->static_branches_len = prog->aux->static_branches_len;
+
         num_exentries = 0;
         insn = func[i]->insnsi;
         for (j = 0; j < func[i]->len; j++, insn++) {
@@ -20704,6 +20759,21 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u32 uattr_size)
     env->fd_array = make_bpfptr(attr->fd_array, uattr.is_kernel);
     is_priv = bpf_capable();
 
+    /* the program could already have referenced some maps */
+    if (env->prog->aux->used_map_cnt) {
+        if (WARN_ON(env->prog->aux->used_map_cnt > MAX_USED_MAPS ||
+                    !env->prog->aux->used_maps))
+            return -EFAULT;
+
+        memcpy(env->used_maps, env->prog->aux->used_maps,
+               sizeof(env->used_maps[0]) * env->prog->aux->used_map_cnt);
+        env->used_map_cnt = env->prog->aux->used_map_cnt;
+
+        kfree(env->prog->aux->used_maps);
+        env->prog->aux->used_map_cnt = 0;
+        env->prog->aux->used_maps = NULL;
+    }
+
     bpf_get_btf_vmlinux();
 
     /* grab the mutex to protect few globals used by verifier */
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 0f6cdf52b1da..2d3cf9175cf9 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1325,6 +1325,9 @@ enum {
 /* Get path from provided FD in BPF_OBJ_PIN/BPF_OBJ_GET commands */
     BPF_F_PATH_FD = (1U << 14),
+
+/* Treat this map as a BPF Static Key */
+    BPF_F_STATIC_KEY = (1U << 15),
 };
 
 /* Flags for BPF_PROG_QUERY. */
@@ -1369,6 +1372,18 @@ struct bpf_stack_build_id {
 
 #define BPF_OBJ_NAME_LEN 16U
 
+/* flags for bpf_static_branch_info */
+enum {
+    BPF_F_INVERSE_BRANCH = 1,
+};
+
+struct bpf_static_branch_info {
+    __u32 map_fd;      /* map in control */
+    __u32 insn_offset; /* absolute offset of the branch instruction */
+    __u32 jump_target; /* absolute offset of the jump target */
+    __u32 flags;
+};
+
 union bpf_attr {
     struct { /* anonymous struct used by BPF_MAP_CREATE command */
         __u32 map_type; /* one of enum bpf_map_type */
@@ -1467,6 +1482,9 @@ union bpf_attr {
          * truncated), or smaller (if log buffer wasn't filled completely).
          */
         __u32 log_true_size;
+        /* An array of struct bpf_static_branch_info */
+        __aligned_u64 static_branches_info;
+        __u32 static_branches_info_size;
     };
 
     struct { /* anonymous struct used by BPF_OBJ_* commands */
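As an instruction-level illustration of the above (a sketch, not part of
the patch): consider a normal static branch whose JA sits at byte offset 80
(instruction 10) and whose jump target is byte offset 112 (instruction 14),
so jump_offset = (112 - 80) / 8 - 1 = 3, matching
init_static_jump_instruction(). Using the kernel's BPF_JMP_A() insn macro
from linux/filter.h, the two forms the kernel toggles between are:

    #include <linux/filter.h>

    struct bpf_insn branch_off = BPF_JMP_A(0); /* key == 0: JA +0, falls through to insn 11 */
    struct bpf_insn branch_on  = BPF_JMP_A(3); /* key != 0: JA +3, jumps to insn 14 */

An inverse branch is loaded in the branch_on form and is patched to JA +0
when the key becomes non-zero.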
From patchwork Wed Dec 6 14:10:28 2023
From: Anton Protopopov
To: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Jiri Olsa, Martin KaFai Lau, Stanislav Fomichev, bpf@vger.kernel.org
Subject: [PATCH bpf-next 5/7] bpf: x86: implement static keys support
Date: Wed, 6 Dec 2023 14:10:28 +0000
Message-Id: <20231206141030.1478753-6-aspsk@isovalent.com>

Implement x86 JIT support for BPF Static Keys: while JITing code, when the
compiler encounters a JA instruction it checks whether there is a
corresponding static branch. If there is, it saves the corresponding x86
address in the static branch structure.

Signed-off-by: Anton Protopopov
---
 arch/x86/net/bpf_jit_comp.c | 72 +++++++++++++++++++++++++++++++++++++
 1 file changed, 72 insertions(+)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 8c10d9abc239..4e8ed43bd03d 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -452,6 +452,32 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
     return __bpf_arch_text_poke(ip, t, old_addr, new_addr);
 }
 
+int bpf_arch_poke_static_branch(struct bpf_prog *prog,
+                                struct bpf_static_branch *branch, bool on)
+{
+    static const u64 bpf_nop = BPF_JMP | BPF_JA;
+    const void *arch_op;
+    const void *bpf_op;
+    bool inverse;
+
+    if (!prog || !branch)
+        return -EINVAL;
+
+    inverse = !!(branch->flags & BPF_F_INVERSE_BRANCH);
+    if (on ^ inverse) {
+        bpf_op = branch->bpf_jmp;
+        arch_op = branch->arch_jmp;
+    } else {
+        bpf_op = &bpf_nop;
+        arch_op = branch->arch_nop;
+    }
+
+    text_poke_bp(branch->arch_addr, arch_op, branch->arch_len, NULL);
+    memcpy(&prog->insnsi[branch->bpf_offset / 8], bpf_op, 8);
+
+    return 0;
+}
+
 #define EMIT_LFENCE()	EMIT3(0x0F, 0xAE, 0xE8)
 
 static void emit_indirect_jump(u8 **pprog, int reg, u8 *ip)
@@ -1008,6 +1034,32 @@ static void emit_nops(u8 **pprog, int len)
     *pprog = prog;
 }
 
+static __always_inline void copy_nops(u8 *dst, int len)
+{
+    BUILD_BUG_ON(len != 2 && len != 5);
+    memcpy(dst, x86_nops[len], len);
+}
+
+static __always_inline void
+arch_init_static_branch(struct bpf_static_branch *branch,
+                        int len, u32 jmp_offset, void *addr)
+{
+    BUILD_BUG_ON(len != 2 && len != 5);
+
+    if (len == 2) {
+        branch->arch_jmp[0] = 0xEB;
+        branch->arch_jmp[1] = jmp_offset;
+    } else {
+        branch->arch_jmp[0] = 0xE9;
+        memcpy(&branch->arch_jmp[1], &jmp_offset, 4);
+    }
+
+    copy_nops(branch->arch_nop, len);
+
+    branch->arch_len = len;
+    branch->arch_addr = addr;
+}
+
 /* emit the 3-byte VEX prefix
  *
  * r: same as rex.r, extra bit for ModRM reg field
@@ -1078,6 +1130,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 {
     bool tail_call_reachable = bpf_prog->aux->tail_call_reachable;
     struct bpf_insn *insn = bpf_prog->insnsi;
+    struct bpf_static_branch *branch = NULL;
     bool callee_regs_used[4] = {};
     int insn_cnt = bpf_prog->len;
     bool tail_call_seen = false;
@@ -1928,6 +1981,16 @@ st:			if (is_imm8(insn->off))
                 break;
             }
 emit_jmp:
+            if (bpf_prog->aux->static_branches_len > 0 && bpf_prog->aux->func_info) {
+                int off, idx;
+
+                idx = bpf_prog->aux->func_idx;
+                off = bpf_prog->aux->func_info[idx].insn_off + i - 1;
+                branch = bpf_static_branch_by_offset(bpf_prog, off * 8);
+            } else {
+                branch = bpf_static_branch_by_offset(bpf_prog, (i - 1) * 8);
+            }
+
             if (is_imm8(jmp_offset)) {
                 if (jmp_padding) {
                     /* To avoid breaking jmp_offset, the extra bytes
@@ -1950,8 +2013,17 @@ st:			if (is_imm8(insn->off))
                     }
                     emit_nops(&prog, INSN_SZ_DIFF - 2);
                 }
+
+                if (branch)
+                    arch_init_static_branch(branch, 2, jmp_offset,
+                                            image + addrs[i-1]);
+
                 EMIT2(0xEB, jmp_offset);
             } else if (is_simm32(jmp_offset)) {
+                if (branch)
+                    arch_init_static_branch(branch, 5, jmp_offset,
+                                            image + addrs[i-1]);
+
                 EMIT1_off32(0xE9, jmp_offset);
             } else {
                 pr_err("jmp gen bug %llx\n", jmp_offset);
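A standalone sketch (not kernel code) of what arch_init_static_branch()
stores, using the well-known x86 2- and 5-byte encodings; the exact nop
bytes in the kernel's x86_nops[] table may differ:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Build the jump/nop byte pairs the same way arch_init_static_branch()
     * does; text_poke_bp() later writes one of the two over the other.
     * Assumes a little-endian host, like x86 itself.
     */
    static void init_branch(uint8_t jmp[8], uint8_t nop[8], int len, uint32_t off)
    {
        static const uint8_t nop2[] = { 0x66, 0x90 };                   /* xchg %ax,%ax */
        static const uint8_t nop5[] = { 0x0f, 0x1f, 0x44, 0x00, 0x00 }; /* nopl 0x0(%rax,%rax,1) */

        if (len == 2) {
            jmp[0] = 0xEB;              /* short JMP rel8 */
            jmp[1] = (uint8_t)off;
            memcpy(nop, nop2, 2);
        } else {
            jmp[0] = 0xE9;              /* near JMP rel32 */
            memcpy(&jmp[1], &off, 4);
            memcpy(nop, nop5, 5);
        }
    }

    int main(void)
    {
        uint8_t jmp[8] = { 0 }, nop[8] = { 0 };

        init_branch(jmp, nop, 5, 0x11223344);
        printf("%02x %02x %02x %02x %02x\n",    /* e9 44 33 22 11 */
               jmp[0], jmp[1], jmp[2], jmp[3], jmp[4]);
        return 0;
    }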
From patchwork Wed Dec 6 14:10:29 2023
From: Anton Protopopov
To: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Jiri Olsa, Martin KaFai Lau, Stanislav Fomichev, bpf@vger.kernel.org
Subject: [PATCH bpf-next 6/7] libbpf: BPF Static Keys support
Date: Wed, 6 Dec 2023 14:10:29 +0000
Message-Id: <20231206141030.1478753-7-aspsk@isovalent.com>

Introduce the DEFINE_STATIC_KEY() and bpf_static_branch_{unlikely,likely}()
macros to mimic the Linux kernel static keys API in BPF. Example usage:

    DEFINE_STATIC_KEY(key);

    void prog(void)
    {
        if (bpf_static_branch_unlikely(&key))
            /* rarely used code */
        else
            /* default hot path code */
    }

or, using the likely variant:

    void prog2(void)
    {
        if (bpf_static_branch_likely(&key))
            /* default hot path code */
        else
            /* rarely used code */
    }

The "unlikely" version of the macro compiles to code where the else-branch
(key is off) is the fall-through; the "likely" macro prioritises the
if-branch. Both macros push an entry into a new ".jump_table" section which
contains the following information:

    32 bits                      32 bits                 64 bits
    offset of jump instruction | offset of jump target | flags

The corresponding ".rel.jump_table" relocation table entry contains the
base section name and the static key (map) name. The bigger portion of this
patch deals with parsing, relocating and sending this information to the
kernel via the static_branches_info and static_branches_info_size
attributes of the BPF_PROG_LOAD syscall.

The same key may be used multiple times in one program and can be used by
multiple BPF programs. BPF doesn't guarantee the order in which the static
branches controlled by one key are patched.
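For completeness, a hedged sketch of the userspace side (not part of the
patch): toggling a key defined with DEFINE_STATIC_KEY(key) in an
already-loaded object; the object handle and map name are whatever the
application uses:

    #include <bpf/libbpf.h>
    #include <bpf/bpf.h>

    /* Flip the static key map named "key"; writing 1/0 to its single
     * element patches every branch controlled by this key, in all
     * programs that use it (in no guaranteed order, as noted above).
     */
    static int set_static_key(struct bpf_object *obj, bool on)
    {
        struct bpf_map *map = bpf_object__find_map_by_name(obj, "key");
        __u32 k = 0, v = on;

        if (!map)
            return -ENOENT;

        return bpf_map_update_elem(bpf_map__fd(map), &k, &v, BPF_ANY);
    }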
*/ __u32 log_true_size; + struct bpf_static_branch_info *static_branches_info; + __u32 static_branches_info_size; size_t :0; }; -#define bpf_prog_load_opts__last_field log_true_size +#define bpf_prog_load_opts__last_field static_branches_info_size LIBBPF_API int bpf_prog_load(enum bpf_prog_type prog_type, const char *prog_name, const char *license, diff --git a/tools/lib/bpf/bpf_helpers.h b/tools/lib/bpf/bpf_helpers.h index 77ceea575dc7..e3bfa0697304 100644 --- a/tools/lib/bpf/bpf_helpers.h +++ b/tools/lib/bpf/bpf_helpers.h @@ -400,4 +400,68 @@ extern void bpf_iter_num_destroy(struct bpf_iter_num *it) __weak __ksym; ) #endif /* bpf_repeat */ +#define DEFINE_STATIC_KEY(NAME) \ + struct { \ + __uint(type, BPF_MAP_TYPE_ARRAY); \ + __type(key, __u32); \ + __type(value, __u32); \ + __uint(map_flags, BPF_F_STATIC_KEY); \ + __uint(max_entries, 1); \ + } NAME SEC(".maps") + +#ifndef likely +#define likely(x) (__builtin_expect(!!(x), 1)) +#endif + +#ifndef unlikely +#define unlikely(x) (__builtin_expect(!!(x), 0)) +#endif + +static __always_inline int __bpf_static_branch_nop(void *static_key) +{ + asm goto("1:\n\t" + "goto +0\n\t" + ".pushsection .jump_table, \"aw\"\n\t" + ".balign 8\n\t" + ".long 1b - .\n\t" + ".long %l[l_yes] - .\n\t" + ".quad %c0 - .\n\t" + ".popsection\n\t" + :: "i" (static_key) + :: l_yes); + return 0; +l_yes: + return 1; +} + +static __always_inline int __bpf_static_branch_jump(void *static_key) +{ + asm goto("1:\n\t" + "goto %l[l_yes]\n\t" + ".pushsection .jump_table, \"aw\"\n\t" + ".balign 8\n\t" + ".long 1b - .\n\t" + ".long %l[l_yes] - .\n\t" + ".quad %c0 - . + 1\n\t" + ".popsection\n\t" + :: "i" (static_key) + :: l_yes); + return 0; +l_yes: + return 1; +} + +/* + * The bpf_static_branch_{unlikely,likely} macros provide a way to use BPF + * Static Keys in BPF programs in exactly the same manner as in the Linux + * kernel. The "unlikely" macro compiles to code in which the else-branch + * (key is off) is the fall-through path; the "likely" macro prioritises + * the if-branch. + */ + +#define bpf_static_branch_unlikely(static_key) \ + unlikely(__bpf_static_branch_nop(static_key)) + +#define bpf_static_branch_likely(static_key) \ + likely(!__bpf_static_branch_jump(static_key)) + #endif diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index e067be95da3c..92620717abda 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -391,6 +391,13 @@ struct bpf_sec_def { libbpf_prog_attach_fn_t prog_attach_fn; }; +struct static_branch_info { + struct bpf_map *map; + __u32 insn_offset; + __u32 jump_target; + __u32 flags; +}; + /* * bpf_prog should be a better name but it has been used in * linux/filter.h. */
@@ -463,6 +470,9 @@ struct bpf_program { __u32 line_info_rec_size; __u32 line_info_cnt; __u32 prog_flags; + + struct static_branch_info *static_branches_info; + __u32 static_branches_info_size; }; struct bpf_struct_ops { @@ -493,6 +503,7 @@ struct bpf_struct_ops { #define KSYMS_SEC ".ksyms" #define STRUCT_OPS_SEC ".struct_ops" #define STRUCT_OPS_LINK_SEC ".struct_ops.link" +#define STATIC_JUMPS_SEC ".jump_table" enum libbpf_map_type { LIBBPF_MAP_UNSPEC, @@ -624,6 +635,7 @@ struct elf_state { Elf_Data *symbols; Elf_Data *st_ops_data; Elf_Data *st_ops_link_data; + Elf_Data *static_branches_data; size_t shstrndx; /* section index for section name strings */ size_t strtabidx; struct elf_sec_desc *secs; @@ -634,6 +646,7 @@ struct elf_state { int symbols_shndx; int st_ops_shndx; int st_ops_link_shndx; + int static_branches_shndx; }; struct usdt_manager; @@ -715,6 +728,7 @@ void bpf_program__unload(struct bpf_program *prog) zfree(&prog->func_info); zfree(&prog->line_info); + zfree(&prog->static_branches_info); } static void bpf_program__exit(struct bpf_program *prog) @@ -3605,6 +3619,9 @@ static int bpf_object__elf_collect(struct bpf_object *obj) } else if (strcmp(name, STRUCT_OPS_LINK_SEC) == 0) { obj->efile.st_ops_link_data = data; obj->efile.st_ops_link_shndx = idx; + } else if (strcmp(name, STATIC_JUMPS_SEC) == 0) { + obj->efile.static_branches_data = data; + obj->efile.static_branches_shndx = idx; } else { pr_info("elf: skipping unrecognized data section(%d) %s\n", idx, name); @@ -3620,7 +3637,8 @@ static int bpf_object__elf_collect(struct bpf_object *obj) if (!section_have_execinstr(obj, targ_sec_idx) && strcmp(name, ".rel" STRUCT_OPS_SEC) && strcmp(name, ".rel" STRUCT_OPS_LINK_SEC) && - strcmp(name, ".rel" MAPS_ELF_SEC)) { + strcmp(name, ".rel" MAPS_ELF_SEC) && + strcmp(name, ".rel" STATIC_JUMPS_SEC)) { pr_info("elf: skipping relo section(%d) %s for section(%d) %s\n", idx, name, targ_sec_idx, elf_sec_name(obj, elf_sec_by_idx(obj, targ_sec_idx)) ?: ""); @@ -4422,6 +4440,189 @@ bpf_object__collect_prog_relos(struct bpf_object *obj, Elf64_Shdr *shdr, Elf_Dat return 0; } +struct jump_table_entry { + __u32 insn_offset; + __u32 jump_target; + union { + __u64 map_ptr; /* map_ptr is always zero, as it is relocated */ + __u64 flags; /* so we can reuse it to store flags */ + }; +}; + +static struct bpf_program *shndx_to_prog(struct bpf_object *obj, + size_t sec_idx, + struct jump_table_entry *entry) +{ + __u32 insn_offset = entry->insn_offset / 8; + __u32 jump_target = entry->jump_target / 8; + struct bpf_program *prog; + size_t i; + + for (i = 0; i < obj->nr_programs; i++) { + prog = &obj->programs[i]; + if (prog->sec_idx != sec_idx) + continue; + + if (insn_offset < prog->sec_insn_off || + insn_offset >= prog->sec_insn_off + prog->sec_insn_cnt) + continue; + + if (jump_target < prog->sec_insn_off || + jump_target >= prog->sec_insn_off + prog->sec_insn_cnt) { + pr_warn("static branch: offset %u is within program boundaries, target %u is not\n", + insn_offset, jump_target); + return NULL; + } + + return prog; + } + + return NULL; +} + +static struct bpf_program *find_prog_for_jump_entry(struct bpf_object *obj, + int nrels, + Elf_Data *relo_data, + __u32 entry_offset, + struct jump_table_entry *entry) +{ + struct bpf_program *prog; + Elf64_Rel *rel; + Elf64_Sym *sym; + int i; + + for (i = 0; i < nrels; i++) { + rel = elf_rel_by_idx(relo_data, i); + if (!rel) { + pr_warn("static branch: relo #%d: failed to get ELF relo\n", i); + return ERR_PTR(-LIBBPF_ERRNO__FORMAT); + } + + if ((__u32)rel->r_offset !=
entry_offset) + continue; + + sym = elf_sym_by_idx(obj, ELF64_R_SYM(rel->r_info)); + if (!sym) { + pr_warn("static branch: relo #%d: symbol %zx not found\n", + i, (size_t)ELF64_R_SYM(rel->r_info)); + return ERR_PTR(-LIBBPF_ERRNO__FORMAT); + } + + prog = shndx_to_prog(obj, sym->st_shndx, entry); + if (!prog) { + pr_warn("static branch: relo #%d: program %zx not found\n", + i, (size_t)sym->st_shndx); + return ERR_PTR(-LIBBPF_ERRNO__FORMAT); + } + return prog; + } + return ERR_PTR(-LIBBPF_ERRNO__FORMAT); +} + +static struct bpf_map *find_map_for_jump_entry(struct bpf_object *obj, + int nrels, + Elf_Data *relo_data, + __u32 entry_offset) +{ + struct bpf_map *map; + const char *name; + Elf64_Rel *rel; + Elf64_Sym *sym; + int i; + + for (i = 0; i < nrels; i++) { + rel = elf_rel_by_idx(relo_data, i); + if (!rel) { + pr_warn("static branch: relo #%d: failed to get ELF relo\n", i); + return NULL; + } + + if ((__u32)rel->r_offset != entry_offset) + continue; + + sym = elf_sym_by_idx(obj, ELF64_R_SYM(rel->r_info)); + if (!sym) { + pr_warn("static branch: relo #%d: symbol %zx not found\n", + i, (size_t)ELF64_R_SYM(rel->r_info)); + return NULL; + } + + name = elf_sym_str(obj, sym->st_name) ?: ""; + if (!name || !strcmp(name, "")) { + pr_warn("static branch: relo #%d: symbol name is zero or empty\n", i); + return NULL; + } + + map = bpf_object__find_map_by_name(obj, name); + if (!map) + return NULL; + return map; + } + return NULL; +} + +static int add_static_branch(struct bpf_program *prog, + struct jump_table_entry *entry, + struct bpf_map *map) +{ + __u32 size_old = prog->static_branches_info_size; + __u32 size_new = size_old + sizeof(struct static_branch_info); + struct static_branch_info *info; + void *x; + + x = realloc(prog->static_branches_info, size_new); + if (!x) + return -ENOMEM; + + info = x + size_old; + info->insn_offset = entry->insn_offset - prog->sec_insn_off * 8; + info->jump_target = entry->jump_target - prog->sec_insn_off * 8; + info->flags = (__u32) entry->flags; + info->map = map; + + prog->static_branches_info = x; + prog->static_branches_info_size = size_new; + + return 0; +} + +static int +bpf_object__collect_static_branches_relos(struct bpf_object *obj, + Elf64_Shdr *shdr, + Elf_Data *relo_data) +{ + Elf_Data *branches_data = obj->efile.static_branches_data; + int nrels = shdr->sh_size / shdr->sh_entsize; + struct jump_table_entry *entries; + size_t i; + int err; + + if (!branches_data) + return 0; + + entries = (void *)branches_data->d_buf; + for (i = 0; i < branches_data->d_size / sizeof(struct jump_table_entry); i++) { + __u32 entry_offset = i * sizeof(struct jump_table_entry); + struct bpf_program *prog; + struct bpf_map *map; + + prog = find_prog_for_jump_entry(obj, nrels, relo_data, entry_offset, &entries[i]); + if (IS_ERR(prog)) + return PTR_ERR(prog); + + map = find_map_for_jump_entry(obj, nrels, relo_data, + entry_offset + offsetof(struct jump_table_entry, map_ptr)); + if (!map) + return -EINVAL; + + err = add_static_branch(prog, &entries[i], map); + if (err) + return err; + } + + return 0; +} + static int map_fill_btf_type_info(struct bpf_object *obj, struct bpf_map *map) { int id; @@ -6298,10 +6499,44 @@ static struct reloc_desc *find_prog_insn_relo(const struct bpf_program *prog, si sizeof(*prog->reloc_desc), cmp_relo_by_insn_idx); } +static int append_subprog_static_branches(struct bpf_program *main_prog, + struct bpf_program *subprog) +{ + size_t subprog_size = subprog->static_branches_info_size; + size_t main_size = main_prog->static_branches_info_size; + size_t
entry_size = sizeof(struct static_branch_info); + void *old_info = main_prog->static_branches_info; + int n_entries = subprog_size / entry_size; + struct static_branch_info *branch; + void *new_info; + int i; + + if (!subprog_size) + return 0; + + new_info = realloc(old_info, subprog_size + main_size); + if (!new_info) + return -ENOMEM; + + memcpy(new_info + main_size, subprog->static_branches_info, subprog_size); + + for (i = 0; i < n_entries; i++) { + branch = new_info + main_size + i * entry_size; + branch->insn_offset += subprog->sub_insn_off * 8; + branch->jump_target += subprog->sub_insn_off * 8; + } + + main_prog->static_branches_info = new_info; + main_prog->static_branches_info_size += subprog_size; + + return 0; +} + static int append_subprog_relos(struct bpf_program *main_prog, struct bpf_program *subprog) { int new_cnt = main_prog->nr_reloc + subprog->nr_reloc; struct reloc_desc *relos; + int err; int i; if (main_prog == subprog) @@ -6324,6 +6559,11 @@ static int append_subprog_relos(struct bpf_program *main_prog, struct bpf_progra */ main_prog->reloc_desc = relos; main_prog->nr_reloc = new_cnt; + + err = append_subprog_static_branches(main_prog, subprog); + if (err) + return err; + return 0; } @@ -6879,6 +7119,8 @@ static int bpf_object__collect_relos(struct bpf_object *obj) err = bpf_object__collect_st_ops_relos(obj, shdr, data); else if (idx == obj->efile.btf_maps_shndx) err = bpf_object__collect_map_relos(obj, shdr, data); + else if (idx == obj->efile.static_branches_shndx) + err = bpf_object__collect_static_branches_relos(obj, shdr, data); else err = bpf_object__collect_prog_relos(obj, shdr, data); if (err) @@ -7002,6 +7244,30 @@ static int libbpf_prepare_prog_load(struct bpf_program *prog, static void fixup_verifier_log(struct bpf_program *prog, char *buf, size_t buf_sz); +static struct bpf_static_branch_info * +convert_branch_info(struct static_branch_info *info, size_t size) +{ + size_t n = size/sizeof(struct static_branch_info); + struct bpf_static_branch_info *bpf_info; + size_t i; + + if (!info) + return NULL; + + bpf_info = calloc(n, sizeof(struct bpf_static_branch_info)); + if (!bpf_info) + return NULL; + + for (i = 0; i < n; i++) { + bpf_info[i].insn_offset = info[i].insn_offset; + bpf_info[i].jump_target = info[i].jump_target; + bpf_info[i].flags = info[i].flags; + bpf_info[i].map_fd = info[i].map->fd; + } + + return bpf_info; +} + static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt, const char *license, __u32 kern_version, int *prog_fd) @@ -7106,6 +7372,11 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog load_attr.log_size = log_buf_size; load_attr.log_level = log_level; + load_attr.static_branches_info = convert_branch_info(prog->static_branches_info, + prog->static_branches_info_size); + load_attr.static_branches_info_size = prog->static_branches_info_size / + sizeof(struct static_branch_info) * sizeof(struct bpf_static_branch_info); + ret = bpf_prog_load(prog->type, prog_name, license, insns, insns_cnt, &load_attr); if (ret >= 0) { if (log_level && own_log_buf) { diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h index f0f08635adb0..62020e7a58b0 100644 --- a/tools/lib/bpf/libbpf_internal.h +++ b/tools/lib/bpf/libbpf_internal.h @@ -40,6 +40,9 @@ #ifndef R_BPF_64_ABS32 #define R_BPF_64_ABS32 3 #endif +#ifndef R_BPF_64_NODYLD32 +#define R_BPF_64_NODYLD32 4 +#endif #ifndef R_BPF_64_32 #define R_BPF_64_32 10 #endif diff --git 
a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c index 5ced96d99f8c..47b343e2813e 100644 --- a/tools/lib/bpf/linker.c +++ b/tools/lib/bpf/linker.c @@ -22,6 +22,7 @@ #include "strset.h" #define BTF_EXTERN_SEC ".extern" +#define STATIC_JUMPS_REL_SEC ".rel.jump_table" struct src_sec { const char *sec_name; @@ -888,8 +889,9 @@ static int linker_sanity_check_elf_relos(struct src_obj *obj, struct src_sec *se size_t sym_type = ELF64_R_TYPE(relo->r_info); if (sym_type != R_BPF_64_64 && sym_type != R_BPF_64_32 && - sym_type != R_BPF_64_ABS64 && sym_type != R_BPF_64_ABS32) { - pr_warn("ELF relo #%d in section #%zu has unexpected type %zu in %s\n", + sym_type != R_BPF_64_ABS64 && sym_type != R_BPF_64_ABS32 && + sym_type != R_BPF_64_NODYLD32 && strcmp(sec->sec_name, STATIC_JUMPS_REL_SEC)) { + pr_warn("ELF relo #%d in section #%zu has unexpected type %zu in %s\n", i, sec->sec_idx, sym_type, obj->filename); return -EINVAL; } @@ -2087,7 +2089,7 @@ static int linker_append_elf_relos(struct bpf_linker *linker, struct src_obj *ob insn->imm += sec->dst_off / sizeof(struct bpf_insn); else insn->imm += sec->dst_off; - } else { + } else if (strcmp(src_sec->sec_name, STATIC_JUMPS_REL_SEC)) { pr_warn("relocation against STT_SECTION in non-exec section is not supported!\n"); return -EINVAL; }
From patchwork Wed Dec 6 14:10:30 2023
From: Anton Protopopov
To: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Jiri Olsa, Martin KaFai Lau, Stanislav Fomichev, bpf@vger.kernel.org
Cc: Anton Protopopov
Subject: [PATCH bpf-next 7/7] selftests/bpf: Add tests for BPF Static Keys
Date: Wed, 6 Dec 2023 14:10:30 +0000
Message-Id: <20231206141030.1478753-8-aspsk@isovalent.com>
In-Reply-To: <20231206141030.1478753-1-aspsk@isovalent.com>
References: <20231206141030.1478753-1-aspsk@isovalent.com>

Add several selftests for the new BPF Static Keys feature:

* check that one key works for one program
* check that one key works for multiple programs
* check that static keys work with 2-byte and 5-byte jumps
* check that multiple keys work for one program
* check that static keys work for the base program and a BPF-to-BPF call
* check that static keys can't be used as normal maps
* check that passing incorrect parameters on map creation fails
* check that passing incorrect parameters on program load fails

Signed-off-by: Anton Protopopov
---
MAINTAINERS | 1 + .../bpf/prog_tests/bpf_static_keys.c | 436 ++++++++++++++++++ .../selftests/bpf/progs/bpf_static_keys.c | 120 +++++ 3 files changed, 557 insertions(+) create mode 100644 tools/testing/selftests/bpf/prog_tests/bpf_static_keys.c create mode 100644 tools/testing/selftests/bpf/progs/bpf_static_keys.c diff --git a/MAINTAINERS b/MAINTAINERS index e2f655980c6c..81a040d66af6 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3892,6 +3892,7 @@ M: Anton Protopopov L: bpf@vger.kernel.org S: Maintained F: kernel/bpf/skey.c +F: tools/testing/selftests/bpf/*/*bpf_static_key* BROADCOM ASP 2.0 ETHERNET DRIVER M: Justin Chen diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_static_keys.c b/tools/testing/selftests/bpf/prog_tests/bpf_static_keys.c new file mode 100644 index 000000000000..37b2da247869 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/bpf_static_keys.c @@ -0,0 +1,436 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Isovalent */ + +#include <test_progs.h> +#include "bpf_static_keys.skel.h" + +#define set_static_key(map_fd, val) \ + do { \ + __u32 map_value = (val); \ + __u32 zero_key = 0; \ + int ret; \ + \ + ret = bpf_map_update_elem(map_fd, &zero_key, &map_value, 0); \ + ASSERT_EQ(ret, 0, "bpf_map_update_elem"); \ + } while (0) + +static void check_one_key(struct bpf_static_keys *skel) +{ + struct bpf_link *link; + int map_fd; + + link = bpf_program__attach(skel->progs.check_one_key); + if (!ASSERT_OK_PTR(link, "link")) + return; + + map_fd = bpf_map__fd(skel->maps.key1); + ASSERT_GT(map_fd, 0, "skel->maps.key1"); + + set_static_key(map_fd, 0); + skel->bss->ret_user = 0; + usleep(1); + ASSERT_EQ(skel->bss->ret_user, 4, "skel->bss->ret_user"); + + set_static_key(map_fd, 1); + skel->bss->ret_user = 0; + usleep(1); +
ASSERT_EQ(skel->bss->ret_user, 3, "skel->bss->ret_user"); + + bpf_link__destroy(link); +} + +static void check_multiple_progs(struct bpf_static_keys *skel) +{ + struct bpf_link *link1; + struct bpf_link *link2; + struct bpf_link *link3; + int map_fd; + + link1 = bpf_program__attach(skel->progs.check_one_key); + if (!ASSERT_OK_PTR(link1, "link1")) + return; + + link2 = bpf_program__attach(skel->progs.check_one_key_another_prog); + if (!ASSERT_OK_PTR(link2, "link2")) + return; + + link3 = bpf_program__attach(skel->progs.check_one_key_yet_another_prog); + if (!ASSERT_OK_PTR(link3, "link3")) + return; + + map_fd = bpf_map__fd(skel->maps.key1); + ASSERT_GT(map_fd, 0, "skel->maps.key1"); + + set_static_key(map_fd, 0); + skel->bss->ret_user = 0; + usleep(1); + ASSERT_EQ(skel->bss->ret_user, 444, "skel->bss->ret_user"); + usleep(1); + ASSERT_EQ(skel->bss->ret_user, 888, "skel->bss->ret_user"); + + set_static_key(map_fd, 1); + skel->bss->ret_user = 0; + usleep(1); + ASSERT_EQ(skel->bss->ret_user, 333, "skel->bss->ret_user"); + usleep(1); + ASSERT_EQ(skel->bss->ret_user, 666, "skel->bss->ret_user"); + + bpf_link__destroy(link3); + bpf_link__destroy(link2); + bpf_link__destroy(link1); +} + +static void check_multiple_keys(struct bpf_static_keys *skel) +{ + struct bpf_link *link; + int map_fd1; + int map_fd2; + int map_fd3; + int i; + + link = bpf_program__attach(skel->progs.check_multiple_keys_unlikely); + if (!ASSERT_OK_PTR(link, "link")) + return; + + map_fd1 = bpf_map__fd(skel->maps.key1); + ASSERT_GT(map_fd1, 0, "skel->maps.key1"); + + map_fd2 = bpf_map__fd(skel->maps.key2); + ASSERT_GT(map_fd2, 0, "skel->maps.key2"); + + map_fd3 = bpf_map__fd(skel->maps.key3); + ASSERT_GT(map_fd3, 0, "skel->maps.key3"); + + for (i = 0; i < 8; i++) { + set_static_key(map_fd1, i & 1); + set_static_key(map_fd2, i & 2); + set_static_key(map_fd3, i & 4); + + usleep(1); + ASSERT_EQ(skel->bss->ret_user, i, "skel->bss->ret_user"); + } + + bpf_link__destroy(link); +} + +static void check_one_key_long_jump(struct bpf_static_keys *skel) +{ + struct bpf_link *link; + int map_fd; + + link = bpf_program__attach(skel->progs.check_one_key_long_jump); + if (!ASSERT_OK_PTR(link, "link")) + return; + + map_fd = bpf_map__fd(skel->maps.key1); + ASSERT_GT(map_fd, 0, "skel->maps.key1"); + + set_static_key(map_fd, 0); + skel->bss->ret_user = 0; + usleep(1); + ASSERT_EQ(skel->bss->ret_user, 2256, "skel->bss->ret_user"); + + set_static_key(map_fd, 1); + skel->bss->ret_user = 0; + usleep(1); + ASSERT_EQ(skel->bss->ret_user, 1256, "skel->bss->ret_user"); + + bpf_link__destroy(link); +} + +static void check_bpf_to_bpf_call(struct bpf_static_keys *skel) +{ + struct bpf_link *link; + int map_fd1; + int map_fd2; + + link = bpf_program__attach(skel->progs.check_bpf_to_bpf_call); + if (!ASSERT_OK_PTR(link, "link")) + return; + + map_fd1 = bpf_map__fd(skel->maps.key1); + ASSERT_GT(map_fd1, 0, "skel->maps.key1"); + + map_fd2 = bpf_map__fd(skel->maps.key2); + ASSERT_GT(map_fd2, 0, "skel->maps.key2"); + + set_static_key(map_fd1, 0); + set_static_key(map_fd2, 0); + skel->bss->ret_user = 0; + usleep(1); + ASSERT_EQ(skel->bss->ret_user, 0, "skel->bss->ret_user"); + + set_static_key(map_fd1, 1); + set_static_key(map_fd2, 0); + skel->bss->ret_user = 0; + usleep(1); + ASSERT_EQ(skel->bss->ret_user, 101, "skel->bss->ret_user"); + + set_static_key(map_fd1, 0); + set_static_key(map_fd2, 1); + skel->bss->ret_user = 0; + usleep(1); + ASSERT_EQ(skel->bss->ret_user, 1010, "skel->bss->ret_user"); + + set_static_key(map_fd1, 1); + set_static_key(map_fd2, 1); + 
skel->bss->ret_user = 0; + usleep(1); + ASSERT_EQ(skel->bss->ret_user, 1111, "skel->bss->ret_user"); + + bpf_link__destroy(link); +} + +#define FIXED_MAP_FD 666 + +static void check_use_key_as_map(struct bpf_static_keys *skel) +{ + struct bpf_insn insns[] = { + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), + BPF_LD_MAP_FD(BPF_REG_1, FIXED_MAP_FD), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }; + union bpf_attr attr = { + .prog_type = BPF_PROG_TYPE_XDP, + .insns = ptr_to_u64(insns), + .insn_cnt = ARRAY_SIZE(insns), + .license = ptr_to_u64("GPL"), + }; + int map_fd; + int ret; + + /* first check that the prog loads ok */ + + map_fd = bpf_map__fd(skel->maps.just_map); + ASSERT_GT(map_fd, 0, "skel->maps.just_map"); + + ret = dup2(map_fd, FIXED_MAP_FD); + ASSERT_EQ(ret, FIXED_MAP_FD, "dup2"); + + strncpy(attr.prog_name, "prog", sizeof(attr.prog_name)); + ret = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr)); + ASSERT_GT(ret, 0, "BPF_PROG_LOAD"); + close(ret); + close(FIXED_MAP_FD); + + /* now the incorrect map (static key as normal map) */ + + map_fd = bpf_map__fd(skel->maps.key1); + ASSERT_GT(map_fd, 0, "skel->maps.key1"); + + ret = dup2(map_fd, FIXED_MAP_FD); + ASSERT_EQ(ret, FIXED_MAP_FD, "dup2"); + + ret = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr)); + ASSERT_EQ(ret, -1, "BPF_PROG_LOAD"); + ASSERT_EQ(errno, EINVAL, "BPF_PROG_LOAD"); + close(ret); + close(FIXED_MAP_FD); +} + +static void map_create_incorrect(void) +{ + union bpf_attr attr = { + .map_type = BPF_MAP_TYPE_ARRAY, + .key_size = 4, + .value_size = 4, + .max_entries = 1, + .map_flags = BPF_F_STATIC_KEY, + }; + int map_fd; + + /* The first call should be ok */ + + map_fd = syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr)); + ASSERT_GT(map_fd, 0, "BPF_MAP_CREATE"); + close(map_fd); + + /* All the remaining calls should fail */ + + attr.map_type = BPF_MAP_TYPE_HASH; + map_fd = syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr)); + ASSERT_EQ(map_fd, -1, "BPF_MAP_CREATE"); + attr.map_type = BPF_MAP_TYPE_ARRAY; + + attr.key_size = 8; + map_fd = syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr)); + ASSERT_EQ(map_fd, -1, "BPF_MAP_CREATE"); + attr.key_size = 4; + + attr.value_size = 8; + map_fd = syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr)); + ASSERT_EQ(map_fd, -1, "BPF_MAP_CREATE"); + attr.value_size = 4; + + attr.max_entries = 2; + map_fd = syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr)); + ASSERT_EQ(map_fd, -1, "BPF_MAP_CREATE"); + attr.max_entries = 1; +} + +static void prog_load_incorrect_branches(struct bpf_static_keys *skel) +{ + int key_fd, map_fd, prog_fd; + + /* + * KEY=OFF KEY=ON + * : + * 0: r0 = 0x0 r0 = 0x0 + * 1: goto +0x0 <1> goto +0x1 <2> + * <1>: + * 2: exit exit + * <2>: + * 3: r0 = 0x1 r0 = 0x1 + * 4: goto -0x3 <1> goto -0x3 <1> + */ + struct bpf_insn insns[] = { + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_JMP_IMM(BPF_JA, 0, 0, 0), + BPF_EXIT_INSN(), + BPF_MOV64_IMM(BPF_REG_0, 1), + BPF_JMP_IMM(BPF_JA, 0, 0, -3), + }; + struct bpf_static_branch_info static_branches_info[] = { + { + .map_fd = -1, + .insn_offset = 8, + .jump_target = 24, + .flags = 0, + }, + }; + union bpf_attr attr = { + .prog_type = BPF_PROG_TYPE_XDP, + .insns = ptr_to_u64(insns), + .insn_cnt = ARRAY_SIZE(insns), + .license = ptr_to_u64("GPL"), + .static_branches_info = ptr_to_u64(static_branches_info), + .static_branches_info_size = sizeof(static_branches_info), + }; + 
key_fd = bpf_map__fd(skel->maps.key1); + ASSERT_GT(key_fd, 0, "skel->maps.key1"); + + map_fd = bpf_map__fd(skel->maps.just_map); + ASSERT_GT(map_fd, 0, "skel->maps.just_map"); + + strncpy(attr.prog_name, "prog", sizeof(attr.prog_name)); + + /* The first two loads should be ok, correct parameters */ + + static_branches_info[0].map_fd = key_fd; + prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr)); + ASSERT_GT(prog_fd, 0, "BPF_PROG_LOAD"); + close(prog_fd); + + static_branches_info[0].flags = BPF_F_INVERSE_BRANCH; + insns[1] = BPF_JMP_IMM(BPF_JA, 0, 0, 1); /* inverse branch expects non-zero offset */ + prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr)); + ASSERT_GT(prog_fd, 0, "BPF_PROG_LOAD"); + close(prog_fd); + static_branches_info[0].flags = 0; + insns[1] = BPF_JMP_IMM(BPF_JA, 0, 0, 0); + + /* All other loads should fail with -EINVAL */ + + static_branches_info[0].map_fd = map_fd; + prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr)); + ASSERT_EQ(prog_fd, -1, "BPF_PROG_LOAD: incorrect map fd"); + ASSERT_EQ(errno, EINVAL, "BPF_PROG_LOAD: incorrect map fd"); + static_branches_info[0].map_fd = key_fd; + + attr.static_branches_info = 0; + prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr)); + ASSERT_EQ(prog_fd, -1, "BPF_PROG_LOAD: info is NULL, but size is not zero"); + ASSERT_EQ(errno, EINVAL, "BPF_PROG_LOAD: info is NULL, but size is not zero"); + attr.static_branches_info = ptr_to_u64(static_branches_info); + + attr.static_branches_info_size = 0; + prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr)); + ASSERT_EQ(prog_fd, -1, "BPF_PROG_LOAD: info is not NULL, but size is zero"); + ASSERT_EQ(errno, EINVAL, "BPF_PROG_LOAD: info is not NULL, but size is zero"); + attr.static_branches_info_size = sizeof(static_branches_info); + + attr.static_branches_info_size = 1; + prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr)); + ASSERT_EQ(prog_fd, -1, "BPF_PROG_LOAD: size not divisible by item size"); + ASSERT_EQ(errno, EINVAL, "BPF_PROG_LOAD: size not divisible by item size"); + attr.static_branches_info_size = sizeof(static_branches_info); + + static_branches_info[0].flags = 0xbeef; + prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr)); + ASSERT_EQ(prog_fd, -1, "BPF_PROG_LOAD: incorrect flags"); + ASSERT_EQ(errno, EINVAL, "BPF_PROG_LOAD: incorrect flags"); + static_branches_info[0].flags = 0; + + static_branches_info[0].insn_offset = 1; + prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr)); + ASSERT_EQ(prog_fd, -1, "BPF_PROG_LOAD: incorrect insn_offset"); + ASSERT_EQ(errno, EINVAL, "BPF_PROG_LOAD: incorrect insn_offset"); + static_branches_info[0].insn_offset = 8; + + static_branches_info[0].insn_offset = 64; + prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr)); + ASSERT_EQ(prog_fd, -1, "BPF_PROG_LOAD: insn_offset outside of program"); + ASSERT_EQ(errno, EINVAL, "BPF_PROG_LOAD: insn_offset outside of program"); + static_branches_info[0].insn_offset = 8; + + static_branches_info[0].jump_target = 1; + prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr)); + ASSERT_EQ(prog_fd, -1, "BPF_PROG_LOAD: incorrect jump_target"); + ASSERT_EQ(errno, EINVAL, "BPF_PROG_LOAD: incorrect jump_target"); + static_branches_info[0].jump_target = 8; + + static_branches_info[0].jump_target = 64; + prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr)); + ASSERT_EQ(prog_fd, -1, "BPF_PROG_LOAD: jump_target outside of program"); + ASSERT_EQ(errno, EINVAL, "BPF_PROG_LOAD: jump_target outside of program");
prgoram"); + static_branches_info[0].jump_target = 8; + + static_branches_info[0].insn_offset = 0; + prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr)); + ASSERT_EQ(prog_fd, -1, "BPF_PROG_LOAD: patching not a JA"); + ASSERT_EQ(errno, EINVAL, "BPF_PROG_LOAD: patching not a JA"); + static_branches_info[0].insn_offset = 8; +} + +void test_bpf_static_keys(void) +{ + struct bpf_static_keys *skel; + + skel = bpf_static_keys__open_and_load(); + if (!ASSERT_OK_PTR(skel, "bpf_static_keys__open_and_load")) + return; + + if (test__start_subtest("check_one_key")) + check_one_key(skel); + + if (test__start_subtest("check_multiple_keys")) + check_multiple_keys(skel); + + if (test__start_subtest("check_multiple_progs")) + check_multiple_progs(skel); + + if (test__start_subtest("check_one_key_long_jump")) + check_one_key_long_jump(skel); + + if (test__start_subtest("check_bpf_to_bpf_call")) + check_bpf_to_bpf_call(skel); + + /* Negative tests */ + + if (test__start_subtest("check_use_key_as_map")) + check_use_key_as_map(skel); + + if (test__start_subtest("map_create_incorrect")) + map_create_incorrect(); + + if (test__start_subtest("prog_load_incorrect_branches")) + prog_load_incorrect_branches(skel); + + bpf_static_keys__destroy(skel); +} diff --git a/tools/testing/selftests/bpf/progs/bpf_static_keys.c b/tools/testing/selftests/bpf/progs/bpf_static_keys.c new file mode 100644 index 000000000000..e47a34df469b --- /dev/null +++ b/tools/testing/selftests/bpf/progs/bpf_static_keys.c @@ -0,0 +1,120 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Isovalent */ + +#include "vmlinux.h" +#include +#include "bpf_misc.h" + +DEFINE_STATIC_KEY(key1); +DEFINE_STATIC_KEY(key2); +DEFINE_STATIC_KEY(key3); + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, u32); + __type(value, u32); +} just_map SEC(".maps"); + +int ret_user; + +SEC("fentry/" SYS_PREFIX "sys_nanosleep") +int check_one_key(void *ctx) +{ + if (bpf_static_branch_likely(&key1)) + ret_user += 3; + else + ret_user += 4; + + return 0; +} + +SEC("fentry/" SYS_PREFIX "sys_nanosleep") +int check_one_key_another_prog(void *ctx) +{ + if (bpf_static_branch_unlikely(&key1)) + ret_user += 30; + else + ret_user += 40; + + return 0; +} + +SEC("fentry/" SYS_PREFIX "sys_nanosleep") +int check_one_key_yet_another_prog(void *ctx) +{ + if (bpf_static_branch_unlikely(&key1)) + ret_user += 300; + else + ret_user += 400; + + return 0; +} + +static __always_inline int big_chunk_of_code(volatile int *x) +{ + #pragma clang loop unroll_count(256) + for (int i = 0; i < 256; i++) + *x += 1; + + return *x; +} + +SEC("fentry/" SYS_PREFIX "sys_nanosleep") +int check_one_key_long_jump(void *ctx) +{ + int x; + + if (bpf_static_branch_likely(&key1)) { + x = 1000; + big_chunk_of_code(&x); + ret_user = x; + } else { + x = 2000; + big_chunk_of_code(&x); + ret_user = x; + } + + return 0; +} + +SEC("fentry/" SYS_PREFIX "sys_nanosleep") +int check_multiple_keys_unlikely(void *ctx) +{ + ret_user = (bpf_static_branch_unlikely(&key1) << 0) | + (bpf_static_branch_unlikely(&key2) << 1) | + (bpf_static_branch_unlikely(&key3) << 2); + + return 0; +} + +int __noinline patch(int x) +{ + if (bpf_static_branch_likely(&key1)) + x += 100; + if (bpf_static_branch_unlikely(&key2)) + x += 1000; + + return x; +} + +SEC("fentry/" SYS_PREFIX "sys_nanosleep") +int check_bpf_to_bpf_call(void *ctx) +{ + __u64 j = bpf_jiffies64(); + + bpf_printk("%lu\n", j); + + ret_user = 0; + + if (bpf_static_branch_likely(&key1)) + ret_user += 1; + if 
(bpf_static_branch_unlikely(&key2)) + ret_user += 10; + + ret_user = patch(ret_user); + + return 0; +} + +char _license[] SEC("license") = "GPL";
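A note on the magic numbers in prog_load_incorrect_branches() above: insn_offset and jump_target in struct bpf_static_branch_info are byte offsets into the program, i.e. the instruction index multiplied by sizeof(struct bpf_insn) (8 bytes), which is how the test arrives at 8 and 24 for its 5-instruction program. A short sketch restating the one valid configuration from that test (key_fd as in the test; flag semantics as exercised there):

/* The "goto +0" to be patched sits at instruction 1 and its target
 * label at instruction 3, so the byte offsets are 1 * 8 and 3 * 8. */
struct bpf_static_branch_info info = {
	.map_fd      = key_fd,	/* fd of a BPF_F_STATIC_KEY array map */
	.insn_offset = 1 * 8,	/* byte offset of the JA instruction */
	.jump_target = 3 * 8,	/* byte offset of the jump target */
	.flags       = 0,	/* or BPF_F_INVERSE_BRANCH for a branch
				 * emitted by the "likely" macro */
};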