From patchwork Sat Dec 21 01:24:04 2024
X-Patchwork-Submitter: Peilin Ye
X-Patchwork-Id: 13917582
X-Patchwork-Delegate: bpf@iogearbox.net
Date: Sat, 21 Dec 2024 01:24:04 +0000
Message-ID: <44fd0483ebdb9f84e6d069fdf890bc5801e0d130.1734742802.git.yepeilin@google.com>
Subject: [PATCH RFC bpf-next v1 1/4] bpf/verifier: Factor out check_load()
From: Peilin Ye
To: bpf@vger.kernel.org
Cc: Peilin Ye, Alexei Starovoitov, Eduard Zingerman, Song Liu, Yonghong Song,
 Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, John Fastabend, KP Singh,
 Stanislav Fomichev, Hao Luo, Jiri Olsa, "Paul E. McKenney", Puranjay Mohan,
 Xu Kuohai, Catalin Marinas, Will Deacon, Quentin Monnet, Mykola Lysenko,
 Shuah Khan, Josh Don, Barret Rhoden, Neel Natu, Benjamin Segall,
 David Vernet, Dave Marchevsky, linux-kernel@vger.kernel.org
X-Patchwork-State: RFC

No functional changes intended.  While we are here, make that comment about
"reserved fields" more specific.
Reviewed-by: Josh Don
Signed-off-by: Peilin Ye
---
 kernel/bpf/verifier.c | 56 +++++++++++++++++++++++++------------------
 1 file changed, 33 insertions(+), 23 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index f27274e933e5..fa40a0440590 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -7518,6 +7518,36 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 static int save_aux_ptr_type(struct bpf_verifier_env *env, enum bpf_reg_type type,
			     bool allow_trust_mismatch);

+static int check_load(struct bpf_verifier_env *env, struct bpf_insn *insn, const char *ctx)
+{
+	struct bpf_reg_state *regs = cur_regs(env);
+	enum bpf_reg_type src_reg_type;
+	int err;
+
+	/* check src operand */
+	err = check_reg_arg(env, insn->src_reg, SRC_OP);
+	if (err)
+		return err;
+
+	err = check_reg_arg(env, insn->dst_reg, DST_OP_NO_MARK);
+	if (err)
+		return err;
+
+	src_reg_type = regs[insn->src_reg].type;
+
+	/* check that memory (src_reg + off) is readable,
+	 * the state of dst_reg will be updated by this func
+	 */
+	err = check_mem_access(env, env->insn_idx, insn->src_reg,
+			       insn->off, BPF_SIZE(insn->code),
+			       BPF_READ, insn->dst_reg, false,
+			       BPF_MODE(insn->code) == BPF_MEMSX);
+	err = err ?: save_aux_ptr_type(env, src_reg_type, true);
+	err = err ?: reg_bounds_sanity_check(env, &regs[insn->dst_reg], ctx);
+
+	return err;
+}
+
 static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
 {
 	int load_reg;
@@ -18945,30 +18975,10 @@ static int do_check(struct bpf_verifier_env *env)
 				return err;

 		} else if (class == BPF_LDX) {
-			enum bpf_reg_type src_reg_type;
-
-			/* check for reserved fields is already done */
-
-			/* check src operand */
-			err = check_reg_arg(env, insn->src_reg, SRC_OP);
-			if (err)
-				return err;
-
-			err = check_reg_arg(env, insn->dst_reg, DST_OP_NO_MARK);
-			if (err)
-				return err;
-
-			src_reg_type = regs[insn->src_reg].type;
-
-			/* check that memory (src_reg + off) is readable,
-			 * the state of dst_reg will be updated by this func
+			/* Check for reserved fields is already done in
+			 * resolve_pseudo_ldimm64().
 			 */
-			err = check_mem_access(env, env->insn_idx, insn->src_reg,
-					       insn->off, BPF_SIZE(insn->code),
-					       BPF_READ, insn->dst_reg, false,
-					       BPF_MODE(insn->code) == BPF_MEMSX);
-			err = err ?: save_aux_ptr_type(env, src_reg_type, true);
-			err = err ?: reg_bounds_sanity_check(env, &regs[insn->dst_reg], "ldx");
+			err = check_load(env, insn, "ldx");
 			if (err)
 				return err;
 		} else if (class == BPF_STX) {

From patchwork Sat Dec 21 01:25:30 2024
X-Patchwork-Submitter: Peilin Ye
X-Patchwork-Id: 13917583
X-Patchwork-Delegate: bpf@iogearbox.net
Date: Sat, 21 Dec 2024 01:25:30 +0000
Message-ID: <6ca65dc2916dba7490c4fd7a8b727b662138d606.1734742802.git.yepeilin@google.com>
Subject: [PATCH RFC bpf-next v1 2/4] bpf: Introduce load-acquire and store-release instructions
From: Peilin Ye
To: bpf@vger.kernel.org
Cc: Peilin Ye, Alexei Starovoitov, Eduard Zingerman, Song Liu, Yonghong Song,
 Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, John Fastabend, KP Singh,
 Stanislav Fomichev, Hao Luo, Jiri Olsa, "Paul E. McKenney", Puranjay Mohan,
 Xu Kuohai, Catalin Marinas, Will Deacon, Quentin Monnet, Mykola Lysenko,
 Shuah Khan, Josh Don, Barret Rhoden, Neel Natu, Benjamin Segall,
 David Vernet, Dave Marchevsky, linux-kernel@vger.kernel.org
X-Patchwork-State: RFC

Introduce BPF instructions with load-acquire and store-release semantics, as
discussed in [1].
The following new flags are defined:

  BPF_ATOMIC_LOAD         0x10
  BPF_ATOMIC_STORE        0x20
  BPF_ATOMIC_TYPE(imm)    ((imm) & 0xf0)

  BPF_RELAXED     0x0
  BPF_ACQUIRE     0x1
  BPF_RELEASE     0x2
  BPF_ACQ_REL     0x3
  BPF_SEQ_CST     0x4

  BPF_LOAD_ACQ    (BPF_ATOMIC_LOAD | BPF_ACQUIRE)
  BPF_STORE_REL   (BPF_ATOMIC_STORE | BPF_RELEASE)

A "load-acquire" is a BPF_STX | BPF_ATOMIC instruction with the 'imm' field
set to BPF_LOAD_ACQ (0x11).  Similarly, a "store-release" is a BPF_STX |
BPF_ATOMIC instruction with the 'imm' field set to BPF_STORE_REL (0x22).

Unlike existing atomic operations that only support BPF_W (32-bit) and BPF_DW
(64-bit) size modifiers, load-acquires and store-releases also support BPF_B
(8-bit) and BPF_H (16-bit).  An 8- or 16-bit load-acquire zero-extends the
value before writing it to a 32-bit register, just like ARM64 instruction
LDARH and friends.

As an example, consider the following 64-bit load-acquire BPF instruction
(assuming little-endian from now on):

  db 10 00 00 11 00 00 00  r0 = load_acquire((u64 *)(r1 + 0x0))

  opcode (0xdb): BPF_ATOMIC | BPF_DW | BPF_STX
  imm (0x00000011): BPF_LOAD_ACQ

For ARM64, an LDAR instruction will be generated by the JIT compiler for the
above:

  ldar x7, [x0]

Similarly, a 16-bit BPF store-release:

  cb 21 00 00 22 00 00 00  store_release((u16 *)(r1 + 0x0), w2)

  opcode (0xcb): BPF_ATOMIC | BPF_H | BPF_STX
  imm (0x00000022): BPF_STORE_REL

An STLRH will be generated for it:

  stlrh w1, [x0]

For a complete mapping for ARM64:

                   load-acquire     8-bit  LDARB
  (BPF_LOAD_ACQ)                   16-bit  LDARH
                                   32-bit  LDAR (32-bit)
                                   64-bit  LDAR (64-bit)
                   store-release    8-bit  STLRB
  (BPF_STORE_REL)                  16-bit  STLRH
                                   32-bit  STLR (32-bit)
                                   64-bit  STLR (64-bit)

Reviewed-by: Josh Don
Reviewed-by: Barret Rhoden
Signed-off-by: Peilin Ye
---
 arch/arm64/include/asm/insn.h  |  8 ++++
 arch/arm64/lib/insn.c          | 34 ++++++++++++++
 arch/arm64/net/bpf_jit.h       | 20 ++++++++
 arch/arm64/net/bpf_jit_comp.c  | 85 +++++++++++++++++++++++++++++++++-
 include/uapi/linux/bpf.h       | 13 ++++++
 kernel/bpf/core.c              | 41 +++++++++++++++-
 kernel/bpf/disasm.c            | 14 ++++++
 kernel/bpf/verifier.c          | 32 +++++++++----
 tools/include/uapi/linux/bpf.h | 13 ++++++
 9 files changed, 246 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index e390c432f546..bbfdbe570ff6 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -188,8 +188,10 @@ enum aarch64_insn_ldst_type {
 	AARCH64_INSN_LDST_STORE_PAIR_PRE_INDEX,
 	AARCH64_INSN_LDST_LOAD_PAIR_POST_INDEX,
 	AARCH64_INSN_LDST_STORE_PAIR_POST_INDEX,
+	AARCH64_INSN_LDST_LOAD_ACQ,
 	AARCH64_INSN_LDST_LOAD_EX,
 	AARCH64_INSN_LDST_LOAD_ACQ_EX,
+	AARCH64_INSN_LDST_STORE_REL,
 	AARCH64_INSN_LDST_STORE_EX,
 	AARCH64_INSN_LDST_STORE_REL_EX,
 	AARCH64_INSN_LDST_SIGNED_LOAD_IMM_OFFSET,
@@ -351,6 +353,8 @@ __AARCH64_INSN_FUNCS(ldr_imm, 0x3FC00000, 0x39400000)
 __AARCH64_INSN_FUNCS(ldr_lit, 0xBF000000, 0x18000000)
 __AARCH64_INSN_FUNCS(ldrsw_lit, 0xFF000000, 0x98000000)
 __AARCH64_INSN_FUNCS(exclusive, 0x3F800000, 0x08000000)
+__AARCH64_INSN_FUNCS(load_acq, 0x3FC08000, 0x08C08000)
+__AARCH64_INSN_FUNCS(store_rel, 0x3FC08000, 0x08808000)
 __AARCH64_INSN_FUNCS(load_ex, 0x3F400000, 0x08400000)
 __AARCH64_INSN_FUNCS(store_ex, 0x3F400000, 0x08000000)
 __AARCH64_INSN_FUNCS(mops, 0x3B200C00, 0x19000400)
@@ -602,6 +606,10 @@ u32 aarch64_insn_gen_load_store_pair(enum aarch64_insn_register reg1,
				     int offset,
				     enum aarch64_insn_variant variant,
				     enum aarch64_insn_ldst_type type);
+u32 aarch64_insn_gen_load_acq_store_rel(enum aarch64_insn_register reg,
+					enum aarch64_insn_register base,
+					enum aarch64_insn_size_type size,
+					enum aarch64_insn_ldst_type type);
 u32 aarch64_insn_gen_load_store_ex(enum aarch64_insn_register reg,
				   enum aarch64_insn_register base,
				   enum aarch64_insn_register state,
diff --git a/arch/arm64/lib/insn.c b/arch/arm64/lib/insn.c
index b008a9b46a7f..80e5b191d96a 100644
--- a/arch/arm64/lib/insn.c
+++ b/arch/arm64/lib/insn.c
@@ -540,6 +540,40 @@ u32 aarch64_insn_gen_load_store_pair(enum aarch64_insn_register reg1,
					     offset >> shift);
 }

+u32 aarch64_insn_gen_load_acq_store_rel(enum aarch64_insn_register reg,
+					enum aarch64_insn_register base,
+					enum aarch64_insn_size_type size,
+					enum aarch64_insn_ldst_type type)
+{
+	u32 insn;
+
+	switch (type) {
+	case AARCH64_INSN_LDST_LOAD_ACQ:
+		insn = aarch64_insn_get_load_acq_value();
+		break;
+	case AARCH64_INSN_LDST_STORE_REL:
+		insn = aarch64_insn_get_store_rel_value();
+		break;
+	default:
+		pr_err("%s: unknown load-acquire/store-release encoding %d\n", __func__, type);
+		return AARCH64_BREAK_FAULT;
+	}
+
+	insn = aarch64_insn_encode_ldst_size(size, insn);
+
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RT, insn,
+					    reg);
+
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn,
+					    base);
+
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RT2, insn,
+					    AARCH64_INSN_REG_ZR);
+
+	return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RS, insn,
+					    AARCH64_INSN_REG_ZR);
+}
+
 u32 aarch64_insn_gen_load_store_ex(enum aarch64_insn_register reg,
				   enum aarch64_insn_register base,
				   enum aarch64_insn_register state,
diff --git a/arch/arm64/net/bpf_jit.h b/arch/arm64/net/bpf_jit.h
index b22ab2f97a30..a3b0e693a125 100644
--- a/arch/arm64/net/bpf_jit.h
+++ b/arch/arm64/net/bpf_jit.h
@@ -119,6 +119,26 @@
	aarch64_insn_gen_load_store_ex(Rt, Rn, Rs, A64_SIZE(sf), \
				       AARCH64_INSN_LDST_STORE_REL_EX)

+/* Load-acquire & store-release */
+#define A64_LDAR(Rt, Rn, size) \
+	aarch64_insn_gen_load_acq_store_rel(Rt, Rn, AARCH64_INSN_SIZE_##size, \
+					    AARCH64_INSN_LDST_LOAD_ACQ)
+#define A64_STLR(Rt, Rn, size) \
+	aarch64_insn_gen_load_acq_store_rel(Rt, Rn, AARCH64_INSN_SIZE_##size, \
+					    AARCH64_INSN_LDST_STORE_REL)
+
+/* Rt = [Rn] (load acquire) */
+#define A64_LDARB(Wt, Xn) A64_LDAR(Wt, Xn, 8)
+#define A64_LDARH(Wt, Xn) A64_LDAR(Wt, Xn, 16)
+#define A64_LDAR32(Wt, Xn) A64_LDAR(Wt, Xn, 32)
+#define A64_LDAR64(Xt, Xn) A64_LDAR(Xt, Xn, 64)
+
+/* [Rn] = Rt (store release) */
+#define A64_STLRB(Wt, Xn) A64_STLR(Wt, Xn, 8)
+#define A64_STLRH(Wt, Xn) A64_STLR(Wt, Xn, 16)
+#define A64_STLR32(Wt, Xn) A64_STLR(Wt, Xn, 32)
+#define A64_STLR64(Xt, Xn) A64_STLR(Xt, Xn, 64)
+
 /*
  * LSE atomics
  *
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 66708b95493a..15fc0f391f14 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -634,6 +634,80 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
	return 0;
 }

+static inline bool is_atomic_load_store(const s32 imm)
+{
+	const s32 type = BPF_ATOMIC_TYPE(imm);
+
+	return type == BPF_ATOMIC_LOAD || type == BPF_ATOMIC_STORE;
+}
+
+static int emit_atomic_load_store(const struct bpf_insn *insn, struct jit_ctx *ctx)
+{
+	const s16 off = insn->off;
+	const u8 code = insn->code;
+	const bool arena = BPF_MODE(code) == BPF_PROBE_ATOMIC;
+	const u8 arena_vm_base = bpf2a64[ARENA_VM_START];
+	const u8 dst = bpf2a64[insn->dst_reg];
+	const u8 src = bpf2a64[insn->src_reg];
+	const u8 tmp = bpf2a64[TMP_REG_1];
+	u8 ptr;
+
+	if (BPF_ATOMIC_TYPE(insn->imm) == BPF_ATOMIC_LOAD)
+		ptr = src;
+	else
+		ptr = dst;
+
+	if (off) {
+		emit_a64_mov_i(true, tmp, off, ctx);
+		emit(A64_ADD(true, tmp, tmp, ptr), ctx);
+		ptr = tmp;
+	}
+	if (arena) {
+		emit(A64_ADD(true, tmp, ptr, arena_vm_base), ctx);
+		ptr = tmp;
+	}
+
+	switch (insn->imm) {
+	case BPF_LOAD_ACQ:
+		switch (BPF_SIZE(code)) {
+		case BPF_B:
+			emit(A64_LDARB(dst, ptr), ctx);
+			break;
+		case BPF_H:
+			emit(A64_LDARH(dst, ptr), ctx);
+			break;
+		case BPF_W:
+			emit(A64_LDAR32(dst, ptr), ctx);
+			break;
+		case BPF_DW:
+			emit(A64_LDAR64(dst, ptr), ctx);
+			break;
+		}
+		break;
+	case BPF_STORE_REL:
+		switch (BPF_SIZE(code)) {
+		case BPF_B:
+			emit(A64_STLRB(src, ptr), ctx);
+			break;
+		case BPF_H:
+			emit(A64_STLRH(src, ptr), ctx);
+			break;
+		case BPF_W:
+			emit(A64_STLR32(src, ptr), ctx);
+			break;
+		case BPF_DW:
+			emit(A64_STLR64(src, ptr), ctx);
+			break;
+		}
+		break;
+	default:
+		pr_err_once("unknown atomic load/store op code %02x\n", insn->imm);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 #ifdef CONFIG_ARM64_LSE_ATOMICS
 static int emit_lse_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
 {
@@ -1641,11 +1715,17 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
			return ret;
		break;

+	case BPF_STX | BPF_ATOMIC | BPF_B:
+	case BPF_STX | BPF_ATOMIC | BPF_H:
	case BPF_STX | BPF_ATOMIC | BPF_W:
	case BPF_STX | BPF_ATOMIC | BPF_DW:
+	case BPF_STX | BPF_PROBE_ATOMIC | BPF_B:
+	case BPF_STX | BPF_PROBE_ATOMIC | BPF_H:
	case BPF_STX | BPF_PROBE_ATOMIC | BPF_W:
	case BPF_STX | BPF_PROBE_ATOMIC | BPF_DW:
-		if (cpus_have_cap(ARM64_HAS_LSE_ATOMICS))
+		if (is_atomic_load_store(insn->imm))
+			ret = emit_atomic_load_store(insn, ctx);
+		else if (cpus_have_cap(ARM64_HAS_LSE_ATOMICS))
			ret = emit_lse_atomic(insn, ctx);
		else
			ret = emit_ll_sc_atomic(insn, ctx);
@@ -2669,7 +2749,8 @@ bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
	switch (insn->code) {
	case BPF_STX | BPF_ATOMIC | BPF_W:
	case BPF_STX | BPF_ATOMIC | BPF_DW:
-		if (!cpus_have_cap(ARM64_HAS_LSE_ATOMICS))
+		if (!is_atomic_load_store(insn->imm) &&
+		    !cpus_have_cap(ARM64_HAS_LSE_ATOMICS))
			return false;
	}
	return true;
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 2acf9b336371..4a20a125eb46 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -51,6 +51,19 @@
 #define BPF_XCHG	(0xe0 | BPF_FETCH)	/* atomic exchange */
 #define BPF_CMPXCHG	(0xf0 | BPF_FETCH)	/* atomic compare-and-write */

+#define BPF_ATOMIC_LOAD		0x10
+#define BPF_ATOMIC_STORE	0x20
+#define BPF_ATOMIC_TYPE(imm)	((imm) & 0xf0)
+
+#define BPF_RELAXED	0x00
+#define BPF_ACQUIRE	0x01
+#define BPF_RELEASE	0x02
+#define BPF_ACQ_REL	0x03
+#define BPF_SEQ_CST	0x04
+
+#define BPF_LOAD_ACQ	(BPF_ATOMIC_LOAD | BPF_ACQUIRE)		/* load-acquire */
+#define BPF_STORE_REL	(BPF_ATOMIC_STORE | BPF_RELEASE)	/* store-release */
+
 enum bpf_cond_pseudo_jmp {
	BPF_MAY_GOTO = 0,
 };
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index da729cbbaeb9..ab082ab9d535 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1663,14 +1663,17 @@ EXPORT_SYMBOL_GPL(__bpf_call_base);
	INSN_3(JMP, JSET, K),			\
	INSN_2(JMP, JA),			\
	INSN_2(JMP32, JA),			\
+	/* Atomic operations. */		\
+	INSN_3(STX, ATOMIC, B),			\
+	INSN_3(STX, ATOMIC, H),			\
+	INSN_3(STX, ATOMIC, W),			\
+	INSN_3(STX, ATOMIC, DW),		\
	/* Store instructions. */		\
	/* Register based. */			\
	INSN_3(STX, MEM, B),			\
	INSN_3(STX, MEM, H),			\
	INSN_3(STX, MEM, W),			\
	INSN_3(STX, MEM, DW),			\
-	INSN_3(STX, ATOMIC, W),			\
-	INSN_3(STX, ATOMIC, DW),		\
	/* Immediate based. */			\
	INSN_3(ST, MEM, B),			\
	INSN_3(ST, MEM, H),			\
@@ -2169,6 +2172,8 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)

	STX_ATOMIC_DW:
	STX_ATOMIC_W:
+	STX_ATOMIC_H:
+	STX_ATOMIC_B:
		switch (IMM) {
		ATOMIC_ALU_OP(BPF_ADD, add)
		ATOMIC_ALU_OP(BPF_AND, and)
@@ -2196,6 +2201,38 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
					(atomic64_t *)(unsigned long) (DST + insn->off),
					(u64) BPF_R0, (u64) SRC);
			break;
+		case BPF_LOAD_ACQ:
+			switch (BPF_SIZE(insn->code)) {
+#define LOAD_ACQUIRE(SIZEOP, SIZE)				\
+			case BPF_##SIZEOP:			\
+				DST = (SIZE)smp_load_acquire(	\
+					(SIZE *)(unsigned long)(SRC + insn->off));	\
+				break;
+			LOAD_ACQUIRE(B, u8)
+			LOAD_ACQUIRE(H, u16)
+			LOAD_ACQUIRE(W, u32)
+			LOAD_ACQUIRE(DW, u64)
+#undef LOAD_ACQUIRE
+			default:
+				goto default_label;
+			}
+			break;
+		case BPF_STORE_REL:
+			switch (BPF_SIZE(insn->code)) {
+#define STORE_RELEASE(SIZEOP, SIZE)				\
+			case BPF_##SIZEOP:			\
+				smp_store_release(		\
+					(SIZE *)(unsigned long)(DST + insn->off), (SIZE)SRC);	\
+				break;
+			STORE_RELEASE(B, u8)
+			STORE_RELEASE(H, u16)
+			STORE_RELEASE(W, u32)
+			STORE_RELEASE(DW, u64)
+#undef STORE_RELEASE
+			default:
+				goto default_label;
+			}
+			break;
		default:
			goto default_label;
diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
index 309c4aa1b026..2a354a44f209 100644
--- a/kernel/bpf/disasm.c
+++ b/kernel/bpf/disasm.c
@@ -267,6 +267,20 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
			BPF_SIZE(insn->code) == BPF_DW ? "64" : "",
			bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
			insn->dst_reg, insn->off, insn->src_reg);
+	} else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
+		   insn->imm == BPF_LOAD_ACQ) {
+		verbose(cbs->private_data, "(%02x) %s%d = load_acquire((%s *)(r%d %+d))\n",
+			insn->code,
+			BPF_SIZE(insn->code) == BPF_DW ? "r" : "w", insn->dst_reg,
+			bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+			insn->src_reg, insn->off);
+	} else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
+		   insn->imm == BPF_STORE_REL) {
+		verbose(cbs->private_data, "(%02x) store_release((%s *)(r%d %+d), %s%d)\n",
+			insn->code,
+			bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+			insn->dst_reg, insn->off,
+			BPF_SIZE(insn->code) == BPF_DW ? "r" : "w", insn->src_reg);
	} else {
		verbose(cbs->private_data, "BUG_%02x\n", insn->code);
	}
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index fa40a0440590..dc3ecc925b97 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3480,7 +3480,7 @@ static bool is_reg64(struct bpf_verifier_env *env, struct bpf_insn *insn,
	}

	if (class == BPF_STX) {
-		/* BPF_STX (including atomic variants) has multiple source
+		/* BPF_STX (including atomic variants) has one or more source
		 * operands, one of which is a ptr. Check whether the caller is
		 * asking about it.
		 */
@@ -7550,6 +7550,8 @@ static int check_load(struct bpf_verifier_env *env, struct bpf_insn *insn, const
 static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
 {
+	const int bpf_size = BPF_SIZE(insn->code);
+	bool write_only = false;
	int load_reg;
	int err;

@@ -7564,17 +7566,21 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
	case BPF_XOR | BPF_FETCH:
	case BPF_XCHG:
	case BPF_CMPXCHG:
+		if (bpf_size != BPF_W && bpf_size != BPF_DW) {
+			verbose(env, "invalid atomic operand size\n");
+			return -EINVAL;
+		}
+		break;
+	case BPF_LOAD_ACQ:
+		return check_load(env, insn, "atomic");
+	case BPF_STORE_REL:
+		write_only = true;
		break;
	default:
		verbose(env, "BPF_ATOMIC uses invalid atomic opcode %02x\n", insn->imm);
		return -EINVAL;
	}

-	if (BPF_SIZE(insn->code) != BPF_W && BPF_SIZE(insn->code) != BPF_DW) {
-		verbose(env, "invalid atomic operand size\n");
-		return -EINVAL;
-	}
-
	/* check src1 operand */
	err = check_reg_arg(env, insn->src_reg, SRC_OP);
	if (err)
@@ -7615,6 +7621,9 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
		return -EACCES;
	}

+	if (write_only)
+		goto skip_read_check;
+
	if (insn->imm & BPF_FETCH) {
		if (insn->imm == BPF_CMPXCHG)
			load_reg = BPF_REG_0;
@@ -7636,14 +7645,15 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
	 * case to simulate the register fill.
	 */
	err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
-			       BPF_SIZE(insn->code), BPF_READ, -1, true, false);
+			       bpf_size, BPF_READ, -1, true, false);
	if (!err && load_reg >= 0)
		err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
-				       BPF_SIZE(insn->code), BPF_READ, load_reg,
-				       true, false);
+				       bpf_size, BPF_READ, load_reg, true,
+				       false);
	if (err)
		return err;

+skip_read_check:
	if (is_arena_reg(env, insn->dst_reg)) {
		err = save_aux_ptr_type(env, PTR_TO_ARENA, false);
		if (err)
@@ -20320,7 +20330,9 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
			   insn->code == (BPF_ST | BPF_MEM | BPF_W) ||
			   insn->code == (BPF_ST | BPF_MEM | BPF_DW)) {
			type = BPF_WRITE;
-		} else if ((insn->code == (BPF_STX | BPF_ATOMIC | BPF_W) ||
+		} else if ((insn->code == (BPF_STX | BPF_ATOMIC | BPF_B) ||
+			    insn->code == (BPF_STX | BPF_ATOMIC | BPF_H) ||
+			    insn->code == (BPF_STX | BPF_ATOMIC | BPF_W) ||
			    insn->code == (BPF_STX | BPF_ATOMIC | BPF_DW)) &&
			   env->insn_aux_data[i + delta].ptr_type == PTR_TO_ARENA) {
			insn->code = BPF_STX | BPF_PROBE_ATOMIC | BPF_SIZE(insn->code);
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 2acf9b336371..4a20a125eb46 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -51,6 +51,19 @@
 #define BPF_XCHG	(0xe0 | BPF_FETCH)	/* atomic exchange */
 #define BPF_CMPXCHG	(0xf0 | BPF_FETCH)	/* atomic compare-and-write */

+#define BPF_ATOMIC_LOAD		0x10
+#define BPF_ATOMIC_STORE	0x20
+#define BPF_ATOMIC_TYPE(imm)	((imm) & 0xf0)
+
+#define BPF_RELAXED	0x00
+#define BPF_ACQUIRE	0x01
+#define BPF_RELEASE	0x02
+#define BPF_ACQ_REL	0x03
+#define BPF_SEQ_CST	0x04
+
+#define BPF_LOAD_ACQ	(BPF_ATOMIC_LOAD | BPF_ACQUIRE)		/* load-acquire */
+#define BPF_STORE_REL	(BPF_ATOMIC_STORE | BPF_RELEASE)	/* store-release */
+
 enum bpf_cond_pseudo_jmp {
	BPF_MAY_GOTO = 0,
 };

From patchwork Sat Dec 21 01:25:57 2024
X-Patchwork-Submitter: Peilin Ye
X-Patchwork-Id: 13917584
X-Patchwork-Delegate: bpf@iogearbox.net
Date: Sat, 21 Dec 2024 01:25:57 +0000
X-Mailing-List: bpf@vger.kernel.org
X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog
Subject: [PATCH RFC bpf-next v1 3/4] selftests/bpf: Delete duplicate verifier/atomic_invalid tests
From: Peilin Ye
To: bpf@vger.kernel.org
Cc: Peilin Ye, Alexei Starovoitov, Eduard Zingerman, Song Liu, Yonghong Song, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, "Paul E. McKenney", Puranjay Mohan, Xu Kuohai, Catalin Marinas, Will Deacon, Quentin Monnet, Mykola Lysenko, Shuah Khan, Josh Don, Barret Rhoden, Neel Natu, Benjamin Segall, David Vernet, Dave Marchevsky, linux-kernel@vger.kernel.org
X-Patchwork-Delegate: bpf@iogearbox.net
X-Patchwork-State: RFC

Right now, the BPF_ADD and BPF_ADD | BPF_FETCH cases are tested twice:

  #55/u atomic BPF_ADD access through non-pointer  OK
  #55/p atomic BPF_ADD access through non-pointer  OK
  #56/u atomic BPF_ADD | BPF_FETCH access through non-pointer  OK
  #56/p atomic BPF_ADD | BPF_FETCH access through non-pointer  OK
  #57/u atomic BPF_ADD access through non-pointer  OK
  #57/p atomic BPF_ADD access through non-pointer  OK
  #58/u atomic BPF_ADD | BPF_FETCH access through non-pointer  OK
  #58/p atomic BPF_ADD | BPF_FETCH access through non-pointer  OK

Delete the duplicates.

Reviewed-by: Josh Don
Signed-off-by: Peilin Ye
---
 tools/testing/selftests/bpf/verifier/atomic_invalid.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/verifier/atomic_invalid.c b/tools/testing/selftests/bpf/verifier/atomic_invalid.c
index 25f4ac1c69ab..8c52ad682067 100644
--- a/tools/testing/selftests/bpf/verifier/atomic_invalid.c
+++ b/tools/testing/selftests/bpf/verifier/atomic_invalid.c
@@ -13,8 +13,6 @@
 }
 __INVALID_ATOMIC_ACCESS_TEST(BPF_ADD),
 __INVALID_ATOMIC_ACCESS_TEST(BPF_ADD | BPF_FETCH),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_ADD),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_ADD | BPF_FETCH),
 __INVALID_ATOMIC_ACCESS_TEST(BPF_AND),
 __INVALID_ATOMIC_ACCESS_TEST(BPF_AND | BPF_FETCH),
 __INVALID_ATOMIC_ACCESS_TEST(BPF_OR),

From patchwork Sat Dec 21 01:26:13 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peilin Ye
X-Patchwork-Id: 13917585
X-Patchwork-Delegate: bpf@iogearbox.net
Date: Sat, 21 Dec 2024 01:26:13 +0000
X-Mailing-List: bpf@vger.kernel.org
X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog
Message-ID: <114f23ac20d73eeb624a9677e39a87b766f4bcc2.1734742802.git.yepeilin@google.com>
Subject: [PATCH RFC bpf-next v1 4/4] selftests/bpf: Add selftests for load-acquire and store-release instructions
From: Peilin Ye
To: bpf@vger.kernel.org
Cc: Peilin Ye, Alexei Starovoitov, Eduard Zingerman, Song Liu, Yonghong Song, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, "Paul E. McKenney", Puranjay Mohan, Xu Kuohai, Catalin Marinas, Will Deacon, Quentin Monnet, Mykola Lysenko, Shuah Khan, Josh Don, Barret Rhoden, Neel Natu, Benjamin Segall, David Vernet, Dave Marchevsky, linux-kernel@vger.kernel.org
X-Patchwork-Delegate: bpf@iogearbox.net
X-Patchwork-State: RFC

Add the following ./test_progs tests:

  * atomics/load_acquire
  * atomics/store_release
  * arena_atomics/load_acquire
  * arena_atomics/store_release

They depend on the pre-defined __BPF_FEATURE_LOAD_ACQ_STORE_REL feature
macro, which implies -mcpu>=v4.

  $ ALLOWLIST=atomics/load_acquire,atomics/store_release,
  $ ALLOWLIST+=arena_atomics/load_acquire,arena_atomics/store_release

  $ ./test_progs-cpuv4 -a $ALLOWLIST

  #3/9   arena_atomics/load_acquire:OK
  #3/10  arena_atomics/store_release:OK
  ...
  #10/8  atomics/load_acquire:OK
  #10/9  atomics/store_release:OK

  $ ./test_progs -v -a $ALLOWLIST

  test_load_acquire:SKIP:Clang does not support BPF load-acquire or addr_space_cast
  #3/9   arena_atomics/load_acquire:SKIP
  test_store_release:SKIP:Clang does not support BPF store-release or addr_space_cast
  #3/10  arena_atomics/store_release:SKIP
  ...
  test_load_acquire:SKIP:Clang does not support BPF load-acquire
  #10/8  atomics/load_acquire:SKIP
  test_store_release:SKIP:Clang does not support BPF store-release
  #10/9  atomics/store_release:SKIP

Additionally, add several ./test_verifier tests:

  #65/u atomic BPF_LOAD_ACQ access through non-pointer  OK
  #65/p atomic BPF_LOAD_ACQ access through non-pointer  OK
  #66/u atomic BPF_STORE_REL access through non-pointer  OK
  #66/p atomic BPF_STORE_REL access through non-pointer  OK
  #67/u BPF_ATOMIC load-acquire, 8-bit  OK
  #67/p BPF_ATOMIC load-acquire, 8-bit  OK
  #68/u BPF_ATOMIC load-acquire, 16-bit  OK
  #68/p BPF_ATOMIC load-acquire, 16-bit  OK
  #69/u BPF_ATOMIC load-acquire, 32-bit  OK
  #69/p BPF_ATOMIC load-acquire, 32-bit  OK
  #70/u BPF_ATOMIC load-acquire, 64-bit  OK
  #70/p BPF_ATOMIC load-acquire, 64-bit  OK
  #71/u Cannot load-acquire from uninitialized src_reg  OK
  #71/p Cannot load-acquire from uninitialized src_reg  OK
  #76/u BPF_ATOMIC store-release, 8-bit  OK
  #76/p BPF_ATOMIC store-release, 8-bit  OK
  #77/u BPF_ATOMIC store-release, 16-bit  OK
  #77/p BPF_ATOMIC store-release, 16-bit  OK
  #78/u BPF_ATOMIC store-release, 32-bit  OK
  #78/p BPF_ATOMIC store-release, 32-bit  OK
  #79/u BPF_ATOMIC store-release, 64-bit  OK
  #79/p BPF_ATOMIC store-release, 64-bit  OK
  #80/u Cannot store-release from uninitialized src_reg  OK
  #80/p Cannot store-release from uninitialized src_reg  OK

Reviewed-by: Josh Don
Signed-off-by: Peilin Ye
---
 include/linux/filter.h | 2 +
 .../selftests/bpf/prog_tests/arena_atomics.c | 61 +++++++++++++++-
 .../selftests/bpf/prog_tests/atomics.c | 57 ++++++++++++++-
 .../selftests/bpf/progs/arena_atomics.c | 62 +++++++++++++++-
 tools/testing/selftests/bpf/progs/atomics.c | 62 +++++++++++++++-
 .../selftests/bpf/verifier/atomic_invalid.c | 26 +++----
 .../selftests/bpf/verifier/atomic_load.c | 71 +++++++++++++++++++
 .../selftests/bpf/verifier/atomic_store.c | 70 ++++++++++++++++++
 8 files changed, 393 insertions(+), 18 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/verifier/atomic_load.c
 create mode 100644 tools/testing/selftests/bpf/verifier/atomic_store.c

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 0477254bc2d3..c264d723dc9e 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -364,6 +364,8 @@ static inline bool insn_is_cast_user(const struct bpf_insn *insn)
  * BPF_XOR | BPF_FETCH src_reg = atomic_fetch_xor(dst_reg + off16, src_reg);
  * BPF_XCHG src_reg = atomic_xchg(dst_reg + off16, src_reg)
  * BPF_CMPXCHG r0 = atomic_cmpxchg(dst_reg + off16, r0, src_reg)
+ * BPF_LOAD_ACQ dst_reg = smp_load_acquire(src_reg + off16)
+ * BPF_STORE_REL smp_store_release(dst_reg + off16, src_reg)
  */
 
 #define BPF_ATOMIC_OP(SIZE, OP, DST, SRC, OFF) \

diff --git a/tools/testing/selftests/bpf/prog_tests/arena_atomics.c b/tools/testing/selftests/bpf/prog_tests/arena_atomics.c
index 26e7c06c6cb4..81d3575d7652 100644
--- a/tools/testing/selftests/bpf/prog_tests/arena_atomics.c
+++ b/tools/testing/selftests/bpf/prog_tests/arena_atomics.c
@@ -162,6 +162,60 @@ static void test_uaf(struct arena_atomics *skel)
 	ASSERT_EQ(skel->arena->uaf_recovery_fails, 0, "uaf_recovery_fails");
 }
 
+static void test_load_acquire(struct arena_atomics *skel)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+	int err, prog_fd;
+
+	if (skel->data->skip_lacq_srel_tests) {
+		printf("%s:SKIP:Clang does not support BPF load-acquire or addr_space_cast\n",
+		       __func__);
+		test__skip();
+		return;
+	}
+
+	/* No need to attach it, just run it directly */
+	prog_fd = bpf_program__fd(skel->progs.load_acquire);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	if (!ASSERT_OK(err, "test_run_opts err"))
+		return;
+	if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
+		return;
+
+	ASSERT_EQ(skel->arena->load_acquire8_result, 0x12, "load_acquire8_result");
+	ASSERT_EQ(skel->arena->load_acquire16_result, 0x1234, "load_acquire16_result");
+	ASSERT_EQ(skel->arena->load_acquire32_result, 0x12345678, "load_acquire32_result");
+	ASSERT_EQ(skel->arena->load_acquire64_result, 0x1234567890abcdef,
+		  "load_acquire64_result");
+}
+
+static void test_store_release(struct arena_atomics *skel)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+	int err, prog_fd;
+
+	if (skel->data->skip_lacq_srel_tests) {
+		printf("%s:SKIP:Clang does not support BPF store-release or addr_space_cast\n",
+		       __func__);
+		test__skip();
+		return;
+	}
+
+	/* No need to attach it, just run it directly */
+	prog_fd = bpf_program__fd(skel->progs.store_release);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	if (!ASSERT_OK(err, "test_run_opts err"))
+		return;
+	if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
+		return;
+
+	ASSERT_EQ(skel->arena->store_release8_result, 0x12, "store_release8_result");
+	ASSERT_EQ(skel->arena->store_release16_result, 0x1234, "store_release16_result");
+	ASSERT_EQ(skel->arena->store_release32_result, 0x12345678, "store_release32_result");
+	ASSERT_EQ(skel->arena->store_release64_result, 0x1234567890abcdef,
+		  "store_release64_result");
+}
+
 void test_arena_atomics(void)
 {
 	struct arena_atomics *skel;
@@ -171,7 +225,7 @@ void test_arena_atomics(void)
 	if (!ASSERT_OK_PTR(skel, "arena atomics skeleton open"))
 		return;
 
-	if (skel->data->skip_tests) {
+	if (skel->data->skip_all_tests) {
 		printf("%s:SKIP:no ENABLE_ATOMICS_TESTS or no addr_space_cast support in clang",
 		       __func__);
 		test__skip();
@@ -199,6 +253,11 @@ void test_arena_atomics(void)
 	if (test__start_subtest("uaf"))
 		test_uaf(skel);
 
+	if (test__start_subtest("load_acquire"))
+		test_load_acquire(skel);
+	if (test__start_subtest("store_release"))
+		test_store_release(skel);
+
 cleanup:
 	arena_atomics__destroy(skel);
 }

diff --git a/tools/testing/selftests/bpf/prog_tests/atomics.c b/tools/testing/selftests/bpf/prog_tests/atomics.c
index 13e101f370a1..5d7cff3eed2b 100644
--- a/tools/testing/selftests/bpf/prog_tests/atomics.c
+++ b/tools/testing/selftests/bpf/prog_tests/atomics.c
@@ -162,6 +162,56 @@ static void test_xchg(struct atomics_lskel *skel)
 	ASSERT_EQ(skel->bss->xchg32_result, 1, "xchg32_result");
 }
 
+static void test_load_acquire(struct atomics_lskel *skel)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+	int err, prog_fd;
+
+	if (skel->data->skip_lacq_srel_tests) {
+		printf("%s:SKIP:Clang does not support BPF load-acquire\n", __func__);
+		test__skip();
+		return;
+	}
+
+	/* No need to attach it, just run it directly */
+	prog_fd = skel->progs.load_acquire.prog_fd;
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	if (!ASSERT_OK(err, "test_run_opts err"))
+		return;
+	if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
+		return;
+
+	ASSERT_EQ(skel->bss->load_acquire8_result, 0x12, "load_acquire8_result");
+	ASSERT_EQ(skel->bss->load_acquire16_result, 0x1234, "load_acquire16_result");
+	ASSERT_EQ(skel->bss->load_acquire32_result, 0x12345678, "load_acquire32_result");
+	ASSERT_EQ(skel->bss->load_acquire64_result, 0x1234567890abcdef, "load_acquire64_result");
+}
+
+static void test_store_release(struct atomics_lskel *skel)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+	int err, prog_fd;
+
+	if (skel->data->skip_lacq_srel_tests) {
+		printf("%s:SKIP:Clang does not support BPF store-release\n", __func__);
+		test__skip();
+		return;
+	}
+
+	/* No need to attach it, just run it directly */
+	prog_fd = skel->progs.store_release.prog_fd;
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	if (!ASSERT_OK(err, "test_run_opts err"))
+		return;
+	if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
+		return;
+
+	ASSERT_EQ(skel->bss->store_release8_result, 0x12, "store_release8_result");
+	ASSERT_EQ(skel->bss->store_release16_result, 0x1234, "store_release16_result");
+	ASSERT_EQ(skel->bss->store_release32_result, 0x12345678, "store_release32_result");
+	ASSERT_EQ(skel->bss->store_release64_result, 0x1234567890abcdef, "store_release64_result");
+}
+
 void test_atomics(void)
 {
 	struct atomics_lskel *skel;
@@ -170,7 +220,7 @@ void test_atomics(void)
 	if (!ASSERT_OK_PTR(skel, "atomics skeleton load"))
 		return;
 
-	if (skel->data->skip_tests) {
+	if (skel->data->skip_all_tests) {
 		printf("%s:SKIP:no ENABLE_ATOMICS_TESTS (missing Clang BPF atomics support)",
 		       __func__);
 		test__skip();
@@ -193,6 +243,11 @@ void test_atomics(void)
 	if (test__start_subtest("xchg"))
 		test_xchg(skel);
 
+	if (test__start_subtest("load_acquire"))
+		test_load_acquire(skel);
+	if (test__start_subtest("store_release"))
+		test_store_release(skel);
+
 cleanup:
 	atomics_lskel__destroy(skel);
 }

diff --git a/tools/testing/selftests/bpf/progs/arena_atomics.c b/tools/testing/selftests/bpf/progs/arena_atomics.c
index 40dd57fca5cc..fe8b67d9c87b 100644
--- a/tools/testing/selftests/bpf/progs/arena_atomics.c
+++ b/tools/testing/selftests/bpf/progs/arena_atomics.c
@@ -19,9 +19,15 @@ struct {
 } arena SEC(".maps");
 
 #if defined(ENABLE_ATOMICS_TESTS) && defined(__BPF_FEATURE_ADDR_SPACE_CAST)
-bool skip_tests __attribute((__section__(".data"))) = false;
+bool skip_all_tests __attribute((__section__(".data"))) = false;
 #else
-bool skip_tests = true;
+bool skip_all_tests = true;
+#endif
+
+#if defined(__BPF_FEATURE_LOAD_ACQ_STORE_REL) && defined(__BPF_FEATURE_ADDR_SPACE_CAST)
+bool skip_lacq_srel_tests __attribute((__section__(".data"))) = false;
+#else
+bool skip_lacq_srel_tests = true;
 #endif
 
 __u32 pid = 0;
@@ -274,4 +280,56 @@ int uaf(const void *ctx)
 	return 0;
 }
 
+__u8 __arena_global load_acquire8_value = 0x12;
+__u16 __arena_global load_acquire16_value = 0x1234;
+__u32 __arena_global load_acquire32_value = 0x12345678;
+__u64 __arena_global load_acquire64_value = 0x1234567890abcdef;
+
+__u8 __arena_global load_acquire8_result = 0;
+__u16 __arena_global load_acquire16_result = 0;
+__u32 __arena_global load_acquire32_result = 0;
+__u64 __arena_global load_acquire64_result = 0;
+
+SEC("raw_tp/sys_enter")
+int load_acquire(const void *ctx)
+{
+	if (pid != (bpf_get_current_pid_tgid() >> 32))
+		return 0;
+
+#ifdef __BPF_FEATURE_LOAD_ACQ_STORE_REL
+	load_acquire8_result = __atomic_load_n(&load_acquire8_value, __ATOMIC_ACQUIRE);
+	load_acquire16_result = __atomic_load_n(&load_acquire16_value, __ATOMIC_ACQUIRE);
+	load_acquire32_result = __atomic_load_n(&load_acquire32_value, __ATOMIC_ACQUIRE);
+	load_acquire64_result = __atomic_load_n(&load_acquire64_value, __ATOMIC_ACQUIRE);
+#endif
+
+	return 0;
+}
+
+__u8 __arena_global store_release8_result = 0;
+__u16 __arena_global store_release16_result = 0;
+__u32 __arena_global store_release32_result = 0;
+__u64 __arena_global store_release64_result = 0;
+
+SEC("raw_tp/sys_enter")
+int store_release(const void *ctx)
+{
+	if (pid != (bpf_get_current_pid_tgid() >> 32))
+		return 0;
+
+#ifdef __BPF_FEATURE_LOAD_ACQ_STORE_REL
+	__u8 val8 = 0x12;
+	__u16 val16 = 0x1234;
+	__u32 val32 = 0x12345678;
+	__u64 val64 = 0x1234567890abcdef;
+
+	__atomic_store_n(&store_release8_result, val8, __ATOMIC_RELEASE);
+	__atomic_store_n(&store_release16_result, val16, __ATOMIC_RELEASE);
+	__atomic_store_n(&store_release32_result, val32, __ATOMIC_RELEASE);
+	__atomic_store_n(&store_release64_result, val64, __ATOMIC_RELEASE);
+#endif
+
+	return 0;
+}
+
 char _license[] SEC("license") = "GPL";

diff --git a/tools/testing/selftests/bpf/progs/atomics.c b/tools/testing/selftests/bpf/progs/atomics.c
index f89c7f0cc53b..4c23d7d0d37d 100644
--- a/tools/testing/selftests/bpf/progs/atomics.c
+++ b/tools/testing/selftests/bpf/progs/atomics.c
@@ -5,9 +5,15 @@
 #include
 
 #ifdef ENABLE_ATOMICS_TESTS
-bool skip_tests __attribute((__section__(".data"))) = false;
+bool skip_all_tests __attribute((__section__(".data"))) = false;
 #else
-bool skip_tests = true;
+bool skip_all_tests = true;
+#endif
+
+#ifdef __BPF_FEATURE_LOAD_ACQ_STORE_REL
+bool skip_lacq_srel_tests __attribute((__section__(".data"))) = false;
+#else
+bool skip_lacq_srel_tests = true;
 #endif
 
 __u32 pid = 0;
@@ -168,3 +174,55 @@ int xchg(const void *ctx)
 
 	return 0;
 }
+
+__u8 load_acquire8_value = 0x12;
+__u16 load_acquire16_value = 0x1234;
+__u32 load_acquire32_value = 0x12345678;
+__u64 load_acquire64_value = 0x1234567890abcdef;
+
+__u8 load_acquire8_result = 0;
+__u16 load_acquire16_result = 0;
+__u32 load_acquire32_result = 0;
+__u64 load_acquire64_result = 0;
+
+SEC("raw_tp/sys_enter")
+int load_acquire(const void *ctx)
+{
+	if (pid != (bpf_get_current_pid_tgid() >> 32))
+		return 0;
+
+#ifdef __BPF_FEATURE_LOAD_ACQ_STORE_REL
+	load_acquire8_result = __atomic_load_n(&load_acquire8_value, __ATOMIC_ACQUIRE);
+	load_acquire16_result = __atomic_load_n(&load_acquire16_value, __ATOMIC_ACQUIRE);
+	load_acquire32_result = __atomic_load_n(&load_acquire32_value, __ATOMIC_ACQUIRE);
+	load_acquire64_result = __atomic_load_n(&load_acquire64_value, __ATOMIC_ACQUIRE);
+#endif
+
+	return 0;
+}
+
+__u8 store_release8_result = 0;
+__u16 store_release16_result = 0;
+__u32 store_release32_result = 0;
+__u64 store_release64_result = 0;
+
+SEC("raw_tp/sys_enter")
+int store_release(const void *ctx)
+{
+	if (pid != (bpf_get_current_pid_tgid() >> 32))
+		return 0;
+
+#ifdef __BPF_FEATURE_LOAD_ACQ_STORE_REL
+	__u8 val8 = 0x12;
+	__u16 val16 = 0x1234;
+	__u32 val32 = 0x12345678;
+	__u64 val64 = 0x1234567890abcdef;
+
+	__atomic_store_n(&store_release8_result, val8, __ATOMIC_RELEASE);
+	__atomic_store_n(&store_release16_result, val16, __ATOMIC_RELEASE);
+	__atomic_store_n(&store_release32_result, val32, __ATOMIC_RELEASE);
+	__atomic_store_n(&store_release64_result, val64, __ATOMIC_RELEASE);
+#endif
+
+	return 0;
+}

diff --git a/tools/testing/selftests/bpf/verifier/atomic_invalid.c b/tools/testing/selftests/bpf/verifier/atomic_invalid.c
index 8c52ad682067..3f90d8f8a9c0 100644
--- a/tools/testing/selftests/bpf/verifier/atomic_invalid.c
+++ b/tools/testing/selftests/bpf/verifier/atomic_invalid.c
@@ -1,4 +1,4 @@
-#define __INVALID_ATOMIC_ACCESS_TEST(op) \
+#define __INVALID_ATOMIC_ACCESS_TEST(op, reg) \
 { \
 	"atomic " #op " access through non-pointer ", \
 	.insns = { \
@@ -9,15 +9,17 @@
 	BPF_EXIT_INSN(), \
 	}, \
 	.result = REJECT, \
-	.errstr = "R1 invalid mem access 'scalar'" \
+	.errstr = #reg " invalid mem access 'scalar'" \
 }
-__INVALID_ATOMIC_ACCESS_TEST(BPF_ADD),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_ADD | BPF_FETCH),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_AND),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_AND | BPF_FETCH),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_OR),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_OR | BPF_FETCH),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_XOR),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_XOR | BPF_FETCH),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_XCHG),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_CMPXCHG),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_ADD, R1),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_ADD | BPF_FETCH, R1),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_AND, R1),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_AND | BPF_FETCH, R1),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_OR, R1),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_OR | BPF_FETCH, R1),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_XOR, R1),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_XOR | BPF_FETCH, R1),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_XCHG, R1),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_CMPXCHG, R1),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_LOAD_ACQ, R0),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_STORE_REL, R1),
\ No newline at end of file

diff --git a/tools/testing/selftests/bpf/verifier/atomic_load.c b/tools/testing/selftests/bpf/verifier/atomic_load.c
new file mode 100644
index 000000000000..5186f71b6009
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_load.c
@@ -0,0 +1,71 @@
+{
+	"BPF_ATOMIC load-acquire, 8-bit",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	/* Write 0x12 to stack. */
+	BPF_ST_MEM(BPF_B, BPF_REG_10, -1, 0x12),
+	/* Load-acquire it from stack to R1. */
+	BPF_ATOMIC_OP(BPF_B, BPF_LOAD_ACQ, BPF_REG_1, BPF_REG_10, -1),
+	/* Check loaded value is 0x12. */
+	BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x12, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC load-acquire, 16-bit",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	/* Write 0x1234 to stack. */
+	BPF_ST_MEM(BPF_H, BPF_REG_10, -2, 0x1234),
+	/* Load-acquire it from stack to R1. */
+	BPF_ATOMIC_OP(BPF_H, BPF_LOAD_ACQ, BPF_REG_1, BPF_REG_10, -2),
+	/* Check loaded value is 0x1234. */
+	BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x1234, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC load-acquire, 32-bit",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	/* Write 0x12345678 to stack. */
+	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0x12345678),
+	/* Load-acquire it from stack to R1. */
+	BPF_ATOMIC_OP(BPF_W, BPF_LOAD_ACQ, BPF_REG_1, BPF_REG_10, -4),
+	/* Check loaded value is 0x12345678. */
+	BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x12345678, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC load-acquire, 64-bit",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	/* Save 0x1234567890abcdef to R1, then write it to stack. */
+	BPF_LD_IMM64(BPF_REG_1, 0x1234567890abcdef),
+	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+	/* Load-acquire it from stack to R2. */
+	BPF_ATOMIC_OP(BPF_DW, BPF_LOAD_ACQ, BPF_REG_2, BPF_REG_10, -8),
+	/* Check loaded value is 0x1234567890abcdef. */
+	BPF_JMP_REG(BPF_JEQ, BPF_REG_2, BPF_REG_1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"Cannot load-acquire from uninitialized src_reg",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_ATOMIC_OP(BPF_DW, BPF_LOAD_ACQ, BPF_REG_1, BPF_REG_2, -8),
+	BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	.errstr = "R2 !read_ok",
+},

diff --git a/tools/testing/selftests/bpf/verifier/atomic_store.c b/tools/testing/selftests/bpf/verifier/atomic_store.c
new file mode 100644
index 000000000000..23f2d5c46ea5
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_store.c
@@ -0,0 +1,70 @@
+{
+	"BPF_ATOMIC store-release, 8-bit",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	/* Store-release 0x12 to stack. */
+	BPF_MOV64_IMM(BPF_REG_1, 0x12),
+	BPF_ATOMIC_OP(BPF_B, BPF_STORE_REL, BPF_REG_10, BPF_REG_1, -1),
+	/* Check loaded value is 0x12. */
+	BPF_LDX_MEM(BPF_B, BPF_REG_2, BPF_REG_10, -1),
+	BPF_JMP32_REG(BPF_JEQ, BPF_REG_2, BPF_REG_1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC store-release, 16-bit",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	/* Store-release 0x1234 to stack. */
+	BPF_MOV64_IMM(BPF_REG_1, 0x1234),
+	BPF_ATOMIC_OP(BPF_H, BPF_STORE_REL, BPF_REG_10, BPF_REG_1, -2),
+	/* Check loaded value is 0x1234. */
+	BPF_LDX_MEM(BPF_H, BPF_REG_2, BPF_REG_10, -2),
+	BPF_JMP32_REG(BPF_JEQ, BPF_REG_2, BPF_REG_1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC store-release, 32-bit",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	/* Store-release 0x12345678 to stack. */
+	BPF_MOV64_IMM(BPF_REG_1, 0x12345678),
+	BPF_ATOMIC_OP(BPF_W, BPF_STORE_REL, BPF_REG_10, BPF_REG_1, -4),
+	/* Check loaded value is 0x12345678. */
+	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_10, -4),
+	BPF_JMP32_REG(BPF_JEQ, BPF_REG_2, BPF_REG_1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC store-release, 64-bit",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	/* Store-release 0x1234567890abcdef to stack. */
+	BPF_LD_IMM64(BPF_REG_1, 0x1234567890abcdef),
+	BPF_ATOMIC_OP(BPF_DW, BPF_STORE_REL, BPF_REG_10, BPF_REG_1, -8),
+	/* Check loaded value is 0x1234567890abcdef. */
+	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -8),
+	BPF_JMP_REG(BPF_JEQ, BPF_REG_2, BPF_REG_1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"Cannot store-release with uninitialized src_reg",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_ATOMIC_OP(BPF_DW, BPF_STORE_REL, BPF_REG_10, BPF_REG_2, -8),
+	BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	.errstr = "R2 !read_ok",
+},