From patchwork Fri Nov 27 17:57:26 2020
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 11937011
X-Patchwork-Delegate: bpf@iogearbox.net
Date: Fri, 27 Nov 2020
17:57:26 +0000 In-Reply-To: <20201127175738.1085417-1-jackmanb@google.com> Message-Id: <20201127175738.1085417-2-jackmanb@google.com> Mime-Version: 1.0 References: <20201127175738.1085417-1-jackmanb@google.com> X-Mailer: git-send-email 2.29.2.454.gaff20da3a2-goog Subject: [PATCH v2 bpf-next 01/13] bpf: x86: Factor out emission of ModR/M for *(reg + off) From: Brendan Jackman To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Yonghong Song , Daniel Borkmann , KP Singh , Florent Revest , linux-kernel@vger.kernel.org, Jann Horn , Brendan Jackman Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net The case for JITing atomics is about to get more complicated. Let's factor out some common code to make the review and result more readable. NB the atomics code doesn't yet use the new helper - a subsequent patch will add its use as a side-effect of other changes. Signed-off-by: Brendan Jackman --- arch/x86/net/bpf_jit_comp.c | 42 +++++++++++++++++++++---------------- 1 file changed, 24 insertions(+), 18 deletions(-) diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index 796506dcfc42..94b17bd30e00 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -681,6 +681,27 @@ static void emit_mov_reg(u8 **pprog, bool is64, u32 dst_reg, u32 src_reg) *pprog = prog; } +/* Emit the ModR/M byte for addressing *(r1 + off) and r2 */ +static void emit_modrm_dstoff(u8 **pprog, u32 r1, u32 r2, int off) +{ + u8 *prog = *pprog; + int cnt = 0; + + if (is_imm8(off)) { + /* 1-byte signed displacement. + * + * If off == 0 we could skip this and save one extra byte, but + * special case of x86 R13 which always needs an offset is not + * worth the hassle + */ + EMIT2(add_2reg(0x40, r1, r2), off); + } else { + /* 4-byte signed displacement */ + EMIT1_off32(add_2reg(0x80, r1, r2), off); + } + *pprog = prog; +} + /* LDX: dst_reg = *(u8*)(src_reg + off) */ static void emit_ldx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off) { @@ -708,15 +729,7 @@ static void emit_ldx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off) EMIT2(add_2mod(0x48, src_reg, dst_reg), 0x8B); break; } - /* - * If insn->off == 0 we can save one extra byte, but - * special case of x86 R13 which always needs an offset - * is not worth the hassle - */ - if (is_imm8(off)) - EMIT2(add_2reg(0x40, src_reg, dst_reg), off); - else - EMIT1_off32(add_2reg(0x80, src_reg, dst_reg), off); + emit_modrm_dstoff(&prog, src_reg, dst_reg, off); *pprog = prog; } @@ -751,10 +764,7 @@ static void emit_stx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off) EMIT2(add_2mod(0x48, dst_reg, src_reg), 0x89); break; } - if (is_imm8(off)) - EMIT2(add_2reg(0x40, dst_reg, src_reg), off); - else - EMIT1_off32(add_2reg(0x80, dst_reg, src_reg), off); + emit_modrm_dstoff(&prog, dst_reg, src_reg, off); *pprog = prog; } @@ -1240,11 +1250,7 @@ st: if (is_imm8(insn->off)) goto xadd; case BPF_STX | BPF_XADD | BPF_DW: EMIT3(0xF0, add_2mod(0x48, dst_reg, src_reg), 0x01); -xadd: if (is_imm8(insn->off)) - EMIT2(add_2reg(0x40, dst_reg, src_reg), insn->off); - else - EMIT1_off32(add_2reg(0x80, dst_reg, src_reg), - insn->off); +xadd: emit_modrm_dstoff(&prog, dst_reg, src_reg, insn->off); break; /* call */ From patchwork Fri Nov 27 17:57:27 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 11937015 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 
Date: Fri, 27 Nov 2020 17:57:27 +0000
In-Reply-To: <20201127175738.1085417-1-jackmanb@google.com>
Message-Id: <20201127175738.1085417-3-jackmanb@google.com>
References: <20201127175738.1085417-1-jackmanb@google.com>
Subject: [PATCH v2
bpf-next 02/13] bpf: x86: Factor out emission of REX byte From: Brendan Jackman To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Yonghong Song , Daniel Borkmann , KP Singh , Florent Revest , linux-kernel@vger.kernel.org, Jann Horn , Brendan Jackman Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net The JIT case for encoding atomic ops is about to get more complicated. In order to make the review & resulting code easier, let's factor out some shared helpers. Signed-off-by: Brendan Jackman --- arch/x86/net/bpf_jit_comp.c | 39 ++++++++++++++++++++++--------------- 1 file changed, 23 insertions(+), 16 deletions(-) diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index 94b17bd30e00..a839c1a54276 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -702,6 +702,21 @@ static void emit_modrm_dstoff(u8 **pprog, u32 r1, u32 r2, int off) *pprog = prog; } +/* + * Emit a REX byte if it will be necessary to address these registers + */ +static void maybe_emit_rex(u8 **pprog, u32 reg_rm, u32 reg_reg, bool wide) +{ + u8 *prog = *pprog; + int cnt = 0; + + if (wide) + EMIT1(add_2mod(0x48, reg_rm, reg_reg)); + else if (is_ereg(reg_rm) || is_ereg(reg_reg)) + EMIT1(add_2mod(0x40, reg_rm, reg_reg)); + *pprog = prog; +} + /* LDX: dst_reg = *(u8*)(src_reg + off) */ static void emit_ldx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off) { @@ -854,10 +869,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, case BPF_OR: b2 = 0x09; break; case BPF_XOR: b2 = 0x31; break; } - if (BPF_CLASS(insn->code) == BPF_ALU64) - EMIT1(add_2mod(0x48, dst_reg, src_reg)); - else if (is_ereg(dst_reg) || is_ereg(src_reg)) - EMIT1(add_2mod(0x40, dst_reg, src_reg)); + maybe_emit_rex(&prog, dst_reg, src_reg, + BPF_CLASS(insn->code) == BPF_ALU64); EMIT2(b2, add_2reg(0xC0, dst_reg, src_reg)); break; @@ -1301,20 +1314,16 @@ xadd: emit_modrm_dstoff(&prog, dst_reg, src_reg, insn->off); case BPF_JMP32 | BPF_JSGE | BPF_X: case BPF_JMP32 | BPF_JSLE | BPF_X: /* cmp dst_reg, src_reg */ - if (BPF_CLASS(insn->code) == BPF_JMP) - EMIT1(add_2mod(0x48, dst_reg, src_reg)); - else if (is_ereg(dst_reg) || is_ereg(src_reg)) - EMIT1(add_2mod(0x40, dst_reg, src_reg)); + maybe_emit_rex(&prog, dst_reg, src_reg, + BPF_CLASS(insn->code) == BPF_JMP); EMIT2(0x39, add_2reg(0xC0, dst_reg, src_reg)); goto emit_cond_jmp; case BPF_JMP | BPF_JSET | BPF_X: case BPF_JMP32 | BPF_JSET | BPF_X: /* test dst_reg, src_reg */ - if (BPF_CLASS(insn->code) == BPF_JMP) - EMIT1(add_2mod(0x48, dst_reg, src_reg)); - else if (is_ereg(dst_reg) || is_ereg(src_reg)) - EMIT1(add_2mod(0x40, dst_reg, src_reg)); + maybe_emit_rex(&prog, dst_reg, src_reg, + BPF_CLASS(insn->code) == BPF_JMP); EMIT2(0x85, add_2reg(0xC0, dst_reg, src_reg)); goto emit_cond_jmp; @@ -1350,10 +1359,8 @@ xadd: emit_modrm_dstoff(&prog, dst_reg, src_reg, insn->off); case BPF_JMP32 | BPF_JSLE | BPF_K: /* test dst_reg, dst_reg to save one extra byte */ if (imm32 == 0) { - if (BPF_CLASS(insn->code) == BPF_JMP) - EMIT1(add_2mod(0x48, dst_reg, dst_reg)); - else if (is_ereg(dst_reg)) - EMIT1(add_2mod(0x40, dst_reg, dst_reg)); + maybe_emit_rex(&prog, dst_reg, dst_reg, + BPF_CLASS(insn->code) == BPF_JMP); EMIT2(0x85, add_2reg(0xC0, dst_reg, dst_reg)); goto emit_cond_jmp; } From patchwork Fri Nov 27 17:57:28 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 11937013 X-Patchwork-Delegate: bpf@iogearbox.net 
Date: Fri, 27 Nov 2020 17:57:28 +0000
In-Reply-To: <20201127175738.1085417-1-jackmanb@google.com>
Message-Id: <20201127175738.1085417-4-jackmanb@google.com>
References: <20201127175738.1085417-1-jackmanb@google.com>
Subject: [PATCH v2 bpf-next 03/13] bpf: x86: Factor out function to emit NEG
From: Brendan Jackman
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann, KP Singh, Florent Revest, linux-kernel@vger.kernel.org, Jann Horn, Brendan Jackman

There's currently only one usage of this, but the implementation of atomic_sub
will add another.

Signed-off-by: Brendan Jackman
---
 arch/x86/net/bpf_jit_comp.c | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index a839c1a54276..49dea0c1a130 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -783,6 +783,22 @@ static void emit_stx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
 	*pprog = prog;
 }
 
+
+static void emit_neg(u8 **pprog, u32 reg, bool is64)
+{
+	u8 *prog = *pprog;
+	int cnt = 0;
+
+	/* Emit REX byte if necessary */
+	if (is64)
+		EMIT1(add_1mod(0x48, reg));
+	else if (is_ereg(reg))
+		EMIT1(add_1mod(0x40, reg));
+
+	EMIT2(0xF7, add_1reg(0xD8, reg)); /* x86 NEG */
+	*pprog = prog;
+}
+
 static bool ex_handler_bpf(const struct exception_table_entry *x,
 			   struct pt_regs *regs, int trapnr,
 			   unsigned long error_code, unsigned long fault_addr)
@@ -884,11 +900,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 		/* neg dst */
 		case BPF_ALU | BPF_NEG:
 		case BPF_ALU64 | BPF_NEG:
-			if (BPF_CLASS(insn->code) == BPF_ALU64)
-				EMIT1(add_1mod(0x48, dst_reg));
-			else if (is_ereg(dst_reg))
-				EMIT1(add_1mod(0x40, dst_reg));
-			EMIT2(0xF7, add_1reg(0xD8, dst_reg));
+			emit_neg(&prog, dst_reg,
+				 BPF_CLASS(insn->code) == BPF_ALU64);
 			break;
 
 		case BPF_ALU | BPF_ADD | BPF_K:

From patchwork Fri Nov 27 17:57:29 2020
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 11937017
X-Patchwork-Delegate: bpf@iogearbox.net
Date: Fri, 27 Nov 2020 17:57:29 +0000
In-Reply-To: <20201127175738.1085417-1-jackmanb@google.com>
Message-Id: <20201127175738.1085417-5-jackmanb@google.com>
References: <20201127175738.1085417-1-jackmanb@google.com>
Subject: [PATCH v2 bpf-next 04/13] bpf: x86: Factor out a lookup table for some ALU opcodes
From: Brendan Jackman
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann, KP Singh, Florent Revest, linux-kernel@vger.kernel.org, Jann Horn, Brendan Jackman

A later commit will need to look up a subset of these opcodes. To avoid
duplicating code, pull out a table. The shift opcodes won't be needed by that
later commit, but they're already duplicated, so fold them into the table
anyway.
Signed-off-by: Brendan Jackman --- arch/x86/net/bpf_jit_comp.c | 33 +++++++++++++++------------------ 1 file changed, 15 insertions(+), 18 deletions(-) diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index 49dea0c1a130..9ecee9d018ac 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -205,6 +205,18 @@ static u8 add_2reg(u8 byte, u32 dst_reg, u32 src_reg) return byte + reg2hex[dst_reg] + (reg2hex[src_reg] << 3); } +/* Some 1-byte opcodes for binary ALU operations */ +static u8 simple_alu_opcodes[] = { + [BPF_ADD] = 0x01, + [BPF_SUB] = 0x29, + [BPF_AND] = 0x21, + [BPF_OR] = 0x09, + [BPF_XOR] = 0x31, + [BPF_LSH] = 0xE0, + [BPF_RSH] = 0xE8, + [BPF_ARSH] = 0xF8, +}; + static void jit_fill_hole(void *area, unsigned int size) { /* Fill whole space with INT3 instructions */ @@ -878,15 +890,9 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, case BPF_ALU64 | BPF_AND | BPF_X: case BPF_ALU64 | BPF_OR | BPF_X: case BPF_ALU64 | BPF_XOR | BPF_X: - switch (BPF_OP(insn->code)) { - case BPF_ADD: b2 = 0x01; break; - case BPF_SUB: b2 = 0x29; break; - case BPF_AND: b2 = 0x21; break; - case BPF_OR: b2 = 0x09; break; - case BPF_XOR: b2 = 0x31; break; - } maybe_emit_rex(&prog, dst_reg, src_reg, BPF_CLASS(insn->code) == BPF_ALU64); + b2 = simple_alu_opcodes[BPF_OP(insn->code)]; EMIT2(b2, add_2reg(0xC0, dst_reg, src_reg)); break; @@ -1063,12 +1069,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, else if (is_ereg(dst_reg)) EMIT1(add_1mod(0x40, dst_reg)); - switch (BPF_OP(insn->code)) { - case BPF_LSH: b3 = 0xE0; break; - case BPF_RSH: b3 = 0xE8; break; - case BPF_ARSH: b3 = 0xF8; break; - } - + b3 = simple_alu_opcodes[BPF_OP(insn->code)]; if (imm32 == 1) EMIT2(0xD1, add_1reg(b3, dst_reg)); else @@ -1102,11 +1103,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, else if (is_ereg(dst_reg)) EMIT1(add_1mod(0x40, dst_reg)); - switch (BPF_OP(insn->code)) { - case BPF_LSH: b3 = 0xE0; break; - case BPF_RSH: b3 = 0xE8; break; - case BPF_ARSH: b3 = 0xF8; break; - } + b3 = simple_alu_opcodes[BPF_OP(insn->code)]; EMIT2(0xD3, add_1reg(b3, dst_reg)); if (src_reg != BPF_REG_4) From patchwork Fri Nov 27 17:57:30 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 11937023 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 66509C64E90 for ; Fri, 27 Nov 2020 17:58:24 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 224F32224A for ; Fri, 27 Nov 2020 17:58:24 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="H6wS+llR" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732892AbgK0R6C (ORCPT ); Fri, 27 Nov 2020 12:58:02 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37754 "EHLO lindbergh.monkeyblade.net" 
Date: Fri, 27 Nov 2020 17:57:30 +0000
In-Reply-To: <20201127175738.1085417-1-jackmanb@google.com>
Message-Id: <20201127175738.1085417-6-jackmanb@google.com>
References: <20201127175738.1085417-1-jackmanb@google.com>
Subject: [PATCH v2 bpf-next 05/13] bpf: Rename BPF_XADD and prepare to encode other atomics in .imm
From: Brendan Jackman
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann, KP Singh, Florent Revest, linux-kernel@vger.kernel.org, Jann Horn, Brendan Jackman

A subsequent patch will add additional atomic operations. These new operations
will use the same opcode field as the existing XADD, with the immediate
discriminating between the different operations.

In preparation, rename the instruction mode to BPF_ATOMIC and start calling the
zero immediate BPF_ADD. This is possible (it doesn't break existing valid BPF
programs) because the immediate field is currently reserved MBZ (must be zero)
and BPF_ADD is zero.

All uses are removed from the tree, but the BPF_XADD definition is kept around
to avoid breaking builds for people including kernel headers.
Signed-off-by: Brendan Jackman --- Documentation/networking/filter.rst | 30 +++++++----- arch/arm/net/bpf_jit_32.c | 7 ++- arch/arm64/net/bpf_jit_comp.c | 16 +++++-- arch/mips/net/ebpf_jit.c | 11 +++-- arch/powerpc/net/bpf_jit_comp64.c | 25 ++++++++-- arch/riscv/net/bpf_jit_comp32.c | 20 ++++++-- arch/riscv/net/bpf_jit_comp64.c | 16 +++++-- arch/s390/net/bpf_jit_comp.c | 27 ++++++----- arch/sparc/net/bpf_jit_comp_64.c | 17 +++++-- arch/x86/net/bpf_jit_comp.c | 46 ++++++++++++++----- arch/x86/net/bpf_jit_comp32.c | 6 +-- drivers/net/ethernet/netronome/nfp/bpf/jit.c | 14 ++++-- drivers/net/ethernet/netronome/nfp/bpf/main.h | 4 +- .../net/ethernet/netronome/nfp/bpf/verifier.c | 15 ++++-- include/linux/filter.h | 8 ++-- include/uapi/linux/bpf.h | 3 +- kernel/bpf/core.c | 31 +++++++++---- kernel/bpf/disasm.c | 6 ++- kernel/bpf/verifier.c | 24 ++++++---- lib/test_bpf.c | 2 +- samples/bpf/bpf_insn.h | 4 +- samples/bpf/sock_example.c | 2 +- samples/bpf/test_cgrp2_attach.c | 4 +- tools/include/linux/filter.h | 7 +-- tools/include/uapi/linux/bpf.h | 3 +- .../bpf/prog_tests/cgroup_attach_multi.c | 4 +- tools/testing/selftests/bpf/verifier/ctx.c | 7 ++- .../testing/selftests/bpf/verifier/leak_ptr.c | 4 +- tools/testing/selftests/bpf/verifier/unpriv.c | 3 +- tools/testing/selftests/bpf/verifier/xadd.c | 2 +- 30 files changed, 248 insertions(+), 120 deletions(-) diff --git a/Documentation/networking/filter.rst b/Documentation/networking/filter.rst index debb59e374de..1583d59d806d 100644 --- a/Documentation/networking/filter.rst +++ b/Documentation/networking/filter.rst @@ -1006,13 +1006,13 @@ Size modifier is one of ... Mode modifier is one of:: - BPF_IMM 0x00 /* used for 32-bit mov in classic BPF and 64-bit in eBPF */ - BPF_ABS 0x20 - BPF_IND 0x40 - BPF_MEM 0x60 - BPF_LEN 0x80 /* classic BPF only, reserved in eBPF */ - BPF_MSH 0xa0 /* classic BPF only, reserved in eBPF */ - BPF_XADD 0xc0 /* eBPF only, exclusive add */ + BPF_IMM 0x00 /* used for 32-bit mov in classic BPF and 64-bit in eBPF */ + BPF_ABS 0x20 + BPF_IND 0x40 + BPF_MEM 0x60 + BPF_LEN 0x80 /* classic BPF only, reserved in eBPF */ + BPF_MSH 0xa0 /* classic BPF only, reserved in eBPF */ + BPF_ATOMIC 0xc0 /* eBPF only, atomic operations */ eBPF has two non-generic instructions: (BPF_ABS | | BPF_LD) and (BPF_IND | | BPF_LD) which are used to access packet data. @@ -1044,11 +1044,19 @@ Unlike classic BPF instruction set, eBPF has generic load/store operations:: BPF_MEM | | BPF_STX: *(size *) (dst_reg + off) = src_reg BPF_MEM | | BPF_ST: *(size *) (dst_reg + off) = imm32 BPF_MEM | | BPF_LDX: dst_reg = *(size *) (src_reg + off) - BPF_XADD | BPF_W | BPF_STX: lock xadd *(u32 *)(dst_reg + off16) += src_reg - BPF_XADD | BPF_DW | BPF_STX: lock xadd *(u64 *)(dst_reg + off16) += src_reg -Where size is one of: BPF_B or BPF_H or BPF_W or BPF_DW. Note that 1 and -2 byte atomic increments are not supported. +Where size is one of: BPF_B or BPF_H or BPF_W or BPF_DW. + +It also includes atomic operations, which use the immediate field for extra +encoding. + + .imm = BPF_ADD, .code = BPF_ATOMIC | BPF_W | BPF_STX: lock xadd *(u32 *)(dst_reg + off16) += src_reg + .imm = BPF_ADD, .code = BPF_ATOMIC | BPF_DW | BPF_STX: lock xadd *(u64 *)(dst_reg + off16) += src_reg + +Note that 1 and 2 byte atomic operations are not supported. + +You may encounter BPF_XADD - this is a legacy name for BPF_ATOMIC, referring to +the exclusive-add operation encoded when the immediate field is zero. 
eBPF has one 16-byte instruction: BPF_LD | BPF_DW | BPF_IMM which consists of two consecutive ``struct bpf_insn`` 8-byte blocks and interpreted as single diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c index 0207b6ea6e8a..897634d0a67c 100644 --- a/arch/arm/net/bpf_jit_32.c +++ b/arch/arm/net/bpf_jit_32.c @@ -1620,10 +1620,9 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx) } emit_str_r(dst_lo, tmp2, off, ctx, BPF_SIZE(code)); break; - /* STX XADD: lock *(u32 *)(dst + off) += src */ - case BPF_STX | BPF_XADD | BPF_W: - /* STX XADD: lock *(u64 *)(dst + off) += src */ - case BPF_STX | BPF_XADD | BPF_DW: + /* Atomic ops */ + case BPF_STX | BPF_ATOMIC | BPF_W: + case BPF_STX | BPF_ATOMIC | BPF_DW: goto notyet; /* STX: *(size *)(dst + off) = src */ case BPF_STX | BPF_MEM | BPF_W: diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c index ef9f1d5e989d..f7b194878a99 100644 --- a/arch/arm64/net/bpf_jit_comp.c +++ b/arch/arm64/net/bpf_jit_comp.c @@ -875,10 +875,18 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, } break; - /* STX XADD: lock *(u32 *)(dst + off) += src */ - case BPF_STX | BPF_XADD | BPF_W: - /* STX XADD: lock *(u64 *)(dst + off) += src */ - case BPF_STX | BPF_XADD | BPF_DW: + case BPF_STX | BPF_ATOMIC | BPF_W: + case BPF_STX | BPF_ATOMIC | BPF_DW: + if (insn->imm != BPF_ADD) { + pr_err_once("unknown atomic op code %02x\n", insn->imm); + return -EINVAL; + } + + /* STX XADD: lock *(u32 *)(dst + off) += src + * and + * STX XADD: lock *(u64 *)(dst + off) += src + */ + if (!off) { reg = dst; } else { diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c index 561154cbcc40..939dd06764bc 100644 --- a/arch/mips/net/ebpf_jit.c +++ b/arch/mips/net/ebpf_jit.c @@ -1423,8 +1423,8 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, case BPF_STX | BPF_H | BPF_MEM: case BPF_STX | BPF_W | BPF_MEM: case BPF_STX | BPF_DW | BPF_MEM: - case BPF_STX | BPF_W | BPF_XADD: - case BPF_STX | BPF_DW | BPF_XADD: + case BPF_STX | BPF_W | BPF_ATOMIC: + case BPF_STX | BPF_DW | BPF_ATOMIC: if (insn->dst_reg == BPF_REG_10) { ctx->flags |= EBPF_SEEN_FP; dst = MIPS_R_SP; @@ -1438,7 +1438,12 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, src = ebpf_to_mips_reg(ctx, insn, src_reg_no_fp); if (src < 0) return src; - if (BPF_MODE(insn->code) == BPF_XADD) { + if (BPF_MODE(insn->code) == BPF_ATOMIC) { + if (insn->imm != BPF_ADD) { + pr_err("ATOMIC OP %02x NOT HANDLED\n", insn->imm); + return -EINVAL; + } + /* * If mem_off does not fit within the 9 bit ll/sc * instruction immediate field, use a temp reg. 
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c index 022103c6a201..aaf1a887f653 100644 --- a/arch/powerpc/net/bpf_jit_comp64.c +++ b/arch/powerpc/net/bpf_jit_comp64.c @@ -683,10 +683,18 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, break; /* - * BPF_STX XADD (atomic_add) + * BPF_STX ATOMIC (atomic ops) */ - /* *(u32 *)(dst + off) += src */ - case BPF_STX | BPF_XADD | BPF_W: + case BPF_STX | BPF_ATOMIC | BPF_W: + if (insn->imm != BPF_ADD) { + pr_err_ratelimited( + "eBPF filter atomic op code %02x (@%d) unsupported\n", + code, i); + return -ENOTSUPP; + } + + /* *(u32 *)(dst + off) += src */ + /* Get EA into TMP_REG_1 */ EMIT(PPC_RAW_ADDI(b2p[TMP_REG_1], dst_reg, off)); tmp_idx = ctx->idx * 4; @@ -699,8 +707,15 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, /* we're done if this succeeded */ PPC_BCC_SHORT(COND_NE, tmp_idx); break; - /* *(u64 *)(dst + off) += src */ - case BPF_STX | BPF_XADD | BPF_DW: + case BPF_STX | BPF_ATOMIC | BPF_DW: + if (insn->imm != BPF_ADD) { + pr_err_ratelimited( + "eBPF filter atomic op code %02x (@%d) unsupported\n", + code, i); + return -ENOTSUPP; + } + /* *(u64 *)(dst + off) += src */ + EMIT(PPC_RAW_ADDI(b2p[TMP_REG_1], dst_reg, off)); tmp_idx = ctx->idx * 4; EMIT(PPC_RAW_LDARX(b2p[TMP_REG_2], 0, b2p[TMP_REG_1], 0)); diff --git a/arch/riscv/net/bpf_jit_comp32.c b/arch/riscv/net/bpf_jit_comp32.c index 579575f9cdae..a9ef808b235f 100644 --- a/arch/riscv/net/bpf_jit_comp32.c +++ b/arch/riscv/net/bpf_jit_comp32.c @@ -881,7 +881,7 @@ static int emit_store_r64(const s8 *dst, const s8 *src, s16 off, const s8 *rd = bpf_get_reg64(dst, tmp1, ctx); const s8 *rs = bpf_get_reg64(src, tmp2, ctx); - if (mode == BPF_XADD && size != BPF_W) + if (mode == BPF_ATOMIC && (size != BPF_W || imm != BPF_ADD)) return -1; emit_imm(RV_REG_T0, off, ctx); @@ -899,7 +899,7 @@ static int emit_store_r64(const s8 *dst, const s8 *src, s16 off, case BPF_MEM: emit(rv_sw(RV_REG_T0, 0, lo(rs)), ctx); break; - case BPF_XADD: + case BPF_ATOMIC: /* .imm checked above - only BPF_ADD allowed */ emit(rv_amoadd_w(RV_REG_ZERO, lo(rs), RV_REG_T0, 0, 0), ctx); break; @@ -1260,7 +1260,6 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx, case BPF_STX | BPF_MEM | BPF_H: case BPF_STX | BPF_MEM | BPF_W: case BPF_STX | BPF_MEM | BPF_DW: - case BPF_STX | BPF_XADD | BPF_W: if (BPF_CLASS(code) == BPF_ST) { emit_imm32(tmp2, imm, ctx); src = tmp2; @@ -1271,8 +1270,21 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx, return -1; break; + case BPF_STX | BPF_ATOMIC | BPF_W: + if (insn->imm != BPF_ADD) { + pr_info_once( + "bpf-jit: not supported: atomic operation %02x ***\n", + insn->imm); + return -EFAULT; + } + + if (emit_store_r64(dst, src, off, ctx, BPF_SIZE(code), + BPF_MODE(code))) + return -1; + break; + /* No hardware support for 8-byte atomics in RV32. */ - case BPF_STX | BPF_XADD | BPF_DW: + case BPF_STX | BPF_ATOMIC | BPF_DW: /* Fallthrough. 
*/ notsupported: diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c index 8a56b5293117..b44ff52f84a6 100644 --- a/arch/riscv/net/bpf_jit_comp64.c +++ b/arch/riscv/net/bpf_jit_comp64.c @@ -1027,10 +1027,18 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx, emit_add(RV_REG_T1, RV_REG_T1, rd, ctx); emit_sd(RV_REG_T1, 0, rs, ctx); break; - /* STX XADD: lock *(u32 *)(dst + off) += src */ - case BPF_STX | BPF_XADD | BPF_W: - /* STX XADD: lock *(u64 *)(dst + off) += src */ - case BPF_STX | BPF_XADD | BPF_DW: + case BPF_STX | BPF_ATOMIC | BPF_W: + case BPF_STX | BPF_ATOMIC | BPF_DW: + if (insn->imm != BPF_ADD) { + pr_err("bpf-jit: not supported: atomic operation %02x ***\n", + insn->imm); + return -EINVAL; + } + + /* atomic_add: lock *(u32 *)(dst + off) += src + * atomic_add: lock *(u64 *)(dst + off) += src + */ + if (off) { if (is_12b_int(off)) { emit_addi(RV_REG_T1, rd, off, ctx); diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c index 0a4182792876..f973e2ead197 100644 --- a/arch/s390/net/bpf_jit_comp.c +++ b/arch/s390/net/bpf_jit_comp.c @@ -1205,18 +1205,23 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, jit->seen |= SEEN_MEM; break; /* - * BPF_STX XADD (atomic_add) + * BPF_ATOMIC */ - case BPF_STX | BPF_XADD | BPF_W: /* *(u32 *)(dst + off) += src */ - /* laal %w0,%src,off(%dst) */ - EMIT6_DISP_LH(0xeb000000, 0x00fa, REG_W0, src_reg, - dst_reg, off); - jit->seen |= SEEN_MEM; - break; - case BPF_STX | BPF_XADD | BPF_DW: /* *(u64 *)(dst + off) += src */ - /* laalg %w0,%src,off(%dst) */ - EMIT6_DISP_LH(0xeb000000, 0x00ea, REG_W0, src_reg, - dst_reg, off); + case BPF_STX | BPF_ATOMIC | BPF_DW: + case BPF_STX | BPF_ATOMIC | BPF_W: + if (insn->imm != BPF_ADD) { + pr_err("Unknown atomic operation %02x\n", insn->imm); + return -1; + } + + /* *(u32/u64 *)(dst + off) += src + * + * BFW_W: laal %w0,%src,off(%dst) + * BPF_DW: laalg %w0,%src,off(%dst) + */ + EMIT6_DISP_LH(0xeb000000, + BPF_SIZE(insn->code) == BPF_W ? 
0x00fa : 0x00ea, + REG_W0, src_reg, dst_reg, off); jit->seen |= SEEN_MEM; break; /* diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c index 3364e2a00989..4b8d3c65d266 100644 --- a/arch/sparc/net/bpf_jit_comp_64.c +++ b/arch/sparc/net/bpf_jit_comp_64.c @@ -1366,12 +1366,18 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx) break; } - /* STX XADD: lock *(u32 *)(dst + off) += src */ - case BPF_STX | BPF_XADD | BPF_W: { + case BPF_STX | BPF_ATOMIC | BPF_W: { const u8 tmp = bpf2sparc[TMP_REG_1]; const u8 tmp2 = bpf2sparc[TMP_REG_2]; const u8 tmp3 = bpf2sparc[TMP_REG_3]; + if (insn->imm != BPF_ADD) { + pr_err_once("unknown atomic op %02x\n", insn->imm); + return -EINVAL; + } + + /* lock *(u32 *)(dst + off) += src */ + if (insn->dst_reg == BPF_REG_FP) ctx->saw_frame_pointer = true; @@ -1390,11 +1396,16 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx) break; } /* STX XADD: lock *(u64 *)(dst + off) += src */ - case BPF_STX | BPF_XADD | BPF_DW: { + case BPF_STX | BPF_ATOMIC | BPF_DW: { const u8 tmp = bpf2sparc[TMP_REG_1]; const u8 tmp2 = bpf2sparc[TMP_REG_2]; const u8 tmp3 = bpf2sparc[TMP_REG_3]; + if (insn->imm != BPF_ADD) { + pr_err_once("unknown atomic op %02x\n", insn->imm); + return -EINVAL; + } + if (insn->dst_reg == BPF_REG_FP) ctx->saw_frame_pointer = true; diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index 9ecee9d018ac..7c47ad70ddb4 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -811,6 +811,34 @@ static void emit_neg(u8 **pprog, u32 reg, bool is64) *pprog = prog; } +static int emit_atomic(u8 **pprog, u8 atomic_op, + u32 dst_reg, u32 src_reg, s16 off, u8 bpf_size) +{ + u8 *prog = *pprog; + int cnt = 0; + + EMIT1(0xF0); /* lock prefix */ + + maybe_emit_rex(&prog, dst_reg, src_reg, bpf_size == BPF_DW); + + /* emit opcode */ + switch (atomic_op) { + case BPF_ADD: + /* lock *(u32/u64*)(dst_reg + off) = src_reg */ + EMIT1(simple_alu_opcodes[atomic_op]); + break; + default: + pr_err("bpf_jit: unknown atomic opcode %02x\n", atomic_op); + return -EFAULT; + } + + emit_modrm_dstoff(&prog, dst_reg, src_reg, off); + + *pprog = prog; + return 0; +} + + static bool ex_handler_bpf(const struct exception_table_entry *x, struct pt_regs *regs, int trapnr, unsigned long error_code, unsigned long fault_addr) @@ -855,6 +883,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, int i, cnt = 0, excnt = 0; int proglen = 0; u8 *prog = temp; + int err; detect_reg_usage(insn, insn_cnt, callee_regs_used, &tail_call_seen); @@ -1263,17 +1292,12 @@ st: if (is_imm8(insn->off)) } break; - /* STX XADD: lock *(u32*)(dst_reg + off) += src_reg */ - case BPF_STX | BPF_XADD | BPF_W: - /* Emit 'lock add dword ptr [rax + off], eax' */ - if (is_ereg(dst_reg) || is_ereg(src_reg)) - EMIT3(0xF0, add_2mod(0x40, dst_reg, src_reg), 0x01); - else - EMIT2(0xF0, 0x01); - goto xadd; - case BPF_STX | BPF_XADD | BPF_DW: - EMIT3(0xF0, add_2mod(0x48, dst_reg, src_reg), 0x01); -xadd: emit_modrm_dstoff(&prog, dst_reg, src_reg, insn->off); + case BPF_STX | BPF_ATOMIC | BPF_W: + case BPF_STX | BPF_ATOMIC | BPF_DW: + err = emit_atomic(&prog, insn->imm, dst_reg, src_reg, + insn->off, BPF_SIZE(insn->code)); + if (err) + return err; break; /* call */ diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c index 96fde03aa987..d17b67c69f89 100644 --- a/arch/x86/net/bpf_jit_comp32.c +++ b/arch/x86/net/bpf_jit_comp32.c @@ -2243,10 +2243,8 @@ emit_cond_jmp: jmp_cond = 
get_cond_jmp_opcode(BPF_OP(code), false); return -EFAULT; } break; - /* STX XADD: lock *(u32 *)(dst + off) += src */ - case BPF_STX | BPF_XADD | BPF_W: - /* STX XADD: lock *(u64 *)(dst + off) += src */ - case BPF_STX | BPF_XADD | BPF_DW: + case BPF_STX | BPF_ATOMIC | BPF_W: + case BPF_STX | BPF_ATOMIC | BPF_DW: goto notyet; case BPF_JMP | BPF_EXIT: if (seen_exit) { diff --git a/drivers/net/ethernet/netronome/nfp/bpf/jit.c b/drivers/net/ethernet/netronome/nfp/bpf/jit.c index 0a721f6e8676..1c9efc74edfc 100644 --- a/drivers/net/ethernet/netronome/nfp/bpf/jit.c +++ b/drivers/net/ethernet/netronome/nfp/bpf/jit.c @@ -3109,13 +3109,19 @@ mem_xadd(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, bool is64) return 0; } -static int mem_xadd4(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) +static int mem_atomic4(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) { + if (meta->insn.off != BPF_ADD) + return -EOPNOTSUPP; + return mem_xadd(nfp_prog, meta, false); } -static int mem_xadd8(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) +static int mem_atomic8(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) { + if (meta->insn.off != BPF_ADD) + return -EOPNOTSUPP; + return mem_xadd(nfp_prog, meta, true); } @@ -3475,8 +3481,8 @@ static const instr_cb_t instr_cb[256] = { [BPF_STX | BPF_MEM | BPF_H] = mem_stx2, [BPF_STX | BPF_MEM | BPF_W] = mem_stx4, [BPF_STX | BPF_MEM | BPF_DW] = mem_stx8, - [BPF_STX | BPF_XADD | BPF_W] = mem_xadd4, - [BPF_STX | BPF_XADD | BPF_DW] = mem_xadd8, + [BPF_STX | BPF_ATOMIC | BPF_W] = mem_atomic4, + [BPF_STX | BPF_ATOMIC | BPF_DW] = mem_atomic8, [BPF_ST | BPF_MEM | BPF_B] = mem_st1, [BPF_ST | BPF_MEM | BPF_H] = mem_st2, [BPF_ST | BPF_MEM | BPF_W] = mem_st4, diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.h b/drivers/net/ethernet/netronome/nfp/bpf/main.h index fac9c6f9e197..d0e17eebddd9 100644 --- a/drivers/net/ethernet/netronome/nfp/bpf/main.h +++ b/drivers/net/ethernet/netronome/nfp/bpf/main.h @@ -428,9 +428,9 @@ static inline bool is_mbpf_classic_store_pkt(const struct nfp_insn_meta *meta) return is_mbpf_classic_store(meta) && meta->ptr.type == PTR_TO_PACKET; } -static inline bool is_mbpf_xadd(const struct nfp_insn_meta *meta) +static inline bool is_mbpf_atomic(const struct nfp_insn_meta *meta) { - return (meta->insn.code & ~BPF_SIZE_MASK) == (BPF_STX | BPF_XADD); + return (meta->insn.code & ~BPF_SIZE_MASK) == (BPF_STX | BPF_ATOMIC); } static inline bool is_mbpf_mul(const struct nfp_insn_meta *meta) diff --git a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c index e92ee510fd52..9d235c0ce46a 100644 --- a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c +++ b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c @@ -479,7 +479,7 @@ nfp_bpf_check_ptr(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, pr_vlog(env, "map writes not supported\n"); return -EOPNOTSUPP; } - if (is_mbpf_xadd(meta)) { + if (is_mbpf_atomic(meta)) { err = nfp_bpf_map_mark_used(env, meta, reg, NFP_MAP_USE_ATOMIC_CNT); if (err) @@ -523,12 +523,17 @@ nfp_bpf_check_store(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, } static int -nfp_bpf_check_xadd(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, - struct bpf_verifier_env *env) +nfp_bpf_check_atomic(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, + struct bpf_verifier_env *env) { const struct bpf_reg_state *sreg = cur_regs(env) + meta->insn.src_reg; const struct bpf_reg_state *dreg = cur_regs(env) + meta->insn.dst_reg; + if (meta->insn.imm != 
BPF_ADD) { + pr_vlog(env, "atomic op not implemented: %d\n", meta->insn.imm); + return -EOPNOTSUPP; + } + if (dreg->type != PTR_TO_MAP_VALUE) { pr_vlog(env, "atomic add not to a map value pointer: %d\n", dreg->type); @@ -655,8 +660,8 @@ int nfp_verify_insn(struct bpf_verifier_env *env, int insn_idx, if (is_mbpf_store(meta)) return nfp_bpf_check_store(nfp_prog, meta, env); - if (is_mbpf_xadd(meta)) - return nfp_bpf_check_xadd(nfp_prog, meta, env); + if (is_mbpf_atomic(meta)) + return nfp_bpf_check_atomic(nfp_prog, meta, env); if (is_mbpf_alu(meta)) return nfp_bpf_check_alu(nfp_prog, meta, env); diff --git a/include/linux/filter.h b/include/linux/filter.h index 1b62397bd124..ce19988fb312 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -261,13 +261,15 @@ static inline bool insn_is_zext(const struct bpf_insn *insn) /* Atomic memory add, *(uint *)(dst_reg + off16) += src_reg */ -#define BPF_STX_XADD(SIZE, DST, SRC, OFF) \ +#define BPF_ATOMIC_ADD(SIZE, DST, SRC, OFF) \ ((struct bpf_insn) { \ - .code = BPF_STX | BPF_SIZE(SIZE) | BPF_XADD, \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ .dst_reg = DST, \ .src_reg = SRC, \ .off = OFF, \ - .imm = 0 }) + .imm = BPF_ADD }) +#define BPF_STX_XADD BPF_ATOMIC_ADD /* alias */ + /* Memory store, *(uint *) (dst_reg + off16) = imm32 */ diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index c3458ec1f30a..d0adc48db43c 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -19,7 +19,8 @@ /* ld/ldx fields */ #define BPF_DW 0x18 /* double word (64-bit) */ -#define BPF_XADD 0xc0 /* exclusive add */ +#define BPF_ATOMIC 0xc0 /* atomic memory ops - op type in immediate */ +#define BPF_XADD 0xc0 /* exclusive add - legacy name */ /* alu/jmp fields */ #define BPF_MOV 0xb0 /* mov reg to reg */ diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index ff55cbcfbab4..1fbb13d30bbc 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -1317,8 +1317,8 @@ EXPORT_SYMBOL_GPL(__bpf_call_base); INSN_3(STX, MEM, H), \ INSN_3(STX, MEM, W), \ INSN_3(STX, MEM, DW), \ - INSN_3(STX, XADD, W), \ - INSN_3(STX, XADD, DW), \ + INSN_3(STX, ATOMIC, W), \ + INSN_3(STX, ATOMIC, DW), \ /* Immediate based. */ \ INSN_3(ST, MEM, B), \ INSN_3(ST, MEM, H), \ @@ -1626,13 +1626,25 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack) LDX_PROBE(DW, 8) #undef LDX_PROBE - STX_XADD_W: /* lock xadd *(u32 *)(dst_reg + off16) += src_reg */ - atomic_add((u32) SRC, (atomic_t *)(unsigned long) - (DST + insn->off)); + STX_ATOMIC_W: + switch (IMM) { + case BPF_ADD: + /* lock xadd *(u32 *)(dst_reg + off16) += src_reg */ + atomic_add((u32) SRC, (atomic_t *)(unsigned long) + (DST + insn->off)); + default: + goto default_label; + } CONT; - STX_XADD_DW: /* lock xadd *(u64 *)(dst_reg + off16) += src_reg */ - atomic64_add((u64) SRC, (atomic64_t *)(unsigned long) - (DST + insn->off)); + STX_ATOMIC_DW: + switch (IMM) { + case BPF_ADD: + /* lock xadd *(u64 *)(dst_reg + off16) += src_reg */ + atomic64_add((u64) SRC, (atomic64_t *)(unsigned long) + (DST + insn->off)); + default: + goto default_label; + } CONT; default_label: @@ -1642,7 +1654,8 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack) * * Note, verifier whitelists all opcodes in bpf_opcode_in_insntable(). 
*/ - pr_warn("BPF interpreter: unknown opcode %02x\n", insn->code); + pr_warn("BPF interpreter: unknown opcode %02x (imm: 0x%x)\n", + insn->code, insn->imm); BUG_ON(1); return 0; } diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c index b44d8c447afd..37c8d6e9b4cc 100644 --- a/kernel/bpf/disasm.c +++ b/kernel/bpf/disasm.c @@ -153,14 +153,16 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs, bpf_ldst_string[BPF_SIZE(insn->code) >> 3], insn->dst_reg, insn->off, insn->src_reg); - else if (BPF_MODE(insn->code) == BPF_XADD) + else if (BPF_MODE(insn->code) == BPF_ATOMIC && + insn->imm == BPF_ADD) { verbose(cbs->private_data, "(%02x) lock *(%s *)(r%d %+d) += r%d\n", insn->code, bpf_ldst_string[BPF_SIZE(insn->code) >> 3], insn->dst_reg, insn->off, insn->src_reg); - else + } else { verbose(cbs->private_data, "BUG_%02x\n", insn->code); + } } else if (class == BPF_ST) { if (BPF_MODE(insn->code) != BPF_MEM) { verbose(cbs->private_data, "BUG_st_%02x\n", insn->code); diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index e333ce43f281..1947da617b03 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -3598,13 +3598,17 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn return err; } -static int check_xadd(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn) +static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn) { int err; - if ((BPF_SIZE(insn->code) != BPF_W && BPF_SIZE(insn->code) != BPF_DW) || - insn->imm != 0) { - verbose(env, "BPF_XADD uses reserved fields\n"); + if (insn->imm != BPF_ADD) { + verbose(env, "BPF_ATOMIC uses invalid atomic opcode %02x\n", insn->imm); + return -EINVAL; + } + + if (BPF_SIZE(insn->code) != BPF_W && BPF_SIZE(insn->code) != BPF_DW) { + verbose(env, "invalid atomic operand size\n"); return -EINVAL; } @@ -3627,19 +3631,19 @@ static int check_xadd(struct bpf_verifier_env *env, int insn_idx, struct bpf_ins is_pkt_reg(env, insn->dst_reg) || is_flow_key_reg(env, insn->dst_reg) || is_sk_reg(env, insn->dst_reg)) { - verbose(env, "BPF_XADD stores into R%d %s is not allowed\n", + verbose(env, "atomic stores into R%d %s is not allowed\n", insn->dst_reg, reg_type_str[reg_state(env, insn->dst_reg)->type]); return -EACCES; } - /* check whether atomic_add can read the memory */ + /* check whether we can read the memory */ err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off, BPF_SIZE(insn->code), BPF_READ, -1, true); if (err) return err; - /* check whether atomic_add can write into the same memory */ + /* check whether we can write into the same memory */ return check_mem_access(env, insn_idx, insn->dst_reg, insn->off, BPF_SIZE(insn->code), BPF_WRITE, -1, true); } @@ -9497,8 +9501,8 @@ static int do_check(struct bpf_verifier_env *env) } else if (class == BPF_STX) { enum bpf_reg_type *prev_dst_type, dst_reg_type; - if (BPF_MODE(insn->code) == BPF_XADD) { - err = check_xadd(env, env->insn_idx, insn); + if (BPF_MODE(insn->code) == BPF_ATOMIC) { + err = check_atomic(env, env->insn_idx, insn); if (err) return err; env->insn_idx++; @@ -9908,7 +9912,7 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env) if (BPF_CLASS(insn->code) == BPF_STX && ((BPF_MODE(insn->code) != BPF_MEM && - BPF_MODE(insn->code) != BPF_XADD) || insn->imm != 0)) { + BPF_MODE(insn->code) != BPF_ATOMIC) || insn->imm != 0)) { verbose(env, "BPF_STX uses reserved fields\n"); return -EINVAL; } diff --git a/lib/test_bpf.c b/lib/test_bpf.c index ca7d635bccd9..fbb13ef9207c 100644 --- 
a/lib/test_bpf.c +++ b/lib/test_bpf.c @@ -4295,7 +4295,7 @@ static struct bpf_test tests[] = { { { 0, 0xffffffff } }, .stack_depth = 40, }, - /* BPF_STX | BPF_XADD | BPF_W/DW */ + /* BPF_STX | BPF_ATOMIC | BPF_W/DW */ { "STX_XADD_W: Test: 0x12 + 0x10 = 0x22", .u.insns_int = { diff --git a/samples/bpf/bpf_insn.h b/samples/bpf/bpf_insn.h index 544237980582..db67a2847395 100644 --- a/samples/bpf/bpf_insn.h +++ b/samples/bpf/bpf_insn.h @@ -138,11 +138,11 @@ struct bpf_insn; #define BPF_STX_XADD(SIZE, DST, SRC, OFF) \ ((struct bpf_insn) { \ - .code = BPF_STX | BPF_SIZE(SIZE) | BPF_XADD, \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ .dst_reg = DST, \ .src_reg = SRC, \ .off = OFF, \ - .imm = 0 }) + .imm = BPF_ADD }) /* Memory store, *(uint *) (dst_reg + off16) = imm32 */ diff --git a/samples/bpf/sock_example.c b/samples/bpf/sock_example.c index 00aae1d33fca..b18fa8083137 100644 --- a/samples/bpf/sock_example.c +++ b/samples/bpf/sock_example.c @@ -54,7 +54,7 @@ static int test_sock(void) BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), BPF_MOV64_IMM(BPF_REG_1, 1), /* r1 = 1 */ - BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_DW, BPF_REG_0, BPF_REG_1, 0, 0), /* xadd r0 += r1 */ + BPF_ATOMIC_ADD(BPF_DW, BPF_REG_0, BPF_REG_1, 0), BPF_MOV64_IMM(BPF_REG_0, 0), /* r0 = 0 */ BPF_EXIT_INSN(), }; diff --git a/samples/bpf/test_cgrp2_attach.c b/samples/bpf/test_cgrp2_attach.c index 20fbd1241db3..61896c4f9322 100644 --- a/samples/bpf/test_cgrp2_attach.c +++ b/samples/bpf/test_cgrp2_attach.c @@ -53,7 +53,7 @@ static int prog_load(int map_fd, int verdict) BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), BPF_MOV64_IMM(BPF_REG_1, 1), /* r1 = 1 */ - BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_DW, BPF_REG_0, BPF_REG_1, 0, 0), /* xadd r0 += r1 */ + BPF_ATOMIC_ADD(BPF_DW, BPF_REG_0, BPF_REG_1, 0), /* Count bytes */ BPF_MOV64_IMM(BPF_REG_0, MAP_KEY_BYTES), /* r0 = 1 */ @@ -64,7 +64,7 @@ static int prog_load(int map_fd, int verdict) BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_6, offsetof(struct __sk_buff, len)), /* r1 = skb->len */ - BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_DW, BPF_REG_0, BPF_REG_1, 0, 0), /* xadd r0 += r1 */ + BPF_ATOMIC_ADD(BPF_DW, BPF_REG_0, BPF_REG_1, 0), BPF_MOV64_IMM(BPF_REG_0, verdict), /* r0 = verdict */ BPF_EXIT_INSN(), diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h index ca28b6ab8db7..95ff51d97f25 100644 --- a/tools/include/linux/filter.h +++ b/tools/include/linux/filter.h @@ -171,13 +171,14 @@ /* Atomic memory add, *(uint *)(dst_reg + off16) += src_reg */ -#define BPF_STX_XADD(SIZE, DST, SRC, OFF) \ +#define BPF_ATOMIC_ADD(SIZE, DST, SRC, OFF) \ ((struct bpf_insn) { \ - .code = BPF_STX | BPF_SIZE(SIZE) | BPF_XADD, \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ .dst_reg = DST, \ .src_reg = SRC, \ .off = OFF, \ - .imm = 0 }) + .imm = BPF_ADD }) +#define BPF_STX_XADD BPF_ATOMIC_ADD /* alias */ /* Memory store, *(uint *) (dst_reg + off16) = imm32 */ diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index c3458ec1f30a..d0adc48db43c 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -19,7 +19,8 @@ /* ld/ldx fields */ #define BPF_DW 0x18 /* double word (64-bit) */ -#define BPF_XADD 0xc0 /* exclusive add */ +#define BPF_ATOMIC 0xc0 /* atomic memory ops - op type in immediate */ +#define BPF_XADD 0xc0 /* 
exclusive add - legacy name */ /* alu/jmp fields */ #define BPF_MOV 0xb0 /* mov reg to reg */ diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c b/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c index b549fcfacc0b..882fce827c81 100644 --- a/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c +++ b/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c @@ -45,13 +45,13 @@ static int prog_load_cnt(int verdict, int val) BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), BPF_MOV64_IMM(BPF_REG_1, val), /* r1 = 1 */ - BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_DW, BPF_REG_0, BPF_REG_1, 0, 0), /* xadd r0 += r1 */ + BPF_ATOMIC_ADD(BPF_DW, BPF_REG_0, BPF_REG_1, 0), BPF_LD_MAP_FD(BPF_REG_1, cgroup_storage_fd), BPF_MOV64_IMM(BPF_REG_2, 0), BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage), BPF_MOV64_IMM(BPF_REG_1, val), - BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_W, BPF_REG_0, BPF_REG_1, 0, 0), + BPF_ATOMIC_ADD(BPF_W, BPF_REG_0, BPF_REG_1, 0), BPF_LD_MAP_FD(BPF_REG_1, percpu_cgroup_storage_fd), BPF_MOV64_IMM(BPF_REG_2, 0), diff --git a/tools/testing/selftests/bpf/verifier/ctx.c b/tools/testing/selftests/bpf/verifier/ctx.c index 93d6b1641481..a6d2d82b3447 100644 --- a/tools/testing/selftests/bpf/verifier/ctx.c +++ b/tools/testing/selftests/bpf/verifier/ctx.c @@ -10,14 +10,13 @@ .prog_type = BPF_PROG_TYPE_SCHED_CLS, }, { - "context stores via XADD", + "context stores via BPF_ATOMIC", .insns = { BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_W, BPF_REG_1, - BPF_REG_0, offsetof(struct __sk_buff, mark), 0), + BPF_ATOMIC_ADD(BPF_W, BPF_REG_1, BPF_REG_0, offsetof(struct __sk_buff, mark)), BPF_EXIT_INSN(), }, - .errstr = "BPF_XADD stores into R1 ctx is not allowed", + .errstr = "BPF_ATOMIC stores into R1 ctx is not allowed", .result = REJECT, .prog_type = BPF_PROG_TYPE_SCHED_CLS, }, diff --git a/tools/testing/selftests/bpf/verifier/leak_ptr.c b/tools/testing/selftests/bpf/verifier/leak_ptr.c index d6eec17f2cd2..f9a594b48fb3 100644 --- a/tools/testing/selftests/bpf/verifier/leak_ptr.c +++ b/tools/testing/selftests/bpf/verifier/leak_ptr.c @@ -13,7 +13,7 @@ .errstr_unpriv = "R2 leaks addr into mem", .result_unpriv = REJECT, .result = REJECT, - .errstr = "BPF_XADD stores into R1 ctx is not allowed", + .errstr = "BPF_ATOMIC stores into R1 ctx is not allowed", }, { "leak pointer into ctx 2", @@ -28,7 +28,7 @@ .errstr_unpriv = "R10 leaks addr into mem", .result_unpriv = REJECT, .result = REJECT, - .errstr = "BPF_XADD stores into R1 ctx is not allowed", + .errstr = "BPF_ATOMIC stores into R1 ctx is not allowed", }, { "leak pointer into ctx 3", diff --git a/tools/testing/selftests/bpf/verifier/unpriv.c b/tools/testing/selftests/bpf/verifier/unpriv.c index 91bb77c24a2e..85b5e8b70e5d 100644 --- a/tools/testing/selftests/bpf/verifier/unpriv.c +++ b/tools/testing/selftests/bpf/verifier/unpriv.c @@ -206,7 +206,8 @@ BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8), BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0), BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_DW, BPF_REG_10, BPF_REG_0, -8, 0), + BPF_RAW_INSN(BPF_STX | BPF_ATOMIC | BPF_DW, + BPF_REG_10, BPF_REG_0, -8, BPF_ADD), BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0), BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_hash_recalc), BPF_EXIT_INSN(), diff --git a/tools/testing/selftests/bpf/verifier/xadd.c b/tools/testing/selftests/bpf/verifier/xadd.c index c5de2e62cc8b..70a320505bf2 100644 ---
a/tools/testing/selftests/bpf/verifier/xadd.c +++ b/tools/testing/selftests/bpf/verifier/xadd.c @@ -51,7 +51,7 @@ BPF_EXIT_INSN(), }, .result = REJECT, - .errstr = "BPF_XADD stores into R2 pkt is not allowed", + .errstr = "BPF_ATOMIC stores into R2 pkt is not allowed", .prog_type = BPF_PROG_TYPE_XDP, .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, }, From patchwork Fri Nov 27 17:57:31 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 11937033 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5AA56C3E8C5 for ; Fri, 27 Nov 2020 17:58:49 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id ED89E2224A for ; Fri, 27 Nov 2020 17:58:48 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="Cp+XgwyL" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731497AbgK0R6F (ORCPT ); Fri, 27 Nov 2020 12:58:05 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37766 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1732896AbgK0R6E (ORCPT ); Fri, 27 Nov 2020 12:58:04 -0500 Received: from mail-wr1-x449.google.com (mail-wr1-x449.google.com [IPv6:2a00:1450:4864:20::449]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BBCE2C0613D2 for ; Fri, 27 Nov 2020 09:58:02 -0800 (PST) Received: by mail-wr1-x449.google.com with SMTP id g5so3771696wrp.5 for ; Fri, 27 Nov 2020 09:58:02 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=zbpGVgr8tNKDgthNqGrhordV2MhrhuH8IzD17QyNfdQ=; b=Cp+XgwyL/DQSmzZ1190ENwLk/KCOR1R/9fKDea7XXBnGJQKHwda/BkhBDsSBnP6+Gz b0dV/egvQo3YKiLXVOcjivnl4oW4rtubUUAnheZSrnGy08WoTHfHe4B0G1iIavdKtzaO Xo1E5o0R9ww4ZMpvHa+mpZDFY8crVQQAD+JGbhLm/AozjZQHti6pYWMLMaqpJy/K4/0q LEm/it1Oz0OnPFnXDh1TUDq9LHmrGOxl+7g1fauQ3CXxn+KSp7Po46OJwYlc2g6yE6Az EV++zDfV7Z13674e7QYD64BC67as9iD2sUwY6j+4YON+7sryWrC68j9MDqe8qXV6xri+ MyTw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=zbpGVgr8tNKDgthNqGrhordV2MhrhuH8IzD17QyNfdQ=; b=p9H9jWcaZO8bJ13tN+Rc9SCCDobBNdcV+EWyoWXdyhtej2nYSxWBiAMtFZo3MVxxxm akw8duZfclu9olYN+HL5c+SI2A5rV0+C2DG830vc9kAIIiB52mmz9bjSbAgc8JRyZjjD +DQuJVw/C7ua6bfIz5dwSo7E+wHAm2gj509deODs/NYcj38ED3awZLy9gd65owRACAv5 uIYYmZr26GhqOayCiD1aVm5U+kfO6JuN02j/03dwt1CCe76Gd7+9hH5TJXnPTgDlLF0s kiRVybxOkjAKiPiGiSHWrNl9gKDz2TPtYGUWLTKpqJQ7m0PNxEBwQb/NpMePyw+X88Ac S0uA== X-Gm-Message-State: AOAM533Bew9E3+tyg1kn8PVZusdGR/OFqGTMnhyoU8vO4gQkJ57aLZ7u fbpThdboSeWYr4aHQANfCjBoKS8cR9SZ/1N9jle6gsfOwM/TlKCOlJEUdqzSK3uz+vNdVr2aXh6 5GFKR69O+3DtsZ3BjWDWnBH3OajXJ+iosRN+D2/UZMKlL1LCNKYhtg0q5l4hTjOQ= X-Google-Smtp-Source: 
ABdhPJwho6pXFG+DscRnKoutGBvwveNAK3yEza+jlSdM3GLTU4mqPS7QfoiniQ696oYgBA4QlL5MYEPj++kEhA== Sender: "jackmanb via sendgmr" X-Received: from beeg.c.googlers.com ([fda3:e722:ac3:cc00:28:9cb1:c0a8:11db]) (user=jackmanb job=sendgmr) by 2002:a1c:791a:: with SMTP id l26mr1857691wme.1.1606499880798; Fri, 27 Nov 2020 09:58:00 -0800 (PST) Date: Fri, 27 Nov 2020 17:57:31 +0000 In-Reply-To: <20201127175738.1085417-1-jackmanb@google.com> Message-Id: <20201127175738.1085417-7-jackmanb@google.com> Mime-Version: 1.0 References: <20201127175738.1085417-1-jackmanb@google.com> X-Mailer: git-send-email 2.29.2.454.gaff20da3a2-goog Subject: [PATCH v2 bpf-next 06/13] bpf: Move BPF_STX reserved field check into BPF_STX verifier code From: Brendan Jackman To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Yonghong Song , Daniel Borkmann , KP Singh , Florent Revest , linux-kernel@vger.kernel.org, Jann Horn , Brendan Jackman Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net I can't find a reason why this code is in resolve_pseudo_ldimm64; since I'll be modifying it in a subsequent commit, tidy it up. Signed-off-by: Brendan Jackman --- kernel/bpf/verifier.c | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 1947da617b03..e8b41ccdfb90 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -9501,6 +9501,12 @@ static int do_check(struct bpf_verifier_env *env) } else if (class == BPF_STX) { enum bpf_reg_type *prev_dst_type, dst_reg_type; + if (((BPF_MODE(insn->code) != BPF_MEM && + BPF_MODE(insn->code) != BPF_ATOMIC) || insn->imm != 0)) { + verbose(env, "BPF_STX uses reserved fields\n"); + return -EINVAL; + } + if (BPF_MODE(insn->code) == BPF_ATOMIC) { err = check_atomic(env, env->insn_idx, insn); if (err) @@ -9910,13 +9916,6 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env) return -EINVAL; } - if (BPF_CLASS(insn->code) == BPF_STX && - ((BPF_MODE(insn->code) != BPF_MEM && - BPF_MODE(insn->code) != BPF_ATOMIC) || insn->imm != 0)) { - verbose(env, "BPF_STX uses reserved fields\n"); - return -EINVAL; - } - if (insn[0].code == (BPF_LD | BPF_IMM | BPF_DW)) { struct bpf_insn_aux_data *aux; struct bpf_map *map; From patchwork Fri Nov 27 17:57:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 11937035 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-23.4 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, UNWANTED_LANGUAGE_BODY,URIBL_BLOCKED,USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 83BBFC63798 for ; Fri, 27 Nov 2020 17:58:49 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3204B2224C for ; Fri, 27 Nov 2020 17:58:49 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="oP1RtxUz" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732433AbgK0R61 (ORCPT ); Fri, 27 Nov 
2020 12:58:27 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37772 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1732901AbgK0R6F (ORCPT ); Fri, 27 Nov 2020 12:58:05 -0500 Received: from mail-qk1-x74a.google.com (mail-qk1-x74a.google.com [IPv6:2607:f8b0:4864:20::74a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E7919C0617A7 for ; Fri, 27 Nov 2020 09:58:03 -0800 (PST) Received: by mail-qk1-x74a.google.com with SMTP id g28so4135219qka.4 for ; Fri, 27 Nov 2020 09:58:03 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=OcGetHndj/pHr70a737GqKJhkLB+N6xuhPL+W0UqXWw=; b=oP1RtxUz3VsX5kHcTFm2PRALRYVLFg6CljCExzA6CpXRNrUwEEHBSPb79E9chrIzqx 4b5nI6NEhr+Y/TCoPvF8+H2jyRue/9CfjoF3xKAdNB/doDlElPqMrXJs1qw1mRJcU7mS WA/FZd9kEq4DhHrALSCU05m55BYNzBiXdyKEmJ/MzB71PbUFL/QhZTWMSBEH69xQ7+OC 82FKD9O4RCgXFjhiaQl1SNpChnjj8Zsml2m/9s70RcYPgMsphccEu/xxJ1q07k8t7Mnu hP92udfm/uVIfGN/ugE65L1rF/n4ixZk4yd++vaklFCy5xvVWZhCeg9ZSl4KYRFdvlHH Y8Hg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=OcGetHndj/pHr70a737GqKJhkLB+N6xuhPL+W0UqXWw=; b=NM77iMYL/s3bQ9udrjHOHX37HMp9XM46zm2dWFt9bQn4hOYcv2S545jvbSU3xiKL/H KFyz/lXHYuUSjsgUNhSPKsUJiiVe3dd4aR04xkT769qR8xAkvxFcG0LJs2EWuH2xorCi hZ1jPCrHL3oJ4qpcQrjW9GS758F3VqLE+l6TVrlAcpCBuEpStsSWyhwkLVFsPQ5TqSao Pid9zqMuc+AOx+/Ne5DUZaY/+D83kqVlhHypIaMjkTjUOZbBOd84tfQFVoUyfOVUet2h 2WlduKzQCBgs9437AYZsc7w3iAcLhdOSXiPYYGMzaqZ0qTDmTSnhHIQWbI/ecd80jOed lMjA== X-Gm-Message-State: AOAM531yPkHWKL0CBKnE0b0FAxCEOO6WNiMPucSgRXyxH00r4J38x2dm X0zxOCgGe4wfkUzzFnpsny7SJYhCR+Jd2ljL9ltC/1tc5KtMs0VOdSMLFb3YFjKzWxQsE5aAZ0L +9xYWtEUFcWjFbC364WoosDCTyDoHw5L4ZZmHKSUELpBRtX/IabEhysQYheLzC6E= X-Google-Smtp-Source: ABdhPJyaXSLD8oAmb/bJB/DYjXGeC2Es+yRxgNIXnnDI/9bi9jI4O0nduo6xHZgByKQBICV5emPyL+Gvvj6Mlw== Sender: "jackmanb via sendgmr" X-Received: from beeg.c.googlers.com ([fda3:e722:ac3:cc00:28:9cb1:c0a8:11db]) (user=jackmanb job=sendgmr) by 2002:ad4:42c8:: with SMTP id f8mr9672404qvr.29.1606499883005; Fri, 27 Nov 2020 09:58:03 -0800 (PST) Date: Fri, 27 Nov 2020 17:57:32 +0000 In-Reply-To: <20201127175738.1085417-1-jackmanb@google.com> Message-Id: <20201127175738.1085417-8-jackmanb@google.com> Mime-Version: 1.0 References: <20201127175738.1085417-1-jackmanb@google.com> X-Mailer: git-send-email 2.29.2.454.gaff20da3a2-goog Subject: [PATCH v2 bpf-next 07/13] bpf: Add BPF_FETCH field / create atomic_fetch_add instruction From: Brendan Jackman To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Yonghong Song , Daniel Borkmann , KP Singh , Florent Revest , linux-kernel@vger.kernel.org, Jann Horn , Brendan Jackman Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net This value can be set in bpf_insn.imm, for BPF_ATOMIC instructions, in order to have the previous value of the atomically-modified memory location loaded into the src register after an atomic op is carried out. 
Suggested-by: Yonghong Song Signed-off-by: Brendan Jackman --- arch/x86/net/bpf_jit_comp.c | 4 ++++ include/linux/filter.h | 9 +++++++++ include/uapi/linux/bpf.h | 3 +++ kernel/bpf/core.c | 13 +++++++++++++ kernel/bpf/disasm.c | 7 +++++++ kernel/bpf/verifier.c | 35 ++++++++++++++++++++++++---------- tools/include/linux/filter.h | 10 ++++++++++ tools/include/uapi/linux/bpf.h | 3 +++ 8 files changed, 74 insertions(+), 10 deletions(-) diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index 7c47ad70ddb4..d3cd45bcd0c1 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -827,6 +827,10 @@ static int emit_atomic(u8 **pprog, u8 atomic_op, /* lock *(u32/u64*)(dst_reg + off) = src_reg */ EMIT1(simple_alu_opcodes[atomic_op]); break; + case BPF_ADD | BPF_FETCH: + /* src_reg = atomic_fetch_add(*(dst_reg + off), src_reg); */ + EMIT2(0x0F, 0xC1); + break; default: pr_err("bpf_jit: unknown atomic opcode %02x\n", atomic_op); return -EFAULT; diff --git a/include/linux/filter.h b/include/linux/filter.h index ce19988fb312..4e04d0fc454f 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -270,6 +270,15 @@ static inline bool insn_is_zext(const struct bpf_insn *insn) .imm = BPF_ADD }) #define BPF_STX_XADD BPF_ATOMIC_ADD /* alias */ +/* Atomic memory add with fetch, src_reg = atomic_fetch_add(*(dst_reg + off), src_reg); */ + +#define BPF_ATOMIC_FETCH_ADD(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_ADD | BPF_FETCH }) /* Memory store, *(uint *) (dst_reg + off16) = imm32 */ diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index d0adc48db43c..025e377e7229 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -44,6 +44,9 @@ #define BPF_CALL 0x80 /* function call */ #define BPF_EXIT 0x90 /* function return */ +/* atomic op type fields (stored in immediate) */ +#define BPF_FETCH 0x01 /* fetch previous value into src reg */ + /* Register numbers */ enum { BPF_REG_0 = 0, diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 1fbb13d30bbc..49a2a533db60 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -1632,16 +1632,29 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack) /* lock xadd *(u32 *)(dst_reg + off16) += src_reg */ atomic_add((u32) SRC, (atomic_t *)(unsigned long) (DST + insn->off)); + break; + case BPF_ADD | BPF_FETCH: + SRC = (u32) atomic_fetch_add( + (u32) SRC, + (atomic_t *)(unsigned long) (DST + insn->off)); + break; default: goto default_label; } CONT; + STX_ATOMIC_DW: switch (IMM) { case BPF_ADD: /* lock xadd *(u64 *)(dst_reg + off16) += src_reg */ atomic64_add((u64) SRC, (atomic64_t *)(unsigned long) (DST + insn->off)); + break; + case BPF_ADD | BPF_FETCH: + SRC = (u64) atomic64_fetch_add( + (u64) SRC, + (atomic64_t *)(s64) (DST + insn->off)); + break; default: goto default_label; } diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c index 37c8d6e9b4cc..3ee2246a52ef 100644 --- a/kernel/bpf/disasm.c +++ b/kernel/bpf/disasm.c @@ -160,6 +160,13 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs, bpf_ldst_string[BPF_SIZE(insn->code) >> 3], insn->dst_reg, insn->off, insn->src_reg); + } else if (BPF_MODE(insn->code) == BPF_ATOMIC && + insn->imm == (BPF_ADD | BPF_FETCH)) { + verbose(cbs->private_data, "(%02x) r%d = atomic%s_fetch_add(*(%s *)(r%d %+d), r%d)\n", + insn->code, insn->src_reg, + BPF_SIZE(insn->code) == BPF_DW ? 
"64" : "", + bpf_ldst_string[BPF_SIZE(insn->code) >> 3], + insn->dst_reg, insn->off, insn->src_reg); } else { verbose(cbs->private_data, "BUG_%02x\n", insn->code); } diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index e8b41ccdfb90..cd4c03b25573 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -3602,7 +3602,11 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i { int err; - if (insn->imm != BPF_ADD) { + switch (insn->imm) { + case BPF_ADD: + case BPF_ADD | BPF_FETCH: + break; + default: verbose(env, "BPF_ATOMIC uses invalid atomic opcode %02x\n", insn->imm); return -EINVAL; } @@ -3631,7 +3635,7 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i is_pkt_reg(env, insn->dst_reg) || is_flow_key_reg(env, insn->dst_reg) || is_sk_reg(env, insn->dst_reg)) { - verbose(env, "atomic stores into R%d %s is not allowed\n", + verbose(env, "BPF_ATOMIC stores into R%d %s is not allowed\n", insn->dst_reg, reg_type_str[reg_state(env, insn->dst_reg)->type]); return -EACCES; @@ -3644,8 +3648,20 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i return err; /* check whether we can write into the same memory */ - return check_mem_access(env, insn_idx, insn->dst_reg, insn->off, - BPF_SIZE(insn->code), BPF_WRITE, -1, true); + err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off, + BPF_SIZE(insn->code), BPF_WRITE, -1, true); + if (err) + return err; + + if (!(insn->imm & BPF_FETCH)) + return 0; + + /* check and record load of old value into src reg */ + err = check_reg_arg(env, insn->src_reg, DST_OP); + if (err) + return err; + + return 0; } static int __check_stack_boundary(struct bpf_verifier_env *env, u32 regno, @@ -9501,12 +9517,6 @@ static int do_check(struct bpf_verifier_env *env) } else if (class == BPF_STX) { enum bpf_reg_type *prev_dst_type, dst_reg_type; - if (((BPF_MODE(insn->code) != BPF_MEM && - BPF_MODE(insn->code) != BPF_ATOMIC) || insn->imm != 0)) { - verbose(env, "BPF_STX uses reserved fields\n"); - return -EINVAL; - } - if (BPF_MODE(insn->code) == BPF_ATOMIC) { err = check_atomic(env, env->insn_idx, insn); if (err) @@ -9515,6 +9525,11 @@ static int do_check(struct bpf_verifier_env *env) continue; } + if (BPF_MODE(insn->code) != BPF_MEM && insn->imm != 0) { + verbose(env, "BPF_STX uses reserved fields\n"); + return -EINVAL; + } + /* check src1 operand */ err = check_reg_arg(env, insn->src_reg, SRC_OP); if (err) diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h index 95ff51d97f25..ac7701678e1a 100644 --- a/tools/include/linux/filter.h +++ b/tools/include/linux/filter.h @@ -180,6 +180,16 @@ .imm = BPF_ADD }) #define BPF_STX_XADD BPF_ATOMIC_ADD /* alias */ +/* Atomic memory add with fetch, src_reg = atomic_fetch_add(*(dst_reg + off), src_reg); */ + +#define BPF_ATOMIC_FETCH_ADD(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_ADD | BPF_FETCH }) + /* Memory store, *(uint *) (dst_reg + off16) = imm32 */ #define BPF_ST_MEM(SIZE, DST, OFF, IMM) \ diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index d0adc48db43c..025e377e7229 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -44,6 +44,9 @@ #define BPF_CALL 0x80 /* function call */ #define BPF_EXIT 0x90 /* function return */ +/* atomic op type fields (stored in immediate) */ +#define BPF_FETCH 0x01 /* fetch previous value 
into src reg */ + /* Register numbers */ enum { BPF_REG_0 = 0, From patchwork Fri Nov 27 17:57:33 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 11937019 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E48A9C8300C for ; Fri, 27 Nov 2020 17:58:24 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B04062224C for ; Fri, 27 Nov 2020 17:58:24 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="YgGHPoy0" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731612AbgK0R6I (ORCPT ); Fri, 27 Nov 2020 12:58:08 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37782 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730985AbgK0R6H (ORCPT ); Fri, 27 Nov 2020 12:58:07 -0500 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4E417C0613D2 for ; Fri, 27 Nov 2020 09:58:06 -0800 (PST) Received: by mail-yb1-xb4a.google.com with SMTP id z29so6885440ybi.23 for ; Fri, 27 Nov 2020 09:58:06 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=aP4rLvLeG6QsoLjqnuY/PUWFVifkydhv/2pqvMyJ8oc=; b=YgGHPoy0zBSM0oooIItHiQA+69kyprzG2TdCZwdR9PwhSjuTTuNljIKCW38XZW5R4b BNoy85/Q6zrl+XJTZjjBojwiPWr2W6sSKNt5mpzO0j00FJxSpZnpzU4Zf0uAowwpqLJC Uu0rOXnXNSqaqXtOmUo3BDKKyAbZHsJlmXVuy7saEht3c4gLYO8TN4Dd/Upd2wDbJE+c IwU2EVAYv0M7/F3wBjY6d34NH5DTQGaRtzCtx6wpiwY7CHdgUBJuDi58/LmVbrrlDDFR R6UcG89E1Edlnu9yS5kCyXqLRchfkLZSbkzG+rORBHLQuFoBbcnT409xu1jea1FHF2bf c7tw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=aP4rLvLeG6QsoLjqnuY/PUWFVifkydhv/2pqvMyJ8oc=; b=nHRBr1DyLXhRO9WehTOpDOJagjBygv8jbkb5vVwhmyF1Y8fJ7e8Zu4gzNuRGaj2khO BIlYL8ViG2MIGonX9BckpJCmRpJ9C25+CvgPag2CGdljvCiwEwOu2XX2uMKnotbBUFju OAGpjDoDsRcSCjpvD1R1yqKmX2jZaucgo72mQMHx0qVHOjO/DIPt19X+i36GpiIG8Ems ty2EAN29b+hjCxekfspv++pRn103/MUGxpjGe8f0fG8MChoYj3JLXHqiGy+LDEOT048S yGoqDVJWKjeQ1cCKjqIGMno7XW5/BwikTTXO/VKiRsCkbc/jSzzpoZlMXIs8TkEd5btJ Q85g== X-Gm-Message-State: AOAM53280qY0MH/Rh+I44hqhyVLFzcIwJQqaFuDjbXLl8Sp5jBPSKqTs fh4vcEEO4l9vx9VAxr3mJalDnPoD1yH1IMcXiX3C0jcQlhXmtCqeYwhwHWQhmPGAjWh6wT4jWEt X+vBzpP+f2Gf8sWrifyUzoHdA7lJ2pqmZXadUL8E26g2FzuEzCrSQR+P3oKiQzpk= X-Google-Smtp-Source: ABdhPJwlGIiHJGC5eHX3e8oHmth2Jr6mlLle1xmPl1qjIyyqGRGtjckYwD4bxuycTMLekbOs5W4LhRd4KS31xA== Sender: "jackmanb via sendgmr" X-Received: from beeg.c.googlers.com ([fda3:e722:ac3:cc00:28:9cb1:c0a8:11db]) (user=jackmanb job=sendgmr) by 2002:a25:15c8:: with SMTP id 191mr10649933ybv.256.1606499885322; Fri, 27 
Nov 2020 09:58:05 -0800 (PST) Date: Fri, 27 Nov 2020 17:57:33 +0000 In-Reply-To: <20201127175738.1085417-1-jackmanb@google.com> Message-Id: <20201127175738.1085417-9-jackmanb@google.com> Mime-Version: 1.0 References: <20201127175738.1085417-1-jackmanb@google.com> X-Mailer: git-send-email 2.29.2.454.gaff20da3a2-goog Subject: [PATCH v2 bpf-next 08/13] bpf: Add instructions for atomic_[cmp]xchg From: Brendan Jackman To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Yonghong Song , Daniel Borkmann , KP Singh , Florent Revest , linux-kernel@vger.kernel.org, Jann Horn , Brendan Jackman Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net This adds two atomic opcodes, both of which include the BPF_FETCH flag. XCHG without the BPF_FETCH flag would naturally encode atomic_set. This is not supported because it would be of limited value to userspace (it doesn't imply any barriers). CMPXCHG without BPF_FETCH would be an atomic compare-and-write. We don't have such an operation in the kernel so it isn't provided to BPF either. There are two significant design decisions made for the CMPXCHG instruction: - This operation fundamentally has 3 operands, but we only have two register fields; therefore the operand we compare against (the kernel's API calls it 'old') is hard-coded to be R0. x86 has a similar design (and A64 doesn't have this problem). A potential alternative might be to encode the other operand's register number in the immediate field. - The kernel's atomic_cmpxchg returns the old value, while the C11 userspace APIs return a boolean indicating the comparison result. Which should BPF do? A64 returns the old value. x86 returns the old value in the hard-coded register (and also sets a flag). That means return-old-value is easier to JIT.
Signed-off-by: Brendan Jackman --- arch/x86/net/bpf_jit_comp.c | 8 ++++++++ include/linux/filter.h | 20 ++++++++++++++++++++ include/uapi/linux/bpf.h | 4 +++- kernel/bpf/core.c | 20 ++++++++++++++++++++ kernel/bpf/disasm.c | 15 +++++++++++++++ kernel/bpf/verifier.c | 19 +++++++++++++++++-- tools/include/linux/filter.h | 20 ++++++++++++++++++++ tools/include/uapi/linux/bpf.h | 4 +++- 8 files changed, 106 insertions(+), 4 deletions(-) diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index d3cd45bcd0c1..7431b2937157 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -831,6 +831,14 @@ static int emit_atomic(u8 **pprog, u8 atomic_op, /* src_reg = atomic_fetch_add(*(dst_reg + off), src_reg); */ EMIT2(0x0F, 0xC1); break; + case BPF_XCHG: + /* src_reg = atomic_xchg(*(u32/u64*)(dst_reg + off), src_reg); */ + EMIT1(0x87); + break; + case BPF_CMPXCHG: + /* r0 = atomic_cmpxchg(*(u32/u64*)(dst_reg + off), r0, src_reg); */ + EMIT2(0x0F, 0xB1); + break; default: pr_err("bpf_jit: unknown atomic opcode %02x\n", atomic_op); return -EFAULT; diff --git a/include/linux/filter.h b/include/linux/filter.h index 4e04d0fc454f..6186280715ed 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -280,6 +280,26 @@ static inline bool insn_is_zext(const struct bpf_insn *insn) .off = OFF, \ .imm = BPF_ADD | BPF_FETCH }) +/* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */ + +#define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_XCHG }) + +/* Atomic compare-exchange, r0 = atomic_cmpxchg((dst_reg + off), r0, src_reg) */ + +#define BPF_ATOMIC_CMPXCHG(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_CMPXCHG }) + /* Memory store, *(uint *) (dst_reg + off16) = imm32 */ #define BPF_ST_MEM(SIZE, DST, OFF, IMM) \ diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 025e377e7229..82039a1176ac 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -45,7 +45,9 @@ #define BPF_EXIT 0x90 /* function return */ /* atomic op type fields (stored in immediate) */ -#define BPF_FETCH 0x01 /* fetch previous value into src reg */ +#define BPF_XCHG (0xe0 | BPF_FETCH) /* atomic exchange */ +#define BPF_CMPXCHG (0xf0 | BPF_FETCH) /* atomic compare-and-write */ +#define BPF_FETCH 0x01 /* fetch previous value into src reg or r0*/ /* Register numbers */ enum { diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 49a2a533db60..05350a8f87c0 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -1638,6 +1638,16 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack) (u32) SRC, (atomic_t *)(unsigned long) (DST + insn->off)); break; + case BPF_XCHG: + SRC = (u32) atomic_xchg( + (atomic_t *)(unsigned long) (DST + insn->off), + (u32) SRC); + break; + case BPF_CMPXCHG: + BPF_R0 = (u32) atomic_cmpxchg( + (atomic_t *)(unsigned long) (DST + insn->off), + (u32) BPF_R0, (u32) SRC); + break; default: goto default_label; } @@ -1655,6 +1665,16 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack) (u64) SRC, (atomic64_t *)(s64) (DST + insn->off)); break; + case BPF_XCHG: + SRC = (u64) atomic64_xchg( + (atomic64_t *)(u64) (DST + insn->off), + (u64) SRC); + break; + case BPF_CMPXCHG: + BPF_R0 = (u64) atomic64_cmpxchg( + (atomic64_t *)(u64) 
(DST + insn->off), + (u64) BPF_R0, (u64) SRC); + break; default: goto default_label; } diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c index 3ee2246a52ef..3441ac54ac65 100644 --- a/kernel/bpf/disasm.c +++ b/kernel/bpf/disasm.c @@ -167,6 +167,21 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs, BPF_SIZE(insn->code) == BPF_DW ? "64" : "", bpf_ldst_string[BPF_SIZE(insn->code) >> 3], insn->dst_reg, insn->off, insn->src_reg); + } else if (BPF_MODE(insn->code) == BPF_ATOMIC && + insn->imm == (BPF_CMPXCHG)) { + verbose(cbs->private_data, "(%02x) r0 = atomic%s_cmpxchg(*(%s *)(r%d %+d), r0, r%d)\n", + insn->code, + BPF_SIZE(insn->code) == BPF_DW ? "64" : "", + bpf_ldst_string[BPF_SIZE(insn->code) >> 3], + insn->dst_reg, insn->off, + insn->src_reg); + } else if (BPF_MODE(insn->code) == BPF_ATOMIC && + insn->imm == (BPF_XCHG)) { + verbose(cbs->private_data, "(%02x) r%d = atomic%s_xchg(*(%s *)(r%d %+d), r%d)\n", + insn->code, insn->src_reg, + BPF_SIZE(insn->code) == BPF_DW ? "64" : "", + bpf_ldst_string[BPF_SIZE(insn->code) >> 3], + insn->dst_reg, insn->off, insn->src_reg); } else { verbose(cbs->private_data, "BUG_%02x\n", insn->code); } diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index cd4c03b25573..c8311cc114ec 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -3601,10 +3601,13 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn) { int err; + int load_reg; switch (insn->imm) { case BPF_ADD: case BPF_ADD | BPF_FETCH: + case BPF_XCHG: + case BPF_CMPXCHG: break; default: verbose(env, "BPF_ATOMIC uses invalid atomic opcode %02x\n", insn->imm); @@ -3626,6 +3629,13 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i if (err) return err; + if (insn->imm == BPF_CMPXCHG) { + /* check src3 operand */ + err = check_reg_arg(env, BPF_REG_0, SRC_OP); + if (err) + return err; + } + if (is_pointer_value(env, insn->src_reg)) { verbose(env, "R%d leaks addr into mem\n", insn->src_reg); return -EACCES; @@ -3656,8 +3666,13 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i if (!(insn->imm & BPF_FETCH)) return 0; - /* check and record load of old value into src reg */ - err = check_reg_arg(env, insn->src_reg, DST_OP); + if (insn->imm == BPF_CMPXCHG) + load_reg = BPF_REG_0; + else + load_reg = insn->src_reg; + + /* check and record load of old value */ + err = check_reg_arg(env, load_reg, DST_OP); if (err) return err; diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h index ac7701678e1a..ea99bd17d003 100644 --- a/tools/include/linux/filter.h +++ b/tools/include/linux/filter.h @@ -190,6 +190,26 @@ .off = OFF, \ .imm = BPF_ADD | BPF_FETCH }) +/* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */ + +#define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_XCHG }) + +/* Atomic compare-exchange, r0 = atomic_cmpxchg((dst_reg + off), r0, src_reg) */ + +#define BPF_ATOMIC_CMPXCHG(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_CMPXCHG }) + /* Memory store, *(uint *) (dst_reg + off16) = imm32 */ #define BPF_ST_MEM(SIZE, DST, OFF, IMM) \ diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h 
index 025e377e7229..82039a1176ac 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -45,7 +45,9 @@ #define BPF_EXIT 0x90 /* function return */ /* atomic op type fields (stored in immediate) */ -#define BPF_FETCH 0x01 /* fetch previous value into src reg */ +#define BPF_XCHG (0xe0 | BPF_FETCH) /* atomic exchange */ +#define BPF_CMPXCHG (0xf0 | BPF_FETCH) /* atomic compare-and-write */ +#define BPF_FETCH 0x01 /* fetch previous value into src reg or r0*/ /* Register numbers */ enum { From patchwork Fri Nov 27 17:57:34 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 11937021 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3F7F3C83010 for ; Fri, 27 Nov 2020 17:58:25 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0D41A2224C for ; Fri, 27 Nov 2020 17:58:25 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="ODBuhzIh" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732966AbgK0R6K (ORCPT ); Fri, 27 Nov 2020 12:58:10 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37790 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1732965AbgK0R6K (ORCPT ); Fri, 27 Nov 2020 12:58:10 -0500 Received: from mail-wr1-x449.google.com (mail-wr1-x449.google.com [IPv6:2a00:1450:4864:20::449]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A9D75C0613D2 for ; Fri, 27 Nov 2020 09:58:08 -0800 (PST) Received: by mail-wr1-x449.google.com with SMTP id p16so3781569wrx.4 for ; Fri, 27 Nov 2020 09:58:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=l/XmS3F8heDnnCjC/QfV3ASRMXtsU72vcNxRpP0pgcs=; b=ODBuhzIhRy9z+TV6wyH1yjSarGBb4XmnN5EPraWYd1vPji65yN00rSZc4BOORsN10F KUeBOIxRFoEbm8drjKMMs5bkgePqf3zFtQv0HrzoEvdjVaJ+PRbyrBBzUndMp3jqgRzf zIy1HHpUp3V7nyA/AEHmMjl8CN2le2PY8sbBAGa0fG0FOCSjsluVQbFQ9v0xBJ/96eyn qag9Th0QAqHNfKM2mQYxNCpc5Kffi9lTsde/4HpofgU9ifatDDSECJrZ8Qcg5u2jJnWW DAOCt0MQ+Kbm4Jty+GHSBLfYNxgGV0ljuF31KDPbped5l0U5plfpzcayOhUo1dq0oS4C fwnw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=l/XmS3F8heDnnCjC/QfV3ASRMXtsU72vcNxRpP0pgcs=; b=J146vua/I8mKnEOQoqi7PshVZ/xEzHVtGS/AWvNlUNlYSgSzvYCErq3DA2RpNTE/HC lUGlgdF+kVMGWvb+sueRwkkTQblyf3YvCMDGl0dYOUQNLWKt2K+s3pc85Iad7Fmx8kpc wWGdCbsXUs+dhPIylQUWgUeRDQQtg/jpAoZQwcIbJgjLMcG4ILErex1JUd1dcZTEPxAY q3rSyWnbXTvHUb1Fxh/vZq1+gL2/iuJXMWVorpCTwCxw1aYgvBHDmGV5BovpRgqBXYWb f2rI+8/2Pe9mn1MB1xnQTP2lHxQRAJRczVAa9ZH2YkGVHHAG2S/V1zy8b/2hYpukY6Nl IPSg== X-Gm-Message-State: AOAM530zecVjuAAIA0WPRwiWQSXX1G/PLgoXP8I/pI7n6AnxLKAwWKLL 
tQS5k+pyzxqHAyNO9LpT7Y3rGB9CdgO2HTdhwPxNihw/RlJMMCuiegZrRiwaHUWF2lxyTar3r7O VBqPliaxmTyIe3MaU74l61fgWFD6BR3REPFvmInVava5e8/ETc1H2teYN9otST0c= X-Google-Smtp-Source: ABdhPJxrqimB5kfC4aGvATRJWvBfKmL+97XEjnMgu/Khl1atRW6kZO7Yyo7TGLngWUlU2fos3PI8GDhHlYsr/A== Sender: "jackmanb via sendgmr" X-Received: from beeg.c.googlers.com ([fda3:e722:ac3:cc00:28:9cb1:c0a8:11db]) (user=jackmanb job=sendgmr) by 2002:adf:e787:: with SMTP id n7mr12379892wrm.153.1606499887359; Fri, 27 Nov 2020 09:58:07 -0800 (PST) Date: Fri, 27 Nov 2020 17:57:34 +0000 In-Reply-To: <20201127175738.1085417-1-jackmanb@google.com> Message-Id: <20201127175738.1085417-10-jackmanb@google.com> Mime-Version: 1.0 References: <20201127175738.1085417-1-jackmanb@google.com> X-Mailer: git-send-email 2.29.2.454.gaff20da3a2-goog Subject: [PATCH v2 bpf-next 09/13] bpf: Pull out a macro for interpreting atomic ALU operations From: Brendan Jackman To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Yonghong Song , Daniel Borkmann , KP Singh , Florent Revest , linux-kernel@vger.kernel.org, Jann Horn , Brendan Jackman Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Since the atomic operations that are added in subsequent commits are all isomorphic with BPF_ADD, pull out a macro to avoid the interpreter becoming dominated by lines of atomic-related code. Note that this sacrificies interpreter performance (combining STX_ATOMIC_W and STX_ATOMIC_DW into single switch case means that we need an extra conditional branch to differentiate them) in favour of compact and (relatively!) simple C code. Signed-off-by: Brendan Jackman --- kernel/bpf/core.c | 79 +++++++++++++++++++++++------------------------ 1 file changed, 38 insertions(+), 41 deletions(-) diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 05350a8f87c0..20a5351d1dc2 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -1626,55 +1626,52 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack) LDX_PROBE(DW, 8) #undef LDX_PROBE - STX_ATOMIC_W: - switch (IMM) { - case BPF_ADD: - /* lock xadd *(u32 *)(dst_reg + off16) += src_reg */ - atomic_add((u32) SRC, (atomic_t *)(unsigned long) - (DST + insn->off)); - break; - case BPF_ADD | BPF_FETCH: - SRC = (u32) atomic_fetch_add( - (u32) SRC, - (atomic_t *)(unsigned long) (DST + insn->off)); - break; - case BPF_XCHG: - SRC = (u32) atomic_xchg( - (atomic_t *)(unsigned long) (DST + insn->off), - (u32) SRC); - break; - case BPF_CMPXCHG: - BPF_R0 = (u32) atomic_cmpxchg( - (atomic_t *)(unsigned long) (DST + insn->off), - (u32) BPF_R0, (u32) SRC); +#define ATOMIC(BOP, KOP) \ + case BOP: \ + if (BPF_SIZE(insn->code) == BPF_W) \ + atomic_##KOP((u32) SRC, (atomic_t *)(unsigned long) \ + (DST + insn->off)); \ + else \ + atomic64_##KOP((u64) SRC, (atomic64_t *)(unsigned long) \ + (DST + insn->off)); \ + break; \ + case BOP | BPF_FETCH: \ + if (BPF_SIZE(insn->code) == BPF_W) \ + SRC = (u32) atomic_fetch_##KOP( \ + (u32) SRC, \ + (atomic_t *)(unsigned long) (DST + insn->off)); \ + else \ + SRC = (u64) atomic64_fetch_##KOP( \ + (u64) SRC, \ + (atomic64_t *)(s64) (DST + insn->off)); \ break; - default: - goto default_label; - } - CONT; STX_ATOMIC_DW: + STX_ATOMIC_W: switch (IMM) { - case BPF_ADD: - /* lock xadd *(u64 *)(dst_reg + off16) += src_reg */ - atomic64_add((u64) SRC, (atomic64_t *)(unsigned long) - (DST + insn->off)); - break; - case BPF_ADD | BPF_FETCH: - SRC = (u64) atomic64_fetch_add( - (u64) SRC, - (atomic64_t *)(s64) (DST + insn->off)); - break; + ATOMIC(BPF_ADD, add) + case 
BPF_XCHG: - SRC = (u64) atomic64_xchg( - (atomic64_t *)(u64) (DST + insn->off), - (u64) SRC); + if (BPF_SIZE(insn->code) == BPF_W) + SRC = (u32) atomic_xchg( + (atomic_t *)(unsigned long) (DST + insn->off), + (u32) SRC); + else + SRC = (u64) atomic64_xchg( + (atomic64_t *)(u64) (DST + insn->off), + (u64) SRC); break; case BPF_CMPXCHG: - BPF_R0 = (u64) atomic64_cmpxchg( - (atomic64_t *)(u64) (DST + insn->off), - (u64) BPF_R0, (u64) SRC); + if (BPF_SIZE(insn->code) == BPF_W) + BPF_R0 = (u32) atomic_cmpxchg( + (atomic_t *)(unsigned long) (DST + insn->off), + (u32) BPF_R0, (u32) SRC); + else + BPF_R0 = (u64) atomic64_cmpxchg( + (atomic64_t *)(u64) (DST + insn->off), + (u64) BPF_R0, (u64) SRC); break; + default: goto default_label; } From patchwork Fri Nov 27 17:57:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 11937029 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A3FE8C83017 for ; Fri, 27 Nov 2020 17:58:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6A2642224C for ; Fri, 27 Nov 2020 17:58:26 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="Jcv+0Mqi" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1733000AbgK0R6O (ORCPT ); Fri, 27 Nov 2020 12:58:14 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37804 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1732983AbgK0R6M (ORCPT ); Fri, 27 Nov 2020 12:58:12 -0500 Received: from mail-wr1-x44a.google.com (mail-wr1-x44a.google.com [IPv6:2a00:1450:4864:20::44a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 63461C0613D1 for ; Fri, 27 Nov 2020 09:58:12 -0800 (PST) Received: by mail-wr1-x44a.google.com with SMTP id p18so3700185wro.9 for ; Fri, 27 Nov 2020 09:58:12 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=mZvHjAE2xe0rGoIORBV8EUbjbmRNvWQYKWvCzYEAY9I=; b=Jcv+0MqiazH0aK/8MJzRfSyVl6p/4WrnxQ1BydfAbL0drjK+I9i6e9XKme5KyBtQnm wNRtvwRlRLmaRFnf+3koDuPTjZVVhQ/Lj6Z16wDWHDMayyto1hR9vAQgBbHlPFAoswO8 w8ireE+54mev5nvNzwqc8nNJ4/eTvH2eknSR26ErOHXx5Z4fwDZs1WCmQ6Q+dV/gEF2E FJjSIBN9HxWbJjy4aq7D/WrZwprS31r/q0Zv03F2er1YwCOfU5dL612XPKWik4AU7oAl I1DKGbYv6vwG7VbRoMcj3T8Asa7EXeBKUCUIztzj028pv8D5eP1WDKNw8otlGUQxN1iQ 6rFA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=mZvHjAE2xe0rGoIORBV8EUbjbmRNvWQYKWvCzYEAY9I=; b=eFcyKb5D/SyOZmlcQ8sOEeB8n95h4P1d8aCkpc6U+1PJIvklckbs+R6u+/stTDkX35 xGX64ztilC9u6JNtuj00jH5xconQXgO3n0MLf44kITSnAnQ1YH6avW95hPxymRCbUgmj ZBWqqonAGKn5t5lSvHbG7L93UYj4HYg1v7ZwOxDXBX21FrFns39Q6UKi/l5HA0iK+/lM 
oIjLSyhDit62++q9KccCmb0cRDo7c+/vckA8JrkamFMAizqRyCWg17HvsiFN7NatTJqe tPUbesEjsYkK41q01YolbaPCJbt9lfnNf+PAOFO+vZnkwm5Vj4y9oogORlfpm8zvNGJK vCJg== X-Gm-Message-State: AOAM531EdsOJJTVIEgtuOvUStee2LlUHeSPsyxIq52rbIaOOx0bAYCQj ilt39f5AZGWhRNqG/0ErH6piJSAV3T7hjYnPsySaL3gyzYwJ9UVAMOgkTiCWdvcBEEg09yh6Nuw to6UfGKWv1lOwH4G+RQmd0dAPOlQMVsqzkSw3y150vqCWLr+McHu+C2PoLTT5ALM= X-Google-Smtp-Source: ABdhPJznJRx2Qfw3/bl6VBoodCw973ZRb6FFQigtE+w4qCijbg/ZHslsaZx5ZC6E2RiZ5ZQnSrBtN9+oVg2cnQ== Sender: "jackmanb via sendgmr" X-Received: from beeg.c.googlers.com ([fda3:e722:ac3:cc00:28:9cb1:c0a8:11db]) (user=jackmanb job=sendgmr) by 2002:a7b:c1cc:: with SMTP id a12mr1857681wmj.0.1606499889154; Fri, 27 Nov 2020 09:58:09 -0800 (PST) Date: Fri, 27 Nov 2020 17:57:35 +0000 In-Reply-To: <20201127175738.1085417-1-jackmanb@google.com> Message-Id: <20201127175738.1085417-11-jackmanb@google.com> Mime-Version: 1.0 References: <20201127175738.1085417-1-jackmanb@google.com> X-Mailer: git-send-email 2.29.2.454.gaff20da3a2-goog Subject: [PATCH v2 bpf-next 10/13] bpf: Add instructions for atomic[64]_[fetch_]sub From: Brendan Jackman To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Yonghong Song , Daniel Borkmann , KP Singh , Florent Revest , linux-kernel@vger.kernel.org, Jann Horn , Brendan Jackman Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Including only interpreter and x86 JIT support. x86 doesn't provide an atomic exchange-and-subtract instruction that could be used for BPF_SUB | BPF_FETCH, however we can just emit a NEG followed by an XADD to get the same effect. Signed-off-by: Brendan Jackman Reported-by: kernel test robot --- arch/x86/net/bpf_jit_comp.c | 16 ++++++++++++++-- include/linux/filter.h | 20 ++++++++++++++++++++ kernel/bpf/core.c | 1 + kernel/bpf/disasm.c | 16 ++++++++++++---- kernel/bpf/verifier.c | 2 ++ tools/include/linux/filter.h | 20 ++++++++++++++++++++ 6 files changed, 69 insertions(+), 6 deletions(-) diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index 7431b2937157..a8a9fab13fcf 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -823,6 +823,7 @@ static int emit_atomic(u8 **pprog, u8 atomic_op, /* emit opcode */ switch (atomic_op) { + case BPF_SUB: case BPF_ADD: /* lock *(u32/u64*)(dst_reg + off) = src_reg */ EMIT1(simple_alu_opcodes[atomic_op]); @@ -1306,8 +1307,19 @@ st: if (is_imm8(insn->off)) case BPF_STX | BPF_ATOMIC | BPF_W: case BPF_STX | BPF_ATOMIC | BPF_DW: - err = emit_atomic(&prog, insn->imm, dst_reg, src_reg, - insn->off, BPF_SIZE(insn->code)); + if (insn->imm == (BPF_SUB | BPF_FETCH)) { + /* + * x86 doesn't have an XSUB insn, so we negate + * and XADD instead. 
+ */ + emit_neg(&prog, src_reg, BPF_SIZE(insn->code) == BPF_DW); + err = emit_atomic(&prog, BPF_ADD | BPF_FETCH, + dst_reg, src_reg, insn->off, + BPF_SIZE(insn->code)); + } else { + err = emit_atomic(&prog, insn->imm, dst_reg, src_reg, + insn->off, BPF_SIZE(insn->code)); + } if (err) return err; break; diff --git a/include/linux/filter.h b/include/linux/filter.h index 6186280715ed..a20a3a536bf5 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -280,6 +280,26 @@ static inline bool insn_is_zext(const struct bpf_insn *insn) .off = OFF, \ .imm = BPF_ADD | BPF_FETCH }) +/* Atomic memory sub, *(uint *)(dst_reg + off16) -= src_reg */ + +#define BPF_ATOMIC_SUB(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_SUB }) + +/* Atomic memory sub with fetch, src_reg = atomic_fetch_sub(*(dst_reg + off), src_reg); */ + +#define BPF_ATOMIC_FETCH_SUB(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_SUB | BPF_FETCH }) + /* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */ #define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF) \ diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 20a5351d1dc2..0f700464955f 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -1650,6 +1650,7 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack) STX_ATOMIC_W: switch (IMM) { ATOMIC(BPF_ADD, add) + ATOMIC(BPF_SUB, sub) case BPF_XCHG: if (BPF_SIZE(insn->code) == BPF_W) diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c index 3441ac54ac65..f33acffdeed0 100644 --- a/kernel/bpf/disasm.c +++ b/kernel/bpf/disasm.c @@ -80,6 +80,11 @@ const char *const bpf_alu_string[16] = { [BPF_END >> 4] = "endian", }; +const char *const bpf_atomic_alu_string[16] = { + [BPF_ADD >> 4] = "add", + [BPF_SUB >> 4] = "sub", +}; + static const char *const bpf_ldst_string[] = { [BPF_W >> 3] = "u32", [BPF_H >> 3] = "u16", @@ -154,17 +159,20 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs, insn->dst_reg, insn->off, insn->src_reg); else if (BPF_MODE(insn->code) == BPF_ATOMIC && - insn->imm == BPF_ADD) { - verbose(cbs->private_data, "(%02x) lock *(%s *)(r%d %+d) += r%d\n", + (insn->imm == BPF_ADD || insn->imm == BPF_SUB)) { + verbose(cbs->private_data, "(%02x) lock *(%s *)(r%d %+d) %s r%d\n", insn->code, bpf_ldst_string[BPF_SIZE(insn->code) >> 3], insn->dst_reg, insn->off, + bpf_alu_string[BPF_OP(insn->imm) >> 4], insn->src_reg); } else if (BPF_MODE(insn->code) == BPF_ATOMIC && - insn->imm == (BPF_ADD | BPF_FETCH)) { - verbose(cbs->private_data, "(%02x) r%d = atomic%s_fetch_add(*(%s *)(r%d %+d), r%d)\n", + (insn->imm == (BPF_ADD | BPF_FETCH) || + insn->imm == (BPF_SUB | BPF_FETCH))) { + verbose(cbs->private_data, "(%02x) r%d = atomic%s_fetch_%s(*(%s *)(r%d %+d), r%d)\n", insn->code, insn->src_reg, BPF_SIZE(insn->code) == BPF_DW ? 
"64" : "", + bpf_atomic_alu_string[BPF_OP(insn->imm) >> 4], bpf_ldst_string[BPF_SIZE(insn->code) >> 3], insn->dst_reg, insn->off, insn->src_reg); } else if (BPF_MODE(insn->code) == BPF_ATOMIC && diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index c8311cc114ec..dea9ad486ad1 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -3606,6 +3606,8 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i switch (insn->imm) { case BPF_ADD: case BPF_ADD | BPF_FETCH: + case BPF_SUB: + case BPF_SUB | BPF_FETCH: case BPF_XCHG: case BPF_CMPXCHG: break; diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h index ea99bd17d003..387eddaf11e5 100644 --- a/tools/include/linux/filter.h +++ b/tools/include/linux/filter.h @@ -190,6 +190,26 @@ .off = OFF, \ .imm = BPF_ADD | BPF_FETCH }) +/* Atomic memory sub, *(uint *)(dst_reg + off16) -= src_reg */ + +#define BPF_ATOMIC_SUB(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_SUB }) + +/* Atomic memory sub with fetch, src_reg = atomic_fetch_sub(*(dst_reg + off), src_reg); */ + +#define BPF_ATOMIC_FETCH_SUB(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_SUB | BPF_FETCH }) + /* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */ #define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF) \ From patchwork Fri Nov 27 17:57:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 11937031 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-23.5 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, UNWANTED_LANGUAGE_BODY,USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D2A63C83016 for ; Fri, 27 Nov 2020 17:58:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 951672224A for ; Fri, 27 Nov 2020 17:58:26 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="qtcSoh40" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732987AbgK0R6O (ORCPT ); Fri, 27 Nov 2020 12:58:14 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37808 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1732984AbgK0R6N (ORCPT ); Fri, 27 Nov 2020 12:58:13 -0500 Received: from mail-qk1-x74a.google.com (mail-qk1-x74a.google.com [IPv6:2607:f8b0:4864:20::74a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AF553C0613D4 for ; Fri, 27 Nov 2020 09:58:12 -0800 (PST) Received: by mail-qk1-x74a.google.com with SMTP id b11so4134068qkk.10 for ; Fri, 27 Nov 2020 09:58:12 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; 
bh=0hpUNgP6F8b8DKNGvFw/ePsro3vokMIuT28XL72HN2Y=; b=qtcSoh40Q/lucmLUCuIt/8x+xk6nZUMLcqpBc7ghemxX3vNaupJ0JKs0iP2uO2Yy0s Mba1YCSnu6s2vi0zFkdXwdrIvcQLvUyxBfqVZqseeM/lGLomMtFePWuuz+CEbHD9oQ12 UKKQjkmgD0ZO9E+M/8veRCmGz16v9PBAp34h+sEp+Bc93cHPla53BoHii089PLambfH2 3SyYoDP5BgmYPTxXX7fcGVEOTGVS0Y2as9otNOcAr8t8f7zYpWrcsWhsxvzcHzjrfyeW b982buRTB5dt6xX+nhcLAAULTDcw/R9AQg4CRoW+Asr3MozG0g2q4P7NIDEGsSwCz0r5 vAIA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=0hpUNgP6F8b8DKNGvFw/ePsro3vokMIuT28XL72HN2Y=; b=B0AIF+tavUINNTtvoR1h0XMQNCA0Cn8tNL4e+yRV0qDpioGYs6Vn7lqxf35AJbS57n Tm3vFS2d+vjwcnqtieBGRaUQniIs2HuQwh006X004r1xttQP8LWkM4nNm1i4eiD9Y2fp AwyasxksDI0dyniIbk1p9838rdZAXNH/kDfgXl1S8TBzZZYJQAScyz3SfkGsiLXYipVk MN3U9iuePvLnxWzsonsqChOFVvehoF6i8/rsrbrJhGcT74F+Nh6ifV7uHIUh7z1P4f5s hdfsAFgUD8y8BhT0RFdO99cOzxkmWMlR9O2TXYynqzUDh1W1oQdIiZH3NkmTECik/I/o 5tcg== X-Gm-Message-State: AOAM533Yjw8AWcxX+YYova5N3NMJ982tVVqauYNzD4QDNX1lY3ut0juY J6qPf6WfmBfwbdTFU2YCq3cH5/Ld6Sz/SmCDm5e1iSzx7EOEjaNSLL3gehkWL7tL1/VM/+EENtm THmG3UzrQp+ob/B5IoQO6QRM77Qx6YKEVXIuMCY/kO35hfvuv5RrQIn+CJYTXDVw= X-Google-Smtp-Source: ABdhPJz52r9mkaXVmRVm2C75fNnzTIisQaRQ3trqlZfjAyMykvkK0XKpUHtijnQYsv5DRvZoEb99Yhb9QTbYyQ== Sender: "jackmanb via sendgmr" X-Received: from beeg.c.googlers.com ([fda3:e722:ac3:cc00:28:9cb1:c0a8:11db]) (user=jackmanb job=sendgmr) by 2002:ad4:4052:: with SMTP id r18mr9566252qvp.38.1606499891695; Fri, 27 Nov 2020 09:58:11 -0800 (PST) Date: Fri, 27 Nov 2020 17:57:36 +0000 In-Reply-To: <20201127175738.1085417-1-jackmanb@google.com> Message-Id: <20201127175738.1085417-12-jackmanb@google.com> Mime-Version: 1.0 References: <20201127175738.1085417-1-jackmanb@google.com> X-Mailer: git-send-email 2.29.2.454.gaff20da3a2-goog Subject: [PATCH v2 bpf-next 11/13] bpf: Add bitwise atomic instructions From: Brendan Jackman To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Yonghong Song , Daniel Borkmann , KP Singh , Florent Revest , linux-kernel@vger.kernel.org, Jann Horn , Brendan Jackman Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net This adds instructions for atomic[64]_[fetch_]and atomic[64]_[fetch_]or atomic[64]_[fetch_]xor All these operations are isomorphic enough to implement with the same verifier, interpreter, and x86 JIT code, hence being a single commit. The main interesting thing here is that x86 doesn't directly support the fetch_ version these operations, so we need to generate a CMPXCHG loop in the JIT. This requires the use of two temporary registers, IIUC it's safe to use BPF_REG_AX and x86's AUX_REG for this purpose. 
Signed-off-by: Brendan Jackman --- arch/x86/net/bpf_jit_comp.c | 49 ++++++++++++++++++++++++++++- include/linux/filter.h | 60 ++++++++++++++++++++++++++++++++++++ kernel/bpf/core.c | 5 ++- kernel/bpf/disasm.c | 7 +++-- kernel/bpf/verifier.c | 6 ++++ tools/include/linux/filter.h | 60 ++++++++++++++++++++++++++++++++++++ 6 files changed, 183 insertions(+), 4 deletions(-) diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index a8a9fab13fcf..46b977ee21c4 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -823,8 +823,11 @@ static int emit_atomic(u8 **pprog, u8 atomic_op, /* emit opcode */ switch (atomic_op) { - case BPF_SUB: case BPF_ADD: + case BPF_SUB: + case BPF_AND: + case BPF_OR: + case BPF_XOR: /* lock *(u32/u64*)(dst_reg + off) = src_reg */ EMIT1(simple_alu_opcodes[atomic_op]); break; @@ -1307,6 +1310,50 @@ st: if (is_imm8(insn->off)) case BPF_STX | BPF_ATOMIC | BPF_W: case BPF_STX | BPF_ATOMIC | BPF_DW: + if (insn->imm == (BPF_AND | BPF_FETCH) || + insn->imm == (BPF_OR | BPF_FETCH) || + insn->imm == (BPF_XOR | BPF_FETCH)) { + u8 *branch_target; + bool is64 = BPF_SIZE(insn->code) == BPF_DW; + + /* + * Can't be implemented with a single x86 insn. + * Need to do a CMPXCHG loop. + */ + + /* Will need RAX as a CMPXCHG operand so save R0 */ + emit_mov_reg(&prog, true, BPF_REG_AX, BPF_REG_0); + branch_target = prog; + /* Load old value */ + emit_ldx(&prog, BPF_SIZE(insn->code), + BPF_REG_0, dst_reg, insn->off); + /* + * Perform the (commutative) operation locally, + * put the result in the AUX_REG. + */ + emit_mov_reg(&prog, is64, AUX_REG, BPF_REG_0); + maybe_emit_rex(&prog, AUX_REG, src_reg, is64); + EMIT2(simple_alu_opcodes[BPF_OP(insn->imm)], + add_2reg(0xC0, AUX_REG, src_reg)); + /* Attempt to swap in new value */ + err = emit_atomic(&prog, BPF_CMPXCHG, + dst_reg, AUX_REG, insn->off, + BPF_SIZE(insn->code)); + if (WARN_ON(err)) + return err; + /* + * ZF tells us whether we won the race. If it's + * cleared we need to try again. 
+ */ + EMIT2(X86_JNE, -(prog - branch_target) - 2); + /* Return the pre-modification value */ + emit_mov_reg(&prog, is64, src_reg, BPF_REG_0); + /* Restore R0 after clobbering RAX */ + emit_mov_reg(&prog, true, BPF_REG_0, BPF_REG_AX); + break; + + } + if (insn->imm == (BPF_SUB | BPF_FETCH)) { /* * x86 doesn't have an XSUB insn, so we negate diff --git a/include/linux/filter.h b/include/linux/filter.h index a20a3a536bf5..cb5d865cce3c 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -300,6 +300,66 @@ static inline bool insn_is_zext(const struct bpf_insn *insn) .off = OFF, \ .imm = BPF_SUB | BPF_FETCH }) +/* Atomic memory and, *(uint *)(dst_reg + off16) -= src_reg */ + +#define BPF_ATOMIC_AND(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_AND }) + +/* Atomic memory and with fetch, src_reg = atomic_fetch_and(*(dst_reg + off), src_reg); */ + +#define BPF_ATOMIC_FETCH_AND(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_AND | BPF_FETCH }) + +/* Atomic memory or, *(uint *)(dst_reg + off16) -= src_reg */ + +#define BPF_ATOMIC_OR(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_OR }) + +/* Atomic memory or with fetch, src_reg = atomic_fetch_or(*(dst_reg + off), src_reg); */ + +#define BPF_ATOMIC_FETCH_OR(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_OR | BPF_FETCH }) + +/* Atomic memory xor, *(uint *)(dst_reg + off16) -= src_reg */ + +#define BPF_ATOMIC_XOR(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_XOR }) + +/* Atomic memory xor with fetch, src_reg = atomic_fetch_xor(*(dst_reg + off), src_reg); */ + +#define BPF_ATOMIC_FETCH_XOR(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_XOR | BPF_FETCH }) + /* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */ #define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF) \ diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 0f700464955f..d5f4b1f2c9fe 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -1651,7 +1651,10 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack) switch (IMM) { ATOMIC(BPF_ADD, add) ATOMIC(BPF_SUB, sub) - + ATOMIC(BPF_AND, and) + ATOMIC(BPF_OR, or) + ATOMIC(BPF_XOR, xor) +#undef ATOMIC case BPF_XCHG: if (BPF_SIZE(insn->code) == BPF_W) SRC = (u32) atomic_xchg( diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c index f33acffdeed0..4c861632efac 100644 --- a/kernel/bpf/disasm.c +++ b/kernel/bpf/disasm.c @@ -83,6 +83,7 @@ const char *const bpf_alu_string[16] = { const char *const bpf_atomic_alu_string[16] = { [BPF_ADD >> 4] = "add", [BPF_SUB >> 4] = "sub", + [BPF_AND >> 4] = "and", }; static const char *const bpf_ldst_string[] = { @@ -159,7 +160,8 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs, insn->dst_reg, insn->off, insn->src_reg); else if (BPF_MODE(insn->code) == BPF_ATOMIC && - (insn->imm == BPF_ADD || insn->imm == BPF_SUB)) { + (insn->imm == BPF_ADD || 
insn->imm == BPF_SUB || + (insn->imm == BPF_AND))) { verbose(cbs->private_data, "(%02x) lock *(%s *)(r%d %+d) %s r%d\n", insn->code, bpf_ldst_string[BPF_SIZE(insn->code) >> 3], @@ -168,7 +170,8 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs, insn->src_reg); } else if (BPF_MODE(insn->code) == BPF_ATOMIC && (insn->imm == (BPF_ADD | BPF_FETCH) || - insn->imm == (BPF_SUB | BPF_FETCH))) { + insn->imm == (BPF_SUB | BPF_FETCH) || + insn->imm == (BPF_AND | BPF_FETCH))) { verbose(cbs->private_data, "(%02x) r%d = atomic%s_fetch_%s(*(%s *)(r%d %+d), r%d)\n", insn->code, insn->src_reg, BPF_SIZE(insn->code) == BPF_DW ? "64" : "", diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index dea9ad486ad1..188f152a0c32 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -3608,6 +3608,12 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i case BPF_ADD | BPF_FETCH: case BPF_SUB: case BPF_SUB | BPF_FETCH: + case BPF_AND: + case BPF_AND | BPF_FETCH: + case BPF_OR: + case BPF_OR | BPF_FETCH: + case BPF_XOR: + case BPF_XOR | BPF_FETCH: case BPF_XCHG: case BPF_CMPXCHG: break; diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h index 387eddaf11e5..2a64149af056 100644 --- a/tools/include/linux/filter.h +++ b/tools/include/linux/filter.h @@ -210,6 +210,66 @@ .off = OFF, \ .imm = BPF_SUB | BPF_FETCH }) +/* Atomic memory and, *(uint *)(dst_reg + off16) -= src_reg */ + +#define BPF_ATOMIC_AND(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_AND }) + +/* Atomic memory and with fetch, src_reg = atomic_fetch_and(*(dst_reg + off), src_reg); */ + +#define BPF_ATOMIC_FETCH_AND(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_AND | BPF_FETCH }) + +/* Atomic memory or, *(uint *)(dst_reg + off16) -= src_reg */ + +#define BPF_ATOMIC_OR(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_OR }) + +/* Atomic memory or with fetch, src_reg = atomic_fetch_or(*(dst_reg + off), src_reg); */ + +#define BPF_ATOMIC_FETCH_OR(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_OR | BPF_FETCH }) + +/* Atomic memory xor, *(uint *)(dst_reg + off16) -= src_reg */ + +#define BPF_ATOMIC_XOR(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_XOR }) + +/* Atomic memory xor with fetch, src_reg = atomic_fetch_xor(*(dst_reg + off), src_reg); */ + +#define BPF_ATOMIC_FETCH_XOR(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_XOR | BPF_FETCH }) + /* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */ #define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF) \ From patchwork Fri Nov 27 17:57:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 11937027 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on 
aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.2 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED,USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EB400C83012 for ; Fri, 27 Nov 2020 17:58:25 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id AF1802224C for ; Fri, 27 Nov 2020 17:58:25 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="wCrqQ3p6" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1733031AbgK0R6Q (ORCPT ); Fri, 27 Nov 2020 12:58:16 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37822 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1732919AbgK0R6Q (ORCPT ); Fri, 27 Nov 2020 12:58:16 -0500 Received: from mail-wm1-x34a.google.com (mail-wm1-x34a.google.com [IPv6:2a00:1450:4864:20::34a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 60CFCC0613D2 for ; Fri, 27 Nov 2020 09:58:15 -0800 (PST) Received: by mail-wm1-x34a.google.com with SMTP id o19so3483845wme.2 for ; Fri, 27 Nov 2020 09:58:15 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=dPT+lQWJVLkIOZCl+Ehie0vjz1PRtZaDFnCAvmHCwNo=; b=wCrqQ3p638zJv03Hi0Zjj6NaKsHOwiFhrl/AoYKYNR5ZhN6G3mpWJoNzb5tKpIXcbg E4sfa/+h5jWvCactB1/7i4O+skz135GYfCqU1Ga3OFa/h1i+Z/F4RilPvNuquRkeiVy8 L1HWY36FW0zD5QSJ77PRoNSvUgHh0HSuLtggt5hg1liJasyF2uLedQdpC4YAbpAkrN7T Kro5LM2K5aG5AJdcp8yQB1rpKEXOy4YDVNZJ+2L0W3vXOE0qNeWJHdJr2O2kLGgCcFdJ 7HbysCiRFep3txIdJBO1UcgU4foz6JC1Af8OJ4iX0OJludvlop5+K5hFwzaUXMHsM8j7 u/ew== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=dPT+lQWJVLkIOZCl+Ehie0vjz1PRtZaDFnCAvmHCwNo=; b=pu3ZdN3sLhDxBgCYXVaHDxYaX+UrNUY17wOEaWoV2qLQ86JmdgmOjpj2Ugcb9E7kcz 3Xm5AdkHH7OlNlEfieaznvAq9nEgXDAWYb5ozgJjI27UIbo3LCK34B9FLwIhVrjhDe0h B9Zw4OyL3zc7Z7nLDEAwSOa+hQI9keexdaSGJMcmAghIYeHC9ml3lDRJTMY5yQC6nkYs UbUY4Ou6+ESoIcPcVaTGbKHulUycYjqBXmz+fp8D+/akM6FvmAYY7er+ue24F+xbm1S4 RrGmMG+7ZjQbLdT8eej8E8XvydqAGoro+rgATH3tZO8qctXylJGtLKx8jT7zQFx6DhEQ NjRg== X-Gm-Message-State: AOAM530zrQdt1zBhGhRJ6deRPRn2LRrh97OA0AKkgaShNvNh7be0LYfo yGbFJzPz5VJpJC4SJsko5690B7Oi6PX6UnlqXZot4+ecHSawKSUyC+zvlzjdMMVJ0YfggrU5mip RNuevIfY1FgICoPClqXzxfVyss7Z2wQaEhk1SaMXg1jC2tCMFTCpDX6OTMjNfpHo= X-Google-Smtp-Source: ABdhPJw4IuY7eYouHoaWm235PapJTC5s95Ac9K8GM5JT4mTePiFV/KVgx3tBOuXDTC4iUC1wRoeURJj1jZidEw== Sender: "jackmanb via sendgmr" X-Received: from beeg.c.googlers.com ([fda3:e722:ac3:cc00:28:9cb1:c0a8:11db]) (user=jackmanb job=sendgmr) by 2002:a1c:f20e:: with SMTP id s14mr10419901wmc.126.1606499893900; Fri, 27 Nov 2020 09:58:13 -0800 (PST) Date: Fri, 27 Nov 2020 17:57:37 +0000 In-Reply-To: <20201127175738.1085417-1-jackmanb@google.com> Message-Id: <20201127175738.1085417-13-jackmanb@google.com> Mime-Version: 1.0 References: <20201127175738.1085417-1-jackmanb@google.com> X-Mailer: git-send-email 2.29.2.454.gaff20da3a2-goog Subject: [PATCH v2 bpf-next 12/13] bpf: 
Add tests for new BPF atomic operations From: Brendan Jackman To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Yonghong Song , Daniel Borkmann , KP Singh , Florent Revest , linux-kernel@vger.kernel.org, Jann Horn , Brendan Jackman Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net This relies on the work done by Yonghong Song in https://reviews.llvm.org/D72184 Note the hackery in the Makefile that is necessary to avoid breaking tests for people who haven't yet got a version of Clang supporting V4. It seems like this hackery ought to be confined to tools/build/feature - I tried implementing that and found that it ballooned into an explosion of nightmares at the top of tools/testing/selftests/bpf/Makefile without actually improving the clarity of the CLANG_BPF_BUILD_RULE code at all. Hence the simple $(shell) call... Signed-off-by: Brendan Jackman --- tools/testing/selftests/bpf/Makefile | 12 +- .../selftests/bpf/prog_tests/atomics_test.c | 329 ++++++++++++++++++ .../selftests/bpf/progs/atomics_test.c | 124 +++++++ .../selftests/bpf/verifier/atomic_and.c | 77 ++++ .../selftests/bpf/verifier/atomic_cmpxchg.c | 96 +++++ .../selftests/bpf/verifier/atomic_fetch_add.c | 106 ++++++ .../selftests/bpf/verifier/atomic_or.c | 77 ++++ .../selftests/bpf/verifier/atomic_sub.c | 44 +++ .../selftests/bpf/verifier/atomic_xchg.c | 46 +++ .../selftests/bpf/verifier/atomic_xor.c | 77 ++++ tools/testing/selftests/bpf/verifier/ctx.c | 2 +- 11 files changed, 987 insertions(+), 3 deletions(-) create mode 100644 tools/testing/selftests/bpf/prog_tests/atomics_test.c create mode 100644 tools/testing/selftests/bpf/progs/atomics_test.c create mode 100644 tools/testing/selftests/bpf/verifier/atomic_and.c create mode 100644 tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c create mode 100644 tools/testing/selftests/bpf/verifier/atomic_fetch_add.c create mode 100644 tools/testing/selftests/bpf/verifier/atomic_or.c create mode 100644 tools/testing/selftests/bpf/verifier/atomic_sub.c create mode 100644 tools/testing/selftests/bpf/verifier/atomic_xchg.c create mode 100644 tools/testing/selftests/bpf/verifier/atomic_xor.c diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile index 3d5940cd110d..5eadfd09037d 100644 --- a/tools/testing/selftests/bpf/Makefile +++ b/tools/testing/selftests/bpf/Makefile @@ -228,6 +228,12 @@ IS_LITTLE_ENDIAN = $(shell $(CC) -dM -E - &1)),true,) +ifeq ($(CLANG_SUPPORTS_V4),true) + CFLAGS += -DENABLE_ATOMICS_TESTS +endif + CLANG_SYS_INCLUDES = $(call get_sys_includes,$(CLANG)) BPF_CFLAGS = -g -D__TARGET_ARCH_$(SRCARCH) $(MENDIAN) \ -I$(INCLUDE_DIR) -I$(CURDIR) -I$(APIDIR) \ @@ -250,7 +256,9 @@ define CLANG_BPF_BUILD_RULE $(call msg,CLNG-LLC,$(TRUNNER_BINARY),$2) $(Q)($(CLANG) $3 -O2 -target bpf -emit-llvm \ -c $1 -o - || echo "BPF obj compilation failed") | \ - $(LLC) -mattr=dwarfris -march=bpf -mcpu=v3 $4 -filetype=obj -o $2 + $(LLC) -mattr=dwarfris -march=bpf \ + -mcpu=$(if $(CLANG_SUPPORTS_V4),v4,v3) \ + $4 -filetype=obj -o $2 endef # Similar to CLANG_BPF_BUILD_RULE, but with disabled alu32 define CLANG_NOALU32_BPF_BUILD_RULE @@ -391,7 +399,7 @@ TRUNNER_EXTRA_SOURCES := test_progs.c cgroup_helpers.c trace_helpers.c \ TRUNNER_EXTRA_FILES := $(OUTPUT)/urandom_read \ $(wildcard progs/btf_dump_test_case_*.c) TRUNNER_BPF_BUILD_RULE := CLANG_BPF_BUILD_RULE -TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS) +TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS) $(if $(CLANG_SUPPORTS_V4),-DENABLE_ATOMICS_TESTS,) 
TRUNNER_BPF_LDFLAGS := -mattr=+alu32 $(eval $(call DEFINE_TEST_RUNNER,test_progs)) diff --git a/tools/testing/selftests/bpf/prog_tests/atomics_test.c b/tools/testing/selftests/bpf/prog_tests/atomics_test.c new file mode 100644 index 000000000000..8ecc0392fdf9 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/atomics_test.c @@ -0,0 +1,329 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include + +#ifdef ENABLE_ATOMICS_TESTS + +#include "atomics_test.skel.h" + +static void test_add(void) +{ + struct atomics_test *atomics_skel = NULL; + int err, prog_fd; + __u32 duration = 0, retval; + + atomics_skel = atomics_test__open_and_load(); + if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n")) + goto cleanup; + + err = atomics_test__attach(atomics_skel); + if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err)) + goto cleanup; + + prog_fd = bpf_program__fd(atomics_skel->progs.add); + err = bpf_prog_test_run(prog_fd, 1, NULL, 0, + NULL, NULL, &retval, &duration); + if (CHECK(err || retval, "test_run add", + "err %d errno %d retval %d duration %d\n", + err, errno, retval, duration)) + goto cleanup; + + CHECK(atomics_skel->data->add64_value != 3, "add64_value", + "64bit atomic add value was not incremented (got %lld want 2)\n", + atomics_skel->data->add64_value); + CHECK(atomics_skel->bss->add64_result != 1, "add64_result", + "64bit atomic add bad return value (got %lld want 1)\n", + atomics_skel->bss->add64_result); + + CHECK(atomics_skel->data->add32_value != 3, "add32_value", + "32bit atomic add value was not incremented (got %d want 2)\n", + atomics_skel->data->add32_value); + CHECK(atomics_skel->bss->add32_result != 1, "add32_result", + "32bit atomic add bad return value (got %d want 1)\n", + atomics_skel->bss->add32_result); + + CHECK(atomics_skel->bss->add_stack_value_copy != 3, "add_stack_value", + "stack atomic add value was not incremented (got %lld want 2)\n", + atomics_skel->bss->add_stack_value_copy); + CHECK(atomics_skel->bss->add_stack_result != 1, "add_stack_result", + "stack atomic add bad return value (got %lld want 1)\n", + atomics_skel->bss->add_stack_result); + +cleanup: + atomics_test__destroy(atomics_skel); +} + +static void test_sub(void) +{ + struct atomics_test *atomics_skel = NULL; + int err, prog_fd; + __u32 duration = 0, retval; + + atomics_skel = atomics_test__open_and_load(); + if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n")) + goto cleanup; + + err = atomics_test__attach(atomics_skel); + if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err)) + goto cleanup; + + prog_fd = bpf_program__fd(atomics_skel->progs.sub); + err = bpf_prog_test_run(prog_fd, 1, NULL, 0, + NULL, NULL, &retval, &duration); + if (CHECK(err || retval, "test_run sub", + "err %d errno %d retval %d duration %d\n", + err, errno, retval, duration)) + goto cleanup; + + CHECK(atomics_skel->data->sub64_value != -1, "sub64_value", + "64bit atomic sub value was not decremented (got %lld want -1)\n", + atomics_skel->data->sub64_value); + CHECK(atomics_skel->bss->sub64_result != 1, "sub64_result", + "64bit atomic sub bad return value (got %lld want 1)\n", + atomics_skel->bss->sub64_result); + + CHECK(atomics_skel->data->sub32_value != -1, "sub32_value", + "32bit atomic sub value was not decremented (got %d want -1)\n", + atomics_skel->data->sub32_value); + CHECK(atomics_skel->bss->sub32_result != 1, "sub32_result", + "32bit atomic sub bad return value (got %d want 1)\n", + atomics_skel->bss->sub32_result); + + 
CHECK(atomics_skel->bss->sub_stack_value_copy != -1, "sub_stack_value", + "stack atomic sub value was not decremented (got %lld want -1)\n", + atomics_skel->bss->sub_stack_value_copy); + CHECK(atomics_skel->bss->sub_stack_result != 1, "sub_stack_result", + "stack atomic sub bad return value (got %lld want 1)\n", + atomics_skel->bss->sub_stack_result); + +cleanup: + atomics_test__destroy(atomics_skel); +} + +static void test_and(void) +{ + struct atomics_test *atomics_skel = NULL; + int err, prog_fd; + __u32 duration = 0, retval; + + atomics_skel = atomics_test__open_and_load(); + if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n")) + goto cleanup; + + err = atomics_test__attach(atomics_skel); + if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err)) + goto cleanup; + + prog_fd = bpf_program__fd(atomics_skel->progs.and); + err = bpf_prog_test_run(prog_fd, 1, NULL, 0, + NULL, NULL, &retval, &duration); + if (CHECK(err || retval, "test_run and", + "err %d errno %d retval %d duration %d\n", + err, errno, retval, duration)) + goto cleanup; + + CHECK(atomics_skel->data->and64_value != 0x010ull << 32, "and64_value", + "64bit atomic and, bad value (got 0x%llx want 0x%llx)\n", + atomics_skel->data->and64_value, 0x010ull << 32); + CHECK(atomics_skel->bss->and64_result != 0x110ull << 32, "and64_result", + "64bit atomic and, bad result (got 0x%llx want 0x%llx)\n", + atomics_skel->bss->and64_result, 0x110ull << 32); + + CHECK(atomics_skel->data->and32_value != 0x010, "and32_value", + "32bit atomic and, bad value (got 0x%x want 0x%x)\n", + atomics_skel->data->and32_value, 0x010); + CHECK(atomics_skel->bss->and32_result != 0x110, "and32_result", + "32bit atomic and, bad result (got 0x%x want 0x%x)\n", + atomics_skel->bss->and32_result, 0x110); + +cleanup: + atomics_test__destroy(atomics_skel); +} + +static void test_or(void) +{ + struct atomics_test *atomics_skel = NULL; + int err, prog_fd; + __u32 duration = 0, retval; + + atomics_skel = atomics_test__open_and_load(); + if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n")) + goto cleanup; + + err = atomics_test__attach(atomics_skel); + if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err)) + goto cleanup; + + prog_fd = bpf_program__fd(atomics_skel->progs.or); + err = bpf_prog_test_run(prog_fd, 1, NULL, 0, + NULL, NULL, &retval, &duration); + if (CHECK(err || retval, "test_run or", + "err %d errno %d retval %d duration %d\n", + err, errno, retval, duration)) + goto cleanup; + + CHECK(atomics_skel->data->or64_value != 0x111ull << 32, "or64_value", + "64bit atomic or, bad value (got 0x%llx want 0x%llx)\n", + atomics_skel->data->or64_value, 0x111ull << 32); + CHECK(atomics_skel->bss->or64_result != 0x110ull << 32, "or64_result", + "64bit atomic or, bad result (got 0x%llx want 0x%llx)\n", + atomics_skel->bss->or64_result, 0x110ull << 32); + + CHECK(atomics_skel->data->or32_value != 0x111, "or32_value", + "32bit atomic or, bad value (got 0x%x want 0x%x)\n", + atomics_skel->data->or32_value, 0x111); + CHECK(atomics_skel->bss->or32_result != 0x110, "or32_result", + "32bit atomic or, bad result (got 0x%x want 0x%x)\n", + atomics_skel->bss->or32_result, 0x110); + +cleanup: + atomics_test__destroy(atomics_skel); +} + +static void test_xor(void) +{ + struct atomics_test *atomics_skel = NULL; + int err, prog_fd; + __u32 duration = 0, retval; + + atomics_skel = atomics_test__open_and_load(); + if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n")) + goto cleanup; + + err = 
atomics_test__attach(atomics_skel); + if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err)) + goto cleanup; + + prog_fd = bpf_program__fd(atomics_skel->progs.xor); + err = bpf_prog_test_run(prog_fd, 1, NULL, 0, + NULL, NULL, &retval, &duration); + if (CHECK(err || retval, "test_run xor", + "err %d errno %d retval %d duration %d\n", + err, errno, retval, duration)) + goto cleanup; + + CHECK(atomics_skel->data->xor64_value != 0x101ull << 32, "xor64_value", + "64bit atomic xor, bad value (got 0x%llx want 0x%llx)\n", + atomics_skel->data->xor64_value, 0x101ull << 32); + CHECK(atomics_skel->bss->xor64_result != 0x110ull << 32, "xor64_result", + "64bit atomic xor, bad result (got 0x%llx want 0x%llx)\n", + atomics_skel->bss->xor64_result, 0x110ull << 32); + + CHECK(atomics_skel->data->xor32_value != 0x101, "xor32_value", + "32bit atomic xor, bad value (got 0x%x want 0x%x)\n", + atomics_skel->data->xor32_value, 0x101); + CHECK(atomics_skel->bss->xor32_result != 0x110, "xor32_result", + "32bit atomic xor, bad result (got 0x%x want 0x%x)\n", + atomics_skel->bss->xor32_result, 0x110); + +cleanup: + atomics_test__destroy(atomics_skel); +} + +static void test_cmpxchg(void) +{ + struct atomics_test *atomics_skel = NULL; + int err, prog_fd; + __u32 duration = 0, retval; + + atomics_skel = atomics_test__open_and_load(); + if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n")) + goto cleanup; + + err = atomics_test__attach(atomics_skel); + if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err)) + goto cleanup; + + prog_fd = bpf_program__fd(atomics_skel->progs.add); + err = bpf_prog_test_run(prog_fd, 1, NULL, 0, + NULL, NULL, &retval, &duration); + if (CHECK(err || retval, "test_run add", + "err %d errno %d retval %d duration %d\n", + err, errno, retval, duration)) + goto cleanup; + + CHECK(atomics_skel->data->cmpxchg64_value != 2, "cmpxchg64_value", + "64bit cmpxchg left unexpected value (got %llx want 2)\n", + atomics_skel->data->cmpxchg64_value); + CHECK(atomics_skel->bss->cmpxchg64_result_fail != 1, "cmpxchg_result_fail", + "64bit cmpxchg returned bad result (got %llx want 1)\n", + atomics_skel->bss->cmpxchg64_result_fail); + CHECK(atomics_skel->bss->cmpxchg64_result_succeed != 1, "cmpxchg_result_succeed", + "64bit cmpxchg returned bad result (got %llx want 1)\n", + atomics_skel->bss->cmpxchg64_result_succeed); + + CHECK(atomics_skel->data->cmpxchg32_value != 2, "cmpxchg32_value", + "32bit cmpxchg left unexpected value (got %d want 2)\n", + atomics_skel->data->cmpxchg32_value); + CHECK(atomics_skel->bss->cmpxchg32_result_fail != 1, "cmpxchg_result_fail", + "32bit cmpxchg returned bad result (got %d want 1)\n", + atomics_skel->bss->cmpxchg32_result_fail); + CHECK(atomics_skel->bss->cmpxchg32_result_succeed != 1, "cmpxchg_result_succeed", + "32bit cmpxchg returned bad result (got %d want 1)\n", + atomics_skel->bss->cmpxchg32_result_succeed); + +cleanup: + atomics_test__destroy(atomics_skel); +} + +static void test_xchg(void) +{ + struct atomics_test *atomics_skel = NULL; + int err, prog_fd; + __u32 duration = 0, retval; + + atomics_skel = atomics_test__open_and_load(); + if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n")) + goto cleanup; + + err = atomics_test__attach(atomics_skel); + if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err)) + goto cleanup; + + prog_fd = bpf_program__fd(atomics_skel->progs.add); + err = bpf_prog_test_run(prog_fd, 1, NULL, 0, + NULL, NULL, &retval, &duration); + if (CHECK(err || retval, 
"test_run add", + "err %d errno %d retval %d duration %d\n", + err, errno, retval, duration)) + goto cleanup; + + CHECK(atomics_skel->data->xchg64_value != 2, "xchg64_value", + "64bit xchg left unexpected value (got %lld want 2)\n", + atomics_skel->data->xchg64_value); + CHECK(atomics_skel->bss->xchg64_result != 1, "xchg_result", + "64bit xchg returned bad result (got %lld want 1)\n", + atomics_skel->bss->xchg64_result); + + CHECK(atomics_skel->data->xchg32_value != 2, "xchg32_value", + "32bit xchg left unexpected value (got %d want 2)\n", + atomics_skel->data->xchg32_value); + CHECK(atomics_skel->bss->xchg32_result != 1, "xchg_result", + "32bit xchg returned bad result (got %d want 1)\n", + atomics_skel->bss->xchg32_result); + +cleanup: + atomics_test__destroy(atomics_skel); +} + +void test_atomics_test(void) +{ + test_add(); + test_sub(); + test_and(); + test_or(); + test_xor(); + test_cmpxchg(); + test_xchg(); +} + +#else /* ENABLE_ATOMICS_TESTS */ + +void test_atomics_test(void) +{ + printf("%s:SKIP:no ENABLE_ATOMICS_TEST (missing Clang BPF atomics support)", + __func__); + test__skip(); +} + +#endif /* ENABLE_ATOMICS_TESTS */ diff --git a/tools/testing/selftests/bpf/progs/atomics_test.c b/tools/testing/selftests/bpf/progs/atomics_test.c new file mode 100644 index 000000000000..3139b00937e5 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/atomics_test.c @@ -0,0 +1,124 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include +#include + +#ifdef ENABLE_ATOMICS_TESTS + +__u64 add64_value = 1; +__u64 add64_result = 0; +__u32 add32_value = 1; +__u32 add32_result = 0; +__u64 add_stack_value_copy = 0; +__u64 add_stack_result = 0; +SEC("fentry/bpf_fentry_test1") +int BPF_PROG(add, int a) +{ + __u64 add_stack_value = 1; + + add64_result = __sync_fetch_and_add(&add64_value, 2); + add32_result = __sync_fetch_and_add(&add32_value, 2); + add_stack_result = __sync_fetch_and_add(&add_stack_value, 2); + add_stack_value_copy = add_stack_value; + + return 0; +} + +__s64 sub64_value = 1; +__s64 sub64_result = 0; +__s32 sub32_value = 1; +__s32 sub32_result = 0; +__s64 sub_stack_value_copy = 0; +__s64 sub_stack_result = 0; +SEC("fentry/bpf_fentry_test1") +int BPF_PROG(sub, int a) +{ + __u64 sub_stack_value = 1; + + sub64_result = __sync_fetch_and_sub(&sub64_value, 2); + sub32_result = __sync_fetch_and_sub(&sub32_value, 2); + sub_stack_result = __sync_fetch_and_sub(&sub_stack_value, 2); + sub_stack_value_copy = sub_stack_value; + + return 0; +} + +__u64 and64_value = (0x110ull << 32); +__u64 and64_result = 0; +__u32 and32_value = 0x110; +__u32 and32_result = 0; +SEC("fentry/bpf_fentry_test1") +int BPF_PROG(and, int a) +{ + + and64_result = __sync_fetch_and_and(&and64_value, 0x011ull << 32); + and32_result = __sync_fetch_and_and(&and32_value, 0x011); + + return 0; +} + +__u64 or64_value = (0x110ull << 32); +__u64 or64_result = 0; +__u32 or32_value = 0x110; +__u32 or32_result = 0; +SEC("fentry/bpf_fentry_test1") +int BPF_PROG(or, int a) +{ + or64_result = __sync_fetch_and_or(&or64_value, 0x011ull << 32); + or32_result = __sync_fetch_and_or(&or32_value, 0x011); + + return 0; +} + +__u64 xor64_value = (0x110ull << 32); +__u64 xor64_result = 0; +__u32 xor32_value = 0x110; +__u32 xor32_result = 0; +SEC("fentry/bpf_fentry_test1") +int BPF_PROG(xor, int a) +{ + xor64_result = __sync_fetch_and_xor(&xor64_value, 0x011ull << 32); + xor32_result = __sync_fetch_and_xor(&xor32_value, 0x011); + + return 0; +} + +__u64 cmpxchg64_value = 1; +__u64 cmpxchg64_result_fail = 0; +__u64 cmpxchg64_result_succeed = 0; 
+__u32 cmpxchg32_value = 1; +__u32 cmpxchg32_result_fail = 0; +__u32 cmpxchg32_result_succeed = 0; +SEC("fentry/bpf_fentry_test1") +int BPF_PROG(cmpxchg, int a) +{ + cmpxchg64_result_fail = __sync_val_compare_and_swap( + &cmpxchg64_value, 0, 3); + cmpxchg64_result_succeed = __sync_val_compare_and_swap( + &cmpxchg64_value, 1, 2); + + cmpxchg32_result_fail = __sync_val_compare_and_swap( + &cmpxchg32_value, 0, 3); + cmpxchg32_result_succeed = __sync_val_compare_and_swap( + &cmpxchg32_value, 1, 2); + + return 0; +} + +__u64 xchg64_value = 1; +__u64 xchg64_result = 0; +__u32 xchg32_value = 1; +__u32 xchg32_result = 0; +SEC("fentry/bpf_fentry_test1") +int BPF_PROG(xchg, int a) +{ + __u64 val64 = 2; + __u32 val32 = 2; + + __atomic_exchange(&xchg64_value, &val64, &xchg64_result, __ATOMIC_RELAXED); + __atomic_exchange(&xchg32_value, &val32, &xchg32_result, __ATOMIC_RELAXED); + + return 0; +} + +#endif /* ENABLE_ATOMICS_TESTS */ diff --git a/tools/testing/selftests/bpf/verifier/atomic_and.c b/tools/testing/selftests/bpf/verifier/atomic_and.c new file mode 100644 index 000000000000..7eea6d9dfd7d --- /dev/null +++ b/tools/testing/selftests/bpf/verifier/atomic_and.c @@ -0,0 +1,77 @@ +{ + "BPF_ATOMIC_AND without fetch", + .insns = { + /* val = 0x110; */ + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110), + /* atomic_and(&val, 0x011); */ + BPF_MOV64_IMM(BPF_REG_1, 0x011), + BPF_ATOMIC_AND(BPF_DW, BPF_REG_10, BPF_REG_1, -8), + /* if (val != 0x010) exit(2); */ + BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -8), + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0x010, 2), + BPF_MOV64_IMM(BPF_REG_0, 2), + BPF_EXIT_INSN(), + /* r1 should not be clobbered, no BPF_FETCH flag */ + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x011, 1), + BPF_MOV64_IMM(BPF_REG_0, 1), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, +}, +{ + "BPF_ATOMIC_AND with fetch", + .insns = { + BPF_MOV64_IMM(BPF_REG_0, 123), + /* val = 0x110; */ + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110), + /* old = atomic_fetch_and(&val, 0x011); */ + BPF_MOV64_IMM(BPF_REG_1, 0x011), + BPF_ATOMIC_FETCH_AND(BPF_DW, BPF_REG_10, BPF_REG_1, -8), + /* if (old != 0x110) exit(3); */ + BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2), + BPF_MOV64_IMM(BPF_REG_0, 3), + BPF_EXIT_INSN(), + /* if (val != 0x010) exit(2); */ + BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8), + BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x010, 2), + BPF_MOV64_IMM(BPF_REG_1, 2), + BPF_EXIT_INSN(), + /* Check R0 wasn't clobbered (for fear of x86 JIT bug) */ + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 123, 2), + BPF_MOV64_IMM(BPF_REG_0, 1), + BPF_EXIT_INSN(), + /* exit(0); */ + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, +}, +{ + "BPF_ATOMIC_AND with fetch 32bit", + .insns = { + /* r0 = (s64) -1 */ + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 1), + /* val = 0x110; */ + BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0x110), + /* old = atomic_fetch_and(&val, 0x011); */ + BPF_MOV32_IMM(BPF_REG_1, 0x011), + BPF_ATOMIC_FETCH_AND(BPF_W, BPF_REG_10, BPF_REG_1, -4), + /* if (old != 0x110) exit(3); */ + BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2), + BPF_MOV32_IMM(BPF_REG_0, 3), + BPF_EXIT_INSN(), + /* if (val != 0x010) exit(2); */ + BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -4), + BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x010, 2), + BPF_MOV32_IMM(BPF_REG_1, 2), + BPF_EXIT_INSN(), + /* Check R0 wasn't clobbered (for fear of x86 JIT bug) + * It should be -1 so add 1 to get exit code. 
+ */ + BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, +}, diff --git a/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c b/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c new file mode 100644 index 000000000000..eb43a06428fa --- /dev/null +++ b/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c @@ -0,0 +1,96 @@ +{ + "atomic compare-and-exchange smoketest - 64bit", + .insns = { + /* val = 3; */ + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3), + /* old = atomic_cmpxchg(&val, 2, 4); */ + BPF_MOV64_IMM(BPF_REG_1, 4), + BPF_MOV64_IMM(BPF_REG_0, 2), + BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_1, -8), + /* if (old != 3) exit(2); */ + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 3, 2), + BPF_MOV64_IMM(BPF_REG_0, 2), + BPF_EXIT_INSN(), + /* if (val != 3) exit(3); */ + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8), + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 3, 2), + BPF_MOV64_IMM(BPF_REG_0, 3), + BPF_EXIT_INSN(), + /* old = atomic_cmpxchg(&val, 3, 4); */ + BPF_MOV64_IMM(BPF_REG_1, 4), + BPF_MOV64_IMM(BPF_REG_0, 3), + BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_1, -8), + /* if (old != 3) exit(4); */ + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 3, 2), + BPF_MOV64_IMM(BPF_REG_0, 4), + BPF_EXIT_INSN(), + /* if (val != 4) exit(5); */ + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8), + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 4, 2), + BPF_MOV64_IMM(BPF_REG_0, 5), + BPF_EXIT_INSN(), + /* exit(0); */ + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, +}, +{ + "atomic compare-and-exchange smoketest - 32bit", + .insns = { + /* val = 3; */ + BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 3), + /* old = atomic_cmpxchg(&val, 2, 4); */ + BPF_MOV32_IMM(BPF_REG_1, 4), + BPF_MOV32_IMM(BPF_REG_0, 2), + BPF_ATOMIC_CMPXCHG(BPF_W, BPF_REG_10, BPF_REG_1, -4), + /* if (old != 3) exit(2); */ + BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 3, 2), + BPF_MOV32_IMM(BPF_REG_0, 2), + BPF_EXIT_INSN(), + /* if (val != 3) exit(3); */ + BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -4), + BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 3, 2), + BPF_MOV32_IMM(BPF_REG_0, 3), + BPF_EXIT_INSN(), + /* old = atomic_cmpxchg(&val, 3, 4); */ + BPF_MOV32_IMM(BPF_REG_1, 4), + BPF_MOV32_IMM(BPF_REG_0, 3), + BPF_ATOMIC_CMPXCHG(BPF_W, BPF_REG_10, BPF_REG_1, -4), + /* if (old != 3) exit(4); */ + BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 3, 2), + BPF_MOV32_IMM(BPF_REG_0, 4), + BPF_EXIT_INSN(), + /* if (val != 4) exit(5); */ + BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -4), + BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 4, 2), + BPF_MOV32_IMM(BPF_REG_0, 5), + BPF_EXIT_INSN(), + /* exit(0); */ + BPF_MOV32_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, +}, +{ + "Can't use cmpxchg on uninit src reg", + .insns = { + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3), + BPF_MOV64_IMM(BPF_REG_0, 3), + BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_2, -8), + BPF_EXIT_INSN(), + }, + .result = REJECT, + .errstr = "!read_ok", +}, +{ + "Can't use cmpxchg on uninit memory", + .insns = { + BPF_MOV64_IMM(BPF_REG_0, 3), + BPF_MOV64_IMM(BPF_REG_2, 4), + BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_2, -8), + BPF_EXIT_INSN(), + }, + .result = REJECT, + .errstr = "invalid read from stack", +}, diff --git a/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c b/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c new file mode 100644 index 000000000000..c3236510cb64 --- /dev/null +++ b/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c @@ -0,0 +1,106 @@ +{ + "BPF_ATOMIC_FETCH_ADD smoketest - 64bit", + .insns = { + BPF_MOV64_IMM(BPF_REG_0, 0), + /* Write 3 to stack */ + 
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3), + /* Put a 1 in R1, add it to the 3 on the stack, and load the value back into R1 */ + BPF_MOV64_IMM(BPF_REG_1, 1), + BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_10, BPF_REG_1, -8), + /* Check the value we loaded back was 3 */ + BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 2), + BPF_MOV64_IMM(BPF_REG_0, 1), + BPF_EXIT_INSN(), + /* Load value from stack */ + BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8), + /* Check value loaded from stack was 4 */ + BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 4, 1), + BPF_MOV64_IMM(BPF_REG_0, 2), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, +}, +{ + "BPF_ATOMIC_FETCH_ADD smoketest - 32bit", + .insns = { + BPF_MOV64_IMM(BPF_REG_0, 0), + /* Write 3 to stack */ + BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 3), + /* Put a 1 in R1, add it to the 3 on the stack, and load the value back into R1 */ + BPF_MOV32_IMM(BPF_REG_1, 1), + BPF_ATOMIC_FETCH_ADD(BPF_W, BPF_REG_10, BPF_REG_1, -4), + /* Check the value we loaded back was 3 */ + BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 2), + BPF_MOV64_IMM(BPF_REG_0, 1), + BPF_EXIT_INSN(), + /* Load value from stack */ + BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -4), + /* Check value loaded from stack was 4 */ + BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 4, 1), + BPF_MOV64_IMM(BPF_REG_0, 2), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, +}, +{ + "Can't use ATM_FETCH_ADD on frame pointer", + .insns = { + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3), + BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_10, BPF_REG_10, -8), + BPF_EXIT_INSN(), + }, + .result = REJECT, + .errstr_unpriv = "R10 leaks addr into mem", + .errstr = "frame pointer is read only", +}, +{ + "Can't use ATM_FETCH_ADD on uninit src reg", + .insns = { + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3), + BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_10, BPF_REG_2, -8), + BPF_EXIT_INSN(), + }, + .result = REJECT, + /* It happens that the address leak check is first, but it would also be + * complain about the fact that we're trying to modify R10. + */ + .errstr = "!read_ok", +}, +{ + "Can't use ATM_FETCH_ADD on uninit dst reg", + .insns = { + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_2, BPF_REG_0, -8), + BPF_EXIT_INSN(), + }, + .result = REJECT, + /* It happens that the address leak check is first, but it would also be + * complain about the fact that we're trying to modify R10. + */ + .errstr = "!read_ok", +}, +{ + "Can't use ATM_FETCH_ADD on kernel memory", + .insns = { + /* This is an fentry prog, context is array of the args of the + * kernel function being called. Load first arg into R2. + */ + BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, 0), + /* First arg of bpf_fentry_test7 is a pointer to a struct. + * Attempt to modify that struct. Verifier shouldn't let us + * because it's kernel memory. 
+ */ + BPF_MOV64_IMM(BPF_REG_3, 1), + BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_2, BPF_REG_3, 0), + /* Done */ + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_TRACING, + .expected_attach_type = BPF_TRACE_FENTRY, + .kfunc = "bpf_fentry_test7", + .result = REJECT, + .errstr = "only read is supported", +}, diff --git a/tools/testing/selftests/bpf/verifier/atomic_or.c b/tools/testing/selftests/bpf/verifier/atomic_or.c new file mode 100644 index 000000000000..1b22fb2881f0 --- /dev/null +++ b/tools/testing/selftests/bpf/verifier/atomic_or.c @@ -0,0 +1,77 @@ +{ + "BPF_ATOMIC_OR without fetch", + .insns = { + /* val = 0x110; */ + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110), + /* atomic_or(&val, 0x011); */ + BPF_MOV64_IMM(BPF_REG_1, 0x011), + BPF_ATOMIC_OR(BPF_DW, BPF_REG_10, BPF_REG_1, -8), + /* if (val != 0x111) exit(2); */ + BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -8), + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0x111, 2), + BPF_MOV64_IMM(BPF_REG_0, 2), + BPF_EXIT_INSN(), + /* r1 should not be clobbered, no BPF_FETCH flag */ + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x011, 1), + BPF_MOV64_IMM(BPF_REG_0, 1), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, +}, +{ + "BPF_ATOMIC_OR with fetch", + .insns = { + BPF_MOV64_IMM(BPF_REG_0, 123), + /* val = 0x110; */ + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110), + /* old = atomic_fetch_or(&val, 0x011); */ + BPF_MOV64_IMM(BPF_REG_1, 0x011), + BPF_ATOMIC_FETCH_OR(BPF_DW, BPF_REG_10, BPF_REG_1, -8), + /* if (old != 0x110) exit(3); */ + BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2), + BPF_MOV64_IMM(BPF_REG_0, 3), + BPF_EXIT_INSN(), + /* if (val != 0x111) exit(2); */ + BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8), + BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x111, 2), + BPF_MOV64_IMM(BPF_REG_1, 2), + BPF_EXIT_INSN(), + /* Check R0 wasn't clobbered (for fear of x86 JIT bug) */ + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 123, 2), + BPF_MOV64_IMM(BPF_REG_0, 1), + BPF_EXIT_INSN(), + /* exit(0); */ + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, +}, +{ + "BPF_ATOMIC_OR with fetch 32bit", + .insns = { + /* r0 = (s64) -1 */ + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 1), + /* val = 0x110; */ + BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0x110), + /* old = atomic_fetch_or(&val, 0x011); */ + BPF_MOV32_IMM(BPF_REG_1, 0x011), + BPF_ATOMIC_FETCH_OR(BPF_W, BPF_REG_10, BPF_REG_1, -4), + /* if (old != 0x110) exit(3); */ + BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2), + BPF_MOV32_IMM(BPF_REG_0, 3), + BPF_EXIT_INSN(), + /* if (val != 0x111) exit(2); */ + BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -4), + BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x111, 2), + BPF_MOV32_IMM(BPF_REG_1, 2), + BPF_EXIT_INSN(), + /* Check R0 wasn't clobbered (for fear of x86 JIT bug) + * It should be -1 so add 1 to get exit code. 
+ */ + BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, +}, diff --git a/tools/testing/selftests/bpf/verifier/atomic_sub.c b/tools/testing/selftests/bpf/verifier/atomic_sub.c new file mode 100644 index 000000000000..8a198f8bc194 --- /dev/null +++ b/tools/testing/selftests/bpf/verifier/atomic_sub.c @@ -0,0 +1,44 @@ +{ + "BPF_ATOMIC_SUB without fetch", + .insns = { + /* val = 100; */ + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 100), + /* atomic_sub(&val, 4); */ + BPF_MOV64_IMM(BPF_REG_1, 4), + BPF_ATOMIC_SUB(BPF_DW, BPF_REG_10, BPF_REG_1, -8), + /* if (val != 96) exit(2); */ + BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -8), + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 96, 2), + BPF_MOV64_IMM(BPF_REG_0, 2), + BPF_EXIT_INSN(), + /* r1 should not be clobbered, no BPF_FETCH flag */ + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 4, 1), + BPF_MOV64_IMM(BPF_REG_0, 1), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, +}, +{ + "BPF_ATOMIC_SUB with fetch", + .insns = { + /* val = 100; */ + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 100), + /* old = atomic_fetch_sub(&val, 4); */ + BPF_MOV64_IMM(BPF_REG_1, 4), + BPF_ATOMIC_FETCH_SUB(BPF_DW, BPF_REG_10, BPF_REG_1, -8), + /* if (old != 100) exit(3); */ + BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 100, 2), + BPF_MOV64_IMM(BPF_REG_0, 3), + BPF_EXIT_INSN(), + /* if (val != 96) exit(2); */ + BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -8), + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 96, 2), + BPF_MOV64_IMM(BPF_REG_0, 2), + BPF_EXIT_INSN(), + /* exit(0); */ + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, +}, diff --git a/tools/testing/selftests/bpf/verifier/atomic_xchg.c b/tools/testing/selftests/bpf/verifier/atomic_xchg.c new file mode 100644 index 000000000000..6ab7b2bdc6b7 --- /dev/null +++ b/tools/testing/selftests/bpf/verifier/atomic_xchg.c @@ -0,0 +1,46 @@ +{ + "atomic exchange smoketest - 64bit", + .insns = { + /* val = 3; */ + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3), + /* old = atomic_xchg(&val, 4); */ + BPF_MOV64_IMM(BPF_REG_1, 4), + BPF_ATOMIC_XCHG(BPF_DW, BPF_REG_10, BPF_REG_1, -8), + /* if (old != 3) exit(1); */ + BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 2), + BPF_MOV64_IMM(BPF_REG_0, 1), + BPF_EXIT_INSN(), + /* if (val != 4) exit(2); */ + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8), + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 4, 2), + BPF_MOV64_IMM(BPF_REG_0, 2), + BPF_EXIT_INSN(), + /* exit(0); */ + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, +}, +{ + "atomic exchange smoketest - 32bit", + .insns = { + /* val = 3; */ + BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 3), + /* old = atomic_xchg(&val, 4); */ + BPF_MOV32_IMM(BPF_REG_1, 4), + BPF_ATOMIC_XCHG(BPF_W, BPF_REG_10, BPF_REG_1, -4), + /* if (old != 3) exit(1); */ + BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 3, 2), + BPF_MOV32_IMM(BPF_REG_0, 1), + BPF_EXIT_INSN(), + /* if (val != 4) exit(2); */ + BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -4), + BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 4, 2), + BPF_MOV32_IMM(BPF_REG_0, 2), + BPF_EXIT_INSN(), + /* exit(0); */ + BPF_MOV32_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, +}, diff --git a/tools/testing/selftests/bpf/verifier/atomic_xor.c b/tools/testing/selftests/bpf/verifier/atomic_xor.c new file mode 100644 index 000000000000..d1315419a3a8 --- /dev/null +++ b/tools/testing/selftests/bpf/verifier/atomic_xor.c @@ -0,0 +1,77 @@ +{ + "BPF_ATOMIC_XOR without fetch", + .insns = { + /* val = 0x110; */ + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110), + /* atomic_xor(&val, 0x011); */ + BPF_MOV64_IMM(BPF_REG_1, 0x011), + 
BPF_ATOMIC_XOR(BPF_DW, BPF_REG_10, BPF_REG_1, -8), + /* if (val != 0x101) exit(2); */ + BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -8), + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0x101, 2), + BPF_MOV64_IMM(BPF_REG_0, 2), + BPF_EXIT_INSN(), + /* r1 should not be clobbered, no BPF_FETCH flag */ + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x011, 1), + BPF_MOV64_IMM(BPF_REG_0, 1), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, +}, +{ + "BPF_ATOMIC_XOR with fetch", + .insns = { + BPF_MOV64_IMM(BPF_REG_0, 123), + /* val = 0x110; */ + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110), + /* old = atomic_fetch_xor(&val, 0x011); */ + BPF_MOV64_IMM(BPF_REG_1, 0x011), + BPF_ATOMIC_FETCH_XOR(BPF_DW, BPF_REG_10, BPF_REG_1, -8), + /* if (old != 0x110) exit(3); */ + BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2), + BPF_MOV64_IMM(BPF_REG_0, 3), + BPF_EXIT_INSN(), + /* if (val != 0x101) exit(2); */ + BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8), + BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x101, 2), + BPF_MOV64_IMM(BPF_REG_1, 2), + BPF_EXIT_INSN(), + /* Check R0 wasn't clobbered (fxor fear of x86 JIT bug) */ + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 123, 2), + BPF_MOV64_IMM(BPF_REG_0, 1), + BPF_EXIT_INSN(), + /* exit(0); */ + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, +}, +{ + "BPF_ATOMIC_XOR with fetch 32bit", + .insns = { + /* r0 = (s64) -1 */ + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 1), + /* val = 0x110; */ + BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0x110), + /* old = atomic_fetch_xor(&val, 0x011); */ + BPF_MOV32_IMM(BPF_REG_1, 0x011), + BPF_ATOMIC_FETCH_XOR(BPF_W, BPF_REG_10, BPF_REG_1, -4), + /* if (old != 0x110) exit(3); */ + BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2), + BPF_MOV32_IMM(BPF_REG_0, 3), + BPF_EXIT_INSN(), + /* if (val != 0x101) exit(2); */ + BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -4), + BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x101, 2), + BPF_MOV32_IMM(BPF_REG_1, 2), + BPF_EXIT_INSN(), + /* Check R0 wasn't clobbered (fxor fear of x86 JIT bug) + * It should be -1 so add 1 to get exit code. 
+ */ + BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, +}, diff --git a/tools/testing/selftests/bpf/verifier/ctx.c b/tools/testing/selftests/bpf/verifier/ctx.c index a6d2d82b3447..ede3842d123b 100644 --- a/tools/testing/selftests/bpf/verifier/ctx.c +++ b/tools/testing/selftests/bpf/verifier/ctx.c @@ -13,7 +13,7 @@ "context stores via BPF_ATOMIC", .insns = { BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_ATOMIC_ADD(BPF_W, BPF_REG_1, BPF_REG_0, offsetof(struct __sk_buff)), + BPF_ATOMIC_ADD(BPF_W, BPF_REG_1, BPF_REG_0, offsetof(struct __sk_buff, mark)), BPF_EXIT_INSN(), }, .errstr = "BPF_ATOMIC stores into R1 ctx is not allowed", From patchwork Fri Nov 27 17:57:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 11937025 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4DF50C3E8C5 for ; Fri, 27 Nov 2020 17:58:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 102942224C for ; Fri, 27 Nov 2020 17:58:26 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="LCxzFfey" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732797AbgK0R6S (ORCPT ); Fri, 27 Nov 2020 12:58:18 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37826 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1732919AbgK0R6S (ORCPT ); Fri, 27 Nov 2020 12:58:18 -0500 Received: from mail-qt1-x84a.google.com (mail-qt1-x84a.google.com [IPv6:2607:f8b0:4864:20::84a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D8173C0613D4 for ; Fri, 27 Nov 2020 09:58:16 -0800 (PST) Received: by mail-qt1-x84a.google.com with SMTP id t17so3666393qtp.3 for ; Fri, 27 Nov 2020 09:58:16 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=NXuOTf1MPK8TRE6mj0XMAIGn4tL+u51UNs4Z0qmP7TY=; b=LCxzFfeyElEsmKF10zxjGXjZC+9NCkcHcJd6YRlac2ucNzBm813nhfgUEbO8tX89ls uJr5UW8Z8IRDcmuGXFQ/8rJiigwxIuAKf5yT4AfIY9qJwFSvgKFliJ3NIoQZ83jdEeUq tskA4y84dkkwdYQtTZDEHSU1EPauPDbzchuuBVKw0ZjiSGEoIQImlRJVAbZV3eglnt3D QDmy2ThtjXPWPgev/zPS/yKySFkL6ADja18SIHDYn4IAe7hiaR6j7IWcFtGvZl9mg9JM xEN0wBlt10RwJ6c/Dm+oDBwFniegP0SehwWazCNcDQvnLOQc8D/2sXRRmdlLe8omOh2r tqJQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=NXuOTf1MPK8TRE6mj0XMAIGn4tL+u51UNs4Z0qmP7TY=; b=eTG4DY+nOY+55Cwf50x28AeULfkSjQzAhuLUdUed8O8mX0lIebihsQJERBKAjKnMy5 0lwKc/Wj0IXmFKOE0OABWnyjO8w2lpNl/xnuWcWAg0I+IoKxuLc24NSIOOX76zU6g8SV Ibhk7Sis9UrDxttGqGoTwT4MFIax0euOc0zycuJ48ymiCDIvEEcn5h/O0ood/vufk2TR jvYd0MkPOYF1YCgv7gbo80NnGGjME1FOJAeEQ1VcGXpj4vJ7BAZQAxUeR2qAX3RjoPeA 
u5IyODGxF6JtgqUANToE9zqhozhgftuCac6hWlcG1tBegLafGg6NBCVTP8uekrXk6aMc shPQ== X-Gm-Message-State: AOAM530epVmIYYWqMxrotL/A17q+6p8DbQ0vyt4sWq/By7HTVA4flMH0 maK974++q0qB375Fk+LYV541Haim+8m14OElGOLIpEigsk8GSl787BLAGRc/MG5gUb9hhQ2NoVN bwXLYJzwWpMuv6D483p6obeTJgORI3FG+rFRZtlzjpet5cuC+mMpsOAxd48OhReM= X-Google-Smtp-Source: ABdhPJxPJl813Avj63aZPEpIBc12NcoU1YbQ5PNvAh5FhZ/HCmrzSxW3HvxObE/cCykzTQtXo1QWJ4L8I0EJ+g== Sender: "jackmanb via sendgmr" X-Received: from beeg.c.googlers.com ([fda3:e722:ac3:cc00:28:9cb1:c0a8:11db]) (user=jackmanb job=sendgmr) by 2002:a1c:f20e:: with SMTP id s14mr10419901wmc.126.1606499893900; Fri, 27 Nov 2020 09:58:13 -0800 (PST) Date: Fri, 27 Nov 2020 17:57:38 +0000 In-Reply-To: <20201127175738.1085417-1-jackmanb@google.com> Message-Id: <20201127175738.1085417-14-jackmanb@google.com> Mime-Version: 1.0 References: <20201127175738.1085417-1-jackmanb@google.com> X-Mailer: git-send-email 2.29.2.454.gaff20da3a2-goog Subject: [PATCH v2 bpf-next 13/13] bpf: Document new atomic instructions From: Brendan Jackman To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Yonghong Song , Daniel Borkmann , KP Singh , Florent Revest , linux-kernel@vger.kernel.org, Jann Horn , Brendan Jackman Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Signed-off-by: Brendan Jackman --- Documentation/networking/filter.rst | 27 +++++++++++++++++++++++++++ 1 file changed, 27 insertions(+) diff --git a/Documentation/networking/filter.rst b/Documentation/networking/filter.rst index 1583d59d806d..c86091b8cb0e 100644 --- a/Documentation/networking/filter.rst +++ b/Documentation/networking/filter.rst @@ -1053,6 +1053,33 @@ encoding. .imm = BPF_ADD, .code = BPF_ATOMIC | BPF_W | BPF_STX: lock xadd *(u32 *)(dst_reg + off16) += src_reg .imm = BPF_ADD, .code = BPF_ATOMIC | BPF_DW | BPF_STX: lock xadd *(u64 *)(dst_reg + off16) += src_reg +The basic atomic operations supported (from architecture v4 onwards) are: + + BPF_ADD + BPF_SUB + BPF_AND + BPF_OR + BPF_XOR + +Each of these operations has the same semantics as the ``BPF_ADD`` example, that is: the +memory location addressed by ``dst_reg + off`` is atomically modified, with +``src_reg`` as the other operand. If the ``BPF_FETCH`` flag is set in the +immediate, then these operations also overwrite ``src_reg`` with the +pre-modification value from memory. + +The more special operations are: + + BPF_XCHG + +This atomically exchanges ``src_reg`` with the value addressed by ``dst_reg + +off``. + + BPF_CMPXCHG + +This atomically compares the value addressed by ``dst_reg + off`` with +``R0``. If they match, it is replaced with ``src_reg``. The pre-modification +value is loaded back to ``R0``. + Note that 1 and 2 byte atomic operations are not supported. You may encounter BPF_XADD - this is a legacy name for BPF_ATOMIC, referring to