From patchwork Thu Dec 3 16:02:32 2020
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 11949053
X-Patchwork-Delegate: bpf@iogearbox.net
Date: Thu, 3 Dec 2020 16:02:32 +0000
In-Reply-To: <20201203160245.1014867-1-jackmanb@google.com>
Message-Id: <20201203160245.1014867-2-jackmanb@google.com>
References: <20201203160245.1014867-1-jackmanb@google.com>
Subject: [PATCH bpf-next v3 01/14] bpf: x86: Factor out emission of ModR/M for *(reg + off)
From: Brendan Jackman
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann, KP Singh,
 Florent Revest, linux-kernel@vger.kernel.org, Jann Horn, Brendan Jackman
X-Mailing-List: bpf@vger.kernel.org

The case for JITing atomics is about to get more complicated. Let's
factor out some common code to make the review and the result more
readable.

NB the atomics code doesn't yet use the new helper - a subsequent
patch will add its use as a side-effect of other changes.

Signed-off-by: Brendan Jackman
Change-Id: I1510c7eb0132ff9262fea92ce1839243b6d33372
---
 arch/x86/net/bpf_jit_comp.c | 42 +++++++++++++++++++++----------------
 1 file changed, 24 insertions(+), 18 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 796506dcfc42..cc818ed7c2b9 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -681,6 +681,27 @@ static void emit_mov_reg(u8 **pprog, bool is64, u32 dst_reg, u32 src_reg)
     *pprog = prog;
 }
 
+/* Emit the suffix (ModR/M etc) for addressing *(ptr_reg + off) and val_reg */
+static void emit_insn_suffix(u8 **pprog, u32 ptr_reg, u32 val_reg, int off)
+{
+    u8 *prog = *pprog;
+    int cnt = 0;
+
+    if (is_imm8(off)) {
+        /* 1-byte signed displacement.
+         *
+         * If off == 0 we could skip this and save one extra byte, but
+         * special case of x86 R13 which always needs an offset is not
+         * worth the hassle
+         */
+        EMIT2(add_2reg(0x40, ptr_reg, val_reg), off);
+    } else {
+        /* 4-byte signed displacement */
+        EMIT1_off32(add_2reg(0x80, ptr_reg, val_reg), off);
+    }
+    *pprog = prog;
+}
+
 /* LDX: dst_reg = *(u8*)(src_reg + off) */
 static void emit_ldx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
 {
@@ -708,15 +729,7 @@ static void emit_ldx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
         EMIT2(add_2mod(0x48, src_reg, dst_reg), 0x8B);
         break;
     }
-    /*
-     * If insn->off == 0 we can save one extra byte, but
-     * special case of x86 R13 which always needs an offset
-     * is not worth the hassle
-     */
-    if (is_imm8(off))
-        EMIT2(add_2reg(0x40, src_reg, dst_reg), off);
-    else
-        EMIT1_off32(add_2reg(0x80, src_reg, dst_reg), off);
+    emit_insn_suffix(&prog, src_reg, dst_reg, off);
     *pprog = prog;
 }
 
@@ -751,10 +764,7 @@ static void emit_stx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
         EMIT2(add_2mod(0x48, dst_reg, src_reg), 0x89);
         break;
     }
-    if (is_imm8(off))
-        EMIT2(add_2reg(0x40, dst_reg, src_reg), off);
-    else
-        EMIT1_off32(add_2reg(0x80, dst_reg, src_reg), off);
+    emit_insn_suffix(&prog, dst_reg, src_reg, off);
     *pprog = prog;
 }
 
@@ -1240,11 +1250,7 @@ st:            if (is_imm8(insn->off))
             goto xadd;
         case BPF_STX | BPF_XADD | BPF_DW:
             EMIT3(0xF0, add_2mod(0x48, dst_reg, src_reg), 0x01);
-xadd:            if (is_imm8(insn->off))
-                EMIT2(add_2reg(0x40, dst_reg, src_reg), insn->off);
-            else
-                EMIT1_off32(add_2reg(0x80, dst_reg, src_reg),
-                        insn->off);
+xadd:            emit_insn_suffix(&prog, dst_reg, src_reg, insn->off);
             break;
 
             /* call */
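The helper's effect is easiest to see outside the kernel. Below is a
standalone user-space sketch (not kernel code) of the same ModR/M +
displacement choice; it uses raw x86-64 register numbers directly,
where the JIT would first map BPF registers through reg2hex[]:

    #include <stdio.h>
    #include <stdint.h>

    static int is_imm8(int value)
    {
        return value >= -128 && value <= 127;
    }

    /* mod=01 selects [reg + disp8], mod=10 selects [reg + disp32] */
    static int encode_suffix(uint8_t *buf, int ptr_reg, int val_reg, int off)
    {
        int n = 0;

        if (is_imm8(off)) {
            buf[n++] = 0x40 | (val_reg << 3) | ptr_reg;
            buf[n++] = (uint8_t)off;
        } else {
            buf[n++] = 0x80 | (val_reg << 3) | ptr_reg;
            for (int i = 0; i < 4; i++)    /* little-endian disp32 */
                buf[n++] = (off >> (8 * i)) & 0xff;
        }
        return n;
    }

    int main(void)
    {
        uint8_t buf[8];
        /* suffix for "mov rax, [rdi + 8]": rax=0 (val), rdi=7 (ptr) */
        int n = encode_suffix(buf, 7, 0, 8);

        for (int i = 0; i < n; i++)
            printf("%02x ", buf[i]);    /* prints: 47 08 */
        printf("\n");
        return 0;
    }

Prefixed with 48 8b (REX.W plus the mov opcode emitted by emit_ldx()),
the two suffix bytes above complete "mov rax, [rdi + 8]".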
From patchwork Thu Dec 3 16:02:33 2020
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 11949055
X-Patchwork-Delegate: bpf@iogearbox.net
Date: Thu, 3 Dec 2020 16:02:33 +0000
In-Reply-To: <20201203160245.1014867-1-jackmanb@google.com>
Message-Id: <20201203160245.1014867-3-jackmanb@google.com>
References: <20201203160245.1014867-1-jackmanb@google.com>
Subject: [PATCH bpf-next v3 02/14] bpf: x86: Factor out emission of REX byte
From: Brendan Jackman
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann, KP Singh,
 Florent Revest, linux-kernel@vger.kernel.org, Jann Horn, Brendan Jackman
X-Mailing-List: bpf@vger.kernel.org

The JIT case for encoding atomic ops is about to get more complicated.
To make the review and the resulting code easier to follow, let's
factor out some shared helpers.

Signed-off-by: Brendan Jackman
Change-Id: I66dbd5ad0bf6f820901fb73d6b2c6a63e00483b1
---
 arch/x86/net/bpf_jit_comp.c | 39 ++++++++++++++++++++++---------------
 1 file changed, 23 insertions(+), 16 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index cc818ed7c2b9..7106cfd10ba6 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -702,6 +702,21 @@ static void emit_insn_suffix(u8 **pprog, u32 ptr_reg, u32 val_reg, int off)
     *pprog = prog;
 }
 
+/*
+ * Emit a REX byte if it will be necessary to address these registers
+ */
+static void maybe_emit_mod(u8 **pprog, u32 dst_reg, u32 src_reg, bool is64)
+{
+    u8 *prog = *pprog;
+    int cnt = 0;
+
+    if (is64)
+        EMIT1(add_2mod(0x48, dst_reg, src_reg));
+    else if (is_ereg(dst_reg) || is_ereg(src_reg))
+        EMIT1(add_2mod(0x40, dst_reg, src_reg));
+    *pprog = prog;
+}
+
 /* LDX: dst_reg = *(u8*)(src_reg + off) */
 static void emit_ldx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
 {
@@ -854,10 +869,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
             case BPF_OR: b2 = 0x09; break;
             case BPF_XOR: b2 = 0x31; break;
             }
-            if (BPF_CLASS(insn->code) == BPF_ALU64)
-                EMIT1(add_2mod(0x48, dst_reg, src_reg));
-            else if (is_ereg(dst_reg) || is_ereg(src_reg))
-                EMIT1(add_2mod(0x40, dst_reg, src_reg));
+            maybe_emit_mod(&prog, dst_reg, src_reg,
+                       BPF_CLASS(insn->code) == BPF_ALU64);
             EMIT2(b2, add_2reg(0xC0, dst_reg, src_reg));
             break;
 
@@ -1301,20 +1314,16 @@ xadd:            emit_insn_suffix(&prog, dst_reg, src_reg, insn->off);
         case BPF_JMP32 | BPF_JSGE | BPF_X:
         case BPF_JMP32 | BPF_JSLE | BPF_X:
             /* cmp dst_reg, src_reg */
-            if (BPF_CLASS(insn->code) == BPF_JMP)
-                EMIT1(add_2mod(0x48, dst_reg, src_reg));
-            else if (is_ereg(dst_reg) || is_ereg(src_reg))
-                EMIT1(add_2mod(0x40, dst_reg, src_reg));
+            maybe_emit_mod(&prog, dst_reg, src_reg,
+                       BPF_CLASS(insn->code) == BPF_JMP);
             EMIT2(0x39, add_2reg(0xC0, dst_reg, src_reg));
             goto emit_cond_jmp;
 
         case BPF_JMP | BPF_JSET | BPF_X:
         case BPF_JMP32 | BPF_JSET | BPF_X:
             /* test dst_reg, src_reg */
-            if (BPF_CLASS(insn->code) == BPF_JMP)
-                EMIT1(add_2mod(0x48, dst_reg, src_reg));
-            else if (is_ereg(dst_reg) || is_ereg(src_reg))
-                EMIT1(add_2mod(0x40, dst_reg, src_reg));
+            maybe_emit_mod(&prog, dst_reg, src_reg,
+                       BPF_CLASS(insn->code) == BPF_JMP);
             EMIT2(0x85, add_2reg(0xC0, dst_reg, src_reg));
             goto emit_cond_jmp;
 
@@ -1350,10 +1359,8 @@ xadd:            emit_insn_suffix(&prog, dst_reg, src_reg, insn->off);
         case BPF_JMP32 | BPF_JSLE | BPF_K:
             /* test dst_reg, dst_reg to save one extra byte */
             if (imm32 == 0) {
-                if (BPF_CLASS(insn->code) == BPF_JMP)
-                    EMIT1(add_2mod(0x48, dst_reg, dst_reg));
-                else if (is_ereg(dst_reg))
-                    EMIT1(add_2mod(0x40, dst_reg, dst_reg));
+                maybe_emit_mod(&prog, dst_reg, dst_reg,
+                           BPF_CLASS(insn->code) == BPF_JMP);
                 EMIT2(0x85, add_2reg(0xC0, dst_reg, dst_reg));
                 goto emit_cond_jmp;
             }
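For reference, a standalone sketch (not kernel code) of the same REX
decision, with raw x86-64 register numbers standing in for the JIT's
is_ereg()/add_2mod() machinery:

    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    static bool is_ereg(int reg)
    {
        return reg >= 8;    /* r8..r15 need a REX bit */
    }

    /* Returns the REX prefix byte, or 0 if none is needed */
    static uint8_t rex_for(int dst, int src, bool is64)
    {
        uint8_t rex = 0;

        if (is64)
            rex = 0x48;                /* REX.W */
        else if (is_ereg(dst) || is_ereg(src))
            rex = 0x40;                /* plain REX */
        if (rex) {
            rex |= is_ereg(src) ? 0x04 : 0;    /* REX.R: ModR/M.reg */
            rex |= is_ereg(dst) ? 0x01 : 0;    /* REX.B: ModR/M.rm */
        }
        return rex;
    }

    int main(void)
    {
        printf("add rax, rbx -> %02x\n", rex_for(0, 3, true));    /* 48 */
        printf("add eax, ebx -> %02x\n", rex_for(0, 3, false));   /* 00 */
        printf("add eax, r9d -> %02x\n", rex_for(0, 9, false));   /* 44 */
        return 0;
    }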
From patchwork Thu Dec 3 16:02:34 2020
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 11949057
X-Patchwork-Delegate: bpf@iogearbox.net
Date: Thu, 3 Dec 2020 16:02:34 +0000
In-Reply-To: <20201203160245.1014867-1-jackmanb@google.com>
Message-Id: <20201203160245.1014867-4-jackmanb@google.com>
References: <20201203160245.1014867-1-jackmanb@google.com>
Subject: [PATCH bpf-next v3 03/14] bpf: x86: Factor out function to emit NEG
From: Brendan Jackman
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann, KP Singh,
 Florent Revest, linux-kernel@vger.kernel.org, Jann Horn, Brendan Jackman
X-Mailing-List: bpf@vger.kernel.org

There's currently only one use of this, but the implementation of
atomic_sub will add another.

Change-Id: Ia56743ec26ff5e7bcde8ae94fa17fef92d418d2b
Signed-off-by: Brendan Jackman
---
 arch/x86/net/bpf_jit_comp.c | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 7106cfd10ba6..171ce539f6b9 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -783,6 +783,22 @@ static void emit_stx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
     *pprog = prog;
 }
 
+static void emit_neg(u8 **pprog, u32 reg, bool is64)
+{
+    u8 *prog = *pprog;
+    int cnt = 0;
+
+    /* Emit REX byte if necessary */
+    if (is64)
+        EMIT1(add_1mod(0x48, reg));
+    else if (is_ereg(reg))
+        EMIT1(add_1mod(0x40, reg));
+
+    EMIT2(0xF7, add_1reg(0xD8, reg)); /* x86 NEG */
+    *pprog = prog;
+}
+
 static bool ex_handler_bpf(const struct exception_table_entry *x,
                struct pt_regs *regs, int trapnr,
                unsigned long error_code, unsigned long fault_addr)
@@ -884,11 +900,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
             /* neg dst */
         case BPF_ALU | BPF_NEG:
         case BPF_ALU64 | BPF_NEG:
-            if (BPF_CLASS(insn->code) == BPF_ALU64)
-                EMIT1(add_1mod(0x48, dst_reg));
-            else if (is_ereg(dst_reg))
-                EMIT1(add_1mod(0x40, dst_reg));
-            EMIT2(0xF7, add_1reg(0xD8, dst_reg));
+            emit_neg(&prog, dst_reg,
+                 BPF_CLASS(insn->code) == BPF_ALU64);
             break;
 
         case BPF_ALU | BPF_ADD | BPF_K:
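As a standalone sketch (not kernel code), the bytes emit_neg()
produces: NEG r/m is opcode 0xF7 with ModR/M opcode extension /3, so
with mod=11 the ModR/M byte is simply 0xD8 + register:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint8_t insn[3];
        int n = 0;
        int reg = 1;        /* rcx in raw x86-64 numbering */
        int is64 = 1;

        if (is64)
            insn[n++] = 0x48;      /* REX.W */
        insn[n++] = 0xF7;          /* NEG group */
        insn[n++] = 0xD8 + reg;    /* ModR/M: mod=11, /3, rm=reg */

        for (int i = 0; i < n; i++)
            printf("%02x ", insn[i]);    /* prints: 48 f7 d9 */
        printf("= neg %s\n", is64 ? "rcx" : "ecx");
        return 0;
    }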
From patchwork Thu Dec 3 16:02:35 2020
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 11949059
X-Patchwork-Delegate: bpf@iogearbox.net
Date: Thu, 3 Dec 2020 16:02:35 +0000
In-Reply-To: <20201203160245.1014867-1-jackmanb@google.com>
Message-Id: <20201203160245.1014867-5-jackmanb@google.com>
References: <20201203160245.1014867-1-jackmanb@google.com>
Subject: [PATCH bpf-next v3 04/14] bpf: x86: Factor out a lookup table for some ALU opcodes
From: Brendan Jackman
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann, KP Singh,
 Florent Revest, linux-kernel@vger.kernel.org, Jann Horn, Brendan Jackman
X-Mailing-List: bpf@vger.kernel.org

A later commit will need to look up a subset of these opcodes. To
avoid duplicating code, pull out a table.

The shift opcodes won't be needed by that later commit, but they're
already duplicated, so fold them into the table anyway.
Change-Id: Ia6888f9fa65da6225c33b530ea16911bf2f70750
Signed-off-by: Brendan Jackman
---
 arch/x86/net/bpf_jit_comp.c | 33 +++++++++++++++------------------
 1 file changed, 15 insertions(+), 18 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 171ce539f6b9..ee7905051ee9 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -205,6 +205,18 @@ static u8 add_2reg(u8 byte, u32 dst_reg, u32 src_reg)
     return byte + reg2hex[dst_reg] + (reg2hex[src_reg] << 3);
 }
 
+/* Some 1-byte opcodes for binary ALU operations */
+static u8 simple_alu_opcodes[] = {
+    [BPF_ADD] = 0x01,
+    [BPF_SUB] = 0x29,
+    [BPF_AND] = 0x21,
+    [BPF_OR] = 0x09,
+    [BPF_XOR] = 0x31,
+    [BPF_LSH] = 0xE0,
+    [BPF_RSH] = 0xE8,
+    [BPF_ARSH] = 0xF8,
+};
+
 static void jit_fill_hole(void *area, unsigned int size)
 {
     /* Fill whole space with INT3 instructions */
@@ -878,15 +890,9 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
         case BPF_ALU64 | BPF_AND | BPF_X:
         case BPF_ALU64 | BPF_OR | BPF_X:
         case BPF_ALU64 | BPF_XOR | BPF_X:
-            switch (BPF_OP(insn->code)) {
-            case BPF_ADD: b2 = 0x01; break;
-            case BPF_SUB: b2 = 0x29; break;
-            case BPF_AND: b2 = 0x21; break;
-            case BPF_OR: b2 = 0x09; break;
-            case BPF_XOR: b2 = 0x31; break;
-            }
             maybe_emit_mod(&prog, dst_reg, src_reg,
                        BPF_CLASS(insn->code) == BPF_ALU64);
+            b2 = simple_alu_opcodes[BPF_OP(insn->code)];
             EMIT2(b2, add_2reg(0xC0, dst_reg, src_reg));
             break;
 
@@ -1063,12 +1069,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
             else if (is_ereg(dst_reg))
                 EMIT1(add_1mod(0x40, dst_reg));
 
-            switch (BPF_OP(insn->code)) {
-            case BPF_LSH: b3 = 0xE0; break;
-            case BPF_RSH: b3 = 0xE8; break;
-            case BPF_ARSH: b3 = 0xF8; break;
-            }
-
+            b3 = simple_alu_opcodes[BPF_OP(insn->code)];
             if (imm32 == 1)
                 EMIT2(0xD1, add_1reg(b3, dst_reg));
             else
@@ -1102,11 +1103,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
             else if (is_ereg(dst_reg))
                 EMIT1(add_1mod(0x40, dst_reg));
 
-            switch (BPF_OP(insn->code)) {
-            case BPF_LSH: b3 = 0xE0; break;
-            case BPF_RSH: b3 = 0xE8; break;
-            case BPF_ARSH: b3 = 0xF8; break;
-            }
+            b3 = simple_alu_opcodes[BPF_OP(insn->code)];
             EMIT2(0xD3, add_1reg(b3, dst_reg));
 
             if (src_reg != BPF_REG_4)
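A standalone sketch (not kernel code) of how the table is indexed:
BPF_OP() masks the operation bits out of insn->code, and those bits
select the 1-byte x86 opcode directly:

    #include <stdio.h>
    #include <stdint.h>

    #define BPF_OP(code)    ((code) & 0xf0)
    #define BPF_ADD         0x00
    #define BPF_SUB         0x10
    #define BPF_OR          0x40
    #define BPF_AND         0x50
    #define BPF_XOR         0xa0

    static const uint8_t simple_alu_opcodes[256] = {
        [BPF_ADD] = 0x01,    /* add r/m, r */
        [BPF_SUB] = 0x29,    /* sub r/m, r */
        [BPF_AND] = 0x21,    /* and r/m, r */
        [BPF_OR]  = 0x09,    /* or  r/m, r */
        [BPF_XOR] = 0x31,    /* xor r/m, r */
    };

    int main(void)
    {
        uint8_t code = 0x0f;    /* BPF_ALU64 | BPF_ADD | BPF_X */

        printf("BPF op %02x -> x86 opcode %02x\n",
               BPF_OP(code), simple_alu_opcodes[BPF_OP(code)]);
        return 0;
    }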
From patchwork Thu Dec 3 16:02:36 2020
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 11949061
X-Patchwork-Delegate: bpf@iogearbox.net
Date: Thu, 3 Dec 2020 16:02:36 +0000
In-Reply-To: <20201203160245.1014867-1-jackmanb@google.com>
Message-Id: <20201203160245.1014867-6-jackmanb@google.com>
References: <20201203160245.1014867-1-jackmanb@google.com>
Subject: [PATCH bpf-next v3 05/14] bpf: Rename BPF_XADD and prepare to encode other atomics in .imm
From: Brendan Jackman
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann, KP Singh,
 Florent Revest, linux-kernel@vger.kernel.org, Jann Horn, Brendan Jackman
X-Mailing-List: bpf@vger.kernel.org

A subsequent patch will add additional atomic operations. These new
operations will use the same opcode field as the existing XADD, with
the immediate discriminating different operations.

In preparation, rename the instruction mode to BPF_ATOMIC and start
calling the zero immediate BPF_ADD.

This is possible (doesn't break existing valid BPF progs) because the
immediate field is currently reserved MBZ (must be zero) and BPF_ADD
is zero.

All uses are removed from the tree but the BPF_XADD definition is kept
around to avoid breaking builds for people including kernel headers.
Signed-off-by: Brendan Jackman
Change-Id: Ib78f54acba37f7196cbf6c35ffa1c40805cb0d87
Acked-by: Yonghong Song
---
 Documentation/networking/filter.rst           | 30 +++++++-----
 arch/arm/net/bpf_jit_32.c                     |  7 ++-
 arch/arm64/net/bpf_jit_comp.c                 | 16 +++++--
 arch/mips/net/ebpf_jit.c                      | 11 +++--
 arch/powerpc/net/bpf_jit_comp64.c             | 25 ++++++++--
 arch/riscv/net/bpf_jit_comp32.c               | 20 ++++++--
 arch/riscv/net/bpf_jit_comp64.c               | 16 +++++--
 arch/s390/net/bpf_jit_comp.c                  | 27 ++++++-----
 arch/sparc/net/bpf_jit_comp_64.c              | 17 +++++--
 arch/x86/net/bpf_jit_comp.c                   | 46 ++++++++++++++-----
 arch/x86/net/bpf_jit_comp32.c                 |  6 +--
 drivers/net/ethernet/netronome/nfp/bpf/jit.c  | 14 ++++--
 drivers/net/ethernet/netronome/nfp/bpf/main.h |  4 +-
 .../net/ethernet/netronome/nfp/bpf/verifier.c | 15 ++++--
 include/linux/filter.h                        |  8 ++--
 include/uapi/linux/bpf.h                      |  3 +-
 kernel/bpf/core.c                             | 31 +++++++++----
 kernel/bpf/disasm.c                           |  6 ++-
 kernel/bpf/verifier.c                         | 24 ++++++----
 lib/test_bpf.c                                |  2 +-
 samples/bpf/bpf_insn.h                        |  4 +-
 samples/bpf/sock_example.c                    |  2 +-
 samples/bpf/test_cgrp2_attach.c               |  4 +-
 tools/include/linux/filter.h                  |  7 +--
 tools/include/uapi/linux/bpf.h                |  3 +-
 .../bpf/prog_tests/cgroup_attach_multi.c      |  4 +-
 tools/testing/selftests/bpf/verifier/ctx.c    |  7 ++-
 .../testing/selftests/bpf/verifier/leak_ptr.c |  4 +-
 tools/testing/selftests/bpf/verifier/unpriv.c |  3 +-
 tools/testing/selftests/bpf/verifier/xadd.c   |  2 +-
 30 files changed, 248 insertions(+), 120 deletions(-)

diff --git a/Documentation/networking/filter.rst b/Documentation/networking/filter.rst
index debb59e374de..1583d59d806d 100644
--- a/Documentation/networking/filter.rst
+++ b/Documentation/networking/filter.rst
@@ -1006,13 +1006,13 @@ Size modifier is one of ...
 
 Mode modifier is one of::
 
-  BPF_IMM  0x00 /* used for 32-bit mov in classic BPF and 64-bit in eBPF */
-  BPF_ABS  0x20
-  BPF_IND  0x40
-  BPF_MEM  0x60
-  BPF_LEN  0x80 /* classic BPF only, reserved in eBPF */
-  BPF_MSH  0xa0 /* classic BPF only, reserved in eBPF */
-  BPF_XADD 0xc0 /* eBPF only, exclusive add */
+  BPF_IMM     0x00 /* used for 32-bit mov in classic BPF and 64-bit in eBPF */
+  BPF_ABS     0x20
+  BPF_IND     0x40
+  BPF_MEM     0x60
+  BPF_LEN     0x80 /* classic BPF only, reserved in eBPF */
+  BPF_MSH     0xa0 /* classic BPF only, reserved in eBPF */
+  BPF_ATOMIC  0xc0 /* eBPF only, atomic operations */
 
 eBPF has two non-generic instructions: (BPF_ABS | <size> | BPF_LD) and
 (BPF_IND | <size> | BPF_LD) which are used to access packet data.
@@ -1044,11 +1044,19 @@ Unlike classic BPF instruction set, eBPF has generic load/store operations::
 
     BPF_MEM | <size> | BPF_STX:  *(size *) (dst_reg + off) = src_reg
     BPF_MEM | <size> | BPF_ST:   *(size *) (dst_reg + off) = imm32
     BPF_MEM | <size> | BPF_LDX:  dst_reg = *(size *) (src_reg + off)
-    BPF_XADD | BPF_W  | BPF_STX: lock xadd *(u32 *)(dst_reg + off16) += src_reg
-    BPF_XADD | BPF_DW | BPF_STX: lock xadd *(u64 *)(dst_reg + off16) += src_reg
 
-Where size is one of: BPF_B or BPF_H or BPF_W or BPF_DW. Note that 1 and
-2 byte atomic increments are not supported.
+Where size is one of: BPF_B or BPF_H or BPF_W or BPF_DW.
+
+It also includes atomic operations, which use the immediate field for extra
+encoding::
+
+   .imm = BPF_ADD, .code = BPF_ATOMIC | BPF_W  | BPF_STX: lock xadd *(u32 *)(dst_reg + off16) += src_reg
+   .imm = BPF_ADD, .code = BPF_ATOMIC | BPF_DW | BPF_STX: lock xadd *(u64 *)(dst_reg + off16) += src_reg
+
+Note that 1 and 2 byte atomic operations are not supported.
+
+You may encounter BPF_XADD - this is a legacy name for BPF_ATOMIC, referring to
+the exclusive-add operation encoded when the immediate field is zero.
 
 eBPF has one 16-byte instruction: BPF_LD | BPF_DW | BPF_IMM which consists
 of two consecutive ``struct bpf_insn`` 8-byte blocks and interpreted as single
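A standalone sketch (not kernel code) of the encoding change the
documentation above describes; the struct layout mirrors struct
bpf_insn from include/uapi/linux/bpf.h:

    #include <stdio.h>
    #include <stdint.h>

    #define BPF_STX     0x03
    #define BPF_DW      0x18
    #define BPF_ATOMIC  0xc0
    #define BPF_ADD     0x00

    struct bpf_insn {
        uint8_t code;        /* opcode */
        uint8_t dst_reg:4;   /* dest register */
        uint8_t src_reg:4;   /* source register */
        int16_t off;         /* signed offset */
        int32_t imm;         /* now discriminates the atomic op */
    };

    int main(void)
    {
        /* lock *(u64 *)(r1 + 0) += r2 */
        struct bpf_insn insn = {
            .code    = BPF_STX | BPF_DW | BPF_ATOMIC,
            .dst_reg = 1,
            .src_reg = 2,
            .off     = 0,
            .imm     = BPF_ADD,    /* had to be 0 under BPF_XADD */
        };

        printf("code=0x%02x imm=0x%x\n", insn.code, insn.imm);
        return 0;
    }

Because BPF_ADD is zero, an instruction built for the old BPF_XADD
encoding is bit-for-bit identical to the new BPF_ATOMIC | BPF_ADD form.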
diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 0207b6ea6e8a..897634d0a67c 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -1620,10 +1620,9 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
         }
         emit_str_r(dst_lo, tmp2, off, ctx, BPF_SIZE(code));
         break;
-    /* STX XADD: lock *(u32 *)(dst + off) += src */
-    case BPF_STX | BPF_XADD | BPF_W:
-    /* STX XADD: lock *(u64 *)(dst + off) += src */
-    case BPF_STX | BPF_XADD | BPF_DW:
+    /* Atomic ops */
+    case BPF_STX | BPF_ATOMIC | BPF_W:
+    case BPF_STX | BPF_ATOMIC | BPF_DW:
         goto notyet;
     /* STX: *(size *)(dst + off) = src */
     case BPF_STX | BPF_MEM | BPF_W:
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index ef9f1d5e989d..f7b194878a99 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -875,10 +875,18 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
         }
         break;
 
-    /* STX XADD: lock *(u32 *)(dst + off) += src */
-    case BPF_STX | BPF_XADD | BPF_W:
-    /* STX XADD: lock *(u64 *)(dst + off) += src */
-    case BPF_STX | BPF_XADD | BPF_DW:
+    case BPF_STX | BPF_ATOMIC | BPF_W:
+    case BPF_STX | BPF_ATOMIC | BPF_DW:
+        if (insn->imm != BPF_ADD) {
+            pr_err_once("unknown atomic op code %02x\n", insn->imm);
+            return -EINVAL;
+        }
+
+        /* STX XADD: lock *(u32 *)(dst + off) += src
+         * and
+         * STX XADD: lock *(u64 *)(dst + off) += src
+         */
+
         if (!off) {
             reg = dst;
         } else {
diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c
index 561154cbcc40..939dd06764bc 100644
--- a/arch/mips/net/ebpf_jit.c
+++ b/arch/mips/net/ebpf_jit.c
@@ -1423,8 +1423,8 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
     case BPF_STX | BPF_H | BPF_MEM:
     case BPF_STX | BPF_W | BPF_MEM:
     case BPF_STX | BPF_DW | BPF_MEM:
-    case BPF_STX | BPF_W | BPF_XADD:
-    case BPF_STX | BPF_DW | BPF_XADD:
+    case BPF_STX | BPF_W | BPF_ATOMIC:
+    case BPF_STX | BPF_DW | BPF_ATOMIC:
         if (insn->dst_reg == BPF_REG_10) {
             ctx->flags |= EBPF_SEEN_FP;
             dst = MIPS_R_SP;
@@ -1438,7 +1438,12 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
         src = ebpf_to_mips_reg(ctx, insn, src_reg_no_fp);
         if (src < 0)
             return src;
-        if (BPF_MODE(insn->code) == BPF_XADD) {
+        if (BPF_MODE(insn->code) == BPF_ATOMIC) {
+            if (insn->imm != BPF_ADD) {
+                pr_err("ATOMIC OP %02x NOT HANDLED\n", insn->imm);
+                return -EINVAL;
+            }
+
             /*
              * If mem_off does not fit within the 9 bit ll/sc
              * instruction immediate field, use a temp reg.
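The same imm guard recurs in each JIT in this patch. A standalone
sketch (not kernel code) of why it preserves backwards compatibility;
the function name here is illustrative only:

    #include <stdio.h>

    #define BPF_ADD 0x00
    #define EINVAL  22

    static int check_atomic_imm(int imm)
    {
        if (imm != BPF_ADD) {
            fprintf(stderr, "unknown atomic op code %02x\n", imm);
            return -EINVAL;
        }
        return 0;
    }

    int main(void)
    {
        /* legacy XADD programs carry imm == 0, which is BPF_ADD */
        printf("imm=0x00 -> %d\n", check_atomic_imm(0x00));    /* 0 */
        /* anything else must be rejected until it is implemented */
        printf("imm=0x50 -> %d\n", check_atomic_imm(0x50));    /* -22 */
        return 0;
    }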
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 022103c6a201..aaf1a887f653 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -683,10 +683,18 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
             break;
 
         /*
-         * BPF_STX XADD (atomic_add)
+         * BPF_STX ATOMIC (atomic ops)
          */
-        /* *(u32 *)(dst + off) += src */
-        case BPF_STX | BPF_XADD | BPF_W:
+        case BPF_STX | BPF_ATOMIC | BPF_W:
+            if (insn->imm != BPF_ADD) {
+                pr_err_ratelimited(
+                    "eBPF filter atomic op code %02x (@%d) unsupported\n",
+                    code, i);
+                return -ENOTSUPP;
+            }
+
+            /* *(u32 *)(dst + off) += src */
+
             /* Get EA into TMP_REG_1 */
             EMIT(PPC_RAW_ADDI(b2p[TMP_REG_1], dst_reg, off));
             tmp_idx = ctx->idx * 4;
@@ -699,8 +707,15 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
             /* we're done if this succeeded */
             PPC_BCC_SHORT(COND_NE, tmp_idx);
             break;
-        /* *(u64 *)(dst + off) += src */
-        case BPF_STX | BPF_XADD | BPF_DW:
+        case BPF_STX | BPF_ATOMIC | BPF_DW:
+            if (insn->imm != BPF_ADD) {
+                pr_err_ratelimited(
+                    "eBPF filter atomic op code %02x (@%d) unsupported\n",
+                    code, i);
+                return -ENOTSUPP;
+            }
+            /* *(u64 *)(dst + off) += src */
+
             EMIT(PPC_RAW_ADDI(b2p[TMP_REG_1], dst_reg, off));
             tmp_idx = ctx->idx * 4;
             EMIT(PPC_RAW_LDARX(b2p[TMP_REG_2], 0, b2p[TMP_REG_1], 0));
diff --git a/arch/riscv/net/bpf_jit_comp32.c b/arch/riscv/net/bpf_jit_comp32.c
index 579575f9cdae..a9ef808b235f 100644
--- a/arch/riscv/net/bpf_jit_comp32.c
+++ b/arch/riscv/net/bpf_jit_comp32.c
@@ -881,7 +881,7 @@ static int emit_store_r64(const s8 *dst, const s8 *src, s16 off,
     const s8 *rd = bpf_get_reg64(dst, tmp1, ctx);
     const s8 *rs = bpf_get_reg64(src, tmp2, ctx);
 
-    if (mode == BPF_XADD && size != BPF_W)
+    if (mode == BPF_ATOMIC && (size != BPF_W || imm != BPF_ADD))
         return -1;
 
     emit_imm(RV_REG_T0, off, ctx);
@@ -899,7 +899,7 @@ static int emit_store_r64(const s8 *dst, const s8 *src, s16 off,
     case BPF_MEM:
         emit(rv_sw(RV_REG_T0, 0, lo(rs)), ctx);
         break;
-    case BPF_XADD:
+    case BPF_ATOMIC: /* .imm checked above - only BPF_ADD allowed */
         emit(rv_amoadd_w(RV_REG_ZERO, lo(rs), RV_REG_T0, 0, 0),
              ctx);
         break;
@@ -1260,7 +1260,6 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
     case BPF_STX | BPF_MEM | BPF_H:
     case BPF_STX | BPF_MEM | BPF_W:
     case BPF_STX | BPF_MEM | BPF_DW:
-    case BPF_STX | BPF_XADD | BPF_W:
         if (BPF_CLASS(code) == BPF_ST) {
             emit_imm32(tmp2, imm, ctx);
             src = tmp2;
@@ -1271,8 +1270,21 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
             return -1;
         break;
 
+    case BPF_STX | BPF_ATOMIC | BPF_W:
+        if (insn->imm != BPF_ADD) {
+            pr_info_once(
+                "bpf-jit: not supported: atomic operation %02x ***\n",
+                insn->imm);
+            return -EFAULT;
+        }
+
+        if (emit_store_r64(dst, src, off, ctx, BPF_SIZE(code),
+                   BPF_MODE(code)))
+            return -1;
+        break;
+
     /* No hardware support for 8-byte atomics in RV32. */
-    case BPF_STX | BPF_XADD | BPF_DW:
+    case BPF_STX | BPF_ATOMIC | BPF_DW:
         /* Fallthrough. */
 
notsupported:
diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index 8a56b5293117..b44ff52f84a6 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -1027,10 +1027,18 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
         emit_add(RV_REG_T1, RV_REG_T1, rd, ctx);
         emit_sd(RV_REG_T1, 0, rs, ctx);
         break;
-    /* STX XADD: lock *(u32 *)(dst + off) += src */
-    case BPF_STX | BPF_XADD | BPF_W:
-    /* STX XADD: lock *(u64 *)(dst + off) += src */
-    case BPF_STX | BPF_XADD | BPF_DW:
+    case BPF_STX | BPF_ATOMIC | BPF_W:
+    case BPF_STX | BPF_ATOMIC | BPF_DW:
+        if (insn->imm != BPF_ADD) {
+            pr_err("bpf-jit: not supported: atomic operation %02x ***\n",
+                   insn->imm);
+            return -EINVAL;
+        }
+
+        /* atomic_add: lock *(u32 *)(dst + off) += src
+         * atomic_add: lock *(u64 *)(dst + off) += src
+         */
+
         if (off) {
             if (is_12b_int(off)) {
                 emit_addi(RV_REG_T1, rd, off, ctx);
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index 0a4182792876..f973e2ead197 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -1205,18 +1205,23 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
         jit->seen |= SEEN_MEM;
         break;
     /*
-     * BPF_STX XADD (atomic_add)
+     * BPF_ATOMIC
      */
-    case BPF_STX | BPF_XADD | BPF_W: /* *(u32 *)(dst + off) += src */
-        /* laal %w0,%src,off(%dst) */
-        EMIT6_DISP_LH(0xeb000000, 0x00fa, REG_W0, src_reg,
-                  dst_reg, off);
-        jit->seen |= SEEN_MEM;
-        break;
-    case BPF_STX | BPF_XADD | BPF_DW: /* *(u64 *)(dst + off) += src */
-        /* laalg %w0,%src,off(%dst) */
-        EMIT6_DISP_LH(0xeb000000, 0x00ea, REG_W0, src_reg,
-                  dst_reg, off);
+    case BPF_STX | BPF_ATOMIC | BPF_DW:
+    case BPF_STX | BPF_ATOMIC | BPF_W:
+        if (insn->imm != BPF_ADD) {
+            pr_err("Unknown atomic operation %02x\n", insn->imm);
+            return -1;
+        }
+
+        /* *(u32/u64 *)(dst + off) += src
+         *
+         * BPF_W:  laal  %w0,%src,off(%dst)
+         * BPF_DW: laalg %w0,%src,off(%dst)
+         */
+        EMIT6_DISP_LH(0xeb000000,
+                  BPF_SIZE(insn->code) == BPF_W ? 0x00fa : 0x00ea,
+                  REG_W0, src_reg, dst_reg, off);
         jit->seen |= SEEN_MEM;
         break;
     /*
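The x86 hunk below assembles the atomic instruction from the helpers
factored out earlier in the series. As a standalone sketch (not kernel
code), the byte sequence the new emit_atomic() ends up producing for a
64-bit BPF_ADD, written out by hand:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* lock add qword ptr [rdi + 8], rax */
        uint8_t insn[] = {
            0xF0,          /* lock prefix:  EMIT1(0xF0) */
            0x48,          /* REX.W:        maybe_emit_mod(), BPF_DW */
            0x01,          /* add r/m64, r: simple_alu_opcodes[BPF_ADD] */
            0x47, 0x08,    /* ModR/M+disp8: emit_insn_suffix() */
        };

        for (unsigned int i = 0; i < sizeof(insn); i++)
            printf("%02x ", insn[i]);
        printf("\n");
        return 0;
    }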
diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
index 3364e2a00989..4b8d3c65d266 100644
--- a/arch/sparc/net/bpf_jit_comp_64.c
+++ b/arch/sparc/net/bpf_jit_comp_64.c
@@ -1366,12 +1366,18 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
         break;
     }
 
-    /* STX XADD: lock *(u32 *)(dst + off) += src */
-    case BPF_STX | BPF_XADD | BPF_W: {
+    case BPF_STX | BPF_ATOMIC | BPF_W: {
         const u8 tmp = bpf2sparc[TMP_REG_1];
         const u8 tmp2 = bpf2sparc[TMP_REG_2];
         const u8 tmp3 = bpf2sparc[TMP_REG_3];
 
+        if (insn->imm != BPF_ADD) {
+            pr_err_once("unknown atomic op %02x\n", insn->imm);
+            return -EINVAL;
+        }
+
+        /* lock *(u32 *)(dst + off) += src */
+
         if (insn->dst_reg == BPF_REG_FP)
             ctx->saw_frame_pointer = true;
 
@@ -1390,11 +1396,16 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
         break;
     }
     /* STX XADD: lock *(u64 *)(dst + off) += src */
-    case BPF_STX | BPF_XADD | BPF_DW: {
+    case BPF_STX | BPF_ATOMIC | BPF_DW: {
         const u8 tmp = bpf2sparc[TMP_REG_1];
         const u8 tmp2 = bpf2sparc[TMP_REG_2];
         const u8 tmp3 = bpf2sparc[TMP_REG_3];
 
+        if (insn->imm != BPF_ADD) {
+            pr_err_once("unknown atomic op %02x\n", insn->imm);
+            return -EINVAL;
+        }
+
         if (insn->dst_reg == BPF_REG_FP)
             ctx->saw_frame_pointer = true;
 
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index ee7905051ee9..5e5a132b3d52 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -811,6 +811,34 @@ static void emit_neg(u8 **pprog, u32 reg, bool is64)
     *pprog = prog;
 }
 
+static int emit_atomic(u8 **pprog, u8 atomic_op,
+               u32 dst_reg, u32 src_reg, s16 off, u8 bpf_size)
+{
+    u8 *prog = *pprog;
+    int cnt = 0;
+
+    EMIT1(0xF0); /* lock prefix */
+
+    maybe_emit_mod(&prog, dst_reg, src_reg, bpf_size == BPF_DW);
+
+    /* emit opcode */
+    switch (atomic_op) {
+    case BPF_ADD:
+        /* lock *(u32/u64*)(dst_reg + off) += src_reg */
+        EMIT1(simple_alu_opcodes[atomic_op]);
+        break;
+    default:
+        pr_err("bpf_jit: unknown atomic opcode %02x\n", atomic_op);
+        return -EFAULT;
+    }
+
+    emit_insn_suffix(&prog, dst_reg, src_reg, off);
+
+    *pprog = prog;
+    return 0;
+}
+
+
 static bool ex_handler_bpf(const struct exception_table_entry *x,
                struct pt_regs *regs, int trapnr,
                unsigned long error_code, unsigned long fault_addr)
@@ -855,6 +883,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
     int i, cnt = 0, excnt = 0;
     int proglen = 0;
     u8 *prog = temp;
+    int err;
 
     detect_reg_usage(insn, insn_cnt, callee_regs_used,
              &tail_call_seen);
@@ -1263,17 +1292,12 @@ st:            if (is_imm8(insn->off))
             }
             break;
 
-            /* STX XADD: lock *(u32*)(dst_reg + off) += src_reg */
-        case BPF_STX | BPF_XADD | BPF_W:
-            /* Emit 'lock add dword ptr [rax + off], eax' */
-            if (is_ereg(dst_reg) || is_ereg(src_reg))
-                EMIT3(0xF0, add_2mod(0x40, dst_reg, src_reg), 0x01);
-            else
-                EMIT2(0xF0, 0x01);
-            goto xadd;
-        case BPF_STX | BPF_XADD | BPF_DW:
-            EMIT3(0xF0, add_2mod(0x48, dst_reg, src_reg), 0x01);
-xadd:            emit_insn_suffix(&prog, dst_reg, src_reg, insn->off);
+        case BPF_STX | BPF_ATOMIC | BPF_W:
+        case BPF_STX | BPF_ATOMIC | BPF_DW:
+            err = emit_atomic(&prog, insn->imm, dst_reg, src_reg,
+                      insn->off, BPF_SIZE(insn->code));
+            if (err)
+                return err;
             break;
 
             /* call */
diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
index 96fde03aa987..d17b67c69f89 100644
--- a/arch/x86/net/bpf_jit_comp32.c
+++ b/arch/x86/net/bpf_jit_comp32.c
@@ -2243,10 +2243,8 @@ emit_cond_jmp:        jmp_cond = get_cond_jmp_opcode(BPF_OP(code), false);
             return -EFAULT;
         }
         break;
-    /* STX XADD: lock *(u32 *)(dst + off) += src */
-    case BPF_STX | BPF_XADD | BPF_W:
-    /* STX XADD: lock *(u64 *)(dst + off) += src */
-    case BPF_STX | BPF_XADD | BPF_DW:
+    case BPF_STX | BPF_ATOMIC | BPF_W:
+    case BPF_STX | BPF_ATOMIC | BPF_DW:
         goto notyet;
     case BPF_JMP | BPF_EXIT:
         if (seen_exit) {
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/jit.c b/drivers/net/ethernet/netronome/nfp/bpf/jit.c
index 0a721f6e8676..e31f8fbbc696 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/jit.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/jit.c
@@ -3109,13 +3109,19 @@ mem_xadd(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, bool is64)
     return 0;
 }
 
-static int mem_xadd4(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
+static int mem_atomic4(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
+    if (meta->insn.imm != BPF_ADD)
+        return -EOPNOTSUPP;
+
     return mem_xadd(nfp_prog, meta, false);
 }
 
-static int mem_xadd8(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
+static int mem_atomic8(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
+    if (meta->insn.imm != BPF_ADD)
+        return -EOPNOTSUPP;
+
     return mem_xadd(nfp_prog, meta, true);
 }
 
@@ -3475,8 +3481,8 @@ static const instr_cb_t instr_cb[256] = {
     [BPF_STX | BPF_MEM | BPF_H] =    mem_stx2,
     [BPF_STX | BPF_MEM | BPF_W] =    mem_stx4,
     [BPF_STX | BPF_MEM | BPF_DW] =    mem_stx8,
-    [BPF_STX | BPF_XADD | BPF_W] =    mem_xadd4,
-    [BPF_STX | BPF_XADD | BPF_DW] =    mem_xadd8,
+    [BPF_STX | BPF_ATOMIC | BPF_W] =    mem_atomic4,
+    [BPF_STX | BPF_ATOMIC | BPF_DW] =    mem_atomic8,
     [BPF_ST | BPF_MEM | BPF_B] =    mem_st1,
     [BPF_ST | BPF_MEM | BPF_H] =    mem_st2,
     [BPF_ST | BPF_MEM | BPF_W] =    mem_st4,
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.h b/drivers/net/ethernet/netronome/nfp/bpf/main.h
index fac9c6f9e197..d0e17eebddd9 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/main.h
+++ b/drivers/net/ethernet/netronome/nfp/bpf/main.h
@@ -428,9 +428,9 @@ static inline bool is_mbpf_classic_store_pkt(const struct nfp_insn_meta *meta)
     return is_mbpf_classic_store(meta) && meta->ptr.type == PTR_TO_PACKET;
 }
 
-static inline bool is_mbpf_xadd(const struct nfp_insn_meta *meta)
+static inline bool is_mbpf_atomic(const struct nfp_insn_meta *meta)
 {
-    return (meta->insn.code & ~BPF_SIZE_MASK) == (BPF_STX | BPF_XADD);
+    return (meta->insn.code & ~BPF_SIZE_MASK) == (BPF_STX | BPF_ATOMIC);
 }
 
 static inline bool is_mbpf_mul(const struct nfp_insn_meta *meta)
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
index e92ee510fd52..9d235c0ce46a 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
@@ -479,7 +479,7 @@ nfp_bpf_check_ptr(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
         pr_vlog(env, "map writes not supported\n");
         return -EOPNOTSUPP;
     }
-    if (is_mbpf_xadd(meta)) {
+    if (is_mbpf_atomic(meta)) {
         err = nfp_bpf_map_mark_used(env, meta, reg,
                         NFP_MAP_USE_ATOMIC_CNT);
         if (err)
@@ -523,12 +523,17 @@ nfp_bpf_check_store(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
 }
 
 static int
-nfp_bpf_check_xadd(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
-           struct bpf_verifier_env *env)
+nfp_bpf_check_atomic(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
+             struct bpf_verifier_env *env)
 {
     const struct bpf_reg_state *sreg = cur_regs(env) + meta->insn.src_reg;
     const struct bpf_reg_state *dreg = cur_regs(env) + meta->insn.dst_reg;
 
+    if (meta->insn.imm != BPF_ADD) {
+        pr_vlog(env, "atomic op not implemented: %d\n", meta->insn.imm);
+        return -EOPNOTSUPP;
+    }
+
     if (dreg->type != PTR_TO_MAP_VALUE) {
         pr_vlog(env, "atomic add not to a map value pointer: %d\n",
             dreg->type);
@@ -655,8 +660,8 @@ int nfp_verify_insn(struct bpf_verifier_env *env, int insn_idx,
     if (is_mbpf_store(meta))
         return nfp_bpf_check_store(nfp_prog, meta, env);
 
-    if (is_mbpf_xadd(meta))
-        return nfp_bpf_check_xadd(nfp_prog, meta, env);
+    if (is_mbpf_atomic(meta))
+        return nfp_bpf_check_atomic(nfp_prog, meta, env);
 
     if (is_mbpf_alu(meta))
         return nfp_bpf_check_alu(nfp_prog, meta, env);
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 1b62397bd124..ce19988fb312 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -261,13 +261,15 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
 
 /* Atomic memory add, *(uint *)(dst_reg + off16) += src_reg */
 
-#define BPF_STX_XADD(SIZE, DST, SRC, OFF)            \
+#define BPF_ATOMIC_ADD(SIZE, DST, SRC, OFF)            \
     ((struct bpf_insn) {                    \
-        .code  = BPF_STX | BPF_SIZE(SIZE) | BPF_XADD,    \
+        .code  = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC,    \
         .dst_reg = DST,                    \
         .src_reg = SRC,                    \
         .off   = OFF,                    \
-        .imm   = 0 })
+        .imm   = BPF_ADD })
+#define BPF_STX_XADD BPF_ATOMIC_ADD /* alias */
 
 /* Memory store, *(uint *) (dst_reg + off16) = imm32 */
 
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index c3458ec1f30a..d0adc48db43c 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -19,7 +19,8 @@
 
 /* ld/ldx fields */
 #define BPF_DW        0x18    /* double word (64-bit) */
-#define BPF_XADD    0xc0    /* exclusive add */
+#define BPF_ATOMIC    0xc0    /* atomic memory ops - op type in immediate */
+#define BPF_XADD    0xc0    /* exclusive add - legacy name */
 
 /* alu/jmp fields */
 #define BPF_MOV        0xb0    /* mov reg to reg */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 261f8692d0d2..3abc6b250b18 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1309,8 +1309,8 @@ EXPORT_SYMBOL_GPL(__bpf_call_base);
     INSN_3(STX, MEM,  H),            \
     INSN_3(STX, MEM,  W),            \
     INSN_3(STX, MEM,  DW),            \
-    INSN_3(STX, XADD, W),            \
-    INSN_3(STX, XADD, DW),            \
+    INSN_3(STX, ATOMIC, W),            \
+    INSN_3(STX, ATOMIC, DW),        \
     /*   Immediate based. */        \
     INSN_3(ST, MEM, B),            \
     INSN_3(ST, MEM, H),            \
@@ -1618,13 +1618,27 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
     LDX_PROBE(DW, 8)
 #undef LDX_PROBE
 
-    STX_XADD_W: /* lock xadd *(u32 *)(dst_reg + off16) += src_reg */
-        atomic_add((u32) SRC, (atomic_t *)(unsigned long)
-               (DST + insn->off));
+    STX_ATOMIC_W:
+        switch (IMM) {
+        case BPF_ADD:
+            /* lock xadd *(u32 *)(dst_reg + off16) += src_reg */
+            atomic_add((u32) SRC, (atomic_t *)(unsigned long)
+                   (DST + insn->off));
+            break;
+        default:
+            goto default_label;
+        }
         CONT;
-    STX_XADD_DW: /* lock xadd *(u64 *)(dst_reg + off16) += src_reg */
-        atomic64_add((u64) SRC, (atomic64_t *)(unsigned long)
-                 (DST + insn->off));
+    STX_ATOMIC_DW:
+        switch (IMM) {
+        case BPF_ADD:
+            /* lock xadd *(u64 *)(dst_reg + off16) += src_reg */
+            atomic64_add((u64) SRC, (atomic64_t *)(unsigned long)
+                     (DST + insn->off));
+            break;
+        default:
+            goto default_label;
+        }
         CONT;
 
     default_label:
@@ -1634,7 +1648,8 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
          *
          * Note, verifier whitelists all opcodes in bpf_opcode_in_insntable().
          */
-        pr_warn("BPF interpreter: unknown opcode %02x\n", insn->code);
+        pr_warn("BPF interpreter: unknown opcode %02x (imm: 0x%x)\n",
+            insn->code, insn->imm);
         BUG_ON(1);
         return 0;
 }
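A standalone sketch (not kernel code) of the interpreter's new shape:
the operation is picked by the immediate at runtime, with GCC/Clang
__atomic builtins standing in for the kernel's atomic_add():

    #include <stdio.h>
    #include <stdint.h>

    #define BPF_ADD 0x00

    static int run_atomic_w(int32_t imm, uint32_t *addr, uint32_t src)
    {
        switch (imm) {
        case BPF_ADD:
            __atomic_fetch_add(addr, src, __ATOMIC_RELAXED);
            break;    /* without the break we would fall into the
                       * unknown-opcode path below */
        default:
            return -1;    /* unknown atomic op: reject */
        }
        return 0;
    }

    int main(void)
    {
        uint32_t val = 0x12;

        run_atomic_w(BPF_ADD, &val, 0x10);
        printf("0x%x\n", val);    /* 0x22, as in test_bpf's STX_XADD_W case */
        return 0;
    }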
diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
index b44d8c447afd..37c8d6e9b4cc 100644
--- a/kernel/bpf/disasm.c
+++ b/kernel/bpf/disasm.c
@@ -153,14 +153,16 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
                 bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
                 insn->dst_reg,
                 insn->off, insn->src_reg);
-        else if (BPF_MODE(insn->code) == BPF_XADD)
+        else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
+             insn->imm == BPF_ADD) {
             verbose(cbs->private_data, "(%02x) lock *(%s *)(r%d %+d) += r%d\n",
                 insn->code,
                 bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
                 insn->dst_reg, insn->off,
                 insn->src_reg);
-        else
+        } else {
             verbose(cbs->private_data, "BUG_%02x\n", insn->code);
+        }
     } else if (class == BPF_ST) {
         if (BPF_MODE(insn->code) != BPF_MEM) {
             verbose(cbs->private_data, "BUG_st_%02x\n", insn->code);
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index e333ce43f281..1947da617b03 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3598,13 +3598,17 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
     return err;
 }
 
-static int check_xadd(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
+static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
 {
     int err;
 
-    if ((BPF_SIZE(insn->code) != BPF_W && BPF_SIZE(insn->code) != BPF_DW) ||
-        insn->imm != 0) {
-        verbose(env, "BPF_XADD uses reserved fields\n");
+    if (insn->imm != BPF_ADD) {
+        verbose(env, "BPF_ATOMIC uses invalid atomic opcode %02x\n", insn->imm);
+        return -EINVAL;
+    }
+
+    if (BPF_SIZE(insn->code) != BPF_W && BPF_SIZE(insn->code) != BPF_DW) {
+        verbose(env, "invalid atomic operand size\n");
         return -EINVAL;
     }
 
@@ -3627,19 +3631,19 @@ static int check_xadd(struct bpf_verifier_env *env, int insn_idx, struct bpf_ins
         is_pkt_reg(env, insn->dst_reg) ||
         is_flow_key_reg(env, insn->dst_reg) ||
         is_sk_reg(env, insn->dst_reg)) {
-        verbose(env, "BPF_XADD stores into R%d %s is not allowed\n",
+        verbose(env, "BPF_ATOMIC stores into R%d %s is not allowed\n",
             insn->dst_reg,
             reg_type_str[reg_state(env, insn->dst_reg)->type]);
         return -EACCES;
     }
 
-    /* check whether atomic_add can read the memory */
+    /* check whether we can read the memory */
     err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
                    BPF_SIZE(insn->code), BPF_READ, -1, true);
     if (err)
         return err;
 
-    /* check whether atomic_add can write into the same memory */
+    /* check whether we can write into the same memory */
     return check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
                 BPF_SIZE(insn->code), BPF_WRITE, -1, true);
 }
@@ -9497,8 +9501,8 @@ static int do_check(struct bpf_verifier_env *env)
         } else if (class == BPF_STX) {
             enum bpf_reg_type *prev_dst_type, dst_reg_type;
 
-            if (BPF_MODE(insn->code) == BPF_XADD) {
-                err = check_xadd(env, env->insn_idx, insn);
+            if (BPF_MODE(insn->code) == BPF_ATOMIC) {
+                err = check_atomic(env, env->insn_idx, insn);
                 if (err)
                     return err;
                 env->insn_idx++;
@@ -9908,7 +9912,7 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
 
         if (BPF_CLASS(insn->code) == BPF_STX &&
             ((BPF_MODE(insn->code) != BPF_MEM &&
-              BPF_MODE(insn->code) != BPF_XADD) || insn->imm != 0)) {
+              BPF_MODE(insn->code) != BPF_ATOMIC) || insn->imm != 0)) {
             verbose(env, "BPF_STX uses reserved fields\n");
             return -EINVAL;
         }
diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index ca7d635bccd9..fbb13ef9207c 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -4295,7 +4295,7 @@ static struct bpf_test tests[] = {
         { { 0, 0xffffffff } },
         .stack_depth = 40,
     },
-    /* BPF_STX | BPF_XADD | BPF_W/DW */
+    /* BPF_STX | BPF_ATOMIC | BPF_W/DW */
     {
         "STX_XADD_W: Test: 0x12 + 0x10 = 0x22",
         .u.insns_int = {
diff --git a/samples/bpf/bpf_insn.h b/samples/bpf/bpf_insn.h
index 544237980582..db67a2847395 100644
--- a/samples/bpf/bpf_insn.h
+++ b/samples/bpf/bpf_insn.h
@@ -138,11 +138,11 @@ struct bpf_insn;
 
 /* Atomic memory add, *(uint *)(dst_reg + off16) += src_reg */
 
-#define BPF_STX_XADD(SIZE, DST, SRC, OFF)            \
+#define BPF_ATOMIC_ADD(SIZE, DST, SRC, OFF)            \
     ((struct bpf_insn) {                    \
-        .code  = BPF_STX | BPF_SIZE(SIZE) | BPF_XADD,    \
+        .code  = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC,    \
         .dst_reg = DST,                    \
         .src_reg = SRC,                    \
         .off   = OFF,                    \
-        .imm   = 0 })
+        .imm   = BPF_ADD })
 
 /* Memory store, *(uint *) (dst_reg + off16) = imm32 */
 
diff --git a/samples/bpf/sock_example.c b/samples/bpf/sock_example.c
index 00aae1d33fca..b18fa8083137 100644
--- a/samples/bpf/sock_example.c
+++ b/samples/bpf/sock_example.c
@@ -54,7 +54,7 @@ static int test_sock(void)
         BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
         BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
         BPF_MOV64_IMM(BPF_REG_1, 1), /* r1 = 1 */
-        BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_DW, BPF_REG_0, BPF_REG_1, 0, 0), /* xadd r0 += r1 */
+        BPF_ATOMIC_ADD(BPF_DW, BPF_REG_0, BPF_REG_1, 0),
         BPF_MOV64_IMM(BPF_REG_0, 0), /* r0 = 0 */
         BPF_EXIT_INSN(),
     };
diff --git a/samples/bpf/test_cgrp2_attach.c b/samples/bpf/test_cgrp2_attach.c
index 20fbd1241db3..61896c4f9322 100644
--- a/samples/bpf/test_cgrp2_attach.c
+++ b/samples/bpf/test_cgrp2_attach.c
@@ -53,7 +53,7 @@ static int prog_load(int map_fd, int verdict)
         BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
         BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
         BPF_MOV64_IMM(BPF_REG_1, 1), /* r1 = 1 */
-        BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_DW, BPF_REG_0, BPF_REG_1, 0, 0), /* xadd r0 += r1 */
+        BPF_ATOMIC_ADD(BPF_DW, BPF_REG_0, BPF_REG_1, 0),
 
         /* Count bytes */
         BPF_MOV64_IMM(BPF_REG_0, MAP_KEY_BYTES), /* r0 = 1 */
@@ -64,7 +64,7 @@ static int prog_load(int map_fd, int verdict)
         BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
         BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
         BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_6, offsetof(struct __sk_buff, len)), /* r1 = skb->len */
-        BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_DW, BPF_REG_0, BPF_REG_1, 0, 0), /* xadd r0 += r1 */
+        BPF_ATOMIC_ADD(BPF_DW, BPF_REG_0, BPF_REG_1, 0),
 
         BPF_MOV64_IMM(BPF_REG_0, verdict), /* r0 = verdict */
         BPF_EXIT_INSN(),
diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h
index ca28b6ab8db7..95ff51d97f25 100644
--- a/tools/include/linux/filter.h
+++ b/tools/include/linux/filter.h
@@ -171,13 +171,14 @@
 
 /* Atomic memory add, *(uint *)(dst_reg + off16) += src_reg */
 
-#define BPF_STX_XADD(SIZE, DST, SRC, OFF)            \
+#define BPF_ATOMIC_ADD(SIZE, DST, SRC, OFF)            \
     ((struct bpf_insn) {                    \
-        .code  = BPF_STX | BPF_SIZE(SIZE) | BPF_XADD,    \
+        .code  = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC,    \
         .dst_reg = DST,                    \
         .src_reg = SRC,                    \
         .off   = OFF,                    \
-        .imm   = 0 })
+        .imm   = BPF_ADD })
+#define BPF_STX_XADD BPF_ATOMIC_ADD /* alias */
 
 /* Memory store, *(uint *) (dst_reg + off16) = imm32 */
 
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index c3458ec1f30a..d0adc48db43c 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -19,7 +19,8 @@
 
 /* ld/ldx fields */
 #define BPF_DW        0x18    /* double word (64-bit) */
-#define BPF_XADD    0xc0    /* exclusive add */
+#define BPF_ATOMIC    0xc0    /* atomic memory ops - op type in immediate */
exclusive add - legacy name */ /* alu/jmp fields */ #define BPF_MOV 0xb0 /* mov reg to reg */ diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c b/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c index b549fcfacc0b..882fce827c81 100644 --- a/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c +++ b/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c @@ -45,13 +45,13 @@ static int prog_load_cnt(int verdict, int val) BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), BPF_MOV64_IMM(BPF_REG_1, val), /* r1 = 1 */ - BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_DW, BPF_REG_0, BPF_REG_1, 0, 0), /* xadd r0 += r1 */ + BPF_ATOMIC_ADD(BPF_DW, BPF_REG_0, BPF_REG_1, 0), BPF_LD_MAP_FD(BPF_REG_1, cgroup_storage_fd), BPF_MOV64_IMM(BPF_REG_2, 0), BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage), BPF_MOV64_IMM(BPF_REG_1, val), - BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_W, BPF_REG_0, BPF_REG_1, 0, 0), + BPF_ATOMIC_ADD(BPF_W, BPF_REG_0, BPF_REG_1, 0), BPF_LD_MAP_FD(BPF_REG_1, percpu_cgroup_storage_fd), BPF_MOV64_IMM(BPF_REG_2, 0), diff --git a/tools/testing/selftests/bpf/verifier/ctx.c b/tools/testing/selftests/bpf/verifier/ctx.c index 93d6b1641481..ede3842d123b 100644 --- a/tools/testing/selftests/bpf/verifier/ctx.c +++ b/tools/testing/selftests/bpf/verifier/ctx.c @@ -10,14 +10,13 @@ .prog_type = BPF_PROG_TYPE_SCHED_CLS, }, { - "context stores via XADD", + "context stores via BPF_ATOMIC", .insns = { BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_W, BPF_REG_1, - BPF_REG_0, offsetof(struct __sk_buff, mark), 0), + BPF_ATOMIC_ADD(BPF_W, BPF_REG_1, BPF_REG_0, offsetof(struct __sk_buff, mark)), BPF_EXIT_INSN(), }, - .errstr = "BPF_XADD stores into R1 ctx is not allowed", + .errstr = "BPF_ATOMIC stores into R1 ctx is not allowed", .result = REJECT, .prog_type = BPF_PROG_TYPE_SCHED_CLS, }, diff --git a/tools/testing/selftests/bpf/verifier/leak_ptr.c b/tools/testing/selftests/bpf/verifier/leak_ptr.c index d6eec17f2cd2..f9a594b48fb3 100644 --- a/tools/testing/selftests/bpf/verifier/leak_ptr.c +++ b/tools/testing/selftests/bpf/verifier/leak_ptr.c @@ -13,7 +13,7 @@ .errstr_unpriv = "R2 leaks addr into mem", .result_unpriv = REJECT, .result = REJECT, - .errstr = "BPF_XADD stores into R1 ctx is not allowed", + .errstr = "BPF_ATOMIC stores into R1 ctx is not allowed", }, { "leak pointer into ctx 2", @@ -28,7 +28,7 @@ .errstr_unpriv = "R10 leaks addr into mem", .result_unpriv = REJECT, .result = REJECT, - .errstr = "BPF_XADD stores into R1 ctx is not allowed", + .errstr = "BPF_ATOMIC stores into R1 ctx is not allowed", }, { "leak pointer into ctx 3", diff --git a/tools/testing/selftests/bpf/verifier/unpriv.c b/tools/testing/selftests/bpf/verifier/unpriv.c index 91bb77c24a2e..85b5e8b70e5d 100644 --- a/tools/testing/selftests/bpf/verifier/unpriv.c +++ b/tools/testing/selftests/bpf/verifier/unpriv.c @@ -206,7 +206,8 @@ BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8), BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0), BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_DW, BPF_REG_10, BPF_REG_0, -8, 0), + BPF_RAW_INSN(BPF_STX | BPF_ATOMIC | BPF_DW, + BPF_REG_10, BPF_REG_0, -8, BPF_ADD), BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0), BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_hash_recalc), BPF_EXIT_INSN(), diff --git a/tools/testing/selftests/bpf/verifier/xadd.c b/tools/testing/selftests/bpf/verifier/xadd.c index c5de2e62cc8b..70a320505bf2 100644 --- 
a/tools/testing/selftests/bpf/verifier/xadd.c +++ b/tools/testing/selftests/bpf/verifier/xadd.c @@ -51,7 +51,7 @@ BPF_EXIT_INSN(), }, .result = REJECT, - .errstr = "BPF_XADD stores into R2 pkt is not allowed", + .errstr = "BPF_ATOMIC stores into R2 pkt is not allowed", .prog_type = BPF_PROG_TYPE_XDP, .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, },
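For illustration, the .imm = BPF_ADD choice above keeps the new encoding byte-for-byte identical to the legacy one, since BPF_ADD is 0x00 and the old XADD encoding left imm as 0. A minimal sketch using the macros from this patch:

/* Both denote "lock *(u64 *)(r0 + 0) += r1" and build the exact same
 * instruction: .code = BPF_STX | BPF_DW | BPF_ATOMIC (0xdb), .imm = 0.
 */
struct bpf_insn legacy = BPF_STX_XADD(BPF_DW, BPF_REG_0, BPF_REG_1, 0);   /* legacy alias */
struct bpf_insn atomic = BPF_ATOMIC_ADD(BPF_DW, BPF_REG_0, BPF_REG_1, 0); /* new spelling */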
From patchwork Thu Dec 3 16:02:37 2020 X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 11949063 X-Patchwork-Delegate: bpf@iogearbox.net Date: Thu, 3 Dec 2020 16:02:37 +0000 In-Reply-To: <20201203160245.1014867-1-jackmanb@google.com> Message-Id: <20201203160245.1014867-7-jackmanb@google.com> References: <20201203160245.1014867-1-jackmanb@google.com> Subject: [PATCH bpf-next v3 06/14] bpf: Move BPF_STX reserved field check into BPF_STX verifier code From: Brendan Jackman To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Yonghong Song , Daniel Borkmann , KP Singh , Florent Revest , linux-kernel@vger.kernel.org, Jann Horn , Brendan Jackman

I can't find a reason why this code is in resolve_pseudo_ldimm64; since I'll be modifying it in a subsequent commit, tidy it up.

Change-Id: I3410469270f4889a3af67612bd6c2e7979ab4da1 Signed-off-by: Brendan Jackman Acked-by: Yonghong Song --- kernel/bpf/verifier.c | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 1947da617b03..e8b41ccdfb90 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -9501,6 +9501,12 @@ static int do_check(struct bpf_verifier_env *env) } else if (class == BPF_STX) { enum bpf_reg_type *prev_dst_type, dst_reg_type; + if (((BPF_MODE(insn->code) != BPF_MEM && + BPF_MODE(insn->code) != BPF_ATOMIC) || insn->imm != 0)) { + verbose(env, "BPF_STX uses reserved fields\n"); + return -EINVAL; + } + if (BPF_MODE(insn->code) == BPF_ATOMIC) { err = check_atomic(env, env->insn_idx, insn); if (err) @@ -9910,13 +9916,6 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env) return -EINVAL; } - if (BPF_CLASS(insn->code) == BPF_STX && - ((BPF_MODE(insn->code) != BPF_MEM && - BPF_MODE(insn->code) != BPF_ATOMIC) || insn->imm != 0)) { - verbose(env, "BPF_STX uses reserved fields\n"); - return -EINVAL; - } - if (insn[0].code == (BPF_LD | BPF_IMM | BPF_DW)) { struct bpf_insn_aux_data *aux; struct bpf_map *map;

From patchwork Thu Dec 3 16:02:38 2020 X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 11949067 X-Patchwork-Delegate: bpf@iogearbox.net
Date: Thu, 3 Dec 2020 16:02:38 +0000 In-Reply-To: <20201203160245.1014867-1-jackmanb@google.com> Message-Id: <20201203160245.1014867-8-jackmanb@google.com> References: <20201203160245.1014867-1-jackmanb@google.com> Subject: [PATCH bpf-next v3 07/14] bpf: Add BPF_FETCH field / create atomic_fetch_add instruction From: Brendan Jackman To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Yonghong Song , Daniel Borkmann , KP Singh , Florent Revest , linux-kernel@vger.kernel.org, Jann Horn , Brendan Jackman

This value can be set in bpf_insn.imm for BPF_ATOMIC instructions, in order to have the previous value of the atomically modified memory location loaded into the src register after the atomic operation is carried out.
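For example (a sketch using the BPF_ATOMIC_FETCH_ADD macro this patch adds to the filter.h headers; the stack-slot contents are illustrative):

BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),       /* *(u64 *)(r10 - 8) = 3 */
BPF_MOV64_IMM(BPF_REG_1, 2),                 /* r1 = 2 */
/* r1 = atomic_fetch_add(*(u64 *)(r10 - 8), r1);
 * afterwards the stack slot holds 5 and r1 holds the old value, 3.
 */
BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_10, BPF_REG_1, -8),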
Suggested-by: Yonghong Song Signed-off-by: Brendan Jackman Change-Id: I649ad48edb565a32ccdf72924ffe96a8c8da57ad Acked-by: Yonghong Song --- arch/x86/net/bpf_jit_comp.c | 4 ++++ include/linux/filter.h | 9 +++++++++ include/uapi/linux/bpf.h | 3 +++ kernel/bpf/core.c | 13 +++++++++++++ kernel/bpf/disasm.c | 7 +++++++ kernel/bpf/verifier.c | 35 ++++++++++++++++++++++++---------- tools/include/linux/filter.h | 10 ++++++++++ tools/include/uapi/linux/bpf.h | 3 +++ 8 files changed, 74 insertions(+), 10 deletions(-) diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index 5e5a132b3d52..88cb09fa3bfb 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -827,6 +827,10 @@ static int emit_atomic(u8 **pprog, u8 atomic_op, /* lock *(u32/u64*)(dst_reg + off) = src_reg */ EMIT1(simple_alu_opcodes[atomic_op]); break; + case BPF_ADD | BPF_FETCH: + /* src_reg = atomic_fetch_add(*(dst_reg + off), src_reg); */ + EMIT2(0x0F, 0xC1); + break; default: pr_err("bpf_jit: unknown atomic opcode %02x\n", atomic_op); return -EFAULT; diff --git a/include/linux/filter.h b/include/linux/filter.h index ce19988fb312..4e04d0fc454f 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -270,6 +270,15 @@ static inline bool insn_is_zext(const struct bpf_insn *insn) .imm = BPF_ADD }) #define BPF_STX_XADD BPF_ATOMIC_ADD /* alias */ +/* Atomic memory add with fetch, src_reg = atomic_fetch_add(*(dst_reg + off), src_reg); */ + +#define BPF_ATOMIC_FETCH_ADD(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_ADD | BPF_FETCH }) /* Memory store, *(uint *) (dst_reg + off16) = imm32 */ diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index d0adc48db43c..025e377e7229 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -44,6 +44,9 @@ #define BPF_CALL 0x80 /* function call */ #define BPF_EXIT 0x90 /* function return */ +/* atomic op type fields (stored in immediate) */ +#define BPF_FETCH 0x01 /* fetch previous value into src reg */ + /* Register numbers */ enum { BPF_REG_0 = 0, diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 3abc6b250b18..61e93eb7d363 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -1624,16 +1624,29 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack) /* lock xadd *(u32 *)(dst_reg + off16) += src_reg */ atomic_add((u32) SRC, (atomic_t *)(unsigned long) (DST + insn->off)); + break; + case BPF_ADD | BPF_FETCH: + SRC = (u32) atomic_fetch_add( + (u32) SRC, + (atomic_t *)(unsigned long) (DST + insn->off)); + break; default: goto default_label; } CONT; + STX_ATOMIC_DW: switch (IMM) { case BPF_ADD: /* lock xadd *(u64 *)(dst_reg + off16) += src_reg */ atomic64_add((u64) SRC, (atomic64_t *)(unsigned long) (DST + insn->off)); + break; + case BPF_ADD | BPF_FETCH: + SRC = (u64) atomic64_fetch_add( + (u64) SRC, + (atomic64_t *)(s64) (DST + insn->off)); + break; default: goto default_label; } diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c index 37c8d6e9b4cc..3ee2246a52ef 100644 --- a/kernel/bpf/disasm.c +++ b/kernel/bpf/disasm.c @@ -160,6 +160,13 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs, bpf_ldst_string[BPF_SIZE(insn->code) >> 3], insn->dst_reg, insn->off, insn->src_reg); + } else if (BPF_MODE(insn->code) == BPF_ATOMIC && + insn->imm == (BPF_ADD | BPF_FETCH)) { + verbose(cbs->private_data, "(%02x) r%d = atomic%s_fetch_add(*(%s *)(r%d %+d), r%d)\n", + insn->code, 
insn->src_reg, + BPF_SIZE(insn->code) == BPF_DW ? "64" : "", + bpf_ldst_string[BPF_SIZE(insn->code) >> 3], + insn->dst_reg, insn->off, insn->src_reg); } else { verbose(cbs->private_data, "BUG_%02x\n", insn->code); } diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index e8b41ccdfb90..a68adbcee370 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -3602,7 +3602,11 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i { int err; - if (insn->imm != BPF_ADD) { + switch (insn->imm) { + case BPF_ADD: + case BPF_ADD | BPF_FETCH: + break; + default: verbose(env, "BPF_ATOMIC uses invalid atomic opcode %02x\n", insn->imm); return -EINVAL; } @@ -3631,7 +3635,7 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i is_pkt_reg(env, insn->dst_reg) || is_flow_key_reg(env, insn->dst_reg) || is_sk_reg(env, insn->dst_reg)) { - verbose(env, "atomic stores into R%d %s is not allowed\n", + verbose(env, "BPF_ATOMIC stores into R%d %s is not allowed\n", insn->dst_reg, reg_type_str[reg_state(env, insn->dst_reg)->type]); return -EACCES; @@ -3644,8 +3648,20 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i return err; /* check whether we can write into the same memory */ - return check_mem_access(env, insn_idx, insn->dst_reg, insn->off, - BPF_SIZE(insn->code), BPF_WRITE, -1, true); + err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off, + BPF_SIZE(insn->code), BPF_WRITE, -1, true); + if (err) + return err; + + if (!(insn->imm & BPF_FETCH)) + return 0; + + /* check and record load of old value into src reg */ + err = check_reg_arg(env, insn->src_reg, DST_OP); + if (err) + return err; + + return 0; } static int __check_stack_boundary(struct bpf_verifier_env *env, u32 regno, @@ -9501,12 +9517,6 @@ static int do_check(struct bpf_verifier_env *env) } else if (class == BPF_STX) { enum bpf_reg_type *prev_dst_type, dst_reg_type; - if (((BPF_MODE(insn->code) != BPF_MEM && - BPF_MODE(insn->code) != BPF_ATOMIC) || insn->imm != 0)) { - verbose(env, "BPF_STX uses reserved fields\n"); - return -EINVAL; - } - if (BPF_MODE(insn->code) == BPF_ATOMIC) { err = check_atomic(env, env->insn_idx, insn); if (err) @@ -9515,6 +9525,11 @@ static int do_check(struct bpf_verifier_env *env) continue; } + if (BPF_MODE(insn->code) != BPF_MEM || insn->imm != 0) { + verbose(env, "BPF_STX uses reserved fields\n"); + return -EINVAL; + } + /* check src1 operand */ err = check_reg_arg(env, insn->src_reg, SRC_OP); if (err) diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h index 95ff51d97f25..ac7701678e1a 100644 --- a/tools/include/linux/filter.h +++ b/tools/include/linux/filter.h @@ -180,6 +180,16 @@ .imm = BPF_ADD }) #define BPF_STX_XADD BPF_ATOMIC_ADD /* alias */ +/* Atomic memory add with fetch, src_reg = atomic_fetch_add(*(dst_reg + off), src_reg); */ + +#define BPF_ATOMIC_FETCH_ADD(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_ADD | BPF_FETCH }) + /* Memory store, *(uint *) (dst_reg + off16) = imm32 */ #define BPF_ST_MEM(SIZE, DST, OFF, IMM) \ diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index d0adc48db43c..025e377e7229 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -44,6 +44,9 @@ #define BPF_CALL 0x80 /* function call */ #define BPF_EXIT 0x90 /* function return */ +/* atomic op type fields (stored in immediate) 
*/ +#define BPF_FETCH 0x01 /* fetch previous value into src reg */ + /* Register numbers */ enum { BPF_REG_0 = 0,

From patchwork Thu Dec 3 16:02:39 2020 X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 11949069 X-Patchwork-Delegate: bpf@iogearbox.net Date: Thu, 3 Dec 2020 16:02:39 +0000 In-Reply-To: <20201203160245.1014867-1-jackmanb@google.com> Message-Id: <20201203160245.1014867-9-jackmanb@google.com> References: <20201203160245.1014867-1-jackmanb@google.com> Subject: [PATCH bpf-next v3 08/14] bpf: Add instructions for atomic_[cmp]xchg From: Brendan Jackman To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Yonghong Song , Daniel Borkmann , KP Singh , Florent Revest , linux-kernel@vger.kernel.org, Jann Horn , Brendan Jackman

This adds two atomic opcodes, both of which include the BPF_FETCH flag. XCHG without the BPF_FETCH flag would naturally encode atomic_set. This is not supported because it would be of limited value to userspace (it doesn't imply any barriers). CMPXCHG without BPF_FETCH would be an atomic compare-and-write. We don't have such an operation in the kernel so it isn't provided to BPF either.

There are two significant design decisions made for the CMPXCHG instruction:

- This operation fundamentally has three operands, but we only have two register fields, so the operand we compare against (the kernel's API calls it 'old') is hard-coded to be R0. x86 has a similar design (and A64 doesn't have this problem). A potential alternative would be to encode the other operand's register number in the immediate field.

- The kernel's atomic_cmpxchg returns the old value, while the C11 userspace APIs return a boolean indicating the comparison result. Which should BPF do? A64 returns the old value. x86 returns the old value in the hard-coded register (and also sets a flag). That means return-old-value is easier to JIT, so that is what BPF does.
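Concretely, a compare-and-exchange then looks like this (a sketch using the BPF_ATOMIC_CMPXCHG macro added below; the values are illustrative):

BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),   /* *(u64 *)(r10 - 8) = 3 */
BPF_MOV64_IMM(BPF_REG_0, 3),             /* r0 = expected old value */
BPF_MOV64_IMM(BPF_REG_1, 4),             /* r1 = new value */
/* r0 = atomic_cmpxchg(*(u64 *)(r10 - 8), r0, r1);
 * The comparison succeeds, so memory now holds 4 and r0 holds the old
 * value, 3. Had it failed, memory would be untouched and r0 would hold
 * the value actually found there.
 */
BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_1, -8),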
Signed-off-by: Brendan Jackman Change-Id: I3f19ad867dfd08515eecf72674e6fdefe28424bb Acked-by: Yonghong Song --- arch/x86/net/bpf_jit_comp.c | 8 ++++++++ include/linux/filter.h | 20 ++++++++++++++++++++ include/uapi/linux/bpf.h | 4 +++- kernel/bpf/core.c | 20 ++++++++++++++++++++ kernel/bpf/disasm.c | 15 +++++++++++++++ kernel/bpf/verifier.c | 19 +++++++++++++++++-- tools/include/linux/filter.h | 20 ++++++++++++++++++++ tools/include/uapi/linux/bpf.h | 4 +++- 8 files changed, 106 insertions(+), 4 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index 88cb09fa3bfb..7d29bc3bb4ff 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -831,6 +831,14 @@ static int emit_atomic(u8 **pprog, u8 atomic_op, /* src_reg = atomic_fetch_add(*(dst_reg + off), src_reg); */ EMIT2(0x0F, 0xC1); break; + case BPF_XCHG: + /* src_reg = atomic_xchg(*(u32/u64*)(dst_reg + off), src_reg); */ + EMIT1(0x87); + break; + case BPF_CMPXCHG: + /* r0 = atomic_cmpxchg(*(u32/u64*)(dst_reg + off), r0, src_reg); */ + EMIT2(0x0F, 0xB1); + break; default: pr_err("bpf_jit: unknown atomic opcode %02x\n", atomic_op); return -EFAULT;

diff --git a/include/linux/filter.h b/include/linux/filter.h index 4e04d0fc454f..6186280715ed 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -280,6 +280,26 @@ static inline bool insn_is_zext(const struct bpf_insn *insn) .off = OFF, \ .imm = BPF_ADD | BPF_FETCH }) +/* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */ + +#define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_XCHG }) + +/* Atomic compare-exchange, r0 = atomic_cmpxchg((dst_reg + off),
r0, src_reg) */ + +#define BPF_ATOMIC_CMPXCHG(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_CMPXCHG }) + /* Memory store, *(uint *) (dst_reg + off16) = imm32 */ #define BPF_ST_MEM(SIZE, DST, OFF, IMM) \ diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 025e377e7229..53334530cc81 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -45,7 +45,9 @@ #define BPF_EXIT 0x90 /* function return */ /* atomic op type fields (stored in immediate) */ -#define BPF_FETCH 0x01 /* fetch previous value into src reg */ +#define BPF_XCHG (0xe0 | BPF_FETCH) /* atomic exchange */ +#define BPF_CMPXCHG (0xf0 | BPF_FETCH) /* atomic compare-and-write */ +#define BPF_FETCH 0x01 /* not an opcode on its own, used to build others */ /* Register numbers */ enum { diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 61e93eb7d363..28f960bc2e30 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -1630,6 +1630,16 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack) (u32) SRC, (atomic_t *)(unsigned long) (DST + insn->off)); break; + case BPF_XCHG: + SRC = (u32) atomic_xchg( + (atomic_t *)(unsigned long) (DST + insn->off), + (u32) SRC); + break; + case BPF_CMPXCHG: + BPF_R0 = (u32) atomic_cmpxchg( + (atomic_t *)(unsigned long) (DST + insn->off), + (u32) BPF_R0, (u32) SRC); + break; default: goto default_label; } @@ -1647,6 +1657,16 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack) (u64) SRC, (atomic64_t *)(s64) (DST + insn->off)); break; + case BPF_XCHG: + SRC = (u64) atomic64_xchg( + (atomic64_t *)(u64) (DST + insn->off), + (u64) SRC); + break; + case BPF_CMPXCHG: + BPF_R0 = (u64) atomic64_cmpxchg( + (atomic64_t *)(u64) (DST + insn->off), + (u64) BPF_R0, (u64) SRC); + break; default: goto default_label; } diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c index 3ee2246a52ef..18357ea9a17d 100644 --- a/kernel/bpf/disasm.c +++ b/kernel/bpf/disasm.c @@ -167,6 +167,21 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs, BPF_SIZE(insn->code) == BPF_DW ? "64" : "", bpf_ldst_string[BPF_SIZE(insn->code) >> 3], insn->dst_reg, insn->off, insn->src_reg); + } else if (BPF_MODE(insn->code) == BPF_ATOMIC && + insn->imm == BPF_CMPXCHG) { + verbose(cbs->private_data, "(%02x) r0 = atomic%s_cmpxchg(*(%s *)(r%d %+d), r0, r%d)\n", + insn->code, + BPF_SIZE(insn->code) == BPF_DW ? "64" : "", + bpf_ldst_string[BPF_SIZE(insn->code) >> 3], + insn->dst_reg, insn->off, + insn->src_reg); + } else if (BPF_MODE(insn->code) == BPF_ATOMIC && + insn->imm == BPF_XCHG) { + verbose(cbs->private_data, "(%02x) r%d = atomic%s_xchg(*(%s *)(r%d %+d), r%d)\n", + insn->code, insn->src_reg, + BPF_SIZE(insn->code) == BPF_DW ? 
"64" : "", + bpf_ldst_string[BPF_SIZE(insn->code) >> 3], + insn->dst_reg, insn->off, insn->src_reg); } else { verbose(cbs->private_data, "BUG_%02x\n", insn->code); } diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index a68adbcee370..ccf4315e54e7 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -3601,10 +3601,13 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn) { int err; + int load_reg; switch (insn->imm) { case BPF_ADD: case BPF_ADD | BPF_FETCH: + case BPF_XCHG: + case BPF_CMPXCHG: break; default: verbose(env, "BPF_ATOMIC uses invalid atomic opcode %02x\n", insn->imm); @@ -3626,6 +3629,13 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i if (err) return err; + if (insn->imm == BPF_CMPXCHG) { + /* Check comparison of R0 with memory location */ + err = check_reg_arg(env, BPF_REG_0, SRC_OP); + if (err) + return err; + } + if (is_pointer_value(env, insn->src_reg)) { verbose(env, "R%d leaks addr into mem\n", insn->src_reg); return -EACCES; @@ -3656,8 +3666,13 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i if (!(insn->imm & BPF_FETCH)) return 0; - /* check and record load of old value into src reg */ - err = check_reg_arg(env, insn->src_reg, DST_OP); + if (insn->imm == BPF_CMPXCHG) + load_reg = BPF_REG_0; + else + load_reg = insn->src_reg; + + /* check and record load of old value */ + err = check_reg_arg(env, load_reg, DST_OP); if (err) return err; diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h index ac7701678e1a..ea99bd17d003 100644 --- a/tools/include/linux/filter.h +++ b/tools/include/linux/filter.h @@ -190,6 +190,26 @@ .off = OFF, \ .imm = BPF_ADD | BPF_FETCH }) +/* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */ + +#define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_XCHG }) + +/* Atomic compare-exchange, r0 = atomic_cmpxchg((dst_reg + off), r0, src_reg) */ + +#define BPF_ATOMIC_CMPXCHG(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_CMPXCHG }) + /* Memory store, *(uint *) (dst_reg + off16) = imm32 */ #define BPF_ST_MEM(SIZE, DST, OFF, IMM) \ diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 025e377e7229..53334530cc81 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -45,7 +45,9 @@ #define BPF_EXIT 0x90 /* function return */ /* atomic op type fields (stored in immediate) */ -#define BPF_FETCH 0x01 /* fetch previous value into src reg */ +#define BPF_XCHG (0xe0 | BPF_FETCH) /* atomic exchange */ +#define BPF_CMPXCHG (0xf0 | BPF_FETCH) /* atomic compare-and-write */ +#define BPF_FETCH 0x01 /* not an opcode on its own, used to build others */ /* Register numbers */ enum { From patchwork Thu Dec 3 16:02:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 11949071 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, 
Date: Thu, 3 Dec 2020 16:02:40 +0000 In-Reply-To: <20201203160245.1014867-1-jackmanb@google.com> Message-Id: <20201203160245.1014867-10-jackmanb@google.com> References: <20201203160245.1014867-1-jackmanb@google.com> Subject: [PATCH bpf-next v3 09/14] bpf: Pull out a macro for interpreting atomic ALU operations From: Brendan Jackman To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Yonghong Song , Daniel Borkmann , KP Singh , Florent Revest , linux-kernel@vger.kernel.org, Jann Horn , Brendan Jackman

Since the atomic operations that are added in subsequent commits are all isomorphic with BPF_ADD, pull out a macro to avoid the interpreter becoming dominated by lines of atomic-related code. Note that this sacrifices interpreter performance (combining STX_ATOMIC_W and STX_ATOMIC_DW into a single switch case means that we need an extra conditional branch to differentiate them) in favour of compact and (relatively!) simple C code.
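To make the macro's shape concrete: hand-expanding ATOMIC(BPF_ADD, add) from the diff below yields roughly the following interpreter case (a sketch; SRC, DST and insn are the interpreter's usual locals):

case BPF_ADD:
	if (BPF_SIZE(insn->code) == BPF_W)
		atomic_add((u32) SRC, (atomic_t *)(unsigned long)(DST + insn->off));
	else
		atomic64_add((u64) SRC, (atomic64_t *)(unsigned long)(DST + insn->off));
	break;
case BPF_ADD | BPF_FETCH:
	/* ...likewise, dispatching on size to atomic[64]_fetch_add() */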
Change-Id: I8cae5b66e75f34393de6063b91c05a8006fdd9e6 Signed-off-by: Brendan Jackman Acked-by: Yonghong Song --- kernel/bpf/core.c | 79 +++++++++++++++++++++++------------------------ 1 file changed, 38 insertions(+), 41 deletions(-)

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 28f960bc2e30..498d3f067be7 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -1618,55 +1618,52 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack) LDX_PROBE(DW, 8) #undef LDX_PROBE - STX_ATOMIC_W: - switch (IMM) { - case BPF_ADD: - /* lock xadd *(u32 *)(dst_reg + off16) += src_reg */ - atomic_add((u32) SRC, (atomic_t *)(unsigned long) - (DST + insn->off)); - break; - case BPF_ADD | BPF_FETCH: - SRC = (u32) atomic_fetch_add( - (u32) SRC, - (atomic_t *)(unsigned long) (DST + insn->off)); - break; - case BPF_XCHG: - SRC = (u32) atomic_xchg( - (atomic_t *)(unsigned long) (DST + insn->off), - (u32) SRC); - break; - case BPF_CMPXCHG: - BPF_R0 = (u32) atomic_cmpxchg( - (atomic_t *)(unsigned long) (DST + insn->off), - (u32) BPF_R0, (u32) SRC); +#define ATOMIC(BOP, KOP) \ + case BOP: \ + if (BPF_SIZE(insn->code) == BPF_W) \ + atomic_##KOP((u32) SRC, (atomic_t *)(unsigned long) \ + (DST + insn->off)); \ + else \ + atomic64_##KOP((u64) SRC, (atomic64_t *)(unsigned long) \ + (DST + insn->off)); \ + break; \ + case BOP | BPF_FETCH: \ + if (BPF_SIZE(insn->code) == BPF_W) \ + SRC = (u32) atomic_fetch_##KOP( \ + (u32) SRC, \ + (atomic_t *)(unsigned long) (DST + insn->off)); \ + else \ + SRC = (u64) atomic64_fetch_##KOP( \ + (u64) SRC, \ + (atomic64_t *)(s64) (DST + insn->off)); \ break; - default: - goto default_label; - } - CONT; STX_ATOMIC_DW: + STX_ATOMIC_W: switch (IMM) { - case BPF_ADD: - /* lock xadd *(u64 *)(dst_reg + off16) += src_reg */ - atomic64_add((u64) SRC, (atomic64_t *)(unsigned long) - (DST + insn->off)); - break; - case BPF_ADD | BPF_FETCH: - SRC = (u64) atomic64_fetch_add( - (u64) SRC, - (atomic64_t *)(s64) (DST + insn->off)); - break; + ATOMIC(BPF_ADD, add) + case BPF_XCHG: - SRC = (u64) atomic64_xchg( - (atomic64_t *)(u64) (DST + insn->off), - (u64) SRC); + if (BPF_SIZE(insn->code) == BPF_W) + SRC = (u32) atomic_xchg( + (atomic_t *)(unsigned long) (DST + insn->off), + (u32) SRC); + else + SRC = (u64) atomic64_xchg( + (atomic64_t *)(u64) (DST + insn->off), + (u64) SRC); break; case BPF_CMPXCHG: - BPF_R0 = (u64) atomic64_cmpxchg( - (atomic64_t *)(u64) (DST + insn->off), - (u64) BPF_R0, (u64) SRC); + if (BPF_SIZE(insn->code) == BPF_W) + BPF_R0 = (u32) atomic_cmpxchg( + (atomic_t *)(unsigned long) (DST + insn->off), + (u32) BPF_R0, (u32) SRC); + else + BPF_R0 = (u64) atomic64_cmpxchg( + (atomic64_t *)(u64) (DST + insn->off), + (u64) BPF_R0, (u64) SRC); break; + default: goto default_label; }

From patchwork Thu Dec 3 16:02:41 2020 X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 11949075 X-Patchwork-Delegate: bpf@iogearbox.net
Date: Thu, 3 Dec 2020 16:02:41 +0000 In-Reply-To: <20201203160245.1014867-1-jackmanb@google.com> Message-Id: <20201203160245.1014867-11-jackmanb@google.com> References: <20201203160245.1014867-1-jackmanb@google.com> Subject: [PATCH bpf-next v3 10/14] bpf: Add bitwise atomic instructions From: Brendan Jackman To: bpf@vger.kernel.org Cc: Alexei Starovoitov ,
Yonghong Song , Daniel Borkmann , KP Singh , Florent Revest , linux-kernel@vger.kernel.org, Jann Horn , Brendan Jackman

This adds instructions for atomic[64]_[fetch_]and, atomic[64]_[fetch_]or and atomic[64]_[fetch_]xor. All these operations are isomorphic enough to implement with the same verifier, interpreter, and x86 JIT code, hence a single commit. The main interesting thing here is that x86 doesn't directly support the fetch_ versions of these operations, so we need to generate a CMPXCHG loop in the JIT. This requires the use of two temporary registers; IIUC it's safe to use BPF_REG_AX and x86's AUX_REG for this purpose.
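In C terms, the loop the JIT emits for, say, a 64-bit fetch_and is equivalent to something like this (a sketch; dst, src and off stand for the instruction's operands, and cmpxchg() is the kernel primitive returning the value previously found in memory):

u64 old, new;
do {
	old = *(u64 *)(dst + off);               /* the emit_ldx() below */
	new = old & src;                         /* the ALU op, staged in AUX_REG */
} while (cmpxchg((u64 *)(dst + off), old, new) != old);
src = old;                                       /* the BPF_FETCH result */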
Change-Id: I340b10cecebea8cb8a52e3606010cde547a10ed4 Signed-off-by: Brendan Jackman --- arch/x86/net/bpf_jit_comp.c | 50 +++++++++++++++++++++++++++++- include/linux/filter.h | 60 ++++++++++++++++++++++++++++++++++++ kernel/bpf/core.c | 5 ++- kernel/bpf/disasm.c | 21 ++++++++++--- kernel/bpf/verifier.c | 6 ++++ tools/include/linux/filter.h | 60 ++++++++++++++++++++++++++++++++++++ 6 files changed, 196 insertions(+), 6 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index 7d29bc3bb4ff..4ab0f821326c 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -824,6 +824,10 @@ static int emit_atomic(u8 **pprog, u8 atomic_op, /* emit opcode */ switch (atomic_op) { case BPF_ADD: + case BPF_SUB: + case BPF_AND: + case BPF_OR: + case BPF_XOR: /* lock *(u32/u64*)(dst_reg + off) = src_reg */ EMIT1(simple_alu_opcodes[atomic_op]); break; @@ -1306,8 +1310,52 @@ st: if (is_imm8(insn->off)) case BPF_STX | BPF_ATOMIC | BPF_W: case BPF_STX | BPF_ATOMIC | BPF_DW: + if (insn->imm == (BPF_AND | BPF_FETCH) || + insn->imm == (BPF_OR | BPF_FETCH) || + insn->imm == (BPF_XOR | BPF_FETCH)) { + u8 *branch_target; + bool is64 = BPF_SIZE(insn->code) == BPF_DW; + + /* + * Can't be implemented with a single x86 insn. + * Need to do a CMPXCHG loop. + */ + + /* Will need RAX as a CMPXCHG operand so save R0 */ + emit_mov_reg(&prog, true, BPF_REG_AX, BPF_REG_0); + branch_target = prog; + /* Load old value */ + emit_ldx(&prog, BPF_SIZE(insn->code), + BPF_REG_0, dst_reg, insn->off); + /* + * Perform the (commutative) operation locally, + * put the result in the AUX_REG. + */ + emit_mov_reg(&prog, is64, AUX_REG, BPF_REG_0); + maybe_emit_mod(&prog, AUX_REG, src_reg, is64); + EMIT2(simple_alu_opcodes[BPF_OP(insn->imm)], + add_2reg(0xC0, AUX_REG, src_reg)); + /* Attempt to swap in new value */ + err = emit_atomic(&prog, BPF_CMPXCHG, + dst_reg, AUX_REG, insn->off, + BPF_SIZE(insn->code)); + if (WARN_ON(err)) + return err; + /* + * ZF tells us whether we won the race. If it's + * cleared we need to try again. + */ + EMIT2(X86_JNE, -(prog - branch_target) - 2); + /* Return the pre-modification value */ + emit_mov_reg(&prog, is64, src_reg, BPF_REG_0); + /* Restore R0 after clobbering RAX */ + emit_mov_reg(&prog, true, BPF_REG_0, BPF_REG_AX); + break; + + } + err = emit_atomic(&prog, insn->imm, dst_reg, src_reg, - insn->off, BPF_SIZE(insn->code)); + insn->off, BPF_SIZE(insn->code)); if (err) return err; break;

diff --git a/include/linux/filter.h b/include/linux/filter.h index 6186280715ed..698f82897b0d 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -280,6 +280,66 @@ static inline bool insn_is_zext(const struct bpf_insn *insn) .off = OFF, \ .imm = BPF_ADD | BPF_FETCH }) +/* Atomic memory and, *(uint *)(dst_reg + off16) &= src_reg */ + +#define BPF_ATOMIC_AND(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_AND }) + +/* Atomic memory and with fetch, src_reg = atomic_fetch_and(*(dst_reg + off), src_reg); */ + +#define BPF_ATOMIC_FETCH_AND(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_AND | BPF_FETCH }) + +/* Atomic memory or, *(uint *)(dst_reg + off16) |= src_reg */ + +#define BPF_ATOMIC_OR(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_OR }) + +/* Atomic memory or with fetch, src_reg = atomic_fetch_or(*(dst_reg + off), src_reg); */ + +#define BPF_ATOMIC_FETCH_OR(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_OR | BPF_FETCH }) + +/* Atomic memory xor, *(uint *)(dst_reg + off16) ^= src_reg */ + +#define BPF_ATOMIC_XOR(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_XOR }) + +/* Atomic memory xor with fetch, src_reg = atomic_fetch_xor(*(dst_reg + off), src_reg); */ + +#define BPF_ATOMIC_FETCH_XOR(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_XOR | BPF_FETCH }) + /* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */ #define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF) \

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 498d3f067be7..27eac4d5724c 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -1642,7 +1642,10 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack) STX_ATOMIC_W: switch (IMM) { ATOMIC(BPF_ADD, add) - + ATOMIC(BPF_AND, and) + ATOMIC(BPF_OR, or) + ATOMIC(BPF_XOR, xor) +#undef ATOMIC case BPF_XCHG: if (BPF_SIZE(insn->code) == BPF_W) SRC = (u32) atomic_xchg(

diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c index 18357ea9a17d..0c7c1c31a57b 100644 --- a/kernel/bpf/disasm.c +++ b/kernel/bpf/disasm.c @@ -80,6 +80,13 @@ const char *const bpf_alu_string[16] = { [BPF_END >> 4] = "endian", }; +static const char *const bpf_atomic_alu_string[16] = { + [BPF_ADD >> 4] = "add", + [BPF_AND >> 4] = "and", + [BPF_OR >> 4] = "or", + [BPF_XOR >> 4] = "xor", +}; + static const char *const bpf_ldst_string[] = { [BPF_W >> 3] = "u32", [BPF_H >> 3] = "u16", @@ -154,17 +161,23 @@ void print_bpf_insn(const struct bpf_insn_cbs
*cbs, insn->dst_reg, insn->off, insn->src_reg); else if (BPF_MODE(insn->code) == BPF_ATOMIC && - insn->imm == BPF_ADD) { - verbose(cbs->private_data, "(%02x) lock *(%s *)(r%d %+d) += r%d\n", + (insn->imm == BPF_ADD || insn->imm == BPF_AND || + insn->imm == BPF_OR || insn->imm == BPF_XOR)) { + verbose(cbs->private_data, "(%02x) lock *(%s *)(r%d %+d) %s r%d\n", insn->code, bpf_ldst_string[BPF_SIZE(insn->code) >> 3], insn->dst_reg, insn->off, + bpf_alu_string[BPF_OP(insn->imm) >> 4], insn->src_reg); } else if (BPF_MODE(insn->code) == BPF_ATOMIC && - insn->imm == (BPF_ADD | BPF_FETCH)) { - verbose(cbs->private_data, "(%02x) r%d = atomic%s_fetch_add(*(%s *)(r%d %+d), r%d)\n", + (insn->imm == (BPF_ADD | BPF_FETCH) || + insn->imm == (BPF_AND | BPF_FETCH) || + insn->imm == (BPF_OR | BPF_FETCH) || + insn->imm == (BPF_XOR | BPF_FETCH))) { + verbose(cbs->private_data, "(%02x) r%d = atomic%s_fetch_%s(*(%s *)(r%d %+d), r%d)\n", insn->code, insn->src_reg, BPF_SIZE(insn->code) == BPF_DW ? "64" : "", + bpf_atomic_alu_string[BPF_OP(insn->imm) >> 4], bpf_ldst_string[BPF_SIZE(insn->code) >> 3], insn->dst_reg, insn->off, insn->src_reg); } else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
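For example, a BPF_ATOMIC_FETCH_AND(BPF_DW, BPF_REG_2, BPF_REG_1, 0) instruction would now be rendered by the fetch branch above as (illustrative output; 0xdb is BPF_STX | BPF_DW | BPF_ATOMIC):

(db) r1 = atomic64_fetch_and(*(u64 *)(r2 +0), r1)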
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index ccf4315e54e7..dd30eb9a6c1b 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -3606,6 +3606,12 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i switch (insn->imm) { case BPF_ADD: case BPF_ADD | BPF_FETCH: + case BPF_AND: + case BPF_AND | BPF_FETCH: + case BPF_OR: + case BPF_OR | BPF_FETCH: + case BPF_XOR: + case BPF_XOR | BPF_FETCH: case BPF_XCHG: case BPF_CMPXCHG: break;

diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h index ea99bd17d003..b74febf83eb1 100644 --- a/tools/include/linux/filter.h +++ b/tools/include/linux/filter.h @@ -190,6 +190,66 @@ .off = OFF, \ .imm = BPF_ADD | BPF_FETCH }) +/* Atomic memory and, *(uint *)(dst_reg + off16) &= src_reg */ + +#define BPF_ATOMIC_AND(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_AND }) + +/* Atomic memory and with fetch, src_reg = atomic_fetch_and(*(dst_reg + off), src_reg); */ + +#define BPF_ATOMIC_FETCH_AND(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_AND | BPF_FETCH }) + +/* Atomic memory or, *(uint *)(dst_reg + off16) |= src_reg */ + +#define BPF_ATOMIC_OR(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_OR }) + +/* Atomic memory or with fetch, src_reg = atomic_fetch_or(*(dst_reg + off), src_reg); */ + +#define BPF_ATOMIC_FETCH_OR(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_OR | BPF_FETCH }) + +/* Atomic memory xor, *(uint *)(dst_reg + off16) ^= src_reg */ + +#define BPF_ATOMIC_XOR(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_XOR }) + +/* Atomic memory xor with fetch, src_reg = atomic_fetch_xor(*(dst_reg + off), src_reg); */ + +#define BPF_ATOMIC_FETCH_XOR(SIZE, DST, SRC, OFF) \ + ((struct bpf_insn) { \ + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \ + .dst_reg = DST, \ + .src_reg = SRC, \ + .off = OFF, \ + .imm = BPF_XOR | BPF_FETCH }) + /* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */ #define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF) \

From patchwork Thu Dec 3 16:02:42 2020 X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 11949065 X-Patchwork-Delegate: bpf@iogearbox.net
k128mr4029962wma.66.1607011412987; Thu, 03 Dec 2020 08:03:32 -0800 (PST) Date: Thu, 3 Dec 2020 16:02:42 +0000 In-Reply-To: <20201203160245.1014867-1-jackmanb@google.com> Message-Id: <20201203160245.1014867-12-jackmanb@google.com> Mime-Version: 1.0 References: <20201203160245.1014867-1-jackmanb@google.com> X-Mailer: git-send-email 2.29.2.454.gaff20da3a2-goog Subject: [PATCH bpf-next v3 11/14] tools build: Implement feature check for BPF atomics in Clang From: Brendan Jackman To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Yonghong Song , Daniel Borkmann , KP Singh , Florent Revest , linux-kernel@vger.kernel.org, Jann Horn , Brendan Jackman , Arnaldo Carvalho de Melo , Jiri Olsa , Quentin Monnet , "Frank Ch. Eigler" , Stephane Eranian , Namhyung Kim , Thomas Hebb Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Change-Id: Ia15bb76f7152fff2974e38242d7430ce2987a71e Cc: Arnaldo Carvalho de Melo Cc: Jiri Olsa Cc: Quentin Monnet Cc: "Frank Ch. Eigler" Cc: Stephane Eranian Cc: Namhyung Kim Cc: Thomas Hebb Change-Id: Ie2c3832eaf050d627764071d1927c7546e7c4b4b Signed-off-by: Brendan Jackman --- tools/build/feature/Makefile | 4 ++++ tools/build/feature/test-clang-bpf-atomics.c | 9 +++++++++ 2 files changed, 13 insertions(+) create mode 100644 tools/build/feature/test-clang-bpf-atomics.c diff --git a/tools/build/feature/Makefile b/tools/build/feature/Makefile index cdde783f3018..81370d7fa193 100644 --- a/tools/build/feature/Makefile +++ b/tools/build/feature/Makefile @@ -70,6 +70,7 @@ FILES= \ test-libaio.bin \ test-libzstd.bin \ test-clang-bpf-co-re.bin \ + test-clang-bpf-atomics.bin \ test-file-handle.bin \ test-libpfm4.bin @@ -331,6 +332,9 @@ $(OUTPUT)test-clang-bpf-co-re.bin: $(CLANG) -S -g -target bpf -o - $(patsubst %.bin,%.c,$(@F)) | \ grep BTF_KIND_VAR +$(OUTPUT)test-clang-bpf-atomics.bin: + $(CLANG) -S -g -target bpf -mcpu=v3 -Werror=implicit-function-declaration -o - $(patsubst %.bin,%.c,$(@F)) 2>&1 + $(OUTPUT)test-file-handle.bin: $(BUILD) diff --git a/tools/build/feature/test-clang-bpf-atomics.c b/tools/build/feature/test-clang-bpf-atomics.c new file mode 100644 index 000000000000..8b5fcdd4ba6f --- /dev/null +++ b/tools/build/feature/test-clang-bpf-atomics.c @@ -0,0 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (c) 2020 Google + +int x = 0; + +int foo(void) +{ + return __sync_val_compare_and_swap(&x, 1, 2); +} From patchwork Thu Dec 3 16:02:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Brendan Jackman X-Patchwork-Id: 11949077 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.2 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, URIBL_BLOCKED,USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3E506C4361A for ; Thu, 3 Dec 2020 16:05:51 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DCEC1207AD for ; Thu, 3 Dec 2020 16:05:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731229AbgLCQFS (ORCPT ); Thu, 3 Dec 2020 
11:05:18 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54940 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729272AbgLCQFS (ORCPT ); Thu, 3 Dec 2020 11:05:18 -0500 Received: from mail-qv1-xf49.google.com (mail-qv1-xf49.google.com [IPv6:2607:f8b0:4864:20::f49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 384D5C09424D for ; Thu, 3 Dec 2020 08:03:40 -0800 (PST) Received: by mail-qv1-xf49.google.com with SMTP id 102so1992128qva.0 for ; Thu, 03 Dec 2020 08:03:40 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=IMilI2Z6+aKwuOWDf3tm98A2hDsLcRMs3IV5SkkVoW4=; b=ODl5ipUqgCmdiomJnZEgAey6Nrfb6zLj72eClBJGt7MuFtRy5pI9S0EKEh6PzKF1uw K5pNTv6Fb+aHa650ISo858vH7GAnYZlQd2xkgSf3ijCPjxKQ2p/tfNrvvvIZFcsQrrOl iTCPOoZh5c3T88WCtUnaggkYgqxXXugDiIgY0T+I1V6yVFDBchhqVEW7rjxH0mjI/0qp yIMBBJgJ4E+uzCqkGfHst0vspHJLgXWWZwnqi/dr0LKEB60YHGIcJF0JGFZWV5ZAd3LJ Z6hV6S+MSsJHVSaUCt4FC/qaM19Jc2dg9rzgozKvvZe2sN3IcgpG/2ViAztRQYzn43bD 1zLQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=IMilI2Z6+aKwuOWDf3tm98A2hDsLcRMs3IV5SkkVoW4=; b=cLCNWtjomyLuJXdnleX1UsWmI8EbRNWJ1MRhS4EqZwrHrX6dMdBAUHWzyhfMzGqIj8 tw7lJV22w5udToLIbEEw++Zy8yP0pZ9UjaefAmEHOmpyTMW3r8IhXJ0VTxo40Dl3xFEH M/jrJcHwDxhoRFBjuVJ18z2TLYmmx/Rcw81M71UqF6uoru5zpQZmTBoM/KGtR9Frycn/ ZhJL3ytFH6a/voL7FvaJcGj7vuw26O8VojUb7d0wYT+xHW2gHLKb/luPt1WuYwfKJryS AzwnZR72URt7DHFVpDeLkC1x4OXPDhz7vPvdFacPSpm7uCVOAcHLUB1jBJglQSjJWqUU OEkg== X-Gm-Message-State: AOAM532rlBjTlhNzhSrpYYDxmOjxE0PWeqRnRTQ5ls7Cj4piK3rYgSUX oM/T8li2GVJ5o31U8oDof0UkEqvjTNa8MO1mI+81c87Z0eLkB9Ywfw6xTDBfRR4fo/4bdtDFem0 R+KdYh3+XngjIdkC5mhUrI5q+PjTRwxsAq4Y3gt2Inqf7OE6hsaywUkpDrVVnhFo= X-Google-Smtp-Source: ABdhPJyWcI7r5iOiGtGzuUAxvhLA44h5393hO+egwkMucsytcbRIQyR7e8xwVW3kCOBm5nSQUiWqBkzQW27w7w== Sender: "jackmanb via sendgmr" X-Received: from beeg.c.googlers.com ([fda3:e722:ac3:cc00:28:9cb1:c0a8:11db]) (user=jackmanb job=sendgmr) by 2002:a0c:f809:: with SMTP id r9mr3775595qvn.17.1607011415132; Thu, 03 Dec 2020 08:03:35 -0800 (PST) Date: Thu, 3 Dec 2020 16:02:43 +0000 In-Reply-To: <20201203160245.1014867-1-jackmanb@google.com> Message-Id: <20201203160245.1014867-13-jackmanb@google.com> Mime-Version: 1.0 References: <20201203160245.1014867-1-jackmanb@google.com> X-Mailer: git-send-email 2.29.2.454.gaff20da3a2-goog Subject: [PATCH bpf-next v3 12/14] bpf: Pull tools/build/feature biz into selftests Makefile From: Brendan Jackman To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Yonghong Song , Daniel Borkmann , KP Singh , Florent Revest , linux-kernel@vger.kernel.org, Jann Horn , Brendan Jackman Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net This is somewhat cargo-culted from the libbpf build. It will be used in a subsequent patch to query for Clang BPF atomics support. 
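To sketch the intended consumption (the actual wiring lands in a later
patch of this series; it is not part of this patch): after a Makefile
includes tools/build/Makefile.feature with clang-bpf-atomics listed in
FEATURE_TESTS, the detection result is exposed as
$(feature-clang-bpf-atomics) and can gate a define, roughly:

  ifeq ($(feature-clang-bpf-atomics),1)
    # Clang can compile BPF atomics: enable the code that uses them.
    CFLAGS += -DENABLE_ATOMICS_TESTS
  endif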
Change-Id: I9318a1702170eb752acced35acbb33f45126c44c
Signed-off-by: Brendan Jackman
---
 tools/testing/selftests/bpf/.gitignore |  1 +
 tools/testing/selftests/bpf/Makefile   | 38 ++++++++++++++++++++++++++
 2 files changed, 39 insertions(+)

diff --git a/tools/testing/selftests/bpf/.gitignore b/tools/testing/selftests/bpf/.gitignore
index 395ae040ce1f..3c604dff1e20 100644
--- a/tools/testing/selftests/bpf/.gitignore
+++ b/tools/testing/selftests/bpf/.gitignore
@@ -35,3 +35,4 @@ test_cpp
 /tools
 /runqslower
 /bench
+/FEATURE-DUMP.selftests.bpf

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 894192c319fb..f21c4841a612 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -104,8 +104,46 @@ OVERRIDE_TARGETS := 1
 override define CLEAN
 	$(call msg,CLEAN)
 	$(Q)$(RM) -r $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES) $(EXTRA_CLEAN)
+	$(Q)$(RM) $(OUTPUT)/FEATURE-DUMP.selftests.bpf
 endef

+# This will work when bpf is built in tools env. where srctree
+# isn't set and when invoked from selftests build, where srctree
+# is set to ".". building_out_of_srctree is undefined for in srctree
+# builds
+ifeq ($(srctree),)
+update_srctree := 1
+endif
+ifdef building_out_of_srctree
+update_srctree := 1
+endif
+ifeq ($(update_srctree),1)
+srctree := $(patsubst %/,%,$(dir $(CURDIR)))
+srctree := $(patsubst %/,%,$(dir $(srctree)))
+srctree := $(patsubst %/,%,$(dir $(srctree)))
+srctree := $(patsubst %/,%,$(dir $(srctree)))
+endif
+
+FEATURE_USER = .selftests.bpf
+FEATURE_TESTS = clang-bpf-atomics
+FEATURE_DISPLAY = clang-bpf-atomics
+
+check_feat := 1
+NON_CHECK_FEAT_TARGETS := clean
+ifdef MAKECMDGOALS
+ifeq ($(filter-out $(NON_CHECK_FEAT_TARGETS),$(MAKECMDGOALS)),)
+  check_feat := 0
+endif
+endif
+
+ifeq ($(check_feat),1)
+ifeq ($(FEATURES_DUMP),)
+include $(srctree)/tools/build/Makefile.feature
+else
+include $(FEATURES_DUMP)
+endif
+endif
+
 include ../lib.mk

 SCRATCH_DIR := $(OUTPUT)/tools

From patchwork Thu Dec 3 16:02:44 2020
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 11949073
X-Patchwork-Delegate: bpf@iogearbox.net
Date: Thu, 3 Dec 2020 16:02:44 +0000
In-Reply-To: <20201203160245.1014867-1-jackmanb@google.com>
References: <20201203160245.1014867-1-jackmanb@google.com>
Message-Id: <20201203160245.1014867-14-jackmanb@google.com>
Subject: [PATCH bpf-next v3 13/14] bpf: Add tests for new BPF atomic operations
From: Brendan Jackman
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann, KP Singh,
 Florent Revest, linux-kernel@vger.kernel.org, Jann Horn, Brendan Jackman

This relies on the work done by Yonghong Song in
https://reviews.llvm.org/D72184

Note the use of a define called ENABLE_ATOMICS_TESTS: this is used to:

 - Avoid breaking the build for people on old versions of Clang
 - Avoid needing separate lists of test objects for no_alu32, where
   atomics are not supported even if Clang has the feature.

The atomics_test.o BPF object is built unconditionally both for
test_progs and test_progs-no_alu32. For test_progs, if Clang supports
atomics, ENABLE_ATOMICS_TESTS is defined, so it includes the proper test
code. Otherwise, progs and global vars are defined anyway, as stubs;
this means that the skeleton user code still builds.

The atomics_test.o userspace object is built once and used for both
test_progs and test_progs-no_alu32. A variable called skip_tests is
defined in the BPF object's data section, which tells the userspace
object whether to skip the atomics test.
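Condensed, the stub arrangement inside the BPF object is just this (a
sketch of the code added below, not an additional interface):

  #ifdef ENABLE_ATOMICS_TESTS
  bool skip_tests __attribute((__section__(".data"))) = false;
  #else
  bool skip_tests = true; /* stub build: tell userspace to skip */
  #endif

The userspace side reads skel->data->skip_tests after loading the
skeleton and calls test__skip() when it is set.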
Change-Id: Iecc12f35f0ded4a1dd805cce1be576e7b27917ef
Signed-off-by: Brendan Jackman
---
 tools/testing/selftests/bpf/Makefile          |   4 +
 .../selftests/bpf/prog_tests/atomics_test.c   | 262 ++++++++++++++++++
 .../selftests/bpf/progs/atomics_test.c        | 154 ++++++++++
 .../selftests/bpf/verifier/atomic_and.c       |  77 +++++
 .../selftests/bpf/verifier/atomic_cmpxchg.c   |  96 +++++++
 .../selftests/bpf/verifier/atomic_fetch_add.c | 106 +++++++
 .../selftests/bpf/verifier/atomic_or.c        |  77 +++++
 .../selftests/bpf/verifier/atomic_xchg.c      |  46 +++
 .../selftests/bpf/verifier/atomic_xor.c       |  77 +++++
 9 files changed, 899 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/atomics_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/atomics_test.c
 create mode 100644 tools/testing/selftests/bpf/verifier/atomic_and.c
 create mode 100644 tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
 create mode 100644 tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
 create mode 100644 tools/testing/selftests/bpf/verifier/atomic_or.c
 create mode 100644 tools/testing/selftests/bpf/verifier/atomic_xchg.c
 create mode 100644 tools/testing/selftests/bpf/verifier/atomic_xor.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index f21c4841a612..448a9eb1a56c 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -431,11 +431,15 @@ TRUNNER_EXTRA_FILES := $(OUTPUT)/urandom_read \
 		       $(wildcard progs/btf_dump_test_case_*.c)
 TRUNNER_BPF_BUILD_RULE := CLANG_BPF_BUILD_RULE
 TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS)
+ifeq ($(feature-clang-bpf-atomics),1)
+  TRUNNER_BPF_CFLAGS += -DENABLE_ATOMICS_TESTS
+endif
 TRUNNER_BPF_LDFLAGS := -mattr=+alu32
 $(eval $(call DEFINE_TEST_RUNNER,test_progs))

 # Define test_progs-no_alu32 test runner.
 TRUNNER_BPF_BUILD_RULE := CLANG_NOALU32_BPF_BUILD_RULE
+TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS)
 TRUNNER_BPF_LDFLAGS :=
 $(eval $(call DEFINE_TEST_RUNNER,test_progs,no_alu32))

diff --git a/tools/testing/selftests/bpf/prog_tests/atomics_test.c b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
new file mode 100644
index 000000000000..66f0ccf4f4ec
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
@@ -0,0 +1,262 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <test_progs.h>
+
+
+#include "atomics_test.skel.h"
+
+static struct atomics_test *setup(void)
+{
+	struct atomics_test *atomics_skel;
+	__u32 duration = 0, err;
+
+	atomics_skel = atomics_test__open_and_load();
+	if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n"))
+		return NULL;
+
+	if (atomics_skel->data->skip_tests) {
+		printf("%s:SKIP:no ENABLE_ATOMICS_TESTS (missing Clang BPF atomics support)",
+		       __func__);
+		test__skip();
+		goto err;
+	}
+
+	err = atomics_test__attach(atomics_skel);
+	if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err))
+		goto err;
+
+	return atomics_skel;
+
+err:
+	atomics_test__destroy(atomics_skel);
+	return NULL;
+}
+
+static void test_add(void)
+{
+	struct atomics_test *atomics_skel;
+	int err, prog_fd;
+	__u32 duration = 0, retval;
+
+	atomics_skel = setup();
+	if (!atomics_skel)
+		return;
+
+	prog_fd = bpf_program__fd(atomics_skel->progs.add);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	if (CHECK(err || retval, "test_run add",
+		  "err %d errno %d retval %d duration %d\n",
+		  err, errno, retval, duration))
+		goto cleanup;
+
+	ASSERT_EQ(atomics_skel->data->add64_value, 3, "add64_value");
+	ASSERT_EQ(atomics_skel->bss->add64_result, 1, "add64_result");
+
+	ASSERT_EQ(atomics_skel->data->add32_value, 3, "add32_value");
+	ASSERT_EQ(atomics_skel->bss->add32_result, 1, "add32_result");
+
+	ASSERT_EQ(atomics_skel->bss->add_stack_value_copy, 3, "add_stack_value");
+	ASSERT_EQ(atomics_skel->bss->add_stack_result, 1, "add_stack_result");
+
+	ASSERT_EQ(atomics_skel->data->add_noreturn_value, 3, "add_noreturn_value");
+
+cleanup:
+	atomics_test__destroy(atomics_skel);
+}
+
+static void test_sub(void)
+{
+	struct atomics_test *atomics_skel;
+	int err, prog_fd;
+	__u32 duration = 0, retval;
+
+	atomics_skel = setup();
+	if (!atomics_skel)
+		return;
+
+	prog_fd = bpf_program__fd(atomics_skel->progs.sub);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	if (CHECK(err || retval, "test_run sub",
+		  "err %d errno %d retval %d duration %d\n",
+		  err, errno, retval, duration))
+		goto cleanup;
+
+	ASSERT_EQ(atomics_skel->data->sub64_value, -1, "sub64_value");
+	ASSERT_EQ(atomics_skel->bss->sub64_result, 1, "sub64_result");
+
+	ASSERT_EQ(atomics_skel->data->sub32_value, -1, "sub32_value");
+	ASSERT_EQ(atomics_skel->bss->sub32_result, 1, "sub32_result");
+
+	ASSERT_EQ(atomics_skel->bss->sub_stack_value_copy, -1, "sub_stack_value");
+	ASSERT_EQ(atomics_skel->bss->sub_stack_result, 1, "sub_stack_result");
+
+	ASSERT_EQ(atomics_skel->data->sub_noreturn_value, -1, "sub_noreturn_value");
+
+cleanup:
+	atomics_test__destroy(atomics_skel);
+}
+
+static void test_and(void)
+{
+	struct atomics_test *atomics_skel;
+	int err, prog_fd;
+	__u32 duration = 0, retval;
+
+	atomics_skel = setup();
+	if (!atomics_skel)
+		return;
+
+	prog_fd = bpf_program__fd(atomics_skel->progs.and);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	if (CHECK(err || retval, "test_run and",
+		  "err %d errno %d retval %d duration %d\n",
+		  err, errno, retval, duration))
+		goto cleanup;
+
+	ASSERT_EQ(atomics_skel->data->and64_value, 0x010ull << 32, "and64_value");
+	ASSERT_EQ(atomics_skel->bss->and64_result, 0x110ull << 32, "and64_result");
+
+	ASSERT_EQ(atomics_skel->data->and32_value, 0x010, "and32_value");
+	ASSERT_EQ(atomics_skel->bss->and32_result, 0x110, "and32_result");
+
+	ASSERT_EQ(atomics_skel->data->and_noreturn_value, 0x010ull << 32, "and_noreturn_value");
+cleanup:
+	atomics_test__destroy(atomics_skel);
+}
+
+static void test_or(void)
+{
+	struct atomics_test *atomics_skel;
+	int err, prog_fd;
+	__u32 duration = 0, retval;
+
+	atomics_skel = setup();
+	if (!atomics_skel)
+		return;
+
+	prog_fd = bpf_program__fd(atomics_skel->progs.or);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	if (CHECK(err || retval, "test_run or",
+		  "err %d errno %d retval %d duration %d\n",
+		  err, errno, retval, duration))
+		goto cleanup;
+
+	ASSERT_EQ(atomics_skel->data->or64_value, 0x111ull << 32, "or64_value");
+	ASSERT_EQ(atomics_skel->bss->or64_result, 0x110ull << 32, "or64_result");
+
+	ASSERT_EQ(atomics_skel->data->or32_value, 0x111, "or32_value");
+	ASSERT_EQ(atomics_skel->bss->or32_result, 0x110, "or32_result");
+
+	ASSERT_EQ(atomics_skel->data->or_noreturn_value, 0x111ull << 32, "or_noreturn_value");
+cleanup:
+	atomics_test__destroy(atomics_skel);
+}
+
+static void test_xor(void)
+{
+	struct atomics_test *atomics_skel;
+	int err, prog_fd;
+	__u32 duration = 0, retval;
+
+	atomics_skel = setup();
+	if (!atomics_skel)
+		return;
+
+	prog_fd = bpf_program__fd(atomics_skel->progs.xor);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	if (CHECK(err || retval, "test_run xor",
+		  "err %d errno %d retval %d duration %d\n",
+		  err, errno, retval, duration))
+		goto cleanup;
+
+	ASSERT_EQ(atomics_skel->data->xor64_value, 0x101ull << 32, "xor64_value");
+	ASSERT_EQ(atomics_skel->bss->xor64_result, 0x110ull << 32, "xor64_result");
+
+	ASSERT_EQ(atomics_skel->data->xor32_value, 0x101, "xor32_value");
+	ASSERT_EQ(atomics_skel->bss->xor32_result, 0x110, "xor32_result");
+
+	ASSERT_EQ(atomics_skel->data->xor_noreturn_value, 0x101ull << 32, "xor_noreturn_value");
+cleanup:
+	atomics_test__destroy(atomics_skel);
+}
+
+static void test_cmpxchg(void)
+{
+	struct atomics_test *atomics_skel;
+	int err, prog_fd;
+	__u32 duration = 0, retval;
+
+	atomics_skel = setup();
+	if (!atomics_skel)
+		return;
+
+	prog_fd = bpf_program__fd(atomics_skel->progs.cmpxchg);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	if (CHECK(err || retval, "test_run cmpxchg",
+		  "err %d errno %d retval %d duration %d\n",
+		  err, errno, retval, duration))
+		goto cleanup;
+
+	ASSERT_EQ(atomics_skel->data->cmpxchg64_value, 2, "cmpxchg64_value");
+	ASSERT_EQ(atomics_skel->bss->cmpxchg64_result_fail, 1, "cmpxchg64_result_fail");
+	ASSERT_EQ(atomics_skel->bss->cmpxchg64_result_succeed, 1, "cmpxchg64_result_succeed");
+
+	ASSERT_EQ(atomics_skel->data->cmpxchg32_value, 2, "cmpxchg32_value");
+	ASSERT_EQ(atomics_skel->bss->cmpxchg32_result_fail, 1, "cmpxchg32_result_fail");
+	ASSERT_EQ(atomics_skel->bss->cmpxchg32_result_succeed, 1, "cmpxchg32_result_succeed");
+
+cleanup:
+	atomics_test__destroy(atomics_skel);
+}
+
+static void test_xchg(void)
+{
+	struct atomics_test *atomics_skel;
+	int err, prog_fd;
+	__u32 duration = 0, retval;
+
+	atomics_skel = setup();
+	if (!atomics_skel)
+		return;
+
+	prog_fd = bpf_program__fd(atomics_skel->progs.xchg);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	if (CHECK(err || retval, "test_run xchg",
+		  "err %d errno %d retval %d duration %d\n",
+		  err, errno, retval, duration))
+		goto cleanup;
+
+	ASSERT_EQ(atomics_skel->data->xchg64_value, 2, "xchg64_value");
+	ASSERT_EQ(atomics_skel->bss->xchg64_result, 1, "xchg64_result");
+
+	ASSERT_EQ(atomics_skel->data->xchg32_value, 2, "xchg32_value");
+	ASSERT_EQ(atomics_skel->bss->xchg32_result, 1, "xchg32_result");
+
+cleanup:
+	atomics_test__destroy(atomics_skel);
+}
+
+void test_atomics_test(void)
+{
+	if (test__start_subtest("add"))
+		test_add();
+	if (test__start_subtest("sub"))
+		test_sub();
+	if (test__start_subtest("and"))
+		test_and();
+	if (test__start_subtest("or"))
+		test_or();
+	if (test__start_subtest("xor"))
+		test_xor();
+	if (test__start_subtest("cmpxchg"))
+		test_cmpxchg();
+	if (test__start_subtest("xchg"))
+		test_xchg();
+}

diff --git a/tools/testing/selftests/bpf/progs/atomics_test.c b/tools/testing/selftests/bpf/progs/atomics_test.c
new file mode 100644
index 000000000000..d40c93496843
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/atomics_test.c
@@ -0,0 +1,154 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <stdbool.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+#ifdef ENABLE_ATOMICS_TESTS
+bool skip_tests __attribute((__section__(".data"))) = false;
+#else
+bool skip_tests = true;
+#endif
+
+__u64 add64_value = 1;
+__u64 add64_result = 0;
+__u32 add32_value = 1;
+__u32 add32_result = 0;
+__u64 add_stack_value_copy = 0;
+__u64 add_stack_result = 0;
+__u64 add_noreturn_value = 1;
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(add, int a)
+{
+#ifdef ENABLE_ATOMICS_TESTS
+	__u64 add_stack_value = 1;
+
+	add64_result = __sync_fetch_and_add(&add64_value, 2);
+	add32_result = __sync_fetch_and_add(&add32_value, 2);
+	add_stack_result = __sync_fetch_and_add(&add_stack_value, 2);
+	add_stack_value_copy = add_stack_value;
+	__sync_fetch_and_add(&add_noreturn_value, 2);
+#endif
+
+	return 0;
+}
+
+__s64 sub64_value = 1;
+__s64 sub64_result = 0;
+__s32 sub32_value = 1;
+__s32 sub32_result = 0;
+__s64 sub_stack_value_copy = 0;
+__s64 sub_stack_result = 0;
+__s64 sub_noreturn_value = 1;
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(sub, int a)
+{
+#ifdef ENABLE_ATOMICS_TESTS
+	__u64 sub_stack_value = 1;
+
+	sub64_result = __sync_fetch_and_sub(&sub64_value, 2);
+	sub32_result = __sync_fetch_and_sub(&sub32_value, 2);
+	sub_stack_result = __sync_fetch_and_sub(&sub_stack_value, 2);
+	sub_stack_value_copy = sub_stack_value;
+	__sync_fetch_and_sub(&sub_noreturn_value, 2);
+#endif
+
+	return 0;
+}
+
+__u64 and64_value = (0x110ull << 32);
+__u64 and64_result = 0;
+__u32 and32_value = 0x110;
+__u32 and32_result = 0;
+__u64 and_noreturn_value = (0x110ull << 32);
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(and, int a)
+{
+#ifdef ENABLE_ATOMICS_TESTS
+
+	and64_result = __sync_fetch_and_and(&and64_value, 0x011ull << 32);
+	and32_result = __sync_fetch_and_and(&and32_value, 0x011);
+	__sync_fetch_and_and(&and_noreturn_value, 0x011ull << 32);
+#endif
+
+	return 0;
+}
+
+__u64 or64_value = (0x110ull << 32);
+__u64 or64_result = 0;
+__u32 or32_value = 0x110;
+__u32 or32_result = 0;
+__u64 or_noreturn_value = (0x110ull << 32);
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(or, int a)
+{
+#ifdef ENABLE_ATOMICS_TESTS
+	or64_result = __sync_fetch_and_or(&or64_value, 0x011ull << 32);
+	or32_result = __sync_fetch_and_or(&or32_value, 0x011);
+	__sync_fetch_and_or(&or_noreturn_value, 0x011ull << 32);
+#endif
+
+	return 0;
+}
+
+__u64 xor64_value = (0x110ull << 32);
+__u64 xor64_result = 0;
+__u32 xor32_value = 0x110;
+__u32 xor32_result = 0;
+__u64 xor_noreturn_value = (0x110ull << 32);
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(xor, int a)
+{
+#ifdef ENABLE_ATOMICS_TESTS
+	xor64_result = __sync_fetch_and_xor(&xor64_value, 0x011ull << 32);
+	xor32_result = __sync_fetch_and_xor(&xor32_value, 0x011);
+	__sync_fetch_and_xor(&xor_noreturn_value, 0x011ull << 32);
+#endif
+
+	return 0;
+}
+
+__u64 cmpxchg64_value = 1;
+__u64 cmpxchg64_result_fail = 0;
+__u64 cmpxchg64_result_succeed = 0;
+__u32 cmpxchg32_value = 1;
+__u32 cmpxchg32_result_fail = 0;
+__u32 cmpxchg32_result_succeed = 0;
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(cmpxchg, int a)
+{
+#ifdef ENABLE_ATOMICS_TESTS
+	cmpxchg64_result_fail = __sync_val_compare_and_swap(&cmpxchg64_value, 0, 3);
+	cmpxchg64_result_succeed = __sync_val_compare_and_swap(&cmpxchg64_value, 1, 2);
+
+	cmpxchg32_result_fail = __sync_val_compare_and_swap(&cmpxchg32_value, 0, 3);
+	cmpxchg32_result_succeed = __sync_val_compare_and_swap(&cmpxchg32_value, 1, 2);
+#endif
+
+	return 0;
+}
+
+__u64 xchg64_value = 1;
+__u64 xchg64_result = 0;
+__u32 xchg32_value = 1;
+__u32 xchg32_result = 0;
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(xchg, int a)
+{
+#ifdef ENABLE_ATOMICS_TESTS
+	__u64 val64 = 2;
+	__u32 val32 = 2;
+
+	__atomic_exchange(&xchg64_value, &val64, &xchg64_result, __ATOMIC_RELAXED);
+	__atomic_exchange(&xchg32_value, &val32, &xchg32_result, __ATOMIC_RELAXED);
+#endif
+
+	return 0;
+}

diff --git a/tools/testing/selftests/bpf/verifier/atomic_and.c b/tools/testing/selftests/bpf/verifier/atomic_and.c
new file mode 100644
index 000000000000..7eea6d9dfd7d
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_and.c
@@ -0,0 +1,77 @@
+{
+	"BPF_ATOMIC_AND without fetch",
+	.insns = {
+	/* val = 0x110; */
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110),
+	/* atomic_and(&val, 0x011); */
+	BPF_MOV64_IMM(BPF_REG_1, 0x011),
+	BPF_ATOMIC_AND(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+	/* if (val != 0x010) exit(2); */
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0x010, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	/* r1 should not be clobbered, no BPF_FETCH flag */
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x011, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC_AND with fetch",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 123),
+	/* val = 0x110; */
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110),
+	/* old = atomic_fetch_and(&val, 0x011); */
+	BPF_MOV64_IMM(BPF_REG_1, 0x011),
+	BPF_ATOMIC_FETCH_AND(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+	/* if (old != 0x110) exit(3); */
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 3),
+	BPF_EXIT_INSN(),
+	/* if (val != 0x010) exit(2); */
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x010, 2),
+	BPF_MOV64_IMM(BPF_REG_1, 2),
+	BPF_EXIT_INSN(),
+	/* Check R0 wasn't clobbered (for fear of x86 JIT bug) */
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 123, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	/* exit(0); */
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC_AND with fetch 32bit",
+	.insns = {
+	/* r0 = (s64) -1 */
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 1),
+	/* val = 0x110; */
+	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0x110),
+	/* old = atomic_fetch_and(&val, 0x011); */
+	BPF_MOV32_IMM(BPF_REG_1, 0x011),
+	BPF_ATOMIC_FETCH_AND(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+	/* if (old != 0x110) exit(3); */
+	BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
+	BPF_MOV32_IMM(BPF_REG_0, 3),
+	BPF_EXIT_INSN(),
+	/* if (val != 0x010) exit(2); */
+	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -4),
+	BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x010, 2),
+	BPF_MOV32_IMM(BPF_REG_1, 2),
+	BPF_EXIT_INSN(),
+	/* Check R0 wasn't clobbered (for fear of x86 JIT bug)
+	 * It should be -1 so add 1 to get exit code.
+	 */
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},

diff --git a/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c b/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
new file mode 100644
index 000000000000..335e12690be7
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
@@ -0,0 +1,96 @@
+{
+	"atomic compare-and-exchange smoketest - 64bit",
+	.insns = {
+	/* val = 3; */
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+	/* old = atomic_cmpxchg(&val, 2, 4); */
+	BPF_MOV64_IMM(BPF_REG_1, 4),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+	/* if (old != 3) exit(2); */
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	/* if (val != 3) exit(3); */
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 3),
+	BPF_EXIT_INSN(),
+	/* old = atomic_cmpxchg(&val, 3, 4); */
+	BPF_MOV64_IMM(BPF_REG_1, 4),
+	BPF_MOV64_IMM(BPF_REG_0, 3),
+	BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+	/* if (old != 3) exit(4); */
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 4),
+	BPF_EXIT_INSN(),
+	/* if (val != 4) exit(5); */
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 5),
+	BPF_EXIT_INSN(),
+	/* exit(0); */
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"atomic compare-and-exchange smoketest - 32bit",
+	.insns = {
+	/* val = 3; */
+	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 3),
+	/* old = atomic_cmpxchg(&val, 2, 4); */
+	BPF_MOV32_IMM(BPF_REG_1, 4),
+	BPF_MOV32_IMM(BPF_REG_0, 2),
+	BPF_ATOMIC_CMPXCHG(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+	/* if (old != 3) exit(2); */
+	BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+	BPF_MOV32_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	/* if (val != 3) exit(3); */
+	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -4),
+	BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+	BPF_MOV32_IMM(BPF_REG_0, 3),
+	BPF_EXIT_INSN(),
+	/* old = atomic_cmpxchg(&val, 3, 4); */
+	BPF_MOV32_IMM(BPF_REG_1, 4),
+	BPF_MOV32_IMM(BPF_REG_0, 3),
+	BPF_ATOMIC_CMPXCHG(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+	/* if (old != 3) exit(4); */
+	BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+	BPF_MOV32_IMM(BPF_REG_0, 4),
+	BPF_EXIT_INSN(),
+	/* if (val != 4) exit(5); */
+	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -4),
+	BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
+	BPF_MOV32_IMM(BPF_REG_0, 5),
+	BPF_EXIT_INSN(),
+	/* exit(0); */
+	BPF_MOV32_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"Can't use cmpxchg on uninit src reg",
+	.insns = {
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+	BPF_MOV64_IMM(BPF_REG_0, 3),
+	BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_2, -8),
+	BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	.errstr = "!read_ok",
+},
+{
+	"Can't use cmpxchg on uninit memory",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 3),
+	BPF_MOV64_IMM(BPF_REG_2, 4),
+	BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_2, -8),
+	BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	.errstr = "invalid read from stack",
+},

diff --git a/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c b/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
new file mode 100644
index 000000000000..7c87bc9a13de
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
@@ -0,0 +1,106 @@
+{
+	"BPF_ATOMIC_FETCH_ADD smoketest - 64bit",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	/* Write 3 to stack */
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+	/* Put a 1 in R1, add it to the 3 on the stack, and load the value back into R1 */
+	BPF_MOV64_IMM(BPF_REG_1, 1),
+	BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+	/* Check the value we loaded back was 3 */
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	/* Load value from stack */
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8),
+	/* Check value loaded from stack was 4 */
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 4, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC_FETCH_ADD smoketest - 32bit",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	/* Write 3 to stack */
+	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 3),
+	/* Put a 1 in R1, add it to the 3 on the stack, and load the value back into R1 */
+	BPF_MOV32_IMM(BPF_REG_1, 1),
+	BPF_ATOMIC_FETCH_ADD(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+	/* Check the value we loaded back was 3 */
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	/* Load value from stack */
+	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -4),
+	/* Check value loaded from stack was 4 */
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 4, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"Can't use ATM_FETCH_ADD on frame pointer",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+	BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_10, BPF_REG_10, -8),
+	BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	.errstr_unpriv = "R10 leaks addr into mem",
+	.errstr = "frame pointer is read only",
+},
+{
+	"Can't use ATM_FETCH_ADD on uninit src reg",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+	BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_10, BPF_REG_2, -8),
+	BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	/* It happens that the address leak check is first, but it would also
+	 * complain about the fact that we're trying to modify R10.
+	 */
+	.errstr = "!read_ok",
+},
+{
+	"Can't use ATM_FETCH_ADD on uninit dst reg",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_2, BPF_REG_0, -8),
+	BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	/* It happens that the address leak check is first, but it would also
+	 * complain about the fact that we're trying to modify R10.
+	 */
+	.errstr = "!read_ok",
+},
+{
+	"Can't use ATM_FETCH_ADD on kernel memory",
+	.insns = {
+	/* This is an fentry prog, context is array of the args of the
+	 * kernel function being called. Load first arg into R2.
+	 */
+	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, 0),
+	/* First arg of bpf_fentry_test7 is a pointer to a struct.
+	 * Attempt to modify that struct. Verifier shouldn't let us
+	 * because it's kernel memory.
+	 */
+	BPF_MOV64_IMM(BPF_REG_3, 1),
+	BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_2, BPF_REG_3, 0),
+	/* Done */
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.prog_type = BPF_PROG_TYPE_TRACING,
+	.expected_attach_type = BPF_TRACE_FENTRY,
+	.kfunc = "bpf_fentry_test7",
+	.result = REJECT,
+	.errstr = "only read is supported",
+},

diff --git a/tools/testing/selftests/bpf/verifier/atomic_or.c b/tools/testing/selftests/bpf/verifier/atomic_or.c
new file mode 100644
index 000000000000..1b22fb2881f0
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_or.c
@@ -0,0 +1,77 @@
+{
+	"BPF_ATOMIC_OR without fetch",
+	.insns = {
+	/* val = 0x110; */
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110),
+	/* atomic_or(&val, 0x011); */
+	BPF_MOV64_IMM(BPF_REG_1, 0x011),
+	BPF_ATOMIC_OR(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+	/* if (val != 0x111) exit(2); */
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0x111, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	/* r1 should not be clobbered, no BPF_FETCH flag */
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x011, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC_OR with fetch",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 123),
+	/* val = 0x110; */
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110),
+	/* old = atomic_fetch_or(&val, 0x011); */
+	BPF_MOV64_IMM(BPF_REG_1, 0x011),
+	BPF_ATOMIC_FETCH_OR(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+	/* if (old != 0x110) exit(3); */
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 3),
+	BPF_EXIT_INSN(),
+	/* if (val != 0x111) exit(2); */
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x111, 2),
+	BPF_MOV64_IMM(BPF_REG_1, 2),
+	BPF_EXIT_INSN(),
+	/* Check R0 wasn't clobbered (for fear of x86 JIT bug) */
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 123, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	/* exit(0); */
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC_OR with fetch 32bit",
+	.insns = {
+	/* r0 = (s64) -1 */
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 1),
+	/* val = 0x110; */
+	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0x110),
+	/* old = atomic_fetch_or(&val, 0x011); */
+	BPF_MOV32_IMM(BPF_REG_1, 0x011),
+	BPF_ATOMIC_FETCH_OR(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+	/* if (old != 0x110) exit(3); */
+	BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
+	BPF_MOV32_IMM(BPF_REG_0, 3),
+	BPF_EXIT_INSN(),
+	/* if (val != 0x111) exit(2); */
+	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -4),
+	BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x111, 2),
+	BPF_MOV32_IMM(BPF_REG_1, 2),
+	BPF_EXIT_INSN(),
+	/* Check R0 wasn't clobbered (for fear of x86 JIT bug)
+	 * It should be -1 so add 1 to get exit code.
+	 */
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},

diff --git a/tools/testing/selftests/bpf/verifier/atomic_xchg.c b/tools/testing/selftests/bpf/verifier/atomic_xchg.c
new file mode 100644
index 000000000000..9348ac490e24
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_xchg.c
@@ -0,0 +1,46 @@
+{
+	"atomic exchange smoketest - 64bit",
+	.insns = {
+	/* val = 3; */
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+	/* old = atomic_xchg(&val, 4); */
+	BPF_MOV64_IMM(BPF_REG_1, 4),
+	BPF_ATOMIC_XCHG(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+	/* if (old != 3) exit(1); */
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	/* if (val != 4) exit(2); */
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	/* exit(0); */
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"atomic exchange smoketest - 32bit",
+	.insns = {
+	/* val = 3; */
+	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 3),
+	/* old = atomic_xchg(&val, 4); */
+	BPF_MOV32_IMM(BPF_REG_1, 4),
+	BPF_ATOMIC_XCHG(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+	/* if (old != 3) exit(1); */
+	BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
+	BPF_MOV32_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	/* if (val != 4) exit(2); */
+	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -4),
+	BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
+	BPF_MOV32_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	/* exit(0); */
+	BPF_MOV32_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},

diff --git a/tools/testing/selftests/bpf/verifier/atomic_xor.c b/tools/testing/selftests/bpf/verifier/atomic_xor.c
new file mode 100644
index 000000000000..d1315419a3a8
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_xor.c
@@ -0,0 +1,77 @@
+{
+	"BPF_ATOMIC_XOR without fetch",
+	.insns = {
+	/* val = 0x110; */
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110),
+	/* atomic_xor(&val, 0x011); */
+	BPF_MOV64_IMM(BPF_REG_1, 0x011),
+	BPF_ATOMIC_XOR(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+	/* if (val != 0x101) exit(2); */
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0x101, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 2),
+	BPF_EXIT_INSN(),
+	/* r1 should not be clobbered, no BPF_FETCH flag */
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x011, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC_XOR with fetch",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 123),
+	/* val = 0x110; */
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110),
+	/* old = atomic_fetch_xor(&val, 0x011); */
+	BPF_MOV64_IMM(BPF_REG_1, 0x011),
+	BPF_ATOMIC_FETCH_XOR(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+	/* if (old != 0x110) exit(3); */
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 3),
+	BPF_EXIT_INSN(),
+	/* if (val != 0x101) exit(2); */
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x101, 2),
+	BPF_MOV64_IMM(BPF_REG_1, 2),
+	BPF_EXIT_INSN(),
+	/* Check R0 wasn't clobbered (for fear of x86 JIT bug) */
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 123, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	/* exit(0); */
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC_XOR with fetch 32bit",
+	.insns = {
+	/* r0 = (s64) -1 */
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 1),
+	/* val = 0x110; */
+	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0x110),
+	/* old = atomic_fetch_xor(&val, 0x011); */
+	BPF_MOV32_IMM(BPF_REG_1, 0x011),
+	BPF_ATOMIC_FETCH_XOR(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+	/* if (old != 0x110) exit(3); */
+	BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
+	BPF_MOV32_IMM(BPF_REG_0, 3),
+	BPF_EXIT_INSN(),
+	/* if (val != 0x101) exit(2); */
+	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -4),
+	BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x101, 2),
+	BPF_MOV32_IMM(BPF_REG_1, 2),
+	BPF_EXIT_INSN(),
+	/* Check R0 wasn't clobbered (for fear of x86 JIT bug)
+	 * It should be -1 so add 1 to get exit code.
+	 */
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},

From patchwork Thu Dec 3 16:02:45 2020
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 11949079
X-Patchwork-Delegate: bpf@iogearbox.net
Date: Thu, 3 Dec 2020 16:02:45 +0000
In-Reply-To: <20201203160245.1014867-1-jackmanb@google.com>
References: <20201203160245.1014867-1-jackmanb@google.com>
Message-Id: <20201203160245.1014867-15-jackmanb@google.com>
Subject: [PATCH bpf-next v3 14/14] bpf: Document new atomic instructions
From: Brendan Jackman
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann, KP Singh,
 Florent Revest, linux-kernel@vger.kernel.org, Jann Horn, Brendan Jackman

Change-Id: Ic70fe9e3cb4403df4eb3be2ea5ae5af53156559e
Signed-off-by: Brendan Jackman
---
 Documentation/networking/filter.rst | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/Documentation/networking/filter.rst b/Documentation/networking/filter.rst
index 1583d59d806d..26d508a5e038 100644
--- a/Documentation/networking/filter.rst
+++ b/Documentation/networking/filter.rst
@@ -1053,6 +1053,32 @@ encoding.
   .imm = BPF_ADD, .code = BPF_ATOMIC | BPF_W  | BPF_STX: lock xadd *(u32 *)(dst_reg + off16) += src_reg
   .imm = BPF_ADD, .code = BPF_ATOMIC | BPF_DW | BPF_STX: lock xadd *(u64 *)(dst_reg + off16) += src_reg

+The basic atomic operations supported (from architecture v4 onwards) are:
+
+    BPF_ADD
+    BPF_AND
+    BPF_OR
+    BPF_XOR
+
+Each of these has the same semantics as the ``BPF_ADD`` example above, that
+is: the memory location addressed by ``dst_reg + off`` is atomically modified,
+with ``src_reg`` as the other operand. If the ``BPF_FETCH`` flag is set in the
+immediate, then these operations also overwrite ``src_reg`` with the value
+that was in memory before it was modified.
+
+The more special operations are:
+
+    BPF_XCHG
+
+This atomically exchanges ``src_reg`` with the value addressed by ``dst_reg +
+off``.
+
+    BPF_CMPXCHG
+
+This atomically compares the value addressed by ``dst_reg + off`` with ``R0``.
+If they match, it is replaced with ``src_reg``. The value that was there
+before is loaded back to ``R0``.
+
 Note that 1 and 2 byte atomic operations are not supported.

 You may encounter BPF_XADD - this is a legacy name for BPF_ATOMIC, referring to