[bpf-next,v3,13/14] bpf: Add tests for new BPF atomic operations

Message ID 20201203160245.1014867-14-jackmanb@google.com (mailing list archive)
State Superseded
Delegated to: BPF
Series Atomics for eBPF

Checks

Context Check Description
netdev/cover_letter success Link
netdev/fixes_present success Link
netdev/patch_count success Link
netdev/tree_selection success Clearly marked for bpf-next
netdev/subject_prefix success Link
netdev/source_inline success Was 0 now: 0
netdev/verify_signedoff success Link
netdev/module_param success Was 0 now: 0
netdev/build_32bit success Errors and warnings before: 0 this patch: 0
netdev/kdoc success Errors and warnings before: 8 this patch: 0
netdev/verify_fixes success Link
netdev/checkpatch fail CHECK: Please don't use multiple blank lines
                       ERROR: Remove Gerrit Change-Id's before submitting upstream
                       ERROR: Unrecognized email address: ''
                       ERROR: do not initialise globals to 0
                       WARNING: Missing or malformed SPDX-License-Identifier tag in line 1
                       WARNING: Use a single space after To:
                       WARNING: added, moved or deleted file(s), does MAINTAINERS need updating?
                       WARNING: line length of 81 exceeds 80 columns
                       WARNING: line length of 82 exceeds 80 columns
                       WARNING: line length of 83 exceeds 80 columns
                       WARNING: line length of 84 exceeds 80 columns
                       WARNING: line length of 86 exceeds 80 columns
                       WARNING: line length of 87 exceeds 80 columns
                       WARNING: line length of 92 exceeds 80 columns
                       WARNING: line length of 94 exceeds 80 columns
                       WARNING: line length of 96 exceeds 80 columns
                       WARNING: line length of 97 exceeds 80 columns
                       WARNING: line length of 98 exceeds 80 columns
netdev/build_allmodconfig_warn success Errors and warnings before: 0 this patch: 0
netdev/header_inline success Link
netdev/stable success Stable not CCed

Commit Message

Brendan Jackman Dec. 3, 2020, 4:02 p.m. UTC
This relies on the work done by Yonghong Song in
https://reviews.llvm.org/D72184

Note the use of a define called ENABLE_ATOMICS_TESTS: this is used
to:

 - Avoid breaking the build for people on old versions of Clang
 - Avoid needing separate lists of test objects for no_alu32, where
   atomics are not supported even if Clang has the feature.

The atomics_test.o BPF object is built unconditionally both for
test_progs and test_progs-no_alu32. For test_progs, if Clang supports
atomics, ENABLE_ATOMICS_TESTS is defined, so it includes the proper
test code. Otherwise, progs and global vars are defined anyway, as
stubs; this means that the skeleton user code still builds.

The atomics_test.o userspace object is built once and used for both
test_progs and test_progs-no_alu32. A variable called skip_tests is
defined in the BPF object's data section, which tells the userspace
object whether to skip the atomics test.
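
Condensed, the pattern looks roughly like this (see the full patch below
for the real thing):

  /* BPF side (progs/atomics_test.c) */
  #ifdef ENABLE_ATOMICS_TESTS
  bool skip_tests __attribute((__section__(".data"))) = false;
  #else
  bool skip_tests = true; /* stubs only: Clang lacks BPF atomics */
  #endif

  /* userspace side (prog_tests/atomics_test.c) */
  if (atomics_skel->data->skip_tests) {
          test__skip();
          ...
  }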

Change-Id: Iecc12f35f0ded4a1dd805cce1be576e7b27917ef
Signed-off-by: Brendan Jackman <jackmanb@google.com>
---
 tools/testing/selftests/bpf/Makefile          |   4 +
 .../selftests/bpf/prog_tests/atomics_test.c   | 262 ++++++++++++++++++
 .../selftests/bpf/progs/atomics_test.c        | 154 ++++++++++
 .../selftests/bpf/verifier/atomic_and.c       |  77 +++++
 .../selftests/bpf/verifier/atomic_cmpxchg.c   |  96 +++++++
 .../selftests/bpf/verifier/atomic_fetch_add.c | 106 +++++++
 .../selftests/bpf/verifier/atomic_or.c        |  77 +++++
 .../selftests/bpf/verifier/atomic_xchg.c      |  46 +++
 .../selftests/bpf/verifier/atomic_xor.c       |  77 +++++
 9 files changed, 899 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/atomics_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/atomics_test.c
 create mode 100644 tools/testing/selftests/bpf/verifier/atomic_and.c
 create mode 100644 tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
 create mode 100644 tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
 create mode 100644 tools/testing/selftests/bpf/verifier/atomic_or.c
 create mode 100644 tools/testing/selftests/bpf/verifier/atomic_xchg.c
 create mode 100644 tools/testing/selftests/bpf/verifier/atomic_xor.c

Comments

Yonghong Song Dec. 4, 2020, 7:06 a.m. UTC | #1
On 12/3/20 8:02 AM, Brendan Jackman wrote:
> This relies on the work done by Yonghong Song in
> https://reviews.llvm.org/D72184
> 
> Note the use of a define called ENABLE_ATOMICS_TESTS: this is used
> to:
> 
>   - Avoid breaking the build for people on old versions of Clang
>   - Avoid needing separate lists of test objects for no_alu32, where
>     atomics are not supported even if Clang has the feature.
> 
> The atomics_test.o BPF object is built unconditionally both for
> test_progs and test_progs-no_alu32. For test_progs, if Clang supports
> atomics, ENABLE_ATOMICS_TESTS is defined, so it includes the proper
> test code. Otherwise, progs and global vars are defined anyway, as
> stubs; this means that the skeleton user code still builds.
> 
> The atomics_test.o userspace object is built once and used for both
> test_progs and test_progs-no_alu32. A variable called skip_tests is
> defined in the BPF object's data section, which tells the userspace
> object whether to skip the atomics test.
> 
> Change-Id: Iecc12f35f0ded4a1dd805cce1be576e7b27917ef
> Signed-off-by: Brendan Jackman <jackmanb@google.com>
> ---
>   tools/testing/selftests/bpf/Makefile          |   4 +
>   .../selftests/bpf/prog_tests/atomics_test.c   | 262 ++++++++++++++++++
>   .../selftests/bpf/progs/atomics_test.c        | 154 ++++++++++
>   .../selftests/bpf/verifier/atomic_and.c       |  77 +++++
>   .../selftests/bpf/verifier/atomic_cmpxchg.c   |  96 +++++++
>   .../selftests/bpf/verifier/atomic_fetch_add.c | 106 +++++++
>   .../selftests/bpf/verifier/atomic_or.c        |  77 +++++
>   .../selftests/bpf/verifier/atomic_xchg.c      |  46 +++
>   .../selftests/bpf/verifier/atomic_xor.c       |  77 +++++
>   9 files changed, 899 insertions(+)
>   create mode 100644 tools/testing/selftests/bpf/prog_tests/atomics_test.c
>   create mode 100644 tools/testing/selftests/bpf/progs/atomics_test.c
>   create mode 100644 tools/testing/selftests/bpf/verifier/atomic_and.c
>   create mode 100644 tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
>   create mode 100644 tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
>   create mode 100644 tools/testing/selftests/bpf/verifier/atomic_or.c
>   create mode 100644 tools/testing/selftests/bpf/verifier/atomic_xchg.c
>   create mode 100644 tools/testing/selftests/bpf/verifier/atomic_xor.c
> 
> diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
> index f21c4841a612..448a9eb1a56c 100644
> --- a/tools/testing/selftests/bpf/Makefile
> +++ b/tools/testing/selftests/bpf/Makefile
> @@ -431,11 +431,15 @@ TRUNNER_EXTRA_FILES := $(OUTPUT)/urandom_read				\
>   		       $(wildcard progs/btf_dump_test_case_*.c)
>   TRUNNER_BPF_BUILD_RULE := CLANG_BPF_BUILD_RULE
>   TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS)
> +ifeq ($(feature-clang-bpf-atomics),1)
> +  TRUNNER_BPF_CFLAGS += -DENABLE_ATOMICS_TESTS
> +endif
>   TRUNNER_BPF_LDFLAGS := -mattr=+alu32
>   $(eval $(call DEFINE_TEST_RUNNER,test_progs))
>   
>   # Define test_progs-no_alu32 test runner.
>   TRUNNER_BPF_BUILD_RULE := CLANG_NOALU32_BPF_BUILD_RULE
> +TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS)
>   TRUNNER_BPF_LDFLAGS :=
>   $(eval $(call DEFINE_TEST_RUNNER,test_progs,no_alu32))
>   
> diff --git a/tools/testing/selftests/bpf/prog_tests/atomics_test.c b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
> new file mode 100644
> index 000000000000..66f0ccf4f4ec
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
> @@ -0,0 +1,262 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <test_progs.h>
> +
> +
> +#include "atomics_test.skel.h"
> +
> +static struct atomics_test *setup(void)
> +{
> +	struct atomics_test *atomics_skel;
> +	__u32 duration = 0, err;
> +
> +	atomics_skel = atomics_test__open_and_load();
> +	if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n"))
> +		return NULL;
> +
> +	if (atomics_skel->data->skip_tests) {
> +		printf("%s:SKIP:no ENABLE_ATOMICS_TEST (missing Clang BPF atomics support)",
> +		       __func__);
> +		test__skip();
> +		goto err;
> +	}
> +
> +	err = atomics_test__attach(atomics_skel);
> +	if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err))
> +		goto err;
> +
> +	return atomics_skel;
> +
> +err:
> +	atomics_test__destroy(atomics_skel);
> +	return NULL;
> +}
> +
> +static void test_add(void)
> +{
> +	struct atomics_test *atomics_skel;
> +	int err, prog_fd;
> +	__u32 duration = 0, retval;
> +
> +	atomics_skel = setup();

When running the test, I observed a noticeable delay between skel load
and skel attach. The reason is that the bpf program object file contains
multiple programs, and the above setup() tries to do attachment
for ALL programs, but only the "add" program is actually tested below.
This unnecessarily increases test_progs running time.

The best approach is for setup() here to load and attach only the "add"
program. The libbpf API bpf_program__set_autoload() can mark a particular
program as not autoloaded; you can then call the attach function
explicitly for the one specific program. This should reduce test
running time.
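
Something along these lines, perhaps (untested sketch using the skeleton
names from this patch; error handling abbreviated):

static struct atomics_test *setup_add(void)
{
	struct atomics_test *skel;
	struct bpf_program *prog;

	skel = atomics_test__open();
	if (!skel)
		return NULL;

	/* Disable autoload for everything, then re-enable only "add". */
	bpf_object__for_each_program(prog, skel->obj)
		bpf_program__set_autoload(prog, false);
	bpf_program__set_autoload(skel->progs.add, true);

	if (atomics_test__load(skel))
		goto err;

	/* Attach just this one program instead of atomics_test__attach(). */
	skel->links.add = bpf_program__attach(skel->progs.add);
	if (libbpf_get_error(skel->links.add)) {
		skel->links.add = NULL;
		goto err;
	}

	return skel;
err:
	atomics_test__destroy(skel);
	return NULL;
}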

> +	if (!atomics_skel)
> +		return;
> +
> +	prog_fd = bpf_program__fd(atomics_skel->progs.add);
> +	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
> +				NULL, NULL, &retval, &duration);
> +	if (CHECK(err || retval, "test_run add",
> +		  "err %d errno %d retval %d duration %d\n",
> +		  err, errno, retval, duration))
> +		goto cleanup;
> +
> +	ASSERT_EQ(atomics_skel->data->add64_value, 3, "add64_value");
> +	ASSERT_EQ(atomics_skel->bss->add64_result, 1, "add64_result");
> +
> +	ASSERT_EQ(atomics_skel->data->add32_value, 3, "add32_value");
> +	ASSERT_EQ(atomics_skel->bss->add32_result, 1, "add32_result");
> +
> +	ASSERT_EQ(atomics_skel->bss->add_stack_value_copy, 3, "add_stack_value");
> +	ASSERT_EQ(atomics_skel->bss->add_stack_result, 1, "add_stack_result");
> +
> +	ASSERT_EQ(atomics_skel->data->add_noreturn_value, 3, "add_noreturn_value");
> +
> +cleanup:
> +	atomics_test__destroy(atomics_skel);
> +}
> +
> +static void test_sub(void)
> +{
> +	struct atomics_test *atomics_skel;
> +	int err, prog_fd;
> +	__u32 duration = 0, retval;
> +
> +	atomics_skel = setup();
> +	if (!atomics_skel)
> +		return;
> +
[...]
Brendan Jackman Dec. 4, 2020, 9:45 a.m. UTC | #2
On Thu, Dec 03, 2020 at 11:06:31PM -0800, Yonghong Song wrote:
> On 12/3/20 8:02 AM, Brendan Jackman wrote:
[...]
> > diff --git a/tools/testing/selftests/bpf/prog_tests/atomics_test.c b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
> > new file mode 100644
> > index 000000000000..66f0ccf4f4ec
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
> > @@ -0,0 +1,262 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +#include <test_progs.h>
> > +
> > +
> > +#include "atomics_test.skel.h"
> > +
> > +static struct atomics_test *setup(void)
> > +{
> > +	struct atomics_test *atomics_skel;
> > +	__u32 duration = 0, err;
> > +
> > +	atomics_skel = atomics_test__open_and_load();
> > +	if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n"))
> > +		return NULL;
> > +
> > +	if (atomics_skel->data->skip_tests) {
> > +		printf("%s:SKIP:no ENABLE_ATOMICS_TEST (missing Clang BPF atomics support)",
> > +		       __func__);
> > +		test__skip();
> > +		goto err;
> > +	}
> > +
> > +	err = atomics_test__attach(atomics_skel);
> > +	if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err))
> > +		goto err;
> > +
> > +	return atomics_skel;
> > +
> > +err:
> > +	atomics_test__destroy(atomics_skel);
> > +	return NULL;
> > +}
> > +
> > +static void test_add(void)
> > +{
> > +	struct atomics_test *atomics_skel;
> > +	int err, prog_fd;
> > +	__u32 duration = 0, retval;
> > +
> > +	atomics_skel = setup();
> 
> When running the test, I observed a noticeable delay between skel load and
> skel attach. The reason is the bpf program object file contains
> multiple programs and the above setup() tries to do attachment
> for ALL programs but actually below only "add" program is tested.
> This will unnecessarily increase test_progs running time.
> 
> The best is for setup() here only load and attach program "add".
> The libbpf API bpf_program__set_autoload() can set a particular
> program not autoload. You can call attach function explicitly
> for one specific program. This should be able to reduce test
> running time.

Interesting, thanks a lot - I'll try this out next week. Maybe we can
actually load all the progs once at the beginning (i.e. in
test_atomics_test) then attach/detach each prog individually as needed...
Sorry, I haven't got much of a grip on libbpf yet.
Yonghong Song Dec. 4, 2020, 3:28 p.m. UTC | #3
On 12/4/20 1:45 AM, Brendan Jackman wrote:
> On Thu, Dec 03, 2020 at 11:06:31PM -0800, Yonghong Song wrote:
>> On 12/3/20 8:02 AM, Brendan Jackman wrote:
> [...]
>>> diff --git a/tools/testing/selftests/bpf/prog_tests/atomics_test.c b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
>>> new file mode 100644
>>> index 000000000000..66f0ccf4f4ec
>>> --- /dev/null
>>> +++ b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
>>> @@ -0,0 +1,262 @@
>>> +// SPDX-License-Identifier: GPL-2.0
>>> +
>>> +#include <test_progs.h>
>>> +
>>> +
>>> +#include "atomics_test.skel.h"
>>> +
>>> +static struct atomics_test *setup(void)
>>> +{
>>> +	struct atomics_test *atomics_skel;
>>> +	__u32 duration = 0, err;
>>> +
>>> +	atomics_skel = atomics_test__open_and_load();
>>> +	if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n"))
>>> +		return NULL;
>>> +
>>> +	if (atomics_skel->data->skip_tests) {
>>> +		printf("%s:SKIP:no ENABLE_ATOMICS_TEST (missing Clang BPF atomics support)",
>>> +		       __func__);
>>> +		test__skip();
>>> +		goto err;
>>> +	}
>>> +
>>> +	err = atomics_test__attach(atomics_skel);
>>> +	if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err))
>>> +		goto err;
>>> +
>>> +	return atomics_skel;
>>> +
>>> +err:
>>> +	atomics_test__destroy(atomics_skel);
>>> +	return NULL;
>>> +}
>>> +
>>> +static void test_add(void)
>>> +{
>>> +	struct atomics_test *atomics_skel;
>>> +	int err, prog_fd;
>>> +	__u32 duration = 0, retval;
>>> +
>>> +	atomics_skel = setup();
>>
>> When running the test, I observed a noticeable delay between skel load and
>> skel attach. The reason is the bpf program object file contains
>> multiple programs and the above setup() tries to do attachment
>> for ALL programs but actually below only "add" program is tested.
>> This will unnecessarily increase test_progs running time.
>>
>> The best is for setup() here only load and attach program "add".
>> The libbpf API bpf_program__set_autoload() can set a particular
>> program not autoload. You can call attach function explicitly
>> for one specific program. This should be able to reduce test
>> running time.
> 
> Interesting, thanks a lot - I'll try this out next week. Maybe we can
> actually load all the progs once at the beginning (i.e. in

If you have subtests, people expect each subtest to be individually runnable.
This will complicate your logic.

> test_atomics_test) then attach/detch each prog individually as needed...
> Sorry, I haven't got much of a grip on libbpf yet.

One alternative is not to do subtests. There is nothing wrong with
having just one test instead of many. This way, you load all and attach
once, then do all the test verification.
Andrii Nakryiko Dec. 4, 2020, 7:49 p.m. UTC | #4
On Fri, Dec 4, 2020 at 7:29 AM Yonghong Song <yhs@fb.com> wrote:
>
>
>
> On 12/4/20 1:45 AM, Brendan Jackman wrote:
> > On Thu, Dec 03, 2020 at 11:06:31PM -0800, Yonghong Song wrote:
> >> On 12/3/20 8:02 AM, Brendan Jackman wrote:
> > [...]
> >>> diff --git a/tools/testing/selftests/bpf/prog_tests/atomics_test.c b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
> >>> new file mode 100644
> >>> index 000000000000..66f0ccf4f4ec
> >>> --- /dev/null
> >>> +++ b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
> >>> @@ -0,0 +1,262 @@
> >>> +// SPDX-License-Identifier: GPL-2.0
> >>> +
> >>> +#include <test_progs.h>
> >>> +
> >>> +
> >>> +#include "atomics_test.skel.h"
> >>> +
> >>> +static struct atomics_test *setup(void)
> >>> +{
> >>> +   struct atomics_test *atomics_skel;
> >>> +   __u32 duration = 0, err;
> >>> +
> >>> +   atomics_skel = atomics_test__open_and_load();
> >>> +   if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n"))
> >>> +           return NULL;
> >>> +
> >>> +   if (atomics_skel->data->skip_tests) {
> >>> +           printf("%s:SKIP:no ENABLE_ATOMICS_TEST (missing Clang BPF atomics support)",
> >>> +                  __func__);
> >>> +           test__skip();
> >>> +           goto err;
> >>> +   }
> >>> +
> >>> +   err = atomics_test__attach(atomics_skel);
> >>> +   if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err))
> >>> +           goto err;
> >>> +
> >>> +   return atomics_skel;
> >>> +
> >>> +err:
> >>> +   atomics_test__destroy(atomics_skel);
> >>> +   return NULL;
> >>> +}
> >>> +
> >>> +static void test_add(void)
> >>> +{
> >>> +   struct atomics_test *atomics_skel;
> >>> +   int err, prog_fd;
> >>> +   __u32 duration = 0, retval;
> >>> +
> >>> +   atomics_skel = setup();
> >>
> >> When running the test, I observed a noticeable delay between skel load and
> >> skel attach. The reason is the bpf program object file contains
> >> multiple programs and the above setup() tries to do attachment
> >> for ALL programs but actually below only "add" program is tested.
> >> This will unnecessarily increase test_progs running time.
> >>
> >> The best is for setup() here only load and attach program "add".
> >> The libbpf API bpf_program__set_autoload() can set a particular
> >> program not autoload. You can call attach function explicitly
> >> for one specific program. This should be able to reduce test
> >> running time.
> >
> > Interesting, thanks a lot - I'll try this out next week. Maybe we can
> > actually load all the progs once at the beginning (i.e. in
>
> If you have subtest, people expects subtest can be individual runable.
> This will complicate your logic.
>
> > test_atomics_test) then attach/detch each prog individually as needed...
> > Sorry, I haven't got much of a grip on libbpf yet.
>
> One alternative is not to do subtests. There is nothing run to have
> just one bpf program instead of many. This way, you load all and attach
> once, then do all the test verification.

I think subtests are good for debuggability, at least. But in this
case it's very easy to achieve everything you've discussed:

1. do open() right there in test_atomics_test()  (btw, consider naming
the test just "atomics" or "atomic_insns" or something, no need for
test-test tautology)
2. check if needs skipping, skip entire test
3. if not skipping, load
4. then pass the same instance of the skeleton to each subtest
5. each subtest will
  5a. bpf_program__attach(skel->progs.my_specific_subtest_prog);
  5b. trigger and do checks
  5c. bpf_link__destroy(<link from 5a step>);
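
In rough code (untested sketch; uses the skeleton names from this patch,
with the entry point renamed to test_atomics() per point 1, and an
illustrative per-subtest prog):

static void test_add(struct atomics_test *skel)
{
	struct bpf_link *link;
	__u32 duration = 0, retval;
	int err, prog_fd;

	/* 5a: attach only this subtest's program */
	link = bpf_program__attach(skel->progs.add);
	if (CHECK(libbpf_get_error(link), "attach(add)", "err %ld\n",
		  libbpf_get_error(link)))
		return;

	/* 5b: trigger and check */
	prog_fd = bpf_program__fd(skel->progs.add);
	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
				NULL, NULL, &retval, &duration);
	CHECK(err || retval, "test_run add",
	      "err %d retval %d duration %d\n", err, retval, duration);

	/* ... ASSERT_EQ() checks on skel->data / skel->bss as before ... */

	/* 5c: drop the link again */
	bpf_link__destroy(link);
}

void test_atomics(void)
{
	struct atomics_test *skel;
	__u32 duration = 0;

	skel = atomics_test__open_and_load();
	if (CHECK(!skel, "skel_load", "skeleton open_and_load failed\n"))
		return;

	if (skel->data->skip_tests) {
		test__skip();
		goto out;
	}

	if (test__start_subtest("add"))
		test_add(skel);
	/* ... other subtests ... */
out:
	atomics_test__destroy(skel);
}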
Brendan Jackman Dec. 7, 2020, 3:48 p.m. UTC | #5
On Fri, Dec 04, 2020 at 11:49:22AM -0800, Andrii Nakryiko wrote:
> On Fri, Dec 4, 2020 at 7:29 AM Yonghong Song <yhs@fb.com> wrote:
> >
> >
> >
> > On 12/4/20 1:45 AM, Brendan Jackman wrote:
> > > On Thu, Dec 03, 2020 at 11:06:31PM -0800, Yonghong Song wrote:
> > >> On 12/3/20 8:02 AM, Brendan Jackman wrote:
> > > [...]
> > >>> diff --git a/tools/testing/selftests/bpf/prog_tests/atomics_test.c b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
> > >>> new file mode 100644
> > >>> index 000000000000..66f0ccf4f4ec
> > >>> --- /dev/null
> > >>> +++ b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
> > >>> @@ -0,0 +1,262 @@
> > >>> +// SPDX-License-Identifier: GPL-2.0
> > >>> +
> > >>> +#include <test_progs.h>
> > >>> +
> > >>> +
> > >>> +#include "atomics_test.skel.h"
> > >>> +
> > >>> +static struct atomics_test *setup(void)
> > >>> +{
> > >>> +   struct atomics_test *atomics_skel;
> > >>> +   __u32 duration = 0, err;
> > >>> +
> > >>> +   atomics_skel = atomics_test__open_and_load();
> > >>> +   if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n"))
> > >>> +           return NULL;
> > >>> +
> > >>> +   if (atomics_skel->data->skip_tests) {
> > >>> +           printf("%s:SKIP:no ENABLE_ATOMICS_TEST (missing Clang BPF atomics support)",
> > >>> +                  __func__);
> > >>> +           test__skip();
> > >>> +           goto err;
> > >>> +   }
> > >>> +
> > >>> +   err = atomics_test__attach(atomics_skel);
> > >>> +   if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err))
> > >>> +           goto err;
> > >>> +
> > >>> +   return atomics_skel;
> > >>> +
> > >>> +err:
> > >>> +   atomics_test__destroy(atomics_skel);
> > >>> +   return NULL;
> > >>> +}
> > >>> +
> > >>> +static void test_add(void)
> > >>> +{
> > >>> +   struct atomics_test *atomics_skel;
> > >>> +   int err, prog_fd;
> > >>> +   __u32 duration = 0, retval;
> > >>> +
> > >>> +   atomics_skel = setup();
> > >>
> > >> When running the test, I observed a noticeable delay between skel load and
> > >> skel attach. The reason is the bpf program object file contains
> > >> multiple programs and the above setup() tries to do attachment
> > >> for ALL programs but actually below only "add" program is tested.
> > >> This will unnecessarily increase test_progs running time.
> > >>
> > >> The best is for setup() here only load and attach program "add".
> > >> The libbpf API bpf_program__set_autoload() can set a particular
> > >> program not autoload. You can call attach function explicitly
> > >> for one specific program. This should be able to reduce test
> > >> running time.
> > >
> > > Interesting, thanks a lot - I'll try this out next week. Maybe we can
> > > actually load all the progs once at the beginning (i.e. in
> >
> > If you have subtest, people expects subtest can be individual runable.
> > This will complicate your logic.
> >
> > > test_atomics_test) then attach/detch each prog individually as needed...
> > > Sorry, I haven't got much of a grip on libbpf yet.
> >
> > One alternative is not to do subtests. There is nothing run to have
> > just one bpf program instead of many. This way, you load all and attach
> > once, then do all the test verification.
> 
> I think subtests are good for debuggability, at least. But in this
> case it's very easy to achieve everything you've discussed:
> 
> 1. do open() right there in test_atomics_test()  (btw, consider naming
> the test just "atomics" or "atomic_insns" or something, no need for
> test-test tautology)
> 2. check if needs skipping, skip entire test
> 3. if not skipping, load
> 4. then pass the same instance of the skeleton to each subtest
> 5. each subtest will
>   5a. bpf_prog__attach(skel->prog.my_specific_subtest_prog);
>   5b. trigger and do checks
>   5c. bpf_link__destroy(<link from 5a step>);

Thanks, this seems like the way forward to me.

Patch

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index f21c4841a612..448a9eb1a56c 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -431,11 +431,15 @@  TRUNNER_EXTRA_FILES := $(OUTPUT)/urandom_read				\
 		       $(wildcard progs/btf_dump_test_case_*.c)
 TRUNNER_BPF_BUILD_RULE := CLANG_BPF_BUILD_RULE
 TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS)
+ifeq ($(feature-clang-bpf-atomics),1)
+  TRUNNER_BPF_CFLAGS += -DENABLE_ATOMICS_TESTS
+endif
 TRUNNER_BPF_LDFLAGS := -mattr=+alu32
 $(eval $(call DEFINE_TEST_RUNNER,test_progs))
 
 # Define test_progs-no_alu32 test runner.
 TRUNNER_BPF_BUILD_RULE := CLANG_NOALU32_BPF_BUILD_RULE
+TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS)
 TRUNNER_BPF_LDFLAGS :=
 $(eval $(call DEFINE_TEST_RUNNER,test_progs,no_alu32))
 
diff --git a/tools/testing/selftests/bpf/prog_tests/atomics_test.c b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
new file mode 100644
index 000000000000..66f0ccf4f4ec
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
@@ -0,0 +1,262 @@ 
+// SPDX-License-Identifier: GPL-2.0
+
+#include <test_progs.h>
+
+
+#include "atomics_test.skel.h"
+
+static struct atomics_test *setup(void)
+{
+	struct atomics_test *atomics_skel;
+	__u32 duration = 0, err;
+
+	atomics_skel = atomics_test__open_and_load();
+	if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n"))
+		return NULL;
+
+	if (atomics_skel->data->skip_tests) {
+		printf("%s:SKIP:no ENABLE_ATOMICS_TEST (missing Clang BPF atomics support)",
+		       __func__);
+		test__skip();
+		goto err;
+	}
+
+	err = atomics_test__attach(atomics_skel);
+	if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err))
+		goto err;
+
+	return atomics_skel;
+
+err:
+	atomics_test__destroy(atomics_skel);
+	return NULL;
+}
+
+static void test_add(void)
+{
+	struct atomics_test *atomics_skel;
+	int err, prog_fd;
+	__u32 duration = 0, retval;
+
+	atomics_skel = setup();
+	if (!atomics_skel)
+		return;
+
+	prog_fd = bpf_program__fd(atomics_skel->progs.add);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	if (CHECK(err || retval, "test_run add",
+		  "err %d errno %d retval %d duration %d\n",
+		  err, errno, retval, duration))
+		goto cleanup;
+
+	ASSERT_EQ(atomics_skel->data->add64_value, 3, "add64_value");
+	ASSERT_EQ(atomics_skel->bss->add64_result, 1, "add64_result");
+
+	ASSERT_EQ(atomics_skel->data->add32_value, 3, "add32_value");
+	ASSERT_EQ(atomics_skel->bss->add32_result, 1, "add32_result");
+
+	ASSERT_EQ(atomics_skel->bss->add_stack_value_copy, 3, "add_stack_value");
+	ASSERT_EQ(atomics_skel->bss->add_stack_result, 1, "add_stack_result");
+
+	ASSERT_EQ(atomics_skel->data->add_noreturn_value, 3, "add_noreturn_value");
+
+cleanup:
+	atomics_test__destroy(atomics_skel);
+}
+
+static void test_sub(void)
+{
+	struct atomics_test *atomics_skel;
+	int err, prog_fd;
+	__u32 duration = 0, retval;
+
+	atomics_skel = setup();
+	if (!atomics_skel)
+		return;
+
+	prog_fd = bpf_program__fd(atomics_skel->progs.sub);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	if (CHECK(err || retval, "test_run sub",
+		  "err %d errno %d retval %d duration %d\n",
+		  err, errno, retval, duration))
+		goto cleanup;
+
+	ASSERT_EQ(atomics_skel->data->sub64_value, -1, "sub64_value");
+	ASSERT_EQ(atomics_skel->bss->sub64_result, 1, "sub64_result");
+
+	ASSERT_EQ(atomics_skel->data->sub32_value, -1, "sub32_value");
+	ASSERT_EQ(atomics_skel->bss->sub32_result, 1, "sub32_result");
+
+	ASSERT_EQ(atomics_skel->bss->sub_stack_value_copy, -1, "sub_stack_value");
+	ASSERT_EQ(atomics_skel->bss->sub_stack_result, 1, "sub_stack_result");
+
+	ASSERT_EQ(atomics_skel->data->sub_noreturn_value, -1, "sub_noreturn_value");
+
+cleanup:
+	atomics_test__destroy(atomics_skel);
+}
+
+static void test_and(void)
+{
+	struct atomics_test *atomics_skel;
+	int err, prog_fd;
+	__u32 duration = 0, retval;
+
+	atomics_skel = setup();
+	if (!atomics_skel)
+		return;
+
+	prog_fd = bpf_program__fd(atomics_skel->progs.and);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	if (CHECK(err || retval, "test_run and",
+		  "err %d errno %d retval %d duration %d\n",
+		  err, errno, retval, duration))
+		goto cleanup;
+
+	ASSERT_EQ(atomics_skel->data->and64_value, 0x010ull << 32, "and64_value");
+	ASSERT_EQ(atomics_skel->bss->and64_result, 0x110ull << 32, "and64_result");
+
+	ASSERT_EQ(atomics_skel->data->and32_value, 0x010, "and32_value");
+	ASSERT_EQ(atomics_skel->bss->and32_result, 0x110, "and32_result");
+
+	ASSERT_EQ(atomics_skel->data->and_noreturn_value, 0x010ull << 32, "and_noreturn_value");
+cleanup:
+	atomics_test__destroy(atomics_skel);
+}
+
+static void test_or(void)
+{
+	struct atomics_test *atomics_skel;
+	int err, prog_fd;
+	__u32 duration = 0, retval;
+
+	atomics_skel = setup();
+	if (!atomics_skel)
+		return;
+
+	prog_fd = bpf_program__fd(atomics_skel->progs.or);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	if (CHECK(err || retval, "test_run or",
+		  "err %d errno %d retval %d duration %d\n",
+		  err, errno, retval, duration))
+		goto cleanup;
+
+	ASSERT_EQ(atomics_skel->data->or64_value, 0x111ull << 32, "or64_value");
+	ASSERT_EQ(atomics_skel->bss->or64_result, 0x110ull << 32, "or64_result");
+
+	ASSERT_EQ(atomics_skel->data->or32_value, 0x111, "or32_value");
+	ASSERT_EQ(atomics_skel->bss->or32_result, 0x110, "or32_result");
+
+	ASSERT_EQ(atomics_skel->data->or_noreturn_value, 0x111ull << 32, "or_noreturn_value");
+cleanup:
+	atomics_test__destroy(atomics_skel);
+}
+
+static void test_xor(void)
+{
+	struct atomics_test *atomics_skel;
+	int err, prog_fd;
+	__u32 duration = 0, retval;
+
+	atomics_skel = setup();
+	if (!atomics_skel)
+		return;
+
+	prog_fd = bpf_program__fd(atomics_skel->progs.xor);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	if (CHECK(err || retval, "test_run xor",
+		  "err %d errno %d retval %d duration %d\n",
+		  err, errno, retval, duration))
+		goto cleanup;
+
+	ASSERT_EQ(atomics_skel->data->xor64_value, 0x101ull << 32, "xor64_value");
+	ASSERT_EQ(atomics_skel->bss->xor64_result, 0x110ull << 32, "xor64_result");
+
+	ASSERT_EQ(atomics_skel->data->xor32_value, 0x101, "xor32_value");
+	ASSERT_EQ(atomics_skel->bss->xor32_result, 0x110, "xor32_result");
+
+	ASSERT_EQ(atomics_skel->data->xor_noreturn_value, 0x101ull << 32, "xor_noreturn_value");
+cleanup:
+	atomics_test__destroy(atomics_skel);
+}
+
+static void test_cmpxchg(void)
+{
+	struct atomics_test *atomics_skel;
+	int err, prog_fd;
+	__u32 duration = 0, retval;
+
+	atomics_skel = setup();
+	if (!atomics_skel)
+		return;
+
+	prog_fd = bpf_program__fd(atomics_skel->progs.cmpxchg);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	if (CHECK(err || retval, "test_run add",
+		  "err %d errno %d retval %d duration %d\n",
+		  err, errno, retval, duration))
+		goto cleanup;
+
+	ASSERT_EQ(atomics_skel->data->cmpxchg64_value, 2, "cmpxchg64_value");
+	ASSERT_EQ(atomics_skel->bss->cmpxchg64_result_fail, 1, "cmpxchg_result_fail");
+	ASSERT_EQ(atomics_skel->bss->cmpxchg64_result_succeed, 1, "cmpxchg_result_succeed");
+
+	ASSERT_EQ(atomics_skel->data->cmpxchg32_value, 2, "cmpxchg32_value");
+	ASSERT_EQ(atomics_skel->bss->cmpxchg32_result_fail, 1, "cmpxchg_result_fail");
+	ASSERT_EQ(atomics_skel->bss->cmpxchg32_result_succeed, 1, "cmpxchg_result_succeed");
+
+cleanup:
+	atomics_test__destroy(atomics_skel);
+}
+
+static void test_xchg(void)
+{
+	struct atomics_test *atomics_skel;
+	int err, prog_fd;
+	__u32 duration = 0, retval;
+
+	atomics_skel = setup();
+	if (!atomics_skel)
+		return;
+
+	prog_fd = bpf_program__fd(atomics_skel->progs.xchg);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	if (CHECK(err || retval, "test_run add",
+		  "err %d errno %d retval %d duration %d\n",
+		  err, errno, retval, duration))
+		goto cleanup;
+
+	ASSERT_EQ(atomics_skel->data->xchg64_value, 2, "xchg64_value");
+	ASSERT_EQ(atomics_skel->bss->xchg64_result, 1, "xchg_result");
+
+	ASSERT_EQ(atomics_skel->data->xchg32_value, 2, "xchg32_value");
+	ASSERT_EQ(atomics_skel->bss->xchg32_result, 1, "xchg_result");
+
+cleanup:
+	atomics_test__destroy(atomics_skel);
+}
+
+void test_atomics_test(void)
+{
+	if (test__start_subtest("add"))
+		test_add();
+	if (test__start_subtest("sub"))
+		test_sub();
+	if (test__start_subtest("and"))
+		test_and();
+	if (test__start_subtest("or"))
+		test_or();
+	if (test__start_subtest("xor"))
+		test_xor();
+	if (test__start_subtest("cmpxchg"))
+		test_cmpxchg();
+	if (test__start_subtest("xchg"))
+		test_xchg();
+}
diff --git a/tools/testing/selftests/bpf/progs/atomics_test.c b/tools/testing/selftests/bpf/progs/atomics_test.c
new file mode 100644
index 000000000000..d40c93496843
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/atomics_test.c
@@ -0,0 +1,154 @@ 
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <stdbool.h>
+
+#ifdef ENABLE_ATOMICS_TESTS
+bool skip_tests __attribute((__section__(".data"))) = false;
+#else
+bool skip_tests = true;
+#endif
+
+__u64 add64_value = 1;
+__u64 add64_result = 0;
+__u32 add32_value = 1;
+__u32 add32_result = 0;
+__u64 add_stack_value_copy = 0;
+__u64 add_stack_result = 0;
+__u64 add_noreturn_value = 1;
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(add, int a)
+{
+#ifdef ENABLE_ATOMICS_TESTS
+	__u64 add_stack_value = 1;
+
+	add64_result = __sync_fetch_and_add(&add64_value, 2);
+	add32_result = __sync_fetch_and_add(&add32_value, 2);
+	add_stack_result = __sync_fetch_and_add(&add_stack_value, 2);
+	add_stack_value_copy = add_stack_value;
+	__sync_fetch_and_add(&add_noreturn_value, 2);
+#endif
+
+	return 0;
+}
+
+__s64 sub64_value = 1;
+__s64 sub64_result = 0;
+__s32 sub32_value = 1;
+__s32 sub32_result = 0;
+__s64 sub_stack_value_copy = 0;
+__s64 sub_stack_result = 0;
+__s64 sub_noreturn_value = 1;
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(sub, int a)
+{
+#ifdef ENABLE_ATOMICS_TESTS
+	__u64 sub_stack_value = 1;
+
+	sub64_result = __sync_fetch_and_sub(&sub64_value, 2);
+	sub32_result = __sync_fetch_and_sub(&sub32_value, 2);
+	sub_stack_result = __sync_fetch_and_sub(&sub_stack_value, 2);
+	sub_stack_value_copy = sub_stack_value;
+	__sync_fetch_and_sub(&sub_noreturn_value, 2);
+#endif
+
+	return 0;
+}
+
+__u64 and64_value = (0x110ull << 32);
+__u64 and64_result = 0;
+__u32 and32_value = 0x110;
+__u32 and32_result = 0;
+__u64 and_noreturn_value = (0x110ull << 32);
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(and, int a)
+{
+#ifdef ENABLE_ATOMICS_TESTS
+
+	and64_result = __sync_fetch_and_and(&and64_value, 0x011ull << 32);
+	and32_result = __sync_fetch_and_and(&and32_value, 0x011);
+	__sync_fetch_and_and(&and_noreturn_value, 0x011ull << 32);
+#endif
+
+	return 0;
+}
+
+__u64 or64_value = (0x110ull << 32);
+__u64 or64_result = 0;
+__u32 or32_value = 0x110;
+__u32 or32_result = 0;
+__u64 or_noreturn_value = (0x110ull << 32);
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(or, int a)
+{
+#ifdef ENABLE_ATOMICS_TESTS
+	or64_result = __sync_fetch_and_or(&or64_value, 0x011ull << 32);
+	or32_result = __sync_fetch_and_or(&or32_value, 0x011);
+	__sync_fetch_and_or(&or_noreturn_value, 0x011ull << 32);
+#endif
+
+	return 0;
+}
+
+__u64 xor64_value = (0x110ull << 32);
+__u64 xor64_result = 0;
+__u32 xor32_value = 0x110;
+__u32 xor32_result = 0;
+__u64 xor_noreturn_value = (0x110ull << 32);
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(xor, int a)
+{
+#ifdef ENABLE_ATOMICS_TESTS
+	xor64_result = __sync_fetch_and_xor(&xor64_value, 0x011ull << 32);
+	xor32_result = __sync_fetch_and_xor(&xor32_value, 0x011);
+	__sync_fetch_and_xor(&xor_noreturn_value, 0x011ull << 32);
+#endif
+
+	return 0;
+}
+
+__u64 cmpxchg64_value = 1;
+__u64 cmpxchg64_result_fail = 0;
+__u64 cmpxchg64_result_succeed = 0;
+__u32 cmpxchg32_value = 1;
+__u32 cmpxchg32_result_fail = 0;
+__u32 cmpxchg32_result_succeed = 0;
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(cmpxchg, int a)
+{
+#ifdef ENABLE_ATOMICS_TESTS
+	cmpxchg64_result_fail = __sync_val_compare_and_swap(&cmpxchg64_value, 0, 3);
+	cmpxchg64_result_succeed = __sync_val_compare_and_swap(&cmpxchg64_value, 1, 2);
+
+	cmpxchg32_result_fail = __sync_val_compare_and_swap(&cmpxchg32_value, 0, 3);
+	cmpxchg32_result_succeed = __sync_val_compare_and_swap(&cmpxchg32_value, 1, 2);
+#endif
+
+	return 0;
+}
+
+__u64 xchg64_value = 1;
+__u64 xchg64_result = 0;
+__u32 xchg32_value = 1;
+__u32 xchg32_result = 0;
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(xchg, int a)
+{
+#ifdef ENABLE_ATOMICS_TESTS
+	__u64 val64 = 2;
+	__u32 val32 = 2;
+
+	__atomic_exchange(&xchg64_value, &val64, &xchg64_result, __ATOMIC_RELAXED);
+	__atomic_exchange(&xchg32_value, &val32, &xchg32_result, __ATOMIC_RELAXED);
+#endif
+
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/verifier/atomic_and.c b/tools/testing/selftests/bpf/verifier/atomic_and.c
new file mode 100644
index 000000000000..7eea6d9dfd7d
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_and.c
@@ -0,0 +1,77 @@ 
+{
+	"BPF_ATOMIC_AND without fetch",
+	.insns = {
+		/* val = 0x110; */
+		BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110),
+		/* atomic_and(&val, 0x011); */
+		BPF_MOV64_IMM(BPF_REG_1, 0x011),
+		BPF_ATOMIC_AND(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+		/* if (val != 0x010) exit(2); */
+		BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -8),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0x010, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 2),
+		BPF_EXIT_INSN(),
+		/* r1 should not be clobbered, no BPF_FETCH flag */
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x011, 1),
+		BPF_MOV64_IMM(BPF_REG_0, 1),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC_AND with fetch",
+	.insns = {
+		BPF_MOV64_IMM(BPF_REG_0, 123),
+		/* val = 0x110; */
+		BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110),
+		/* old = atomic_fetch_and(&val, 0x011); */
+		BPF_MOV64_IMM(BPF_REG_1, 0x011),
+		BPF_ATOMIC_FETCH_AND(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+		/* if (old != 0x110) exit(3); */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 3),
+		BPF_EXIT_INSN(),
+		/* if (val != 0x010) exit(2); */
+		BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x010, 2),
+		BPF_MOV64_IMM(BPF_REG_1, 2),
+		BPF_EXIT_INSN(),
+		/* Check R0 wasn't clobbered (for fear of x86 JIT bug) */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 123, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 1),
+		BPF_EXIT_INSN(),
+		/* exit(0); */
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC_AND with fetch 32bit",
+	.insns = {
+		/* r0 = (s64) -1 */
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 1),
+		/* val = 0x110; */
+		BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0x110),
+		/* old = atomic_fetch_and(&val, 0x011); */
+		BPF_MOV32_IMM(BPF_REG_1, 0x011),
+		BPF_ATOMIC_FETCH_AND(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+		/* if (old != 0x110) exit(3); */
+		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
+		BPF_MOV32_IMM(BPF_REG_0, 3),
+		BPF_EXIT_INSN(),
+		/* if (val != 0x010) exit(2); */
+		BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -4),
+		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x010, 2),
+		BPF_MOV32_IMM(BPF_REG_1, 2),
+		BPF_EXIT_INSN(),
+		/* Check R0 wasn't clobbered (for fear of x86 JIT bug)
+		 * It should be -1 so add 1 to get exit code.
+		 */
+		BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
diff --git a/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c b/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
new file mode 100644
index 000000000000..335e12690be7
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
@@ -0,0 +1,96 @@ 
+{
+	"atomic compare-and-exchange smoketest - 64bit",
+	.insns = {
+		/* val = 3; */
+		BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+		/* old = atomic_cmpxchg(&val, 2, 4); */
+		BPF_MOV64_IMM(BPF_REG_1, 4),
+		BPF_MOV64_IMM(BPF_REG_0, 2),
+		BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+		/* if (old != 3) exit(2); */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 2),
+		BPF_EXIT_INSN(),
+		/* if (val != 3) exit(3); */
+		BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 3),
+		BPF_EXIT_INSN(),
+		/* old = atomic_cmpxchg(&val, 3, 4); */
+		BPF_MOV64_IMM(BPF_REG_1, 4),
+		BPF_MOV64_IMM(BPF_REG_0, 3),
+		BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+		/* if (old != 3) exit(4); */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 4),
+		BPF_EXIT_INSN(),
+		/* if (val != 4) exit(5); */
+		BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 5),
+		BPF_EXIT_INSN(),
+		/* exit(0); */
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"atomic compare-and-exchange smoketest - 32bit",
+	.insns = {
+		/* val = 3; */
+		BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 3),
+		/* old = atomic_cmpxchg(&val, 2, 4); */
+		BPF_MOV32_IMM(BPF_REG_1, 4),
+		BPF_MOV32_IMM(BPF_REG_0, 2),
+		BPF_ATOMIC_CMPXCHG(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+		/* if (old != 3) exit(2); */
+		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+		BPF_MOV32_IMM(BPF_REG_0, 2),
+		BPF_EXIT_INSN(),
+		/* if (val != 3) exit(3); */
+		BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -4),
+		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+		BPF_MOV32_IMM(BPF_REG_0, 3),
+		BPF_EXIT_INSN(),
+		/* old = atomic_cmpxchg(&val, 3, 4); */
+		BPF_MOV32_IMM(BPF_REG_1, 4),
+		BPF_MOV32_IMM(BPF_REG_0, 3),
+		BPF_ATOMIC_CMPXCHG(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+		/* if (old != 3) exit(4); */
+		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+		BPF_MOV32_IMM(BPF_REG_0, 4),
+		BPF_EXIT_INSN(),
+		/* if (val != 4) exit(5); */
+		BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -4),
+		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
+		BPF_MOV32_IMM(BPF_REG_0, 5),
+		BPF_EXIT_INSN(),
+		/* exit(0); */
+		BPF_MOV32_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"Can't use cmpxchg on uninit src reg",
+	.insns = {
+		BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+		BPF_MOV64_IMM(BPF_REG_0, 3),
+		BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_2, -8),
+		BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	.errstr = "!read_ok",
+},
+{
+	"Can't use cmpxchg on uninit memory",
+	.insns = {
+		BPF_MOV64_IMM(BPF_REG_0, 3),
+		BPF_MOV64_IMM(BPF_REG_2, 4),
+		BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_2, -8),
+		BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	.errstr = "invalid read from stack",
+},
diff --git a/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c b/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
new file mode 100644
index 000000000000..7c87bc9a13de
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
@@ -0,0 +1,106 @@ 
+{
+	"BPF_ATOMIC_FETCH_ADD smoketest - 64bit",
+	.insns = {
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		/* Write 3 to stack */
+		BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+		/* Put a 1 in R1, add it to the 3 on the stack, and load the value back into R1 */
+		BPF_MOV64_IMM(BPF_REG_1, 1),
+		BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+		/* Check the value we loaded back was 3 */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 1),
+		BPF_EXIT_INSN(),
+		/* Load value from stack */
+		BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8),
+		/* Check value loaded from stack was 4 */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 4, 1),
+		BPF_MOV64_IMM(BPF_REG_0, 2),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC_FETCH_ADD smoketest - 32bit",
+	.insns = {
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		/* Write 3 to stack */
+		BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 3),
+		/* Put a 1 in R1, add it to the 3 on the stack, and load the value back into R1 */
+		BPF_MOV32_IMM(BPF_REG_1, 1),
+		BPF_ATOMIC_FETCH_ADD(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+		/* Check the value we loaded back was 3 */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 1),
+		BPF_EXIT_INSN(),
+		/* Load value from stack */
+		BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -4),
+		/* Check value loaded from stack was 4 */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 4, 1),
+		BPF_MOV64_IMM(BPF_REG_0, 2),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"Can't use ATM_FETCH_ADD on frame pointer",
+	.insns = {
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+		BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_10, BPF_REG_10, -8),
+		BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	.errstr_unpriv = "R10 leaks addr into mem",
+	.errstr = "frame pointer is read only",
+},
+{
+	"Can't use ATM_FETCH_ADD on uninit src reg",
+	.insns = {
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+		BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_10, BPF_REG_2, -8),
+		BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	/* It happens that the address leak check is first, but it would also
+	 * complain about the fact that we're trying to modify R10.
+	 */
+	.errstr = "!read_ok",
+},
+{
+	"Can't use ATM_FETCH_ADD on uninit dst reg",
+	.insns = {
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_2, BPF_REG_0, -8),
+		BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	/* It happens that the address leak check is first, but it would also
+	 * complain about the fact that we're trying to modify R10.
+	 */
+	.errstr = "!read_ok",
+},
+{
+	"Can't use ATM_FETCH_ADD on kernel memory",
+	.insns = {
+		/* This is an fentry prog, context is array of the args of the
+		 * kernel function being called. Load first arg into R2.
+		 */
+		BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, 0),
+		/* First arg of bpf_fentry_test7 is a pointer to a struct.
+		 * Attempt to modify that struct. Verifier shouldn't let us
+		 * because it's kernel memory.
+		 */
+		BPF_MOV64_IMM(BPF_REG_3, 1),
+		BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_2, BPF_REG_3, 0),
+		/* Done */
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.prog_type = BPF_PROG_TYPE_TRACING,
+	.expected_attach_type = BPF_TRACE_FENTRY,
+	.kfunc = "bpf_fentry_test7",
+	.result = REJECT,
+	.errstr = "only read is supported",
+},
diff --git a/tools/testing/selftests/bpf/verifier/atomic_or.c b/tools/testing/selftests/bpf/verifier/atomic_or.c
new file mode 100644
index 000000000000..1b22fb2881f0
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_or.c
@@ -0,0 +1,77 @@ 
+{
+	"BPF_ATOMIC_OR without fetch",
+	.insns = {
+		/* val = 0x110; */
+		BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110),
+		/* atomic_or(&val, 0x011); */
+		BPF_MOV64_IMM(BPF_REG_1, 0x011),
+		BPF_ATOMIC_OR(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+		/* if (val != 0x111) exit(2); */
+		BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -8),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0x111, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 2),
+		BPF_EXIT_INSN(),
+		/* r1 should not be clobbered, no BPF_FETCH flag */
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x011, 1),
+		BPF_MOV64_IMM(BPF_REG_0, 1),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC_OR with fetch",
+	.insns = {
+		BPF_MOV64_IMM(BPF_REG_0, 123),
+		/* val = 0x110; */
+		BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110),
+		/* old = atomic_fetch_or(&val, 0x011); */
+		BPF_MOV64_IMM(BPF_REG_1, 0x011),
+		BPF_ATOMIC_FETCH_OR(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+		/* if (old != 0x110) exit(3); */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 3),
+		BPF_EXIT_INSN(),
+		/* if (val != 0x111) exit(2); */
+		BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x111, 2),
+		BPF_MOV64_IMM(BPF_REG_1, 2),
+		BPF_EXIT_INSN(),
+		/* Check R0 wasn't clobbered (for fear of x86 JIT bug) */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 123, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 1),
+		BPF_EXIT_INSN(),
+		/* exit(0); */
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC_OR with fetch 32bit",
+	.insns = {
+		/* r0 = (s64) -1 */
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 1),
+		/* val = 0x110; */
+		BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0x110),
+		/* old = atomic_fetch_or(&val, 0x011); */
+		BPF_MOV32_IMM(BPF_REG_1, 0x011),
+		BPF_ATOMIC_FETCH_OR(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+		/* if (old != 0x110) exit(3); */
+		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
+		BPF_MOV32_IMM(BPF_REG_0, 3),
+		BPF_EXIT_INSN(),
+		/* if (val != 0x111) exit(2); */
+		BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -4),
+		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x111, 2),
+		BPF_MOV32_IMM(BPF_REG_1, 2),
+		BPF_EXIT_INSN(),
+		/* Check R0 wasn't clobbered (for fear of x86 JIT bug)
+		 * It should be -1 so add 1 to get exit code.
+		 */
+		BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
diff --git a/tools/testing/selftests/bpf/verifier/atomic_xchg.c b/tools/testing/selftests/bpf/verifier/atomic_xchg.c
new file mode 100644
index 000000000000..9348ac490e24
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_xchg.c
@@ -0,0 +1,46 @@ 
+{
+	"atomic exchange smoketest - 64bit",
+	.insns = {
+		/* val = 3; */
+		BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+		/* old = atomic_xchg(&val, 4); */
+		BPF_MOV64_IMM(BPF_REG_1, 4),
+		BPF_ATOMIC_XCHG(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+		/* if (old != 3) exit(1); */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 1),
+		BPF_EXIT_INSN(),
+		/* if (val != 4) exit(2); */
+		BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 2),
+		BPF_EXIT_INSN(),
+		/* exit(0); */
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"atomic exchange smoketest - 32bit",
+	.insns = {
+		/* val = 3; */
+		BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 3),
+		/* old = atomic_xchg(&val, 4); */
+		BPF_MOV32_IMM(BPF_REG_1, 4),
+		BPF_ATOMIC_XCHG(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+		/* if (old != 3) exit(1); */
+		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
+		BPF_MOV32_IMM(BPF_REG_0, 1),
+		BPF_EXIT_INSN(),
+		/* if (val != 4) exit(2); */
+		BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -4),
+		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
+		BPF_MOV32_IMM(BPF_REG_0, 2),
+		BPF_EXIT_INSN(),
+		/* exit(0); */
+		BPF_MOV32_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
diff --git a/tools/testing/selftests/bpf/verifier/atomic_xor.c b/tools/testing/selftests/bpf/verifier/atomic_xor.c
new file mode 100644
index 000000000000..d1315419a3a8
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_xor.c
@@ -0,0 +1,77 @@ 
+{
+	"BPF_ATOMIC_XOR without fetch",
+	.insns = {
+		/* val = 0x110; */
+		BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110),
+		/* atomic_xor(&val, 0x011); */
+		BPF_MOV64_IMM(BPF_REG_1, 0x011),
+		BPF_ATOMIC_XOR(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+		/* if (val != 0x101) exit(2); */
+		BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -8),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0x101, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 2),
+		BPF_EXIT_INSN(),
+		/* r1 should not be clobbered, no BPF_FETCH flag */
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x011, 1),
+		BPF_MOV64_IMM(BPF_REG_0, 1),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC_XOR with fetch",
+	.insns = {
+		BPF_MOV64_IMM(BPF_REG_0, 123),
+		/* val = 0x110; */
+		BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110),
+		/* old = atomic_fetch_xor(&val, 0x011); */
+		BPF_MOV64_IMM(BPF_REG_1, 0x011),
+		BPF_ATOMIC_FETCH_XOR(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+		/* if (old != 0x110) exit(3); */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 3),
+		BPF_EXIT_INSN(),
+		/* if (val != 0x101) exit(2); */
+		BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8),
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x101, 2),
+		BPF_MOV64_IMM(BPF_REG_1, 2),
+		BPF_EXIT_INSN(),
+		/* Check R0 wasn't clobbered (for fear of x86 JIT bug) */
+		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 123, 2),
+		BPF_MOV64_IMM(BPF_REG_0, 1),
+		BPF_EXIT_INSN(),
+		/* exit(0); */
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC_XOR with fetch 32bit",
+	.insns = {
+		/* r0 = (s64) -1 */
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 1),
+		/* val = 0x110; */
+		BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0x110),
+		/* old = atomic_fetch_xor(&val, 0x011); */
+		BPF_MOV32_IMM(BPF_REG_1, 0x011),
+		BPF_ATOMIC_FETCH_XOR(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+		/* if (old != 0x110) exit(3); */
+		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
+		BPF_MOV32_IMM(BPF_REG_0, 3),
+		BPF_EXIT_INSN(),
+		/* if (val != 0x101) exit(2); */
+		BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -4),
+		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x101, 2),
+		BPF_MOV32_IMM(BPF_REG_1, 2),
+		BPF_EXIT_INSN(),
+		/* Check R0 wasn't clobbered (for fear of x86 JIT bug)
+		 * It should be -1 so add 1 to get exit code.
+		 */
+		BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+		BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},