From patchwork Thu Sep 1 11:48:08 2022
From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org, john.fastabend@gmail.com, Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Subject: [PATCH v6 bpf-next 1/6] selftests: xsk: query for native XDP support
Date: Thu, 1 Sep 2022 13:48:08 +0200
Message-Id: <20220901114813.16275-2-maciej.fijalkowski@intel.com>

Currently, xdpxceiver assumes that the underlying device supports XDP
in native mode. That is fine for now, since the tests can only run on a
veth pair. A future commit is going to allow running the test suite
against physical devices, so let us query the device to see whether it
is capable of running XDP programs in native mode. This way xdpxceiver
will not try to run TEST_MODE_DRV if the device being tested does not
support it.
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
 tools/testing/selftests/bpf/xskxceiver.c | 39 ++++++++++++++++++++++--
 1 file changed, 37 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/xskxceiver.c b/tools/testing/selftests/bpf/xskxceiver.c
index 14b4737b223c..19f65bce7f65 100644
--- a/tools/testing/selftests/bpf/xskxceiver.c
+++ b/tools/testing/selftests/bpf/xskxceiver.c
@@ -99,6 +99,8 @@
 #include "xsk.h"
 #include "xskxceiver.h"
+#include <bpf/bpf.h>
+#include <linux/filter.h>
 #include "../kselftest.h"
 
 /* AF_XDP APIs were moved into libxdp and marked as deprecated in libbpf.
@@ -1712,10 +1714,40 @@ static void ifobject_delete(struct ifobject *ifobj)
 	free(ifobj);
 }
 
+static bool is_xdp_supported(struct ifobject *ifobject)
+{
+	int flags = XDP_FLAGS_DRV_MODE;
+
+	LIBBPF_OPTS(bpf_link_create_opts, opts, .flags = flags);
+	struct bpf_insn insns[2] = {
+		BPF_MOV64_IMM(BPF_REG_0, XDP_PASS),
+		BPF_EXIT_INSN()
+	};
+	int ifindex = if_nametoindex(ifobject->ifname);
+	int prog_fd, insn_cnt = ARRAY_SIZE(insns);
+	int err;
+
+	prog_fd = bpf_prog_load(BPF_PROG_TYPE_XDP, NULL, "GPL", insns, insn_cnt, NULL);
+	if (prog_fd < 0)
+		return false;
+
+	err = bpf_xdp_attach(ifindex, prog_fd, flags, NULL);
+	if (err) {
+		close(prog_fd);
+		return false;
+	}
+
+	bpf_xdp_detach(ifindex, flags, NULL);
+	close(prog_fd);
+
+	return true;
+}
+
 int main(int argc, char **argv)
 {
 	struct pkt_stream *pkt_stream_default;
 	struct ifobject *ifobj_tx, *ifobj_rx;
+	int modes = TEST_MODE_SKB + 1;
 	u32 i, j, failed_tests = 0;
 	struct test_spec test;
 
@@ -1743,15 +1775,18 @@ int main(int argc, char **argv)
 	init_iface(ifobj_rx, MAC2, MAC1, IP2, IP1, UDP_PORT2, UDP_PORT1,
 		   worker_testapp_validate_rx);
 
+	if (is_xdp_supported(ifobj_tx))
+		modes++;
+
 	test_spec_init(&test, ifobj_tx, ifobj_rx, 0);
 	pkt_stream_default = pkt_stream_generate(ifobj_tx->umem, DEFAULT_PKT_CNT, PKT_SIZE);
 	if (!pkt_stream_default)
 		exit_with_error(ENOMEM);
 	test.pkt_stream_default = pkt_stream_default;
 
-	ksft_set_plan(TEST_MODE_MAX * TEST_TYPE_MAX);
+	ksft_set_plan(modes * TEST_TYPE_MAX);
 
-	for (i = 0; i < TEST_MODE_MAX; i++)
+	for (i = 0; i < modes; i++)
 		for (j = 0; j < TEST_TYPE_MAX; j++) {
 			test_spec_init(&test, ifobj_tx, ifobj_rx, i);
 			run_pkt_test(&test, i, j);
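The probe above is easy to exercise outside the harness. Below is a minimal
standalone sketch of the same technique, assuming a system-installed libbpf
(v0.8+ for bpf_xdp_attach()) and a placeholder interface name; the
two-instruction XDP program is open-coded because the BPF_MOV64_IMM() /
BPF_EXIT_INSN() macros used in the patch come from kernel-internal headers:

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <net/if.h>
    #include <linux/bpf.h>
    #include <linux/if_link.h>
    #include <bpf/bpf.h>

    static bool native_xdp_supported(const char *ifname)
    {
            /* r0 = XDP_PASS; exit */
            struct bpf_insn insns[] = {
                    { .code = BPF_ALU64 | BPF_MOV | BPF_K,
                      .dst_reg = BPF_REG_0, .imm = XDP_PASS },
                    { .code = BPF_JMP | BPF_EXIT },
            };
            int ifindex = if_nametoindex(ifname);
            int prog_fd, err;

            if (!ifindex)
                    return false;

            prog_fd = bpf_prog_load(BPF_PROG_TYPE_XDP, NULL, "GPL", insns,
                                    sizeof(insns) / sizeof(insns[0]), NULL);
            if (prog_fd < 0)
                    return false;

            /* XDP_FLAGS_DRV_MODE forbids falling back to generic (SKB) mode */
            err = bpf_xdp_attach(ifindex, prog_fd, XDP_FLAGS_DRV_MODE, NULL);
            if (!err)
                    bpf_xdp_detach(ifindex, XDP_FLAGS_DRV_MODE, NULL);
            close(prog_fd);
            return !err;
    }

    int main(void)
    {
            /* "eth0" is a placeholder interface name */
            printf("native XDP: %s\n",
                   native_xdp_supported("eth0") ? "yes" : "no");
            return 0;
    }

Build with "gcc probe.c -lbpf" and run as root; a false result means only
TEST_MODE_SKB is usable on that device.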
From patchwork Thu Sep 1 11:48:09 2022
From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org, john.fastabend@gmail.com, Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Subject: [PATCH v6 bpf-next 2/6] selftests: xsk: introduce default Rx pkt stream
Date: Thu, 1 Sep 2022 13:48:09 +0200
Message-Id: <20220901114813.16275-3-maciej.fijalkowski@intel.com>

In order to prepare xdpxceiver for physical device testing, let us
introduce a default Rx pkt stream. The reason is that physical device
testing will use a UMEM of doubled size, where one half is used by Tx
and the other half by Rx. This means that pkt addresses will differ
between the Tx and Rx streams. The Rx thread will initialize the
xsk_umem_info::base_addr that is added here, so that pkt_set(), when
working on the Rx UMEM, will add this offset and the second half of the
UMEM space will be used. Note that currently base_addr is 0 on both
sides; a future commit will do the mentioned initialization.

Previously, veth based testing worked on separate UMEMs, so a single
default stream was fine.
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
 tools/testing/selftests/bpf/xskxceiver.c | 74 +++++++++++++++---------
 tools/testing/selftests/bpf/xskxceiver.h |  4 +-
 2 files changed, 51 insertions(+), 27 deletions(-)

diff --git a/tools/testing/selftests/bpf/xskxceiver.c b/tools/testing/selftests/bpf/xskxceiver.c
index 19f65bce7f65..fbf65135fef6 100644
--- a/tools/testing/selftests/bpf/xskxceiver.c
+++ b/tools/testing/selftests/bpf/xskxceiver.c
@@ -433,15 +433,16 @@ static void __test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx,
 		ifobj->use_poll = false;
 		ifobj->use_fill_ring = true;
 		ifobj->release_rx = true;
-		ifobj->pkt_stream = test->pkt_stream_default;
 		ifobj->validation_func = NULL;
 
 		if (i == 0) {
 			ifobj->rx_on = false;
 			ifobj->tx_on = true;
+			ifobj->pkt_stream = test->tx_pkt_stream_default;
 		} else {
 			ifobj->rx_on = true;
 			ifobj->tx_on = false;
+			ifobj->pkt_stream = test->rx_pkt_stream_default;
 		}
 
 		memset(ifobj->umem, 0, sizeof(*ifobj->umem));
@@ -465,12 +466,15 @@ static void __test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx,
 static void test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx,
 			   struct ifobject *ifobj_rx, enum test_mode mode)
 {
-	struct pkt_stream *pkt_stream;
+	struct pkt_stream *tx_pkt_stream;
+	struct pkt_stream *rx_pkt_stream;
 	u32 i;
 
-	pkt_stream = test->pkt_stream_default;
+	tx_pkt_stream = test->tx_pkt_stream_default;
+	rx_pkt_stream = test->rx_pkt_stream_default;
 	memset(test, 0, sizeof(*test));
-	test->pkt_stream_default = pkt_stream;
+	test->tx_pkt_stream_default = tx_pkt_stream;
+	test->rx_pkt_stream_default = rx_pkt_stream;
 
 	for (i = 0; i < MAX_INTERFACES; i++) {
 		struct ifobject *ifobj = i ? ifobj_rx : ifobj_tx;
@@ -531,16 +535,17 @@ static void pkt_stream_delete(struct pkt_stream *pkt_stream)
 static void pkt_stream_restore_default(struct test_spec *test)
 {
 	struct pkt_stream *tx_pkt_stream = test->ifobj_tx->pkt_stream;
+	struct pkt_stream *rx_pkt_stream = test->ifobj_rx->pkt_stream;
 
-	if (tx_pkt_stream != test->pkt_stream_default) {
+	if (tx_pkt_stream != test->tx_pkt_stream_default) {
 		pkt_stream_delete(test->ifobj_tx->pkt_stream);
-		test->ifobj_tx->pkt_stream = test->pkt_stream_default;
+		test->ifobj_tx->pkt_stream = test->tx_pkt_stream_default;
 	}
 
-	if (test->ifobj_rx->pkt_stream != test->pkt_stream_default &&
-	    test->ifobj_rx->pkt_stream != tx_pkt_stream)
+	if (rx_pkt_stream != test->rx_pkt_stream_default) {
 		pkt_stream_delete(test->ifobj_rx->pkt_stream);
-	test->ifobj_rx->pkt_stream = test->pkt_stream_default;
+		test->ifobj_rx->pkt_stream = test->rx_pkt_stream_default;
+	}
 }
 
 static struct pkt_stream *__pkt_stream_alloc(u32 nb_pkts)
@@ -563,7 +568,7 @@ static struct pkt_stream *__pkt_stream_alloc(u32 nb_pkts)
 
 static void pkt_set(struct xsk_umem_info *umem, struct pkt *pkt, u64 addr, u32 len)
 {
-	pkt->addr = addr;
+	pkt->addr = addr + umem->base_addr;
 	pkt->len = len;
 	if (len > umem->frame_size - XDP_PACKET_HEADROOM - MIN_PKT_SIZE * 2 - umem->frame_headroom)
 		pkt->valid = false;
@@ -602,22 +607,29 @@ static void pkt_stream_replace(struct test_spec *test, u32 nb_pkts, u32 pkt_len)
 
 	pkt_stream = pkt_stream_generate(test->ifobj_tx->umem, nb_pkts, pkt_len);
 	test->ifobj_tx->pkt_stream = pkt_stream;
+
+	pkt_stream = pkt_stream_generate(test->ifobj_rx->umem, nb_pkts, pkt_len);
 	test->ifobj_rx->pkt_stream = pkt_stream;
 }
 
-static void pkt_stream_replace_half(struct test_spec *test, u32 pkt_len, int offset)
+static void __pkt_stream_replace_half(struct ifobject *ifobj, u32 pkt_len,
+				      int offset)
 {
-	struct xsk_umem_info *umem = test->ifobj_tx->umem;
+	struct xsk_umem_info *umem = ifobj->umem;
 	struct pkt_stream *pkt_stream;
 	u32 i;
 
-	pkt_stream = pkt_stream_clone(umem, test->pkt_stream_default);
-	for (i = 1; i < test->pkt_stream_default->nb_pkts; i += 2)
+	pkt_stream = pkt_stream_clone(umem, ifobj->pkt_stream);
+	for (i = 1; i < ifobj->pkt_stream->nb_pkts; i += 2)
 		pkt_set(umem, &pkt_stream->pkts[i], (i % umem->num_frames) * umem->frame_size + offset,
 			pkt_len);
 
-	test->ifobj_tx->pkt_stream = pkt_stream;
-	test->ifobj_rx->pkt_stream = pkt_stream;
+	ifobj->pkt_stream = pkt_stream;
+}
+
+static void pkt_stream_replace_half(struct test_spec *test, u32 pkt_len, int offset)
+{
+	__pkt_stream_replace_half(test->ifobj_tx, pkt_len, offset);
+	__pkt_stream_replace_half(test->ifobj_rx, pkt_len, offset);
 }
 
 static void pkt_stream_receive_half(struct test_spec *test)
@@ -659,7 +671,8 @@ static struct pkt *pkt_generate(struct ifobject *ifobject, u32 pkt_nb)
 	return pkt;
 }
 
-static void pkt_stream_generate_custom(struct test_spec *test, struct pkt *pkts, u32 nb_pkts)
+static void __pkt_stream_generate_custom(struct ifobject *ifobj,
+					 struct pkt *pkts, u32 nb_pkts)
 {
 	struct pkt_stream *pkt_stream;
 	u32 i;
@@ -668,15 +681,20 @@ static void pkt_stream_generate_custom(struct test_spec *test, struct pkt *pkts,
 	if (!pkt_stream)
 		exit_with_error(ENOMEM);
 
-	test->ifobj_tx->pkt_stream = pkt_stream;
-	test->ifobj_rx->pkt_stream = pkt_stream;
-
 	for (i = 0; i < nb_pkts; i++) {
-		pkt_stream->pkts[i].addr = pkts[i].addr;
+		pkt_stream->pkts[i].addr = pkts[i].addr + ifobj->umem->base_addr;
 		pkt_stream->pkts[i].len = pkts[i].len;
 		pkt_stream->pkts[i].payload = i;
 		pkt_stream->pkts[i].valid = pkts[i].valid;
 	}
+
+	ifobj->pkt_stream = pkt_stream;
+}
+
+static void pkt_stream_generate_custom(struct test_spec *test, struct pkt *pkts, u32 nb_pkts)
+{
+	__pkt_stream_generate_custom(test->ifobj_tx, pkts, nb_pkts);
+	__pkt_stream_generate_custom(test->ifobj_rx, pkts, nb_pkts);
 }
 
 static void pkt_dump(void *pkt, u32 len)
@@ -1745,7 +1763,8 @@ static bool is_xdp_supported(struct ifobject *ifobject)
 
 int main(int argc, char **argv)
 {
-	struct pkt_stream *pkt_stream_default;
+	struct pkt_stream *rx_pkt_stream_default;
+	struct pkt_stream *tx_pkt_stream_default;
 	struct ifobject *ifobj_tx, *ifobj_rx;
 	int modes = TEST_MODE_SKB + 1;
 	u32 i, j, failed_tests = 0;
@@ -1779,10 +1798,12 @@ int main(int argc, char **argv)
 		modes++;
 
 	test_spec_init(&test, ifobj_tx, ifobj_rx, 0);
-	pkt_stream_default = pkt_stream_generate(ifobj_tx->umem, DEFAULT_PKT_CNT, PKT_SIZE);
-	if (!pkt_stream_default)
+	tx_pkt_stream_default = pkt_stream_generate(ifobj_tx->umem, DEFAULT_PKT_CNT, PKT_SIZE);
+	rx_pkt_stream_default = pkt_stream_generate(ifobj_rx->umem, DEFAULT_PKT_CNT, PKT_SIZE);
+	if (!tx_pkt_stream_default || !rx_pkt_stream_default)
 		exit_with_error(ENOMEM);
-	test.pkt_stream_default = pkt_stream_default;
+	test.tx_pkt_stream_default = tx_pkt_stream_default;
+	test.rx_pkt_stream_default = rx_pkt_stream_default;
 
 	ksft_set_plan(modes * TEST_TYPE_MAX);
 
@@ -1796,7 +1817,8 @@ int main(int argc, char **argv)
 			failed_tests++;
 		}
 
-	pkt_stream_delete(pkt_stream_default);
+	pkt_stream_delete(tx_pkt_stream_default);
+	pkt_stream_delete(rx_pkt_stream_default);
 	ifobject_delete(ifobj_tx);
 	ifobject_delete(ifobj_rx);

diff --git a/tools/testing/selftests/bpf/xskxceiver.h b/tools/testing/selftests/bpf/xskxceiver.h
index ee97576757a9..8d1c31f127e7 100644
--- a/tools/testing/selftests/bpf/xskxceiver.h
+++ b/tools/testing/selftests/bpf/xskxceiver.h
@@ -99,6 +99,7 @@ struct xsk_umem_info {
 	u32 frame_headroom;
 	void *buffer;
 	u32 frame_size;
+	u32 base_addr;
 	bool unaligned_mode;
 };
 
@@ -159,7 +160,8 @@ struct ifobject {
 struct test_spec {
 	struct ifobject *ifobj_tx;
 	struct ifobject *ifobj_rx;
-	struct pkt_stream *pkt_stream_default;
+	struct pkt_stream *tx_pkt_stream_default;
+	struct pkt_stream *rx_pkt_stream_default;
 	u16 total_steps;
 	u16 current_step;
 	u16 nb_sockets;
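A quick illustration of the offset scheme this patch sets up. The values
below are illustrative stand-ins for the selftest defaults (frame size
XSK_UMEM__DEFAULT_FRAME_SIZE, per-direction buffer count
DEFAULT_UMEM_BUFFERS); since pkt_set() computes pkt->addr as
addr + umem->base_addr, the same logical stream lands in different UMEM
halves for Tx and Rx once base_addr is set:

    #include <stdio.h>

    #define FRAME_SIZE 4096 /* stands in for XSK_UMEM__DEFAULT_FRAME_SIZE */
    #define NUM_FRAMES 1024 /* stands in for DEFAULT_UMEM_BUFFERS */

    int main(void)
    {
            /* Rx half starts right after the Tx half of the doubled UMEM */
            unsigned long long rx_base = (unsigned long long)NUM_FRAMES * FRAME_SIZE;
            int i;

            for (i = 0; i < 3; i++)
                    printf("pkt %d: tx addr=%#llx rx addr=%#llx\n", i,
                           (unsigned long long)i * FRAME_SIZE,
                           rx_base + (unsigned long long)i * FRAME_SIZE);
            return 0;
    }

With base_addr still 0 on both sides (as in this patch), both columns
coincide; the Rx thread starts setting it in the physical-device patch
later in the series.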
From patchwork Thu Sep 1 11:48:10 2022
From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org, john.fastabend@gmail.com, Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Subject: [PATCH v6 bpf-next 3/6] selftests: xsk: increase chars for interface name to 16
Date: Thu, 1 Sep 2022 13:48:10 +0200
Message-Id: <20220901114813.16275-4-maciej.fijalkowski@intel.com>

Increase the limit so that a name such as "enp240s0f0" can be used with
xskxceiver. While at it, also extend the character count for the netns
name. Note that 16 matches the kernel's IFNAMSIZ bound on interface
names, terminating NUL included.
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
 tools/testing/selftests/bpf/xskxceiver.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/xskxceiver.h b/tools/testing/selftests/bpf/xskxceiver.h
index 8d1c31f127e7..04b298c72f67 100644
--- a/tools/testing/selftests/bpf/xskxceiver.h
+++ b/tools/testing/selftests/bpf/xskxceiver.h
@@ -29,8 +29,8 @@
 #define TEST_FAILURE -1
 #define TEST_CONTINUE 1
 #define MAX_INTERFACES 2
-#define MAX_INTERFACE_NAME_CHARS 7
-#define MAX_INTERFACES_NAMESPACE_CHARS 10
+#define MAX_INTERFACE_NAME_CHARS 16
+#define MAX_INTERFACES_NAMESPACE_CHARS 16
 #define MAX_SOCKETS 2
 #define MAX_TEST_NAME_SIZE 32
 #define MAX_TEARDOWN_ITER 10
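A tiny sketch of why 16 is the natural bound, assuming C11 and glibc's
net/if.h (where IF_NAMESIZE, the userspace spelling of IFNAMSIZ, is 16):

    #include <assert.h>
    #include <net/if.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_INTERFACE_NAME_CHARS 16 /* mirrors the new xskxceiver.h value */

    int main(void)
    {
            /* kernel interface names, NUL included, never exceed IF_NAMESIZE */
            static_assert(MAX_INTERFACE_NAME_CHARS == IF_NAMESIZE, "keep in sync");
            printf("\"enp240s0f0\" fits: %d\n",
                   strnlen("enp240s0f0", IF_NAMESIZE) < IF_NAMESIZE);
            return 0;
    }

Any name the kernel itself could have created therefore fits in the new
buffers without truncation.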
From patchwork Thu Sep 1 11:48:11 2022
From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org, john.fastabend@gmail.com, Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Subject: [PATCH v6 bpf-next 4/6] selftests: xsk: add support for executing tests on physical device
Date: Thu, 1 Sep 2022 13:48:11 +0200
Message-Id: <20220901114813.16275-5-maciej.fijalkowski@intel.com>

Currently, the architecture of xdpxceiver is designed strictly for
conducting veth based tests. A veth pair is created together with a
network namespace, and one of the veth interfaces is moved to that
netns. Then, separate threads for Tx and Rx are spawned to exercise
this setup.

The infrastructure described above cannot be used for testing AF_XDP
support on physical devices, where testing is conducted on a single
network interface and the same queue. Xskxceiver needs to be extended
to distinguish between veth tests and physical interface tests.

Since the same iface/queue id pair will be used by both the Tx and Rx
threads for physical device testing, the Tx thread, which happens to
run after the Rx thread, is going to create its XSK socket with the
shared umem flag. In order to track this setting throughout the
lifetime of the spawned threads, introduce a 'shared_umem' boolean to
struct ifobject and set it to true when xdpxceiver is run against a
physical device. In that case the UMEM size needs to be doubled, so
that half of it is used by the Rx thread and the other half by the Tx
thread. For two-step test types, the value of the XSKMAP element under
key 0 has to be updated, as there is now another socket for the second
step. Also, to avoid race conditions when destroying XSK resources,
move this activity to the main thread, after the spawned Rx and Tx
threads have finished their job. This way it is possible to gracefully
remove the shared umem without introducing additional synchronization
mechanisms.

To run the xsk selftests suite on a physical device, append "-i $IFACE"
when invoking test_xsk.sh; for veth based tests, simply skip it. When
"-i $IFACE" is in place, under the hood test_xsk.sh will use $IFACE for
both interfaces supplied to xdpxceiver, which in turn will interpret
this execution of the test suite as being for a physical device.

Note that currently this makes it possible to test only SKB and DRV
mode (in case the underlying device has native XDP support); ZC testing
support is added in a later patch.
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
 tools/testing/selftests/bpf/test_xsk.sh  |  52 ++++--
 tools/testing/selftests/bpf/xskxceiver.c | 204 +++++++++++++++--------
 tools/testing/selftests/bpf/xskxceiver.h |   1 +
 3 files changed, 170 insertions(+), 87 deletions(-)

diff --git a/tools/testing/selftests/bpf/test_xsk.sh b/tools/testing/selftests/bpf/test_xsk.sh
index 096a957594cd..d821fd098504 100755
--- a/tools/testing/selftests/bpf/test_xsk.sh
+++ b/tools/testing/selftests/bpf/test_xsk.sh
@@ -73,14 +73,20 @@
 #
 # Run and dump packet contents:
 # sudo ./test_xsk.sh -D
+#
+# Run test suite for physical device in loopback mode
+# sudo ./test_xsk.sh -i IFACE
 
 . xsk_prereqs.sh
 
-while getopts "vD" flag
+ETH=""
+
+while getopts "vDi:" flag
 do
 	case "${flag}" in
 		v) verbose=1;;
 		D) dump_pkts=1;;
+		i) ETH=${OPTARG};;
 	esac
 done
@@ -132,18 +138,25 @@ setup_vethPairs() {
 	ip link set ${VETH0} up
 }
 
-validate_root_exec
-validate_veth_support ${VETH0}
-validate_ip_utility
-setup_vethPairs
-
-retval=$?
-if [ $retval -ne 0 ]; then
-	test_status $retval "${TEST_NAME}"
-	cleanup_exit ${VETH0} ${VETH1} ${NS1}
-	exit $retval
+if [ ! -z $ETH ]; then
+	VETH0=${ETH}
+	VETH1=${ETH}
+	NS1=""
+else
+	validate_root_exec
+	validate_veth_support ${VETH0}
+	validate_ip_utility
+	setup_vethPairs
+
+	retval=$?
+	if [ $retval -ne 0 ]; then
+		test_status $retval "${TEST_NAME}"
+		cleanup_exit ${VETH0} ${VETH1} ${NS1}
+		exit $retval
+	fi
 fi
+
 if [[ $verbose -eq 1 ]]; then
 	ARGS+="-v "
 fi
@@ -152,26 +165,33 @@ if [[ $dump_pkts -eq 1 ]]; then
 	ARGS="-D "
 fi
 
+retval=$?
 test_status $retval "${TEST_NAME}"
 
 ## START TESTS
 
 statusList=()
 
-TEST_NAME="XSK_SELFTESTS_SOFTIRQ"
+TEST_NAME="XSK_SELFTESTS_${VETH0}_SOFTIRQ"
 
 exec_xskxceiver
 
-cleanup_exit ${VETH0} ${VETH1} ${NS1}
+if [ -z $ETH ]; then
+	cleanup_exit ${VETH0} ${VETH1} ${NS1}
+fi
 
-TEST_NAME="XSK_SELFTESTS_BUSY_POLL"
+TEST_NAME="XSK_SELFTESTS_${VETH0}_BUSY_POLL"
 busy_poll=1
 
-setup_vethPairs
+if [ -z $ETH ]; then
+	setup_vethPairs
+fi
 exec_xskxceiver
 
 ## END TESTS
 
-cleanup_exit ${VETH0} ${VETH1} ${NS1}
+if [ -z $ETH ]; then
+	cleanup_exit ${VETH0} ${VETH1} ${NS1}
+fi
 
 failures=0
 echo -e "\nSummary:"

diff --git a/tools/testing/selftests/bpf/xskxceiver.c b/tools/testing/selftests/bpf/xskxceiver.c
index fbf65135fef6..b54b844cae89 100644
--- a/tools/testing/selftests/bpf/xskxceiver.c
+++ b/tools/testing/selftests/bpf/xskxceiver.c
@@ -301,8 +301,8 @@ static void enable_busy_poll(struct xsk_socket_info *xsk)
 		exit_with_error(errno);
 }
 
-static int xsk_configure_socket(struct xsk_socket_info *xsk, struct xsk_umem_info *umem,
-				struct ifobject *ifobject, bool shared)
+static int __xsk_configure_socket(struct xsk_socket_info *xsk, struct xsk_umem_info *umem,
+				  struct ifobject *ifobject, bool shared)
 {
 	struct xsk_socket_config cfg = {};
 	struct xsk_ring_cons *rxr;
@@ -448,6 +448,9 @@ static void __test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx,
 		memset(ifobj->umem, 0, sizeof(*ifobj->umem));
 		ifobj->umem->num_frames = DEFAULT_UMEM_BUFFERS;
 		ifobj->umem->frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE;
+		if (ifobj->shared_umem && ifobj->rx_on)
+			ifobj->umem->base_addr = DEFAULT_UMEM_BUFFERS *
+				XSK_UMEM__DEFAULT_FRAME_SIZE;
 
 		for (j = 0; j < MAX_SOCKETS; j++) {
 			memset(&ifobj->xsk_arr[j], 0, sizeof(ifobj->xsk_arr[j]));
@@ -1146,6 +1149,70 @@ static int validate_tx_invalid_descs(struct ifobject *ifobject)
 	return TEST_PASS;
 }
 
+static void xsk_configure_socket(struct test_spec *test, struct ifobject *ifobject,
+				 struct xsk_umem_info *umem, bool tx)
+{
+	int i, ret;
+
+	for (i = 0; i < test->nb_sockets; i++) {
+		bool shared = (ifobject->shared_umem && tx) ? true : !!i;
+		u32 ctr = 0;
+
+		while (ctr++ < SOCK_RECONF_CTR) {
+			ret = __xsk_configure_socket(&ifobject->xsk_arr[i], umem,
+						     ifobject, shared);
+			if (!ret)
+				break;
+
+			/* Retry if it fails as xsk_socket__create() is asynchronous */
+			if (ctr >= SOCK_RECONF_CTR)
+				exit_with_error(-ret);
+			usleep(USLEEP_MAX);
+		}
+		if (ifobject->busy_poll)
+			enable_busy_poll(&ifobject->xsk_arr[i]);
+	}
+}
+
+static void thread_common_ops_tx(struct test_spec *test, struct ifobject *ifobject)
+{
+	xsk_configure_socket(test, ifobject, test->ifobj_rx->umem, true);
+	ifobject->xsk = &ifobject->xsk_arr[0];
+	ifobject->xsk_map_fd = test->ifobj_rx->xsk_map_fd;
+	memcpy(ifobject->umem, test->ifobj_rx->umem, sizeof(struct xsk_umem_info));
+}
+
+static void xsk_populate_fill_ring(struct xsk_umem_info *umem, struct pkt_stream *pkt_stream)
+{
+	u32 idx = 0, i, buffers_to_fill;
+	int ret;
+
+	if (umem->num_frames < XSK_RING_PROD__DEFAULT_NUM_DESCS)
+		buffers_to_fill = umem->num_frames;
+	else
+		buffers_to_fill = XSK_RING_PROD__DEFAULT_NUM_DESCS;
+
+	ret = xsk_ring_prod__reserve(&umem->fq, buffers_to_fill, &idx);
+	if (ret != buffers_to_fill)
+		exit_with_error(ENOSPC);
+	for (i = 0; i < buffers_to_fill; i++) {
+		u64 addr;
+
+		if (pkt_stream->use_addr_for_fill) {
+			struct pkt *pkt = pkt_stream_get_pkt(pkt_stream, i);
+
+			if (!pkt)
+				break;
+			addr = pkt->addr;
+		} else {
+			addr = i * umem->frame_size;
+		}
+
+		*xsk_ring_prod__fill_addr(&umem->fq, idx++) = addr;
+	}
+	xsk_ring_prod__submit(&umem->fq, buffers_to_fill);
+}
+
 static void thread_common_ops(struct test_spec *test, struct ifobject *ifobject)
 {
 	u64 umem_sz = ifobject->umem->num_frames * ifobject->umem->frame_size;
@@ -1153,13 +1220,15 @@ static void thread_common_ops(struct test_spec *test, struct ifobject *ifobject)
 	LIBBPF_OPTS(bpf_xdp_query_opts, opts);
 	int ret, ifindex;
 	void *bufs;
-	u32 i;
 
 	ifobject->ns_fd = switch_namespace(ifobject->nsname);
 
 	if (ifobject->umem->unaligned_mode)
 		mmap_flags |= MAP_HUGETLB;
 
+	if (ifobject->shared_umem)
+		umem_sz *= 2;
+
 	bufs = mmap(NULL, umem_sz, PROT_READ | PROT_WRITE, mmap_flags, -1, 0);
 	if (bufs == MAP_FAILED)
 		exit_with_error(errno);
@@ -1168,24 +1237,9 @@ static void thread_common_ops(struct test_spec *test, struct ifobject *ifobject)
 	if (ret)
 		exit_with_error(-ret);
 
-	for (i = 0; i < test->nb_sockets; i++) {
-		u32 ctr = 0;
-
-		while (ctr++ < SOCK_RECONF_CTR) {
-			ret = xsk_configure_socket(&ifobject->xsk_arr[i], ifobject->umem,
-						   ifobject, !!i);
-			if (!ret)
-				break;
-
-			/* Retry if it fails as xsk_socket__create() is asynchronous */
-			if (ctr >= SOCK_RECONF_CTR)
-				exit_with_error(-ret);
-			usleep(USLEEP_MAX);
-		}
+	xsk_populate_fill_ring(ifobject->umem, ifobject->pkt_stream);
 
-		if (ifobject->busy_poll)
-			enable_busy_poll(&ifobject->xsk_arr[i]);
-	}
+	xsk_configure_socket(test, ifobject, ifobject->umem, false);
 
 	ifobject->xsk = &ifobject->xsk_arr[0];
@@ -1221,22 +1275,18 @@ static void thread_common_ops(struct test_spec *test, struct ifobject *ifobject)
 		exit_with_error(-ret);
 }
 
-static void testapp_cleanup_xsk_res(struct ifobject *ifobj)
-{
-	print_verbose("Destroying socket\n");
-	xsk_socket__delete(ifobj->xsk->xsk);
-	munmap(ifobj->umem->buffer, ifobj->umem->num_frames * ifobj->umem->frame_size);
-	xsk_umem__delete(ifobj->umem->umem);
-}
-
 static void *worker_testapp_validate_tx(void *arg)
 {
 	struct test_spec *test = (struct test_spec *)arg;
 	struct ifobject *ifobject = test->ifobj_tx;
 	int err;
 
-	if (test->current_step == 1)
-		thread_common_ops(test, ifobject);
+	if (test->current_step == 1) {
+		if (!ifobject->shared_umem)
+			thread_common_ops(test, ifobject);
+		else
+			thread_common_ops_tx(test, ifobject);
+	}
 
 	print_verbose("Sending %d packets on interface %s\n", ifobject->pkt_stream->nb_pkts,
 		      ifobject->ifname);
@@ -1247,53 +1297,23 @@ static void *worker_testapp_validate_tx(void *arg)
 	if (err)
 		report_failure(test);
 
-	if (test->total_steps == test->current_step || err)
-		testapp_cleanup_xsk_res(ifobject);
 	pthread_exit(NULL);
 }
 
-static void xsk_populate_fill_ring(struct xsk_umem_info *umem, struct pkt_stream *pkt_stream)
-{
-	u32 idx = 0, i, buffers_to_fill;
-	int ret;
-
-	if (umem->num_frames < XSK_RING_PROD__DEFAULT_NUM_DESCS)
-		buffers_to_fill = umem->num_frames;
-	else
-		buffers_to_fill = XSK_RING_PROD__DEFAULT_NUM_DESCS;
-
-	ret = xsk_ring_prod__reserve(&umem->fq, buffers_to_fill, &idx);
-	if (ret != buffers_to_fill)
-		exit_with_error(ENOSPC);
-	for (i = 0; i < buffers_to_fill; i++) {
-		u64 addr;
-
-		if (pkt_stream->use_addr_for_fill) {
-			struct pkt *pkt = pkt_stream_get_pkt(pkt_stream, i);
-
-			if (!pkt)
-				break;
-			addr = pkt->addr;
-		} else {
-			addr = i * umem->frame_size;
-		}
-
-		*xsk_ring_prod__fill_addr(&umem->fq, idx++) = addr;
-	}
-	xsk_ring_prod__submit(&umem->fq, buffers_to_fill);
-}
-
 static void *worker_testapp_validate_rx(void *arg)
 {
 	struct test_spec *test = (struct test_spec *)arg;
 	struct ifobject *ifobject = test->ifobj_rx;
 	struct pollfd fds = { };
+	int id = 0;
 	int err;
 
-	if (test->current_step == 1)
+	if (test->current_step == 1) {
 		thread_common_ops(test, ifobject);
-
-	xsk_populate_fill_ring(ifobject->umem, ifobject->pkt_stream);
+	} else {
+		bpf_map_delete_elem(ifobject->xsk_map_fd, &id);
+		xsk_socket__update_xskmap(ifobject->xsk->xsk, ifobject->xsk_map_fd);
+	}
 
 	fds.fd = xsk_socket__fd(ifobject->xsk->xsk);
 	fds.events = POLLIN;
@@ -1311,25 +1331,38 @@ static void *worker_testapp_validate_rx(void *arg)
 		pthread_mutex_unlock(&pacing_mutex);
 	}
 
-	if (test->total_steps == test->current_step || err)
-		testapp_cleanup_xsk_res(ifobject);
 	pthread_exit(NULL);
 }
 
+static void testapp_clean_xsk_umem(struct ifobject *ifobj)
+{
+	u64 umem_sz = ifobj->umem->num_frames * ifobj->umem->frame_size;
+
+	if (ifobj->shared_umem)
+		umem_sz *= 2;
+
+	xsk_umem__delete(ifobj->umem->umem);
+	munmap(ifobj->umem->buffer, umem_sz);
+}
+
 static int testapp_validate_traffic_single_thread(struct test_spec *test, struct ifobject *ifobj,
 						  enum test_type type)
 {
+	bool old_shared_umem = ifobj->shared_umem;
 	pthread_t t0;
 
 	if (pthread_barrier_init(&barr, NULL, 2))
 		exit_with_error(errno);
 
 	test->current_step++;
-	if (type == TEST_TYPE_POLL_RXQ_TMOUT)
+	if (type == TEST_TYPE_POLL_RXQ_TMOUT)
 		pkt_stream_reset(ifobj->pkt_stream);
 	pkts_in_flight = 0;
 
-	/*Spawn thread */
+	test->ifobj_rx->shared_umem = false;
+	test->ifobj_tx->shared_umem = false;
+
+	/* Spawn thread */
 	pthread_create(&t0, NULL, ifobj->func_ptr, test);
 
 	if (type != TEST_TYPE_POLL_TXQ_TMOUT)
@@ -1340,6 +1373,14 @@ static int testapp_validate_traffic_single_thread(struct test_spec *test, struct
 
 	pthread_join(t0, NULL);
 
+	if (test->total_steps == test->current_step || test->fail) {
+		xsk_socket__delete(ifobj->xsk->xsk);
+		testapp_clean_xsk_umem(ifobj);
+	}
+
+	test->ifobj_rx->shared_umem = old_shared_umem;
+	test->ifobj_tx->shared_umem = old_shared_umem;
+
 	return !!test->fail;
 }
@@ -1369,6 +1410,14 @@ static int testapp_validate_traffic(struct test_spec *test)
 	pthread_join(t1, NULL);
 	pthread_join(t0, NULL);
 
+	if (test->total_steps == test->current_step || test->fail) {
+		xsk_socket__delete(ifobj_tx->xsk->xsk);
+		xsk_socket__delete(ifobj_rx->xsk->xsk);
+		testapp_clean_xsk_umem(ifobj_rx);
+		if (!ifobj_tx->shared_umem)
+			testapp_clean_xsk_umem(ifobj_tx);
+	}
+
 	return !!test->fail;
 }
@@ -1448,9 +1497,9 @@ static void testapp_headroom(struct test_spec *test)
 static void testapp_stats_rx_dropped(struct test_spec *test)
 {
 	test_spec_set_name(test, "STAT_RX_DROPPED");
+	pkt_stream_replace_half(test, MIN_PKT_SIZE * 4, 0);
 	test->ifobj_rx->umem->frame_headroom = test->ifobj_rx->umem->frame_size -
 		XDP_PACKET_HEADROOM - MIN_PKT_SIZE * 3;
-	pkt_stream_replace_half(test, MIN_PKT_SIZE * 4, 0);
 	pkt_stream_receive_half(test);
 	test->ifobj_rx->validation_func = validate_rx_dropped;
 	testapp_validate_traffic(test);
@@ -1573,6 +1622,11 @@ static void testapp_invalid_desc(struct test_spec *test)
 		pkts[7].valid = false;
 	}
 
+	if (test->ifobj_tx->shared_umem) {
+		pkts[4].addr += UMEM_SIZE;
+		pkts[5].addr += UMEM_SIZE;
+	}
+
 	pkt_stream_generate_custom(test, pkts, ARRAY_SIZE(pkts));
 	testapp_validate_traffic(test);
 	pkt_stream_restore_default(test);
@@ -1769,6 +1823,7 @@ int main(int argc, char **argv)
 	int modes = TEST_MODE_SKB + 1;
 	u32 i, j, failed_tests = 0;
 	struct test_spec test;
+	bool shared_umem;
 
 	/* Use libbpf 1.0 API mode */
 	libbpf_set_strict_mode(LIBBPF_STRICT_ALL);
@@ -1783,6 +1838,10 @@ int main(int argc, char **argv)
 	setlocale(LC_ALL, "");
 
 	parse_command_line(ifobj_tx, ifobj_rx, argc, argv);
+	shared_umem = !strcmp(ifobj_tx->ifname, ifobj_rx->ifname);
+
+	ifobj_tx->shared_umem = shared_umem;
+	ifobj_rx->shared_umem = shared_umem;
 
 	if (!validate_interface(ifobj_tx) || !validate_interface(ifobj_rx)) {
 		usage(basename(argv[0]));
@@ -1819,6 +1878,9 @@ int main(int argc, char **argv)
 
 	pkt_stream_delete(tx_pkt_stream_default);
 	pkt_stream_delete(rx_pkt_stream_default);
+	free(ifobj_rx->umem);
+	if (!ifobj_tx->shared_umem)
+		free(ifobj_tx->umem);
 	ifobject_delete(ifobj_tx);
 	ifobject_delete(ifobj_rx);

diff --git a/tools/testing/selftests/bpf/xskxceiver.h b/tools/testing/selftests/bpf/xskxceiver.h
index 04b298c72f67..11f017785986 100644
--- a/tools/testing/selftests/bpf/xskxceiver.h
+++ b/tools/testing/selftests/bpf/xskxceiver.h
@@ -153,6 +153,7 @@ struct ifobject {
 	bool busy_poll;
 	bool use_fill_ring;
 	bool release_rx;
+	bool shared_umem;
 	u8 dst_mac[ETH_ALEN];
 	u8 src_mac[ETH_ALEN];
 };
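For reference, a condensed sketch of the shared-UMEM setup this patch
performs inside the harness, written against the xsk_socket__create_shared()
helper from the libbpf-era xsk.h API (the same family the selftests bundle;
libxdp exposes identical calls). "eth0" and queue 0 are placeholders; the
extra fill/completion rings are only consumed by the kernel when sharing
across different queues, while on the same <iface, queue> pair the first
socket's rings are reused:

    #include <sys/mman.h>
    #include "xsk.h" /* helper API bundled with the bpf selftests */

    #define NUM_FRAMES 1024 /* illustrative; mirrors DEFAULT_UMEM_BUFFERS */
    #define FRAME_SIZE 4096 /* XSK_UMEM__DEFAULT_FRAME_SIZE */
    #define UMEM_SZ ((unsigned long long)NUM_FRAMES * FRAME_SIZE * 2) /* Tx + Rx halves */

    int main(void)
    {
            struct xsk_ring_prod fq, fq2, tx;
            struct xsk_ring_cons cq, cq2, rx;
            struct xsk_socket *rx_xsk, *tx_xsk;
            struct xsk_umem *umem;
            void *bufs;

            bufs = mmap(NULL, UMEM_SZ, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
            if (bufs == MAP_FAILED)
                    return 1;

            /* one UMEM backing both halves */
            if (xsk_umem__create(&umem, bufs, UMEM_SZ, &fq, &cq, NULL))
                    return 1;

            /* Rx socket owns the binding on <"eth0", queue 0> */
            if (xsk_socket__create(&rx_xsk, "eth0", 0, umem, &rx, NULL, NULL))
                    return 1;

            /* Tx socket shares the UMEM on the same <iface, queue> */
            if (xsk_socket__create_shared(&tx_xsk, "eth0", 0, umem, NULL, &tx,
                                          &fq2, &cq2, NULL))
                    return 1;

            /* ... run traffic: Tx half in [0, UMEM_SZ/2), Rx half above it ... */

            /* teardown order, as the patch enforces: sockets, then UMEM, then mapping */
            xsk_socket__delete(tx_xsk);
            xsk_socket__delete(rx_xsk);
            xsk_umem__delete(umem);
            munmap(bufs, UMEM_SZ);
            return 0;
    }

This is the same ordering the patch centralizes in the main thread so that
neither worker tears the shared UMEM down underneath the other.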
From patchwork Thu Sep 1 11:48:12 2022
From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org, john.fastabend@gmail.com, Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Subject: [PATCH v6 bpf-next 5/6] selftests: xsk: make sure single threaded test terminates
Date: Thu, 1 Sep 2022 13:48:12 +0200
Message-Id: <20220901114813.16275-6-maciej.fijalkowski@intel.com>

For single threaded poll tests, call pthread_kill() from the main
thread so that we are sure the worker thread has finished its job and
it is possible to proceed with the next test types from the test suite.
It was observed that on some platforms it takes a bit longer for the
worker thread to exit, and in that case the next test case sees the
device as busy.
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
 tools/testing/selftests/bpf/xskxceiver.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/tools/testing/selftests/bpf/xskxceiver.c b/tools/testing/selftests/bpf/xskxceiver.c
index b54b844cae89..74b21ddf5a98 100644
--- a/tools/testing/selftests/bpf/xskxceiver.c
+++ b/tools/testing/selftests/bpf/xskxceiver.c
@@ -1345,6 +1345,11 @@ static void testapp_clean_xsk_umem(struct ifobject *ifobj)
 	munmap(ifobj->umem->buffer, umem_sz);
 }
 
+static void handler(int signum)
+{
+	pthread_exit(NULL);
+}
+
 static int testapp_validate_traffic_single_thread(struct test_spec *test, struct ifobject *ifobj,
 						  enum test_type type)
 {
@@ -1362,6 +1367,7 @@ static int testapp_validate_traffic_single_thread(struct test_spec *test, struct
 	test->ifobj_rx->shared_umem = false;
 	test->ifobj_tx->shared_umem = false;
 
+	signal(SIGUSR1, handler);
 	/* Spawn thread */
 	pthread_create(&t0, NULL, ifobj->func_ptr, test);
 
@@ -1371,6 +1377,7 @@ static int testapp_validate_traffic_single_thread(struct test_spec *test, struct
 	if (pthread_barrier_destroy(&barr))
 		exit_with_error(errno);
 
+	pthread_kill(t0, SIGUSR1);
 	pthread_join(t0, NULL);
 
 	if (test->total_steps == test->current_step || test->fail) {
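The pattern is easy to reproduce in isolation. A minimal standalone sketch,
independent of the harness: a worker blocked in poll() is nudged with
SIGUSR1, whose handler calls pthread_exit() on the signaled thread, after
which the join is guaranteed to complete:

    #include <poll.h>
    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void handler(int signum)
    {
            (void)signum;
            pthread_exit(NULL); /* terminates only the thread that got the signal */
    }

    static void *worker(void *arg)
    {
            (void)arg;
            /* emulate an Rx thread blocked waiting for traffic that never comes */
            poll(NULL, 0, -1);
            return NULL;
    }

    int main(void)
    {
            pthread_t t0;

            signal(SIGUSR1, handler);
            pthread_create(&t0, NULL, worker, NULL);
            sleep(1); /* give the worker time to block */

            pthread_kill(t0, SIGUSR1); /* directed at t0, not the whole process */
            pthread_join(t0, NULL);
            printf("worker reaped\n");
            return 0;
    }

Because pthread_kill() targets a specific thread, the main thread can
force the join to return without tearing down any other test state.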
From patchwork Thu Sep 1 11:48:13 2022
From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org, john.fastabend@gmail.com, Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Subject: [PATCH v6 bpf-next 6/6] selftests: xsk: add support for zero copy testing
Date: Thu, 1 Sep 2022 13:48:13 +0200
Message-Id: <20220901114813.16275-7-maciej.fijalkowski@intel.com>

Introduce a new mode to xdpxceiver responsible for testing AF_XDP zero
copy support of the driver that serves the underlying physical device.
When setting up the test suite, determine whether the driver has ZC
support by trying to bind an XSK ZC socket to the interface. If that
succeeds, interpret it as ZC support being in place and run the softirq
and busy poll tests in zero copy mode as well. Note that the Rx dropped
tests are skipped, since the ZC path does not touch the rx_dropped stat
at all.

Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
 tools/testing/selftests/bpf/xskxceiver.c | 76 ++++++++++++++++++++++--
 tools/testing/selftests/bpf/xskxceiver.h |  2 +
 2 files changed, 74 insertions(+), 4 deletions(-)
"BUSY-POLL " : "" +static char *mode_string(struct test_spec *test) +{ + switch (test->mode) { + case TEST_MODE_SKB: + return "SKB"; + case TEST_MODE_DRV: + return "DRV"; + case TEST_MODE_ZC: + return "ZC"; + default: + return "BOGUS"; + } +} static void report_failure(struct test_spec *test) { @@ -322,6 +333,51 @@ static int __xsk_configure_socket(struct xsk_socket_info *xsk, struct xsk_umem_i return xsk_socket__create(&xsk->xsk, ifobject->ifname, 0, umem->umem, rxr, txr, &cfg); } +static bool ifobj_zc_avail(struct ifobject *ifobject) +{ + size_t umem_sz = DEFAULT_UMEM_BUFFERS * XSK_UMEM__DEFAULT_FRAME_SIZE; + int mmap_flags = MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE; + struct xsk_socket_info *xsk; + struct xsk_umem_info *umem; + bool zc_avail = false; + void *bufs; + int ret; + + bufs = mmap(NULL, umem_sz, PROT_READ | PROT_WRITE, mmap_flags, -1, 0); + if (bufs == MAP_FAILED) + exit_with_error(errno); + + umem = calloc(1, sizeof(struct xsk_umem_info)); + if (!umem) { + munmap(bufs, umem_sz); + exit_with_error(-ENOMEM); + } + umem->frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE; + ret = xsk_configure_umem(umem, bufs, umem_sz); + if (ret) + exit_with_error(-ret); + + xsk = calloc(1, sizeof(struct xsk_socket_info)); + if (!xsk) + goto out; + ifobject->xdp_flags = XDP_FLAGS_UPDATE_IF_NOEXIST; + ifobject->xdp_flags |= XDP_FLAGS_DRV_MODE; + ifobject->bind_flags = XDP_USE_NEED_WAKEUP | XDP_ZEROCOPY; + ifobject->rx_on = true; + xsk->rxqsize = XSK_RING_CONS__DEFAULT_NUM_DESCS; + ret = __xsk_configure_socket(xsk, umem, ifobject, false); + if (!ret) + zc_avail = true; + + xsk_socket__delete(xsk->xsk); + free(xsk); +out: + munmap(umem->buffer, umem_sz); + xsk_umem__delete(umem->umem); + free(umem); + return zc_avail; +} + static struct option long_options[] = { {"interface", required_argument, 0, 'i'}, {"busy-poll", no_argument, 0, 'b'}, @@ -488,9 +544,14 @@ static void test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx, else ifobj->xdp_flags |= XDP_FLAGS_DRV_MODE; - ifobj->bind_flags = XDP_USE_NEED_WAKEUP | XDP_COPY; + ifobj->bind_flags = XDP_USE_NEED_WAKEUP; + if (mode == TEST_MODE_ZC) + ifobj->bind_flags |= XDP_ZEROCOPY; + else + ifobj->bind_flags |= XDP_COPY; } + test->mode = mode; __test_spec_init(test, ifobj_tx, ifobj_rx); } @@ -1664,6 +1725,10 @@ static void run_pkt_test(struct test_spec *test, enum test_mode mode, enum test_ { switch (type) { case TEST_TYPE_STATS_RX_DROPPED: + if (mode == TEST_MODE_ZC) { + ksft_test_result_skip("Can not run RX_DROPPED test for ZC mode\n"); + return; + } testapp_stats_rx_dropped(test); break; case TEST_TYPE_STATS_TX_INVALID_DESCS: @@ -1860,8 +1925,11 @@ int main(int argc, char **argv) init_iface(ifobj_rx, MAC2, MAC1, IP2, IP1, UDP_PORT2, UDP_PORT1, worker_testapp_validate_rx); - if (is_xdp_supported(ifobj_tx)) + if (is_xdp_supported(ifobj_tx)) { modes++; + if (ifobj_zc_avail(ifobj_tx)) + modes++; + } test_spec_init(&test, ifobj_tx, ifobj_rx, 0); tx_pkt_stream_default = pkt_stream_generate(ifobj_tx->umem, DEFAULT_PKT_CNT, PKT_SIZE); diff --git a/tools/testing/selftests/bpf/xskxceiver.h b/tools/testing/selftests/bpf/xskxceiver.h index 11f017785986..edb76d2def9f 100644 --- a/tools/testing/selftests/bpf/xskxceiver.h +++ b/tools/testing/selftests/bpf/xskxceiver.h @@ -62,6 +62,7 @@ enum test_mode { TEST_MODE_SKB, TEST_MODE_DRV, + TEST_MODE_ZC, TEST_MODE_MAX }; @@ -167,6 +168,7 @@ struct test_spec { u16 current_step; u16 nb_sockets; bool fail; + enum test_mode mode; char name[MAX_TEST_NAME_SIZE]; };