From patchwork Thu Sep 1 11:48:08 2022
X-Patchwork-Id: 12962341
X-Patchwork-Delegate: bpf@iogearbox.net
From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org
Cc: netdev@vger.kernel.org, magnus.karlsson@intel.com, bjorn@kernel.org,
	john.fastabend@gmail.com, Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Subject: [PATCH v6 bpf-next 1/6] selftests: xsk: query for native XDP support
Date: Thu, 1 Sep 2022 13:48:08 +0200
Message-Id: <20220901114813.16275-2-maciej.fijalkowski@intel.com>
In-Reply-To: <20220901114813.16275-1-maciej.fijalkowski@intel.com>
References: <20220901114813.16275-1-maciej.fijalkowski@intel.com>
X-Mailing-List: netdev@vger.kernel.org

Currently, xdpxceiver assumes that the underlying device supports XDP in
native mode. That is fine for now, since the tests can only run on a veth
pair. A future commit is going to allow running the test suite against
physical devices, so let us query whether the device is capable of running
XDP programs in native mode. This way xdpxceiver will not try to run
TEST_MODE_DRV if the device being tested does not support it.
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
 tools/testing/selftests/bpf/xskxceiver.c | 39 ++++++++++++++++++++++--
 1 file changed, 37 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/xskxceiver.c b/tools/testing/selftests/bpf/xskxceiver.c
index 14b4737b223c..19f65bce7f65 100644
--- a/tools/testing/selftests/bpf/xskxceiver.c
+++ b/tools/testing/selftests/bpf/xskxceiver.c
@@ -99,6 +99,8 @@
 #include
 #include "xsk.h"
 #include "xskxceiver.h"
+#include
+#include
 #include "../kselftest.h"
 
 /* AF_XDP APIs were moved into libxdp and marked as deprecated in libbpf.
@@ -1712,10 +1714,40 @@ static void ifobject_delete(struct ifobject *ifobj)
 	free(ifobj);
 }
 
+static bool is_xdp_supported(struct ifobject *ifobject)
+{
+	int flags = XDP_FLAGS_DRV_MODE;
+
+	LIBBPF_OPTS(bpf_link_create_opts, opts, .flags = flags);
+	struct bpf_insn insns[2] = {
+		BPF_MOV64_IMM(BPF_REG_0, XDP_PASS),
+		BPF_EXIT_INSN()
+	};
+	int ifindex = if_nametoindex(ifobject->ifname);
+	int prog_fd, insn_cnt = ARRAY_SIZE(insns);
+	int err;
+
+	prog_fd = bpf_prog_load(BPF_PROG_TYPE_XDP, NULL, "GPL", insns, insn_cnt, NULL);
+	if (prog_fd < 0)
+		return false;
+
+	err = bpf_xdp_attach(ifindex, prog_fd, flags, NULL);
+	if (err) {
+		close(prog_fd);
+		return false;
+	}
+
+	bpf_xdp_detach(ifindex, flags, NULL);
+	close(prog_fd);
+
+	return true;
+}
+
 int main(int argc, char **argv)
 {
 	struct pkt_stream *pkt_stream_default;
 	struct ifobject *ifobj_tx, *ifobj_rx;
+	int modes = TEST_MODE_SKB + 1;
 	u32 i, j, failed_tests = 0;
 	struct test_spec test;
 
@@ -1743,15 +1775,18 @@ int main(int argc, char **argv)
 	init_iface(ifobj_rx, MAC2, MAC1, IP2, IP1, UDP_PORT2, UDP_PORT1,
 		   worker_testapp_validate_rx);
 
+	if (is_xdp_supported(ifobj_tx))
+		modes++;
+
 	test_spec_init(&test, ifobj_tx, ifobj_rx, 0);
 	pkt_stream_default = pkt_stream_generate(ifobj_tx->umem, DEFAULT_PKT_CNT, PKT_SIZE);
 	if (!pkt_stream_default)
 		exit_with_error(ENOMEM);
 	test.pkt_stream_default = pkt_stream_default;
 
-	ksft_set_plan(TEST_MODE_MAX * TEST_TYPE_MAX);
+	ksft_set_plan(modes * TEST_TYPE_MAX);
 
-	for (i = 0; i < TEST_MODE_MAX; i++)
+	for (i = 0; i < modes; i++)
 		for (j = 0; j < TEST_TYPE_MAX; j++) {
 			test_spec_init(&test, ifobj_tx, ifobj_rx, i);
 			run_pkt_test(&test, i, j);
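
For reference, the probe added above can also be exercised on its own, outside
the test suite. Below is a minimal standalone sketch (not part of the patch):
it builds the same two-instruction "return XDP_PASS" program, but with raw
struct bpf_insn initializers instead of the kernel-internal BPF_MOV64_IMM() /
BPF_EXIT_INSN() macros, and it assumes libbpf >= 0.8 (bpf_prog_load(),
bpf_xdp_attach()) plus CAP_NET_ADMIN/CAP_BPF. The probe_native_xdp() helper
name and the command-line wrapper are made up for this illustration.

/*
 * Standalone sketch of the native-XDP probe; illustration only.
 * Build (assumption): gcc probe_native_xdp.c -o probe_native_xdp -lbpf
 */
#include <stdio.h>
#include <stdbool.h>
#include <unistd.h>
#include <net/if.h>		/* if_nametoindex() */
#include <linux/if_link.h>	/* XDP_FLAGS_DRV_MODE */
#include <linux/bpf.h>		/* struct bpf_insn, XDP_PASS, BPF_* opcode bits */
#include <bpf/bpf.h>		/* bpf_prog_load() */
#include <bpf/libbpf.h>		/* bpf_xdp_attach(), bpf_xdp_detach() */

/* Hypothetical helper; mirrors is_xdp_supported() from the patch. */
static bool probe_native_xdp(const char *ifname)
{
	/* r0 = XDP_PASS; exit; -- spelled out as raw instructions */
	struct bpf_insn insns[] = {
		{ .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = BPF_REG_0, .imm = XDP_PASS },
		{ .code = BPF_JMP | BPF_EXIT },
	};
	int ifindex = if_nametoindex(ifname);
	int prog_fd, err;
	bool ret = false;

	if (!ifindex)
		return false;

	prog_fd = bpf_prog_load(BPF_PROG_TYPE_XDP, NULL, "GPL", insns,
				sizeof(insns) / sizeof(insns[0]), NULL);
	if (prog_fd < 0)
		return false;

	/* If the driver accepts the program in DRV mode, native XDP works. */
	err = bpf_xdp_attach(ifindex, prog_fd, XDP_FLAGS_DRV_MODE, NULL);
	if (!err) {
		bpf_xdp_detach(ifindex, XDP_FLAGS_DRV_MODE, NULL);
		ret = true;
	}

	close(prog_fd);
	return ret;
}

int main(int argc, char **argv)
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <ifname>\n", argv[0]);
		return 1;
	}
	printf("%s: native XDP %ssupported\n", argv[1],
	       probe_native_xdp(argv[1]) ? "" : "not ");
	return 0;
}

On a veth pair the XDP_FLAGS_DRV_MODE attach should succeed, so both SKB and
DRV test modes get planned; on a device whose driver lacks native XDP support
the attach fails and, as in the patch, only the SKB-mode tests are run.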