From patchwork Sun Mar 6 23:43:07 2022
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 12770985
X-Patchwork-Delegate: bpf@iogearbox.net
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, Toke Høiland-Jørgensen, Jesper Dangaard Brouer, Lorenzo Bianconi, John Fastabend, Jakub Kicinski, Lorenz Bauer, netdev@vger.kernel.org
Subject: [PATCH bpf-next v1 1/5] bpf: Add ARG_SCALAR and ARG_CONSTANT
Date: Mon, 7 Mar 2022 05:13:07 +0530
Message-Id: <20220306234311.452206-2-memxor@gmail.com>
In-Reply-To: <20220306234311.452206-1-memxor@gmail.com>
References: <20220306234311.452206-1-memxor@gmail.com>

In the next patch, we will introduce a new helper 'bpf_packet_pointer' that takes an offset and a len and returns a packet pointer. There we want to statically enforce that offset is in the range [0, 0xffff], and that len is a constant value in the range [1, 0xffff]. This also helps us avoid a pointless runtime check. To make these checks possible, we need to ensure we only get a scalar type. Although a lot of other argument types accept scalars, their intent is different. Hence, add the general ARG_SCALAR and ARG_CONSTANT types, where the latter is additionally checked to be constant, on top of being a scalar.
Signed-off-by: Kumar Kartikeya Dwivedi
---
 include/linux/bpf.h   |  2 ++
 kernel/bpf/verifier.c | 13 +++++++++++++
 2 files changed, 15 insertions(+)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 88449fbbe063..7841d90b83df 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -391,6 +391,8 @@ enum bpf_arg_type {
 	ARG_PTR_TO_STACK,	/* pointer to stack */
 	ARG_PTR_TO_CONST_STR,	/* pointer to a null terminated read-only string */
 	ARG_PTR_TO_TIMER,	/* pointer to bpf_timer */
+	ARG_SCALAR,		/* a scalar with any value(s) */
+	ARG_CONSTANT,		/* a scalar with constant value */
 	__BPF_ARG_TYPE_MAX,
 
 	/* Extended arg_types. */
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index ec3a7b6c9515..0373d5bd240f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5163,6 +5163,12 @@ static bool arg_type_is_int_ptr(enum bpf_arg_type type)
 	       type == ARG_PTR_TO_LONG;
 }
 
+static bool arg_type_is_scalar(enum bpf_arg_type type)
+{
+	return type == ARG_SCALAR ||
+	       type == ARG_CONSTANT;
+}
+
 static int int_ptr_type_to_size(enum bpf_arg_type type)
 {
 	if (type == ARG_PTR_TO_INT)
@@ -5302,6 +5308,8 @@ static const struct bpf_reg_types *compatible_reg_types[__BPF_ARG_TYPE_MAX] = {
 	[ARG_PTR_TO_STACK]		= &stack_ptr_types,
 	[ARG_PTR_TO_CONST_STR]		= &const_str_ptr_types,
 	[ARG_PTR_TO_TIMER]		= &timer_types,
+	[ARG_SCALAR]			= &scalar_types,
+	[ARG_CONSTANT]			= &scalar_types,
 };
 
 static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
@@ -5635,6 +5643,11 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 			verbose(env, "string is not zero-terminated\n");
 			return -EINVAL;
 		}
+	} else if (arg_type_is_scalar(arg_type)) {
+		if (arg_type == ARG_CONSTANT && !tnum_is_const(reg->var_off)) {
+			verbose(env, "R%d is not a known constant\n", regno);
+			return -EACCES;
+		}
 	}
 
 	return err;

From patchwork Sun Mar 6 23:43:08 2022
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 12770986
X-Patchwork-Delegate: bpf@iogearbox.net
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, Toke Høiland-Jørgensen, Jesper Dangaard Brouer, Lorenzo Bianconi, John Fastabend, Jakub Kicinski, Lorenz Bauer, netdev@vger.kernel.org
Subject: [PATCH bpf-next v1 2/5] bpf: Introduce pkt_uid concept for PTR_TO_PACKET
Date: Mon, 7 Mar 2022 05:13:08 +0530
Message-Id: <20220306234311.452206-3-memxor@gmail.com>
In-Reply-To: <20220306234311.452206-1-memxor@gmail.com>
References: <20220306234311.452206-1-memxor@gmail.com>

Add a new member, pkt_uid, to the PTR_TO_PACKET specific register state. It is used to classify packet pointers into different sets, with the invariant that packet pointers not belonging to the same set, i.e. not sharing the same pkt_uid, are not allowed to be compared with each other. During range propagation in __find_good_pkt_pointers, we now take care to skip packet pointers with a different pkt_uid.

A distinct pkt_uid can be set on the packet pointer returned by the 'bpf_packet_pointer' helper introduced in the next patch, which encodes its range from the len parameter it receives. Generating a distinct pkt_uid means this pointer cannot be compared with other packet pointers, so its range cannot be manipulated.

Note that for helpers which change the underlying packet data, we make no distinction based on pkt_uid in clear_all_pkt_pointers, since even though the pkt_uid differs, all of these pointers still point into the ctx.

regsafe is updated to match non-zero pkt_uid values through the idmap, to ensure it rejects packet pointers with distinct pkt_uid. We also replace the memset of reg->raw with a plain reg->range = 0 assignment; it is worth elaborating on why that replacement is correct.
In commit 0962590e5533 ("bpf: fix partial copy of map_ptr when dst is scalar"), the copy was changed to go through 'raw' so that all possible members of the type-specific register state are copied, since at that point the type of the register is not known. But inside the reg_is_pkt_pointer block, there is no need to memset the whole 'raw' struct, because 'raw' now also holds the pkt_uid member, which we want to preserve when copying from one register to another for pkt pointers. A test for this case has been included to prevent regressions.

Signed-off-by: Kumar Kartikeya Dwivedi
---
 include/linux/bpf_verifier.h |  9 ++++++-
 kernel/bpf/verifier.c        | 47 ++++++++++++++++++++++++++++--------
 2 files changed, 45 insertions(+), 11 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index c1fc4af47f69..0379f953cf22 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -50,7 +50,14 @@ struct bpf_reg_state {
 	s32 off;
 	union {
 		/* valid when type == PTR_TO_PACKET */
-		int range;
+		struct {
+			int range;
+			/* This is used to tag some PTR_TO_PACKET so that they
+			 * cannot be compared with existing PTR_TO_PACKET
+			 * having a different pkt_uid.
+			 */
+			u32 pkt_uid;
+		};
 
 		/* valid when type == CONST_PTR_TO_MAP | PTR_TO_MAP_VALUE |
 		 *   PTR_TO_MAP_VALUE_OR_NULL
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 0373d5bd240f..88ac2c833bed 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -712,8 +712,14 @@ static void print_verifier_state(struct bpf_verifier_env *env,
 			verbose_a("ref_obj_id=%d", reg->ref_obj_id);
 		if (t != SCALAR_VALUE)
 			verbose_a("off=%d", reg->off);
-		if (type_is_pkt_pointer(t))
+		if (type_is_pkt_pointer(t)) {
 			verbose_a("r=%d", reg->range);
+			/* pkt_uid is only set for PTR_TO_PACKET, so the
+			 * type_is_pkt_pointer check is enough.
+			 */
+			if (reg->pkt_uid)
+				verbose_a("pkt_uid=%d", reg->pkt_uid);
+		}
 		else if (base_type(t) == CONST_PTR_TO_MAP ||
 			 base_type(t) == PTR_TO_MAP_KEY ||
 			 base_type(t) == PTR_TO_MAP_VALUE)
@@ -7604,7 +7610,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 		if (reg_is_pkt_pointer(ptr_reg)) {
 			dst_reg->id = ++env->id_gen;
 			/* something was added to pkt_ptr, set range to zero */
-			memset(&dst_reg->raw, 0, sizeof(dst_reg->raw));
+			dst_reg->range = 0;
 		}
 		break;
 	case BPF_SUB:
@@ -7664,7 +7670,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 			dst_reg->id = ++env->id_gen;
 			/* something was added to pkt_ptr, set range to zero */
 			if (smin_val < 0)
-				memset(&dst_reg->raw, 0, sizeof(dst_reg->raw));
+				dst_reg->range = 0;
 		}
 		break;
 	case BPF_AND:
@@ -8701,7 +8707,8 @@ static void __find_good_pkt_pointers(struct bpf_func_state *state,
 
 	for (i = 0; i < MAX_BPF_REG; i++) {
 		reg = &state->regs[i];
-		if (reg->type == type && reg->id == dst_reg->id)
+		if (reg->type == type && reg->id == dst_reg->id &&
+		    reg->pkt_uid == dst_reg->pkt_uid)
 			/* keep the maximum range already checked */
 			reg->range = max(reg->range, new_range);
 	}
@@ -8709,7 +8716,8 @@ static void __find_good_pkt_pointers(struct bpf_func_state *state,
 	bpf_for_each_spilled_reg(i, state, reg) {
 		if (!reg)
 			continue;
-		if (reg->type == type && reg->id == dst_reg->id)
+		if (reg->type == type && reg->id == dst_reg->id &&
+		    reg->pkt_uid == dst_reg->pkt_uid)
 			reg->range = max(reg->range, new_range);
 	}
 }
@@ -9330,6 +9338,14 @@ static void mark_ptr_or_null_regs(struct bpf_verifier_state *vstate, u32 regno,
 		__mark_ptr_or_null_regs(vstate->frame[i], id, is_null);
 }
 
+static bool is_bad_pkt_comparison(const struct bpf_reg_state *dst_reg,
+				  const struct bpf_reg_state *src_reg)
+{
+	if (!reg_is_pkt_pointer_any(dst_reg) || !reg_is_pkt_pointer_any(src_reg))
+		return false;
+	return dst_reg->pkt_uid != src_reg->pkt_uid;
+}
+
 static bool try_match_pkt_pointers(const struct bpf_insn *insn,
 				   struct bpf_reg_state *dst_reg,
 				   struct bpf_reg_state *src_reg,
@@ -9343,6 +9359,9 @@ static bool try_match_pkt_pointers(const struct bpf_insn *insn,
 	if (BPF_CLASS(insn->code) == BPF_JMP32)
 		return false;
 
+	if (is_bad_pkt_comparison(dst_reg, src_reg))
+		return false;
+
 	switch (BPF_OP(insn->code)) {
 	case BPF_JGT:
 		if ((dst_reg->type == PTR_TO_PACKET &&
@@ -9640,11 +9659,17 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
 		mark_ptr_or_null_regs(other_branch, insn->dst_reg,
 				      opcode == BPF_JEQ);
 	} else if (!try_match_pkt_pointers(insn, dst_reg, &regs[insn->src_reg],
-					   this_branch, other_branch) &&
-		   is_pointer_value(env, insn->dst_reg)) {
-		verbose(env, "R%d pointer comparison prohibited\n",
-			insn->dst_reg);
-		return -EACCES;
+					   this_branch, other_branch)) {
+		if (is_pointer_value(env, insn->dst_reg)) {
+			verbose(env, "R%d pointer comparison prohibited\n",
+				insn->dst_reg);
+			return -EACCES;
+		}
+		if (is_bad_pkt_comparison(dst_reg, &regs[insn->src_reg])) {
+			verbose(env, "R%d, R%d pkt pointer comparison prohibited\n",
+				insn->dst_reg, insn->src_reg);
+			return -EACCES;
+		}
 	}
 	if (env->log.level & BPF_LOG_LEVEL)
 		print_insn_state(env, this_branch->frame[this_branch->curframe]);
@@ -10891,6 +10916,8 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
 		/* id relations must be preserved */
 		if (rold->id && !check_ids(rold->id, rcur->id, idmap))
 			return false;
+		if (rold->pkt_uid && !check_ids(rold->pkt_uid, rcur->pkt_uid, idmap))
+			return false;
 		/* new val must satisfy old val knowledge */
 		return range_within(rold, rcur) &&
 		       tnum_in(rold->var_off, rcur->var_off);

From patchwork Sun Mar 6 23:43:09 2022
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 12770987
X-Patchwork-Delegate: bpf@iogearbox.net
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, Toke Høiland-Jørgensen, Jesper Dangaard Brouer, Lorenzo Bianconi, John Fastabend, Jakub Kicinski, Lorenz Bauer, netdev@vger.kernel.org
Subject: [PATCH bpf-next v1 3/5] bpf: Introduce bpf_packet_pointer helper to do DPA
Date: Mon, 7 Mar 2022 05:13:09 +0530
Message-Id: <20220306234311.452206-4-memxor@gmail.com>
In-Reply-To: <20220306234311.452206-1-memxor@gmail.com>
References: <20220306234311.452206-1-memxor@gmail.com>

Introduce a new helper, 'bpf_packet_pointer', that returns a packet pointer to a linear area in a possibly multi-buffer XDP buff. Earlier, the user had to rely on bpf_xdp_load_bytes and bpf_xdp_store_bytes to read from and write to a multi-buffer XDP buff, but this incurred a memcpy even in the ideal case, where a linear area is available in the initial frame or one of the frags. Instead, we can expose the bpf_packet_pointer function and return a packet pointer with a fixed range, so that the user can do direct packet access in the contiguous region. The name bpf_packet_pointer is chosen so that this helper can also be implemented for TC programs in the future, using skb as ctx.

The helper either returns a pointer to the linear contiguous area, or NULL if it fails to find one. In that case, the user can fall back to the existing helpers to do the access across frame or frag boundaries. The case of offset + len > xdp_get_buff_len is still rejected, but since the user can already check for it beforehand, the error code is dropped and NULL is returned instead.

In this commit, we use the support for ARG_SCALAR, ARG_CONSTANT, and pkt_uid for PTR_TO_PACKET. First, it is enforced that offset is in the range [0, 0xffff], and that len is a constant with a value in the range [1, 0xffff]. Then, we introduce a ret_pkt_len member in bpf_call_arg_meta to remember the length to set on the returned packet pointer.
A fresh ID is assigned to pkt_uid on each call, so that comparisons of these PTR_TO_PACKET is rejected with existing packet pointers obtained from ctx or other calls to bpf_packet_pointer, to prevent range manipulation. The existing bpf_xdp_load_bytes/bpf_xdp_store_bytes now do a call to bpf_xdp_copy_buf directly. The intended usage is that user first calls bpf_packet_pointer, and on receiving NULL from the call, invokes these 'slow path' helpers that handle the access across head/frag boundary. Note that the reason we choose PTR_TO_PACKET as the return value, and not PTR_TO_MEM with a fixed mem_size, is because these pointers need to be invalided (by clear_all_pkt_pointers) when a helper that changes packet is invoked. Instead of special casing PTR_TO_MEM for that purpose, it is better to adjust PTR_TO_PACKET to work for this mode with minimal additions on the verifier side (from previous commit). Also, the verifier errors related to bad access mention pkt pointer and not pointer to memory, which is more meaningful to the BPF programmer. Signed-off-by: Kumar Kartikeya Dwivedi --- include/linux/bpf.h | 2 ++ include/uapi/linux/bpf.h | 12 +++++++++ kernel/bpf/verifier.c | 37 ++++++++++++++++++++++++++ net/core/filter.c | 48 +++++++++++++++++----------------- tools/include/uapi/linux/bpf.h | 12 +++++++++ 5 files changed, 87 insertions(+), 24 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 7841d90b83df..981e87c64e47 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -421,6 +421,7 @@ enum bpf_return_type { RET_PTR_TO_ALLOC_MEM, /* returns a pointer to dynamically allocated memory */ RET_PTR_TO_MEM_OR_BTF_ID, /* returns a pointer to a valid memory or a btf_id */ RET_PTR_TO_BTF_ID, /* returns a pointer to a btf_id */ + RET_PTR_TO_PACKET, /* returns a pointer to a packet */ __BPF_RET_TYPE_MAX, /* Extended ret_types. 
*/ @@ -430,6 +431,7 @@ enum bpf_return_type { RET_PTR_TO_SOCK_COMMON_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_SOCK_COMMON, RET_PTR_TO_ALLOC_MEM_OR_NULL = PTR_MAYBE_NULL | MEM_ALLOC | RET_PTR_TO_ALLOC_MEM, RET_PTR_TO_BTF_ID_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_BTF_ID, + RET_PTR_TO_PACKET_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_PACKET, /* This must be the last entry. Its purpose is to ensure the enum is * wide enough to hold the higher bits reserved for bpf_type_flag. diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 4eebea830613..3736cfbb325e 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -5117,6 +5117,17 @@ union bpf_attr { * 0 on success. * **-EINVAL** for invalid input * **-EOPNOTSUPP** for unsupported delivery_time_type and protocol + * + * void *bpf_packet_pointer(void *ctx, u32 offset, u32 len) + * Description + * Return a pointer to linear area in packet at *offset* of length + * *len*. The returned packet pointer cannot be compared to any + * other packet pointers. + * + * This helper is only available to XDP programs. + * Return + * Pointer to packet on success that can be accessed for *len* + * bytes, or NULL when it fails. 
*/ #define __BPF_FUNC_MAPPER(FN) \ FN(unspec), \ @@ -5312,6 +5323,7 @@ union bpf_attr { FN(xdp_store_bytes), \ FN(copy_from_user_task), \ FN(skb_set_delivery_time), \ + FN(packet_pointer), \ /* */ /* integer value in 'imm' field of BPF_CALL instruction selects which helper diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 88ac2c833bed..e6e494e07f4c 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -257,6 +257,7 @@ struct bpf_call_arg_meta { struct btf *ret_btf; u32 ret_btf_id; u32 subprogno; + int ret_pkt_len; }; struct btf *btf_vmlinux; @@ -5654,6 +5655,32 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg, verbose(env, "R%d is not a known constant\n", regno); return -EACCES; } + + if (meta->func_id == BPF_FUNC_packet_pointer) { + struct tnum range; + + switch (arg + 1) { + case 2: + /* arg2 = offset, enforce that the range is [0, 0xffff] */ + range = tnum_range(0, 0xffff); + if (!tnum_in(range, reg->var_off)) { + verbose(env, "R%d must be in range [0, 0xffff]\n", regno); + return -EINVAL; + } + break; + case 3: + /* arg3 = len, already checked to be constant */ + if (!reg->var_off.value || reg->var_off.value > 0xffff) { + verbose(env, "R%d must be in range [1, 0xffff]\n", regno); + return -EINVAL; + } + meta->ret_pkt_len = reg->var_off.value; + break; + default: + verbose(env, "verifier internal error: bpf_xdp_pointer unknown arg\n"); + return -EFAULT; + } + } } return err; @@ -6873,6 +6900,16 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn */ regs[BPF_REG_0].btf = btf_vmlinux; regs[BPF_REG_0].btf_id = ret_btf_id; + } else if (base_type(ret_type) == RET_PTR_TO_PACKET) { + mark_reg_known_zero(env, regs, BPF_REG_0); + regs[BPF_REG_0].type = PTR_TO_PACKET | ret_flag; + regs[BPF_REG_0].pkt_uid = ++env->id_gen; + if (!meta.ret_pkt_len) { + verbose(env, "verifier internal error: ret_pkt_len unset\n"); + return -EFAULT; + } + /* Already checked to be in range [1, 0xffff] */ + 
regs[BPF_REG_0].range = meta.ret_pkt_len; } else { verbose(env, "unknown return type %u of func %s#%d\n", base_type(ret_type), func_id_name(func_id), func_id); diff --git a/net/core/filter.c b/net/core/filter.c index 88767f7da150..4fc19b9e64c7 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -3889,18 +3889,15 @@ static void bpf_xdp_copy_buf(struct xdp_buff *xdp, unsigned long off, } } -static void *bpf_xdp_pointer(struct xdp_buff *xdp, u32 offset, u32 len) +BPF_CALL_3(bpf_xdp_pointer, struct xdp_buff *, xdp, u32, offset, u32, len) { struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp); u32 size = xdp->data_end - xdp->data; void *addr = xdp->data; int i; - if (unlikely(offset > 0xffff || len > 0xffff)) - return ERR_PTR(-EFAULT); - if (offset + len > xdp_get_buff_len(xdp)) - return ERR_PTR(-EINVAL); + return (unsigned long)NULL; if (offset < size) /* linear area */ goto out; @@ -3917,23 +3914,28 @@ static void *bpf_xdp_pointer(struct xdp_buff *xdp, u32 offset, u32 len) offset -= frag_size; } out: - return offset + len < size ? addr + offset : NULL; + return offset + len < size ? 
(unsigned long)addr + offset : (unsigned long)NULL; } +static const struct bpf_func_proto bpf_xdp_pointer_proto = { + .func = bpf_xdp_pointer, + .gpl_only = false, + .ret_type = RET_PTR_TO_PACKET_OR_NULL, + .arg1_type = ARG_PTR_TO_CTX, + .arg2_type = ARG_SCALAR, + .arg3_type = ARG_CONSTANT, +}; + BPF_CALL_4(bpf_xdp_load_bytes, struct xdp_buff *, xdp, u32, offset, void *, buf, u32, len) { - void *ptr; - - ptr = bpf_xdp_pointer(xdp, offset, len); - if (IS_ERR(ptr)) - return PTR_ERR(ptr); + if (unlikely(offset > 0xffff || len > 0xffff)) + return -EFAULT; - if (!ptr) - bpf_xdp_copy_buf(xdp, offset, buf, len, false); - else - memcpy(buf, ptr, len); + if (offset + len > xdp_get_buff_len(xdp)) + return -EINVAL; + bpf_xdp_copy_buf(xdp, offset, buf, len, false); return 0; } @@ -3950,17 +3952,13 @@ static const struct bpf_func_proto bpf_xdp_load_bytes_proto = { BPF_CALL_4(bpf_xdp_store_bytes, struct xdp_buff *, xdp, u32, offset, void *, buf, u32, len) { - void *ptr; - - ptr = bpf_xdp_pointer(xdp, offset, len); - if (IS_ERR(ptr)) - return PTR_ERR(ptr); + if (unlikely(offset > 0xffff || len > 0xffff)) + return -EFAULT; - if (!ptr) - bpf_xdp_copy_buf(xdp, offset, buf, len, true); - else - memcpy(ptr, buf, len); + if (offset + len > xdp_get_buff_len(xdp)) + return -EINVAL; + bpf_xdp_copy_buf(xdp, offset, buf, len, true); return 0; } @@ -7820,6 +7818,8 @@ xdp_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) return &bpf_xdp_load_bytes_proto; case BPF_FUNC_xdp_store_bytes: return &bpf_xdp_store_bytes_proto; + case BPF_FUNC_packet_pointer: + return &bpf_xdp_pointer_proto; case BPF_FUNC_fib_lookup: return &bpf_xdp_fib_lookup_proto; case BPF_FUNC_check_mtu: diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 4eebea830613..3736cfbb325e 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -5117,6 +5117,17 @@ union bpf_attr { * 0 on success. 
* **-EINVAL** for invalid input * **-EOPNOTSUPP** for unsupported delivery_time_type and protocol + * + * void *bpf_packet_pointer(void *ctx, u32 offset, u32 len) + * Description + * Return a pointer to the linear area in the packet at *offset* of length + * *len*. The returned packet pointer cannot be compared to any + * other packet pointer. + * + * This helper is only available to XDP programs. + * Return + * Pointer to the packet on success that can be accessed for *len* + * bytes, or **NULL** on failure. */ #define __BPF_FUNC_MAPPER(FN) \ FN(unspec), \ @@ -5312,6 +5323,7 @@ union bpf_attr { FN(xdp_store_bytes), \ FN(copy_from_user_task), \ FN(skb_set_delivery_time), \ + FN(packet_pointer), \ /* */ /* integer value in 'imm' field of BPF_CALL instruction selects which helper

From patchwork Sun Mar 6 23:43:10 2022
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, Toke Høiland-Jørgensen, Jesper Dangaard Brouer, Lorenzo Bianconi, John Fastabend, Jakub Kicinski, Lorenz Bauer, netdev@vger.kernel.org
Subject: [PATCH bpf-next v1 4/5] selftests/bpf: Add verifier tests for pkt pointer with pkt_uid
Date: Mon, 7 Mar 2022 05:13:10 +0530
Message-Id: <20220306234311.452206-5-memxor@gmail.com>
In-Reply-To: <20220306234311.452206-1-memxor@gmail.com>
References: <20220306234311.452206-1-memxor@gmail.com>

Use bpf_packet_pointer to obtain such pkt pointers, and verify various behaviors: find_good_pkt_pointers skipping pkt pointers with unequal pkt_uid, ensuring that the offset and len passed to bpf_packet_pointer stay within the limits imposed statically by the verifier, rejecting comparison of a pkt pointer carrying a pkt_uid against one with an unequal pkt_uid, and ensuring that clear_all_pkt_pointers doesn't skip
pkt_uid pkts. Signed-off-by: Kumar Kartikeya Dwivedi --- tools/testing/selftests/bpf/verifier/xdp.c | 146 +++++++++++++++++++++ 1 file changed, 146 insertions(+) diff --git a/tools/testing/selftests/bpf/verifier/xdp.c b/tools/testing/selftests/bpf/verifier/xdp.c index 5ac390508139..580b294cde11 100644 --- a/tools/testing/selftests/bpf/verifier/xdp.c +++ b/tools/testing/selftests/bpf/verifier/xdp.c @@ -12,3 +12,149 @@ .prog_type = BPF_PROG_TYPE_XDP, .retval = 1, }, +{ + "XDP bpf_packet_pointer offset cannot be > 0xffff", + .insns = { + BPF_MOV64_IMM(BPF_REG_2, 0x10000), + BPF_MOV64_IMM(BPF_REG_3, 42), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_packet_pointer), + BPF_MOV64_IMM(BPF_REG_0, 1), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_XDP, + .result_unpriv = REJECT, + .result = REJECT, + .errstr = "R2 must be in range [0, 0xffff]", +}, +{ + "XDP bpf_packet_pointer len must be constant", + .insns = { + BPF_MOV64_IMM(BPF_REG_0, 1), + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, ingress_ifindex)), + BPF_JMP32_IMM(BPF_JSGE, BPF_REG_2, 0, 1), + BPF_EXIT_INSN(), + BPF_JMP32_IMM(BPF_JSLE, BPF_REG_2, 0xffff, 1), + BPF_EXIT_INSN(), + BPF_MOV64_REG(BPF_REG_3, BPF_REG_2), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_packet_pointer), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_XDP, + .result_unpriv = REJECT, + .result = REJECT, + .errstr = "R3 is not a known constant", +}, +{ + "XDP bpf_packet_pointer len cannot be 0", + .insns = { + BPF_MOV64_IMM(BPF_REG_0, 1), + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, ingress_ifindex)), + BPF_JMP32_IMM(BPF_JSGE, BPF_REG_2, 0, 1), + BPF_EXIT_INSN(), + BPF_JMP32_IMM(BPF_JSLE, BPF_REG_2, 0xffff, 1), + BPF_EXIT_INSN(), + BPF_MOV64_IMM(BPF_REG_3, 0), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_packet_pointer), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_XDP, + .result_unpriv = REJECT, + .result = REJECT, + .errstr = "R3 must be in range [1, 0xffff]", 
+}, +{ + "XDP bpf_packet_pointer R0 cannot be compared with xdp_md pkt ptr", + .insns = { + BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), + BPF_MOV64_IMM(BPF_REG_2, 0), + BPF_MOV64_IMM(BPF_REG_3, 42), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_packet_pointer), + BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), + BPF_EXIT_INSN(), + BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_6, offsetof(struct xdp_md, data_end)), + BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 16), + BPF_JMP_REG(BPF_JGE, BPF_REG_0, BPF_REG_1, 1), + BPF_EXIT_INSN(), + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0), + BPF_MOV64_IMM(BPF_REG_0, 1), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_XDP, + .result_unpriv = REJECT, + .result = REJECT, + .errstr = "R0, R1 pkt pointer comparison prohibited", +}, +{ + "XDP bpf_packet_pointer R0 range propagation skips unequal pkt_uid", + .insns = { + BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), + BPF_MOV64_IMM(BPF_REG_2, 0), + BPF_MOV64_IMM(BPF_REG_3, 1), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_packet_pointer), + BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), + BPF_EXIT_INSN(), + BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_6, offsetof(struct xdp_md, data)), + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_6, offsetof(struct xdp_md, data)), + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_6, offsetof(struct xdp_md, data)), + BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_6, offsetof(struct xdp_md, data_end)), + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 16), + BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_4, 1), + BPF_EXIT_INSN(), + BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, -16), + BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_2, 4), + BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_3, 8), + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0), + BPF_MOV64_IMM(BPF_REG_0, 1), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_XDP, + .result_unpriv = REJECT, + .result = REJECT, + .errstr = "invalid access to packet, off=0 size=8, R0(id=0,off=0,r=1)", +}, +{ + "XDP clear_all_pkt_pointers doesn't skip pkt_uid != 0", + .insns = { + BPF_MOV64_REG(BPF_REG_6, 
BPF_REG_1), + BPF_MOV64_IMM(BPF_REG_2, 0), + BPF_MOV64_IMM(BPF_REG_3, 16), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_packet_pointer), + BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), + BPF_EXIT_INSN(), + BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), + BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), + BPF_MOV64_IMM(BPF_REG_2, 1), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_xdp_adjust_tail), + BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_7, 0), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_XDP, + .result_unpriv = REJECT, + .result = REJECT, + .errstr = "R7 invalid mem access 'scalar'", +}, +{ + "XDP pkt_uid preserved when resetting range on rX += var", + .insns = { + BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), + BPF_MOV64_IMM(BPF_REG_2, 0), + BPF_MOV64_IMM(BPF_REG_3, 16), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_packet_pointer), + BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), + BPF_EXIT_INSN(), + BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_6, offsetof(struct xdp_md, ingress_ifindex)), + BPF_JMP32_IMM(BPF_JGE, BPF_REG_1, 0, 1), + BPF_EXIT_INSN(), + BPF_JMP32_IMM(BPF_JLE, BPF_REG_1, 4, 1), + BPF_EXIT_INSN(), + BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), + BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, offsetof(struct xdp_md, data_end)), + BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_0, 1), + BPF_EXIT_INSN(), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_XDP, + .result_unpriv = REJECT, + .result = REJECT, + .errstr = "R1, R0 pkt pointer comparison prohibited", +},

From patchwork Sun Mar 6 23:43:11 2022
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, Toke Høiland-Jørgensen, Jesper Dangaard Brouer, Lorenzo Bianconi, John Fastabend, Jakub Kicinski, Lorenz Bauer, netdev@vger.kernel.org
Subject: [PATCH bpf-next v1 5/5] selftests/bpf: Update xdp_adjust_frags to use bpf_packet_pointer
Date: Mon, 7 Mar 2022 05:13:11 +0530
Message-Id: <20220306234311.452206-6-memxor@gmail.com>
In-Reply-To: <20220306234311.452206-1-memxor@gmail.com>
References: <20220306234311.452206-1-memxor@gmail.com>
Test that in the case of a linear region we are always able to do DPA (direct packet access) without any errors. Note how offset is clamped to the range [0, 0xffff] and len is a constant. Ensure that the helper vs DPA choice is detected and tested. Add a force_helper mode that forces use of bpf_xdp_load_bytes and bpf_xdp_store_bytes instead of bpf_packet_pointer, even for contiguous regions, to make sure that case keeps working. Also take this opportunity to convert the test to use a BPF skeleton.

Signed-off-by: Kumar Kartikeya Dwivedi --- .../bpf/prog_tests/xdp_adjust_frags.c | 46 +++++++++++++------ .../bpf/progs/test_xdp_update_frags.c | 46 +++++++++++++------ 2 files changed, 65 insertions(+), 27 deletions(-) diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_adjust_frags.c b/tools/testing/selftests/bpf/prog_tests/xdp_adjust_frags.c index 2f033da4cd45..cfb50a575b11 100644 --- a/tools/testing/selftests/bpf/prog_tests/xdp_adjust_frags.c +++ b/tools/testing/selftests/bpf/prog_tests/xdp_adjust_frags.c @@ -2,26 +2,24 @@ #include #include -static void test_xdp_update_frags(void) +#include "test_xdp_update_frags.skel.h" + +static void test_xdp_update_frags(bool force_helper) { - const char *file = "./test_xdp_update_frags.o"; int err, prog_fd, max_skb_frags, buf_size, num; - struct bpf_program *prog; - struct bpf_object *obj; + LIBBPF_OPTS(bpf_test_run_opts, topts); + struct test_xdp_update_frags *skel; __u32 *offset; __u8 *buf; FILE *f; - LIBBPF_OPTS(bpf_test_run_opts, topts); - obj = bpf_object__open(file); if (libbpf_get_error(obj)) + skel = test_xdp_update_frags__open_and_load(); + if (!ASSERT_OK_PTR(skel, "test_xdp_update_frags__open_and_load")) return; -
prog = bpf_object__next_program(obj, NULL); - if (bpf_object__load(obj)) - return; + skel->bss->force_helper = force_helper; - prog_fd = bpf_program__fd(prog); + prog_fd = bpf_program__fd(skel->progs.xdp_adjust_frags); buf = malloc(128); if (!ASSERT_OK_PTR(buf, "alloc buf 128b")) @@ -45,6 +43,13 @@ static void test_xdp_update_frags(void) ASSERT_EQ(topts.retval, XDP_PASS, "xdp_update_frag retval"); ASSERT_EQ(buf[16], 0xbb, "xdp_update_frag buf[16]"); ASSERT_EQ(buf[31], 0xbb, "xdp_update_frag buf[31]"); + if (force_helper) { + ASSERT_EQ(skel->bss->used_dpa, false, "did not use DPA"); + ASSERT_EQ(skel->bss->used_helper, true, "used helper"); + } else { + ASSERT_EQ(skel->bss->used_dpa, true, "used DPA"); + ASSERT_EQ(skel->bss->used_helper, false, "did not use helper"); + } free(buf); @@ -70,6 +75,13 @@ static void test_xdp_update_frags(void) ASSERT_EQ(topts.retval, XDP_PASS, "xdp_update_frag retval"); ASSERT_EQ(buf[5000], 0xbb, "xdp_update_frag buf[5000]"); ASSERT_EQ(buf[5015], 0xbb, "xdp_update_frag buf[5015]"); + if (force_helper) { + ASSERT_EQ(skel->bss->used_dpa, false, "did not use DPA"); + ASSERT_EQ(skel->bss->used_helper, true, "used helper"); + } else { + ASSERT_EQ(skel->bss->used_dpa, true, "used DPA"); + ASSERT_EQ(skel->bss->used_helper, false, "did not use helper"); + } memset(buf, 0, 9000); offset = (__u32 *)buf; @@ -84,6 +96,8 @@ static void test_xdp_update_frags(void) ASSERT_EQ(topts.retval, XDP_PASS, "xdp_update_frag retval"); ASSERT_EQ(buf[3510], 0xbb, "xdp_update_frag buf[3510]"); ASSERT_EQ(buf[3525], 0xbb, "xdp_update_frag buf[3525]"); + ASSERT_EQ(skel->bss->used_dpa, false, "did not use DPA"); + ASSERT_EQ(skel->bss->used_helper, true, "used helper"); memset(buf, 0, 9000); offset = (__u32 *)buf; @@ -98,6 +112,8 @@ static void test_xdp_update_frags(void) ASSERT_EQ(topts.retval, XDP_PASS, "xdp_update_frag retval"); ASSERT_EQ(buf[7606], 0xbb, "xdp_update_frag buf[7606]"); ASSERT_EQ(buf[7621], 0xbb, "xdp_update_frag buf[7621]"); + 
ASSERT_EQ(skel->bss->used_dpa, false, "did not use DPA"); + ASSERT_EQ(skel->bss->used_helper, true, "used helper"); free(buf); @@ -136,11 +152,13 @@ static void test_xdp_update_frags(void) "unsupported buf size, possible non-default /proc/sys/net/core/max_skb_flags?"); free(buf); out: - bpf_object__close(obj); + test_xdp_update_frags__destroy(skel); } void test_xdp_adjust_frags(void) { - if (test__start_subtest("xdp_adjust_frags")) - test_xdp_update_frags(); + if (test__start_subtest("xdp_adjust_frags-force-nodpa")) + test_xdp_update_frags(true); + if (test__start_subtest("xdp_adjust_frags-dpa+memcpy")) + test_xdp_update_frags(false); } diff --git a/tools/testing/selftests/bpf/progs/test_xdp_update_frags.c b/tools/testing/selftests/bpf/progs/test_xdp_update_frags.c index 2a3496d8e327..1ad5c45e06e0 100644 --- a/tools/testing/selftests/bpf/progs/test_xdp_update_frags.c +++ b/tools/testing/selftests/bpf/progs/test_xdp_update_frags.c @@ -4,37 +4,57 @@ * modify it under the terms of version 2 of the GNU General Public * License as published by the Free Software Foundation. 
*/ -#include -#include +#include #include int _version SEC("version") = 1; +bool force_helper; +bool used_dpa; +bool used_helper; + +#define XDP_LEN 16 + SEC("xdp.frags") int xdp_adjust_frags(struct xdp_md *xdp) { __u8 *data_end = (void *)(long)xdp->data_end; __u8 *data = (void *)(long)xdp->data; - __u8 val[16] = {}; + __u8 val[XDP_LEN] = {}; + __u8 *ptr = NULL; __u32 offset; int err; + used_dpa = false; + used_helper = false; + if (data + sizeof(__u32) > data_end) return XDP_DROP; offset = *(__u32 *)data; - err = bpf_xdp_load_bytes(xdp, offset, val, sizeof(val)); - if (err < 0) + offset &= 0xffff; + if (!force_helper) + ptr = bpf_packet_pointer(xdp, offset, XDP_LEN); + if (!ptr) { + used_helper = true; + err = bpf_xdp_load_bytes(xdp, offset, val, sizeof(val)); + if (err < 0) + return XDP_DROP; + ptr = val; + } else { + used_dpa = true; + } + + if (ptr[0] != 0xaa || ptr[15] != 0xaa) /* marker */ return XDP_DROP; - if (val[0] != 0xaa || val[15] != 0xaa) /* marker */ - return XDP_DROP; - - val[0] = 0xbb; /* update the marker */ - val[15] = 0xbb; - err = bpf_xdp_store_bytes(xdp, offset, val, sizeof(val)); - if (err < 0) - return XDP_DROP; + ptr[0] = 0xbb; /* update the marker */ + ptr[15] = 0xbb; + if (ptr == val) { + err = bpf_xdp_store_bytes(xdp, offset, val, sizeof(val)); + if (err < 0) + return XDP_DROP; + } return XDP_PASS; }
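The argument limits exercised by the verifier tests in patch 4/5 follow a simple pattern: offset must stay in [0, 0xffff], len must be a known constant in [1, 0xffff], and the requested range must fit inside the (possibly fragmented) buffer. A minimal plain-C sketch of that validation is below; `check_pkt_ptr_args` is a made-up name for illustration, not a kernel function, and the errno choices mirror the -EFAULT/-EINVAL split seen in the reworked bpf_xdp_load_bytes/bpf_xdp_store_bytes in this series.

```c
#include <errno.h>
#include <stdint.h>

/* Illustrative model of the (offset, len) checks performed for
 * bpf_packet_pointer-style access; not kernel code. */
static int check_pkt_ptr_args(uint32_t offset, uint32_t len, uint32_t buff_len)
{
	if (offset > 0xffff || len > 0xffff)
		return -EFAULT;	/* outside the statically checkable range */
	if (len == 0)
		return -EINVAL;	/* verifier demands len in [1, 0xffff] */
	if (offset + len > buff_len)
		return -EINVAL;	/* range escapes the buffer */
	return 0;
}
```

With these bounds enforced before the call (statically by the verifier, or by the clamping the caller does), the helper itself no longer needs ERR_PTR returns and can simply return NULL for any range it cannot serve contiguously.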
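For readers following the bpf_xdp_pointer rework: the helper hands back a pointer only when [offset, offset + len) lies entirely within one region (the linear area or a single frag) and NULL otherwise, which is why the selftest above falls back to bpf_xdp_load_bytes on NULL. A simplified userspace model of that region walk, with made-up names and a plain size array standing in for skb_shared_info frags (the exact boundary comparison in the kernel helper may differ), might look like this:

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for bpf_xdp_pointer's search: return the index of
 * the region that contains [offset, offset + len) contiguously (0 is the
 * linear area, i >= 1 is frag i - 1), or -1 when the range straddles
 * regions, i.e. the NULL case where callers must fall back to copying. */
static int find_contiguous_region(uint32_t linear, const uint32_t *frags,
				  size_t nr_frags, uint32_t offset, uint32_t len)
{
	size_t i;

	if (offset < linear)			/* linear area */
		return offset + len <= linear ? 0 : -1;
	offset -= linear;
	for (i = 0; i < nr_frags; i++) {	/* walk the frags */
		if (offset < frags[i])
			return offset + len <= frags[i] ? (int)(i + 1) : -1;
		offset -= frags[i];
	}
	return -1;				/* past the end of the buffer */
}
```

This also explains the selftest's expectations: an access that sits inside one frag uses DPA, while one that crosses the linear/frag boundary gets NULL and takes the helper path.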