From patchwork Mon Nov 14 19:15:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13042715 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 959D4C43219 for ; Mon, 14 Nov 2022 19:15:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237044AbiKNTP5 (ORCPT ); Mon, 14 Nov 2022 14:15:57 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55680 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237031AbiKNTPz (ORCPT ); Mon, 14 Nov 2022 14:15:55 -0500 Received: from mail-pf1-x443.google.com (mail-pf1-x443.google.com [IPv6:2607:f8b0:4864:20::443]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D706B264B2 for ; Mon, 14 Nov 2022 11:15:54 -0800 (PST) Received: by mail-pf1-x443.google.com with SMTP id 130so11920136pfu.8 for ; Mon, 14 Nov 2022 11:15:54 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=bffBMRisH/b+vAP1Qh1wMiOBQlQqteRIJ6zyoQtDJas=; b=BIcghtqQ6Bqqzellu33lXldArNSxOrOeaTJE2+D4DyLvhq/f6s8twFmScsgkEw/5c+ y6uwOlaj99lCoO327tx9k1ye08rp3MNduEBRbFGP52aGFpCZOXwLQN4NXsrNq31Gne2d /qmxv2nP6JPuJ6cg8S0Vjg+DqwZeMXAYOj9xpb8+q6zqqJDKwxlaP4hz8vHNsvM+AU3T pnaeYR8WI4mJ6a07SeT8a9+v9fcBCP+6VmqgeOPMjhwYRtYxVhdDRNONxuvlLvgsIljS TK00UwqOshCHpOTb4wu1aWo/2wflXrKJ5U0eFn6Zq6O5ABylMdgxSL4ruSbIk9/ARzpB OqZw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=bffBMRisH/b+vAP1Qh1wMiOBQlQqteRIJ6zyoQtDJas=; b=X4DrE8Os8JjjIrtp4OHzAMDdQJFDpQADwOw8RYvnS7qNGs9rwuL0i/1hlxOWXv9U+4 E+goTDr/g79DEjHUE+G7ZBXFA5geWxoY6nkJtueIVumfxZn7cQ4QEV9b45kJFwXC8o12 xZr5IoGvCiTxMgU0srSpmBMxk502JgSxyqFfPqU6mFL8tYyCu9rUGsv4woZiSQeCKd5S 46q70JwyjeWi1XHT/iOJyVBus+yFJJ198TmZpVG6EZKpQB7MxaUx/RHn9A5j7q2Wl/5T aMfIEAkgko6crVCPCZMC/JdwFd/WOoPtVpwExCyTItEc2gLsUIDkD1+NNr5HLrtqQCwl majg== X-Gm-Message-State: ANoB5pmTMJ1SgJBhvosjK5/BBSBb4+Q98YeYLTusr3tV9wURwP3Rfi3K 0zT7Ex/eHubtIXRR9pKdc+oTsOP2SAV3PA== X-Google-Smtp-Source: AA0mqf6jc/MT6NHikxIrtCqMxh4l7zxTf2Kh6y97n34VzDE8GZ4hDSGjHXu4WgrqZ8WV3dQYZTmhbw== X-Received: by 2002:aa7:9493:0:b0:56b:9ae8:ca05 with SMTP id z19-20020aa79493000000b0056b9ae8ca05mr15019543pfk.59.1668453354087; Mon, 14 Nov 2022 11:15:54 -0800 (PST) Received: from localhost ([59.152.80.69]) by smtp.gmail.com with ESMTPSA id e22-20020a630f16000000b004769f0fd385sm1118541pgl.52.2022.11.14.11.15.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Nov 2022 11:15:53 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Dave Marchevsky Subject: [PATCH bpf-next v7 01/26] bpf: Remove local kptr references in documentation Date: Tue, 15 Nov 2022 00:45:22 +0530 Message-Id: <20221114191547.1694267-2-memxor@gmail.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com> References: 
<20221114191547.1694267-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1743; i=memxor@gmail.com; h=from:subject; bh=TQdLtozHcFAX41FmGoXIb11rizNdf5YDECl5nuh1YTs=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjcpPIYkpbjgYlHHXeNDaXnxMAMbY+CWQTumnO8K0c JvRxkNiJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY3KTyAAKCRBM4MiGSL8Ryo3+EA CqHhdioEZowD6lNpI58xjrfUACBlpTCyVzUX9lbNW5J5Bk6qCpvMKK4gj9Veg9EXSpYlg5fdK4kSqp bgi1HjfqSF5+MaH1GMswHmN17IRmYSZoRs0i8v7UWr0tsYFWnXP6gXpYGgfiI4ERDuJQ7vwvL7oWRt xmQpL3bsKJWXEAbsLTtmVJzNppyLW46LmpDT9SZsOFgrm5gF4DnStXDNaDMDuv01XqCI6n1tKunggj 4Etiscv82pqaQ8YaWz9d/JA0MJWJtc+EW2GJgzTbEMbOvVjrjg9C9uOsZI7u+8DBQFLf6++RHgFdfx guGVyUjK1xQTS9675Gv809l1GjOat2qfFnuGYCgqk4l7WyGvH9LWDXscjjjDqLywSQXsi9hACVCevR gboL0lCFhyGXoCLdDC2kQ3RmDKPEt0dM4zRf53HkgoWLot8kxPWhhJeHBn0W0Q91O2pig03jZQxZSQ g674bVRcxNnxgmNPL57O94Zjueo6juotymmATUsh2TR4Yudns/wlDyEjVm6N1wxOPZ085olNKW30bi QZeDyokRiSiIKvxEbtM7AOrww+2Eb14MnqKooRiMI6y5ymVP+Hy+4NP7U5COPgmcrCYJ5O6+exk9gu tYYtLWsjnxFd3Uv/ktgYaC+TM+gTAtM5z51HB5ma1STsOnBZ9YSPUDVJa6Fg== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net We don't want to commit to a specific name for these. Simply call them allocated objects coming from bpf_obj_new, which is completely clear in itself. Signed-off-by: Kumar Kartikeya Dwivedi --- Documentation/bpf/bpf_design_QA.rst | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/Documentation/bpf/bpf_design_QA.rst b/Documentation/bpf/bpf_design_QA.rst index 17e774d96c5e..cec2371173d7 100644 --- a/Documentation/bpf/bpf_design_QA.rst +++ b/Documentation/bpf/bpf_design_QA.rst @@ -332,13 +332,14 @@ avoid defining types with 'bpf\_' prefix to not be broken in future releases. In other words, no backwards compatibility is guaranteed if one using a type in BTF with 'bpf\_' prefix. -Q: What is the compatibility story for special BPF types in local kptrs? ------------------------------------------------------------------------- -Q: Same as above, but for local kptrs (i.e. pointers to objects allocated using -bpf_obj_new for user defined structures). Will the kernel preserve backwards +Q: What is the compatibility story for special BPF types in allocated objects? +------------------------------------------------------------------------------ +Q: Same as above, but for allocated objects (i.e. objects allocated using +bpf_obj_new for user defined types). Will the kernel preserve backwards compatibility for these features? A: NO. Unlike map value types, there are no stability guarantees for this case. The -whole local kptr API itself is unstable (since it is exposed through kfuncs). +whole API to work with allocated objects and any support for special fields +inside them is unstable (since it is exposed through kfuncs). 
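To make the documentation wording concrete, here is a minimal sketch of what working with such an allocated object looks like from a BPF program. The bpf_obj_new()/bpf_obj_drop() wrappers are assumed to come from bpf_experimental.h as introduced later in this series; they are shown purely for illustration and, as the Q&A entry above stresses, carry no stability guarantees.

	#include <vmlinux.h>
	#include <bpf/bpf_helpers.h>
	#include "bpf_experimental.h"	/* assumed: bpf_obj_new()/bpf_obj_drop() wrappers */

	struct foo {
		long data;
	};

	SEC("tc")
	int alloc_obj_example(struct __sk_buff *ctx)
	{
		struct foo *f;

		/* Allocated object: lives outside any map value, typed via BTF */
		f = bpf_obj_new(typeof(*f));
		if (!f)
			return 0;
		f->data = 42;
		/* Must be released (or stored somewhere) before the program exits */
		bpf_obj_drop(f);
		return 0;
	}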
From patchwork Mon Nov 14 19:15:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13042716 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 65F1BC433FE for ; Mon, 14 Nov 2022 19:16:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236661AbiKNTQB (ORCPT ); Mon, 14 Nov 2022 14:16:01 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55726 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237049AbiKNTP7 (ORCPT ); Mon, 14 Nov 2022 14:15:59 -0500 Received: from mail-pj1-x1042.google.com (mail-pj1-x1042.google.com [IPv6:2607:f8b0:4864:20::1042]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 56D5426553 for ; Mon, 14 Nov 2022 11:15:58 -0800 (PST) Received: by mail-pj1-x1042.google.com with SMTP id b11so11230927pjp.2 for ; Mon, 14 Nov 2022 11:15:58 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=RnrYmyFxoGEM6OlddgT2gVaTsDC543yW1/JtZQJqlOM=; b=RMmxbFDcKLe95zYZ89a1bXqebNK2GuRnttLD5IzGPsjKmYI1z93GLOMjecBnzMKsfy uDeCHfK3ubuQYVvHSWt7LAClp+ZWswkTj4T1jszBcSuRv7jwXf5lWoqdvj+Y5rTTFqlE 6GUCjVCI9nGH6lBYKOorW4ZMTbqhBL+RH3HrklBv4klXCtw2UIk5N63wCBLgAJ/jTBhK dGXnf1ixWxx5PnNYtNFzI1b9KatgV+4KBkozVHTDiIawvvvKpPKdS0CtRJZ86VBcvS+F DT3FYr2HoasAddgGvYC76Pj05YPRHOqoaSFvEaXAJExhbVuJZCg/e9n83+3BFkhabBcp m/Lg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=RnrYmyFxoGEM6OlddgT2gVaTsDC543yW1/JtZQJqlOM=; b=liTTW/2O4bpCesD1rzcpUYlii47klJdpfkvEtuJVW1Jumph+xOZxSEA3kQdfU0kCFT EGPUrT3A52k5yN+kJowdSBxw9cjs/0yKQesLlSAZrcMUrBQZP16tBZCEuWUYn3ZfMVQB lfIEUyZgQIFnGPXy7X4xWCgrZl1u7rgfX4jHTWs8FiyIE0aL46nnTMWkCKhAaKE+znYr KAuvT8xvZyWIIT6KcV3qNxHmhjpxFH2uo7uY/whV2K/8nNrFR+nWkPo7QIY11BdqdJN2 QFzE36Z2zBm9nUXzgBnvNFo8mAknJqS8oSaWrp4qVOxcV0VRHBcEk3TXNR5ZPb9J6IlT 5xXA== X-Gm-Message-State: ANoB5pkt6Yj1hPpqSG9S9yV85qd41Ut3r6sFy2xCjiE52K9dxHrbWCo/ /wrMg+YqfTFuUtSI5PeebOBaLD8yakWskw== X-Google-Smtp-Source: AA0mqf6WQzHzei6kOn5uWPY/U0c2mqCgKMAtcV7P/4SnYdZygXPhdV2r64oKIwHvjHJmN537heYCzQ== X-Received: by 2002:a17:903:40d0:b0:182:2589:db21 with SMTP id t16-20020a17090340d000b001822589db21mr553055pld.151.1668453357623; Mon, 14 Nov 2022 11:15:57 -0800 (PST) Received: from localhost ([59.152.80.69]) by smtp.gmail.com with ESMTPSA id y15-20020a17090264cf00b001869581f7ecsm7848033pli.116.2022.11.14.11.15.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Nov 2022 11:15:57 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Dave Marchevsky Subject: [PATCH bpf-next v7 02/26] bpf: Remove BPF_MAP_OFF_ARR_MAX Date: Tue, 15 Nov 2022 00:45:23 +0530 Message-Id: <20221114191547.1694267-3-memxor@gmail.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com> References: 
<20221114191547.1694267-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1926; i=memxor@gmail.com; h=from:subject; bh=ZTLueoxLQrjQVPDjxfufu3gtMYHhhFj4zDRgWCv34xw=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjcpPIKwPAD2afMKABKxB3GU1fQw0uIn5UXRBoG2Y8 3eCPBMaJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY3KTyAAKCRBM4MiGSL8RytwED/ 9UqAFViECziopPmeJJszU93PTw97fMuLmIGTeDOeDgo9Elx+jW1lAR26suKgCZh7s2FkNuOt1QvMVg y4zMB/YgZq8qKFAF4576MwPPQZ/9RL07TL4PErV/aug2edI+OM6xKPMSa50yxo0qgeICa5aeWCC9Mu 6HWKhB7dsNuZg6qwg9fHvhmmStpPTCfZ8wW0rh4kgL/C5cDYxccszBncR5LkKr9yktAm67A9uLZW92 7eAj9OlHr6Nqd/zZeMGbIoPbw/ieWjsLz+marxtTtVvYu1xUAuBIefHY8JNtIThvUghpRsftbpCI4Q HhyoICMwEgobSEICtt8Da7Tpxb1+ZrAMzXwctwV43Fsrm+AoKUaQa7P6Z296tI//dj0MtX4364+9tT 8Op6CUY1maHIsnXmdvfAvLguYmaq2ejY/WG55V4kE6mdqxC4hRQvdtkxYoyn7gvrdp8/0ctiYYiIQQ gehQy7oyvzGFW88Kv6jQ0MpErE/usW8IJFNpLXWuZC7vz4fKlUB9gvTULJSddNxIsXPm3lHlmf8p5L NxVQoSVUHI5ToYsQwl3r+fMenhXx0grZ494vS5asrFSpueh916U1w4LXl8Q+B11TjAhRX2YJFeoi98 FK0jMFtZBr+DbBe25z7UFLxmDmXo8z7VXtrOiqCjJZhKbH8WUnEGG2qpQyow== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net In f71b2f64177a ("bpf: Refactor map->off_arr handling"), map->off_arr was refactored to be btf_field_offs. The number of field offsets is equal to maximum possible fields limited by BTF_FIELDS_MAX. Hence, reuse BTF_FIELDS_MAX as spin_lock and timer no longer are to be handled specially for offset sorting, fix the comment, and remove incorrect WARN_ON as its rec->cnt can never exceed this value. The reason to keep separate constant was the it was always more 2 more than total kptrs. This is no longer the case. Signed-off-by: Kumar Kartikeya Dwivedi --- include/linux/bpf.h | 9 ++++----- kernel/bpf/btf.c | 2 +- 2 files changed, 5 insertions(+), 6 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 798aec816970..1a66a1df1af1 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -165,9 +165,8 @@ struct bpf_map_ops { }; enum { - /* Support at most 8 pointers in a BTF type */ - BTF_FIELDS_MAX = 10, - BPF_MAP_OFF_ARR_MAX = BTF_FIELDS_MAX, + /* Support at most 10 fields in a BTF type */ + BTF_FIELDS_MAX = 10, }; enum btf_field_type { @@ -203,8 +202,8 @@ struct btf_record { struct btf_field_offs { u32 cnt; - u32 field_off[BPF_MAP_OFF_ARR_MAX]; - u8 field_sz[BPF_MAP_OFF_ARR_MAX]; + u32 field_off[BTF_FIELDS_MAX]; + u8 field_sz[BTF_FIELDS_MAX]; }; struct bpf_map { diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 5579ff3a5b54..12361d7b2498 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -3584,7 +3584,7 @@ struct btf_field_offs *btf_parse_field_offs(struct btf_record *rec) u8 *sz; BUILD_BUG_ON(ARRAY_SIZE(foffs->field_off) != ARRAY_SIZE(foffs->field_sz)); - if (IS_ERR_OR_NULL(rec) || WARN_ON_ONCE(rec->cnt > sizeof(foffs->field_off))) + if (IS_ERR_OR_NULL(rec)) return NULL; foffs = kzalloc(sizeof(*foffs), GFP_KERNEL | __GFP_NOWARN); From patchwork Mon Nov 14 19:15:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13042717 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7C45CC433FE for ; Mon, 14 Nov 2022 
19:16:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237117AbiKNTQM (ORCPT ); Mon, 14 Nov 2022 14:16:12 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55736 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237077AbiKNTQE (ORCPT ); Mon, 14 Nov 2022 14:16:04 -0500 Received: from mail-pf1-x444.google.com (mail-pf1-x444.google.com [IPv6:2607:f8b0:4864:20::444]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 83B5026498 for ; Mon, 14 Nov 2022 11:16:01 -0800 (PST) Received: by mail-pf1-x444.google.com with SMTP id v28so11916678pfi.12 for ; Mon, 14 Nov 2022 11:16:01 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=TNUXGrsIBZA47dfMoUXlsA9DJrJBOE3oKFjyxYnaenU=; b=Ybt6hotfYhHBdLPnxjTKOaiLl/gs+ROSWcdo3b4eFkWB3i+ta+BN4EXEV5lspE4+zX i48BYnRXJnLAu9K7ni0LSVntYOpzt7IJxsWBd/Qkq/LNGA7vjIDsCXIO5+wA5XhrGSct zF24uqnr3WES8apCITpXng9a8GCS/XLeuayh8IJubGWiyAd0mgiobMIgqIFD72jiBIvr q2kZ7Zu1WnfFMAHBFnuTVRRm2JychK5WYPzI/T/lPrdDjLkarXMoR1E159jGnb2BOMiG tCwoYBHhYDT6kWHI2rwB9XjoKBfRO5h5dL11UGRBWZAHM19KcajVq8hvha3JLOoOsRKB 5s2w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=TNUXGrsIBZA47dfMoUXlsA9DJrJBOE3oKFjyxYnaenU=; b=hFHhflSmG20T+Rv8O4EYebN6qywt+3cGXHjVf2tQ3fYlZt6UOnpgZpYmRvEfLKu+YO TitxxSg4MlWAkTRdCzVZC6Wi46WNKUvLFy+LLpfP4JKdfQMN9jm6ypTa9m0FEzB6sg/+ gMX1yR6Tj3GlOF3aGYXIhrFT3N2O2ao5l/bEPv58g4MYhsLb7g2xOOWAKs92rdWpEIU1 rqFBWTVssZFfcXXC5gt7/m8L1GvArSXIWDPJQ71JcoSwmN6boS07jBJvzc7KifzTmO2U ecLFi/8BNo5nITwDRCVs+ZF8rMeKy/Edhfv/qizTwhk1HB4LGVsTYyVO+KGAWHdZVLIA 2LVw== X-Gm-Message-State: ANoB5pmr/hM3Cib2weGWsG6jMwQz3E/m/78qDdH2nmH9Y1oZvIFGhg0M +YAxbgK5msaEMG+0t0cIRT7I6SyaIcymug== X-Google-Smtp-Source: AA0mqf7flxNLxnKvG3vA39Zsdu+FGqRm+rf2BGy+uHKtQdZ6hg4WDQTKIaMCBZ3F9rfswL9muwSwag== X-Received: by 2002:a63:8f48:0:b0:455:8333:d8e with SMTP id r8-20020a638f48000000b0045583330d8emr12537724pgn.380.1668453360804; Mon, 14 Nov 2022 11:16:00 -0800 (PST) Received: from localhost ([59.152.80.69]) by smtp.gmail.com with ESMTPSA id i3-20020a636d03000000b0046f6d7dcd1dsm6232464pgc.25.2022.11.14.11.16.00 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Nov 2022 11:16:00 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Dave Marchevsky Subject: [PATCH bpf-next v7 03/26] bpf: Fix copy_map_value, zero_map_value Date: Tue, 15 Nov 2022 00:45:24 +0530 Message-Id: <20221114191547.1694267-4-memxor@gmail.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com> References: <20221114191547.1694267-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1622; i=memxor@gmail.com; h=from:subject; bh=6RvsvjttmdEULzDCNignBfVVDaAhe2Uld3KegARQNKU=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjcpPINLiBPyhOAGswnMZ++VCv2z7GCXT00P33PG8B xurXpGSJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY3KTyAAKCRBM4MiGSL8RypZxEA DC9Qyv20/94RYHeqeZhTH2gl4Cy4d9m6nLXYC7hJ/lTQqLFEwcNM23KDmtTb29CgaIecPbeQyJvBtF LFFDRDABl1nGigZqwCrGpbOXQ6+2kW0Z0aQnhk139cPIHd0zi8oMiBdlI50ZTRadJ9lcrqDPxSmudp 
VLE4NSNNjNgs8f85OBm16+ra7Mb/cZDL904I3xBDVQilsMbllWlR2CJuJ0yhz/PUDTHGSh972w8+Tb +4sQ9NtlRC2y9SooacealSHvObUjG8EBlsFH84NSZk+qD3FnmTfw8HiG1d9lTJlAoSNpLrmnhmHahU K1gyGDJbrH0LVS5M9UKgMG6yuPZ1X60Yg0DZ8NE801a93FGaiUCInh3XaxUyoQyGGEEkKpYWrLQaIt cHSaTBnFbxWe80+UC7qIoTIYy2ixaF7WZs2APPNAqvW0XibesDzPobDZPx68u3abjKzElrx+xd/gsB sghX3dIJVUyAZwwKbVLrvIa+EkFsX73bubo0s1by9NT1u9ALenBnUqJPLpmte1cu7c27maHlkZHz8Q 4YAk0ZePJROkxp666oB8FWVhotITFRnHzelgH0rhDhjClZpQTU++qZTeZtHvoqUkO8cTsHPGwM248A PsIMUYVmiGCVM2V1/7wDvlaR0MvRzSU815TEF0/8M06l4iQZRKafMz/QfgWQ== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net The current offset needs to also skip over the already copied region in addition to the size of the next field. This case manifests where there are gaps between adjacent special fields. It was observed that for a map value with size 48, having fields at: off: 0, 16, 32 size: 4, 16, 16 The current code does: memcpy(dst + 0, src + 0, 0) memcpy(dst + 4, src + 4, 12) memcpy(dst + 20, src + 20, 12) memcpy(dst + 36, src + 36, 12) With the fix, it is done correctly as: memcpy(dst + 0, src + 0, 0) memcpy(dst + 4, src + 4, 12) memcpy(dst + 32, src + 32, 0) memcpy(dst + 48, src + 48, 0) Fixes: 4d7d7f69f4b1 ("bpf: Adapt copy_map_value for multiple offset case") Signed-off-by: Kumar Kartikeya Dwivedi --- include/linux/bpf.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 1a66a1df1af1..f08eb2d27de0 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -360,7 +360,7 @@ static inline void bpf_obj_memcpy(struct btf_field_offs *foffs, u32 sz = next_off - curr_off; memcpy(dst + curr_off, src + curr_off, sz); - curr_off += foffs->field_sz[i]; + curr_off += foffs->field_sz[i] + sz; } memcpy(dst + curr_off, src + curr_off, size - curr_off); } @@ -390,7 +390,7 @@ static inline void bpf_obj_memzero(struct btf_field_offs *foffs, void *dst, u32 u32 sz = next_off - curr_off; memset(dst + curr_off, 0, sz); - curr_off += foffs->field_sz[i]; + curr_off += foffs->field_sz[i] + sz; } memset(dst + curr_off, 0, size - curr_off); } From patchwork Mon Nov 14 19:15:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13042718 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8C7EEC433FE for ; Mon, 14 Nov 2022 19:16:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236525AbiKNTQR (ORCPT ); Mon, 14 Nov 2022 14:16:17 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55880 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237116AbiKNTQM (ORCPT ); Mon, 14 Nov 2022 14:16:12 -0500 Received: from mail-pf1-x443.google.com (mail-pf1-x443.google.com [IPv6:2607:f8b0:4864:20::443]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 74A6D27CCA for ; Mon, 14 Nov 2022 11:16:04 -0800 (PST) Received: by mail-pf1-x443.google.com with SMTP id y13so11932216pfp.7 for ; Mon, 14 Nov 2022 11:16:04 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; 
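For the copy_map_value()/zero_map_value() fix above, the corrected loop can be traced by hand for the layout given in the commit message (value size 48, special fields at offsets 0, 16, 32 with sizes 4, 16, 16); every number below follows directly from sz = next_off - curr_off and curr_off += field_sz[i] + sz in the patched code:

	i    field_off  field_sz   sz = next_off - curr_off   bytes copied   curr_off += field_sz + sz
	0        0          4           0 -  0 =  0            none           0 +  4 +  0 =  4
	1       16         16          16 -  4 = 12            [4, 16)        4 + 16 + 12 = 32
	2       32         16          32 - 32 =  0            none          32 + 16 +  0 = 48
	tail: memcpy(dst + 48, src + 48, 48 - 48) copies nothing; all three special fields are skipped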
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=GK54O+gAPcHLdHoAM2s1GlneRerMM22OMB7blml6vKU=; b=gEpxXuV5D1uPKu1ngQrSVqXGMfEXT/xSceFUamY64Xin4dSOvx6zw7hA+gHx+1qeuQ LikPpnNRUZSXYVMD/hmaVWyOzIVYbdlaPWomWxRRtkazBQvyqNkLHsEQH5bJ4SDotMFM 6RAh4EsPXY/OeSTAwJIvpICd/VpwuSFC6teyAPYqQu+3pS64WiBRUPtmlT2xNUqcM44p 3s8dDyCLoXAfySaR9VCbB+3TP+6NN7nFQJrKQOZL1gLothmhLZRZ9awN0QSdZ70OGkdv /KLl90qKYqupC95pw33hMMWfYFsJIaTHFEXuSWsYxVe73xZLLP8TLoIDu4ADMaA/4A3R WX7g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=GK54O+gAPcHLdHoAM2s1GlneRerMM22OMB7blml6vKU=; b=4c43COS0mJYprsWJQNBtuxMpc4WcohedZ0WdkVj2Mi1sF93tONxb036+6NzwwA0ddA k9W6m01cT+X9VT54I6tMkeqeXCkjhdm39yOpvuGgf6cMFp9I5P/30Y/yJ2I1ws4JubDh UGvtVszJdIgmcJq9OI6Rl2QwssYm5/PHCks9omfmosMPM4mgMjyAsPJIXPdbNBLIeNvv o8cvzWNccAs3m0XhWrLMhLj/DZ6Qss3ivUKGpZLkVD7mfNbqKPoG0ID8S59FTFzOr+IV s31DusXWf1EpVDxUNH5twFeKuijYJ+gcEt0L6d6FR++ikyo7acLzhNALzVT0rAbATP0h SlLQ== X-Gm-Message-State: ANoB5plv2M9T1N5Hv+GzHxg3Lr+nv+ELnOOWgT2Vj1ag6TKlZjmMqvu5 48UTbTBOVha0V+v1tsc+gghFbR3slbrjBA== X-Google-Smtp-Source: AA0mqf5XsnP9B4po85GUj969SBrMfMDhP512fkgKfWnHCZ4rdW41GvBMC9VkPxZKW7zaOmoDLkNPiw== X-Received: by 2002:a63:401:0:b0:46a:e5d0:6e1e with SMTP id 1-20020a630401000000b0046ae5d06e1emr12850042pge.530.1668453363672; Mon, 14 Nov 2022 11:16:03 -0800 (PST) Received: from localhost ([59.152.80.69]) by smtp.gmail.com with ESMTPSA id j24-20020a17090ae61800b0020d9306e735sm6995257pjy.20.2022.11.14.11.16.03 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Nov 2022 11:16:03 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Dave Marchevsky Subject: [PATCH bpf-next v7 04/26] bpf: Support bpf_list_head in map values Date: Tue, 15 Nov 2022 00:45:25 +0530 Message-Id: <20221114191547.1694267-5-memxor@gmail.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com> References: <20221114191547.1694267-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=16631; i=memxor@gmail.com; h=from:subject; bh=PGiSC2kNOieHtklE/3cKLURQz5lUU+B0MR2NMWWZf7U=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjcpPICGXV9R2L4bi/mNTBFivcth4GV//lH+cmUEYZ kQ6A/YKJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY3KTyAAKCRBM4MiGSL8Ryt7DD/ 9aUA6FsQ9o9s2CiXH132NdsL9nLO9Ge1klx34U0uZVfZZjxmOlXxY7PpfWlvJxprawNqJFdIdMqoiE TI+rFDx1S2f3xDk8xoPhKepPhKlta/TrCov5OQlA368BZYmeGA3WpLlv5tSsdUb2S3geyaa6gjOh2o rWoJ+YGz7bZSF2kqJhMT5qzlf5SfOdO1gmea8zvBo0idDLT2K83Xr/RRBUiKOYSVEG17slDW0efFOz ByBY2UdldiYFc5ntW2PTdFGuaQkypYfNVdlMx/OA3FwGJn/s5AeKDAVAmAmkJCuwHm8STnZBoKWTnT y6imfOMXK3g76dW0i4WZxVszyfRnTMoHTUzQdqNx0pA1+1cCpOU4beoESDxRIBoY1Qt/jNi14dbMuy bMRRKMIIDzX1B1IpzhNdW5/VRm990tpLbWZ1t+y2+HmuZjotfgKA/qXuPfkA1ChoWI9plEbp6bio2F WMjElh37gBgjOnZ5b+xMiv+Vh3z/Tpa517BzeriXX+3qmTKLNMZbexXnbSpBNqNVBR8z5CtB/Znrfb RTL0ka7mMu+MDC0jEcaSFQPj6B3zqTScVf/MGAucoL4DxUqfe/sOhvaFXD65z6JTrUXHxyh1jwlrS2 5Oa9kebB7ZG3+LMWjSlHCcW6BASHNmSmlUzIrJgDTZGTWAWt6w397uJdK0Fg== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Add the support on the map side to parse, recognize, 
verify, and build metadata table for a new special field of the type struct bpf_list_head. To parameterize the bpf_list_head for a certain value type and the list_node member it will accept in that value type, we use BTF declaration tags. The definition of bpf_list_head in a map value will be done as follows: struct foo { struct bpf_list_node node; int data; }; struct map_value { struct bpf_list_head head __contains(foo, node); }; Then, the bpf_list_head only allows adding to the list 'head' using the bpf_list_node 'node' for the type struct foo. The 'contains' annotation is a BTF declaration tag composed of four parts, "contains:name:node" where the name is then used to look up the type in the map BTF, with its kind hardcoded to BTF_KIND_STRUCT during the lookup. The node defines name of the member in this type that has the type struct bpf_list_node, which is actually used for linking into the linked list. For now, 'kind' part is hardcoded as struct. This allows building intrusive linked lists in BPF, using container_of to obtain pointer to entry, while being completely type safe from the perspective of the verifier. The verifier knows exactly the type of the nodes, and knows that list helpers return that type at some fixed offset where the bpf_list_node member used for this list exists. The verifier also uses this information to disallow adding types that are not accepted by a certain list. For now, no elements can be added to such lists. Support for that is coming in future patches, hence draining and freeing items is done with a TODO that will be resolved in a future patch. Note that the bpf_list_head_free function moves the list out to a local variable under the lock and releases it, doing the actual draining of the list items outside the lock. While this helps with not holding the lock for too long pessimizing other concurrent list operations, it is also necessary for deadlock prevention: unless every function called in the critical section would be notrace, a fentry/fexit program could attach and call bpf_map_update_elem again on the map, leading to the same lock being acquired if the key matches and lead to a deadlock. While this requires some special effort on part of the BPF programmer to trigger and is highly unlikely to occur in practice, it is always better if we can avoid such a condition. While notrace would prevent this, doing the draining outside the lock has advantages of its own, hence it is used to also fix the deadlock related problem. 
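For reference, the 'contains' annotation above is expected to be emitted from BPF program C as a BTF declaration tag. A sketch of a wrapper macro doing that is shown below; it mirrors the helper added to bpf_experimental.h later in this series and is illustrative rather than part of this patch:

	#define __contains(name, node) __attribute__((btf_decl_tag("contains:" #name ":" #node)))

	struct foo {
		struct bpf_list_node node;
		int data;
	};

	struct map_value {
		struct bpf_spin_lock lock;	/* required: btf_parse_fields() rejects a bpf_list_head without it */
		struct bpf_list_head head __contains(foo, node);
	};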
Signed-off-by: Kumar Kartikeya Dwivedi --- include/linux/bpf.h | 17 ++++ include/uapi/linux/bpf.h | 10 +++ kernel/bpf/btf.c | 145 ++++++++++++++++++++++++++++++++- kernel/bpf/helpers.c | 32 ++++++++ kernel/bpf/syscall.c | 22 ++++- kernel/bpf/verifier.c | 7 ++ tools/include/uapi/linux/bpf.h | 10 +++ 7 files changed, 239 insertions(+), 4 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index f08eb2d27de0..05f98e9e5c48 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -175,6 +175,7 @@ enum btf_field_type { BPF_KPTR_UNREF = (1 << 2), BPF_KPTR_REF = (1 << 3), BPF_KPTR = BPF_KPTR_UNREF | BPF_KPTR_REF, + BPF_LIST_HEAD = (1 << 4), }; struct btf_field_kptr { @@ -184,11 +185,18 @@ struct btf_field_kptr { u32 btf_id; }; +struct btf_field_list_head { + struct btf *btf; + u32 value_btf_id; + u32 node_offset; +}; + struct btf_field { u32 offset; enum btf_field_type type; union { struct btf_field_kptr kptr; + struct btf_field_list_head list_head; }; }; @@ -266,6 +274,8 @@ static inline const char *btf_field_type_name(enum btf_field_type type) case BPF_KPTR_UNREF: case BPF_KPTR_REF: return "kptr"; + case BPF_LIST_HEAD: + return "bpf_list_head"; default: WARN_ON_ONCE(1); return "unknown"; @@ -282,6 +292,8 @@ static inline u32 btf_field_type_size(enum btf_field_type type) case BPF_KPTR_UNREF: case BPF_KPTR_REF: return sizeof(u64); + case BPF_LIST_HEAD: + return sizeof(struct bpf_list_head); default: WARN_ON_ONCE(1); return 0; @@ -298,6 +310,8 @@ static inline u32 btf_field_type_align(enum btf_field_type type) case BPF_KPTR_UNREF: case BPF_KPTR_REF: return __alignof__(u64); + case BPF_LIST_HEAD: + return __alignof__(struct bpf_list_head); default: WARN_ON_ONCE(1); return 0; @@ -403,6 +417,9 @@ static inline void zero_map_value(struct bpf_map *map, void *dst) void copy_map_value_locked(struct bpf_map *map, void *dst, void *src, bool lock_src); void bpf_timer_cancel_and_free(void *timer); +void bpf_list_head_free(const struct btf_field *field, void *list_head, + struct bpf_spin_lock *spin_lock); + int bpf_obj_name_cpy(char *dst, const char *src, unsigned int size); struct bpf_offload_dev; diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index fb4c911d2a03..6580448e9f77 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -6888,6 +6888,16 @@ struct bpf_dynptr { __u64 :64; } __attribute__((aligned(8))); +struct bpf_list_head { + __u64 :64; + __u64 :64; +} __attribute__((aligned(8))); + +struct bpf_list_node { + __u64 :64; + __u64 :64; +} __attribute__((aligned(8))); + struct bpf_sysctl { __u32 write; /* Sysctl is being read (= 0) or written (= 1). * Allows 1,2,4-byte read, but no write. 
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 12361d7b2498..c0d73d71c539 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -3205,9 +3205,15 @@ enum { struct btf_field_info { enum btf_field_type type; u32 off; - struct { - u32 type_id; - } kptr; + union { + struct { + u32 type_id; + } kptr; + struct { + const char *node_name; + u32 value_btf_id; + } list_head; + }; }; static int btf_find_struct(const struct btf *btf, const struct btf_type *t, @@ -3261,6 +3267,63 @@ static int btf_find_kptr(const struct btf *btf, const struct btf_type *t, return BTF_FIELD_FOUND; } +static const char *btf_find_decl_tag_value(const struct btf *btf, + const struct btf_type *pt, + int comp_idx, const char *tag_key) +{ + int i; + + for (i = 1; i < btf_nr_types(btf); i++) { + const struct btf_type *t = btf_type_by_id(btf, i); + int len = strlen(tag_key); + + if (!btf_type_is_decl_tag(t)) + continue; + if (pt != btf_type_by_id(btf, t->type) || + btf_type_decl_tag(t)->component_idx != comp_idx) + continue; + if (strncmp(__btf_name_by_offset(btf, t->name_off), tag_key, len)) + continue; + return __btf_name_by_offset(btf, t->name_off) + len; + } + return NULL; +} + +static int btf_find_list_head(const struct btf *btf, const struct btf_type *pt, + const struct btf_type *t, int comp_idx, + u32 off, int sz, struct btf_field_info *info) +{ + const char *value_type; + const char *list_node; + s32 id; + + if (!__btf_type_is_struct(t)) + return BTF_FIELD_IGNORE; + if (t->size != sz) + return BTF_FIELD_IGNORE; + value_type = btf_find_decl_tag_value(btf, pt, comp_idx, "contains:"); + if (!value_type) + return -EINVAL; + list_node = strstr(value_type, ":"); + if (!list_node) + return -EINVAL; + value_type = kstrndup(value_type, list_node - value_type, GFP_KERNEL | __GFP_NOWARN); + if (!value_type) + return -ENOMEM; + id = btf_find_by_name_kind(btf, value_type, BTF_KIND_STRUCT); + kfree(value_type); + if (id < 0) + return id; + list_node++; + if (str_is_empty(list_node)) + return -EINVAL; + info->type = BPF_LIST_HEAD; + info->off = off; + info->list_head.value_btf_id = id; + info->list_head.node_name = list_node; + return BTF_FIELD_FOUND; +} + static int btf_get_field_type(const char *name, u32 field_mask, u32 *seen_mask, int *align, int *sz) { @@ -3284,6 +3347,12 @@ static int btf_get_field_type(const char *name, u32 field_mask, u32 *seen_mask, goto end; } } + if (field_mask & BPF_LIST_HEAD) { + if (!strcmp(name, "bpf_list_head")) { + type = BPF_LIST_HEAD; + goto end; + } + } /* Only return BPF_KPTR when all other types with matchable names fail */ if (field_mask & BPF_KPTR) { type = BPF_KPTR_REF; @@ -3339,6 +3408,12 @@ static int btf_find_struct_field(const struct btf *btf, if (ret < 0) return ret; break; + case BPF_LIST_HEAD: + ret = btf_find_list_head(btf, t, member_type, i, off, sz, + idx < info_cnt ? &info[idx] : &tmp); + if (ret < 0) + return ret; + break; default: return -EFAULT; } @@ -3393,6 +3468,12 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t, if (ret < 0) return ret; break; + case BPF_LIST_HEAD: + ret = btf_find_list_head(btf, var, var_type, -1, off, sz, + idx < info_cnt ? 
&info[idx] : &tmp); + if (ret < 0) + return ret; + break; default: return -EFAULT; } @@ -3491,11 +3572,52 @@ static int btf_parse_kptr(const struct btf *btf, struct btf_field *field, return ret; } +static int btf_parse_list_head(const struct btf *btf, struct btf_field *field, + struct btf_field_info *info) +{ + const struct btf_type *t, *n = NULL; + const struct btf_member *member; + u32 offset; + int i; + + t = btf_type_by_id(btf, info->list_head.value_btf_id); + /* We've already checked that value_btf_id is a struct type. We + * just need to figure out the offset of the list_node, and + * verify its type. + */ + for_each_member(i, t, member) { + if (strcmp(info->list_head.node_name, __btf_name_by_offset(btf, member->name_off))) + continue; + /* Invalid BTF, two members with same name */ + if (n) + return -EINVAL; + n = btf_type_by_id(btf, member->type); + if (!__btf_type_is_struct(n)) + return -EINVAL; + if (strcmp("bpf_list_node", __btf_name_by_offset(btf, n->name_off))) + return -EINVAL; + offset = __btf_member_bit_offset(n, member); + if (offset % 8) + return -EINVAL; + offset /= 8; + if (offset % __alignof__(struct bpf_list_node)) + return -EINVAL; + + field->list_head.btf = (struct btf *)btf; + field->list_head.value_btf_id = info->list_head.value_btf_id; + field->list_head.node_offset = offset; + } + if (!n) + return -ENOENT; + return 0; +} + struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type *t, u32 field_mask, u32 value_size) { struct btf_field_info info_arr[BTF_FIELDS_MAX]; struct btf_record *rec; + u32 next_off = 0; int ret, i, cnt; ret = btf_find_field(btf, t, field_mask, info_arr, ARRAY_SIZE(info_arr)); @@ -3517,6 +3639,11 @@ struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type ret = -EFAULT; goto end; } + if (info_arr[i].off < next_off) { + ret = -EEXIST; + goto end; + } + next_off = info_arr[i].off + btf_field_type_size(info_arr[i].type); rec->field_mask |= info_arr[i].type; rec->fields[i].offset = info_arr[i].off; @@ -3539,12 +3666,24 @@ struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type if (ret < 0) goto end; break; + case BPF_LIST_HEAD: + ret = btf_parse_list_head(btf, &rec->fields[i], &info_arr[i]); + if (ret < 0) + goto end; + break; default: ret = -EFAULT; goto end; } rec->cnt++; } + + /* bpf_list_head requires bpf_spin_lock */ + if (btf_record_has_field(rec, BPF_LIST_HEAD) && rec->spin_lock_off < 0) { + ret = -EINVAL; + goto end; + } + return rec; end: btf_record_free(rec); diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index 283f55bbeb70..7bc71995f17c 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -1706,6 +1706,38 @@ bpf_base_func_proto(enum bpf_func_id func_id) } } +void bpf_list_head_free(const struct btf_field *field, void *list_head, + struct bpf_spin_lock *spin_lock) +{ + struct list_head *head = list_head, *orig_head = list_head; + + BUILD_BUG_ON(sizeof(struct list_head) > sizeof(struct bpf_list_head)); + BUILD_BUG_ON(__alignof__(struct list_head) > __alignof__(struct bpf_list_head)); + + /* Do the actual list draining outside the lock to not hold the lock for + * too long, and also prevent deadlocks if tracing programs end up + * executing on entry/exit of functions called inside the critical + * section, and end up doing map ops that call bpf_list_head_free for + * the same map value again. 
+ */ + __bpf_spin_lock_irqsave(spin_lock); + if (!head->next || list_empty(head)) + goto unlock; + head = head->next; +unlock: + INIT_LIST_HEAD(orig_head); + __bpf_spin_unlock_irqrestore(spin_lock); + + while (head != orig_head) { + void *obj = head; + + obj -= field->list_head.node_offset; + head = head->next; + /* TODO: Rework later */ + kfree(obj); + } +} + BTF_SET8_START(tracing_btf_ids) #ifdef CONFIG_KEXEC_CORE BTF_ID_FLAGS(func, crash_kexec, KF_DESTRUCTIVE) diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 85532d301124..fdbae52f463f 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -536,6 +536,9 @@ void btf_record_free(struct btf_record *rec) module_put(rec->fields[i].kptr.module); btf_put(rec->fields[i].kptr.btf); break; + case BPF_LIST_HEAD: + /* Nothing to release for bpf_list_head */ + break; default: WARN_ON_ONCE(1); continue; @@ -578,6 +581,9 @@ struct btf_record *btf_record_dup(const struct btf_record *rec) goto free; } break; + case BPF_LIST_HEAD: + /* Nothing to acquire for bpf_list_head */ + break; default: ret = -EFAULT; WARN_ON_ONCE(1); @@ -637,6 +643,11 @@ void bpf_obj_free_fields(const struct btf_record *rec, void *obj) case BPF_KPTR_REF: field->kptr.dtor((void *)xchg((unsigned long *)field_ptr, 0)); break; + case BPF_LIST_HEAD: + if (WARN_ON_ONCE(rec->spin_lock_off < 0)) + continue; + bpf_list_head_free(field, field_ptr, obj + rec->spin_lock_off); + break; default: WARN_ON_ONCE(1); continue; @@ -965,7 +976,8 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf, if (!value_type || value_size != map->value_size) return -EINVAL; - map->record = btf_parse_fields(btf, value_type, BPF_SPIN_LOCK | BPF_TIMER | BPF_KPTR, + map->record = btf_parse_fields(btf, value_type, + BPF_SPIN_LOCK | BPF_TIMER | BPF_KPTR | BPF_LIST_HEAD, map->value_size); if (!IS_ERR_OR_NULL(map->record)) { int i; @@ -1012,6 +1024,14 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf, goto free_map_tab; } break; + case BPF_LIST_HEAD: + if (map->map_type != BPF_MAP_TYPE_HASH && + map->map_type != BPF_MAP_TYPE_LRU_HASH && + map->map_type != BPF_MAP_TYPE_ARRAY) { + ret = -EOPNOTSUPP; + goto free_map_tab; + } + break; default: /* Fail if map_type checks are missing for a field type */ ret = -EOPNOTSUPP; diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 07c0259dfc1a..a50018e2d4a0 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -12814,6 +12814,13 @@ static int check_map_prog_compatibility(struct bpf_verifier_env *env, { enum bpf_prog_type prog_type = resolve_prog_type(prog); + if (btf_record_has_field(map->record, BPF_LIST_HEAD)) { + if (is_tracing_prog_type(prog_type)) { + verbose(env, "tracing progs cannot use bpf_list_head yet\n"); + return -EINVAL; + } + } + if (btf_record_has_field(map->record, BPF_SPIN_LOCK)) { if (prog_type == BPF_PROG_TYPE_SOCKET_FILTER) { verbose(env, "socket filter progs cannot use bpf_spin_lock yet\n"); diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index fb4c911d2a03..6580448e9f77 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -6888,6 +6888,16 @@ struct bpf_dynptr { __u64 :64; } __attribute__((aligned(8))); +struct bpf_list_head { + __u64 :64; + __u64 :64; +} __attribute__((aligned(8))); + +struct bpf_list_node { + __u64 :64; + __u64 :64; +} __attribute__((aligned(8))); + struct bpf_sysctl { __u32 write; /* Sysctl is being read (= 0) or written (= 1). * Allows 1,2,4-byte read, but no write. 
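Continuing the sketch from the commit message, a map whose value embeds the new field could be declared as below, using the libbpf-style SEC(".maps") convention and the __uint()/__type() macros from bpf_helpers.h; the map name and sizes are illustrative. Only hash, LRU hash and array maps pass the new check in map_check_btf():

	struct {
		__uint(type, BPF_MAP_TYPE_ARRAY);	/* hash and LRU hash are also accepted */
		__uint(max_entries, 1);
		__type(key, int);
		__type(value, struct map_value);	/* struct map_value as sketched after the commit message */
	} array_map SEC(".maps");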
From patchwork Mon Nov 14 19:15:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13042719 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6769BC433FE for ; Mon, 14 Nov 2022 19:16:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236124AbiKNTQV (ORCPT ); Mon, 14 Nov 2022 14:16:21 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56340 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237158AbiKNTQP (ORCPT ); Mon, 14 Nov 2022 14:16:15 -0500 Received: from mail-pg1-x542.google.com (mail-pg1-x542.google.com [IPv6:2607:f8b0:4864:20::542]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 79E9127DD9 for ; Mon, 14 Nov 2022 11:16:07 -0800 (PST) Received: by mail-pg1-x542.google.com with SMTP id b62so11111194pgc.0 for ; Mon, 14 Nov 2022 11:16:07 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=9fB9+tUBSUn6plpY3tSPYq/bignEZhucGpbkfKAE7UY=; b=Kza5AoEymkyQo50hO/MCHqUYG70Waj6g2wOgI/hFaQm6x9vzSIElUtF5ifuwdlBtk5 niKTFjNTWufvfm5jOQ7u67+pyHUeHvrCQ4NemsTPsQR+gFgrOoHrVYc6jWdKF5p/vEVz Cj9pIUDM6aRug8/7UVLy6zAvfT/RIZXCh83gamc/rVTPuXb2XihKGjblqWMQZhAMhTuZ dEHLewhauOLEPKDhVRKT//fogePcrVO5nDxPaAllm8bIbkjNsBd5Fxcci0XSxoOYFxU5 kxe7gtlod6SSBK9w7wyLoqPt5t1RzKlRuDqcCjTXfCDf9X/QmXngbVQJpwePhYQM4GBI Zt4w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=9fB9+tUBSUn6plpY3tSPYq/bignEZhucGpbkfKAE7UY=; b=XzzDJAWXsbyGB89mqzTQMlwhMQedrPQ9cqK2qgOkpPFnuhVyd60/BPptPsg9Ccr1fJ 3axk0rUNrVd4XXNEWAKBwyBU4jyXAKvX4onu2vfJkGnSiHt9X/kKEY9pJBTXWmeGi2Ti 6Mz6Z2NCwz4IS/8V2RbVfOdlDS04QKHmOOJVyb4ObX7OVWzrUw1i/4qZ/j2xrC4u6jtU iWE0LQxkqc4XS4Lsct9BxeXimAEPx8sqRTw3OVzJQYXlM49sTdHhrrCkZ2MnxqhJm8xS wQ4gSTYpxOgjrWT2TW/ZtqtYPmVesd9bpMRIiGVSdDfX1PIUxq76xxDRo2asYElKLFuz rIWg== X-Gm-Message-State: ANoB5pk4yoFJuC2Q1XylGajS6PX/qjfVqpFRfplLg9mWrvXozPHeCPoy uTJhxekV6VyRQG8MzvX19arSpP9dxjKAkA== X-Google-Smtp-Source: AA0mqf714MrqqVJTwRvx5NODlSz+57AJIImZes5Ya8cVwh2712pwCKO/qbfl6s8YFc6FOQGaivr2gQ== X-Received: by 2002:a63:31c9:0:b0:476:9115:663d with SMTP id x192-20020a6331c9000000b004769115663dmr3857972pgx.436.1668453366769; Mon, 14 Nov 2022 11:16:06 -0800 (PST) Received: from localhost ([59.152.80.69]) by smtp.gmail.com with ESMTPSA id u8-20020a1709026e0800b00186c41bd213sm7840008plk.177.2022.11.14.11.16.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Nov 2022 11:16:06 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Dave Marchevsky Subject: [PATCH bpf-next v7 05/26] bpf: Rename RET_PTR_TO_ALLOC_MEM Date: Tue, 15 Nov 2022 00:45:26 +0530 Message-Id: <20221114191547.1694267-6-memxor@gmail.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com> References: 
<20221114191547.1694267-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2687; i=memxor@gmail.com; h=from:subject; bh=+megFhfdO04jt9azzJSUDOASFzsAIDIpP2m6YPhf6DY=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjcpPIGUJS+7S1RwKXDjDw/jz4/hn9oy/cpICq9Md0 8KQl2umJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY3KTyAAKCRBM4MiGSL8RymwYD/ 46g+8EqY/N4e6YqE3GQcWxIfntMwjL9xWIOqOwtDZ8YHoP2yK1Fq4m0LjDCXE6yiZENSXREkjXWFl/ WQq383VaAT+pBMnnttD1aDZ/P9MleKz+njfiXce4jL8iitO2JnNZLXz/zK6opfKDakUcLVIcOb2R6X 4XhDToNwHC4a6wMjU5hTltjrrs3xXe8bwCJboMnxGzl6mEcMTPUFnshSgxjXABtDCS1mzXitkl0bV1 jHAtsBvitousEre6/Omm5R2GgFiBxtddjrF15bYz9DpbiCHit0mktACzAHOZqWhFnOFC7EjM1TVF7m q06CT7EKVW6mPWDbOP1eQZihn3lDf7AUA8uBMufoCdqGoqEY1MVcd/fh9r8sz5EmqrUHaSE5XHxkdi eQCFrPGeoMyZVSArBt/0CsuNxxs0Y6q9jNlqWIQvmnxrijcRZN9K+pouvFpeIZ6gLb49AFNuCLE5Lm hGT+cI8Yp3/TgCfpG3BmB+R8tkPdyqZ/KotDwwaAG2NTb59tyb1YvcQ/40FoC+z4lB8h0lbUEwykQe ed9JRX6Z5hpKP5GKm6rWnQydS9ceFpX9XyQRxMMqKEKxchEXC5GDcGUGXPE0kJpfnm5GPoaOPobUZB cyrjZJt3GxftUUIj3onC2cjyhfpXAYJTlENLniGV4O6T2kN5M1Jv9HHJEI3Q== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Currently, the verifier has two return types, RET_PTR_TO_ALLOC_MEM, and RET_PTR_TO_ALLOC_MEM_OR_NULL, however the former is confusingly named to imply that it carries MEM_ALLOC, while only the latter does. This causes confusion during code review leading to conclusions like that the return value of RET_PTR_TO_DYNPTR_MEM_OR_NULL (which is RET_PTR_TO_ALLOC_MEM | PTR_MAYBE_NULL) may be consumable by bpf_ringbuf_{submit,commit}. Rename it to make it clear MEM_ALLOC needs to be tacked on top of RET_PTR_TO_MEM. Signed-off-by: Kumar Kartikeya Dwivedi --- include/linux/bpf.h | 6 +++--- kernel/bpf/verifier.c | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 05f98e9e5c48..2fe3ec620d54 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -607,7 +607,7 @@ enum bpf_return_type { RET_PTR_TO_SOCKET, /* returns a pointer to a socket */ RET_PTR_TO_TCP_SOCK, /* returns a pointer to a tcp_sock */ RET_PTR_TO_SOCK_COMMON, /* returns a pointer to a sock_common */ - RET_PTR_TO_ALLOC_MEM, /* returns a pointer to dynamically allocated memory */ + RET_PTR_TO_MEM, /* returns a pointer to memory */ RET_PTR_TO_MEM_OR_BTF_ID, /* returns a pointer to a valid memory or a btf_id */ RET_PTR_TO_BTF_ID, /* returns a pointer to a btf_id */ __BPF_RET_TYPE_MAX, @@ -617,8 +617,8 @@ enum bpf_return_type { RET_PTR_TO_SOCKET_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_SOCKET, RET_PTR_TO_TCP_SOCK_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_TCP_SOCK, RET_PTR_TO_SOCK_COMMON_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_SOCK_COMMON, - RET_PTR_TO_ALLOC_MEM_OR_NULL = PTR_MAYBE_NULL | MEM_ALLOC | RET_PTR_TO_ALLOC_MEM, - RET_PTR_TO_DYNPTR_MEM_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_ALLOC_MEM, + RET_PTR_TO_ALLOC_MEM_OR_NULL = PTR_MAYBE_NULL | MEM_ALLOC | RET_PTR_TO_MEM, + RET_PTR_TO_DYNPTR_MEM_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_MEM, RET_PTR_TO_BTF_ID_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_BTF_ID, /* This must be the last entry. 
Its purpose is to ensure the enum is diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index a50018e2d4a0..c88da7e3ca74 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -7630,7 +7630,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn mark_reg_known_zero(env, regs, BPF_REG_0); regs[BPF_REG_0].type = PTR_TO_TCP_SOCK | ret_flag; break; - case RET_PTR_TO_ALLOC_MEM: + case RET_PTR_TO_MEM: mark_reg_known_zero(env, regs, BPF_REG_0); regs[BPF_REG_0].type = PTR_TO_MEM | ret_flag; regs[BPF_REG_0].mem_size = meta.mem_size; From patchwork Mon Nov 14 19:15:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13042720 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B3EF8C4332F for ; Mon, 14 Nov 2022 19:16:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237059AbiKNTQX (ORCPT ); Mon, 14 Nov 2022 14:16:23 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56184 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237071AbiKNTQQ (ORCPT ); Mon, 14 Nov 2022 14:16:16 -0500 Received: from mail-pl1-x643.google.com (mail-pl1-x643.google.com [IPv6:2607:f8b0:4864:20::643]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 024F628733 for ; Mon, 14 Nov 2022 11:16:10 -0800 (PST) Received: by mail-pl1-x643.google.com with SMTP id 4so10999789pli.0 for ; Mon, 14 Nov 2022 11:16:10 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=yZueNr6f0tHcUKmyDtJKuqBamDqSkRqVn3/daT3k9+0=; b=IAqsRzJkMPkpeS52ASMpJc7k2goZVsCG8lUq+WBCi7LFP0N4p0Kzqyg/pQZ8sVe/Zl Gyz8WChZT706w+Zx6M/TQ4ujwZVUVlzMmi9UUnTpoRMX4l7ptjon9r6+oQLd6iw2RDZC v1MS6BGu7P01Ubbyjea8TsOquW6VoOsh65aKWcOcQaIQI6p7wpCKN9HI+PuviQVqTghG foaJ2mUOVZa45ZpYH8Q0uh2P6OEJBhX2yvPC+hxqxp8lIKXuFT6kO4dNxLq3FkDC5cqc /opLKgQEVhbspvrPFPYfq42Tvlbk8h1rHrXkc/uw5XLuhHkWcqPOuYjdCQ4WYxsq8P95 LY2A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=yZueNr6f0tHcUKmyDtJKuqBamDqSkRqVn3/daT3k9+0=; b=fy1W3aFIroHiPRj8AevirLcmn6py5XjFMm/baIO+nLk6XvbF8HqZN15dHG+iG5oC3o b7JLQ152YnmlHR5nZWf5dte/QQI3d2+w3doBCeNQAqcKr4X1yh6X+6RBfSpTcl9A8dIE Xy2PzirPsj1DK1BcoxketUS9l9H72xeH6Qb3in4672CFaWsbdmJvp0aAWSd3DaIEoNo+ Ar6955griJUVfj8YaG7zzr9JM52nWLuE/NOUnB0TUHQx6eXEsAcwCuwxWE4FqE0GkYZ7 6pba6xfVtnBEvnYVzMFUZq1kSY9wOmzausPxnjo6b0sS5cAnkcIfFZM4hpHY/dOKI2lN ivVw== X-Gm-Message-State: ANoB5pl0l79kcoT1MsN97osZ7U9VNxmkpc06DwKl9z5Vf8Ej/RhSocoz Kzl+BK3j3L6NB2zphbESa8hOk+GQEZjHAw== X-Google-Smtp-Source: AA0mqf62Hw5LDJgWLvN438MdCVWvN+71YpVTn5smHAHnGcKPVaM3m+hh0ubwY5VF5Ntjo0tkebDezA== X-Received: by 2002:a17:90b:2644:b0:213:971d:902e with SMTP id pa4-20020a17090b264400b00213971d902emr15162282pjb.123.1668453370124; Mon, 14 Nov 2022 11:16:10 -0800 (PST) Received: from localhost ([59.152.80.69]) by smtp.gmail.com with ESMTPSA id 
e6-20020a170902784600b001869ba04c83sm7820736pln.245.2022.11.14.11.16.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Nov 2022 11:16:09 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Dave Marchevsky Subject: [PATCH bpf-next v7 06/26] bpf: Rename MEM_ALLOC to MEM_RINGBUF Date: Tue, 15 Nov 2022 00:45:27 +0530 Message-Id: <20221114191547.1694267-7-memxor@gmail.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com> References: <20221114191547.1694267-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=9098; i=memxor@gmail.com; h=from:subject; bh=pJNflLohdaN1tdbLPrpoFDWVDxqAzmKM2VOoyDzryLg=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjcpPIs+C9YFnfVzLmkaR4mKhYJebSaN+FYvJnXJII fvFkXtyJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY3KTyAAKCRBM4MiGSL8RysfOEA Cr9LSXx2pQdTMXt6+OfO+1WeJ8ftah8UnEBQthR8MiBukxXMz9i4Y7mHoA//KguVK+nF4lWevq3FVQ B3fyLlepQxvrFL3PinU8TlX+P+5pcxeCwSw5WCztYBQfVzlPWExJ+7rUG7HMlER8TriN3gzKY9/22E gReX6CdDdXqOEzalpo/3BkN4yE4pe2ILCVURUjygZnyE6zd/KtT4fWz7U9oTMPIFRA6gCShPTCZ3ab rvMlfs5mnxSGEhvzj04LxT8L7/q1BkC8csJYtX2RWdjdzEwoyqPpSNzraasRWweShKbx3vzKmoEPUS b7A6i+d8cU6MEiKFr5+rMRuJjJ6+nbi7AG4jNsxwb9DlC/3hG8ycT6M7oTHpDCqLaSj1n7VPmNoxhS fFBlYoy9BdsN6sIJVS7Ln3+PwJiZ8+Kboe+mgMBkFtvlpBJSdiram1elUyFOaMb+BLMGc/rRfPHORe EREoL+AkL130hSyI846pWprlGHzOh2YAMKNjDeYewZGcIFeytbUp3NH2GcItaVIvreQmBu6Tm36meV YbtXRGHj6EoMuTW9dyg4yzD3NaLanXHp5Rv7T2y3GG6+iS6waNgvKRctAHh4wCIsrqtLLWDDEFK85s m7MF+K6XyVmKMN8ntbOTTNeJbk04eYM7K78E2Z7ZdJTSdutRjf5ivkOFJMiw== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Currently, verifier uses MEM_ALLOC type tag to specially tag memory returned from bpf_ringbuf_reserve helper. However, this is currently only used for this purpose and there is an implicit assumption that it only refers to ringbuf memory (e.g. the check for ARG_PTR_TO_ALLOC_MEM in check_func_arg_reg_off). Hence, rename MEM_ALLOC to MEM_RINGBUF to indicate this special relationship and instead open the use of MEM_ALLOC for more generic allocations made for user types. Also, since ARG_PTR_TO_ALLOC_MEM_OR_NULL is unused, simply drop it. Finally, update selftests using 'alloc_' verifier string to 'ringbuf_'. Signed-off-by: Kumar Kartikeya Dwivedi --- include/linux/bpf.h | 11 ++++------- kernel/bpf/ringbuf.c | 6 +++--- kernel/bpf/verifier.c | 14 +++++++------- tools/testing/selftests/bpf/prog_tests/dynptr.c | 2 +- tools/testing/selftests/bpf/verifier/ringbuf.c | 2 +- tools/testing/selftests/bpf/verifier/spill_fill.c | 2 +- 6 files changed, 17 insertions(+), 20 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 2fe3ec620d54..afc1c51b59ff 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -488,10 +488,8 @@ enum bpf_type_flag { */ MEM_RDONLY = BIT(1 + BPF_BASE_TYPE_BITS), - /* MEM was "allocated" from a different helper, and cannot be mixed - * with regular non-MEM_ALLOC'ed MEM types. - */ - MEM_ALLOC = BIT(2 + BPF_BASE_TYPE_BITS), + /* MEM points to BPF ring buffer reservation. */ + MEM_RINGBUF = BIT(2 + BPF_BASE_TYPE_BITS), /* MEM is in user address space. 
*/ MEM_USER = BIT(3 + BPF_BASE_TYPE_BITS), @@ -565,7 +563,7 @@ enum bpf_arg_type { ARG_PTR_TO_LONG, /* pointer to long */ ARG_PTR_TO_SOCKET, /* pointer to bpf_sock (fullsock) */ ARG_PTR_TO_BTF_ID, /* pointer to in-kernel struct */ - ARG_PTR_TO_ALLOC_MEM, /* pointer to dynamically allocated memory */ + ARG_PTR_TO_RINGBUF_MEM, /* pointer to dynamically reserved ringbuf memory */ ARG_CONST_ALLOC_SIZE_OR_ZERO, /* number of allocated bytes requested */ ARG_PTR_TO_BTF_ID_SOCK_COMMON, /* pointer to in-kernel sock_common or bpf-mirrored bpf_sock */ ARG_PTR_TO_PERCPU_BTF_ID, /* pointer to in-kernel percpu type */ @@ -582,7 +580,6 @@ enum bpf_arg_type { ARG_PTR_TO_MEM_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_MEM, ARG_PTR_TO_CTX_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_CTX, ARG_PTR_TO_SOCKET_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_SOCKET, - ARG_PTR_TO_ALLOC_MEM_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_ALLOC_MEM, ARG_PTR_TO_STACK_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_STACK, ARG_PTR_TO_BTF_ID_OR_NULL = PTR_MAYBE_NULL | ARG_PTR_TO_BTF_ID, /* pointer to memory does not need to be initialized, helper function must fill @@ -617,7 +614,7 @@ enum bpf_return_type { RET_PTR_TO_SOCKET_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_SOCKET, RET_PTR_TO_TCP_SOCK_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_TCP_SOCK, RET_PTR_TO_SOCK_COMMON_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_SOCK_COMMON, - RET_PTR_TO_ALLOC_MEM_OR_NULL = PTR_MAYBE_NULL | MEM_ALLOC | RET_PTR_TO_MEM, + RET_PTR_TO_RINGBUF_MEM_OR_NULL = PTR_MAYBE_NULL | MEM_RINGBUF | RET_PTR_TO_MEM, RET_PTR_TO_DYNPTR_MEM_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_MEM, RET_PTR_TO_BTF_ID_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_BTF_ID, diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c index 9e832acf4692..80f4b4d88aaf 100644 --- a/kernel/bpf/ringbuf.c +++ b/kernel/bpf/ringbuf.c @@ -447,7 +447,7 @@ BPF_CALL_3(bpf_ringbuf_reserve, struct bpf_map *, map, u64, size, u64, flags) const struct bpf_func_proto bpf_ringbuf_reserve_proto = { .func = bpf_ringbuf_reserve, - .ret_type = RET_PTR_TO_ALLOC_MEM_OR_NULL, + .ret_type = RET_PTR_TO_RINGBUF_MEM_OR_NULL, .arg1_type = ARG_CONST_MAP_PTR, .arg2_type = ARG_CONST_ALLOC_SIZE_OR_ZERO, .arg3_type = ARG_ANYTHING, @@ -490,7 +490,7 @@ BPF_CALL_2(bpf_ringbuf_submit, void *, sample, u64, flags) const struct bpf_func_proto bpf_ringbuf_submit_proto = { .func = bpf_ringbuf_submit, .ret_type = RET_VOID, - .arg1_type = ARG_PTR_TO_ALLOC_MEM | OBJ_RELEASE, + .arg1_type = ARG_PTR_TO_RINGBUF_MEM | OBJ_RELEASE, .arg2_type = ARG_ANYTHING, }; @@ -503,7 +503,7 @@ BPF_CALL_2(bpf_ringbuf_discard, void *, sample, u64, flags) const struct bpf_func_proto bpf_ringbuf_discard_proto = { .func = bpf_ringbuf_discard, .ret_type = RET_VOID, - .arg1_type = ARG_PTR_TO_ALLOC_MEM | OBJ_RELEASE, + .arg1_type = ARG_PTR_TO_RINGBUF_MEM | OBJ_RELEASE, .arg2_type = ARG_ANYTHING, }; diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index c88da7e3ca74..c588e5483540 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -577,8 +577,8 @@ static const char *reg_type_str(struct bpf_verifier_env *env, if (type & MEM_RDONLY) strncpy(prefix, "rdonly_", 32); - if (type & MEM_ALLOC) - strncpy(prefix, "alloc_", 32); + if (type & MEM_RINGBUF) + strncpy(prefix, "ringbuf_", 32); if (type & MEM_USER) strncpy(prefix, "user_", 32); if (type & MEM_PERCPU) @@ -5785,7 +5785,7 @@ static const struct bpf_reg_types mem_types = { PTR_TO_MAP_KEY, PTR_TO_MAP_VALUE, PTR_TO_MEM, - PTR_TO_MEM | MEM_ALLOC, + PTR_TO_MEM | MEM_RINGBUF, PTR_TO_BUF, }, }; @@ -5803,7 +5803,7 @@ static const struct bpf_reg_types 
int_ptr_types = { static const struct bpf_reg_types fullsock_types = { .types = { PTR_TO_SOCKET } }; static const struct bpf_reg_types scalar_types = { .types = { SCALAR_VALUE } }; static const struct bpf_reg_types context_types = { .types = { PTR_TO_CTX } }; -static const struct bpf_reg_types alloc_mem_types = { .types = { PTR_TO_MEM | MEM_ALLOC } }; +static const struct bpf_reg_types ringbuf_mem_types = { .types = { PTR_TO_MEM | MEM_RINGBUF } }; static const struct bpf_reg_types const_map_ptr_types = { .types = { CONST_PTR_TO_MAP } }; static const struct bpf_reg_types btf_ptr_types = { .types = { PTR_TO_BTF_ID } }; static const struct bpf_reg_types spin_lock_types = { .types = { PTR_TO_MAP_VALUE } }; @@ -5836,7 +5836,7 @@ static const struct bpf_reg_types *compatible_reg_types[__BPF_ARG_TYPE_MAX] = { [ARG_PTR_TO_BTF_ID] = &btf_ptr_types, [ARG_PTR_TO_SPIN_LOCK] = &spin_lock_types, [ARG_PTR_TO_MEM] = &mem_types, - [ARG_PTR_TO_ALLOC_MEM] = &alloc_mem_types, + [ARG_PTR_TO_RINGBUF_MEM] = &ringbuf_mem_types, [ARG_PTR_TO_INT] = &int_ptr_types, [ARG_PTR_TO_LONG] = &int_ptr_types, [ARG_PTR_TO_PERCPU_BTF_ID] = &percpu_btf_ptr_types, @@ -5957,14 +5957,14 @@ int check_func_arg_reg_off(struct bpf_verifier_env *env, case PTR_TO_MAP_VALUE: case PTR_TO_MEM: case PTR_TO_MEM | MEM_RDONLY: - case PTR_TO_MEM | MEM_ALLOC: + case PTR_TO_MEM | MEM_RINGBUF: case PTR_TO_BUF: case PTR_TO_BUF | MEM_RDONLY: case SCALAR_VALUE: /* Some of the argument types nevertheless require a * zero register offset. */ - if (base_type(arg_type) != ARG_PTR_TO_ALLOC_MEM) + if (base_type(arg_type) != ARG_PTR_TO_RINGBUF_MEM) return 0; break; /* All the rest must be rejected, except PTR_TO_BTF_ID which allows diff --git a/tools/testing/selftests/bpf/prog_tests/dynptr.c b/tools/testing/selftests/bpf/prog_tests/dynptr.c index 8fc4e6c02bfd..b0c06f821cb8 100644 --- a/tools/testing/selftests/bpf/prog_tests/dynptr.c +++ b/tools/testing/selftests/bpf/prog_tests/dynptr.c @@ -17,7 +17,7 @@ static struct { {"ringbuf_missing_release2", "Unreleased reference id=2"}, {"ringbuf_missing_release_callback", "Unreleased reference id"}, {"use_after_invalid", "Expected an initialized dynptr as arg #3"}, - {"ringbuf_invalid_api", "type=mem expected=alloc_mem"}, + {"ringbuf_invalid_api", "type=mem expected=ringbuf_mem"}, {"add_dynptr_to_map1", "invalid indirect read from stack"}, {"add_dynptr_to_map2", "invalid indirect read from stack"}, {"data_slice_out_of_bounds_ringbuf", "value is outside of the allowed memory range"}, diff --git a/tools/testing/selftests/bpf/verifier/ringbuf.c b/tools/testing/selftests/bpf/verifier/ringbuf.c index b64d33e4833c..84838feba47f 100644 --- a/tools/testing/selftests/bpf/verifier/ringbuf.c +++ b/tools/testing/selftests/bpf/verifier/ringbuf.c @@ -28,7 +28,7 @@ }, .fixup_map_ringbuf = { 1 }, .result = REJECT, - .errstr = "dereference of modified alloc_mem ptr R1", + .errstr = "dereference of modified ringbuf_mem ptr R1", }, { "ringbuf: invalid reservation offset 2", diff --git a/tools/testing/selftests/bpf/verifier/spill_fill.c b/tools/testing/selftests/bpf/verifier/spill_fill.c index e23f07175e1b..9bb302dade23 100644 --- a/tools/testing/selftests/bpf/verifier/spill_fill.c +++ b/tools/testing/selftests/bpf/verifier/spill_fill.c @@ -84,7 +84,7 @@ }, .fixup_map_ringbuf = { 1 }, .result = REJECT, - .errstr = "R0 pointer arithmetic on alloc_mem_or_null prohibited", + .errstr = "R0 pointer arithmetic on ringbuf_mem_or_null prohibited", }, { "check corrupted spill/fill", From patchwork Mon Nov 14 19:15:28 2022 Content-Type: 
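As a reminder of what the renamed MEM_RINGBUF flag refers to, a minimal ring buffer round trip looks roughly like this (event layout and section name are illustrative); the pointer returned by bpf_ringbuf_reserve() is the only memory that now carries MEM_RINGBUF, and it must be handed back via bpf_ringbuf_submit() or bpf_ringbuf_discard():

	struct {
		__uint(type, BPF_MAP_TYPE_RINGBUF);
		__uint(max_entries, 4096);
	} rb SEC(".maps");

	struct event {
		__u64 ts;
	};

	SEC("tracepoint/syscalls/sys_enter_execve")
	int ringbuf_example(void *ctx)
	{
		struct event *e;

		e = bpf_ringbuf_reserve(&rb, sizeof(*e), 0);
		if (!e)
			return 0;
		e->ts = bpf_ktime_get_ns();
		bpf_ringbuf_submit(e, 0);	/* releases the MEM_RINGBUF reference */
		return 0;
	}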
text/plain; charset="utf-8"
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13042721
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, Dave Marchevsky
Subject: [PATCH bpf-next v7 07/26] bpf: Refactor btf_struct_access
Date: Tue, 15 Nov 2022 00:45:28 +0530
Message-Id: <20221114191547.1694267-8-memxor@gmail.com>
In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com>
References: <20221114191547.1694267-1-memxor@gmail.com>
MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=12746; i=memxor@gmail.com; h=from:subject; bh=ZQSBi2SUjzUwh5VlOA5tnWKvkKqR/kwB1k+0YEpN5qQ=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjcpPIbaid2b7h0ZuVZskIxcbw00s5zbZBj4fCsdfd F9ewLTCJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY3KTyAAKCRBM4MiGSL8RyqfeD/ 4/EB7w24AsbF134PA21AqdBogaIAkMq2EX4ZlM1vlYGpnYSHAvZtQ7MCrjS8Z8mS1ERUpLtD55OVXm iHZOAsWemJs2ZLqwY7K9t8burRcCINzIw+hKC2oEIBoNzqjaCU0Ec2BcVSvf4uxrK3iOYCYknGhmwD 06W1I+vC45Yswm1K2BIaU6nEUCoCzcI2KO1zleoNYo/hpG7GU99utki6POw4QWg6PfsmN4F1f4ekC4 Q33q6qoH8g58iWeDenUqQA0szr3WTEcKNyQzLaNlrGBLUTLST5j1LXzyqfjILDxZqtyZ4JmezLFq3a OskH4uiSjrSpepbvyzkU+hO9l/fe3I9a6BU+Pe0Iu0HWk1dgUnrLjvbzT+UPMx35peMrvmHoL8T2rc OTbS0U8i0FdlWlZ/4bS8uf5tWcSSjCHLQgqVu0cq1NvfikqEai18oRZ82BIX3fw/scKbKXOVBNlGx+ QMp8pcCMKdAENsNBkitxXPgYu6gq5zErqlWAgEYUnYJYWnR5EI4JwGa0uYkv78dOThBUBB7KY2P4Mo f6JizUKHJD5XFgDTm/K4E+NWQQl2V9At1mpFk+H7PiWohv5mpgB57oyLWIhR3tV4FqjbzJydjUeyfj DNDem2BE7TXvNqwpwzocD0JAxrspoLFb69Rb3q4yw1ueTa6HMuwRXMfHAj2Q== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Instead of having to pass multiple arguments that describe the register, pass the bpf_reg_state into the btf_struct_access callback. Currently, all call sites simply reuse the btf and btf_id of the reg they want to check the access of. The only exception to this pattern is the callsite in check_ptr_to_map_access, hence for that case create a dummy reg to simulate PTR_TO_BTF_ID access. Signed-off-by: Kumar Kartikeya Dwivedi --- include/linux/bpf.h | 17 ++++++++-------- include/linux/filter.h | 8 ++++---- kernel/bpf/btf.c | 11 +++++++---- kernel/bpf/verifier.c | 12 ++++++----- net/bpf/bpf_dummy_struct_ops.c | 14 ++++++------- net/core/filter.c | 34 +++++++++++++------------------- net/ipv4/bpf_tcp_ca.c | 13 ++++++------ net/netfilter/nf_conntrack_bpf.c | 17 +++++++--------- 8 files changed, 60 insertions(+), 66 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index afc1c51b59ff..49f9d2bec401 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -771,6 +771,7 @@ struct bpf_prog_ops { union bpf_attr __user *uattr); }; +struct bpf_reg_state; struct bpf_verifier_ops { /* return eBPF function prototype for verification */ const struct bpf_func_proto * @@ -792,9 +793,8 @@ struct bpf_verifier_ops { struct bpf_insn *dst, struct bpf_prog *prog, u32 *target_size); int (*btf_struct_access)(struct bpf_verifier_log *log, - const struct btf *btf, - const struct btf_type *t, int off, int size, - enum bpf_access_type atype, + const struct bpf_reg_state *reg, + int off, int size, enum bpf_access_type atype, u32 *next_btf_id, enum bpf_type_flag *flag); }; @@ -2080,9 +2080,9 @@ static inline bool bpf_tracing_btf_ctx_access(int off, int size, return btf_ctx_access(off, size, type, prog, info); } -int btf_struct_access(struct bpf_verifier_log *log, const struct btf *btf, - const struct btf_type *t, int off, int size, - enum bpf_access_type atype, +int btf_struct_access(struct bpf_verifier_log *log, + const struct bpf_reg_state *reg, + int off, int size, enum bpf_access_type atype, u32 *next_btf_id, enum bpf_type_flag *flag); bool btf_struct_ids_match(struct bpf_verifier_log *log, const struct btf *btf, u32 id, int off, @@ -2333,9 +2333,8 @@ static inline struct bpf_prog *bpf_prog_by_id(u32 id) } static inline int btf_struct_access(struct bpf_verifier_log *log, - const struct btf *btf, - const struct 
btf_type *t, int off, int size, - enum bpf_access_type atype, + const struct bpf_reg_state *reg, + int off, int size, enum bpf_access_type atype, u32 *next_btf_id, enum bpf_type_flag *flag) { return -EACCES; diff --git a/include/linux/filter.h b/include/linux/filter.h index efc42a6e3aed..787d35dbf5b0 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -568,10 +568,10 @@ struct sk_filter { DECLARE_STATIC_KEY_FALSE(bpf_stats_enabled_key); extern struct mutex nf_conn_btf_access_lock; -extern int (*nfct_btf_struct_access)(struct bpf_verifier_log *log, const struct btf *btf, - const struct btf_type *t, int off, int size, - enum bpf_access_type atype, u32 *next_btf_id, - enum bpf_type_flag *flag); +extern int (*nfct_btf_struct_access)(struct bpf_verifier_log *log, + const struct bpf_reg_state *reg, + int off, int size, enum bpf_access_type atype, + u32 *next_btf_id, enum bpf_type_flag *flag); typedef unsigned int (*bpf_dispatcher_fn)(const void *ctx, const struct bpf_insn *insnsi, diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index c0d73d71c539..875355ff3718 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -6017,15 +6017,18 @@ static int btf_struct_walk(struct bpf_verifier_log *log, const struct btf *btf, return -EINVAL; } -int btf_struct_access(struct bpf_verifier_log *log, const struct btf *btf, - const struct btf_type *t, int off, int size, - enum bpf_access_type atype __maybe_unused, +int btf_struct_access(struct bpf_verifier_log *log, + const struct bpf_reg_state *reg, + int off, int size, enum bpf_access_type atype __maybe_unused, u32 *next_btf_id, enum bpf_type_flag *flag) { + const struct btf *btf = reg->btf; enum bpf_type_flag tmp_flag = 0; + const struct btf_type *t; + u32 id = reg->btf_id; int err; - u32 id; + t = btf_type_by_id(btf, id); do { err = btf_struct_walk(log, btf, t, off, size, &id, &tmp_flag); diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index c588e5483540..5e74f460dfd0 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -4688,16 +4688,14 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env, } if (env->ops->btf_struct_access) { - ret = env->ops->btf_struct_access(&env->log, reg->btf, t, - off, size, atype, &btf_id, &flag); + ret = env->ops->btf_struct_access(&env->log, reg, off, size, atype, &btf_id, &flag); } else { if (atype != BPF_READ) { verbose(env, "only read is supported\n"); return -EACCES; } - ret = btf_struct_access(&env->log, reg->btf, t, off, size, - atype, &btf_id, &flag); + ret = btf_struct_access(&env->log, reg, off, size, atype, &btf_id, &flag); } if (ret < 0) @@ -4723,6 +4721,7 @@ static int check_ptr_to_map_access(struct bpf_verifier_env *env, { struct bpf_reg_state *reg = regs + regno; struct bpf_map *map = reg->map_ptr; + struct bpf_reg_state map_reg; enum bpf_type_flag flag = 0; const struct btf_type *t; const char *tname; @@ -4761,7 +4760,10 @@ static int check_ptr_to_map_access(struct bpf_verifier_env *env, return -EACCES; } - ret = btf_struct_access(&env->log, btf_vmlinux, t, off, size, atype, &btf_id, &flag); + /* Simulate access to a PTR_TO_BTF_ID */ + memset(&map_reg, 0, sizeof(map_reg)); + mark_btf_ld_reg(env, &map_reg, 0, PTR_TO_BTF_ID, btf_vmlinux, *map->ops->map_btf_id, 0); + ret = btf_struct_access(&env->log, &map_reg, off, size, atype, &btf_id, &flag); if (ret < 0) return ret; diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c index e78dadfc5829..2d434c1f4617 100644 --- a/net/bpf/bpf_dummy_struct_ops.c +++ b/net/bpf/bpf_dummy_struct_ops.c @@ 
-156,29 +156,29 @@ static bool bpf_dummy_ops_is_valid_access(int off, int size, } static int bpf_dummy_ops_btf_struct_access(struct bpf_verifier_log *log, - const struct btf *btf, - const struct btf_type *t, int off, - int size, enum bpf_access_type atype, + const struct bpf_reg_state *reg, + int off, int size, enum bpf_access_type atype, u32 *next_btf_id, enum bpf_type_flag *flag) { const struct btf_type *state; + const struct btf_type *t; s32 type_id; int err; - type_id = btf_find_by_name_kind(btf, "bpf_dummy_ops_state", + type_id = btf_find_by_name_kind(reg->btf, "bpf_dummy_ops_state", BTF_KIND_STRUCT); if (type_id < 0) return -EINVAL; - state = btf_type_by_id(btf, type_id); + t = btf_type_by_id(reg->btf, reg->btf_id); + state = btf_type_by_id(reg->btf, type_id); if (t != state) { bpf_log(log, "only access to bpf_dummy_ops_state is supported\n"); return -EACCES; } - err = btf_struct_access(log, btf, t, off, size, atype, next_btf_id, - flag); + err = btf_struct_access(log, reg, off, size, atype, next_btf_id, flag); if (err < 0) return err; diff --git a/net/core/filter.c b/net/core/filter.c index 6dd2baf5eeb2..37fad5a9b752 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -8651,28 +8651,25 @@ static bool tc_cls_act_is_valid_access(int off, int size, DEFINE_MUTEX(nf_conn_btf_access_lock); EXPORT_SYMBOL_GPL(nf_conn_btf_access_lock); -int (*nfct_btf_struct_access)(struct bpf_verifier_log *log, const struct btf *btf, - const struct btf_type *t, int off, int size, - enum bpf_access_type atype, u32 *next_btf_id, - enum bpf_type_flag *flag); +int (*nfct_btf_struct_access)(struct bpf_verifier_log *log, + const struct bpf_reg_state *reg, + int off, int size, enum bpf_access_type atype, + u32 *next_btf_id, enum bpf_type_flag *flag); EXPORT_SYMBOL_GPL(nfct_btf_struct_access); static int tc_cls_act_btf_struct_access(struct bpf_verifier_log *log, - const struct btf *btf, - const struct btf_type *t, int off, - int size, enum bpf_access_type atype, - u32 *next_btf_id, - enum bpf_type_flag *flag) + const struct bpf_reg_state *reg, + int off, int size, enum bpf_access_type atype, + u32 *next_btf_id, enum bpf_type_flag *flag) { int ret = -EACCES; if (atype == BPF_READ) - return btf_struct_access(log, btf, t, off, size, atype, next_btf_id, - flag); + return btf_struct_access(log, reg, off, size, atype, next_btf_id, flag); mutex_lock(&nf_conn_btf_access_lock); if (nfct_btf_struct_access) - ret = nfct_btf_struct_access(log, btf, t, off, size, atype, next_btf_id, flag); + ret = nfct_btf_struct_access(log, reg, off, size, atype, next_btf_id, flag); mutex_unlock(&nf_conn_btf_access_lock); return ret; @@ -8738,21 +8735,18 @@ void bpf_warn_invalid_xdp_action(struct net_device *dev, struct bpf_prog *prog, EXPORT_SYMBOL_GPL(bpf_warn_invalid_xdp_action); static int xdp_btf_struct_access(struct bpf_verifier_log *log, - const struct btf *btf, - const struct btf_type *t, int off, - int size, enum bpf_access_type atype, - u32 *next_btf_id, - enum bpf_type_flag *flag) + const struct bpf_reg_state *reg, + int off, int size, enum bpf_access_type atype, + u32 *next_btf_id, enum bpf_type_flag *flag) { int ret = -EACCES; if (atype == BPF_READ) - return btf_struct_access(log, btf, t, off, size, atype, next_btf_id, - flag); + return btf_struct_access(log, reg, off, size, atype, next_btf_id, flag); mutex_lock(&nf_conn_btf_access_lock); if (nfct_btf_struct_access) - ret = nfct_btf_struct_access(log, btf, t, off, size, atype, next_btf_id, flag); + ret = nfct_btf_struct_access(log, reg, off, size, atype, next_btf_id, flag); 
mutex_unlock(&nf_conn_btf_access_lock); return ret; diff --git a/net/ipv4/bpf_tcp_ca.c b/net/ipv4/bpf_tcp_ca.c index 6da16ae6a962..d15c91de995f 100644 --- a/net/ipv4/bpf_tcp_ca.c +++ b/net/ipv4/bpf_tcp_ca.c @@ -69,18 +69,17 @@ static bool bpf_tcp_ca_is_valid_access(int off, int size, } static int bpf_tcp_ca_btf_struct_access(struct bpf_verifier_log *log, - const struct btf *btf, - const struct btf_type *t, int off, - int size, enum bpf_access_type atype, - u32 *next_btf_id, - enum bpf_type_flag *flag) + const struct bpf_reg_state *reg, + int off, int size, enum bpf_access_type atype, + u32 *next_btf_id, enum bpf_type_flag *flag) { + const struct btf_type *t; size_t end; if (atype == BPF_READ) - return btf_struct_access(log, btf, t, off, size, atype, next_btf_id, - flag); + return btf_struct_access(log, reg, off, size, atype, next_btf_id, flag); + t = btf_type_by_id(reg->btf, reg->btf_id); if (t != tcp_sock_type) { bpf_log(log, "only read is supported\n"); return -EACCES; diff --git a/net/netfilter/nf_conntrack_bpf.c b/net/netfilter/nf_conntrack_bpf.c index 8639e7efd0e2..24002bc61e07 100644 --- a/net/netfilter/nf_conntrack_bpf.c +++ b/net/netfilter/nf_conntrack_bpf.c @@ -191,19 +191,16 @@ BTF_ID(struct, nf_conn___init) /* Check writes into `struct nf_conn` */ static int _nf_conntrack_btf_struct_access(struct bpf_verifier_log *log, - const struct btf *btf, - const struct btf_type *t, int off, - int size, enum bpf_access_type atype, - u32 *next_btf_id, - enum bpf_type_flag *flag) + const struct bpf_reg_state *reg, + int off, int size, enum bpf_access_type atype, + u32 *next_btf_id, enum bpf_type_flag *flag) { - const struct btf_type *ncit; - const struct btf_type *nct; + const struct btf_type *ncit, *nct, *t; size_t end; - ncit = btf_type_by_id(btf, btf_nf_conn_ids[1]); - nct = btf_type_by_id(btf, btf_nf_conn_ids[0]); - + ncit = btf_type_by_id(reg->btf, btf_nf_conn_ids[1]); + nct = btf_type_by_id(reg->btf, btf_nf_conn_ids[0]); + t = btf_type_by_id(reg->btf, reg->btf_id); if (t != nct && t != ncit) { bpf_log(log, "only read is supported\n"); return -EACCES; From patchwork Mon Nov 14 19:15:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13042722 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B2E85C43217 for ; Mon, 14 Nov 2022 19:16:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237122AbiKNTQ3 (ORCPT ); Mon, 14 Nov 2022 14:16:29 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56334 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237133AbiKNTQU (ORCPT ); Mon, 14 Nov 2022 14:16:20 -0500 Received: from mail-pj1-x1042.google.com (mail-pj1-x1042.google.com [IPv6:2607:f8b0:4864:20::1042]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F392B275DC for ; Mon, 14 Nov 2022 11:16:16 -0800 (PST) Received: by mail-pj1-x1042.google.com with SMTP id d59-20020a17090a6f4100b00213202d77e1so14733288pjk.2 for ; Mon, 14 Nov 2022 11:16:16 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; 
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, Dave Marchevsky
Subject: [PATCH bpf-next v7 08/26] bpf: Introduce allocated objects support
Date: Tue, 15 Nov 2022 00:45:29 +0530
Message-Id: <20221114191547.1694267-9-memxor@gmail.com>
In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com>
References: <20221114191547.1694267-1-memxor@gmail.com>

Introduce support for representing pointers to objects allocated by the
BPF program, i.e. PTR_TO_BTF_ID that point to a type in program BTF.
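Not part of this patch, but to make "allocated by the BPF program" concrete
before the verifier details below, here is a minimal program-side sketch. It
assumes the bpf_obj_new()/bpf_obj_drop() kfunc wrappers that later patches in
this series (and their selftests' bpf_experimental.h) introduce, so treat the
names and exact signatures as illustrative only:

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
/* bpf_obj_new()/bpf_obj_drop() are provided by the selftest header added
 * later in this series; they are assumptions here, not part of this patch.
 */
#include "bpf_experimental.h"

struct foo {
	int data;
	long cnt;
};

SEC("tc")
int alloc_example(void *ctx)
{
	struct foo *f;

	/* returns a referenced pointer to a type in program BTF, or NULL */
	f = bpf_obj_new(typeof(*f));
	if (!f)
		return 0;
	f->data = 42;		/* plain fields may be written directly */
	bpf_obj_drop(f);	/* the reference must be released (or stored) */
	return 0;
}

char _license[] SEC("license") = "GPL";

From the verifier's point of view, the pointer returned by such an allocation
is an allocated object: a PTR_TO_BTF_ID whose btf is the program's own BTF
rather than kernel BTF.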
This is indicated by the presence of MEM_ALLOC type flag in reg->type to avoid having to check btf_is_kernel when trying to match argument types in helpers. Whenever walking such types, any pointers being walked will always yield a SCALAR instead of pointer. In the future we might permit kptr inside such allocated objects (either kernel or local), and it will then form a PTR_TO_BTF_ID of the respective type. For now, such allocated objects will always be referenced in verifier context, hence ref_obj_id == 0 for them is a bug. It is allowed to write to such objects, as long fields that are special are not touched (support for which will be added in subsequent patches). Note that once such a pointer is marked PTR_UNTRUSTED, it is no longer allowed to write to it. No PROBE_MEM handling is therefore done for loads into this type unless PTR_UNTRUSTED is part of the register type, since they can never be in an undefined state, and their lifetime will always be valid. Signed-off-by: Kumar Kartikeya Dwivedi --- include/linux/bpf.h | 11 +++++++++++ kernel/bpf/btf.c | 5 +++++ kernel/bpf/verifier.c | 25 +++++++++++++++++++++++-- 3 files changed, 39 insertions(+), 2 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 49f9d2bec401..3cab113b149e 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -524,6 +524,11 @@ enum bpf_type_flag { /* Size is known at compile time. */ MEM_FIXED_SIZE = BIT(10 + BPF_BASE_TYPE_BITS), + /* MEM is of a an allocated object of type from program BTF. This is + * used to tag PTR_TO_BTF_ID allocated using bpf_obj_new. + */ + MEM_ALLOC = BIT(11 + BPF_BASE_TYPE_BITS), + __BPF_TYPE_FLAG_MAX, __BPF_TYPE_LAST_FLAG = __BPF_TYPE_FLAG_MAX - 1, }; @@ -2791,4 +2796,10 @@ struct bpf_key { bool has_ref; }; #endif /* CONFIG_KEYS */ + +static inline bool type_is_alloc(u32 type) +{ + return type & MEM_ALLOC; +} + #endif /* _LINUX_BPF_H */ diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 875355ff3718..9a596f430558 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -6034,6 +6034,11 @@ int btf_struct_access(struct bpf_verifier_log *log, switch (err) { case WALK_PTR: + /* For local types, the destination register cannot + * become a pointer again. + */ + if (type_is_alloc(reg->type)) + return SCALAR_VALUE; /* If we found the pointer or scalar on t+off, * we're done. */ diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 5e74f460dfd0..d726d55622c9 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -4687,14 +4687,27 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env, return -EACCES; } - if (env->ops->btf_struct_access) { + if (env->ops->btf_struct_access && !type_is_alloc(reg->type)) { + if (!btf_is_kernel(reg->btf)) { + verbose(env, "verifier internal error: reg->btf must be kernel btf\n"); + return -EFAULT; + } ret = env->ops->btf_struct_access(&env->log, reg, off, size, atype, &btf_id, &flag); } else { - if (atype != BPF_READ) { + /* Writes are permitted with default btf_struct_access for local + * kptrs (which always have ref_obj_id > 0), but not for + * untrusted PTR_TO_BTF_ID | MEM_ALLOC. 
+ */ + if (atype != BPF_READ && reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) { verbose(env, "only read is supported\n"); return -EACCES; } + if (type_is_alloc(reg->type) && !reg->ref_obj_id) { + verbose(env, "verifier internal error: ref_obj_id for allocated object must be non-zero\n"); + return -EFAULT; + } + ret = btf_struct_access(&env->log, reg, off, size, atype, &btf_id, &flag); } @@ -5973,6 +5986,7 @@ int check_func_arg_reg_off(struct bpf_verifier_env *env, * fixed offset. */ case PTR_TO_BTF_ID: + case PTR_TO_BTF_ID | MEM_ALLOC: /* When referenced PTR_TO_BTF_ID is passed to release function, * it's fixed offset must be 0. In the other cases, fixed offset * can be non-zero. @@ -13659,6 +13673,13 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env) break; case PTR_TO_BTF_ID: case PTR_TO_BTF_ID | PTR_UNTRUSTED: + /* PTR_TO_BTF_ID | MEM_ALLOC always has a valid lifetime, unlike + * PTR_TO_BTF_ID, and an active ref_obj_id, but the same cannot + * be said once it is marked PTR_UNTRUSTED, hence we must handle + * any faults for loads into such types. BPF_WRITE is disallowed + * for this case. + */ + case PTR_TO_BTF_ID | MEM_ALLOC | PTR_UNTRUSTED: if (type == BPF_READ) { insn->code = BPF_LDX | BPF_PROBE_MEM | BPF_SIZE((insn)->code); From patchwork Mon Nov 14 19:15:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13042723 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DFAC7C433FE for ; Mon, 14 Nov 2022 19:16:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237130AbiKNTQb (ORCPT ); Mon, 14 Nov 2022 14:16:31 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55890 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237086AbiKNTQW (ORCPT ); Mon, 14 Nov 2022 14:16:22 -0500 Received: from mail-pj1-x1044.google.com (mail-pj1-x1044.google.com [IPv6:2607:f8b0:4864:20::1044]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E0526264BF for ; Mon, 14 Nov 2022 11:16:19 -0800 (PST) Received: by mail-pj1-x1044.google.com with SMTP id v4-20020a17090a088400b00212cb0ed97eso11629454pjc.5 for ; Mon, 14 Nov 2022 11:16:19 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=BvrjAsJ/CtehRdd/18veUt968rxDRxP25tNSHp1k2yg=; b=dYwmT9VDSynvjk8Z0POPfLvqzSsHrAAdppq8F87dfXPjGKN2Uvmc+MtDiPM9+jDaet XqTPZFMbERhWzCoZGDBsAZIkbpVpfz44Aklb0+O780mA3VQzKUNFeLG4gMGVlDvc0iru aDQwM/cpVluwiUhgme5vviC+iYjft95fOSwPF0R2IpEOVcCkM7BJfYup+33agKhqMqhq lC4swNV3B4EQKtLtI77VSTf5m25pWpOAp2/5AlRxJAby5nyXWjcJyONlSMgy22vE3ol7 53DLD1IMx4IvkjqGVI9LhyxZpNGFoHhwwdiVGLHmkUWmmasfhmzoIuzTbxPP79wUiC+F 1C/A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=BvrjAsJ/CtehRdd/18veUt968rxDRxP25tNSHp1k2yg=; b=WgU2jsOgMOsuZykF7nhlQ1/boz9QJ15mhCHlYMW7z7tA8j1/DYdvS+INLzJ1SHpU0c 
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, Dave Marchevsky
Subject: [PATCH bpf-next v7 09/26] bpf: Recognize lock and list fields in allocated objects
Date: Tue, 15 Nov 2022 00:45:30 +0530
Message-Id: <20221114191547.1694267-10-memxor@gmail.com>
In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com>
References: <20221114191547.1694267-1-memxor@gmail.com>

Allow specifying bpf_spin_lock, bpf_list_head, and bpf_list_node fields in an
allocated object. Also update btf_struct_access to reject direct access to
these special fields.

A bpf_list_head allows implementing map-in-map style use cases, where an
allocated object with a bpf_list_head is linked into a list in a map value.
This requires embedding a bpf_list_node, support for which is also included.
The bpf_spin_lock is used to protect the bpf_list_head and other data.

While we don't strictly require holding the bpf_spin_lock while touching the
bpf_list_head in such objects, since having access to the object implies
complete ownership of it, the locking constraint is still kept and may be
conditionally lifted in the future.
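To make the intended usage concrete before the type-specification example
below, here is a rough sketch (not part of this patch) of a program that
allocates such an object and links it into a list held in a map value. It
assumes the bpf_obj_new() wrapper, the bpf_list_push_front() kfunc and the
__contains() annotation macro that later patches and selftests in this series
introduce, so the names and signatures are illustrative only:

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
/* bpf_obj_new(), bpf_list_push_front() and __contains() come from the
 * selftests' bpf_experimental.h added later in this series (assumption).
 */
#include "bpf_experimental.h"

struct elem {
	struct bpf_list_node node;
	int data;
};

struct map_val {
	struct bpf_spin_lock lock;
	struct bpf_list_head head __contains(elem, node);
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct map_val);
} array_map SEC(".maps");

SEC("tc")
int add_elem(void *ctx)
{
	struct map_val *v;
	struct elem *e;
	int key = 0;

	v = bpf_map_lookup_elem(&array_map, &key);
	if (!v)
		return 0;
	e = bpf_obj_new(typeof(*e));
	if (!e)
		return 0;
	e->data = 1;
	bpf_spin_lock(&v->lock);
	bpf_list_push_front(&v->head, &e->node);	/* the list now owns the object */
	bpf_spin_unlock(&v->lock);
	return 0;
}

char _license[] SEC("license") = "GPL";

The list operations happen under the bpf_spin_lock stored in the same value,
matching the locking constraint described above.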
Note that the specification of such types can be done just like map values, e.g.: struct bar { struct bpf_list_node node; }; struct foo { struct bpf_spin_lock lock; struct bpf_list_head head __contains(bar, node); struct bpf_list_node node; }; struct map_value { struct bpf_spin_lock lock; struct bpf_list_head head __contains(foo, node); }; To recognize such types in user BTF, we build a btf_struct_metas array of metadata items corresponding to each BTF ID. This is done once during the btf_parse stage to avoid having to do it each time during the verification process's requirement to inspect the metadata. Moreover, the computed metadata needs to be passed to some helpers in future patches which requires allocating them and storing them in the BTF that is pinned by the program itself, so that valid access can be assumed to such data during program runtime. A key thing to note is that once a btf_struct_meta is available for a type, both the btf_record and btf_field_offs should be available. It is critical that btf_field_offs is available in case special fields are present, as we extensively rely on special fields being zeroed out in map values and allocated objects in later patches. The code ensures that by bailing out in case of errors and ensuring both are available together. If the record is not available, the special fields won't be recognized, so not having both is also fine (in terms of being a verification error and not a runtime bug). Signed-off-by: Kumar Kartikeya Dwivedi --- include/linux/bpf.h | 7 ++ include/linux/btf.h | 35 ++++++++ kernel/bpf/btf.c | 198 +++++++++++++++++++++++++++++++++++++++---- kernel/bpf/syscall.c | 4 + 4 files changed, 226 insertions(+), 18 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 3cab113b149e..4cd3c9e6f50b 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -176,6 +176,7 @@ enum btf_field_type { BPF_KPTR_REF = (1 << 3), BPF_KPTR = BPF_KPTR_UNREF | BPF_KPTR_REF, BPF_LIST_HEAD = (1 << 4), + BPF_LIST_NODE = (1 << 5), }; struct btf_field_kptr { @@ -276,6 +277,8 @@ static inline const char *btf_field_type_name(enum btf_field_type type) return "kptr"; case BPF_LIST_HEAD: return "bpf_list_head"; + case BPF_LIST_NODE: + return "bpf_list_node"; default: WARN_ON_ONCE(1); return "unknown"; @@ -294,6 +297,8 @@ static inline u32 btf_field_type_size(enum btf_field_type type) return sizeof(u64); case BPF_LIST_HEAD: return sizeof(struct bpf_list_head); + case BPF_LIST_NODE: + return sizeof(struct bpf_list_node); default: WARN_ON_ONCE(1); return 0; @@ -312,6 +317,8 @@ static inline u32 btf_field_type_align(enum btf_field_type type) return __alignof__(u64); case BPF_LIST_HEAD: return __alignof__(struct bpf_list_head); + case BPF_LIST_NODE: + return __alignof__(struct bpf_list_node); default: WARN_ON_ONCE(1); return 0; diff --git a/include/linux/btf.h b/include/linux/btf.h index d80345fa566b..a01a8da20021 100644 --- a/include/linux/btf.h +++ b/include/linux/btf.h @@ -6,6 +6,8 @@ #include #include +#include +#include #include #include @@ -78,6 +80,17 @@ struct btf_id_dtor_kfunc { u32 kfunc_btf_id; }; +struct btf_struct_meta { + u32 btf_id; + struct btf_record *record; + struct btf_field_offs *field_offs; +}; + +struct btf_struct_metas { + u32 cnt; + struct btf_struct_meta types[]; +}; + typedef void (*btf_dtor_kfunc_t)(void *); extern const struct file_operations btf_fops; @@ -408,6 +421,23 @@ static inline struct btf_param *btf_params(const struct btf_type *t) return (struct btf_param *)(t + 1); } +static inline int 
btf_id_cmp_func(const void *a, const void *b) +{ + const int *pa = a, *pb = b; + + return *pa - *pb; +} + +static inline bool btf_id_set_contains(const struct btf_id_set *set, u32 id) +{ + return bsearch(&id, set->ids, set->cnt, sizeof(u32), btf_id_cmp_func) != NULL; +} + +static inline void *btf_id_set8_contains(const struct btf_id_set8 *set, u32 id) +{ + return bsearch(&id, set->pairs, set->cnt, sizeof(set->pairs[0]), btf_id_cmp_func); +} + #ifdef CONFIG_BPF_SYSCALL struct bpf_prog; @@ -423,6 +453,7 @@ int register_btf_kfunc_id_set(enum bpf_prog_type prog_type, s32 btf_find_dtor_kfunc(struct btf *btf, u32 btf_id); int register_btf_id_dtor_kfuncs(const struct btf_id_dtor_kfunc *dtors, u32 add_cnt, struct module *owner); +struct btf_struct_meta *btf_find_struct_meta(const struct btf *btf, u32 btf_id); #else static inline const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id) @@ -454,6 +485,10 @@ static inline int register_btf_id_dtor_kfuncs(const struct btf_id_dtor_kfunc *dt { return 0; } +static inline struct btf_struct_meta *btf_find_struct_meta(const struct btf *btf, u32 btf_id) +{ + return NULL; +} #endif static inline bool btf_type_is_struct_ptr(struct btf *btf, const struct btf_type *t) diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 9a596f430558..c0c2db93e0de 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -237,6 +237,7 @@ struct btf { struct rcu_head rcu; struct btf_kfunc_set_tab *kfunc_set_tab; struct btf_id_dtor_kfunc_tab *dtor_kfunc_tab; + struct btf_struct_metas *struct_meta_tab; /* split BTF support */ struct btf *base_btf; @@ -1642,8 +1643,30 @@ static void btf_free_dtor_kfunc_tab(struct btf *btf) btf->dtor_kfunc_tab = NULL; } +static void btf_struct_metas_free(struct btf_struct_metas *tab) +{ + int i; + + if (!tab) + return; + for (i = 0; i < tab->cnt; i++) { + btf_record_free(tab->types[i].record); + kfree(tab->types[i].field_offs); + } + kfree(tab); +} + +static void btf_free_struct_meta_tab(struct btf *btf) +{ + struct btf_struct_metas *tab = btf->struct_meta_tab; + + btf_struct_metas_free(tab); + btf->struct_meta_tab = NULL; +} + static void btf_free(struct btf *btf) { + btf_free_struct_meta_tab(btf); btf_free_dtor_kfunc_tab(btf); btf_free_kfunc_set_tab(btf); kvfree(btf->types); @@ -3353,6 +3376,12 @@ static int btf_get_field_type(const char *name, u32 field_mask, u32 *seen_mask, goto end; } } + if (field_mask & BPF_LIST_NODE) { + if (!strcmp(name, "bpf_list_node")) { + type = BPF_LIST_NODE; + goto end; + } + } /* Only return BPF_KPTR when all other types with matchable names fail */ if (field_mask & BPF_KPTR) { type = BPF_KPTR_REF; @@ -3396,6 +3425,7 @@ static int btf_find_struct_field(const struct btf *btf, switch (field_type) { case BPF_SPIN_LOCK: case BPF_TIMER: + case BPF_LIST_NODE: ret = btf_find_struct(btf, member_type, off, sz, field_type, idx < info_cnt ? &info[idx] : &tmp); if (ret < 0) @@ -3456,6 +3486,7 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t, switch (field_type) { case BPF_SPIN_LOCK: case BPF_TIMER: + case BPF_LIST_NODE: ret = btf_find_struct(btf, var_type, off, sz, field_type, idx < info_cnt ? 
&info[idx] : &tmp); if (ret < 0) @@ -3671,6 +3702,8 @@ struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type if (ret < 0) goto end; break; + case BPF_LIST_NODE: + break; default: ret = -EFAULT; goto end; @@ -5141,6 +5174,120 @@ static int btf_parse_hdr(struct btf_verifier_env *env) return btf_check_sec_info(env, btf_data_size); } +static const char *alloc_obj_fields[] = { + "bpf_spin_lock", + "bpf_list_head", + "bpf_list_node", +}; + +static struct btf_struct_metas * +btf_parse_struct_metas(struct bpf_verifier_log *log, struct btf *btf) +{ + union { + struct btf_id_set set; + struct { + u32 _cnt; + u32 _ids[ARRAY_SIZE(alloc_obj_fields)]; + } _arr; + } aof; + struct btf_struct_metas *tab = NULL; + int i, n, id, ret; + + BUILD_BUG_ON(offsetof(struct btf_id_set, cnt) != 0); + BUILD_BUG_ON(sizeof(struct btf_id_set) != sizeof(u32)); + + memset(&aof, 0, sizeof(aof)); + for (i = 0; i < ARRAY_SIZE(alloc_obj_fields); i++) { + /* Try to find whether this special type exists in user BTF, and + * if so remember its ID so we can easily find it among members + * of structs that we iterate in the next loop. + */ + id = btf_find_by_name_kind(btf, alloc_obj_fields[i], BTF_KIND_STRUCT); + if (id < 0) + continue; + aof.set.ids[aof.set.cnt++] = id; + } + + if (!aof.set.cnt) + return NULL; + sort(&aof.set.ids, aof.set.cnt, sizeof(aof.set.ids[0]), btf_id_cmp_func, NULL); + + n = btf_nr_types(btf); + for (i = 1; i < n; i++) { + const struct btf_member *member; + struct btf_field_offs *foffs; + struct btf_struct_meta *type; + struct btf_record *record; + const struct btf_type *t; + int j; + + t = btf_type_by_id(btf, i); + if (!t) { + ret = -EINVAL; + goto free; + } + if (!__btf_type_is_struct(t)) + continue; + + cond_resched(); + + for_each_member(j, t, member) { + if (btf_id_set_contains(&aof.set, member->type)) + goto parse; + } + continue; + parse: + if (!tab) { + tab = kzalloc(offsetof(struct btf_struct_metas, types[1]), + GFP_KERNEL | __GFP_NOWARN); + if (!tab) + return ERR_PTR(-ENOMEM); + } else { + struct btf_struct_metas *new_tab; + + new_tab = krealloc(tab, offsetof(struct btf_struct_metas, types[tab->cnt + 1]), + GFP_KERNEL | __GFP_NOWARN); + if (!new_tab) { + ret = -ENOMEM; + goto free; + } + tab = new_tab; + } + type = &tab->types[tab->cnt]; + + type->btf_id = i; + record = btf_parse_fields(btf, t, BPF_SPIN_LOCK | BPF_LIST_HEAD | BPF_LIST_NODE, t->size); + if (IS_ERR_OR_NULL(record)) { + ret = PTR_ERR_OR_ZERO(record) ?: -EFAULT; + goto free; + } + foffs = btf_parse_field_offs(record); + if (WARN_ON_ONCE(IS_ERR_OR_NULL(foffs))) { + btf_record_free(record); + ret = -EFAULT; + goto free; + } + type->record = record; + type->field_offs = foffs; + tab->cnt++; + } + return tab; +free: + btf_struct_metas_free(tab); + return ERR_PTR(ret); +} + +struct btf_struct_meta *btf_find_struct_meta(const struct btf *btf, u32 btf_id) +{ + struct btf_struct_metas *tab; + + BUILD_BUG_ON(offsetof(struct btf_struct_meta, btf_id) != 0); + tab = btf->struct_meta_tab; + if (!tab) + return NULL; + return bsearch(&btf_id, tab->types, tab->cnt, sizeof(tab->types[0]), btf_id_cmp_func); +} + static int btf_check_type_tags(struct btf_verifier_env *env, struct btf *btf, int start_id) { @@ -5191,6 +5338,7 @@ static int btf_check_type_tags(struct btf_verifier_env *env, static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size, u32 log_level, char __user *log_ubuf, u32 log_size) { + struct btf_struct_metas *struct_meta_tab; struct btf_verifier_env *env = NULL; struct bpf_verifier_log *log; struct btf 
*btf = NULL; @@ -5259,15 +5407,24 @@ static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size, if (err) goto errout; + struct_meta_tab = btf_parse_struct_metas(log, btf); + if (IS_ERR(struct_meta_tab)) { + err = PTR_ERR(struct_meta_tab); + goto errout; + } + btf->struct_meta_tab = struct_meta_tab; + if (log->level && bpf_verifier_log_full(log)) { err = -ENOSPC; - goto errout; + goto errout_meta; } btf_verifier_env_free(env); refcount_set(&btf->refcnt, 1); return btf; +errout_meta: + btf_free_struct_meta_tab(btf); errout: btf_verifier_env_free(env); if (btf) @@ -6028,6 +6185,28 @@ int btf_struct_access(struct bpf_verifier_log *log, u32 id = reg->btf_id; int err; + while (type_is_alloc(reg->type)) { + struct btf_struct_meta *meta; + struct btf_record *rec; + int i; + + meta = btf_find_struct_meta(btf, id); + if (!meta) + break; + rec = meta->record; + for (i = 0; i < rec->cnt; i++) { + struct btf_field *field = &rec->fields[i]; + u32 offset = field->offset; + if (off < offset + btf_field_type_size(field->type) && offset < off + size) { + bpf_log(log, + "direct access to %s is disallowed\n", + btf_field_type_name(field->type)); + return -EACCES; + } + } + break; + } + t = btf_type_by_id(btf, id); do { err = btf_struct_walk(log, btf, t, off, size, &id, &tmp_flag); @@ -7269,23 +7448,6 @@ bool btf_is_module(const struct btf *btf) return btf->kernel_btf && strcmp(btf->name, "vmlinux") != 0; } -static int btf_id_cmp_func(const void *a, const void *b) -{ - const int *pa = a, *pb = b; - - return *pa - *pb; -} - -bool btf_id_set_contains(const struct btf_id_set *set, u32 id) -{ - return bsearch(&id, set->ids, set->cnt, sizeof(u32), btf_id_cmp_func) != NULL; -} - -static void *btf_id_set8_contains(const struct btf_id_set8 *set, u32 id) -{ - return bsearch(&id, set->pairs, set->cnt, sizeof(set->pairs[0]), btf_id_cmp_func); -} - enum { BTF_MODULE_F_LIVE = (1 << 0), }; diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index fdbae52f463f..c96039a4e57f 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -537,6 +537,7 @@ void btf_record_free(struct btf_record *rec) btf_put(rec->fields[i].kptr.btf); break; case BPF_LIST_HEAD: + case BPF_LIST_NODE: /* Nothing to release for bpf_list_head */ break; default: @@ -582,6 +583,7 @@ struct btf_record *btf_record_dup(const struct btf_record *rec) } break; case BPF_LIST_HEAD: + case BPF_LIST_NODE: /* Nothing to acquire for bpf_list_head */ break; default: @@ -648,6 +650,8 @@ void bpf_obj_free_fields(const struct btf_record *rec, void *obj) continue; bpf_list_head_free(field, field_ptr, obj + rec->spin_lock_off); break; + case BPF_LIST_NODE: + break; default: WARN_ON_ONCE(1); continue; From patchwork Mon Nov 14 19:15:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13042724 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EAC90C4167B for ; Mon, 14 Nov 2022 19:16:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237136AbiKNTQc (ORCPT ); Mon, 14 Nov 2022 14:16:32 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56354 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237063AbiKNTQX (ORCPT ); Mon, 14 Nov 2022 14:16:23 
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, Dave Marchevsky
Subject: [PATCH bpf-next v7 10/26] bpf: Verify ownership relationships for user BTF types
Date: Tue, 15 Nov 2022 00:45:31 +0530
Message-Id: <20221114191547.1694267-11-memxor@gmail.com>
In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com>
References: <20221114191547.1694267-1-memxor@gmail.com>
qigBVF/hrrc6D9vAykDWp1iLd69JD+O0LwbjECJDkTmYSRpNyhydOKygxjTHu/WRs4wRIgwcVtyILT ZmJ2O6kWafCLt4ZI6bWVoCjnYErnkI75cTtnbRUbzx2lihigW1PA5BNsAnYv8Ho1AzBfuAzJ+C7hqe EI+MUC1nON+k2+94aETNYLYrLJEg5WRRn23uZoW8sR+vPZ0Llfhbfv339epQ== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Ensure that there can be no ownership cycles among different types by way of having owning objects that can hold some other type as their element. For instance, a map value can only hold allocated objects, but these are allowed to have another bpf_list_head. To prevent unbounded recursion while freeing resources, elements of bpf_list_head in local kptrs can never have a bpf_list_head which are part of list in a map value. Later patches will verify this by having dedicated BTF selftests. Also, to make runtime destruction easier, once btf_struct_metas is fully populated, we can stash the metadata of the value type directly in the metadata of the list_head fields, as that allows easier access to the value type's layout to destruct it at runtime from the btf_field entry of the list head itself. Signed-off-by: Kumar Kartikeya Dwivedi --- include/linux/bpf.h | 1 + include/linux/btf.h | 1 + kernel/bpf/btf.c | 71 ++++++++++++++++++++++++++++++++++++++++++++ kernel/bpf/syscall.c | 4 +++ 4 files changed, 77 insertions(+) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 4cd3c9e6f50b..c88f75a68893 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -190,6 +190,7 @@ struct btf_field_list_head { struct btf *btf; u32 value_btf_id; u32 node_offset; + struct btf_record *value_rec; }; struct btf_field { diff --git a/include/linux/btf.h b/include/linux/btf.h index a01a8da20021..42d8f3730a8d 100644 --- a/include/linux/btf.h +++ b/include/linux/btf.h @@ -178,6 +178,7 @@ int btf_find_spin_lock(const struct btf *btf, const struct btf_type *t); int btf_find_timer(const struct btf *btf, const struct btf_type *t); struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type *t, u32 field_mask, u32 value_size); +int btf_check_and_fixup_fields(const struct btf *btf, struct btf_record *rec); struct btf_field_offs *btf_parse_field_offs(struct btf_record *rec); bool btf_type_is_void(const struct btf_type *t); s32 btf_find_by_name_kind(const struct btf *btf, const char *name, u8 kind); diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index c0c2db93e0de..10644343d877 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -3723,6 +3723,67 @@ struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type return ERR_PTR(ret); } +int btf_check_and_fixup_fields(const struct btf *btf, struct btf_record *rec) +{ + int i; + + /* There are two owning types, kptr_ref and bpf_list_head. The former + * only supports storing kernel types, which can never store references + * to program allocated local types, atleast not yet. Hence we only need + * to ensure that bpf_list_head ownership does not form cycles. 
+ */ + if (IS_ERR_OR_NULL(rec) || !(rec->field_mask & BPF_LIST_HEAD)) + return 0; + for (i = 0; i < rec->cnt; i++) { + struct btf_struct_meta *meta; + u32 btf_id; + + if (!(rec->fields[i].type & BPF_LIST_HEAD)) + continue; + btf_id = rec->fields[i].list_head.value_btf_id; + meta = btf_find_struct_meta(btf, btf_id); + if (!meta) + return -EFAULT; + rec->fields[i].list_head.value_rec = meta->record; + + if (!(rec->field_mask & BPF_LIST_NODE)) + continue; + + /* We need to ensure ownership acyclicity among all types. The + * proper way to do it would be to topologically sort all BTF + * IDs based on the ownership edges, since there can be multiple + * bpf_list_head in a type. Instead, we use the following + * reasoning: + * + * - A type can only be owned by another type in user BTF if it + * has a bpf_list_node. + * - A type can only _own_ another type in user BTF if it has a + * bpf_list_head. + * + * We ensure that if a type has both bpf_list_head and + * bpf_list_node, its element types cannot be owning types. + * + * To ensure acyclicity: + * + * When A only has bpf_list_head, ownership chain can be: + * A -> B -> C + * Where: + * - B has both bpf_list_head and bpf_list_node. + * - C only has bpf_list_node. + * + * When A has both bpf_list_head and bpf_list_node, some other + * type already owns it in the BTF domain, hence it can not own + * another owning type through any of the bpf_list_head edges. + * A -> B + * Where: + * - B only has bpf_list_node. + */ + if (meta->record->field_mask & BPF_LIST_HEAD) + return -ELOOP; + } + return 0; +} + static int btf_field_offs_cmp(const void *_a, const void *_b, const void *priv) { const u32 a = *(const u32 *)_a; @@ -5414,6 +5475,16 @@ static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size, } btf->struct_meta_tab = struct_meta_tab; + if (struct_meta_tab) { + int i; + + for (i = 0; i < struct_meta_tab->cnt; i++) { + err = btf_check_and_fixup_fields(btf, struct_meta_tab->types[i].record); + if (err < 0) + goto errout_meta; + } + } + if (log->level && bpf_verifier_log_full(log)) { err = -ENOSPC; goto errout_meta; diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index c96039a4e57f..4669020bb47d 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -1044,6 +1044,10 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf, } } + ret = btf_check_and_fixup_fields(btf, map->record); + if (ret < 0) + goto free_map_tab; + if (map->ops->map_check_btf) { ret = map->ops->map_check_btf(map, btf, key_type, value_type); if (ret < 0) From patchwork Mon Nov 14 19:15:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13042726 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 38407C433FE for ; Mon, 14 Nov 2022 19:16:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237103AbiKNTQg (ORCPT ); Mon, 14 Nov 2022 14:16:36 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55860 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237091AbiKNTQ0 (ORCPT ); Mon, 14 Nov 2022 14:16:26 -0500 Received: from mail-pg1-x541.google.com (mail-pg1-x541.google.com [IPv6:2607:f8b0:4864:20::541]) by lindbergh.monkeyblade.net 
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, Dave Marchevsky
Subject: [PATCH bpf-next v7 11/26] bpf: Allow locking bpf_spin_lock in allocated objects
Date: Tue, 15 Nov 2022 00:45:32 +0530
Message-Id: <20221114191547.1694267-12-memxor@gmail.com>
In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com>
References: <20221114191547.1694267-1-memxor@gmail.com>
1kWaIesvAS51KbbhWjrbOWSlY4gtEW+wkd7jJlDsp7o+2EvSh6eCZlH6TdeQ== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Allow locking a bpf_spin_lock in an allocated object, in addition to already support map value pointers. The handling is similar to that of map values, by just preserving the reg->id of PTR_TO_BTF_ID | MEM_ALLOC as well, and adjusting process_spin_lock to work with them and remember the id in verifier state. Refactor the existing process_spin_lock to work with PTR_TO_BTF_ID | MEM_ALLOC in addition to PTR_TO_MAP_VALUE. We need to update the reg_may_point_to_spin_lock which is used in mark_ptr_or_null_reg to preserve reg->id, that will be used in env->cur_state->active_spin_lock to remember the currently held spin lock. Signed-off-by: Kumar Kartikeya Dwivedi --- kernel/bpf/helpers.c | 2 ++ kernel/bpf/verifier.c | 70 ++++++++++++++++++++++++++++++++----------- 2 files changed, 55 insertions(+), 17 deletions(-) diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index 7bc71995f17c..5bc0b9f0f306 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -336,6 +336,7 @@ const struct bpf_func_proto bpf_spin_lock_proto = { .gpl_only = false, .ret_type = RET_VOID, .arg1_type = ARG_PTR_TO_SPIN_LOCK, + .arg1_btf_id = BPF_PTR_POISON, }; static inline void __bpf_spin_unlock_irqrestore(struct bpf_spin_lock *lock) @@ -358,6 +359,7 @@ const struct bpf_func_proto bpf_spin_unlock_proto = { .gpl_only = false, .ret_type = RET_VOID, .arg1_type = ARG_PTR_TO_SPIN_LOCK, + .arg1_btf_id = BPF_PTR_POISON, }; void copy_map_value_locked(struct bpf_map *map, void *dst, void *src, diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index d726d55622c9..070d003a99f0 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -453,8 +453,16 @@ static bool reg_type_not_null(enum bpf_reg_type type) static bool reg_may_point_to_spin_lock(const struct bpf_reg_state *reg) { - return reg->type == PTR_TO_MAP_VALUE && - btf_record_has_field(reg->map_ptr->record, BPF_SPIN_LOCK); + struct btf_record *rec = NULL; + + if (reg->type == PTR_TO_MAP_VALUE) { + rec = reg->map_ptr->record; + } else if (reg->type == (PTR_TO_BTF_ID | MEM_ALLOC)) { + struct btf_struct_meta *meta = btf_find_struct_meta(reg->btf, reg->btf_id); + if (meta) + rec = meta->record; + } + return btf_record_has_field(rec, BPF_SPIN_LOCK); } static bool type_is_rdonly_mem(u32 type) @@ -5588,8 +5596,10 @@ static int process_spin_lock(struct bpf_verifier_env *env, int regno, struct bpf_reg_state *regs = cur_regs(env), *reg = ®s[regno]; struct bpf_verifier_state *cur = env->cur_state; bool is_const = tnum_is_const(reg->var_off); - struct bpf_map *map = reg->map_ptr; + struct btf_record *rec = NULL; u64 val = reg->var_off.value; + struct bpf_map *map = NULL; + struct btf *btf = NULL; if (!is_const) { verbose(env, @@ -5597,19 +5607,32 @@ static int process_spin_lock(struct bpf_verifier_env *env, int regno, regno); return -EINVAL; } - if (!map->btf) { - verbose(env, - "map '%s' has to have BTF in order to use bpf_spin_lock\n", - map->name); - return -EINVAL; + if (reg->type == PTR_TO_MAP_VALUE) { + map = reg->map_ptr; + if (!map->btf) { + verbose(env, + "map '%s' has to have BTF in order to use bpf_spin_lock\n", + map->name); + return -EINVAL; + } + rec = map->record; + } else { + struct btf_struct_meta *meta; + + btf = reg->btf; + meta = btf_find_struct_meta(reg->btf, reg->btf_id); + if (meta) + rec 
= meta->record; } - if (!btf_record_has_field(map->record, BPF_SPIN_LOCK)) { - verbose(env, "map '%s' has no valid bpf_spin_lock\n", map->name); + + if (!btf_record_has_field(rec, BPF_SPIN_LOCK)) { + verbose(env, "%s '%s' has no valid bpf_spin_lock\n", map ? "map" : "local", + map ? map->name : "kptr"); return -EINVAL; } - if (map->record->spin_lock_off != val + reg->off) { + if (rec->spin_lock_off != val + reg->off) { verbose(env, "off %lld doesn't point to 'struct bpf_spin_lock' that is at %d\n", - val + reg->off, map->record->spin_lock_off); + val + reg->off, rec->spin_lock_off); return -EINVAL; } if (is_lock) { @@ -5815,13 +5838,19 @@ static const struct bpf_reg_types int_ptr_types = { }, }; +static const struct bpf_reg_types spin_lock_types = { + .types = { + PTR_TO_MAP_VALUE, + PTR_TO_BTF_ID | MEM_ALLOC, + } +}; + static const struct bpf_reg_types fullsock_types = { .types = { PTR_TO_SOCKET } }; static const struct bpf_reg_types scalar_types = { .types = { SCALAR_VALUE } }; static const struct bpf_reg_types context_types = { .types = { PTR_TO_CTX } }; static const struct bpf_reg_types ringbuf_mem_types = { .types = { PTR_TO_MEM | MEM_RINGBUF } }; static const struct bpf_reg_types const_map_ptr_types = { .types = { CONST_PTR_TO_MAP } }; static const struct bpf_reg_types btf_ptr_types = { .types = { PTR_TO_BTF_ID } }; -static const struct bpf_reg_types spin_lock_types = { .types = { PTR_TO_MAP_VALUE } }; static const struct bpf_reg_types percpu_btf_ptr_types = { .types = { PTR_TO_BTF_ID | MEM_PERCPU } }; static const struct bpf_reg_types func_ptr_types = { .types = { PTR_TO_FUNC } }; static const struct bpf_reg_types stack_ptr_types = { .types = { PTR_TO_STACK } }; @@ -5946,6 +5975,11 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno, return -EACCES; } } + } else if (type_is_alloc(reg->type)) { + if (meta->func_id != BPF_FUNC_spin_lock && meta->func_id != BPF_FUNC_spin_unlock) { + verbose(env, "verifier internal error: unimplemented handling of MEM_ALLOC\n"); + return -EFAULT; + } } return 0; @@ -6062,7 +6096,8 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg, goto skip_type_check; /* arg_btf_id and arg_size are in a union. */ - if (base_type(arg_type) == ARG_PTR_TO_BTF_ID) + if (base_type(arg_type) == ARG_PTR_TO_BTF_ID || + base_type(arg_type) == ARG_PTR_TO_SPIN_LOCK) arg_btf_id = fn->arg_btf_id[arg]; err = check_reg_type(env, regno, arg_type, arg_btf_id, meta); @@ -6680,9 +6715,10 @@ static bool check_btf_id_ok(const struct bpf_func_proto *fn) int i; for (i = 0; i < ARRAY_SIZE(fn->arg_type); i++) { - if (base_type(fn->arg_type[i]) == ARG_PTR_TO_BTF_ID && !fn->arg_btf_id[i]) - return false; - + if (base_type(fn->arg_type[i]) == ARG_PTR_TO_BTF_ID) + return !!fn->arg_btf_id[i]; + if (base_type(fn->arg_type[i]) == ARG_PTR_TO_SPIN_LOCK) + return fn->arg_btf_id[i] == BPF_PTR_POISON; if (base_type(fn->arg_type[i]) != ARG_PTR_TO_BTF_ID && fn->arg_btf_id[i] && /* arg_btf_id and arg_size are in a union. 
*/ (base_type(fn->arg_type[i]) != ARG_PTR_TO_MEM || From patchwork Mon Nov 14 19:15:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13042725 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6FAEFC4332F for ; Mon, 14 Nov 2022 19:16:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235969AbiKNTQf (ORCPT ); Mon, 14 Nov 2022 14:16:35 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56334 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237123AbiKNTQ3 (ORCPT ); Mon, 14 Nov 2022 14:16:29 -0500 Received: from mail-pf1-x443.google.com (mail-pf1-x443.google.com [IPv6:2607:f8b0:4864:20::443]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7D6612655F for ; Mon, 14 Nov 2022 11:16:28 -0800 (PST) Received: by mail-pf1-x443.google.com with SMTP id z26so11954986pff.1 for ; Mon, 14 Nov 2022 11:16:28 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=7bdRYjhA7/4SvqymiC//VxctBitwMX6pPdrBNrwmhX4=; b=l/QSM8H2DALnmB8T2BI2ZPtIpiYkDTQPMLc7Xk7gsbbTJVECAYTgBj4ZTYO9PY0Sa/ fZ9miuXvUnKxm3wRTvZIb+D+L3i1tyvLgodhGd8I+ZbGcWZ9n89w7ICwPPSvEnVjkdqh 7ByanSBxKNVmXlVRiAxsk3jkr/6u1MBue3qm/17JFHFD5hfk7a3ok5MaxhTZK+aBrWBd UpbFaQ23OcddT3ZsPEwLhkybCHe3MBCpj90yoaUcbDJmHUNvWD5RBIzhdvHtg+mTUOKr L6zBkk4xHK3HFD3aDm7QfZooEuu3ih/P4XFTrDfG8Lbi5T8lV3CIGBWfLNZW+iZ+KhBF ITDw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=7bdRYjhA7/4SvqymiC//VxctBitwMX6pPdrBNrwmhX4=; b=ln+r6NZ6QHy2JFDtMU43KvRjTUVoQfKIiB0DEDEXYk/WRf2jPgjdhEkmh6DBvPz1li y+Gd7aZEkKxYm8n0GpMgRQx+BiNsOGiQIobe2BwMzVEUlN5ELkx+SAZ3Epwixb3pB6Mx KQQRLCuwJSNAxMlFldP8EFDnT/Fcra+EdQOvkrbN2V+XNpZocE4pqvA5MMH94+FFeYI+ EzQX3oznZwdv41L0jHofv/QwVMLwLpHL4mUeT0BxzpujoTNN6qIGF6rcKoFnU4i+QpMs r/KXQlkCT72LBOl6RA+3rXZRMyuV2ye0PhIWcHjN+MS0ef32jJyWL/LSH92ySkqlmj4H vcFg== X-Gm-Message-State: ANoB5plQiEZioIb7sqXHGTfpBObF/iaf9ZuDzhWdeLsy6ljwew4ftmPp JtEMlZm7c+zS/zVajVAQXMh34FwoectQuA== X-Google-Smtp-Source: AA0mqf5QasjN9cJayGn0Uo91NlE0ZgBvC9eA0GEVg9AZiPXoYS480VLPDxy0dsOUlQDwrkkCJuubug== X-Received: by 2002:a65:4984:0:b0:470:8e:6003 with SMTP id r4-20020a654984000000b00470008e6003mr12944213pgs.19.1668453387858; Mon, 14 Nov 2022 11:16:27 -0800 (PST) Received: from localhost ([59.152.80.69]) by smtp.gmail.com with ESMTPSA id l5-20020a170903120500b0018863dbf3b0sm7974862plh.45.2022.11.14.11.16.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Nov 2022 11:16:27 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Dave Marchevsky Subject: [PATCH bpf-next v7 12/26] bpf: Allow locking bpf_spin_lock global variables Date: Tue, 15 Nov 2022 00:45:33 +0530 Message-Id: <20221114191547.1694267-13-memxor@gmail.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: 
<20221114191547.1694267-1-memxor@gmail.com> References: <20221114191547.1694267-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net

Global variables reside in maps accessible using direct_value_addr callbacks, so giving each load instruction's rewrite a unique reg->id prevents us from holding locks that live in global variables. The reason for preserving reg->id as a unique value for registers that may point to a spin lock is that two separate lookups are treated as two separate memory regions, and any possible aliasing is ignored for the purposes of spin lock correctness. This is not great, especially for global variables, which are served from maps with max_entries == 1, i.e. every lookup resolves to the same map value.

So refactor active_spin_lock into an 'active_lock' structure which represents the lock identity, and instead of only the reg->id, remember two fields: a pointer and the reg->id. The pointer will store reg->map_ptr or reg->btf. It is only necessary to distinguish the id == 0 case of global variables, but always setting the pointer to a non-NULL value and using it to check whether the lock is held simplifies code in the verifier. This is generic enough to cover global variables, map lookups, and allocated objects at the same time.

Note that while whether a lock is held can be answered by just comparing active_lock.ptr to NULL, determining whether the register points to the same held lock requires comparing _both_ ptr and id.

Finally, as a result of this refactoring, pseudo load instructions are no longer given a unique reg->id, as they look up the same map value (max_entries is never greater than 1). Essentially, we consider that the tuple of (ptr, id) will always be unique for any kind of argument to bpf_spin_{lock,unlock}.

Note that this can be extended in the future to also remember the offset used for locking, so that we can introduce multiple bpf_spin_lock fields in the same allocation.
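To make the effect concrete, the following is a minimal BPF-side sketch (not part of this patch) of what the change permits: taking a bpf_spin_lock that lives in a global variable. The program, variable, and attach point names are illustrative; it assumes the usual libbpf conventions (vmlinux.h, SEC() annotations) and a single bpf_spin_lock in the datasec, since a map value may contain only one lock.

/* Illustrative sketch only; names are made up. Globals are backed by an
 * array map with max_entries == 1, so every access resolves to the same
 * map value, which is why the (ptr, id) lock identity described above
 * is enough to track the held lock.
 */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

struct bpf_spin_lock glock;
u64 gcounter;

SEC("tc")
int global_lock_example(struct __sk_buff *skb)
{
	bpf_spin_lock(&glock);
	gcounter++;
	bpf_spin_unlock(&glock);
	return 0;
}

char _license[] SEC("license") = "GPL";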
Signed-off-by: Kumar Kartikeya Dwivedi --- include/linux/bpf_verifier.h | 10 ++++++++- kernel/bpf/verifier.c | 41 ++++++++++++++++++++++++------------ 2 files changed, 37 insertions(+), 14 deletions(-) diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index 1a32baa78ce2..fa738abea267 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -323,7 +323,15 @@ struct bpf_verifier_state { u32 branches; u32 insn_idx; u32 curframe; - u32 active_spin_lock; + struct { + /* This can either be reg->map_ptr or reg->btf, but it is only + * used to check whether the lock is held or not by comparing to + * NULL. + */ + void *ptr; + /* This will be reg->id */ + u32 id; + } active_lock; bool speculative; /* first and last insn idx of this verifier state */ diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 070d003a99f0..99b5edb56978 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -1215,7 +1215,8 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state, } dst_state->speculative = src->speculative; dst_state->curframe = src->curframe; - dst_state->active_spin_lock = src->active_spin_lock; + dst_state->active_lock.ptr = src->active_lock.ptr; + dst_state->active_lock.id = src->active_lock.id; dst_state->branches = src->branches; dst_state->parent = src->parent; dst_state->first_insn_idx = src->first_insn_idx; @@ -5587,7 +5588,7 @@ int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg_state * Since only one bpf_spin_lock is allowed the checks are simpler than * reg_is_refcounted() logic. The verifier needs to remember only * one spin_lock instead of array of acquired_refs. - * cur_state->active_spin_lock remembers which map value element got locked + * cur_state->active_lock remembers which map value element got locked * and clears it after bpf_spin_unlock. 
*/ static int process_spin_lock(struct bpf_verifier_env *env, int regno, @@ -5636,22 +5637,35 @@ static int process_spin_lock(struct bpf_verifier_env *env, int regno, return -EINVAL; } if (is_lock) { - if (cur->active_spin_lock) { + if (cur->active_lock.ptr) { verbose(env, "Locking two bpf_spin_locks are not allowed\n"); return -EINVAL; } - cur->active_spin_lock = reg->id; + if (map) + cur->active_lock.ptr = map; + else + cur->active_lock.ptr = btf; + cur->active_lock.id = reg->id; } else { - if (!cur->active_spin_lock) { + void *ptr; + + if (map) + ptr = map; + else + ptr = btf; + + if (!cur->active_lock.ptr) { verbose(env, "bpf_spin_unlock without taking a lock\n"); return -EINVAL; } - if (cur->active_spin_lock != reg->id) { + if (cur->active_lock.ptr != ptr || + cur->active_lock.id != reg->id) { verbose(env, "bpf_spin_unlock of different lock\n"); return -EINVAL; } - cur->active_spin_lock = 0; + cur->active_lock.ptr = NULL; + cur->active_lock.id = 0; } return 0; } @@ -10582,8 +10596,8 @@ static int check_ld_imm(struct bpf_verifier_env *env, struct bpf_insn *insn) insn->src_reg == BPF_PSEUDO_MAP_IDX_VALUE) { dst_reg->type = PTR_TO_MAP_VALUE; dst_reg->off = aux->map_off; - if (btf_record_has_field(map->record, BPF_SPIN_LOCK)) - dst_reg->id = ++env->id_gen; + WARN_ON_ONCE(map->max_entries != 1); + /* We want reg->id to be same (0) as map_value is not distinct */ } else if (insn->src_reg == BPF_PSEUDO_MAP_FD || insn->src_reg == BPF_PSEUDO_MAP_IDX) { dst_reg->type = CONST_PTR_TO_MAP; @@ -10661,7 +10675,7 @@ static int check_ld_abs(struct bpf_verifier_env *env, struct bpf_insn *insn) return err; } - if (env->cur_state->active_spin_lock) { + if (env->cur_state->active_lock.ptr) { verbose(env, "BPF_LD_[ABS|IND] cannot be used inside bpf_spin_lock-ed region\n"); return -EINVAL; } @@ -11927,7 +11941,8 @@ static bool states_equal(struct bpf_verifier_env *env, if (old->speculative && !cur->speculative) return false; - if (old->active_spin_lock != cur->active_spin_lock) + if (old->active_lock.ptr != cur->active_lock.ptr || + old->active_lock.id != cur->active_lock.id) return false; /* for states to be equal callsites have to be the same @@ -12572,7 +12587,7 @@ static int do_check(struct bpf_verifier_env *env) return -EINVAL; } - if (env->cur_state->active_spin_lock && + if (env->cur_state->active_lock.ptr && (insn->src_reg == BPF_PSEUDO_CALL || insn->imm != BPF_FUNC_spin_unlock)) { verbose(env, "function calls are not allowed while holding a lock\n"); @@ -12609,7 +12624,7 @@ static int do_check(struct bpf_verifier_env *env) return -EINVAL; } - if (env->cur_state->active_spin_lock) { + if (env->cur_state->active_lock.ptr) { verbose(env, "bpf_spin_unlock is missing\n"); return -EINVAL; } From patchwork Mon Nov 14 19:15:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13042727 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 26E7FC4332F for ; Mon, 14 Nov 2022 19:16:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237119AbiKNTQg (ORCPT ); Mon, 14 Nov 2022 14:16:36 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55890 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id 
S237086AbiKNTQc (ORCPT ); Mon, 14 Nov 2022 14:16:32 -0500 Received: from mail-pf1-x442.google.com (mail-pf1-x442.google.com [IPv6:2607:f8b0:4864:20::442]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3B9B4264BF for ; Mon, 14 Nov 2022 11:16:31 -0800 (PST) Received: by mail-pf1-x442.google.com with SMTP id g62so11922768pfb.10 for ; Mon, 14 Nov 2022 11:16:31 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=hYvHeXAuVygr4ZcQHncrz4UIBOo5UsmMrsEx+4QLuhc=; b=fPYo/BT+M5gjOzRAA70Wb/DVvcXEx2jFAxcEZdoPz0VKxdcX4p3kz64r2zeTUZi9gl QDXlclg2eJj2a/bHwfDEl66G1e11MnxdbqM5cSLvJpf5LzW98d72SMbu4QgINKZoEqzy jSKInhrmIuBTv9SNCx17Nimk88jVjFSYMYVJFms1YL9ijZhNNOZ9mjAgEUQ2jaN4gv8Q lCHeU1C1mLBNKp2j/ct8Iks1/F9KnwhxulEufwoZTwbqhceFvMd+9SKu4/NAgaZJxwv8 CjT+3zi8Q9K6cZVJqekDuqgCrR2W6FFjQvxIRkkkqIGIazSr41vas8rhAP8c0+k+06dK pf1g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=hYvHeXAuVygr4ZcQHncrz4UIBOo5UsmMrsEx+4QLuhc=; b=GgFVcx9L2YIbKW4uVfc8PO3prVIGS9PjtT1C2hrYyoSDagyJXcdYRJ8j3vjejWl+/A 9pFnIxo7BM5DTrkIg3UAFdVIr+umHbNmzch94PLdLSpox0WnMs1LsOhex5zPDarddDSt b8VuAiCI7kBdztm+ns9UrcDUHvhsIycwQzPvlnzxv82dGubGg9r3SweAANJpLoq09sco pEQJPXyduCpt6anuJye2Z+hnpLtQ5yrn32hdbkJgEr4eY1/XPzn6y7pRrkkRl+2KW5Le 8PqUdu3qofyWEKqeMAe0eYYR7rUiMmhJ4PjFgK7lWLplzeYGEjMCdXlTyBmRA/SLsRm2 /w0Q== X-Gm-Message-State: ANoB5pkZk3bQ2XNYmAodtbJ97G2LgJbZrq6yXI4sqB1iEFXYJK3s/AG9 6cPVHkH/QdX1/BrG3JHagrMpBKO6V2ufQw== X-Google-Smtp-Source: AA0mqf7bARKReq5MgRNPlsHgOqffQytDxVaPa/ZetEXebHStVc3SkdO8osZlxXW56i/CguwNrme/Sw== X-Received: by 2002:a63:1a19:0:b0:46f:f4c1:7d34 with SMTP id a25-20020a631a19000000b0046ff4c17d34mr12946661pga.75.1668453390489; Mon, 14 Nov 2022 11:16:30 -0800 (PST) Received: from localhost ([59.152.80.69]) by smtp.gmail.com with ESMTPSA id s19-20020a17090a441300b0020af2bab83fsm6899723pjg.23.2022.11.14.11.16.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Nov 2022 11:16:30 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Dave Marchevsky Subject: [PATCH bpf-next v7 13/26] bpf: Allow locking bpf_spin_lock in inner map values Date: Tue, 15 Nov 2022 00:45:34 +0530 Message-Id: <20221114191547.1694267-14-memxor@gmail.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com> References: <20221114191547.1694267-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=1237; i=memxor@gmail.com; h=from:subject; bh=vEeT2/iSWZHC3KMjxfZP18u7ktP/IseF8QZTy1MJn24=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjcpPJHwor9e+vDGcaQsSs/hJmvUZYJzEVob7xslb7 HsZAtuOJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY3KTyQAKCRBM4MiGSL8Rys7+D/ 9aHZAugrk/mRYwdbRDLBrP1EuqblnypI30UswomWd7kVV24T1cpQpzKn+J//tTYkmUx8N/2ePoYRfg JNh3DjIVrS//NFzSbr2fOgBqRNq9/dzVVkR4d/M01g/FT5xiOAhkcGfkhdavC2sINuWB1uIpR3x7hR 5kMPtOBzzA1G9Icq64YktVXL5xPcLs4oAYc8cWOEMautRfdyujsLNZFNU1WytLELEUKam6L1axSkVc 6SfgfBb+StMNDO4IiPVOdVyjCyUhFIa9NdC83Dg3VfP2MY6k3ZFNLscTJE5Jh9/xRgtwZ7zwfAM30E t60Py8U4psKPzqCmwegXQnM5K0lWhOisMEfpJ+JH424WchG40B64RrhrsYMJKeop1ubSHEbkzR68Y2 
HW0PsFdZAzm9bwwcrA91KhJ4tstXqYyob6d4PICjWOYBHD/gRZWueOCq4B9HJwI7/aE9QwrioK0nlq EwZFJrx7bLklGGYN61GLYN8lf0ZobQ7VUcGsQrFyaijewMcxZX1NWCfdffb4T6DjvC9vN4gbhCM8y4 N2ndW+teDdj2MJDPuUdLHs7hH9E3a/+fwTtWyq+ZKAXqdHAhE5TTrp1tfJuKQyuJ67CnD0o20iKBrx y9ivSZakdRdGjM8Sr8b+Ac+scfyFTbq5FeEnE3uoXqLwTJqKE2mQ/IICL5Sw== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net There is no need to restrict users from locking bpf_spin_lock in map values of inner maps. Each inner map lookup gets a unique reg->id assigned to the returned PTR_TO_MAP_VALUE which will be preserved after the NULL check. Distinct lookups into different inner map get unique IDs, and distinct lookups into same inner map also get unique IDs. Hence, lift the restriction by removing the check return -ENOTSUPP in map_in_map.c. Later commits will add comprehensive test cases to ensure that invalid cases are rejected. Signed-off-by: Kumar Kartikeya Dwivedi --- kernel/bpf/map_in_map.c | 5 ----- 1 file changed, 5 deletions(-) diff --git a/kernel/bpf/map_in_map.c b/kernel/bpf/map_in_map.c index 8ca0cca39d49..f31893a123a2 100644 --- a/kernel/bpf/map_in_map.c +++ b/kernel/bpf/map_in_map.c @@ -29,11 +29,6 @@ struct bpf_map *bpf_map_meta_alloc(int inner_map_ufd) return ERR_PTR(-ENOTSUPP); } - if (btf_record_has_field(inner_map->record, BPF_SPIN_LOCK)) { - fdput(f); - return ERR_PTR(-ENOTSUPP); - } - inner_map_meta_size = sizeof(*inner_map_meta); /* In some cases verifier needs to access beyond just base map. */ if (inner_map->ops == &array_map_ops) From patchwork Mon Nov 14 19:15:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13042729 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B916BC4332F for ; Mon, 14 Nov 2022 19:16:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235813AbiKNTQ4 (ORCPT ); Mon, 14 Nov 2022 14:16:56 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56320 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237013AbiKNTQg (ORCPT ); Mon, 14 Nov 2022 14:16:36 -0500 Received: from mail-pl1-x643.google.com (mail-pl1-x643.google.com [IPv6:2607:f8b0:4864:20::643]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3888E26559 for ; Mon, 14 Nov 2022 11:16:34 -0800 (PST) Received: by mail-pl1-x643.google.com with SMTP id 4so11000881pli.0 for ; Mon, 14 Nov 2022 11:16:34 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=+b9M0ntE1BJ/xQLOmPampTkRhvEiEUSMkpBcqYJ1fqU=; b=E6MVc6DqRvUMMiyPjtpNjCSpo9LuJ+qi/9OAXjjoGmGYUD/5JMsdVdz3h1tPQT0auZ 3Yaq7JWLJ0hpUFi5kvPdQDlLn4LBt2IsID79fGsDM1NYMsnYe1bOnn8dlFkbZaMGDoJ6 i+uQpI09zn9wRGmN6ocsABQtoG5sVwdb44ey1b3z5AIKIpniy9E7DlCfuDOv35Uvfc4J C5AO9PyQlbdmDSFA6QgfDR3WV9E88e/I7WrCMkX8lDCfJn952RGktcJAICrUlUjeKICV NKI68rJA1BsRfhrseEzB78Zy8r55BNqzCY+eexDYGCAqu2Kcer3xK44colm63fNcQTBB s04A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; 
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=+b9M0ntE1BJ/xQLOmPampTkRhvEiEUSMkpBcqYJ1fqU=; b=0lfAKPx5hISZXzIk6PeyEMkBoPlr+nbG9DznNA+mprUYdsrn4Gl+O2BF4mcWwYwv+v en3h+VqKzxYrZV4x/a6gIdq+sygC+z/cBBvOZcuZHxQkN0HFTEScH+wDGoDMzcSBEo4H EhvzaLZ1UhNf8JI86rpRke7q7H4uCsSdncqHLQ5pHZZB7lskzpITf3WqMEcb7YaAige9 db5HrqFNZ1QwK+ZrLOfSkD7uhMv4REia2Q1GO2mk0NScgAFgi3dMEd0YTrFfQj5ev00p ZjJrozqyU29qR24cLvnJ/osyySgqDGfi5Pj8mmYdbp3AqFAaEL8p08fvc/A31v1vw0Ii 86lA== X-Gm-Message-State: ANoB5pnVYgmBqN0+zAMFzT7fbAI3F1wiV8AUS+CPbcSv8yFO+BtZWAqu TRfEseFjFX4hhMX8woTs7PpcEMgCX24ftQ== X-Google-Smtp-Source: AA0mqf5Y1xh9OsOr2urBorWjaTz10qjZU3DdtgxrrNequYt+9LN9Fv1tCIcfRqbs6aMyUl+rtFUaPA== X-Received: by 2002:a17:902:6a8c:b0:188:b840:deec with SMTP id n12-20020a1709026a8c00b00188b840deecmr739829plk.15.1668453393421; Mon, 14 Nov 2022 11:16:33 -0800 (PST) Received: from localhost ([59.152.80.69]) by smtp.gmail.com with ESMTPSA id 69-20020a17090a09cb00b00212d4c50647sm10118843pjo.36.2022.11.14.11.16.32 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Nov 2022 11:16:33 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Dave Marchevsky Subject: [PATCH bpf-next v7 14/26] bpf: Rewrite kfunc argument handling Date: Tue, 15 Nov 2022 00:45:35 +0530 Message-Id: <20221114191547.1694267-15-memxor@gmail.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com> References: <20221114191547.1694267-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=27126; i=memxor@gmail.com; h=from:subject; bh=QHXbbMbvHOvIvJKU6M9E2CKFGLuy7gTYKmwEuLz6jME=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjcpPJ07lMBh6ZtclYbiFacC3QHZ1RPsuPDMeXWxr0 g6XANVGJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY3KTyQAKCRBM4MiGSL8Ryv3gD/ 9to2Itw5XDpEGi/KdaRSTYMDeFsQzDh6gQLeNCLuDL/jzTckFlhGHpSl+g06YKijKMMWxdC6wyO+1N 7ItRf8k+/gXpxnr+pRFeqhMvFj2kzG2THqo9cAJeP4plKWv2t0fu+IRCzAjL5uVTqwqpY7rlAyr98n kAF602W6Y5u6aOoln2lY2oBNERwsoflBfAnhUojgGO3c/Vn/MG9cI+KHqG/wb33T9EH+Efhg5IYwLg nfPXi0fqXZc5tU4oLLXy8sRTzJQ2i8pJQpwCuvgqwmGhjWmh/yaHprjkSTqu1im6PsNHgv+7ZthDA7 bAA7mN3qLnRYNMfk6XUBBKE3qLNmGE0Z43YHcXrKluxdl/x3Oa4ANSQ+cqQV5lhPmaKyXY4l0Cfq3U 4VAYOFozfeHbt3mT20HdE2R5FW0dr8M4JFL410Uuttqr38dsZ4rMqLWANxbInLcAbWuwQ0hIsH5fAS xaDIAxdD/5IImj0j3FAVWdZJfiosqsxfs6ISeZZeW9oUSA2uKIpbV30QVN8B/IcIYTbTJ5zlUOBAKJ K9jPssXib8rOEZMXJAEx6Pn+iMBsmnjfg25fGCGaO+SHunEkvXUij/AIT3fCuSICmzi+EWGzNejoHU 9UbV/hG/um+OCrXXYKapMsmVrhPTsHKIA1f2gpAJ3u6KWO2P7J5AUHDqngyg== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net As we continue to add more features, argument types, kfunc flags, and different extensions to kfuncs, the code to verify the correctness of the kfunc prototype wrt the passed in registers has become ad-hoc and ugly to read. To make life easier, and make a very clear split between different stages of argument processing, move all the code into verifier.c and refactor into easier to read helpers and functions. This also makes sharing code within the verifier easier with kfunc argument processing. This will be more and more useful in later patches as we are now moving to implement very core BPF helpers as kfuncs, to keep them experimental before baking into UAPI. 
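As an illustration of one convention the refactored argument handling keeps (see is_kfunc_arg_mem_size() below), here is a hedged sketch of a kernel-side kfunc taking a memory/size argument pair. The function name, BTF set name, and chosen program type are made up and not part of this series, and the usual __diag prototype boilerplate is omitted for brevity.

/* Illustrative sketch only: the "__sz" suffix on the second parameter
 * marks it as the size of the preceding pointer argument, so the
 * pointer is classified as KF_ARG_PTR_TO_MEM_SIZE (a mem, len pair)
 * rather than a plain KF_ARG_PTR_TO_MEM, and void * is then allowed.
 */
#include <linux/bpf.h>
#include <linux/btf.h>
#include <linux/btf_ids.h>
#include <linux/module.h>
#include <linux/string.h>

noinline int bpf_demo_fill_zero(void *mem, u32 mem__sz)
{
	memset(mem, 0, mem__sz);
	return 0;
}

BTF_SET8_START(demo_kfunc_ids)
BTF_ID_FLAGS(func, bpf_demo_fill_zero)
BTF_SET8_END(demo_kfunc_ids)

static const struct btf_kfunc_id_set demo_kfunc_set = {
	.owner = THIS_MODULE,
	.set   = &demo_kfunc_ids,
};

/* Registered with, e.g.:
 * register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &demo_kfunc_set);
 */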
Signed-off-by: Kumar Kartikeya Dwivedi --- include/linux/btf.h | 31 +- kernel/bpf/btf.c | 16 +- kernel/bpf/verifier.c | 547 +++++++++++++++++- .../bpf/prog_tests/kfunc_dynptr_param.c | 2 +- tools/testing/selftests/bpf/verifier/calls.c | 2 +- .../selftests/bpf/verifier/ref_tracking.c | 4 +- 6 files changed, 569 insertions(+), 33 deletions(-) diff --git a/include/linux/btf.h b/include/linux/btf.h index 42d8f3730a8d..d5b26380a60f 100644 --- a/include/linux/btf.h +++ b/include/linux/btf.h @@ -338,6 +338,16 @@ static inline bool btf_type_is_struct(const struct btf_type *t) return kind == BTF_KIND_STRUCT || kind == BTF_KIND_UNION; } +static inline bool __btf_type_is_struct(const struct btf_type *t) +{ + return BTF_INFO_KIND(t->info) == BTF_KIND_STRUCT; +} + +static inline bool btf_type_is_array(const struct btf_type *t) +{ + return BTF_INFO_KIND(t->info) == BTF_KIND_ARRAY; +} + static inline u16 btf_type_vlen(const struct btf_type *t) { return BTF_INFO_VLEN(t->info); @@ -439,9 +449,10 @@ static inline void *btf_id_set8_contains(const struct btf_id_set8 *set, u32 id) return bsearch(&id, set->pairs, set->cnt, sizeof(set->pairs[0]), btf_id_cmp_func); } -#ifdef CONFIG_BPF_SYSCALL struct bpf_prog; +struct bpf_verifier_log; +#ifdef CONFIG_BPF_SYSCALL const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id); const char *btf_name_by_offset(const struct btf *btf, u32 offset); struct btf *btf_parse_vmlinux(void); @@ -455,6 +466,12 @@ s32 btf_find_dtor_kfunc(struct btf *btf, u32 btf_id); int register_btf_id_dtor_kfuncs(const struct btf_id_dtor_kfunc *dtors, u32 add_cnt, struct module *owner); struct btf_struct_meta *btf_find_struct_meta(const struct btf *btf, u32 btf_id); +const struct btf_member * +btf_get_prog_ctx_type(struct bpf_verifier_log *log, const struct btf *btf, + const struct btf_type *t, enum bpf_prog_type prog_type, + int arg); +bool btf_types_are_same(const struct btf *btf1, u32 id1, + const struct btf *btf2, u32 id2); #else static inline const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id) @@ -490,6 +507,18 @@ static inline struct btf_struct_meta *btf_find_struct_meta(const struct btf *btf { return NULL; } +static inline const struct btf_member * +btf_get_prog_ctx_type(struct bpf_verifier_log *log, const struct btf *btf, + const struct btf_type *t, enum bpf_prog_type prog_type, + int arg) +{ + return NULL; +} +static inline bool btf_types_are_same(const struct btf *btf1, u32 id1, + const struct btf *btf2, u32 id2) +{ + return false; +} #endif static inline bool btf_type_is_struct_ptr(struct btf *btf, const struct btf_type *t) diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 10644343d877..ff8b46c209dd 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -478,16 +478,6 @@ static bool btf_type_nosize_or_null(const struct btf_type *t) return !t || btf_type_nosize(t); } -static bool __btf_type_is_struct(const struct btf_type *t) -{ - return BTF_INFO_KIND(t->info) == BTF_KIND_STRUCT; -} - -static bool btf_type_is_array(const struct btf_type *t) -{ - return BTF_INFO_KIND(t->info) == BTF_KIND_ARRAY; -} - static bool btf_type_is_datasec(const struct btf_type *t) { return BTF_INFO_KIND(t->info) == BTF_KIND_DATASEC; @@ -5537,7 +5527,7 @@ static u8 bpf_ctx_convert_map[] = { #undef BPF_MAP_TYPE #undef BPF_LINK_TYPE -static const struct btf_member * +const struct btf_member * btf_get_prog_ctx_type(struct bpf_verifier_log *log, const struct btf *btf, const struct btf_type *t, enum bpf_prog_type prog_type, int arg) @@ -6323,8 +6313,8 @@ int 
btf_struct_access(struct bpf_verifier_log *log, * end up with two different module BTFs, but IDs point to the common type in * vmlinux BTF. */ -static bool btf_types_are_same(const struct btf *btf1, u32 id1, - const struct btf *btf2, u32 id2) +bool btf_types_are_same(const struct btf *btf1, u32 id1, + const struct btf *btf2, u32 id2) { if (id1 != id2) return false; diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 99b5edb56978..ddb7ac1cb529 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -7859,19 +7859,523 @@ static void mark_btf_func_reg_size(struct bpf_verifier_env *env, u32 regno, } } +struct bpf_kfunc_call_arg_meta { + /* In parameters */ + struct btf *btf; + u32 func_id; + u32 kfunc_flags; + const struct btf_type *func_proto; + const char *func_name; + /* Out parameters */ + u32 ref_obj_id; + u8 release_regno; + bool r0_rdonly; + u64 r0_size; +}; + +static bool is_kfunc_acquire(struct bpf_kfunc_call_arg_meta *meta) +{ + return meta->kfunc_flags & KF_ACQUIRE; +} + +static bool is_kfunc_ret_null(struct bpf_kfunc_call_arg_meta *meta) +{ + return meta->kfunc_flags & KF_RET_NULL; +} + +static bool is_kfunc_release(struct bpf_kfunc_call_arg_meta *meta) +{ + return meta->kfunc_flags & KF_RELEASE; +} + +static bool is_kfunc_trusted_args(struct bpf_kfunc_call_arg_meta *meta) +{ + return meta->kfunc_flags & KF_TRUSTED_ARGS; +} + +static bool is_kfunc_sleepable(struct bpf_kfunc_call_arg_meta *meta) +{ + return meta->kfunc_flags & KF_SLEEPABLE; +} + +static bool is_kfunc_destructive(struct bpf_kfunc_call_arg_meta *meta) +{ + return meta->kfunc_flags & KF_DESTRUCTIVE; +} + +static bool is_kfunc_arg_kptr_get(struct bpf_kfunc_call_arg_meta *meta, int arg) +{ + return arg == 0 && (meta->kfunc_flags & KF_KPTR_GET); +} + +static bool is_kfunc_arg_mem_size(const struct btf *btf, + const struct btf_param *arg, + const struct bpf_reg_state *reg) +{ + int len, sfx_len = sizeof("__sz") - 1; + const struct btf_type *t; + const char *param_name; + + t = btf_type_skip_modifiers(btf, arg->type, NULL); + if (!btf_type_is_scalar(t) || reg->type != SCALAR_VALUE) + return false; + + /* In the future, this can be ported to use BTF tagging */ + param_name = btf_name_by_offset(btf, arg->name_off); + if (str_is_empty(param_name)) + return false; + len = strlen(param_name); + if (len < sfx_len) + return false; + param_name += len - sfx_len; + if (strncmp(param_name, "__sz", sfx_len)) + return false; + + return true; +} + +static bool is_kfunc_arg_ret_buf_size(const struct btf *btf, + const struct btf_param *arg, + const struct bpf_reg_state *reg, + const char *name) +{ + int len, target_len = strlen(name); + const struct btf_type *t; + const char *param_name; + + t = btf_type_skip_modifiers(btf, arg->type, NULL); + if (!btf_type_is_scalar(t) || reg->type != SCALAR_VALUE) + return false; + + param_name = btf_name_by_offset(btf, arg->name_off); + if (str_is_empty(param_name)) + return false; + len = strlen(param_name); + if (len != target_len) + return false; + if (strcmp(param_name, name)) + return false; + + return true; +} + +enum { + KF_ARG_DYNPTR_ID, +}; + +BTF_ID_LIST(kf_arg_btf_ids) +BTF_ID(struct, bpf_dynptr_kern) + +static bool is_kfunc_arg_dynptr(const struct btf *btf, + const struct btf_param *arg) +{ + const struct btf_type *t; + u32 res_id; + + t = btf_type_skip_modifiers(btf, arg->type, NULL); + if (!t) + return false; + if (!btf_type_is_ptr(t)) + return false; + t = btf_type_skip_modifiers(btf, t->type, &res_id); + if (!t) + return false; + return btf_types_are_same(btf, 
res_id, btf_vmlinux, kf_arg_btf_ids[KF_ARG_DYNPTR_ID]); +} + +/* Returns true if struct is composed of scalars, 4 levels of nesting allowed */ +static bool __btf_type_is_scalar_struct(struct bpf_verifier_env *env, + const struct btf *btf, + const struct btf_type *t, int rec) +{ + const struct btf_type *member_type; + const struct btf_member *member; + u32 i; + + if (!btf_type_is_struct(t)) + return false; + + for_each_member(i, t, member) { + const struct btf_array *array; + + member_type = btf_type_skip_modifiers(btf, member->type, NULL); + if (btf_type_is_struct(member_type)) { + if (rec >= 3) { + verbose(env, "max struct nesting depth exceeded\n"); + return false; + } + if (!__btf_type_is_scalar_struct(env, btf, member_type, rec + 1)) + return false; + continue; + } + if (btf_type_is_array(member_type)) { + array = btf_array(member_type); + if (!array->nelems) + return false; + member_type = btf_type_skip_modifiers(btf, array->type, NULL); + if (!btf_type_is_scalar(member_type)) + return false; + continue; + } + if (!btf_type_is_scalar(member_type)) + return false; + } + return true; +} + + +static u32 *reg2btf_ids[__BPF_REG_TYPE_MAX] = { +#ifdef CONFIG_NET + [PTR_TO_SOCKET] = &btf_sock_ids[BTF_SOCK_TYPE_SOCK], + [PTR_TO_SOCK_COMMON] = &btf_sock_ids[BTF_SOCK_TYPE_SOCK_COMMON], + [PTR_TO_TCP_SOCK] = &btf_sock_ids[BTF_SOCK_TYPE_TCP], +#endif +}; + +enum kfunc_ptr_arg_type { + KF_ARG_PTR_TO_CTX, + KF_ARG_PTR_TO_KPTR_STRONG, /* PTR_TO_KPTR but type specific */ + KF_ARG_PTR_TO_DYNPTR, + KF_ARG_PTR_TO_BTF_ID, /* Also covers reg2btf_ids conversions */ + KF_ARG_PTR_TO_MEM, + KF_ARG_PTR_TO_MEM_SIZE, /* Size derived from next argument, skip it */ +}; + +static enum kfunc_ptr_arg_type +get_kfunc_ptr_arg_type(struct bpf_verifier_env *env, + struct bpf_kfunc_call_arg_meta *meta, + const struct btf_type *t, const struct btf_type *ref_t, + const char *ref_tname, const struct btf_param *args, + int argno, int nargs) +{ + u32 regno = argno + 1; + struct bpf_reg_state *regs = cur_regs(env); + struct bpf_reg_state *reg = ®s[regno]; + bool arg_mem_size = false; + + /* In this function, we verify the kfunc's BTF as per the argument type, + * leaving the rest of the verification with respect to the register + * type to our caller. When a set of conditions hold in the BTF type of + * arguments, we resolve it to a known kfunc_ptr_arg_type. 
+ */ + if (btf_get_prog_ctx_type(&env->log, meta->btf, t, resolve_prog_type(env->prog), argno)) + return KF_ARG_PTR_TO_CTX; + + if (is_kfunc_arg_kptr_get(meta, argno)) { + if (!btf_type_is_ptr(ref_t)) { + verbose(env, "arg#0 BTF type must be a double pointer for kptr_get kfunc\n"); + return -EINVAL; + } + ref_t = btf_type_by_id(meta->btf, ref_t->type); + ref_tname = btf_name_by_offset(meta->btf, ref_t->name_off); + if (!btf_type_is_struct(ref_t)) { + verbose(env, "kernel function %s args#0 pointer type %s %s is not supported\n", + meta->func_name, btf_type_str(ref_t), ref_tname); + return -EINVAL; + } + return KF_ARG_PTR_TO_KPTR_STRONG; + } + + if (is_kfunc_arg_dynptr(meta->btf, &args[argno])) + return KF_ARG_PTR_TO_DYNPTR; + + if ((base_type(reg->type) == PTR_TO_BTF_ID || reg2btf_ids[base_type(reg->type)])) { + if (!btf_type_is_struct(ref_t)) { + verbose(env, "kernel function %s args#%d pointer type %s %s is not supported\n", + meta->func_name, argno, btf_type_str(ref_t), ref_tname); + return -EINVAL; + } + return KF_ARG_PTR_TO_BTF_ID; + } + + if (argno + 1 < nargs && is_kfunc_arg_mem_size(meta->btf, &args[argno + 1], ®s[regno + 1])) + arg_mem_size = true; + + /* This is the catch all argument type of register types supported by + * check_helper_mem_access. However, we only allow when argument type is + * pointer to scalar, or struct composed (recursively) of scalars. When + * arg_mem_size is true, the pointer can be void *. + */ + if (!btf_type_is_scalar(ref_t) && !__btf_type_is_scalar_struct(env, meta->btf, ref_t, 0) && + (arg_mem_size ? !btf_type_is_void(ref_t) : 1)) { + verbose(env, "arg#%d pointer type %s %s must point to %sscalar, or struct with scalar\n", + argno, btf_type_str(ref_t), ref_tname, arg_mem_size ? "void, " : ""); + return -EINVAL; + } + return arg_mem_size ? KF_ARG_PTR_TO_MEM_SIZE : KF_ARG_PTR_TO_MEM; +} + +static int process_kf_arg_ptr_to_btf_id(struct bpf_verifier_env *env, + struct bpf_reg_state *reg, + const struct btf_type *ref_t, + const char *ref_tname, u32 ref_id, + struct bpf_kfunc_call_arg_meta *meta, + int argno) +{ + const struct btf_type *reg_ref_t; + bool strict_type_match = false; + const struct btf *reg_btf; + const char *reg_ref_tname; + u32 reg_ref_id; + + if (reg->type == PTR_TO_BTF_ID) { + reg_btf = reg->btf; + reg_ref_id = reg->btf_id; + } else { + reg_btf = btf_vmlinux; + reg_ref_id = *reg2btf_ids[base_type(reg->type)]; + } + + if (is_kfunc_trusted_args(meta) || (is_kfunc_release(meta) && reg->ref_obj_id)) + strict_type_match = true; + + reg_ref_t = btf_type_skip_modifiers(reg_btf, reg_ref_id, ®_ref_id); + reg_ref_tname = btf_name_by_offset(reg_btf, reg_ref_t->name_off); + if (!btf_struct_ids_match(&env->log, reg_btf, reg_ref_id, reg->off, meta->btf, ref_id, strict_type_match)) { + verbose(env, "kernel function %s args#%d expected pointer to %s %s but R%d has a pointer to %s %s\n", + meta->func_name, argno, btf_type_str(ref_t), ref_tname, argno + 1, + btf_type_str(reg_ref_t), reg_ref_tname); + return -EINVAL; + } + return 0; +} + +static int process_kf_arg_ptr_to_kptr_strong(struct bpf_verifier_env *env, + struct bpf_reg_state *reg, + const struct btf_type *ref_t, + const char *ref_tname, + struct bpf_kfunc_call_arg_meta *meta, + int argno) +{ + struct btf_field *kptr_field; + + /* check_func_arg_reg_off allows var_off for + * PTR_TO_MAP_VALUE, but we need fixed offset to find + * off_desc. 
+ */ + if (!tnum_is_const(reg->var_off)) { + verbose(env, "arg#0 must have constant offset\n"); + return -EINVAL; + } + + kptr_field = btf_record_find(reg->map_ptr->record, reg->off + reg->var_off.value, BPF_KPTR); + if (!kptr_field || kptr_field->type != BPF_KPTR_REF) { + verbose(env, "arg#0 no referenced kptr at map value offset=%llu\n", + reg->off + reg->var_off.value); + return -EINVAL; + } + + if (!btf_struct_ids_match(&env->log, meta->btf, ref_t->type, 0, kptr_field->kptr.btf, + kptr_field->kptr.btf_id, true)) { + verbose(env, "kernel function %s args#%d expected pointer to %s %s\n", + meta->func_name, argno, btf_type_str(ref_t), ref_tname); + return -EINVAL; + } + return 0; +} + +static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_arg_meta *meta) +{ + const char *func_name = meta->func_name, *ref_tname; + const struct btf *btf = meta->btf; + const struct btf_param *args; + u32 i, nargs; + int ret; + + args = (const struct btf_param *)(meta->func_proto + 1); + nargs = btf_type_vlen(meta->func_proto); + if (nargs > MAX_BPF_FUNC_REG_ARGS) { + verbose(env, "Function %s has %d > %d args\n", func_name, nargs, + MAX_BPF_FUNC_REG_ARGS); + return -EINVAL; + } + + /* Check that BTF function arguments match actual types that the + * verifier sees. + */ + for (i = 0; i < nargs; i++) { + struct bpf_reg_state *regs = cur_regs(env), *reg = ®s[i + 1]; + const struct btf_type *t, *ref_t, *resolve_ret; + enum bpf_arg_type arg_type = ARG_DONTCARE; + u32 regno = i + 1, ref_id, type_size; + bool is_ret_buf_sz = false; + int kf_arg_type; + + t = btf_type_skip_modifiers(btf, args[i].type, NULL); + if (btf_type_is_scalar(t)) { + if (reg->type != SCALAR_VALUE) { + verbose(env, "R%d is not a scalar\n", regno); + return -EINVAL; + } + if (is_kfunc_arg_ret_buf_size(btf, &args[i], reg, "rdonly_buf_size")) { + meta->r0_rdonly = true; + is_ret_buf_sz = true; + } else if (is_kfunc_arg_ret_buf_size(btf, &args[i], reg, "rdwr_buf_size")) { + is_ret_buf_sz = true; + } + + if (is_ret_buf_sz) { + if (meta->r0_size) { + verbose(env, "2 or more rdonly/rdwr_buf_size parameters for kfunc"); + return -EINVAL; + } + + if (!tnum_is_const(reg->var_off)) { + verbose(env, "R%d is not a const\n", regno); + return -EINVAL; + } + + meta->r0_size = reg->var_off.value; + ret = mark_chain_precision(env, regno); + if (ret) + return ret; + } + continue; + } + + if (!btf_type_is_ptr(t)) { + verbose(env, "Unrecognized arg#%d type %s\n", i, btf_type_str(t)); + return -EINVAL; + } + + if (reg->ref_obj_id) { + if (is_kfunc_release(meta) && meta->ref_obj_id) { + verbose(env, "verifier internal error: more than one arg with ref_obj_id R%d %u %u\n", + regno, reg->ref_obj_id, + meta->ref_obj_id); + return -EFAULT; + } + meta->ref_obj_id = reg->ref_obj_id; + if (is_kfunc_release(meta)) + meta->release_regno = regno; + } + + ref_t = btf_type_skip_modifiers(btf, t->type, &ref_id); + ref_tname = btf_name_by_offset(btf, ref_t->name_off); + + kf_arg_type = get_kfunc_ptr_arg_type(env, meta, t, ref_t, ref_tname, args, i, nargs); + if (kf_arg_type < 0) + return kf_arg_type; + + switch (kf_arg_type) { + case KF_ARG_PTR_TO_BTF_ID: + if (!is_kfunc_trusted_args(meta)) + break; + if (!reg->ref_obj_id) { + verbose(env, "R%d must be referenced\n", regno); + return -EINVAL; + } + fallthrough; + case KF_ARG_PTR_TO_CTX: + /* Trusted arguments have the same offset checks as release arguments */ + arg_type |= OBJ_RELEASE; + break; + case KF_ARG_PTR_TO_KPTR_STRONG: + case KF_ARG_PTR_TO_DYNPTR: + case KF_ARG_PTR_TO_MEM: + case 
KF_ARG_PTR_TO_MEM_SIZE: + /* Trusted by default */ + break; + default: + WARN_ON_ONCE(1); + return -EFAULT; + } + + if (is_kfunc_release(meta) && reg->ref_obj_id) + arg_type |= OBJ_RELEASE; + ret = check_func_arg_reg_off(env, reg, regno, arg_type); + if (ret < 0) + return ret; + + switch (kf_arg_type) { + case KF_ARG_PTR_TO_CTX: + if (reg->type != PTR_TO_CTX) { + verbose(env, "arg#%d expected pointer to ctx, but got %s\n", i, btf_type_str(t)); + return -EINVAL; + } + break; + case KF_ARG_PTR_TO_KPTR_STRONG: + if (reg->type != PTR_TO_MAP_VALUE) { + verbose(env, "arg#0 expected pointer to map value\n"); + return -EINVAL; + } + ret = process_kf_arg_ptr_to_kptr_strong(env, reg, ref_t, ref_tname, meta, i); + if (ret < 0) + return ret; + break; + case KF_ARG_PTR_TO_DYNPTR: + if (reg->type != PTR_TO_STACK) { + verbose(env, "arg#%d expected pointer to stack\n", i); + return -EINVAL; + } + + if (!is_dynptr_reg_valid_init(env, reg)) { + verbose(env, "arg#%d pointer type %s %s must be valid and initialized\n", + i, btf_type_str(ref_t), ref_tname); + return -EINVAL; + } + + if (!is_dynptr_type_expected(env, reg, ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_LOCAL)) { + verbose(env, "arg#%d pointer type %s %s points to unsupported dynamic pointer type\n", + i, btf_type_str(ref_t), ref_tname); + return -EINVAL; + } + break; + case KF_ARG_PTR_TO_BTF_ID: + /* Only base_type is checked, further checks are done here */ + if (reg->type != PTR_TO_BTF_ID && + (!reg2btf_ids[base_type(reg->type)] || type_flag(reg->type))) { + verbose(env, "arg#%d expected pointer to btf or socket\n", i); + return -EINVAL; + } + ret = process_kf_arg_ptr_to_btf_id(env, reg, ref_t, ref_tname, ref_id, meta, i); + if (ret < 0) + return ret; + break; + case KF_ARG_PTR_TO_MEM: + resolve_ret = btf_resolve_size(btf, ref_t, &type_size); + if (IS_ERR(resolve_ret)) { + verbose(env, "arg#%d reference type('%s %s') size cannot be determined: %ld\n", + i, btf_type_str(ref_t), ref_tname, PTR_ERR(resolve_ret)); + return -EINVAL; + } + ret = check_mem_reg(env, reg, regno, type_size); + if (ret < 0) + return ret; + break; + case KF_ARG_PTR_TO_MEM_SIZE: + ret = check_kfunc_mem_size_reg(env, ®s[regno + 1], regno + 1); + if (ret < 0) { + verbose(env, "arg#%d arg#%d memory, len pair leads to invalid memory access\n", i, i + 1); + return ret; + } + /* Skip next '__sz' argument */ + i++; + break; + } + } + + if (is_kfunc_release(meta) && !meta->release_regno) { + verbose(env, "release kernel function %s expects refcounted PTR_TO_BTF_ID\n", + func_name); + return -EINVAL; + } + + return 0; +} + static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, int *insn_idx_p) { const struct btf_type *t, *func, *func_proto, *ptr_type; struct bpf_reg_state *regs = cur_regs(env); - struct bpf_kfunc_arg_meta meta = { 0 }; const char *func_name, *ptr_type_name; + struct bpf_kfunc_call_arg_meta meta; u32 i, nargs, func_id, ptr_type_id; int err, insn_idx = *insn_idx_p; const struct btf_param *args; struct btf *desc_btf; u32 *kfunc_flags; - bool acq; /* skip for now, but return error when we find this in fixup_kfunc_call */ if (!insn->imm) @@ -7892,24 +8396,34 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, func_name); return -EACCES; } - if (*kfunc_flags & KF_DESTRUCTIVE && !capable(CAP_SYS_BOOT)) { - verbose(env, "destructive kfunc calls require CAP_SYS_BOOT capabilities\n"); + + /* Prepare kfunc call metadata */ + memset(&meta, 0, sizeof(meta)); + meta.btf = desc_btf; + meta.func_id = func_id; + meta.kfunc_flags = 
*kfunc_flags; + meta.func_proto = func_proto; + meta.func_name = func_name; + + if (is_kfunc_destructive(&meta) && !capable(CAP_SYS_BOOT)) { + verbose(env, "destructive kfunc calls require CAP_SYS_BOOT capability\n"); return -EACCES; } - acq = *kfunc_flags & KF_ACQUIRE; - - meta.flags = *kfunc_flags; + if (is_kfunc_sleepable(&meta) && !env->prog->aux->sleepable) { + verbose(env, "program must be sleepable to call sleepable kfunc %s\n", func_name); + return -EACCES; + } /* Check the arguments */ - err = btf_check_kfunc_arg_match(env, desc_btf, func_id, regs, &meta); + err = check_kfunc_args(env, &meta); if (err < 0) return err; /* In case of release function, we get register number of refcounted - * PTR_TO_BTF_ID back from btf_check_kfunc_arg_match, do the release now + * PTR_TO_BTF_ID in bpf_kfunc_arg_meta, do the release now. */ - if (err) { - err = release_reference(env, regs[err].ref_obj_id); + if (meta.release_regno) { + err = release_reference(env, regs[meta.release_regno].ref_obj_id); if (err) { verbose(env, "kfunc %s#%d reference has not been acquired before\n", func_name, func_id); @@ -7923,7 +8437,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, /* Check return type */ t = btf_type_skip_modifiers(desc_btf, func_proto->type, NULL); - if (acq && !btf_type_is_struct_ptr(desc_btf, t)) { + if (is_kfunc_acquire(&meta) && !btf_type_is_struct_ptr(meta.btf, t)) { verbose(env, "acquire kernel function does not return PTR_TO_BTF_ID\n"); return -EINVAL; } @@ -7962,20 +8476,23 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, regs[BPF_REG_0].type = PTR_TO_BTF_ID; regs[BPF_REG_0].btf_id = ptr_type_id; } - if (*kfunc_flags & KF_RET_NULL) { + if (is_kfunc_ret_null(&meta)) { regs[BPF_REG_0].type |= PTR_MAYBE_NULL; /* For mark_ptr_or_null_reg, see 93c230e3f5bd6 */ regs[BPF_REG_0].id = ++env->id_gen; } mark_btf_func_reg_size(env, BPF_REG_0, sizeof(void *)); - if (acq) { + if (is_kfunc_acquire(&meta)) { int id = acquire_reference_state(env, insn_idx); if (id < 0) return id; - regs[BPF_REG_0].id = id; + if (is_kfunc_ret_null(&meta)) + regs[BPF_REG_0].id = id; regs[BPF_REG_0].ref_obj_id = id; } + if (reg_may_point_to_spin_lock(®s[BPF_REG_0]) && !regs[BPF_REG_0].id) + regs[BPF_REG_0].id = ++env->id_gen; } /* else { add_kfunc_call() ensures it is btf_type_is_void(t) } */ nargs = btf_type_vlen(func_proto); diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c b/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c index c210657d4d0a..55d641c1f126 100644 --- a/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c +++ b/tools/testing/selftests/bpf/prog_tests/kfunc_dynptr_param.c @@ -22,7 +22,7 @@ static struct { "arg#0 pointer type STRUCT bpf_dynptr_kern points to unsupported dynamic pointer type", 0}, {"not_valid_dynptr", "arg#0 pointer type STRUCT bpf_dynptr_kern must be valid and initialized", 0}, - {"not_ptr_to_stack", "arg#0 pointer type STRUCT bpf_dynptr_kern not to stack", 0}, + {"not_ptr_to_stack", "arg#0 expected pointer to stack", 0}, {"dynptr_data_null", NULL, -EBADMSG}, }; diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c index e1a937277b54..86d6fef2e3b4 100644 --- a/tools/testing/selftests/bpf/verifier/calls.c +++ b/tools/testing/selftests/bpf/verifier/calls.c @@ -109,7 +109,7 @@ }, .prog_type = BPF_PROG_TYPE_SCHED_CLS, .result = REJECT, - .errstr = "arg#0 pointer type STRUCT prog_test_ref_kfunc must point", + .errstr = "arg#0 expected 
pointer to btf or socket", .fixup_kfunc_btf_id = { { "bpf_kfunc_call_test_acquire", 3 }, { "bpf_kfunc_call_test_release", 5 }, diff --git a/tools/testing/selftests/bpf/verifier/ref_tracking.c b/tools/testing/selftests/bpf/verifier/ref_tracking.c index fd683a32a276..55cba01c99d5 100644 --- a/tools/testing/selftests/bpf/verifier/ref_tracking.c +++ b/tools/testing/selftests/bpf/verifier/ref_tracking.c @@ -142,7 +142,7 @@ .kfunc = "bpf", .expected_attach_type = BPF_LSM_MAC, .flags = BPF_F_SLEEPABLE, - .errstr = "arg#0 pointer type STRUCT bpf_key must point to scalar, or struct with scalar", + .errstr = "arg#0 expected pointer to btf or socket", .fixup_kfunc_btf_id = { { "bpf_lookup_user_key", 2 }, { "bpf_key_put", 4 }, @@ -163,7 +163,7 @@ .kfunc = "bpf", .expected_attach_type = BPF_LSM_MAC, .flags = BPF_F_SLEEPABLE, - .errstr = "arg#0 pointer type STRUCT bpf_key must point to scalar, or struct with scalar", + .errstr = "arg#0 expected pointer to btf or socket", .fixup_kfunc_btf_id = { { "bpf_lookup_system_key", 1 }, { "bpf_key_put", 3 }, From patchwork Mon Nov 14 19:15:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13042730 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F135CC433FE for ; Mon, 14 Nov 2022 19:16:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236120AbiKNTQ4 (ORCPT ); Mon, 14 Nov 2022 14:16:56 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56860 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237169AbiKNTQi (ORCPT ); Mon, 14 Nov 2022 14:16:38 -0500 Received: from mail-pf1-x442.google.com (mail-pf1-x442.google.com [IPv6:2607:f8b0:4864:20::442]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 505A527B3B for ; Mon, 14 Nov 2022 11:16:37 -0800 (PST) Received: by mail-pf1-x442.google.com with SMTP id y203so11947539pfb.4 for ; Mon, 14 Nov 2022 11:16:37 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=DjImNk8M5XDbutA0KqbUIZ0siyfqhHVevQ6oqXO1GeQ=; b=QVsl3qvx2Ap2WspWo8GEkowZEERAZVI4dhFbzPthIuxjhzkX7JBuoQ0mKVt19Rr1ub Az2IfYJkFZp67V6KDtabTYquABpx0Vl2Quv8iLIikUYpQPMy0/J0RQc8eu/qZ9WcrQQB AJbvw1Kqslmm0qvm2KwXGsW+K7Ke6/4TiN80bv8HyZFoubpToNkaLGSti6cXIYTGbsrB ffIk+ZzRQNHA/9DvzBys9sk5dI1Wq+9+eH++YHsKCX+269WNhJXdLduRZe2JJsBpdvGF 1VtgjThF+uCsCyWc628NN6LEQNvdEu6yb9aQrcbsphaM55FIH3aIBJi5k8JPlaUOdjuI w0ug== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=DjImNk8M5XDbutA0KqbUIZ0siyfqhHVevQ6oqXO1GeQ=; b=JPUCQCv3mbTGv7grUZtFE162xGfvoLIF3q5kFGQtOHsUYdcwNJi78yzZMolo/CMDw/ dMqiucADIJCYZ+9xDAlmUdACCv7wbCduWTvNuQ4gJYtOFAVCz1h4w5BEjTf5phltRYRp oSA23Q58gr++raEJKzHkQJZPEGwde3zcHpPc5Ll8kmLThGO7HNZTrNgbT4L9brbre40P 9n0n5MIJKUtE1MXbdkCbKJGheiQH5KIKQvCDz062aMdQ+BWLkY+TBtZu3xGBP9JEF5U8 3YpkUA0ce6Jwnbsobqtmliu2W2n6+ZbhEZ8N71jVj2npg8BOFdk4jubRxCnkmEaJKZbf hA+w== X-Gm-Message-State: 
ANoB5pnaaru4MtvCJX9bsMiuOXhR3y+yFzN8HsFkWfc8Hl0RtFJywFOU ID1HlJiEd7oz8wvT8QavXw5WA2ByU9UTqw== X-Google-Smtp-Source: AA0mqf5aXPzKY29qUAtFNJjdyzufaYTJY806EL5KDEDe+nWNfSTh1SGCYgHmKNG/BiDF0zKVL7Vqrw== X-Received: by 2002:a05:6a00:2196:b0:56d:1fdc:9d37 with SMTP id h22-20020a056a00219600b0056d1fdc9d37mr14813164pfi.77.1668453396350; Mon, 14 Nov 2022 11:16:36 -0800 (PST) Received: from localhost ([59.152.80.69]) by smtp.gmail.com with ESMTPSA id ja13-20020a170902efcd00b0017f7c4e260fsm7901580plb.150.2022.11.14.11.16.35 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Nov 2022 11:16:36 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Dave Marchevsky Subject: [PATCH bpf-next v7 15/26] bpf: Drop kfunc bits from btf_check_func_arg_match Date: Tue, 15 Nov 2022 00:45:36 +0530 Message-Id: <20221114191547.1694267-16-memxor@gmail.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com> References: <20221114191547.1694267-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=18717; i=memxor@gmail.com; h=from:subject; bh=pdDUwbE2QXj6eh5VUo2KvW+KmSPlZX5I7eHkZVq/RSY=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjcpPJhovOTrRWFLeXLHUZ/aI/5ZCuni0MlbnSnZUV HdmeipmJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY3KTyQAKCRBM4MiGSL8Ryn7lD/ 4+tzWl1h1BE80wOTI0QXmIgY0JY5x9E5kGv9K3YZgjXstBRhIykaM8xB+Gsc0h5jfQBszU0JhGjtKM 4VlRi8BrhxJoAT9TVCEM1x0amEq2tFXMAC7DflzFOkClhYpTmp2+wxREwycJurV0S6IbsmK2lfie4z Y1Ofxj+QqMwyKZ0dChAFxp2bmqyxn6pmag23XyZIWLCjl9WLWCWH5XFnWZuQ1Ciob8vxHUuBPGrRj3 hx6RJJ8iqPNARpgADKsIRy0Qtv8SkoGb6Ai3/0Kj2INse2kgFOKiGGphSD4IkxznTZAjzbKn4K7R9j ZAnKTkcVVeWM+2yno3HYDebgJkUZCTLT+A7WrG4bFzt0i2P0QLmsb9fX12D97aCxghJ8HXECtPxrrc XG8H/Uhe+wH3x7VL7xGGmtIBNeYZGCTD9HjQcvhR8yIMe0u+JCYJqRamK5FZ5Hq77SLGrwVpi3NHiy wP3m+8JcJci3vCuymTJNTMZEgQ4y6plr5iGjSMnYvbSnBdjErQGo+cwrXOHqW9MDUTYO2kpnMnc7Im cHcoArmsRRL7NDXL4IFbxDP10kzp6LkyQsr5FHbbJ1NK4t+xa/4KkHAQThkbFOXwssJ1ZxYxRGzpzn 5eptY3fsn2DmkO0tNMASG0f3eDphbP79TelUkHGg3B1QyEXj4W2VEK8U+bpA== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Remove all kfunc related bits now from btf_check_func_arg_match, as users have been converted away to refactored kfunc argument handling. This is split into a separate commit to aid review, in order to compare what has been preserved from the removed bits easily instead of mixing removed hunks with previous patch. 
Signed-off-by: Kumar Kartikeya Dwivedi --- include/linux/bpf.h | 11 -- include/linux/bpf_verifier.h | 2 - kernel/bpf/btf.c | 364 +---------------------------------- kernel/bpf/verifier.c | 4 +- 4 files changed, 10 insertions(+), 371 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index c88f75a68893..62a16b699e71 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -2108,22 +2108,11 @@ int btf_distill_func_proto(struct bpf_verifier_log *log, const char *func_name, struct btf_func_model *m); -struct bpf_kfunc_arg_meta { - u64 r0_size; - bool r0_rdonly; - int ref_obj_id; - u32 flags; -}; - struct bpf_reg_state; int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog, struct bpf_reg_state *regs); int btf_check_subprog_call(struct bpf_verifier_env *env, int subprog, struct bpf_reg_state *regs); -int btf_check_kfunc_arg_match(struct bpf_verifier_env *env, - const struct btf *btf, u32 func_id, - struct bpf_reg_state *regs, - struct bpf_kfunc_arg_meta *meta); int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog, struct bpf_reg_state *reg); int btf_check_type_match(struct bpf_verifier_log *log, const struct bpf_prog *prog, diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index fa738abea267..887fa4d922f6 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -597,8 +597,6 @@ int check_ptr_off_reg(struct bpf_verifier_env *env, int check_func_arg_reg_off(struct bpf_verifier_env *env, const struct bpf_reg_state *reg, int regno, enum bpf_arg_type arg_type); -int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, - u32 regno); int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno, u32 mem_size); bool is_dynptr_reg_valid_init(struct bpf_verifier_env *env, diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index ff8b46c209dd..6ea8d0cf81f7 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -6596,122 +6596,19 @@ int btf_check_type_match(struct bpf_verifier_log *log, const struct bpf_prog *pr return btf_check_func_type_match(log, btf1, t1, btf2, t2); } -static u32 *reg2btf_ids[__BPF_REG_TYPE_MAX] = { -#ifdef CONFIG_NET - [PTR_TO_SOCKET] = &btf_sock_ids[BTF_SOCK_TYPE_SOCK], - [PTR_TO_SOCK_COMMON] = &btf_sock_ids[BTF_SOCK_TYPE_SOCK_COMMON], - [PTR_TO_TCP_SOCK] = &btf_sock_ids[BTF_SOCK_TYPE_TCP], -#endif -}; - -/* Returns true if struct is composed of scalars, 4 levels of nesting allowed */ -static bool __btf_type_is_scalar_struct(struct bpf_verifier_log *log, - const struct btf *btf, - const struct btf_type *t, int rec) -{ - const struct btf_type *member_type; - const struct btf_member *member; - u32 i; - - if (!btf_type_is_struct(t)) - return false; - - for_each_member(i, t, member) { - const struct btf_array *array; - - member_type = btf_type_skip_modifiers(btf, member->type, NULL); - if (btf_type_is_struct(member_type)) { - if (rec >= 3) { - bpf_log(log, "max struct nesting depth exceeded\n"); - return false; - } - if (!__btf_type_is_scalar_struct(log, btf, member_type, rec + 1)) - return false; - continue; - } - if (btf_type_is_array(member_type)) { - array = btf_type_array(member_type); - if (!array->nelems) - return false; - member_type = btf_type_skip_modifiers(btf, array->type, NULL); - if (!btf_type_is_scalar(member_type)) - return false; - continue; - } - if (!btf_type_is_scalar(member_type)) - return false; - } - return true; -} - -static bool is_kfunc_arg_mem_size(const struct btf *btf, - const struct btf_param *arg, - const 
struct bpf_reg_state *reg) -{ - int len, sfx_len = sizeof("__sz") - 1; - const struct btf_type *t; - const char *param_name; - - t = btf_type_skip_modifiers(btf, arg->type, NULL); - if (!btf_type_is_scalar(t) || reg->type != SCALAR_VALUE) - return false; - - /* In the future, this can be ported to use BTF tagging */ - param_name = btf_name_by_offset(btf, arg->name_off); - if (str_is_empty(param_name)) - return false; - len = strlen(param_name); - if (len < sfx_len) - return false; - param_name += len - sfx_len; - if (strncmp(param_name, "__sz", sfx_len)) - return false; - - return true; -} - -static bool btf_is_kfunc_arg_mem_size(const struct btf *btf, - const struct btf_param *arg, - const struct bpf_reg_state *reg, - const char *name) -{ - int len, target_len = strlen(name); - const struct btf_type *t; - const char *param_name; - - t = btf_type_skip_modifiers(btf, arg->type, NULL); - if (!btf_type_is_scalar(t) || reg->type != SCALAR_VALUE) - return false; - - param_name = btf_name_by_offset(btf, arg->name_off); - if (str_is_empty(param_name)) - return false; - len = strlen(param_name); - if (len != target_len) - return false; - if (strcmp(param_name, name)) - return false; - - return true; -} - static int btf_check_func_arg_match(struct bpf_verifier_env *env, const struct btf *btf, u32 func_id, struct bpf_reg_state *regs, bool ptr_to_mem_ok, - struct bpf_kfunc_arg_meta *kfunc_meta, bool processing_call) { enum bpf_prog_type prog_type = resolve_prog_type(env->prog); - bool rel = false, kptr_get = false, trusted_args = false; - bool sleepable = false; struct bpf_verifier_log *log = &env->log; - u32 i, nargs, ref_id, ref_obj_id = 0; - bool is_kfunc = btf_is_kernel(btf); const char *func_name, *ref_tname; const struct btf_type *t, *ref_t; const struct btf_param *args; - int ref_regno = 0, ret; + u32 i, nargs, ref_id; + int ret; t = btf_type_by_id(btf, func_id); if (!t || !btf_type_is_func(t)) { @@ -6737,14 +6634,6 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, return -EINVAL; } - if (is_kfunc && kfunc_meta) { - /* Only kfunc can be release func */ - rel = kfunc_meta->flags & KF_RELEASE; - kptr_get = kfunc_meta->flags & KF_KPTR_GET; - trusted_args = kfunc_meta->flags & KF_TRUSTED_ARGS; - sleepable = kfunc_meta->flags & KF_SLEEPABLE; - } - /* check that BTF function arguments match actual types that the * verifier sees. 
*/ @@ -6752,42 +6641,9 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, enum bpf_arg_type arg_type = ARG_DONTCARE; u32 regno = i + 1; struct bpf_reg_state *reg = ®s[regno]; - bool obj_ptr = false; t = btf_type_skip_modifiers(btf, args[i].type, NULL); if (btf_type_is_scalar(t)) { - if (is_kfunc && kfunc_meta) { - bool is_buf_size = false; - - /* check for any const scalar parameter of name "rdonly_buf_size" - * or "rdwr_buf_size" - */ - if (btf_is_kfunc_arg_mem_size(btf, &args[i], reg, - "rdonly_buf_size")) { - kfunc_meta->r0_rdonly = true; - is_buf_size = true; - } else if (btf_is_kfunc_arg_mem_size(btf, &args[i], reg, - "rdwr_buf_size")) - is_buf_size = true; - - if (is_buf_size) { - if (kfunc_meta->r0_size) { - bpf_log(log, "2 or more rdonly/rdwr_buf_size parameters for kfunc"); - return -EINVAL; - } - - if (!tnum_is_const(reg->var_off)) { - bpf_log(log, "R%d is not a const\n", regno); - return -EINVAL; - } - - kfunc_meta->r0_size = reg->var_off.value; - ret = mark_chain_precision(env, regno); - if (ret) - return ret; - } - } - if (reg->type == SCALAR_VALUE) continue; bpf_log(log, "R%d is not a scalar\n", regno); @@ -6800,88 +6656,14 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, return -EINVAL; } - /* These register types have special constraints wrt ref_obj_id - * and offset checks. The rest of trusted args don't. - */ - obj_ptr = reg->type == PTR_TO_CTX || reg->type == PTR_TO_BTF_ID || - reg2btf_ids[base_type(reg->type)]; - - /* Check if argument must be a referenced pointer, args + i has - * been verified to be a pointer (after skipping modifiers). - * PTR_TO_CTX is ok without having non-zero ref_obj_id. - */ - if (is_kfunc && trusted_args && (obj_ptr && reg->type != PTR_TO_CTX) && !reg->ref_obj_id) { - bpf_log(log, "R%d must be referenced\n", regno); - return -EINVAL; - } - ref_t = btf_type_skip_modifiers(btf, t->type, &ref_id); ref_tname = btf_name_by_offset(btf, ref_t->name_off); - /* Trusted args have the same offset checks as release arguments */ - if ((trusted_args && obj_ptr) || (rel && reg->ref_obj_id)) - arg_type |= OBJ_RELEASE; ret = check_func_arg_reg_off(env, reg, regno, arg_type); if (ret < 0) return ret; - if (is_kfunc && reg->ref_obj_id) { - /* Ensure only one argument is referenced PTR_TO_BTF_ID */ - if (ref_obj_id) { - bpf_log(log, "verifier internal error: more than one arg with ref_obj_id R%d %u %u\n", - regno, reg->ref_obj_id, ref_obj_id); - return -EFAULT; - } - ref_regno = regno; - ref_obj_id = reg->ref_obj_id; - } - - /* kptr_get is only true for kfunc */ - if (i == 0 && kptr_get) { - struct btf_field *kptr_field; - - if (reg->type != PTR_TO_MAP_VALUE) { - bpf_log(log, "arg#0 expected pointer to map value\n"); - return -EINVAL; - } - - /* check_func_arg_reg_off allows var_off for - * PTR_TO_MAP_VALUE, but we need fixed offset to find - * off_desc. 
- */ - if (!tnum_is_const(reg->var_off)) { - bpf_log(log, "arg#0 must have constant offset\n"); - return -EINVAL; - } - - kptr_field = btf_record_find(reg->map_ptr->record, reg->off + reg->var_off.value, BPF_KPTR); - if (!kptr_field || kptr_field->type != BPF_KPTR_REF) { - bpf_log(log, "arg#0 no referenced kptr at map value offset=%llu\n", - reg->off + reg->var_off.value); - return -EINVAL; - } - - if (!btf_type_is_ptr(ref_t)) { - bpf_log(log, "arg#0 BTF type must be a double pointer\n"); - return -EINVAL; - } - - ref_t = btf_type_skip_modifiers(btf, ref_t->type, &ref_id); - ref_tname = btf_name_by_offset(btf, ref_t->name_off); - - if (!btf_type_is_struct(ref_t)) { - bpf_log(log, "kernel function %s args#%d pointer type %s %s is not supported\n", - func_name, i, btf_type_str(ref_t), ref_tname); - return -EINVAL; - } - if (!btf_struct_ids_match(log, btf, ref_id, 0, kptr_field->kptr.btf, - kptr_field->kptr.btf_id, true)) { - bpf_log(log, "kernel function %s args#%d expected pointer to %s %s\n", - func_name, i, btf_type_str(ref_t), ref_tname); - return -EINVAL; - } - /* rest of the arguments can be anything, like normal kfunc */ - } else if (btf_get_prog_ctx_type(log, btf, t, prog_type, i)) { + if (btf_get_prog_ctx_type(log, btf, t, prog_type, i)) { /* If function expects ctx type in BTF check that caller * is passing PTR_TO_CTX. */ @@ -6891,109 +6673,10 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, i, btf_type_str(t)); return -EINVAL; } - } else if (is_kfunc && (reg->type == PTR_TO_BTF_ID || - (reg2btf_ids[base_type(reg->type)] && !type_flag(reg->type)))) { - const struct btf_type *reg_ref_t; - const struct btf *reg_btf; - const char *reg_ref_tname; - u32 reg_ref_id; - - if (!btf_type_is_struct(ref_t)) { - bpf_log(log, "kernel function %s args#%d pointer type %s %s is not supported\n", - func_name, i, btf_type_str(ref_t), - ref_tname); - return -EINVAL; - } - - if (reg->type == PTR_TO_BTF_ID) { - reg_btf = reg->btf; - reg_ref_id = reg->btf_id; - } else { - reg_btf = btf_vmlinux; - reg_ref_id = *reg2btf_ids[base_type(reg->type)]; - } - - reg_ref_t = btf_type_skip_modifiers(reg_btf, reg_ref_id, - ®_ref_id); - reg_ref_tname = btf_name_by_offset(reg_btf, - reg_ref_t->name_off); - if (!btf_struct_ids_match(log, reg_btf, reg_ref_id, - reg->off, btf, ref_id, - trusted_args || (rel && reg->ref_obj_id))) { - bpf_log(log, "kernel function %s args#%d expected pointer to %s %s but R%d has a pointer to %s %s\n", - func_name, i, - btf_type_str(ref_t), ref_tname, - regno, btf_type_str(reg_ref_t), - reg_ref_tname); - return -EINVAL; - } } else if (ptr_to_mem_ok && processing_call) { const struct btf_type *resolve_ret; u32 type_size; - if (is_kfunc) { - bool arg_mem_size = i + 1 < nargs && is_kfunc_arg_mem_size(btf, &args[i + 1], ®s[regno + 1]); - bool arg_dynptr = btf_type_is_struct(ref_t) && - !strcmp(ref_tname, - stringify_struct(bpf_dynptr_kern)); - - /* Permit pointer to mem, but only when argument - * type is pointer to scalar, or struct composed - * (recursively) of scalars. - * When arg_mem_size is true, the pointer can be - * void *. - * Also permit initialized local dynamic pointers. - */ - if (!btf_type_is_scalar(ref_t) && - !__btf_type_is_scalar_struct(log, btf, ref_t, 0) && - !arg_dynptr && - (arg_mem_size ? !btf_type_is_void(ref_t) : 1)) { - bpf_log(log, - "arg#%d pointer type %s %s must point to %sscalar, or struct with scalar\n", - i, btf_type_str(ref_t), ref_tname, arg_mem_size ? 
"void, " : ""); - return -EINVAL; - } - - if (arg_dynptr) { - if (reg->type != PTR_TO_STACK) { - bpf_log(log, "arg#%d pointer type %s %s not to stack\n", - i, btf_type_str(ref_t), - ref_tname); - return -EINVAL; - } - - if (!is_dynptr_reg_valid_init(env, reg)) { - bpf_log(log, - "arg#%d pointer type %s %s must be valid and initialized\n", - i, btf_type_str(ref_t), - ref_tname); - return -EINVAL; - } - - if (!is_dynptr_type_expected(env, reg, - ARG_PTR_TO_DYNPTR | DYNPTR_TYPE_LOCAL)) { - bpf_log(log, - "arg#%d pointer type %s %s points to unsupported dynamic pointer type\n", - i, btf_type_str(ref_t), - ref_tname); - return -EINVAL; - } - - continue; - } - - /* Check for mem, len pair */ - if (arg_mem_size) { - if (check_kfunc_mem_size_reg(env, ®s[regno + 1], regno + 1)) { - bpf_log(log, "arg#%d arg#%d memory, len pair leads to invalid memory access\n", - i, i + 1); - return -EINVAL; - } - i++; - continue; - } - } - resolve_ret = btf_resolve_size(btf, ref_t, &type_size); if (IS_ERR(resolve_ret)) { bpf_log(log, @@ -7006,36 +6689,13 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, if (check_mem_reg(env, reg, regno, type_size)) return -EINVAL; } else { - bpf_log(log, "reg type unsupported for arg#%d %sfunction %s#%d\n", i, - is_kfunc ? "kernel " : "", func_name, func_id); + bpf_log(log, "reg type unsupported for arg#%d function %s#%d\n", i, + func_name, func_id); return -EINVAL; } } - /* Either both are set, or neither */ - WARN_ON_ONCE((ref_obj_id && !ref_regno) || (!ref_obj_id && ref_regno)); - /* We already made sure ref_obj_id is set only for one argument. We do - * allow (!rel && ref_obj_id), so that passing such referenced - * PTR_TO_BTF_ID to other kfuncs works. Note that rel is only true when - * is_kfunc is true. - */ - if (rel && !ref_obj_id) { - bpf_log(log, "release kernel function %s expects refcounted PTR_TO_BTF_ID\n", - func_name); - return -EINVAL; - } - - if (sleepable && !env->prog->aux->sleepable) { - bpf_log(log, "kernel function %s is sleepable but the program is not\n", - func_name); - return -EINVAL; - } - - if (kfunc_meta && ref_obj_id) - kfunc_meta->ref_obj_id = ref_obj_id; - - /* returns argument register number > 0 in case of reference release kfunc */ - return rel ? ref_regno : 0; + return 0; } /* Compare BTF of a function declaration with given bpf_reg_state. @@ -7065,7 +6725,7 @@ int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog, return -EINVAL; is_global = prog->aux->func_info_aux[subprog].linkage == BTF_FUNC_GLOBAL; - err = btf_check_func_arg_match(env, btf, btf_id, regs, is_global, NULL, false); + err = btf_check_func_arg_match(env, btf, btf_id, regs, is_global, false); /* Compiler optimizations can remove arguments from static functions * or mismatched type can be passed into a global function. @@ -7108,7 +6768,7 @@ int btf_check_subprog_call(struct bpf_verifier_env *env, int subprog, return -EINVAL; is_global = prog->aux->func_info_aux[subprog].linkage == BTF_FUNC_GLOBAL; - err = btf_check_func_arg_match(env, btf, btf_id, regs, is_global, NULL, true); + err = btf_check_func_arg_match(env, btf, btf_id, regs, is_global, true); /* Compiler optimizations can remove arguments from static functions * or mismatched type can be passed into a global function. 
@@ -7119,14 +6779,6 @@ int btf_check_subprog_call(struct bpf_verifier_env *env, int subprog, return err; } -int btf_check_kfunc_arg_match(struct bpf_verifier_env *env, - const struct btf *btf, u32 func_id, - struct bpf_reg_state *regs, - struct bpf_kfunc_arg_meta *meta) -{ - return btf_check_func_arg_match(env, btf, func_id, regs, true, meta, true); -} - /* Convert BTF of a function into bpf_reg_state if possible * Returns: * EFAULT - there is a verifier bug. Abort verification. diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index ddb7ac1cb529..d95b6cc63e38 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -5544,8 +5544,8 @@ int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, return err; } -int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, - u32 regno) +static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, + u32 regno) { struct bpf_reg_state *mem_reg = &cur_regs(env)[regno - 1]; bool may_be_null = type_may_be_null(mem_reg->type);

From patchwork Mon Nov 14 19:15:37 2022
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13042731
X-Patchwork-Delegate: bpf@iogearbox.net
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Dave Marchevsky
Subject: [PATCH bpf-next v7 16/26] bpf: Support constant scalar arguments for kfuncs
Date: Tue, 15 Nov 2022 00:45:37 +0530
Message-Id: <20221114191547.1694267-17-memxor@gmail.com>
In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com>
References: <20221114191547.1694267-1-memxor@gmail.com>

Allow passing known constant scalars as arguments to kfuncs that do not represent a size parameter. This makes the verifier's search pruning more conservative for such kfunc calls: states that pass different constant values for such an argument are not considered equivalent.

We will then use this support to expose a bpf_obj_new function that takes the local type ID of a type in program BTF and returns a PTR_TO_BTF_ID | MEM_ALLOC pointer to the local type, allowing programs to allocate their own objects.

Signed-off-by: Kumar Kartikeya Dwivedi --- Documentation/bpf/kfuncs.rst | 22 ++++++++++++++ kernel/bpf/verifier.c | 59 +++++++++++++++++++++++++++--------- 2 files changed, 67 insertions(+), 14 deletions(-) diff --git a/Documentation/bpf/kfuncs.rst b/Documentation/bpf/kfuncs.rst index 0f858156371d..8fa9c052417f 100644 --- a/Documentation/bpf/kfuncs.rst +++ b/Documentation/bpf/kfuncs.rst @@ -72,6 +72,28 @@ argument as its size. By default, without __sz annotation, the size of the type of the pointer is used. Without __sz annotation, a kfunc cannot accept a void pointer.
+2.2.1 __k Annotation +-------------------- + +This annotation is only understood for scalar arguments, where it indicates that +the verifier must check the scalar argument to be a known constant, which does +not indicate a size parameter. + +An example is given below:: + + void *bpf_obj_new(u32 local_type_id__k, ...) + { + ... + } + +Here, bpf_obj_new uses local_type_id argument to find out the size of that type +ID in program's BTF and return a sized pointer to it. Each type ID will have a +distinct size, hence it is crucial to treat each such call as distinct when +values don't match. + +Hence, whenever a constant scalar argument is accepted by a kfunc which is not a +size parameter, __k suffix should be used. + .. _BPF_kfunc_nodef: 2.3 Using an existing kernel function diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index d95b6cc63e38..a4a1424b19a5 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -7871,6 +7871,10 @@ struct bpf_kfunc_call_arg_meta { u8 release_regno; bool r0_rdonly; u64 r0_size; + struct { + u64 value; + bool found; + } arg_constant; }; static bool is_kfunc_acquire(struct bpf_kfunc_call_arg_meta *meta) @@ -7908,30 +7912,40 @@ static bool is_kfunc_arg_kptr_get(struct bpf_kfunc_call_arg_meta *meta, int arg) return arg == 0 && (meta->kfunc_flags & KF_KPTR_GET); } -static bool is_kfunc_arg_mem_size(const struct btf *btf, - const struct btf_param *arg, - const struct bpf_reg_state *reg) +static bool __kfunc_param_match_suffix(const struct btf *btf, + const struct btf_param *arg, + const char *suffix) { - int len, sfx_len = sizeof("__sz") - 1; - const struct btf_type *t; + int suffix_len = strlen(suffix), len; const char *param_name; - t = btf_type_skip_modifiers(btf, arg->type, NULL); - if (!btf_type_is_scalar(t) || reg->type != SCALAR_VALUE) - return false; - /* In the future, this can be ported to use BTF tagging */ param_name = btf_name_by_offset(btf, arg->name_off); if (str_is_empty(param_name)) return false; len = strlen(param_name); - if (len < sfx_len) + if (len < suffix_len) return false; - param_name += len - sfx_len; - if (strncmp(param_name, "__sz", sfx_len)) + param_name += len - suffix_len; + return !strncmp(param_name, suffix, suffix_len); +} + +static bool is_kfunc_arg_mem_size(const struct btf *btf, + const struct btf_param *arg, + const struct bpf_reg_state *reg) +{ + const struct btf_type *t; + + t = btf_type_skip_modifiers(btf, arg->type, NULL); + if (!btf_type_is_scalar(t) || reg->type != SCALAR_VALUE) return false; - return true; + return __kfunc_param_match_suffix(btf, arg, "__sz"); +} + +static bool is_kfunc_arg_sfx_constant(const struct btf *btf, const struct btf_param *arg) +{ + return __kfunc_param_match_suffix(btf, arg, "__k"); } static bool is_kfunc_arg_ret_buf_size(const struct btf *btf, @@ -8207,7 +8221,24 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ verbose(env, "R%d is not a scalar\n", regno); return -EINVAL; } - if (is_kfunc_arg_ret_buf_size(btf, &args[i], reg, "rdonly_buf_size")) { + if (is_kfunc_arg_sfx_constant(meta->btf, &args[i])) { + /* kfunc is already bpf_capable() only, no need + * to check it here. 
+ */ + if (meta->arg_constant.found) { + verbose(env, "verifier internal error: only one constant argument permitted\n"); + return -EFAULT; + } + if (!tnum_is_const(reg->var_off)) { + verbose(env, "R%d must be a known constant\n", regno); + return -EINVAL; + } + ret = mark_chain_precision(env, regno); + if (ret < 0) + return ret; + meta->arg_constant.found = true; + meta->arg_constant.value = reg->var_off.value; + } else if (is_kfunc_arg_ret_buf_size(btf, &args[i], reg, "rdonly_buf_size")) { meta->r0_rdonly = true; is_ret_buf_sz = true; } else if (is_kfunc_arg_ret_buf_size(btf, &args[i], reg, "rdwr_buf_size")) {

From patchwork Mon Nov 14 19:15:38 2022
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13042732
X-Patchwork-Delegate: bpf@iogearbox.net
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Dave Marchevsky
Subject: [PATCH bpf-next v7 17/26] bpf: Introduce bpf_obj_new
Date: Tue, 15 Nov 2022 00:45:38 +0530
Message-Id: <20221114191547.1694267-18-memxor@gmail.com>
In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com>
References: <20221114191547.1694267-1-memxor@gmail.com>

Introduce the type-safe memory allocator bpf_obj_new for BPF programs. The kernel side kfunc is named bpf_obj_new_impl, as passing hidden arguments to kfuncs still requires having them in the prototype, unlike BPF helpers, which always take 5 arguments and have them checked using bpf_func_proto in the verifier, ignoring unset argument types.

Introduce an __ign suffix to ignore a specific kfunc argument during type checks, then use this to introduce support for passing type metadata to the bpf_obj_new_impl kfunc. The user passes the BTF ID of the type it wants to allocate in program BTF; the verifier then rewrites the first argument as the size of this type, after performing some sanity checks (to ensure the type exists and is a struct type). The second argument is also fixed up and passed by the verifier. This is the btf_struct_meta for the type being allocated. It is needed mostly for the offset array, which is required for zero-initializing special fields while leaving the rest of the storage in an uninitialized state. It will also be needed in the next patch to perform proper destruction of the object's special fields.

Under the hood, bpf_obj_new will call bpf_mem_alloc and bpf_mem_free, using the any-context BPF memory allocator introduced recently. To this end, a global instance of the BPF memory allocator is initialized on boot to be used for this purpose. This 'bpf_global_ma' serves all allocations for bpf_obj_new. In the future, bpf_obj_new variants will allow specifying a custom allocator.

Note that now that bpf_obj_new can be used to allocate objects that can be linked into a BPF linked list (when future linked list helpers are available), we need to also free the elements using bpf_mem_free.
However, since the draining of elements is done outside the bpf_spin_lock, we need to do migrate_disable around the call since bpf_list_head_free can be called from map free path where migration is enabled. Otherwise, when called from BPF programs migration is already disabled. A convenience macro is included in the bpf_experimental.h header to hide over the ugly details of the implementation, leading to user code looking similar to a language level extension which allocates and constructs fields of a user type. struct bar { struct bpf_list_node node; }; struct foo { struct bpf_spin_lock lock; struct bpf_list_head head __contains(bar, node); }; void prog(void) { struct foo *f; f = bpf_obj_new(typeof(*f)); if (!f) return; ... } A key piece of this story is still missing, i.e. the free function, which will come in the next patch. Signed-off-by: Kumar Kartikeya Dwivedi --- include/linux/bpf.h | 21 ++-- include/linux/bpf_verifier.h | 2 + kernel/bpf/core.c | 16 +++ kernel/bpf/helpers.c | 47 ++++++-- kernel/bpf/verifier.c | 107 ++++++++++++++++-- .../testing/selftests/bpf/bpf_experimental.h | 25 ++++ 6 files changed, 195 insertions(+), 23 deletions(-) create mode 100644 tools/testing/selftests/bpf/bpf_experimental.h diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 62a16b699e71..4635e31bd6fc 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -54,6 +54,8 @@ struct cgroup; extern struct idr btf_idr; extern spinlock_t btf_idr_lock; extern struct kobject *btf_kobj; +extern struct bpf_mem_alloc bpf_global_ma; +extern bool bpf_global_ma_set; typedef u64 (*bpf_callback_t)(u64, u64, u64, u64, u64); typedef int (*bpf_iter_init_seq_priv_t)(void *private_data, @@ -333,16 +335,19 @@ static inline bool btf_record_has_field(const struct btf_record *rec, enum btf_f return rec->field_mask & type; } -static inline void check_and_init_map_value(struct bpf_map *map, void *dst) +static inline void bpf_obj_init(const struct btf_field_offs *foffs, void *obj) { - if (!IS_ERR_OR_NULL(map->record)) { - struct btf_field *fields = map->record->fields; - u32 cnt = map->record->cnt; - int i; + int i; - for (i = 0; i < cnt; i++) - memset(dst + fields[i].offset, 0, btf_field_type_size(fields[i].type)); - } + if (!foffs) + return; + for (i = 0; i < foffs->cnt; i++) + memset(obj + foffs->field_off[i], 0, foffs->field_sz[i]); +} + +static inline void check_and_init_map_value(struct bpf_map *map, void *dst) +{ + bpf_obj_init(map->field_offs, dst); } /* memcpy that is used with 8-byte aligned pointers, power-of-8 size and diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index 887fa4d922f6..306fc1d6cc4a 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -427,6 +427,8 @@ struct bpf_insn_aux_data { */ struct bpf_loop_inline_state loop_inline_state; }; + u64 obj_new_size; /* remember the size of type passed to bpf_obj_new to rewrite R1 */ + struct btf_struct_meta *kptr_struct_meta; u64 map_key_state; /* constant (32 bit) key tracking for maps */ int ctx_field_size; /* the ctx field size for load insn, maybe 0 */ u32 seen; /* this insn was processed by the verifier at env->pass_cnt */ diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 9c16338bcbe8..2e57fc839a5c 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -34,6 +34,7 @@ #include #include #include +#include #include #include @@ -60,6 +61,9 @@ #define CTX regs[BPF_REG_CTX] #define IMM insn->imm +struct bpf_mem_alloc bpf_global_ma; +bool bpf_global_ma_set; + /* No hurry in this branch * * 
Exported for the bpf jit load helper. @@ -2746,6 +2750,18 @@ int __weak bpf_arch_text_invalidate(void *dst, size_t len) return -ENOTSUPP; } +#ifdef CONFIG_BPF_SYSCALL +static int __init bpf_global_ma_init(void) +{ + int ret; + + ret = bpf_mem_alloc_init(&bpf_global_ma, 0, false); + bpf_global_ma_set = !ret; + return ret; +} +late_initcall(bpf_global_ma_init); +#endif + DEFINE_STATIC_KEY_FALSE(bpf_stats_enabled_key); EXPORT_SYMBOL(bpf_stats_enabled_key); diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index 5bc0b9f0f306..c4f1c22cc44c 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -19,6 +19,7 @@ #include #include #include +#include #include "../../lib/kstrtox.h" @@ -1735,25 +1736,57 @@ void bpf_list_head_free(const struct btf_field *field, void *list_head, obj -= field->list_head.node_offset; head = head->next; - /* TODO: Rework later */ - kfree(obj); + /* The contained type can also have resources, including a + * bpf_list_head which needs to be freed. + */ + bpf_obj_free_fields(field->list_head.value_rec, obj); + /* bpf_mem_free requires migrate_disable(), since we can be + * called from map free path as well apart from BPF program (as + * part of map ops doing bpf_obj_free_fields). + */ + migrate_disable(); + bpf_mem_free(&bpf_global_ma, obj); + migrate_enable(); } } -BTF_SET8_START(tracing_btf_ids) +__diag_push(); +__diag_ignore_all("-Wmissing-prototypes", + "Global functions as their definitions will be in vmlinux BTF"); + +void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign) +{ + struct btf_struct_meta *meta = meta__ign; + u64 size = local_type_id__k; + void *p; + + if (unlikely(!bpf_global_ma_set)) + return NULL; + p = bpf_mem_alloc(&bpf_global_ma, size); + if (!p) + return NULL; + if (meta) + bpf_obj_init(meta->field_offs, p); + return p; +} + +__diag_pop(); + +BTF_SET8_START(generic_btf_ids) #ifdef CONFIG_KEXEC_CORE BTF_ID_FLAGS(func, crash_kexec, KF_DESTRUCTIVE) #endif -BTF_SET8_END(tracing_btf_ids) +BTF_ID_FLAGS(func, bpf_obj_new_impl, KF_ACQUIRE | KF_RET_NULL) +BTF_SET8_END(generic_btf_ids) -static const struct btf_kfunc_id_set tracing_kfunc_set = { +static const struct btf_kfunc_id_set generic_kfunc_set = { .owner = THIS_MODULE, - .set = &tracing_btf_ids, + .set = &generic_btf_ids, }; static int __init kfunc_init(void) { - return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &tracing_kfunc_set); + return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &generic_kfunc_set); } late_initcall(kfunc_init); diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index a4a1424b19a5..c7f5d83783db 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -7948,6 +7948,11 @@ static bool is_kfunc_arg_sfx_constant(const struct btf *btf, const struct btf_pa return __kfunc_param_match_suffix(btf, arg, "__k"); } +static bool is_kfunc_arg_sfx_ignore(const struct btf *btf, const struct btf_param *arg) +{ + return __kfunc_param_match_suffix(btf, arg, "__ign"); +} + static bool is_kfunc_arg_ret_buf_size(const struct btf *btf, const struct btf_param *arg, const struct bpf_reg_state *reg, @@ -8216,6 +8221,10 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ int kf_arg_type; t = btf_type_skip_modifiers(btf, args[i].type, NULL); + + if (is_kfunc_arg_sfx_ignore(btf, &args[i])) + continue; + if (btf_type_is_scalar(t)) { if (reg->type != SCALAR_VALUE) { verbose(env, "R%d is not a scalar\n", regno); @@ -8395,6 +8404,17 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ return 0; } +enum 
special_kfunc_type { + KF_bpf_obj_new_impl, +}; + +BTF_SET_START(special_kfunc_set) +BTF_ID(func, bpf_obj_new_impl) +BTF_SET_END(special_kfunc_set) + +BTF_ID_LIST(special_kfunc_list) +BTF_ID(func, bpf_obj_new_impl) + static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, int *insn_idx_p) { @@ -8469,17 +8489,64 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, t = btf_type_skip_modifiers(desc_btf, func_proto->type, NULL); if (is_kfunc_acquire(&meta) && !btf_type_is_struct_ptr(meta.btf, t)) { - verbose(env, "acquire kernel function does not return PTR_TO_BTF_ID\n"); - return -EINVAL; + /* Only exception is bpf_obj_new_impl */ + if (meta.btf != btf_vmlinux || meta.func_id != special_kfunc_list[KF_bpf_obj_new_impl]) { + verbose(env, "acquire kernel function does not return PTR_TO_BTF_ID\n"); + return -EINVAL; + } } if (btf_type_is_scalar(t)) { mark_reg_unknown(env, regs, BPF_REG_0); mark_btf_func_reg_size(env, BPF_REG_0, t->size); } else if (btf_type_is_ptr(t)) { - ptr_type = btf_type_skip_modifiers(desc_btf, t->type, - &ptr_type_id); - if (!btf_type_is_struct(ptr_type)) { + ptr_type = btf_type_skip_modifiers(desc_btf, t->type, &ptr_type_id); + + if (meta.btf == btf_vmlinux && btf_id_set_contains(&special_kfunc_set, meta.func_id)) { + if (!btf_type_is_void(ptr_type)) { + verbose(env, "kernel function %s must have void * return type\n", + meta.func_name); + return -EINVAL; + } + if (meta.func_id == special_kfunc_list[KF_bpf_obj_new_impl]) { + const struct btf_type *ret_t; + struct btf *ret_btf; + u32 ret_btf_id; + + if (((u64)(u32)meta.arg_constant.value) != meta.arg_constant.value) { + verbose(env, "local type ID argument must be in range [0, U32_MAX]\n"); + return -EINVAL; + } + + ret_btf = env->prog->aux->btf; + ret_btf_id = meta.arg_constant.value; + + /* This may be NULL due to user not supplying a BTF */ + if (!ret_btf) { + verbose(env, "bpf_obj_new requires prog BTF\n"); + return -EINVAL; + } + + ret_t = btf_type_by_id(ret_btf, ret_btf_id); + if (!ret_t || !__btf_type_is_struct(ret_t)) { + verbose(env, "bpf_obj_new type ID argument must be of a struct\n"); + return -EINVAL; + } + + mark_reg_known_zero(env, regs, BPF_REG_0); + regs[BPF_REG_0].type = PTR_TO_BTF_ID | MEM_ALLOC; + regs[BPF_REG_0].btf = ret_btf; + regs[BPF_REG_0].btf_id = ret_btf_id; + + env->insn_aux_data[insn_idx].obj_new_size = ret_t->size; + env->insn_aux_data[insn_idx].kptr_struct_meta = + btf_find_struct_meta(ret_btf, ret_btf_id); + } else { + verbose(env, "kernel function %s unhandled dynamic return type\n", + meta.func_name); + return -EFAULT; + } + } else if (!__btf_type_is_struct(ptr_type)) { if (!meta.r0_size) { ptr_type_name = btf_name_by_offset(desc_btf, ptr_type->name_off); @@ -8507,6 +8574,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, regs[BPF_REG_0].type = PTR_TO_BTF_ID; regs[BPF_REG_0].btf_id = ptr_type_id; } + if (is_kfunc_ret_null(&meta)) { regs[BPF_REG_0].type |= PTR_MAYBE_NULL; /* For mark_ptr_or_null_reg, see 93c230e3f5bd6 */ @@ -14644,8 +14712,8 @@ static int fixup_call_args(struct bpf_verifier_env *env) return err; } -static int fixup_kfunc_call(struct bpf_verifier_env *env, - struct bpf_insn *insn) +static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, + struct bpf_insn *insn_buf, int insn_idx, int *cnt) { const struct bpf_kfunc_desc *desc; @@ -14664,8 +14732,21 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, return -EFAULT; } + *cnt = 0; insn->imm = desc->imm; + 
if (insn->off) + return 0; + if (desc->func_id == special_kfunc_list[KF_bpf_obj_new_impl]) { + struct btf_struct_meta *kptr_struct_meta = env->insn_aux_data[insn_idx].kptr_struct_meta; + struct bpf_insn addr[2] = { BPF_LD_IMM64(BPF_REG_2, (long)kptr_struct_meta) }; + u64 obj_new_size = env->insn_aux_data[insn_idx].obj_new_size; + insn_buf[0] = BPF_MOV64_IMM(BPF_REG_1, obj_new_size); + insn_buf[1] = addr[0]; + insn_buf[2] = addr[1]; + insn_buf[3] = *insn; + *cnt = 4; + } return 0; } @@ -14807,9 +14888,19 @@ static int do_misc_fixups(struct bpf_verifier_env *env) if (insn->src_reg == BPF_PSEUDO_CALL) continue; if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) { - ret = fixup_kfunc_call(env, insn); + ret = fixup_kfunc_call(env, insn, insn_buf, i + delta, &cnt); if (ret) return ret; + if (cnt == 0) + continue; + + new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt); + if (!new_prog) + return -ENOMEM; + + delta += cnt - 1; + env->prog = prog = new_prog; + insn = new_prog->insnsi + i + delta; continue; } diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h new file mode 100644 index 000000000000..aeb6a7fcb7c4 --- /dev/null +++ b/tools/testing/selftests/bpf/bpf_experimental.h @@ -0,0 +1,25 @@ +#ifndef __BPF_EXPERIMENTAL__ +#define __BPF_EXPERIMENTAL__ + +#include +#include +#include +#include + +/* Description + * Allocates an object of the type represented by 'local_type_id' in + * program BTF. User may use the bpf_core_type_id_local macro to pass the + * type ID of a struct in program BTF. + * + * The 'local_type_id' parameter must be a known constant. + * The 'meta' parameter is a hidden argument that is ignored. + * Returns + * A pointer to an object of the type corresponding to the passed in + * 'local_type_id', or NULL on failure. 
+ */ +extern void *bpf_obj_new_impl(__u64 local_type_id, void *meta) __ksym; + +/* Convenience macro to wrap over bpf_obj_new_impl */ +#define bpf_obj_new(type) ((type *)bpf_obj_new_impl(bpf_core_type_id_local(type), NULL)) + +#endif

From patchwork Mon Nov 14 19:15:39 2022
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13042733
X-Patchwork-Delegate: bpf@iogearbox.net
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Dave Marchevsky
Subject: [PATCH bpf-next v7 18/26] bpf: Introduce bpf_obj_drop
Date: Tue, 15 Nov 2022 00:45:39 +0530
Message-Id: <20221114191547.1694267-19-memxor@gmail.com>
In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com>
References: <20221114191547.1694267-1-memxor@gmail.com>

Introduce bpf_obj_drop, the kfunc used to free objects allocated using bpf_obj_new. Pairing with bpf_obj_new, it implicitly destructs the special fields that are part of the object, without user intervention. Just like in the previous patch, the btf_struct_meta needed to free up the special fields is passed as a hidden argument to the kfunc.

For the user, a convenience macro wraps the kernel side kfunc, which is named bpf_obj_drop_impl.
Continuing the previous example: void prog(void) { struct foo *f; f = bpf_obj_new(typeof(*f)); if (!f) return; bpf_obj_drop(f); } Signed-off-by: Kumar Kartikeya Dwivedi --- kernel/bpf/helpers.c | 11 ++++ kernel/bpf/verifier.c | 66 +++++++++++++++---- .../testing/selftests/bpf/bpf_experimental.h | 13 ++++ 3 files changed, 79 insertions(+), 11 deletions(-) diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index c4f1c22cc44c..71d803ca0c1d 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -1770,6 +1770,16 @@ void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign) return p; } +void bpf_obj_drop_impl(void *p__alloc, void *meta__ign) +{ + struct btf_struct_meta *meta = meta__ign; + void *p = p__alloc; + + if (meta) + bpf_obj_free_fields(meta->record, p); + bpf_mem_free(&bpf_global_ma, p); +} + __diag_pop(); BTF_SET8_START(generic_btf_ids) @@ -1777,6 +1787,7 @@ BTF_SET8_START(generic_btf_ids) BTF_ID_FLAGS(func, crash_kexec, KF_DESTRUCTIVE) #endif BTF_ID_FLAGS(func, bpf_obj_new_impl, KF_ACQUIRE | KF_RET_NULL) +BTF_ID_FLAGS(func, bpf_obj_drop_impl, KF_RELEASE) BTF_SET8_END(generic_btf_ids) static const struct btf_kfunc_id_set generic_kfunc_set = { diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index c7f5d83783db..7372737cbde9 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -7875,6 +7875,10 @@ struct bpf_kfunc_call_arg_meta { u64 value; bool found; } arg_constant; + struct { + struct btf *btf; + u32 btf_id; + } arg_obj_drop; }; static bool is_kfunc_acquire(struct bpf_kfunc_call_arg_meta *meta) @@ -7953,6 +7957,11 @@ static bool is_kfunc_arg_sfx_ignore(const struct btf *btf, const struct btf_para return __kfunc_param_match_suffix(btf, arg, "__ign"); } +static bool is_kfunc_arg_alloc_obj(const struct btf *btf, const struct btf_param *arg) +{ + return __kfunc_param_match_suffix(btf, arg, "__alloc"); +} + static bool is_kfunc_arg_ret_buf_size(const struct btf *btf, const struct btf_param *arg, const struct bpf_reg_state *reg, @@ -8053,6 +8062,7 @@ static u32 *reg2btf_ids[__BPF_REG_TYPE_MAX] = { enum kfunc_ptr_arg_type { KF_ARG_PTR_TO_CTX, + KF_ARG_PTR_TO_ALLOC_BTF_ID, /* Allocated object */ KF_ARG_PTR_TO_KPTR_STRONG, /* PTR_TO_KPTR but type specific */ KF_ARG_PTR_TO_DYNPTR, KF_ARG_PTR_TO_BTF_ID, /* Also covers reg2btf_ids conversions */ @@ -8060,6 +8070,20 @@ enum kfunc_ptr_arg_type { KF_ARG_PTR_TO_MEM_SIZE, /* Size derived from next argument, skip it */ }; +enum special_kfunc_type { + KF_bpf_obj_new_impl, + KF_bpf_obj_drop_impl, +}; + +BTF_SET_START(special_kfunc_set) +BTF_ID(func, bpf_obj_new_impl) +BTF_ID(func, bpf_obj_drop_impl) +BTF_SET_END(special_kfunc_set) + +BTF_ID_LIST(special_kfunc_list) +BTF_ID(func, bpf_obj_new_impl) +BTF_ID(func, bpf_obj_drop_impl) + static enum kfunc_ptr_arg_type get_kfunc_ptr_arg_type(struct bpf_verifier_env *env, struct bpf_kfunc_call_arg_meta *meta, @@ -8080,6 +8104,9 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env, if (btf_get_prog_ctx_type(&env->log, meta->btf, t, resolve_prog_type(env->prog), argno)) return KF_ARG_PTR_TO_CTX; + if (is_kfunc_arg_alloc_obj(meta->btf, &args[argno])) + return KF_ARG_PTR_TO_ALLOC_BTF_ID; + if (is_kfunc_arg_kptr_get(meta, argno)) { if (!btf_type_is_ptr(ref_t)) { verbose(env, "arg#0 BTF type must be a double pointer for kptr_get kfunc\n"); @@ -8298,6 +8325,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ return kf_arg_type; switch (kf_arg_type) { + case KF_ARG_PTR_TO_ALLOC_BTF_ID: case KF_ARG_PTR_TO_BTF_ID: if (!is_kfunc_trusted_args(meta)) 
break; @@ -8334,6 +8362,21 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ return -EINVAL; } break; + case KF_ARG_PTR_TO_ALLOC_BTF_ID: + if (reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) { + verbose(env, "arg#%d expected pointer to allocated object\n", i); + return -EINVAL; + } + if (!reg->ref_obj_id) { + verbose(env, "allocated object must be referenced\n"); + return -EINVAL; + } + if (meta->btf == btf_vmlinux && + meta->func_id == special_kfunc_list[KF_bpf_obj_drop_impl]) { + meta->arg_obj_drop.btf = reg->btf; + meta->arg_obj_drop.btf_id = reg->btf_id; + } + break; case KF_ARG_PTR_TO_KPTR_STRONG: if (reg->type != PTR_TO_MAP_VALUE) { verbose(env, "arg#0 expected pointer to map value\n"); @@ -8404,17 +8447,6 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ return 0; } -enum special_kfunc_type { - KF_bpf_obj_new_impl, -}; - -BTF_SET_START(special_kfunc_set) -BTF_ID(func, bpf_obj_new_impl) -BTF_SET_END(special_kfunc_set) - -BTF_ID_LIST(special_kfunc_list) -BTF_ID(func, bpf_obj_new_impl) - static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, int *insn_idx_p) { @@ -8541,6 +8573,10 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, env->insn_aux_data[insn_idx].obj_new_size = ret_t->size; env->insn_aux_data[insn_idx].kptr_struct_meta = btf_find_struct_meta(ret_btf, ret_btf_id); + } else if (meta.func_id == special_kfunc_list[KF_bpf_obj_drop_impl]) { + env->insn_aux_data[insn_idx].kptr_struct_meta = + btf_find_struct_meta(meta.arg_obj_drop.btf, + meta.arg_obj_drop.btf_id); } else { verbose(env, "kernel function %s unhandled dynamic return type\n", meta.func_name); @@ -14746,6 +14782,14 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, insn_buf[2] = addr[1]; insn_buf[3] = *insn; *cnt = 4; + } else if (desc->func_id == special_kfunc_list[KF_bpf_obj_drop_impl]) { + struct btf_struct_meta *kptr_struct_meta = env->insn_aux_data[insn_idx].kptr_struct_meta; + struct bpf_insn addr[2] = { BPF_LD_IMM64(BPF_REG_2, (long)kptr_struct_meta) }; + + insn_buf[0] = addr[0]; + insn_buf[1] = addr[1]; + insn_buf[2] = *insn; + *cnt = 3; } return 0; } diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h index aeb6a7fcb7c4..8473395a11af 100644 --- a/tools/testing/selftests/bpf/bpf_experimental.h +++ b/tools/testing/selftests/bpf/bpf_experimental.h @@ -22,4 +22,17 @@ extern void *bpf_obj_new_impl(__u64 local_type_id, void *meta) __ksym; /* Convenience macro to wrap over bpf_obj_new_impl */ #define bpf_obj_new(type) ((type *)bpf_obj_new_impl(bpf_core_type_id_local(type), NULL)) +/* Description + * Free an allocated object. All fields of the object that require + * destruction will be destructed before the storage is freed. + * + * The 'meta' parameter is a hidden argument that is ignored. + * Returns + * Void. 
+ */ +extern void bpf_obj_drop_impl(void *kptr, void *meta) __ksym; + +/* Convenience macro to wrap over bpf_obj_drop_impl */ +#define bpf_obj_drop(kptr) bpf_obj_drop_impl(kptr, NULL) + #endif From patchwork Mon Nov 14 19:15:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13042734 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 14D59C433FE for ; Mon, 14 Nov 2022 19:17:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235800AbiKNTRI (ORCPT ); Mon, 14 Nov 2022 14:17:08 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56990 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237340AbiKNTQx (ORCPT ); Mon, 14 Nov 2022 14:16:53 -0500 Received: from mail-pf1-x442.google.com (mail-pf1-x442.google.com [IPv6:2607:f8b0:4864:20::442]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id ECC1B29361 for ; Mon, 14 Nov 2022 11:16:49 -0800 (PST) Received: by mail-pf1-x442.google.com with SMTP id 140so10474749pfz.6 for ; Mon, 14 Nov 2022 11:16:49 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=qWiYAsRgE6BX8qJ0Iu8jZXDm2tAGkdrg9ETQquXoTlI=; b=QRVu4fG/sV68FEzalpq4wde8k2XK2PKZnTvpnREks7okA046FlAJvDFjyT5e4rpY2r 0sXeuF1vBubW0/lqHXUbOTZrXXSdvB1Iuq0JpQw3ubinAnOwY4V1ghH24VmO2H3kPLRG kk3uD2QFptyJc5vDUqs9CJ3CdyLpS4rsD+C5CqNwuPsNI6OS9uHaraJRDvhx7z/h2PVs vDn/fX9vZTjB8EaNCnZqgsLOi3SS9vKAtN2enZenJhiwdXbDR5bvNPh4UNBDoJnEDpEc SXWj6oPYnkpBtYGmOx5/IajhyrZKCjjzS0sLZzNeQkOT0NUom0e8zkofiBLrS8vTx24r 9npg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=qWiYAsRgE6BX8qJ0Iu8jZXDm2tAGkdrg9ETQquXoTlI=; b=xATtYLGVGzH9+KwwWLhDH+j/JezMl3eChQyimIq7fLIsEMc/2uxd+sOAgkSZpB5rP6 7VhEJ7jYgkfTscvAmy1ili1VtTk8Sy/Mxs/E1gezU2OwAXE7P7XvfvNBXvt/6ybHdh1d 0j8JOIH4+Ynmk1kRsRUe4O9LrxkJUNMT+pEHc0ROcg1TXvOx5xitPN+nCWw+yYtHs0GN Hl0YYJsCJmc7O1g4+9Rx4Q9cDvfvlANZ7qn7ai703KeXHczKzrN42mS/UVa0NQUNxDW4 LmH6xB9x1l5ACZCc8xYF6G/fj29AuiivwfRcniWeX37aOSVDMDPbQE5+T3Zic3EL9LA1 yUSw== X-Gm-Message-State: ANoB5pks56WjyN0/+NdYmcWZ+5XOxK7FfDiQs0jR2DLqAdrvk7IVFX5Z zOURfY91gJl3/waWk1BlKvX/AraQQVCylw== X-Google-Smtp-Source: AA0mqf4P0INbQDx/ifVJwKXR1ahUnXCKfXUk44hJU4eRWwySdXay/iGhGY/PCXkVw6GPUfk4mUBd1Q== X-Received: by 2002:a63:1949:0:b0:46f:38ad:de99 with SMTP id 9-20020a631949000000b0046f38adde99mr13050820pgz.218.1668453409427; Mon, 14 Nov 2022 11:16:49 -0800 (PST) Received: from localhost ([59.152.80.69]) by smtp.gmail.com with ESMTPSA id v64-20020a626143000000b0056bcd7e1e04sm7081365pfb.124.2022.11.14.11.16.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Nov 2022 11:16:49 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Dave Marchevsky Subject: [PATCH bpf-next v7 19/26] bpf: Permit NULL checking pointer with non-zero fixed offset Date: Tue, 
15 Nov 2022 00:45:40 +0530 Message-Id: <20221114191547.1694267-20-memxor@gmail.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com> References: <20221114191547.1694267-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2227; i=memxor@gmail.com; h=from:subject; bh=6Xyei5T2AP1FaAFXmRhfrqj1zOfW3ny611AeNkXJLek=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjcpPJKfgXhqUoqyJsCCsCEvXFyun4/h/CISksRirh Gy+8s76JAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY3KTyQAKCRBM4MiGSL8Ryqi8EA C9lxKSSHIs/86mXCDuRVZ0SAQMM1+i3eRUC0/NkGWPl94SGQ5d0DW93J6JkGHJdQOl9xAmJJLowVoX XUBytFukbja1u91DDX7tNEQn4VDMvErnIrRu46HXIUktXBu7hjSTSAkkXQA0pDEYuQ6yeteax/hTt1 RiLeowgw+LFoF48NFiti+2lzuKxcfbgSsNfp4rmteWkklvd/3PfX1ocWxAIEGPSCs6J5n0uanvUp+d Gl8DsY8cE4iMS8f7DSFSjonEo7Uxfhq5WzF4IU57RbC99QhtbwMFCvBPWmIN2YxOU5dTLrNPDxXP2X TvKzpXoLhLkfjhKq9096knbmSPzjKxqFmBKeWL28JfNS2FiAD8ixI2cFci05zWKIeKXJc46iUZ0zMa cyHLyU5tpt6Cz9QuQZjXQOCbdwOh2N94d59643/0TLAT5kDtuvesIjVNz33v0EbefkP4qE/VnjZJ4J jow9WRCzKKNv8xrVrey4yPWO5SVTEnmoYkODoRRzv8BxOrLVH0mqEhTTwjx8ruK49vZAbYK32sY8Tu G9AO7wg1J8GG1/jZ6EmOvdUtuc5Wj1F+ZZFMY33f0mKm7EAj9eheawqNlDoav6Elfu67Nv0rZ2Vcap wuW1aHPkiilDb9UXuZ0kc0W+GAJ0YBe3FCnarikJtbXQkds0LebZZIX7WtSg== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Pointer increment on seeing PTR_MAYBE_NULL is already protected against, hence make an exception for PTR_TO_BTF_ID | MEM_ALLOC while still keeping the warning for other unintended cases that might creep in. bpf_list_pop_{front,_back} helpers planned to be introduced in next commit will return a MEM_ALLOC register with incremented offset pointing to bpf_list_node field. The user is supposed to then obtain the pointer to the entry using container_of after NULL checking it. The current restrictions trigger a warning when doing the NULL checking. Revisiting the reason, it is meant as an assertion which seems to actually work and catch the bad case. Hence, under no other circumstances can reg->off be non-zero for a register that has the PTR_MAYBE_NULL type flag set. Signed-off-by: Kumar Kartikeya Dwivedi --- kernel/bpf/verifier.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 7372737cbde9..e194c3feb01f 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -10800,15 +10800,20 @@ static void mark_ptr_or_null_reg(struct bpf_func_state *state, { if (type_may_be_null(reg->type) && reg->id == id && !WARN_ON_ONCE(!reg->id)) { - if (WARN_ON_ONCE(reg->smin_value || reg->smax_value || - !tnum_equals_const(reg->var_off, 0) || - reg->off)) { + if (reg->smin_value || reg->smax_value || !tnum_equals_const(reg->var_off, 0) || reg->off) { /* Old offset (both fixed and variable parts) should * have been known-zero, because we don't allow pointer * arithmetic on pointers that might be NULL. If we * see this happening, don't convert the register. + * + * But in some cases, some helpers that return local + * kptrs advance offset for the returned pointer. + * In those cases, it is fine to expect to see reg->off. 
*/ - return; + if (WARN_ON_ONCE(reg->type != (PTR_TO_BTF_ID | MEM_ALLOC | PTR_MAYBE_NULL))) + return; + if (WARN_ON_ONCE(reg->smin_value || reg->smax_value || !tnum_equals_const(reg->var_off, 0))) + return; } if (is_null) { reg->type = SCALAR_VALUE; From patchwork Mon Nov 14 19:15:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13042736 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A24DAC4332F for ; Mon, 14 Nov 2022 19:17:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236117AbiKNTRL (ORCPT ); Mon, 14 Nov 2022 14:17:11 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55976 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236656AbiKNTQ7 (ORCPT ); Mon, 14 Nov 2022 14:16:59 -0500 Received: from mail-pj1-x1044.google.com (mail-pj1-x1044.google.com [IPv6:2607:f8b0:4864:20::1044]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0561927B32 for ; Mon, 14 Nov 2022 11:16:52 -0800 (PST) Received: by mail-pj1-x1044.google.com with SMTP id f5-20020a17090a4a8500b002131bb59d61so12969086pjh.1 for ; Mon, 14 Nov 2022 11:16:52 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=8UqleD//A6KXCAy0Y6jAtCZ9y5aTXl3qAfr2evlQs5c=; b=ZhChMiIBZ248i4iKb6xQwSu6dl9N/eJiOHlDi88Pb1bHsBNPncGwDVY1X4lOlui95q FJgCCqLkVY8hsVZ6KCOZvfxwyAMhny5GRe6ZIZ2yVgSKki3BI2rm+1wukNpUk2YU3sjw DZu5O/kMpwsCLU3n0/oCJKub3U5m/OHKdySr8siCdhxIY0p3mKYKplODXGppcCYwP7mX wHDf+LlnyaeBtXR7RzGPIIBLkus90WeR7PvULiGpjjXaYA0mnuzgHW2Pd9UpwHC3pFGM YvFnpiL5lr3rQ0LRZJj/TivPutyoLhNVlQBJ0bY/aUdzk4eGhzl4mnIs0llSoYo66SF7 q3uw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=8UqleD//A6KXCAy0Y6jAtCZ9y5aTXl3qAfr2evlQs5c=; b=BSs9KYO5jkK/RuL/zxzYmuRB2EerBKMb3gmBSobB9Un++tpFXXhJ9CjUnio6El5C37 M0VhYND+5x85AEUE2DKt6i1freZw4sCjHCiuDMJU2CQbRBjlp4st5WHoJB/hHrH+MkJQ tlL9SK1yfosbavu/TFUtCzqt6eCRh2OWm3DI6Jjdt9iMAJxgLTApWG6iwXZeTyZuwV+F gOQiWfn56HKRH/ImvB0gZWnYoFjugG4XTsToSoaQAEKVuJUir5A5XsWG3cIIwNav2fg0 qA51CUb09UJVEhyrnAcqKu2b3h8tiGV6S7csAhwApMWGgu/ka8irJ1XBihZCBgs1yS5n nu3w== X-Gm-Message-State: ANoB5pmnA4kbZDfHPRZhAD7ZMnTK3/kqggxZn6UTkUR1g/BhvHlBRmDa F1qBLQFel6OJignFXCwXR+TLnfIMzfpwKQ== X-Google-Smtp-Source: AA0mqf5nA88FaUKVskKXo5U2RSHEbOtJERWsvwwlXaV+NI/jsnp9Lv4sKp6OYTs+ZvH7TBbOSa7esw== X-Received: by 2002:a17:90a:4ec8:b0:20d:63be:917b with SMTP id v8-20020a17090a4ec800b0020d63be917bmr15243220pjl.80.1668453412225; Mon, 14 Nov 2022 11:16:52 -0800 (PST) Received: from localhost ([59.152.80.69]) by smtp.gmail.com with ESMTPSA id s10-20020a170902ea0a00b00177faf558b5sm7895849plg.250.2022.11.14.11.16.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Nov 2022 11:16:51 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Dave Marchevsky Subject: 
[PATCH bpf-next v7 20/26] bpf: Introduce single ownership BPF linked list API Date: Tue, 15 Nov 2022 00:45:41 +0530 Message-Id: <20221114191547.1694267-21-memxor@gmail.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com> References: <20221114191547.1694267-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=20838; i=memxor@gmail.com; h=from:subject; bh=9pD5UQ4ILuM8SrVIrfLdY/cgruSQvKt+hNH5IofHh/g=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjcpPJzXrRsJ8qcIPFmcD4xmI8leU27A4YB5uzicS/ 7hC+MJmJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY3KTyQAKCRBM4MiGSL8Ryu11D/ wOavHFwoHHfSh1pcO7RTwQ0W97MemJkIGPcutLmMBlFkVeeXBCKAG+thJKoZIPgWbYph76aVyWx8HI nJsJPqUVXezIjMX0vXXxXwfEcIVd5+k+NfEHKB4OTdUx/EOSe0z5E5yoR4tvcvqKQT51Fscq5CElr6 4eHhdQuiBBcSpDCA9YcJy8uI+py9kv6iirzQvEq5KFoDPptKsMwyASQsFhrCZJfqQt4nOSCKCLdOxQ QPpjWwGU/r0bN9wRwjVB+xlNJJ4wwthI6cbQZB6cNRoCawXbTHOxHXkyMwQOTR5+wOr4cN5NffzT0g fzUdR4/k9RrY8M+74ZfNkOphYUr/3Venx2IKHTiFYtTMb9sKc2AZTZybkHHJ82nB0PWGuWf8m4WInV n5OCsJiRLxdiDhN6XEu6ScdO4HKQBAqkLlYLI3WUHwLXfufMBZsYeRAXcZJdHB1ApSaWoVKKaex7Qt vOYY9SVz7mnlKcte7UXRimOjMI7QpJWaoj1kmPEiMCuDByBghOXVbpsJGMNX7qDkBFvrVXzUpfNFpD X+7S60S+5/LtbWQiQWXnPdMcTG+yPUC0/kXOahjvOp36Gt9E07MqmI1xU4EUz95va/C8pLuZ48dnUK O+b9edZ39PWvlAt374loKP/qIAai4rTqlUriRw73cEi8yCd9C54SWguGxQ8A== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Add a linked list API for use in BPF programs, where it expects protection from the bpf_spin_lock in the same allocation as the bpf_list_head. For now, only one bpf_spin_lock can be present hence that is assumed to be the one protecting the bpf_list_head. The following functions are added to kick things off: // Add node to beginning of list void bpf_list_push_front(struct bpf_list_head *head, struct bpf_list_node *node); // Add node to end of list void bpf_list_push_back(struct bpf_list_head *head, struct bpf_list_node *node); // Remove node at beginning of list and return it struct bpf_list_node *bpf_list_pop_front(struct bpf_list_head *head); // Remove node at end of list and return it struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head); The lock protecting the bpf_list_head needs to be taken for all operations. The verifier ensures that the lock that needs to be taken is always held, and only the correct lock is taken for these operations. These checks are made statically by relying on the reg->id preserved for registers pointing into regions having both bpf_spin_lock and the objects protected by it. The comment over check_reg_allocation_locked in this change describes the logic in detail. Note that bpf_list_push_front and bpf_list_push_back are meant to consume the object containing the node in the 1st argument, however that specific mechanism is intended to not release the ref_obj_id directly until the bpf_spin_unlock is called. In this commit, nothing is done, but the next commit will be introducing logic to handle this case, so it has been left as is for now. bpf_list_pop_front and bpf_list_pop_back delete the first or last item of the list respectively, and return pointer to the element at the list_node offset. The user can then use container_of style macro to get the actual entry type. The verifier however statically knows the actual type, so the safety properties are still preserved. 
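To make the intended flow concrete, here is a rough usage sketch. The struct layout, section placement, includes and helper macros are illustrative assumptions in the style of the selftests added later in this series, not something this patch mandates:

  #include <vmlinux.h>
  #include <bpf/bpf_helpers.h>
  #include "bpf_experimental.h"

  /* Mirrors the selftest style: one custom data section per lock/list pair. */
  #define private(name) SEC(".data." #name) __attribute__((aligned(8)))

  #ifndef container_of
  #define container_of(ptr, type, member) \
          ((type *)((void *)(ptr) - __builtin_offsetof(type, member)))
  #endif

  struct foo {
          int data;
          struct bpf_list_node node;
  };

  /* Lock and list head live in the same allocation (one global data section),
   * so the verifier can pair them; the list head is annotated with the
   * "contains" BTF declaration tag (wrapped by __contains later in the series).
   */
  private(A) struct bpf_spin_lock glock;
  private(A) struct bpf_list_head ghead __contains(foo, node);

  SEC("tc")
  int list_example(struct __sk_buff *ctx)
  {
          struct bpf_list_node *n;
          struct foo *f;

          f = bpf_obj_new(typeof(*f));
          if (!f)
                  return 0;
          f->data = 42;

          bpf_spin_lock(&glock);
          bpf_list_push_front(&ghead, &f->node); /* list takes ownership of f */
          n = bpf_list_pop_back(&ghead);         /* returns ptr at node offset */
          bpf_spin_unlock(&glock);

          if (!n)
                  return 0;
          f = container_of(n, struct foo, node); /* back to the entry type */
          bpf_obj_drop(f);                       /* we own the popped object */
          return 0;
  }
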
With these additions, programs can now manage their own linked lists and store their objects in them. Signed-off-by: Kumar Kartikeya Dwivedi --- kernel/bpf/helpers.c | 55 +++- kernel/bpf/verifier.c | 292 +++++++++++++++++- .../testing/selftests/bpf/bpf_experimental.h | 28 ++ 3 files changed, 361 insertions(+), 14 deletions(-) diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index 71d803ca0c1d..212e791d7452 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -1780,6 +1780,50 @@ void bpf_obj_drop_impl(void *p__alloc, void *meta__ign) bpf_mem_free(&bpf_global_ma, p); } +static void __bpf_list_add(struct bpf_list_node *node, struct bpf_list_head *head, bool tail) +{ + struct list_head *n = (void *)node, *h = (void *)head; + + if (unlikely(!h->next)) + INIT_LIST_HEAD(h); + if (unlikely(!n->next)) + INIT_LIST_HEAD(n); + tail ? list_add_tail(n, h) : list_add(n, h); +} + +void bpf_list_push_front(struct bpf_list_head *head, struct bpf_list_node *node) +{ + return __bpf_list_add(node, head, false); +} + +void bpf_list_push_back(struct bpf_list_head *head, struct bpf_list_node *node) +{ + return __bpf_list_add(node, head, true); +} + +static struct bpf_list_node *__bpf_list_del(struct bpf_list_head *head, bool tail) +{ + struct list_head *n, *h = (void *)head; + + if (unlikely(!h->next)) + INIT_LIST_HEAD(h); + if (list_empty(h)) + return NULL; + n = tail ? h->prev : h->next; + list_del_init(n); + return (struct bpf_list_node *)n; +} + +struct bpf_list_node *bpf_list_pop_front(struct bpf_list_head *head) +{ + return __bpf_list_del(head, false); +} + +struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head) +{ + return __bpf_list_del(head, true); +} + __diag_pop(); BTF_SET8_START(generic_btf_ids) @@ -1788,6 +1832,10 @@ BTF_ID_FLAGS(func, crash_kexec, KF_DESTRUCTIVE) #endif BTF_ID_FLAGS(func, bpf_obj_new_impl, KF_ACQUIRE | KF_RET_NULL) BTF_ID_FLAGS(func, bpf_obj_drop_impl, KF_RELEASE) +BTF_ID_FLAGS(func, bpf_list_push_front) +BTF_ID_FLAGS(func, bpf_list_push_back) +BTF_ID_FLAGS(func, bpf_list_pop_front, KF_ACQUIRE | KF_RET_NULL) +BTF_ID_FLAGS(func, bpf_list_pop_back, KF_ACQUIRE | KF_RET_NULL) BTF_SET8_END(generic_btf_ids) static const struct btf_kfunc_id_set generic_kfunc_set = { @@ -1797,7 +1845,12 @@ static const struct btf_kfunc_id_set generic_kfunc_set = { static int __init kfunc_init(void) { - return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &generic_kfunc_set); + int ret; + + ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &generic_kfunc_set); + if (ret) + return ret; + return register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &generic_kfunc_set); } late_initcall(kfunc_init); diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index e194c3feb01f..c034ca2d9479 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -7879,6 +7879,9 @@ struct bpf_kfunc_call_arg_meta { struct btf *btf; u32 btf_id; } arg_obj_drop; + struct { + struct btf_field *field; + } arg_list_head; }; static bool is_kfunc_acquire(struct bpf_kfunc_call_arg_meta *meta) @@ -7989,13 +7992,17 @@ static bool is_kfunc_arg_ret_buf_size(const struct btf *btf, enum { KF_ARG_DYNPTR_ID, + KF_ARG_LIST_HEAD_ID, + KF_ARG_LIST_NODE_ID, }; BTF_ID_LIST(kf_arg_btf_ids) BTF_ID(struct, bpf_dynptr_kern) +BTF_ID(struct, bpf_list_head) +BTF_ID(struct, bpf_list_node) -static bool is_kfunc_arg_dynptr(const struct btf *btf, - const struct btf_param *arg) +static bool __is_kfunc_ptr_arg_type(const struct btf *btf, + const struct btf_param *arg, int type) { const struct btf_type *t; u32 res_id; @@ 
-8008,7 +8015,22 @@ static bool is_kfunc_arg_dynptr(const struct btf *btf, t = btf_type_skip_modifiers(btf, t->type, &res_id); if (!t) return false; - return btf_types_are_same(btf, res_id, btf_vmlinux, kf_arg_btf_ids[KF_ARG_DYNPTR_ID]); + return btf_types_are_same(btf, res_id, btf_vmlinux, kf_arg_btf_ids[type]); +} + +static bool is_kfunc_arg_dynptr(const struct btf *btf, const struct btf_param *arg) +{ + return __is_kfunc_ptr_arg_type(btf, arg, KF_ARG_DYNPTR_ID); +} + +static bool is_kfunc_arg_list_head(const struct btf *btf, const struct btf_param *arg) +{ + return __is_kfunc_ptr_arg_type(btf, arg, KF_ARG_LIST_HEAD_ID); +} + +static bool is_kfunc_arg_list_node(const struct btf *btf, const struct btf_param *arg) +{ + return __is_kfunc_ptr_arg_type(btf, arg, KF_ARG_LIST_NODE_ID); } /* Returns true if struct is composed of scalars, 4 levels of nesting allowed */ @@ -8065,6 +8087,8 @@ enum kfunc_ptr_arg_type { KF_ARG_PTR_TO_ALLOC_BTF_ID, /* Allocated object */ KF_ARG_PTR_TO_KPTR_STRONG, /* PTR_TO_KPTR but type specific */ KF_ARG_PTR_TO_DYNPTR, + KF_ARG_PTR_TO_LIST_HEAD, + KF_ARG_PTR_TO_LIST_NODE, KF_ARG_PTR_TO_BTF_ID, /* Also covers reg2btf_ids conversions */ KF_ARG_PTR_TO_MEM, KF_ARG_PTR_TO_MEM_SIZE, /* Size derived from next argument, skip it */ @@ -8073,16 +8097,28 @@ enum kfunc_ptr_arg_type { enum special_kfunc_type { KF_bpf_obj_new_impl, KF_bpf_obj_drop_impl, + KF_bpf_list_push_front, + KF_bpf_list_push_back, + KF_bpf_list_pop_front, + KF_bpf_list_pop_back, }; BTF_SET_START(special_kfunc_set) BTF_ID(func, bpf_obj_new_impl) BTF_ID(func, bpf_obj_drop_impl) +BTF_ID(func, bpf_list_push_front) +BTF_ID(func, bpf_list_push_back) +BTF_ID(func, bpf_list_pop_front) +BTF_ID(func, bpf_list_pop_back) BTF_SET_END(special_kfunc_set) BTF_ID_LIST(special_kfunc_list) BTF_ID(func, bpf_obj_new_impl) BTF_ID(func, bpf_obj_drop_impl) +BTF_ID(func, bpf_list_push_front) +BTF_ID(func, bpf_list_push_back) +BTF_ID(func, bpf_list_pop_front) +BTF_ID(func, bpf_list_pop_back) static enum kfunc_ptr_arg_type get_kfunc_ptr_arg_type(struct bpf_verifier_env *env, @@ -8125,6 +8161,12 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env, if (is_kfunc_arg_dynptr(meta->btf, &args[argno])) return KF_ARG_PTR_TO_DYNPTR; + if (is_kfunc_arg_list_head(meta->btf, &args[argno])) + return KF_ARG_PTR_TO_LIST_HEAD; + + if (is_kfunc_arg_list_node(meta->btf, &args[argno])) + return KF_ARG_PTR_TO_LIST_NODE; + if ((base_type(reg->type) == PTR_TO_BTF_ID || reg2btf_ids[base_type(reg->type)])) { if (!btf_type_is_struct(ref_t)) { verbose(env, "kernel function %s args#%d pointer type %s %s is not supported\n", @@ -8220,6 +8262,194 @@ static int process_kf_arg_ptr_to_kptr_strong(struct bpf_verifier_env *env, return 0; } +/* Implementation details: + * + * Each register points to some region of memory, which we define as an + * allocation. Each allocation may embed a bpf_spin_lock which protects any + * special BPF objects (bpf_list_head, bpf_rb_root, etc.) part of the same + * allocation. The lock and the data it protects are colocated in the same + * memory region. + * + * Hence, everytime a register holds a pointer value pointing to such + * allocation, the verifier preserves a unique reg->id for it. + * + * The verifier remembers the lock 'ptr' and the lock 'id' whenever + * bpf_spin_lock is called. 
+ * + * To enable this, lock state in the verifier captures two values: + * active_lock.ptr = Register's type specific pointer + * active_lock.id = A unique ID for each register pointer value + * + * Currently, PTR_TO_MAP_VALUE and PTR_TO_BTF_ID | MEM_ALLOC are the two + * supported register types. + * + * The active_lock.ptr in case of map values is the reg->map_ptr, and in case of + * allocated objects is the reg->btf pointer. + * + * The active_lock.id is non-unique for maps supporting direct_value_addr, as we + * can establish the provenance of the map value statically for each distinct + * lookup into such maps. They always contain a single map value hence unique + * IDs for each pseudo load pessimizes the algorithm and rejects valid programs. + * + * So, in case of global variables, they use array maps with max_entries = 1, + * hence their active_lock.ptr becomes map_ptr and id = 0 (since they all point + * into the same map value as max_entries is 1, as described above). + * + * In case of inner map lookups, the inner map pointer has same map_ptr as the + * outer map pointer (in verifier context), but each lookup into an inner map + * assigns a fresh reg->id to the lookup, so while lookups into distinct inner + * maps from the same outer map share the same map_ptr as active_lock.ptr, they + * will get different reg->id assigned to each lookup, hence different + * active_lock.id. + * + * In case of allocated objects, active_lock.ptr is the reg->btf, and the + * reg->id is a unique ID preserved after the NULL pointer check on the pointer + * returned from bpf_obj_new. Each allocation receives a new reg->id. + */ +static int check_reg_allocation_locked(struct bpf_verifier_env *env, struct bpf_reg_state *reg) +{ + void *ptr; + u32 id; + + switch ((int)reg->type) { + case PTR_TO_MAP_VALUE: + ptr = reg->map_ptr; + break; + case PTR_TO_BTF_ID | MEM_ALLOC: + ptr = reg->btf; + break; + default: + verbose(env, "verifier internal error: unknown reg type for lock check\n"); + return -EFAULT; + } + id = reg->id; + + if (!env->cur_state->active_lock.ptr) + return -EINVAL; + if (env->cur_state->active_lock.ptr != ptr || + env->cur_state->active_lock.id != id) { + verbose(env, "held lock and object are not in the same allocation\n"); + return -EINVAL; + } + return 0; +} + +static bool is_bpf_list_api_kfunc(u32 btf_id) +{ + return btf_id == special_kfunc_list[KF_bpf_list_push_front] || + btf_id == special_kfunc_list[KF_bpf_list_push_back] || + btf_id == special_kfunc_list[KF_bpf_list_pop_front] || + btf_id == special_kfunc_list[KF_bpf_list_pop_back]; +} + +static int process_kf_arg_ptr_to_list_head(struct bpf_verifier_env *env, + struct bpf_reg_state *reg, u32 regno, + struct bpf_kfunc_call_arg_meta *meta) +{ + struct btf_record *rec = NULL; + struct btf_field *field; + u32 list_head_off; + + if (meta->btf != btf_vmlinux || !is_bpf_list_api_kfunc(meta->func_id)) { + verbose(env, "verifier internal error: bpf_list_head argument for unknown kfunc\n"); + return -EFAULT; + } + + if (reg->type == PTR_TO_MAP_VALUE) { + rec = reg->map_ptr->record; + } else /* PTR_TO_BTF_ID | MEM_ALLOC */ { + struct btf_struct_meta *meta; + + meta = btf_find_struct_meta(reg->btf, reg->btf_id); + if (!meta) { + verbose(env, "bpf_list_head not found for allocated object\n"); + return -EINVAL; + } + rec = meta->record; + } + + if (!tnum_is_const(reg->var_off)) { + verbose(env, + "R%d doesn't have constant offset. 
bpf_list_head has to be at the constant offset\n", + regno); + return -EINVAL; + } + + list_head_off = reg->off + reg->var_off.value; + field = btf_record_find(rec, list_head_off, BPF_LIST_HEAD); + if (!field) { + verbose(env, "bpf_list_head not found at offset=%u\n", list_head_off); + return -EINVAL; + } + + /* All functions require bpf_list_head to be protected using a bpf_spin_lock */ + if (check_reg_allocation_locked(env, reg)) { + verbose(env, "bpf_spin_lock at off=%d must be held for bpf_list_head\n", + rec->spin_lock_off); + return -EINVAL; + } + + if (meta->arg_list_head.field) { + verbose(env, "verifier internal error: repeating bpf_list_head arg\n"); + return -EFAULT; + } + meta->arg_list_head.field = field; + return 0; +} + +static int process_kf_arg_ptr_to_list_node(struct bpf_verifier_env *env, + struct bpf_reg_state *reg, u32 regno, + struct bpf_kfunc_call_arg_meta *meta) +{ + struct btf_struct_meta *struct_meta; + struct btf_field *field; + struct btf_record *rec; + u32 list_node_off; + + if (meta->btf != btf_vmlinux || + (meta->func_id != special_kfunc_list[KF_bpf_list_push_front] && + meta->func_id != special_kfunc_list[KF_bpf_list_push_back])) { + verbose(env, "verifier internal error: bpf_list_head argument for unknown kfunc\n"); + return -EFAULT; + } + + if (!tnum_is_const(reg->var_off)) { + verbose(env, + "R%d doesn't have constant offset. bpf_list_head has to be at the constant offset\n", + regno); + return -EINVAL; + } + + struct_meta = btf_find_struct_meta(reg->btf, reg->btf_id); + if (!struct_meta) { + verbose(env, "bpf_list_node not found for allocated object\n"); + return -EINVAL; + } + rec = struct_meta->record; + + list_node_off = reg->off + reg->var_off.value; + field = btf_record_find(rec, list_node_off, BPF_LIST_NODE); + if (!field || field->offset != list_node_off) { + verbose(env, "bpf_list_node not found at offset=%u\n", list_node_off); + return -EINVAL; + } + + field = meta->arg_list_head.field; + + if (!btf_struct_ids_match(&env->log, reg->btf, reg->btf_id, 0, field->list_head.btf, + field->list_head.value_btf_id, true)) { + verbose(env, "bpf_list_head value type does not match arg#1\n"); + return -EINVAL; + } + + if (list_node_off != field->list_head.node_offset) { + verbose(env, "arg#1 offset must be for bpf_list_node at off=%d\n", + field->list_head.node_offset); + return -EINVAL; + } + return 0; +} + static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_arg_meta *meta) { const char *func_name = meta->func_name, *ref_tname; @@ -8340,6 +8570,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ break; case KF_ARG_PTR_TO_KPTR_STRONG: case KF_ARG_PTR_TO_DYNPTR: + case KF_ARG_PTR_TO_LIST_HEAD: + case KF_ARG_PTR_TO_LIST_NODE: case KF_ARG_PTR_TO_MEM: case KF_ARG_PTR_TO_MEM_SIZE: /* Trusted by default */ @@ -8404,6 +8636,33 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ return -EINVAL; } break; + case KF_ARG_PTR_TO_LIST_HEAD: + if (reg->type != PTR_TO_MAP_VALUE && + reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) { + verbose(env, "arg#%d expected pointer to map value or allocated object\n", i); + return -EINVAL; + } + if (reg->type == (PTR_TO_BTF_ID | MEM_ALLOC) && !reg->ref_obj_id) { + verbose(env, "allocated object must be referenced\n"); + return -EINVAL; + } + ret = process_kf_arg_ptr_to_list_head(env, reg, regno, meta); + if (ret < 0) + return ret; + break; + case KF_ARG_PTR_TO_LIST_NODE: + if (reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) { + verbose(env, "arg#%d 
expected pointer to allocated object\n", i); + return -EINVAL; + } + if (!reg->ref_obj_id) { + verbose(env, "allocated object must be referenced\n"); + return -EINVAL; + } + ret = process_kf_arg_ptr_to_list_node(env, reg, regno, meta); + if (ret < 0) + return ret; + break; case KF_ARG_PTR_TO_BTF_ID: /* Only base_type is checked, further checks are done here */ if (reg->type != PTR_TO_BTF_ID && @@ -8535,11 +8794,6 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, ptr_type = btf_type_skip_modifiers(desc_btf, t->type, &ptr_type_id); if (meta.btf == btf_vmlinux && btf_id_set_contains(&special_kfunc_set, meta.func_id)) { - if (!btf_type_is_void(ptr_type)) { - verbose(env, "kernel function %s must have void * return type\n", - meta.func_name); - return -EINVAL; - } if (meta.func_id == special_kfunc_list[KF_bpf_obj_new_impl]) { const struct btf_type *ret_t; struct btf *ret_btf; @@ -8577,6 +8831,15 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, env->insn_aux_data[insn_idx].kptr_struct_meta = btf_find_struct_meta(meta.arg_obj_drop.btf, meta.arg_obj_drop.btf_id); + } else if (meta.func_id == special_kfunc_list[KF_bpf_list_pop_front] || + meta.func_id == special_kfunc_list[KF_bpf_list_pop_back]) { + struct btf_field *field = meta.arg_list_head.field; + + mark_reg_known_zero(env, regs, BPF_REG_0); + regs[BPF_REG_0].type = PTR_TO_BTF_ID | MEM_ALLOC; + regs[BPF_REG_0].btf = field->list_head.btf; + regs[BPF_REG_0].btf_id = field->list_head.value_btf_id; + regs[BPF_REG_0].off = field->list_head.node_offset; } else { verbose(env, "kernel function %s unhandled dynamic return type\n", meta.func_name); @@ -13244,11 +13507,14 @@ static int do_check(struct bpf_verifier_env *env) return -EINVAL; } - if (env->cur_state->active_lock.ptr && - (insn->src_reg == BPF_PSEUDO_CALL || - insn->imm != BPF_FUNC_spin_unlock)) { - verbose(env, "function calls are not allowed while holding a lock\n"); - return -EINVAL; + if (env->cur_state->active_lock.ptr) { + if ((insn->src_reg == BPF_REG_0 && insn->imm != BPF_FUNC_spin_unlock) || + (insn->src_reg == BPF_PSEUDO_CALL) || + (insn->src_reg == BPF_PSEUDO_KFUNC_CALL && + (insn->off != 0 || !is_bpf_list_api_kfunc(insn->imm)))) { + verbose(env, "function calls are not allowed while holding a lock\n"); + return -EINVAL; + } } if (insn->src_reg == BPF_PSEUDO_CALL) err = check_func_call(env, insn, &env->insn_idx); diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h index 8473395a11af..d6b143275e82 100644 --- a/tools/testing/selftests/bpf/bpf_experimental.h +++ b/tools/testing/selftests/bpf/bpf_experimental.h @@ -35,4 +35,32 @@ extern void bpf_obj_drop_impl(void *kptr, void *meta) __ksym; /* Convenience macro to wrap over bpf_obj_drop_impl */ #define bpf_obj_drop(kptr) bpf_obj_drop_impl(kptr, NULL) +/* Description + * Add a new entry to the beginning of the BPF linked list. + * Returns + * Void. + */ +extern void bpf_list_push_front(struct bpf_list_head *head, struct bpf_list_node *node) __ksym; + +/* Description + * Add a new entry to the end of the BPF linked list. + * Returns + * Void. + */ +extern void bpf_list_push_back(struct bpf_list_head *head, struct bpf_list_node *node) __ksym; + +/* Description + * Remove the entry at the beginning of the BPF linked list. + * Returns + * Pointer to bpf_list_node of deleted entry, or NULL if list is empty. 
+ */ +extern struct bpf_list_node *bpf_list_pop_front(struct bpf_list_head *head) __ksym; + +/* Description + * Remove the entry at the end of the BPF linked list. + * Returns + * Pointer to bpf_list_node of deleted entry, or NULL if list is empty. + */ +extern struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head) __ksym; + #endif From patchwork Mon Nov 14 19:15:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13042735 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 978F9C43217 for ; Mon, 14 Nov 2022 19:17:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236537AbiKNTRN (ORCPT ); Mon, 14 Nov 2022 14:17:13 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57340 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237153AbiKNTRB (ORCPT ); Mon, 14 Nov 2022 14:17:01 -0500 Received: from mail-pj1-x1044.google.com (mail-pj1-x1044.google.com [IPv6:2607:f8b0:4864:20::1044]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0885927B3F for ; Mon, 14 Nov 2022 11:16:56 -0800 (PST) Received: by mail-pj1-x1044.google.com with SMTP id v4-20020a17090a088400b00212cb0ed97eso11630996pjc.5 for ; Mon, 14 Nov 2022 11:16:56 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=S7tY5u6yMRQNosh2VIGkVKHtaOl/+ZUuNBZUXf6u/ww=; b=AC8qPZXjjllbsHT1b+PgGcYQKTXmodAtdXbhOK+vnwORoqfihD05sUV5X7ydklf3qz 6yYOfsFWhYkH9OP78IKJFodmskQLeYFwxmHD09Zm9uIjpLtpnTDvfuR+i5LSn/MeRfmu /wOQeDdiuflZxN9SvFy6SGYn3hinSrdeziOjw0euYEKvYy8V8cqQfZxrpgPMgucBUpYV uw6CtMZ1hDdphGrYCjt1SCcn9VSJ833cZ4FT46YYXbqQg5pdrCdbNRBlkCqLcAHYMFmH jT3V0/D6YQO3HdUCwkj+3hv5u1g3f2aQMDHiOsezHKGZPBPaTzrGfW0Er1GZUbulKwEI sFqg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=S7tY5u6yMRQNosh2VIGkVKHtaOl/+ZUuNBZUXf6u/ww=; b=3wA/SDzktCDWf0MKtMNEYp2Wx8E5wuoFj1gGPFzL/S5haQEADTnM+f4fSx0iijkYBr JQ+LCR8IC/ZAJuYYZi5fIrV1UZkrPFA4LhLDsnrWfnJzwiyEd0vXZFohWh5eHgQKifj7 9rlQM3Gdc+YaaxhJ6cLPaD/2krXfDsIB9HtpNjT9UPNsyMearXtoK5271FHc7Pf6BqJs d1MmuGpkgMoS+LDV4lY2odo5MWx/he8iCzqx3+TNbf9l3PGtIF76KpvE/eoZCPGalECE o9twjLDxvM+zgpoaOeSOsu2FLw70yJafgl1IQOrcO4tZsPC6+G+iDvb4S23qINn2imDN Zydg== X-Gm-Message-State: ANoB5pnmvQoMaxp6FNIgp18PBtju9We2mGf1JpjmR5uuiqiZYrOQhfhF vXlEUbbOS2/IanwOS70gYtmIOfg+wcHZyQ== X-Google-Smtp-Source: AA0mqf4Ud+dtR8uDqSCWe7lNt4Evkp9ody7ZQCzl9f4YH/d2n8OqSt1Kv5V4Dz0EI5lm/KsQ2C1/Lw== X-Received: by 2002:a17:90a:ea98:b0:212:ec36:4d04 with SMTP id h24-20020a17090aea9800b00212ec364d04mr15196658pjz.158.1668453415176; Mon, 14 Nov 2022 11:16:55 -0800 (PST) Received: from localhost ([59.152.80.69]) by smtp.gmail.com with ESMTPSA id y17-20020a17090322d100b0017bb38e4588sm7923159plg.135.2022.11.14.11.16.54 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Nov 2022 11:16:54 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: 
Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Dave Marchevsky Subject: [PATCH bpf-next v7 21/26] bpf: Add 'release on unlock' logic for bpf_list_push_{front,back} Date: Tue, 15 Nov 2022 00:45:42 +0530 Message-Id: <20221114191547.1694267-22-memxor@gmail.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com> References: <20221114191547.1694267-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=5174; i=memxor@gmail.com; h=from:subject; bh=CRPhTjFlo/zGiQz/ST2tY8babKNLG0Y1BEeMid8sYPs=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjcpPJaY1RS8gBjKbZ9dEeB6QXy2sCIZP9eT449jiM MxES8VaJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY3KTyQAKCRBM4MiGSL8RyvheEA CkLsEihclOj8QG4z0EUkEkBbQMUIoCoYVMPSCz87SzVKjJihuiwweFmKM0RJxAab5MZL77iHOF8uAv 9utPqhHpZeoG3xYrt9+cd0r707Ce+qY6oL7UAVBSvybQmO32LKOYGcfRWiTLzWxe94bsxg+Uk7resk j9IBwOz5Uel60Gp/eUdXreXpPf+JH08SI1KSr2LZJGyDw+1Q3wgK4Jpn2mbYArXUSL8B7FNp+tZcGX 6aZOfMfFS6WqLIx8DeAZk1a32BmNcf+pbhR2eLPKe3lLI6TxpKL+BLYOPJ+OgV/qzUTHs0bW1J2Kwu ruJ2Ud5E7gcnVm00O6K5w8HCcfZH2TAPUY7pHZTFDikTw6O3F09F4vfdd+1yMyhfcTsmQdZm6Dju0N 0sFnKjgWvfIuQ3OorrPiLVb600PCkZgZ9LhzjlFGLAlVUAMrmlhya7fERiPzD8xKHh79lY/Qw75I4y M+z2JY4y833OhX605VdOF8y+ABBr2VQNfPMSYLU/JfnPNJKkxkgd8QXG5cWrPPFNrWJyAPWP2ovpXC wBxk2LT6OdGUJtPda1DLReYv5ZRldKRvrEOhkTnD5AD5omYRvjl7FKl1SoTSjASm5WRZ6qug/VaVyk W9tRFGvQAQ7iIBEZxw+TT4lRbzpfcdEaAylNGp6Fh+I9ttbF+JdQ1KSXXAWw== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net This commit implements the delayed release logic for bpf_list_push_front and bpf_list_push_back. Once a node has been added to the list, it's pointer changes to PTR_UNTRUSTED. However, it is only released once the lock protecting the list is unlocked. For such PTR_TO_BTF_ID | MEM_ALLOC with PTR_UNTRUSTED set but an active ref_obj_id, it is still permitted to read them as long as the lock is held. Writing to them is not allowed. This allows having read access to push items we no longer own until we release the lock guarding the list, allowing a little more flexibility when working with these APIs. Note that enabling write support has fairly tricky interactions with what happens inside the critical section. Just as an example, currently, bpf_obj_drop is not permitted, but if it were, being able to write to the PTR_UNTRUSTED pointer while the object gets released back to the memory allocator would violate safety properties we wish to guarantee (i.e. not crashing the kernel). The memory could be reused for a different type in the BPF program or even in the kernel as it gets eventually kfree'd. Not enabling bpf_obj_drop inside the critical section would appear to prevent all of the above, but that is more of an artifical limitation right now. Since the write support is tangled with how we handle potential aliasing of nodes inside the critical section that may or may not be part of the list anymore, it has been deferred to a future patch. 
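To illustrate, reusing the hypothetical struct foo, glock and ghead declarations from the sketch in the previous patch's log (all illustrative, not part of this patch), a rough sketch of what remains legal inside the critical section after a push:

  SEC("tc")
  int push_and_peek(struct __sk_buff *ctx)
  {
          struct foo *f;
          int data;

          f = bpf_obj_new(typeof(*f));
          if (!f)
                  return 0;
          f->data = 13;

          bpf_spin_lock(&glock);
          bpf_list_push_front(&ghead, &f->node);
          /* f is now PTR_UNTRUSTED but still has an active ref_obj_id, so
           * reads through it stay legal while the lock is held ...
           */
          data = f->data;
          /* ... but writes (f->data = 42;) or bpf_obj_drop(f) here would be
           * rejected.
           */
          bpf_spin_unlock(&glock); /* the reference to f is released here */

          /* Any use of f past this point is rejected by the verifier. */
          return data == 13 ? 0 : 1;
  }
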
Signed-off-by: Kumar Kartikeya Dwivedi Acked-by: Dave Marchevsky --- include/linux/bpf_verifier.h | 5 ++++ kernel/bpf/verifier.c | 48 +++++++++++++++++++++++++++++++++++- 2 files changed, 52 insertions(+), 1 deletion(-) diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index 306fc1d6cc4a..740e774e1c7a 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -223,6 +223,11 @@ struct bpf_reference_state { * exiting a callback function. */ int callback_ref; + /* Mark the reference state to release the registers sharing the same id + * on bpf_spin_unlock (for nodes that we will lose ownership to but are + * safe to access inside the critical section). + */ + bool release_on_unlock; }; /* state of the program: diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index c034ca2d9479..8725c2ee7eb4 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -5648,7 +5648,9 @@ static int process_spin_lock(struct bpf_verifier_env *env, int regno, cur->active_lock.ptr = btf; cur->active_lock.id = reg->id; } else { + struct bpf_func_state *fstate = cur_func(env); void *ptr; + int i; if (map) ptr = map; @@ -5666,6 +5668,16 @@ static int process_spin_lock(struct bpf_verifier_env *env, int regno, } cur->active_lock.ptr = NULL; cur->active_lock.id = 0; + + for (i = 0; i < fstate->acquired_refs; i++) { + /* WARN because this reference state cannot be freed + * before this point, as bpf_spin_lock critical section + * does not allow functions that release the allocated + * object immediately. + */ + if (fstate->refs[i].release_on_unlock) + WARN_ON_ONCE(release_reference(env, fstate->refs[i].id)); + } } return 0; } @@ -8262,6 +8274,39 @@ static int process_kf_arg_ptr_to_kptr_strong(struct bpf_verifier_env *env, return 0; } +static int ref_set_release_on_unlock(struct bpf_verifier_env *env, u32 ref_obj_id) +{ + struct bpf_func_state *state = cur_func(env); + struct bpf_reg_state *reg; + int i; + + /* bpf_spin_lock only allows calling list_push and list_pop, no BPF + * subprogs, no global functions. This means that the references would + * not be released inside the critical section but they may be added to + * the reference state, and the acquired_refs are never copied out for a + * different frame as BPF to BPF calls don't work in bpf_spin_lock + * critical sections. 
+ */ + if (!ref_obj_id) { + verbose(env, "verifier internal error: ref_obj_id is zero for release_on_unlock\n"); + return -EFAULT; + } + for (i = 0; i < state->acquired_refs; i++) { + if (state->refs[i].id == ref_obj_id) { + WARN_ON_ONCE(state->refs[i].release_on_unlock); + state->refs[i].release_on_unlock = true; + /* Now mark everyone sharing same ref_obj_id as untrusted */ + bpf_for_each_reg_in_vstate(env->cur_state, state, reg, ({ + if (reg->ref_obj_id == ref_obj_id) + reg->type |= PTR_UNTRUSTED; + })); + return 0; + } + } + verbose(env, "verifier internal error: ref state missing for ref_obj_id\n"); + return -EFAULT; +} + /* Implementation details: * * Each register points to some region of memory, which we define as an @@ -8447,7 +8492,8 @@ static int process_kf_arg_ptr_to_list_node(struct bpf_verifier_env *env, field->list_head.node_offset); return -EINVAL; } - return 0; + /* Set arg#1 for expiration after unlock */ + return ref_set_release_on_unlock(env, reg->ref_obj_id); } static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_arg_meta *meta) From patchwork Mon Nov 14 19:15:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13042737 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 10C2AC43219 for ; Mon, 14 Nov 2022 19:17:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237049AbiKNTRO (ORCPT ); Mon, 14 Nov 2022 14:17:14 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57350 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237169AbiKNTRB (ORCPT ); Mon, 14 Nov 2022 14:17:01 -0500 Received: from mail-pg1-x542.google.com (mail-pg1-x542.google.com [IPv6:2607:f8b0:4864:20::542]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 70DC927CF1 for ; Mon, 14 Nov 2022 11:16:58 -0800 (PST) Received: by mail-pg1-x542.google.com with SMTP id b62so11113039pgc.0 for ; Mon, 14 Nov 2022 11:16:58 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=MvMB46Vqr7eLH23u46H2EgXkWOTOPfK9QqKgaAYTLEg=; b=VN63TrDtzjbKb6NMp2HlbVGHkd5LmqMZ2B+nuEN98/MEfHAfLQiAoLsY6UpPKarJlv QysVphllI2X5m2+W0tnCKaVPxt6NsoY16CDPBODEYLGAvQyp5cFMvWMIAY2ADUSv2nlh 4HZlObpDXLfrKphSRmj05RI+CDy8sr+6kbrqEyd8cz+hLVSFh5vv9nwVCndXYPkbai98 FedckmKnA8cSAcOXOQWuk67JIOHTl68wyIrXKr+3JiAVbMOJh/Rk9KrcgDDwerZhrtHa NvrfnOJpbOWcozxuKTEogZCidwghbLBkGeqLqqd3APR+d3hvQajB4uXhrCEOO/x6jE6b mzuQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=MvMB46Vqr7eLH23u46H2EgXkWOTOPfK9QqKgaAYTLEg=; b=eZTpgl+eTFnxQCx6/ujMFGjAgPXpTe8swtBKy87zYep+qgE9WTjY0tOfYV6126MzXT 3bSQiVc6DlmcbLtCjRwDl/GJ0vHKSczmoeSuI83ebk5YFlxy5XSwOrOiD2FVFz7Wf9wY uAngAeCFi8Ad3iQtwYQw2HdMQ9DrjHM4rOijwaw01cu0AvqG7HIuQYvMZ1Q28BPkC5br 1vJYaTEObGVsVZNWXPnJ5AfriovsLxuMJedW/+TeHVktBts9svwSI2lJM9IhXyDVVE+D 
1wAp01rFEv/yhNxH83HcU0oGPcNSSmummnHq6JbT7994hblbY5as/fJ3EFjwXgPdmGaE GxPw== X-Gm-Message-State: ANoB5pk7NaSXpjPQRQcrS9dtmIQk+V1BpAX0KBUwfx2sHq5Lc5LV4LI8 K1ocSCTA+635gm1rOfTIgzilSfGhcTUL3Q== X-Google-Smtp-Source: AA0mqf4AGaa8tfoFPIHbuaOXvCkm7khrMAcyFWVmhQOSBRSbniXpyF2Lg5/mPc38aH46NeD/+Khiyw== X-Received: by 2002:a65:5908:0:b0:46f:1e8f:1633 with SMTP id f8-20020a655908000000b0046f1e8f1633mr12806431pgu.556.1668453417917; Mon, 14 Nov 2022 11:16:57 -0800 (PST) Received: from localhost ([59.152.80.69]) by smtp.gmail.com with ESMTPSA id o1-20020aa79781000000b0056ba02feda1sm7268623pfp.94.2022.11.14.11.16.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Nov 2022 11:16:57 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Dave Marchevsky Subject: [PATCH bpf-next v7 22/26] selftests/bpf: Add __contains macro to bpf_experimental.h Date: Tue, 15 Nov 2022 00:45:43 +0530 Message-Id: <20221114191547.1694267-23-memxor@gmail.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com> References: <20221114191547.1694267-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=956; i=memxor@gmail.com; h=from:subject; bh=VcTPpE/Iv5Qqfs22C7MYEC67+XombxopywbBgVHOTXY=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjcpPJat2m++oaewJcZt2UDMupXoNxYDkyNqAT1xwv xR2sFL+JAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY3KTyQAKCRBM4MiGSL8RyrDfD/ 49lcztYLkqhJTXf3MAJmFopoj1vvxf3HjBxJSxsbzJ5pVVPb4mZFQ9PSLCakVjvvCf05lw20Uj673n gp1TqsZi4QFZBWS8wZj3UUy9tj/ikvSeM9Qa+x0HzNStqyslmol3hMqAUpNY4IbfN4kv4Q5cvnyuV9 guWPFmK325T/i8PZMfYwLiUboGPj7Pfni0xsYnHkdhxiaxy/naLOp292BAMcyZ8CZBI9apEw07IgeU Z/n08cQeCJLQtdQ0rX7ZHvl/zg/cuPoZKxvxEKZo87Ts7Rv6e4ztg4YNFRmzaKNrubHw7sjN5hu/2W 99TZZFiUW2G31LPSbfZR8xzDAN3xuiWlcDwOd+MLvLSYChdlwRG5WfjU0KVmRXhmHa8otiF6JrzZDH Ekf2n661gyq+eBM+uNr6zcDuOX0mnkUOTXFlI3jeJk7L048z51RaEThSvwP89FHdf/E1HY11qjMYyr Hix3W/9uWkxpeGpJDvAIudH0yIs4wQor5G01yoC5rLaPLHjirfkd68FPBTXxG/NlJy/lN5NWy8ZmAJ q2Sacnm3PHoM68WaTjyLMJe6Dg4mwSDZATOxrAY/DFa7Nd8nNiM9wmq1ktM3YPNhtOkvT/wYwiXE/o XsQtFBZO6Qw6HEkxIBzqfCRPNOG173eCmtze8Tma+oKc611pJBkzUh+FOR9A== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Add user facing __contains macro which provides a convenient wrapper over the verbose kernel specific BTF declaration tag required to annotate BPF list head structs in user types. Signed-off-by: Kumar Kartikeya Dwivedi Acked-by: Dave Marchevsky --- tools/testing/selftests/bpf/bpf_experimental.h | 2 ++ 1 file changed, 2 insertions(+) diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h index d6b143275e82..424f7bbbfe9b 100644 --- a/tools/testing/selftests/bpf/bpf_experimental.h +++ b/tools/testing/selftests/bpf/bpf_experimental.h @@ -6,6 +6,8 @@ #include #include +#define __contains(name, node) __attribute__((btf_decl_tag("contains:" #name ":" #node))) + /* Description * Allocates an object of the type represented by 'local_type_id' in * program BTF. 
User may use the bpf_core_type_id_local macro to pass the From patchwork Mon Nov 14 19:15:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13042738 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2E72CC4321E for ; Mon, 14 Nov 2022 19:17:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236432AbiKNTRP (ORCPT ); Mon, 14 Nov 2022 14:17:15 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57364 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237199AbiKNTRD (ORCPT ); Mon, 14 Nov 2022 14:17:03 -0500 Received: from mail-pf1-x443.google.com (mail-pf1-x443.google.com [IPv6:2607:f8b0:4864:20::443]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4AC1827DD2 for ; Mon, 14 Nov 2022 11:17:01 -0800 (PST) Received: by mail-pf1-x443.google.com with SMTP id 130so11923219pfu.8 for ; Mon, 14 Nov 2022 11:17:01 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=orrU9bkwct7hSaHpeFwTQHPvPh3WfieLrB51Oysef3o=; b=LwyyW9Bf0YoqQe2ggMqqZwhn4PTkC4Lb3MiPwCKm9GOf+dUpmPMTaMvT5HWqDgqk/Y oO9ATAqZAWmWsYaBonyIjPH87FXek3xCOw5SPe05Ewo3c+8w5NIZxFdHduefmnUHZDbD Uks6CHKvAW/zBAtSQLeandQPDtlgJ26G9AVurAB1+ekX4j8cIL2tioXfGrndlrmSoDGO qX28iJyqf2elaGXjgTvf+274Gf+JYnPVjiQJ6fskxrNvdw8f80IGj4VtkDf6xWnZLrEn aTDMUwlLE2ArW1+DhQFj9AuvoGSS5Lc9yVRkzIfPH0BwdChjkMlzHwecSc3Bdx/laf3K tIGw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=orrU9bkwct7hSaHpeFwTQHPvPh3WfieLrB51Oysef3o=; b=l3qkTbYBx27O2AlLmKAx4g5Ix09nqLbj2N68uhguHMOCKiSmMjKZFBnB+ZwaMemrZd BU4och/G3r+ulyhvZMzDnV9aT6qKA6wqLv6qg65tNm9cyfd/QRI/XfHySUHMEUiclii6 JCwLZNO7wI+qt/SBGLs5xOruiUY/e1/A28tCNegSj4xoENcbWvwm5haD7ZLLka3jKY8o R159JSe+BP8xxIsGHE0cO3saHTHmVCxci837tIvQDZJtZE6++b5GC8TBIA0jSzV6YT9V OdjKUXPIsbnPVGPUNK+zg08snz/EZkzYzCw9bRSz9WZSJY4zfV/3vmdSxNrD7laXcOgY NGkw== X-Gm-Message-State: ANoB5plDiTpz/heaem4AwMZurXASf1JZyvFCOiHWq5FI287IA0v0QhVv EumfOVKGuf2K9CRKE3uHTGiXlfhuZ9KZoQ== X-Google-Smtp-Source: AA0mqf66E1eCVm8naDhSxSjPqfD5uqMFUwJU6Tqd64uCI3dQjf866hfHUcJPOHIci7CQ0CcU/LvW3Q== X-Received: by 2002:a63:1a59:0:b0:473:c377:b82 with SMTP id a25-20020a631a59000000b00473c3770b82mr13401447pgm.113.1668453420566; Mon, 14 Nov 2022 11:17:00 -0800 (PST) Received: from localhost ([59.152.80.69]) by smtp.gmail.com with ESMTPSA id e5-20020aa79805000000b005632f6490aasm7083892pfl.77.2022.11.14.11.16.59 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 14 Nov 2022 11:17:00 -0800 (PST) From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Dave Marchevsky Subject: [PATCH bpf-next v7 23/26] selftests/bpf: Update spinlock selftest Date: Tue, 15 Nov 2022 00:45:44 +0530 Message-Id: <20221114191547.1694267-24-memxor@gmail.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: 
<20221114191547.1694267-1-memxor@gmail.com> References: <20221114191547.1694267-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=4240; i=memxor@gmail.com; h=from:subject; bh=7VEkgc+BO3uo7sABXUFzym09WvMjkfDkhbTvKZAA/+A=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjcpPK4CA2EmruSzd0pkEr5lKkj3lWXgt8hXk7B9gB YUq2/fyJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY3KTygAKCRBM4MiGSL8RypRtD/ 4+agLXY5dQ29as85gdZr6t/TZeoDY9CF2fNs44Jp+u2emmK4LeLzEqGnjrZPq03wi4PgD9RwV231VU 8BLKR2O8NCcWCVqWWyyluoFCTEMmrEfOcJ/6InWabfB5KheZApkKndq9A3vZs5NR4i82J3XKp4ZDFn 28lZRF7qW7jtcRHsz3KN/NDYd9hup7cpB1hbLfq+ya4uvbK/Q65jCbhYIKwsmXrJ0X4RtV02IagBSk 1U99oxRILCYp1iuUKc/5dt/+eJox3EFqcpzvh3kpdIb9UwHGins3ibet7CKz0Pso+jrgGxWNoS+L0Q Tq6a63r77R9STy5nsMcNRhJhIKEAai9W9MUej5hGXra5lPqYUIMeAtjpapx++gTtLo+zQZ6l0rZeWp IHvf1CL3nYKQ4TuZWipC+v9RN7g3GkwGQjhEBhQH8XEEKRBFt+TxWLryraKU+l/xW7ss7WiK8l1e/D VJLks5CuyR3JIHDLKidvOt8jyEcEgeB5URjw9CnsuLfA+RV6f+SAOBbQXxi5le0ATBTbJT2SKzKmM2 8dWkRZO+1NAIc/OEJj/jIr5oHilPW9ZCcLxi54Pus9R5JdrWj2/q2VFlytN8HPLRSXBnV+AwR/O+ov HhK3OFAFAJeedDZbW0D7ljAeaYdf92maUYFaoG++6tJeU8UtbYKBKtTS4y+Q== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Make updates in preparation for adding more test cases to this selftest: - Convert from CHECK_ to ASSERT macros. - Use BPF skeleton - Fix typo sping -> spin - Rename spinlock.c -> spin_lock.c Signed-off-by: Kumar Kartikeya Dwivedi --- .../selftests/bpf/prog_tests/spin_lock.c | 49 +++++++++++++++++++ .../selftests/bpf/prog_tests/spinlock.c | 45 ----------------- .../selftests/bpf/progs/test_spin_lock.c | 4 +- 3 files changed, 51 insertions(+), 47 deletions(-) create mode 100644 tools/testing/selftests/bpf/prog_tests/spin_lock.c delete mode 100644 tools/testing/selftests/bpf/prog_tests/spinlock.c diff --git a/tools/testing/selftests/bpf/prog_tests/spin_lock.c b/tools/testing/selftests/bpf/prog_tests/spin_lock.c new file mode 100644 index 000000000000..fab061e9d77c --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/spin_lock.c @@ -0,0 +1,49 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include + +#include "test_spin_lock.skel.h" + +static void *spin_lock_thread(void *arg) +{ + int err, prog_fd = *(u32 *) arg; + LIBBPF_OPTS(bpf_test_run_opts, topts, + .data_in = &pkt_v4, + .data_size_in = sizeof(pkt_v4), + .repeat = 10000, + ); + + err = bpf_prog_test_run_opts(prog_fd, &topts); + ASSERT_OK(err, "test_run"); + ASSERT_OK(topts.retval, "test_run retval"); + pthread_exit(arg); +} + +void test_spinlock(void) +{ + struct test_spin_lock *skel; + pthread_t thread_id[4]; + int prog_fd, i; + void *ret; + + skel = test_spin_lock__open_and_load(); + if (!ASSERT_OK_PTR(skel, "test_spin_lock__open_and_load")) + return; + prog_fd = bpf_program__fd(skel->progs.bpf_spin_lock_test); + for (i = 0; i < 4; i++) { + int err; + + err = pthread_create(&thread_id[i], NULL, &spin_lock_thread, &prog_fd); + if (!ASSERT_OK(err, "pthread_create")) + goto end; + } + + for (i = 0; i < 4; i++) { + if (!ASSERT_OK(pthread_join(thread_id[i], &ret), "pthread_join")) + goto end; + if (!ASSERT_EQ(ret, &prog_fd, "ret == prog_fd")) + goto end; + } +end: + test_spin_lock__destroy(skel); +} diff --git a/tools/testing/selftests/bpf/prog_tests/spinlock.c b/tools/testing/selftests/bpf/prog_tests/spinlock.c deleted file mode 100644 index 15eb1372d771..000000000000 --- a/tools/testing/selftests/bpf/prog_tests/spinlock.c +++ /dev/null @@ -1,45 +0,0 
@@ -// SPDX-License-Identifier: GPL-2.0 -#include -#include - -static void *spin_lock_thread(void *arg) -{ - int err, prog_fd = *(u32 *) arg; - LIBBPF_OPTS(bpf_test_run_opts, topts, - .data_in = &pkt_v4, - .data_size_in = sizeof(pkt_v4), - .repeat = 10000, - ); - - err = bpf_prog_test_run_opts(prog_fd, &topts); - ASSERT_OK(err, "test_run"); - ASSERT_OK(topts.retval, "test_run retval"); - pthread_exit(arg); -} - -void test_spinlock(void) -{ - const char *file = "./test_spin_lock.bpf.o"; - pthread_t thread_id[4]; - struct bpf_object *obj = NULL; - int prog_fd; - int err = 0, i; - void *ret; - - err = bpf_prog_test_load(file, BPF_PROG_TYPE_CGROUP_SKB, &obj, &prog_fd); - if (CHECK_FAIL(err)) { - printf("test_spin_lock:bpf_prog_test_load errno %d\n", errno); - goto close_prog; - } - for (i = 0; i < 4; i++) - if (CHECK_FAIL(pthread_create(&thread_id[i], NULL, - &spin_lock_thread, &prog_fd))) - goto close_prog; - - for (i = 0; i < 4; i++) - if (CHECK_FAIL(pthread_join(thread_id[i], &ret) || - ret != (void *)&prog_fd)) - goto close_prog; -close_prog: - bpf_object__close(obj); -} diff --git a/tools/testing/selftests/bpf/progs/test_spin_lock.c b/tools/testing/selftests/bpf/progs/test_spin_lock.c index 7e88309d3229..5bd10409285b 100644 --- a/tools/testing/selftests/bpf/progs/test_spin_lock.c +++ b/tools/testing/selftests/bpf/progs/test_spin_lock.c @@ -45,8 +45,8 @@ struct { #define CREDIT_PER_NS(delta, rate) (((delta) * rate) >> 20) -SEC("tc") -int bpf_sping_lock_test(struct __sk_buff *skb) +SEC("cgroup_skb/ingress") +int bpf_spin_lock_test(struct __sk_buff *skb) { volatile int credit = 0, max_credit = 100, pkt_len = 64; struct hmap_elem zero = {}, *val; From patchwork Mon Nov 14 19:15:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13042739 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 939BCC4332F for ; Mon, 14 Nov 2022 19:17:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237198AbiKNTRW (ORCPT ); Mon, 14 Nov 2022 14:17:22 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57860 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237252AbiKNTRG (ORCPT ); Mon, 14 Nov 2022 14:17:06 -0500 Received: from mail-pl1-x642.google.com (mail-pl1-x642.google.com [IPv6:2607:f8b0:4864:20::642]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1AA45286D6 for ; Mon, 14 Nov 2022 11:17:04 -0800 (PST) Received: by mail-pl1-x642.google.com with SMTP id l2so10928554pld.13 for ; Mon, 14 Nov 2022 11:17:04 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=D5mhxE7t9/w1waItLCLixY08yGr4GoIXj2ByOScU2oQ=; b=BGfJE7zc7NNEzaFkfiJEhC/aMY7xSAhE2b4t9Nw+J9ulMvGM7Ltz3XpU2oex5xGTQr YVNOFtBqNK3Io4YUqE+tp93ujIDwfJMxbW5rA4/LcnHubNf7bKtdoAMiP3Hjc8+BSl9d AOLC3UBbzhJPKih4KTR9JLVcaNtUqzYAsvhr0zqnctRBGYStmdZCxLmGhAWxOURsXPyq hzzSaaT1AZbPZVMh8K1cePKkY7XylHFwKJUFQlr4R8KmaYSlPS+ORxapnLiFeqJOyzA7 VU/E/04qebRH/qK5pKW6WWhd/2qDx+vK5D7BWjRYbpQpvpIRuQikvyhqnZ88seGhQq73 wuNw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Dave Marchevsky
Subject: [PATCH bpf-next v7 24/26] selftests/bpf: Add failure test cases for spin lock pairing
Date: Tue, 15 Nov 2022 00:45:45 +0530
Message-Id: <20221114191547.1694267-25-memxor@gmail.com>
In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com>
References: <20221114191547.1694267-1-memxor@gmail.com>

First, ensure that whenever a bpf_spin_lock is present in an allocation, the reg->id is preserved. This won't be true for global variables, however, since they have a single map value per map, hence the verifier hardcodes it to 0 (so that multiple pseudo ldimm64 insns can yield the same lock object per map at a given offset). Next, add test cases for all possible combinations (kptr, global, map value, inner map value). Since we lifted the restriction on locking in inner maps, also add test cases for them.
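For orientation, the failure programs added below (generated by the CHECK macro in test_spin_lock_fail.c) all reduce to the same shape: a lock belonging to one allocation is taken and a lock belonging to another is released. A condensed sketch, reusing only calls exercised by this series (bpf_obj_new, bpf_obj_drop, bpf_spin_lock, bpf_spin_unlock); the struct and program names here are illustrative, not part of the patch:

  struct foo {
          struct bpf_spin_lock lock;
          int data;
  };

  SEC("?tc")
  int lock_mismatch_sketch(void *ctx)
  {
          struct foo *f1, *f2;

          f1 = bpf_obj_new(typeof(*f1));
          if (!f1)
                  return 0;
          f2 = bpf_obj_new(typeof(*f2));
          if (!f2) {
                  bpf_obj_drop(f1);
                  return 0;
          }
          bpf_spin_lock(&f1->lock);   /* lock taken on f1's allocation ... */
          bpf_spin_unlock(&f2->lock); /* ... released on f2's; the verifier must reject this */
          return 0;
  }

As in the selftests, the two references are deliberately left unreleased: the program is expected to fail verification at the mismatched unlock and never load.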
Currently, each lookup into an inner map gets a fresh reg->id, so even if the reg->map_ptr is same, they will be treated as separate allocations and the incorrect unlock pairing will be rejected. Signed-off-by: Kumar Kartikeya Dwivedi --- .../selftests/bpf/prog_tests/spin_lock.c | 89 +++++++- .../selftests/bpf/progs/test_spin_lock_fail.c | 204 ++++++++++++++++++ 2 files changed, 292 insertions(+), 1 deletion(-) create mode 100644 tools/testing/selftests/bpf/progs/test_spin_lock_fail.c diff --git a/tools/testing/selftests/bpf/prog_tests/spin_lock.c b/tools/testing/selftests/bpf/prog_tests/spin_lock.c index fab061e9d77c..72282e92a78a 100644 --- a/tools/testing/selftests/bpf/prog_tests/spin_lock.c +++ b/tools/testing/selftests/bpf/prog_tests/spin_lock.c @@ -3,6 +3,79 @@ #include #include "test_spin_lock.skel.h" +#include "test_spin_lock_fail.skel.h" + +static char log_buf[1024 * 1024]; + +static struct { + const char *prog_name; + const char *err_msg; +} spin_lock_fail_tests[] = { + { "lock_id_kptr_preserve", + "5: (bf) r1 = r0 ; R0_w=ptr_foo(id=2,ref_obj_id=2,off=0,imm=0) " + "R1_w=ptr_foo(id=2,ref_obj_id=2,off=0,imm=0) refs=2\n6: (85) call bpf_this_cpu_ptr#154\n" + "R1 type=ptr_ expected=percpu_ptr_" }, + { "lock_id_global_zero", + "; R1_w=map_value(off=0,ks=4,vs=4,imm=0)\n2: (85) call bpf_this_cpu_ptr#154\n" + "R1 type=map_value expected=percpu_ptr_" }, + { "lock_id_mapval_preserve", + "8: (bf) r1 = r0 ; R0_w=map_value(id=1,off=0,ks=4,vs=8,imm=0) " + "R1_w=map_value(id=1,off=0,ks=4,vs=8,imm=0)\n9: (85) call bpf_this_cpu_ptr#154\n" + "R1 type=map_value expected=percpu_ptr_" }, + { "lock_id_innermapval_preserve", + "13: (bf) r1 = r0 ; R0=map_value(id=2,off=0,ks=4,vs=8,imm=0) " + "R1_w=map_value(id=2,off=0,ks=4,vs=8,imm=0)\n14: (85) call bpf_this_cpu_ptr#154\n" + "R1 type=map_value expected=percpu_ptr_" }, + { "lock_id_mismatch_kptr_kptr", "bpf_spin_unlock of different lock" }, + { "lock_id_mismatch_kptr_global", "bpf_spin_unlock of different lock" }, + { "lock_id_mismatch_kptr_mapval", "bpf_spin_unlock of different lock" }, + { "lock_id_mismatch_kptr_innermapval", "bpf_spin_unlock of different lock" }, + { "lock_id_mismatch_global_global", "bpf_spin_unlock of different lock" }, + { "lock_id_mismatch_global_kptr", "bpf_spin_unlock of different lock" }, + { "lock_id_mismatch_global_mapval", "bpf_spin_unlock of different lock" }, + { "lock_id_mismatch_global_innermapval", "bpf_spin_unlock of different lock" }, + { "lock_id_mismatch_mapval_mapval", "bpf_spin_unlock of different lock" }, + { "lock_id_mismatch_mapval_kptr", "bpf_spin_unlock of different lock" }, + { "lock_id_mismatch_mapval_global", "bpf_spin_unlock of different lock" }, + { "lock_id_mismatch_mapval_innermapval", "bpf_spin_unlock of different lock" }, + { "lock_id_mismatch_innermapval_innermapval1", "bpf_spin_unlock of different lock" }, + { "lock_id_mismatch_innermapval_innermapval2", "bpf_spin_unlock of different lock" }, + { "lock_id_mismatch_innermapval_kptr", "bpf_spin_unlock of different lock" }, + { "lock_id_mismatch_innermapval_global", "bpf_spin_unlock of different lock" }, + { "lock_id_mismatch_innermapval_mapval", "bpf_spin_unlock of different lock" }, +}; + +static void test_spin_lock_fail_prog(const char *prog_name, const char *err_msg) +{ + LIBBPF_OPTS(bpf_object_open_opts, opts, .kernel_log_buf = log_buf, + .kernel_log_size = sizeof(log_buf), + .kernel_log_level = 1); + struct test_spin_lock_fail *skel; + struct bpf_program *prog; + int ret; + + skel = test_spin_lock_fail__open_opts(&opts); + if 
(!ASSERT_OK_PTR(skel, "test_spin_lock_fail__open_opts")) + return; + + prog = bpf_object__find_program_by_name(skel->obj, prog_name); + if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name")) + goto end; + + bpf_program__set_autoload(prog, true); + + ret = test_spin_lock_fail__load(skel); + if (!ASSERT_ERR(ret, "test_spin_lock_fail__load must fail")) + goto end; + + if (!ASSERT_OK_PTR(strstr(log_buf, err_msg), "expected error message")) { + fprintf(stderr, "Expected: %s\n", err_msg); + fprintf(stderr, "Verifier: %s\n", log_buf); + } + +end: + test_spin_lock_fail__destroy(skel); +} static void *spin_lock_thread(void *arg) { @@ -19,7 +92,7 @@ static void *spin_lock_thread(void *arg) pthread_exit(arg); } -void test_spinlock(void) +void test_spin_lock_success(void) { struct test_spin_lock *skel; pthread_t thread_id[4]; @@ -47,3 +120,17 @@ void test_spinlock(void) end: test_spin_lock__destroy(skel); } + +void test_spin_lock(void) +{ + int i; + + test_spin_lock_success(); + + for (i = 0; i < ARRAY_SIZE(spin_lock_fail_tests); i++) { + if (!test__start_subtest(spin_lock_fail_tests[i].prog_name)) + continue; + test_spin_lock_fail_prog(spin_lock_fail_tests[i].prog_name, + spin_lock_fail_tests[i].err_msg); + } +} diff --git a/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c b/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c new file mode 100644 index 000000000000..86cd183ef6dc --- /dev/null +++ b/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c @@ -0,0 +1,204 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include +#include +#include "bpf_experimental.h" + +struct foo { + struct bpf_spin_lock lock; + int data; +}; + +struct array_map { + __uint(type, BPF_MAP_TYPE_ARRAY); + __type(key, int); + __type(value, struct foo); + __uint(max_entries, 1); +} array_map SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS); + __uint(max_entries, 1); + __type(key, int); + __type(value, int); + __array(values, struct array_map); +} map_of_maps SEC(".maps") = { + .values = { + [0] = &array_map, + }, +}; + +SEC(".data.A") struct bpf_spin_lock lockA; +SEC(".data.B") struct bpf_spin_lock lockB; + +SEC("?tc") +int lock_id_kptr_preserve(void *ctx) +{ + struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + bpf_this_cpu_ptr(f); + return 0; +} + +SEC("?tc") +int lock_id_global_zero(void *ctx) +{ + bpf_this_cpu_ptr(&lockA); + return 0; +} + +SEC("?tc") +int lock_id_mapval_preserve(void *ctx) +{ + struct foo *f; + int key = 0; + + f = bpf_map_lookup_elem(&array_map, &key); + if (!f) + return 0; + bpf_this_cpu_ptr(f); + return 0; +} + +SEC("?tc") +int lock_id_innermapval_preserve(void *ctx) +{ + struct foo *f; + int key = 0; + void *map; + + map = bpf_map_lookup_elem(&map_of_maps, &key); + if (!map) + return 0; + f = bpf_map_lookup_elem(map, &key); + if (!f) + return 0; + bpf_this_cpu_ptr(f); + return 0; +} + +#define CHECK(test, A, B) \ + SEC("?tc") \ + int lock_id_mismatch_##test(void *ctx) \ + { \ + struct foo *f1, *f2, *v, *iv; \ + int key = 0; \ + void *map; \ + \ + map = bpf_map_lookup_elem(&map_of_maps, &key); \ + if (!map) \ + return 0; \ + iv = bpf_map_lookup_elem(map, &key); \ + if (!iv) \ + return 0; \ + v = bpf_map_lookup_elem(&array_map, &key); \ + if (!v) \ + return 0; \ + f1 = bpf_obj_new(typeof(*f1)); \ + if (!f1) \ + return 0; \ + f2 = bpf_obj_new(typeof(*f2)); \ + if (!f2) { \ + bpf_obj_drop(f1); \ + return 0; \ + } \ + bpf_spin_lock(A); \ + bpf_spin_unlock(B); \ + return 0; \ + } + +CHECK(kptr_kptr, &f1->lock, &f2->lock); +CHECK(kptr_global, 
&f1->lock, &lockA); +CHECK(kptr_mapval, &f1->lock, &v->lock); +CHECK(kptr_innermapval, &f1->lock, &iv->lock); + +CHECK(global_global, &lockA, &lockB); +CHECK(global_kptr, &lockA, &f1->lock); +CHECK(global_mapval, &lockA, &v->lock); +CHECK(global_innermapval, &lockA, &iv->lock); + +SEC("?tc") +int lock_id_mismatch_mapval_mapval(void *ctx) +{ + struct foo *f1, *f2; + int key = 0; + + f1 = bpf_map_lookup_elem(&array_map, &key); + if (!f1) + return 0; + f2 = bpf_map_lookup_elem(&array_map, &key); + if (!f2) + return 0; + + bpf_spin_lock(&f1->lock); + f1->data = 42; + bpf_spin_unlock(&f2->lock); + + return 0; +} + +CHECK(mapval_kptr, &v->lock, &f1->lock); +CHECK(mapval_global, &v->lock, &lockB); +CHECK(mapval_innermapval, &v->lock, &iv->lock); + +SEC("?tc") +int lock_id_mismatch_innermapval_innermapval1(void *ctx) +{ + struct foo *f1, *f2; + int key = 0; + void *map; + + map = bpf_map_lookup_elem(&map_of_maps, &key); + if (!map) + return 0; + f1 = bpf_map_lookup_elem(map, &key); + if (!f1) + return 0; + f2 = bpf_map_lookup_elem(map, &key); + if (!f2) + return 0; + + bpf_spin_lock(&f1->lock); + f1->data = 42; + bpf_spin_unlock(&f2->lock); + + return 0; +} + +SEC("?tc") +int lock_id_mismatch_innermapval_innermapval2(void *ctx) +{ + struct foo *f1, *f2; + int key = 0; + void *map; + + map = bpf_map_lookup_elem(&map_of_maps, &key); + if (!map) + return 0; + f1 = bpf_map_lookup_elem(map, &key); + if (!f1) + return 0; + map = bpf_map_lookup_elem(&map_of_maps, &key); + if (!map) + return 0; + f2 = bpf_map_lookup_elem(map, &key); + if (!f2) + return 0; + + bpf_spin_lock(&f1->lock); + f1->data = 42; + bpf_spin_unlock(&f2->lock); + + return 0; +} + +CHECK(innermapval_kptr, &iv->lock, &f1->lock); +CHECK(innermapval_global, &iv->lock, &lockA); +CHECK(innermapval_mapval, &iv->lock, &v->lock); + +#undef CHECK + +char _license[] SEC("license") = "GPL"; From patchwork Mon Nov 14 19:15:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13042740 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A81B3C43217 for ; Mon, 14 Nov 2022 19:17:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237219AbiKNTRX (ORCPT ); Mon, 14 Nov 2022 14:17:23 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57172 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237286AbiKNTRK (ORCPT ); Mon, 14 Nov 2022 14:17:10 -0500 Received: from mail-pj1-x1041.google.com (mail-pj1-x1041.google.com [IPv6:2607:f8b0:4864:20::1041]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D2D412655F for ; Mon, 14 Nov 2022 11:17:07 -0800 (PST) Received: by mail-pj1-x1041.google.com with SMTP id h14so11217911pjv.4 for ; Mon, 14 Nov 2022 11:17:07 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=oqNJljVDIsXhhzjw1efhRO0V8DNidbAkWSwgvIaFExY=; b=ilo9+YMCgqRGfQ0zySDE4jgzlXTFuzmc2YUaP6FWKyB6OvIY+qkT4PieBFZ6NRkD9r 1qLazDfv9NLMRwujOBDfjLhnYlkxaf0/cJw7kIuH4ex9C6fnKrbDfqutrp/bJrivdEL0 9RnUIoW90yxaJbJARil/1Bu25aZyvRIUd4tOhvQ1uIilvGYm1YeSsfC2a6UGOgEjzsOq 
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Dave Marchevsky
Subject: [PATCH bpf-next v7 25/26] selftests/bpf: Add BPF linked list API tests
Date: Tue, 15 Nov 2022 00:45:46 +0530
Message-Id: <20221114191547.1694267-26-memxor@gmail.com>
In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com>
References: <20221114191547.1694267-1-memxor@gmail.com>

Include various tests covering the success and failure cases. Also, run the success cases at runtime to verify correctness of linked list manipulation routines, in addition to ensuring successful verification.
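For readers skimming the tests below, the success cases all build on the same push/pop pattern. A condensed sketch (the struct and program names are illustrative; private() and __contains come from the selftest headers linked_list.h and bpf_experimental.h, and the bpf_obj_*/bpf_list_* calls are the ones exercised by this series):

  struct foo {
          struct bpf_list_node node;
          int data;
  };

  /* the list head and the lock protecting it live in the same allocation */
  private(A) struct bpf_spin_lock glock;
  private(A) struct bpf_list_head ghead __contains(foo, node);

  SEC("tc")
  int sketch_push_pop(void *ctx)
  {
          struct bpf_list_node *n;
          struct foo *f;

          f = bpf_obj_new(typeof(*f));
          if (!f)
                  return 1;
          f->data = 42;

          bpf_spin_lock(&glock);
          bpf_list_push_front(&ghead, &f->node);  /* ownership moves into the list */
          bpf_spin_unlock(&glock);

          bpf_spin_lock(&glock);
          n = bpf_list_pop_front(&ghead);         /* ownership moves back to the program */
          bpf_spin_unlock(&glock);
          if (!n)
                  return 2;
          bpf_obj_drop(container_of(n, struct foo, node));
          return 0;
  }

The failure tests then perturb this pattern: missing or mismatched lock, writes after push, use after unlock, double push, bad node/head offsets, and unsupported (tracing) program types.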
Signed-off-by: Kumar Kartikeya Dwivedi --- tools/testing/selftests/bpf/DENYLIST.s390x | 1 + .../selftests/bpf/prog_tests/linked_list.c | 253 ++++++++ .../testing/selftests/bpf/progs/linked_list.c | 370 +++++++++++ .../testing/selftests/bpf/progs/linked_list.h | 56 ++ .../selftests/bpf/progs/linked_list_fail.c | 581 ++++++++++++++++++ 5 files changed, 1261 insertions(+) create mode 100644 tools/testing/selftests/bpf/prog_tests/linked_list.c create mode 100644 tools/testing/selftests/bpf/progs/linked_list.c create mode 100644 tools/testing/selftests/bpf/progs/linked_list.h create mode 100644 tools/testing/selftests/bpf/progs/linked_list_fail.c diff --git a/tools/testing/selftests/bpf/DENYLIST.s390x b/tools/testing/selftests/bpf/DENYLIST.s390x index be4e3d47ea3e..072243af93b0 100644 --- a/tools/testing/selftests/bpf/DENYLIST.s390x +++ b/tools/testing/selftests/bpf/DENYLIST.s390x @@ -33,6 +33,7 @@ ksyms_module # test_ksyms_module__open_and_load unex ksyms_module_libbpf # JIT does not support calling kernel function (kfunc) ksyms_module_lskel # test_ksyms_module_lskel__open_and_load unexpected error: -9 (?) libbpf_get_fd_by_id_opts # failed to attach: ERROR: strerror_r(-524)=22 (trampoline) +linked_list # JIT does not support calling kernel function (kfunc) lookup_key # JIT does not support calling kernel function (kfunc) lru_bug # prog 'printk': failed to auto-attach: -524 map_kptr # failed to open_and_load program: -524 (trampoline) diff --git a/tools/testing/selftests/bpf/prog_tests/linked_list.c b/tools/testing/selftests/bpf/prog_tests/linked_list.c new file mode 100644 index 000000000000..e8569db2f3bc --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/linked_list.c @@ -0,0 +1,253 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include + +#include "linked_list.skel.h" +#include "linked_list_fail.skel.h" + +static char log_buf[1024 * 1024]; + +static struct { + const char *prog_name; + const char *err_msg; +} linked_list_fail_tests[] = { +#define TEST(test, off) \ + { #test "_missing_lock_push_front", \ + "bpf_spin_lock at off=" #off " must be held for bpf_list_head" }, \ + { #test "_missing_lock_push_back", \ + "bpf_spin_lock at off=" #off " must be held for bpf_list_head" }, \ + { #test "_missing_lock_pop_front", \ + "bpf_spin_lock at off=" #off " must be held for bpf_list_head" }, \ + { #test "_missing_lock_pop_back", \ + "bpf_spin_lock at off=" #off " must be held for bpf_list_head" }, + TEST(kptr, 32) + TEST(global, 16) + TEST(map, 0) + TEST(inner_map, 0) +#undef TEST +#define TEST(test, op) \ + { #test "_kptr_incorrect_lock_" #op, \ + "held lock and object are not in the same allocation\n" \ + "bpf_spin_lock at off=32 must be held for bpf_list_head" }, \ + { #test "_global_incorrect_lock_" #op, \ + "held lock and object are not in the same allocation\n" \ + "bpf_spin_lock at off=16 must be held for bpf_list_head" }, \ + { #test "_map_incorrect_lock_" #op, \ + "held lock and object are not in the same allocation\n" \ + "bpf_spin_lock at off=0 must be held for bpf_list_head" }, \ + { #test "_inner_map_incorrect_lock_" #op, \ + "held lock and object are not in the same allocation\n" \ + "bpf_spin_lock at off=0 must be held for bpf_list_head" }, + TEST(kptr, push_front) + TEST(kptr, push_back) + TEST(kptr, pop_front) + TEST(kptr, pop_back) + TEST(global, push_front) + TEST(global, push_back) + TEST(global, pop_front) + TEST(global, pop_back) + TEST(map, push_front) + TEST(map, push_back) + TEST(map, pop_front) + TEST(map, pop_back) + TEST(inner_map, push_front) + 
TEST(inner_map, push_back) + TEST(inner_map, pop_front) + TEST(inner_map, pop_back) +#undef TEST + { "map_compat_kprobe", "tracing progs cannot use bpf_list_head yet" }, + { "map_compat_kretprobe", "tracing progs cannot use bpf_list_head yet" }, + { "map_compat_tp", "tracing progs cannot use bpf_list_head yet" }, + { "map_compat_perf", "tracing progs cannot use bpf_list_head yet" }, + { "map_compat_raw_tp", "tracing progs cannot use bpf_list_head yet" }, + { "map_compat_raw_tp_w", "tracing progs cannot use bpf_list_head yet" }, + { "obj_type_id_oor", "local type ID argument must be in range [0, U32_MAX]" }, + { "obj_new_no_composite", "bpf_obj_new type ID argument must be of a struct" }, + { "obj_new_no_struct", "bpf_obj_new type ID argument must be of a struct" }, + { "obj_drop_non_zero_off", "R1 must have zero offset when passed to release func" }, + { "new_null_ret", "R0 invalid mem access 'ptr_or_null_'" }, + { "obj_new_acq", "Unreleased reference id=" }, + { "use_after_drop", "invalid mem access 'scalar'" }, + { "ptr_walk_scalar", "type=scalar expected=percpu_ptr_" }, + { "direct_read_lock", "direct access to bpf_spin_lock is disallowed" }, + { "direct_write_lock", "direct access to bpf_spin_lock is disallowed" }, + { "direct_read_head", "direct access to bpf_list_head is disallowed" }, + { "direct_write_head", "direct access to bpf_list_head is disallowed" }, + { "direct_read_node", "direct access to bpf_list_node is disallowed" }, + { "direct_write_node", "direct access to bpf_list_node is disallowed" }, + { "write_after_push_front", "only read is supported" }, + { "write_after_push_back", "only read is supported" }, + { "use_after_unlock_push_front", "invalid mem access 'scalar'" }, + { "use_after_unlock_push_back", "invalid mem access 'scalar'" }, + { "double_push_front", "arg#1 expected pointer to allocated object" }, + { "double_push_back", "arg#1 expected pointer to allocated object" }, + { "no_node_value_type", "bpf_list_node not found for allocated object\n" }, + { "incorrect_value_type", "bpf_list_head value type does not match arg#1" }, + { "incorrect_node_var_off", "variable ptr_ access var_off=(0x0; 0xffffffff) disallowed" }, + { "incorrect_node_off1", "bpf_list_node not found at offset=1" }, + { "incorrect_node_off2", "arg#1 offset must be for bpf_list_node at off=0" }, + { "no_head_type", "bpf_list_head not found for allocated object" }, + { "incorrect_head_var_off1", "R1 doesn't have constant offset" }, + { "incorrect_head_var_off2", "variable ptr_ access var_off=(0x0; 0xffffffff) disallowed" }, + { "incorrect_head_off1", "bpf_list_head not found at offset=17" }, + { "incorrect_head_off2", "bpf_list_head not found at offset=1" }, + { "pop_front_off", + "15: (bf) r1 = r6 ; R1_w=ptr_or_null_foo(id=4,ref_obj_id=4,off=40,imm=0) " + "R6_w=ptr_or_null_foo(id=4,ref_obj_id=4,off=40,imm=0) refs=2,4\n" + "16: (85) call bpf_this_cpu_ptr#154\nR1 type=ptr_or_null_ expected=percpu_ptr_" }, + { "pop_back_off", + "15: (bf) r1 = r6 ; R1_w=ptr_or_null_foo(id=4,ref_obj_id=4,off=40,imm=0) " + "R6_w=ptr_or_null_foo(id=4,ref_obj_id=4,off=40,imm=0) refs=2,4\n" + "16: (85) call bpf_this_cpu_ptr#154\nR1 type=ptr_or_null_ expected=percpu_ptr_" }, +}; + +static void test_linked_list_fail_prog(const char *prog_name, const char *err_msg) +{ + LIBBPF_OPTS(bpf_object_open_opts, opts, .kernel_log_buf = log_buf, + .kernel_log_size = sizeof(log_buf), + .kernel_log_level = 1); + struct linked_list_fail *skel; + struct bpf_program *prog; + int ret; + + skel = linked_list_fail__open_opts(&opts); + if 
(!ASSERT_OK_PTR(skel, "linked_list_fail__open_opts")) + return; + + prog = bpf_object__find_program_by_name(skel->obj, prog_name); + if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name")) + goto end; + + bpf_program__set_autoload(prog, true); + + ret = linked_list_fail__load(skel); + if (!ASSERT_ERR(ret, "linked_list_fail__load must fail")) + goto end; + + if (!ASSERT_OK_PTR(strstr(log_buf, err_msg), "expected error message")) { + fprintf(stderr, "Expected: %s\n", err_msg); + fprintf(stderr, "Verifier: %s\n", log_buf); + } + +end: + linked_list_fail__destroy(skel); +} + +static void clear_fields(struct bpf_map *map) +{ + char buf[24]; + int key = 0; + + memset(buf, 0xff, sizeof(buf)); + ASSERT_OK(bpf_map__update_elem(map, &key, sizeof(key), buf, sizeof(buf), 0), "check_and_free_fields"); +} + +enum { + TEST_ALL, + PUSH_POP, + PUSH_POP_MULT, + LIST_IN_LIST, +}; + +static void test_linked_list_success(int mode, bool leave_in_map) +{ + LIBBPF_OPTS(bpf_test_run_opts, opts, + .data_in = &pkt_v4, + .data_size_in = sizeof(pkt_v4), + .repeat = 1, + ); + struct linked_list *skel; + int ret; + + skel = linked_list__open_and_load(); + if (!ASSERT_OK_PTR(skel, "linked_list__open_and_load")) + return; + + if (mode == LIST_IN_LIST) + goto lil; + if (mode == PUSH_POP_MULT) + goto ppm; + + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.map_list_push_pop), &opts); + ASSERT_OK(ret, "map_list_push_pop"); + ASSERT_OK(opts.retval, "map_list_push_pop retval"); + if (!leave_in_map) + clear_fields(skel->maps.array_map); + + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.inner_map_list_push_pop), &opts); + ASSERT_OK(ret, "inner_map_list_push_pop"); + ASSERT_OK(opts.retval, "inner_map_list_push_pop retval"); + if (!leave_in_map) + clear_fields(skel->maps.inner_map); + + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.global_list_push_pop), &opts); + ASSERT_OK(ret, "global_list_push_pop"); + ASSERT_OK(opts.retval, "global_list_push_pop retval"); + if (!leave_in_map) + clear_fields(skel->maps.data_A); + + if (mode == PUSH_POP) + goto end; + +ppm: + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.map_list_push_pop_multiple), &opts); + ASSERT_OK(ret, "map_list_push_pop_multiple"); + ASSERT_OK(opts.retval, "map_list_push_pop_multiple retval"); + if (!leave_in_map) + clear_fields(skel->maps.array_map); + + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.inner_map_list_push_pop_multiple), &opts); + ASSERT_OK(ret, "inner_map_list_push_pop_multiple"); + ASSERT_OK(opts.retval, "inner_map_list_push_pop_multiple retval"); + if (!leave_in_map) + clear_fields(skel->maps.inner_map); + + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.global_list_push_pop_multiple), &opts); + ASSERT_OK(ret, "global_list_push_pop_multiple"); + ASSERT_OK(opts.retval, "global_list_push_pop_multiple retval"); + if (!leave_in_map) + clear_fields(skel->maps.data_A); + + if (mode == PUSH_POP_MULT) + goto end; + +lil: + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.map_list_in_list), &opts); + ASSERT_OK(ret, "map_list_in_list"); + ASSERT_OK(opts.retval, "map_list_in_list retval"); + if (!leave_in_map) + clear_fields(skel->maps.array_map); + + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.inner_map_list_in_list), &opts); + ASSERT_OK(ret, "inner_map_list_in_list"); + ASSERT_OK(opts.retval, "inner_map_list_in_list retval"); + if (!leave_in_map) + clear_fields(skel->maps.inner_map); + + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.global_list_in_list), &opts); + 
ASSERT_OK(ret, "global_list_in_list"); + ASSERT_OK(opts.retval, "global_list_in_list retval"); + if (!leave_in_map) + clear_fields(skel->maps.data_A); +end: + linked_list__destroy(skel); +} + +void test_linked_list(void) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(linked_list_fail_tests); i++) { + if (!test__start_subtest(linked_list_fail_tests[i].prog_name)) + continue; + test_linked_list_fail_prog(linked_list_fail_tests[i].prog_name, + linked_list_fail_tests[i].err_msg); + } + test_linked_list_success(PUSH_POP, false); + test_linked_list_success(PUSH_POP, true); + test_linked_list_success(PUSH_POP_MULT, false); + test_linked_list_success(PUSH_POP_MULT, true); + test_linked_list_success(LIST_IN_LIST, false); + test_linked_list_success(LIST_IN_LIST, true); + test_linked_list_success(TEST_ALL, false); +} diff --git a/tools/testing/selftests/bpf/progs/linked_list.c b/tools/testing/selftests/bpf/progs/linked_list.c new file mode 100644 index 000000000000..2c7b615c6d41 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/linked_list.c @@ -0,0 +1,370 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include +#include +#include +#include "bpf_experimental.h" + +#ifndef ARRAY_SIZE +#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0])) +#endif + +#include "linked_list.h" + +static __always_inline +int list_push_pop(struct bpf_spin_lock *lock, struct bpf_list_head *head, bool leave_in_map) +{ + struct bpf_list_node *n; + struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 2; + + bpf_spin_lock(lock); + n = bpf_list_pop_front(head); + bpf_spin_unlock(lock); + if (n) { + bpf_obj_drop(container_of(n, struct foo, node)); + bpf_obj_drop(f); + return 3; + } + + bpf_spin_lock(lock); + n = bpf_list_pop_back(head); + bpf_spin_unlock(lock); + if (n) { + bpf_obj_drop(container_of(n, struct foo, node)); + bpf_obj_drop(f); + return 4; + } + + + bpf_spin_lock(lock); + f->data = 42; + bpf_list_push_front(head, &f->node); + bpf_spin_unlock(lock); + if (leave_in_map) + return 0; + bpf_spin_lock(lock); + n = bpf_list_pop_back(head); + bpf_spin_unlock(lock); + if (!n) + return 5; + f = container_of(n, struct foo, node); + if (f->data != 42) { + bpf_obj_drop(f); + return 6; + } + + bpf_spin_lock(lock); + f->data = 13; + bpf_list_push_front(head, &f->node); + bpf_spin_unlock(lock); + bpf_spin_lock(lock); + n = bpf_list_pop_front(head); + bpf_spin_unlock(lock); + if (!n) + return 7; + f = container_of(n, struct foo, node); + if (f->data != 13) { + bpf_obj_drop(f); + return 8; + } + bpf_obj_drop(f); + + bpf_spin_lock(lock); + n = bpf_list_pop_front(head); + bpf_spin_unlock(lock); + if (n) { + bpf_obj_drop(container_of(n, struct foo, node)); + return 9; + } + + bpf_spin_lock(lock); + n = bpf_list_pop_back(head); + bpf_spin_unlock(lock); + if (n) { + bpf_obj_drop(container_of(n, struct foo, node)); + return 10; + } + return 0; +} + + +static __always_inline +int list_push_pop_multiple(struct bpf_spin_lock *lock, struct bpf_list_head *head, bool leave_in_map) +{ + struct bpf_list_node *n; + struct foo *f[8], *pf; + int i; + + for (i = 0; i < ARRAY_SIZE(f); i++) { + f[i] = bpf_obj_new(typeof(**f)); + if (!f[i]) + return 2; + f[i]->data = i; + bpf_spin_lock(lock); + bpf_list_push_front(head, &f[i]->node); + bpf_spin_unlock(lock); + } + + for (i = 0; i < ARRAY_SIZE(f); i++) { + bpf_spin_lock(lock); + n = bpf_list_pop_front(head); + bpf_spin_unlock(lock); + if (!n) + return 3; + pf = container_of(n, struct foo, node); + if (pf->data != (ARRAY_SIZE(f) - i - 1)) { + bpf_obj_drop(pf); + return 4; + } + 
bpf_spin_lock(lock); + bpf_list_push_back(head, &pf->node); + bpf_spin_unlock(lock); + } + + if (leave_in_map) + return 0; + + for (i = 0; i < ARRAY_SIZE(f); i++) { + bpf_spin_lock(lock); + n = bpf_list_pop_back(head); + bpf_spin_unlock(lock); + if (!n) + return 5; + pf = container_of(n, struct foo, node); + if (pf->data != i) { + bpf_obj_drop(pf); + return 6; + } + bpf_obj_drop(pf); + } + bpf_spin_lock(lock); + n = bpf_list_pop_back(head); + bpf_spin_unlock(lock); + if (n) { + bpf_obj_drop(container_of(n, struct foo, node)); + return 7; + } + + bpf_spin_lock(lock); + n = bpf_list_pop_front(head); + bpf_spin_unlock(lock); + if (n) { + bpf_obj_drop(container_of(n, struct foo, node)); + return 8; + } + return 0; +} + +static __always_inline +int list_in_list(struct bpf_spin_lock *lock, struct bpf_list_head *head, bool leave_in_map) +{ + struct bpf_list_node *n; + struct bar *ba[8], *b; + struct foo *f; + int i; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 2; + for (i = 0; i < ARRAY_SIZE(ba); i++) { + b = bpf_obj_new(typeof(*b)); + if (!b) { + bpf_obj_drop(f); + return 3; + } + b->data = i; + bpf_spin_lock(&f->lock); + bpf_list_push_back(&f->head, &b->node); + bpf_spin_unlock(&f->lock); + } + + bpf_spin_lock(lock); + f->data = 42; + bpf_list_push_front(head, &f->node); + bpf_spin_unlock(lock); + + if (leave_in_map) + return 0; + + bpf_spin_lock(lock); + n = bpf_list_pop_front(head); + bpf_spin_unlock(lock); + if (!n) + return 4; + f = container_of(n, struct foo, node); + if (f->data != 42) { + bpf_obj_drop(f); + return 5; + } + + for (i = 0; i < ARRAY_SIZE(ba); i++) { + bpf_spin_lock(&f->lock); + n = bpf_list_pop_front(&f->head); + bpf_spin_unlock(&f->lock); + if (!n) { + bpf_obj_drop(f); + return 6; + } + b = container_of(n, struct bar, node); + if (b->data != i) { + bpf_obj_drop(f); + bpf_obj_drop(b); + return 7; + } + bpf_obj_drop(b); + } + bpf_spin_lock(&f->lock); + n = bpf_list_pop_front(&f->head); + bpf_spin_unlock(&f->lock); + if (n) { + bpf_obj_drop(f); + bpf_obj_drop(container_of(n, struct bar, node)); + return 8; + } + bpf_obj_drop(f); + return 0; +} + +static __always_inline +int test_list_push_pop(struct bpf_spin_lock *lock, struct bpf_list_head *head) +{ + int ret; + + ret = list_push_pop(lock, head, false); + if (ret) + return ret; + return list_push_pop(lock, head, true); +} + +static __always_inline +int test_list_push_pop_multiple(struct bpf_spin_lock *lock, struct bpf_list_head *head) +{ + int ret; + + ret = list_push_pop_multiple(lock ,head, false); + if (ret) + return ret; + return list_push_pop_multiple(lock, head, true); +} + +static __always_inline +int test_list_in_list(struct bpf_spin_lock *lock, struct bpf_list_head *head) +{ + int ret; + + ret = list_in_list(lock, head, false); + if (ret) + return ret; + return list_in_list(lock, head, true); +} + +SEC("tc") +int map_list_push_pop(void *ctx) +{ + struct map_value *v; + + v = bpf_map_lookup_elem(&array_map, &(int){0}); + if (!v) + return 1; + return test_list_push_pop(&v->lock, &v->head); +} + +SEC("tc") +int inner_map_list_push_pop(void *ctx) +{ + struct map_value *v; + void *map; + + map = bpf_map_lookup_elem(&map_of_maps, &(int){0}); + if (!map) + return 1; + v = bpf_map_lookup_elem(map, &(int){0}); + if (!v) + return 1; + return test_list_push_pop(&v->lock, &v->head); +} + +SEC("tc") +int global_list_push_pop(void *ctx) +{ + return test_list_push_pop(&glock, &ghead); +} + +SEC("tc") +int map_list_push_pop_multiple(void *ctx) +{ + struct map_value *v; + int ret; + + v = bpf_map_lookup_elem(&array_map, 
&(int){0}); + if (!v) + return 1; + return test_list_push_pop_multiple(&v->lock, &v->head); +} + +SEC("tc") +int inner_map_list_push_pop_multiple(void *ctx) +{ + struct map_value *v; + void *map; + int ret; + + map = bpf_map_lookup_elem(&map_of_maps, &(int){0}); + if (!map) + return 1; + v = bpf_map_lookup_elem(map, &(int){0}); + if (!v) + return 1; + return test_list_push_pop_multiple(&v->lock, &v->head); +} + +SEC("tc") +int global_list_push_pop_multiple(void *ctx) +{ + int ret; + + ret = list_push_pop_multiple(&glock, &ghead, false); + if (ret) + return ret; + return list_push_pop_multiple(&glock, &ghead, true); +} + +SEC("tc") +int map_list_in_list(void *ctx) +{ + struct map_value *v; + int ret; + + v = bpf_map_lookup_elem(&array_map, &(int){0}); + if (!v) + return 1; + return test_list_in_list(&v->lock, &v->head); +} + +SEC("tc") +int inner_map_list_in_list(void *ctx) +{ + struct map_value *v; + void *map; + int ret; + + map = bpf_map_lookup_elem(&map_of_maps, &(int){0}); + if (!map) + return 1; + v = bpf_map_lookup_elem(map, &(int){0}); + if (!v) + return 1; + return test_list_in_list(&v->lock, &v->head); +} + +SEC("tc") +int global_list_in_list(void *ctx) +{ + return test_list_in_list(&glock, &ghead); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/linked_list.h b/tools/testing/selftests/bpf/progs/linked_list.h new file mode 100644 index 000000000000..8db80ed64db1 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/linked_list.h @@ -0,0 +1,56 @@ +// SPDX-License-Identifier: GPL-2.0 +#ifndef LINKED_LIST_H +#define LINKED_LIST_H + +#include +#include +#include "bpf_experimental.h" + +struct bar { + struct bpf_list_node node; + int data; +}; + +struct foo { + struct bpf_list_node node; + struct bpf_list_head head __contains(bar, node); + struct bpf_spin_lock lock; + int data; + struct bpf_list_node node2; +}; + +struct map_value { + struct bpf_spin_lock lock; + int data; + struct bpf_list_head head __contains(foo, node); +}; + +struct array_map { + __uint(type, BPF_MAP_TYPE_ARRAY); + __type(key, int); + __type(value, struct map_value); + __uint(max_entries, 1); +}; + +struct array_map array_map SEC(".maps"); +struct array_map inner_map SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS); + __uint(max_entries, 1); + __type(key, int); + __type(value, int); + __array(values, struct array_map); +} map_of_maps SEC(".maps") = { + .values = { + [0] = &inner_map, + }, +}; + +#define private(name) SEC(".data." 
#name) __hidden __attribute__((aligned(8))) + +private(A) struct bpf_spin_lock glock; +private(A) struct bpf_list_head ghead __contains(foo, node); +private(B) struct bpf_spin_lock glock2; + +#endif diff --git a/tools/testing/selftests/bpf/progs/linked_list_fail.c b/tools/testing/selftests/bpf/progs/linked_list_fail.c new file mode 100644 index 000000000000..1d9017240e19 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/linked_list_fail.c @@ -0,0 +1,581 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include +#include +#include +#include "bpf_experimental.h" + +#include "linked_list.h" + +#define INIT \ + struct map_value *v, *v2, *iv, *iv2; \ + struct foo *f, *f1, *f2; \ + struct bar *b; \ + void *map; \ + \ + map = bpf_map_lookup_elem(&map_of_maps, &(int){ 0 }); \ + if (!map) \ + return 0; \ + v = bpf_map_lookup_elem(&array_map, &(int){ 0 }); \ + if (!v) \ + return 0; \ + v2 = bpf_map_lookup_elem(&array_map, &(int){ 0 }); \ + if (!v2) \ + return 0; \ + iv = bpf_map_lookup_elem(map, &(int){ 0 }); \ + if (!iv) \ + return 0; \ + iv2 = bpf_map_lookup_elem(map, &(int){ 0 }); \ + if (!iv2) \ + return 0; \ + f = bpf_obj_new(typeof(*f)); \ + if (!f) \ + return 0; \ + f1 = f; \ + f2 = bpf_obj_new(typeof(*f2)); \ + if (!f2) { \ + bpf_obj_drop(f1); \ + return 0; \ + } \ + b = bpf_obj_new(typeof(*b)); \ + if (!b) { \ + bpf_obj_drop(f2); \ + bpf_obj_drop(f1); \ + return 0; \ + } + +#define CHECK(test, op, hexpr) \ + SEC("?tc") \ + int test##_missing_lock_##op(void *ctx) \ + { \ + INIT; \ + void (*p)(void *) = (void *)&bpf_list_##op; \ + p(hexpr); \ + return 0; \ + } + +CHECK(kptr, push_front, &f->head); +CHECK(kptr, push_back, &f->head); +CHECK(kptr, pop_front, &f->head); +CHECK(kptr, pop_back, &f->head); + +CHECK(global, push_front, &ghead); +CHECK(global, push_back, &ghead); +CHECK(global, pop_front, &ghead); +CHECK(global, pop_back, &ghead); + +CHECK(map, push_front, &v->head); +CHECK(map, push_back, &v->head); +CHECK(map, pop_front, &v->head); +CHECK(map, pop_back, &v->head); + +CHECK(inner_map, push_front, &iv->head); +CHECK(inner_map, push_back, &iv->head); +CHECK(inner_map, pop_front, &iv->head); +CHECK(inner_map, pop_back, &iv->head); + +#undef CHECK + +#define CHECK(test, op, lexpr, hexpr) \ + SEC("?tc") \ + int test##_incorrect_lock_##op(void *ctx) \ + { \ + INIT; \ + void (*p)(void *) = (void *)&bpf_list_##op; \ + bpf_spin_lock(lexpr); \ + p(hexpr); \ + return 0; \ + } + +#define CHECK_OP(op) \ + CHECK(kptr_kptr, op, &f1->lock, &f2->head); \ + CHECK(kptr_global, op, &f1->lock, &ghead); \ + CHECK(kptr_map, op, &f1->lock, &v->head); \ + CHECK(kptr_inner_map, op, &f1->lock, &iv->head); \ + \ + CHECK(global_global, op, &glock2, &ghead); \ + CHECK(global_kptr, op, &glock, &f1->head); \ + CHECK(global_map, op, &glock, &v->head); \ + CHECK(global_inner_map, op, &glock, &iv->head); \ + \ + CHECK(map_map, op, &v->lock, &v2->head); \ + CHECK(map_kptr, op, &v->lock, &f2->head); \ + CHECK(map_global, op, &v->lock, &ghead); \ + CHECK(map_inner_map, op, &v->lock, &iv->head); \ + \ + CHECK(inner_map_inner_map, op, &iv->lock, &iv2->head); \ + CHECK(inner_map_kptr, op, &iv->lock, &f2->head); \ + CHECK(inner_map_global, op, &iv->lock, &ghead); \ + CHECK(inner_map_map, op, &iv->lock, &v->head); + +CHECK_OP(push_front); +CHECK_OP(push_back); +CHECK_OP(pop_front); +CHECK_OP(pop_back); + +#undef CHECK +#undef CHECK_OP +#undef INIT + +SEC("?kprobe/xyz") +int map_compat_kprobe(void *ctx) +{ + bpf_list_push_front(&ghead, NULL); + return 0; +} + +SEC("?kretprobe/xyz") +int map_compat_kretprobe(void *ctx) +{ 
+ bpf_list_push_front(&ghead, NULL); + return 0; +} + +SEC("?tracepoint/xyz") +int map_compat_tp(void *ctx) +{ + bpf_list_push_front(&ghead, NULL); + return 0; +} + +SEC("?perf_event") +int map_compat_perf(void *ctx) +{ + bpf_list_push_front(&ghead, NULL); + return 0; +} + +SEC("?raw_tp/xyz") +int map_compat_raw_tp(void *ctx) +{ + bpf_list_push_front(&ghead, NULL); + return 0; +} + +SEC("?raw_tp.w/xyz") +int map_compat_raw_tp_w(void *ctx) +{ + bpf_list_push_front(&ghead, NULL); + return 0; +} + +SEC("?tc") +int obj_type_id_oor(void *ctx) +{ + bpf_obj_new_impl(~0UL, NULL); + return 0; +} + +SEC("?tc") +int obj_new_no_composite(void *ctx) +{ + bpf_obj_new_impl(bpf_core_type_id_local(int), (void *)42); + return 0; +} + +SEC("?tc") +int obj_new_no_struct(void *ctx) +{ + + bpf_obj_new(union { int data; unsigned udata; }); + return 0; +} + +SEC("?tc") +int obj_drop_non_zero_off(void *ctx) +{ + void *f; + + f = bpf_obj_new(struct foo); + if (!f) + return 0; + bpf_obj_drop(f+1); + return 0; +} + +SEC("?tc") +int new_null_ret(void *ctx) +{ + return bpf_obj_new(struct foo)->data; +} + +SEC("?tc") +int obj_new_acq(void *ctx) +{ + bpf_obj_new(struct foo); + return 0; +} + +SEC("?tc") +int use_after_drop(void *ctx) +{ + struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + bpf_obj_drop(f); + return f->data; +} + +SEC("?tc") +int ptr_walk_scalar(void *ctx) +{ + struct test1 { + struct test2 { + struct test2 *next; + } *ptr; + } *p; + + p = bpf_obj_new(typeof(*p)); + if (!p) + return 0; + bpf_this_cpu_ptr(p->ptr); + return 0; +} + +SEC("?tc") +int direct_read_lock(void *ctx) +{ + struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + return *(int *)&f->lock; +} + +SEC("?tc") +int direct_write_lock(void *ctx) +{ + struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + *(int *)&f->lock = 0; + return 0; +} + +SEC("?tc") +int direct_read_head(void *ctx) +{ + struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + return *(int *)&f->head; +} + +SEC("?tc") +int direct_write_head(void *ctx) +{ + struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + *(int *)&f->head = 0; + return 0; +} + +SEC("?tc") +int direct_read_node(void *ctx) +{ + struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + return *(int *)&f->node; +} + +SEC("?tc") +int direct_write_node(void *ctx) +{ + struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + *(int *)&f->node = 0; + return 0; +} + +static __always_inline +int write_after_op(void (*push_op)(void *head, void *node)) +{ + struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + bpf_spin_lock(&glock); + push_op(&ghead, &f->node); + f->data = 42; + bpf_spin_unlock(&glock); + + return 0; +} + +SEC("?tc") +int write_after_push_front(void *ctx) +{ + return write_after_op((void *)bpf_list_push_front); +} + +SEC("?tc") +int write_after_push_back(void *ctx) +{ + return write_after_op((void *)bpf_list_push_back); +} + +static __always_inline +int use_after_unlock(void (*op)(void *head, void *node)) +{ + struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + bpf_spin_lock(&glock); + f->data = 42; + op(&ghead, &f->node); + bpf_spin_unlock(&glock); + + return f->data; +} + +SEC("?tc") +int use_after_unlock_push_front(void *ctx) +{ + return use_after_unlock((void *)bpf_list_push_front); +} + +SEC("?tc") +int use_after_unlock_push_back(void *ctx) +{ + return use_after_unlock((void *)bpf_list_push_back); +} + +static __always_inline +int 
list_double_add(void (*op)(void *head, void *node)) +{ + struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + bpf_spin_lock(&glock); + op(&ghead, &f->node); + op(&ghead, &f->node); + bpf_spin_unlock(&glock); + + return 0; +} + +SEC("?tc") +int double_push_front(void *ctx) +{ + return list_double_add((void *)bpf_list_push_front); +} + +SEC("?tc") +int double_push_back(void *ctx) +{ + return list_double_add((void *)bpf_list_push_back); +} + +SEC("?tc") +int no_node_value_type(void *ctx) +{ + void *p; + + p = bpf_obj_new(struct { int data; }); + if (!p) + return 0; + bpf_spin_lock(&glock); + bpf_list_push_front(&ghead, p); + bpf_spin_unlock(&glock); + + return 0; +} + +SEC("?tc") +int incorrect_value_type(void *ctx) +{ + struct bar *b; + + b = bpf_obj_new(typeof(*b)); + if (!b) + return 0; + bpf_spin_lock(&glock); + bpf_list_push_front(&ghead, &b->node); + bpf_spin_unlock(&glock); + + return 0; +} + +SEC("?tc") +int incorrect_node_var_off(struct __sk_buff *ctx) +{ + struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + bpf_spin_lock(&glock); + bpf_list_push_front(&ghead, (void *)&f->node + ctx->protocol); + bpf_spin_unlock(&glock); + + return 0; +} + +SEC("?tc") +int incorrect_node_off1(void *ctx) +{ + struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + bpf_spin_lock(&glock); + bpf_list_push_front(&ghead, (void *)&f->node + 1); + bpf_spin_unlock(&glock); + + return 0; +} + +SEC("?tc") +int incorrect_node_off2(void *ctx) +{ + struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + bpf_spin_lock(&glock); + bpf_list_push_front(&ghead, &f->node2); + bpf_spin_unlock(&glock); + + return 0; +} + +SEC("?tc") +int no_head_type(void *ctx) +{ + void *p; + + p = bpf_obj_new(typeof(struct { int data; })); + if (!p) + return 0; + bpf_spin_lock(&glock); + bpf_list_push_front(p, NULL); + bpf_spin_lock(&glock); + + return 0; +} + +SEC("?tc") +int incorrect_head_var_off1(struct __sk_buff *ctx) +{ + struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + bpf_spin_lock(&glock); + bpf_list_push_front((void *)&ghead + ctx->protocol, &f->node); + bpf_spin_unlock(&glock); + + return 0; +} + +SEC("?tc") +int incorrect_head_var_off2(struct __sk_buff *ctx) +{ + struct foo *f; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + bpf_spin_lock(&glock); + bpf_list_push_front((void *)&f->head + ctx->protocol, &f->node); + bpf_spin_unlock(&glock); + + return 0; +} + +SEC("?tc") +int incorrect_head_off1(void *ctx) +{ + struct foo *f; + struct bar *b; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + b = bpf_obj_new(typeof(*b)); + if (!b) { + bpf_obj_drop(f); + return 0; + } + + bpf_spin_lock(&f->lock); + bpf_list_push_front((void *)&f->head + 1, &b->node); + bpf_spin_unlock(&f->lock); + + return 0; +} + +SEC("?tc") +int incorrect_head_off2(void *ctx) +{ + struct foo *f; + struct bar *b; + + f = bpf_obj_new(typeof(*f)); + if (!f) + return 0; + + bpf_spin_lock(&glock); + bpf_list_push_front((void *)&ghead + 1, &f->node); + bpf_spin_unlock(&glock); + + return 0; +} + +static __always_inline +int pop_ptr_off(void *(*op)(void *head)) +{ + struct { + struct bpf_list_head head __contains(foo, node2); + struct bpf_spin_lock lock; + } *p; + struct bpf_list_node *n; + + p = bpf_obj_new(typeof(*p)); + if (!p) + return 0; + bpf_spin_lock(&p->lock); + n = op(&p->head); + bpf_spin_unlock(&p->lock); + + bpf_this_cpu_ptr(n); + return 0; +} + +SEC("?tc") +int pop_front_off(void *ctx) +{ + return pop_ptr_off((void *)bpf_list_pop_front); +} + 
+SEC("?tc") +int pop_back_off(void *ctx) +{ + return pop_ptr_off((void *)bpf_list_pop_back); +} + +char _license[] SEC("license") = "GPL";
From patchwork Mon Nov 14 19:15:47 2022
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13042741
From: Kumar Kartikeya Dwivedi
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , Dave Marchevsky
Subject: [PATCH bpf-next v7 26/26] selftests/bpf: Add BTF sanity tests
Date: Tue, 15 Nov 2022 00:45:47 +0530
Message-Id:
<20221114191547.1694267-27-memxor@gmail.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221114191547.1694267-1-memxor@gmail.com> References: <20221114191547.1694267-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=17214; i=memxor@gmail.com; h=from:subject; bh=v1LI+fTgGiLUO8irIiQN9nGTkKTb9ImcS7iWlL1GYas=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBjcpPKCJRIpIn4JAqKr4qE23QDsjdcoPPPSP3mg+KN HkCcjVGJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY3KTygAKCRBM4MiGSL8Rym8eEA DDWwcSUIO07IO/hreP3tMGGpsma1v44kAANn4fm9KMeNH/+kdVtboyQG/GwkFiRolQwcM+VQ3iUe5v qrGKt1m4spoeG9n+n7Q7gLMcSOub3CI/H5twrgI+EMljl5I10eLfh6fddcQafFixwI2zeBQJ3cDYS3 A5Sldeghaxpbsf5mqs/hO3lk+Mg7pgpZ21rM3jRbt00R8QZ/5oG9n52OyyTq+KZlxRQgq7MnPSyQ8f TgdFBIEO457cMurwFppOZeDt4iJy5L1Agv6wQaDN9uxCzugiDeI3dU+nH7S5WJIA+QjgeErUmtqlgH c9izRhr+7iH65RP0xPz+ENq8Hk5z1Atpal0jymTknMuzf9EE/yHDHsroGdGlmZKflu1AmsakGbl56A RX0HRvQ9tcAc8N3SZjEIOPhO1c3fNmBzOmpJDDTRkdt6oU6sscnVmdRsKNQtP15wDxWIdz/jEnSqqh IjAc+IaFKuitsUXf17UD9GsbBu8ZTcw20xhNDpqgZT43Bkj9PWCOpd5hTpmAsJ/eQ8SRauLfMFGyjC h073eZHTm9cgTFZpWLcePnXpZISp2XvP8lMfhuDhNR2cLiB2xvtxTSP0JiZeqVkRs106eg4GbfcWZR sUaoQa5ye1eRlUcDJfTx328EtSuzI0rGi/sTL9ZLxJvv3s6iwi7zUSrCSPPg== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Preparing the metadata for bpf_list_head involves a complicated parsing step and type resolution for the contained value. Ensure that corner cases are tested against and invalid specifications in source are duly rejected. Also include tests for incorrect ownership relationships in the BTF. Signed-off-by: Kumar Kartikeya Dwivedi --- .../selftests/bpf/prog_tests/linked_list.c | 485 ++++++++++++++++++ 1 file changed, 485 insertions(+) diff --git a/tools/testing/selftests/bpf/prog_tests/linked_list.c b/tools/testing/selftests/bpf/prog_tests/linked_list.c index e8569db2f3bc..bdc5a4f82e79 100644 --- a/tools/testing/selftests/bpf/prog_tests/linked_list.c +++ b/tools/testing/selftests/bpf/prog_tests/linked_list.c @@ -1,4 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 +#include +#include +#include #include #include @@ -233,6 +236,487 @@ static void test_linked_list_success(int mode, bool leave_in_map) linked_list__destroy(skel); } +#define SPIN_LOCK 2 +#define LIST_HEAD 3 +#define LIST_NODE 4 + +static struct btf *init_btf(void) +{ + int id, lid, hid, nid; + struct btf *btf; + + btf = btf__new_empty(); + if (!ASSERT_OK_PTR(btf, "btf__new_empty")) + return NULL; + id = btf__add_int(btf, "int", 4, BTF_INT_SIGNED); + if (!ASSERT_EQ(id, 1, "btf__add_int")) + goto end; + lid = btf__add_struct(btf, "bpf_spin_lock", 4); + if (!ASSERT_EQ(lid, SPIN_LOCK, "btf__add_struct bpf_spin_lock")) + goto end; + hid = btf__add_struct(btf, "bpf_list_head", 16); + if (!ASSERT_EQ(hid, LIST_HEAD, "btf__add_struct bpf_list_head")) + goto end; + nid = btf__add_struct(btf, "bpf_list_node", 16); + if (!ASSERT_EQ(nid, LIST_NODE, "btf__add_struct bpf_list_node")) + goto end; + return btf; +end: + btf__free(btf); + return NULL; +} + +static void test_btf(void) +{ + struct btf *btf = NULL; + int id, err; + + while (test__start_subtest("btf: too many locks")) { + btf = init_btf(); + if (!ASSERT_OK_PTR(btf, "init_btf")) + break; + id = btf__add_struct(btf, "foo", 24); + if (!ASSERT_EQ(id, 5, "btf__add_struct foo")) + break; + err = btf__add_field(btf, "a", SPIN_LOCK, 0, 0); + if (!ASSERT_OK(err, "btf__add_struct foo::a")) + break; + err = btf__add_field(btf, "b", 
SPIN_LOCK, 32, 0);
+		if (!ASSERT_OK(err, "btf__add_struct foo::a"))
+			break;
+		err = btf__add_field(btf, "c", LIST_HEAD, 64, 0);
+		if (!ASSERT_OK(err, "btf__add_struct foo::a"))
+			break;
+
+		err = btf__load_into_kernel(btf);
+		ASSERT_EQ(err, -E2BIG, "check btf");
+		btf__free(btf);
+		break;
+	}
+
+	while (test__start_subtest("btf: missing lock")) {
+		btf = init_btf();
+		if (!ASSERT_OK_PTR(btf, "init_btf"))
+			break;
+		id = btf__add_struct(btf, "foo", 16);
+		if (!ASSERT_EQ(id, 5, "btf__add_struct foo"))
+			break;
+		err = btf__add_field(btf, "a", LIST_HEAD, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_struct foo::a"))
+			break;
+		id = btf__add_decl_tag(btf, "contains:baz:a", 5, 0);
+		if (!ASSERT_EQ(id, 6, "btf__add_decl_tag contains:baz:a"))
+			break;
+		id = btf__add_struct(btf, "baz", 16);
+		if (!ASSERT_EQ(id, 7, "btf__add_struct baz"))
+			break;
+		err = btf__add_field(btf, "a", LIST_NODE, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field baz::a"))
+			break;
+
+		err = btf__load_into_kernel(btf);
+		ASSERT_EQ(err, -EINVAL, "check btf");
+		btf__free(btf);
+		break;
+	}
+
+	while (test__start_subtest("btf: bad offset")) {
+		btf = init_btf();
+		if (!ASSERT_OK_PTR(btf, "init_btf"))
+			break;
+		id = btf__add_struct(btf, "foo", 36);
+		if (!ASSERT_EQ(id, 5, "btf__add_struct foo"))
+			break;
+		err = btf__add_field(btf, "a", LIST_HEAD, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::a"))
+			break;
+		err = btf__add_field(btf, "b", LIST_NODE, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::b"))
+			break;
+		err = btf__add_field(btf, "c", SPIN_LOCK, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::c"))
+			break;
+		id = btf__add_decl_tag(btf, "contains:foo:b", 5, 0);
+		if (!ASSERT_EQ(id, 6, "btf__add_decl_tag contains:foo:b"))
+			break;
+
+		err = btf__load_into_kernel(btf);
+		ASSERT_EQ(err, -EEXIST, "check btf");
+		btf__free(btf);
+		break;
+	}
+
+	while (test__start_subtest("btf: missing contains:")) {
+		btf = init_btf();
+		if (!ASSERT_OK_PTR(btf, "init_btf"))
+			break;
+		id = btf__add_struct(btf, "foo", 24);
+		if (!ASSERT_EQ(id, 5, "btf__add_struct foo"))
+			break;
+		err = btf__add_field(btf, "a", SPIN_LOCK, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::a"))
+			break;
+		err = btf__add_field(btf, "b", LIST_HEAD, 64, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::b"))
+			break;
+
+		err = btf__load_into_kernel(btf);
+		ASSERT_EQ(err, -EINVAL, "check btf");
+		btf__free(btf);
+		break;
+	}
+
+	while (test__start_subtest("btf: missing struct")) {
+		btf = init_btf();
+		if (!ASSERT_OK_PTR(btf, "init_btf"))
+			break;
+		id = btf__add_struct(btf, "foo", 24);
+		if (!ASSERT_EQ(id, 5, "btf__add_struct foo"))
+			break;
+		err = btf__add_field(btf, "a", SPIN_LOCK, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::a"))
+			break;
+		err = btf__add_field(btf, "b", LIST_HEAD, 64, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::b"))
+			break;
+		id = btf__add_decl_tag(btf, "contains:bar:bar", 5, 1);
+		if (!ASSERT_EQ(id, 6, "btf__add_decl_tag contains:bar:bar"))
+			break;
+
+		err = btf__load_into_kernel(btf);
+		ASSERT_EQ(err, -ENOENT, "check btf");
+		btf__free(btf);
+		break;
+	}
+
+	while (test__start_subtest("btf: missing node")) {
+		btf = init_btf();
+		if (!ASSERT_OK_PTR(btf, "init_btf"))
+			break;
+		id = btf__add_struct(btf, "foo", 24);
+		if (!ASSERT_EQ(id, 5, "btf__add_struct foo"))
+			break;
+		err = btf__add_field(btf, "a", SPIN_LOCK, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::a"))
+			break;
+		err = btf__add_field(btf, "b", LIST_HEAD, 64, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::b"))
+			break;
+		id = btf__add_decl_tag(btf, "contains:foo:c", 5, 1);
+		if (!ASSERT_EQ(id, 6, "btf__add_decl_tag contains:foo:c"))
+			break;
+
+		err = btf__load_into_kernel(btf);
+		btf__free(btf);
+		ASSERT_EQ(err, -ENOENT, "check btf");
+		break;
+	}
+
+	while (test__start_subtest("btf: node incorrect type")) {
+		btf = init_btf();
+		if (!ASSERT_OK_PTR(btf, "init_btf"))
+			break;
+		id = btf__add_struct(btf, "foo", 20);
+		if (!ASSERT_EQ(id, 5, "btf__add_struct foo"))
+			break;
+		err = btf__add_field(btf, "a", LIST_HEAD, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::a"))
+			break;
+		err = btf__add_field(btf, "b", SPIN_LOCK, 128, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::b"))
+			break;
+		id = btf__add_decl_tag(btf, "contains:bar:a", 5, 0);
+		if (!ASSERT_EQ(id, 6, "btf__add_decl_tag contains:bar:a"))
+			break;
+		id = btf__add_struct(btf, "bar", 4);
+		if (!ASSERT_EQ(id, 7, "btf__add_struct bar"))
+			break;
+		err = btf__add_field(btf, "a", SPIN_LOCK, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field bar::a"))
+			break;
+
+		err = btf__load_into_kernel(btf);
+		ASSERT_EQ(err, -EINVAL, "check btf");
+		btf__free(btf);
+		break;
+	}
+
+	while (test__start_subtest("btf: multiple bpf_list_node with name b")) {
+		btf = init_btf();
+		if (!ASSERT_OK_PTR(btf, "init_btf"))
+			break;
+		id = btf__add_struct(btf, "foo", 52);
+		if (!ASSERT_EQ(id, 5, "btf__add_struct foo"))
+			break;
+		err = btf__add_field(btf, "a", LIST_HEAD, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::a"))
+			break;
+		err = btf__add_field(btf, "b", LIST_NODE, 128, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::b"))
+			break;
+		err = btf__add_field(btf, "b", LIST_NODE, 256, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::c"))
+			break;
+		err = btf__add_field(btf, "d", SPIN_LOCK, 384, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::d"))
+			break;
+		id = btf__add_decl_tag(btf, "contains:foo:b", 5, 0);
+		if (!ASSERT_EQ(id, 6, "btf__add_decl_tag contains:foo:b"))
+			break;
+
+		err = btf__load_into_kernel(btf);
+		ASSERT_EQ(err, -EINVAL, "check btf");
+		btf__free(btf);
+		break;
+	}
+
+	while (test__start_subtest("btf: owning | owned AA cycle")) {
+		btf = init_btf();
+		if (!ASSERT_OK_PTR(btf, "init_btf"))
+			break;
+		id = btf__add_struct(btf, "foo", 36);
+		if (!ASSERT_EQ(id, 5, "btf__add_struct foo"))
+			break;
+		err = btf__add_field(btf, "a", LIST_HEAD, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::a"))
+			break;
+		err = btf__add_field(btf, "b", LIST_NODE, 128, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::b"))
+			break;
+		err = btf__add_field(btf, "c", SPIN_LOCK, 256, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::c"))
+			break;
+		id = btf__add_decl_tag(btf, "contains:foo:b", 5, 0);
+		if (!ASSERT_EQ(id, 6, "btf__add_decl_tag contains:foo:b"))
+			break;
+
+		err = btf__load_into_kernel(btf);
+		ASSERT_EQ(err, -ELOOP, "check btf");
+		btf__free(btf);
+		break;
+	}
+
+	while (test__start_subtest("btf: owning | owned ABA cycle")) {
+		btf = init_btf();
+		if (!ASSERT_OK_PTR(btf, "init_btf"))
+			break;
+		id = btf__add_struct(btf, "foo", 36);
+		if (!ASSERT_EQ(id, 5, "btf__add_struct foo"))
+			break;
+		err = btf__add_field(btf, "a", LIST_HEAD, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::a"))
+			break;
+		err = btf__add_field(btf, "b", LIST_NODE, 128, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::b"))
+			break;
+		err = btf__add_field(btf, "c", SPIN_LOCK, 256, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::c"))
+			break;
+		id = btf__add_decl_tag(btf, "contains:bar:b", 5, 0);
+		if (!ASSERT_EQ(id, 6, "btf__add_decl_tag contains:bar:b"))
+			break;
+		id = btf__add_struct(btf, "bar", 36);
+		if (!ASSERT_EQ(id, 7, "btf__add_struct bar"))
+			break;
+		err = btf__add_field(btf, "a", LIST_HEAD, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field bar::a"))
+			break;
+		err = btf__add_field(btf, "b", LIST_NODE, 128, 0);
+		if (!ASSERT_OK(err, "btf__add_field bar::b"))
+			break;
+		err = btf__add_field(btf, "c", SPIN_LOCK, 256, 0);
+		if (!ASSERT_OK(err, "btf__add_field bar::c"))
+			break;
+		id = btf__add_decl_tag(btf, "contains:foo:b", 7, 0);
+		if (!ASSERT_EQ(id, 8, "btf__add_decl_tag contains:foo:b"))
+			break;
+
+		err = btf__load_into_kernel(btf);
+		ASSERT_EQ(err, -ELOOP, "check btf");
+		btf__free(btf);
+		break;
+	}
+
+	while (test__start_subtest("btf: owning -> owned")) {
+		btf = init_btf();
+		if (!ASSERT_OK_PTR(btf, "init_btf"))
+			break;
+		id = btf__add_struct(btf, "foo", 20);
+		if (!ASSERT_EQ(id, 5, "btf__add_struct foo"))
+			break;
+		err = btf__add_field(btf, "a", LIST_HEAD, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::a"))
+			break;
+		err = btf__add_field(btf, "b", SPIN_LOCK, 128, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::b"))
+			break;
+		id = btf__add_decl_tag(btf, "contains:bar:a", 5, 0);
+		if (!ASSERT_EQ(id, 6, "btf__add_decl_tag contains:bar:a"))
+			break;
+		id = btf__add_struct(btf, "bar", 16);
+		if (!ASSERT_EQ(id, 7, "btf__add_struct bar"))
+			break;
+		err = btf__add_field(btf, "a", LIST_NODE, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field bar::a"))
+			break;
+
+		err = btf__load_into_kernel(btf);
+		ASSERT_EQ(err, 0, "check btf");
+		btf__free(btf);
+		break;
+	}
+
+	while (test__start_subtest("btf: owning -> owning | owned -> owned")) {
+		btf = init_btf();
+		if (!ASSERT_OK_PTR(btf, "init_btf"))
+			break;
+		id = btf__add_struct(btf, "foo", 20);
+		if (!ASSERT_EQ(id, 5, "btf__add_struct foo"))
+			break;
+		err = btf__add_field(btf, "a", LIST_HEAD, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::a"))
+			break;
+		err = btf__add_field(btf, "b", SPIN_LOCK, 128, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::b"))
+			break;
+		id = btf__add_decl_tag(btf, "contains:bar:b", 5, 0);
+		if (!ASSERT_EQ(id, 6, "btf__add_decl_tag contains:bar:b"))
+			break;
+		id = btf__add_struct(btf, "bar", 36);
+		if (!ASSERT_EQ(id, 7, "btf__add_struct bar"))
+			break;
+		err = btf__add_field(btf, "a", LIST_HEAD, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field bar::a"))
+			break;
+		err = btf__add_field(btf, "b", LIST_NODE, 128, 0);
+		if (!ASSERT_OK(err, "btf__add_field bar::b"))
+			break;
+		err = btf__add_field(btf, "c", SPIN_LOCK, 256, 0);
+		if (!ASSERT_OK(err, "btf__add_field bar::c"))
+			break;
+		id = btf__add_decl_tag(btf, "contains:baz:a", 7, 0);
+		if (!ASSERT_EQ(id, 8, "btf__add_decl_tag contains:baz:a"))
+			break;
+		id = btf__add_struct(btf, "baz", 16);
+		if (!ASSERT_EQ(id, 9, "btf__add_struct baz"))
+			break;
+		err = btf__add_field(btf, "a", LIST_NODE, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field baz:a"))
+			break;
+
+		err = btf__load_into_kernel(btf);
+		ASSERT_EQ(err, 0, "check btf");
+		btf__free(btf);
+		break;
+	}
+
+	while (test__start_subtest("btf: owning | owned -> owning | owned -> owned")) {
+		btf = init_btf();
+		if (!ASSERT_OK_PTR(btf, "init_btf"))
+			break;
+		id = btf__add_struct(btf, "foo", 36);
+		if (!ASSERT_EQ(id, 5, "btf__add_struct foo"))
+			break;
+		err = btf__add_field(btf, "a", LIST_HEAD, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::a"))
+			break;
+		err = btf__add_field(btf, "b", LIST_NODE, 128, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::b"))
+			break;
+		err = btf__add_field(btf, "c", SPIN_LOCK, 256, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::c"))
+			break;
+		id = btf__add_decl_tag(btf, "contains:bar:b", 5, 0);
+		if (!ASSERT_EQ(id, 6, "btf__add_decl_tag contains:bar:b"))
+			break;
+		id = btf__add_struct(btf, "bar", 36);
+		if (!ASSERT_EQ(id, 7, "btf__add_struct bar"))
+			break;
+		err = btf__add_field(btf, "a", LIST_HEAD, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field bar:a"))
+			break;
+		err = btf__add_field(btf, "b", LIST_NODE, 128, 0);
+		if (!ASSERT_OK(err, "btf__add_field bar:b"))
+			break;
+		err = btf__add_field(btf, "c", SPIN_LOCK, 256, 0);
+		if (!ASSERT_OK(err, "btf__add_field bar:c"))
+			break;
+		id = btf__add_decl_tag(btf, "contains:baz:a", 7, 0);
+		if (!ASSERT_EQ(id, 8, "btf__add_decl_tag contains:baz:a"))
+			break;
+		id = btf__add_struct(btf, "baz", 16);
+		if (!ASSERT_EQ(id, 9, "btf__add_struct baz"))
+			break;
+		err = btf__add_field(btf, "a", LIST_NODE, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field baz:a"))
+			break;
+
+		err = btf__load_into_kernel(btf);
+		ASSERT_EQ(err, -ELOOP, "check btf");
+		btf__free(btf);
+		break;
+	}
+
+	while (test__start_subtest("btf: owning -> owning | owned -> owning | owned -> owned")) {
+		btf = init_btf();
+		if (!ASSERT_OK_PTR(btf, "init_btf"))
+			break;
+		id = btf__add_struct(btf, "foo", 20);
+		if (!ASSERT_EQ(id, 5, "btf__add_struct foo"))
+			break;
+		err = btf__add_field(btf, "a", LIST_HEAD, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::a"))
+			break;
+		err = btf__add_field(btf, "b", SPIN_LOCK, 128, 0);
+		if (!ASSERT_OK(err, "btf__add_field foo::b"))
+			break;
+		id = btf__add_decl_tag(btf, "contains:bar:b", 5, 0);
+		if (!ASSERT_EQ(id, 6, "btf__add_decl_tag contains:bar:b"))
+			break;
+		id = btf__add_struct(btf, "bar", 36);
+		if (!ASSERT_EQ(id, 7, "btf__add_struct bar"))
+			break;
+		err = btf__add_field(btf, "a", LIST_HEAD, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field bar::a"))
+			break;
+		err = btf__add_field(btf, "b", LIST_NODE, 128, 0);
+		if (!ASSERT_OK(err, "btf__add_field bar::b"))
+			break;
+		err = btf__add_field(btf, "c", SPIN_LOCK, 256, 0);
+		if (!ASSERT_OK(err, "btf__add_field bar::c"))
+			break;
+		id = btf__add_decl_tag(btf, "contains:baz:b", 7, 0);
+		if (!ASSERT_EQ(id, 8, "btf__add_decl_tag"))
+			break;
+		id = btf__add_struct(btf, "baz", 36);
+		if (!ASSERT_EQ(id, 9, "btf__add_struct baz"))
+			break;
+		err = btf__add_field(btf, "a", LIST_HEAD, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field bar::a"))
+			break;
+		err = btf__add_field(btf, "b", LIST_NODE, 128, 0);
+		if (!ASSERT_OK(err, "btf__add_field bar::b"))
+			break;
+		err = btf__add_field(btf, "c", SPIN_LOCK, 256, 0);
+		if (!ASSERT_OK(err, "btf__add_field bar::c"))
+			break;
+		id = btf__add_decl_tag(btf, "contains:bam:a", 9, 0);
+		if (!ASSERT_EQ(id, 10, "btf__add_decl_tag contains:bam:a"))
+			break;
+		id = btf__add_struct(btf, "bam", 16);
+		if (!ASSERT_EQ(id, 11, "btf__add_struct bam"))
+			break;
+		err = btf__add_field(btf, "a", LIST_NODE, 0, 0);
+		if (!ASSERT_OK(err, "btf__add_field bam::a"))
+			break;
+
+		err = btf__load_into_kernel(btf);
+		ASSERT_EQ(err, -ELOOP, "check btf");
+		btf__free(btf);
+		break;
+	}
+}
+
 void test_linked_list(void)
 {
 	int i;
@@ -243,6 +727,7 @@ void test_linked_list(void)
 		test_linked_list_fail_prog(linked_list_fail_tests[i].prog_name,
 					   linked_list_fail_tests[i].err_msg);
 	}
+	test_btf();
 	test_linked_list_success(PUSH_POP, false);
 	test_linked_list_success(PUSH_POP, true);
 	test_linked_list_success(PUSH_POP_MULT, false);