From patchwork Tue Sep 7 22:23:27 2021
X-Patchwork-Submitter: Johan Almbladh
X-Patchwork-Id: 12479515
X-Patchwork-Delegate: bpf@iogearbox.net
From: Johan Almbladh
To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org
Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com,
    john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com,
    netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh
Subject: [PATCH bpf-next v2 01/13] bpf/tests: Allow different number of runs per test case
Date: Wed, 8 Sep 2021 00:23:27 +0200
Message-Id: <20210907222339.4130924-2-johan.almbladh@anyfinetworks.com>
In-Reply-To: <20210907222339.4130924-1-johan.almbladh@anyfinetworks.com>
References: <20210907222339.4130924-1-johan.almbladh@anyfinetworks.com>

This patch allows a test case to specify the number of runs to use. For
compatibility with existing test case definitions, the default value 0
is interpreted as MAX_TESTRUNS. A reduced number of runs is useful for
complex test programs where 1000 runs may take a very long time. Instead
of reducing what is tested, one can reduce the number of times the test
is run.

Signed-off-by: Johan Almbladh
---
 lib/test_bpf.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 830a18ecffc8..c8bd3e9ab10a 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -80,6 +80,7 @@ struct bpf_test {
 	int expected_errcode; /* used when FLAG_EXPECTED_FAIL is set in the aux */
 	__u8 frag_data[MAX_DATA];
 	int stack_depth; /* for eBPF only, since tests don't call verifier */
+	int nr_testruns; /* Custom run count, defaults to MAX_TESTRUNS if 0 */
 };
 
 /* Large test cases need separate allocation and fill handler.
 */

@@ -8631,6 +8632,9 @@ static int run_one(const struct bpf_prog *fp, struct bpf_test *test)
 {
 	int err_cnt = 0, i, runs = MAX_TESTRUNS;
 
+	if (test->nr_testruns)
+		runs = min(test->nr_testruns, MAX_TESTRUNS);
+
 	for (i = 0; i < MAX_SUBTESTS; i++) {
 		void *data;
 		u64 duration;
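To illustrate the new knob, a tests[] entry can request a reduced run
count through the added field. A minimal sketch; the test name, program
and count here are hypothetical, not part of the patch:

	/*
	 * Hypothetical tests[] entry: with .nr_testruns set, run_one()
	 * executes the program 100 times; leaving the field 0 keeps the
	 * MAX_TESTRUNS default.
	 */
	{
		"SAMPLE: reduced run count",
		.u.insns_int = {
			BPF_ALU64_IMM(BPF_MOV, R0, 1),
			BPF_EXIT_INSN(),
		},
		INTERNAL | FLAG_NO_DATA,
		{ },
		{ { 0, 1 } },
		.nr_testruns = 100,
	},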
From patchwork Tue Sep 7 22:23:28 2021
X-Patchwork-Submitter: Johan Almbladh
X-Patchwork-Id: 12479517
X-Patchwork-Delegate: bpf@iogearbox.net
From: Johan Almbladh
To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org
Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com,
    john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com,
    netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh
Subject: [PATCH bpf-next v2 02/13] bpf/tests: Reduce memory footprint of test suite
Date: Wed, 8 Sep 2021 00:23:28 +0200
Message-Id: <20210907222339.4130924-3-johan.almbladh@anyfinetworks.com>
In-Reply-To: <20210907222339.4130924-1-johan.almbladh@anyfinetworks.com>
References: <20210907222339.4130924-1-johan.almbladh@anyfinetworks.com>

The test suite used to call any fill_helper callbacks to generate eBPF
program data for all test cases at once. This caused ballooning memory
requirements as more extensive test cases were added. Now each
fill_helper is called just before its test is run, and the allocated
memory is released afterwards, before the next test case is processed.

Signed-off-by: Johan Almbladh
---
 lib/test_bpf.c | 26 ++++++++++++--------------
 1 file changed, 12 insertions(+), 14 deletions(-)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index c8bd3e9ab10a..f0651dc6450b 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -8694,8 +8694,6 @@ static __init int find_test_index(const char *test_name)
 
 static __init int prepare_bpf_tests(void)
 {
-	int i;
-
 	if (test_id >= 0) {
 		/*
 		 * if a test_id was specified, use test_range to
@@ -8739,23 +8737,11 @@ static __init int prepare_bpf_tests(void)
 		}
 	}
 
-	for (i = 0; i < ARRAY_SIZE(tests); i++) {
-		if (tests[i].fill_helper &&
-		    tests[i].fill_helper(&tests[i]) < 0)
-			return -ENOMEM;
-	}
-
 	return 0;
 }
 
 static __init void destroy_bpf_tests(void)
 {
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(tests); i++) {
-		if (tests[i].fill_helper)
-			kfree(tests[i].u.ptr.insns);
-	}
 }
 
 static bool exclude_test(int test_id)
@@ -8959,7 +8945,19 @@ static __init int test_bpf(void)
 
 		pr_info("#%d %s ", i, tests[i].descr);
 
+		if (tests[i].fill_helper &&
+		    tests[i].fill_helper(&tests[i]) < 0) {
+			pr_cont("FAIL to prog_fill\n");
+			continue;
+		}
+
 		fp = generate_filter(i, &err);
+
+		if (tests[i].fill_helper) {
+			kfree(tests[i].u.ptr.insns);
+			tests[i].u.ptr.insns = NULL;
+		}
+
 		if (fp == NULL) {
 			if (err == 0) {
 				pass_cnt++;
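The resulting per-test lifecycle, paraphrased from the test_bpf() hunk
above (a simplified sketch, not the verbatim kernel loop):

	for (i = 0; i < ARRAY_SIZE(tests); i++) {
		/* Generate the program just before this test runs */
		if (tests[i].fill_helper &&
		    tests[i].fill_helper(&tests[i]) < 0)
			continue;	/* allocation failed, test skipped */

		fp = generate_filter(i, &err);	/* copies the insns */

		/* Release the generated buffer before the next test */
		if (tests[i].fill_helper) {
			kfree(tests[i].u.ptr.insns);
			tests[i].u.ptr.insns = NULL;
		}

		/* ... run and free fp as before ... */
	}

Since generate_filter() copies the instructions into the allocated
bpf_prog, the fill buffer can be freed immediately after generation;
peak memory then scales with the largest single test case rather than
with the sum of all generated programs.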
From patchwork Tue Sep 7 22:23:29 2021
X-Patchwork-Submitter: Johan Almbladh
X-Patchwork-Id: 12479511
X-Patchwork-Delegate: bpf@iogearbox.net
From: Johan Almbladh
To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org
Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com,
    john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com,
    netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh
Subject: [PATCH bpf-next v2 03/13] bpf/tests: Add exhaustive tests of ALU shift values
Date: Wed, 8 Sep 2021 00:23:29 +0200
Message-Id: <20210907222339.4130924-4-johan.almbladh@anyfinetworks.com>
In-Reply-To: <20210907222339.4130924-1-johan.almbladh@anyfinetworks.com>
References: <20210907222339.4130924-1-johan.almbladh@anyfinetworks.com>

This patch adds a set of tests for ALU64 and ALU32 shift operations to
verify correctness for all valid shift values. Mainly intended for JIT
testing.

Signed-off-by: Johan Almbladh
---
 lib/test_bpf.c | 260 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 260 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index f0651dc6450b..f76472c050a7 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -497,6 +497,168 @@ static int bpf_fill_long_jmp(struct bpf_test *self)
 	return 0;
 }
 
+static int __bpf_ld_imm64(struct bpf_insn insns[2], u8 reg, s64 imm64)
+{
+	struct bpf_insn tmp[] = {BPF_LD_IMM64(reg, imm64)};
+
+	memcpy(insns, tmp, sizeof(tmp));
+	return 2;
+}
+
+/* Test an ALU shift operation for all valid shift values */
+static int __bpf_fill_alu_shift(struct bpf_test *self, u8 op,
+				u8 mode, bool alu32)
+{
+	static const s64 regs[] = {
+		0x0123456789abcdefLL, /* dword > 0, word < 0 */
+		0xfedcba9876543210LL, /* dword < 0, word > 0 */
+		0xfedcba0198765432LL, /* dword < 0, word < 0 */
+		0x0123458967abcdefLL, /* dword > 0, word > 0 */
+	};
+	int bits = alu32 ? 32 : 64;
+	int len = (2 + 7 * bits) * ARRAY_SIZE(regs) + 3;
+	struct bpf_insn *insn;
+	int imm, k;
+	int i = 0;
+
+	insn = kmalloc_array(len, sizeof(*insn), GFP_KERNEL);
+	if (!insn)
+		return -ENOMEM;
+
+	insn[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 0);
+
+	for (k = 0; k < ARRAY_SIZE(regs); k++) {
+		s64 reg = regs[k];
+
+		i += __bpf_ld_imm64(&insn[i], R3, reg);
+
+		for (imm = 0; imm < bits; imm++) {
+			u64 val;
+
+			/* Perform operation */
+			insn[i++] = BPF_ALU64_REG(BPF_MOV, R1, R3);
+			insn[i++] = BPF_ALU64_IMM(BPF_MOV, R2, imm);
+			if (alu32) {
+				if (mode == BPF_K)
+					insn[i++] = BPF_ALU32_IMM(op, R1, imm);
+				else
+					insn[i++] = BPF_ALU32_REG(op, R1, R2);
+				switch (op) {
+				case BPF_LSH:
+					val = (u32)reg << imm;
+					break;
+				case BPF_RSH:
+					val = (u32)reg >> imm;
+					break;
+				case BPF_ARSH:
+					val = (u32)reg >> imm;
+					if (imm > 0 && (reg & 0x80000000))
+						val |= ~(u32)0 << (32 - imm);
+					break;
+				}
+			} else {
+				if (mode == BPF_K)
+					insn[i++] = BPF_ALU64_IMM(op, R1, imm);
+				else
+					insn[i++] = BPF_ALU64_REG(op, R1, R2);
+				switch (op) {
+				case BPF_LSH:
+					val = (u64)reg << imm;
+					break;
+				case BPF_RSH:
+					val = (u64)reg >> imm;
+					break;
+				case BPF_ARSH:
+					val = (u64)reg >> imm;
+					if (imm > 0 && reg < 0)
+						val |= ~(u64)0 << (64 - imm);
+					break;
+				}
+			}
+
+			/*
+			 * When debugging a JIT that fails this test, one
+			 * can write the immediate value to R0 here to find
+			 * out which operand values fail.
+ */ + + /* Load reference and check the result */ + i += __bpf_ld_imm64(&insn[i], R4, val); + insn[i++] = BPF_JMP_REG(BPF_JEQ, R1, R4, 1); + insn[i++] = BPF_EXIT_INSN(); + } + } + + insn[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 1); + insn[i++] = BPF_EXIT_INSN(); + + self->u.ptr.insns = insn; + self->u.ptr.len = len; + BUG_ON(i > len); + + return 0; +} + +static int bpf_fill_alu_lsh_imm(struct bpf_test *self) +{ + return __bpf_fill_alu_shift(self, BPF_LSH, BPF_K, false); +} + +static int bpf_fill_alu_rsh_imm(struct bpf_test *self) +{ + return __bpf_fill_alu_shift(self, BPF_RSH, BPF_K, false); +} + +static int bpf_fill_alu_arsh_imm(struct bpf_test *self) +{ + return __bpf_fill_alu_shift(self, BPF_ARSH, BPF_K, false); +} + +static int bpf_fill_alu_lsh_reg(struct bpf_test *self) +{ + return __bpf_fill_alu_shift(self, BPF_LSH, BPF_X, false); +} + +static int bpf_fill_alu_rsh_reg(struct bpf_test *self) +{ + return __bpf_fill_alu_shift(self, BPF_RSH, BPF_X, false); +} + +static int bpf_fill_alu_arsh_reg(struct bpf_test *self) +{ + return __bpf_fill_alu_shift(self, BPF_ARSH, BPF_X, false); +} + +static int bpf_fill_alu32_lsh_imm(struct bpf_test *self) +{ + return __bpf_fill_alu_shift(self, BPF_LSH, BPF_K, true); +} + +static int bpf_fill_alu32_rsh_imm(struct bpf_test *self) +{ + return __bpf_fill_alu_shift(self, BPF_RSH, BPF_K, true); +} + +static int bpf_fill_alu32_arsh_imm(struct bpf_test *self) +{ + return __bpf_fill_alu_shift(self, BPF_ARSH, BPF_K, true); +} + +static int bpf_fill_alu32_lsh_reg(struct bpf_test *self) +{ + return __bpf_fill_alu_shift(self, BPF_LSH, BPF_X, true); +} + +static int bpf_fill_alu32_rsh_reg(struct bpf_test *self) +{ + return __bpf_fill_alu_shift(self, BPF_RSH, BPF_X, true); +} + +static int bpf_fill_alu32_arsh_reg(struct bpf_test *self) +{ + return __bpf_fill_alu_shift(self, BPF_ARSH, BPF_X, true); +} + static struct bpf_test tests[] = { { "TAX", @@ -8414,6 +8576,104 @@ static struct bpf_test tests[] = { {}, { { 0, 2 } }, }, + /* Exhaustive test of ALU64 shift operations */ + { + "ALU64_LSH_K: all shift values", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu_lsh_imm, + }, + { + "ALU64_RSH_K: all shift values", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu_rsh_imm, + }, + { + "ALU64_ARSH_K: all shift values", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu_arsh_imm, + }, + { + "ALU64_LSH_X: all shift values", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu_lsh_reg, + }, + { + "ALU64_RSH_X: all shift values", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu_rsh_reg, + }, + { + "ALU64_ARSH_X: all shift values", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu_arsh_reg, + }, + /* Exhaustive test of ALU32 shift operations */ + { + "ALU32_LSH_K: all shift values", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_lsh_imm, + }, + { + "ALU32_RSH_K: all shift values", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_rsh_imm, + }, + { + "ALU32_ARSH_K: all shift values", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_arsh_imm, + }, + { + "ALU32_LSH_X: all shift values", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_lsh_reg, + }, + { + "ALU32_RSH_X: all shift values", + { }, + INTERNAL | 
FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_alu32_rsh_reg,
+	},
+	{
+		"ALU32_ARSH_X: all shift values",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_alu32_arsh_reg,
+	},
 };
 
 static struct net_device dev;
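The only non-obvious reference value computed above is the 32-bit
arithmetic right shift. A standalone sketch of the same sign-extension
logic, assuming a hosted C99 compiler rather than kernel code:

	#include <stdint.h>
	#include <assert.h>

	/* Reference for 32-bit BPF_ARSH, mirroring __bpf_fill_alu_shift() */
	static uint64_t ref_arsh32(uint64_t reg, int imm)
	{
		uint64_t val = (uint32_t)reg >> imm;	/* logical shift */

		/* Replicate the sign bit into the bits shifted out */
		if (imm > 0 && (reg & 0x80000000))
			val |= ~(uint32_t)0 << (32 - imm);
		return val;
	}

	int main(void)
	{
		/* 0x87654321 arithmetically shifted right by 4 */
		assert(ref_arsh32(0x87654321, 4) == 0xf8765432);
		return 0;
	}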
From patchwork Tue Sep 7 22:23:30 2021
X-Patchwork-Submitter: Johan Almbladh
X-Patchwork-Id: 12479519
X-Patchwork-Delegate: bpf@iogearbox.net
From: Johan Almbladh
To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org
Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com,
    john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com,
    netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh
Subject: [PATCH bpf-next v2 04/13] bpf/tests: Add exhaustive tests of ALU operand magnitudes
Date: Wed, 8 Sep 2021 00:23:30 +0200
Message-Id: <20210907222339.4130924-5-johan.almbladh@anyfinetworks.com>
In-Reply-To: <20210907222339.4130924-1-johan.almbladh@anyfinetworks.com>
References: <20210907222339.4130924-1-johan.almbladh@anyfinetworks.com>

This patch adds a set of tests for ALU64 and ALU32 arithmetic and
bitwise logical operations to verify correctness for all possible
magnitudes of the register and immediate operands. Mainly intended for
JIT testing.

The patch introduces a pattern generator that can be used to drive
extensive tests of different kinds of operations. It is parameterized to
allow tuning of the operand combinations to test.

Signed-off-by: Johan Almbladh
---
 lib/test_bpf.c | 772 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 772 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index f76472c050a7..7992004bc876 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -659,6 +659,451 @@ static int bpf_fill_alu32_arsh_reg(struct bpf_test *self)
 	return __bpf_fill_alu_shift(self, BPF_ARSH, BPF_X, true);
 }
 
+/*
+ * Common operand pattern generator for exhaustive power-of-two magnitudes
+ * tests. The block size parameters can be adjusted to increase/reduce the
+ * number of combinations tested and thereby execution speed and memory
+ * footprint.
+ */
+
+static inline s64 value(int msb, int delta, int sign)
+{
+	return sign * (1LL << msb) + delta;
+}
+
+static int __bpf_fill_pattern(struct bpf_test *self, void *arg,
+			      int dbits, int sbits, int block1, int block2,
+			      int (*emit)(struct bpf_test*, void*,
+					  struct bpf_insn*, s64, s64))
+{
+	static const int sgn[][2] = {{1, 1}, {1, -1}, {-1, 1}, {-1, -1}};
+	struct bpf_insn *insns;
+	int di, si, bt, db, sb;
+	int count, len, k;
+	int extra = 1 + 2;
+	int i = 0;
+
+	/* Total number of iterations for the two patterns */
+	count = (dbits - 1) * (sbits - 1) * block1 * block1 * ARRAY_SIZE(sgn);
+	count += (max(dbits, sbits) - 1) * block2 * block2 * ARRAY_SIZE(sgn);
+
+	/* Compute the maximum number of insns and allocate the buffer */
+	len = extra + count * (*emit)(self, arg, NULL, 0, 0);
+	insns = kmalloc_array(len, sizeof(*insns), GFP_KERNEL);
+	if (!insns)
+		return -ENOMEM;
+
+	/* Add head instruction(s) */
+	insns[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 0);
+
+	/*
+	 * Pattern 1: all combinations of power-of-two magnitudes and sign,
+	 * and with a block of contiguous values around each magnitude.
+	 */
+	for (di = 0; di < dbits - 1; di++)                 /* Dst magnitudes */
+		for (si = 0; si < sbits - 1; si++)         /* Src magnitudes */
+			for (k = 0; k < ARRAY_SIZE(sgn); k++) /* Sign combos */
+				for (db = -(block1 / 2);
+				     db < (block1 + 1) / 2; db++)
+					for (sb = -(block1 / 2);
+					     sb < (block1 + 1) / 2; sb++) {
+						s64 dst, src;
+
+						dst = value(di, db, sgn[k][0]);
+						src = value(si, sb, sgn[k][1]);
+						i += (*emit)(self, arg,
+							     &insns[i],
+							     dst, src);
+					}
+	/*
+	 * Pattern 2: all combinations for a larger block of values
+	 * for each power-of-two magnitude and sign, where the magnitude is
+	 * the same for both operands.
+	 */
+	for (bt = 0; bt < max(dbits, sbits) - 1; bt++)     /* Magnitude */
+		for (k = 0; k < ARRAY_SIZE(sgn); k++)      /* Sign combos */
+			for (db = -(block2 / 2); db < (block2 + 1) / 2; db++)
+				for (sb = -(block2 / 2);
+				     sb < (block2 + 1) / 2; sb++) {
+					s64 dst, src;
+
+					dst = value(bt % dbits, db, sgn[k][0]);
+					src = value(bt % sbits, sb, sgn[k][1]);
+					i += (*emit)(self, arg, &insns[i],
+						     dst, src);
+				}
+
+	/* Append tail instructions */
+	insns[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 1);
+	insns[i++] = BPF_EXIT_INSN();
+	BUG_ON(i > len);
+
+	self->u.ptr.insns = insns;
+	self->u.ptr.len = i;
+
+	return 0;
+}
+
+/*
+ * Block size parameters used in pattern tests below. Tune as needed to
+ * increase/reduce the number of combinations tested, see following examples.
+ * block   values per operand MSB
+ * ----------------------------------------
+ *    0    none
+ *    1    (1 << MSB)
+ *    2    (1 << MSB) + [-1, 0]
+ *    3    (1 << MSB) + [-1, 0, 1]
+ */
+#define PATTERN_BLOCK1	1
+#define PATTERN_BLOCK2	5
+
+/* Number of test runs for a pattern test */
+#define NR_PATTERN_RUNS	1
+
+/*
+ * Exhaustive tests of ALU operations for all combinations of power-of-two
+ * magnitudes of the operands, both for positive and negative values. The
+ * test is designed to verify e.g. the ALU and ALU64 operations for JITs
+ * that emit different code depending on the magnitude of the immediate
+ * value.
+ */ + +static bool __bpf_alu_result(u64 *res, u64 v1, u64 v2, u8 op) +{ + *res = 0; + switch (op) { + case BPF_MOV: + *res = v2; + break; + case BPF_AND: + *res = v1 & v2; + break; + case BPF_OR: + *res = v1 | v2; + break; + case BPF_XOR: + *res = v1 ^ v2; + break; + case BPF_ADD: + *res = v1 + v2; + break; + case BPF_SUB: + *res = v1 - v2; + break; + case BPF_MUL: + *res = v1 * v2; + break; + case BPF_DIV: + if (v2 == 0) + return false; + *res = div64_u64(v1, v2); + break; + case BPF_MOD: + if (v2 == 0) + return false; + div64_u64_rem(v1, v2, res); + break; + } + return true; +} + +static int __bpf_emit_alu64_imm(struct bpf_test *self, void *arg, + struct bpf_insn *insns, s64 dst, s64 imm) +{ + int op = *(int *)arg; + int i = 0; + u64 res; + + if (!insns) + return 7; + + if (__bpf_alu_result(&res, dst, (s32)imm, op)) { + i += __bpf_ld_imm64(&insns[i], R1, dst); + i += __bpf_ld_imm64(&insns[i], R3, res); + insns[i++] = BPF_ALU64_IMM(op, R1, imm); + insns[i++] = BPF_JMP_REG(BPF_JEQ, R1, R3, 1); + insns[i++] = BPF_EXIT_INSN(); + } + + return i; +} + +static int __bpf_emit_alu32_imm(struct bpf_test *self, void *arg, + struct bpf_insn *insns, s64 dst, s64 imm) +{ + int op = *(int *)arg; + int i = 0; + u64 res; + + if (!insns) + return 7; + + if (__bpf_alu_result(&res, (u32)dst, (u32)imm, op)) { + i += __bpf_ld_imm64(&insns[i], R1, dst); + i += __bpf_ld_imm64(&insns[i], R3, (u32)res); + insns[i++] = BPF_ALU32_IMM(op, R1, imm); + insns[i++] = BPF_JMP_REG(BPF_JEQ, R1, R3, 1); + insns[i++] = BPF_EXIT_INSN(); + } + + return i; +} + +static int __bpf_emit_alu64_reg(struct bpf_test *self, void *arg, + struct bpf_insn *insns, s64 dst, s64 src) +{ + int op = *(int *)arg; + int i = 0; + u64 res; + + if (!insns) + return 9; + + if (__bpf_alu_result(&res, dst, src, op)) { + i += __bpf_ld_imm64(&insns[i], R1, dst); + i += __bpf_ld_imm64(&insns[i], R2, src); + i += __bpf_ld_imm64(&insns[i], R3, res); + insns[i++] = BPF_ALU64_REG(op, R1, R2); + insns[i++] = BPF_JMP_REG(BPF_JEQ, R1, R3, 1); + insns[i++] = BPF_EXIT_INSN(); + } + + return i; +} + +static int __bpf_emit_alu32_reg(struct bpf_test *self, void *arg, + struct bpf_insn *insns, s64 dst, s64 src) +{ + int op = *(int *)arg; + int i = 0; + u64 res; + + if (!insns) + return 9; + + if (__bpf_alu_result(&res, (u32)dst, (u32)src, op)) { + i += __bpf_ld_imm64(&insns[i], R1, dst); + i += __bpf_ld_imm64(&insns[i], R2, src); + i += __bpf_ld_imm64(&insns[i], R3, (u32)res); + insns[i++] = BPF_ALU32_REG(op, R1, R2); + insns[i++] = BPF_JMP_REG(BPF_JEQ, R1, R3, 1); + insns[i++] = BPF_EXIT_INSN(); + } + + return i; +} + +static int __bpf_fill_alu64_imm(struct bpf_test *self, int op) +{ + return __bpf_fill_pattern(self, &op, 64, 32, + PATTERN_BLOCK1, PATTERN_BLOCK2, + &__bpf_emit_alu64_imm); +} + +static int __bpf_fill_alu32_imm(struct bpf_test *self, int op) +{ + return __bpf_fill_pattern(self, &op, 64, 32, + PATTERN_BLOCK1, PATTERN_BLOCK2, + &__bpf_emit_alu32_imm); +} + +static int __bpf_fill_alu64_reg(struct bpf_test *self, int op) +{ + return __bpf_fill_pattern(self, &op, 64, 64, + PATTERN_BLOCK1, PATTERN_BLOCK2, + &__bpf_emit_alu64_reg); +} + +static int __bpf_fill_alu32_reg(struct bpf_test *self, int op) +{ + return __bpf_fill_pattern(self, &op, 64, 64, + PATTERN_BLOCK1, PATTERN_BLOCK2, + &__bpf_emit_alu32_reg); +} + +/* ALU64 immediate operations */ +static int bpf_fill_alu64_mov_imm(struct bpf_test *self) +{ + return __bpf_fill_alu64_imm(self, BPF_MOV); +} + +static int bpf_fill_alu64_and_imm(struct bpf_test *self) +{ + return __bpf_fill_alu64_imm(self, 
BPF_AND); +} + +static int bpf_fill_alu64_or_imm(struct bpf_test *self) +{ + return __bpf_fill_alu64_imm(self, BPF_OR); +} + +static int bpf_fill_alu64_xor_imm(struct bpf_test *self) +{ + return __bpf_fill_alu64_imm(self, BPF_XOR); +} + +static int bpf_fill_alu64_add_imm(struct bpf_test *self) +{ + return __bpf_fill_alu64_imm(self, BPF_ADD); +} + +static int bpf_fill_alu64_sub_imm(struct bpf_test *self) +{ + return __bpf_fill_alu64_imm(self, BPF_SUB); +} + +static int bpf_fill_alu64_mul_imm(struct bpf_test *self) +{ + return __bpf_fill_alu64_imm(self, BPF_MUL); +} + +static int bpf_fill_alu64_div_imm(struct bpf_test *self) +{ + return __bpf_fill_alu64_imm(self, BPF_DIV); +} + +static int bpf_fill_alu64_mod_imm(struct bpf_test *self) +{ + return __bpf_fill_alu64_imm(self, BPF_MOD); +} + +/* ALU32 immediate operations */ +static int bpf_fill_alu32_mov_imm(struct bpf_test *self) +{ + return __bpf_fill_alu32_imm(self, BPF_MOV); +} + +static int bpf_fill_alu32_and_imm(struct bpf_test *self) +{ + return __bpf_fill_alu32_imm(self, BPF_AND); +} + +static int bpf_fill_alu32_or_imm(struct bpf_test *self) +{ + return __bpf_fill_alu32_imm(self, BPF_OR); +} + +static int bpf_fill_alu32_xor_imm(struct bpf_test *self) +{ + return __bpf_fill_alu32_imm(self, BPF_XOR); +} + +static int bpf_fill_alu32_add_imm(struct bpf_test *self) +{ + return __bpf_fill_alu32_imm(self, BPF_ADD); +} + +static int bpf_fill_alu32_sub_imm(struct bpf_test *self) +{ + return __bpf_fill_alu32_imm(self, BPF_SUB); +} + +static int bpf_fill_alu32_mul_imm(struct bpf_test *self) +{ + return __bpf_fill_alu32_imm(self, BPF_MUL); +} + +static int bpf_fill_alu32_div_imm(struct bpf_test *self) +{ + return __bpf_fill_alu32_imm(self, BPF_DIV); +} + +static int bpf_fill_alu32_mod_imm(struct bpf_test *self) +{ + return __bpf_fill_alu32_imm(self, BPF_MOD); +} + +/* ALU64 register operations */ +static int bpf_fill_alu64_mov_reg(struct bpf_test *self) +{ + return __bpf_fill_alu64_reg(self, BPF_MOV); +} + +static int bpf_fill_alu64_and_reg(struct bpf_test *self) +{ + return __bpf_fill_alu64_reg(self, BPF_AND); +} + +static int bpf_fill_alu64_or_reg(struct bpf_test *self) +{ + return __bpf_fill_alu64_reg(self, BPF_OR); +} + +static int bpf_fill_alu64_xor_reg(struct bpf_test *self) +{ + return __bpf_fill_alu64_reg(self, BPF_XOR); +} + +static int bpf_fill_alu64_add_reg(struct bpf_test *self) +{ + return __bpf_fill_alu64_reg(self, BPF_ADD); +} + +static int bpf_fill_alu64_sub_reg(struct bpf_test *self) +{ + return __bpf_fill_alu64_reg(self, BPF_SUB); +} + +static int bpf_fill_alu64_mul_reg(struct bpf_test *self) +{ + return __bpf_fill_alu64_reg(self, BPF_MUL); +} + +static int bpf_fill_alu64_div_reg(struct bpf_test *self) +{ + return __bpf_fill_alu64_reg(self, BPF_DIV); +} + +static int bpf_fill_alu64_mod_reg(struct bpf_test *self) +{ + return __bpf_fill_alu64_reg(self, BPF_MOD); +} + +/* ALU32 register operations */ +static int bpf_fill_alu32_mov_reg(struct bpf_test *self) +{ + return __bpf_fill_alu32_reg(self, BPF_MOV); +} + +static int bpf_fill_alu32_and_reg(struct bpf_test *self) +{ + return __bpf_fill_alu32_reg(self, BPF_AND); +} + +static int bpf_fill_alu32_or_reg(struct bpf_test *self) +{ + return __bpf_fill_alu32_reg(self, BPF_OR); +} + +static int bpf_fill_alu32_xor_reg(struct bpf_test *self) +{ + return __bpf_fill_alu32_reg(self, BPF_XOR); +} + +static int bpf_fill_alu32_add_reg(struct bpf_test *self) +{ + return __bpf_fill_alu32_reg(self, BPF_ADD); +} + +static int bpf_fill_alu32_sub_reg(struct bpf_test *self) +{ + return 
__bpf_fill_alu32_reg(self, BPF_SUB); +} + +static int bpf_fill_alu32_mul_reg(struct bpf_test *self) +{ + return __bpf_fill_alu32_reg(self, BPF_MUL); +} + +static int bpf_fill_alu32_div_reg(struct bpf_test *self) +{ + return __bpf_fill_alu32_reg(self, BPF_DIV); +} + +static int bpf_fill_alu32_mod_reg(struct bpf_test *self) +{ + return __bpf_fill_alu32_reg(self, BPF_MOD); +} + static struct bpf_test tests[] = { { "TAX", @@ -8674,6 +9119,333 @@ static struct bpf_test tests[] = { { { 0, 1 } }, .fill_helper = bpf_fill_alu32_arsh_reg, }, + /* ALU64 immediate magnitudes */ + { + "ALU64_MOV_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_mov_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU64_AND_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_and_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU64_OR_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_or_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU64_XOR_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_xor_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU64_ADD_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_add_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU64_SUB_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_sub_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU64_MUL_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_mul_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU64_DIV_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_div_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU64_MOD_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_mod_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + /* ALU32 immediate magnitudes */ + { + "ALU32_MOV_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_mov_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU32_AND_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_and_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU32_OR_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_or_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU32_XOR_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_xor_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU32_ADD_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_add_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU32_SUB_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_sub_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU32_MUL_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 
0, 1 } }, + .fill_helper = bpf_fill_alu32_mul_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU32_DIV_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_div_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU32_MOD_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_mod_imm, + }, + /* ALU64 register magnitudes */ + { + "ALU64_MOV_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_mov_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU64_AND_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_and_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU64_OR_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_or_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU64_XOR_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_xor_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU64_ADD_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_add_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU64_SUB_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_sub_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU64_MUL_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_mul_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU64_DIV_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_div_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU64_MOD_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu64_mod_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + /* ALU32 register magnitudes */ + { + "ALU32_MOV_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_mov_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU32_AND_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_and_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU32_OR_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_or_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU32_XOR_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_xor_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU32_ADD_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_add_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU32_SUB_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_sub_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU32_MUL_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_alu32_mul_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "ALU32_DIV_X: all 
register value magnitudes",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_alu32_div_reg,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
+	{
+		"ALU32_MOD_X: all register value magnitudes",
+		{ },
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+		.fill_helper = bpf_fill_alu32_mod_reg,
+		.nr_testruns = NR_PATTERN_RUNS,
+	},
 };
 
 static struct net_device dev;
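To see what operand values the pattern generator feeds each emit
callback, the value() helper can be exercised on its own. A standalone
sketch assuming a hosted C compiler, probing msb = 31 with a block of
three deltas (the "block = 3" row of the table above):

	#include <stdio.h>

	/* Same formula as value() in the patch: sign * (1 << msb) + delta */
	static long long value(int msb, int delta, int sign)
	{
		return sign * (1LL << msb) + delta;
	}

	int main(void)
	{
		int delta, sign;

		/* Prints 0x7fffffff, 0x80000000, 0x80000001 and negations */
		for (sign = 1; sign >= -1; sign -= 2)
			for (delta = -1; delta <= 1; delta++)
				printf("%lld\n", value(31, delta, sign));
		return 0;
	}

Values straddling each power of two are exactly where a JIT may switch
between short and long instruction encodings, which is what these tests
aim to exercise.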
From patchwork Tue Sep 7 22:23:31 2021
X-Patchwork-Submitter: Johan Almbladh
X-Patchwork-Id: 12479513
X-Patchwork-Delegate: bpf@iogearbox.net
From: Johan Almbladh
To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org
Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com,
    john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com,
    netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh
Subject: [PATCH bpf-next v2 05/13] bpf/tests: Add exhaustive tests of JMP operand magnitudes
Date: Wed, 8 Sep 2021 00:23:31 +0200
Message-Id: <20210907222339.4130924-6-johan.almbladh@anyfinetworks.com>
In-Reply-To: <20210907222339.4130924-1-johan.almbladh@anyfinetworks.com>
References: <20210907222339.4130924-1-johan.almbladh@anyfinetworks.com>

This patch adds a set of tests for conditional JMP and JMP32 operations
to verify correctness for all possible magnitudes of the immediate and
register operands. Mainly intended for JIT testing.

Signed-off-by: Johan Almbladh
---
 lib/test_bpf.c | 779 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 779 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 7992004bc876..e7ea8472c3d1 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -1104,6 +1104,384 @@ static int bpf_fill_alu32_mod_reg(struct bpf_test *self)
 	return __bpf_fill_alu32_reg(self, BPF_MOD);
 }
 
+
+/*
+ * Exhaustive tests of JMP operations for all combinations of power-of-two
+ * magnitudes of the operands, both for positive and negative values. The
+ * test is designed to verify e.g. the JMP and JMP32 operations for JITs
+ * that emit different code depending on the magnitude of the immediate
+ * value.
+ */ + +static bool __bpf_match_jmp_cond(s64 v1, s64 v2, u8 op) +{ + switch (op) { + case BPF_JSET: + return !!(v1 & v2); + case BPF_JEQ: + return v1 == v2; + case BPF_JNE: + return v1 != v2; + case BPF_JGT: + return (u64)v1 > (u64)v2; + case BPF_JGE: + return (u64)v1 >= (u64)v2; + case BPF_JLT: + return (u64)v1 < (u64)v2; + case BPF_JLE: + return (u64)v1 <= (u64)v2; + case BPF_JSGT: + return v1 > v2; + case BPF_JSGE: + return v1 >= v2; + case BPF_JSLT: + return v1 < v2; + case BPF_JSLE: + return v1 <= v2; + } + return false; +} + +static int __bpf_emit_jmp_imm(struct bpf_test *self, void *arg, + struct bpf_insn *insns, s64 dst, s64 imm) +{ + int op = *(int *)arg; + + if (insns) { + bool match = __bpf_match_jmp_cond(dst, (s32)imm, op); + int i = 0; + + insns[i++] = BPF_ALU32_IMM(BPF_MOV, R0, match); + + i += __bpf_ld_imm64(&insns[i], R1, dst); + insns[i++] = BPF_JMP_IMM(op, R1, imm, 1); + if (!match) + insns[i++] = BPF_JMP_IMM(BPF_JA, 0, 0, 1); + insns[i++] = BPF_EXIT_INSN(); + + return i; + } + + return 5 + 1; +} + +static int __bpf_emit_jmp32_imm(struct bpf_test *self, void *arg, + struct bpf_insn *insns, s64 dst, s64 imm) +{ + int op = *(int *)arg; + + if (insns) { + bool match = __bpf_match_jmp_cond((s32)dst, (s32)imm, op); + int i = 0; + + i += __bpf_ld_imm64(&insns[i], R1, dst); + insns[i++] = BPF_JMP32_IMM(op, R1, imm, 1); + if (!match) + insns[i++] = BPF_JMP_IMM(BPF_JA, 0, 0, 1); + insns[i++] = BPF_EXIT_INSN(); + + return i; + } + + return 5; +} + +static int __bpf_emit_jmp_reg(struct bpf_test *self, void *arg, + struct bpf_insn *insns, s64 dst, s64 src) +{ + int op = *(int *)arg; + + if (insns) { + bool match = __bpf_match_jmp_cond(dst, src, op); + int i = 0; + + i += __bpf_ld_imm64(&insns[i], R1, dst); + i += __bpf_ld_imm64(&insns[i], R2, src); + insns[i++] = BPF_JMP_REG(op, R1, R2, 1); + if (!match) + insns[i++] = BPF_JMP_IMM(BPF_JA, 0, 0, 1); + insns[i++] = BPF_EXIT_INSN(); + + return i; + } + + return 7; +} + +static int __bpf_emit_jmp32_reg(struct bpf_test *self, void *arg, + struct bpf_insn *insns, s64 dst, s64 src) +{ + int op = *(int *)arg; + + if (insns) { + bool match = __bpf_match_jmp_cond((s32)dst, (s32)src, op); + int i = 0; + + i += __bpf_ld_imm64(&insns[i], R1, dst); + i += __bpf_ld_imm64(&insns[i], R2, src); + insns[i++] = BPF_JMP32_REG(op, R1, R2, 1); + if (!match) + insns[i++] = BPF_JMP_IMM(BPF_JA, 0, 0, 1); + insns[i++] = BPF_EXIT_INSN(); + + return i; + } + + return 7; +} + +static int __bpf_fill_jmp_imm(struct bpf_test *self, int op) +{ + return __bpf_fill_pattern(self, &op, 64, 32, + PATTERN_BLOCK1, PATTERN_BLOCK2, + &__bpf_emit_jmp_imm); +} + +static int __bpf_fill_jmp32_imm(struct bpf_test *self, int op) +{ + return __bpf_fill_pattern(self, &op, 64, 32, + PATTERN_BLOCK1, PATTERN_BLOCK2, + &__bpf_emit_jmp32_imm); +} + +static int __bpf_fill_jmp_reg(struct bpf_test *self, int op) +{ + return __bpf_fill_pattern(self, &op, 64, 64, + PATTERN_BLOCK1, PATTERN_BLOCK2, + &__bpf_emit_jmp_reg); +} + +static int __bpf_fill_jmp32_reg(struct bpf_test *self, int op) +{ + return __bpf_fill_pattern(self, &op, 64, 64, + PATTERN_BLOCK1, PATTERN_BLOCK2, + &__bpf_emit_jmp32_reg); +} + +/* JMP immediate tests */ +static int bpf_fill_jmp_jset_imm(struct bpf_test *self) +{ + return __bpf_fill_jmp_imm(self, BPF_JSET); +} + +static int bpf_fill_jmp_jeq_imm(struct bpf_test *self) +{ + return __bpf_fill_jmp_imm(self, BPF_JEQ); +} + +static int bpf_fill_jmp_jne_imm(struct bpf_test *self) +{ + return __bpf_fill_jmp_imm(self, BPF_JNE); +} + +static int bpf_fill_jmp_jgt_imm(struct 
bpf_test *self) +{ + return __bpf_fill_jmp_imm(self, BPF_JGT); +} + +static int bpf_fill_jmp_jge_imm(struct bpf_test *self) +{ + return __bpf_fill_jmp_imm(self, BPF_JGE); +} + +static int bpf_fill_jmp_jlt_imm(struct bpf_test *self) +{ + return __bpf_fill_jmp_imm(self, BPF_JLT); +} + +static int bpf_fill_jmp_jle_imm(struct bpf_test *self) +{ + return __bpf_fill_jmp_imm(self, BPF_JLE); +} + +static int bpf_fill_jmp_jsgt_imm(struct bpf_test *self) +{ + return __bpf_fill_jmp_imm(self, BPF_JSGT); +} + +static int bpf_fill_jmp_jsge_imm(struct bpf_test *self) +{ + return __bpf_fill_jmp_imm(self, BPF_JSGE); +} + +static int bpf_fill_jmp_jslt_imm(struct bpf_test *self) +{ + return __bpf_fill_jmp_imm(self, BPF_JSLT); +} + +static int bpf_fill_jmp_jsle_imm(struct bpf_test *self) +{ + return __bpf_fill_jmp_imm(self, BPF_JSLE); +} + +/* JMP32 immediate tests */ +static int bpf_fill_jmp32_jset_imm(struct bpf_test *self) +{ + return __bpf_fill_jmp32_imm(self, BPF_JSET); +} + +static int bpf_fill_jmp32_jeq_imm(struct bpf_test *self) +{ + return __bpf_fill_jmp32_imm(self, BPF_JEQ); +} + +static int bpf_fill_jmp32_jne_imm(struct bpf_test *self) +{ + return __bpf_fill_jmp32_imm(self, BPF_JNE); +} + +static int bpf_fill_jmp32_jgt_imm(struct bpf_test *self) +{ + return __bpf_fill_jmp32_imm(self, BPF_JGT); +} + +static int bpf_fill_jmp32_jge_imm(struct bpf_test *self) +{ + return __bpf_fill_jmp32_imm(self, BPF_JGE); +} + +static int bpf_fill_jmp32_jlt_imm(struct bpf_test *self) +{ + return __bpf_fill_jmp32_imm(self, BPF_JLT); +} + +static int bpf_fill_jmp32_jle_imm(struct bpf_test *self) +{ + return __bpf_fill_jmp32_imm(self, BPF_JLE); +} + +static int bpf_fill_jmp32_jsgt_imm(struct bpf_test *self) +{ + return __bpf_fill_jmp32_imm(self, BPF_JSGT); +} + +static int bpf_fill_jmp32_jsge_imm(struct bpf_test *self) +{ + return __bpf_fill_jmp32_imm(self, BPF_JSGE); +} + +static int bpf_fill_jmp32_jslt_imm(struct bpf_test *self) +{ + return __bpf_fill_jmp32_imm(self, BPF_JSLT); +} + +static int bpf_fill_jmp32_jsle_imm(struct bpf_test *self) +{ + return __bpf_fill_jmp32_imm(self, BPF_JSLE); +} + +/* JMP register tests */ +static int bpf_fill_jmp_jset_reg(struct bpf_test *self) +{ + return __bpf_fill_jmp_reg(self, BPF_JSET); +} + +static int bpf_fill_jmp_jeq_reg(struct bpf_test *self) +{ + return __bpf_fill_jmp_reg(self, BPF_JEQ); +} + +static int bpf_fill_jmp_jne_reg(struct bpf_test *self) +{ + return __bpf_fill_jmp_reg(self, BPF_JNE); +} + +static int bpf_fill_jmp_jgt_reg(struct bpf_test *self) +{ + return __bpf_fill_jmp_reg(self, BPF_JGT); +} + +static int bpf_fill_jmp_jge_reg(struct bpf_test *self) +{ + return __bpf_fill_jmp_reg(self, BPF_JGE); +} + +static int bpf_fill_jmp_jlt_reg(struct bpf_test *self) +{ + return __bpf_fill_jmp_reg(self, BPF_JLT); +} + +static int bpf_fill_jmp_jle_reg(struct bpf_test *self) +{ + return __bpf_fill_jmp_reg(self, BPF_JLE); +} + +static int bpf_fill_jmp_jsgt_reg(struct bpf_test *self) +{ + return __bpf_fill_jmp_reg(self, BPF_JSGT); +} + +static int bpf_fill_jmp_jsge_reg(struct bpf_test *self) +{ + return __bpf_fill_jmp_reg(self, BPF_JSGE); +} + +static int bpf_fill_jmp_jslt_reg(struct bpf_test *self) +{ + return __bpf_fill_jmp_reg(self, BPF_JSLT); +} + +static int bpf_fill_jmp_jsle_reg(struct bpf_test *self) +{ + return __bpf_fill_jmp_reg(self, BPF_JSLE); +} + +/* JMP32 register tests */ +static int bpf_fill_jmp32_jset_reg(struct bpf_test *self) +{ + return __bpf_fill_jmp32_reg(self, BPF_JSET); +} + +static int bpf_fill_jmp32_jeq_reg(struct bpf_test *self) +{ + return 
__bpf_fill_jmp32_reg(self, BPF_JEQ); +} + +static int bpf_fill_jmp32_jne_reg(struct bpf_test *self) +{ + return __bpf_fill_jmp32_reg(self, BPF_JNE); +} + +static int bpf_fill_jmp32_jgt_reg(struct bpf_test *self) +{ + return __bpf_fill_jmp32_reg(self, BPF_JGT); +} + +static int bpf_fill_jmp32_jge_reg(struct bpf_test *self) +{ + return __bpf_fill_jmp32_reg(self, BPF_JGE); +} + +static int bpf_fill_jmp32_jlt_reg(struct bpf_test *self) +{ + return __bpf_fill_jmp32_reg(self, BPF_JLT); +} + +static int bpf_fill_jmp32_jle_reg(struct bpf_test *self) +{ + return __bpf_fill_jmp32_reg(self, BPF_JLE); +} + +static int bpf_fill_jmp32_jsgt_reg(struct bpf_test *self) +{ + return __bpf_fill_jmp32_reg(self, BPF_JSGT); +} + +static int bpf_fill_jmp32_jsge_reg(struct bpf_test *self) +{ + return __bpf_fill_jmp32_reg(self, BPF_JSGE); +} + +static int bpf_fill_jmp32_jslt_reg(struct bpf_test *self) +{ + return __bpf_fill_jmp32_reg(self, BPF_JSLT); +} + +static int bpf_fill_jmp32_jsle_reg(struct bpf_test *self) +{ + return __bpf_fill_jmp32_reg(self, BPF_JSLE); +} + + static struct bpf_test tests[] = { { "TAX", @@ -9281,6 +9659,7 @@ static struct bpf_test tests[] = { { }, { { 0, 1 } }, .fill_helper = bpf_fill_alu32_mod_imm, + .nr_testruns = NR_PATTERN_RUNS, }, /* ALU64 register magnitudes */ { @@ -9446,6 +9825,406 @@ static struct bpf_test tests[] = { .fill_helper = bpf_fill_alu32_mod_reg, .nr_testruns = NR_PATTERN_RUNS, }, + /* JMP immediate magnitudes */ + { + "JMP_JSET_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jset_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP_JEQ_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jeq_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP_JNE_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jne_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP_JGT_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jgt_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP_JGE_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jge_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP_JLT_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jlt_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP_JLE_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jle_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP_JSGT_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jsgt_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP_JSGE_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jsge_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP_JSLT_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jslt_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP_JSLE_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jsle_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + /* JMP register magnitudes */ + { + 
"JMP_JSET_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jset_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP_JEQ_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jeq_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP_JNE_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jne_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP_JGT_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jgt_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP_JGE_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jge_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP_JLT_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jlt_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP_JLE_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jle_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP_JSGT_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jsgt_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP_JSGE_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jsge_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP_JSLT_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jslt_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP_JSLE_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp_jsle_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + /* JMP32 immediate magnitudes */ + { + "JMP32_JSET_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp32_jset_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP32_JEQ_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp32_jeq_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP32_JNE_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp32_jne_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP32_JGT_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp32_jgt_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP32_JGE_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp32_jge_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP32_JLT_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp32_jlt_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP32_JLE_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp32_jle_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP32_JSGT_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = 
bpf_fill_jmp32_jsgt_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP32_JSGE_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp32_jsge_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP32_JSLT_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp32_jslt_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP32_JSLE_K: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp32_jsle_imm, + .nr_testruns = NR_PATTERN_RUNS, + }, + /* JMP32 register magnitudes */ + { + "JMP32_JSET_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp32_jset_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP32_JEQ_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp32_jeq_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP32_JNE_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp32_jne_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP32_JGT_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp32_jgt_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP32_JGE_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp32_jge_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP32_JLT_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp32_jlt_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP32_JLE_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp32_jle_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP32_JSGT_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp32_jsgt_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP32_JSGE_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp32_jsge_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP32_JSLT_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp32_jslt_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, + { + "JMP32_JSLE_X: all register value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_jmp32_jsle_reg, + .nr_testruns = NR_PATTERN_RUNS, + }, }; static struct net_device dev; From patchwork Tue Sep 7 22:23:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johan Almbladh X-Patchwork-Id: 12479523 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 922E5C433EF 
for ; Tue, 7 Sep 2021 22:24:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 735D761108 for ; Tue, 7 Sep 2021 22:24:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1348736AbhIGWZI (ORCPT ); Tue, 7 Sep 2021 18:25:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39856 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1348176AbhIGWZD (ORCPT ); Tue, 7 Sep 2021 18:25:03 -0400 Received: from mail-ed1-x52f.google.com (mail-ed1-x52f.google.com [IPv6:2a00:1450:4864:20::52f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8B517C0617AE for ; Tue, 7 Sep 2021 15:23:56 -0700 (PDT) Received: by mail-ed1-x52f.google.com with SMTP id j13so25858edv.13 for ; Tue, 07 Sep 2021 15:23:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=anyfinetworks-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=uooNLUFw7tTjyttcx5ZfcVBX1/gJxTnpJeJqLsHXd70=; b=aIlisFBohL6Jf0r6Dj4HyCs3AQn9tN0chC8zUyz1c7la5S+A1cEjHiARE0HZMOdXyR vy36fEtlRL2X0E5bqTHfgz26PT0rkOs/srnobobFAVby+AAorDW+0Rxn5R4Q32tufp8E 5iDaQoMiX0qvLTq3OTdfXfnl0FV58LcaiZb7NdnWoLaOWHSJC63Yw15XqXZeVZFAgcfX 5SL7xUzW7xYg1nAkn6aZBPRRykWT5c3tqz7Cs3unAJx6US4CdWuMpgAtPRRwF4pLDfKl mDdfWEEgXML49jINbFMdv7LyYJ7a+d4EaB0z8vDV8LEpngDIoM0BLq1dYxSViZ3+fCe4 U/qg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=uooNLUFw7tTjyttcx5ZfcVBX1/gJxTnpJeJqLsHXd70=; b=jzLlpygdD6A7rsckWP6jTKN9OVPAFz6dknoXbN7c9I5zWxUWxx5oTvyMotDx3XDduA 8jv5CsL4EqDAnGDHWwEGamRROZL61dhv7x2gfoerZfo0L0yOSOfNBY/XoxPpEmYAFcWN y6yCzr5r433/naHWf3/Spk1qudnb7ydhgOP3EciuLI0Mvc8GdsEahQqZtkVV7mDXuhSf wjZhH0F9UddJw1cRHkWBbofLsyvoX9DDHLnSGLHF5ysFc9MPkq0SCWfE4c7C1oeaVpXV JRBoWXW68tK0NsRVIXsaeUDJkz/Y6YMEWt7qqUnGmYgdHXdGByGxC4nY7HbZrRmd4+mg 6vHg== X-Gm-Message-State: AOAM531MLMmEz4M2jiUVqaQl4N0EPeK8+T4jHaT8l7y6SuQlSnpft3SJ uP3pE+VM/UY+p2t2w9fxuzhcSA== X-Google-Smtp-Source: ABdhPJytjE2GCvCIaCoxeBNOe0JBGv1XW+v6YxCtV4nPYFTLRwvR7/RCzPfqHrSrjpwDG+6/WrK5Dw== X-Received: by 2002:a50:ee94:: with SMTP id f20mr513340edr.117.1631053435023; Tue, 07 Sep 2021 15:23:55 -0700 (PDT) Received: from anpc2.lan (static-213-115-136-2.sme.telenor.se. [213.115.136.2]) by smtp.gmail.com with ESMTPSA id gb24sm71772ejc.53.2021.09.07.15.23.54 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 07 Sep 2021 15:23:54 -0700 (PDT) From: Johan Almbladh To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com, netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh Subject: [PATCH bpf-next v2 06/13] bpf/tests: Add staggered JMP and JMP32 tests Date: Wed, 8 Sep 2021 00:23:32 +0200 Message-Id: <20210907222339.4130924-7-johan.almbladh@anyfinetworks.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210907222339.4130924-1-johan.almbladh@anyfinetworks.com> References: <20210907222339.4130924-1-johan.almbladh@anyfinetworks.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net This patch adds a new type of jump test where the program jumps forwards and backwards with increasing offset. 
It mainly tests JITs where a relative jump may generate different JITed code depending on the offset size, read MIPS. Signed-off-by: Johan Almbladh --- lib/test_bpf.c | 829 +++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 829 insertions(+) diff --git a/lib/test_bpf.c b/lib/test_bpf.c index e7ea8472c3d1..c25e99a461de 100644 --- a/lib/test_bpf.c +++ b/lib/test_bpf.c @@ -1481,6 +1481,426 @@ static int bpf_fill_jmp32_jsle_reg(struct bpf_test *self) return __bpf_fill_jmp32_reg(self, BPF_JSLE); } +/* + * Set up a sequence of staggered jumps, forwards and backwards with + * increasing offset. This tests the conversion of relative jumps to + * JITed native jumps. On some architectures, for example MIPS, a large + * PC-relative jump offset may overflow the immediate field of the native + * conditional branch instruction, triggering a conversion to use an + * absolute jump instead. Since this changes the jump offsets, another + * offset computation pass is necessary, and that may in turn trigger + * another branch conversion. This jump sequence is particularly nasty + * in that regard. + * + * The sequence generation is parameterized by size and jump type. + * The size must be even, and the expected result is always size + 1. + * Below is an example with size=8 and result=9. + * + * ________________________Start + * R0 = 0 + * R1 = r1 + * R2 = r2 + * ,------- JMP +4 * 3______________Preamble: 4 insns + * ,----------|-ind 0- if R0 != 7 JMP 8 * 3 + 1 <--------------------. + * | | R0 = 8 | + * | | JMP +7 * 3 ------------------------. + * | ,--------|-----1- if R0 != 5 JMP 7 * 3 + 1 <--------------. | | + * | | | R0 = 6 | | | + * | | | JMP +5 * 3 ------------------. | | + * | | ,------|-----2- if R0 != 3 JMP 6 * 3 + 1 <--------. | | | | + * | | | | R0 = 4 | | | | | + * | | | | JMP +3 * 3 ------------. | | | | + * | | | ,----|-----3- if R0 != 1 JMP 5 * 3 + 1 <--. | | | | | | + * | | | | | R0 = 2 | | | | | | | + * | | | | | JMP +1 * 3 ------. 
| | | | | | + * | | | | ,--t=====4> if R0 != 0 JMP 4 * 3 + 1 1 2 3 4 5 6 7 8 loc + * | | | | | R0 = 1 -1 +2 -3 +4 -5 +6 -7 +8 off + * | | | | | JMP -2 * 3 ---' | | | | | | | + * | | | | | ,------5- if R0 != 2 JMP 3 * 3 + 1 <-----' | | | | | | + * | | | | | | R0 = 3 | | | | | | + * | | | | | | JMP -4 * 3 ---------' | | | | | + * | | | | | | ,----6- if R0 != 4 JMP 2 * 3 + 1 <-----------' | | | | + * | | | | | | | R0 = 5 | | | | + * | | | | | | | JMP -6 * 3 ---------------' | | | + * | | | | | | | ,--7- if R0 != 6 JMP 1 * 3 + 1 <-----------------' | | + * | | | | | | | | R0 = 7 | | + * | | Error | | | JMP -8 * 3 ---------------------' | + * | | paths | | | ,8- if R0 != 8 JMP 0 * 3 + 1 <-----------------------' + * | | | | | | | | | R0 = 9__________________Sequence: 3 * size - 1 insns + * `-+-+-+-+-+-+-+-+-> EXIT____________________Return: 1 insn + * + */ + +/* The maximum size parameter */ +#define MAX_STAGGERED_JMP_SIZE ((0x7fff / 3) & ~1) + +/* We use a reduced number of iterations to get a reasonable execution time */ +#define NR_STAGGERED_JMP_RUNS 10 + +static int __bpf_fill_staggered_jumps(struct bpf_test *self, + const struct bpf_insn *jmp, + u64 r1, u64 r2) +{ + int size = self->test[0].result - 1; + int len = 4 + 3 * (size + 1); + struct bpf_insn *insns; + int off, ind; + + insns = kmalloc_array(len, sizeof(*insns), GFP_KERNEL); + if (!insns) + return -ENOMEM; + + /* Preamble */ + insns[0] = BPF_ALU64_IMM(BPF_MOV, R0, 0); + insns[1] = BPF_ALU64_IMM(BPF_MOV, R1, r1); + insns[2] = BPF_ALU64_IMM(BPF_MOV, R2, r2); + insns[3] = BPF_JMP_IMM(BPF_JA, 0, 0, 3 * size / 2); + + /* Sequence */ + for (ind = 0, off = size; ind <= size; ind++, off -= 2) { + struct bpf_insn *ins = &insns[4 + 3 * ind]; + int loc; + + if (off == 0) + off--; + + loc = abs(off); + ins[0] = BPF_JMP_IMM(BPF_JNE, R0, loc - 1, + 3 * (size - ind) + 1); + ins[1] = BPF_ALU64_IMM(BPF_MOV, R0, loc); + ins[2] = *jmp; + ins[2].off = 3 * (off - 1); + } + + /* Return */ + insns[len - 1] = BPF_EXIT_INSN(); + + self->u.ptr.insns = insns; + self->u.ptr.len = len; + + return 0; +} + +/* 64-bit unconditional jump */ +static int bpf_fill_staggered_ja(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JA, 0, 0, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 0, 0); +} + +/* 64-bit immediate jumps */ +static int bpf_fill_staggered_jeq_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JEQ, R1, 1234, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0); +} + +static int bpf_fill_staggered_jne_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JNE, R1, 1234, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 4321, 0); +} + +static int bpf_fill_staggered_jset_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JSET, R1, 0x82, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 0x86, 0); +} + +static int bpf_fill_staggered_jgt_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JGT, R1, 1234, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 0x80000000, 0); +} + +static int bpf_fill_staggered_jge_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JGE, R1, 1234, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0); +} + +static int bpf_fill_staggered_jlt_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JLT, R1, 0x80000000, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0); +} + +static int bpf_fill_staggered_jle_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = 
BPF_JMP_IMM(BPF_JLE, R1, 1234, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0); +} + +static int bpf_fill_staggered_jsgt_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JSGT, R1, -2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, -1, 0); +} + +static int bpf_fill_staggered_jsge_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JSGE, R1, -2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, -2, 0); +} + +static int bpf_fill_staggered_jslt_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JSLT, R1, -1, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, -2, 0); +} + +static int bpf_fill_staggered_jsle_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_IMM(BPF_JSLE, R1, -1, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, -1, 0); +} + +/* 64-bit register jumps */ +static int bpf_fill_staggered_jeq_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_REG(BPF_JEQ, R1, R2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 1234); +} + +static int bpf_fill_staggered_jne_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_REG(BPF_JNE, R1, R2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 4321, 1234); +} + +static int bpf_fill_staggered_jset_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_REG(BPF_JSET, R1, R2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 0x86, 0x82); +} + +static int bpf_fill_staggered_jgt_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_REG(BPF_JGT, R1, R2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 0x80000000, 1234); +} + +static int bpf_fill_staggered_jge_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_REG(BPF_JGE, R1, R2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 1234); +} + +static int bpf_fill_staggered_jlt_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_REG(BPF_JLT, R1, R2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0x80000000); +} + +static int bpf_fill_staggered_jle_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_REG(BPF_JLE, R1, R2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 1234); +} + +static int bpf_fill_staggered_jsgt_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_REG(BPF_JSGT, R1, R2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, -1, -2); +} + +static int bpf_fill_staggered_jsge_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_REG(BPF_JSGE, R1, R2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, -2, -2); +} + +static int bpf_fill_staggered_jslt_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_REG(BPF_JSLT, R1, R2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, -2, -1); +} + +static int bpf_fill_staggered_jsle_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP_REG(BPF_JSLE, R1, R2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, -1, -1); +} + +/* 32-bit immediate jumps */ +static int bpf_fill_staggered_jeq32_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JEQ, R1, 1234, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0); +} + +static int bpf_fill_staggered_jne32_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JNE, R1, 1234, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 4321, 0); +} + +static int bpf_fill_staggered_jset32_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JSET, R1, 0x82, 0); + + return 
__bpf_fill_staggered_jumps(self, &jmp, 0x86, 0); +} + +static int bpf_fill_staggered_jgt32_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JGT, R1, 1234, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 0x80000000, 0); +} + +static int bpf_fill_staggered_jge32_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JGE, R1, 1234, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0); +} + +static int bpf_fill_staggered_jlt32_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JLT, R1, 0x80000000, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0); +} + +static int bpf_fill_staggered_jle32_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JLE, R1, 1234, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0); +} + +static int bpf_fill_staggered_jsgt32_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JSGT, R1, -2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, -1, 0); +} + +static int bpf_fill_staggered_jsge32_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JSGE, R1, -2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, -2, 0); +} + +static int bpf_fill_staggered_jslt32_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JSLT, R1, -1, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, -2, 0); +} + +static int bpf_fill_staggered_jsle32_imm(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_IMM(BPF_JSLE, R1, -1, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, -1, 0); +} + +/* 32-bit register jumps */ +static int bpf_fill_staggered_jeq32_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JEQ, R1, R2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 1234); +} + +static int bpf_fill_staggered_jne32_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JNE, R1, R2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 4321, 1234); +} + +static int bpf_fill_staggered_jset32_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JSET, R1, R2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 0x86, 0x82); +} + +static int bpf_fill_staggered_jgt32_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JGT, R1, R2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 0x80000000, 1234); +} + +static int bpf_fill_staggered_jge32_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JGE, R1, R2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 1234); +} + +static int bpf_fill_staggered_jlt32_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JLT, R1, R2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0x80000000); +} + +static int bpf_fill_staggered_jle32_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JLE, R1, R2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, 1234, 1234); +} + +static int bpf_fill_staggered_jsgt32_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JSGT, R1, R2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, -1, -2); +} + +static int bpf_fill_staggered_jsge32_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JSGE, R1, R2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, -2, -2); +} + +static int bpf_fill_staggered_jslt32_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JSLT, R1, R2, 0); + + return 
__bpf_fill_staggered_jumps(self, &jmp, -2, -1); +} + +static int bpf_fill_staggered_jsle32_reg(struct bpf_test *self) +{ + struct bpf_insn jmp = BPF_JMP32_REG(BPF_JSLE, R1, R2, 0); + + return __bpf_fill_staggered_jumps(self, &jmp, -1, -1); +} + static struct bpf_test tests[] = { { @@ -10225,6 +10645,415 @@ static struct bpf_test tests[] = { .fill_helper = bpf_fill_jmp32_jsle_reg, .nr_testruns = NR_PATTERN_RUNS, }, + /* Staggered jump sequences, immediate */ + { + "Staggered jumps: JMP_JA", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_ja, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP_JEQ_K", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jeq_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP_JNE_K", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jne_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP_JSET_K", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jset_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP_JGT_K", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jgt_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP_JGE_K", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jge_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP_JLT_K", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jlt_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP_JLE_K", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jle_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP_JSGT_K", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jsgt_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP_JSGE_K", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jsge_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP_JSLT_K", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jslt_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP_JSLE_K", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jsle_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + /* Staggered jump sequences, register */ + { + "Staggered jumps: JMP_JEQ_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jeq_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP_JNE_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jne_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP_JSET_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = 
bpf_fill_staggered_jset_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP_JGT_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jgt_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP_JGE_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jge_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP_JLT_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jlt_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP_JLE_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jle_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP_JSGT_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jsgt_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP_JSGE_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jsge_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP_JSLT_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jslt_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP_JSLE_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jsle_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + /* Staggered jump sequences, JMP32 immediate */ + { + "Staggered jumps: JMP32_JEQ_K", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jeq32_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP32_JNE_K", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jne32_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP32_JSET_K", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jset32_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP32_JGT_K", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jgt32_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP32_JGE_K", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jge32_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP32_JLT_K", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jlt32_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP32_JLE_K", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jle32_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP32_JSGT_K", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jsgt32_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP32_JSGE_K", + { }, + INTERNAL | 
FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jsge32_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP32_JSLT_K", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jslt32_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP32_JSLE_K", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jsle32_imm, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + /* Staggered jump sequences, JMP32 register */ + { + "Staggered jumps: JMP32_JEQ_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jeq32_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP32_JNE_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jne32_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP32_JSET_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jset32_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP32_JGT_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jgt32_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP32_JGE_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jge32_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP32_JLT_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jlt32_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP32_JLE_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jle32_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP32_JSGT_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jsgt32_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP32_JSGE_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jsge32_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP32_JSLT_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jslt32_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, + { + "Staggered jumps: JMP32_JSLE_X", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, MAX_STAGGERED_JMP_SIZE + 1 } }, + .fill_helper = bpf_fill_staggered_jsle32_reg, + .nr_testruns = NR_STAGGERED_JMP_RUNS, + }, }; static struct net_device dev; From patchwork Tue Sep 7 22:23:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johan Almbladh X-Patchwork-Id: 12479521 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, 
MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 32603C43219 for ; Tue, 7 Sep 2021 22:24:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1DFB261101 for ; Tue, 7 Sep 2021 22:24:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1348176AbhIGWZJ (ORCPT ); Tue, 7 Sep 2021 18:25:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39864 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1348194AbhIGWZE (ORCPT ); Tue, 7 Sep 2021 18:25:04 -0400 Received: from mail-ed1-x529.google.com (mail-ed1-x529.google.com [IPv6:2a00:1450:4864:20::529]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 83773C061757 for ; Tue, 7 Sep 2021 15:23:57 -0700 (PDT) Received: by mail-ed1-x529.google.com with SMTP id g22so30999edy.12 for ; Tue, 07 Sep 2021 15:23:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=anyfinetworks-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=HhJO+fwx4Re7Ogd46AIK3Gol1QT5GDKpHEFoCva4/XY=; b=r9wpcq38uuyLdaPq7ss8nO5w1z+fCl9W3VsjbCMpFzMP35wvpUwboXIzDS/Tjqn5Ni os5Zi46c/M+nDj+H9P4MI/k04Mh3YiIWYXyRNNsY4XJNT+oo/57KrovlY/dZbVS48zdU Iu/XsZJQ19wROcY4hxUimFZOk281vlX78qqDyYPIm4tlzx9PWu2OYMvGZt1+8P+D8E5q 5nhaXLFiDHn6u+DOIEdK8oDW7n0B4Hs9cBDwA5YFI9fHELtOlrVeTN3ob0/C3UoG3BAT Xn29EwFGPdBJyPgf1aqVVX9Qiz/W56meTh7R+3RJkDHJDqKWh6mhSEIek1Ld/SV1icog kXwg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=HhJO+fwx4Re7Ogd46AIK3Gol1QT5GDKpHEFoCva4/XY=; b=l76rUKx/8BwCPCDSs+2HDVBFNCVzic/uRjH3CKV5mV6PhPFTI/aLErgHPAMDL1S5Bj WZDib94eEqP80DQTrSf4NcUipdcBidnB+W/XLMUm96JSio2p/4c9OVdmOwM46pI25PHM wTTDFb+gwNxK9p7sQ+g2gOYNjRfu9LNJKkQ80lJGoZbJny4kzwkEbfqbnYBKFEpFSS70 Iwe8MVUQ9PIVs2heax3EHqM14Y6KQG1TL1RYQcc17ULWCwcbgfts/ylxfHNsGUjN0hhC qTLBIMJQVf46KMkWwwUmFxoTA1BLUzYq7PUMNxeZxSb4Zq6r9hGJpb/V5/VOFn31j+oA LCQw== X-Gm-Message-State: AOAM533r58YqjmmFBriiiUiLPBj+/+Ku6eKDI2LTiyNJtK5NUps77pJC g17kJqqtX8rrGm+qD9b7jePXBg== X-Google-Smtp-Source: ABdhPJxlrzDpvGTsqk6MIZOtirpUruD2WU7uuxQMUtZlfIvU3sqqMMbq/bMtQO3bMygvpMbkLzkf7Q== X-Received: by 2002:aa7:c313:: with SMTP id l19mr512534edq.131.1631053436076; Tue, 07 Sep 2021 15:23:56 -0700 (PDT) Received: from anpc2.lan (static-213-115-136-2.sme.telenor.se. 
[213.115.136.2]) by smtp.gmail.com with ESMTPSA id gb24sm71772ejc.53.2021.09.07.15.23.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 07 Sep 2021 15:23:55 -0700 (PDT) From: Johan Almbladh To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com, netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh Subject: [PATCH bpf-next v2 07/13] bpf/tests: Add exhaustive test of LD_IMM64 immediate magnitudes Date: Wed, 8 Sep 2021 00:23:33 +0200 Message-Id: <20210907222339.4130924-8-johan.almbladh@anyfinetworks.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210907222339.4130924-1-johan.almbladh@anyfinetworks.com> References: <20210907222339.4130924-1-johan.almbladh@anyfinetworks.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net this patch adds a test for the 64-bit immediate load, a two-instruction operation, to verify correctness for all possible magnitudes of the immediate operand. Mainly intended for JIT testing. Signed-off-by: Johan Almbladh --- lib/test_bpf.c | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 63 insertions(+) diff --git a/lib/test_bpf.c b/lib/test_bpf.c index c25e99a461de..6a04447171c7 100644 --- a/lib/test_bpf.c +++ b/lib/test_bpf.c @@ -1104,6 +1104,60 @@ static int bpf_fill_alu32_mod_reg(struct bpf_test *self) return __bpf_fill_alu32_reg(self, BPF_MOD); } +/* + * Test the two-instruction 64-bit immediate load operation for all + * power-of-two magnitudes of the immediate operand. For each MSB, a block + * of immediate values centered around the power-of-two MSB are tested, + * both for positive and negative values. The test is designed to verify + * the operation for JITs that emit different code depending on the magnitude + * of the immediate value. This is often the case if the native instruction + * immediate field width is narrower than 32 bits. 
+ */ +static int bpf_fill_ld_imm64(struct bpf_test *self) +{ + int block = 64; /* Increase for more tests per MSB position */ + int len = 3 + 8 * 63 * block * 2; + struct bpf_insn *insn; + int bit, adj, sign; + int i = 0; + + insn = kmalloc_array(len, sizeof(*insn), GFP_KERNEL); + if (!insn) + return -ENOMEM; + + insn[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 0); + + for (bit = 0; bit <= 62; bit++) { + for (adj = -block / 2; adj < block / 2; adj++) { + for (sign = -1; sign <= 1; sign += 2) { + s64 imm = sign * ((1LL << bit) + adj); + + /* Perform operation */ + i += __bpf_ld_imm64(&insn[i], R1, imm); + + /* Load reference */ + insn[i++] = BPF_ALU32_IMM(BPF_MOV, R2, imm); + insn[i++] = BPF_ALU32_IMM(BPF_MOV, R3, + (u32)(imm >> 32)); + insn[i++] = BPF_ALU64_IMM(BPF_LSH, R3, 32); + insn[i++] = BPF_ALU64_REG(BPF_OR, R2, R3); + + /* Check result */ + insn[i++] = BPF_JMP_REG(BPF_JEQ, R1, R2, 1); + insn[i++] = BPF_EXIT_INSN(); + } + } + } + + insn[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 1); + insn[i++] = BPF_EXIT_INSN(); + + self->u.ptr.insns = insn; + self->u.ptr.len = len; + BUG_ON(i != len); + + return 0; +} /* * Exhaustive tests of JMP operations for all combinations of power-of-two @@ -10245,6 +10299,15 @@ static struct bpf_test tests[] = { .fill_helper = bpf_fill_alu32_mod_reg, .nr_testruns = NR_PATTERN_RUNS, }, + /* LD_IMM64 immediate magnitudes */ + { + "LD_IMM64: all immediate value magnitudes", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_ld_imm64, + }, /* JMP immediate magnitudes */ { "JMP_JSET_K: all immediate value magnitudes", From patchwork Tue Sep 7 22:23:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johan Almbladh X-Patchwork-Id: 12479525 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E8371C43217 for ; Tue, 7 Sep 2021 22:24:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D3D0961101 for ; Tue, 7 Sep 2021 22:24:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1348393AbhIGWZK (ORCPT ); Tue, 7 Sep 2021 18:25:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39872 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1348363AbhIGWZF (ORCPT ); Tue, 7 Sep 2021 18:25:05 -0400 Received: from mail-ed1-x52a.google.com (mail-ed1-x52a.google.com [IPv6:2a00:1450:4864:20::52a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 74D60C061757 for ; Tue, 7 Sep 2021 15:23:58 -0700 (PDT) Received: by mail-ed1-x52a.google.com with SMTP id g8so62308edt.7 for ; Tue, 07 Sep 2021 15:23:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=anyfinetworks-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=VrYufPGtjCUT0dky9yDhI8t1ufTvbEptWesm+ao/2u0=; b=M6rYgPcRM/XxNtbSMsd5L2c+ztWnOku7cmwqnr5o4ezFRiBhmANsKRbLs2Idz47LH7 
lLlT2KcAIvWgC1I0kHtR1lRkga1dB38a5UvLA+zGCpC3wMmDLTWHxvrewgyoR3GXTkGk 8fSIiUcnHLwrO8HZ67mU9Rg1Lu50TwDS27pKyHADMvPev4vlWXFw5ec5LtoN18gGukor 6O7Xm4dAHW2ht8/eJa+xyvNe/XjseR+zfe2OyHObFIu+hV7+YHjON7z4opoG+Y63iGpo 6KoadOA9BZB9EU+x80AXoOWO/C5+3D/HvG0iWeOL1zzh2zk0LKB11f9LrDrJKaj65WM7 i9Vg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=VrYufPGtjCUT0dky9yDhI8t1ufTvbEptWesm+ao/2u0=; b=Ss1BwVoeO9g0ZBCUk8FMXSXOp8Hbe/VcOwTV28b3xnEeFhyTQkLD3nDPRJOrVOdBjn tfXqHjq37XaSnn6MJQ2ijv0b6I4Xf0rbldul2vS/CfX+a6U1a67a+oXpR3UPEGWNOobM 7+S3f2oK9Vb8dveeNkcv0Ac//y/ZRjlGJols5K70FR6PgFJU8Hb9tXnH4Bl9lEz++e4V N8qPJybW1deb83P79woNhUzds29cyBthVbJl7/pNV5la53iVrb6CkDeDElPhn3IRyz9U x1dcLrUtfx5YWw0tYXhnLTGbegxR6TstWO4cJO/DbXDW/QzDt9J6sSgU8EZt1qHY/CLK FyrA== X-Gm-Message-State: AOAM53188SeYoG606g74iJnyBj6Yv0X40N81IgDQUsFtJgA1NjnO/UsB LTFkeJlelHHSDWs82heVz5RLMw== X-Google-Smtp-Source: ABdhPJxq/zVCKO+LcvpT4BOmvbBvObpGjqgAxUwKWfiwxPoRvleMXb4blJuA/NcsIQTueqQlLCDvMA== X-Received: by 2002:a05:6402:40ce:: with SMTP id z14mr560238edb.28.1631053437078; Tue, 07 Sep 2021 15:23:57 -0700 (PDT) Received: from anpc2.lan (static-213-115-136-2.sme.telenor.se. [213.115.136.2]) by smtp.gmail.com with ESMTPSA id gb24sm71772ejc.53.2021.09.07.15.23.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 07 Sep 2021 15:23:56 -0700 (PDT) From: Johan Almbladh To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com, netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh Subject: [PATCH bpf-next v2 08/13] bpf/tests: Add test case flag for verifier zero-extension Date: Wed, 8 Sep 2021 00:23:34 +0200 Message-Id: <20210907222339.4130924-9-johan.almbladh@anyfinetworks.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210907222339.4130924-1-johan.almbladh@anyfinetworks.com> References: <20210907222339.4130924-1-johan.almbladh@anyfinetworks.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net This patch adds a new flag to indicate that the verified did insert zero-extensions, even though the verifier is not being run for any of the tests. Signed-off-by: Johan Almbladh --- lib/test_bpf.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/lib/test_bpf.c b/lib/test_bpf.c index 6a04447171c7..26f7c244c78a 100644 --- a/lib/test_bpf.c +++ b/lib/test_bpf.c @@ -52,6 +52,7 @@ #define FLAG_NO_DATA BIT(0) #define FLAG_EXPECTED_FAIL BIT(1) #define FLAG_SKB_FRAG BIT(2) +#define FLAG_VERIFIER_ZEXT BIT(3) enum { CLASSIC = BIT(6), /* Old BPF instructions only. */ @@ -11280,6 +11281,8 @@ static struct bpf_prog *generate_filter(int which, int *err) fp->type = BPF_PROG_TYPE_SOCKET_FILTER; memcpy(fp->insnsi, fptr, fp->len * sizeof(struct bpf_insn)); fp->aux->stack_depth = tests[which].stack_depth; + fp->aux->verifier_zext = !!(tests[which].aux & + FLAG_VERIFIER_ZEXT); /* We cannot error here as we don't need type compatibility * checks. 
From patchwork Tue Sep 7 22:23:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johan Almbladh X-Patchwork-Id: 12479527 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 46DF5C4332F for ; Tue, 7 Sep 2021 22:24:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 330EA61101 for ; Tue, 7 Sep 2021 22:24:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1348363AbhIGWZK (ORCPT ); Tue, 7 Sep 2021 18:25:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39878 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1348380AbhIGWZG (ORCPT ); Tue, 7 Sep 2021 18:25:06 -0400 Received: from mail-ej1-x62f.google.com (mail-ej1-x62f.google.com [IPv6:2a00:1450:4864:20::62f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7956FC061757 for ; Tue, 7 Sep 2021 15:23:59 -0700 (PDT) Received: by mail-ej1-x62f.google.com with SMTP id u14so1136184ejf.13 for ; Tue, 07 Sep 2021 15:23:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=anyfinetworks-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=J0f5jVcksFy4YRNLGjbRBcdH4KEBNN3Q3Q2xThuDCKg=; b=ekvGOWm3GywuqTBDsN+csbEfHchz/0qck+uoAYdjfpl5Jb6PrWM/Zd5f0yOmXqWpBV umZaa3A167yiRtGpY8il28GEfgRp/dfP38AZSdKabuR8pVS88LhXj5jhMQXHnXTXmCcp 3UOSTazWf+GQg25BJX43pl3HXk0EYE68C/TblA3mluGOPHAw8x6f7LocIuGZXsIIKOSR +0Uq78702T44TUUnJn0ycuerXV5f3+f4tk0IdonzIwLxYuuuSYJkTpZqa9CwFwIwpxb+ BzfMzQw3iDvyXWJ41let5Xg8HR6rloNHJh9U0OgShp7EBhNt4S+yJePq85LRQladHPRl uKaw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=J0f5jVcksFy4YRNLGjbRBcdH4KEBNN3Q3Q2xThuDCKg=; b=jbFuZJfM76PMe3oDctEo4PoBOBWlMKqJ9gkMH86klChIaJRmJyVME1A21v8JMQPpuS P8R0gn8RwKyxhNeJYP3aSY97hJ7Cs2pdbMqCfQzR8TokuIC3XhVJn2bWAV4RP24x+H2O 2eHF8Aj57FNOzH6L5DJFtt+Y+IXmPmLuKSUCgKgC7tCAA9CzehmMUDdRvZL1nh5c4Ds6 zVqKWaeGpDuV0TDKOGb2ei4QkQTeGwqj26BNsbWUd5ezKecqFE9+ILqD5WfmUvM7X7ou G5LgJqEEII4UC7+Z1ipN813SisMeX2x3Fy6n+hhTz8oCc7+yoRvXTRx/U6BuyyyYN/5M 780A== X-Gm-Message-State: AOAM530Ud7Ss+2ooCnKW4YPUr0lKseKx8BMojLemJsWwOBvQW5J+lybx fd/isTMARmP6OW1cno6xOXAG6w== X-Google-Smtp-Source: ABdhPJzNmGCsAC9sTJgFBSmx2JMy4diEccCwiWEDSuMsgshA9CWAMyxOceSXGCSGXrQi3G+5Oxi6vw== X-Received: by 2002:a17:907:aa4:: with SMTP id bz4mr629006ejc.97.1631053438120; Tue, 07 Sep 2021 15:23:58 -0700 (PDT) Received: from anpc2.lan (static-213-115-136-2.sme.telenor.se. 
[213.115.136.2]) by smtp.gmail.com with ESMTPSA id gb24sm71772ejc.53.2021.09.07.15.23.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 07 Sep 2021 15:23:57 -0700 (PDT) From: Johan Almbladh To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com, netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh Subject: [PATCH bpf-next v2 09/13] bpf/tests: Add JMP tests with small offsets Date: Wed, 8 Sep 2021 00:23:35 +0200 Message-Id: <20210907222339.4130924-10-johan.almbladh@anyfinetworks.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210907222339.4130924-1-johan.almbladh@anyfinetworks.com> References: <20210907222339.4130924-1-johan.almbladh@anyfinetworks.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net This patch adds a set of tests for JMP to verify that the JITed jump offset is calculated correctly. We pretend that the verifier has inserted any zero extensions to make the jump-over operations JIT to one instruction each, in order to control the exact JITed jump offset. Signed-off-by: Johan Almbladh --- lib/test_bpf.c | 71 ++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 71 insertions(+) diff --git a/lib/test_bpf.c b/lib/test_bpf.c index 26f7c244c78a..7286cf347b95 100644 --- a/lib/test_bpf.c +++ b/lib/test_bpf.c @@ -10709,6 +10709,77 @@ static struct bpf_test tests[] = { .fill_helper = bpf_fill_jmp32_jsle_reg, .nr_testruns = NR_PATTERN_RUNS, }, + /* Short relative jumps */ + { + "Short relative jump: offset=0", + .u.insns_int = { + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_JMP_IMM(BPF_JEQ, R0, 0, 0), + BPF_EXIT_INSN(), + BPF_ALU32_IMM(BPF_MOV, R0, -1), + }, + INTERNAL | FLAG_NO_DATA | FLAG_VERIFIER_ZEXT, + { }, + { { 0, 0 } }, + }, + { + "Short relative jump: offset=1", + .u.insns_int = { + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_JMP_IMM(BPF_JEQ, R0, 0, 1), + BPF_ALU32_IMM(BPF_ADD, R0, 1), + BPF_EXIT_INSN(), + BPF_ALU32_IMM(BPF_MOV, R0, -1), + }, + INTERNAL | FLAG_NO_DATA | FLAG_VERIFIER_ZEXT, + { }, + { { 0, 0 } }, + }, + { + "Short relative jump: offset=2", + .u.insns_int = { + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_JMP_IMM(BPF_JEQ, R0, 0, 2), + BPF_ALU32_IMM(BPF_ADD, R0, 1), + BPF_ALU32_IMM(BPF_ADD, R0, 1), + BPF_EXIT_INSN(), + BPF_ALU32_IMM(BPF_MOV, R0, -1), + }, + INTERNAL | FLAG_NO_DATA | FLAG_VERIFIER_ZEXT, + { }, + { { 0, 0 } }, + }, + { + "Short relative jump: offset=3", + .u.insns_int = { + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_JMP_IMM(BPF_JEQ, R0, 0, 3), + BPF_ALU32_IMM(BPF_ADD, R0, 1), + BPF_ALU32_IMM(BPF_ADD, R0, 1), + BPF_ALU32_IMM(BPF_ADD, R0, 1), + BPF_EXIT_INSN(), + BPF_ALU32_IMM(BPF_MOV, R0, -1), + }, + INTERNAL | FLAG_NO_DATA | FLAG_VERIFIER_ZEXT, + { }, + { { 0, 0 } }, + }, + { + "Short relative jump: offset=4", + .u.insns_int = { + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_JMP_IMM(BPF_JEQ, R0, 0, 4), + BPF_ALU32_IMM(BPF_ADD, R0, 1), + BPF_ALU32_IMM(BPF_ADD, R0, 1), + BPF_ALU32_IMM(BPF_ADD, R0, 1), + BPF_ALU32_IMM(BPF_ADD, R0, 1), + BPF_EXIT_INSN(), + BPF_ALU32_IMM(BPF_MOV, R0, -1), + }, + INTERNAL | FLAG_NO_DATA | FLAG_VERIFIER_ZEXT, + { }, + { { 0, 0 } }, + }, /* Staggered jump sequences, immediate */ { "Staggered jumps: JMP_JA", From patchwork Tue Sep 7 22:23:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johan Almbladh X-Patchwork-Id: 12479529 X-Patchwork-Delegate: 
bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,UPPERCASE_50_75,URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C47AFC4321E for ; Tue, 7 Sep 2021 22:24:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A604661101 for ; Tue, 7 Sep 2021 22:24:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1348380AbhIGWZL (ORCPT ); Tue, 7 Sep 2021 18:25:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39900 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1348709AbhIGWZI (ORCPT ); Tue, 7 Sep 2021 18:25:08 -0400 Received: from mail-ed1-x52b.google.com (mail-ed1-x52b.google.com [IPv6:2a00:1450:4864:20::52b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 972A2C06179A for ; Tue, 7 Sep 2021 15:24:00 -0700 (PDT) Received: by mail-ed1-x52b.google.com with SMTP id r7so70219edd.6 for ; Tue, 07 Sep 2021 15:24:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=anyfinetworks-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=4Hoi4u2G8QSI9j8NSKtALs+4n115VKwNp7ykDTPYo/0=; b=I+8JwBdVzJ+ayEjXr3bm7ZAMRgaDCMMYC6/sjN9wx0+6Wwx2bqgRf+AQoDYV2LEC2y gRKM5HjxKIdN6KLahOUYW4YsqS3tX9CPrcAibe/cOxrwZwHSehybxbBcRbRDh5Objxd5 ykO3p7Z5P2omOjHGgos7dzcOeqA1ZJN0/jMNpRz+WxYP7glc8U2/8nGwBgve61jVAYVZ LKR68dLYcAkdGy22RbO9AcZLjSFl/cMaR2B9HBn+yIekRazNDDEQKidfwhpZgYjAuEj4 K5bbiBIHqoIv3z8njdqnFxZCE5qOmpsWyhv7UwyyhliJh6rym4BbAtFNNUCBUpeudxpl IKmQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=4Hoi4u2G8QSI9j8NSKtALs+4n115VKwNp7ykDTPYo/0=; b=R6LMm+zX+7iJdbp1U5xPfJnhNWpUXpQMpdTLXOodqWDMAHtD7swAKEUo2qDlZASWB3 V97XH0nY6nqVec1/mrCwfjwD9OP15ZVeXip86CL/vYnLYH5fMJml3TsywL1nOrqUQh7z JfIVndtVjg+if/ymy7r8qU6SL7uY/3L7vkHeqB8b3qvvmkQWCHytbPLF2lVfTxKXFjgR U6LyQ2CX3A3QTJZUFthqIu/+2lVWI1e7t+KECWHzMXW+pb2ojg4pn1MF4FLhtgq0quIg CDJ8vuWhuYyhKeHVjV9MJiigUTobTdn596Ym0Ij0eNzsSe7Nc4vdH7mOGfgPsg9zrBog LWxQ== X-Gm-Message-State: AOAM531T/cmM0yQfFJacBrPWEf9DeE48+vuQQKZageRuH2WDvIku2irb B8Pgq5RFNqEQnaANS3K1vcCOjg== X-Google-Smtp-Source: ABdhPJy7swd6eI/NjrcwiSEAPou2swBQ1oUK7u0Rb0RnLK6xpm4x/Oap3lg2+jIMOYnt66pxPT2DeQ== X-Received: by 2002:a05:6402:3188:: with SMTP id di8mr536275edb.300.1631053439113; Tue, 07 Sep 2021 15:23:59 -0700 (PDT) Received: from anpc2.lan (static-213-115-136-2.sme.telenor.se. 
[213.115.136.2]) by smtp.gmail.com with ESMTPSA id gb24sm71772ejc.53.2021.09.07.15.23.58 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 07 Sep 2021 15:23:58 -0700 (PDT) From: Johan Almbladh To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com, netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh Subject: [PATCH bpf-next v2 10/13] bpf/tests: Add JMP tests with degenerate conditional Date: Wed, 8 Sep 2021 00:23:36 +0200 Message-Id: <20210907222339.4130924-11-johan.almbladh@anyfinetworks.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210907222339.4130924-1-johan.almbladh@anyfinetworks.com> References: <20210907222339.4130924-1-johan.almbladh@anyfinetworks.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net This patch adds a set of tests for JMP and JMP32 operations where the branch decision is know at JIT time. Mainly testing JIT behaviour. Signed-off-by: Johan Almbladh --- lib/test_bpf.c | 229 +++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 229 insertions(+) diff --git a/lib/test_bpf.c b/lib/test_bpf.c index 7286cf347b95..ae261667ca0a 100644 --- a/lib/test_bpf.c +++ b/lib/test_bpf.c @@ -10709,6 +10709,235 @@ static struct bpf_test tests[] = { .fill_helper = bpf_fill_jmp32_jsle_reg, .nr_testruns = NR_PATTERN_RUNS, }, + /* Conditional jumps with constant decision */ + { + "JMP_JSET_K: imm = 0 -> never taken", + .u.insns_int = { + BPF_ALU64_IMM(BPF_MOV, R0, 1), + BPF_JMP_IMM(BPF_JSET, R1, 0, 1), + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_EXIT_INSN(), + }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 0 } }, + }, + { + "JMP_JLT_K: imm = 0 -> never taken", + .u.insns_int = { + BPF_ALU64_IMM(BPF_MOV, R0, 1), + BPF_JMP_IMM(BPF_JLT, R1, 0, 1), + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_EXIT_INSN(), + }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 0 } }, + }, + { + "JMP_JGE_K: imm = 0 -> always taken", + .u.insns_int = { + BPF_ALU64_IMM(BPF_MOV, R0, 1), + BPF_JMP_IMM(BPF_JGE, R1, 0, 1), + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_EXIT_INSN(), + }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + }, + { + "JMP_JGT_K: imm = 0xffffffff -> never taken", + .u.insns_int = { + BPF_ALU64_IMM(BPF_MOV, R0, 1), + BPF_JMP_IMM(BPF_JGT, R1, U32_MAX, 1), + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_EXIT_INSN(), + }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 0 } }, + }, + { + "JMP_JLE_K: imm = 0xffffffff -> always taken", + .u.insns_int = { + BPF_ALU64_IMM(BPF_MOV, R0, 1), + BPF_JMP_IMM(BPF_JLE, R1, U32_MAX, 1), + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_EXIT_INSN(), + }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + }, + { + "JMP32_JSGT_K: imm = 0x7fffffff -> never taken", + .u.insns_int = { + BPF_ALU64_IMM(BPF_MOV, R0, 1), + BPF_JMP32_IMM(BPF_JSGT, R1, S32_MAX, 1), + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_EXIT_INSN(), + }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 0 } }, + }, + { + "JMP32_JSGE_K: imm = -0x80000000 -> always taken", + .u.insns_int = { + BPF_ALU64_IMM(BPF_MOV, R0, 1), + BPF_JMP32_IMM(BPF_JSGE, R1, S32_MIN, 1), + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_EXIT_INSN(), + }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + }, + { + "JMP32_JSLT_K: imm = -0x80000000 -> never taken", + .u.insns_int = { + BPF_ALU64_IMM(BPF_MOV, R0, 1), + BPF_JMP32_IMM(BPF_JSLT, R1, S32_MIN, 1), + BPF_ALU64_IMM(BPF_MOV, R0, 0), + BPF_EXIT_INSN(), + }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 0 } }, + 
+	{
+		"JMP32_JSLE_K: imm = 0x7fffffff -> always taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP32_IMM(BPF_JSLE, R1, S32_MAX, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+	},
+	{
+		"JMP_JEQ_X: dst = src -> always taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_REG(BPF_JEQ, R1, R1, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+	},
+	{
+		"JMP_JGE_X: dst = src -> always taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_REG(BPF_JGE, R1, R1, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+	},
+	{
+		"JMP_JLE_X: dst = src -> always taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_REG(BPF_JLE, R1, R1, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+	},
+	{
+		"JMP_JSGE_X: dst = src -> always taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_REG(BPF_JSGE, R1, R1, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+	},
+	{
+		"JMP_JSLE_X: dst = src -> always taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_REG(BPF_JSLE, R1, R1, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 1 } },
+	},
+	{
+		"JMP_JNE_X: dst = src -> never taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_REG(BPF_JNE, R1, R1, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 0 } },
+	},
+	{
+		"JMP_JGT_X: dst = src -> never taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_REG(BPF_JGT, R1, R1, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 0 } },
+	},
+	{
+		"JMP_JLT_X: dst = src -> never taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_REG(BPF_JLT, R1, R1, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 0 } },
+	},
+	{
+		"JMP_JSGT_X: dst = src -> never taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_REG(BPF_JSGT, R1, R1, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 0 } },
+	},
+	{
+		"JMP_JSLT_X: dst = src -> never taken",
+		.u.insns_int = {
+			BPF_ALU64_IMM(BPF_MOV, R0, 1),
+			BPF_JMP_REG(BPF_JSLT, R1, R1, 1),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL | FLAG_NO_DATA,
+		{ },
+		{ { 0, 0 } },
+	},
 	/* Short relative jumps */
 	{
 		"Short relative jump: offset=0",

From patchwork Tue Sep 7 22:23:37 2021
From: Johan Almbladh
To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org
Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com,
 john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com,
 netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh
Subject: [PATCH bpf-next v2 11/13] bpf/tests: Expand branch conversion JIT test
Date: Wed, 8 Sep 2021 00:23:37 +0200
Message-Id: <20210907222339.4130924-12-johan.almbladh@anyfinetworks.com>

This patch expands the branch conversion test introduced by 66e5eb84
("bpf, tests: Add branch conversion JIT test").
The test now includes a JMP with maximum eBPF offset. This triggers
branch conversion for the 64-bit MIPS JIT. Additional variants are also
added for cases when the branch is taken or not taken.

Signed-off-by: Johan Almbladh
---
 lib/test_bpf.c | 143 ++++++++++++++++++++++++++++++++++---------------
 1 file changed, 100 insertions(+), 43 deletions(-)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index ae261667ca0a..a515f9b670c9 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -463,41 +463,6 @@ static int bpf_fill_stxdw(struct bpf_test *self)
 	return __bpf_fill_stxdw(self, BPF_DW);
 }
 
-static int bpf_fill_long_jmp(struct bpf_test *self)
-{
-	unsigned int len = BPF_MAXINSNS;
-	struct bpf_insn *insn;
-	int i;
-
-	insn = kmalloc_array(len, sizeof(*insn), GFP_KERNEL);
-	if (!insn)
-		return -ENOMEM;
-
-	insn[0] = BPF_ALU64_IMM(BPF_MOV, R0, 1);
-	insn[1] = BPF_JMP_IMM(BPF_JEQ, R0, 1, len - 2 - 1);
-
-	/*
-	 * Fill with a complex 64-bit operation that expands to a lot of
-	 * instructions on 32-bit JITs. The large jump offset can then
-	 * overflow the conditional branch field size, triggering a branch
-	 * conversion mechanism in some JITs.
-	 *
-	 * Note: BPF_MAXINSNS of ALU64 MUL is enough to trigger such branch
-	 * conversion on the 32-bit MIPS JIT. For other JITs, the instruction
-	 * count and/or operation may need to be modified to trigger the
-	 * branch conversion.
-	 */
-	for (i = 2; i < len - 1; i++)
-		insn[i] = BPF_ALU64_IMM(BPF_MUL, R0, (i << 16) + i);
-
-	insn[len - 1] = BPF_EXIT_INSN();
-
-	self->u.ptr.insns = insn;
-	self->u.ptr.len = len;
-
-	return 0;
-}
-
 static int __bpf_ld_imm64(struct bpf_insn insns[2], u8 reg, s64 imm64)
 {
 	struct bpf_insn tmp[] = {BPF_LD_IMM64(reg, imm64)};
@@ -506,6 +471,73 @@ static int __bpf_ld_imm64(struct bpf_insn insns[2], u8 reg, s64 imm64)
 	return 2;
 }
 
+/*
+ * Branch conversion tests. Complex operations can expand to a lot
+ * of instructions when JITed. This in turn may cause jump offsets
+ * to overflow the field size of the native instruction, triggering
+ * a branch conversion mechanism in some JITs.
+ */
+static int __bpf_fill_max_jmp(struct bpf_test *self, int jmp, int imm)
+{
+	struct bpf_insn *insns;
+	int len = S16_MAX + 5;
+	int i;
+
+	insns = kmalloc_array(len, sizeof(*insns), GFP_KERNEL);
+	if (!insns)
+		return -ENOMEM;
+
+	i = __bpf_ld_imm64(insns, R1, 0x0123456789abcdefULL);
+	insns[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 1);
+	insns[i++] = BPF_JMP_IMM(jmp, R0, imm, S16_MAX);
+	insns[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 2);
+	insns[i++] = BPF_EXIT_INSN();
+
+	while (i < len - 1) {
+		static const int ops[] = {
+			BPF_LSH, BPF_RSH, BPF_ARSH, BPF_ADD,
+			BPF_SUB, BPF_MUL, BPF_DIV, BPF_MOD,
+		};
+		int op = ops[(i >> 1) % ARRAY_SIZE(ops)];
+
+		if (i & 1)
+			insns[i++] = BPF_ALU32_REG(op, R0, R1);
+		else
+			insns[i++] = BPF_ALU64_REG(op, R0, R1);
+	}
+
+	insns[i++] = BPF_EXIT_INSN();
+	self->u.ptr.insns = insns;
+	self->u.ptr.len = len;
+	BUG_ON(i != len);
+
+	return 0;
+}
+
+/* Branch taken by runtime decision */
+static int bpf_fill_max_jmp_taken(struct bpf_test *self)
+{
+	return __bpf_fill_max_jmp(self, BPF_JEQ, 1);
+}
+
+/* Branch not taken by runtime decision */
+static int bpf_fill_max_jmp_not_taken(struct bpf_test *self)
+{
+	return __bpf_fill_max_jmp(self, BPF_JEQ, 0);
+}
+
+/* Branch always taken, known at JIT time */
+static int bpf_fill_max_jmp_always_taken(struct bpf_test *self)
+{
+	return __bpf_fill_max_jmp(self, BPF_JGE, 0);
+}
+
+/* Branch never taken, known at JIT time */
+static int bpf_fill_max_jmp_never_taken(struct bpf_test *self)
+{
+	return __bpf_fill_max_jmp(self, BPF_JLT, 0);
+}
+
 /* Test an ALU shift operation for all valid shift values */
 static int __bpf_fill_alu_shift(struct bpf_test *self, u8 op,
 				u8 mode, bool alu32)
@@ -8653,14 +8685,6 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	{ /* Mainly checking JIT here. */
*/ - "BPF_MAXINSNS: Very long conditional jump", - { }, - INTERNAL | FLAG_NO_DATA, - { }, - { { 0, 1 } }, - .fill_helper = bpf_fill_long_jmp, - }, { "JMP_JA: Jump, gap, jump, ...", { }, @@ -11009,6 +11033,39 @@ static struct bpf_test tests[] = { { }, { { 0, 0 } }, }, + /* Conditional branch conversions */ + { + "Long conditional jump: taken at runtime", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_max_jmp_taken, + }, + { + "Long conditional jump: not taken at runtime", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 2 } }, + .fill_helper = bpf_fill_max_jmp_not_taken, + }, + { + "Long conditional jump: always taken, known at JIT time", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 1 } }, + .fill_helper = bpf_fill_max_jmp_always_taken, + }, + { + "Long conditional jump: never taken, known at JIT time", + { }, + INTERNAL | FLAG_NO_DATA, + { }, + { { 0, 2 } }, + .fill_helper = bpf_fill_max_jmp_never_taken, + }, /* Staggered jump sequences, immediate */ { "Staggered jumps: JMP_JA", From patchwork Tue Sep 7 22:23:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johan Almbladh X-Patchwork-Id: 12479533 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4A370C4332F for ; Tue, 7 Sep 2021 22:24:12 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 311FF610E8 for ; Tue, 7 Sep 2021 22:24:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1348763AbhIGWZR (ORCPT ); Tue, 7 Sep 2021 18:25:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39896 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1348194AbhIGWZJ (ORCPT ); Tue, 7 Sep 2021 18:25:09 -0400 Received: from mail-ed1-x535.google.com (mail-ed1-x535.google.com [IPv6:2a00:1450:4864:20::535]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 85565C061575 for ; Tue, 7 Sep 2021 15:24:02 -0700 (PDT) Received: by mail-ed1-x535.google.com with SMTP id g8so62496edt.7 for ; Tue, 07 Sep 2021 15:24:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=anyfinetworks-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=JrA4KDr+EqvoE+WZ7hlhOAoAEblbMKXTm8z0RFxsW20=; b=11Rj6tT0t9nXv03B00cYJiO6JXe4h7+CH+/EJZeFVesk5UE+4NYf9j/yT69un9nCjx xOwLrdqRcyD8TaWBCpigLhDZmx7EJD0f/wiRjdXbqDMEFVE9x2VQyonsZfcuupT7T8HA 79E94+SG2Q+RIUAAytwhzy9Mn49SKNU6+FpX8jSC7P8TOoBfB7QGCQ4H3mmnqvuBBtiw nKecmUjYrz2FW++2tWkFmG/+JLEnzu6dqkggFApjNbu4zcNS6dSU7Mslj/raMGAd3mRg w0H7dKpYkp1rAEPg9Mpts1bkXP+soB1WbQxtM4CfVQy3/F84V8wZfFhmNsTVSGbOWIaH OxDA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=JrA4KDr+EqvoE+WZ7hlhOAoAEblbMKXTm8z0RFxsW20=; 
From: Johan Almbladh
To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org
Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com,
 john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com,
 netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh
Subject: [PATCH bpf-next v2 12/13] bpf/tests: Add more BPF_END byte order conversion tests
Date: Wed, 8 Sep 2021 00:23:38 +0200
Message-Id: <20210907222339.4130924-13-johan.almbladh@anyfinetworks.com>

This patch adds tests of the high 32 bits of 64-bit BPF_END conversions.
It also adds a mirrored set of tests where the source bytes are reversed.
The MSB of each byte is now set on the high word instead, possibly
affecting sign-extension during conversion in a different way. Mainly
for JIT testing.
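The expected values are computed with the same cpu_to_be64()/cpu_to_le64()
conversion that the BPF program under test performs, so the checks hold on
both big- and little-endian hosts. As a rough userspace sketch of the new
high-word cases (illustrative only, not part of the patch; htobe64() and
htole64() from <endian.h> stand in for the kernel helpers):

#include <endian.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t x = 0xfedcba9876543210ULL;

	/* High 32 bits of the converted value, as the "... >> 32" tests
	 * check in R0. On a little-endian host htobe64() swaps the bytes
	 * and htole64() is the identity; on a big-endian host the roles
	 * are reversed, and the expected value changes with them.
	 */
	printf("FROM_BE 64 >> 32: 0x%08x\n", (unsigned int)(htobe64(x) >> 32));
	printf("FROM_LE 64 >> 32: 0x%08x\n", (unsigned int)(htole64(x) >> 32));
	return 0;
}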
Signed-off-by: Johan Almbladh
---
 lib/test_bpf.c | 122 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 122 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index a515f9b670c9..7475abfd2186 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -6748,6 +6748,67 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, (u32) cpu_to_be64(0x0123456789abcdefLL) } },
 	},
+	{
+		"ALU_END_FROM_BE 64: 0x0123456789abcdef >> 32 -> 0x01234567",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ENDIAN(BPF_FROM_BE, R0, 64),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, (u32) (cpu_to_be64(0x0123456789abcdefLL) >> 32) } },
+	},
+	/* BPF_ALU | BPF_END | BPF_FROM_BE, reversed */
+	{
+		"ALU_END_FROM_BE 16: 0xfedcba9876543210 -> 0x3210",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0xfedcba9876543210ULL),
+			BPF_ENDIAN(BPF_FROM_BE, R0, 16),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, cpu_to_be16(0x3210) } },
+	},
+	{
+		"ALU_END_FROM_BE 32: 0xfedcba9876543210 -> 0x76543210",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0xfedcba9876543210ULL),
+			BPF_ENDIAN(BPF_FROM_BE, R0, 32),
+			BPF_ALU64_REG(BPF_MOV, R1, R0),
+			BPF_ALU64_IMM(BPF_RSH, R1, 32),
+			BPF_ALU32_REG(BPF_ADD, R0, R1), /* R1 = 0 */
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, cpu_to_be32(0x76543210) } },
+	},
+	{
+		"ALU_END_FROM_BE 64: 0xfedcba9876543210 -> 0x76543210",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0xfedcba9876543210ULL),
+			BPF_ENDIAN(BPF_FROM_BE, R0, 64),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, (u32) cpu_to_be64(0xfedcba9876543210ULL) } },
+	},
+	{
+		"ALU_END_FROM_BE 64: 0xfedcba9876543210 >> 32 -> 0xfedcba98",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0xfedcba9876543210ULL),
+			BPF_ENDIAN(BPF_FROM_BE, R0, 64),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, (u32) (cpu_to_be64(0xfedcba9876543210ULL) >> 32) } },
+	},
 	/* BPF_ALU | BPF_END | BPF_FROM_LE */
 	{
 		"ALU_END_FROM_LE 16: 0x0123456789abcdef -> 0xefcd",
@@ -6785,6 +6846,67 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, (u32) cpu_to_le64(0x0123456789abcdefLL) } },
 	},
+	{
+		"ALU_END_FROM_LE 64: 0x0123456789abcdef >> 32 -> 0xefcdab89",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+			BPF_ENDIAN(BPF_FROM_LE, R0, 64),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, (u32) (cpu_to_le64(0x0123456789abcdefLL) >> 32) } },
+	},
+	/* BPF_ALU | BPF_END | BPF_FROM_LE, reversed */
+	{
+		"ALU_END_FROM_LE 16: 0xfedcba9876543210 -> 0x1032",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0xfedcba9876543210ULL),
+			BPF_ENDIAN(BPF_FROM_LE, R0, 16),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, cpu_to_le16(0x3210) } },
+	},
+	{
+		"ALU_END_FROM_LE 32: 0xfedcba9876543210 -> 0x10325476",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0xfedcba9876543210ULL),
+			BPF_ENDIAN(BPF_FROM_LE, R0, 32),
+			BPF_ALU64_REG(BPF_MOV, R1, R0),
+			BPF_ALU64_IMM(BPF_RSH, R1, 32),
+			BPF_ALU32_REG(BPF_ADD, R0, R1), /* R1 = 0 */
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, cpu_to_le32(0x76543210) } },
+	},
+	{
+		"ALU_END_FROM_LE 64: 0xfedcba9876543210 -> 0x10325476",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0xfedcba9876543210ULL),
+			BPF_ENDIAN(BPF_FROM_LE, R0, 64),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, (u32) cpu_to_le64(0xfedcba9876543210ULL) } },
+	},
+	{
+		"ALU_END_FROM_LE 64: 0xfedcba9876543210 >> 32 -> 0x98badcfe",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0xfedcba9876543210ULL),
+			BPF_ENDIAN(BPF_FROM_LE, R0, 64),
+			BPF_ALU64_IMM(BPF_RSH, R0, 32),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, (u32) (cpu_to_le64(0xfedcba9876543210ULL) >> 32) } },
+	},
 	/* BPF_ST(X) | BPF_MEM | BPF_B/H/W/DW */
 	{
 		"ST_MEM_B: Store/Load byte: max negative",

From patchwork Tue Sep 7 22:23:39 2021
From: Johan Almbladh
To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org
Cc: kafai@fb.com, songliubraving@fb.com, yhs@fb.com,
 john.fastabend@gmail.com, kpsingh@kernel.org, iii@linux.ibm.com,
 netdev@vger.kernel.org, bpf@vger.kernel.org, Johan Almbladh
Subject: [PATCH bpf-next v2 13/13] bpf/tests: Add tail call limit test with external function call
Date: Wed, 8 Sep 2021 00:23:39 +0200
Message-Id: <20210907222339.4130924-14-johan.almbladh@anyfinetworks.com>

This patch adds a tail call limit test where the program also emits
a BPF_CALL to an external function prior to the tail call. Mainly
testing that JITed programs preserve their internal register state, for
example the tail call count, across such external calls.

Signed-off-by: Johan Almbladh
---
 lib/test_bpf.c | 51 +++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 48 insertions(+), 3 deletions(-)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 7475abfd2186..6e45b4da9841 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -12259,6 +12259,20 @@ static struct tail_call_test tail_call_tests[] = {
 		},
 		.result = MAX_TAIL_CALL_CNT + 1,
 	},
+	{
+		"Tail call count preserved across function calls",
+		.insns = {
+			BPF_ALU64_IMM(BPF_ADD, R1, 1),
+			BPF_STX_MEM(BPF_DW, R10, R1, -8),
+			BPF_CALL_REL(0),
+			BPF_LDX_MEM(BPF_DW, R1, R10, -8),
+			BPF_ALU32_REG(BPF_MOV, R0, R1),
+			TAIL_CALL(0),
+			BPF_EXIT_INSN(),
+		},
+		.stack_depth = 8,
+		.result = MAX_TAIL_CALL_CNT + 1,
+	},
 	{
 		"Tail call error path, NULL target",
 		.insns = {
@@ -12281,6 +12295,29 @@ static struct tail_call_test tail_call_tests[] = {
 	},
 };
 
+/*
+ * A test function to be called from a BPF program, clobbering a lot of
+ * CPU registers in the process. A JITed BPF program calling this function
+ * must save and restore any caller-saved registers it uses for internal
+ * state, for example the current tail call count.
+ */
+BPF_CALL_1(test_bpf_func, u64, arg)
+{
+	char buf[64];
+	long a = 0;
+	long b = 1;
+	long c = 2;
+	long d = 3;
+	long e = 4;
+	long f = 5;
+	long g = 6;
+	long h = 7;
+
+	return snprintf(buf, sizeof(buf),
+			"%ld %lu %lx %ld %lu %lx %ld %lu %x",
+			a, b, c, d, e, f, g, h, (int)arg);
+}
+
 static void __init destroy_tail_call_tests(struct bpf_array *progs)
 {
 	int i;
@@ -12334,16 +12371,17 @@ static __init int prepare_tail_call_tests(struct bpf_array **pprogs)
 	for (i = 0; i < len; i++) {
 		struct bpf_insn *insn = &fp->insnsi[i];
 
-		if (insn->imm != TAIL_CALL_MARKER)
-			continue;
-
 		switch (insn->code) {
 		case BPF_LD | BPF_DW | BPF_IMM:
+			if (insn->imm != TAIL_CALL_MARKER)
+				break;
 			insn[0].imm = (u32)(long)progs;
 			insn[1].imm = ((u64)(long)progs) >> 32;
 			break;
 
 		case BPF_ALU | BPF_MOV | BPF_K:
+			if (insn->imm != TAIL_CALL_MARKER)
+				break;
 			if (insn->off == TAIL_CALL_NULL)
 				insn->imm = ntests;
 			else if (insn->off == TAIL_CALL_INVALID)
@@ -12351,6 +12389,13 @@ static __init int prepare_tail_call_tests(struct bpf_array **pprogs)
 			else
 				insn->imm = which + insn->off;
 			insn->off = 0;
+			break;
+
+		case BPF_JMP | BPF_CALL:
+			if (insn->src_reg != BPF_PSEUDO_CALL)
+				break;
+			*insn = BPF_EMIT_CALL(test_bpf_func);
+			break;
 		}
 	}
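As a closing note on the expected result: the MAX_TAIL_CALL_CNT + 1 value
that both tail call limit tests use can be reproduced with a small
standalone model of the interpreter's tail call accounting (a
simplification of the logic in kernel/bpf/core.c; apart from the
MAX_TAIL_CALL_CNT value, the names here are illustrative):

#include <stdio.h>

#define MAX_TAIL_CALL_CNT 32	/* kernel definition at the time of this series */

/* Model of the test program: R1 is incremented, then the program
 * tail-calls itself until the per-run counter refuses the call, at
 * which point the last R1 value is returned.
 */
static int run_prog(int r1, int *tail_call_cnt)
{
	r1 += 1;
	if (*tail_call_cnt >= MAX_TAIL_CALL_CNT)
		return r1;			/* tail call rejected: fall through to exit */
	(*tail_call_cnt)++;
	return run_prog(r1, tail_call_cnt);	/* the tail call */
}

int main(void)
{
	int cnt = 0;

	printf("%d\n", run_prog(0, &cnt));	/* prints 33 == MAX_TAIL_CALL_CNT + 1 */
	return 0;
}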