From patchwork Thu Mar 30 04:16:35 2023
X-Patchwork-Submitter: Andrii Nakryiko
X-Patchwork-Id: 13193371
X-Patchwork-Delegate: bpf@iogearbox.net
From: Andrii Nakryiko
Subject: [PATCH bpf-next 1/8] bpf: ignore verifier log reset in BPF_LOG_KERNEL mode
Date: Wed, 29 Mar 2023 21:16:35 -0700
Message-ID: <20230330041642.1118787-2-andrii@kernel.org>
In-Reply-To: <20230330041642.1118787-1-andrii@kernel.org>
References: <20230330041642.1118787-1-andrii@kernel.org>

Verifier log position reset is meaningless in BPF_LOG_KERNEL mode, so just exit early in bpf_vlog_reset() if log->level is BPF_LOG_KERNEL. This avoids a meaningless put_user() into the NULL log->ubuf.
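For context (illustrative only, not part of the diff below): BPF_LOG_KERNEL is the mode used when the kernel itself verifies BTF or programs and sends log output to printk, so no user-space buffer is ever attached. A rough, hypothetical sketch of such a log setup, using only the struct fields that appear in this series:

static void init_kernel_mode_log(struct bpf_verifier_log *log)
{
	log->level = BPF_LOG_KERNEL;	/* messages go to the kernel log (printk) */
	log->ubuf = NULL;		/* there is no user-space buffer in this mode */
	log->len_total = 0;
	/* with this fix, bpf_vlog_reset() bails out early for BPF_LOG_KERNEL
	 * instead of attempting put_user(zero, log->ubuf + pos) on NULL
	 */
}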
Signed-off-by: Andrii Nakryiko --- kernel/bpf/log.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c index beab7c7f52a7..0bcc0c5ce697 100644 --- a/kernel/bpf/log.c +++ b/kernel/bpf/log.c @@ -104,7 +104,7 @@ void bpf_vlog_reset(struct bpf_verifier_log *log, u64 new_pos) char zero = 0; u32 pos; - if (!bpf_verifier_log_needed(log)) + if (!bpf_verifier_log_needed(log) || log->level == BPF_LOG_KERNEL) return; /* if position to which we reset is beyond current log window, From patchwork Thu Mar 30 04:16:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13193370 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0BE4AC77B61 for ; Thu, 30 Mar 2023 04:18:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229833AbjC3ESK convert rfc822-to-8bit (ORCPT ); Thu, 30 Mar 2023 00:18:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51250 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229808AbjC3ESI (ORCPT ); Thu, 30 Mar 2023 00:18:08 -0400 Received: from mx0a-00082601.pphosted.com (mx0a-00082601.pphosted.com [67.231.145.42]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5029A1BB for ; Wed, 29 Mar 2023 21:18:07 -0700 (PDT) Received: from pps.filterd (m0148461.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 32TKgOcY016020 for ; Wed, 29 Mar 2023 21:18:07 -0700 Received: from maileast.thefacebook.com ([163.114.130.16]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3pmvkmt79h-3 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 29 Mar 2023 21:18:06 -0700 Received: from twshared52344.38.frc1.facebook.com (2620:10d:c0a8:1b::d) by mail.thefacebook.com (2620:10d:c0a8:82::f) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.17; Wed, 29 Mar 2023 21:18:04 -0700 Received: by devbig019.vll3.facebook.com (Postfix, from userid 137359) id 1AE0D2C6766BF; Wed, 29 Mar 2023 21:17:56 -0700 (PDT) From: Andrii Nakryiko To: , , , , , , CC: , Subject: [PATCH bpf-next 2/8] bpf: fix missing -EFAULT return on user log buf error in btf_parse() Date: Wed, 29 Mar 2023 21:16:36 -0700 Message-ID: <20230330041642.1118787-3-andrii@kernel.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230330041642.1118787-1-andrii@kernel.org> References: <20230330041642.1118787-1-andrii@kernel.org> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: A7ksBGB-ynEind9ssU2OD6FHwH7mzggb X-Proofpoint-ORIG-GUID: A7ksBGB-ynEind9ssU2OD6FHwH7mzggb X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-03-29_16,2023-03-28_02,2023-02-09_01 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net btf_parse() is missing -EFAULT error return if log->ubuf was NULL-ed out due to error while copying data into user-provided buffer. Add it, but handle a special case of BPF_LOG_KERNEL in which log->ubuf is always NULL. 
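From user space, the path above corresponds to a BPF_BTF_LOAD call that supplies a log buffer; with this fix, a buffer the kernel cannot copy the log into surfaces as -EFAULT (errno EFAULT) instead of being silently swallowed. A hedged sketch of such a caller (the helper name and error handling are illustrative; the attr field names are from the kernel UAPI), not part of the diff below:

#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

/* Illustrative only: load raw BTF bytes with a verification log buffer. */
static int btf_load_with_log(const void *btf_data, __u32 btf_size,
			     char *log_buf, __u32 log_size)
{
	union bpf_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.btf = (__u64)(unsigned long)btf_data;
	attr.btf_size = btf_size;
	attr.btf_log_buf = (__u64)(unsigned long)log_buf;
	attr.btf_log_size = log_size;
	attr.btf_log_level = 1;

	fd = syscall(__NR_bpf, BPF_BTF_LOAD, &attr, sizeof(attr));
	if (fd < 0 && errno == EFAULT)
		/* log_buf exists but could not be written to, e.g. a read-only mapping */
		return -EFAULT;
	return fd;
}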
Signed-off-by: Andrii Nakryiko --- kernel/bpf/btf.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 2574cc9b3e28..a67b1b669b0c 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -5598,6 +5598,10 @@ static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size, err = -ENOSPC; goto errout_meta; } + if (log->level && log->level != BPF_LOG_KERNEL && !log->ubuf) { + err = -EFAULT; + goto errout_meta; + } btf_verifier_env_free(env); refcount_set(&btf->refcnt, 1); From patchwork Thu Mar 30 04:16:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13193372 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0DE30C74A5B for ; Thu, 30 Mar 2023 04:18:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229557AbjC3ESM convert rfc822-to-8bit (ORCPT ); Thu, 30 Mar 2023 00:18:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51300 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229612AbjC3ESJ (ORCPT ); Thu, 30 Mar 2023 00:18:09 -0400 Received: from mx0a-00082601.pphosted.com (mx0a-00082601.pphosted.com [67.231.145.42]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5F2CA5B8E for ; Wed, 29 Mar 2023 21:18:08 -0700 (PDT) Received: from pps.filterd (m0109333.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 32TKZgxK025004 for ; Wed, 29 Mar 2023 21:18:08 -0700 Received: from mail.thefacebook.com ([163.114.132.120]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3pmvg5a963-3 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 29 Mar 2023 21:18:07 -0700 Received: from twshared30317.05.prn5.facebook.com (2620:10d:c085:208::11) by mail.thefacebook.com (2620:10d:c085:21d::7) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.17; Wed, 29 Mar 2023 21:18:06 -0700 Received: by devbig019.vll3.facebook.com (Postfix, from userid 137359) id E70092C6766D1; Wed, 29 Mar 2023 21:17:59 -0700 (PDT) From: Andrii Nakryiko To: , , , , , , CC: , Subject: [PATCH bpf-next 3/8] bpf: avoid incorrect -EFAULT error in BPF_LOG_KERNEL mode Date: Wed, 29 Mar 2023 21:16:37 -0700 Message-ID: <20230330041642.1118787-4-andrii@kernel.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230330041642.1118787-1-andrii@kernel.org> References: <20230330041642.1118787-1-andrii@kernel.org> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: Pi3Zgdi6uvRXNNmml6yuiHtdBuwn7D4A X-Proofpoint-ORIG-GUID: Pi3Zgdi6uvRXNNmml6yuiHtdBuwn7D4A X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-03-29_16,2023-03-28_02,2023-02-09_01 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net If verifier log is in BPF_LOG_KERNEL mode, no log->ubuf is expected and it stays NULL throughout entire verification process. Don't erroneously return -EFAULT in such case. 
Signed-off-by: Andrii Nakryiko --- kernel/bpf/verifier.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index e121c6c13635..82a5876940aa 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -18789,7 +18789,7 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr) bpf_vlog_finalize(log); if (log->level && bpf_vlog_truncated(log)) ret = -ENOSPC; - if (log->level && !log->ubuf) { + if (log->level && log->level != BPF_LOG_KERNEL && !log->ubuf) { ret = -EFAULT; goto err_release_maps; } From patchwork Thu Mar 30 04:16:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13193374 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 150DEC6FD1D for ; Thu, 30 Mar 2023 04:18:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229678AbjC3EST convert rfc822-to-8bit (ORCPT ); Thu, 30 Mar 2023 00:18:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51458 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229837AbjC3ESQ (ORCPT ); Thu, 30 Mar 2023 00:18:16 -0400 Received: from mx0a-00082601.pphosted.com (mx0a-00082601.pphosted.com [67.231.145.42]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A82381BB for ; Wed, 29 Mar 2023 21:18:14 -0700 (PDT) Received: from pps.filterd (m0044010.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 32U0BEj7004057 for ; Wed, 29 Mar 2023 21:18:14 -0700 Received: from mail.thefacebook.com ([163.114.132.120]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3pmyn5s2ue-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 29 Mar 2023 21:18:14 -0700 Received: from twshared32017.39.prn1.facebook.com (2620:10d:c085:208::11) by mail.thefacebook.com (2620:10d:c085:21d::4) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.17; Wed, 29 Mar 2023 21:18:13 -0700 Received: by devbig019.vll3.facebook.com (Postfix, from userid 137359) id 013442C67670F; Wed, 29 Mar 2023 21:18:01 -0700 (PDT) From: Andrii Nakryiko To: , , , , , , CC: , Subject: [PATCH bpf-next 4/8] bpf: simplify logging related error conditions handling Date: Wed, 29 Mar 2023 21:16:38 -0700 Message-ID: <20230330041642.1118787-5-andrii@kernel.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230330041642.1118787-1-andrii@kernel.org> References: <20230330041642.1118787-1-andrii@kernel.org> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: tjisDJ_JZRIwPvDb3O7umngqm-IwEFk4 X-Proofpoint-ORIG-GUID: tjisDJ_JZRIwPvDb3O7umngqm-IwEFk4 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.942,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-03-29_16,2023-03-28_02,2023-02-09_01 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Move log->level == 0 check into bpf_vlog_truncated() instead of doing it explicitly. Also remove unnecessary goto in kernel/bpf/verifier.c. 
Signed-off-by: Andrii Nakryiko --- kernel/bpf/btf.c | 2 +- kernel/bpf/log.c | 4 +++- kernel/bpf/verifier.c | 6 ++---- 3 files changed, 6 insertions(+), 6 deletions(-) diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index a67b1b669b0c..36e3c25bdca5 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -5594,7 +5594,7 @@ static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size, } bpf_vlog_finalize(log); - if (log->level && bpf_vlog_truncated(log)) { + if (bpf_vlog_truncated(log)) { err = -ENOSPC; goto errout_meta; } diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c index 0bcc0c5ce697..a64cc430a9c9 100644 --- a/kernel/bpf/log.c +++ b/kernel/bpf/log.c @@ -167,7 +167,9 @@ static int bpf_vlog_reverse_ubuf(struct bpf_verifier_log *log, int start, int en bool bpf_vlog_truncated(const struct bpf_verifier_log *log) { - if (log->level & BPF_LOG_FIXED) + if (!log->level) + return false; + else if (log->level & BPF_LOG_FIXED) return bpf_log_used(log) >= log->len_total - 1; else return log->start_pos > 0; diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 82a5876940aa..efeeafe590b9 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -18787,12 +18787,10 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr) env->prog->aux->verified_insns = env->insn_processed; bpf_vlog_finalize(log); - if (log->level && bpf_vlog_truncated(log)) + if (bpf_vlog_truncated(log)) ret = -ENOSPC; - if (log->level && log->level != BPF_LOG_KERNEL && !log->ubuf) { + if (log->level && log->level != BPF_LOG_KERNEL && !log->ubuf) ret = -EFAULT; - goto err_release_maps; - } if (ret) goto err_release_maps; From patchwork Thu Mar 30 04:16:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13193373 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id ECAC6C74A5B for ; Thu, 30 Mar 2023 04:18:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229839AbjC3ESR convert rfc822-to-8bit (ORCPT ); Thu, 30 Mar 2023 00:18:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51528 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229829AbjC3ESP (ORCPT ); Thu, 30 Mar 2023 00:18:15 -0400 Received: from mx0b-00082601.pphosted.com (mx0b-00082601.pphosted.com [67.231.153.30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D4BBC5B9C for ; Wed, 29 Mar 2023 21:18:12 -0700 (PDT) Received: from pps.filterd (m0148460.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 32TJU4Bp014184 for ; Wed, 29 Mar 2023 21:18:12 -0700 Received: from maileast.thefacebook.com ([163.114.130.16]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3pmesc79g6-4 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 29 Mar 2023 21:18:11 -0700 Received: from ash-exhub204.TheFacebook.com (2620:10d:c0a8:83::4) by ash-exhub202.TheFacebook.com (2620:10d:c0a8:83::6) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.17; Wed, 29 Mar 2023 21:18:10 -0700 Received: from twshared38955.16.prn3.facebook.com (2620:10d:c0a8:1b::d) by mail.thefacebook.com (2620:10d:c0a8:83::4) with Microsoft SMTP Server 
(version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.17; Wed, 29 Mar 2023 21:18:09 -0700
From: Andrii Nakryiko
Subject: [PATCH bpf-next 5/8] bpf: keep track of total log content size in both fixed and rolling modes
Date: Wed, 29 Mar 2023 21:16:39 -0700
Message-ID: <20230330041642.1118787-6-andrii@kernel.org>
In-Reply-To: <20230330041642.1118787-1-andrii@kernel.org>
References: <20230330041642.1118787-1-andrii@kernel.org>

Change how we do accounting in BPF_LOG_FIXED mode and adopt log->end_pos as a *logical* log position. This means that we can now go beyond the physical log buffer size and can tell what log buffer size would be needed to fit the entire log contents without -ENOSPC.

To do this for BPF_LOG_FIXED mode, we need to remove the short-circuiting logic of not vsnprintf()'ing further log content once we have filled up the user-provided buffer, which is done by the bpf_verifier_log_needed() checks. We modify these checks to always keep going if log->level is non-zero (i.e., a log is requested), even if log->ubuf was NULL'ed out due to an error copying data to user space, or if the entire log buffer is physically full. We adapt the bpf_verifier_vlog() routine to work correctly with the log->ubuf == NULL condition, performing log formatting into the temporary kernel buffer and doing all the necessary accounting, but avoiding copying data out if the buffer is full or NULL'ed out.

With these changes, it's now possible to do this sort of log content size determination in both BPF_LOG_FIXED and the default rolling log mode. We need to keep in mind bpf_vlog_reset(), though, which shrinks the log contents after successful verification of a particular code path. This log reset means that log->end_pos isn't always increasing, so to report back to users what the log buffer size should be to fit all log content without causing -ENOSPC even in the presence of log resetting, we need to keep the maximum over the "lifetime" of logging. We do this accounting in the bpf_vlog_update_len_max() helper.

A related and subtle aspect is that, with this logical log->end_pos, even in BPF_LOG_FIXED mode we could temporarily "overflow" the buffer, but then reset it back with bpf_vlog_reset() to a position inside the user-supplied log_buf. In such a situation we still want to properly maintain the terminating zero. We will eventually return -ENOSPC even if the final log buffer is small (we detect this through the log->len_max check). This behavior is simpler to reason about and is consistent with the current behavior of the verifier log. Handling this required a small addition to the bpf_vlog_reset() logic to avoid doing put_user() beyond the physical log buffer dimensions.
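As a purely illustrative user-space model of the accounting described above (not part of the diff below; the toy_* names are hypothetical, while the field names mirror the kernel struct):

#include <stdint.h>
#include <limits.h>

struct toy_log {
	uint32_t len_total;           /* physical buffer size */
	uint64_t start_pos, end_pos;  /* logical log window */
	uint32_t len_max;             /* high-water mark, capped at UINT_MAX */
};

/* add_len includes the terminating '\0', as in bpf_vlog_update_len_max() */
static void toy_log_add(struct toy_log *log, uint32_t add_len)
{
	uint64_t len = log->end_pos - log->start_pos + add_len;

	if (len > log->len_max)
		log->len_max = len > UINT_MAX ? UINT_MAX : (uint32_t)len;
	log->end_pos += add_len - 1;  /* the terminating '\0' itself isn't counted */
}

/* truncation test corresponding to the new bpf_vlog_truncated() */
static int toy_log_truncated(const struct toy_log *log)
{
	return log->len_max > log->len_total;
}

/* e.g. with a 16-byte buffer, appending two chunks of 9 characters each
 * (10 bytes counting the '\0') leaves end_pos == 18 and len_max == 19 > 16,
 * so the load reports -ENOSPC, while len_max tells the user that a 19-byte
 * buffer would have been enough.
 */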
Another issue to keep in mind is that we limit the log buffer size to a 32-bit value and keep the log length as a u32, but theoretically the verifier could produce a huge log stretching beyond 4GB. Instead of keeping (and later returning) a 64-bit log length, we cap it at UINT_MAX. The current UAPI makes it impossible to specify a log buffer size bigger than 4GB anyway, so we don't really lose anything here and keep everything consistently 32-bit in the UAPI. This property will be utilized in the next patch.

Doing the same determination of the maximum log buffer for rolling mode is trivial, as log->end_pos and log->start_pos are already logical positions, so there is nothing new there.

These changes also incidentally fix one small issue with the previous logging logic. Previously, if the user provided a log buffer of size N and the actual log output was exactly N-1 bytes + terminating \0, kernel logic couldn't distinguish this condition from the log truncation scenario, which would also end up with truncated log contents of N-1 bytes + terminating \0. But now, with log->end_pos being a logical position that can go beyond the actual log buffer size, we can distinguish these two conditions, which we do in this patch. This plays nicely with returning log_size_actual (implemented in the UAPI in the next patch), as we can now guarantee that if the user takes such log_size_actual and provides a log buffer of that exact size, they will not get -ENOSPC in return.

All in all, these changes conceptually unify the fixed and rolling log modes much better and allow a nice feature requested by users: knowing what the size of the buffer should be to avoid -ENOSPC. We'll plumb this through the UAPI and the code in the next patch.

Signed-off-by: Andrii Nakryiko --- include/linux/bpf_verifier.h | 12 ++----- kernel/bpf/log.c | 68 +++++++++++++++++++++++++----------- 2 files changed, 50 insertions(+), 30 deletions(-) diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index 4c926227f612..98d2eb382dbb 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -504,6 +504,7 @@ struct bpf_verifier_log { char __user *ubuf; u32 level; u32 len_total; + u32 len_max; char kbuf[BPF_VERIFIER_TMP_LOG_SIZE]; }; @@ -517,23 +518,16 @@ struct bpf_verifier_log { #define BPF_LOG_MIN_ALIGNMENT 8U #define BPF_LOG_ALIGNMENT 40U -static inline u32 bpf_log_used(const struct bpf_verifier_log *log) -{ - return log->end_pos - log->start_pos; -} - static inline bool bpf_verifier_log_full(const struct bpf_verifier_log *log) { if (log->level & BPF_LOG_FIXED) - return bpf_log_used(log) >= log->len_total - 1; + return log->end_pos >= log->len_total; return false; } static inline bool bpf_verifier_log_needed(const struct bpf_verifier_log *log) { - return log && - ((log->level && log->ubuf && !bpf_verifier_log_full(log)) || - log->level == BPF_LOG_KERNEL); + return log && log->level; } #define BPF_MAX_SUBPROGS 256 diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c index a64cc430a9c9..a8b5dfbcb355 100644 --- a/kernel/bpf/log.c +++ b/kernel/bpf/log.c @@ -16,10 +16,26 @@ bool bpf_verifier_log_attr_valid(const struct bpf_verifier_log *log) log->level && log->ubuf && !(log->level & ~BPF_LOG_MASK); } +static void bpf_vlog_update_len_max(struct bpf_verifier_log *log, u32 add_len) +{ + /* add_len includes terminal \0, so no need for +1.
*/ + u64 len = log->end_pos - log->start_pos + add_len; + + /* log->len_max could be larger than our current len due to + * bpf_vlog_reset() calls, so we maintain the max of any length at any + * previous point + */ + if (len > UINT_MAX) + log->len_max = UINT_MAX; + else if (len > log->len_max) + log->len_max = len; +} + void bpf_verifier_vlog(struct bpf_verifier_log *log, const char *fmt, va_list args) { - unsigned int n; + u64 cur_pos; + u32 new_n, n; n = vscnprintf(log->kbuf, BPF_VERIFIER_TMP_LOG_SIZE, fmt, args); @@ -33,21 +49,28 @@ void bpf_verifier_vlog(struct bpf_verifier_log *log, const char *fmt, return; } + n += 1; /* include terminating zero */ if (log->level & BPF_LOG_FIXED) { - n = min(log->len_total - bpf_log_used(log) - 1, n); - log->kbuf[n] = '\0'; - n += 1; - - if (copy_to_user(log->ubuf + log->end_pos, log->kbuf, n)) - goto fail; + /* check if we have at least something to put into user buf */ + new_n = 0; + if (log->end_pos < log->len_total - 1) { + new_n = min_t(u32, log->len_total - log->end_pos, n); + log->kbuf[new_n - 1] = '\0'; + } + bpf_vlog_update_len_max(log, n); + cur_pos = log->end_pos; log->end_pos += n - 1; /* don't count terminating '\0' */ + + if (log->ubuf && new_n && + copy_to_user(log->ubuf + cur_pos, log->kbuf, new_n)) + goto fail; } else { - u64 new_end, new_start, cur_pos; + u64 new_end, new_start; u32 buf_start, buf_end, new_n; - log->kbuf[n] = '\0'; - n += 1; + log->kbuf[n - 1] = '\0'; + bpf_vlog_update_len_max(log, n); new_end = log->end_pos + n; if (new_end - log->start_pos >= log->len_total) @@ -66,6 +89,12 @@ void bpf_verifier_vlog(struct bpf_verifier_log *log, const char *fmt, if (buf_end == 0) buf_end = log->len_total; + log->start_pos = new_start; + log->end_pos = new_end - 1; /* don't count terminating '\0' */ + + if (!log->ubuf) + return; + /* if buf_start > buf_end, we wrapped around; * if buf_start == buf_end, then we fill ubuf completely; we * can't have buf_start == buf_end to mean that there is @@ -89,9 +118,6 @@ void bpf_verifier_vlog(struct bpf_verifier_log *log, const char *fmt, buf_end)) goto fail; } - - log->start_pos = new_start; - log->end_pos = new_end - 1; /* don't count terminating '\0' */ } return; @@ -114,8 +140,13 @@ void bpf_vlog_reset(struct bpf_verifier_log *log, u64 new_pos) log->end_pos = new_pos; if (log->end_pos < log->start_pos) log->start_pos = log->end_pos; - div_u64_rem(new_pos, log->len_total, &pos); - if (put_user(zero, log->ubuf + pos)) + + if (log->level & BPF_LOG_FIXED) + pos = log->end_pos + 1; + else + div_u64_rem(new_pos, log->len_total, &pos); + + if (log->ubuf && pos < log->len_total && put_user(zero, log->ubuf + pos)) log->ubuf = NULL; } @@ -167,12 +198,7 @@ static int bpf_vlog_reverse_ubuf(struct bpf_verifier_log *log, int start, int en bool bpf_vlog_truncated(const struct bpf_verifier_log *log) { - if (!log->level) - return false; - else if (log->level & BPF_LOG_FIXED) - return bpf_log_used(log) >= log->len_total - 1; - else - return log->start_pos > 0; + return log->len_max > log->len_total; } void bpf_vlog_finalize(struct bpf_verifier_log *log) From patchwork Thu Mar 30 04:16:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13193375 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP 
id 1AE64C6FD1D for ; Thu, 30 Mar 2023 04:18:30 +0000 (UTC)
From: Andrii Nakryiko
Subject: [PATCH bpf-next 6/8] bpf: add log_size_actual output field to return log contents size
Date: Wed, 29 Mar 2023 21:16:40 -0700
Message-ID: <20230330041642.1118787-7-andrii@kernel.org>
In-Reply-To: <20230330041642.1118787-1-andrii@kernel.org>
References: <20230330041642.1118787-1-andrii@kernel.org>

Add an output-only log_size_actual/btf_log_size_actual field to the BPF_PROG_LOAD and BPF_BTF_LOAD commands, respectively. It returns the size of the log buffer necessary to fit all the log contents at the specified log_level. This is very useful for BPF loader libraries like libbpf to be able to size the log buffer correctly, but it could also be used by users directly, if necessary.

This patch plumbs all this through the code, taking into account the actual bpf_attr size provided by user space to determine whether these new fields are expected by users and, if they are, setting them from the kernel on return. We refactor the btf_parse() function to accommodate this, moving attr and uattr handling inside it. The rest is very straightforward code, which is split from the logging accounting changes in the previous patch to make it simpler to review the logic vs the UAPI changes.
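A hedged illustration of the intended usage from user space, via the raw bpf() syscall and the log_size_actual field added by this patch (a kernel without this series simply leaves the field at 0; the trivial "r0 = 0; exit" program only keeps the example self-contained). This is not part of the diff below:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

int main(void)
{
	struct bpf_insn insns[] = {
		{ .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = 0, .imm = 0 },
		{ .code = BPF_JMP | BPF_EXIT },
	};
	char small_buf[16];
	union bpf_attr attr;
	char *big_buf;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
	attr.insns = (__u64)(unsigned long)insns;
	attr.insn_cnt = 2;
	attr.license = (__u64)(unsigned long)"GPL";
	attr.log_buf = (__u64)(unsigned long)small_buf;
	attr.log_size = sizeof(small_buf);
	attr.log_level = 2; /* verbose log, will not fit into 16 bytes */

	fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
	if (fd < 0 && errno == ENOSPC && attr.log_size_actual > 0) {
		/* retry with a buffer of exactly the size the kernel reported */
		big_buf = malloc(attr.log_size_actual);
		if (!big_buf)
			return 1;
		attr.log_buf = (__u64)(unsigned long)big_buf;
		attr.log_size = attr.log_size_actual;
		fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
		printf("%s", big_buf);
		free(big_buf);
	}
	if (fd >= 0)
		close(fd);
	return 0;
}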
Signed-off-by: Andrii Nakryiko --- include/linux/bpf.h | 2 +- include/linux/btf.h | 2 +- include/uapi/linux/bpf.h | 10 ++++++++++ kernel/bpf/btf.c | 32 ++++++++++++++++++-------------- kernel/bpf/syscall.c | 16 ++++++++-------- kernel/bpf/verifier.c | 8 +++++++- tools/include/uapi/linux/bpf.h | 12 +++++++++++- 7 files changed, 56 insertions(+), 26 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 2d8f3f639e68..57507a2fcc8d 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -2176,7 +2176,7 @@ int bpf_check_uarg_tail_zero(bpfptr_t uaddr, size_t expected_size, size_t actual_size); /* verify correctness of eBPF program */ -int bpf_check(struct bpf_prog **fp, union bpf_attr *attr, bpfptr_t uattr); +int bpf_check(struct bpf_prog **fp, union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size); #ifndef CONFIG_BPF_JIT_ALWAYS_ON void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth); diff --git a/include/linux/btf.h b/include/linux/btf.h index d53b10cc55f2..495250162422 100644 --- a/include/linux/btf.h +++ b/include/linux/btf.h @@ -125,7 +125,7 @@ extern const struct file_operations btf_fops; void btf_get(struct btf *btf); void btf_put(struct btf *btf); -int btf_new_fd(const union bpf_attr *attr, bpfptr_t uattr); +int btf_new_fd(const union bpf_attr *attr, bpfptr_t uattr, u32 uattr_sz); struct btf *btf_get_by_fd(int fd); int btf_get_info_by_fd(const struct btf *btf, const union bpf_attr *attr, diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index e3d3b5160d26..2d90b820ba1e 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -1407,6 +1407,11 @@ union bpf_attr { __aligned_u64 fd_array; /* array of FDs */ __aligned_u64 core_relos; __u32 core_relo_rec_size; /* sizeof(struct bpf_core_relo) */ + /* output: actual total log contents size (including termintaing zero). + * It could be both larger than original log_size (if log was + * truncated), or smaller (if log buffer wasn't filled completely). + */ + __u32 log_size_actual; }; struct { /* anonymous struct used by BPF_OBJ_* commands */ @@ -1492,6 +1497,11 @@ union bpf_attr { __u32 btf_size; __u32 btf_log_size; __u32 btf_log_level; + /* output: actual total log contents size (including termintaing zero). + * It could be both larger than original log_size (if log was + * truncated), or smaller (if log buffer wasn't filled completely). 
+ */ + __u32 btf_log_size_actual; }; struct { diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 36e3c25bdca5..1e974383f0e6 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -5504,9 +5504,10 @@ static int btf_check_type_tags(struct btf_verifier_env *env, return 0; } -static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size, - u32 log_level, char __user *log_ubuf, u32 log_size) +static struct btf *btf_parse(const union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size) { + bpfptr_t btf_data = make_bpfptr(attr->btf, uattr.is_kernel); + char __user *log_ubuf = u64_to_user_ptr(attr->btf_log_buf); struct btf_struct_metas *struct_meta_tab; struct btf_verifier_env *env = NULL; struct bpf_verifier_log *log; @@ -5514,7 +5515,7 @@ static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size, u8 *data; int err; - if (btf_data_size > BTF_MAX_SIZE) + if (attr->btf_size > BTF_MAX_SIZE) return ERR_PTR(-E2BIG); env = kzalloc(sizeof(*env), GFP_KERNEL | __GFP_NOWARN); @@ -5522,13 +5523,13 @@ static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size, return ERR_PTR(-ENOMEM); log = &env->log; - if (log_level || log_ubuf || log_size) { + if (attr->btf_log_level || log_ubuf || attr->btf_log_size) { /* user requested verbose verifier output * and supplied buffer to store the verification trace */ - log->level = log_level; + log->level = attr->btf_log_level; log->ubuf = log_ubuf; - log->len_total = log_size; + log->len_total = attr->btf_log_size; /* log attributes have to be sane */ if (!bpf_verifier_log_attr_valid(log)) { @@ -5544,16 +5545,16 @@ static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size, } env->btf = btf; - data = kvmalloc(btf_data_size, GFP_KERNEL | __GFP_NOWARN); + data = kvmalloc(attr->btf_size, GFP_KERNEL | __GFP_NOWARN); if (!data) { err = -ENOMEM; goto errout; } btf->data = data; - btf->data_size = btf_data_size; + btf->data_size = attr->btf_size; - if (copy_from_bpfptr(data, btf_data, btf_data_size)) { + if (copy_from_bpfptr(data, btf_data, attr->btf_size)) { err = -EFAULT; goto errout; } @@ -5594,6 +5595,12 @@ static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size, } bpf_vlog_finalize(log); + if (uattr_size >= offsetofend(union bpf_attr, btf_log_size_actual) && + copy_to_bpfptr_offset(uattr, offsetof(union bpf_attr, btf_log_size_actual), + &log->len_max, sizeof(log->len_max))) { + err = -EFAULT; + goto errout_meta; + } if (bpf_vlog_truncated(log)) { err = -ENOSPC; goto errout_meta; @@ -7214,15 +7221,12 @@ static int __btf_new_fd(struct btf *btf) return anon_inode_getfd("btf", &btf_fops, btf, O_RDONLY | O_CLOEXEC); } -int btf_new_fd(const union bpf_attr *attr, bpfptr_t uattr) +int btf_new_fd(const union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size) { struct btf *btf; int ret; - btf = btf_parse(make_bpfptr(attr->btf, uattr.is_kernel), - attr->btf_size, attr->btf_log_level, - u64_to_user_ptr(attr->btf_log_buf), - attr->btf_log_size); + btf = btf_parse(attr, uattr, uattr_size); if (IS_ERR(btf)) return PTR_ERR(btf); diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index e18ac7fdc210..fe2411a0d68e 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -2501,9 +2501,9 @@ static bool is_perfmon_prog_type(enum bpf_prog_type prog_type) } /* last field in 'union bpf_attr' used by this command */ -#define BPF_PROG_LOAD_LAST_FIELD core_relo_rec_size +#define BPF_PROG_LOAD_LAST_FIELD log_size_actual -static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr) +static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 
uattr_size) { enum bpf_prog_type type = attr->prog_type; struct bpf_prog *prog, *dst_prog = NULL; @@ -2653,7 +2653,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr) goto free_prog_sec; /* run eBPF verifier */ - err = bpf_check(&prog, attr, uattr); + err = bpf_check(&prog, attr, uattr, uattr_size); if (err < 0) goto free_used_maps; @@ -4371,9 +4371,9 @@ static int bpf_obj_get_info_by_fd(const union bpf_attr *attr, return err; } -#define BPF_BTF_LOAD_LAST_FIELD btf_log_level +#define BPF_BTF_LOAD_LAST_FIELD btf_log_size_actual -static int bpf_btf_load(const union bpf_attr *attr, bpfptr_t uattr) +static int bpf_btf_load(const union bpf_attr *attr, bpfptr_t uattr, __u32 uattr_size) { if (CHECK_ATTR(BPF_BTF_LOAD)) return -EINVAL; @@ -4381,7 +4381,7 @@ static int bpf_btf_load(const union bpf_attr *attr, bpfptr_t uattr) if (!bpf_capable()) return -EPERM; - return btf_new_fd(attr, uattr); + return btf_new_fd(attr, uattr, uattr_size); } #define BPF_BTF_GET_FD_BY_ID_LAST_FIELD btf_id @@ -5059,7 +5059,7 @@ static int __sys_bpf(int cmd, bpfptr_t uattr, unsigned int size) err = map_freeze(&attr); break; case BPF_PROG_LOAD: - err = bpf_prog_load(&attr, uattr); + err = bpf_prog_load(&attr, uattr, size); break; case BPF_OBJ_PIN: err = bpf_obj_pin(&attr); @@ -5104,7 +5104,7 @@ static int __sys_bpf(int cmd, bpfptr_t uattr, unsigned int size) err = bpf_raw_tracepoint_open(&attr); break; case BPF_BTF_LOAD: - err = bpf_btf_load(&attr, uattr); + err = bpf_btf_load(&attr, uattr, size); break; case BPF_BTF_GET_FD_BY_ID: err = bpf_btf_get_fd_by_id(&attr); diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index efeeafe590b9..b9f7ebf8b536 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -18620,7 +18620,7 @@ struct btf *bpf_get_btf_vmlinux(void) return btf_vmlinux; } -int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr) +int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u32 uattr_size) { u64 start_time = ktime_get_ns(); struct bpf_verifier_env *env; @@ -18787,6 +18787,12 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr) env->prog->aux->verified_insns = env->insn_processed; bpf_vlog_finalize(log); + if (uattr_size >= offsetofend(union bpf_attr, log_size_actual) && + copy_to_bpfptr_offset(uattr, offsetof(union bpf_attr, log_size_actual), + &log->len_max, sizeof(log->len_max))) { + ret = -EFAULT; + goto err_release_maps; + } if (bpf_vlog_truncated(log)) ret = -ENOSPC; if (log->level && log->level != BPF_LOG_KERNEL && !log->ubuf) diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index d6c5a022ae28..2d90b820ba1e 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -1407,6 +1407,11 @@ union bpf_attr { __aligned_u64 fd_array; /* array of FDs */ __aligned_u64 core_relos; __u32 core_relo_rec_size; /* sizeof(struct bpf_core_relo) */ + /* output: actual total log contents size (including termintaing zero). + * It could be both larger than original log_size (if log was + * truncated), or smaller (if log buffer wasn't filled completely). + */ + __u32 log_size_actual; }; struct { /* anonymous struct used by BPF_OBJ_* commands */ @@ -1492,6 +1497,11 @@ union bpf_attr { __u32 btf_size; __u32 btf_log_size; __u32 btf_log_level; + /* output: actual total log contents size (including termintaing zero). + * It could be both larger than original log_size (if log was + * truncated), or smaller (if log buffer wasn't filled completely). 
+ */ + __u32 btf_log_size_actual; }; struct { @@ -1513,7 +1523,7 @@ union bpf_attr { struct { /* struct used by BPF_LINK_CREATE command */ union { __u32 prog_fd; /* eBPF program to attach */ - __u32 map_fd; /* eBPF struct_ops to attach */ + __u32 map_fd; /* struct_ops to attach */ }; union { __u32 target_fd; /* object to attach to */
From patchwork Thu Mar 30 04:16:41 2023
X-Patchwork-Submitter: Andrii Nakryiko
X-Patchwork-Id: 13193376
X-Patchwork-Delegate: bpf@iogearbox.net
From: Andrii Nakryiko
Subject: [PATCH bpf-next 7/8] libbpf: wire through log_size_actual value returned from kernel
Date: Wed, 29 Mar 2023 21:16:41 -0700
Message-ID: <20230330041642.1118787-8-andrii@kernel.org>
In-Reply-To: <20230330041642.1118787-1-andrii@kernel.org>
References: <20230330041642.1118787-1-andrii@kernel.org>

Add an output-only log_size_actual field to bpf_prog_load_opts to return the bpf_attr->log_size_actual value back from the bpf() syscall.

Note that we have to drop the const modifier from opts in bpf_prog_load(). This could potentially cause a compilation error for some users. But the usual practice is to define bpf_prog_load_opts as a local variable next to the bpf_prog_load() call and pass a pointer to it, so const vs non-const makes no difference and won't even come up in most (if not all) cases.
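As a usage illustration of exactly that pattern (not part of this patch; the trivial instruction array is only there to keep the snippet self-contained and buildable against libbpf with this change applied):

#include <stdio.h>
#include <unistd.h>
#include <linux/bpf.h>
#include <bpf/bpf.h>

int main(void)
{
	/* minimal "r0 = 0; exit" program, just to have something to load */
	struct bpf_insn insns[] = {
		{ .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = 0, .imm = 0 },
		{ .code = BPF_JMP | BPF_EXIT },
	};
	char log_buf[64];
	/* opts defined locally, right next to the bpf_prog_load() call */
	LIBBPF_OPTS(bpf_prog_load_opts, opts,
		.log_buf = log_buf,
		.log_size = sizeof(log_buf),
		.log_level = 2,
	);
	int fd;

	fd = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, "tiny_prog", "GPL",
			   insns, 2, &opts);
	/* with this patch, opts.log_size_actual reports how large the buffer
	 * would have to be to hold the full log; on an older libbpf or kernel
	 * it simply stays 0
	 */
	printf("fd=%d, full log needs %u bytes\n", fd, opts.log_size_actual);
	if (fd >= 0)
		close(fd);
	return 0;
}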
There are no runtime or ABI backwards/forward compatibility issues at all. If the user provides an old struct bpf_prog_load_opts, libbpf won't set the new field. If an old libbpf is provided a new bpf_prog_load_opts, nothing will happen either, as old libbpf doesn't yet know about this new field. Adding a new variant of bpf_prog_load() just for this seems like unnecessary overkill. Corroborating evidence is the fact that the entire selftests/bpf code base required no adjustment whatsoever.

Signed-off-by: Andrii Nakryiko --- tools/lib/bpf/bpf.c | 7 +++++-- tools/lib/bpf/bpf.h | 11 +++++++++-- 2 files changed, 14 insertions(+), 4 deletions(-) diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c index 615185226ed0..00938b6983b7 100644 --- a/tools/lib/bpf/bpf.c +++ b/tools/lib/bpf/bpf.c @@ -230,9 +230,9 @@ alloc_zero_tailing_info(const void *orecord, __u32 cnt, int bpf_prog_load(enum bpf_prog_type prog_type, const char *prog_name, const char *license, const struct bpf_insn *insns, size_t insn_cnt, - const struct bpf_prog_load_opts *opts) + struct bpf_prog_load_opts *opts) { - const size_t attr_sz = offsetofend(union bpf_attr, fd_array); + const size_t attr_sz = offsetofend(union bpf_attr, log_size_actual); void *finfo = NULL, *linfo = NULL; const char *func_info, *line_info; __u32 log_size, log_level, attach_prog_fd, attach_btf_obj_fd; @@ -314,6 +314,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type, } fd = sys_bpf_prog_load(&attr, attr_sz, attempts); + OPTS_SET(opts, log_size_actual, attr.log_size_actual); if (fd >= 0) return fd; @@ -354,6 +355,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type, } fd = sys_bpf_prog_load(&attr, attr_sz, attempts); + OPTS_SET(opts, log_size_actual, attr.log_size_actual); if (fd >= 0) goto done; } @@ -368,6 +370,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type, attr.log_level = 1; fd = sys_bpf_prog_load(&attr, attr_sz, attempts); + OPTS_SET(opts, log_size_actual, attr.log_size_actual); } done: /* free() doesn't affect errno, so we don't need to restore it */ diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h index b073e73439ef..45a967e65165 100644 --- a/tools/lib/bpf/bpf.h +++ b/tools/lib/bpf/bpf.h @@ -96,13 +96,20 @@ struct bpf_prog_load_opts { __u32 log_level; __u32 log_size; char *log_buf; + /* output: actual total log contents size (including termintaing zero). + * It could be both larger than original log_size (if log was + * truncated), or smaller (if log buffer wasn't filled completely). + * If kernel doesn't support this feature, log_size is left unchanged.
+ */ + __u32 log_size_actual; + size_t :0; }; -#define bpf_prog_load_opts__last_field log_buf +#define bpf_prog_load_opts__last_field log_size_actual LIBBPF_API int bpf_prog_load(enum bpf_prog_type prog_type, const char *prog_name, const char *license, const struct bpf_insn *insns, size_t insn_cnt, - const struct bpf_prog_load_opts *opts); + struct bpf_prog_load_opts *opts); /* Flags to direct loading requirements */ #define MAPS_RELAX_COMPAT 0x01
From patchwork Thu Mar 30 04:16:42 2023
X-Patchwork-Submitter: Andrii Nakryiko
X-Patchwork-Id: 13193377
X-Patchwork-Delegate: bpf@iogearbox.net
From: Andrii Nakryiko
Subject: [PATCH bpf-next 8/8] selftests/bpf: add tests to validate log_size_actual feature
Date: Wed, 29 Mar 2023 21:16:42 -0700
Message-ID: <20230330041642.1118787-9-andrii@kernel.org>
In-Reply-To: <20230330041642.1118787-1-andrii@kernel.org>
References: <20230330041642.1118787-1-andrii@kernel.org>

Add additional test cases validating that log_size_actual is consistent between fixed and rolling log modes, and that log_size_actual can be used *exactly* without causing -ENOSPC, while a log buffer just one byte shorter would cause -ENOSPC.
Signed-off-by: Andrii Nakryiko --- .../selftests/bpf/prog_tests/verifier_log.c | 92 +++++++++++++++---- 1 file changed, 76 insertions(+), 16 deletions(-) diff --git a/tools/testing/selftests/bpf/prog_tests/verifier_log.c b/tools/testing/selftests/bpf/prog_tests/verifier_log.c index afe9e0384055..410bab151f1b 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier_log.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier_log.c @@ -18,25 +18,41 @@ static bool check_prog_load(int prog_fd, bool expect_err, const char *tag) if (!ASSERT_GT(prog_fd, 0, tag)) return false; } + if (prog_fd >= 0) + close(prog_fd); return true; } +static struct { + /* strategically placed before others to avoid accidental modification by kernel */ + char filler[1024]; + char buf[1024]; + /* strategically placed after buf[] to catch more accidental corruptions */ + char reference[1024]; +} logs; +static const struct bpf_insn *insns; +static size_t insn_cnt; + +static int load_prog(struct bpf_prog_load_opts *opts, bool expect_load_error) +{ + int prog_fd; + + prog_fd = bpf_prog_load(BPF_PROG_TYPE_RAW_TRACEPOINT, "log_prog", + "GPL", insns, insn_cnt, opts); + check_prog_load(prog_fd, expect_load_error, "prog_load"); + + return prog_fd; +} + static void verif_log_subtest(const char *name, bool expect_load_error, int log_level) { LIBBPF_OPTS(bpf_prog_load_opts, opts); - struct { - /* strategically placed before others to avoid accidental modification by kernel */ - char filler[1024]; - char buf[1024]; - /* strategically placed after buf[] to catch more accidental corruptions */ - char reference[1024]; - } logs; char *exp_log, prog_name[16], op_name[32]; struct test_log_buf *skel; struct bpf_program *prog; - const struct bpf_insn *insns; - size_t insn_cnt, fixed_log_sz; - int i, err, prog_fd; + size_t fixed_log_sz; + __u32 log_sz_actual_fixed, log_sz_actual_rolling; + int i, err, prog_fd, res; skel = test_log_buf__open(); if (!ASSERT_OK_PTR(skel, "skel_open")) @@ -61,11 +77,7 @@ static void verif_log_subtest(const char *name, bool expect_load_error, int log_ opts.log_buf = logs.reference; opts.log_size = sizeof(logs.reference); opts.log_level = log_level | 8 /* BPF_LOG_FIXED */; - prog_fd = bpf_prog_load(BPF_PROG_TYPE_RAW_TRACEPOINT, "log_fixed", - "GPL", insns, insn_cnt, &opts); - if (!check_prog_load(prog_fd, expect_load_error, "fixed_buf_prog_load")) - goto cleanup; - close(prog_fd); + load_prog(&opts, expect_load_error); fixed_log_sz = strlen(logs.reference) + 1; if (!ASSERT_GT(fixed_log_sz, 50, "fixed_log_sz")) @@ -89,7 +101,7 @@ static void verif_log_subtest(const char *name, bool expect_load_error, int log_ opts.log_level = log_level | 8; /* fixed-length log */ opts.log_size = 25; - prog_fd = bpf_prog_load(BPF_PROG_TYPE_RAW_TRACEPOINT, "log_fixed50", + prog_fd = bpf_prog_load(BPF_PROG_TYPE_RAW_TRACEPOINT, "log_fixed25", "GPL", insns, insn_cnt, &opts); if (!ASSERT_EQ(prog_fd, -ENOSPC, "unexpected_log_fixed_prog_load_result")) { if (prog_fd >= 0) @@ -147,6 +159,54 @@ static void verif_log_subtest(const char *name, bool expect_load_error, int log_ } } + /* (FIXED) get actual log size */ + opts.log_buf = logs.buf; + opts.log_level = log_level | 8; /* BPF_LOG_FIXED */ + opts.log_size = sizeof(logs.buf); + res = load_prog(&opts, expect_load_error); + ASSERT_NEQ(res, -ENOSPC, "prog_load_res_fixed"); + + log_sz_actual_fixed = opts.log_size_actual; + ASSERT_GT(log_sz_actual_fixed, 0, "log_sz_actual_fixed"); + + /* (ROLLING) get actual log size */ + opts.log_buf = logs.buf; + opts.log_level = log_level; + opts.log_size 
= sizeof(logs.buf); + res = load_prog(&opts, expect_load_error); + ASSERT_NEQ(res, -ENOSPC, "prog_load_res_rolling"); + + log_sz_actual_rolling = opts.log_size_actual; + ASSERT_EQ(log_sz_actual_rolling, log_sz_actual_fixed, "log_sz_actual_eq"); + + /* (FIXED) expect -ENOSPC for one byte short log */ + opts.log_buf = logs.buf; + opts.log_level = log_level | 8; /* BPF_LOG_FIXED */ + opts.log_size = log_sz_actual_fixed - 1; + res = load_prog(&opts, true /* should fail */); + ASSERT_EQ(res, -ENOSPC, "prog_load_res_too_short_fixed"); + + /* (FIXED) expect *not* -ENOSPC with exact log_size_actual buffer */ + opts.log_buf = logs.buf; + opts.log_level = log_level | 8; /* BPF_LOG_FIXED */ + opts.log_size = log_sz_actual_fixed; + res = load_prog(&opts, expect_load_error); + ASSERT_NEQ(res, -ENOSPC, "prog_load_res_just_right_fixed"); + + /* (ROLLING) expect -ENOSPC for one byte short log */ + opts.log_buf = logs.buf; + opts.log_level = log_level; + opts.log_size = log_sz_actual_rolling - 1; + res = load_prog(&opts, true /* should fail */); + ASSERT_EQ(res, -ENOSPC, "prog_load_res_too_short_rolling"); + + /* (ROLLING) expect *not* -ENOSPC with exact log_size_actual buffer */ + opts.log_buf = logs.buf; + opts.log_level = log_level; + opts.log_size = log_sz_actual_rolling; + res = load_prog(&opts, expect_load_error); + ASSERT_NEQ(res, -ENOSPC, "prog_load_res_just_right_rolling"); + cleanup: test_log_buf__destroy(skel); }