From patchwork Thu Oct 22 08:21:23 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 11850513
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiri Olsa <jolsa@kernel.org>
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Martin KaFai Lau, Song Liu,
    Yonghong Song, John Fastabend, KP Singh, Daniel Xu, Steven Rostedt,
    Jesper Brouer, Toke Høiland-Jørgensen, Viktor Malik
Subject: [RFC bpf-next 01/16] ftrace: Add check_direct_entry function
Date: Thu, 22 Oct 2020 10:21:23 +0200
Message-Id: <20201022082138.2322434-2-jolsa@kernel.org>
In-Reply-To: <20201022082138.2322434-1-jolsa@kernel.org>
References: <20201022082138.2322434-1-jolsa@kernel.org>
X-Mailing-List: bpf@vger.kernel.org
X-Patchwork-State: RFC

Move the code that checks for a valid direct ip into a separate
check_direct_ip function. It will be used in the following patches;
there is no functional change.
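The return-code contract that the new helper centralizes can be sketched in
user space. This is only an illustrative model: `model_check_direct_ip()` and
its two arrays are invented stand-ins for the kernel's direct hash and ftrace
record lookups, not kernel API.

```c
#include <errno.h>
#include <stddef.h>

/* Toy model of the check_direct_ip() rules:
 * -EBUSY  if a direct trampoline is already attached at ip,
 * -ENODEV if ip does not point at a known ftrace site,
 * 0       otherwise.
 * The arrays stand in for the kernel's hashes and dyn_ftrace records. */
static int model_check_direct_ip(unsigned long ip,
                                 const unsigned long *direct_ips, size_t ndirect,
                                 const unsigned long *known_recs, size_t nrecs)
{
        size_t i;

        for (i = 0; i < ndirect; i++)
                if (direct_ips[i] == ip)
                        return -EBUSY;  /* direct call already registered */

        for (i = 0; i < nrecs; i++)
                if (known_recs[i] == ip)
                        return 0;       /* valid ftrace record, free to use */

        return -ENODEV;                 /* no ftrace record at this address */
}
```

The real helper additionally normalizes @ip to the exact record address and
re-checks it, which the flat arrays here do not need.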
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/trace/ftrace.c | 55 ++++++++++++++++++++++++++-----------------
 1 file changed, 32 insertions(+), 23 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 8185f7240095..27e9210073d3 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5025,6 +5025,36 @@ struct ftrace_direct_func *ftrace_find_direct_func(unsigned long addr)
 	return NULL;
 }
 
+static int check_direct_ip(unsigned long ip)
+{
+	struct dyn_ftrace *rec;
+
+	/* See if there's a direct function at @ip already */
+	if (ftrace_find_rec_direct(ip))
+		return -EBUSY;
+
+	rec = lookup_rec(ip, ip);
+	if (!rec)
+		return -ENODEV;
+
+	/*
+	 * Check if the rec says it has a direct call but we didn't
+	 * find one earlier?
+	 */
+	if (WARN_ON(rec->flags & FTRACE_FL_DIRECT))
+		return -ENODEV;
+
+	/* Make sure the ip points to the exact record */
+	if (ip != rec->ip) {
+		ip = rec->ip;
+		/* Need to check this ip for a direct. */
+		if (ftrace_find_rec_direct(ip))
+			return -EBUSY;
+	}
+
+	return 0;
+}
+
 /**
  * register_ftrace_direct - Call a custom trampoline directly
  * @ip: The address of the nop at the beginning of a function
@@ -5047,35 +5077,14 @@ int register_ftrace_direct(unsigned long ip, unsigned long addr)
 	struct ftrace_direct_func *direct;
 	struct ftrace_func_entry *entry;
 	struct ftrace_hash *free_hash = NULL;
-	struct dyn_ftrace *rec;
 	int ret = -EBUSY;
 
 	mutex_lock(&direct_mutex);
 
-	/* See if there's a direct function at @ip already */
-	if (ftrace_find_rec_direct(ip))
-		goto out_unlock;
-
-	ret = -ENODEV;
-	rec = lookup_rec(ip, ip);
-	if (!rec)
-		goto out_unlock;
-
-	/*
-	 * Check if the rec says it has a direct call but we didn't
-	 * find one earlier?
-	 */
-	if (WARN_ON(rec->flags & FTRACE_FL_DIRECT))
+	ret = check_direct_ip(ip);
+	if (ret)
 		goto out_unlock;
 
-	/* Make sure the ip points to the exact record */
-	if (ip != rec->ip) {
-		ip = rec->ip;
-		/* Need to check this ip for a direct. */
-		if (ftrace_find_rec_direct(ip))
-			goto out_unlock;
-	}
-
 	ret = -ENOMEM;
 	if (ftrace_hash_empty(direct_functions) ||
 	    direct_functions->count > 2 * (1 << direct_functions->size_bits)) {

From patchwork Thu Oct 22 08:21:24 2020
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 11850515
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiri Olsa <jolsa@kernel.org>
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Martin KaFai Lau, Song Liu,
    Yonghong Song, John Fastabend, KP Singh, Daniel Xu, Steven Rostedt,
    Jesper Brouer, Toke Høiland-Jørgensen, Viktor Malik
Subject: [RFC bpf-next 02/16] ftrace: Add adjust_direct_size function
Date: Thu, 22 Oct 2020 10:21:24 +0200
Message-Id: <20201022082138.2322434-3-jolsa@kernel.org>
In-Reply-To: <20201022082138.2322434-1-jolsa@kernel.org>
References: <20201022082138.2322434-1-jolsa@kernel.org>
X-Mailing-List: bpf@vger.kernel.org
X-Patchwork-State: RFC

Move the code that adjusts the size of the direct hash into a separate
adjust_direct_size function. It will be used in the following patches;
there is no functional change.
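The sizing rule being factored out can be restated as a small user-space
model. The `needs_resize()`/`new_hash_size()` names are invented for this
sketch; they only mirror the arithmetic in adjust_direct_size(), not its
hash allocation.

```c
/* Illustrative model of the adjust_direct_size() arithmetic: the direct
 * hash is reallocated when it is empty or when its entry count exceeds
 * twice its bucket count (2 * (1 << size_bits)); the replacement size
 * starts from the requested count (0 for an empty hash, so dup_hash()
 * picks defaults) and is clamped to a minimum of 32. */
static int needs_resize(int count, int size_bits, int empty)
{
        return empty || count > 2 * (1 << size_bits);
}

static int new_hash_size(int requested, int empty)
{
        int size = empty ? 0 : requested;

        return size < 32 ? 32 : size;   /* never shrink below 32 slots */
}
```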
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/trace/ftrace.c | 39 +++++++++++++++++++++----------------
 1 file changed, 23 insertions(+), 16 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 27e9210073d3..cb8b7a66c6af 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5055,6 +5055,27 @@ static int check_direct_ip(unsigned long ip)
 	return 0;
 }
 
+static int adjust_direct_size(int new_size, struct ftrace_hash **free_hash)
+{
+	if (ftrace_hash_empty(direct_functions) ||
+	    direct_functions->count > 2 * (1 << direct_functions->size_bits)) {
+		struct ftrace_hash *new_hash;
+		int size = ftrace_hash_empty(direct_functions) ? 0 : new_size;
+
+		if (size < 32)
+			size = 32;
+
+		new_hash = dup_hash(direct_functions, size);
+		if (!new_hash)
+			return -ENOMEM;
+
+		*free_hash = direct_functions;
+		direct_functions = new_hash;
+	}
+
+	return 0;
+}
+
 /**
  * register_ftrace_direct - Call a custom trampoline directly
  * @ip: The address of the nop at the beginning of a function
@@ -5086,22 +5107,8 @@ int register_ftrace_direct(unsigned long ip, unsigned long addr)
 		goto out_unlock;
 
 	ret = -ENOMEM;
-	if (ftrace_hash_empty(direct_functions) ||
-	    direct_functions->count > 2 * (1 << direct_functions->size_bits)) {
-		struct ftrace_hash *new_hash;
-		int size = ftrace_hash_empty(direct_functions) ? 0 :
-			direct_functions->count + 1;
-
-		if (size < 32)
-			size = 32;
-
-		new_hash = dup_hash(direct_functions, size);
-		if (!new_hash)
-			goto out_unlock;
-
-		free_hash = direct_functions;
-		direct_functions = new_hash;
-	}
+	if (adjust_direct_size(direct_functions->count + 1, &free_hash))
+		goto out_unlock;
 
 	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
 	if (!entry)

From patchwork Thu Oct 22 08:21:25 2020
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 11850507
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiri Olsa <jolsa@kernel.org>
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Martin KaFai Lau, Song Liu,
    Yonghong Song, John Fastabend, KP Singh, Daniel Xu, Steven Rostedt,
    Jesper Brouer, Toke Høiland-Jørgensen, Viktor Malik
Subject: [RFC bpf-next 03/16] ftrace: Add get/put_direct_func function
Date: Thu, 22 Oct 2020 10:21:25 +0200
Message-Id: <20201022082138.2322434-4-jolsa@kernel.org>
In-Reply-To: <20201022082138.2322434-1-jolsa@kernel.org>
References: <20201022082138.2322434-1-jolsa@kernel.org>
X-Mailing-List: bpf@vger.kernel.org
X-Patchwork-State: RFC

Move the code that manages ftrace_direct_funcs entries into new
get_direct_func and put_direct_func functions. They will be used in the
following patches; there is no functional change.
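The lookup-or-create pattern these helpers capture can be sketched in user
space. This is only a model under simplifying assumptions: the `model_*`
names are invented, and the kernel version additionally holds direct_mutex,
maintains ftrace_direct_func_count, and defers the free through
synchronize_rcu_tasks(), none of which is modeled here.

```c
#include <stdlib.h>

/* Minimal sketch of the get/put pattern factored out in this patch:
 * model_get_direct() looks an address up in a list and allocates a new
 * zero-count entry on a miss; model_put_direct() unlinks and frees an
 * entry whose user dropped it. */
struct model_direct {
        unsigned long addr;
        int count;
        struct model_direct *next;
};

static struct model_direct *model_direct_list;

static struct model_direct *model_get_direct(unsigned long addr)
{
        struct model_direct *d;

        for (d = model_direct_list; d; d = d->next)
                if (d->addr == addr)
                        return d;       /* existing entry, shared */

        d = malloc(sizeof(*d));
        if (!d)
                return NULL;
        d->addr = addr;
        d->count = 0;
        d->next = model_direct_list;    /* link new entry at the head */
        model_direct_list = d;
        return d;
}

static void model_put_direct(struct model_direct *dead)
{
        struct model_direct **p;

        for (p = &model_direct_list; *p; p = &(*p)->next) {
                if (*p == dead) {
                        *p = dead->next;        /* unlink, then free */
                        free(dead);
                        return;
                }
        }
}
```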
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/trace/ftrace.c | 44 ++++++++++++++++++++++++++--------------
 1 file changed, 30 insertions(+), 14 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index cb8b7a66c6af..95ef7e2a6a57 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5076,6 +5076,32 @@ static int adjust_direct_size(int new_size, struct ftrace_hash **free_hash)
 	return 0;
 }
 
+static struct ftrace_direct_func *get_direct_func(unsigned long addr)
+{
+	struct ftrace_direct_func *direct;
+
+	direct = ftrace_find_direct_func(addr);
+	if (!direct) {
+		direct = kmalloc(sizeof(*direct), GFP_KERNEL);
+		if (!direct)
+			return NULL;
+		direct->addr = addr;
+		direct->count = 0;
+		list_add_rcu(&direct->next, &ftrace_direct_funcs);
+		ftrace_direct_func_count++;
+	}
+
+	return direct;
+}
+
+static void put_direct_func(struct ftrace_direct_func *direct)
+{
+	list_del_rcu(&direct->next);
+	synchronize_rcu_tasks();
+	kfree(direct);
+	ftrace_direct_func_count--;
+}
+
 /**
  * register_ftrace_direct - Call a custom trampoline directly
  * @ip: The address of the nop at the beginning of a function
@@ -5114,17 +5140,10 @@ int register_ftrace_direct(unsigned long ip, unsigned long addr)
 	if (!entry)
 		goto out_unlock;
 
-	direct = ftrace_find_direct_func(addr);
+	direct = get_direct_func(addr);
 	if (!direct) {
-		direct = kmalloc(sizeof(*direct), GFP_KERNEL);
-		if (!direct) {
-			kfree(entry);
-			goto out_unlock;
-		}
-		direct->addr = addr;
-		direct->count = 0;
-		list_add_rcu(&direct->next, &ftrace_direct_funcs);
-		ftrace_direct_func_count++;
+		kfree(entry);
+		goto out_unlock;
 	}
 
 	entry->ip = ip;
@@ -5144,13 +5163,10 @@ int register_ftrace_direct(unsigned long ip, unsigned long addr)
 	if (ret) {
 		kfree(entry);
 		if (!direct->count) {
-			list_del_rcu(&direct->next);
-			synchronize_rcu_tasks();
-			kfree(direct);
+			put_direct_func(direct);
 			if (free_hash)
 				free_ftrace_hash(free_hash);
 			free_hash = NULL;
-			ftrace_direct_func_count--;
 		}
 	} else {
 		direct->count++;

From patchwork Thu Oct 22 08:21:26 2020
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 11850503
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiri Olsa <jolsa@kernel.org>
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Martin KaFai Lau, Song Liu,
    Yonghong Song, John Fastabend, KP Singh, Daniel Xu, Steven Rostedt,
    Jesper Brouer, Toke Høiland-Jørgensen, Viktor Malik
Subject: [RFC bpf-next 04/16] ftrace: Add ftrace_set_filter_ips function
Date: Thu, 22 Oct 2020 10:21:26 +0200
Message-Id: <20201022082138.2322434-5-jolsa@kernel.org>
In-Reply-To: <20201022082138.2322434-1-jolsa@kernel.org>
References: <20201022082138.2322434-1-jolsa@kernel.org>
X-Mailing-List: bpf@vger.kernel.org
X-Patchwork-State: RFC

Add a ftrace_set_filter_ips function that allows setting a filter on
multiple ip addresses at once. These are provided as an array of
unsigned longs together with the array count:

  int ftrace_set_filter_ips(struct ftrace_ops *ops, unsigned long *ips,
                            int count, int remove);

The function copies the logic of ftrace_set_filter_ip, but applies it
over multiple ip addresses. It will be used in the following patches for
faster updates of direct ip/addr trampolines.
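A key property of the new path is that the addresses are applied to a copied
hash and only swapped in when every one was accepted. That can be modeled in
user space; `model_set_filter_ips()` and its flat array are invented
stand-ins, and "acceptance" is simulated as ip != 0 where the real code
calls ftrace_match_addr().

```c
#include <errno.h>
#include <stddef.h>

/* Illustrative model of the all-or-nothing behaviour of
 * ftrace_set_hash_ips(): addresses go into a scratch copy first, and the
 * live filter is replaced only when every address was accepted, so a
 * failing entry leaves the filter untouched. */
#define MODEL_FILTER_MAX 16

static unsigned long model_filter[MODEL_FILTER_MAX];
static size_t model_filter_len;

static int model_set_filter_ips(const unsigned long *ips, size_t count)
{
        unsigned long scratch[MODEL_FILTER_MAX];
        size_t n = model_filter_len, i;

        if (n + count > MODEL_FILTER_MAX)
                return -ENOMEM;

        for (i = 0; i < n; i++)                 /* copy the live filter */
                scratch[i] = model_filter[i];

        for (i = 0; i < count; i++) {
                if (!ips[i])            /* stand-in for a failed match */
                        return -EINVAL; /* scratch dropped, filter kept */
                scratch[n++] = ips[i];
        }

        for (i = 0; i < n; i++)                 /* commit the scratch copy */
                model_filter[i] = scratch[i];
        model_filter_len = n;
        return 0;
}

static size_t model_filter_count(void)
{
        return model_filter_len;
}
```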
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/ftrace.h |  3 +++
 kernel/trace/ftrace.c  | 56 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 59 insertions(+)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 1bd3a0356ae4..d71d88d10517 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -463,6 +463,8 @@ struct dyn_ftrace {
 int ftrace_force_update(void);
 int ftrace_set_filter_ip(struct ftrace_ops *ops, unsigned long ip,
 			 int remove, int reset);
+int ftrace_set_filter_ips(struct ftrace_ops *ops, unsigned long *ips,
+			  int count, int remove);
 int ftrace_set_filter(struct ftrace_ops *ops, unsigned char *buf,
 		      int len, int reset);
 int ftrace_set_notrace(struct ftrace_ops *ops, unsigned char *buf,
@@ -738,6 +740,7 @@ static inline unsigned long ftrace_location(unsigned long ip)
 #define ftrace_regex_open(ops, flag, inod, file) ({ -ENODEV; })
 #define ftrace_set_early_filter(ops, buf, enable) do { } while (0)
 #define ftrace_set_filter_ip(ops, ip, remove, reset) ({ -ENODEV; })
+#define ftrace_set_filter_ips(ops, ip, remove) ({ -ENODEV; })
 #define ftrace_set_filter(ops, buf, len, reset) ({ -ENODEV; })
 #define ftrace_set_notrace(ops, buf, len, reset) ({ -ENODEV; })
 #define ftrace_free_filter(ops) do { } while (0)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 95ef7e2a6a57..44c2d21b8c19 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -4977,6 +4977,47 @@ ftrace_set_hash(struct ftrace_ops *ops, unsigned char *buf, int len,
 	return ret;
 }
 
+static int
+ftrace_set_hash_ips(struct ftrace_ops *ops, unsigned long *ips,
+		    int count, int remove, int enable)
+{
+	struct ftrace_hash **orig_hash;
+	struct ftrace_hash *hash;
+	int ret, i;
+
+	if (unlikely(ftrace_disabled))
+		return -ENODEV;
+
+	mutex_lock(&ops->func_hash->regex_lock);
+
+	if (enable)
+		orig_hash = &ops->func_hash->filter_hash;
+	else
+		orig_hash = &ops->func_hash->notrace_hash;
+
+	hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, *orig_hash);
+	if (!hash) {
+		ret = -ENOMEM;
+		goto out_regex_unlock;
+	}
+
+	for (i = 0; i < count; i++) {
+		ret = ftrace_match_addr(hash, ips[i], remove);
+		if (ret < 0)
+			goto out_regex_unlock;
+	}
+
+	mutex_lock(&ftrace_lock);
+	ret = ftrace_hash_move_and_update_ops(ops, orig_hash, hash, enable);
+	mutex_unlock(&ftrace_lock);
+
+ out_regex_unlock:
+	mutex_unlock(&ops->func_hash->regex_lock);
+
+	free_ftrace_hash(hash);
+	return ret;
+}
+
 static int
 ftrace_set_addr(struct ftrace_ops *ops, unsigned long ip, int remove,
 		int reset, int enable)
@@ -4984,6 +5025,13 @@ ftrace_set_addr(struct ftrace_ops *ops, unsigned long ip, int remove,
 	return ftrace_set_hash(ops, NULL, 0, ip, remove, reset, enable);
 }
 
+static int
+ftrace_set_addrs(struct ftrace_ops *ops, unsigned long *ips,
+		 int count, int remove, int enable)
+{
+	return ftrace_set_hash_ips(ops, ips, count, remove, enable);
+}
+
 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
 
 struct ftrace_direct_func {
@@ -5395,6 +5443,14 @@ int ftrace_set_filter_ip(struct ftrace_ops *ops, unsigned long ip,
 }
 EXPORT_SYMBOL_GPL(ftrace_set_filter_ip);
 
+int ftrace_set_filter_ips(struct ftrace_ops *ops, unsigned long *ips,
+			  int count, int remove)
+{
+	ftrace_ops_init(ops);
+	return ftrace_set_addrs(ops, ips, count, remove, 1);
+}
+EXPORT_SYMBOL_GPL(ftrace_set_filter_ips);
+
 /**
  * ftrace_ops_set_global_filter - setup ops to use global filters
  * @ops - the ops which will use the global filters

From patchwork Thu Oct 22 08:21:27 2020
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 11850511
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiri Olsa <jolsa@kernel.org>
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Martin KaFai Lau, Song Liu,
    Yonghong Song, John Fastabend, KP Singh, Daniel Xu, Steven Rostedt,
    Jesper Brouer, Toke Høiland-Jørgensen, Viktor Malik
Subject: [RFC bpf-next 05/16] ftrace: Add register_ftrace_direct_ips function
Date: Thu, 22 Oct 2020 10:21:27 +0200
Message-Id: <20201022082138.2322434-6-jolsa@kernel.org>
In-Reply-To: <20201022082138.2322434-1-jolsa@kernel.org>
References: <20201022082138.2322434-1-jolsa@kernel.org>
X-Mailing-List: bpf@vger.kernel.org
X-Patchwork-State: RFC

Adding register_ftrace_direct_ips function that allows registering an
array of ip addresses and trampolines for the direct filter. The
interface is:

  int register_ftrace_direct_ips(unsigned long *ips, unsigned long *addrs,
                                 int count);

It will be used in the following patches to register bpf trampolines in
batch mode.
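The check-then-commit shape of the batch register path can be sketched in
user space. This is only an illustrative model with invented `model_*`
names: the registration state is a flat array instead of the kernel's
hashes, and the locking, hash resizing, and error-path rollback of the real
function are not modeled.

```c
#include <errno.h>
#include <stddef.h>

/* Model of the batch register path: every ip is validated before any
 * state changes (mirroring the check_direct_ip() loop), and entries are
 * only committed once all checks have passed, so the call is
 * all-or-nothing. */
#define MODEL_REG_MAX 16

static unsigned long model_reg_ips[MODEL_REG_MAX];
static unsigned long model_reg_addrs[MODEL_REG_MAX];
static size_t model_reg_count;

static int model_register_direct_ips(const unsigned long *ips,
                                     const unsigned long *addrs, size_t count)
{
        size_t i, j;

        if (model_reg_count + count > MODEL_REG_MAX)
                return -ENOMEM;

        /* Check phase: an already-registered ip fails the whole batch. */
        for (i = 0; i < count; i++)
                for (j = 0; j < model_reg_count; j++)
                        if (model_reg_ips[j] == ips[i])
                                return -EBUSY;

        /* Commit phase: only reached when every ip passed the check. */
        for (i = 0; i < count; i++) {
                model_reg_ips[model_reg_count] = ips[i];
                model_reg_addrs[model_reg_count] = addrs[i];
                model_reg_count++;
        }
        return 0;
}

static size_t model_registered(void)
{
        return model_reg_count;
}
```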
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/ftrace.h |  2 ++
 kernel/trace/ftrace.c  | 75 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 77 insertions(+)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index d71d88d10517..9ed52755667a 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -291,6 +291,8 @@ int ftrace_modify_direct_caller(struct ftrace_func_entry *entry,
 				unsigned long old_addr,
 				unsigned long new_addr);
 unsigned long ftrace_find_rec_direct(unsigned long ip);
+int register_ftrace_direct_ips(unsigned long *ips, unsigned long *addrs,
+			       int count);
 #else
 # define ftrace_direct_func_count 0
 static inline int register_ftrace_direct(unsigned long ip, unsigned long addr)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 44c2d21b8c19..770bcd1a245a 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5231,6 +5231,81 @@ int register_ftrace_direct(unsigned long ip, unsigned long addr)
 }
 EXPORT_SYMBOL_GPL(register_ftrace_direct);
 
+int register_ftrace_direct_ips(unsigned long *ips, unsigned long *addrs,
+			       int count)
+{
+	struct ftrace_hash *free_hash = NULL;
+	struct ftrace_direct_func *direct;
+	struct ftrace_func_entry *entry;
+	int i, j;
+	int ret;
+
+	mutex_lock(&direct_mutex);
+
+	/* Check all the ips */
+	for (i = 0; i < count; i++) {
+		ret = check_direct_ip(ips[i]);
+		if (ret)
+			goto out_unlock;
+	}
+
+	ret = -ENOMEM;
+	if (adjust_direct_size(direct_functions->count + count, &free_hash))
+		goto out_unlock;
+
+	for (i = 0; i < count; i++) {
+		entry = kmalloc(sizeof(*entry), GFP_KERNEL);
+		if (!entry)
+			goto out_clean;
+
+		direct = get_direct_func(addrs[i]);
+		if (!direct) {
+			kfree(entry);
+			goto out_clean;
+		}
+
+		direct->count++;
+		entry->ip = ips[i];
+		entry->direct = addrs[i];
+		__add_hash_entry(direct_functions, entry);
+	}
+
+	ret = ftrace_set_filter_ips(&direct_ops, ips, count, 0);
+
+	if (!ret && !(direct_ops.flags & FTRACE_OPS_FL_ENABLED)) {
+		ret = register_ftrace_function(&direct_ops);
+		if (ret)
+			ftrace_set_filter_ips(&direct_ops, ips, count, 1);
+	}
+
+ out_clean:
+	if (ret) {
+		for (j = 0; j < i; j++) {
+			direct = get_direct_func(addrs[j]);
+			if (!direct)
+				continue;
+
+			if (!direct->count)
+				put_direct_func(direct);
+
+			entry = ftrace_lookup_ip(direct_functions, ips[j]);
+			if (WARN_ON_ONCE(!entry))
+				continue;
+			free_hash_entry(direct_functions, entry);
+		}
+	}
+ out_unlock:
+	mutex_unlock(&direct_mutex);
+
+	if (free_hash) {
+		synchronize_rcu_tasks();
+		free_ftrace_hash(free_hash);
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(register_ftrace_direct_ips);
+
 static struct ftrace_func_entry *find_direct_entry(unsigned long *ip,
 						   struct dyn_ftrace **recp)
 {

From patchwork Thu Oct 22 08:21:28 2020
X-Patchwork-Submitter: Jiri Olsa
X-Patchwork-Id: 11850517
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiri Olsa <jolsa@kernel.org>
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Martin KaFai Lau, Song Liu,
    Yonghong Song, John Fastabend, KP Singh, Daniel Xu, Steven Rostedt,
    Jesper Brouer, Toke Høiland-Jørgensen, Viktor Malik
Subject: [RFC bpf-next 06/16] ftrace: Add unregister_ftrace_direct_ips function
Date: Thu, 22 Oct 2020 10:21:28 +0200
Message-Id: <20201022082138.2322434-7-jolsa@kernel.org>
In-Reply-To: <20201022082138.2322434-1-jolsa@kernel.org>
References: <20201022082138.2322434-1-jolsa@kernel.org>
X-Mailing-List: bpf@vger.kernel.org
X-Patchwork-State: RFC

Add an unregister_ftrace_direct_ips function that allows unregistering an
array of ip addresses and trampolines for the direct filter.

The interface is:

  int unregister_ftrace_direct_ips(unsigned long *ips, unsigned long *addrs,
				   int count);

It will be used in following patches to unregister bpf trampolines in batch
mode.

Signed-off-by: Jiri Olsa
---
 include/linux/ftrace.h |  2 ++
 kernel/trace/ftrace.c  | 51 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 53 insertions(+)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 9ed52755667a..24525473043e 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -293,6 +293,8 @@ int ftrace_modify_direct_caller(struct ftrace_func_entry *entry,
 unsigned long ftrace_find_rec_direct(unsigned long ip);
 int register_ftrace_direct_ips(unsigned long *ips, unsigned long *addrs,
 			       int count);
+int unregister_ftrace_direct_ips(unsigned long *ips, unsigned long *addrs,
+				 int count);
 #else
 # define ftrace_direct_func_count 0
 static inline int register_ftrace_direct(unsigned long ip, unsigned long addr)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 770bcd1a245a..15a13e6c1f31 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5374,6 +5374,57 @@ int unregister_ftrace_direct(unsigned long ip, unsigned long addr)
 }
 EXPORT_SYMBOL_GPL(unregister_ftrace_direct);
 
+int unregister_ftrace_direct_ips(unsigned long *ips, unsigned long *addrs,
+				 int count)
+{
+	struct ftrace_direct_func *direct;
+	struct ftrace_func_entry *entry;
+	int i, del = 0, ret = -ENODEV;
+
+	mutex_lock(&direct_mutex);
+
+	for (i = 0; i < count; i++) {
+		entry = find_direct_entry(&ips[i], NULL);
+		if (!entry)
+			goto out_unlock;
+		del++;
+	}
+
+	if (direct_functions->count - del == 0)
+		unregister_ftrace_function(&direct_ops);
+
+	ret = ftrace_set_filter_ips(&direct_ops, ips, count, 1);
+
+	WARN_ON(ret);
+
+	for (i = 0; i < count; i++) {
+		entry = __ftrace_lookup_ip(direct_functions, ips[i]);
+
+		if (WARN_ON(!entry))
+			continue;
+
+		remove_hash_entry(direct_functions, entry);
+
+		direct = ftrace_find_direct_func(addrs[i]);
+		if (!WARN_ON(!direct)) {
+			/* This is the good path (see the ! before WARN) */
+			direct->count--;
+			WARN_ON(direct->count < 0);
+			if (!direct->count) {
+				list_del_rcu(&direct->next);
+				synchronize_rcu_tasks();
+				kfree(direct);
+				kfree(entry);
+				ftrace_direct_func_count--;
+			}
+		}
+	}
+ out_unlock:
+	mutex_unlock(&direct_mutex);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(unregister_ftrace_direct_ips);
+
 static struct ftrace_ops stub_ops = {
 	.func		= ftrace_stub,
 };

From patchwork Thu Oct 22 08:21:29 2020
From: Jiri Olsa
Subject: [RFC bpf-next 07/16] kallsyms: Use rb tree for kallsyms name search
Date: Thu, 22 Oct 2020 10:21:29 +0200
Message-Id: <20201022082138.2322434-8-jolsa@kernel.org>
X-Patchwork-State: RFC

The kallsyms_expand_symbol function showed up in several bpf related
profiles, because it's
doing a linear search.

Before:

 Performance counter stats for './src/bpftrace -ve kfunc:__x64_sys_s* \
   { printf("test\n"); } i:ms:10 { printf("exit\n"); exit();}' (5 runs):

     2,535,458,767      cycles:k               ( +-  0.55% )
       940,046,382      cycles:u               ( +-  0.27% )

             33.60 +- 3.27 seconds time elapsed  ( +-  9.73% )

Load all the vmlinux symbols into an rb tree and switch to rb tree search
in the kallsyms_lookup_name function to save a few cycles and some time.

After:

 Performance counter stats for './src/bpftrace -ve kfunc:__x64_sys_s* \
   { printf("test\n"); } i:ms:10 { printf("exit\n"); exit();}' (5 runs):

     2,199,433,771      cycles:k               ( +-  0.55% )
       936,105,469      cycles:u               ( +-  0.37% )

             26.48 +- 3.57 seconds time elapsed  ( +- 13.49% )

Each symbol takes 160 bytes, so for my .config I've got about 18 MBs
used for 115285 symbols.

Signed-off-by: Jiri Olsa
---
 kernel/kallsyms.c | 95 ++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 86 insertions(+), 9 deletions(-)

diff --git a/kernel/kallsyms.c b/kernel/kallsyms.c
index 4fb15fa96734..107c8284170e 100644
--- a/kernel/kallsyms.c
+++ b/kernel/kallsyms.c
@@ -50,6 +50,36 @@ extern const u16 kallsyms_token_index[] __weak;
 
 extern const unsigned int kallsyms_markers[] __weak;
 
+static struct kmem_cache *symbol_cachep;
+
+struct symbol {
+	char name[KSYM_NAME_LEN];
+	unsigned long addr;
+	struct rb_node rb_node;
+};
+
+static struct rb_root symbols_root = RB_ROOT;
+
+static struct symbol *find_symbol(const char *name)
+{
+	struct symbol *sym;
+	struct rb_node *n;
+	int err;
+
+	n = symbols_root.rb_node;
+	while (n) {
+		sym = rb_entry(n, struct symbol, rb_node);
+		err = strcmp(name, sym->name);
+		if (err < 0)
+			n = n->rb_left;
+		else if (err > 0)
+			n = n->rb_right;
+		else
+			return sym;
+	}
+	return NULL;
+}
+
 /*
  * Expand a compressed symbol data into the resulting uncompressed string,
  * if uncompressed string is too long (>= maxlen), it will be truncated,
@@ -164,16 +194,12 @@ static unsigned long kallsyms_sym_address(int idx)
 
 /* Lookup the address for this symbol.
   Returns 0 if not found. */
 unsigned long kallsyms_lookup_name(const char *name)
 {
-	char namebuf[KSYM_NAME_LEN];
-	unsigned long i;
-	unsigned int off;
+	struct symbol *sym;
 
-	for (i = 0, off = 0; i < kallsyms_num_syms; i++) {
-		off = kallsyms_expand_symbol(off, namebuf, ARRAY_SIZE(namebuf));
+	sym = find_symbol(name);
+	if (sym)
+		return sym->addr;
 
-		if (strcmp(namebuf, name) == 0)
-			return kallsyms_sym_address(i);
-	}
 	return module_kallsyms_lookup_name(name);
 }
 
@@ -743,9 +769,60 @@ static const struct proc_ops kallsyms_proc_ops = {
 	.proc_release = seq_release_private,
 };
 
+static bool __init add_symbol(struct symbol *new)
+{
+	struct rb_node *parent = NULL;
+	struct rb_node **p;
+	struct symbol *sym;
+	int err;
+
+	p = &symbols_root.rb_node;
+
+	while (*p != NULL) {
+		parent = *p;
+		sym = rb_entry(parent, struct symbol, rb_node);
+		err = strcmp(new->name, sym->name);
+		if (err < 0)
+			p = &(*p)->rb_left;
+		else if (err > 0)
+			p = &(*p)->rb_right;
+		else
+			return false;
+	}
+
+	rb_link_node(&new->rb_node, parent, p);
+	rb_insert_color(&new->rb_node, &symbols_root);
+	return true;
+}
+
+static int __init kallsyms_name_search_init(void)
+{
+	bool sym_added = true;
+	struct symbol *sym;
+	unsigned int off;
+	unsigned long i;
+
+	symbol_cachep = KMEM_CACHE(symbol, SLAB_PANIC|SLAB_ACCOUNT);
+
+	for (i = 0, off = 0; i < kallsyms_num_syms; i++) {
+		if (sym_added) {
+			sym = kmem_cache_alloc(symbol_cachep, GFP_KERNEL);
+			if (!sym)
+				return -ENOMEM;
+		}
+		off = kallsyms_expand_symbol(off, sym->name, ARRAY_SIZE(sym->name));
+		sym->addr = kallsyms_sym_address(i);
+		sym_added = add_symbol(sym);
+	}
+
+	if (!sym_added)
+		kmem_cache_free(symbol_cachep, sym);
+	return 0;
+}
+
 static int __init kallsyms_init(void)
 {
 	proc_create("kallsyms", 0444, NULL, &kallsyms_proc_ops);
-	return 0;
+	return kallsyms_name_search_init();
 }
 device_initcall(kallsyms_init);

From patchwork Thu Oct 22 08:21:30 2020
From: Jiri Olsa
Subject: [RFC bpf-next 08/16] bpf: Use delayed link free in bpf_link_put
Date: Thu, 22 Oct 2020 10:21:30 +0200
Message-Id: <20201022082138.2322434-9-jolsa@kernel.org>
X-Patchwork-State: RFC

Move the bpf_link_free call into delayed processing so we don't need to
wait for it when releasing the link. For example, bpf_tracing_link_release
could take a considerable amount of time in the bpf_trampoline_put
function due to a synchronize_rcu_tasks call.
It speeds up the bpftrace release time in the following example:

Before:

 Performance counter stats for './src/bpftrace -ve kfunc:__x64_sys_s* \
   { printf("test\n"); } i:ms:10 { printf("exit\n"); exit();}' (5 runs):

     3,290,457,628      cycles:k               ( +-  0.27% )
       933,581,973      cycles:u               ( +-  0.20% )

             50.25 +- 4.79 seconds time elapsed  ( +-  9.53% )

After:

 Performance counter stats for './src/bpftrace -ve kfunc:__x64_sys_s* \
   { printf("test\n"); } i:ms:10 { printf("exit\n"); exit();}' (5 runs):

     2,535,458,767      cycles:k               ( +-  0.55% )
       940,046,382      cycles:u               ( +-  0.27% )

             33.60 +- 3.27 seconds time elapsed  ( +-  9.73% )

Signed-off-by: Jiri Olsa
---
 kernel/bpf/syscall.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 1110ecd7d1f3..61ef29f9177d 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -2346,12 +2346,8 @@ void bpf_link_put(struct bpf_link *link)
 	if (!atomic64_dec_and_test(&link->refcnt))
 		return;
 
-	if (in_atomic()) {
-		INIT_WORK(&link->work, bpf_link_put_deferred);
-		schedule_work(&link->work);
-	} else {
-		bpf_link_free(link);
-	}
+	INIT_WORK(&link->work, bpf_link_put_deferred);
+	schedule_work(&link->work);
 }
 
 static int bpf_link_release(struct inode *inode, struct file *filp)

From patchwork Thu Oct 22 08:21:31 2020
From: Jiri Olsa
Subject: [RFC bpf-next 09/16] bpf: Add BPF_TRAMPOLINE_BATCH_ATTACH support
Date: Thu, 22 Oct 2020 10:21:31 +0200
Message-Id: <20201022082138.2322434-10-jolsa@kernel.org>
X-Patchwork-State: RFC

Add BPF_TRAMPOLINE_BATCH_ATTACH support, which allows attaching multiple
tracing fentry/fexit programs to trampolines within one syscall.

Currently each tracing program is attached in a separate bpf syscall and,
more importantly, by a separate register_ftrace_direct call, which
registers the trampoline in the ftrace subsystem. We can save some cycles
by simply using its batch variant, register_ftrace_direct_ips.

Before:

 Performance counter stats for './src/bpftrace -ve kfunc:__x64_sys_s* \
   { printf("test\n"); } i:ms:10 { printf("exit\n"); exit();}' (5 runs):

     2,199,433,771      cycles:k               ( +-  0.55% )
       936,105,469      cycles:u               ( +-  0.37% )

             26.48 +- 3.57 seconds time elapsed  ( +- 13.49% )

After:

 Performance counter stats for './src/bpftrace -ve kfunc:__x64_sys_s* \
   { printf("test\n"); } i:ms:10 { printf("exit\n"); exit();}' (5 runs):

     1,456,854,867      cycles:k               ( +-  0.57% )
       937,737,431      cycles:u               ( +-  0.13% )

             12.44 +- 2.98 seconds time elapsed  ( +- 23.95% )

The new BPF_TRAMPOLINE_BATCH_ATTACH syscall command expects the following
data in union bpf_attr:

  struct {
	__aligned_u64	in;
	__aligned_u64	out;
	__u32		count;
  } trampoline_batch;

  in    - pointer to user space array with file descriptors of loaded bpf
	  programs to attach
  out   - pointer to user space array for resulting link descriptors
  count - number of 'in/out' file descriptors

Basically the new code gets programs from the 'in' file descriptors and
attaches them the same way the
current code does, apart from the last step, which registers the probe ip
with the trampoline. That is done at the end with the new
register_ftrace_direct_ips function. The resulting link descriptors are
written to the 'out' array and match the 'in' array file descriptor order.

Signed-off-by: Jiri Olsa
---
 include/linux/bpf.h      | 15 ++++++-
 include/uapi/linux/bpf.h |  7 ++++
 kernel/bpf/syscall.c     | 88 ++++++++++++++++++++++++++++++++++++++--
 kernel/bpf/trampoline.c  | 69 +++++++++++++++++++++++++++----
 4 files changed, 164 insertions(+), 15 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 2b16bf48aab6..d28c7ac3af3f 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -583,6 +583,13 @@ enum bpf_tramp_prog_type {
 	BPF_TRAMP_REPLACE, /* more than MAX */
 };
 
+struct bpf_trampoline_batch {
+	int count;
+	int idx;
+	unsigned long *ips;
+	unsigned long *addrs;
+};
+
 struct bpf_trampoline {
 	/* hlist for trampoline_table */
 	struct hlist_node hlist;
@@ -644,11 +651,14 @@ static __always_inline unsigned int bpf_dispatcher_nop_func(
 	return bpf_func(ctx, insnsi);
 }
 #ifdef CONFIG_BPF_JIT
-int bpf_trampoline_link_prog(struct bpf_prog *prog, struct bpf_trampoline *tr);
+int bpf_trampoline_link_prog(struct bpf_prog *prog, struct bpf_trampoline *tr,
+			     struct bpf_trampoline_batch *batch);
 int bpf_trampoline_unlink_prog(struct bpf_prog *prog, struct bpf_trampoline *tr);
 struct bpf_trampoline *bpf_trampoline_get(u64 key,
 					  struct bpf_attach_target_info *tgt_info);
 void bpf_trampoline_put(struct bpf_trampoline *tr);
+struct bpf_trampoline_batch *bpf_trampoline_batch_alloc(int count);
+void bpf_trampoline_batch_free(struct bpf_trampoline_batch *batch);
 #define BPF_DISPATCHER_INIT(_name) {			\
 	.mutex = __MUTEX_INITIALIZER(_name.mutex),	\
 	.func = &_name##_func,				\
@@ -693,7 +703,8 @@ void bpf_ksym_add(struct bpf_ksym *ksym);
 void bpf_ksym_del(struct bpf_ksym *ksym);
 #else
 static inline int bpf_trampoline_link_prog(struct bpf_prog *prog,
-					   struct bpf_trampoline *tr)
+					   struct bpf_trampoline *tr,
+					   struct bpf_trampoline_batch *batch)
 {
 	return -ENOTSUPP;
 }
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index bf5a99d803e4..04df4d576fd4 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -125,6 +125,7 @@ enum bpf_cmd {
 	BPF_ITER_CREATE,
 	BPF_LINK_DETACH,
 	BPF_PROG_BIND_MAP,
+	BPF_TRAMPOLINE_BATCH_ATTACH,
 };
 
 enum bpf_map_type {
@@ -631,6 +632,12 @@ union bpf_attr {
 		__u32		prog_fd;
 	} raw_tracepoint;
 
+	struct { /* anonymous struct used by BPF_TRAMPOLINE_BATCH_ATTACH */
+		__aligned_u64	in;
+		__aligned_u64	out;
+		__u32		count;
+	} trampoline_batch;
+
 	struct { /* anonymous struct for BPF_BTF_LOAD */
 		__aligned_u64	btf;
 		__aligned_u64	btf_log_buf;
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 61ef29f9177d..e370b37e3e8e 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -2553,7 +2553,8 @@ static const struct bpf_link_ops bpf_tracing_link_lops = {
 
 static int bpf_tracing_prog_attach(struct bpf_prog *prog,
 				   int tgt_prog_fd,
-				   u32 btf_id)
+				   u32 btf_id,
+				   struct bpf_trampoline_batch *batch)
 {
 	struct bpf_link_primer link_primer;
 	struct bpf_prog *tgt_prog = NULL;
@@ -2678,7 +2679,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
 	if (err)
 		goto out_unlock;
 
-	err = bpf_trampoline_link_prog(prog, tr);
+	err = bpf_trampoline_link_prog(prog, tr, batch);
 	if (err) {
 		bpf_link_cleanup(&link_primer);
 		link = NULL;
@@ -2826,7 +2827,7 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
 			tp_name = prog->aux->attach_func_name;
 			break;
 		}
-		return bpf_tracing_prog_attach(prog, 0, 0);
+		return bpf_tracing_prog_attach(prog, 0, 0, NULL);
 	case BPF_PROG_TYPE_RAW_TRACEPOINT:
 	case BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE:
 		if (strncpy_from_user(buf,
@@ -2879,6 +2880,81 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
 	return err;
 }
 
+#define BPF_RAW_TRACEPOINT_OPEN_BATCH_LAST_FIELD trampoline_batch.count
+
+static int bpf_trampoline_batch(const union bpf_attr *attr, int cmd)
+{
+	void __user *uout = u64_to_user_ptr(attr->trampoline_batch.out);
+	void __user *uin = u64_to_user_ptr(attr->trampoline_batch.in);
+	struct bpf_trampoline_batch *batch = NULL;
+	struct bpf_prog *prog;
+	int count, ret, i, fd;
+	u32 *in, *out;
+
+	if (CHECK_ATTR(BPF_RAW_TRACEPOINT_OPEN_BATCH))
+		return -EINVAL;
+
+	if (!uin || !uout)
+		return -EINVAL;
+
+	count = attr->trampoline_batch.count;
+
+	in = kcalloc(count, sizeof(u32), GFP_KERNEL);
+	out = kcalloc(count, sizeof(u32), GFP_KERNEL);
+	if (!in || !out) {
+		kfree(in);
+		kfree(out);
+		return -ENOMEM;
+	}
+
+	ret = copy_from_user(in, uin, count * sizeof(u32));
+	if (ret)
+		goto out_clean;
+
+	/* test read out array */
+	ret = copy_to_user(uout, out, count * sizeof(u32));
+	if (ret)
+		goto out_clean;
+
+	batch = bpf_trampoline_batch_alloc(count);
+	if (!batch)
+		goto out_clean;
+
+	for (i = 0; i < count; i++) {
+		if (cmd == BPF_TRAMPOLINE_BATCH_ATTACH) {
+			prog = bpf_prog_get(in[i]);
+			if (IS_ERR(prog)) {
+				ret = PTR_ERR(prog);
+				goto out_clean;
+			}
+
+			ret = -EINVAL;
+			if (prog->type != BPF_PROG_TYPE_TRACING)
+				goto out_clean;
+			if (prog->type == BPF_PROG_TYPE_TRACING &&
+			    prog->expected_attach_type == BPF_TRACE_RAW_TP)
+				goto out_clean;
+
+			fd = bpf_tracing_prog_attach(prog, 0, 0, batch);
+			if (fd < 0)
+				goto out_clean;
+
+			out[i] = fd;
+		}
+	}
+
+	ret = register_ftrace_direct_ips(batch->ips, batch->addrs, batch->idx);
+	if (!ret)
+		WARN_ON_ONCE(copy_to_user(uout, out, count * sizeof(u32)));
+
+out_clean:
+	/* XXX cleanup partially attached array */
+	bpf_trampoline_batch_free(batch);
+	kfree(in);
+	kfree(out);
+	return ret;
+}
+
 static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
 					     enum bpf_attach_type attach_type)
 {
@@ -4018,7 +4094,8 @@ static int tracing_bpf_link_attach(const union bpf_attr *attr, struct bpf_prog *
 	else if (prog->type == BPF_PROG_TYPE_EXT)
 		return bpf_tracing_prog_attach(prog,
 					       attr->link_create.target_fd,
-					       attr->link_create.target_btf_id);
+					       attr->link_create.target_btf_id,
+					       NULL);
 	return -EINVAL;
 }
 
@@ -4437,6 +4514,9 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz
 	case BPF_RAW_TRACEPOINT_OPEN:
 		err = bpf_raw_tracepoint_open(&attr);
 		break;
+	case BPF_TRAMPOLINE_BATCH_ATTACH:
+		err = bpf_trampoline_batch(&attr, cmd);
+		break;
 	case BPF_BTF_LOAD:
 		err = bpf_btf_load(&attr);
 		break;
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 35c5887d82ff..3383644eccc8 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -107,6 +107,51 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 	return tr;
 }
 
+static int bpf_trampoline_batch_add(struct bpf_trampoline_batch *batch,
+				    unsigned long ip, unsigned long addr)
+{
+	int idx = batch->idx;
+
+	if (idx >= batch->count)
+		return -EINVAL;
+
+	batch->ips[idx] = ip;
+	batch->addrs[idx] = addr;
+	batch->idx++;
+	return 0;
+}
+
+struct bpf_trampoline_batch *bpf_trampoline_batch_alloc(int count)
+{
+	struct bpf_trampoline_batch *batch;
+
+	batch = kmalloc(sizeof(*batch), GFP_KERNEL);
+	if (!batch)
+		return NULL;
+
+	batch->ips = kcalloc(count, sizeof(batch->ips[0]), GFP_KERNEL);
+	batch->addrs = kcalloc(count, sizeof(batch->addrs[0]), GFP_KERNEL);
+	if (!batch->ips || !batch->addrs) {
+		kfree(batch->ips);
+		kfree(batch->addrs);
+		kfree(batch);
+		return NULL;
+	}
+
+	batch->count = count;
+	batch->idx = 0;
+	return batch;
+}
+
+void bpf_trampoline_batch_free(struct bpf_trampoline_batch *batch)
+{
+	if (!batch)
+		return;
+	kfree(batch->ips);
+	kfree(batch->addrs);
+	kfree(batch);
+}
+
 static int is_ftrace_location(void *ip)
 {
 	long addr;
@@ -144,7 +189,8 @@ static int modify_fentry(struct bpf_trampoline *tr, void *old_addr, void *new_ad
 }
 
 /* first time registering */
-static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
+static int register_fentry(struct bpf_trampoline *tr, void *new_addr,
+			   struct bpf_trampoline_batch *batch)
 {
 	void *ip = tr->func.addr;
 	int ret;
@@ -154,9 +200,12 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
 		return ret;
 	tr->func.ftrace_managed = ret;
 
-	if (tr->func.ftrace_managed)
-		ret = register_ftrace_direct((long)ip, (long)new_addr);
-	else
+	if (tr->func.ftrace_managed) {
+		if (batch)
+			ret = bpf_trampoline_batch_add(batch, (long)ip, (long)new_addr);
+		else
+			ret = register_ftrace_direct((long)ip, (long)new_addr);
+	} else
 		ret = bpf_arch_text_poke(ip, BPF_MOD_CALL, NULL, new_addr);
 	return ret;
 }
@@ -185,7 +234,8 @@ bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total)
 	return tprogs;
 }
 
-static int bpf_trampoline_update(struct bpf_trampoline *tr)
+static int bpf_trampoline_update(struct bpf_trampoline *tr,
+				 struct bpf_trampoline_batch *batch)
 {
 	void *old_image = tr->image + ((tr->selector + 1) & 1) * PAGE_SIZE/2;
 	void *new_image = tr->image + (tr->selector & 1) * PAGE_SIZE/2;
@@ -230,7 +280,7 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr)
 		err = modify_fentry(tr, old_image, new_image);
 	else
 		/* first time registering */
-		err = register_fentry(tr, new_image);
+		err = register_fentry(tr, new_image, batch);
 	if (err)
 		goto out;
 	tr->selector++;
@@ -261,7 +311,8 @@ static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(struct bpf_prog *prog)
 	}
 }
 
-int bpf_trampoline_link_prog(struct bpf_prog *prog, struct bpf_trampoline *tr)
+int bpf_trampoline_link_prog(struct bpf_prog *prog, struct bpf_trampoline *tr,
+			     struct bpf_trampoline_batch *batch)
 {
 	enum bpf_tramp_prog_type kind;
 	int err = 0;
@@ -299,7 +350,7 @@ int bpf_trampoline_link_prog(struct bpf_prog *prog, struct bpf_trampoline *tr)
 	}
 	hlist_add_head(&prog->aux->tramp_hlist, &tr->progs_hlist[kind]);
 	tr->progs_cnt[kind]++;
-	err = bpf_trampoline_update(tr);
+	err = bpf_trampoline_update(tr, batch);
 	if (err) {
 		hlist_del(&prog->aux->tramp_hlist);
 		tr->progs_cnt[kind]--;
@@ -326,7 +377,7 @@ int bpf_trampoline_unlink_prog(struct bpf_prog *prog, struct bpf_trampoline *tr)
 	}
 	hlist_del(&prog->aux->tramp_hlist);
 	tr->progs_cnt[kind]--;
-	err = bpf_trampoline_update(tr);
+	err = bpf_trampoline_update(tr, NULL);
 out:
 	mutex_unlock(&tr->mutex);
 	return err;

From patchwork Thu Oct 22 08:21:32 2020
From: Jiri Olsa
Subject: [RFC bpf-next 10/16] bpf: Add BPF_TRAMPOLINE_BATCH_DETACH support
Date: Thu, 22 Oct 2020 10:21:32 +0200
Message-Id: <20201022082138.2322434-11-jolsa@kernel.org>
X-Patchwork-State: RFC

Add BPF_TRAMPOLINE_BATCH_DETACH support, which allows detaching multiple
tracing fentry/fexit programs from trampolines within one syscall.
The new BPF_TRAMPOLINE_BATCH_DETACH syscall command expects the following data in union bpf_attr:

  struct {
          __aligned_u64   in;
          __aligned_u64   out;
          __u32           count;
  } trampoline_batch;

  in    - pointer to user space array with link descriptors of the
          attached bpf programs to detach
  out   - pointer to user space array for the resulting error codes
  count - number of 'in/out' descriptors

Basically the new code gets the programs from the 'in' link descriptors and detaches them the same way the current code does, apart from the last step that unregisters the probe ip from the trampoline. That is done at the end with the new unregister_ftrace_direct_ips function. The resulting error codes are written to the 'out' array and match the 'in' array link descriptors order.

Signed-off-by: Jiri Olsa
---
 include/linux/bpf.h      |  3 ++-
 include/uapi/linux/bpf.h |  3 ++-
 kernel/bpf/syscall.c     | 28 ++++++++++++++++++++++++++--
 kernel/bpf/trampoline.c  | 25 ++++++++++++++++---------
 4 files changed, 46 insertions(+), 13 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h index d28c7ac3af3f..828a4e88224f 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -653,7 +653,8 @@ static __always_inline unsigned int bpf_dispatcher_nop_func( #ifdef CONFIG_BPF_JIT int bpf_trampoline_link_prog(struct bpf_prog *prog, struct bpf_trampoline *tr, struct bpf_trampoline_batch *batch); -int bpf_trampoline_unlink_prog(struct bpf_prog *prog, struct bpf_trampoline *tr); +int bpf_trampoline_unlink_prog(struct bpf_prog *prog, struct bpf_trampoline *tr, + struct bpf_trampoline_batch *batch); struct bpf_trampoline *bpf_trampoline_get(u64 key, struct bpf_attach_target_info *tgt_info); void bpf_trampoline_put(struct bpf_trampoline *tr); diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 04df4d576fd4..b6a08aa49aa4 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -126,6 +126,7 @@ enum bpf_cmd { BPF_LINK_DETACH, BPF_PROG_BIND_MAP, BPF_TRAMPOLINE_BATCH_ATTACH, + BPF_TRAMPOLINE_BATCH_DETACH, }; enum
bpf_map_type { @@ -632,7 +633,7 @@ union bpf_attr { __u32 prog_fd; } raw_tracepoint; - struct { /* anonymous struct used by BPF_TRAMPOLINE_BATCH_ATTACH */ + struct { /* anonymous struct used by BPF_TRAMPOLINE_BATCH_* */ __aligned_u64 in; __aligned_u64 out; __u32 count; diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index e370b37e3e8e..19fb608546c0 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -2505,7 +2505,7 @@ static void bpf_tracing_link_release(struct bpf_link *link) container_of(link, struct bpf_tracing_link, link); WARN_ON_ONCE(bpf_trampoline_unlink_prog(link->prog, - tr_link->trampoline)); + tr_link->trampoline, NULL)); bpf_trampoline_put(tr_link->trampoline); @@ -2940,10 +2940,33 @@ static int bpf_trampoline_batch(const union bpf_attr *attr, int cmd) goto out_clean; out[i] = fd; + } else { + struct bpf_tracing_link *tr_link; + struct bpf_link *link; + + link = bpf_link_get_from_fd(in[i]); + if (IS_ERR(link)) { + ret = PTR_ERR(link); + goto out_clean; + } + + if (link->type != BPF_LINK_TYPE_TRACING) { + ret = -EINVAL; + bpf_link_put(link); + goto out_clean; + } + + tr_link = container_of(link, struct bpf_tracing_link, link); + bpf_trampoline_unlink_prog(link->prog, tr_link->trampoline, batch); + bpf_link_put(link); } } - ret = register_ftrace_direct_ips(batch->ips, batch->addrs, batch->idx); + if (cmd == BPF_TRAMPOLINE_BATCH_ATTACH) + ret = register_ftrace_direct_ips(batch->ips, batch->addrs, batch->idx); + else + ret = unregister_ftrace_direct_ips(batch->ips, batch->addrs, batch->idx); + if (!ret) WARN_ON_ONCE(copy_to_user(uout, out, count * sizeof(u32))); @@ -4515,6 +4538,7 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz err = bpf_raw_tracepoint_open(&attr); break; case BPF_TRAMPOLINE_BATCH_ATTACH: + case BPF_TRAMPOLINE_BATCH_DETACH: err = bpf_trampoline_batch(&attr, cmd); break; case BPF_BTF_LOAD: diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index 3383644eccc8..cdad87461e5d 
100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -164,14 +164,18 @@ static int is_ftrace_location(void *ip) return 1; } -static int unregister_fentry(struct bpf_trampoline *tr, void *old_addr) +static int unregister_fentry(struct bpf_trampoline *tr, void *old_addr, + struct bpf_trampoline_batch *batch) { void *ip = tr->func.addr; int ret; - if (tr->func.ftrace_managed) - ret = unregister_ftrace_direct((long)ip, (long)old_addr); - else + if (tr->func.ftrace_managed) { + if (batch) + ret = bpf_trampoline_batch_add(batch, (long)ip, (long)old_addr); + else + ret = unregister_ftrace_direct((long)ip, (long)old_addr); + } else ret = bpf_arch_text_poke(ip, BPF_MOD_CALL, old_addr, NULL); return ret; } @@ -248,7 +252,7 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, return PTR_ERR(tprogs); if (total == 0) { - err = unregister_fentry(tr, old_image); + err = unregister_fentry(tr, old_image, batch); tr->selector = 0; goto out; } @@ -361,13 +365,16 @@ int bpf_trampoline_link_prog(struct bpf_prog *prog, struct bpf_trampoline *tr, } /* bpf_trampoline_unlink_prog() should never fail. 
*/ -int bpf_trampoline_unlink_prog(struct bpf_prog *prog, struct bpf_trampoline *tr) +int bpf_trampoline_unlink_prog(struct bpf_prog *prog, struct bpf_trampoline *tr, + struct bpf_trampoline_batch *batch) { enum bpf_tramp_prog_type kind; - int err; + int err = 0; kind = bpf_attach_type_to_tramp(prog); mutex_lock(&tr->mutex); + if (hlist_unhashed(&prog->aux->tramp_hlist)) + goto out; if (kind == BPF_TRAMP_REPLACE) { WARN_ON_ONCE(!tr->extension_prog); err = bpf_arch_text_poke(tr->func.addr, BPF_MOD_JUMP, @@ -375,9 +382,9 @@ int bpf_trampoline_unlink_prog(struct bpf_prog *prog, struct bpf_trampoline *tr) tr->extension_prog = NULL; goto out; } - hlist_del(&prog->aux->tramp_hlist); + hlist_del_init(&prog->aux->tramp_hlist); tr->progs_cnt[kind]--; - err = bpf_trampoline_update(tr, NULL); + err = bpf_trampoline_update(tr, batch); out: mutex_unlock(&tr->mutex); return err;
From patchwork Thu Oct 22 08:21:33 2020
From: Jiri Olsa
Subject: [RFC bpf-next 11/16] bpf: Sync uapi bpf.h to tools
Date: Thu, 22 Oct 2020 10:21:33 +0200
Message-Id: <20201022082138.2322434-12-jolsa@kernel.org>
In-Reply-To: <20201022082138.2322434-1-jolsa@kernel.org>
X-Patchwork-State: RFC

Sync the tools uapi bpf.h with the trampoline batch attach changes.

Signed-off-by: Jiri Olsa
---
 tools/include/uapi/linux/bpf.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index bf5a99d803e4..b6a08aa49aa4 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -125,6 +125,8 @@ enum bpf_cmd { BPF_ITER_CREATE, BPF_LINK_DETACH, BPF_PROG_BIND_MAP, + BPF_TRAMPOLINE_BATCH_ATTACH, + BPF_TRAMPOLINE_BATCH_DETACH, }; enum bpf_map_type { @@ -631,6 +633,12 @@ union bpf_attr { __u32 prog_fd; } raw_tracepoint; + struct { /* anonymous struct used by BPF_TRAMPOLINE_BATCH_* */ + __aligned_u64 in; + __aligned_u64 out; + __u32 count; + } trampoline_batch; + struct { /* anonymous struct for BPF_BTF_LOAD */ __aligned_u64 btf; __aligned_u64 btf_log_buf;
From patchwork Thu Oct 22 08:21:34 2020
From: Jiri Olsa
Subject: [RFC bpf-next 12/16] bpf: Move synchronize_rcu_mult for batch processing (NOT TO BE MERGED)
Date: Thu, 22 Oct 2020 10:21:34 +0200
Message-Id: <20201022082138.2322434-13-jolsa@kernel.org>
In-Reply-To: <20201022082138.2322434-1-jolsa@kernel.org>
X-Patchwork-State: RFC

I noticed that some of the profiled workloads did not spend more cycles, but took more time to finish than the current code. I tracked it down to the synchronize_rcu_mult call in bpf_trampoline_update; when I called it just once for batch mode, it got faster.

The current processing when attaching a program is:

  for each program:
    bpf(BPF_RAW_TRACEPOINT_OPEN)
      bpf_tracing_prog_attach
        bpf_trampoline_link_prog
          bpf_trampoline_update
            synchronize_rcu_mult
            register_ftrace_direct

With this change, synchronize_rcu_mult is called just once:

  bpf(BPF_TRAMPOLINE_BATCH_ATTACH)
    for each program:
      bpf_tracing_prog_attach
        bpf_trampoline_link_prog
          bpf_trampoline_update
    synchronize_rcu_mult
    register_ftrace_direct_ips

I'm not sure this doesn't break stuff, because I don't follow the rcu code that much ;-) However, the stats are nicer now:

Before:

  Performance counter stats for './test_progs -t attach_test' (5 runs):

      37,410,887      cycles:k    ( +- 0.98% )
      70,062,158      cycles:u    ( +- 0.39% )

      26.80 +- 4.10 seconds time elapsed ( +- 15.31% )

After:

  Performance counter stats for './test_progs -t attach_test' (5 runs):

      36,812,432      cycles:k    ( +- 2.52% )
      69,907,191      cycles:u    ( +- 0.38% )

      15.04 +- 2.94 seconds time elapsed ( +- 19.54% )

Signed-off-by: Jiri Olsa
---
 kernel/bpf/syscall.c    | 3 +++
 kernel/bpf/trampoline.c | 3 ++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 19fb608546c0..b315803c34d3 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -31,6 +31,7 @@ #include #include #include +#include
#define IS_FD_ARRAY(map) ((map)->map_type == BPF_MAP_TYPE_PERF_EVENT_ARRAY || \ (map)->map_type == BPF_MAP_TYPE_CGROUP_ARRAY || \ @@ -2920,6 +2921,8 @@ static int bpf_trampoline_batch(const union bpf_attr *attr, int cmd) if (!batch) goto out_clean; + synchronize_rcu_mult(call_rcu_tasks, call_rcu_tasks_trace); + for (i = 0; i < count; i++) { if (cmd == BPF_TRAMPOLINE_BATCH_ATTACH) { prog = bpf_prog_get(in[i]); diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index cdad87461e5d..0d5e4c5860a9 100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -271,7 +271,8 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, * programs finish executing. * Wait for these two grace periods together. */ - synchronize_rcu_mult(call_rcu_tasks, call_rcu_tasks_trace); + if (!batch) + synchronize_rcu_mult(call_rcu_tasks, call_rcu_tasks_trace); err = arch_prepare_bpf_trampoline(new_image, new_image + PAGE_SIZE / 2, &tr->func.model, flags, tprogs,
From patchwork Thu Oct 22 08:21:35 2020
From: Jiri Olsa
Subject: [RFC bpf-next 13/16] libbpf: Add trampoline batch attach support
Date: Thu, 22 Oct 2020 10:21:35 +0200
Message-Id: <20201022082138.2322434-14-jolsa@kernel.org>
In-Reply-To: <20201022082138.2322434-1-jolsa@kernel.org>
X-Patchwork-State: RFC

Adding trampoline batch attach support so it's possible to use batch mode to attach tracing programs.

Adding a trampoline_attach_batch bool to struct bpf_object_open_opts. When set to true, bpf_object__attach_skeleton will try to attach all tracing programs via batch mode.

Signed-off-by: Jiri Olsa
---
 tools/lib/bpf/bpf.c      | 12 +++++++
 tools/lib/bpf/bpf.h      |  1 +
 tools/lib/bpf/libbpf.c   | 76 +++++++++++++++++++++++++++++++++++++++-
 tools/lib/bpf/libbpf.h   |  5 ++-
 tools/lib/bpf/libbpf.map |  1 +
 5 files changed, 93 insertions(+), 2 deletions(-)

diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c index d27e34133973..21fffff5e237 100644 --- a/tools/lib/bpf/bpf.c +++ b/tools/lib/bpf/bpf.c @@ -858,6 +858,18 @@ int bpf_raw_tracepoint_open(const char *name, int prog_fd) return sys_bpf(BPF_RAW_TRACEPOINT_OPEN, &attr, sizeof(attr)); } +int bpf_trampoline_batch_attach(int *ifds, int *ofds, int count) +{ + union bpf_attr attr; + + memset(&attr, 0, sizeof(attr)); + attr.trampoline_batch.in = ptr_to_u64(ifds); + attr.trampoline_batch.out = ptr_to_u64(ofds); + attr.trampoline_batch.count = count; + + return sys_bpf(BPF_TRAMPOLINE_BATCH_ATTACH, &attr, sizeof(attr)); +} + int bpf_load_btf(const void *btf, __u32 btf_size, char *log_buf, __u32 log_buf_size, bool do_log) { diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h index 875dde20d56e..ba3b0b6e3cb0 100644 --- a/tools/lib/bpf/bpf.h +++ b/tools/lib/bpf/bpf.h @@ -235,6 +235,7 @@ LIBBPF_API int bpf_prog_query(int target_fd, enum bpf_attach_type type, __u32 query_flags, __u32 *attach_flags, __u32 *prog_ids, __u32 *prog_cnt); LIBBPF_API int bpf_raw_tracepoint_open(const char *name, int prog_fd); +LIBBPF_API int
bpf_trampoline_batch_attach(int *ifds, int *ofds, int count); LIBBPF_API int bpf_load_btf(const void *btf, __u32 btf_size, char *log_buf, __u32 log_buf_size, bool do_log); LIBBPF_API int bpf_task_fd_query(int pid, int fd, __u32 flags, char *buf, diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 313034117070..584da3b401ac 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -421,6 +421,7 @@ struct bpf_object { bool loaded; bool has_subcalls; + bool trampoline_attach_batch; /* * Information when doing elf related work. Only valid if fd @@ -6907,6 +6908,9 @@ __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz, return ERR_PTR(-ENOMEM); } + obj->trampoline_attach_batch = OPTS_GET(opts, trampoline_attach_batch, + false); + err = bpf_object__elf_init(obj); err = err ? : bpf_object__check_endianness(obj); err = err ? : bpf_object__elf_collect(obj); @@ -10811,9 +10815,75 @@ int bpf_object__load_skeleton(struct bpf_object_skeleton *s) return 0; } +static bool is_trampoline(const struct bpf_program *prog) +{ + return prog->type == BPF_PROG_TYPE_TRACING && + (prog->expected_attach_type == BPF_TRACE_FENTRY || + prog->expected_attach_type == BPF_TRACE_FEXIT); +} + +static int attach_trace_batch(struct bpf_object_skeleton *s) +{ + int i, prog_fd, ret = -ENOMEM; + int *in_fds, *out_fds, cnt; + + in_fds = calloc(s->prog_cnt, sizeof(in_fds[0])); + out_fds = calloc(s->prog_cnt, sizeof(out_fds[0])); + if (!in_fds || !out_fds) + goto out_clean; + + ret = -EINVAL; + for (cnt = 0, i = 0; i < s->prog_cnt; i++) { + struct bpf_program *prog = *s->progs[i].prog; + + if (!is_trampoline(prog)) + continue; + + prog_fd = bpf_program__fd(prog); + if (prog_fd < 0) { + pr_warn("prog '%s': can't attach before loaded\n", prog->name); + goto out_clean; + } + in_fds[cnt++] = prog_fd; + } + + ret = bpf_trampoline_batch_attach(in_fds, out_fds, cnt); + if (ret) + goto out_clean; + + for (cnt = 0, i = 0; i < s->prog_cnt; i++) { + struct bpf_program 
*prog = *s->progs[i].prog; + struct bpf_link **linkp = s->progs[i].link; + struct bpf_link *link; + + if (!is_trampoline(prog)) + continue; + + link = calloc(1, sizeof(*link)); + if (!link) + goto out_clean; + + link->detach = &bpf_link__detach_fd; + link->fd = out_fds[cnt++]; + *linkp = link; + } + +out_clean: + free(in_fds); + free(out_fds); + return ret; +} + int bpf_object__attach_skeleton(struct bpf_object_skeleton *s) { - int i; + struct bpf_object *obj = *s->obj; + int i, err; + + if (obj->trampoline_attach_batch) { + err = attach_trace_batch(s); + if (err) + return err; + } for (i = 0; i < s->prog_cnt; i++) { struct bpf_program *prog = *s->progs[i].prog; @@ -10823,6 +10893,10 @@ int bpf_object__attach_skeleton(struct bpf_object_skeleton *s) if (!prog->load) continue; + /* Program was attached via batch mode. */ + if (obj->trampoline_attach_batch && is_trampoline(prog)) + continue; + sec_def = find_sec_def(prog->sec_name); if (!sec_def || !sec_def->attach_fn) continue; diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h index 6909ee81113a..66f8e78aa9f8 100644 --- a/tools/lib/bpf/libbpf.h +++ b/tools/lib/bpf/libbpf.h @@ -93,8 +93,11 @@ struct bpf_object_open_opts { * system Kconfig for CONFIG_xxx externs. */ const char *kconfig; + /* Attach trampolines via batch mode. 
+ */ + bool trampoline_attach_batch; }; -#define bpf_object_open_opts__last_field kconfig +#define bpf_object_open_opts__last_field trampoline_attach_batch LIBBPF_API struct bpf_object *bpf_object__open(const char *path); LIBBPF_API struct bpf_object * diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map index 4ebfadf45b47..5a5ce921956d 100644 --- a/tools/lib/bpf/libbpf.map +++ b/tools/lib/bpf/libbpf.map @@ -336,4 +336,5 @@ LIBBPF_0.2.0 { perf_buffer__epoll_fd; perf_buffer__consume_buffer; xsk_socket__create_shared; + bpf_trampoline_batch_attach; } LIBBPF_0.1.0;
From patchwork Thu Oct 22 08:21:36 2020
From: Jiri Olsa
Subject: [RFC bpf-next 14/16] libbpf: Add trampoline batch detach support
Date: Thu, 22 Oct 2020 10:21:36 +0200
Message-Id: <20201022082138.2322434-15-jolsa@kernel.org>
In-Reply-To: <20201022082138.2322434-1-jolsa@kernel.org>
X-Patchwork-State: RFC

Adding trampoline batch detach support so it's possible to use batch mode to detach tracing programs.
Adding trampoline_attach_batch bool to struct bpf_object_open_opts. When set to true the bpf_object__detach_skeleton will try to detach all tracing programs via batch mode. Signed-off-by: Jiri Olsa --- tools/lib/bpf/bpf.c | 16 +++++++++++-- tools/lib/bpf/bpf.h | 1 + tools/lib/bpf/libbpf.c | 50 ++++++++++++++++++++++++++++++++++++++++ tools/lib/bpf/libbpf.map | 1 + 4 files changed, 66 insertions(+), 2 deletions(-) diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c index 21fffff5e237..9af13e511851 100644 --- a/tools/lib/bpf/bpf.c +++ b/tools/lib/bpf/bpf.c @@ -858,7 +858,7 @@ int bpf_raw_tracepoint_open(const char *name, int prog_fd) return sys_bpf(BPF_RAW_TRACEPOINT_OPEN, &attr, sizeof(attr)); } -int bpf_trampoline_batch_attach(int *ifds, int *ofds, int count) +static int bpf_trampoline_batch(int cmd, int *ifds, int *ofds, int count) { union bpf_attr attr; @@ -867,7 +867,19 @@ int bpf_trampoline_batch_attach(int *ifds, int *ofds, int count) attr.trampoline_batch.out = ptr_to_u64(ofds); attr.trampoline_batch.count = count; - return sys_bpf(BPF_TRAMPOLINE_BATCH_ATTACH, &attr, sizeof(attr)); + return sys_bpf(cmd, &attr, sizeof(attr)); +} + +int bpf_trampoline_batch_attach(int *ifds, int *ofds, int count) +{ + return bpf_trampoline_batch(BPF_TRAMPOLINE_BATCH_ATTACH, + ifds, ofds, count); +} + +int bpf_trampoline_batch_detach(int *ifds, int *ofds, int count) +{ + return bpf_trampoline_batch(BPF_TRAMPOLINE_BATCH_DETACH, + ifds, ofds, count); } int bpf_load_btf(const void *btf, __u32 btf_size, char *log_buf, __u32 log_buf_size, diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h index ba3b0b6e3cb0..c6fb5977de79 100644 --- a/tools/lib/bpf/bpf.h +++ b/tools/lib/bpf/bpf.h @@ -236,6 +236,7 @@ LIBBPF_API int bpf_prog_query(int target_fd, enum bpf_attach_type type, __u32 *prog_ids, __u32 *prog_cnt); LIBBPF_API int bpf_raw_tracepoint_open(const char *name, int prog_fd); LIBBPF_API int bpf_trampoline_batch_attach(int *ifds, int *ofds, int count); +LIBBPF_API int 
bpf_trampoline_batch_detach(int *ifds, int *ofds, int count); LIBBPF_API int bpf_load_btf(const void *btf, __u32 btf_size, char *log_buf, __u32 log_buf_size, bool do_log); LIBBPF_API int bpf_task_fd_query(int pid, int fd, __u32 flags, char *buf, diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 584da3b401ac..02e9e8279aa7 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -10874,6 +10874,47 @@ static int attach_trace_batch(struct bpf_object_skeleton *s) return ret; } +static int detach_trace_batch(struct bpf_object_skeleton *s) +{ + int *in_fds, *out_fds, cnt; + int i, ret = -ENOMEM; + + in_fds = calloc(s->prog_cnt, sizeof(in_fds[0])); + out_fds = calloc(s->prog_cnt, sizeof(out_fds[0])); + if (!in_fds || !out_fds) + goto out_clean; + + for (cnt = 0, i = 0; i < s->prog_cnt; i++) { + struct bpf_program *prog = *s->progs[i].prog; + struct bpf_link **link = s->progs[i].link; + + if (!is_trampoline(prog)) + continue; + in_fds[cnt++] = (*link)->fd; + } + + ret = bpf_trampoline_batch_detach(in_fds, out_fds, cnt); + if (ret) + goto out_clean; + + for (i = 0; i < s->prog_cnt; i++) { + struct bpf_program *prog = *s->progs[i].prog; + struct bpf_link **link = s->progs[i].link; + + if (!is_trampoline(prog)) + continue; + + bpf_link__disconnect(*link); + bpf_link__destroy(*link); + *link = NULL; + } + +out_clean: + free(in_fds); + free(out_fds); + return ret; +} + int bpf_object__attach_skeleton(struct bpf_object_skeleton *s) { struct bpf_object *obj = *s->obj; @@ -10914,11 +10955,20 @@ int bpf_object__attach_skeleton(struct bpf_object_skeleton *s) void bpf_object__detach_skeleton(struct bpf_object_skeleton *s) { + struct bpf_object *obj = *s->obj; int i; + if (obj->trampoline_attach_batch) + detach_trace_batch(s); + for (i = 0; i < s->prog_cnt; i++) { + struct bpf_program *prog = *s->progs[i].prog; struct bpf_link **link = s->progs[i].link; + /* Program was attached via batch mode. 
*/
+		if (obj->trampoline_attach_batch && is_trampoline(prog))
+			continue;
+
 		bpf_link__destroy(*link);
 		*link = NULL;
 	}

diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 5a5ce921956d..cfe0b3d52172 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -337,4 +337,5 @@ LIBBPF_0.2.0 {
 		perf_buffer__consume_buffer;
 		xsk_socket__create_shared;
 		bpf_trampoline_batch_attach;
+		bpf_trampoline_batch_detach;
 } LIBBPF_0.1.0;

From patchwork Thu Oct 22 08:21:37 2020
From: Jiri Olsa <jolsa@kernel.org>
X-Patchwork-Id: 11850541
Subject: [RFC bpf-next 15/16] selftests/bpf: Add trampoline batch test
Date: Thu, 22 Oct 2020 10:21:37 +0200
Message-Id: <20201022082138.2322434-16-jolsa@kernel.org>
In-Reply-To: <20201022082138.2322434-1-jolsa@kernel.org>

Adding a simple test that loads fentry tracing programs on the
bpf_fentry_test* functions and uses the trampoline_attach_batch bool
in struct bpf_object_open_opts to attach them in batch mode.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../bpf/prog_tests/trampoline_batch.c         | 45 +++++++++++
 .../bpf/progs/trampoline_batch_test.c         | 75 +++++++++++++++++++
 2 files changed, 120 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/trampoline_batch.c
 create mode 100644 tools/testing/selftests/bpf/progs/trampoline_batch_test.c

diff --git a/tools/testing/selftests/bpf/prog_tests/trampoline_batch.c b/tools/testing/selftests/bpf/prog_tests/trampoline_batch.c
new file mode 100644
index 000000000000..98929ac0bef6
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/trampoline_batch.c
@@ -0,0 +1,45 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019 Facebook */
+#include <test_progs.h>
+#include "trampoline_batch_test.skel.h"
+
+void test_trampoline_batch(void)
+{
+	DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts);
+	struct trampoline_batch_test *skel = NULL;
+	int err, prog_fd, i;
+	__u32 duration = 0, retval;
+	__u64 *result;
+
+	opts.trampoline_attach_batch = true;
+
+	skel = trampoline_batch_test__open_opts(&opts);
+	if (CHECK(!skel, "skel_open", "open failed\n"))
+		goto cleanup;
+
+	err = trampoline_batch_test__load(skel);
+	if (CHECK(err, "skel_load", "load failed: %d\n", err))
+		goto cleanup;
+
+	err = trampoline_batch_test__attach(skel);
+	if (CHECK(err, "skel_attach", "attach failed: %d\n", err))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(skel->progs.test1);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	CHECK(err || retval, "test_run",
+	      "err %d errno %d retval %d duration %d\n",
+	      err, errno, retval, duration);
+
+	result = (__u64 *)skel->bss;
+	for (i = 0; i < 6; i++) {
+		if (CHECK(result[i] != 1, "result",
+			  "trampoline_batch_test fentry_test%d failed err %lld\n",
+			  i + 1, result[i]))
+			goto cleanup;
+	}
+
+cleanup:
+	trampoline_batch_test__destroy(skel);
+}

diff --git a/tools/testing/selftests/bpf/progs/trampoline_batch_test.c b/tools/testing/selftests/bpf/progs/trampoline_batch_test.c
new file mode 100644
index 000000000000..ff93799037f0
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/trampoline_batch_test.c
@@ -0,0 +1,75 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019 Facebook */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+__u64 test1_result = 0;
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(test1, int a)
+{
+	test1_result = 1;
+	return 0;
+}
+
+__u64 test2_result = 0;
+SEC("fentry/bpf_fentry_test2")
+int BPF_PROG(test2, int a, __u64 b)
+{
+	test2_result = 1;
+	return 0;
+}
+
+__u64 test3_result = 0;
+SEC("fentry/bpf_fentry_test3")
+int BPF_PROG(test3, char a, int b, __u64 c)
+{
+	test3_result = 1;
+	return 0;
+}
+
+__u64 test4_result = 0;
+SEC("fentry/bpf_fentry_test4")
+int BPF_PROG(test4, void *a, char b, int c, __u64 d)
+{
+	test4_result = 1;
+	return 0;
+}
+
+__u64 test5_result = 0;
+SEC("fentry/bpf_fentry_test5")
+int BPF_PROG(test5, __u64 a, void *b, short c, int d, __u64 e)
+{
+	test5_result = 1;
+	return 0;
+}
+
+__u64 test6_result = 0;
+SEC("fentry/bpf_fentry_test6")
+int BPF_PROG(test6, __u64 a, void *b, short c, int d, void *e, __u64 f)
+{
+	test6_result = 1;
+	return 0;
+}
+
+struct bpf_fentry_test_t {
+	struct bpf_fentry_test_t *a;
+};
+
+__u64 test7_result = 0;
+SEC("fentry/bpf_fentry_test7")
+int BPF_PROG(test7, struct bpf_fentry_test_t *arg)
+{
+	test7_result = 1;
+	return 0;
+}
+
+__u64 test8_result = 0;
+SEC("fentry/bpf_fentry_test8")
+int BPF_PROG(test8, struct bpf_fentry_test_t *arg)
+{
+	test8_result = 1;
+	return 0;
+}

From patchwork Thu Oct 22 08:21:38 2020
From: Jiri Olsa <jolsa@kernel.org>
X-Patchwork-Id: 11850543
Subject: [RFC bpf-next 16/16] selftests/bpf: Add attach batch test (NOT TO BE MERGED)
Date: Thu, 22 Oct 2020 10:21:38 +0200
Message-Id: <20201022082138.2322434-17-jolsa@kernel.org>
In-Reply-To: <20201022082138.2322434-1-jolsa@kernel.org>

Adding a test that attaches to 50 known functions, which are also
added to the kernel. This test is meant only as a quick check of
attach times; it could probably be reworked into a mergeable form,
but at the moment it fits the need.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 net/bpf/test_run.c                              | 55 ++++++++++++++++
 .../selftests/bpf/prog_tests/attach_test.c      | 27 ++++++++
 .../testing/selftests/bpf/progs/attach_test.c   | 62 +++++++++++++++++++
 3 files changed, 144 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/attach_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/attach_test.c

diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index c1c30a9f76f3..8fc6d27fc07f 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -167,6 +167,61 @@ int noinline bpf_modify_return_test(int a, int *b)
 	*b += 1;
 	return a + *b;
 }
+
+#define ATTACH_TEST(__n) \
+	int noinline __PASTE(bpf_attach_test, __n)(void) { return 0; }
+
+ATTACH_TEST(0)
+ATTACH_TEST(1)
+ATTACH_TEST(2)
+ATTACH_TEST(3)
+ATTACH_TEST(4)
+ATTACH_TEST(5)
+ATTACH_TEST(6)
+ATTACH_TEST(7)
+ATTACH_TEST(8)
+ATTACH_TEST(9)
+ATTACH_TEST(10)
+ATTACH_TEST(11)
+ATTACH_TEST(12)
+ATTACH_TEST(13)
+ATTACH_TEST(14)
+ATTACH_TEST(15)
+ATTACH_TEST(16)
+ATTACH_TEST(17)
+ATTACH_TEST(18)
+ATTACH_TEST(19)
+ATTACH_TEST(20)
+ATTACH_TEST(21)
+ATTACH_TEST(22)
+ATTACH_TEST(23)
+ATTACH_TEST(24)
+ATTACH_TEST(25)
+ATTACH_TEST(26)
+ATTACH_TEST(27)
+ATTACH_TEST(28)
+ATTACH_TEST(29)
+ATTACH_TEST(30)
+ATTACH_TEST(31)
+ATTACH_TEST(32)
+ATTACH_TEST(33)
+ATTACH_TEST(34)
+ATTACH_TEST(35)
+ATTACH_TEST(36)
+ATTACH_TEST(37)
+ATTACH_TEST(38)
+ATTACH_TEST(39)
+ATTACH_TEST(40)
+ATTACH_TEST(41)
+ATTACH_TEST(42)
+ATTACH_TEST(43)
+ATTACH_TEST(44)
+ATTACH_TEST(45)
+ATTACH_TEST(46)
+ATTACH_TEST(47)
+ATTACH_TEST(48)
+ATTACH_TEST(49)
+
 __diag_pop();

 ALLOW_ERROR_INJECTION(bpf_modify_return_test, ERRNO);

diff --git a/tools/testing/selftests/bpf/prog_tests/attach_test.c b/tools/testing/selftests/bpf/prog_tests/attach_test.c
new file mode 100644
index 000000000000..c5c6534c49c9
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/attach_test.c
@@ -0,0 +1,27 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include "attach_test.skel.h"
+
+void test_attach_test(void)
+{
+	DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts);
+	struct attach_test *attach_skel = NULL;
+	__u32 duration = 0;
+	int err;
+
+	opts.trampoline_attach_batch = true;
+	attach_skel = attach_test__open_opts(&opts);
+	if (CHECK(!attach_skel, "attach_test__open_opts", "open skeleton failed\n"))
+		goto cleanup;
+
+	err = attach_test__load(attach_skel);
+	if (CHECK(err, "attach_skel_load", "attach skeleton failed\n"))
+		goto cleanup;
+
+	err = attach_test__attach(attach_skel);
+	if (CHECK(err, "attach", "attach failed: %d\n", err))
+		goto cleanup;
+
+cleanup:
+	attach_test__destroy(attach_skel);
+}

diff --git a/tools/testing/selftests/bpf/progs/attach_test.c b/tools/testing/selftests/bpf/progs/attach_test.c
new file mode 100644
index 000000000000..51b18f83c109
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/attach_test.c
@@ -0,0 +1,62 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019 Facebook */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+#define ATTACH_PROG(__n)			\
+SEC("fentry/bpf_attach_test" #__n)		\
+int BPF_PROG(prog ## __n) { return 0; }
+
+ATTACH_PROG(0)
+ATTACH_PROG(1)
+ATTACH_PROG(2)
+ATTACH_PROG(3)
+ATTACH_PROG(4)
+ATTACH_PROG(5)
+ATTACH_PROG(6)
+ATTACH_PROG(7)
+ATTACH_PROG(8)
+ATTACH_PROG(9)
+ATTACH_PROG(10)
+ATTACH_PROG(11)
+ATTACH_PROG(12)
+ATTACH_PROG(13)
+ATTACH_PROG(14)
+ATTACH_PROG(15)
+ATTACH_PROG(16)
+ATTACH_PROG(17)
+ATTACH_PROG(18)
+ATTACH_PROG(19)
+ATTACH_PROG(20)
+ATTACH_PROG(21)
+ATTACH_PROG(22)
+ATTACH_PROG(23)
+ATTACH_PROG(24)
+ATTACH_PROG(25)
+ATTACH_PROG(26)
+ATTACH_PROG(27)
+ATTACH_PROG(28)
+ATTACH_PROG(29)
+ATTACH_PROG(30)
+ATTACH_PROG(31)
+ATTACH_PROG(32)
+ATTACH_PROG(33)
+ATTACH_PROG(34)
+ATTACH_PROG(35)
+ATTACH_PROG(36)
+ATTACH_PROG(37)
+ATTACH_PROG(38)
+ATTACH_PROG(39)
+ATTACH_PROG(40)
+ATTACH_PROG(41)
+ATTACH_PROG(42)
+ATTACH_PROG(43)
+ATTACH_PROG(44)
+ATTACH_PROG(45)
+ATTACH_PROG(46)
+ATTACH_PROG(47)
+ATTACH_PROG(48)
+ATTACH_PROG(49)