Message ID | 20230309180213.180263-3-hbathini@linux.ibm.com (mailing list archive)
---|---
State | Not Applicable
Delegated to: | BPF
Series | enable bpf_prog_pack allocator for powerpc
On Thu, Mar 9, 2023 at 10:02 AM Hari Bathini <hbathini@linux.ibm.com> wrote:
>
> bpf_arch_text_copy is used to dump JITed binary to RX page, allowing
> multiple BPF programs to share the same page. Use the newly introduced
> patch_instructions() to implement it. Around 5X improvement in speed
> of execution observed, using the new patch_instructions() function
> over patch_instruction(), while running the tests from test_bpf.ko.
>
> Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
> ---
>  arch/powerpc/net/bpf_jit_comp.c | 23 ++++++++++++++++++++++-
>  1 file changed, 22 insertions(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index e93aefcfb83f..0a70319116d1 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -13,9 +13,12 @@
>  #include <linux/netdevice.h>
>  #include <linux/filter.h>
>  #include <linux/if_vlan.h>
> -#include <asm/kprobes.h>
> +#include <linux/memory.h>
>  #include <linux/bpf.h>
>
> +#include <asm/kprobes.h>
> +#include <asm/code-patching.h>
> +
>  #include "bpf_jit.h"
>
>  static void bpf_jit_fill_ill_insns(void *area, unsigned int size)
> @@ -272,3 +275,21 @@ int bpf_add_extable_entry(struct bpf_prog *fp, u32 *image, int pass, struct code
>  	ctx->exentry_idx++;
>  	return 0;
>  }
> +
> +void *bpf_arch_text_copy(void *dst, void *src, size_t len)
> +{
> +	void *ret = ERR_PTR(-EINVAL);
> +	int err;
> +
> +	if (WARN_ON_ONCE(core_kernel_text((unsigned long)dst)))
> +		return ret;
> +
> +	ret = dst;
> +	mutex_lock(&text_mutex);
> +	err = patch_instructions(dst, src, false, len);
> +	if (err)
> +		ret = ERR_PTR(err);
> +	mutex_unlock(&text_mutex);
> +
> +	return ret;
> +}

It seems we don't really need "ret". How about something like:

+void *bpf_arch_text_copy(void *dst, void *src, size_t len)
+{
+	int err;
+
+	if (WARN_ON_ONCE(core_kernel_text((unsigned long)dst)))
+		return ERR_PTR(-EINVAL);
+
+	mutex_lock(&text_mutex);
+	err = patch_instructions(dst, src, false, len);
+	mutex_unlock(&text_mutex);
+
+	return err ? ERR_PTR(err) : dst;
+}

Song
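For context on the return-value handling discussed above: bpf_arch_text_copy() returns either the destination pointer or an encoded error, so a caller is expected to test the result with IS_ERR()/PTR_ERR(). The sketch below is illustrative only and not part of the patch; example_finalize_image(), ro_dst, rw_src and size are made-up names.

/*
 * Hedged sketch (not from the patch): how a caller would typically
 * consume bpf_arch_text_copy()'s ERR_PTR-style return value.
 */
static int example_finalize_image(void *ro_dst, void *rw_src, size_t size)
{
	void *ret;

	ret = bpf_arch_text_copy(ro_dst, rw_src, size);
	if (IS_ERR(ret))
		return PTR_ERR(ret);	/* -EINVAL, or the patch_instructions() error */

	return 0;			/* ro_dst now holds the copied JITed image */
}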
bpf_arch_text_copy is used to dump JITed binary to RX page, allowing
multiple BPF programs to share the same page. Use the newly introduced
patch_instructions() to implement it. Around 5X improvement in speed
of execution observed, using the new patch_instructions() function
over patch_instruction(), while running the tests from test_bpf.ko.

Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
---
 arch/powerpc/net/bpf_jit_comp.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index e93aefcfb83f..0a70319116d1 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -13,9 +13,12 @@
 #include <linux/netdevice.h>
 #include <linux/filter.h>
 #include <linux/if_vlan.h>
-#include <asm/kprobes.h>
+#include <linux/memory.h>
 #include <linux/bpf.h>

+#include <asm/kprobes.h>
+#include <asm/code-patching.h>
+
 #include "bpf_jit.h"

 static void bpf_jit_fill_ill_insns(void *area, unsigned int size)
@@ -272,3 +275,21 @@ int bpf_add_extable_entry(struct bpf_prog *fp, u32 *image, int pass, struct code
 	ctx->exentry_idx++;
 	return 0;
 }
+
+void *bpf_arch_text_copy(void *dst, void *src, size_t len)
+{
+	void *ret = ERR_PTR(-EINVAL);
+	int err;
+
+	if (WARN_ON_ONCE(core_kernel_text((unsigned long)dst)))
+		return ret;
+
+	ret = dst;
+	mutex_lock(&text_mutex);
+	err = patch_instructions(dst, src, false, len);
+	if (err)
+		ret = ERR_PTR(err);
+	mutex_unlock(&text_mutex);
+
+	return ret;
+}
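The roughly 5X speedup quoted in the commit message comes from patching the whole JITed image in one call rather than one 32-bit word at a time. As a hedged, illustrative sketch only (not code from this series, and ignoring prefixed instructions), the word-at-a-time alternative built on the existing patch_instruction() would look something like this:

	/*
	 * Illustrative only: copy the JITed image one u32 at a time with
	 * patch_instruction(), the approach that the single bulk
	 * patch_instructions() call in bpf_arch_text_copy() avoids.
	 */
	size_t i;
	int err = 0;

	for (i = 0; i < len / sizeof(u32); i++) {
		err = patch_instruction((u32 *)dst + i,
					ppc_inst(((u32 *)src)[i]));
		if (err)
			break;
	}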