[bpf-next,v4,3/7] bpf: Introduce BPF_MODIFY_RETURN

Message ID 20200304191853.1529-4-kpsingh@chromium.org (mailing list archive)
State New, archived
Series Introduce BPF_MODIFY_RET tracing progs

Commit Message

KP Singh March 4, 2020, 7:18 p.m. UTC
From: KP Singh <kpsingh@google.com>

When multiple programs are attached, each program receives the return
value from the previous program on the stack and the last program
provides the return value to the attached function.

The fmod_ret bpf programs are run after the fentry programs and before
the fexit programs. The original function is only called if all the
fmod_ret programs return 0, to avoid any unintended side-effects. The
success value, i.e. 0, is not currently configurable, but it could be
made so, with user-space specifying it at load time.

For example:

int func_to_be_attached(int a, int b)
{  <--- do_fentry

do_fmod_ret:
   <update ret by calling fmod_ret>
   if (ret != 0)
        goto do_fexit;

original_function:

    <side_effects_happen_here>

}  <--- do_fexit

The fmod_ret program attached to this function can be defined as:

SEC("fmod_ret/func_to_be_attached")
int BPF_PROG(func_name, int a, int b, int ret)
{
        // This will skip the original function logic.
        return 1;
}
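
For reference, BPF_PROG is a convenience macro (provided by the
selftests' bpf_trace_helpers.h at the time of this series). A sketch
of the equivalent program against the raw context, which for fmod_ret
programs holds the function arguments followed by the return value,
each widened to 64 bits:

SEC("fmod_ret/func_to_be_attached")
int func_name(unsigned long long *ctx)
{
        int a = (int)ctx[0];   /* first function argument */
        int b = (int)ctx[1];   /* second function argument */
        int ret = (int)ctx[2]; /* return value so far */

        // This will skip the original function logic.
        return 1;
}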

The first fmod_ret program is passed 0 in its return argument.
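
A minimal user-space loader sketch, assuming the libbpf handling of
the "fmod_ret/" section prefix added later in this series; the object
file and program names are illustrative:

#include <unistd.h>
#include <bpf/libbpf.h>

int main(void)
{
        struct bpf_object *obj;
        struct bpf_program *prog;
        struct bpf_link *link;

        obj = bpf_object__open_file("fmod_ret.bpf.o", NULL);
        if (libbpf_get_error(obj))
                return 1;
        if (bpf_object__load(obj))
                return 1;

        prog = bpf_object__find_program_by_name(obj, "func_name");
        if (!prog)
                return 1;

        /* libbpf resolves SEC("fmod_ret/...") to
         * expected_attach_type = BPF_MODIFY_RETURN and attaches the
         * program to the target function's trampoline.
         */
        link = bpf_program__attach_trace(prog);
        if (libbpf_get_error(link))
                return 1;

        pause(); /* keep the program attached */
        bpf_link__destroy(link);
        bpf_object__close(obj);
        return 0;
}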

Signed-off-by: KP Singh <kpsingh@google.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
---
 arch/x86/net/bpf_jit_comp.c    | 130 ++++++++++++++++++++++++++++++---
 include/linux/bpf.h            |   1 +
 include/uapi/linux/bpf.h       |   1 +
 kernel/bpf/btf.c               |   3 +-
 kernel/bpf/syscall.c           |   1 +
 kernel/bpf/trampoline.c        |   5 +-
 kernel/bpf/verifier.c          |   1 +
 tools/include/uapi/linux/bpf.h |   1 +
 8 files changed, 130 insertions(+), 13 deletions(-)

Comments

Stephen Smalley March 5, 2020, 1:51 p.m. UTC | #1
On Wed, Mar 4, 2020 at 2:20 PM KP Singh <kpsingh@chromium.org> wrote:
>
> From: KP Singh <kpsingh@google.com>
>
> When multiple programs are attached, each program receives the return
> value from the previous program on the stack and the last program
> provides the return value to the attached function.
>
> The fmod_ret bpf programs are run after the fentry programs and before
> the fexit programs. The original function is only called if all the
> fmod_ret programs return 0 to avoid any unintended side-effects. The
> success value, i.e. 0 is not currently configurable but can be made so
> where user-space can specify it at load time.
>
> For example:
>
> int func_to_be_attached(int a, int b)
> {  <--- do_fentry
>
> do_fmod_ret:
>    <update ret by calling fmod_ret>
>    if (ret != 0)
>         goto do_fexit;
>
> original_function:
>
>     <side_effects_happen_here>
>
> }  <--- do_fexit
>
> The fmod_ret program attached to this function can be defined as:
>
> SEC("fmod_ret/func_to_be_attached")
> int BPF_PROG(func_name, int a, int b, int ret)
> {
>         // This will skip the original function logic.
>         return 1;
> }
>
> The first fmod_ret program is passed 0 in its return argument.
>
> Signed-off-by: KP Singh <kpsingh@google.com>
> Acked-by: Andrii Nakryiko <andriin@fb.com>

IIUC you've switched from a model where the BPF program would be
invoked after the original function logic
and the BPF program is skipped if the original function logic returns
non-zero to a model where the BPF program is invoked first and
the original function logic is skipped if the BPF program returns
non-zero.  I'm not keen on that for userspace-loaded code attached
to LSM hooks; it means that userspace BPF programs can run even if
SELinux would have denied access and SELinux hooks get
skipped entirely if the BPF program returns an error.  I think Casey
may have wrongly pointed you in this direction on the grounds
it can already happen with the base DAC checking logic.  But that's
kernel DAC checking logic, not userspace-loaded code.
And the existing checking on attachment is not sufficient for SELinux
since CAP_MAC_ADMIN is not all powerful to SELinux.
Be careful about designing your mechanisms around Smack because Smack
is not the only LSM.
KP Singh March 5, 2020, 3:54 p.m. UTC | #2
On 05-Mar 08:51, Stephen Smalley wrote:
> On Wed, Mar 4, 2020 at 2:20 PM KP Singh <kpsingh@chromium.org> wrote:
> >
> > From: KP Singh <kpsingh@google.com>
> >
> > When multiple programs are attached, each program receives the return
> > value from the previous program on the stack and the last program
> > provides the return value to the attached function.
> >
> > The fmod_ret bpf programs are run after the fentry programs and before
> > the fexit programs. The original function is only called if all the
> > fmod_ret programs return 0 to avoid any unintended side-effects. The
> > success value, i.e. 0 is not currently configurable but can be made so
> > where user-space can specify it at load time.
> >
> > For example:
> >
> > int func_to_be_attached(int a, int b)
> > {  <--- do_fentry
> >
> > do_fmod_ret:
> >    <update ret by calling fmod_ret>
> >    if (ret != 0)
> >         goto do_fexit;
> >
> > original_function:
> >
> >     <side_effects_happen_here>
> >
> > }  <--- do_fexit
> >
> > The fmod_ret program attached to this function can be defined as:
> >
> > SEC("fmod_ret/func_to_be_attached")
> > int BPF_PROG(func_name, int a, int b, int ret)
> > {
> >         // This will skip the original function logic.
> >         return 1;
> > }
> >
> > The first fmod_ret program is passed 0 in its return argument.
> >
> > Signed-off-by: KP Singh <kpsingh@google.com>
> > Acked-by: Andrii Nakryiko <andriin@fb.com>
> 
> IIUC you've switched from a model where the BPF program would be
> invoked after the original function logic
> and the BPF program is skipped if the original function logic returns
> non-zero to a model where the BPF program is invoked first and
> the original function logic is skipped if the BPF program returns
> non-zero.  I'm not keen on that for userspace-loaded code attached

We do want to continue the KRSI series and the effort to implement a
proper BPF LSM. In the meantime, the tracing + error injection
solution helps us to:

  * Provide better debug capabilities.
  * And parallelize the effort to come up with the right helpers
    for our LSM work and work on sleepable BPF which is also essential
    for some of the helpers.

As you noted, in the KRSI v4 series, we mentioned that we would like
to have the user-space loaded BPF programs be unable to override the
decision made by the in-kernel logic/LSMs, but this got shot down:

   https://lore.kernel.org/bpf/00c216e1-bcfd-b7b1-5444-2a2dfa69190b@schaufler-ca.com

I would like to continue this discussion when we post the v5 series
for KRSI as to what the correct precedence order should be for the
BPF_PROG_TYPE_LSM and would appreciate if you also bring it up there.

> to LSM hooks; it means that userspace BPF programs can run even if
> SELinux would have denied access and SELinux hooks get
> skipped entirely if the BPF program returns an error.  I think Casey
> may have wrongly pointed you in this direction on the grounds
> it can already happen with the base DAC checking logic.  But that's

What we can do for this tracing/modify_ret series is to remove
the special casing for "security_" functions in the BPF code and add
ALLOW_ERROR_INJECTION calls to the security hooks. This way, if
someone needs to disable the BPF programs being able to modify
security hooks, they can disable error injection. If that's okay, we
can send a patch.
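
For reference, a minimal sketch of what such an annotation could look
like (the hook and its body here are illustrative, not part of this
series, and the annotation only takes effect with
CONFIG_FUNCTION_ERROR_INJECTION=y):

#include <linux/error-injection.h>

int security_file_permission(struct file *file, int mask)
{
        return call_int_hook(file_permission, 0, file, mask);
}
/* Marks the wrapper as error-injectable, and hence a valid fmod_ret
 * attach target, without special-casing the "security_" prefix in the
 * BPF code.
 */
ALLOW_ERROR_INJECTION(security_file_permission, ERRNO);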

- KP

> kernel DAC checking logic, not userspace-loaded code.
> And the existing checking on attachment is not sufficient for SELinux
> since CAP_MAC_ADMIN is not all powerful to SELinux.
> Be careful about designing your mechanisms around Smack because Smack
> is not the only LSM.
Casey Schaufler March 5, 2020, 5:35 p.m. UTC | #3
On 3/5/2020 7:54 AM, KP Singh wrote:
> On 05-Mar 08:51, Stephen Smalley wrote:
>> On Wed, Mar 4, 2020 at 2:20 PM KP Singh <kpsingh@chromium.org> wrote:
>>> From: KP Singh <kpsingh@google.com>
>>>
>>> When multiple programs are attached, each program receives the return
>>> value from the previous program on the stack and the last program
>>> provides the return value to the attached function.
>>>
>>> The fmod_ret bpf programs are run after the fentry programs and before
>>> the fexit programs. The original function is only called if all the
>>> fmod_ret programs return 0 to avoid any unintended side-effects. The
>>> success value, i.e. 0 is not currently configurable but can be made so
>>> where user-space can specify it at load time.
>>>
>>> For example:
>>>
>>> int func_to_be_attached(int a, int b)
>>> {  <--- do_fentry
>>>
>>> do_fmod_ret:
>>>    <update ret by calling fmod_ret>
>>>    if (ret != 0)
>>>         goto do_fexit;
>>>
>>> original_function:
>>>
>>>     <side_effects_happen_here>
>>>
>>> }  <--- do_fexit
>>>
>>> The fmod_ret program attached to this function can be defined as:
>>>
>>> SEC("fmod_ret/func_to_be_attached")
>>> int BPF_PROG(func_name, int a, int b, int ret)
>>> {
>>>         // This will skip the original function logic.
>>>         return 1;
>>> }
>>>
>>> The first fmod_ret program is passed 0 in its return argument.
>>>
>>> Signed-off-by: KP Singh <kpsingh@google.com>
>>> Acked-by: Andrii Nakryiko <andriin@fb.com>
>> IIUC you've switched from a model where the BPF program would be
>> invoked after the original function logic
>> and the BPF program is skipped if the original function logic returns
>> non-zero to a model where the BPF program is invoked first and
>> the original function logic is skipped if the BPF program returns
>> non-zero.  I'm not keen on that for userspace-loaded code attached
> We do want to continue the KRSI series and the effort to implement a
> proper BPF LSM. In the meantime, the tracing + error injection
> solution helps us to:
>
>   * Provide better debug capabilities.
>   * And parallelize the effort to come up with the right helpers
>     for our LSM work and work on sleepable BPF which is also essential
>     for some of the helpers.
>
> As you noted, in the KRSI v4 series, we mentioned that we would like
> to have the user-space loaded BPF programs be unable to override the
> decision made by the in-kernel logic/LSMs, but this got shot down:
>
>    https://lore.kernel.org/bpf/00c216e1-bcfd-b7b1-5444-2a2dfa69190b@schaufler-ca.com
>
> I would like to continue this discussion when we post the v5 series
> for KRSI as to what the correct precedence order should be for the
> BPF_PROG_TYPE_LSM and would appreciate if you also bring it up there.

I believe that I have stated that order isn't my issue.
Go first, last or as specified in the lsm list, I really
don't care. We'll talk about what does matter in the KRSI
thread.


>> to LSM hooks; it means that userspace BPF programs can run even if
>> SELinux would have denied access and SELinux hooks get
>> skipped entirely if the BPF program returns an error.

Then I'm fine with using the LSM ordering mechanisms that Kees
thought through to run the BPF last. Although I think it's somewhat
concerning that SELinux cares what other security models might be
in place. If BPF programs can violate SELinux (or traditional DAC)
policies there are bigger issues than ordering.

>>   I think Casey
>> may have wrongly pointed you in this direction on the grounds
>> it can already happen with the base DAC checking logic.  But that's
> What we can do for this tracing/modify_ret series, is to remove
> the special casing for "security_" functions in the BPF code and add
> ALLOW_ERROR_INJECTION calls to the security hooks. This way, if
> someone needs to disable the BPF programs being able to modify
> security hooks, they can disable error injection. If that's okay, we
> can send a patch.
>
> - KP
>
>> kernel DAC checking logic, not userspace-loaded code.
>> And the existing checking on attachment is not sufficient for SELinux
>> since CAP_MAC_ADMIN is not all powerful to SELinux.
>> Be careful about designing your mechanisms around Smack because Smack
>> is not the only LSM.

:)
Stephen Smalley March 5, 2020, 6:03 p.m. UTC | #4
On Thu, Mar 5, 2020 at 12:35 PM Casey Schaufler <casey@schaufler-ca.com> wrote:
> I believe that I have stated that order isn't my issue.
> Go first, last or as specified in the lsm list, I really
> don't care. We'll talk about what does matter in the KRSI
> thread.

Order matters when the security module logic (in this case, the BPF
program) is loaded from userspace and
the userspace process isn't already required to be fully privileged
with respect to the in-kernel security modules.
CAP_MAC_ADMIN was their (not unreasonable) attempt to check that
requirement; it just doesn't happen to convey
the same meaning for SELinux since SELinux predates the introduction
of CAP_MAC_ADMIN (in Linux at least) and
since SELinux was designed to confine even processes with capabilities.

> Then I'm fine with using the LSM ordering mechanisms that Kees
> thought through to run the BPF last. Although I think it's somewhat
> concerning that SELinux cares what other security models might be
> in place. If BPF programs can violate SELinux (or traditional DAC)
> policies there are bigger issues than ordering.

It is only safe for Smack because CAP_MAC_ADMIN already conveys all
privileges with respect to Smack.
Otherwise, the BPF program can access information about the object
attributes, e.g. inode attributes,
and leak that information to userspace even if SELinux would have
denied the process that loaded the BPF
program permissions to directly obtain that information.  This is also
why Landlock has to be last in the LSM list.
Casey Schaufler March 5, 2020, 6:47 p.m. UTC | #5
On 3/5/2020 10:03 AM, Stephen Smalley wrote:
> On Thu, Mar 5, 2020 at 12:35 PM Casey Schaufler <casey@schaufler-ca.com> wrote:
>> I believe that I have stated that order isn't my issue.
>> Go first, last or as specified in the lsm list, I really
>> don't care. We'll talk about what does matter in the KRSI
>> thread.
> Order matters when the security module logic (in this case, the BPF
> program) is loaded from userspace and
> the userspace process isn't already required to be fully privileged
> with respect to the in-kernel security modules.
> CAP_MAC_ADMIN was their (not unreasonable) attempt to check that
> requirement; it just doesn't happen to convey
> the same meaning for SELinux since SELinux predates the introduction
> of CAP_MAC_ADMIN (in Linux at least) and
> since SELinux was designed to confine even processes with capabilities.

If KRSI "needs" to go last, I'm fine with that. What I continue
to object to is making KRSI/BPF a special case in the code. It
doesn't need to be.
Stephen Smalley March 5, 2020, 7:43 p.m. UTC | #6
On Thu, Mar 5, 2020 at 10:54 AM KP Singh <kpsingh@chromium.org> wrote:
>
> On 05-Mar 08:51, Stephen Smalley wrote:
> > IIUC you've switched from a model where the BPF program would be
> > invoked after the original function logic
> > and the BPF program is skipped if the original function logic returns
> > non-zero to a model where the BPF program is invoked first and
> > the original function logic is skipped if the BPF program returns
> > non-zero.  I'm not keen on that for userspace-loaded code attached
>
> We do want to continue the KRSI series and the effort to implement a
> proper BPF LSM. In the meantime, the tracing + error injection
> solution helps us to:
>
>   * Provide better debug capabilities.
>   * And parallelize the effort to come up with the right helpers
>     for our LSM work and work on sleepable BPF which is also essential
>     for some of the helpers.
>
> As you noted, in the KRSI v4 series, we mentioned that we would like
> to have the user-space loaded BPF programs be unable to override the
> decision made by the in-kernel logic/LSMs, but this got shot down:
>
>    https://lore.kernel.org/bpf/00c216e1-bcfd-b7b1-5444-2a2dfa69190b@schaufler-ca.com
>
> I would like to continue this discussion when we post the v5 series
> for KRSI as to what the correct precedence order should be for the
> BPF_PROG_TYPE_LSM and would appreciate if you also bring it up there.

That's fine but I guess I don't see why you or anyone else would
bother with introducing a BPF_PROG_TYPE_LSM
if BPF_PROG_MODIFY_RETURN is accepted and is allowed to attach to the
LSM hooks.  What's the benefit to you
if you can achieve your goals directly with MODIFY_RETURN?

> > to LSM hooks; it means that userspace BPF programs can run even if
> > SELinux would have denied access and SELinux hooks get
> > skipped entirely if the BPF program returns an error.  I think Casey
> > may have wrongly pointed you in this direction on the grounds
> > it can already happen with the base DAC checking logic.  But that's
>
> What we can do for this tracing/modify_ret series, is to remove
> the special casing for "security_" functions in the BPF code and add
> ALLOW_ERROR_INJECTION calls to the security hooks. This way, if
> someone needs to disable the BPF programs being able to modify
> security hooks, they can disable error injection. If that's okay, we
> can send a patch.

Realistically distros tend to enable lots of developer-friendly
options including error injection, and most users don't build their
own kernels
and distros won't support them when they do. So telling users they can
just rebuild their kernel without error injection if they care about
BPF programs being able to modify security hooks isn't really viable.
The security modules need a way to veto it based on their policies.
That's why I suggested a security hook here.
KP Singh March 5, 2020, 9:16 p.m. UTC | #7
On 05-Mar 14:43, Stephen Smalley wrote:
> On Thu, Mar 5, 2020 at 10:54 AM KP Singh <kpsingh@chromium.org> wrote:
> >
> > On 05-Mar 08:51, Stephen Smalley wrote:
> > > IIUC you've switched from a model where the BPF program would be
> > > invoked after the original function logic
> > > and the BPF program is skipped if the original function logic returns
> > > non-zero to a model where the BPF program is invoked first and
> > > the original function logic is skipped if the BPF program returns
> > > non-zero.  I'm not keen on that for userspace-loaded code attached
> >
> > We do want to continue the KRSI series and the effort to implement a
> > proper BPF LSM. In the meantime, the tracing + error injection
> > solution helps us to:
> >
> >   * Provide better debug capabilities.
> >   * And parallelize the effort to come up with the right helpers
> >     for our LSM work and work on sleepable BPF which is also essential
> >     for some of the helpers.
> >
> > As you noted, in the KRSI v4 series, we mentioned that we would like
> > to have the user-space loaded BPF programs be unable to override the
> > decision made by the in-kernel logic/LSMs, but this got shot down:
> >
> >    https://lore.kernel.org/bpf/00c216e1-bcfd-b7b1-5444-2a2dfa69190b@schaufler-ca.com
> >
> > I would like to continue this discussion when we post the v5 series
> > for KRSI as to what the correct precedence order should be for the
> > BPF_PROG_TYPE_LSM and would appreciate if you also bring it up there.
> 
> That's fine but I guess I don't see why you or anyone else would
> bother with introducing a BPF_PROG_TYPE_LSM
> if BPF_PROG_MODIFY_RETURN is accepted and is allowed to attach to the
> LSM hooks.  What's the benefit to you
> if you can achieve your goals directly with MODIFY_RETURN?

There is still value in being a proper LSM; as I mentioned in KRSI
v3, not all security_* wrappers simply call the attached hooks and
return their exit code.

It's also okay, taking into consideration Casey's objections and Kees'
suggestion, to be properly registered with the LSM framework (even if
it is with LSM_ORDER_LAST) and to work towards improving some of the
performance bottlenecks in the framework. That would be a positive
outcome for all LSMs.

BPF_MODIFY_RETURN is just a first step which lays the foundation for
BPF_PROG_TYPE_LSM and lets us build the BPF infrastructure for it.

- KP

> 
> > > to LSM hooks; it means that userspace BPF programs can run even if
> > > SELinux would have denied access and SELinux hooks get
> > > skipped entirely if the BPF program returns an error.  I think Casey
> > > may have wrongly pointed you in this direction on the grounds
> > > it can already happen with the base DAC checking logic.  But that's
> >
> > What we can do for this tracing/modify_ret series, is to remove
> > the special casing for "security_" functions in the BPF code and add
> > ALLOW_ERROR_INJECTION calls to the security hooks. This way, if
> > someone needs to disable the BPF programs being able to modify
> > security hooks, they can disable error injection. If that's okay, we
> > can send a patch.
> 
> Realistically distros tend to enable lots of developer-friendly
> options including error injection, and most users don't build their
> own kernels
> and distros won't support them when they do. So telling users they can
> just rebuild their kernel without error injection if they care about
> BPF programs being able to modify security hooks isn't really viable.
> The security modules need a way to veto it based on their policies.
> That's why I suggested a security hook here.

Patch

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index d6349e930b06..b1fd000feb89 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1362,7 +1362,7 @@  static void restore_regs(const struct btf_func_model *m, u8 **prog, int nr_args,
 }
 
 static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
-			   struct bpf_prog *p, int stack_size)
+			   struct bpf_prog *p, int stack_size, bool mod_ret)
 {
 	u8 *prog = *pprog;
 	int cnt = 0;
@@ -1383,6 +1383,13 @@  static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
 	if (emit_call(&prog, p->bpf_func, prog))
 		return -EINVAL;
 
+	/* BPF_TRAMP_MODIFY_RETURN trampolines can modify the return
+	 * value of the previous call which is then passed on the stack
+	 * to the next BPF program.
+	 */
+	if (mod_ret)
+		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
+
 	/* arg1: mov rdi, progs[i] */
 	emit_mov_imm64(&prog, BPF_REG_1, (long) p >> 32,
 		       (u32) (long) p);
@@ -1442,6 +1449,23 @@  static int emit_cond_near_jump(u8 **pprog, void *func, void *ip, u8 jmp_cond)
 	return 0;
 }
 
+static int emit_mod_ret_check_imm8(u8 **pprog, int value)
+{
+	u8 *prog = *pprog;
+	int cnt = 0;
+
+	if (!is_imm8(value))
+		return -EINVAL;
+
+	if (value == 0)
+		EMIT2(0x85, add_2reg(0xC0, BPF_REG_0, BPF_REG_0));
+	else
+		EMIT3(0x83, add_1reg(0xF8, BPF_REG_0), value);
+
+	*pprog = prog;
+	return 0;
+}
+
 static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
 		      struct bpf_tramp_progs *tp, int stack_size)
 {
@@ -1449,9 +1473,49 @@  static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
 	u8 *prog = *pprog;
 
 	for (i = 0; i < tp->nr_progs; i++) {
-		if (invoke_bpf_prog(m, &prog, tp->progs[i], stack_size))
+		if (invoke_bpf_prog(m, &prog, tp->progs[i], stack_size, false))
+			return -EINVAL;
+	}
+	*pprog = prog;
+	return 0;
+}
+
+static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
+			      struct bpf_tramp_progs *tp, int stack_size,
+			      u8 **branches)
+{
+	u8 *prog = *pprog;
+	int i;
+
+	/* The first fmod_ret program will receive a garbage return value.
+	 * Set this to 0 to avoid confusing the program.
+	 */
+	emit_mov_imm32(&prog, false, BPF_REG_0, 0);
+	emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
+	for (i = 0; i < tp->nr_progs; i++) {
+		if (invoke_bpf_prog(m, &prog, tp->progs[i], stack_size, true))
 			return -EINVAL;
+
+		/* Generate a branch:
+		 *
+		 * if (ret != 0)
+		 *	goto do_fexit;
+		 *
+		 * If needed this can be extended to any integer value which can
+		 * be passed by user-space when the program is loaded.
+		 */
+		if (emit_mod_ret_check_imm8(&prog, 0))
+			return -EINVAL;
+
+		/* Save the location of the branch and generate 6 nops
+		 * (4 bytes for an offset and 2 bytes for the jump). These nops
+		 * are replaced with a conditional jump once do_fexit (i.e. the
+		 * start of the fexit invocation) is finalized.
+		 */
+		branches[i] = prog;
+		emit_nops(&prog, 4 + 2);
 	}
+
 	*pprog = prog;
 	return 0;
 }
@@ -1521,10 +1585,12 @@  int arch_prepare_bpf_trampoline(void *image, void *image_end,
 				struct bpf_tramp_progs *tprogs,
 				void *orig_call)
 {
-	int cnt = 0, nr_args = m->nr_args;
+	int ret, i, cnt = 0, nr_args = m->nr_args;
 	int stack_size = nr_args * 8;
 	struct bpf_tramp_progs *fentry = &tprogs[BPF_TRAMP_FENTRY];
 	struct bpf_tramp_progs *fexit = &tprogs[BPF_TRAMP_FEXIT];
+	struct bpf_tramp_progs *fmod_ret = &tprogs[BPF_TRAMP_MODIFY_RETURN];
+	u8 **branches = NULL;
 	u8 *prog;
 
 	/* x86-64 supports up to 6 arguments. 7+ can be added in the future */
@@ -1557,24 +1623,60 @@  int arch_prepare_bpf_trampoline(void *image, void *image_end,
 		if (invoke_bpf(m, &prog, fentry, stack_size))
 			return -EINVAL;
 
+	if (fmod_ret->nr_progs) {
+		branches = kcalloc(fmod_ret->nr_progs, sizeof(u8 *),
+				   GFP_KERNEL);
+		if (!branches)
+			return -ENOMEM;
+
+		if (invoke_bpf_mod_ret(m, &prog, fmod_ret, stack_size,
+				       branches)) {
+			ret = -EINVAL;
+			goto cleanup;
+		}
+	}
+
 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
-		if (fentry->nr_progs)
+		if (fentry->nr_progs || fmod_ret->nr_progs)
 			restore_regs(m, &prog, nr_args, stack_size);
 
 		/* call original function */
-		if (emit_call(&prog, orig_call, prog))
-			return -EINVAL;
+		if (emit_call(&prog, orig_call, prog)) {
+			ret = -EINVAL;
+			goto cleanup;
+		}
 		/* remember return value in a stack for bpf prog to access */
 		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
 	}
 
+	if (fmod_ret->nr_progs) {
+		/* From Intel 64 and IA-32 Architectures Optimization
+		 * Reference Manual, 3.4.1.4 Code Alignment, Assembly/Compiler
+		 * Coding Rule 11: All branch targets should be 16-byte
+		 * aligned.
+		 */
+		emit_align(&prog, 16);
+		/* Update the branches saved in invoke_bpf_mod_ret with the
+		 * aligned address of do_fexit.
+		 */
+		for (i = 0; i < fmod_ret->nr_progs; i++)
+			emit_cond_near_jump(&branches[i], prog, branches[i],
+					    X86_JNE);
+	}
+
 	if (fexit->nr_progs)
-		if (invoke_bpf(m, &prog, fexit, stack_size))
-			return -EINVAL;
+		if (invoke_bpf(m, &prog, fexit, stack_size)) {
+			ret = -EINVAL;
+			goto cleanup;
+		}
 
 	if (flags & BPF_TRAMP_F_RESTORE_REGS)
 		restore_regs(m, &prog, nr_args, stack_size);
 
+	/* This needs to be done regardless. If there were fmod_ret programs,
+	 * the return value is only updated on the stack and still needs to be
+	 * restored to R0.
+	 */
 	if (flags & BPF_TRAMP_F_CALL_ORIG)
 		/* restore original return value back into RAX */
 		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
@@ -1586,9 +1688,15 @@  int arch_prepare_bpf_trampoline(void *image, void *image_end,
 		EMIT4(0x48, 0x83, 0xC4, 8); /* add rsp, 8 */
 	EMIT1(0xC3); /* ret */
 	/* Make sure the trampoline generation logic doesn't overflow */
-	if (WARN_ON_ONCE(prog > (u8 *)image_end - BPF_INSN_SAFETY))
-		return -EFAULT;
-	return prog - (u8 *)image;
+	if (WARN_ON_ONCE(prog > (u8 *)image_end - BPF_INSN_SAFETY)) {
+		ret = -EFAULT;
+		goto cleanup;
+	}
+	ret = prog - (u8 *)image;
+
+cleanup:
+	kfree(branches);
+	return ret;
 }
 
 static int emit_fallback_jump(u8 **pprog)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 98ec10b23dbb..f748b31e5888 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -474,6 +474,7 @@  void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start);
 enum bpf_tramp_prog_type {
 	BPF_TRAMP_FENTRY,
 	BPF_TRAMP_FEXIT,
+	BPF_TRAMP_MODIFY_RETURN,
 	BPF_TRAMP_MAX,
 	BPF_TRAMP_REPLACE, /* more than MAX */
 };
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index d6b33ea27bcc..40b2d9476268 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -210,6 +210,7 @@  enum bpf_attach_type {
 	BPF_TRACE_RAW_TP,
 	BPF_TRACE_FENTRY,
 	BPF_TRACE_FEXIT,
+	BPF_MODIFY_RETURN,
 	__MAX_BPF_ATTACH_TYPE
 };
 
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 787140095e58..30841fb8b3c0 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3710,7 +3710,8 @@  bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		nr_args--;
 	}
 
-	if (prog->expected_attach_type == BPF_TRACE_FEXIT &&
+	if ((prog->expected_attach_type == BPF_TRACE_FEXIT ||
+	     prog->expected_attach_type == BPF_MODIFY_RETURN) &&
 	    arg == nr_args) {
 		if (!t)
 			/* Default prog with 5 args. 6th arg is retval. */
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 13de65363ba2..7ce0815793dd 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -2324,6 +2324,7 @@  static int bpf_tracing_prog_attach(struct bpf_prog *prog)
 
 	if (prog->expected_attach_type != BPF_TRACE_FENTRY &&
 	    prog->expected_attach_type != BPF_TRACE_FEXIT &&
+	    prog->expected_attach_type != BPF_MODIFY_RETURN &&
 	    prog->type != BPF_PROG_TYPE_EXT) {
 		err = -EINVAL;
 		goto out_put_prog;
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 546198f6f307..221a17af1f81 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -232,7 +232,8 @@  static int bpf_trampoline_update(struct bpf_trampoline *tr)
 		goto out;
 	}
 
-	if (tprogs[BPF_TRAMP_FEXIT].nr_progs)
+	if (tprogs[BPF_TRAMP_FEXIT].nr_progs ||
+	    tprogs[BPF_TRAMP_MODIFY_RETURN].nr_progs)
 		flags = BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_SKIP_FRAME;
 
 	/* Though the second half of trampoline page is unused a task could be
@@ -269,6 +270,8 @@  static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(enum bpf_attach_type t)
 	switch (t) {
 	case BPF_TRACE_FENTRY:
 		return BPF_TRAMP_FENTRY;
+	case BPF_MODIFY_RETURN:
+		return BPF_TRAMP_MODIFY_RETURN;
 	case BPF_TRACE_FEXIT:
 		return BPF_TRAMP_FEXIT;
 	default:
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 289383edfc8c..2460c8e6b5be 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -9950,6 +9950,7 @@  static int check_attach_btf_id(struct bpf_verifier_env *env)
 		if (!prog_extension)
 			return -EINVAL;
 		/* fallthrough */
+	case BPF_MODIFY_RETURN:
 	case BPF_TRACE_FENTRY:
 	case BPF_TRACE_FEXIT:
 		if (!btf_type_is_func(t)) {
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index d6b33ea27bcc..40b2d9476268 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -210,6 +210,7 @@  enum bpf_attach_type {
 	BPF_TRACE_RAW_TP,
 	BPF_TRACE_FENTRY,
 	BPF_TRACE_FEXIT,
+	BPF_MODIFY_RETURN,
 	__MAX_BPF_ATTACH_TYPE
 };