Message ID | 20240906051205.530219-3-andrii@kernel.org
---|---
State | New
Series | uprobes,mm: speculative lockless VMA-to-uprobe lookup
* Andrii Nakryiko <andrii@kernel.org> [240906 01:12]:

...

> ---
>  kernel/events/uprobes.c | 51 +++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 51 insertions(+)
>
> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> index a2e6a57f79f2..b7e0baa83de1 100644

...

> @@ -2088,6 +2135,10 @@ static struct uprobe *find_active_uprobe_rcu(unsigned long bp_vaddr, int *is_swb

I'm having issues locating this function in akpm/mm-unstable.  What
tree/commits am I missing to do a full review of this code?

>  	struct uprobe *uprobe = NULL;
>  	struct vm_area_struct *vma;
>
> +	uprobe = find_active_uprobe_speculative(bp_vaddr);
> +	if (uprobe)
> +		return uprobe;
> +
>  	mmap_read_lock(mm);
>  	vma = vma_lookup(mm, bp_vaddr);
>  	if (vma) {
> --
> 2.43.5
>
>
On Sat, Sep 7, 2024 at 6:22 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote:
>
> * Andrii Nakryiko <andrii@kernel.org> [240906 01:12]:
>
> ...
>
> > ---
> >  kernel/events/uprobes.c | 51 +++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 51 insertions(+)
> >
> > diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> > index a2e6a57f79f2..b7e0baa83de1 100644
>
> ...
>
> > @@ -2088,6 +2135,10 @@ static struct uprobe *find_active_uprobe_rcu(unsigned long bp_vaddr, int *is_swb
>
> I'm having issues locating this function in akpm/mm-unstable.  What
> tree/commits am I missing to do a full review of this code?

Hey Liam,

These patches are based on tip/perf/core, find_active_uprobe_rcu()
just landed a few days ago. See [0].

  [0] https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/log/?h=perf/core

> >  	struct uprobe *uprobe = NULL;
> >  	struct vm_area_struct *vma;
> >
> > +	uprobe = find_active_uprobe_speculative(bp_vaddr);
> > +	if (uprobe)
> > +		return uprobe;
> > +
> >  	mmap_read_lock(mm);
> >  	vma = vma_lookup(mm, bp_vaddr);
> >  	if (vma) {
> > --
> > 2.43.5
> >
> >
On Fri, Sep 6, 2024 at 7:12 AM Andrii Nakryiko <andrii@kernel.org> wrote:
> Given filp_cachep is already marked SLAB_TYPESAFE_BY_RCU, we can safely
> access vma->vm_file->f_inode field locklessly under just rcu_read_lock()

No, not every file is SLAB_TYPESAFE_BY_RCU - see for example
ovl_mmap(), which uses backing_file_mmap(), which does
vma_set_file(vma, file) where "file" comes from ovl_mmap()'s
"realfile", which comes from file->private_data, which is set in
ovl_open() to the return value of ovl_open_realfile(), which comes
from backing_file_open(), which allocates a file with
alloc_empty_backing_file(), which uses a normal kzalloc() without any
RCU stuff, with this comment:

 * This is only for kernel internal use, and the allocate file must not be
 * installed into file tables or such.

And when a backing_file is freed, you can see on the path
__fput() -> file_free()
that files with FMODE_BACKING are directly freed with kfree(), no RCU delay.

So the RCU-ness of "struct file" is an implementation detail of the
VFS, and you can't rely on it for ->vm_file unless you get the VFS to
change how backing file lifetimes work, which might slow down some
other workload, or you find a way to figure out whether you're dealing
with a backing file without actually accessing the file.

> +static struct uprobe *find_active_uprobe_speculative(unsigned long bp_vaddr)
> +{
> +	const vm_flags_t flags = VM_HUGETLB | VM_MAYEXEC | VM_MAYSHARE;
> +	struct mm_struct *mm = current->mm;
> +	struct uprobe *uprobe;
> +	struct vm_area_struct *vma;
> +	struct file *vm_file;
> +	struct inode *vm_inode;
> +	unsigned long vm_pgoff, vm_start;
> +	int seq;
> +	loff_t offset;
> +
> +	if (!mmap_lock_speculation_start(mm, &seq))
> +		return NULL;
> +
> +	rcu_read_lock();
> +
> +	vma = vma_lookup(mm, bp_vaddr);
> +	if (!vma)
> +		goto bail;
> +
> +	vm_file = data_race(vma->vm_file);

A plain "data_race()" says "I'm fine with this load tearing", but
you're relying on this load not tearing (since you access the vm_file
pointer below).
You're also relying on the "struct file" that vma->vm_file points to
being populated at this point, which means you need CONSUME semantics
here, which READ_ONCE() will give you, and something like RELEASE
semantics on any pairing store that populates vma->vm_file, which
means they'd all have to become something like smp_store_release()).

You might want to instead add another recheck of the sequence count
(which would involve at least a read memory barrier after the
preceding patch is fixed) after loading the ->vm_file pointer to
ensure that no one was concurrently changing the ->vm_file pointer
before you do memory accesses through it.

> +	if (!vm_file || (vma->vm_flags & flags) != VM_MAYEXEC)
> +		goto bail;

missing data_race() annotation on the vma->vm_flags access

> +	vm_inode = data_race(vm_file->f_inode);

As noted above, this doesn't work because you can't rely on having RCU
lifetime for the file. One *very* ugly hack you could do, if you think
this code is so performance-sensitive that you're willing to do fairly
atrocious things here, would be to do a "yes I am intentionally doing
a UAF read and I know the address might not even be mapped at this
point, it's fine, trust me" pattern, where you use
copy_from_kernel_nofault(), kind of like in prepend_copy() in
fs/d_path.c, and then immediately recheck the sequence count before
doing *anything* with this vm_inode pointer you just loaded.
[...]
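For concreteness, the "plan B" Jann sketches above might look roughly like
this (a sketch only, not code from the thread; it assumes the
mmap_lock_speculation_*() helpers from the preceding patch in this series and
reuses mmap_lock_speculation_end() purely as a sequence-count recheck, which
may not match the final helper API):

	struct inode *vm_inode;

	/*
	 * vm_file may already be freed and even unmapped at this point;
	 * do a fault-tolerant read instead of a plain dereference.
	 */
	if (copy_from_kernel_nofault(&vm_inode, &vm_file->f_inode,
				     sizeof(vm_inode)))
		goto bail;

	/* recheck before doing *anything* with the value we just loaded */
	if (!mmap_lock_speculation_end(mm, seq))
		goto bail;

	/* only now is vm_inode safe to pass on to find_uprobe_rcu() */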
On Mon, Sep 9, 2024 at 6:13 AM Jann Horn <jannh@google.com> wrote:
>
> On Fri, Sep 6, 2024 at 7:12 AM Andrii Nakryiko <andrii@kernel.org> wrote:
> > Given filp_cachep is already marked SLAB_TYPESAFE_BY_RCU, we can safely
> > access vma->vm_file->f_inode field locklessly under just rcu_read_lock()
>
> No, not every file is SLAB_TYPESAFE_BY_RCU - see for example
> ovl_mmap(), which uses backing_file_mmap(), which does
> vma_set_file(vma, file) [...]
>
> And when a backing_file is freed, you can see on the path
> __fput() -> file_free()
> that files with FMODE_BACKING are directly freed with kfree(), no RCU delay.

Good catch on FMODE_BACKING, I didn't realize there is this exception, thanks!

I think the way forward would be to detect that the backing file is in
FMODE_BACKING and fall back to mmap_lock-protected code path.

I guess I have the question to Liam and Suren, do you think it would
be ok to add another bool after `bool detached` in struct
vm_area_struct (guarded by CONFIG_PER_VMA_LOCK), or should we try to
add an extra bit into vm_flags_t? The latter would work without
CONFIG_PER_VMA_LOCK, but I don't know what's acceptable with mm folks.

This flag can be set in vma_set_file() when swapping backing file and
wherever else vma->vm_file might be set/updated (I need to audit the
code).

> So the RCU-ness of "struct file" is an implementation detail of the
> VFS, and you can't rely on it for ->vm_file unless you get the VFS to
> change how backing file lifetimes work, which might slow down some
> other workload, or you find a way to figure out whether you're dealing
> with a backing file without actually accessing the file.
>
> > +	vm_file = data_race(vma->vm_file);
>
> A plain "data_race()" says "I'm fine with this load tearing", but
> you're relying on this load not tearing (since you access the vm_file
> pointer below).
> You're also relying on the "struct file" that vma->vm_file points to
> being populated at this point, which means you need CONSUME semantics
> here, which READ_ONCE() will give you, and something like RELEASE
> semantics on any pairing store that populates vma->vm_file, which
> means they'd all have to become something like smp_store_release()).

vma->vm_file should be set in VMA before it is installed and is never
modified afterwards, isn't that the case? So maybe no extra barrier
are needed and READ_ONCE() would be enough.
> You might want to instead add another recheck of the sequence count
> (which would involve at least a read memory barrier after the
> preceding patch is fixed) after loading the ->vm_file pointer to
> ensure that no one was concurrently changing the ->vm_file pointer
> before you do memory accesses through it.
>
> > +	if (!vm_file || (vma->vm_flags & flags) != VM_MAYEXEC)
> > +		goto bail;
>
> missing data_race() annotation on the vma->vm_flags access

ack

> > +	vm_inode = data_race(vm_file->f_inode);
>
> As noted above, this doesn't work because you can't rely on having RCU
> lifetime for the file. One *very* ugly hack you could do, [...] where you use
> copy_from_kernel_nofault(), kind of like in prepend_copy() in
> fs/d_path.c, and then immediately recheck the sequence count before
> doing *anything* with this vm_inode pointer you just loaded.

yeah, let's leave it as a very unfortunate plan B and try to solve it
a bit cleaner.

[...]
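For illustration, the fallback floated above might look roughly like the
following sketch; the vm_file_backing field, its placement and the
vma_set_file() change are invented here and are not part of any posted patch:

	/* hypothetical field in struct vm_area_struct (name made up) */
	bool vm_file_backing;	/* vm_file was allocated with FMODE_BACKING */

	/* recorded wherever vma->vm_file is (re)assigned, e.g. vma_set_file() */
	void vma_set_file(struct vm_area_struct *vma, struct file *file)
	{
		get_file(file);
		swap(vma->vm_file, file);
		vma->vm_file_backing = !!(vma->vm_file->f_mode & FMODE_BACKING);
		fput(file);
	}

	/* ...and in find_active_uprobe_speculative(): */
	if (!vm_file || data_race(vma->vm_file_backing))
		goto bail;	/* backing file: take the mmap_lock path instead */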
On Mon, Sep 9, 2024 at 11:29 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
> On Mon, Sep 9, 2024 at 6:13 AM Jann Horn <jannh@google.com> wrote:
> > A plain "data_race()" says "I'm fine with this load tearing", but
> > you're relying on this load not tearing (since you access the vm_file
> > pointer below). [...]
>
> vma->vm_file should be set in VMA before it is installed and is never
> modified afterwards, isn't that the case? So maybe no extra barrier
> are needed and READ_ONCE() would be enough.

Ah, right, I'm not sure what I was thinking there.

I... guess you only _really_ need the READ_ONCE() if something can
actually ever change the ->vm_file pointer, otherwise just a plain
load with no annotation whatsoever would be good enough? I'm fairly
sure nothing can ever change the ->vm_file pointer of a live VMA, and
I think _currently_ it looks like nothing will NULL out the ->vm_file
pointer on free either... though that last part is probably not
something you should rely on...
On Mon, Sep 9, 2024 at 2:29 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Mon, Sep 9, 2024 at 6:13 AM Jann Horn <jannh@google.com> wrote:
> [...]
>
> Good catch on FMODE_BACKING, I didn't realize there is this exception, thanks!
>
> I think the way forward would be to detect that the backing file is in
> FMODE_BACKING and fall back to mmap_lock-protected code path.
>
> I guess I have the question to Liam and Suren, do you think it would
> be ok to add another bool after `bool detached` in struct
> vm_area_struct (guarded by CONFIG_PER_VMA_LOCK), or should we try to
> add an extra bit into vm_flags_t? The latter would work without
> CONFIG_PER_VMA_LOCK, but I don't know what's acceptable with mm folks.
>
> This flag can be set in vma_set_file() when swapping backing file and
> wherever else vma->vm_file might be set/updated (I need to audit the
> code).

I understand that this would work but I'm not very eager to leak
vm_file attributes like FMODE_BACKING into vm_area_struct.
Instead maybe that exception can be avoided? Treating all vm_files
equally as RCU-safe would be a much simpler solution. I see that this
exception was introduced in [1] and I don't know if this was done for
performance reasons or something else. Christian, CCing you here to
please clarify.

[1] https://lore.kernel.org/all/20231005-sakralbau-wappnen-f5c31755ed70@brauner/

[...]
On Tue, Sep 10, 2024 at 8:39 AM Jann Horn <jannh@google.com> wrote:
>
> On Mon, Sep 9, 2024 at 11:29 PM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> > On Mon, Sep 9, 2024 at 6:13 AM Jann Horn <jannh@google.com> wrote:
> > > A plain "data_race()" says "I'm fine with this load tearing", but
> > > you're relying on this load not tearing (since you access the vm_file
> > > pointer below). [...]
> >
> > vma->vm_file should be set in VMA before it is installed and is never
> > modified afterwards, isn't that the case? So maybe no extra barrier
> > are needed and READ_ONCE() would be enough.
>
> Ah, right, I'm not sure what I was thinking there.
>
> I... guess you only _really_ need the READ_ONCE() if something can
> actually ever change the ->vm_file pointer, otherwise just a plain
> load with no annotation whatsoever would be good enough? I'm fairly

yep, probably, I was just trying to be cautious :)

> sure nothing can ever change the ->vm_file pointer of a live VMA, and
> I think _currently_ it looks like nothing will NULL out the ->vm_file
> pointer on free either... though that last part is probably not
> something you should rely on...

This seems to be rather important, but similarly to how vm_file can't
be modified, it seems reasonable to assume that it won't be set to
NULL (it's a modification to set it to a new NULL value, isn't it?).
I mean, we can probably just add a NULL check and rely on the
atomicity of setting a pointer, so not a big deal, but seems like a
pretty reasonable assumption to make.
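Put together, the two loads discussed here might end up as the following
minimal sketch inside find_active_uprobe_speculative(), assuming (as argued
above) that ->vm_file is set before the VMA is installed and never modified
afterwards; `flags` is the same mask as in the posted patch:

	/* atomic pointer load is enough; no further ordering is assumed */
	vm_file = READ_ONCE(vma->vm_file);
	if (!vm_file || (data_race(vma->vm_flags) & flags) != VM_MAYEXEC)
		goto bail;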
On Tue, Sep 10, 2024 at 9:32 AM Suren Baghdasaryan <surenb@google.com> wrote:
>
> On Mon, Sep 9, 2024 at 2:29 PM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> [...]
> >
> > I guess I have the question to Liam and Suren, do you think it would
> > be ok to add another bool after `bool detached` in struct
> > vm_area_struct (guarded by CONFIG_PER_VMA_LOCK), or should we try to
> > add an extra bit into vm_flags_t? [...]
>
> I understand that this would work but I'm not very eager to leak
> vm_file attributes like FMODE_BACKING into vm_area_struct.
> Instead maybe that exception can be avoided? Treating all vm_files

I agree, that would be best, of course. It seems like [1] was an
optimization to avoid kfree_rcu() calls, not sure how big of a deal it
is to undo that, given we do have a use case that calls for it now.
Let's see what Christian thinks.

> equally as RCU-safe would be a much simpler solution. I see that this
> exception was introduced in [1] and I don't know if this was done for
> performance reasons or something else. Christian, CCing you here to
> please clarify.
>
> [1] https://lore.kernel.org/all/20231005-sakralbau-wappnen-f5c31755ed70@brauner/

[...]
On Tue, Sep 10, 2024 at 01:58:10PM GMT, Andrii Nakryiko wrote:
> On Tue, Sep 10, 2024 at 9:32 AM Suren Baghdasaryan <surenb@google.com> wrote:
> [...]
> > I understand that this would work but I'm not very eager to leak
> > vm_file attributes like FMODE_BACKING into vm_area_struct.
> > Instead maybe that exception can be avoided? Treating all vm_files
>
> I agree, that would be best, of course. It seems like [1] was an
> optimization to avoid kfree_rcu() calls, not sure how big of a deal it
> is to undo that, given we do have a use case that calls for it now.
> Let's see what Christian thinks.

Do you just mean?

diff --git a/fs/file_table.c b/fs/file_table.c
index 7ce4d5dac080..03e58b28e539 100644
--- a/fs/file_table.c
+++ b/fs/file_table.c
@@ -68,7 +68,7 @@ static inline void file_free(struct file *f)
 	put_cred(f->f_cred);
 	if (unlikely(f->f_mode & FMODE_BACKING)) {
 		path_put(backing_file_user_path(f));
-		kfree(backing_file(f));
+		kfree_rcu(backing_file(f));
 	} else {
 		kmem_cache_free(filp_cachep, f);
 	}

Then the only thing you can do with FMODE_BACKING is to skip it. I think
that should be fine since backing files right now are only used by
overlayfs and I don't think the kfree_rcu() will be a performance issue.

[...]
On Thu, Sep 12, 2024 at 4:17 AM Christian Brauner <brauner@kernel.org> wrote:
>
> On Tue, Sep 10, 2024 at 01:58:10PM GMT, Andrii Nakryiko wrote:
> [...]
> > I agree, that would be best, of course. It seems like [1] was an
> > optimization to avoid kfree_rcu() calls, not sure how big of a deal it
> > is to undo that, given we do have a use case that calls for it now.
> > Let's see what Christian thinks.
>
> Do you just mean?
>
> diff --git a/fs/file_table.c b/fs/file_table.c
> index 7ce4d5dac080..03e58b28e539 100644
> --- a/fs/file_table.c
> +++ b/fs/file_table.c
> @@ -68,7 +68,7 @@ static inline void file_free(struct file *f)
>  	put_cred(f->f_cred);
>  	if (unlikely(f->f_mode & FMODE_BACKING)) {
>  		path_put(backing_file_user_path(f));
> -		kfree(backing_file(f));
> +		kfree_rcu(backing_file(f));
>  	} else {
>  		kmem_cache_free(filp_cachep, f);
>  	}
>
> Then the only thing you can do with FMODE_BACKING is to skip it. I think
> that should be fine since backing files right now are only used by
> overlayfs and I don't think the kfree_rcu() will be a performance issue.

Yes, something along those lines.

Ok, great, if it's ok to add back kfree_rcu(), then I think that
resolves the main problem I was running into. I'll incorporate adding
back RCU-delayed freeing as a separate patch into the future patch
set, thanks!

[...]
On 09/05, Andrii Nakryiko wrote:
>
> +static struct uprobe *find_active_uprobe_speculative(unsigned long bp_vaddr)
> +{
> +	const vm_flags_t flags = VM_HUGETLB | VM_MAYEXEC | VM_MAYSHARE;
...
> +	if (!vm_file || (vma->vm_flags & flags) != VM_MAYEXEC)
> +		goto bail;

Not that this can really simplify your patch, feel free to ignore, but I don't
think you need to check vma->vm_flags.

Yes, find_active_uprobe_rcu() does the same valid_vma(vma, false) check, but it
too can/should be removed, afaics.

valid_vma(vma, false) makes sense in, say, unapply_uprobe() to quickly filter
out vma's which can't have this bp installed, but not in the handle_swbp() paths.

Oleg.
On Sun, Sep 15, 2024 at 5:04 PM Oleg Nesterov <oleg@redhat.com> wrote:
>
> On 09/05, Andrii Nakryiko wrote:
> >
> > +static struct uprobe *find_active_uprobe_speculative(unsigned long bp_vaddr)
> > +{
> > +	const vm_flags_t flags = VM_HUGETLB | VM_MAYEXEC | VM_MAYSHARE;
> ...
> > +	if (!vm_file || (vma->vm_flags & flags) != VM_MAYEXEC)
> > +		goto bail;
>
> Not that this can really simplify your patch, feel free to ignore, but I don't
> think you need to check vma->vm_flags.
>
> Yes, find_active_uprobe_rcu() does the same valid_vma(vma, false) check, but it
> too can/should be removed, afaics.

yep, agreed, I'll see to simplify both, your points make total sense

> valid_vma(vma, false) makes sense in, say, unapply_uprobe() to quickly filter
> out vma's which can't have this bp installed, but not in the handle_swbp() paths.
>
> Oleg.
>
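Folding Oleg's simplification together with the earlier READ_ONCE()/NULL-check
discussion, the speculative lookup might end up looking roughly like this
sketch (helper names as in the posted series; how the FMODE_BACKING fallback
gets handled is still open above; the originally posted version is reproduced
below):

static struct uprobe *find_active_uprobe_speculative(unsigned long bp_vaddr)
{
	struct mm_struct *mm = current->mm;
	struct vm_area_struct *vma;
	struct uprobe *uprobe = NULL;
	struct file *vm_file;
	loff_t offset;
	int seq;

	if (!mmap_lock_speculation_start(mm, &seq))
		return NULL;

	rcu_read_lock();

	vma = vma_lookup(mm, bp_vaddr);
	if (!vma)
		goto bail;

	/* ->vm_file is assumed immutable for a live VMA; NULL check still needed */
	vm_file = READ_ONCE(vma->vm_file);
	if (!vm_file)
		goto bail;

	offset = (loff_t)(data_race(vma->vm_pgoff) << PAGE_SHIFT) +
		 (bp_vaddr - data_race(vma->vm_start));
	uprobe = find_uprobe_rcu(vm_file->f_inode, offset);
	if (!uprobe)
		goto bail;

	/* double check that nothing about the mm changed while we speculated */
	if (!mmap_lock_speculation_end(mm, seq))
		uprobe = NULL;
bail:
	rcu_read_unlock();
	return uprobe;
}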
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index a2e6a57f79f2..b7e0baa83de1 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -2081,6 +2081,53 @@ static int is_trap_at_addr(struct mm_struct *mm, unsigned long vaddr)
 	return is_trap_insn(&opcode);
 }
 
+static struct uprobe *find_active_uprobe_speculative(unsigned long bp_vaddr)
+{
+	const vm_flags_t flags = VM_HUGETLB | VM_MAYEXEC | VM_MAYSHARE;
+	struct mm_struct *mm = current->mm;
+	struct uprobe *uprobe;
+	struct vm_area_struct *vma;
+	struct file *vm_file;
+	struct inode *vm_inode;
+	unsigned long vm_pgoff, vm_start;
+	int seq;
+	loff_t offset;
+
+	if (!mmap_lock_speculation_start(mm, &seq))
+		return NULL;
+
+	rcu_read_lock();
+
+	vma = vma_lookup(mm, bp_vaddr);
+	if (!vma)
+		goto bail;
+
+	vm_file = data_race(vma->vm_file);
+	if (!vm_file || (vma->vm_flags & flags) != VM_MAYEXEC)
+		goto bail;
+
+	vm_inode = data_race(vm_file->f_inode);
+	vm_pgoff = data_race(vma->vm_pgoff);
+	vm_start = data_race(vma->vm_start);
+
+	offset = (loff_t)(vm_pgoff << PAGE_SHIFT) + (bp_vaddr - vm_start);
+	uprobe = find_uprobe_rcu(vm_inode, offset);
+	if (!uprobe)
+		goto bail;
+
+	/* now double check that nothing about MM changed */
+	if (!mmap_lock_speculation_end(mm, seq))
+		goto bail;
+
+	rcu_read_unlock();
+
+	/* happy case, we speculated successfully */
+	return uprobe;
+bail:
+	rcu_read_unlock();
+	return NULL;
+}
+
 /* assumes being inside RCU protected region */
 static struct uprobe *find_active_uprobe_rcu(unsigned long bp_vaddr, int *is_swbp)
 {
@@ -2088,6 +2135,10 @@ static struct uprobe *find_active_uprobe_rcu(unsigned long bp_vaddr, int *is_swb
 	struct uprobe *uprobe = NULL;
 	struct vm_area_struct *vma;
 
+	uprobe = find_active_uprobe_speculative(bp_vaddr);
+	if (uprobe)
+		return uprobe;
+
 	mmap_read_lock(mm);
 	vma = vma_lookup(mm, bp_vaddr);
 	if (vma) {
Given filp_cachep is already marked SLAB_TYPESAFE_BY_RCU, we can safely
access the vma->vm_file->f_inode field locklessly under just
rcu_read_lock() protection, which enables looking up a uprobe from
uprobes_tree completely locklessly and speculatively, without the need
to acquire mmap_lock for reads. In most cases, anyway, under the
assumption that there are no parallel mm and/or VMA modifications. The
underlying struct file's memory won't go away from under us (even if
struct file can be reused in the meantime).

We rely on the newly added mmap_lock_speculation_{start,end}() helpers
to validate that mm_struct stays intact for the entire duration of this
speculation. If not, we fall back to the mmap_lock-protected lookup.
The speculative logic is written in such a way that it will safely
handle any garbage values that might be read from vma or file structs.

Benchmarking results speak for themselves.

BEFORE (latest tip/perf/core)
=============================
uprobe-nop      ( 1 cpus):    3.384 ± 0.004M/s  (  3.384M/s/cpu)
uprobe-nop      ( 2 cpus):    5.456 ± 0.005M/s  (  2.728M/s/cpu)
uprobe-nop      ( 3 cpus):    7.863 ± 0.015M/s  (  2.621M/s/cpu)
uprobe-nop      ( 4 cpus):    9.442 ± 0.008M/s  (  2.360M/s/cpu)
uprobe-nop      ( 5 cpus):   11.036 ± 0.013M/s  (  2.207M/s/cpu)
uprobe-nop      ( 6 cpus):   10.884 ± 0.019M/s  (  1.814M/s/cpu)
uprobe-nop      ( 7 cpus):    7.897 ± 0.145M/s  (  1.128M/s/cpu)
uprobe-nop      ( 8 cpus):   10.021 ± 0.128M/s  (  1.253M/s/cpu)
uprobe-nop      (10 cpus):    9.932 ± 0.170M/s  (  0.993M/s/cpu)
uprobe-nop      (12 cpus):    8.369 ± 0.056M/s  (  0.697M/s/cpu)
uprobe-nop      (14 cpus):    8.678 ± 0.017M/s  (  0.620M/s/cpu)
uprobe-nop      (16 cpus):    7.392 ± 0.003M/s  (  0.462M/s/cpu)
uprobe-nop      (24 cpus):    5.326 ± 0.178M/s  (  0.222M/s/cpu)
uprobe-nop      (32 cpus):    5.426 ± 0.059M/s  (  0.170M/s/cpu)
uprobe-nop      (40 cpus):    5.262 ± 0.070M/s  (  0.132M/s/cpu)
uprobe-nop      (48 cpus):    6.121 ± 0.010M/s  (  0.128M/s/cpu)
uprobe-nop      (56 cpus):    6.252 ± 0.035M/s  (  0.112M/s/cpu)
uprobe-nop      (64 cpus):    7.644 ± 0.023M/s  (  0.119M/s/cpu)
uprobe-nop      (72 cpus):    7.781 ± 0.001M/s  (  0.108M/s/cpu)
uprobe-nop      (80 cpus):    8.992 ± 0.048M/s  (  0.112M/s/cpu)

AFTER
=====
uprobe-nop      ( 1 cpus):    3.534 ± 0.033M/s  (  3.534M/s/cpu)
uprobe-nop      ( 2 cpus):    6.701 ± 0.007M/s  (  3.351M/s/cpu)
uprobe-nop      ( 3 cpus):   10.031 ± 0.007M/s  (  3.344M/s/cpu)
uprobe-nop      ( 4 cpus):   13.003 ± 0.012M/s  (  3.251M/s/cpu)
uprobe-nop      ( 5 cpus):   16.274 ± 0.006M/s  (  3.255M/s/cpu)
uprobe-nop      ( 6 cpus):   19.563 ± 0.024M/s  (  3.261M/s/cpu)
uprobe-nop      ( 7 cpus):   22.696 ± 0.054M/s  (  3.242M/s/cpu)
uprobe-nop      ( 8 cpus):   24.534 ± 0.010M/s  (  3.067M/s/cpu)
uprobe-nop      (10 cpus):   30.475 ± 0.117M/s  (  3.047M/s/cpu)
uprobe-nop      (12 cpus):   33.371 ± 0.017M/s  (  2.781M/s/cpu)
uprobe-nop      (14 cpus):   38.864 ± 0.004M/s  (  2.776M/s/cpu)
uprobe-nop      (16 cpus):   41.476 ± 0.020M/s  (  2.592M/s/cpu)
uprobe-nop      (24 cpus):   64.696 ± 0.021M/s  (  2.696M/s/cpu)
uprobe-nop      (32 cpus):   85.054 ± 0.027M/s  (  2.658M/s/cpu)
uprobe-nop      (40 cpus):  101.979 ± 0.032M/s  (  2.549M/s/cpu)
uprobe-nop      (48 cpus):  110.518 ± 0.056M/s  (  2.302M/s/cpu)
uprobe-nop      (56 cpus):  117.737 ± 0.020M/s  (  2.102M/s/cpu)
uprobe-nop      (64 cpus):  124.613 ± 0.079M/s  (  1.947M/s/cpu)
uprobe-nop      (72 cpus):  133.239 ± 0.032M/s  (  1.851M/s/cpu)
uprobe-nop      (80 cpus):  142.037 ± 0.138M/s  (  1.775M/s/cpu)

Previously total throughput was maxing out at 11 mln/s, and gradually
declining past 8 cores. With this change, it now keeps growing with each
added CPU, reaching 142 mln/s at 80 CPUs (this was measured on an 80-core
Intel(R) Xeon(R) Gold 6138 CPU @ 2.00GHz).
Suggested-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 kernel/events/uprobes.c | 51 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)