Message ID | 20200726080224.205470-2-fly@kernel.page (mailing list archive)
---|---
State | New, archived
Series | [1/2] mm: make mm->locked_vm an atomic64 counter
On Sun, 26 Jul 2020, Pengfei Li wrote:

> Since mm->locked_vm is already an atomic counter, account_locked_vm()
> does not need to hold mmap_lock.

I am worried that this patch, already added to mmotm, along with its
1/2 making locked_vm an atomic64, might be rushed into v5.9 with just
that two-line commit description, and no discussion at all.

locked_vm belongs fundamentally to mm/mlock.c, and the lock to guard
it is mmap_lock; and mlock() has some complicated stuff to do under
that lock while it decides how to adjust locked_vm.

It is very easy to convert an unsigned long to an atomic64_t, but
"atomic read, check limit and do stuff, atomic add" does not give
the same guarantee as holding the right lock around it all.

(At the very least, __account_locked_vm() in 1/2 should be changed to
replace its atomic64_add by an atomic64_cmpxchg, to enforce the limit
that it just checked. But that will be no more than lipstick on a pig,
when the right lock that everyone else agrees upon is not being held.)

Now, it can be argued that our locked_vm and pinned_vm maintenance
is so random and deficient, and too difficult to keep right across
a sprawl of drivers, that we should just be grateful for those that
do volunteer to subject themselves to RLIMIT_MEMLOCK limitation,
and never mind if it's a little racy.

And it may well be that all those who have made considerable efforts
in the past to improve the situation, have more interesting things to
devote their time to, and would prefer not to get dragged back here.

But let's at least give this a little more visibility, and hope
to hear opinions one way or the other from those who care.

Hugh

>
> Signed-off-by: Pengfei Li <fly@kernel.page>
> ---
>  drivers/vfio/vfio_iommu_type1.c |  8 ++------
>  mm/util.c                       | 15 +++------------
>  2 files changed, 5 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index 78013be07fe7..53818fce78a6 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -376,12 +376,8 @@ static int vfio_lock_acct(struct vfio_dma *dma, long npage, bool async)
>  	if (!mm)
>  		return -ESRCH; /* process exited */
>
> -	ret = mmap_write_lock_killable(mm);
> -	if (!ret) {
> -		ret = __account_locked_vm(mm, abs(npage), npage > 0, dma->task,
> -					  dma->lock_cap);
> -		mmap_write_unlock(mm);
> -	}
> +	ret = __account_locked_vm(mm, abs(npage), npage > 0,
> +				  dma->task, dma->lock_cap);
>
>  	if (async)
>  		mmput(mm);
> diff --git a/mm/util.c b/mm/util.c
> index 473add0dc275..320fdd537aea 100644
> --- a/mm/util.c
> +++ b/mm/util.c
> @@ -424,8 +424,7 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
>   * @task: task used to check RLIMIT_MEMLOCK
>   * @bypass_rlim: %true if checking RLIMIT_MEMLOCK should be skipped
>   *
> - * Assumes @task and @mm are valid (i.e. at least one reference on each), and
> - * that mmap_lock is held as writer.
> + * Assumes @task and @mm are valid (i.e. at least one reference on each).
>   *
>   * Return:
>   * * 0 on success
> @@ -437,8 +436,6 @@ int __account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc,
>  	unsigned long locked_vm, limit;
>  	int ret = 0;
>
> -	mmap_assert_write_locked(mm);
> -
>  	locked_vm = atomic64_read(&mm->locked_vm);
>  	if (inc) {
>  		if (!bypass_rlim) {
> @@ -476,17 +473,11 @@ EXPORT_SYMBOL_GPL(__account_locked_vm);
>   */
>  int account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc)
>  {
> -	int ret;
> -
>  	if (pages == 0 || !mm)
>  		return 0;
>
> -	mmap_write_lock(mm);
> -	ret = __account_locked_vm(mm, pages, inc, current,
> -				  capable(CAP_IPC_LOCK));
> -	mmap_write_unlock(mm);
> -
> -	return ret;
> +	return __account_locked_vm(mm, pages, inc,
> +				   current, capable(CAP_IPC_LOCK));
>  }
>  EXPORT_SYMBOL_GPL(account_locked_vm);
>
> --
> 2.26.2
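For concreteness, here is a minimal sketch of the cmpxchg variant Hugh
suggests. It is illustrative only, not part of either patch: it assumes
locked_vm is already an atomic64_t per patch 1/2, and it uses the
kernel's atomic64_try_cmpxchg() helper, the idiomatic looping form of
the atomic64_cmpxchg() Hugh names. Retrying makes the limit check and
the add act on the same value, so a concurrent charge cannot slip past
RLIMIT_MEMLOCK, though as Hugh notes it still does not serialize
against the rest of what mlock() does under mmap_lock.

/*
 * Sketch only: charge 'pages' against 'limit', retrying if locked_vm
 * changed between the limit check and the update.
 */
static int account_locked_vm_cmpxchg(struct mm_struct *mm,
				     unsigned long pages,
				     unsigned long limit)
{
	s64 old = atomic64_read(&mm->locked_vm);

	do {
		if (old + pages > limit)
			return -ENOMEM;
		/* on failure, atomic64_try_cmpxchg() reloads 'old' */
	} while (!atomic64_try_cmpxchg(&mm->locked_vm, &old, old + pages));

	return 0;
}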
On Wed, Jul 29, 2020 at 12:21:11PM -0700, Hugh Dickins wrote:
> On Sun, 26 Jul 2020, Pengfei Li wrote:
>
> > Since mm->locked_vm is already an atomic counter, account_locked_vm()
> > does not need to hold mmap_lock.
>
> I am worried that this patch, already added to mmotm, along with its
> 1/2 making locked_vm an atomic64, might be rushed into v5.9 with just
> that two-line commit description, and no discussion at all.
>
> locked_vm belongs fundamentally to mm/mlock.c, and the lock to guard
> it is mmap_lock; and mlock() has some complicated stuff to do under
> that lock while it decides how to adjust locked_vm.
>
> It is very easy to convert an unsigned long to an atomic64_t, but
> "atomic read, check limit and do stuff, atomic add" does not give
> the same guarantee as holding the right lock around it all.

Yes, this is why I withdrew my attempt to do something similar last
year; I didn't want to make the accounting racy. Stack and heap growth
and mremap would be affected in addition to mlock.

It'd help to hear more about the motivation for this.

Daniel
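For reference, stack growth is one of the sites Daniel mentions:
mm/mmap.c checks RLIMIT_MEMLOCK while extending a VM_LOCKED vma,
relying on mmap_lock held as writer to make the check and the later
locked_vm update a single unit. Below is a simplified sketch of that
pattern, paraphrased rather than taken from the thread; it assumes the
mainline unsigned long locked_vm, i.e. before patch 1/2, and the
function name is invented for illustration.

/*
 * Simplified sketch of the check done while growing a VM_LOCKED stack
 * vma by 'grow' pages; the caller holds mmap_lock as writer, so
 * locked_vm cannot change between this check and the update that
 * follows it.
 */
static int may_grow_locked_vma(struct vm_area_struct *vma,
			       struct mm_struct *mm, unsigned long grow)
{
	if (vma->vm_flags & VM_LOCKED) {
		unsigned long locked = mm->locked_vm + grow;
		unsigned long limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;

		if (locked > limit && !capable(CAP_IPC_LOCK))
			return -ENOMEM;
	}
	return 0;
}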
On Wed, 29 Jul 2020 12:21:11 -0700 (PDT)
Hugh Dickins <hughd@google.com> wrote:

Sorry for the late reply.

> On Sun, 26 Jul 2020, Pengfei Li wrote:
>
> > Since mm->locked_vm is already an atomic counter,
> > account_locked_vm() does not need to hold mmap_lock.
>
> I am worried that this patch, already added to mmotm, along with its
> 1/2 making locked_vm an atomic64, might be rushed into v5.9 with just
> that two-line commit description, and no discussion at all.
>
> locked_vm belongs fundamentally to mm/mlock.c, and the lock to guard
> it is mmap_lock; and mlock() has some complicated stuff to do under
> that lock while it decides how to adjust locked_vm.
>
> It is very easy to convert an unsigned long to an atomic64_t, but
> "atomic read, check limit and do stuff, atomic add" does not give
> the same guarantee as holding the right lock around it all.
>
> (At the very least, __account_locked_vm() in 1/2 should be changed to
> replace its atomic64_add by an atomic64_cmpxchg, to enforce the limit
> that it just checked. But that will be no more than lipstick on a
> pig, when the right lock that everyone else agrees upon is not being
> held.)
>

Thank you for your detailed comment.

You are right: I should use atomic64_cmpxchg to guarantee the
RLIMIT_MEMLOCK limit.

> Now, it can be argued that our locked_vm and pinned_vm maintenance
> is so random and deficient, and too difficult to keep right across
> a sprawl of drivers, that we should just be grateful for those that
> do volunteer to subject themselves to RLIMIT_MEMLOCK limitation,
> and never mind if it's a little racy.
>
> And it may well be that all those who have made considerable efforts
> in the past to improve the situation, have more interesting things to
> devote their time to, and would prefer not to get dragged back here.
>
> But let's at least give this a little more visibility, and hope
> to hear opinions one way or the other from those who care.

Thank you. My patch should have been more carefully thought through.
I will send an email to Stephen soon asking him to remove these two
patches from the -mm tree.
On Thu, 30 Jul 2020 16:57:05 -0400
Daniel Jordan <daniel.m.jordan@oracle.com> wrote:

> On Wed, Jul 29, 2020 at 12:21:11PM -0700, Hugh Dickins wrote:
> > On Sun, 26 Jul 2020, Pengfei Li wrote:
> >
> > > Since mm->locked_vm is already an atomic counter,
> > > account_locked_vm() does not need to hold mmap_lock.
> >
> > I am worried that this patch, already added to mmotm, along with
> > its 1/2 making locked_vm an atomic64, might be rushed into v5.9
> > with just that two-line commit description, and no discussion at
> > all.
> >
> > locked_vm belongs fundamentally to mm/mlock.c, and the lock to
> > guard it is mmap_lock; and mlock() has some complicated stuff to
> > do under that lock while it decides how to adjust locked_vm.
> >
> > It is very easy to convert an unsigned long to an atomic64_t, but
> > "atomic read, check limit and do stuff, atomic add" does not give
> > the same guarantee as holding the right lock around it all.
>
> Yes, this is why I withdrew my attempt to do something similar last
> year; I didn't want to make the accounting racy. Stack and heap
> growth and mremap would be affected in addition to mlock.
>
> It'd help to hear more about the motivation for this.

Thanks for your comments.

My motivation is to allow mm-related counters to be read and written
safely without holding mmap_lock, but I am sorry I did not do it well.
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 78013be07fe7..53818fce78a6 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -376,12 +376,8 @@ static int vfio_lock_acct(struct vfio_dma *dma, long npage, bool async)
 	if (!mm)
 		return -ESRCH; /* process exited */

-	ret = mmap_write_lock_killable(mm);
-	if (!ret) {
-		ret = __account_locked_vm(mm, abs(npage), npage > 0, dma->task,
-					  dma->lock_cap);
-		mmap_write_unlock(mm);
-	}
+	ret = __account_locked_vm(mm, abs(npage), npage > 0,
+				  dma->task, dma->lock_cap);

 	if (async)
 		mmput(mm);
diff --git a/mm/util.c b/mm/util.c
index 473add0dc275..320fdd537aea 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -424,8 +424,7 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
  * @task: task used to check RLIMIT_MEMLOCK
  * @bypass_rlim: %true if checking RLIMIT_MEMLOCK should be skipped
  *
- * Assumes @task and @mm are valid (i.e. at least one reference on each), and
- * that mmap_lock is held as writer.
+ * Assumes @task and @mm are valid (i.e. at least one reference on each).
  *
  * Return:
  * * 0 on success
@@ -437,8 +436,6 @@ int __account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc,
 	unsigned long locked_vm, limit;
 	int ret = 0;

-	mmap_assert_write_locked(mm);
-
 	locked_vm = atomic64_read(&mm->locked_vm);
 	if (inc) {
 		if (!bypass_rlim) {
@@ -476,17 +473,11 @@ EXPORT_SYMBOL_GPL(__account_locked_vm);
  */
 int account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc)
 {
-	int ret;
-
 	if (pages == 0 || !mm)
 		return 0;

-	mmap_write_lock(mm);
-	ret = __account_locked_vm(mm, pages, inc, current,
-				  capable(CAP_IPC_LOCK));
-	mmap_write_unlock(mm);
-
-	return ret;
+	return __account_locked_vm(mm, pages, inc,
+				   current, capable(CAP_IPC_LOCK));
 }
 EXPORT_SYMBOL_GPL(account_locked_vm);
Since mm->locked_vm is already an atomic counter, account_locked_vm()
does not need to hold mmap_lock.

Signed-off-by: Pengfei Li <fly@kernel.page>
---
 drivers/vfio/vfio_iommu_type1.c |  8 ++------
 mm/util.c                       | 15 +++------------
 2 files changed, 5 insertions(+), 18 deletions(-)
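For context, here is a hypothetical driver-side caller of the helper
exported by this patch. The function name and surrounding logic are
invented for illustration; only the account_locked_vm() signature is
taken from the patch itself.

/* Hypothetical caller: charge 'npages' against RLIMIT_MEMLOCK before
 * pinning them, and undo the charge if pinning fails. */
static int my_driver_pin_pages(struct mm_struct *mm, unsigned long npages)
{
	int ret;

	ret = account_locked_vm(mm, npages, true);
	if (ret)
		return ret;	/* -ENOMEM if over RLIMIT_MEMLOCK */

	/* ... pin the pages; on failure, release the charge with:
	 *	account_locked_vm(mm, npages, false);
	 */
	return 0;
}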