Message ID | 20210907201456.4036910-1-Liam.Howlett@oracle.com (mailing list archive) |
---|---|
State | New |
Series | [v3] mmap_lock: Change trace and locking order |
On Tue, 7 Sep 2021 20:15:19 +0000 Liam Howlett <liam.howlett@oracle.com> wrote:

> The printed messages from the mmap_lock trace can appear out of order.
> This results in confusing trace logs such as:
>
> task cpu atomic counter: message
> ---------------------------------------------
> task-749 [006] .... 14437980: mmap_lock_acquire_returned: mm=00000000c94d28b8 memcg_path= write=true success=true
> task-750 [007] .... 14437981: mmap_lock_acquire_returned: mm=00000000c94d28b8 memcg_path= write=true success=true
> task-749 [006] .... 14437983: mmap_lock_released: mm=00000000c94d28b8 memcg_path= write=true
>
> when the actual series of events is as follows:
>
> task-749 [006] mmap_lock_acquire_returned: mm=00000000c94d28b8 memcg_path= write=true success=true
> task-749 [006] mmap_lock_released: mm=00000000c94d28b8 memcg_path= write=true
>
> task-750 [007] mmap_lock_acquire_returned: mm=00000000c94d28b8 memcg_path= write=true success=true
>
> The trace log ordering is incorrect because the release event is
> emitted outside of the lock itself: another task can acquire the lock
> and log its acquire event before the release event is recorded. The
> ordering can be guaranteed by emitting the acquire-success and release
> trace events while the lock is still held.
>
> Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
> Suggested-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

FYI,

If you received Acks for a patch and you resend just to update the change
log, you can include those Acks in that email, as they were already given
for the code change. If you change the code, you may need to ask for the
reviews/Acks again. But since this time you only changed the change log,
and the code is still the same, you should have included:

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>

-- Steve

> ---
>  include/linux/mmap_lock.h | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
> index 0540f0156f58..b179f1e3541a 100644
> --- a/include/linux/mmap_lock.h
> +++ b/include/linux/mmap_lock.h
> @@ -101,14 +101,14 @@ static inline bool mmap_write_trylock(struct mm_struct *mm)
>
>  static inline void mmap_write_unlock(struct mm_struct *mm)
>  {
> -	up_write(&mm->mmap_lock);
>  	__mmap_lock_trace_released(mm, true);
> +	up_write(&mm->mmap_lock);
>  }
>
>  static inline void mmap_write_downgrade(struct mm_struct *mm)
>  {
> -	downgrade_write(&mm->mmap_lock);
>  	__mmap_lock_trace_acquire_returned(mm, false, true);
> +	downgrade_write(&mm->mmap_lock);
>  }
>
>  static inline void mmap_read_lock(struct mm_struct *mm)
> @@ -140,8 +140,8 @@ static inline bool mmap_read_trylock(struct mm_struct *mm)
>
>  static inline void mmap_read_unlock(struct mm_struct *mm)
>  {
> -	up_read(&mm->mmap_lock);
>  	__mmap_lock_trace_released(mm, false);
> +	up_read(&mm->mmap_lock);
>  }
>
>  static inline bool mmap_read_trylock_non_owner(struct mm_struct *mm)
> @@ -155,8 +155,8 @@ static inline bool mmap_read_trylock_non_owner(struct mm_struct *mm)
>
>  static inline void mmap_read_unlock_non_owner(struct mm_struct *mm)
>  {
> -	up_read_non_owner(&mm->mmap_lock);
>  	__mmap_lock_trace_released(mm, false);
> +	up_read_non_owner(&mm->mmap_lock);
>  }
>
>  static inline void mmap_assert_locked(struct mm_struct *mm)
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 0540f0156f58..b179f1e3541a 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -101,14 +101,14 @@ static inline bool mmap_write_trylock(struct mm_struct *mm)
 
 static inline void mmap_write_unlock(struct mm_struct *mm)
 {
-	up_write(&mm->mmap_lock);
 	__mmap_lock_trace_released(mm, true);
+	up_write(&mm->mmap_lock);
 }
 
 static inline void mmap_write_downgrade(struct mm_struct *mm)
 {
-	downgrade_write(&mm->mmap_lock);
 	__mmap_lock_trace_acquire_returned(mm, false, true);
+	downgrade_write(&mm->mmap_lock);
 }
 
 static inline void mmap_read_lock(struct mm_struct *mm)
@@ -140,8 +140,8 @@ static inline bool mmap_read_trylock(struct mm_struct *mm)
 
 static inline void mmap_read_unlock(struct mm_struct *mm)
 {
-	up_read(&mm->mmap_lock);
 	__mmap_lock_trace_released(mm, false);
+	up_read(&mm->mmap_lock);
 }
 
 static inline bool mmap_read_trylock_non_owner(struct mm_struct *mm)
@@ -155,8 +155,8 @@ static inline bool mmap_read_trylock_non_owner(struct mm_struct *mm)
 
 static inline void mmap_read_unlock_non_owner(struct mm_struct *mm)
 {
-	up_read_non_owner(&mm->mmap_lock);
 	__mmap_lock_trace_released(mm, false);
+	up_read_non_owner(&mm->mmap_lock);
 }
 
 static inline void mmap_assert_locked(struct mm_struct *mm)
The printed messages from the mmap_lock trace can appear out of order. This results in confusing trace logs such as:

task cpu atomic counter: message
---------------------------------------------
task-749 [006] .... 14437980: mmap_lock_acquire_returned: mm=00000000c94d28b8 memcg_path= write=true success=true
task-750 [007] .... 14437981: mmap_lock_acquire_returned: mm=00000000c94d28b8 memcg_path= write=true success=true
task-749 [006] .... 14437983: mmap_lock_released: mm=00000000c94d28b8 memcg_path= write=true

when the actual series of events is as follows:

task-749 [006] mmap_lock_acquire_returned: mm=00000000c94d28b8 memcg_path= write=true success=true
task-749 [006] mmap_lock_released: mm=00000000c94d28b8 memcg_path= write=true

task-750 [007] mmap_lock_acquire_returned: mm=00000000c94d28b8 memcg_path= write=true success=true

The trace log ordering is incorrect because the release event is emitted outside of the lock itself: another task can acquire the lock and log its acquire event before the release event is recorded. The ordering can be guaranteed by emitting the acquire-success and release trace events while the lock is still held.

Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Suggested-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

---
 include/linux/mmap_lock.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
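For readers who want to see the ordering argument outside the kernel, here is a minimal userspace sketch that is not part of the patch. It mimics the same pattern with a POSIX rwlock: the helper names write_unlock_traced and worker, the use of pthreads, and the printf-based "trace" records are all illustrative assumptions; only the trace-before-unlock ordering mirrors the change above.

```c
/*
 * Userspace illustration only (not kernel code).  Because the "released"
 * line is printed while the write lock is still held, any later
 * "acquire_returned" line from another thread must appear after it in the
 * output, matching the ordering guarantee the patch gives the mmap_lock
 * tracepoints.  Build with: cc sketch.c -o sketch -pthread
 */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;

/* Mirrors the patched mmap_write_unlock(): emit the release record first,
 * then drop the lock, so no waiter can log its acquire before it. */
static void write_unlock_traced(const char *task)
{
	printf("%s: mmap_lock_released write=true\n", task);
	pthread_rwlock_unlock(&lock);
}

static void *worker(void *arg)
{
	const char *task = arg;

	pthread_rwlock_wrlock(&lock);
	printf("%s: mmap_lock_acquire_returned write=true success=true\n", task);
	write_unlock_traced(task);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	/* Task names are borrowed from the trace excerpt above. */
	pthread_create(&a, NULL, worker, (void *)"task-749");
	pthread_create(&b, NULL, worker, (void *)"task-750");
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}
```

The trade-off, as in the patch, is that the trace record is now emitted with the lock still held, marginally extending the hold time in exchange for a log whose order always matches the lock order.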