Message ID | 20250310172318.653630-9-sj@kernel.org
---|---
State | New
Series | mm/madvise: batch tlb flushes for MADV_DONTNEED and MADV_FREE
+cc Rik on this, as he's working on TLB flush-related stuff. Maybe worth
cc-ing him on series respins too? Unless Rik objects of course :P

Again, nit, but your subject line/first line of commit message is definitely
too long here! :)

On Mon, Mar 10, 2025 at 10:23:17AM -0700, SeongJae Park wrote:
> MADV_DONTNEED[_LOCKED] and MADV_FREE internal logics for
> [process_]madvise() can be invoked with batched tlb flushes. Update
> vector_madvise() and do_madvise(), which are called for the two system
> calls respectively, to use those in the efficient way. Initialize an
> mmu_gather object before starting the internal works, and flush the
> gathered tlb entries at once after all the internal works are done.

super nit but logics -> logic and works -> work :)

I think we need more here as to why you're restricting to
MADV_DONTNEED_LOCKED and MADV_FREE. I see pageout initialises a tlb gather
object, so does cold, etc. etc.?

>
> Signed-off-by: SeongJae Park <sj@kernel.org>

This is really nice, I love how we're able to evolve this towards batching
flushes.

Overall though I'd like you to address some of the concerns here before
giving tags... :)

> ---
>  mm/madvise.c | 51 +++++++++++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 47 insertions(+), 4 deletions(-)
>
> diff --git a/mm/madvise.c b/mm/madvise.c
> index d7ea71c6422c..d5f4ce3041a4 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -905,6 +905,7 @@ static bool madvise_dontneed_free_valid_vma(struct vm_area_struct *vma,
>
>  struct madvise_behavior {
>  	int behavior;
> +	struct mmu_gather *tlb;
>  };

Aha! Good :)

I see in 9/9 you actually pull the caller_tlb stuff out, I still feel like we
should be threading this state through further, if possible, rather than
passing in behavior->tlb as a parameter. But this is nitty I suppose!

>
>  static long madvise_dontneed_free(struct vm_area_struct *vma,
> @@ -964,9 +965,11 @@ static long madvise_dontneed_free(struct vm_area_struct *vma,
>  	}
>
>  	if (behavior == MADV_DONTNEED || behavior == MADV_DONTNEED_LOCKED)
> -		return madvise_dontneed_single_vma(NULL, vma, start, end);
> +		return madvise_dontneed_single_vma(
> +				madv_behavior->tlb, vma, start, end);
>  	else if (behavior == MADV_FREE)
> -		return madvise_free_single_vma(NULL, vma, start, end);
> +		return madvise_free_single_vma(
> +				madv_behavior->tlb, vma, start, end);

Yeah as I said above be nice to just pass madv_behavior, makes things more
flexible to pass a pointer to the helper struct through, as you can

>  	else
>  		return -EINVAL;
>  }
> @@ -1639,6 +1642,32 @@ static void madvise_unlock(struct mm_struct *mm, int behavior)
>  	mmap_read_unlock(mm);
>  }
>
> +static bool madvise_batch_tlb_flush(int behavior)
> +{
> +	switch (behavior) {
> +	case MADV_DONTNEED:
> +	case MADV_DONTNEED_LOCKED:
> +		return true;
> +	default:
> +		return false;
> +	}
> +}

I kind of hate this madvise_ prefix stuff, like we're in mm/madvise.c, it's
pretty obvious static functions are related to madvise :) but this is a
pre-existing thing, not your fault, and it's actually right to maintain
consistency with this.

So this is purely a whine that can be >/dev/null.
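A minimal sketch of what the "just pass madv_behavior" shape suggested above
could look like, assuming madvise_dontneed_single_vma() and
madvise_free_single_vma() were reworked to take the helper struct; those
signatures are assumptions for illustration, not part of the posted patch:

	/*
	 * Hypothetical only: dispatch in madvise_dontneed_free() if the
	 * single-vma helpers accepted the whole madvise_behavior struct, so
	 * later per-call state can be added without changing signatures.
	 */
	if (behavior == MADV_DONTNEED || behavior == MADV_DONTNEED_LOCKED)
		return madvise_dontneed_single_vma(madv_behavior, vma, start, end);
	else if (behavior == MADV_FREE)
		return madvise_free_single_vma(madv_behavior, vma, start, end);
	else
		return -EINVAL;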
> +
> +static void madvise_init_tlb(struct madvise_behavior *madv_behavior,
> +		struct mm_struct *mm)
> +{
> +	if (!madvise_batch_tlb_flush(madv_behavior->behavior))
> +		return;
> +	tlb_gather_mmu(madv_behavior->tlb, mm);
> +}
> +
> +static void madvise_finish_tlb(struct madvise_behavior *madv_behavior)
> +{
> +	if (!madvise_batch_tlb_flush(madv_behavior->behavior))
> +		return;
> +	tlb_finish_mmu(madv_behavior->tlb);
> +}
> +

Nitty, but for both of these, usually I like the guard clause pattern, but
since it's such a trivial thing I think it reads better as:

	if (madvise_batch_tlb_flush(madv_behavior->behavior))
		tlb_gather_mmu(madv_behavior->tlb, mm);

and:

	if (madvise_batch_tlb_flush(madv_behavior->behavior))
		tlb_finish_mmu(madv_behavior->tlb);

>  static bool is_valid_madvise(unsigned long start, size_t len_in, int behavior)
>  {
>  	size_t len;
> @@ -1791,14 +1820,20 @@ static int madvise_do_behavior(struct mm_struct *mm,
>  int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int behavior)
>  {
>  	int error;
> -	struct madvise_behavior madv_behavior = {.behavior = behavior};
> +	struct mmu_gather tlb;
> +	struct madvise_behavior madv_behavior = {
> +		.behavior = behavior,
> +		.tlb = &tlb,
> +	};
>
>  	if (madvise_should_skip(start, len_in, behavior, &error))
>  		return error;
>  	error = madvise_lock(mm, behavior);
>  	if (error)
>  		return error;
> +	madvise_init_tlb(&madv_behavior, mm);
>  	error = madvise_do_behavior(mm, start, len_in, &madv_behavior);
> +	madvise_finish_tlb(&madv_behavior);
>  	madvise_unlock(mm, behavior);
>
>  	return error;
> @@ -1815,13 +1850,18 @@ static ssize_t vector_madvise(struct mm_struct *mm, struct iov_iter *iter,
>  {
>  	ssize_t ret = 0;
>  	size_t total_len;
> -	struct madvise_behavior madv_behavior = {.behavior = behavior};
> +	struct mmu_gather tlb;
> +	struct madvise_behavior madv_behavior = {
> +		.behavior = behavior,
> +		.tlb = &tlb,
> +	};
>
>  	total_len = iov_iter_count(iter);
>
>  	ret = madvise_lock(mm, behavior);
>  	if (ret)
>  		return ret;
> +	madvise_init_tlb(&madv_behavior, mm);
>
>  	while (iov_iter_count(iter)) {
>  		unsigned long start = (unsigned long)iter_iov_addr(iter);
> @@ -1850,14 +1890,17 @@ static ssize_t vector_madvise(struct mm_struct *mm, struct iov_iter *iter,
>  		}
>
>  			/* Drop and reacquire lock to unwind race. */
> +			madvise_finish_tlb(&madv_behavior);
>  			madvise_unlock(mm, behavior);
>  			madvise_lock(mm, behavior);
> +			madvise_init_tlb(&madv_behavior, mm);

Fun, but I guess necessary. I strongly suspect this code path will never
actually happen, but we need to account for it.

>  			continue;
>  		}
>  		if (ret < 0)
>  			break;
>  		iov_iter_advance(iter, iter_iov_len(iter));
>  	}
> +	madvise_finish_tlb(&madv_behavior);
>  	madvise_unlock(mm, behavior);
>
>  	ret = (total_len - iov_iter_count(iter)) ? : ret;
> --
> 2.39.5
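Stepping back, the batching lifecycle the patch establishes, condensed into
one place purely for orientation (a restatement of the quoted hunks, with
error handling elided; nothing here is new code):

	struct mmu_gather tlb;
	struct madvise_behavior madv_behavior = {
		.behavior = behavior,
		.tlb = &tlb,
	};

	madvise_lock(mm, behavior);		/* error handling elided */
	madvise_init_tlb(&madv_behavior, mm);	/* tlb_gather_mmu() when batching applies */
	/* one or more madvise_do_behavior() calls accumulate entries into tlb */
	madvise_finish_tlb(&madv_behavior);	/* single tlb_finish_mmu() flush */
	madvise_unlock(mm, behavior);

The only wrinkle is the -ERESTARTNOINTR path in vector_madvise(), where the
gather must be finished before the lock is dropped and re-initialised after
it is reacquired, as quoted above.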