Message ID | 20241211232754.1583023-17-kaleshsingh@google.com (mailing list archive)
---|---
State | Awaiting Upstream
Series | mm: Introduce arch_mmap_hint()
On Wed, Dec 11, 2024 at 3:31 PM Kalesh Singh <kaleshsingh@google.com> wrote:
>
> Commit 249608ee4713 ("mm: respect mmap hint address when aligning for THP")
> falls back to PAGE_SIZE alignment instead of THP alignment for anonymous
> mappings as long as a hint address is provided by the user -- even if we
> weren't able to allocate the unmapped area at the hint address in the end.
>
> This was done to address the immediate regression in anonymous mappings
> where the hint address was being ignored in some cases, due to commit
> efa7df3e3bb5 ("mm: align larger anonymous mappings on THP boundaries").
>
> It was later pointed out that this issue also existed for file-backed
> mappings from file systems that use thp_get_unmapped_area() for their
> .get_unmapped_area() file operation.
>
> The same fix was not applied for file-backed mappings since it would
> mean any mmap requests that provide a hint address would be only
> PAGE_SIZE-aligned regardless of whether allocation was successful at
> the hint address or not.
>
> Instead, use arch_mmap_hint() to first attempt allocation at the hint
> address and fall back to THP alignment if there isn't sufficient VA space
> to satisfy the allocation at the hint address.
>
> Signed-off-by: Kalesh Singh <kaleshsingh@google.com>

Reviewed-by: Yang Shi <shy828301@gmail.com>

> ---
>  mm/huge_memory.c | 17 ++++++++++-------
>  mm/mmap.c        |  1 -
>  2 files changed, 10 insertions(+), 8 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 2da5520bfe24..426761a30aff 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1097,6 +1097,16 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
>         loff_t off_align = round_up(off, size);
>         unsigned long len_pad, ret, off_sub;
>
> +       /*
> +        * If allocation at the hint address succeeds, respect the hint and
> +        * don't try to align to the THP boundary.
> +        *
> +        * If the requested extent is invalid, return the error immediately.
> +        */
> +       addr = arch_mmap_hint(filp, addr, len, off, flags);
> +       if (addr)
> +               return addr;
> +
>         if (!IS_ENABLED(CONFIG_64BIT) || in_compat_syscall())
>                 return 0;
>
> @@ -1117,13 +1127,6 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
>         if (IS_ERR_VALUE(ret))
>                 return 0;
>
> -       /*
> -        * Do not try to align to THP boundary if allocation at the address
> -        * hint succeeds.
> -        */
> -       if (ret == addr)
> -               return addr;
> -
>         off_sub = (off - ret) & (size - 1);
>
>         if (test_bit(MMF_TOPDOWN, &current->mm->flags) && !off_sub)
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 76dd6acdf051..3286fdff26f2 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -814,7 +814,6 @@ __get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
>         if (get_area) {
>                 addr = get_area(file, addr, len, pgoff, flags);
>         } else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && !file
> -                  && !addr /* no hint */
>                    && IS_ALIGNED(len, PMD_SIZE)) {
>                 /* Ensures that larger anonymous mappings are THP aligned. */
>                 addr = thp_get_unmapped_area_vmflags(file, addr, len,
> --
> 2.47.0.338.g60cca15819-goog
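To make the intended policy concrete, below is a minimal userspace sketch (not part of the patch, and not the kernel implementation) of the behavior the series aims for: honor the caller's hint address when the mapping can be placed there, and only otherwise fall back to a THP/PMD-aligned placement. The map_with_hint() helper, the 2 MiB PMD_ALIGN value, and the use of MAP_FIXED_NOREPLACE are illustrative assumptions; in the kernel, the equivalent decision is made by arch_mmap_hint() returning the hint on success before __thp_get_unmapped_area() applies any alignment.

#define _GNU_SOURCE
#include <stdio.h>
#include <stdint.h>
#include <sys/mman.h>

/* Assumed THP/PMD alignment for illustration: 2 MiB (x86-64). */
#define PMD_ALIGN (2UL << 20)

/*
 * Try to place an anonymous mapping exactly at 'hint'; if that is not
 * possible, fall back to a PMD_ALIGN-aligned placement so the region is
 * eligible for THP.  'len' is assumed to be a multiple of the page size.
 */
static void *map_with_hint(void *hint, size_t len)
{
	void *p = mmap(hint, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE, -1, 0);
	if (p != MAP_FAILED) {
		if (p == hint)
			return p;	/* hint honored: no forced alignment */
		munmap(p, len);		/* pre-4.17 kernels ignore the flag */
	}

	/* Hint not available: over-allocate, then trim to an aligned block. */
	size_t pad = len + PMD_ALIGN;
	char *raw = mmap(NULL, pad, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (raw == MAP_FAILED)
		return MAP_FAILED;

	uintptr_t aligned = ((uintptr_t)raw + PMD_ALIGN - 1) & ~(PMD_ALIGN - 1);
	if (aligned > (uintptr_t)raw)
		munmap(raw, aligned - (uintptr_t)raw);		/* unaligned head */
	munmap((void *)(aligned + len),
	       (uintptr_t)raw + pad - (aligned + len));		/* unused tail */
	return (void *)aligned;
}

int main(void)
{
	void *p = map_with_hint((void *)0x7f0000000000UL, 8UL << 20);
	printf("mapped at %p\n", p);
	return 0;
}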