
iov_iter: fix copy_page_from_iter_atomic() for highmem

Message ID 20241112-geregelt-hirte-ab810337e3c0@brauner (mailing list archive)
State New
Series iov_iter: fix copy_page_from_iter_atomic() for highmem

Commit Message

Christian Brauner Nov. 12, 2024, 3:36 p.m. UTC
When copy_page_from_iter_atomic() was fixed in c749d9b7ebbc ("iov_iter: fix
copy_page_from_iter_atomic() if KMAP_LOCAL_FORCE_MAP"), the PageHighMem()
check was moved out of the loop. If copy_page_from_iter_atomic() crosses a
page boundary, it will keep using the stale PageHighMem() result computed
for an earlier page.

Fixes: 908a1ad89466 ("iov_iter: Handle compound highmem pages in copy_page_from_iter_atomic()")
Fixes: c749d9b7ebbc ("iov_iter: fix copy_page_from_iter_atomic() if KMAP_LOCAL_FORCE_MAP")
Cc: stable@vger.kernel.org
Reviewed-by: David Howells <dhowells@redhat.com>
Signed-off-by: Christian Brauner <brauner@kernel.org>
---
Hey Linus,

I think the original fix was buggy, but then again my knowledge of
highmem isn't particularly detailed. This is compile-tested only. If it's
correct, I would ask you to please apply it directly.

Thanks!
Christian
---
 lib/iov_iter.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)
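
For readers less familiar with highmem, here is a minimal sketch of the
pattern this patch worries about (illustration only, not kernel code; every
name in it is made up): a predicate cached before the loop describes the
page the function was handed, while the loop itself advances "page".

/* Illustration only: hypothetical names, not kernel API. */
#define FAKE_PAGE_SIZE 4096

struct fake_page { int is_highmem; };

static unsigned long stale_check_loop(struct fake_page *page,
                                      unsigned long offset,
                                      unsigned long bytes)
{
        unsigned long n, copied = 0;
        int uses_kmap = page->is_highmem;       /* computed once, up front */

        do {
                n = bytes - copied;
                if (uses_kmap) {
                        /*
                         * "page" may now point at a different page whose
                         * highmem status was never re-checked.  Whether
                         * the pages of one folio can actually differ here
                         * is the crux of the discussion below.
                         */
                        page += offset / FAKE_PAGE_SIZE;
                        offset %= FAKE_PAGE_SIZE;
                        if (n > FAKE_PAGE_SIZE - offset)
                                n = FAKE_PAGE_SIZE - offset;
                }
                /* ...map the page, copy n bytes, unmap... */
                copied += n;
                offset += n;
        } while (uses_kmap && copied != bytes && n > 0);

        return copied;
}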

Comments

Hugh Dickins Nov. 12, 2024, 4:51 p.m. UTC | #1
On Tue, 12 Nov 2024, Christian Brauner wrote:

> When copy_page_from_iter_atomic() was fixed in c749d9b7ebbc ("iov_iter: fix
> copy_page_from_iter_atomic() if KMAP_LOCAL_FORCE_MAP"), the PageHighMem()
> check was moved out of the loop. If copy_page_from_iter_atomic() crosses a
> page boundary, it will keep using the stale PageHighMem() result computed
> for an earlier page.
> 
> Fixes: 908a1ad89466 ("iov_iter: Handle compound highmem pages in copy_page_from_iter_atomic()")
> Fixes: c749d9b7ebbc ("iov_iter: fix copy_page_from_iter_atomic() if KMAP_LOCAL_FORCE_MAP")
> Cc: stable@vger.kernel.org
> Reviewed-by: David Howells <dhowells@redhat.com>
> Signed-off-by: Christian Brauner <brauner@kernel.org>
> ---
> Hey Linus,
> 
> I think the original fix was buggy, but then again my knowledge of
> highmem isn't particularly detailed. This is compile-tested only. If it's
> correct, I would ask you to please apply it directly.

I haven't seen whatever discussion led up to this.  I don't believe
my commit was buggy (setting uses_kmap once at the top was intentional);
but I haven't looked at the other Fixee, and I've no objection if you all
prefer to add this on.

I imagine you're worried by the idea of a folio getting passed in
whose first struct page is in a lowmem pageblock, but which somehow
spans pageblocks so that a later struct page is in a highmem pageblock.

That does not happen - except perhaps in the case of a hugetlb gigantic
folio, cobbled together from separate pageblocks.  But the code here,
before my change and after it and after this mod, does not allow for
that case anyway - the "page += offset / PAGE_SIZE" is assuming that
struct pages are contiguous.  If there is a worry here (I assumed not),
I think it would be that.
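
As an aside on that contiguity point: plain pointer arithmetic on struct
page assumes that adjacent pfns have adjacent memmap entries, which
SPARSEMEM without VMEMMAP does not guarantee across sections; the kernel's
nth_page() helper translates via the pfn instead. A sketch of the
difference (kernel context assumed; this is not a proposed change):

#include <linux/mm.h>

/* What the code here does today: memmap contiguity assumed. */
static inline struct page *advance_contig(struct page *page, size_t offset)
{
        return page + offset / PAGE_SIZE;
}

/*
 * nth_page() goes through the pfn, so it also copes with a
 * section-allocated memmap (the gigantic-folio case above).
 */
static inline struct page *advance_sparse(struct page *page, size_t offset)
{
        return nth_page(page, offset / PAGE_SIZE);
}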

Cc'ing Matthew who is much more alert to such issues than I am.
Dashing out shortly, back in two hours,
Hugh

> 
> Thanks!
> Christian
> ---
>  lib/iov_iter.c | 12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/lib/iov_iter.c b/lib/iov_iter.c
> index 908e75a28d90..e90a5ababb11 100644
> --- a/lib/iov_iter.c
> +++ b/lib/iov_iter.c
> @@ -457,12 +457,16 @@ size_t iov_iter_zero(size_t bytes, struct iov_iter *i)
>  }
>  EXPORT_SYMBOL(iov_iter_zero);
>  
> +static __always_inline bool iter_atomic_uses_kmap(struct page *page)
> +{
> +	return IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP) ||
> +	       PageHighMem(page);
> +}
> +
>  size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
>  		size_t bytes, struct iov_iter *i)
>  {
>  	size_t n, copied = 0;
> -	bool uses_kmap = IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP) ||
> -			 PageHighMem(page);
>  
>  	if (!page_copy_sane(page, offset, bytes))
>  		return 0;
> @@ -473,7 +477,7 @@ size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
>  		char *p;
>  
>  		n = bytes - copied;
> -		if (uses_kmap) {
> +		if (iter_atomic_uses_kmap(page)) {
>  			page += offset / PAGE_SIZE;
>  			offset %= PAGE_SIZE;
>  			n = min_t(size_t, n, PAGE_SIZE - offset);
> @@ -484,7 +488,7 @@ size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
>  		kunmap_atomic(p);
>  		copied += n;
>  		offset += n;
> -	} while (uses_kmap && copied != bytes && n > 0);
> +	} while (iter_atomic_uses_kmap(page) && copied != bytes && n > 0);
>  
>  	return copied;
>  }
> -- 
> 2.45.2
Linus Torvalds Nov. 12, 2024, 4:59 p.m. UTC | #2
On Tue, 12 Nov 2024 at 07:36, Christian Brauner <brauner@kernel.org> wrote:
>
> Hey Linus,
>
> I think the original fix was buggy, but then again my knowledge of
> highmem isn't particularly detailed. This is compile-tested only. If it's
> correct, I would ask you to please apply it directly.

No, I think the original fix was fine.

As Hugh says, the "PageHighMem(page)" test is valid for the whole
folio, even if there are multiple pages. It's not some kind of flag
that changes dynamically per page, and a folio that spans from lowmem
to highmem would be insane.

So doing that test just once at the top of the function is actually
the correct thing to do, even if it might look a bit wrong.

At most, maybe add a comment to that 'uses_kmap' initialization.

             Linus
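
For what it's worth, the comment Linus suggests might look something like
this (the wording is a guess, not something from this thread):

        size_t n, copied = 0;
        /*
         * A folio is either entirely highmem or entirely lowmem, so
         * checking the first page once up front holds for the whole
         * copy even when the loop below advances "page".
         */
        bool uses_kmap = IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP) ||
                         PageHighMem(page);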
Christian Brauner Nov. 13, 2024, 10:14 a.m. UTC | #3
On Tue, Nov 12, 2024 at 08:51:04AM -0800, Hugh Dickins wrote:
> On Tue, 12 Nov 2024, Christian Brauner wrote:
> 
> > When copy_page_from_iter_atomic() was fixed in c749d9b7ebbc ("iov_iter: fix
> > copy_page_from_iter_atomic() if KMAP_LOCAL_FORCE_MAP"), the PageHighMem()
> > check was moved out of the loop. If copy_page_from_iter_atomic() crosses a
> > page boundary, it will keep using the stale PageHighMem() result computed
> > for an earlier page.
> > 
> > Fixes: 908a1ad89466 ("iov_iter: Handle compound highmem pages in copy_page_from_iter_atomic()")
> > Fixes: c749d9b7ebbc ("iov_iter: fix copy_page_from_iter_atomic() if KMAP_LOCAL_FORCE_MAP")
> > Cc: stable@vger.kernel.org
> > Reviewed-by: David Howells <dhowells@redhat.com>
> > Signed-off-by: Christian Brauner <brauner@kernel.org>
> > ---
> > Hey Linus,
> > 
> > I think the original fix was buggy, but then again my knowledge of
> > highmem isn't particularly detailed. This is compile-tested only. If it's
> > correct, I would ask you to please apply it directly.
> 
> I haven't seen whatever discussion led up to this.  I don't believe
> my commit was buggy (setting uses_kmap once at the top was intentional);
> but I haven't looked at the other Fixee, and I've no objection if you all
> prefer to add this on.
> 
> I imagine you're worried by the idea of a folio getting passed in
> whose first struct page is in a lowmem pageblock, but which somehow
> spans pageblocks so that a later struct page is in a highmem pageblock.
> 
> That does not happen - except perhaps in the case of a hugetlb gigantic
> folio, cobbled together from separate pageblocks.  But the code here,
> before my change and after it and after this mod, does not allow for
> that case anyway - the "page += offset / PAGE_SIZE" is assuming that
> struct pages are contiguous.  If there is a worry here (I assumed not),
> I think it would be that.

Thank you for the detailed reply; it really cleared a lot of things up
for me. I was quite confused by the change at first, which is why I
thought I'd just send a patch and see whether I could get a good
explanation. :)

Patch

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 908e75a28d90..e90a5ababb11 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -457,12 +457,16 @@ size_t iov_iter_zero(size_t bytes, struct iov_iter *i)
 }
 EXPORT_SYMBOL(iov_iter_zero);
 
+static __always_inline bool iter_atomic_uses_kmap(struct page *page)
+{
+	return IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP) ||
+	       PageHighMem(page);
+}
+
 size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
 		size_t bytes, struct iov_iter *i)
 {
 	size_t n, copied = 0;
-	bool uses_kmap = IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP) ||
-			 PageHighMem(page);
 
 	if (!page_copy_sane(page, offset, bytes))
 		return 0;
@@ -473,7 +477,7 @@ size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
 		char *p;
 
 		n = bytes - copied;
-		if (uses_kmap) {
+		if (iter_atomic_uses_kmap(page)) {
 			page += offset / PAGE_SIZE;
 			offset %= PAGE_SIZE;
 			n = min_t(size_t, n, PAGE_SIZE - offset);
@@ -484,7 +488,7 @@ size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
 		kunmap_atomic(p);
 		copied += n;
 		offset += n;
-	} while (uses_kmap && copied != bytes && n > 0);
+	} while (iter_atomic_uses_kmap(page) && copied != bytes && n > 0);
 
 	return copied;
 }