[v5,20/25] fs: Handle intra-page faults in copy_mount_options()

Message ID 20200624175244.25837-21-catalin.marinas@arm.com
State New, archived
Series arm64: Memory Tagging Extension user-space support

Commit Message

Catalin Marinas June 24, 2020, 5:52 p.m. UTC
The copy_mount_options() function takes a user pointer argument but no
size, and it tries to read up to PAGE_SIZE bytes. However,
copy_from_user() is not guaranteed to copy all of the accessible bytes
if, for example, the access crosses a page boundary and faults on the
second page. To work around this, the current copy_mount_options()
implementation performs two copy_from_user() passes: the first to the
end of the current page and the second for what's left in the
subsequent page.
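
For reference, the two passes, condensed from the pre-patch
fs/namespace.c (the '-' lines in the hunks below, error paths
included), look like this; recall that copy_from_user() returns the
number of bytes it could *not* copy:

	size = PAGE_SIZE - offset_in_page(data);

	/* Pass 1: up to the end of the page containing 'data'; any
	 * shortfall at all is treated as a hard failure. */
	if (copy_from_user(copy, data, size)) {
		kfree(copy);
		return ERR_PTR(-EFAULT);
	}
	/* Pass 2: whatever is left may sit on the following page. */
	if (size != PAGE_SIZE) {
		if (copy_from_user(copy + size, data + size, PAGE_SIZE - size))
			memset(copy + size, 0, PAGE_SIZE - size);
	}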

On arm64 with MTE enabled, access to a user page may trigger a fault
after part of the buffer has been copied (when the user pointer tag,
bits 56-59, no longer matches the allocation tag stored in memory).
Allow copy_mount_options() to handle such intra-page faults by returning
-EFAULT only if the first copy_from_user() has not copied any bytes.
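
To make the failure mode concrete, here is a hypothetical userspace
sketch (not part of the patch, and only meaningful on an arm64 system
with MTE and synchronous tag checking enabled via
prctl(PR_SET_TAGGED_ADDR_CTRL, ...)). It maps a PROT_MTE page and gives
two adjacent 16-byte granules different allocation tags, so a kernel
copy_from_user() through a pointer tagged for the first granule faults
part-way through the page:

	#include <stddef.h>
	#include <sys/mman.h>

	#ifndef PROT_MTE
	#define PROT_MTE	0x20	/* arm64-specific mmap/mprotect flag */
	#endif

	/* Write the allocation tag held in bits 56-59 of 'p' to the
	 * 16-byte granule that 'p' points at. */
	static inline void mte_set_tag(void *p)
	{
		asm volatile(".arch armv8.5-a+memtag\n\tstg %0, [%0]"
			     : : "r" (p) : "memory");
	}

	static void *mixed_tag_options(void)
	{
		/* Error handling omitted for brevity. */
		unsigned long page = (unsigned long)mmap(NULL, 4096,
					PROT_READ | PROT_WRITE | PROT_MTE,
					MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		/* Tag the first granule with 1, the second with 2. */
		mte_set_tag((void *)(page | (1UL << 56)));
		mte_set_tag((void *)((page + 16) | (2UL << 56)));

		/* A pointer tagged 1 is only valid for the first 16
		 * bytes; a kernel access through it faults mid-page. */
		return (void *)(page | (1UL << 56));
	}

Before this patch, passing such a pointer as the data argument of
mount(2) made copy_mount_options() fail with -EFAULT even if a
complete, NUL-terminated options string fit within the accessible
bytes; with this change the partial copy is kept and the inaccessible
tail of the kernel buffer is zeroed.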

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
---

Notes:
    v4:
    - Rewrite to avoid arch_has_exact_copy_from_user()
    
    New in v3.

 fs/namespace.c | 24 +++++++++++++++++++++---
 1 file changed, 21 insertions(+), 3 deletions(-)

Patch

diff --git a/fs/namespace.c b/fs/namespace.c
index f30ed401cc6d..5b6a9c459674 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -3074,7 +3074,7 @@ static void shrink_submounts(struct mount *mnt)
 void *copy_mount_options(const void __user * data)
 {
 	char *copy;
-	unsigned size;
+	unsigned size, left;
 
 	if (!data)
 		return NULL;
@@ -3085,12 +3085,30 @@ void *copy_mount_options(const void __user * data)
 
 	size = PAGE_SIZE - offset_in_page(data);
 
-	if (copy_from_user(copy, data, size)) {
+	/*
+	 * Attempt to copy to the end of the first user page. On success,
+	 * left == 0, copy the rest from the second user page (if it is
+	 * accessible). copy_from_user() will zero the part of the kernel
+	 * buffer not copied into.
+	 *
+	 * On architectures with intra-page faults (arm64 with MTE), the read
+	 * from the first page may fail after copying part of the user data
+	 * (left > 0 && left < size). Do not attempt the second copy in this
+	 * case as the end of the valid user buffer has already been reached.
+	 * Ensure, however, that the second part of the kernel buffer is
+	 * zeroed.
+	 */
+	left = copy_from_user(copy, data, size);
+	if (left == size) {
 		kfree(copy);
 		return ERR_PTR(-EFAULT);
 	}
 	if (size != PAGE_SIZE) {
-		if (copy_from_user(copy + size, data + size, PAGE_SIZE - size))
+		if (left == 0)
+			/* return not relevant, just silence the compiler */
+			left = copy_from_user(copy + size, data + size,
+					      PAGE_SIZE - size);
+		else
 			memset(copy + size, 0, PAGE_SIZE - size);
 	}
 	return copy;
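
For readability, this is the resulting function with the hunks above
applied; the code between the hunks (the kmalloc() of the page-sized
buffer) is assumed unchanged from the pre-patch source:

	void *copy_mount_options(const void __user * data)
	{
		char *copy;
		unsigned size, left;

		if (!data)
			return NULL;

		copy = kmalloc(PAGE_SIZE, GFP_KERNEL);
		if (!copy)
			return ERR_PTR(-ENOMEM);

		size = PAGE_SIZE - offset_in_page(data);

		left = copy_from_user(copy, data, size);
		if (left == size) {
			kfree(copy);
			return ERR_PTR(-EFAULT);
		}
		if (size != PAGE_SIZE) {
			if (left == 0)
				/* return not relevant, just silence the compiler */
				left = copy_from_user(copy + size, data + size,
						      PAGE_SIZE - size);
			else
				memset(copy + size, 0, PAGE_SIZE - size);
		}
		return copy;
	}

Note that only the first pass distinguishes a partial copy from a
complete fault; a fault during the second pass simply leaves the
remainder zeroed, which copy_from_user() already guarantees.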