From patchwork Fri Sep 22 11:30:33 2023
X-Patchwork-Submitter: David Howells <dhowells@redhat.com>
X-Patchwork-Id: 13395604
From: David Howells <dhowells@redhat.com>
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
	Christian Brauner, David Laight, Matthew Wilcox, Brendan Higgins,
	David Gow, linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
	linux-mm@kvack.org, netdev@vger.kernel.org,
	linux-kselftest@vger.kernel.org, kunit-dev@googlegroups.com,
	linux-kernel@vger.kernel.org, Andrew Morton, Christian Brauner,
	David Hildenbrand, John Hubbard, Huacai Chen, WANG Xuerui,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Christian Borntraeger, Sven Schnelle, loongarch@lists.linux.dev,
	linux-s390@vger.kernel.org
Subject: [PATCH v3 05/10] iov_iter: Create a function to prepare userspace VM for UBUF/IOVEC tests
Date: Fri, 22 Sep 2023 12:30:33 +0100
Message-ID: <20230922113038.1135236-6-dhowells@redhat.com>
In-Reply-To: <20230922113038.1135236-1-dhowells@redhat.com>
References: <20230922113038.1135236-1-dhowells@redhat.com>
MIME-Version: 1.0
Create a function to set up a userspace VM for the kunit testing thread and
set up a buffer within it such that
ITER_UBUF and ITER_IOVEC tests can be performed.

Note that this requires current->mm to point to a sufficiently set-up
mm_struct.  This is done by partially mirroring what execve does.  The
following steps are performed:

 (1) Allocate an mm_struct and pick an arch layout (required to set
     mm->get_unmapped_area).

 (2) Create an empty "stack" VMA so that the VMA maple tree is set up and
     won't cause a crash in the maple tree code later.  We don't care about
     the stack itself, as we're not actually going to execute userspace.

 (3) Create an anon file and attach a bunch of folios to it so that the
     requested number of pages are accessible.

 (4) Make the kthread use the mm.  This must be done before mmap is called.

 (5) Shared-mmap the anon file into the allocated mm_struct.

This requires access to otherwise unexported core symbols: mm_alloc(),
vm_area_alloc(), insert_vm_struct(), arch_pick_mmap_layout() and
anon_inode_getfile_secure(), which I've exported _GPL.

[?] Would it be better if this were done in core and not in a module?
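For anyone unfamiliar with the pattern being set up here: steps (3) and (5)
are the in-kernel counterpart of what a userspace program would do with
memfd_create() and a MAP_SHARED mmap(), where the pages stay attached to a
file so they can be found again and checked later.  A minimal userspace
sketch of that analogue (illustration only, not part of this patch; the file
name and page count are arbitrary):

```c
#include <assert.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

/*
 * Create an anonymous, memory-backed file of npages pages and map it
 * MAP_SHARED, loosely mirroring what iov_kunit_create_user_buf() does with
 * an anon_inode file and vm_mmap().  Returns 0 on success, -1 on failure.
 */
static int map_shared_anon_file(size_t npages)
{
	size_t size = npages * (size_t)sysconf(_SC_PAGESIZE);
	unsigned char *buf;
	int fd;

	/* Analogue of step (3): an anonymous file to back the pages. */
	fd = (int)syscall(SYS_memfd_create, "iov-test-analog", 0);
	if (fd < 0)
		return -1;
	if (ftruncate(fd, (off_t)size) < 0) {
		close(fd);
		return -1;
	}

	/* Analogue of step (5): shared-mmap the file into the mm. */
	buf = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (buf == MAP_FAILED) {
		close(fd);
		return -1;
	}

	/* The buffer is usable; the pages remain attached to the file. */
	memset(buf, 0xaa, size);
	assert(buf[0] == 0xaa && buf[size - 1] == 0xaa);

	munmap(buf, size);
	close(fd);
	return 0;
}
```

The kernel-side version has the extra complication that a kthread has no
mm_struct to map into; providing one is what steps (1), (2) and (4) are for,
and is why the extra symbol exports are needed.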
Signed-off-by: David Howells
cc: Andrew Morton
cc: Christoph Hellwig
cc: Christian Brauner
cc: Jens Axboe
cc: Al Viro
cc: Matthew Wilcox
cc: David Hildenbrand
cc: John Hubbard
cc: Brendan Higgins
cc: David Gow
cc: Huacai Chen
cc: WANG Xuerui
cc: Heiko Carstens
cc: Vasily Gorbik
cc: Alexander Gordeev
cc: Christian Borntraeger
cc: Sven Schnelle
cc: linux-mm@kvack.org
cc: loongarch@lists.linux.dev
cc: linux-s390@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: linux-kselftest@vger.kernel.org
cc: kunit-dev@googlegroups.com
---
 arch/s390/kernel/vdso.c |   1 +
 fs/anon_inodes.c        |   1 +
 kernel/fork.c           |   2 +
 lib/kunit_iov_iter.c    | 143 ++++++++++++++++++++++++++++++++++++++++
 mm/mmap.c               |   1 +
 mm/util.c               |   3 +
 6 files changed, 151 insertions(+)

diff --git a/arch/s390/kernel/vdso.c b/arch/s390/kernel/vdso.c
index bbaefd84f15e..6849eac59129 100644
--- a/arch/s390/kernel/vdso.c
+++ b/arch/s390/kernel/vdso.c
@@ -223,6 +223,7 @@ unsigned long vdso_size(void)
 		size += vdso64_end - vdso64_start;
 	return PAGE_ALIGN(size);
 }
+EXPORT_SYMBOL_GPL(vdso_size);
 
 int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 {
diff --git a/fs/anon_inodes.c b/fs/anon_inodes.c
index 24192a7667ed..4190336180ee 100644
--- a/fs/anon_inodes.c
+++ b/fs/anon_inodes.c
@@ -176,6 +176,7 @@ struct file *anon_inode_getfile_secure(const char *name,
 	return __anon_inode_getfile(name, fops, priv, flags,
 				    context_inode, true);
 }
+EXPORT_SYMBOL_GPL(anon_inode_getfile_secure);
 
 static int __anon_inode_getfd(const char *name,
 			      const struct file_operations *fops,
diff --git a/kernel/fork.c b/kernel/fork.c
index 3b6d20dfb9a8..9ab604574400 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -494,6 +494,7 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 
 	return vma;
 }
+EXPORT_SYMBOL_GPL(vm_area_alloc);
 
 struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 {
@@ -1337,6 +1338,7 @@ struct mm_struct *mm_alloc(void)
 	memset(mm, 0, sizeof(*mm));
 	return mm_init(mm, current, current_user_ns());
 }
+EXPORT_SYMBOL_GPL(mm_alloc);
 
 static inline void __mmput(struct mm_struct *mm)
 {
diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c
index eb86371b67d0..63e4dd1e7c1b 100644
--- a/lib/kunit_iov_iter.c
+++ b/lib/kunit_iov_iter.c
@@ -10,6 +10,13 @@
 #include
 #include
 #include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
 #include
 #include
 #include
@@ -68,6 +75,20 @@ static void iov_kunit_unmap(void *data)
 	vunmap(data);
 }
 
+static void iov_kunit_mmdrop(void *data)
+{
+	struct mm_struct *mm = data;
+
+	if (current->mm == mm)
+		kthread_unuse_mm(mm);
+	mmdrop(mm);
+}
+
+static void iov_kunit_fput(void *data)
+{
+	fput(data);
+}
+
 /*
  * Create a buffer out of some pages and return a vmap'd pointer to it.
  */
@@ -151,6 +172,128 @@ static void iov_kunit_check_pattern(struct kunit *test, const u8 *buffer,
 	}
 }
 
+static const struct file_operations iov_kunit_user_file_fops = {
+	.mmap = generic_file_mmap,
+};
+
+static int iov_kunit_user_file_read_folio(struct file *file, struct folio *folio)
+{
+	folio_mark_uptodate(folio);
+	folio_unlock(folio);
+	return 0;
+}
+
+static const struct address_space_operations iov_kunit_user_file_aops = {
+	.read_folio = iov_kunit_user_file_read_folio,
+	.dirty_folio = filemap_dirty_folio,
+};
+
+/*
+ * Create an anonymous file and attach a bunch of pages to it.  We can then
+ * use this in mmap() and check the pages against it when doing extraction
+ * tests.
+ */
+static struct file *iov_kunit_create_file(struct kunit *test, size_t npages,
+					  struct page ***ppages)
+{
+	struct folio *folio;
+	struct file *file;
+	struct page **pages = NULL;
+	size_t i;
+
+	if (ppages) {
+		pages = kunit_kcalloc(test, npages, sizeof(struct page *), GFP_KERNEL);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pages);
+		*ppages = pages;
+	}
+
+	file = anon_inode_getfile_secure("kunit-iov-test",
+					 &iov_kunit_user_file_fops,
+					 NULL, O_RDWR, NULL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, file);
+	kunit_add_action_or_reset(test, iov_kunit_fput, file);
+	file->f_mapping->a_ops = &iov_kunit_user_file_aops;
+
+	i_size_write(file_inode(file), npages * PAGE_SIZE);
+	for (i = 0; i < npages; i++) {
+		folio = filemap_grab_folio(file->f_mapping, i);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, folio);
+		if (pages)
+			*pages++ = folio_page(folio, 0);
+		folio_unlock(folio);
+		folio_put(folio);
+	}
+
+	return file;
+}
+
+/*
+ * Attach a userspace buffer to a kernel thread by adding an mm_struct to it
+ * and mmapping the buffer.  If the caller requires a list of pages for
+ * checking, then an anon_inode file is created, populated with pages and
+ * mmapped otherwise an anonymous mapping is used.
+ */
+static u8 __user *__init iov_kunit_create_user_buf(struct kunit *test,
+						   size_t npages,
+						   struct page ***ppages)
+{
+	struct rlimit rlim_stack = {
+		.rlim_cur = LONG_MAX,
+		.rlim_max = LONG_MAX,
+	};
+	struct vm_area_struct *vma;
+	struct mm_struct *mm;
+	struct file *file;
+	u8 __user *buffer;
+	int ret;
+
+	KUNIT_ASSERT_NULL(test, current->mm);
+
+	mm = mm_alloc();
+	KUNIT_ASSERT_NOT_NULL(test, mm);
+	kunit_add_action_or_reset(test, iov_kunit_mmdrop, mm);
+	arch_pick_mmap_layout(mm, &rlim_stack);
+
+	vma = vm_area_alloc(mm);
+	KUNIT_ASSERT_NOT_NULL(test, vma);
+	vma_set_anonymous(vma);
+
+	/*
+	 * Place the stack at the largest stack address the architecture
+	 * supports.  Later, we'll move this to an appropriate place.  We don't
+	 * use STACK_TOP because that can depend on attributes which aren't
+	 * configured yet.
+	 */
+	vma->vm_end = STACK_TOP_MAX;
+	vma->vm_start = vma->vm_end - PAGE_SIZE;
+	vm_flags_init(vma, VM_SOFTDIRTY | VM_STACK_FLAGS | VM_STACK_INCOMPLETE_SETUP);
+	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
+
+	ret = insert_vm_struct(mm, vma);
+	KUNIT_ASSERT_EQ(test, ret, 0);
+
+	mm->stack_vm = mm->total_vm = 1;
+
+	/*
+	 * If we want the pages, attach the pages to a file to prevent swap
+	 * interfering, otherwise use an anonymous mapping.
+	 */
+	if (ppages) {
+		file = iov_kunit_create_file(test, npages, ppages);
+
+		kthread_use_mm(mm);
+		buffer = (u8 __user *)vm_mmap(file, 0, PAGE_SIZE * npages,
+					      PROT_READ | PROT_WRITE,
+					      MAP_SHARED, 0);
+	} else {
+		kthread_use_mm(mm);
+		buffer = (u8 __user *)vm_mmap(NULL, 0, PAGE_SIZE * npages,
+					      PROT_READ | PROT_WRITE,
+					      MAP_PRIVATE | MAP_ANONYMOUS, 0);
+	}
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, (void __force *)buffer);
+	return buffer;
+}
+
 static void __init iov_kunit_load_kvec(struct kunit *test,
 				       struct iov_iter *iter, int dir,
 				       struct kvec *kvec, unsigned int kvmax,
diff --git a/mm/mmap.c b/mm/mmap.c
index b56a7f0c9f85..2ea4a98a2cab 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -3284,6 +3284,7 @@ int insert_vm_struct(struct mm_struct *mm, struct vm_area_struct *vma)
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(insert_vm_struct);
 
 /*
  * Copy the vma structure to a new location in the same mm,
diff --git a/mm/util.c b/mm/util.c
index 8cbbfd3a3d59..09895358f067 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -455,6 +455,9 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
 		mm->get_unmapped_area = arch_get_unmapped_area;
 }
 #endif
+#ifdef CONFIG_MMU
+EXPORT_SYMBOL_GPL(arch_pick_mmap_layout);
+#endif
 
 /**
  * __account_locked_vm - account locked pages to an mm's locked_vm