From patchwork Tue Dec 15 03:08:25 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11973779
Date: Mon, 14 Dec 2020 19:08:25 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, bgeffon@google.com, catalin.marinas@arm.com,
 dan.carpenter@oracle.com, dan.j.williams@intel.com, dave.jiang@intel.com,
 dima@arista.com, hughd@google.com, jgg@ziepe.ca, jhubbard@nvidia.com,
 kirill.shutemov@linux.intel.com, linux-mm@kvack.org, linux@armlinux.org.uk,
 luto@kernel.org, mike.kravetz@oracle.com, minchan@kernel.org, mingo@redhat.com,
 mm-commits@vger.kernel.org, rcampbell@nvidia.com, tglx@linutronix.de,
 torvalds@linux-foundation.org, tsbogend@alpha.franken.de, vbabka@suse.cz,
 viro@zeniv.linux.org.uk, vishal.l.verma@intel.com, will@kernel.org
Subject: [patch 089/200] mm: forbid splitting special mappings
Message-ID: <20201215030825.xV_MGqqpp%akpm@linux-foundation.org>
In-Reply-To: <20201214190237.a17b70ae14f129e2dca3d204@linux-foundation.org>

From: Dmitry Safonov
Subject: mm: forbid splitting special mappings

Don't allow splitting of vm_special_mapping's.  It affects vdso/vvar areas.
Uprobes have only one page in xol_area so they aren't affected.

Those restrictions were enforced by checks in .mremap() callbacks.  Restrict
resizing with generic .split() callback.

Link: https://lkml.kernel.org/r/20201013013416.390574-7-dima@arista.com
Signed-off-by: Dmitry Safonov
Cc: Alexander Viro
Cc: Andy Lutomirski
Cc: Brian Geffon
Cc: Catalin Marinas
Cc: Dan Carpenter
Cc: Dan Williams
Cc: Dave Jiang
Cc: Hugh Dickins
Cc: Ingo Molnar
Cc: Jason Gunthorpe
Cc: John Hubbard
Cc: "Kirill A. Shutemov"
Cc: Mike Kravetz
Cc: Minchan Kim
Cc: Ralph Campbell
Cc: Russell King
Cc: Thomas Bogendoerfer
Cc: Thomas Gleixner
Cc: Vishal Verma
Cc: Vlastimil Babka
Cc: Will Deacon
Signed-off-by: Andrew Morton
---

 arch/arm/kernel/vdso.c    |    9 ------
 arch/arm64/kernel/vdso.c  |   41 +++----------------------------
 arch/mips/vdso/genvdso.c  |    4 ---
 arch/s390/kernel/vdso.c   |   11 --------
 arch/x86/entry/vdso/vma.c |   17 -------------
 mm/mmap.c                 |   12 ++++++++++
 6 files changed, 17 insertions(+), 77 deletions(-)

--- a/arch/arm64/kernel/vdso.c~mm-forbid-splitting-special-mappings
+++ a/arch/arm64/kernel/vdso.c
@@ -78,17 +78,9 @@ static union {
 } vdso_data_store __page_aligned_data;
 struct vdso_data *vdso_data = vdso_data_store.data;
 
-static int __vdso_remap(enum vdso_abi abi,
-			const struct vm_special_mapping *sm,
-			struct vm_area_struct *new_vma)
-{
-	unsigned long new_size = new_vma->vm_end - new_vma->vm_start;
-	unsigned long vdso_size = vdso_info[abi].vdso_code_end -
-				  vdso_info[abi].vdso_code_start;
-
-	if (vdso_size != new_size)
-		return -EINVAL;
-
+static int vdso_mremap(const struct vm_special_mapping *sm,
+		struct vm_area_struct *new_vma)
+{
 	current->mm->context.vdso = (void *)new_vma->vm_start;
 
 	return 0;
@@ -219,17 +211,6 @@ static vm_fault_t vvar_fault(const struc
 	return vmf_insert_pfn(vma, vmf->address, pfn);
 }
 
-static int vvar_mremap(const struct vm_special_mapping *sm,
-		       struct vm_area_struct *new_vma)
-{
-	unsigned long new_size = new_vma->vm_end - new_vma->vm_start;
-
-	if (new_size != VVAR_NR_PAGES * PAGE_SIZE)
-		return -EINVAL;
-
-	return 0;
-}
-
 static int __setup_additional_pages(enum vdso_abi abi,
 				    struct mm_struct *mm,
 				    struct linux_binprm *bprm,
@@ -280,12 +261,6 @@ up_fail:
 /*
  * Create and map the vectors page for AArch32 tasks.
  */
-static int aarch32_vdso_mremap(const struct vm_special_mapping *sm,
-		struct vm_area_struct *new_vma)
-{
-	return __vdso_remap(VDSO_ABI_AA32, sm, new_vma);
-}
-
 enum aarch32_map {
 	AA32_MAP_VECTORS,	/* kuser helpers */
 	AA32_MAP_SIGPAGE,
@@ -308,11 +283,10 @@ static struct vm_special_mapping aarch32
 	[AA32_MAP_VVAR] = {
 		.name = "[vvar]",
 		.fault = vvar_fault,
-		.mremap = vvar_mremap,
 	},
 	[AA32_MAP_VDSO] = {
 		.name = "[vdso]",
-		.mremap = aarch32_vdso_mremap,
+		.mremap = vdso_mremap,
 	},
 };
 
@@ -453,12 +427,6 @@ out:
 }
 #endif /* CONFIG_COMPAT */
 
-static int vdso_mremap(const struct vm_special_mapping *sm,
-		struct vm_area_struct *new_vma)
-{
-	return __vdso_remap(VDSO_ABI_AA64, sm, new_vma);
-}
-
 enum aarch64_map {
 	AA64_MAP_VVAR,
 	AA64_MAP_VDSO,
@@ -468,7 +436,6 @@ static struct vm_special_mapping aarch64
 	[AA64_MAP_VVAR] = {
 		.name = "[vvar]",
 		.fault = vvar_fault,
-		.mremap = vvar_mremap,
 	},
 	[AA64_MAP_VDSO] = {
 		.name = "[vdso]",
--- a/arch/arm/kernel/vdso.c~mm-forbid-splitting-special-mappings
+++ a/arch/arm/kernel/vdso.c
@@ -50,15 +50,6 @@ static const struct vm_special_mapping v
 static int vdso_mremap(const struct vm_special_mapping *sm,
 		struct vm_area_struct *new_vma)
 {
-	unsigned long new_size = new_vma->vm_end - new_vma->vm_start;
-	unsigned long vdso_size;
-
-	/* without VVAR page */
-	vdso_size = (vdso_total_pages - 1) << PAGE_SHIFT;
-
-	if (vdso_size != new_size)
-		return -EINVAL;
-
 	current->mm->context.vdso = new_vma->vm_start;
 
 	return 0;
--- a/arch/mips/vdso/genvdso.c~mm-forbid-splitting-special-mappings
+++ a/arch/mips/vdso/genvdso.c
@@ -263,10 +263,6 @@ int main(int argc, char **argv)
 	fprintf(out_file, "	const struct vm_special_mapping *sm,\n");
 	fprintf(out_file, "	struct vm_area_struct *new_vma)\n");
 	fprintf(out_file, "{\n");
-	fprintf(out_file, "	unsigned long new_size =\n");
-	fprintf(out_file, "	new_vma->vm_end - new_vma->vm_start;\n");
-	fprintf(out_file, "	if (vdso_image.size != new_size)\n");
-	fprintf(out_file, "		return -EINVAL;\n");
 	fprintf(out_file, "	current->mm->context.vdso =\n");
 	fprintf(out_file, "	(void *)(new_vma->vm_start);\n");
 	fprintf(out_file, "	return 0;\n");
--- a/arch/s390/kernel/vdso.c~mm-forbid-splitting-special-mappings
+++ a/arch/s390/kernel/vdso.c
@@ -61,17 +61,8 @@ static vm_fault_t vdso_fault(const struc
 static int vdso_mremap(const struct vm_special_mapping *sm,
 		       struct vm_area_struct *vma)
 {
-	unsigned long vdso_pages;
-
-	vdso_pages = vdso64_pages;
-
-	if ((vdso_pages << PAGE_SHIFT) != vma->vm_end - vma->vm_start)
-		return -EINVAL;
-
-	if (WARN_ON_ONCE(current->mm != vma->vm_mm))
-		return -EFAULT;
-
 	current->mm->context.vdso_base = vma->vm_start;
+
 	return 0;
 }
 
--- a/arch/x86/entry/vdso/vma.c~mm-forbid-splitting-special-mappings
+++ a/arch/x86/entry/vdso/vma.c
@@ -89,30 +89,14 @@ static void vdso_fix_landing(const struc
 static int vdso_mremap(const struct vm_special_mapping *sm,
 		struct vm_area_struct *new_vma)
 {
-	unsigned long new_size = new_vma->vm_end - new_vma->vm_start;
 	const struct vdso_image *image = current->mm->context.vdso_image;
 
-	if (image->size != new_size)
-		return -EINVAL;
-
 	vdso_fix_landing(image, new_vma);
 	current->mm->context.vdso = (void __user *)new_vma->vm_start;
 
 	return 0;
 }
 
-static int vvar_mremap(const struct vm_special_mapping *sm,
-		       struct vm_area_struct *new_vma)
-{
-	const struct vdso_image *image = new_vma->vm_mm->context.vdso_image;
-	unsigned long new_size = new_vma->vm_end - new_vma->vm_start;
-
-	if (new_size != -image->sym_vvar_start)
-		return -EINVAL;
-
-	return 0;
-}
-
 #ifdef CONFIG_TIME_NS
 static struct page *find_timens_vvar_page(struct vm_area_struct *vma)
 {
@@ -252,7 +236,6 @@ static const struct vm_special_mapping v
 static const struct vm_special_mapping vvar_mapping = {
 	.name = "[vvar]",
 	.fault = vvar_fault,
-	.mremap = vvar_mremap,
 };
 
 /*
--- a/mm/mmap.c~mm-forbid-splitting-special-mappings
+++ a/mm/mmap.c
@@ -3422,6 +3422,17 @@ static int special_mapping_mremap(struct
 	return 0;
 }
 
+static int special_mapping_split(struct vm_area_struct *vma, unsigned long addr)
+{
+	/*
+	 * Forbid splitting special mappings - kernel has expectations over
+	 * the number of pages in mapping. Together with VM_DONTEXPAND
+	 * the size of vma should stay the same over the special mapping's
+	 * lifetime.
+	 */
+	return -EINVAL;
+}
+
 static const struct vm_operations_struct special_mapping_vmops = {
 	.close = special_mapping_close,
 	.fault = special_mapping_fault,
@@ -3429,6 +3440,7 @@ static const struct vm_operations_struct
 	.name = special_mapping_name,
 	/* vDSO code relies that VVAR can't be accessed remotely */
 	.access = NULL,
+	.may_split = special_mapping_split,
 };
 
 static const struct vm_operations_struct legacy_special_mapping_vmops = {
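
For reference, here is a minimal userspace sketch of the behaviour the new
.may_split handler enforces.  It is not part of the patch: parsing "[vdso]"
out of /proc/self/maps and unmapping the trailing page are illustrative
assumptions.  On a kernel with this change, a partial munmap() of a special
mapping is expected to fail with EINVAL instead of leaving a truncated vdso
behind.

/*
 * Illustrative only -- not part of the patch.  Tries to munmap() just the
 * last page of the [vdso] mapping, which would require splitting the VMA.
 * With splits rejected for special mappings, the call should fail with
 * EINVAL and the vdso stays intact.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	unsigned long start, end, page = sysconf(_SC_PAGESIZE);
	char line[512];
	FILE *maps = fopen("/proc/self/maps", "r");

	if (!maps)
		return 1;

	while (fgets(line, sizeof(line), maps)) {
		if (!strstr(line, "[vdso]"))
			continue;
		if (sscanf(line, "%lx-%lx", &start, &end) != 2)
			break;
		if (end - start <= page)	/* nothing to split */
			break;

		/* Unmapping only the tail forces a VMA split. */
		if (munmap((void *)(end - page), page) == 0)
			printf("vdso split succeeded (unexpected with this patch)\n");
		else
			printf("munmap: %s (EINVAL expected)\n", strerror(errno));
		break;
	}
	fclose(maps);
	return 0;
}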