From patchwork Tue Mar 12 22:28:32 2024
X-Patchwork-Submitter: "Edgecombe, Rick P"
X-Patchwork-Id: 13590668
From: Rick Edgecombe
To: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de,
    broonie@kernel.org, dave.hansen@linux.intel.com, debug@rivosinc.com,
    hpa@zytor.com, keescook@chromium.org, kirill.shutemov@linux.intel.com,
    luto@kernel.org, mingo@redhat.com, peterz@infradead.org,
    tglx@linutronix.de, x86@kernel.org, christophe.leroy@csgroup.eu
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    rick.p.edgecombe@intel.com
Subject: [PATCH v3 01/12] mm: Switch mm->get_unmapped_area() to a flag
Date: Tue, 12 Mar 2024 15:28:32 -0700
Message-Id: <20240312222843.2505560-2-rick.p.edgecombe@intel.com>
In-Reply-To: <20240312222843.2505560-1-rick.p.edgecombe@intel.com>
References: <20240312222843.2505560-1-rick.p.edgecombe@intel.com>

The mm_struct contains a function pointer *get_unmapped_area(), which is
set to either arch_get_unmapped_area() or arch_get_unmapped_area_topdown()
during the initialization of the mm. Since the function pointer only ever
points to two functions that are named the same across all architectures,
a function pointer is not really required. In addition, future changes
will want to add versions of the functions that take additional arguments.
So to save a pointer's worth of bytes in mm_struct, and to prevent adding
additional function pointers to mm_struct in future changes, remove it and
keep the information about which get_unmapped_area() to use in a flag.

Add the new flag to MMF_INIT_MASK so it doesn't get clobbered on fork by
mmf_init_flags(), as most MM flags are. In the pre-existing behavior
mm->get_unmapped_area() would get copied to the new mm in dup_mm(), so not
clobbering the flag preserves the existing behavior around inheriting the
topdown-ness.

Introduce a helper, mm_get_unmapped_area(), to easily convert code that
refers to the old function pointer to instead select and call either
arch_get_unmapped_area() or arch_get_unmapped_area_topdown() based on the
flag. Then drop the mm->get_unmapped_area() function pointer. Leave the
get_unmapped_area() pointer in struct file_operations alone.

The main purpose of this change is to reorganize in preparation for future
changes, but it also converts the calls of mm->get_unmapped_area() from
indirect branches into direct ones. The stress-ng bigheap benchmark calls
realloc a lot, which calls through get_unmapped_area() in the kernel. On
x86, the change yielded a ~1% improvement there on a retpoline config.

In testing a few x86 configs, removing the pointer unfortunately didn't
result in any actual size reductions in the compiled layout of mm_struct.
But depending on compiler or arch alignment requirements, the change could
shrink the size of mm_struct.

Signed-off-by: Rick Edgecombe
Acked-by: Dave Hansen
Acked-by: Liam R. Howlett
Reviewed-by: Kirill A. Shutemov
---
v3:
 - Fix comment that still referred to mm->get_unmapped_area()
 - Resolve trivial rebase conflicts with "mm: thp_get_unmapped_area must
   honour topdown preference"
 - Spelling fix in log

v2:
 - Fix comment on MMF_TOPDOWN (Kirill, rppt)
 - Move MMF_TOPDOWN to actually unused bit
 - Add MMF_TOPDOWN to MMF_INIT_MASK so it doesn't get clobbered on fork,
   and result in the children using the search up path.
 - New lower performance results after above bug fix
 - Add Reviews and Acks
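For quick reference, the shape of the conversion, condensed from the
mm/mmap.c hunk below (a sketch of the helper this patch adds, not a
substitute for the diff itself):

/* Before: every caller made an indirect call through the mm, which is
 * a retpoline on mitigated x86:
 *
 *	addr = mm->get_unmapped_area(file, addr, len, pgoff, flags);
 *
 * After: the topdown choice is one bit in mm->flags, and the helper
 * dispatches with a direct call to one of two known functions:
 */
unsigned long
mm_get_unmapped_area(struct mm_struct *mm, struct file *file,
		     unsigned long addr, unsigned long len,
		     unsigned long pgoff, unsigned long flags)
{
	if (test_bit(MMF_TOPDOWN, &mm->flags))
		return arch_get_unmapped_area_topdown(file, addr, len, pgoff, flags);
	return arch_get_unmapped_area(file, addr, len, pgoff, flags);
}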
---
 arch/s390/mm/hugetlbpage.c       |  2 +-
 arch/s390/mm/mmap.c              |  4 ++--
 arch/sparc/kernel/sys_sparc_64.c | 15 ++++++---------
 arch/sparc/mm/hugetlbpage.c      |  2 +-
 arch/x86/kernel/cpu/sgx/driver.c |  2 +-
 arch/x86/mm/hugetlbpage.c        |  2 +-
 arch/x86/mm/mmap.c               |  4 ++--
 drivers/char/mem.c               |  2 +-
 drivers/dax/device.c             |  6 +++---
 fs/hugetlbfs/inode.c             |  4 ++--
 fs/proc/inode.c                  | 15 ++++++++-------
 fs/ramfs/file-mmu.c              |  2 +-
 include/linux/mm_types.h         |  6 +-----
 include/linux/sched/coredump.h   |  5 ++++-
 include/linux/sched/mm.h         |  5 +++++
 io_uring/io_uring.c              |  2 +-
 mm/debug.c                       |  6 ------
 mm/huge_memory.c                 |  9 ++++-----
 mm/mmap.c                        | 21 ++++++++++++++++++---
 mm/shmem.c                       | 11 +++++------
 mm/util.c                        |  6 +++---
 21 files changed, 70 insertions(+), 61 deletions(-)

diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
index 297a6d897d5a..c2d2850ec8d5 100644
--- a/arch/s390/mm/hugetlbpage.c
+++ b/arch/s390/mm/hugetlbpage.c
@@ -328,7 +328,7 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 		goto check_asce_limit;
 	}
 
-	if (mm->get_unmapped_area == arch_get_unmapped_area)
+	if (!test_bit(MMF_TOPDOWN, &mm->flags))
 		addr = hugetlb_get_unmapped_area_bottomup(file, addr, len,
 				pgoff, flags);
 	else
diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
index fc9a7dc26c5e..cd52d72b59cf 100644
--- a/arch/s390/mm/mmap.c
+++ b/arch/s390/mm/mmap.c
@@ -182,10 +182,10 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
 	 */
 	if (mmap_is_legacy(rlim_stack)) {
 		mm->mmap_base = mmap_base_legacy(random_factor);
-		mm->get_unmapped_area = arch_get_unmapped_area;
+		clear_bit(MMF_TOPDOWN, &mm->flags);
 	} else {
 		mm->mmap_base = mmap_base(random_factor, rlim_stack);
-		mm->get_unmapped_area = arch_get_unmapped_area_topdown;
+		set_bit(MMF_TOPDOWN, &mm->flags);
 	}
 }
diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
index 1e9a9e016237..1dbf7211666e 100644
--- a/arch/sparc/kernel/sys_sparc_64.c
+++ b/arch/sparc/kernel/sys_sparc_64.c
@@ -218,14 +218,10 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 unsigned long get_fb_unmapped_area(struct file *filp, unsigned long orig_addr, unsigned long len, unsigned long pgoff, unsigned long flags)
 {
 	unsigned long align_goal, addr = -ENOMEM;
-	unsigned long (*get_area)(struct file *, unsigned long,
-			unsigned long, unsigned long, unsigned long);
-
-	get_area = current->mm->get_unmapped_area;
 
 	if (flags & MAP_FIXED) {
 		/* Ok, don't mess with it. */
-		return get_area(NULL, orig_addr, len, pgoff, flags);
+		return mm_get_unmapped_area(current->mm, NULL, orig_addr, len, pgoff, flags);
 	}
 
 	flags &= ~MAP_SHARED;
@@ -238,7 +234,8 @@ unsigned long get_fb_unmapped_area(struct file *filp, unsigned long orig_addr, u
 		align_goal = (64UL * 1024);
 
 	do {
-		addr = get_area(NULL, orig_addr, len + (align_goal - PAGE_SIZE), pgoff, flags);
+		addr = mm_get_unmapped_area(current->mm, NULL, orig_addr,
+					    len + (align_goal - PAGE_SIZE), pgoff, flags);
 		if (!(addr & ~PAGE_MASK)) {
 			addr = (addr + (align_goal - 1UL)) & ~(align_goal - 1UL);
 			break;
@@ -256,7 +253,7 @@ unsigned long get_fb_unmapped_area(struct file *filp, unsigned long orig_addr, u
 	 * be obtained.
 	 */
 	if (addr & ~PAGE_MASK)
-		addr = get_area(NULL, orig_addr, len, pgoff, flags);
+		addr = mm_get_unmapped_area(current->mm, NULL, orig_addr, len, pgoff, flags);
 
 	return addr;
 }
@@ -292,7 +289,7 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
 	    gap == RLIM_INFINITY ||
 	    sysctl_legacy_va_layout) {
 		mm->mmap_base = TASK_UNMAPPED_BASE + random_factor;
-		mm->get_unmapped_area = arch_get_unmapped_area;
+		clear_bit(MMF_TOPDOWN, &mm->flags);
 	} else {
 		/* We know it's 32-bit */
 		unsigned long task_size = STACK_TOP32;
@@ -303,7 +300,7 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
 			gap = (task_size / 6 * 5);
 
 		mm->mmap_base = PAGE_ALIGN(task_size - gap - random_factor);
-		mm->get_unmapped_area = arch_get_unmapped_area_topdown;
+		set_bit(MMF_TOPDOWN, &mm->flags);
 	}
 }
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index b432500c13a5..38a1bef47efb 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -123,7 +123,7 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 		    (!vma || addr + len <= vm_start_gap(vma)))
 			return addr;
 	}
-	if (mm->get_unmapped_area == arch_get_unmapped_area)
+	if (!test_bit(MMF_TOPDOWN, &mm->flags))
 		return hugetlb_get_unmapped_area_bottomup(file, addr, len,
 				pgoff, flags);
 	else
diff --git a/arch/x86/kernel/cpu/sgx/driver.c b/arch/x86/kernel/cpu/sgx/driver.c
index 262f5fb18d74..22b65a5f5ec6 100644
--- a/arch/x86/kernel/cpu/sgx/driver.c
+++ b/arch/x86/kernel/cpu/sgx/driver.c
@@ -113,7 +113,7 @@ static unsigned long sgx_get_unmapped_area(struct file *file,
 	if (flags & MAP_FIXED)
 		return addr;
 
-	return current->mm->get_unmapped_area(file, addr, len, pgoff, flags);
+	return mm_get_unmapped_area(current->mm, file, addr, len, pgoff, flags);
 }
 
 #ifdef CONFIG_COMPAT
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index 5804bbae4f01..6d77c0039617 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -141,7 +141,7 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 	}
 
 get_unmapped_area:
-	if (mm->get_unmapped_area == arch_get_unmapped_area)
+	if (!test_bit(MMF_TOPDOWN, &mm->flags))
 		return hugetlb_get_unmapped_area_bottomup(file, addr, len,
 				pgoff, flags);
 	else
diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
index c90c20904a60..a2cabb1c81e1 100644
--- a/arch/x86/mm/mmap.c
+++ b/arch/x86/mm/mmap.c
@@ -129,9 +129,9 @@ static void arch_pick_mmap_base(unsigned long *base, unsigned long *legacy_base,
 void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
 {
 	if (mmap_is_legacy())
-		mm->get_unmapped_area = arch_get_unmapped_area;
+		clear_bit(MMF_TOPDOWN, &mm->flags);
 	else
-		mm->get_unmapped_area = arch_get_unmapped_area_topdown;
+		set_bit(MMF_TOPDOWN, &mm->flags);
 
 	arch_pick_mmap_base(&mm->mmap_base, &mm->mmap_legacy_base,
 			arch_rnd(mmap64_rnd_bits), task_size_64bit(0),
diff --git a/drivers/char/mem.c b/drivers/char/mem.c
index 3c6670cf905f..9b80e622ae80 100644
--- a/drivers/char/mem.c
+++ b/drivers/char/mem.c
@@ -544,7 +544,7 @@ static unsigned long get_unmapped_area_zero(struct file *file,
 	}
 
 	/* Otherwise flags & MAP_PRIVATE: with no shmem object beneath it */
-	return current->mm->get_unmapped_area(file, addr, len, pgoff, flags);
+	return mm_get_unmapped_area(current->mm, file, addr, len, pgoff, flags);
 #else
 	return -ENOSYS;
 #endif
diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index 93ebedc5ec8c..47c126d37b59 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -329,14 +329,14 @@ static unsigned long dax_get_unmapped_area(struct file *filp,
 	if ((off + len_align) < off)
 		goto out;
 
-	addr_align = current->mm->get_unmapped_area(filp, addr, len_align,
-			pgoff, flags);
+	addr_align = mm_get_unmapped_area(current->mm, filp, addr, len_align,
+					  pgoff, flags);
 	if (!IS_ERR_VALUE(addr_align)) {
 		addr_align += (off - addr_align) & (align - 1);
 		return addr_align;
 	}
 out:
-	return current->mm->get_unmapped_area(filp, addr, len, pgoff, flags);
+	return mm_get_unmapped_area(current->mm, filp, addr, len, pgoff, flags);
 }
 
 static const struct address_space_operations dev_dax_aops = {
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index d746866ae3b6..cd87ea5944a1 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -249,11 +249,11 @@ generic_hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 	}
 
 	/*
-	 * Use mm->get_unmapped_area value as a hint to use topdown routine.
+	 * Use MMF_TOPDOWN flag as a hint to use topdown routine.
 	 * If architectures have special needs, they should define their own
 	 * version of hugetlb_get_unmapped_area.
 	 */
-	if (mm->get_unmapped_area == arch_get_unmapped_area_topdown)
+	if (test_bit(MMF_TOPDOWN, &mm->flags))
 		return hugetlb_get_unmapped_area_topdown(file, addr, len,
 				pgoff, flags);
 	return hugetlb_get_unmapped_area_bottomup(file, addr, len,
diff --git a/fs/proc/inode.c b/fs/proc/inode.c
index 05350f3c2812..017144a8516c 100644
--- a/fs/proc/inode.c
+++ b/fs/proc/inode.c
@@ -451,15 +451,16 @@ pde_get_unmapped_area(struct proc_dir_entry *pde, struct file *file, unsigned lo
 			   unsigned long len, unsigned long pgoff,
 			   unsigned long flags)
 {
-	typeof_member(struct proc_ops, proc_get_unmapped_area) get_area;
-
-	get_area = pde->proc_ops->proc_get_unmapped_area;
+	if (pde->proc_ops->proc_get_unmapped_area)
+		return pde->proc_ops->proc_get_unmapped_area(file, orig_addr,
+							     len, pgoff,
+							     flags);
 #ifdef CONFIG_MMU
-	if (!get_area)
-		get_area = current->mm->get_unmapped_area;
+	else
+		return mm_get_unmapped_area(current->mm, file, orig_addr,
+					    len, pgoff, flags);
 #endif
-	if (get_area)
-		return get_area(file, orig_addr, len, pgoff, flags);
+
 	return orig_addr;
 }
diff --git a/fs/ramfs/file-mmu.c b/fs/ramfs/file-mmu.c
index c7a1aa3c882b..b45c7edc3225 100644
--- a/fs/ramfs/file-mmu.c
+++ b/fs/ramfs/file-mmu.c
@@ -35,7 +35,7 @@ static unsigned long ramfs_mmu_get_unmapped_area(struct file *file,
 		unsigned long addr, unsigned long len, unsigned long pgoff,
 		unsigned long flags)
 {
-	return current->mm->get_unmapped_area(file, addr, len, pgoff, flags);
+	return mm_get_unmapped_area(current->mm, file, addr, len, pgoff, flags);
 }
 
 const struct file_operations ramfs_file_operations = {
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 8b611e13153e..d20869881214 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -749,11 +749,7 @@ struct mm_struct {
 		} ____cacheline_aligned_in_smp;
 
 		struct maple_tree mm_mt;
-#ifdef CONFIG_MMU
-		unsigned long (*get_unmapped_area) (struct file *filp,
-				unsigned long addr, unsigned long len,
-				unsigned long pgoff, unsigned long flags);
-#endif
+
 		unsigned long mmap_base;	/* base of mmap area */
 		unsigned long mmap_legacy_base;	/* base of mmap area in bottom-up allocations */
 #ifdef CONFIG_HAVE_ARCH_COMPAT_MMAP_BASES
diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
index 02f5090ffea2..e62ff805cfc9 100644
--- a/include/linux/sched/coredump.h
+++ b/include/linux/sched/coredump.h
@@ -92,9 +92,12 @@ static inline int get_dumpable(struct mm_struct *mm)
 #define MMF_VM_MERGE_ANY	30
 #define MMF_VM_MERGE_ANY_MASK	(1 << MMF_VM_MERGE_ANY)
 
+#define MMF_TOPDOWN		31	/* mm searches top down by default */
+#define MMF_TOPDOWN_MASK	(1 << MMF_TOPDOWN)
+
 #define MMF_INIT_MASK		(MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
 				 MMF_DISABLE_THP_MASK | MMF_HAS_MDWE_MASK |\
-				 MMF_VM_MERGE_ANY_MASK)
+				 MMF_VM_MERGE_ANY_MASK | MMF_TOPDOWN_MASK)
 
 static inline unsigned long mmf_init_flags(unsigned long flags)
 {
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 9a19f1b42f64..cde946e926d8 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 
 /*
  * Routines for handling mm_structs
@@ -186,6 +187,10 @@ arch_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
 			  unsigned long len, unsigned long pgoff,
 			  unsigned long flags);
 
+unsigned long mm_get_unmapped_area(struct mm_struct *mm, struct file *filp,
+				   unsigned long addr, unsigned long len,
+				   unsigned long pgoff, unsigned long flags);
+
 unsigned long
 generic_get_unmapped_area(struct file *filp, unsigned long addr,
 			  unsigned long len, unsigned long pgoff,
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index cd9a137ad6ce..9eb3b2587031 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -3513,7 +3513,7 @@ static unsigned long io_uring_mmu_get_unmapped_area(struct file *filp,
 #else
 	addr = 0UL;
 #endif
-	return current->mm->get_unmapped_area(filp, addr, len, pgoff, flags);
+	return mm_get_unmapped_area(current->mm, filp, addr, len, pgoff, flags);
 }
 
 #else /* !CONFIG_MMU */
diff --git a/mm/debug.c b/mm/debug.c
index ee533a5ceb79..32db5de8e1e7 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -162,9 +162,6 @@ EXPORT_SYMBOL(dump_vma);
 void dump_mm(const struct mm_struct *mm)
 {
 	pr_emerg("mm %px task_size %lu\n"
-#ifdef CONFIG_MMU
-		"get_unmapped_area %px\n"
-#endif
 		"mmap_base %lu mmap_legacy_base %lu\n"
 		"pgd %px mm_users %d mm_count %d pgtables_bytes %lu map_count %d\n"
 		"hiwater_rss %lx hiwater_vm %lx total_vm %lx locked_vm %lx\n"
@@ -190,9 +187,6 @@ void dump_mm(const struct mm_struct *mm)
 		"def_flags: %#lx(%pGv)\n",
 
 		mm, mm->task_size,
-#ifdef CONFIG_MMU
-		mm->get_unmapped_area,
-#endif
 		mm->mmap_base, mm->mmap_legacy_base,
 		mm->pgd, atomic_read(&mm->mm_users), atomic_read(&mm->mm_count),
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 94c958f7ebb5..bc3bf441e768 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -822,8 +822,8 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
 	if (len_pad < len || (off + len_pad) < off)
 		return 0;
 
-	ret = current->mm->get_unmapped_area(filp, addr, len_pad,
-					     off >> PAGE_SHIFT, flags);
+	ret = mm_get_unmapped_area(current->mm, filp, addr, len_pad,
+				   off >> PAGE_SHIFT, flags);
 
 	/*
 	 * The failure might be due to length padding. The caller will retry
@@ -841,8 +841,7 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
 
 	off_sub = (off - ret) & (size - 1);
 
-	if (current->mm->get_unmapped_area == arch_get_unmapped_area_topdown &&
-	    !off_sub)
+	if (test_bit(MMF_TOPDOWN, &current->mm->flags) && !off_sub)
 		return ret + size;
 
 	ret += off_sub;
@@ -859,7 +858,7 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
 	if (ret)
 		return ret;
 
-	return current->mm->get_unmapped_area(filp, addr, len, pgoff, flags);
+	return mm_get_unmapped_area(current->mm, filp, addr, len, pgoff, flags);
 }
 EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
 
diff --git a/mm/mmap.c b/mm/mmap.c
index 3281287771c9..39e9a3ae3ca5 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1815,7 +1815,8 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 		unsigned long pgoff, unsigned long flags)
 {
 	unsigned long (*get_area)(struct file *, unsigned long,
-				  unsigned long, unsigned long, unsigned long);
+				  unsigned long, unsigned long, unsigned long)
+				  = NULL;
 
 	unsigned long error = arch_mmap_check(addr, len, flags);
 	if (error)
@@ -1825,7 +1826,6 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 	if (len > TASK_SIZE)
 		return -ENOMEM;
 
-	get_area = current->mm->get_unmapped_area;
 	if (file) {
 		if (file->f_op->get_unmapped_area)
 			get_area = file->f_op->get_unmapped_area;
@@ -1844,7 +1844,11 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 	if (!file)
 		pgoff = 0;
 
-	addr = get_area(file, addr, len, pgoff, flags);
+	if (get_area)
+		addr = get_area(file, addr, len, pgoff, flags);
+	else
+		addr = mm_get_unmapped_area(current->mm, file, addr, len,
+					    pgoff, flags);
 	if (IS_ERR_VALUE(addr))
 		return addr;
 
@@ -1859,6 +1863,17 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 
 EXPORT_SYMBOL(get_unmapped_area);
 
+unsigned long
+mm_get_unmapped_area(struct mm_struct *mm, struct file *file,
+		     unsigned long addr, unsigned long len,
+		     unsigned long pgoff, unsigned long flags)
+{
+	if (test_bit(MMF_TOPDOWN, &mm->flags))
+		return arch_get_unmapped_area_topdown(file, addr, len, pgoff, flags);
+	return arch_get_unmapped_area(file, addr, len, pgoff, flags);
+}
+EXPORT_SYMBOL(mm_get_unmapped_area);
+
 /**
  * find_vma_intersection() - Look up the first VMA which intersects the interval
  * @mm: The process address space.
diff --git a/mm/shmem.c b/mm/shmem.c
index d7c84ff62186..5452065faa46 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2240,8 +2240,6 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 				     unsigned long uaddr, unsigned long len,
 				     unsigned long pgoff, unsigned long flags)
 {
-	unsigned long (*get_area)(struct file *,
-		unsigned long, unsigned long, unsigned long, unsigned long);
 	unsigned long addr;
 	unsigned long offset;
 	unsigned long inflated_len;
@@ -2251,8 +2249,8 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 	if (len > TASK_SIZE)
 		return -ENOMEM;
 
-	get_area = current->mm->get_unmapped_area;
-	addr = get_area(file, uaddr, len, pgoff, flags);
+	addr = mm_get_unmapped_area(current->mm, file, uaddr, len, pgoff,
+				    flags);
 
 	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
 		return addr;
@@ -2309,7 +2307,8 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 	if (inflated_len < len)
 		return addr;
 
-	inflated_addr = get_area(NULL, uaddr, inflated_len, 0, flags);
+	inflated_addr = mm_get_unmapped_area(current->mm, NULL, uaddr,
+					     inflated_len, 0, flags);
 	if (IS_ERR_VALUE(inflated_addr))
 		return addr;
 	if (inflated_addr & ~PAGE_MASK)
@@ -4755,7 +4754,7 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 				     unsigned long addr, unsigned long len,
 				     unsigned long pgoff, unsigned long flags)
 {
-	return current->mm->get_unmapped_area(file, addr, len, pgoff, flags);
+	return mm_get_unmapped_area(current->mm, file, addr, len, pgoff, flags);
 }
 #endif
 
diff --git a/mm/util.c b/mm/util.c
index 5a6a9802583b..2b959553f9ce 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -452,17 +452,17 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
 
 	if (mmap_is_legacy(rlim_stack)) {
 		mm->mmap_base = TASK_UNMAPPED_BASE + random_factor;
-		mm->get_unmapped_area = arch_get_unmapped_area;
+		clear_bit(MMF_TOPDOWN, &mm->flags);
 	} else {
 		mm->mmap_base = mmap_base(random_factor, rlim_stack);
-		mm->get_unmapped_area = arch_get_unmapped_area_topdown;
+		set_bit(MMF_TOPDOWN, &mm->flags);
 	}
 }
 #elif defined(CONFIG_MMU) && !defined(HAVE_ARCH_PICK_MMAP_LAYOUT)
 void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
 {
 	mm->mmap_base = TASK_UNMAPPED_BASE;
-	mm->get_unmapped_area = arch_get_unmapped_area;
+	clear_bit(MMF_TOPDOWN, &mm->flags);
 }
 #endif

From patchwork Tue Mar 12 22:28:33 2024
X-Patchwork-Submitter: "Edgecombe, Rick P"
X-Patchwork-Id: 13590670
From: Rick Edgecombe
To: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de,
    broonie@kernel.org, dave.hansen@linux.intel.com, debug@rivosinc.com,
    hpa@zytor.com, keescook@chromium.org, kirill.shutemov@linux.intel.com,
    luto@kernel.org, mingo@redhat.com, peterz@infradead.org,
    tglx@linutronix.de, x86@kernel.org, christophe.leroy@csgroup.eu
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    rick.p.edgecombe@intel.com
Subject: [PATCH v3 02/12] mm: Introduce arch_get_unmapped_area_vmflags()
Date: Tue, 12 Mar 2024 15:28:33 -0700
Message-Id: <20240312222843.2505560-3-rick.p.edgecombe@intel.com>
In-Reply-To: <20240312222843.2505560-1-rick.p.edgecombe@intel.com>
References: <20240312222843.2505560-1-rick.p.edgecombe@intel.com>

When memory is being placed, mmap() will take care to respect the guard
gaps of certain types of memory (VM_SHADOWSTACK, VM_GROWSUP and
VM_GROWSDOWN). In order to ensure guard gaps between mappings, mmap()
needs to consider two things:

 1. That the new mapping isn't placed in any existing mapping's guard
    gaps.
 2. That the new mapping isn't placed such that any existing mappings
    are not in *its* guard gaps.

The long-standing behavior of mmap() is to ensure 1, but not take any
care around 2. So for example, if there is a PAGE_SIZE free area, and a
mmap() with a PAGE_SIZE size, and a type that has a guard gap is being
placed, mmap() may place the shadow stack in the PAGE_SIZE free area.
Then the mapping that is supposed to have a guard gap will not have a
gap to the adjacent VMA.

In order to take the start gap into account, the maple tree search needs
to know the size of the start gap the new mapping will need. The call
chain from do_mmap() to the actual maple tree search looks like this:

  do_mmap(size, vm_flags, map_flags, ..)
      mm/mmap.c:get_unmapped_area(size, map_flags, ...)
          arch_get_unmapped_area(size, map_flags, ...)
              vm_unmapped_area(struct vm_unmapped_area_info)

One option would be to add another MAP_ flag to mean a one page start gap
(as is for shadow stack), but this consumes a flag unnecessarily. Another
option could be to simply increase the size passed in do_mmap() by the
start gap size, and adjust after the fact, but this will interfere with
the alignment requirements passed in struct vm_unmapped_area_info, which
are unknown to mmap.c. Instead, introduce variants of
arch_get_unmapped_area/_topdown() that take vm_flags. In future changes,
these variants can be used in mmap.c:get_unmapped_area() to allow the
vm_flags to be passed through to vm_unmapped_area(), while preserving the
normal arch_get_unmapped_area/_topdown() for the existing callers.
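To illustrate how the new vm_flags argument could eventually be consumed
(a hedged sketch only: the actual gap accounting lands later in this
series, and guarded_search() is a made-up name, not an API added by this
patch):

/*
 * Hypothetical sketch, not part of this patch: a vm_flags-aware search
 * can reserve the new mapping's own leading guard page by searching for
 * a slightly larger hole, then leaving the extra page unmapped in front.
 */
static unsigned long guarded_search(struct vm_unmapped_area_info *info,
				    unsigned long len, vm_flags_t vm_flags)
{
	unsigned long gap = (vm_flags & VM_SHADOW_STACK) ? PAGE_SIZE : 0;
	unsigned long addr;

	info->length = len + gap;	/* hole must fit mapping + start gap */
	addr = vm_unmapped_area(info);
	if (IS_ERR_VALUE(addr))
		return addr;
	return addr + gap;		/* keep the gap below the mapping */
}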
Signed-off-by: Rick Edgecombe
---
 include/linux/sched/mm.h | 17 +++++++++++++++++
 mm/mmap.c                | 28 ++++++++++++++++++++++++++++
 2 files changed, 45 insertions(+)

diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index cde946e926d8..7b44441865c5 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -191,6 +191,23 @@ unsigned long mm_get_unmapped_area(struct mm_struct *mm, struct file *filp,
 				   unsigned long addr, unsigned long len,
 				   unsigned long pgoff, unsigned long flags);
 
+extern unsigned long
+arch_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
+			       unsigned long len, unsigned long pgoff,
+			       unsigned long flags, vm_flags_t vm_flags);
+extern unsigned long
+arch_get_unmapped_area_topdown_vmflags(struct file *filp, unsigned long addr,
+				       unsigned long len, unsigned long pgoff,
+				       unsigned long flags, vm_flags_t);
+
+unsigned long mm_get_unmapped_area_vmflags(struct mm_struct *mm,
+					   struct file *filp,
+					   unsigned long addr,
+					   unsigned long len,
+					   unsigned long pgoff,
+					   unsigned long flags,
+					   vm_flags_t vm_flags);
+
 unsigned long
 generic_get_unmapped_area(struct file *filp, unsigned long addr,
 			  unsigned long len, unsigned long pgoff,
diff --git a/mm/mmap.c b/mm/mmap.c
index 39e9a3ae3ca5..e23ce8ca24c9 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1810,6 +1810,34 @@ arch_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
 }
 #endif
 
+#ifndef HAVE_ARCH_UNMAPPED_AREA_VMFLAGS
+extern unsigned long
+arch_get_unmapped_area_vmflags(struct file *filp, unsigned long addr, unsigned long len,
+			       unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags)
+{
+	return arch_get_unmapped_area(filp, addr, len, pgoff, flags);
+}
+
+extern unsigned long
+arch_get_unmapped_area_topdown_vmflags(struct file *filp, unsigned long addr,
+				       unsigned long len, unsigned long pgoff,
+				       unsigned long flags, vm_flags_t vm_flags)
+{
+	return arch_get_unmapped_area_topdown(filp, addr, len, pgoff, flags);
+}
+#endif
+
+unsigned long mm_get_unmapped_area_vmflags(struct mm_struct *mm, struct file *filp,
+					   unsigned long addr, unsigned long len,
+					   unsigned long pgoff, unsigned long flags,
+					   vm_flags_t vm_flags)
+{
+	if (test_bit(MMF_TOPDOWN, &mm->flags))
+		return arch_get_unmapped_area_topdown_vmflags(filp, addr, len, pgoff,
+							      flags, vm_flags);
+	return arch_get_unmapped_area_vmflags(filp, addr, len, pgoff, flags, vm_flags);
+}
+
 unsigned long
 get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 		unsigned long pgoff, unsigned long flags)

From patchwork Tue Mar 12 22:28:34 2024
X-Patchwork-Submitter: "Edgecombe, Rick P"
X-Patchwork-Id: 13590672
From: Rick Edgecombe
To: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de,
    broonie@kernel.org, dave.hansen@linux.intel.com, debug@rivosinc.com,
    hpa@zytor.com, keescook@chromium.org, kirill.shutemov@linux.intel.com,
    luto@kernel.org, mingo@redhat.com, peterz@infradead.org,
    tglx@linutronix.de, x86@kernel.org, christophe.leroy@csgroup.eu
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    rick.p.edgecombe@intel.com
Subject: [PATCH v3 03/12] mm: Use get_unmapped_area_vmflags()
Date: Tue, 12 Mar 2024 15:28:34 -0700
Message-Id: <20240312222843.2505560-4-rick.p.edgecombe@intel.com>
In-Reply-To: <20240312222843.2505560-1-rick.p.edgecombe@intel.com>
References: <20240312222843.2505560-1-rick.p.edgecombe@intel.com>

When memory is being placed, mmap() will take care to respect the guard
gaps of certain types of memory (VM_SHADOWSTACK, VM_GROWSUP and
VM_GROWSDOWN). In order to ensure guard gaps between mappings, mmap()
needs to consider two things:

 1. That the new mapping isn't placed in any existing mapping's guard
    gaps.
 2. That the new mapping isn't placed such that any existing mappings
    are not in *its* guard gaps.

The long-standing behavior of mmap() is to ensure 1, but not take any
care around 2. So for example, if there is a PAGE_SIZE free area, and a
mmap() with a PAGE_SIZE size, and a type that has a guard gap is being
placed, mmap() may place the shadow stack in the PAGE_SIZE free area.
Then the mapping that is supposed to have a guard gap will not have a
gap to the adjacent VMA.

Use mm_get_unmapped_area_vmflags() in do_mmap() so future changes can
cause shadow stack mappings to be placed with a guard gap. Also use the
THP variant that takes vm_flags, such that THP shadow stack can get the
same treatment. Adjust the vm_flags calculation to happen earlier so
that the vm_flags can be passed into __get_unmapped_area().
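The interface side of this change is small. Condensed from the
include/linux/mm.h hunk below: existing callers keep the old name, which
becomes a thin wrapper passing an empty vm_flags.

/* Condensed from the hunk below: only do_mmap() needs the new argument;
 * everyone else keeps calling get_unmapped_area(), which forwards an
 * empty vm_flags so behavior is unchanged for those callers.
 */
static inline unsigned long
get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
		  unsigned long pgoff, unsigned long flags)
{
	return __get_unmapped_area(file, addr, len, pgoff, flags, 0);
}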
Signed-off-by: Rick Edgecombe
Reviewed-by: Christophe Leroy
---
v2:
 - Make get_unmapped_area() a static inline (Kirill)
---
 include/linux/mm.h | 11 ++++++++++-
 mm/mmap.c          | 34 ++++++++++++++++------------------
 2 files changed, 26 insertions(+), 19 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index f5a97dec5169..d91cde79aaee 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3363,7 +3363,16 @@ extern int install_special_mapping(struct mm_struct *mm,
 unsigned long randomize_stack_top(unsigned long stack_top);
 unsigned long randomize_page(unsigned long start, unsigned long range);
 
-extern unsigned long get_unmapped_area(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
+unsigned long
+__get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
+		    unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags);
+
+static inline unsigned long
+get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
+		  unsigned long pgoff, unsigned long flags)
+{
+	return __get_unmapped_area(file, addr, len, pgoff, flags, 0);
+}
 
 extern unsigned long mmap_region(struct file *file, unsigned long addr,
 	unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
diff --git a/mm/mmap.c b/mm/mmap.c
index e23ce8ca24c9..a3128ed26676 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1257,18 +1257,6 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
 	if (mm->map_count > sysctl_max_map_count)
 		return -ENOMEM;
 
-	/* Obtain the address to map to. we verify (or select) it and ensure
-	 * that it represents a valid section of the address space.
-	 */
-	addr = get_unmapped_area(file, addr, len, pgoff, flags);
-	if (IS_ERR_VALUE(addr))
-		return addr;
-
-	if (flags & MAP_FIXED_NOREPLACE) {
-		if (find_vma_intersection(mm, addr, addr + len))
-			return -EEXIST;
-	}
-
 	if (prot == PROT_EXEC) {
 		pkey = execute_only_pkey(mm);
 		if (pkey < 0)
@@ -1282,6 +1270,18 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
 	vm_flags |= calc_vm_prot_bits(prot, pkey) | calc_vm_flag_bits(flags) |
 			mm->def_flags | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC;
 
+	/* Obtain the address to map to. we verify (or select) it and ensure
+	 * that it represents a valid section of the address space.
+	 */
+	addr = __get_unmapped_area(file, addr, len, pgoff, flags, vm_flags);
+	if (IS_ERR_VALUE(addr))
+		return addr;
+
+	if (flags & MAP_FIXED_NOREPLACE) {
+		if (find_vma_intersection(mm, addr, addr + len))
+			return -EEXIST;
+	}
+
 	if (flags & MAP_LOCKED)
 		if (!can_do_mlock())
 			return -EPERM;
@@ -1839,8 +1839,8 @@ unsigned long mm_get_unmapped_area_vmflags(struct mm_struct *mm, struct file *fi
 }
 
 unsigned long
-get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
-		unsigned long pgoff, unsigned long flags)
+__get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
+		unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags)
 {
 	unsigned long (*get_area)(struct file *, unsigned long,
 				  unsigned long, unsigned long, unsigned long)
@@ -1875,8 +1875,8 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 	if (get_area)
 		addr = get_area(file, addr, len, pgoff, flags);
 	else
-		addr = mm_get_unmapped_area(current->mm, file, addr, len,
-					    pgoff, flags);
+		addr = mm_get_unmapped_area_vmflags(current->mm, file, addr, len,
+						    pgoff, flags, vm_flags);
 
 	if (IS_ERR_VALUE(addr))
 		return addr;
@@ -1889,8 +1889,6 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 	return error ? error : addr;
 }
 
-EXPORT_SYMBOL(get_unmapped_area);
-
 unsigned long
 mm_get_unmapped_area(struct mm_struct *mm, struct file *file,
 		     unsigned long addr, unsigned long len,

From patchwork Tue Mar 12 22:28:35 2024
X-Patchwork-Submitter: "Edgecombe, Rick P"
X-Patchwork-Id: 13590669
From: Rick Edgecombe
To: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de,
    broonie@kernel.org, dave.hansen@linux.intel.com, debug@rivosinc.com,
    hpa@zytor.com, keescook@chromium.org, kirill.shutemov@linux.intel.com,
    luto@kernel.org, mingo@redhat.com, peterz@infradead.org,
    tglx@linutronix.de, x86@kernel.org, christophe.leroy@csgroup.eu
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    rick.p.edgecombe@intel.com
Subject: [PATCH v3 04/12] thp: Add thp_get_unmapped_area_vmflags()
Date: Tue, 12 Mar 2024 15:28:35 -0700
Message-Id: <20240312222843.2505560-5-rick.p.edgecombe@intel.com>
In-Reply-To: <20240312222843.2505560-1-rick.p.edgecombe@intel.com>
References: <20240312222843.2505560-1-rick.p.edgecombe@intel.com>

When memory is being placed, mmap() will take care to respect the guard
gaps of certain types of memory (VM_SHADOWSTACK, VM_GROWSUP and
VM_GROWSDOWN). In order to ensure guard gaps between mappings, mmap()
needs to consider two things:

 1. That the new mapping isn't placed in any existing mapping's guard
    gaps.
 2. That the new mapping isn't placed such that any existing mappings
    are not in *its* guard gaps.

The long-standing behavior of mmap() is to ensure 1, but not take any
care around 2.
So for example, if there is a PAGE_SIZE free area, and a mmap() with a
PAGE_SIZE size, and a type that has a guard gap is being placed, mmap()
may place the shadow stack in the PAGE_SIZE free area. Then the mapping
that is supposed to have a guard gap will not have a gap to the adjacent
VMA.

Add a THP implementation of the vm_flags variant of get_unmapped_area().
Future changes will call this from mmap.c in the do_mmap() path to allow
shadow stacks to be placed with consideration taken for the start guard
gap. Shadow stack memory is always private and anonymous and so special
guard gap logic is not needed in a lot of cases, but it can be mapped by
THP, so it needs to be handled.

Signed-off-by: Rick Edgecombe
Reviewed-by: Christophe Leroy
---
 include/linux/huge_mm.h | 11 +++++++++++
 mm/huge_memory.c        | 23 ++++++++++++++++-------
 mm/mmap.c               | 12 +++++++-----
 3 files changed, 34 insertions(+), 12 deletions(-)
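The same wrapper pattern as patch 3 is used on the THP side. Condensed
from the mm/huge_memory.c hunk below:

/* Condensed from the hunk below: the existing entry point survives as a
 * wrapper that passes empty vm_flags, while callers that already know
 * the eventual vm_flags use the _vmflags variant directly.
 */
unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
		unsigned long len, unsigned long pgoff, unsigned long flags)
{
	return thp_get_unmapped_area_vmflags(filp, addr, len, pgoff, flags, 0);
}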
@@ -848,17 +849,25 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
 	return ret;
 }
 
-unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
-		unsigned long len, unsigned long pgoff, unsigned long flags)
+unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
+		unsigned long len, unsigned long pgoff, unsigned long flags,
+		vm_flags_t vm_flags)
 {
 	unsigned long ret;
 	loff_t off = (loff_t)pgoff << PAGE_SHIFT;
 
-	ret = __thp_get_unmapped_area(filp, addr, len, off, flags, PMD_SIZE);
+	ret = __thp_get_unmapped_area(filp, addr, len, off, flags, PMD_SIZE, vm_flags);
 	if (ret)
 		return ret;
 
-	return mm_get_unmapped_area(current->mm, filp, addr, len, pgoff, flags);
+	return mm_get_unmapped_area_vmflags(current->mm, filp, addr, len, pgoff, flags,
+					    vm_flags);
+}
+
+unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
+		unsigned long len, unsigned long pgoff, unsigned long flags)
+{
+	return thp_get_unmapped_area_vmflags(filp, addr, len, pgoff, flags, 0);
 }
 EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
diff --git a/mm/mmap.c b/mm/mmap.c
index a3128ed26676..68381b90f906 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1863,20 +1863,22 @@ __get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 		 * so use shmem's get_unmapped_area in case it can be huge.
 		 */
 		get_area = shmem_get_unmapped_area;
-	} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
-		/* Ensures that larger anonymous mappings are THP aligned. */
-		get_area = thp_get_unmapped_area;
 	}
 
 	/* Always treat pgoff as zero for anonymous memory. */
 	if (!file)
 		pgoff = 0;
 
-	if (get_area)
+	if (get_area) {
 		addr = get_area(file, addr, len, pgoff, flags);
-	else
+	} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
+		/* Ensures that larger anonymous mappings are THP aligned. */
+		addr = thp_get_unmapped_area_vmflags(file, addr, len,
+						     pgoff, flags, vm_flags);
+	} else {
 		addr = mm_get_unmapped_area_vmflags(current->mm, file, addr, len,
 						    pgoff, flags, vm_flags);
+	}
 
 	if (IS_ERR_VALUE(addr))
 		return addr;
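The alignment arithmetic in __thp_get_unmapped_area() can be hard to follow from the diff alone. Below is a hedged userspace re-creation of the padding logic: a 2MB PMD_SIZE is assumed (as on x86-64), the area search is faked with a fixed address, and the congruence adjustment paraphrases the surrounding kernel code rather than quoting this patch.

#include <assert.h>
#include <stdio.h>

#define PMD_SIZE (2UL << 20)	/* assumption: 2MB PMDs, as on x86-64 */

int main(void)
{
	unsigned long off = 0;			/* pgoff << PAGE_SHIFT; 0 for anon */
	unsigned long len = 3UL << 20;		/* requested length: 3MB */
	unsigned long len_pad = len + PMD_SIZE;	/* over-allocate one extra PMD */

	/* Pretend mm_get_unmapped_area_vmflags() returned this spot. */
	unsigned long ret = 0x7f1234501000UL;

	/* Advance to the first address congruent to 'off' mod PMD_SIZE. */
	unsigned long aligned = ret + ((off - ret) & (PMD_SIZE - 1));

	/* The extra PMD of padding guarantees the aligned start still fits. */
	assert(aligned + len <= ret + len_pad);
	printf("search gave %#lx, PMD-aligned start %#lx\n", ret, aligned);
	return 0;
}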
From patchwork Tue Mar 12 22:28:36 2024
X-Patchwork-Id: 13590671
From: Rick Edgecombe
To: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de, broonie@kernel.org, dave.hansen@linux.intel.com, debug@rivosinc.com, hpa@zytor.com, keescook@chromium.org, kirill.shutemov@linux.intel.com, luto@kernel.org, mingo@redhat.com, peterz@infradead.org, tglx@linutronix.de, x86@kernel.org, christophe.leroy@csgroup.eu
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, rick.p.edgecombe@intel.com, Guo Ren, linux-csky@vger.kernel.org
Subject: [PATCH v3 05/12] csky: Use initializer for struct vm_unmapped_area_info
Date: Tue, 12 Mar 2024 15:28:36 -0700
Message-Id: <20240312222843.2505560-6-rick.p.edgecombe@intel.com>
In-Reply-To: <20240312222843.2505560-1-rick.p.edgecombe@intel.com>
References: <20240312222843.2505560-1-rick.p.edgecombe@intel.com>

Future changes will need to add a new member to struct vm_unmapped_area_info. This would cause trouble for any call site that doesn't initialize the struct.
Currently every caller sets each member manually, so if new members are added they will be uninitialized and the core code parsing the struct will see garbage in the new member.

It could be possible to initialize the new member manually to 0 at each call site. This and a couple of other options were discussed, and a working consensus (see links) was that in general the best way to accomplish this would be via static initialization with designated member initializers. Having some struct vm_unmapped_area_info instances not zero initialized puts those sites at risk of feeding garbage into vm_unmapped_area() if the convention is to zero initialize the struct and any new member addition misses a call site that initializes each member manually.

It could be possible to leave the code mostly untouched, and just change the line:

struct vm_unmapped_area_info info

to:

struct vm_unmapped_area_info info = {};

However, that would leave cleanup for the members that are manually set to zero, as it would no longer be required. So to reduce the chance of bugs via uninitialized members, instead simply continue the process of initializing the struct this way tree wide. This will zero any unspecified members. Move the member initializers to the struct declaration when they are known at that time. Leave out the members that were manually initialized to zero, as this would be redundant for designated initializers.

Signed-off-by: Rick Edgecombe
Reviewed-by: Guo Ren
Cc: Guo Ren
Cc: linux-csky@vger.kernel.org
Link: https://lore.kernel.org/lkml/202402280912.33AEE7A9CF@keescook/#t
Link: https://lore.kernel.org/lkml/j7bfvig3gew3qruouxrh7z7ehjjafrgkbcmg6tcghhfh3rhmzi@wzlcoecgy5rs/
Reviewed-by: Christophe Leroy
---
v3:
 - Fixed spelling errors in log
 - Be consistent about field vs member in log

Hi,

This patch was split and refactored out of a tree-wide change [0] to just zero-init each struct vm_unmapped_area_info. The overall goal of the series is to help shadow stack guard gaps. Currently, there is only one arch with shadow stacks, but two more are in progress. It is compile tested only.

There was further discussion that this method of initializing the structs, while nice in some ways, has a greater risk of introducing bugs in some of the more complicated callers. Since this version was already reviewed by arch maintainers, leave it as acknowledged.

Thanks,
Rick

[0] https://lore.kernel.org/lkml/20240226190951.3240433-6-rick.p.edgecombe@intel.com/
---
 arch/csky/abiv1/mmap.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/csky/abiv1/mmap.c b/arch/csky/abiv1/mmap.c
index 6792aca49999..7f826331d409 100644
--- a/arch/csky/abiv1/mmap.c
+++ b/arch/csky/abiv1/mmap.c
@@ -28,7 +28,12 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
 	int do_align = 0;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {
+		.length = len,
+		.low_limit = mm->mmap_base,
+		.high_limit = TASK_SIZE,
+		.align_offset = pgoff << PAGE_SHIFT
+	};
 
 	/*
 	 * We only need to do colour alignment if either the I or D
@@ -61,11 +66,6 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		return addr;
 	}
 
-	info.flags = 0;
-	info.length = len;
-	info.low_limit = mm->mmap_base;
-	info.high_limit = TASK_SIZE;
 	info.align_mask = do_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
-	info.align_offset = pgoff << PAGE_SHIFT;
 
 	return vm_unmapped_area(&info);
 }
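To illustrate why the designated-initializer form is preferred, here is a self-contained sketch with a stand-in struct (the member names, including the imagined later addition start_gap, are hypothetical, not the real vm_unmapped_area_info layout):

#include <stdio.h>

/* Stand-in for struct vm_unmapped_area_info (hypothetical members). */
struct area_info {
	unsigned long flags;
	unsigned long length;
	unsigned long align_mask;
	unsigned long start_gap;	/* imagine this member is added later */
};

int main(void)
{
	/*
	 * Designated initializers zero every member that is not named,
	 * so a later-added 'start_gap' is guaranteed to be 0 here ...
	 */
	struct area_info safe = {
		.length = 4096,
	};

	/*
	 * ... whereas manual member-by-member assignment of an
	 * uninitialized struct leaves new members as stack garbage.
	 */
	struct area_info risky;
	risky.flags = 0;
	risky.length = 4096;
	risky.align_mask = 0;
	/* risky.start_gap is never set: reading it would be undefined. */

	printf("safe.start_gap = %lu\n", safe.start_gap);
	printf("risky.flags = %lu\n", risky.flags);
	return 0;
}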
From patchwork Tue Mar 12 22:28:37 2024
X-Patchwork-Id: 13590673
From: Rick Edgecombe
To: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de, broonie@kernel.org, dave.hansen@linux.intel.com, debug@rivosinc.com, hpa@zytor.com, keescook@chromium.org, kirill.shutemov@linux.intel.com, luto@kernel.org, mingo@redhat.com, peterz@infradead.org, tglx@linutronix.de, x86@kernel.org, christophe.leroy@csgroup.eu
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, rick.p.edgecombe@intel.com, Helge Deller, "James E.J. Bottomley", linux-parisc@vger.kernel.org
Subject: [PATCH v3 06/12] parisc: Use initializer for struct vm_unmapped_area_info
Date: Tue, 12 Mar 2024 15:28:37 -0700
Message-Id: <20240312222843.2505560-7-rick.p.edgecombe@intel.com>
In-Reply-To: <20240312222843.2505560-1-rick.p.edgecombe@intel.com>
References: <20240312222843.2505560-1-rick.p.edgecombe@intel.com>

Future changes will need to add a new member to struct vm_unmapped_area_info. This would cause trouble for any call site that doesn't initialize the struct. Currently every caller sets each member manually, so if new members are added they will be uninitialized and the core code parsing the struct will see garbage in the new member.

It could be possible to initialize the new member manually to 0 at each call site.
This and a couple of other options were discussed, and a working consensus (see links) was that in general the best way to accomplish this would be via static initialization with designated member initializers. Having some struct vm_unmapped_area_info instances not zero initialized puts those sites at risk of feeding garbage into vm_unmapped_area() if the convention is to zero initialize the struct and any new member addition misses a call site that initializes each member manually.

It could be possible to leave the code mostly untouched, and just change the line:

struct vm_unmapped_area_info info

to:

struct vm_unmapped_area_info info = {};

However, that would leave cleanup for the members that are manually set to zero, as it would no longer be required. So to reduce the chance of bugs via uninitialized members, instead simply continue the process of initializing the struct this way tree wide. This will zero any unspecified members. Move the member initializers to the struct declaration when they are known at that time. Leave out the members that were manually initialized to zero, as this would be redundant for designated initializers.

Signed-off-by: Rick Edgecombe
Acked-by: Helge Deller
Cc: "James E.J. Bottomley"
Cc: Helge Deller
Cc: linux-parisc@vger.kernel.org
Link: https://lore.kernel.org/lkml/202402280912.33AEE7A9CF@keescook/#t
Link: https://lore.kernel.org/lkml/j7bfvig3gew3qruouxrh7z7ehjjafrgkbcmg6tcghhfh3rhmzi@wzlcoecgy5rs/
Reviewed-by: Christophe Leroy
---
v3:
 - Fixed spelling errors in log
 - Be consistent about field vs member in log

Hi,

This patch was split and refactored out of a tree-wide change [0] to just zero-init each struct vm_unmapped_area_info. The overall goal of the series is to help shadow stack guard gaps. Currently, there is only one arch with shadow stacks, but two more are in progress. It is compile tested only.

There was further discussion that this method of initializing the structs, while nice in some ways, has a greater risk of introducing bugs in some of the more complicated callers. Since this version was already reviewed by arch maintainers, leave it as acknowledged.

Thanks,
Rick

[0] https://lore.kernel.org/lkml/20240226190951.3240433-6-rick.p.edgecombe@intel.com/
---
 arch/parisc/kernel/sys_parisc.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/parisc/kernel/sys_parisc.c b/arch/parisc/kernel/sys_parisc.c
index 98af719d5f85..f7722451276e 100644
--- a/arch/parisc/kernel/sys_parisc.c
+++ b/arch/parisc/kernel/sys_parisc.c
@@ -104,7 +104,9 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	struct vm_area_struct *vma, *prev;
 	unsigned long filp_pgoff;
 	int do_color_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {
+		.length = len
+	};
 
 	if (unlikely(len > TASK_SIZE))
 		return -ENOMEM;
@@ -139,7 +141,6 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 		return addr;
 	}
 
-	info.length = len;
 	info.align_mask = do_color_align ? (PAGE_MASK & (SHM_COLOUR - 1)) : 0;
 	info.align_offset = shared_align_offset(filp_pgoff, pgoff);
 
@@ -160,7 +161,6 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	 */
 	}
 
-	info.flags = 0;
 	info.low_limit = mm->mmap_base;
 	info.high_limit = mmap_upper_limit(NULL);
 	return vm_unmapped_area(&info);
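The parisc pattern above (initialize .length at the declaration, fill in the rest at runtime) leans on a C guarantee worth spelling out; a small standalone demo with a stand-in struct:

#include <stdio.h>

/* Minimal stand-in; the real struct lives in the mm headers. */
struct area_info {
	unsigned long flags;
	unsigned long length;
	unsigned long low_limit;
	unsigned long high_limit;
	unsigned long align_mask;
	unsigned long align_offset;
};

int main(void)
{
	unsigned long len = 4096;

	/*
	 * Per the C standard (C99 6.7.8), members without an explicit
	 * initializer in a braced initializer are initialized as if
	 * they had static storage duration, i.e. to zero. So flags,
	 * align_mask and align_offset below are all guaranteed 0.
	 */
	struct area_info info = {
		.length = len
	};

	/* Members can still be filled in later, as the parisc code does. */
	info.low_limit = 0x10000;
	info.high_limit = ~0UL;

	printf("flags=%lu align_mask=%lu align_offset=%lu\n",
	       info.flags, info.align_mask, info.align_offset);
	return 0;
}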
From patchwork Tue Mar 12 22:28:38 2024
X-Patchwork-Id: 13590676
From: Rick Edgecombe
To: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de, broonie@kernel.org, dave.hansen@linux.intel.com, debug@rivosinc.com, hpa@zytor.com, keescook@chromium.org, kirill.shutemov@linux.intel.com, luto@kernel.org, mingo@redhat.com, peterz@infradead.org, tglx@linutronix.de, x86@kernel.org, christophe.leroy@csgroup.eu
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, rick.p.edgecombe@intel.com, Michael Ellerman, Nicholas Piggin, Aneesh Kumar K.V, Naveen N. Rao, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v3 07/12] powerpc: Use initializer for struct vm_unmapped_area_info
Date: Tue, 12 Mar 2024 15:28:38 -0700
Message-Id: <20240312222843.2505560-8-rick.p.edgecombe@intel.com>
In-Reply-To: <20240312222843.2505560-1-rick.p.edgecombe@intel.com>
References: <20240312222843.2505560-1-rick.p.edgecombe@intel.com>
Future changes will need to add a new member to struct vm_unmapped_area_info. This would cause trouble for any call site that doesn't initialize the struct. Currently every caller sets each member manually, so if new members are added they will be uninitialized and the core code parsing the struct will see garbage in the new member.

It could be possible to initialize the new member manually to 0 at each call site. This and a couple of other options were discussed, and a working consensus (see links) was that in general the best way to accomplish this would be via static initialization with designated member initializers. Having some struct vm_unmapped_area_info instances not zero initialized puts those sites at risk of feeding garbage into vm_unmapped_area() if the convention is to zero initialize the struct and any new member addition misses a call site that initializes each member manually.

It could be possible to leave the code mostly untouched, and just change the line:

struct vm_unmapped_area_info info

to:

struct vm_unmapped_area_info info = {};

However, that would leave cleanup for the members that are manually set to zero, as it would no longer be required. So to reduce the chance of bugs via uninitialized members, instead simply continue the process of initializing the struct this way tree wide. This will zero any unspecified members. Move the member initializers to the struct declaration when they are known at that time. Leave out the members that were manually initialized to zero, as this would be redundant for designated initializers.

Signed-off-by: Rick Edgecombe
Acked-by: Michael Ellerman
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Christophe Leroy
Cc: Aneesh Kumar K.V
Cc: Naveen N. Rao
Cc: linuxppc-dev@lists.ozlabs.org
Link: https://lore.kernel.org/lkml/202402280912.33AEE7A9CF@keescook/#t
Link: https://lore.kernel.org/lkml/j7bfvig3gew3qruouxrh7z7ehjjafrgkbcmg6tcghhfh3rhmzi@wzlcoecgy5rs/
---
v3:
 - Fixed spelling errors in log
 - Be consistent about field vs member in log

Hi,

This patch was split and refactored out of a tree-wide change [0] to just zero-init each struct vm_unmapped_area_info. The overall goal of the series is to help shadow stack guard gaps. Currently, there is only one arch with shadow stacks, but two more are in progress. It is compile tested only.

There was further discussion that this method of initializing the structs, while nice in some ways, has a greater risk of introducing bugs in some of the more complicated callers. Since this version was already reviewed by arch maintainers, leave it as acknowledged.
Thanks,
Rick

[0] https://lore.kernel.org/lkml/20240226190951.3240433-6-rick.p.edgecombe@intel.com/
---
 arch/powerpc/mm/book3s64/slice.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/slice.c b/arch/powerpc/mm/book3s64/slice.c
index c0b58afb9a47..6c7ac8c73a6c 100644
--- a/arch/powerpc/mm/book3s64/slice.c
+++ b/arch/powerpc/mm/book3s64/slice.c
@@ -282,12 +282,12 @@ static unsigned long slice_find_area_bottomup(struct mm_struct *mm,
 {
 	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
 	unsigned long found, next_end;
-	struct vm_unmapped_area_info info;
-
-	info.flags = 0;
-	info.length = len;
-	info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
-	info.align_offset = 0;
+	struct vm_unmapped_area_info info = {
+		.flags = 0,
+		.length = len,
+		.align_mask = PAGE_MASK & ((1ul << pshift) - 1),
+		.align_offset = 0
+	};
 	/*
 	 * Check till the allow max value for this mmap request
 	 */
@@ -326,13 +326,14 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
 {
 	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
 	unsigned long found, prev;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {
+		.flags = VM_UNMAPPED_AREA_TOPDOWN,
+		.length = len,
+		.align_mask = PAGE_MASK & ((1ul << pshift) - 1),
+		.align_offset = 0
+	};
 	unsigned long min_addr = max(PAGE_SIZE, mmap_min_addr);
 
-	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
-	info.length = len;
-	info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
-	info.align_offset = 0;
 	/*
 	 * If we are trying to allocate above DEFAULT_MAP_WINDOW
 	 * Add the different to the mmap_base.
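As a worked example of the align_mask expression retained in both initializers above (assuming a PAGE_SHIFT of 12 and a hypothetical 16MB slice page, i.e. pshift of 24):

#include <stdio.h>

#define PAGE_SHIFT 12				/* assumption: 4K base pages */
#define PAGE_MASK (~((1UL << PAGE_SHIFT) - 1))

int main(void)
{
	int pshift = 24;	/* hypothetical 16MB page size */

	/*
	 * Keep only the bits between PAGE_SHIFT and pshift: the result
	 * tells vm_unmapped_area() which low bits of the chosen address
	 * must be zero, without constraining the sub-page bits.
	 */
	unsigned long align_mask = PAGE_MASK & ((1UL << pshift) - 1);

	printf("align_mask = %#lx\n", align_mask);	/* prints 0xfff000 */
	return 0;
}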
From patchwork Tue Mar 12 22:28:39 2024
X-Patchwork-Id: 13590674
From: Rick Edgecombe
To: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de, broonie@kernel.org, dave.hansen@linux.intel.com, debug@rivosinc.com, hpa@zytor.com, keescook@chromium.org, kirill.shutemov@linux.intel.com, luto@kernel.org, mingo@redhat.com, peterz@infradead.org, tglx@linutronix.de, x86@kernel.org, christophe.leroy@csgroup.eu
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, rick.p.edgecombe@intel.com, linux-alpha@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org
Subject: [PATCH v3 08/12] treewide: Use initializer for struct vm_unmapped_area_info
Date: Tue, 12 Mar 2024 15:28:39 -0700
Message-Id: <20240312222843.2505560-9-rick.p.edgecombe@intel.com>
In-Reply-To: <20240312222843.2505560-1-rick.p.edgecombe@intel.com>
References: <20240312222843.2505560-1-rick.p.edgecombe@intel.com>
Future changes will need to add a new member to struct vm_unmapped_area_info. This would cause trouble for any call site that doesn't initialize the struct. Currently every caller sets each member manually, so if new ones are added they will be uninitialized and the core code parsing the struct will see garbage in the new member.

It could be possible to initialize the new member manually to 0 at each call site. This and a couple of other options were discussed. Having some struct vm_unmapped_area_info instances not zero initialized puts those sites at risk of feeding garbage into vm_unmapped_area(), if the convention is to zero initialize the struct and any new field addition misses a call site that initializes each field manually. So it is useful to do things similarly across the kernel.

The consensus (see links) was that, taking into account both code cleanliness and minimizing the chance of introducing bugs, the best general way to accomplish this is C99 static initialization. As in:

struct vm_unmapped_area_info info = {};

With this method of initialization, the whole struct will be zero initialized, and any statements setting fields to zero will be unneeded. The change should not leave cleanup at the call sites.

While iterating through the possible solutions a few archs kindly acked other variations that still zero initialized the struct. These sites have been modified in previous changes using the pattern acked by the respective arch.

So to reduce the chance of bugs via uninitialized fields, perform a tree-wide change using the consensus for the best general way to do this change. Use C99 static initialization to zero the struct and remove any statements that simply set members to zero.
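One portability note on the info = {} form settled on here (a standalone sketch; note that empty braces are a GNU C extension the kernel builds rely on, standardized only later in C23, so strict C99 would spell it {0}):

#include <stdio.h>

struct area_info {
	unsigned long flags;
	unsigned long length;
	unsigned long align_mask;
};

int main(void)
{
	struct area_info info = {};	/* GNU C / C23 empty initializer */

	/* Every member reads as zero; no per-member statements needed. */
	printf("%lu %lu %lu\n", info.flags, info.length, info.align_mask);

	info.length = 4096;		/* only meaningful members get set */
	return 0;
}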
Signed-off-by: Rick Edgecombe
Cc: linux-mm@kvack.org
Cc: linux-alpha@vger.kernel.org
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-csky@vger.kernel.org
Cc: loongarch@lists.linux.dev
Cc: linux-mips@vger.kernel.org
Cc: linux-s390@vger.kernel.org
Cc: linux-sh@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Link: https://lore.kernel.org/lkml/202402280912.33AEE7A9CF@keescook/#t
Link: https://lore.kernel.org/lkml/j7bfvig3gew3qruouxrh7z7ehjjafrgkbcmg6tcghhfh3rhmzi@wzlcoecgy5rs/
Link: https://lore.kernel.org/lkml/ec3e377a-c0a0-4dd3-9cb9-96517e54d17e@csgroup.eu/
Reviewed-by: Kees Cook
---
Hi archs,

For some context, this is part of a larger series to improve shadow stack guard gaps. It involves plumbing a new field via struct vm_unmapped_area_info. The first user is x86, but arm and riscv will likely use it as well. The change is compile tested only for non-x86.

Thanks,
Rick
---
 arch/alpha/kernel/osf_sys.c      |  5 +----
 arch/arc/mm/mmap.c               |  4 +---
 arch/arm/mm/mmap.c               |  5 ++---
 arch/loongarch/mm/mmap.c         |  3 +--
 arch/mips/mm/mmap.c              |  3 +--
 arch/s390/mm/hugetlbpage.c       |  7 ++-----
 arch/s390/mm/mmap.c              | 11 ++++-------
 arch/sh/mm/mmap.c                |  5 ++---
 arch/sparc/kernel/sys_sparc_32.c |  3 +--
 arch/sparc/kernel/sys_sparc_64.c |  5 ++---
 arch/sparc/mm/hugetlbpage.c      |  7 ++-----
 arch/x86/kernel/sys_x86_64.c     |  7 ++-----
 arch/x86/mm/hugetlbpage.c        |  7 ++-----
 fs/hugetlbfs/inode.c             |  7 ++-----
 mm/mmap.c                        |  9 ++-------
 15 files changed, 27 insertions(+), 61 deletions(-)

diff --git a/arch/alpha/kernel/osf_sys.c b/arch/alpha/kernel/osf_sys.c
index 5db88b627439..e5f881bc8288 100644
--- a/arch/alpha/kernel/osf_sys.c
+++ b/arch/alpha/kernel/osf_sys.c
@@ -1218,14 +1218,11 @@ static unsigned long
 arch_get_unmapped_area_1(unsigned long addr, unsigned long len,
 			 unsigned long limit)
 {
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = addr;
 	info.high_limit = limit;
-	info.align_mask = 0;
-	info.align_offset = 0;
 	return vm_unmapped_area(&info);
 }
diff --git a/arch/arc/mm/mmap.c b/arch/arc/mm/mmap.c
index 3c1c7ae73292..69a915297155 100644
--- a/arch/arc/mm/mmap.c
+++ b/arch/arc/mm/mmap.c
@@ -27,7 +27,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/*
 	 * We enforce the MAP_FIXED case.
@@ -51,11 +51,9 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		return addr;
 	}
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = mm->mmap_base;
 	info.high_limit = TASK_SIZE;
-	info.align_mask = 0;
 	info.align_offset = pgoff << PAGE_SHIFT;
 	return vm_unmapped_area(&info);
 }
diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
index a0f8a0ca0788..d65d0e6ed10a 100644
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -34,7 +34,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	struct vm_area_struct *vma;
 	int do_align = 0;
 	int aliasing = cache_is_vipt_aliasing();
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/*
 	 * We only need to do colour alignment if either the I or D
@@ -68,7 +68,6 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		return addr;
 	}
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = mm->mmap_base;
 	info.high_limit = TASK_SIZE;
@@ -87,7 +86,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	unsigned long addr = addr0;
 	int do_align = 0;
 	int aliasing = cache_is_vipt_aliasing();
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/*
 	 * We only need to do colour alignment if either the I or D
diff --git a/arch/loongarch/mm/mmap.c b/arch/loongarch/mm/mmap.c
index a9630a81b38a..4bbd449b4a47 100644
--- a/arch/loongarch/mm/mmap.c
+++ b/arch/loongarch/mm/mmap.c
@@ -24,7 +24,7 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	struct vm_area_struct *vma;
 	unsigned long addr = addr0;
 	int do_color_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (unlikely(len > TASK_SIZE))
 		return -ENOMEM;
@@ -82,7 +82,6 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	 */
 	}
 
-	info.flags = 0;
 	info.low_limit = mm->mmap_base;
 	info.high_limit = TASK_SIZE;
 	return vm_unmapped_area(&info);
diff --git a/arch/mips/mm/mmap.c b/arch/mips/mm/mmap.c
index 00fe90c6db3e..7e11d7b58761 100644
--- a/arch/mips/mm/mmap.c
+++ b/arch/mips/mm/mmap.c
@@ -34,7 +34,7 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	struct vm_area_struct *vma;
 	unsigned long addr = addr0;
 	int do_color_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (unlikely(len > TASK_SIZE))
 		return -ENOMEM;
@@ -92,7 +92,6 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	 */
 	}
 
-	info.flags = 0;
 	info.low_limit = mm->mmap_base;
 	info.high_limit = TASK_SIZE;
 	return vm_unmapped_area(&info);
diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
index c2d2850ec8d5..51fb3806395b 100644
--- a/arch/s390/mm/hugetlbpage.c
+++ b/arch/s390/mm/hugetlbpage.c
@@ -258,14 +258,12 @@ static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *file,
 		unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = current->mm->mmap_base;
 	info.high_limit = TASK_SIZE;
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
-	info.align_offset = 0;
 	return vm_unmapped_area(&info);
 }
 
@@ -274,7 +272,7 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
 		unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 	unsigned long addr;
 
 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
@@ -282,7 +280,6 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
 	info.low_limit = PAGE_SIZE;
 	info.high_limit = current->mm->mmap_base;
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
-	info.align_offset = 0;
 	addr = vm_unmapped_area(&info);
 
 	/*
diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
index cd52d72b59cf..5c9d9f18a55f 100644
--- a/arch/s390/mm/mmap.c
+++ b/arch/s390/mm/mmap.c
@@ -77,7 +77,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (len > TASK_SIZE - mmap_min_addr)
 		return -ENOMEM;
@@ -93,14 +93,12 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		goto check_asce_limit;
 	}
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = mm->mmap_base;
 	info.high_limit = TASK_SIZE;
 	if (filp || (flags & MAP_SHARED))
 		info.align_mask = MMAP_ALIGN_MASK << PAGE_SHIFT;
-	else
-		info.align_mask = 0;
+
 	info.align_offset = pgoff << PAGE_SHIFT;
 	addr = vm_unmapped_area(&info);
 	if (offset_in_page(addr))
@@ -116,7 +114,7 @@ unsigned long arch_get_unmapped_area_topdown(struct file *filp, unsigned long ad
 {
 	struct vm_area_struct *vma;
 	struct mm_struct *mm = current->mm;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/* requested length too big for entire address space */
 	if (len > TASK_SIZE - mmap_min_addr)
@@ -140,8 +138,7 @@ unsigned long arch_get_unmapped_area_topdown(struct file *filp, unsigned long ad
 	info.high_limit = mm->mmap_base;
 	if (filp || (flags & MAP_SHARED))
 		info.align_mask = MMAP_ALIGN_MASK << PAGE_SHIFT;
-	else
-		info.align_mask = 0;
+
 	info.align_offset = pgoff << PAGE_SHIFT;
 	addr = vm_unmapped_area(&info);
diff --git a/arch/sh/mm/mmap.c b/arch/sh/mm/mmap.c
index b82199878b45..bee329d4149a 100644
--- a/arch/sh/mm/mmap.c
+++ b/arch/sh/mm/mmap.c
@@ -57,7 +57,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
 	int do_colour_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (flags & MAP_FIXED) {
 		/* We do not accept a shared mapping if it would violate
@@ -88,7 +88,6 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		return addr;
 	}
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = TASK_UNMAPPED_BASE;
 	info.high_limit = TASK_SIZE;
@@ -106,7 +105,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	struct mm_struct *mm = current->mm;
 	unsigned long addr = addr0;
 	int do_colour_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (flags & MAP_FIXED) {
 		/* We do not accept a shared mapping if it would violate
diff --git a/arch/sparc/kernel/sys_sparc_32.c b/arch/sparc/kernel/sys_sparc_32.c
index 082a551897ed..08a19727795c 100644
--- a/arch/sparc/kernel/sys_sparc_32.c
+++ b/arch/sparc/kernel/sys_sparc_32.c
@@ -41,7 +41,7 @@ SYSCALL_DEFINE0(getpagesize)
 
 unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsigned long len, unsigned long pgoff, unsigned long flags)
 {
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (flags & MAP_FIXED) {
 		/* We do not accept a shared mapping if it would violate
@@ -59,7 +59,6 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
 	if (!addr)
 		addr = TASK_UNMAPPED_BASE;
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = addr;
 	info.high_limit = TASK_SIZE;
diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
index 1dbf7211666e..d9c3b34ca744 100644
--- a/arch/sparc/kernel/sys_sparc_64.c
+++ b/arch/sparc/kernel/sys_sparc_64.c
@@ -93,7 +93,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
 	struct vm_area_struct * vma;
 	unsigned long task_size = TASK_SIZE;
 	int do_color_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (flags & MAP_FIXED) {
 		/* We do not accept a shared mapping if it would violate
@@ -126,7 +126,6 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
 		return addr;
 	}
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = TASK_UNMAPPED_BASE;
 	info.high_limit = min(task_size, VA_EXCLUDE_START);
@@ -154,7 +153,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	unsigned long task_size = STACK_TOP32;
 	unsigned long addr = addr0;
 	int do_color_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/* This should only ever run for 32-bit processes. */
 	BUG_ON(!test_thread_flag(TIF_32BIT));
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index 38a1bef47efb..4caf56b32e26 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -31,17 +31,15 @@ static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *filp,
 {
 	struct hstate *h = hstate_file(filp);
 	unsigned long task_size = TASK_SIZE;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (test_thread_flag(TIF_32BIT))
 		task_size = STACK_TOP32;
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = TASK_UNMAPPED_BASE;
 	info.high_limit = min(task_size, VA_EXCLUDE_START);
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
-	info.align_offset = 0;
 	addr = vm_unmapped_area(&info);
 
 	if ((addr & ~PAGE_MASK) && task_size > VA_EXCLUDE_END) {
@@ -63,7 +61,7 @@ hugetlb_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	struct hstate *h = hstate_file(filp);
 	struct mm_struct *mm = current->mm;
 	unsigned long addr = addr0;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/* This should only ever run for 32-bit processes. */
 	BUG_ON(!test_thread_flag(TIF_32BIT));
@@ -73,7 +71,6 @@ hugetlb_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	info.low_limit = PAGE_SIZE;
 	info.high_limit = mm->mmap_base;
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
-	info.align_offset = 0;
 	addr = vm_unmapped_area(&info);
 
 	/*
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index c783aeb37dce..b3278e4f7e59 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -125,7 +125,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 	unsigned long begin, end;
 
 	if (flags & MAP_FIXED)
@@ -144,11 +144,9 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		return addr;
 	}
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = begin;
 	info.high_limit = end;
-	info.align_mask = 0;
 	info.align_offset = pgoff << PAGE_SHIFT;
 	if (filp) {
 		info.align_mask = get_align_mask();
@@ -165,7 +163,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	struct vm_area_struct *vma;
 	struct mm_struct *mm = current->mm;
 	unsigned long addr = addr0;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/* requested length too big for entire address space */
 	if (len > TASK_SIZE)
@@ -210,7 +208,6 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	if (addr > DEFAULT_MAP_WINDOW && !in_32bit_syscall())
 		info.high_limit += TASK_SIZE_MAX - DEFAULT_MAP_WINDOW;
 
-	info.align_mask = 0;
 	info.align_offset = pgoff << PAGE_SHIFT;
 	if (filp) {
 		info.align_mask = get_align_mask();
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index 6d77c0039617..fb600949a355 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -51,9 +51,8 @@ static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *file,
 		unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = get_mmap_base(1);
 
@@ -65,7 +64,6 @@ static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *file,
 		task_size_32bit() : task_size_64bit(addr > DEFAULT_MAP_WINDOW);
 
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
-	info.align_offset = 0;
 	return vm_unmapped_area(&info);
 }
 
@@ -74,7 +72,7 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
 		unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
 	info.length = len;
@@ -89,7 +87,6 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
 		info.high_limit += TASK_SIZE_MAX - DEFAULT_MAP_WINDOW;
 
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
-	info.align_offset = 0;
 	addr = vm_unmapped_area(&info);
 
 	/*
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index cd87ea5944a1..ae833080a146 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -176,14 +176,12 @@ hugetlb_get_unmapped_area_bottomup(struct file *file, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
-	info.flags = 0;
 	info.length = len;
 	info.high_limit = arch_get_mmap_end(addr, len, flags);
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
-	info.align_offset = 0;
 	return vm_unmapped_area(&info);
 }
 
@@ -192,14 +190,13 @@ hugetlb_get_unmapped_area_topdown(struct file *file, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
 	info.length = len;
 	info.low_limit = PAGE_SIZE;
 	info.high_limit = arch_get_mmap_base(addr, current->mm->mmap_base);
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
-	info.align_offset = 0;
 	addr = vm_unmapped_area(&info);
 
 	/*
diff --git a/mm/mmap.c b/mm/mmap.c
index 68381b90f906..b889c79d11bd 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1707,7 +1707,7 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma, *prev;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 	const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
 
 	if (len > mmap_end - mmap_min_addr)
@@ -1725,12 +1725,9 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
 		return addr;
 	}
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = mm->mmap_base;
 	info.high_limit = mmap_end;
-	info.align_mask = 0;
-	info.align_offset = 0;
 	return vm_unmapped_area(&info);
 }
 
@@ -1755,7 +1752,7 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
 {
 	struct vm_area_struct *vma, *prev;
 	struct mm_struct *mm = current->mm;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 	const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
 
 	/* requested length too big for entire address space */
@@ -1779,8 +1776,6 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
 	info.length = len;
 	info.low_limit = PAGE_SIZE;
 	info.high_limit = arch_get_mmap_base(addr, mm->mmap_base);
-	info.align_mask = 0;
-	info.align_offset = 0;
 	addr = vm_unmapped_area(&info);
 
 	/*
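[Editor's note: as a reminder of what the repeated "= {}" change buys, here is a small self-contained sketch. The struct below is a stand-in mirroring the kernel's vm_unmapped_area_info, not kernel code itself: an empty initializer zeroes every member, so only the fields a caller actually uses need explicit stores, which is what lets the explicit "info.flags = 0", "info.align_mask = 0" and "info.align_offset = 0" lines above be deleted.]

/* Stand-in for the kernel's struct vm_unmapped_area_info. */
struct area_info {
	unsigned long flags;
	unsigned long length;
	unsigned long low_limit;
	unsigned long high_limit;
	unsigned long align_mask;
	unsigned long align_offset;
};

static unsigned long pick_area(unsigned long len, unsigned long base,
			       unsigned long end)
{
	struct area_info info = {};	/* every member starts out zero */

	info.length = len;
	info.low_limit = base;
	info.high_limit = end;
	/* flags, align_mask and align_offset remain 0, no stores needed */

	return info.low_limit;	/* placeholder for vm_unmapped_area(&info) */
}

A side effect noted in a later patch of this series: a new member can be added to the struct (start_gap, below) without touching every caller to initialize it.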
From patchwork Tue Mar 12 22:28:40 2024
From: Rick Edgecombe
Subject: [PATCH v3 09/12] mm: Take placement mappings gap into account
Date: Tue, 12 Mar 2024 15:28:40 -0700
Message-Id: <20240312222843.2505560-10-rick.p.edgecombe@intel.com>
In-Reply-To: <20240312222843.2505560-1-rick.p.edgecombe@intel.com>
When memory is being placed, mmap() will take care to respect the guard
gaps of certain types of memory (VM_SHADOWSTACK, VM_GROWSUP and
VM_GROWSDOWN). In order to ensure guard gaps between mappings, mmap()
needs to consider two things:

1. That the new mapping isn't placed in any existing mapping's guard
   gaps.
2. That the new mapping isn't placed such that any existing mappings
   are not in *its* guard gaps.

The long-standing behavior of mmap() is to ensure 1, but not take any
care around 2. So for example, if there is a PAGE_SIZE free area, and a
PAGE_SIZE mmap() of a type that has a guard gap is being placed, mmap()
may place the shadow stack in the PAGE_SIZE free area. Then the mapping
that is supposed to have a guard gap will not have a gap to the
adjacent VMA.

For MAP_GROWSDOWN/VM_GROWSDOWN and MAP_GROWSUP/VM_GROWSUP this has not
been a problem in practice, because applications place these kinds of
mappings very early, when there are not many mappings to find a space
between. But shadow stacks may be placed throughout the lifetime of the
application.

Use the start_gap field to find a space that includes the guard gap for
the new mapping. Take care to not interfere with the alignment.
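[Editor's note: to make the two conditions concrete, here is a small sketch with hypothetical helper names. It models only the below-the-start guard gap used by grows-down stacks and shadow stacks, and assumes addresses are far enough from zero that the subtractions cannot underflow.]

#include <stdbool.h>

/* Hypothetical description of an existing VMA, for illustration only. */
struct vma_desc {
	unsigned long start;
	unsigned long end;       /* exclusive */
	unsigned long guard_gap; /* bytes the VMA wants free below start */
};

/*
 * Condition 1: the candidate [addr, addr + len) stays out of the
 * existing mapping's guard gap, i.e. out of
 * [existing->start - existing->guard_gap, existing->start).
 */
static bool respects_their_gap(unsigned long addr, unsigned long len,
			       const struct vma_desc *existing)
{
	return addr + len <= existing->start - existing->guard_gap ||
	       addr >= existing->start;
}

/*
 * Condition 2 (what this patch adds): nothing may overlap the new
 * mapping's own guard gap [addr - own_gap, addr).
 */
static bool respects_own_gap(unsigned long addr, unsigned long own_gap,
			     const struct vma_desc *existing)
{
	return existing->end <= addr - own_gap ||
	       existing->start >= addr;
}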
Signed-off-by: Rick Edgecombe
Reviewed-by: Christophe Leroy
---
v3:
 - Spelling fix in comment
v2:
 - Remove VM_UNMAPPED_START_GAP_SET and have struct
   vm_unmapped_area_info initialized with zeros (in another patch)
   (Kirill)
 - Drop unrelated space change (Kirill)
 - Add comment around interactions of alignment and start gap step
   (Kirill)
---
 include/linux/mm.h |  1 +
 mm/mmap.c          | 12 +++++++++---
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d91cde79aaee..deade7be00d0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3418,6 +3418,7 @@ struct vm_unmapped_area_info {
 	unsigned long high_limit;
 	unsigned long align_mask;
 	unsigned long align_offset;
+	unsigned long start_gap;
 };
 
 extern unsigned long vm_unmapped_area(struct vm_unmapped_area_info *info);
diff --git a/mm/mmap.c b/mm/mmap.c
index b889c79d11bd..634e706fd97e 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1582,7 +1582,7 @@ static unsigned long unmapped_area(struct vm_unmapped_area_info *info)
 	MA_STATE(mas, &current->mm->mm_mt, 0, 0);
 
 	/* Adjust search length to account for worst case alignment overhead */
-	length = info->length + info->align_mask;
+	length = info->length + info->align_mask + info->start_gap;
 	if (length < info->length)
 		return -ENOMEM;
 
@@ -1594,7 +1594,13 @@ static unsigned long unmapped_area(struct vm_unmapped_area_info *info)
 	if (mas_empty_area(&mas, low_limit, high_limit - 1, length))
 		return -ENOMEM;
 
-	gap = mas.index;
+	/*
+	 * Adjust for the gap first so it doesn't interfere with the
+	 * later alignment. The first step is the minimum needed to
+	 * fulfill the start gap, the next step is the minimum to align
+	 * that. It is the minimum needed to fulfill both.
+	 */
+	gap = mas.index + info->start_gap;
 	gap += (info->align_offset - gap) & info->align_mask;
 	tmp = mas_next(&mas, ULONG_MAX);
 	if (tmp && (tmp->vm_flags & VM_STARTGAP_FLAGS)) { /* Avoid prev check if possible */
@@ -1633,7 +1639,7 @@ static unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
 	MA_STATE(mas, &current->mm->mm_mt, 0, 0);
 
 	/* Adjust search length to account for worst case alignment overhead */
-	length = info->length + info->align_mask;
+	length = info->length + info->align_mask + info->start_gap;
 	if (length < info->length)
 		return -ENOMEM;
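[Editor's note: to make the gap-then-align adjustment above concrete, a worked example with illustrative numbers: a one-page start_gap, a 2 MiB alignment requested via align_mask, and an align_offset of zero.]

/* Worked example of the adjustment in unmapped_area(). */
static unsigned long example_gap_adjustment(void)
{
	unsigned long index      = 0x7f0000001000UL; /* start of found free range */
	unsigned long start_gap  = 0x1000UL;         /* one page guard gap */
	unsigned long align_mask = 0x1fffffUL;       /* 2 MiB alignment */
	unsigned long align_off  = 0;
	unsigned long gap;

	gap = index + start_gap;               /* 0x7f0000002000 */
	gap += (align_off - gap) & align_mask; /* rounds up to 0x7f0000200000 */

	/*
	 * gap now leaves at least start_gap bytes of room below it and
	 * satisfies the alignment. The search length already included
	 * align_mask + start_gap bytes of slack, so the mapping is
	 * guaranteed to still fit in the found range.
	 */
	return gap;
}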
From patchwork Tue Mar 12 22:28:41 2024
From: Rick Edgecombe
Subject: [PATCH v3 10/12] x86/mm: Implement HAVE_ARCH_UNMAPPED_AREA_VMFLAGS
Date: Tue, 12 Mar 2024 15:28:41 -0700
Message-Id: <20240312222843.2505560-11-rick.p.edgecombe@intel.com>
In-Reply-To: <20240312222843.2505560-1-rick.p.edgecombe@intel.com>
When memory is being placed, mmap() will take care to respect the guard
gaps of certain types of memory (VM_SHADOWSTACK, VM_GROWSUP and
VM_GROWSDOWN). In order to ensure guard gaps between mappings, mmap()
needs to consider two things:

1. That the new mapping isn't placed in any existing mapping's guard
   gaps.
2. That the new mapping isn't placed such that any existing mappings
   are not in *its* guard gaps.

The long-standing behavior of mmap() is to ensure 1, but not take any
care around 2. So for example, if there is a PAGE_SIZE free area, and a
PAGE_SIZE mmap() of a type that has a guard gap is being placed, mmap()
may place the shadow stack in the PAGE_SIZE free area. Then the mapping
that is supposed to have a guard gap will not have a gap to the
adjacent VMA.

Add x86 arch implementations of arch_get_unmapped_area_vmflags()/_topdown()
so future changes can allow the guard gap of the type of VMA being
placed to be taken into account. This will be used for shadow stack
memory.
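[Editor's note: the opt-in pattern this define enables looks roughly like the following. This is a simplified sketch; the generic fallback wiring lands in an earlier patch of this series that is not shown in this email, so the fallback's name and surroundings here are illustrative, not the exact kernel code.]

/*
 * Sketch only: arches that define HAVE_ARCH_UNMAPPED_AREA_VMFLAGS
 * provide arch_get_unmapped_area_vmflags(); for everyone else the
 * generic code can fall back to the classic hook and ignore vm_flags.
 */
#ifndef HAVE_ARCH_UNMAPPED_AREA_VMFLAGS
static unsigned long
generic_unmapped_area_vmflags_fallback(struct file *filp, unsigned long addr,
				       unsigned long len, unsigned long pgoff,
				       unsigned long flags, vm_flags_t vm_flags)
{
	/* vm_flags is dropped: this arch has no gap-aware placement. */
	return arch_get_unmapped_area(filp, addr, len, pgoff, flags);
}
#endif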
Signed-off-by: Rick Edgecombe
---
v3:
 - Commit log grammar
v2:
 - Remove unnecessary added extern
---
 arch/x86/include/asm/pgtable_64.h |  1 +
 arch/x86/kernel/sys_x86_64.c      | 25 ++++++++++++++++++++-----
 2 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
index 24af25b1551a..13dcaf436efd 100644
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -244,6 +244,7 @@ extern void cleanup_highmap(void);
 
 #define HAVE_ARCH_UNMAPPED_AREA
 #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
+#define HAVE_ARCH_UNMAPPED_AREA_VMFLAGS
 
 #define PAGE_AGP PAGE_KERNEL_NOCACHE
 #define HAVE_PAGE_AGP 1
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index b3278e4f7e59..d6fbc4dd08ef 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -120,8 +120,8 @@ static void find_start_end(unsigned long addr, unsigned long flags,
 }
 
 unsigned long
-arch_get_unmapped_area(struct file *filp, unsigned long addr,
-		unsigned long len, unsigned long pgoff, unsigned long flags)
+arch_get_unmapped_area_vmflags(struct file *filp, unsigned long addr, unsigned long len,
+			       unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags)
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
@@ -156,9 +156,9 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 }
 
 unsigned long
-arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
-		const unsigned long len, const unsigned long pgoff,
-		const unsigned long flags)
+arch_get_unmapped_area_topdown_vmflags(struct file *filp, unsigned long addr0,
+				       unsigned long len, unsigned long pgoff,
+				       unsigned long flags, vm_flags_t vm_flags)
 {
 	struct vm_area_struct *vma;
 	struct mm_struct *mm = current->mm;
@@ -227,3 +227,18 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	 */
 	return arch_get_unmapped_area(filp, addr0, len, pgoff, flags);
 }
+
+unsigned long
+arch_get_unmapped_area(struct file *filp, unsigned long addr,
+		       unsigned long len, unsigned long pgoff, unsigned long flags)
+{
+	return arch_get_unmapped_area_vmflags(filp, addr, len, pgoff, flags, 0);
+}
+
+unsigned long
+arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr,
+			       const unsigned long len, const unsigned long pgoff,
+			       const unsigned long flags)
+{
+	return arch_get_unmapped_area_topdown_vmflags(filp, addr, len, pgoff, flags, 0);
+}
From patchwork Tue Mar 12 22:28:42 2024
From: Rick Edgecombe
Subject: [PATCH v3 11/12] x86/mm: Care about shadow stack guard gap during placement
Date: Tue, 12 Mar 2024 15:28:42 -0700
Message-Id: <20240312222843.2505560-12-rick.p.edgecombe@intel.com>
In-Reply-To: <20240312222843.2505560-1-rick.p.edgecombe@intel.com>
When memory is being placed, mmap() will take care to respect the guard
gaps of certain types of memory (VM_SHADOWSTACK, VM_GROWSUP and
VM_GROWSDOWN). In order to ensure guard gaps between mappings, mmap()
needs to consider two things:

1. That the new mapping isn't placed in any existing mapping's guard
   gaps.
2. That the new mapping isn't placed such that any existing mappings
   are not in *its* guard gaps.

The long-standing behavior of mmap() is to ensure 1, but not take any
care around 2. So for example, if there is a PAGE_SIZE free area, and a
PAGE_SIZE mmap() of a type that has a guard gap is being placed, mmap()
may place the shadow stack in the PAGE_SIZE free area. Then the mapping
that is supposed to have a guard gap will not have a gap to the
adjacent VMA.

Now that vm_flags is passed into the arch get_unmapped_area()
implementations, and vm_unmapped_area() is ready to consider it, have
VM_SHADOW_STACK mappings get guard gap consideration for scenario 2.
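[Editor's note: a toy model of the resulting behavior, self-contained and illustrative only. It compresses the whole search down to a single free hole, but shows why a one-page hole that fits a normal mmap() no longer fits a shadow stack once the mapping's own guard gap is counted.]

#include <assert.h>

#define PG 0x1000UL

/* Toy bottom-up placement over a single free hole [hole_start, hole_end). */
static unsigned long toy_place(unsigned long hole_start, unsigned long hole_end,
			       unsigned long len, unsigned long start_gap)
{
	unsigned long addr = hole_start + start_gap; /* leave the guard gap */

	if (addr + len > hole_end)
		return -1UL; /* does not fit once its own gap is counted */
	return addr;
}

int main(void)
{
	/* A one-page hole directly above an existing mapping: */
	assert(toy_place(0x1000, 0x2000, PG, 0) == 0x1000UL); /* plain mmap() fits */
	assert(toy_place(0x1000, 0x2000, PG, PG) == -1UL);    /* shadow stack must look elsewhere */
	return 0;
}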
Signed-off-by: Rick Edgecombe
---
 arch/x86/kernel/sys_x86_64.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index d6fbc4dd08ef..964cb435710e 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -119,6 +119,14 @@ static void find_start_end(unsigned long addr, unsigned long flags,
 	*end = task_size_64bit(addr > DEFAULT_MAP_WINDOW);
 }
 
+static inline unsigned long stack_guard_placement(vm_flags_t vm_flags)
+{
+	if (vm_flags & VM_SHADOW_STACK)
+		return PAGE_SIZE;
+
+	return 0;
+}
+
 unsigned long
 arch_get_unmapped_area_vmflags(struct file *filp, unsigned long addr, unsigned long len,
 			       unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags)
@@ -148,6 +156,7 @@ arch_get_unmapped_area_vmflags(struct file *filp, unsigned long addr, unsigned l
 	info.low_limit = begin;
 	info.high_limit = end;
 	info.align_offset = pgoff << PAGE_SHIFT;
+	info.start_gap = stack_guard_placement(vm_flags);
 	if (filp) {
 		info.align_mask = get_align_mask();
 		info.align_offset += get_align_bits();
@@ -197,6 +206,7 @@ arch_get_unmapped_area_topdown_vmflags(struct file *filp, unsigned long addr0,
 
 	info.low_limit = PAGE_SIZE;
 	info.high_limit = get_mmap_base(0);
+	info.start_gap = stack_guard_placement(vm_flags);
 
 	/*
 	 * If hint address is above DEFAULT_MAP_WINDOW, look for unmapped area
From patchwork Tue Mar 12 22:28:43 2024
From: Rick Edgecombe
Subject: [PATCH v3 12/12] selftests/x86: Add placement guard gap test for shstk
Date: Tue, 12 Mar 2024 15:28:43 -0700
Message-Id: <20240312222843.2505560-13-rick.p.edgecombe@intel.com>
In-Reply-To: <20240312222843.2505560-1-rick.p.edgecombe@intel.com>
The existing shadow stack test for guard gaps just checks that new
mappings are not placed in an existing mapping's guard gap. Add one
that checks that new mappings are not placed such that preexisting
mappings are in the new mapping's guard gap.

Signed-off-by: Rick Edgecombe
---
 .../testing/selftests/x86/test_shadow_stack.c | 67 +++++++++++++++++--
 1 file changed, 63 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/x86/test_shadow_stack.c b/tools/testing/selftests/x86/test_shadow_stack.c
index 757e6527f67e..ee909a7927f9 100644
--- a/tools/testing/selftests/x86/test_shadow_stack.c
+++ b/tools/testing/selftests/x86/test_shadow_stack.c
@@ -556,7 +556,7 @@ struct node {
  * looked at the shadow stack gaps.
  * 5. See if it landed in the gap.
  */
-int test_guard_gap(void)
+int test_guard_gap_other_gaps(void)
 {
 	void *free_area, *shstk, *test_map = (void *)0xFFFFFFFFFFFFFFFF;
 	struct node *head = NULL, *cur;
@@ -593,11 +593,64 @@ int test_guard_gap(void)
 	if (shstk - test_map - PAGE_SIZE != PAGE_SIZE)
 		return 1;
 
-	printf("[OK]\tGuard gap test\n");
+	printf("[OK]\tGuard gap test, other mapping's gaps\n");
 
 	return 0;
 }
 
+/* Tests respecting the guard gap of the mapping getting placed */
+int test_guard_gap_new_mappings_gaps(void)
+{
+	void *free_area, *shstk_start, *test_map = (void *)0xFFFFFFFFFFFFFFFF;
+	struct node *head = NULL, *cur;
+	int ret = 0;
+
+	free_area = mmap(0, PAGE_SIZE * 4, PROT_READ | PROT_WRITE,
+			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+	munmap(free_area, PAGE_SIZE * 4);
+
+	/* Test letting map_shadow_stack find a free space */
+	shstk_start = mmap(free_area, PAGE_SIZE, PROT_READ | PROT_WRITE,
+			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+	if (shstk_start == MAP_FAILED || shstk_start != free_area)
+		return 1;
+
+	while (test_map > shstk_start) {
+		test_map = (void *)syscall(__NR_map_shadow_stack, 0, PAGE_SIZE, 0);
+		if (test_map == MAP_FAILED) {
+			printf("[INFO]\tmap_shadow_stack MAP_FAILED\n");
+			ret = 1;
+			break;
+		}
+
+		cur = malloc(sizeof(*cur));
+		cur->mapping = test_map;
+
+		cur->next = head;
+		head = cur;
+
+		if (test_map == free_area + PAGE_SIZE) {
+			printf("[INFO]\tNew mapping has other mapping in guard gap!\n");
+			ret = 1;
+			break;
+		}
+	}
+
+	while (head) {
+		cur = head;
+		head = cur->next;
+		munmap(cur->mapping, PAGE_SIZE);
+		free(cur);
+	}
+
+	munmap(shstk_start, PAGE_SIZE);
+
+	if (!ret)
+		printf("[OK]\tGuard gap test, placement mapping's gaps\n");
+
+	return ret;
+}
+
 /*
  * Too complicated to pull it out of the 32 bit header, but also get the
  * 64 bit one needed above. Just define a copy here.
@@ -850,9 +903,15 @@ int main(int argc, char *argv[])
 		goto out;
 	}
 
-	if (test_guard_gap()) {
+	if (test_guard_gap_other_gaps()) {
 		ret = 1;
-		printf("[FAIL]\tGuard gap test\n");
+		printf("[FAIL]\tGuard gap test, other mappings' gaps\n");
+		goto out;
+	}
+
+	if (test_guard_gap_new_mappings_gaps()) {
+		ret = 1;
+		printf("[FAIL]\tGuard gap test, placement mapping's gaps\n");
 		goto out;
 	}
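[Editor's note: for readers following the new test, a short sketch of the address layout it relies on. Addresses are illustrative; the key point is the one-page mapping deliberately left at the bottom of the freed range.]

/*
 * Layout set up by test_guard_gap_new_mappings_gaps():
 *
 *   free_area                  free_area + PAGE_SIZE
 *   |-- 1 page anon mapping --|-------- 3 pages free --------|
 *
 * If the kernel ever returns free_area + PAGE_SIZE from
 * map_shadow_stack(), the anonymous page at free_area sits directly
 * below the new shadow stack, i.e. inside the guard gap the shadow
 * stack is supposed to keep, so the test fails. With the start_gap
 * logic in place, placement should skip to free_area + 2 * PAGE_SIZE
 * or beyond.
 */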