From patchwork Mon Aug 12 08:26:04 2024
X-Patchwork-Submitter: Michael Ellerman
X-Patchwork-Id: 13760292
From: Michael Ellerman
To:
Cc: torvalds@linux-foundation.org, akpm@linux-foundation.org,
 christophe.leroy@csgroup.eu, jeffxu@google.com, Liam.Howlett@oracle.com,
 linux-kernel@vger.kernel.org, npiggin@gmail.com, oliver.sang@intel.com,
 pedro.falcato@gmail.com
Subject: [PATCH v2 3/4] mm: Remove arch_unmap()
Date: Mon, 12 Aug 2024 18:26:04 +1000
Message-ID: <20240812082605.743814-3-mpe@ellerman.id.au>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20240812082605.743814-1-mpe@ellerman.id.au>
References: <20240812082605.743814-1-mpe@ellerman.id.au>

Now that powerpc no longer uses arch_unmap() to handle VDSO unmapping,
there are no meaningful implementations left. Drop support for it
entirely, and update comments which refer to it.

Suggested-by: Linus Torvalds
Signed-off-by: Michael Ellerman
Acked-by: David Hildenbrand
Reviewed-by: Thomas Gleixner
---
 arch/powerpc/include/asm/mmu_context.h |  5 -----
 arch/x86/include/asm/mmu_context.h     |  5 -----
 include/asm-generic/mm_hooks.h         | 11 +++--------
 mm/mmap.c                              | 12 +++---------
 4 files changed, 6 insertions(+), 27 deletions(-)

v2: Unchanged except for collecting tags.
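Not part of the patch itself, but for context: a minimal sketch of the
mechanism that replaces arch_unmap() on powerpc, assuming the optional
close() callback added to struct vm_special_mapping earlier in this
series. The vdso_close() name and the mm->context.vdso field follow
powerpc's VDSO handling; the snippet is illustrative rather than a copy
of the in-tree code.

/*
 * Context-only sketch, not part of this patch: with the optional
 * close() callback on struct vm_special_mapping, an architecture is
 * told when its special mapping's VMA is removed, so a global
 * arch_unmap() hook is no longer needed. A real mapping would also
 * set .pages or .fault.
 */
#include <linux/mm.h>
#include <linux/mm_types.h>

static void vdso_close(const struct vm_special_mapping *sm,
		       struct vm_area_struct *vma)
{
	struct mm_struct *mm = vma->vm_mm;

	/*
	 * Forget the cached VDSO base address. mm->context.vdso is the
	 * powerpc-specific field; other architectures differ.
	 */
	mm->context.vdso = NULL;
}

static struct vm_special_mapping vdso_mapping = {
	.name  = "[vdso]",
	.close = vdso_close,
};

The mapping is installed with _install_special_mapping(), and the core
VMA teardown invokes the close() callback when the mapping goes away,
which is why the arch_unmap() calls removed below are no longer needed.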
diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 9b8c1555744e..a334a1368848 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -260,11 +260,6 @@ static inline void enter_lazy_tlb(struct mm_struct *mm,
 
 extern void arch_exit_mmap(struct mm_struct *mm);
 
-static inline void arch_unmap(struct mm_struct *mm,
-			      unsigned long start, unsigned long end)
-{
-}
-
 #ifdef CONFIG_PPC_MEM_KEYS
 bool arch_vma_access_permitted(struct vm_area_struct *vma, bool write,
 			       bool execute, bool foreign);
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 8dac45a2c7fc..80f2a3187aa6 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -232,11 +232,6 @@ static inline bool is_64bit_mm(struct mm_struct *mm)
 }
 #endif
 
-static inline void arch_unmap(struct mm_struct *mm, unsigned long start,
-			      unsigned long end)
-{
-}
-
 /*
  * We only want to enforce protection keys on the current process
  * because we effectively have no access to PKRU for other
diff --git a/include/asm-generic/mm_hooks.h b/include/asm-generic/mm_hooks.h
index 4dbb177d1150..6eea3b3c1e65 100644
--- a/include/asm-generic/mm_hooks.h
+++ b/include/asm-generic/mm_hooks.h
@@ -1,8 +1,8 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 /*
- * Define generic no-op hooks for arch_dup_mmap, arch_exit_mmap
- * and arch_unmap to be included in asm-FOO/mmu_context.h for any
- * arch FOO which doesn't need to hook these.
+ * Define generic no-op hooks for arch_dup_mmap and arch_exit_mmap
+ * to be included in asm-FOO/mmu_context.h for any arch FOO which
+ * doesn't need to hook these.
  */
 #ifndef _ASM_GENERIC_MM_HOOKS_H
 #define _ASM_GENERIC_MM_HOOKS_H
@@ -17,11 +17,6 @@ static inline void arch_exit_mmap(struct mm_struct *mm)
 {
 }
 
-static inline void arch_unmap(struct mm_struct *mm,
-			      unsigned long start, unsigned long end)
-{
-}
-
 static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
 		bool write, bool execute, bool foreign)
 {
diff --git a/mm/mmap.c b/mm/mmap.c
index af4dbf0d3bd4..a86aa58ca37b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2789,7 +2789,7 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
  *
  * This function takes a @mas that is either pointing to the previous VMA or set
  * to MA_START and sets it up to remove the mapping(s). The @len will be
- * aligned and any arch_unmap work will be preformed.
+ * aligned.
  *
  * Return: 0 on success and drops the lock if so directed, error and leaves the
  * lock held otherwise.
@@ -2809,16 +2809,12 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
 		return -EINVAL;
 
 	/*
-	 * Check if memory is sealed before arch_unmap.
-	 * Prevent unmapping a sealed VMA.
+	 * Check if memory is sealed, prevent unmapping a sealed VMA.
 	 * can_modify_mm assumes we have acquired the lock on MM.
 	 */
 	if (unlikely(!can_modify_mm(mm, start, end)))
 		return -EPERM;
 
-	/* arch_unmap() might do unmaps itself. */
-	arch_unmap(mm, start, end);
-
 	/* Find the first overlapping VMA */
 	vma = vma_find(vmi, end);
 	if (!vma) {
@@ -3232,14 +3228,12 @@ int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	struct mm_struct *mm = vma->vm_mm;
 
 	/*
-	 * Check if memory is sealed before arch_unmap.
-	 * Prevent unmapping a sealed VMA.
+	 * Check if memory is sealed, prevent unmapping a sealed VMA.
 	 * can_modify_mm assumes we have acquired the lock on MM.
 	 */
 	if (unlikely(!can_modify_mm(mm, start, end)))
 		return -EPERM;
 
-	arch_unmap(mm, start, end);
 	return do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock);
 }
 