From patchwork Mon Mar 10 17:23:15 2025
From: SeongJae Park <sj@kernel.org>
To: Andrew Morton
Cc: SeongJae Park, "Liam R. Howlett", David Hildenbrand, Lorenzo Stoakes,
	Shakeel Butt, Vlastimil Babka, kernel-team@meta.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 6/9] mm/memory: split non-tlb flushing part from zap_page_range_single()
Date: Mon, 10 Mar 2025 10:23:15 -0700
Message-Id: <20250310172318.653630-7-sj@kernel.org>
In-Reply-To: <20250310172318.653630-1-sj@kernel.org>
References: <20250310172318.653630-1-sj@kernel.org>
Some of zap_page_range_single() callers, such as [process_]madvise() with
MADV_DONTNEED[_LOCKED], cannot batch tlb flushes because
zap_page_range_single() does tlb flushing for each invocation.  Split out
the body of zap_page_range_single(), except the mmu_gather object
initialization and the gathered tlb entries flushing parts, for such
batched tlb flushing usage.

Signed-off-by: SeongJae Park <sj@kernel.org>
---
 mm/memory.c | 36 ++++++++++++++++++++++--------------
 1 file changed, 22 insertions(+), 14 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 78c7ee62795e..88c478e2ed1a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1995,38 +1995,46 @@ void unmap_vmas(struct mmu_gather *tlb, struct ma_state *mas,
 	mmu_notifier_invalidate_range_end(&range);
 }
 
-/**
- * zap_page_range_single - remove user pages in a given range
- * @vma: vm_area_struct holding the applicable pages
- * @address: starting address of pages to zap
- * @size: number of bytes to zap
- * @details: details of shared cache invalidation
- *
- * The range must fit into one VMA.
- */
-void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
+static void unmap_vma_single(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, unsigned long address,
 		unsigned long size, struct zap_details *details)
 {
 	const unsigned long end = address + size;
 	struct mmu_notifier_range range;
-	struct mmu_gather tlb;
 
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
 				address, end);
 	hugetlb_zap_begin(vma, &range.start, &range.end);
-	tlb_gather_mmu(&tlb, vma->vm_mm);
 	update_hiwater_rss(vma->vm_mm);
 	mmu_notifier_invalidate_range_start(&range);
 	/*
 	 * unmap 'address-end' not 'range.start-range.end' as range
 	 * could have been expanded for hugetlb pmd sharing.
 	 */
-	unmap_single_vma(&tlb, vma, address, end, details, false);
+	unmap_single_vma(tlb, vma, address, end, details, false);
 	mmu_notifier_invalidate_range_end(&range);
-	tlb_finish_mmu(&tlb);
 	hugetlb_zap_end(vma, details);
 }
 
+/**
+ * zap_page_range_single - remove user pages in a given range
+ * @vma: vm_area_struct holding the applicable pages
+ * @address: starting address of pages to zap
+ * @size: number of bytes to zap
+ * @details: details of shared cache invalidation
+ *
+ * The range must fit into one VMA.
+ */
+void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
+		unsigned long size, struct zap_details *details)
+{
+	struct mmu_gather tlb;
+
+	tlb_gather_mmu(&tlb, vma->vm_mm);
+	unmap_vma_single(&tlb, vma, address, size, details);
+	tlb_finish_mmu(&tlb);
+}
+
 /**
  * zap_vma_ptes - remove ptes mapping the vma
  * @vma: vm_area_struct holding ptes to be zapped