From patchwork Fri Aug 16 11:13:47 2024
X-Patchwork-Submitter: Bert Karwatzki
X-Patchwork-Id: 13766003
From: Bert Karwatzki <spasswolf@web.de>
To: "Liam R . Howlett"
Cc: Bert Karwatzki, Suren Baghdasaryan, Vlastimil Babka, Lorenzo Stoakes,
 Matthew Wilcox, sidhartha.kumar@oracle.com, "Paul E . McKenney", Jiri Olsa,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Andrew Morton, Kees Cook,
 Jeff Xu, Lorenzo Stoakes
Subject: [PATCH v5.1 06/19] mm/mmap: Change munmap to use vma_munmap_struct() for accounting and surrounding vmas
Date: Fri, 16 Aug 2024 13:13:47 +0200
Message-ID: <20240816111405.11793-7-spasswolf@web.de>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20240816111405.11793-1-spasswolf@web.de>
References: <20240816111405.11793-1-spasswolf@web.de>
MIME-Version: 1.0

Clean up the code by changing the munmap operation to use a structure for
the accounting and munmap variables. Since remove_mt() is only called in
one location and its contents will be reduced to almost nothing, the
remains of the function can be added to vms_complete_munmap_vmas().

Signed-off-by: Liam R. Howlett
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Suren Baghdasaryan
---
 mm/vma.c | 79 ++++++++++++++++++++++++++++----------------------------
 mm/vma.h |  6 +++++
 2 files changed, 46 insertions(+), 39 deletions(-)

--
2.45.2

diff --git a/mm/vma.c b/mm/vma.c
index 9495230df3c3..816736c4f82e 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -273,30 +273,6 @@ static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	return __split_vma(vmi, vma, addr, new_below);
 }
 
-/*
- * Ok - we have the memory areas we should free on a maple tree so release them,
- * and do the vma updates.
- *
- * Called with the mm semaphore held.
- */
-static inline void remove_mt(struct mm_struct *mm, struct ma_state *mas)
-{
-	unsigned long nr_accounted = 0;
-	struct vm_area_struct *vma;
-
-	/* Update high watermark before we lower total_vm */
-	update_hiwater_vm(mm);
-	mas_for_each(mas, vma, ULONG_MAX) {
-		long nrpages = vma_pages(vma);
-
-		if (vma->vm_flags & VM_ACCOUNT)
-			nr_accounted += nrpages;
-		vm_stat_account(mm, vma->vm_flags, -nrpages);
-		remove_vma(vma, false);
-	}
-	vm_unacct_memory(nr_accounted);
-}
-
 /*
  * init_vma_prep() - Initializer wrapper for vma_prepare struct
  * @vp: The vma_prepare struct
@@ -388,7 +364,8 @@ static inline void init_vma_munmap(struct vma_munmap_struct *vms,
 	vms->unlock = unlock;
 	vms->uf = uf;
 	vms->vma_count = 0;
-	vms->nr_pages = vms->locked_vm = 0;
+	vms->nr_pages = vms->locked_vm = vms->nr_accounted = 0;
+	vms->exec_vm = vms->stack_vm = vms->data_vm = 0;
 }
 
 /*
@@ -715,7 +692,7 @@ static inline void abort_munmap_vmas(struct ma_state *mas_detach)
  * @vms: The vma munmap struct
  * @mas_detach: The maple state of the detached vmas
  *
- * This updates the mm_struct, unmaps the region, frees the resources
+ * This function updates the mm_struct, unmaps the region, frees the resources
  * used for the munmap() and may downgrade the lock - if requested. Everything
  * needed to be done once the vma maple tree is updated.
  */
@@ -723,7 +700,7 @@ static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
 		struct ma_state *mas_detach)
 {
-	struct vm_area_struct *prev, *next;
+	struct vm_area_struct *vma;
 	struct mm_struct *mm;
 
 	mm = vms->mm;
@@ -732,21 +709,26 @@ static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
 	if (vms->unlock)
 		mmap_write_downgrade(mm);
 
-	prev = vma_iter_prev_range(vms->vmi);
-	next = vma_next(vms->vmi);
-	if (next)
-		vma_iter_prev_range(vms->vmi);
-
 	/*
 	 * We can free page tables without write-locking mmap_lock because VMAs
 	 * were isolated before we downgraded mmap_lock.
 	 */
 	mas_set(mas_detach, 1);
-	unmap_region(mm, mas_detach, vms->vma, prev, next, vms->start, vms->end,
-		     vms->vma_count, !vms->unlock);
-	/* Statistics and freeing VMAs */
+	unmap_region(mm, mas_detach, vms->vma, vms->prev, vms->next,
+		     vms->start, vms->end, vms->vma_count, !vms->unlock);
+	/* Update high watermark before we lower total_vm */
+	update_hiwater_vm(mm);
+	/* Stat accounting */
+	WRITE_ONCE(mm->total_vm, READ_ONCE(mm->total_vm) - vms->nr_pages);
+	mm->exec_vm -= vms->exec_vm;
+	mm->stack_vm -= vms->stack_vm;
+	mm->data_vm -= vms->data_vm;
+	/* Remove and clean up vmas */
 	mas_set(mas_detach, 0);
-	remove_mt(mm, mas_detach);
+	mas_for_each(mas_detach, vma, ULONG_MAX)
+		remove_vma(vma, false);
+
+	vm_unacct_memory(vms->nr_accounted);
 	validate_mm(mm);
 	if (vms->unlock)
 		mmap_read_unlock(mm);
@@ -794,13 +776,14 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 		if (error)
 			goto start_split_failed;
 	}
+	vms->prev = vma_prev(vms->vmi);
 
 	/*
 	 * Detach a range of VMAs from the mm. Using next as a temp variable as
 	 * it is always overwritten.
 	 */
-	next = vms->vma;
-	do {
+	for_each_vma_range(*(vms->vmi), next, vms->end) {
+		long nrpages;
 		/* Does it split the end? */
 		if (next->vm_end > vms->end) {
 			error = __split_vma(vms->vmi, next, vms->end, 0);
@@ -813,6 +796,22 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 		if (error)
 			goto munmap_gather_failed;
 		vma_mark_detached(next, true);
+		nrpages = vma_pages(next);
+
+		vms->nr_pages += nrpages;
+		if (next->vm_flags & VM_LOCKED)
+			vms->locked_vm += nrpages;
+
+		if (next->vm_flags & VM_ACCOUNT)
+			vms->nr_accounted += nrpages;
+
+		if (is_exec_mapping(next->vm_flags))
+			vms->exec_vm += nrpages;
+		else if (is_stack_mapping(next->vm_flags))
+			vms->stack_vm += nrpages;
+		else if (is_data_mapping(next->vm_flags))
+			vms->data_vm += nrpages;
+
 		if (next->vm_flags & VM_LOCKED)
 			vms->locked_vm += vma_pages(next);
@@ -836,7 +835,9 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 		BUG_ON(next->vm_start < vms->start);
 		BUG_ON(next->vm_start > vms->end);
 #endif
-	} for_each_vma_range(*(vms->vmi), next, vms->end);
+	}
+
+	vms->next = vma_next(vms->vmi);
 
 #if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
 	/* Make sure no VMAs are about to be lost. */
diff --git a/mm/vma.h b/mm/vma.h
index f65c739cbd00..7ba0d71b50ca 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -28,12 +28,18 @@ struct vma_munmap_struct {
 	struct vma_iterator *vmi;
 	struct mm_struct *mm;
 	struct vm_area_struct *vma;	/* The first vma to munmap */
+	struct vm_area_struct *prev;	/* vma before the munmap area */
+	struct vm_area_struct *next;	/* vma after the munmap area */
 	struct list_head *uf;		/* Userfaultfd list_head */
 	unsigned long start;		/* Aligned start addr (inclusive) */
 	unsigned long end;		/* Aligned end addr (exclusive) */
 	int vma_count;			/* Number of vmas that will be removed */
 	unsigned long nr_pages;		/* Number of pages being removed */
 	unsigned long locked_vm;	/* Number of locked pages */
+	unsigned long nr_accounted;	/* Number of VM_ACCOUNT pages */
+	unsigned long exec_vm;
+	unsigned long stack_vm;
+	unsigned long data_vm;
 	bool unlock;			/* Unlock after the munmap */
 };
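
For readers following the series, below is a minimal, self-contained
userspace sketch of the pattern this patch applies: per-VMA statistics are
gathered into a single struct while walking the unmap range, and the totals
are applied to the mm-wide counters once in the "complete" step. All types
and names in the sketch are simplified stand-ins, not the kernel API.

/*
 * Illustrative sketch only (not kernel code): gather per-VMA counters into
 * one struct, then apply the totals to the mm in a single step, mirroring
 * vms_gather_munmap_vmas()/vms_complete_munmap_vmas().
 */
#include <stdio.h>

struct fake_vma {
	unsigned long nr_pages;
	int locked;
	int accounted;
};

struct fake_mm {
	unsigned long total_vm;
	unsigned long locked_vm;
};

struct fake_munmap_state {
	unsigned long nr_pages;
	unsigned long locked_vm;
	unsigned long nr_accounted;
};

/* Gather phase: only accumulate, do not touch the mm yet. */
static void gather_vma(struct fake_munmap_state *vms, const struct fake_vma *vma)
{
	vms->nr_pages += vma->nr_pages;
	if (vma->locked)
		vms->locked_vm += vma->nr_pages;
	if (vma->accounted)
		vms->nr_accounted += vma->nr_pages;
}

/* Complete phase: apply the accumulated totals to the mm once. */
static void complete_munmap(struct fake_mm *mm, const struct fake_munmap_state *vms)
{
	mm->total_vm -= vms->nr_pages;
	mm->locked_vm -= vms->locked_vm;
	/* the kernel would also unaccount vms->nr_accounted here */
}

int main(void)
{
	struct fake_mm mm = { .total_vm = 1000, .locked_vm = 100 };
	struct fake_vma vmas[] = {
		{ .nr_pages = 10, .locked = 1, .accounted = 1 },
		{ .nr_pages = 20, .locked = 0, .accounted = 1 },
	};
	struct fake_munmap_state vms = { 0 };

	for (unsigned int i = 0; i < sizeof(vmas) / sizeof(vmas[0]); i++)
		gather_vma(&vms, &vmas[i]);
	complete_munmap(&mm, &vms);

	printf("total_vm=%lu locked_vm=%lu accounted=%lu\n",
	       mm.total_vm, mm.locked_vm, vms.nr_accounted);
	return 0;
}

Compiled and run, the sketch prints "total_vm=970 locked_vm=90 accounted=30",
showing the counters adjusted once after the gather pass.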