From patchwork Tue Mar 4 15:48:44 2025
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 14001000
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, David Hildenbrand, Andrew Morton, Matthew Wilcox, Russell King, Masami Hiramatsu, Oleg Nesterov, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, "Liang, Kan", Tong Tiangen
Subject: [PATCH -next v1 1/3] kernel/events/uprobes: pass VMA instead of MM to remove_breakpoint()
Date: Tue, 4 Mar 2025 16:48:44 +0100
Message-ID: <20250304154846.1937958-2-david@redhat.com>
In-Reply-To: <20250304154846.1937958-1-david@redhat.com>
References: <20250304154846.1937958-1-david@redhat.com>

... and remove the "MM" argument from install_breakpoint(), because it can
easily be derived from the VMA.
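The shape of this cleanup — pass the more specific object and derive the
broader one from its back-pointer — can be sketched with a tiny user-space C
analogy (hypothetical `*_demo` types, not kernel code; the `mm` field here
mimics `vma->vm_mm`):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for mm_struct and vm_area_struct: a region keeps a
 * back-pointer to its address space, just like vma->vm_mm. */
struct address_space_demo { int uprobe_count; };
struct region_demo {
	struct address_space_demo *mm;	/* analogous to vma->vm_mm */
	unsigned long start, end;
};

/* Before: the caller had to pass both, risking a mismatched pair. */
static int install_old(struct address_space_demo *mm, struct region_demo *r)
{
	(void)r;
	return ++mm->uprobe_count;
}

/* After: only the region is passed; the address space is derived
 * inside, so a mismatch is impossible and the argument list shrinks. */
static int install_new(struct region_demo *r)
{
	struct address_space_demo *mm = r->mm;
	return ++mm->uprobe_count;
}
```

The diff below applies exactly this transformation to install_breakpoint()
and remove_breakpoint().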
Signed-off-by: David Hildenbrand
---
 kernel/events/uprobes.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 8fc53813779a4..991aacc80d0e0 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1134,10 +1134,10 @@ static bool filter_chain(struct uprobe *uprobe, struct mm_struct *mm)
 	return ret;
 }
 
-static int
-install_breakpoint(struct uprobe *uprobe, struct mm_struct *mm,
-			struct vm_area_struct *vma, unsigned long vaddr)
+static int install_breakpoint(struct uprobe *uprobe, struct vm_area_struct *vma,
+		unsigned long vaddr)
 {
+	struct mm_struct *mm = vma->vm_mm;
 	bool first_uprobe;
 	int ret;
 
@@ -1162,9 +1162,11 @@ install_breakpoint(struct uprobe *uprobe, struct mm_struct *mm,
 	return ret;
 }
 
-static int
-remove_breakpoint(struct uprobe *uprobe, struct mm_struct *mm, unsigned long vaddr)
+static int remove_breakpoint(struct uprobe *uprobe, struct vm_area_struct *vma,
+		unsigned long vaddr)
 {
+	struct mm_struct *mm = vma->vm_mm;
+
 	set_bit(MMF_RECALC_UPROBES, &mm->flags);
 	return set_orig_insn(&uprobe->arch, mm, vaddr);
 }
 
@@ -1296,10 +1298,10 @@ register_for_each_vma(struct uprobe *uprobe, struct uprobe_consumer *new)
 		if (is_register) {
 			/* consult only the "caller", new consumer. */
 			if (consumer_filter(new, mm))
-				err = install_breakpoint(uprobe, mm, vma, info->vaddr);
+				err = install_breakpoint(uprobe, vma, info->vaddr);
 		} else if (test_bit(MMF_HAS_UPROBES, &mm->flags)) {
 			if (!filter_chain(uprobe, mm))
-				err |= remove_breakpoint(uprobe, mm, info->vaddr);
+				err |= remove_breakpoint(uprobe, vma, info->vaddr);
 		}
 
  unlock:
@@ -1472,7 +1474,7 @@ static int unapply_uprobe(struct uprobe *uprobe, struct mm_struct *mm)
 			continue;
 
 		vaddr = offset_to_vaddr(vma, uprobe->offset);
-		err |= remove_breakpoint(uprobe, mm, vaddr);
+		err |= remove_breakpoint(uprobe, vma, vaddr);
 	}
 	mmap_read_unlock(mm);
 
@@ -1610,7 +1612,7 @@ int uprobe_mmap(struct vm_area_struct *vma)
 		if (!fatal_signal_pending(current) &&
 		    filter_chain(uprobe, vma->vm_mm)) {
 			unsigned long vaddr = offset_to_vaddr(vma, uprobe->offset);
-			install_breakpoint(uprobe, vma->vm_mm, vma, vaddr);
+			install_breakpoint(uprobe, vma, vaddr);
 		}
 		put_uprobe(uprobe);
 	}

From patchwork Tue Mar 4 15:48:45 2025
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 14000999
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, David Hildenbrand, Andrew Morton, Matthew Wilcox, Russell King, Masami Hiramatsu, Oleg Nesterov, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, "Liang, Kan", Tong Tiangen
Subject: [PATCH -next v1 2/3] kernel/events/uprobes: pass VMA to set_swbp(), set_orig_insn() and uprobe_write_opcode()
Date: Tue, 4 Mar 2025 16:48:45 +0100
Message-ID: <20250304154846.1937958-3-david@redhat.com>
In-Reply-To: <20250304154846.1937958-1-david@redhat.com>
References: <20250304154846.1937958-1-david@redhat.com>

We already have the VMA, no need to look it up using get_user_page_vma_remote().
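As an aside for readers outside mm: the redundancy being removed here — a
helper that looks up and returns the containing VMA even though the caller
already holds it — can be pictured with a small user-space C analogy
(hypothetical `*_demo` names, not the kernel GUP API):

```c
#include <assert.h>
#include <stddef.h>

struct page_demo { unsigned long pfn; };
struct vma_demo { unsigned long start, end; struct page_demo page; };

/* Analogy for get_user_page_vma_remote(): returns the page AND
 * re-derives the containing VMA for the caller via an out-parameter. */
static struct page_demo *get_page_and_vma(struct vma_demo *table, size_t n,
					  unsigned long addr,
					  struct vma_demo **vmap)
{
	for (size_t i = 0; i < n; i++) {
		if (addr >= table[i].start && addr < table[i].end) {
			*vmap = &table[i];
			return &table[i].page;
		}
	}
	return NULL;
}

/* Analogy for the simpler call: the caller already knows the VMA,
 * so the per-call lookup (and its out-parameter) disappears. */
static struct page_demo *get_page_only(struct vma_demo *vma)
{
	return &vma->page;
}
```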
We can now switch to get_user_pages_remote().

Signed-off-by: David Hildenbrand
---
 arch/arm/probes/uprobes/core.c |  4 ++--
 include/linux/uprobes.h        |  6 +++---
 kernel/events/uprobes.c        | 33 +++++++++++++++++----------------
 3 files changed, 22 insertions(+), 21 deletions(-)

diff --git a/arch/arm/probes/uprobes/core.c b/arch/arm/probes/uprobes/core.c
index f5f790c6e5f89..885e0c5e8c20d 100644
--- a/arch/arm/probes/uprobes/core.c
+++ b/arch/arm/probes/uprobes/core.c
@@ -26,10 +26,10 @@ bool is_swbp_insn(uprobe_opcode_t *insn)
 		(UPROBE_SWBP_ARM_INSN & 0x0fffffff);
 }
 
-int set_swbp(struct arch_uprobe *auprobe, struct mm_struct *mm,
+int set_swbp(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
 	     unsigned long vaddr)
 {
-	return uprobe_write_opcode(auprobe, mm, vaddr,
+	return uprobe_write_opcode(auprobe, vma, vaddr,
 		   __opcode_to_mem_arm(auprobe->bpinsn));
 }
 
diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index a40efdda9052b..4da3bce5e062d 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -186,13 +186,13 @@ struct uprobes_state {
 };
 
 extern void __init uprobes_init(void);
-extern int set_swbp(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long vaddr);
-extern int set_orig_insn(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long vaddr);
+extern int set_swbp(struct arch_uprobe *aup, struct vm_area_struct *vma, unsigned long vaddr);
+extern int set_orig_insn(struct arch_uprobe *aup, struct vm_area_struct *vma, unsigned long vaddr);
 extern bool is_swbp_insn(uprobe_opcode_t *insn);
 extern bool is_trap_insn(uprobe_opcode_t *insn);
 extern unsigned long uprobe_get_swbp_addr(struct pt_regs *regs);
 extern unsigned long uprobe_get_trap_addr(struct pt_regs *regs);
-extern int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long vaddr, uprobe_opcode_t);
+extern int uprobe_write_opcode(struct arch_uprobe *auprobe, struct vm_area_struct *vma, unsigned long vaddr, uprobe_opcode_t);
 extern struct uprobe *uprobe_register(struct inode *inode, loff_t offset, loff_t ref_ctr_offset, struct uprobe_consumer *uc);
 extern int uprobe_apply(struct uprobe *uprobe, struct uprobe_consumer *uc, bool);
 extern void uprobe_unregister_nosync(struct uprobe *uprobe, struct uprobe_consumer *uc);
 
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 991aacc80d0e0..0276defd6fbfa 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -474,19 +474,19 @@ static int update_ref_ctr(struct uprobe *uprobe, struct mm_struct *mm,
  *
  * uprobe_write_opcode - write the opcode at a given virtual address.
  * @auprobe: arch specific probepoint information.
- * @mm: the probed process address space.
+ * @vma: the probed virtual memory area.
  * @vaddr: the virtual address to store the opcode.
  * @opcode: opcode to be written at @vaddr.
  *
  * Called with mm->mmap_lock held for read or write.
  * Return 0 (success) or a negative errno.
  */
-int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
-		unsigned long vaddr, uprobe_opcode_t opcode)
+int uprobe_write_opcode(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
+		unsigned long vaddr, uprobe_opcode_t opcode)
 {
+	struct mm_struct *mm = vma->vm_mm;
 	struct uprobe *uprobe;
 	struct page *old_page, *new_page;
-	struct vm_area_struct *vma;
 	int ret, is_register, ref_ctr_updated = 0;
 	bool orig_page_huge = false;
 	unsigned int gup_flags = FOLL_FORCE;
@@ -498,9 +498,9 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 	if (is_register)
 		gup_flags |= FOLL_SPLIT_PMD;
 	/* Read the page with vaddr into memory */
-	old_page = get_user_page_vma_remote(mm, vaddr, gup_flags, &vma);
-	if (IS_ERR(old_page))
-		return PTR_ERR(old_page);
+	ret = get_user_pages_remote(mm, vaddr, 1, gup_flags, &old_page, NULL);
+	if (ret != 1)
+		return ret;
 
 	ret = verify_opcode(old_page, vaddr, &opcode);
 	if (ret <= 0)
@@ -590,30 +590,31 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 
 /**
  * set_swbp - store breakpoint at a given address.
  * @auprobe: arch specific probepoint information.
- * @mm: the probed process address space.
+ * @vma: the probed virtual memory area.
  * @vaddr: the virtual address to insert the opcode.
  *
  * For mm @mm, store the breakpoint instruction at @vaddr.
  * Return 0 (success) or a negative errno.
  */
-int __weak set_swbp(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long vaddr)
+int __weak set_swbp(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
+		unsigned long vaddr)
 {
-	return uprobe_write_opcode(auprobe, mm, vaddr, UPROBE_SWBP_INSN);
+	return uprobe_write_opcode(auprobe, vma, vaddr, UPROBE_SWBP_INSN);
 }
 
 /**
  * set_orig_insn - Restore the original instruction.
- * @mm: the probed process address space.
+ * @vma: the probed virtual memory area.
  * @auprobe: arch specific probepoint information.
  * @vaddr: the virtual address to insert the opcode.
  *
  * For mm @mm, restore the original opcode (opcode) at @vaddr.
 * Return 0 (success) or a negative errno.
 */
-int __weak
-set_orig_insn(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long vaddr)
+int __weak set_orig_insn(struct arch_uprobe *auprobe,
+		struct vm_area_struct *vma, unsigned long vaddr)
 {
-	return uprobe_write_opcode(auprobe, mm, vaddr,
+	return uprobe_write_opcode(auprobe, vma, vaddr,
 			*(uprobe_opcode_t *)&auprobe->insn);
 }
 
@@ -1153,7 +1154,7 @@ static int install_breakpoint(struct uprobe *uprobe, struct vm_area_struct *vma,
 	if (first_uprobe)
 		set_bit(MMF_HAS_UPROBES, &mm->flags);
 
-	ret = set_swbp(&uprobe->arch, mm, vaddr);
+	ret = set_swbp(&uprobe->arch, vma, vaddr);
 	if (!ret)
 		clear_bit(MMF_RECALC_UPROBES, &mm->flags);
 	else if (first_uprobe)
@@ -1168,7 +1169,7 @@ static int remove_breakpoint(struct uprobe *uprobe, struct vm_area_struct *vma,
 	struct mm_struct *mm = vma->vm_mm;
 
 	set_bit(MMF_RECALC_UPROBES, &mm->flags);
-	return set_orig_insn(&uprobe->arch, mm, vaddr);
+	return set_orig_insn(&uprobe->arch, vma, vaddr);
 }
 
 struct map_info {

From patchwork Tue Mar 4 15:48:46 2025
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 14001001
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, David Hildenbrand, Andrew Morton, Matthew Wilcox, Russell King, Masami Hiramatsu, Oleg Nesterov, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland, Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, "Liang, Kan", Tong Tiangen
Subject: [PATCH -next v1 3/3] kernel/events/uprobes: uprobe_write_opcode() rewrite
Date: Tue, 4 Mar 2025 16:48:46 +0100
Message-ID: <20250304154846.1937958-4-david@redhat.com>
In-Reply-To: <20250304154846.1937958-1-david@redhat.com>
References: <20250304154846.1937958-1-david@redhat.com>

uprobe_write_opcode() does some pretty low-level
things that it really shouldn't be doing: for example, manually
breaking COW by allocating anonymous folios and replacing mapped
pages. Further, it does some shaky things: for example, writing to
possibly COW-shared anonymous pages, or zapping anonymous pages that
might be pinned. We're also not taking care of uffd, uffd-wp and
softdirty, although these are rather corner cases here. Let's just get
it right, like ordinary ptrace writes would.

Let's rewrite the code, leaving COW-breaking to core-MM, triggered by
FOLL_FORCE|FOLL_WRITE (note that the code was already using
FOLL_FORCE). We'll use GUP to lookup/faultin the page and break COW if
required. Then, we'll walk the page tables using a folio_walk to
perform our page modification atomically, by temporarily unmapping the
PTE and flushing the TLB.

Likely, we could avoid the temporary unmap in case we can just
atomically write the instruction, but that will be a separate project.

Unfortunately, we still have to implement the zapping logic manually,
because we only want to zap in specific circumstances (e.g., page
content identical).

Note that we can now handle large folios (compound pages) and the
shared zeropage just fine, so drop these checks.

Signed-off-by: David Hildenbrand
---
 kernel/events/uprobes.c | 316 ++++++++++++++++++++--------------------
 1 file changed, 160 insertions(+), 156 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 0276defd6fbfa..4e39280f8f424 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include /* check_stable_address_space */
+#include
 #include
@@ -151,91 +152,6 @@ static loff_t vaddr_to_offset(struct vm_area_struct *vma, unsigned long vaddr)
 	return ((loff_t)vma->vm_pgoff << PAGE_SHIFT) + (vaddr - vma->vm_start);
 }
 
-/**
- * __replace_page - replace page in vma by new page.
- * based on replace_page in mm/ksm.c
- *
- * @vma:      vma that holds the pte pointing to page
- * @addr:     address the old @page is mapped at
- * @old_page: the page we are replacing by new_page
- * @new_page: the modified page we replace page by
- *
- * If @new_page is NULL, only unmap @old_page.
- *
- * Returns 0 on success, negative error code otherwise.
- */
-static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
-			  struct page *old_page, struct page *new_page)
-{
-	struct folio *old_folio = page_folio(old_page);
-	struct folio *new_folio;
-	struct mm_struct *mm = vma->vm_mm;
-	DEFINE_FOLIO_VMA_WALK(pvmw, old_folio, vma, addr, 0);
-	int err;
-	struct mmu_notifier_range range;
-	pte_t pte;
-
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, addr,
-				addr + PAGE_SIZE);
-
-	if (new_page) {
-		new_folio = page_folio(new_page);
-		err = mem_cgroup_charge(new_folio, vma->vm_mm, GFP_KERNEL);
-		if (err)
-			return err;
-	}
-
-	/* For folio_free_swap() below */
-	folio_lock(old_folio);
-
-	mmu_notifier_invalidate_range_start(&range);
-	err = -EAGAIN;
-	if (!page_vma_mapped_walk(&pvmw))
-		goto unlock;
-	VM_BUG_ON_PAGE(addr != pvmw.address, old_page);
-	pte = ptep_get(pvmw.pte);
-
-	/*
-	 * Handle PFN swap PTES, such as device-exclusive ones, that actually
-	 * map pages: simply trigger GUP again to fix it up.
-	 */
-	if (unlikely(!pte_present(pte))) {
-		page_vma_mapped_walk_done(&pvmw);
-		goto unlock;
-	}
-
-	if (new_page) {
-		folio_get(new_folio);
-		folio_add_new_anon_rmap(new_folio, vma, addr, RMAP_EXCLUSIVE);
-		folio_add_lru_vma(new_folio, vma);
-	} else
-		/* no new page, just dec_mm_counter for old_page */
-		dec_mm_counter(mm, MM_ANONPAGES);
-
-	if (!folio_test_anon(old_folio)) {
-		dec_mm_counter(mm, mm_counter_file(old_folio));
-		inc_mm_counter(mm, MM_ANONPAGES);
-	}
-
-	flush_cache_page(vma, addr, pte_pfn(pte));
-	ptep_clear_flush(vma, addr, pvmw.pte);
-	if (new_page)
-		set_pte_at(mm, addr, pvmw.pte,
-			   mk_pte(new_page, vma->vm_page_prot));
-
-	folio_remove_rmap_pte(old_folio, old_page, vma);
-	if (!folio_mapped(old_folio))
-		folio_free_swap(old_folio);
-	page_vma_mapped_walk_done(&pvmw);
-	folio_put(old_folio);
-
-	err = 0;
- unlock:
-	mmu_notifier_invalidate_range_end(&range);
-	folio_unlock(old_folio);
-	return err;
-}
-
 /**
  * is_swbp_insn - check if instruction is breakpoint instruction.
  * @insn: instruction to be checked.
@@ -463,6 +379,105 @@ static int update_ref_ctr(struct uprobe *uprobe, struct mm_struct *mm,
 	return ret;
 }
 
+static bool orig_page_is_identical(struct vm_area_struct *vma,
+		unsigned long vaddr, struct page *page, bool *pmd_mappable)
+{
+	const pgoff_t index = vaddr_to_offset(vma, vaddr) >> PAGE_SHIFT;
+	struct page *orig_page = find_get_page(vma->vm_file->f_inode->i_mapping,
+					       index);
+	struct folio *orig_folio;
+	bool identical;
+
+	if (!orig_page)
+		return false;
+	orig_folio = page_folio(orig_page);
+
+	*pmd_mappable = folio_test_pmd_mappable(orig_folio);
+	identical = folio_test_uptodate(orig_folio) &&
+		    pages_identical(page, orig_page);
+	folio_put(orig_folio);
+	return identical;
+}
+
+static int __uprobe_write_opcode(struct vm_area_struct *vma,
+		struct folio_walk *fw, struct folio *folio,
+		unsigned long opcode_vaddr, uprobe_opcode_t opcode)
+{
+	const unsigned long vaddr = opcode_vaddr & PAGE_MASK;
+	const bool is_register = !!is_swbp_insn(&opcode);
+	bool pmd_mappable;
+
+	/* We're done if we don't find an anonymous folio when unregistering. */
+	if (!folio_test_anon(folio))
+		return is_register ? -EFAULT : 0;
+
+	/* For now, we'll only handle PTE-mapped folios. */
+	if (fw->level != FW_LEVEL_PTE)
+		return -EFAULT;
+
+	/*
+	 * See can_follow_write_pte(): we'd actually prefer a writable PTE here,
+	 * but the VMA might not be writable.
+	 */
+	if (!pte_write(fw->pte)) {
+		if (!PageAnonExclusive(fw->page))
+			return -EFAULT;
+		if (unlikely(userfaultfd_pte_wp(vma, fw->pte)))
+			return -EFAULT;
+		/* SOFTDIRTY is handled via pte_mkdirty() below. */
+	}
+
+	/*
+	 * We'll temporarily unmap the page and flush the TLB, such that we can
+	 * modify the page atomically.
+	 */
+	flush_cache_page(vma, vaddr, pte_pfn(fw->pte));
+	fw->pte = ptep_clear_flush(vma, vaddr, fw->ptep);
+
+	/* Verify that the page content is still as expected. */
+	if (verify_opcode(fw->page, opcode_vaddr, &opcode) <= 0) {
+		set_pte_at(vma->vm_mm, vaddr, fw->ptep, fw->pte);
+		return -EAGAIN;
+	}
+	copy_to_page(fw->page, opcode_vaddr, &opcode, UPROBE_SWBP_INSN_SIZE);
+
+	/*
+	 * When unregistering, we may only zap a PTE if uffd is disabled and
+	 * there are no unexpected folio references ...
+	 */
+	if (is_register || userfaultfd_missing(vma) ||
+	    (folio_ref_count(folio) != folio_mapcount(folio) +
+	     folio_test_swapcache(folio) * folio_nr_pages(folio)))
+		goto remap;
+
+	/*
+	 * ... and the mapped page is identical to the original page that
+	 * would get faulted in on next access.
+	 */
+	if (!orig_page_is_identical(vma, vaddr, fw->page, &pmd_mappable))
+		goto remap;
+
+	dec_mm_counter(vma->vm_mm, MM_ANONPAGES);
+	folio_remove_rmap_pte(folio, fw->page, vma);
+	if (!folio_mapped(folio) && folio_test_swapcache(folio) &&
+	    folio_trylock(folio)) {
+		folio_free_swap(folio);
+		folio_unlock(folio);
+	}
+	folio_put(folio);
+
+	return pmd_mappable;
+remap:
+	/*
+	 * Make sure that our copy_to_page() changes become visible before the
+	 * set_pte_at() write.
+	 */
+	smp_wmb();
+	/* We modified the page. Make sure to mark the PTE dirty. */
+	set_pte_at(vma->vm_mm, vaddr, fw->ptep, pte_mkdirty(fw->pte));
+	return 0;
+}
+
 /*
  * NOTE:
  * Expect the breakpoint instruction to be the smallest size instruction for
@@ -475,116 +490,105 @@ static int update_ref_ctr(struct uprobe *uprobe, struct mm_struct *mm,
  * uprobe_write_opcode - write the opcode at a given virtual address.
  * @auprobe: arch specific probepoint information.
  * @vma: the probed virtual memory area.
- * @vaddr: the virtual address to store the opcode.
- * @opcode: opcode to be written at @vaddr.
+ * @opcode_vaddr: the virtual address to store the opcode.
+ * @opcode: opcode to be written at @opcode_vaddr.
  *
  * Called with mm->mmap_lock held for read or write.
  * Return 0 (success) or a negative errno.
  */
 int uprobe_write_opcode(struct arch_uprobe *auprobe, struct vm_area_struct *vma,
-		unsigned long vaddr, uprobe_opcode_t opcode)
+		const unsigned long opcode_vaddr, uprobe_opcode_t opcode)
 {
+	const unsigned long vaddr = opcode_vaddr & PAGE_MASK;
 	struct mm_struct *mm = vma->vm_mm;
 	struct uprobe *uprobe;
-	struct page *old_page, *new_page;
 	int ret, is_register, ref_ctr_updated = 0;
-	bool orig_page_huge = false;
 	unsigned int gup_flags = FOLL_FORCE;
+	struct mmu_notifier_range range;
+	struct folio_walk fw;
+	struct folio *folio;
+	struct page *page;
 
 	is_register = is_swbp_insn(&opcode);
 	uprobe = container_of(auprobe, struct uprobe, arch);
 
-retry:
+	if (WARN_ON_ONCE(!is_cow_mapping(vma->vm_flags)))
+		return -EINVAL;
+
+	/*
+	 * When registering, we have to break COW to get an exclusive anonymous
+	 * page that we can safely modify. Use FOLL_WRITE to trigger a write
+	 * fault if required. When unregistering, we might be lucky and the
+	 * anon page is already gone. So defer write faults until really
+	 * required. Use FOLL_SPLIT_PMD, because __uprobe_write_opcode()
+	 * cannot deal with PMDs yet.
+	 */
 	if (is_register)
-		gup_flags |= FOLL_SPLIT_PMD;
-	/* Read the page with vaddr into memory */
-	ret = get_user_pages_remote(mm, vaddr, 1, gup_flags, &old_page, NULL);
+		gup_flags |= FOLL_WRITE | FOLL_SPLIT_PMD;
+
+retry:
+	ret = get_user_pages_remote(mm, vaddr, 1, gup_flags, &page, NULL);
 	if (ret != 1)
-		return ret;
+		goto out;
 
-	ret = verify_opcode(old_page, vaddr, &opcode);
+	ret = verify_opcode(page, opcode_vaddr, &opcode);
+	put_page(page);
 	if (ret <= 0)
-		goto put_old;
-
-	if (is_zero_page(old_page)) {
-		ret = -EINVAL;
-		goto put_old;
-	}
-
-	if (WARN(!is_register && PageCompound(old_page),
-		 "uprobe unregister should never work on compound page\n")) {
-		ret = -EINVAL;
-		goto put_old;
-	}
+		goto out;
 
 	/* We are going to replace instruction, update ref_ctr. */
 	if (!ref_ctr_updated && uprobe->ref_ctr_offset) {
		ret = update_ref_ctr(uprobe, mm, is_register ? 1 : -1);
 		if (ret)
-			goto put_old;
+			goto out;
 
 		ref_ctr_updated = 1;
 	}
 
-	ret = 0;
-	if (!is_register && !PageAnon(old_page))
-		goto put_old;
-
-	ret = anon_vma_prepare(vma);
-	if (ret)
-		goto put_old;
-
-	ret = -ENOMEM;
-	new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vaddr);
-	if (!new_page)
-		goto put_old;
-
-	__SetPageUptodate(new_page);
-	copy_highpage(new_page, old_page);
-	copy_to_page(new_page, vaddr, &opcode, UPROBE_SWBP_INSN_SIZE);
 	if (!is_register) {
-		struct page *orig_page;
-		pgoff_t index;
-
-		VM_BUG_ON_PAGE(!PageAnon(old_page), old_page);
-
-		index = vaddr_to_offset(vma, vaddr & PAGE_MASK) >> PAGE_SHIFT;
-		orig_page = find_get_page(vma->vm_file->f_inode->i_mapping,
-				index);
-
-		if (orig_page) {
-			if (PageUptodate(orig_page) &&
-			    pages_identical(new_page, orig_page)) {
-				/* let go new_page */
-				put_page(new_page);
-				new_page = NULL;
-
-				if (PageCompound(orig_page))
-					orig_page_huge = true;
-			}
-			put_page(orig_page);
-		}
+		/*
+		 * In the common case, we'll be able to zap the page when
+		 * unregistering. So trigger MMU notifiers now, as we won't
+		 * be able to do it under PTL.
+		 */
+		mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
+					vaddr, vaddr + PAGE_SIZE);
+		mmu_notifier_invalidate_range_start(&range);
 	}
 
-	ret = __replace_page(vma, vaddr & PAGE_MASK, old_page, new_page);
-	if (new_page)
-		put_page(new_page);
-put_old:
-	put_page(old_page);
+	/* Walk the page tables again, to perform the actual update. */
+	folio = folio_walk_start(&fw, vma, vaddr, 0);
+	if (folio) {
+		ret = __uprobe_write_opcode(vma, &fw, folio, opcode_vaddr,
+					    opcode);
+		folio_walk_end(&fw, vma);
+	} else {
+		ret = -EAGAIN;
+	}
 
-	if (unlikely(ret == -EAGAIN))
+	if (!is_register)
+		mmu_notifier_invalidate_range_end(&range);
+
+	switch (ret) {
+	case -EFAULT:
+		gup_flags |= FOLL_WRITE | FOLL_SPLIT_PMD;
+		fallthrough;
+	case -EAGAIN:
 		goto retry;
+	default:
+		break;
+	}
 
+out:
 	/* Revert back reference counter if instruction update failed. */
-	if (ret && is_register && ref_ctr_updated)
+	if (ret < 0 && is_register && ref_ctr_updated)
 		update_ref_ctr(uprobe, mm, -1);
 
 	/* try collapse pmd for compound page */
-	if (!ret && orig_page_huge)
+	if (ret > 0)
 		collapse_pte_mapped_thp(mm, vaddr, false);
 
-	return ret;
+	return ret < 0 ? ret : 0;
 }
 
 /**