From patchwork Thu Feb 20 20:29:59 2025
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13984487
Date: Thu, 20 Feb 2025 20:29:59 +0000
From: Matthew Wilcox
To: linux-mm@kvack.org
Subject: Is uprobe_write_opcode() OK?

I just wrote the patch below, but now I'm wondering if it's perpetuating
the mistake of not using our existing COW mechanism to handle uprobes.
Anyone looked at this code recently?

commit 8471400297a4
Author: Matthew Wilcox (Oracle)
Date:   Thu Feb 20 15:04:46 2025 -0500

    uprobes: Use a folio instead of a page

    Allocate an order-0 folio instead of a page.  Saves a few calls to
    compound_head().

    Signed-off-by: Matthew Wilcox (Oracle)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 2ca797cbe465..d4330870cf6a 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -158,17 +158,16 @@ static loff_t vaddr_to_offset(struct vm_area_struct *vma, unsigned long vaddr)
  * @vma:      vma that holds the pte pointing to page
  * @addr:     address the old @page is mapped at
  * @old_page: the page we are replacing by new_page
- * @new_page: the modified page we replace page by
+ * @new_folio: the modified folio we replace @page with
  *
- * If @new_page is NULL, only unmap @old_page.
+ * If @new_folio is NULL, only unmap @old_page.
  *
  * Returns 0 on success, negative error code otherwise.
  */
 static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
-			struct page *old_page, struct page *new_page)
+			struct page *old_page, struct folio *new_folio)
 {
 	struct folio *old_folio = page_folio(old_page);
-	struct folio *new_folio;
 	struct mm_struct *mm = vma->vm_mm;
 	DEFINE_FOLIO_VMA_WALK(pvmw, old_folio, vma, addr, 0);
 	int err;
@@ -177,8 +176,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, addr,
 				addr + PAGE_SIZE);
 
-	if (new_page) {
-		new_folio = page_folio(new_page);
+	if (new_folio) {
 		err = mem_cgroup_charge(new_folio, vma->vm_mm, GFP_KERNEL);
 		if (err)
 			return err;
@@ -193,7 +191,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 		goto unlock;
 	VM_BUG_ON_PAGE(addr != pvmw.address, old_page);
 
-	if (new_page) {
+	if (new_folio) {
 		folio_get(new_folio);
 		folio_add_new_anon_rmap(new_folio, vma, addr, RMAP_EXCLUSIVE);
 		folio_add_lru_vma(new_folio, vma);
@@ -208,9 +206,9 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 
 	flush_cache_page(vma, addr, pte_pfn(ptep_get(pvmw.pte)));
 	ptep_clear_flush(vma, addr, pvmw.pte);
-	if (new_page)
+	if (new_folio)
 		set_pte_at(mm, addr, pvmw.pte,
-			   mk_pte(new_page, vma->vm_page_prot));
+			   folio_mk_pte(new_folio, vma->vm_page_prot));
 
 	folio_remove_rmap_pte(old_folio, old_page, vma);
 	if (!folio_mapped(old_folio))
@@ -474,7 +472,8 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 		unsigned long vaddr, uprobe_opcode_t opcode)
 {
 	struct uprobe *uprobe;
-	struct page *old_page, *new_page;
+	struct page *old_page;
+	struct folio *new_folio;
 	struct vm_area_struct *vma;
 	int ret, is_register, ref_ctr_updated = 0;
 	bool orig_page_huge = false;
@@ -519,13 +518,14 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 		goto put_old;
 
 	ret = -ENOMEM;
-	new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vaddr);
-	if (!new_page)
+	new_folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, vaddr);
+	if (!new_folio)
 		goto put_old;
 
-	__SetPageUptodate(new_page);
-	copy_highpage(new_page, old_page);
-	copy_to_page(new_page, vaddr, &opcode, UPROBE_SWBP_INSN_SIZE);
+	copy_highpage(folio_page(new_folio, 0), old_page);
+	copy_to_page(folio_page(new_folio, 0), vaddr, &opcode,
+			UPROBE_SWBP_INSN_SIZE);
+	__folio_mark_uptodate(new_folio);
 
 	if (!is_register) {
 		struct page *orig_page;
@@ -539,10 +539,11 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 
 		if (orig_page) {
 			if (PageUptodate(orig_page) &&
-			    pages_identical(new_page, orig_page)) {
+			    pages_identical(folio_page(new_folio, 0),
+					    orig_page)) {
 				/* let go new_page */
-				put_page(new_page);
-				new_page = NULL;
+				folio_put(new_folio);
+				new_folio = NULL;
 
 				if (PageCompound(orig_page))
 					orig_page_huge = true;
@@ -551,9 +552,9 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 		}
 	}
 
-	ret = __replace_page(vma, vaddr & PAGE_MASK, old_page, new_page);
-	if (new_page)
-		put_page(new_page);
+	ret = __replace_page(vma, vaddr & PAGE_MASK, old_page, new_folio);
+	if (new_folio)
+		folio_put(new_folio);
 put_old:
 	put_page(old_page);
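
PS: for anyone skimming the diff, the shape of the conversion is sketched
standalone below. This is only an illustration of the pattern, not the
function above: the wrapper name and its trimmed argument list are invented
for the example, and the NULL-folio / orig_page handling is left out. It
relies on the folio being order-0, so folio_page(new_folio, 0) is the
folio's only page.

/* Illustrative sketch only -- not the kernel function in the patch. */
static int example_insn_copy(struct vm_area_struct *vma, struct page *old_page,
			     unsigned long vaddr, uprobe_opcode_t opcode)
{
	struct folio *new_folio;
	int ret;

	/* Order-0 folio allocation takes the place of alloc_page_vma(). */
	new_folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, vaddr);
	if (!new_folio)
		return -ENOMEM;

	/* folio_page(folio, 0) is the single page backing an order-0 folio. */
	copy_highpage(folio_page(new_folio, 0), old_page);
	copy_to_page(folio_page(new_folio, 0), vaddr, &opcode,
			UPROBE_SWBP_INSN_SIZE);
	__folio_mark_uptodate(new_folio);

	/* __replace_page() now takes the folio directly. */
	ret = __replace_page(vma, vaddr & PAGE_MASK, old_page, new_folio);

	/* folio_put() pairs with the allocation, replacing put_page(). */
	folio_put(new_folio);
	return ret;
}

The uptodate flag and the refcount live in the folio, which is why
__SetPageUptodate()/put_page() become __folio_mark_uptodate()/folio_put()
in the patch.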