From patchwork Wed Jan 29 11:53:59 2025
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13953672
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-doc@vger.kernel.org, dri-devel@lists.freedesktop.org, linux-mm@kvack.org, nouveau@lists.freedesktop.org, David Hildenbrand, Andrew Morton, Jérôme Glisse, Jonathan Corbet, Alex Shi, Yanteng Si, Karol Herbst, Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter, "Liam R. Howlett", Lorenzo Stoakes, Vlastimil Babka, Jann Horn, Pasha Tatashin, Peter Xu, Alistair Popple, Jason Gunthorpe, stable@vger.kernel.org
Subject: [PATCH v1 01/12] mm/gup: reject FOLL_SPLIT_PMD with hugetlb VMAs
Date: Wed, 29 Jan 2025 12:53:59 +0100
Message-ID: <20250129115411.2077152-2-david@redhat.com>
In-Reply-To: <20250129115411.2077152-1-david@redhat.com>
References: <20250129115411.2077152-1-david@redhat.com>

We only have two FOLL_SPLIT_PMD users. While uprobe refuses hugetlb early,
make_device_exclusive_range() can end up getting called on hugetlb VMAs.

Right now, this means that with a PMD-sized hugetlb page, we can end up
calling split_huge_pmd(), because pmd_trans_huge() also succeeds with
hugetlb PMDs.

For example, using a modified hmm-test selftest one can trigger:

[  207.017134][T14945] ------------[ cut here ]------------
[  207.018614][T14945] kernel BUG at mm/page_table_check.c:87!
[  207.019716][T14945] Oops: invalid opcode: 0000 [#1] PREEMPT SMP KASAN NOPTI
[  207.021072][T14945] CPU: 3 UID: 0 PID: ...
[  207.023036][T14945] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-2.fc40 04/01/2014
[  207.024834][T14945] RIP: 0010:page_table_check_clear.part.0+0x488/0x510
[  207.026128][T14945] Code: ...
[  207.029965][T14945] RSP: 0018:ffffc9000cb8f348 EFLAGS: 00010293
[  207.031139][T14945] RAX: 0000000000000000 RBX: 00000000ffffffff RCX: ffffffff8249a0cd
[  207.032649][T14945] RDX: ffff88811e883c80 RSI: ffffffff8249a357 RDI: ffff88811e883c80
[  207.034183][T14945] RBP: ffff888105c0a050 R08: 0000000000000005 R09: 0000000000000000
[  207.035688][T14945] R10: 00000000ffffffff R11: 0000000000000003 R12: 0000000000000001
[  207.037203][T14945] R13: 0000000000000200 R14: 0000000000000001 R15: dffffc0000000000
[  207.038711][T14945] FS:  00007f2783275740(0000) GS:ffff8881f4980000(0000) knlGS:0000000000000000
[  207.040407][T14945] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  207.041660][T14945] CR2: 00007f2782c00000 CR3: 0000000132356000 CR4: 0000000000750ef0
[  207.043196][T14945] PKRU: 55555554
[  207.043880][T14945] Call Trace:
[  207.044506][T14945]
[  207.045086][T14945]  ? __die+0x51/0x92
[  207.045864][T14945]  ? die+0x29/0x50
[  207.046596][T14945]  ? do_trap+0x250/0x320
[  207.047430][T14945]  ? do_error_trap+0xe7/0x220
[  207.048346][T14945]  ? page_table_check_clear.part.0+0x488/0x510
[  207.049535][T14945]  ? handle_invalid_op+0x34/0x40
[  207.050494][T14945]  ? page_table_check_clear.part.0+0x488/0x510
[  207.051681][T14945]  ? exc_invalid_op+0x2e/0x50
[  207.052589][T14945]  ? asm_exc_invalid_op+0x1a/0x20
[  207.053596][T14945]  ? page_table_check_clear.part.0+0x1fd/0x510
[  207.054790][T14945]  ? page_table_check_clear.part.0+0x487/0x510
[  207.055993][T14945]  ? page_table_check_clear.part.0+0x488/0x510
[  207.057195][T14945]  ? page_table_check_clear.part.0+0x487/0x510
[  207.058384][T14945]  __page_table_check_pmd_clear+0x34b/0x5a0
[  207.059524][T14945]  ? __pfx___page_table_check_pmd_clear+0x10/0x10
[  207.060775][T14945]  ? __pfx___mutex_unlock_slowpath+0x10/0x10
[  207.061940][T14945]  ? __pfx___lock_acquire+0x10/0x10
[  207.062967][T14945]  pmdp_huge_clear_flush+0x279/0x360
[  207.064024][T14945]  split_huge_pmd_locked+0x82b/0x3750
...

Before commit 9cb28da54643 ("mm/gup: handle hugetlb in the generic
follow_page_mask code"), we would have ignored the flag; instead, let's
simply refuse the combination completely in check_vma_flags(): the caller
is likely not prepared to handle any hugetlb folios.

We'll teach make_device_exclusive_range() separately to ignore any hugetlb
folios as a future-proof safety net.
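For reference, a rough reproducer sketch (illustration only: it assumes the
test_hmm dmirror character device and the HMM_DMIRROR_EXCLUSIVE ioctl from
lib/test_hmm_uapi.h, plus preallocated hugetlb pages; the actual modified
hmm-test selftest is not included here):

#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#include "test_hmm_uapi.h"	/* struct hmm_dmirror_cmd, HMM_DMIRROR_EXCLUSIVE */

int main(void)
{
	const size_t size = 2UL * 1024 * 1024;	/* one PMD-sized hugetlb page on x86-64 */
	struct hmm_dmirror_cmd cmd = { 0 };
	void *buf;
	int fd;

	/* Back the mapping with a hugetlb PMD (requires vm.nr_hugepages > 0). */
	buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (buf == MAP_FAILED)
		return 1;
	memset(buf, 0, size);			/* fault the hugetlb page in */

	/* Opening the dmirror device binds it to this process's mm. */
	fd = open("/dev/hmm_dmirror0", O_RDWR);
	if (fd < 0)
		return 1;

	cmd.addr = (uintptr_t)buf;
	cmd.ptr = (uintptr_t)buf;
	cmd.npages = 1;
	/*
	 * Ends up in make_device_exclusive_range() and therefore in
	 * GUP(FOLL_SPLIT_PMD) on the hugetlb VMA.
	 */
	ioctl(fd, HMM_DMIRROR_EXCLUSIVE, &cmd);
	return 0;
}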
Fixes: 9cb28da54643 ("mm/gup: handle hugetlb in the generic follow_page_mask code")
Cc: stable@vger.kernel.org
Signed-off-by: David Hildenbrand
Reviewed-by: John Hubbard
Reviewed-by: Alistair Popple
---
 mm/gup.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/gup.c b/mm/gup.c
index 3883b307780e..61e751baf862 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1283,6 +1283,9 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 	if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
 		return -EOPNOTSUPP;
 
+	if ((gup_flags & FOLL_SPLIT_PMD) && is_vm_hugetlb_page(vma))
+		return -EOPNOTSUPP;
+
 	if (vma_is_secretmem(vma))
 		return -EFAULT;

From patchwork Wed Jan 29 11:54:00 2025
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13953673
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-doc@vger.kernel.org, dri-devel@lists.freedesktop.org, linux-mm@kvack.org, nouveau@lists.freedesktop.org, David Hildenbrand, Andrew Morton, Jérôme Glisse, Jonathan Corbet, Alex Shi, Yanteng Si, Karol Herbst, Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter, "Liam R. Howlett", Lorenzo Stoakes, Vlastimil Babka, Jann Horn, Pasha Tatashin, Peter Xu, Alistair Popple, Jason Gunthorpe, stable@vger.kernel.org
Subject: [PATCH v1 02/12] mm/rmap: reject hugetlb folios in folio_make_device_exclusive()
Date: Wed, 29 Jan 2025 12:54:00 +0100
Message-ID: <20250129115411.2077152-3-david@redhat.com>
In-Reply-To: <20250129115411.2077152-1-david@redhat.com>
References: <20250129115411.2077152-1-david@redhat.com>

Even though FOLL_SPLIT_PMD on hugetlb now always fails with -EOPNOTSUPP,
let's add a safety net in case FOLL_SPLIT_PMD usage would ever be reworked.

In particular, before commit 9cb28da54643 ("mm/gup: handle hugetlb in the
generic follow_page_mask code"), GUP(FOLL_SPLIT_PMD) would just have
returned a page. In particular, hugetlb folios that are not PMD-sized would
never have been prone to FOLL_SPLIT_PMD.

hugetlb folios can be anonymous, and page_make_device_exclusive_one() is
not really prepared for handling them at all. So let's spell that out.
Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Cc: stable@vger.kernel.org
Signed-off-by: David Hildenbrand
Reviewed-by: Alistair Popple
---
 mm/rmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index c6c4d4ea29a7..17fbfa61f7ef 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2499,7 +2499,7 @@ static bool folio_make_device_exclusive(struct folio *folio,
 	 * Restrict to anonymous folios for now to avoid potential writeback
 	 * issues.
 	 */
-	if (!folio_test_anon(folio))
+	if (!folio_test_anon(folio) || folio_test_hugetlb(folio))
 		return false;
 
 	rmap_walk(folio, &rwc);

From patchwork Wed Jan 29 11:54:01 2025
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13953693
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-doc@vger.kernel.org, dri-devel@lists.freedesktop.org, linux-mm@kvack.org, nouveau@lists.freedesktop.org, David Hildenbrand, Andrew Morton, Jérôme Glisse, Jonathan Corbet, Alex Shi, Yanteng Si, Karol Herbst, Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter, "Liam R. Howlett", Lorenzo Stoakes, Vlastimil Babka, Jann Horn, Pasha Tatashin, Peter Xu, Alistair Popple, Jason Gunthorpe
Subject: [PATCH v1 03/12] mm/rmap: convert make_device_exclusive_range() to make_device_exclusive()
Date: Wed, 29 Jan 2025 12:54:01 +0100
Message-ID: <20250129115411.2077152-4-david@redhat.com>
In-Reply-To: <20250129115411.2077152-1-david@redhat.com>
References: <20250129115411.2077152-1-david@redhat.com>

The single "real" user in the tree of make_device_exclusive_range() always
requests making only a single address exclusive.

The current implementation is hard to fix for properly supporting anonymous
THP / large folios and for avoiding messing with rmap walks in weird ways.
So let's always process a single address/page and return folio + page to
minimize page -> folio lookups. This is a preparation for further changes.

Reject any non-anonymous or hugetlb folios early, directly after GUP.
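For illustration, a minimal sketch of the new calling convention as a driver
would use it (a hedged sketch only: "my_dev" and the function name are
placeholders, error handling is trimmed, and the mmu-notifier critical
section a real driver needs around device programming is omitted; compare
the nouveau and test_hmm conversions below):

#include <linux/mm.h>
#include <linux/rmap.h>

static int my_make_addr_exclusive(struct mm_struct *mm, unsigned long addr,
				  void *my_dev)
{
	struct folio *folio;
	struct page *page;

	mmap_read_lock(mm);
	/* Faults the anonymous page in writable and locks its folio. */
	page = make_device_exclusive(mm, addr, my_dev, &folio);
	mmap_read_unlock(mm);
	if (IS_ERR(page))
		return PTR_ERR(page);

	/* ... program the device mapping for "page" here ... */

	/* Dropping the folio lock + reference re-allows CPU access on fault. */
	folio_unlock(folio);
	folio_put(folio);
	return 0;
}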
Signed-off-by: David Hildenbrand
Reviewed-by: Alistair Popple
Acked-by: Simona Vetter
---
 Documentation/mm/hmm.rst                    |  2 +-
 Documentation/translations/zh_CN/mm/hmm.rst |  2 +-
 drivers/gpu/drm/nouveau/nouveau_svm.c       |  5 +-
 include/linux/mmu_notifier.h                |  2 +-
 include/linux/rmap.h                        |  5 +-
 lib/test_hmm.c                              | 45 +++++------
 mm/rmap.c                                   | 90 +++++++++++----------
 7 files changed, 75 insertions(+), 76 deletions(-)

diff --git a/Documentation/mm/hmm.rst b/Documentation/mm/hmm.rst
index f6d53c37a2ca..7d61b7a8b65b 100644
--- a/Documentation/mm/hmm.rst
+++ b/Documentation/mm/hmm.rst
@@ -400,7 +400,7 @@ Exclusive access memory
 Some devices have features such as atomic PTE bits that can be used to implement
 atomic access to system memory. To support atomic operations to a shared virtual
 memory page such a device needs access to that page which is exclusive of any
-userspace access from the CPU. The ``make_device_exclusive_range()`` function
+userspace access from the CPU. The ``make_device_exclusive()`` function
 can be used to make a memory range inaccessible from userspace.
 
 This replaces all mappings for pages in the given range with special swap

diff --git a/Documentation/translations/zh_CN/mm/hmm.rst b/Documentation/translations/zh_CN/mm/hmm.rst
index 0669f947d0bc..22c210f4e94f 100644
--- a/Documentation/translations/zh_CN/mm/hmm.rst
+++ b/Documentation/translations/zh_CN/mm/hmm.rst
@@ -326,7 +326,7 @@ devm_memunmap_pages() 和 devm_release_mem_region() 当资源可以绑定到 ``s
 一些设备具有诸如原子PTE位的功能,可以用来实现对系统内存的原子访问。为了支持对一
 个共享的虚拟内存页的原子操作,这样的设备需要对该页的访问是排他的,而不是来自CPU
-的任何用户空间访问。 ``make_device_exclusive_range()`` 函数可以用来使一
+的任何用户空间访问。 ``make_device_exclusive()`` 函数可以用来使一
 个内存范围不能从用户空间访问。
 
 这将用特殊的交换条目替换给定范围内的所有页的映射。任何试图访问交换条目的行为都会

diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index b4da82ddbb6b..39e3740980bb 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -609,10 +609,9 @@ static int nouveau_atomic_range_fault(struct nouveau_svmm *svmm,
 	notifier_seq = mmu_interval_read_begin(&notifier->notifier);
 	mmap_read_lock(mm);
-	ret = make_device_exclusive_range(mm, start, start + PAGE_SIZE,
-					  &page, drm->dev);
+	page = make_device_exclusive(mm, start, drm->dev, &folio);
 	mmap_read_unlock(mm);
-	if (ret <= 0 || !page) {
+	if (IS_ERR(page)) {
 		ret = -EINVAL;
 		goto out;
 	}

diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index e2dd57ca368b..d4e714661826 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -46,7 +46,7 @@ struct mmu_interval_notifier;
  * @MMU_NOTIFY_EXCLUSIVE: to signal a device driver that the device will no
  * longer have exclusive access to the page. When sent during creation of an
  * exclusive range the owner will be initialised to the value provided by the
- * caller of make_device_exclusive_range(), otherwise the owner will be NULL.
+ * caller of make_device_exclusive(), otherwise the owner will be NULL.
  */
 enum mmu_notifier_event {
 	MMU_NOTIFY_UNMAP = 0,

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 683a04088f3f..86425d42c1a9 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -663,9 +663,8 @@ int folio_referenced(struct folio *, int is_locked,
 void try_to_migrate(struct folio *folio, enum ttu_flags flags);
 void try_to_unmap(struct folio *, enum ttu_flags flags);
 
-int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
-				unsigned long end, struct page **pages,
-				void *arg);
+struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,
+		void *owner, struct folio **foliop);
 
 /* Avoid racy checks */
 #define PVMW_SYNC		(1 << 0)

diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 056f2e411d7b..9e1b07a227a3 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -780,10 +780,8 @@ static int dmirror_exclusive(struct dmirror *dmirror,
 	unsigned long start, end, addr;
 	unsigned long size = cmd->npages << PAGE_SHIFT;
 	struct mm_struct *mm = dmirror->notifier.mm;
-	struct page *pages[64];
 	struct dmirror_bounce bounce;
-	unsigned long next;
-	int ret;
+	int ret = 0;
 
 	start = cmd->addr;
 	end = start + size;
@@ -795,36 +793,31 @@ static int dmirror_exclusive(struct dmirror *dmirror,
 		return -EINVAL;
 
 	mmap_read_lock(mm);
-	for (addr = start; addr < end; addr = next) {
-		unsigned long mapped = 0;
-		int i;
-
-		next = min(end, addr + (ARRAY_SIZE(pages) << PAGE_SHIFT));
+	for (addr = start; addr < end; addr += PAGE_SIZE) {
+		struct folio *folio;
+		struct page *page;
 
-		ret = make_device_exclusive_range(mm, addr, next, pages, NULL);
-		/*
-		 * Do dmirror_atomic_map() iff all pages are marked for
-		 * exclusive access to avoid accessing uninitialized
-		 * fields of pages.
-		 */
-		if (ret == (next - addr) >> PAGE_SHIFT)
-			mapped = dmirror_atomic_map(addr, next, pages, dmirror);
-		for (i = 0; i < ret; i++) {
-			if (pages[i]) {
-				unlock_page(pages[i]);
-				put_page(pages[i]);
-			}
+		page = make_device_exclusive(mm, addr, NULL, &folio);
+		if (IS_ERR(page)) {
+			ret = PTR_ERR(page);
+			break;
 		}
 
-		if (addr + (mapped << PAGE_SHIFT) < next) {
-			mmap_read_unlock(mm);
-			mmput(mm);
-			return -EBUSY;
-		}
+		ret = dmirror_atomic_map(addr, addr + PAGE_SIZE, &page, dmirror);
+		if (!ret)
+			ret = -EBUSY;
+		folio_unlock(folio);
+		folio_put(folio);
+
+		if (ret)
+			break;
 	}
 	mmap_read_unlock(mm);
 	mmput(mm);
+	if (ret)
+		return -EBUSY;
+
 	/* Return the migrated data for verification. */
 	ret = dmirror_bounce_init(&bounce, start, size);
 	if (ret)

diff --git a/mm/rmap.c b/mm/rmap.c
index 17fbfa61f7ef..676df4fba5b0 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2495,70 +2495,78 @@ static bool folio_make_device_exclusive(struct folio *folio,
 		.arg = &args,
 	};
 
-	/*
-	 * Restrict to anonymous folios for now to avoid potential writeback
-	 * issues.
-	 */
-	if (!folio_test_anon(folio) || folio_test_hugetlb(folio))
-		return false;
-
 	rmap_walk(folio, &rwc);
 
 	return args.valid && !folio_mapcount(folio);
 }
 
 /**
- * make_device_exclusive_range() - Mark a range for exclusive use by a device
+ * make_device_exclusive() - Mark an address for exclusive use by a device
  * @mm: mm_struct of associated target process
- * @start: start of the region to mark for exclusive device access
- * @end: end address of region
- * @pages: returns the pages which were successfully marked for exclusive access
+ * @addr: the virtual address to mark for exclusive device access
  * @owner: passed to MMU_NOTIFY_EXCLUSIVE range notifier to allow filtering
+ * @foliop: folio pointer will be stored here on success.
+ *
+ * This function looks up the page mapped at the given address, grabs a
+ * folio reference, locks the folio and replaces the PTE with special
+ * device-exclusive non-swap entry, preventing userspace CPU access. The
+ * function will return with the folio locked and referenced.
  *
- * Returns: number of pages found in the range by GUP. A page is marked for
- * exclusive access only if the page pointer is non-NULL.
+ * On fault these special device-exclusive entries are replaced with the
+ * original PTE under folio lock, after calling MMU notifiers.
  *
- * This function finds ptes mapping page(s) to the given address range, locks
- * them and replaces mappings with special swap entries preventing userspace CPU
- * access. On fault these entries are replaced with the original mapping after
- * calling MMU notifiers.
+ * Only anonymous non-hugetlb folios are supported and the VMA must have
+ * write permissions such that we can fault in the anonymous page writable
+ * in order to mark it exclusive. The caller must hold the mmap_lock in read
+ * mode.
  *
  * A driver using this to program access from a device must use a mmu notifier
  * critical section to hold a device specific lock during programming. Once
  * programming is complete it should drop the page lock and reference after
  * which point CPU access to the page will revoke the exclusive access.
+ *
+ * Returns: pointer to mapped page on success, otherwise a negative error.
  */
-int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
-				unsigned long end, struct page **pages,
-				void *owner)
+struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,
+		void *owner, struct folio **foliop)
 {
-	long npages = (end - start) >> PAGE_SHIFT;
-	long i;
+	struct folio *folio;
+	struct page *page;
+	long npages;
+
+	mmap_assert_locked(mm);
 
-	npages = get_user_pages_remote(mm, start, npages,
+	/*
+	 * Fault in the page writable and try to lock it; note that if the
+	 * address would already be marked for exclusive use by the device,
+	 * the GUP call would undo that first by triggering a fault.
+	 */
+	npages = get_user_pages_remote(mm, addr, 1,
 				       FOLL_GET | FOLL_WRITE | FOLL_SPLIT_PMD,
-				       pages, NULL);
-	if (npages < 0)
-		return npages;
-
-	for (i = 0; i < npages; i++, start += PAGE_SIZE) {
-		struct folio *folio = page_folio(pages[i]);
-		if (PageTail(pages[i]) || !folio_trylock(folio)) {
-			folio_put(folio);
-			pages[i] = NULL;
-			continue;
-		}
+				       &page, NULL);
+	if (npages != 1)
+		return ERR_PTR(npages);
+	folio = page_folio(page);
 
-		if (!folio_make_device_exclusive(folio, mm, start, owner)) {
-			folio_unlock(folio);
-			folio_put(folio);
-			pages[i] = NULL;
-		}
+	if (!folio_test_anon(folio) || folio_test_hugetlb(folio)) {
+		folio_put(folio);
+		return ERR_PTR(-EOPNOTSUPP);
+	}
+
+	if (!folio_trylock(folio)) {
+		folio_put(folio);
+		return ERR_PTR(-EBUSY);
 	}
 
-	return npages;
+	if (!folio_make_device_exclusive(folio, mm, addr, owner)) {
+		folio_unlock(folio);
+		folio_put(folio);
+		return ERR_PTR(-EBUSY);
+	}
+	*foliop = folio;
+	return page;
 }
-EXPORT_SYMBOL_GPL(make_device_exclusive_range);
+EXPORT_SYMBOL_GPL(make_device_exclusive);
 #endif
 
 void __put_anon_vma(struct anon_vma *anon_vma)

From patchwork Wed Jan 29 11:54:02 2025
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13953694
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-doc@vger.kernel.org, dri-devel@lists.freedesktop.org, linux-mm@kvack.org, nouveau@lists.freedesktop.org, David Hildenbrand, Andrew Morton, Jérôme Glisse, Jonathan Corbet, Alex Shi, Yanteng Si, Karol Herbst, Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter, "Liam R. Howlett", Lorenzo Stoakes, Vlastimil Babka, Jann Horn, Pasha Tatashin, Peter Xu, Alistair Popple, Jason Gunthorpe
Subject: [PATCH v1 04/12] mm/rmap: implement make_device_exclusive() using folio_walk instead of rmap walk
Date: Wed, 29 Jan 2025 12:54:02 +0100
Message-ID: <20250129115411.2077152-5-david@redhat.com>
In-Reply-To: <20250129115411.2077152-1-david@redhat.com>
References: <20250129115411.2077152-1-david@redhat.com>

We require a writable PTE and only support anonymous folios: we can only
have exactly one PTE pointing at that page, which we can just look up using
a folio walk, avoiding the rmap walk and the anon VMA lock.

So let's stop doing an rmap walk and perform a folio walk instead, so we
can easily just modify a single PTE and avoid relying on rmap/mapcounts.

We now effectively work on a single PTE instead of multiple PTEs of a large
folio, allowing for conversion of individual PTEs from non-exclusive to
device-exclusive -- note that the other way always worked on single PTEs.

We can drop the MMU_NOTIFY_EXCLUSIVE MMU notifier call and document why
that is not required: GUP will already take care of the
MMU_NOTIFY_EXCLUSIVE call if required (there is already a device-exclusive
entry) when not finding a present PTE and having to trigger a fault and
ending up in remove_device_exclusive_entry().

Note that the PTE is always writable, and we can always create a
writable-device-exclusive entry.

With this change, device-exclusive is fully compatible with THPs / large
folios. We still require PMD-sized THPs to get PTE-mapped, and supporting
PMD-mapped THP (without the PTE-remapping) is a different endeavour that
might not be worth it at this point.

This gets rid of the "folio_mapcount()" usage and lets us fix ordinary rmap
walks (migration/swapout) next. Spell out that messing with the mapcount is
wrong and must be fixed.
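For context, a simplified sketch of the folio_walk pattern this change
relies on (the helper name is hypothetical; it roughly mirrors the re-check
performed after GUP in the diff below, where the page table lock is held
between folio_walk_start() and folio_walk_end()):

#include <linux/mm.h>
#include <linux/pagewalk.h>

/* Does @addr in @vma currently map @page through a writable PTE? */
static bool addr_maps_page_writable(struct vm_area_struct *vma,
				    unsigned long addr, struct page *page)
{
	struct folio_walk fw;
	struct folio *folio;
	bool ret;

	/* Caller holds the mmap_lock; the PTL is taken by folio_walk_start(). */
	folio = folio_walk_start(&fw, vma, addr, 0);
	if (!folio)
		return false;	/* nothing present is mapped at addr */

	ret = fw.page == page && fw.level == FW_LEVEL_PTE && pte_write(fw.pte);
	folio_walk_end(&fw, vma);
	return ret;
}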
Signed-off-by: David Hildenbrand
---
 mm/rmap.c | 188 ++++++++++++++++--------------------------------
 1 file changed, 55 insertions(+), 133 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 676df4fba5b0..49ffac6d27f8 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2375,131 +2375,6 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
 }
 
 #ifdef CONFIG_DEVICE_PRIVATE
-struct make_exclusive_args {
-	struct mm_struct *mm;
-	unsigned long address;
-	void *owner;
-	bool valid;
-};
-
-static bool page_make_device_exclusive_one(struct folio *folio,
-		struct vm_area_struct *vma, unsigned long address, void *priv)
-{
-	struct mm_struct *mm = vma->vm_mm;
-	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
-	struct make_exclusive_args *args = priv;
-	pte_t pteval;
-	struct page *subpage;
-	bool ret = true;
-	struct mmu_notifier_range range;
-	swp_entry_t entry;
-	pte_t swp_pte;
-	pte_t ptent;
-
-	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
-				      vma->vm_mm, address, min(vma->vm_end,
-				      address + folio_size(folio)),
-				      args->owner);
-	mmu_notifier_invalidate_range_start(&range);
-
-	while (page_vma_mapped_walk(&pvmw)) {
-		/* Unexpected PMD-mapped THP? */
-		VM_BUG_ON_FOLIO(!pvmw.pte, folio);
-
-		ptent = ptep_get(pvmw.pte);
-		if (!pte_present(ptent)) {
-			ret = false;
-			page_vma_mapped_walk_done(&pvmw);
-			break;
-		}
-
-		subpage = folio_page(folio,
-				pte_pfn(ptent) - folio_pfn(folio));
-		address = pvmw.address;
-
-		/* Nuke the page table entry. */
-		flush_cache_page(vma, address, pte_pfn(ptent));
-		pteval = ptep_clear_flush(vma, address, pvmw.pte);
-
-		/* Set the dirty flag on the folio now the pte is gone. */
-		if (pte_dirty(pteval))
-			folio_mark_dirty(folio);
-
-		/*
-		 * Check that our target page is still mapped at the expected
-		 * address.
-		 */
-		if (args->mm == mm && args->address == address &&
-		    pte_write(pteval))
-			args->valid = true;
-
-		/*
-		 * Store the pfn of the page in a special migration
-		 * pte. do_swap_page() will wait until the migration
-		 * pte is removed and then restart fault handling.
-		 */
-		if (pte_write(pteval))
-			entry = make_writable_device_exclusive_entry(
-							page_to_pfn(subpage));
-		else
-			entry = make_readable_device_exclusive_entry(
-							page_to_pfn(subpage));
-		swp_pte = swp_entry_to_pte(entry);
-		if (pte_soft_dirty(pteval))
-			swp_pte = pte_swp_mksoft_dirty(swp_pte);
-		if (pte_uffd_wp(pteval))
-			swp_pte = pte_swp_mkuffd_wp(swp_pte);
-
-		set_pte_at(mm, address, pvmw.pte, swp_pte);
-
-		/*
-		 * There is a reference on the page for the swap entry which has
-		 * been removed, so shouldn't take another.
-		 */
-		folio_remove_rmap_pte(folio, subpage, vma);
-	}
-
-	mmu_notifier_invalidate_range_end(&range);
-
-	return ret;
-}
-
-/**
- * folio_make_device_exclusive - Mark the folio exclusively owned by a device.
- * @folio: The folio to replace page table entries for.
- * @mm: The mm_struct where the folio is expected to be mapped.
- * @address: Address where the folio is expected to be mapped.
- * @owner: passed to MMU_NOTIFY_EXCLUSIVE range notifier callbacks
- *
- * Tries to remove all the page table entries which are mapping this
- * folio and replace them with special device exclusive swap entries to
- * grant a device exclusive access to the folio.
- *
- * Context: Caller must hold the folio lock.
- * Return: false if the page is still mapped, or if it could not be unmapped
- * from the expected address. Otherwise returns true (success).
- */
-static bool folio_make_device_exclusive(struct folio *folio,
-		struct mm_struct *mm, unsigned long address, void *owner)
-{
-	struct make_exclusive_args args = {
-		.mm = mm,
-		.address = address,
-		.owner = owner,
-		.valid = false,
-	};
-	struct rmap_walk_control rwc = {
-		.rmap_one = page_make_device_exclusive_one,
-		.done = folio_not_mapped,
-		.anon_lock = folio_lock_anon_vma_read,
-		.arg = &args,
-	};
-
-	rmap_walk(folio, &rwc);
-
-	return args.valid && !folio_mapcount(folio);
-}
-
 /**
  * make_device_exclusive() - Mark an address for exclusive use by a device
  * @mm: mm_struct of associated target process
@@ -2530,9 +2405,12 @@ static bool folio_make_device_exclusive(struct folio *folio,
 struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,
 		void *owner, struct folio **foliop)
 {
-	struct folio *folio;
+	struct folio *folio, *fw_folio;
+	struct vm_area_struct *vma;
+	struct folio_walk fw;
 	struct page *page;
-	long npages;
+	swp_entry_t entry;
+	pte_t swp_pte;
 
 	mmap_assert_locked(mm);
 
@@ -2540,12 +2418,16 @@ struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,
 	 * Fault in the page writable and try to lock it; note that if the
 	 * address would already be marked for exclusive use by the device,
 	 * the GUP call would undo that first by triggering a fault.
+	 *
+	 * If any other device would already map this page exclusively, the
+	 * fault will trigger a conversion to an ordinary
+	 * (non-device-exclusive) PTE and issue a MMU_NOTIFY_EXCLUSIVE.
 	 */
-	npages = get_user_pages_remote(mm, addr, 1,
-				       FOLL_GET | FOLL_WRITE | FOLL_SPLIT_PMD,
-				       &page, NULL);
-	if (npages != 1)
-		return ERR_PTR(npages);
+	page = get_user_page_vma_remote(mm, addr,
+					FOLL_GET | FOLL_WRITE | FOLL_SPLIT_PMD,
+					&vma);
+	if (IS_ERR(page))
+		return page;
 	folio = page_folio(page);
 
 	if (!folio_test_anon(folio) || folio_test_hugetlb(folio)) {
@@ -2558,11 +2440,51 @@ struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,
 		return ERR_PTR(-EBUSY);
 	}
 
-	if (!folio_make_device_exclusive(folio, mm, addr, owner)) {
+	/*
+	 * Let's do a second walk and make sure we still find the same page
+	 * mapped writable. If we don't find what we expect, we will trigger
+	 * GUP again to fix it up. Note that a page of an anonymous folio can
+	 * only be mapped writable using exactly one page table mapping
+	 * ("exclusive"), so there cannot be other mappings.
+	 */
+	fw_folio = folio_walk_start(&fw, vma, addr, 0);
+	if (fw_folio != folio || fw.page != page ||
+	    fw.level != FW_LEVEL_PTE || !pte_write(fw.pte)) {
+		if (fw_folio)
+			folio_walk_end(&fw, vma);
 		folio_unlock(folio);
 		folio_put(folio);
 		return ERR_PTR(-EBUSY);
 	}
+
+	/* Nuke the page table entry so we get the uptodate dirty bit. */
+	flush_cache_page(vma, addr, page_to_pfn(page));
+	fw.pte = ptep_clear_flush(vma, addr, fw.ptep);
+
+	/* Set the dirty flag on the folio now the pte is gone. */
+	if (pte_dirty(fw.pte))
+		folio_mark_dirty(folio);
+
+	/*
+	 * Store the pfn of the page in a special device-exclusive non-swap pte.
+	 * do_swap_page() will trigger the conversion back while holding the
+	 * folio lock.
+	 */
+	entry = make_writable_device_exclusive_entry(page_to_pfn(page));
+	swp_pte = swp_entry_to_pte(entry);
+	if (pte_soft_dirty(fw.pte))
+		swp_pte = pte_swp_mksoft_dirty(swp_pte);
+	/* The pte is writable, uffd-wp does not apply. */
+	set_pte_at(mm, addr, fw.ptep, swp_pte);
+
+	/*
+	 * TODO: The device-exclusive non-swap PTE holds a folio reference but
+	 * does not count as a mapping (mapcount), which is wrong and must be
+	 * fixed, otherwise RMAP walks don't behave as expected.
+	 */
+	folio_remove_rmap_pte(folio, page, vma);
+
+	folio_walk_end(&fw, vma);
 	*foliop = folio;
 	return page;
 }

From patchwork Wed Jan 29 11:54:03 2025
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13953695
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-doc@vger.kernel.org, dri-devel@lists.freedesktop.org, linux-mm@kvack.org, nouveau@lists.freedesktop.org, David Hildenbrand, Andrew Morton, Jérôme Glisse, Jonathan Corbet, Alex Shi, Yanteng Si, Karol Herbst, Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter, "Liam R. Howlett", Lorenzo Stoakes, Vlastimil Babka, Jann Horn, Pasha Tatashin, Peter Xu, Alistair Popple, Jason Gunthorpe
Howlett" , Lorenzo Stoakes , Vlastimil Babka , Jann Horn , Pasha Tatashin , Peter Xu , Alistair Popple , Jason Gunthorpe Subject: [PATCH v1 05/12] mm/memory: detect writability in restore_exclusive_pte() through can_change_pte_writable() Date: Wed, 29 Jan 2025 12:54:03 +0100 Message-ID: <20250129115411.2077152-6-david@redhat.com> X-Mailer: git-send-email 2.48.1 In-Reply-To: <20250129115411.2077152-1-david@redhat.com> References: <20250129115411.2077152-1-david@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-MFC-PROC-ID: fPhxaeZ9XG4xq2vyYqtFMYWUUAU9Vl_hcAZEuCcjqwE_1738151668 X-Mimecast-Originator: redhat.com content-type: text/plain; charset="US-ASCII"; x-default=true X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Let's do it just like mprotect write-upgrade or during NUMA-hinting faults on PROT_NONE PTEs: detect if the PTE can be writable by using can_change_pte_writable(). Set the PTE only dirty if the folio is dirty: we might not necessarily have a write access, and setting the PTE writable doesn't require setting the PTE dirty. With this change in place, there is no need to have separate readable and writable device-exclusive entry types, and we'll merge them next separately. Note that, during fork(), we first convert the device-exclusive entries back to ordinary PTEs, and we only ever allow conversion of writable PTEs to device-exclusive -- only mprotect can currently change them to readable-device-exclusive. Consequently, we always expect PageAnonExclusive(page)==true and can_change_pte_writable()==true, unless we are dealing with soft-dirty tracking or uffd-wp. But reusing can_change_pte_writable() for now is cleaner. 
Signed-off-by: David Hildenbrand
---
 mm/memory.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 03efeeef895a..db38d6ae4e74 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -725,18 +725,21 @@ static void restore_exclusive_pte(struct vm_area_struct *vma,
 	struct folio *folio = page_folio(page);
 	pte_t orig_pte;
 	pte_t pte;
-	swp_entry_t entry;
 
 	orig_pte = ptep_get(ptep);
 	pte = pte_mkold(mk_pte(page, READ_ONCE(vma->vm_page_prot)));
 	if (pte_swp_soft_dirty(orig_pte))
 		pte = pte_mksoft_dirty(pte);
 
-	entry = pte_to_swp_entry(orig_pte);
 	if (pte_swp_uffd_wp(orig_pte))
 		pte = pte_mkuffd_wp(pte);
-	else if (is_writable_device_exclusive_entry(entry))
-		pte = maybe_mkwrite(pte_mkdirty(pte), vma);
+
+	if ((vma->vm_flags & VM_WRITE) &&
+	    can_change_pte_writable(vma, address, pte)) {
+		if (folio_test_dirty(folio))
+			pte = pte_mkdirty(pte);
+		pte = pte_mkwrite(pte, vma);
+	}
 
 	VM_BUG_ON_FOLIO(pte_write(pte) && (!folio_test_anon(folio) &&
 					   PageAnonExclusive(page)), folio);

From patchwork Wed Jan 29 11:54:04 2025
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13953696
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-doc@vger.kernel.org, dri-devel@lists.freedesktop.org, linux-mm@kvack.org, nouveau@lists.freedesktop.org, David Hildenbrand, Andrew Morton, Jérôme Glisse, Jonathan Corbet, Alex Shi, Yanteng Si, Karol Herbst, Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter, "Liam R. Howlett", Lorenzo Stoakes, Vlastimil Babka, Jann Horn, Pasha Tatashin, Peter Xu, Alistair Popple, Jason Gunthorpe
Subject: [PATCH v1 06/12] mm: use single SWP_DEVICE_EXCLUSIVE entry type
Date: Wed, 29 Jan 2025 12:54:04 +0100
Message-ID: <20250129115411.2077152-7-david@redhat.com>
In-Reply-To: <20250129115411.2077152-1-david@redhat.com>
References: <20250129115411.2077152-1-david@redhat.com>

There is no need for the distinction anymore; let's merge the readable and
writable device-exclusive entries into a single device-exclusive entry
type.
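Usage after the merge, as a small sketch (the helper names are
hypothetical; the real producer/consumer sites are in the diff below and in
do_swap_page()):

#include <linux/swapops.h>

/* Build the (single) device-exclusive PTE for a page. */
static pte_t sketch_make_exclusive_pte(struct page *page)
{
	swp_entry_t entry = make_device_exclusive_entry(page_to_pfn(page));

	return swp_entry_to_pte(entry);
}

/* Detect it again; writability is re-derived on restore (previous patch). */
static bool sketch_is_exclusive_pte(pte_t pte)
{
	return is_swap_pte(pte) &&
	       is_device_exclusive_entry(pte_to_swp_entry(pte));
}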
Signed-off-by: David Hildenbrand
Acked-by: Simona Vetter
Reviewed-by: Alistair Popple
---
 include/linux/swap.h    |  7 +++----
 include/linux/swapops.h | 27 ++++-----------------------
 mm/mprotect.c           |  8 --------
 mm/page_table_check.c   |  5 ++---
 mm/rmap.c               |  2 +-
 5 files changed, 10 insertions(+), 39 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 91b30701274e..9a48e79a0a52 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -74,14 +74,13 @@ static inline int current_is_kswapd(void)
  * to a special SWP_DEVICE_{READ|WRITE} entry.
  *
  * When a page is mapped by the device for exclusive access we set the CPU page
- * table entries to special SWP_DEVICE_EXCLUSIVE_* entries.
+ * table entries to a special SWP_DEVICE_EXCLUSIVE entry.
  */
 #ifdef CONFIG_DEVICE_PRIVATE
-#define SWP_DEVICE_NUM 4
+#define SWP_DEVICE_NUM 3
 #define SWP_DEVICE_WRITE (MAX_SWAPFILES+SWP_HWPOISON_NUM+SWP_MIGRATION_NUM)
 #define SWP_DEVICE_READ (MAX_SWAPFILES+SWP_HWPOISON_NUM+SWP_MIGRATION_NUM+1)
-#define SWP_DEVICE_EXCLUSIVE_WRITE (MAX_SWAPFILES+SWP_HWPOISON_NUM+SWP_MIGRATION_NUM+2)
-#define SWP_DEVICE_EXCLUSIVE_READ (MAX_SWAPFILES+SWP_HWPOISON_NUM+SWP_MIGRATION_NUM+3)
+#define SWP_DEVICE_EXCLUSIVE (MAX_SWAPFILES+SWP_HWPOISON_NUM+SWP_MIGRATION_NUM+2)
 #else
 #define SWP_DEVICE_NUM 0
 #endif

diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 96f26e29fefe..64ea151a7ae3 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -186,26 +186,16 @@ static inline bool is_writable_device_private_entry(swp_entry_t entry)
 	return unlikely(swp_type(entry) == SWP_DEVICE_WRITE);
 }
 
-static inline swp_entry_t make_readable_device_exclusive_entry(pgoff_t offset)
+static inline swp_entry_t make_device_exclusive_entry(pgoff_t offset)
 {
-	return swp_entry(SWP_DEVICE_EXCLUSIVE_READ, offset);
-}
-
-static inline swp_entry_t make_writable_device_exclusive_entry(pgoff_t offset)
-{
-	return swp_entry(SWP_DEVICE_EXCLUSIVE_WRITE, offset);
+	return swp_entry(SWP_DEVICE_EXCLUSIVE, offset);
 }
 
 static inline bool is_device_exclusive_entry(swp_entry_t entry)
 {
-	return swp_type(entry) == SWP_DEVICE_EXCLUSIVE_READ ||
-	       swp_type(entry) == SWP_DEVICE_EXCLUSIVE_WRITE;
+	return swp_type(entry) == SWP_DEVICE_EXCLUSIVE;
 }
 
-static inline bool is_writable_device_exclusive_entry(swp_entry_t entry)
-{
-	return unlikely(swp_type(entry) == SWP_DEVICE_EXCLUSIVE_WRITE);
-}
 #else /* CONFIG_DEVICE_PRIVATE */
 static inline swp_entry_t make_readable_device_private_entry(pgoff_t offset)
 {
@@ -227,12 +217,7 @@ static inline bool is_writable_device_private_entry(swp_entry_t entry)
 	return false;
 }
 
-static inline swp_entry_t make_readable_device_exclusive_entry(pgoff_t offset)
-{
-	return swp_entry(0, 0);
-}
-
-static inline swp_entry_t make_writable_device_exclusive_entry(pgoff_t offset)
+static inline swp_entry_t make_device_exclusive_entry(pgoff_t offset)
 {
 	return swp_entry(0, 0);
 }
@@ -242,10 +227,6 @@ static inline bool is_device_exclusive_entry(swp_entry_t entry)
 	return false;
 }
 
-static inline bool is_writable_device_exclusive_entry(swp_entry_t entry)
-{
-	return false;
-}
 #endif /* CONFIG_DEVICE_PRIVATE */
 
 #ifdef CONFIG_MIGRATION

diff --git a/mm/mprotect.c b/mm/mprotect.c
index 516b1d847e2c..9cb6ab7c4048 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -225,14 +225,6 @@ static long change_pte_range(struct mmu_gather *tlb,
 				newpte = swp_entry_to_pte(entry);
 				if (pte_swp_uffd_wp(oldpte))
 					newpte = pte_swp_mkuffd_wp(newpte);
-			} else if (is_writable_device_exclusive_entry(entry)) {
-				entry = make_readable_device_exclusive_entry(
-							swp_offset(entry));
-				newpte = swp_entry_to_pte(entry);
-				if (pte_swp_soft_dirty(oldpte))
-					newpte = pte_swp_mksoft_dirty(newpte);
-				if (pte_swp_uffd_wp(oldpte))
-					newpte = pte_swp_mkuffd_wp(newpte);
 			} else if (is_pte_marker_entry(entry)) {
 				/*
 				 * Ignore error swap entries unconditionally,

diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index 509c6ef8de40..c2b3600429a0 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -196,9 +196,8 @@ EXPORT_SYMBOL(__page_table_check_pud_clear);
 /* Whether the swap entry cached writable information */
 static inline bool swap_cached_writable(swp_entry_t entry)
 {
-	return is_writable_device_exclusive_entry(entry) ||
-	       is_writable_device_private_entry(entry) ||
-	       is_writable_migration_entry(entry);
+	return is_writable_device_private_entry(entry) ||
+	       is_writable_migration_entry(entry);
 }
 
 static inline void page_table_check_pte_flags(pte_t pte)

diff --git a/mm/rmap.c b/mm/rmap.c
index 49ffac6d27f8..65d9bbea16d0 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2470,7 +2470,7 @@ struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,
 	 * do_swap_page() will trigger the conversion back while holding the
 	 * folio lock.
 	 */
-	entry = make_writable_device_exclusive_entry(page_to_pfn(page));
+	entry = make_device_exclusive_entry(page_to_pfn(page));
 	swp_pte = swp_entry_to_pte(entry);
 	if (pte_soft_dirty(fw.pte))
 		swp_pte = pte_swp_mksoft_dirty(swp_pte);

From patchwork Wed Jan 29 11:54:05 2025
From: David Hildenbrand
Subject: [PATCH v1 07/12] mm/page_vma_mapped: device-private entries are not
 migration entries
Date: Wed, 29 Jan 2025 12:54:05 +0100
Message-ID: <20250129115411.2077152-8-david@redhat.com>
In-Reply-To: <20250129115411.2077152-1-david@redhat.com>

It's unclear why they would be considered migration entries; they are not.

Likely we'll never really trigger that case in practice, because migration
(including folio split) of a folio that has device-private entries is never
started, as we would detect "additional references": device-private entries
adjust the mapcount, but not the refcount.

Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Signed-off-by: David Hildenbrand
Reviewed-by: Alistair Popple
---
 mm/page_vma_mapped.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 81839a9e74f1..32679be22d30 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -111,8 +111,7 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
 			return false;
 		entry = pte_to_swp_entry(ptent);
 
-		if (!is_migration_entry(entry) &&
-		    !is_device_exclusive_entry(entry))
+		if (!is_migration_entry(entry))
 			return false;
 
 		pfn = swp_offset_pfn(entry);
From patchwork Wed Jan 29 11:54:06 2025
From: David Hildenbrand
Subject: [PATCH v1 08/12] mm/rmap: handle device-exclusive entries correctly
 in try_to_unmap_one()
Date: Wed, 29 Jan 2025 12:54:06 +0100
Message-ID: <20250129115411.2077152-9-david@redhat.com>
In-Reply-To: <20250129115411.2077152-1-david@redhat.com>

Ever since commit b756a3b5e7ea ("mm: device exclusive memory access") we can
return with a device-exclusive entry from page_vma_mapped_walk().
try_to_unmap_one() is not prepared for that, so teach it about these
non-present nonswap PTEs.

Before that, could we also have triggered this case with device-private
entries? Unlikely.

Note that we could currently only run into this case with device-exclusive
entries on THPs. For order-0 folios, we still adjust the mapcount on
conversion to device-exclusive, making the rmap walk abort early
(folio_mapcount() == 0 and breaking swapout). We'll fix that next, now that
try_to_unmap_one() can handle it.

Further note that try_to_unmap() calls MMU notifiers and holds the folio
lock, so any device-exclusive users should be properly prepared for this
device-exclusive PTE to "vanish".

Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Signed-off-by: David Hildenbrand
---
 mm/rmap.c | 53 ++++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 40 insertions(+), 13 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 65d9bbea16d0..12900f367a2a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1648,9 +1648,9 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
+	bool anon_exclusive, ret = true;
 	pte_t pteval;
 	struct page *subpage;
-	bool anon_exclusive, ret = true;
 	struct mmu_notifier_range range;
 	enum ttu_flags flags = (enum ttu_flags)(long)arg;
 	unsigned long pfn;
@@ -1722,7 +1722,19 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 		/* Unexpected PMD-mapped THP? */
 		VM_BUG_ON_FOLIO(!pvmw.pte, folio);
 
-		pfn = pte_pfn(ptep_get(pvmw.pte));
+		/*
+		 * We can end up here with selected non-swap entries that
+		 * actually map pages similar to PROT_NONE; see
+		 * page_vma_mapped_walk()->check_pte().
+ */ + pteval = ptep_get(pvmw.pte); + if (likely(pte_present(pteval))) { + pfn = pte_pfn(pteval); + } else { + pfn = swp_offset_pfn(pte_to_swp_entry(pteval)); + VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio); + } + subpage = folio_page(folio, pfn - folio_pfn(folio)); address = pvmw.address; anon_exclusive = folio_test_anon(folio) && @@ -1778,7 +1790,9 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma, hugetlb_vma_unlock_write(vma); } pteval = huge_ptep_clear_flush(vma, address, pvmw.pte); - } else { + if (pte_dirty(pteval)) + folio_mark_dirty(folio); + } else if (likely(pte_present(pteval))) { flush_cache_page(vma, address, pfn); /* Nuke the page table entry. */ if (should_defer_flush(mm, flags)) { @@ -1796,6 +1810,10 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma, } else { pteval = ptep_clear_flush(vma, address, pvmw.pte); } + if (pte_dirty(pteval)) + folio_mark_dirty(folio); + } else { + pte_clear(mm, address, pvmw.pte); } /* @@ -1805,10 +1823,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma, */ pte_install_uffd_wp_if_needed(vma, address, pvmw.pte, pteval); - /* Set the dirty flag on the folio now the pte is gone. */ - if (pte_dirty(pteval)) - folio_mark_dirty(folio); - /* Update high watermark before we lower rss */ update_hiwater_rss(mm); @@ -1822,8 +1836,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma, dec_mm_counter(mm, mm_counter(folio)); set_pte_at(mm, address, pvmw.pte, pteval); } - - } else if (pte_unused(pteval) && !userfaultfd_armed(vma)) { + } else if (likely(pte_present(pteval)) && pte_unused(pteval) && + !userfaultfd_armed(vma)) { /* * The guest indicated that the page content is of no * interest anymore. Simply discard the pte, vmscan @@ -1902,6 +1916,12 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma, set_pte_at(mm, address, pvmw.pte, pteval); goto walk_abort; } + + /* + * arch_unmap_one() is expected to be a NOP on + * architectures where we could have non-swp entries + * here, so we'll not check/care. 
+		 */
 		if (arch_unmap_one(mm, vma, address, pteval) < 0) {
 			swap_free(entry);
 			set_pte_at(mm, address, pvmw.pte, pteval);
@@ -1926,10 +1946,17 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			swp_pte = swp_entry_to_pte(entry);
 			if (anon_exclusive)
 				swp_pte = pte_swp_mkexclusive(swp_pte);
-			if (pte_soft_dirty(pteval))
-				swp_pte = pte_swp_mksoft_dirty(swp_pte);
-			if (pte_uffd_wp(pteval))
-				swp_pte = pte_swp_mkuffd_wp(swp_pte);
+			if (likely(pte_present(pteval))) {
+				if (pte_soft_dirty(pteval))
+					swp_pte = pte_swp_mksoft_dirty(swp_pte);
+				if (pte_uffd_wp(pteval))
+					swp_pte = pte_swp_mkuffd_wp(swp_pte);
+			} else {
+				if (pte_swp_soft_dirty(pteval))
+					swp_pte = pte_swp_mksoft_dirty(swp_pte);
+				if (pte_swp_uffd_wp(pteval))
+					swp_pte = pte_swp_mkuffd_wp(swp_pte);
+			}
 			set_pte_at(mm, address, pvmw.pte, swp_pte);
 		} else {
 			/*
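The same dispatch pattern recurs throughout this and the following patches: a
page_vma_mapped_walk() hit may be a present PTE or a non-present
device-exclusive entry, and the pfn and the soft-dirty style bits have to be
read differently in the two cases. A minimal userspace model of that split,
with simplified stand-in types rather than the kernel's pte helpers:

/* Toy model of the present vs. non-present PTE dispatch. */
#include <stdbool.h>
#include <stdio.h>

struct toy_pte {
	bool present;
	unsigned long pfn;		/* valid if present */
	unsigned long swp_offset;	/* encodes the pfn if !present */
	bool soft_dirty;		/* present-PTE bit */
	bool swp_soft_dirty;		/* swap-PTE bit */
};

static unsigned long toy_pte_to_pfn(const struct toy_pte *pte)
{
	/* like pte_pfn() vs. swp_offset_pfn(pte_to_swp_entry()) */
	return pte->present ? pte->pfn : pte->swp_offset;
}

static bool toy_pte_soft_dirty(const struct toy_pte *pte)
{
	/* present and non-present PTEs keep the bit in different places */
	return pte->present ? pte->soft_dirty : pte->swp_soft_dirty;
}

int main(void)
{
	struct toy_pte present = { .present = true, .pfn = 42, .soft_dirty = true };
	struct toy_pte excl = { .present = false, .swp_offset = 42, .swp_soft_dirty = true };

	printf("present:   pfn=%lu soft_dirty=%d\n",
	       toy_pte_to_pfn(&present), toy_pte_soft_dirty(&present));
	printf("exclusive: pfn=%lu soft_dirty=%d\n",
	       toy_pte_to_pfn(&excl), toy_pte_soft_dirty(&excl));
	return 0;
}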
From patchwork Wed Jan 29 11:54:07 2025
From: David Hildenbrand
Subject: [PATCH v1 09/12] mm/rmap: handle device-exclusive entries correctly
 in try_to_migrate_one()
Date: Wed, 29 Jan 2025 12:54:07 +0100
Message-ID: <20250129115411.2077152-10-david@redhat.com>
In-Reply-To: <20250129115411.2077152-1-david@redhat.com>

Ever since commit b756a3b5e7ea ("mm: device exclusive memory access") we can
return with a device-exclusive entry from page_vma_mapped_walk().
try_to_migrate_one() is not prepared for that, so teach it about these
non-present nonswap PTEs.

We already handle device-private entries by specializing on the folio, so we
can reshuffle that code to make it work on the non-present nonswap PTEs
instead. Get rid of most folio_is_device_private() handling, except when
handling HWPoison. It's unclear what the right thing to do here is.
Note that we could currently only run into this case with device-exclusive entries on THPs; but as we have a refcount vs. mapcount inbalance, folio splitting etc. will just bail out early and not even try migrating. For order-0 folios, we still adjust the mapcount on conversion to device-exclusive, making the rmap walk abort early (folio_mapcount() == 0 and breaking swapout). We'll fix that next, now that try_to_migrate_one() can handle it. Further note that try_to_migrate() calls MMU notifiers and holds the folio lock, so any device-exclusive users should be properly prepared for this device-exclusive PTE to "vanish". Fixes: b756a3b5e7ea ("mm: device exclusive memory access") Signed-off-by: David Hildenbrand --- mm/rmap.c | 125 ++++++++++++++++++++++-------------------------------- 1 file changed, 51 insertions(+), 74 deletions(-) diff --git a/mm/rmap.c b/mm/rmap.c index 12900f367a2a..903a78e60781 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -2040,9 +2040,9 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma, { struct mm_struct *mm = vma->vm_mm; DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0); + bool anon_exclusive, writable, ret = true; pte_t pteval; struct page *subpage; - bool anon_exclusive, ret = true; struct mmu_notifier_range range; enum ttu_flags flags = (enum ttu_flags)(long)arg; unsigned long pfn; @@ -2109,24 +2109,20 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma, /* Unexpected PMD-mapped THP? */ VM_BUG_ON_FOLIO(!pvmw.pte, folio); - pfn = pte_pfn(ptep_get(pvmw.pte)); - - if (folio_is_zone_device(folio)) { - /* - * Our PTE is a non-present device exclusive entry and - * calculating the subpage as for the common case would - * result in an invalid pointer. - * - * Since only PAGE_SIZE pages can currently be - * migrated, just set it to page. This will need to be - * changed when hugepage migrations to device private - * memory are supported. - */ - VM_BUG_ON_FOLIO(folio_nr_pages(folio) > 1, folio); - subpage = &folio->page; + /* + * We can end up here with selected non-swap entries that + * actually map pages similar to PROT_NONE; see + * page_vma_mapped_walk()->check_pte(). + */ + pteval = ptep_get(pvmw.pte); + if (likely(pte_present(pteval))) { + pfn = pte_pfn(pteval); } else { - subpage = folio_page(folio, pfn - folio_pfn(folio)); + pfn = swp_offset_pfn(pte_to_swp_entry(pteval)); + VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio); } + + subpage = folio_page(folio, pfn - folio_pfn(folio)); address = pvmw.address; anon_exclusive = folio_test_anon(folio) && PageAnonExclusive(subpage); @@ -2182,7 +2178,10 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma, } /* Nuke the hugetlb page table entry */ pteval = huge_ptep_clear_flush(vma, address, pvmw.pte); - } else { + if (pte_dirty(pteval)) + folio_mark_dirty(folio); + writable = pte_write(pteval); + } else if (likely(pte_present(pteval))) { flush_cache_page(vma, address, pfn); /* Nuke the page table entry. */ if (should_defer_flush(mm, flags)) { @@ -2200,54 +2199,21 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma, } else { pteval = ptep_clear_flush(vma, address, pvmw.pte); } + if (pte_dirty(pteval)) + folio_mark_dirty(folio); + writable = pte_write(pteval); + } else { + pte_clear(mm, address, pvmw.pte); + writable = is_writable_device_private_entry(pte_to_swp_entry(pteval)); } - /* Set the dirty flag on the folio now the pte is gone. 
*/ - if (pte_dirty(pteval)) - folio_mark_dirty(folio); + VM_WARN_ON_FOLIO(writable && folio_test_anon(folio) && + !anon_exclusive, folio); /* Update high watermark before we lower rss */ update_hiwater_rss(mm); - if (folio_is_device_private(folio)) { - unsigned long pfn = folio_pfn(folio); - swp_entry_t entry; - pte_t swp_pte; - - if (anon_exclusive) - WARN_ON_ONCE(folio_try_share_anon_rmap_pte(folio, - subpage)); - - /* - * Store the pfn of the page in a special migration - * pte. do_swap_page() will wait until the migration - * pte is removed and then restart fault handling. - */ - entry = pte_to_swp_entry(pteval); - if (is_writable_device_private_entry(entry)) - entry = make_writable_migration_entry(pfn); - else if (anon_exclusive) - entry = make_readable_exclusive_migration_entry(pfn); - else - entry = make_readable_migration_entry(pfn); - swp_pte = swp_entry_to_pte(entry); - - /* - * pteval maps a zone device page and is therefore - * a swap pte. - */ - if (pte_swp_soft_dirty(pteval)) - swp_pte = pte_swp_mksoft_dirty(swp_pte); - if (pte_swp_uffd_wp(pteval)) - swp_pte = pte_swp_mkuffd_wp(swp_pte); - set_pte_at(mm, pvmw.address, pvmw.pte, swp_pte); - trace_set_migration_pte(pvmw.address, pte_val(swp_pte), - folio_order(folio)); - /* - * No need to invalidate here it will synchronize on - * against the special swap migration pte. - */ - } else if (PageHWPoison(subpage)) { + if (PageHWPoison(subpage) && !folio_is_device_private(folio)) { pteval = swp_entry_to_pte(make_hwpoison_entry(subpage)); if (folio_test_hugetlb(folio)) { hugetlb_count_sub(folio_nr_pages(folio), mm); @@ -2257,8 +2223,8 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma, dec_mm_counter(mm, mm_counter(folio)); set_pte_at(mm, address, pvmw.pte, pteval); } - - } else if (pte_unused(pteval) && !userfaultfd_armed(vma)) { + } else if (likely(pte_present(pteval)) && pte_unused(pteval) && + !userfaultfd_armed(vma)) { /* * The guest indicated that the page content is of no * interest anymore. Simply discard the pte, vmscan @@ -2274,6 +2240,11 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma, swp_entry_t entry; pte_t swp_pte; + /* + * arch_unmap_one() is expected to be a NOP on + * architectures where we could have non-swp entries + * here. + */ if (arch_unmap_one(mm, vma, address, pteval) < 0) { if (folio_test_hugetlb(folio)) set_huge_pte_at(mm, address, pvmw.pte, @@ -2284,8 +2255,6 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma, page_vma_mapped_walk_done(&pvmw); break; } - VM_BUG_ON_PAGE(pte_write(pteval) && folio_test_anon(folio) && - !anon_exclusive, subpage); /* See folio_try_share_anon_rmap_pte(): clear PTE first. */ if (folio_test_hugetlb(folio)) { @@ -2310,7 +2279,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma, * pte. do_swap_page() will wait until the migration * pte is removed and then restart fault handling. 
 		 */
-		if (pte_write(pteval))
+		if (writable)
 			entry = make_writable_migration_entry(
 						page_to_pfn(subpage));
 		else if (anon_exclusive)
@@ -2319,15 +2288,23 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 		else
 			entry = make_readable_migration_entry(
 						page_to_pfn(subpage));
-		if (pte_young(pteval))
-			entry = make_migration_entry_young(entry);
-		if (pte_dirty(pteval))
-			entry = make_migration_entry_dirty(entry);
-		swp_pte = swp_entry_to_pte(entry);
-		if (pte_soft_dirty(pteval))
-			swp_pte = pte_swp_mksoft_dirty(swp_pte);
-		if (pte_uffd_wp(pteval))
-			swp_pte = pte_swp_mkuffd_wp(swp_pte);
+		if (likely(pte_present(pteval))) {
+			if (pte_young(pteval))
+				entry = make_migration_entry_young(entry);
+			if (pte_dirty(pteval))
+				entry = make_migration_entry_dirty(entry);
+			swp_pte = swp_entry_to_pte(entry);
+			if (pte_soft_dirty(pteval))
+				swp_pte = pte_swp_mksoft_dirty(swp_pte);
+			if (pte_uffd_wp(pteval))
+				swp_pte = pte_swp_mkuffd_wp(swp_pte);
+		} else {
+			swp_pte = swp_entry_to_pte(entry);
+			if (pte_swp_soft_dirty(pteval))
+				swp_pte = pte_swp_mksoft_dirty(swp_pte);
+			if (pte_swp_uffd_wp(pteval))
+				swp_pte = pte_swp_mkuffd_wp(swp_pte);
+		}
 		if (folio_test_hugetlb(folio))
 			set_huge_pte_at(mm, address, pvmw.pte, swp_pte,
 					hsz);
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=mNM6Zw+nv5CKX0FXtzXMADzAxRUJZ/HteqC2IEBjnh8=; b=vtSzJ3MmmjV+5zvDzw8ukS+/N4XYtcmokK/2fF8GdocpLErboQ1fpAyovTBN8x+fuY Twqufq1gDFlwRQ+pnVTpeDe/sA7DAqe/bnf8UkksFBdFvkO8ecsnbtILG2lbIKAz3jG1 qku5GKbwxmEy8P2eh5hZZZO9hcoZltXwEvALDajXSgzoEIEeTIJiopCgSxqkzzPyxGnh sVDt4Yik0D6D+EmhhZuI+kGXa3KcKr6CcrsdKHGBvGYwzq7KBmwOi4OBPFxS3TDf0HLF KRTg2TfZVC4bic9lOJdwNFhIG7497a7xZtpQhbJJi0MZvZv7bsVovK1TiaZwCZ2TItgQ Us+g== X-Forwarded-Encrypted: i=1; AJvYcCWeM67Fl5BZH8BtseFeRwV8Z78kIkkaudWTbGtmOJWChM02ReB6Yn3XsJvto23XhjwyaI7HdUSW+NI=@lists.freedesktop.org X-Gm-Message-State: AOJu0YxRUmXzCrCW3wiRzjKVFnT2dTNWHUO3BJFljNrSFmLbMCePDwYE 3pVX7zshEimOttF0QYGnCFtxQCzeVcbaQ8b+V7rC/Eaad5w+xlvDi4cIYWH5Mn3m9HFsYHgPYYE qvjSjKej4+YjP3ZCWvmYw/NpXAVw9j4nBslG2Z1DZpx9kKGq/hpVZnHMQzmjIvDcmOA== X-Gm-Gg: ASbGncsmxJIOioRKr6VHUg1NMPu2M++SK2ziEIRJPt/DHnOcabulsPQWIk2zZplcB0o sbD6/xpzbj11ULZm4JgQYWJlj3/hY0NnD/fsqdfMwT0f3f6NSStJ7qYJbnv+Wi//Jn5C/OYyBf8 N4zkC2zBvrZrn+XxyNHGO3vDC8HiUSUk6SqswI42b0cDyYmpPAHm/72S7OYR+Mbo0vQ9aWQoO8e Ig8gn2WktmZDEN13ptgQwiZqIVX1t1w+k6C78x2sDDSRtYFA7Wr3jD2AJm8qKqQ0YCGgddt0huS VnRWkF/hVLE2dgch3083uaNHnIYWEhOy+PqlXIPhQjFrDScdg32yX9RKWD03KMT7VA== X-Received: by 2002:a05:6000:4013:b0:385:f631:612 with SMTP id ffacd0b85a97d-38c5195f2e5mr2414991f8f.17.1738151682132; Wed, 29 Jan 2025 03:54:42 -0800 (PST) X-Google-Smtp-Source: AGHT+IEfVb9Gb5/TQQdvQH0d8Y86brLumLHRijGlRqbPqwAUyE2LkrZ873cAyk0+z1AQr8yjahg13A== X-Received: by 2002:a05:6000:4013:b0:385:f631:612 with SMTP id ffacd0b85a97d-38c5195f2e5mr2414952f8f.17.1738151681703; Wed, 29 Jan 2025 03:54:41 -0800 (PST) Received: from localhost (p200300cbc7053b0064b867195794bf13.dip0.t-ipconnect.de. [2003:cb:c705:3b00:64b8:6719:5794:bf13]) by smtp.gmail.com with UTF8SMTPSA id ffacd0b85a97d-38c2a1c4212sm16316119f8f.87.2025.01.29.03.54.39 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Wed, 29 Jan 2025 03:54:41 -0800 (PST) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: linux-doc@vger.kernel.org, dri-devel@lists.freedesktop.org, linux-mm@kvack.org, nouveau@lists.freedesktop.org, David Hildenbrand , Andrew Morton , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Jonathan Corbet , Alex Shi , Yanteng Si , Karol Herbst , Lyude Paul , Danilo Krummrich , David Airlie , Simona Vetter , "Liam R. Howlett" , Lorenzo Stoakes , Vlastimil Babka , Jann Horn , Pasha Tatashin , Peter Xu , Alistair Popple , Jason Gunthorpe Subject: [PATCH v1 10/12] mm/rmap: handle device-exclusive entries correctly in folio_referenced_one() Date: Wed, 29 Jan 2025 12:54:08 +0100 Message-ID: <20250129115411.2077152-11-david@redhat.com> X-Mailer: git-send-email 2.48.1 In-Reply-To: <20250129115411.2077152-1-david@redhat.com> References: <20250129115411.2077152-1-david@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-MFC-PROC-ID: W9yZSyFwJ2EcBKkWBPvQU0tkNfXjGub1Cm1mQhE3k7k_1738151682 X-Mimecast-Originator: redhat.com content-type: text/plain; charset="US-ASCII"; x-default=true X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Ever since commit b756a3b5e7ea ("mm: device exclusive memory access") we can return with a device-exclusive entry from page_vma_mapped_walk(). 
folio_referenced_one() is not prepared for that, so teach it about these
non-present nonswap PTEs.

We'll likely never hit that path with device-private entries, but we could
with device-exclusive ones. It's not really clear what to do: the device
could be accessing this PTE, but we don't have that information in the PTE.
Likely MMU notifiers should be taking care of that, and we can just assume
"not referenced by the CPU".

Note that we could currently only run into this case with device-exclusive
entries on THPs. For order-0 folios, we still adjust the mapcount on
conversion to device-exclusive, making the rmap walk abort early
(folio_mapcount() == 0). We'll fix that next, now that
folio_referenced_one() can handle it.

Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Signed-off-by: David Hildenbrand
---
 mm/rmap.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 903a78e60781..77b063e9aec4 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -899,8 +899,14 @@ static bool folio_referenced_one(struct folio *folio,
 			if (lru_gen_look_around(&pvmw))
 				referenced++;
 		} else if (pvmw.pte) {
-			if (ptep_clear_flush_young_notify(vma, address,
-						pvmw.pte))
+			/*
+			 * We can end up here with selected non-swap entries
+			 * that actually map pages similar to PROT_NONE; see
+			 * page_vma_mapped_walk()->check_pte(). From a CPU
+			 * perspective, these PTEs are old.
+			 */
+			if (pte_present(ptep_get(pvmw.pte)) &&
+			    ptep_clear_flush_young_notify(vma, address, pvmw.pte))
 				referenced++;
 		} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
 			if (pmdp_clear_flush_young_notify(vma, address,
From patchwork Wed Jan 29 11:54:09 2025
From: David Hildenbrand
Subject: [PATCH v1 11/12] mm/rmap: handle device-exclusive entries correctly
 in page_vma_mkclean_one()
Date: Wed, 29 Jan 2025 12:54:09 +0100
Message-ID: <20250129115411.2077152-12-david@redhat.com>
In-Reply-To: <20250129115411.2077152-1-david@redhat.com>

Ever since commit b756a3b5e7ea ("mm: device exclusive memory access") we can
return with a device-exclusive entry from page_vma_mapped_walk().
page_vma_mkclean_one() is not prepared for that, so teach it about these
non-present nonswap PTEs.

We'll likely never hit that path with device-private entries, but we could
with device-exclusive ones. It's not really clear what to do: the device
could be accessing this PTE, but we don't have that information in the PTE.
Likely MMU notifiers should be taking care of that, and we can just assume
"not writable and not dirty from CPU perspective".

Note that we could currently only run into this case with device-exclusive
entries on THPs. We still adjust the mapcount on conversion to
device-exclusive, making the rmap walk abort early (folio_mapcount() == 0)
for order-0 folios. We'll fix that next, now that page_vma_mkclean_one() can
handle it.

Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Signed-off-by: David Hildenbrand
---
 mm/rmap.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/mm/rmap.c b/mm/rmap.c
index 77b063e9aec4..9e2002d97d6f 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1050,6 +1050,14 @@ static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 			pte_t *pte = pvmw->pte;
 			pte_t entry = ptep_get(pte);
+			/*
+			 * We can end up here with selected non-swap entries
+			 * that actually map pages similar to PROT_NONE; see
+			 * page_vma_mapped_walk()->check_pte(). From a CPU
+			 * perspective, these PTEs are clean and not writable.
+			 */
+			if (!pte_present(entry))
+				continue;
 
 			if (!pte_dirty(entry) && !pte_write(entry))
 				continue;

From patchwork Wed Jan 29 11:54:10 2025
From: David Hildenbrand
Subject: [PATCH v1 12/12] mm/rmap: keep mapcount untouched for
 device-exclusive entries
Date: Wed, 29 Jan 2025 12:54:10 +0100
Message-ID: <20250129115411.2077152-13-david@redhat.com>
In-Reply-To: <20250129115411.2077152-1-david@redhat.com>

Now that conversion to device-exclusive no longer performs an rmap walk and
the main page_vma_mapped_walk() users were taught to properly handle nonswap
entries, let's treat device-exclusive entries just as if they were present,
similar to how we handle device-private entries already.

This fixes swapout/migration of folios with device-exclusive entries.

Likely there are still some page_vma_mapped_walk() callers that are not
fully prepared for these entries, and where we simply want to refuse
!pte_present() entries. They have to be fixed independently; the ones in
mm/rmap.c are prepared.

Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Signed-off-by: David Hildenbrand
---
 mm/memory.c | 17 +----------------
 mm/rmap.c   |  7 -------
 2 files changed, 1 insertion(+), 23 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index db38d6ae4e74..cd689cd8a7c8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -743,20 +743,6 @@ static void restore_exclusive_pte(struct vm_area_struct *vma,
 
 	VM_BUG_ON_FOLIO(pte_write(pte) && (!folio_test_anon(folio) &&
 			PageAnonExclusive(page)), folio);
-
-	/*
-	 * No need to take a page reference as one was already
-	 * created when the swap entry was made.
-	 */
-	if (folio_test_anon(folio))
-		folio_add_anon_rmap_pte(folio, page, vma, address, RMAP_NONE);
-	else
-		/*
-		 * Currently device exclusive access only supports anonymous
-		 * memory so the entry shouldn't point to a filebacked page.
-		 */
-		WARN_ON_ONCE(1);
-
 	set_pte_at(vma->vm_mm, address, ptep, pte);
 
 	/*
@@ -1628,8 +1614,7 @@ static inline int zap_nonpresent_ptes(struct mmu_gather *tlb,
 		 */
 		WARN_ON_ONCE(!vma_is_anonymous(vma));
 		rss[mm_counter(folio)]--;
-		if (is_device_private_entry(entry))
-			folio_remove_rmap_pte(folio, page, vma);
+		folio_remove_rmap_pte(folio, page, vma);
 		folio_put(folio);
 	} else if (!non_swap_entry(entry)) {
 		/* Genuine swap entries, hence a private anon pages */

diff --git a/mm/rmap.c b/mm/rmap.c
index 9e2002d97d6f..4acc9f6d743a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2495,13 +2495,6 @@ struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,
 	/* The pte is writable, uffd-wp does not apply. */
 	set_pte_at(mm, addr, fw.ptep, swp_pte);
 
-	/*
-	 * TODO: The device-exclusive non-swap PTE holds a folio reference but
-	 * does not count as a mapping (mapcount), which is wrong and must be
-	 * fixed, otherwise RMAP walks don't behave as expected.
-	 */
-	folio_remove_rmap_pte(folio, page, vma);
-
 	folio_walk_end(&fw, vma);
 	*foliop = folio;
 	return page;
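As a closing illustration of why this last patch matters, the toy C program
below models an rmap walk that bails out when folio_mapcount() is zero: if
converting a mapping to device-exclusive dropped the mapcount, swapout and
migration would never even look at the entry, whereas keeping the mapcount
lets them find and convert it back. This is a simplified standalone model,
not kernel code.

/* Toy model of the mapcount gate at the start of an rmap walk. */
#include <stdio.h>

struct toy_folio { int mapcount; };

static int toy_rmap_walk(const struct toy_folio *folio)
{
	if (folio->mapcount == 0) {
		printf("folio_mapcount() == 0 -> walk aborts early\n");
		return 0;
	}
	printf("walking %d mapping(s), device-exclusive entries included\n",
	       folio->mapcount);
	return folio->mapcount;
}

int main(void)
{
	struct toy_folio old_behaviour = { .mapcount = 0 }; /* mapcount dropped */
	struct toy_folio new_behaviour = { .mapcount = 1 }; /* mapcount kept   */

	toy_rmap_walk(&old_behaviour);
	toy_rmap_walk(&new_behaviour);
	return 0;
}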