From patchwork Fri Aug 2 15:55:14 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13751718
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org,
 linux-s390@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 David Hildenbrand, Andrew Morton, "Matthew Wilcox (Oracle)",
 Jonathan Corbet, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
 Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Sven Schnelle,
 Gerald Schaefer
Subject: [PATCH v1 01/11] mm: provide vm_normal_(page|folio)_pmd() with
 CONFIG_PGTABLE_HAS_HUGE_LEAVES
Date: Fri, 2 Aug 2024 17:55:14 +0200
Message-ID: <20240802155524.517137-2-david@redhat.com>
In-Reply-To: <20240802155524.517137-1-david@redhat.com>
References: <20240802155524.517137-1-david@redhat.com>
MIME-Version: 1.0

We want to make use of vm_normal_page_pmd() in generic page table
walking code where we might walk hugetlb folios that are mapped by PMDs
even without CONFIG_TRANSPARENT_HUGEPAGE.

So let's expose vm_normal_page_pmd() + vm_normal_folio_pmd() with
CONFIG_PGTABLE_HAS_HUGE_LEAVES.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index 4c8716cb306c..29772beb3275 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -666,7 +666,7 @@ struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
 	return NULL;
 }
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
 struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 				pmd_t pmd)
 {
From patchwork Fri Aug 2 15:55:15 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13751719
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org,
 linux-s390@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v1 02/11] mm/pagewalk: introduce folio_walk_start() +
 folio_walk_end()
Date: Fri, 2 Aug 2024 17:55:15 +0200
Message-ID: <20240802155524.517137-3-david@redhat.com>
In-Reply-To: <20240802155524.517137-1-david@redhat.com>
References: <20240802155524.517137-1-david@redhat.com>
MIME-Version: 1.0

We want to get rid of follow_page(), and have a more reasonable way to
just lookup a folio mapped at a certain address, perform some checks while
still under PTL, and then only conditionally grab a folio reference if
really required.

Further, we might want to get rid of some walk_page_range*() users that
really only want to temporarily lookup a single folio at a single address.
So let's add a new page table walker that does exactly that, similarly
to GUP also being able to walk hugetlb VMAs.

Add folio_walk_end() as a macro for now: the compiler is not easy to
please with the pte_unmap()->kunmap_local().

Note that one difference between follow_page() and get_user_pages(1) is
that follow_page() will not trigger faults to get something mapped. So
folio_walk is at least currently not a replacement for get_user_pages(1),
but could likely be extended/reused to achieve something similar in the
future.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/pagewalk.h |  58 +++++++++++
 mm/pagewalk.c            | 202 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 260 insertions(+)

diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
index 27cd1e59ccf7..f5eb5a32aeed 100644
--- a/include/linux/pagewalk.h
+++ b/include/linux/pagewalk.h
@@ -130,4 +130,62 @@ int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
 		      pgoff_t nr, const struct mm_walk_ops *ops,
 		      void *private);
 
+typedef int __bitwise folio_walk_flags_t;
+
+/*
+ * Walk migration entries as well. Careful: a large folio might get split
+ * concurrently.
+ */
+#define FW_MIGRATION		((__force folio_walk_flags_t)BIT(0))
+
+/* Walk shared zeropages (small + huge) as well. */
+#define FW_ZEROPAGE		((__force folio_walk_flags_t)BIT(1))
+
+enum folio_walk_level {
+	FW_LEVEL_PTE,
+	FW_LEVEL_PMD,
+	FW_LEVEL_PUD,
+};
+
+/**
+ * struct folio_walk - folio_walk_start() / folio_walk_end() data
+ * @page:	exact folio page referenced (if applicable)
+ * @level:	page table level identifying the entry type
+ * @pte:	pointer to the page table entry (FW_LEVEL_PTE).
+ * @pmd:	pointer to the page table entry (FW_LEVEL_PMD).
+ * @pud:	pointer to the page table entry (FW_LEVEL_PUD).
+ * @ptl:	pointer to the page table lock.
+ *
+ * (see folio_walk_start() documentation for more details)
+ */
+struct folio_walk {
+	/* public */
+	struct page *page;
+	enum folio_walk_level level;
+	union {
+		pte_t *ptep;
+		pud_t *pudp;
+		pmd_t *pmdp;
+	};
+	union {
+		pte_t pte;
+		pud_t pud;
+		pmd_t pmd;
+	};
+	/* private */
+	struct vm_area_struct *vma;
+	spinlock_t *ptl;
+};
+
+struct folio *folio_walk_start(struct folio_walk *fw,
+		struct vm_area_struct *vma, unsigned long addr,
+		folio_walk_flags_t flags);
+
+#define folio_walk_end(__fw, __vma) do { \
+	spin_unlock((__fw)->ptl); \
+	if (likely((__fw)->level == FW_LEVEL_PTE)) \
+		pte_unmap((__fw)->ptep); \
+	vma_pgtable_walk_end(__vma); \
+} while (0)
+
 #endif /* _LINUX_PAGEWALK_H */
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index ae2f08ce991b..cd79fb3b89e5 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -3,6 +3,8 @@
 #include <linux/highmem.h>
 #include <linux/sched.h>
 #include <linux/hugetlb.h>
+#include <linux/swap.h>
+#include <linux/swapops.h>
 
 /*
  * We want to know the real level where a entry is located ignoring any
@@ -654,3 +656,203 @@ int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
 
 	return err;
 }
+
+/**
+ * folio_walk_start - walk the page tables to a folio
+ * @fw: filled with information on success.
+ * @vma: the VMA.
+ * @addr: the virtual address to use for the page table walk.
+ * @flags: flags modifying which folios to walk to.
+ *
+ * Walk the page tables using @addr in a given @vma to a mapped folio and
+ * return the folio, making sure that the page table entry referenced by
+ * @addr cannot change until folio_walk_end() was called.
+ *
+ * As default, this function returns only folios that are not special (e.g., not
+ * the zeropage) and never returns folios that are supposed to be ignored by the
+ * VM as documented by vm_normal_page(). If requested, zeropages will be
+ * returned as well.
+ *
+ * As default, this function only considers present page table entries.
+ * If requested, it will also consider migration entries.
+ *
+ * If this function returns NULL it might either indicate "there is nothing" or
+ * "there is nothing suitable".
+ *
+ * On success, @fw is filled and the function returns the folio while the PTL
+ * is still held and folio_walk_end() must be called to clean up,
+ * releasing any held locks. The returned folio must *not* be used after the
+ * call to folio_walk_end(), unless a short-term folio reference is taken before
+ * that call.
+ *
+ * @fw->page will correspond to the page that is effectively referenced by
+ * @addr. However, for migration entries and shared zeropages @fw->page is
+ * set to NULL. Note that large folios might be mapped by multiple page table
+ * entries, and this function will always only lookup a single entry as
+ * specified by @addr, which might or might not cover more than a single page of
+ * the returned folio.
+ *
+ * This function must *not* be used as a naive replacement for
+ * get_user_pages() / pin_user_pages(), especially not to perform DMA or
+ * to carelessly modify page content. This function may *only* be used to grab
+ * short-term folio references, never to grab long-term folio references.
+ *
+ * Using the page table entry pointers in @fw for reading or modifying the
+ * entry should be avoided where possible: however, there might be valid
+ * use cases.
+ *
+ * WARNING: Modifying page table entries in hugetlb VMAs requires a lot of care.
+ * For example, PMD page table sharing might require prior unsharing. Also,
+ * logical hugetlb entries might span multiple physical page table entries,
+ * which *must* be modified in a single operation (set_huge_pte_at(),
+ * huge_ptep_set_*, ...). Note that the page table entry stored in @fw might
+ * not correspond to the first physical entry of a logical hugetlb entry.
+ *
+ * The mmap lock must be held in read mode.
+ *
+ * Return: folio pointer on success, otherwise NULL.
+ */
+struct folio *folio_walk_start(struct folio_walk *fw,
+		struct vm_area_struct *vma, unsigned long addr,
+		folio_walk_flags_t flags)
+{
+	unsigned long entry_size;
+	bool expose_page = true;
+	struct page *page;
+	pud_t *pudp, pud;
+	pmd_t *pmdp, pmd;
+	pte_t *ptep, pte;
+	spinlock_t *ptl;
+	pgd_t *pgdp;
+	p4d_t *p4dp;
+
+	mmap_assert_locked(vma->vm_mm);
+	vma_pgtable_walk_begin(vma);
+
+	if (WARN_ON_ONCE(addr < vma->vm_start || addr >= vma->vm_end))
+		goto not_found;
+
+	pgdp = pgd_offset(vma->vm_mm, addr);
+	if (pgd_none_or_clear_bad(pgdp))
+		goto not_found;
+
+	p4dp = p4d_offset(pgdp, addr);
+	if (p4d_none_or_clear_bad(p4dp))
+		goto not_found;
+
+	pudp = pud_offset(p4dp, addr);
+	pud = pudp_get(pudp);
+	if (pud_none(pud))
+		goto not_found;
+	if (IS_ENABLED(CONFIG_PGTABLE_HAS_HUGE_LEAVES) && pud_leaf(pud)) {
+		ptl = pud_lock(vma->vm_mm, pudp);
+		pud = pudp_get(pudp);
+
+		entry_size = PUD_SIZE;
+		fw->level = FW_LEVEL_PUD;
+		fw->pudp = pudp;
+		fw->pud = pud;
+
+		if (!pud_present(pud) || pud_devmap(pud)) {
+			spin_unlock(ptl);
+			goto not_found;
+		} else if (!pud_leaf(pud)) {
+			spin_unlock(ptl);
+			goto pmd_table;
+		}
+		/*
+		 * TODO: vm_normal_page_pud() will be handy once we want to
+		 * support PUD mappings in VM_PFNMAP|VM_MIXEDMAP VMAs.
+		 */
+		page = pud_page(pud);
+		goto found;
+	}
+
+pmd_table:
+	VM_WARN_ON_ONCE(pud_leaf(*pudp));
+	pmdp = pmd_offset(pudp, addr);
+	pmd = pmdp_get_lockless(pmdp);
+	if (pmd_none(pmd))
+		goto not_found;
+	if (IS_ENABLED(CONFIG_PGTABLE_HAS_HUGE_LEAVES) && pmd_leaf(pmd)) {
+		ptl = pmd_lock(vma->vm_mm, pmdp);
+		pmd = pmdp_get(pmdp);
+
+		entry_size = PMD_SIZE;
+		fw->level = FW_LEVEL_PMD;
+		fw->pmdp = pmdp;
+		fw->pmd = pmd;
+
+		if (pmd_none(pmd)) {
+			spin_unlock(ptl);
+			goto not_found;
+		} else if (!pmd_leaf(pmd)) {
+			spin_unlock(ptl);
+			goto pte_table;
+		} else if (pmd_present(pmd)) {
+			page = vm_normal_page_pmd(vma, addr, pmd);
+			if (page) {
+				goto found;
+			} else if ((flags & FW_ZEROPAGE) &&
+				    is_huge_zero_pmd(pmd)) {
+				page = pfn_to_page(pmd_pfn(pmd));
+				expose_page = false;
+				goto found;
+			}
+		} else if ((flags & FW_MIGRATION) &&
+			   is_pmd_migration_entry(pmd)) {
+			swp_entry_t entry = pmd_to_swp_entry(pmd);
+
+			page = pfn_swap_entry_to_page(entry);
+			expose_page = false;
+			goto found;
+		}
+		spin_unlock(ptl);
+		goto not_found;
+	}
+
+pte_table:
+	VM_WARN_ON_ONCE(pmd_leaf(pmdp_get_lockless(pmdp)));
+	ptep = pte_offset_map_lock(vma->vm_mm, pmdp, addr, &ptl);
+	if (!ptep)
+		goto not_found;
+	pte = ptep_get(ptep);
+
+	entry_size = PAGE_SIZE;
+	fw->level = FW_LEVEL_PTE;
+	fw->ptep = ptep;
+	fw->pte = pte;
+
+	if (pte_present(pte)) {
+		page = vm_normal_page(vma, addr, pte);
+		if (page)
+			goto found;
+		if ((flags & FW_ZEROPAGE) &&
+		    is_zero_pfn(pte_pfn(pte))) {
+			page = pfn_to_page(pte_pfn(pte));
+			expose_page = false;
+			goto found;
+		}
+	} else if (!pte_none(pte)) {
+		swp_entry_t entry = pte_to_swp_entry(pte);
+
+		if ((flags & FW_MIGRATION) &&
+		    is_migration_entry(entry)) {
+			page = pfn_swap_entry_to_page(entry);
+			expose_page = false;
+			goto found;
+		}
+	}
+	pte_unmap_unlock(ptep, ptl);
+not_found:
+	vma_pgtable_walk_end(vma);
+	return NULL;
+found:
+	if (expose_page)
+		/* Note: Offset from the mapped page, not the folio start. */
+		fw->page = nth_page(page,
+				(addr & (entry_size - 1)) >> PAGE_SHIFT);
+	else
+		fw->page = NULL;
+	fw->ptl = ptl;
+	return page_folio(page);
+}
From patchwork Fri Aug 2 15:55:16 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13751720
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org,
 linux-s390@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v1 03/11] mm/migrate: convert do_pages_stat_array() from
 follow_page() to folio_walk
Date: Fri, 2 Aug 2024 17:55:16 +0200
Message-ID: <20240802155524.517137-4-david@redhat.com>
In-Reply-To: <20240802155524.517137-1-david@redhat.com>
References: <20240802155524.517137-1-david@redhat.com>
MIME-Version: 1.0

Let's use folio_walk instead, so we can avoid taking a folio reference
just to read the nid and get rid of another follow_page()/FOLL_DUMP user.
Use FW_ZEROPAGE so we can return "-EFAULT" for it as documented.

The possible return values for follow_page() were confusing, especially
with FOLL_DUMP set. We'll handle it like documented in the man page:
 * -EFAULT: This is a zero page or the memory area is not mapped by the
   process.
 * -ENOENT: The page is not present.

We'll keep setting -ENOENT for ZONE_DEVICE. Maybe not the right thing
to do, but it likely doesn't really matter (just like for weird devmap,
whereby we fake "not present").

Note that the other errors (-EACCESS, -EBUSY, -EIO, -EINVAL, -ENOMEM) so
far only applied when actually moving pages, not when only querying stats.

We'll effectively drop the "secretmem" check we had in follow_page(), but
that shouldn't really matter here, we're not accessing folio/page content
after all.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/migrate.c | 31 +++++++++++++++----------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index aa482c954cb0..b5365a434ba9 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -50,6 +50,7 @@
 #include <linux/random.h>
 #include <linux/sched/sysctl.h>
 #include <linux/memory-tiers.h>
+#include <linux/pagewalk.h>
 
 #include <asm/tlbflush.h>
@@ -2331,28 +2332,26 @@ static void do_pages_stat_array(struct mm_struct *mm, unsigned long nr_pages,
 	for (i = 0; i < nr_pages; i++) {
 		unsigned long addr = (unsigned long)(*pages);
 		struct vm_area_struct *vma;
-		struct page *page;
+		struct folio_walk fw;
+		struct folio *folio;
 		int err = -EFAULT;
 
 		vma = vma_lookup(mm, addr);
 		if (!vma)
 			goto set_status;
 
-		/* FOLL_DUMP to ignore special (like zero) pages */
-		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
-
-		err = PTR_ERR(page);
-		if (IS_ERR(page))
-			goto set_status;
-
-		err = -ENOENT;
-		if (!page)
-			goto set_status;
-
-		if (!is_zone_device_page(page))
-			err = page_to_nid(page);
-
-		put_page(page);
+		folio = folio_walk_start(&fw, vma, addr, FW_ZEROPAGE);
+		if (folio) {
+			if (is_zero_folio(folio) || is_huge_zero_folio(folio))
+				err = -EFAULT;
+			else if (folio_is_zone_device(folio))
+				err = -ENOENT;
+			else
+				err = folio_nid(folio);
+			folio_walk_end(&fw, vma);
+		} else {
+			err = -ENOENT;
+		}
 set_status:
 		*status = err;

From patchwork Fri Aug 2 15:55:17 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13751721
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org, linux-s390@vger.kernel.org, linux-fsdevel@vger.kernel.org, David Hildenbrand, Andrew Morton, "Matthew Wilcox (Oracle)", Jonathan Corbet, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Sven Schnelle, Gerald Schaefer
Subject: [PATCH v1 04/11] mm/migrate: convert add_page_for_migration() from follow_page() to folio_walk
Date: Fri, 2 Aug 2024 17:55:17 +0200
Message-ID: <20240802155524.517137-5-david@redhat.com>
In-Reply-To: <20240802155524.517137-1-david@redhat.com>
References: <20240802155524.517137-1-david@redhat.com>
Let's use folio_walk instead, so we can avoid taking a folio reference
when we won't even be trying to migrate the folio, and to get rid of
another follow_page()/FOLL_DUMP user. Use FW_ZEROPAGE so we can return
"-EFAULT" for it as documented.
We now perform the folio_likely_mapped_shared() check under PTL, which
is what we want: relying on the mapcount and friends after dropping the
PTL does not make too much sense, as the page can get unmapped
concurrently from this process.

Further, we perform the folio isolation under PTL, similar to how we
handle it for MADV_PAGEOUT.

The possible return values for follow_page() were confusing, especially
with FOLL_DUMP set. We'll handle it as documented in the man page:

* -EFAULT: This is a zero page or the memory area is not mapped by the
  process.
* -ENOENT: The page is not present.

We'll keep setting -ENOENT for ZONE_DEVICE. Maybe not the right thing
to do, but it likely doesn't really matter (just like for weird devmap,
whereby we fake "not present"). The other errors are left as is, and
match the documentation in the man page.

While at it, rename add_page_for_migration() to
add_folio_for_migration().

We'll lose the "secretmem" check, but that shouldn't really matter
because these folios cannot ever be migrated. Should vma_migratable()
refuse these VMAs? Maybe.
Signed-off-by: David Hildenbrand
---
 mm/migrate.c | 100 +++++++++++++++++++++++----------------------
 1 file changed, 45 insertions(+), 55 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index b5365a434ba9..e1383d9cc944 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2112,76 +2112,66 @@ static int do_move_pages_to_node(struct list_head *pagelist, int node)
 	return err;
 }
 
+static int __add_folio_for_migration(struct folio *folio, int node,
+		struct list_head *pagelist, bool migrate_all)
+{
+	if (is_zero_folio(folio) || is_huge_zero_folio(folio))
+		return -EFAULT;
+
+	if (folio_is_zone_device(folio))
+		return -ENOENT;
+
+	if (folio_nid(folio) == node)
+		return 0;
+
+	if (folio_likely_mapped_shared(folio) && !migrate_all)
+		return -EACCES;
+
+	if (folio_test_hugetlb(folio)) {
+		if (isolate_hugetlb(folio, pagelist))
+			return 1;
+	} else if (folio_isolate_lru(folio)) {
+		list_add_tail(&folio->lru, pagelist);
+		node_stat_mod_folio(folio,
+			NR_ISOLATED_ANON + folio_is_file_lru(folio),
+			folio_nr_pages(folio));
+		return 1;
+	}
+	return -EBUSY;
+}
+
 /*
- * Resolves the given address to a struct page, isolates it from the LRU and
+ * Resolves the given address to a struct folio, isolates it from the LRU and
  * puts it to the given pagelist.
  * Returns:
- *     errno - if the page cannot be found/isolated
+ *     errno - if the folio cannot be found/isolated
  *     0 - when it doesn't have to be migrated because it is already on the
  *         target node
  *     1 - when it has been queued
  */
-static int add_page_for_migration(struct mm_struct *mm, const void __user *p,
+static int add_folio_for_migration(struct mm_struct *mm, const void __user *p,
 		int node, struct list_head *pagelist, bool migrate_all)
 {
 	struct vm_area_struct *vma;
-	unsigned long addr;
-	struct page *page;
+	struct folio_walk fw;
 	struct folio *folio;
-	int err;
+	unsigned long addr;
+	int err = -EFAULT;
 
 	mmap_read_lock(mm);
 	addr = (unsigned long)untagged_addr_remote(mm, p);
 
-	err = -EFAULT;
 	vma = vma_lookup(mm, addr);
-	if (!vma || !vma_migratable(vma))
-		goto out;
-
-	/* FOLL_DUMP to ignore special (like zero) pages */
-	page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
-
-	err = PTR_ERR(page);
-	if (IS_ERR(page))
-		goto out;
-
-	err = -ENOENT;
-	if (!page)
-		goto out;
-
-	folio = page_folio(page);
-	if (folio_is_zone_device(folio))
-		goto out_putfolio;
-
-	err = 0;
-	if (folio_nid(folio) == node)
-		goto out_putfolio;
-
-	err = -EACCES;
-	if (folio_likely_mapped_shared(folio) && !migrate_all)
-		goto out_putfolio;
-
-	err = -EBUSY;
-	if (folio_test_hugetlb(folio)) {
-		if (isolate_hugetlb(folio, pagelist))
-			err = 1;
-	} else {
-		if (!folio_isolate_lru(folio))
-			goto out_putfolio;
-
-		err = 1;
-		list_add_tail(&folio->lru, pagelist);
-		node_stat_mod_folio(folio,
-			NR_ISOLATED_ANON + folio_is_file_lru(folio),
-			folio_nr_pages(folio));
+	if (vma && vma_migratable(vma)) {
+		folio = folio_walk_start(&fw, vma, addr, FW_ZEROPAGE);
+		if (folio) {
+			err = __add_folio_for_migration(folio, node, pagelist,
+							migrate_all);
+			folio_walk_end(&fw, vma);
+		} else {
+			err = -ENOENT;
+		}
 	}
-out_putfolio:
-	/*
-	 * Either remove the duplicate refcount from folio_isolate_lru()
-	 * or drop the folio ref if it was not isolated.
-	 */
-	folio_put(folio);
-out:
 	mmap_read_unlock(mm);
 	return err;
 }
 
@@ -2275,8 +2265,8 @@ static int do_pages_move(struct mm_struct *mm, nodemask_t task_nodes,
 		 * Errors in the page lookup or isolation are not fatal and we simply
 		 * report them via status
 		 */
-		err = add_page_for_migration(mm, p, current_node, &pagelist,
-				flags & MPOL_MF_MOVE_ALL);
+		err = add_folio_for_migration(mm, p, current_node, &pagelist,
+				flags & MPOL_MF_MOVE_ALL);
 
 		if (err > 0) {
 			/* The page is successfully queued for migration */

From patchwork Fri Aug 2 15:55:18 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13751722
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org, linux-s390@vger.kernel.org, linux-fsdevel@vger.kernel.org, David Hildenbrand, Andrew Morton, "Matthew Wilcox (Oracle)", Jonathan Corbet, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Sven Schnelle, Gerald Schaefer
Subject: [PATCH v1 05/11] mm/ksm: convert get_mergeable_page() from follow_page() to folio_walk
Date: Fri, 2 Aug 2024 17:55:18 +0200
Message-ID: <20240802155524.517137-6-david@redhat.com>
In-Reply-To: <20240802155524.517137-1-david@redhat.com>
References: <20240802155524.517137-1-david@redhat.com>
Let's use folio_walk instead, for example avoiding taking temporary
folio references if the folio does not even apply, and getting rid of
one more follow_page() user.

Note that zeropages obviously don't apply: the old code could just have
specified FOLL_DUMP. Anon folios are never secretmem, so we don't care
about losing the check in follow_page().
Signed-off-by: David Hildenbrand
---
 mm/ksm.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 14d9e53b1ec2..742b005f3f77 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -767,26 +767,28 @@ static struct page *get_mergeable_page(struct ksm_rmap_item *rmap_item)
 	struct mm_struct *mm = rmap_item->mm;
 	unsigned long addr = rmap_item->address;
 	struct vm_area_struct *vma;
-	struct page *page;
+	struct page *page = NULL;
+	struct folio_walk fw;
+	struct folio *folio;
 
 	mmap_read_lock(mm);
 	vma = find_mergeable_vma(mm, addr);
 	if (!vma)
 		goto out;
-	page = follow_page(vma, addr, FOLL_GET);
-	if (IS_ERR_OR_NULL(page))
-		goto out;
-	if (is_zone_device_page(page))
-		goto out_putpage;
-	if (PageAnon(page)) {
+	folio = folio_walk_start(&fw, vma, addr, 0);
+	if (folio) {
+		if (!folio_is_zone_device(folio) &&
+		    folio_test_anon(folio)) {
+			folio_get(folio);
+			page = fw.page;
+		}
+		folio_walk_end(&fw, vma);
+	}
+out:
+	if (page) {
 		flush_anon_page(vma, page, addr);
 		flush_dcache_page(page);
-	} else {
-out_putpage:
-		put_page(page);
-out:
-		page = NULL;
 	}
 	mmap_read_unlock(mm);
 	return page;

From patchwork Fri Aug 2 15:55:19 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13751749
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org, linux-s390@vger.kernel.org, linux-fsdevel@vger.kernel.org, David Hildenbrand, Andrew Morton, "Matthew Wilcox (Oracle)", Jonathan Corbet, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Sven Schnelle, Gerald Schaefer
Subject: [PATCH v1 06/11] mm/ksm: convert scan_get_next_rmap_item() from follow_page() to folio_walk
Date: Fri, 2 Aug 2024 17:55:19 +0200
Message-ID: <20240802155524.517137-7-david@redhat.com>
In-Reply-To: <20240802155524.517137-1-david@redhat.com>
References: <20240802155524.517137-1-david@redhat.com>
Let's use folio_walk instead, for example avoiding taking temporary
folio references if the folio obviously doesn't even apply, and getting
rid of one more follow_page() user. We cannot move all handling under
the PTL, so leave the rmap handling (which implies an allocation) out.

Note that zeropages obviously don't apply: the old code could just have
specified FOLL_DUMP. Further, we don't care about losing the secretmem
check in follow_page(): these are never anon pages, and
vma_ksm_compatible() would never consider secretmem vmas (VM_SHARED |
VM_MAYSHARE must be set for secretmem, see secretmem_mmap()).

Signed-off-by: David Hildenbrand
---
 mm/ksm.c | 38 ++++++++++++++++++++++++--------------
 1 file changed, 24 insertions(+), 14 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 742b005f3f77..0f5b2bba4ef0 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2564,36 +2564,46 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 			ksm_scan.address = vma->vm_end;
 
 		while (ksm_scan.address < vma->vm_end) {
+			struct page *tmp_page = NULL;
+			struct folio_walk fw;
+			struct folio *folio;
+
 			if (ksm_test_exit(mm))
 				break;
-			*page = follow_page(vma, ksm_scan.address, FOLL_GET);
-			if (IS_ERR_OR_NULL(*page)) {
-				ksm_scan.address += PAGE_SIZE;
-				cond_resched();
-				continue;
+
+			folio = folio_walk_start(&fw, vma, ksm_scan.address, 0);
+			if (folio) {
+				if (!folio_is_zone_device(folio) &&
+				    folio_test_anon(folio)) {
+					folio_get(folio);
+					tmp_page = fw.page;
+				}
+				folio_walk_end(&fw, vma);
 			}
-			if (is_zone_device_page(*page))
-				goto next_page;
-			if (PageAnon(*page)) {
-				flush_anon_page(vma, *page, ksm_scan.address);
-				flush_dcache_page(*page);
+
+			if (tmp_page) {
+				flush_anon_page(vma, tmp_page, ksm_scan.address);
+				flush_dcache_page(tmp_page);
 				rmap_item = get_next_rmap_item(mm_slot,
 					ksm_scan.rmap_list, ksm_scan.address);
 				if (rmap_item) {
 					ksm_scan.rmap_list =
 						&rmap_item->rmap_list;
-					if (should_skip_rmap_item(*page, rmap_item))
+					if (should_skip_rmap_item(tmp_page, rmap_item)) {
+						folio_put(folio);
 						goto next_page;
+					}
 					ksm_scan.address += PAGE_SIZE;
-				} else
-					put_page(*page);
+					*page = tmp_page;
+				} else {
+					folio_put(folio);
+				}
 				mmap_read_unlock(mm);
 				return rmap_item;
 			}
 next_page:
-			put_page(*page);
 			ksm_scan.address += PAGE_SIZE;
 			cond_resched();
 		}

From patchwork Fri Aug 2 15:55:20 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13751750
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org, linux-s390@vger.kernel.org, linux-fsdevel@vger.kernel.org, David Hildenbrand, Andrew Morton, "Matthew Wilcox (Oracle)", Jonathan Corbet, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Sven Schnelle, Gerald Schaefer
Subject: [PATCH v1 07/11] mm/huge_memory: convert split_huge_pages_pid() from follow_page() to folio_walk
Date: Fri, 2 Aug 2024 17:55:20 +0200
Message-ID: <20240802155524.517137-8-david@redhat.com>
In-Reply-To: <20240802155524.517137-1-david@redhat.com>
References: <20240802155524.517137-1-david@redhat.com>
Let's remove yet another follow_page() user. Note that we have to do the
split without holding the PTL, after folio_walk_end(). We don't care
about losing the secretmem check in follow_page().
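As a rough illustration of the pattern this conversion uses — a sketch only, in kernel-style pseudocode; the authoritative code is the diff in the patch itself, and folio_walk_start()/folio_walk_end() are the helpers introduced earlier in this series, which return a mapped folio with the page table lock held and drop that lock again:

```c
struct folio_walk fw;
struct folio *folio;

folio = folio_walk_start(&fw, vma, addr, 0);	/* PTL is held on success */
if (!folio)
	return;			/* nothing mapped, or a special entry */

/* ... checks that only require the mapping to be stable ... */

folio_get(folio);		/* keep the folio alive beyond the walk */
folio_walk_end(&fw, vma);	/* drops the PTL */

/* now safe to block, e.g. to split the folio */

folio_put(folio);
```

The point of the ordering is that split_folio_to_order() may sleep, so it must run only after folio_walk_end(); the extra reference taken under the lock keeps the folio from disappearing in between.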
Signed-off-by: David Hildenbrand
---
 mm/huge_memory.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0167dc27e365..697fcf89f975 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -40,6 +40,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -3507,7 +3508,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 	 */
 	for (addr = vaddr_start; addr < vaddr_end; addr += PAGE_SIZE) {
 		struct vm_area_struct *vma = vma_lookup(mm, addr);
-		struct page *page;
+		struct folio_walk fw;
 		struct folio *folio;

 		if (!vma)
@@ -3519,13 +3520,10 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 			continue;
 		}

-		/* FOLL_DUMP to ignore special (like zero) pages */
-		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
-
-		if (IS_ERR_OR_NULL(page))
+		folio = folio_walk_start(&fw, vma, addr, 0);
+		if (!folio)
 			continue;

-		folio = page_folio(page);
 		if (!is_transparent_hugepage(folio))
 			goto next;

@@ -3544,13 +3542,19 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 		if (!folio_trylock(folio))
 			goto next;

+		folio_get(folio);
+		folio_walk_end(&fw, vma);
 		if (!split_folio_to_order(folio, new_order))
 			split++;
 		folio_unlock(folio);
-next:
 		folio_put(folio);
+
+		cond_resched();
+		continue;
+next:
+		folio_walk_end(&fw, vma);
 		cond_resched();
 	}
 	mmap_read_unlock(mm);

From patchwork Fri Aug 2 15:55:21 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13751751
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc:
 linux-mm@kvack.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org, linux-s390@vger.kernel.org, linux-fsdevel@vger.kernel.org, David Hildenbrand, Andrew Morton, "Matthew Wilcox (Oracle)", Jonathan Corbet, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Sven Schnelle, Gerald Schaefer
Subject: [PATCH v1 08/11] s390/uv: convert gmap_destroy_page() from follow_page() to folio_walk
Date: Fri, 2 Aug 2024 17:55:21 +0200
Message-ID: <20240802155524.517137-9-david@redhat.com>
In-Reply-To: <20240802155524.517137-1-david@redhat.com>
References: <20240802155524.517137-1-david@redhat.com>
Let's get rid of another follow_page() user and perform the UV calls
under PTL -- which likely should be fine.

No need for an additional reference while holding the PTL:
uv_destroy_folio() and uv_convert_from_secure_folio() raise the
refcount, so any concurrent make_folio_secure() would see an unexpected
reference and cannot set PG_arch_1 concurrently.

Do we really need a writable PTE? Likely yes, because the "destroy"
part is, in comparison to the export, a destructive operation. So we'll
keep the writability check for now.

We'll lose the secretmem check from follow_page(). Likely we don't care
about that here.
Signed-off-by: David Hildenbrand
---
 arch/s390/kernel/uv.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
index 35ed2aea8891..9646f773208a 100644
--- a/arch/s390/kernel/uv.c
+++ b/arch/s390/kernel/uv.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -462,9 +463,9 @@ EXPORT_SYMBOL_GPL(gmap_convert_to_secure);
 int gmap_destroy_page(struct gmap *gmap, unsigned long gaddr)
 {
 	struct vm_area_struct *vma;
+	struct folio_walk fw;
 	unsigned long uaddr;
 	struct folio *folio;
-	struct page *page;
 	int rc;

 	rc = -EFAULT;
@@ -483,11 +484,15 @@ int gmap_destroy_page(struct gmap *gmap, unsigned long gaddr)
 		goto out;

 	rc = 0;
-	/* we take an extra reference here */
-	page = follow_page(vma, uaddr, FOLL_WRITE | FOLL_GET);
-	if (IS_ERR_OR_NULL(page))
+	folio = folio_walk_start(&fw, vma, uaddr, 0);
+	if (!folio)
 		goto out;
-	folio = page_folio(page);
+	/*
+	 * See gmap_make_secure(): large folios cannot be secure. Small
+	 * folio implies FW_LEVEL_PTE.
+	 */
+	if (folio_test_large(folio) || !pte_write(fw.pte))
+		goto out_walk_end;
 	rc = uv_destroy_folio(folio);
 	/*
 	 * Fault handlers can race; it is possible that two CPUs will fault
@@ -500,7 +505,8 @@ int gmap_destroy_page(struct gmap *gmap, unsigned long gaddr)
 	 */
 	if (rc)
 		rc = uv_convert_from_secure_folio(folio);
-	folio_put(folio);
+out_walk_end:
+	folio_walk_end(&fw, vma);
 out:
 	mmap_read_unlock(gmap->mm);
 	return rc;

From patchwork Fri Aug 2 15:55:22 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13751752
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org, linux-s390@vger.kernel.org, linux-fsdevel@vger.kernel.org, David Hildenbrand, Andrew Morton, "Matthew Wilcox (Oracle)", Jonathan Corbet, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Sven Schnelle, Gerald Schaefer
Subject: [PATCH v1 09/11] s390/mm/fault: convert do_secure_storage_access() from follow_page() to folio_walk
Date: Fri, 2 Aug 2024 17:55:22 +0200
Message-ID: <20240802155524.517137-10-david@redhat.com>
In-Reply-To: <20240802155524.517137-1-david@redhat.com>
References: <20240802155524.517137-1-david@redhat.com>
Let's get rid of another follow_page() user and perform the conversion
under PTL: Note that this is also what follow_page_pte() ends up doing.

Unfortunately we cannot currently optimize out the additional reference,
because arch_make_folio_accessible() must be called with a raised
refcount to protect against concurrent conversion to secure.
We can just move the arch_make_folio_accessible() call under the PTL,
like follow_page_pte() would.

We'll effectively drop the "writable" check implied by FOLL_WRITE:
follow_page_pte() would also not check that when calling
arch_make_folio_accessible(), so there is no good reason for doing that
here.

We'll lose the secretmem check from follow_page() as well, which we
shouldn't really care about here.

Signed-off-by: David Hildenbrand
---
 arch/s390/mm/fault.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index 8e149ef5e89b..ad8b0d6b77ea 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -492,9 +493,9 @@ void do_secure_storage_access(struct pt_regs *regs)
 	union teid teid = { .val = regs->int_parm_long };
 	unsigned long addr = get_fault_address(regs);
 	struct vm_area_struct *vma;
+	struct folio_walk fw;
 	struct mm_struct *mm;
 	struct folio *folio;
-	struct page *page;
 	struct gmap *gmap;
 	int rc;

@@ -536,15 +537,18 @@ void do_secure_storage_access(struct pt_regs *regs)
 		vma = find_vma(mm, addr);
 		if (!vma)
 			return handle_fault_error(regs, SEGV_MAPERR);
-		page = follow_page(vma, addr, FOLL_WRITE | FOLL_GET);
-		if (IS_ERR_OR_NULL(page)) {
+		folio = folio_walk_start(&fw, vma, addr, 0);
+		if (!folio) {
 			mmap_read_unlock(mm);
 			break;
 		}
-		folio = page_folio(page);
-		if (arch_make_folio_accessible(folio))
-			send_sig(SIGSEGV, current, 0);
+		/* arch_make_folio_accessible() needs a raised refcount. */
+		folio_get(folio);
+		rc = arch_make_folio_accessible(folio);
 		folio_put(folio);
+		folio_walk_end(&fw, vma);
+		if (rc)
+			send_sig(SIGSEGV, current, 0);
 		mmap_read_unlock(mm);
 		break;
 	case KERNEL_FAULT:

From patchwork Fri Aug 2 15:55:23 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13751753
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org, linux-s390@vger.kernel.org, linux-fsdevel@vger.kernel.org, David Hildenbrand, Andrew Morton, "Matthew Wilcox (Oracle)", Jonathan Corbet, Christian Borntraeger, Janosch Frank, Claudio Imbrenda, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Sven Schnelle, Gerald Schaefer
Subject: [PATCH v1 10/11] mm: remove follow_page()
Date: Fri, 2 Aug 2024 17:55:23 +0200
Message-ID: <20240802155524.517137-11-david@redhat.com>
In-Reply-To: <20240802155524.517137-1-david@redhat.com>
References: <20240802155524.517137-1-david@redhat.com>
All users are gone, let's remove it and any leftovers in comments. We'll
leave any FOLL/follow_page_() naming cleanups as future work.
Signed-off-by: David Hildenbrand
---
 Documentation/mm/transhuge.rst |  6 +++---
 include/linux/mm.h             |  3 ---
 mm/filemap.c                   |  2 +-
 mm/gup.c                       | 24 +-----------------------
 mm/nommu.c                     |  6 ------
 5 files changed, 5 insertions(+), 36 deletions(-)

diff --git a/Documentation/mm/transhuge.rst b/Documentation/mm/transhuge.rst
index 1ba0ad63246c..a2cd8800d527 100644
--- a/Documentation/mm/transhuge.rst
+++ b/Documentation/mm/transhuge.rst
@@ -31,10 +31,10 @@ Design principles
   feature that applies to all dynamic high order allocations in the
   kernel)

-get_user_pages and follow_page
-==============================
+get_user_pages and pin_user_pages
+=================================

-get_user_pages and follow_page if run on a hugepage, will return the
+get_user_pages and pin_user_pages if run on a hugepage, will return the
 head or tail pages as usual (exactly as they would do on
 hugetlbfs). Most GUP users will only care about the actual physical
 address of the page and its temporary pinning to release after the I/O
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2f6c08b53e4f..ee8cea73d415 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3527,9 +3527,6 @@ static inline vm_fault_t vmf_fs_error(int err)
 	return VM_FAULT_SIGBUS;
 }

-struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
-			unsigned int foll_flags);
-
 static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
 {
 	if (vm_fault & VM_FAULT_OOM)
diff --git a/mm/filemap.c b/mm/filemap.c
index d62150418b91..4130be74f6fd 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -112,7 +112,7 @@
  *    ->swap_lock		(try_to_unmap_one)
  *    ->private_lock		(try_to_unmap_one)
  *    ->i_pages lock		(try_to_unmap_one)
- *    ->lruvec->lru_lock	(follow_page->mark_page_accessed)
+ *    ->lruvec->lru_lock	(follow_page_mask->mark_page_accessed)
  *    ->lruvec->lru_lock	(check_pte_range->isolate_lru_page)
  *    ->private_lock		(folio_remove_rmap_pte->set_page_dirty)
  *    ->i_pages lock		(folio_remove_rmap_pte->set_page_dirty)
diff --git a/mm/gup.c b/mm/gup.c
index 3e8484c893aa..d19884e097fd 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1072,28 +1072,6 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 	return page;
 }

-struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
-			 unsigned int foll_flags)
-{
-	struct follow_page_context ctx = { NULL };
-	struct page *page;
-
-	if (vma_is_secretmem(vma))
-		return NULL;
-
-	if (WARN_ON_ONCE(foll_flags & FOLL_PIN))
-		return NULL;
-
-	/*
-	 * We never set FOLL_HONOR_NUMA_FAULT because callers don't expect
-	 * to fail on PROT_NONE-mapped pages.
-	 */
-	page = follow_page_mask(vma, address, foll_flags, &ctx);
-	if (ctx.pgmap)
-		put_dev_pagemap(ctx.pgmap);
-	return page;
-}
-
 static int get_gate_page(struct mm_struct *mm, unsigned long address,
 		unsigned int gup_flags, struct vm_area_struct **vma,
 		struct page **page)
@@ -2519,7 +2497,7 @@ static bool is_valid_gup_args(struct page **pages, int *locked,
 	 * These flags not allowed to be specified externally to the gup
 	 * interfaces:
 	 * - FOLL_TOUCH/FOLL_PIN/FOLL_TRIED/FOLL_FAST_ONLY are internal only
-	 * - FOLL_REMOTE is internal only and used on follow_page()
+	 * - FOLL_REMOTE is internal only, set in (get|pin)_user_pages_remote()
 	 * - FOLL_UNLOCKABLE is internal only and used if locked is !NULL
 	 */
	if (WARN_ON_ONCE(gup_flags & INTERNAL_GUP_FLAGS))

diff --git a/mm/nommu.c b/mm/nommu.c
index 40cac1348b40..385b0c15add8 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -1578,12 +1578,6 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
 	return ret;
 }

-struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
-			 unsigned int foll_flags)
-{
-	return NULL;
-}
-
 int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
 		unsigned long pfn, unsigned long size, pgprot_t prot)
 {

From patchwork Fri Aug 2 15:55:24 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13751754
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org,
 linux-s390@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 David Hildenbrand, Andrew Morton, "Matthew Wilcox (Oracle)",
 Jonathan Corbet, Christian Borntraeger, Janosch Frank,
 Claudio Imbrenda, Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
 Sven Schnelle, Gerald Schaefer
Subject: [PATCH v1 11/11] mm/ksm: convert break_ksm() from walk_page_range_vma() to folio_walk
Date: Fri, 2 Aug 2024 17:55:24 +0200
Message-ID: <20240802155524.517137-12-david@redhat.com>
In-Reply-To: <20240802155524.517137-1-david@redhat.com>
References: <20240802155524.517137-1-david@redhat.com>
Let's simplify by reusing folio_walk. Keep the existing behavior by
handling migration entries and zeropages.
Signed-off-by: David Hildenbrand
---
 mm/ksm.c | 63 ++++++++++++++------------------------------------------
 1 file changed, 16 insertions(+), 47 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 0f5b2bba4ef0..8e53666bc7b0 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -608,47 +608,6 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
 	return atomic_read(&mm->mm_users) == 0;
 }

-static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
-			struct mm_walk *walk)
-{
-	struct page *page = NULL;
-	spinlock_t *ptl;
-	pte_t *pte;
-	pte_t ptent;
-	int ret;
-
-	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
-	if (!pte)
-		return 0;
-	ptent = ptep_get(pte);
-	if (pte_present(ptent)) {
-		page = vm_normal_page(walk->vma, addr, ptent);
-	} else if (!pte_none(ptent)) {
-		swp_entry_t entry = pte_to_swp_entry(ptent);
-
-		/*
-		 * As KSM pages remain KSM pages until freed, no need to wait
-		 * here for migration to end.
-		 */
-		if (is_migration_entry(entry))
-			page = pfn_swap_entry_to_page(entry);
-	}
-	/* return 1 if the page is an normal ksm page or KSM-placed zero page */
-	ret = (page && PageKsm(page)) || is_ksm_zero_pte(ptent);
-	pte_unmap_unlock(pte, ptl);
-	return ret;
-}
-
-static const struct mm_walk_ops break_ksm_ops = {
-	.pmd_entry = break_ksm_pmd_entry,
-	.walk_lock = PGWALK_RDLOCK,
-};
-
-static const struct mm_walk_ops break_ksm_lock_vma_ops = {
-	.pmd_entry = break_ksm_pmd_entry,
-	.walk_lock = PGWALK_WRLOCK,
-};
-
 /*
  * We use break_ksm to break COW on a ksm page by triggering unsharing,
  * such that the ksm page will get replaced by an exclusive anonymous page.
@@ -665,16 +624,26 @@ static const struct mm_walk_ops break_ksm_lock_vma_ops = {
 static int break_ksm(struct vm_area_struct *vma, unsigned long addr, bool lock_vma)
 {
 	vm_fault_t ret = 0;
-	const struct mm_walk_ops *ops = lock_vma ?
-			&break_ksm_lock_vma_ops : &break_ksm_ops;
+
+	if (lock_vma)
+		vma_start_write(vma);

 	do {
-		int ksm_page;
+		bool ksm_page = false;
+		struct folio_walk fw;
+		struct folio *folio;

 		cond_resched();
-		ksm_page = walk_page_range_vma(vma, addr, addr + 1, ops, NULL);
-		if (WARN_ON_ONCE(ksm_page < 0))
-			return ksm_page;
+		folio = folio_walk_start(&fw, vma, addr,
+					 FW_MIGRATION | FW_ZEROPAGE);
+		if (folio) {
+			/* Small folio implies FW_LEVEL_PTE. */
+			if (!folio_test_large(folio) &&
+			    (folio_test_ksm(folio) || is_ksm_zero_pte(fw.pte)))
+				ksm_page = true;
+			folio_walk_end(&fw, vma);
+		}
+
 		if (!ksm_page)
 			return 0;
 		ret = handle_mm_fault(vma, addr,