From patchwork Wed Sep 7 18:01:43 2022
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12969314
From: Yang Shi
To: david@redhat.com, peterx@redhat.com, kirill.shutemov@linux.intel.com,
    jhubbard@nvidia.com, jgg@nvidia.com, hughd@google.com,
    akpm@linux-foundation.org, aneesh.kumar@linux.ibm.com
Cc: shy828301@gmail.com, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org,
    linux-kernel@vger.kernel.org
Subject: [v2 PATCH 1/2] mm: gup: fix the fast GUP race against THP collapse
Date: Wed, 7 Sep 2022 11:01:43 -0700
Message-Id: <20220907180144.555485-1-shy828301@gmail.com>

Since general RCU GUP fast was introduced in commit 2667f50e8b81 ("mm:
introduce a general RCU get_user_pages_fast()"), a TLB flush is no longer
sufficient to handle concurrent GUP-fast in all cases; it only handles
traditional IPI-based GUP-fast correctly.  On architectures that send an
IPI broadcast on TLB flush, it works as expected.  But on architectures
that do not use an IPI to broadcast the TLB flush, the below race can
happen:

              CPU A                                  CPU B
  THP collapse                           fast GUP
                                         gup_pmd_range() <-- see valid pmd
                                         gup_pte_range() <-- work on pte
  pmdp_collapse_flush() <-- clear pmd and flush
  __collapse_huge_page_isolate()
      check page pinned <-- before GUP bump refcount
                                         pin the page
                                         check PTE <-- no change
  __collapse_huge_page_copy()
      copy data to huge page
      ptep_clear()
  install huge pmd for the huge page
                                         return the stale page
  discard the stale page

The race can be fixed by checking whether the PMD has changed after taking
the page pin in fast GUP, just like what is already done for the PTE.  If
the PMD has changed, a THP collapse may be running in parallel, so GUP
should back off.

Also update the stale comment about serializing against fast GUP in
khugepaged.
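For illustration only (not part of the patch): a minimal userspace C sketch of the pin-then-recheck pattern described above.  Every name in it (pmd_val_t, pte_val_t, fake_page, try_get_page_ref(), walker_pin_page()) is a made-up stand-in rather than the kernel's gup_pte_range(), and C11 atomics stand in for the kernel's lockless page table reads.

/*
 * Illustrative sketch, not kernel code: a lockless walker pins a page and
 * then re-reads both the PMD and the PTE it used.  If either changed while
 * the pin was being taken, a collapse may be racing, so the walker drops
 * the reference and backs off, mirroring the check the patch adds.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

typedef _Atomic unsigned long pmd_val_t;  /* hypothetical stand-in for pmd_t */
typedef _Atomic unsigned long pte_val_t;  /* hypothetical stand-in for pte_t */

struct fake_page {
        _Atomic int refcount;
};

/* Simplified GUP-style speculative reference: only pin a live page. */
static bool try_get_page_ref(struct fake_page *page)
{
        int ref = atomic_load(&page->refcount);

        while (ref > 0) {
                if (atomic_compare_exchange_weak(&page->refcount, &ref, ref + 1))
                        return true;
        }
        return false;
}

/* Returns true if the page was pinned and the mapping is still stable. */
static bool walker_pin_page(pmd_val_t *pmdp, pte_val_t *ptep,
                            struct fake_page *page)
{
        unsigned long pmd = atomic_load(pmdp);  /* snapshot pmd, as gup_pmd_range() saw it */
        unsigned long pte = atomic_load(ptep);  /* snapshot pte before pinning */

        if (!try_get_page_ref(page))
                return false;

        /*
         * Re-check *both* levels after taking the reference.  Checking only
         * the pte misses the THP-collapse case where the pmd is cleared and
         * flushed while the pte page itself is still readable.
         */
        if (atomic_load(pmdp) != pmd || atomic_load(ptep) != pte) {
                atomic_fetch_sub(&page->refcount, 1);   /* back off */
                return false;
        }
        return true;
}

int main(void)
{
        pmd_val_t pmd = 0x1000;
        pte_val_t pte = 0x2000;
        struct fake_page page = { .refcount = 1 };

        /*
         * Single-threaded demo, so nothing races and the pin succeeds.  In
         * the real race a concurrent collapse rewrites *pmdp between the
         * snapshot and the re-check, and walker_pin_page() backs off instead
         * of returning a stale page.
         */
        printf("pin ok: %d\n", walker_pin_page(&pmd, &pte, &page));
        return 0;
}

The point is the order of operations: take the reference first, then re-read both levels; re-reading only the PTE is exactly what the race above slips past.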
Fixes: 2667f50e8b81 ("mm: introduce a general RCU get_user_pages_fast()")
Acked-by: David Hildenbrand
Acked-by: Peter Xu
Signed-off-by: Yang Shi
Reviewed-by: John Hubbard
---
v2: * Incorporated the comment from Peter about the comment.
    * Moved the comment right before gup_pte_range() instead of in the
      body of the function, per John.
    * Added patch 2/2 per Aneesh.

 mm/gup.c        | 34 ++++++++++++++++++++++++++++------
 mm/khugepaged.c | 10 ++++++----
 2 files changed, 34 insertions(+), 10 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index f3fc1f08d90c..40aa1c937212 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2380,8 +2380,28 @@ static void __maybe_unused undo_dev_pagemap(int *nr, int nr_start,
 }
 
 #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
-static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
-			 unsigned int flags, struct page **pages, int *nr)
+/*
+ * Fast-gup relies on pte change detection to avoid concurrent pgtable
+ * operations.
+ *
+ * To pin the page, fast-gup needs to do below in order:
+ * (1) pin the page (by prefetching pte), then (2) check pte not changed.
+ *
+ * For the rest of pgtable operations where pgtable updates can be racy
+ * with fast-gup, we need to do (1) clear pte, then (2) check whether page
+ * is pinned.
+ *
+ * Above will work for all pte-level operations, including THP split.
+ *
+ * For THP collapse, it's a bit more complicated because fast-gup may be
+ * walking a pgtable page that is being freed (pte is still valid but pmd
+ * can be cleared already).  To avoid race in such condition, we need to
+ * also check pmd here to make sure pmd doesn't change (corresponds to
+ * pmdp_collapse_flush() in the THP collapse code path).
+ */
+static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
+			 unsigned long end, unsigned int flags,
+			 struct page **pages, int *nr)
 {
 	struct dev_pagemap *pgmap = NULL;
 	int nr_start = *nr, ret = 0;
@@ -2423,7 +2443,8 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 			goto pte_unmap;
 		}
 
-		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
+		if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
+		    unlikely(pte_val(pte) != pte_val(*ptep))) {
 			gup_put_folio(folio, 1, flags);
 			goto pte_unmap;
 		}
@@ -2470,8 +2491,9 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 * get_user_pages_fast_only implementation that can pin pages.  Thus it's still
 * useful to have gup_huge_pmd even if we can't operate on ptes.
 */
-static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
-			 unsigned int flags, struct page **pages, int *nr)
+static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
+			 unsigned long end, unsigned int flags,
+			 struct page **pages, int *nr)
 {
 	return 0;
 }
@@ -2791,7 +2813,7 @@ static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned lo
 			if (!gup_huge_pd(__hugepd(pmd_val(pmd)), addr,
 					 PMD_SHIFT, next, flags, pages, nr))
 				return 0;
-		} else if (!gup_pte_range(pmd, addr, next, flags, pages, nr))
+		} else if (!gup_pte_range(pmd, pmdp, addr, next, flags, pages, nr))
 			return 0;
 	} while (pmdp++, addr = next, addr != end);

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2d74cf01f694..518b49095db3 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1049,10 +1049,12 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
 	/*
-	 * After this gup_fast can't run anymore. This also removes
-	 * any huge TLB entry from the CPU so we won't allow
-	 * huge and small TLB entries for the same virtual address
-	 * to avoid the risk of CPU bugs in that area.
+	 * This removes any huge TLB entry from the CPU so we won't allow
+	 * huge and small TLB entries for the same virtual address to
+	 * avoid the risk of CPU bugs in that area.
+	 *
+	 * Parallel fast GUP is fine since fast GUP will back off when
+	 * it detects PMD is changed.
 	 */
 	_pmd = pmdp_collapse_flush(vma, address, pmd);
 	spin_unlock(pmd_ptl);
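As a side note on why re-checking the PMD is sufficient: the two paths form the usual "publish your own flag, then check the other side" pattern.  Below is a hedged sketch with hypothetical names, where C11 seq_cst atomics stand in for the ordering the kernel actually gets from pmdp_collapse_flush(), the TLB flush and the page refcount operations.

/*
 * Illustrative sketch, not kernel code.  khugepaged clears the pmd before it
 * looks at the pin count; fast GUP takes its reference before it re-reads
 * the pmd.  With that ordering, at least one side must notice the other.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static _Atomic unsigned long pmd_entry = 0x1000;  /* shared pmd slot */
static _Atomic int page_refs = 1;                 /* simplified refcount: 1 == unpinned */

/* Collapse side: clear the pmd first, only then look for extra references. */
static bool collapse_can_proceed(void)
{
        atomic_store(&pmd_entry, 0);            /* pmdp_collapse_flush() analogue */
        return atomic_load(&page_refs) == 1;    /* __collapse_huge_page_isolate() check */
}

/* Fast-GUP side: take the reference first, only then re-read the pmd. */
static bool gup_can_keep_pin(unsigned long pmd_seen)
{
        atomic_fetch_add(&page_refs, 1);                /* pin the page */
        if (atomic_load(&pmd_entry) != pmd_seen) {      /* pmd changed: collapse racing */
                atomic_fetch_sub(&page_refs, 1);        /* back off */
                return false;
        }
        return true;
}

int main(void)
{
        unsigned long pmd_seen = atomic_load(&pmd_entry);

        printf("gup keeps its pin:    %d\n", gup_can_keep_pin(pmd_seen));  /* 1 */
        printf("collapse may proceed: %d\n", collapse_can_proceed());      /* 0: page is pinned */
        return 0;
}

Under sequential consistency the two loads cannot both miss the other side's store: either khugepaged sees the extra reference and gives up on this page, or fast GUP sees the cleared PMD and drops its pin.  The kernel derives the same ordering from its own barriers rather than from seq_cst.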
From patchwork Wed Sep 7 18:01:44 2022
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12969315
From: Yang Shi
To: david@redhat.com, peterx@redhat.com, kirill.shutemov@linux.intel.com,
    jhubbard@nvidia.com, jgg@nvidia.com, hughd@google.com,
    akpm@linux-foundation.org, aneesh.kumar@linux.ibm.com
Cc: shy828301@gmail.com, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org,
    linux-kernel@vger.kernel.org
Subject: [v2 PATCH 2/2] powerpc/64s/radix: don't need to broadcast IPI for radix pmd collapse flush
Date: Wed, 7 Sep 2022 11:01:44 -0700
Message-Id: <20220907180144.555485-2-shy828301@gmail.com>
In-Reply-To: <20220907180144.555485-1-shy828301@gmail.com>
References: <20220907180144.555485-1-shy828301@gmail.com>
The IPI broadcast is used to serialize against fast GUP, but fast GUP will
move to using RCU instead of disabling local interrupts.  The IPI is the
old-style way of serializing against fast GUP, although it still works as
expected today.  And fast GUP now handles the potential race with THP
collapse by checking whether the PMD has changed, so the IPI broadcast in
the radix pmd collapse flush is not necessary anymore.  But it is still
needed for the hash TLB.

Suggested-by: Aneesh Kumar K.V
Signed-off-by: Yang Shi
Acked-by: David Hildenbrand
Acked-by: Peter Xu
---
 arch/powerpc/mm/book3s64/radix_pgtable.c | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 698274109c91..e712f80fe189 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -937,15 +937,6 @@ pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addre
 	pmd = *pmdp;
 	pmd_clear(pmdp);
 
-	/*
-	 * pmdp collapse_flush need to ensure that there are no parallel gup
-	 * walk after this call. This is needed so that we can have stable
-	 * page ref count when collapsing a page. We don't allow a collapse page
-	 * if we have gup taken on the page. We can ensure that by sending IPI
-	 * because gup walk happens with IRQ disabled.
-	 */
-	serialize_against_pte_lookup(vma->vm_mm);
-
 	radix__flush_tlb_collapsed_pmd(vma->vm_mm, address);
 
 	return pmd;
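For background on why an empty IPI ever serialized anything: the IRQ-disabled fast-GUP walk behaves like a read-side critical section, and broadcasting an IPI with wait semantics behaves like a writer that cannot return until every such section has drained.  The rwlock below is a hypothetical userspace model of that idea, not the powerpc serialize_against_pte_lookup() code.

/*
 * Illustrative analogue, not kernel code: "IRQs off during the walk" is
 * modelled as holding a read lock, and "send an IPI to every CPU and wait"
 * is modelled as taking and dropping the write lock, which cannot complete
 * while any walker is still inside its critical section.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t walk_lock = PTHREAD_RWLOCK_INITIALIZER;

static void lockless_walk(void)
{
        pthread_rwlock_rdlock(&walk_lock);      /* ~ IRQs disabled in fast GUP */
        /* ... walk the page tables and pin pages here ... */
        pthread_rwlock_unlock(&walk_lock);      /* ~ IRQs re-enabled */
}

static void serialize_against_walkers(void)
{
        /*
         * ~ empty-IPI broadcast with wait: returns only once every walker
         * that could have seen the old pmd has left its critical section.
         */
        pthread_rwlock_wrlock(&walk_lock);
        pthread_rwlock_unlock(&walk_lock);
}

int main(void)
{
        lockless_walk();
        serialize_against_walkers();
        puts("all lockless walkers drained");
        return 0;
}

Once fast GUP backs off on a changed PMD by itself, the radix path no longer needs this drain step, which is what the hunk above removes; the hash MMU path still relies on it.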