From patchwork Thu Feb 18 23:12:06 2021
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 12094507
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: peterx@redhat.com, Andrea Arcangeli, Axel Rasmussen, Mike Rapoport,
    "Kirill A. Shutemov", Andrew Morton, Matthew Wilcox, Mike Kravetz
Subject: [PATCH v4 4/4] hugetlb/userfaultfd: Unshare all pmds for hugetlbfs when register wp
Date: Thu, 18 Feb 2021 18:12:06 -0500
Message-Id: <20210218231206.15524-1-peterx@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210218230633.15028-1-peterx@redhat.com>
References: <20210218230633.15028-1-peterx@redhat.com>

Huge pmd sharing for hugetlbfs is racy with userfaultfd-wp because
userfaultfd-wp is always based on pgtable entries, so those entries cannot
be shared.

Walk the hugetlb range and unshare all such mappings, if any, right before
UFFDIO_REGISTER succeeds and returns to userspace.

This pairs with want_pmd_share() in the hugetlb code so that huge pmd
sharing is completely disabled for any userfaultfd-wp registered range.
Reviewed-by: Mike Kravetz
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 fs/userfaultfd.c        |  4 ++++
 include/linux/hugetlb.h |  3 +++
 mm/hugetlb.c            | 51 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 58 insertions(+)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 894cc28142e7..e259318fcae1 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -15,6 +15,7 @@
 #include <linux/sched/signal.h>
 #include <linux/sched/mm.h>
 #include <linux/mm.h>
+#include <linux/mmu_notifier.h>
 #include <linux/poll.h>
 #include <linux/slab.h>
 #include <linux/seq_file.h>
@@ -1448,6 +1449,9 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
 		vma->vm_flags = new_flags;
 		vma->vm_userfaultfd_ctx.ctx = ctx;
 
+		if (is_vm_hugetlb_page(vma) && uffd_disable_huge_pmd_share(vma))
+			hugetlb_unshare_all_pmds(vma);
+
 	skip:
 		prev = vma;
 		start = vma->vm_end;
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 3b4104021dd3..6437483ad01b 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -188,6 +188,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 		unsigned long address, unsigned long end, pgprot_t newprot);
 
 bool is_hugetlb_entry_migration(pte_t pte);
+void hugetlb_unshare_all_pmds(struct vm_area_struct *vma);
 
 #else /* !CONFIG_HUGETLB_PAGE */
 
@@ -369,6 +370,8 @@ static inline vm_fault_t hugetlb_fault(struct mm_struct *mm,
 	return 0;
 }
 
+static inline void hugetlb_unshare_all_pmds(struct vm_area_struct *vma) { }
+
 #endif /* !CONFIG_HUGETLB_PAGE */
 /*
  * hugepages at page global directory. If arch support
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f53a0b852ed8..fc62932c31cb 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5653,6 +5653,57 @@ void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason)
 	}
 }
 
+/*
+ * This function will unconditionally remove all the shared pmd pgtable entries
+ * within the specific vma for a hugetlbfs memory range.
+ */
+void hugetlb_unshare_all_pmds(struct vm_area_struct *vma)
+{
+	struct hstate *h = hstate_vma(vma);
+	unsigned long sz = huge_page_size(h);
+	struct mm_struct *mm = vma->vm_mm;
+	struct mmu_notifier_range range;
+	unsigned long address, start, end;
+	spinlock_t *ptl;
+	pte_t *ptep;
+
+	if (!(vma->vm_flags & VM_MAYSHARE))
+		return;
+
+	start = ALIGN(vma->vm_start, PUD_SIZE);
+	end = ALIGN_DOWN(vma->vm_end, PUD_SIZE);
+
+	if (start >= end)
+		return;
+
+	/*
+	 * No need to call adjust_range_if_pmd_sharing_possible(), because
+	 * we have already done the PUD_SIZE alignment.
+	 */
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm,
+				start, end);
+	mmu_notifier_invalidate_range_start(&range);
+	i_mmap_lock_write(vma->vm_file->f_mapping);
+	for (address = start; address < end; address += PUD_SIZE) {
+		unsigned long tmp = address;
+
+		ptep = huge_pte_offset(mm, address, sz);
+		if (!ptep)
+			continue;
+		ptl = huge_pte_lock(h, mm, ptep);
+		/* We don't want 'address' to be changed */
+		huge_pmd_unshare(mm, vma, &tmp, ptep);
+		spin_unlock(ptl);
+	}
+	flush_hugetlb_tlb_range(vma, start, end);
+	i_mmap_unlock_write(vma->vm_file->f_mapping);
+	/*
+	 * No need to call mmu_notifier_invalidate_range(), see
+	 * Documentation/vm/mmu_notifier.rst.
+	 */
+	mmu_notifier_invalidate_range_end(&range);
+}
+
 #ifdef CONFIG_CMA
 static bool cma_reserve_called __initdata;
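
For reference, below is a minimal userspace sketch (my illustration, not
part of the patch) of the path this change hooks into: mapping a shared
hugetlbfs file and registering the range with UFFDIO_REGISTER_MODE_WP,
which after this series makes the kernel run the hugetlb_unshare_all_pmds()
walk before the ioctl returns. The /dev/hugepages mount point and the file
name are assumptions, and it assumes a kernel with this series applied
(i.e. one that accepts wp-mode registration on hugetlbfs) plus enough huge
pages reserved to back a 1GB mapping.

#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Span at least one PUD (1GB on x86_64) so huge pmd sharing is possible. */
#define LEN	(1UL << 30)

int main(void)
{
	struct uffdio_api api = { .api = UFFD_API };
	struct uffdio_register reg;
	int uffd, fd;
	void *addr;

	/* Open a userfaultfd and complete the API handshake. */
	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api) < 0) {
		perror("userfaultfd/UFFDIO_API");
		exit(1);
	}

	/*
	 * A MAP_SHARED hugetlbfs mapping: VM_MAYSHARE is the precondition
	 * for huge pmd sharing that the patch has to undo.
	 */
	fd = open("/dev/hugepages/uffd-wp-test", O_CREAT | O_RDWR, 0600);
	if (fd < 0 || ftruncate(fd, LEN) < 0) {
		perror("hugetlbfs file");
		exit(1);
	}
	addr = mmap(NULL, LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		exit(1);
	}

	/*
	 * Register the range in write-protect mode.  With this patch the
	 * kernel walks [addr, addr + LEN) and unshares any shared pmds
	 * before UFFDIO_REGISTER returns, so per-pte wp state never sits
	 * in a pgtable page visible to another process.
	 */
	reg.range.start = (unsigned long)addr;
	reg.range.len = LEN;
	reg.mode = UFFDIO_REGISTER_MODE_WP;
	if (ioctl(uffd, UFFDIO_REGISTER, &reg) < 0) {
		perror("UFFDIO_REGISTER");
		exit(1);
	}

	printf("uffd-wp registered on %lu bytes of hugetlbfs\n", LEN);
	return 0;
}

To actually observe unsharing, a second process would map and touch the
same file at a PUD-aligned address beforehand; the registration above then
drops this process's reference to the shared pgtable page.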