From patchwork Tue Dec 19 07:55:38 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Xu <peterx@redhat.com>
X-Patchwork-Id: 13497981
From: peterx@redhat.com
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Matthew Wilcox, Christophe Leroy, Lorenzo Stoakes, David Hildenbrand,
    Vlastimil Babka, Mike Kravetz, Mike Rapoport, Christoph Hellwig,
    John Hubbard, Andrew Jones, linux-arm-kernel@lists.infradead.org,
    Michael Ellerman, "Kirill A . Shutemov", linuxppc-dev@lists.ozlabs.org,
    Rik van Riel, linux-riscv@lists.infradead.org, Yang Shi, James Houghton,
    "Aneesh Kumar K . V", Andrew Morton, Jason Gunthorpe, Andrea Arcangeli,
    peterx@redhat.com, Axel Rasmussen
Subject: [PATCH 13/13] mm/gup: Handle hugetlb in the generic follow_page_mask code
Date: Tue, 19 Dec 2023 15:55:38 +0800
Message-ID: <20231219075538.414708-14-peterx@redhat.com>
In-Reply-To: <20231219075538.414708-1-peterx@redhat.com>
References: <20231219075538.414708-1-peterx@redhat.com>
MIME-Version: 1.0

From: Peter Xu <peterx@redhat.com>

Now follow_page() is ready to handle hugetlb pages in whatever form, and
across all architectures. Switch to the generic code path.

Time to retire hugetlb_follow_page_mask(), following the previous
retirement of follow_hugetlb_page() in 4849807114b8.

There may be a slight difference in how the loops run when processing
slow GUP over a large hugetlb range on archs that support
cont_pte/cont_pmd: with the patch applied, each iteration of
__get_user_pages() resolves one page table entry, rather than stepping by
the size of the hugetlb hstate, which may cover multiple entries in one
iteration.
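
As a concrete illustration, assuming an arm64 configuration with 4K base
pages, where a CONT_PTE hugetlb page is 64K backed by 16 contiguous PTEs
(numbers are illustrative only):

  before: 1 iteration of __get_user_pages() per 64K page (step = hstate size)
  after: 16 iterations of __get_user_pages() per 64K page (step = one PTE)

That extra iteration count is where the degradation measured below comes
from.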
A quick performance test on an aarch64 VM on an M1 chip shows a 15%
degradation over a tight loop of slow gup after the path is switched.
That shouldn't be a problem, because slow gup is not a hot path for GUP in
general: when the page is present, fast gup will already succeed, while
when the page is indeed missing and requires a follow-up page fault, the
slow-gup degradation will probably be buried in the fault paths anyway.
It also explains why slow gup for THP used to be very slow before
57edfcfd3419 ("mm/gup: accelerate thp gup even for "pages != NULL"")
landed; that is not part of this performance analysis, just a side
benefit. If the performance becomes a concern, we can consider handling
CONT_PTE in follow_page(). Until that is justified as necessary, keep
everything clean and simple.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/hugetlb.h |  7 ----
 mm/gup.c                | 15 +++------
 mm/hugetlb.c            | 71 -----------------------------------------
 3 files changed, 5 insertions(+), 88 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index f8c5c174c8a6..8a352d577ebf 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -332,13 +332,6 @@ static inline void hugetlb_zap_end(
 {
 }
 
-static inline struct page *hugetlb_follow_page_mask(
-                struct vm_area_struct *vma, unsigned long address, unsigned int flags,
-                unsigned int *page_mask)
-{
-        BUILD_BUG(); /* should never be compiled in if !CONFIG_HUGETLB_PAGE*/
-}
-
 static inline int copy_hugetlb_page_range(struct mm_struct *dst,
                                           struct mm_struct *src,
                                           struct vm_area_struct *dst_vma,
diff --git a/mm/gup.c b/mm/gup.c
index 14a7d13e7bd6..f34c0a912311 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -997,18 +997,11 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 {
         pgd_t *pgd, pgdval;
         struct mm_struct *mm = vma->vm_mm;
+        struct page *page;
 
-        ctx->page_mask = 0;
-
-        /*
-         * Call hugetlb_follow_page_mask for hugetlb vmas as it will use
-         * special hugetlb page table walking code. This eliminates the
-         * need to check for hugetlb entries in the general walking code.
-         */
-        if (is_vm_hugetlb_page(vma))
-                return hugetlb_follow_page_mask(vma, address, flags,
-                                                &ctx->page_mask);
+        vma_pgtable_walk_begin(vma);
+        ctx->page_mask = 0;
 
         pgd = pgd_offset(mm, address);
         pgdval = *pgd;
 
@@ -1020,6 +1013,8 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
         else
                 page = follow_p4d_mask(vma, address, pgd, flags, ctx);
 
+        vma_pgtable_walk_end(vma);
+
         return page;
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 29705e5c6f40..3013122a739f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6783,77 +6783,6 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 }
 #endif /* CONFIG_USERFAULTFD */
 
-struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
-                                      unsigned long address, unsigned int flags,
-                                      unsigned int *page_mask)
-{
-        struct hstate *h = hstate_vma(vma);
-        struct mm_struct *mm = vma->vm_mm;
-        unsigned long haddr = address & huge_page_mask(h);
-        struct page *page = NULL;
-        spinlock_t *ptl;
-        pte_t *pte, entry;
-        int ret;
-
-        hugetlb_vma_lock_read(vma);
-        pte = hugetlb_walk(vma, haddr, huge_page_size(h));
-        if (!pte)
-                goto out_unlock;
-
-        ptl = huge_pte_lock(h, mm, pte);
-        entry = huge_ptep_get(pte);
-        if (pte_present(entry)) {
-                page = pte_page(entry);
-
-                if (!huge_pte_write(entry)) {
-                        if (flags & FOLL_WRITE) {
-                                page = NULL;
-                                goto out;
-                        }
-
-                        if (gup_must_unshare(vma, flags, page)) {
-                                /* Tell the caller to do unsharing */
-                                page = ERR_PTR(-EMLINK);
-                                goto out;
-                        }
-                }
-
-                page = nth_page(page, ((address & ~huge_page_mask(h)) >> PAGE_SHIFT));
-
-                /*
-                 * Note that page may be a sub-page, and with vmemmap
-                 * optimizations the page struct may be read only.
-                 * try_grab_page() will increase the ref count on the
-                 * head page, so this will be OK.
-                 *
-                 * try_grab_page() should always be able to get the page here,
-                 * because we hold the ptl lock and have verified pte_present().
-                 */
-                ret = try_grab_page(page, flags);
-
-                if (WARN_ON_ONCE(ret)) {
-                        page = ERR_PTR(ret);
-                        goto out;
-                }
-
-                *page_mask = (1U << huge_page_order(h)) - 1;
-        }
-out:
-        spin_unlock(ptl);
-out_unlock:
-        hugetlb_vma_unlock_read(vma);
-
-        /*
-         * Fixup retval for dump requests: if pagecache doesn't exist,
-         * don't try to allocate a new page but just skip it.
-         */
-        if (!page && (flags & FOLL_DUMP) &&
-            !hugetlbfs_pagecache_present(h, vma, address))
-                page = ERR_PTR(-EFAULT);
-
-        return page;
-}
-
 long hugetlb_change_protection(struct vm_area_struct *vma,
                 unsigned long address, unsigned long end,
                 pgprot_t newprot, unsigned long cp_flags)
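
For reference, a minimal user-space sketch of the kind of workload that
drives GUP over hugetlb memory, assuming a 2MB hugetlb pool has been
reserved (e.g. via /proc/sys/vm/nr_hugepages) and that "testfile" is an
existing file of at least 2MB on a filesystem supporting O_DIRECT; the
file name and sizes here are purely illustrative:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define BUF_SIZE (2UL << 20)        /* one 2MB hugetlb page */

int main(void)
{
        void *buf;
        ssize_t ret;
        int fd;

        /* Back the buffer with hugetlb pages so GUP walks a hugetlb mapping */
        buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (buf == MAP_FAILED) {
                perror("mmap(MAP_HUGETLB)");
                return 1;
        }

        /* O_DIRECT I/O pins the destination pages via GUP before reading */
        fd = open("testfile", O_RDONLY | O_DIRECT);
        if (fd < 0) {
                perror("open(O_DIRECT)");
                return 1;
        }

        ret = read(fd, buf, BUF_SIZE);
        if (ret < 0)
                perror("read");
        else
                printf("read %zd bytes into the hugetlb buffer\n", ret);

        close(fd);
        munmap(buf, BUF_SIZE);
        return 0;
}

On a freshly mapped buffer the pages are not yet faulted in, so the pin
typically falls back from fast gup to the slow-gup path discussed above.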