From patchwork Thu Mar 21 22:07:57 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13599428
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org, Michael Ellerman, Christophe Leroy,
    Matthew Wilcox, Rik van Riel, Lorenzo Stoakes, Axel Rasmussen,
    peterx@redhat.com, Yang Shi, John Hubbard,
    linux-arm-kernel@lists.infradead.org, "Kirill A . Shutemov",
    Andrew Jones, Vlastimil Babka, Mike Rapoport, Andrew Morton,
    Muchun Song, Christoph Hellwig, linux-riscv@lists.infradead.org,
    James Houghton, David Hildenbrand, Jason Gunthorpe, Andrea Arcangeli,
    "Aneesh Kumar K . V", Mike Kravetz
Subject: [PATCH v3 07/12] mm/gup: Handle hugetlb for no_page_table()
Date: Thu, 21 Mar 2024 18:07:57 -0400
Message-ID: <20240321220802.679544-8-peterx@redhat.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240321220802.679544-1-peterx@redhat.com>
References: <20240321220802.679544-1-peterx@redhat.com>

From: Peter Xu

no_page_table() is not yet used for hugetlb code paths; prepare it for that.

The major difference is that hugetlb will return -EFAULT as long as the page
cache does not contain a page at the given address, even if VM_SHARED.  See
hugetlb_follow_page_mask().

Pass "address" into no_page_table() too, as hugetlb will need it.
Reviewed-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
Signed-off-by: Peter Xu
---
 mm/gup.c | 44 ++++++++++++++++++++++++++------------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index f3ae8f6ce8a4..943671736080 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -501,19 +501,27 @@ static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
 
 #ifdef CONFIG_MMU
 static struct page *no_page_table(struct vm_area_struct *vma,
-				  unsigned int flags)
+				  unsigned int flags, unsigned long address)
 {
+	if (!(flags & FOLL_DUMP))
+		return NULL;
+
 	/*
-	 * When core dumping an enormous anonymous area that nobody
-	 * has touched so far, we don't want to allocate unnecessary pages or
+	 * When core dumping, we don't want to allocate unnecessary pages or
 	 * page tables.  Return error instead of NULL to skip handle_mm_fault,
 	 * then get_dump_page() will return NULL to leave a hole in the dump.
 	 * But we can only make this optimization where a hole would surely
 	 * be zero-filled if handle_mm_fault() actually did handle it.
 	 */
-	if ((flags & FOLL_DUMP) &&
-	    (vma_is_anonymous(vma) || !vma->vm_ops->fault))
+	if (is_vm_hugetlb_page(vma)) {
+		struct hstate *h = hstate_vma(vma);
+
+		if (!hugetlbfs_pagecache_present(h, vma, address))
+			return ERR_PTR(-EFAULT);
+	} else if ((vma_is_anonymous(vma) || !vma->vm_ops->fault)) {
 		return ERR_PTR(-EFAULT);
+	}
+
 	return NULL;
 }
 
@@ -593,7 +601,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 
 	ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
 	if (!ptep)
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	pte = ptep_get(ptep);
 	if (!pte_present(pte))
 		goto no_page;
@@ -685,7 +693,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	pte_unmap_unlock(ptep, ptl);
 	if (!pte_none(pte))
 		return NULL;
-	return no_page_table(vma, flags);
+	return no_page_table(vma, flags, address);
 }
 
 static struct page *follow_pmd_mask(struct vm_area_struct *vma,
@@ -701,27 +709,27 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 	pmd = pmd_offset(pudp, address);
 	pmdval = pmdp_get_lockless(pmd);
 	if (pmd_none(pmdval))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (!pmd_present(pmdval))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (pmd_devmap(pmdval)) {
 		ptl = pmd_lock(mm, pmd);
 		page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
 		spin_unlock(ptl);
 		if (page)
 			return page;
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (likely(!pmd_trans_huge(pmdval)))
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 
 	if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	ptl = pmd_lock(mm, pmd);
 	if (unlikely(!pmd_present(*pmd))) {
 		spin_unlock(ptl);
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (unlikely(!pmd_trans_huge(*pmd))) {
 		spin_unlock(ptl);
@@ -752,17 +760,17 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 
 	pud = pud_offset(p4dp, address);
 	if (pud_none(*pud))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (pud_devmap(*pud)) {
 		ptl = pud_lock(mm, pud);
 		page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
 		spin_unlock(ptl);
 		if (page)
 			return page;
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (unlikely(pud_bad(*pud)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_pmd_mask(vma, address, pud, flags, ctx);
 }
@@ -777,10 +785,10 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
 	p4dp = p4d_offset(pgdp, address);
 	p4d = READ_ONCE(*p4dp);
 	if (!p4d_present(p4d))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	BUILD_BUG_ON(p4d_leaf(p4d));
 	if (unlikely(p4d_bad(p4d)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_pud_mask(vma, address, p4dp, flags, ctx);
 }
@@ -830,7 +838,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 	pgd = pgd_offset(mm, address);
 
 	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_p4d_mask(vma, address, pgd, flags, ctx);
 }
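[Editor's note] For readers who want the net effect without walking the hunks, below is a
condensed restatement of no_page_table() as it looks after this patch. It is a sketch only:
it re-expresses what the first hunk already adds, inlines the local "h" variable, and assumes
it lives in mm/gup.c (which supplies the needed includes, e.g. <linux/hugetlb.h> and the
internal FOLL_DUMP definition); it is not additional code on top of the patch.

static struct page *no_page_table(struct vm_area_struct *vma,
				  unsigned int flags, unsigned long address)
{
	/* Only core dumps (FOLL_DUMP) care about the -EFAULT "hole" hint. */
	if (!(flags & FOLL_DUMP))
		return NULL;

	if (is_vm_hugetlb_page(vma)) {
		/*
		 * hugetlb: report a hole whenever the page cache has no page
		 * at this address, even if VM_SHARED, matching
		 * hugetlb_follow_page_mask().
		 */
		if (!hugetlbfs_pagecache_present(hstate_vma(vma), vma, address))
			return ERR_PTR(-EFAULT);
	} else if (vma_is_anonymous(vma) || !vma->vm_ops->fault) {
		/* Anonymous or fault-less mappings: the hole is surely zero-filled. */
		return ERR_PTR(-EFAULT);
	}

	return NULL;
}

The remaining hunks are mechanical: every existing no_page_table(vma, flags) caller in the
page-table walk (pgd/p4d/pud/pmd/pte levels) now also passes "address", which only the
hugetlb branch above consumes.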