From patchwork Mon Aug 26 20:43:36 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13778436
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gavin Shan, Catalin Marinas, x86@kernel.org, Ingo Molnar, Andrew Morton,
    Paolo Bonzini, Dave Hansen, Thomas Gleixner, Alistair Popple,
    kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    Sean Christopherson, peterx@redhat.com, Oscar Salvador, Jason Gunthorpe,
    Borislav Petkov, Zi Yan, Axel Rasmussen, David Hildenbrand, Yan Zhao,
    Will Deacon, Kefeng Wang, Alex Williamson, Matthew Wilcox,
    "Aneesh Kumar K.V"
Subject: [PATCH v2 02/19] mm: Drop is_huge_zero_pud()
Date: Mon, 26 Aug 2024 16:43:36 -0400
Message-ID: <20240826204353.2228736-3-peterx@redhat.com>
In-Reply-To: <20240826204353.2228736-1-peterx@redhat.com>
References: <20240826204353.2228736-1-peterx@redhat.com>

is_huge_zero_pud() has constantly returned false since 2017. An assertion
was added in 2019, but it could never have triggered; in other words, the
condition it guards is what should be asserted instead. Since the huge
zero pud has not existed for seven years, drop the helper now and
reintroduce it only when it is actually needed.
Cc: Matthew Wilcox
Cc: Aneesh Kumar K.V
Acked-by: David Hildenbrand
Reviewed-by: Jason Gunthorpe
Signed-off-by: Peter Xu
---
 include/linux/huge_mm.h | 10 ----------
 mm/huge_memory.c        | 13 +------------
 2 files changed, 1 insertion(+), 22 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 4902e2f7e896..b550b5a248bb 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -433,11 +433,6 @@ static inline bool is_huge_zero_pmd(pmd_t pmd)
 	return pmd_present(pmd) && READ_ONCE(huge_zero_pfn) == pmd_pfn(pmd);
 }

-static inline bool is_huge_zero_pud(pud_t pud)
-{
-	return false;
-}
-
 struct folio *mm_get_huge_zero_folio(struct mm_struct *mm);
 void mm_put_huge_zero_folio(struct mm_struct *mm);

@@ -578,11 +573,6 @@ static inline bool is_huge_zero_pmd(pmd_t pmd)
 	return false;
 }

-static inline bool is_huge_zero_pud(pud_t pud)
-{
-	return false;
-}
-
 static inline void mm_put_huge_zero_folio(struct mm_struct *mm)
 {
 	return;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a81eab98d6b8..3f74b09ada38 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1429,10 +1429,8 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
 	ptl = pud_lock(mm, pud);
 	if (!pud_none(*pud)) {
 		if (write) {
-			if (pud_pfn(*pud) != pfn_t_to_pfn(pfn)) {
-				WARN_ON_ONCE(!is_huge_zero_pud(*pud));
+			if (WARN_ON_ONCE(pud_pfn(*pud) != pfn_t_to_pfn(pfn)))
 				goto out_unlock;
-			}
 			entry = pud_mkyoung(*pud);
 			entry = maybe_pud_mkwrite(pud_mkdirty(entry), vma);
 			if (pudp_set_access_flags(vma, addr, pud, entry, 1))
@@ -1680,15 +1678,6 @@ int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	if (unlikely(!pud_trans_huge(pud) && !pud_devmap(pud)))
 		goto out_unlock;

-	/*
-	 * When page table lock is held, the huge zero pud should not be
-	 * under splitting since we don't split the page itself, only pud to
-	 * a page table.
-	 */
-	if (is_huge_zero_pud(pud)) {
-		/* No huge zero pud yet */
-	}
-
 	/*
 	 * TODO: once we support anonymous pages, use
 	 * folio_try_dup_anon_rmap_*() and split if duplicating fails.