From patchwork Wed Feb 5 15:09:47 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13961285
From: Ryan Roberts <ryan.roberts@arm.com>
To: Catalin Marinas, Will Deacon, Muchun Song, Pasha Tatashin,
	Andrew Morton, Uladzislau Rezki, Christoph Hellwig, Mark Rutland,
	Ard Biesheuvel, Anshuman Khandual, Dev Jain, Alexandre Ghiti,
	Steve Capper, Kevin Brodsky
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v1 07/16] arm64: hugetlb: Use ___set_ptes() and ___ptep_get_and_clear()
Date: Wed, 5 Feb 2025 15:09:47 +0000
Message-ID: <20250205151003.88959-8-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250205151003.88959-1-ryan.roberts@arm.com>
References: <20250205151003.88959-1-ryan.roberts@arm.com>

Refactor the huge_pte helpers to use the new generic ___set_ptes() and
___ptep_get_and_clear() APIs.

This provides two benefits. First, when page_table_check=on, hugetlb is
now properly/fully checked; previously only the first page of a hugetlb
folio was checked. Second, instead of having to call __set_ptes(nr=1)
for each pte in a loop, the whole contiguous batch can now be set in
one go, which enables some efficiencies and cleans up the code.

One detail to note is that huge_ptep_clear_flush() was previously
calling ptep_clear_flush() for a non-contiguous pte (i.e. a pud or pmd
block mapping). This has a couple of disadvantages. First,
ptep_clear_flush() calls ptep_get_and_clear(), which transparently
handles contpte; given we only call it for non-contiguous ptes, that
would be safe but a waste of effort, and it is preferable to go
straight to the layer below. More problematic is that
ptep_get_and_clear() is for PAGE_SIZE entries, so it calls
page_table_check_pte_clear() and would not clear the whole hugetlb
folio. So let's stop special-casing the non-cont case and just rely on
get_clear_contig_flush() to do the right thing for non-cont entries.
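To make the batching change concrete, the sketch below contrasts the old
per-pte loop with the new batched call, using the shapes that appear in
the diff further down. The variable names (ncontig, pgsize, pfn, dpfn,
hugeprot) are taken from the existing arm64 hugetlb code, and the exact
___set_ptes() semantics are inferred from how this patch uses it; treat
it as an illustration, not an authoritative statement of the API:

	/*
	 * Before: each pte in the contiguous range is written
	 * individually, so the caller must step the pfn itself and
	 * page_table_check only ever sees single-entry updates.
	 */
	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
		__set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);

	/*
	 * After: one call covers the whole contiguous batch. Passing
	 * pgsize lets the generic helper advance the pfn per entry and
	 * lets page_table_check see the full extent of the mapping.
	 */
	___set_ptes(mm, ptep, pte, ncontig, pgsize);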
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/mm/hugetlbpage.c | 50 ++++++++-----------------------------
 1 file changed, 11 insertions(+), 39 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index e870d01d12ea..02afee31444e 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -166,12 +166,12 @@ static pte_t get_clear_contig(struct mm_struct *mm,
 	pte_t pte, tmp_pte;
 	bool present;
 
-	pte = __ptep_get_and_clear(mm, addr, ptep);
+	pte = ___ptep_get_and_clear(mm, ptep, pgsize);
 	present = pte_present(pte);
 	while (--ncontig) {
 		ptep++;
 		addr += pgsize;
-		tmp_pte = __ptep_get_and_clear(mm, addr, ptep);
+		tmp_pte = ___ptep_get_and_clear(mm, ptep, pgsize);
 		if (present) {
 			if (pte_dirty(tmp_pte))
 				pte = pte_mkdirty(pte);
@@ -215,7 +215,7 @@ static void clear_flush(struct mm_struct *mm,
 	unsigned long i, saddr = addr;
 
 	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
-		__ptep_get_and_clear(mm, addr, ptep);
+		___ptep_get_and_clear(mm, ptep, pgsize);
 
 	__flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
 }
@@ -226,32 +226,20 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 	size_t pgsize;
 	int i;
 	int ncontig;
-	unsigned long pfn, dpfn;
-	pgprot_t hugeprot;
 
 	ncontig = num_contig_ptes(sz, &pgsize);
 
 	if (!pte_present(pte)) {
 		for (i = 0; i < ncontig; i++, ptep++, addr += pgsize)
-			__set_ptes(mm, addr, ptep, pte, 1);
+			___set_ptes(mm, ptep, pte, 1, pgsize);
 		return;
 	}
 
-	if (!pte_cont(pte)) {
-		__set_ptes(mm, addr, ptep, pte, 1);
-		return;
-	}
-
-	pfn = pte_pfn(pte);
-	dpfn = pgsize >> PAGE_SHIFT;
-	hugeprot = pte_pgprot(pte);
-
 	/* Only need to "break" if transitioning valid -> valid. */
-	if (pte_valid(__ptep_get(ptep)))
+	if (pte_cont(pte) && pte_valid(__ptep_get(ptep)))
 		clear_flush(mm, addr, ptep, pgsize, ncontig);
 
-	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
-		__set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
+	___set_ptes(mm, ptep, pte, ncontig, pgsize);
 }
 
 pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
@@ -441,11 +429,9 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 			       unsigned long addr, pte_t *ptep,
 			       pte_t pte, int dirty)
 {
-	int ncontig, i;
+	int ncontig;
 	size_t pgsize = 0;
-	unsigned long pfn = pte_pfn(pte), dpfn;
 	struct mm_struct *mm = vma->vm_mm;
-	pgprot_t hugeprot;
 	pte_t orig_pte;
 
 	VM_WARN_ON(!pte_present(pte));
@@ -454,7 +440,6 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 		return __ptep_set_access_flags(vma, addr, ptep, pte, dirty);
 
 	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
-	dpfn = pgsize >> PAGE_SHIFT;
 
 	if (!__cont_access_flags_changed(ptep, pte, ncontig))
 		return 0;
@@ -469,19 +454,14 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 	if (pte_young(orig_pte))
 		pte = pte_mkyoung(pte);
 
-	hugeprot = pte_pgprot(pte);
-	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
-		__set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
-
+	___set_ptes(mm, ptep, pte, ncontig, pgsize);
 	return 1;
 }
 
 void huge_ptep_set_wrprotect(struct mm_struct *mm,
 			     unsigned long addr, pte_t *ptep)
 {
-	unsigned long pfn, dpfn;
-	pgprot_t hugeprot;
-	int ncontig, i;
+	int ncontig;
 	size_t pgsize;
 	pte_t pte;
 
@@ -494,16 +474,11 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
 	}
 
 	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
-	dpfn = pgsize >> PAGE_SHIFT;
 
 	pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
 	pte = pte_wrprotect(pte);
 
-	hugeprot = pte_pgprot(pte);
-	pfn = pte_pfn(pte);
-
-	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
-		__set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
+	___set_ptes(mm, ptep, pte, ncontig, pgsize);
 }
 
 pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
@@ -517,10 +492,7 @@ pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
 	pte = __ptep_get(ptep);
 	VM_WARN_ON(!pte_present(pte));
 
-	if (!pte_cont(pte))
-		return ptep_clear_flush(vma, addr, ptep);
-
-	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
+	ncontig = num_contig_ptes(page_size(pte_page(pte)), &pgsize);
 
 	return get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
 }
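For readers unfamiliar with the new generic helper, here is a minimal
sketch of the shape ___set_ptes() presumably has. It is not the
implementation added earlier in this series (which also drives
page_table_check and the necessary barriers); it only shows why the
callers above can drop their pfn/dpfn/hugeprot bookkeeping, assuming the
helper advances the pfn by pgsize >> PAGE_SHIFT per entry:

	/*
	 * Illustrative sketch only -- not the real ___set_ptes().
	 * Walks nr entries of size pgsize, advancing the pfn itself,
	 * so callers pass a single template pte for the whole batch.
	 * Page-table-check hooks and barriers are omitted.
	 */
	static inline void sketch___set_ptes(struct mm_struct *mm, pte_t *ptep,
					     pte_t pte, unsigned int nr,
					     unsigned long pgsize)
	{
		unsigned long dpfn = pgsize >> PAGE_SHIFT;

		for (; nr; nr--, ptep++) {
			__set_pte(ptep, pte);
			pte = pfn_pte(pte_pfn(pte) + dpfn, pte_pgprot(pte));
		}
	}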