From patchwork Fri Oct 21 16:36:53 2022
X-Patchwork-Submitter: James Houghton
X-Patchwork-Id: 13015103
Date: Fri, 21 Oct 2022 16:36:53 +0000
In-Reply-To: <20221021163703.3218176-1-jthoughton@google.com>
References: <20221021163703.3218176-1-jthoughton@google.com>
X-Mailer: git-send-email 2.38.0.135.g90850a2211-goog
Message-ID: <20221021163703.3218176-38-jthoughton@google.com>
Subject: [RFC PATCH v2 37/47] hugetlb: remove huge_pte_lock and huge_pte_lockptr
From: James Houghton
To: Mike Kravetz, Muchun Song, Peter Xu
Cc: David Hildenbrand, David Rientjes, Axel Rasmussen, Mina Almasry,
    "Zach O'Keefe", Manish Mishra, Naoya Horiguchi,
    "Dr . David Alan Gilbert", "Matthew Wilcox (Oracle)", Vlastimil Babka,
    Baolin Wang, Miaohe Lin, Yang Shi, Andrew Morton,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, James Houghton

huge_pte_lock and huge_pte_lockptr are replaced with hugetlb_pte_lock{,ptr}.
The callers that have not yet been converted are never reached when HGM is in
use, so we handle them by populating hugetlb_ptes with the standard,
hstate-sized huge PTEs.
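For context, the conversion for these non-HGM callers is mechanical: describe
the hstate-sized entry with a hugetlb_pte, then take its lock. Below is a
minimal sketch of that pattern; hugetlb_pte_populate(), hugetlb_pte_lock(),
and hpage_size_to_level() are introduced earlier in this series, and the
wrapper function itself is purely illustrative, not part of the patch:

	/*
	 * Illustrative only: the pattern that replaces
	 *	ptl = huge_pte_lock(h, mm, ptep);
	 * The hugetlb_pte is populated with the standard, hstate-sized huge
	 * PTE, so locking behaviour is unchanged for non-HGM callers.
	 */
	static spinlock_t *lock_hstate_pte(struct mm_struct *mm,
					   struct hstate *h, pte_t *ptep)
	{
		struct hugetlb_pte hpte;

		hugetlb_pte_populate(&hpte, ptep, huge_page_shift(h),
				     hpage_size_to_level(huge_page_size(h)));
		/* Returns with the page table lock held. */
		return hugetlb_pte_lock(mm, &hpte);
	}

Note that anything that is not PMD-mapped (including hstate-sized PUD
entries) falls back to mm->page_table_lock in hugetlb_pte_lockptr(), matching
what huge_pte_lockptr() did for shift != PMD_SHIFT.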
Signed-off-by: James Houghton
---
 include/linux/hugetlb.h | 28 +++-------------------------
 mm/hugetlb.c            | 15 ++++++++++-----
 2 files changed, 13 insertions(+), 30 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 5378b98cc7b8..e6dc25b15403 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -1015,14 +1015,6 @@ static inline gfp_t htlb_modify_alloc_mask(struct hstate *h, gfp_t gfp_mask)
 	return modified_mask;
 }
 
-static inline spinlock_t *huge_pte_lockptr(unsigned int shift,
-					   struct mm_struct *mm, pte_t *pte)
-{
-	if (shift == PMD_SHIFT)
-		return pmd_lockptr(mm, (pmd_t *) pte);
-	return &mm->page_table_lock;
-}
-
 #ifndef hugepages_supported
 /*
  * Some platform decide whether they support huge pages at boot
@@ -1226,12 +1218,6 @@ static inline gfp_t htlb_modify_alloc_mask(struct hstate *h, gfp_t gfp_mask)
 	return 0;
 }
 
-static inline spinlock_t *huge_pte_lockptr(unsigned int shift,
-					   struct mm_struct *mm, pte_t *pte)
-{
-	return &mm->page_table_lock;
-}
-
 static inline void hugetlb_count_init(struct mm_struct *mm)
 {
 }
@@ -1307,16 +1293,6 @@ int hugetlb_collapse(struct mm_struct *mm, struct vm_area_struct *vma,
 }
 #endif
 
-static inline spinlock_t *huge_pte_lock(struct hstate *h,
-					struct mm_struct *mm, pte_t *pte)
-{
-	spinlock_t *ptl;
-
-	ptl = huge_pte_lockptr(huge_page_shift(h), mm, pte);
-	spin_lock(ptl);
-	return ptl;
-}
-
 static inline spinlock_t *hugetlb_pte_lockptr(struct mm_struct *mm,
 					      struct hugetlb_pte *hpte)
 {
@@ -1324,7 +1300,9 @@ spinlock_t *hugetlb_pte_lockptr(struct mm_struct *mm, struct hugetlb_pte *hpte)
 	BUG_ON(!hpte->ptep);
 	if (hpte->ptl)
 		return hpte->ptl;
-	return huge_pte_lockptr(hugetlb_pte_shift(hpte), mm, hpte->ptep);
+	if (hugetlb_pte_level(hpte) == HUGETLB_LEVEL_PMD)
+		return pmd_lockptr(mm, (pmd_t *) hpte->ptep);
+	return &mm->page_table_lock;
 }
 
 static inline
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d80db81a1fa5..9d4e41c41f78 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5164,9 +5164,8 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 				put_page(hpage);
 
 				/* Install the new huge page if src pte stable */
-				dst_ptl = huge_pte_lock(h, dst, dst_pte);
-				src_ptl = huge_pte_lockptr(huge_page_shift(h),
-							   src, src_pte);
+				dst_ptl = hugetlb_pte_lock(dst, &dst_hpte);
+				src_ptl = hugetlb_pte_lockptr(src, &src_hpte);
 				spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
 				entry = huge_ptep_get(src_pte);
 				if (!pte_same(src_pte_old, entry)) {
@@ -7465,6 +7464,7 @@ pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
 	pte_t *spte = NULL;
 	pte_t *pte;
 	spinlock_t *ptl;
+	struct hugetlb_pte hpte;
 
 	i_mmap_lock_read(mapping);
 	vma_interval_tree_foreach(svma, &mapping->i_mmap, idx, idx) {
@@ -7485,7 +7485,8 @@ pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
 	if (!spte)
 		goto out;
 
-	ptl = huge_pte_lock(hstate_vma(vma), mm, spte);
+	hugetlb_pte_populate(&hpte, (pte_t *)pud, PUD_SHIFT, HUGETLB_LEVEL_PUD);
+	ptl = hugetlb_pte_lock(mm, &hpte);
 	if (pud_none(*pud)) {
 		pud_populate(mm, pud,
 				(pmd_t *)((unsigned long)spte & PAGE_MASK));
@@ -8179,6 +8180,7 @@ void hugetlb_unshare_all_pmds(struct vm_area_struct *vma)
 	unsigned long address, start, end;
 	spinlock_t *ptl;
 	pte_t *ptep;
+	struct hugetlb_pte hpte;
 
 	if (!(vma->vm_flags & VM_MAYSHARE))
 		return;
@@ -8203,7 +8205,10 @@ void hugetlb_unshare_all_pmds(struct vm_area_struct *vma)
 		ptep = huge_pte_offset(mm, address, sz);
 		if (!ptep)
 			continue;
-		ptl = huge_pte_lock(h, mm, ptep);
+
+		hugetlb_pte_populate(&hpte, ptep, huge_page_shift(h),
+				     hpage_size_to_level(sz));
+		ptl = hugetlb_pte_lock(mm, &hpte);
 		huge_pmd_unshare(mm, vma, address, ptep);
 		spin_unlock(ptl);
 	}
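For readers of this patch in isolation: hugetlb_pte_populate() is defined
earlier in the series and is not shown here. Judging from the accessors used
in this patch (hpte->ptep, hpte->ptl, hugetlb_pte_shift(),
hugetlb_pte_level()), it is assumed to do roughly the following; the exact
field names and the level type are assumptions, not part of this patch:

	/* Assumed sketch only -- not part of this patch; fields may differ. */
	static void hugetlb_pte_populate_sketch(struct hugetlb_pte *hpte,
						pte_t *ptep, unsigned int shift,
						int level)
	{
		hpte->ptep  = ptep;	/* the page table entry being described */
		hpte->shift = shift;	/* e.g. huge_page_shift(h) or PUD_SHIFT */
		hpte->level = level;	/* e.g. HUGETLB_LEVEL_PUD */
		hpte->ptl   = NULL;	/* resolved lazily by hugetlb_pte_lockptr() */
	}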