From patchwork Mon Oct 21 04:22:13 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 13843540
Date: Sun, 20 Oct 2024 22:22:13 -0600
In-Reply-To: <20241021042218.746659-1-yuzhao@google.com>
References: <20241021042218.746659-1-yuzhao@google.com>
X-Mailer: git-send-email 2.47.0.rc1.288.g06298d1525-goog
Message-ID: <20241021042218.746659-2-yuzhao@google.com>
Subject: [PATCH v1 1/6] mm/hugetlb_vmemmap: batch update PTEs
From: Yu Zhao
To: Andrew Morton, Catalin Marinas, Marc Zyngier, Muchun Song,
 Thomas Gleixner, Will Deacon
Cc: Douglas Anderson, Mark Rutland, Nanyong Sun,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Yu Zhao

Convert vmemmap_remap_walk->remap_pte to ->remap_pte_range so that
vmemmap remap walks can batch update PTEs.

The goal of this conversion is to allow architectures to implement
their own optimizations where possible, e.g., stopping remote CPUs
only once per batch when updating the vmemmap on arm64. It is not
intended to change the remap workflow, nor should it by itself have
any side effect on performance.

Signed-off-by: Yu Zhao
---
 mm/hugetlb_vmemmap.c | 163 ++++++++++++++++++++++++-------------------
 1 file changed, 91 insertions(+), 72 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 57b7f591eee8..46befab48d41 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -22,7 +22,7 @@
 /**
  * struct vmemmap_remap_walk - walk vmemmap page table
  *
- * @remap_pte:		called for each lowest-level entry (PTE).
+ * @remap_pte_range:	called on a range of PTEs.
  * @nr_walked:		the number of walked pte.
  * @reuse_page:		the page which is reused for the tail vmemmap pages.
  * @reuse_addr:		the virtual address of the @reuse_page page.
@@ -32,8 +32,8 @@
  *			operations.
  */
 struct vmemmap_remap_walk {
-	void			(*remap_pte)(pte_t *pte, unsigned long addr,
-					     struct vmemmap_remap_walk *walk);
+	void			(*remap_pte_range)(pte_t *pte, unsigned long start,
+					unsigned long end, struct vmemmap_remap_walk *walk);
 	unsigned long		nr_walked;
 	struct page		*reuse_page;
 	unsigned long		reuse_addr;
@@ -101,10 +101,6 @@ static int vmemmap_pmd_entry(pmd_t *pmd, unsigned long addr,
 	struct page *head;
 	struct vmemmap_remap_walk *vmemmap_walk = walk->private;
 
-	/* Only splitting, not remapping the vmemmap pages. */
-	if (!vmemmap_walk->remap_pte)
-		walk->action = ACTION_CONTINUE;
-
 	spin_lock(&init_mm.page_table_lock);
 	head = pmd_leaf(*pmd) ? pmd_page(*pmd) : NULL;
 	/*
@@ -129,33 +125,36 @@ static int vmemmap_pmd_entry(pmd_t *pmd, unsigned long addr,
 		ret = -ENOTSUPP;
 	}
 	spin_unlock(&init_mm.page_table_lock);
-	if (!head || ret)
+	if (ret)
 		return ret;
 
-	return vmemmap_split_pmd(pmd, head, addr & PMD_MASK, vmemmap_walk);
-}
+	if (head) {
+		ret = vmemmap_split_pmd(pmd, head, addr & PMD_MASK, vmemmap_walk);
+		if (ret)
+			return ret;
+	}
 
-static int vmemmap_pte_entry(pte_t *pte, unsigned long addr,
-			     unsigned long next, struct mm_walk *walk)
-{
-	struct vmemmap_remap_walk *vmemmap_walk = walk->private;
+	if (vmemmap_walk->remap_pte_range) {
+		pte_t *pte = pte_offset_kernel(pmd, addr);
 
-	/*
-	 * The reuse_page is found 'first' in page table walking before
-	 * starting remapping.
-	 */
-	if (!vmemmap_walk->reuse_page)
-		vmemmap_walk->reuse_page = pte_page(ptep_get(pte));
-	else
-		vmemmap_walk->remap_pte(pte, addr, vmemmap_walk);
-	vmemmap_walk->nr_walked++;
+		vmemmap_walk->nr_walked += (next - addr) / PAGE_SIZE;
+		/*
+		 * The reuse_page is found 'first' in page table walking before
+		 * starting remapping.
+		 */
+		if (!vmemmap_walk->reuse_page) {
+			vmemmap_walk->reuse_page = pte_page(ptep_get(pte));
+			pte++;
+			addr += PAGE_SIZE;
+		}
+		vmemmap_walk->remap_pte_range(pte, addr, next, vmemmap_walk);
+	}
 
 	return 0;
 }
 
 static const struct mm_walk_ops vmemmap_remap_ops = {
 	.pmd_entry	= vmemmap_pmd_entry,
-	.pte_entry	= vmemmap_pte_entry,
 };
 
 static int vmemmap_remap_range(unsigned long start, unsigned long end,
@@ -172,7 +171,7 @@ static int vmemmap_remap_range(unsigned long start, unsigned long end,
 	if (ret)
 		return ret;
 
-	if (walk->remap_pte && !(walk->flags & VMEMMAP_REMAP_NO_TLB_FLUSH))
+	if (walk->remap_pte_range && !(walk->flags & VMEMMAP_REMAP_NO_TLB_FLUSH))
 		flush_tlb_kernel_range(start, end);
 
 	return 0;
@@ -204,33 +203,45 @@ static void free_vmemmap_page_list(struct list_head *list)
 		free_vmemmap_page(page);
 }
 
-static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
-			      struct vmemmap_remap_walk *walk)
+static void vmemmap_remap_pte_range(pte_t *pte, unsigned long start, unsigned long end,
+				    struct vmemmap_remap_walk *walk)
 {
-	/*
-	 * Remap the tail pages as read-only to catch illegal write operation
-	 * to the tail pages.
-	 */
-	pgprot_t pgprot = PAGE_KERNEL_RO;
-	struct page *page = pte_page(ptep_get(pte));
-	pte_t entry;
-
-	/* Remapping the head page requires r/w */
-	if (unlikely(addr == walk->reuse_addr)) {
-		pgprot = PAGE_KERNEL;
-		list_del(&walk->reuse_page->lru);
+	int i;
+	struct page *page;
+	int nr_pages = (end - start) / PAGE_SIZE;
+	for (i = 0; i < nr_pages; i++) {
+		page = pte_page(ptep_get(pte + i));
+
+		list_add(&page->lru, walk->vmemmap_pages);
+	}
+
+	page = walk->reuse_page;
+
+	if (start == walk->reuse_addr) {
+		list_del(&page->lru);
+		copy_page(page_to_virt(page), (void *)walk->reuse_addr);
 
 		/*
-		 * Makes sure that preceding stores to the page contents from
-		 * vmemmap_remap_free() become visible before the set_pte_at()
-		 * write.
+		 * Makes sure that preceding stores to the page contents become
+		 * visible before set_pte_at().
 		 */
 		smp_wmb();
 	}
 
-	entry = mk_pte(walk->reuse_page, pgprot);
-	list_add(&page->lru, walk->vmemmap_pages);
-	set_pte_at(&init_mm, addr, pte, entry);
+	for (i = 0; i < nr_pages; i++) {
+		pte_t val;
+
+		/*
+		 * The head page must be mapped read-write; the tail pages are
+		 * mapped read-only to catch illegal modifications.
+		 */
+		if (!i && start == walk->reuse_addr)
+			val = mk_pte(page, PAGE_KERNEL);
+		else
+			val = mk_pte(page, PAGE_KERNEL_RO);
+
+		set_pte_at(&init_mm, start + PAGE_SIZE * i, pte + i, val);
+	}
 }
 
 /*
@@ -252,27 +263,39 @@ static inline void reset_struct_pages(struct page *start)
 	memcpy(start, from, sizeof(*from) * NR_RESET_STRUCT_PAGE);
 }
 
-static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
-				struct vmemmap_remap_walk *walk)
+static void vmemmap_restore_pte_range(pte_t *pte, unsigned long start, unsigned long end,
+				      struct vmemmap_remap_walk *walk)
 {
-	pgprot_t pgprot = PAGE_KERNEL;
+	int i;
 	struct page *page;
-	void *to;
-
-	BUG_ON(pte_page(ptep_get(pte)) != walk->reuse_page);
+	int nr_pages = (end - start) / PAGE_SIZE;
 
 	page = list_first_entry(walk->vmemmap_pages, struct page, lru);
-	list_del(&page->lru);
-	to = page_to_virt(page);
-	copy_page(to, (void *)walk->reuse_addr);
-	reset_struct_pages(to);
+
+	for (i = 0; i < nr_pages; i++) {
+		BUG_ON(pte_page(ptep_get(pte + i)) != walk->reuse_page);
+
+		copy_page(page_to_virt(page), (void *)walk->reuse_addr);
+		reset_struct_pages(page_to_virt(page));
+
+		page = list_next_entry(page, lru);
+	}
 
 	/*
 	 * Makes sure that preceding stores to the page contents become visible
-	 * before the set_pte_at() write.
+	 * before set_pte_at().
 	 */
 	smp_wmb();
-	set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
+
+	for (i = 0; i < nr_pages; i++) {
+		pte_t val;
+
+		page = list_first_entry(walk->vmemmap_pages, struct page, lru);
+		list_del(&page->lru);
+
+		val = mk_pte(page, PAGE_KERNEL);
+		set_pte_at(&init_mm, start + PAGE_SIZE * i, pte + i, val);
+	}
 }
 
 /**
@@ -290,7 +313,6 @@ static int vmemmap_remap_split(unsigned long start, unsigned long end,
 			       unsigned long reuse)
 {
 	struct vmemmap_remap_walk walk = {
-		.remap_pte	= NULL,
 		.flags		= VMEMMAP_SPLIT_NO_TLB_FLUSH,
 	};
 
@@ -322,10 +344,10 @@ static int vmemmap_remap_free(unsigned long start, unsigned long end,
 {
 	int ret;
 	struct vmemmap_remap_walk walk = {
-		.remap_pte	= vmemmap_remap_pte,
-		.reuse_addr	= reuse,
-		.vmemmap_pages	= vmemmap_pages,
-		.flags		= flags,
+		.remap_pte_range	= vmemmap_remap_pte_range,
+		.reuse_addr		= reuse,
+		.vmemmap_pages		= vmemmap_pages,
+		.flags			= flags,
 	};
 	int nid = page_to_nid((struct page *)reuse);
 	gfp_t gfp_mask = GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN;
@@ -340,8 +362,6 @@ static int vmemmap_remap_free(unsigned long start, unsigned long end,
 	 */
 	walk.reuse_page = alloc_pages_node(nid, gfp_mask, 0);
 	if (walk.reuse_page) {
-		copy_page(page_to_virt(walk.reuse_page),
-			  (void *)walk.reuse_addr);
 		list_add(&walk.reuse_page->lru, vmemmap_pages);
 		memmap_pages_add(1);
 	}
@@ -371,10 +391,9 @@ static int vmemmap_remap_free(unsigned long start, unsigned long end,
 	 * They will be restored in the following call.
 	 */
 	walk = (struct vmemmap_remap_walk) {
-		.remap_pte	= vmemmap_restore_pte,
-		.reuse_addr	= reuse,
-		.vmemmap_pages	= vmemmap_pages,
-		.flags		= 0,
+		.remap_pte_range	= vmemmap_restore_pte_range,
+		.reuse_addr		= reuse,
+		.vmemmap_pages		= vmemmap_pages,
 	};
 
 	vmemmap_remap_range(reuse, end, &walk);
@@ -425,10 +444,10 @@ static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
 {
 	LIST_HEAD(vmemmap_pages);
 	struct vmemmap_remap_walk walk = {
-		.remap_pte	= vmemmap_restore_pte,
-		.reuse_addr	= reuse,
-		.vmemmap_pages	= &vmemmap_pages,
-		.flags		= flags,
+		.remap_pte_range	= vmemmap_restore_pte_range,
+		.reuse_addr		= reuse,
+		.vmemmap_pages		= &vmemmap_pages,
+		.flags			= flags,
 	};
 
 	/* See the comment in the vmemmap_remap_free(). */
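
To make the intent of the new hook concrete, here is a minimal, illustrative
sketch (not part of this patch) of how an architecture-specific
->remap_pte_range() callback could exploit the batching. The
arch_vmemmap_update_begin()/arch_vmemmap_update_end() helpers are hypothetical
placeholders for whatever per-batch synchronization an architecture might need
(e.g. stopping remote CPUs once per batch on arm64, as mentioned above); the
head-page handling and list bookkeeping done by the real
vmemmap_remap_pte_range() are omitted for brevity.

static void example_remap_pte_range(pte_t *pte, unsigned long start,
				    unsigned long end,
				    struct vmemmap_remap_walk *walk)
{
	unsigned long addr;

	/* Hypothetical: pay the expensive synchronization cost once per batch. */
	arch_vmemmap_update_begin();

	for (addr = start; addr < end; addr += PAGE_SIZE, pte++) {
		/* Point each PTE in the batch at the shared reuse page, read-only. */
		pte_t entry = mk_pte(walk->reuse_page, PAGE_KERNEL_RO);

		set_pte_at(&init_mm, addr, pte, entry);
	}

	/* Hypothetical: resume normal operation for the whole batch. */
	arch_vmemmap_update_end();
}

With the old per-PTE ->remap_pte() callback, any such synchronization would
have to be repeated for every PTE; the range-based signature is what makes the
once-per-batch pattern possible.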