From patchwork Mon Oct 21 04:22:13 2024
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 13843547
Date: Sun, 20 Oct 2024 22:22:13 -0600
In-Reply-To: <20241021042218.746659-1-yuzhao@google.com>
Mime-Version: 1.0
References: <20241021042218.746659-1-yuzhao@google.com>
X-Mailer: git-send-email 2.47.0.rc1.288.g06298d1525-goog
Message-ID: <20241021042218.746659-2-yuzhao@google.com>
Subject: [PATCH v1 1/6] mm/hugetlb_vmemmap: batch update PTEs
From: Yu Zhao
To: Andrew Morton, Catalin Marinas, Marc Zyngier, Muchun Song,
	Thomas Gleixner, Will Deacon
Cc: Douglas Anderson, Mark Rutland, Nanyong Sun,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, Yu Zhao

Convert vmemmap_remap_walk->remap_pte to ->remap_pte_range so that
vmemmap remap walks can batch update PTEs.

The goal of this conversion is to allow architectures to implement their
own optimizations if possible, e.g., only to stop remote CPUs once for
each batch when updating vmemmap on arm64. It is not intended to change
the remap workflow nor should it by itself have any side effects on
performance.

Signed-off-by: Yu Zhao
---
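Note (illustrative only, not part of the commit message or the diff
below): with the old hook, the page table walker invoked ->remap_pte
once per PTE; with ->remap_pte_range, vmemmap_pmd_entry() hands the
callback the whole PTE range covered by one PMD, so an architecture
can, for example, stop remote CPUs once per batch rather than once per
PTE. A hypothetical callback with the new signature would have roughly
this shape (example_remap_pte_range is a made-up name, not code from
this series):

static void example_remap_pte_range(pte_t *pte, unsigned long start,
				    unsigned long end,
				    struct vmemmap_remap_walk *walk)
{
	unsigned long addr;

	/* One call now covers the whole batch for this PMD range. */
	for (addr = start; addr < end; addr += PAGE_SIZE, pte++) {
		/* read and rewrite *pte for the vmemmap page at addr */
	}
}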
 mm/hugetlb_vmemmap.c | 163 ++++++++++++++++++++++++-------------------
 1 file changed, 91 insertions(+), 72 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 57b7f591eee8..46befab48d41 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -22,7 +22,7 @@
 /**
  * struct vmemmap_remap_walk - walk vmemmap page table
  *
- * @remap_pte: called for each lowest-level entry (PTE).
+ * @remap_pte_range: called on a range of PTEs.
  * @nr_walked: the number of walked pte.
  * @reuse_page: the page which is reused for the tail vmemmap pages.
  * @reuse_addr: the virtual address of the @reuse_page page.
@@ -32,8 +32,8 @@
  * operations.
  */
 struct vmemmap_remap_walk {
-	void (*remap_pte)(pte_t *pte, unsigned long addr,
-			  struct vmemmap_remap_walk *walk);
+	void (*remap_pte_range)(pte_t *pte, unsigned long start,
+				unsigned long end, struct vmemmap_remap_walk *walk);
 	unsigned long nr_walked;
 	struct page *reuse_page;
 	unsigned long reuse_addr;
@@ -101,10 +101,6 @@ static int vmemmap_pmd_entry(pmd_t *pmd, unsigned long addr,
 	struct page *head;
 	struct vmemmap_remap_walk *vmemmap_walk = walk->private;
 
-	/* Only splitting, not remapping the vmemmap pages. */
-	if (!vmemmap_walk->remap_pte)
-		walk->action = ACTION_CONTINUE;
-
 	spin_lock(&init_mm.page_table_lock);
 	head = pmd_leaf(*pmd) ? pmd_page(*pmd) : NULL;
 	/*
@@ -129,33 +125,36 @@ static int vmemmap_pmd_entry(pmd_t *pmd, unsigned long addr,
 			ret = -ENOTSUPP;
 	}
 	spin_unlock(&init_mm.page_table_lock);
-	if (!head || ret)
+	if (ret)
 		return ret;
 
-	return vmemmap_split_pmd(pmd, head, addr & PMD_MASK, vmemmap_walk);
-}
+	if (head) {
+		ret = vmemmap_split_pmd(pmd, head, addr & PMD_MASK, vmemmap_walk);
+		if (ret)
+			return ret;
+	}
 
-static int vmemmap_pte_entry(pte_t *pte, unsigned long addr,
-			     unsigned long next, struct mm_walk *walk)
-{
-	struct vmemmap_remap_walk *vmemmap_walk = walk->private;
+	if (vmemmap_walk->remap_pte_range) {
+		pte_t *pte = pte_offset_kernel(pmd, addr);
 
-	/*
-	 * The reuse_page is found 'first' in page table walking before
-	 * starting remapping.
-	 */
-	if (!vmemmap_walk->reuse_page)
-		vmemmap_walk->reuse_page = pte_page(ptep_get(pte));
-	else
-		vmemmap_walk->remap_pte(pte, addr, vmemmap_walk);
-	vmemmap_walk->nr_walked++;
+		vmemmap_walk->nr_walked += (next - addr) / PAGE_SIZE;
+		/*
+		 * The reuse_page is found 'first' in page table walking before
+		 * starting remapping.
+		 */
+		if (!vmemmap_walk->reuse_page) {
+			vmemmap_walk->reuse_page = pte_page(ptep_get(pte));
+			pte++;
+			addr += PAGE_SIZE;
+		}
+		vmemmap_walk->remap_pte_range(pte, addr, next, vmemmap_walk);
+	}
 
 	return 0;
 }
 
 static const struct mm_walk_ops vmemmap_remap_ops = {
 	.pmd_entry	= vmemmap_pmd_entry,
-	.pte_entry	= vmemmap_pte_entry,
 };
 
 static int vmemmap_remap_range(unsigned long start, unsigned long end,
@@ -172,7 +171,7 @@ static int vmemmap_remap_range(unsigned long start, unsigned long end,
 	if (ret)
 		return ret;
 
-	if (walk->remap_pte && !(walk->flags & VMEMMAP_REMAP_NO_TLB_FLUSH))
+	if (walk->remap_pte_range && !(walk->flags & VMEMMAP_REMAP_NO_TLB_FLUSH))
 		flush_tlb_kernel_range(start, end);
 
 	return 0;
@@ -204,33 +203,45 @@ static void free_vmemmap_page_list(struct list_head *list)
 		free_vmemmap_page(page);
 }
 
-static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
-			      struct vmemmap_remap_walk *walk)
+static void vmemmap_remap_pte_range(pte_t *pte, unsigned long start, unsigned long end,
+				    struct vmemmap_remap_walk *walk)
 {
-	/*
-	 * Remap the tail pages as read-only to catch illegal write operation
-	 * to the tail pages.
-	 */
-	pgprot_t pgprot = PAGE_KERNEL_RO;
-	struct page *page = pte_page(ptep_get(pte));
-	pte_t entry;
-
-	/* Remapping the head page requires r/w */
-	if (unlikely(addr == walk->reuse_addr)) {
-		pgprot = PAGE_KERNEL;
-		list_del(&walk->reuse_page->lru);
+	int i;
+	struct page *page;
+	int nr_pages = (end - start) / PAGE_SIZE;
+	for (i = 0; i < nr_pages; i++) {
+		page = pte_page(ptep_get(pte + i));
+
+		list_add(&page->lru, walk->vmemmap_pages);
+	}
+
+	page = walk->reuse_page;
+
+	if (start == walk->reuse_addr) {
+		list_del(&page->lru);
+		copy_page(page_to_virt(page), (void *)walk->reuse_addr);
 
 		/*
-		 * Makes sure that preceding stores to the page contents from
-		 * vmemmap_remap_free() become visible before the set_pte_at()
-		 * write.
+		 * Makes sure that preceding stores to the page contents become
+		 * visible before set_pte_at().
		 */
 		smp_wmb();
 	}
 
-	entry = mk_pte(walk->reuse_page, pgprot);
-	list_add(&page->lru, walk->vmemmap_pages);
-	set_pte_at(&init_mm, addr, pte, entry);
+	for (i = 0; i < nr_pages; i++) {
+		pte_t val;
+
+		/*
+		 * The head page must be mapped read-write; the tail pages are
+		 * mapped read-only to catch illegal modifications.
+		 */
+		if (!i && start == walk->reuse_addr)
+			val = mk_pte(page, PAGE_KERNEL);
+		else
+			val = mk_pte(page, PAGE_KERNEL_RO);
+
+		set_pte_at(&init_mm, start + PAGE_SIZE * i, pte + i, val);
+	}
 }
 
 /*
@@ -252,27 +263,39 @@ static inline void reset_struct_pages(struct page *start)
 	memcpy(start, from, sizeof(*from) * NR_RESET_STRUCT_PAGE);
 }
 
-static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
-				struct vmemmap_remap_walk *walk)
+static void vmemmap_restore_pte_range(pte_t *pte, unsigned long start, unsigned long end,
+				      struct vmemmap_remap_walk *walk)
 {
-	pgprot_t pgprot = PAGE_KERNEL;
+	int i;
 	struct page *page;
-	void *to;
-
-	BUG_ON(pte_page(ptep_get(pte)) != walk->reuse_page);
+	int nr_pages = (end - start) / PAGE_SIZE;
 
 	page = list_first_entry(walk->vmemmap_pages, struct page, lru);
-	list_del(&page->lru);
-	to = page_to_virt(page);
-	copy_page(to, (void *)walk->reuse_addr);
-	reset_struct_pages(to);
+
+	for (i = 0; i < nr_pages; i++) {
+		BUG_ON(pte_page(ptep_get(pte + i)) != walk->reuse_page);
+
+		copy_page(page_to_virt(page), (void *)walk->reuse_addr);
+		reset_struct_pages(page_to_virt(page));
+
+		page = list_next_entry(page, lru);
+	}
 
 	/*
 	 * Makes sure that preceding stores to the page contents become visible
-	 * before the set_pte_at() write.
+	 * before set_pte_at().
	 */
 	smp_wmb();
-	set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
+
+	for (i = 0; i < nr_pages; i++) {
+		pte_t val;
+
+		page = list_first_entry(walk->vmemmap_pages, struct page, lru);
+		list_del(&page->lru);
+
+		val = mk_pte(page, PAGE_KERNEL);
+		set_pte_at(&init_mm, start + PAGE_SIZE * i, pte + i, val);
+	}
 }
 
 /**
@@ -290,7 +313,6 @@ static int vmemmap_remap_split(unsigned long start, unsigned long end,
 			       unsigned long reuse)
 {
 	struct vmemmap_remap_walk walk = {
-		.remap_pte	= NULL,
 		.flags		= VMEMMAP_SPLIT_NO_TLB_FLUSH,
 	};
 
@@ -322,10 +344,10 @@ static int vmemmap_remap_free(unsigned long start, unsigned long end,
 {
 	int ret;
 	struct vmemmap_remap_walk walk = {
-		.remap_pte	= vmemmap_remap_pte,
-		.reuse_addr	= reuse,
-		.vmemmap_pages	= vmemmap_pages,
-		.flags		= flags,
+		.remap_pte_range = vmemmap_remap_pte_range,
+		.reuse_addr	= reuse,
+		.vmemmap_pages	= vmemmap_pages,
+		.flags		= flags,
 	};
 	int nid = page_to_nid((struct page *)reuse);
 	gfp_t gfp_mask = GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN;
@@ -340,8 +362,6 @@ static int vmemmap_remap_free(unsigned long start, unsigned long end,
 	 */
 	walk.reuse_page = alloc_pages_node(nid, gfp_mask, 0);
 	if (walk.reuse_page) {
-		copy_page(page_to_virt(walk.reuse_page),
-			  (void *)walk.reuse_addr);
 		list_add(&walk.reuse_page->lru, vmemmap_pages);
 		memmap_pages_add(1);
 	}
@@ -371,10 +391,9 @@ static int vmemmap_remap_free(unsigned long start, unsigned long end,
 		 * They will be restored in the following call.
 		 */
 		walk = (struct vmemmap_remap_walk) {
-			.remap_pte	= vmemmap_restore_pte,
-			.reuse_addr	= reuse,
-			.vmemmap_pages	= vmemmap_pages,
-			.flags		= 0,
+			.remap_pte_range = vmemmap_restore_pte_range,
+			.reuse_addr	= reuse,
+			.vmemmap_pages	= vmemmap_pages,
 		};
 
 		vmemmap_remap_range(reuse, end, &walk);
@@ -425,10 +444,10 @@ static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
 {
 	LIST_HEAD(vmemmap_pages);
 	struct vmemmap_remap_walk walk = {
-		.remap_pte	= vmemmap_restore_pte,
-		.reuse_addr	= reuse,
-		.vmemmap_pages	= &vmemmap_pages,
-		.flags		= flags,
+		.remap_pte_range = vmemmap_restore_pte_range,
+		.reuse_addr	= reuse,
+		.vmemmap_pages	= &vmemmap_pages,
+		.flags		= flags,
 	};
 
 	/* See the comment in the vmemmap_remap_free(). */