From patchwork Wed Feb 5 15:09:54 2025
X-Patchwork-Submitter: Ryan Roberts <ryan.roberts@arm.com>
X-Patchwork-Id: 13961293
From: Ryan Roberts <ryan.roberts@arm.com>
To: Catalin Marinas, Will Deacon, Muchun Song, Pasha Tatashin,
    Andrew Morton, Uladzislau Rezki, Christoph Hellwig, Mark Rutland,
    Ard Biesheuvel, Anshuman Khandual, Dev Jain, Alexandre Ghiti,
    Steve Capper, Kevin Brodsky
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v1 14/16] mm/vmalloc: Batch arch_sync_kernel_mappings()
 more efficiently
Date: Wed, 5 Feb 2025 15:09:54 +0000
Message-ID: <20250205151003.88959-15-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250205151003.88959-1-ryan.roberts@arm.com>
References: <20250205151003.88959-1-ryan.roberts@arm.com>
MIME-Version: 1.0

When page_shift is greater than PAGE_SHIFT, __vmap_pages_range_noflush()
calls vmap_range_noflush() once for each individual huge page. But
vmap_range_noflush() would previously call arch_sync_kernel_mappings()
directly, so the sync ended up being performed once per huge page. We can
do better than this; hoist the call into the outer
__vmap_pages_range_noflush() so that it is called only once for the
entire batch operation. This will benefit performance on arm64, which is
about to opt in to using the hook.
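The change is a straightforward hoisting pattern: the inner mapping
helpers stop calling the sync hook themselves and instead accumulate
page-table-modification flags into a caller-provided mask, and the
outermost batch function tests that mask once and issues a single sync
over the full range. Below is a minimal standalone sketch of the
pattern, not kernel code; map_one(), sync_mappings() and map_batch()
are hypothetical stand-ins for vmap_range_noflush(),
arch_sync_kernel_mappings() and __vmap_pages_range_noflush(), and
SYNC_NEEDED / mod_mask_t stand in for ARCH_PAGE_TABLE_SYNC_MASK /
pgtbl_mod_mask.

	typedef unsigned int mod_mask_t;
	#define SYNC_NEEDED 0x1u

	/* Inner helper: records modifications in *mask, never syncs. */
	int map_one(unsigned long addr, unsigned long size, mod_mask_t *mask)
	{
		*mask |= SYNC_NEEDED;	/* pretend a top-level entry changed */
		return 0;
	}

	void sync_mappings(unsigned long start, unsigned long end)
	{
		/* expensive per-arch synchronisation would happen here */
	}

	/* Outer batch: N chunks mapped, at most one sync for the range. */
	int map_batch(unsigned long start, unsigned long end,
		      unsigned long chunk)
	{
		mod_mask_t mask = 0;
		unsigned long addr;
		int err = 0;

		for (addr = start; addr < end; addr += chunk) {
			err = map_one(addr, chunk, &mask);
			if (err)
				break;
		}
		if (mask & SYNC_NEEDED)
			sync_mappings(start, end);	/* once, not per chunk */
		return err;
	}

Note that, as in the patch, the sync still runs before returning when a
chunk fails part-way, covering whatever was mapped before the error.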
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 mm/vmalloc.c | 60 ++++++++++++++++++++++++++--------------------------
 1 file changed, 30 insertions(+), 30 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 68950b1824d0..50fd44439875 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -285,40 +285,38 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
 
 static int vmap_range_noflush(unsigned long addr, unsigned long end,
 			phys_addr_t phys_addr, pgprot_t prot,
-			unsigned int max_page_shift)
+			unsigned int max_page_shift, pgtbl_mod_mask *mask)
 {
 	pgd_t *pgd;
-	unsigned long start;
 	unsigned long next;
 	int err;
-	pgtbl_mod_mask mask = 0;
 
 	might_sleep();
 	BUG_ON(addr >= end);
 
-	start = addr;
 	pgd = pgd_offset_k(addr);
 	do {
 		next = pgd_addr_end(addr, end);
 		err = vmap_p4d_range(pgd, addr, next, phys_addr, prot,
-					max_page_shift, &mask);
+					max_page_shift, mask);
 		if (err)
 			break;
 	} while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
 
-	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
-		arch_sync_kernel_mappings(start, end);
-
 	return err;
 }
 
 int vmap_page_range(unsigned long addr, unsigned long end,
 		    phys_addr_t phys_addr, pgprot_t prot)
 {
+	pgtbl_mod_mask mask = 0;
 	int err;
 
 	err = vmap_range_noflush(addr, end, phys_addr, pgprot_nx(prot),
-				 ioremap_max_page_shift);
+				 ioremap_max_page_shift, &mask);
+	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
+		arch_sync_kernel_mappings(addr, end);
+
 	flush_cache_vmap(addr, end);
 	if (!err)
 		err = kmsan_ioremap_page_range(addr, end, phys_addr, prot,
@@ -587,29 +585,24 @@ static int vmap_pages_p4d_range(pgd_t *pgd, unsigned long addr,
 }
 
 static int vmap_small_pages_range_noflush(unsigned long addr, unsigned long end,
-		pgprot_t prot, struct page **pages)
+		pgprot_t prot, struct page **pages, pgtbl_mod_mask *mask)
 {
-	unsigned long start = addr;
 	pgd_t *pgd;
 	unsigned long next;
 	int err = 0;
 	int nr = 0;
-	pgtbl_mod_mask mask = 0;
 
 	BUG_ON(addr >= end);
 	pgd = pgd_offset_k(addr);
 	do {
 		next = pgd_addr_end(addr, end);
 		if (pgd_bad(*pgd))
-			mask |= PGTBL_PGD_MODIFIED;
-		err = vmap_pages_p4d_range(pgd, addr, next, prot, pages, &nr, &mask);
+			*mask |= PGTBL_PGD_MODIFIED;
+		err = vmap_pages_p4d_range(pgd, addr, next, prot, pages, &nr, mask);
 		if (err)
 			break;
 	} while (pgd++, addr = next, addr != end);
 
-	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
-		arch_sync_kernel_mappings(start, end);
-
 	return err;
 }
 
@@ -626,26 +619,33 @@ int __vmap_pages_range_noflush(unsigned long addr, unsigned long end,
 		pgprot_t prot, struct page **pages, unsigned int page_shift)
 {
 	unsigned int i, nr = (end - addr) >> PAGE_SHIFT;
+	unsigned long start = addr;
+	pgtbl_mod_mask mask = 0;
+	int err = 0;
 
 	WARN_ON(page_shift < PAGE_SHIFT);
 
 	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMALLOC) ||
-			page_shift == PAGE_SHIFT)
-		return vmap_small_pages_range_noflush(addr, end, prot, pages);
-
-	for (i = 0; i < nr; i += 1U << (page_shift - PAGE_SHIFT)) {
-		int err;
-
-		err = vmap_range_noflush(addr, addr + (1UL << page_shift),
-					page_to_phys(pages[i]), prot,
-					page_shift);
-		if (err)
-			return err;
+			page_shift == PAGE_SHIFT) {
+		err = vmap_small_pages_range_noflush(addr, end, prot, pages,
+						&mask);
+	} else {
+		for (i = 0; i < nr; i += 1U << (page_shift - PAGE_SHIFT)) {
+			err = vmap_range_noflush(addr,
+					addr + (1UL << page_shift),
+					page_to_phys(pages[i]), prot,
+					page_shift, &mask);
+			if (err)
+				break;
 
-		addr += 1UL << page_shift;
+			addr += 1UL << page_shift;
+		}
 	}
 
-	return 0;
+	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
+		arch_sync_kernel_mappings(start, end);
+
+	return err;
 }
 
 int vmap_pages_range_noflush(unsigned long addr, unsigned long end,