From patchwork Fri Jun 26 09:34:50 2020
X-Patchwork-Submitter: Joerg Roedel
X-Patchwork-Id: 11626967
From: Joerg Roedel
To: x86@kernel.org
Cc: hpa@zytor.com, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
    Andrew Morton, Steven Rostedt, joro@8bytes.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Joerg Roedel
Subject: [PATCH] x86/mm: Pre-allocate p4d/pud pages for vmalloc area
Date: Fri, 26 Jun 2020 11:34:50 +0200
Message-Id: <20200626093450.27741-1-joro@8bytes.org>
X-Mailer: git-send-email 2.17.1
From: Joerg Roedel

Pre-allocate the page-table pages for the vmalloc area at the level which
needs synchronization on x86. This is P4D for 5-level and PUD for 4-level
paging.

Doing this at boot makes sure that all page-tables in the system already
have these pages and do not need to be synchronized at runtime. The
runtime synchronization takes the pgd_lock and iterates over all
page-tables in the system, so it can take quite a long time and is better
avoided.

Signed-off-by: Joerg Roedel
---
 arch/x86/mm/init_64.c | 55 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 55 insertions(+)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index dbae185511cd..475a4008445b 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1238,6 +1238,59 @@ static void __init register_page_bootmem_info(void)
 #endif
 }
 
+/*
+ * Pre-allocates page-table pages for the vmalloc area in the kernel
+ * page-table. Only the level which needs to be synchronized between all
+ * page-tables is allocated because the synchronization can be expensive.
+ */
+static void __init preallocate_vmalloc_pages(void)
+{
+	unsigned long addr;
+	const char *lvl;
+	int count = 0;
+
+	for (addr = VMALLOC_START; addr <= VMALLOC_END; addr = ALIGN(addr + 1, PGDIR_SIZE)) {
+		pgd_t *pgd = pgd_offset_k(addr);
+		p4d_t *p4d;
+		pud_t *pud;
+
+		p4d = p4d_offset(pgd, addr);
+		if (p4d_none(*p4d)) {
+			/* Can only happen with 5-level paging */
+			p4d = p4d_alloc(&init_mm, pgd, addr);
+			if (!p4d) {
+				lvl = "p4d";
+				goto failed;
+			}
+			count += 1;
+		}
+
+		if (pgtable_l5_enabled())
+			continue;
+
+		pud = pud_offset(p4d, addr);
+		if (pud_none(*pud)) {
+			/* Ends up here only with 4-level paging */
+			pud = pud_alloc(&init_mm, p4d, addr);
+			if (!pud) {
+				lvl = "pud";
+				goto failed;
+			}
+			count += 1;
+		}
+	}
+
+	return;
+
+failed:
+
+	/*
+	 * A failure here is not fatal - if the pages can be allocated later
+	 * they will be synchronized to other page-tables.
+	 */
+	pr_err("Failed to pre-allocate %s pages for vmalloc area\n", lvl);
+}
+
 void __init mem_init(void)
 {
 	pci_iommu_alloc();
@@ -1261,6 +1314,8 @@ void __init mem_init(void)
 	if (get_gate_vma(&init_mm))
 		kclist_add(&kcore_vsyscall, (void *)VSYSCALL_ADDR,
 			   PAGE_SIZE, KCORE_USER);
 
+	preallocate_vmalloc_pages();
+
 	mem_init_print_info(NULL);
 }
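
As an illustration of the loop's stride (not part of the patch): a minimal
user-space sketch, assuming 4-level x86-64 paging with VMALLOC_START,
VMALLOC_END and PGDIR_SIZE hard-coded to the values documented in
Documentation/x86/x86_64/mm.rst, counts how many PGD entries the
ALIGN(addr + 1, PGDIR_SIZE) step visits across the vmalloc area:

#include <stdio.h>

/* Round x up to the next multiple of the power-of-two a, as in the kernel */
#define ALIGN(x, a)	(((x) + (a) - 1) & ~((unsigned long)(a) - 1))

/* Assumed constants for 4-level x86-64 paging */
#define PGDIR_SIZE	(1UL << 39)		/* one PGD entry maps 512 GiB */
#define VMALLOC_START	0xffffc90000000000UL
#define VMALLOC_END	0xffffe8ffffffffffUL	/* 32 TiB vmalloc area */

int main(void)
{
	unsigned long addr;
	int count = 0;

	/* Same iteration as preallocate_vmalloc_pages(): one step per PGD entry */
	for (addr = VMALLOC_START; addr <= VMALLOC_END;
	     addr = ALIGN(addr + 1, PGDIR_SIZE))
		count++;

	printf("vmalloc area spans %d pgd entries\n", count);
	return 0;
}

With these constants the loop runs 64 times (32 TiB / 512 GiB), so with
4-level paging the pre-allocation costs at most 64 PUD pages, i.e. 256 KiB,
once at boot.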