From patchwork Wed Jul 11 11:29:24 2018
X-Patchwork-Submitter: Joerg Roedel
X-Patchwork-Id: 10519471
From: Joerg Roedel
To: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin"
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Linus Torvalds, Andy Lutomirski, Dave Hansen, Josh Poimboeuf,
    Juergen Gross, Peter Zijlstra, Borislav Petkov, Jiri Kosina,
    Boris Ostrovsky, Brian Gerst, David Laight, Denys Vlasenko,
    Eduardo Valentin, Greg KH, Will Deacon, aliguori@amazon.com,
    daniel.gruss@iaik.tugraz.at, hughd@google.com, keescook@google.com,
    Andrea Arcangeli, Waiman Long, Pavel Machek, "David H. Gutteridge",
    jroedel@suse.de, joro@8bytes.org
Subject: [PATCH 17/39] x86/pgtable/32: Allocate 8k page-tables when PTI is enabled
Date: Wed, 11 Jul 2018 13:29:24 +0200
Message-Id: <1531308586-29340-18-git-send-email-joro@8bytes.org>
In-Reply-To: <1531308586-29340-1-git-send-email-joro@8bytes.org>
References: <1531308586-29340-1-git-send-email-joro@8bytes.org>

From: Joerg Roedel

Allocate a kernel and a user page-table root when PTI is enabled.
Also allocate a full page per root for PAE because otherwise the bit
to flip in CR3 to switch between them would be non-constant, which
creates a lot of hassle. Keep that for a later optimization.

Signed-off-by: Joerg Roedel
---
 arch/x86/kernel/head_32.S | 20 +++++++++++++++-----
 arch/x86/mm/pgtable.c     |  5 +++--
 2 files changed, 18 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
index abe6df1..30f9cb2 100644
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -512,11 +512,18 @@ ENTRY(initial_code)
 ENTRY(setup_once_ref)
 	.long setup_once
 
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+#define PGD_ALIGN	(2 * PAGE_SIZE)
+#define PTI_USER_PGD_FILL	1024
+#else
+#define PGD_ALIGN	(PAGE_SIZE)
+#define PTI_USER_PGD_FILL	0
+#endif
 /*
  * BSS section
  */
 __PAGE_ALIGNED_BSS
-	.align PAGE_SIZE
+	.align PGD_ALIGN
 #ifdef CONFIG_X86_PAE
 .globl initial_pg_pmd
 initial_pg_pmd:
@@ -526,14 +533,17 @@ initial_pg_pmd:
 initial_page_table:
 	.fill 1024,4,0
 #endif
+	.align PGD_ALIGN
 initial_pg_fixmap:
 	.fill 1024,4,0
-.globl empty_zero_page
-empty_zero_page:
-	.fill 4096,1,0
 .globl swapper_pg_dir
+	.align PGD_ALIGN
 swapper_pg_dir:
 	.fill 1024,4,0
+	.fill PTI_USER_PGD_FILL,4,0
+.globl empty_zero_page
+empty_zero_page:
+	.fill 4096,1,0
 EXPORT_SYMBOL(empty_zero_page)
 
 /*
@@ -542,7 +552,7 @@ EXPORT_SYMBOL(empty_zero_page)
 #ifdef CONFIG_X86_PAE
 __PAGE_ALIGNED_DATA
 	/* Page-aligned for the benefit of paravirt? */
-	.align PAGE_SIZE
+	.align PGD_ALIGN
 ENTRY(initial_page_table)
 	.long	pa(initial_pg_pmd+PGD_IDENT_ATTR),0	/* low identity map */
 # if KPMDS == 3
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 47b5951..db6fb77 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -343,7 +343,8 @@ static inline pgd_t *_pgd_alloc(void)
 	 * We allocate one page for pgd.
 	 */
 	if (!SHARED_KERNEL_PMD)
-		return (pgd_t *)__get_free_page(PGALLOC_GFP);
+		return (pgd_t *)__get_free_pages(PGALLOC_GFP,
+						 PGD_ALLOCATION_ORDER);
 
 	/*
 	 * Now PAE kernel is not running as a Xen domain. We can allocate
@@ -355,7 +356,7 @@ static inline void _pgd_free(pgd_t *pgd)
 {
 	if (!SHARED_KERNEL_PMD)
-		free_page((unsigned long)pgd);
+		free_pages((unsigned long)pgd, PGD_ALLOCATION_ORDER);
 	else
 		kmem_cache_free(pgd_cache, pgd);
 }
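
Background note (not part of the patch): the pgtable.c hunk relies on
PGD_ALLOCATION_ORDER, which this patch does not introduce; it was added by the
earlier x86-64 PTI work in arch/x86/include/asm/pgalloc.h (1 when
CONFIG_PAGE_TABLE_ISOLATION is enabled, 0 otherwise), so _pgd_alloc() now hands
back an 8k-aligned, 8k block with the kernel root in the first 4k page and the
user root in the second. The sketch below is a minimal illustration, under that
assumption, of why this layout matters: the two roots then differ only in bit
12 (PAGE_SHIFT) of their address, so the kernel/user CR3 switch is a single
constant bit flip. The helper names are made up for illustration and are not
kernel API; the real switch is done in the entry assembly.

	/* Illustrative sketch only -- not part of this patch. */
	#define PAGE_SHIFT	12
	#define PAGE_SIZE	(1UL << PAGE_SHIFT)
	#define PTI_SWITCH_MASK	PAGE_SIZE	/* bit 12 of the CR3 value */

	/* hypothetical helpers: derive one CR3 value from the other */
	static inline unsigned long pti_user_cr3(unsigned long kernel_cr3)
	{
		/* user PGD is the second 4k page of the 8k allocation */
		return kernel_cr3 | PTI_SWITCH_MASK;
	}

	static inline unsigned long pti_kernel_cr3(unsigned long user_cr3)
	{
		return user_cr3 & ~PTI_SWITCH_MASK;
	}

This is also why the commit message insists on a full page per root even for
PAE: if the user root followed the kernel root at some arbitrary offset, the
bit to toggle in CR3 would depend on the allocation instead of being constant.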