From patchwork Tue Jun 22 22:24:57 2021
X-Patchwork-Submitter: Dave Hansen <dave.hansen@linux.intel.com>
X-Patchwork-Id: 12338683
Subject: [RFC][PATCH 1/8] x86/pkeys: add PKRU storage outside of task XSAVE buffer
From: Dave Hansen <dave.hansen@linux.intel.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Dave Hansen <dave.hansen@linux.intel.com>,
 tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org,
 luto@kernel.org
Date: Tue, 22 Jun 2021 15:24:57 -0700
Message-Id: <20210622222457.C16E9CB5@viggo.jf.intel.com>
In-Reply-To: <20210622222455.E901B5AC@viggo.jf.intel.com>
References: <20210622222455.E901B5AC@viggo.jf.intel.com>
From: Dave Hansen <dave.hansen@linux.intel.com>

PKRU has space in the task XSAVE buffer, but is not context-switched
by XSAVE/XRSTOR.  It is switched more eagerly than other FPU state
because PKRU affects things like copy_to/from_user().  This is because
PKRU affects user *PERMISSION* accesses, not just accesses made from
user *MODE* itself.

Prepare to move PKRU away from being XSAVE-managed.  Allocate space in
the thread_struct for it and save/restore it in the context-switch
path separately from the XSAVE-managed features.  Leave the XSAVE
storage in place for now to ensure bisectability.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: x86@kernel.org
Cc: Andy Lutomirski <luto@kernel.org>
---

 b/arch/x86/include/asm/pkru.h  |    5 +++++
 b/arch/x86/kernel/cpu/common.c |    3 +++
 b/arch/x86/kernel/process_64.c |    9 ++++-----
 b/arch/x86/mm/pkeys.c          |    2 ++
 4 files changed, 14 insertions(+), 5 deletions(-)

diff -puN arch/x86/include/asm/pkru.h~pkru-stash-thread-value arch/x86/include/asm/pkru.h
--- a/arch/x86/include/asm/pkru.h~pkru-stash-thread-value	2021-06-22 14:49:06.594051763 -0700
+++ b/arch/x86/include/asm/pkru.h	2021-06-22 14:49:06.607051763 -0700
@@ -44,11 +44,16 @@ static inline void write_pkru(u32 pkru)
 	if (!cpu_feature_enabled(X86_FEATURE_OSPKE))
 		return;
 	/*
+	 * Update the actual register.
+	 *
 	 * WRPKRU is relatively expensive compared to RDPKRU.
 	 * Avoid WRPKRU when it would not change the value.
 	 */
 	if (pkru != rdpkru())
 		wrpkru(pkru);
+
+	/* Update the thread-local, context-switched value: */
+	current->thread.pkru = pkru;
 }
 
 static inline void pkru_write_default(void)
diff -puN arch/x86/kernel/cpu/common.c~pkru-stash-thread-value arch/x86/kernel/cpu/common.c
--- a/arch/x86/kernel/cpu/common.c~pkru-stash-thread-value	2021-06-22 14:49:06.596051763 -0700
+++ b/arch/x86/kernel/cpu/common.c	2021-06-22 14:49:06.608051763 -0700
@@ -482,6 +482,9 @@ static __always_inline void setup_pku(st
 	cr4_set_bits(X86_CR4_PKE);
 	/* Load the default PKRU value */
 	pkru_write_default();
+
+	/* Establish the default value for future tasks: */
+	init_task.thread.pkru = init_pkru_value;
 }
 
 #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
diff -puN arch/x86/kernel/process_64.c~pkru-stash-thread-value arch/x86/kernel/process_64.c
--- a/arch/x86/kernel/process_64.c~pkru-stash-thread-value	2021-06-22 14:49:06.599051763 -0700
+++ b/arch/x86/kernel/process_64.c	2021-06-22 14:49:06.608051763 -0700
@@ -349,15 +349,14 @@ static __always_inline void load_seg_leg
 static __always_inline void x86_pkru_load(struct thread_struct *prev,
 					  struct thread_struct *next)
 {
-	if (!cpu_feature_enabled(X86_FEATURE_OSPKE))
-		return;
+	u32 pkru = read_pkru();
 
 	/* Stash the prev task's value: */
-	prev->pkru = rdpkru();
+	prev->pkru = pkru;
 
 	/*
-	 * PKRU writes are slightly expensive.  Avoid them when not
-	 * strictly necessary:
+	 * PKRU writes are slightly expensive.  Avoid
+	 * them when not strictly necessary:
 	 */
 	if (prev->pkru != next->pkru)
 		wrpkru(next->pkru);
diff -puN arch/x86/mm/pkeys.c~pkru-stash-thread-value arch/x86/mm/pkeys.c
--- a/arch/x86/mm/pkeys.c~pkru-stash-thread-value	2021-06-22 14:49:06.604051763 -0700
+++ b/arch/x86/mm/pkeys.c	2021-06-22 14:49:06.609051763 -0700
@@ -159,6 +159,8 @@ static ssize_t init_pkru_write_file(stru
 		return -EINVAL;
 
 	WRITE_ONCE(init_pkru_value, new_init_pkru);
+	WRITE_ONCE(init_task.thread.pkru, new_init_pkru);
+
 	return count;
 }