From patchwork Mon Jun 1 01:16:40 2020
X-Patchwork-Submitter: Jarkko Sakkinen
X-Patchwork-Id: 11581071
From: Jarkko Sakkinen
To: linux-kernel@vger.kernel.org, x86@kernel.org, linux-sgx@vger.kernel.org
Cc: Sean Christopherson, Jethro Beekman, Jarkko Sakkinen,
    akpm@linux-foundation.org, andriy.shevchenko@linux.intel.com,
    asapek@google.com, bp@alien8.de, cedric.xing@intel.com,
    chenalexchen@google.com, conradparker@google.com, cyhanish@google.com,
    dave.hansen@intel.com, haitao.huang@intel.com, josh@joshtriplett.org,
    kai.huang@intel.com, kai.svahn@intel.com, kmoy@google.com,
    ludloff@google.com, luto@kernel.org, nhorman@redhat.com,
    npmccallum@redhat.com, puiterwijk@redhat.com, rientjes@google.com,
    tglx@linutronix.de, yaozhangx@google.com
Subject: [PATCH v31 16/21] x86/fault: Add helper function to sanitize error code
Date: Mon, 1 Jun 2020 04:16:40 +0300
Message-Id: <20200601011645.794040-17-jarkko.sakkinen@linux.intel.com>
In-Reply-To: <20200601011645.794040-1-jarkko.sakkinen@linux.intel.com>
References: <20200601011645.794040-1-jarkko.sakkinen@linux.intel.com>
X-Mailing-List: linux-sgx@vger.kernel.org

From: Sean Christopherson

Add a helper function to sanitize the error code in preparation for vDSO
exception fixup, which will expose the error code to userspace and runs
before set_signal_archinfo(), i.e. suppresses the signal when the fixup
is successful.
Acked-by: Jethro Beekman
Signed-off-by: Sean Christopherson
Signed-off-by: Jarkko Sakkinen
---
 arch/x86/mm/fault.c | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 16c53c874bb9..23666e34abbc 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -704,6 +704,18 @@ pgtable_bad(struct pt_regs *regs, unsigned long error_code,
 	oops_end(flags, regs, sig);
 }
 
+static void sanitize_error_code(unsigned long address,
+				unsigned long *error_code)
+{
+	/*
+	 * To avoid leaking information about the kernel page
+	 * table layout, pretend that user-mode accesses to
+	 * kernel addresses are always protection faults.
+	 */
+	if (address >= TASK_SIZE_MAX)
+		*error_code |= X86_PF_PROT;
+}
+
 static void set_signal_archinfo(unsigned long address,
 				unsigned long error_code)
 {
@@ -760,6 +772,8 @@ no_context(struct pt_regs *regs, unsigned long error_code,
 	 * faulting through the emulate_vsyscall() logic.
 	 */
 	if (current->thread.sig_on_uaccess_err && signal) {
+		sanitize_error_code(address, &error_code);
+
 		set_signal_archinfo(address, error_code);
 
 		/* XXX: hwpoison faults will set the wrong code. */
@@ -908,13 +922,7 @@ __bad_area_nosemaphore(struct pt_regs *regs, unsigned long error_code,
 	if (is_errata100(regs, address))
 		return;
 
-	/*
-	 * To avoid leaking information about the kernel page table
-	 * layout, pretend that user-mode accesses to kernel addresses
-	 * are always protection faults.
-	 */
-	if (address >= TASK_SIZE_MAX)
-		error_code |= X86_PF_PROT;
+	sanitize_error_code(address, &error_code);
 
 	if (likely(show_unhandled_signals))
 		show_signal_msg(regs, error_code, address, tsk);
@@ -1031,6 +1039,8 @@ do_sigbus(struct pt_regs *regs, unsigned long error_code, unsigned long address,
 	if (is_prefetch(regs, error_code, address))
 		return;
 
+	sanitize_error_code(address, &error_code);
+
 	set_signal_archinfo(address, error_code);
 
 #ifdef CONFIG_MEMORY_FAILURE
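
For readers without the kernel tree at hand, below is a minimal standalone
sketch of the sanitization rule as a userspace C program. The TASK_SIZE_MAX
and X86_PF_PROT values are illustrative stand-ins for the kernel's
definitions (4-level paging is assumed) and are not part of the patch:

#include <stdio.h>

#define PAGE_SIZE	4096UL
/* Illustrative stand-ins for the kernel definitions. */
#define TASK_SIZE_MAX	((1UL << 47) - PAGE_SIZE)	/* x86_64, 4-level paging */
#define X86_PF_PROT	(1UL << 0)			/* protection-violation bit */

static void sanitize_error_code(unsigned long address,
				unsigned long *error_code)
{
	/* Report user-mode faults on kernel addresses as protection faults. */
	if (address >= TASK_SIZE_MAX)
		*error_code |= X86_PF_PROT;
}

int main(void)
{
	unsigned long error_code = 0;	/* e.g. a not-present read fault */

	/* Kernel-space address: the sanitized code gains X86_PF_PROT. */
	sanitize_error_code(0xffffffff81000000UL, &error_code);
	printf("kernel address -> %#lx\n", error_code);

	/* User-space address: the error code is left untouched. */
	error_code = 0;
	sanitize_error_code(0x00007f0000000000UL, &error_code);
	printf("user address   -> %#lx\n", error_code);

	return 0;
}

Centralizing the check in one helper means every path that exposes
error_code to userspace, including the upcoming vDSO exception fixup,
applies the same masking.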