From patchwork Wed Feb 5 18:19:21 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yu-cheng Yu
X-Patchwork-Id: 11366929
From: Yu-cheng Yu
Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H.J. Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , "Ravi V. Shankar" , Vedvyas Shanbhogue , Dave Martin , x86-patch-review@intel.com Cc: Yu-cheng Yu Subject: [RFC PATCH v9 13/27] x86/mm: Shadow Stack page fault error checking Date: Wed, 5 Feb 2020 10:19:21 -0800 Message-Id: <20200205181935.3712-14-yu-cheng.yu@intel.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20200205181935.3712-1-yu-cheng.yu@intel.com> References: <20200205181935.3712-1-yu-cheng.yu@intel.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: If a page fault is triggered by a Shadow Stack (SHSTK) access (e.g. CALL/RET) or SHSTK management instructions (e.g. WRUSSQ), then bit[6] of the page fault error code is set. In access_error(), verify a SHSTK page fault is within a SHSTK memory area. It is always an error otherwise. For a valid SHSTK access, set FAULT_FLAG_WRITE to effect copy-on-write. Signed-off-by: Yu-cheng Yu Reviewed-by: Kees Cook --- arch/x86/include/asm/traps.h | 2 ++ arch/x86/mm/fault.c | 18 ++++++++++++++++++ 2 files changed, 20 insertions(+) diff --git a/arch/x86/include/asm/traps.h b/arch/x86/include/asm/traps.h index 7ac26bbd0bef..8023d177fcd8 100644 --- a/arch/x86/include/asm/traps.h +++ b/arch/x86/include/asm/traps.h @@ -169,6 +169,7 @@ enum { * bit 3 == 1: use of reserved bit detected * bit 4 == 1: fault was an instruction fetch * bit 5 == 1: protection keys block access + * bit 6 == 1: shadow stack access fault */ enum x86_pf_error_code { X86_PF_PROT = 1 << 0, @@ -177,5 +178,6 @@ enum x86_pf_error_code { X86_PF_RSVD = 1 << 3, X86_PF_INSTR = 1 << 4, X86_PF_PK = 1 << 5, + X86_PF_SHSTK = 1 << 6, }; #endif /* _ASM_X86_TRAPS_H */ diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c index 304d31d8cbbc..9c1243302663 100644 --- a/arch/x86/mm/fault.c +++ b/arch/x86/mm/fault.c @@ -1187,6 +1187,17 @@ access_error(unsigned long error_code, struct vm_area_struct *vma) (error_code & X86_PF_INSTR), foreign)) return 1; + /* + * Verify X86_PF_SHSTK is within a Shadow Stack VMA. + * It is always an error if there is a Shadow Stack + * fault outside a Shadow Stack VMA. + */ + if (error_code & X86_PF_SHSTK) { + if (!(vma->vm_flags & VM_SHSTK)) + return 1; + return 0; + } + if (error_code & X86_PF_WRITE) { /* write, present and write, not present: */ if (unlikely(!(vma->vm_flags & VM_WRITE))) @@ -1344,6 +1355,13 @@ void do_user_addr_fault(struct pt_regs *regs, perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); + /* + * If the fault is caused by a Shadow Stack access, + * i.e. CALL/RET/SAVEPREVSSP/RSTORSSP, then set + * FAULT_FLAG_WRITE to effect copy-on-write. + */ + if (hw_error_code & X86_PF_SHSTK) + flags |= FAULT_FLAG_WRITE; if (hw_error_code & X86_PF_WRITE) flags |= FAULT_FLAG_WRITE; if (hw_error_code & X86_PF_INSTR)