From patchwork Thu Sep 29 22:29:13 2022
X-Patchwork-Submitter: "Edgecombe, Rick P"
X-Patchwork-Id: 12994671
From: Rick Edgecombe <rick.p.edgecombe@intel.com>
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
    Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
    Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov, Florian Weimer,
    "H. J. Lu", Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz,
    Nadav Amit, Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
    "Ravi V. Shankar", Weijiang Yang, "Kirill A. Shutemov",
    joao.moreira@intel.com, John Allen, kcc@google.com, eranian@google.com,
    rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com
Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu
Subject: [PATCH v2 16/39] x86/mm: Update maybe_mkwrite() for shadow stack
Date: Thu, 29 Sep 2022 15:29:13 -0700
Message-Id: <20220929222936.14584-17-rick.p.edgecombe@intel.com>
In-Reply-To: <20220929222936.14584-1-rick.p.edgecombe@intel.com>
References: <20220929222936.14584-1-rick.p.edgecombe@intel.com>

From: Yu-cheng Yu

When serving a page fault, maybe_mkwrite() makes a PTE writable if there
is a write access to it and its vma has VM_WRITE.

Shadow stack accesses to shadow stack vma's are also treated as write
accesses by the fault handler. This is because shadow stack memory is
writable via certain instructions, so COW has to happen even for shadow
stack reads.

So maybe_mkwrite() should continue to set VM_WRITE vma's as normally
writable, but also set VM_WRITE|VM_SHADOW_STACK vma's as shadow stack.

Do this by adding a pte_mkwrite_shstk() and a cross-arch stub. Check for
VM_SHADOW_STACK in maybe_mkwrite() and call pte_mkwrite_shstk()
accordingly.

Apply the same changes to maybe_pmd_mkwrite().

Signed-off-by: Yu-cheng Yu
Co-developed-by: Rick Edgecombe
Signed-off-by: Rick Edgecombe
Cc: Kees Cook
Reviewed-by: Kees Cook
---

v2:
 - Change to handle shadow stacks that are VM_WRITE|VM_SHADOW_STACK
 - Ditch arch-specific maybe_mkwrite(), and make the code generic

Yu-cheng v29:
 - Remove likely()'s.
 arch/x86/include/asm/pgtable.h |  2 ++
 include/linux/mm.h             | 14 +++++++++++++-
 include/linux/pgtable.h        | 14 ++++++++++++++
 mm/huge_memory.c               |  9 ++++++++-
 mm/memory.c                    |  3 +--
 5 files changed, 38 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 58c7bf9d7392..7a769c4dbc1c 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -419,6 +419,7 @@ static inline pte_t pte_mkdirty(pte_t pte)
 	return pte_set_flags(pte, dirty | _PAGE_SOFT_DIRTY);
 }

+#define pte_mkwrite_shstk pte_mkwrite_shstk
 static inline pte_t pte_mkwrite_shstk(pte_t pte)
 {
 	/* pte_clear_cow() also sets Dirty=1 */
@@ -555,6 +556,7 @@ static inline pmd_t pmd_mkdirty(pmd_t pmd)
 	return pmd_set_flags(pmd, dirty | _PAGE_SOFT_DIRTY);
 }

+#define pmd_mkwrite_shstk pmd_mkwrite_shstk
 static inline pmd_t pmd_mkwrite_shstk(pmd_t pmd)
 {
 	return pmd_clear_cow(pmd);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8cd413c5a329..fef14ab3abcb 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -981,13 +981,25 @@ void free_compound_page(struct page *page);
  * servicing faults for write access. In the normal case, do always want
  * pte_mkwrite. But get_user_pages can cause write faults for mappings
  * that do not have writing enabled, when used by access_process_vm.
+ *
+ * If a vma is shadow stack (a type of writable memory), mark the pte shadow
+ * stack.
  */
+#ifndef maybe_mkwrite
 static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
 {
-	if (likely(vma->vm_flags & VM_WRITE))
+	if (!(vma->vm_flags & VM_WRITE))
+		goto out;
+
+	if (vma->vm_flags & VM_SHADOW_STACK)
+		pte = pte_mkwrite_shstk(pte);
+	else
 		pte = pte_mkwrite(pte);
+
+out:
 	return pte;
 }
+#endif

 vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page);
 void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr);
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 014ee8f0fbaa..21115b4895ca 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -480,6 +480,13 @@ static inline pte_t pte_sw_mkyoung(pte_t pte)
 #define pte_mk_savedwrite pte_mkwrite
 #endif

+#ifndef pte_mkwrite_shstk
+static inline pte_t pte_mkwrite_shstk(pte_t pte)
+{
+	return pte;
+}
+#endif
+
 #ifndef pte_clear_savedwrite
 #define pte_clear_savedwrite pte_wrprotect
 #endif
@@ -488,6 +495,13 @@ static inline pte_t pte_sw_mkyoung(pte_t pte)
 #define pmd_savedwrite pmd_write
 #endif

+#ifndef pmd_mkwrite_shstk
+static inline pmd_t pmd_mkwrite_shstk(pmd_t pmd)
+{
+	return pmd;
+}
+#endif
+
 #ifndef pmd_mk_savedwrite
 #define pmd_mk_savedwrite pmd_mkwrite
 #endif
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e9414ee57c5b..11fc69eb4717 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -554,8 +554,15 @@ __setup("transparent_hugepage=", setup_transparent_hugepage);

 pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
 {
-	if (likely(vma->vm_flags & VM_WRITE))
+	if (!(vma->vm_flags & VM_WRITE))
+		goto out;
+
+	if (vma->vm_flags & VM_SHADOW_STACK)
+		pmd = pmd_mkwrite_shstk(pmd);
+	else
 		pmd = pmd_mkwrite(pmd);
+
+out:
 	return pmd;
 }

diff --git a/mm/memory.c b/mm/memory.c
index 4ba73f5aa8bb..6e8379f6793c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4098,8 +4098,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)

 	entry = mk_pte(page, vma->vm_page_prot);
 	entry = pte_sw_mkyoung(entry);
-	if (vma->vm_flags & VM_WRITE)
-		entry = pte_mkwrite(pte_mkdirty(entry));
+	entry = maybe_mkwrite(pte_mkdirty(entry), vma);

 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
 			&vmf->ptl);
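
As a sanity check of the dispatch the patched maybe_mkwrite() and
maybe_pmd_mkwrite() now implement: only vma's with both VM_WRITE and
VM_SHADOW_STACK take the shadow stack path, VM_WRITE alone takes the normal
pte_mkwrite() path, and everything else leaves the pte untouched. Below is a
minimal standalone sketch of just that decision order; the EX_* flag values
and ex_* helper are invented for illustration (the real VM_WRITE and
VM_SHADOW_STACK vm_flags bits live in include/linux/mm.h).

#include <stdio.h>

/* Stand-in flag values for illustration only (not the kernel's definitions). */
#define EX_VM_WRITE        0x1ul
#define EX_VM_SHADOW_STACK 0x2ul

/* Mirrors the decision order in the patched maybe_mkwrite() above. */
static const char *ex_maybe_mkwrite(unsigned long vm_flags)
{
	if (!(vm_flags & EX_VM_WRITE))
		return "pte unchanged";
	if (vm_flags & EX_VM_SHADOW_STACK)
		return "pte_mkwrite_shstk()";
	return "pte_mkwrite()";
}

int main(void)
{
	printf("none:                     %s\n", ex_maybe_mkwrite(0));
	printf("VM_WRITE:                 %s\n", ex_maybe_mkwrite(EX_VM_WRITE));
	printf("VM_WRITE|VM_SHADOW_STACK: %s\n",
	       ex_maybe_mkwrite(EX_VM_WRITE | EX_VM_SHADOW_STACK));
	printf("VM_SHADOW_STACK only:     %s\n",
	       ex_maybe_mkwrite(EX_VM_SHADOW_STACK));
	return 0;
}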