From patchwork Tue Apr 27 20:43:04 2021
X-Patchwork-Submitter: Yu-cheng Yu
X-Patchwork-Id: 12227261
From: Yu-cheng Yu
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
    Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
    Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov, Florian Weimer,
    "H.J. Lu", Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz,
    Nadav Amit, Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
    "Ravi V. Shankar", Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
    Pengfei Xu, Haitao Huang
Cc: Yu-cheng Yu, "Kirill A. Shutemov"
Subject: [PATCH v26 19/30] mm: Update can_follow_write_pte() for shadow stack
Date: Tue, 27 Apr 2021 13:43:04 -0700
Message-Id: <20210427204315.24153-20-yu-cheng.yu@intel.com>
In-Reply-To: <20210427204315.24153-1-yu-cheng.yu@intel.com>
References: <20210427204315.24153-1-yu-cheng.yu@intel.com>

can_follow_write_pte() ensures a read-only page is COWed by checking the
FOLL_COW flag, and uses pte_dirty() to validate the flag is still valid.

Like a writable data page, a shadow stack page is writable, and becomes
read-only during copy-on-write, but it is always dirty.  Thus, in the
can_follow_write_pte() check, it belongs to the writable page case and
should be excluded from the read-only page pte_dirty() check.  Apply the
same changes to can_follow_write_pmd().

While at it, also split the long line into smaller ones.

Signed-off-by: Yu-cheng Yu
Reviewed-by: Kirill A. Shutemov
Cc: Kees Cook
---
v26:
- Instead of passing vm_flags, pass down vma pointer to
  can_follow_write_*().

v25:
- Split long line into smaller ones.

v24:
- Change arch_shadow_stack_mapping() to is_shadow_stack_mapping().

 mm/gup.c         | 16 ++++++++++++----
 mm/huge_memory.c | 16 ++++++++++++----
 2 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index ef7d2da9f03f..f9705281e853 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -356,10 +356,18 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
  * FOLL_FORCE can write to even unwritable pte's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
+static inline bool can_follow_write_pte(pte_t pte, unsigned int flags,
+					struct vm_area_struct *vma)
 {
-	return pte_write(pte) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
+	if (pte_write(pte))
+		return true;
+	if ((flags & (FOLL_FORCE | FOLL_COW)) != (FOLL_FORCE | FOLL_COW))
+		return false;
+	if (!pte_dirty(pte))
+		return false;
+	if (is_shadow_stack_mapping(vma->vm_flags))
+		return false;
+	return true;
 }
 
 static struct page *follow_page_pte(struct vm_area_struct *vma,
@@ -402,7 +410,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	}
 	if ((flags & FOLL_NUMA) && pte_protnone(pte))
 		goto no_page;
-	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) {
+	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags, vma)) {
 		pte_unmap_unlock(ptep, ptl);
 		return NULL;
 	}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 044029ef45cd..cf10c3822853 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1338,10 +1338,18 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
  * FOLL_FORCE can write to even unwritable pmd's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
+static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags,
+					struct vm_area_struct *vma)
 {
-	return pmd_write(pmd) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
+	if (pmd_write(pmd))
+		return true;
+	if ((flags & (FOLL_FORCE | FOLL_COW)) != (FOLL_FORCE | FOLL_COW))
+		return false;
+	if (!pmd_dirty(pmd))
+		return false;
+	if (is_shadow_stack_mapping(vma->vm_flags))
+		return false;
+	return true;
 }
 
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
@@ -1354,7 +1362,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 
 	assert_spin_locked(pmd_lockptr(mm, pmd));
 
-	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags))
+	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags, vma))
 		goto out;
 
 	/* Avoid dumping huge zero page */
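
A note for readers following the series from outside: the relationship
between the old one-liner and the new split-up check is easy to verify in
isolation.  Below is a minimal standalone C sketch of the predicate logic;
the FOLL_* values, the VM_SHADOW_STACK bit, and the is_shadow_stack_mapping()
stand-in are illustrative assumptions (the real helper is added by an earlier
patch in this series), not the kernel's actual definitions.

/*
 * Standalone model of the can_follow_write_pte()/can_follow_write_pmd()
 * rewrite.  The flag values and the VM_SHADOW_STACK bit below are
 * illustrative stand-ins, not the kernel's definitions.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define FOLL_FORCE	0x0010	/* illustrative value */
#define FOLL_COW	0x4000	/* illustrative value */
#define VM_SHADOW_STACK	0x0800	/* illustrative value */

/* Modeled on the helper an earlier patch in this series introduces. */
static bool is_shadow_stack_mapping(unsigned long vm_flags)
{
	return vm_flags & VM_SHADOW_STACK;
}

/*
 * The new, split-up form of the check, with pte_write()/pte_dirty()
 * replaced by plain booleans.
 */
static bool can_follow_write_new(bool write, unsigned int flags,
				 bool dirty, unsigned long vm_flags)
{
	if (write)
		return true;
	if ((flags & (FOLL_FORCE | FOLL_COW)) != (FOLL_FORCE | FOLL_COW))
		return false;
	if (!dirty)
		return false;
	if (is_shadow_stack_mapping(vm_flags))
		return false;
	return true;
}

/* The old one-liner, which had no shadow stack case. */
static bool can_follow_write_old(bool write, unsigned int flags, bool dirty)
{
	return write ||
		((flags & FOLL_FORCE) && (flags & FOLL_COW) && dirty);
}

int main(void)
{
	unsigned int flags;
	int write, dirty;

	/* For ordinary mappings the two forms agree on every input. */
	for (flags = 0; flags <= (FOLL_FORCE | FOLL_COW); flags++)
		for (write = 0; write < 2; write++)
			for (dirty = 0; dirty < 2; dirty++)
				assert(can_follow_write_new(write, flags, dirty, 0) ==
				       can_follow_write_old(write, flags, dirty));

	/*
	 * A shadow stack page is always dirty, yet FOLL_FORCE must not
	 * treat it as having been through a COW cycle.
	 */
	assert(!can_follow_write_new(false, FOLL_FORCE | FOLL_COW, true,
				     VM_SHADOW_STACK));

	printf("old and new checks agree for ordinary mappings\n");
	return 0;
}

The mask test (flags & (FOLL_FORCE | FOLL_COW)) != (FOLL_FORCE | FOLL_COW)
is just the negated form of the old (flags & FOLL_FORCE) && (flags & FOLL_COW)
condition, which is what the exhaustive loop above confirms.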