From patchwork Fri Sep 18 19:21:16 2020
X-Patchwork-Submitter: Yu-cheng Yu <yu-cheng.yu@intel.com>
X-Patchwork-Id: 11785817
From: Yu-cheng Yu <yu-cheng.yu@intel.com>
Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H.J. Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , "Ravi V. Shankar" , Vedvyas Shanbhogue , Dave Martin , Weijiang Yang Cc: Yu-cheng Yu Subject: [PATCH v12 18/26] mm: Update can_follow_write_pte() for shadow stack Date: Fri, 18 Sep 2020 12:21:16 -0700 Message-Id: <20200918192125.25473-19-yu-cheng.yu@intel.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20200918192125.25473-1-yu-cheng.yu@intel.com> References: <20200918192125.25473-1-yu-cheng.yu@intel.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Can_follow_write_pte() ensures a read-only page is COWed by checking the FOLL_COW flag, and uses pte_dirty() to validate the flag is still valid. Like a writable data page, a shadow stack page is writable, and becomes read-only during copy-on-write, but it is always dirty. Thus, in the can_follow_write_pte() check, it belongs to the writable page case and should be excluded from the read-only page pte_dirty() check. Apply the same changes to can_follow_write_pmd(). Signed-off-by: Yu-cheng Yu --- v10: - Reverse name changes to can_follow_write_*(). mm/gup.c | 8 +++++--- mm/huge_memory.c | 8 +++++--- 2 files changed, 10 insertions(+), 6 deletions(-) diff --git a/mm/gup.c b/mm/gup.c index e5739a1974d5..bbe4d32269e3 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -384,10 +384,12 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address, * FOLL_FORCE can write to even unwritable pte's, but only * after we've gone through a COW cycle and they are dirty. */ -static inline bool can_follow_write_pte(pte_t pte, unsigned int flags) +static inline bool can_follow_write_pte(pte_t pte, unsigned int flags, + struct vm_area_struct *vma) { return pte_write(pte) || - ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte)); + ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte) && + !arch_shadow_stack_mapping(vma->vm_flags)); } static struct page *follow_page_pte(struct vm_area_struct *vma, @@ -430,7 +432,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma, } if ((flags & FOLL_NUMA) && pte_protnone(pte)) goto no_page; - if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) { + if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags, vma)) { pte_unmap_unlock(ptep, ptl); return NULL; } diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 8c381794c084..6b2a400996fa 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1296,10 +1296,12 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd) * FOLL_FORCE can write to even unwritable pmd's, but only * after we've gone through a COW cycle and they are dirty. 
  */
-static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
+static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags,
+					struct vm_area_struct *vma)
 {
 	return pmd_write(pmd) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
+		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd) &&
+		 !arch_shadow_stack_mapping(vma->vm_flags));
 }
 
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
@@ -1312,7 +1314,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 
 	assert_spin_locked(pmd_lockptr(mm, pmd));
 
-	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags))
+	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags, vma))
 		goto out;
 
 	/* Avoid dumping huge zero page */
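
For reference: the arch_shadow_stack_mapping() helper used above is
introduced by an earlier patch in this series and is not part of this
diff.  A minimal sketch of its shape, assuming the VM_SHSTK vma flag used
elsewhere in the series (the flag and the config symbol below are
assumptions here, not confirmed by this patch):

#ifdef CONFIG_ARCH_HAS_SHADOW_STACK	/* config symbol name is an assumption */
static inline bool arch_shadow_stack_mapping(vm_flags_t vm_flags)
{
	/* Assumed: VM_SHSTK tags a shadow stack VMA on x86. */
	return !!(vm_flags & VM_SHSTK);
}
#else
static inline bool arch_shadow_stack_mapping(vm_flags_t vm_flags)
{
	/* No shadow stack support: never exclude a mapping. */
	return false;
}
#endif

Because a shadow stack pte is always dirty, pte_dirty() alone cannot prove
that a COW cycle has already happened for such a mapping.  When the helper
returns true, the FOLL_FORCE/FOLL_COW fast path is skipped, follow_page_pte()
returns NULL, and the GUP caller faults the page in with FOLL_WRITE, going
through a regular COW fault instead.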