From patchwork Wed Jan 27 08:30:29 2016
X-Patchwork-Submitter: Huaitong Han
X-Patchwork-Id: 8131111
From: Huaitong Han
To: jbeulich@suse.com, andrew.cooper3@citrix.com, george.dunlap@eu.citrix.com, tim@xen.org, keir@xen.org
Cc: Huaitong Han, George Dunlap, xen-devel@lists.xen.org
Date: Wed, 27 Jan 2016 16:30:29 +0800
Message-Id: <1453883430-9098-5-git-send-email-huaitong.han@intel.com>
In-Reply-To: <1453883430-9098-1-git-send-email-huaitong.han@intel.com>
References: <1453883430-9098-1-git-send-email-huaitong.han@intel.com>
Subject: [Xen-devel] [PATCH V7 4/5] xen/mm: Clean up pfec handling in gva_to_gfn

From: George Dunlap

Changes in v7:
* Update SDM chapter comments.
* Add is_hvm_vcpu check in sh_gva_to_gfn.
---
At the moment, the pfec argument to gva_to_gfn has two functions:

* To inform guest_walk what kind of access is happening

* As a value to pass back into the guest in the event of a fault.

Unfortunately this is not treated quite consistently: the hvm_fetch_*
functions will "pre-clear" the PFEC_insn_fetch flag before calling
gva_to_gfn, meaning guest_walk doesn't actually know whether a given
access is an instruction fetch or not.  This works now, but will cause
issues when pkeys are introduced, since guest_walk will need to know
whether an access is an instruction fetch even if it doesn't return
PFEC_insn_fetch.

Fix this by making a clean separation between the in and out
functionalities of the pfec argument:

1. Always pass in the access type to gva_to_gfn

2. Filter out inappropriate access flags before returning from
   gva_to_gfn.

(The PFEC_insn_fetch flag should only be passed to the guest if either
NX or SMEP is enabled.  See the Intel 64 and IA-32 Architectures SDM,
Volume 3, Chapter "Paging", Section "Page-Fault Exceptions".)

Signed-off-by: George Dunlap
Signed-off-by: Huaitong Han
Acked-by: Jan Beulich
---
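A minimal, compilable sketch of the in/out convention described above,
not part of the patch: demo_walk() and demo_gva_to_gfn() are
hypothetical stand-ins for guest_walk_tables()/gva_to_gfn(), and the
two booleans stand in for hvm_nx_enabled()/hvm_smep_enabled().

#include <stdbool.h>
#include <stdint.h>

#define PFEC_page_present (1U << 0)
#define PFEC_insn_fetch   (1U << 4)
#define INVALID_GFN       (~0UL)

static bool nx_enabled, smep_enabled;

/* Hypothetical walker: always sees the true access type, including
 * PFEC_insn_fetch for instruction fetches (point 1 above). */
static unsigned long demo_walk(unsigned long gva, uint32_t access_type)
{
    (void)gva;
    (void)access_type;
    return INVALID_GFN;            /* pretend the walk faulted */
}

static unsigned long demo_gva_to_gfn(unsigned long gva, uint32_t *pfec)
{
    /* In: pfec[0] carries the real access type. */
    unsigned long gfn = demo_walk(gva, pfec[0]);

    /* Out: filter flags the guest must not see (point 2 above); per
     * the SDM, PFEC_insn_fetch is reported only when NX or SMEP is
     * enabled. */
    if ( gfn == INVALID_GFN && !nx_enabled && !smep_enabled )
        pfec[0] &= ~PFEC_insn_fetch;

    return gfn;
}

int main(void)
{
    uint32_t pfec = PFEC_page_present | PFEC_insn_fetch;

    demo_gva_to_gfn(0xfff0, &pfec);
    /* With NX and SMEP both off, the reported error code no longer
     * carries PFEC_insn_fetch. */
    return (pfec & PFEC_insn_fetch) ? 1 : 0;
}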
 xen/arch/x86/hvm/hvm.c           |  8 ++------
 xen/arch/x86/mm/hap/guest_walk.c | 10 +++++++++-
 xen/arch/x86/mm/shadow/multi.c   |  6 ++++++
 3 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 674feea..5ec2ae1 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4438,11 +4438,9 @@ enum hvm_copy_result hvm_copy_from_guest_virt(
 enum hvm_copy_result hvm_fetch_from_guest_virt(
     void *buf, unsigned long vaddr, int size, uint32_t pfec)
 {
-    if ( hvm_nx_enabled(current) || hvm_smep_enabled(current) )
-        pfec |= PFEC_insn_fetch;
     return __hvm_copy(buf, vaddr, size,
                       HVMCOPY_from_guest | HVMCOPY_fault | HVMCOPY_virt,
-                      PFEC_page_present | pfec);
+                      PFEC_page_present | PFEC_insn_fetch | pfec);
 }
 
 enum hvm_copy_result hvm_copy_to_guest_virt_nofault(
@@ -4464,11 +4462,9 @@ enum hvm_copy_result hvm_copy_from_guest_virt_nofault(
 enum hvm_copy_result hvm_fetch_from_guest_virt_nofault(
     void *buf, unsigned long vaddr, int size, uint32_t pfec)
 {
-    if ( hvm_nx_enabled(current) || hvm_smep_enabled(current) )
-        pfec |= PFEC_insn_fetch;
     return __hvm_copy(buf, vaddr, size,
                       HVMCOPY_from_guest | HVMCOPY_no_fault | HVMCOPY_virt,
-                      PFEC_page_present | pfec);
+                      PFEC_page_present | PFEC_insn_fetch | pfec);
 }
 
 unsigned long copy_to_user_hvm(void *to, const void *from, unsigned int len)
diff --git a/xen/arch/x86/mm/hap/guest_walk.c b/xen/arch/x86/mm/hap/guest_walk.c
index 49d0328..d2716f9 100644
--- a/xen/arch/x86/mm/hap/guest_walk.c
+++ b/xen/arch/x86/mm/hap/guest_walk.c
@@ -82,7 +82,7 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
     if ( !top_page )
     {
         pfec[0] &= ~PFEC_page_present;
-        return INVALID_GFN;
+        goto out_tweak_pfec;
     }
 
     top_mfn = _mfn(page_to_mfn(top_page));
@@ -139,6 +139,14 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
     if ( missing & _PAGE_SHARED )
         pfec[0] = PFEC_page_shared;
 
+ out_tweak_pfec:
+    /*
+     * SDM Intel 64 Volume 3, Chapter Paging, PAGE-FAULT EXCEPTIONS:
+     * The PFEC_insn_fetch flag is set only when NX or SMEP are enabled.
+     */
+    if ( !hvm_nx_enabled(v) && !hvm_smep_enabled(v) )
+        pfec[0] &= ~PFEC_insn_fetch;
+
     return INVALID_GFN;
 }
 
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 162c06f..d42597c 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3669,6 +3669,12 @@ sh_gva_to_gfn(struct vcpu *v, struct p2m_domain *p2m,
             pfec[0] &= ~PFEC_page_present;
         if ( missing & _PAGE_INVALID_BITS )
             pfec[0] |= PFEC_reserved_bit;
+        /*
+         * SDM Intel 64 Volume 3, Chapter Paging, PAGE-FAULT EXCEPTIONS:
+         * The PFEC_insn_fetch flag is set only when NX or SMEP are enabled.
+         */
+        if ( is_hvm_vcpu(v) && !hvm_nx_enabled(v) && !hvm_smep_enabled(v) )
+            pfec[0] &= ~PFEC_insn_fetch;
         return INVALID_GFN;
     }
     gfn = guest_walk_to_gfn(&gw);
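For contrast, the caller side after this change, again as a hedged
sketch rather than the real implementation: demo_copy() is a stub
standing in for __hvm_copy(), and demo_fetch() mirrors the simplified
shape that hvm_fetch_from_guest_virt() takes in the hunk above.

#include <stdint.h>
#include <stdio.h>

#define PFEC_page_present (1U << 0)
#define PFEC_insn_fetch   (1U << 4)

/* Stub for the copy/walk machinery: just shows what the walker sees. */
static int demo_copy(unsigned long vaddr, uint32_t pfec)
{
    printf("walk of %#lx sees pfec %#x\n", vaddr, pfec);
    return 0;
}

static int demo_fetch(unsigned long vaddr, uint32_t pfec)
{
    /*
     * Before this patch, PFEC_insn_fetch was ORed in only when NX or
     * SMEP was enabled, so the walker could not distinguish fetches
     * from reads.  Now the flag is passed in unconditionally, and the
     * walker filters it back out of the reported error code whenever
     * neither feature is enabled.
     */
    return demo_copy(vaddr, PFEC_page_present | PFEC_insn_fetch | pfec);
}

int main(void)
{
    return demo_fetch(0xfff0, 0);
}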