From patchwork Thu Jun 18 14:39:04 2020
X-Patchwork-Submitter: Tamas K Lengyel
X-Patchwork-Id: 11612311
From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 for-4.14] x86/vmx: use P2M_ALLOC in vmx_load_pdptrs instead of P2M_UNSHARE
Date: Thu, 18 Jun 2020 07:39:04 -0700
Cc: Kevin Tian, Tamas K Lengyel, Jun Nakajima, Wei Liu, Paul Durrant,
 Andrew Cooper, Jan Beulich, Roger Pau Monné

While forking VMs running a small RTOS system (Zephyr), a Xen crash has
been observed due to a mm-lock order violation while copying the HVM CPU
context from the parent. The issue was identified to be caused by
hap_update_paging_modes first getting a lock on the cr3 gfn using
get_gfn. This call also creates a shared entry in the fork's memory map
for that gfn. The function later calls hap_update_cr3 while holding the
paging_lock, which results in the lock-order violation in
vmx_load_pdptrs when it grabs the page with the P2M_UNSHARE flag set and
thus tries to unshare the above entry.

Since vmx_load_pdptrs only reads from the page, its use of P2M_UNSHARE
was unnecessary to begin with. P2M_ALLOC is the appropriate flag to
ensure the p2m is properly populated.
Note that the lock-order violation is avoided because, before the
paging_lock is taken, a lookup is performed with P2M_ALLOC that forks
the page, so the second lookup in vmx_load_pdptrs succeeds without
having to perform the fork itself. We keep P2M_ALLOC in vmx_load_pdptrs
because there are code-paths leading up to it which don't take the
paging_lock and have no previous lookup. Currently no other code-path
leads there with the paging_lock taken, so no further adjustments are
necessary.

Signed-off-by: Tamas K Lengyel
Reviewed-by: Roger Pau Monné
Reviewed-by: Kevin Tian
---
v3: expand commit message to explain why there is no lock-order violation
---
 xen/arch/x86/hvm/vmx/vmx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index ab19d9424e..cc6d4ece22 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1325,7 +1325,7 @@ static void vmx_load_pdptrs(struct vcpu *v)
     if ( (cr3 & 0x1fUL) && !hvm_pcid_enabled(v) )
         goto crash;
 
-    page = get_page_from_gfn(v->domain, cr3 >> PAGE_SHIFT, &p2mt, P2M_UNSHARE);
+    page = get_page_from_gfn(v->domain, cr3 >> PAGE_SHIFT, &p2mt, P2M_ALLOC);
     if ( !page )
     {
         /* Ideally you don't want to crash but rather go into a wait