From patchwork Wed Jan  8 17:14:02 2020
X-Patchwork-Submitter: Tamas K Lengyel
X-Patchwork-Id: 11324119
From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Date: Wed, 8 Jan 2020 09:14:02 -0800
Message-Id: <199ba3c6fbe8f3de3b1513f70c5ea77f67aa2b42.1578503483.git.tamas.lengyel@intel.com>
X-Mailer: git-send-email 2.20.1
Subject: [Xen-devel] [PATCH v4 05/18] x86/mem_sharing: don't try to unshare twice during page fault
Cc: Andrew Cooper, Tamas K Lengyel, Wei Liu, Jan Beulich, Roger Pau Monné

An attempt to unshare the page was already made in get_gfn_type_access.
If that didn't work, trying again here is pointless. Don't try to send
a vm_event again either; simply check whether a ring is present.

Signed-off-by: Tamas K Lengyel
Acked-by: Jan Beulich
---
 xen/arch/x86/hvm/hvm.c | 28 ++++++++++++++++++----------
 1 file changed, 18 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 38e9006c92..5d24ceb469 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -38,6 +38,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1702,11 +1703,14 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     struct domain *currd = curr->domain;
     struct p2m_domain *p2m, *hostp2m;
     int rc, fall_through = 0, paged = 0;
-    int sharing_enomem = 0;
     vm_event_request_t *req_ptr = NULL;
     bool sync = false;
     unsigned int page_order;

+#ifdef CONFIG_MEM_SHARING
+    bool sharing_enomem = false;
+#endif
+
     /* On Nested Virtualization, walk the guest page table.
      * If this succeeds, all is fine.
      * If this fails, inject a nested page fault into the guest.
@@ -1894,14 +1898,16 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     if ( p2m_is_paged(p2mt) || (p2mt == p2m_ram_paging_out) )
         paged = 1;

-    /* Mem sharing: unshare the page and try again */
-    if ( npfec.write_access && (p2mt == p2m_ram_shared) )
+#ifdef CONFIG_MEM_SHARING
+    /* Mem sharing: if still shared on write access then it's enomem */
+    if ( npfec.write_access && p2m_is_shared(p2mt) )
     {
         ASSERT(p2m_is_hostp2m(p2m));
-        sharing_enomem = mem_sharing_unshare_page(currd, gfn);
+        sharing_enomem = true;
         rc = 1;
         goto out_put_gfn;
     }
+#endif

     /* Spurious fault? PoD and log-dirty also take this path. */
     if ( p2m_is_ram(p2mt) )
@@ -1955,19 +1961,21 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
      */
     if ( paged )
         p2m_mem_paging_populate(currd, gfn);
+
+#ifdef CONFIG_MEM_SHARING
     if ( sharing_enomem )
     {
-        int rv;
-
-        if ( (rv = mem_sharing_notify_enomem(currd, gfn, true)) < 0 )
+        if ( !vm_event_check_ring(currd->vm_event_share) )
         {
-            gdprintk(XENLOG_ERR, "Domain %hu attempt to unshare "
-                     "gfn %lx, ENOMEM and no helper (rc %d)\n",
-                     currd->domain_id, gfn, rv);
+            gprintk(XENLOG_ERR, "Domain %pd attempt to unshare "
+                    "gfn %lx, ENOMEM and no helper\n",
+                    currd, gfn);
             /* Crash the domain */
             rc = 0;
         }
     }
+#endif
+
     if ( req_ptr )
     {
         if ( monitor_traps(curr, sync, req_ptr) < 0 )