From patchwork Mon Sep 28 10:44:09 2020
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 11803383
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, Paul Durrant,
    "olekstysh@gmail.com", George Dunlap
Subject: [PATCH] x86/HVM: refine when to send mapcache invalidation request to qemu
Date: Mon, 28 Sep 2020 12:44:09 +0200
List-Id: Xen developer discussion

For one it was wrong to send the request only upon a completed
hypercall: Even if only part of it completed before getting preempted,
invalidation ought to occur. Therefore fold the two return statements.

And then XENMEM_decrease_reservation isn't the only means by which
pages can get removed from a guest, yet all removals ought to be
signaled to qemu. Put setting of the flag into the central
p2m_remove_page() underlying all respective hypercalls.
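To illustrate the first point, here is a minimal standalone model (not
Xen code; "mapcache_invalidate", "hcall_preempted", and
"send_invalidate_req" are simplified stand-ins for the real fields and
function) of why the ordering of the preemption check and the flag
check matters:

/* Standalone model of the exit-path ordering -- not Xen code. */
#include <stdbool.h>
#include <stdio.h>

static bool mapcache_invalidate; /* models d->arch.hvm.qemu_mapcache_invalidate */
static bool hcall_preempted;     /* models curr->hcall_preempted */

static void send_invalidate_req(void)
{
    printf("invalidation request sent to qemu\n");
}

/* Old exit path: the preemption check short-circuits the flag check. */
static int complete_hypercall_old(void)
{
    if ( hcall_preempted )
        return 1; /* "preempted" -- the flag check below is never reached */

    if ( mapcache_invalidate )
    {
        mapcache_invalidate = false;
        send_invalidate_req();
    }

    return 0; /* "completed" */
}

/* New exit path: act on the flag first, then report preemption. */
static int complete_hypercall_new(void)
{
    if ( mapcache_invalidate )
    {
        mapcache_invalidate = false;
        send_invalidate_req();
    }

    return hcall_preempted ? 1 : 0;
}

int main(void)
{
    /* A removal hypercall freed some pages, then got preempted: */
    mapcache_invalidate = true;
    hcall_preempted = true;

    printf("old path: ");
    if ( complete_hypercall_old() )
        printf("nothing sent despite pages having been removed\n");

    printf("new path: ");
    complete_hypercall_new();

    return 0;
}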
Plus finally there's no point sending the request for the local domain
when the domain acted upon is a different one. If anything that
domain's qemu's mapcache may need invalidating, but it's unclear how
useful this would be: That remote domain may not execute hypercalls at
all, and hence may never make it to the point where the request
actually gets issued. I guess the assumption is that such manipulation
is not supposed to happen anymore once the guest has been started?

Signed-off-by: Jan Beulich
---
Putting the check in guest_physmap_remove_page() might also suffice,
but then a separate is_hvm_domain() would need adding again.

--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -31,7 +31,6 @@
 
 static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
-    const struct vcpu *curr = current;
     long rc;
 
     switch ( cmd & MEMOP_CMD_MASK )
@@ -41,14 +40,11 @@ static long hvm_memory_op(int cmd, XEN_G
         return -ENOSYS;
     }
 
-    if ( !curr->hcall_compat )
+    if ( !current->hcall_compat )
         rc = do_memory_op(cmd, arg);
     else
         rc = compat_memory_op(cmd, arg);
 
-    if ( (cmd & MEMOP_CMD_MASK) == XENMEM_decrease_reservation )
-        curr->domain->arch.hvm.qemu_mapcache_invalidate = true;
-
     return rc;
 }
 
@@ -326,14 +322,11 @@ int hvm_hypercall(struct cpu_user_regs *
     HVM_DBG_LOG(DBG_LEVEL_HCALL, "hcall%lu -> %lx",
                 eax, regs->rax);
 
-    if ( curr->hcall_preempted )
-        return HVM_HCALL_preempted;
-
     if ( unlikely(currd->arch.hvm.qemu_mapcache_invalidate) &&
          test_and_clear_bool(currd->arch.hvm.qemu_mapcache_invalidate) )
         send_invalidate_req();
 
-    return HVM_HCALL_completed;
+    return curr->hcall_preempted ? HVM_HCALL_preempted : HVM_HCALL_completed;
 }
 
 /*
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -812,6 +812,9 @@ p2m_remove_page(struct p2m_domain *p2m,
         }
     }
 
+    if ( p2m->domain == current->domain )
+        p2m->domain->arch.hvm.qemu_mapcache_invalidate = true;
+
     return p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
                          p2m->default_access);
 }
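And for the second point, a similar standalone model (again not Xen
code; the helper names below are made up for illustration): funneling
every removal path through one underlying function catches removals
the old XENMEM_decrease_reservation-only placement missed, while the
current-domain check leaves remote domains' flags alone:

/* Standalone model of the centralized removal hook -- not Xen code. */
#include <stdbool.h>
#include <stdio.h>

struct domain
{
    int id;
    bool mapcache_invalidate; /* models arch.hvm.qemu_mapcache_invalidate */
};

static struct domain *current_domain; /* models current->domain */

/* Models p2m_remove_page(): every guest-visible page removal ends here. */
static void remove_page(struct domain *d, unsigned long gfn)
{
    printf("removing gfn %#lx from d%d\n", gfn, d->id);

    /*
     * Only flag the domain when it is the one making the hypercall: a
     * remote domain may never issue a hypercall of its own, so its flag
     * might never be acted upon anyway.
     */
    if ( d == current_domain )
        d->mapcache_invalidate = true;
}

/* Two of the several hypercall paths that can remove pages. */
static void decrease_reservation(struct domain *d, unsigned long gfn)
{
    remove_page(d, gfn);
}

static void remove_from_physmap(struct domain *d, unsigned long gfn)
{
    remove_page(d, gfn);
}

int main(void)
{
    struct domain guest = { .id = 1 }, other = { .id = 2 };

    current_domain = &guest;

    decrease_reservation(&guest, 0x1000); /* flags the guest */
    remove_from_physmap(&guest, 0x2000);  /* also flags it; the old code missed this path */
    remove_page(&other, 0x3000);          /* remote domain: flag left alone */

    printf("guest flagged: %d, other flagged: %d\n",
           guest.mapcache_invalidate, other.mapcache_invalidate);

    return 0;
}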