From patchwork Tue Apr 23 08:10:25 2019
X-Patchwork-Submitter: Andrii Anisov
X-Patchwork-Id: 10912175
From: Andrii Anisov
To: xen-devel@lists.xen.org
Cc: Stefano Stabellini, Andrii Anisov, Konrad Rzeszutek Wilk, George Dunlap,
 Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Jan Beulich,
 xen-devel@lists.xenproject.org, Wei Liu, Roger Pau Monné
Date: Tue, 23 Apr 2019 11:10:25 +0300
Message-Id: <1556007026-31057-2-git-send-email-andrii.anisov@gmail.com>
In-Reply-To: <1556007026-31057-1-git-send-email-andrii.anisov@gmail.com>
References: <1556007026-31057-1-git-send-email-andrii.anisov@gmail.com>
Subject: [Xen-devel] [PATCH v2 1/2] xen: introduce
 VCPUOP_register_runstate_phys_memory_area hypercall

From: Andrii Anisov

The hypercall employs the same vcpu_register_runstate_memory_area
structure for its interface, but requires the registered area not to
cross a page boundary.

Signed-off-by: Andrii Anisov
---
 xen/common/domain.c       |  5 ++++-
 xen/include/public/vcpu.h | 15 +++++++++++++++
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 88bbe98..ae22049 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1532,10 +1532,13 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
             vcpu_runstate_get(v, &runstate);
             __copy_to_guest(runstate_guest(v), &runstate, 1);
         }
-
         break;
     }
 
+    case VCPUOP_register_runstate_phys_memory_area:
+        rc = -EOPNOTSUPP;
+        break;
+
 #ifdef VCPU_TRAP_NMI
     case VCPUOP_send_nmi:
         if ( !guest_handle_is_null(arg) )
diff --git a/xen/include/public/vcpu.h b/xen/include/public/vcpu.h
index 3623af9..d7da4a3 100644
--- a/xen/include/public/vcpu.h
+++ b/xen/include/public/vcpu.h
@@ -235,6 +235,21 @@ struct vcpu_register_time_memory_area {
 typedef struct vcpu_register_time_memory_area vcpu_register_time_memory_area_t;
 DEFINE_XEN_GUEST_HANDLE(vcpu_register_time_memory_area_t);
 
+/*
+ * Register a shared memory area from which the guest may obtain its own
+ * runstate information without needing to execute a hypercall.
+ * Notes:
+ *  1. The registered address must be a guest physical address.
+ *  2. The registered runstate area must not cross a page boundary.
+ *  3. Only one shared area may be registered per VCPU. The shared area is
+ *     updated by the hypervisor each time the VCPU is scheduled. Thus
+ *     runstate.state will always be RUNSTATE_running and
+ *     runstate.state_entry_time will indicate the system time at which the
+ *     VCPU was last scheduled to run.
+ * @extra_arg == pointer to vcpu_register_runstate_memory_area structure.
+ */
+#define VCPUOP_register_runstate_phys_memory_area 14
+
 #endif /* __XEN_PUBLIC_VCPU_H__ */
 
 /*
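For context, the sketch below shows roughly how a guest could consume the
new interface. It is a hypothetical illustration, not part of the series:
HYPERVISOR_vcpu_op() is the usual guest entry point for VCPUOPs (as in
Linux's Xen support), virt_to_phys() stands in for the guest's
virtual-to-physical translation, and the explicit 64-byte alignment is one
way to satisfy note 2, since it keeps the roughly 48-byte runstate
structure inside a single page.

    #include <xen/interface/vcpu.h>   /* assumed Linux-style header path */

    /* Aligned so the area cannot straddle a page boundary (note 2). */
    static struct vcpu_runstate_info my_runstate
        __attribute__((aligned(64)));

    static int register_runstate_phys(unsigned int vcpu)
    {
        struct vcpu_register_runstate_memory_area area = {};

        /* Note 1: addr.p carries a guest *physical* address. */
        area.addr.p = virt_to_phys(&my_runstate);

        return HYPERVISOR_vcpu_op(VCPUOP_register_runstate_phys_memory_area,
                                  vcpu, &area);
    }

Until the implementation lands in the next patch, the stub above returns
-EOPNOTSUPP, which lets a guest detect the missing operation and fall back
to the existing VCPUOP_register_runstate_memory_area.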
From patchwork Tue Apr 23 08:10:26 2019
X-Patchwork-Submitter: Andrii Anisov
X-Patchwork-Id: 10912179
From: Andrii Anisov
To: xen-devel@lists.xen.org
Cc: Stefano Stabellini, Andrii Anisov, Konrad Rzeszutek Wilk, George Dunlap,
 Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Jan Beulich,
 xen-devel@lists.xenproject.org, Wei Liu, Roger Pau Monné
Date: Tue, 23 Apr 2019 11:10:26 +0300
Message-Id: <1556007026-31057-3-git-send-email-andrii.anisov@gmail.com>
In-Reply-To: <1556007026-31057-1-git-send-email-andrii.anisov@gmail.com>
References: <1556007026-31057-1-git-send-email-andrii.anisov@gmail.com>
Subject: [Xen-devel] [PATCH v2 2/2] xen: implement
 VCPUOP_register_runstate_phys_memory_area

From: Andrii Anisov

VCPUOP_register_runstate_phys_memory_area is implemented by mapping the
guest-registered runstate area into the hypervisor, so that the area can
be updated with a plain memcpy() at context-switch time.

Signed-off-by: Andrii Anisov
---
 xen/arch/arm/domain.c        |  62 +++++++++++++++++--------
 xen/arch/x86/domain.c        | 105 +++++++++++++++++++++++++++++++------------
 xen/common/domain.c          |  80 ++++++++++++++++++++++++++++++++-
 xen/include/asm-arm/domain.h |   2 +
 xen/include/xen/domain.h     |   2 +
 xen/include/xen/sched.h      |   8 ++++
 6 files changed, 210 insertions(+), 49 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 6dc633e..8e24e63 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -275,32 +275,55 @@ static void ctxt_switch_to(struct vcpu *n)
 }
 
 /* Update per-VCPU guest runstate shared memory area (if registered). */
-static void update_runstate_area(struct vcpu *v)
+void update_runstate_area(struct vcpu *v)
 {
-    void __user *guest_handle = NULL;
+    if ( !guest_handle_is_null(runstate_guest(v)) )
+    {
+        void __user *guest_handle = NULL;
+        if ( VM_ASSIST(v->domain, runstate_update_flag) )
+        {
+            guest_handle = &v->runstate_guest.p->state_entry_time + 1;
+            guest_handle--;
+            v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
+            __raw_copy_to_guest(guest_handle,
+                                (void *)(&v->runstate.state_entry_time + 1) - 1,
+                                1);
+            smp_wmb();
+        }
 
-    if ( guest_handle_is_null(runstate_guest(v)) )
-        return;
+        __copy_to_guest(runstate_guest(v), &v->runstate, 1);
 
-    if ( VM_ASSIST(v->domain, runstate_update_flag) )
-    {
-        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
-        guest_handle--;
-        v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
-        __raw_copy_to_guest(guest_handle,
-                            (void *)(&v->runstate.state_entry_time + 1) - 1, 1);
-        smp_wmb();
+        if ( guest_handle )
+        {
+            v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+            smp_wmb();
+            __raw_copy_to_guest(guest_handle,
+                                (void *)(&v->runstate.state_entry_time + 1) - 1,
+                                1);
+        }
     }
 
-    __copy_to_guest(runstate_guest(v), &v->runstate, 1);
-
-    if ( guest_handle )
+    spin_lock(&v->mapped_runstate_lock);
+    if ( v->mapped_runstate )
     {
-        v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
-        smp_wmb();
-        __raw_copy_to_guest(guest_handle,
-                            (void *)(&v->runstate.state_entry_time + 1) - 1, 1);
+        if ( VM_ASSIST(v->domain, runstate_update_flag) )
+        {
+            v->mapped_runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
+            smp_wmb();
+            v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
+        }
+
+        memcpy(v->mapped_runstate, &v->runstate, sizeof(v->runstate));
+
+        if ( VM_ASSIST(v->domain, runstate_update_flag) )
+        {
+            v->mapped_runstate->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+            smp_wmb();
+            v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        }
     }
+    spin_unlock(&v->mapped_runstate_lock);
+
 }
 
 static void schedule_tail(struct vcpu *prev)
@@ -998,6 +1021,7 @@ long do_arm_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) a
     {
     case VCPUOP_register_vcpu_info:
     case VCPUOP_register_runstate_memory_area:
+    case VCPUOP_register_runstate_phys_memory_area:
         return do_vcpu_op(cmd, vcpuid, arg);
     default:
         return -EINVAL;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 9eaa978..46c2219 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1558,51 +1558,98 @@ void paravirt_ctxt_switch_to(struct vcpu *v)
     wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
 }
 
-/* Update per-VCPU guest runstate shared memory area (if registered). */
-bool update_runstate_area(struct vcpu *v)
+static void update_mapped_runstate_area_native(struct vcpu *v)
 {
-    bool rc;
-    struct guest_memory_policy policy = { .nested_guest_mode = false };
-    void __user *guest_handle = NULL;
-
-    if ( guest_handle_is_null(runstate_guest(v)) )
-        return true;
-
-    update_guest_memory_policy(v, &policy);
-
     if ( VM_ASSIST(v->domain, runstate_update_flag) )
     {
-        guest_handle = has_32bit_shinfo(v->domain)
-            ? &v->runstate_guest.compat.p->state_entry_time + 1
-            : &v->runstate_guest.native.p->state_entry_time + 1;
-        guest_handle--;
         v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
-        __raw_copy_to_guest(guest_handle,
-                            (void *)(&v->runstate.state_entry_time + 1) - 1, 1);
+        v->mapped_runstate.native->state_entry_time |= XEN_RUNSTATE_UPDATE;
         smp_wmb();
     }
 
-    if ( has_32bit_shinfo(v->domain) )
+    memcpy(v->mapped_runstate.native, &v->runstate, sizeof(v->runstate));
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
     {
-        struct compat_vcpu_runstate_info info;
+        v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        v->mapped_runstate.native->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        smp_wmb();
+    }
+}
 
-        XLAT_vcpu_runstate_info(&info, &v->runstate);
-        __copy_to_guest(v->runstate_guest.compat, &info, 1);
-        rc = true;
+static void update_mapped_runstate_area_compat(struct vcpu *v)
+{
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    {
+        v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
+        v->mapped_runstate.compat->state_entry_time |= XEN_RUNSTATE_UPDATE;
+        smp_wmb();
     }
-    else
-        rc = __copy_to_guest(runstate_guest(v), &v->runstate, 1) !=
-             sizeof(v->runstate);
 
-    if ( guest_handle )
+    memcpy(v->mapped_runstate.compat, &v->runstate, sizeof(v->runstate));
+
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
     {
         v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        v->mapped_runstate.compat->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
         smp_wmb();
-        __raw_copy_to_guest(guest_handle,
-                            (void *)(&v->runstate.state_entry_time + 1) - 1, 1);
     }
+}
 
-    update_guest_memory_policy(v, &policy);
+/* Update per-VCPU guest runstate shared memory area (if registered). */
+bool update_runstate_area(struct vcpu *v)
+{
+    bool rc = true;
+
+    if ( !guest_handle_is_null(runstate_guest(v)) )
+    {
+        struct guest_memory_policy policy = { .nested_guest_mode = false };
+        void __user *guest_handle = NULL;
+
+        update_guest_memory_policy(v, &policy);
+
+        if ( VM_ASSIST(v->domain, runstate_update_flag) )
+        {
+            guest_handle = has_32bit_shinfo(v->domain)
+                ? &v->runstate_guest.compat.p->state_entry_time + 1
+                : &v->runstate_guest.native.p->state_entry_time + 1;
+            guest_handle--;
+            v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
+            __raw_copy_to_guest(guest_handle,
+                                (void *)(&v->runstate.state_entry_time + 1) - 1, 1);
+            smp_wmb();
+        }
+
+        if ( has_32bit_shinfo(v->domain) )
+        {
+            struct compat_vcpu_runstate_info info;
+
+            XLAT_vcpu_runstate_info(&info, &v->runstate);
+            __copy_to_guest(v->runstate_guest.compat, &info, 1);
+            rc = true;
+        }
+        else
+            rc = __copy_to_guest(runstate_guest(v), &v->runstate, 1) !=
+                 sizeof(v->runstate);
+
+        if ( guest_handle )
+        {
+            v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+            smp_wmb();
+            __raw_copy_to_guest(guest_handle,
+                                (void *)(&v->runstate.state_entry_time + 1) - 1, 1);
+        }
+        update_guest_memory_policy(v, &policy);
+    }
+
+    spin_lock(&v->mapped_runstate_lock);
+    if ( v->mapped_runstate )
+    {
+        if ( has_32bit_shinfo(v->domain) )
+            update_mapped_runstate_area_compat(v);
+        else
+            update_mapped_runstate_area_native(v);
+    }
+    spin_unlock(&v->mapped_runstate_lock);
 
     return rc;
 }
diff --git a/xen/common/domain.c b/xen/common/domain.c
index ae22049..6df76c6 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -149,6 +149,7 @@ struct vcpu *vcpu_create(
     v->dirty_cpu = VCPU_CPU_CLEAN;
 
     spin_lock_init(&v->virq_lock);
+    spin_lock_init(&v->mapped_runstate_lock);
 
     tasklet_init(&v->continue_hypercall_tasklet, NULL, 0);
 
@@ -699,6 +700,69 @@ int rcu_lock_live_remote_domain_by_id(domid_t dom, struct domain **d)
     return 0;
 }
 
+static void _unmap_runstate_area(struct vcpu *v)
+{
+    mfn_t mfn;
+
+    if ( !v->mapped_runstate )
+        return;
+
+    mfn = domain_page_map_to_mfn(v->mapped_runstate);
+
+    unmap_domain_page_global((void *)
+                             ((unsigned long)v->mapped_runstate &
+                              PAGE_MASK));
+
+    v->mapped_runstate = NULL;
+    put_page_and_type(mfn_to_page(mfn));
+}
+
+static int map_runstate_area(struct vcpu *v,
+                             struct vcpu_register_runstate_memory_area *area)
+{
+    unsigned long offset = area->addr.p & ~PAGE_MASK;
+    gfn_t gfn = gaddr_to_gfn(area->addr.p);
+    struct domain *d = v->domain;
+    void *mapping;
+    struct page_info *page;
+    size_t size = sizeof(struct vcpu_runstate_info);
+
+    if ( offset > (PAGE_SIZE - size) )
+        return -EINVAL;
+
+    page = get_page_from_gfn(d, gfn_x(gfn), NULL, P2M_ALLOC);
+    if ( !page )
+        return -EINVAL;
+
+    if ( !get_page_type(page, PGT_writable_page) )
+    {
+        put_page(page);
+        return -EINVAL;
+    }
+
+    mapping = __map_domain_page_global(page);
+
+    if ( mapping == NULL )
+    {
+        put_page_and_type(page);
+        return -ENOMEM;
+    }
+
+    spin_lock(&v->mapped_runstate_lock);
+    _unmap_runstate_area(v);
+    v->mapped_runstate = mapping + offset;
+    spin_unlock(&v->mapped_runstate_lock);
+
+    return 0;
+}
+
+static void unmap_runstate_area(struct vcpu *v)
+{
+    spin_lock(&v->mapped_runstate_lock);
+    _unmap_runstate_area(v);
+    spin_unlock(&v->mapped_runstate_lock);
+}
+
 int domain_kill(struct domain *d)
 {
     int rc = 0;
@@ -737,7 +801,11 @@ int domain_kill(struct domain *d)
         if ( cpupool_move_domain(d, cpupool0) )
             return -ERESTART;
         for_each_vcpu ( d, v )
+        {
+            set_xen_guest_handle(runstate_guest(v), NULL);
+            unmap_runstate_area(v);
             unmap_vcpu_info(v);
+        }
         d->is_dying = DOMDYING_dead;
         /* Mem event cleanup has to go here because the rings
          * have to be put before we call put_domain. */
@@ -1192,6 +1260,7 @@ int domain_soft_reset(struct domain *d)
     for_each_vcpu ( d, v )
     {
         set_xen_guest_handle(runstate_guest(v), NULL);
+        unmap_runstate_area(v);
         unmap_vcpu_info(v);
     }
 
@@ -1536,8 +1605,17 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
     }
 
     case VCPUOP_register_runstate_phys_memory_area:
-        rc = -EOPNOTSUPP;
+    {
+        struct vcpu_register_runstate_memory_area area;
+
+        rc = -EFAULT;
+        if ( copy_from_guest(&area, arg, 1) )
+            break;
+
+        rc = map_runstate_area(v, &area);
         break;
+    }
 
 #ifdef VCPU_TRAP_NMI
     case VCPUOP_send_nmi:
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 312fec8..3fb6ea2 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -217,6 +217,8 @@ void vcpu_show_execution_state(struct vcpu *);
 void vcpu_show_registers(const struct vcpu *);
 void vcpu_switch_to_aarch64_mode(struct vcpu *);
 
+void update_runstate_area(struct vcpu *);
+
 /*
  * Due to the restriction of GICv3, the number of vCPUs in AFF0 is
  * limited to 16, thus only the first 4 bits of AFF0 are legal. We will
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index d1bfc82..ecddcfe 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -118,4 +118,6 @@ struct vnuma_info {
 
 void vnuma_destroy(struct vnuma_info *vnuma);
 
+struct vcpu_register_runstate_memory_area;
+
 #endif /* __XEN_DOMAIN_H__ */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 748bb0f..2afe31c 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -163,15 +163,23 @@ struct vcpu
     void            *sched_priv;    /* scheduler-specific data */
 
     struct vcpu_runstate_info runstate;
+
+    spinlock_t mapped_runstate_lock;
+
 #ifndef CONFIG_COMPAT
 # define runstate_guest(v) ((v)->runstate_guest)
     XEN_GUEST_HANDLE(vcpu_runstate_info_t) runstate_guest; /* guest address */
+    vcpu_runstate_info_t *mapped_runstate;
 #else
 # define runstate_guest(v) ((v)->runstate_guest.native)
     union {
         XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
         XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
     } runstate_guest; /* guest address */
+    union {
+        vcpu_runstate_info_t *native;
+        vcpu_runstate_info_compat_t *compat;
+    } mapped_runstate; /* hypervisor-side mapping of the runstate area */
 #endif
 
     /* last time when vCPU is scheduled out */
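A note on the update protocol implemented above: when the guest has enabled
the runstate_update_flag VM assist, the hypervisor raises
XEN_RUNSTATE_UPDATE in state_entry_time, issues smp_wmb(), copies the whole
structure, then clears the flag behind another barrier. A guest therefore
has to read the area seqlock-style. The following is a hypothetical reader,
modeled on the snapshot loop Linux uses for the existing virtual-address
interface; smp_rmb() and any READ_ONCE()-style accessors are assumed to
come from the guest environment.

    /* Take a consistent snapshot of a registered runstate area. */
    static void runstate_snapshot(const struct vcpu_runstate_info *area,
                                  struct vcpu_runstate_info *snap)
    {
        uint64_t seq;

        do {
            seq = area->state_entry_time;
            smp_rmb();            /* read the flag before the payload */
            *snap = *area;
            smp_rmb();            /* read the payload before re-checking */
            /* Retry while an update is in flight or raced with the copy. */
        } while ( (seq & XEN_RUNSTATE_UPDATE) ||
                  area->state_entry_time != seq );
    }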