From patchwork Tue Feb 27 11:34:14 2018
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10244853
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: kvm@vger.kernel.org, Marc Zyngier, Andrew Jones, Shih-Wei Li,
    Dave Martin, Julien Grall, Tomasz Nowicki, Yury Norov
Subject: [PATCH v5 25/40] KVM: arm64: Introduce framework for accessing deferred sysregs
Date: Tue, 27 Feb 2018 12:34:14 +0100
Message-Id: <20180227113429.637-26-cdall@kernel.org>
X-Mailer: git-send-email 2.14.2
In-Reply-To: <20180227113429.637-1-cdall@kernel.org>
References: <20180227113429.637-1-cdall@kernel.org>

From: Christoffer Dall

We are about to defer saving and restoring some groups of system
registers to vcpu_put and vcpu_load on supported systems.  This means
that we need some infrastructure to access system registers which
supports either accessing the memory backing of the register or
directly accessing the system registers, depending on the state of the
system when we access the register.

We do this by defining read/write accessor functions, which can handle
both "immediate" and "deferrable" system registers.  Immediate
registers are always saved/restored in the world-switch path, but
deferrable registers are only saved/restored in vcpu_put/vcpu_load when
supported, and sysregs_loaded_on_cpu will be set in that case.

Note that we don't use the deferred mechanism yet in this patch, but
only introduce the infrastructure.  This is to make the subsequent
patches, where it is clear which registers become deferred, easier to
review.

Reviewed-by: Marc Zyngier
Reviewed-by: Andrew Jones
Signed-off-by: Christoffer Dall
---

Notes:
    Changes since v4:
     - Slightly reworded commentary based on Drew's feedback

    Changes since v3:
     - Changed to a switch-statement based approach to improve readability.

    Changes since v2:
     - New patch (deferred register handling has been reworked)

 arch/arm64/include/asm/kvm_host.h |  8 ++++++--
 arch/arm64/kvm/sys_regs.c         | 33 +++++++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 179bb9d5760b..ab46bc70add6 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -284,6 +284,10 @@ struct kvm_vcpu_arch {
 
 	/* Virtual SError ESR to restore when HCR_EL2.VSE is set */
 	u64 vsesr_el2;
+
+	/* True when deferrable sysregs are loaded on the physical CPU,
+	 * see kvm_vcpu_load_sysregs and kvm_vcpu_put_sysregs. */
+	bool sysregs_loaded_on_cpu;
 };
 
 #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.gp_regs)
@@ -296,8 +300,8 @@ struct kvm_vcpu_arch {
  */
 #define __vcpu_sys_reg(v,r)	((v)->arch.ctxt.sys_regs[(r)])
 
-#define vcpu_read_sys_reg(v,r)	__vcpu_sys_reg(v,r)
-#define vcpu_write_sys_reg(v,n,r)	do { __vcpu_sys_reg(v,r) = n; } while (0)
+u64 vcpu_read_sys_reg(struct kvm_vcpu *vcpu, int reg);
+void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg);
 
 /*
  * CP14 and CP15 live in the same array, as they are backed by the
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 7514db002430..c809f0d1a059 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -35,6 +35,7 @@
 #include <asm/kvm_coproc.h>
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_host.h>
+#include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
 #include <asm/perf_event.h>
 #include <asm/sysreg.h>
@@ -76,6 +77,38 @@ static bool write_to_read_only(struct kvm_vcpu *vcpu,
 	return false;
 }
 
+u64 vcpu_read_sys_reg(struct kvm_vcpu *vcpu, int reg)
+{
+	if (!vcpu->arch.sysregs_loaded_on_cpu)
+		goto immediate_read;
+
+	/*
+	 * System registers listed in the switch are not saved on every
+	 * exit from the guest but are only saved on vcpu_put.
+	 */
+	switch (reg) {
+	}
+
+immediate_read:
+	return __vcpu_sys_reg(vcpu, reg);
+}
+
+void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
+{
+	if (!vcpu->arch.sysregs_loaded_on_cpu)
+		goto immediate_write;
+
+	/*
+	 * System registers listed in the switch are not restored on every
+	 * entry to the guest but are only restored on vcpu_load.
+	 */
+	switch (reg) {
+	}
+
+immediate_write:
+	__vcpu_sys_reg(vcpu, reg) = val;
+}
+
 /* 3 bits per cache level, as per CLIDR, but non-existent caches always 0 */
 static u32 cache_levels;
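
As a review aid, here is a rough sketch (not part of this patch) of how
the empty switch in vcpu_read_sys_reg() is expected to be populated once
later patches in the series actually defer registers to
vcpu_load/vcpu_put.  The particular registers and accessor spellings
below are illustrative assumptions, not something this patch introduces:

u64 vcpu_read_sys_reg(struct kvm_vcpu *vcpu, int reg)
{
	if (!vcpu->arch.sysregs_loaded_on_cpu)
		goto immediate_read;

	switch (reg) {
	/* Deferred registers: read the live CPU copy while it is loaded. */
	case SCTLR_EL1:		return read_sysreg_el1(sctlr);
	case TTBR0_EL1:		return read_sysreg_el1(ttbr0);
	}

	/*
	 * Registers not listed above are saved on every exit from the
	 * guest, so the memory-backed copy is up to date for them.
	 */
immediate_read:
	return __vcpu_sys_reg(vcpu, reg);
}

The write side would mirror this with write_sysreg_el1(val, ...) in the
corresponding cases, and callers in sys_regs.c reach these paths through
vcpu_read_sys_reg()/vcpu_write_sys_reg() rather than touching
__vcpu_sys_reg() directly.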